We need a much better understanding of information processing and of computation as its primary form. Future progress in new computational devices capable of dealing with problems of big data, the internet of things, the semantic web, cognitive robotics, and neuroinformatics depends on adequate models of computation. In this article we first present the current state of the art through a systematization of existing models and mechanisms, and outline a basic structural framework of computation. We argue that, defining computation as information processing, and given that there is no information without (physical) representation, the dynamics of information at the fundamental level is physical/intrinsic/natural computation. As a special case, intrinsic computation is used for designed computation in computing machinery. Intrinsic natural computation occurs on a variety of levels of physical processes, including the levels of computation of living organisms (among them highly intelligent animals) as well as designed computational devices. The present article offers a typology of current models of computation and indicates future paths for the advancement of the field, both through the development of new computational models and by learning from nature how to compute better using different mechanisms of intrinsic computation.

COLLEGE OF SCIENCE, Computational Modeling & Data Analytics. The Bachelor of Science in Computational Modeling and Data Analytics (CMDA) ... imparts the unique blend of skills from Statistics, Mathematics, and Computer Science needed ...

The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

Computer Modeling Illuminates Degradation Pathways of Cations in Alkaline Membrane Fuel Cells. Cation degradation insights obtained by computational modeling could result in better performance ... are effective in increasing cation stability. With the help of computational modeling, more cations are being ...

This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that would greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. That said, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale required for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability of ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

An apparatus, computer software, and a method of determining position inside a building comprising selecting on a PDA at least two walls of a room in a digitized map of a building or a portion of a building, pointing and firing a laser rangefinder at corresponding physical walls, transmitting collected range information to the PDA, and computing on the PDA a position of the laser rangefinder within the room.
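
The following is a minimal sketch of the final computation step described above: given two non-parallel walls from the digitized map (each as a line with a known inward unit normal) and the ranges measured to them, the device position is recovered as the intersection of the two wall lines offset inward by the measured distances. The 2-D geometry, function names, and example numbers are illustrative assumptions, not the patented method.

```python
import numpy as np

def position_from_two_walls(walls, ranges):
    """Estimate a 2-D position from perpendicular ranges to two mapped walls.

    walls  : two (normal, offset) pairs describing wall lines n . p = d,
             with n a unit normal pointing into the room.
    ranges : measured distances from the rangefinder to each wall.

    The rangefinder lies on the line n . p = d + r for each wall (the wall
    line shifted toward the room interior by the measured range), so the
    position is the intersection of the two offset lines.
    """
    A, b = [], []
    for (normal, offset), r in zip(walls, ranges):
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)          # ensure a unit normal
        A.append(n)
        b.append(offset + r)               # line shifted by the measured range
    return np.linalg.solve(np.array(A), np.array(b))

# Example: south wall y = 0 (normal +y), west wall x = 0 (normal +x),
# measured ranges 2.5 m and 4.0 m -> position (4.0, 2.5).
print(position_from_two_walls([((0.0, 1.0), 0.0), ((1.0, 0.0), 0.0)], [2.5, 4.0]))
```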

... with building formal, mathematical models both for aspects of the computational process and for features ... we discuss this issue in Section 3.1. 6th Irish Workshop on Formal Methods (IWFM'03), eWiC, British Computer Society. ... traditionally associated with computer science are logic and discrete mathematics, the latter including set theory ...

Computer Modelling of Pigeon Navigation according to the "Map and Compass" Model. Ulrich Nehmzow@zoology.uni-frankfurt.de. Abstract: This paper presents a computer model of pigeon navigation (homing), based on Kramer's map-and-compass model ... intersecting gradients which are used by the birds to determine the correct compass heading for home ...

As noted in the proceedings of this conference it is of importance to determine if quantum mechanics imposes fundamental limits on the computation process. Some aspects of this problem have been examined by the development of different types of quantum mechanical Hamiltonian models of Turing machines. (Benioff 1980, 1982a, 1982b, 1982c). Turing machines were considered because they provide a standard representation of all digital computers. Thus, showing the existence of quantum mechanical models of all Turing machines is equivalent to showing the existence of quantum mechanical models of all digital computers. The types of models considered all had different properties. Some were constructed on two-dimensional lattices of quantum spin systems of spin 1/2 (Benioff 1982b, 1982c) or higher spins (Benioff 1980). All the models considered Turing machine computations which were made reversible by addition of a history tape. Quantum mechanical models of Bennett's reversible machines (Bennett 1973) in which the model makes a copy of the computation result and then erases the history and undoes the computation in lockstep to recover the input were also developed (Benioff 1982a). To avoid technical complications all the types of models were restricted to modelling an arbitrary but finite number of computation steps.

The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in an optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society, and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an "expert system" to permit optimization in real time. The program has been combined with "neural network" programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the "Cupola Handbook," Chapter 27, American Foundry Society, Des Plaines, IL (1999).

Deformable Models & Applications (Part I). Ye Duan, Department of Computer Science, University of Missouri at Columbia, December 21, 2004.

The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.

In the last decade, efficient use of energy has become a topic of global significance, touching almost every area of modern life, including computing. From mobile to desktop to server, energy efficiency concerns are now ubiquitous. However...

The quantum circuit model is the most widely used model of quantum computation. It provides both a framework for formulating quantum algorithms and an architecture for the physical construction of quantum computers. However, several other models of quantum computation exist which provide useful alternative frameworks for both discovering new quantum algorithms and devising new physical implementations of quantum computers. In this thesis, I first present necessary background material for a general physics audience and discuss existing models of quantum computation. Then, I present three results relating to various models of quantum computation: a scheme for improving the intrinsic fault tolerance of adiabatic quantum computers using quantum error detecting codes, a proof that a certain problem of estimating Jones polynomials is complete for the one clean qubit complexity class, and a generalization of perturbative gadgets which allows k-body interactions to be directly simulated using 2-body interactions. Lastly, I discuss general principles regarding quantum computation that I learned in the course of my research, and using these principles I propose directions for future research.

... after the model has been sent to CENTAR). We then present an interactive, graphical, icon-based modeling program, Alpha, that lets the user "draw" the model on screen and translates it into a syntactically correct CENTAR input model which is also free...

This thesis presents a programming-language viewpoint for morphogenesis, the process of shape formation during embryological development. We model morphogenesis as a self-organizing, self-repairing amorphous computation ...

This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution of the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof of concept to investigate major-loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in experiments, special experimental methods were devised to create similar boundary conditions in the iron films. Preliminary MFM studies conducted on single and polycrystalline iron films with small sub-areas created with a focused ion beam have correlated quite well qualitatively with phase-field simulations. However, phase-field model dimensions are still small relative to experiments thus far. We are in the process of increasing the size of the models and decreasing specimen size so both have identical dimensions. Ongoing research is focused on validation of the phase-field model. Validation is being accomplished through comparison with experimentally obtained MFM images (in progress), and planned measurements of major hysteresis loops and first-order reversal curves. Extrapolation of simulation sizes to represent a more stochastic bulk-like system will require sampling of various simulations (i.e., with a single non-magnetic defect, single magnetic defect, single grain boundary, single dislocation, etc.) with distributions of input parameters. These outputs can then be compared to laboratory magnetic measurements and ultimately used to simulate magnetic Barkhausen noise signals.
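
As a loose illustration of the dynamics underlying such micromagnetic phase-field models, the sketch below integrates the Landau-Lifshitz-Gilbert equation for a single magnetization vector in a constant effective field using explicit Euler steps with renormalization. The field value, damping constant, and step size are arbitrary assumptions, and none of the exchange, anisotropy, or microstructural terms of the report's models are included.

```python
import numpy as np

GAMMA = 1.760859644e11   # gyromagnetic ratio (rad/(s*T))
ALPHA = 0.02             # Gilbert damping constant (assumed value)

def llg_rhs(m, h_eff):
    """Landau-Lifshitz form of the LLG equation: dm/dt for unit vector m."""
    precession = np.cross(m, h_eff)
    damping = np.cross(m, precession)
    return -GAMMA / (1.0 + ALPHA**2) * (precession + ALPHA * damping)

def integrate(m0, h_eff, dt=1e-14, steps=20000):
    """Explicit Euler with renormalization to keep |m| = 1."""
    m = np.asarray(m0, dtype=float)
    for _ in range(steps):
        m = m + dt * llg_rhs(m, h_eff)
        m /= np.linalg.norm(m)
    return m

# Magnetization initially along +x precesses and relaxes toward a 1 T field along +z.
print(integrate([1.0, 0.0, 0.0], np.array([0.0, 0.0, 1.0])))
```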

The development of accurate, predictive models for use in determining wellbore integrity requires detailed information about the chemical and mechanical changes occurring in hardened Portland cements. X-ray computed tomography (XRCT) provides a method that can nondestructively probe these changes in three dimensions. Here, we describe a method for extracting subvoxel mineralogical and chemical information from synchrotron XRCT images by combining advanced image segmentation with geochemical models of cement alteration. The method relies on determining “effective linear activity coefficients” (ELAC) for the white light source to generate calibration curves that relate the image grayscales to material composition. The resulting data set supports the modeling of cement alteration by CO2-rich brine with discrete increases in calcium concentration at reaction boundaries. The results of these XRCT analyses can be used to further improve coupled geochemical and mechanical models of cement alteration in the wellbore environment.

... the CENTAR model), and by interacting with the user and providing feedback by checking for errors and advising corrections. The architecture of Alpha is presented, with its constituent libraries explained in terms of their internal workings and external interactions...

ASSISTANT PROFESSOR OF MECHANICAL ENGINEERING, COMPUTATIONAL MODELING, COLLEGE OF ENGINEERING. The Department of Mechanical Engineering at Colorado State University invites applications for a tenure-track position ... processes with emphasis on applying the models to engineering systems of interest in the energy or materials ...

Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have a history of more than 60 years. They have been used for many different purposes -- to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emergent phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences, and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for the social and natural sciences, and we look ahead to possible future information flow across the social-natural divide.

The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDF-II experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.

· A Scott family for A is a set of formulas, with a fixed finite tuple of parameters c in A, such that each ... · The diagram of A is D(A); A is computable (recursive) if its Turing degree is 0. · D(A) may be of much lower Turing degree than Th(A). N, the standard model of arithmetic, is computable. True Arithmetic, TA = Th(N) ...

The state of knowledge of global warming will be presented and two aspects examined: observational evidence and a review of the state of computer modeling of climate change due to anthropogenic increases in greenhouse gases. Observational evidence, indeed, shows global warming, but it is difficult to prove that the changes are unequivocally due to the greenhouse-gas effect. Although observational measurements of global warming are subject to "correction," researchers are showing consistent patterns in their interpretation of the data. Since the 1960s, climate scientists have been making their computer models of the climate system more realistic. Models started as atmospheric models and, through the addition of oceans, surface hydrology, and sea-ice components, they then became climate-system models. Because of computer limitations and the limited understanding of the degree of interaction of the various components, present models require substantial simplification. Nevertheless, in their present state of development climate models can reproduce most of the observed large-scale features of the real system, such as wind, temperature, precipitation, ocean current, and sea-ice distribution. The use of supercomputers to advance the spatial resolution and realism of earth-system models will also be discussed.

A Model for the Human Computer Interface Evaluation in Safety Critical Computer Applications. Fabio ... Proceedings of the IEEE International Conference and Workshop: Engineering of Computer-Based Systems, March 1998, Jerusalem, Israel.

Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure on the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain.

Numerical methods play an ever more important role in astrophysics. This is especially true in theoretical works, but of course, even in purely observational projects, data analysis without massive use of computational methods has become unthinkable. The key utility of computer simulations comes from their ability to solve complex systems of equations that are either intractable with analytic techniques or only amenable to highly approximative treatments. Simulations are best viewed as a powerful complement to analytic reasoning, and as the method of choice to model systems that feature enormous physical complexity such as star formation in evolving galaxies, the topic of this 43rd Saas Fee Advanced Course. The organizers asked me to lecture about high performance computing and numerical modelling in this winter school, and to specifically cover the basics of numerically treating gravity and hydrodynamics in the context of galaxy evolution. This is still a vast field, and I necessarily had to select a subset ...

dovier@dimi.uniud.it; Dept. of Computer Science, New Mexico State University, epontell ... of Computer Science at New Mexico State University, where he also serves as the Director of the Knowledge ... in the context of energy landscape studies (24; 17; 2; 22; 1). Commonly, Monte Carlo simulations, based ...

Vortices in superconductors: modelling and computer simulations. By Jennifer Deang, Qiang Du. Vortices in superconductors are tubes of magnetic flux or, equivalently, cylindrical current loops ... is of importance both to the understanding of the basic physics of superconductors and to the design of devices. We ...

... ability to present the user with a large computational infrastructure that will allow for processing in a routine fashion to solve difficult atomic resolution structures, containing as many as 1000 unique non-hydrogen ...

Geological surveying presently uses methods and tools for the computer modeling of 3D structures of the geological subsurface and geotechnical characterization, as well as the application of geoinformation systems for the management and analysis of spatial data and their cartographic presentation. The objective of this paper is to present a 3D geological surface model of Latur district in Maharashtra state of India. This study is undertaken through several processes, which are discussed in this paper, to generate and visualize the automated 3D geological surface model of the projected area.

A high-tech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for front-line fire fighters. The science team is looking into levels of bark beetle-induced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.

Major Subject: Industrial Engineering. COMPUTER MODELS FOR EVALUATING FINANCIAL DECISION ALTERNATIVES. A Thesis by JAMES CARROLL CHRISTIAN. ... of this research is to bridge this gap by developing the methodology necessary to solve personal finance problems in a quantitative manner through the application of engineering economy principles. ...

dovier@dimi.uniud.it; Dept. of Computer Science, New Mexico State University, epontell ... dramatically. In these cases, constraint programming can be exploited to generate suboptimal candidates ... by mining the protein data bank; e.g., a collection of rotamers can be introduced to provide additional ...

This report summarizes work performed by Sandia National Laboratories for the Federal Aviation Administration. The technical issues involved in fire modeling for aircraft fire research are identified, as well as computational fire tools for addressing those issues, and the research which is needed to advance those tools in order to address long-range needs. Fire field models are briefly reviewed, and the VULCAN model is selected for further evaluation. Calculations are performed with VULCAN to demonstrate its applicability to aircraft fire problems, and also to gain insight into the complex problem of fires involving aircraft. Simulations are conducted to investigate the influence of fire on an aircraft in a cross-wind. The interaction of the fuselage, wind, fire, and ground plane is investigated. Calculations are also performed utilizing a large eddy simulation (LES) capability to describe the large-scale turbulence instead of the more common k-{epsilon} turbulence model. Additional simulations are performed to investigate the static pressure and velocity distributions around a fuselage in a cross-wind, with and without fire. The results of these simulations provide qualitative insight into the complex interaction of a fuselage, fire, wind, and ground plane. Reasonable quantitative agreement is obtained in the few cases for which data or other modeling results exist. Finally, VULCAN is used to quantify the impact of simplifying assumptions inherent in a risk assessment compatible fire model developed for open pool fire environments. The assumptions are seen to be of minor importance for the particular problem analyzed. This work demonstrates the utility of using a fire field model for assessing the limitations of simplified fire models. In conclusion, the application of computational fire modeling tools herein provides both qualitative and quantitative insights into the complex problem of aircraft in fires.

7. Business Models. Learnings from founding a Computer Vision Startup (Flickr: dystopos). How are you ... models (not only technology)? Auction business model, bricks-and-clicks business model, collective business models, component business model, cutting out ...

... the computer facility available. ... D. Drew for developing J-field ... and for his advice on PAF programming. TABLE OF CONTENTS: I. INTRODUCTION. II. A FINITE SEQUENTIALLY COMPACT PROCESS FOR THE ADJOINTS OF MATRICES OVER ARBITRARY INTEGRAL ... is probably nowhere more evident than when working with matrices. In this thesis an efficient technique for determining exact matrix adjoints is developed. The technique is applicable to either singular or non-singular matrices with integral entries...

An Interactive Computer Model of Two-Country Trade. Bill Hamlen and Kevin Hamlen. Abstract: We introduce an interactive computer model of two-country trade that allows students to investigate ... is to present an interactive computer model of two-country international trade that allows students ...

This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.

The paper contains a new non-perturbative representation for the subleading contribution to the free energy of the multicut solution of the Hermitian matrix model. This representation is a generalisation of the formula proposed by Klemm, Marino and Theisen for the two-cut solution, which was obtained by comparing the cubic matrix model with the topological B-model on the local Calabi-Yau geometry $\hat{II}$ and was checked perturbatively. In this paper we give a direct proof of their formula and generalise it to the general multicut solution.

... describe a facial animation project that uses specialized imaging devices to capture models of human heads ... Vision to Computer Graphics). Visual Modeling for Computer Animation: Graphics with a Vision. Demetri ... a personal retrospective on image-based modeling for computer animation. As we shall see, one of the projects ...

The problems of the integration of engineering models in computer-aided preliminary design are reviewed. This paper details the research, development, and testing of modifications to Paper Airplane, a LISP-based computer ...

Determination of user intent at the computer interface through eye-gaze monitoring can significantly aid applications for the disabled, as well as telerobotics and process control interfaces. Whereas current eye-gaze control applications are limited to object selection and x/y gazepoint tracking, a methodology was developed here to discriminate a more abstract interface operation: zooming in or out. This methodology first collects samples of eye-gaze location looking at controlled stimuli, at 30 Hz, just prior to a user's decision to zoom. The sample is broken into data frames, or temporal snapshots. Within a data frame, all spatial samples are connected into a minimum spanning tree, then clustered, according to user-defined parameters. Each cluster is mapped to one in the prior data frame, and statistics are computed from each cluster. These characteristics include cluster size, position, and pupil size. A multiple discriminant analysis uses these statistics both within and between data frames to formulate optimal rules for assigning the observations into zoom-in, zoom-out, or no-zoom conditions. The statistical procedure effectively generates heuristics for future assignments, based upon these variables. Future work will enhance the accuracy and precision of the modeling technique, and will empirically test users in controlled experiments.
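
A rough sketch of the clustering step described above: spatial gaze samples within one data frame are connected into a minimum spanning tree, and the tree is cut at edges longer than a user-defined threshold, leaving the connected components as clusters. The threshold, the synthetic data, and the use of SciPy are assumptions for illustration; the original system's parameters and the discriminant analysis are not reproduced.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def mst_clusters(points, max_edge):
    """Cluster 2-D gaze samples by cutting long edges of their MST.

    points   : (n, 2) array of gaze sample coordinates in one data frame.
    max_edge : edges longer than this are removed, splitting the tree
               into clusters (user-defined parameter).
    Returns a label array assigning each sample to a cluster.
    """
    d = distance_matrix(points, points)
    mst = minimum_spanning_tree(d).toarray()
    mst[mst > max_edge] = 0.0              # cut long edges
    graph = mst + mst.T                    # symmetrize for the component search
    _, labels = connected_components(graph > 0, directed=False)
    return labels

rng = np.random.default_rng(0)
frame = np.vstack([rng.normal((0, 0), 5, (15, 2)),      # one fixation cluster
                   rng.normal((100, 80), 5, (15, 2))])  # another fixation cluster
print(mst_clusters(frame, max_edge=30.0))
```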

Cloud computing, a term whose origins have been in existence for more than a decade, has come into fruition due to technological capabilities and marketplace demands. Cloud computing can be defined as a scalable and flexible ...

Computer Graphics, Volume 15, Number 3, August 1981. A REFLECTANCE MODEL FOR COMPUTER GRAPHICS. Robert L. Cook, Program of Computer Graphics, Cornell University, Ithaca, New York 14853; Kenneth E. Torrance, Sibley ... with incidence angle. The paper presents a method for obtaining the spectral energy distribution of the light ...

A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
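
For orientation, here is a minimal sketch of the classical Butler-Volmer relation the abstract refers to, with an illustrative sigmoid factor standing in for the pulse-time/surface-availability corrections it mentions; the parameter values and the form of the sigmoid are assumptions, not the patented expressions.

```python
import numpy as np

F = 96485.332   # Faraday constant (C/mol)
R = 8.314462    # gas constant (J/(mol*K))

def butler_volmer(eta, i0, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Classical Butler-Volmer current density for overpotential eta (V)."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T))
                 - np.exp(-alpha_c * F * eta / (R * T)))

def pulse_availability(t_pulse, t_half=5.0, steepness=1.0):
    """Illustrative sigmoid in pulse time (s); NOT the patented expression."""
    return 1.0 / (1.0 + np.exp(-steepness * (t_pulse - t_half)))

def modified_bv(eta, i0, t_pulse):
    """Toy 'modified' BV: scale the exchange current by a pulse-time sigmoid."""
    return butler_volmer(eta, i0 * pulse_availability(t_pulse))

print(modified_bv(eta=0.05, i0=2.0, t_pulse=10.0))  # A/m^2, example numbers
```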

Are analog models of computation more powerful than classical models of computation? From a series of recent papers, it is now clear that many realistic analog models of computation are provably equivalent to classical digital models of computation from a computability point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model of Claude Shannon, a model of the Differential Analyzers, which were analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that the functions computable by Turing machines are provably exactly those that are computable by the GPAC. This paper is about the next step: understanding whether this equivalence also holds at the complexity level. In this paper we show that a realistic model of analog computation -- namely the General Purpose Analog Computer (GPAC) -- can simulate Turing machines in a computationally efficient manner. More concretely, we show that, modulo...

A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell and analyzes the mechanistic level model to estimate performance fade characteristics over aging of a similar electrochemical cell. The mechanistic level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic level model is also based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing a second exchange current density.
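
As a hedged illustration of how an exchange current density might be extracted from constant-current pulse data of the kind described above, the sketch below least-squares fits i0 and a symmetry factor to a few (overpotential, current) pairs using the Butler-Volmer relation. The synthetic data, the solver, and the two-parameter form are assumptions, not the patented mechanistic model.

```python
import numpy as np
from scipy.optimize import curve_fit

F, R, T = 96485.332, 8.314462, 298.15

def bv_current(eta, i0, alpha):
    """Butler-Volmer current density as a function of overpotential (V)."""
    return i0 * (np.exp(alpha * F * eta / (R * T))
                 - np.exp(-(1.0 - alpha) * F * eta / (R * T)))

# Overpotentials (V) observed at three constant-current pulses (A/m^2)
# bracketing the expected exchange current density -- synthetic numbers.
eta_meas = np.array([-0.015, 0.010, 0.030])
i_meas = bv_current(eta_meas, i0=1.5, alpha=0.5)

(i0_fit, alpha_fit), _ = curve_fit(bv_current, eta_meas, i_meas, p0=[1.0, 0.5])
print(f"fitted i0 = {i0_fit:.3f} A/m^2, alpha = {alpha_fit:.3f}")
```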

Computationally Efficient Cardiac Bioelectricity Models Toward Whole-Heart Simulation. Nathan A. ... of developing new insights and techniques in simulating the electrical behavior of the human heart. While very ... a computationally feasible whole-heart model could be invaluable in the study of human heart pathology ...

The Two Server Problem; Models of Online Computation; Results. A Randomized Algorithm for Two Servers in Cross Polytope Spaces. James Oravec, Wolfgang Bein; supported by NSF grant CCR-0312093. ... The Randomized 2-Server ...

Applying High Performance Computing to Analyzing Mobile Cellular ... by Probabilistic Model Checking. ... on the use of high performance computing in order to analyze ... with the probabilistic model checker PRISM. 1. Introduction: We report in this paper on the use of high performance ...

This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The initial motivation for this effort was the need to provide a method by which DOE computer security policy implementation could be tested and verified. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experience. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present models. Formal mathematical models for computer security have been designed and developed in conjunction with attempts to build secure computer systems since the early 1970s. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The mathematical basis appears to be justified and is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell-LaPadula abstract sets of objects and subjects. 5 refs.

Resource Portfolio Model's Determination of Conservation's Cost-Effectiveness. The regional Resource Portfolio Model (RPM) finds large amounts of conservation cost-effective. ... ,008 average megawatts of conservation. The electricity price forecast used for this initial estimate ... The cost of some ...

The increasing relevance of areas such as real-time and embedded systems, pervasive computing, hybrid systems control, and biological and social systems modeling is bringing a growing attention to the temporal aspects of computing, not only in the computer science domain, but also in more traditional fields of engineering. This article surveys various approaches to the formal modeling and analysis of the temporal features of computer-based systems, with a level of detail that is suitable also for non-specialists. In doing so, it provides a unifying framework, rather than just a comprehensive list of formalisms. The paper first lays out some key dimensions along which the various formalisms can be evaluated and compared. Then, a significant sample of formalisms for time modeling in computing are presented and discussed according to these dimensions. The adopted perspective is, to some extent, historical, going from "traditional" models and formalisms to more modern ones.

Clique-detection Models in Computational Biochemistry and Genomics. S. Butenko and W. E. Wilhelm ({...,wilhelm}@tamu.edu). Abstract: Many important problems arising in computational biochemistry and genomics have been formulated ... and genomic aspects of the problems as well as to the graph-theoretic aspects of the solution approaches. Each ...

Inverse Modelling in Geology by Interactive Evolutionary Computation. Chris Wijns, Fabio ... of geological processes, in the absence of established numerical criteria to act as inversion targets, requires ... evolutionary computation provides for the inclusion of qualitative geological expertise within a rigorous ...

A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.
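
The sketch below shows, in a highly simplified form, the kind of thermal limit calculation such a program performs for a single coolant channel: the allowable power per unit length follows from the maximum fuel temperature, the coolant temperature, and a series thermal resistance (convective film plus radial conduction through the element material). The geometry, material properties, and film coefficient are generic assumptions, not those of the actual code.

```python
import numpy as np

def max_linear_power(t_fuel_max, t_coolant, r_channel, r_cell, k_element, h_film):
    """Allowable power per unit channel length (W/m) for one unit cell.

    Series thermal resistance per unit length: convective film at the channel
    wall plus radial conduction through the surrounding element material,
    treated as an annulus from r_channel to r_cell.
    """
    r_film = 1.0 / (h_film * 2.0 * np.pi * r_channel)
    r_cond = np.log(r_cell / r_channel) / (2.0 * np.pi * k_element)
    return (t_fuel_max - t_coolant) / (r_film + r_cond)

# Generic numbers: 8 mm channel in a 16 mm unit cell, graphite-like conductivity,
# gas-coolant film coefficient, 1250 C fuel limit, 700 C coolant.
q_line = max_linear_power(t_fuel_max=1523.0, t_coolant=973.0,
                          r_channel=0.004, r_cell=0.008,
                          k_element=30.0, h_film=2000.0)
fuel_area = np.pi * (0.008**2 - 0.004**2)          # element cross-section per channel (m^2)
print(f"linear power {q_line:.0f} W/m, "
      f"specific power {q_line / (fuel_area * 1800.0):.0f} W/kg")  # 1800 kg/m^3 assumed density
```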

Computer modeling techniques, when applied to language acquisition problems, give an often unrealized insight into the diachronic change that occurs in language over successive generations. This paper shows that using ...

A novel computational model of smoldering combustion capable of predicting both forward and opposed propagation is developed. This is accomplished by considering the one-dimensional, transient, governing equations for ...

A computational and application-oriented introduction to the modeling of large-scale systems in a wide variety of decision-making domains and the optimization of such systems using state-of-the-art optimization software. ...

In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand its role as a NEAMS user facility.

The present paper scrutinizes the principle of quantum determinism, which maintains that the complete information about the initial quantum state of a physical system should determine the system's quantum state at any other time. As is shown in the paper, assuming the strong exponential time hypothesis (SETH), which conjectures that known algorithms for solving computational NP-complete problems (often brute-force algorithms) are optimal, the principle of quantum determinism cannot be applied generally, i.e., for randomly selected physical systems, particularly macroscopic systems. In other words, even if the initial quantum state of an arbitrary system were precisely known, as long as SETH is true it might be impossible in the real world to predict the system's exact final quantum state. The paper suggests that the breakdown of quantum determinism in a process in which a black hole forms and then completely evaporates might actually be physical evidence supporting SETH.

... multifaceted construct, and interventions that achieve the best outcomes are multicomponent interventions. The SDLMI (Wehmeyer et al., 2000) is a model of teaching (i.e., intended for teachers as end users to guide and direct instruction) that supports teachers... the impact of the SDLMI (Wehmeyer et al., 2000) on self-determination. Project personnel contacted school districts, and districts that agreed to participate (n < 20) identified high school campuses (n < 39) to participate. Next, the primary district...

A personal computer based hourly simulation model was developed based on the CBS/ICE routines in the DOE-2.1 mainframe building simulation software. The new menu-driven model employs more efficient data and information handling than the previous...

Computational Modeling of Brain Dynamics during Repetitive Head Motions. Igor Szczyrba, School of ... the HIC scale to arbitrary head motions. Our simulations of the brain dynamics in sagittal and horizontal ... Keywords: injury modeling, resonance effects. 1 Introduction. A rapid head motion can result in a severe brain injury ...

MODELS AND METRICS FOR ENERGY-EFFICIENT COMPUTER SYSTEMS. A DISSERTATION SUBMITTED TO THE DEPARTMENT OF ... promising energy-efficient technologies, and models to understand the effects of resource utilization decisions on power consumption. To facilitate energy-efficiency improvements, this dissertation presents ...

A new code for nuclear shell-model calculations, "KSHELL", is developed. It aims at carrying out both massively parallel computation and single-node computation in the same manner. We solve the Schrödinger equation in the $M$-scheme shell-model space, utilizing the thick-restart Lanczos method. During the Lanczos iteration, the whole set of Hamiltonian matrix elements is generated "on the fly" in every matrix-vector multiplication. The vectors of the Lanczos method are distributed and stored in the memory of each parallel node. We report that the newly developed code has high parallel efficiency on the FX10 supercomputer and on a PC with multiple cores.
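
To illustrate the numerical kernel named above, here is a minimal matrix-free Lanczos iteration: the Hamiltonian enters only through a matrix-vector product, mirroring the on-the-fly generation of matrix elements, and the lowest eigenvalue is read off the small tridiagonal matrix. This is plain (not thick-restart) Lanczos without reorthogonalization or parallel distribution, and the random test matrix is an assumption.

```python
import numpy as np

def lanczos_lowest(matvec, dim, n_iter=60, seed=0):
    """Lowest eigenvalue via plain Lanczos; 'matvec' applies H on the fly."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    v_prev = np.zeros(dim)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(n_iter):
        w = matvec(v) - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12:                  # Krylov space exhausted
            break
        betas.append(beta)
        v_prev, v = v, w / beta
    t = (np.diag(alphas)
         + np.diag(betas[:len(alphas) - 1], 1)
         + np.diag(betas[:len(alphas) - 1], -1))
    return np.linalg.eigvalsh(t)[0]

# Toy 'Hamiltonian': a random symmetric matrix applied through a closure.
n = 400
h = np.random.default_rng(1).normal(size=(n, n))
h = (h + h.T) / 2.0
print(lanczos_lowest(lambda x: h @ x, n), np.linalg.eigvalsh(h)[0])
```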

The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground plane (TGP), which is a device, of planar configuration, that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat plate pulsating heat pipes (e.g., dynamic bubble nucleation, evaporation, and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio, and orientation.

This research effort focuses on methodology for quantifying the effects of model uncertainty and discretization error on computational modeling and simulation. The work is directed towards developing methodologies which treat model form assumptions within an overall framework for uncertainty quantification, for the purpose of developing estimates of total prediction uncertainty. The present effort consists of work in three areas: framework development for sources of uncertainty and error in the modeling and simulation process which impact model structure; model uncertainty assessment and propagation through Bayesian inference methods; and discretization error estimation within the context of non-deterministic analysis.
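
As a hedged sketch of the Bayesian model-uncertainty assessment mentioned above, the snippet below calibrates a single model parameter against noisy observations with a random-walk Metropolis sampler, treating the total residual standard deviation (measurement noise plus model-form error) as a second unknown. The toy model, priors, and step sizes are assumptions chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'computational model' and synthetic observations with discrepancy + noise.
def model(x, theta):
    return theta * np.sin(x)

x_obs = np.linspace(0.0, 3.0, 20)
y_obs = model(x_obs, 2.0) + 0.1 * x_obs + rng.normal(0.0, 0.05, x_obs.size)

def log_post(theta, sigma):
    """Gaussian likelihood with total std 'sigma'; flat prior on theta,
    weak half-normal prior on sigma (sigma must be positive)."""
    if sigma <= 0:
        return -np.inf
    resid = y_obs - model(x_obs, theta)
    return (-0.5 * np.sum(resid**2) / sigma**2
            - y_obs.size * np.log(sigma)
            - 0.5 * sigma**2)

samples = []
theta, sigma = 1.0, 0.5
lp = log_post(theta, sigma)
for _ in range(20000):
    th_new, sg_new = theta + rng.normal(0, 0.05), sigma + rng.normal(0, 0.02)
    lp_new = log_post(th_new, sg_new)
    if np.log(rng.uniform()) < lp_new - lp:        # Metropolis accept/reject
        theta, sigma, lp = th_new, sg_new, lp_new
    samples.append((theta, sigma))

theta_s, sigma_s = np.array(samples[5000:]).T       # discard burn-in
print(f"theta ~ {theta_s.mean():.2f} +/- {theta_s.std():.2f}, "
      f"total sigma ~ {sigma_s.mean():.2f}")
```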

The Seldon terrorist model represents a multi-disciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science added a significant contribution to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts like cliques, mosques, etc. in a manner that represents their social conceptualization and not simply as physical or economic institutions. This paper provides an overview of the Seldon terrorist model developed to study the formation of cliques, which are used as the major recruitment entity for terrorist organizations.

... of parameters of energy functions used in template-free modelling and refinement. Although many protein ... Engine cloud platform and is a showcase of how the emerging PaaS (Platform as a Service) technology could ... the predicted structure is compared against the target native structure. This type of evaluation is performed ...

The goal of the present paper is to provide an introduction to the basic computational models used in quantum information theory. We review various models of the quantum Turing machine, quantum circuits, and the quantum random access machine (QRAM), along with their classical counterparts. We also provide an introduction to quantum programming languages, which are developed using the QRAM model. We review the syntax of several existing quantum programming languages and discuss their features and limitations.
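
As a tiny concrete instance of the circuit model mentioned above, the sketch below simulates a two-qubit circuit (Hadamard followed by CNOT) on a state vector, producing a Bell state. It is a plain numpy illustration and does not use any of the quantum programming languages or the QRAM model reviewed in the paper.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                    # control = qubit 0, target = qubit 1
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                    # start in |00>
state = np.kron(H, I2) @ state                    # H on qubit 0
state = CNOT @ state                              # entangle the two qubits

print(state)                                      # (|00> + |11>) / sqrt(2)
print("measurement probabilities:", np.abs(state)**2)
```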

Significant advances were made on all objectives of the research program. We have developed fast multiresolution methods for performing electronic structure calculations with emphasis on constructing efficient representations of functions and operators. We extended our approach to problems of scattering in solids, i.e. constructing fast algorithms for computing above the Fermi energy level. Part of the work was done in collaboration with Robert Harrison and George Fann at ORNL. Specific results (in part supported by this grant) are listed here and are described in greater detail. (1) We have implemented a fast algorithm to apply the Green's function for the free space (oscillatory) Helmholtz kernel. The algorithm maintains its speed and accuracy when the kernel is applied to functions with singularities. (2) We have developed a fast algorithm for applying periodic and quasi-periodic, oscillatory Green's functions and those with boundary conditions on simple domains. Importantly, the algorithm maintains its speed and accuracy when applied to functions with singularities. (3) We have developed a fast algorithm for obtaining and applying multiresolution representations of periodic and quasi-periodic Green's functions and Green's functions with boundary conditions on simple domains. (4) We have implemented modifications to improve the speed of adaptive multiresolution algorithms for applying operators which are represented via a Gaussian expansion. (5) We have constructed new nearly optimal quadratures for the sphere that are invariant under the icosahedral rotation group. (6) We obtained new results on approximation of functions by exponential sums and/or rational functions, one of the key methods that allows us to construct separated representations for Green's functions. (7) We developed a new fast and accurate reduction algorithm for obtaining optimal approximation of functions by exponential sums and/or their rational representations.

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Anisotropic mechanical properties of densified BSCCO powders are of paramount importance during thermo-mechanical processing of superconducting tapes and wires. Maximum current transport requires high relative density and a high degree of alignment of the single-crystal planes of the BSCCO. Unfortunately this configuration causes high stresses that can lead to cracking, and thus reduce the density and the conductive properties of the tape. The current work develops a micromechanical material model; the model is calibrated and compared to experimental results, and then employed to analyze the effects of initial texture and confinement. Pressure and shear strains in the core of oxide powder-in-tube (OPIT) processed tapes are calculated by finite-element analysis. The calculated deformations were then applied as boundary conditions to the micromechanical model. Our calculated results were used to interpret a set of prototypical rolling experiments. 11 refs., 5 figs.

... @dimi.uniud.it; Enrico Pontelli, Dept. of Computer Science, New Mexico State University, epontell@cs.nmsu.edu. Abstract: ... using a simplified pairwise energy model in [2] and a more precise energy model in [9]. In these approaches ...

A GAS KICK MODEL FOR THE PERSONAL COMPUTER. A Thesis by CLAYTON LOWELL MILLER, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 1987. Major Subject: Petroleum Engineering.

In this project, the authors enhanced their ability to numerically simulate bounded plasmas that are dominated by low-frequency electric and magnetic fields. They moved towards this goal in several ways; they are now in a position to play significant roles in the modeling of low-frequency electromagnetic plasmas in several new industrial applications. They have significantly increased their facility with the computational methods invented to solve the low-frequency limit of Maxwell's equations (DiPeso, Hewett, accepted, J. Comp. Phys., 1993). This low-frequency model, called the Streamlined Darwin Field model (SDF; Hewett, Larson, and Doss, J. Comp. Phys., 1992), has now been implemented in a fully non-neutral SDF code BEAGLE (Larson, Ph.D. dissertation, 1993) and has been further extended to the quasi-neutral limit (DiPeso, Hewett, Comp. Phys. Comm., 1993). In addition, they have resurrected the quasi-neutral, zero-electron-inertia model (ZMR) and begun the task of incorporating internal boundary conditions into this model that have the flexibility of those in GYMNOS, a magnetostatic code now used in ion source work (Hewett, Chen, ICF Quarterly Report, July--September, 1993). Finally, near the end of this project, they invented a new type of banded matrix solver that can be implemented on a massively parallel computer -- thus opening the door for the use of all their ADI schemes on these new computer architectures (Mattor, Williams, Hewett, submitted to Parallel Computing, 1993).

Purpose: Computed tomography (CT) imaging is the modality of choice for lung cancer diagnostics. With the increasing number of lung interventions on sublobar level in recent years, determining and visualizing pulmonary segments in CT images and, in oncological cases, reliable segment-related information about the location of tumors has become increasingly desirable. Computer-assisted identification of lung segments in CT images is subject of this work.Methods: The authors present a new interactive approach for the segmentation of lung segments that uses the Euclidean distance of each point in the lung to the segmental branches of the pulmonary artery. The aim is to analyze the potential of the method. Detailed manual pulmonary artery segmentations are used to achieve the best possible segment approximation results. A detailed description of the method and its evaluation on 11 CT scans from clinical routine are given.Results: An accuracy of 2–3 mm is measured for the segment boundaries computed by the pulmonary artery-based method. On average, maximum deviations of 8 mm are observed. 135 intersegmental pulmonary veins detected in the 11 test CT scans serve as reference data. Furthermore, a comparison of the presented pulmonary artery-based approach to a similar approach that uses the Euclidean distance to the segmental branches of the bronchial tree is presented. It shows a significantly higher accuracy for the pulmonary artery-based approach in lung regions at least 30 mm distal to the lung hilum.Conclusions: A pulmonary artery-based determination of lung segments in CT images is promising. In the tests, the pulmonary artery-based determination has been shown to be superior to the bronchial tree-based determination. The suitability of the segment approximation method for application in the planning of segment resections in clinical practice has already been verified in experimental cases. However, automation of the method accompanied by an evaluation on a larger number of test cases is required before application in the daily clinical routine.
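
A bare-bones sketch of the central assignment step, labelling each lung voxel with the segmental pulmonary-artery branch whose centerline is nearest in Euclidean distance, is given below using a k-d tree. The synthetic point data and the use of SciPy's cKDTree are illustrative assumptions; the vessel segmentation and anatomical post-processing of the actual method are not covered.

```python
import numpy as np
from scipy.spatial import cKDTree

def label_lung_voxels(voxel_coords, branch_points, branch_labels):
    """Assign each lung voxel the label of the nearest segmental artery branch.

    voxel_coords  : (n, 3) physical coordinates of voxels inside the lung mask.
    branch_points : (m, 3) centerline points of the segmental artery branches.
    branch_labels : (m,) segment label of each centerline point.
    """
    tree = cKDTree(branch_points)
    _, nearest = tree.query(voxel_coords)   # index of the closest centerline point
    return branch_labels[nearest]

# Synthetic example: two short branch centerlines and a handful of voxels.
branch_points = np.array([[0, 0, z] for z in range(5)] +
                         [[20, 0, z] for z in range(5)], dtype=float)
branch_labels = np.array([1] * 5 + [2] * 5)
voxels = np.array([[2.0, 1.0, 3.0], [18.0, -1.0, 2.0], [9.0, 0.0, 4.0]])
print(label_lung_voxels(voxels, branch_points, branch_labels))  # -> [1 2 1]
```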

The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, the time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear scaling and sub-linear scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear scaling algorithms, most of the algorithms developed suffer from the lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wave-functions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on non-orthogonal formulations of the density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed the optimal scaling FOE algorithm.

Reactor physics calculations have led to the development of nine liquid-metal fast breeder reactor (LMFBR) models for the ORIGEN2 computer code. Four of the models are based on the U-Pu fuel cycle, two are based on the Th-U-Pu fuel cycle, and three are based on the Th-233U fuel cycle. The reactor models are based on cross sections taken directly from the reactor physics codes. Descriptions of the reactor models as well as values for the ORIGEN2 flux parameters THERM, RES, and FAST are given.

A computational model for the simulation of a laser-cutting process has been developed using a finite element method. A transient heat transfer model is considered that deals with the material-cutting process using a Gaussian continuous wave laser beam. Numerical experimentation is carried out for mesh refinements and the rate of convergence in terms of groove shape and temperature. Results are also presented for the prediction of groove depth with different moving speeds.
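A minimal sketch of the moving Gaussian continuous-wave heat source in a transient heat-conduction calculation is shown below. It uses a simple explicit finite-difference grid rather than the paper's finite element method, ignores melting and material removal, and all material and laser parameters are placeholders.

```python
# Rough, purely illustrative sketch: 2-D explicit finite-difference transient
# heat equation with a moving Gaussian source q = (2P/(pi r^2)) exp(-2((x-x0-vt)^2+y^2)/r^2).
import numpy as np

k, rho, cp = 50.0, 7800.0, 500.0          # placeholder material properties
alpha = k / (rho * cp)
dx = 1e-4                                 # grid spacing [m]
dt = 0.2 * dx**2 / alpha                  # stable explicit time step
P, r, v, x0 = 50.0, 3e-4, 0.01, 2e-3      # laser power [W], beam radius [m], speed [m/s], start [m]

nx = ny = 101
x = np.arange(nx) * dx
y = (np.arange(ny) - ny // 2) * dx
X, Y = np.meshgrid(x, y, indexing="ij")
T = np.full((nx, ny), 300.0)              # ambient temperature [K]

t = 0.0
for _ in range(500):
    # Gaussian CW beam centred at (x0 + v*t, 0); surface flux spread over one cell depth dx
    q = 2 * P / (np.pi * r**2) * np.exp(-2 * ((X - x0 - v * t)**2 + Y**2) / r**2)
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1] + T[1:-1, 2:]
                       + T[1:-1, :-2] - 4 * T[1:-1, 1:-1]) / dx**2
    T += dt * (alpha * lap + q / (rho * cp * dx))
    t += dt

# No phase change or ablation is modeled, so the peak is only a crude proxy
# for where a groove would form.
print("peak temperature [K]:", T.max())
```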

In an effort to reduce the cost of biofuels, the National Renewable Energy Laboratory (NREL) has merged biochemistry with modern computing and mathematics. The result is a model of carbon metabolism that will help researchers understand and engineer the process of photosynthesis for optimal biofuel production.

Thermal building simulation and computer generation of nodal models (H. Boyer, J. P. Chabriat, et al.) ... in the development of several packages simulating the dynamic behaviour of buildings. This paper shows the adaptation ... This article shows the chosen method in the case of our thermal simulation program for buildings, CODYRUN.

Call for Papers: ACM Transactions on Modeling and Computer Simulation, Special Issue on Simulation. Pierre L'Ecuyer, University of Montreal. In connection with the 2011 INFORMS Simulation Society Research ... simulation, improving the efficiency of simulations for those large systems, building effective and flexible ...

...-based sensitivity analysis techniques, based on the so-called Sobol indices, when some input variables ... computer codes which need a preliminary metamodeling step before performing the sensitivity analysis ... The "mean model" allows estimation of the sensitivity indices of each scalar input variable, while ...

Science Challenge: Computational modeling of ultrafast digital electronics. To understand how ... properties in response to the needs of a particular device or situation. These smart electronics have the potential to lead to entirely new generations of electronic devices, such as military and civilian ...

Theory: Computational Modeling of Pancreatic Cancer Reveals Kinetics of Metastasis Suggesting ... and size distribution of metastases as well as patient survival. These findings were validated ... death and one of the most aggressive malignancies in humans, with a five-year relative survival rate ...

NREL Computer Models Integrate Wind Turbines with Floating Platforms. Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore ...

Computational Model for Forced Expiration from Asymmetric Normal Lungs (Adam G. Polak) ... losses along the airway branches. Calculations done for succeeding lung volumes result in the semidynamic ... to the choke points, characteristic differences of lung regional pressures and volumes, and a shape ...

This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead, it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
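For the IPM case, the minimal-spread formulation can be written as a small linear program. The sketch below assumes a linear-in-parameters predictor with basis matrix `Phi` and is only a schematic illustration, not the paper's code.

```python
# Sketch of a linear Interval Predictor Model: the interval is
# [theta.phi(x) - gamma.|phi(x)|, theta.phi(x) + gamma.|phi(x)|] with gamma >= 0,
# and its average width is minimized subject to enclosing all observations.
import numpy as np
from scipy.optimize import linprog

def fit_ipm(Phi, y):
    n, m = Phi.shape
    absPhi = np.abs(Phi)
    c = np.concatenate([np.zeros(m), absPhi.mean(axis=0)])      # average half-width
    A_ub = np.vstack([np.hstack([-Phi, -absPhi]),                # y_i <= upper bound
                      np.hstack([ Phi, -absPhi])])               # y_i >= lower bound
    b_ub = np.concatenate([-y, y])
    bounds = [(None, None)] * m + [(0, None)] * m                # theta free, gamma >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:m], res.x[m:]                                  # theta, gamma

# Toy calibration data: noisy line, quadratic basis phi(x) = [1, x, x^2]
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = 1.5 * x + 0.2 * rng.standard_normal(40)
Phi = np.column_stack([np.ones_like(x), x, x**2])
theta, gamma = fit_ipm(Phi, y)
centre, half = Phi @ theta, np.abs(Phi) @ gamma
assert np.all(y <= centre + half + 1e-6) and np.all(y >= centre - half - 1e-6)
```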

Purpose: Photo-stimulable phosphor computed radiography (CR) has characteristics that allow the output to be manipulated by both radiation and optical light. The authors have developed a method that uses these characteristics to carry out radiation field and light field coincidence quality assurance on linear accelerators. Methods: CR detectors from Kodak were used outside their cassettes to measure both radiation and light field edges from a Varian linear accelerator. The CR detector was first exposed to a radiation field and then to a slightly smaller light field. The light impinged on the detector's latent image, removing to an extent the portion exposed to the light field. The detector was then digitally scanned. A MATLAB-based algorithm was developed to automatically analyze the images and determine the edges of the light and radiation fields, the vector between the field centers, and the crosshair center. Radiographic film was also used as a control to confirm the radiation field size. Results: Analysis showed a high degree of repeatability with the proposed method. Results between the proposed method and radiographic film showed excellent agreement of the radiation field. The effect of varying monitor units and light exposure time was tested and found to be very small. Radiation and light field sizes were determined with an uncertainty of less than 1 mm, and light and crosshair centers were determined within 0.1 mm. Conclusions: A new method was developed to digitally determine the radiation and light field size using CR photo-stimulable phosphor plates. The method is quick and reproducible, allowing for the streamlined and robust assessment of light and radiation field coincidence, with no observer interpretation needed.
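The edge-finding step can be illustrated on a one-dimensional profile with a half-maximum crossing rule. The sketch below is a hedged approximation of that idea (the published MATLAB tool is more elaborate); the synthetic profile, threshold choice, and pixel size are assumptions.

```python
# Illustrative 50%-threshold field-edge detection with sub-pixel interpolation.
import numpy as np

def field_edges(profile, pixel_mm):
    """Return (left_mm, right_mm) positions of the 50% crossings of a 1-D profile."""
    p = profile - profile.min()
    half = 0.5 * p[len(p) // 2]                     # half of the central signal
    above = p >= half
    i_left = np.argmax(above)                       # first pixel above half-max
    i_right = len(p) - 1 - np.argmax(above[::-1])   # last pixel above half-max
    # linear interpolation across the crossing pixels
    left = i_left - (p[i_left] - half) / (p[i_left] - p[i_left - 1])
    right = i_right + (p[i_right] - half) / (p[i_right] - p[i_right + 1])
    return left * pixel_mm, right * pixel_mm

# Synthetic ~10 cm field sampled at 0.5 mm/pixel (edges near 50 mm and 150 mm).
xs = np.arange(400) * 0.5
profile = 1000.0 / (1 + np.exp(-(xs - 50))) / (1 + np.exp(xs - 150)) + 20
l, r = field_edges(profile, 0.5)
print("field size [mm]:", r - l, " centre [mm]:", (r + l) / 2)
```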

This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. The authors intend to review the state of the art in the experimental determination of protein 3D structure (with a focus on nuclear magnetic resonance), and in the theoretical prediction of protein function and of protein structure in 1D, 2D and 3D from sequence. All the atomic resolution structures determined so far have been derived from either X-ray crystallography (the majority) or Nuclear Magnetic Resonance (NMR) Spectroscopy (becoming increasingly more important). The authors briefly describe the physical methods behind both of these techniques; the major computational methods involved will be covered in some detail. They highlight parallels and differences between the methods, as well as their current limitations. Special emphasis will be given to techniques which have application to ab initio structure prediction. Large-scale sequencing techniques increase the gap between the number of known protein sequences and that of known protein structures. They describe the scope and principles of methods that contribute successfully to closing that gap. Emphasis will be given to the specification of adequate testing procedures to validate such methods.

... of Informatics, University of Edinburgh, Edinburgh, United Kingdom; ATR Computational Neuroscience Laboratories ... uncertainties, along with energy and accuracy demands. The insights from this computational model could be used ... This is an effortless task; however, if suddenly a seemingly random wind gust perturbs the umbrella, you will typically ...

Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.

The thermal-hydraulic (T-H) models and solution schemes employed by the MEKIN computer code have been examined. The effects of T-H input parameters on predicted fuel temperatures and coolant densities were determined in ...

This report outlines the effort to model a time-dependent, 2-dimensional, turbulent, nonpremixed flame with full chemistry with the aid of parallel computing tools. In this study, the mixing process and the chemical reactions occurring in the flow field are described in terms of the single-point probability density function (PDF), while the turbulent viscosity is determined by the standard kappa-epsilon model. The initial problem solved is an H2/air flame whose chemistry is described by 28 elementary reactions involving 9 chemical species.

The confluence of MaRIE (Matter-Radiation Interactions in Extreme) and extreme (exascale) computing timelines offers a unique opportunity in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained optimization of hardware and software, and experiments. MaRIE's theory, modeling, and computation (TMC) roadmap efforts have paralleled 'MaRIE First Experiments' science activities in the areas of materials dynamics, irradiated materials and complex functional materials in extreme conditions. The documents that follow this executive summary describe in detail for each of these areas the current state of the art, the gaps that exist and the road map to MaRIE and beyond. Here we integrate the various elements to articulate an overarching theme related to the role and consequences of heterogeneities which manifest as competing states in a complex energy landscape. MaRIE experiments will locate, measure and follow the dynamical evolution of these heterogeneities. Our TMC vision spans the various pillar science and highlights the key theoretical and experimental challenges. We also present a theory, modeling and computation roadmap of the path to and beyond MaRIE in each of the science areas.

The results of the work are contained in the publications resulting from the grant (which are listed below). Here I summarize the main findings from the last period of the award, 2006-2007:
• Published a paper in Science with Igor Levin outlining the “Nanostructure Problem”, our inability to solve structure at the nanoscale.
• Published a paper in Nature demonstrating the first ever ab initio structure determination of a nanoparticle from atomic pair distribution function (PDF) data.
• Published one book and 3 overview articles on PDF methods and the nanostructure problem.
• Completed a project that sought to find a structural response to the presence of the so-called “intermediate phase” in network glasses, which appears close to the rigidity percolation threshold in these systems. The main result was that we did not see convincing evidence for this, which drew into doubt the idea that GexSe1-x glasses were a model system exhibiting rigidity percolation.

We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e. using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
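For reference, the Bohman correlation is compactly supported, so a product of one-dimensional Bohman correlations yields a covariance matrix with exact zeros. The sketch below illustrates that sparsity under an assumed range parameter; it is not the code used in the study.

```python
# Illustrative Bohman correlation and the sparse covariance it induces.
import numpy as np

def bohman(h):
    """Bohman correlation: nonzero only for scaled distances h in [0, 1)."""
    h = np.minimum(np.abs(h), 1.0)
    return (1.0 - h) * np.cos(np.pi * h) + np.sin(np.pi * h) / np.pi

def covariance(X, tau, sigma2=1.0):
    """Product of 1-D Bohman correlations across input dimensions (range tau assumed)."""
    R = np.ones((X.shape[0], X.shape[0]))
    for j in range(X.shape[1]):
        H = np.abs(X[:, j, None] - X[None, :, j]) / tau
        R *= bohman(H)
    return sigma2 * R

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 10))            # 200 design sites in 10 dimensions
K = covariance(X, tau=0.8)
print("fraction of exact zeros:", np.mean(K == 0.0))   # sparsity from compact support
```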

As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

A comprehensive model has been developed for the computation of corrosion rates of carbon steels in the presence of lithium bromide-based brines that are used as working fluids for absorption refrigeration cycles. The model combines a thermodynamic model that provides realistic speciation of aqueous systems with an electrochemical model for partial cathodic and anodic processes on the metal surface. The electrochemical model includes the adsorption of halides, which strongly influences the corrosion process. Also, the model takes into account the formation of passive films, which become important at high temperatures, at which the refrigeration equipment operates. The model has been verified by comparing calculated corrosion rates with laboratory data for carbon steels in LiBr solutions. Good agreement between the calculated and experimental corrosion rates has been obtained. In particular, the model is capable of reproducing the effects of changes in alkalinity and molybdate concentration on the rates of general corrosion. The model has been incorporated into a program that makes it possible to analyze the effects of various conditions such as temperature, pressure, solution composition or flow velocity on corrosion rates.

We prove the equivalence between adiabatic quantum computation and quantum computation in the circuit model. An explicit adiabatic computation procedure is given that generates a ground state from which the answer can be extracted. The amount of time needed is evaluated by computing the gap. We show that the procedure is computationally efficient.
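For orientation, the usual adiabatic setup behind such equivalence results interpolates between an easy initial Hamiltonian and the problem Hamiltonian, with the run time controlled by the minimum spectral gap. The display below is the generic textbook form, not necessarily the specific (e.g., history-state) construction of the paper.

```latex
% Standard adiabatic interpolation (illustrative only).
\[
  H(s) \;=\; (1-s)\,H_{\mathrm{init}} \;+\; s\,H_{\mathrm{final}},
  \qquad s = t/T \in [0,1],
\]
\[
  T \;\gtrsim\; \frac{\max_{s}\,\lVert \partial_s H(s) \rVert}{g_{\min}^{2}},
  \qquad g_{\min} \;=\; \min_{s}\bigl[E_1(s) - E_0(s)\bigr],
\]
% so the efficiency claim above reduces to a lower bound on the gap g_min.
```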

We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and non-terrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.

Discharges for magnetron sputter thin film deposition systems involve complex plasmas that are sensitively dependent on magnetic field configuration and strength, working gas species and pressure, chamber geometry, and discharge power. The authors present a numerical formulation for the general solution of these plasmas as a component of a comprehensive simulation capability for planar magnetron sputtering. This is an extensible, fully three-dimensional model supporting realistic magnetic fields and is self-consistently solvable on a desktop computer. The plasma model features a hybrid approach involving a Monte Carlo treatment of energetic electrons and ions, along with a coupled fluid model for thermalized particles. Validation against a well-known one-dimensional system is presented. Various strategies for improving numerical stability are investigated as is the sensitivity of the solution to various model and process parameters. In particular, the effect of magnetic field, argon gas pressure, and discharge power are studied.

EconoGrid: A Detailed Simulation Model of a Standards-based Grid Compute Economy. EconoGrid is a detailed simulation model, implemented in SLX, of a grid compute economy that implements selected ... of users. In a grid compute economy, computing resources are sold to users in a market where price ...

Computational fluid dynamics (CFD) modeling, which has recently proven to be an effective means of analysis and optimization of energy-conversion processes, has been extended to coal gasification in this paper. A 3D mathematical model has been developed to simulate the coal gasification process in a pressurized spout-fluid bed. This CFD model is composed of gas-solid hydrodynamics, coal pyrolysis, char gasification, and gas phase reaction submodels. The rates of the heterogeneous reactions are determined by combining the Arrhenius rate and the diffusion rate. The homogeneous reactions of the gas phase can be treated as secondary reactions. A comparison of the calculated and experimental data shows that most gasification performance parameters can be predicted accurately. This good agreement indicates that CFD modeling can be used for complex fluidized-bed coal gasification processes. 37 refs., 7 figs., 5 tabs.
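Combining an Arrhenius kinetic rate with a film-diffusion rate is commonly done as resistances in series. The sketch below shows that combination with placeholder constants; these are not the submodel coefficients of the paper, and the units are purely illustrative.

```python
# Effective heterogeneous rate from a kinetic (Arrhenius) resistance and a
# diffusion resistance in series; all constants are placeholders.
import numpy as np

R_GAS = 8.314   # J/(mol K)

def effective_rate(T, p_ox, A=2.0e3, E=1.3e5, k_diff=0.06):
    """Effective surface reaction rate (illustrative units)."""
    k_kin = A * np.exp(-E / (R_GAS * T))         # Arrhenius chemical-kinetics rate
    k_eff = 1.0 / (1.0 / k_kin + 1.0 / k_diff)   # series combination with film diffusion
    return k_eff * p_ox                          # driving force: oxidant partial pressure

# Kinetics-limited at low T, diffusion-limited at high T:
for T in (1000.0, 1400.0, 1800.0):
    print(T, effective_rate(T, p_ox=0.2e5))
```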

In the second year of the project, the Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column is further developed. The approach uses an Eulerian analysis of liquid flows in the bubble column and makes use of the Lagrangian trajectory analysis for the bubble and particle motions. An experimental setup for studying a two-dimensional bubble column is also developed. The operation of the bubble column is being tested and a diagnostic methodology for quantitative measurements is being developed. An Eulerian computational model for the flow condition in the two-dimensional bubble column is also being developed. The liquid and bubble motions are being analyzed and the results are being compared with the experimental results. Solid-fluid mixture flows in ducts and passages at different angles of orientation were analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures are also being studied. Further progress was also made in developing a thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion. The balance laws have been obtained and the constitutive laws are being developed. Progress was also made in measuring the concentration and velocity of particles of different sizes near a wall in a duct flow. The technique of Phase-Doppler anemometry was used in these studies. The general objective of this project is to provide the needed fundamental understanding of three-phase slurry reactors in Fischer-Tropsch (F-T) liquid fuel synthesis. The other main goal is to develop a computational capability for predicting the transport and processing of three-phase coal slurries. The specific objectives are: (1) To develop a thermodynamically consistent rate-dependent anisotropic model for multiphase slurry flows with and without chemical reaction for application to coal liquefaction, and to establish the material parameters of the model. (2) To provide experimental data for phasic fluctuation and mean velocities, as well as the solid volume fraction, in the shear flow devices. (3) To develop an accurate computational capability incorporating the new rate-dependent and anisotropic model for analyzing reacting and nonreacting slurry flows, and to solve a number of technologically important problems related to Fischer-Tropsch (F-T) liquid fuel production processes. (4) To verify the validity of the developed model by comparing the predicted results with the performed and the available experimental data under idealized conditions.

This paper presents the development of a methodology to determine retrofit energy savings in buildings when few measured preretrofit data are available. Calibration of the DOE-2 building energy analysis computer program for a 250,000 ft2 building...

The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics-based models for ice, developing novel numerical tools to enable the modeling of the physics, and collaborating with ice community experts. At the present time, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurately modeling this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offer significant advancement to the field and close a large gap of knowledge in understanding and modeling the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated through a set of published papers, posters, and presentations at technical conferences in the field. In particular, significant progress has been made in the mechanics of ice, the fracture of ice sheets and ice shelves in polar regions, and sophisticated numerical methods that enable the solution of the physics in an efficient way.

In the first year of the project, solid-fluid mixture flows in ducts and passages at different angles of orientation were analyzed. The model predictions were compared with the experimental data and good agreement was found. Progress was also made in analyzing the gravity chute flows of solid-liquid mixtures. An Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column is being developed. The approach uses an Eulerian analysis of gas-liquid flows in the bubble column and makes use of the Lagrangian particle tracking procedure to analyze the particle motions. Progress was also made in developing a rate-dependent, thermodynamically consistent model for multiphase slurry flows in a state of turbulent motion. The new model includes the effect of phasic interactions and leads to anisotropic effective phasic stress tensors. Progress was also made in measuring the concentration and velocity of particles of different sizes near a wall in a duct flow. The formulation of a thermodynamically consistent model for chemically active multiphase solid-fluid flows in a turbulent state of motion was also initiated. The general objective of this project is to provide the needed fundamental understanding of three-phase slurry reactors in Fischer-Tropsch (F-T) liquid fuel synthesis. The other main goal is to develop a computational capability for predicting the transport and processing of three-phase coal slurries. The specific objectives are: (1) To develop a thermodynamically consistent rate-dependent anisotropic model for multiphase slurry flows with and without chemical reaction for application to coal liquefaction, and to establish the material parameters of the model. (2) To provide experimental data for phasic fluctuation and mean velocities, as well as the solid volume fraction, in the shear flow devices. (3) To develop an accurate computational capability incorporating the new rate-dependent and anisotropic model for analyzing reacting and nonreacting slurry flows, and to solve a number of technologically important problems related to Fischer-Tropsch (F-T) liquid fuel production processes. (4) To verify the validity of the developed model by comparing the predicted results with the performed and the available experimental data under idealized conditions.

A 2007 Model Curriculum for a Liberal Arts Degree in Computer Science. Liberal Arts Computer Science Consortium, February 25, 2007. In 1986, guidelines for a computer science major degree program offered in the context of the liberal arts were developed by the Liberal Arts Computer Science Consortium (LACS) [4] ...

Polyethylene is one of the most widely used plastics, and over 60 million tons are produced worldwide every year. Polyethylene is obtained by the catalytic polymerization of ethylene in gas and liquid phase reactors. The gas phase processes are more advantageous and use fluidized-bed reactors for the production of polyethylene. Since they operate so close to the melting point of the polymer, agglomeration is an operational concern in all slurry and gas polymerization processes. Electrostatics and hot spot formation are the main factors that contribute to agglomeration in gas-phase processes. Electrostatic charges in gas phase polymerization fluidized bed reactors are known to influence the bed hydrodynamics, particle elutriation, bubble size, bubble shape, etc. Accumulation of electrostatic charges in the fluidized bed can lead to operational issues. In this work, a first-principles electrostatic model is developed and coupled with a multi-fluid computational fluid dynamics (CFD) model to understand the effect of electrostatics on the dynamics of a fluidized bed. The multi-fluid CFD model for gas-particle flow is based on kinetic theory of granular flow closures. The electrostatic model is developed based on a fixed, size-dependent charge for each type of particle phase (catalyst, polymer, polymer fines). The combined CFD model is first verified using simple test cases, validated with experiments, and applied to a pilot-scale polymerization fluidized-bed reactor. The CFD model reproduced the qualitative trends in particle segregation and entrainment due to electrostatic charges observed in experiments. For the scale-up of the fluidized-bed reactor, filtered models are developed and implemented for the pilot-scale reactor.

An Exact Modeling of Signal Statistics in Energy-Integrating X-ray Computed Tomography (Yi Fan et al.) ... used by modern computed tomography (CT) scanners and has been an interesting research topic ... In x-ray computed tomography (CT), the Poisson noise model has been widely used in noise ...

...-intensive procedure can exploit the grid's ability to present the user with a large computational infrastructure ... containing as many as 1000 unique non-hydrogen atoms, which could not be solved by traditional reciprocal ...

In this paper, thermoelectric properties of nanoporous silicon are modeled and studied by using a computational approach. The computational approach combines a quantum non-equilibrium Green's function (NEGF) coupled with the Poisson equation for electrical transport analysis, a phonon Boltzmann transport equation (BTE) for phonon thermal transport analysis, and the Wiedemann-Franz law for calculating the electronic thermal conductivity. By solving the NEGF/Poisson equations self-consistently using a finite difference method, the electrical conductivity σ and Seebeck coefficient S of the material are numerically computed. The BTE is solved by using a finite volume method to obtain the phonon thermal conductivity k_p, and the Wiedemann-Franz law is used to obtain the electronic thermal conductivity k_e. The figure of merit of nanoporous silicon is calculated by ZT = S²σT/(k_p + k_e). The effects of doping density, porosity, temperature, and nanopore size on thermoelectric properties of nanoporous silicon are investigated. It is confirmed that nanoporous silicon has significantly higher thermoelectric energy conversion efficiency than its nonporous counterpart. Specifically, this study shows that, with an n-type doping density of 10²⁰ cm⁻³, a porosity of 36%, and a nanopore size of 3 nm × 3 nm, the figure of merit ZT can reach 0.32 at 600 K. The results also show that the degradation of the electrical conductivity of nanoporous Si due to the inclusion of nanopores is compensated by the large reduction in the phonon thermal conductivity and the increase of the absolute value of the Seebeck coefficient, resulting in a significantly improved ZT.
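The figure-of-merit bookkeeping can be checked in a few lines. The numbers below are round illustrative values, not the NEGF/BTE results above; they simply exercise ZT = S²σT/(k_p + k_e) with k_e taken from the Wiedemann-Franz law.

```python
# Illustrative ZT evaluation with the Wiedemann-Franz electronic conductivity.
L = 2.44e-8            # Lorenz number [W Ohm / K^2]
S = 240e-6             # Seebeck coefficient [V/K]        (placeholder)
sigma = 4.0e4          # electrical conductivity [S/m]    (placeholder)
k_p = 1.8              # phonon thermal conductivity [W/(m K)] (placeholder)
T = 600.0              # temperature [K]

k_e = L * sigma * T                      # Wiedemann-Franz law
ZT = S**2 * sigma * T / (k_p + k_e)      # figure of merit
print(f"k_e = {k_e:.3f} W/(m K), ZT = {ZT:.2f}")
```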

The GLODEP2 computer code was utilized to determine biological impact to humans on a global scale using up-to-date estimates of biological risk. These risk factors use varied biological damage models for assessing effects. All the doses reported are the unsheltered, unweathered, smooth terrain, external gamma dose. We assume the unperturbed atmosphere in determining injection and deposition. Effects due to ''nuclear winter'' may invalidate this assumption. The calculations also include scenarios that attempt to assess the impact of the changing nature of the nuclear stockpile. In particular, the shift from larger to smaller yield nuclear devices significantly changes the injection pattern into the atmosphere, and hence significantly affects the radiation doses that ensue. We have also looked at injections into the equatorial atmosphere. In total, we report here the results for 8 scenarios. 10 refs., 6 figs., 11 tabs.

We give a sheaf theoretic interpretation of Potts models with external magnetic field, in terms of constructible sheaves and their Euler characteristics. We show that the polynomial countability question for the hypersurfaces defined by the vanishing of the partition function is affected by changes in the magnetic field: elementary examples suffice to see non-polynomially countable cases that become polynomially countable after a perturbation of the magnetic field. The same recursive formula for the Grothendieck classes, under edge-doubling operations, holds as in the case without magnetic field, but the closed formulae for specific examples like banana graphs differ in the presence of magnetic field. We give examples of computation of the Euler characteristic with compact support, for the set of real zeros, and find a similar exponential growth with the size of the graph. This can be viewed as a measure of topological and algorithmic complexity. We also consider the computational complexity question for evaluations of the polynomial, and show both tractable and NP-hard examples, using dynamic programming.

In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column and made use of the Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry (PIV) techniques. A simple shear flow device for bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. Concentration and velocity of particles of different sizes near a wall in a duct flow were also measured. The technique of Phase-Doppler anemometry was used in these studies. An Eulerian volume of fluid (VOF) computational model for the flow condition in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied. The simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows with and without chemical reaction in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.

Forced outages and boiler unavailability of coal-fired fossil plants are most often caused by fire-side corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultrasupercritical (USC) applications to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical (565 °C at 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems to produce stable nanocrystalline coatings that form a protective, continuous scale of either Al2O3 or Cr2O3. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of a continuous Al2O3 scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed and, among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best quality coating with a minimum number of shallow defects; the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al2O3 scale, than widely used MCrAlY coatings. However, the ultra-fine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al diffusion from the coating into the substrate. An effective diffusion barrier interlayer coating was developed to prevent inward Al diffusion. The fire-side corrosion test results showed that nanocrystalline coatings with a minimum number of defects have great potential for providing corrosion protection. The coating tested in the most aggressive environment showed no evidence of coating spallation and/or corrosion attack after 1050 hours of exposure. In contrast, evidence of coating spallation in isolated areas and corrosion attack of the base metal in the spalled areas was observed after 500 hours. These contrasting results after 500 and 1050 hours of exposure suggest that the premature coating spallation in isolated areas may be related to the variation of defects in the coating between the samples. It is suspected that cauliflower-type defects in the coating were responsible for the coating spallation in isolated areas. Thus, a defect-free, good-quality coating is the key to the long-term durability of nanocrystalline coatings in corrosive environments, and additional process optimization work is required to produce defect-free coatings prior to development of a coating application method for production parts.

A computer program is described to calculate momentum distributions in stripping and diffraction dissociation reactions. A Glauber model is used with the scattering wavefunctions calculated in the eikonal approximation. The program is appropriate for knockout reactions in intermediate-energy collisions (30 MeV ≤ E_lab/nucleon ≤ 2000 MeV). It is particularly useful for reactions involving unstable nuclear beams, or exotic nuclei (e.g. neutron-rich nuclei), and for studies of single-particle occupancy probabilities (spectroscopic factors) and other related physical observables. Such studies are an essential part of the scientific program of radioactive beam facilities, as in, for instance, the proposed RIA (Rare Isotope Accelerator) facility in the US.

... /Novem (Dutch Government). ISAPP (Integrated Systems Approach to Petroleum Production) is a joint project ... as applied in the field of petroleum reservoir engineering. Starting from a large-scale, physics-based model ... models in petroleum reservoir engineering. Petroleum reservoir engineering is concerned with maximizing ...

A method is described for determining the parameters of a model from experimental data based upon the utilization of Bayes' theorem. This method has several advantages over the least-squares method as it is commonly used; one important advantage is that the assumptions under which the parameter values have been determined are more clearly evident than in many results based upon least squares. Bayes' method has been used to develop a computer code which can be utilized to analyze neutron cross-section data by means of the R-matrix theory. The required formulae from the R-matrix theory are presented, and the computer implementation of both Bayes' equations and R-matrix theory is described. Details about the computer code and complete input/output information are given.
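The underlying update is the standard linear-Gaussian application of Bayes' theorem, combining a prior parameter covariance, a data covariance, and a sensitivity matrix. The sketch below shows that generic update only; it is not the report's code and omits the R-matrix formulae entirely.

```python
# Generic linear-Gaussian Bayes update: prior theta0 with covariance M,
# data d with covariance V, sensitivities G = d(data)/d(parameters).
import numpy as np

def bayes_update(theta0, M, G, d, V):
    """Posterior mean and covariance from Bayes' theorem with Gaussian pieces."""
    M_inv = np.linalg.inv(M)
    V_inv = np.linalg.inv(V)
    P = np.linalg.inv(M_inv + G.T @ V_inv @ G)           # posterior covariance
    theta = theta0 + P @ G.T @ V_inv @ (d - G @ theta0)  # posterior mean
    return theta, P

# Toy example: two parameters, three measurements of linear combinations.
theta0 = np.array([1.0, 0.5]); M = np.diag([0.5**2, 0.5**2])
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d = np.array([1.2, 0.4, 1.7]); V = np.diag([0.1**2] * 3)
theta, P = bayes_update(theta0, M, G, d, V)
print(theta, np.sqrt(np.diag(P)))   # posterior means and standard deviations
```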

Multiple equilibria in a coupled ocean–atmosphere–sea ice general circulation model (GCM) of an aquaplanet with many degrees of freedom are studied. Three different stable states are found for exactly the same set of ...

Ray tracing computations in the smoothed SEG/EAGE Salt Model (Václav Bucha, Department of ...) ... to compute rays and synthetic seismograms of refracted and reflected P-waves in the smoothed SEG/EAGE Salt Model. The original 3-D SEG/EAGE Salt Model (Aminzadeh et al. 1997) is a very complex model and cannot be used for ray ...

This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

Recently the partial-wave cutoff method was developed as a new calculational scheme for the functional determinant of quantum field theory in radial backgrounds. For the contribution given by an infinite sum of large partial waves, we derive explicit radial WKB series in the angular momentum cutoff for d = 2, 3, 4, and 5 (d is the space-time dimension), which have uniform validity irrespective of any specific values assumed for the other parameters. Utilizing this series, precision evaluation of the renormalized functional determinant is possible with a relatively small number of low partial-wave contributions determined separately. We illustrate the power of this scheme in a numerically exact evaluation of the prefactor (expressed as a functional determinant) in the case of false vacuum decay in 4D scalar field theory.

This project is a collaborative effort between the University of Akron, the Illinois Institute of Technology, and two industrial partners: UOP and Energy International. The tasks involve the development of transient two- and three-dimensional computer codes for slurry bubble column reactors, optimization, comparison to data, and measurement of input parameters, such as the viscosity and restitution coefficients. To understand turbulence, measurements were done in the riser with 530 micron glass beads using a PIV technique. This report summarizes the measurements and simulations completed, as described in detail in the attached paper, ''Computational and Experimental Modeling of Three-Phase Slurry-Bubble Column Reactor.'' The Particle Image Velocimetry method described elsewhere (Gidaspow and Huilin, 1996) was used to measure the axial and tangential velocities of the particles. This method was modified with the use of a rotating colored transparent disk. The velocity distributions obtained with this method show that the distribution is close to Maxwellian. From the velocity measurements the normal and the shear stresses were computed. Also, with the use of the CCD camera, a technique was developed to measure the solids volume fraction. The granular temperature profile follows the solids volume fraction profile. As predicted by theory, the granular temperature is highest at the center of the tube. The normal stress in the direction of the flow is approximately 10 times larger than that in the tangential direction. The ... is lower at the center, where the ... is higher at that point. The Reynolds shear stress was small, producing a restitution coefficient near unity. The normal Reynolds stress in the direction of flow is large due to the fact that it is produced by the large gradient of velocity in the direction of flow compared to the small gradients in the θ and r directions. The kinetic theory gives values of viscosity that agree with our previous measurements (Gidaspow, Wu and Mostofi, 1999). The values of viscosity obtained from pressure drop minus weight of bed measurements agree at the center of the tube.

Forced outages and boiler unavailability in conventional coal-fired fossil power plants are most often caused by fireside corrosion of boiler waterwalls. Industry-wide, the rate of wall thickness corrosion wastage of fireside waterwalls in fossil-fired boilers has been of concern for many years. It is significant that the introduction of nitrogen oxide (NOx) emission controls with staged burner systems has increased reported waterwall wastage rates to as much as 120 mils (3 mm) per year. Moreover, the reducing environment produced by the low-NOx combustion process is the primary cause of accelerated corrosion rates of waterwall tubes made of carbon and low alloy steels. Improved coatings, such as the MCrAl nanocoatings evaluated here (where M is Fe, Ni, and Co), are needed to reduce or eliminate waterwall damage in subcritical, supercritical, and ultra-supercritical (USC) boilers. The first two tasks of this six-task project, jointly sponsored by EPRI and the U.S. Department of Energy (DE-FC26-07NT43096), focused on computational modeling of an advanced MCrAl nanocoating system and evaluation of two nanocrystalline (iron- and nickel-based) coatings, which will significantly improve the corrosion and erosion performance of tubing used in USC boilers. The computational model results showed that about 40 wt.% Ni is required in Fe-based nanocrystalline coatings for long-term durability, leading to a coating composition of Fe-25Cr-40Ni-10 wt.% Al. In addition, the long-term thermal exposure test results further showed accelerated inward diffusion of Al from the nanocrystalline coatings into the substrate. In order to enhance the durability of these coatings, it is necessary to develop a diffusion barrier interlayer coating such as TiN and/or AlN. The third task, 'Process Advanced MCrAl Nanocoating Systems', of the six-task project jointly sponsored by the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DE-FC26-07NT43096) focused on processing of advanced nanocrystalline coating systems and development of diffusion barrier interlayer coatings. Among the diffusion interlayer coatings evaluated, the TiN interlayer coating was found to be the optimum one. This report describes the research conducted under the Task 3 workscope.

CPT: An Energy-Efficiency Model for Multi-core Computer Systems (Weisong Shi, Shinan Wang, and Bing ...) ... efficiency of computer systems. These techniques affect the energy efficiency across different layers ... a metric that represents the energy efficiency of a computer system, for a specific configuration, given ...

Trace-Based Analysis and Prediction of Cloud Computing User Behavior Using Fractal Modeling ... and technology. In this paper, we investigate the characteristics of the cloud computing requests received ... the alpha-stable distribution. Keywords: cloud computing; alpha-stable distribution; fractional order ...

... specially designed within the framework of this research. A computational heat transfer model is constructed. The developed mean model constitutes the basis of the computational stochastic heat transfer model that has been ... to the experimental ones. Keywords: computational heat transfer modeling, uncertainties, probabilistic modeling

Many computational models of visual attention have been created from a wide variety of different approaches to predict where people look in images. Each model is usually introduced by demonstrating performances on new ...

... computational model using the finite element method to predict the viscoelastic behavior of cement paste; using this model, virtual tests can be carried out to improve understanding of the mechanisms of viscoelastic behavior. The primary finding from...

The material presented in this report involved experimental work performed to assist in determining the constants for a computer modeling program being developed by Production Engineering for use in trap design. Included in this study are bed distribution studies to define uranium loading on alumina (Al2O3) and sodium fluoride (NaF) with respect to bed zones. A limited amount of work was done on uranium penetration into NaF pellets. Only the experimental work is reported here; Production Engineering will use these data to develop constants for the computer model. Some of the significant conclusions are: NaF has more capacity to load UF6, but Al2O3 distributes the load more equally; velocity, system pressure, and operating temperature influence uranium loading; and in comparative tests NaF had a loading of 25%, while Al2O3 had a loading of 13%. 2 refs., 10 figs., 5 tabs.

... processes and high product quality standards required. In particular, computer simulation of industrial ... equilibrium and transient warp analysis, and displaying the results graphically. Four example panels were ...

... Ronald F. DeMara, Intelligent Systems Laboratory, School of Electrical Engineering and Computer Science. CGFs are computer-controlled behavioral models of combatants used to serve as opponents against whom ... "promise for providing powerful learning models" in a recent National Research Council report [5]. Also ...

Energy-Aware Algorithm Design via Probabilistic Computing: From Algorithms and Models to Moore ... opportunities for being energy-aware, the most fundamental limits are truly rooted in the physics of energy ... of models of computing for energy-aware algorithm design and analysis, culminating in establishing ...

Protocols for Bounded-Concurrent Secure Two-Party Computation in the Plain Model. Yehuda Lindell, Department of Computer Science, Bar-Ilan University, Ramat Gan 52900, Israel, lindell@cs.biu.ac.il, September 26 ... composition, in the plain model (where the only setup assumption made is that the parties communicate via authenticated ...

An empirical hydraulic model has been developed for determining the energy required for cleaning a vertical and nearly vertical well bore plugged with sand particles. The model considers pressure losses and cleanout time and compares sand cleanout time during direct and reverse circulation of water. Good agreement was obtained between the model and experimental results.

Sound-insulation layer modelling in car computational vibroacoustics in the medium-frequency range. In a previous article, a simplified low- and medium-frequency model for uncertain automotive sound insulation ... In this paper, the simplified insulation model is implemented in an industrial stochastic vibroacoustic model ...

... compound undergoes when subjected to composting. The purpose of this thesis is to define these processes and develop a model for determining the fate of organic compounds in waste during in-vessel composting. Volatilization and biodegradation are found...

Designing a computer-generated character involves many steps, including the structure that is responsible for moving the character in an organic manner. There are several ways to develop a character to control the motion exhibited by the skin...

Tools in support of fire safety engineering design have proliferated in the last few years due to the increased performance of computers. These tools are currently being used in a generalized manner in areas such as egress, ...

Cloud Computing has held organizations across the globe spellbound with its promise. As it moves from being a buzzword and hype into adoption, organizations are faced with the question of how best to adopt the cloud. Existing ...

The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.

Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert...

TRANSPORT PARAMETER DETERMINATION AND MODELING OF SODIUM AND STRONTIUM PLUMES AT THE IDAHO NATIONAL ENGINEERING LABORATORY. A thesis by John Thomas Londergan, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1987. Major Subject: Geophysics.

A CONCEPTUAL MODEL FOR DETERMINING YIELD LOSS DUE TO DROUGHT STRESS IN SORGHUM. A thesis by Paul Robert Koch, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1987. Major Subject: Agricultural Engineering.

The mission of the pmodel center project is to develop software technology to support scalable parallel programming models for terascale systems. The goal of the specific UD subproject is, in this context, to develop an efficient and robust methodology and tools for HPC programming. More specifically, the focus is on developing new programming models which help programmers port their applications onto parallel high performance computing systems. During the course of the research in the past 5 years, the landscape of microprocessor chip architecture has witnessed a fundamental change: multi-core/many-core chip architectures appear to have become the mainstream technology and will have a major impact on future generations of parallel machines. The programming model for shared-address-space machines is becoming critical to such multi-core architectures. Our research highlight is the in-depth study of proposed fine-grain parallelism/multithreading support on such future-generation multi-core architectures. Our research has demonstrated the significant impact such a fine-grain multithreading model can have on the productivity of parallel programming models and their efficient implementation.

The twin-arginine translocation (Tat) system transports folded proteins of various sizes across both bacterial and plant thylakoid membranes. The membrane-associated TatA protein is an essential component of the Tat translocon, and a broad distribution of different sized TatA clusters is observed in bacterial membranes. We assume that the size dynamics of TatA clusters are affected by substrate binding, unbinding, and translocation to associated TatBC clusters, where clusters with bound translocation substrates favour growth and those without associated substrates favour shrinkage. With a stochastic model of substrate binding and cluster dynamics, we numerically determine the TatA cluster size distribution. We include a proportion of targeted but non-translocatable (NT) substrates, with the simplifying hypothesis that the substrate translocatability does not directly affect cluster dynamical rate constants or substrate binding or unbinding rates. This amounts to a translocation model without specific quality control. Nevertheless, NT substrates will remain associated with TatA clusters until unbound and so will affect cluster sizes and translocation rates. We find that the number of larger TatA clusters depends on the NT fraction f. The translocation rate can be optimized by tuning the rate of spontaneous substrate unbinding, Γ_U. We present an analytically solvable three-state model of substrate translocation without cluster size dynamics that follows our computed translocation rates, and that is consistent with in vitro Tat-translocation data in the presence of NT substrates.
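A stripped-down stochastic (Gillespie-type) simulation of a single cluster conveys the flavor of such a model: growth while a substrate is bound, shrinkage while empty, and NT substrates that can only leave by unbinding. All rates below are invented for illustration and do not come from the paper.

```python
# Illustrative Gillespie simulation of one TatA-like cluster (not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
k_on, gamma_u, k_trans = 1.0, 0.2, 0.5     # bind, unbind (Gamma_U), translocate
k_grow, k_shrink = 1.0, 4.0                # growth when bound, shrinkage when empty
f_nt = 0.3                                 # fraction of targeted substrates that are NT

size, bound = 5, None                      # bound is None, "T" (translocatable) or "NT"
t, t_end, sizes = 0.0, 5000.0, []
while t < t_end:
    rates = {}
    if bound is None:
        rates["bind"] = k_on
        if size > 1:
            rates["shrink"] = k_shrink
    else:
        rates["unbind"] = gamma_u
        rates["grow"] = k_grow
        if bound == "T":
            rates["translocate"] = k_trans
    total = sum(rates.values())
    t += rng.exponential(1.0 / total)                      # time to next event
    event = rng.choice(list(rates), p=np.array(list(rates.values())) / total)
    if event == "bind":
        bound = "NT" if rng.random() < f_nt else "T"
    elif event in ("unbind", "translocate"):
        bound = None
    elif event == "grow":
        size += 1
    elif event == "shrink":
        size -= 1
    sizes.append(size)

print("mean cluster size:", np.mean(sizes))
```

In this toy version, raising `f_nt` lengthens the bound intervals (NT substrates leave only by unbinding) and shifts the sampled size distribution toward larger clusters, qualitatively mirroring the dependence on the NT fraction described above.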

The Gearbox Reliability Collaborative (GRC) has conducted extensive field and dynamometer test campaigns on two heavily instrumented wind turbine gearboxes. In this paper, data from the planetary stage is used to evaluate the accuracy and computation time of numerical models of the gearbox. First, planet-bearing load and motion data is analyzed to characterize planetary stage behavior in different environments and to derive requirements for gearbox models and life calculations. Second, a set of models is constructed that represents different levels of fidelity. Simulations of the test conditions are compared to the test data, and the computational cost of the models is compared. The test data suggests that the planet-bearing life calculations should be made separately for each bearing on a row due to unequal load distribution. It also shows that tilting of the gear axes is related to planet load share. The modeling study concluded that fully flexible models were needed to predict planet-bearing loading in some cases, although less complex models were able to achieve good correlation in the field-loading case. Significant differences in planet load share were found in simulation and were dependent on the scope of the model and the bearing stiffness model used.

use case is a nuclear accident like a core meltdown at an atomic power plant, where radiation is emitted into the air. The Lagrangian model can predict how the nuclear cloud spreads under different conditions that will be computed. Particle: one single molecule floating in the wind field. Compute unit: one unit that runs

operating system control, can be switched either on and off or between several power states of varying power. Probabilistic Model Checking and Power-Aware Computing. Marta Kwiatkowska, Gethin Norman, David Parker. Power-aware computing aims either to maximise the performance of a system under certain constraints on its power

A Network Model and Computational Approach for the 99Mo Supply Chain for Nuclear Medicine. Ladimer S. Nagurney (1) and Anna Nagurney (2). (1) Department of Electrical and Computer Engineering; (2) University of Massachusetts, Amherst, Massachusetts 01003. Fall 2011 Joint Meeting of the New England

Computer representation of the model covariance function resulting from travel-time tomography. Lud ... a supplement to the paper by Klimeš (2002b) on the stochastic travel-time tomography. It contains brief ... covariance function is a function of 6 coordinates with pronounced singularities. The computer

that could be used for biofuel and other metabolic engineering applications. · Performed high ... of Microbial Pathogens. Infectious disease is the leading cause of death worldwide. While genomics has had ... system in biofuel and nutraceutical production. With the aid of computational techniques, we can predict

of new blood vessels (angiogenesis) is an important approach in cancer treatment. However, the complexity-based approach for the discovery of novel potential cancer treatments using a high fidelity simulation in cancer treatment [2]. This paper introduces a computational approach to search for novel intervention

For example, the overall time for learning STRIPS actions' effects is O(T · n). For other cases the update ... approximate the representation with a k-CNF formula, yielding an overall time of O(T · n^k) for the entire ... and games. Other applications, such as robotics, human-computer interfaces, and program and

of photochemical air pollution (smog) in industrialised cities. However, computational hardware demands can that have been used as part of an air pollution study being conducted in Melbourne, Australia. We also necessary to perform real air pollution studies. The system is used as part of the Melbourne Airshed study

Computational Fluid Dynamics Modeling of a Lithium/Thionyl Chloride Battery with Electrolyte Flow W-dimensional model is developed to simulate discharge of a primary lithium/thionyl chloride battery. The model to the first task with important examples of lead-acid,1-3 nickel-metal hydride,4-8 and lithium-based batteries

experimentally. The process of model discovery for energy-aware systems, in advance of controller design. Such models are also prerequisites for the application of control theory to energy-aware systems. We ... (i.e., the computing system to be controlled) using system identification; (2) use the plant model to design

that the commission is considering, electricity market models, production cost/optimal power flow models, and hybrids. Comments on the use of computer models for merger analysis in the electricity industry, FERC Docket ... for market power in electricity markets. These analyses have yielded several insights about the application

Computational modeling of damage evolution in unidirectional fiber reinforced ceramic matrix ... mechanical response of a ceramic matrix composite is simulated by a numerical model for a fiber-matrix unit ... evolution in brittle matrix composites was developed. This modeling is based on an axisymmetric unit cell

. INTRODUCTION The inverse problem in groundwater modeling is generally ill-posed and non-unique. The typical geological heterogeneity has not been possible in common groundwater modeling practice. The principal reasons-Marquardt methods, and (3) lack of experience within the groundwater modeling community with regularized inversion

This thesis examines how computer modelling matters for policy-making by looking at two case studies of European fisheries management. Based on documentary analysis and ethnographic interviews and observations, the main ...

A well-known challenge in computable general equilibrium (CGE) models is to maintain correspondence between the forecasted economic and physical quantities over time. Maintaining such a correspondence is necessary to ...

In this thesis a new parallel computational method is proposed for modeling three-dimensional dynamic fracture of brittle solids. The method is based on a combination of the discontinuous Galerkin (DG) formulation of the ...

using MPI. The results show the cluster system can simultaneously support up to 32 processes for an MPI program with high performance of interprocess communication. The parallel computations of a phase field model of magnetic materials implemented by an MPI...

A COMPUTER SIMULATION MODEL FOR THE PREDICTION OF TEMPERATURE DISTRIBUTIONS IN RADIOFREQUENCY HYPERTHERMIA TREATMENT A Thesis by JEANNE MARIE ROTHE Submitted to the Graduate College of Texas A&M University in partial fulfillment... of the requirements for the degree of MASTER OF SCIENCE DECEMBER 1983 Major Subject: Bioengineering A COMPUTER SIMULATION MODEL FOR THE PREDICTION OF TEMPERATURE DISTRIBUTIONS IN RADIOFREQUENCY HYPERTHERMIA TREATMENT A Thesis by JEANNE MARIE ROTHE Approved...

A NEW, EFFICIENT COMPUTATIONAL MODEL FOR THE PREDICTION OF FLUID SEAL FLOWFIELDS A Thesis by ROBERT IRWIN HIBBS, JR. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree... of MASTER OF SCIENCE December 1988 Major Subject: Mechanical Engineering A NEW, EFFICIENT COMPUTATIONAL MODEL FOR THE PREDICTION OF FLUID SEAL FLOWFIELDS A Thesis by ROBERT IRWIN HIBBS, JR. Approved as to style and content by: David L. Rhode...

Computer simulations have been performed, aimed at achieving a better understanding of the geological and physical processes involved in the formation of sedimentary basins in general and the Black Warrior basin of Alabama and Mississippi in particular. Microscopic-level computer modeling of sandstone porosity reduction has been done, elucidating the detailed small-scale dynamics which lead to the geological phenomenon of pressure solution. A new technique has been developed for 1D burial and thermal modeling of sedimentary basins based on stratigraphic data from test wells. It is significantly faster than previous methods, and can be used in an interactive, menu-oriented program requiring relatively little learning time or prior computer experience. This allows a geologist to rapidly determine the results of various different hypotheses about basin formation, providing insight which may help determine which is correct. A program has also been written to simulate tectonic-plate collisions and rifting processes using viscoelastic hydrodynamics.

We revisit the problem of radial pulsations of neutron stars by computing four general-relativistic polytropic models, in which "density" and "adiabatic index" are involved with their discrete meanings: (i) "rest-mass density" or (ii) "mass-energy density" regarding the density, and (i) "constant" or (ii) "variable" regarding the adiabatic index. Considering the resulting four discrete combinations, we construct corresponding models and compute for each model the frequencies of the lowest three radial modes. Comparisons with previous results are made. The deviations of respective frequencies of the resolved models seem to exhibit a systematic behavior, an issue discussed here in detail.

In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton, and Oak Ridge National Laboratory, we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.

, building forensic specialists, manufacturer representatives, facilities managers, IAQ specialists of modeling for new products are demonstrated by both group and individual interaction. · You will learn how

obstacle is introduced. This model applied to the estimation of the efficiency of free flow turbines allows ... reserved. Keywords: Cavitation flows, Riabouchinsky model, Kirchhoff method, Free boundary problems. 1 ... by the recent progress in the development of free flow turbines [1] for the purpose of estimating

We demonstrate an improvement of predictive capability brought to a non-linear material model using a combination of test data, sensitivity analysis, uncertainty quantification, and calibration. A model that captures increasingly complicated phenomena, such as plasticity, temperature and strain rate effects, is analyzed. Predictive maturity is defined, here, as the accuracy of the model to predict multiple Hopkinson bar experiments. A statistical discrepancy quantifies the systematic disagreement (bias) between measurements and predictions. Our hypothesis is that improving the predictive capability of a model should translate into better agreement between measurements and predictions. This agreement, in turn, should lead to a smaller discrepancy. We have recently proposed to use discrepancy and coverage, that is, the extent to which the physical experiments used for calibration populate the regime of applicability of the model, as basis to define a Predictive Maturity Index (PMI). It was shown that predictive maturity could be improved when additional physical tests are made available to increase coverage of the regime of applicability. This contribution illustrates how the PMI changes as 'better' physics are implemented in the model. The application is the non-linear Preston-Tonks-Wallace (PTW) strength model applied to Beryllium metal. We demonstrate that our framework tracks the evolution of maturity of the PTW model. Robustness of the PMI with respect to the selection of coefficients needed in its definition is also studied.
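
The abstract does not give the PMI formula, but its two ingredients, a discrepancy (bias) statistic between measurements and predictions and a coverage measure over the regime of applicability, can be illustrated with a short, self-contained sketch; all data values and names below are hypothetical.

```python
import numpy as np

def discrepancy_bias(measured, predicted):
    """Systematic disagreement (bias) and spread between measurements and predictions."""
    residuals = np.asarray(measured) - np.asarray(predicted)
    return residuals.mean(), residuals.std(ddof=1)

def coverage_fraction(calibration_points, domain_lo, domain_hi, n_bins=10):
    """Fraction of bins of the regime of applicability that contain at least one
    calibration experiment (a crude one-dimensional coverage measure)."""
    counts, _ = np.histogram(np.asarray(calibration_points),
                             bins=np.linspace(domain_lo, domain_hi, n_bins + 1))
    return (counts > 0).mean()

# Hypothetical Hopkinson-bar-like data: strain rates used for calibration (1/s)
strain_rates = np.array([1e2, 5e2, 1e3, 2e3, 2.5e3])
measured = np.array([410.0, 432.0, 455.0, 470.0, 481.0])   # flow stress, MPa (made up)
predicted = np.array([405.0, 438.0, 449.0, 476.0, 492.0])  # model output, MPa (made up)

bias, spread = discrepancy_bias(measured, predicted)
cov = coverage_fraction(strain_rates, 1e2, 1e4)
print(f"bias = {bias:.1f} MPa, spread = {spread:.1f} MPa, coverage = {cov:.0%}")
```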

; and predictive modeling for large datasets. First, we develop a spatial-temporal model for local wind fields in a wind farm with more than 200 wind turbines. Our framework utilizes the correlation among the derivatives of wind speeds to find a neighborhood...

phase equilibria. The technique is demonstrated with examples using the NRTL and electrolyte-NRTL (eNRTL) models. In two of the NRTL examples, results are found that contradict previous work. In the eNRTL time that a method for parameter estimation in the eNRTL model from binary LLE data (mutual solubility

closely on known anatomy and physiology. First, we assume that the thalamic targets, which relay ascend the external globus pallidus (GPe) and the subthalamic nucleus (STN). As a test of the model, the system

DNA conformation within cells has many important biological implications, but there are challenges both in modeling DNA due to the need for specialized techniques, and experimentally since tracing out in vivo conformations ...

A grand challenge of systems biology is to model the cell. The cell is an integrated network of cellular functions. Each cellular function, such as immune response, cell division, metabolism or apoptosis, is defined by an ...

This research is focused on a better quantification of the variations in CO2 exchanges between the atmosphere and biosphere and the factors responsible for these exchanges. The principal approach is to infer the variations in the exchanges from variations in the atmospheric CO2 distribution. The principal tool involves using a global three-dimensional tracer transport model to advect and convect CO2 in the atmosphere. The tracer model the authors used was developed at the Goddard Institute for Space Studies (GISS) and is derived from the GISS atmospheric general circulation model. A special run of the GCM is made to save high-frequency winds and mixing statistics for the tracer model.
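
The GISS tracer transport model itself cannot be reproduced here, but a minimal one-dimensional upwind advection step illustrates, under strong simplifying assumptions (constant wind, periodic domain), the kind of tracer advection such a model performs on saved wind fields.

```python
import numpy as np

def upwind_advect(c, u, dx, dt, steps):
    """First-order upwind advection of a tracer on a periodic 1-D grid.

    c  : tracer mixing ratio per grid cell
    u  : constant wind speed (u > 0 assumed here)
    The Courant number u*dt/dx must be <= 1 for stability.
    """
    cfl = u * dt / dx
    assert 0 < cfl <= 1.0, "unstable time step"
    for _ in range(steps):
        c = c - cfl * (c - np.roll(c, 1))   # flux taken from the upwind (left) cell
    return c

nx = 100
c0 = np.zeros(nx)
c0[10:20] = 1.0                             # a blob of CO2-like tracer
c = upwind_advect(c0, u=5.0, dx=100e3, dt=600.0, steps=500)
print("total tracer is conserved:", np.isclose(c.sum(), c0.sum()))
```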

-damage constitutive model, the effect of the micromechanical properties of concrete, such as aggregate shape, distribution, and volume fraction, the ITZ thickness, and the strength of the ITZ and mortar matrix on the tensile behavior of concrete... [contents fragments: 7.1 2-D Meso-scale Analysis Model of Concrete; 7.2 Material Properties of the ITZ and Mortar Matrix; 7.3 The Effect of the Aggregate Shape...]

Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering and physics oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers. This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decision-making.

STOCHASTIC COMPUTATIONAL DYNAMICAL MODEL OF UNCERTAIN STRUCTURE COUPLED WITH AN INSULATION LAYER. The effect of insulation layers in complex dynamical systems for low- and medium-frequency ranges such as car booming noise analysis; one introduces a simplified stochastic model of insulation layers based

-aware systems. Such models are also prerequisites for the application of control theory to energy ... Model Discovery for Energy-Aware Computing Systems: An Experimental Evaluation ... Appears, is a critical first step in designing advanced controllers that can dynamically manage the energy

computationally tenable is shown herein. Due to the complicated nature of the many cracks and their interactions, a multi-scale micro-meso-local-global methodology is employed in order to model damage modes. Interface degradation is first modeled analytically...

was developed to study the thermal conductivity of single walled carbon nanotube (SWNT)-polymer composites. Computational modeling of thermal conductivity of single walled carbon nanotube polymer ... resistance on effective conductivity of composites were quantified. The present model is a useful tool

water brings unique challenges [15]. Major difficulties include its lack of matchable features. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS. Water Surface Modeling from A Single ... and Phillip Willis. Abstract: We introduce a video based approach for producing water surface models. Recent

Expressing and computing passage time measures of GSPN models with HASL Elvio Gilberto Amparore1 measures in (Tagged) GSPNs using the Hybrid Automata Stochastic Logic (HASL) and the statistical model), formally express them in HASL terms and assess them by means of simulation in the COSMOS tool. The interest

Computational model to evaluate port wine stain depth profiling using pulsed photothermal-thermal model to evaluate the use of pulsed photothermal radiometry (PPTR) for depth profiling of port wine the desired effect. A diagnostic measurement of the distribution of laser energy deposition and ensuing

with CO2, for example). A major challenge in numerical simulations of moving contact lines. An efficient computational model for macroscale simulations of moving contact lines. Y. Sui ... simulation of moving contact lines. The main purpose is to formulate and test a model wherein the macroscale

a variably saturated porous medium with exponential diffusivity, such as soil, rock or concrete is given by u. Asymptotical Computations for a Model of Flow in Saturated Porous Media. P. Amodio, C.J. Budd ... for an implicit second order ordinary differential equation which arises in models of flow in saturated porous

A Three-Dimensional Computational Model of PEM Fuel Cell with Serpentine Gas Channels by Phong ... fuel cell with serpentine gas flow channels is presented in this thesis. This comprehensive model accounts for important transport phenomena in a fuel cell such as heat transfer, mass transfer, electrode

A microscopic quantum mechanical model of computers as represented by Turing machines is constructed. It is shown that for each number $N$ and Turing machine $Q$ there exists a Hamiltonian $H_N^Q$ and a class of appropriate initial states such that, if $\Psi_Q^N(0)$ is such an initial state, then $\Psi_Q^N(t) = \exp(-iH_N^Q t)\,\Psi_Q^N(0)$ correctly describes at times $t_3, t_6, \ldots, t_{3N}$ model states that correspond to the completion of the first, second, ..., $N$th computation step of $Q$. The model parameters can be adjusted so that for an arbitrary time interval $\Delta$ around $t_3, t_6, \ldots, t_{3N}$, the machine part of $\Psi_Q^N(t)$ is stationary. 1 figure.
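
The construction below is a hedged, minimal sketch in the spirit of this result (a Feynman-style clock Hamiltonian rather than the paper's exact $H_N^Q$): a toy two-step computation is encoded in a Hamiltonian, and $\Psi(t) = \exp(-iHt)\,\Psi(0)$ is evaluated numerically to show the clock register advancing through the computation steps.

```python
import numpy as np
from scipy.linalg import expm

# Toy "program": N = 2 computation steps, each a single-qubit unitary.
X = np.array([[0, 1], [1, 0]], dtype=complex)
HAD = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
steps = [HAD, X]
N = len(steps)

dim_clock, dim_data = N + 1, 2
def clock_ket(k):
    v = np.zeros(dim_clock, dtype=complex)
    v[k] = 1.0
    return v

# Feynman clock Hamiltonian: H = sum_k |k+1><k| (x) U_k + hermitian conjugate.
H = np.zeros((dim_clock * dim_data,) * 2, dtype=complex)
for k, U in enumerate(steps):
    hop = np.outer(clock_ket(k + 1), clock_ket(k))
    H += np.kron(hop, U) + np.kron(hop.conj().T, U.conj().T)

psi0 = np.kron(clock_ket(0), np.array([1.0, 0.0], dtype=complex))  # clock 0, data |0>

for t in np.linspace(0, 4, 9):
    psi = expm(-1j * H * t) @ psi0                      # Psi(t) = exp(-iHt) Psi(0)
    p_clock = np.abs(psi.reshape(dim_clock, dim_data)) ** 2
    print(f"t={t:4.1f}  P(clock=k) = {p_clock.sum(axis=1).round(3)}")
```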

An Ontology-based Model to Determine the Automation Level of an Automated Vehicle for Co). In addition, an automated vehicle should also self-assess its own perception abilities, and not only perceive this idea, cybercars were designed as fully automated vehicles [3], thought since its inception as a new

-42, with sequence LMVGGVVIA) forms a structured aggregate which is classified as an amyloid fibril based primarilyDetermination of Peptide Amide Configuration in a Model Amyloid Fibril by Solid-State NMR P. R these aggregates form. The primary constituent of the amyloid plaques characteristic of AD are a family of 39

. Senseman and Robbins [2,3] supported this hypothesis. They used voltage sensitive dye methods to show 750 cells from different cortical layers. Our model captures the basic geometry and temporal structure and are best characterized. These are two types of pyramidal cells (the lateral and medial pyramidal cells

the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline. Numerical results are presented for degradation of granulated sugar in an industrial scale handling, because of the change in particle properties such as particle size distribution, shape and

how the deformation of a silicon surface caused by a high energy C60 impact can eject a large ... cages. But also the use of C60 ions in secondary ion mass spectrometry (SIMS) as a probing beam is showing ... in collaboration with the University of Karlsruhe, the simulation models have been verified for both low energy

highly populated seismic region in the U.S., it has well- characterized geological structures (including in characterizing earthquake source and basin material properties, a critical remaining challenge is to invert basin geology and earthquake sources, and to use this capability to model and forecast strong ground

in neural network modeling, machine learning, adaptive systems in general and self-organising systems] and verification of real-time systems [6]. A large amount of this research has been performed using the CADP verification environment, which is one of the most powerful tool suites available, boasting a spectrum

derived material properties of cells have been found to vary by orders of magnitude even for the same cell type. The primary cause of such disparity is attributed to the stimulation process, and the theoretical models used to interpret the experimental data...

Academy of Sciences, Warsaw, Poland. zytkow@uncc.edu. Abstract: Model construction is one of the key scientific ... of the main steps. As a body of mass m rolls down, its kinetic energy grows from zero to mv^2/2, where v is the final velocity. At the same time, its potential energy decreases from mgh to zero, where g is the gravitational acceleration at the Earth's surface
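
The energy bookkeeping quoted in the snippet (kinetic energy mv^2/2 gained as potential energy mgh is lost) implies a final speed v = sqrt(2*g*h) for a frictionless point mass; the tiny sketch below makes that check explicit. The rotational kinetic energy of a genuinely rolling body is ignored here.

```python
import math

def final_speed(h, g=9.81):
    """Speed after descending through height h, from m*g*h = m*v**2/2
    (point mass, no friction, no rotational energy)."""
    return math.sqrt(2.0 * g * h)

m, h, g = 2.0, 5.0, 9.81
v = final_speed(h, g)
print(f"v = {v:.2f} m/s; KE = {0.5*m*v**2:.1f} J equals PE = {m*g*h:.1f} J")
```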

US Army Corps of Engineers - Portland District required that a two-dimensional (2D) depth-averaged model and a three-dimensional (3D) free-surface numerical model be developed and validated for the John Day tailrace. These models were used to assess the potential impact of a select group of structural and operational alternatives to tailrace flows aimed at improving fish survival at John Day Dam. The 2D model was used for the initial assessment of the alternatives in conjunction with a reduced-scale physical model of the John Day Project. A finer resolution 3D model was used to more accurately model the details of flow in the stilling basin and near-project tailrace hydraulics. Three-dimensional model results were used as input to the Pacific Northwest National Laboratory particle tracking software, and particle paths and times to pass a downstream cross section were used to assess the relative differences in travel times resulting from project operations and structural scenarios for multiple total river flows. Streamlines and neutrally-buoyant particles were seeded in all turbine and spill bays with flows. For a total river of 250 kcfs running with the Fish Passage Plan spill pattern and a spillwall, the mean residence times for all particles were little changed; however, the tails of the distribution were truncated for both spillway and powerhouse release points, and, for the powerhouse releases, the residence time for 75% of the particles to pass a downstream cross section was reduced from 45.5 minutes to 41.3 minutes. For a total river of 125 kcfs configured with the operations from the Fish Passage Plan for the temporary spillway weirs and for a proposed spillwall, the neutrally-buoyant particle tracking data showed that, with a spillwall in place, the overall mean residence time increased; however, the residence time for 75% of the powerhouse-released particles to pass a downstream cross section was reduced from 102.4 minutes to 89 minutes.

This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus™ whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described in detail.

? weights for river stage prediction (Chau, 2006). Other evolutionary algorithms, such as Differential Evaluation (DE) (Storn and Price, 1997) and Artificial Immune Systems (AIS) (de Castro and Von Zuben, 2002a; de Castro and Von Zuben, 2002b), although... is to structure the hydrologic model as a probability model, then the confidence interval of model output can be computed (Montanari et al., 1997). Representative methods of this category include Markov Chain Monte Carlo (MCMC) and a Generalized Likelihood...

This article presents a Python model and library that can be used for student investigation of the application of fundamental physics on a specific problem: the role of magnetic field in solar wind acceleration. The paper begins with a short overview of the open questions in the study of the solar wind and how they relate to many commonly taught physics courses. The physics included in the model, The Efficient Modified Parker Equation Solving Tool (TEMPEST), is laid out for the reader. Results using TEMPEST on a magnetic field structure representative of the minimum phase of the Sun's activity cycle are presented and discussed. The paper suggests several ways to use TEMPEST in an educational environment and provides access to the current version of the code.

To accelerate the introduction of new cast alloys, the simultaneous modeling and simulation of multiphysical phenomena needs to be considered in the design and optimization of mechanical properties of cast components. The required models related to casting defects, such as microporosity and hot tears, are reviewed. Three aluminum alloys are considered A356, 356 and 319. The data on calculated solidification shrinkage is presented and its effects on microporosity levels discussed. Examples are given for predicting microporosity defects and microstructure distribution for a plate casting. Models to predict fatigue life and yield stress are briefly highlighted here for the sake of completion and to illustrate how the length scales of the microstructure features as well as porosity defects are taken into account for modeling the mechanical properties. Thus, the data on casting defects, including microstructure features, is crucial for evaluating the final performance-related properties of the component. ACKNOWLEDGEMENTS This work was performed under a Cooperative Research and Development Agreement (CRADA) with the Nemak Inc., and Chrysler Co. for the project "High Performance Cast Aluminum Alloys for Next Generation Passenger Vehicle Engines. The author would also like to thank Amit Shyam for reviewing the paper and Andres Rodriguez of Nemak Inc. Research sponsored by the U. S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, as part of the Propulsion Materials Program under contract DE-AC05-00OR22725 with UT-Battelle, LLC. Part of this research was conducted through the Oak Ridge National Laboratory's High Temperature Materials Laboratory User Program, which is sponsored by the U. S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Program.

By developing a new model and its finite element implementation, we analyze the Berry phase low-dimensional semiconductor nanostructures, focusing on quantum dots (QDs). In particular, we solve the Schrödinger equation and investigate the evolution of the spin dynamics during the adiabatic transport of the QDs in the 2D plane along circular trajectory. Based on this study, we reveal that the Berry phase is highly sensitive to the Rashba and Dresselhaus spin-orbit lengths.

This paper provides verification results of the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual-range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
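
As a hedged illustration of the curve structure described above (bi-quadratic curves in two temperatures, a quadratic part-load curve, and a "dual range" switch between curve sets), the sketch below uses the standard bi-quadratic form c1 + c2*x + c3*x^2 + c4*y + c5*y^2 + c6*x*y with placeholder coefficients. It is not the EnergyPlus VRF implementation, and the coefficients, boundary temperature, and function names are illustrative only.

```python
def biquadratic(coeffs, x, y):
    """Bi-quadratic performance curve: c1 + c2*x + c3*x^2 + c4*y + c5*y^2 + c6*x*y."""
    c1, c2, c3, c4, c5, c6 = coeffs
    return c1 + c2*x + c3*x**2 + c4*y + c5*y**2 + c6*x*y

def quadratic(coeffs, plr):
    """Quadratic part-load curve: c1 + c2*PLR + c3*PLR^2."""
    c1, c2, c3 = coeffs
    return c1 + c2*plr + c3*plr**2

def vrf_cooling_performance(t_wb_indoor, t_db_outdoor, plr, rated_capacity, rated_eir,
                            cap_curves, eir_curves, plr_curve, boundary=26.0):
    rng = "low" if t_db_outdoor <= boundary else "high"   # dual-range curve selection
    cap = rated_capacity * biquadratic(cap_curves[rng], t_wb_indoor, t_db_outdoor)
    eir = rated_eir * biquadratic(eir_curves[rng], t_wb_indoor, t_db_outdoor)
    power = cap * eir * quadratic(plr_curve, plr)
    return cap * plr, power                               # delivered cooling, electric power

# Placeholder coefficients (illustrative only, not manufacturer data).
cap_curves = {"low": (0.6, 0.02, 0.0, 0.01, 0.0, 0.0), "high": (0.9, 0.02, 0.0, -0.005, 0.0, 0.0)}
eir_curves = {"low": (0.5, 0.0, 0.0, 0.01, 0.0, 0.0), "high": (0.3, 0.0, 0.0, 0.015, 0.0, 0.0)}
plr_curve = (0.05, 0.95, 0.0)

print(vrf_cooling_performance(19.0, 35.0, 0.7, 10000.0, 0.28, cap_curves, eir_curves, plr_curve))
```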

The introduction of new technologies like adaptive automation systems and advanced alarms processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations and also the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques and tools that were used to design the previous generation of alarm system designs are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators’ alarm handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine the effect of operator performance with simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and human workload predicted by the system.

Although fisheries biology studies are frequently performed at US Army Corps of Engineers (USACE) projects along the Columbia and Snake Rivers, there is currently no consistent definition of the ``forebay'' and ``tailrace'' regions for these studies. At this time, each study may use somewhat arbitrary lines (e.g., the Boat Restriction Zone) to define the upstream and downstream limits of the study, which may be significantly different at each project. Fisheries researchers are interested in establishing a consistent definition of project forebay and tailrace regions for the hydroelectric projects on the lower Columbia and Snake rivers. The Hydraulic Extent of a project was defined by USACE (Brad Eppard, USACE-CENWP) as follows: The river reach directly upstream (forebay) and downstream (tailrace) of a project that is influenced by the normal range of dam operations. Outside this reach, for a particular river discharge, changes in dam operations cannot be detected by hydraulic measurement. The purpose of this study was to, in consultation with USACE and regional representatives, develop and apply a consistent set of criteria for determining the hydraulic extent of each of the projects in the lower Columbia and Snake rivers. A 2D depth-averaged river model, MASS2, was applied to the Snake and Columbia Rivers. New computational meshes were developed most reaches and the underlying bathymetric data updated to the most current survey data. The computational meshes resolved each spillway bay and turbine unit at each project and extended from project to project. MASS2 was run for a range of total river flows and each flow for a range of project operations at each project. The modeled flow was analyzed to determine the range of velocity magnitude differences and the range of flow direction differences at each location in the computational mesh for each total river flow. Maps of the differences in flow direction and velocity magnitude were created. USACE fishery biologists requested data analysis to determine the project hydraulic extent based on the following criteria: 1) For areas where the mean velocities are less than 4 ft/s, the water velocity differences between operations are not greater than 0.5 ft/sec and /or the differences in water flow direction are not greater than 10 degrees, 2) If mean water velocity is 4.0 ft/second or greater the boundary is determined using the differences in water flow direction (i.e., not greater than 10 degrees). Based on these criteria, and excluding areas with a mean velocity of less than 0.1 ft/s (within the error of the model), a final set of graphics were developed that included data from all flows and all operations. Although each hydroelectric project has a different physical setting, there were some common results. The downstream hydraulic extent tended to be greater than the hydraulic extent in the forebay. The hydraulic extent of the projects tended to be larger at the mid-range flows. At higher flows, the channel geometry tends to reduce the impact of project operations.
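
One plausible reading of the classification criteria listed above can be written down directly; the sketch below applies them cell by cell to hypothetical per-cell velocity statistics (units ft/s and degrees) and is illustrative only, not the analysis code used in the study.

```python
def within_hydraulic_extent(v_mean, dv_mag, d_dir_deg):
    """Classify one mesh cell using the report's criteria.

    - cells with mean velocity < 0.1 ft/s are excluded (within model error),
    - if mean velocity < 4 ft/s: the cell is inside the hydraulic extent when operations
      change the velocity magnitude by more than 0.5 ft/s or the direction by more than 10 deg,
    - if mean velocity >= 4 ft/s: only the 10-degree direction criterion is used.
    """
    if v_mean < 0.1:
        return False
    if v_mean < 4.0:
        return dv_mag > 0.5 or d_dir_deg > 10.0
    return d_dir_deg > 10.0

# Hypothetical per-cell ranges extracted from a set of MASS2 runs:
# (mean velocity, velocity-magnitude range, flow-direction range)
cells = [(0.05, 0.2, 15.0), (2.0, 0.7, 4.0), (2.0, 0.3, 4.0), (5.5, 1.2, 8.0), (5.5, 0.2, 12.0)]
print([within_hydraulic_extent(*c) for c in cells])   # -> [False, True, False, False, True]
```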

to include concurrent engineering.1 In both cases, the Unit has established strong ties with the Computer with defining product data models that will support concurrent engineering. Both fabrication and assembly in manufacturing must be structured like concurrent engineering activities: the users of the research must be part

According to the observations, in our expansive and isotropic relativistic Universe the non-modified Newtonian relations are valid for gravitational phenomena in the Newtonian approximation. The general Friedmann equations for the dynamics of an isotropic and homogeneous universe describe an infinite number of models of an expansive and isotropic relativistic universe in the Newtonian approximation, but only in one of them are the non-modified Newtonian relations valid. These facts provide a possibility, not considered until now, for an unambiguous deductive-reductive determination of the Friedmannian model that describes our observed Universe.

Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory (DFT) model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory (ANL).
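
A heavily simplified sketch of the statistical formulation $y = \eta(\theta) + \epsilon$ follows: a cheap stand-in plays the role of $\eta$ (a real application would use the emulator built from the ensemble of model runs), and a random-walk Metropolis sampler draws from the resulting posterior. Everything here is illustrative rather than the paper's DFT case study.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(theta, x):
    """Cheap stand-in for the physics model eta(theta); in practice this would be
    an emulator trained on an ensemble of expensive model runs."""
    return theta[0] * x + theta[1] * np.sin(x)

# Synthetic "measurements": y = eta(theta_true) + eps
x = np.linspace(0, 5, 20)
theta_true = np.array([1.2, 0.7])
sigma = 0.1
y = eta(theta_true, x) + rng.normal(0, sigma, x.size)

def log_post(theta):
    resid = y - eta(theta, x)
    return -0.5 * np.sum(resid**2) / sigma**2            # Gaussian likelihood, flat prior

theta, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)                # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:             # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])                       # discard burn-in
print("posterior mean:", samples.mean(axis=0), "truth:", theta_true)
```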

This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.

We estimate the resource requirements for the quantum simulation of the ground state energy of the one dimensional quantum transverse Ising model (TIM), based on the surface code implementation of a fault tolerant quantum computer. The surface code approach has one of the highest known tolerable error rates (1%) which makes it currently one of the most practical quantum computing schemes. Compared to results of the same model using the concatenated Steane code, the current results indicate that the simulation time is comparable but the number of physical qubits for the surface code is 2-3 orders of magnitude larger than that of the concatenation code. Considering that the error threshold requirements of the surface code is four orders of magnitude higher than the concatenation code, building a quantum computer with a surface code implementation appears more promising given current physical hardware capabilities.

In this work, two models for calculating heat transfer through a cooled vertical wall covered with a running slag layer are investigated. The first one relies on a discretization of the velocity equation, and the second one relies on an analytical solution. The aim is to find a model that can be used for calculating local heat flux boundary conditions in computational fluid dynamics (CFD) analysis of such processes. Two different cases where molten deposits exist are investigated: the black liquor recovery boiler and the coal gasifier. The results show that a model relying on discretization of the velocity equation is more flexible in handling different temperature-viscosity relations. Nevertheless, a model relying on an analytical solution is the one fast enough for a potential use as a CFD submodel. Furthermore, the influence of simplifications to the heat balance in the model is investigated. It is found that simplification of the heat balance can be applied when the radiation heat flux is dominant in the balance. 9 refs., 7 figs., 10 tabs.
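
The contrast between the two modeling routes (numerical integration of the velocity equation versus a closed-form solution) can be sketched for a gravity-driven film with temperature-dependent viscosity; the property values and viscosity law below are placeholders rather than those of the paper, and for an isothermal layer the two routes should coincide.

```python
import numpy as np

rho, g, delta = 2500.0, 9.81, 2e-3        # slag density (kg/m^3), film thickness (m): placeholders

def mu_of_T(T):
    """Placeholder temperature-viscosity relation (Pa*s); real slags need measured data."""
    return 0.5 * np.exp(2000.0 * (1.0 / T - 1.0 / 1600.0))

def velocity_profile_numeric(T_profile, y):
    """Integrate du/dy = rho*g*(delta - y)/mu(T(y)) from the wall (u=0 at y=0) to the
    free surface, allowing viscosity to vary across the layer."""
    dudy = rho * g * (delta - y) / mu_of_T(T_profile)
    return np.concatenate(([0.0], np.cumsum(0.5 * (dudy[1:] + dudy[:-1]) * np.diff(y))))

def velocity_profile_analytic(mu, y):
    """Closed form for constant viscosity: u = (rho*g/mu)*(delta*y - y^2/2)."""
    return rho * g / mu * (delta * y - 0.5 * y**2)

y = np.linspace(0.0, delta, 51)
T = np.full_like(y, 1600.0)               # isothermal layer -> the two approaches should agree
u_num = velocity_profile_numeric(T, y)
u_ana = velocity_profile_analytic(mu_of_T(1600.0), y)
print("max |difference| [m/s]:", np.max(np.abs(u_num - u_ana)))
```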

Evolutionary algorithms are parallel computing algorithms, and the simulated annealing algorithm is a sequential computing algorithm. This paper inserts simulated annealing into evolutionary computations and successfully develops a hybrid Self-Adaptive Evolutionary Strategy ($\mu+\lambda$) method and a hybrid Self-Adaptive Classical Evolutionary Programming method. Numerical results on more than 40 benchmark test problems of global optimization show that the hybrid methods presented in this paper are very effective. Lennard-Jones potential energy minimization is another benchmark for testing new global optimization algorithms. It is studied through the amyloid fibril constructions by this paper. To date, there is little molecular structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins. This region belongs to the N-terminal unstructured region (1-123) of prion proteins, the structure of which has proved hard to determine using NMR spectroscopy or X-ray crystallography ...
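
A minimal sketch of the hybridization idea, not the paper's self-adaptive algorithms, is a (mu+lambda) evolution strategy in which worse offspring may still enter the selection pool with a simulated-annealing acceptance probability; objective, step sizes, and cooling schedule are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x**2))

def hybrid_es_sa(f, dim, mu=5, lam=20, generations=200, t0=1.0, cooling=0.97):
    """(mu+lambda) evolution strategy with a simulated-annealing twist:
    offspring worse than the current worst parent may still be accepted
    with a Boltzmann probability exp(-delta/T). Illustrative only."""
    pop = rng.normal(0, 2.0, (mu, dim))
    fit = np.array([f(x) for x in pop])
    temp = t0
    for _ in range(generations):
        parents = pop[rng.integers(0, mu, lam)]
        offspring = parents + rng.normal(0, 0.3, (lam, dim))
        off_fit = np.array([f(x) for x in offspring])
        delta = off_fit - fit.max()
        accept = (delta < 0) | (rng.uniform(size=lam) < np.exp(-np.clip(delta, 0, None) / temp))
        merged = np.vstack([pop, offspring[accept]])
        merged_fit = np.concatenate([fit, off_fit[accept]])
        order = np.argsort(merged_fit)[:mu]              # (mu+lambda) truncation selection
        pop, fit = merged[order], merged_fit[order]
        temp *= cooling                                  # annealing schedule
    return pop[0], fit[0]

best_x, best_f = hybrid_es_sa(sphere, dim=10)
print("best objective:", best_f)
```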

Computing combustion noise by combining Large Eddy Simulations with analytical models. Presented by Ignacio Duran. Abstract: Two mechanisms control combustion noise generation as shown by Marble. A method to calculate combustion-generated noise has been implemented in a tool called CHORUS. The method

The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the long-term goal of enabling real-time protection and control based on wide-area sensor measurements.

Computational Modeling and the Experimental Plasma Research Program: A White Paper Submitted ... of the fusion energy program. The experimental plasma research (EPR) program is well positioned to make major ... in fusion development and promote scientific discovery. Experimental plasma research projects explore

In-Vehicle Testing and Computer Modeling of Electric Vehicle Batteries. B. Thomas, W.B. Gu, J ... was performed for both VRLA and NiMH batteries using Penn State University's electric vehicle, the Electric Lion, and hybrid-electric vehicles. A thorough understanding of battery systems from the point of view

.534 kN to 5.34 kN. In worst-case tests representing a complete lack of superior femoral head bone. Pre-clinical evaluation of ceramic femoral head resurfacing prostheses using computational models ... in resurfacing hip replacement (RHR) have been reported as early femoral neck fracture, infection, and loosening

Medical Nuclear Supply Chain Design: A Tractable Network Model and Computational Approach Anna of medical nuclear supply chains. Our focus is on the molybdenum supply chain, which is the most commonly is of special relevance to healthcare given the medical nuclear product's widespread use as well as the aging

AN ADVANCED COMPUTATIONAL APPROACH TO SYSTEM MODELING OF TOKAMAK POWER PLANTS Zoran Dragojlovic1 power plant system studies is being developed for the ARIES program. An operational design space has power plants. This allows examination of a multi-dimensional trade space as opposed to traditional

Building ventilation: a pressure airflow model, computer generation and elements of validation. H ... When heating a residential building, approximately 30% of the energy loss is due to air renewal [1]. Thus in tropical climates, natural ventilation affects essentially the inside comfort by favouring

COMPUTATIONAL CHALLENGES IN THE NUMERICAL TREATMENT OF LARGE AIR POLLUTION MODELS. I. DIMOV, K. GEORGIEV, TZ. OSTROMSKY, R. J. VAN DER PAS, AND Z. ZLATEV. Abstract. Air pollution, and especially the reduction of air pollution to some acceptable levels, is an important environmental problem, which

Utero-fetal unit and pregnant woman modeling using a computer graphics approach for dosimetry for fetuses during pregnancy. Human fetus exposure can only be assessed through simulated dosimetry studies performed in vivo on animals and in vitro at the cellular level are complemented by simulated dosimetry

A Computational Model of Aging and Calcification in the Aortic Heart Valve. Eli J. Weinberg ... of America. Abstract: The aortic heart valve undergoes geometric and mechanical changes over time. The cusps of a normal, healthy valve thicken and become less extensible over time. In the disease calcific aortic

systems with automatic control algorithms. Modernization can also improve the quality of service to water ... irrigation canals are now designed and built using modern technologies allowing advanced control procedures. Teaching canal hydraulics and control using a computer game or a scale model canal. Pierre

The primary purpose of the work reported in this thesis was to develop a versatile computer model to facilitate the design and analysis of hybrid vehicle drive-trains. A hybrid vehicle is one in which power for propulsion comes from two distinct...

can be mitigated by using dye-sensitized solar cells (DSSCs) [4], which use organic dye molecules coated ... by nearly an order of magnitude through plasmon enhanced absorption by the dye [10]. This particular solar cell ... Computational Modeling of Plasmon-Enhanced Light Absorption in a Multicomponent Dye Sensitized

tons of crude oil into the Gulf of Mexico. In order to understand the fate and impact of the discharged ... causing the riser pipe to rupture and crude oil to flow into the Gulf of Mexico from an approximate depth ... Many Task Computing for Modeling the Fate of Oil Discharged from the Deep Water Horizon Well

A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems1 Abstract Many timed extensions for Petri nets have been proposed in the literature, but their analytical solutions impose limitations on the time distributions and the net topology. To overcome these limitations

Application to the Efficiency of Free Flow Turbines. A. GORBAN', Institute of Computational Modeling, Russian ... obstacle is considered. Its application to estimating the efficiency of free flow turbines is discussed ... hydraulic turbines, i.e., the turbines that work without dams [1]. For this kind of turbine, the term

Computable General Equilibrium Models for the Analysis of Energy and Climate Policies Ian Sue Wing of energy and environmental policies. Perhaps the most important of these applications is the analysis Change, MIT Prepared for the International Handbook of Energy Economics Abstract This chapter is a simple

The US Army Corps of Engineers Portland District (CENWP) has developed a computational fluid dynamics (CFD) model of the John Day forebay on the Columbia River to aid in the development and design of alternatives to improve juvenile salmon passage at the John Day Project. At the request of CENWP, Pacific Northwest National Laboratory (PNNL) Hydrology Group has conducted a technical review of CENWP's CFD model run in CFD solver software, STAR-CD. PNNL has extensive experience developing and applying 3D CFD models run in STAR-CD for Columbia River hydroelectric projects. The John Day forebay model developed by CENWP is adequately configured and validated. The model is ready for use simulating forebay hydraulics for structural and operational alternatives. The approach and method are sound, however CENWP has identified some improvements that need to be made for future models and for modifications to this existing model.

Yock, Adam D., E-mail: ADYock@mdanderson.org; Kudchadker, Rajat J. (Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030, United States); Rao, Arvind (Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030, United States); Dong, Lei (Scripps Proton Therapy Center, San Diego, California 92121, and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030, United States); Beadle, Beth M.; Garden, Adam S. (Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, United States); Court, Laurence E. (Department of Radiation Physics and Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030, and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030, United States)

Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with those of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: −11.6%–23.8%) and 14.6% (range: −7.3%–27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: −6.8%–40.3%) and 13.1% (range: −1.5%–52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: −11.1%–20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
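
A compact way to see how such models are compared is a leave-one-out cross-validation over per-tumor volume time series; the sketch below contrasts a static reference, a linear trend, and a log-log (power-type) trend on synthetic data. It is not the authors' functional general linear model, and all data are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical relative tumor volumes: each row is one tumor, columns are treatment days,
# values are V_day / V_initial with a little measurement noise.
days = np.arange(1, 31)
rates = rng.uniform(0.010, 0.025, size=8)
volumes = np.exp(-np.outer(rates, days)) * (1 + rng.normal(0, 0.02, (8, days.size)))

def static_model(train):                   # reference: volume never changes
    return np.ones(days.size)

def linear_model(train):                   # average linear trend over training tumors
    coefs = np.mean([np.polyfit(days, v, 1) for v in train], axis=0)
    return np.polyval(coefs, days)

def power_model(train):                    # average log-log ("power fit") trend
    coefs = np.mean([np.polyfit(np.log(days), np.log(v), 1) for v in train], axis=0)
    return np.exp(np.polyval(coefs, np.log(days)))

def loocv_error(model):
    errs = []
    for i in range(volumes.shape[0]):      # leave one tumor out, fit on the rest
        train = np.delete(volumes, i, axis=0)
        errs.append(np.mean(np.abs(model(train) - volumes[i])))
    return float(np.mean(errs))

for name, model in [("static", static_model), ("linear", linear_model), ("power", power_model)]:
    print(f"{name:>7}: mean LOOCV error = {loocv_error(model):.3f}")
```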

A K-L turbulence mix model driven with a drag-buoyancy source term is tested in an Eulerian code in a series of basic unit-physics tests, as part of a mix validation milestone. The model and the closure coefficient values are derived in the work of Dimonte-Tipton [D-T] in Phys.Flu.18, 085101 (2006), and many of the test problems were reported there, where the mix model operated in Lagrange computations. The drag-buoyancy K-L mix model was implemented within the Eulerian code framework by A.J. Scannapieco. Mix model performance is evaluated in terms of mix width growth rates compared to experiments in select regimes. Results in our Eulerian code are presented for several unit-physics I-D test problems including the decay of homogeneous isotropic turbulence (HIT), Rayleigh-Taylor (RT) unstable mixing, shock amplification of initial turbulence, Richtmyer-Meshkov (RM) mixing in several single shock test cases and in comparison to two RM experiments including re-shock (Vetter-Sturtevant and Poggi, et.al.). Sensitivity to model parameters, to Atwood number, and to initial conditions are examined. Results here are in good agreement in some tests (HIT, RT) with the previous results reported for the mix model in the Lagrange calculations. The HIT turbulent decay agrees closely with analytic expectations, and the RT growth rate matches experimental values for the default values of the model coefficients proposed in [D-T]. Results for RM characterized with a power law growth rate differ from the previous mix model work but are still within the range for reasonable agreement with experiments. Sensitivity to IC values in the RM studies are examined; results are sensitive to initial values of L[t=O], which largely determines the RM mix layer growth rate, and generally differs from the IC values used in the RT studies. Result sensitivity to initial turbulence, K[t=O], is seen to be small but significant above a threshold value. Initial conditions can be adjusted so that single shock RM mix width results match experiments but we have not been able to obtain a good match for first shock and re-shock growth rates in the same experiment with a single set of parameters and Ie. Problematic issues with KH test problems are described. Resolution studies for an RM test problem show the K-L mix growth rate decreases as it converges at a supra-linear rate, and, convergence requires a fine grid (on the order of 10 microns). For comparison, a resolution study of a second mix model [Scannapieco and Cheng, Phys.Lett.A, 299(1),49, (2002)] acting on a two fluid interface problem was examined. The mix in this case was found to increase with grid resolution at low to moderate resolutions, but converged at comparably fine resolutions. In conclusion, these tests indicate that the Eulerian code K-L model, using the Dimonte Tipton default model closure coefficients, achieve reasonable results across many of the unit-physics experimental conditions. However, we were unable to obtain good matches simultaneously for shock and re-shock mix in a single experiment. Results are sensitive to initial conditions in the regimes under study, with different IC best suited to RT or RM mix. It is reasonable to expect IC sensitivity in extrapolating to high energy density regimes, or to experiments with deceleration due to arbitrary combinations of RT and RM. As a final comparison, the atomically generated mix fraction and the mix width were each compared for the K-L mix model and the Scannapieco model on an identical RM test problem. 
The Scannapieco mix fraction and width grow linearly. The K-L mix fraction and width grow with the same power law exponent, in contrast to expectations from analysis. In future work it is proposed to do more head-to-head comparisons between these two models and other mix model options on a full suite of physics test problems, such as interfacial deceleration due to pressure build-up during an idealized ICF implosion.

A computational study of the convective heat transfer in the weld pool during gas tungsten arc (GTA) welding of Type 304 stainless steel is presented. The solution of the transport equations is based on a control volume approach which utilizes directly the integral form of the governing equations. The computational model considers buoyancy, electromagnetic, and surface tension forces in the solution of convective heat transfer in the weld pool. In addition, the model treats the weld pool surface as a deformable free surface. The computational model includes weld metal vaporization and temperature dependent thermophysical properties. The results indicate that consideration of weld pool vaporization effects and temperature dependent thermophysical properties significantly influences the weld model predictions. Theoretical predictions of the weld pool surface temperature distributions and the cross-sectional weld pool size and shape were compared with corresponding experimental measurements. Comparison of the theoretically predicted and the experimentally obtained surface temperature profiles indicated agreement within ±8%. The predicted weld cross-section profiles were found to agree very well with actual weld cross-sections for the best theoretical models. 26 refs., 8 figs.

The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).

A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
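
For context, the hybridization thermodynamics mentioned above can be illustrated with a simple two-state duplex model; the sketch below (an illustrative fragment with hypothetical enthalpy, entropy and concentration values, not the patented method) estimates the fraction of a probe/target pair that is bound at a given temperature:

import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def duplex_fraction(dH, dS, temp_C, conc_M):
    """Two-state A + B <-> AB model with equal strand concentrations conc_M;
    dH in kcal/mol, dS in kcal/(mol*K)."""
    T = temp_C + 273.15
    dG = dH - T * dS
    K = math.exp(-dG / (R * T))                 # association constant, 1/M
    # duplex concentration x solves K*x**2 - (2*K*C0 + 1)*x + K*C0**2 = 0
    C0 = conc_M
    a, b, c = K, -(2 * K * C0 + 1), K * C0 ** 2
    x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)   # physical root, x <= C0
    return x / C0

# Example: hypothetical duplex with dH = -60 kcal/mol, dS = -0.15 kcal/(mol*K),
# 250 nM strands at 55 C
print(duplex_fraction(-60.0, -0.15, 55.0, 250e-9))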

SOLA-DM is a three-dimensional finite-difference computer code designed to model the dynamics of an incompressible fluid and the transport of discrete particulate material around obstacles impervious to flow. The numerical methods used in this code are described. SOLA-DM was used to predict the particle flux sampled by the 10-mm Dorr-Oliver Cyclone and MINIRAM dust monitors. Various geometric and dynamic variations of monitor and airflow combinations were tested. The code predictions are shown in computer-generated graphic plots.

We describe how to apply the transport method to compute inflationary observables in a broad range of multiple-field models. The method is efficient and encompasses scenarios with curved field-space metrics, violations of slow-roll conditions and turns of the trajectory in field space. It can be used for an arbitrary mass spectrum, including massive modes and models with quasi-single-field dynamics. In this note we focus on practical issues. It is accompanied by a Mathematica code which can be used to explore suitable models, or as a basis for further development.

On the Use of Computational Models for Wave Climate Assessment in Support of the Wave Energy Industry. Effective, economic extraction of ocean wave energy requires an intimate understanding of the ocean wave ...

This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) Identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) Assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained. At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user base and the experimental validation base were decaying away quickly.

Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm -- their lightweight, ephemeral nature and indirect communication -- make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.

Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy efficient air-conditioning systems which offer electricity savings as well as reduction in peak electric demand while providing improved individual zone setpoint control. One of the key advantages of VRF systems is minimal duct losses, which provide a significant reduction in energy use and duct space. However, there is limited data available to show their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, it is highly desirable to have more actual field performance data for these systems. An effort was made in this direction to monitor VRF system performance over an extended period of time in a US national lab test facility. Due to increasing demand by the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents the comparison of energy consumption as measured in the national lab and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created by using measured outdoor temperature and relative humidity at the test facility. Other inputs to the model included building construction, a VRF system model based on lab-measured performance, occupancy of the building, lighting/plug loads, and thermostat set-points. Infiltration model inputs were adjusted in the beginning to tune the computer model, and subsequent field measurements were then compared to the simulation results. Differences between the computer model results and actual field measurements are discussed. The computer-generated VRF performance closely resembled the field measurements.

As CMOS technology scales to the nanometer regime, reliability becomes a decisive consideration in the design methodology of nano-sized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of nano-electronic circuits. The process of computing reliability becomes troublesome and time consuming as the computational complexity builds up with circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming a necessity in designing modern logic integrated circuits. For this purpose, the paper first describes the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The Matlab-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nano-electronic circuits. Secondly, using the developed automated tool, the paper presents a comparative study of reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
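
To make the notion of circuit reliability concrete, the sketch below gives a plain Monte Carlo estimate for a small NAND-based circuit; it is not the PGM or BDEC formulation and not the Matlab tool described above, and the circuit and error rate are hypothetical:

import random

def nand(a, b):
    return 1 - (a & b)

def circuit(x, eps=0.0, rng=random):
    """Toy 2-input circuit of four NAND gates (an XOR when eps = 0); each gate
    output is flipped independently with probability eps."""
    def noisy(v):
        return v ^ (1 if rng.random() < eps else 0)
    a, b = x
    g1 = noisy(nand(a, b))
    g2 = noisy(nand(a, g1))
    g3 = noisy(nand(b, g1))
    return noisy(nand(g2, g3))

def reliability(eps, trials=100_000, rng=random.Random(0)):
    """Probability that the noisy circuit agrees with the fault-free circuit
    over uniformly random inputs."""
    ok = 0
    for _ in range(trials):
        x = (rng.randint(0, 1), rng.randint(0, 1))
        ok += circuit(x, eps, rng) == circuit(x, 0.0)
    return ok / trials

print(reliability(0.01))   # roughly 0.96 for four gates with eps = 0.01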

A Computational Model based on Gross' Emotion Regulation Theory (Tibor Bosse et al.). The work presents a computational model for emotion regulation, formalizing the model informally described by Gross (1998); it covers both quantitative aspects (such as levels of emotional response) and qualitative aspects (such as decisions to regulate one's emotion).

The paper demonstrates the use of computer models for parametric studies and optimization of surface and subsurface eddy current techniques. The study with a high-frequency probe investigates the effect of eddy current frequency and probe shape on the detectability of flaws in the steel substrate. The low-frequency sliding probe study addresses the effect of conductivity between the fastener and the hole, frequency, and coil separation distance on the detectability of flaws in subsurface layers.

A Floating-Point Processor for the Texas Instruments Model 980A Computer. A thesis by Hubert Eldie Brinkmann, Jr., May 1977; Major Subject: Electrical Engineering. ... part of the subtrahend has been two's complemented. Floating-Point Multiplication: after the characteristic and mantissa have been separated, the characteristics of the two numbers are added and the mantissas are multiplied to initiate ...

Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore wind energy resources, wind turbines must be mounted on floating platforms to be cost effective. Researchers at the National Renewable Energy Laboratory (NREL) are supporting that development with computer models that allow detailed analyses of such floating wind turbines.

Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied significantly from the average.

Computational Fluid Dynamics (CFD) codes are being increasingly used in the field of fire safety engineering. They provide, amongst other things, velocity, species and heat flux distributions throughout the computational ...

In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.

A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.

John Bongaarts' proximate determinants model of fertility has accounted for over 90 percent of variation in the total fertility rate (TFR) of primarily developing nations and historical populations. Recently, dramatically low fertility rates across...

A Determinant of Stirling Cycle Numbers Counts Unlabeled Acyclic Single-Source Automata. David Callan. 2007, revised 7 May 2008, accepted 18 May 2008. We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata, giving a formula for the number of acyclic automata with a given set of sources. Keywords: Stirling cycle number

Development and validation of constitutive models for polycrystalline materials subjected to high strain-rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service type conditions. Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be integrated fully with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experiment is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high strain-rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for development and validation of physically-based constitutive models. One aspect of the program involves the direct observation of specific mechanisms of micro-plasticity, as these indicate the boundary value problem that should be addressed. This focus on the pre-yield region in the quasi-static effort (the elasto-plastic transition) is also a tractable one from an experimental and modeling viewpoint. In addition, our approach will minimize the need to fit model parameters to experimental data to obtain convergence. These are critical steps to reach the primary objective of simulating and modeling material performance under extreme loading conditions. During this project, the following achievements have been obtained: 1. Twins have been observed to act as barriers to dislocation propagation and as sources of and sinks for dislocations. 2. Nucleation of deformation twins in nitrogen strengthened steel is observed to be closely associated with planar slip bands. The appearance of long twins through heavily dislocated microstructures occurs by short twins nucleating at one slip band, propagating through the dislocation-free region, and terminating at the next slip band. This process is repeated throughout the entire grain. 3. A tamped-laser ablation loading technique has been developed to introduce high strain rate, high stress and low strains. 4. Both dislocation slip and twinning are present in high strain-rate deformed zirconium, with the relative contribution of each mode to the deformation depending on the initial texture. 5. In situ IR thermal measurements have been used to show that the majority of plastic work is dissipated as heat even under conditions in which twinning is the dominant deformation mode.

This paper proposes a general quantum algorithm that can be applied to any classical computer program. Each computational step is written using reversible operators, but the operators remain classical in that the qubits take on values of only zero and one. This classical restriction on the quantum states allows the copying of qubits, a necessary requirement for doing general classical computation. Parallel processing of the quantum algorithm proceeds because of the superpositioning of qubits, the only aspect of the algorithm that is strictly quantum mechanical. Measurement of the system collapses the superposition, leaving only one state that can be observed. In most instances, the loss of information as a result of measurement would be unacceptable. But the linguistically motivated theory of Analogical Modeling (AM) proposes that the probabilistic nature of language behavior can be accurately modeled in terms of the simultaneous analysis of all possible contexts (referred to as supracontexts) providing one selects a single supracontext from those supracontexts that are homogeneous in behavior (namely, supracontexts that allow no increase in uncertainty). The amplitude for each homogeneous supracontext is proportional to its frequency of occurrence, with the result that the probability of selecting one particular supracontext to predict the behavior of the system is proportional to the square of its frequency.
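
The selection rule in the last sentence can be written down directly; the sketch below (an illustrative fragment with hypothetical frequencies, not the full AM procedure) picks a homogeneous supracontext with probability proportional to the square of its frequency of occurrence:

import random

def select_supracontext(frequencies, rng=random):
    """frequencies: dict mapping supracontext id -> occurrence count; selection
    probability is proportional to the squared count (amplitude squared)."""
    weights = {s: f * f for s, f in frequencies.items()}
    total = sum(weights.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for s, w in weights.items():
        acc += w
        if r <= acc:
            return s
    return s   # numerical safety net

# Example: three homogeneous supracontexts with frequencies 5, 3 and 1;
# they are chosen with probabilities 25/35, 9/35 and 1/35 respectively.
print(select_supracontext({"A": 5, "B": 3, "C": 1}))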

We develop a dynamical coupled-channels model of K^- p reactions, aiming at extracting the parameters associated with hyperon resonances and providing the elementary antikaon-nucleon scattering amplitudes that can be used for investigating various phenomena in the strangeness sector such as the production of hypernuclei from kaon-nucleus reactions. The model consists of (a) meson-baryon (MB) potentials v_{M'B',MB} derived from the phenomenological SU(3) Lagrangian, and (b) vertex interactions Gamma_{MB,Y*} for describing the decays of the bare excited hyperon states (Y*) into MB states. The model is defined in a channel space spanned by the two-body barK N, pi Sigma, pi Lambda, eta Lambda, and K Xi states and also the three-body pi pi Lambda and pi barK N states that have the resonant pi Sigma* and barK* N components, respectively. The resulting coupled-channels scattering equations satisfy the multichannel unitarity conditions and account for the dynamical effects arising from the off-shell rescattering processes. The model parameters are determined by fitting the available data of the unpolarized and polarized observables of the K^- p --> barK N, pi Sigma, pi Lambda, eta Lambda, K Xi reactions in the energy region from the threshold to invariant mass W=2.1 GeV. Two models with equally good chi^2 fits to the data have been constructed. The partial-wave amplitudes obtained from the constructed models are compared with the results from a recent partial-wave analysis by the Kent State University group. We discuss the differences between these three analysis results. Our results at energies near the threshold suggest that the higher partial waves should be treated on the same footing as the S wave if one wants to understand the nature of Lambda(1405)1/2^- using the data below the barK N threshold, as will be provided by the J-PARC E31 experiment.

General purpose languages, such as C++, permit the construction of various high level abstractions to hide redundant, low level details and accelerate programming productivity. Example abstractions include functions, data structures, classes, templates and so on. However, the use of abstractions significantly impedes static code analyses and optimizations, including parallelization, applied to the abstractions' complex implementations. As a result, there is a common perception that performance is inversely proportional to the level of abstraction. On the other hand, programming large scale, possibly heterogeneous high-performance computing systems is notoriously difficult, and programmers are less likely to abandon the help from high level abstractions when solving real-world, complex problems. Therefore, the need for programming models balancing both programming productivity and execution performance has reached a new level of criticality. We are exploring a novel abstraction-friendly programming model in order to support high productivity and high performance computing. We believe that standard or domain-specific semantics associated with high level abstractions can be exploited to aid compiler analyses and optimizations, thus helping achieve high performance without losing high productivity. We encode representative abstractions and their useful semantics into an abstraction specification file. In the meantime, an accessible, source-to-source compiler infrastructure (the ROSE compiler) is used to facilitate recognizing high level abstractions and utilizing their semantics for more optimization opportunities. Our initial work has shown that recognizing abstractions and knowing their semantics within a compiler can dramatically extend the applicability of existing optimizations, including automatic parallelization. Moreover, a new set of optimizations has become possible within an abstraction-friendly and semantics-aware programming model. In the future, we will apply our programming model to more large scale applications. In particular, we plan to classify and formalize more high level abstractions and semantics which are relevant to high performance computing. We will also investigate better ways to allow language designers, library developers and programmers to communicate abstraction and semantics information with each other.

Computational fluid dynamics (CFD) models were developed to support the siting and design of a behavioral guidance system (BGS) structure in The Dalles Dam (TDA) forebay on the Columbia River. The work was conducted by Pacific Northwest National Laboratory for the U.S. Army Corps of Engineers, Portland District (CENWP). The CFD results were an invaluable tool for the analysis, both from a Regional and Agency perspective (for the fish passage evaluation) and a CENWP perspective (supporting the BGS design and location). The new CFD model (TDA forebay model) included the latest bathymetry (surveyed in 1999) and a detailed representation of the engineered structures (spillway, powerhouse main, fish, and service units). The TDA forebay model was designed and developed in a way that future studies could easily modify or, to a large extent, reuse large portions of the existing mesh. This study resulted in these key findings: (1) The TDA forebay model matched well with field-measured velocity data. (2) The TDA forebay model matched observations made at the 1:80 general physical model of the TDA forebay. (3) During the course of this study, the methodology typically used by CENWP to contour topographic data was shown to be inaccurate when applied to widely-spaced transect data. Contouring methodologies need to be revisited--especially before such things as modifying the bathymetry in the 1:80 general physical model are undertaken. Future alignments can be evaluated with the model staying largely intact. The next round of analysis will need to address fish passage demands and navigation concerns. CFD models can be used to identify the most promising locations and to provide quantified metrics for biological, hydraulic, and navigation criteria. The most promising locations should then be further evaluated in the 1:80 general physical model.

Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance - such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.
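
As a rough illustration of the second step, the sketch below superposes Hertzian-dipole far-field contributions from known segment currents; the segment geometry, current taper and observation point are hypothetical, and the EMTP-based procedure for obtaining the currents is not reproduced here:

import numpy as np

ETA0 = 376.73        # free-space wave impedance, ohms
C0 = 299792458.0     # speed of light, m/s

def radiated_field(seg_centers, seg_currents, dl, freq_hz, obs_point):
    """Sum of Hertzian-dipole far fields E_theta = j*eta*k*I*dl*sin(theta)
    *exp(-j*k*r)/(4*pi*r); segments are assumed z-directed."""
    k = 2 * np.pi * freq_hz / C0
    E = 0.0 + 0.0j
    for c, I in zip(seg_centers, seg_currents):
        rvec = np.asarray(obs_point, float) - np.asarray(c, float)
        r = np.linalg.norm(rvec)
        sin_t = np.hypot(rvec[0], rvec[1]) / r
        E += 1j * ETA0 * k * I * dl * sin_t * np.exp(-1j * k * r) / (4 * np.pi * r)
    return E

# Example: a 100 m run of line (taken along z) at 10 MHz with a tapered, phased current
z = np.arange(0.0, 100.0, 1.0)
centers = [(0.0, 0.0, zi) for zi in z]
currents = 0.1 * np.exp(-z / 200.0) * np.exp(-1j * 2 * np.pi * z / 30.0)
print(abs(radiated_field(centers, currents, dl=1.0, freq_hz=10e6, obs_point=(30.0, 0.0, 50.0))))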

The visualization of large, remotely located data sets necessitates the development of a distributed computing pipeline in order to reduce the data, in stages, to a manageable size. The required baseline infrastructure for launching such a distributed pipeline is becoming available, but few services support even marginally optimal resource selection and partitioning of the data analysis workflow. We explore a methodology for building a model of overall application performance using a composition of the analytic models of individual components that comprise the pipeline. The analytic models are shown to be accurate on a testbed of distributed heterogeneous systems. The prediction methodology will form the foundation of a more robust resource management service for future Grid-based visualization applications.
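
The idea of composing per-component analytic models can be illustrated with a toy read-filter-render-ship pipeline; the host names and throughput figures below are hypothetical and the cost models are placeholders, not the paper's models:

from itertools import product

def stage_time(data_mb, rate_mb_s, latency_s=0.0):
    """Simple analytic cost model for one pipeline component."""
    return latency_s + data_mb / rate_mb_s

def pipeline_time(data_mb, reduction, read_rate, cpu_rate, gpu_rate, net_rate):
    """End-to-end prediction: read, filter (reduces data), render, ship the image."""
    t_read   = stage_time(data_mb, read_rate)
    t_filter = stage_time(data_mb, cpu_rate)
    t_render = stage_time(data_mb * reduction, gpu_rate)
    t_ship   = stage_time(data_mb * reduction * 0.01, net_rate, latency_s=0.05)
    return t_read + t_filter + t_render + t_ship

# Hypothetical candidate hosts (rates in MB/s) for the filter and render stages
cpu_hosts = {"clusterA": 800.0, "clusterB": 1500.0}
gpu_hosts = {"vizA": 2000.0, "vizB": 900.0}
best = min(((c, g, pipeline_time(4000.0, 0.1, 500.0, cr, gr, 100.0))
            for (c, cr), (g, gr) in product(cpu_hosts.items(), gpu_hosts.items())),
           key=lambda t: t[2])
print(best)   # (filter host, render host, predicted end-to-end seconds)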

Pulverized coal injection (PCI) into the blast furnace (BF) has been recognized as an effective way to decrease the coke and total energy consumption along with minimization of environmental impacts. However, increasing the amount of coal injected into the BF is currently limited by the lack of knowledge of some issues related to the process. It is therefore important to understand the complex physical and chemical phenomena in the PCI process. Due to the difficulty of attaining true BF measurements, computational fluid dynamics (CFD) modeling has been identified as a useful technology to provide such knowledge. CFD simulation is powerful for providing detailed information on flow properties and performing parametric studies for process design and optimization. In this project, comprehensive 3-D CFD models have been developed to simulate the PCI process under actual furnace conditions. These models provide raceway size and flow property distributions. The results have provided guidance for optimizing the PCI process.

Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. But, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.

From data analysis to network modeling, with applications in systems biology. Fabian J. Theis, Computational Modeling in Biology, Institute of Bioinformatics and Systems Biology. ... at detailed models of the system of interest. Our application focus is biological networks, namely gene ...

The sensors and controls research group at the United States Department of Energy (DOE) National Energy Technology Laboratory (NETL) is continuing to develop the Combustion Control and Diagnostics Sensor (CCADS) for gas turbine applications. CCADS uses the electrical conduction of the charged species generated during the combustion process to detect combustion instabilities and monitor equivalence ratio. As part of this effort, combustion models are being developed which include the interaction between the electric field and the transport of charged species. The primary combustion process is computed using a flame wrinkling model (Weller et al. 1998) which is a component of the OpenFOAM toolkit (Jasak et al. 2004). A sub-model for the transport of charged species is attached to this model. The formulation of the charged-species model is similar to that applied by Penderson and Brown (1993) for the simulation of laminar flames. The sub-model consists of an additional flux due to the electric field (drift flux) added to the equations for the charged species concentrations, and the solution of the electric potential from the resolved charge density. The subgrid interactions between the electric field and charged species transport have been neglected. Using the above procedure, numerical simulations are performed and the results compared with several recent CCADS experiments.
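
A one-dimensional caricature of the drift-flux idea is sketched below; it is not the OpenFOAM sub-model described above, and the mobility, grid and density values are hypothetical. A positive-ion density is advected by a drift velocity mu*E, and the field is recomputed each step from the resolved charge density by a Poisson solve:

import numpy as np

def poisson_field(rho, dx, eps0=8.854e-12):
    """Solve d2(phi)/dx2 = -rho/eps0 with phi = 0 at both ends; return E = -dphi/dx."""
    m = len(rho)
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / dx ** 2
    phi = np.linalg.solve(A, -rho / eps0)
    return -np.gradient(phi, dx)

def step(n_ion, dx, dt, mu=2e-4, q=1.602e-19):
    """One explicit upwind step of dn/dt + d(n*mu*E)/dx = 0."""
    E = poisson_field(q * n_ion, dx)
    v = mu * E
    flux = np.where(v > 0, v * n_ion, v * np.roll(n_ion, -1))
    n_new = n_ion - dt / dx * (flux - np.roll(flux, 1))
    return np.clip(n_new, 0.0, None)

# Example: a Gaussian pocket of ions spreading under its own space-charge field
x = np.linspace(0.0, 0.01, 101)
dx = x[1] - x[0]
n_ion = 1e16 * np.exp(-((x - 0.005) / 0.001) ** 2)
for _ in range(200):
    n_ion = step(n_ion, dx, dt=1e-7)
print(n_ion.max())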

A computational fluid dynamics (CFD) model was used in an investigation into the suppression of a surface vortex that forms at the south-most spill bay at The Dalles Project. The CFD work complemented work at the prototype and the reduced-scale physical models. The CFD model was based on a model developed for other work in the forebay but had additional resolution added near the spillway. Vortex suppression devices (VSDs) were to be placed between pier noses and/or in the bulkhead slot of the spillway bays. The simulations in this study showed that placing VSD structures, or a combination of structures, to suppress the vortex would still result in near-surface flows being entrained in a vortex near the downstream spillwall. These results were supported by physical model and prototype studies. However, the consensus of the fish biologists at the physical model was that the fish would most likely move north, and that if a fish went under the VSD it would immediately exit the forebay through the tainter gate and not get trapped between VSDs, or between the VSDs and the tainter gate, provided the VSDs were deep enough.

This report documents a computable general equilibrium (CGE) model of the economy of Haiti, emphasizing energy use in agriculture. CGE models compare favorably with econometric models for developing countries in terms of their ability to take advantage of available data. The model of Haiti contains ten production sectors: manufacturing, services, transportation, electricity, rice, coffee, sugar cane, sugar refining, general agriculture, and fuelwood and charcoal. All production functions use functional forms which permit factor substitution. Consumption is specified for three income categories of consumers and a government sector with a linear expenditure system (LES) of demand equations. The economy exports four categories of products and imports six. Balanced trade and capital accounts are required for equilibrium. Total sectoral allocations of land, labor and capital are constrained to equal the quantities of these inputs in the Haitian economy as of the early 1980s. The model can be used to study the consequences of fiscal and trade policies and sectorally oriented productivity improvement policies. Guidance is offered regarding how to use the model to study economic growth and technological change. Limitations of the model are also pointed out, as well as user strategies which can lessen or work around some of those limitations. 19 refs.
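
The LES demand equations referred to above have a simple closed form, x_i = gamma_i + (beta_i/p_i)*(Y - sum_j p_j*gamma_j); the sketch below evaluates them for one income class with hypothetical prices, subsistence quantities and marginal budget shares (illustrative values only, not data from the Haiti model):

def les_demands(prices, gammas, betas, income):
    """Linear expenditure system: gamma_i are subsistence quantities, beta_i are
    marginal budget shares (summing to one), income is total expenditure."""
    assert abs(sum(betas) - 1.0) < 1e-9, "marginal budget shares must sum to one"
    supernumerary = income - sum(p * g for p, g in zip(prices, gammas))
    return [g + b * supernumerary / p for p, g, b in zip(prices, gammas, betas)]

# Example: three goods; the resulting demands exactly exhaust the income of 50.0
print(les_demands(prices=[1.0, 2.0, 0.5],
                  gammas=[10.0, 2.0, 4.0],
                  betas=[0.5, 0.3, 0.2],
                  income=50.0))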

The computer programs in the package are based on the material presented in the document, Control of Open Fugitive Dust Sources, EPA-450/3-88-008. The programs on these diskettes serve two purposes. Their primary purpose is to facilitate the process of data entry, allowing the user not only to enter and verify the data which he/she possesses, but also to access additional data which might not be readily available. The second purpose is to calculate emission rates for the particular source category selected, using the data previously entered and verified. Software Description: The program is written in the BASIC programming language for implementation on an IBM-PC/AT and compatible machines using the DOS 2.X or higher operating system. A hard disk with a 5 1/4 inch disk drive or two disk drives, and a wide carriage printer (132-character) or a printer capable of printing text in condensed mode are required. A text editor or word processing program capable of manipulating ASCII or DOS text files is optional.

This report provides computational results of an extensive study to examine the following: (1) infinite media neutron-multiplication factors; (2) material bucklings; (3) bounding infinite media critical concentrations; (4) bounding finite critical dimensions of water-reflected and homogeneously water-moderated one-dimensional systems (i.e., spheres, cylinders of infinite length, and slabs that are infinite in two dimensions) that were comprised of various proportions and densities of plutonium oxides and uranium oxides, each having various isotopic compositions; and (5) sensitivity coefficients of delta k-eff with respect to critical geometry delta dimensions were determined for each of the three geometries that were studied. The study was undertaken to support the development of a standard that is sponsored by the International Standards Organization (ISO) under Technical Committee 85, Nuclear Energy (TC 85)--Subcommittee 5, Nuclear Fuel Technology (SC 5)--Working Group 8, Standardization of Calculations, Procedures and Practices Related to Criticality Safety (WG 8). The designation and title of the ISO TC 85/SC 5/WG 8 standard working draft is WD 14941, ''Nuclear energy--Fissile materials--Nuclear criticality control and safety of plutonium-uranium oxide fuel mixtures outside of reactors.'' Various ISO member participants performed similar computational studies using their indigenous computational codes to provide comparative results for analysis in the development of the standard.

A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.

and NCAR in the development of a comprehensive, earth systems model. This model incorporates the most-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well. Our collaborators in climate research include the National Center

It was recently demonstrated that contact binaries occur in globular clusters (GCs) only immediately below turn-off point and in the region of blue straggler stars (BSs). In addition, observations indicate that at least a significant fraction of BSs in these clusters was formed by the binary mass-transfer mechanism. The aim of our present investigation is to obtain and analyze a set of evolutionary models of cool, close detached binaries with a low metal abundance, which are characteristic of GC. We computed the evolution of 975 models of initially detached, cool close binaries with different initial parameters. The models include mass exchange between components as well as mass and angular momentum loss due to the magnetized winds for very low-metallicity binaries with Z = 0.001. The models are interpreted in the context of existing data on contact binary and blue straggler members of GCs. The model parameters agree well with the observed positions of the GC contact binaries in the Hertzsprung-Russell diagra...

The calculation of airflows is of great importance for detailed building thermal simulation computer codes, since these airflows most frequently constitute an important thermal coupling between the building and the outside on one hand, and between the different thermal zones on the other. The driving effects of air movement, which are the wind and thermal buoyancy, are briefly outlined, and we look closely at their coupling in the case of buildings by exploring the difficulties associated with large openings. Some numerical problems tied to solving the resulting non-linear system are also covered. As part of a detailed simulation software package (CODYRUN), the numerical implementation of this airflow model is explained, with emphasis on the data organization and processing that allow the calculation of the airflows. Comparisons are then made between the model results and, on one hand, analytical expressions and, on the other hand, experimental measurements in the case of a collective dwelling.
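
A minimal example of the kind of non-linear airflow balance being solved is sketched below (an illustrative fragment, not the CODYRUN implementation; the opening areas and driving pressures are hypothetical): a single zone with several openings, each following an orifice law, with the interior pressure found by bisection so that the mass flows balance.

import math

RHO = 1.2  # air density, kg/m3

def mdot(dP, area, cd=0.6, rho=RHO):
    """Orifice law: m_dot = sign(dP)*Cd*A*rho*sqrt(2*|dP|/rho), dP = P_outside - P_zone."""
    return math.copysign(cd * area * rho * math.sqrt(2.0 * abs(dP) / rho), dP)

def zone_pressure(openings, p_lo=-50.0, p_hi=50.0, tol=1e-8):
    """openings: list of (outside_pressure_Pa, area_m2); bisection on the net mass flow."""
    def net(P):
        return sum(mdot(po - P, a) for po, a in openings)
    for _ in range(200):
        mid = 0.5 * (p_lo + p_hi)
        if net(mid) > 0:
            p_lo = mid        # net inflow: the interior pressure must rise
        else:
            p_hi = mid
        if p_hi - p_lo < tol:
            break
    return 0.5 * (p_lo + p_hi)

# Example: windward opening at +5 Pa, leeward at -2 Pa, stack-driven vent at +1 Pa
openings = [(5.0, 0.5), (-2.0, 0.8), (1.0, 0.2)]
P = zone_pressure(openings)
print(P, [mdot(po - P, a) for po, a in openings])   # flows sum to approximately zero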

A computer model was developed to calculate residual radioactive material guidelines for the US Department of Energy (DOE). This model, called RESRAD, can be run on an IBM or IBM-compatible microcomputer. Seven potential exposure pathways from contaminated soil are analyzed, including external radiation exposure and internal radiation exposure from inhalation and food ingestion. The RESRAD code has been applied to several DOE sites to derive soil cleanup guidelines. The experience gained indicates that a comprehensive set of site-specific hydrogeologic and geochemical input parameters must be used for a realistic pathway analysis. The RESRAD code is a useful tool; it is easy to run and very user-friendly. 6 refs., 12 figs.

Mutant ubiquitin found in neurodegenerative diseases has been thought to hamper activation of the transcription factor nuclear factor-kappa B (NF-{kappa}B) by inhibiting the ubiquitin-proteasome system (UPS). It has been reported that ubiquitin is also involved in signal transduction in a UPS-independent manner. We used a modeling and simulation approach to delineate the roles of ubiquitin in NF-{kappa}B activation. Inhibition of the proteasome complex increased maximal activation of IKK, mainly by decreasing the UPS efficiency. On the contrary, mutant ubiquitin decreased the maximal activity of IKK. Computational modeling showed that the inhibition effect of mutant ubiquitin is mainly attributed to decreased activity of the UPS-independent function of ubiquitin. Collectively, our results suggest that mutant ubiquitin affects NF-{kappa}B activation in a UPS-independent manner.

A potential hazardous waste site investigation was conducted by the Environmental Protection Agency to determine whether ground water, surface water, or area soils and sediments were contaminated as a result of waste water discharges or improper solid waste disposal practices of a pesticide manufacturer. One of the compounds discharged into the environment was 1,1,1,2-tetrachloro-2,2-bis(p-chlorophenyl)ethane, commonly referred to as tetrachloro-DDT. Unlike a great many of the DDT analogs, tetrachloro-DDT has come under only limited scrutiny, mainly because it was dismissed as having poor insecticidal properties relative to DDT and other analogs. Its metabolism in ingesting organisms and its degradative pathways in the environment have consequently been left uncertain. This model ecosystem study was undertaken to examine the unanswered questions concerning the metabolic and environmental fate of tetrachloro-DDT. The relevance of this study pertains to disposal practices of pesticide manufacturers who use tetrachloro-DDT as a product precursor.

Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information, however in some cases experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates which can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time.
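
The qualitative point about array size can be reproduced with a toy version of such a model; the sketch below (illustrative noise levels and class separation, not the report's program) adds a noise term common to all sensors plus independent per-sensor noise and estimates misclassification rates for a simple mean-difference classifier:

import numpy as np

def simulate(n_sensors, n_train, n_test=5000, sep=0.5,
             sigma_common=0.5, sigma_indep=1.0, rng=np.random.default_rng(0)):
    """Misclassification rate of a mean-difference classifier trained on
    n_train samples per class from an n_sensors array."""
    def draw(cls, m):
        mean = sep * cls                                    # class 1 shifts every sensor by sep
        common = rng.normal(0, sigma_common, (m, 1))        # error shared by all sensors
        indep = rng.normal(0, sigma_indep, (m, n_sensors))  # independent per-sensor error
        return mean + common + indep
    Xtr = np.vstack([draw(0, n_train), draw(1, n_train)])
    ytr = np.r_[np.zeros(n_train), np.ones(n_train)]
    w = Xtr[ytr == 1].mean(0) - Xtr[ytr == 0].mean(0)
    thr = 0.5 * (Xtr[ytr == 1].mean(0) + Xtr[ytr == 0].mean(0)) @ w
    Xte = np.vstack([draw(0, n_test), draw(1, n_test)])
    yte = np.r_[np.zeros(n_test), np.ones(n_test)]
    return np.mean((Xte @ w > thr) != yte)

for n in (1, 4, 16, 64):
    print(n, simulate(n_sensors=n, n_train=20))
# The error falls with array size at first and then flattens: the common error
# term is not averaged away, which is one way larger arrays can stop paying off.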

The Salt Disposition Integration (SDI) portfolio of projects provides the infrastructure within existing Liquid Waste facilities to support the startup and long term operation of the Salt Waste Processing Facility (SWPF). Within SDI, the Blend and Feed Project will equip existing waste tanks in the Tank Farms to serve as Blend Tanks where 300,000-800,000 gallons of salt solution will be blended in 1.3 million gallon tanks and qualified for use as feedstock for SWPF. Blending requires the miscible salt solutions from potentially multiple source tanks per batch to be well mixed without disturbing settled sludge solids that may be present in a Blend Tank. Disturbing solids may be problematic both from a feed quality perspective as well as from a process safety perspective where hydrogen release from the sludge is a potential flammability concern. To develop the necessary technical basis for the design and operation of blending equipment, Savannah River National Laboratory (SRNL) completed scaled blending and transfer pump tests and computational fluid dynamics (CFD) modeling. A 94 inch diameter pilot-scale blending tank, including tank internals such as the blending pump, transfer pump, removable cooling coils, and center column, were used in this research. The test tank represents a 1/10.85 scaled version of an 85 foot diameter, Type IIIA, nuclear waste tank that may be typical of Blend Tanks used in SDI. Specifically, Tank 50 was selected as the tank to be modeled per the SRR, Project Engineering Manager. SRNL blending tests investigated various fixed position, non-rotating, dual nozzle pump designs, including a blending pump model provided by the blend pump vendor, Curtiss Wright (CW). Primary research goals were to assess blending times and to evaluate incipient sludge disturbance for waste tanks. Incipient sludge disturbance was defined by SRR and SRNL as minor blending of settled sludge from the tank bottom into suspension due to blending pump operation, where the sludge level was shown to remain constant. To experimentally model the sludge layer, a very thin, pourable, sludge simulant was conservatively used for all testing. To experimentally model the liquid, supernate layer above the sludge in waste tanks, two salt solution simulants were used, which provided a bounding range of supernate properties. One solution was water (H{sub 2}O + NaOH), and the other was an inhibited, more viscous salt solution. The research performed and data obtained significantly advances the understanding of fluid mechanics, mixing theory and CFD modeling for nuclear waste tanks by benchmarking CFD results to actual experimental data. This research significantly bridges the gap between previous CFD models and actual field experiences in real waste tanks. A finding of the 2009, DOE, Slurry Retrieval, Pipeline Transport and Plugging, and Mixing Workshop was that CFD models were inadequate to assess blending processes in nuclear waste tanks. One recommendation from that Workshop was that a validation, or bench marking program be performed for CFD modeling versus experiment. This research provided experimental data to validate and correct CFD models as they apply to mixing and blending in nuclear waste tanks. Extensive SDI research was a significant step toward bench marking and applying CFD modeling. 
This research showed that CFD models not only agreed with experiment, but demonstrated that the large variance in actual experimental data accounts for misunderstood discrepancies between CFD models and experiments. Having documented this finding, SRNL was able to provide correction factors to be used with CFD models to statistically bound full scale CFD results. Through the use of pilot scale tests performed for both types of pumps and available engineering literature, SRNL demonstrated how to effectively apply CFD results to salt batch mixing in full scale waste tanks. In other words, CFD models were in error prior to development of experimental correction factors determined during this research, which provided a technique to use CFD models fo

The Potts model is a powerful tool to uncover community structure in complex networks. Here, we propose a new framework to reveal the optimal number of communities and the stability of network structure by quantitatively analyzing the dynamics of the Potts model. Specifically, we model the Potts community-detection procedure as a Markov process, which has a clear mathematical explanation. We then show that the locally uniform behavior of spin values across multiple timescales in the representation of the Markov variables naturally reveals the network's hierarchical community structure. In addition, critical topological information regarding the multivariate spin configuration can also be inferred from the spectral signatures of the Markov process. Finally, an algorithm is developed to determine fuzzy communities based on the optimal number of communities and the stability across multiple timescales. The effectiveness and efficiency of our algorithm are theoretically analyzed as well as experimentally validate...

Lignin is a heterogeneous alkyl-aromatic polymer that constitutes up to 30% of plant cell walls, and is used for water transport, structure, and defense. The highly irregular and heterogeneous structure of lignin presents a major obstacle in the development of strategies for its deconstruction and upgrading. Here we present mechanistic studies of the acid-catalyzed cleavage of lignin aryl-ether linkages, combining both experimental studies and quantum chemical calculations. Quantum mechanical calculations provide a detailed interpretation of reaction mechanisms including possible intermediates and transition states. Solvent effects on the hydrolysis reactions were incorporated through the use of a conductor-like polarizable continuum model (CPCM) and with cluster models including explicit water molecules in the first solvation shell. Reaction pathways were computed for four lignin model dimers including 2-phenoxy-phenylethanol (PPE), 1-(para-hydroxyphenyl)-2-phenoxy-ethanol (HPPE), 2-phenoxy-phenyl-1,3-propanediol (PPPD), and 1-(para-hydroxyphenyl)-2-phenoxy-1,3-propanediol (HPPPD). Lignin model dimers with a para-hydroxyphenyl ether (HPPE and HPPPD) show substantial differences in reactivity relative to the phenyl ether compound (PPE and PPPD) which have been clarified theoretically and experimentally. The significance of these results for acid deconstruction of lignin in plant cell walls will be discussed.

MMR: The Massive Model Rendering System. Scalability: the system should require a human set-up that is at most sublinear in the number of ... real-time rendering system. [Figure 1: a view of a 15-million polygon model of a coal-fired power plant.] The Challenge. Overview: Computer-aided design (CAD) applications ...

An overview of the computer code TOPAZ (Transient-One-Dimensional Pipe Flow Analyzer) is presented. TOPAZ models the flow of compressible and incompressible fluids through complex and arbitrary arrangements of pipes, valves, flow branches and vessels. Heat transfer to and from the fluid containment structures (i.e. vessel and pipe walls) can also be modeled. This document includes discussions of the fluid flow equations and containment heat conduction equations. The modeling philosophy, numerical integration technique, code architecture, and methods for generating the computational mesh are also discussed.

Service and Utility Oriented Distributed Computing Systems: Challenges and Opportunities. ... networks have emerged as popular platforms for the next generation of parallel and distributed computing. Utility computing is envisioned to be the next generation of IT evolution that depicts how computing needs ...

Sandro Spina and Gordon Pace, Dept. of Computer Science and A.I., New Computing Building, University of Malta, Malta (sandro.spina@um.edu.mt). ... techniques. CSAW '06, CSAI Department, University of Malta. Processes can be described using some formal ...

An evaluation of thermodynamic aspects of hot corrosion of the superalloys Haynes 242 and Hastelloy N in the eutectic mixtures of KF and ZrF4 is carried out for development of the Advanced High Temperature Reactor (AHTR). This work models the behavior of several superalloys, potential candidates for the AHTR, using a computational thermodynamics tool (ThermoCalc), leading to the development of a thermodynamic description of the molten salt eutectic mixtures and, on that basis, a mechanistic prediction of hot corrosion. The results from these studies indicated that the principal mechanism of hot corrosion was associated with chromium leaching for all of the superalloys described above. However, Hastelloy N displayed the best hot corrosion performance. This was not surprising given that it was developed originally to withstand the harsh conditions of a molten salt environment. The results obtained in this study provided confidence in the employed methods of computational thermodynamics and could be further used for future alloy design efforts. Finally, several potential solutions to mitigate hot corrosion were proposed for further exploration, including coating development and controlled scaling of intermediate compounds in the KF-ZrF4 system.

The materials included in the Airborne Radiological Computer System, Model-II (ARCS-II) were assembled with several considerations in mind. First, the system was designed to measure and record airborne gamma radiation levels together with the corresponding latitude and longitude coordinates, and to provide a first overview of the extent and severity of an accident's impact. Second, the portable system had to be light and durable enough to be mounted in an aircraft, ground vehicle, or watercraft. Third, the system had to control the collection and storage of the data and provide a real-time display of the data collection results to the operator. The notebook computer and color graphics printer components of the system are used only for analyzing and plotting the data. In essence, the provided equipment comprises an acquisition system and an analysis system. Data can be transferred from the acquisition system to the analysis system at the end of data collection or at some other agreeable time.

In recent years, there have been several attempts to study the effect of critical variables on welding by computational modeling. It is widely recognized that temperature distributions and weld pool shapes are key to quality weldments. It would be very useful to obtain relevant information about the thermal cycle experienced by the weld metal, the size and shape of the weld pool, local solidification rates, temperature distributions in the heat-affected zone (HAZ), and the associated phase transformations. The solution of moving boundary problems, such as weld pool fluid flow and heat transfer, that involve melting and/or solidification is inherently difficult because the location of the solid-liquid interface is not known a priori and must be obtained as part of the solution. Because of the non-linearity of the governing equations, exact analytical solutions can be obtained only for a limited number of idealized cases. Therefore, considerable interest has been directed toward the use of numerical methods to obtain time-dependent solutions for theoretical models that describe the welding process. Numerical methods can be employed to predict the transient development of the weld pool as an integral part of the overall heat transfer conditions. The structure of the model allows each phenomenon to be addressed individually, thereby giving more insight into their competing interactions. 19 refs., 6 figs., 1 tab.
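
To make the notion of a time-dependent numerical solution concrete, the following is a minimal sketch of an explicit finite-difference solution of the 1-D transient heat-conduction equation. It is purely illustrative, not the models reviewed above: real weld-pool models are three-dimensional and include fluid flow, phase change, and a moving heat source, and every parameter value below is assumed.

```python
import numpy as np

# Minimal 1-D explicit finite-difference heat conduction sketch (illustrative only).
alpha = 5e-6               # thermal diffusivity, m^2/s (assumed)
L, nx = 0.10, 101          # domain length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # time step satisfying the explicit stability limit
T = np.full(nx, 300.0)     # initial temperature, K
T[nx // 2] = 2000.0        # crude stand-in for a localized heat input

for step in range(2000):
    # forward-time, centered-space update of interior points
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[0], T[-1] = 300.0, 300.0   # fixed-temperature boundaries

print("peak temperature after %.2f s: %.1f K" % (2000 * dt, T.max()))
```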

This report has demonstrated techniques that can be used to construct solutions to the 3-D electromagnetic inverse problem using full wave equation modeling. To this point, great progress has been made in developing an inverse solution using the method of conjugate gradients, which employs a 3-D finite difference solver to construct model sensitivities and predicted data. The forward modeling code has been developed to incorporate absorbing boundary conditions for high frequency solutions (radar), as well as complex electrical properties, including electrical conductivity, dielectric permittivity and magnetic permeability. In addition, both the forward and inverse codes have been ported to a massively parallel computer architecture, which allows for more realistic solutions than can be achieved with serial machines. While the inversion code has been demonstrated on field data collected at the Richmond field site, techniques for appraising the quality of the reconstructions still need to be developed. Here it is suggested that, rather than employing direct matrix inversion to construct the model covariance matrix, which would be impossible because of the size of the problem, one can linearize about the 3-D model obtained from the inversion and use Monte Carlo simulations to construct it. Using these appraisal and construction tools, it is now necessary to demonstrate 3-D inversion for a variety of EM data sets that span the frequency range from induction sounding to radar: below 100 kHz to 100 MHz. Appraised 3-D images of the earth's electrical properties can provide researchers opportunities to infer the flow paths, flow rates and perhaps the chemistry of fluids in geologic media. It also offers a means to study the frequency-dependent behavior of these properties in situ. This is of significant relevance to the Department of Energy, being paramount to the characterization and monitoring of environmental waste sites and to oil and gas exploration.
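
As an illustration of the conjugate-gradient machinery referred to above (and not the report's actual implementation, which applies a 3-D finite-difference EM solver and its adjoint), here is a minimal sketch of conjugate gradients on the normal equations for a generic linear forward operator; the toy operator and data are random stand-ins.

```python
import numpy as np

def cgnr(forward, adjoint, data, m0, n_iter=50):
    """Conjugate gradients on the normal equations A^T A m = A^T d.
    forward/adjoint stand in for a forward solver and its adjoint;
    here they are plain matrix products (assumed)."""
    m = m0.copy()
    r = data - forward(m)          # data residual
    g = adjoint(r)                 # gradient of the least-squares misfit
    p = g.copy()
    for _ in range(n_iter):
        q = forward(p)
        alpha = (g @ g) / (q @ q)
        m += alpha * p
        r -= alpha * q
        g_new = adjoint(r)
        beta = (g_new @ g_new) / (g @ g)
        p = g_new + beta * p
        g = g_new
    return m

# toy example with a random linear operator
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40))
m_true = rng.normal(size=40)
d = A @ m_true
m_est = cgnr(lambda m: A @ m, lambda r: A.T @ r, d, np.zeros(40))
print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```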

The problem of successfully simulating ionic fluids at low temperature and low density states is well known in the simulation literature: using conventional methods, the system is not able to equilibrate rapidly due to the presence of strongly associated cation-anion pairs. In this manuscript we present a numerical method for speeding up computer simulations of the restricted primitive model (RPM) at low temperatures (around the critical temperature) and at very low densities (down to $10^{-10}\sigma^{-3}$, where $\sigma$ is the ion diameter). Experimentally, this regime corresponds to typical concentrations of electrolytes in nonaqueous solvents. As far as we are aware, this is the first time that the RPM has been equilibrated at such extremely low concentrations. More generally, this method could be used to equilibrate other systems that form aggregates at low concentrations.
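
For readers unfamiliar with the restricted primitive model, its pair interaction is simple to state: equal-sized hard spheres of diameter sigma carrying charges of equal magnitude and opposite sign, interacting through a Coulomb potential. A minimal sketch in reduced units is given below; it illustrates why contact ion pairs are so strongly bound at low reduced temperature, and it is not the authors' accelerated equilibration algorithm.

```python
import numpy as np

def rpm_pair_energy(r, zi, zj, sigma=1.0, bjerrum=1.0):
    """Pair energy of the restricted primitive model in reduced units:
    hard spheres of diameter sigma with charges z = +1 or -1, with a
    Coulomb tail scaled by the Bjerrum length (values here are assumed)."""
    if r < sigma:
        return np.inf                 # hard-core overlap is forbidden
    return bjerrum * zi * zj / r      # Coulomb interaction in units of kT

# A contact cation-anion pair at low reduced temperature (large Bjerrum length)
# is bound by many kT, which is what frustrates conventional equilibration.
print(rpm_pair_energy(1.0, +1, -1, bjerrum=20.0))   # -> -20 kT at contact
```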

Rhythmic neuronal oscillations across a broad range of frequencies, as well as spatiotemporal phenomena such as waves and bumps, have been observed in various areas of the brain and proposed as critical to brain function. While there is a long and distinguished history of studying rhythms in nerve cells and neuronal networks in healthy organisms, the association of rhythms with disease and their analysis are more recent developments. Indeed, it is now thought that certain aspects of diseases of the nervous system, such as epilepsy, schizophrenia, Parkinson's disease, and sleep disorders, are associated with transitions or disruptions of neurological rhythms. This focus issue brings together articles presenting modeling, computational, analytical, and experimental perspectives on rhythms, and on the dynamic transitions between them that are associated with various diseases.

A growing trend in developing large and complex applications on today's Teraflop scale computers is to integrate stand-alone and/or semi-independent program components into a comprehensive simulation package. One example is the Community Climate System Model which consists of atmosphere, ocean, land-surface and sea-ice components. Each component is semi-independent and has been developed at a different institution. We study how this multi-component, multi-executable application can run effectively on distributed memory architectures. For the first time, we clearly identify five effective execution modes and develop the MPH library to support application development utilizing these modes. MPH performs component-name registration, resource allocation and initial component handshaking in a flexible way.
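
A minimal sketch of the communicator-splitting idea behind such multi-component, multi-executable runs, written with mpi4py rather than the MPH library itself; the component names and the rank split below are assumed purely for illustration.

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

# Assumed layout: the first half of the ranks run "atmosphere", the rest "ocean".
component = "atmosphere" if rank < world.Get_size() // 2 else "ocean"

# Split the world communicator by component name (the color), so that each
# component gets its own communicator for internal communication.
color = 0 if component == "atmosphere" else 1
comp_comm = world.Split(color=color, key=rank)

# Gather the (rank, component) registry on world rank 0 -- a crude stand-in
# for MPH-style component-name registration and initial handshaking.
registry = world.gather((rank, component), root=0)
if rank == 0:
    print(registry)
```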

The objective of this study is to examine the usefulness and effectiveness of currently existing models that simulate the release of uranium hexafluoride from UF₆-handling facilities, the subsequent reactions of UF₆ with atmospheric moisture, and the dispersion of UF₆ and its reaction products in the atmosphere. The study evaluates screening-level and detailed public-domain models that were specifically developed for UF₆, as well as models that were originally developed for the treatment of dense gases but are applicable to UF₆ release, reaction, and dispersion. The model evaluation process is divided into three specific tasks: model-component evaluation; applicability evaluation; and user interface and quality assurance and quality control (QA/QC) evaluation. Within the model-component evaluation process, a model's treatment of source term, thermodynamics, and atmospheric dispersion is considered and model predictions are compared with actual observations. Within the applicability evaluation process, a model's applicability to Integrated Safety Analysis, Emergency Response Planning, and Post-Accident Analysis, and to site-specific considerations, is assessed. Finally, within the user interface and QA/QC evaluation process, a model's user-friendliness, the presence and clarity of documentation, ease of use, etc. are assessed, along with its handling of QA/QC. This document presents the complete methodology used in the evaluation process.

In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel and a Lagrangian trajectory analysis for particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power Systems Development Facility in Wilsonville, Alabama. Details of the hot-gas flow in this tangential-flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the spacing between the shroud and the vessel refractory acts as a cyclone that removes a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered, including a vessel with a short shroud, a filter vessel with no shroud, and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated, and the deposition patterns are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of larger particles in the filter cake leads to lower mechanical strength, thus allowing the back-pulse process to more easily remove the filter cake. A laboratory-scale filter vessel for testing the cold flow condition was designed and fabricated. A laser-based flow visualization technique was used, and the gas flow condition in the laboratory-scale vessel was studied experimentally. A computer model of the experimental vessel was also developed, and the gas flow and particle transport patterns were evaluated.
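
A minimal sketch of the Lagrangian step used in such Eulerian-Lagrangian analyses: integrate each particle's equation of motion under Stokes drag in a prescribed gas velocity field. The vessel geometry, turbulence, and deposition tests of the actual study are omitted, and all parameter values and the swirl field below are assumed.

```python
import numpy as np

def track_particle(x0, v0, gas_velocity, tau_p, dt=1e-4, n_steps=5000,
                   gravity=np.array([0.0, 0.0, -9.81])):
    """Integrate dx/dt = v, dv/dt = (u_gas(x) - v)/tau_p + g (Stokes drag).
    tau_p is the particle relaxation time; larger particles respond more
    slowly to the gas and are flung outward by a swirling flow."""
    x, v = np.array(x0, float), np.array(v0, float)
    path = [x.copy()]
    for _ in range(n_steps):
        u = gas_velocity(x)
        v += dt * ((u - v) / tau_p + gravity)
        x += dt * v
        path.append(x.copy())
    return np.array(path)

# crude solid-body swirl as a stand-in for the rotating flow in the shroud gap
def swirl(x, omega=50.0):
    return np.array([-omega * x[1], omega * x[0], -1.0])

path = track_particle([0.2, 0.0, 1.0], [0.0, 0.0, 0.0], swirl, tau_p=0.05)
print("final radius from the vessel axis:", np.hypot(path[-1, 0], path[-1, 1]))
```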

During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
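
A minimal Monte Carlo sketch of the task-network idea (not the discrete-event simulation software used in the study): each procedure step is given an assumed duration distribution and error probability, and repeated sampling yields rough estimates of total task time and the probability of at least one error. The task names and all numbers below are hypothetical.

```python
import random

# Hypothetical task network for illustration: (name, mean duration in s,
# probability of a human error on that step). All values are assumed.
TASKS = [("walk to canal", 45.0, 0.001),
         ("position tool", 30.0, 0.005),
         ("lift element", 60.0, 0.010),
         ("inspect element", 120.0, 0.004),
         ("return element", 60.0, 0.008)]

def simulate_once():
    total_time, error = 0.0, False
    for name, mean_t, p_err in TASKS:
        total_time += random.expovariate(1.0 / mean_t)  # sampled step duration
        if random.random() < p_err:
            error = True
    return total_time, error

N = 100_000
results = [simulate_once() for _ in range(N)]
mean_time = sum(t for t, _ in results) / N
p_any_error = sum(e for _, e in results) / N
print(f"mean task time {mean_time:.0f} s, P(any error) = {p_any_error:.4f}")
```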

The Hybrid Energy Systems Testing (HYTEST) Laboratory is being established at the Idaho National Laboratory to develop and test hybrid energy systems, with the principal objective of safeguarding U.S. energy security by reducing dependence on foreign petroleum. A central component of HYTEST is the slurry bubble column reactor (SBCR), in which gas-to-liquid reactions will be performed to synthesize transportation fuels using the Fischer-Tropsch (FT) process. SBCRs are cylindrical vessels in which gaseous reactants (for example, synthesis gas or syngas) are sparged into a slurry of liquid reaction products and finely dispersed catalyst particles. The catalyst particles are suspended in the slurry by the rising gas bubbles and serve to promote the chemical reaction that converts syngas to a spectrum of longer-chain hydrocarbon products, which can be upgraded to gasoline, diesel, or jet fuel. These SBCRs operate in the churn-turbulent flow regime, which is characterized by complex hydrodynamics coupled with reacting flow chemistry and heat transfer, all of which affect reactor performance. The purpose of this work is to develop a computational multiphase fluid dynamic (CMFD) model to aid in understanding the physico-chemical processes occurring in the SBCR. Our team is developing a robust methodology to couple reaction kinetics and mass transfer into a four-field model (consisting of the bulk liquid, small bubbles, large bubbles, and solid catalyst particles) that tracks twelve species fields: (1) CO reactant, (2) H2 reactant, (3) hydrocarbon product, and (4) H2O product, each in the small bubbles, the large bubbles, and the bulk fluid. Properties of the hydrocarbon product were specified by vapor-liquid equilibrium calculations. The absorption and kinetic models, specifically changes in species concentrations, have been incorporated into the mass continuity equation. The reaction rate is determined from the macrokinetic model for a cobalt catalyst developed by Yates and Satterfield [1]. The model includes heat generation due to the exothermic chemical reaction, as well as heat removal by a constant-temperature heat exchanger. Results of the CMFD simulations (similar to those shown in Figure 1) will be presented.
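
The Yates and Satterfield [1] macrokinetic form for cobalt-catalyzed FT synthesis expresses the CO consumption rate in terms of the CO and H2 partial pressures. A minimal sketch follows; the rate constants below are placeholders for illustration, not the fitted values used in the CMFD model.

```python
def yates_satterfield_rate(p_co, p_h2, a=1.0e-3, b=2.0):
    """CO consumption rate in the Yates-Satterfield macrokinetic form
        -r_CO = a * P_CO * P_H2 / (1 + b * P_CO)**2
    Pressures in bar; a and b are placeholder constants, not fitted values."""
    return a * p_co * p_h2 / (1.0 + b * p_co) ** 2

# rate per unit catalyst for a syngas mixture at assumed partial pressures
print(yates_satterfield_rate(p_co=8.0, p_h2=16.0))
```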

Variable Refrigerant Flow (VRF) heat pumps are increasingly used in commercial buildings in the United States. Monitored energy use of field installations has shown, in some cases, savings exceeding 30% compared to conventional heating, ventilating, and air-conditioning (HVAC) systems. A simulation study was conducted to identify the installation and operational characteristics that lead to energy savings for VRF systems. The study used the Department of Energy EnergyPlus building simulation software and four reference building models. Computer simulations were performed in eight U.S. climate zones. The baseline reference HVAC system incorporated packaged single-zone direct-expansion cooling with gas heating (PSZ-AC) or variable-air-volume systems (VAV with reheat). An alternate baseline HVAC system using a heat pump (PSZ-HP) was included for some buildings to directly compare gas and electric heating results. These baseline systems were compared to a VRF heat pump model to identify differences in energy use. VRF systems combine multiple indoor units with one or more outdoor units. These systems move refrigerant between the outdoor and indoor units, which eliminates the need for ductwork in most cases. Since many applications install ductwork in unconditioned spaces, this leads to installation differences between VRF systems and conventional HVAC systems. To characterize installation differences, a duct heat gain model was included to identify the energy impacts of installing ducts in unconditioned spaces. The configuration of variable refrigerant flow heat pumps will ultimately eliminate or significantly reduce energy use due to duct heat transfer. Fan energy is also studied to identify savings associated with non-ducted VRF terminal units. VRF systems incorporate a variable-speed compressor, which may lead to operational differences compared to single-speed compression systems. To characterize operational differences, the computer model performance curves used to simulate cooling operation are also evaluated. The information in this paper is intended to provide a relative difference in system energy use and to compare various installation practices that can impact performance. Comparative results of VRF versus conventional HVAC systems include energy use differences due to duct location, differences in fan energy when ducts are eliminated, and differences associated with electric versus fossil fuel heating systems.

Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
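
A minimal sketch of the sampling-based idea: for each focal element of the input body of evidence, estimate the range of the model response over that focal element by sampling, then accumulate belief and plausibility for an output event from the resulting ranges. The model, the focal elements, and their masses below are invented for illustration and are not the strategy's actual implementation.

```python
import numpy as np
rng = np.random.default_rng(1)

def model(x1, x2):
    # stand-in for the computationally intensive model
    return x1 ** 2 + np.sin(x2)

# Hypothetical body of evidence on (x1, x2): focal elements are boxes
# (x1 interval, x2 interval) with basic probability assignments (masses).
focal_elements = [((0.0, 1.0), (0.0, 2.0), 0.5),
                  ((0.5, 1.5), (1.0, 3.0), 0.3),
                  ((1.0, 2.0), (2.0, 4.0), 0.2)]

y0 = 1.0                      # output event of interest: y > y0
belief, plaus = 0.0, 0.0
for (a1, b1), (a2, b2), mass in focal_elements:
    # estimate the response range over the box by sampling (naive but simple)
    x1 = rng.uniform(a1, b1, 2000)
    x2 = rng.uniform(a2, b2, 2000)
    y = model(x1, x2)
    if y.min() > y0:          # whole range inside the event -> adds to belief
        belief += mass
    if y.max() > y0:          # range intersects the event -> adds to plausibility
        plaus += mass
print(f"Bel(y > {y0}) ~= {belief},  Pl(y > {y0}) ~= {plaus}")
```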

Correct modeling of the space environment, including radiative forces, is an important aspect of space situational awareness for geostationary (GEO) spacecraft. Solar radiation pressure has traditionally been modeled using ...

Customer Lifetime Value (CLV) is a metric used in interactive marketing to help a company allocate its budget between acquiring new customers and retaining existing ones; it can also be used to segment customers for budgeting purposes. A fairly recent stochastic approach models CLV with a Markov chain, in which the customer retention probability and the new-customer acquisition probability play an important role. This approach was originally introduced by Pfeifer and Carraway in 2000 [1], who proposed several CLV models, one of which involves only two states: customer and former customer. In this paper we extend that model by allowing a transition from former customer back to customer. The CLV obtained with the proposed model is higher than that obtained with the Pfeifer and Carraway model, but our model requires a longer convergence time.
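
To make the Markov-chain CLV construction concrete, here is a minimal sketch of a Pfeifer-and-Carraway-style computation with the former-customer-to-customer transition described above. The retention, reacquisition, margin, and discount-rate values are made up for illustration.

```python
import numpy as np

d = 0.10                               # discount rate per period (assumed)
# States: 0 = customer, 1 = former customer.
retention = 0.7                        # P(customer stays a customer)
reacquisition = 0.2                    # former -> customer (the added transition)
P = np.array([[retention,      1 - retention],
              [reacquisition,  1 - reacquisition]])
R = np.array([100.0, 0.0])             # expected net margin per period by state

# Expected discounted value over an infinite horizon:
#   CLV = sum_{t>=0} (P / (1+d))^t R = (I - P/(1+d))^{-1} R
CLV = np.linalg.solve(np.eye(2) - P / (1 + d), R)
print("CLV starting as customer:        %.2f" % CLV[0])
print("CLV starting as former customer: %.2f" % CLV[1])

# Setting reacquisition = 0 recovers the original two-state model, which gives
# a lower CLV, consistent with the comparison made in the abstract.
```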

The purpose of this study was to retrospectively determine the local control rate and the factors contributing to local progression after computed tomography (CT)-guided radiofrequency ablation (RFA) for unresectable lung tumors. This study included 138 lung tumors in 72 patients (56 men and 16 women; age 70.0 ± 11.6 years (range 31-94); mean tumor size 2.1 ± 1.2 cm [range 0.2-9 cm]) who underwent lung RFA between June 2000 and May 2009. Mean follow-up periods for patients and tumors were 14 and 12 months, respectively. The local progression-free rate and survival rate were calculated to determine the factors contributing to local progression. During follow-up, 44 of 138 (32%) lung tumors showed local progression. The 1-, 2-, 3-, and 5-year overall local control rates were 61, 57, 57, and 38%, respectively. The risk factors for local progression were age (≥70 years), tumor size (≥2 cm), sex (male), and no achievement of roll-off during RFA (P < 0.05). Multivariate analysis identified tumor size ≥2 cm as the only independent factor for local progression (P = 0.003). For tumors <2 cm, 17 of 68 (25%) showed local progression, and the 1-, 2-, and 3-year overall local control rates were 77, 73, and 73%, respectively. Multivariate analysis identified age ≥70 years as an independent determinant of local progression for tumors <2 cm in diameter (P = 0.011). The present study showed that 32% of lung tumors developed local progression after CT-guided RFA. The significant risk factor for local progression after RFA for lung tumors was tumor size ≥2 cm.

The characteristics of a biological fluid sample containing an analyte are determined from a model constructed from plural known biological fluid samples. The model expresses the concentration of materials in the known fluid samples as a function of the absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte-containing sample, so that there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the sample. The differential absorption causes intensity variations of the infrared energy as a function of wavelength, and the concentration of the unknown analyte is determined from these intensity variations using the model's absorption-versus-wavelength function.
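
The construction described, a model relating analyte concentration to wavelength-dependent absorption built from known samples, can be illustrated with a classical least-squares calibration. The text does not specify the regression method, so the sketch below, its synthetic spectra, and all numbers are purely illustrative.

```python
import numpy as np
rng = np.random.default_rng(0)

# Synthetic "known" samples: absorption spectra at 50 wavelengths for 40
# calibration samples with known analyte concentrations (all made up).
n_samples, n_wavelengths = 40, 50
true_spectrum = np.exp(-0.5 * ((np.arange(n_wavelengths) - 25) / 5.0) ** 2)
conc = rng.uniform(0.0, 10.0, n_samples)
spectra = np.outer(conc, true_spectrum) + 0.01 * rng.normal(size=(n_samples, n_wavelengths))

# Build the model: estimate the pure-component spectrum k from spectra ~= conc * k.
k, *_ = np.linalg.lstsq(conc[:, None], spectra, rcond=None)
k = k[0]

# Determine the concentration of an "unknown" sample by projecting its
# measured spectrum onto the calibrated spectrum k.
unknown = 4.2 * true_spectrum + 0.01 * rng.normal(size=n_wavelengths)
print("estimated concentration: %.2f" % (unknown @ k / (k @ k)))
```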

Knowledge of viscosity of flow streams is essential for the design and operation of production facilities, drilling operations and reservoir engineering calculations. The determination of the viscosity of a reservoir fluid at downhole conditions...

Three uranium hexafluoride (UF₆)-specific models--HGSYSTEM/UF₆, Science Application International Corporation (SAIC), and RTM-96; three dense-gas models--DEGADIS, SLAB, and the Chlorine Institute methodology; and one toxic chemical model--AFTOX--are evaluated on their capabilities to simulate the chemical reactions, thermodynamics, and atmospheric dispersion of UF₆ released from accidents at nuclear fuel-cycle facilities, to support Integrated Safety Analysis, Emergency Response Planning, and Post-Accident Analysis. These models are also evaluated for user-friendliness and for quality assurance and quality control features, to ensure the validity and credibility of the results. Model performance evaluations are conducted for the three UF₆-specific models, using field data on releases of UF₆ and other heavy gases. Predictions from the HGSYSTEM/UF₆ and SAIC models are within an order of magnitude of the field data, but the SAIC model overpredicts by more than an order of magnitude for a few UF₆-specific data points. The RTM-96 model provides overpredictions within a factor of 3 for all data points beyond 400 m from the source. For one data set, however, the RTM-96 model severely underpredicts the observations within 200 m of the source. Outputs of the models are most sensitive to the meteorological parameters at large distances from the source and to certain source-specific and meteorological parameters at distances close to the source. Specific recommendations are made to improve the applicability and usefulness of the three models and to choose a specific model to support the intended analyses. Guidance is also provided on the choice of input parameters for initial dilution, building wake effects, and distance to completion of the UF₆ reaction with water.

The Virtual Fields Method (VFM) is an inverse method for constitutive model parameter identification that relies on full-field experimental measurements of displacements. VFM is an alternative to standard approaches that require several experiments of simple geometries to calibrate a constitutive model. VFM is one of several techniques that use full-field experimental data, including Finite Element Method Updating (FEMU) techniques, but VFM is computationally fast, not requiring iterative FEM analyses. This report describes the implementation and evaluation of VFM primarily for finite-deformation plasticity constitutive models. VFM was successfully implemented in MATLAB and evaluated using simulated FEM data that included representative experimental noise found in the Digital Image Correlation (DIC) optical technique that provides full-field displacement measurements. VFM was able to identify constitutive model parameters for the BCJ plasticity model even in the presence of simulated DIC noise, demonstrating VFM as a viable alternative inverse method. Further research is required before VFM can be adopted as a standard method for constitutive model parameter identification, but this study is a foundation for ongoing research at Sandia for improving constitutive model calibration.
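
A minimal sketch of the VFM principle for the simplest possible case: identifying a single elastic modulus from a uniaxial test using full-field strain data and one virtual field. The report's implementation targets finite-deformation plasticity models and DIC noise handling, which are far beyond this illustration, and every number below is assumed.

```python
import numpy as np
rng = np.random.default_rng(0)

# Assumed uniaxial test: bar of length L, cross-section A, applied force F.
L, A, F = 0.10, 1.0e-4, 2000.0          # m, m^2, N
E_true = 70e9                            # Pa, used only to make synthetic data
n_points = 500
dV = L * A / n_points                    # volume associated with each "measurement point"

# Synthetic full-field axial strain (uniform stress state) plus DIC-like noise.
eps_xx = np.full(n_points, F / (A * E_true)) + 1e-5 * rng.normal(size=n_points)

# Virtual field u*_x = x / L  =>  virtual strain eps*_xx = 1 / L everywhere.
# Principle of virtual work: sum(E * eps_xx * (1/L) * dV) = F * u*_x(L) = F,
# which gives E directly without any iterative FEM analysis.
E_identified = F * L / np.sum(eps_xx * dV)
print("identified E = %.2f GPa" % (E_identified / 1e9))
```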

An overarching goal of the Department of Energy's mission is the efficient deployment and engineering of microbial and plant systems to enable biomass conversion in pursuit of high-energy-density liquid biofuels. This has spurred the pace at which new organisms are sequenced and annotated. This torrent of genomic information has opened the door to understanding metabolism not just in skeletal pathways and a handful of microorganisms but through truly genome-scale reconstructions derived for hundreds of microbes and plants. Understanding and redirecting metabolism is crucial because metabolic fluxes are unique descriptors of cellular physiology that directly assess the current cellular state and quantify the effect of genetic engineering interventions. At the same time, however, trying to keep pace with the rate of genomic data generation has ushered in a number of modeling and computational challenges related to (i) the automated assembly, testing and correction of genome-scale metabolic models, (ii) metabolic flux elucidation using labeled isotopes, and (iii) comprehensive identification of engineering interventions leading to the desired metabolism redirection.
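
Genome-scale metabolic models of the kind referred to here are commonly analyzed by flux balance analysis: maximize a biomass (or product) flux subject to steady-state mass balances S·v = 0 and flux bounds. A toy sketch with an invented three-reaction network follows; it is illustrative only and not any particular reconstruction.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (invented): v1: -> A,  v2: A -> B,  v3: B -> biomass
# Stoichiometric matrix S (rows = metabolites A, B; columns = v1, v2, v3).
S = np.array([[1, -1,  0],
              [0,  1, -1]])
b = np.zeros(2)                            # steady state: S v = 0
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake v1 limited to 10 flux units

# Maximize the biomass flux v3 -> minimize -v3 with linprog.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=b, bounds=bounds, method="highs")
print("optimal fluxes:", res.x)            # expected: [10, 10, 10]
```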

With the shortage of fossil fuel and the increasing environmental awareness, wind energy is becoming more and more important. As the market for wind energy grows, wind turbines and wind farms are becoming larger. Current utility-scale turbines extend a significant distance into the atmospheric boundary layer. Therefore, the interaction between the atmospheric boundary layer and the turbines and their wakes needs to be better understood. The turbulent wakes of upstream turbines affect the flow field of the turbines behind them, decreasing power production and increasing mechanical loading. With a better understanding of this type of flow, wind farm developers could plan better-performing, less maintenance-intensive wind farms. Simulating this flow using computational fluid dynamics is one important way to gain a better understanding of wind farm flows. In this study, we compare the performance of actuator disc and actuator line models in producing wind turbine wakes and the wake-turbine interaction between multiple turbines. We also examine parameters that affect the performance of these models, such as grid resolution, the use of a tip-loss correction, and the way in which the turbine force is projected onto the flow field.
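
A minimal sketch of the actuator-disc idea itself (not the study's CFD implementation): represent the rotor by its momentum-theory thrust and induction, which in a CFD model would then be applied as a body force spread over the grid cells the disc intersects. The wind speed, rotor diameter, and thrust coefficient below are assumed.

```python
import numpy as np

def actuator_disc(U_inf, D, Ct, rho=1.225):
    """1-D momentum-theory quantities for an actuator disc:
    total thrust, axial induction factor and power (all inputs assumed)."""
    A = np.pi * (D / 2) ** 2
    thrust = 0.5 * rho * A * Ct * U_inf ** 2
    a = 0.5 * (1.0 - np.sqrt(1.0 - Ct))          # axial induction factor
    power = 2.0 * rho * A * U_inf ** 3 * a * (1.0 - a) ** 2   # Cp = 4a(1-a)^2
    return thrust, a, power

T, a, P = actuator_disc(U_inf=8.0, D=126.0, Ct=0.75)
print(f"thrust {T/1e3:.0f} kN, induction {a:.3f}, power {P/1e6:.2f} MW")
# In an actuator-disc CFD model, T is distributed as a body force over the
# disc cells (actuator-line models instead distribute forces along rotating
# blade lines, often with Gaussian smearing).
```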

Long-lasting phosphorescence in barium aluminates can be achieved by doping with rare earth ions in divalent charge states. The rare earth ions are initially in a trivalent charge state but are reduced to a divalent charge state before being doped into the material. In this paper, the reduction of trivalent rare earth ions in the BaAl₂O₄ lattice is studied by computer simulation, with the energetics of the whole reduction and doping process being modelled by two methods: one based on single-ion doping and one that allows dopant concentrations to be taken into account. A range of different reduction schemes are considered and the most energetically favourable schemes identified. Graphical abstract: the doping and subsequent reduction of a rare earth ion into the barium aluminate lattice. Highlights: the doping of barium aluminate with rare earth ions reduced in a range of atmospheres has been modelled; the overall solution energy for the doping process for each ion in each reducing atmosphere is calculated using two methods; the lowest-energy reduction process is predicted and compared with experimental results.

The Uranium Mill Tailings Remediation Action (UMTRA) Project is responsible for the assessment and remedial action at the 24 former uranium mill tailings sites located in the US. The surface restoration phase, which includes containment and stabilization of the abandoned uranium mill tailings piles, has a specific termination date and is nearing completion. Therefore, attention has now turned to the groundwater restoration phase, which began in 1991. Regulated constituents in groundwater whose concentrations or activities exceed maximum contaminant levels (MCLs) or background levels at one or more sites include, but are not limited to, uranium, selenium, arsenic, molybdenum, nitrate, gross alpha, radium-226 and radium-228. The purpose of this report is to recommend computer codes that can be used to assist the UMTRA groundwater restoration effort. The report includes a survey of applicable codes in each of the following areas: (1) groundwater flow and contaminant transport modeling codes, (2) hydrogeochemical modeling codes, (3) pump and treat optimization codes, and (4) decision support tools. Following the survey of the applicable codes, specific codes that can best meet the needs of the UMTRA groundwater restoration program in each of the four areas are recommended.

This manual presents and describes a package of computer models uniquely developed for boiler thermal performance and emissions evaluations by the Energy and Environmental Research Corporation. The model package permits predictions of boiler heat transfer, fuel combustion, and pollutant emissions related to a number of practical boiler operations such as fuel switching, fuel co-firing, and reburning NOₓ reductions. The models are adaptable to most boiler/combustor designs and can handle burner fuels in solid, liquid, gaseous, and slurried forms. The models are also capable of performing predictions for combustion applications involving gaseous-fuel reburning and co-firing of solid/gas, liquid/gas, gas/gas, and slurry/gas fuels. The model package is named BPACK (Boiler Package) and consists of six computer codes, three of which are main computational codes and three of which are input codes. The three main codes are: (a) a two-dimensional furnace heat-transfer and combustion code; (b) a detailed chemical-kinetics code; and (c) a boiler convective passage code. This user's manual presents the computer model package in two volumes. Volume 1 describes in detail a number of topics of general user interest, including the physical and chemical basis of the models, a complete description of the model applicability, options, input/output, and the default inputs. Volume 2 contains a detailed record of worked examples to assist users in applying the models and to illustrate the versatility of the codes.

Stockman, Mark (Georgia State University Research Foundation); Gray, Steven (Argonne National Laboratory)

The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).

The surface code approach to building a fault-tolerant quantum computer has one of the highest known tolerable error rates, roughly four orders of magnitude higher than that of the concatenation code, motivating a new approach to building a quantum computer implementation based on the surface code.

System-level simulations and results are presented for rack-scale photonic interconnection networks for high-performance computing, at the scale of high-performance computer clusters and warehouse-scale data centers, motivated by the power consumption [3], latency [4], and bandwidth [5] challenges of high-performance computing (HPC).

The IZMEM/DMSP convection model was constructed using ground-magnetometer measurements and calibrated against DMSP ion data; such convection models must accommodate a raft of geophysical conditions, particularly IMF orientation and solar conditions (Rich and Maynard, 1989), and supported the predicted two-cell convection pattern.

Experimental and computational modeling of oscillatory flow within a baffled tube: numerical simulations and matching experimental results are described for oscillatory flow within a baffled tube, examining the basic mechanism of OFM in a horizontal single-orifice baffled tube as the fluid passes through the orifice.

The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for radiative thermal calculations can be computed once and for all at the beginning of the simulation. The cost of online integration of heat exchange due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The off-line view factor calculation is constructed to be very modular and has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability for computing view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case where analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts with the original work plan as elucidated in the proposal in section B, followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided, with references to the previous efforts this project leverages. The results are discussed in section E. The report ends with conclusions and future scope of work in section F.
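
For a gray-diffuse enclosure with precomputed view factors, a net-radiation (radiosity) formulation of the kind described above reduces to a small linear solve. The sketch below uses a hypothetical three-surface enclosure with equal areas; the emissivities, temperatures, and view-factor matrix are assumed, and this is not the CRADA code.

```python
import numpy as np

sigma = 5.670374419e-8        # Stefan-Boltzmann constant, W/m^2/K^4

# Hypothetical 3-surface gray-diffuse enclosure with equal areas (assumed).
eps = np.array([0.9, 0.6, 0.3])          # surface emissivities
T   = np.array([400.0, 350.0, 300.0])    # surface temperatures, K
F   = np.array([[0.0, 0.5, 0.5],         # view factors (each row sums to 1)
                [0.5, 0.0, 0.5],
                [0.5, 0.5, 0.0]])

Eb = sigma * T ** 4                       # blackbody emissive power
# Radiosity balance: J = eps*Eb + (1 - eps) * F @ J, solved as a linear system.
J = np.linalg.solve(np.eye(3) - (1.0 - eps)[:, None] * F, eps * Eb)

# Net radiative heat flux leaving each surface (per unit area):
q = J - F @ J
print("radiosities:", J)
print("net fluxes :", q, " (sums to ~0 for an enclosure of equal areas)")
```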

Modelling of turbine blade-induced turbulence (BIT) is discussed within the framework of three-dimensional Reynolds-averaged Navier-Stokes (RANS) actuator disk computations. We first propose a generic (baseline) BIT model, which is applied only to the actuator disk surface, does not include any model coefficients (other than those used in the original RANS turbulence model) and is expected to be valid in the limiting case where BIT is fully isotropic and in energy equilibrium. The baseline model is then combined with correction functions applied to the region behind the disk to account for the effect of rotor tip vortices causing a mismatch of Reynolds shear stress between short- and long-time averaged flow fields. Results are compared with wake measurements of a two-bladed wind turbine model of Medici and Alfredsson [Wind Energy, Vol. 9, 2006, pp. 219-236] to demonstrate the capability of the new model.