Sample records for "degrees fahrenheit consisting" from the National Library of Energy Beta (NLEBeta)

Note: This page contains sample records for the topic "degrees fahrenheit consisting" from the National Library of Energy Beta (NLEBeta). While these samples are representative of the content of NLEBeta, they are neither comprehensive nor the most current set. We encourage you to perform a real-time search of NLEBeta to obtain the most current and comprehensive results.

Massive hydraulic fracturing (MHF) from a lower wellbore (EE-2) created a large man-made reservoir which did not intersect the upper well (EE-3). To create a heat extraction flow loop, the upper well was sidetracked and redrilled (EE-3A) down into a microseismic cloud around EE-2 mapped during the MHF. The potential to intersect numerous fracture zones in the redrilled bore was apparent from the seismicity. Economically and effectively isolating and testing these microseismic zones required that a functional open-hole packer be developed. The packer would be exposed to soak temperatures as high as 500 °F (260 °C), with cool-down to 100 °F (40 °C), at differential pressures exceeding 5000 psi (35 MPa). A functional packer has been designed, manufactured, and successfully used for the creation of a hot dry rock (HDR) reservoir. 5 figs., 1 tab.
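For reference, the quoted metric equivalents follow from the standard unit conversions; a quick illustrative check (the abstract's values are rounded):

```python
def f_to_c(deg_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def psi_to_mpa(psi: float) -> float:
    """Convert pounds per square inch to megapascals (1 psi = 6894.757 Pa)."""
    return psi * 6.894757e-3

print(round(f_to_c(500)))       # 260, the quoted soak temperature in °C
print(round(f_to_c(100)))       # 38, quoted as ~40 °C in the abstract
print(round(psi_to_mpa(5000)))  # 34, quoted as ~35 MPa
```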

Monthly averaged data are presented describing the availability of solar radiation at 248 National Weather Service stations. Monthly and annual average daily insolation and temperature values have been computed from a base of 24 to 25 years of data. Average daily maximum, minimum, and monthly temperatures are provided for most locations in both Celsius and Fahrenheit. Heating and cooling degree-days were computed relative to a base of 18.3 °C (65 °F). For each station, global radiation and K_T (cloudiness index) values were calculated on a monthly and annual basis. (MHR)
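Degree-days relative to a base temperature, as used in this record, are the sums of the daily shortfalls (heating) or excesses (cooling) of the mean temperature; a minimal sketch, with illustrative daily means:

```python
BASE_C = 18.3  # base temperature in °C (65 °F)

def degree_days(daily_means_c, base=BASE_C):
    """Return (heating, cooling) degree-days from daily mean temperatures in °C."""
    hdd = sum(max(base - t, 0.0) for t in daily_means_c)  # heating: shortfall below base
    cdd = sum(max(t - base, 0.0) for t in daily_means_c)  # cooling: excess above base
    return hdd, cdd

hdd, cdd = degree_days([10.0, 18.3, 25.0])
print(round(hdd, 1), round(cdd, 1))  # 8.3 6.7
```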

Causal consistency stipulates that causally dependent writes to data items should be executed in causal order. Traditionally this has been done by causally ordered message delivery using vector clocks. In a vector clock of size N, each element of the ... Keywords: causal consistency, collaboration, mobility, replication, vector clocks
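The vector-clock comparison and merge operations the abstract refers to can be sketched in a few lines (an illustrative toy, not the paper's construction):

```python
def happened_before(a, b):
    """True if clock a is causally before b: a <= b componentwise and a != b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def merge(a, b):
    """Element-wise maximum: the receiver's clock after delivering a message."""
    return [max(x, y) for x, y in zip(a, b)]

# Process 0 performs a write (incrementing its own entry); process 1 receives it.
w1 = [1, 0, 0]
w2 = merge(w1, [0, 1, 0])       # process 1's clock after delivery: [1, 1, 0]
print(happened_before(w1, w2))  # True: w2 causally depends on w1
print(happened_before(w2, w1))  # False
```

Writes whose clocks are incomparable under `happened_before` are concurrent and need no ordering.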

BSc (Hons) Molecular Biology, Degree Programme Guide 2013-2014 (Year 2, Year 3, Year 4). Introduction: Molecular biology aims to understand living systems by focusing on the molecular components upon which they are built. Molecular biology is one of the great successes of the 20th century.

Constructing complex software systems by integrating different software components is a promising and challenging approach. With the functionality of software components given by models it is possible to ensure consistency of such models before implementation ...

SMT typically models translation at the sentence level, ignoring wider document context. Does this hurt the consistency of translated documents? Using a phrase-based SMT system in various data conditions, we show that SMT translates documents remarkably ...

The historical, cultural, and intellectual importance of archiving the web has been widely recognized. Today, all countries with high Internet penetration rate have established high-profile archiving initiatives to crawl and archive the fast-disappearing ... Keywords: consistency, digital preservation, social network, web archiving

The thermodynamic consistency of a quasiparticle boson system with effective mass $m^*$ and zero chemical potential is studied. We take the quasiparticle gluon plasma model as a toy model. The failure of previous treatments based on the traditional partial derivative is addressed. We show that a consistent thermodynamic treatment can be applied to such a boson system provided that a new degree of freedom $m^*$ is introduced in the partial-derivative calculation. A pressure modification term different from the vacuum contribution is derived based on the new independent variable $m^*$. A complete and self-consistent thermodynamic treatment for quasiparticle systems, which can be widely applied to effective-mass models, has been constructed.

In the small package shipping industry (as in other industries), companies try to differentiate themselves by providing high levels of customer service. This can be accomplished in several ways, including online tracking of packages, ensuring on-time delivery, and offering residential pickups. Some companies want their drivers to develop relationships with customers on a route and have the same drivers visit the same customers at roughly the same time on each day that the customers need service. These service requirements, together with traditional constraints on vehicle capacity and route length, define a variant of the classical capacitated vehicle routing problem, which we call the consistent VRP (ConVRP). In this paper, we formulate the problem as a mixed-integer program and develop an algorithm to solve the ConVRP that is based on the record-to-record travel algorithm. We compare the performance of our algorithm to the optimal mixed-integer program solutions for a set of small problems and then apply our algorithm to five simulated data sets with 1,000 customers and a real-world data set with more than 3,700 customers. We provide a technique for generating ConVRP benchmark problems from vehicle routing problem instances given in the literature and provide our solutions to these instances. The solutions produced by our algorithm on all problems do a very good job of meeting customer service objectives with routes that have a low total travel time.
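The record-to-record travel heuristic underlying the authors' algorithm accepts a candidate move whenever its cost stays within a fixed deviation of the best ("record") cost found so far; a minimal sketch on a toy one-dimensional objective (all names and parameters here are illustrative, not the paper's routing implementation):

```python
import random

def record_to_record(cost, neighbor, start, iters=10_000, deviation=5.0, seed=0):
    """Record-to-record travel: accept any move whose cost stays within
    `deviation` of the best cost found so far (the 'record')."""
    rng = random.Random(seed)
    current = start
    record_sol, record = start, cost(start)
    for _ in range(iters):
        cand = neighbor(current, rng)
        c = cost(cand)
        if c < record:                 # new best-so-far: update the record
            record_sol, record = cand, c
        if c < record + deviation:     # accept moves close enough to the record
            current = cand
    return record_sol, record

# Toy usage: minimize (x - 3)^2 over the integers with +/-1 moves.
sol, val = record_to_record(lambda x: (x - 3) ** 2,
                            lambda x, rng: x + rng.choice((-1, 1)),
                            start=20)
print(sol, val)
```

Unlike simulated annealing there is no temperature schedule; the single `deviation` parameter controls how much uphill movement is tolerated.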

Write buffering is one of many successful mechanisms that improves the performance and scalability of multiprocessors. However, it leads to more complex memory system behavior, which cannot be described using intuitive consistency models, such as Sequential ... Keywords: Memory consistency framework, alpha, coherence, partial store order, relaxed memory order, sequential consistency, sparc multiprocessors, total store order, write-buffer architectures

We propose a general modeling framework to evaluate the performance of cache consistency algorithms. In addition to the usual hit rate, we introduce the hit* rate as a consistency measure, which captures the fraction of non-stale downloads ... Keywords: TTL, bounds on the renewal function, cache consistency, renewal theory, stochastic modeling, web caching
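The hit* idea can be made concrete with a toy TTL cache that counts both ordinary hits and non-stale hits (a hypothetical sketch with a simulated clock, not the paper's model):

```python
class TTLCache:
    """Toy TTL cache tracking both the hit rate and the hit* rate (non-stale hits)."""
    def __init__(self, ttl):
        self.ttl, self.store = ttl, {}
        self.requests = self.hits = self.fresh_hits = 0

    def get(self, key, now, origin):
        self.requests += 1
        entry = self.store.get(key)
        if entry and now - entry["t"] < self.ttl:
            self.hits += 1
            if entry["v"] == origin[key]:  # served copy matches the origin: a hit*
                self.fresh_hits += 1
            return entry["v"]
        self.store[key] = {"v": origin[key], "t": now}  # miss: re-fetch from origin
        return origin[key]

origin = {"page": 1}
cache = TTLCache(ttl=10)
cache.get("page", now=0, origin=origin)  # miss: fetch version 1
cache.get("page", now=5, origin=origin)  # hit, and a hit* (copy still current)
origin["page"] = 2                       # origin updated behind the cache's back
cache.get("page", now=8, origin=origin)  # hit, but stale: not a hit*
print(cache.hits / cache.requests)       # hit rate = 2/3
print(cache.fresh_hits / cache.requests) # hit* rate = 1/3
```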

MSc in Mechanical Engineering, MSc in Satellite Communication Systems, MSc in Sustainable Energy Technology. Research degrees in mathematics, physics or an applied science. Also refer to the Applicant profile listed with the relevant course. For the MSc in Mechanical Engineering, the MSc in Sustainable Energy Technology or a research degree, we must receive your

This small, non-contact optical sensor increases the capability and flexibility of computer controlled machines by detecting its relative position to a workpiece in all six degrees of freedom (DOF). At a fraction of the cost, it is over 200 times faster and up to 25 times more accurate than competing 3-DOF sensors. Applications range from flexible manufacturing to a 6-DOF mouse for computers. Until now, highly agile and accurate machines have been limited by their inability to adjust to changes in their tasks. By enabling them to sense all six degrees of position, these machines can now adapt to new and complicated tasks without human intervention or delay--simplifying production, reducing costs, and enhancing the value and capability of flexible manufacturing. 3 figs.

This paper shows how the theory of Communicating Sequential Processes (CSP) can be used to establish that a protocol guarantees sequential consistency. The protocol in question is an accepted design based upon lazy caching; it is an ideal example for ... Keywords: CSP, lazy caching protocol, sequential consistency, specification, verification

This thesis reports on the design and implementation of Network Assisted NFSCK (or NAN), an extension to NFSCK, a research project about checking file system consistency at NetApp. NFSCK requires disk space to store temporary ...

We present our chemically consistent GALEV Evolutionary Synthesis models for galaxies and point out differences to previous generations of models and their effects on the interpretation of local and high-redshift galaxy data.

We study formally the consistency problem, for replicated shared data, in the Action-Constraint framework (ACF). ACF can describe a large range of application semantics and replication protocols, including optimistic and/or partial replication. ACF is used to decompose the consistency problem into simpler sub-problems. Each is easily understood. Existing algorithms from the literature can be explained as combinations of concrete sub-problem implementations. Using ACF, we design a new serialisation algorithm that does not cause aborts and only needs pairwise agreement (not global consensus).

We develop a scheme for the minimal coupling of all standard types of tensor and spinor field matter to Plebanski gravity. This theory is a geometric reformulation of vacuum general relativity in terms of two-form frames and connection one-forms, and provides a covariant basis for various quantization approaches. Using the spinor formalism we prove the consistency of the newly proposed matter coupling by demonstrating the full equivalence of Plebanski gravity plus matter to Einstein-Cartan gravity. As a by-product we also show the consistency of some previous suggestions for matter actions.

Inconsistencies in design models should be detected immediately to save the engineer from unnecessary rework. Yet, tools are not capable of keeping up with the engineers' rate of model changes. This paper presents an approach for quickly, correctly, ... Keywords: consistency, design feedback, incremental analysis

We propose a method for disambiguating uncertain detections of events by seeking global explanations for activities. Given a noisy visual input, and exploiting our knowledge of the activity and its constraints, one can provide a consistent set of events ... Keywords: Activity analysis, Event recognition, Global explanations

Degree-hours have many applications in fields such as agriculture, architecture, and power generation. Since daily mean temperatures are more readily available than hourly temperatures, the difference between mean daily degree-hours computed from ...
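The discrepancy this record studies can be illustrated directly: degree-hours computed hour by hour generally exceed the value implied by the daily mean, because hours below the base are clipped at zero (the base temperature and hourly values below are illustrative):

```python
BASE = 18.3  # illustrative base temperature, °C

def degree_hours_hourly(temps):
    """Cooling degree-hours from 24 hourly temperatures (°C)."""
    return sum(max(t - BASE, 0.0) for t in temps)

def degree_hours_from_mean(temps):
    """Approximation using only the daily mean temperature."""
    mean = sum(temps) / len(temps)
    return 24.0 * max(mean - BASE, 0.0)

# A day that is cool at night and hot in the afternoon (mean = 20.0 °C):
hourly = [14.0] * 12 + [26.0] * 12
print(round(degree_hours_hourly(hourly), 1))    # 92.4: cool hours clipped at zero
print(round(degree_hours_from_mean(hourly), 1)) # 40.8: the mean hides the hot hours
```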

In this paper we highlight our recent work in arXiv:0803.4504. In that work, we proposed a new consistency test of quintessence models for dark energy. Our test gave a simple and direct signature if certain category of quintessence models was not consistent with the observational data. For a category that passed the test, we further constrained its characteristic parameter. Specifically, we found that the exponential potential was ruled out at the 95% confidence level and the power-law potential was ruled out at the 68% confidence level based on the current observational data. We also found that the confidence interval of the index of the power-law potential was between -2 and 0 at the 95% confidence level.

A fundamental issue for any quantum cosmological theory is to specify how probabilities can be assigned to various quantum events or sequences of events such as the occurrence of singularities or bounces. In previous work, we have demonstrated how this issue can be successfully addressed within the consistent histories approach to quantum theory for Wheeler-DeWitt-quantized cosmological models. In this work, we generalize that analysis to the exactly solvable loop quantization of a spatially flat, homogeneous and isotropic cosmology sourced with a massless, minimally coupled scalar field known as sLQC. We provide an explicit, rigorous and complete decoherent histories formulation for this model and compute the probabilities for the occurrence of a quantum bounce vs. a singularity. Using the scalar field as an emergent internal time, we show for generic states that the probability for a singularity to occur in this model is zero, and that of a bounce is unity, complementing earlier studies of the expectation values of the volume and matter density in this theory. We also show from the consistent histories point of view that all states in this model, whether quantum or classical, achieve arbitrarily large volume in the limit of infinite `past' or `future' scalar `time', in the sense that the wave function evaluated at any arbitrary fixed value of the volume vanishes in that limit. Finally, we briefly discuss certain misconceptions concerning the utility of the consistent histories approach in these models.

As an application of the solution of the equations of electromagnetic self-consistency in a plasma, found in a previous paper, a study of controlled thermonuclear fusion is undertaken. This study utilizes the resonance which can be developed in the plasma, as indicated by the above solution, and is based on an analysis of the underlying forced oscillation under friction. As a consequence, we find that controlled thermonuclear fusion now seems feasible in principle by this means. The treatment is rather elementary, and it may serve as a guide for more detailed calculations.

The origins of the hot solar corona and the supersonically expanding solar wind are still the subject of much debate. This paper summarizes some of the essential ingredients of realistic and self-consistent models of solar wind acceleration. It also outlines the major issues in the recent debate over what physical processes dominate the mass, momentum, and energy balance in the accelerating wind. A key obstacle in the way of producing realistic simulations of the Sun-heliosphere system is the lack of a physically motivated way of specifying the coronal heating rate. Recent models that assume the energy comes from Alfven waves that are partially reflected, and then dissipated by magnetohydrodynamic turbulence, have been found to reproduce many of the observed features of the solar wind. This paper discusses results from these models, including detailed comparisons with measured plasma properties as a function of solar wind speed. Some suggestions are also given for future work that could answer the many remain...

Heating degree days. Dataset summary: The National Oceanic and Atmospheric Administration's (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS), in conjunction with the National Climatic Data Center (NCDC), publishes monthly and annual climate data by state for the U.S., including heating degree days (total number per month and per year). The average values for each state are weighted by population, using 2000 Census data. The base temperature for this dataset is 65 degrees F. Source: NOAA. Date released: unknown. Date updated: June 24, 2005. Keywords: climate, heating degree days, NOAA. Data: Heating Degree Data, by State (xls, 208.4 KiB). Level of review: some review.

Objectives: We wanted to develop a method for evaluating the consistency and usefulness of LOINC code use across different institutions, and to evaluate the degree of interoperability that can be attained when using LOINC codes for laboratory data exchange. ... Keywords: Clinical laboratory information systems, Consistency, Controlled vocabulary, Data exchange standards, Evaluation research, LOINC, LOINC usage, Semantic interoperability, Usefulness

Stochastic effects during inflation can be addressed by averaging the quantum inflaton field over Hubble-patch-sized domains. The averaged field then obeys a Langevin-type equation into which short-scale fluctuations enter as a noise term. We solve the Langevin equation for an inflaton field with a Dirac-Born-Infeld (DBI) kinetic term perturbatively in the noise and use the result to determine the field value's probability density function (PDF). In this calculation, both the shape of the potential and the warp factor are arbitrary functions, and the PDF is obtained with and without volume effects due to the finite size of the averaging domain. DBI kinetic terms typically arise in string-inspired inflationary scenarios in which the scalar field is associated with some distance within the (compact) extra dimensions. The inflaton's accessible range of field values therefore is limited because of the extra dimensions' finite size. We argue that in a consistent stochastic approach the inflaton's PDF must vanish for geometrically forbidden field values. We propose to implement these extra-dimensional spatial restrictions into the PDF by installing absorbing (or reflecting) walls at the respective boundaries in field space. As a toy model, we consider a DBI inflaton between two absorbing walls and use the method of images to determine its most general PDF. The resulting PDF is studied in detail for the example of a quartic warp factor and a chaotic inflaton potential. The presence of the walls is shown to affect the inflaton trajectory for a given set of parameters.

Joint Degree Program leading to the Master of Urban Planning and Master of Arts in Geography degree. Students with advanced work may be admitted to the Graduate School through the Graduate Program in Urban Planning prior to beginning their graduate work. If a student decides to enter the combined program after

Cooling degree days. Dataset summary: The National Oceanic and Atmospheric Administration's (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS), in conjunction with the National Climatic Data Center (NCDC), publishes monthly and annual climate data by state for the U.S., including cooling degree days (total number per month and per year). The average values for each state are weighted by population, using 2000 Census data. The base temperature for this dataset is 65 degrees F. Source: NOAA. Date released: unknown. Date updated: June 24, 2005. Keywords: climate, cooling degree days, NOAA. Data: hcs_51_avg_cdd.xls (xls, 215.6 KiB). Level of review: some review.

The SLD experiment at the Stanford Linear Accelerator Center had a significant gap in its muon tracking coverage, provided by the Warm Iron Calorimeter. Supplemental planes of limited streamer tube chambers were added to improve the coverage in the vicinity of the gap at 0.65. The commissioning of the forty-five degree chamber region of the SLAC SLD Warm Iron Calorimeter is presented. This task involved the completion of the forty-five degree chamber region geometry for the Warm Iron Calorimeter's fitter and swimmer, and a change in the way multiple-scattering effects are treated in the fitter algorithm.

The spatial degrees of freedom (dof) of atmospheric flows are estimated by comparing the variance of the theoretical standardized chi-squared distribution with the sum of the squared eigenvalues of a spatial correlation matrix, $\mathrm{dof} = N^2 / \sum_{i=1}^{N} \lambda_i^2$, where the $\lambda_i$ are the eigenvalues of the $N \times N$ correlation matrix.
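The estimate divides $N^2$ by the sum of squared eigenvalues of the correlation matrix; a toy numerical check (the eigenvalues below are illustrative; for a standardized field they sum to $N$):

```python
def spatial_dof(eigenvalues):
    """dof = N^2 / sum(lambda_i^2) for the N eigenvalues of a spatial
    correlation matrix (effective spatial degrees of freedom)."""
    n = len(eigenvalues)
    return n ** 2 / sum(l * l for l in eigenvalues)

print(spatial_dof([1.0, 1.0, 1.0, 1.0]))  # 4.0: uncorrelated field, full dof
print(spatial_dof([2.0, 1.0, 0.5, 0.5]))  # ~2.91: spatial correlation reduces dof
```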

This survey, sponsored by the Department of Energy, is designed to include programs offering a major in nuclear engineering, or coursework in other engineering disciplines equivalent to such a major, that prepare graduates to perform as nuclear engineers. The survey provides data on nuclear engineering enrollments and degrees for use in labor-market analyses, information on education programs for students, and information on new graduates for employers, government agencies, academia, and professional societies.

Video: Microbial Bebop - "Fifty Degrees North, Four Degrees West". Topics: environment, biology, environmental biology, metagenomics. This musical composition was created from data on microbes (bacteria, algae and other microorganisms) sampled in the English Channel. Argonne National Laboratory biologist Peter Larsen created the songs as a unique way to present and comprehend large datasets. More details: all of the data in this composition derive from twelve observed time points collected at monthly intervals at the L4 Station during 2007. The piece is composed of seven choruses. Each chorus has the same chord progression of 12 measures, in which chords are derived from monthly measures of temperature and chlorophyll A concentrations.

Degree Days.net. Website that generates heating and cooling degree days for locations worldwide. Degree days are commonly used in calculations relating to building energy consumption. Once you have chosen a weather station (of which there are thousands available) and specified the degree days you want (e.g. what base temperature, and whether you want them broken down in daily, weekly or monthly format), Degree Days.net will calculate your degree days and give them to you as a CSV file that you can open directly in a spreadsheet. Keywords: degree days, HDD, CDD. Validation/testing: a comprehensive suite of automated tests has been written to test the software. Expertise required: Degree Days.net makes it very easy to specify and generate degree days.

The frequency-independent RMS temperature fluctuations determined from the COBE-DMR two-year sky maps are used to infer the parameter Q_{rms-PS}, which characterizes the normalization of power-law models of primordial cosmological temperature anisotropy. In particular, a 'cross'-RMS statistic is used to determine Q_{rms-PS} for a forced fit to a scale-invariant Harrison-Zel'dovich (n = 1) spectral model. Using a joint analysis of the 7-degree and 10-degree RMS temperature derived from both the 53 and 90 GHz sky maps, we find Q_{rms-PS} = 17.0^{+2.5}_{-2.1} µK when the low quadrupole is included, and Q_{rms-PS} = 19.4^{+2.3}_{-2.1} µK excluding the quadrupole. These results are consistent with the n = 1 fits from more sensitive methods (e.g. power spectrum, correlation function). The effect of the low quadrupole derived from the COBE-DMR data on the inferred Q_{rms-PS} normalization is investigated. A bias to lower Q_{rms-PS} is found when the quadrupole is included. The higher normalization for a forced n = 1 fit is then favored by the cross-RMS technique. As initially pointed out in Wright et al. (1994a) and further discussed here, analytic formulae for the RMS sky temperature fluctuations will not provide the correct normalization amplitude.

Nonlinear Dirac equations in D+1 space-time are obtained by variation of the spinor action whose Lagrangian components have the same conformal degree and whose self-interaction coupling parameter is dimensionless. In 1+1 dimensions, we show that these requirements result in the "conventional" quartic form of the nonlinear interaction, and we present the general equation for various coupling modes. These include, but are not limited to, the Thirring and Gross-Neveu models. We obtain a numerical solution for the special case of the spin- and pseudo-spin-symmetric modes.

Different stakeholders in the design of an enterprise information system have their own view on that design. To help produce a coherent design this paper presents a framework that aids in specifying relations and consistency rules between such views. ... Keywords: Conceptual modelling, Enterprise information systems, Multi-viewpoint design, View integration, Viewpoint consistency

This paper evaluates the performance of several popular corner detectors using two newly defined criteria. The majority of authors of published corner detectors have not used theoretical criteria to measure the consistency and accuracy of their algorithms. ... Keywords: Accuracy, CSS, Consistency, Corner detection, Performance evaluation

The use of constraint propagation is the main feature of any constraint solver. It is thus of prime importance to manage the propagation in an efficient and effective fashion. There are two classes of propagation algorithms for general constraints: fine-grained ... Keywords: Arc consistency, Constraint networks, Constraint programming systems, Non-binary constraints, Path consistency

We formulate a programmer-centric description of the memory consistency model provided by the Itanium architecture. This allows reasoning about programs at a non-operational level in the natural way, not obscured by the implementation details of the ... Keywords: itanium multi-processor, programmer-centric memory consistency

A relational database is inconsistent if it does not satisfy a given set of integrity constraints. Nevertheless, it is likely that most of the data in it is consistent with the constraints. In this paper we apply logic programming based on answer ... Keywords: answer set programming, consistency, databases, integrity constraints

Let a group $G$ act on a finite-dimensional vector space $V$ over an algebraically closed field $K$ of characteristic $p$. Then $\beta_{\mathrm{sep}}(G)$ is the minimal number such that, for any $V$, the invariants of degree less than or equal to this number have the same separating properties as the whole invariant ring $K[V]^{G}$. Derksen and Kemper have shown $\beta_{\mathrm{sep}}(G) \le |G|$. We show $\beta_{\mathrm{sep}}(G) = |G|$ for $p$-groups and cyclic groups, and $\beta_{\mathrm{sep}}(G) = \infty$ for infinite unipotent groups. We also show $\beta_{\mathrm{sep}}(G) \le \beta_{\mathrm{sep}}(G/N)\,\beta_{\mathrm{sep}}(N)$ for a normal subgroup $N$ of finite index.

Training Reciprocity Achieves Greater Consistency, Saves Time and Money for Idaho, Other DOE Sites. November 26, 2013. IDAHO FALLS, Idaho - Contracting companies supporting EM's cleanup program at the Idaho site volunteered to be among the first to use a new DOE training reciprocity program designed to bring more consistency to health and safety training across the complex, reduce redundancy, and realize savings and other efficiencies. The DOE Office of Health, Safety and Security (HSS) program is meant to eliminate the need for Department employees and contractors to take redundant training when they move among multiple sites in the complex.

Lattice Boltzmann simulations have been very successful in simulating liquid-gas and other multi-phase fluid systems. However, the underlying second order analysis of the equation of motion has long been known to be insufficient to consistently derive the fourth order terms that are necessary to represent an extended interface. These same terms are also responsible for thermodynamic consistency, i.e. to obtain a true equilibrium solution with both a constant chemical potential and a constant pressure. In this article we present an equilibrium analysis of non-ideal lattice Boltzmann methods of sufficient order to identify those higher order terms that lead to a lack of thermodynamic consistency. We then introduce a thermodynamically consistent forcing method.

We propose a new consistency protocol for distributed shared memory (DSM) where different shared objects are replicated at each site. This protocol was developed for the cooperative platform called CAliF (Cooperative Application Framework). This system uses the DSM to allow programmers to share objects or variables without having to manage the exchange. We present an algorithm which uses the token technique. The token is a data structure which contains the updates of shared data. These data are carried around the ring on the token, named Pilgrim. The Pilgrim protocol provides both reliable consistency and guaranteed performance according to the type of application described. The protocol is discussed and proved, and we demonstrate its qualities. Keywords: consistency protocol, cooperative work, distributed shared memory, virtual ring.

Typical social networking functionalities such as feed following are known to be hard to scale. Different from the popular approach that sacrifices consistency for scalability, in this paper we describe, implement, and evaluate a method that can simultaneously ...

We study the problem of learning a latent tree graphical model where samples are available only from a subset of variables. We propose two consistent and computationally efficient algorithms for learning minimal latent ...


In this paper we theoretically and empirically study the degree and connectivity of the Internet's scale-free topology at the autonomous system (AS) level. The basic features of the scale-free network influence the normalization constant of the degree distribution p(k). We develop a mathematical model of the Internet's scale-free topology. From this model we theoretically derive formulas for the average degree, the ratios of the kmin-degree (minimum degree) nodes and the kmax-degree (maximum degree) nodes, and the fraction of the degrees (or links) held by the richest (top best-connected) nodes. We find the average degree is larger for smaller power-law exponent λ and larger minimum or maximum degree. The ratio of the kmin-degree nodes is larger for larger λ and smaller kmin or kmax. The ratio of the kmax-degree nodes is larger for smaller λ and smaller kmax, or larger kmin. The richer nodes hold most of the total degrees of the AS-level Internet topology. In addition, we reveal the rati...
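The claimed trends in the average degree and in the kmin/kmax node ratios can be checked numerically for a finite discrete power-law distribution p(k) = C·k^(−λ). The function below is an illustrative sketch under that assumption, not the paper's own model:

```python
# Sketch: statistics of a discrete power-law degree distribution
# p(k) = C * k**(-lam) on kmin <= k <= kmax (finite-size normalization).

def powerlaw_stats(lam, kmin, kmax):
    ks = range(kmin, kmax + 1)
    c = 1.0 / sum(k ** -lam for k in ks)          # normalization constant C
    avg = sum(k * c * k ** -lam for k in ks)      # average degree <k>
    p_kmin = c * kmin ** -lam                     # fraction of kmin-degree nodes
    p_kmax = c * kmax ** -lam                     # fraction of kmax-degree nodes
    return avg, p_kmin, p_kmax
```

Evaluating it for two exponents reproduces the qualitative statements above: a smaller λ gives a larger average degree, while a larger λ concentrates more nodes at the minimum degree.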

This annual survey collects 2006 data on the number of health physics degrees awarded as well as the number of students enrolled in health physics academic programs. Thirty universities offer health physics degrees; all responded to the survey.

Degree Day Forecasts Quick and easy web-based tool that provides free 14-day ahead degree day forecasts for 1,200 stations in the U.S. and Canada. Degree Day Forecasts charts show this year, last year and three-year average. Historical degree day charts and energy usage forecasts are available from the same site. Keywords: degree days, historical weather, mean daily temperature. Validation/Testing: Degree day data provided by AccuWeather.com, updated daily at 0700. Expertise Required: No special expertise required. Simple to use. Users: Over 1,000 weekly users. Audience: Anyone who needs degree day forecasts (next 14 days) for the U.S. and Canada. Input: Select a weather station (1,200 available) and balance point temperature. Output: Charts show (1) degree day (heating and cooling) forecasts for the next 14
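Degree days of the kind this tool charts are simple to compute from daily mean temperatures. The sketch below uses the conventional 65 °F balance point (the same base used in the insolation survey above); the function name is illustrative, not part of the tool:

```python
# Illustrative heating/cooling degree-day calculation relative to a balance
# point. Each day contributes the gap between the balance point and that
# day's mean temperature, on whichever side of the base it falls.

def degree_days(daily_mean_temps_f, base_f=65.0):
    """Return (heating, cooling) degree-day totals for a list of daily means."""
    heating = sum(max(0.0, base_f - t) for t in daily_mean_temps_f)
    cooling = sum(max(0.0, t - base_f) for t in daily_mean_temps_f)
    return heating, cooling
```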

A multi-degree-of-freedom vehicle employs a compliant linkage to accommodate the need for a variation in the distance between drive wheels or drive systems which are independently steerable and drivable. The subject vehicle is provided with rotary encoders to provide signals representative of the orientation of the steering pivot associated with each such drive wheel or system, and a linear encoder which issues a signal representative of the fluctuations in the distance between the drive elements. The wheels of the vehicle are steered and driven in response to the linear encoder signal, there being provided a controller system for minimizing the fluctuations in the distance. The controller system is a software implementation of a plurality of controllers, operating at the chassis level and at the vehicle level. A trajectory interpolator receives x-displacement, y-displacement, and .theta.-displacement signals and produces trajectory signals, corresponding to interpolated control signals, to the vehicle-level controller. The x-displacement, y-displacement, and .theta.-displacement signals are received from a human operator, via a manipulable joystick.
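As a rough illustration of the idea of regulating the wheel-base fluctuation from the linear encoder signal, the simplest possible form is a proportional correction. The names and gain below are hypothetical; the patented system uses a multi-level controller hierarchy, not this one-liner:

```python
# Toy proportional controller on the inter-wheel distance error. A positive
# return value means "close the gap"; a negative one means "open it".

def distance_correction(measured_distance, nominal_distance, kp=2.0):
    """Return a speed correction that drives the wheel-base fluctuation to zero."""
    error = nominal_distance - measured_distance
    return kp * error
```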

This annual report details the number of health physics bachelor's, master's, and doctoral degrees awarded at a sampling of academic programs from 1998-2004. It also looks at health physics degrees by curriculum and the number of students enrolled in health physics degree programs at 28 U.S. universities in 2004.

This annual report details the number of health physics bachelor's, master's, and doctoral degrees awarded at a sampling of academic programs from 1998-2005. It also looks at health physics degrees by curriculum and the number of students enrolled in health physics degree programs at 30 U.S. universities in 2005.

This annual report details the number of nuclear engineering bachelor's, master's, and doctoral degrees awarded at a sampling of academic programs from 1998-2005. It also looks at nuclear engineering degrees by curriculum and the number of students enrolled in nuclear engineering degree programs at 30 U.S. universities in 2005.

Generation of a Consistent Terrestrial Net Primary Production Data Set Final Report NASA Reference Number TE/99-0005 May 3, 2001 Richard J. Olson and Jonathan M. O. Scurlock Environmental Sciences Division Oak Ridge National Laboratory Oak Ridge, Tennessee 37831-6407 This project, "Generation of a Consistent Terrestrial Net Primary Production Data Set", is a coordinated, international effort to compile global estimates of terrestrial net primary productivity (NPP) for parameterization, calibration, and validation of NPP models. The project (NASA Reference Number TE/99-0005) was funded by the National Aeronautics and Space Administration (NASA), Office of Earth Science, Terrestrial Ecology Program under Interagency Agreement number 2013-M164-A1, under

Here we present a self-consistent quasi-particle model for quark-gluon plasma and apply it to explain the non-ideal behaviour seen in lattice simulations. The basic idea, borrowed from electrodynamic plasma, is that gluons acquire mass as they propagate through the plasma due to collective effects, with the mass approximately equal to the plasma frequency. The statistical mechanics and thermodynamics of such a system are studied by treating it as an ideal gas of massive gluons. Since the mass or plasma frequency depends on density, which is itself a thermodynamic quantity, the whole problem needs to be solved self-consistently.

SURVEY UNIVERSE The survey includes degrees granted between September 1, 2007, and August 31, 2008, and fall 2008 enrollments. Thirty-one academic programs reported having nuclear engineering programs during 2008, and data was provided by all thirty-one programs. DEGREE DATA Bachelor's Degrees. The number of B.S. degrees granted in 2008 by nuclear engineering programs increased by 10% over 2007, and is the highest number reported since 1988. (See Table 1.) This is the fifth consecutive year of increases. The rate of increase in 2008 was, however, the lowest in five years. Nuclear engineering majors accounted for 89% of all B.S. degrees. (See Table 2.) Graduate Degrees. The number of master's degrees granted in 2008 increased for the sixth consecutive

Report shows number of health physics degrees increased for graduates, decreased for undergraduates in 2010. Decreased number of B.S. degrees remains higher than levels in the early 2000s. FOR IMMEDIATE RELEASE Dec. 20, 2011 FY12-09 OAK RIDGE, Tenn.-The number of health physics graduate degrees increased for both master's and doctoral candidates in 2010, but decreased for bachelor's degrees, says a report released this year by the Oak Ridge Institute for Science and Education. The ORISE report, Health Physics Enrollments and Degrees Survey, 2010 Data, surveyed 24 academic programs with enrollment and degree data and included students majoring in health physics or in an option program equivalent to a major, such as other health physics-based programs embedded in life

In this final report in the field of theoretical nuclear physics we note important accomplishments. We were confronted with "anomalous" magnetic moments by the experimentalists and were able to explain them. We found unexpected partial dynamical symmetries, completely unknown before, and were able to a large extent to explain them. The importance of a self-consistent shell model was emphasized.

A nuclear Fréchet space consisting of C∞-functions and failing the bounded approximation property. Dietmar Vogt. Abstract: An easy and transparent example is given of a nuclear Fréchet space failing the bounded approximation property. The question of Grothendieck whether every nuclear Fréchet space has the bounded approximation property was open for quite

We consider a biased molecular junction subjected to an external time-dependent electromagnetic field. We discuss local field formation due to both surface plasmon-polariton excitations in the contacts and the molecular response. Employing realistic parameters we demonstrate that such a self-consistent treatment is crucial for a proper description of the junction transport characteristics.

In some real world applications, such as spectrometry, functional models achieve better predictive performances if they work on the derivatives of order m of their inputs rather than on the original functions. As a consequence, the use of derivatives ... Keywords: Consistency, Derivatives, Functional Data Analysis, RKHS, SVM, Smoothing splines, Statistical learning

Previous researchers interested in physical assessment of speech intelligibility have largely based their predictions on preservation of spectral shape. A new approach is presented in which intelligibility is predicted to be preserved only if a transformation modifies relevant speech parameters in a consistent manner. In particular

In this annual report we illustrate the methodology of consistent data assimilation, which allows one to use the information coming from integral experiments to improve the basic nuclear parameters used in cross section evaluation. A series of integral experiments are analyzed using the EMPIRE evaluated files for 242Pu and 105Pd. In particular, irradiation experiments (PROFIL-1 and -2, TRAPU-1, -2 and -3) provide information about capture cross sections, and a critical configuration, COSMO, where fission spectral indexes were measured, provides information about fission cross sections. The observed discrepancies between calculated and experimental results are used in conjunction with the computed sensitivity coefficients and covariance matrix for nuclear parameters in a consistent data assimilation. The results obtained by the consistent data assimilation indicate that modest modifications of some key identified nuclear parameters suffice to obtain reasonable C/E. However, for some parameters such variations fall outside the range of 1σ of their initial standard deviation. This can indicate a possible conflict between the differential measurements (used to calculate the initial standard deviations) and the integral measurements used in the statistical data adjustment. Moreover, an inconsistency between the C/E of two sets of irradiation experiments (PROFIL and TRAPU) is observed for 242Pu. This concludes this project, funded by the Nuclear Physics Program of the DOE Office of Science. A proof of principle has been demonstrated for a few isotopes for this innovative methodology. However, we are still far from having explored all the possibilities and from establishing this methodology as proven and robust.
In particular, many issues are worth further investigation:
• Non-linear effects
• Flexibility of nuclear parameters in describing cross sections
• Multi-isotope consistent assimilation
• Consistency between differential and integral experiments

6 Solar Thermal Collector Shipments by Type, Price, and Trade Total Shipments, 1974-2009 Trade, 1978-2009 Price of Total Shipments, 1986-2009 Number of U.S. Manufacturers by Type of Collector, 1974-2009 Average Annual Shipments per Manufacturer, 1974-2009 U.S. Energy Information Administration / Annual Energy Review 2011 1 Prices are not adjusted for inflation. See "Nominal Dollars" in Glossary. 2 Collectors that generally operate in the temperature range of 140 degrees Fahrenheit to 180 degrees Fahrenheit but can also operate at temperatures as low as 110 degrees Fahrenheit. Special collectors-evacuated tube collectors or concentrating (focusing) collectors-are included in the medium-temperature category. 3 Collectors that generally operate at temperatures below 110 degrees Fahrenheit.
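For reference, the collector thresholds quoted above in degrees Fahrenheit convert to Celsius by the standard formula C = (F − 32) × 5/9:

```python
def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

# The thresholds above: 110 °F ≈ 43.3 °C, 140 °F = 60 °C, 180 °F ≈ 82.2 °C.
```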

For challenging radiation transport problems, hybrid methods combine the accuracy of Monte Carlo methods with the global information present in deterministic methods. One of the most successful hybrid methods is CADIS (Consistent Adjoint Driven Importance Sampling). This method uses a deterministic adjoint solution to construct a biased source distribution and consistent weight windows to optimize a specific tally in a Monte Carlo calculation. The method has been implemented into transport codes using just the spatial and energy information from the deterministic adjoint and has been used in many applications to compute tallies with much higher figures-of-merit than analog calculations. CADIS also outperforms user-supplied importance values, which usually take long periods of user time to develop. This work extends CADIS to develop weight windows that are a function of the position, energy, and direction of the Monte Carlo particle. Two types of consistent source biasing are presented: one method that biases the source in space and energy while preserving the original directional distribution and one method that biases the source in space, energy, and direction. Seven simple example problems are presented which compare the use of the standard space/energy CADIS with the new space/energy/angle treatments.
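The CADIS construction can be sketched under simplifying one-group, discrete-cell assumptions: the biased source and the weight-window centers are both derived from the deterministic adjoint solution, so that a particle born from the biased source starts exactly at its window center. Variable names are illustrative, not from any production code:

```python
# Sketch of CADIS source biasing and consistent weight windows for a source
# q_i and adjoint flux phi†_i given per cell (one energy group assumed).

def cadis(source, adjoint_flux):
    # Response estimate R = sum_i q_i * phi†_i
    response = sum(q * phi for q, phi in zip(source, adjoint_flux))
    # Biased source: q̂_i = q_i * phi†_i / R  (importance sampling)
    biased_source = [q * phi / response for q, phi in zip(source, adjoint_flux)]
    # Consistent weight-window center in cell i: w_i = R / phi†_i, so a
    # particle born from the biased source (weight q_i / q̂_i) sits exactly
    # at the center of its window.
    ww_centers = [response / phi for phi in adjoint_flux]
    return biased_source, ww_centers
```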

We present a lattice formulation of a dynamic self-consistent field (DSCF) theory that is capable of resolving interfacial structure, dynamics and rheology in inhomogeneous, compressible melts and blends of unentangled homopolymer chains. The joint probability distribution of all the Kuhn segments in the fluid, interacting with adjacent segments and walls, is approximated by a product of one-body probabilities for free segments interacting solely with an external potential field that is determined self-consistently. The effect of flow on ideal chain conformations is modeled with FENE-P dumbbells, and related to stepping probabilities in a random walk. Free segment and stepping probabilities generate statistical weights for chain conformations in a self-consistent field, and determine local volume fractions of chain segments. Flux balance across unit lattice cells yields mean-field transport equations for the evolution of free segment probabilities and of momentum densities on the Kuhn length scale. Diffusive and viscous contributions to the fluxes arise from segmental hops modeled as a Markov process, with transition rates reflecting changes in segmental interaction, kinetic energy, and entropic contributions to the free energy under flow.

Pollutant emissions such as aerosols and tropospheric ozone precursors substantially influence climate. While future century-scale scenarios for these emissions have become more realistic through the inclusion of emission controls, they still potentially lack consistency between surface pollutant concentrations and regional levels of affluence. We demonstrate a methodology combining use of an integrated assessment model and a three-dimensional atmospheric chemical transport model, whereby a reference scenario is constructed by requiring consistent surface pollutant levels as a function of regional income over the 21st century. By adjusting air pollutant emission control parameters, we improve agreement between modeled PM2.5 and economic income among world regions through time; agreement for ozone is also improved but is more difficult to achieve because of the strong influence of upwind world regions. The scenario examined here was used as the basis for one of the Representative Concentration Pathway (RCP) scenarios. This analysis methodology could also be used to examine the consistency of other pollutant emission scenarios.

Composite preform fiber architectures range from the very simple to the complex, and the extremes are typified by parallel continuous fibers and complicated three-dimensional woven structures. Subsequent processing of these preforms to produce dense composites may depend critically on the geometry of the interfiber porosity. The goal of this study is to fully characterize the structure of a 0°/90° cloth layup preform using x-ray tomographic microscopy (XTM). This characterization includes the measurement of intercloth channel widths and their variability, the transverse distribution of through-cloth holes, and the distribution of preform porosity. The structure of the intercloth porosity depends critically on the magnitude and direction of the offset between adjacent cloth layers. The structures observed include two-dimensional networks of open pipes linking adjacent holes, arrays of parallel one-dimensional pipes linking holes, and relatively closed channels exhibiting little structure, and these different structures would appear to offer very different resistances to gas flow through the preform. These measurements, and future measurements for different fiber architectures, will yield improved understanding of the role of preform structure on processing. © 1998 Materials Research Society.

The survey includes degrees granted between September 1, 2010 and August 31, 2011. Enrollment information refers to the fall term 2011. The enrollment and degree data include students majoring in nuclear engineering or in an option program equivalent to a major. Thirty-two academic programs reported having nuclear engineering programs during 2011, and data was received from all thirty-two programs. The data for two nuclear engineering programs include enrollments and degrees in health physics options that are also reported in the health physics enrollments and degrees data.

We construct a self-consistent model which describes a black hole from formation to evaporation including the back reaction from the Hawking radiation. When a null shell collapses, evaporation occurs at the beginning but eventually stops, and a horizon and singularity appear. On the other hand, in the generic collapse process of a continuously distributed null matter, the black hole evaporates completely without forming a macroscopically large horizon or singularity. We also find a stationary solution in the heat bath, which can be regarded as a normal thermodynamic object.

Extended Higgs sectors appear in many models for physics beyond the Standard Model. Current Higgs measurements at the LHC are starting to significantly constrain them. We study their Higgs coupling patterns at tree level as well as including quantum corrections. Our benchmarks include a dark singlet-doublet extension and several two-doublet setups. Using SFitter we translate the current Higgs coupling measurements for one light Higgs state into their respective parameter spaces. Finally, we show how two-Higgs-doublet models can serve as a consistent ultraviolet completion of an assumed single Standard-Model-like Higgs boson with free couplings.

A simple renormalization theory of plasma particle interactions is proposed. It primarily stems from generic properties of equilibrium distribution functions and allows one to obtain the so-called generalized Poisson-Boltzmann equation for an effective interaction potential of two chosen particles in the presence of a third one. The same equation is then strictly derived from the Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy for equilibrium distribution functions in the pair correlation approximation. This enables one to construct a self-consistent chemical model of partially ionized plasmas, correctly accounting for the close interrelation of charged and neutral components thereof. Minimization of the system free energy provides ionization equilibrium and, thus, permits one to study the plasma composition in a wide range of its parameters. Unlike standard chemical models, the proposed one allows one to study the system correlation functions and thereby to obtain an equation of state which agrees well with exact results of quantum-mechanical activity expansions. It is shown that the plasma and neutral components are strongly interrelated, which results in the short-range order formation in the corresponding subsystem. The mathematical form of the results obtained enables one to both firmly establish this fact and to determine a characteristic length of the structure formation. Since the cornerstone of the proposed self-consistent chemical model of partially ionized plasmas is an effective pairwise interaction potential, it immediately provides quite an efficient calculation scheme not only for thermodynamical functions but for transport coefficients as well.

Suppose the postulate of measurement in quantum mechanics can be extended to quantum field theory; then a local projective measurement at some moment on an object locally coupled with a relativistic quantum field will result in a projection or collapse of the wavefunctional of the combined system defined on the whole time-slice associated with the very moment of the measurement, if the relevant degrees of freedom have nonzero correlations. This implies that the wavefunctionals in the same Hamiltonian system but defined in different reference frames would collapse on different time-slices passing through the same local event where the measurement was done. Are these post-measurement states consistent with each other? We illustrate that the quantum states of the Raine-Sciama-Grove detector-field system started with the same initial Gaussian state defined on the same initial time-slice, then collapsed by the measurements on the pointlike detectors on different time-slices in different frames, will evolve to the same state of the combined system up to a coordinate transformation when compared on the same final time-slice. Such consistency is guaranteed by the spatial locality of interactions and the general covariance in a relativistic system, together with the spatial locality of measurements and the linearity of quantum dynamics in its quantum theory. Highlights: • Spatially local quantum measurements in detector-field models are studied. • Local quantum measurement collapses the wavefunctional on the whole time-slice. • In different frames wavefunctionals of a field would collapse on different time-slices. • States collapsed by the same measurement will be consistent on the same final slice.

The survey includes degrees granted between September 1, 2006 and August 31, 2007. Enrollment information refers to the fall term 2007. Twenty-nine academic programs were included in the survey universe, and 28 of the 29 responded. The report includes data by degree level including citizenship, gender, and race/ethnicity plus enrollments of junior and senior undergraduate students and graduate students.

This survey includes degrees granted between September 1, 2008 and August 31, 2009. Enrollment information refers to the fall term 2009. Twenty-four academic programs were included in the survey universe, and all twenty-four responded. The report includes data by degree level including citizenship, gender, and race/ethnicity, plus enrollments of junior and senior undergraduate students and graduate students.

Bachelor of Arts in Social Work Degree (BASW) Program The School of Social Work offers a Bachelor of Arts degree with a major in Social Work. This new BASW program is the only baccalaureate social work program in the Oregon University System. The Portland State University's School of Social Work is excited

The self-similar model of coronal transients by B. C. Low is reconsidered. Due to a modification of the basic set of the initial assumptions of the model, a new class of more consistent solutions is found. The main advantage of these new solutions is that they do not contain areas with a physically inconsistent negative pressure. Instead, the novel solutions are derived on the basis of a special prescription for the thermal pressure of the transients that guarantees, by design, its positiveness throughout the whole evolution domain. The possible importance of these solutions for understanding the physics of the transient interplanetary coronal mass ejections (ICMEs; originating from the Sun), and magnetic clouds as a subclass of these, is discussed. A practical example is cited illustrating the application of our analytic results to describe some properties of real ICMEs. Some directions and scopes for further research are outlined.

We study the surface tension of electrolyte solutions at the air/water and oil/water interfaces. Employing field-theoretical methods, and considering short-range interactions of anions with the surface, we expand the Helmholtz free energy to first-order in a loop expansion and calculate self-consistently the excess surface tension. We obtain analytically the surface-tension dependence on the ionic strength, ionic size and ion-surface interaction, as a direct generalization of the well-known Onsager-Samaras theory. Our theory fits well a wide range of concentrations for different salts using two fit parameters, reproducing the reverse Hofmeister series for anions at the air/water and oil/water interfaces.

Event processing will play an increasingly important role in constructing enterprise applications that can immediately react to business critical events. Various technologies have been proposed in recent years, such as event processing, data streams and asynchronous messaging (e.g. pub/sub). We believe these technologies share a common processing model and differ only in target workload, including query language features and consistency requirements. We argue that integrating these technologies is the next step in a natural progression. In this paper, we present an overview and discuss the foundations of CEDR, an event streaming system that embraces a temporal stream model to unify and further enrich query language features, handle imperfections in event delivery, define correctness guarantees, and define operator semantics. We describe specific contributions made so far and outline next steps in developing the CEDR system.

We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of the distribution functions are compatible with these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in the one-charge and two-charge cases by solving the constraining equations.

Pu Shi; Gao Jianhua; Wang Qun [Interdisciplinary Center for Theoretical Study and Department of Modern Physics, University of Science and Technology of China, Hefei 230026 (China)

We perform cosmological N-body simulations of the Dvali-Gabadadze-Porrati braneworld model, by solving the full non-linear equations of motion for the scalar degree of freedom in this model, the brane bending mode. While coupling universally to matter, the brane-bending mode has self-interactions that become important as soon as the density field becomes non-linear. These self-interactions lead to a suppression of the field in high-density environments, and restore gravity to General Relativity. The code uses a multi-grid relaxation scheme to solve the non-linear field equation in the quasi-static approximation. We perform simulations of a flat self-accelerating DGP model without cosmological constant. However, the type of non-linear interactions of the brane-bending mode, which are the focus of this study, are generic to a wide class of braneworld cosmologies. The results of the DGP simulations are compared with standard gravity simulations assuming the same expansion history, and with DGP simulations using the linearized equation for the brane bending mode. This allows us to isolate the effects of the non-linear self-couplings of the field which are noticeable already on quasi-linear scales. We present results on the matter power spectrum and the halo mass function, and discuss the behavior of the brane bending mode within cosmological structure formation. We find that, independently of CMB constraints, the self-accelerating DGP model is strongly constrained by current weak lensing and cluster abundance measurements.

Methods for computing Hashin-Shtrikman bounds and related self-consistent estimates of elastic constants for polycrystals composed of crystals having orthorhombic symmetry have been known for about three decades. However, these methods are underutilized, perhaps because of some perceived difficulties with implementing the necessary computational procedures. Several simplifications of these techniques are introduced, thereby reducing the overall computational burden, as well as the complications inherent in mapping out the Hashin-Shtrikman bounding curves. The self-consistent estimates of the effective elastic constants are very robust, involving a quickly converging iteration procedure. Once these self-consistent values are known, they may then be used to speed up the computations of the Hashin-Shtrikman bounds themselves. It is shown furthermore that the resulting orthorhombic polycrystal code can be used as well to compute both bounds and self-consistent estimates for polycrystals of higher-symmetry tetragonal, hexagonal, and cubic (but not trigonal) materials. The self-consistent results found this way are shown to be the same as those obtained using the earlier methods, specifically those methods designed specially for each individual symmetry type. But the Hashin-Shtrikman bounds found using the orthorhombic code are either the same or (more typically) tighter than those found previously for these special cases (i.e., tetragonal, hexagonal, and cubic). The improvement in the Hashin-Shtrikman bounds is presumably due to the additional degrees of freedom introduced into the available search space.
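The "quickly converging iteration procedure" behind such self-consistent estimates is, at its core, a fixed-point computation. A generic damped fixed-point iteration of the kind such codes use can be sketched as follows (illustrative only, not the Hashin-Shtrikman elastic-constant code itself):

```python
# Generic damped fixed-point iteration: repeatedly mix the current estimate
# with the self-consistent update until successive iterates agree to `tol`.

def self_consistent(update, x0, tol=1e-10, damping=0.5, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = (1.0 - damping) * x + damping * update(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("self-consistent iteration did not converge")
```

For a scalar example, iterating x → (x + 2/x)/2 from x0 = 1 converges to √2; in the elastic-constant setting the update instead maps a trial stiffness tensor to the self-consistent estimate built from it.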

The Product Consistency Test (PCT), American Society for Testing and Materials (ASTM) Standard C1285, is currently used worldwide for testing glass and glass-ceramic waste forms for high level waste (HLW), low level waste (LLW), and hazardous wastes. Development of the PCT was initiated in 1986 because HLW glass waste forms required extensive characterization before actual production began and required continued characterization during production (≥25 years). Non-radioactive startup was in 1994 and radioactive startup was in 1996. The PCT underwent extensive development from 1986-1994 and became an ASTM consensus standard in 1994. During the extensive laboratory testing and inter- and intra-laboratory round robins using non-radioactive and radioactive glasses, the PCT was shown to be very reproducible, to yield reliable results rapidly, to distinguish between glasses of different durability and homogeneity, and to be easily performed in shielded cell facilities with radioactive samples. In 1997, the scope was broadened to include hazardous and mixed (radioactive and hazardous) waste glasses. In 2002, the scope was broadened to include glass-ceramic waste forms, which are currently being recommended for second generation nuclear wastes yet to be generated in the nuclear renaissance. Since the PCT has proven useful for glass-ceramics with up to 75% ceramic component and has been used to evaluate Pu ceramic waste forms, the use of this test for other ceramic/mineral waste forms such as geopolymers, hydroceramics, and fluidized bed steam reformer mineralized product is under investigation.

The low-level waste (LLW) performance assessment (PA) process has been traditionally focused on disposal facilities at a few United States Department of Energy (USDOE) sites and commercial disposal facilities. In recent years, there has been a dramatic increase in the scope of the use of PA-like modeling approaches, involving multiple activities, facilities, contractors and regulators. The scope now includes, for example: (1) National Environmental Policy Act (NEPA) assessments, (2) CERCLA disposal cells, (3) Waste Determinations and High-Level Waste (HLW) Closure activities, (4) Potential on-site disposal of Transuranic (TRU) waste, and (5) In-situ decommissioning (including potential use of existing facilities for disposal). The dramatic increase in the variety of activities requiring more detailed modeling has resulted in a similar increase in the potential for inconsistency in approaches at both site and complex-wide scales. This paper includes a summary of USDOE Environmental Management (EM) sponsored initiatives and activities for improved consistency. New initiatives entitled the Performance Assessment Community of Practice and Performance Assessment Assistance Team are also introduced.

The degree of a CSP instance is the maximum number of times that a variable may appear in the scope of constraints. We consider the approximate counting problem for Boolean CSPs with bounded-degree instances for constraint languages containing the two unary constant relations {0} and {1}. When the maximum degree is at least 25 we obtain a complete classification of the complexity of this problem. It is exactly solvable in polynomial-time if every relation in the constraint language is affine. It is equivalent to the problem of approximately counting independent sets in bipartite graphs if every relation can be expressed as conjunctions of {0}, {1} and binary implication. Otherwise, there is no FPRAS unless NP=RP. For lower degree bounds, additional cases arise in which the complexity is related to the complexity of approximately counting independent sets in hypergraphs.

Site-specific total electric energy and heating oil consumption for individual residences show a very high correlation with National Weather Service airport temperature data when transformed to heating degree days. Correlations of regional total ...

The survey includes degrees granted between September 1, 2007, and August 31, 2008, and fall 2008 enrollments. Thirty-one academic programs reported having nuclear engineering programs during 2008, and data was provided by all thirty-one programs.

The survey includes degrees granted between September 1, 2006, and August 1, 2007, and fall 2007 enrollments. Thirty-one academic programs reported having nuclear engineering programs during 2007, and data was obtained for all thirty-one.

The survey includes degrees granted between September 1, 2008 and August 31, 2009, and fall 2009 enrollments. Thirty-two academic programs reported having nuclear engineering programs during 2009, and data was obtained from all thirty-two.

The survey includes degrees granted between September 1, 2007 and August 31, 2008. Enrollment information refers to the fall term 2008. Twenty-six academic programs were included in the survey universe, and all 26 programs provided data.

A new 6-degree of freedom dynamometer is presented. Six load cells measure the normal forces at the contact points of a three groove kinematic coupling. Three toggle clamps are used to preload the machine, so that it does ...

The Department of Mechanical Engineering at Texas A&M offers unique degree programs with a specialization in energy management. The most popular of the degrees offered is a professional degree, the Master of Engineering, which blends technical courses in energy management with professional development courses such as finance, management accounting, and economics. The industry-oriented degree also requires a 3-6 month internship in industry, for which the students receive academic credit. The internship program allows students to receive valuable on-the-job experience while providing industries with trained engineers to assist in solving specific problems. The overall objective of the energy management program is to train industrial energy managers who will be able to help solve one of the most urgent, long-term problems facing this country--the energy shortage.

This paper describes a puzzle introduced by German noch so, a degree operator and Negative Polarity Item. Noch so sentences allow for paraphrases containing a scalar particle (like even), suggesting that its polarity sensitivity can receive an analysis ...

Regional and national heating fuel demand is related to both weather and population density. This study analyzes the variability of population-weighted, seasonal heating degree days for the coterminous 48 states. A risk assessment of unusual ...

Degree-days are fundamental design parameters in many application fields such as power generation and consumption, agriculture, architecture, snow melt estimation, environmental energy planning, population siting, and military domains. Depending ...

Time series of approximate United States average annual per capita heating and cooling degree days for the years 1895–1983 are presented. The data reflect the combined effects of climate fluctuations and population shifts, and can be used in ...

We have developed an efficient Lagrangian formulation of manipulators with small numbers of degrees of freedom. The efficiency derives from the lack of velocities, accelerations, and generalized forces. The number of ...

We intend to develop part of the theoretical tools needed for the detection of gravitational waves coming from the capture of a compact object, 1-100 solar masses, by a Supermassive Black Hole, up to 10 billion solar masses, located at the centre of most galaxies. The analysis of the accretion activity unveils the star population around the galactic nuclei, and tests the physics of black holes and general relativity. The captured small mass is considered a probe of the gravitational field of the massive body, allowing a precise measurement of the particle motion up to the final absorption. The knowledge of the gravitational signal, strongly affected by the self-force - the orbital displacement due to the captured mass and the emitted radiation - is imperative for a successful detection. The results include a strategy for wave equations with a singular source term for all types of orbits. We are now tackling the evolution problem, first for radial fall in Regge-Wheeler gauge, and later for generic orbits in the harmonic or de Donder gauge for Schwarzschild-Droste black holes. In the Extreme Mass Ratio Inspiral, the determination of the orbital evolution demands that the motion of the small mass be continuously corrected by the self-force, i.e. the self-consistent evolution. At each of the integration steps, the self-force must be computed over an adequate number of modes; further, a differential-integral system of general relativistic equations is to be solved and the outputs regularised for suppressing divergences. Finally, for the provision of the computational power, parallelisation is under examination.

We present an efficient algorithm for subdividing non-uniform B-splines of arbitrary degree in a manner similar to the Lane-Riesenfeld subdivision algorithm for uniform B-splines of arbitrary degree. Our algorithm consists of doubling the control points followed by d rounds of non-uniform averaging similar to the d rounds of uniform averaging in the Lane-Riesenfeld algorithm for uniform B-splines of degree d. However, unlike the Lane-Riesenfeld algorithm, which follows most directly from the continuous convolution formula for the uniform B-spline basis functions, our algorithm follows naturally from blossoming. We show that our knot insertion method is simpler and more efficient than previous knot insertion algorithms for non-uniform B-splines.
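The non-uniform averaging rule itself is not given in this record, but the uniform Lane-Riesenfeld scheme it generalizes (double the control points, then apply d rounds of pairwise averaging) can be sketched as follows; the function name and list-based control polygon are illustrative, not from the record:

```python
def lane_riesenfeld(points, degree, rounds=1):
    """Uniform Lane-Riesenfeld subdivision for a uniform B-spline of the
    given degree: duplicate every control point, then apply `degree`
    rounds of pairwise midpoint averaging."""
    for _ in range(rounds):
        # Step 1: double the control points.
        doubled = [p for p in points for _ in (0, 1)]
        # Step 2: `degree` rounds of uniform averaging of adjacent points.
        for _ in range(degree):
            doubled = [(a + b) / 2 for a, b in zip(doubled, doubled[1:])]
        points = doubled
    return points
```

For degree 2 this reproduces Chaikin's corner-cutting scheme; the paper's contribution is replacing these uniform midpoint averages with non-uniform, knot-dependent averages derived from blossoming.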

In this theoretical study, we analyze quantum walks on complex networks, which model network-based processes ranging from quantum computing to biology and even sociology. Specifically, we analytically relate the average long time probability distribution for the location of a unitary quantum walker to that of a corresponding classical walker. The distribution of the classical walker is proportional to the distribution of degrees, which measures the connectivity of the network nodes and underlies many methods for analyzing classical networks including website ranking. The quantum distribution becomes exactly equal to the classical distribution when the walk has zero energy and at higher energies the difference, the so-called quantumness, is bounded by the energy of the initial state. We give an example for which the quantumness equals a Renyi entropy of the normalized weighted degrees, guiding us to regimes for which the classical degree-dependent result is recovered and others for which quantum effects dominate.

We consider weather radar measurements at simultaneous transmission and simultaneous reception of horizontal and vertical polarizations and show that the degree of polarization at simultaneous transmit (p_s) is related to differential reflectivity and copolar correlation coefficient at simultaneous transmit (namely, Z_DRs and ρ_hvs). We evaluate the potential of the degree of polarization at simultaneous transmit for weather radar applications. Ultimately, we explore the consequences of adjusting the transmit polarization state of dual-polarization weather radars to circular polarization.
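The record does not spell out how p_s is computed; as a hedged illustration, the standard degree of polarization obtained from the 2x2 coherency matrix of the horizontal and vertical channel voltages can be sketched as below (the function name and sample-mean estimator are illustrative assumptions, not the paper's method):

```python
import numpy as np

def degree_of_polarization(eh, ev):
    """Standard degree of polarization from the 2x2 coherency matrix
    J of the horizontal/vertical complex voltages:
    p = sqrt(1 - 4 det(J) / tr(J)^2)."""
    J = np.array([[np.mean(eh * eh.conj()), np.mean(eh * ev.conj())],
                  [np.mean(ev * eh.conj()), np.mean(ev * ev.conj())]])
    tr = np.trace(J).real
    det = np.linalg.det(J).real
    # Clamp against tiny negative round-off before the square root.
    return float(np.sqrt(max(0.0, 1.0 - 4.0 * det / tr**2)))
```

Perfectly correlated channels give p = 1, while independent (unpolarized) channels give p near 0.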

The seasonal cycle in the volume and formation rate of Eighteen Degree Water (EDW) in the North Atlantic is quantified over the 3-yr period from 2004 to 2006. The EDW layer is defined as all waters that have a temperature between 17° and 19°C. ...

Nuclear safeguards are a set of activities to verify that a State is living up to its international undertakings not to use nuclear programs for nuclear weapons purposes. International Atomic Energy Agency (IAEA) uses a hierarchical assessment system ... Keywords: cumulative belief degree, decision making, fuzzy linguistic terms, nuclear safeguards

Joint Degree Program in Social Work and Law (MSW and JD): The complexity of current national debates and programs calls for training that combines social work advocacy activities, including clinical practice, with the law. For good standing and graduation, the MSW Program follows Graduate School requirements.

This paper reports on the commissioning and first running experience of the CMS Zero Degree Calorimeters during December 2009. All channels worked correctly. The ZDCs were timed into the data acquisition system using beam splash events. These data also allowed us to make a first estimate of channel-by-channel variations in gain.

We investigated the spawning patterns of Chinook salmon Oncorhynchus tshawytscha on the lower Cowlitz River, Washington (USA) using a unique set of fine- and coarse-scale temporal and spatial data collected during bi-weekly aerial surveys conducted in 1991-2009 (500 m to 28 km resolution) and 2008-2009 (100-500 m resolution). Redd locations were mapped from a helicopter during 2008 and 2009 with a hand-held global positioning system (GPS) synchronized with in-flight audio recordings. We examined spatial patterns of Chinook salmon redd reoccupation among and within years in relation to segment-scale geomorphic features. Chinook salmon spawned in the same sections each year with little variation among years. On a coarse scale, five years (1993, 1998, 2000, 2002, and 2009) were compared for reoccupation. Redd locations were highly correlated among years resulting in a minimum correlation coefficient of 0.90 (adjusted P = 0.002). Comparisons on a fine scale (500 m) between 2008 and 2009 also revealed a high degree of consistency among redd locations (P < 0.001). On a finer temporal scale, we observed that salmon spawned in the same sections during the first and last week (2008: P < 0.02; and 2009: P < 0.001). Redds were clustered in both 2008 and 2009 (P < 0.001). Regression analysis with a generalized linear model at the 500-m scale indicated that river kilometer and channel bifurcation were positively associated with redd density, whereas sinuosity was negatively associated with redd density. Collecting data on specific redd locations with a GPS during aerial surveys was logistically feasible and cost effective and greatly enhanced the spatial precision of Chinook salmon spawning surveys.

Fuel includes any combustible gas or liquid, by whatever name the gas or liquid may be known or sold, of a kind used in an internal combustion engine for the generation of power to propel a motor vehicle on the highways, except fuel that is subject to the tax imposed by the Motor Vehicle Fuel License Tax Law and the Diesel Fuel Tax Law. For example, fuel includes, but is not limited to, liquefied petroleum gases, kerosene, distillate, stove oil, natural gas in liquid or gaseous form, and alcohol fuels. “Alcohol fuel” includes: ethanol (ethyl alcohol), methanol (methyl alcohol), or blends of gasoline and alcohol (including any denaturant) containing 15 percent, or less, gasoline by volume measured at 60 degrees Fahrenheit. “Natural gas” means naturally occurring mixtures of hydrocarbon gases and vapors consisting principally of methane, whether in gaseous or liquid form. The taxable unit for compressed natural gas (gaseous form) is 100 cubic feet of gas measured at 14.73 pounds of pressure per square inch at 60 degrees Fahrenheit. The taxable unit for liquid natural gas and other liquid fuels is the United States gallon, which is 231 cubic inches. To convert liters to gallons, the quantity of liters shall be multiplied by 0.26417 to determine the equivalent quantity in gallons. The resulting figure should be rounded to the nearest tenth of a gallon.
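The liter-to-gallon conversion prescribed in the last two sentences can be expressed directly; the function name is illustrative:

```python
def liters_to_gallons(liters):
    """Convert liters to U.S. gallons as specified in the record:
    multiply by 0.26417, then round to the nearest tenth of a gallon."""
    return round(liters * 0.26417, 1)
```

For example, 100 liters converts to 26.417 gallons, which rounds to 26.4.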

Surface measurements of solar irradiance of the atmosphere were made by a multipurpose computer-controlled scanning photometer at the Rattlesnake Mountain Observatory. The observatory is located at 46.4°N, 119.6°W at an elevation of 1088 m above mean sea level. The photometer measures the attenuation of direct solar radiation for different wavelengths using 12 filters. Five of these filters (i.e., at 428 nm, 486 nm, 535 nm, 785 nm, and 1010 nm, with respective half-power widths of 2, 2, 3, 18, and 28 nm) are suitable for monitoring variations in the total optical depth of the atmosphere. Total optical depths for the five wavelength bands were derived from solar irradiance measurements taken at the observatory from August 5, 1979, to September 2, 1994; these total optical depth data are distributed with this numeric data package (NDP). To determine the contribution of atmospheric aerosols to the total optical depths, the effects of Rayleigh scattering and ozone absorption were subtracted (other molecular scattering was minimal for the five filters) to obtain total column aerosol optical depths. The total aerosol optical depths were further decomposed into tropospheric and stratospheric components by calculating a robustly smoothed mean background optical depth (tropospheric component) for each wavelength using data obtained during periods of low stratospheric aerosol loading. By subtracting the smoothed background tropospheric aerosol optical depths from the total aerosol optical depths, residual aerosol optical depths were obtained. These residuals are good estimates of the stratospheric aerosol optical depth at each wavelength and may be used to monitor the long-term effects of volcanic eruptions on the atmosphere. These data are available as an NDP from the Carbon Dioxide Information Analysis Center (CDIAC), and the NDP consists of this document and a set of computerized data files.

We analyze Niels Bohr's proposed two-slit interference experiment with highly charged particles, which argues that the consistency of elementary quantum mechanics requires that the electromagnetic field be quantized. In the experiment a particle's path through the slits is determined by measuring the Coulomb field that it produces at large distances; under these conditions the interference pattern must be suppressed. The key is that as the particle's trajectory is bent in diffraction by the slits it must radiate, and the radiation must carry away phase information. Thus the radiation field must be a quantized dynamical degree of freedom. On the other hand, if one similarly tries to determine the path of a massive particle through an interferometer by measuring the Newtonian gravitational potential the particle produces, the interference pattern would have to be finer than the Planck length and thus indiscernible. Unlike for the electromagnetic field, Bohr's argument does not imply that the gravitational field must be quantized.

Overview: Obtain a master's degree in social work from the fully-accredited School of Social Work. The foundation courses of this three-year program follow the core areas of social work practice, and the program's faculty represents a wide range of talents and expertise across a range of social work applications.

A method is given for detecting, indicating, and controlling the degree of fluidization in a fluid-bed reactor into which powdered material is fed. The method comprises admitting gas into the reactor, inserting a spring-supported rod into the powder bed of the reactor, exciting the rod to vibrate at its resonant frequency, deriving a signal responsive to the amplitude of vibration of the rod and spring, the signal being directly proportional to the rate of flow of the gas through the reactor, displaying the signal to provide an indication of the degree of fluidization within the reactor, and controlling the rate of gas flow into the reactor until said signal stabilizes at a constant value to provide substantially complete fluidization within the reactor. (AEC)

ORISE report shows graduation, enrollment rates for nuclear engineering candidates are still at highest ranges reported since 1980s; report also shows shifts in career opportunities beyond graduation in nuclear utilities. FOR IMMEDIATE RELEASE, Nov. 2, 2011 (FY12-04). OAK RIDGE, Tenn.: After a one-year decline, the number of graduate and undergraduate nuclear engineering degrees earned in the United States bounced back in 2010. A recent report from the Oak Ridge Institute for Science and Education shows enrollments of both undergraduate and graduate nuclear engineering students are still in the highest ranges reported since the early 1980s. Despite the continued growth trend in enrollments and degrees, the report also revealed that the reported plans of graduates show fewer had plans to

Based on the concept and techniques of first-passage probability in Markov chain theory, this letter provides a rigorous proof for the existence of the steady-state degree distribution of the scale-free network generated by the Barabasi-Albert (BA) model, and mathematically re-derives the exact analytic formulas of the distribution. The approach developed here is quite general, applicable to many other scale-free types of complex networks.
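The exact formulas re-derived in this record are not reproduced in the abstract; the widely quoted closed form for the BA steady-state degree distribution, P(k) = 2m(m+1)/[k(k+1)(k+2)] for k >= m (where each new node attaches m edges), can be checked numerically as a hedged illustration:

```python
def ba_degree_distribution(k, m):
    """Steady-state degree distribution of the Barabasi-Albert model
    with m edges per new node: P(k) = 2m(m+1) / (k(k+1)(k+2)) for k >= m,
    which decays as k^-3 for large k."""
    if k < m:
        return 0.0
    return 2 * m * (m + 1) / (k * (k + 1) * (k + 2))
```

Summing P(k) over k >= m telescopes to 1, consistent with a proper probability distribution.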

We have developed and tested (calibration, linearity, and cross-axis errors) a new six-degree-of-freedom mechanical seismic sensor for collocated measurements of three translational and three rotational ground motion velocity components. The device consists of standard geophones arranged in parallel pairs to detect spatial gradients. The instrument operates in a high-frequency range (above the natural frequency of the geophones, 4.5 Hz). Its theoretical sensitivity limit in this range is 10^-9 m/s in ground velocity and 10^-9 rad/s in rotation rate. Small size and weight, and easy installation and maintenance make the instrument useful for local-earthquake recording and seismic prospecting.

In this paper, we use a new method to decrease the parameterized complexity bound for finding the minimum vertex cover of connected max-degree-3 undirected graphs. The key operation of this method is reduction of the size of a particular subset of edges, introduced in this paper and called the "real-cycle" subset. Using "real-cycle" reductions alone, we compute a complexity bound of $O(1.15855^k)$, where $k$ is the size of the optimal vertex cover. Combined with other techniques, the bound can be further improved to $O(1.1504^k)$. This is currently the best complexity bound.

Estimating photo-consistency is one of the most important ingredients for any 3D stereo reconstruction technique that is based on a volumetric scene representation. This paper presents a new, illumination invariant photo-consistency measure for high ...

Experimental observations of synchrotron radiation diffraction from a thin surface layer at a 90-degree Bragg reflection are reported and discussed. The synchrotron experiments were performed using a bending magnet source at the European Synchrotron Radiation Facility (ESRF) in France and undulator sources at the Advanced Photon Source (APS) in the U.S. and SPring-8 in Japan. Thin (0.5, 1.0 and 1.5 micron) InGaAs films deposited on a GaAs (100) substrate were studied near the 90-degree condition using the GaAs (800) reflection. A slight, less than 0.1%, difference in the lattice spacing between the layer and the substrate is sufficient to allow a direct and exclusive observation of the diffraction profile from a thin layer as if it were a 'free-standing' thin crystal. This research opens new possibilities for x-ray optical schemes and the development of novel analytical techniques for surface/interface x-ray diffraction studies.

One of the most important parameters to consider when designing a manipulator is the number of degrees of freedom (DOFs). This article focuses on the question: How many DOFs are necessary and sufficient for fault tolerance, and how should these DOFs be distributed along the length of the manipulator? A manipulator is fault tolerant if it can complete its task even when one of its joints fails and is immobilized. The number of DOFs needed for fault tolerance strongly depends on the knowledge available about the task. In this article, two approaches are explored. First, for the design of a general purpose fault-tolerant manipulator, it is assumed that neither the exact task trajectory nor the redundancy resolution algorithm is known a priori and the manipulator has no joint limits. In this case, two redundant DOFs are necessary and sufficient to sustain one joint failure, as is demonstrated in two design templates for spatial fault-tolerant manipulators. In the second approach, both the Cartesian task path and the redundancy resolution algorithm are assumed to be known. The design of such a task-specific fault-tolerant manipulator requires only one degree of redundancy. 22 refs., 11 figs., 2 tabs.

In Korea, heating degree days (HDD) and cooling degree days (CDD) have been widely used as climatic indicators for the assessment of the impact of climate change, but arbitrary or customary base temperatures have been used for calculation of HDD ...
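The HDD/CDD computation referenced in several of these records reduces to summing daily departures from a base temperature; a minimal sketch follows, with an illustrative function name and a default base of 18.3 °C (the 65 °F base mentioned elsewhere in these records):

```python
def degree_days(daily_mean_temps, base=18.3):
    """Heating and cooling degree days from daily mean temperatures.
    HDD accumulates degrees below the base temperature; CDD accumulates
    degrees above it. The base of 18.3 C corresponds to 65 F."""
    hdd = sum(max(0.0, base - t) for t in daily_mean_temps)
    cdd = sum(max(0.0, t - base) for t in daily_mean_temps)
    return hdd, cdd
```

A single day with a mean of 10 °C contributes 8.3 heating degree days against an 18.3 °C base, and a day at exactly the base contributes nothing to either total.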

The complicated interaction of the sludge compost components makes the compost maturity degree judging system exhibit non-linearity and uncertainty. According to the physical circumstances of sludge compost, a compost maturity degree ...

We derive upper bounds for the number of degrees of freedom of two-dimensional Navier-Stokes turbulence freely decaying from a smooth initial vorticity field $\omega(x,y,0)=\omega_0$. This number, denoted by $N$, is defined as the minimum dimension such that for $n\ge N$, arbitrary $n$-dimensional balls in phase space centred on the solution trajectory $\omega(x,y,t)$, for $t>0$, contract under the dynamics of the system linearized about $\omega(x,y,t)$. In other words, $N$ is the minimum number of greatest Lyapunov exponents whose sum becomes negative. It is found that $N\le C_1R_e$ when the phase space is endowed with the energy norm, and $N\le C_2R_e(1+\ln R_e)^{1/3}$ when the phase space is endowed with the enstrophy norm. Here $C_1$ and $C_2$ are constants and $R_e$ is the Reynolds number defined in terms of $\omega_0$, the system length scale, and the viscosity $\nu$.

The survey includes degrees granted between September 1, 2010 and August 31, 2011. Enrollment information refers to the fall term 2011. The enrollment and degree data include students majoring in health physics or in an option program equivalent to a major. Twenty-four academic programs reported having health physics programs during 2011. The data for two health physics options within nuclear engineering programs are also included in the enrollments and degrees that are reported in the nuclear engineering enrollments and degrees data.

Operating system reliability measurements (period of measurement, hours used, total defects, total reboots) are summarized in Table 1; the rate at which defects occur within a software system is known as the defect density, usually measured per unit of software size.

We present a second order self-consistent implicit/explicit (methods that use the combination of implicit and explicit discretizations are often referred to as IMEX (implicit/explicit) methods [2,1,3]) time integration technique for solving radiation ... Keywords: Radiation hydrodynamics, Self-consistent IMEX method

Object-oriented modeling favors the modeling of object behavior from different viewpoints and the successive refinement of behavioral models in the development process. This gives rise to consistency problems of behavioral models. The absence of a formal ... Keywords: CSP, UML, behavioral consistency, object-oriented modeling

There is an increasing demand for the runtime reconfiguration of distributed systems in response to changing environments and evolving requirements. Reconfiguration must be done in a safe and low-disruptive way. In this paper, we propose version consistency ... Keywords: component-based distributed system, dynamic reconfiguration, version-consistency

A Performance Comparison of Homeless and Home-based Lazy Release Consistency Protocols in Software Shared Memory, based on lazy release consistency. In particular, we compare the performance of Princeton's home-based protocol; most of the applications' data were migratory, while the home-based protocol performed better for one.

The complicated interaction of the sludge compost components makes the compost maturity degree judging system exhibit non-linearity and uncertainty. According to the physical circumstances of sludge compost, a compost maturity degree ... Keywords: Compost, Maturity degree, Radial basis function network, Modeling

We use very precise frequencies of low-degree solar-oscillation modes measured from 4752 days of data collected by the Birmingham Solar-Oscillations Network (BiSON) to derive seismic information on the solar core. We compare these observations to results from a large Monte Carlo simulation of standard solar models, and use the results to constrain the mean molecular weight of the solar core, and the metallicity of the solar convection zone. We find that only a high value of solar metallicity is consistent with the seismic observations. We can determine the mean molecular weight of the solar core to a very high precision, and, dependent on the sequence of Monte Carlo models used, find that the average mean molecular weight in the inner 20% by radius of the Sun ranges from 0.7209 to 0.7231, with uncertainties of less than 0.5% on each value. Our lowest seismic estimate of solar metallicity is Z=0.0187 and our highest is Z=0.0239, with uncertainties in the range of 12--19%. Our results indicate that the discrepancies between solar models constructed with low metallicity and the helioseismic observations extend to the solar core and thus cannot be attributed to deficiencies in the modeling of the solar convection zone.

We investigate the connection between local and global dynamics of two N-degree of freedom Hamiltonian systems with different origins describing one-dimensional nonlinear lattices: The Fermi-Pasta-Ulam (FPU) model and a discretized version of the nonlinear Schrodinger equation related to Bose-Einstein Condensation (BEC). We study solutions starting in the vicinity of simple periodic orbits (SPOs) representing in-phase (IPM) and out-of-phase motion (OPM), which are known in closed form and whose linear stability can be analyzed exactly. Our results verify that as the energy E increases for fixed N, beyond the destabilization threshold of these orbits, all positive Lyapunov exponents exhibit a transition between two power laws, occurring at the same value of E. The destabilization energy E_c per particle goes to zero as N goes to infinity following a simple power-law. However, using SALI, a very efficient indicator we have recently introduced for distinguishing order from chaos, we find that the two Hamiltonians have very different dynamics near their stable SPOs: For example, in the case of the FPU system, as the energy increases for fixed N, the islands of stability around the OPM decrease in size, the orbit destabilizes through period-doubling bifurcation and its eigenvalues move steadily away from -1, while for the BEC model the OPM has islands around it which grow in size before it bifurcates through symmetry breaking, while its real eigenvalues return to +1 at very high energies. Still, when calculating Lyapunov spectra, we find for the OPMs of both Hamiltonians that the Lyapunov exponents decrease following an exponential law and yield extensive Kolmogorov--Sinai entropies per particle, in the thermodynamic limit of fixed energy density E/N with E and N arbitrarily large.

This paper provides a theoretical framework for controlling a manipulator with hyper degrees of freedom (HDOF) . An HDOF manipulator has the capability to achieve various kinds of tasks. To make full use of its capability, shape control is proposed here; that is, not only the tip of a manipulator, but also its whole body is controlled. To formulate control objectives for shape control, the authors define a shape correspondence between an HDOF manipulator and a spatial curve that prescribes a desired shape. The shape correspondence is defined by using solutions of a nonlinear optimization problem termed the shape-inverse problem. They give theorems on the existence of the solutions, and on an existence region that allows them to convert shape-control problems into more tractable ones. A shape-regulation control problem is considered first to bring an HDOF manipulator onto a given time-invariant curve. The idea of estimating the desired curve parameters is the crucial key to solving the problem by Lyapunov design. The derived shape-regulation law includes the estimator, which infers the desired curve parameters corresponding to the desired joint positions on the curve. The idea of the desired curve-parameter estimation is also effective for shape tracking where a time-varying curve is used for prescribing a moving desired shape. Considering an estimator with second-order dynamics enables the authors to find two shape-tracking control laws by utilizing conventional tracking methods in manipulator control. They show the simulation results of applying the derived shape-tracking control laws to a 20-DOF manipulator.

An algorithm to generate wave fields consistent with forecasts from the official U.S. tropical cyclone forecast centers has been made available in near–real time to forecasters since summer 2007. The algorithm removes the tropical cyclone from ...

The consistency between rainfall projections obtained from direct climate model output and statistical downscaling is evaluated. Results are averaged across an area large enough to overcome the difference in spatial scale between these two types ...

processing. In contrast to previous techniques that handle node failures, our approach also tolerates network failures and network partitions. The approach is based on a principled trade-off between consistency and availability ...

Fractal dimensions derived from log–log variograms are useful for characterizing spatial structure and scaling behavior in snow depth distributions. This study examines the temporal consistency of snow depth scaling features at two sites using ...

... on a single observation of the path till the time it reaches a distant site, and prove that the estimator is consistent as the distant site tends to infinity. We also explore the numerical performance of our ...

A comprehensive and cohesive aerosol measurement record with consistent, well-understood uncertainties is a prerequisite to understanding aerosol impacts on long-term climate and environmental variability. Objectives to attaining such an ...

The diffusion–dissipation parameterizations usually adopted in GCMs are not physically consistent. Horizontal momentum diffusion, applied in the form of a hyperdiffusion, does not conserve angular momentum and the associated dissipative heating ...

Applying wavelength scaling, dimensionally consistent expressions of the ocean surface friction coefficient can be developed for both wind sea and mixed sea in the ocean. For a wind sea with a monopeak wave spectrum, the natural choice of the ...

A multigrid numerical method has been applied to a three-dimensional, high-resolution diagnostic model for flow over complex terrain using a mass-consistent approach. The theoretical background for the model is based on a variational analysis ...

A new algorithm to generate wave heights consistent with tropical cyclone official forecasts from the Joint Typhoon Warning Center (JTWC) has been developed. The process involves generating synthetic observations from the forecast track and the ...

A method is presented for deriving physically consistent profiles of temperature, humidity, and cloud liquid water content. This approach combines a ground-based multichannel microwave radiometer, a cloud radar, a lidar-ceilometer, the nearest ...

Solar Cookers to Bring Hope to Earthquake Victims. March 8, 2010 - 11:00am. How does it work? The type of cooker distributed by Solar Cookers International consists of two parts: a heat-resistant plastic bag placed around a dark-colored cooking pot. When sunlight passes through the bag and hits the pot, it's converted into heat energy. The heat energy can't get out of the plastic bag as easily as the light got in, which traps the heat inside. This allows cookers to reach temperatures around 250 degrees Fahrenheit, high enough to boil water. January's devastating earthquake made Haiti's previous power infrastructure problems even worse. According to the World Bank, Haitians meet about 70 percent of their power needs by burning firewood or charcoal.
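As a quick sanity check on the temperature quoted above, a standard Fahrenheit-to-Celsius conversion (the formula is general knowledge, not taken from the record) confirms that 250 degrees Fahrenheit is indeed above the boiling point of water:

```python
def fahrenheit_to_celsius(deg_f):
    """Convert a temperature from degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

cooker_temp_c = fahrenheit_to_celsius(250.0)
print(round(cooker_temp_c, 1))  # 121.1
# Above 100 degrees C, so hot enough to boil water at sea level.
assert cooker_temp_c > 100.0
```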


The influence of hole-hole propagation, in addition to the conventional particle-particle propagation, on the energy per nucleon and the momentum distribution is investigated. The results are compared to Brueckner-Hartree-Fock (BHF) calculations with a continuous choice and a conventional choice for the single-particle spectrum. The Bethe-Goldstone equation has been solved using realistic $NN$ interactions. Also, the structure of the nucleon self-energy in nuclear matter is evaluated. All the self-energies are calculated self-consistently. Starting from the BHF approximation without the usual angle-average approximation, the effects of hole-hole contributions and a self-consistent treatment within the framework of the Green function approach are investigated. Using the self-consistent self-energy, the hole and particle self-consistent spectral functions, including the particle-particle and hole-hole ladder contributions in nuclear matter, are calculated using realistic $NN$ interactions. We find that the difference in binding energy between the two results, i.e., BHF and self-consistent Green function, is not large. This explains why the BHF approximation can neglect the 2h1p contribution.

The Born rule is at the foundation of quantum mechanics and transforms our classical way of understanding probabilities by predicting that interference occurs between pairs of independent paths of a single object. One consequence of the Born rule is that three-way (three-path) quantum interference does not exist. In order to test the consistency of the Born rule, we examine detection probabilities in three-path interference using an ensemble of spin-1/2 quantum registers in liquid state nuclear magnetic resonance (LSNMR). As a measure of the consistency, we evaluate the ratio of three-way interference to two-way interference. Our experiment bounded the ratio to the order of $10^{-3} \pm 10^{-3}$, and hence it is consistent with Born's rule.

The real world is dynamic in nature, so techniques attempting to model it should take this dynamicity into consideration. The well-known Constraint Satisfaction Problem (CSP) can be extended in this way into the so-called Dynamic Constraint Satisfaction Problem (DynCSP), which supports adding and removing constraints at runtime. As arc consistency is one of the major techniques for solving CSPs, its dynamic version is of particular interest for DynCSPs. This paper presents an improved version of the AC|DC-2 algorithm for maintaining maximal arc consistency after constraint retraction. This improvement leads to runtimes better than those of the so-far-fastest dynamic arc consistency algorithm, DnAC-6, while keeping memory consumption low. Moreover, the proposed algorithm is open in the sense that it can use either the non-optimal AC-3 algorithm, keeping memory consumption minimal, or the optimal AC-3.1 algorithm, improving runtime for constraint addition at the cost of increased memory consumption.
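For readers unfamiliar with arc consistency, a minimal sketch of the classic static AC-3 algorithm that such dynamic variants build on may help. This is a generic textbook illustration, not the AC|DC-2 or DnAC-6 algorithm from the abstract, and all names in it are hypothetical:

```python
from collections import deque

def ac3(domains, constraints):
    """Enforce arc consistency with the classic AC-3 algorithm.

    domains:     dict mapping variable -> set of values
    constraints: dict mapping directed arc (x, y) -> predicate over (vx, vy)
    Returns False if some domain is wiped out, True otherwise.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove values of x that have no support in y's domain.
        removed = {vx for vx in domains[x]
                   if not any(pred(vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed
            if not domains[x]:
                return False  # domain wipe-out: no solution
            # Re-examine arcs pointing at x, since supports may be gone.
            queue.extend((z, w) for (z, w) in constraints if w == x)
    return True

# Example: enforce x < y over small integer domains.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b,
        ("y", "x"): lambda a, b: b < a}
ac3(doms, cons)
print(doms)  # {'x': {1, 2}, 'y': {2, 3}}
```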

The solubility of hydrogen (H) in plutonium metal (Pu) was measured in the temperature range of 475 to 825{degree}C for unalloyed Pu (UA) and in the temperature range of 475 to 625{degree}C for Pu containing two-weight-percent gallium (TWP). For TWP metal, in the temperature range 475 to 600{degree}C, the saturated solution has a maximum hydrogen-to-plutonium ratio (H/Pu) of 0.00998 and the standard enthalpy of formation ({Delta}H{degree}{sub f(s)}) is (-0.128 {plus minus} 0.0123) kcal/mol. The phase boundary of the solid solution in equilibrium with plutonium dihydride (PuH{sub 2}) is temperature independent. In the temperature range 475 to 625{degree}C, UA metal has a maximum solubility at H/Pu = 0.011. The phase boundary between the solid solution region and the metal+PuH{sub 2} two-phase region is temperature dependent. The solubility of hydrogen in UA metal was also measured in the temperature range 650 to 825{degree}C with {Delta}H{degree}{sub f(s)} = (-0.104 {plus minus} 0.0143) kcal/mol and {Delta}S{degree}{sub f(s)} = 0. The phase boundary is temperature dependent and the maximum hydrogen solubility has H/Pu = 0.0674 at 825{degree}C. 52 refs., 28 figs., 9 tabs.

The extension of Green's functions techniques to the complex energy plane provides access to fully dressed quasi-particle properties from a microscopic perspective. Using self-consistent ladder self-energies, we find both spectra and lifetimes of such quasi-particles in nuclear matter. With a consistent choice of the group velocity, the nucleon mean-free path can be computed. Our results indicate that, for energies above 50 MeV at densities close to saturation, a nucleon has a mean-free path of 4 to 5 femtometers.

TYPES OF PAINT (DIY CHECKLIST). Paint consists of pigments, additives and binders in an oil or water base. The most commonly used paints in DIY projects are water- and oil-based. Choosing the correct paint type for your job will require decisions based on both aesthetic and technical requirements. WATER-BASED ADVANTAGES: quick drying time (1 - 6 hours) ...

Changing System Interfaces Consistently: a New Refinement Strategy for CSP B (Steve Schneider). We study refinement in the context of CSP B. Our motivation is to include this notion of refinement within the CSP B framework, allowing both the events of a CSP process and the B machines to change when refining a system. Notions of refinement based ...

(Smart) Look-Ahead Arc Consistency and the Pursuit of CSP Tractability, Hubie Chen and Victor Dalmau. The constraint satisfaction problem (CSP) can be formulated as the problem of deciding, given a pair (A, B) of relational structures, whether or not there is a homomorphism from A to B. Although the CSP is in general ...

The purpose of this paper is the calculation of mass-consistent wind velocity fields over complex orography on the basis of existing measurements. Measured data are used to generate an initial wind velocity field that in general does not satisfy ...

Nuclear Databases: A National Resource. Nuclear databases consist of carefully organized scientific information that has been gathered over 50 years of low-energy nuclear physics research worldwide. These powerful databases have enormous value and represent a genuine national resource. Six core nuclear ...

It is shown that the master equation introduced by Jones & Hore and purported to describe radical-ion-pair reactions is not self-consistent. This is because the average of single-molecule realizations does not reproduce the predictions of the master equation.

We present a consistent numerical model for coupling radiation to hydrodynamics at low Mach number. The hydrodynamical model is based on a low-Mach-number asymptotic limit of compressible flow that removes acoustic wave propagation while retaining compressibility ... Keywords: Diffusion flame, Low-Mach number flows, M1 model, Natural convection, Radiation hydrodynamics

A Scheduling Algorithm for Consistent Monitoring Results with Solar Powered Embedded Systems. Obtaining consistent monitoring results is a critical task for solar-powered wireless high-power embedded systems. Our algorithm relies on an energy ... [table fragment: data rates from a few bytes per second up to 2 MB per second; peak power 198 mW versus 2200 mW] Solar harvesting is one of the most ...

This paper presents a new consistent and stabilized finite-element formulation for fourth-order incompressible flow problems. The formulation is based on the C^0-interior penalty method, the Galerkin least-square (GLS) scheme, which assures that the ... Keywords: Discontinuous Galerkin methods, Fourth-order problems, GLS stability, Second gradient

It is proved that the bilateral consistent prekernel is not empty and intersects the core of (boundary) balanced games. The proof is introduced in a general framework, which enables us to apply it to pure exchange economy environments. As a result a ...

This report provides equations, based on analyses and test data, for determining the directional stress indices and stress intensification factors (SIFs) for 90-degree elbows. Present methodologies used to determine these parameters are generally overly conservative. The report contains results of an investigation into the stress intensification factors and directional stress indices of 90-degree elbows.

Pre-medical Studies A900, Division of Biomedical and Life Sciences Degree Schemes. Our Pre-medical Studies course provides an opportunity for entry into medical education for individuals with non-... backgrounds, allowing them to enter year 1 of the Liverpool University MBChB medical degree, studying at Lancaster or Liverpool.

In the recently introduced model for cleaning a graph with brushes, we use a degree-greedy algorithm to clean a random d-regular graph on n vertices (with dn even). We then use a differential equations method to find the (asymptotic) ... Keywords: cleaning process, degree-greedy algorithm, differential equations method, random d-regular graphs

Joint Professional/Graduate Degree Program FORM: The faculties of the College of _____ and the College/School/Department of _____ have approved a joint degree program culminating in both the _____ degree and the Specialist/Engineer/Master of _____ degree, awarded by the College/School/Department of _____. Under the joint degree program, a student can ...

Appendix B to 10 CFR Part 20 contains numerical data for controlling the intake of radionuclides in the workplace or in the environment. These data, derived from the recommendations of the International Commission on Radiological Protection (ICRP), do not provide a numerically consistent basis for demonstrating compliance with the limitation on dose stated in the regulation. This situation is largely a consequence of the numerical procedures used by the ICRP, which did not maintain, in a strict numerical sense, the hierarchical relationship among the radiation protection quantities. In this work recommended values of the quantities in Appendix B to 10 CFR Part 20 are developed using the dose coefficients of the applicable ICRP publications and a numerical procedure which ensures that the tabulated quantities are numerically consistent.

We use a tight-binding Bogoliubov-de Gennes (BdG) formalism to self-consistently calculate the proximity effect, Josephson current, and local density of states in ballistic graphene SNS Josephson junctions. Both short and long junctions, with respect to the superconducting coherence length, are considered, as well as different doping levels of the graphene. We show that self-consistency does not notably change the current-phase relationship derived earlier for short junctions using the non-self-consistent Dirac-BdG formalism but predicts a significantly increased critical current with a stronger junction length dependence. In addition, we show that in junctions with no Fermi level mismatch between the N and S regions superconductivity persists even in the longest junctions we can investigate, indicating a diverging Ginzburg-Landau superconducting coherence length in the normal region.

Background: Deep third minima have been predicted in some non-self-consistent models to impact fission pathways of thorium and uranium isotopes. These predictions have guided the interpretation of resonances seen experimentally. On the other hand, self-consistent calculations consistently predict very shallow potential-energy surfaces in the third minimum region. Purpose: We investigate the interpretation of third-minimum configurations in terms of dimolecular states. We study the isentropic potential-energy surfaces of selected even-even thorium and uranium isotopes at several excitation energies. In order to understand the driving effects behind the presence of third minima, we study the interplay between pairing and shell effects. Methods: We use the finite-temperature superfluid nuclear density functional theory. We consider a traditional functional, SkM*, and a recent functional, UNEDF1, optimized for fission studies. Results: We predict very shallow or no third minima in the potential-energy surfaces of 232Th and 232U. In Th and U isotopes with N=136 and 138, the third minima are deeper. We show that the reflection-asymmetric configurations around the third minimum can be associated with dimolecular states involving the spherical doubly magic 132Sn and a lighter deformed Zr or Mo fragment. The potential-energy surfaces for 228,232Th and 232U at several excitation energies are presented. Conclusions: We show that the neutron shell effect that governs the existence of the dimolecular states around the third minimum is consistent with the spherical-to-deformed shape transition in the Zr and Mo isotopes around N=58. We demonstrate that the thermal reduction of pairing and enhancement of shell effects at small excitation energies help to develop deeper third minima. At large excitation energies, shell effects are washed out and third minima disappear altogether.

We offer a new method for determining the wind source term for energy and momentum flux transfer from the atmosphere to the wind-driven sea. This new source-term formulation is based on extensive analysis of experimental data collected at different sites around the world. It is shown that this new wind source term is consistent both with the numerical solution of the exact equation for resonant four-wave interactions and with available experimental data.

Recently one of us derived the action of modified gravity consistent with the holographic and new-agegraphic dark energy. In this paper, we investigate the stability of the Lagrangians of the modified gravity as discussed in [M. R. Setare, Int. J. Mod. Phys. D 17 (2008) 2219; M. R. Setare, Astrophys. Space Sci. 326 (2010) 27]. We also calculate the statefinder parameters which classify our dark energy model.

This paper describes an all-electron implementation of the self-consistent GW (sc-GW) approach -- i.e. based on the solution of the Dyson equation -- in an all-electron numeric atom-centered orbital (NAO) basis set. We cast Hedin's equations into a matrix form that is suitable for numerical calculations by means of i) the resolution of identity technique to handle 4-center integrals; and ii) a basis representation for the imaginary-frequency dependence of dynamical operators. In contrast to perturbative G0W0, sc-GW provides a consistent framework for ground- and excited-state properties and facilitates an unbiased assessment of the GW approximation. For excited-states, we benchmark sc-GW for five molecules relevant for organic photovoltaic applications: thiophene, benzothiazole, 1,2,5-thiadiazole, naphthalene, and tetrathiafulvalene. At self-consistency, the quasi-particle energies are found to be in good agreement with experiment and, on average, more accurate than G0W0 based on Hartree-Fock (HF) or density-...

We construct a consistency test of General Relativity (GR) on cosmological scales. This test enables us to distinguish between the two alternatives to explain the late-time accelerated expansion of the universe, that is, dark energy models based on GR and modified gravity models without dark energy. We derive the consistency relation in GR which is written only in terms of observables - the Hubble parameter, the density perturbations, the peculiar velocities and the lensing potential. The breakdown of this consistency relation implies that the Newton constant which governs large-scale structure is different from that in the background cosmology, which is a typical feature in modified gravity models. We propose a method to perform this test by reconstructing the weak lensing spectrum from measured density perturbations and peculiar velocities. This reconstruction relies on Poisson's equation in GR to convert the density perturbations to the lensing potential. Hence any inconsistency between the reconstructed lensing spectrum and the measured lensing spectrum indicates the failure of GR on cosmological scales. The difficulties in performing this test using actual observations are discussed.

Both superbursters and soft X-ray transients probe the process of deep crustal heating in compact stars. It was recently shown that the transfer of matter from crust to core in a strange star can heat the crust and ignite superbursts provided certain constraints on the strange quark matter equation of state are fulfilled. We derive corresponding constraints on the equation of state for soft X-ray transients assuming their quiescent emission is powered in the same way, and further discuss the time dependence of this heating mechanism in transient systems. We approach this using a simple parametrized model for deep crustal heating in strange stars assuming slow neutrino cooling in the core and blackbody photon emission from the surface. The constraints derived for hot frequently accreting soft X-ray transients are always consistent with those for superbursters. The colder sources are consistent for low values of the quark matter binding energy, heat conductivity and neutrino emissivity. The heating mechanism is very time dependent, which may help to explain cold sources with long recurrence times. Thus deep crustal heating in strange stars can provide a consistent explanation for superbursters and soft X-ray transients.

We consider the effect of backreaction of quantized massive fields on the metric of extreme black holes (EBH). We find the analytical approximate expression for the stress-energy tensor for a scalar (with an arbitrary coupling), spinor and vector fields near an event horizon. We show that, independent of a concrete type of EBH, the energy measured by a freely falling observer is finite on the horizon, so that quantum backreaction is consistent with the existence of EBH. For the Reissner-Nordstrom EBH with a total mass M_{tot} and charge Q we show that for all cases of physical interest M_{tot}< Q. We also discuss different types of quantum-corrected Bertotti-Robinson spacetimes, find for them exact self-consistent solutions and consider situations in which tiny quantum corrections lead to the qualitative change of the classical geometry and topology. In all cases one should start not from a classical background with further adding quantum corrections but from the quantum-corrected self-consistent geometries from the very beginning.

Let Dtt denote the set of truth-table degrees. A bijection p from Dtt to Dtt is an automorphism if for all truth-table degrees x and y we have x <=tt y if and only if p(x) <=tt p(y). An automorphism p is fixed on a cone if there is a degree b such that for all x >=tt b we have p(x) = x. We first prove that for every 2-generic real X we have that X' is not tt-below X + 0'. We next prove that for every real X >=tt 0' there is a real Y such that Y + 0' =tt Y' =tt X. Finally, we use this to demonstrate that every automorphism of the truth-table degrees is fixed on some cone.

Fully consistent axially-symmetric-deformed quasiparticle random phase approximation calculations have been performed with the D1S Gogny force. Giant resonances in exotic nuclei as well as in deformed Mg and Si isotopes have been studied. Dipole responses have been calculated in Ne isotopes and N=16 isotones to study the existence of soft dipole modes in exotic nuclei. The same formalism has been used to describe multipole responses up to octupole in the deformed and heavy nucleus {sup 238}U. Low energy spectroscopy of nickel isotopes has been studied, revealing 0{sup +} states which display a particular structure.

The fit of precision electroweak data to the Minimal Standard Model currently gives an upper limit on the Higgs boson mass of 170 GeV at 95% confidence. Nevertheless, it is often said that the Higgs boson could be much heavier in more general models. In this paper, we critically review models that have been proposed in the literature that allow a heavy Higgs boson consistent with the precision electroweak constraints. All have unusual features, and all can be distinguished from the Minimal Standard Model either by improved precision measurements or by other signatures accessible to next-generation colliders.

Calculations of the one-hole spectral function of 16O for small missing energies are reviewed. The self-consistent Green's function approach is employed together with the Faddeev equations technique in order to study the coupling of both particle-particle and particle-hole phonons to the single-particle motion. The results indicate that the characteristics of hole fragmentation are related to the low-lying states of 16O and an improvement of the description of this spectrum, beyond the random phase approximation, is required to understand the experimental strength distribution. A first calculation in this direction that accounts for two-phonon states is discussed.

The need for structural materials with high-temperature strength and oxidation resistance coupled with adequate lower-temperature toughness for potential use at temperatures above {approx} 1000 degrees C has remained a persistent challenge in materials science. In this work, one promising class of intermetallic alloys is examined, namely boron-containing molybdenum silicides, with compositions in the range Mo (bal), 12-17 at. percent Si, 8.5 at. percent B, processed using both ingot (I/M) and powder (P/M) metallurgy methods. Specifically, the oxidation (''pesting''), fracture toughness and fatigue-crack propagation resistance of four such alloys, which consisted of {approx}21 to 38 vol. percent alpha-Mo phase in an intermetallic matrix of Mo3Si and Mo5SiB2 (T2), were characterized at temperatures between 25 degrees and 1300 degrees C. The boron additions were found to confer superior ''pest'' resistance (at 400 degrees to 900 degrees C) as compared to unmodified molybdenum silicides, such as Mo5Si3. Moreover, although the fracture and fatigue properties of the finer-scale P/M alloys were only marginally better than those of MoSi2, for the I/M processed microstructures with coarse distributions of the alpha-Mo phase, fracture toughness properties were far superior, rising from values above 7 MPa sqrt m at ambient temperatures to almost 12 MPa sqrt m at 1300 degrees C.

In this thesis, I designed and implemented an optical system for freehand interactions in six degrees of freedom. A single camera captures a pen's location and orientation, including roll, tilt, x, y, and z by reading ...

The authors systematically investigate two easily computed measures of the effective number of spatial degrees of freedom (ESDOF), or number of independently varying spatial patterns, of a time-varying field of data. The first measure is based on ...
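The record does not reproduce the paper's two specific measures. Purely as an illustration, one widely used eigenvalue-based estimate of the effective number of spatial degrees of freedom (an assumption here, not necessarily one of the measures the paper studies) can be computed from a (time, space) data array as follows:

```python
import numpy as np

def effective_sdof(field):
    """Effective spatial degrees of freedom of a (time, space) array,
    using the eigenvalue-based measure N_eff = (sum l_i)^2 / sum(l_i^2),
    where l_i are eigenvalues of the spatial covariance matrix."""
    anoms = field - field.mean(axis=0)            # remove the time mean
    cov = anoms.T @ anoms / (field.shape[0] - 1)  # spatial covariance
    eig = np.linalg.eigvalsh(cov)
    eig = np.clip(eig, 0.0, None)                 # guard tiny negatives
    return eig.sum() ** 2 / (eig ** 2).sum()

# Independent white noise at p points gives N_eff close to p;
# perfectly correlated points give N_eff = 1.
rng = np.random.default_rng(0)
print(effective_sdof(rng.standard_normal((2000, 10))))  # close to 10
```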

The degree of a CSP instance is the maximum number of times that a variable may appear in the scope of constraints. We consider the approximate counting problem for Boolean CSPs with bounded-degree instances, for constraint languages containing the two unary constant relations {0} and {1}. When the maximum degree is at least 25 we obtain a complete classification of the complexity of this problem. It is exactly solvable in polynomial time if every relation in the constraint language is affine. It is equivalent to the problem of approximately counting independent sets in bipartite graphs if every relation can be expressed as conjunctions of {0}, {1} and binary implication. Otherwise, there is no FPRAS unless NP=RP. For lower degree bounds, additional cases arise in which the complexity is related to the complexity of approximately counting independent sets in hypergraphs.

The undergraduate degree of computer and cyber security has been offered at the School of Information Technology, Phetchaburi Rajabhat University, Thailand since 2005. Our program requires direct field experience when students are taking upper-level ...

The survey includes degrees granted between September 1, 2009 and August 31, 2010, and fall 2010 enrollments. Thirty-two academic institutions reported having nuclear engineering programs during 2010, and data were obtained from all thirty-two.

Third-degree diurnal tides are estimated from long time series of sea level measurements at three North Atlantic tide gauges. Although their amplitudes are only a few millimeters or less, their admittances are far larger than those of second-...

Using data derived from the American Meteorological Society–University Corporation for Atmospheric Research Curricula and U.S. Department of Education statistics, it is found that the number of meteorology bachelor's degree recipients in the ...

[Career-resource fragment: areas, employers, and strategies for degree holders in human services, including federal, state, and local government agencies (particularly Departments of Welfare and Health), summer camp programs, adult and child daycare providers, and programs serving children and adults.]

problems in nuclear reactor design, etc. For the purposes of this paper the principle of linearized [...] at hand. There is, in addition, a natural motivation for using degree theory, which is explained in §4.

This thesis presents SoundStrand, a novel tangible interface for composing music. A new paradigm is also presented - one that allows for music composition with limited degrees of freedom, and therefore is well suited for ...

A method and system for enhancing the transient stability of an intertied three-phase electric power generating system. A set of power exporting generators (10) is connected to a set of power importing generators (20). When a transient cannot be controlled by conventional stability controls, and imminent loss of synchronism is detected (such as when the equivalent rotor angle difference between the two generator sets exceeds a predetermined value, such as 150 degrees), the intertie is disconnected by circuit breakers. Then a switch (30) having a 120-degree phase rotation, or a circuit breaker having a 120-degree phase rotation is placed in the intertie. The intertie is then reconnected. This results in a 120-degree reduction in the equivalent rotor angle difference between the two generator sets, making the system more stable and allowing more time for the conventional controls to stabilize the transient.
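The trip-and-rotate logic described above can be sketched in a few lines. The function name, threshold handling, and sign convention are hypothetical simplifications for illustration, not the patent's actual control implementation:

```python
def equivalent_angle_after_rotation(delta_deg, trip_deg=150.0, rotation_deg=120.0):
    """Sketch of the intertie scheme described above: when the equivalent
    rotor angle difference between the two generator sets exceeds the trip
    threshold (150 degrees in the source), inserting a 120-degree phase
    rotation in the reconnected intertie reduces the difference by 120
    degrees, moving the system back toward the stable region."""
    if abs(delta_deg) > trip_deg:
        # Rotate toward zero: subtract the rotation for positive swings,
        # add it for negative swings.
        return delta_deg - rotation_deg if delta_deg > 0 else delta_deg + rotation_deg
    return delta_deg  # below the trip threshold, conventional controls act

print(equivalent_angle_after_rotation(155.0))  # 35.0
```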

Finding the largest independent set in a graph is a notoriously difficult NP-complete combinatorial optimization problem. Moreover, even for graphs with largest degree 3, no polynomial time approximation algorithm exists ...
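To make the hardness concrete, an exact solver must in general enumerate subsets, so its running time grows exponentially with the number of vertices. The brute-force toy below (not an algorithm from the source) illustrates the problem being approximated:

```python
from itertools import combinations

def max_independent_set(vertices, edges):
    """Exact maximum independent set by brute force, O(2^n) in the worst
    case: feasible only for tiny graphs, which is precisely why the
    approximability question discussed above matters."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            # An independent set contains no edge between any pair.
            if all(frozenset(p) not in edge_set for p in combinations(subset, 2)):
                return set(subset)
    return set()

# A 5-cycle: every vertex has degree 2; the largest independent set has size 2.
verts = [0, 1, 2, 3, 4]
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(len(max_independent_set(verts, cycle)))  # 2
```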

In this talk, we discuss the compatibility of different deeply inelastic neutrino-nucleus data sets and the universal nuclear PDFs. This is an issue that has lately been investigated by different groups but the conclusions have been surprisingly contradictory. While some studies have found a good overall agreement between the nuclear PDFs and the neutrino data, others have claimed an incompatibility. Here, we demonstrate that the independent neutrino data sets from the NuTeV, CHORUS and CDHSW collaborations differ in the absolute overall normalization and that it is not possible to accurately reproduce all the data simultaneously with a single set of PDFs. Our strategy to overcome this difficulty and allow a consistent use of all neutrino data in global PDF analyses is to normalize the data by the integrated cross-sections, thereby cancelling possible inaccuracies in the absolute normalization. Indeed, this brings all data to a surprisingly good mutual agreement, underscoring the x-dependence of the nuclear modifications in a model-independent way. The consistency of these data with the present nuclear PDFs is verified by introducing a method to test the effect of a new data set in an existing global fit that employed a Hessian error analysis.

A grid-free variant of the Direct Simulation Monte Carlo (DSMC) method is proposed, named the Isotropic DSMC (I-DSMC) method, that is suitable for simulating collision-dominated dense fluid flows. The I-DSMC algorithm eliminates all grid artifacts from the traditional DSMC algorithm and is Galilean invariant and microscopically isotropic. The stochastic collision rules in I-DSMC are modified to introduce a non-ideal structure factor that gives consistent compressibility, as first proposed in [Phys. Rev. Lett. 101:075902 (2008)]. The resulting Stochastic Hard Sphere Dynamics (SHSD) fluid is empirically shown to be thermodynamically identical to a deterministic Hamiltonian system of penetrable spheres interacting with a linear core pair potential, well-described by the hypernetted chain (HNC) approximation. We develop a kinetic theory for the SHSD fluid to obtain estimates for the transport coefficients that are in excellent agreement with particle simulations over a wide range of densities and collision rates. The fluctuating hydrodynamic behavior of the SHSD fluid is verified by comparing its dynamic structure factor against theory based on the Landau-Lifshitz Navier-Stokes equations. We also study the Brownian motion of a nano-particle suspended in an SHSD fluid and find a long-time power-law tail in its velocity autocorrelation function consistent with hydrodynamic theory and molecular dynamics calculations.

Current analysis methodology for the Soil Structure Interaction (SSI) analysis of nuclear facilities is specified in ASCE Standard 4. This methodology is based on the use of deterministic procedures with the intention that enough conservatism is included in the specified procedures to achieve an 80% probability of non-exceedance in the computed response of a Structure, System, or Component given a mean seismic design input. Recently developed standards are aimed at achieving performance-based, risk-consistent seismic designs that meet specified target performance goals. These design approaches rely upon accurately characterizing the probability (hazard) level of system demands due to seismic loads consistent with Probabilistic Seismic Hazard Analyses. This paper examines the adequacy of the deterministic SSI procedures described in ASCE 4-98 to achieve an 80th percentile of Non-Exceedance Probability (NEP) in structural demand, given a mean seismic input motion. The study demonstrates that the deterministic procedures provide computed in-structure response spectra that are near or greater than the target 80th percentile NEP for site profiles other than those resulting in high levels of radiation damping. The deterministic procedures do not appear to be as robust in predicting peak accelerations, which correlate to structural demands within the structure.

Recent results obtained by applying the method of self-consistent Green's functions to nuclei and nuclear matter are reviewed. Particular attention is given to the description of experimental data obtained from the (e,e'p) and (e,e'2N) reactions that determine one and two-nucleon removal probabilities in nuclei since the corresponding amplitudes are directly related to the imaginary parts of the single-particle and two-particle propagators. For this reason and the fact that these amplitudes can now be calculated with the inclusion of all the relevant physical processes, it is useful to explore the efficacy of the method of self-consistent Green's functions in describing these experimental data. Results for both finite nuclei and nuclear matter are discussed with particular emphasis on clarifying the role of short-range correlations in determining various experimental quantities. The important role of long-range correlations in determining the structure of low-energy correlations is also documented. For a complete understanding of nuclear phenomena it is therefore essential to include both types of physical correlations. We demonstrate that recent experimental results for these reactions combined with the reported theoretical calculations yield a very clear understanding of the properties of all protons in the nucleus. We propose that this knowledge of the properties of constituent fermions in a correlated many-body system is a unique feature of nuclear physics.

The finding that the widths of type Ia supernovae light curves increase with redshift appears to provide strong evidence for an expanding universe. This paper argues that the observations are consistent with a static cosmology where redshift is produced by a tired-light mechanism. For type Ia supernovae there is a strong correlation between peak luminosity and the width of the light curve, the Phillips relation. In an expanding universe this relation is used to combine the absolute magnitude with the stretch factor to obtain a corrected apparent peak magnitude. In a model for a static universe where width rather than stretch factor is used, there is a different apparent peak magnitude. Since the analysis program explicitly uses the stretch factor rather than width in its use of the Phillips relation, its application in a static universe produces a systematic bias in the peak magnitudes. In addition, the stretch selection that is valid for an expanding universe produces another small bias in the data that must be included in a static universe. The aim of this paper is to show that, using the Phillips relation, and allowing for these biases, the data are consistent with a static model. In a static model the density distribution of type Ia supernovae is independent of redshift. This prediction agrees with the observations.

A relation between the degree of pulse compression and energy efficiency is derived for femtosecond laser pulse compressors that utilise spectral broadening of pulses in a gas-filled capillary. We show that the degree of compression reaches a maximum at an energy efficiency in the range of 15% to 30%. A 15-fold compression of a 290-fs pulse with an energy efficiency of 24% is demonstrated.

We will show that the roots of a polynomial equation in one variable of degree n are related to the solutions of a symmetric quadratic form in n-1 variables with constant positive integer coefficients. The classic polynomial notation will be rewritten to define a characteristic discriminant of a polynomial of degree n. A new set of characteristic roots allows expressing the characteristic discriminant as the result of a symmetric quadratic form.

Interface consistency is an important basic concept in web design and has an effect on performance and satisfaction of end users. Consistency also has significant effects on the learning performance of both expert and novice end users. Consequently, ... Keywords: Consistency, Measurement, Methods, Performance, Shadow Expert Technique, Usability Test

Program FY 2009 Operational Plan. Goal 2: Preserve and Enhance Technical Capability. Objective 1 Point Paper (NNSA/SSO/AMFO/8 Jul 09/adt). Objective 1: Identify resource and organizational structure needs to improve qualification consistency and transportability. Actions: 1. Determine appropriate resource levels. 2. Determine effective organizational structure. Methodology: A TQP Resource Management Questionnaire was developed to address the actions above. The scope of the questionnaire broadened to include questions concerning TQP-related definitions, mentorship, and centralization of TQP tasks directly under the FTCP. The questionnaire was sent to all FTCP Agents and associate members, who were given approximately 45 days to respond. Summary of questionnaire results:

The spontaneous fission lifetime of 264Fm has been studied within nuclear density functional theory by minimizing the collective action integral for fission in a two-dimensional quadrupole collective space representing elongation and triaxiality. The collective potential and inertia tensor are obtained self-consistently using the Skyrme energy density functional and density-dependent pairing interaction. The resulting spontaneous fission lifetimes are compared with the static result obtained with the minimum-energy pathway. We show that fission pathways strongly depend on assumptions underlying collective inertia. With the non-perturbative mass parameters, the dynamic fission pathway becomes strongly triaxial and it approaches the static fission valley. On the other hand, when the standard perturbative cranking inertia tensor is used, axial symmetry is restored along the path to fission; an effect that is an artifact of the approximation used.

A new approach is proposed, consistent data assimilation, that links integral data experiment results to the basic nuclear parameters employed by evaluators to generate ENDF/B point energy files, in order to improve them. Practical examples are provided for the structural materials 23Na and 56Fe. The sodium neutron propagation experiments, EURACOS and JANUS-8, are used to improve, via modifications of 23Na nuclear parameters (such as the scattering radius, resonance parameters, optical model parameters, statistical Hauser-Feshbach model parameters, and preequilibrium exciton model parameters), the agreement of calculation versus experiment for a series of measured reaction rate detector slopes. For the 56Fe case, the EURACOS and ZPR3 assembly 54 experiments are used. Results have shown inconsistencies in the set of nuclear parameters used, so further investigation is needed. Future work involves comparison of the results against more traditional multigroup adjustments, and extension to other isotopes of interest in the reactor community.

In many plasma physics and charged-particle beam dynamics problems, Coulomb collisions are modeled by a Fokker-Planck equation. In order to incorporate these collisions, we present a three-dimensional parallel Langevin simulation method using a Particle-In-Cell (PIC) approach implemented on high-performance parallel computers. We perform, for the first time, a fully self-consistent simulation, in which the friction and diffusion coefficients are computed from first principles. We employ a two-dimensional domain decomposition approach within a message passing programming paradigm along with dynamic load balancing. Object oriented programming is used to encapsulate details of the communication syntax as well as to enhance reusability and extensibility. Performance tests on the SGI Origin 2000 and the Cray T3E-900 have demonstrated good scalability. Work is in progress to apply our technique to intrabeam scattering in accelerators.

An efficient method for generating residual statics corrections to compensate for surface-consistent static time shifts in stacked seismic traces. The method includes a step of framing the residual static corrections as a global optimization problem in a parameter space. The method also includes decoupling the global optimization problem involving all seismic traces into several one-dimensional problems. The method further utilizes a Stochastic Pijavskij Tunneling search to eliminate regions in the parameter space where a global minimum is unlikely to exist so that the global minimum may be quickly discovered. The method finds the residual statics corrections by maximizing the total stack power. The stack power is a measure of seismic energy transferred from energy sources to receivers.
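The stack-power objective that the method maximizes can be illustrated with a toy computation. The array layout and sample-shift convention below are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def stack_power(traces, statics):
    """Total stack power after applying per-trace static time shifts
    (in samples): shift each trace, stack (sum) the shifted traces,
    and sum the squared stack amplitudes. Maximizing this quantity
    over the statics is the objective described above."""
    shifted = [np.roll(tr, -s) for tr, s in zip(traces, statics)]
    stack = np.sum(shifted, axis=0)
    return float(np.sum(stack ** 2))

# Two copies of a wavelet offset by 3 samples: the correct static
# corrections align them and raise the stack power.
t = np.zeros(64)
t[10:14] = [1.0, 2.0, 2.0, 1.0]
traces = [t, np.roll(t, 3)]
print(stack_power(traces, [0, 3]) > stack_power(traces, [0, 0]))  # True
```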

We extend the self-consistent Green's functions formalism to take into account three-body interactions. We analyze the perturbative expansion in terms of Feynman diagrams and define effective one- and two-body interactions, which allows for a substantial reduction of the number of diagrams. The procedure can be taken as a generalization of the normal ordering of the Hamiltonian to fully correlated density matrices. We give examples up to third order in perturbation theory. To define nonperturbative approximations, we extend the equation of motion method in the presence of three-body interactions. We propose schemes that can provide nonperturbative resummation of three-body interactions. We also discuss two different extensions of the Koltun sum rule to compute the ground state of a many-body system.

This paper describes the comparison between homeless and home-based Lazy Release Consistency (LRC) protocols which are used to implement Distributed Shared Memory (DSM) in cluster computing. We present a performance evaluation of parallel applications running on homeless and home-based LRC protocols. We compared the performance between TreadMarks, which uses the homeless LRC protocol, and our home-based DSM system. We found that the home-based DSM system has shown better scalability than TreadMarks in the parallel applications we tested. The poorer scalability of the homeless protocol is caused by a hot spot and garbage collection, but we have shown that these factors do not affect the scalability of the home-based protocol.

Out-of-plane structures of the GaN(0001) surface in the metal-organic chemical vapor deposition (MOCVD) environment have been determined using in situ grazing-incidence X-ray scattering. The authors measured (112̄ℓ) crystal truncation rod intensities at a variety of temperatures and ammonia partial pressures on both sides of the 1 x 1 to (√3 x 2√3)R30° surface phase transition. The out-of-plane structure of the (√3 x 2√3)R30° phase appears to be nearly independent of temperature below the transition, while the structure of the 1 x 1 phase changes increasingly rapidly as the phase transition is approached from above. A model for the structure of the 1 x 1 phase with a partially occupied top Ga layer agrees well with the data. The observed temperature dependence is consistent with a simple model of the equilibrium between the vapor phase and the surface coverage of Ga and N. In addition, the authors present results on the kinetics of reconstruction domain coarsening following a quench into the (√3 x 2√3)R30° phase field.

This report presents the findings from a 1997 enrollment and degree survey sent to 46 institutions offering a major in nuclear engineering or an option program [...] received their degrees within the nuclear engineering major programs.

Plasma sprayed composite coatings of metal-bonded chromium carbide with additions of silver and thermochemically stable fluorides were previously reported to be lubricative in pin-on-disk bench tests from room temperature to 900°C. An early coating formulation of this type, designated as PS200, was successfully tested as a cylinder coating in a Stirling engine at a TRRT of 760°C (1450°F) in a hydrogen atmosphere, and as a backup lubricant for gas bearings to 650°C (1250°F). A subsequent optimization program has shown that tribological properties are further improved by increasing the solid lubricant content. The improved coating is designated as PS212. The same powder formulation has been used to make free-standing powder metallurgy (PM212) parts by sintering or hot isostatic pressing. The process is very attractive for making parts that cannot be readily plasma sprayed such as bushings and cylinders that have small bore diameters and/or high length to diameter ratios. The properties of coatings and free-standing parts fabricated from these powders are reviewed. 6 refs., 14 figs., 1 tab.

Measurement-based quantum computation (MBQC) and adiabatic quantum computation (AQC) are two very different computational methods. While in MBQC computation is driven by adaptive measurements on a large entangled state, in AQC it is the adiabatic transition to a ground state holding the solution to the problem which results in computation. In this paper we combine MBQC on graph states with AQC and investigate how properties, such as computational depth, energy gap and Hamiltonian degree, translate into each other. Following an approach proposed by Bacon and Flammia, we show that any measurement-based quantum computation on a graph state with gflow can be converted into an adiabatic computation, which we call adiabatic graph-state quantum computation (AGQC). We then identify how a trade-off can be made between computational depth and Hamiltonian degree, and clarify the effects of out-of-order measurements in the adiabatic computation. In the extreme case, we present a translation to AGQC where all computations can be carried out in constant time, at the expense of a high-degree starting Hamiltonian. This leads to a natural conjecture for a lower bound on the cost of simulating large-degree operators using smaller-degree operators.

Cooling Degree Days, by State (Weighted by Population, per 2000 Census) Dataset Summary Description The National Oceanic and Atmospheric Administration's (NOAA) National Environmental Satellite, Data, and Information Services (NESDIS), in conjunction with the National Climatic Data Center (NCDC), publish monthly and annual climate data by state for the U.S., including cooling degree days (monthly and annual totals). The average values for each state are weighted by population, using 2000 Census data. The base temperature for this dataset is 65 degrees F. Included here are monthly and annual values averaged over several periods of time: 1931-2000, 1931-60, 1941-70, 1951-80, 1961-90, 1971-2000 (standard deviation is also provided). Detailed monthly climatic information (including cooling degree days) is available for the time period between 1895 and 2011, from NOAA (http://www7.ncdc.noaa.gov/CDO/CDODivisionalSelect.jsp#).
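For reference, the degree-day computation underlying such a dataset can be sketched in a few lines relative to the 65 degrees F base. This is illustrative only; NOAA's actual processing also involves population weighting and station aggregation:

```python
def cooling_degree_days(daily_mean_temps_f, base_f=65.0):
    """Cooling degree days over a period: for each day, the number of
    degrees by which the daily mean temperature exceeds the base
    temperature (never negative), summed over all days."""
    return sum(max(0.0, t - base_f) for t in daily_mean_temps_f)

# Three days with mean temperatures 70, 64, and 80 degrees F:
# (70-65) + 0 + (80-65) = 20 cooling degree days.
print(cooling_degree_days([70.0, 64.0, 80.0]))  # 20.0
```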

Determining an accurate position for a submm galaxy (SMG) is the crucial step that enables us to move from the basic properties of an SMG sample - source counts and 2-D clustering - to an assessment of their detailed, multi-wavelength properties, their contribution to the history of cosmic star formation and their links with present-day galaxy populations. In this paper, we identify robust radio and/or IR counterparts, and hence accurate positions, for over two-thirds of the SCUBA Half-Degree Extragalactic Survey (SHADES) Source Catalogue, presenting optical, 24-μm and radio images of each SMG. Observed trends in identification rate have given no strong rationale for pruning the sample. Uncertainties in submm position are found to be consistent with theoretical expectations, with no evidence for significant additional sources of error. Employing the submm/radio redshift indicator, via a parameterisation appropriate for radio-identified SMGs with spectroscopic redshifts, yields a median redshift of 2.8 for the radio-identified subset of SHADES, somewhat higher than the median spectroscopic redshift. We present a diagnostic colour-colour plot, exploiting Spitzer photometry, in which we identify regions commensurate with SMGs at very high redshift. Finally, we find that significantly more SMGs have multiple robust counterparts than would be expected by chance, indicative of physical associations. These multiple systems are most common amongst the brightest SMGs and are typically separated by 2-6", or 15-50/(sin i) kpc at z ~ 2, consistent with early bursts seen in merger simulations.

If mutual gravitational scattering among exoplanets occurs, then it may produce unique orbital properties. For example, two-planet systems that lie near the boundary between circulation and libration of their periapses could result if planet-planet scattering ejected a former third planet quickly, leaving one planet on an eccentric orbit and the other on a circular orbit. We first improve upon previous work that examined the apsidal behavior of known multiplanet systems by doubling the sample size and including observational uncertainties. This analysis recovers previous results that demonstrated that many systems lay on the apsidal boundary between libration and circulation. We then performed over 12,000 three-dimensional N-body simulations of hypothetical three-body systems that are unstable, but stabilize to two-body systems after an ejection. Using these synthetic two-planet systems, we test the planet-planet scattering hypothesis by comparing their apsidal behavior, over a range of viewing angles, to that of the observed systems and find that they are statistically consistent regardless of the multiplicity of the observed systems. Finally, we combine our results with previous studies to show that, from the sampled cases, the most likely planetary mass function prior to planet-planet scattering follows a power law with index -1.1. We find that this pre-scattering mass function predicts a mutual inclination frequency distribution that follows an exponential function with an index between -0.06 and -0.1.

The objective of this proposal is the development of a consistent multi-group theory that accurately accounts for the energy-angle coupling associated with collapsed-group cross sections. This will allow for coarse-group transport and diffusion theory calculations that exhibit continuous energy accuracy and implicitly treat cross-section resonances. This is of particular importance when considering the highly heterogeneous and optically thin reactor designs within the Next Generation Nuclear Plant (NGNP) framework. In such reactors, ignoring the influence of anisotropy in the angular flux on the collapsed cross section, especially at the interface between core and reflector near which control rods are located, results in inaccurate estimates of the rod worth, a serious safety concern. The scope of this project will include the development and verification of a new multi-group theory enabling high-fidelity transport and diffusion calculations in coarse groups, as well as a methodology for the implementation of this method in existing codes. This will allow for a higher accuracy solution of reactor problems while using fewer groups and will reduce the computational expense. The proposed research represents a fundamental advancement in the understanding and improvement of multi-group theory for reactor analysis.
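The conventional scalar-flux-weighted collapse, whose neglect of energy-angle coupling motivates this proposal, can be sketched as follows. The data layout (fine-group arrays plus a list of index groups) is a hypothetical simplification:

```python
import numpy as np

def collapse_cross_section(sigma_fine, flux_fine, coarse_map):
    """Standard flux-weighted collapse of fine-group cross sections into
    coarse groups: sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over
    the fine groups g belonging to coarse group G. This scalar-flux
    weighting ignores angular effects, which is the limitation the
    proposed multi-group theory addresses."""
    sigma_fine = np.asarray(sigma_fine, dtype=float)
    flux_fine = np.asarray(flux_fine, dtype=float)
    collapsed = []
    for fine_groups in coarse_map:
        idx = list(fine_groups)
        collapsed.append(np.dot(sigma_fine[idx], flux_fine[idx]) / flux_fine[idx].sum())
    return np.array(collapsed)

# Four fine groups collapsed into two coarse groups.
sigma = [10.0, 8.0, 2.0, 1.0]
phi = [1.0, 3.0, 2.0, 2.0]
print(collapse_cross_section(sigma, phi, [(0, 1), (2, 3)]))  # [8.5 1.5]
```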

In the present study, a framework for modeling two-phase evaporating flow is presented, which employs an Eulerian-Lagrangian-Lagrangian approach. For the continuous phase, a joint velocity-composition probability density function (PDF) method is used. In contrast to other approaches, such PDF methods require no modeling of turbulent convection and chemical source terms. For the dispersed phase, the PDF of velocity, diameter, temperature, seen gas velocity and seen gas composition is calculated. This provides a unified formulation, which makes it possible to consistently address the different modeling issues associated with such a system. Because of the high dimensionality, particle methods are employed to solve the PDF transport equations. To further enhance computational efficiency, a local particle time-stepping algorithm is implemented and a particle time-averaging technique is employed to reduce statistical and bias errors. In comparison to previous studies, a significantly smaller number of droplet particles per grid cell can be employed for the computations, which rely on two-way coupling between the droplet and gas phases. The framework was validated using established experimental data and a good overall agreement can be observed.

This report describes a technique of using a mass-consistent model to derive wind speeds over a microscale region of complex terrain. A serious limitation in the use of these numerical models is that the calculated wind field is highly sensitive to some input parameters, such as those specifying atmospheric stability. Because accurate values for these parameters are not usually known, confidence in the calculated winds is low. However, values for these parameters can be found by tuning the model to existing wind observations within a microscale area. This tuning is accomplished by using a single-variable, unconstrained optimization procedure that adjusts the unknown parameters so that the error between the observed winds and model calculations of these winds is minimized. Model verification is accomplished by using eight sets of hourly averaged wind data. These data are obtained from measurements made at approximately 30 sites covering a wind farm development in the Altamont Pass area. When the model is tuned to a small subset of the 30 sites, an accurate determination of the wind speeds was made for the remaining sites in six of the eight cases. (The two that failed were low wind speed cases.) Therefore, when this technique is used, numerical modeling shows great promise as a tool for microscale siting of wind turbines in complex terrain.
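The tuning step above is a single-variable, unconstrained minimization of the mismatch between observed and modeled winds. It can be sketched with a golden-section search; the toy linear "model" below is a stand-in for the actual mass-consistent code, and all names are hypothetical:

```python
def tune_parameter(model, observed, lo, hi, tol=1e-6):
    """Golden-section search minimizing the RMS error between observed
    wind speeds and model predictions, mirroring the tuning procedure
    described above. `model(p)` returns predicted winds for a candidate
    value p of the unknown (e.g. stability) parameter."""
    def rms_error(p):
        pred = model(p)
        return (sum((o - m) ** 2 for o, m in zip(observed, pred)) / len(observed)) ** 0.5

    golden = (5 ** 0.5 - 1) / 2  # inverse golden ratio, ~0.618
    a, b = lo, hi
    c, d = b - golden * (b - a), a + golden * (b - a)
    while b - a > tol:
        if rms_error(c) < rms_error(d):
            b, d = d, c
            c = b - golden * (b - a)
        else:
            a, c = c, d
            d = a + golden * (b - a)
    return (a + b) / 2

# Toy "model": predicted speeds scale linearly with the parameter.
obs = [4.0, 6.0, 8.0]
best = tune_parameter(lambda p: [p * x for x in [2.0, 3.0, 4.0]], obs, 0.0, 10.0)
print(round(best, 3))  # 2.0
```

For the toy model the least-squares optimum is exactly 2.0, which the search recovers; in practice `model` would be one run of the mass-consistent code over the microscale grid.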

Ductility and fracture toughness are major stumbling blocks in using depleted uranium as a structural material. The ability to correctly model deformation of uranium can be used to create process path methods to improve its structural design ability. The textural evolution of depleted uranium was simulated using a visco-plastic self-consistent (VPSC) model and analyzed by comparing pole figures of the simulations and experimental samples. Depleted uranium has the same structure as alpha uranium, which is an orthorhombic phase of uranium. Both deformation slip and twin systems were compared. The VPSC model was chosen to simulate this material because the model encompasses both low-symmetry materials and twinning in materials. This is of particular interest since depleted uranium has a high propensity for twinning, which dominates deformation and texture evolution. Simulated results were compared to experimental results to measure the validity of the model. One specific twin system, the {176}[512] twin, was of particular note. The VPSC model was used to simulate the influence of this twin on depleted uranium and was compared with a mechanically shocked depleted uranium sample. Under high strain rate shock deformation conditions, the {176}[512] twin system appears to be a dominant deformation system. By simulating a compression process using the VPSC model with the {176}[512] twin as the dominant deformation mode, a favorable comparison could be made between the experimental and simulated textures. (authors)

Previous work examined the effect of degree of hybridization on the fuel economy of a hybrid electric sport utility vehicle. It was observed that not only was the vehicle control strategy important, but that its definition should be coupled with the component sizing process. Both degree of hybridization and the energy management strategy have been optimized simultaneously in this study. Simple mass scaling algorithms were employed to capture the effect of component and vehicle mass variations as a function of degree of hybridization. Additionally, the benefits of regenerative braking and power buffering have been maximized using optimization methods to determine appropriate battery pack sizing. Both local and global optimization routines were applied to improve the confidence in the solution being close to the true optimum. An optimal configuration and energy management strategy that maximizes the benefit of hybridization for a hydrogen fuel cell hybrid SUV was derived. The optimal configuration was explored, and sensitivity to drive cycle in the optimization process was studied.

We present a series of models for the plasma properties along open magnetic flux tubes rooted in solar coronal holes, streamers, and active regions. These models represent the first self-consistent solutions that combine: (1) chromospheric heating driven by an empirically guided acoustic wave spectrum, (2) coronal heating from Alfven waves that have been partially reflected, then damped by anisotropic turbulent cascade, and (3) solar wind acceleration from gradients of gas pressure, acoustic wave pressure, and Alfven wave pressure. The only input parameters are the photospheric lower boundary conditions for the waves and the radial dependence of the background magnetic field along the flux tube. For a single choice for the photospheric wave properties, our models produce a realistic range of slow and fast solar wind conditions by varying only the coronal magnetic field. Specifically, a 2D model of coronal holes and streamers at solar minimum reproduces the latitudinal bifurcation of slow and fast streams seen by Ulysses. The radial gradient of the Alfven speed affects where the waves are reflected and damped, and thus whether energy is deposited below or above the Parker critical point. As predicted by earlier studies, a larger coronal ``expansion factor'' gives rise to a slower and denser wind, higher temperature at the coronal base, less intense Alfven waves at 1 AU, and correlative trends for commonly measured ratios of ion charge states and FIP-sensitive abundances that are in general agreement with observations. These models offer supporting evidence for the idea that coronal heating and solar wind acceleration (in open magnetic flux tubes) can occur as a result of wave dissipation and turbulent cascade. (abridged abstract)

Under application of an electric field greater than a triggering electric field $E_c \sim 0.4$ kV/mm, suspensions obtained by dispersing particles of the synthetic clay fluorohectorite in a silicone oil aggregate into chain- and/or column-like structures parallel to the applied electric field. This micro-structuring results in a transition in the suspensions' rheological behavior, from a Newtonian-like behavior to a shear-thinning rheology with a significant yield stress. This behavior is studied as a function of particle volume fraction and strength of the applied electric field, $E$. The steady shear flow curves are observed to scale onto a master curve with respect to $E$, in a manner similar to what was recently found for suspensions of laponite clay [42]. In the case of Na-fluorohectorite, the corresponding dynamic yield stress is demonstrated to scale with respect to $E$ as a power law with an exponent $\alpha \sim 1.93$, while the static yield stress inferred from constant-shear-stress tests exhibits a similar behavior with $\alpha \sim 1.58$. The suspensions are also studied in the framework of thixotropic fluids: the bifurcation in the rheological behavior when letting the system flow and evolve under a constant applied shear stress is characterized, and a bifurcation yield stress, estimated as the applied shear stress at which viscosity bifurcation occurs, is measured to scale as $E^\alpha$ with $\alpha \sim 0.5$ to 0.6. All measured yield stresses increase with the particle fraction $\Phi$ of the suspension. For the static yield stress, a scaling law $\Phi^\beta$, with $\beta = 0.54$, is found. The results are found to be reasonably consistent with each other. Their similarities with, and discrepancies from, results obtained on laponite-oil suspensions are discussed.
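Power-law exponents such as the $\alpha \sim 1.93$ quoted above are typically extracted by a linear least-squares fit in log-log space. A minimal sketch on synthetic data (the values and prefactor are illustrative):

```python
# Fit tau_y ~ C * E^alpha by linear regression of log(tau) on log(E).
import math

def fit_power_law(E, tau):
    """Least-squares fit of log(tau) = alpha*log(E) + log(C)."""
    x = [math.log(e) for e in E]
    y = [math.log(t) for t in tau]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    alpha = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    C = math.exp(ybar - alpha * xbar)
    return alpha, C

# Synthetic data with the dynamic-yield-stress exponent alpha = 1.93:
E_fields = [0.5, 1.0, 1.5, 2.0]           # kV/mm
tau_y = [3.0 * e ** 1.93 for e in E_fields]
alpha, C = fit_power_law(E_fields, tau_y)
```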

Together with the variational indicators of chaos, the spectral analysis methods have also achieved great popularity in the field of chaos detection. The former are based on the concept of local exponential divergence. The latter are based on the numerical analysis of some particular quantities of a single orbit, e.g. its frequency. In spite of having totally different conceptual bases, they are used for the very same goals such as, for instance, separating the chaotic and the regular components. In fact, we show herein that the variational indicators distinguish both components of a Hamiltonian system more reliably than a spectral analysis method does. We study two start spaces for different energy levels of a self-consistent triaxial stellar dynamical model by means of some selected variational indicators and a spectral analysis method. In order to select the appropriate tools for this paper, we extend previous studies in which we compared several variational indicators in different scenarios. Herein, we compare the Average Power Law Exponent (APLE) and an alternative quantity given by the Mean Exponential Growth factor of Nearby Orbits (MEGNO): the MEGNO's Slope Estimation of the largest Lyapunov Characteristic Exponent (SElLCE). The spectral analysis method selected for the investigation is the Frequency Modified Fourier Transform (FMFT). Besides a comparative study of the APLE, the Fast Lyapunov Indicator (FLI), the Orthogonal Fast Lyapunov Indicator (OFLI) and the MEGNO/SElLCE, we show that the SElLCE could be an appropriate alternative to the MEGNO when studying large samples of initial conditions. The SElLCE separates the chaotic and the regular components reliably and identifies the different levels of chaoticity. We show that the FMFT is not as reliable as the SElLCE in describing clearly the chaotic domains in the experiments.
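Variational indicators of this family rest on local exponential divergence. As a toy illustration (a 1-D map, not the triaxial stellar model, and not the MEGNO/SElLCE algorithm itself), the largest Lyapunov exponent of the logistic map can be estimated from the average local stretching factor:

```python
# Estimate the largest Lyapunov exponent of x -> 4x(1-x) from the
# time-average of log|f'(x)|; the analytic value at r = 4 is ln 2.
import math

def lyapunov_logistic(x0=0.123, n=200000, transient=100):
    x = x0
    for _ in range(transient):          # discard transient iterates
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(4.0 * (1.0 - 2.0 * x)))  # log |f'(x)|
        x = 4.0 * x * (1.0 - x)
    return total / n

lam = lyapunov_logistic()   # positive value signals chaos
```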

We present a new generation of chemically consistent evolutionary synthesis models for galaxies of various spectral types from E through Sd. The models follow the chemical enrichment of the ISM and take into account the increasing initial metallicity of successive stellar generations using recently published metallicity-dependent stellar evolutionary isochrones, spectra and yields. Our first set of closed-box 1-zone models does not include any spatial resolution or dynamics. For a Salpeter initial mass function (IMF) the star formation rate (SFR) and its time evolution are shown to successfully parameterise spectral galaxy types E, ..., Sd. We show how the stellar metallicity distributions in various galaxy types build up with time to yield, after $\sim 12$ Gyr, agreement with stellar metallicity distributions observed in our own and other local galaxies. The models give integrated galaxy spectra over a wide wavelength range (90.9 \AA{} to 160 $\mu$m), which for ages of $\sim 12$ Gyr are in good agreement not only with observed broad-band colours but also with template spectra for the respective galaxy types. Using filter functions for Johnson-Cousins, as well as for HST broad-band filters in the optical and Bessell & Brett's NIR filter system, we calculate the luminosity and colour evolution of model galaxies over a Hubble time. Including a standard cosmological model and the attenuation by intergalactic hydrogen, we present evolutionary and cosmological corrections as well as apparent luminosities in various filters over the redshift range from $z \sim 5$ to the present for our galaxy types and compare to earlier models using single (=solar) metallicity input physics only. We also present a first comparison of our chemically consistent (cc) models to HDF data. (Abridged abstract)

A simple nonlinear Reduced Order Model (ROM) to study global, regional and local instabilities in Boiling Water Reactors is described. The ROM consists of three submodels: neutron-kinetic, thermal-hydraulic and heat-transfer models. The neutron-kinetic model allows representing the time evolution of the first three neutron kinetic modes: the fundamental, the first and the second azimuthal modes. The thermal-hydraulic model describes four heated channels in order to correctly simulate out-of-phase behavior. The coupling between the different submodels is performed via both void and Doppler feedback mechanisms. After proper spatial homogenization, the governing equations are discretized in the time domain. Several modifications, compared to other existing ROMs, have been implemented and are reported in this paper. One novelty of the ROM is the inclusion of both azimuthal modes, which allows studying combined (in-phase and out-of-phase) instabilities, as well as investigating the corresponding interference effects between them. The second modification concerns the precise estimation of the so-called reactivity coefficients, or C{sub mn}{sup *V,D} coefficients, by using direct cross-section data from SIMULATE-3 combined with the CORE SIM core simulator in order to calculate the eigenmodes. Furthermore, a non-uniform two-step axial power profile is introduced to simulate the separate heat production in the single- and two-phase regions, respectively. An iterative procedure was developed to calculate the solution to the coupled neutron-kinetic/thermal-hydraulic static problem prior to solving the time-dependent problem. In addition, the possibility of taking into account the effect of local instabilities is demonstrated in a simplified manner. The present ROM is applied to the investigation of an actual instability event that occurred at the Swedish Forsmark-1 BWR in 1996/1997. The results generated by the ROM are compared with real power plant measurements performed during stability tests and show good qualitative agreement. The present study provides deeper insight into the physical principles that drive both core-wide and local instabilities. (authors)
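BWR stability tests of the kind mentioned above are commonly summarized by a decay ratio: the ratio of consecutive peak amplitudes of the damped power oscillation. A toy sketch on a synthetic signal (not Forsmark data; sampling rate and damping are illustrative):

```python
# Decay ratio of a damped oscillation: ratio of the second local peak
# amplitude to the first. dr < 1 indicates a stable (decaying) oscillation.
import math

def decay_ratio(signal):
    """Ratio of the second local maximum to the first."""
    peaks = [signal[i] for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] >= signal[i + 1]]
    return peaks[1] / peaks[0]

# Synthetic signal x(t) = exp(-0.2 t) * cos(2*pi*t), sampled at 100 Hz:
xs = [math.exp(-0.2 * k / 100.0) * math.cos(2.0 * math.pi * k / 100.0)
      for k in range(300)]
dr = decay_ratio(xs)   # close to exp(-0.2), i.e. a stable oscillation
```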

A major problem with cavitation in pumps and other hydraulic devices is that there is no effective method for detecting or predicting its inception. The traditional approach is to declare the pump in cavitation when the total head pressure drops by some arbitrary value (typically 3%) in response to a reduction in pump inlet pressure. However, the pump is already cavitating at this point. A method is needed in which cavitation events are captured as they occur and characterized by their process dynamics. The object of this research was to identify specific features of cavitation that could be used as a model-based descriptor in a context-dependent condition-based maintenance (CD-CBM) anticipatory prognostic and health assessment model. This descriptor was based on the physics of the phenomenon, capturing the salient features of the process dynamics. An important element of this concept is the development and formulation of the extended process feature vector, or model vector. This model-based descriptor encodes the specific information that describes the phenomenon and its dynamics and is formulated as a data structure consisting of several elements. The first is a descriptive model abstracting the phenomenon. The second is the parameter list associated with the functional model. The third is a figure of merit: a single number in the interval [0, 1] representing a confidence factor that the functional model and parameter list actually describe the observed data. Using this as a basis and applying it to the cavitation problem, any given location in a flow loop will have this data structure, differing in value but not in content. The extended process feature vector is formulated as follows: [<model>, {parameter list}, confidence factor]. (1) For this study, the model that characterized cavitation was a chirped, exponentially decaying sinusoid. Using the parameters defined by this model, the parameter list included frequency, decay, and chirp rate. Based on this, the process feature vector has the form: [<chirped decaying sinusoid>, {frequency = a, decay = b, chirp rate = c}, cf = 0.80]. (2) In this experiment a reversible catastrophe was examined, so that the same catastrophe could be repeated to ensure the statistical significance of the data.
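The descriptor above can be sketched directly: a chirped, exponentially decaying sinusoid packaged into the [model, {parameters}, confidence factor] structure. The functional form, parameter names and values below are illustrative assumptions consistent with the stated parameter list (frequency, decay, chirp rate), not the study's fitted values.

```python
# Sketch of the extended process feature vector for the cavitation model.
import math

def chirped_decaying_sinusoid(t, freq, decay, chirp, amplitude=1.0):
    """y(t) = A * exp(-decay*t) * sin(2*pi*(freq*t + 0.5*chirp*t**2))"""
    return amplitude * math.exp(-decay * t) * math.sin(
        2.0 * math.pi * (freq * t + 0.5 * chirp * t * t))

# Extended process feature vector: [model, {parameter list}, confidence]
epfv = [chirped_decaying_sinusoid,
        {"freq": 1200.0, "decay": 35.0, "chirp": 90.0},  # Hz, 1/s, Hz/s
        0.80]

model, params, cf = epfv
y0 = model(0.0, **params)    # the sinusoid starts at zero
```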

We show that for a connected graph with n nodes and e edges and maximum degree at most 3, the size of the dominating set found by the greedy algorithm is at most (10n - 2e)/13 if e ≥ (11/10)n, 11n - ... Keywords: algorithms, dominating set, maximum size
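The greedy algorithm analyzed here repeatedly picks the vertex that newly dominates the most undominated vertices. A minimal sketch (the tie-breaking rule is an illustrative choice):

```python
# Greedy dominating set: at each step, add the vertex whose closed
# neighborhood covers the most currently undominated vertices.
def greedy_dominating_set(adj):
    """adj: dict vertex -> set of neighbors. Returns a dominating set."""
    undominated = set(adj)
    dom = set()
    while undominated:
        # Break ties deterministically by vertex label.
        v = max(sorted(adj),
                key=lambda u: len((adj[u] | {u}) & undominated))
        dom.add(v)
        undominated -= adj[v] | {v}
    return dom

# Path on 6 vertices (maximum degree 2 <= 3):
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
D = greedy_dominating_set(path)
```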

In this paper, I give a short proof of a recent result by Sokal, showing that all zeros of the chromatic polynomial $P_G(q)$ of a finite graph $G$ of maximal degree $D$ lie in the disk $|q|< K D$, where $K$ is a constant that is strictly smaller than ...

Digital games are marketed, mass-produced, and consumed by an increasing number of people and the game industry is only expected to grow. In response, postsecondary institutions in the UK and the U.S. have started to create game degree programs. Though ... Keywords: Education, curriculum, game, instruction

To obtain a better understanding of WIL rationale and practices in Australian ICT degrees, a survey of managers and educational leaders of ICT was undertaken. These survey results were analysed and informed by discussions at a forum of ICT educational ... Keywords: academia, industry, professional practice, student experience, work integrated learning

PROCESS DESIGN AND CONTROL: Steady-State Operational Degrees of Freedom with Application to Refrigeration Cycles. Jørgen Bauck Jensen and Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology ... of the circulating refrigerant are also discussed. Two liquefied natural gas (LNG) processes of current interest ...

Study in Indonesia... and gain credit towards your degree! Journalism Professional Practicum in Indonesia ... will deepen their understanding of Indonesia whilst developing their journalism skills ... will be a high-calibre journalist who has experience with different elements of the media in both Indonesia ...

Study in Indonesia and gain credit towards your degree! International Relations in Indonesia ... Europe and the United States as well as Indonesia, are highly qualified and experienced ... as both the Australia Indonesia Institute, the Department of Education, Employment and Workplace Relations, the Myer ...

The Computer Science Program: The bachelor of science degree in computer science offered ... in the fundamentals of computer science, elements of practical application and an appreciation for liberal learning ... to work within various areas of computer science and to work across other disciplines.

Photo of the Week: What You Needed to Contain 100 Million Degree Plasma for 100 Millionths of a Second... in 1974. April 22, 2013. In the early years of magnetic fusion, there was talk among scientists of controlling nuclear energy to create useful power. To do this, scientists heated plasma to temperatures as high as 100 million degrees Celsius -- ten times hotter than the center of the sun. Controlling such high levels of energy required the construction of large machines that could withstand these extremely high energy levels. In this 1974 photo, laboratory scientists are shown working on Scyllac, one of the largest machines used for magnetic fusion experiments, located at Los Alamos National Laboratory. Scyllac filled a 100-by-100-foot building from wall to wall, and used 12 miles of one-inch cables and 3,000 capacitors to contain hot plasma the size of a small garden hose for just 100 millionths of a second. Learn more about early magnetic fusion experiments at LANL. | Photo courtesy of Los Alamos National Laboratory.

Heating Degree Days, by State (Weighted by Population, per 2000 Census) Dataset Summary Description The National Oceanic and Atmospheric Administration's (NOAA) National Environmental Satellite, Data, and Information Services (NESDIS), in conjunction with the National Climatic Data Center (NCDC) publish monthly and annual climate data by state for the U.S., including heating degree days (total number of days per month and per year). The average values for each state are weighted by population, using 2000 Census data. The base temperature for this dataset is 65 degrees F. Included here are monthly and annual values averaged over several periods of time: 1931-2000, 1931-60, 1941-70, 1951-80, 1961-90, 1971-2000 (standard deviation is also provided). Detailed monthly climatic information (including heating degree days) is available for the time period between 1895 and 2011, from NOAA (http://www7.ncdc.noaa.gov/CDO/CDODivisionalSelect.jsp#).
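The heating-degree-day quantity underlying this dataset is the accumulated shortfall of the mean daily temperature below the 65 degrees F base. A minimal sketch (the temperatures are illustrative):

```python
# Heating degree days (HDD): for each day, count the degrees by which the
# mean temperature falls below the 65 F base, then sum over the period.
BASE_F = 65.0

def heating_degree_days(daily_mean_temps_f):
    return sum(max(0.0, BASE_F - t) for t in daily_mean_temps_f)

# Illustrative week of mean daily temperatures (degrees F):
week = [50.0, 55.0, 63.0, 65.0, 70.0, 40.0, 64.5]
hdd = heating_degree_days(week)   # 15 + 10 + 2 + 0 + 0 + 25 + 0.5 = 52.5
```

Days at or above the base contribute nothing, so a warm month can have zero HDD.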

The Department of Energy Office of Environmental Management (DOE/EM) plans to conduct the Plutonium Disposition Project at the Savannah River Site (SRS) to disposition excess weapons-usable plutonium. A plutonium glass waste form is the preferred option for immobilization of the plutonium for subsequent disposition in a geologic repository. A reference glass composition (Lanthanide Borosilicate (LaBS) Frit B) was developed during the Plutonium Immobilization Program (PIP) to immobilize plutonium in the late 1990s. A limited amount of performance testing was performed on this baseline composition before efforts to further pursue Pu disposition via a glass waste form ceased. Recent FY05 studies have further investigated the LaBS Frit B formulation as well as development of a newer LaBS formulation denoted as LaBS Frit X. The objectives of the present task were to fabricate plutonium-loaded LaBS Frit X glass and perform corrosion testing to provide near-term data that will increase confidence that the LaBS glass product is suitable for disposal in the Yucca Mountain Repository. Specifically, testing was conducted in an effort to provide data to Yucca Mountain Project (YMP) personnel for use in performance assessment calculations. Plutonium-containing LaBS glass with the Frit X composition at a 9.5 wt% PuO{sub 2} loading was prepared for testing. Glass was prepared to support Product Consistency Testing (PCT) at Savannah River National Laboratory (SRNL). The glass was thoroughly characterized using x-ray diffraction (XRD) and scanning electron microscopy coupled with energy dispersive spectroscopy (SEM/EDS) prior to performance testing. A series of PCTs were conducted at SRNL using quenched Pu Frit X glass with varying exposed surface areas. Effects of isothermal and can-in-canister heat treatments on the Pu Frit X glass were also investigated. Another series of PCTs were performed on these different heat-treated Pu Frit X glasses. Leachates from all these PCTs were analyzed to determine the dissolved concentrations of key elements. Acid stripping of leach vessels was performed to determine the concentration of the glass constituents that may have sorbed on the vessels during leach testing. Additionally, the leachate solutions were ultrafiltered to quantify colloid formation. Characterization of the quenched Pu Frit X glass prior to testing revealed that some crystalline plutonium oxide was present in the glass. The crystalline particles had a disk-like morphology and likely formed via coarsening of particles in areas compositionally enriched in plutonium. Similar results had also been observed in previous Pu Frit B studies. Isothermally (1250{degrees}C) heat-treated Pu Frit X glasses showed two different crystalline phases (PuO{sub 2} and Nd{sub 2}Hf{sub 2}O{sub 7}), as well as a peak shift in the XRD spectra that is likely due to formation of a PuO{sub 2}-HfO{sub 2} solid solution phase. Micrographs of this glass showed clustering of some of the crystalline phases. Pu Frit X glass subjected to the can-in-canister heating profile also displayed the PuO{sub 2} and Nd{sub 2}Hf{sub 2}O{sub 7} phases in XRD analysis. Additional micrographs indicate the crystalline phases in this glass were of varying forms (a spherical PuO{sub 2} phase that appeared to range in size from submicron to {approx}5 microns, a dendritic-type phase composed of mixed lanthanides and plutonium, and a minor phase that contained Pu and Hf), and clustering of the phases was also observed.

Sulfur and nitrogen oxides emitted to the atmosphere have been linked to the acidification of water bodies and soils and perturbations in the earth's radiation balance. In order to model the global transport and transformation of SO{sub x} and NO{sub x}, detailed spatial and temporal emission inventories are required. Benkovitz et al. (1996) published the development of an inventory of 1985 global emissions of SO{sub x} and NO{sub x} from anthropogenic sources. The inventory was gridded to a 1{degree} x 1{degree} latitude-longitude grid and has served as input to several global modeling studies. There is now a need to provide modelers with an update of this inventory to a more recent year, with a split of the emissions into elevated and low level sources. This paper describes the development of a 1990 update of the SO{sub x} and NO{sub x} global inventories that also includes a breakdown of sources into 17 sector groups. The inventory development starts with a gridded global default EDGAR inventory (Olivier et al, 1996). In countries where more detailed national inventories are available, these are used to replace the emissions for those countries in the global default. The gridded emissions are distributed into two height levels (0-100m and >100m) based on the final plume heights that are estimated to be typical for the various sectors considered. The sources of data as well as some of the methodologies employed to compile and develop the 1990 global inventory for SO{sub x} and NO{sub x} are discussed. The results reported should be considered to be interim since the work is still in progress and additional data sets are expected to become available.
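The gridding and height-splitting steps described above can be sketched as follows. The indexing convention (row 0 at 90S, column 0 at 180W) and the example point are illustrative assumptions, not the inventory's actual layout.

```python
# Assign a point source to a 1 degree x 1 degree latitude-longitude cell,
# and split emissions into the two height levels by final plume height.
import math

def grid_cell(lat, lon):
    """(row, col) of the 1x1 degree cell; row 0 spans 90S-89S,
    col 0 spans 180W-179W. Edge values are clamped into range."""
    row = min(int(math.floor(lat + 90.0)), 179)
    col = min(int(math.floor(lon + 180.0)), 359)
    return row, col

def height_level(plume_height_m):
    """Two emission levels: 0-100 m and above 100 m."""
    return "0-100m" if plume_height_m <= 100.0 else ">100m"

cell = grid_cell(40.7, -74.0)    # an illustrative mid-latitude source
```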

An implicit and nonlinearly consistent (INC) solution technique is presented for the two-dimensional shallow-water equations. Since the method is implicit, and therefore unconditionally stable, time steps may be used that result in both gravity ...

Precipitation estimation from passive microwave radiometry based on physically based profile retrieval algorithms must be aided by a microphysical generator providing structure information on the lower portions of the cloud, consistent with the ...

Multispectral surface albedo and bidirectional properties are required for accurate determination of the surface and atmosphere solar radiation budget. A method is developed here to obtain time series of these surface characteristics consistent ...

In this paper we introduce a robust matching technique that allows a very accurate selection of corresponding feature points from multiple views. Robustness is achieved by enforcing global geometric consistency at an early stage of the matching ...

This dissertation presents the design, control, and implementation of a compact high-precision multidimensional positioner. This precision-positioning system consists of a novel concentrated-field magnet matrix and a triangular single-moving part that carries three 3-phase permanent-magnet planar-levitation-motor armatures. Since only a single levitated moving part, namely the platen, generates all required fine and coarse motions, this positioning system is reliable and potentially cost-effective. The three planar levitation motors based on the Lorentz-force law not only produce the vertical force to levitate the triangular platen but also control the platen's position and orientation in the horizontal plane. Three laser distance sensors are used to measure the vertical, x-rotation, and y-rotation motions. Three 2-axis Hall-effect sensors are used to determine lateral motions and rotation about the z-axis by measuring the magnetic flux density generated by the magnet matrix. This positioning system has a total mass of 1.52 kg, minimized to produce better dynamic performance. In order to reduce the mass of the moving platen, it is made of Delrin, with a mass density of 1.54 g/cm^3, by Computer Numerical Controlled (CNC) machining. The platen can be regarded as a pure mass, and the spring and damping effects are neglected except for the vertical dynamics. Single-input single-output (SISO) digital lead-lag controllers and a multivariable Linear Quadratic Gaussian (LQG) controller were designed and implemented. Real-time control was performed with the Linux Ubuntu operating system; the Real Time Application Interface (RTAI) for Linux works with the Comedi drivers and libraries and enables closed-loop real-time control. One of the key advantages of this positioning stage with Hall-effect sensors is the extended travel range and rotation angle in the horizontal mode. Maximum travel ranges of 220 mm in x and 200 mm in y were achieved experimentally. Since the magnet matrix generates periodic sinusoidal flux densities in the x-y plane, the travel range can be extended by increasing the number of magnet pitches. A rotation angle of 12 degrees was achieved in rotation around z. Angular velocities of 0.2094 rad/s and 4.74 rad/s were produced by a 200-mm-diameter circular motion and a 30-mm-diameter spiral motion, respectively. A maximum velocity of 16.25 mm/s was acquired from motion over one pitch. A maximum velocity of 17.5 mm/s in an 8-mm scanning motion was achieved with an acceleration of 72.4 m/s^2. Step responses demonstrated a 10-um resolution and 6-um rms position noise in the translational mode. For the vertical mode, step responses of 5 um in z, 0.001 degrees in rotation around x, and 0.001 degrees in rotation around y were achieved. This compact single-moving-part positioner has potential applications for precision-positioning systems in semiconductor manufacturing.

Raghavendra (STOC 2008) gave an elegant and surprising result: if Khot's Unique Games Conjecture (STOC 2002) is true, then for every constraint satisfaction problem (CSP), the best approximation ratio is attained by a certain simple semidefinite programming relaxation and a rounding scheme for it. In this paper, we show that a similar result holds for constant-time approximation algorithms in the bounded-degree model. Specifically, we present the following: (i) For every CSP, we construct an oracle that serves access in constant time to a nearly optimal solution of a basic LP relaxation of the CSP. (ii) Using the oracle, we present a constant-time rounding scheme that achieves an approximation ratio coincident with the integrality gap of the basic LP. (iii) We give a generic conversion from integrality gaps of basic LPs to hardness results. All of these results are ``unconditional.'' Therefore, for every bounded-degree CSP, we give the best approximation algorithm among all constant-time algorithms.

The h*-polynomial of a lattice polytope is the numerator of the generating function of the Ehrhart polynomial. Let P be a lattice polytope with h*-polynomial of degree d and with linear coefficient h*_1. We show that P has to be a lattice pyramid over a lower-dimensional lattice polytope if the dimension of P is greater than or equal to h*_1(2d + 1) + 4d - 1. This result has a purely combinatorial proof and generalizes a recent theorem of Batyrev. As an application we deduce from an inequality due to Stanley that the volume of a lattice polytope is bounded by a function depending only on the degree and the two highest non-zero coefficients of the h*-polynomial.

The global radiation climate associated with anomalously cold winter months and cold winters is analyzed for the contiguous United States. The radiation data consist of rehabilitated measured and modeled monthly values of global radiation on both ...

$f(R)$ gravity models belong to an important class of modified gravity models where the late-time cosmic accelerated expansion is considered as the manifestation of a large-scale modification of the force of gravity. $f(R)$ gravity models can be expressed in terms of a scalar degree of freedom by a redefinition of the model variables. The conformal transformation of the action from the Jordan frame to the Einstein frame makes the scalar degree of freedom more explicit, so that it can be studied conveniently. We have numerically investigated the features of the scalar degree of freedom and the consequent cosmological implications of the power-law ($\xi R^n$) and the Starobinsky (disappearing cosmological constant) $f(R)$ gravity models in the Einstein frame. Both models show interesting behaviour of their scalar degree of freedom and can produce the accelerated expansion of the Universe in the Einstein frame with a negative equation of state of the scalar field. However, the scalar field potential for the power-law model is a well-behaved function of the field, whereas the potential becomes flat for higher values of the field in the case of the Starobinsky model. Moreover, the equation of state of the scalar field for the power-law model is always negative and less than -1/3, which corresponds to the behaviour of the dark energy that produces the accelerated expansion of the Universe. This is not always the case for the Starobinsky model. At late times the Starobinsky model behaves as a cosmological constant $\Lambda$, as the power-law model does for $n \rightarrow 2$ at all times.
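The Einstein-frame scalar degree of freedom referred to above follows from the standard conformal transformation used in the $f(R)$ literature; conventions (e.g. the placement of $\kappa^2 = 8\pi G$) may differ from the paper's, so this is a sketch rather than its exact notation:

```latex
% Conformal map to the Einstein frame and the scalaron potential
\begin{align}
  \tilde{g}_{\mu\nu} &= F(R)\, g_{\mu\nu}, \qquad F(R) \equiv \frac{df}{dR},\\
  \kappa\phi &= \sqrt{\tfrac{3}{2}}\,\ln F(R),\\
  V(\phi) &= \frac{F(R)\,R - f(R)}{2\kappa^{2}\,F(R)^{2}}.
\end{align}
% For the power-law model f(R) = \xi R^{n}: F = n \xi R^{n-1}, so
% V \propto R^{2-n}, with R expressed in terms of \phi through
% F = e^{\sqrt{2/3}\,\kappa\phi}.
```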

As a key part of DOE's and industry's R&D efforts to improve the efficiency, cost, and emissions of power generation, a prototype High Performance Steam System (HPSS) has been designed, built, and demonstrated. The world's highest temperature ASME Section I coded power plant successfully completed over 100 hours of development tests at 1500{degrees}F and 1500 psig on a 56,000 pound per hour steam generator, control valve and topping turbine at an output power of 5500 hp. This development advances the HPSS to 400{degrees}F higher steam temperature than the current best technology being installed around the world. Higher cycle temperatures produce higher conversion efficiencies, and since steam is used to produce the large majority of the world's power, the authors expect HPSS developments will have a major impact on electric power production and cogeneration in the twenty-first century. Coal-fueled steam plants now produce the majority of the United States electric power. Cogeneration and the reduced cost and increased availability of natural gas have now made gas turbines using Heat Recovery Steam Generators (HRSGs) and combined cycles for cogeneration and power generation the lowest-cost producer of electric power in the United States. These gas-fueled combined cycles also have major benefits in reducing emissions while reducing the cost of electricity. Development of HPSS technology can significantly improve the efficiency of cogeneration, steam plants, and combined cycles. Figure 2 is a TS diagram that shows the HPSS has twice the energy available from each pound of steam when expanding from 1500{degrees}F and 1500 psia to 165 psia (150 psig, a common cogeneration process steam pressure). This report describes the prototype component and system design, and results of the 100-hour laboratory tests. The next phase of the program consists of building up the steam turbine into a generator set, and installing the power plant at an industrial site for extended operation.

In order to benefit from a further reduction of the vertical IP beta function of the PEP-II high energy ring (HER), the bunch length should be reduced. This will be achieved by changing the phase advance from 60 degrees to 90 degrees in the four arcs not adjacent to the IR region, thus reducing the momentum compaction by about 30% and reducing the bunch length from the present 12 mm down to 8.5 mm at low beam current. In preparation for implementing the 90-degree lattice, the main HER quadrupole and sextupole strings and their power supplies have been reconfigured. The synchrotron tune will initially be lower but can be brought back by raising the rf voltage. The beam emittance is held at 48 nm-rad by introducing a significant dispersion beat in the arcs. The lattice was successfully commissioned at currents up to 800 mA in August 2007. In this paper we compare the actual machine with the predicted behaviour, explain the correction strategies used, and give an overall assessment of the operation and the benefit of the new lattice configuration.
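The numbers quoted here are consistent with the textbook scaling of bunch length with momentum compaction and rf voltage (not stated explicitly in this record):

```latex
\sigma_z \;=\; \frac{c\,\alpha_c\,\sigma_\delta}{2\pi f_s},
\qquad
f_s \;\propto\; \sqrt{\alpha_c V_{\mathrm{rf}}}
\quad\Longrightarrow\quad
\sigma_z \;\propto\; \sigma_\delta\,\sqrt{\frac{\alpha_c}{V_{\mathrm{rf}}}}\, .
```

Reducing $\alpha_c$ by 30% alone gives $\sigma_z \approx \sqrt{0.7}\times 12\ \mathrm{mm} \approx 10\ \mathrm{mm}$; reaching the quoted 8.5 mm additionally relies on raising the rf voltage, in line with the remark about restoring the synchrotron tune.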

A novel pH measurement system based on remote absorption spectroscopy via two unidirectional optical fibers has been developed for use in cooling-water sampling lines in power plants. The system was designed to operate at 200 degrees Celsius (392 degrees Fahrenheit) and 1379 kPa (200 psi) and, so far, has been shown to reproducibly measure the pH of a flowing stream at room temperature and 1379 kPa (200 psi).
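The paired Celsius/Fahrenheit values quoted throughout these records follow the standard linear conversion; a minimal sketch (function names are illustrative):

```python
def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

def f_to_c(fahrenheit: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32.0) * 5.0 / 9.0

# The 200 °C design point quoted above:
print(c_to_f(200.0))  # 392.0
```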

Zinc addition to the reactor coolant system (RCS) of a pressurized water reactor (PWR) is being used for dose rate reduction and primary water stress corrosion cracking (PWSCC) mitigation. This report summarizes results of aqueous zinc oxide solubility experiments from 150 to 350 degrees Celsius (302 to 662 degrees Fahrenheit). These experiments were performed to develop quantitative models of solubility and aqueous-phase solute speciation behavior as functions of temperature, pH, and solution compositio...

Department of Chemical and Biological Engineering: information for potential graduate applicants who do not have a degree in Chemical Engineering. The department of chemical and biological engineering frequently admits applicants who have a bachelor's degree in a field other than chemical engineering. Many...

B.S. in Chemical Science The Chemical Science degree is designed for students who plan programs, and Geology. In addition, Chemical Science can be a valuable major for those interested in business and law. This degree is not intended as a chemical preparation for people who wish to do work directly in Chemistry

A program was developed and implemented at LLNL to provide more detailed, documented Criticality Safety Evaluations of operations in an R&D facility. The new Criticality Safety Evaluations were consistent with the regulatory requirements of the then-new DOE Order 5480.24, Nuclear Criticality Safety. The evaluations provide a criticality safety basis for each operation in the facility in support of the facility Safety Analysis Report. This implementation program provided a transition from one method of conducting and documenting Criticality Safety Evaluations to a new method consistent with the new regulatory requirements. The program also allowed continued safe operation of the facility while the new implementation-level Criticality Safety Evaluations were developed.

We investigate the self-consistency of the Dyson-Schwinger formalism. We focus on both the QED and the self-interacting scalar field theories. We prove that the set of the Dyson-Schwinger equations, together with the Green-Ward-Takahashi identity, is equivalent to the analogous set of integral equations studied in condensed matter, namely many-body perturbation theory, where it is solved self-consistently and iteratively. In this framework, we compute the non-perturbative solution of the gap equation for the self-interacting scalar field theory.

The objective of the proposed research is to define strategies for the improvement of alloys for structural components, such as the intermediate heat exchanger and primary-to-secondary piping, for service at 1000 degrees C in the He environment of the NGNP. Specifically, we will investigate the oxidation/carburization behavior and microstructure stability and how these processes affect creep. While generating this data, the project will also develop a fundamental understanding of how impurities in the He environment affect these degradation processes and how this understanding can be used to develop more useful life prediction methodologies.

The electric conductivity and dielectric permeability of the electron gas in collisional plasmas are found for an arbitrary degree of degeneracy of the electron gas. The Wigner-Vlasov-Boltzmann kinetic equation is used, with the collision integral in the BGK (Bhatnagar, Gross, and Krook) relaxation form in coordinate space. A dielectric permeability based on the relaxation equation in momentum space was earlier derived by Mermin, and a comparison with Mermin's formula is carried out. It is shown that, in the limit where Planck's constant tends to zero, the expression for the dielectric permeability passes into the classical one.

A two-phase or four-phase electric machine includes a first stator part and a second stator part disposed about ninety electrical degrees apart. Stator pole parts are positioned near the first stator part and the second stator part. An injector injects a third-harmonic frequency current that is separate from and not produced by the fundamental current driving the first stator part and the second stator part. The electric angular speed of the third-harmonic rotating field is given by an equation involving θ (rendered only as ##EQU00001## in the source record), where p is the number of pole pairs, θ is a mechanical angle, and t is time in seconds.

We have used 4752 days of data collected by the Birmingham Solar-Oscillations Network (BiSON) to determine very precise oscillation frequencies of acoustic low-degree modes that probe the solar core. We compare the fine (small frequency) spacings and frequency separation ratios formed from these data with those of different solar models. We find that models constructed with low metallicity are incompatible with the observations. The results provide strong support for lowering the theoretical uncertainties on the neutrino fluxes. These uncertainties had recently been raised due to the controversy over the solar abundances.

Over the last six years, Tomsk Polytechnic University (TPU) has developed a 5½ year engineering degree program in the field of Material Protection Control and Accounting (MPC&A). In 2009 the first students graduated with this new degree. There were 25 job offers from nuclear fuel cycle enterprises of Russia and Kazakhstan for 17 graduates of the program. Due to the rather wide selection of workplaces, all graduates have obtained positions at nuclear enterprises. The program was developed within the Applied Physics and Engineering Department (APED). The laboratory and methodological base has been created taking into consideration the experience of the similar program at the Moscow Engineering Physics Institute (MEPhI). However, the TPU program has some distinguishing features such as the inclusion of special courses pertaining to fuel enrichment and reprocessing. During the last two years, three MPC&A laboratories have been established at APED. This was made possible due to several factors such as establishment of the State innovative educational program at TPU, assistance of the U.S. Department of Energy through Pacific Northwest National Laboratory and Los Alamos National Laboratory, and the financial support of the Swedish Radiation Safety Authority and some Russian private companies. All three of the MPC&A laboratories are part of the Innovative Educational Center “Nuclear Technologies and Non-Proliferation,” which deals with many topics including research activities, development of new curricula for experts training and retraining, and training of master’s students. In 2008, TPU developed a relationship with the International Atomic Energy Agency (IAEA), which was familiarized with APED’s current resources and activities. The IAEA has shown interest in creation of a master’s degree educational program in the field of nuclear security at TPU. 
A future objective is to acquaint nuclear fuel cycle enterprises with new APED capabilities and involve the enterprises in the scientific and educational projects implemented through the Nuclear Technologies and Non-Proliferation Center. This paper describes the development of the MPC&A engineering degree program and future goals of TPU in the field of nonproliferation education.

The nematic state of the iron-based superconductors is studied in the undoped limit of the three-orbital (xz, yz, xy) spin-fermion model via the introduction of lattice degrees of freedom. Monte Carlo simulations show that in order to stabilize the experimentally observed lattice distortion and nematic order, and to reproduce photoemission experiments, both the spin-lattice and orbital-lattice couplings are needed. The interplay between their respective coupling strengths regulates the separation between the structural and Néel transition temperatures. Experimental results for the temperature dependence of the resistivity anisotropy and the angle-resolved photoemission orbital spectral weight are reproduced by the present numerical simulations.

Weather-related energy use, in the form of heating, cooling, and ventilation, accounted for more than 40 percent of all delivered energy use in residential and commercial buildings in 2006. Given the relatively large amount of energy affected by ambient temperature in the buildings sector, EIA has reevaluated what it considers normal weather for purposes of projecting future energy use for heating, cooling, and ventilation. In AEO2008, estimates of normal heating and cooling degree-days are based on the population-weighted average for the 10-year period from 1997 through 2006.
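The heating and cooling degree-days referenced here are daily departures of the mean temperature from the 65 °F (18.3 °C) base, summed over the period; a sketch of the standard computation (function names are illustrative):

```python
BASE_F = 65.0  # the 65 degrees Fahrenheit (18.3 degrees Celsius) base

def heating_degree_days(daily_mean_temps_f):
    """Sum of degrees by which each day's mean temperature falls below the base."""
    return sum(max(0.0, BASE_F - t) for t in daily_mean_temps_f)

def cooling_degree_days(daily_mean_temps_f):
    """Sum of degrees by which each day's mean temperature exceeds the base."""
    return sum(max(0.0, t - BASE_F) for t in daily_mean_temps_f)

# A day with a 40 °F mean contributes 25 HDD; an 80 °F day contributes 15 CDD.
print(heating_degree_days([40.0, 65.0, 70.0]))  # 25.0
print(cooling_degree_days([40.0, 65.0, 80.0]))  # 15.0
```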

AISI 1020 carbon steel was exposed to air at various relative humidities at 65°C. A "critical relative humidity" (CRH) of 75-85% was determined. The CRH is the transitional relative humidity at which oxidation/corrosion changes from dry oxidation to aqueous-film electrochemical corrosion. Short-term testing suggests that aqueous-film electrochemical corrosion results in the formation of an inner oxide of Fe₃O₄ and an outer oxide of a powdery Fe₂O₃ and/or Fe₂O₃·xH₂O.

Supplemental Supplies: Definitions of Key Terms. Biomass Gas: A medium-Btu gas containing methane and carbon dioxide, resulting from the action of microorganisms on organic materials such as a landfill. Blast-furnace Gas: The waste combustible gas generated in a blast furnace when iron ore is being reduced with coke to metallic iron; it is commonly used as a fuel within steel works. British Thermal Unit (Btu): The quantity of heat required to raise the temperature of 1 pound of liquid water by 1 degree Fahrenheit at the temperature at which water has its greatest density (approximately 39 degrees Fahrenheit). Coke-oven Gas: The mixture of permanent gases produced by the carbonization of coal in a coke oven at temperatures in excess of 1,000 degrees Celsius.
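By the Btu definition above, heating liquid water is a one-line calculation, Q (Btu) = pounds × ΔT (°F); a sketch (the 1055.06 J/Btu constant is the usual approximate conversion, and the names are illustrative):

```python
JOULES_PER_BTU = 1055.06  # approximate SI conversion

def water_heat_btu(pounds: float, delta_f: float) -> float:
    """Heat in Btu to raise `pounds` of liquid water by `delta_f` degrees F.
    Follows directly from the definition: 1 Btu per pound per degree F."""
    return pounds * delta_f

q = water_heat_btu(10.0, 50.0)  # 10 lb of water warmed by 50 °F
print(q)                        # 500.0 Btu
print(q * JOULES_PER_BTU)       # roughly 527,530 J
```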

The attenuation of ⁶⁰Co gamma rays and photons of 4, 10, and 18 MV x-ray beams by concrete, steel, and lead has been studied using the Monte Carlo technique for angles of incidence 0°, 30°, 45°, 60°, and 70°. Transmission factors have been determined down to < 2 × 10⁻⁵ in all cases. The results show that the deviation from the obliquity factor increases with angle but is not significant for angles ≤ 45°. At a 70° angle of incidence and a transmission factor of 10⁻⁵, the obliquity factor varies between 1.2 and 1.9 for concrete, between 1.4 and 1.7 for steel, and between 1.4 and 1.5 for lead over the range of energies investigated. This amounts to an additional 86 and 50 cm of concrete, 25 and 23 cm of steel, and 8 and 14 cm of lead for ⁶⁰Co and 18 MV x rays, respectively. The results for ⁶⁰Co in concrete and lead are in good agreement with previously published experimental work. Fits to the data using mathematical models allow reconstruction of all data curves to better than 1% on average and 7% in the worst single case. 9 refs., 14 figs., 6 tabs.

In this paper a novel framework for three-dimensional surface reconstruction by self-consistent fusion of shading and shadow features is presented. Based on the analysis of at least two pixel-synchronous images of the scene under different illumination ... Keywords: Lunar surface, Photoclinometry, Quality inspection, Shadow analysis, Shape from shading, Surface reconstruction

The present study provides a consistent and unified theory for the three types of linear waves of the shallow-water equations (SWE) in a zonal channel on the β plane: Kelvin, inertia–gravity (Poincaré), and planetary (Rossby). The new theory is ...

Located within 10° of the North Pole, northern Ellesmere Island offers continuous darkness in the winter months. This capability can greatly enhance the detection efficiency of planetary transit surveys and other time domain astronomy programs. We deployed two wide-field cameras at 80° N, near Eureka, Nunavut, for a 152 hr observing campaign in 2012 February. The 16 megapixel camera systems were based on commercial f/1.2 lenses with 70 mm and 42 mm apertures, and they continuously imaged 504 and 1295 deg², respectively. In total, the cameras took over 44,000 images and produced better than 1% precision light curves for approximately 10,000 stars. We describe a new high-speed astrometric and photometric data reduction pipeline designed for the systems, test several methods for the precision flat fielding of images from very-wide-angle cameras, and evaluate the cameras' image qualities. We achieved a scintillation-limited photometric precision of 1%-2% in each 10 s exposure. Binning the short exposures into 10 minute chunks provided a photometric stability of 2-3 mmag, sufficient for the detection of transiting exoplanets around the bright stars targeted by our survey. We estimate that the cameras, when operated over the full Arctic winter, will be capable of discovering several transiting exoplanets around bright (m_V < 9.5) stars.

A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector, and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
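The depth recovery described rests on triangulation from known geometry. The patent's projector-camera epipolar matching is more involved, but the core idea can be sketched with the classic rectified-stereo relation (all names and numbers here are illustrative):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic triangulation for a rectified camera pair: z = f * b / d.
    A feature's apparent shift (disparity) between views encodes its depth."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted 50 px between views, focal length 1000 px, baseline 0.5 m:
print(depth_from_disparity(1000.0, 0.5, 50.0))  # 10.0 (metres)
```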

This work presents an effective model for strongly interacting matter and the QCD equation of state (EoS). The model includes both hadron and quark degrees of freedom and takes into account the transition of chiral symmetry restoration as well as the deconfinement phase transition. At low temperatures $T$ and baryonic densities $\rho_B$ a hadron resonance gas is described using a SU(3)-flavor sigma-omega model and a quark phase is introduced in analogy to PNJL models for higher $T$ and $\rho_B$. In this way, the correct asymptotic degrees of freedom are used in a wide range of $T$ and $\rho_B$. Here, results of this model concerning the chiral and deconfinement phase transitions and thermodynamic model properties are presented. Large hadron resonance multiplicities in the transition region emphasize the importance of heavy-mass resonance states in this region and their impact on the chiral transition behavior. The resulting phase diagram of QCD matter at small chemical potentials is in line with latest lattice QCD and thermal model results.

Boehmite solubilities were measured at 150, 200, and 250 °C at pH values from 1 to 10 at 100 bars total pressure and used to determine the stability constants for the mononuclear aluminum hydroxide complexes Al(OH)²⁺, Al(OH)₂⁺, Al(OH)₃⁰, Al(OH)₄⁻, and the solubility product of boehmite. Buffer solutions of HCl-KCl, acetic acid-sodium acetate, sodium bicarbonate-carbonic acid, and boric acid-potassium hydroxide were used to control pH. Our solubility data are in good agreement with boehmite solubility measurements in perchloric acid and sodium hydroxide solutions reported by KUYUNKO et al. (1983). The stability constants for the aluminum hydroxide species were determined from the solubility data using a ridge regression technique. The results indicate that aluminum ion hydrolysis becomes stronger at higher temperatures, and the stability field of the neutral complex Al(OH)₃⁰ becomes larger. The results are used to provide a set of equilibrium constants for aluminum hydroxide complex formation and boehmite hydrolysis from 0-300 °C.

Advanced nuclear power reactor designs such as (Very) High Temperature Reactors (V/HTR) employ TRISO fuel particles that typically have a sub-millimetre U-based fuel kernel coated with three isotropic ceramic layers-a layer of silicon carbide sandwiched between pyrocarbon layers of different density. Evaluation of the ceramic layer thickness and of the degree of sphericity of these typical nuclear fuel particles is required at each step of the fabrication, in order to estimate future fuel performance under irradiation conditions. This study is based on the image processing of polished cross-sections, realized near the equatorial plane. From these 2D images, some measurements are carried out, giving an estimation of the diameter values for a sample of particles at each step of the coating process. These values are then statistically extended to the third dimension in order to obtain the thickness of each layer and the degree of sphericity of each particle. A representation of diameter and layer thickness in polar coordinates enables one to identify steps for which the coating process is defective or deviating from nominal objectives.
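The layer-thickness and sphericity estimates described are formed from radii measured around the 2D cross-section. One common convention for the degree of sphericity (an assumption here; the record does not state its exact definition) is the ratio of smallest to largest radius:

```python
def circularity(radii):
    """Ratio of smallest to largest radius measured from the kernel centre
    around the cross-section; 1.0 is a perfect circle (one common convention)."""
    return min(radii) / max(radii)

def mean_layer_thickness(outer_radii, inner_radii):
    """Average coating-layer thickness from matched angular samples of the
    layer's outer and inner boundaries."""
    return sum(o - i for o, i in zip(outer_radii, inner_radii)) / len(outer_radii)

# Illustrative radii in micrometres:
print(circularity([250.0, 252.0, 248.0, 251.0]))              # ~0.984
print(mean_layer_thickness([290.0, 292.0], [250.0, 252.0]))   # 40.0
```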

Purpose: Deformable image registration (DIR) is necessary for accurate dose accumulation between multiple radiotherapy image sets. DIR algorithms can suffer from inverse and transitivity inconsistencies. When using deformation vector fields (DVFs) that exhibit inverse-inconsistency and are nontransitive, dose accumulation on a given image set via different image pathways will lead to different accumulated doses. The purpose of this study was to investigate the dosimetric effect of and propose a postprocessing solution to reduce inverse consistency and transitivity errors. Methods: Four MVCT images and four phases of a lung 4DCT, each with an associated calculated dose, were selected for analysis. DVFs between all four images in each data set were created using the Fast Symmetric Demons algorithm. Dose was accumulated on the fourth image in each set using DIR via two different image pathways. The two accumulated doses on the fourth image were compared. The inverse consistency and transitivity errors in the DVFs were then reduced. The dose accumulation was repeated using the processed DVFs, the results of which were compared with the accumulated dose from the original DVFs. To evaluate the influence of the postprocessing technique on DVF accuracy, the original and processed DVF accuracy was evaluated on the lung 4DCT data on which anatomical landmarks had been identified by an expert. Results: Dose accumulation to the same image via different image pathways resulted in two different accumulated dose results. After the inverse consistency errors were reduced, the difference between the accumulated doses diminished. The difference was further reduced after reducing the transitivity errors. The postprocessing technique had minimal effect on the accuracy of the DVF for the lung 4DCT images. 
Conclusions: This study shows that inverse consistency and transitivity errors in DIR have a significant dosimetric effect on dose accumulation; depending on the image pathway taken to accumulate the dose, different results may be obtained. A postprocessing technique that reduces inverse consistency and transitivity errors is presented, which allows for consistent dose accumulation regardless of the image pathway followed.
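The inverse-consistency error at issue is the residual left when the forward and backward deformations are composed; a 1-D toy sketch with NumPy (illustrative only, not the Demons implementation used in the study):

```python
import numpy as np

def compose(u_ab, u_ba, x):
    """Map x through A->B then B->A: x + u_ab(x), then apply u_ba at that point."""
    y = x + u_ab(x)
    return y + u_ba(y)

def inverse_consistency_error(u_ab, u_ba, xs):
    """Mean |T_ba(T_ab(x)) - x|; zero for perfectly inverse-consistent DVFs."""
    return float(np.mean(np.abs(compose(u_ab, u_ba, xs) - xs)))

xs = np.linspace(0.0, 1.0, 101)

def u_ab(x):                       # toy forward displacement field
    return 0.05 * np.sin(2.0 * np.pi * x)

def u_ba(y):                       # naive "inverse": simple negation
    return -u_ab(y)

err = inverse_consistency_error(u_ab, u_ba, xs)
print(err)  # small but nonzero: negating a DVF is not a true inverse
```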

This report documents the examination of unclad fragments of unirradiated CANDU fuel, and irradiated LWR fuel, after approximately 2.8 years of oxidation in air at 130 degrees Centigrade and 170 degrees Centigrade. During oxidation, the various fuel specimens were isolated in separate vials, which were designed to permit free access of air while preventing cross-contamination. Two specimens of each fuel type were recovered for examination from each experiment. The irradiated fuel specimens were weighed a...

The parameters for symmetrical mixing of ions of the same sign in the virial-coefficient (Pitzer) system are evaluated from literature data for 25 °C in a manner consistent with the higher-order limiting law of Friedman. Twenty-four systems involve cation mixing with a common anion and fourteen involve anion mixing with a common cation. Heat of mixing data were similarly treated in a recent publication; the results give the temperature coefficients of some of these same parameters. The combined results yield the mixing parameters as functions of temperature on a basis both self-consistent and in accord with the limiting law. The results also yield, for a few systems without a common ion, predicted values in good agreement with experimental data.

Ion Cooling and Ejection from a Two-Stage Linear Quadrupole Ion Trap Consisting of RFQ Ion Guides. Kozlovskiy V.I., Filatov V.V., Shchepunov V.A. (UNIRIB, O.R.A.U., Oak Ridge, TN, USA), Brusov V.S., Pikhtelev A.R., Zelenov V.V. Introduction: The primary objective of this work concerns linear quadrupole ion traps, which are commonly used to interface a continuous ion beam from an external source with a mass analyzer requiring bunched or pulsed beams. We assume that the ions prepared for mass analysis are well spatially shaped and normalized by ion kinetic energy. In our work, such a device was developed and built to interface a source of continuous ion beams and a

The new approach outlined in Paper I (Spurzem & Giersz 1996) to follow the individual formation and evolution of binaries in an evolving, equal point-mass star cluster is extended for the self-consistent treatment of relaxation and close three- and four-body encounters for many binaries (typically a few percent of the initial number of stars in the cluster). The distribution of single stars is treated as a conducting gas sphere with a standard anisotropic gaseous model. A Monte Carlo technique is used to model the motion of binaries, their formation and subsequent hardening by close encounters, and their relaxation (dynamical friction) with single stars and other binaries. The results are a further approach towards a realistic model of globular clusters with primordial binaries without using special hardware. We present, as our main result, the self-consistent evolution of a cluster consisting of 300,000 equal point-mass stars plus 30,000 equal-mass binaries over several hundred half-mass relaxation tim...

We study the eigenvalue distribution of the Kirchhoff matrix of a large-scale probabilistic network with a prescribed expected degree sequence. This spectrum plays a key role in many dynamical and structural network problems ...

It is demonstrated that boreal winter accumulated heating degree-days, a weather derivative product that is frequently demanded by energy suppliers (among others), can be skillfully predicted with a lead time of 1 month, that is, at the beginning ...

Professional Master's Degree in Wetlands Conservation: This program is designed for students who want professional training in the multidisciplinary field of wetlands science and management. Courses include POLSCI 786 Policy Evaluation, POLSCI 784 Environmental Policy, and POLSCI 787 Policy Analysis & Choice.

This paper examines the 46 frequencies found in the δ Sct star KIC 8054146 involving a frequency spacing of exactly 2.814 cycles day⁻¹ (32.57 μHz), which is also a dominant low-frequency peak near or equal to the rotational frequency. These 46 frequencies range up to 146 cycles day⁻¹. Three years of Kepler data reveal distinct sequences of these equidistantly spaced frequencies, including the basic sequence and side lobes associated with other dominant modes (i.e., small amplitude modulations). The amplitudes of the basic sequence show a high-low pattern. The basic sequence follows the equation f_m = 2.8519 + m × 2.81421 cycles day⁻¹ with m ranging from 25 to 35. The zero-point offset and the lack of low-order harmonics eliminate an interpretation in terms of a Fourier series of a non-sinusoidal light curve. The exactness of the spacing eliminates high-order asymptotic pulsation. The frequency pattern is not compatible with simple hypotheses involving single or multiple spots, even with differential rotation. The basic high-frequency sequence is interpreted in terms of prograde sectoral modes. These can be marginally unstable, while their corresponding low-degree counterparts are stable due to stronger damping. The measured projected rotation velocity (300 km s⁻¹) indicates that the star rotates at ≳70% of the Keplerian break-up velocity. This suggests a near equator-on view. We qualitatively examine the visibility of prograde sectoral high-degree g-modes in integrated photometric light in such a geometrical configuration and find that prograde sectoral modes can reproduce the frequencies and the odd-even amplitude pattern of the high-frequency sequence.

We present calculations for symmetric nuclear matter using chiral nuclear interactions within the Self-Consistent Green's Functions approach in the ladder approximation. Three-body forces are included via effective one-body and two-body interactions, computed from an uncorrelated average over a third particle. We discuss the effect of the three-body forces on the total energy, computed with an extended Galitskii-Migdal-Koltun sum-rule, as well as on single-particle properties. Saturation properties are substantially improved when three-body forces are included, but there is still some underlying dependence on the SRG evolution scale.

CX-003211: Categorical Exclusion Determination, Geothermal Development in Hot Springs Valley. CX(s) Applied: A9, B3.1, B5.12. Date: 08/04/2010. Location(s): Montana. Office(s): Energy Efficiency and Renewable Energy, Golden Field Office. Flathead Electric Co-op would re-work an existing geothermal well to explore greater depths for 165 degrees Fahrenheit waters adequate to generate up to 10 megawatts of power through low-temperature binary cycle generation. The existing well was drilled in 1982 to a depth of 261 feet, encountering temperatures as high as 135 degrees Fahrenheit. The only laboratory work anticipated is water sampling analysis that would be done at local laboratories. DOCUMENT(S) AVAILABLE FOR DOWNLOAD: CX-003211.pdf

Oil shales are fine-grained sedimentary rocks that contain relatively large amounts of kerogen, which can be converted into liquid and gaseous hydrocarbons (petroleum liquids, natural gas liquids, and methane) by heating the rock, usually in the absence of oxygen, to 650 to 700 degrees Fahrenheit (in situ retorting) or 900 to 950 degrees Fahrenheit (surface retorting) [60]. (Oil shale is, strictly speaking, a misnomer in that the rock is not necessarily a shale and contains no crude oil.) The richest U.S. oil shale deposits are located in Northwest Colorado, Northeast Utah, and Southwest Wyoming. Currently, those deposits are the focus of petroleum industry research and potential future production. Among the three States, the richest oil shale deposits are on Federal lands in Northwest Colorado.

British thermal unit: The amount of heat required to raise the temperature of one pound of water one degree Fahrenheit; often used as a unit of measure for the energy content of fuels.[1][2] Wikipedia definition: The British thermal unit (BTU or Btu) is a traditional unit of energy equal to about 1055 joules. It is the amount of energy needed to cool or heat one pound of water by one degree Fahrenheit. In scientific contexts the BTU has largely been replaced by the SI unit of energy, the joule. The unit is most often used as a measure of power (as BTU/h) in the power, steam generation, heating, and air conditioning industries, and also as a measure of agricultural energy production (BTU/kg). It is still used ...
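The definition above amounts to a pair of unit conversions; a minimal sketch in Python (the helper names are my own, not from any cited source):

```python
# 1 BTU ~ 1055 J: the energy to heat one pound of water by one degree Fahrenheit.
BTU_TO_JOULES = 1055.06  # International Table BTU, approximately

def btu_to_joules(btu: float) -> float:
    """Convert British thermal units to joules."""
    return btu * BTU_TO_JOULES

def btu_per_hour_to_watts(btu_h: float) -> float:
    """Convert the common BTU/h power rating to watts (1 h = 3600 s)."""
    return btu_h * BTU_TO_JOULES / 3600.0

print(btu_to_joules(1.0))            # 1055.06 J
print(btu_per_hour_to_watts(12000))  # ~3517 W for a nominal 12,000 BTU/h unit
```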

The Sensor Fish device is being used at Northwest hydropower projects to better understand the conditions fish experience during passage through hydroturbines and other dam bypass alternatives. Since its initial development in 1997, the Sensor Fish has undergone numerous design changes to improve its function and extend the range of its use. The most recent Sensor Fish design, the three-degree-of-freedom (3DOF) device, has been used successfully to characterize the environment fish experience when passing through turbines, in spill, or in engineered fish bypass facilities at dams. Pacific Northwest National Laboratory (PNNL) is in the process of redesigning the current 3DOF Sensor Fish device package to improve its field performance. Rate gyros will be added to the new six-degree-of-freedom (6DOF) device so that it will be possible to observe the six linear and angular accelerations of the Sensor Fish as it passes the dam. Before the 6DOF Sensor Fish device can be developed and deployed, governing equations of motion must be developed in order to understand the design implications of instrument selection and placement within the body of the device. In this report, we describe a fairly general formulation of the coordinate systems, equations of motion, and force and moment relationships necessary to simulate the 6DOF movement of an underwater body. Some simplifications are made by considering the Sensor Fish device to be a rigid, axisymmetric body. The equations of motion are written in the body-fixed frame of reference. Transformations between the body-fixed and inertial reference frames are performed using a formulation based on quaternions. Force and moment relationships specific to the Sensor Fish body are currently not available. However, examples of trajectory simulations using the 6DOF equations are presented using existing low- and high-Reynolds-number force and moment correlations. Animation files for the test cases are provided on an attached CD.
The next phase of the work will focus on the refinement and application of the 6DOF simulator developed in this project. Experimental and computational studies are planned to develop a set of force and moment relationships that are specific to the Sensor Fish body over the range of Reynolds numbers that it experiences. Lab testing of prototype 6DOF Sensor Fish will also allow for refinement of the trajectory simulations through comparison with observations in test flumes. The 6DOF simulator will also be an essential component in tools to analyze field data measured using the next generation Sensor Fish. The 6DOF simulator will be embedded in a moving-machinery computational fluid dynamics (CFD) model for hydroturbines to numerically simulate the 6DOF Sensor Fish.
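The report's actual equations are not reproduced here, but the quaternion-based transformation between body-fixed and inertial frames that it describes can be sketched generically (NumPy; names and conventions are illustrative, not taken from the PNNL report):

```python
import numpy as np

def quat_to_rotation_matrix(q: np.ndarray) -> np.ndarray:
    """Rotation matrix for a unit quaternion q = (w, x, y, z),
    mapping body-fixed coordinates into the inertial frame."""
    w, x, y, z = q / np.linalg.norm(q)  # guard against drift from unit norm
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

# Example: a body-frame velocity rotated into the inertial frame,
# for a 45-degree rotation about the z-axis.
q = np.array([np.cos(np.pi / 8), 0.0, 0.0, np.sin(np.pi / 8)])
v_body = np.array([1.0, 0.0, 0.0])
v_inertial = quat_to_rotation_matrix(q) @ v_body  # ~ (0.707, 0.707, 0)
```

Using quaternions instead of Euler angles avoids gimbal-lock singularities during rapid tumbling, which is presumably why the report adopts them.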

We compare two theoretical approaches to dielectric diblock copolymer melts in an external electric field. The first is a relatively simple analytic expansion in the relative copolymer concentration, and includes the full electrostatic contribution consistent with that expansion. It is valid close to the order-disorder transition point, the weak segregation limit. The second employs self-consistent field (SCF) theory and includes the full electrostatic contribution to the free energy at any copolymer segregation. It is more accurate but computationally more intensive. Motivated by recent experiments, we explore a section of the phase diagram in the three-dimensional parameter space of the block architecture, the interaction parameter, and the external electric field. The relative stability of the lamellar, hexagonal, and distorted body-centered-cubic (bcc) phases is compared within the two models. As a function of increasing electric field, the distorted bcc region in the phase diagram shrinks and disappears above a triple point, at which the lamellar, hexagonal, and distorted bcc phases coexist. We examine the deformation of the bcc phase under the influence of the external field. While the elongation of the spheres is larger in the one-mode expansion than that predicted by the full SCF theory, the general features of the two schemes are in satisfactory agreement. This indicates the general utility of the simple theory for exploratory calculations.

A Bayesian analysis of the world's $p(\\gamma,K^+)\\Lambda$ data is presented. We adopt a Regge-plus-resonance framework featuring consistent interactions for nucleon resonances up to spin $J = 5/2$. The power of the momentum dependence of the consistent interaction structure rises with the spin of the resonance. This leads to unphysical structures in the energy dependence of the computed cross sections when the short-distance physics is cut off with standard hadronic form factors. A plausible, spin-dependent modification of the hadronic form factor is proposed which suppresses the unphysical artifacts. Next, we evaluate all possible combinations of 11 candidate resonances. The best model is selected from the 2048 model variants by calculating the Bayesian evidence values against the world's $p(\\gamma,K^+)\\Lambda$ data. From the proposed selection of 11 resonances, we find that the following nucleon resonances have the highest probability of contributing to the reaction: $S_{11}(1535)$, $S_{11}(1650)$, $F_{15}(1680)$, $P_{13}(1720)$, $D_{13}(1900)$, $P_{13}(1900)$, $P_{11}(1900)$, and $F_{15}(2000)$.


The free energy cost of confining a star polymer, in which $f$ flexible polymer chains containing $N$ monomeric units are tethered to a central unit, in a slit with two parallel repulsive walls a distance $D$ apart is considered, for good solvent conditions. The parallel and perpendicular components of the gyration radius of the star polymer and the monomer density profile across the slit are also obtained. Theoretical descriptions via Flory theory and scaling treatments are outlined, and compared to numerical self-consistent field calculations (applying the Scheutjens-Fleer lattice theory) and to Molecular Dynamics results for a bead-spring model. It is shown that Flory theory and self-consistent field (SCF) theory yield the correct scaling of the parallel linear dimension of the star with $N$, $f$ and $D$, but cannot be used for estimating the free energy cost reliably. We demonstrate that the same problem already occurs for the confinement of chains in cylindrical tubes. We also briefly discuss the problem of a free or grafted star polymer interacting with a single wall, and show that the dependence of the confining force on the functionality of the star is different for a star confined in a nanoslit and a star interacting with a single wall, which is due to the absence of a symmetry plane in the latter case.
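For context, the blob-scaling form that Flory and SCF theory are said to reproduce is the standard de Gennes picture (a textbook result, not a formula quoted from the paper; $a$ denotes the monomer size):

```latex
\frac{\Delta F}{k_B T} \;\sim\; f\,N\left(\frac{a}{D}\right)^{1/\nu},
\qquad
R_\parallel \;\sim\; D\left[\,N\left(\frac{a}{D}\right)^{1/\nu}\right]^{3/4},
\qquad \nu \simeq 0.588 \ \text{(good solvent)} .
```

Here each arm is pictured as a two-dimensional self-avoiding walk of confinement blobs of size $D$, each blob contributing of order $k_B T$ to the free energy; the leading-order $f$-dependence shown neglects inter-arm crowding, which is precisely where the paper finds the simple theories unreliable.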

To predict the evolution of electron clouds and their effect on the beam, the high energy physics community has relied so far on the complementary use of 'buildup' and 'single/multi-bunch instability' reduced descriptions. The former describes the evolution of electron clouds at a given location in the ring, or 'station', under the influence of prescribed beams and external fields [1], while the latter (sometimes also referred to as the 'quasi-static' approximation [2]) follows the interaction between the beams and the electron clouds around the accelerator with prescribed initial distributions of electrons, assumed to be concentrated at a number of discrete 'stations' around the ring. Examples of single-bunch instability codes include HEADTAIL [3], QuickPIC [4, 5], and PEHTS [6]. By contrast, a fully self-consistent approach, in which both the electron cloud and beam distributions evolve simultaneously under their mutual influence without any restriction on their relative motion, is required for modeling the interaction of high-intensity beams with electron clouds for heavy-ion beam-driven fusion and warm dense matter science. This community has relied on the use of Particle-In-Cell (PIC) methods through the development and use of the WARP-POSINST code suite [1, 7, 8]. The development of novel numerical techniques (including adaptive mesh refinement and a new 'drift-Lorentz' particle mover for tracking charged particles in magnetic fields using large time steps) has enabled the first application of WARP-POSINST to the fully self-consistent modeling of beams and electron clouds in high energy accelerators [9], albeit for only a few betatron oscillations. It was recently observed [10] that there exists a preferred frame of reference which minimizes the number of computer operations needed to simulate the interaction of relativistic objects.
This opens the possibility of reducing the cost of fully self-consistent simulations for the interaction of ultrarelativistic beams with electron cloud by orders of magnitude. The computational cost of the fully self-consistent mode is then predicted to be comparable to that of the quasi-static mode, assuming that several stations per betatron period are needed. During the workshop, there was some debate about the number of stations per betatron period that are needed when using the quasi-static mode. The argument was made that if there is less than one station per betatron period, then artificial resonances can be triggered and the resulting emittance growth provides an upper bound. The emittance growth thus obtained will fall either above or below the operational requirements of the machine. In the latter case, one can conclude that the electron effect that has been simulated is of no concern. However, if the emittance growth that was obtained is above the threshold, then the results become inconclusive, and simulations which resolve the betatron motion are then needed. In this case, according to [10], the fully self-consistent approach becomes an option. The aim of this paper is to investigate whether this option is indeed practical.

The curing of cross-linkable encapsulation is a critical consideration for photovoltaic (PV) modules manufactured using a lamination process. Concerns related to ethylene-co-vinyl acetate (EVA) include the quality (e.g., expiration and uniformity) of the films or completion (duration) of the cross-linking of the EVA within a laminator. Because these issues are important to both EVA and module manufacturers, an international standard has recently been proposed by the Encapsulation Task-Group within the Working Group 2 (WG2) of the International Electrotechnical Commission (IEC) Technical Committee 82 (TC82) for the quantification of the degree of cure for EVA encapsulation. The present draft of the standard calls for the use of differential scanning calorimetry (DSC) as the rapid, enabling secondary (test) method. Both the residual enthalpy- and melt/freeze-DSC methods are identified. The DSC methods are calibrated against the gel content test, the primary (reference) method. Aspects of other established methods, including indentation and rotor cure metering, were considered by the group. Key details of the test procedure will be described.

It is commonly assumed that quantum field theory arises by applying ordinary quantum mechanics to the low energy effective degrees of freedom of a more fundamental theory defined at ultra-high-energy/short-wavelength scales. We shall argue here that, even for free quantum fields, there are holistic aspects of quantum field theory that cannot be properly understood in this manner. Specifically, the ``subtractions'' needed to define nonlinear polynomial functions of a free quantum field in curved spacetime are quite simple and natural from the quantum field theoretic point of view, but are at best extremely ad hoc and unnatural if viewed as independent renormalizations of individual modes of the field. We illustrate this point by contrasting the analysis of the Casimir effect, the renormalization of the stress-energy tensor in time-dependent spacetimes, and anomalies from the point of view of quantum field theory and from the point of view of quantum mechanics applied to the independent low energy modes of the field. Some implications for the cosmological constant problem are discussed.

The nonlinear modes of a two-degree-of-freedom mechanical system with a bilateral elastic stop are studied. The issue related to the non-smoothness of the impact force is handled through a regularization technique. In order to obtain the Nonlinear Normal Modes (NNMs), the harmonic balance method with a large number of harmonics, combined with the asymptotic numerical method, is used to solve the regularized problem. These methods are available in the software package MANLAB. The results are validated against periodic orbits obtained analytically in the time domain by direct integration of the nonregularized problem. The two NNMs starting respectively from the two linear normal modes of the associated underlying linear system are discussed. The energy-frequency plot is used to present a global view of the behavior of the modes. The dynamics of the modes are also analyzed by comparing periodic orbits and modal lines. The first NNM shows elaborate dynamics with the occurrence of multiple impacts per period. The second NNM, by contrast, presents simpler dynamics with localization of the displacement on the first mass.

A study has confirmed the feasibility of designing, fabricating and installing resonant magnetic field perturbation (RMP) coils in JET(1) with the objective of controlling edge localized modes (ELM). A system of two rows of in-vessel coils, above the machine midplane, has been chosen as it not only can investigate the physics of and achieve the empirical criteria for ELM suppression, but also permits variation of the spectra allowing for comparison with other experiments. These coils present several engineering challenges. Conditions in JET necessitate the installation of these coils via remote handling, which will impose weight, dimensional and logistical limitations. And while the encased coils are designed to be conventionally wound and bonded, they will not have the usual benefit of active cooling. Accordingly, coil temperatures are expected to reach 350 degrees C during bakeout as well as during plasma operations. These elevated temperatures are beyond the safe operating limits of conventional OFHC copper and the epoxies that bond and insulate the turns of typical coils. This has necessitated the use of an alternative copper alloy conductor C18150 (CuCrZr). More importantly, an alternative to epoxy had to be found. An R&D program was initiated to find the best available insulating and bonding material. The search included polyimides and ceramic polymers. The scope and status of this R&D program, as well as the critical engineering issues encountered to date are reviewed and discussed.

The 460-ton dipole for the Hall A 4-GeV/c High Resolution Spectrometer has a bend angle of 45{sup o}, with an 8.4-m radius of curvature and an effective length of 6.6 m. It has a useful width of 100 cm and a 25-cm gap at the central radius of curvature. The dipole provides focusing in the dispersive plane by means of rotated (by 30 degrees) entrance and exit pole faces as well as a field index of -1.25. The end contour geometries have been designed to eliminate higher-order aberrations. The maximum central field is 1.6 T at 4 GeV/c. A field quality of 2 x 10{sup -4} (maximum deviation from the design value) is required over an excitation range from 0.16 T to 1.6 T. The 1.8-kA conductor is a 36-wire flattened cable. It has been designed to have limited cryostability at 4.5 K and 1.3 atm. Each coil is wound as one double pancake against the outer wall of the helium vessel in order to react the in-plane (hoop) loads. The bath-cooled, planar coil features negative curvature on its inner radius and at the exit. The coil produces 400 KAT at full excitation. The stored energy of this magnet is 3.5 MJ.
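The quoted stored energy and operating current imply an effective lumped inductance for the dipole; a quick back-of-envelope check (the lumped relation E = LI²/2 is my simplification for this estimate, not a figure from the paper):

```python
# Figures taken from the abstract above; E = L * I**2 / 2 is an assumed
# lumped-inductance model, good only as an order-of-magnitude check.
E_stored = 3.5e6  # J, stored energy at full excitation
I = 1.8e3         # A, current in the 36-wire flattened cable
L = 2.0 * E_stored / I**2
print(f"Effective inductance ~ {L:.2f} H")  # ~2.16 H
```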

Imaging the magnetic fields around a non-magnetic impurity can provide a clear benchmark for quantifying the degree of magnetic frustration. Focusing on the strongly frustrated J{sub 1}-J{sub 2} model and the spatially anisotropic J{sub 1a}-J{sub 1b}-J{sub 2} model, very distinct low energy behaviors reflect different levels of magnetic frustration. In the J{sub 1}-J{sub 2} model, bound magnons appear trapped near the impurity in the ground state and strongly reduce the ordered moments for sites proximal to the impurity. In contrast, local moments in the J{sub 1a}-J{sub 1b}-J{sub 2} model are enhanced on the impurity's neighboring sites. These theoretical predictions can be probed by experiments such as nuclear magnetic resonance and scanning tunneling microscopy, and the results can elucidate the role of frustration in antiferromagnets and help narrow the possible models to understand magnetism in the iron pnictides.

The proton Zero Degree Calorimeter (ZP) for the ALICE experiment will measure the energy of the spectator protons in heavy ion collisions at the CERN LHC. Since all the spectator protons have the same energy, the calorimeter's response is proportional to their number, providing direct information on the centrality of the collision. The ZP is a spaghetti calorimeter, which collects and measures the Cherenkov light produced by the shower particles in silica optical fibers embedded in a brass absorber. The details of its construction will be shown. The calorimeter was tested at the CERN SPS using pion and electron beams with momenta ranging from 50 to 200 GeV/c. The response of the calorimeter and its energy resolution have been studied as a function of the beam energy. The signal uniformity and a comparison between the transverse profiles of the hadronic and electromagnetic showers are also presented. Moreover, the differences between the calorimeter's responses to protons and pions of the same energy have been investigated, exploiting the proton contamination in the positive pion beams.

Design considerations for various types of energy-conserving window treatments to avoid condensation-related maintenance problems are discussed. The window heat losses, dew point temperatures, and allowable relative humidities at which condensation may occur on interior glass surfaces at an interior temperature of 65 degrees Fahrenheit and exterior temperatures from -50 to 30 degrees Fahrenheit were calculated by computer. Vapor pressures were also computed to show the importance of vapor (air) tight weather stripping and coverings for window treatments.
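The study's computer results are not reproduced here, but the underlying calculation can be sketched. Assuming the Magnus approximation for saturation vapor pressure (my choice of formula, not necessarily the one used in the report, and with an assumed glass surface temperature), the allowable interior relative humidity before condensation forms on the glass is:

```python
import math

def saturation_vp_hpa(t_c: float) -> float:
    """Saturation vapor pressure in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def allowable_rh_percent(t_interior_c: float, t_glass_c: float) -> float:
    """Highest interior RH (%) before the glass surface reaches the dew point."""
    return 100.0 * saturation_vp_hpa(t_glass_c) / saturation_vp_hpa(t_interior_c)

t_interior = (65.0 - 32.0) * 5.0 / 9.0  # 65 degrees F interior air, ~18.3 C
t_glass = 5.0                           # hypothetical interior glass surface, C
print(f"allowable RH ~ {allowable_rh_percent(t_interior, t_glass):.0f}%")
```

Condensation appears once the interior air's dew point exceeds the glass surface temperature, which is why single glazing (cold glass) tolerates far less humidity than insulated units.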

We present an efficient general approach to first-principles molecular dynamics simulations based on extended Lagrangian Born-Oppenheimer molecular dynamics in the limit of vanishing self-consistent field optimization. The reduction of the optimization requirement reduces the computational cost to a minimum, without causing any significant loss of accuracy or long-term energy drift. The optimization-free first-principles molecular dynamics requires only a single diagonalization per time step and yields trajectories at the same level of accuracy as "exact", fully converged, Born-Oppenheimer molecular dynamics simulations. The optimization-free limit of extended Lagrangian Born-Oppenheimer molecular dynamics therefore represents an ideal starting point for a robust and efficient formulation of a new generation of first-principles quantum mechanical molecular dynamics simulation schemes.

The chiral low-energy constants c{sub D} and c{sub E} are constrained by means of accurate ab initio calculations of the A = 3 binding energies and, for the first time, of the triton {beta} decay. We demonstrate that these low-energy observables allow a robust determination of the two undetermined constants. The consistency of the interactions and currents in chiral effective field theory is key to this remarkable result. The two- plus three-nucleon interactions from chiral effective field theory, defined by properties of the A = 2 system and the present determination of c{sub D} and c{sub E}, are successful in predicting properties of the A = 3 and 4 systems.

We develop the general scheme for modified $f(R)$ gravity reconstruction from any realistic FRW cosmology. We formulate several versions of modified gravity compatible with Solar System tests in which the following sequence of cosmological epochs occurs: (a) a matter-dominated phase (with or without usual matter), a transition from deceleration to acceleration, and an accelerating epoch consistent with recent WMAP data; (b) $\\Lambda$CDM cosmology without a cosmological constant. As a rule, such modified gravities are expressed implicitly (in terms of special functions) with late-time asymptotics of known type (for instance, the model with negative and positive powers of curvature). In the alternative approach, it is demonstrated that even simple versions of modified gravity may lead to the unification of matter-dominated and accelerated phases at the price of the introduction of compensating dark energy.

The most promising technique for the control of neoclassical tearing modes in tokamak experiments is the compensation of the missing bootstrap current with electron-cyclotron current drive. In this frame, the dynamics of magnetic islands has been studied extensively in terms of the modified Rutherford equation, including the presence of current drive, either analytically described or computed by numerical methods. In this article, a self-consistent model for the dynamic evolution of the magnetic island and the driven current is derived, which takes into account the island's magnetic topology and its effect on the current drive. The model combines the modified Rutherford equation with a ray-tracing approach to electron-cyclotron wave propagation and absorption. Numerical results exhibit a decrease in the time required for complete stabilization with respect to the conventional computation (not taking into account the island geometry), which increases with increasing initial island size and radial misalignment ...

We present the time-dependent restricted-active-space self-consistent field (TD-RASSCF) theory as a new framework for the time-dependent many-electron problem. The theory generalizes the multiconfigurational time-dependent Hartree-Fock (MCTDHF) theory by incorporating the restricted-active-space scheme well known in time-independent quantum chemistry. Optimization of the orbitals as well as the expansion coefficients at each time step makes it possible to construct the wave function accurately while using only a relatively small number of electronic configurations. In numerical calculations of high-order harmonic generation spectra of a one-dimensional model of atomic beryllium interacting with a strong laser pulse, the TD-RASSCF method is reasonably accurate while largely reducing the computational complexity. The TD-RASSCF method has the potential to treat large atoms and molecules beyond the capability of the MCTDHF method.

In 2007, the observed Earth flyby anomalies were successfully simulated using an empirical formula (H. J. Busack, 2007). This simulation led to the prediction of anomaly values to be expected for the Rosetta flybys of Mars in 2007 and of Earth in 2007 and 2009. While the data for the Mars flyby are still under evaluation, the predictions of the formula for the last two Earth flybys of Rosetta have now been fully confirmed. This is remarkable, since an alternatively proposed formula (Anderson et al., 2007) failed to predict the correct values for the recent flybys. For the Mercury flybys of the Messenger spacecraft, this alternative formula predicts a null result. In the meantime, Doppler residuals for the flybys of 14.01.2008 and 06.10.2008 have become available. On both flybys, significant residuals were observed, using gravity data derived by Mariner 10 at Mercury (D. E. Smith et al., 2009). According to the authors, these residuals cannot be eliminated entirely by adjustment of the second-degree gravity coefficients or by the assumption of irregular mass concentrations of acceptable value on Mercury. In this investigation, I adapt the output of the simulation program to compare with the measured Doppler residuals of the Mercury flybys without changing the formerly derived parameters for the Earth flybys. The simulation with these parameters leads to Doppler residuals for the Mercury flybys compatible with the measured curves. Additionally, the expected flyby anomalies are calculated. Since the gravity field of Mercury has not yet been explored with sufficient accuracy, this result cannot be falsified or confirmed until the evaluation of the coming Mercury orbits of Messenger is finished. If the proposed empirical formula were then confirmed once again, this would be a strong indication of an underlying physical reality.

We consider the voltage structure in the open-field circuit and outer magnetosphere of a magnetar. The standard polar-cap model for radio pulsars is modified significantly when the polar magnetic field exceeds 1.8x10^{14} G. Pairs are created by accelerated particles via resonant scattering of thermal X-rays, followed by the nearly instantaneous conversion of the scattered photon to a pair. A surface gap is then efficiently screened by e+- creation, which regulates the voltage in the inner part of the circuit to ~10^9 V. We also examine the electrostatic gap structure that can form when the magnetic field is somewhat weaker, and deduce a voltage 10-30 times larger over a range of surface temperatures. We examine carefully how the flow of charge back to the star above the gap depends on the magnitude of the current that is extracted from the surface of the star, on the curvature of the magnetic field lines, and on resonant drag. The rates of different channels of pair creation are determined self-consistently, including the non-resonant scattering of X-rays, and collisions between gamma rays and X-rays. We find that the electrostatic gap solution has too small a voltage to sustain the observed pulsed radio output of magnetars unless i) the magnetic axis is nearly aligned with the rotation axis and the line of sight; or ii) the gap is present on the closed as well as the open magnetic field lines. Several properties of the radio magnetars -- their rapid variability, broad pulses, and unusually hard radio spectra -- are consistent with a third possibility, that the current in the outer magnetosphere is strongly variable, and a very high rate of pair creation is sustained by a turbulent cascade.

The consistency of the loop regularization (LORE) method is explored in multiloop calculations. A key concept of the LORE method is the introduction of irreducible loop integrals (ILIs), which are evaluated from the Feynman diagrams by adopting the Feynman parametrization and the ultraviolet-divergence-preserving (UVDP) parametrization. It is then inevitable for the ILIs to encounter divergences in the UVDP parameter space due to the generic overlapping divergences in the 4-dimensional momentum space. By computing the so-called $\\alpha\\beta\\gamma$ integrals arising from two-loop Feynman diagrams, we show how to deal with the divergences in the parameter space within the LORE method. By identifying the divergences in the UVDP parameter space with those in the subdiagrams, we arrive at Bjorken and Drell's analogy between Feynman diagrams and electrical circuits. The UVDP parameters are shown to correspond to the conductance or resistance in the electrical circuits, and the divergence in Feynman diagrams is ascribed to infinite conductance or zero resistance. In particular, the sets of conditions required to eliminate the overlapping momentum integrals for obtaining the ILIs are found to be associated with the conservation of electric voltages, and the momentum conservations correspond to the conservation of electrical currents, as stated by Kirchhoff's laws in the electrical-circuit analogy. As an application to the massive scalar $\\phi^4$ theory, this enables us to obtain the well-known logarithmic running of the coupling constant and the consistent power-law running of the scalar mass at two-loop level. In particular, we present an explicit demonstration of the general procedure for applying the LORE method to multiloop calculations of Feynman diagrams, exploiting the advantages of Bjorken and Drell's circuit analogy.

The search for gluonic degrees of freedom in mesons is an experimental challenge. The most promising approach is to look for mesons with exotic quantum numbers that cannot be described by quark degrees of freedom alone. The GlueX experiment at Jefferson Lab in Hall D, currently under construction, will search for such hybrid mesons with exotic quantum numbers by scattering a linearly polarized, high-energy photon beam off a liquid hydrogen target. An amplitude analysis will be employed to search for such resonances in the data and determine their quantum numbers.

We have mapped faint 1667 OH line emission (T_A ≈ 20-40 mK in our ≈ 30' beam) along many lines of sight in the Galaxy covering an area of ≈ 4° × 4° in the general direction of l ≈ 108°, b ≈ 5°. The OH emission is widespread, similar in extent to the local H I (r ≲ 2 kpc) both in space and in velocity. The OH profile amplitudes show a good general correlation with those of H I in spectral channels of ≈ 1 km s^{-1}; this relation is described by T_A(OH) ≈ 1.50 × 10^{-4} T_B(H I) for values of T_B(H I) ≲ 60-70 K. Beyond this the H I line appears to 'saturate', and few values are recorded above ≈ 90 K. However, the OH brightness continues to rise, by a further factor of ≈ 3. The OH velocity profiles show multiple features with widths typically 2-3 km s^{-1}, but less than 10% of these features are associated with CO(1-0) emission in existing surveys of the area smoothed to comparable resolution.

by terms; term courses are populated by degree plans for each major. Notes: helpful hints about that specific course; click OK to return to the Main Screen. Lock: locks that course into that term; a course (e.g., bowling, Tai Chi) can be locked, allowing the student to select that course to take that term. Refresh Suggestions: sets ...

General regional and temporal trends in maximum freezing degree-days (FDD's) are identified for the shore zone of the Great Lakes Basin for the 80 winter periods 1897-1977. The cumulative frequency distribution of FDD's at each of 25 locations is ...
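As a hedged illustration of the quantity being tabulated (the function name, sample values, and 0 °C base are assumptions, not from the study), freezing degree-days are conventionally accumulated as the sum of daily mean-temperature deficits below the freezing point:

```python
def freezing_degree_days(daily_mean_temps_c, base_c=0.0):
    """Sum the daily deficits of the mean temperature below the
    freezing base temperature; warm days contribute nothing."""
    return sum(max(0.0, base_c - t) for t in daily_mean_temps_c)

# One cold week of daily mean temperatures (deg C)
week = [-5.0, -3.2, 0.5, -1.0, 2.0, -7.5, -0.5]
print(freezing_degree_days(week))
```

For this sample week the total is about 17.2 FDD; a winter-season maximum is simply this sum taken over the whole freezing season.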

We use 1944 processors of the Earth Simulator to model seismic wave propagation resulting from large earthquakes. Simulations are conducted based upon the spectral-element method, a high-degree finite-element technique with an exactly diagonal mass matrix. ...

arXiv:submit/0451583 [physics.gen-ph] 8 Apr 2012. Including Nuclear Degrees of Freedom in a Lattice ... and Engineering, University of Engineering and Technology, Lahore, Pakistan. Abstract: Motivated by ... many condensed matter and nuclear systems are described initially on the same footing. Since it may be possible ...

Development of an advanced flow-through external pressure-balanced reference electrode opens the door for more accurate measurements of corrosion potential, redox potential, and pH in power plant waters at temperatures up to 400 degrees C. Such measurements allow a more accurate assessment of an environment's corrosivity and promote more effective corrosion control.

Locomotion control of legged robots is a very challenging task because very accurate foot trajectory tracking control is necessary for stable walking. An electro-hydraulically actuated walking robot has sufficient power to walk on rough terrain and carry ... Keywords: Hydraulic actuator, One-step-ahead fuzzy control, Robot locomotion, Six-legged walking robot, Two-degree-of-freedom fuzzy control

We review the literature on the gender gap on concept inventories in physics. Across studies, men consistently score higher on pre-tests of the Force Concept Inventory (FCI) and Force and Motion Conceptual Evaluation (FMCE) by about 10%, and in most cases score higher on post-tests as well, also by about 10%. The average difference in normalized gain is about 6%. This difference is much smaller than the average difference in normalized gain between traditional lecture and interactive engagement (25%), but is large enough that it could impact the results of studies comparing the effectiveness of different teaching methods. Based on our analysis of 24 published articles comparing the impact of 34 factors that could potentially influence the gender gap, no single factor is sufficient to explain the gap. Several high-profile studies that have claimed to account for or reduce the gender gap have failed to be replicated, suggesting that isolated claims of explanations of the gender gap should be interpreted with ca...

The hydromagnetic structure of a neutron star accreting symmetrically at both magnetic poles is calculated as a function of accreted mass, M_a, and polar cap radius, starting from a centered magnetic dipole and evolving through a quasistatic sequence of two-dimensional, Grad-Shafranov equilibria. The calculation is the first to track fully the growth of high-order magnetic multipoles, due to equatorward hydromagnetic spreading, while simultaneously preserving flux freezing and a self-consistent mass-flux distribution. Equilibria are constructed numerically by an iterative scheme and analytically by Green functions. Two key results are obtained, with implications for recycled pulsars. (i) The mass required to significantly reduce the magnetic dipole moment, 10^{-5} Msun, greatly exceeds previous estimates (~ 10^{-10} Msun), which ignored the confining stress exerted by the compressed equatorial magnetic field. (ii) Magnetic bubbles, disconnected from the stellar surface, form in the later stages of accretion (M_a > 10^{-4} Msun).

A simple and well known method of estimating residential heating loads is the variable base degree-day method, in which the steady-state heat loss rate (UA) is multiplied by the degree-days computed from the balance temperature of the structure. The balance temperature is a function of the UA as well as the average rate of internal heat gains, reflecting the displacement of the heating requirements by these gains. Currently, the heat gains from solar energy are lumped with those from appliances to estimate an average rate over the day. This ignores the effects of the timing of the gains from solar energy, which are more highly concentrated during daytime hours, hence more frequently exceeding the required space heat and less utilizable than the gains from appliances. Simulations or specialized passive solar energy calculation methods have previously been required to account for this effect. This paper presents curves of the fraction of the absorbed solar energy utilized for displacement of space heat, developed by comparing heating loads calculated using a variable base degree-day method (ignoring solar gains) to heating loads from a large number of detailed DOE-2 simulations. The difference in the loads predicted by the two methods can be interpreted as the utilized solar gains. The solar utilization decreases as the thermal integrity increases, as expected, and the solar utilizations are similar across climates. They can be used to estimate the utilized fraction of the absorbed solar energy and, with the load predicted by the variable base degree-day calculation, form a modified degree-day method that closely reproduces the loads predicted by the DOE-2 simulation model and is simple enough for hand calculations. 6 refs., 6 figs., 2 tabs.
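The basic mechanics of the variable base degree-day method can be sketched as follows (a minimal illustration; the function names, units, and sample numbers are assumptions, and the paper's solar utilization curves are not reproduced here):

```python
def balance_temperature(setpoint_c, ua_w_per_k, avg_gains_w):
    """Balance point: thermostat setpoint minus the temperature offset
    supplied by the average rate of internal (and lumped solar) gains."""
    return setpoint_c - avg_gains_w / ua_w_per_k

def heating_load_kwh(ua_w_per_k, daily_mean_temps_c, balance_c):
    """Variable base degree-day estimate: UA times the degree-day
    total below the balance temperature (24 h per day, W*d -> kWh)."""
    degree_days = sum(max(0.0, balance_c - t) for t in daily_mean_temps_c)
    return ua_w_per_k * degree_days * 24.0 / 1000.0

# Example: UA = 200 W/K, 20 C setpoint, 700 W average gains
t_bal = balance_temperature(20.0, 200.0, 700.0)      # 16.5 C
load = heating_load_kwh(200.0, [10.0, 20.0], t_bal)  # 31.2 kWh over two days
```

In the paper's modified method, the solar portion of the gains would first be discounted by a utilization fraction before entering the balance-point calculation.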

A generic prediction in the paradigm of weakly interacting dark matter is the production of relativistic particles from dark matter pair-annihilation in regions of high dark matter density. Ultra-relativistic electrons and positrons produced in the center of the Galaxy by dark matter annihilation should produce a diffuse synchrotron emission. While the spectral shape of the synchrotron dark matter haze depends on the particle model (and secondarily on the galactic magnetic fields), the morphology of the haze depends primarily on (1) the dark matter density distribution, (2) the galactic magnetic field morphology, and (3) the diffusion model for high-energy cosmic-ray leptons. Interestingly, an unidentified excess of microwave radiation with characteristics similar to those predicted by dark matter models has been claimed to exist near the galactic center region in the data reported by the WMAP satellite, and dubbed the "WMAP haze". In this study, we carry out a self-consistent treatment of the variables enumerated above, enforcing constraints from the available data on cosmic rays, radio surveys and diffuse gamma rays. We outline and make predictions for the general morphology and spectral features of a "dark matter haze" and we compare them to the WMAP haze data. We also characterize and study the spectrum and spatial distribution of the inverse Compton emission resulting from the same population of energetic electrons and positrons. We point out that the spectrum and morphology of the radio emission at different frequencies is a powerful diagnostics to test whether a galactic synchrotron haze indeed originates from dark matter annihilation.

We summarize the results of two experimental programs at the Alternating Gradient Synchrotron of BNL to measure the nuclear transparency of nuclei measured in the A(p,2p) quasielastic scattering process near 90° in the pp center of mass. The incident momenta varied from 5.9 to 14.4 GeV/c, corresponding to 4.8 ... nuclear transparency near 90° c.m., and the nuclear transparency for deuterons was studied. Second, we review the techniques used in an earlier experiment, E834, and show that the two experiments are consistent for the Carbon data. E834 also determines the nuclear transparencies for Li, Al, Cu, and Pb nuclei as well as for C. We find for both E850 and E834 that the A(p,2p) nuclear transparency, unlike the A(e,e'p) nuclear transparency, is incompatible with a constant value versus energy as predicted by Glauber calculations. The A(p,2p) nuclear transparency for C and Al increases by a factor of two between 5.9 and 9.5 GeV/c incident proton momentum. At its peak the A(p,2p) nuclear transparency is about 80% of the constant A(e,e'p) nuclear transparency. Then the nuclear transparency falls back to the Glauber level again. This oscillating behavior is generally interpreted as an interplay between two components of the pN scattering amplitude: one short-ranged and perturbative, and the other long-ranged and strongly absorbed in the nuclear medium. We suggest a number of experiments for further studies of nuclear transparency effects.

Data sets of one degree latitude by one degree longitude carbon dioxide (CO{sub 2}) emissions in units of thousand metric tons of carbon (C) per year from anthropogenic sources have been produced for 1950, 1960, 1970, 1980 and 1990. Detailed geographic information on CO{sub 2} emissions can be critical in understanding the pattern of the atmospheric and biospheric response to these emissions. Global, regional and national annual estimates for 1950 through 1992 were published previously. Those national, annual CO{sub 2} emission estimates were based on statistics on fossil-fuel burning, cement manufacturing and gas flaring in oil fields as well as energy production, consumption and trade data, using the methods of Marland and Rotty. The national annual estimates were combined with gridded one-degree data on political units and 1984 human populations to create the new gridded CO{sub 2} emission data sets. The same population distribution was used for each of the years as a proxy for the emission distribution within each country. The implied assumption of that procedure was that per capita energy use and fuel mix are uniform over a political unit. The consequence of this first-order procedure is that the spatial changes observed over time are solely due to changes in national energy consumption and nation-based fuel mix. Increases in emissions over time are apparent for most areas.
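The population-proxy procedure described above amounts to allocating each national total across grid cells in proportion to gridded population; a minimal sketch (the function name, cell labels, and numbers are hypothetical, not from the data set's actual processing code):

```python
def downscale_by_population(national_total, cell_population):
    """Allocate a national emission total to grid cells in proportion to
    each cell's population, under the stated assumption that per capita
    energy use and fuel mix are uniform within the country."""
    total_pop = sum(cell_population.values())
    return {cell: national_total * pop / total_pop
            for cell, pop in cell_population.items()}

# Hypothetical country: 100 kt C spread over three one-degree cells
cells = {"(40N, 3W)": 5.0e6, "(41N, 3W)": 3.0e6, "(41N, 2W)": 2.0e6}
gridded = downscale_by_population(100.0, cells)  # 50, 30, 20 kt C
```

By construction the gridded values sum back to the national total, so spatial change over time comes only from the national totals, exactly as the abstract notes.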

This study determined the perceptions of Illinois community college graduates of mining technology associate degree programs for the period May 1974 to May 1980. The three community colleges offering the programs were Rend Lake College, Southeastern Illinois College, and Wabash Valley College. A questionnaire was formulated and mailed in the Fall of 1981 to the subject graduates and (with two follow-ups) achieved a 53.0% response rate, or 283 of the 534 graduates with current addresses. (Of the total 634 graduates, 100 questionnaires were not deliverable by the post office). Based upon the analysis of data related to the research questions of the study, certain conclusions were drawn, namely: 1) the curriculum content, instructional quality, and facilities were at an acceptable level for the preparation of coal mining technology students; and 2) the Illinois associate degree coal mining technology programs have been functional in preparing students for mining careers.

Energy fluctuations in a single classical degree of freedom above the ground state at thermodynamic equilibrium at temperature T are typically of average magnitude {approx}k{sub B}T. However, we show that the average magnitude of such fluctuations can be much larger (or much smaller) than k{sub B}T, indeed, that at least in principle it can be infinite (or arbitrarily close to 0). Nevertheless, the average energy fluctuation magnitude being untypically large (or untypically small) does not violate the second law of thermodynamics. For, if the average magnitude of energy fluctuations is much larger than k{sub B}T, then particle motion along the degree of freedom must manifest extreme spatial delocalization. The cost of locating the fluctuating particle along its degree of freedom equals or exceeds the large energy gain obtained upon finding it with an energy of much more than k{sub B}T above its ground state. The particle loses as much or more ability to do work via its spatial delocalization than it gains via the energy fluctuation. Similarly, if the average magnitude of energy fluctuations is much smaller than k{sub B}T, then the small energy yield obtainable upon locating the particle is compensated for by the small cost of locating it.

Reserves Summary Definitions: Key Terms.
Dry Natural Gas: Natural gas which remains after: 1) the liquefiable hydrocarbon portion has been removed from the gas stream (i.e., gas after lease, field, and/or plant separation); and 2) any volumes of nonhydrocarbon gases have been removed where they occur in sufficient quantity to render the gas unmarketable. (Note: Dry natural gas is also known as consumer-grade natural gas. The parameters for measurement are cubic feet at 60 degrees Fahrenheit and 14.73 pounds per square inch absolute.)
Natural Gas, Associated-Dissolved: The combined volume of natural gas which occurs in crude oil reservoirs either as free gas (associated) or as gas in solution with crude oil (dissolved).
Natural Gas Liquids: Those hydrocarbons in natural gas which are separated from the gas through the processes of absorption, condensation, adsorption, or other methods in gas processing or cycling plants. Generally such liquids consist of propane and heavier hydrocarbons and are commonly referred to as condensate, natural gasoline, or liquefied petroleum gases. Where hydrocarbon components lighter than propane are recovered as liquids, these components are included with natural gas liquids.

A self-consistent method of inverting high spectral resolution, Rayleigh-Mie lidar signals to obtain profiles of atmospheric state variables, as well as aerosol properties, is presented. Assumed are a known air pressure at a reference height, ...

Performance Evaluation of Two Home-Based Lazy Release Consistency Protocols for Shared Virtual Memory Systems. ... called Overlapped Home-based LRC (OHLRC), takes advantage of the communication processor found on each

The core structure and stability of the 90° partial dislocation in diamond is studied within isotropic elasticity theory and ab initio total energy calculations. The double-period reconstruction is found to be more stable than the single-period reconstruction for a broad range of stress states. The analysis of the ab initio results shows further that elasticity theory is valid for dislocation spacings as small as 10-20 Angstrom, thus allowing ab initio calculations to provide reliable parameters for continuum theory analysis. (c) 2000 The American Physical Society.

In light of recent studies that show oxygen isotope fractionation in carbonate minerals to be a function of HCO3⁻ and CO3²⁻ concentrations, the oxygen isotope fractionation and exchange between water and components of the carbonic acid system (HCO3⁻, CO3²⁻, and CO2(aq)) were investigated at 15, 25, and 40 °C. To investigate oxygen isotope exchange between HCO3⁻, CO3²⁻, and H2O, NaHCO3 solutions were prepared and the pH was adjusted over a range of 2 to 12 by the addition of small amounts of HCl or NaOH. After thermal, chemical, and isotopic equilibrium was attained, BaCl2 was added to the NaHCO3 solutions. This resulted in immediate BaCO3 precipitation, thus recording the isotopic composition of the dissolved inorganic carbon. Data from experiments at 15, 25, and 40 °C (1 atm) show that the oxygen isotope fractionation between HCO3⁻ and H2O as a function of temperature is governed by the equation: 1000 ln α(HCO3⁻-H2O) = 2.66 ± 0.05 (10⁶ T⁻²) + 1.18 ± 0.52, where α is the fractionation factor and T is in kelvins. The temperature dependence of oxygen isotope fractionation between CO3²⁻ and H2O is 1000 ln α(CO3²⁻-H2O) = 2.28 ± 0.03 (10⁶ T⁻²) − 1.50 ± 0.29. The oxygen isotope fractionation between CO2(aq) and H2O was investigated by acid stripping CO2(aq) from low-pH solutions; these data yield the following equation: 1000 ln α(CO2(aq)-H2O) = 2.52 ± 0.03 (10⁶ T⁻²) + 12.12 ± 0.33. The kinetics of oxygen isotope exchange were also investigated. The half-times for exchange between HCO3⁻ and H2O were 3.6, 1.4, and 0.25 h at 15, 25, and 40 °C, respectively. The half-times for exchange between CO2 and H2O were 1200, 170, and 41 h at 15, 25, and 40 °C, respectively. These results show that the δ18O of the total dissolved inorganic carbon species can vary by as much as 17‰ at a constant temperature. This could result in temperature-independent variations in the δ18O of precipitated carbonate minerals, especially in systems that are not chemically buffered.
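The fitted fractionation equations all share the common form 1000 ln α = a(10⁶ T⁻²) + b with T in kelvins; as a hedged sketch (using only the central coefficient values and omitting the quoted uncertainties; the function name is an assumption), the fractionation at a given temperature can be evaluated as:

```python
def thousand_ln_alpha(a, b, temp_celsius):
    """Evaluate 1000 ln(alpha) = a * (10^6 / T^2) + b, with T in kelvins,
    the common form of the fitted fractionation equations."""
    t_k = temp_celsius + 273.15
    return a * 1.0e6 / t_k ** 2 + b

# Bicarbonate-water fractionation at 25 C, central values a=2.66, b=1.18
print(thousand_ln_alpha(2.66, 1.18, 25.0))  # about 31.1 per mil
```

The same function with the CO3²⁻ or CO2(aq) coefficients gives the other two fractionations, making the roughly 17‰ spread among the dissolved species at one temperature easy to reproduce.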

Experimental data at 30 °C are reported for the adsorption of mixtures of benzene and cyclohexane on two types of carbon surface: graphitized carbon and activated charcoal. The properties of the adsorbed solution approach those of bulk liquid at vapor saturation for graphitized carbon, but not for activated charcoal. The mixtures adsorbed on graphitized carbon are nonideal, and the deviations from ideality increase with surface coverage. For activated charcoal, the adsorbed mixtures are nearly ideal at all coverages. Mixture behavior for both adsorbents can be predicted without using experimental data for the adsorbed mixtures. 11 refs.

This paper describes assumptions and procedures used to perform thermal damage analysis caused by post loss-of-coolant-accident (LOCA) hydrogen deflagration at Three Mile Island Unit 2 Reactor. Examination of available photographic evidence yields data on the extent and range of thermal and burn damage. Thermal damage to susceptible material in accessible regions of the reactor building was distributed in non-uniform patterns. No clear explanation for the non-uniformity was found in the examined evidence; e.g., burned materials were adjacent to materials that appear similar but were not burned. Because these items were in proximity to vertical openings that extend the height of the reactor building, we assume the unburned materials preferentially absorbed water vapor during periods of high, local steam concentration. A control pendant from the polar crane located in the top of the reactor building sustained asymmetric burn damage of decreasing degree from top to bottom. Evidence suggests the polar-crane pendant side that experienced the heaviest damage was exposed to intense radiant energy from a transient fire plume in the reactor containment volume. Simple hydrogen-fire-exposure tests and heat transfer calculations approximate the degree of damage found on inspected materials from the containment building and support an estimated 8% pre-fire hydrogen concentration.

In this article we complete the proof---for a broad class of four-manifolds---of Witten's conjecture that the Donaldson and Seiberg-Witten series coincide, at least through terms of degree less than or equal to c-2, where c is a linear combination of the Euler characteristic and signature of the four-manifold. This article is a revision of sections 4--7 of an earlier version, while a revision of sections 1--3 of that earlier version now appear in a separate companion article (math.DG/0007190). Here, we use our computations of Chern classes for the virtual normal bundles for the Seiberg-Witten strata from the companion article (math.DG/0007190), a comparison of all the orientations, and the PU(2) monopole cobordism to compute pairings with the links of level-zero Seiberg-Witten moduli subspaces of the moduli space of PU(2) monopoles. These calculations then allow us to compute low-degree Donaldson invariants in terms of Seiberg-Witten invariants and provide a partial verification of Witten's conjecture.

In 1983, Fort Valley State University (FVSU) received start-up funds from the US Department of Energy's Office of Minority Economic Impact to develop a Cooperative Developmental Energy Program (CDEP). The objective of CDEP is to develop a mutually beneficial long-term synergistic relationship among FVSU, two major universities, and the private and governmental sectors of the nation's energy industry by creating a technology oriented labor base for minorities and women. FVSU accomplishes this objective by (1) developing dual-degree curricula with the University of Oklahoma and the University of Nevada at Las Vegas in energy related disciplines such as engineering, geosciences, and health physics; (2) recruiting academically talented minority and female students to pursue careers in the above disciplines; and (3) developing participatory alliances with major energy companies and governmental agencies via internship, co-op, and employment programs. Since its inception in 1983, CDEP has provided over 650 energy internships for FVSU students; they have gained over 250,000 hours of hands-on work experience and earned over $3 million to help finance their education. Approximately 900 students have been in the CDEP program. Over 30 have found employment in the energy industry and approximately 35 have gone on to earn Master's or Ph.D. degrees.

Lignin, composed predominantly of p-hydroxyphenyl (H), guaiacyl (G) and syringyl (S) subunits, is a major component of plant cell walls that imparts resistance toward chemical and microbial deconstruction of plant biomass, rendering its conversion inefficient and costly. Previous studies have shown that altering lignin composition, i.e., the relative abundance of H, G and S subunits, promises more efficient extraction of sugars from plant biomass. Smaller and less branched lignin chains are more easily extracted during pretreatment, making cellulose more readily degradable. Here, using density functional theory calculations, we show that the incorporation of H subunits into lignin via β-β and β-5 interunit linkages reduces the degree of polymerization in lignin. Frontier molecular orbital analyses of lignin dimers and trimers show that an H unit as the terminal subunit on a growing lignin polymer linked via a β-β or β-5 linkage cannot undergo radical formation, preventing further chain growth by endwise polymerization and resulting in lignin polymers with a lower degree of polymerization. These results indicate that, for endwise polymerization in lignin synthesis, there exists a chemical control that may play a significant role in determining the structure of lignin.

We introduce a new and easy-to-calculate measure for the expected degree of herd behavior or co-movement between stock prices. This forward looking measure is model-independent and based on observed option data. It is baptized the Herd Behavior Index (HIX). The degree of co-movement in a stock market can be determined by comparing the observed market situation with the extreme (theoretical) situation under which the whole system is driven by a single factor. The HIX is then defined as the ratio of an option-based estimate of the risk-neutral variance of the market index and an option-based estimate of the corresponding variance in case of the extreme single factor market situation. The HIX can be determined for any market index provided an appropriate series of vanilla options is traded on this index as well as on its components. As an illustration, we determine historical values of the 30-days HIX for the Dow Jones Industrial Average, covering the period January 2003 to October 2009.
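A hedged sketch of the ratio at the heart of the HIX (not the authors' implementation; it approximates the extreme single-factor market by perfect positive dependence, where the index standard deviation is the weighted sum of the component standard deviations, and all variances would in practice be option-implied estimates):

```python
def herd_behavior_index(index_variance, component_stds, weights):
    """HIX sketch: observed (option-implied) variance of the index divided
    by its variance in the extreme single-factor case, approximated here
    by setting every pairwise correlation to one."""
    extreme_std = sum(w * s for w, s in zip(weights, component_stds))
    return index_variance / extreme_std ** 2

# Two equally weighted stocks, each with 20% implied volatility:
# if the index itself shows 14% volatility, HIX = 0.0196 / 0.04 = 0.49
print(herd_behavior_index(0.14 ** 2, [0.20, 0.20], [0.5, 0.5]))
```

An index volatility well below the perfect-dependence bound thus yields a HIX well below 1; a HIX near 1 signals that the components move almost as one.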

A short comment on "The Jones-Hore theory of radical-ion-pair reactions is not self-consistent" (arXiv:1010.3888v3) is presented. In the comment, it is pointed out that the paper includes a misconception about the Jones-Hore approach in Chem. Phys. Lett. 488 (2010) 90-93. The re-formulation is presented and it is demonstrated that the Jones-Hore theory is consistent at least on the point claimed by I. K. Kominis in the paper.

The fatigue and creep-fatigue crack propagation performance of Type 316 stainless steel has been investigated following fast neutron (n) irradiation. The purpose was to evaluate the effects of neutron fluence and temperature on the crack propagation resistance and failure mode of the steel. Results are presented from fatigue tests of the annealed steel irradiated at 649 °C. Scanning electron microscope examination of the fracture surfaces of the tested specimens revealed that the failure mode of the specimens which exhibited increased crack propagation rates was primarily intergranular, while a transgranular mode was observed for specimens with lower crack propagation rates. The results point toward a synergistic relationship between thermomechanical history, precipitate formation, and hold time effects as the responsible mechanism for the crack propagation performance.

The advent of many-core processors is imposing many changes on the operating system. The resources that are under contention have changed; previously, CPU cycles were the resource in demand and required fair and precise sharing. Now compute cycles are ... Keywords: copy on write, memory management, memory sharing, virtual memory

The autonomous operation of an intelligent service robot in practical applications requires that the robot builds up a map of the environment by itself, even for large environments like supermarkets. This paper presents a solution to the problem of building ...

We present first results from our efforts in automatically increasing and adapting phonetic dictionaries for spontaneous speech recognition. Spontaneous speech adds a variety of phenomena to a speech recognition task: false starts [1], human and nonhuman noises [2], new words [3] and alternative pronunciations. All of these phenomena have to be tackled when adapting a speech recognition system for spontaneous speech. For phonetic dictionaries (especially for spontaneous speech) it is important to choose the pronunciations of a word according to the frequency in which they appear in the database rather than the "correct" pronunciation as it might be found in a lexicon. Additionally modifications of the dictionary should not lead to a higher phoneme confusability. Therefore we propose a data-driven approach to add new pronunciations to a given phonetic dictionary, in a way that they model the given occurrences of words in the database. We show how even a simple approach can lead to signi...

about 7%. ... consisted of, in tons, natural battery-grade ore ... steel production by virtue of its sulfur ... aluminum alloys and is used in oxide form in dry cell batteries. The overall level and nature of manganese consumption in batteries was denoted by the on-schedule expansion of domestic capacity for production

We motivate our study by simulating the particle transport of a thin film deposition process done by PVD (physical vapor deposition) processes. In this paper we present a new model taking into account a self-consistent electrostatic particle-in-cell model ...

Thermodynamics has been applied to astronomy, biology, psychology, some social systems, and so on. But the various evolutions from astronomy to biology and social systems cannot consist only of increases of entropy. When fluctuations are magnified due to internal interactions, statistical independence and the second law of thermodynamics do not hold. The existence of internal interactions is a necessary condition for a decrease of entropy in an isolated system. We calculate quantitatively the entropy of a plasma. Then we discuss the thermodynamics of biology, and obtain a mathematical expression for a moderate degree of input negative entropy flow, which is a universal scientific law. Further, the thermodynamics of physiology and psychology, and the thought field, are introduced. Qigong and various religious practices are related to these states of order, in which a decrease of entropy is shown due to internal interactions of the isolated systems. Finally, we discuss possible decreases of entropy in some social systems.

The macroscopic electromechanical coupling properties of ferroelectric polycrystals are composed of linear and nonlinear contributions. The nonlinear contribution is typically associated with the extrinsic effects related to the creation and motion of domain walls. To quantitatively compare the macroscopic nonlinear properties of a lead zirconate titanate ceramic and the degree of domain orientation, in-situ neutron and high-energy x-ray diffraction experiments are performed and they provide the domain orientation density as a function of the external electric field and mechanical compression. Furthermore, the macroscopic strain under the application of external electrical and mechanical loads is measured and the nonlinear strain is calculated by means of the linear intrinsic piezoelectric effect and the linear intrinsic elasticity. The domain orientation density and the nonlinear strain show the same dependence on the external load. The scaling factor that relates the two values is constant and is the same for both electrical and mechanical loadings.

We present a highly parallel finite element program, Olympus, equipped with an ultrascalable linear solver, Prometheus, applied to micro-FE bone modeling calculations on an IBM SP Power3. Scalability is demonstrated with scaled speedup studies of nonlinear analyses of a vertebral body with over half a billion degrees of freedom. We show parallel scalability with up to 4088 processors on the ASCI White machine. This work is significant in that, in the domain of unstructured implicit finite element analysis in solid mechanics with complex geometry, this is the first demonstration of a highly parallel and efficient application of a mathematically optimal linear solution method: smoothed aggregation algebraic multigrid.

The standing-wave electric-field profile within multilayer coatings is significantly perturbed by a nodular defect. The intensity, which is proportional to the electric field squared, is increased in the high-index material by ≥3× at normal incidence and ≥12× at a 45-degree incidence angle. It is therefore not surprising that nodular defects are initiation sites of laser-induced damage. In this study, the impact of reflectance-band centering and incidence angle is explored for a 1 µm diameter nodular defect seed overcoated with a 24-layer high reflector constructed of quarter-wave-thick alternating layers of hafnia and silica. The modeling was performed using a three-dimensional finite-element analysis code.

The condensation of metal vapor in an inert gas is studied by the molecular dynamics method. Two condensation regimes are investigated: with maintenance of partial pressure of the metal vapor and with a fixed number of metal atoms in the system. The main focus is the study of the cluster energy distribution over the degrees of freedom and mechanisms of the establishment of thermal equilibrium. It is shown that the internal temperature of a cluster considerably exceeds the buffer gas temperature and the thermal balance is established for a time considerably exceeding the nucleation time. It is found that, when the metal vapor concentration exceeds 0.1 of the argon concentration, the growth of clusters with the highest possible internal energy occurs, the condensation rate being determined only by the rate of heat removal from clusters.

This report is a formal documentation of the results of an assessment of the degree to which Lean Principles and Practices have been implemented in the US Aerospace and Defense Industry. An Industry Association team prepared ...

The properties of the short, energetic bursts recently observed from the γ-ray binary LS I +61° 303 are typical of those shown by high-magnetic-field neutron stars (NSs) and thus provide a strong indication in favor of a NS being the compact object in the system. Here, we discuss the transitions among the states accessible to a NS in a system like LS I +61° 303, such as the ejector, propeller, and accretor phases, depending on the NS spin period, magnetic field, and rate of mass captured. We show how the observed bolometric luminosity (≳ a few × 10³⁵ erg s⁻¹) and its broadband spectral distribution indicate that the compact object is most probably close to the transition between working as an ejector all along its orbit and being powered by the propeller effect when it is close to the orbit periastron, in a so-called flip-flop state. By assessing the torques acting on the compact object in the various states, we follow the spin evolution of the system, evaluating the time spent by the system in each of them. Even taking into account the constraint set by the observed γ-ray luminosity, we find that the total age of the system is compatible with being ≈5-10 kyr, comparable to the typical spin-down ages of high-field NSs. The results obtained are discussed in the context of the various evolutionary stages expected for a NS with a high-mass companion.

Daylight Glazing: Exterior glazing over 6 feet above the finished floor.
DDC: See Direct Digital Control.
Deadband: The temperature range in which no heating or cooling is used.
Decorative Lighting: Lighting that is purely ornamental and installed for aesthetic effect. Decorative lighting shall not include general lighting.
Degree Day: See "Heating Degree Days."
Degree Day Base 50F: For any one day, when the mean temperature is more than 50°F, there are as many degree days as there are degrees Fahrenheit of difference between the mean temperature for the day and 50°F. Annual cooling degree days (CDDs) are the sum of the degree days over a calendar year.
Demand: The highest amount of power (average kilowatts over an interval) recorded for a building or facility in a selected time frame.
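The degree-day definition above is a simple sum over daily mean temperatures. A minimal sketch of that arithmetic (the daily mean values below are hypothetical, chosen only for illustration; base 50°F as in the glossary entry):

```python
# Degree days relative to a base temperature, per the glossary definition:
# for each day, count the number of degrees Fahrenheit by which the daily
# mean exceeds (cooling) or falls below (heating) the base.

BASE_F = 50.0

def cooling_degree_days(daily_means_f):
    """Sum of (mean - base) over days whose mean exceeds the base."""
    return sum(max(t - BASE_F, 0.0) for t in daily_means_f)

def heating_degree_days(daily_means_f):
    """Sum of (base - mean) over days whose mean falls below the base."""
    return sum(max(BASE_F - t, 0.0) for t in daily_means_f)

# A hypothetical week of daily mean temperatures (degrees F)
week = [42.0, 48.0, 55.0, 61.0, 50.0, 39.0, 70.0]
print(cooling_degree_days(week))  # 5 + 11 + 20 = 36.0
print(heating_degree_days(week))  # 8 + 2 + 11 = 21.0
```

Summing the same quantities over a calendar year of daily means gives the annual CDD and HDD figures referred to in the entry.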

An ab-initio calculation scheme for finite nuclei based on self-consistent Green's functions in the Gorkov formalism is developed. It aims at describing properties of doubly-magic and semi-magic nuclei employing state-of-the-art microscopic nuclear interactions and explicitly treating pairing correlations through the breaking of U(1) symmetry associated with particle number conservation. The present paper introduces the formalism, necessary to undertake applications at (self-consistent) second-order using two-nucleon interactions, in a detailed and self-contained fashion. First applications of such a scheme will be reported soon in a forthcoming publication. Future works will extend the present scheme to include three-nucleon interactions and implement more advanced truncation schemes.

We report on the Swift Burst Alert Telescope detection of a short burst from the direction of the TeV binary LS I +61° 303, resembling those generally labeled as magnetar-like. We show that it is likely that the short burst indeed originated from LS I +61° 303 (although we cannot totally exclude the improbable presence of a far-away, line-of-sight magnetar) and that it is a different phenomenon from the previously observed ks-long flares from this system. Accepting the hypothesis that LS I +61° 303 is the first magnetar detected in a binary system, we study the implications. We find that an LS I +61° 303 system containing a magnetar would most likely be (i.e., for the usual magnetar parameters and mass-loss rate) subject to a flip-flop behavior, from a rotationally powered regime (at apastron) to a propeller regime (at periastron) along each orbit of LS I +61° 303's eccentric orbital motion. We show that, unlike near apastron, where an interwind shock can lead to the normally observed LS I +61° 303 behavior, during TeV emission the periastron propeller is expected to efficiently accelerate particles only to sub-TeV energies. This flip-flop scenario would explain the system's behavior, in which recurrent TeV emission appears only near apastron, the anti-correlation of the GeV and TeV emission, and the long-term TeV variability (which seems correlated with LS I +61° 303's super-orbital period), including the appearance of a low TeV state. Finally, we qualitatively put the multi-wavelength phenomenology into the context of our proposed model and make some predictions for further testing.

The dissolution rate of quartz has been measured at 25°C in batch reactors and at 200 and 300°C in mixed-flow reactors. These experiments have been carried out in both pure H₂O and solutions containing Na or Pb at various ionic strengths and pH. The measured rates were found to increase significantly with the addition of either Na or Pb. In an attempt to determine the mechanism of these effects, the degree of adsorption of Na and Pb was measured on amorphous silica at 25 and 150°C. At 25°C, Na is found to adsorb on the quartz surface as an outer-sphere complex, and the corresponding dissolution rate increase is explained by an increase of the ionic strength. By contrast, at 25°C, lead, which forms inner-sphere complexes, increases the quartz dissolution rate specifically. At high temperature, quartz dissolution is promoted in the presence of both Na and Pb by a pH-dependent formation of surface inner-sphere complexes. This effect tends to vanish when the degree of saturation of the solution increases, as a result of the competition between electrolyte and aqueous silica adsorption on the quartz surface. These results show that the electrolytes which adsorb as inner-sphere complexes dominate the overall reaction only at conditions far from equilibrium. Consequently, for a large range of chemical affinity, quartz dissolution in Na and Pb electrolyte solutions can be modeled within the framework of the Transition State Theory by simply taking into account the protonated surface species and the ionic strength of the solution.

A calculation of the pion-production operator up to next-to-next-to-leading order for s-wave pions is performed within chiral effective field theory. In a previous study [Phys. Rev. C 85, 054001 (2012)] we discussed the contribution of the pion-nucleon loops at the same order. Here we extend that study to include explicit Delta degrees of freedom and the 1/m_N^2 corrections to the pion-production amplitude. Using the power counting scheme in which the Delta-nucleon mass difference is of the order of the characteristic momentum scale in the production process, we calculate all tree-level and loop diagrams involving the Delta up to next-to-next-to-leading order. The long-range part of the Delta loop contributions is found to be of similar size to that from the pion-nucleon loops, which supports the counting scheme. The net effect of pion-nucleon and Delta loops is expected to play a crucial role in understanding the neutral-pion production data.

High sensitivity and an extended spectral response have been demonstrated for a composite consisting of SnO₂ nanowires (NWs) and CdSe quantum dots (QDs). The underlying mechanism is attributed to the spatial separation of photogenerated electrons and holes due to the charge transfer arising from the type II band alignment between the CdSe QDs and the SnO₂ NWs. This work shows that, by selective decoration with suitable QDs, the photocurrent gain of NWs can not only be greatly enhanced but also extended over a wider photoresponse spectrum. Our result therefore provides a very useful guideline for creating high-efficiency photodetectors.

1.1 These product consistency test methods A and B evaluate the chemical durability of homogeneous glasses, phase separated glasses, devitrified glasses, glass ceramics, and/or multiphase glass ceramic waste forms hereafter collectively referred to as “glass waste forms” by measuring the concentrations of the chemical species released to a test solution. 1.1.1 Test Method A is a seven-day chemical durability test performed at 90 ± 2°C in a leachant of ASTM-Type I water. The test method is static and conducted in stainless steel vessels. Test Method A can specifically be used to evaluate whether the chemical durability and elemental release characteristics of nuclear, hazardous, and mixed glass waste forms have been consistently controlled during production. This test method is applicable to radioactive and simulated glass waste forms as defined above. 1.1.2 Test Method B is a durability test that allows testing at various test durations, test temperatures, mesh size, mass of sample, leachant volume, a...

The main purpose of this research is to determine the influence of small dispersive coal dust particles of different fractional consistence on the technical characteristics of the vertical iodine air filter at a nuclear power plant. The transport properties of the small dispersive coal dust particles in the granular filtering medium of the absorber in the vertical iodine air filter are studied in the case when the modeled aerodynamic conditions are similar to the real aerodynamic conditions. It is shown that the presence of small dispersive coal dust particles of different fractional consistence, with dimensions decreasing down to micro and nano sizes under the action of the air dust aerosol stream, normally results in a significant change in the distribution of particle masses in the granular filtering medium of the absorber, changing the aerodynamic characteristics of the vertical iodine air filter. A precise characterization of the aerodynamic resistance of a model of the vertical iodine air filter is completed. A comparative analysis of the technical characteristics of the vertical and horizontal iodine air filters is also made.

The winters of 1976-77 and 1977-78 were severe by virtually any standard. In this study, heating degree day (HDD) accumulations for these two winters as well as for the 1941-70 normals are examined at 31 National Weather Service stations in ...

... and heating. Every added degree wastes a great deal of energy, which can go unnoticed since you pay a flat utilities fee ... whether an action, such as leaving your computer on, will waste energy. For more information ... with the lights on. Tuition going toward this energy waste could be spent more productively if we used lights only ...

... credits which apply toward the degree. The ID program advisor works closely with each student to help develop a unique program that meets their needs and future goals. Students will also work with a thesis ... a structural engineer on the Hanford vitrification design-build project ... How can I get started?

Overview Take your career to the next level with a bachelor's degree in Fire and Emergency Services Administration. Fire and emergency personnel have a long and proud history of providing communities with a wide variety of fire protection, fire prevention, emergency medical, and emergency preparedness services

Density-dependent relations among saturation properties of symmetric nuclear matter and properties of hadronic stars are discussed by applying the conserving chiral nonlinear (σ, π, ω) hadronic mean-field theory. The chiral nonlinear (σ, π, ω) mean-field theory is an extension of the conserving nonlinear (nonchiral) σ-ω hadronic mean-field theory, which is a thermodynamically consistent, relativistic, Lorentz-covariant mean-field theory of hadrons. In the extended chiral (σ, π, ω) mean-field model, all the masses of hadrons are produced by the breaking of chiral symmetry, which is different from other conventional chiral-partner models. By comparing the nonchiral and chiral mean-field approximations, the effects of the chiral symmetry breaking mechanism on the mass of the σ meson, the coefficients of the nonlinear interactions, and the Fermi-liquid properties are investigated in nuclear matter and neutron stars.

We report the results of the first two-dimensional self-consistent simulations directly covering the region from the photosphere to interplanetary space. We carefully set up grid points in spherical coordinates to treat Alfvénic waves in an atmosphere with a huge density contrast, and successfully simulate a hot coronal wind streaming out as a result of surface convective motion. Footpoint motion excites upwardly propagating Alfvénic waves along an open magnetic flux tube. These waves, traveling in a non-uniform medium, suffer reflection, nonlinear mode conversion to compressive modes, and turbulent cascade. Through the combination of these mechanisms, the Alfvénic waves eventually dissipate and accelerate the solar wind. While shock heating by the dissipation of the compressive waves plays a primary role in the coronal heating, both turbulent cascade and shock heating contribute to driving the solar wind.

Electron transfer is investigated at the limit of strong friction. The analysis is based on the generic model of a two-state system bilinearly coupled to a harmonic bath. The dynamics is described within the framework of the mixed quantum classical Liouville (MQCL) equation, which is known to be exact for this model. In the case of zero electronic coupling, it is shown that while the dynamics of the electronic populations can be described by a Markovian quantum Smoluchowski equation, that of the electronic coherences are inherently non-Markovian. A non-Markovian modified Zusman equation is derived in the presence of electronic coupling and shown to be self-consistent in cases where the standard Zusman equation is not.

We investigated the capacitance-voltage (C-V) characteristics of a depletion-mode buried-channel InGaAs/InAs quantum well FET using a self-consistent method incorporating quantum mechanical (QM) effects. Though experimental C-V results for the enhancement-type device are available in the recent literature, a complete characterization of the electrostatic properties of the depletion-type buried-channel quantum well FET (QWFET) structure is yet to be done. The C-V characteristics of the device are studied with the variation of three important process parameters: indium (In) composition, gate dielectric, and oxide thickness. We observed that the inversion capacitance and ballistic current tend to increase with increasing indium (In) content in the InGaAs barrier layer.

Correlation consistent basis sets for accurately describing core-core and core-valence correlation effects in atoms and molecules have been developed for the second row atoms Al - Ar. Two different optimization strategies were investigated, which led to two families of core-valence basis sets when the optimized functions were added to the standard correlation consistent basis sets (cc-pVnZ). In the first case, the exponents of the augmenting primitive Gaussian functions were optimized with respect to the difference between all-electron and valence-electron correlated calculations, i.e., for the core-core plus core-valence correlation energy. This yielded the cc-pCVnZ family of basis sets, which are analogous to the sets developed previously for the first row atoms [D.E. Woon and T.H. Dunning, Jr., J. Chem. Phys. 103, 4572 (1995)]. Although the cc-pCVnZ sets exhibit systematic convergence to the all-electron correlation energy at the complete basis set limit, the intershell (core-valence) correlation energy converges more slowly than the intrashell (core-core) correlation energy. Since the effect of including the core electrons on the calculation of molecular properties tends to be dominated by core-valence correlation effects, a second scheme for determining the augmenting functions was investigated. In this approach, the exponents of the functions to be added to the cc-pVnZ sets were optimized with respect to just the core-valence (intershell) correlation energy, except that a small amount of core-core correlation energy was included in order to ensure systematic convergence to the complete basis set limit. These new sets, denoted weighted core-valence basis sets (cc-pwCVnZ), significantly improve the convergence of many molecular properties with n. Optimum cc-pwCVnZ sets for the first-row atoms were also developed and show similar advantages.

We evaluated the usefulness of a coating system consisting of an underlying polyphenylenesulfide (PPS) layer and a top polytetrafluoroethylene (PTFE)-blended PPS layer as a low-friction, water-repellent, anti-corrosion barrier film for carbon steel steam separators in geothermal power plants. The experiments were designed to obtain information on the kinetic coefficient of friction, surface free energy, hydrothermal oxidation, alteration of molecular structure, thermal stability, and corrosion protection of the coating after immersing the coated carbon steel coupons for up to 35 days in CO₂-laden brine at 300°C. The superficial layer of the assembled coating was occupied by PTFE self-segregated from PPS during the melt-flowing process of this blend polymer; it conferred outstanding slipperiness and water-repellent properties because of its low friction and surface free energy. However, PTFE underwent hydrothermal oxidation in the hot brine, transforming its molecular structure into an alkylated polyfluorocarboxylate salt complex linked to Na. Although this molecular transformation increased the friction and surface free energy, and also impaired the thermal stability of the PTFE, the top PTFE-rich PPS layer significantly contributed to preventing the permeation of moisture and corrosive electrolytes through the coating film, thereby mitigating the corrosion of the carbon steel.

We have grown and studied high-quality SrRuO₃ films grown by MBE as well as PLD. By changing the oxygen activity during deposition we were able to make SrRuO₃ samples that were stoichiometric (low oxygen activity) or had ruthenium vacancies (high oxygen activity). Samples with strontium vacancies proved impossible to produce, since the ruthenium would precipitate out as RuO₂. The volume of the unit cell of SrRuO₃ becomes larger as more ruthenium vacancies are introduced. The residual resistivity ratio (RRR) and room-temperature resistivity were found to depend systematically on the volume of the unit cell and therefore on the amount of ruthenium vacancies. The RRR varied from ≈30 for stoichiometric samples to less than two for samples that were very ruthenium poor. The room-temperature resistivity varied from 190 µΩ·cm for stoichiometric samples to over 300 µΩ·cm for very ruthenium-poor samples. UPS spectra show a shift of weight from the coherent peak to the incoherent peak around the Fermi level when samples have more ruthenium vacancies. Core-level XPS spectra of the ruthenium 3d lines show a strong screened part in the case of stoichiometric samples. This screened part disappears when ruthenium vacancies are introduced. Both the UPS and the XPS results are consistent with the view that correlation increases as the amount of ruthenium vacancies increases.

EA-1733: Final Environmental Assessment, Calpine Enhanced Geothermal Systems Project. The proposed EGS project includes the injection of water, ranging from 50 to 80 degrees Fahrenheit, into wells to enhance the permeability of an existing high-temperature hydrothermal reservoir that would be harnessed to produce electrical energy. The purpose of the project is to demonstrate the ability to stimulate high-temperature rocks by monitoring their early response to carefully designed injection tests. The project would be a collaborative effort between scientists and engineers of Calpine Corporation, Lawrence Berkeley National Laboratory (LBNL), and the DOE. Calpine Enhanced Geothermal Systems Project, DOE/EA-1733 (June 2010)

C-Factor: Time rate of steady-state heat flow through the unit area of a material or construction surface. Units of C-factor are Btu/(h · ft² · °F). Note that the C-factor does not include soil or air films.
CABO: The Council of American Building Officials.
Cavity Insulation: Insulation installed between structural members such as wood studs, metal framing, and Z-clips.
CDD: Cooling degree day. See "Cooling Degree Days."
CDD50: Cooling degree days base 50°F. See "Degree Day Base 50F."
CE: Combustion efficiency.
Ceiling: The ceiling requirements apply to portions of the roof and/or ceiling through which heat flows. Ceiling components include the interior surface of flat ceilings below attics, the interior surface of cathedral or vaulted
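The C-factor entry above implies a simple steady-state relation: heat flow in Btu/h equals C-factor times area times temperature difference. A minimal sketch of that calculation (the function name and the numeric values are hypothetical, for illustration only):

```python
# Steady-state heat flow through an assembly from its C-factor,
# following the glossary definition:
#   Q [Btu/h] = C [Btu/(h · ft² · °F)] × area [ft²] × delta-T [°F]
# Note: as the entry says, the C-factor excludes soil and air films.

def heat_flow_btu_per_hr(c_factor, area_ft2, delta_t_f):
    """Heat flow rate in Btu/h through a surface of the given area."""
    return c_factor * area_ft2 * delta_t_f

# e.g. a 100 ft² wall section with C = 0.2 Btu/(h · ft² · °F)
# and a 30 °F indoor/outdoor temperature difference:
print(heat_flow_btu_per_hr(0.2, 100.0, 30.0))  # 600.0
```

The same form, with air films and soil included, is what a U-factor calculation would use instead.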

The solar wind emanates from the hot and tenuous solar corona. Earlier studies using 1.5-dimensional simulations show that Alfven waves generated in the photosphere play an important role in coronal heating through the process of nonlinear mode conversion. In order to understand the physics of coronal heating and solar wind acceleration together, it is important to consider the regions from photosphere to interplanetary space as a single system. We performed 2.5-dimensional, self-consistent magnetohydrodynamic simulations, covering from the photosphere to the interplanetary space for the first time. We carefully set up the grid points with spherical coordinates to treat the Alfven waves in the atmosphere with huge density contrast and successfully simulate the solar wind streaming out from the hot solar corona as a result of the surface convective motion. The footpoint motion excites Alfven waves along an open magnetic flux tube, and these waves traveling upward in the non-uniform medium undergo wave reflection, nonlinear mode conversion from Alfven mode to slow mode, and turbulent cascade. These processes lead to the dissipation of Alfven waves and acceleration of the solar wind. It is found that the shock heating by the dissipation of the slow-mode wave plays a fundamental role in the coronal heating process, whereas the turbulent cascade and shock heating drive the solar wind.

The linearized approximation to the semiclassical initial value representation (LSC-IVR) is used to calculate time correlation functions relevant to the incoherent dynamic structure factor for inelastic neutron scattering from liquid para-hydrogen at 14 K. Various time correlation functions were used which, if evaluated exactly, would give identical results; they do not, because the LSC-IVR is approximate. Some of the correlation functions involve only linear operators, and others involve nonlinear operators. The consistency of the results obtained with the various time correlation functions thus provides a useful test of the accuracy of the LSC-IVR approximation and its ability to treat correlation functions involving both linear and nonlinear operators in realistic anharmonic systems. The good agreement of the results obtained from different correlation functions, their excellent behavior in the spectral moment tests based on the exact moment constraints, and their semi-quantitative agreement with the inelastic neutron scattering experimental data all suggest that the LSC-IVR is indeed a good short-time approximation for quantum mechanical correlation functions.

Helium (He) nucleation in the liquid metal breeding blankets of a DT fusion reactor may have a significant impact on system design, safety, and operation. Large He production rates are expected due to the tritium (T) fuel self-sufficiency requirement, as both He and T are produced at the same rate. Low He solubility, local high concentrations, radiation damage, and fluid discontinuities, among other phenomena, may yield the necessary conditions for He nucleation. Hence, He nucleation may have a significant impact on T inventory and may lower the T breeding ratio. A model based on the self-consistent nucleation theory (SCT) with a surface-tension curvature-correction model has been implemented in the OpenFOAM CFD code. A modification of the necessary nucleation condition through a single parameter is proposed in order to take into account all the nucleation-triggering phenomena, especially radiation-induced nucleation. Moreover, the kinetic growth model has been adapted so as to allow for the transition from a cr...

While dark matter (DM) is the key ingredient for a successful theory of structure formation, its microscopic nature remains elusive. Indirect detection may provide a powerful test for some strongly motivated DM particle models. Nevertheless, astrophysical backgrounds are usually expected with amplitudes and spectral features similar to the chased signals. On galactic scales, these backgrounds arise from interactions of cosmic rays (CRs) with the interstellar gas, both being difficult to infer and model in detail from observations. Moreover, the associated predictions unavoidably come with theoretical errors, which are known to be significant. We show that a trustworthy guide for such challenging searches can be obtained by exploiting the full information contained in cosmological simulations of galaxies, which now include baryonic gas dynamics and star formation. We further insert CR production and transport from the identified supernova events and fully calculate the CR distribution in a simulated galaxy. We focus on diffuse gamma-rays, and self-consistently calculate both the astrophysical galactic emission and the dark matter signal. We notably show that adiabatic contraction does not necessarily induce large signal-to-noise ratios in galactic centers, and could anyway be traced from the astrophysical background itself. We finally discuss how all this may be used as a generic diagnostic tool for galaxy formation.

Background: Genome-wide association studies (GWASs) and global profiling of gene expression (microarrays) are two major technological breakthroughs that allow hypothesis-free identification of candidate genes associated with tumorigenesis. It is not obvious whether there is consistency between the candidate genes identified by GWAS (GWAS genes) and those identified by profiling gene expression (microarray genes). Methodology/Principal Findings: We used the Cancer Genetic Markers of Susceptibility database to retrieve single nucleotide polymorphisms from candidate genes for prostate cancer. In addition, we conducted a large meta-analysis of gene expression data in normal prostate and prostate tumor tissue. We identified 13,905 genes that were interrogated by both GWASs and microarrays. On the basis of P values from GWASs, we selected the 1,649 most significantly associated genes for functional annotation by the Database for Annotation, Visualization and Integrated Discovery. We also conducted functional annotation analysis using the same number of top genes identified in the meta-analysis of the gene expression data. We found that genes involved in cell adhesion were overrepresented among both the GWAS and microarray genes. Conclusions/Significance: The results of these analyses suggest that combining GWAS and microarray data would be a more effective approach than analyzing individual datasets and can help to refine the identification of

The evolution of deformation textures in copper and a brass that are representative of fcc metals with different stacking fault energies (SFEs) during cold rolling is predicted using a self-consistent (SC) model. The material parameters used for describing the micromechanical behavior of each metal are determined from the high-energy X-ray (HEXRD) diffraction data. At small reductions, a reliable prediction of the evolution of the grain orientation distribution that is represented as the continuous increase of the copper and brass components is achieved for both metals when compared with the experimental textures. With increasing deformation, the model could characterize the textures of copper, i.e., the strengthening of the copper component, when dislocation slip is still the dominant mechanism. For a brass at moderate and large reductions, a reliable prediction of its unique feature of texture evolution, i.e., the weakening of the copper component and the strengthening of the brass component, could only be achieved when proper boundary conditions together with some specified slip/twin systems are considered in the continuum micromechanics mainly containing twinning and shear banding. The present investigation suggests that for fcc metals with a low SFE, the mechanism of shear banding is the dominant contribution to the texture development at large deformations.

A review of the degree of applicability of benchmarks containing gadolinium using the computer code KENO V.a and the gadolinium cross sections from the 238-group SCALE cross-section library has been performed for a system that contains {sup 239}Pu, H{sub 2}O, and Gd{sub 2}O{sub 3}. The system (practical problem) is a water-reflected spherical mixture that represents a dry-out condition on the bottom of a sludge receipt and adjustment tank around steam coils. Due to variability of the mixture volume and the H/{sup 239}Pu ratio, approximations to the practical problem, referred to as applications, have been made to envelop possible ranges of mixture volumes and H/{sup 239}Pu ratios. A newly developed methodology has been applied to determine the degree of applicability of benchmarks as well as the penalty that should be added to the safety margin due to insufficient benchmarks.

Measurements of the H-2(d, n)(3) He transverse vector polarization-transfer coefficient K-y(y)' at 0 degrees. are reported for 29 outgoing neutron energies between 3.94 and 8.47MeV. Our new results determine K-y(y)' (0 degrees) more accurately than previous data, especially for neutron energies below 5MeV. Low-energy data for this reaction are important both as a high-intensity source of highly polarized neutrons for nuclear physics studies with polarized neutron beams, and as a test of the emerging theoretical descriptions of the four-body system, where recently substantial progress has been made.

on the Coordinating Board Graduation Report (CBM009) which is certified by the state. The College information is based,485 27.0% Public Policy 231 5.0% 270 5.6% 300 6.1% 313 6.0% 328 6.0% Sciences 624 13.6% 616 12.8% 605 12 Policy Sciences University College Degrees Awarded by College * The Honors College includes only students

Microfracturing in Westerly granite specimens, extended wet and dry, at temperatures to 800/degree/C and confining pressures to 200 MPa, is analyzed with a view toward understanding why, in the brittle field, rock strengths decrease with increasing temperature. Intragranular (IGC) and grain-boundary cracks (GBC) are mapped in two dimensions on either side of the tensile macrofracture, using optical microscopy, to determine, quantitatively, crack lengths and densities and, qualitatively, crack widths and orientations are visually examined to aid in interpretation. Temperature and confining pressure tend to favor the development of different microfracture fabrics. Thermal stresses produce a random orientation of cracks while stresses resulting from the external differential loading of a specimen produce a preferred orientation of cracks parallel to the direction of sigma/sub 1/. In dry experiments, between 600/degree/ and 800/degree/C, both GBC and IGC densities increase with increasing temperature. The increase in crack abundance is responsible for the thermal weakening of the rock. With increasing temperature, GBC play a greater role in the deformational history leading to rock failure. 27 refs., 24 figs.

This analysis assumes that there is a hypothetical large leak at the bottom of Tank 241-C-106 which initiates the dryout of the tank. The time required for a tank to dryout after a leak is of interest for safety reasons. As a tank dries out, its temperature is expected to increase which could affect the structural integrity of the concrete tank dome. Hence, it is of interest to know how fast and how high the temperature in a leaky tank increases, so that mitigation procedures can be planned and implemented in a timely manner. This analysis is focused on tank 241-C-106, which is known to be high thermal tank. The objective of the study was to determine how long it would take for tank 241-C-106 to reach 350 degreesFahrenheit (about 177 degrees Centigrade) after a postulated large leak develops at the bottom center of the tank. The temperature of 350 degreesFahrenheit is the minimum temperature that can cause structural damage to concrete (ACI 1992). The postulated leak at the bottom of the tank and the resulting dryout of the sludge in the tank make this analysis different from previous thermal analyses of the C-106 tank and other tanks, especially the double-shell tanks which are mostly liquid.

We derive estimates for the cosmological bulk flow from the SFI++ Tully-Fisher (TF) catalog. For a sphere of radius 40 h{sup -1} Mpc centered on the Milky Way, we derive a bulk flow of 333 {+-} 38 km s{sup -1} toward Galactic (l, b) = (276 deg., 14 deg.) within a 3{sup 0} 1{sigma} error. Within a radius of 100h{sup -1} Mpc we get 257 {+-} 44 km s{sup -1} toward (l, b) = (279 deg., 10 deg.) within a 6 deg. error. These directions are at 40 deg. to the Supergalactic plane, close to the apex of the motion of the Local Group of galaxies after the Virgocentric infall correction. Our findings are consistent with the {Lambda}CDM model with the latest Wilkinson Microwave Anisotropy Probe (WMAP) best-fit cosmological parameters, but the bulk flow allows independent constraints. For the WMAP-inferred Hubble parameter h = 0.71 and baryonic mean density parameter {Omega}{sub b} = 0.0449, the constraint from the bulk flow on the matter density, {Omega}{sub m}, the normalization of the density fluctuations, {sigma}{sub 8}, and the growth index, {gamma}, can be expressed as {sigma}{sub 8}{Omega}{sup {gamma}-0.55}{sub m}({Omega}{sub m}/0.266){sup 0.28} = 0.86 {+-} 0.11 (for {Omega}{sub m} {approx} 0.266). Fixing {sigma}{sub 8} = 0.8 and {Omega}{sub m} = 0.266 as favored by WMAP, we get {gamma} = 0.495 {+-} 0.096. The constraint derived here rules out popular Dvali-Gabadadze-Porrati models at more than the 99% confidence level. Our results are based on the All Space Constrained Estimate (ACSE) model which reconstructs the bulk flow from an all space three-dimensional peculiar velocity field constrained to match the TF measurements. At large distances, ASCE generates a robust bulk flow from the SFI++ survey that is insensitive to the assumed prior. For comparison, a standard straightforward maximum likelihood estimate leads to very similar results.

October 27, 2009 October 27, 2009 Chairman Boxer, Ranking Member Inhofe, Members of the Committee, thank you for the opportunity to testify today. When I appeared before you in July, I focused on the energy challenge and the grave threat from climate change. The Intergovernmental Panel on Climate Change found in 2007 that the best estimate for the rise in average global temperature by the end of this century would be more than 7 degreesFahrenheit if we continued on a high growth, fossil fuel intensive course. A 2009 MIT study found a fifty percent chance of a 9 degree rise in this century and a 17 percent chance of a nearly 11 degree increase. Eleven degrees may not sound like much, but, during the last ice age, when Canada and much of the United States were covered all year in a glacier, the world was only about 11

Sample records for degrees fahrenheit consisting from the National Library of Energy Beta (NLEBeta)

Note: This page contains sample records for the topic "degrees fahrenheit consisting" from the National Library of EnergyBeta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.

The objective of this report is to document the analysis that was conducted to evaluate the effect of a potential change to the TSPA-VA base case design that could improve long-term repository performance. The design feature evaluated in this report is a modification of the topographic surface of Yucca Mountain. The modification consists of covering the land surface immediately above the repository foot-print with a thick layer of unconsolidated material utilizing rip-rap and plants to mitigate erosion. This surface modification is designated as Feature 23a or simply abbreviated as F23a. The fundamental aim of F23a is to reduce the net infiltration into the unsaturated zone by enhancing the potential for evapotranspiratiration at the surface; such a change would, in turn, reduce the seepage flux and the rate of radionuclide releases from the repository. Field and modeling studies of water movement in the unsaturated zone have indicated that shallow infiltration at the surface is almost negligible in locations where the bedrock is covered by a sufficiently thick soil layer. In addition to providing storage for meteoric water, a thick soil layer would slow the downward movement of soil moisture to such an extent that evaporation and transpiration could easily transfer most of the soil-water back to the atmosphere. Generic requirements for the effectiveness of this design feature are two-fold. First, the soil layer above the repository foot-print must be thick enough to provide sufficient storage of meteoric water (from episodic precipitation events) and accommodate plant roots. Second, the added soil layer must be engineered so as to mitigate thinning by erosional processes and have sufficient thickness to accommodate the roots of common desert plants. 
Under these two conditions, it is reasonable to expect that modification would be effective for a significant time period and the net infiltration and deep percolation flux would be reduced by orders of magnitude lower than the present levels. Conceptually, the topographic surface above the repository foot-print would be re-contoured to make it more suitable for placement of unconsolidated materials (e.g., alluvium). Figure 1 shows the region of the surface modification in relation to the location of the repository foot-print. The surface contours in this region after modification are shown in the plot presented in Figure 2. Basically, the surface modification would be accomplished by applying cuts to the ridges slopes on the east flank of Yucca Mountain to produce a relatively uniform slope of about 10%. The alluvium would be covered with rock fragments (to imitate the desert pavement) to reduce erosion. This report documents the modeling assumptions and performance analysis conducted to estimate the long-term performance for Feature 23a. The performance measure for this evaluation is dose-rate. Results are presented that compare the dose-rate time histories for the new design feature to those of the TSPA-VA base case calculation (CRWMS M&O 1998a).

Three candidate materials were investigated in this study in terms of their electrochemical corrosion behavior in unirradiated 0.1 N NaNO{sub 3} solutions at 95{degrees}C. Anodic polarization experiments were conducted to determine the passive current densities, pitting potentials, and other parameters, together with Cyclic Current Reversal Voltammetry tests to evaluate the stability and protectiveness of the passive oxides formed. X-ray diffraction and Auger Electron Spectroscopy were used for identification of the corrosion products as well as Scanning Electron Microscopy for the surface morphology studies. 2 refs., 22 figs., 2 tabs.

The problem of the calculation of equilibrium thermodynamic properties and the establishment of statistical-thermodynamically consistent finite bound-state partition functions in nonideal multicomponent plasma systems is revised within the chemical picture. The present exploration accompanied by the introduction of a generalized consistent formulation, in terms of the solution of the inverse problem, clears ambiguities and gives a better understanding of the problem on top of pointing out weaknesses and inaccuracies/inconsistencies buried in widely used models in literature.

Heat Content of Natural Gas Consumed Heat Content of Natural Gas Consumed Definitions Key Terms Definition British Thermal Unit (Btu) The quantity of heat required to raise the temperature of 1 pound of liquid water by 1 degreeFahrenheit at the temperature at which water has its greatest density (approximately 39 degreesFahrenheit). Delivered to Consumers (Heat Content) Heat content of residential, commercial, industrial, vehicle fuel and electric power deliveries to consumers. Electric Power (Heat Content) Heat content of natural gas used as fuel in the electric power sector. Heat Content The amount of heat energy available to be released by the transformation or use of a specified physical unit of an energy form (e.g., a ton of coal, a barrel of oil, a kilowatthour of electricity, a cubic foot of natural gas, or a pound of steam). The amount of heat energy is commonly expressed in British thermal units (Btu). Note: Heat content of combustible energy forms can be expressed in terms of either gross heat content (higher or upper heating value) or net heat content (lower heating value), depending upon whether or not the available heat energy includes or excludes the energy used to vaporize water (contained in the original energy form or created during the combustion process). The Energy Information Administration typically uses gross heat content values.

Evaporation Experiment Gain? Evaporation Experiment Gain? Name: Xandria Status: student Grade: K-3 Country: USA Date: Spring 2012 Question: I am asking on behalf of my 2nd grader. She is doing a Science Experiment on the rate of evaporation of different liquids (water, salt water, alcohol, vinegar, and bleach). Can you have more liquid than what you originally started with? She discovered that droplets of liquid were "stuck" on the side of the flasks and when she measured the liquid, they were more than 50 ml. Process: She put 50 ml of liquid into flasks. Water was placed in both a flask and beaker...just as a side experiment about the container shape. She placed the liquids in a room with an ambient temperature that fluctuated between 68-75 degreesFahrenheit. Unfortunately, we did not measure the humidity (in reading the archives, it has a huge effect on evaporation). She turned on a work lamp for one hour a day for 10 days. The temperature in the room would get up to 98 degreesFahrenheit.

U.S. Weekly Products Supplied U.S. Weekly Products Supplied Definitions Key Terms Definition Barrel A unit of volume equal to 42 U.S. gallons. Distillate Fuel Oil A general classification for one of the petroleum fractions produced in conventional distillation operations. It includes diesel fuels and fuel oils. Products known as No. 1, No. 2, and No. 4 diesel fuel are used in on-highway diesel engines, such as those in trucks and automobiles, as well as off-highway engines, such as those in railroad locomotives and agricultural machinery. Products known as No. 1, No. 2, and No. 4 fuel oils are used primarily for space heating and electric power generation. Finished Motor Gasoline A complex mixture of relatively volatile hydrocarbons with or without small quantities of additives, blended to form a fuel suitable for use in spark-ignition engines. Motor gasoline, as defined in ASTM Specification D 4814 or Federal Specification VV-G-1690C, is characterized as having a boiling range of 122 to 158 degreesFahrenheit at the 10 percent recovery point to 365 to 374 degreesFahrenheit at the 90 percent recovery point. Motor Gasoline includes conventional gasoline; all types of oxygenated gasoline, including gasohol; and reformulated gasoline, but excludes aviation gasoline. Note: Volumetric data on blending components, such as oxygenates, are not counted in data on finished motor gasoline until the blending components are blended into the gasoline.

to the customer. Additionally, the packaging keeps the product safe from the environment and also protects S I D D I Q I Smart Card Packaging Process Control System KTH Information and Communication Technology #12;Smart Card Packaging Process Control System Saad Ahmed Siddiqi August 1, 2012 Masters Thesis

for the Big Apple. She didn't have a job lined up, or even a place to live. She soon flourished, advancing students will work in research labs for eight weeks under the direction of a WVU faculty research mentor