Sample records for "identify proper methods" from the National Library of Energy Beta (NLEBeta)

Note: This page contains sample records for the topic "identify proper methods" from the National Library of Energy Beta (NLEBeta).
While these samples are representative of the content of NLEBeta,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of NLEBeta
to obtain the most current and comprehensive results.

The present invention is a method of identifying features in indexed data, especially useful for distinguishing signal from noise in data provided as a plurality of ordered pairs. Each of the plurality of ordered pairs has an index and a response. The method has the steps of: (a) providing an index window having a first window end located on a first index and extending across a plurality of indices to a second window end; (b) selecting responses corresponding to the plurality of indices within the index window and computing a measure of dispersion of the responses; and (c) comparing the measure of dispersion to a dispersion critical value. Advantages of the present invention include mitigating the effects of a low signal-to-noise ratio, signal drift, a varying baseline signal, and combinations thereof.
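The windowed-dispersion test in steps (a)-(c) can be sketched as follows; the function name, the choice of sample standard deviation as the dispersion measure, and the one-index step size are illustrative assumptions, not details from the patent.

```python
import statistics

def flag_features(pairs, window, critical_value):
    """Slide an index window across ordered (index, response) pairs,
    compute a dispersion measure for each window, and flag windows
    whose dispersion exceeds the critical value (step (c)).

    Returns the first index of each flagged window.
    """
    pairs = sorted(pairs)                       # order by index
    responses = [r for _, r in pairs]
    flagged = []
    for start in range(len(responses) - window + 1):
        segment = responses[start:start + window]
        dispersion = statistics.stdev(segment)  # one possible dispersion measure
        if dispersion > critical_value:
            flagged.append(pairs[start][0])
    return flagged
```

On a flat (even noisy or drifting) baseline with a step change, only the windows spanning the step are flagged, which is how the test separates features from background.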

A process for identifying a plant having disease tolerance comprising administering to a plant an inhibitory amount of ethylene and screening for ethylene insensitivity, thereby identifying a disease tolerant plant, is described. Plants identified by the foregoing process are also described. 7 figs.

An ability to exercise market power by suppliers may significantly reduce market efficiency in restructured electricity markets. Many studies have been performed to develop an effective tool to identify market power based on indices. Most often it is ... Keywords: Dispatch sensitivity matrix, HHI, KKT, LI, LMP, MC, Market power, Null space, PTDF, Power transfer distribution factor (PTDF) matrix

A method for locating and mapping the magnitude and extent of terrestrial heat-flow anomalies from 5 to 50 times average with a tenfold improved sensitivity over orthodox applications of aerial temperature-sensing surveys as used for geothermal reconnaissance. The method remotely senses surface temperature anomalies such as occur from geothermal resources or oxidizing ore bodies by: measuring the spectral, spatial, statistical, thermal, and temporal features characterizing infrared radiation emitted by natural terrestrial surfaces; deriving from these measurements the true surface temperature with uncertainties as small as 0.05 to 0.5 K; removing effects related to natural temperature variations of topographic, hydrologic, or meteoric origin, the surface composition, detector noise, and atmospheric conditions; factoring out the ambient normal-surface temperature for non-thermally enhanced areas surveyed under otherwise identical environmental conditions; distinguishing significant residual temperature enhancements characteristic of anomalous heat flows and mapping the extent and magnitude of anomalous heat flows where they occur.

HORIZONS: Molecular and morphological methods for identifying plankton. Despite the recent rapid growth of molecular methods, taxonomists have been slow to incorporate molecular information in a formal way into species descriptions. ...

Disclosed is a method for taking the data generated from an array of responses from a multichannel instrument, and determining the characteristics of a chemical in the sample without the necessity of calibrating or training the instrument with known samples containing the same chemical. The characteristics determined by the method are then used to classify and identify the chemical in the sample. The method can also be used to quantify the concentration of the chemical in the sample.

An approach for systematically screening large volumes of continuous data for repetitive events identified as mining explosions on the basis of temporal and amplitude population characteristics. The method extends event clustering through waveform correlation with a new source-region-specific detector. The new signal subspace detector generalizes the matched filter and can be used to increase the number of events associated with a given cluster, thereby increasing the reliability of diagnostic cluster population characteristics. The method can be applied to obtain bootstrap ground truth explosion waveforms for testing discriminants where actual ground truth is absent. The same events, if associated with a particular mine, may help calibrate velocity models. The method may also assist earthquake hazard risk assessment by providing what amounts to blasting logs for identified mines. The cluster event lists can be reconciled against earthquake catalogs to screen out explosions that are otherwise hard to identify from the catalogs.
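The waveform-correlation stage underlying this kind of detector can be illustrated with a plain normalized matched filter (the subspace detector in the abstract generalizes this to multiple basis waveforms); the function name and threshold are illustrative assumptions:

```python
import numpy as np

def correlation_detect(trace, template, threshold):
    """Slide a normalized cross-correlation of `template` along `trace`
    and return the sample offsets where it exceeds `threshold`."""
    n = len(template)
    # Pre-normalize the template so each dot product below is the
    # normalized cross-correlation coefficient in [-1, 1].
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for k in range(len(trace) - n + 1):
        seg = trace[k:k + n]
        s = seg.std()
        if s == 0:                      # flat segment: correlation undefined
            continue
        cc = float(np.dot(t, (seg - seg.mean()) / s))
        if cc > threshold:
            hits.append(k)
    return hits
```

Events whose waveforms repeat (as mining explosions from one source region tend to) produce near-unity correlation peaks even when the absolute amplitudes differ.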

The information contained in this technical update report represents a first-of-a-kind study to evaluate different methods used to identify boiler air inleakage. The study begins to outline the costs and benefits of using those different methods, in addition to describing their application. The collection and assemblage of this information will provide a reference for plant engineering and management personnel as their units experience the problems associated with boiler air inleakage. Through the use of t...

above, show up everywhere in hydrodynamics, and even in the design of numerical hydro methods. Basically, what matters in a collision is the change of velocity: two equal cars moving at 50 km/h and hitting each other frontally undergo the same change of velocity as a single car hitting a wall at 50 km/h; therefore, the frontal collision produces the same effects as a car hitting a wall.

This work examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific, conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources are identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. The combinations of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citation, worldwide patents, news archives, and on-line mapping networks) are assembled to become one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain we investigated.

Methods and compositions for staining based upon nucleic acid sequence that employ nucleic acid probes are provided. Said methods produce staining patterns that can be tailored for specific cytogenetic analyses. Said probes are appropriate for in situ hybridization and stain both interphase and metaphase chromosomal material with reliable signals. The nucleic acid probes are typically of a complexity greater than 50 kb, the complexity depending upon the cytogenetic application. Methods and reagents are provided for the detection of genetic rearrangements. Probes and test kits are provided for use in detecting genetic rearrangements, particularly for use in tumor cytogenetics, in the detection of disease related loci, specifically cancer, such as chronic myelogenous leukemia (CML) and for biological dosimetry. Methods and reagents are described for cytogenetic research, for the differentiation of cytogenetically similar but genetically different diseases, and for many prognostic and diagnostic applications.

A method for detecting nucleic acid sequence aberrations by detecting nucleic acid sequences having both a first and a second nucleic acid sequence type, the presence of the first and second sequence type on the same nucleic acid sequence indicating the presence of a nucleic acid sequence aberration. The method uses a first hybridization probe which includes a nucleic acid sequence that is complementary to a first sequence type and a first complexing agent capable of attaching to a second complexing agent and a second hybridization probe which includes a nucleic acid sequence that selectively hybridizes to the second nucleic acid sequence type over the first sequence type and includes a detectable marker for detecting the second hybridization probe.

Convective cell identification methods, besides their operational utility, are useful to identify cells, to understand cell interactions within multicell thunderstorms, and to distinguish between convective and stratiform regions within mesoscale ...

A real-time method and computer system for identifying radioactive materials which collects gamma count rates from an HPGe gamma-radiation detector to produce a high-resolution gamma-ray energy spectrum. A library of nuclear material definitions ("library definitions") is provided, with each uniquely associated with a nuclide or isotope material and each comprising at least one logic condition associated with a spectral parameter of a gamma-ray energy spectrum. The method determines whether the spectral parameters of said high-resolution gamma-ray energy spectrum satisfy all the logic conditions of any one of the library definitions, and subsequently uniquely identifies the material type as that nuclide or isotope material associated with the satisfied library definition. The method is iteratively repeated to update the spectrum and identification in real time.
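The library-definition matching step can be sketched as follows; the parameter names, tolerances, and the two-entry library are illustrative assumptions, not taken from the patent (the peak energies themselves, 661.7 keV for Cs-137 and 1173.2/1332.5 keV for Co-60, are standard values):

```python
def identify_material(spectrum_params, library):
    """Return the first nuclide whose logic conditions are all satisfied
    by the measured spectral parameters, or None if nothing matches."""
    for nuclide, conditions in library.items():
        if all(cond(spectrum_params) for cond in conditions):
            return nuclide
    return None

# Hypothetical two-entry library: each definition is a list of logic
# conditions (predicates) over a dict of spectral parameters.
library = {
    "Cs-137": [lambda p: abs(p["peak_keV"] - 661.7) < 2.0],
    "Co-60":  [lambda p: abs(p["peak_keV"] - 1173.2) < 2.0,
               lambda p: abs(p["second_peak_keV"] - 1332.5) < 2.0],
}
```

Because each definition is just a conjunction of predicates, re-running `identify_material` on every spectrum update gives the iterative real-time behavior the abstract describes.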

A system and method for identifying, reporting, and evaluating a presence of a solid, liquid, gas, or other substance of interest, particularly a dangerous, hazardous, or otherwise threatening chemical, biological, or radioactive substance. The system comprises one or more substantially automated, location self-aware remote sensing units; a control unit; and one or more data processing and storage servers. Data is collected by the remote sensing units and transmitted to the control unit; the control unit generates and uploads a report incorporating the data to the servers; and thereafter the report is available for review by a hierarchy of responsive and evaluative authorities via a wide area network. The evaluative authorities include a group of relevant experts who may be widely or even globally distributed.

The present invention relates to an apparatus configured for identification of a material and method of identifying a material. One embodiment of the present invention provides an apparatus configured for identification of a material including a first region configured to receive a first sample and output a first spectrum responsive to exposure of the first sample to radiation; a signal generator configured to provide a reference signal having a reference frequency and a modulation signal having a modulation frequency; a modulator configured to selectively modulate the first spectrum using the modulation signal according to the reference frequency; a second region configured to receive a second sample and output a second spectrum responsive to exposure of the second sample to the first spectrum; and a detector configured to detect the second spectrum.

Analyses used for radiobioassay and other radiochemical tests are normally designed to meet specified quality objectives, such as relative bias, precision, and minimum detectable activity (MDA). In the case of radiobioassay analyses for alpha emitting radionuclides, a major determiner of the process MDA is the instrument background. Alpha spectrometry detectors are often restricted to only a few counts over multi-day periods in order to meet required MDAs for nuclides such as plutonium-239 and americium-241. A detector background criterion is often set empirically based on experience, or frequentist or classical statistics are applied to the calculated background count necessary to meet a required MDA. An acceptance criterion for the detector background is set at the multiple of the estimated background standard deviation above the assumed mean that provides an acceptably small probability of observation if the mean and standard deviation estimate are correct. The major problem with this method is that the observed background counts used to estimate the mean, and thereby the standard deviation when a Poisson distribution is assumed, are often in the range of zero to three counts. At those expected count levels it is impossible to obtain a good estimate of the true mean from a single measurement. As an alternative, Bayesian statistical methods allow calculation of the expected detector background count distribution based on historical counts from new, uncontaminated detectors. This distribution can then be used to identify detectors showing an increased probability of contamination. The effect of varying the assumed range of background counts (i.e., the prior probability distribution) from new, uncontaminated detectors will be discussed.
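The Bayesian alternative can be illustrated with a Gamma-Poisson (negative binomial) predictive distribution built from historical clean-detector counts; the moment-matched prior, function names, and flagging rule are illustrative assumptions, not the report's actual procedure:

```python
from math import lgamma, exp, log

def nb_pmf(k, r, p):
    """Negative binomial pmf: the posterior predictive for Poisson counts
    whose mean has a Gamma prior."""
    return exp(lgamma(k + r) - lgamma(r) - lgamma(k + 1)
               + r * log(p) + k * log(1.0 - p))

def contamination_pvalue(observed, historical):
    """P(count >= observed) under a Gamma-Poisson predictive moment-matched
    to historical counts from clean detectors; small values flag a detector
    as possibly contaminated."""
    n = len(historical)
    m = sum(historical) / n
    v = sum((x - m) ** 2 for x in historical) / (n - 1)
    v = max(v, m * (1 + 1e-9))        # variance can't drop below Poisson's
    rate = m / (v - m)                # Gamma rate from moment matching
    shape = m * rate                  # Gamma shape
    p = rate / (rate + 1.0)
    return 1.0 - sum(nb_pmf(k, shape, p) for k in range(observed))
```

Unlike a single-measurement frequentist criterion, the predictive pools the whole history of clean-detector counts, which matters precisely in the zero-to-three-count regime the report describes.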

An improved nuclear diagnostic method identifies a contained target material by measuring on-axis, mono-energetic uncollided particle radiation transmitted through a target material for two penetrating radiation beam energies, and applying specially developed algorithms to estimate a ratio of macroscopic neutron cross-sections for the uncollided particle radiation at the two energies, where the penetrating radiation is a neutron beam, or a ratio of linear attenuation coefficients for the uncollided particle radiation at the two energies, where the penetrating radiation is a gamma-ray beam. Alternatively, the measurements are used to derive a minimization formula based on the macroscopic neutron cross-sections for the uncollided particle radiation at the two neutron beam energies, or the linear attenuation coefficients for the uncollided particle radiation at the two gamma-ray beam energies. A candidate target material database, including known macroscopic neutron cross-sections or linear attenuation coefficients for target materials at the selected neutron or gamma-ray beam energies, is used to approximate the estimated ratio or to solve the minimization formula, such that the identity of the contained target material is discovered.
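The database lookup in the ratio-based variant can be sketched as follows; the material names and coefficient values are toy numbers for illustration, not real attenuation data:

```python
def identify_target(mu1_measured, mu2_measured, database):
    """Pick the candidate material whose known attenuation-coefficient
    ratio at the two beam energies best matches the measured ratio."""
    measured_ratio = mu1_measured / mu2_measured
    return min(database, key=lambda name: abs(
        database[name][0] / database[name][1] - measured_ratio))

# Toy database of (mu at E1, mu at E2) pairs -- illustrative values only.
db = {"water": (0.20, 0.14), "steel": (1.20, 0.60), "lead": (5.50, 0.90)}
```

Using the ratio rather than the raw coefficients cancels the unknown thickness of the contained material, which is why the method works without knowing the target geometry.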

The fractions skill score (FSS) was one of the measures that formed part of the Intercomparison of Spatial Forecast Verification Methods project. The FSS was used to assess a common dataset that consisted of real and perturbed Weather Research ...

Early detection of infectious mononucleosis is carried out using a sample of human blood by isolating and identifying the presence of Inmono proteins in the sample from a two-dimensional protein map. The proteins are characterized by isoelectric banding, as measured in urea, of about -16 to -17 with respect to certain isoelectric point standards, and by a molecular mass of about 70 to 75 kilodaltons as measured in sodium dodecylsulfate-containing polyacrylamide gels; the presence of the Inmono proteins is correlated with the existence of infectious mononucleosis.

Nuclear Magnetic Resonance (NMR) spectroscopy is a powerful technique for studying biomolecular interactions at the atomic scale. Our NMR lab is involved in the identification of small molecules, or ligands, that bind to target protein receptors, such as tetanus (TeNT) and botulinum (BoNT) neurotoxins, anthrax proteins and HLA-DR10 receptors on non-Hodgkin's lymphoma cancer cells. Once low affinity binders are identified, they can be linked together to produce multidentate synthetic high affinity ligands (SHALs) that have very high specificity for their target protein receptors. An important nanotechnology application for SHALs is their use in the development of robust chemical sensors or biochips for the detection of pathogen proteins in environmental samples or body fluids. Here, we describe a recently developed NMR competition assay based on transferred nuclear Overhauser effect spectroscopy (trNOESY) that enables the identification of sets of ligands that bind to the same site, or a different site, on the surface of TeNT fragment C (TetC) than a known "marker" ligand, doxorubicin. Using this assay, we can identify the optimal pairs of ligands to be linked together for creating detection reagents, as well as estimate the relative binding constants for ligands competing for the same site.

Identification of continuous-time autoregressive processes from discrete-time data by replacing the differentiation operator by an approximation is considered. A linear regression model can then be formulated. The least-squares method and the instrumental ... Keywords: Continuous-time AR process, Discrete-time data, Identification
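A minimal instance of this identification scheme for a continuous-time AR(1) process, assuming a forward-difference replacement of the differentiation operator (the instrumental-variable variant mentioned in the truncated text is not shown):

```python
import numpy as np

def estimate_car1(x, dt):
    """Estimate `a` in the continuous-time AR(1) model dx/dt = a*x + e(t)
    from equally spaced samples x[k] = x(k*dt), by replacing the
    derivative with a forward difference and solving the resulting
    linear regression by least squares."""
    dxdt = (x[1:] - x[:-1]) / dt      # derivative approximation
    phi = x[:-1]                      # regressor
    return float(np.dot(phi, dxdt) / np.dot(phi, phi))
```

The derivative approximation introduces a discretization bias of order `dt`, which is one reason the literature also considers instrumental-variable estimators.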

This paper describes a new method for determining the consensus sequences that signal the start of translation and the boundaries between exons and introns (donor and acceptor sites) in eukaryotic mRNA. The method takes into account the dependencies between adjacent bases, in contrast to the usual technique of considering each position independently. When coupled with a dynamic program to compute the most likely sequence, new consensus sequences emerge. The consensus sequence information is summarized in conditional probability matrices which, when used to locate signals in uncharacterized genomic DNA, have greater sensitivity and specificity than conventional matrices. Species-specific versions of these matrices are especially effective at distinguishing true and false sites.
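Scoring a candidate site with adjacent-base dependencies, rather than position-independent probabilities, can be sketched as follows; the two-position model and its probabilities are purely illustrative, not taken from the paper:

```python
import math

def score_site(seq, cond_probs, background=0.25):
    """Log-likelihood ratio score of a candidate site using first-order
    conditional probabilities P(base at i | base at i-1) instead of
    treating each position independently."""
    score = math.log(cond_probs[0][seq[0]] / background)
    for i in range(1, len(seq)):
        # Conditional probability matrix: keyed by (previous base, base).
        score += math.log(cond_probs[i][(seq[i - 1], seq[i])] / background)
    return score

# Hypothetical two-position donor-site model (illustrative probabilities).
model = [{"G": 0.7, "A": 0.1, "C": 0.1, "T": 0.1},
         {("G", "T"): 0.8, ("G", "A"): 0.2}]
```

Conditioning on the previous base lets the matrix capture dinucleotide preferences (such as the GT of donor sites) that an independent-position weight matrix averages away.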

A method for producing a Y chromosome specific probe selected from highly repeating sequences on that chromosome is described. There is little or no nonspecific binding to autosomal and X chromosomes, and a very large signal is provided. Inventive primers allowing the use of PCR for both sample amplification and probe production are described, as is their use in producing large DNA chromosome painting sequences. 9 figs.

A method for producing a Y chromosome specific probe selected from highly repeating sequences on that chromosome is described. There is little or no nonspecific binding to autosomal and X chromosomes, and a very large signal is provided. Inventive primers allowing the use of PCR for both sample amplification and probe production are described, as is their use in producing large DNA chromosome painting sequences.

The relationship between groundwater geochemistry and microbial community structure can be complex and difficult to assess. We applied nonlinear and generalized linear data analysis methods to relate microbial biomarkers (phospholipids fatty acids, PLFA) to groundwater geochemical characteristics at the Shiprock uranium mill tailings disposal site that is primarily contaminated by uranium, sulfate, and nitrate. First, predictive models were constructed using feedforward artificial neural networks (NN) to predict PLFA classes from geochemistry. To reduce the danger of overfitting, parsimonious NN architectures were selected based on pruning of hidden nodes and elimination of redundant predictor (geochemical) variables. The resulting NN models greatly outperformed the generalized linear models. Sensitivity analysis indicated that tritium, which was indicative of riverine influences, and uranium were important in predicting the distributions of the PLFA classes. In contrast, nitrate concentration and inorganic carbon were least important, and total ionic strength was of intermediate importance. Second, nonlinear principal components (NPC) were extracted from the PLFA data using a variant of the feedforward NN. The NPC grouped the samples according to similar geochemistry. PLFA indicators of Gram-negative bacteria and eukaryotes were associated with the groups of wells with lower levels of contamination. The more contaminated samples contained microbial communities that were predominated by terminally branched saturates and branched monounsaturates that are indicative of metal reducers, actinomycetes, and Gram-positive bacteria. These results indicate that the microbial community at the site is coupled to the geochemistry and knowledge of the geochemistry allows prediction of the community composition.

This report presents the results of experimental tests of a concept for using infrared (IR) photos to identify non-operational systems based on their glazing temperatures; operating systems have lower glazing temperatures than those in stagnation. In recent years thousands of new solar hot water (SHW) systems have been installed in some utility districts. As these numbers increase, concern is growing about the systems' dependability because installation rebates are often based on the assumption that all of the SHW systems will perform flawlessly for a 20-year period. If SHW systems routinely fail prematurely, then the utilities will have overpaid for grid-energy reduction performance that is unrealized. Moreover, utilities are responsible for replacing energy for loads that failed SHW systems were supplying. Thus, utilities are seeking data to quantify the reliability of SHW systems. The work described herein is intended to help meet this need. The details of the experiment are presented, including a description of the SHW collectors that were examined, the testbed that was used to control the system and record data, the IR camera that was employed, and the conditions in which testing was completed. The details of the associated analysis are presented, including direct examination of the video records of operational and stagnant collectors, as well as the development of a model to predict glazing temperatures and an analysis of temporal intermittency of the images, both of which are critical to properly adjusting the IR camera for optimal performance. Many IR images and a video are presented to show the contrast between operating and stagnant collectors. The major conclusion is that the technique has potential to be applied by using an aircraft fitted with an IR camera that can fly over an area with installed SHW systems, thus recording the images. Subsequent analysis of the images can determine the operational condition of the fielded collectors.
Specific recommendations are presented relative to the application of the technique, including ways to mitigate and manage potential sources of error.

Spam has grown to become a major threat for email communication. Although spam filters' degree of sophistication has increased ever since, they still produce huge amounts of false positives and false negatives thereby reducing the reliability of email. ... Keywords: address trading, forensics, identification, spam

We analyze a proper time renormalization group equation for Quantum Einstein Gravity in the Einstein-Hilbert truncation and compare its predictions to those of the conceptually different exact renormalization group equation of the effective average action. We employ a smooth infrared regulator of a special type which is known to give rise to extremely precise critical exponents in scalar theories. We find perfect consistency between the proper time and the average action renormalization group equations. In particular the proper time equation, too, predicts the existence of a non-Gaussian fixed point as it is necessary for the conjectured nonperturbative renormalizability of Quantum Einstein Gravity.

The Nordheim integral treatment is an approximate method for determining the neutron spectra within materials containing resonance cross sections. These spectra are necessary to determine the flux-weighted multigroup data properly. In practice, the resonance material multigroup cross sections produced by use of Nordheim-generated spectra are combined with other multigroup cross sections, and further energy and spatial collapsing of the data is performed. The question arises of whether performing this spatial collapse following the Nordheim treatment constitutes double spatial weighting. To investigate this possibility, results were compared with those for a second method of forming the multigroup cross section data, and with the results of a fine-group calculation. It was concluded that the first method above is the procedure to follow for the proper use of the Nordheim integral treatment. 1 table. (RWR)

Proper Orthogonal Decomposition for Flow Calculations and Optimal Control in a Horizontal CVD Reactor. AMS Subject Classification: 76N10, 65K10, 49J20, 35C10. ... a chemical reaction in the gas phase above the surface of the film to deposit desired materials onto ...

A simple, fully automated, and efficient method to determine the structural properties and evolution (tracking) of cloud shields of convective systems (CS) is described. The method, which is based on the maximum spatial correlation tracking ...

This report is about applying a Fisher ratio method to entire four dimensional (4D) data sets from third-order instrumentation data. The Fisher ratio method uses a novel indexing scheme to discover the unknown chemical differences among known classes of complex samples. This is the first report of a Fisher ratio analysis procedure applied to entire 4D data sets of third-order separation data, which, in this case, is comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry analyses of metabolite extracts using all of the collected mass channels. Current analysis methods for third-order separation data use only user-defined subsets of the 4D data set.
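For a single variable (one chromatographic point on one mass channel), the Fisher ratio is the squared difference of class means over the pooled within-class variance; a minimal sketch, where the function name and the pooling convention are assumptions:

```python
import statistics

def fisher_ratio(class_a, class_b):
    """Per-variable Fisher ratio: squared difference of class means
    divided by the pooled within-class variance. Large values mark
    variables that differ between the two sample classes."""
    ma, mb = statistics.mean(class_a), statistics.mean(class_b)
    va, vb = statistics.variance(class_a), statistics.variance(class_b)
    na, nb = len(class_a), len(class_b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) ** 2 / pooled
```

Applied across every point of a 4D data set, ranking variables by this ratio surfaces the chemical differences between classes without any user-defined subsetting.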

Liquid loading is a serious problem in gas wells. Many proven artificial lift methods have been used to alleviate this problem. However, a complete workflow to determine the most suitable artificial lift method for given well conditions does not exist. In 2008, Han Young Park presented a thesis describing a decision matrix tool that used a decision tree technique for data mining to determine the best artificial lift method for liquid loading in gas wells from seven artificial lift methods: plunger lift, gas lift, ESP, PCP, rod pump, jet pump, and piston pump. He determined the technical feasibility and the cost evaluation of these seven techniques. His workflow consisted of three rounds. The first round was a preliminary screening round: using all input well conditions, the impractical techniques were screened out. In the second round, all the techniques that passed round one were graded and ranked. In the third round, an economic evaluation was performed using the cost of each artificial lift method and assuming constant additional gas production per day to determine net present value (NPV) and internal rate of return (IRR). In this thesis, we propose an extended workflow from Park's thesis for the decision matrix tool. We added an integrated production simulation step (reservoir to wellhead) with commercial software between the second and third rounds. We performed simulations of the various artificial lift methods to see the additional gains from each technique. We used the additional gas production resulting from simulation to calculate the economic yardsticks of the third round, NPV and IRR. Moreover, we made the decision matrix more complete by adding three more liquid unloading techniques to the decision matrix: velocity string, foam injection, and heated tubing.
We have also updated all screening conditions, the technical scores, and the costs for the decision matrix from the previous study using literature reviews, information from the project's sponsor, information from service companies, and our own judgment. The aim of the decision matrix is to allow operators to screen quickly and efficiently for the most suitable artificial lift method to solve the liquid loading problem under given well conditions.
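The NPV and IRR yardsticks of the economic round can be computed as below; the bisection solver and the sample cashflow profile are illustrative, not details taken from either thesis:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t
    (cashflows[0] is the upfront cost, usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return: the discount rate at which NPV = 0,
    found by bisection (assumes NPV decreases with rate, i.e. an
    upfront cost followed by positive inflows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, paying 1000 upfront for two yearly inflows of 600 has NPV 200 at a zero discount rate and an IRR of about 13.1%.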

Substantial environmental contamination has occurred from coal tar creosote and pentachlorophenol (C5P) in wood preserving solutions. The present studies focused on the characterization and remediation of these contaminants. The first objective was to delineate a sequence of biological changes caused by chlorinated phenol (CP) exposure. In Clone 9 cells, short-term exposure to 10 µM C5P decreased pH, GJIC, and GSH, and increased ROS generation. Long-term exposure caused mitochondrial membrane depolarization (25 µM), increased intracellular Ca2+ (50 µM), and plasma membrane depolarization (100 µM). Cells were affected similarly by C5P or 2,3,4,5-C4P, and similarly by 2,3,5-C3P or 3,5-C2P. Endpoints were affected by dose, time, and the number of chlorine substituents on specific congeners. Thus, this information may be used to identify and quantify unknown CPs in a mixture to be remediated. Given the toxic effects observed from CP exposure in vitro, the objective of the second study was to develop multi-functional sorbents to remediate CPs and other components of wood preserving waste from groundwater. Cetylpyridinium-exchanged low pH montmorillonite clay (CP-LPHM) was bonded to either sand (CP-LPHM/sand) or granular activated carbon (CP-LPHM/GAC). Laboratory studies utilizing aqueous solution derived from wood preserving waste indicated that 3:2 CP-LPHM/GAC and CP-LPHM/sand were the most effective formulations. In situ elution of oil-water separator effluent indicated that both organoclay-containing composites have a high capacity for contaminants identified in wood preserving waste, in particular high molecular weight and carcinogenic PAHs. Further, GAC did not add substantial sorptive capacity to the composite formulation. Following water remediation, the final aim of this work was to explore the safety of the parent clay minerals as potential enterosorbents for contaminants ingested in water and food.
Calcium montmorillonite and sodium montmorillonite clays were added to the balanced diet of Sprague-Dawley rats throughout pregnancy. Based on evaluations of toxicity and neutron activation analysis of tissues, no significant differences were observed between animals receiving clay supplements and control animals, with the exception of slightly decreased brain Rb in animals ingesting clay. Overall, the results suggest that neither clay mineral, at relatively high dietary concentrations, influences mineral uptake or utilization in the pregnant rat.

The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.
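
The "grown and interconnected" High Amplitude Event step amounts to labeling connected regions of above-threshold amplitude. A minimal 2-D sketch (real HAE analysis works on 3-D/4-D attribute volumes; the grid and threshold here are invented):

```python
# Toy region growing: threshold an amplitude grid, then label connected
# high-amplitude cells with a flood fill (4-connectivity).

def label_regions(grid, threshold):
    """Return a dict region_id -> sorted list of (row, col) cells >= threshold."""
    rows, cols = len(grid), len(grid[0])
    seen, regions = set(), {}
    for r0 in range(rows):
        for c0 in range(cols):
            if grid[r0][c0] < threshold or (r0, c0) in seen:
                continue
            # Flood fill from this seed cell.
            stack, cells = [(r0, c0)], []
            seen.add((r0, c0))
            while stack:
                r, c = stack.pop()
                cells.append((r, c))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] >= threshold
                            and (nr, nc) not in seen):
                        seen.add((nr, nc))
                        stack.append((nr, nc))
            regions[len(regions)] = sorted(cells)
    return regions

amplitude = [
    [0.1, 0.9, 0.8, 0.1],
    [0.1, 0.7, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.9],
]
print(label_regions(amplitude, 0.5))  # two regions: one 3-cell, one 1-cell
```

In the invention's terms, each labeled region is a candidate HAE body; connecting regions across surveys builds the large-scale "plumbing" network.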

The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells. 22 figs.

This is the second report of the ASHRAE 1093-RP project that reports on the progress during the scheduled Phase II effort. In this report, we present: (1) the data sets identified and acquired for the analysis; (2) the method adopted for classifying the Office building categories; (3) the relevant methods for daytyping necessary for creating the typical load shapes for energy and cooling load calculation; (4) the relevant robust variability (uncertainty) analysis; (5) typical load shapes reported in the literature; (6) a test to assure the non-weather dependency (seasonal variation) of the lighting and equipment data sets; and (7) a proposed occupancy surrogate variable. The results obtained during Phase II will enable us to proceed with Phase III, as planned. Phase III will cover: (1) developing the typical load shapes for the acquired data sets, using the proposed method, for both energy and cooling load calculations; (2) developing the tool-kit for deriving the new diversity factors and general guidelines for their use; and (3) developing illustrative examples of the use of the diversity factors in the DOE-2 and BLAST simulation programs.
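
At its core, daytyping groups observed days by type and averages hour by hour into a typical load shape. A minimal sketch (the two-way weekday/weekend typing and the loads are invented; 1093-RP used richer day types and variability statistics):

```python
# Minimal daytyping: average hourly loads into one "typical day" profile
# per day type. Data and day types are hypothetical.
from statistics import mean

def typical_profiles(days):
    """days: list of (day_type, [24 hourly loads]) -> day_type -> 24 hourly means."""
    by_type = {}
    for day_type, hours in days:
        by_type.setdefault(day_type, []).append(hours)
    return {t: [mean(h[i] for h in group) for i in range(24)]
            for t, group in by_type.items()}

days = [("weekday", [10] * 8 + [50] * 10 + [10] * 6),
        ("weekday", [12] * 8 + [54] * 10 + [12] * 6),
        ("weekend", [8] * 24)]
profiles = typical_profiles(days)
print(profiles["weekday"][12], profiles["weekend"][12])
```

Diversity factors then follow by normalizing each typical profile against its peak.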

ZipperDB contains predictions of fibril-forming segments within proteins identified by the 3D Profile Method. The UCLA-DOE Institute for Genomics and Proteomics has analyzed over 20,000 putative protein sequences for segments with high fibrillation propensity that could form a "steric zipper": two self-complementary beta sheets, giving rise to the spine of an amyloid fibril. The approach is unique in that structural information is used to evaluate the likelihood that a particular sequence can form fibrils. [copied with edits from http://www.doe-mbi.ucla.edu/]. In addition to searching the database, academic and non-profit users may also submit their protein sequences to the database.

Two techniques for identifying heavy Higgs bosons produced at SSC energies are discussed. In the first, the Higgs boson decays into ZZ, with one Z decaying into an e-pair or µ-pair and the other into a neutrino pair. In the second, the production of the Higgs boson by WW fusion is tagged by detecting the quarks that produced the bremsstrahlung virtual W's. The associated Higgs decay is identified by one leptonic and one hadronic decay. Both methods appear capable of finding a heavy Higgs boson provided the SSC design parameters are achieved. 16 refs., 2 figs., 2 tabs.

To identify the composition of a metal alloy, sparks generated from the alloy are optically observed and spectrographically analyzed. The spectrographic data, in the form of a full-spectrum plot of intensity versus wavelength, provide the "signature" of the metal alloy. This signature can be compared with similar plots for alloys of known composition to establish the unknown composition by a positive match with a known alloy. An alternative method is to form intensity ratios for pairs of predetermined wavelengths within the observed spectrum and to then compare the values of such ratios with similar values for known alloy compositions, thereby to positively identify the unknown alloy composition.
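
The intensity-ratio alternative can be sketched as a nearest-match lookup. The wavelength pairs, reference ratios, and spectra below are fabricated for illustration, not taken from the patent:

```python
# Sketch of alloy identification by intensity ratios: form ratios at
# predetermined wavelength pairs, then pick the known alloy whose reference
# ratios are closest. All numbers here are made up.

WAVELENGTH_PAIRS = [(396.2, 404.6), (520.8, 404.6)]  # nm, hypothetical

def ratios(spectrum):
    """spectrum: wavelength (nm) -> intensity; returns the ratio signature."""
    return [spectrum[a] / spectrum[b] for a, b in WAVELENGTH_PAIRS]

def identify(spectrum, library):
    """Return the library alloy with the smallest squared-distance signature."""
    sig = ratios(spectrum)
    def distance(name):
        return sum((s - k) ** 2 for s, k in zip(sig, library[name]))
    return min(library, key=distance)

library = {"304 stainless": [2.0, 0.5], "6061 aluminum": [0.3, 1.8]}
unknown = {396.2: 195.0, 404.6: 100.0, 520.8: 52.0}  # ratios ~ [1.95, 0.52]
print(identify(unknown, library))  # -> 304 stainless
```

The full-spectrum "signature" comparison described first works the same way, just with the whole intensity-versus-wavelength plot as the feature vector.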

We report on nine wide common proper motion systems containing late-type M, L, or T companions. We confirm six previously reported companions, and identify three new systems. The ages of these systems are determined using ...

We use 14-year baseline images obtained with the Wide Field and Planetary Camera 2 on board the Hubble Space Telescope (HST) to derive a proper motion for one of the Milky Way's most distant dwarf spheroidal companions, Leo II, relative to an extragalactic background reference frame. Astrometric measurements are performed in the effective point-spread function formalism using our own developed code. An astrometric reference grid is defined using 3224 stars that are members of Leo II and brighter than a magnitude of 25 in the F814W band. We identify 17 compact extragalactic sources, for which we measure a systemic proper motion relative to this stellar reference grid. We derive a proper motion [{mu}{sub {alpha}},{mu}{sub {delta}}] = [+104 {+-} 113, -33 {+-} 151] {mu}as yr{sup -1} for Leo II in the heliocentric reference frame. Though marginally detected, the proper motion yields constraints on the orbit of Leo II. Given a distance of d {approx_equal} 230 kpc and a heliocentric radial velocity v{sub r} = +79 km s{sup -1}, and after subtraction of the solar motion, our measurement indicates a total orbital motion v{sub G} = 266.1 {+-} 128.7 km s{sup -1} in the Galactocentric reference frame, with a radial component v{sub r{sub G}}=21.5{+-}4.3 km s{sup -1} and tangential component v{sub t{sub G}} = 265.2 {+-} 129.4 km s{sup -1}. The small radial component indicates that Leo II either has a low-eccentricity orbit or is currently close to perigalacticon or apogalacticon distance. We see evidence for systematic errors in the astrometry of the extragalactic sources which, while close to being point sources, are slightly resolved in the HST images. We argue that more extensive observations at later epochs will be necessary to better constrain the proper motion of Leo II. We provide a detailed catalog of the stellar and extragalactic sources identified in the HST data which should provide a solid early-epoch reference for future astrometric measurements.
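
The quoted velocity components are internally consistent: the total Galactocentric motion is the quadrature sum of the radial and tangential components, which we can verify directly:

```python
# Consistency check on the Galactocentric velocities quoted above:
# v_G = sqrt(v_r^2 + v_t^2) should reproduce the quoted total motion.
import math

v_r, v_t = 21.5, 265.2          # km/s, central values from the abstract
v_G = math.hypot(v_r, v_t)
print(round(v_G, 1))            # -> 266.1, matching the quoted v_G
```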

Energy consumption in buildings is a growing concern. Many buildings are energy hogs simply because they were not set up properly to begin with. The building envelope and infiltration of unconditioned air is also a major concern in hot and humid climates, not only because of the loss of energy, but also because of damage that can result to insulation, drywall, and structure in addition to promotion of mold and mildew growth. Proper setup of the HVAC system, in conjunction with sound building skin design, can alleviate many of these problems. This paper will explain how most mixed air HVAC systems are set up with problems to begin with and how to identify and solve those problems. It will explain different control schemes that specifically deal with proper building pressurization.

In the future, more attention will be required concerning the filling of the input phase space used by particle-simulation codes. The prospect of greatly improved particle-tracking codes implies that code input distributions must be accurate models of real input distributions. Much of present simulation work is done using artificial phase-space distributions (K-V, waterbag, etc.). Real beams can differ dramatically from such ideal input. We have already developed a method for deriving code input distributions from measurements. This paper addresses the problem of determining the number of pseudoparticles needed to model the measured distribution properly.
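
As a toy illustration of the pseudoparticle-count question, the sketch below draws pseudoparticles from a binned "measured" 1-D profile by inverse-CDF sampling and watches a moment (the rms) steady as the count grows. The profile and counts are invented:

```python
# Inverse-CDF sampling of pseudoparticles from a measured histogram; the rms
# of the sample converges to the distribution's rms as the count increases.
import bisect
import random

bins = [-2.0, -1.0, 0.0, 1.0, 2.0]        # bin centers of "measured" profile
counts = [1.0, 4.0, 6.0, 4.0, 1.0]        # measured intensities (hypothetical)

total = sum(counts)
cdf, acc = [], 0.0
for c in counts:
    acc += c
    cdf.append(acc / total)

def sample(n, rng):
    # Map each uniform deviate to the first bin whose cumulative weight covers it.
    return [bins[bisect.bisect_left(cdf, rng.random())] for _ in range(n)]

rng = random.Random(42)
for n in (100, 10_000):
    xs = sample(n, rng)
    rms = (sum(x * x for x in xs) / n) ** 0.5
    print(n, round(rms, 3))               # rms estimate steadies as n grows
```

For this profile the exact rms is 1.0; the number of pseudoparticles "needed" is the point where such sample moments sit within the tolerance the simulation requires.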

We develop a method to compute the moments of the eigenvalue densities of matrices in the Gaussian, Laguerre and Jacobi ensembles for all the symmetry classes beta = 1, 2, 4 and finite matrix dimension n. The moments of the Jacobi ensembles have a physical interpretation as the moments of the transmission eigenvalues of an electron through a quantum dot with chaotic dynamics. For the Laguerre ensemble we also evaluate the finite n negative moments. Physically, they correspond to the moments of the proper delay times, which are the eigenvalues of the Wigner-Smith matrix. Our formulae are well suited to an asymptotic analysis as n -> infinity.
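
A Monte Carlo sanity check in the spirit of these finite-n moments: for a 2x2 GOE matrix (beta = 1) with diagonal entries ~ N(0,1) and off-diagonal ~ N(0,1/2), the second spectral moment E[(lam1^2 + lam2^2)/2] = E[tr(H^2)]/2 equals 3/2. Note this normalization is one common convention, not necessarily the paper's:

```python
# Monte Carlo estimate of the second spectral moment of a 2x2 GOE matrix,
# checked against the exact value 3/2 under the stated normalization.
import math
import random

random.seed(0)
trials, acc = 200_000, 0.0
for _ in range(trials):
    a, c = random.gauss(0, 1), random.gauss(0, 1)
    b = random.gauss(0, math.sqrt(0.5))
    # tr(H^2) = a^2 + c^2 + 2*b^2 for symmetric H = [[a, b], [b, c]].
    acc += (a * a + c * c + 2 * b * b) / 2
print(round(acc / trials, 2))  # close to 1.5
```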

We have searched {approx}8200 deg{sup 2} for high proper motion ({approx}0.''5-2.''7 year{sup -1}) T dwarfs by combining first-epoch data from the Pan-STARRS1 (PS1) 3{pi} Survey, the Two Micron All Sky Survey (2MASS) All-Sky Point Source Catalog, and the WISE Preliminary Data Release. We identified two high proper motion objects with the very red (W1 - W2) colors characteristic of T dwarfs, one being the known T7.5 dwarf GJ 570D. Near-IR spectroscopy of the other object (PSO J043.5395+02.3995 {identical_to} WISEP J025409.45+022359.1) reveals a spectral type of T8, leading to a photometric distance of 7.2 {+-} 0.7 pc. The 2.''56 year{sup -1} proper motion of PSO J043.5+02 is the second highest among field T dwarfs, corresponding to a tangential velocity of 87 {+-} 8 km s{sup -1}. According to the Besancon galaxy model, this velocity indicates that its galactic membership is probably in the thin disk, with the thick disk an unlikely possibility. Such membership is in accord with the near-IR spectrum, which points to a surface gravity (age) and metallicity typical of the field population. We combine 2MASS, Sloan Digital Sky Survey, WISE, and PS1 astrometry to derive a preliminary parallax of 171 {+-} 45 mas (5.8{sup +2.0} {sub -1.2} pc), the first such measurement using PS1 data. The proximity and brightness of PSO J043.5+02 will facilitate future characterization of its atmosphere, variability, multiplicity, distance, and kinematics. The modest number of candidates from our search suggests that the immediate ({approx}10 pc) solar neighborhood does not contain a large reservoir of undiscovered T dwarfs earlier than about T8.
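
The quoted kinematics can be cross-checked with the standard relation between tangential velocity, proper motion, and distance, v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]:

```python
# Tangential velocity of PSO J043.5+02 from its proper motion and the
# photometric distance quoted above.
mu, d = 2.56, 7.2               # arcsec/yr and pc, from the abstract
v_t = 4.74 * mu * d
print(round(v_t))               # -> 87 km/s, matching the quoted value
```

The preliminary parallax-based distance is likewise just the reciprocal of the parallax in arcseconds: 1/0.171 is about 5.8 pc.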

We present absolute proper motions in Kapteyn Selected Area (SA) 103. This field is located 7 deg. west of the center of the Virgo Stellar Stream (VSS), and has a well-defined main sequence representing the stream. In SA 103, we identify one RR Lyrae star as a member of the VSS, according to its metallicity, radial velocity, and distance. VSS candidate turnoff and subgiant stars have proper motions consistent with that of the RR Lyrae star. The three-dimensional velocity data imply an orbit with a pericenter of {approx}11 kpc and an apocenter of {approx}90 kpc. Thus, the VSS comprises tidal debris found near the pericenter of a highly destructive orbit. Examining the six globular clusters at distances larger than 50 kpc from the Galactic center, and the proposed orbit of the VSS, we find one tentative association, NGC 2419. We speculate that NGC 2419 is possibly the nucleus of a disrupted system of which the VSS is a part.

The online version of this article has been published under an open access model. Users are entitled to use, reproduce, disseminate, or display the open access version of this article for non-commercial purposes provided that: the original authorship is properly and fully attributed; the Journal and Oxford University Press are attributed as the original place of publication with the correct citation details given; if an article is subsequently reproduced or disseminated not in its entirety but only in part or as a derivative work this must be clearly indicated. For commercial re-use, please contact journals.permissions@oxfordjournals.org

Energy consumed during the construction of buildings and structures, including the embodied energy of the concrete and other construction materials, represents a considerable percentage, possibly reaching 40%, of the total energy consumed during the whole service life of the structure. Reducing the energy consumed in construction practices, along with reducing the embodied energy of concrete and building materials, is therefore of major importance. Reducing concrete's embodied energy is one of the major green features of buildings and an important tool to improve sustainability, save resources for coming generations, and reduce greenhouse gas emissions. In this paper, different methods to reduce concrete's embodied energy are discussed and their effect on demand-side energy is assessed. Using local materials, pozzolanic blended cements, and fillers, along with specifying 56-day strength in design, are discussed and assessed. Proper mix design, quality control, and proper architectural design also reduce embodied energy. Improving durability, regular maintenance, and scheduled repair are essential to increase the expected service life of buildings and hence reduce overall resource and energy consumption. These effects are discussed and quantified. Construction practices also consume a considerable amount of energy. The effects of transporting, conveying, pouring, finishing, and curing concrete on energy consumption are also discussed.

Context: MHD waves and magnetic null points are both prevalent in many astrophysical plasmas, including the solar atmosphere. Interaction between waves and null points has been implicated as a possible mechanism for localised heating events. Aims: Here we investigate the transient behaviour of the Alfven wave about fully 3D proper and improper magnetic null points. Previously, the behaviour of fast magnetoacoustic waves at null points in 3D, cold MHD was considered by Thurgood & McLaughlin (Astronomy & Astrophysics, 2012, 545, A9). Methods: We introduce an Alfven wave into the vicinity of both proper and improper null points by numerically solving the ideal, $\beta=0$ MHD equations using the LARE3D code. A magnetic fieldline and flux-based coordinate system permits the isolation of resulting wave-modes and the analysis of their interaction. Results: We find that the Alfven wave propagates throughout the region and accumulates near the fan-plane, causing current build up. For different values of nul...

A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management.

A method is disclosed for inverting 3-D seismic reflection data obtained from seismic surveys to derive impedance models for a subsurface region, and for inversion of multiple 3-D seismic surveys (i.e., 4-D seismic surveys) of the same subsurface volume, separated in time to allow for dynamic fluid migration, such that small scale structure and regions of fluid and dynamic fluid flow within the subsurface volume being studied can be identified. The method allows for the mapping and quantification of available hydrocarbons within a reservoir and is thus useful for hydrocarbon prospecting and reservoir management. An iterative seismic inversion scheme constrained by actual well log data which uses a time/depth dependent seismic source function is employed to derive impedance models from 3-D and 4-D seismic datasets. The impedance values can be region grown to better isolate the low impedance hydrocarbon bearing regions. Impedance data derived from multiple 3-D seismic surveys of the same volume can be compared to identify regions of dynamic evolution and bypassed pay. Effective Oil Saturation or net oil thickness can also be derived from the impedance data and used for quantitative assessment of prospective drilling targets and reservoir management. 20 figs.

When dismantling scenarios are selected, not only the quantitatively calculated results but also the qualitatively estimated results should be considered with a logical and systematic process. In this case, the MAUT (Multi-Attribute Utility Theory) is widely used for the quantification of subjective judgments in various fields of a decision making. This study focuses on the introduction and application of the MAUT method for the selection of decommissioning scenarios. To evaluate decommissioning scenarios, nine evaluation attributes are considered. These attributes are: the primary cost, the peripheral cost, the waste treatment cost, the worker's exposure, the worker's safety, the work difficulty, the originality of the dismantling technologies, their contributions to other industries, public relations, and public understanding. The weighting values of the attributes were determined by using the AHP (Analytic Hierarchy Process) method and their utility functions are produced from several questionnaires for the decision makers. As an implementation, this method was applied to evaluate two scenarios, the plasma arc cutting scenario and the nibbler cutting scenario for decommissioning the thermal column in KRR-1 (Korea Research Reactor-1). As a result, this method has many merits even though it is difficult to produce the utility function of each attribute. However, once they are setup it is easy to measure the alternatives' values and it can be applied regardless of the number of alternatives. (authors)
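
The MAUT scoring step reduces to a weighted sum of per-attribute utilities. A minimal sketch, with invented attributes, AHP-style weights, and utilities (the study's nine attributes and questionnaire-derived utility functions are not reproduced here):

```python
# MAUT aggregation: each attribute gets a utility in [0, 1] and a weight;
# a scenario's overall value is the weighted sum. All numbers are invented.

def maut_score(utilities, weights):
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[a] * utilities[a] for a in weights)

weights = {"primary_cost": 0.4, "worker_exposure": 0.35, "difficulty": 0.25}
plasma_arc = {"primary_cost": 0.6, "worker_exposure": 0.8, "difficulty": 0.5}
nibbler    = {"primary_cost": 0.8, "worker_exposure": 0.5, "difficulty": 0.7}

scores = {"plasma_arc": maut_score(plasma_arc, weights),
          "nibbler": maut_score(nibbler, weights)}
print(max(scores, key=scores.get))  # scenario with the higher overall utility
```

In the study the weights come from AHP pairwise comparisons rather than being assigned directly; the aggregation itself is the same.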

Drilling fluids for underbalanced operations require careful design and testing to ensure they do not damage sensitive formations. In addition to hole cleaning and lubrication functions, these fluids may be needed as kill fluids during emergencies. PanCanadian Petroleum Ltd. used a systematic approach in developing and field testing a nondamaging drilling fluid for use in underbalanced operations in the Glauconitic sandstone in the Westerose gas field in Alberta. A lab study was initiated to develop and test a non-damaging water-based drilling fluid for the horizontal well pilot project. The need to develop an inexpensive, nondamaging drilling fluid was previously identified during underbalanced drilling operations in the Weyburn field in southeastern Saskatchewan. A non-damaging fluid is required for hole cleaning, for lubrication of the mud motor, and for use as a kill fluid during emergencies. In addition, a nondamaging fluid is required when drilling with a conventional rig because pressure surges during connections and trips may result in the well being exposed to short periods of near balanced or overbalanced conditions. Without the protection of a filter cake, the drilling fluid will leak off into the formation, causing damage. The amount of damage is related to the rate of leak off and depth of invasion, which are directly proportional to the permeability to the fluid.

Most Commonly Identified Recommendations DOE ITP In Depth ITP Energy Assessment Webcast Presented by: Dr. Bin Wu, Director, Professor of Industrial Engineering Dr. Sanjeev Khanna, Assistant Director, Associate Professor of Mechanical Engineering With Contribution From MO IAC Student Engineers: Chatchai Pinthuprapa Jason Fox Yunpeng Ren College of Engineering, University of Missouri. April 16, 2009 Missouri Industrial Assessment Center Missouri IAC is one of 26 centers founded by the U.S. DOE in the nation. Since its establishment in 2005, we have been working closely with the MoDNR, the MU University Extension, utility providers in the state, etc., to provide education, development and services in industrial energy efficiency. Our services (audits, workshops, etc.) have already covered many locations across the state of Missouri.

We report the results of an investigation of the spoken word retrieval abilities of a patient, BG, with proper name anomia. Our investigations reveal that she is impaired in retrieving common nouns as well as proper names. Common noun retrieval was influenced by age-of-acquisition, word familiarity and name agreement. Cued retrieval of proper names was influenced by age-of-acquisition, although effects of other linguistic variables were not excluded. It is claimed that an explanation in terms of a `continuum of word retrieval difficulty' rather than of proper names as `pure referring expressions' can best account for the findings. However, this proposal is unlikely to be able to explain all cases of proper name anomia. Nonetheless, it is suggested that similar findings may be observed in other people with proper name anomia, and that it is necessary for future studies to investigate not only proper name but also common noun retrieval. We also provide evidence that Plausible Phonology (Brennen, 1993) and Specificity (Brédart, 1993) hypotheses of proper name anomia cannot account for BG's naming abilities.

Abstract: The biodegradabilities of poly(butylene succinate) (PBS) powders in a controlled compost at 58 °C have been studied using a Microbial Oxidative Degradation Analyzer (MODA) based on the ISO 14855-2 method, entitled "Determination of the ultimate aerobic biodegradability of plastic materials under controlled composting conditions - Method by analysis of evolved carbon dioxide - Part 2: Gravimetric measurement of carbon dioxide evolved in a laboratory-scale test." The evolved CO2 was trapped by an additional aqueous Ba(OH)2 solution. The trapped BaCO3 was transformed into graphite via a serial vaporization and reduction reaction using a gas-tight tube and vacuum manifold system. This graphite was analyzed by accelerated mass spectrometry (AMS) to determine the percent modern carbon [pMC (sample)] based on the 14C radiocarbon concentration. By using the theory that pMC (sample) was the sum of the pMC (compost) (109.87%) and pMC (PBS) (0%) as the respective ratio in the determined period, the CO2 (respiration) was calculated from only one reaction vessel. It was found that the biodegradabilities determined by the CO2 amount from PBS in the sample vessel were about 30% lower than those based on the ISO method. These differences between the ...
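
The pMC relation above is a two-source mixing equation: with pMC(PBS) = 0% and pMC(compost) = 109.87%, the fraction of evolved CO2 coming from compost respiration is just pMC(sample)/pMC(compost). A worked version (the pMC(sample) value is a made-up example, not a measurement from the paper):

```python
# Two-source radiocarbon mixing: apportion evolved CO2 between fossil-carbon
# PBS (pMC = 0) and biogenic compost respiration (pMC = 109.87).
pmc_compost, pmc_pbs = 109.87, 0.0
pmc_sample = 65.0                       # hypothetical AMS result for the mix

f_compost = (pmc_sample - pmc_pbs) / (pmc_compost - pmc_pbs)
f_pbs = 1.0 - f_compost
print(round(f_pbs, 3))                  # fraction of CO2 from the polymer
```

This is how one reaction vessel suffices: the isotopic signature separates polymer-derived CO2 from compost respiration without a blank vessel.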

The proper orthogonal decomposition technique is applied to 74 snapshots of 3D wind and temperature fields to study turbulent coherent structures and their interplay in the urban boundary layer over Oklahoma City, Oklahoma. These snapshots of ...
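
Proper orthogonal decomposition of a snapshot set can be sketched with the method of snapshots: build the snapshot correlation matrix, take its eigenvalues, and read off the energy captured per mode. A minimal two-snapshot example with invented data (the study's 74 3-D snapshots proceed identically, just with larger matrices, usually via a library SVD):

```python
# Method-of-snapshots POD on two made-up snapshots: correlation matrix,
# its eigenvalues, and the energy fraction of the leading mode.
import math

x1 = [1.0, 2.0, 3.0, 4.0]       # snapshot 1 (flattened field, hypothetical)
x2 = [1.1, 1.9, 3.2, 3.8]       # snapshot 2

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
# 2x2 correlation matrix C[i][j] = <x_i, x_j>.
c11, c12, c22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
# Eigenvalues of [[c11, c12], [c12, c22]] via the quadratic formula.
tr, det = c11 + c22, c11 * c22 - c12 * c12
disc = math.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(round(lam1 / (lam1 + lam2), 4))   # energy fraction of leading POD mode
```

Because the two snapshots are nearly parallel, almost all the energy lands in the first mode; coherent structures correspond to the dominant eigenvectors.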

Environmental Management, Inc. It has been subject to the Agency's peer and administrative review, and it has been approved for publication as an EPA document. The opinions, findings, and conclusions expressed herein are those of the contractor and not necessarily those of the EPA or other cooperating agencies. Mention of company or product names is not to be construed as an endorsement by the agency. Foreword The U.S. Environmental Protection Agency is charged by Congress with protecting the Nation's land, air, and water resources. Under a mandate of national environmental laws, the Agency strives to formulate and implement actions leading to a compatible balance between human activities and the ability of natural systems to support and nurture life. To meet this mandate, EPA's research program is providing data and technical support for solving environmental problems today and building a science knowledge base necessary to manage our ecological resources wisely, understand how pollutants affect our health, and prevent or reduce environmental risks in the future. The National Risk Management Research Laboratory is the Agency's center for investigation of technological and management approaches for reducing risks from threats to human health and the environment. The focus of the Laboratory's research program is on methods for the prevention and control of pollution to air, land, water and subsurface resources; protection of water quality in public water systems; remediation of contaminated sites and ground water; and prevention and control of indoor air pollution. The goal of this research effort is to catalyze development and implementation of innovative, cost-effective environmental technologies;

PDG Identifiers are references to items of PDG data such as particles, particle properties, decay modes and review articles. Once defined, a PDG Identifier is guaranteed to not change and can thus be used in other systems as a permanent reference to PDG data. Note that although the meaning of a given PDG Identifier will not change, there is no guarantee that the corresponding data will be included into future editions of the Review of Particle Physics. Each PDG Identifier consists of a single string without embedded spaces. PDG Identifiers are not case-sensitive. More details on PDG Identifiers can be found in this proposal. Future versions of pdgLive will directly support PDG Identifiers both for viewing and for downloading the data associated with a given PDG Identifier.

The Green-Julg theorem states that K_0^G(B) is isomorphic to K_0(L^1(G,B)) for every compact group G and every G-C*-algebra B. We formulate a generalisation of this result to proper groupoids and Banach algebras and deduce that the Bost assembly map is surjective for proper Banach algebras. On the way, we show that the spectral radius of an element in a C_0(X)-Banach algebra can be calculated from the spectral radius in the fibres.

This thesis presents a proper time dependent measurement of the B{sup 0}{sub d} mixing frequency {Delta}m{sub d} using jet charge and soft lepton flavor tagging in p-{anti p} collisions at {radical}s=1.8 TeV. The measurement uses the inclusive e and {mu} trigger data of the CDF detector from an integrated luminosity of 91 pb{sup -1}. The proper time at decay is measured from a partial reconstruction of the B associated with the trigger lepton. The measurement of {Delta}m{sub d} yields {Delta}m{sub d}=0.50{+-}0.05{+-}0.05 {h_bar} ps{sup -1} where the first error is statistical and the second systematic. The flavor tagging methods used give a measured effective efficiency {epsilon}D{sup 2} of - Jet Charge: {epsilon}D{sup 2} = (0.78 {+-} 0.12 {+-} 0.09) % - Soft Lepton: {epsilon}D{sup 2} = (1.07 {+-} 0.09 {+-} 0.10) % where the first error is statistical and the second systematic.

PIER Glossary · An acronym table is provided as a "starting point" for determining the proper term. The reader can then look up the term in the glossary. · Note that "synonyms" are not usually considered. [Acronym-table fragment: TC, Tailored Collaboration; TSD, Technology Systems Division]

Regulatory concerns over the proper characterization of certain waste streams led CH2M HILL Plateau Remediation Company (CHPRC) to develop written guidance for personnel involved in Decontamination & Decommissioning (D&D) activities, facility management and Waste Management Representatives (WMRs) involved in the designation of wastes for disposal on and off the Hanford Site. It is essential that these waste streams regularly encountered in D&D operations are properly designated, characterized and classified prior to shipment to a Treatment, Storage or Disposal Facility (TSDF). Shipments of waste determined by the classification process as Low Specific Activity (LSA) or Surface Contaminated Objects (SCO) must also be compliant with all applicable U.S. Department of Transportation (DOT) regulations as well as Department of Energy (DOE) orders. The compliant shipment of these waste commodities is critical to the Hanford Central Plateau cleanup mission. Due to previous problems and concerns from DOE assessments, CHPRC internal critiques as well as DOT, a management decision was made to develop written guidance and procedures to assist CHPRC shippers and facility personnel in the proper classification of D&D waste materials as either LSA or SCO. The guidance provides a uniform methodology for the collection and documentation required to effectively characterize, classify and identify candidate materials for shipping operations. A primary focus is to ensure that waste materials generated from D&D and facility operations are compliant with the DOT regulations when packaged for shipment. At times this can be difficult as the current DOT regulations relative to the shipment of LSA and SCO materials are often not clear to waste generators. Guidance is often sought from NUREG 1608/RAMREG-003 [3], a guidance document that was jointly developed by the DOT and the Nuclear Regulatory Commission (NRC) and published in 1998.
However, NUREG 1608 [3] is now thirteen years old and requires updating to comply with the newer DOT regulations. Similar challenges arise throughout the nuclear industry in both commercial and government operations; this is therefore not only a Hanford Site problem. Shipping radioactive waste as either LSA or SCO is significantly cheaper than shipping under other DOT radioactive materials classifications, particularly when the cost of packagings is included. Additionally, repackaging waste materials into DOT 7A Type A containers for transport can often increase worker exposure.

In the context of quantum information theory, "quantization" of various mathematical and computational constructions is said to occur upon the replacement, at various points in the construction, of the classical randomization notion of probability distribution with higher order randomization notions from quantum mechanics such as quantum superposition with measurement. For this to be done "properly", a faithful copy of the original construction is required to exist within the new "quantum" one, just as is required when a function is extended to a larger domain. Here procedures for extending history dependent Parrondo games, Markov processes and multiplexing circuits to their "quantum" versions are analyzed from a game theoretic viewpoint, and from this viewpoint, proper quantizations developed.

A dramatic increase in house fires caused by wood-burning appliances has accompanied the rediscovery of wood as an alternative heating fuel. The National Bureau of Standards attributed the majority of these fires to conditions related to the installation, operation or maintenance of the appliances rather than malfunctions or construction defects. This publication presents guidelines for the proper installation, use, and maintenance of wood-burning appliances in the home. (DMC)

We identify an NLS within herpes simplex virus scaffold proteins that is required for optimal nuclear import of these proteins into infected or uninfected nuclei, and is sufficient to mediate nuclear import of GFP. A virus lacking this NLS replicated to titers reduced by 1000-fold, but was able to make capsids containing both scaffold and portal proteins, suggesting that other functions can complement the NLS in infected cells. We also show that Vp22a, the major scaffold protein, is sufficient to mediate the incorporation of portal protein into capsids, whereas proper portal immunoreactivity in the capsid requires the larger scaffold protein pU{sub L}26. Finally, capsid angularization in infected cells did not require the HSV-1 protease unless full-length pU{sub L}26 was expressed. These data suggest that the HSV-1 portal undergoes conformational changes during capsid maturation, and reveal that full-length pU{sub L}26 is required for this conformational change.

The physical interpretation of the Davisson-Germer experiments on nickel (Ni) single crystals [(111), (100), and (110) surfaces] is presented in terms of two-dimensional (2D) Bragg scattering. The Ni surface acts as a reflective diffraction grating when the incident electron beam hits the surface. The 2D Bragg reflection occurs when the Ewald sphere intersects the Bragg rods arising from the two-dimensional character of the system. Such a concept is essential to a proper understanding of the Davisson-Germer experiment in an undergraduate modern physics course.
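As a back-of-the-envelope check of this surface-grating picture, one can reproduce the classic Davisson-Germer numbers: 54 eV electrons on Ni(111) give a first-order peak near 50°. A minimal sketch (the 2.15 Å atomic-row spacing and 54 V accelerating potential are the standard textbook values, not taken from this record):

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
Q_E = 1.602176634e-19   # elementary charge, C

def de_broglie_wavelength(volts):
    """Non-relativistic de Broglie wavelength (in m) of an electron
    accelerated through a potential difference of `volts` volts."""
    return H / math.sqrt(2.0 * M_E * Q_E * volts)

# Textbook Davisson-Germer numbers: 54 eV electrons on Ni(111),
# surface atomic-row spacing d of about 2.15 Angstrom.
lam = de_broglie_wavelength(54.0)
d = 2.15e-10
# First-order (n = 1) condition for a reflective surface grating:
# d * sin(theta) = lambda.
theta = math.degrees(math.asin(lam / d))
print(f"lambda = {lam * 1e10:.2f} A, first-order peak at {theta:.0f} deg")
```

The roughly 51° first-order angle this yields is close to the 50° maximum Davisson and Germer observed at 54 V, which is the point of the 2D Bragg interpretation.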

We investigate in detail the probability distribution function (pdf) of the proper-motion measurement errors in the SDSS+USNO-B proper-motion catalog of Munn et al. using clean quasar samples. The pdf of the errors is well represented by a Gaussian core with extended wings, plus a very small fraction (<0.1%) of 'outliers'. We find that while formally the pdf could be well fit by a five-parameter fitting function, for many purposes it is also adequate to represent the pdf with a one-parameter approximation to this function. We apply this pdf to the calculation of confidence intervals on the true proper motion for an SDSS+USNO-B proper-motion measurement, and discuss several scientific applications of the SDSS proper-motion catalog. Our results have various applications in studies of Galactic structure and stellar kinematics; specifically, they are crucial for searches for hypervelocity stars in the Galaxy.

There is on-going interest in the application of adaptive fuzzy model-based predictive control techniques, which attempt to formulate and solve the control problem when the systems are uncertain and non-linear. This paper proposes a computationally efficient ... Keywords: Adaptive control, Air-conditioning system, Fuzzy control, Fuzzy relations, Fuzzy system models

An apparatus and method are described wherein a sensor, such as a mechanical strain sensor, embedded in a fiber core, is "flagged" to identify a preferred orientation of the sensor. The identifying "flag" is a composite material, comprising a plurality of non-woven filaments distributed in a resin matrix, forming a small planar tab. The fiber is first subjected to a stimulus to identify the orientation providing the desired signal response, and then sandwiched between first and second layers of the composite material. The fiber, and therefore the sensor orientation, is thereby captured and fixed in place. The process for achieving the oriented fiber includes, after identifying the fiber orientation, carefully laying the oriented fiber onto the first layer of composite, moderately heating the assembled layer for a short period in order to bring the composite resin to a "tacky" state, heating the second composite layer in the same manner as the first, and assembling the two layers together such that they merge to form a single consolidated block. The consolidated block achieves a roughly uniform distribution of composite filaments near the embedded fiber, such that excess resin is prevented from "pooling" around the periphery of the fiber.

In current microlensing experiments, the information about the physical parameters of individual lenses is obtained from the Einstein timescales. However, the nature of MACHOs is still very uncertain despite the large number of detected events. This uncertainty is mainly due to the degeneracy of the lens parameters in the measured Einstein timescales. The degeneracy can be lifted in a general fashion if the angular Einstein ring radius $\theta_{\rm E}$, and thus the MACHO proper motion, can be measured by conducting accurate astrometric measurements of centroid displacement in the source star image. In this paper, we analyze the influence of bright lenses on the astrometric measurements of the centroid displacement and investigate this effect on the determination of $\theta_{\rm E}$. We find that if an event is caused by a bright lens, the centroid displacement is distorted by the flux of the lens and the resulting astrometric ellipse becomes rounder and smaller with increasing lens brightness, causing an incorr...

These guidelines are intended to assist users of products in identifying substandard, misrepresented, or fraudulently marked items. The guidelines provide information about such topics as precautions, inspection and testing, dispositioning identified items, inspection of installed items, and reporting suspect/counterfeit materials. These guidelines apply to users who are developing procurement documents, product acceptance/verification methods, company procedures, work instructions, etc. The intent of these SM guidelines in relation to the Quality Assurance Program Description (QAPD) and implementing company Management Control Procedures is not to substitute for or replace existing requirements, as defined in either the QAPD or company implementing instructions (Management Control Procedures). Instead, the guidelines are intended to provide a consolidated source of information addressing the issue of suspect/counterfeit materials. The guidelines provide an extensive suspect component listing and a suspect indications listing. Users can quickly check their suspect items against the list of manufacturers' products (i.e., type, I.D. number, and nameplate information) by consulting either of these listings.

A survey of 73 wells in California's vintage South Belridge field indicated numerous casing leaks concentrated at 200-ft and 400- to 500-ft depths. Cathodic protection methods could not be used, and it was necessary to establish the precise causes of corrosion in order to develop techniques to control it. Casing was retrieved from two wells and, after thorough lab analysis, it was concluded that shallow-zone corrosion was triggered by oxygen in surrounding soil and that deep-zone corrosion was the result of CO{sub 2} in formation water. Prevention depends upon more reliable isolation of casing from the formation with better cementing methods and longer conductor pipe.

We systematically study the first three terms in the asymptotic expansions of the moments of the transmission eigenvalues and proper delay times as the number of quantum channels n in the leads goes to infinity. The computations are based on the assumption that the Landauer-Büttiker scattering matrix for chaotic ballistic cavities can be modelled by the circular ensembles of Random Matrix Theory (RMT). The starting points are the finite-n formulae that we recently discovered (Mezzadri and Simm, J. Math. Phys. 52 (2011), 103511). Our analysis includes all the symmetry classes beta=1,2,4; in addition, it applies to the transmission eigenvalues of Andreev billiards, whose symmetry classes were classified by Zirnbauer (J. Math. Phys. 37 (1996), 4986-5018) and Altland and Zirnbauer (Phys. Rev. B. 55 (1997), 1142-1161). Where applicable, our results are in complete agreement with the semiclassical theory of mesoscopic systems developed by Berkolaiko et al. (J. Phys. A.: Math. Theor. 41 (2008), 365102) and Berkolaiko and Kuipers (J. Phys. A: Math. Theor. 43 (2010), 035101 and New J. Phys. 13 (2011), 063020). Our approach also applies to the Selberg-like integrals. We calculate the first two terms in their asymptotic expansion explicitly.

Electrical impedance tomography (EIT) is an imaging modality in which the conductivity distribution inside a target is reconstructed based on voltage measurements from the surface of the target. Reconstructing the conductivity distribution is known to be an ill-posed inverse problem, the solutions of which are highly intolerant to modelling errors. In order to achieve sufficient accuracy, very dense meshes are usually needed in a finite element approximation of the EIT forward model. This leads to very high-dimensional problems and often unacceptably tedious computations for real-time applications. In this paper, the model reduction in EIT is considered within the Bayesian inversion framework. We construct the reduced-order model by proper orthogonal decompositions (POD) of the electrical conductivity and the potential distributions. The associated POD modes are computed based on a priori information on the conductivity. The feasibility of the reduced-order model is tested both numerically and with experimental data. In the selected test cases, the proposed model reduction approach speeds up the computation by more than two orders of magnitude in comparison with the conventional EIT reconstruction, without decreasing the quality of the reconstructed images significantly.

We present an updated analysis of the M31 pixel lensing candidate event OAB-N2 previously reported by Calchi Novati et al. Here we take advantage of new data both astrometrical and photometrical. For astrometry: using archival 4 m KPNO and Hubble Space Telescope/WFPC2 data we perform a detailed analysis of the event source whose result, although not fully conclusive on the source magnitude determination, is confirmed by the following light curve photometry analysis. For photometry: first, unpublished WeCAPP data allow us to confirm OAB-N2, previously reported only as a viable candidate, as a well-constrained pixel lensing event. Second, this photometry enables a detailed analysis in the event parameter space including the effects due to a finite source size. The combined results of these analyses allow us to put a strong lower limit on the lens proper motion. This outcome favors the MACHO lensing hypothesis over self-lensing for this individual event and points the way toward distinguishing between the MACHO and self-lensing hypotheses from larger data sets.

This paper presents 442 new proper motion stellar systems in the southern sky between declinations -90° and -47° with 0.''40 yr{sup -1} > {mu} {>=} 0.''18 yr{sup -1}. These systems constitute a 25.3% increase in new systems for the same region of the sky covered by previous SuperCOSMOS RECONS (SCR) searches that used Schmidt plates as the primary source of discovery. Among the new systems are 25 multiples, plus an additional 7 new common proper motion (CPM) companions to previously known primaries. All stars have been discovered using the third U.S. Naval Observatory (USNO) CCD Astrograph Catalog (UCAC3). A comparison of the UCAC3 proper motions to those from the Hipparcos, Tycho-2, Southern Proper Motion (SPM4), and SuperCOSMOS efforts is presented and shows that UCAC3 provides similar values and precision to the first three surveys. The comparison between UCAC3 and SuperCOSMOS indicates that proper motions in R.A. are systematically shifted in the SuperCOSMOS data but are consistent in decl. data, while overall showing a significantly higher scatter. Distance estimates are derived for stars having SuperCOSMOS Sky Survey B{sub J}, R{sub 59F}, and I{sub IVN} plate magnitudes and Two-Micron All Sky Survey infrared photometry. We find 15 systems estimated to be within 25 pc, including UPM 1710-5300, our closest new discovery, estimated at 13.5 pc. Such new discoveries suggest that more nearby stars are yet to be found in these slower proper motion regimes, indicating that more work is needed to develop a complete map of the solar neighborhood.
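Proper motion and distance estimates like those above combine through the standard relation v_t = 4.74 μ d (v_t in km/s, μ in arcsec/yr, d in pc). A minimal sketch, plugging in the survey's lower proper-motion cut and the distance estimate quoted for UPM 1710-5300:

```python
def tangential_velocity(mu_arcsec_yr, distance_pc):
    """Tangential velocity in km/s from a proper motion (arcsec/yr)
    and a distance (pc); the factor 4.74 is 1 AU/yr expressed in km/s."""
    return 4.74 * mu_arcsec_yr * distance_pc

# Survey window quoted above: 0.18 <= mu < 0.40 arcsec/yr.
# At the lower cut, the closest new discovery (UPM 1710-5300,
# estimated at 13.5 pc) would move at roughly:
v = tangential_velocity(0.18, 13.5)
print(f"{v:.1f} km/s")
```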

We present optical and infrared spectra as well as the proper motion of an H=12 mag object 2" off the ~5 mag brighter spectroscopic binary star CoD-33 7795 (=TWA-5), a member of the TW Hya association of T Tauri stars at ~55 pc. It was suggested as a companion candidate by Lowrance et al. (1999) and Webb et al. (1999), but neither a spectrum nor the proper motion of the faint object was available before. Our spectra taken with FORS2 and ISAAC at the ESO-VLT reveal that the companion candidate has spectral type M8.5 to M9. It shows strong H-alpha emission and weak Na I absorption, both indicative of a young age. The faint object is clearly detected and resolved in our optical and infrared images, with a FWHM of 0.18" in the FORS2 image. The faint object's proper motion, based on a two-year epoch difference, is consistent with the proper motion of CoD-33 7795 at 5 Gaussian sigma significance. From three different theoretical pre-main sequence models, we estimate the companion mass to be between ~15 and 40 M_jup, assuming the distance and age of the primary. A slight offset between the VLT and HST images with an epoch difference of two years can be interpreted as orbital motion. The probability for chance alignment of such a late-type object that close to CoD-33 7795 with the correct proper motion is below 7e-9. Hence, the faint object is physically associated with CoD-33 7795, the fourth brown dwarf companion around a normal star confirmed by both spectrum and proper motion, and the first around a pre-main sequence star.

AIMS: We contribute to improving the census of cool brown dwarfs (late-T and Y dwarfs) in the immediate solar neighbourhood. METHODS: By combining near-infrared (NIR) data of UKIDSS with mid-infrared WISE and other available NIR (2MASS) and red optical (SDSS $z$-band) multi-epoch data we detect high proper motion (HPM) objects with colours typical of late spectral types ($>$T5). We use NIR low-resolution spectroscopy for the classification of new candidates. RESULTS: We determined new proper motions for 14 known T5.5-Y0 dwarfs, many of them being significantly ($>$2-10 times) more accurate than previous ones. We detected three new candidates, ULAS J0954+0623, ULAS J1152+0359, and ULAS J1204-0150, by their HPMs and colours. Using previously published and new UKIDSS positions of the known nearby T8 dwarf WISE J0254+0223 we improved its trigonometric parallax to 165$\pm$20 mas. For the three new objects we obtained NIR spectroscopic follow-up with LBT/LUCIFER classifying them as T5.5 and T6 dwarfs. With their es...

Identifying Transformer Incipient Events for Maintaining Distribution System Reliability (Karen L. ...). This analysis of incipient events in single-phase distribution transformers will aid in the development of an automatic detection method for internal incipient faults in the transformers. The detection method can ...

This paper analyses French locative prepositional phrases containing a location proper name Npr (e.g. Méditerranée) and its associated classifier Nc (e.g. mer). The (Nc, Npr) pairs are formally described with the aid of elementary sentences. We study ...

We introduce an adaptive POD method to reduce the computational cost of reacting flow simulations. The scheme is coupled with an operator-splitting algorithm to solve the reaction-diffusion equation. For the reaction sub-steps, locally valid basis vectors, obtained via POD and the method of snapshots, are used to project the minor species mass fractions onto a reduced dimensional space thereby decreasing the number of equations that govern combustion chemistry. The method is applied to a one-dimensional laminar premixed CH{sub 4}-air flame using GRImech 3.0; with errors less than 0:25%, a speed-up factor of 3:5 is observed. The speed-up results from fewer source term evaluations required to compute the Jacobian matrices.
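The method of snapshots mentioned above builds the POD basis from the small m×m snapshot correlation matrix rather than the full n×n one. A minimal, generic sketch of that step (synthetic data standing in for the species-mass-fraction snapshots; this is not the paper's GRImech 3.0 setup):

```python
import numpy as np

def pod_basis(snapshots, r):
    """POD basis via the method of snapshots.

    snapshots: (n, m) array with one state vector per column.
    Returns an (n, r) orthonormal basis for the r leading modes,
    computed from the small (m, m) correlation matrix instead of
    the large (n, n) one."""
    C = snapshots.T @ snapshots
    lam, V = np.linalg.eigh(C)            # eigenvalues ascending
    lam, V = lam[::-1], V[:, ::-1]        # reorder descending
    return snapshots @ V[:, :r] / np.sqrt(lam[:r])

# Synthetic snapshots confined to a 3-dimensional subspace of a
# 100-dimensional state space (a stand-in for species profiles).
rng = np.random.default_rng(0)
Y = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 40))
phi = pod_basis(Y, 3)
# Projecting onto the basis reconstructs the snapshots essentially exactly:
err = np.linalg.norm(Y - phi @ (phi.T @ Y)) / np.linalg.norm(Y)
print(f"relative reconstruction error: {err:.1e}")
```

In the reacting-flow setting, the governing equations for the minor species would then be projected onto `phi`, reducing the number of chemistry equations per sub-step.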

IDENTIFYING FRACTURES AND FLUID TYPES USING FLUID INCLUSION STRATIGRAPHY (Conference Proceedings). Abstract: Fluid Inclusion Stratigraphy (FIS) is a method currently being developed for use in geothermal systems to identify fractures and fluid types. This paper is the third in a series of papers on the development of FIS. Fluid inclusion gas chemistry is analyzed and plotted on well log diagrams. The working hypothesis is that select gaseous species and species ratios indicate areas of groundwater and reservoir fluid flow and reservoir seals. Previously we showed that FIS analyses identify fluid types and

(abridged) Deep multi-epoch Sloan Digital Sky Survey data in a 275 square degrees area along the celestial equator (SDSS stripe 82 = S82) allowed us to search for extremely faint ($i>21$) objects with proper motions larger than 0.14 arcsec/yr. We classify 38 newly detected objects with low-resolution optical spectroscopy using FORS1 @ ESO VLT. All 22 previously known L dwarfs in S82 have been detected in our high proper motion survey. However, 11 of the known L dwarfs have smaller proper motions (0.01$$sdM7) subdwarfs. Some M subdwarf candidates have been classified based on spectral indices with large uncertainties. We failed to detect new nearby ($d<50$ pc) L dwarfs, probably because the S82 area was already well-investigated before. With our survey we have demonstrated a higher efficiency in finding Galactic halo CWDs than previous searches. The space density of halo CWDs is according to our results about 1.5-3.0 $\times$ 10$^{-5}$ pc$^{-3}$.

Based on the Ogorodnikov-Milne model, we analyze the proper motions of Tycho-2 and UCAC2 stars. We have established that the model component that describes the rotation of all stars under consideration around the Galactic y axis differs significantly from zero at various magnitudes. We interpret this rotation found using the most distant stars as a residual rotation of the ICRS/Tycho-2 system relative to the inertial reference frame. For the most distant ($d\approx900$ pc) Tycho-2 and UCAC2 stars, the mean rotation around the Galactic y axis has been found to be $M_{13}=-0.37\pm0.04$ mas yr$^{-1}$. The proper motions of UCAC2 stars with magnitudes in the range $12-15^m$ are shown to be distorted appreciably by the magnitude equation in $\mu_\alpha\cos\delta$, which has the strongest effect for northern-sky stars with a coefficient of $-0.60\pm0.05$ mas yr$^{-1}$ mag$^{-1}$. We have detected no significant effect of the magnitude equation in the proper motions of UCAC2 stars brighter than $\approx11^m$.

A solid tag material which generates stable, detectable, identifiable, and measurable isotopic gases on exposure to a neutron flux, to be placed in a nuclear reactor component, particularly a fuel element, in order to identify the reactor component in the event of its failure. Several tag materials consisting of salts which generate a multiplicity of gaseous isotopes in predetermined ratios are used to identify different reactor components.

In the context of the current financial crisis, when more companies are facing bankruptcy or insolvency, the paper aims to find methods to identify distressed firms by using financial ratios. The study will focus on identifying a group of Romanian listed companies, for which financial data for the year 2008 were available. For each company a set of 14 financial indicators was calculated and then used in a principal component analysis, followed by a cluster analysis, a logit model, and a CHAID classification tree.
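The first stage of such a pipeline (standardize the ratio matrix, then extract the leading principal components to feed into the cluster analysis, logit model, or CHAID tree) can be sketched as follows. The 50×14 matrix here is synthetic, standing in for the paper's 14 financial indicators, which are not reproduced in this record:

```python
import numpy as np

def principal_component_scores(X, k):
    """Standardize the indicator matrix X (firms x ratios) and project
    it onto its k leading principal components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    cov = np.cov(Z, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:k]   # largest-variance directions
    return Z @ eigvec[:, order]

# Synthetic stand-in for the study's data: 50 firms x 14 ratios.
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 14))
scores = principal_component_scores(X, 3)
print(scores.shape)
```

The resulting uncorrelated scores are what the subsequent clustering or classification step would consume.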

Prototype Laser Weather Identifier (LWI) systems designed to detect fog, rain and snow were tested for several months at Stapleton International Airport in Denver, and at the AFGL Weather Test Facility at Otis Air Force Base, Massachusetts. We ...

We have obtained radial velocity measurements for stars in two, widely-separated fields in the Anticenter Stream. Combined with SDSS/USNO-B proper motions, the new measurements allow us to establish that the stream is on a nearly circular, somewhat inclined, prograde orbit around the Galaxy. While the orbital eccentricity is similar to that previously determined for the Monoceros stream, the sizes, inclinations, and positions of the orbits for the two systems differ significantly. Integrating our best fitting Anticenter Stream orbit forward, we find that it is closely aligned along and lies almost on top of a stream-like feature previously designated the "Eastern Banded Structure". The position of this feature coincides with the apogalacticon of the orbit. We tentatively conclude that this feature is the next wrap of the Anticenter Stream.

Identify the Problem: Reduce Waste By Banning Plastic Bag Use. Define Goal: Is the ban the most ... The 2008 EPA report asserts that while paper waste has remained relatively constant at approximately 31%, plastic waste has risen from 0.4% in 1960 to the present value of 12%. San Francisco sets the goal ...

A finely detailed diffraction grating is applied to an object as an identifier or tag which is unambiguous, difficult to duplicate or to remove and transfer to another item, and can be read and compared with prior readings with relative ease. The exact pattern of the diffraction grating is mapped by diffraction moire techniques and recorded for comparison with future readings of the same grating.

Recent demands for radiation detector materials with better energy resolution at room temperature have prompted research efforts on both accelerated material discovery and efficient analysis techniques. Ions can easily deposit their energy in thin films or small crystals and the radiation response can be used to identify material properties relevant to detector performance. In an effort to identify gamma detector candidates using small crystals or film samples, an ion technique is developed to measure relative light yield and energy resolution of candidate materials and to evaluate radiation detection performance. Employing a unique time-of-flight (TOF) telescope, light yield and energy resolution resulting from ion excitation are investigated over a continuous energy region. The efficiency of this ion technique is demonstrated using both organic (plastic scintillator) and inorganic (CaF2:Eu, YAP:Ce, CsI:Tl and BGO) scintillators.

Personally Identifiable Information: PII is any information about an individual which can be used to distinguish or trace an individual's identity. PII is categorized as either Public PII or Protected PII. Public PII is available in public sources such as telephone books, public websites, business cards, university listings, etc. Public PII does not require redaction prior to document submission to OSTI. Some common examples of Public PII include: first and last name; address; work telephone number; e-mail address; home telephone number; and general educational credentials (e.g., those credentials typically found in resumes). Protected PII is defined as an individual's first name or first initial

The construction of stable reduced order models using Galerkin projection for the Euler or Navier-Stokes equations requires a suitable choice for the inner product. The standard L2 inner product is expected to produce unstable ROMs. For the non-linear Navier-Stokes equations this means the use of an energy inner product. In this report, Galerkin projection for the non-linear Navier-Stokes equations using the L2 inner product is implemented as a first step toward constructing stable ROMs for this set of physics.
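A Galerkin ROM for a linear system dx/dt = Ax amounts to projecting A onto the basis in the chosen inner product; the inner-product weight M is exactly where an energy inner product would differ from the plain L2 choice. A minimal, generic sketch (this is not the report's Navier-Stokes implementation):

```python
import numpy as np

def galerkin_rom(A, phi, M=None):
    """Project the linear system dx/dt = A x onto the basis phi (n x r)
    by Galerkin projection in the inner product <u, v> = u^T M v.
    M=None selects the standard L2 inner product (M = I)."""
    if M is None:
        M = np.eye(A.shape[0])
    gram = phi.T @ M @ phi
    return np.linalg.solve(gram, phi.T @ M @ A @ phi)

# Sanity check: if the basis spans an A-invariant subspace, the ROM
# reproduces the full dynamics restricted to that subspace.
n, r = 20, 4
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))   # orthonormal basis
A = Q @ rng.standard_normal((r, r)) @ Q.T          # dynamics live in span(Q)
Ar = galerkin_rom(A, Q)
x0 = Q @ rng.standard_normal(r)
resid = np.linalg.norm(A @ x0 - Q @ (Ar @ (Q.T @ x0)))
print(f"restriction residual: {resid:.1e}")
```

For the non-linear Navier-Stokes case discussed above, the same projection is applied to the non-linear terms as well, and the stability of the result hinges on swapping M from the identity to an energy-based weight.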

A finely detailed diffraction grating is applied to an object as an identifier or tag which is unambiguous, difficult to duplicate or to remove and transfer to another item, and can be read and compared with prior readings with relative ease. The exact pattern of the diffraction grating is mapped by diffraction moire techniques and recorded for comparison with future readings of the same grating. 7 figures.

A finely detailed diffraction grating is applied to an object as an identifier or tag which is unambiguous, difficult to duplicate or to remove and transfer to another item, and can be read and compared with prior readings with relative ease. The exact pattern of the diffraction grating is mapped by diffraction moire techniques and recorded for comparison with future readings of the same grating. 7 figs.

The Energy Productivity Center of the Mellon Institute is engaged in a 2-year study to identify opportunities for improved U.S. industrial energy productivity. A distinguishing feature is the focus on energy services provided when fuels are consumed. The paper describes the Center's Least-Cost Energy Strategy, the Industrial Energy Productivity Project, and presents least-cost results for 1978 and for energy markets over the next two decades.

A system for interrogating electrical leads to correctly ascertain the identity of equipment attached to remote ends of the leads. The system includes a source of a carrier signal generated in a controller/receiver to be sent over the leads and an identifier unit at the equipment. The identifier is activated by command of the carrier and uses a portion of the carrier to produce a supply voltage. Each identifier is uniquely programmed for a specific piece of equipment, and causes the impedance of the circuit to be modified whereby the carrier signal is modulated according to that program. The modulation can be amplitude, frequency or phase modulation. A demodulator in the controller/receiver analyzes the modulated carrier signal, and if a verified signal is recognized displays and/or records the information. This information can be utilized in a computer system to prepare a wiring diagram of the electrical equipment attached to specific leads. Specific circuit values are given for amplitude modulation, and the system is particularly described for use with thermocouples.

A system for interrogating electrical leads to correctly ascertain the identity of equipment attached to remote ends of the leads is disclosed. The system includes a source of a carrier signal generated in a controller/receiver to be sent over the leads and an identifier unit at the equipment. The identifier is activated by command of the carrier and uses a portion of the carrier to produce a supply voltage. Each identifier is uniquely programmed for a specific piece of equipment, and causes the impedance of the circuit to be modified whereby the carrier signal is modulated according to that program. The modulation can be amplitude, frequency or phase modulation. A demodulator in the controller/receiver analyzes the modulated carrier signal, and if a verified signal is recognized displays and/or records the information. This information can be utilized in a computer system to prepare a wiring diagram of the electrical equipment attached to specific leads. Specific circuit values are given for amplitude modulation, and the system is particularly described for use with thermocouples. 6 figs.
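The scheme these two records describe encodes a per-device identifier onto the carrier by modulating the circuit. As a software stand-in for that impedance-modulation hardware (all signal parameters here are illustrative assumptions, not values from the patent), amplitude modulation of an ID bit pattern can be sketched as:

```python
import math

def am_encode(bits, carrier_hz=1000.0, rate_hz=8000.0, bit_s=0.01):
    """Amplitude-modulate a carrier with an identifier bit pattern:
    full amplitude for a 1 bit, reduced amplitude for a 0 bit."""
    samples = []
    per_bit = int(rate_hz * bit_s)
    for bit in bits:
        amp = 1.0 if bit else 0.4
        for _ in range(per_bit):
            t = len(samples) / rate_hz
            samples.append(amp * math.sin(2.0 * math.pi * carrier_hz * t))
    return samples

def am_decode(samples, rate_hz=8000.0, bit_s=0.01, threshold=0.7):
    """Recover the bit pattern from the per-bit peak amplitude,
    mimicking the demodulator in the controller/receiver."""
    per_bit = int(rate_hz * bit_s)
    bits = []
    for i in range(0, len(samples), per_bit):
        peak = max(abs(s) for s in samples[i:i + per_bit])
        bits.append(1 if peak > threshold else 0)
    return bits

ident = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical equipment identifier
decoded = am_decode(am_encode(ident))
print(decoded == ident)
```

The patent also allows frequency or phase modulation; only the amplitude variant is sketched here.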

This paper deals with the modelling and development of computational schemes to simulate pultrusion processes. Two different computational methods, finite differences and elements, are properly developed and critically analyzed. The methods are applied ... Keywords: Degree of cure, Finite difference method, Finite element method, Numerical modelling, Pultrusion, Temperature

We have developed methods to identify online communities, or groups, using a combination of structural information variables and content information variables from weblog posts and their comments to build a characteristic footprint for groups. We have worked with both explicitly connected groups and 'abstract' groups, in which the connection between individuals lies in interest (as determined by content-based features) and behavior (metadata-based features) as opposed to explicit links. We find that these variables do a good job of identifying groups, placing members within a group, and helping determine the appropriate granularity for group boundaries. The group footprint can then be used to identify differences between the online groups. In the work described here we are interested in determining how an individual's online behavior is influenced by their membership in more than one group. For example, individuals belong to a certain culture; they may belong as well to a demographic group, and to other 'chosen' groups such as churches or clubs. There is a plethora of evidence surrounding the culturally sensitive adoption, use, and behavior on the Internet. In this work we begin to investigate how culturally defined internet behaviors may influence the behaviors of subgroups. We do this through a series of experiments in which we analyze the interaction between culturally defined behaviors and the behaviors of the subgroups. Our goal is to (a) identify if our features can capture cultural distinctions in internet use, and (b) determine what kinds of interaction there are between levels and types of groups.

Identifying Opportunities for Low-Carbon Supply Chains. Speaker(s): Eric Masanet. Date: April 11, 2011 - 1:30pm. Location: 90-3075. Seminar Host/Point of Contact: Barbara Adams. There is growing interest in the development of tools and methods for calculating the supply chain energy and carbon "footprints" associated with products and services. Much of the activity has been in response to "low carbon" product reporting mandates by large global retailers, such as Wal-Mart and Tesco. However, relatively little attention has been paid to the development of models that allow decision makers to assess realistic opportunities for reducing such footprints once they've been established. This presentation will provide an overview of a new supply chain energy use

We characterize the star KOI 961, an M dwarf with transit signals indicative of three short-period exoplanets discovered by the Kepler mission. We proceed by comparing KOI 961 to Barnard's Star, a nearby, well-characterized mid-M dwarf. We compare colors and optical and near-infrared spectra, and find remarkable agreement between the two, implying similar effective temperatures and metallicities. Both are metal-poor compared to the Solar neighborhood, have low projected rotational velocity, high absolute radial velocity, large proper motion, and no quiescent H-alpha emission, all of which are consistent with being old M dwarfs. We combine empirical measurements of Barnard's Star and expectations from evolutionary isochrones to estimate KOI 961's mass (0.13 ± 0.05 M_Sun), radius (0.17 ± 0.04 R_Sun), and luminosity (2.40 × 10^(-3.0±0.3) L_Sun). We calculate KOI 961's distance (38.7 ± 6.3 pc) and space motions, which, like Barnard's Star, are consistent with a high scale-height population in the Milky Way. We perform an independent multi-transit fit to the public Kepler light curve and significantly revise the transit parameters for the three planets. We calculate the false-positive probability for each planet candidate and find a less than 1% chance that any one of the transiting signals is due to a background or hierarchical eclipsing binary, validating the planetary nature of the transits. The best-fitting radii for all three planets are less than 1 R_Earth, with KOI 961.03 being Mars-sized (R_P = 0.57 ± 0.18 R_Earth), and they represent some of the smallest exoplanets detected to date.

Internal material control issue. Los Alamos identifies internal material control issue. The error relates to internal inventory and accounting that documents movement of sensitive materials within a small portion of Technical Area 55. February 26, 2009. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico, with the Jemez mountains as a backdrop to research and innovation spanning disciplines from bioscience and sustainable energy sources to plasma physics and new materials.

This report identifies six distinct Aid to Families with Dependent Children (AFDC) regions. Among the more striking results is the emergence of two regions, Central Cities and Hispanic Rural, with unique patterns of welfare usage and demographic characteristics. Also, rural Minnesota is divided into four separate regions with unique characteristics. This information is intended to help policymakers and others interested in the welfare system better understand the geographic pattern of AFDC recipiency. This report is the first in a series of working papers regarding welfare and welfare reform. This report was prepared by DON HIRASUNA, Legislative Analyst in the House Research Department. Questions may be addressed to DON at 651-296-8038. JULIE FRANTUM

Enhanced Geothermal Systems (EGS) are designed to recover heat from the subsurface by mechanically creating fractures in subsurface rocks. Understanding the life cycle of a fracture in a geothermal system is fundamental to the development of techniques for creating fractures. Recognizing the stage of a fracture (whether it is currently open and transmitting fluids, has recently closed, or is an ancient fracture) would assist in targeting areas for further fracture stimulation. Identifying dense fracture areas, and distinguishing large open fractures from small fracture systems, will also assist in fracture stimulation selection. Geothermal systems are constantly generating fractures, and fluids and gases passing through rocks in these systems leave small fluid and gas samples trapped in healed microfractures. Fluid inclusions trapped in minerals as the fractures heal are characteristic of the fluids that formed them, and this signature can be seen in fluid inclusion gas analysis. Our hypothesis is that fractures have different chemical signatures over their life cycle, visible in fluid inclusion gas analysis, and that by using the new method of fluid inclusion stratigraphy (FIS) the different stages of fractures, along with an estimate of fracture size, can be identified during the well drilling process. We have shown with this study that it is possible to identify fracture locations using FIS and that different fractures have different chemical signatures; however, that signature is somewhat dependent upon rock type. Open, active fractures correlate with increased concentrations of CO2, N2, Ar, and to a lesser extent H2O. These fractures would be targets for further enhancement. The usefulness of this method is that it is a low-cost alternative to current well logging techniques and can be applied as a well is being drilled.

This report will discuss strategies available to address identified gaps and weaknesses in education efforts aimed at preparing a skilled and properly trained national security workforce. The need to adequately train and educate a national security workforce is at a critical juncture. Even though there are an increasing number of college graduates in the appropriate fields, many of these graduates choose to work in the private sector because of more desirable salary and benefit packages. This contributes to an inability to fill vacant positions at NNSA resulting from high personnel turnover driven by the large number of retirements. Further, many of the retirees are practically irreplaceable because they are Cold War scientists who have experience and expertise with nuclear weapons.

Wind and solar power are playing an increasing role in the electrical grid, but their inherent power variability can augment uncertainties in power system operations. One solution to help mitigate the impacts and provide more flexibility is enhanced wind and solar power forecasting; however, its relative utility is also uncertain. Within the variability of solar and wind power, repercussions from large ramping events are of primary concern. At the same time, there is no clear definition of what constitutes a ramping event, with various criteria used in different operational areas. Here the Swinging Door Algorithm, originally used for data compression in trend logging, is applied to identify variable generation ramping events from historic operational data. The identification of ramps in a simple and automated fashion is a critical task that feeds into a larger work of 1) defining novel metrics for wind and solar power forecasting that attempt to capture the true impact of forecast errors on system operations and economics, and 2) informing various power system models in a data-driven manner for superior exploratory simulation research. Both allow inference on sensitivities and meaningful correlations, as well as the ability to quantify the value of probabilistic approaches for future use in practice.
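A compact sketch of how the Swinging Door Algorithm can segment a power time series into piecewise-linear runs so that steep segments can be flagged as ramps. The tolerance `epsilon` and the ramp-rate threshold are illustrative parameters, not values from the study.

```python
def swinging_door(times, values, epsilon):
    """Swinging Door compression: return (start, end) index pairs of linear
    segments that stay within +/- epsilon of every sample."""
    segments, start = [], 0
    up, low = float("inf"), float("-inf")   # slopes of the two "doors"
    for i in range(1, len(times)):
        dt = times[i] - times[start]
        up = min(up, (values[i] + epsilon - values[start]) / dt)
        low = max(low, (values[i] - epsilon - values[start]) / dt)
        if low > up:                        # doors crossed: close the segment
            segments.append((start, i - 1))
            start = i - 1                   # new segment hinges at last point
            dt = times[i] - times[start]
            up = (values[i] + epsilon - values[start]) / dt
            low = (values[i] - epsilon - values[start]) / dt
    segments.append((start, len(times) - 1))
    return segments

def find_ramps(times, values, epsilon, min_rate):
    """Flag compressed segments whose mean rate of change exceeds min_rate."""
    ramps = []
    for s, e in swinging_door(times, values, epsilon):
        rate = (values[e] - values[s]) / (times[e] - times[s])
        if abs(rate) >= min_rate:
            ramps.append((s, e, rate))
    return ramps
```

On a flat-rise-flat series, the compression isolates the steep middle run as one segment, which the rate filter then reports as the ramp event, giving the simple, automated identification the paragraph describes.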

(abridged) We describe the discovery of an extremely wide pair of low-mass stars with a common large proper motion and discuss their possible membership in a Galactic halo stream crossing the Solar neighbourhood. (...) The late-type (M7) dwarf SSSPM J2003$-$4433 and the ultracool subdwarf SSSPM J1930$-$4311 (sdM7) sharing the same very large proper motion of about 860 mas/yr were found in the same sky region with an angular separation of about 6\\degr. From the comparison with other high proper motion catalogues we have estimated the probability of a chance alignment of the two new large proper motions to be less than 0.3%. From the individually estimated spectroscopic distances of about $38^{+10}_{-7}$ pc and $72^{+21}_{-16}$ pc, respectively for the M7 dwarf and the sdM7 subdwarf, and in view of the accurate agreement in their large proper motions we assume a common distance of about 50 pc and a projected physical separation of about 5 pc. The mean heliocentric space velocity of the pair $(U,V,W)=(-232, -170, +74)$ km/s, based on the correctness of the preliminary radial velocity measurement for only one of the components and on the assumption of a common distance and velocity vector, is typical of the Galactic halo population. The large separation and the different metallicities of dwarfs and subdwarfs make a common formation scenario as a wide binary (later disrupted) improbable, although there remains some uncertainty in the spectroscopic classification scheme of ultracool dwarfs/subdwarfs so that a dissolved binary origin cannot be fully ruled out yet. It seems more likely that this wide pair is part of an old halo stream. (...)

A real-time gamma-ray signature/source identification method and system using principal components analysis (PCA). One or more comprehensive spectral libraries of nuclear material types and configurations are transformed and substantially reduced into concise representations/signatures, each representing and indexing an individual predetermined spectrum in principal component (PC) space. An unknown gamma-ray signature may then be compared against the representative signatures to find a match, or at least to characterize the unknown among all the entries in the library, with a single regression or simple projection into the PC space, substantially reducing processing time and computing resources and enabling real-time characterization and/or identification.
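A minimal sketch of the library-reduction and single-projection matching idea, using SVD-based PCA on a toy spectral library. The function names and the nearest-signature criterion are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def build_pc_library(spectra, n_components):
    """Reduce a spectral library to concise PC-space signatures via SVD."""
    X = np.asarray(spectra, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    pcs = vt[:n_components]                 # principal component axes
    signatures = (X - mean) @ pcs.T         # one signature per library entry
    return mean, pcs, signatures

def identify(unknown, mean, pcs, signatures):
    """Single projection of the unknown into PC space, then nearest
    library signature; returns the index of the best-matching entry."""
    z = (np.asarray(unknown, dtype=float) - mean) @ pcs.T
    return int(np.argmin(np.linalg.norm(signatures - z, axis=1)))
```

The expensive work (the SVD) happens once when the library is built; each unknown spectrum then costs only one matrix-vector projection plus a nearest-neighbor search, which is what makes the real-time claim plausible.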

This paper describes an approach by which circuit breaker status errors can be detected and identified in the presence of analog measurement errors. This is accomplished by using the least absolute value (LAV) state estimation method and applying the previously suggested two-stage estimation approach. The ability of the LAV estimators to reject inconsistent measurements is exploited in order to differentiate between circuit breaker status and analog measurement errors. The first stage of estimation uses a bus level network model as in conventional LAV estimators. Results of Stage 1 are used to draw a set of suspect buses whose substation configurations may be erroneous. In the second stage, the identified buses are modeled in detail using the bus sections and the circuit breaker models while keeping the bus level network models for the rest of the system. The LAV estimation is repeated for the expanded system model and any remaining significant normalized residuals are flagged as bad analog measurements, while the correct topology is determined based on the estimated flows through the modeled circuit breakers in the substations. The proposed approach is implemented and tested. Simulation results for cases involving circuit breaker status and/or analog measurement errors are provided.
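The key property exploited above is that an L1-norm (LAV) fit rejects gross measurement errors that would bias a least-squares fit. As a rough illustration (not the paper's implementation, which would use a full network model and typically a linear-programming solver), the numpy-only sketch below approximates an LAV estimate by iteratively reweighted least squares and flags the measurement with the largest residual.

```python
import numpy as np

def lav_estimate(H, z, iters=100, eps=1e-6):
    """Approximate least-absolute-value estimate of x in z ~ H x via
    iteratively reweighted least squares (IRLS); an LP solver is the
    usual exact approach for LAV state estimation."""
    H = np.asarray(H, dtype=float)
    z = np.asarray(z, dtype=float)
    x = np.linalg.lstsq(H, z, rcond=None)[0]   # ordinary LS starting point
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(z - H @ x), eps)  # downweight outliers
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * H, sw * z, rcond=None)[0]
    return x

# Five measurements of a hypothetical 2-state system; measurement 2
# carries a gross error (true state is x = [1, 2]).
H = [[1, 0], [0, 1], [1, 1], [1, 2], [2, 1]]
z = [1.0, 2.0, 13.0, 5.0, 4.0]
x = lav_estimate(H, z)
residuals = np.abs(np.asarray(z) - np.asarray(H, dtype=float) @ x)
bad = int(np.argmax(residuals))   # index of the flagged measurement
```

Because four of the five measurements are mutually consistent, the LAV fit lands on the true state and concentrates the entire error into the corrupted measurement's residual, which mirrors how the second-stage residual test separates bad analog data from topology errors.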

The use of pulse compression techniques to improve the sensitivity of meteorological radars has become increasingly common in recent years. An unavoidable side-effect of such techniques is the formation of range sidelobes which lead to spreading ...

An asset identification and information infrastructure management (AI3M) device having an automated identification technology system (AIT), a Transportation Coordinators' Automated Information for Movements System II (TC-AIMS II), a weigh-in-motion system (WIM-II), and an Automated Air Load Planning system (AALPS) all in electronic communication for measuring and calculating actual asset characteristics, either statically or in-motion, and further calculating an actual load plan.

To solve the order choice problem in the Prony algorithm, active power or voltage signals are usually used by Prony algorithms based on singular value decomposition-total least squares (SVD-TLS). Because of the difference in choosing the standard parameters ... Keywords: power systems, low frequency oscillations, improved Prony algorithm, SVD-TLS, DVR, SNR

Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
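The select-measure-compare-narrow loop described above can be sketched as a recursive halving of the test compute-node set. Since the abstract does not give the exact test-value formula, the `measure` callable below is an assumed stand-in for the comparison of measured I/O-node and compute-node performance against the tree performance threshold.

```python
def locate_problem_nodes(nodes, measure, threshold):
    """Narrow a set of test compute nodes down to the ones whose measured
    collective performance falls below the tree performance threshold.
    `measure(nodes)` is an assumed stand-in for the current-test-value
    computation described in the abstract."""
    if measure(nodes) >= threshold:
        return []                    # this test set performs acceptably
    if len(nodes) == 1:
        return list(nodes)           # isolated a potential problem node
    mid = len(nodes) // 2            # otherwise select smaller test sets
    return (locate_problem_nodes(nodes[:mid], measure, threshold) +
            locate_problem_nodes(nodes[mid:], measure, threshold))
```

With a tree of eight nodes where one node's link is degraded, the recursion discards healthy halves after a single measurement each and tests only the suspect subtree individually, which is the cost saving the method aims for.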

The American Oil Chemists' Society makes no warranty as to the safety of the methods contained herein. Methods Disclaimer: Official Methods and Recommended Practices of the AOCS (Methods).

Identify Institutional Change Rules, Roles, and Tools Constituting Context for Sustainability. October 8, 2013 - 11:43am. Graphic: the Institutional Change Continuous Improvement Cycle, shown as five gears progressing from Determine Goal, to Identify Context (Rules, Roles, and Tools), to Develop Action Plan, to Implement Plan, to Measure and Evaluate. After determining your agency's institutional change sustainability goals, the next step is to analyze the context within which these goals are to be achieved. Start by identifying the organizational rules, roles, and tools that shape the current context and may influence success in achieving these goals. Identifying the linkages among rules, roles, and tools and how they

We present a method to identify and localize people by leveraging existing CCTV camera infrastructure along with inertial sensors (accelerometer and magnetometer) within each person's mobile phones. Since a person's motion path, as observed by the camera, ... Keywords: cameras, inertial sensors, localization, person identification

A complete well-defined sample of ultracool dwarfs is one of the key science programs of the Pan-STARRS 1 optical survey telescope (PS1). Here we combine PS1 commissioning data with the Two Micron All Sky Survey (2MASS) to conduct a proper motion search (0.1-2.0 arcsec yr^-1) for nearby T dwarfs, using optical+near-IR colors to select objects for spectroscopic follow-up. The addition of sensitive far-red optical imaging from PS1 enables discovery of nearby ultracool dwarfs that cannot be identified from 2MASS data alone. We have searched 3700 deg^2 of PS1 y-band (0.95-1.03 μm) data to y ≈ 19.5 mag (AB) and J ≈ 16.5 mag (Vega) and discovered four previously unknown bright T dwarfs. Three of the objects (with spectral types T1.5, T2, and T3.5) have photometric distances within 25 pc and were missed by previous 2MASS searches due to more restrictive color selection criteria. The fourth object (spectral type T4.5) is more distant than 25 pc and is only a single-band detection in 2MASS. We also examine the potential for completing the census of nearby ultracool objects with the PS1 3π survey.

The primary purpose of the current research was to develop an integrated approach by combining information compression methods and artificial neural networks for the monitoring of plant components using nondestructive examination data. Specifically, data from eddy current inspection of heat exchanger tubing were utilized to evaluate this technology. The focus of the research was to develop and test various data compression methods (for eddy current data) and the performance of different neural network paradigms for defect classification and defect parameter estimation. Feedforward, fully-connected neural networks, which use the back-propagation algorithm for network training, were implemented for defect classification and defect parameter estimation using a modular network architecture. A large eddy current tube inspection database was acquired from the Metals and Ceramics Division of ORNL. These data were used to study the performance of artificial neural networks for defect type classification and for estimating defect parameters. A PC-based data preprocessing and display program was also developed as part of an expert system for data management and decision making. The results of the analysis showed that for effective (low-error) defect classification and estimation of parameters, it is necessary to identify proper feature vectors using different data representation methods. The integration of data compression and artificial neural networks for information processing was established as an effective technique for automation of diagnostics using nondestructive examination methods.

Underbalanced operations reduce formation damage, especially in horizontal wells where zones are exposed to mud for longer time periods. Benefits, risks, well control concerns, equipment and issues associated with these operations are addressed in this paper. Flow drilling raises many concerns, but little has been published on horizontal well control and flow drilling operations. This article covers planning considerations for flow drilling, but does not address horizontal ''overbalanced'' drilling because considerations and equipment are the same as in vertical overbalanced drilling and many references address that subject. The difference in well control between vertical and horizontal overbalanced drilling is fluid influx behavior and how that behavior affects kill operations.

We have used methane imaging techniques to identify the near-infrared counterpart of the bright Wide-field Infrared Survey Explorer (WISE) source WISE J163940.83-684738.6. The large proper motion of this source (≈3.0 arcsec yr^-1) has moved it, since its original WISE identification, very close to a much brighter background star: it currently lies within 1.5 arcsec of the J = 14.90 ± 0.04 star 2MASS 16394085-6847446. Observations in good seeing conditions using methane-sensitive filters in the near-infrared J band with the FourStar instrument on the Magellan 6.5 m Baade telescope, however, have enabled us to detect a near-infrared counterpart. We have defined a photometric system for use with the FourStar J2 and J3 filters, and this photometry indicates strong methane absorption, which unequivocally identifies it as the source of the WISE flux. Using these imaging observations we were then able to steer this object down the slit of the Folded-port Infrared Echellette spectrograph on a night of 0.6 arcsec seeing, and so obtain near-infrared spectroscopy confirming a Y0-Y0.5 spectral type. This is in line with the object's near-infrared-to-WISE J3 - W2 color. Preliminary astrometry using both WISE and FourStar data indicates a distance of 5.0 ± 0.5 pc and a substantial tangential velocity of 73 ± 8 km s^-1. WISE J163940.83-684738.6 is the brightest confirmed Y dwarf in the WISE W2 passband and its distance measurement places it among the lowest luminosity sources detected to date.
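As a quick consistency check on the quoted astrometry, the standard tangential-velocity relation v_t = 4.74 μ d (μ in arcsec/yr, d in pc) can be applied to the reported proper motion and distance:

```python
def tangential_velocity_kms(mu_arcsec_per_yr, distance_pc):
    """Tangential velocity in km/s from proper motion (arcsec/yr) and
    distance (pc): v_t = 4.74 * mu * d (standard astrometric relation)."""
    return 4.74 * mu_arcsec_per_yr * distance_pc

# Values quoted in the abstract: mu ~ 3.0 arcsec/yr at ~5.0 pc
v_t = tangential_velocity_kms(3.0, 5.0)
```

This gives roughly 71 km/s, consistent with the abstract's 73 ± 8 km/s given the uncertainties in the preliminary astrometry.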

Identify Employee Commuting Clusters for Greenhouse Gas Profile. October 7, 2013 - 1:53pm. For evaluating a greenhouse gas profile for employee commuting, use survey data on employee home locations and arrival/departure times to identify geographic areas to target for vanpool and carpool ride-matching efforts. Those who live in close proximity to, or en route to, the workplace and keep similar hours may be clustered to determine which locations might represent the best candidates for ride-share matching. As illustrated in Figure 1, areas with higher concentrations of employees who live farther from the worksite might be good candidate locations for targeted carpool and vanpool

We review the production and flow of identified hadrons at RHIC, with a main emphasis on the intermediate transverse momentum region ($p_T \gtrsim 2$ GeV/$c$), where theoretical models must describe particle production and resolve the anomalously large baryon yields and elliptic flow observed in the experiments.

This paper presents a methodology for identifying best practices followed by various countries worldwide for supporting broadband growth. It also investigates and analyzes these practices using data concerning broadband penetration, access technologies, ... Keywords: Best practices, Broadband, Telecommunications policies

Illinois is completing a comprehensive statewide water plan. The plan selects three atmospheric issues from among the 11 identified as key issues facing the state's water resources. The issues selected include climate change and prediction, ...

Over the past several years, genome wide association studies (GWAS) have implicated hundreds of genes in common disease. More recently, the GWAS approach has been utilized to identify regions of the genome which harbor variation affecting gene expression ...

We have designed a zebrafish genomic microarray to identify DNA-protein interactions in the proximal promoter regions of over 11,000 zebrafish genes. Using these microarrays, together with chromatin immunoprecipitation ...

Weather systems such as tropical cyclones, fronts, troughs and ridges affect our daily lives. Yet, they are often manually located and drawn on weather charts based on forecasters' experience. To identify them, multiple atmospheric elements need to be ...

Compound screening is a powerful tool to identify new therapeutic targets, drug leads, and elucidate the fundamental mechanisms of biological processes. We report here the results of the first in vivo small-molecule screens ...

In systems biology, identifying vital functions like glycolysis from a given metabolic pathway is important to understand living organisms. In this paper, we focus on the problem of finding minimal sub-pathways producing target metabolites from source ...

Identifying Non-Federal Cooperating Agencies in Implementing the Procedural Requirements of NEPA. The purpose of this Council on Environmental Quality Memorandum is to ensure that all federal and non-federal cooperating agencies are identified on the cover sheet of each Environmental Impact Statement (EIS) prepared by your agency. G-CEQ-IdentnonfedCooperatingAgencies.pdf. More Documents & Publications: Designation of Non-Federal Agencies as Cooperating Agencies; Cooperating Agencies in Implementing the Procedural Requirements of the National Environmental Policy Act; Reporting Cooperating Agencies in Implementing the Procedural Requirements of the National Environmental Policy Act.

For immediate release: 10/10/2012 | NR-12-10-03. Cold cases heat up through Lawrence Livermore approach to identifying remains. Anne M Stark, LLNL, (925) 422-9799, stark8@llnl.gov. [Photo: Bruce Buchholz loads a sample in the accelerator.] LIVERMORE, Calif. -- In an effort to identify the thousands of John/Jane Doe cold cases in the United States, a Lawrence Livermore National Laboratory researcher and a team of international collaborators have found a multidisciplinary approach to identifying the remains of missing persons. Using "bomb pulse" radiocarbon analysis developed at Lawrence Livermore, combined with recently developed anthropological analysis and forensic DNA techniques, the researchers were able to identify the remains of a missing

Identifying and Protecting Alaskan Fishery Habitats. Photo of the Week, September 27, 2013 - 3:08pm. This aerial photo shows open water and floating ice on ponds, lakes and river channels in the Sagavanirktok River Delta on Alaska's North Slope. PNNL scientists employed satellite technology to understand the impacts of oil development activities on the environment. Using satellite radar to "see" through the ice, scientists detected critical fish overwintering habitats by identifying where ice was grounded and where it was floating. Utilizing this information on critical habitats, fishery managers can suggest locations for energy development activities that increase the sustainability of fishery resources and minimize environmental impacts. Research was funded by the U.S. Department of the Interior. | Photo courtesy of Pacific Northwest National Laboratory.

We report the first detection of the intrinsic velocity dispersion of the Arches cluster - a young (~2 Myr), massive (~10^4 M_Sun) starburst cluster located only 26 pc in projection from the Galactic center. This was accomplished using proper motion measurements within the central 10'' × 10'' of the cluster, obtained with the laser guide star adaptive optics system at Keck Observatory over a three-year time baseline (2006-2009). This uniform data set yields proper motion measurements improved by a factor of ~5 over previous measurements from heterogeneous instruments. By carefully and simultaneously accounting for the cluster and field contaminant distributions as well as the possible sources of measurement uncertainty, we estimate the internal velocity dispersion to be 0.15 ± 0.01 mas yr^-1, which corresponds to 5.4 ± 0.4 km s^-1 at a distance of 8.4 kpc. Projecting a simple model for the cluster onto the sky to compare with our proper motion data set, in conjunction with surface density data, we estimate the total present-day mass of the cluster to be M(r < 1.0 pc) = 1.5 (+0.74/-0.60) × 10^4 M_Sun. The mass in stars observed within a cylinder of radius R (for comparison to photometric estimates) is found to be M(R < 0.4 pc) = 0.90 (+0.40/-0.35) × 10^4 M_Sun at formal 3σ confidence. This mass measurement is free from assumptions about the mass function of the cluster, and thus may be used to check mass estimates from photometry and simulation. Photometric mass estimates assuming an initially Salpeter mass function (Γ_0 = 1.35, or Γ ~ 1.0 at present, where dN/d(log M) ∝ M^Γ) suggest a total cluster mass M_cl ~ (4-6) × 10^4 M_Sun and a projected mass of roughly 2 × 10^4 to 3 × 10^4 M_Sun within R < 0.4 pc.
Photometric mass estimates assuming a globally top-heavy or strongly truncated present-day mass function (PDMF; with Γ ~ 0.6) yield mass estimates closer to M(R < 0.4 pc) ~ (1-1.2) × 10^4 M_Sun. Consequently, our results support a PDMF that is either top-heavy or truncated at low mass, or both. Collateral benefits of our data and analysis include: (1) cluster membership probabilities, which may be used to extract a clean cluster sample for future photometric work; (2) a refined estimate of the bulk motion of the Arches cluster with respect to the field, which we find to be 172 ± 15 km s^-1, slightly slower than suggested by previous measurements using one epoch each with the Very Large Telescope and the Keck telescope; and (3) a velocity dispersion estimate for the field itself, which is likely dominated by the inner Galactic bulge and the nuclear disk.
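As a sanity check on the units in the abstract above, the standard conversion from a proper-motion dispersion (mas/yr) at a known distance (kpc) to a velocity dispersion (km/s) can be sketched as follows. The bare conversion does not exactly reproduce the quoted 5.4 km/s, presumably because that figure reflects the paper's full cluster and error modeling; this is only the textbook factor.

```python
def pm_to_velocity(mu_mas_per_yr: float, distance_kpc: float) -> float:
    """Tangential velocity (km/s) from proper motion and distance.

    The factor 4.74 km/s is one AU per year expressed in km/s; because
    mas/yr x kpc scales the same way as arcsec/yr x pc, it applies directly.
    """
    return 4.74 * mu_mas_per_yr * distance_kpc

# Dispersion quoted in the abstract: 0.15 mas/yr at 8.4 kpc
sigma_v = pm_to_velocity(0.15, 8.4)
print(round(sigma_v, 1))  # → 6.0 (km/s), same order as the quoted 5.4
```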

Correlation methods have been developed to provide a quick and relatively simple technique for estimating the performance of passive solar systems. The correlations are done with respect to data generated from simulation models. The techniques and accuracies are described. Both the Solar Load Ratio and Un-Utilizability methods are described. The advantages and limitations of correlation methods as design tools are discussed.

According to energy auditors, state-owned facilities in Texas on average consume over twice the energy of comparable facilities in the private sector. In 1984 and 1986, as part of the Texas Energy Cost Containment Program, two extensive energy audit programs examined a total of 35.3 million square feet of state-owned space. Energy cost reduction measures with paybacks of four years or less were identified. The purpose of this paper is to present the projects identified in 1986. Most relate to lighting, HVAC, and energy management systems. The types of facilities audited include colleges and universities, health science centers, state schools and centers, hospitals, and office buildings. The relation between facility type and the energy cost reduction measures identified is discussed. In addition, the energy and dollar savings derived from the identified measures at the different facilities are presented. The total savings of the projects identified in both energy audit programs amount to $23.7 million annually.
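The four-year simple-payback screen described above is a straightforward calculation; the sketch below shows how such a filter works. The measure names and dollar figures are hypothetical, not taken from the Texas audits.

```python
# Hypothetical screening of energy cost reduction measures by simple payback,
# mirroring the four-year threshold used in the audits described above.
measures = [
    {"name": "lighting retrofit", "cost": 120_000, "annual_savings": 48_000},
    {"name": "EMS upgrade", "cost": 300_000, "annual_savings": 60_000},
    {"name": "chiller replacement", "cost": 900_000, "annual_savings": 110_000},
]

def simple_payback(cost: float, annual_savings: float) -> float:
    """Years to recover the capital cost from annual savings."""
    return cost / annual_savings

selected = [m["name"] for m in measures
            if simple_payback(m["cost"], m["annual_savings"]) <= 4.0]
print(selected)  # → ['lighting retrofit']
```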

53BP1 is phosphorylated by the protein kinase ATM upon DNA damage. Even though several ATM phosphorylation sites in 53BP1 have been reported, those sites have few known functional implications in the DNA damage response. Here, we show that ATM phosphorylates the S1219 residue of 53BP1 in vitro and that the residue is phosphorylated in cells exposed to ionizing radiation (IR). Transfection with siRNA targeting ATM abolished IR-induced phosphorylation at this residue, supporting the conclusion that this process is mediated by the kinase. To determine the functional relevance of this phosphorylation event, a U2OS cell line expressing S1219A mutant 53BP1 was established. IR-induced foci formation of MDC1 and γH2AX, DNA damage signaling molecules, was reduced in this cell line, implying that S1219 phosphorylation is required for recruitment of these molecules to DNA damage sites. Furthermore, overexpression of the mutant protein impeded IR-induced G2 arrest. In conclusion, we have shown that S1219 phosphorylation by ATM is required for proper execution of the DNA damage response.

Identify Institutional Change: Tools for Sustainability (October 8, 2013) After identifying institutional change rules and roles, a Federal agency should identify the tools that create the infrastructural context within which it can achieve its sustainability goals. A tool is defined simply as a technology, system, or process used to meet a need. An example would be a time card, which is a system for tracking and verifying work hours. An organization's tools support its standard operations and ensure consistency over the long term; tools both allow and constrain behavior practices. Changes to institutional behavior must be supported by modified operational standards and tools. When an organization's tools are in opposition to

Digital Object Identifiers (DOI) A Digital Object Identifier (DOI) is a permanent, unique name used in the web-based global naming and resolution system that provides for the identification, retrieval, exchange and maintenance of intellectual property. DOIs assist the publishing community with electronic commerce and copyright management of digital objects published on the Internet. Development of the DOI System was initiated in 1997 by the Association of American Publishers, and is now managed by the International DOI Foundation. The DOI System was initially developed by the publishing community but is now a non-profit collaboration to develop infrastructure for persistent identification and management of content. Approximately 2000

Sharing De-identified Data: Use the collected information to implement new strategies for worker safety and health at DOE sites and to inform industry-specific researchers while still protecting sensitive participant information and confidentiality. The confidentiality and privacy rights of former workers are not only a legal requirement, they are crucial to establishing and maintaining credibility with the former worker community. All medical information that is collected as part of this program is treated as confidential and is used only as allowed by the Privacy Act of 1974. All FWP activities are conducted with the approval of the Institutional Review Boards, or Human

Protecting FWP Participant Personally Identifiable Information/Protected Health Information The confidentiality and privacy rights of former workers are not only a legal requirement, they are crucial to establishing and maintaining credibility with the former worker community. All medical information that is collected as part of this program is treated as confidential and is used only as allowed by the Privacy Act of 1974. All FWP activities are conducted with the approval of the Institutional Review Boards, or Human Subjects Committees, of DOE and involved universities. All individuals sign an informed consent and Health Insurance Portability and Accountability Act

Identify Institutional Change: Rules for Sustainability (October 8, 2013) It is important to analyze formal and informal workplace rules governing the behavior of individuals and organizations to meet a Federal agency's institutional change goals for sustainability. It is also important to determine how these rules actually affect people filling different roles in the organization, and how they mesh with the technologies, systems, and processes that constitute tools. Identify Formal and Informal Rules First, identify the formal and informal rules that shape current or desired behaviors. This includes checking the extent to which they align with one another in support of your agency's sustainability objectives. You may want

Proposal for PDG Identifiers: Purpose and Use Cases. PDG Identifiers are strings that can be used to reference items in PDG such as review articles, particles, datablocks or decay modes. Currently envisaged use cases include: external references to items in the PDG database (for example, given a PDG Identifier one can go directly to a specific page in pdgLive), and tags that can be included in the metadata of publication databases (in particular INSPIRE).

... a significant source of wasted energy. A typical plant that ... used to burn fuel, energy is wasted, because excessive heat ... energy savings in compressed air systems. By properly sizing regulators, compressed air that is otherwise wasted ...

A new method of measuring the velocities of solar electron antineutrinos is proposed. The method is based on the assumption that if a neutrino detector shaped like a pipe and providing a proper angular resolution is directed at the optical "image" of the Sun, then it would detect solar neutrinos with velocities $V_{\widetilde{\

While there is an increasing need to share medical information for public health research, such data sharing must preserve patient privacy without disclosing any information that can be used to identify a patient. A considerable amount of research in ... Keywords: Anonymization, Conditional random fields, Cost-proportionate sampling, Data linkage, Medical text, Named entity recognition

Current literature about idea contests has emphasized individuals' motives for joining volunteer idea contests. However, explanation of why people stay or leave in the long run is rare. We identify factors that motivate users to participate repeatedly ... Keywords: motivation, multiple idea contests, open innovation, user retention

Sting jets, or surface wind maxima at the end of bent-back fronts in Shapiro-Keyser cyclones, are one cause of strong winds in extratropical cyclones. Although previous studies identified the release of conditional symmetric instability as a cause ...

Volume II, IDENTIFYING COMMUNITIES ASSOCIATED WITH THE FISHING INDUSTRY IN LOUISIANA - FINAL REPORT: ... The Louisiana Offshore Oil Port, 18 miles from Lafourche, has been economically important to the parish since ... in the northern reaches of the parish, and saltwater wetlands predominate in the south. Lafourche Parish ...


Volume I, IDENTIFYING COMMUNITIES ASSOCIATED WITH THE FISHING INDUSTRY IN LOUISIANA - FINAL REPORT: 2.3 A Brief Cultural Geography of Coastal Louisiana ... of the offshore oil and gas industry to Louisiana, and the now lengthy history of economic and social interaction ...

This paper presents an interaction taxonomy for classifying and identifying requirement interactions in software systems. The proposed taxonomy is in the form of a four-layered pyramid that defines 6 Main Interaction Categories in the first layer, 17 ... Keywords: Interaction scenarios, Requirement engineering, Requirement interaction taxonomy

MicroRNAs (miRNAs) play an important role in eukaryotic gene regulation. Although thousands of miRNAs have been identified in laboratories around the world, most of their targets still remain unknown. Different computational techniques exist to predict ... Keywords: genetic algorithms, miRNA targets, microRNAs

In this paper, we address the question of how we can identify hosts that will generate links to web spam. Detecting such spam link generators is important because almost all new spam links are created by them. By monitoring spam link generators, we can ... Keywords: information retrieval, link analysis, web spam

The principal objective of electrical geophysical research at UURI has been to provide reliable exploration and reservoir assessment tools for the shallowest to the deepest levels of interest in geothermal fields. Three diverse methods are being considered currently: magnetotellurics (MT, and CSAMT), self-potential, and borehole resistivity. Primary shortcomings in the methods addressed have included a lack of proper interpretation tools to treat the effects of the inhomogeneous structures often encountered in geothermal systems, a lack of field data of sufficient accuracy and quantity to provide well-focused models of subsurface resistivity structure, and a poor understanding of the relation of resistivity to geothermal systems and physicochemical conditions in the earth generally. In MT, for example, interpretation research has focused successfully on the applicability of 2-D models in 3-D areas which show a preferred structural grain. Leading computer algorithms for 2-D and 3-D simulation have resulted and are combined with modern methods of regularized inversion. However, 3-D data coverage and interpretation is seen as a high priority. High data quality in our own research surveys has been assured by implementing a fully remote reference with digital FM telemetry and real-time processing with data coherence sorting. A detailed MT profile across Long Valley has mapped a caldera-wide altered tuff unit serving as the primary hydrothermal aquifer, and identified a low-resistivity body in the middle crust under the west moat which corresponds closely with teleseismic delay and low density models. In the CSAMT method, our extensive tensor survey over the Sulphur Springs geothermal system provides valuable structural information on this important thermal regime and allows a fundamental analysis of the CSAMT method in heterogeneous areas. 
The self-potential (SP) method is promoted as an early-stage, cost-effective exploration technique for covered hydrothermal resources, of low to high temperature, which has little or no adverse environmental impact and yields specific targets for temperature gradient and fluid chemistry testing. Substantial progress has been made in characterizing SP responses for several known, covered geothermal systems in the Basin and Range and southern Rio Grande Rift, and at identifying likely, causative source areas of thermal fluids. (Quantifying buried SP sources requires detailed knowledge of the resistivity structure, obtainable through DC or CSAMT surveys with 2-D or 3-D modeling.) Borehole resistivity (BHR) methods may help define hot and permeable zones in geothermal systems, trace the flow of cooler injected fluids and determine the degree of water saturation in vapor-dominated systems. At UURI, we develop methods to perform field surveys and to model and interpret various borehole-to-borehole, borehole-to-surface and surface-to-borehole arrays. The status of our BHR research may be summarized as follows: (1) forward modeling algorithms have been developed and published to evaluate numerous resistivity methods and to examine the effects of well-casing and noise; (2) two inverse two-dimensional algorithms have been devised and successfully applied to simulated field data; (3) a patented, multi-array resistivity system has been designed and is under construction; and (4) we are seeking appropriate wells in geothermal and other areas in which to test the methods.

We present light curves and periods of 53 candidates for short-period eclipsing binary stars identified by SuperWASP. These include 48 newly identified objects with periods < 2×10^4 s (~0.23 d), as well as the shortest-period binary known with main-sequence components (GSC2314-0530 = 1SWASP J022050.85+332047.6) and four other previously known W UMa stars (although the previously reported periods for two of these four are shown to be incorrect). The period distribution of main-sequence contact binaries shows a sharp cut-off at a lower limit of around 0.22 d, but until now, very few systems were known close to this limit. These new candidates will therefore be important for understanding the evolution of low-mass stars and for investigating the cause of the period cut-off.
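The period threshold quoted above is simply 2×10^4 seconds expressed in days; the sketch below shows the conversion and a filter of that form. The candidate period values are hypothetical, not from the paper.

```python
# The ~0.23 d upper limit from the abstract, i.e. 2e4 seconds in days
CUTOFF_DAYS = 2e4 / 86400  # seconds per day

# Hypothetical candidate periods (days); select those below the cut-off
periods_d = [0.211, 0.225, 0.231, 0.240, 0.055]
short = [p for p in periods_d if p < CUTOFF_DAYS]
print(round(CUTOFF_DAYS, 4), short)  # → 0.2315 [0.211, 0.225, 0.231, 0.055]
```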

5, 2000 MEMORANDUM FOR DEPUTY/ASSISTANT HEADS OF FEDERAL AGENCIES FROM: HORST G. GRECZMIEL, Associate Director for NEPA Oversight SUBJECT: IDENTIFYING NON-FEDERAL COOPERATING AGENCIES IN IMPLEMENTING THE PROCEDURAL REQUIREMENTS OF THE NATIONAL ENVIRONMENTAL POLICY ACT The purpose of this Memorandum is to ensure that all federal and non-federal cooperating agencies are identified on the cover sheet of each Environmental Impact Statement (EIS) prepared by your agency. In his Memorandum of July 28, 1999 (attached below), George T. Frampton, Jr., the CEQ Chair, urged all agencies to more actively solicit the participation of state, tribal and local governments as cooperating agencies in implementing the environmental impact statement process under the National Environmental Policy Act (NEPA). Agencies are

The ALICE experiment features multiple particle identification systems. The measurement of the identified charged hadron p_t spectra in proton-proton collisions at √s = 900 GeV will be discussed. In the central rapidity region (|η| < 0.9), hadrons are identified via their specific energy loss signal in the ITS and TPC. In addition, the information from TOF is used to identify hadrons at higher momenta. Finally, the kink topology of the weak decay of charged kaons provides an alternative method to extract the transverse momentum spectra of charged kaons. This combination allows charged hadrons to be tracked and identified in the transverse momentum (p_t) range from 100 MeV/c up to 2.5 GeV/c. Mesons containing strange quarks (K0_S, φ) and both singly and doubly strange baryons (Λ, anti-Λ, and Ξ+ + Ξ−) are identified by their decay topology inside the TPC detector. Results obtained with the various identification tools described above and a comparison with theoretical models and previously published data will be presented.

The key to proper allocation of fuel and feedstock costs to the products from a plant or from any one of its components is the commodity called exergy - the central concept of the Second Law of Thermodynamics, commonly named available energy or availability. The methods for composing exergy cost flow diagrams will be explained. The results will be shown for several plants - electric-power, co-generation, coal-gasification, and others. The application of such results will be shown for cost-accounting, for plant operation economics, for maintenance decisions, and for design decisions - at both the preliminary and detailed design stages.
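The exergy concept in this paragraph can be illustrated with the standard specific flow-exergy formula, e = (h - h0) - T0·(s - s0), which is the usual starting point for exergy cost-flow accounting. The stream and dead-state values below are hypothetical, not from the paper.

```python
def flow_exergy(h: float, s: float, h0: float, s0: float, T0: float) -> float:
    """Specific flow exergy e = (h - h0) - T0*(s - s0).

    h, h0 in kJ/kg; s, s0 in kJ/(kg*K); T0 (dead-state temperature) in K.
    """
    return (h - h0) - T0 * (s - s0)

# Hypothetical steam stream against a 298.15 K dead state
e = flow_exergy(h=3230.0, s=6.92, h0=104.9, s0=0.367, T0=298.15)
print(round(e, 1))  # → 1171.3 (kJ/kg)
```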

In keeping with the definition that biotechnology is really no more than a name given to a set of techniques and processes, the authors apply a set of fuzzy techniques to chemical industry problems such as finding the proper proportion of a raw mix to control pollution, studying flow rates, and assessing product quality. We use fuzzy control theory, fuzzy neural networks, fuzzy relational equations, and genetic algorithms to solve these problems. When the solution to a problem can have certain concepts or attributes that are indeterminate, the only model that can tackle such a situation is the neutrosophic model. The authors have also used these models in this book to study the use of biotechnology in chemical industries. This book has six chapters. The first chapter gives a brief description of biotechnology. The second chapter deals with the proper proportion of the mix of raw materials in cement industries to minimize pollution using fuzzy control theory. Chapter three gives the method of determining the temperature set point for crude oil in oil refineries. Chapter four studies the flow rates in chemical industries using fuzzy neural networks. Chapter five gives the method of minimizing waste gas flow in chemical industries using fuzzy linear programming. The final chapter suggests that when indeterminacy is an attribute or concept involved in these studies, the notion of neutrosophic methods can be adopted.
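As a minimal illustration of the fuzzy machinery described above (judging a raw-mix component against linguistic terms, the first step of a fuzzy controller), the sketch below implements triangular membership functions. The ingredient, percentages, and breakpoints are hypothetical, not taken from the book.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 at a, rising to 1 at b, 0 at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Degree to which a hypothetical lime content of 63% is "low"/"ideal"/"high"
x = 63.0
memberships = {
    "low":   tri(x, 55, 60, 65),
    "ideal": tri(x, 60, 65, 70),
    "high":  tri(x, 65, 70, 75),
}
print(memberships)  # → {'low': 0.4, 'ideal': 0.6, 'high': 0.0}
```

A full fuzzy controller would combine such memberships through rules and defuzzify the result; this fuzzification step is where the linguistic modeling the book describes begins.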

The present invention provides a portable data collection device that has a variety of sensors that are interchangeable with a variety of input ports in the device. The various sensors include a data identification feature that provides information to the device regarding the type of physical data produced by each sensor and therefore the type of sensor itself. The data identification feature enables the device to locate the input port where the sensor is connected and self adjust when a sensor is removed or replaced. The device is able to collect physical data, whether or not a function of time. The sensor may also store a unique sensor identifier.

Measured energy savings resulting from energy conservation retrofits in commercial buildings can be used to verify the success of the retrofits, determine the payment schedule for the retrofits, and guide the selection of future retrofits. This paper presents a structured methodology, developed for buildings in the Texas LoanSTAR program, for measuring retrofit savings in commercial buildings. This methodology identifies the pre-retrofit, construction, and post-retrofit periods, normalizes energy consumption data, and quantifies the uncertainty associated with the measured savings. A case study from the Texas LoanSTAR program is presented as an example.
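The normalization step described above is commonly implemented by fitting a weather-dependent baseline to pre-retrofit data, projecting it into the post-retrofit period, and taking savings as the gap between projected and measured use. The sketch below does this with a hand-rolled one-variable least-squares fit; the monthly temperatures and energy values are hypothetical.

```python
# Hypothetical pre-retrofit months: (avg outdoor temp in F, energy in MMBtu)
pre = [(45, 310), (55, 340), (65, 385), (75, 450), (85, 520)]

# Ordinary least squares for a linear baseline energy = intercept + slope*temp
n = len(pre)
sx = sum(t for t, _ in pre); sy = sum(e for _, e in pre)
sxx = sum(t * t for t, _ in pre); sxy = sum(t * e for t, e in pre)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Savings = projected baseline use minus measured post-retrofit use
post = [(50, 290), (70, 360)]  # hypothetical post-retrofit months
savings = sum((intercept + slope * t) - e for t, e in post)
print(round(savings, 1))  # → 99.0 (MMBtu over the two months)
```

A real application would also quantify the uncertainty of the fitted baseline, as the methodology requires.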

The Java programming environment has recently become very popular. The Java programming language is designed to be portable enough to be executed on a wide range of computers, from cell phones to supercomputers. Computer programs written in Java are compiled into Java bytecode instructions that are suitable for execution by a Java Virtual Machine implementation. The Java Virtual Machine is commonly implemented in software by means of an interpreter for the Java Virtual Machine instruction set. As an object-oriented language, Java utilizes the concept of objects. Our idea is to identify the candidate objects' references in a Java environment through hierarchical cluster analysis using the reference stack and execution stack.
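A toy version of the hierarchical cluster analysis the paper applies is sketched below. The "reference features" are hypothetical 2-D vectors (e.g., counts derived from reference-stack and execution-stack observations), and single linkage is one linkage choice among several; the paper does not specify these details.

```python
def single_linkage(points, n_clusters):
    """Agglomerative clustering, merging the closest pair until n_clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Single linkage: distance between the closest members of two clusters
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# Hypothetical per-object reference features
refs = [(1, 2), (1, 3), (8, 8), (9, 9), (2, 2)]
print(single_linkage(refs, 2))  # → [[(1, 2), (1, 3), (2, 2)], [(8, 8), (9, 9)]]
```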

An apparatus allows workers to assert and release control over the energization of a system. The apparatus does not require the workers to carry any additional paraphernalia, and is not easily defeated by other workers. Users asserting and releasing control present tokens uniquely identifying each user to a reader, and the apparatus prevents transition of the system to an undesired state until an appropriate number of users are currently asserting control. For example, a dangerous manufacturing robot can be prevented from energizing until all the users that have asserted control when entering the robot's controlled space have subsequently released control when leaving the robot's controlled space.
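The assert/release interlock logic described above can be sketched as a small state machine: each unique token adds a holder, and energization is permitted only when no holder remains. Class, method, and token names are illustrative, not from the patent.

```python
class EnergizationInterlock:
    """Toy interlock: the system may energize only when no one asserts control."""

    def __init__(self):
        self._holders = set()

    def assert_control(self, token: str) -> None:
        # Presenting a unique token registers the user as asserting control
        self._holders.add(token)

    def release_control(self, token: str) -> None:
        self._holders.discard(token)

    def may_energize(self) -> bool:
        # Safe only when every worker who asserted control has released it
        return len(self._holders) == 0

lock = EnergizationInterlock()
lock.assert_control("badge-017")
lock.assert_control("badge-042")
lock.release_control("badge-017")
print(lock.may_energize())  # → False: badge-042 still asserts control
lock.release_control("badge-042")
print(lock.may_energize())  # → True
```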

This conference paper describes the High-Performance Photovoltaic (HiPerf PV) Project, initiated by the U.S. Department of Energy to substantially increase the viability of photovoltaics (PV) for cost-competitive applications so that PV can contribute significantly to our energy supply and our environment in the 21st century. To accomplish this, the NCPV directs in-house and subcontracted research in high-performance polycrystalline thin-film and multijunction concentrator devices. Details of the subcontractor and in-house progress will be described toward identifying critical pathways to 25% polycrystalline thin-film tandem cells and developing multijunction concentrator modules to 33%.


We characterized the mutational landscape of melanoma, the form of skin cancer with the highest mortality rate, by sequencing the exomes of 147 melanomas. Sun-exposed melanomas had markedly more ultraviolet (UV)-like C>T somatic mutations compared to sun-shielded acral, mucosal and uveal melanomas. Among the newly identified cancer genes was PPP6C, encoding a serine/threonine phosphatase, which harbored mutations that clustered in the active site in 12% of sun-exposed melanomas, exclusively in tumors with mutations in BRAF or NRAS. Notably, we identified a recurrent UV-signature, an activating mutation in RAC1 in 9.2% of sun-exposed melanomas. This activating mutation, the third most frequent in our cohort of sun-exposed melanoma after those of BRAF and NRAS, changes Pro29 to serine (RAC1(P29S)) in the highly conserved switch I domain. Crystal structures, and biochemical and functional studies of RAC1(P29S) showed that the alteration releases the conformational restraint conferred by the conserved proline, causes an increased binding of the protein to downstream effectors, and promotes melanocyte proliferation and migration. These findings raise the possibility that pharmacological inhibition of downstream effectors of RAC1 signaling could be of therapeutic benefit.

This paper reports the use of GIS mapping software (ArcMap and ArcInfo Workstation) by the Idaho National Engineering and Environmental Laboratory (INEEL) as a non-intrusive method of locating and characterizing radioactive waste in a 97-acre landfill to aid in planning cleanup efforts. The fine-scale techniques and methods used offer potential application for other burial sites for which hazards indicate a non-intrusive approach. By converting many boxes of paper shipping records in multiple formats into a relational database linked to spatial data, the INEEL has related the paper history to our current GIS technologies and spatial data layers. The wide breadth of GIS techniques and tools quickly display areas in need of remediation as well as evaluate methods of remediation for specific areas as the site characterization is better understood and early assumptions are refined.

In this paper we present a new method for selecting blue horizontal branch (BHB) candidates based on color-color photometry. We make use of the Sloan Digital Sky Survey z band as a surface gravity indicator and show its value for selecting BHB stars from quasars, white dwarfs, and main-sequence A-type stars. Using the g, r, i, and z bands, we demonstrate that extraction accuracies on a par with more traditional u, g, and r photometric selection methods may be achieved. We also show that the completeness necessary to probe major Galactic structure may be maintained. Our new method allows us to efficiently select BHB stars from photometric sky surveys that do not include a u-band filter such as the Panoramic Survey Telescope and Rapid Response System.
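In spirit, the griz color-cut selection described above might look like the sketch below: a (g - r) cut as a temperature proxy combined with an (i - z) cut exploiting the z band's surface-gravity sensitivity. The numeric boundaries are placeholders for illustration, not the paper's values.

```python
def is_bhb_candidate(g: float, r: float, i: float, z: float) -> bool:
    """Hypothetical griz color-box selection for BHB candidates.

    The box edges below are illustrative placeholders, not published cuts.
    """
    g_r = g - r   # crude temperature proxy
    i_z = i - z   # crude surface-gravity proxy via the z band
    return (-0.3 < g_r < 0.1) and (-0.2 < i_z < 0.1)

# Hypothetical stars: (name, g, r, i, z)
stars = [
    ("A", 16.10, 16.25, 16.30, 16.32),  # blue colors, inside the box
    ("B", 17.40, 16.90, 16.70, 16.55),  # too red in g - r
]
print([name for name, *mags in stars if is_bhb_candidate(*mags)])  # → ['A']
```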

... at ORNL and the universities of Kentucky and Tennessee on a method that could help primary care doctors ... with electrodes attached to the scalp. Researchers at the University of Kentucky Medical Center collected EEG ... and University of Tennessee collaboration revealed that the EEG tests succeeded in terms of sensitivity, accuracy ...

Identify Institutional Change: Roles for Sustainability (October 8, 2013) Example of How Roles Affect Sustainability Goals: The following scenario is an example of how roles can affect the implementation of a sustainability goal despite best intentions. Policymakers mandate waste reduction. A waste manager determines that a solution is to recycle more. No one notices that the staff responsible for implementing the solution forgets to order enough recycling bins for a building. Office workers continue to put recyclable material in the trash instead of a recycling bin. Janitorial staff members don't have the time to sort the recyclable material from the trash. Municipal waste personnel dump recyclable material in a landfill. Lesson: It is important that action plans

Assessing the potential property and social impacts of an event, such as a tornado or wildfire, continues to be a challenging research area. From financial markets to disaster management to epidemiology, the importance of understanding the impacts that events create cannot be overstated. Our work describes an approach to fuse information from multiple sources, then to analyze the information cycles to identify prior temporal patterns related to the impact of an event. This approach is then applied to the analysis of news reports from multiple news sources pertaining to several different natural disasters. Results show that our approach can project the severity of the impacts of certain natural disasters, such as the effect of heat waves on droughts and wildfires. In addition, results show that specific types of disaster consistently produce similar impacts each time they occur.

Identify Strategies to Reduce Business Travel for Greenhouse Gas Mitigation (October 7, 2013) The tables below illustrate some of the more common strategies that can enable employees to travel less and travel more efficiently for business. The "Purpose of Travel" analysis in the previous step can be used with the guidance below to help determine what type of trips may be most appropriately substituted with each business travel alternative. Table 1. Strategies that Enable Employees to Travel Less Business Travel Strategy Best Potential Application Best Practices Web meetings/webinars, including option for video Purpose of travel: training, conferences.

Identify Petroleum Reduction Strategies for Vehicles and Mobile Equipment (October 7, 2013) As defined by the Federal Energy Management Program (FEMP), greenhouse gas (GHG) emission reduction strategies for Federal vehicles and equipment are based on the three driving principles of petroleum reduction: Reduce vehicle miles traveled Improve fuel efficiency Use alternative fuels. These strategies provide a framework for an agency to use when developing a strategic plan that can be specifically tailored to match the agency's fleet profile and meet its mission. Agency fleet managers should evaluate petroleum reduction strategies and tactics for each fleet location, based on an evaluation of site-specific

IDENTIFY AND PROTECT YOUR VITAL RECORDS (July 2010) Records Management Division, Office of IT Planning, Architecture, and E-Government, Office of the Chief Information Officer. INTRODUCTION: Each Federal agency is responsible for establishing a Vital Records Program for the identification and protection of those records needed for continuity of operations before, during, and after emergencies; and those records needed to protect the legal and financial rights of the Government and persons affected by Government activities. This means identifying, safeguarding, and having readily available documents, databases, and information systems that support an organization's performance of its essential functions

A 3D-3C Reflection Seismic Survey and Data Integration to Identify the Seismic Response of Fractures and Permeable Zones Over a Known Geothermal Resource at Soda Lake, Churchill Co., NV Geothermal Project. Last modified on July 22, 2011. Project Type / Topic 1: Recovery Act: Geothermal Technologies Program. Project Type / Topic 2: Validation of Innovative Exploration Technologies. Project Description: The Soda Lake geothermal field is an ideal setting to test the applicability of the 3D-3C reflection seismic method because: it is a producing field with a great deal of geologic and drilling data already available; it is in an alluvial valley where the subsurface structures that carry the geothermal fluids have no surface manifestations; and there are downhole geophysical logs of fractures and permeable zones that can be used to ground-truth the new data.

Methods of making articles by powder metallurgy techniques are presented. An article is made by packing a metal powder into a desired shape, raising the temperature of the powder compact to a sintering temperature in the presence of a reducing gas, and alternately increasing and decreasing the pressure of the gas while the temperature is being raised. The product has a greater density than can be achieved by sintering for the same length of time at a constant gas pressure. (AEC)

Electric utilities have historically satisfied customer demand by generating electricity centrally and distributing it through an extensive transmission and distribution network. The author examines targeted demand side management programs as an alternative to system capacity investments once capacity is exceeded. The paper presents an evaluation method to determine how much a utility can afford to pay for distributed resources. 17 refs., 2 figs, 1 tab.
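The paper's full evaluation method weighs many site-specific factors, but its present-value core can be sketched as follows. The numbers and the simple deferral formula below are illustrative assumptions, not figures from the paper:

```python
def deferral_value(capacity_cost, discount_rate, years_deferred):
    """Present-value benefit of postponing a capacity investment.

    Roughly, the utility can afford to pay up to this amount for
    distributed resources that defer the investment by
    `years_deferred`. Ignores escalation and O&M differences.
    """
    return capacity_cost * (1 - 1 / (1 + discount_rate) ** years_deferred)

# Illustrative: a $1000/kW substation upgrade deferred 3 years at 10%.
value = deferral_value(1000, 0.10, 3)
```

The gap between the full capital cost and its discounted deferred cost is the headroom available for paying demand-side resources.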

The present invention provides methods and compositions for accessing, in a generally unbiased manner, a diverse genetic pool for genes involved in biosynthetic pathways. The invention also provides compounds which can be identified by cloning biosynthetic pathways.

This performance analysis evaluated 24 events that occurred at LLNL from January through August 2010. The analysis identified areas of potential work control process and/or implementation weaknesses and several common underlying causes. Human performance improvement and safety culture factors were part of the causal analysis of each event and were analyzed. The collective significance of all events in 2010, as measured by the occurrence reporting significance category and by the proportion of events that have been reported to the DOE ORPS under the ''management concerns'' reporting criteria, does not appear to have increased in 2010. The frequency of reporting in each of the significance categories has not changed in 2010 compared to the previous four years. There is no change indicating a trend in the significance category and there has been no increase in the proportion of occurrences reported in the higher significance category. Also, the frequency of events, 42 events reported through August 2010, is not greater than in previous years and is below the average of 63 occurrences per year at LLNL since 2006. Over the previous four years, an average of 43% of the LLNL's reported occurrences have been reported as either ''management concerns'' or ''near misses.'' In 2010, 29% of the occurrences have been reported as ''management concerns'' or ''near misses.'' This rate indicates that LLNL is now reporting fewer ''management concern'' and ''near miss'' occurrences compared to the previous four years. From 2008 to the present, LLNL senior management has undertaken a series of initiatives to strengthen the work planning and control system with the primary objective to improve worker safety. In 2008, the LLNL Deputy Director established the Work Control Integrated Project Team to develop the core requirements and graded elements of an institutional work planning and control system. By the end of that year this system was documented and implementation had begun. 
In 2009, training of the workforce began and as of the time of this report more than 50% of authorized Integration Work Sheets (IWS) use the activity-based planning process. In 2010, LSO independently reviewed the work planning and control process and confirmed to the Laboratory that the Integrated Safety Management (ISM) System was implemented. LLNL conducted a cross-directorate management self-assessment of work planning and control and is developing actions to respond to the issues identified. Ongoing efforts to strengthen the work planning and control process and to improve the quality of LLNL work packages are in progress: completion of remaining actions in response to the 2009 DOE Office of Health, Safety, and Security (HSS) evaluation of LLNL's ISM System; scheduling more than 14 work planning and control self-assessments in FY11; continuing to align subcontractor work control with the Institutional work planning and control system; and continuing to maintain the electronic IWS application. The 24 events included in this analysis were caused by errors in the first four of the five ISMS functions. The most frequent cause was errors in analyzing the hazards (Function 2). The second most frequent cause was errors occurring when defining the work (Function 1), followed by errors during the performance of work (Function 4). Interestingly, very few errors in developing controls (Function 3) resulted in events. This leads one to conclude that if improvements are made to defining the scope of work and analyzing the potential hazards, LLNL may reduce the frequency or severity of events. Analysis of the 24 events resulted in the identification of ten common causes. Some events had multiple causes, resulting in the mention of 39 causes being identified for the 24 events. The most frequent cause was workers, supervisors, or experts believing they understood the work and the hazards but their understanding was incomplete. 
The second most frequent cause was unclear, incomplete or confusing documents directing the work. Together, these two causes were mentioned 17 times and co

Methods are presented to assess the global risk of nuclear theft and nuclear terrorism, to identify the nuclear facilities and transport legs that pose the highest-priority risks of nuclear theft, and to evaluate policy ...

The Department of Energy's (DOE) non-nuclear facilities generally require only a qualitative accident analysis to assess facility risks in accordance with DOE Order 5481.1B, Safety Analysis and Review System. Achieving a meaningful qualitative assessment of risk necessarily requires the use of suitable non-numerical assessment criteria. Typically, the methods and criteria for assigning facility-specific accident scenarios to the qualitative severity and likelihood classification system in the DOE order require significant judgment in many applications. Systematic methods for more consistently assigning the total accident scenario frequency and associated consequences are required to substantiate and enhance future risk ranking between various activities at Sandia National Laboratories (SNL). SNL's Risk Management and National Environmental Policy Act (NEPA) Department has developed an improved methodology for performing qualitative risk assessments in accordance with the DOE order requirements. Products of this effort are an improved set of qualitative descriptions that permit (1) definition of the severity for both technical and programmatic consequences that may result from a variety of accident scenarios, and (2) qualitative representation of the likelihood of occurrence. These sets of descriptions are intended to facilitate proper application of DOE criteria for assessing facility risks.
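A severity/likelihood classification of the kind described can be sketched as a lookup matrix. The bin names and risk classes below are invented for illustration; DOE Order 5481.1B's actual categories and SNL's qualitative descriptions differ:

```python
# Hypothetical severity and likelihood bins (not the DOE order's own).
SEVERITY = {"negligible": 0, "marginal": 1, "critical": 2, "catastrophic": 3}
LIKELIHOOD = {"improbable": 0, "remote": 1, "occasional": 2, "probable": 3}

# Risk class matrix: rows indexed by likelihood, columns by severity.
RISK_MATRIX = [
    ["low",    "low",    "medium", "high"],
    ["low",    "medium", "medium", "high"],
    ["medium", "medium", "high",   "high"],
    ["medium", "high",   "high",   "high"],
]

def rank_scenario(severity, likelihood):
    """Assign a qualitative risk class to an accident scenario."""
    return RISK_MATRIX[LIKELIHOOD[likelihood]][SEVERITY[severity]]
```

Fixing the matrix in advance is what makes the ranking consistent across analysts, which is the gap the improved methodology addresses.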

Inadequate measures to assure that accurate and conservative values were used to establish the second level undervoltage relay setpoint. The measures established by the licensee for the translation of design requirements were not adequate to assure that the values used to establish the second level undervoltage relay setpoint were accurate and conservative with respect to the technical specifications. In addition, the measures for promptly identifying and correcting the adverse condition were not adequate, as demonstrated by the length of time this condition has existed (since 1987). The failure to accurately translate design requirements was a violation of Criterion III of Appendix B to 10 CFR Part 50, and the untimely corrective actions were a violation of Criterion XVI of Appendix B to 10 CFR Part 50. This violation is noncited in accordance with Section VI.A of NRC's Enforcement Policy, and is in the licensee's corrective action program (Notification 10092429). (Section 1R21.5.b.1.) The finding was of very low safety significance because, although the calculated values were not conservative and were not consistent with the technical specification values, there were administrative procedures in place to prevent exceeding the correct analytical limit. Additionally, there was no actual loss of safety function.


… accounts for 1.8 million deaths in China, 0.57 million in the U.S. (both in 2010), and 7.6 million deaths … leaders may have additional implications in advocating new treatments, guiding the proper use of drugs … Systems - WITS 2011, Shanghai, CHINA, December

The role of molecular recognition is critical to the proper self-assembly of biological macromolecules and their function. Shape complementarity of the mutual recognition interfaces is one of the important factors that guide this interaction. The lock-and-key ... Keywords: data mining, nature of protein interfaces, protein-protein complex

The recognition of faces in unconstrained environments is a challenging problem. The aim of this work is to carry out a comparative study of face recognition methods working in the thermal spectrum (8-12@mm) that are suitable for working properly in ... Keywords: Face recognition, Thermal face recognition, Unconstrained environments

Evolution of a Non-Invasive Method for Providing Assistance to the Heart. H. S. Soroff, MD, and J. Rastegar. The primary function of the ventricular chambers of the heart is to provide the proper volume … in the first part of the cardiac cycle, when the heart is relaxed (cardiac diastole), the device exerts

Here we present 1584 new southern proper motion systems with μ ≥ 0″.18 yr⁻¹ and 16.5 < R59F ≤ 18.0. This search complements the six previous SuperCOSMOS-RECONS (SCR) proper motion searches of the southern sky for stars within the same proper motion range, but with R59F ≤ 16.5. As in previous papers, we present distance estimates for these systems and find that three systems are estimated to be within 25 pc, including one, SCR 1546-5534, possibly within the RECONS 10 pc horizon at 6.7 pc, making it the second nearest discovery of the searches. We find 97 white dwarf candidates with distance estimates between 10 and 120 pc, as well as 557 cool subdwarf candidates. The subdwarfs found in this paper make up nearly half of the subdwarf systems reported from our SCR searches and are significantly redder than those discovered thus far. The SCR searches have now found 155 red dwarfs estimated to be within 25 pc, including 10 within 10 pc. In addition, 143 white dwarf candidates and 1155 cool subdwarf candidates have been discovered. The 1584 systems reported here augment the sample of 4724 systems previously discovered in our SCR searches and imply that additional systems fainter than R59F = 18.0 are yet to be discovered.

Multicore microprocessors have been largely motivated by the diminishing returns in performance and the increased power consumption of single-threaded ILP microprocessors. With the industry already shifting from multicore to many-core microprocessors, software developers must extract more thread-level parallelism from applications. Unfortunately, low power-efficiency and diminishing returns in performance remain major obstacles with many cores. Poor interaction between software and hardware, and bottlenecks in shared hardware structures often prevent scaling to many cores, even in applications where a high degree of parallelism is potentially available. In some cases, throwing additional cores at a problem may actually harm performance and increase power consumption. Better use of otherwise limitedly beneficial cores by software components such as hypervisors and operating systems can improve system-wide performance and reliability, even in cases where power consumption is not a main concern. In response to these observations, we evaluate an approach to throttle concurrency in parallel programs dynamically. We throttle concurrency to levels with higher predicted efficiency from both performance and energy standpoints, and we do so via machine learning, specifically artificial neural networks (ANNs). One advantage of using ANNs over similar techniques previously explored is that the training phase is greatly simplified, thereby reducing the burden on the end user. Using machine learning in the context of concurrency throttling is novel. We show that ANNs are effective for identifying energy-efficient concurrency levels in multithreaded scientific applications, and we do so using physical experimentation on a state-of-the-art quad-core Xeon platform.
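The throttling idea above can be sketched with a toy stand-in for the authors' approach: fit a small neural network to measured efficiency at a few concurrency levels, then throttle to the level the network predicts is best. The profiling numbers, network size, and training loop below are all invented for illustration; the real system predicts across program phases and hardware counters, not a single curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented profiling data: performance-per-watt at a few thread counts.
threads = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
efficiency = np.array([0.30, 0.55, 0.80, 0.95, 0.70, 0.40])

x = (threads / threads.max()).reshape(-1, 1)   # normalized inputs
y = efficiency.reshape(-1, 1)

# Tiny one-hidden-layer ANN trained by full-batch gradient descent (MSE).
W1 = rng.normal(size=(1, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.2
for _ in range(30000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                        # dLoss/dpred for 0.5 * MSE
    dW2 = h.T @ err; db2 = err.sum(0, keepdims=True)
    dh = (err @ W2.T) * (1 - h ** 2)      # backprop through tanh
    dW1 = x.T @ dh; db1 = dh.sum(0, keepdims=True)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g / len(x)

# Throttle concurrency to the level the ANN predicts is most efficient.
pred = np.tanh(x @ W1 + b1) @ W2 + b2
best = int(threads[pred.argmax()])
```

Here adding cores past the predicted optimum would cost both performance and power, which is exactly the case the paper argues for throttling.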

We introduce a one-center method in spherical coordinates to carry out Hartree-Fock calculations. Both the radial and angular wave functions are expanded in B-splines, and the radial and angular knots are adjusted to deal with cusps properly, resulting in significantly improved convergence for several typical closed-shell diatomic molecules. B-splines can represent both bound-state and continuum-state wave functions properly, and the present approach has been applied to investigating the ionization dynamics of H₂ in an intense laser field, adopting the single-active-electron model.

A casting device includes a covered crucible having a top opening and a bottom orifice, a lid covering the top opening, a stopper rod sealing the bottom orifice, and a reusable mold having at least one chamber, a top end of the chamber being open to and positioned below the bottom orifice and a vacuum tap into the chamber being below the top end of the chamber. A casting method includes charging a crucible with a solid material and covering the crucible, heating the crucible, melting the material, evacuating a chamber of a mold to less than 1 atm absolute through a vacuum tap into the chamber, draining the melted material into the evacuated chamber, solidifying the material in the chamber, and removing the solidified material from the chamber without damaging the chamber.

A semi-automatic method is described for the weld joining of pipes and fittings which utilizes the inert gas-shielded consumable electrode electric arc welding technique, comprising laying down the root pass at a first peripheral velocity and thereafter laying down the filler passes over the root pass necessary to complete the weld by revolving the pipes and fittings at a second peripheral velocity different from the first peripheral velocity, maintaining the welding head in a fixed position as to the specific direction of revolution, while the longitudinal axis of the welding head is disposed angularly in the direction of revolution at amounts between twenty minutes and about four degrees from the first position.

One or both of two methods and systems are used to determine concentration of a known material in an unknown mixture on the basis of the measured interaction of electromagnetic waves upon the mixture. One technique is to utilize a multivariate analysis patch technique to develop a library of optimized patches of spectral signatures of known materials containing only those pixels most descriptive of the known materials by an evolutionary algorithm. Identity and concentration of the known materials within the unknown mixture is then determined by minimizing the residuals between the measurements from the library of optimized patches and the measurements from the same pixels from the unknown mixture. Another technique is to train a neural network by the genetic algorithm to determine the identity and concentration of known materials in the unknown mixture. The two techniques may be combined into an expert system providing cross checks for accuracy. 37 figs.
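The residual-minimization step of the first technique can be sketched as a least-squares unmixing problem. The evolutionary patch selection is omitted here (the informative pixels are assumed already chosen), and the spectral values are made-up numbers, not real data:

```python
import numpy as np

# Library of reference spectra at the selected pixels
# (rows: pixels, columns: known materials A and B). Illustrative values.
library = np.array([
    [0.9, 0.1],
    [0.4, 0.6],
    [0.2, 0.8],
    [0.7, 0.3],
])

# Measured mixture spectrum at the same pixels:
# here synthesized as 30% material A + 70% material B.
measured = library @ np.array([0.3, 0.7])

# Concentrations minimizing the residual between library and measurement.
conc, *_ = np.linalg.lstsq(library, measured, rcond=None)
```

In practice the fit would be repeated against competing libraries, with the smallest residual identifying which materials are present.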

A method of determining the location and history of metallic nitride and/or oxynitride inclusions in metallic melts. The method includes the steps of labeling metallic nitride and/or oxynitride inclusions by making a coreduced metallic-hafnium sponge from a mixture of hafnium chloride and the chloride of a metal, reducing the mixed chlorides with magnesium, nitriding the hafnium-labeled metallic-hafnium sponge, and seeding the sponge to be melted with hafnium-labeled nitride inclusions. The ingots are neutron activated and the hafnium is located by radiometric means. Hafnium possesses exactly the proper metallurgical and radiochemical properties for this use.

In keeping with the definition that biotechnology is really no more than a name given to a set of techniques and processes, the authors apply a set of fuzzy techniques to chemical industry problems such as finding the proper proportion of raw mix to control pollution, studying flow rates, and improving product quality. We use fuzzy control theory, fuzzy neural networks, fuzzy relational equations, and genetic algorithms to solve these problems. When the solution to the problem can have certain concepts or attributes as indeterminate, the only model that can tackle such a situation is the neutrosophic model. The authors have also used these models in this book to study the use of biotechnology in chemical industries. This book has six chapters. The first chapter gives a brief description of biotechnology. The second chapter deals with the proper proportion of the mix of raw materials in cement industries to minimize pollution using fuzzy control theory. Chapter three gives the method of determination of te...

The analysis of data complexity is a proper framework to characterize the tackled classification problem and to identify domains of competence of classifiers. As a practical outcome of this framework, the proposed data complexity measures may facilitate ... Keywords: Classification, Data complexity, Fuzzy rule based systems, Genetic fuzzy systems

The purpose of this research is twofold: to standardize and to validate exposure assessment methods. First, an attempt is made to standardize the manner in which exposure assessment methods are developed. Literature on the subject is reviewed and seven elements discovered to be common are discussed. The seven elements are causative agents, exposure groups, exposure-modifying parameters, industrial hygiene measurement data, misclassification issues, validation issues, and reliability issues. It is believed that thinking in terms of these elements will yield more consistent and complete exposure assessment models. Three types of exposure estimation methods are reviewed in this framework. These methods are selected because they are the most thorough and represent the most frequently used and referenced types of estimation strategies: the statistical model, the deterministic model, and the multiplicative model. Second, the paper reports on an attempt to validate a semiquantitative exposure assessment model against industrial hygiene data collected from employees of one firm in the maritime industry. The set of data contains 440 samples, 75 percent of them censored by the method limit of detection. Methods to calculate an average concentration with nondetectable data are discussed. It is concluded that (1) the model does not predict the data well, (2) the industrial hygiene data do not properly fit the tails of a lognormal distribution, and (3) average exposure to benzene in the (un)loading of petrochemicals from tankers is decidedly below exposure limits.
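The simplest of the averaging methods discussed for heavily censored data is substitution, where each nondetect is replaced by a fraction of the limit of detection. A minimal sketch (the paper's own choice of method is not reproduced here, and the sample values are invented):

```python
def censored_mean(samples, lod, sub_factor=0.5):
    """Average with nondetects replaced by sub_factor * LOD.

    `samples` holds detected values, with None marking a nondetect.
    Simple substitution (LOD/2 here); lognormal maximum-likelihood
    methods are generally preferred when censoring is this heavy.
    """
    vals = [v if v is not None else sub_factor * lod for v in samples]
    return sum(vals) / len(vals)

# Illustrative: two detects and two nondetects below a LOD of 1.0.
avg = censored_mean([2.0, None, 4.0, None], lod=1.0)
```

With 75 percent of 440 samples censored, the choice of substitution factor dominates the estimate, which is why the paper's distribution-fit finding matters.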

Synthetic DNA nanostructures are typically held together primarily by Holliday junctions. One of the most basic types of structures possible to assemble with only DNA and Holliday junctions is the triangle. To date, however, only equilateral triangles have been assembled in this manner - primarily because it is difficult to figure out what configurations of Holliday triangles have low strain. Early attempts at identifying such configurations relied upon calculations that followed the strained helical paths of DNA. Those methods, however, were computationally expensive, and failed to find many of the possible solutions. I have developed a new approach to identifying Holliday triangles that is computationally faster, and finds well over 95% of the possible solutions. The new approach is based on splitting the problem into two parts. The first part involves figuring out all the different ways that three featureless rods of the appropriate length and diameter can weave over and under one another to form a triangle. The second part of the computation entails seeing whether double helical DNA backbones can fit into the shape dictated by the rods in such a manner that the strands can cross over from one domain to the other at the appropriate spots. Structures with low strain (that is, good fit between the rods and the helices) on all three edges are recorded as promising for assembly.

We introduce a method (MONKEY) to identify conserved transcription-factor binding sites in multispecies alignments. MONKEY employs probabilistic models of factor specificity and binding site evolution, on which basis we compute the likelihood that putative sites are conserved and assign statistical significance to each hit. Using genomes from the genus Saccharomyces, we illustrate how the significance of real sites increases with evolutionary distance and explore the relationship between conservation and function.
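MONKEY's likelihood computation couples a factor-specificity model with a model of binding-site evolution across the alignment; only the simpler specificity ingredient is sketched below, as a log-likelihood ratio of a position weight matrix against background. The 3-bp matrix is a hypothetical example, not a real factor model:

```python
import math

# Illustrative position weight matrix for a 3-bp motif
# (per-position base probabilities; made-up numbers).
pwm = [
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
]
background = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}

def log_odds(site):
    """Log-likelihood ratio: factor-specificity model vs background."""
    return sum(math.log(pwm[i][b] / background[b]) for i, b in enumerate(site))
```

In the full method this score is computed jointly over orthologous sites under an evolutionary model, so a conserved match scores higher with increasing species divergence.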

This work is directed towards developing flexible Bayesian statistical methods in the semi- and nonparamteric regression modeling framework with special focus on analyzing data from biological and genetic experiments. This dissertation attempts to solve two such problems in this area. In the first part, we study penalized regression splines (P-splines), which are low-order basis splines with a penalty to avoid under- smoothing. Such P-splines are typically not spatially adaptive, and hence can have trouble when functions are varying rapidly. We model the penalty parameter inherent in the P-spline method as a heteroscedastic regression function. We develop a full Bayesian hierarchical structure to do this and use Markov Chain Monte Carlo tech- niques for drawing random samples from the posterior for inference. We show that the approach achieves very competitive performance as compared to other methods. The second part focuses on modeling DNA microarray data. Microarray technology enables us to monitor the expression levels of thousands of genes simultaneously and hence to obtain a better picture of the interactions between the genes. In order to understand the biological structure underlying these gene interactions, we present a hierarchical nonparametric Bayesian model based on Multivariate Adaptive Regres-sion Splines (MARS) to capture the functional relationship between genes and also between genes and disease status. The novelty of the approach lies in the attempt to capture the complex nonlinear dependencies between the genes which could otherwise be missed by linear approaches. The Bayesian model is flexible enough to identify significant genes of interest as well as model the functional relationships between the genes. The effectiveness of the proposed methodology is illustrated on leukemia and breast cancer datasets.
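The Bayesian hierarchy with a heteroscedastic penalty function is beyond a short example, but the P-spline machinery it builds on can be sketched with a single global penalty: a uniform B-spline basis plus a second-order difference penalty on the coefficients. The basis construction, knot layout, and penalty weight below are standard ingredients assumed for illustration, not the dissertation's model:

```python
import numpy as np

def bspline_basis(x, xl, xr, nseg, deg=3):
    """Uniformly spaced B-spline basis via the Cox-de Boor recursion."""
    h = (xr - xl) / nseg
    knots = xl + h * np.arange(-deg, nseg + deg + 1)
    # Degree-0 indicator functions, then raise the degree step by step.
    B = ((x[:, None] >= knots[None, :-1]) & (x[:, None] < knots[None, 1:])).astype(float)
    for d in range(1, deg + 1):
        B = ((x[:, None] - knots[None, :-(d + 1)]) * B[:, :-1]
             + (knots[None, d + 1:] - x[:, None]) * B[:, 1:]) / (d * h)
    return B

# Illustrative smooth target; a real analysis would fit noisy data.
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x)

B = bspline_basis(x, 0.0, 1.0, nseg=10)
D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # second-order difference penalty
lam = 1e-4                                     # global penalty weight
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fit = B @ coef
```

Replacing the constant `lam` with a smoothly varying function of `x`, itself given a prior and sampled by MCMC, is the spatially adaptive extension the first part of the work develops.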


The authors briefly survey the jurisdictions where load-following products have been successfully used, examine the characteristics of the load-following products, and explain the shortcomings and inaccurate conclusions of previous analyses. A more thorough analysis reveals that the load-following products fulfill the public policy objectives for which they have been designed and do not adversely impact wholesale electricity markets.

This strategy guideline provides step-by-step procedures for evaluating preferred cost-effective options for energy efficient water heater alternatives based on local utility rates, climate, and anticipated loads. These procedures, developed both for individual water heater applications (both single and multi-family) and multifamily central systems, provide users with projections on operating cost savings over a 10-year time horizon for retrofit applications and on a cash flow basis for new construction.
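The operating-cost projection at the heart of the procedure can be sketched as a simple net-savings calculation. The figures and the flat-rate, undiscounted arithmetic below are illustrative only; the guideline itself accounts for local utility rates, climate, and load profiles:

```python
def ten_year_savings(base_annual_kwh, alt_annual_kwh, rate_per_kwh,
                     alt_extra_cost, years=10):
    """Operating savings of the efficient alternative over `years`,
    net of its extra first cost. Ignores discounting and rate
    escalation, which the full procedure would include."""
    annual = (base_annual_kwh - alt_annual_kwh) * rate_per_kwh
    return annual * years - alt_extra_cost

# Illustrative: heat pump water heater vs electric resistance baseline.
net = ten_year_savings(4500, 1500, 0.12, alt_extra_cost=1800)
```

A negative result at local rates would flag the alternative as not cost-effective for that retrofit, which is exactly the screening the guideline formalizes.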

The selection and spacing of appropriate centralizers can improve the cementation of high-angle and horizontal wells. Mud removal is one of the most important factors in obtaining a good cement job. Effective centralization assists in mud removal and helps ensure an even cement coat around the casing. Centralizers for horizontal wells have to fulfill two requirements: They should have a high restoring capability and a low moving force, and they should allow pipe rotation and reciprocation. Conventional bow-type centralizers have been used successfully in some horizontal wells. But as the horizontal section length increases, special centralizers, such as low-moving-force, bow-type centralizers and rigid centralizers, may be necessary. The paper describes the following: cementing liners, centralization, torque and drag, centralizer placement, the bow-type centralizer, the rigid centralizer, and the downhole activated centralizer.

This paper presents a method for Critical Software Event Execution Reliability (Critical SEER). The Critical SEER method is intended for high assurance software that operates in an environment where transient upsets could occur, causing a disturbance of the critical software event execution order, which could cause safety or security hazards. The method has a finite automata based module that watches (hence SEER) and tracks the critical events and ensures they occur in the proper order or else a fail safe state is forced. This method is applied during the analysis, design and implementation phases of software engineering.
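A minimal sketch of the finite-automata watcher module: events must arrive in the expected order, and any deviation forces a fail-safe state. The event names and interface are hypothetical; a real implementation would hook into the software's event dispatch and trigger an actual safing action:

```python
class CriticalSeer:
    """Finite-automaton watcher for critical software event order."""

    def __init__(self, expected_order):
        self.expected = list(expected_order)
        self.state = 0          # index of the next expected event
        self.fail_safe = False

    def observe(self, event):
        """Return True if the event is in order; otherwise force fail-safe."""
        if self.fail_safe:
            return False
        if self.state < len(self.expected) and event == self.expected[self.state]:
            self.state += 1
            return True
        self.fail_safe = True   # out-of-order event, e.g. from a transient upset
        return False
```

Once `fail_safe` is set, the automaton refuses all further events, modeling the forced safe state the method requires after a disturbance.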

Apparatus and processes are described for recognizing and identifying materials. Characteristic spectra are obtained for the materials via spectroscopy techniques including nuclear magnetic resonance spectroscopy, infrared absorption analysis, x-ray analysis, mass spectroscopy and gas chromatography. Desired portions of the spectra may be selected and then placed in proper form and format for presentation to a number of input layer neurons in an offline neural network. The network is first trained according to a predetermined training process; it may then be employed to identify particular materials. Such apparatus and processes are particularly useful for recognizing and identifying organic compounds such as complex carbohydrates, whose spectra conventionally require a high level of training and many hours of hard work to identify, and are frequently indistinguishable from one another by human interpretation.

Apparatus and processes for recognizing and identifying materials. Characteristic spectra are obtained for the materials via spectroscopy techniques including nuclear magnetic resonance spectroscopy, infrared absorption analysis, x-ray analysis, mass spectroscopy and gas chromatography. Desired portions of the spectra may be selected and then placed in proper form and format for presentation to a number of input layer neurons in an offline neural network. The network is first trained according to a predetermined training process; it may then be employed to identify particular materials. Such apparatus and processes are particularly useful for recognizing and identifying organic compounds such as complex carbohydrates, whose spectra conventionally require a high level of training and many hours of hard work to identify, and are frequently indistinguishable from one another by human interpretation.

This work applies the empirical mode decomposition (EMD) method to data on real quarterly oil price (West Texas Intermediate - WTI) and U.S. gross domestic product (GDP). This relatively new method is adaptive and capable of handling non-linear and non-stationary data. Correlation analysis of the decomposition results was performed and examined for insights into the oil-macroeconomy relationship. Several components of this relationship were identified. However, the principal one is that the medium-run cyclical component of the oil price exerts a negative and exogenous influence on the main cyclical component of the GDP. This can be interpreted as the supply-driven or supply-shock component of the oil price-GDP relationship. In addition, weak correlations suggesting a lagging demand-driven, an expectations-driven, and a long-run supply-driven component of the relationship were also identified. Comparisons of these findings with significant oil supply disruption and recession dates were supportive. The study identified a number of lessons applicable to recent oil market events, including the eventuality of persistent economic and price declines following a long oil price run-up. In addition, it was found that oil-market related exogenous events are associated with short- to medium-run price implications regardless of whether they lead to actual supply disruptions.

Communications device identification methods, communications methods, wireless communications readers, wireless communications systems, and articles of manufacture are described. In one aspect, a communications device identification method includes providing identification information regarding a group of wireless identification devices within a wireless communications range of a reader, using the provided identification information, selecting one of a plurality of different search procedures for identifying unidentified ones of the wireless identification devices within the wireless communications range, and identifying at least some of the unidentified ones of the wireless identification devices using the selected one of the search procedures.
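The patent does not name the search procedures, so the sketch below invents two plausible ones: a random-slot procedure for small populations and a deterministic binary-tree walk for large ones, with the selection driven by the identification information (here, an estimated tag count). Procedure names, the threshold, and the toy tag IDs are all assumptions:

```python
def select_search_procedure(estimated_population):
    """Pick an identification procedure from the reader's repertoire.

    Illustrative heuristic: the patented method bases the choice on
    identification information gathered from the devices in range.
    """
    if estimated_population <= 16:
        return "slotted_aloha"    # few tags: random slots resolve quickly
    return "binary_tree_walk"     # many tags: deterministic prefix search

def tree_walk(tag_ids, bits, prefix=""):
    """Deterministic binary-tree singulation over `bits`-bit tag IDs."""
    matching = [t for t in tag_ids if t.startswith(prefix)]
    if not matching:
        return []
    if len(matching) == 1 or len(prefix) == bits:
        return matching           # prefix isolates a single responder
    return (tree_walk(tag_ids, bits, prefix + "0")
            + tree_walk(tag_ids, bits, prefix + "1"))
```

The tree walk identifies every tag by narrowing prefixes until each responder is alone, which is why it scales better when many devices collide.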

Ion-assisted plasma enhanced deposition of diamond-like carbon (DLC) films on the surface of photovoltaic solar cells is accomplished with a method and apparatus for controlling ion energy. The quality of DLC layers is fine-tuned by a properly biased system of special electrodes and by exact control of the feed gas mixture compositions. Uniform (with degree of non-uniformity of optical parameters less than 5%) large area (more than 110 cm²) DLC films with optical parameters varied within the given range and with stability against harmful effects of the environment are achieved.

An ultra-low magnetic field NMR system can non-invasively examine containers. Database matching techniques can then identify hazardous materials within the containers. Ultra-low field NMR systems are ideal for this purpose because they do not require large powerful magnets and because they can examine materials enclosed in conductive shells such as lead shells. The NMR examination technique can be combined with ultra-low field NMR imaging, where an NMR image is obtained and analyzed to identify target volumes. Spatial sensitivity encoding can also be used to identify target volumes. After the target volumes are identified the NMR measurement technique can be used to identify their contents.
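The database-matching step can be sketched as nearest-signature lookup over binned spectra. The material names and spectral values below are entirely made up for illustration; a real system would match against measured ultra-low-field NMR signatures:

```python
import math

# Illustrative reference signatures (binned spectra, invented numbers).
reference = {
    "material_A": [0.1, 0.8, 0.3, 0.0],
    "material_B": [0.7, 0.1, 0.1, 0.5],
    "material_C": [0.2, 0.2, 0.9, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two spectra."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(spectrum):
    """Return the reference material most similar to the measurement."""
    return max(reference, key=lambda name: cosine(spectrum, reference[name]))
```

Restricting the comparison to spectra from an identified target volume, as the imaging and spatial-encoding steps allow, keeps signatures from different regions of the container from blurring together.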

Life Cycle Assessment (LCA) should be used to assist carbon capture and sequestration (CCS) planners to reduce greenhouse gas (GHG) emissions and avoid unintended environmental trade-offs. LCA is an analytical framework for determining environmental impacts resulting from processes, products, and services. All life cycle stages are evaluated including raw material sourcing, processing, operation, maintenance, and component end-of-life, as well as intermediate stages such as transportation. In recent years a growing number of LCA studies have analyzed CCS systems. We reviewed 50+ LCA studies, and selected 11 studies that compared the environmental performance of 23 electric power plants with and without CCS. Here we summarize and interpret the findings of these studies. Regarding overall climate-mitigation effectiveness of CCS, we distinguish between the capture percentage of carbon in the fuels, the net carbon dioxide (CO2) emission reduction, and the net GHG emission reduction. We also identify trade-offs between the climate benefits and the potential increased non-climate impacts of CCS. Emissions of non-CO2 flue gases such as NOx may increase due to the greater throughput of fuel, and toxicity issues may arise due to the use of monoethanolamine (MEA) capture solvent, resulting in ecological and human health impacts. We discuss areas where improvements in LCA data or methods are needed. The decision to implement CCS should be based on knowledge of the overall environmental impacts of the technologies, not just their carbon capture effectiveness. LCA will be an important tool in providing that knowledge.
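The distinction between capture percentage and net emission reduction comes down to the energy penalty: capturing CO2 consumes energy, so more fuel is burned per kWh delivered. The arithmetic below is illustrative, not figures from the reviewed studies:

```python
def net_co2_reduction(capture_fraction, energy_penalty):
    """Net CO2 emission reduction per unit of delivered electricity,
    relative to the same plant without capture.

    The energy penalty inflates fuel throughput per kWh, so the net
    reduction is always below the capture fraction. Upstream fuel-cycle
    GHG emissions, which lower it further, are ignored here.
    """
    emission_with_ccs = (1 - capture_fraction) / (1 - energy_penalty)
    return 1 - emission_with_ccs

# Illustrative: 90% capture with a 25% energy penalty.
net = net_co2_reduction(0.90, 0.25)
```

Here a plant advertising 90% capture delivers only about an 87% net CO2 reduction, and the gap widens once full life-cycle GHG emissions are counted, which is the review's central caution.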

We present the results of a two-year project focused on a common social engineering attack method called "spear phishing". In a spear phishing attack, the user receives an email with information specifically focused on the user. This email contains either a malware-laced attachment or a link to download the malware, which has been disguised as a useful program. Spear phishing attacks have been one of the most effective avenues for attackers to gain initial entry into a target network. This project focused on a proactive approach to spear phishing. To create an effective, user-specific spear phishing email, the attacker must research the intended recipient. We believe that much of the information used by the attacker is provided by the target organization's own external website. Thus, when researching potential targets, the attacker leaves signs of his research in the webserver's logs. We created tools and visualizations to improve cybersecurity analysts' ability to quickly understand a visitor's visit patterns and interests. With these log-parsing tools and visualizations, analysts can more quickly identify truly suspicious visitors, search for potential spear-phishing targets, and improve security around those users before the spear phishing email is sent.
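The reconnaissance pattern described above can be hunted for mechanically. The sketch below is our own illustration, not the project's tooling: it assumes a common Apache-style access-log format and a hypothetical `/staff/` path convention for employee profile pages, and flags visitors who fetch many distinct profiles.

```python
import re
from collections import defaultdict

# Hypothetical sketch: flag log visitors who crawl many staff-profile pages,
# a pattern consistent with spear-phishing reconnaissance. The log format
# and the "/staff/" path convention are assumptions, not from the report.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET (\S+)')

def suspicious_visitors(log_lines, staff_prefix="/staff/", threshold=3):
    pages = defaultdict(set)               # ip -> set of staff pages fetched
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2).startswith(staff_prefix):
            pages[m.group(1)].add(m.group(2))
    # A visitor touching many distinct staff profiles is worth a closer look.
    return {ip for ip, p in pages.items() if len(p) >= threshold}
```

An analyst would then cross-reference the flagged addresses against later inbound email, as the project's visualizations are described as supporting.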

This invention comprises a backscattering spectrometry method and device for identifying and quantifying impurities in a workpiece during processing and manufacturing of that workpiece. While the workpiece is implanted with an ion beam, ions from that same beam backscatter as a result of collisions with the known atoms and with impurities within the workpiece. The ions backscatter along a predetermined scattering angle and are filtered using a self-supporting filter that stops the lower-energy ions, which have collided with the lighter known atoms of the workpiece. Ions that pass through the filter carry greater energy, having collided with impurities of greater mass than the known atoms of the workpiece. A detector counts the number and measures the energy of the ions which pass through the filter. From the energy determination and knowledge of the scattering angle, a mass calculation determines the identity of the impurity, and from the ion count and the solid angle at the scattering angle, a relative concentration of the impurity is obtained.
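The mass calculation rests on the standard backscattering kinematic factor: the ratio of backscattered to incident ion energy depends only on the projectile mass, target mass, and scattering angle. The sketch below states that standard relation and inverts it numerically; the function names and the bisection bounds are our own illustrative choices, not the patent's.

```python
import math

# Standard backscattering kinematics: K is the energy ratio E1/E0 for a
# projectile of mass m1 scattering off a target atom of mass m2 at angle
# theta (radians). K increases monotonically with m2, so a measured K can
# be inverted for the target mass by bisection.
def kinematic_factor(m1, m2, theta):
    s, c = math.sin(theta), math.cos(theta)
    return ((math.sqrt(m2**2 - (m1 * s)**2) + m1 * c) / (m1 + m2)) ** 2

def target_mass(m1, theta, k_measured, hi=400.0):
    lo = m1 + 1e-6                         # m2 must exceed m1*sin(theta)
    for _ in range(100):                   # bisection on the monotonic K(m2)
        mid = 0.5 * (lo + hi)
        if kinematic_factor(m1, mid, theta) < k_measured:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, a 4 amu helium ion scattered at 170 degrees off a gold atom (197 amu) retains a characteristic fraction of its energy, from which the 197 amu mass can be recovered.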

Amended records are those methods with changes published after the official printing date of September 1, 2011. Announcing Amended Methods: Official Methods and Recommended Practices of the AOCS (Methods).

Methods Development. EPRI and NETL collaboratively funded a $3-million program under the DOE/University of North Dakota Energy and Environmental Research Center (UNDEERC) Jointly Sponsored Research Program (JSRP) to evaluate, develop, and validate a mercury speciation method for flue gas produced by coal-fired plants. There was a 60/40 percent split of the funding, as required under the JSRP for this two-year effort. The work conducted by the EERC identified the Ontario Hydro Method as the best mercury speciation method. The EERC has validated the Ontario Hydro Method at both pilot and full scale. Radian International aided in the full-scale validation, with a written protocol of the method being finalized through the American Society for Testing and Materials (ASTM).

Many different methods of assessing the energy savings potential at federal installations and identifying attractive projects for capital investment have been used by the different federal agencies. These methods range from high-level estimating tools to detailed design tools, both manual and software-assisted. The methods have different purposes and provide results that are used for different parts of the project identification and implementation process. Seven different assessment methods are evaluated in this study. These methods were selected by the program managers at the DoD Energy Policy Office and the DOE Federal Energy Management Program (FEMP). Each of the methods was applied to similar buildings at Bolling Air Force Base (AFB), unless it was inappropriate or the method was designed to make an installation-wide analysis rather than focusing on particular buildings. Staff at Bolling AFB controlled the collection of data.

Determining soil carbon (C) with high precision is an essential requisite for the success of the terrestrial C sequestration program. The informed choice of management practices for different terrestrial ecosystems rests upon accurately measuring the potential for C sequestration. Numerous methods are available for assessing soil C. Chemical analysis of field-collected samples using a dry combustion method is regarded as the standard method. However, conventional sampling of soils and their subsequent chemical analysis is expensive and time consuming. Furthermore, these methods are not sufficiently sensitive to identify small changes over time in response to alterations in management practices or changes in land use. Presently, several different in situ analytic methods are being developed that purportedly offer increased accuracy, precision, and cost-effectiveness over traditional ex situ methods. We consider that, at this stage, a comparative discussion of different soil C determination methods will improve the understanding needed to develop a standard protocol.

This invention provides a method for identifying cells expressing a target single chain antibody (scFv) directed against a target antigen from a collection of cells that includes cells that do not express the target scFv, comprising the step of combining the collection of cells with an anti-idiotype directed to an antibody specific for the target antigen and detecting interaction, if any, of the anti-idiotype with the cells, wherein the occurrence of an interaction identifies the cell as one which expresses the target scFv. This invention also provides a method for making a single chain antibody (scFv) directed against an antigen, wherein the selection of clones is made based upon interaction of those clones with an appropriate anti-idiotype, and heretofore inaccessible scFv so made. This invention provides the above methods or any combination thereof. Finally, this invention provides various uses of these methods.

A method is set forth for identifying, imaging, and monitoring dry or fluid-saturated underground reservoirs using seismic waves reflected from target porous or fractured layers. The porous or fractured layer is imaged by low-pass filtering the windowed reflections from the target layers, retaining frequencies below the lowest corner (or full width at half maximum) of the recorded frequency spectra. Additionally, the ratio of image amplitudes is shown to be approximately proportional to reservoir permeability, fluid viscosity, and the fluid saturation of the porous or fractured layers.
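The processing chain above (window, low-pass, compare amplitudes) can be sketched in a few lines. This is a minimal illustration, assuming a simple moving-average filter as a stand-in for whatever low-pass filter the patent actually specifies, and peak absolute amplitude as the "image amplitude":

```python
# Minimal sketch of the imaging step: window the reflection, low-pass it with
# a moving average (standing in for a filter keeping frequencies below the
# spectrum's low corner), and compare peak amplitudes. Window bounds and
# filter width are illustrative assumptions.
def low_pass(trace, width=5):
    half = width // 2
    return [sum(trace[max(0, i - half): i + half + 1]) /
            len(trace[max(0, i - half): i + half + 1])
            for i in range(len(trace))]

def image_amplitude(trace, start, end, width=5):
    windowed = trace[start:end]            # windowed reflection from the target layer
    return max(abs(x) for x in low_pass(windowed, width))

def amplitude_ratio(trace_a, trace_b, start, end):
    # Per the abstract, this ratio tracks permeability, viscosity, and saturation.
    return image_amplitude(trace_a, start, end) / image_amplitude(trace_b, start, end)
```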

This paper describes the methodology followed to perform a co-simulation between 1D (OpenWAM) and 3D (FLUENT) CFD codes. The Method of Characteristics (MoC) has been chosen to transfer the information between the two domains by properly updating the ... Keywords: 1D modeling, 1D-3D coupling, CFD simulation, Co-simulation, Method of Characteristics, User defined function

Issues in building energy software accuracy are often identified by comparative, analytical, and empirical testing as delineated in the BESTEST methodology. As described in this report, window-related discrepancies in heating energy predictions were identified through comparative testing of EnergyPlus and DOE-2. Multiple causes for discrepancies were identified, and software fixes are recommended to better align the models with the intended algorithms and underlying test data.

A method of analyzing computer intrusion detection information that looks beyond known attacks and abnormal access patterns to the critical information that an intruder may want to access. Unique target identifiers and the type of work performed by the networked targets are added to audit log records. Analysis using vector space modeling, dissimilarity matrix comparison, and clustering of the event records is then performed.
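The three analysis steps named above can be sketched concretely. This is our own toy construction, not the patented system: audit records are embedded as bag-of-words vectors (vector space modeling), pairwise cosine dissimilarities form the matrix, and records closer than a threshold are merged into the same group (a crude single-link clustering).

```python
import math
from collections import Counter

def vectorize(record):
    # Bag-of-words vector over whitespace-split tokens (simplifying assumption).
    return Counter(record.lower().split())

def dissimilarity(a, b):
    # Cosine dissimilarity: 0 for identical direction, 1 for no shared terms.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - dot / (na * nb)

def cluster(records, threshold=0.5):
    vecs = [vectorize(r) for r in records]
    labels = list(range(len(records)))     # merge labels when records are close
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            if dissimilarity(vecs[i], vecs[j]) < threshold:
                tgt, src = labels[i], labels[j]
                labels = [tgt if l == src else l for l in labels]
    return labels
```

Records mentioning the same target identifier and work type then land in the same cluster, surfacing groups of events aimed at the same critical information.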

This is a report on novel slagging and fouling mitigation methods in the coal-fired power generation industry. The project was identified by EPRI in response to member needs to compile a snapshot of approaches to mitigating slagging and fouling of coal-fired boilers as the industry migrates to burning off-design coal.

The determination of reservoir quality and its spatial distribution is a key objective in reservoir characterization. This is especially challenging for carbonates because, due to the effects of diagenesis, quality rarely follows depositional patterns. This study integrates data from thin sections and core analyses with measurements of Nuclear Magnetic Resonance (NMR) T2 relaxation times. It presents a novel approach to the use of NMR by applying geological and statistical analysis to define relationships between pore characteristics and the T2 data, from which a method to identify pore origin from NMR alone is developed. One hundred and three samples taken from eleven wells located in fields of the Middle East, Alabama and Texas were used in the study. Modeling of the T2 spectra as the sum of three normal components resulted in the definition of 9 parameters representing the average, the variability, and the percentage of total porosity of the specific pore sizes present in the sample. Each specific pore size corresponds to one of the following genetic pore types: intergranular, matrix, dissolution-enhanced, intercrystalline, vuggy, and cement-reduced. Among the 9 parameters, two variables were identified as having the highest degree of geological significance that could be used to discriminate between pore categories: µmax, which represents the largest average pore size of all pore types identified in the sample, and σmain, which represents the size variability of the most abundant pore type. Based on the joint distribution of µmax and σmain computed for each pore category, the probability that an unclassified sample belongs to each of the pore categories is calculated, and the sample is assigned to the category with the highest probability. The accuracy of the method was investigated by comparing the NMR-predicted pore origin with the genetic pore type described from thin section. A result of 89 successful predictions out of 103 samples was obtained.
These promising results indicate that T2 time can be a useful identifier of carbonate pore types. Success in this work takes us closer to identifying genetic pore types from NMR logs with minimal calibration against borehole cores and will help predict the spatial distribution of poroperm facies in complex carbonate reservoirs with much improved accuracy.
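The final assignment step has the shape of a simple Bayes classifier over (µmax, σmain). The sketch below is schematic: the per-category statistics are invented for illustration, and the two variables are treated as independent Gaussians, a simplifying assumption the study's joint distributions need not satisfy.

```python
import math

# Assign a sample to the pore category with the highest Gaussian likelihood
# of its (mu_max, sigma_main) pair. Category statistics are illustrative.
def gauss(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def classify(mu_max, sigma_main, categories):
    # categories: name -> ((mean_mu, sd_mu), (mean_sigma, sd_sigma))
    def score(stats):
        (mm, sm), (ms, ss) = stats
        return gauss(mu_max, mm, sm) * gauss(sigma_main, ms, ss)
    return max(categories, key=lambda name: score(categories[name]))
```

With statistics estimated from classified training samples, each new T2 measurement is assigned to its most probable pore category, mirroring the 89/103 validation described above.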

This invention comprises a method for validating a process stream for the presence or absence of a substance of interest such as a chemical warfare agent; that is, for verifying that a chemical warfare agent is present in an input line for feeding the agent into a reaction vessel for destruction, or, in a facility for producing commercial chemical products, that a constituent of the chemical warfare agent has not been substituted for the proper chemical compound. The method includes the steps of transmitting light through a sensor positioned in the feed line just before the chemical constituent in the input line enters the reaction vessel, measuring an optical spectrum of the chemical constituent from the light beam transmitted through it, and comparing the measured spectrum to a reference spectrum of the chemical agent and preferably also to reference spectra of surrogates. A signal is given if the chemical agent is not entering a reaction vessel for destruction, or if a constituent of a chemical agent is added to a feed line in substitution of the proper chemical compound.
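The spectrum-comparison step can be illustrated with a simple correlation check. This is our sketch, not the patent's algorithm: the Pearson correlation between measured and reference spectra stands in for whatever matching metric the invention uses, and the 0.95 threshold is an illustrative assumption.

```python
import math

# Correlate the measured optical spectrum against the reference spectrum of
# the expected agent; flag the stream when the match falls below a threshold.
def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def stream_is_valid(measured, reference, threshold=0.95):
    return correlation(measured, reference) >= threshold
```

A stream whose measured spectrum fails the check would trigger the alarm signal described in the abstract.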

The relaxation search algorithm to identify the parameters of hydraulic fractured gas wells is developed in this paper based on the inductive matrix. According to the optimization theory and parallel computation method, the parameters to be identified ... Keywords: Gas Wells, hydraulic fracturing, formation parameters, parameter identification, historic fitting

So far, most methods for identifying sequences under selection based on comparative sequence data have either assumed selectional pressures are the same across all branches of a phylogeny, or have focused on changes in specific lineages of interest. ...

A new method is presented for Doppler spectral averaging that more reliably identifies the profiler radar return from clear air in the presence of contamination, for example from migrating bird echoes. These very sensitive radars profile the wind ...

This project analyzed safety significant events (SSEs) in several nuclear power plants to identify where improvements in instrumentation and control (IC) and information technology (IT) could prevent or mitigate some of these events. This report identifies potential improvement paths that could enhance reliability and availability, for utilities to consider implementing at their own plants where appropriate.

It is one of the most important tasks in bioinformatics to identify the regulatory elements in gene sequences. Most of the existing algorithms for identifying regulatory elements are inclined to converge into a local optimum, and have high time complexity. ... Keywords: Ant colony optimization, Gene regulatory elements, Motif identification

DOE to Invest up to $2.3 Million to Identify Renewable Energy Zones In Western States, May 28, 2008. The Renewable Energy Zones Initiative will promote regional transmission planning and encourage the development of renewable sources of energy. More Documents & Publications: Senior DOE Official to Deliver Remarks at Western Governors' Association Renewable Energy Zones Initiative Launch; Western Renewable Energy Zones - Phase 1 Report; Statement of Patricia Hoffman, Acting Assistant Secretary for Electricity

The four classes of geophysical methods considered are: passive seismic methods; active seismic methods; natural field electrical and electromagnetic methods; and controlled-source electrical and electromagnetic methods. Areas of research for improvement of the various techniques for geothermal exploration are identified. (JGB)

A computer-assisted method for identifying functionalities to add to an organism-specific metabolic network to enable a desired biotransformation in a host includes accessing reactions from a universal database to provide stoichiometric balance, identifying at least one stoichiometrically balanced pathway at least partially based on the reactions and a substrate to minimize a number of non-native functionalities in the production host, and incorporating the at least one stoichiometrically balanced pathway into the host to provide the desired biotransformation. A representation of the metabolic network as modified can be stored.
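The search described above can be caricatured as a shortest-path problem in which native reactions are free and database reactions cost one added functionality each. The toy below is our construction under strong simplifying assumptions (one-substrate, one-product reactions, no stoichiometric balancing), meant only to capture the "minimize non-native functionalities" idea.

```python
from collections import deque

# Toy pathway search: universal is an iterable of (name, from_met, to_met)
# reactions; a step costs 0 if the reaction is native to the host, else 1.
# Returns (number of non-native reactions needed, pathway) or None.
def fewest_added_reactions(universal, native, substrate, target):
    best = {substrate: (0, [])}
    queue = deque([substrate])
    while queue:
        met = queue.popleft()
        cost, path = best[met]
        for name, src, dst in universal:
            if src != met:
                continue
            step = 0 if name in native else 1
            if dst not in best or cost + step < best[dst][0]:
                best[dst] = (cost + step, path + [name])
                queue.append(dst)       # re-relax downstream metabolites
    return best.get(target)
```

Real implementations additionally enforce full stoichiometric balance over the mixed-integer formulation of the network, which this sketch omits.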

Methods for producing plastic scintillating material employ either two major steps (tumble-mix) or a single major step (inline-coloring or inline-doping). In the two-step method, the polymer pellets are mixed with silicone oil, and the mixture is then tumble-mixed with the dopants necessary to yield the proper response from the scintillator material. The mixture is then placed in a compounder and compounded in an inert gas atmosphere. The resultant scintillator material is then extruded and pelletized or formed. When only a single step is employed, the polymer pellets and dopants are metered into an inline-coloring extruding system. The mixture is then processed under an inert gas atmosphere, usually argon or nitrogen, to form plastic scintillator material as either scintillator pellets, for subsequent processing, or as material for the direct formation of the final scintillator shape or form.

Existing techniques for identifying, associating, and tracking storms rely on heuristics and are not transferrable between different types of geospatial images. Yet, with the multitude of remote sensing instruments and the number of channels and ...

We report the genome sequence of the nonseed vascular plant Selaginella moellendorffii and, by comparative genomics, identify genes that likely played important roles in the early evolution of vascular plants and their subsequent evolution.

Based on the Bayesian statistical decision theory, a probabilistic quality control (QC) technique is developed to identify and flag migrating-bird-contaminated sweeps of level II velocity scans at the lowest elevation angle using the QC ...
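The Bayesian decision step sketched in this record reduces to a posterior-odds calculation. The toy below is our illustration only: a prior probability of bird contamination is combined with likelihoods of the observed QC statistic under the "contaminated" and "clean" hypotheses, and the sweep is flagged when the posterior crosses a threshold. All numbers are invented.

```python
# Posterior probability that a sweep is bird-contaminated, given a prior and
# the likelihoods of the observed QC statistic under each hypothesis.
def posterior_contaminated(prior, like_contam, like_clean):
    num = prior * like_contam
    return num / (num + (1.0 - prior) * like_clean)

def flag_sweep(prior, like_contam, like_clean, threshold=0.5):
    # Flag (i.e., discard or correct) the sweep when contamination is the
    # more credible explanation of the data.
    return posterior_contaminated(prior, like_contam, like_clean) >= threshold
```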

Existing work on mutual exclusion synchronization is based on a structural definition of mutex bodies. Although correct, this structural notion fails to identify many important locking patterns present in some programs. In this paper we present a novel ...

This work describes the application of a recently developed signal processing technique for identifying periodic components in the presence of unknown colored noise. Specifically, the application of this technique to the identification of ...

The characteristics of the so-called radar-identified big drop zones (rBDZ) have been investigated. The study employs radar observations of several thunderstorms and simultaneous microphysical and vertical wind measurements with a penetrating T-...

We introduce a novel set of social network analysis based algorithms for mining the Web, blogs, and online forums to identify trends and find the people launching these new trends. These algorithms have been implemented ...

2. Identify the Code and Compliance Path. It is important to review the submitted documentation and identify which code was used for the building. Next, to determine whether the building complies with that code, the path used to demonstrate compliance must be identified. There are several compliance paths available in the 2009 and 2012 IECC and ASHRAE Standards 90.1-2007 and 90.1-2010. Each of these codes/standards contains a prescriptive path that clearly states specific requirements. Prescriptive paths limit design freedom. Each of these codes/standards also has a performance-based path that provides more design freedom and can lead to innovative design, but involves more complex energy simulations and tradeoffs between systems. Residential and smaller commercial buildings

DOE to Invest up to $2.3 Million to Identify Renewable Energy Zones in Western States. May 28, 2008 - 12:32pm. The Renewable Energy Zones Initiative will promote regional transmission planning and encourage the development of renewable sources of energy. WASHINGTON, DC - U.S. Department of Energy (DOE) Assistant Secretary for Electricity Delivery and Energy Reliability Kevin Kolevar today announced the Department's plans to contribute up to $2.3 million over three years, subject to annual appropriations, to identify areas in the Western United States with vast renewable energy resources, and expedite the development and delivery of those resources to meet regional energy needs. The Western Renewable Energy Zones (WREZ) project, launched by the Western

6. Identify and Overcome the Barriers of Adoption. It is important for a state or jurisdiction to identify and overcome a variety of political, economic, and technical challenges when adopting or updating an energy code. Confusion throughout the process and unclear adoption language are two of the most common barriers associated with code adoption. Other barriers identified by advocates and stakeholders include initial cost, limited outreach and education resources, cost and availability of code support information, and state and local confusion. These barriers are often resolved by amending the adoption process, providing code education, or selecting a model energy code for adoption. Adoption Process: The adoption process itself can be a barrier to code adoption. States

This final technical progress report describes work performed from October 1, 2004, through May 16, 2007, for the project 'Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling'. We explored the potential of pore-filling gels for reducing excess water production from both fractured and unfractured production wells. Several gel formulations were identified that met the requirements, i.e., providing water residual resistance factors greater than 2,000 and ultimate oil residual resistance factors (Frro) of 2 or less. Significant oil throughput was required to achieve low Frro values, suggesting that gelant penetration into porous rock must be small (a few feet or less) for existing pore-filling gels to provide effective disproportionate permeability reduction. Compared with adsorbed polymers and weak gels, strong pore-filling gels can provide greater reliability and behavior that is insensitive to the initial rock permeability. Guidance is provided on where relative-permeability-modification/disproportionate-permeability-reduction treatments can be successfully applied in either oil or gas production wells. When properly designed and executed, these treatments can be successfully applied to a limited range of oilfield excessive-water-production problems. We examined whether gel rheology can explain behavior during extrusion through fractures. The rheology of the gels tested showed a strong parallel to the results obtained from previous gel extrusion experiments. However, for a given aperture (fracture width or plate-plate separation), the pressure gradients measured during the gel extrusion experiments were much higher than anticipated from rheology measurements. Extensive experiments established that wall slip and first normal stress difference were not responsible for the pressure gradient discrepancy.
To explain the discrepancy, we noted that the aperture for gel flow (for mobile gel wormholing through concentrated immobile gel within the fracture) was much narrower than the width of the fracture. The potential of various approaches for improving sweep in parts of the Daqing Oil Field that have been EOR targets was investigated. Possibilities included (1) gel treatments directed at channeling through fractures, (2) colloidal dispersion gels, (3) reduced polymer degradation, (4) more viscous polymer solutions, and (5) foams and other methods. Fractures were present in a number of Daqing wells (both injectors and producers). Because the fractures were narrow far from the wellbore, severe channeling did not occur. On the contrary, fractures near the wellbore aided reservoir sweep. In the February 2006 issue of the Journal of Petroleum Technology, a 'Distinguished-Author-Series' paper claimed that a process using aqueous colloidal dispersion gels (CDG gels) performed better than polymer flooding. Unfortunately, this claim is misleading and generally incorrect. Colloidal dispersion gels, in their present state of technological development, should not be advocated as an improvement to, or substitute for, polymer flooding.

Sizing the ground heat exchanger is one of the most important tasks in the design of a geothermal heat pump (GHP) system. Undersizing the heat exchanger can result in poor operating efficiency, reduced comfort, and nuisance heat pump lockouts on safety controls, while an oversized heat exchanger increases the installation cost of the system. The cost of ground loop installation may mean the difference between a feasible and an infeasible project. Thus there are strong incentives to select heat exchanger lengths which allow satisfactory performance under all operating conditions within a feasible project budget. Sizing a ground heat exchanger is not a simple calculation. In the first place, there is usually some uncertainty in the peak block and annual space conditioning loads for the building to be served by the GHPs. The thermal properties of the soil formation may be unknown as well. Drilling logs and core samples can identify the soil type, but handbook values for the thermal properties of soils vary widely. Properly done short-term on-site tests and data analysis to obtain thermal properties provide more accurate information, but since these tests are expensive they are usually only feasible in large projects. Given the uncertainties inherent in the process, if designers were truly working 'close to the edge' - selecting the absolute minimum heat exchanger length required to meet the predicted loads - one would expect to see more examples of undersized heat exchangers. Indeed there have been a few. However, over the past twenty years GHPs have been installed and successfully operated at thousands of locations all over the world. Conversations with customers and facility managers reveal a high degree of satisfaction with the technology, but studies of projects reveal far more cases of generously sized ground heat exchangers than undersized ones.
This indicates that the uncertainties in space conditioning loads and soil properties are covered by a factor of safety. These conservative designs increase the installed cost of GHP systems, limiting their use and applicability. Moreover, as ground heat exchanger sizing methods have improved, they have suggested (and field tests are beginning to verify) that standard bore backfill practices lead to unnecessarily large ground heat exchangers. Growing evidence suggests that in many applications use of sand backfill with a grout plug at the surface, or use of bottom-to-top thermally enhanced grout, may provide groundwater protection equal to current practice at far less cost. Site tests of thermal properties provide more accurate information, but since these tests are expensive they are usually only performed in large projects. Even so, because soil properties can vary over a distance as small as a few feet, the value of these tests is limited. One objective of ongoing research at the Oak Ridge National Laboratory (ORNL) is to increase designers' confidence in available ground heat exchanger sizing methods that lead to reliable yet cost-effective designs. To this end we have developed research-grade models that address the interactions between buildings, geothermal heat pump systems, and ground heat exchangers. The first application of these models was at Fort Polk, Louisiana, where the space conditioning systems of over 4,000 homes were replaced with geothermal heat pumps (Shonder and Hughes, 1997; Hughes et al., 1997). At Fort Polk, the models were calibrated to detailed data from one of the residences. Data on the energy use of the heat pump, combined with inlet and outlet water temperature and flow rate in the ground heat exchangers, allowed us to determine the thermal properties of the soil formation being experienced by the operating GHP system.
Outputs from the models provide all the data required by the various commercially-available ground loop sizing programs. Accurate knowledge of both the building loads and the soil properties eliminated the uncertainty normally associated with the design process, and allowed us to compare the predictions of the commercially-available
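The sensitivity of bore length to the uncertain inputs discussed above can be seen in a deliberately simplified single-pulse estimate: required length grows with peak load and effective thermal resistance, and shrinks as the allowed ground-to-fluid temperature difference widens. Real sizing tools model transient, multi-year heat pulses; the formula and sample numbers below are illustrative assumptions only.

```python
# Simplified steady-state bore-length estimate:
#   L = q * R / |T_ground - T_fluid|
# where q is the peak heat rejection/extraction rate (W), R the effective
# bore thermal resistance (m*K/W), and the denominator the design approach
# temperature. This ignores transient and long-term ground loading effects.
def bore_length_m(peak_load_w, resistance_m_k_per_w, t_ground_c, t_fluid_c):
    delta_t = abs(t_ground_c - t_fluid_c)
    return peak_load_w * resistance_m_k_per_w / delta_t
```

For example, a 10 kW peak with 0.2 m·K/W effective resistance and a 10 °C approach temperature calls for roughly 200 m of bore; halving the assumed resistance halves the length, which is why uncertain soil properties drive such large safety factors.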

The goals of this study were: (1) survey the microbial community in soil samples from a site contaminated with heavy metals using new rapid molecular techniques that are culture-independent; (2) identify phylogenetic signatures of microbial populations that correlate with metal ion contamination; and (3) cultivate these diagnostic strains using traditional as well as novel cultivation techniques in order to identify organisms that may be of value in site evaluation/management or bioremediation.

Differentiating between energy-efficient and inefficient single-family homes on a community scale helps identify and prioritize candidates for energy-efficiency upgrades. Prescreening diagnostic procedures can further aid retrofit efforts by providing efficiency information before a site visit is conducted. We applied the prescreening diagnostic to a simulated community of homes in Boulder, Colorado and analyzed energy consumption data to identify energy-inefficient homes.
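One plausible form of such a prescreening diagnostic is an outlier test on floor-area-normalized consumption. This is our own construction, not the study's exact method: each home's annual energy use intensity is compared against the community distribution, and homes well above the mean are flagged for a site visit.

```python
import statistics

# Flag homes whose annual energy use per floor area is more than z_cutoff
# standard deviations above the community mean. Thresholds are illustrative.
def inefficient_homes(homes, z_cutoff=1.0):
    # homes: dict name -> (annual_kwh, floor_area_m2)
    eui = {n: kwh / area for n, (kwh, area) in homes.items()}
    mean = statistics.mean(eui.values())
    sd = statistics.stdev(eui.values())
    return sorted(n for n, e in eui.items() if (e - mean) / sd > z_cutoff)
```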

An apparatus and method are disclosed for electrochemical analysis of elements in solution. An auxiliary electrode, a reference electrode, and five working electrodes are positioned in a container holding a sample solution. The working electrodes are spaced evenly apart from each other and from the auxiliary electrode to minimize any inter-electrode interference that may occur during analysis. An electric potential is applied between the auxiliary electrode and each of the working electrodes. Simultaneous measurements of the current flow through each of the working electrodes, taken at each potential in a potential range, are used to identify the chemical elements present in the sample solution and their respective concentrations. Multiple working electrodes enable a more positive identification by providing unique data characteristic of the chemical elements present in the sample solution.

Due to the large number of chemicals in commerce without adequate toxicity characterization data, coupled with an ineffective federal policy for chemical management in the United States, many states are grappling with the challenge to identify toxic chemicals that may pose a risk to human health and the environment. Specific populations (e.g., children, the elderly) are particularly sensitive to these toxic chemicals. In 2008, the Children's Safe Product Act (CSPA) was passed in Washington State. The CSPA included specific requirements to identify High Priority Chemicals (HPCs) and Chemicals of High Concern to Children (CHCCs). To implement this legislation, a methodology was developed to identify HPCs from authoritative scientific and regulatory sources on the basis of toxicity criteria. Another set of chemicals of concern was then identified from authoritative sources, based on their potential exposure to children. Exposure potential was evaluated by identifying chemicals detected in biomonitoring studies (i.e., human tissues), as well as those present in residential exposure media (e.g., indoor air, house dust, drinking water, consumer products). Accordingly, CHCCs were defined as HPCs that also appear in biomonitoring studies or relevant exposure media. For chemicals with unique Chemical Abstracts Service (CAS) numbers, we identified 2044 HPCs and 2219 chemicals with potential exposure to children, resulting in 476 CHCCs. The process of chemical identification is dynamic, so that chemicals may be added or removed as new information becomes available. Although beyond the scope of this paper, the 476 CHCCs will be prioritized in a more detailed assessment, based on the strength and weight of evidence of toxicity and exposure data. Our approach was developed to be flexible, allowing the addition or removal of specific sources of toxicity or exposure information, and transparent, allowing clear identification of inputs.
Although the methodology was constrained by specific requirements in the CSPA, the intent of this work was to identify HPCs and CHCCs that might guide future regulatory actions and inform chemical management policies aimed at protecting children's health.
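The CHCC definition above amounts to a set intersection: a chemical must be both a High Priority Chemical (toxicity-based) and present in biomonitoring or exposure media (in the study, 2044 HPCs and 2219 exposure chemicals yielded 476 CHCCs). A minimal sketch, with CAS-number strings that are illustrative stand-ins rather than entries from the actual lists:

```python
# The CHCC screen as a set intersection: toxicity-based HPCs that also
# appear in biomonitoring studies or children's exposure media.
# CAS numbers here are illustrative stand-ins, not the actual lists.
hpcs = {"50-00-0", "71-43-2", "7439-92-1", "117-81-7"}      # High Priority Chemicals
exposure = {"71-43-2", "7439-92-1", "117-81-7", "80-05-7"}  # biomonitoring / exposure media
chccs = hpcs & exposure  # Chemicals of High Concern to Children
print(sorted(chccs))  # ['117-81-7', '71-43-2', '7439-92-1']
```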

The development of methods, tools, and process improvements is best based on an understanding of the development practice to be supported. Qualitative research has been proposed as a method for understanding the social and cooperative aspects of ... Keywords: Cooperative method development, D.2.9 [Software]: Software Engineering --- Management, Empirical research, Human factors, Research methodology, Software engineering

The Westinghouse Hanford Company (WHC) authorized Pacific Northwest Laboratory (PNL) to conduct a feasibility study to identify promising nondestructive testing (NDT) methods for detecting general and localized (both pitting and pinhole) corrosion in the 55-gal drums that are used to store solid waste materials at the Hanford Site. This document presents results obtained during a literature survey, identifies the relevant reference materials that were reviewed, provides a technical description of the methods that were evaluated, describes the laboratory tests that were conducted and their results, identifies the most promising candidate methods along with the rationale for these selections, and includes a work plan for recommended follow-on activities. This report contains a brief overview and technical description for each of the following NDT methods: magnetic testing techniques; eddy current testing; shearography; ultrasonic testing; radiographic computed tomography; thermography; and leak testing with acoustic detection.

An ultrasonic sensor system and method of use for measuring transit time through a liquid sample, comprising at least one ultrasonic transducer coupled to a precision time interval counter. The timing circuit captures changes in transit time, representing small changes in the velocity of sound transmitted, over necessarily small time intervals (nanoseconds) and uses the transit time changes to identify the presence of non-conforming constituents in the sample.
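The detection principle described above, flagging a sample when its transit time shifts beyond the timing circuit's resolution, can be sketched as a threshold test. The baseline and jitter values below are invented numbers, not instrument specifications:

```python
# Threshold test on ultrasonic transit times: a shift beyond the timing
# jitter relative to a conforming baseline flags a suspect sample.
# Baseline and jitter values (nanoseconds) are invented for illustration.
def flag_nonconforming(baseline_ns, samples_ns, jitter_ns=2.0):
    return [t for t in samples_ns if abs(t - baseline_ns) > jitter_ns]

print(flag_nonconforming(68400.0, [68401.0, 68399.5, 68410.0]))  # [68410.0]
```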

A method is provided for determining a clastogenic signature of a sample of chromosomes by quantifying a frequency of a first type of chromosome aberration present in the sample; quantifying a frequency of a second, different type of chromosome aberration present in the sample; and comparing the frequency of the first type of chromosome aberration to the frequency of the second type of chromosome aberration. A method is also provided for using that clastogenic signature to identify a clastogenic agent or dosage to which the cells were exposed.
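The signature described above is a ratio of two aberration frequencies measured on the same sample. A minimal sketch, with hypothetical counts and no claim about which aberration types the patent actually pairs:

```python
# Clastogenic signature as a frequency ratio: quantify two aberration
# types over the same scored cells and compare them. Counts are
# hypothetical; the patent does not fix which aberration types to pair.
def clastogenic_signature(count_a, count_b, cells_scored):
    freq_a = count_a / cells_scored  # e.g. dicentrics per cell
    freq_b = count_b / cells_scored  # e.g. translocations per cell
    return freq_a / freq_b if freq_b else float("inf")

print(clastogenic_signature(count_a=30, count_b=10, cells_scored=500))  # 3.0
```

Comparing this ratio against reference values for known agents and doses is the identification step the method builds on.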

X-ray Identifies Mystery Atom Critical to Food Supply (SLAC, November 18, 2011). Serena DeBeer of Cornell University and the Max Planck Institute for Bioinorganic Chemistry led the team that performed crucial experiments at SLAC on the nitrogenase enzyme. Dr. DeBeer is pictured with Michael Roemelt and Frank Neese, also of the Max Planck Institute.

Methods for detecting and identifying carbon- and/or nitrogen-containing materials are disclosed. The methods may comprise detection of photo-nuclear reaction products of nitrogen and carbon to detect and identify the carbon- and/or nitrogen-containing materials.

A method for locating related items in a geometric space transforms relationships among items to geometric locations. The method locates items in the geometric space so that the distance between items corresponds to the degree of relatedness. The method facilitates communication of the structure of the relationships among the items. The method is especially beneficial for communicating databases with many items, and with non-regular relationship patterns. Examples of such databases include databases containing items such as scientific papers or patents, related by citations or keywords. A computer system adapted for practice of the present invention can include a processor, a storage subsystem, a display device, and computer software to direct the location and display of the entities. The method comprises assigning numeric values as a measure of similarity between each pairing of items. A matrix is constructed, based on the numeric values. The eigenvectors and eigenvalues of the matrix are determined. Each item is located in the geometric space at coordinates determined from the eigenvectors and eigenvalues. Proper construction of the matrix and proper determination of coordinates from eigenvectors can ensure that distance between items in the geometric space is representative of the numeric value measure of the items' similarity. 12 figs.
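A minimal sketch of this style of embedding: build a symmetric similarity matrix, take its leading eigenvectors, and scale them by the square root of the eigenvalues so similar items land near each other. The similarity values are invented, and this follows the spirit of the method rather than the patented construction:

```python
# Build a symmetric similarity matrix, take its two largest
# eigenvalue/eigenvector pairs, and place items at sqrt(eigenvalue)-
# scaled eigenvector coordinates. Similarity values are invented.
import numpy as np

sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])  # items 0 and 1 are highly related

vals, vecs = np.linalg.eigh(sim)     # eigenvalues in ascending order
order = np.argsort(vals)[::-1][:2]   # keep the two largest
coords = vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

d01 = np.linalg.norm(coords[0] - coords[1])  # related pair
d02 = np.linalg.norm(coords[0] - coords[2])  # unrelated pair
print(d01 < d02)  # related items sit closer together
```

With this construction the inner products of the coordinates approximate the similarity matrix, which is why distance in the space tracks relatedness.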

Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
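As a toy stand-in for the sense-selection step, the sketch below picks the word sense whose gloss shares the most words with the surrounding context (a simplified Lesk heuristic). The two-sense inventory is invented; the patent's lexical database ontology and event classes are far richer:

```python
# Pick the word sense whose gloss has the largest word overlap with the
# context (simplified Lesk heuristic). The sense inventory is invented.
def disambiguate(word, context, senses):
    """senses: dict mapping sense_id -> gloss string."""
    ctx = set(context.lower().split())
    return max(senses, key=lambda s: len(ctx & set(senses[s].lower().split())))

senses = {
    "bank/finance": "institution that accepts deposits and lends money",
    "bank/river":   "sloping land beside a body of water",
}
print(disambiguate("bank", "they fished from the grassy bank of the river water", senses))
```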

Guidance for the Implementation and Follow-up of Identified Energy and Water Efficiency Measures in Covered Facilities (per 42 U.S.C. 8253(f), Use of Energy and Water Efficiency Measures in Federal Buildings). September 2012. U.S. Department of Energy, Washington, DC 20585. Contents: I. Purpose; II. Background.

Online communities, or groups, have largely been defined based on links, page rank, and eigenvalues. In this paper we explore identifying abstract groups: groups whose members' interests and online footprints are similar but who are not necessarily connected to one another explicitly. We use a combination of structural information and content information from posts and their comments to build a footprint for groups. We find that these variables do a good job of identifying groups, placing members within a group, and determining the appropriate granularity for group boundaries.


A process is described for determining one or more leachate concentrations of one or more components of a glass composition in an aqueous solution of the glass composition by identifying the components of the glass composition, including associated oxides, determining a preliminary glass dissolution estimator, ΔG_p, based upon the free energies of hydration for the component reactant species, determining an accelerated glass dissolution function, ΔG_a, based upon the free energy associated with weak acid dissociation, ΔG_a^WA, and accelerated matrix dissolution at high pH, ΔG_a^SB, associated with solution strong base formation, and determining a final hydration free energy, ΔG_f. This final hydration free energy is then used to determine leachate concentrations for elements of interest using a regression analysis and the formula log10(N·C_i (g/L)) = a_i + b_i·ΔG_f. The present invention also includes a method to determine whether a particular glass to be produced will be homogeneous or phase separated. The present invention is also directed to methods of monitoring and controlling processes for making glass using these determinations to modify the feedstock materials until a desired glass durability and homogeneity is obtained. 4 figs.

A process for determining one or more leachate concentrations of one or more components of a glass composition in an aqueous solution of the glass composition by identifying the components of the glass composition, including associated oxides, determining a preliminary glass dissolution estimator, ΔG_p, based upon the free energies of hydration for the component reactant species, determining an accelerated glass dissolution function, ΔG_a, based upon the free energy associated with weak acid dissociation, ΔG_a^WA, and accelerated matrix dissolution at high pH, ΔG_a^SB, associated with solution strong base formation, and determining a final hydration free energy, ΔG_f. This final hydration free energy is then used to determine leachate concentrations for elements of interest using a regression analysis and the formula log10(N·C_i (g/L)) = a_i + b_i·ΔG_f. The present invention also includes a method to determine whether a particular glass to be produced will be homogeneous or phase separated. The present invention is also directed to methods of monitoring and controlling processes for making glass using these determinations to modify the feedstock materials until a desired glass durability and homogeneity is obtained.
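The regression in the record above is a per-element linear model in the final hydration free energy, log10(N·C_i) = a_i + b_i·ΔG_f. A sketch with invented coefficients, not fitted values from the patent:

```python
# Per-element leachate model: log10(N * C_i) = a_i + b_i * dG_f.
# The coefficients and dG_f value below are invented for illustration.
def leachate_concentration(a_i, b_i, dG_f, N=1.0):
    """Return C_i in g/L for an element with coefficients (a_i, b_i)."""
    log_nc = a_i + b_i * dG_f
    return (10.0 ** log_nc) / N

# With a positive b_i, a more negative final hydration free energy
# predicts lower release, i.e. a more durable glass.
print(leachate_concentration(a_i=0.0, b_i=0.2, dG_f=-10.0))  # 0.01
```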

Through geochemical analyses of produced waters, petrophysics, and reservoir simulation we developed concepts and approaches for mitigating unwanted water production in tight gas reservoirs and for increasing recovery of gas resources presently considered noncommercial. Only new completion research (outside the scope of this study) will validate our hypothesis. The first task was assembling and interpreting a robust regional database of historical produced-water analyses to address the production of excessive water in basin-centered tight gas fields in the Greater Green River (GGRB) and Wind River (WRB) basins, Wyoming. The database is supplemented with a sampling program in currently active areas. Interpretation of the regional water chemistry data indicates most produced waters reflect their original depositional environments and helps identify local anomalies related to basement faulting. After the assembly and evaluation phases of this project, we generated a working model of tight formation reservoir development, based on the regional nature and occurrence of the formation waters. Through an integrative approach to numerous existing reservoir concepts, we synthesized a generalized development scheme organized around reservoir confining stress cycles. This single overarching scheme accommodates a spectrum of outcomes from the GGRB and Wind River basins. Burial and tectonic processes destroy much of the depositional intergranular fabric of the reservoir, generate gas, and create a rock volume marked by extremely low permeabilities to gas and fluids. Stress release associated with uplift regenerates reservoir permeability through the development of a penetrative grain bounding natural fracture fabric. Reservoir mineral composition, magnitude of the stress cycle and local tectonics govern the degree, scale and exact mechanism of permeability development. We applied the reservoir working model to an area of perceived anomalous water production.
Detailed water analyses, seismic mapping, petrophysics, and reservoir simulation indicate a lithologic and structural component to excessive in situ water permeability. Higher formation water salinity was found to be a good pay indicator. Thus spontaneous potential (SP) and resistivity ratio approaches combined with accurate formation water resistivity (Rw) information may be underutilized tools. Reservoir simulation indicates significant infill potential in the demonstration area. Macro natural fracture permeability was determined to be a key element affecting both gas and water production. Using the reservoir characterization results, we generated strategies for avoidance and mitigation of unwanted water production in the field. These strategies include (1) more selective perforation by improved pay determination, (2) using seismic attributes to avoid small-scale fault zones, and (3) utilizing detailed subsurface information to deliberately target optimally located small scale fault zones high in the reservoir gas column. Tapping into the existing natural fracture network represents opportunity for generating dynamic value. Recognizing the crucial role of stress release in the natural generation of permeability within tight reservoirs raises the possibility of manmade generation of permeability through local confining stress release. To the extent that relative permeabilities prevent gas and water movement in the deep subsurface a reduction in stress around a wellbore has the potential to increase the relative permeability conditions, allowing gas to flow. For this reason, future research into cavitation completion methods for deep geopressured reservoirs is recommended.

Most software quality research has focused on identifying faults (i.e., information is incorrectly recorded in an artifact). Because software still exhibits incorrect behavior, a different approach is needed. This paper presents a systematic literature ... Keywords: Human errors, Software quality, Systematic literature review

A Nominal Filter for Web Search Snippets: Using the Web to Identify Members of Latin America. This paper presents efforts aimed at using Natural Language Engineering (NLE) techniques to identify members of three Latin American countries: Uruguay, Argentina, and Colombia. An NLE system is under construction

Measurements of identified hadron production at intermediate pT show that baryon yields increase with event multiplicity much faster than meson production. The rate of increase is similar for all baryons, and seemingly independent of mass. This indicates that the number of constituent quarks determines the multiplicity dependence of identified hadron production at intermediate pT. We review these measurements and interpret the experimental findings.

This report provides a technical basis for estimating the level of corrosion products in materials stored in DOE-STD-3013 containers based on extrapolating available chemical sample results. The primary focus is to estimate the levels of nickel, iron, and chromium impurities in plutonium-bearing materials identified for disposition in the United States Mixed Oxide fuel process.

Modeling Complex Control Systems to Identify Remotely Accessible Devices Vulnerable to Cyber Attack. We model Supervisory Control and Data Acquisition (SCADA) systems in a way that allows us to calculate device vulnerability and help identify power substation devices vulnerable to cyber attack. We use graph theory to model electric power control and protection devices

Extra-long PCR, an identifier of DNA adducts in single nematodes (Caenorhabditis elegans). An extra-long (XL)-PCR (16,144 bp) target amplicon, the 11-exon-spanning ced-1, was assessed by means of a second, fully quantitative PCR, following the normalization with an invariant

The competitive relationship between enterprises is not necessarily mutually exclusive with their cooperative relationship; rather, the two are unified. Competition and cooperation are the same in essence but differ in external form, which ... Keywords: AHP, analytic hierarchy process, FCE, fuzzy comprehensive evaluation, SCCR, the Strength of Competition-Cooperation Relationship, identified model

An intelligent predictive controller is implemented to control a fossil fuel power unit. This controller is a non-model based system that uses a self-organized neuro-fuzzy identifier to predict the response of the plant in a future time interval. The ...

enhancing greenhouse gas "sinks," such as forests). The report identifies strategies that appear promising in the near term. Setting a Greenhouse Gas Budget: Many important efforts to limit greenhouse gases are underway by states and natural ecosystems around the world. The largest overall source of greenhouse gas emissions

In this paper, we propose two novel parallel algorithms for identifying all the basis polygons in an image formed by n straight line segments each of which is represented by its two end points. The first algorithm is designed to tackle the simple situation ... Keywords: Basis polygon, Edge traversal, Parallel algorithm

In July 2003, helicopter electromagnetic surveys were conducted at 14 coal waste impoundments in southern West Virginia. The purpose of the surveys was to detect conditions that could lead to impoundment failure either by structural failure of the embankment or by the flooding of adjacent or underlying mine works. Specifically, the surveys attempted to: 1) identify saturated zones within the mine waste, 2) delineate filtrate flow paths through the embankment or into adjacent strata and receiving streams, and 3) identify flooded mine workings underlying or adjacent to the waste impoundment. Data from the helicopter surveys were processed to generate conductivity/depth images. Conductivity/depth images were then spatially linked to georeferenced air photos or topographic maps for interpretation. Conductivity/depth images were found to provide a snapshot of the hydrologic conditions that exist within the impoundment. This information can be used to predict potential areas of failure within the embankment because of its ability to image the phreatic zone. Also, the electromagnetic survey can identify areas of unconsolidated slurry in the decant basin and beneath the embankment. Although shallow, flooded mineworks beneath the impoundment were identified by this survey, it cannot be assumed that electromagnetic surveys can detect all underlying mines. A preliminary evaluation of the data implies that helicopter electromagnetic surveys can provide a better understanding of the phreatic zone than the piezometer arrays that are typically used.

Identifying and tracking new information on the Web is important in sociology, marketing, and survey research, since new trends might be apparent in the new information. Such changes can be observed by crawling the Web periodically. In practice, however, ... Keywords: information retrieval, link analysis, novelty, web evolution


This work deals with the problem of identifying the subject of small, sparsely linked collections of web documents from a web community. In the course of attempts to find solutions for many problems concerning the web, we are often left with a handful ... Keywords: identification, sparsely linked, subject, web communities

Oil Palm Research in Context: Identifying the Need for Biodiversity Assessment. Edgar C. Turner, Cambridge, United Kingdom. Abstract: Oil palm cultivation is frequently cited as a major threat to tropical biodiversity. A literature search was used to find papers on oil palm published since 1970, which were assigned to different subject areas

Users are interested in multiple topics during a search session, and identifying the boundaries of search sessions is an important task. This study proposes to use neural networks for defining the topic boundaries in search engine transaction logs, and ... Keywords: ANOVA, Experimental design, Neural network, Search engine, Session identification, Topic identification

In this report, we systematically evaluate the ability of current-generation, satellite-based spectroscopic sensors to distinguish uranium mines and mills from other mineral mining and milling operations. We perform this systematic evaluation by (1) outlining the remote, spectroscopic signal generation process, (2) documenting the capabilities of current commercial satellite systems, (3) systematically comparing the uranium mining and milling process to other mineral mining and milling operations, and (4) identifying the most promising observables associated with uranium mining and milling that can be identified using satellite remote sensing. The Ranger uranium mine and mill in Australia serves as a case study where we apply and test the techniques developed in this systematic analysis. Based on literature research of mineral mining and milling practices, we develop a decision tree which utilizes the information contained in one or more observables to determine whether uranium is possibly being mined and/or milled at a given site. Promising observables associated with uranium mining and milling at the Ranger site included in the decision tree are uranium ore, sulfur, the uranium pregnant leach liquor, ammonia, and uranyl compounds and sulfate ion disposed of in the tailings pond. Based on the size, concentration, and spectral characteristics of these promising observables, we then determine whether these observables can be identified using current commercial satellite systems, namely Hyperion, ASTER, and Quickbird. We conclude that the only promising observables at Ranger that can be uniquely identified using a current commercial satellite system (notably Hyperion) are magnesium chlorite in the open pit mine and the sulfur stockpile. 
Based on the identified magnesium chlorite and sulfur observables, the decision tree narrows the possible mineral candidates at Ranger to uranium, copper, zinc, manganese, vanadium, the rare earths, and phosphorus, all of which are milled using sulfuric acid leaching.
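The decision-tree narrowing can be sketched as intersecting per-observable candidate sets. The sets below are invented so that their intersection reproduces the candidate list stated above; the report's actual tree is more detailed:

```python
# Decision-tree narrowing as set intersection: each detected observable
# keeps only the minerals consistent with it. The per-observable sets
# are invented so their intersection matches the candidate list above.
CANDIDATES_BY_OBSERVABLE = {
    "sulfur_stockpile":   {"uranium", "copper", "zinc", "manganese",
                           "vanadium", "rare earths", "phosphorus", "gold"},
    "magnesium_chlorite": {"uranium", "copper", "zinc", "manganese",
                           "vanadium", "rare earths", "phosphorus", "iron"},
}

def narrow_candidates(observed, table):
    candidates = None
    for obs in observed:
        if obs in table:
            candidates = table[obs] if candidates is None else candidates & table[obs]
    return candidates or set()

print(sorted(narrow_candidates(
    ["magnesium_chlorite", "sulfur_stockpile"], CANDIDATES_BY_OBSERVABLE)))
```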

Nanotechnology as a science is emerging rapidly. As materials are synthesized and utilized at the nanometer size scale, concerns of potential health and safety effects are arising. In an effort to elucidate the physicochemical characteristics of nanoparticles influential in toxicological studies, surface properties of metal oxide and carbonaceous nanoparticles were measured. These properties include zeta potential, dissolution and surface-bound chemical components. Subsequently, the role of these properties in oxidative stress was examined in vitro.
This work identifies the influence that pH has on the zeta potential of nanoparticles. The zeta potential has the ability to alter colloidal stability, as the largest nanoparticle agglomerates are seen at or near the isoelectric point for each of the particles tested. Furthermore, it was observed that metal oxide nanoparticles that exhibit a charged surface at physiological pH led to decreased in vitro cellular viability compared to those that were neutral. Thus, nanoparticle zeta potential may be an important factor to consider when attempting to predict nanoparticle toxicity.
Real world exposure to nanoparticles is a mixture of various particulates and organics. Therefore, to simulate this particle mixture, iron oxide (Fe2O3) and engineered carbon black (ECB) were utilized in combination to identify potential synergistic reactions. Following in vitro exposure, both nanoparticle types are internalized into endosomes, where liberated Fe3+ reacts with hydroquinone moieties on the ECB surface yielding Fe2+. This bioavailable iron may then generate oxidative stress through intracellular pathways including the Fenton reaction.
As oxidative stress is common in particulate toxicology, a comparison between the antioxidant defenses of epithelial (A549) and mesothelial (MeT-5A) cell lines was made. The A549 cell line exhibits alterations in the NRF2-KEAP1 transcription factor system and therefore retains high basal levels of phase II antioxidants. Both cell types were exposed to 33 nm silica where intracellular oxidant generation coupled with markers of oxidative stress were observed. While the MeT-5A cells exhibited a decrease in cell viability, the A549 cell line did not. Therefore, proper characterization of both material and biological systems prior to toxicity testing will help to further define the risks associated with the use of nanotechnology.
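The isoelectric point discussed above is the pH at which the zeta potential crosses zero; a common estimate interpolates linearly between the two bracketing measurements. The zeta-vs-pH data here are made-up numbers:

```python
# Estimate the isoelectric point (pH where zeta potential = 0) by
# linear interpolation between bracketing measurements. Data invented.
def isoelectric_point(ph_zeta_pairs):
    """ph_zeta_pairs: list of (pH, zeta_mV) tuples, sorted by pH."""
    for (ph1, z1), (ph2, z2) in zip(ph_zeta_pairs, ph_zeta_pairs[1:]):
        if z1 == 0:
            return ph1
        if z1 * z2 < 0:  # sign change brackets the zero crossing
            return ph1 + (ph2 - ph1) * (-z1) / (z2 - z1)
    return None  # no crossing in the measured range

data = [(3.0, 25.0), (5.0, 10.0), (7.0, -10.0), (9.0, -30.0)]
print(isoelectric_point(data))  # 6.0
```

Near this pH the surface charge is weakest, which is consistent with the largest agglomerates being observed there.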

A radiometer controller of a solar radiation detector is described. The system includes a calibration method and apparatus in which all temperature-sensitive elements of the controller are mounted in thermostatically controlled ovens during calibration and measurements, using a selected temperature that is above any which might be reached in the field. The instrument is calibrated in situ by adjusting heater power to the receptor cavity in the radiometer detector to a predetermined full-scale level as displayed by a meter. Then, with the heater de-energized and the receptor cavity covered, the voltage output is set to zero as displayed by the meter. Next, the preset power is applied to the heater and the output of the radiant measurement channel is applied to the panel meter. With this preset heater power producing the proper heat, the gain of the measurement channel is adjusted to bring the meter display to full scale.

A permanent magnet assembly for installation in large permanent magnet motors and generators includes a two-piece carrier that can be slid into a slot in the rotor and then secured in place using a set screw. The invention also provides an auxiliary carrier device with guide rails that line up with the teeth of the rotor, so that a permanent magnet assembly can be pushed first into a slot, and then down the slot to its proper location. An auxiliary tool is provided to move the permanent magnet assembly into position in the slot before it is secured in place. Methods of assembling and disassembling the magnet assemblies in the rotor are also disclosed. 2 figs.

An estimated 12 million wells have been drilled during the 150 years of oil and gas production in the United States. Many old oil and gas fields are now populated areas where the presence of improperly plugged wells may constitute a hazard to residents. Natural gas emissions from wells have forced people from their houses and businesses and have caused explosions that injured or killed people and destroyed property. To mitigate this hazard, wells must be located and properly plugged, a task made more difficult by the presence of houses, businesses, and associated utilities. This paper describes well finding methods conducted by the National Energy Technology Laboratory (NETL) that were effective at two small towns in Wyoming and in a suburb of Pittsburgh, Pennsylvania.

Marine wave and tidal energy technology could interact with marine resources in ways that are not well understood. As wave and tidal energy conversion projects are planned, tested, and deployed, a wide range of stakeholders will be engaged; these include developers, state and federal regulatory agencies, environmental groups, tribal governments, recreational and commercial fishermen, and local communities. Identifying stakeholders' environmental concerns in the early stages of the industry's development will help developers address and minimize potential environmental effects. Identifying important concerns will also assist with streamlining siting and associated permitting processes, which are considered key hurdles by the industry in the U.S. today. In September 2008, RE Vision Consulting, LLC was selected by the Department of Energy (DOE) to conduct a scenario-based evaluation of emerging hydrokinetic technologies. The purpose of this evaluation is to identify and characterize environmental impacts that are likely to occur, demonstrate a process for analyzing these impacts, identify the key environmental concerns for each scenario, identify areas of uncertainty, and describe studies that could address that uncertainty. This process is intended to provide an objective and transparent tool to assist in decision-making for siting and selection of technology for wave and tidal energy development. RE Vision worked with H. T. Harvey & Associates to develop a framework for identifying key environmental concerns with marine renewable technology. This report describes the results of this study. The framework was applied to varying wave and tidal power conversion technologies, scales, and locations. The following wave and tidal energy scenarios were considered:
- 4 wave energy generation technologies
- 3 tidal energy generation technologies
- 3 sites: Humboldt coast, California (wave); Makapuu Point, Oahu, Hawaii (wave); and the Tacoma Narrows, Washington (tidal)
- 3 project sizes: pilot, small commercial, and large commercial
The possible combinations total 24 wave technology scenarios and 9 tidal technology scenarios. We evaluated 3 of the 33 scenarios in detail:
1. A small commercial OPT Power Buoy project off the Humboldt County, California coast
2. A small commercial Pelamis Wave Power P-2 project off Makapuu Point, Oahu, Hawaii
3. A pilot MCT SeaGen tidal project, sited in the Tacoma Narrows, Washington
This framework document used information available from permitting documents that were written to support actual wave or tidal energy projects, but the results obtained here should not be confused with those of the permitting documents. The main difference between this framework document and permitting documents of currently proposed pilot projects is that this framework identifies key environmental concerns and describes the next steps in addressing those concerns; permitting documents must identify effects, find or declare thresholds of significance, evaluate the effects against the thresholds, and find mitigation measures that will minimize or avoid the effects so they can be considered less than significant. Two methodologies, (1) an environmental effects analysis and (2) Raptools, were developed and tested to identify potential environmental effects associated with wave or tidal energy conversion projects. For the environmental effects analysis, we developed a framework based on standard risk assessment techniques. The framework was applied to the three scenarios listed above. The environmental effects analysis addressed questions such as:
- What is the temporal and spatial exposure of a species at a site?
- What are the specific potential project effects on that species?
- What measures could minimize, mitigate, or eliminate negative effects?
- Are there potential effects of the project, or species response to the effect, that are highly uncertain and warrant additional study?
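A risk screening of this kind can be sketched in a few lines. This is a hypothetical illustration in the spirit of the environmental effects analysis described above: each stressor is scored as exposure times consequence (the 1-5 scales, field names, and uncertainty cutoff are illustrative assumptions, not the framework's actual scoring scheme).

```python
def screen_effects(effects, uncertainty_cutoff=4):
    """Rank effects by exposure x consequence; flag high-uncertainty ones
    for additional study. Scales are assumed 1-5 (illustrative only)."""
    ranked = sorted(
        ((e["stressor"], e["exposure"] * e["consequence"]) for e in effects),
        key=lambda t: -t[1],
    )
    flagged = [e["stressor"] for e in effects
               if e["uncertainty"] >= uncertainty_cutoff]
    return ranked, flagged

# Hypothetical stressor-receptor pairs for a tidal scenario:
effects = [
    {"stressor": "rotor strike (marine mammals)",
     "exposure": 3, "consequence": 5, "uncertainty": 5},
    {"stressor": "mooring entanglement (fish)",
     "exposure": 2, "consequence": 3, "uncertainty": 2},
    {"stressor": "underwater noise (pinnipeds)",
     "exposure": 4, "consequence": 2, "uncertainty": 4},
]
ranked, flagged = screen_effects(effects)
print(ranked[0][0])  # highest-scoring stressor
print(flagged)       # stressors warranting further study
```

A real assessment would replace the numeric scores with site-specific exposure data and expert judgment, but the ranking-plus-flagging structure is the same.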
The second methodology, Raptools, is a collaborative approach useful for evaluating multiple characteristics ...

A method for reducing the concentration of any undesirable metals dissolved in contaminated water, such as waste water. The method involves uniformly reacting the contaminated water with an excess amount of solid particulate calcium sulfite to insolubilize the undesirable metal ions, followed by removal thereof and of the unreacted calcium sulfite.

A method is described for reducing the concentration of any undesirable metals dissolved in contaminated water, such as waste water. The method involves uniformly reacting the contaminated water with an excess amount of solid particulate calcium sulfite to insolubilize the undesirable metal ions, followed by removal thereof and of the unreacted calcium sulfite.

A method for reducing the concentration of many undesirable metals dissolved in contaminated water, such as waste water. The method involves uniformly reacting the contaminated water with an excess amount of solid particulate calcium sulfite to insolubilize the undesirable metal ions, followed by removal thereof and of the unreacted calcium sulfite. 1 tab.

The present invention provides methods for classifying data and uncovering and correcting annotation errors. In particular, the present invention provides a self-organizing, self-correcting algorithm for use in classifying data. Additionally, the present invention provides a method for classifying biological taxa.

The present invention relates to methods for producing fatty acid desaturase mutants having a substantially increased activity towards substrates with fewer than 18 carbon atom chains relative to an unmutagenized precursor desaturase having an 18 carbon chain length specificity, the sequences encoding the desaturases and to the desaturases that are produced by the methods. The present invention further relates to a method for altering a function of a protein, including a fatty acid desaturase, through directed mutagenesis involving identifying candidate amino acid residues, producing a library of mutants of the protein by simultaneously randomizing all amino acid candidates, and selecting for mutants which exhibit the desired alteration of function. Candidate amino acids are identified by a combination of methods. Enzymatic, binding, structural and other functions of proteins can be altered by the method.

Identify Molecular Structural Features of Biomass Recalcitrance Using Non-destructive Microscopy and Spectroscopy. Shi-You Ding (1), Mike Himmel (1), Sunney X. Xie (2). (1) National Renewable Energy Laboratory, Golden, CO; (2) Harvard University, Cambridge, MA. Lignocellulosic biomass has long been recognized as a potential sustainable source of mixed sugars for fermentation to fuels and other bio-based products. However, the chemical and enzymatic conversion processes developed during the past 80 years are inefficient and expensive. The inefficiency of these processes is in part due to the lack of knowledge about the structure of biomass itself; the plant cell wall is indeed a complex nano-composite material at the molecular and nanoscales. Current processing strategies have been derived empirically, with ...

The purpose of the Standards/Requirements Identification Program, developed partially in response to the Defense Nuclear Facilities Safety Board Recommendation 90-2, was to identify applicable requirements that established the Environmental Restoration Management Contractor's (ERMC) responsibilities and authorities under the Environmental Restoration Management Contract, determine the adequacy of these requirements, ascertain a baseline level of compliance with them, and implement a maintenance program that would keep the program current as requirements or compliance levels change. The resultant Standards/Requirements Identification Documents (S/RIDs) consolidate the applicable requirements. These documents govern the development of procedures and manuals to ensure compliance with the requirements. Twenty-four such documents, corresponding with each functional area identified at the site, are to be issued. These requirements are included in the contractor's management plan.

The National Nuclear Security Administration's project for developing a unique identifier and a concept for a global monitoring system for UF6 cylinders made significant progress on developing functional requirements and a concept of operation for such a system. The multi-laboratory team is working to define the functional requirements for both the unique identifier and the global monitoring system and to develop a preliminary concept of operations to discuss with key industry stakeholders. Team members began meeting with industry representatives in January 2013 to discuss the preliminary concept and solicit feedback and suggestions. The team has met with representatives from United States Enrichment Corporation, Cameco, URENCO, Honeywell/ConverDyn, and others. This paper presents an overview of the preliminary concept of operations and shares the feedback obtained from the industry engagement meetings.

Identifying the Source of Stolen Nuclear Materials (S&TR, January/February 2007): Livermore scientists are analyzing interdicted illicit nuclear and radioactive materials for clues to the materials' origins and routes of transit. Nuclear forensics and attribution are becoming increasingly important tools in the fight against illegal smuggling and trafficking of radiological and nuclear materials. These include materials intended for industrial and medical use (radiological), nuclear materials such as those produced in the ...

For quantum systems with competing potentials, the conventional perturbation theory often yields an asymptotic series and the subsequent numerical outcome becomes uncertain. To tackle such problems, we develop a general solution scheme based on a new energy dissection idea. Instead of dividing the potential energy into 'unperturbed' and 'perturbed' terms, a partition of the kinetic energy is performed. By distributing the kinetic energy term in part into each individual potential, the Hamiltonian can be expressed as the sum of the subsystem Hamiltonians with respective competing potentials. The total wavefunction is expanded by using a linear combination of the basis sets of respective subsystem Hamiltonians. We first illustrate the solution procedure using a simple system consisting of a particle under the action of double δ-function potentials. Next, this method is applied to the prototype systems of a charged harmonic oscillator in a strong magnetic field and the hydrogen molecule ion. Compared with the usual perturbation approach, this new scheme converges much faster to the exact solutions for both eigenvalues and eigenfunctions. When properly extended, this new solution scheme can be very useful for dealing with strongly coupled quantum systems. Highlights:
- A new basis set expansion method is proposed.
- A split kinetic energy method is proposed to solve quantum eigenvalue problems.
- Significant improvement has been obtained in converging to exact results.
- Extension of such methods is promising and discussed.
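The kinetic-energy dissection described above can be written schematically. The symbols below are generic illustrations of the idea (the weights α_i and basis labels are not necessarily the paper's notation): the kinetic energy T is split between the two competing potentials, and the total wavefunction is expanded in the union of the subsystem bases.

```latex
H = T + V_1 + V_2
  = \underbrace{\left(\alpha_1 T + V_1\right)}_{H_1}
  + \underbrace{\left(\alpha_2 T + V_2\right)}_{H_2},
\qquad \alpha_1 + \alpha_2 = 1,
\qquad H_i\,\phi^{(i)}_n = \varepsilon^{(i)}_n\,\phi^{(i)}_n,
\qquad \Psi = \sum_n c_n\,\phi^{(1)}_n + \sum_m d_m\,\phi^{(2)}_m .
```

Because each basis set already diagonalizes its own competing potential exactly, the combined expansion can converge much faster than perturbing one potential against the other.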

Though not widely used in the power delivery segment of the commercial electric power industry, linking errors to a formal task analysis is a common technique of human factors engineering that can identify weaknesses in processes and procedures. This study analyzes a collection of switching incidents to determine at which step in the switching process the errors occurred. The report presents a model of how investigation results can be sifted for useful clues about the steps that might benefit from repeat...

This report identifies characteristics of AFDC recipients in Minnesota who might have been subject to the 60-month time limit on assistance, as imposed by the federal welfare reform in 1996. It is the second in a series of working papers regarding welfare and welfare reform. This report was prepared by Don Hirasuna, legislative analyst in the House Research Department. Questions may be addressed to Don at 651-296-8038.

This report provides the technical basis and process for a screening evaluation of a nuclear power plant. This screening will identify appropriate limiting locations for systematic monitoring of environmentally assisted fatigue (EAF) effects on the Class 1 reactor coolant pressure boundary components that are wetted with primary coolant. Use of this process will ensure that the most limiting locations for EAF are determined on a consistent basis. The process developed in ...

This report describes research on fire probabilistic risk assessment (PRA) methods. The fire PRA methods presented in this report provide additions, clarifications, and refinements to the methods proposed in 2005 by the Electric Power Research Institute (EPRI) and the U.S. Nuclear Regulatory Commission (NRC) in EPRI/NRC-RES "Fire PRA Methodology for Nuclear Power Facilities" (EPRI 1011989/NUREG/CR-6850). The purpose of the current report is to provide the most current, state-of-the-art information in order to supp...

Systems for phase-change particulate slurry cooling equipment and methods to induce hypothermia in a patient through internal and external cooling are provided. Subcutaneous, intravascular, intraperitoneal, gastrointestinal, and lung methods of cooling are carried out using saline ice slurries or other phase-change slurries compatible with human tissue. Perfluorocarbon slurries or other slurry types compatible with human tissue are used for pulmonary cooling. And traditional external cooling methods are improved by utilizing phase-change slurry materials in cooling caps and torso blankets.

A catalytic reforming method is disclosed herein. The method includes sequentially supplying a plurality of feedstocks of variable compositions to a reformer. The method further includes adding a respective predetermined co-reactant to each of the plurality of feedstocks to obtain a substantially constant output from the reformer for the plurality of feedstocks. The respective predetermined co-reactant is based on a C/H/O atomic composition for a respective one of the plurality of feedstocks and a predetermined C/H/O atomic composition for the substantially constant output.

This report provides a summary of the work performed in this 3-year project sponsored by DOE. The overall objective of this project is to identify new, potentially more cost-effective surfactant formulations for improved oil recovery (IOR). The general approach is to use an integrated experimental and computational chemistry effort to improve our understanding of the link between surfactant structure and performance, and from this knowledge, develop improved IOR surfactant formulations. Accomplishments for the project include: (1) completion of a literature review to assemble current and new surfactant IOR ideas, (2) development of new atomistic-level molecular dynamics (MD) modeling methodologies to calculate interfacial tension (IFT) rigorously from first principles, (3) exploration of less computationally intensive mesoscale methods to estimate IFT, quantitative structure-property relationship (QSPR) calculations, and cohesive energy density (CED) calculations, (4) experiments to screen many surfactant structures for desirable low IFT and solid adsorption behavior, and (5) further experimental characterization of the more promising new candidate formulations (based on alkyl polyglycoside (APG) and alkyl propoxy sulfate surfactants).
Important findings from this project include: (1) the IFT between two pure substances may be calculated quantitatively from fundamental principles using molecular dynamics; the same approach can provide qualitative results for ternary systems containing a surfactant; (2) low concentrations of alkyl polyglycoside surfactants have potential for IOR applications from a technical standpoint (if formulated properly with a cosurfactant, they can create a low IFT at low concentration) and are also viable economically, as they are available commercially; and (3) the alkyl propoxy sulfate surfactants also have promising IFT performance, plus these surfactants can have high optimal salinity and so may be attractive for use in higher-salinity reservoirs. Alkyl propoxy sulfate surfactants are not yet available as large-volume commercial products. The results presented herein can provide the needed industrial impetus for extending application (alkyl polyglycosides) or scaling up (alkyl propoxy sulfates) of these two promising surfactants for enhanced oil recovery. Furthermore, the advanced simulation tools presented here can be used to continue to uncover new types of surfactants with promising properties such as inherent low IFT and biodegradability.

An improved method of sedimentation is described. A series of spaced surfaces of powdered material positioned normal to the centrifugal field concentrates the larger, slower moving molecules of a liquid and hastens sedimentation. (AEC)

A method for producing boracites is disclosed in which a solution of divalent metal acetate, boric acid, and halogen acid is evaporated to dryness and the resulting solid is heated in an inert atmosphere under pressure.

A method for the preparation of organyltriorganooxysilanes containing at least one silicon-carbon bond is provided comprising reacting at least one tetraorganooxysilane with an activated carbon and at least one base.

We present formal verification methods and procedures for finding bounds of linear programs and proving nonlinear inequalities. An efficient implementation of formal arithmetic computations is also described. Our work is an integral part of the Flyspeck ...

A method is disclosed of saccharifying cellulose by incubation with the cellulase of Clostridium thermocellum in a broth containing an efficacious amount of thiol reducing agent. Other incubation parameters which may be advantageously controlled to stimulate saccharification include the concentration of alkaline earth salts, pH, temperature, and duration. By the method of the invention, even native crystalline cellulose such as that found in cotton may be completely saccharified.

Methods for treatment of depression-related mood disorders in mammals, particularly humans are disclosed. The methods of the invention include administration of compounds capable of enhancing glutamate transporter activity in the brain of mammals suffering from depression. ATP-sensitive K.sup.+ channel openers and .beta.-lactam antibiotics are used to enhance glutamate transport and to treat depression-related mood disorders and depressive symptoms.

A method and apparatus are described for charging fuel bodies into a process tube of a reactor. According to this method, fresh fuel elements are introduced into one end of the tube, forcing used fuel elements out the other end. When sufficient fuel has been discharged, a reel and tape arrangement is employed to pull the column of bodies back into the center of the tube. Provision is made for shielding in the tube. (AEC)

Wavelet analysis offers a new approach for viewing and analyzing various large datasets by dividing information according to scale and location. Here a new method is presented that is designed to characterize time-evolving structures in large ...

We describe a method for discovering irregularities in temporal mood patterns appearing in a large corpus of blog posts, and labeling them with a natural language explanation. Simple techniques based on comparing corpus frequencies, coupled with large ...

A safer method for the standoff (long distance) detection and identification of molecules on a surface has been invented by researchers at ORNL and the University of Tennessee. This invention avoids the necessity of close and potentially ...

Symmetrical Components is a topic which even a graduate electrical engineer, who took a course on the subject, may not completely understand. Workers who maintain protective relays may have little knowledge of Symmetrical Components. The result of this unfamiliarity may be that relays such as those which respond to negative sequence voltages are never again tested properly, or readjusted to a more desirable setting, after leaving the manufacturer. The intent of this paper is to present a method of bench-testing negative sequence detecting devices by individuals who possess little knowledge of Symmetrical Components.
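The quantity such a relay responds to can be computed directly from the three phase voltages. A minimal sketch (the phasor magnitudes are hypothetical test values, not a prescribed bench setup) using the standard a-operator, a = 1∠120°:

```python
import cmath
import math

def sequence_components(va, vb, vc):
    """Return (zero, positive, negative) sequence components of a
    three-phase set of phasors, using the operator a = 1 at +120 degrees."""
    a = cmath.exp(2j * math.pi / 3)
    v0 = (va + vb + vc) / 3
    v1 = (va + a * vb + a * a * vc) / 3   # positive sequence
    v2 = (va + a * a * vb + a * vc) / 3   # negative sequence
    return v0, v1, v2

# A balanced a-b-c set (120 V, phases 0, -120, +120 degrees) has no
# negative-sequence component, so a negative-sequence relay should not trip:
va = cmath.rect(120, 0)
vb = cmath.rect(120, -2 * math.pi / 3)
vc = cmath.rect(120, 2 * math.pi / 3)
v0, v1, v2 = sequence_components(va, vb, vc)
print(abs(v2))  # ~0 for a balanced set
```

Swapping any two phases of the balanced set turns the full 120 V into negative sequence, which is why a bench test can exercise the relay with an ordinary three-phase source.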

A computer program, ISD97, was developed to analyze data from a series of in situ measurements on a grid and identify potential localized areas of elevated activity. The ISD97 code operates using a two-step process. A deconvolution of the data is carried out using the maximum entropy method, and a map of activity on the ground that fits the data within experimental error is generated. This maximum entropy map is then analyzed to determine the locations and magnitudes of potential areas of elevated activity that are consistent with the data. New deconvolutions are then carried out for each potential area of elevated activity identified by the code. Properties of the algorithm are demonstrated using data from actual field measurements.
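The second step, scanning a reconstructed activity map for localized elevated areas, can be illustrated with a crude local-maximum scan. This is a hypothetical stand-in for illustration only, not ISD97's actual maximum-entropy consistency analysis:

```python
def find_elevated_areas(grid, threshold):
    """Return (row, col, value) for cells exceeding `threshold` that are
    local maxima among their 4-neighbours; a toy proxy for identifying
    candidate areas of elevated activity in a deconvolved map."""
    hits = []
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            v = grid[r][c]
            if v <= threshold:
                continue
            neighbours = [grid[rr][cc]
                          for rr, cc in ((r - 1, c), (r + 1, c),
                                         (r, c - 1), (r, c + 1))
                          if 0 <= rr < rows and 0 <= cc < cols]
            if all(v >= n for n in neighbours):
                hits.append((r, c, v))
    return hits

# Toy activity map with two localized peaks above a background near 1.0:
activity = [
    [1.0, 1.1, 0.9],
    [1.2, 5.0, 1.0],
    [0.8, 1.0, 4.2],
]
print(find_elevated_areas(activity, threshold=3.0))
```

In ISD97 each candidate found this way would seed a new deconvolution to check that an elevated area of that magnitude is consistent with the measured data within experimental error.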

The BEopt software is a building energy optimization tool that generates a cost-optimal path of building designs from a reference building up to zero-net energy. It employs a sequential search methodology to account for complex energy interactions between building efficiency measures. Enhancement strategies to this search methodology are developed to increase accuracy (ability to identify the true cost-optimal curve) and speed (number of required energy simulations). A test suite of optimizations is used to gauge the effectiveness of each strategy. Combinations of strategies are assembled into packages, ranging from conservative to aggressive, with up to 71% fewer required simulations.
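The core sequential-search idea can be sketched with a greedy loop: at each step, apply the remaining measure with the best marginal cost of saved energy. This is a toy additive-savings model with made-up measure names and numbers; BEopt itself re-runs building simulations at each step precisely because savings interact and are not additive.

```python
def sequential_search(measures, baseline_energy):
    """Greedy sequential search sketch: repeatedly apply the remaining
    measure with the smallest cost per kWh saved. Toy model only --
    real tools re-simulate the building after each step."""
    remaining = dict(measures)          # name -> (cost $, kWh/yr saved)
    energy, cost, path = baseline_energy, 0.0, []
    while remaining:
        name = min(remaining, key=lambda n: remaining[n][0] / remaining[n][1])
        c, s = remaining.pop(name)
        energy -= s
        cost += c
        path.append((name, energy, cost))   # one point on the cost-energy path
    return path

measures = {
    "air_sealing": (500.0, 1000.0),
    "attic_insul": (1500.0, 1200.0),
    "heat_pump":   (6000.0, 3000.0),
}
for step in sequential_search(measures, baseline_energy=12000.0):
    print(step)
```

The returned path traces the cost-optimal frontier under this toy model; the enhancement strategies in the study aim to find that frontier with fewer candidate evaluations.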

draft-ietf-iri-bidi-guidelines-03. This specification gives guidelines for selection, use, and presentation of International Resource Identifiers (IRIs) which include characters with inherent right-to-left (rtl) writing direction. Status of this Memo: This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at ...

Thermal management in heavy vehicles is cross-cutting because it directly or indirectly affects engine performance, fuel economy, safety and reliability, engine/component life, driver comfort, materials selection, emissions, maintenance, and aerodynamics. It follows that thermal management is critical to the design of large (class 6-8) trucks, especially in optimizing for energy efficiency and emissions reduction. Heat rejection requirements are expected to increase, and it is industry's goal to develop new, innovative, high-performance cooling systems that occupy less space and are lightweight and cost-competitive. The state of the art in heavy vehicle thermal management is reviewed, and issues and research areas are identified.

While inhalation dose coefficients are provided for about 800 radionuclides in International Commission on Radiological Protection (ICRP) Publication 68, many radionuclides of practical dosimetric interest for facilities such as high-energy proton accelerators are not specifically addressed, nor are organ-specific dose coefficients tabulated. The ICRP Publication 68 methodology is used, along with updated radiological decay data and metabolic data, to identify committed equivalent dose coefficients [hT(50)] and committed effective dose coefficients [e(50)] for radionuclides produced at the Oak Ridge National Laboratory's Spallation Neutron Source.

This paper proposes a new approach to identifying the effects of monetary policy shocks in an international vector autoregression. Using high-frequency data on the prices of Fed Funds futures contracts, we measure the impact of the surprise component of the FOMC-day Federal Reserve policy decision on financial variables, such as the exchange rate and the foreign interest rate. We show how this information can be used to achieve identification without having to make the usual strong assumption of a recursive ordering.

The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.

Methods. Disclaimer: The data gathered here are for informational purposes only. Inclusion of a report in the database does not represent approval of the estimates by DOE or NREL. Levelized cost calculations DO NOT represent real-world market conditions. The calculation uses a single discount rate in order to compare technology costs only. About the Cost Database: For emerging energy technologies, a variety of cost and performance numbers are cited in presentations and reports for present-day characteristics and potential improvements. Amid a variety of sources and methods for these data, the Office of Energy Efficiency and Renewable Energy's technology development programs determine estimates for use in program planning. The Transparent Cost Database collects program cost and performance ...

Our objective here was to perform a quantitative phosphoproteomic study on a reconstituted human skin tissue to identify low and high dose ionizing radiation dependent signaling in a complex 3-dimensional setting. Application of an isobaric labeling strategy using sham and 3 radiation doses (3, 10, 200 cGy) resulted in the identification of 1113 unique phosphopeptides. Statistical analyses identified 151 phosphopeptides showing significant changes in response to radiation and radiation dose. Proteins responsible for maintaining skin structural integrity including keratins and desmosomal proteins (desmoglein, desmoplakin, plakophilin 1 and 2) had altered phosphorylation levels following exposure to both low and high doses of radiation. A phosphorylation site present in multiple copies in the linker regions of human profilaggrin underwent the largest fold change. Increased phosphorylation of these sites coincided with altered profilaggrin processing suggesting a role for linker phosphorylation in human profilaggrin regulation. These studies demonstrate that the reconstituted human skin system undergoes a coordinated response to ionizing radiation involving multiple layers of the stratified epithelium that serve to maintain skin barrier functions and minimize the damaging consequences of radiation exposure.

The use of the ReaxFF force field to correlate with NMR mobilities of amine catalytic substituents on a mesoporous silica nanosphere surface is considered. The interfacing of the ReaxFF force field within the Surface Integrated Molecular Orbital/Molecular Mechanics (SIMOMM) method, in order to replicate earlier SIMOMM published data and to compare with the ReaxFF data, is discussed. The development of a new correlation consistent Composite Approach (ccCA) is presented, which incorporates the completely renormalized coupled cluster method with singles, doubles and non-iterative triples corrections towards the determination of heats of formations and reaction pathways which contain biradical species.

A computer method of gathering and summarizing large amounts of information comprises collecting information from a plurality of information sources (14, 51) according to respective maps (52) of the information sources (14), converting the collected information from a storage format to XML-language documents (26, 53) and storing the XML-language documents in a storage medium, searching for documents (55) according to a search query (13) having at least one term and identifying the documents (26) found in the search, and displaying the documents as nodes (33) of a tree structure (32) having links (34) and nodes (33) so as to indicate similarity of the documents to each other.

A system and method for converting packet streams into session summaries. Session summaries are a group of packets each having a common source and destination internet protocol (IP) address, and, if present in the packets, common ports. The system first captures packets from a transport layer of a network of computer systems, then decodes the packets captured to determine the destination IP address and the source IP address. The system then identifies packets having common destination IP addresses and source IP addresses, then writes the decoded packets to an allocated memory structure as session summaries in a queue.
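The grouping step can be sketched as follows. The packet field names and the per-session counters are assumptions for illustration, not the patent's actual data layout:

```python
from collections import defaultdict

def summarize_sessions(packets):
    """Group decoded packets into session summaries keyed by
    (source IP, destination IP, source port, destination port)."""
    sessions = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src_ip"], pkt["dst_ip"],
               pkt.get("src_port"), pkt.get("dst_port"))
        s = sessions[key]
        s["packets"] += 1
        s["bytes"] += pkt["length"]
    return dict(sessions)

# Three decoded packets, two belonging to the same session:
packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 40000, "dst_port": 80, "length": 60},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
     "src_port": 40000, "dst_port": 80, "length": 1500},
    {"src_ip": "10.0.0.3", "dst_ip": "10.0.0.2",
     "src_port": 40001, "dst_port": 80, "length": 60},
]
summaries = summarize_sessions(packets)
print(len(summaries))  # 2 sessions
```

Using `pkt.get(...)` for the ports mirrors the "if present in the packets" wording: port-less protocols simply group on addresses alone.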

A method and apparatus are provided for identifying contents of a nuclear waste container. The method includes the steps of forming an image of the contents of the container using digital radiography, visually comparing contents of the image with expected contents of the container and performing computer tomography on the container when the visual inspection reveals an inconsistency between the contents of the image and the expected contents of the container.

The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

Ongoing research on quantifying the cooling loads in residential buildings, particularly buildings with passive solar heating systems, is described. Correlations are described that permit auxiliary cooling estimates from monthly average insolation and weather data. The objective of the research is to develop a simple analysis method, useful early in design, to estimate the annual cooling energy required of a given building.

A hydraulic mining method includes drilling a vertical borehole into a pitched mineral vein and a slant borehole along the footwall of the vein to intersect the vertical borehole. Material is removed from the mineral vein by a fluid jet stream and the resulting slurry flows down the footwall borehole into the vertical borehole from where it is pumped upwardly therethrough to the surface.


A method for treating biomass was developed that uses an apparatus which moves a biomass and dilute aqueous ammonia mixture through reaction chambers without compaction. The apparatus moves the biomass using a non-compressing piston. The resulting treated biomass is saccharified to produce fermentable sugars.

A method of limiting carbon contamination from graphite ware used in induction melting of uranium alloys is provided. The graphite surface is coated with a suspension of Y₂O₃ particles in water containing about 1.5 to 4 percent by weight sodium carboxymethylcellulose.

The present invention provides methods for making N-methylpyrrolidine and analogous compounds via hydrogenation. Novel catalysts for this process, and novel conditions/yields are also described. Other process improvements may include extraction and hydrolysis steps. Some preferred reactions take place in the aqueous phase. Starting materials for making N-methylpyrrolidine may include succinic acid, N-methylsuccinimide, and their analogs.

A method for providing an image of the human heart's electrical system derives time-of-flight data from an array of EKG electrodes and transforms these data into phase information. The phase information, treated as a hologram, is reconstructed to provide an image, in one or two dimensions, of the electrical system of the functioning heart.
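The time-of-flight-to-phase step can be sketched as follows. This is a speculative illustration of the general idea only: the reference frequency, electrode count, and use of an inverse FFT as the "holographic" reconstruction are all assumptions, not details from the patent.

```python
import numpy as np

f = 50.0                                          # assumed reference frequency, Hz
tof = np.array([0.001, 0.0012, 0.0015, 0.0011])   # time-of-flight per electrode, s
phase = 2 * np.pi * f * tof                       # convert times to phases
hologram = np.exp(1j * phase)                     # unit-amplitude phase hologram
image = np.fft.ifft(hologram)                     # 1-D reconstruction of the array
```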

The purpose of this paper is to examine how singular value decomposition (SVD) and demographic information can improve the performance of plain collaborative filtering (CF) algorithms. After a brief introduction to SVD, where the method is explained ... Keywords: collaborative filtering, demographic data, personalization, recommender systems, singular value decomposition (SVD)
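A low-rank SVD pass over a ratings matrix, the core of the approach the abstract names, can be sketched as below. The imputation scheme (global-mean fill), the rank `k`, and the toy ratings are assumptions for illustration; the paper's exact preprocessing and use of demographic data are not reproduced here.

```python
import numpy as np

# Users x items rating matrix; 0 marks an unrated item.
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 5.0, 1.0, 0.0],
    [1.0, 0.0, 5.0, 4.0],
])
# Impute missing entries with the global mean of observed ratings.
filled = np.where(R > 0, R, R[R > 0].mean())

U, s, Vt = np.linalg.svd(filled, full_matrices=False)
k = 2                                        # keep the k strongest latent factors
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]  # low-rank estimate of all ratings
pred = approx[0, 2]                          # predicted rating: user 0, item 2
```

The low-rank approximation smooths out noise in the sparse ratings and yields an estimate for every unrated cell, which is what a plain CF algorithm would then rank for recommendation.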


LBNL-3714E. Ernest Orlando Lawrence Berkeley National Laboratory. Managing Your Energy: An ENERGY STAR® Guide for Identifying Energy Savings in Manufacturing Plants. Ernst Worrell, Tana Angelini, Eric Masanet. Environmental Energy Technologies Division. Sponsored by the U.S. Environmental Protection Agency. June 2010. Disclaimer: This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information,

This paper reports on the identification of steam-breakthrough zones in a stacked sand/shale sequence with variable lateral continuity, a task that is inherently difficult. Such identification, however, would allow the modification of field operations to enhance recovery through improved vertical sweep and heat injection. Twenty pulsed-neutron capture (PNC) logs were run to identify the steam-breakthrough zone(s) in a seven-pattern area of Mobil's Middle expansion (MIDX) Steamflood Project in the South Belridge field. These PNC data were combined with data from recent replacement wells and a detailed geologic analysis. Evaluation of this combined information allowed identification of potential steam-breakthrough zone(s), and operations were modified to reduce and eliminate steam breakthrough.

Data from almost 1600 of the 3800 body-burden documents collected to date have been entered in the data base as of October 1981. The emphasis on including recent literature and significant research documents has resulted in a chronological mix of articles from 1974 to the present. When body-burden articles are identified, data are extracted and entered in the data base by chemical and tissue/body fluid. Each data entry comprises a single record (or line entry) and is assigned a record number. If a particular document deals with more than one chemical and/or tissue, there will be multiple records for that document. For example, a study of 5 chemicals in each of 3 tissues has 15 different records (or 15 line entries) in the data base with 15 record numbers. Record numbers are assigned consecutively throughout the entire data base and appear in the upper left corner of the first column for each record.
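The record-numbering convention described above, where one document can expand into multiple records, can be sketched as follows. The field names and the chemical/tissue labels are illustrative only; the data base's actual schema is not shown here.

```python
from itertools import product

# One document studying 5 chemicals in each of 3 tissues expands into
# 5 x 3 = 15 records (line entries), numbered consecutively.
chemicals = [f"chem{i}" for i in range(1, 6)]   # 5 chemicals
tissues = ["blood", "liver", "fat"]             # 3 tissues/body fluids

records = [
    {"record_no": n, "chemical": c, "tissue": t}
    for n, (c, t) in enumerate(product(chemicals, tissues), start=1)
]
```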

A materials assessment methodology for identifying specific critical material requirements that could hinder the implementation of solar energy has been developed and demonstrated. The methodology involves an initial screening process, followed by a more detailed materials assessment. The detailed assessment considers such materials concerns and constraints as: process and production constraints, reserve and resource limitations, lack of alternative supply sources, geopolitical problems, environmental and energy concerns, time constraints, and economic constraints. Data for 55 bulk and 53 raw materials are currently available on the data base. These materials are required in the example photovoltaic systems. One photovoltaic system and thirteen photovoltaic cells, ten solar heating and cooling systems, and two agricultural and industrial process heat systems have been characterized to define their engineering and bulk material requirements.

This analysis is an update to the 2005 Energy Efficiency Potential Study completed by KEMA for the Kauai Island Utility Cooperative (KIUC) and identifies potential energy efficiency opportunities in the residential sector on Kauai (KEMA 2005). The Total Resource Cost (TRC) test is used to determine which of the energy efficiency measures analyzed in the KEMA report are cost effective for KIUC to include in a residential energy efficiency program. This report finds that there remain potential energy efficiency savings that could be cost-effectively incentivized through a utility residential demand-side management program on Kauai, provided the program is implemented in such a way that the program costs per measure are consistent with current residential program costs.
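A TRC screen of the general kind used above can be sketched in a few lines: a measure passes when the present value of avoided-energy benefits exceeds total program plus participant costs (ratio > 1). The discount rate and all inputs below are illustrative assumptions, not figures from the KEMA study.

```python
def trc_ratio(annual_kwh_saved, avoided_cost_per_kwh, life_years,
              program_cost, participant_cost, discount_rate=0.05):
    """Ratio of discounted lifetime benefits to total resource costs."""
    pv_benefits = sum(
        annual_kwh_saved * avoided_cost_per_kwh / (1 + discount_rate) ** y
        for y in range(1, life_years + 1)
    )
    return pv_benefits / (program_cost + participant_cost)

# Hypothetical measure: 500 kWh/yr saved for 10 years at $0.30/kWh avoided cost,
# against $400 program cost and $300 participant cost.
ratio = trc_ratio(annual_kwh_saved=500, avoided_cost_per_kwh=0.30,
                  life_years=10, program_cost=400, participant_cost=300)
cost_effective = ratio > 1.0   # passes the TRC test
```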

This paper proposes a chaos-based analog-to-information conversion system for the acquisition and reconstruction of sparse analog signals. The sparse signal acts as an excitation term of a continuous-time chaotic system, and the compressive measurements are obtained by sampling the chaotic system outputs. Reconstruction is realized by estimating the sparse coefficients using the principles of chaotic parameter estimation. With this deterministic formulation, the analysis of reconstructability is conducted via the sensitivity matrix derived from the parameter identifiability of chaotic systems. For the sparsity-regularized nonlinear least-squares estimation, it is shown that the sparse signal is locally reconstructable if the columns of the sparsity-regularized sensitivity matrix are linearly independent. A Lorenz system excited by a sparse multitone signal is taken as an example to illustrate the principle and the performance.

A library tracking database has been developed to monitor software/library usage. This Automatic Library Tracking Database (ALTD) automatically and transparently stores, into a database, information about the libraries linked into an application at compilation time and also the executables launched in a batch job. Information gathered into the database can then be mined to provide reports. Analyzing the results from the data collected will help to identify, for example, the most frequently used and the least used libraries and codes, and those users that are using deprecated libraries or applications. We will illustrate the usage of libraries and executables on the Cray XT platforms hosted at the National Institute for Computational Sciences and the Oak Ridge Leadership Computing Facility (both located at Oak Ridge National Laboratory).
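The mining step ALTD enables can be illustrated with a toy query. The schema and data below are invented for this example; ALTD's actual tables and fields are not shown here.

```python
import sqlite3

# In-memory stand-in for the tracking database: one row per library
# linked into an application at compile/link time.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE link_events (job_id INTEGER, user TEXT, library TEXT)")
db.executemany(
    "INSERT INTO link_events VALUES (?, ?, ?)",
    [(1, "alice", "libfftw"), (2, "alice", "libfftw"),
     (3, "bob", "liblapack"), (4, "carol", "libfftw")],
)

# Report: libraries ranked by how often they were linked.
rows = db.execute(
    "SELECT library, COUNT(*) AS uses FROM link_events "
    "GROUP BY library ORDER BY uses DESC"
).fetchall()
```

The same table could equally be queried for users still linking a deprecated library, which is the kind of report the abstract describes.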

Geopressured geothermal reservoirs are characterized by high temperatures and high pressures with correspondingly large quantities of dissolved methane. Due to these characteristics, the reservoirs provide two sources of energy: chemical energy from the recovered methane, and thermal energy from the recovered fluid at temperatures high enough to operate a binary power plant for electricity production. Formations with the greatest potential for recoverable energy are located in the gulf coastal region of Texas and Louisiana where significantly overpressured and hot formations are abundant. This study estimates the total recoverable onshore geopressured geothermal resource for identified sites in Texas and Louisiana. In this study a geopressured geothermal resource is defined as a brine reservoir with fluid temperature greater than 212 degrees F and a pressure gradient greater than 0.7 psi/ft.
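The screening criterion stated above (fluid temperature above 212 °F and pressure gradient above 0.7 psi/ft) reduces to a simple predicate. The site data below are invented for illustration.

```python
def is_geopressured_resource(temp_f, pressure_gradient_psi_per_ft):
    """Screen from the study: both thresholds must be exceeded."""
    return temp_f > 212.0 and pressure_gradient_psi_per_ft > 0.7

# Hypothetical candidate reservoirs.
candidates = [
    {"site": "A", "temp_f": 250.0, "grad": 0.85},   # hot and overpressured
    {"site": "B", "temp_f": 200.0, "grad": 0.90},   # too cool
    {"site": "C", "temp_f": 230.0, "grad": 0.60},   # gradient too low
]
hits = [c["site"] for c in candidates
        if is_geopressured_resource(c["temp_f"], c["grad"])]
```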

The biotinylating reagent succinimidyl 6-(biotinamido) hexanoate was used to label the cell surfaces of the cosmopolitan, marine, eukaryotic microorganism Emiliania huxleyi under different growth conditions. Proteins characteristic of different nutrient conditions could be identified. In particular, a nitrogen-regulated protein, nrp1, has an 82-kDa subunit that is present under nitrogen limitation and during growth on urea. It is absent under phosphate limitation or during exponential growth on nitrate or ammonia. nrp1 is the major membrane or wall protein in nitrogen-limited cells and is found in several strains of E. huxleyi. It may be a useful biomarker for examining the physiological state of E. huxleyi cells in their environment. 35 refs., 4 figs.

Visual Impacts of Energy Facilities. Concerns about the potential visual effects of utility-scale energy facilities on the nation's scenic, cultural, and historic resources have become a factor in slowing or halting energy and electric transmission projects. Because these projects are so important to the nation's energy supply, their potential visual impacts need to be identified and mitigated. The EVS Division has undertaken a number of studies to analyze visual resources. Detailed information about this work is online at http://visualimpact.anl.gov/.

PETITION FOR WAIVER OF RIGHTS TO AN IDENTIFIED INVENTION UNDER 10 C.F.R. PART 784. DOE WAIVER NO. _________ (To be supplied by DOE). DOE INVENTION NO. _________ (To be supplied by DOE). Notice: If you need help in completing this form, contact the DOE Patent Counsel assisting the activity that issued your award or the Assistant General Counsel for Technology Transfer and Intellectual Property in the Office of General Counsel in DOE Headquarters. Unless exceptional circumstances have been determined to exist, parties that qualify as Bayh-Dole entities under 35 U.S.C. 201(h) or (i) are not required to petition for title; rather, they may elect to retain title to subject inventions. Title of Contract: ________________________________________________________

Identifying Sources of Volatile Organic Compounds and Aldehydes in a High Performance Building. Publication Type: Report. LBNL Report Number: LBNL-3979e. Year of Publication: 2010. Authors: Anna C. Ortiz, Marion L. Russell, Wen-Yee Lee, Michael G. Apte, and Randy L. Maddalena. Pagination: 29. Date Published: 09/2010. Publisher: Lawrence Berkeley National Laboratory, Berkeley. Abstract: The developers of the Paharpur Business Center (PBC) and Software Technology Incubator Park in New Delhi, India offer an environmentally sustainable building with a strong emphasis on energy conservation, waste minimization, and superior indoor air quality (IAQ). To achieve the IAQ goal, the building uses a series of air cleaning technologies for treating the air entering the building: an initial water wash followed by ultraviolet light treatment and biofiltration using a greenhouse located on the roof and numerous plants distributed throughout the building. Even with the extensive treatment of makeup air and room air in the PBC, a recent study found that the concentrations of common volatile organic compounds (VOCs) and aldehydes appear to rise incrementally as the air passes through the building from the supply to the exhaust. This finding highlights the need to consider the minimization of chemical sources in buildings, in combination with the use of advanced air cleaning technologies, when seeking to achieve superior IAQ. The goal of this project was to identify potential source materials for indoor chemicals in the PBC. Samples of building materials, including wood paneling (polished and unpolished), drywall, and plastic from a hydroponic drum that was part of the air cleaning system, were collected from the building for testing.
All materials were collected from the PBC building and shipped to the Lawrence Berkeley National Laboratory (LBNL) for testing. The materials were pre-conditioned for two different time periods before measuring material- and chemical-specific emission factors for a range of VOCs and aldehydes. Of the six materials tested, we found that the highest emitter of formaldehyde was new plywood paneling. Although polish and paint contribute to some VOC emissions, the main influence of the polish was in altering the capacity of the surface to accumulate formaldehyde. Neither the new nor the aged polish contributed significantly to formaldehyde emissions. The VOC emission stream (excluding formaldehyde) was composed of up to 18 different chemicals, and total VOC emissions ranged in magnitude from 7 µg/m²/h (old wood with old polish) to >500 µg/m²/h (painted drywall). The formaldehyde emissions from drywall and old wood with either new or old polish were ~15 µg/m²/h, while the new wood material emitted >100 µg/m²/h. However, when the projected surface area of each material in the building was considered, the new wood, old wood, and painted drywall material all contributed substantially to the indoor formaldehyde loading, while the coatings contributed primarily to the VOCs


Appropriate Acquisition Strategy. PMLL Identifier: PMLL-2011-NNSS-RFS-388 (Source: User Submitted). Validator: Kevin Thornton, NNSA/NSO. Date: 2/14/2011. Contact: Robert Platoni/702-295-0815. Statement: The selection of an acquisition strategy that is appropriate for current market conditions, funding constraints, and project scope can result in more competitive bidding and lower bid prices. Discussion: The scope of this project was to construct two new fire stations to replace existing outdated facilities. The project was originally planned as two separate projects to be constructed in two different fiscal years using a design/bid/build acquisition strategy. The funding profile was appropriate for this type of strategy. In FY2004, Congress directed that the two projects be

of Peripheral Scope. PMLL Identifier: PMLL-2010-SNL-HSM-0001 (Source: User Submitted). Validator: Dawn Harder. Date: 12/16/2010. Contact: 505-845-6314. Statement: A common understanding of the project scope is essential for project success. Additionally, agreeing on the treatment of scope that is similar to but not part of the project scope is also necessary. Discussion: The Sandia Site Office (SSO) and Sandia National Laboratories (SNL) disagreed on the treatment of two separate scopes of work and their inclusion in the Heating Systems Modernization (HSM) Project. The first scope involved the early shutdown of the steam plant. During the initial planning and design, the Engineer and the SNL Plant Engineers studied the effect(s) expected to be experienced at the steam plant as load was successively removed during the planned three years

This presentation will describe the Fernald Environmental Restoration Management Corporation's (FERMCO) Standards/Requirements Identification Documents (S/RIDs) Program, the unique process used to implement it, and the status of the program. We will also discuss the lessons learned as the program was implemented. The Department of Energy (DOE) established the Fernald site to produce uranium metals for the nation's defense programs in 1953. In 1989, DOE suspended production and, in 1991, the mission of the site was formally changed to one of environmental cleanup and restoration. The site was renamed the Fernald Environmental Management Project (FEMP). FERMCO's mission is to provide safe, early, and least-cost final clean-up of the site in compliance with all regulations and commitments. DOE has managed nuclear facilities primarily through its oversight of Management and Operating contractors. Comprehensive nuclear industry standards were absent when most DOE sites were first established, so Management and Operating contractors had to apply existing non-nuclear industry standards and, in many cases, formulate new technical standards. Because it was satisfied with the operation of its facilities, DOE did not incorporate modern practices and standards as they became available. In March 1990, the Defense Nuclear Facilities Safety Board issued Recommendation 90-2, which called for DOE to identify relevant standards and requirements, conduct adequacy assessments of requirements in protecting environmental, public, and worker health and safety, and determine the extent to which the requirements are being implemented. The Environmental Restoration and Waste Management Office of DOE embraced the recommendation for facilities under its control. Strict accountability requirements made it essential that FERMCO and DOE clearly identify the necessary applicable requirements, determine the requirements' adequacy, and assess FERMCO's level of compliance.

This report explores the development of recently deregulated industries, both in the United States and abroad, as their markets became increasingly competitive. It concludes by identifying several methods and tools that will be needed to plan and operate power systems in this new business environment.

Much of the analysis thus far has focused on identifying differential measurements, which form the basis of the biomarker being measured. The holy grail of such techniques is the robust identification of causal models from measurements of cellular activity. Specifically, we will survey computational methods for learning Bayesian networks.

The purpose of this report is to identify important highway pavement maintenance and rehabilitation needs and to propose microwave methods and equipment that could be profitably used for this work. As a starting point, it is already perceived and accepted that the major emphasis in the US paving industry

A method and apparatus are described for sampling radiation detector outputs and determining event data from the collected samples. The method uses high-speed sampling of the detector output, conversion of the samples to digital values, and discrimination of the digital values so that those representing detected events are determined. The high-speed sampling and digital conversion are performed by an A/D sampler that samples the detector output at a rate high enough to produce numerous digital samples for each detected event. The digital discrimination identifies those digital samples that are not representative of detected events. The sampling and discrimination also provide for temporary or permanent storage, either serially or in parallel, to a digital storage medium. 6 figures.
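The discrimination step can be sketched as a simple threshold test over the digitized samples: values that never cross the threshold are rejected as not representing detected events. The threshold, the fixed event window, and the waveform values below are assumptions for illustration, not the patented discriminator.

```python
def discriminate(samples, threshold, window=4):
    """Return (start, end) index ranges of candidate events,
    detected as threshold crossings in the digitized trace."""
    events = []
    i = 0
    while i < len(samples):
        if samples[i] >= threshold:
            # Record a fixed window of samples for this event,
            # then skip past it so one pulse yields one event.
            events.append((i, min(i + window, len(samples))))
            i += window
        else:
            i += 1          # below threshold: discard as baseline/noise
    return events

# Digitized detector trace with two pulses riding on a low baseline.
trace = [2, 1, 3, 40, 35, 12, 4, 2, 1, 55, 30, 8, 2]
events = discriminate(trace, threshold=20)
```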

Bimetallic nanoparticles (NPs) have wide applications in electronics, photonics, and catalysis. However, it is particularly challenging to synthesize size-controllable alloy nanoparticles (e.g., NiAu) with bulk immiscible metals as the components. Here we report the synthesis of isolable NiAu alloy nanoparticles with tunable and relatively uniform sizes via a coreduction method employing butyllithium as the reducing agent and trioctylphosphine as the protecting agent. The influences of synthesis conditions (e.g., protecting agent, aging temperature, and the solvent used to wash the product) were investigated, and the synthesis mechanism was preliminarily surveyed. The NiAu alloy nanoparticles obtained were then used as the precursor to prepare an Au-NiO/SiO2 catalyst highly active in low-temperature CO oxidation, and the effects of pretreatment details and catalyst compositions on catalytic activity were studied. Relevant characterization employing XRD, TEM, UV-vis, TG/DTG, and FT-IR was conducted. In addition, the importance of the current synthesis of NiAu alloy NPs and the contribution of the catalyst design were discussed in the context of the literature.