A technique adapted from the guarded-comparative-longitudinal heat flow method was selected for measuring the thermal conductivity of a nuclear fuel compact over a temperature range characteristic of its usage. This technique fulfills the requirement for non-destructive measurement of the composite compact. Although numerous measurement systems have been built on the guarded comparative method, the systematic (bias) and measurement (precision) uncertainties associated with this technique have not been fully analyzed. In addition to the geometric effect on the bias error, which has been analyzed previously, this paper studies the working condition, another potential error source. Using finite element analysis, this study shows the effect of these two types of error sources on the thermal conductivity measurement and the limitations in the design selection of various parameters, considering their effect on the precision error. The results and conclusions provide a valuable reference for designing and operating an experimental measurement system based on this technique.

We present a multiresolution technique for interactive texture based rendering of arbitrarily oriented cutting planes for very large data sets. This method uses an adaptive scheme that renders the data along a cutting plane at different resolutions: higher resolution near the point-of-interest and lower resolution away from the point-of-interest. The algorithm is based on the segmentation of texture space into an octree, where the leaves of the tree define the original data and the internal nodes define lower-resolution versions. Rendering is done adaptively by selecting high-resolution cells close to a center of attention and low-resolution cells away from it. We limit the artifacts introduced by this method by blending between different levels of resolution to produce a smooth image. This technique can be used to produce viewpoint-dependent renderings.
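A sketch of the adaptive selection idea (the distance rule and blending here are illustrative assumptions, not the paper's exact criteria):

```python
import numpy as np

def select_lod(cell_center, focus, base_cell_size, max_level):
    """Pick an octree resolution level for a texture cell: full resolution
    (max_level) near the point-of-interest, one level coarser for every
    doubling of distance (an illustrative rule, not the paper's)."""
    d = float(np.linalg.norm(np.asarray(cell_center, float) -
                             np.asarray(focus, float)))
    drop = int(np.log2(max(d / base_cell_size, 1.0)))
    return max(max_level - drop, 0)

def blend_weight(d, r_inner, r_outer):
    """Linear blend factor between adjacent resolution levels, used to
    smooth the transition and limit visible seams."""
    return float(np.clip((r_outer - d) / (r_outer - r_inner), 0.0, 1.0))
```

Leaves of the octree would use the full-resolution texture bricks; internal nodes hold downsampled versions, and the blend weight interpolates between the two levels straddling the transition radius.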

A method is presented for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm that generates minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. The efficiency of the search algorithm stems in part from its consideration of link failures only. The method also includes a novel quantification scheme that likewise reduces the computational effort of assessing network reliability with traditional risk importance measures. Vast reductions in computational effort are realized because the combinatorial expansion and subsequent Boolean reduction steps are eliminated: network segmentations are analyzed by assuming node failures to occur on only one side of a break in the network, and the technique is repeated for all minimal cut sets generated by the search algorithm. The method functions equally well for planar and non-planar networks.
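For a small network, minimal cut sets over link failures can be enumerated by brute force; a sketch (not the paper's efficient search algorithm) might look like:

```python
from itertools import combinations

def connected(nodes, edges):
    """Depth-first connectivity check on an undirected graph."""
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in edges:
            for v in ((b,) if a == u else (a,) if b == u else ()):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
    return seen == set(nodes)

def minimal_cut_sets(nodes, edges):
    """Enumerate minimal sets of link failures that disconnect the
    network.  Brute force over subsets, smallest first, keeping only
    sets that contain no previously found (smaller) cut."""
    cuts = []
    for k in range(1, len(edges) + 1):
        for combo in combinations(edges, k):
            surviving = [e for e in edges if e not in combo]
            if not connected(nodes, surviving):
                if not any(set(c) <= set(combo) for c in cuts):
                    cuts.append(combo)
    return cuts
```

On a 4-node ring every pair of links is a minimal cut set; the point of the paper's search algorithm is precisely to avoid this exponential enumeration on realistic networks.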

The bounds for cut elimination given by Buss in [8] and Schwichtenberg in [19] are not optimal in many cases, and the bound presented by Buss is already significantly worse than the bound given by Schwichtenberg. Moreover, Buss's proof of the cut elimination theorem, which is based upon global proof transformations, is somewhat obscure.
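For orientation, the kind of bound at stake can be stated as follows (a standard Schwichtenberg-style formulation; the precise constants in [8] and [19] differ):

```latex
% Hyperexponential bound for cut elimination (schematic form).
% If a proof has height h and its cut formulas have rank at most d,
% then there is a cut-free proof of the same end-sequent of height
% at most 2_d(h), where the tower function is defined by
\[
  2_0(h) = h, \qquad 2_{k+1}(h) = 2^{\,2_k(h)} .
\]
```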


Maintenance is effective when it improves equipment availability and reduces costs. Reduced costs stem from increased availability, which is the primary objective of this study; as a result, overall operating costs decrease. RAM analysis requires a logical approach to the problem through techniques such as FMEA, FTA, and goal trees. To illustrate the steps of this method, the authors used a simplified turbine-generator (T-G) system. The method ranks critical components in terms of failure severity; on the basis of this ranking, preventive maintenance tasks can be assigned in order of priority. Other options are also available, such as revised procedures, more detailed outage plans using PC-based programs, and better spare parts management.
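A common way to realize such a ranking is the FMEA Risk Priority Number; a minimal sketch (field names are hypothetical):

```python
def rank_by_rpn(components):
    """FMEA-style prioritization: the Risk Priority Number (RPN) is the
    product of severity, occurrence, and detection scores; components
    are returned highest-RPN first so preventive maintenance tasks can
    be assigned in that order.  Field names here are hypothetical."""
    rpn = lambda c: c["severity"] * c["occurrence"] * c["detection"]
    return sorted(components, key=rpn, reverse=True)
```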

The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others who contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment in which analysis is used to guide design decisions. Computer-aided design (CAD) models built with PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that carries all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM is currently a time-consuming effort, and the turnaround time for analysis results needs to be decreased for analysis to have an impact on overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that come down to the way features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of creating the ASM from the DSM.

Laser remote fusion cutting is analyzed with the aid of a semi-analytical mathematical model of the processing front. By locally calculating the energy balance between the absorbed laser beam and the heat losses, the three-dimensional vaporization front can be computed. Based on an empirical model for the melt flow field, the melt film and the melting front can be derived from a mass balance, although only in a simplified manner and for quasi-steady-state conditions. Front waviness and multiple reflections are not modelled. The model enables comparison of the similarities, differences, and limits between laser remote fusion cutting, laser remote ablation cutting, and even laser keyhole welding. In contrast to the upper part of the vaporization front, the major part varies only slightly with respect to heat flux, laser power density, absorptivity, and angle of front inclination. Statistical analysis shows that at high cutting speed, the domains of high laser power density contribute much more to the formation of the front than at low speed. The semi-analytical modelling approach offers the flexibility to simplify parts of the process physics while, for example, sophisticated modelling of the complex focused fibre-guided laser beam is retained to enable deeper analysis of the beam interaction. Mechanisms such as recast layer generation, absorptivity at a wavy processing front, and melt film formation are also studied.
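Schematically, the local energy balance that determines the vaporization front can be written as follows (generic symbols, not the paper's exact notation):

```latex
% Local balance at the processing front: the absorbed fraction of the
% incident laser intensity equals the conductive loss into the
% workpiece plus the power spent on heating and vaporizing material.
\[
  A(\varphi)\, I_L(x,y)\,\cos\varphi
    \;=\; q_{\mathrm{cond}}(x,y) \;+\; q_{\mathrm{vap}}(x,y),
\]
% where \varphi is the local angle of front inclination and
% A(\varphi) the angle-dependent absorptivity.
```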

SCAT applies statistical techniques to dipmeter data to identify patterns of bulk curvature, determine transverse and longitudinal structural directions, and reconstruct cross sections and contour maps. STRAT-SCAT applies the same concepts to geometric interpretation of multistoried unimodal, bimodal, or trough-type cross-bedding and also to seismic stratigraphy-scale stratigraphic structures. Structural dip, which comprises the bulk of dipmeter data, is related to beds that (statistically) were deposited with horizontal attitudes; stratigraphic dip is related to beds that were deposited with preferentially oriented nonhorizontal attitudes or to beds that assumed such attitudes because of differential compaction. Stratigraphic dip generates local zones of departure from structural dip on special SCAT plots. The RMS (root-mean-square) of apparent structural dip is greatest in the (structural) T-direction and least in the perpendicular L-direction; the RMS of stratigraphic dip (measured with respect to structural dip) is greatest in the stratigraphic T*-direction and least in the stratigraphic L*-direction. Multistoried cross-bedding appears on T*-plots as local zones of either greater scatter or statistically significant departure of stratigraphic median dip from structural dip. In contrast, the L*-plot (except for trough-type cross-bedding) is insensitive to cross-bedding. Seismic stratigraphy-scale depositional sequences are identified on Mercator dip-versus-azimuth plots and polar tangent plots as secondary cylindrical-fold patterns imposed on global structural patterns. Progradational sequences generate local cycloid-type patterns on T*-plots, and compactional sequences generate local half-cusp patterns. Both features, however, show only structural dip on L*-plots.
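The T/L-direction idea rests on apparent dip in a cross-section; a small sketch (the standard apparent-dip relation, not SCAT's actual implementation) is:

```python
import math

def apparent_dip(true_dip_deg, dip_azimuth_deg, section_azimuth_deg):
    """Apparent dip of a plane seen in a vertical cross-section:
    tan(apparent) = tan(true) * cos(angle between section azimuth
    and the true dip azimuth)."""
    delta = math.radians(section_azimuth_deg - dip_azimuth_deg)
    return math.degrees(math.atan(
        math.tan(math.radians(true_dip_deg)) * abs(math.cos(delta))))

def rms_apparent_dip(readings, section_azimuth_deg):
    """RMS of apparent dip over (dip, azimuth) readings for a given
    section direction; the SCAT T-direction maximises this quantity
    and the perpendicular L-direction minimises it."""
    vals = [apparent_dip(d, az, section_azimuth_deg) for d, az in readings]
    return math.sqrt(sum(v * v for v in vals) / len(vals))
```

Scanning `section_azimuth_deg` over 0 to 180 degrees and locating the maximum of the RMS reproduces the T-direction determination described above.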

The MAJORANA Collaboration is constructing the MAJORANA DEMONSTRATOR, an ultra-low-background, 40-kg modular HPGe detector array to search for neutrinoless double beta decay in 76Ge. In view of the next generation of tonne-scale Ge-based 0nbb-decay searches that will probe the neutrino mass scale in the inverted-hierarchy region, a major goal of the MAJORANA DEMONSTRATOR is to demonstrate a path forward to achieving a background rate at or below 1 count/tonne/year in the 4 keV region of interest around the Q-value at 2039 keV. The background rejection techniques to be applied to the data include cuts based on data reduction, pulse shape analysis, event coincidences, and time correlations. The Point Contact design of the DEMONSTRATOR's germanium detectors allows for significant reduction of gamma backgrounds.

cutting, repeatedly folds onto itself, indicating the presence of multiple frequencies in the signal during chatter. In the second part of the thesis, a regenerative cutting force model is used to study the effects of chatter on tool motion. Floquet...

In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. In contrast to the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition depends on the system geometry and the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions using stainless steel 304 as both the sample and the meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation's effective thermal conductivity on measurement error, revealing that the low-conductivity argon gas gives the lowest error sensitivity when deviating from the optimal condition. The results of this study provide a general guideline for this specific measurement method and for other methods requiring optimal guarding or insulation.
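For context, the comparative cut-bar reduction itself is simple; a minimal sketch (assuming equal cross-sections and ideal guarding, with hypothetical gradient values) is:

```python
def sample_conductivity(k_meter, grad_top, grad_bottom, grad_sample):
    """Comparative cut-bar reduction: with ideal guarding and equal
    cross-sections, the axial heat flux is conserved through the stack,
    so k_sample = q / (dT/dx)_sample with q taken from the meter bars.
    Temperature gradients are in K/m, conductivities in W/(m K)."""
    q_top = k_meter * grad_top        # flux inferred from top meter bar
    q_bottom = k_meter * grad_bottom  # flux inferred from bottom meter bar
    q = 0.5 * (q_top + q_bottom)      # average to reduce heat-loss bias
    return q / grad_sample
```

Any mismatch between `q_top` and `q_bottom` signals lateral heat exchange with the guard, which is exactly the bias the guarding condition is meant to suppress.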

Electric Power Network Security Analysis via Minimum Cut Relaxation (Kin Cheong Sou, Henrik Sandberg): an analysis of the security of power transmission networks is presented, in which the grid is modelled as a set of nodes and directed arcs (i.e., transmission lines) on which power flows, in order to strategically allocate protection devices.

We apply the recently defined multipole vector framework to the frequency-specific first-year WMAP sky maps, estimating the low-l multipole coefficients from the high-latitude sky by means of a power equalization filter. While most previous analyses of this type have considered only heavily processed (and foreground-contaminated) full-sky maps, the present approach allows for greater control of residual foregrounds, and therefore potentially also for cosmologically important conclusions. The low-l spherical harmonic coefficients and corresponding multipole vectors are tabulated for easy reference. Using this formalism, we re-assess a set of earlier claims of both cosmological and non-cosmological low-l correlations based on multipole vectors. First, we show that the apparent l=3 and l=8 correlation claimed by Copi et al. (2004) is present only in the heavily processed map produced by Tegmark et al. (2003), and must therefore be considered an artifact of that map. Second, the well-known quadrupole-octopole correlation is confirmed at the 99% significance level, and shown to be robust with respect to frequency and sky cut. Previous claims are thus supported by our analysis. Finally, the low-l alignment with respect to the ecliptic claimed by Schwarz et al. (2004) is nominally confirmed in this analysis, but also shown to be very dependent on severe a posteriori choices. Indeed, we show that given the peculiar quadrupole-octopole arrangement, finding such a strong alignment with the ecliptic is not unusual.

The effect of the polarisation of a Gaussian beam on radiation absorption during laser cutting of metals is investigated. A generalised formula is proposed for calculating the absorption coefficient that describes all three polarisation types (linear, elliptical, and circular), taking into account the fact that the beam may interact with a metal surface of arbitrary shape. A comparison with existing analogues (in the cases of linear and circular polarisation) confirmed the advantage of employing the formula for the spatial description of the shape of the surface produced, which is highly important for processing (cutting, welding, drilling) of thick materials. The effect of laser radiation characteristics on the surface shape and cut depth in cutting stainless steel sheets is investigated numerically. It is shown for the first time that cutting by the TEM{sub 00} beam is most efficient when the beam has elliptical polarisation directed along the direction of beam displacement and characterised by a specific axial ratio.
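For reference, the classical Fresnel absorptivities that such a generalised formula builds on can be sketched as follows (the elliptical mix shown is an illustrative assumption, not the paper's formula):

```python
import cmath
import math

def fresnel_absorptivity(n_complex, theta):
    """Absorptivity of a metal surface for s- and p-polarised light at
    incidence angle theta (radians), from the Fresnel equations.
    n_complex is the complex refractive index n + i*k."""
    c, s2 = math.cos(theta), math.sin(theta) ** 2
    root = cmath.sqrt(n_complex**2 - s2)
    r_s = (c - root) / (c + root)
    r_p = (n_complex**2 * c - root) / (n_complex**2 * c + root)
    return 1 - abs(r_s) ** 2, 1 - abs(r_p) ** 2

def elliptical_absorptivity(A_s, A_p, p_fraction):
    """Simple intensity-weighted mix for elliptical polarisation with
    fraction `p_fraction` of the power in the p-plane (0.5 reproduces
    circular polarisation).  An illustrative assumption only."""
    return p_fraction * A_p + (1 - p_fraction) * A_s
```

At oblique incidence the p-component is absorbed more strongly than the s-component, which is why orienting the polarisation ellipse along the cutting direction changes the energy coupling into the kerf walls.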

As an important factor affecting the accuracy of thermal conductivity measurement, the systematic (bias) error in the guarded comparative axial heat flow (cut-bar) method has mostly been neglected in previous research. This bias is due primarily to the thermal conductivity mismatch between the sample and the meter bars (references), which is common for a sample of unknown thermal conductivity. A correction scheme, based on a finite element simulation of the measurement system, is proposed to reduce the magnitude of the overall measurement uncertainty. The scheme was experimentally validated by applying corrections to four types of sample measurements in which the specimen thermal conductivity is much smaller than, slightly smaller than, equal to, or much larger than that of the meter bars. As an alternative to the optimum guarding technique proposed previously, the correction scheme can be used to minimize the uncertainty contribution from a measurement system with non-optimal guarding conditions; it is especially necessary for large thermal conductivity mismatches between sample and meter bars.

NISTIR 7078, TIN Techniques for Data Analysis and Surface Construction, Building and Fire Research Laboratory, National Institute of Standards and Technology. This report addresses the task of meshing point clouds by triangulated elevated surfaces, referred to as TINs.

An automated device is described that couples a pair of differently sized sample loops with a syringe pump and a source of degassed water. A fluid sample is mounted at an inlet port and delivered to the sample loops. A selected sample from the sample loops is diluted in the syringe pump with the degassed water and fed to a flow-through detector for analysis. The sample inlet is also directly connected to the syringe pump to selectively perform analysis without dilution. The device is airtight and is used to detect oxygen-sensitive species, such as dithionite in groundwater following a remedial injection to treat soil contamination.

Dante is an 18-channel filtered X-ray diode array that records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters, and mirrors, together with an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetics. We present a new method for quantifying the uncertainties on the determined flux using a Monte-Carlo parameter variation technique. This technique combines the uncertainties of the unfold algorithm and the absolute calibration error of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resulting set of fluxes to estimate error bars on the measurements.
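A minimal sketch of the Monte-Carlo parameter variation idea (with a caller-supplied stand-in for the unfold algorithm; the names and the multiplicative perturbation model are assumptions, not the Dante implementation):

```python
import numpy as np

def monte_carlo_flux_errors(voltages, cal_sigma, unfold,
                            n_trials=1000, seed=0):
    """Each trial perturbs every channel voltage by its one-sigma
    (relative) calibration error, runs the unfold, and the spread of
    the resulting fluxes gives the error bar.  `unfold` is a
    caller-supplied stand-in for the real unfold algorithm."""
    rng = np.random.default_rng(seed)
    v = np.asarray(voltages, float)
    sig = np.asarray(cal_sigma, float)
    fluxes = np.array([unfold(v * rng.normal(1.0, sig))
                       for _ in range(n_trials)])
    return float(fluxes.mean()), float(fluxes.std(ddof=1))
```

With 1000 trials, as in the paper, the sample standard deviation of the unfolded fluxes is the quoted one-sigma error bar.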

APPLICATION OF DATA ANALYSIS TECHNIQUES TO NUCLEAR REACTOR SYSTEMS CODE ACCURACY ASSESSMENT. A method has been developed by the authors to provide quantitative comparisons between nuclear reactor systems codes. In recent years, the commercial nuclear reactor industry has focused significant ...

Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed that describes the links between performance shaping factors and resulting unsafe actions.

This paper overviews several methods for the analysis and optimization of thermal effects in the design of electrical circuits, based on finite difference methods, finite element methods, or Green function based methods. It also overviews a restricted set of thermal optimization methods, specifically placement techniques for thermal ...

An attempt has been made to present some of the thin-film deposition and surface analysis techniques that may be useful in growing superionic conducting materials. Emphasis is placed on the importance of care in selecting process parameters and materials in order to produce films with the properties outlined in this article. Special care should also be given to proper consideration of grain boundary effects.

This project was undertaken to demonstrate that oil and gas can be drilled and produced safely and economically from a fractured Monterey reservoir in the Santa Maria Basin of California by employing horizontal wellbores and underbalanced drilling technologies. Two vertical wells were previously drilled in this area with heavy mud and conventional completions; neither was commercially productive. A new well was drilled by the project team in 2004 with the objective of accessing an extended length of oil-bearing, high-resistivity Monterey shale via a horizontal wellbore, while implementing managed-pressure drilling (MPD) techniques to avoid formation damage. Initial project meetings were conducted in October 2003. The team confirmed that the demonstration well would be completed open-hole to minimize productivity impairment. Following an overview of the geologic setting and local field experience, critical aspects of the application were identified. At the pre-spud meeting in January 2004, the final well design was confirmed and the well programming/service company requirements assigned. Various design elements were reduced in scope due to significant budgetary constraints. Major alterations to the original plan included: (1) a VSP seismic survey was delayed to a later phase; (2) a new (larger) surface hole would be drilled rather than re-enter an existing well; (3) a 7-in. liner would be placed into the top of the Monterey target as quickly as possible to avoid problems with hole stability; (4) evaluation activities were reduced in scope; (5) geosteering observations for fracture access would be deduced from penetration rate, cuttings description and hydrocarbon in-flow; and (6) rather than use nitrogen, a novel air-injection MPD system was to be implemented. Drilling operations, delayed from the original schedule by capital constraints and lack of rig availability, were conducted from September 12 to November 11, 2004. 
The vertical and upper curved sections were drilled and lined through the problematic shale member without major stability problems. The top of the targeted Monterey was thought to be seen at the expected TVD of 10,000 ft, where the 7-in. liner was set at a 60° hole angle. Significant oil and gas shows suggested the fractured interval anticipated at the heel location had been penetrated. A total of 2572 ft of 6 1/8-in. near-horizontal interval was placed in the shale section, extending planned well length by approximately 470 ft. Very little hydrocarbon in-flow was observed from fractures along the productive interval. This may be a result of the well trajectory falling underneath the Monterey fractured zone. Hydrocarbon observations, cuttings analysis and gamma-ray response indicated additional fractured intervals were accessed along the last ±900 ft of well length. The well was completed with a 2 7/8-in. tubing string set in a production packer in preparation for flow and swab tests to be conducted later by a service rig. The planned well time was estimated as 39 days and overall cost as $2.4 million. The actual results were 66 days at a total cost of $3.4 million. Well productivity responses during subsequent flow and swabbing tests were negative. The well failed to inflow and only minor amounts (a few barrels) of light oil were recovered. The lack of production may suggest that the actual sustainable reservoir pressure is far less than anticipated. Temblor attempted in July 2006 to re-enter and clean out the well and run an Array Induction log (primarily for resistivity and correlation purposes) and an FMI log (for fracture detection). Application of surfactant along the length of the horizontal hole, and acid over the fracture zone at 10,236 ft, was also planned. This attempt was not successful in that the clean-out tools became stuck and had to be abandoned.

This project was undertaken to demonstrate that oil and gas can be drilled and produced safely and economically from a fractured Monterey reservoir in the Santa Maria Basin of California by employing horizontal wellbores and underbalanced drilling technologies. Two vertical wells were previously drilled in this area with heavy mud and conventional completions; neither was commercially productive. A new well was drilled by the project team in 2004 with the objective of accessing an extended length of oil-bearing, high-resistivity Monterey shale via a horizontal wellbore, while implementing managed-pressure drilling (MPD) techniques to avoid formation damage. Initial project meetings were conducted in October 2003. The team confirmed that the demonstration well would be completed open-hole to minimize productivity impairment. Following an overview of the geologic setting and local field experience, critical aspects of the application were identified. At the pre-spud meeting in January 2004, the final well design was confirmed and the well programming/service company requirements assigned. Various design elements were reduced in scope due to significant budgetary constraints. Major alterations to the original plan included: (1) a VSP seismic survey was delayed to a later phase; (2) a new (larger) surface hole would be drilled rather than re-enter an existing well; (3) a 7-in. liner would be placed into the top of the Monterey target as quickly as possible to avoid problems with hole stability; (4) evaluation activities were reduced in scope; (5) geosteering observations for fracture access would be deduced from penetration rate, cuttings description and hydrocarbon in-flow; and (6) rather than use nitrogen, a novel air-injection MPD system was to be implemented. Drilling operations, delayed from the original schedule by capital constraints and lack of rig availability, were conducted from September 12 to November 11, 2004. 
The vertical and upper curved sections were drilled and lined through the problematic shale member without major stability problems. The top of the targeted Monterey was thought to be seen at the expected TVD of 10,000 ft, where the 7-in. liner was set at a 60° hole angle. Significant oil and gas shows suggested the fractured interval anticipated at the heel location had been penetrated. A total of 2572 ft of 6 1/8-in. near-horizontal interval was placed in the shale section, extending planned well length by approximately 470 ft. Very little hydrocarbon in-flow was observed from fractures along the productive interval. This may be a result of the well trajectory falling underneath the Monterey fractured zone. Hydrocarbon observations, cuttings analysis and gamma-ray response indicated additional fractured intervals were accessed along the last ±900 ft of well length. The well was completed with a 2 7/8-in. tubing string set in a production packer in preparation for flow and swab tests to be conducted later by a service rig. The planned well time was estimated as 39 days and overall cost as $2.4 million. The actual results were 66 days at a total cost of $3.4 million. Well productivity responses during subsequent flow and swabbing tests were negative. The well failed to inflow and only minor amounts (a few barrels) of light oil were recovered. The lack of production may suggest that the actual sustainable reservoir pressure is far less than anticipated. Temblor is currently planning to re-enter and clean out the well and run an Array Induction log (primarily for resistivity and correlation purposes) and an FMI log (for fracture detection). Depending on the results of these logs, an acidizing or re-drill program will be planned.

This project was undertaken to demonstrate that oil and gas can be drilled and produced safely and economically from a fractured Monterey reservoir in the Santa Maria Basin of California by employing horizontal wellbores and underbalanced drilling technologies. Two vertical wells were previously drilled in this area by Temblor Petroleum with heavy mud and conventional completions; neither was commercially productive. A new well was drilled by the project team in 2004 with the objective of accessing an extended length of oil-bearing, high-resistivity Monterey shale via a horizontal wellbore, while implementing managed-pressure drilling (MPD) techniques to avoid formation damage. Initial project meetings were conducted in October 2003. The team confirmed that the demonstration well would be completed open-hole to minimize productivity impairment. Following an overview of the geologic setting and local field experience, critical aspects of the application were identified. At the pre-spud meeting in January 2004, the final well design was confirmed and the well programming/service company requirements assigned. Various design elements were reduced in scope due to significant budgetary constraints. Major alterations to the original plan included: (1) a VSP seismic survey was delayed to a later phase; (2) a new (larger) surface hole would be drilled rather than re-enter an existing well; (3) a 7-in. liner would be placed into the top of the Monterey target as quickly as possible to avoid problems with hole stability; (4) evaluation activities were reduced in scope; (5) geosteering observations for fracture access would be deduced from penetration rate, cuttings description and hydrocarbon in-flow; and (6) rather than use nitrogen, a novel air-injection MPD system was to be implemented. Drilling operations, delayed from the original schedule by capital constraints and lack of rig availability, were conducted from September 12 to November 11, 2004. 
The vertical and upper curved sections were drilled and lined through the problematic shale member without major stability problems. The top of the targeted Monterey was interpreted at the expected TVD of 10,000 ft, where the 7-in. liner was set at a 60° hole angle. Significant oil and gas shows suggested the fractured interval anticipated at the heel location had been penetrated. A total of 2,572 ft of 6 1/8-in. near-horizontal interval was placed in the shale section, extending planned well length by approximately 470 ft. Very little hydrocarbon in-flow was observed from fractures along the productive interval. This may be a result of the well trajectory falling underneath the Monterey fractured zone. Hydrocarbon observations, cuttings analysis, and gamma-ray response indicated additional fractured intervals were accessed along the last ±900 ft of well length. The well was completed with a 2 7/8-in. tubing string set in a production packer in preparation for flow and swab tests to be conducted later by a service rig. The planned well time was estimated at 39 days and overall cost at $2.4 million; the actual results were 66 days at a total cost of $3.4 million. Well productivity responses during subsequent flow and swabbing tests were negative: the well failed to inflow, and only minor amounts (a few barrels) of light oil were recovered. The lack of production may suggest that the actual sustainable reservoir pressure is far less than anticipated. Temblor is currently investigating the costs and operational viability of re-entering the well and conducting an FMI (fracture detection) log and/or an acid stimulation. No final decision or detailed plans have been made regarding these potential interventions at this time.

The Photovoltaics for Utility Scale Applications (PVUSA) project tests two types of PV systems at the main test site in Davis, California: new module technologies fielded as 20-kW Emerging Module Technology (EMT) arrays, and more mature technologies fielded as 70- to 500-kW turnkey Utility-Scale (US) systems. PVUSA members have also installed systems in their service areas. Designed appropriately, data acquisition systems (DASs) can be a convenient and reliable means of assessing system performance, value, and health. Improperly designed, they can be complicated, difficult to use and maintain, and provide data of questionable validity. This report documents PVUSA PV system instrumentation and data analysis techniques and lessons learned. The report is intended to assist utility engineers, PV system designers, and project managers in establishing an objective and then, through a logical series of topics, to facilitate selection and design of a DAS to meet that objective. Report sections include Performance Reporting Objectives (including operational versus research DAS), Recommended Measurements, Measurement Techniques, Calibration Issues, and Data Processing and Analysis Techniques. Conclusions and recommendations based on several years of operation and performance monitoring are offered. This report is one in a series of 1994-1995 PVUSA reports documenting PVUSA lessons learned at the demonstration sites in Davis and Kerman, California. Other topical reports address: five-year assessment of EMTs; validation of the Kerman 500-kW grid support PV plant benefits; construction and safety experience in installing and operating PV systems; balance-of-system design and costs; procurement, acceptance, and rating practices for PV power plants; and experience with power conditioning units and power quality.

Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
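As a rough, self-contained illustration of the PCA step, the sketch below extracts the leading principal axis of a tiny synthetic "spectra" matrix by power iteration. The function name, iteration count, and data are hypothetical stand-ins for real LIBS spectra and a full PCA/SIMCA/PLS toolchain.

```python
import math

def first_principal_component(spectra, iters=500):
    """Power-iteration sketch of PCA: returns the leading principal axis of
    mean-centered 'spectra' (rows = samples, cols = channels) and the sample
    scores along that axis. Scores along leading axes are what class-modeling
    methods such as SIMCA build on."""
    n, m = len(spectra), len(spectra[0])
    means = [sum(row[j] for row in spectra) / n for j in range(m)]
    X = [[row[j] - means[j] for j in range(m)] for row in spectra]
    v = [1.0] * m
    for _ in range(iters):
        # w = X^T (X v): apply the covariance operator without forming it
        s = [sum(X[i][j] * v[j] for j in range(m)) for i in range(n)]
        w = [sum(X[i][j] * s[i] for i in range(n)) for j in range(m)]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    scores = [sum(X[i][j] * v[j] for j in range(m)) for i in range(n)]
    return v, scores
```

With real spectra, samples whose scores cluster together along the first few axes would be candidates for the same rock class.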

It is commonly accepted that the introduction of hydrogen as an energy carrier for light-duty vehicles involves concomitant technological development of infrastructure elements, such as production, delivery, and consumption, all associated with certain emission levels. To analyze these at a system level, the suite of corresponding models developed by the United States Department of Energy and involving several national laboratories is combined in one macro-system model (MSM). The MSM is being developed as a cross-cutting analysis tool that combines a set of hydrogen technology analysis models. Within the MSM, a federated simulation framework is used for consistent data transfer between the component models. The framework is built to suit cross-model as well as cross-platform data exchange and involves features of 'over-the-net' computation.

An apparatus for clipping a protrusion of material is provided. The protrusion may, for example, be a bolt head, a nut, a rivet, a weld bead, or a temporary assembly alignment tab protruding from a substrate surface of assembled components. The apparatus typically includes a cleaver having a cleaving edge and a cutting blade having a cutting edge. Generally, a mounting structure configured to confine the cleaver and the cutting blade and permit a range of relative movement between the cleaving edge and the cutting edge is provided. Also typically included is a power device coupled to the cutting blade. The power device is configured to move the cutting edge toward the cleaving edge. In some embodiments the power device is activated by a momentary switch. A retraction device is also generally provided, where the retraction device is configured to move the cutting edge away from the cleaving edge.

An Investigation of the Latent Semantic Analysis Technique for Document Retrieval. Student project report by David Mugo. ... These term-matching techniques have always relied on matching query terms with document terms to retrieve ...

and other HTGRs. In the present study, new input techniques have been developed for MELCOR HTGR analysis. These new techniques include methods for modeling radiation heat transfer between solid surfaces in an HTGR, calculating fuel and cladding geometric...

This dissertation presents a pointer analysis for Java programs, together with several practical analysis applications. For each program point, the analysis is able to construct a points-to graph that describes how local ...

estimation techniques, the expectation maximization (EM) algorithm, the decorrelating estimator and the averaging method, on both AWGN and Rayleigh fading channels. The implementation of the EM algorithm on TMS320C62 is also presented. The performance...

The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Valid data are defined as data having known and documented paths of origin, including theory; measurements; traceability to measurement standards; computations; and uncertainty analysis of results.
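The pre-test and post-test analyses described above typically combine elemental systematic (bias) and random (precision) uncertainties by root-sum-square and then expand the result with a coverage factor. The helper below is a minimal sketch; the input numbers are illustrative, not PV-specific values.

```python
import math

def combined_uncertainty(bias_terms, precision_terms, coverage=2.0):
    """Combine elemental systematic (bias) and random (precision)
    uncertainties by root-sum-square, then expand with a coverage
    factor (k = 2 corresponds to roughly 95% confidence)."""
    b = math.sqrt(sum(x * x for x in bias_terms))      # total bias limit
    p = math.sqrt(sum(x * x for x in precision_terms)) # total precision index
    return coverage * math.sqrt(b * b + p * p)

# e.g. a measurement with two bias sources and one precision source (in %)
u = combined_uncertainty([0.5, 1.2], [0.8])
```

The reported result would then be stated as "measured value ± u" at the chosen confidence level.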

QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web. ... the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the Deep Web. Two unique features of the Thor framework are (1) the novel page clustering for grouping ...

FINITE ELEMENT ANALYSIS OF THERMAL TENSIONING TECHNIQUES MITIGATING WELD BUCKLING DISTORTION. This paper presents a finite element analysis model of the thermal tensioning technique. A series of finite element simulations ... the residual stresses of large-size and high-heat-input welds are reduced ...

We examine the regenerative cutting process by using a single-degree-of-freedom non-smooth model with a friction component and a time delay term. Instead of the standard Lyapunov exponent calculations, we propose a statistical 0-1 test analysis for chaos detection. This approach reveals the nature of the cutting process, signaling regular or chaotic dynamics. For the investigated deterministic model we are able to show a transition from chaotic to regular motion with increasing cutting speed. For two values of time delay showing different responses, the results have been confirmed by means of the spectral density and the multiscale entropy.

Through sampling and toxicity characteristic leaching procedure (TCLP) analyses, LANL and the DOE validated that a LANL transuranic (TRU) waste (TA-55-43, Lot No. 01) was not a Resource Conservation and Recovery Act (RCRA) hazardous waste. This paper describes the sampling and analysis project as well as the statistical assessment of the analytical results. The analyses were conducted according to the requirements and procedures in the sampling and analysis plan approved by the New Mexico Environment Department (NMED). The plan used a statistical approach that was consistent with the stratified, random sampling requirements of SW-846. LANL adhered to the plan during sampling and chemical analysis of randomly selected items of the five major types of materials in this heterogeneous, radioactive, debris waste. To generate portions of the plan, LANL analyzed a number of non-radioactive items that were representative of the mix of items present in the waste stream. Data from these cold surrogates were used to generate the means and variances needed to optimize the design. Based on statistical arguments alone, only two samples from the entire waste stream were deemed necessary; however, a decision was made to analyze at least two samples of each of the five major waste types. To obtain these samples, nine TRU waste drums were opened. Sixty-six radioactively contaminated and four non-radioactive grab samples were collected. Portions of the samples were composited for chemical analyses. In addition, a radioactively contaminated sample of rust-colored powder of interest to the NMED was collected and qualitatively identified as rust.
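The sample-size optimization mentioned above can be sketched with the standard normal-theory formula n = (z s / E)². The standard deviation and margin of error below are illustrative stand-ins for the cold-surrogate statistics, not values from the approved plan.

```python
import math

def required_samples(std_dev, margin, z=1.645):
    """Rough normal-theory sample size for estimating a mean:
    n = (z * s / E)^2, rounded up. z = 1.645 corresponds to ~90%
    two-sided confidence; std_dev would come from surrogate data."""
    return math.ceil((z * std_dev / margin) ** 2)

# hypothetical: surrogate results with s = 0.4 mg/L and a tolerable
# margin of error of 0.5 mg/L around a regulatory limit
n = required_samples(0.4, 0.5)
```

A small surrogate variance relative to the tolerable margin is what drives the required sample count down to a handful.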

This is the second quarterly progress report for Year 4 of the ACTS Project. It includes a review of progress made in: (1) flow loop construction and development and (2) research tasks during the period between October 1, 2002 and December 30, 2002. This report presents a review of progress on the following specific tasks: (a) design and development of an Advanced Cuttings Transport Facility (Task 3: addition of a cuttings injection/separation system; Task 4: addition of a pipe rotation system); (b) new research project (Task 9b): "Development of a Foam Generator/Viscometer for Elevated Pressure and Elevated Temperature (EPET) Conditions"; (d) research project (Task 10): "Study of Cuttings Transport with Aerated Mud Under Elevated Pressure and Temperature Conditions"; (e) research on three instrumentation tasks to measure cuttings concentration and distribution in a flowing slurry (Task 11), foam texture while transporting cuttings (Task 12), and viscosity of foam under EPET (Task 9b); (f) new research project (Task 13): "Study of Cuttings Transport with Foam under Elevated Pressure and Temperature Conditions"; (g) development of a safety program for the ACTS Flow Loop, with progress on a comprehensive safety review of all flow-loop components and operational procedures (Task 1S); and (h) activities toward technology transfer, developing contacts with petroleum and service company members, and increasing the number of JIP members.

... personnel cuts than cuts to other major categories. Ideally, a budgetary analysis of the IC would follow a similar methodology, analyzing historical trends in the IC's top-line budget, total outlays for each agency, outlays across the major ... with analysis and distribution of information within the Community. Finally, inter-agency competition for resources may increase, and organizations may fracture into separate agencies, reducing overall efficiency. Given these premises of organizational ...

In at least one embodiment, the inventive technology relates to in-vessel generation of a material from a solution of interest as part of a processing and/or analysis operation. Preferred embodiments of the in-vessel material generation (e.g., in-vessel solid material generation) include precipitation; in certain embodiments, analysis and/or processing of the solution of interest may include dissolution of the material, perhaps as part of a successive dissolution protocol using solvents of increasing ability to dissolve. Applications include, but are by no means limited to, estimation of a coking onset and solution (e.g., oil) fractionating.

The objective of this paper is to examine and compare a number of policy evaluation tools that can be used to measure the impact of transport policies and programmes as part of a strategic environmental assessment (SEA) or sustainability appraisal. The evaluation tools examined include cost-benefit analysis (CBA), cost-effectiveness analysis (CEA), and multi-criteria decision analysis (MCDA). It was concluded that both CEA and CBA are useful for estimating the costs and/or benefits associated with transport policies but are constrained by the difficulty of quantifying non-market impacts and monetising total costs and benefits. Furthermore, CEA is limited to identifying the most 'cost-effective' policy for achieving a single, narrowly defined objective, usually greenhouse gas (GHG) reduction, and is therefore not suitable for evaluating policy options with ancillary costs or a variety of potential benefits. Thus, CBA or CEA evaluation should be complemented by a complete environmental and socio-economic impact assessment approach such as MCDA. This method allows for participatory analysis and qualitative assessment but is subject to caveats such as subjectivity and value-laden judgments.
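A minimal sketch of the additive (weighted-sum) form of MCDA applied to hypothetical transport-policy options. The option names, criteria, scores, and weights are invented for illustration; in a real appraisal they would come from stakeholder input, which is also where the subjectivity caveat enters.

```python
def mcda_weighted_sum(options, weights):
    """Additive MCDA sketch: min-max normalise each criterion across the
    options, then rank by weighted sum (weights assumed to sum to 1, and
    all criteria framed so that higher is better)."""
    criteria = list(weights)
    lo = {c: min(o[c] for o in options.values()) for c in criteria}
    hi = {c: max(o[c] for o in options.values()) for c in criteria}

    def norm(c, v):
        return (v - lo[c]) / (hi[c] - lo[c]) if hi[c] > lo[c] else 0.0

    return {name: sum(w * norm(c, o[c]) for c, w in weights.items())
            for name, o in options.items()}

# hypothetical options scored on GHG reduction, cost saving, and equity
options = {
    "bus_priority": {"ghg": 0.30, "cost": 0.6, "equity": 0.8},
    "road_pricing": {"ghg": 0.50, "cost": 0.9, "equity": 0.4},
}
scores = mcda_weighted_sum(options, {"ghg": 0.5, "cost": 0.3, "equity": 0.2})
```

Shifting weight from GHG toward equity can reverse the ranking, which is exactly the value-judgment sensitivity the abstract warns about.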

Clustering is an important data analysis technique with numerous applications in the analysis of electric power grids. Standard clustering techniques are oblivious to the rich structural and dynamic information available for power grids. Therefore, by exploiting the inherent topological and electrical structure in the power grid data, we propose new methods for clustering with applications to model reduction, locational marginal pricing, phasor measurement unit (PMU, or synchrophasor) placement, and power system protection. We focus our attention on model reduction for analysis based on time-series information from synchrophasor measurement devices, and on spectral techniques for clustering. By comparing different clustering techniques on two instances of realistic power grids, we show that the solutions are related and that one could therefore leverage that relationship for a computational advantage. Thus, by contrasting different clustering techniques, we make a case for exploiting the structure inherent in the data, with implications for several domains including power systems.
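A pure-Python sketch of spectral bisection, one of the spectral clustering techniques referred to above: power iteration on a shifted graph Laplacian approximates the Fiedler vector, whose sign pattern yields a 2-way clustering. The toy graph (two triangles joined by one edge) is a stand-in for a power-grid topology, not data from the paper.

```python
import math

def fiedler_partition(adj, iters=2000):
    """Spectral bisection sketch: power-iterate on (c*I - L), where
    L = D - A is the graph Laplacian, deflating the constant eigenvector
    each step. This converges to the Fiedler vector (second-smallest
    Laplacian eigenvector); its sign pattern gives two clusters."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2 * max(deg) + 1.0          # shift so all eigenvalues of c*I - L > 0
    v = [math.sin(i + 1.0) for i in range(n)]   # arbitrary start vector
    for _ in range(iters):
        # w = (c*I - L) v = c*v - D*v + A*v
        w = [c * v[i] - deg[i] * v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]
        mean = sum(w) / n           # deflate the all-ones eigenvector
        w = [x - mean for x in w]
        nrm = math.sqrt(sum(x * x for x in w))
        v = [x / nrm for x in w]
    return [0 if x < 0 else 1 for x in v]

# two triangles bridged by a single edge: an obvious 2-cluster graph
A = [[0, 1, 1, 0, 0, 0],
     [1, 0, 1, 0, 0, 0],
     [1, 1, 0, 1, 0, 0],
     [0, 0, 1, 0, 1, 1],
     [0, 0, 0, 1, 0, 1],
     [0, 0, 0, 1, 1, 0]]
labels = fiedler_partition(A)
```

On grid data, the adjacency weights would encode electrical distance (e.g., line admittances) rather than plain connectivity.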

The purpose of this DOE Standard is to establish guidance for the preparation and review of hazard categorization and accident analysis techniques as required in DOE Order 5480.23, Nuclear Safety Analysis Reports.

The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context.

... step to increase energy performance in buildings is to use passive strategies, such as orientation, natural ventilation or envelope optimisation. This paper presents an analysis of solar passive techniques and natural ventilation concepts in a case ...

Storage and analysis techniques for fast 2-D camera data on NSTX. W. M. Davis, D. M. Mastrovito, et al. ... this year, one new camera alone can acquire 2 GB per pulse. The paper will describe the storage strategies ...

The following research investigates the use of citation analysis techniques for relevance ranking in computer-assisted legal research systems. Overviews on information retrieval, legal research, computer-assisted legal ...

Analysis of Connection as a Decomposition Technique, by David Daly (copyright 2001). ... of the decomposition techniques introducing an error of less than 11% ...

Extending the Borders of Accident Investigation: Applying Novel Analysis Techniques to the Loss ... In consequence, it is becoming increasingly difficult to identify the causes of incidents and accidents ... back to the development of a number of novel accident investigation techniques. Most of these approaches are intended ...

Automation of the Laguerre Expansion Technique for Analysis of Time-Resolved Fluorescence Spectroscopy Data. A thesis by Aditi Sandeep Dabir, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 2009. Major subject: Biomedical Engineering.

Evaluation of New Techniques for Two-Dimensional Finite Element Analysis of Woven Composites. A thesis by Sitaram Chowdary Gundapaneni, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, December 1992. Major subject: Aerospace Engineering.

Several current research programs at the Illinois State Geological Survey (ISGS) relate to the development of activated carbons from Illinois coal, fly ash, and scrap tires. Preparation of activated carbons involves thermal processing steps that include preoxidation, pyrolysis and activation. Reaction time, temperature and gas composition during these processing steps ultimately determine the nature of the activated carbon produced. Thermal analysis plays a significant role in developing carbons by providing fundamental and engineering data that are useful in carbon production and characterization for process development.

A mining auger comprises a cutting head carried at one end of a tubular shaft and a plurality of wall segments which in a first position thereof are disposed side by side around said shaft and in a second position thereof are disposed oblique to said shaft. A vane projects outwardly from each wall segment. When the wall segments are in their first position, the vanes together form a substantially continuous helical wall. A cutter is mounted on the peripheral edge of each of the vanes. When the wall segments are in their second position, the cutters on the vanes are disposed radially outward from the perimeter of the cutting head.

Improved thermoanalytical methods have been developed that are capable of quantitative identification of various components of fly ash from a laboratory-scale fluidized bed combustion system. The thermogravimetric procedure developed can determine quantities of H₂O, Ca(OH)₂, CaCO₃, CaSO₄ and carbonaceous matter in fly ash with accuracy comparable to more time-consuming ASTM methods. This procedure is a modification of the Mikhail-Turcotte methods, which accurately analyze bed ash, extended to handle the greater amount of carbonaceous matter in fly ash with higher accuracy. In addition, in conjunction with FTIR and SEM/EDS analysis, the reduction mechanism of CaSO₄ as CaSO₄ + 4H₂ = CaS + 4H₂O has been confirmed in this study. This mechanism is important in analyzing and evaluating sulfur capture in fluidized-bed combustion systems.

Methods of performing a magnetic resonance analysis of a biological object are disclosed that include placing the object in a main magnetic field (that has a static field direction) and in a radio frequency field; rotating the object at a frequency of less than about 100 Hz around an axis positioned at an angle of about 54°44′ relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a phase-corrected magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. In particular embodiments the method includes pulsing the radio frequency to provide at least two of a spatially selective read pulse, a spatially selective phase pulse, and a spatially selective storage pulse. Further disclosed methods provide pulse sequences that provide extended imaging capabilities, such as chemical shift imaging or multiple-voxel data acquisition.
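The 54°44′ rotation axis specified above is the "magic angle", at which the orientation-dependent term (3 cos²θ − 1)/2 vanishes; a quick numeric check:

```python
import math

# The magic angle is where P2(cos θ) = (3 cos²θ − 1)/2 = 0,
# i.e. θ = arccos(1/√3) ≈ 54.736° = 54°44′.
theta = math.degrees(math.acos(1.0 / math.sqrt(3.0)))
deg = int(theta)                       # whole degrees
mins = round((theta - deg) * 60)       # remaining arcminutes
p2 = (3.0 * math.cos(math.radians(theta)) ** 2 - 1.0) / 2.0
```

Spinning or rotating the sample about this axis averages the (3 cos²θ − 1) anisotropic broadening terms to zero, which is what the magic angle turning pulse segment relies on.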

The Advanced Cuttings Transport Study (ACTS) was a 5-year JIP project undertaken at the University of Tulsa (TU). The project was sponsored by the U.S. Department of Energy (DOE) and JIP member companies. The objectives of the project were: (1) to develop and construct a new research facility that would allow three-phase (gas, liquid and cuttings) flow experiments under ambient and EPET (elevated pressure and temperature) conditions, at different angles of inclination and drill pipe rotation speeds; (2) to conduct experiments and develop a database for the industry and academia; and (3) to develop mechanistic models for optimization of drilling hydraulics and cuttings transport. This project consisted of research studies, flow loop construction and instrumentation development. Following a one-year period for basic flow loop construction, a proposal was submitted by TU to the DOE for a five-year project organized in such a manner as to provide a logical progression of research experiments as well as additions to the basic flow loop. The flow loop additions and improvements included: (1) elevated temperature capability; (2) two-phase (gas and liquid, foam, etc.) capability; (3) a cuttings injection and removal system; (4) a drill pipe rotation system; and (5) a drilling section elevation system. In parallel with the flow loop construction, hydraulics and cuttings transport studies were performed using drilling foams and aerated muds. In addition, the hydraulics and rheology of synthetic drilling fluids were investigated. The studies were performed under ambient and EPET conditions. The effects of temperature and pressure on hydraulics and cuttings transport were investigated. Mechanistic models were developed to predict frictional pressure loss and cuttings transport in horizontal and near-horizontal configurations. Model predictions were compared with the measured data and predominantly show satisfactory agreement.
As a part of this project, instrumentation was developed to monitor cuttings beds and characterize foams in the flow loop. An ultrasonic-based monitoring system was developed to measure cuttings bed thickness in the flow loop. Data acquisition software controls the system and processes the data. Two foam generating devices were designed and developed to produce foams with specified quality and texture. The devices are equipped with a bubble recognition system and an in-line viscometer to measure bubble size distribution and foam rheology, respectively. The 5-year project is completed. Future research activities will be under the umbrella of Tulsa University Drilling Research Projects. Currently the flow loop is being used for testing cuttings transport capacity of aqueous and polymer-based foams under elevated pressure and temperature conditions. Subsequently, the effect of viscous sweeps on cuttings transport under elevated pressure and temperature conditions will be investigated using the flow loop. Other projects will follow now that the ''steady state'' phase of the project has been achieved.

TESEC 2001, Genova, Italy. Advanced Techniques for Safety Analysis Applied to the Gas Turbine ... for safety analysis of complex computer-based systems. Such approaches are applied to the gas turbine control and electrical power supply of the centre of ENEA CR Casaccia. The plant is based on a small gas turbine and has ...

The very successful application of a CFA (continuous flow analysis) system in the GRIP project (Greenland Ice Core Project) for high-resolution ammonium, calcium, hydrogen peroxide, and formaldehyde measurements along a deep ice core led to further development of this analysis technique. The authors included methods for continuous analysis of sodium, nitrate, sulfate, and electrolytic conductivity, while the existing methods have been improved. The melting device has been optimized to allow the simultaneous analysis of eight components. Furthermore, a new melter was developed for analyzing firn cores. The system has been used in the frame of the European Project for Ice Coring in Antarctica (EPICA) for in-situ analysis of several firn cores from Dronning Maud Land, Antarctica, and for the new ice core drilled at Dome C, Antarctica.

The Journal of Immunology, Cutting Edge: Dendritic Cells Copulsed with Microbial ... However, there is evidence that DC-associated factors other than IL-12 also play a significant role in Th1 ...

The analysis of process samples for radionuclide content is an important part of current procedures for material balance and accountancy in the different process streams of a recycling plant. The destructive sample analysis techniques currently available require a significant amount of time. It is therefore desirable to develop new sample analysis procedures that allow for a quick turnaround time and increased sample throughput with a minimum of deviation between samples. In particular, new capabilities for rapid sample dissolution and radiochemical separation are required. Most of the radioanalytical techniques currently employed for sample analysis are based on manual laboratory procedures. Such procedures are time- and labor-intensive and not well suited for situations in which a rapid sample analysis is required and/or a large number of samples need to be analyzed.

The iterative procedure leads to a solution to the general problem, while the search procedure is required to locate the optimal solution. The majority of this report deals with analysis of the iterative procedure, although the relation between the solution ... derived by this part of the technique and the optimal solution is discussed. The mathematical basis of the method is discussed, and the problems to which the technique is applicable are divided into three classes. Experimental example problems of two ...

This Quarter has been divided between running experiments and the installation of the drill-pipe rotation system. In addition, valves and piping were relocated, and three viewports were installed. Detailed design work is proceeding on a system to elevate the drill-string section. Design of the first prototype version of a Foam Generator has been finalized, and fabrication is underway. This will be used to determine the relationship between surface roughness and ''slip'' of foams at solid boundaries. Additional cups and rotors are being machined with different surface roughness. Some experiments on cuttings transport with aerated fluids have been conducted at EPET. Theoretical modeling of cuttings transport with aerated fluids is proceeding. The development of theoretical models to predict frictional pressure losses of flowing foam is in progress. The new board design for instrumentation to measure cuttings concentration is now functioning with an acceptable noise level. The ultrasonic sensors are stable up to 190 F. Static tests with sand in an annulus indicate that the system is able to distinguish between different sand concentrations. Viscometer tests with foam, generated by the Dynamic Test Facility (DTF), are continuing.

This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, ``A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

A generalized likelihood ratio technique for automated analysis of bobbin coil eddy current data... signals that are commonly found in bobbin coil eddy current data. The performance of the proposed... for automated processing and classification of eddy current data.
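As a hedged illustration of the generalized likelihood ratio idea behind such detectors, the Python/numpy sketch below tests for a defect signature of known shape but unknown amplitude in unit-variance noise. The Gaussian template, amplitude, and threshold are assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
# Illustrative defect signature of unit shape but unknown amplitude.
n = 64
t = np.arange(n)
template = np.exp(-0.5 * ((t - 32) / 4.0) ** 2)

def glrt(x, s):
    """GLR statistic for x = A*s + noise vs. noise only, amplitude A unknown,
    noise variance known (= 1). Under H0 the statistic is chi-square with 1 dof."""
    return (s @ x) ** 2 / (s @ s)

noise_only = rng.normal(size=n)
with_defect = 5.0 * template + rng.normal(size=n)
threshold = 10.83   # roughly the chi2(1 dof) cutoff for a 0.001 false-alarm rate
```

A real system would scan the template along the record and across probable defect widths; this shows only the single-position statistic.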

Evaluation of the economic feasibility of a bio-gasification facility requires understanding of its unit cost under different production capacities. The objective of this study was to evaluate the unit cost of syngas production at capacities from 60 through 1800 Nm{sup 3}/h using an economic model with three regression analysis techniques (simple regression, reciprocal regression, and log-log regression). The preliminary results of this study showed that the reciprocal regression analysis technique produced the best-fit curve between per-unit cost and production capacity, with a sum of error squares (SES) lower than 0.001 and a coefficient of determination (R{sup 2}) of 0.996. The regression analysis techniques determined the minimum unit cost of syngas production for micro-scale bio-gasification facilities to be $0.052/Nm{sup 3}, at a capacity of 2,880 Nm{sup 3}/h. The results of this study suggest that to reduce cost, facilities should run at a high production capacity. In addition, this technique could provide a new categorical criterion for evaluating micro-scale bio-gasification facilities from the perspective of economic analysis.
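A minimal sketch of the reciprocal regression form described above, assuming Python with numpy; the capacities and coefficients are purely illustrative, since the study's data are not given in the abstract.

```python
import numpy as np

# Hypothetical capacities (Nm^3/h) and unit costs ($/Nm^3); illustrative only.
capacity = np.array([60.0, 120.0, 300.0, 600.0, 900.0, 1800.0])
unit_cost = 0.05 + 30.0 / capacity        # noiseless reciprocal cost curve

# Reciprocal regression: unit_cost = a + b*(1/capacity), fit by least squares.
X = np.column_stack([np.ones_like(capacity), 1.0 / capacity])
(a, b), *_ = np.linalg.lstsq(X, unit_cost, rcond=None)

# Goodness of fit (coefficient of determination).
pred = a + b / capacity
ss_res = np.sum((unit_cost - pred) ** 2)
ss_tot = np.sum((unit_cost - unit_cost.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

The simple and log-log variants differ only in the design matrix: `capacity` itself, or `log(capacity)` against `log(unit_cost)`.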

State-of-the-art Tools and Techniques for Quantitative Modeling and Analysis of Embedded Systems (Aalborg, CNRS Verimag, INRIA/IRISA, Saarland University, Embedded Systems Institute and Radboud University)... and stochastic aspects. Then, we will overview the BIP framework for modular design and code generation. Finally...

Redundancy Reduction Techniques and Content Analysis for Multimedia Services - the European COST... such as the ongoing ISO MPEG-4 standardisation phase as well as the new ISO MPEG-7 initiative. The aim is to define... The philosophy of COST projects is introduced before narrowing the focus to the COST 211 series. For more than 20...

...ethylbenzene and xylenes (i.e., BTEX) are common ground water pollutants that threaten water supplies... The use of isotopic and lipid analysis techniques... used recently as an environmental forensics tool to demonstrate microbial degradation of pollutants (Water Research 38 (2004) 2529-2536).

A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems. Many timed extensions for Petri nets have been proposed in the literature, but their analytical solutions impose limitations on the time distributions and the net topology. To overcome these limitations...

...contour cutting strategy will therefore yield a smaller and more efficient tableau as illustrated in figure 2b and figure 3. The final result is the correct identification of the three clusters of the graph. The dynamic contour cutting...

This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. The results also suggest that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e., gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.
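Of the three techniques named above, control charting is the simplest to illustrate. The Python/numpy sketch below, with invented thermocouple numbers rather than AGR data, flags readings outside Shewhart-style 3-sigma limits estimated from an in-control period:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical thermocouple record (deg C): an in-control period followed by a
# simulated sensor drift; the numbers are illustrative, not AGR measurements.
baseline = rng.normal(1000.0, 2.0, size=200)
drift = 1000.0 + np.linspace(0.0, 30.0, 50)

# Shewhart-style control limits estimated from the in-control period.
mu, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = mu + 3.0 * sigma, mu - 3.0 * sigma

readings = np.concatenate([baseline, drift])
out_of_control = (readings > ucl) | (readings < lcl)
first_alarm = int(np.argmax(out_of_control)) if out_of_control.any() else -1
```

A production system like NDMAS would also apply run rules and cross-check against correlated channels before declaring a thermocouple failed.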

The analysis of component fatigue lifetime for a wind energy conversion system (WECS) requires that the component load spectrum be formulated in terms of stress cycles. Typically, these stress cycles are obtained from time series data using a cycle identification scheme. As discussed by many authors, the matrix or matrices of cycle counts that describe the stresses on a turbine are constructed from relatively short, representative samples of time series data. The ability to correctly represent the long-term behavior of the distribution of stress cycles from these representative samples is critical to the analysis of service lifetimes. Several techniques are currently used to convert representative samples to the lifetime cyclic loads on the turbine. A set of fitting algorithms has recently been developed that is particularly useful for matching the body of the distribution of fatigue stress cycles on a turbine component. These fitting techniques have now been incorporated into the LIFE2 fatigue/fracture analysis code for wind turbines. In this paper, the authors provide an overview of the fitting algorithms and describe the pre- and post-count algorithms developed to permit their use in the LIFE2 code. Typical case studies are used to illustrate the use of the technique.
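As a hedged sketch of fitting the body of a stress-cycle distribution, the Python/numpy example below fits a two-parameter Weibull to synthetic cycle amplitudes by median-rank regression. The distribution choice and all numbers are assumptions for illustration; this is not the LIFE2 algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stress-cycle amplitudes (MPa) standing in for rainflow-counted
# cycles from a representative time-series sample; parameters are invented.
amps = 50.0 * rng.weibull(2.0, size=2000)

# Median-rank (Bernard) plotting positions, then a straight-line fit on
# Weibull probability paper: ln(-ln(1-F)) = shape*ln(x) - shape*ln(scale).
x = np.sort(amps)
n = x.size
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
y = np.log(-np.log(1.0 - F))
shape_est, intercept = np.polyfit(np.log(x), y, 1)
scale_est = np.exp(-intercept / shape_est)
```

The fitted distribution can then be scaled from the sample duration to the service lifetime to extrapolate cycle counts beyond the measured record.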

A mining assembly includes a primary rotary cutter mounted on one end of a support shaft and four secondary rotary cutters carried on the same support shaft and positioned behind the primary cutters for cutting corners in the hole cut by the latter.

In this study, the principle of prompt gamma neutron activation analysis has been used as a technique to determine the elements in a sample. The system consists of a collimated isotopic neutron source, Cf-252, with an HPGe detector and a multichannel analyzer (MCA). Concrete samples with sizes of 10×10×10 cm{sup 3} and 15×15×15 cm{sup 3} were analysed. When neutrons enter and interact with elements in the concrete, neutron capture reactions occur and produce characteristic prompt gamma rays of the elements. The preliminary results of this study demonstrate that the major elements in the concrete, such as Si, Mg, Ca, Al, Fe and H, as well as other elements, such as Cl, can be determined by analysing the respective gamma ray lines. The results obtained were compared with NAA and XRF techniques for reference and validation. The potential and capability of neutron-induced prompt gamma as a tool for qualitative multi-elemental analysis to identify the elements present in concrete samples are discussed.
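The peak-to-element step can be sketched as a simple library lookup. The Python example below matches hypothetical peak centroids against a few well-known thermal-capture gamma line energies (e.g., H at 2223 keV, Si at 3539 keV, Fe at 7631 keV, Cl at 1165 keV); the tolerance and the measured peak list are illustrative assumptions, not data from the study.

```python
# Small library of prompt-gamma capture lines (keV); values are well-known
# reference energies, but the selection here is illustrative, not exhaustive.
LIBRARY = {
    "H":  [2223.2],
    "Si": [3539.0, 4934.0],
    "Ca": [1942.0, 6420.0],
    "Fe": [7631.1, 7645.5],
    "Cl": [1164.9, 6110.8],
}

def identify(peaks_kev, tol=3.0):
    """Return elements with at least one library line within tol keV of a peak."""
    found = set()
    for element, lines in LIBRARY.items():
        for line in lines:
            if any(abs(p - line) <= tol for p in peaks_kev):
                found.add(element)
    return sorted(found)

measured = [1165.2, 2223.0, 3540.1, 7630.5]   # hypothetical peak centroids
elements = identify(measured)                  # -> ['Cl', 'Fe', 'H', 'Si']
```

A quantitative analysis would additionally weigh peak areas by capture cross sections and detector efficiency; this sketch covers only qualitative identification.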

This study addresses uncertainties arising from variations in different modeling approaches to soil-structure interaction of massive structures at a nuclear power plant. To perform a comprehensive systems analysis, it is necessary to quantify, for each phase of the traditional analysis procedure, both the realistic seismic response and the uncertainties associated with them. In this study two linear soil-structure interaction techniques were used to analyze the Zion, Illinois nuclear power plant: a direct method using the FLUSH computer program and a substructure approach using the CLASSI family of computer programs. In-structure response from two earthquakes, one real and one synthetic, was compared. Structure configurations from relatively simple to complicated multi-structure cases were analyzed. The resulting variations help quantify uncertainty in structure response due to analysis procedures.

An apparatus for the sequential fracturing and cutting of subsurface volume of hard rock (102) in the strata (101) of a mining environment (100) by subjecting the volume of rock to a beam (25) of microwave energy to fracture the subsurface volume of rock by differential expansion; and , then bringing the cutting edge (52) of a piece of conventional mining machinery (50) into contact with the fractured rock (102).

Laser-induced fluorescence measurements of cuvette-contained laser dye mixtures are made for evaluation of multivariate analysis techniques in optically thick environments. Nine mixtures of Coumarin 500 and Rhodamine 610 are analyzed, as well as the pure dyes. For each sample, the cuvette is positioned on a two-axis translation stage to allow interrogation at different spatial locations, allowing the examination of both primary (absorption of the laser light) and secondary (absorption of the fluorescence) inner filter effects. In addition to these expected inner filter effects, we find evidence that a portion of the absorbed fluorescence is re-emitted. A total of 688 spectra are acquired for the evaluation of multivariate analysis approaches to account for nonlinear effects.

This paper describes the most recent version of a human reliability analysis (HRA) method called ``A Technique for Human Event Analysis'' (ATHEANA). The new version is documented in NUREG-1624, Rev. 1 [1] and reflects improvements to the method based on comments received from a peer review that was held in 1998 (see [2] for a detailed discussion of the peer review comments) and on the results of an initial trial application of the method conducted at a nuclear power plant in 1997 (see Appendix A in [3]). A summary of the more important recommendations resulting from the peer review and trial application is provided and critical and unique aspects of the revised method are discussed.

Various methods for the spherical harmonic analysis of the quiet daily variation of geomagnetic fields (Sq) measured at the Earth's surface have been used to represent the separation of the external (source) and internal (induced) currents. The results of such methods differ because the modeling techniques often reflect differing special objectives of the researcher. One method utilizes the observed field measurements at all world locations determined at a specific instant of time. A second method uses only observations in one primary hemisphere, appropriately mirroring field values for the analysis in the opposite hemisphere. The third method, a variation of the second, uses field values in the opposite hemisphere that are mirrored from a primary region that is shifted in time by 6 months. A variation of these three methods utilizes only a longitude line of observatories and assumes that the 24 hours of Sq field variation represents a 360{degree} rotation of the analysis sphere. For the comparison, power spectral representation, global current patterns in different seasons, and deviations of model-computed field values from the surface observations were all evaluated. The power spectral study showed that the spherical harmonic analysis of Sq should be extended to order m = 6 and degree n = m + 17. The northern hemisphere current system seemed to be consistently stronger than the southern hemisphere system. Exclusion of the mid-latitude vortex polynomials with (n {minus} m) = 0 and 1 was shown to be a useful technique for exposing the unique polar cap current pattern S{sup p}{sub q}. The global method was generally best for modeling; however, the hemisphere mirroring methods with 6-month time shift were almost as good in their representation of the Sq fields. Different special regions of effective and poor modeling were identified for all three methods.

The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems.
These tests show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well-known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
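A minimal non-intrusive spectral projection in one dimension, assuming Python/numpy; this is a toy illustration of computing PC coefficients by Gaussian quadrature, not the FANISP adaptive algorithm. The "model" f(x) = x^2 is a stand-in for a code response with one standard-normal input.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# NISP for one standard-normal input: c_k = E[f(X) He_k(X)] / k!,
# where He_k are probabilists' Hermite polynomials (orthogonal, norm k!).
nodes, weights = hermegauss(8)
weights = weights / weights.sum()          # normalize to a probability measure

f = nodes ** 2                             # toy model response at the nodes
coeffs = []
for k in range(5):
    He_k = hermeval(nodes, [0.0] * k + [1.0])   # He_k evaluated at the nodes
    coeffs.append(np.sum(weights * f * He_k) / math.factorial(k))
coeffs = np.array(coeffs)
# Exact expansion of x**2 is He_0(x) + He_2(x), so coeffs -> [1, 0, 1, 0, 0].
```

The adaptive schemes in the paper replace the fixed tensor quadrature with a sparse grid grown toward the important dimensions, and retain only basis vectors whose coefficients are significantly nonzero.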

This presentation will review developments on the integration of advanced modeling and simulation techniques into the analysis step of experimental data obtained at the Spallation Neutron Source. A workflow framework for the purpose of refining molecular mechanics force-fields against quasi-elastic neutron scattering data is presented. The workflow combines software components to submit model simulations to remote high performance computers, a message broker interface for communications between the optimizer engine and the simulation production step, and tools to convolve the simulated data with the experimental resolution. A test application shows the correction to a popular fixed-charge water model in order to account for polarization effects due to the presence of solvated ions. Future enhancements to the refinement workflow are discussed. This work is funded through the DOE Center for Accelerating Materials Modeling.

Devices are disclosed for performing tissue biopsy on a small scale (microbiopsy). By reducing the size of the biopsy tool and removing only a small amount of tissue or other material in a minimally invasive manner, the risks, costs, injury and patient discomfort associated with traditional biopsy procedures can be reduced. By using micromachining and precision machining capabilities, it is possible to fabricate small biopsy/cutting devices from silicon. These devices can be used in one of four ways (1) intravascularly, (2) extravascularly, (3) by vessel puncture, and (4) externally. Additionally, the devices may be used in precision surgical cutting. 6 figs.

A method and machine are provided for cutting a workpiece such as concrete. A gun barrel is provided for repetitively loading projectiles therein and is supplied with a pressurized propellant from a storage tank. A thermal storage tank is disposed between the propellant storage tank and the gun barrel for repetitively receiving and heating propellant charges which are released in the gun barrel for repetitively firing projectiles therefrom toward the workpiece. In a preferred embodiment, hypervelocity of the projectiles is obtained for cutting the concrete workpiece by fracturing thereof. 10 figs.

This document reports on initial activities at ORNL aimed at quantitative characterization of porosity development in oxidized graphite specimens using automated image analysis (AIA) techniques. A series of cylindrical specimens were machined from nuclear-grade graphite (type PCEA, from GrafTech International). The specimens were oxidized in air to various levels of weight loss (between 5 and 20%) and at three oxidation temperatures (between 600 and 750{degree}C). The procedure used for specimen preparation and oxidation was based on ASTM D-7542-09. Oxidized specimens were sectioned, resin-mounted and polished for optical microscopy examination. Mosaic pictures of rectangular stripes (25 mm x 0.4 mm) along a diameter of sectioned specimens were recorded. Commercial software (ImagePro) was evaluated for automated analysis of images. Because oxidized zones in graphite are less reflective in visible light than the pristine, unoxidized material, the microstructural changes induced by oxidation can easily be identified and analyzed. Oxidation at low temperatures contributes to development of numerous fine pores (< 100 µm{sup 2}) distributed more or less uniformly over a certain depth (5-6 mm) from the surface of graphite specimens, while causing no apparent external damage to the specimens. In contrast, oxidation at high temperatures causes dimensional changes and substantial surface damage within a narrow band (< 1 mm) near the exposed graphite surface, but leaves the interior of specimens with little or no changes in the pore structure. Based on these results it appears that weakening and degradation of mechanical properties of graphite materials produced by uniform oxidation at low temperatures is related to the massive development of fine pores in the oxidized zone. It was demonstrated that optical microscopy enhanced by AIA techniques allows accurate determination of oxidant penetration depth and of distribution of porosity in oxidized graphite materials.
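The core AIA step, thresholding dark pores and profiling porosity versus depth, can be sketched as follows, assuming Python/numpy and a synthetic image in place of a real micrograph; the gray values, geometry, and threshold are invented for illustration.

```python
import numpy as np

# Synthetic "micrograph" stand-in: bright unoxidized graphite (gray value 200)
# with dark pores (value 40) whose density decays with depth from the exposed
# left edge (column 0). Geometry and values are illustrative only.
img = np.full((100, 300), 200.0)
for c in range(120):                       # oxidized zone: columns 0..119
    step = 2 + c // 10                     # pore rows get sparser with depth
    img[::step, c] = 40.0

# Threshold segmentation: pixels darker than the cutoff count as porosity.
pores = img < 120.0
area_fraction = pores.mean()

# Porosity profile vs. depth (column index); the oxidant penetration depth is
# taken here as the first completely pore-free column.
profile = pores.mean(axis=0)
penetration = int(np.argmax(profile == 0.0))   # -> 120 for this synthetic image
```

Real AIA software additionally labels connected pore regions so that individual pore areas (e.g., the < 100 µm² class) can be histogrammed.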

Sample preparation is always a critical step in study of micrometer sized astromaterials available for study in the laboratory, whether their subsequent analysis is by electron microscopy or secondary ion mass spectrometry. A focused beam of gallium ions has been used to prepare electron transparent sections from an interplanetary dust particle, as part of an integrated analysis protocol to maximize the mineralogical, elemental, isotopic and spectroscopic information extracted from one individual particle. In addition, focused ion beam techniques have been employed to extract cometary residue preserved on the rims and walls of micro-craters in 1100 series aluminum foils that were wrapped around the sample tray assembly on the Stardust cometary sample collector. Non-ideal surface geometries and inconveniently located regions of interest required creative solutions. These include support pillar construction and relocation of a significant portion of sample to access a region of interest. Serial sectioning, in a manner similar to ultramicrotomy, is a significant development and further demonstrates the unique capabilities of focused ion beam microscopy for sample preparation of astromaterials.

A recent trend in the nuclear power engineering field is the implementation of heavily computational and time consuming algorithms and codes for both design and safety analysis. In particular, the new generation of system analysis codes aim to embrace several phenomena such as thermo-hydraulic, structural behavior, and system dynamics, as well as uncertainty quantification and sensitivity analyses. The use of dynamic probabilistic risk assessment (PRA) methodologies allows a systematic approach to uncertainty quantification. Dynamic methodologies in PRA account for possible coupling between triggered or stochastic events through explicit consideration of the time element in system evolution, often through the use of dynamic system models (simulators). They are usually needed when the system has more than one failure mode, control loops, and/or hardware/process/software/human interaction. Dynamic methodologies are also capable of modeling the consequences of epistemic and aleatory uncertainties. The Monte-Carlo (MC) and the Dynamic Event Tree (DET) approaches belong to this new class of dynamic PRA methodologies. The major challenges in using MC and DET methodologies (as well as other dynamic methodologies) are the heavier computational and memory requirements compared to the classical ET analysis. This is due to the fact that each branch generated can contain time evolutions of a large number of variables (about 50,000 data channels are typically present in RELAP) and a large number of scenarios can be generated from a single initiating event (possibly on the order of hundreds or even thousands). Such large amounts of information are usually very difficult to organize in order to identify the main trends in scenario evolutions and the main risk contributors for each initiating event. 
This report aims to improve dynamic PRA methodologies by tackling the two challenges mentioned above using: 1) adaptive sampling techniques to reduce the computational cost of the analysis and 2) topology-based methodologies to interactively visualize multidimensional data and extract risk-informed insights. Regarding item 1), we employ learning algorithms that aim to infer/predict the simulation outcome and decide the coordinates in the input space of the next sample that maximize the amount of information that can be gained from it. Such methodologies can be used to both explore and exploit the input space. The latter is especially useful in safety analysis, where samples are focused along the limit surface, i.e., the boundaries in the input space between system failure and system success. Regarding item 2), we present a software tool that is designed to analyze multi-dimensional data. We model a large-scale nuclear simulation dataset as a high-dimensional scalar function defined over a discrete sample of the domain. First, we provide structural analysis of such a function at multiple scales and provide insight into the relationship between the input parameters and the output. Second, we enable exploratory analysis for users, where we help the users to differentiate features from noise through multi-scale analysis on an interactive platform, based on domain knowledge and data characterization. Our analysis is performed by exploiting the topological and geometric properties of the domain, building statistical models based on its topological segmentations and providing interactive visual interfaces to facilitate such explorations.
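A minimal one-dimensional illustration of the limit-surface idea in item 1) is a bracketing bisection that concentrates every new "simulation" at the success/failure boundary instead of sampling the input space blindly. The scalar failure model below is an invented stand-in, not a RELAP calculation.

```python
# Stand-in "simulator": the system fails when a response exceeds a threshold.
# Here the response is a simple affine function of one input (power), so the
# true limit surface is the point power = 320; this is illustrative only.
def fails(power):
    return 400.0 + 2.5 * power > 1200.0    # failure iff power > 320

# Adaptive sampling by bisection between a known success and a known failure;
# each new run lands ever closer to the limit surface.
lo, hi = 0.0, 1000.0                       # success / failure brackets
runs = 0
while hi - lo > 1.0:
    mid = 0.5 * (lo + hi)
    runs += 1
    if fails(mid):
        hi = mid
    else:
        lo = mid
boundary = 0.5 * (lo + hi)                 # limit-surface estimate
```

In many dimensions the same exploit-the-boundary principle is implemented with surrogate classifiers that propose the next sample where the predicted failure probability is most uncertain.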

emphasis on in vitro techniques for toxicologic research, the precision-cut lung slice model was extended to the mouse to determine the predictive value of this system for assessing interspecies differences in metabolism and toxicity. Validation of the lung...

Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B{sub 1} inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions.
Results: The NLDR-based hybrid approach was able to define and segment both synthetic and clinical data. In the synthetic data, the authors demonstrated the performance of the NLDR method compared with conventional linear DR methods. The NLDR approach enabled successful segmentation of the structures, whereas, in most cases, PCA and MDS failed. The NLDR approach was able to segment different breast tissue types with a high accuracy and the embedded image of the breast MRI data demonstrated fuzzy boundaries between the different types of breast tissue, i.e., fatty, glandular, and tissue with lesions (>86%). Conclusions: The proposed hybrid NLDR methods were able to segment clinical breast data with a high accuracy and construct an embedded image that visualized the contribution of different radiological parameters.
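As a hedged sketch of the linear baseline the authors compare against, the Python/numpy code below performs PCA by SVD on synthetic multiparameter "pixel" data and extracts a two-component embedding; the data model is invented for illustration and does not represent the breast MRI data.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic stand-in for multiparametric voxel data: each row is one pixel with
# six "MRI parameter" channels that are linear mixtures of 2 latent tissue factors.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 6))
data = latent @ mixing + 0.01 * rng.normal(size=(500, 6))

# Linear PCA baseline via SVD of the mean-centered data matrix.
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)            # variance fraction per component
embedded = centered @ Vt[:2].T             # 2-D "embedded image" coordinates
```

For data generated by a linear mixture, as here, PCA recovers the structure almost perfectly; the paper's point is that on nonlinearly mixed tissue parameters this baseline fails where NLDR succeeds.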

Wear data are presented for diamond tools cutting electroless nickel (eNi) for cut lengths up to 70,000 ft (13 miles). Two tools having different infrared absorption characteristics were used to cut an eNi preparation that had yielded minimum values for surface roughness and tool wear rate in a previous study. The data include Talystep measurement of the rms amplitude of the feed-marks versus cumulative cutting distance, representative examples of shape changes for the feed-mark profiles, SEM and optical micrographs of the tool rake and flank face wear zones, and measurements of the cutting edge profile and edge recession distance by a tool-nose replication technique. Feed-mark roughness values were found to increase from 5 to 90 Å rms over the duration of the test, with an associated edge recession of about 1000 Å and the development of a periodic tool edge grooving indicative of burnishing of the part surface. The IR absorption data successfully predicted the order of the two tools in terms of wear rate and fracture toughness.

In this report, we analytically predict and examine stresses in tool tips used in high speed orthogonal machining operations. Specifically, one analysis was compared to an existing experimental measurement of stresses in a sapphire tool tip cutting 1020 steel at slow speeds. In addition, two analyses were done of a carbide tool tip in a machining process at higher cutting speeds, in order to compare to experimental results produced as part of this study. The metal being cut was simulated using a Sandia developed damage plasticity material model, which allowed the cutting to occur analytically without prespecifying the line of cutting/failure. The latter analyses incorporated temperature effects on the tool tip. Calculated tool forces and peak stresses matched experimental data to within 20%. Stress contours generally agreed between analysis and experiment. This work could be extended to investigate/predict failures in the tool tip, which would be of great interest to machining shops in understanding how to optimize cost/retooling time.

In May of 1998, a technical basis and implementation guidelines document for A Technique for Human Event Analysis (ATHEANA) was issued as a draft report for public comment (NUREG-1624). In conjunction with the release of the draft NUREG, a peer review of the method, its documentation, and the results of an initial test of the method was held over a two-day period in Seattle, Washington, in June of 1998. Four internationally-known and respected experts in human reliability analysis (HRA) were selected to serve as the peer reviewers and were paid for their services. In addition, approximately 20 other individuals with an interest in HRA and ATHEANA also attended the peer review meeting and were invited to provide comments. The peer review team was asked to comment on any aspect of the method or the report in which improvements could be made and to discuss its strengths and weaknesses. All of the reviewers thought the ATHEANA method had made significant contributions to the field of PRA/HRA, in particular by addressing the most important open questions and issues in HRA, by attempting to develop an integrated approach, and by developing a framework capable of identifying types of unsafe actions that generally have not been considered using existing methods. The reviewers had many concerns about specific aspects of the methodology and made many recommendations for ways to improve and extend the method, and to make its application more cost effective and useful to PRA in general. Details of the reviewers' comments and the ATHEANA team's responses to specific criticisms will be discussed.

A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method, referred to as A Technique for Human Event Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing contexts (EFCs) that can make unsafe actions (UAs) more likely, and identify ways to improve the method and its documentation. A set of criteria to evaluate the "success" of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators, and one of them was a senior reactor operator "on shift" until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs, are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

Recent innovations in subsurface corrosion practices of the Arabian American Oil Co. (ARAMCO) have reduced logging and workover costs substantially and have permitted the detection of corrosion in the outer string of two concentric casing strings. At the request of ARAMCO, Schlumberger conducted tests under both simulated and field conditions. Results showed that the data required to evaluate casing corrosion in a 7-in. x 9 5/8-in. completion can be obtained during a single logging run using a 21.6-in. coil spacing electromagnetic thickness tool (ETT-A(TM)) sonde (as opposed to two runs with the 17.6-in. and 21.6-in. sondes previously used). In addition, corrosion of the outer string of 9 5/8-in. or 13 3/8-in. casing can be detected by using the results of the ETT-A logs and pipe-analysis tool (PAT) logs or caliper logs. To date, the application of this technique has been very successful in ARAMCO's operations.

Argonne National Laboratory (ANL) and the Houston Advanced Research Center (HARC) have been tasked by the Counterdrug Technology Assessment Center of the Office of National Drug Control Policy to conduct evaluations and analyses of technologies for the non-intrusive inspection of containers for illicit substances. These technologies span the range of nuclear, X-ray, and chemical techniques used in nondestructive sample analysis. ANL has performed assessments of nuclear and X-ray inspection concepts and undertaken site visits with developers to understand the capabilities and the range of applicability of candidate systems. ANL and HARC have provided support to law enforcement agencies (LEAs), including participation in numerous field studies. Both labs have provided staff to assist in the Narcotics Detection Technology Assessment (NDTA) program for evaluating drug detection systems. Also, the two labs are performing studies of drug contamination of currency. HARC has directed technical evaluations of automated ballistics imaging and identification systems under consideration by law enforcement agencies. ANL and HARC have sponsored workshops and a symposium, and are participating in a Non-Intrusive Inspection Study being led by Dynamics Technology, Incorporated.

Abstract ID: WED-AM-B3. Use of ion beam analysis techniques to characterise the effect of 12 MeV proton irradiation on the corrosion behaviour of pure iron. Oxygen and hydrogen, which play a crucial role during the corrosion process, have been specifically investigated. Heavy deaerated water...

Over the past several years, the US Nuclear Regulatory Commission (NRC) has sponsored the development of a new method for performing human reliability analyses (HRAs). A major impetus for the program was the recognized need for a method that would not only address errors of omission (EOOs), but also errors of commission (EOCs). Although several documents have been issued describing the basis and development of the new method, referred to as "A Technique for Human Event Analysis" (ATHEANA), two documents were drafted to initially provide the necessary documentation for applying the method: the frame of reference (FOR) manual, which served as the technical basis document for the method, and the implementation guideline (IG), which provided step-by-step guidance for applying the method. Upon the completion of the draft FOR manual and the draft IG in April 1997, along with several step-throughs of the process by the development team, the method was ready for a third-party test. The method was demonstrated at Seabrook Station in July 1997. The main goals of the demonstration were to (1) test the ATHEANA process as described in the FOR manual and the IG, (2) test a training package developed for the method, (3) test the hypothesis that plant operators and trainers have significant insight into the error-forcing contexts (EFCs) that can make unsafe actions (UAs) more likely, and (4) identify ways to improve the method and its documentation. The results of the Seabrook demonstration are evaluated against the success criteria, and important findings and recommendations regarding ATHEANA that were obtained from the demonstration are presented here.

Techniques including expert systems, fuzzy logic, and Petri nets, as well as data from remote terminal units (RTUs) of supervisory control and data acquisition (SCADA) systems and digital protective relays, have been explored and utilized to fulfill...

Mestrovic, Ante [Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia (Canada) and Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia (Canada)]. E-mail: amestrovic@bccancer.bc.ca; Clark, Brenda G. [Department of Physics and Astronomy, University of British Columbia, Vancouver, British Columbia (Canada); Department of Medical Physics, British Columbia Cancer Agency, Vancouver, British Columbia (Canada)

Purpose: To develop a method of predicting the values of dose distribution parameters of different radiosurgery techniques for treatment of arteriovenous malformation (AVM) based on internal geometric parameters. Methods and Materials: For each of 18 previously treated AVM patients, four treatment plans were created: circular collimator arcs, dynamic conformal arcs, fixed conformal fields, and intensity-modulated radiosurgery. An algorithm was developed to characterize the target and critical structure shape complexity and the position of the critical structures with respect to the target. Multiple regression was employed to establish the correlation between the internal geometric parameters and the dose distribution for different treatment techniques. The results from the model were applied to predict the dosimetric outcomes of different radiosurgery techniques and select the optimal radiosurgery technique for a number of AVM patients. Results: Several internal geometric parameters showing statistically significant correlation (p < 0.05) with the treatment planning results for each technique were identified. The target volume and the average minimum distance between the target and the critical structures were the most effective predictors for normal tissue dose distribution. The structure overlap volume with the target and the mean distance between the target and the critical structure were the most effective predictors for critical structure dose distribution. The predicted values of dose distribution parameters of different radiosurgery techniques were in close agreement with the original data. Conclusions: A statistical model has been described that successfully predicts the values of dose distribution parameters of different radiosurgery techniques and may be used to predetermine the optimal technique on a patient-to-patient basis.
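The multiple-regression step described above can be illustrated with a minimal sketch. All predictor names (target volume, mean minimum target-to-structure distance) and values here are synthetic stand-ins, not the study's patient data:

```python
import numpy as np

# Hypothetical geometric predictors for 18 patients; illustrative only.
rng = np.random.default_rng(0)
volume = rng.uniform(1.0, 20.0, 18)        # target volume (cc)
distance = rng.uniform(2.0, 30.0, 18)      # mean min. target-to-structure distance (mm)

# Synthetic "planned" normal-tissue dose parameter with a known linear trend.
v12 = 5.0 + 0.8 * volume - 0.1 * distance + rng.normal(0.0, 0.2, 18)

# Multiple regression: design matrix with an intercept column, least-squares fit.
X = np.column_stack([np.ones_like(volume), volume, distance])
coef, *_ = np.linalg.lstsq(X, v12, rcond=None)

# Predict the dose parameter for a new patient from geometry alone.
def predict(vol, dist):
    return coef @ np.array([1.0, vol, dist])
```

Fitting one such model per treatment technique, then comparing the predicted dose-distribution parameters, is the essence of selecting the optimal technique before planning.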

A cutting sound enhancement system (10) for transmitting an audible signal from the cutting head (101) of a piece of mine machinery (100) to an operator at a remote station (200), wherein the operator, using a headphone unit (14), can monitor the difference in sounds being made solely by the cutting head (101) to determine the location of the roof, floor, and walls of a coal seam (50).

This is an end-of-year report for a project funded by the National Nuclear Security Administration's Office of Nuclear Safeguards (NA-241). The goal of this project is to investigate the feasibility of using Neutron Resonance Transmission Analysis (NRTA) to assay plutonium in commercial light-water-reactor spent fuel. This project is part of a larger research effort within the Next-Generation Safeguards Initiative (NGSI) to evaluate methods for assaying plutonium in spent fuel, the Plutonium Assay Challenge. The first-year goals for this project were modest and included: 1) developing a zero-order MCNP model for the NRTA technique, simulating data results presented in the literature; 2) completing a preliminary set of studies investigating important design and performance characteristics for the NRTA measurement technique; and 3) documenting this work in an end-of-year report (this report). Research teams at Los Alamos National Laboratory (LANL), Lawrence Berkeley National Laboratory (LBNL), Pacific Northwest National Laboratory (PNNL), and several universities are also working to investigate plutonium assay methods for spent-fuel safeguards. While the NRTA technique is well proven in the scientific literature for assaying individual spent fuel pins, it is a newcomer to the current NGSI efforts studying Pu assay techniques, having started only in March 2010; several analytical techniques have been under investigation within this program for two to three years or more. This report summarizes a nine-month period of work.

The technique for measuring the low-frequency ac mobility of free surface charges first employed by Sommer is analyzed for arbitrary values of driving frequency, charge mobility, and effective mass. Analytical expressions for the cell admittance are given for both rectangular and circular geometries in the absence of edge corrections.

With post-fixation staining techniques in this study, full volumes of previously implanted stents have been analyzed in situ in a non-destructive manner. The increased soft-tissue contrast imparted by metal-containing stains allowed for a qualitative...

Numerical methods fall into several categories: stochastic methods [16, 19] and finite element methods [4]; these techniques become computationally very expensive in such cases. A wide variety of finite element methods, weighted residuals, the method of orthogonal collocation, and Galerkin's method are also used for solving...

A compensation system for a computer-controlled machining apparatus having a controller and including a cutting tool and a workpiece holder which are movable relative to one another along a preprogrammed path during a machining operation utilizes a camera and a vision computer for gathering information at a preselected stage of a machining operation relating to the actual shape and size of the cutting edge of the cutting tool and for altering the preprogrammed path in accordance with detected variations between the actual size and shape of the cutting edge and an assumed size and shape of the cutting edge. The camera obtains an image of the cutting tool against a background so that the cutting tool and background possess contrasting light intensities, and the vision computer utilizes the contrasting light intensities of the image to locate points therein which correspond to points along the actual cutting edge. Following a series of computations involving the determining of a tool center from the points identified along the tool edge, the results of the computations are fed to the controller where the preprogrammed path is altered as aforedescribed. 9 figures.

A system for forming a wellbore includes a drill tubular. A drill bit is coupled to the drill tubular. One or more cutting structures are coupled to the drill tubular above the drill bit. The cutting structures remove at least a portion of formation that extends into the wellbore formed by the drill bit.

This report examines a wafer slicing technique developed by Crystal Systems, Inc. that reduces the cost of photovoltaic wafers. This fixed-abrasive slicing technique (FAST) uses a multiwire bladepack and a diamond-plated wirepack; water is the coolant. FAST is in the prototype production stage and reduces expendable material costs while retaining the advantages of a multiwire slurry technique. The cost analysis revealed that costs can be decreased by making more cuts per bladepack and slicing more wafers per linear inch. Researchers studied the degradation of bladepacks and increased wirepack life. 21 refs.

This invention resulted from a contract with the United States Department of Energy and relates to a mining tool. More particularly, the invention relates to an assembly capable of drilling a hole having a square cross-sectional shape with radiused corners. In mining operations in which conventional auger-type drills are used to form a series of parallel, cylindrical holes in a coal seam, a large amount of coal remains in place in the seam because the shape of the holes leaves thick webs between the holes. A higher percentage of coal can be mined from a seam by a means capable of drilling holes having a substantially square cross section. It is an object of this invention to provide an improved mining apparatus by means of which the amount of coal recovered from a seam deposit can be increased. Another object of the invention is to provide a drilling assembly which cuts corners in a hole having a circular cross section. These objects and other advantages are attained by a preferred embodiment of the invention.

ParVis was a project funded under LAB 10-05: “Earth System Modeling: Advanced Scientific Visualization of Ultra-Large Climate Data Sets”. Argonne was the lead lab with partners at PNNL, SNL, NCAR and UC-Davis. This report covers progress from January 1st, 2013 through Dec 1st, 2014. Two previous reports covered the period from Summer, 2010, through September 2011 and October 2011 through December 2012, respectively. While the project was originally planned to end on April 30, 2013, personnel and priority changes allowed many of the institutions to continue work through FY14 using existing funds. A primary focus of ParVis was introducing parallelism to climate model analysis to greatly reduce the time-to-visualization for ultra-large climate data sets. Work in the first two years was conducted on two tracks with different time horizons: one track to provide immediate help to climate scientists already struggling to apply their analysis to existing large data sets and another focused on building a new data-parallel library and tool for climate analysis and visualization that will give the field a platform for performing analysis and visualization on ultra-large datasets for the foreseeable future. In the final 2 years of the project, we focused mostly on the new data-parallel library and associated tools for climate analysis and visualization.

Rectilinear Glass-Cut Dissections of Rectangles to Squares. Jurek Czyzowicz. ... is made using only rectilinear glass-cuts, i.e., vertical or horizontal straight-line cuts separating pieces into two. 1. Introduction. A glass-cut of a rectangle is a cut by a straight-line segment...

Improvements in detection and resolution are always desired and needed. There are various instruments available for the inspection of concrete structures that can be used with confidence for detecting different defects. However, more often than not that confidence is heavily dependent on the experience of the operator rather than the clear, objective discernibility of the output of the instrument. The challenge of objective discernment is amplified when the concrete structures contain multiple layers of reinforcement, are of significant thickness, or both, such as concrete structures in nuclear power plants. We seek to improve and extend the usefulness of results produced using the synthetic aperture focusing technique (SAFT) on data collected from thick, complex concrete structures. A secondary goal is to improve existing SAFT results, with regard to repeatably and objectively identifying defects and/or the internal structure of concrete structures. Towards these goals, we are applying the time-frequency technique of wavelet packet decomposition and reconstruction using a mother wavelet that possesses the exact reconstruction property. However, instead of analyzing the coefficients of each decomposition node, we select and reconstruct specific nodes based on the frequency band each contains, to produce a frequency-band-specific time-series representation. SAFT is then applied to these frequency-specific reconstructions, allowing SAFT to be used to visualize the reflectivity of a frequency band and that band's interaction with the contents of the concrete structure. We apply our technique to data sets collected using a commercial ultrasonic linear array (MIRA) from two 1.5 m x 2 m x 25 cm concrete test specimens. One specimen contains multiple layers of rebar. The other contains honeycomb, crack, and rebar bonding defect analogs. This approach opens up a multitude of possibilities for improved detection, readability, and overall improved objectivity.
We will focus on improved defect/reinforcement isolation in thick and multilayered reinforcement environments. Additionally, the ability to empirically explore the possibility of a frequency-band-defect-type relationship or sensitivity becomes available.
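The band-isolation step described above can be sketched on a synthetic trace. The report uses wavelet packet decomposition; this stand-in uses a simple FFT mask only to illustrate reconstructing a frequency-band-specific time series before imaging, and all signal parameters are illustrative:

```python
import numpy as np

# Synthetic A-scan: a low-frequency and a high-frequency component.
fs = 1.0e6                                   # 1 MHz sampling rate, illustrative
t = np.arange(1000) / fs
low = np.sin(2 * np.pi * 50e3 * t)           # 50 kHz component
high = 0.5 * np.sin(2 * np.pi * 200e3 * t)   # 200 kHz component
trace = low + high

spec = np.fft.rfft(trace)
freqs = np.fft.rfftfreq(len(trace), d=1 / fs)

# Keep only the 150-250 kHz band and reconstruct its time series;
# an imaging step (e.g. SAFT) would then operate on band_trace alone.
mask = (freqs >= 150e3) & (freqs <= 250e3)
band_trace = np.fft.irfft(spec * mask, n=len(trace))
```

In the actual method, the per-band traces come from reconstructing selected wavelet packet nodes, which preserves time localization better than a hard spectral mask.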

of the internal stresses as do the previously mentioned methods. Another method which has been successfully used to test flanged tubes is the brittle-model technique, in which a brittle model is loaded to failure. This method is useful for testing flanged... be used to measure stresses in flanged tubes, flanged-tube models of typical and extreme dimensions were tested under two extreme conditions of loading: (1) zero internal pressure, and (2) zero seal force. In each model, a polariscope was embedded...

Purpose. To establish the efficacy and safety of the preclose technique in total percutaneous endovascular aortic repair (PEVAR). Methods. A systematic literature search of the Medline database was conducted for series on PEVAR published between January 1999 and January 2012. Results. Thirty-six articles comprising 2,257 patients and 3,606 arterial accesses were included. Anatomical criteria used to exclude patients from undergoing PEVAR were not uniform across all series. The technical success rate was 94% per arterial access. Failure was unilateral in the majority (93%) of the 133 failed PEVAR cases. The groin complication rate in PEVAR was 3.6%; a minority (1.6%) of these groin complications required open surgery. The groin complication rate in failed PEVAR cases converted to groin cutdown was 6.1%. A significantly higher technical success rate was achieved when arterial access was performed via ultrasound guidance. Technical failure rate was significantly higher with larger sheath size (≥20F). Conclusion. The preclose technique in PEVAR has a high technical success rate and a low groin complication rate. Technical success tends to increase with ultrasound-guided arterial access and decrease with larger access. When failure occurs, it is unilateral in the majority of cases, and conversion to surgical cutdown does not appear to increase the operative risk.

Levelling. ABSTRACT: Imaging sensors are increasingly widespread in geodetic instruments because they enable the evaluation of digital image data for the determination of direction and height. Beyond this, the analysis ... to demonstrate. 1. INTRODUCTION. Due to the automation of tasks in terrestrial geodesy, image sensors and vision...

Trace-Based Analysis and Prediction of Cloud Computing User Behavior Using Fractal Modeling. In this paper, we investigate the characteristics of the cloud computing requests received ... the alpha-stable distribution. Keywords: cloud computing; alpha-stable distribution; fractional order.

... energy-efficient and less polluting drive-train alternative to the conventional internal combustion engine. University of Biskra, Biskra, Algeria. Abstract: This paper presents system analysis, modeling, and simulation of the dynamics and system architecture. Simulation tests have been carried out on a 37-kW EV that consists...

Failure Mode and Effect Analysis (FMEA) information from processes modeled in the Little-JIL process definition language. Typically, FMEA information is created manually by skilled experts, an approach ... This definition can then be used to create FMEA representations for a wide range of potential failures.

In this paper we explore and develop a simple set of rules that apply to cutting, pasting, and folding honeycomb lattices. We consider origami-like structures that are extrinsically flat away from zero-dimensional sources of Gaussian curvature and one-dimensional sources of mean curvature, and our cutting and pasting rules maintain the intrinsic bond lengths on both the lattice and its dual lattice. We find that a small set of rules is allowed, providing a framework for exploring and building kirigami -- folding, cutting, and pasting the edges of paper.

A method is described for cutting with a laser beam where an oxygen-hydrocarbon reaction is used to provide auxiliary energy to a metal workpiece to supplement the energy supplied by the laser. Oxygen is supplied to the laser focus point on the workpiece by a nozzle through which the laser beam also passes. A liquid hydrocarbon is supplied by coating the workpiece along the cutting path with the hydrocarbon prior to laser irradiation or by spraying a stream of hydrocarbon through a nozzle aimed at a point on the cutting path which is just ahead of the focus point during irradiation. 1 figure.

We demonstrate a new approach to the analysis of extensive multi-energy data. For the case of d + He-4, we produce a phase shift analysis covering the energy range 3 to 11 MeV. The key idea is the use of iterative perturbative data-to-potential inversion, which can produce potentials that reproduce the data simultaneously over a range of energies. It thus effectively regularizes the extraction of phase shifts from diverse, incomplete, and possibly somewhat contradictory data sets. In doing so, it will provide guidance to experimentalists as to what further measurements should be made. This study is limited to vector spin observables and spin-orbit interactions. We discuss alternative ways in which the theory can be implemented and which provide insight into the ambiguity problems. We compare the extrapolation of these solutions to other energies. Majorana terms are presented for each potential component.

A Principal Components Analysis (PCA) has been written to aid in the interpretation of multivariate aerial radiometric data collected by the US Department of Energy (DOE) under the National Uranium Resource Evaluation (NURE) program. The variations exhibited by these data have been reduced and classified into a number of linear combinations by using the PCA program. The PCA program then generates histograms and outlier maps of the individual variates. Black and white plots can be made on a Calcomp plotter by the application of follow-up programs. All programs referred to in this guide were written for a DEC-10. From this analysis a geologist may begin to interpret the data structure. Insight into geological processes underlying the data may be obtained.
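The PCA reduction described above can be sketched compactly in modern terms. The channel names and all data here are synthetic stand-ins (the original is a DEC-10 program operating on NURE survey records), shown only to illustrate the component extraction and outlier flagging:

```python
import numpy as np

# Hypothetical multivariate aerial radiometric records (rows = survey points,
# columns = channels, e.g. K, U, Th count rates); values are synthetic.
rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, (200, 3))
data = base @ np.array([[2.0, 0.5, 0.0],
                        [0.0, 1.0, 0.3],
                        [0.0, 0.0, 0.2]])   # correlated channels

# Principal components from the eigendecomposition of the covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Scores: each record expressed as a linear combination of components.
scores = centered @ eigvecs

# Flag outliers on the first component for mapping (2.5-sigma rule here).
sigma = scores[:, 0].std()
outliers = np.abs(scores[:, 0]) > 2.5 * sigma
```

Histograms of the individual score columns and maps of the flagged records correspond to the program outputs mentioned in the abstract.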

A non-intrusive technique using principal component analysis to infer the depth of the fission fragment caesium-137 when it is buried under silica sand has been described. Using energy variances within different gamma-ray spectra, a complete depth model was produced for a single caesium-137 source buried at depths ranging between 5 and 50 mm, in 1 mm increments. This was achieved using a cadmium telluride detector and a bespoke phantom. In this paper we describe the advancement of the technique by further validating it using blind tests for applications outside of the laboratory, where not only the depth (z) but also the surface (x, y) location of gamma-ray emitting contamination is often poorly characterised. At present the technique has been tested at the point of maximum activity above the entrained gamma-ray emitting source (where the optimal x, y location is known). This is not usually practical in poorly characterised environments, where the detector cannot be conveniently placed at such an optimal location to begin with, and scanning at multiple points around the region of interest is often required. Using a uniform scanning time, the point of maximum intensity can be located by sampling in terms of total count rate and converging on this optimal point of maximum intensity. (authors)

Drilling at hazardous waste sites for environmental remediation or monitoring requires containment of all drilling fluids and cuttings to protect personnel and the environment. At many sites, air drilling techniques have advantages over other drilling methods, requiring effective filtering and containment of the return air/cuttings stream. A study of current containment methods indicated improvements could be made in the filtering of radionuclides and volatile organic compounds, and in equipment such as alarms, instrumentation, and pressure safety features. Sandia National Laboratories, Dept. 6111, Environmental Drilling Projects Group, initiated this work to address these concerns. A look at the industry showed that asbestos abatement equipment could be adapted for containment and filtration of air drilling returns. An industry manufacturer was selected to build a prototype machine. The machine was leased and put through a six-month testing and evaluation period at Sandia National Laboratories. Various materials were vacuumed and filtered with the machine during this time. In addition, it was used in an actual air drive drilling operation. Results of these tests indicate that the vacuum/filter unit will meet or exceed our drilling requirements. This vacuum/filter unit could be employed at a hazardous waste site or any site where drilling operations require cuttings and air containment.

Battelle Memorial Institute as part of its U.S. Department of Energy (USDOE) Contract No. DE-AC05-76RL01830 to operate the Pacific Northwest National Laboratory (PNNL) provides technology assistance to qualifying small businesses in association with a Technology Assistance Program (TAP). Qualifying companies are eligible to receive a set quantity of labor associated with specific technical assistance. Having applied for a TAP agreement to assist with fatigue characterization of Abrasive Water Jet (AWJ) cut titanium specimens, the OMAX Corporation was awarded TAP agreement 09-02. This program was specified to cover dynamic testing and analysis of fatigue specimens cut from titanium alloy Ti-6%Al-4%V via AWJ technologies. In association with the TAP agreement, a best effort agreement was made to characterize fatigue specimens based on test conditions supplied by OMAX.

A Design-Oriented Framework to Determine the Parasitic Parameters of High Frequency Magnetics in Switching Power Supplies Using Finite Element Analysis Techniques. A Thesis by Mohammad Bagher Shadmand. Submitted to the Office... Copyright 2012 Mohammad Bagher Shadmand.

Most researchers with little high performance computing (HPC) experience have difficulty using supercomputing resources productively. To address this issue, we investigated the usage behaviors of Kraken, the world's fastest academic supercomputer, and built a knowledge-based recommendation system to improve user productivity. Six clustering techniques, along with three cluster validation measures, were implemented to investigate the underlying patterns of usage behaviors. Besides manually defining a category for very large job submissions, six behavior categories were identified, which cleanly separated the data-intensive jobs from the computationally intensive jobs. Then, job statistics of each behavior category were used to develop a knowledge-based recommendation system that can provide users with instructions about choosing appropriate software packages, setting job parameter values, and estimating job queuing time and runtime. Experiments were conducted to evaluate the performance of the proposed recommendation system, covering 127 job submissions by users from different research fields. Positive feedback indicated the usefulness of the provided information. An average runtime estimation accuracy of 64.2%, with a 28.9% job termination rate, was achieved in the experiments, almost double the average accuracy in the Kraken dataset.
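A toy version of the clustering step might look as follows. The features, synthetic data, and the plain k-means implementation are illustrative assumptions, not the six techniques, validation measures, or Kraken data used in the study:

```python
import numpy as np

# Toy job-submission features (e.g. standardized core-hours vs. I/O volume);
# three well-separated behavior groups, purely synthetic.
rng = np.random.default_rng(2)
jobs = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),    # small jobs
                  rng.normal([3, 0], 0.3, (50, 2)),    # compute-intensive
                  rng.normal([0, 3], 0.3, (50, 2))])   # data-intensive

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(jobs, 3)

# Per-cluster statistics would feed the knowledge-based recommendations,
# e.g. a typical resource budget for jobs in each behavior category.
category_means = {j: jobs[labels == j].mean(axis=0) for j in range(3)}
```

In the actual system, several clustering algorithms are compared via cluster validation measures before the behavior categories are fixed.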

A detailed collisionless sheath theory and a three-region collisional model of a bounded plasma are presented, and the suitability of the collisional model for analysis of ignited mode thermionic converters is investigated. The sheath theory extends previous analyses to regimes in which the sheath potential and electron temperatures are comparable in magnitude. In all operating regimes typical of an ignited mode thermionic converter, the predicted sheaths extend several mean-free paths. The apparent collisionality of the sheaths prompted development of a collisional, three-region model of the converter plasma. By interfacing Particle-in-Cell regions (for the sheaths) and fluid regions (for the bulk of the plasma), a time-dependent, wall-to-wall model of the plasma in the inter-electrode space is created. The components of the model are tested and validated against analytic solutions and against one another, then applied to the analysis of an ignited mode thermionic converter. Under ignited mode operating conditions, the electron velocity distribution at the plasma/sheath boundary is found to be inconsistent with that assumed in the model development, and the calculation diverges. The observed distribution is analyzed and a new basis set of distribution functions is suggested that should permit application of the hybrid model to ignited mode thermionic converters.

Impacts of U.S. appliance and equipment standards have been described previously. Since 2000, the U.S. Department of Energy (DOE) has updated standards for clothes washers, water heaters, and residential central air conditioners and heat pumps. A revised estimate of the aggregate impacts of all the residential appliance standards in the United States shows that existing standards will reduce residential primary energy consumption and associated carbon dioxide (CO2) emissions by 8-9 percent in 2020 compared to the levels expected without any standards. Studies of possible new standards are underway for residential furnaces and boilers, as well as a number of products in the commercial (tertiary) sector, such as distribution transformers and unitary air conditioners. The analysis of standards has evolved in response to critiques and in an attempt to develop more precise estimates of the costs and benefits of these regulations. The newer analysis elements include: (1) valuing energy savings by using marginal (rather than average) energy prices specific to an end-use; (2) simulating the impacts of energy efficiency increases over a sample population of consumers to quantify the proportion of households having net benefits or net costs over the life of the appliance; and (3) calculating marginal markups in distribution channels to derive the incremental change in retail prices associated with increased manufacturing costs for improving energy efficiency.

A method of performing a magnetic resonance analysis of a biological object that includes placing the object in a main magnetic field (that has a static field direction) and in a radio frequency field; rotating the object at a frequency of less than about 100 Hz around an axis positioned at an angle of about 54°44′ relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a phase-corrected magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. The object may be reoriented about the magic angle axis between three predetermined positions that are related to each other by 120°. The main magnetic field may be rotated mechanically or electronically. Methods for magnetic resonance imaging of the object are also described.

A method of performing a magnetic resonance analysis of a biological object that includes placing the biological object in a main magnetic field and in a radio frequency field, the main magnetic field having a static field direction; rotating the biological object at a rotational frequency of less than about 100 Hz around an axis positioned at an angle of about 54°44′ relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. According to another embodiment, the radio frequency is pulsed to provide a sequence capable of producing a spectrum that is substantially free of spinning sideband peaks.

A quick method for analyzing the chemical composition of renewable energy biomass feedstock was developed by using Fourier transform near-infrared (FT-NIR) spectroscopy coupled with multivariate analysis. The study presents the broad-based model hypothesis that a single FT-NIR predictive model can be developed to analyze multiple types of biomass feedstock. The two most important biomass feedstocks, corn stover and switchgrass, were evaluated for the variability in their concentrations of the following components: glucan, xylan, galactan, arabinan, mannan, lignin, and ash. A hypothesis test was developed based upon these two species. Both cross-validation and independent validation results showed that the broad-based model developed is promising for future chemical prediction of both biomass species; in addition, the results also showed the method's prediction potential for wheat straw.

Thermal recovery methods are generally employed for recovering heavy oil and tar sand bitumen. These methods rely on reduction of oil viscosity by application of heat as one of the primary mechanisms of oil recovery. Therefore, design and performance prediction of the thermal recovery methods require adequate prediction of oil viscosity as a function of temperature. In this paper, several commonly used temperature-viscosity correlations are analyzed to evaluate their ability to correctly predict heavy oil and bitumen viscosity as a function of temperature. The analysis showed that Ali and Standing's correlations gave satisfactory results in most cases when properly applied. Guidelines are provided for their application. None of the correlations, however, performed satisfactorily with very heavy oils at low temperatures.
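One widely used family of temperature-viscosity relations is the Walther/ASTM D341 double-log form. The abstract does not give the exact equations of Ali's or Standing's correlations, so the sketch below (the function name and two-point fitting procedure are illustrative assumptions) shows only the generic double-log approach: fit two measured (temperature, viscosity) points, then interpolate in temperature.

```python
import math

def walther_viscosity(T_K, T1, nu1, T2, nu2):
    """Walther/ASTM D341-type double-log correlation:
        log10(log10(nu + 0.7)) = A - B * log10(T)
    fitted through two reference (temperature, viscosity) points.
    Temperatures in kelvin, kinematic viscosities nu in cSt."""
    Z1 = math.log10(math.log10(nu1 + 0.7))
    Z2 = math.log10(math.log10(nu2 + 0.7))
    # Viscosity falls with temperature, so B > 0 for physical data.
    B = (Z1 - Z2) / (math.log10(T2) - math.log10(T1))
    A = Z1 + B * math.log10(T1)
    Z = A - B * math.log10(T_K)
    return 10.0 ** (10.0 ** Z) - 0.7

# Hypothetical heavy oil: 5000 cSt at 311 K, 40 cSt at 372 K.
nu_340 = walther_viscosity(340.0, 311.0, 5000.0, 372.0, 40.0)
```

Consistent with the caution in the abstract, very heavy oils deviate from this form at low temperatures, so such a fit should be validated against measured data before use in thermal recovery design.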

Natural resource valuation has always had a fundamental role in the practice of cost-benefit analysis of health, safety, and environmental issues. The authors provide an objective overview of resource valuation techniques and describe their potential role in environmental restoration/waste management (ER/WM) activities at federal facilities. This handbook considers five general classes of valuation techniques: (1) market-based techniques, which rely on historical information on market prices and transactions to determine resource values; (2) nonmarket techniques that rely on indirect estimates of resource values; (3) nonmarket techniques that are based on direct estimates of resource values; (4) cross-cutting valuation techniques, which combine elements of one or more of these methods; and (5) ecological valuation techniques used in the emerging field of ecological economics. The various valuation techniques under consideration are described by highlighting their applicability in environmental management and regulation. The handbook also addresses key unresolved issues in the application of valuation techniques generally, including discounting future values, incorporating environmental equity concerns, and concerns over the uncertainties in the measurement of natural resource values and environmental risk.

An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster than real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C+, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

Controlling dust when cutting fibre-cement board. Dust controls are not typically used when cutting and shaping fibre-cement board. To protect yourself, use one of the recommended methods for cutting fibre-cement board, and inspect the dust control equipment before you begin.

Committee: Dr. H. R. Cross, Dr. J. W. Savell. Twenty-nine sides from lightweight heifer carcasses, ranging from 113 to 250 kg, were fabricated into wholesale and retail cuts using standardized procedures. Retail cuts were trimmed to either zero or 0.64... were below the ten percent fat level. Retail cut yields from the chuck, rib, loin and round for both trim levels were considerably lower than those reported in other studies. Retail cut yield from the four major wholesale cuts increased...

We consider multicommodity flow and cut problems in polymatroidal networks, where there are submodular capacity constraints on the edges incident to a node. Polymatroidal networks were introduced in the single-commodity setting by Lawler and Martel, and by Hassin, and are closely related to the submodular flow model of Edmonds and Giles; the well-known maxflow-mincut theorem holds in this more general setting. Polymatroidal networks for the multicommodity case have not, as far as the authors are aware, been previously explored. Our work is primarily motivated by applications to information flow in wireless networks. We also consider the notion of undirected polymatroidal networks and observe that they provide a natural way to generalize flows and cuts in edge- and node-capacitated undirected networks. We establish poly-logarithmic flow-cut gap results in several scenarios that have been previously considered in the standard network flow models where capacities are on the edges or nodes. Our results have already...

This assessment is to verify that hot work requirements associated with welding, cutting, burning, brazing, grinding, and other spark- or flame-producing operations have been implemented. Verify that the requirements implemented are appropriate for preventing loss of life and property from fire, and personal injury from contact with or exposure to molten metals, vapors, radiant energy, injurious rays, and sparks.

Advanced techniques for safety analysis of complex computer-based systems, applied to the gas turbine control system of the ICARO cogeneration plant at the ENEA CR Casaccia centre, Italy. The plant is based on a small gas turbine and has been specifically designed...

Social Network Mining, Analysis and Research Trends: Techniques and Applications. Social activities basically involve sharing content with friends on social networking websites; manipulation of social data consists of analysing both the structure of such networks and their content...

A Clock-Less Jitter Spectral Analysis Technique. Chee-Kian Ong, Member, IEEE; Dongwoo Hong; Kwang-Ting (Tim)... A computationally intensive method, based on the derivative principle, extracts only the random jitter component; simulation shows that these methods can accurately estimate the sinusoidal and random jitters...

In 1992, the Bonneville Power Administration spent $361 million in capital on a system to transmit electricity. By 1998, it was spending about one-third that amount: $123 million. In 1992, BPA's expenses for managing, operating and maintaining the transmission system ran $160 million. By 1998, BPA had cut expenses to $128 million. Maintenance costs alone were cut 28%. In 1992, management of the grid was split into six organizations. Today, there is one. About 2,900 people worked for transmission in October 1992. By February 1998, the Transmission Business Line (TBL) employed 1,855. Transmission in 1992 for the most part meant new towers, lines and substations. Today it means computers, digital communications and electronic controls.

This report examines a wafer slicing technique developed by Crystal Systems, Inc. that reduces the cost of photovoltaic wafers. This fixed-abrasive slicing technique (FAST) uses a multiwire bladepack and a diamond-plated wirepack; water is the coolant. FAST is in the prototype production stage and reduces expendable material costs while retaining the advantages of a multiwire slurry technique. The cost analysis revealed that costs can be decreased by making more cuts per bladepack and slicing more wafers per linear inch. Researchers studied the degradation of bladepacks and increased wirepack life. 21 refs.

Birmingham: Cutting Your CO2. Birmingham City Council, July 2007 case study... of the Birmingham Cutting CO2 campaign: news items, display materials, etc.; advising on pledge-gathering materials. The system was launched in July 2007 as part of the 'Birmingham Cutting Your CO2' campaign. By the end...

The magnetic flux leakage (MFL) inspection technique is commonly used for non-destructive testing of installed oil and natural gas pipelines. To guard against consequences that could result from a pipeline leak or catastrophic failure, pipelines must be routinely evaluated...

This thesis will address the use of nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) techniques to probe the “monolith reactor”, which consists of a structured catalyst over which reactions may occur. This reactor has emerged...

A cutting blade is disclosed fabricated of micromachined silicon. The cutting blade utilizes a monocrystalline silicon substrate having a {211} crystalline orientation to form one or more cutting edges that are defined by the intersection of {211} crystalline planes of silicon with {111} crystalline planes of silicon. This results in a cutting blade which has a shallow cutting-edge angle θ of 19.5°. The micromachined cutting blade can be formed using an anisotropic wet etching process which substantially terminates etching upon reaching the {111} crystalline planes of silicon. This allows multiple blades to be batch fabricated on a common substrate and separated for packaging and use. The micromachined cutting blade, which can be mounted to a handle in tension and optionally coated for increased wear resistance and biocompatibility, has multiple applications including eye surgery (LASIK procedure).
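The quoted cutting-edge angle of 19.5° can be checked directly from the crystallography: for a cubic lattice, the angle between {211} and {111} planes is the angle between their normal vectors. A small sketch (the helper name is illustrative):

```python
import math

def interplanar_angle(hkl1, hkl2):
    """Angle in degrees between the normals of two cubic-lattice planes,
    given as Miller-index tuples."""
    dot = sum(a * b for a, b in zip(hkl1, hkl2))
    n1 = math.sqrt(sum(a * a for a in hkl1))
    n2 = math.sqrt(sum(b * b for b in hkl2))
    return math.degrees(math.acos(dot / (n1 * n2)))

angle = interplanar_angle((2, 1, 1), (1, 1, 1))
```

Here arccos(4 / (√6 · √3)) ≈ 19.47°, matching the 19.5° figure in the abstract.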

While the performance of YBa2Cu3O7-x (YBCO)-based coated conductors under dc currents has improved significantly in recent years, filamentization is being investigated as a technique to reduce ac loss so that the 2nd generation (2G) high temperature superconducting (HTS) wires can also be utilized in various ac power applications such as cables, transformers and fault current limiters. Experimental studies have shown that simply filamentizing the superconducting layer is not effective enough to reduce ac loss because of incomplete flux penetration in between the filaments as the length of the tape increases. To introduce flux penetration in between the filaments more uniformly and further reduce the ac loss, virtual transverse cross-cuts were made in superconducting filaments of the coated conductors fabricated using the metal organic chemical vapor deposition (MOCVD) method. The virtual transverse cross-cuts were formed by making cross-cuts (17-120 µm wide) on the IBAD (ion beam assisted deposition)-MgO templates using laser scribing, followed by depositing the superconducting layer (~0.6 µm thick). AC losses were measured and compared for filamentized conductors with and without the cross-cuts under applied peak ac fields up to 100 mT. The results were analyzed to evaluate the efficacy of filament decoupling and the feasibility of using this method to achieve ac loss reduction.

The purpose of this study is to evaluate the feasibility of using ground-water contaminant mitigation techniques to control radionuclide migration following a severe commercial nuclear power reactor accident. The two types of severe commercial reactor accidents investigated are: (1) containment basemat penetration of core melt debris which slowly cools and leaches radionuclides to the subsurface environment, and (2) containment basemat penetration of sump water without full penetration of the core mass. Six generic hydrogeologic site classifications are developed from an evaluation of reported data pertaining to the hydrogeologic properties of all existing and proposed commercial reactor sites. One-dimensional radionuclide transport analyses are conducted on each of the individual reactor sites to determine the generic characteristics of a radionuclide discharge to an accessible environment. Ground-water contaminant mitigation techniques that may be suitable, depending on specific site and accident conditions, for severe power plant accidents are identified and evaluated. Feasible mitigative techniques and associated constraints on feasibility are determined for each of the six hydrogeologic site classifications. The first of three case studies is conducted on a site located on the Texas Gulf Coastal Plain. Mitigative strategies are evaluated for their impact on contaminant transport and results show that the techniques evaluated significantly increased ground-water travel times. 31 references, 118 figures, 62 tables.
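For context on the one-dimensional radionuclide transport analyses mentioned above, a standard closed-form model is the Ogata-Banks solution for advection-dispersion from a constant-concentration boundary. This is a generic sketch (decay, retardation, and all site-specific parameters are omitted), not the study's actual model:

```python
import math

def ogata_banks(x, t, v, D, C0=1.0):
    """Ogata-Banks solution for 1-D advection-dispersion from a
    constant-concentration inlet at x = 0 (decay and retardation omitted).
    x [m], t [s], v seepage velocity [m/s], D dispersion coefficient [m^2/s]."""
    s = 2.0 * math.sqrt(D * t)
    term1 = math.erfc((x - v * t) / s)
    # The exponential factor can overflow for large v*x/D; guard it.
    arg = v * x / D
    term2 = math.exp(arg) * math.erfc((x + v * t) / s) if arg < 700.0 else 0.0
    return 0.5 * C0 * (term1 + term2)
```

At the advective front (x = v·t) the relative concentration is near one half, and far ahead of the front it is essentially zero, which is the qualitative behavior such screening analyses exploit.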

...strategy of this technology. It can be found that the air barrier technique, used instead of the heating supply around the outside zone of an office building, can avoid dewfall in winter and decrease the cold radiation, which has a great effect on the thermal environment...

Index terms: circuit transient analysis, convolution, nonlinear circuits, solitons, state variables. Transient analysis of distributed microwave circuits is complicated by the inability of frequency-domain methods... the linear part of a microwave circuit is described in the frequency domain by network parameters...

At the Hanford Tank Farms, recent changes in retrieval technology require cutting new risers in several single-shell tanks. The Hanford Tank Farm Operator is using water jet technology with abrasive silicate minerals such as garnet or olivine to cut through the concrete and rebar dome. The abrasiveness of these minerals, which become part of the high-level waste stream, may enhance the erosion of waste processing equipment. However, garnet and olivine are not thermodynamically stable in Hanford waste, slowly degrading over time. How likely these materials are to dissolve completely in the waste before the waste is processed in the Waste Treatment and Immobilization Plant can be evaluated using theoretical analysis for olivine and collected direct experimental evidence for garnet. Based on an extensive literature study, a large number of primary silicates decompose into sodalite and cancrinite when exposed to Hanford waste. Given sufficient time, the sodalite also degrades into cancrinite. Even though cancrinite has not been directly added to any Hanford tanks during process times, it is the most common silicate observed in current Hanford waste. By analogy, olivine and garnet are expected to ultimately also decompose into cancrinite. Garnet used in a concrete cutting demonstration was immersed in a simulated supernate representing the estimated composition of the liquid retrieving waste from Hanford tank 241-C-107 at both ambient and elevated temperatures. This simulant was amended with extra NaOH to determine if adding caustic would help enhance the degradation rate of garnet. The results showed that the garnet degradation rate was highest at the highest NaOH concentration and temperature. At the end of 12 weeks, however, the garnet grains were mostly intact, even when immersed in 2 molar NaOH at 80 deg C. Cancrinite was identified as the degradation product on the surface of the garnet grains. 
In the case of olivine, the rate of degradation in the high-pH regimes of a waste tank is expected to depend on two main parameters: carbonate is expected to slow olivine degradation rates, whereas hydroxide is expected to enhance olivine dissolution rates. Which of these two competing dissolution drivers will have a larger impact on the dissolution rate in the specific environment of a waste tank is currently not identifiable. In general, cancrinite is much smaller and less hard than either olivine or garnet, so would be expected to be less erosive to processing equipment. Complete degradation of either garnet or olivine prior to being processed at the Waste Treatment and Immobilization Plant cannot be confirmed, however.

A benchmarking effort was conducted to determine the accuracy of a new analytic generic geology thermal repository model developed at LLNL relative to a more traditional, numerical, lumped-parameter technique. The fast-running analytical thermal transport model assumes uniform thermal properties throughout a homogeneous storage medium. Arrays of time-dependent heat sources are included geometrically as arrays of line segments and points. The solver uses a source-based linear superposition of closed-form analytical functions from each contributing point or line to arrive at an estimate of the thermal evolution of a generic geologic repository. Temperature rise throughout the storage medium is computed as a linear superposition of temperature rises. It is modeled using the MathCAD mathematical engine and is parameterized to allow myriad gridded repository geometries and geologic characteristics [4]. It was anticipated that the temperature field calculated with the LLNL analytical model would provide an accurate 'bird's-eye' view in regions that are many tunnel radii away from actual storage units, i.e., at distances where tunnels and individual storage units could realistically be approximated as physical lines or points. However, geometrically explicit storage units, waste packages, tunnel walls and close-in rock are not included in the MathCAD model. The present benchmarking effort therefore focuses on the ability of the analytical model to accurately represent the close-in temperature field. Specifically, close-in temperatures computed with the LLNL MathCAD model were benchmarked against temperatures computed using a geometrically explicit, lumped-parameter repository thermal modeling technique developed over several years at ANL using the SINDAG thermal modeling code [5].
Application of this numerical modeling technique to underground storage of heat generating nuclear waste streams within the proposed YMR Site has been widely reported [6]. New SINDAG thermal models presented here share this same basic modeling approach.
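The source-based linear superposition described above can be illustrated with the classic continuous point-source conduction solution in an infinite homogeneous medium. This is a minimal sketch of the superposition idea only (function names and parameter values are illustrative, not the actual MathCAD model):

```python
import math

def point_source_dT(r, t, Q, k, alpha):
    """Temperature rise from a constant-power point source in an infinite
    homogeneous medium: dT = Q / (4*pi*k*r) * erfc(r / (2*sqrt(alpha*t))).
    r [m], t [s], Q [W], k thermal conductivity [W/m-K],
    alpha thermal diffusivity [m^2/s]."""
    return Q / (4.0 * math.pi * k * r) * math.erfc(r / (2.0 * math.sqrt(alpha * t)))

def superpose(sources, obs, t, k, alpha):
    """Linear superposition of temperature rises from several point sources.
    sources: list of ((x, y, z), Q) tuples; obs: observation point (x, y, z)."""
    total = 0.0
    for (x, y, z), Q in sources:
        r = math.dist((x, y, z), obs)
        total += point_source_dT(r, t, Q, k, alpha)
    return total
```

Line sources are handled the same way, by integrating (or summing discretized segments of) this kernel along the line; the overall field is just the sum of all contributions, which is what makes the analytical model fast.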

This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques, and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year; thus the instrumentation described in this report is no longer cutting-edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

We propose an infrared cut-off for the holographic dark energy which, besides the square of the Hubble scale, also contains the time derivative of the Hubble scale. This avoids the causality problem that appears when the event horizon area is used as the cut-off, and solves the coincidence problem.
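Assuming the standard parameterization of an infrared cutoff built from H² and Ḣ (the abstract does not state the exact form), the corresponding holographic dark energy density can be written as:

```latex
\rho_{\Lambda} = 3 M_p^2 \left( \alpha H^2 + \beta \dot{H} \right)
```

where H is the Hubble parameter, \dot{H} its time derivative, M_p the reduced Planck mass, and α, β dimensionless constants to be fixed by observation.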

Termination Semantics of Logic Programs with Cut and Related Features. Jamie Andrews, Dept. of... a semantics of termination for logic programs, particularly the termination of logic programs which use practical features such as the Prolog "cut". In order to prove termination of such programs...

...independently noted that cuts expressed in terms of variables from a suitable original... The in-degree constraints (1b) state that exactly one arc must enter each... of promising sets S, and (ii) the search for violated cuts by considering the...

A method and apparatus for diamond wire cutting of metal structures, such as nuclear reactor vessels, is provided. A diamond wire saw having a plurality of diamond beads with beveled or chamfered edges is provided for sawing into the walls of the metal structure. The diamond wire is guided by a plurality of support structures allowing for a multitude of different cuts. The diamond wire is cleaned and cooled by CO2 during the cutting process to prevent breakage of the wire and provide efficient cutting. Concrete can be provided within the metal structure to enhance cutting efficiency and reduce airborne contaminants. The invention can be remotely controlled to reduce exposure of workers to radioactivity and other hazards.

Purpose: The use of contrast agents in breast imaging can enhance nodule detectability and provide physiological information. Accordingly, there has been a growing trend toward using iodine as a contrast medium in digital mammography (DM) and digital breast tomosynthesis (DBT). Widespread use raises concerns about the best way to use iodine in DM and DBT, so a comparison is necessary to evaluate typical iodine-enhanced imaging methods. This study used a task-based observer model to determine the optimal imaging approach by analyzing six imaging paradigms in terms of their ability to resolve iodine at a given dose: unsubtracted mammography and tomosynthesis, temporal subtraction mammography and tomosynthesis, and dual energy subtraction mammography and tomosynthesis. Methods: Imaging performance was characterized using a detectability index d', derived from the system task transfer function (TTF), an imaging task, iodine signal difference, and the noise power spectrum (NPS). The task modeled a 10 mm diameter lesion containing iodine concentrations between 2.1 mg/cc and 8.6 mg/cc. TTF was obtained using an edge phantom, and the NPS was measured over several exposure levels, energies, and target-filter combinations. Using a structured CIRS phantom, d' was generated as a function of dose and iodine concentration. Results: For all iodine concentrations and doses, temporal subtraction techniques for mammography and tomosynthesis yielded the highest d', while dual energy techniques for both modalities demonstrated the next best performance. Unsubtracted imaging resulted in the lowest d' values for both modalities, with unsubtracted mammography performing the worst of all six paradigms. Conclusions: At any dose, temporal subtraction imaging provides the greatest detectability, with temporally subtracted DBT performing best. The authors attribute this performance to excellent cancellation of in-plane structures and improved signal difference in the lesion.
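The detectability index described above can be illustrated with a minimal non-prewhitening (NPW) observer sketch. This is not the authors' model: the task, TTF, and NPS functional forms below are placeholders chosen only to show how d' combines the three measured quantities.

```python
import numpy as np

# Minimal NPW detectability sketch on radially sampled frequency data:
#   d'^2 = [int W(f)^2 TTF(f)^2 2*pi*f df]^2 / int W(f)^2 TTF(f)^2 NPS(f) 2*pi*f df
# All functional forms below are illustrative placeholders, not measured data.

def trapezoid(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum((y[:-1] + y[1:]) * np.diff(x)) / 2.0)

def detectability(f, task, ttf, nps):
    """NPW d' from 1-D radial samples of the task, TTF, and NPS."""
    signal = (task * ttf) ** 2 * 2.0 * np.pi * f   # radial weighting
    num = trapezoid(signal, f) ** 2
    den = trapezoid(signal * nps, f)
    return float(np.sqrt(num / den))

f = np.linspace(0.01, 5.0, 500)      # spatial frequency, cycles/mm
task = np.abs(np.sinc(10.0 * f))     # stand-in for a ~10 mm lesion task
ttf = np.exp(-f / 2.0)               # placeholder system TTF
nps = 1e-5 * (1.0 + 1.0 / f)         # placeholder NPS with a 1/f component
print(detectability(f, task, ttf, nps))
```

A better imaging chain (higher TTF, lower NPS) raises d' monotonically, which is why the same figure of merit can rank all six paradigms at matched dose.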

Sample preparation has been one of the major bottlenecks for many high-throughput analyses. The purpose of this research was to develop new sample preparation and integration approaches for DNA sequencing, PCR-based DNA analysis, and combinatorial screening of homogeneous catalysis based on multiplexed capillary electrophoresis with laser-induced fluorescence or imaging UV absorption detection. The author first introduced a method to integrate the front-end tasks into DNA capillary-array sequencers. Protocols were developed for directly sequencing plasmids from a single bacterial colony in fused-silica capillaries. After the colony was picked, lysis was accomplished in situ in the plastic sample tube using either a thermocycler or a heating block. Upon heating, the plasmids were released while chromosomal DNA and membrane proteins were denatured and precipitated to the bottom of the tube. After adding enzyme and Sanger reagents, the resulting solution was aspirated into the reaction capillaries by a syringe pump, and cycle sequencing was initiated. No deleterious effect upon the reaction efficiency, the on-line purification system, or the capillary electrophoresis separation was observed, even though the crude lysate was used as the template. Multiplexed on-line DNA sequencing data from 8 parallel channels allowed base calling up to 620 bp with an accuracy of 98%. The entire system can be automatically regenerated for repeated operation. For PCR-based DNA analysis, the author demonstrated that capillary electrophoresis with UV detection can be used for DNA analysis starting from a clinical sample without purification. After the PCR reaction using cheek cells, blood, or HIV-1 gag DNA, the reaction mixture was injected into the capillary either on-line or off-line by base stacking. The protocol was also applied to capillary array electrophoresis. The use of cheaper detection, and the elimination of DNA purification before or after the PCR reaction, make this approach an attractive alternative to current methods for genetic analysis and disease diagnosis.

An apparatus and method for continuously analyzing liquids by creating a supersonic spray which is shaped and sized prior to delivery of the spray to an analysis apparatus. The gas and liquid are sheared into small particles of a size and uniformity that form a spray which can be controlled through adjustment of pressures and gas velocity. The spray is shaped by a concentric supplemental flow of gas. 5 figs.

The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A "Geostatistics Test Problem" is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity (possibly including measurement error) or other information such as drawdowns from pumping wells, and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater flow (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater flow directly. This paper describes the results from Test Case No. 1.
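The evaluation metric above can be sketched as an empirical CDF built from an ensemble of travel-time realizations. The lognormal samples below are synthetic stand-ins for the travel times a particle-tracking code would produce for each stochastic field; they are not WIPP data.

```python
import bisect
import random

# Hedged sketch: build an empirical CDF of groundwater travel time (GWTT)
# from an ensemble of realizations. Synthetic lognormal draws stand in for
# real particle-tracking output.

random.seed(1)
travel_times = sorted(random.lognormvariate(10.0, 0.8) for _ in range(1000))

def gwtt_cdf(t):
    """Fraction of realizations with travel time <= t (empirical CDF)."""
    return bisect.bisect_right(travel_times, t) / len(travel_times)

median = travel_times[len(travel_times) // 2]
print(round(gwtt_cdf(median), 2))
```

Comparing such CDFs across inverse techniques (rather than point estimates) is what allows the Test Problem to rank methods by their full probabilistic GWTT behavior.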

change in the cutting head. Furthermore, water produced employing a reduced water treatment is sufficient. Service lives of such orifices are strongly limited to only dozens of hours due to the high-pressure water environment (typically 350 MPa) and contact with abrasives through the water hammer effect during cut-offs. Due to its

We review the basic techniques for extracting information about quasar structure and kinematics from the broad emission lines in quasars. We consider which lines can most effectively serve as virial estimators of black hole mass. At low redshift the Balmer lines, particularly broad H beta, are the lines of choice. For redshifts greater than 0.7 - 0.8 one can follow H beta into the IR windows or find an H beta surrogate. We explain why UV CIV 1549 is not a safe virial estimator and how MgII 2800 serves as the best virial surrogate for H beta up to the highest-redshift quasar known at z ~ 7. We show how spectral binning in a parameter space context (4DE1) makes possible a more effective comparison of H beta and MgII. It also helps to derive more accurate mass estimates from appropriately binned spectra and, finally, to map the dispersion in black hole mass and Eddington ratio across the quasar population. FWHM MgII is about 20% smaller than FWHM H beta in the majority of type 1 AGN, requiring correction when comp...

In this thesis, we give a new class of outer bounds on the marginal polytope, and propose a cutting-plane algorithm for efficiently optimizing over these constraints. When combined with a concave upper bound on the entropy, ...

This dissertation develops theory and methodology based on Fenchel cutting planes for solving stochastic integer programs (SIPs) with binary or general integer variables in the second-stage. The methodology is applied to ...

TRENDS IN DEMAND FOR RETAIL AND WHOLESALE CUTS OF MEAT A Thesis by DAVID WAYNE HOLLOWAY Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE... December 1990 Major Subject: Agricultural Economics TRENDS IN DEMAND FOR RETAIL AND WHOLESALE CUTS OF MEAT A Thesis by DAVID WAYNE HOLLOWAY Approved as to style and content by: Donald E. Farris (Chair of Committee) Carl E. Shafer (Member) Rudo J...

A computational model for the simulation of a laser-cutting process has been developed using a finite element method. A transient heat transfer model is considered that deals with the material-cutting process using a Gaussian continuous wave laser beam. Numerical experimentation is carried out for mesh refinements and the rate of convergence in terms of groove shape and temperature. Results are also presented for the prediction of groove depth with different moving speeds.
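A transient conduction model of this kind can be sketched with an explicit finite-difference scheme (not the paper's finite element method). All material and beam parameters below are illustrative assumptions; melting and vaporization are ignored, and the `np.roll` stencil gives periodic boundaries, acceptable only while the heated spot stays away from the edges.

```python
import numpy as np

# Explicit FTCS sketch of 2-D transient conduction under a moving Gaussian
# CW laser source. Parameters are illustrative, not from the paper.

nx, ny, dx = 60, 40, 1.0e-4        # grid and 0.1 mm spacing
alpha, k = 1.0e-5, 50.0            # diffusivity m^2/s, conductivity W/(m K)
P, r0, v = 200.0, 2.0e-4, 0.01     # beam power W, radius m, traverse speed m/s
dt = 0.2 * dx**2 / alpha           # stable FTCS step (< dx^2 / (4 alpha))

x = np.arange(nx) * dx
y = (np.arange(ny) - ny // 2) * dx
X, Y = np.meshgrid(x, y, indexing="ij")
T = np.full((nx, ny), 300.0)       # initial temperature, K

t = 0.0
for _ in range(400):
    # Gaussian surface flux treated as a volumetric source in one cell depth
    q = (2.0 * P / (np.pi * r0**2)) * np.exp(
        -2.0 * ((X - v * t) ** 2 + Y**2) / r0**2)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0)
           + np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4.0 * T) / dx**2
    T += dt * (alpha * lap + (alpha / k) * q / dx)   # 1/(rho*c) = alpha/k
    t += dt

print(T.max())   # peak temperature along the groove path, K
```

Halving `dx` (with `dt` rescaled for stability) is the direct analogue of the mesh-refinement experiments in the abstract: the predicted groove-shape isotherms converge as the grid resolves the beam radius.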

This patent describes a device for drilling a deflection hole or window from a drill hole in underground rock or geologic formations. It comprises: a deflection wedge unit mountable via a packer in the drill hole, and a pilot cutting tool mounted to the lower end of a drill string, the deflection wedge unit eventually guiding the tool and the drill string including one or more lateral cutting tools.

HIGH PRESSURE WATER JET CUTTING OF SUGAR CANE A Thesis by THOMAS DONALD VALCO Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE May 1977 Major Subject...: Agricultural Engineering HIGH PRESSURE WATER JET CUTTING OF SUGAR CANE A Thesis by THOMAS DONALD VALCO Approved as to style and content by: Dr. Charlie G. Coble (Chairman of Committee) Dr. Edward A. Haler (Head of Department) Mr. William H. Aldred...

Both interstitial water and plant tissue associated with the DC-A substrate exhibited low metal concentrations. Also in agreement with the previous study, plant performance in the DC-A substrate was found to be comparable to plant performance in the dredge spoil and topsoil substrates. This was extremely important because it indicated that the drill cuttings themselves served as an excellent substrate for wetland plant growth, but that the processing and stabilization techniques and drilling fluid formulations required further refinement.

The Technique for Human Error Analysis (ATHEANA) is a newly developed human reliability analysis (HRA) methodology that aims to facilitate better representation and integration of human performance into probabilistic risk assessment (PRA) modeling and quantification by analyzing risk-significant operating experience in the context of existing behavioral science models. The fundamental premise of ATHEANA is that error-forcing contexts (EFCs), which refer to combinations of equipment/material conditions and performance shaping factors (PSFs), set up or create the conditions under which unsafe actions (UAs) can occur. ATHEANA is being developed in the context of nuclear power plant (NPP) PRAs, and much of the language used to describe the method and provide examples of its application is specific to that industry. Because ATHEANA relies heavily on the analysis of operational events that have already occurred as a mechanism for generating creative thinking about possible EFCs, a database, called the Human Events Reference for ATHEANA (HERA), has been developed to support the methodology. Los Alamos National Laboratory's (LANL) Human Factors Group has recently joined the ATHEANA project team; LANL is responsible for further developing the database structure and for analyzing additional exemplar operational events for entry into the database. The Action Characterization Matrix (ACM) is conceived as a bridge between the HERA database structure and ATHEANA. Specifically, the ACM allows each unsafe action or human failure event to be characterized according to its representation along each of six different dimensions: system status, initiator status, unsafe action mechanism, information processing stage, equipment/material conditions, and performance shaping factors. This report describes the development of the ACM and provides details on the structure and content of its dimensions.

Aluminum 1060 and titanium alloy Ti-6Al-4V plates were lap joined by friction stir welding. A cutting pin of rotary burr made of tungsten carbide was employed. The microstructures of the joining interface were observed by scanning electron microscopy. Joint strength was evaluated by a tensile shear test. During the welding process, the surface layer of the titanium plate was cut off by the pin and intensively mixed with the aluminum situated on the titanium plate. The microstructure analysis showed that a visible swirl-like mixed region existed at the interface. In this region, Al metal, Ti metal, and a mixed layer of the two were all present. The ultimate tensile shear strength of the joint reached 100% of that of 1060 Al subjected to the thermal cycle provided by the shoulder. - Highlights: • FSW with a cutting pin was successfully employed to form an Al/Ti lap joint. • Swirl-like structures formed by mechanical mixing were found at the interface. • High-strength joints that fractured in the thermally cycled Al were produced.

but is hidden inside complicated convolutions, summed over many subprocesses; many different processes are needed. Nineteenth-century math comes to help: R.H. Mellin, a Finnish mathematician, introduced an integral transformation. Mellin moments truncated at a given order satisfy the DGLAP eqs. "only" in the sense of a power expansion; the treatment

Sensitive and selective detection techniques are of crucial importance for capillary electrophoresis (CE), microfluidic chips, and other microfluidic systems. Electrochemical detectors have attracted considerable interest for microfluidic systems with features that include high sensitivity, inherent miniaturization of both the detection and control instrumentation, low cost and power demands, and high compatibility with microfabrication technology. The commonly used electrochemical detectors can be classified into three general modes: conductimetry, potentiometry, and amperometry.

A mining auger comprises a cutting head carried at one end of a tubular shaft and a plurality of wall segments which in a first position thereof are disposed side by side around said shaft and in a second position thereof are disposed oblique to said shaft. A vane projects outwardly from each wall segment. When the wall segments are in their first position, the vanes together form a substantially continuous helical wall. A cutter is mounted on the peripheral edge of each of the vanes. When the wall segments are in their second position, the cutters on the vanes are disposed radially outward from the perimeter of the cutting head.

Simplex Partitioning via Exponential Clocks and the Multiway Cut Problem [Extended Abstract] Niv known geometric relaxation in which the graph is embedded into a high dimensional simplex. Rounding a solution to the geometric relaxation is equivalent to partitioning the simplex. We present a novel

TWIN BUILDING LATTICES DO NOT HAVE ASYMPTOTIC CUT-POINTS PIERRE-EMMANUEL CAPRACE*, FRANÇOIS DAHMANI**, AND VINCENT GUIRARDEL** Abstract. We show that twin building lattices have linear divergence, which implies of asymptotic cones of twin building lattices. Theorem 1. The asymptotic cones of a twin building lattice do

a probabilistic constraint) states that the chosen decision vector should, with .... what we present here, but the mechanism for generating cuts is significantly .... if we don't assume f is convex, or that X is a convex set, then again (7) is not efficiently ... We now describe our procedure for generating valid inequalities of the form.

to handle data that is not linearly separable; spectral clustering and kernel k-means are two of the main different methods--in particular, a general weighted kernel k-means objective is mathematically equivalent multilevel algorithm removes this restriction by using kernel k-means to optimize weighted graph cuts

One of the smartest ways for homeowners to save money on major appliance upgrades is to hook into an energy efficiency rebate program. The Neighborhood Energy Connection (NEC), a non-profit organization in St. Paul, Minnesota, helps local residents take advantage of Xcel Energy’s rebate programs that cut the cost of whole-house energy efficiency upgrades.

An abrasive cutting or drilling system, apparatus and method, which includes an upstream supercritical fluid and/or liquid carrier fluid, abrasive particles, a nozzle and a gaseous or low-density supercritical fluid exhaust abrasive stream. The nozzle includes a throat section and, optionally, a converging inlet section, a divergent discharge section, and a feed section.


Type-Inference Based Short Cut Deforestation (nearly) without Inlining -- Work in Progress -- Olaf@informatik.rwth-aachen.de Abstract Deforestation optimises a functional program by transforming it into another one that does not create certain intermediate data structures. In [Chi99] we presented a type-inference based deforestation

SAR (China) 1 sparis@csail.mit.edu -- Sylvain Paris has worked on this project during his PhD at ARTIS calibrated images mainly has been approached using local methods, either as a continuous optimization problem of a pixel. Index Terms Graph flow, graph cut, 3D reconstruction from calibrated cameras, discontinuities

Bipolar electrical coagulation of tissue using radiofrequency energy is combined with the functions of conventional surgical pressure tissue cutting instruments without significant modification thereof in a single instrument with the result that a surgeon can perform both procedures without having to redirect his attention from the area of the surgery. 4 figs.

the spray mix, fill the tank to the final volume. L-5421 10/05 How to Avoid Lumps When Treating Cut Stumps Two safe, effective, three-step ways to control many woody plants Individual Plant Treatment Series Allan McGinty, Professor and Extension Range...

This thesis guides the reader through the design of an inexpensive XY stage for abrasive water jet cutting machine starting with a set of functional requirements and ending with a product. Abrasive water jet cutting allows ...

High-power, high-radiance dye lasers developed at Lawrence Livermore National Laboratory show promise for material processing tasks. Evaluation using welding heat flow models suggests that significant increases in precision and speed can be expected. We developed tooling and instrumentation to diagnose important parameters, including spot geometry and optical train quality. We started processing studies to determine the viability of these lasers for cutting and drilling. We used titanium alloys first in the studies due to the availability of comparable parametric studies in the technical literature. Results show that cuts and holes with extremely fine features can be made with dye lasers. The high-radiance beam produces low distortion and small heat-affected zones. We have achieved very high aspect ratios and micron-scale kerfs and holes. Through continued system improvement and process optimization, we believe that submicron levels will be achieved.

The object of the invention is to provide a system and apparatus which employs laser cutting to disassemble a nuclear core fuel subassembly. The apparatus includes a gantry frame (C) which straddles the core fuel subassembly (14), an x-carriage (22) travelling longitudinally above the frame which carries a focus head assembly (D) having a vertically moving carriage (46) and a laterally moving carriage (52), a system of laser beam transferring and focusing mirrors carried by the x-carriage and focusing head assembly, and a shroud follower (F) and longitudinal follower (G) for following the shape of shroud (14) to maintain a beam focal point (44) fixed upon the shroud surface for accurate cutting.

AGR-1 was the first in a series of experiments designed to test US TRISO fuel under high-temperature gas-cooled reactor irradiation conditions. This experiment was irradiated in the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL) and is currently undergoing post-irradiation examination (PIE) at INL and Oak Ridge National Laboratory. One component of the AGR-1 PIE is the experimental evaluation of the burnup of the fuel by two separate techniques. Gamma spectrometry was used to nondestructively evaluate the burnup of all 72 of the TRISO fuel compacts that comprised the AGR-1 experiment. Two methods for evaluating burnup by gamma spectrometry were developed, one based on the Cs-137 activity and the other based on the ratio of Cs-134 and Cs-137 activities. Burnup values determined from both methods compared well with the values predicted from simulations. The highest measured burnup was 20.1% FIMA for the direct method and 20.0% FIMA for the ratio method (compared to 19.56% FIMA from simulations). An advantage of the ratio method is that the burnup of the cylindrical fuel compacts can be determined in small (2.5 mm) axial increments, so an axial burnup profile can be produced. Destructive chemical analysis by inductively coupled plasma mass spectrometry (ICP-MS) was then performed on selected compacts representative of the expected range of fuel burnups in the experiment, for comparison with the burnup values determined by gamma spectrometry. The compacts analyzed by mass spectrometry had a burnup range of 19.3% FIMA down to 10.7% FIMA. The mass spectrometry evaluation of burnup for the four compacts agreed well with the gamma spectrometry burnup evaluations and the expected burnup from simulation. For all four compacts analyzed by mass spectrometry, the maximum spread among the three experimentally determined values and the predicted value was 6% or less. The results confirm the accuracy of the nondestructive burnup evaluation from gamma spectrometry for TRISO fuel compacts across a burnup range of approximately 10 to 20% FIMA and also validate the approach used in the physics simulation of the AGR-1 experiment.
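The direct (Cs-137) method rests on a simple chain: decay-correct the measured activity to the end of irradiation, convert to Cs-137 atoms, then to fissions via the cumulative fission yield. The sketch below uses hypothetical inputs (activity, cooling time, initial heavy-metal inventory) and neglects neutron capture on Cs and in-pile decay corrections, so it illustrates the bookkeeping only, not the AGR-1 analysis.

```python
import math

# Hedged sketch of a Cs-137 "direct method" burnup estimate. All numeric
# inputs in the example call are hypothetical.

HALF_LIFE_CS137 = 30.08 * 365.25 * 86400.0   # seconds
LAMBDA = math.log(2.0) / HALF_LIFE_CS137
YIELD_CS137 = 0.062                          # approx. cumulative fission yield

def burnup_fima(activity_bq, cooling_time_s, initial_heavy_metal_atoms):
    """%FIMA from a measured Cs-137 activity (no loss terms included)."""
    a0 = activity_bq * math.exp(LAMBDA * cooling_time_s)  # decay-correct to EOI
    n_cs137 = a0 / LAMBDA                                 # atoms at EOI
    fissions = n_cs137 / YIELD_CS137
    return 100.0 * fissions / initial_heavy_metal_atoms

# Hypothetical compact: 1.5e10 Bq after 2 years of cooling, 2.3e21 HM atoms
print(round(burnup_fima(1.5e10, 2 * 365.25 * 86400.0, 2.3e21), 2))
```

The ratio (Cs-134/Cs-137) method follows the same decay-correction step but cancels geometry and efficiency factors, which is what makes the fine axial increments practical.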

A flow assigns a value xe (a non-negative number, and direction) to each edge such that: net flow at each vertex, except S and T, is zero; and |xe| <= ce. Value of flow is x0. Definition. A cut is a minimal set
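The definitions in the fragment above (capacities, conservation at every vertex except S and T, and a min cut whose capacity equals the max flow value) can be made concrete with a short Edmonds-Karp sketch; the graph and capacities are illustrative.

```python
from collections import deque

# Minimal Edmonds-Karp max-flow: BFS for a shortest augmenting path in the
# residual graph, augment by the bottleneck, repeat until no path remains.

def max_flow(cap, s, t):
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path: flow is maximal
            return total, flow
        bottleneck = float("inf")    # smallest residual capacity on the path
        v = t
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:                # push flow; conservation holds at each
            u = parent[v]            # intermediate vertex by construction
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck
            v = u
        total += bottleneck

# Example: S=0, T=3. The cut {0} has capacity 3 + 2 = 5, matching the flow.
c = [[0, 3, 2, 0],
     [0, 0, 1, 2],
     [0, 0, 0, 3],
     [0, 0, 0, 0]]
print(max_flow(c, 0, 3)[0])   # -> 5
```

The vertices reachable from S in the final residual graph give one side of a minimum cut, which is the max-flow min-cut theorem in executable form.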

This presentation by Kenneth Nichols, Barber-Nichols, Inc., is about cost-cutting in the energy conversion phase and power plant phase of geothermal energy production. Mr. Nichols discusses several ways in which improvements could be made, including: use of more efficient compressors and other equipment as they become available, anticipating reservoir resource decline and planning for it, running smaller binary systems independent of human operators, and designing plants so that they are relatively maintenance-free.

data from 1965 to 1985 were used. The aggregate system used beef, pork, and chicken, and then disaggregated beef into hamburger and table cuts, and chicken into parts/processed products for the second system. Additional variables in the estimation... shift in demand for beef and pork during the period, and broilers were found to be a strong substitute for beef in the second period. Implications are that there was a structural change in the meat sector during the period, and the decline...

A novel asymmetric-cut variable-incident-angle monochromator was constructed and tested in 1997 at the Advanced Photon Source of Argonne National Laboratory. The monochromator was originally designed as a high-heat-load monochromator capable of handling 5-10 kW beams from a wiggler source. This was accomplished by spreading the x-ray beam out on the surface of an asymmetric-cut crystal and by using liquid metal cooling of the first crystal. The monochromator turned out to be a highly versatile instrument that could perform many different types of experiments. It consisted of two 18 deg. asymmetrically cut Si crystals that could be rotated about 3 independent axes. The first stage (Φ) rotates the crystal around an axis perpendicular to the diffraction plane. This rotation changes the angle of the incident beam with the surface of the crystal without changing the Bragg angle. The second rotation (Ψ) is perpendicular to the first and is used to control the shape of the beam footprint on the crystal. The third rotation (Θ) controls the Bragg angle. Besides the high-heat-load application, the use of asymmetrically cut crystals allows one to increase or decrease the acceptance angle for crystal diffraction of a monochromatic x-ray beam, and to increase or decrease the wavelength bandwidth of the diffraction of a continuum source like a bending-magnet beam or a normal x-ray-tube source. When the monochromator is used in the doubly expanding mode, it is possible to expand the vertical size of the double-diffracted beam by a factor of 10-15. When this was combined with a bending magnet source, it was possible to generate an 8 keV area beam, 16 mm wide by 26 mm high, with uniform intensity and parallel to within 1.2 arc sec, that could be applied in imaging experiments.
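The beam expansion quoted above follows from elementary footprint geometry: a beam incident at grazing angle (θ_B − α) illuminates a footprint that re-emits at (θ_B + α), widening the beam by b = sin(θ_B + α)/sin(θ_B − α) per crystal (sign conventions for α vary). The angles below are illustrative: an effective asymmetry of 13 deg, standing in for a setting of the Φ rotation on the 18 deg cut, and a Bragg angle of 23.3 deg, chosen only so that the two-crystal product lands in the reported 10-15 range.

```python
import math

# Hedged sketch of asymmetric-cut beam expansion. Angles are illustrative,
# not the instrument's actual settings.

def expansion_factor(theta_b_deg, alpha_deg):
    """Width ratio b = sin(theta_B + alpha) / sin(theta_B - alpha)."""
    tb, a = math.radians(theta_b_deg), math.radians(alpha_deg)
    return math.sin(tb + a) / math.sin(tb - a)

b1 = expansion_factor(23.3, 13.0)   # first crystal (effective asymmetry)
b2 = expansion_factor(23.3, 13.0)   # second crystal, same setting
print(round(b1 * b2, 1))            # overall vertical expansion (doubly expanding)
```

The same formula run with α < 0 (beam incident on the steep side) contracts the beam instead, which is how the one instrument can either expand a continuum beam or condense a monochromatic one.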

The U.S. Department of Energy's Clean Cities initiative advances the nation's economic, environmental, and energy security by supporting local actions to cut petroleum use in transportation. Clean Cities accomplishes this work through the activities of nearly 100 local coalitions. These coalitions provide resources and technical assistance in the deployment of alternative and renewable fuels, idle-reduction measures, fuel economy improvements, and new transportation technologies as they emerge.

Several methods have been developed previously for estimating cumulative energy production and plutonium production from graphite-moderated reactors. The Graphite Isotope Ratio Method (GIRM) is one well-known technique. This method is based...

Cutting-plane methods have shown their unique advantage in solving IP problems. In this research, a new algorithm (MIXCUT) is developed to generate the cutting planes for mixed 0-1 knapsack problems. The class of the cutting planes is called Fenchel...

are in good agreement with the metal cutting mechanics where effects of the tool run out and vibrations are observed. A parametric study has been conducted and the results show that the magnitude of the cutting forces at a constant depth of cut increases...

are interested in algorithms for finding 2-factors that cover certain prescribed edge-cuts in bridgeless cubic graphs. We present an algorithm for finding a minimum-weight 2-factor covering all the 3-edge cuts, and an algorithm for finding a 2-factor covering all the 3- and 4-edge cuts in bridgeless cubic graphs. Both

A numerical-analytical model is developed to predict temperatures in stud-mounted polycrystalline diamond compact (PDC) drag tools during rock cutting. Experimental measurements of the convective heat transfer coefficient for PDC cutters are used in the model to predict temperatures under typical drilling conditions with fluid flow. The analysis compares favorably with measurements of frictional temperatures in controlled cutting tests on Tennessee marble. It is shown that mean cutter wearflat temperatures can be maintained below the critical value of 750 °C only under conditions of low friction at the cutter/rock interface. This is true, regardless of the level of convective cooling. In fact, a cooling limit is established above which increases in convective cooling do not further reduce cutter temperatures. The ability of liquid drilling fluids to reduce interface friction is thus shown to be far more important in preventing excessive temperatures than their ability to provide cutter cooling. Due to the relatively high interface friction developed under typical air drilling conditions, it is doubtful that temperatures can be kept subcritical at high rotary speeds in some formations when air is employed as the drilling fluid, regardless of the level of cooling achieved.

The metallic elements with a low melting point and high vapor pressure seem to transfer selectively into aerosols when dismantling reactor internals using heat cutting. Therefore, arc melting tests of neutron-irradiated zirconium alloy were conducted to investigate the radionuclide transfer behavior of aerosols generated during the heat cutting of activated metals. The arc melting test was conducted using a tungsten inert gas welding machine in an inert gas or air atmosphere. The radioactive aerosols were collected by a filter and a charcoal filter. The test sample was obtained from Zry-2 fuel cladding irradiated in a Japanese boiling water reactor for five fuel cycles. Activity analysis, chemical composition measurement, and scanning electron microscope observation of the aerosols were carried out. Some radionuclides were enriched in the aerosols generated in an inert gas atmosphere, and the radionuclide transfer ratio did not change remarkably in the presence of air. The transfer ratio of Sb-125 was almost the same as that of Co-60. It was expected that Sb-125 would be enriched relative to other elements, since Sb has a low melting point and high vapor pressure compared with the base metal (Zr). From the viewpoint of environmental impact assessment, it became clear that the influence of Sb-125 is comparable to that of Co-60. The transfer ratio of Mn-54 was one order of magnitude higher than those of the other radionuclides. The results were discussed on the basis of the thermal properties and oxide formation energies of the metallic elements. (authors)

Graphical abstract: The degradation tendency extracted by the CP technique was almost the same in both the bulk-type and TFT-type cells. - Highlights: • D_it is directly investigated from bulk-type and TFT-type CTF memory. • The charge pumping technique was employed to analyze the D_it information. • The CP technique can be applied to monitor the reliability of 3D NAND flash. - Abstract: The energy distribution and density of interface traps (D_it) are directly investigated from bulk-type and thin-film transistor (TFT)-type charge trap flash memory cells with tunnel oxide degradation under program/erase (P/E) cycling, using a charge pumping (CP) technique, in view of application in a 3-dimension stackable NAND flash memory cell. After P/E cycling in bulk-type devices, the interface trap density gradually increased from 1.55 × 10^12 cm^-2 eV^-1 to 3.66 × 10^13 cm^-2 eV^-1 due to tunnel oxide damage, which was consistent with the subthreshold swing and transconductance degradation after P/E cycling. Its distribution moved toward shallow energy levels with increasing cycle numbers, which coincided with the decay rate degradation at short retention times. The tendency extracted with the CP technique for D_it of the TFT-type cells was similar to that of the bulk-type cells.
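The extraction underlying such measurements can be sketched from the elementary charge-pumping relation I_cp = q·f·A_G·D_it·ΔE, solved for the mean trap density. The frequency, gate area, energy window, and currents below are illustrative assumptions, not the paper's device data; the currents were merely chosen so the outputs land near the densities quoted in the abstract.

```python
# Hedged sketch: mean interface-trap density from the maximum charge-pumping
# current, I_cp = q * f * A_G * D_it * dE. All inputs are illustrative.

Q_E = 1.602e-19        # elementary charge, C
FREQ = 1.0e6           # gate pulse frequency, Hz (assumed)
AREA = 1.0e-9          # gate area, cm^2 (e.g. a 1 um x 0.1 um device)
DELTA_E = 0.5          # scanned energy window, eV (assumed)

def d_it(i_cp_max):
    """Mean D_it in cm^-2 eV^-1 from the maximum CP current in amperes."""
    return i_cp_max / (Q_E * FREQ * AREA * DELTA_E)

# Hypothetical fresh vs. cycled currents
print(f"{d_it(1.2e-10):.2e}", f"{d_it(2.9e-9):.2e}")
```

Sweeping the pulse base level shifts which part of the bandgap contributes to ΔE, which is how the energy distribution of D_it (not just its mean) is mapped in such studies.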

The leading environmental problem facing coastal Louisiana regions is the loss of wetlands. Oil and gas exploration and production activities have contributed to wetland damage through erosion at numerous sites where canals have been cut through the marsh to access drilling sites. An independent oil and gas producer, working with Southeastern Louisiana University and two oil field service companies, developed a process to stabilize drill cuttings so that they could be used as a substrate to grow wetlands vegetation. The U.S. Department of Energy (DOE) funded a project under which the process would be validated through laboratory studies and field demonstrations. The laboratory studies demonstrated that treated drill cuttings support the growth of wetlands vegetation. However, neither the Army Corps of Engineers (COE) nor the U.S. Environmental Protection Agency (EPA) would grant regulatory approval for a field trial of the process. Argonne National Laboratory was asked to join the project team to try to find alternative mechanisms for gaining regulatory approval. Argonne worked with EPA's Office of Reinvention and learned that EPA's Project XL would be the only regulatory program under which the proposed field trial could be done. One of the main criteria for an acceptable Project XL proposal is to have a formal project sponsor assume the responsibility and liability for the project. Because the proposed project involved access to private land areas, the team felt that an oil and gas company with coastal Louisiana land holdings would need to serve as sponsor. Despite extensive communication with oil and gas companies and industry associations, the project team was unable to find any organization willing to serve as sponsor. In September 1999, the Project XL proposal was withdrawn and the project was canceled.

We use the linear supermultiplet formalism of supergravity to study axion couplings and chiral anomalies in the context of field-theoretical Lagrangians describing orbifold compactifications beyond the classical approximation. By matching amplitudes computed in the effective low-energy theory with the results of string loop calculations, we determine the appropriate counterterm in this effective theory that assures modular invariance to all loop orders. We use supersymmetry consistency constraints to identify the correct ultraviolet cut-offs for the effective low-energy theory. Our results have a simple interpretation in terms of two-loop unification of gauge coupling constants at the string scale.

Ever imagined that an Xbox controller could help open a window into a world spanning just one billionth of a meter? Brookhaven Lab's Ray Conley grows cutting-edge optics called multilayer Laue lenses (MLL) one atomic layer at a time to focus high-energy x-rays to within a single nanometer. To achieve this focusing feat, Ray uses a massive, custom-built atomic deposition device, an array of computers, and a trusty Xbox controller. These lenses will be deployed at the Lab's National Synchrotron Light Source II, due to begin shining super-bright light on pressing scientific puzzles in 2015.

22. Effect of the soil-stalk weight factor on the ground average and the stalk average. 23. A comparison of Avg and Avs calculated from laboratory data with both real numbers and integer numbers. 24. A graph of a portion of … factor, W. By calculating the difference between Avg and Avs, as equation (5) shows, the height of the sugarcane stubble remaining after cutting, D, was to be determined: D = Avg - Avs (5), where D is the height of the sugarcane stubble remaining after cutting …

We derive the smearing/transfer functions that relate a local bulk operator to its boundary values at a cut-off surface located at $z=z_0$ of the AdS Poincaré patch. We compare these results with their de Sitter counterparts and comment on their connections with the corresponding construction for dS/CFT. As the boundary values can help define the required field theory at $z=z_0$ and encode bulk locality in terms of it, our work can provide key information about holographic RG in the context of AdS/CFT.

… casting uses significant quantities of energy, as well as materials like oil-based lubricants and cooling … effects of equipment manufacture can then be amortized over the many years of service. This analysis …

PERMIT FOR WELDING AND CUTTING OPERATIONS. INSTRUCTIONS: This permit must be completed for all … /Area: … Description of Work to be Performed (check where appropriate): Welding / Cutting / Soldering / Burning. Type … OF ACCIDENTAL FIRES DUE TO WELDING OR CUTTING OPERATIONS: 1. Do not perform cutting or welding work where an open …

This report documents the results of Phase II of a three phase research program to develop and validate improved methods to model the cognitive behavior of nuclear power plant (NPP) personnel. In Phase II a dynamic simulation capability for modeling how people form intentions to act in NPP emergency situations was developed based on techniques from artificial intelligence. This modeling tool, Cognitive Environment Simulation or CES, simulates the cognitive processes that determine situation assessment and intention formation. It can be used to investigate analytically what situations and factors lead to intention failures, what actions follow from intention failures (e.g., errors of omission, errors of commission, common mode errors), the ability to recover from errors or additional machine failures, and the effects of changes in the NPP person-machine system. The Cognitive Reliability Assessment Technique (or CREATE) was also developed in Phase II to specify how CES can be used to enhance the measurement of the human contribution to risk in probabilistic risk assessment (PRA) studies. 34 refs., 7 figs., 1 tab.

A numerical-analytical model is developed to analyze temperatures in polycrystalline diamond compact (PDC) drag tools subject to localized frictional heating at a worn flat area and convective cooling at exposed lateral surfaces. Experimental measurements of convective heat transfer coefficients of PDC cutters in a uniform crossflow are presented and used in the model to predict temperatures under typical drilling conditions with fluid flow. The analysis compares favorably with measurements of frictional temperatures in controlled cutting tests on Tennessee marble. It is found that average temperatures at the wearflat contact zone vary directly with frictional force per unit area and are proportional to the one-half power of the cutting speed at the velocities investigated. Temperatures are found to be much more sensitive to decreases in the dynamic friction by lubrication than to increases in convective cooling rates beyond currently achievable levels with water or drilling fluids. It is shown that use of weighted drilling fluids may actually decrease cooling rates compared to those achieved with pure water. It is doubtful that tool temperatures can be kept below critical levels (750 °C) if air is employed as the drilling fluid. The degree of tool wear is found to have a major influence on the thermal response of the friction contact zone, so that for equal heating per contact area, a worn tool will run much hotter than a sharp tool. It is concluded that tool temperatures may be kept below critical levels with conventional water or mud cooling as long as the fluid provides good cutter-rock lubrication.
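The reported scaling can be written as a one-line model, T_avg = C · (F/A) · v^(1/2). A minimal sketch, in which the lumped constant C is hypothetical and would have to be fit to the cutting-test data:

```python
import math

def wearflat_temp_rise(q_friction, speed, C=1.0):
    """Average wearflat temperature rise, using the scaling reported
    above: linear in frictional force per unit area (q_friction) and
    proportional to the square root of cutting speed.  C is a
    hypothetical lumped constant (thermal properties, geometry), not
    a measured value from the study.
    """
    return C * q_friction * math.sqrt(speed)

# Doubling the frictional load doubles the temperature rise;
# quadrupling the speed only doubles it.
t0 = wearflat_temp_rise(1.0, 1.0)
```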

An aggressive, cost-cutting team of T&D employees at Arizona Public Service Co. (APS) is building a new distribution substation in Phoenix for less than half the original cost that APS planners had calculated for the project's land, labor and materials. Scheduled for service in June of this year, the 20-MVA Bell substation had originally been projected by APS analysts to cost nearly $1.7 million in land, labor and materials, not including major equipment such as transformers, circuit breakers, and switches. However, after studying the project, an empowered APS crew was able to slash 36% off the original estimate, more than $610,000. What's more, APS spokesmen say that its new approach to substation construction and design has given its engineers and construction crews a laundry list of additional ideas to try out on future substation ventures. 4 figs., 1 tab.

A technique for information retrieval includes parsing a corpus to identify a number of wordform instances within each document of the corpus. A weighted morpheme-by-document matrix is generated based at least in part on the number of wordform instances within each document of the corpus and based at least in part on a weighting function. The weighted morpheme-by-document matrix separately enumerates instances of stems and affixes. Additionally or alternatively, a term-by-term alignment matrix may be generated based at least in part on the number of wordform instances within each document of the corpus. At least one lower rank approximation matrix is generated by factorizing the weighted morpheme-by-document matrix and/or the term-by-term alignment matrix.
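A minimal sketch of the weighted morpheme-by-document matrix and its lower-rank approximation, assuming a toy morpheme inventory and a simple log weighting (the actual weighting function and factorization are parameters of the method, not fixed by it):

```python
import numpy as np

# Toy corpus: each document is a list of morphemes, with stems and
# affixes enumerated separately as described above.
docs = [
    ["cut", "-ing", "tool", "cut", "-er"],
    ["tool", "wear", "cut", "-ing"],
    ["wear", "test", "test", "-ed"],
]

vocab = sorted({m for d in docs for m in d})
index = {m: i for i, m in enumerate(vocab)}

# Raw morpheme-by-document count matrix.
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for m in d:
        A[index[m], j] += 1

# log(1 + count) is one common weighting choice (an assumption here).
W = np.log1p(A)

# Lower-rank approximation by truncated SVD.
k = 2
U, s, Vt = np.linalg.svd(W, full_matrices=False)
W_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Frobenius-norm error of the rank-k reconstruction.
err = np.linalg.norm(W - W_k)
```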

A variety of analytical techniques is available for evaluating uranium in excreta and tissues at levels appropriate for occupational exposure control and evaluation. A few (fluorometry, kinetic phosphorescence analysis, alpha-particle spectrometry, neutron irradiation techniques, and inductively-coupled plasma mass spectrometry) have also been demonstrated as capable of determining uranium in these materials at levels comparable to those which occur naturally. Sample preparation requirements and isotopic sensitivities vary widely among these techniques and should be considered carefully when choosing a method. This report discusses analytical techniques used for evaluating uranium in biological matrices (primarily urine) and limits of detection reported in the literature. No cost comparison is attempted, although references are cited which address cost. Techniques discussed include: alpha-particle spectrometry, liquid scintillation spectrometry, fluorometry, phosphorometry, neutron activation analysis, fission-track counting, UV-visible absorption spectrophotometry, resonance ionization mass spectrometry, and inductively-coupled plasma mass spectrometry. A summary table of reported limits of detection and of the more important experimental conditions associated with these reported limits is also provided.

We have examined cutoffs and pile-ups due to various processes in the spectra of particles produced by shock acceleration, and found that, even in the absence of energy losses, the shape of the spectrum of accelerated particles at energies well below the nominal maximum energy depends strongly on the energy dependence of the diffusion coefficient. This has implications in many areas, for example, in fitting the observed cosmic ray spectrum with models based on power-law source spectra and rigidity dependent diffusive escape from the galaxy. With continuous energy losses, prominent pile-ups may arise, and these should be included when modelling synchrotron X-ray and inverse Compton gamma-ray spectra from a shock-accelerated electron population. We have developed a Monte Carlo/numerical technique to model the shape of the spectrum for the case of non-continuous energy losses such as inverse Compton scattering in the Klein-Nishina regime. We find that the shapes of the resulting cut-offs differ substantially from those arising from continuous processes, and we suggest that such differences could be observable through their effect on the spectrum of radiation emitted by a population of recently accelerated electrons as, for example, may exist in young supernova remnants.
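As an illustration of the first point, the cut-off shape can be sketched with the commonly used parametrization N(E) ∝ E^(-p) exp[-(E/E_max)^a], where the index a stands in for the energy dependence of the diffusion coefficient. This parametric form is an assumption made for illustration, not the paper's Monte Carlo model:

```python
import numpy as np

def spectrum(E, p=2.0, E_max=1.0e3, a=1.0):
    """Shock-accelerated particle spectrum with a parametric cut-off.

    The exponent a controls the sharpness of the cut-off: a diffusion
    coefficient rising faster with energy gives a steeper cut-off
    (larger a).  Illustrative parametrization only.
    """
    return E ** (-p) * np.exp(-((E / E_max) ** a))

E = np.logspace(0, 4, 200)
soft = spectrum(E, a=0.5)   # weakly energy-dependent diffusion
sharp = spectrum(E, a=2.0)  # strongly energy-dependent diffusion

# Well below E_max both curves follow the same power law; near and
# above E_max the sharper cut-off suppresses the flux far more.
```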

As the mass-energy is universally self-gravitating, the gravitational binding energy must be subtracted self-consistently from its bare mass value so as to give the physical gravitational mass. Such a self-consistent gravitational self-energy correction can be made non-perturbatively by the use of a gravitational 'charging' technique, where we calculate the incremental change $dm$ of the physical mass of the cosmological object of size $r_o$ due to the accretion of a bare mass $dM$, corresponding to the gravitational coupling-in of the successive zero-point vacuum modes, i.e., of the Casimir energy, whose bare value $\Sigma_{\bf k} \hbar ck$ is infinite. Integrating the 'charging' equation, $dm = dM - (3\alpha/5)Gm\,dM/(r_o c^2)$, we get a gravitational mass for the cosmological object that remains finite even in the limit of infinite zero-point vacuum energy, i.e., without any ultraviolet cut-off imposed. Here $\alpha$ is a geometrical factor of order unity. Also, setting $r_o = c/H$, the Hubble length, we get the corresponding cosmological density parameter $\Omega \simeq 1$, without any adjustable parameter. The cosmological significance of this finite and unique contribution of the otherwise infinite zero-point vacuum energy to the density parameter can hardly be overstated.
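Written as a differential equation in the accreted bare mass, with the constant abbreviated as $k$, the 'charging' equation integrates in closed form:

```latex
\frac{dm}{dM} = 1 - k\,m, \qquad k \equiv \frac{3\alpha G}{5\,r_o c^{2}}, \qquad m(0) = 0
\quad\Longrightarrow\quad
m(M) = \frac{1}{k}\left(1 - e^{-kM}\right).
```

As $M \to \infty$, $m \to 1/k = 5\,r_o c^{2}/(3\alpha G)$: the physical mass saturates at a finite value however large the bare Casimir energy, which is why no ultraviolet cut-off is needed.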

We study classical and quantum dynamics of a particle in a circular billiard with a straight cut. This system can be integrable, nonintegrable with soft chaos, or nonintegrable with hard chaos, as we vary the size of the cut. We use a quantum web to show differences in the quantum manifestations of classical chaos for these three different regimes.

NREL Recommends Ways to Cut Building Energy Costs in Half. Building designers and operators could cut energy use by 50% in large office buildings, hospitals, schools, and a variety of stores … of Energy (DOE), under the direction of DOE's Building Technologies Program. The reports describe …

Cut-Based Abduction. Marcello D'Agostino, Marcelo Finger, Dov Gabbay. Abstract: In this paper we explore a generalization of traditional abduction which can simultaneously perform two different tasks: (i) … given formula can be seen as a cut formula with respect to Gentzen's sequent calculus, so the abduction method …

… the Solar Panel center. · Tape or glue one set of 4 (folded out) tabs of ring to bottom of Dish, centering ring on the bottom side of Dish. The other set will go into the Solar Panel center. C: SOLAR PANEL PART · Cut out Solar Panel Part including the 4 rectangles at base of panels. · Cut the 4 white slits …

Improving the Modeling of Hydrogen Solubility in Heavy Oil Cuts Using an Augmented Grayson-Streed (AGS) … for calculating hydrogen solubility in petroleum fluids. However, its accuracy degrades severely for very heavy …

Europe, Cutting Biofuel Subsidies, Redirects Aid to Stress Greenest Options - New York Times, January 22, 2008. By ELISABETH … for biofuels, acknowledging that the environmental benefits of these fuels have often been overstated.

… robotics and human factors to bioengineering and sustainability, our researchers are on the cutting edge. We are actively engaged in cutting-edge research spanning many diverse specialties. As part … and Design » Biomedical Engineering » Energy and Environmental Engineering » Human Factors …

Published: 3 November 2014. Don't Cut Aid to Egypt: The Hopeful Case for Supporting Egyptian President Sisi. Op-Ed by Ahmed H. Zewail. Today, the U.S. needs Egypt's partnership more than ever. … Some support him. And I believe that cutting foreign aid to Egypt at this point would harm the U.S.-Egypt …

On the editing of images: selecting, cutting and filling-in. Frédéric Labrosse, Computer Science Department, University of Wales. … of this work is post-production special effects in cinema or image manipulation, where one often wants to remove …

Does the Budget Surplus Justify Large-Scale Tax Cuts?: Updates and Extensions. Alan J. Auerbach. … agreed should not be used for tax cuts. All of the remaining "on-budget" surplus was due to implausible … of the on-budget surplus was due to accumulations in government trust funds for Medicare and pensions, which …

For just the second time, crews have cut a hole in the top of an active radioactive waste storage tank at Hanford. Workers began cutting a 55-inch hole in the top of Tank C-105 last Tuesday night on graveyard shift, completing the cut early Wednesday. The hole will allow for installation of the Mobile Arm Retrieval System (MARS) Vacuum into the tank. The cut was made through 17 inches of concrete and rebar using the newly developed rotary-core cutting system, which uses a laser-guided steel canister with teeth on the bottom to drill a round hole into the tank dome. The project was completed safely and successfully in a high-rad area without contamination or significant dose to workers.

Fluvial sandstones constitute one of the major clastic petroleum reservoir types in many sedimentary basins around the world. This study is based on the analysis of high-resolution, shallow (seabed to 500 m depth) 3D seismic data which generated three-dimensional (3D) time slices that provide exceptional imaging of the geometry, dimension and temporal and spatial distribution of fluvial channels. The study area is in the northeast of the Malay Basin, about 280 km to the east of Terengganu offshore. The Malay Basin comprises a thick (> 8 km), rift to post-rift Oligo-Miocene to Pliocene basin-fill. The youngest (Miocene to Pliocene), post-rift succession is dominated by a thick (1-5 km), cyclic succession of coastal plain and coastal deposits, which accumulated in a humid-tropical climatic setting. This study focuses on the Pleistocene to Recent (500 m thick) succession, which comprises a range of seismic facies, analysed on two-dimensional (2D) seismic sections and mainly reflecting changes in fluvial channel style and river architecture. The succession has been divided into four seismic units (Units S1-S4), bounded by basin-wide stratal surfaces. Two types of boundaries have been identified: 1) a boundary that is defined by a regionally-extensive erosion surface at the base of a prominent incised valley (S3 and S4); 2) a sequence boundary that is defined by more weakly-incised, straight and low-sinuosity channels, interpreted as low-stand alluvial bypass channel systems (S1 and S2). Each unit displays a predictable vertical change of channel pattern and scale, with wide low-sinuosity channels at the base passing gradationally upwards into narrow high-sinuosity channels at the top. The wide variation in channel style and size is interpreted to be controlled mainly by sea-level fluctuations on the broad, flat Sundaland Platform.

… sequence stratigraphy and geostatistical analysis, I developed a geologic model that may improve the ultimate recovery of oil from this field. In this study, I assessed sequence stratigraphic concepts for continental settings and extended the techniques …

Based on multi-dimensional neutrino-radiation hydrodynamic simulations, we report several cutting-edge issues about the long-veiled explosion mechanism of core-collapse supernovae (CCSNe). In this contribution, we pay particular attention to whether three-dimensional (3D) hydrodynamics and/or general relativity (GR) would or would not help the onset of explosions. By performing 3D simulations with spectral neutrino transport, we show that it is more difficult to obtain an explosion in 3D than in 2D. In addition, our results from the first generation of full general relativistic 3D simulations including approximate neutrino transport indicate that GR can foster the onset of neutrino-driven explosions. Based on our recent parametric studies using a light-bulb scheme, we discuss impacts of nuclear energy deposition behind the supernova shock and stellar rotation on the neutrino-driven mechanism, both of which have yet to be included in the self-consistent 3D supernova models. Finally we give an outlook with a summary of the most urgent tasks to extract the information about the explosion mechanisms from multi-messenger CCSN observables.

Optical on-line techniques enable non-intrusive physical measurements in harsh environments (high temperature, high pressure, radioactivity, ...). Optical absorption spectrometries such as UV-Visible, FTIR, and CRDS have been successfully used to study gas-phase speciation in different nuclear applications. LIBS, which relies on laser-matter interactions, is an on-line optical technique for elemental analysis of solids and liquids. (authors)

A data analysis was performed to determine the relationship between the Wilsonville Solvent Quality test result and SRC liquefaction process parameters. The data base studied covers the years 1979 to 1982, Wilsonville runs 133 to 234. Only process-defined material balance data sets were included to best represent steady-state operation. Each material balance period provided 48 variables from which common process conditions were selected by imposing a range of acceptable deviations from a norm, e.g., a reactor hydrogen pressure of 2000 ± 100 psi. Data for all variables vs. solvent quality were plotted, and in some cases variables were compared with each other to determine common trends, e.g., gas production vs. hydrogen consumption. The plotted data produced no discernible trends. Separating the data by coal type (mine location) and identifying common process conditions with coal types still provided no absolute correlations with solvent quality. However, the effect of the weight percent pyrite present in the feed coal produced a consistent trend. A coal containing more than 1.2% pyrite and less than 0.1% sulfate sulfur yielded results in which any one correlation would cluster about a central point. It was observed that, on average, Kentucky Fies and Pyro mine coal and Indiana V coal clustered together, while Kentucky Lafayette and Dotiki mine coals clustered together. These data point clusters for the variables tested were nearly independent of reactor pressure, space rate, and temperature. One unusual observation of all the data points, independent of process conditions, was that at each change of feed coal, the sum of hydrocarbon and heteroatom gas production was greatest for the first 30 days, after which gas production reached a steady state dependent on process conditions, primarily temperature.

The performance of a Chemical Vapor Deposition (CVD) carbide insert with ISO designation CCMT 12 04 04 LF when turning titanium alloys was investigated. There were four layers of coating materials for this insert, i.e. TiN-Al2O3-TiCN-TiN. The insert performance was evaluated based on the insert's edge resistance towards the machining parameters used at a high cutting speed range when machining Ti-6Al-4V ELI. A detailed study of the wear mechanism at the cutting edge of CVD carbide tools was carried out at cutting speeds of 55-95 m/min, feed rates of 0.15-0.35 mm/rev and depths of cut of 0.10-0.20 mm. Wear mechanisms such as abrasive and adhesive wear were observed on the flank face. Crater wear due to diffusion was also observed on the rake face. The abrasive wear occurred more at the nose radius, and fracture of the tool was found at the feed rate of 0.35 mm/rev and the depth of cut of 0.20 mm. The adhesion wear takes place after removal or delamination of the coating. Therefore, adhesion or welding of titanium alloy onto the flank and rake faces demonstrates a strong bond at the workpiece-tool interface.

The amount of radioactive waste from decommissioning of a nuclear power plant varies greatly depending on factors such as type and size of the plant, operation history, decommissioning options, and waste treatment and volume reduction methods. There are many methods to decrease the amount of decommissioning radioactive waste, including minimization of waste generation and waste reclassification through decontamination and cutting methods to remove the contaminated areas. According to the OECD/NEA, the radioactive waste treatment and disposal cost accounts for about 40 percent of the total decommissioning cost. In Korea, there is a need to reduce the amount of decommissioning radioactive waste due to the high disposal cost, about $7,000 (as of 2010) per 200-liter drum for low- and intermediate-level radioactive waste (LILW). In this paper, cutting methods to minimize the radioactive waste of activated concrete were investigated and the associated decommissioning cost impact was assessed. The cutting methods considered are cylindrical and volume-reductive cutting. The study showed that volume-reductive cutting is more cost-effective than cylindrical cutting. Therefore, the volume-reductive cutting method can be effectively applied to the activated bio-shield concrete. (authors)

The motivating factor behind recent research and development efforts in metal cutting has been the growing need for companies everywhere to embrace emerging technologies if they are to compete in the global economy. To quickly implement these productivity improvements and gain lower bottom-line costs for welding and cutting operations, rapid commercialization of these process advancements is needed. Although initially more expensive, additive-enhanced fuel gases may be the most cost-effective choice for certain cutting applications. The cost of additive-enhanced fuel gases can be justified where oxygen pricing is low (such as with bulk oxygen). Propylene exhibited cutting speeds equal to acetylene and improved cutting economy under specific conditions, which involved longer cuts on thicker base materials. With a longer cut distance, the extra time required to reach the kindling temperature (when compared to acetylene) becomes less critical. It is important to note that kindling temperature was reached more rapidly with propylene than with propane, but both fuel gases were slower than acetylene. When factors such as these are considered, many applications are found to be more cost-effectively performed with the more expensive acetylene or propylene fuel gases. Each individual application must be studied on a singular basis to determine the most cost-effective choice when selecting the fuel gas.

A stack of battery laminates is prepared, each battery consisting of anode, polymer electrolyte, and cathode films, and possibly an insulating film, under conditions suitable to constitute a rigid monoblock assembly in which the films are unitary with one another. The assembly obtained is thereafter cut to a predetermined shape using a mechanical device, without macroscopic deformation of the films constituting the assembly and without inducing permanent short circuits. The battery obtained after cutting includes at least one end which appears as a uniform cut, the various films constituting the assembly having undergone no macroscopic deformation, the edges of the films of the anode including an electronically insulating passivation film.

To meet the stringent demands on today's manufacturing, cutting tool systems had to be developed with innovative design features built in. In nearly all applications involving non-ferrous metals, these cutting tools are diamond-tipped. They dramatically outperform any other "conventional" or substitute cutting material. They can be run up to 50% faster than carbide grades and simultaneously extend tool life up to 10-fold, while improved workpiece quality, increased productivity and reduced production costs are just a bonus.

This compendium lists the repositories holding geothermal cores and well cuttings from US government-sponsored geothermal wells. Also, a partial listing of cores and cuttings from these wells is tabulated, along with referenced reports and location maps. These samples are available to the public for research investigations and studies, usually following submission of an appropriate request for use of the samples. The purpose of this compilation is to serve as a possible source of cores and cuttings that might aid in enhancing rock property studies in support of geothermal log interpretation.

… of chrysanthemum (66), stimulated rooting in Protea neriifolia (17) and black walnut (16), but had no effect on dormant stem cuttings of aspen (67). When used with either IAA or IBA, the percentages of cuttings rooted as well as the number of roots per cutting … in which they were applied. High auxin plus low cytokinin induced root formation while high cytokinin plus low auxin induced shoot formation. The inhibitory effect of high cytokinin concentrations on auxin-induced root formation appears to be a general …

1. Modeling of Laser Cutting and Related Processes. A considerable proportion of laser processing … Modeling laser cutting and its features: Recent modeling work has concentrated on the implementation and numerical evaluation of a transient three-dimensional computer simulation of the CO2 laser cutting process.

Supplementary material: Setting the weight cut-off in the labelling of gene families with GOslim … cut-off values in the abscis… The plots show how many gene labels were appointed to the family … S1 shows that if the cut-off value increases, the amount of gene families with many GOslim labels …

Deep multipass cutting of bidirectional and unidirectional carbon fiber reinforced plastics (CFRP) with picosecond laser pulses was investigated in different static atmospheres as well as with the assistance of an oxygen or nitrogen gas flow. The ablation rate was determined as a function of the kerf depth and the resulting heat affected zone was measured. An assisting oxygen gas flow is found to significantly increase the cutting productivity, but only in deep kerfs where the diminished evaporative ablation due to the reduced laser fluence reaching the bottom of the kerf does not dominate the contribution of reactive etching anymore. Oxygen-supported cutting was shown to also solve the problem that occurs when cutting the CFRP parallel to the fiber orientation where a strong deformation and widening of the kerf, which temporarily slows down the process speed, is revealed to be typical for processing in standard air atmospheres.

The massless Nelson model describes non-relativistic, spinless quantum particles interacting with a relativistic, massless, scalar quantum field. The interaction is linear in the field. We analyze the one-particle sector. First, we construct the renormalized mass shell of the non-relativistic particle for an arbitrarily small infrared cut-off that turns off the interaction with the low-energy modes of the field. No ultraviolet cut-off is imposed. Second, we implement a suitable Bogolyubov transformation of the Hamiltonian in the infrared regime. This transformation depends on the total momentum of the system and is non-unitary as the infrared cut-off is removed. For the transformed Hamiltonian we construct the mass shell in the limit where both the ultraviolet and the infrared cut-off are removed. Our approach is constructive and leads to explicit expansion formulae amenable to rigorous control of the S-matrix elements.

Uneven-aged silvicultural practices can be used to regenerate and manage many eastern hardwood stands. Single-tree selection methods are feasible in stands where a desirable shade-tolerant commercial species can be regenerated following periodic harvests. A variety of partial cutting practices, including single-tree selection and diameter-limit cutting, have been used for 30 years or more to manage central Appalachian hardwoods on the Fernow Experimental Forest near Parsons, West Virginia. Results from these research areas are presented to help forest managers evaluate financial aspects of partial cutting practices. Observed volume growth, product yields, changes in species composition, and changes in residual stand quality are used to evaluate potential financial returns. Also, practical economic considerations for applying partial cutting methods are discussed.

Analysis of electrical consumption can pay off in reduced energy costs. Continuous monitoring of electrical usage coupled with improvements and optimization in system operations can have a favorable impact on annual operating expenditures. Further...

A COMPARISON OF VACUUM PACKAGING SYSTEMS FOR THE DISTRIBUTION AND STORAGE OF PRIMAL BEEF CUTS. A Thesis by RODNEY AMES BOWLING. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement of the degree … (Chairman of Committee) (Co-Chairman of Committee) (Member) (Member) (Head of Department). December 1974. ABSTRACT: A Comparison of Vacuum Packaging Systems for the Distribution and Storage of Primal Beef Cuts (December 1974), Rodney Ames Bowling, B.S., Sul Ross State …

CARDIORESPIRATORY RESPONSE AND BLOOD LACTATE IN CUTTING HORSES SUBJECTED TO TWO EXERCISE REGIMENS. A Thesis by MARY ELIZABETH CAMPBELL. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE. May 1986. Major Subject: Animal Science. Approved as to style and content by: …

We introduce a cutting-edge life-testing technique, accelerated degradation testing (ADT), for PV reliability testing. The ADT technique is a cost-effective and flexible reliability testing method with multiple (MADT) and step-stress (SSADT) variants. In an environment with limited resources, including equipment (chambers), test units, and testing time, these techniques can provide statistically rigorous prediction of lifetime and other parameters of interest, such as failure rate, warranty time, mean time to failure, degradation rate, activation energy, acceleration factor, and upper limit level of stress. J-V characterization can be used for degradation data, and the generalized Eyring model can be used for the thermal-humidity stress condition. The SSADT model can be constructed based on the cumulative exposure model (CEM), which assumes that the remaining test units fail according to the cumulative distribution function of the current stress level regardless of their history at previous stress levels.
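A minimal sketch of the cumulative exposure idea behind the SSADT model, assuming exponential lifetimes and made-up failure rates (the generalized Eyring link between stress and rate is omitted):

```python
import numpy as np

# Hypothetical failure rates at stress levels 1 and 2, and the time
# at which the stress is stepped up.  All numbers are illustrative.
lam1, lam2 = 0.01, 0.05
t1 = 50.0

def cdf(t):
    """Population failure CDF under the cumulative exposure model:
    exposure accumulated at level 1 is carried over as an equivalent
    starting time at level 2, so the CDF is continuous at the step.
    """
    if t <= t1:
        return 1 - np.exp(-lam1 * t)
    # Equivalent time s at level 2 with the same accumulated exposure:
    # F1(t1) = F2(s)  =>  s = lam1 * t1 / lam2  (exponential case).
    s = lam1 * t1 / lam2
    return 1 - np.exp(-lam2 * (s + (t - t1)))
```

The key property is that the failure fraction is continuous across the stress step, while the hazard rate jumps to the higher-stress value.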

A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
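As a hypothetical illustration of the "summation of compiled process step data" step, the sketch below aggregates per-step records into overall process metrics. The field names (cycle_time, value_added, wip) and the process-cycle-efficiency metric are assumptions for illustration, not details taken from the method itself.

```python
# Invented per-step data for a three-step process.
steps = [
    {"name": "stamp", "cycle_time": 12.0, "value_added": 8.0, "wip": 40},
    {"name": "weld",  "cycle_time": 30.0, "value_added": 25.0, "wip": 15},
    {"name": "paint", "cycle_time": 45.0, "value_added": 20.0, "wip": 60},
]

def process_metrics(steps):
    """Sum compiled step data into process-level metrics."""
    total_ct = sum(s["cycle_time"] for s in steps)
    total_va = sum(s["value_added"] for s in steps)
    return {
        "total_cycle_time": total_ct,
        "value_added_ratio": total_va / total_ct,  # lean "process cycle efficiency"
        "total_wip": sum(s["wip"] for s in steps),
    }

metrics = process_metrics(steps)
```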

for generational garbage collection systems, which are standard in most high-performance Java virtual machines, in a common framework. A study of garbage collection traces from four standard Java benchmark programs shows...

Clustering is a common technique for statistical data analysis. Clustering is the process of grouping data into classes or clusters so that objects within a cluster have high similarity to one another but are very dissimilar to objects in other clusters. Dissimilarities are assessed based on the attribute values describing the objects; often, distance measures are used. Clustering is an unsupervised learning technique in which interesting patterns and structures can be found directly from very large data sets with little or no background knowledge. This paper also considers the partitioning of m-dimensional lattice graphs using Fiedler's approach, which requires determining the eigenvector belonging to the second-smallest eigenvalue of the Laplacian, combined with the k-means partitioning algorithm.
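Fiedler's approach described above can be sketched in a few lines; this example bisects a small path graph rather than an m-dimensional lattice, purely for brevity.

```python
import numpy as np

# Spectral bisection: the sign pattern of the eigenvector belonging to the
# second-smallest eigenvalue of the Laplacian splits the graph in two.
def fiedler_partition(adj):
    lap = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian L = D - A
    _, vecs = np.linalg.eigh(lap)          # eigh returns ascending eigenvalues
    fiedler = vecs[:, 1]                   # Fiedler vector
    return fiedler >= 0                    # boolean cluster labels

adj = np.zeros((6, 6))
for i in range(5):                         # path graph 0-1-2-3-4-5
    adj[i, i + 1] = adj[i + 1, i] = 1.0
labels = fiedler_partition(adj)
```

For the path graph the Fiedler vector is monotone, so the split recovers the two halves of the path (up to the arbitrary sign of the eigenvector).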

Welcome to a workshop on contamination control techniques. This workshop is designed to run about two hours, and attendee participation is encouraged throughout. We will address different topics within contamination control techniques; present processes, products, and equipment used here at Hanford; and then open the floor to you, the attendees, for your input on the topics.

Multielement geochemical analysis of drill cuttings from 26 shallow temperature-gradient drill holes and of surface rock samples reveals trace element distributions developed within these rocks as a consequence of chemical interaction with thermal fluid within the Beowawe geothermal area. The presently discharging thermal fluids are dilute in all components except silica, suggesting that the residence time of these fluids within the thermal reservoir has been short and that chemical interaction with the reservoir rock has been minimal. Interaction between these dilute fluids and rocks within the system has resulted in the development of weak chemical signatures. The absence of stronger signatures in rocks associated with the present system suggests that fluids have had a similar dilute chemistry for some time. The spatial distribution of elements commonly associated with geothermal systems, such as As, Hg, and Li, is neither laterally nor vertically continuous. This suggests that there is not now, nor has there been in the past, pervasive movement of thermal fluid throughout the sampled rock but, instead, that isolated chemical anomalies represent distinct fluid-flow channels. Discontinuous As, Li, and Hg concentrations near White Canyon to the east of the presently active surface features record the effects of chemical interaction of rocks with fluids chemically unlike the presently discharging fluids. The observed trace element distributions suggest that historically the Beowawe area has been the center of more than one hydrothermal event and that the near-surface portion of the present hot-water geothermal system is controlled by a single source fracture, the Malpais Fault, or an intersection of faults at the sinter terrace.

During deactivation and decommissioning activities, thermal cutting tools, such as plasma torches, lasers, and gasoline torches, are used to cut metals. These activities generate fumes, smoke, and particulates. These airborne species of matter, called aerosols, may be inhaled if suitable respiratory protection is not used. Inhalation of airborne metallic aerosols has been reported to cause ill health effects, such as acute respiratory syndrome and chromosome damage in lymphocytes. In the nuclear industry, metals may be contaminated with radioactive materials. Cutting these metals, as in the size reduction of gloveboxes and tanks, produces high concentrations of airborne transuranic particles. Particles in the respirable size range (size < 10 µm) deposit in various compartments of the respiratory tract, with the fraction deposited and the deposition site depending on particle size. The dose delivered to the respiratory tract depends on the size distribution of the airborne particulates (aerosols), their concentration, and their radioactivity/toxicity. The concentration of airborne particulate matter in an environment depends on the rate of particle production and the ventilation rate. Thus, measuring aerosol size distribution and generation rate is important for (1) the assessment of inhalation exposures of workers, (2) the selection of respiratory protection equipment, and (3) the design of appropriate filtration systems. The size distribution of the aerosols generated during plasma torch cutting of different metals was measured. Cutting rates of different metals, the rate of generation of respirable mass, and the fraction of the released kerf that becomes respirable were determined. This report presents the results of these studies.
Measurements of the particles generated during cutting of metal plates with a plasma arc torch revealed particles with mass median aerodynamic diameters close to 0.2 µm, arising from condensation of vaporized material and subsequent rapid formation of aggregates. Particles of larger size, resulting from ejection of melted material or fragments from the cutting zone, were also observed. This study presents data on the metal cutting rate, particle size distribution, and particle generation rate for different cutting tools and metals. The study shows that respirable particles constitute only a small fraction of the released kerf.
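To illustrate how a measured size distribution translates into a respirable mass fraction for the fine condensation mode, here is a minimal sketch assuming a lognormal distribution; the MMAD of 0.2 µm comes from the measurements above, while the geometric standard deviation of 2.0 is an assumed value. (The small respirable fraction of the total kerf reflects the coarse ejecta mode, which this sketch does not model.)

```python
import math

# Mass fraction of a lognormal aerosol mode below a given size cut.
def mass_fraction_below(d_cut_um, mmad_um, gsd):
    z = math.log(d_cut_um / mmad_um) / math.log(gsd)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # lognormal CDF

# Fraction of the 0.2-um condensation mode below the 10-um respirable cut.
respirable = mass_fraction_below(10.0, 0.2, 2.0)
```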

To simplify LCLS operation and further enhance injector performance, we are evaluating various parameters, including the photocathode drive laser system. Extensive simulations show that both the projected and time-sliced emittances with spatial Gaussian profiles having a reasonable tail-cut are better than those with a uniform profile. The simulated results are also supported by theoretical analyses. In the LCLS, the spatially uniform or Gaussian-cut laser profiles are conveniently obtained by adjusting the optics of the telescope upstream of an iris, which is used to define the laser size on the cathode. Preliminary beam studies at the LCLS injector show that both the projected and time-sliced emittances with the spatial Gaussian-cut laser are almost as good as, although not better than, those with the uniform one. In addition, the laser transmission through the iris with the Gaussian-cut profile is twice that with the uniform one, which can significantly ease LCLS copper cathode/laser operations and thus improve LCLS operating efficiency. More beam studies are planned to measure FEL performance with the Gaussian-cut profile in comparison with the uniform one. All simulations and measurements are presented in the paper.

science, and fluid flow. The particular problem motivating this work is heat conduction in nuclear fuel... to the well-known Ghost Fluid Method in simple cases. We test the accuracy of the estimates in a series...

This study builds upon earlier research conducted by Southeastern Louisiana University concerning the efficacy of utilizing processed drill cuttings as an alternative substrate source for wetland rehabilitation (wetland creation and restoration). Previous research has indicated that processed drill cuttings exhibit a low degree of contaminant migration from the processed drill cuttings to interstitial water and low toxicity, as measured by seven-day mysid shrimp chronic toxicity trials.

The conditions of drilling and cutting of 0.15-mm-thick titanium and stainless steel plates in water with the radiation of a repetitively pulsed Nd:YAG laser with a mean power of up to 30 W are studied experimentally in the absence of water and gas jets. Dependences of the maximal cutting speed in water on the radiation power are obtained, the cutting efficiency is determined, and a comparison with the conditions of drilling and cutting of plates in air is carried out.

Comparison of Systems for the Distribution of Lamb Carcasses and Wholesale Cuts. A Thesis by Joseph Daryl Tatum, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1976. Major Subject: Animal Science (Meats).

This report overviews crosscutting regulatory topics for nuclear fuel cycle facilities for use in the Fuel Cycle Research & Development Nuclear Fuel Cycle Evaluation and Screening study. In particular, the regulatory infrastructure and analysis capability is assessed for the following topical areas: fire regulations (i.e., how applicable current Nuclear Regulatory Commission (NRC) and/or International Atomic Energy Agency (IAEA) fire regulations are to advanced fuel cycle facilities) and consequence assessment (i.e., how applicable current radionuclide transport tools are to supporting risk-informed regulations and Level 2 and/or 3 PRA). While not addressed in detail, the following regulatory topic is also discussed: integrated security, safeguards, and safety requirements (i.e., how applicable current NRC regulations are to future fuel cycle facilities, which will likely be required to balance the sometimes conflicting material accountability, security, and safety requirements).

Blind Background Prediction Using a Bifurcated Analysis Scheme. J. Nix, J. Ma, G. N. Perdue, Y. Zheng. In this paper we describe a bifurcation analysis procedure for data-driven background prediction in a blind analysis. Our set of cuts defines a multidimensional signal region which we wish to keep blind. If two cuts show...
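Bifurcated schemes of this kind are commonly reduced to the "ABCD" estimate; a minimal sketch, assuming the two cuts are uncorrelated for background and using invented counts:

```python
# Standard bifurcation ("ABCD") background estimate.
def abcd_background(n_b, n_c, n_d):
    """Region A passes both cuts (kept blind); B and C each fail exactly one
    cut; D fails both. Independence gives N_A ~= N_B * N_C / N_D."""
    return n_b * n_c / n_d

predicted_a = abcd_background(n_b=120.0, n_c=80.0, n_d=400.0)
```

The blind signal region is opened only after the sideband prediction is fixed.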

such as distillation path optimization, reaction path optimization, and heat exchange optimization. These techniques are being supported by other workers in the areas of efficiency measurement, availability analysis, and exergy analysis, which will serve to guide... and exergy analysis are all examples of targeting techniques. They are all effective at describing where your process lies in the efficiency domain but do not really tell you where you should be going. These techniques are being discussed...

Introducing a new infrared cut-off for holographic dark energy, we study the correspondence of the quintessence, tachyon, K-essence, and dilaton energy densities with this holographic dark energy density in the flat FRW universe. This correspondence allows us to reconstruct the potentials and the dynamics for these scalar-field models, which describe accelerated expansion.
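A sketch of the standard construction for the quintessence case (generic holographic dark energy formulas, not the specific infrared cut-off introduced in the paper):

```latex
% Holographic dark energy density for an infrared cut-off L:
\rho_\Lambda = 3c^2 M_p^2 L^{-2}
% Quintessence density and pressure:
\rho_\phi = \tfrac{1}{2}\dot\phi^2 + V(\phi), \qquad
p_\phi = \tfrac{1}{2}\dot\phi^2 - V(\phi)
% Equating \rho_\phi with \rho_\Lambda and matching the equation of state
% w_\Lambda = p/\rho fixes the potential and the kinetic term:
V(\phi) = \tfrac{1}{2}\,(1 - w_\Lambda)\,\rho_\Lambda, \qquad
\dot\phi^2 = (1 + w_\Lambda)\,\rho_\Lambda
```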

Cutting Down Electricity Cost in Internet Data Centers by Using Energy Storage. Yuanxiong Guo... energy storage capability in data centers to reduce the electricity bill under real-time electricity markets... between cost saving and energy storage capacity. As far as we know, our work is the first to explore...

Use of Short-cut Methods to Analyse Optimal Operation of Petlyuk Distillation Columns. Ivar J... Keywords: Petlyuk distillation column, dividing wall column, optimizing control, minimum energy. 1. INTRODUCTION ...ments for the level of automatic control and to the design of the number of stages in each column section.

Search for non-analyticity: if f is smooth and regular in the vicinity of f = 0, the standard non-analyticities associated with branch cuts enter via ring diagrams, i.e., ladders which are closed onto themselves... the dominant terms are generated in the thermodynamic potential. In ladders the non-analyticities associated...

...relative total costs are higher in emissions-intensive countries. Using the results of the 22nd Energy... with low marginal costs of abating carbon emissions may have high total costs, and vice versa, for a given mitigation... We hypothesize that, under a common percentage cut in emissions intensity relative to business...

Gas Dynamic Effects on Laser Cut Quality. Kai Chen, Y. Lawrence Yao, and Vijay Modi, Department... are very sensitive to gas jet pressure and nozzle standoff distance... shows the same behavior (i.e., discontinuity as gas pressure and standoff change)...

"Cut Me Some Slack": Latency-Aware Live Migration for Databases. Sean Barker, Yun Chi, Hyun Jin Moon... by leveraging "migration slack", or resources that can be used for migration without excessively impacting query... and exploit migration slack in real time. Using our prototype, we demonstrate that Slacker effectively...

Chapter 7 - Welding. The dangers in welding, cutting, heating and grinding should never... and to understand the hazards involved. Spot the hazard: hazards associated with welding include the arc itself (eyes can become extremely red and sore and in extreme cases suffer permanent damage) and welding gases...

Interactive Separation of Segmented Bones in CT Volumes Using Graph Cut. Lu Liu, David Raber, David... mask customized to the shape of the bone, such as the femoral head. However, creating masks for bones of different... methodology have been reported for bone segmentation (see a recent survey in [1]).

Building designers and operators could cut energy use by 50% in large office buildings, hospitals, schools, and a variety of stores - including groceries, general merchandise outlets, and retail outlets - by following the recommendations of researchers at the National Renewable Energy Laboratory (NREL).

...to analyze the formation structure in depth, since seismic signals around the reservoir area were unclear in the 3-D survey. This research attempts to estimate the attenuation properties of the Bentonite layer in the Cut Bank oil field. VSP data is processed...

Effects of exogenous ABA on photosynthesis and stomatal conductance of cut twigs from oak seedlings... of the observed decline in net photosynthesis (Downton et al., 1988), or is there some direct effect of ABA on mesophyll photosynthesis (Raschke and Hedrich, 1985)? Do forest trees display the same responses to ABA...

In the past few years the state of Texas, under the direction of Governor George W. Bush, has seriously considered the idea of cutting property taxes. Since schools are financed mainly through property taxes in Texas, this is an extremely important...

...and is subsequently related to the classical MINCUT approach. From a practical perspective, a simple and intuitive clustering... Keywords: Graph Cuts, Transductive Inference, Statistical Learning, Clustering, Combinatorial... research witnessed a renewed surge of interest in the MINCUT problem, culminating in the theoretical...

Two fungal symbioses collide: endophytic fungi are not welcome in leaf-cutting ant gardens. Sunshine... with their fungal garden, while the leaf material they provide to their garden is usually filled with endophytic fungi. The ants and their cultivar may interact with hundreds of endophytic fungal species, yet little is known about...

In this paper, we present a cut-cell methodology for solving the two-dimensional neutral-particle transport equation on an orthogonal Cartesian grid. We allow a rectangular cell to be subdivided into two polygonal subcells. We ensure that this division (or cut) conserves the volumes of the materials in the subcells, and we utilize a step-characteristics (SC) slice-balance approach (SBA) to calculate the angular fluxes exiting the cell as well as the average scalar fluxes in each subcell. Solving the discrete ordinates transport equation on an arbitrary mesh has historically been difficult to parallelize while maintaining good parallel efficiency; however, on Cartesian meshes, the KBA algorithm maintains good parallel efficiency using a direct solve. The ability to preserve this algorithm was a driving factor in the development of our cut-cell method. This method also provides a more accurate depiction of a material interface in a cell, which leads to more accurate solutions downstream of that cell. As a result, fewer spatial cells can be utilized, reducing memory requirements. We apply this approach in the 2D/3D discrete ordinates neutral-particle transport code Denovo, where we analyze a 2D 3 x 3 lattice of pincells. We show that, for eigenvalue problems, a significant increase in accuracy for a given mesh size is gained by utilizing the cut-cell SC equations instead of the standard homogenized-cell SC equations.
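The volume-conservation requirement on the cut can be checked with elementary geometry; a minimal sketch with an invented straight interface in a unit cell:

```python
# The two polygonal subcells created by a straight material interface must
# exactly tile the rectangular cell. The chord endpoints are made up.
def shoelace_area(poly):
    """Polygon area from the shoelace formula (vertices in order)."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Unit cell cut by a chord from (0.25, 0) on the bottom edge to (0.75, 1)
# on the top edge, yielding two quadrilateral subcells.
left_subcell = [(0.0, 0.0), (0.25, 0.0), (0.75, 1.0), (0.0, 1.0)]
right_subcell = [(0.25, 0.0), (1.0, 0.0), (1.0, 1.0), (0.75, 1.0)]
areas = (shoelace_area(left_subcell), shoelace_area(right_subcell))
```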

-PNIPAAM-CBE, which were applied to fresh-cut romaine lettuce, along with free CBE, to determine their efficiency against L. monocytogenes in a food system. The chitosan-PNIPAAM-CBE yielded the greatest bacterial inhibition (P<0.05); therefore, it was subjected to a...

A Short Cut to Deforestation. Andrew Gill, John Launchbury, Simon L. Peyton Jones, Department... An example of just such a transformation is deforestation (Wadler [1990]). Deforestation removes arbitrary... we know of no mature compiler that uses deforestation as part of its regular optimisations...

Building designers and operators could cut energy use by 50% in large office buildings, hospitals, schools, and a variety of stores -- including groceries, general merchandise outlets, and retail outlets -- by following the recommendations of NREL researchers. The innovative energy-saving recommendations are contained in technical support documents and Advanced Energy Design Guides compiled by NREL.

University of Washington, Focus the Nation, 1/31/2008. Notes: Science on the Cutting Edge panel... that was not coincident with a loss of ice, and this has led scientists to look at other factors that could impact sea-ice loss. One factor that has recently been detailed via satellite and ice-buoy information is the movement...

Interactive Graph Cut Based Segmentation With Shape Priors. Daniel Freedman and Tao Zhang... Segmentation can be very challenging; a small amount of user input can often resolve ambiguous decisions.

Cartesian Cut Cell Two-Fluid Solver for Hydraulic Flow Problems. L. Qian, D. M. Causon, D. M. Ingram, and C. G. Mingham. Abstract: A two-fluid solver which can be applied to a variety of hydraulic... with a sloping beach is also calculated to demonstrate the applicability of the method to real hydraulic problems.

Green Chemistry. The smartest way to cut pollution is to prevent it. That's the motto of "green chemistry"... a substance, in this case oil, interacts with oxygen. However, common industrial oxidation methods are highly polluting... Add Oxygen: oil is not only the major energy source on our planet but also the basis for producing...

Senate Vent Tent: Faculty, Staff, and Student Responses to CSU Budget Cuts. In late July, the CSU Board of Trustees adopted a strategy to deal with a budget shortfall of $584 million (relative to the state budget enacted in January), or nearly 20% of the CSU budget. To give a perspective on the size...

A Novel Framework for Accurate Lung Segmentation Using Graph Cuts. Asem M. Ali and Ayman S. El-Baz, Bioengineering Department, University of Louisville. Abstract: The closeness of the gray levels between lung tissues and chest tissues makes lung segmentation based only on image signals difficult. This work...

This paper analyses the effects of random noise in determining errors and confidence levels for galaxy redshifts obtained by cross-correlation techniques. The main finding is that confidence levels have previously been overestimated and errors inaccurately calculated in certain applications. New formulae are presented.
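A toy sketch of the cross-correlation estimate itself (not the paper's error formulae): a template emission line is shifted by a known lag and recovered from the correlation peak. The spectra, noise level, and lag are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 512, 37
x = np.arange(n)
template = np.exp(-0.5 * ((x - 200) / 5.0) ** 2)          # one Gaussian line
observed = np.roll(template, true_lag) + 0.005 * rng.standard_normal(n)

# Cross-correlate mean-subtracted spectra; the peak position gives the shift.
corr = np.correlate(observed - observed.mean(),
                    template - template.mean(), mode="full")
estimated_lag = int(corr.argmax()) - (n - 1)              # recovers ~true_lag
```

Random noise perturbs the peak position; quantifying how it degrades the confidence in the recovered shift is precisely the subject of the paper.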

Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. An information barrier (IB) is included in the measurement system to protect the potentially classified information while allowing sufficient information transfer to occur for the monitoring party to gain confidence that the material being measured is consistent with the host's declarations concerning that material. The attribute measurement technique incorporates an IB and addresses both concerns by measuring several attributes of the nuclear material and displaying unclassified results through green (indicating that the material does possess the specified attribute) and red (indicating that the material does not possess the specified attribute) lights. The attribute measurement technique has been implemented in the AVNG, an attribute measuring system described in other presentations at this conference. In this presentation, we will discuss four techniques used in the AVNG: (1) the IB, (2) the attribute measurement technique, (3) the use of open and secure modes to increase confidence in the displayed results, and (4) the joint design as a method for addressing both host and monitor needs.

Graph Cuts in Vision and Graphics: Theories and Applications. Yuri Boykov and Olga Veksler, Computer Science, The University... the corresponding graph. Thus, many applications in vision and graphics use min-cut algorithms as a tool for computing optimal hypersurfaces. Secondly, graph cuts also work as a powerful energy minimization tool.
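As a reminder of the underlying machinery, here is a minimal max-flow computation (Edmonds-Karp) on an invented 4-node graph; by the max-flow/min-cut theorem, the returned flow value equals the capacity of the minimum s-t cut.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    total = 0
    while True:
        parent = [-1] * n              # BFS for an augmenting path
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:            # no augmenting path: flow is maximal
            return total
        path, v = [], t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(cap[u][v] for u, v in path)
        for u, v in path:              # push flow, update residual capacities
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
        total += bottleneck

# s = 0, t = 3; edge capacities are invented for illustration.
capacity = [[0, 3, 2, 0],
            [0, 0, 1, 2],
            [0, 0, 0, 3],
            [0, 0, 0, 0]]
min_cut_value = max_flow(capacity, 0, 3)
```

Here the min cut separates {0} from the rest, with capacity 3 + 2 = 5.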

...levels of 1.27 and 0.64 cm of external fat, including a partially skinned ham and picnic shoulder, along with separable lean from the four lean cuts. Grisdale et al. (1984a) reported fat-standardized lean as a percentage of the carcass weight... a bootjack is formed in the belly at a 20-degree angle. The foot was removed perpendicular to the length of the shank 1.27 cm below the proximal end of the tuber calcis. Pelvic fat, flank muscle, and lymph glands and associated fat were removed...

Pro forma cash-flow analysis of petroleum ventures usually is considered as a deterministic model. In the last 10 years, Monte Carlo analysis has allowed the introduction of probability distributions of input variables in place of single-valued functions. Reserve determination and rate scheduling in these current Monte Carlo techniques have relied on the volumetric formula, which works well in nonfractured reservoirs. Recent massive drilling in fractured reservoirs has rendered this approach unusable. This paper develops a variation of the Arps rate-cumulative equation as a basic model for the determination of the distribution of original reserves and the decline rates. Continuation of the Monte Carlo technique into net present value analysis and internal rate of return (IRR) is also developed.
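The core Monte Carlo step can be sketched using the exponential form of the Arps rate-cumulative relation, q = q_i - D*Np, so that reserves to an economic limit are Np = (q_i - q_limit)/D. The uniform input distributions and all numbers below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(1)

def sample_reserves(trials=10_000, q_limit=50.0):
    """Monte Carlo draw of original reserves from sampled decline parameters."""
    reserves = []
    for _ in range(trials):
        q_i = random.uniform(800.0, 1200.0)    # initial rate, bbl/day
        d = random.uniform(0.0004, 0.0010)     # decline rate, 1/day
        reserves.append((q_i - q_limit) / d)   # Arps: Np = (q_i - q_lim)/D
    return reserves

reserves = sample_reserves()
p50 = sorted(reserves)[len(reserves) // 2]     # median ("P50") reserves
```

The same sampled scenarios can then be carried forward into the net present value and IRR distributions the paper develops.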

A process has been developed for fabricating composite structures using either reaction forming or polymer infiltration and pyrolysis techniques to densify the composite matrix. The matrix and reinforcement materials of choice can include, but are not limited to, silicon carbide (SiC) and zirconium carbide (ZrC). The novel process can be used to fabricate complex, net-shape or near-net shape, high-quality ceramic composites with a crack-free matrix.

Three different analysis techniques for Atmospheric Imaging Systems are presented. The classical Hillas-parameters-based technique is shown to be robust and efficient, but more elaborate techniques can improve the sensitivity of the analysis. A comparison of the different analysis techniques shows that they use different information for gamma-hadron separation and that it is possible to combine their qualities.

This report reviews how phase-cut dimmers work, how LEDs differ from the incandescent lamps that the dimmers were historically designed to control, and how these differences can lead to complications when trying to dim LEDs. Compatibility between a specific LED source and a specific phase-cut dimmer is often unknown and difficult to assess, and ensuring compatibility adds complexity to the design, specification, bidding, and construction observation phases for new buildings and major remodel projects. To maximize project success, this report provides both general guidance and step-by-step procedures for designing phase-controlled LED dimming on both new and existing projects, as well as real-world examples of how to use those procedures.