Recovery of Bacillus atrophaeus spores from grime-treated and clean surfaces was measured in a controlled chamber study to assess sampling-method performance. Outdoor surfaces investigated by wipe and vacuum sampling methods included stainless steel, glass, marble, and concrete. Bacillus atrophaeus spores were used as a surrogate for Bacillus anthracis spores in this study, which was designed to assess whether grime-coated surfaces significantly affected surface-sampling-method performance compared to clean surfaces. A series of chamber tests was carried out in which known amounts of spores were allowed to settle gravitationally onto both clean and dirty surfaces. Reference coupons were co-located with test coupons in all chamber experiments to provide a quantitative measure of the initial surface concentration of spores on all surfaces, thereby allowing sampling-recovery calculations. Results from these tests, carried out under both low- and high-humidity conditions, show that spore recovery from grime-coated surfaces is the same as or better than spore recovery from clean surfaces. Statistically significant differences between method performance on grime-coated and clean surfaces were observed in only about half of the chamber tests conducted.

Methods for the collection of soil samples to determine levels of 137Cs and other fallout radionuclides, such as excess 210Pb and 7Be, will depend on the purposes (aims) of the project, site and soil characteristics, analytical capacity, the total number of samples that can be analysed, and the sample mass required. The latter two will depend partly on detector type and capabilities. A variety of field methods have been developed for different field conditions and circumstances over the past twenty years, many of them inherited or adapted from soil science and sedimentology. The use of 137Cs in erosion studies has been widely developed, while the application of fallout 210Pb and 7Be is still developing. Although it is possible to measure these nuclides simultaneously, it is common for experiments to be designed around the use of 137Cs alone. Caesium studies typically involve comparison of the inventories found at eroded or sedimentation sites with those of a 'reference' site. An accurate characterization of the depth distribution of these fallout nuclides is often required in order to apply and/or calibrate the conversion models. However, depending on the tracer involved, the depth distribution, and thus the sampling resolution required to define it, differs. For example, a depth resolution of 1 cm is often adequate when using 137Cs. However, fallout 210Pb and 7Be commonly have very strong surface maxima that decrease exponentially with depth, and fine depth increments are required at or close to the soil surface. Consequently, different depth-incremental sampling methods are required when using different fallout radionuclides. Geomorphic investigations also frequently require determination of the depth distribution of fallout nuclides on slopes and depositional sites as well as their total inventories.

In this paper, the combined method of response surface and importance sampling was applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model was presented for parameter failure of the physical process in the thermodynamic system, from which the combined response-surface and importance-sampling algorithm was established; the performance-degradation model of the components and the simulation process for parameter failure in the physical process of the thermodynamic system were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics, achieving satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response-surface method. (authors)
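
As a rough illustration of how a response surface can be combined with importance sampling to estimate a small failure probability, the sketch below fits a quadratic surrogate to a few evaluations of a hypothetical limit-state function and then draws importance samples from a normal density centred near the surrogate's most failure-prone design point. The limit-state function, the surrogate form, and all parameters are stand-ins for illustration, not the thermodynamic model used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical limit-state function: g(x) < 0 means parameter failure.
# Stand-in for an expensive thermodynamic simulation.
def g_true(x):
    return 3.0 - x[..., 0] - 0.5 * x[..., 1] ** 2

# 1) Fit a quadratic response surface g_hat from a small design of experiments.
X_doe = rng.normal(size=(50, 2))
y_doe = g_true(X_doe)
A = np.column_stack([np.ones(len(X_doe)), X_doe, X_doe ** 2])
coef, *_ = np.linalg.lstsq(A, y_doe, rcond=None)

def g_hat(x):
    feats = np.column_stack([np.ones(len(x)), x, x ** 2])
    return feats @ coef

# 2) Importance sampling: shift the sampling density toward the failure region
#    indicated by the response surface (here, crudely, the DOE point with smallest g_hat).
x_star = X_doe[np.argmin(g_hat(X_doe))]
N = 20000
x_is = x_star + rng.normal(size=(N, 2))            # importance density q ~ N(x*, I)
log_p = -0.5 * np.sum(x_is ** 2, axis=1)           # nominal density  p ~ N(0, I) (up to a constant)
log_q = -0.5 * np.sum((x_is - x_star) ** 2, axis=1)
w = np.exp(log_p - log_q)                          # likelihood ratios
pf = np.mean(w * (g_hat(x_is) < 0.0))
print(f"estimated failure probability ~ {pf:.3e}")
```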

A method and system for formation and withdrawal of a sample from a surface to be analyzed utilizes a collection instrument having a port through which a liquid solution is conducted onto the surface to be analyzed. The port is positioned adjacent the surface to be analyzed, and the liquid solution is conducted onto the surface through the port so that the liquid solution conducted onto the surface interacts with material comprising the surface. An amount of material is thereafter withdrawn from the surface. Pressure control can be utilized to manipulate the solution balance at the surface to thereby control the withdrawal of the amount of material from the surface. Furthermore, such pressure control can be coordinated with the movement of the surface relative to the port of the collection instrument within the X-Y plane.

Empiric quantification of environmental fecal contamination is an important step toward understanding the impact that water, sanitation, and hygiene interventions have on reducing enteric infections. There is a need to standardize the methods used for surface sampling in field studies that examine fecal contamination in low-income settings. The dry cloth method presented in this manuscript improves upon the more commonly used swabbing technique that has been shown in the literature to have a low sampling efficiency. The recovery efficiency of a dry electrostatic cloth sampling method was evaluated using Escherichia coli and then applied to household surfaces in Iquitos, Peru, where there is high fecal contamination and enteric infection. Side-by-side measurements were taken from various floor locations within a household at the same time over a three-month period to compare for consistency of quantification of E. coli bacteria. In the laboratory setting, the dry cloth sampling method showed 105% (95% confidence interval: 98%, 113%) recovery efficiency of E. coli from the cloths. The field application demonstrated strong agreement of side-by-side results (Pearson correlation coefficient of 0.83 for dirt surfaces and 0.53 for the other floor surfaces sampled). The method can be utilized in households with high bacterial loads using either continuous (quantitative) or categorical (semi-quantitative) data. The standardization of this low-cost, dry electrostatic cloth sampling method can be used to measure differences between households in intervention and non-intervention arms of randomized trials.

A new spectrophotometric method is reported for the determination of nanomolar levels of malachite green in surface water samples. The method is based on the catalytic effect of silver nanoparticles on the oxidation of malachite green by hexacyanoferrate (III) in acetate-acetic acid medium. The absorbance is measured at 610 nm with the fixed-time method. Under the optimum conditions, the linear range was 8.0 × 10(-9)-2.0 × 10(-7) mol L(-1) malachite green with a correlation coefficient of 0.996. The limit of detection (S/N = 3) was 2.0 × 10(-9) mol L(-1). The relative standard deviation for ten replicate determinations of 1.0 × 10(-8) mol L(-1) malachite green was 1.86%. The method offers good accuracy and reproducibility for malachite green determination in surface water samples without any pre-concentration or separation step.
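
A fixed-time catalytic method of this kind ultimately reduces to reading concentrations off a linear calibration of absorbance against standard concentration. The short sketch below shows that bookkeeping with made-up absorbance values (the standards, blank noise, and sample absorbance are illustrative assumptions, not the published data), including an LOD estimated with the common 3*sigma/slope convention.

```python
import numpy as np

# Illustrative calibration data (mol/L, absorbance at 610 nm); values are made up.
conc = np.array([8e-9, 2e-8, 5e-8, 1e-7, 2e-7])
absb = np.array([0.021, 0.052, 0.128, 0.255, 0.509])

slope, intercept = np.polyfit(conc, absb, 1)   # linear calibration A = m*c + b

blank_sd = 0.0017                              # std. dev. of replicate blank readings (assumed)
lod = 3 * blank_sd / slope                     # LOD from the 3*sigma/slope convention

a_sample = 0.180                               # absorbance of a hypothetical water sample
c_sample = (a_sample - intercept) / slope
print(f"LOD ~ {lod:.1e} mol/L, sample ~ {c_sample:.1e} mol/L")
```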

The admittance loci method plays an important role in the design of multilayer thin-film structures. In this paper, the admittance loci method has been explored theoretically for sensing of various chemical and biological samples based on the surface plasmon resonance (SPR) phenomenon. A dielectric multilayer structure consisting of a borosilicate glass (BSG) substrate, calcium fluoride (CaF2), and zirconium dioxide (ZrO2) along with different dielectric layers has been investigated. Moreover, the admittance loci as well as the SPR curves of a metal-dielectric multilayer structure consisting of the BSG substrate, a gold metal film, and various dielectric samples have been simulated in the MATLAB environment. To validate the proposed simulation results, calibration curves have also been provided.

Biofilms are an important source of contamination in food companies, yet the composition of biofilms in practice is still mostly unknown. The chemical and microbiological characterization of surface samples taken after cleaning and disinfection is very important to distinguish free-living bacteria from the attached bacteria in biofilms. In this study, sampling methods that are potentially useful for both chemical and microbiological analyses of surface samples were evaluated. In the manufacturing facilities of eight Belgian food companies, surfaces were sampled after cleaning and disinfection using two sampling methods: the scraper-flocked swab method and the sponge stick method. Microbiological and chemical analyses were performed on these samples to evaluate the suitability of the sampling methods for the quantification of extracellular polymeric substance components and microorganisms originating from biofilms in these facilities. The scraper-flocked swab method was most suitable for chemical analyses of the samples because the material in these swabs did not interfere with determination of the chemical components. For microbiological enumerations, the sponge stick method was slightly but not significantly more effective than the scraper-flocked swab method. In all but one of the facilities, at least 20% of the sampled surfaces had more than 10² CFU/100 cm². Proteins were found in 20% of the chemically analyzed surface samples, and carbohydrates and uronic acids were found in 15 and 8% of the samples, respectively. When chemical and microbiological results were combined, 17% of the sampled surfaces were contaminated with both microorganisms and at least one of the analyzed chemical components; thus, these surfaces were characterized as carrying biofilm. Overall, microbiological contamination in the food industry is highly variable by food sector and even within a facility at various sampling points and sampling times.

The design, implementation, calibration, and assessment of double modulation pyrometry to measure surface temperatures of radiatively heated samples in our 1 kW imaging furnace is presented. The method requires that the intensity of the external radiation can be modulated. This was achieved by a rotating blade mounted parallel to the optical axis of the imaging furnace. Double modulation pyrometry independently measures the external radiation reflected by the sample as well as the sum of thermal and reflected radiation, and extracts the thermal emission as the difference of these signals. Thus a two-step calibration is required: first, the relative gains of the measured signals are equalized, and then a temperature calibration is performed. For the latter, we transfer the calibration from a calibrated solar-blind pyrometer that operates at a different wavelength. We demonstrate that the worst-case systematic error associated with this procedure is about 300 K but becomes negligible if a reasonable estimate of the sample's emissivity is used. An analysis of the influence of the uncertainties in the calibration coefficients reveals that one of the five coefficients contributes almost 50% to the final temperature error. On a low-emission sample like platinum, the lower detection limit is around 1700 K and the accuracy is typically about 20 K. Note that these moderate specifications are specific to the use of double modulation pyrometry at the imaging furnace. They are mainly caused by the difficulty of achieving and maintaining good overlap between the hot zone, with a diameter of about 3 mm full width at half height, and the measurement spot, both of which are of similar size.

In order to solve the problems of adaptability and scanning efficiency of current surface-profile detection devices, a high-precision and high-efficiency detection approach based on self-adaptability is proposed for the surface contours of free-form surface parts. The contact mechanical probe and the non-contact laser probe are integrated according to the sampling approach of adaptive front-end path detection. First, the front-end path is measured by the non-contact laser probe, and the detection path is planned by the internal algorithm of the measuring instrument. Then a reasonable measurement sampling is completed according to the planned path by the contact mechanical probe. The detection approach can effectively improve the measurement efficiency of free-form surface contours and can simultaneously detect the surface contours of unknown free-form surfaces with different curvatures and even different rates of curvature. The detection approach proposed in this paper also has important reference value for free-form surface contour detection.

This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
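
For readers unfamiliar with the estimator, a minimal sketch of a Hodges-Lehmann step-trend estimate, and one plausible seasonal variant, is given below. The seasonal blocking shown here is an assumption for illustration and may differ in detail from the estimator developed in the paper; the data are synthetic.

```python
import numpy as np

def hodges_lehmann_step(pre, post):
    """Hodges-Lehmann step estimate: median of all pairwise differences post_j - pre_i."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return np.median(post[:, None] - pre[None, :])

def seasonal_hl_step(pre, post, pre_season, post_season, n_seasons=12):
    """Illustrative seasonal variant (assumed form): a Hodges-Lehmann estimate within
    each season, combined by taking the median across seasons."""
    ests = [hodges_lehmann_step(pre[pre_season == s], post[post_season == s])
            for s in range(n_seasons)
            if np.any(pre_season == s) and np.any(post_season == s)]
    return np.median(ests)

# Toy example: a 0.3-unit step between two monitoring periods with noise.
rng = np.random.default_rng(0)
pre = rng.normal(1.0, 0.2, size=60)
post = rng.normal(1.3, 0.2, size=60)
print(hodges_lehmann_step(pre, post))   # close to 0.3
```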

Recovery of spores from environmental surfaces is known to vary with sampling methodology, technique, spore size and characteristics, surface material, and environmental conditions. A series of tests was performed to evaluate a new, validated sponge-wipe method. Specific factors evaluated were the effects of contaminant concentration and surface material on recovery efficiency (RE), false negative rate (FNR), and limit of detection (LOD), and the uncertainties of these quantities. Ceramic tile and stainless steel had the highest mean RE values (48.9 and 48.1%, respectively). Faux leather, vinyl tile, and painted wood had mean RE values of 30.3, 25.6, and 25.5%, respectively, while plastic had the lowest mean RE (9.8%). Results show a roughly linear dependence of RE on surface roughness, with the smoothest surfaces having the highest mean RE values. REs were not influenced by the low spore concentrations tested (3 × 10⁻³ to 1.86 CFU/cm²). The FNR data were consistent with the RE data, showing a trend of smoother surfaces resulting in higher REs and lower FNRs. Stainless steel generally had the lowest mean FNR (0.123) and plastic had the highest mean FNR (0.479). The LOD₉₀ varied with surface material, from 0.015 CFU/cm² on stainless steel up to 0.039 CFU/cm² on plastic. Selecting sampling locations on the basis of surface roughness and using roughness to interpret spore recovery data can improve sampling. Further, FNR values, calculated as a function of concentration and surface material, can be used before sampling to calculate the numbers of samples needed for statistical sampling plans with the desired performance, and after sampling to calculate the confidence in characterization and clearance decisions.

A system for sampling a surface includes a sampling probe having a housing and a socket, and a rolling sampling sphere within the socket. The housing has a sampling fluid supply conduit and a sampling fluid exhaust conduit. The sampling fluid supply conduit supplies sampling fluid to the sampling sphere. The sampling fluid exhaust conduit has an inlet opening for receiving sampling fluid carried from the surface by the sampling sphere. A surface-sampling probe and a method for sampling a surface are also disclosed.

In this study a rapid and effective method, dispersive liquid-liquid microextraction (DLLME), was developed for the extraction of methyl red (MR) prior to its determination by UV-Vis spectrophotometry. Variables influencing DLLME, such as the volumes of chloroform (as extraction solvent) and methanol (as dispersive solvent), pH, ionic strength, and extraction time, were investigated. The significant variables were then optimized by using a Box-Behnken design (BBD) and a desirability function (DF). The optimized conditions (100 μL of chloroform, 1.3 mL of ethanol, pH 4, and 4% (w/v) NaCl) resulted in a linear calibration graph in the range of 0.015-10.0 mg mL-1 of MR in the initial solution with R2 = 0.995 (n = 5). The limit of detection (LOD) and limit of quantification (LOQ) were 0.005 and 0.015 mg mL-1, respectively. Finally, the DLLME method was applied to the determination of MR in different water samples with a relative standard deviation (RSD) of less than 5% (n = 5).

The characterization of total-nitrogen (TN) concentrations is an important component of many surface-water-quality programs. However, three widely used methods for the determination of total nitrogen—(1) derived from the alkaline-persulfate digestion of whole-water samples (TN-A); (2) calculated as the sum of total Kjeldahl nitrogen and dissolved nitrate plus nitrite (TN-K); and (3) calculated as the sum of dissolved nitrogen and particulate nitrogen (TN-C)—all include inherent limitations. A digestion process is intended to convert multiple species of nitrogen that are present in the sample into one measureable species, but this process may introduce bias. TN-A results can be negatively biased in the presence of suspended sediment, and TN-K data can be positively biased in the presence of elevated nitrate because some nitrate is reduced to ammonia and is therefore counted twice in the computation of total nitrogen. Furthermore, TN-C may not be subject to bias but is comparatively imprecise. In this study, the effects of suspended-sediment and nitrate concentrations on the performance of these TN methods were assessed using synthetic samples developed in a laboratory as well as a series of stream samples. A 2007 laboratory experiment measured TN-A and TN-K in nutrient-fortified solutions that had been mixed with varying amounts of sediment-reference materials. This experiment identified a connection between suspended sediment and negative bias in TN-A and detected positive bias in TN-K in the presence of elevated nitrate. A 2009–10 synoptic-field study used samples from 77 stream-sampling sites to confirm that these biases were present in the field samples and evaluated the precision and bias of TN methods. The precision of TN-C and TN-K depended on the precision and relative amounts of the TN-component species used in their respective TN computations. Particulate nitrogen had an average variability (as determined by the relative standard deviation) of 13

The present disclosure provides an apparatus and method for coupling conduit segments together. A first pump obtains a sample and transmits it through a first conduit to a reservoir accessible by a second pump. The second pump further conducts the sample from the reservoir through a second conduit.

This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
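
One of the classical general-purpose techniques covered by books of this kind is rejection sampling. The short sketch below draws independent samples from a Beta(2, 5) target using a uniform proposal, purely as a generic illustration of the idea rather than an excerpt from the book's own code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rejection sampling: draw from target p(x) using proposal q(x) with p(x) <= M*q(x).
# Example target: Beta(2, 5) density on [0, 1]; proposal: Uniform(0, 1).
def p(x):
    return 30.0 * x * (1 - x) ** 4        # normalized Beta(2, 5) pdf

M = 2.5                                    # bound on p(x) over [0, 1] (its maximum is ~2.46 at x = 0.2)

def sample(n):
    out = []
    while len(out) < n:
        x = rng.uniform(0, 1, size=n)
        u = rng.uniform(0, 1, size=n)
        out.extend(x[u < p(x) / M])        # accept with probability p(x) / (M * q(x))
    return np.array(out[:n])

draws = sample(10000)
print(draws.mean())                        # should be close to 2/7 ~ 0.286
```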

Nowadays, there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum), and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is intended for non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use, or the answer to a precise question. After recalling the fundamental principles that govern these analysis methods, based on the interaction of radiation (ultraviolet, X-rays) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (ESCA or XPS). Indeed, they are the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or samples submitted to treatments in aggressive media. (O.M.)

Although the field of radioactive air sampling has matured and evolved over decades, it has lacked a single resource that assimilates technical and background information on its many facets. Edited by experts and with contributions from top practitioners and researchers, Radioactive Air Sampling Methods provides authoritative guidance on measuring airborne radioactivity from industrial, research, and nuclear power operations, as well as naturally occurring radioactivity in the environment. Designed for industrial hygienists, air quality experts, and health physicists, the book delves into the applied research advancing and transforming practice with improvements to measurement equipment, human dose modeling of inhaled radioactivity, and radiation safety regulations. To present a wide picture of the field, it covers the international and national standards that guide the quality of air sampling measurements and equipment. It discusses emergency response issues, including radioactive fallout and the assets used ...

The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the "damaged/undamaged" status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures.

We present a method of sample preparation for studies of ion implantation on metal surfaces. The method, employing a mechanical mask, is specially adapted for samples analysed by Scanning Force Microscopy. It was successfully tested on polycrystalline copper substrates implanted with phosphorus ions at an acceleration voltage of 39 keV. The changes of the electrical properties of the surface were measured by Kelvin Probe Force Microscopy and the surface composition was analysed by Auger Electron Spectroscopy.

A method of analyzing a chemical composition of a specimen is described. The method can include providing a probe comprising an outer capillary tube and an inner capillary tube disposed co-axially within the outer capillary tube, where the inner and outer capillary tubes define a solvent capillary and a sampling capillary in fluid communication with one another at a distal end of the probe; contacting a target site on a surface of a specimen with a solvent in fluid communication with the probe; maintaining a plug volume proximate a solvent-specimen interface, wherein the plug volume is in fluid communication with the probe; draining plug sampling fluid from the plug volume through the sampling capillary; and analyzing a chemical composition of the plug sampling fluid with an analytical instrument. A system for performing the method is also described.

The performance of a macrofoam-swab sampling method was evaluated using Bacillus anthracis Sterne (BAS) and Bacillus atrophaeus Nakamura (BG) spores applied at nine low target amounts (2-500 spores) to positive-control plates and test coupons (2 in. × 2 in.) of four surface materials (glass, stainless steel, vinyl tile, and plastic). Test results from cultured samples were used to evaluate the effects of surrogate, surface concentration, and surface material on recovery efficiency (RE), false negative rate (FNR), and limit of detection. For RE, surrogate and surface material had statistically significant effects, but concentration did not. Mean REs were lowest for vinyl tile (50.8% with BAS, 40.2% with BG) and highest for glass (92.8% with BAS, 71.4% with BG). FNR values ranged from 0 to 0.833 for BAS and 0 to 0.806 for BG, with values increasing as concentration decreased in the range tested (0.078 to 19.375 CFU/cm², where CFU denotes 'colony forming units'). Surface material also had a statistically significant effect. An FNR-concentration curve was fit for each combination of surrogate and surface material. For both surrogates, the FNR curves tended to be lowest for glass and highest for vinyl tile. The FNR curves for BG tended to be higher than those for BAS at lower concentrations, especially for glass. Results using a modified Rapid Viability-Polymerase Chain Reaction (mRV-PCR) analysis method were also obtained. The mRV-PCR results and comparisons to the culture results will be discussed in a subsequent report.
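
The abstract does not give the functional form of the fitted FNR-concentration curves, so the sketch below uses one common assumption, a Poisson-style model FNR(c) = exp(-k c), fitted to illustrative data; the numbers and the model are placeholders, not the study's results. The same fit also yields an LOD-style quantity as the concentration at which the FNR drops to 5%.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative false-negative rates at tested surface concentrations (CFU/cm^2);
# these numbers are placeholders, not the data from the study.
conc = np.array([0.078, 0.31, 1.2, 4.8, 19.375])
fnr = np.array([0.80, 0.55, 0.22, 0.04, 0.00])

def fnr_model(c, k):
    # Poisson-style assumption: a contaminated coupon reads negative when zero
    # colonies are recovered, giving FNR(c) = exp(-k * c).
    return np.exp(-k * c)

(k_hat,), _ = curve_fit(fnr_model, conc, fnr, p0=[1.0])
lod95 = -np.log(0.05) / k_hat          # concentration at which FNR falls to 5%
print(f"k ~ {k_hat:.2f}, LOD95 ~ {lod95:.2f} CFU/cm^2")
```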

A number of contamination sites exist in this country where the area and volume of material to be remediated is very large, approaching or exceeding 10⁶ m² and 10⁶ m³. Typically, only a small fraction of this material is actually contaminated. In such cases there is a strong economic motivation to test the material with a sufficient density of measurements to identify which portions are uncontaminated, so that they can be left in place or be disposed of as uncontaminated waste. Unfortunately, since contamination often varies rapidly from position to position, this procedure can involve upwards of one million measurements per site. The situation is complicated further in many cases by the difficulties of sampling porous surfaces, such as concrete. This report describes a method for sampling concretes in which an immediate distinction can be made between contaminated and uncontaminated surfaces. Sample acquisition and analysis will be automated.

In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples created with this method will reflect optimally the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree, a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used to determine how many languages from each phylum should be selected, given any required sample size.

Cloud point extraction (CPE) has been used for the preconcentration and simultaneous determination of cobalt (Co) and lead (Pb) in fresh and wastewater samples. The extraction of analytes from aqueous samples was performed in the presence of 8-hydroxyquinoline (oxine) as a chelating agent and Triton X-114 as a nonionic surfactant. Experiments were conducted to assess the effect of different chemical variables such as pH, amounts of reagents (oxine and Triton X-114), temperature, incubation time, and sample volume. After phase separation, based on the cloud point, the surfactant-rich phase was diluted with acidic ethanol prior to its analysis by flame atomic absorption spectrometry (FAAS). Enhancement factors of 70 and 50, with detection limits of 0.26 μg L−1 and 0.44 μg L−1, were obtained for Co and Pb, respectively. In order to validate the developed method, a certified reference material (SRM 1643e) was analyzed, and the determined values were in good agreement with the certified values. The proposed method was applied successfully to the determination of Co and Pb in fresh surface-water and wastewater samples.

Surface sampling for Bacillus anthracis spores has traditionally relied on detection via bacterial cultivation methods. Although effective, this approach does not provide the level of organism specificity that can be gained through molecular techniques. False negative rates (FNR) and limits of detection (LOD) were determined for two B. anthracis surrogates with modified rapid viability-polymerase chain reaction (mRV-PCR) following macrofoam-swab sampling. This study was conducted in parallel with a previously reported study that analyzed spores using a plate-culture method. B. anthracis Sterne (BAS) or B. atrophaeus Nakamura (BG) spores were deposited onto four surface materials (glass, stainless steel, vinyl tile, and plastic) at nine target concentrations (2 to 500 spores/coupon; 0.078 to 19.375 colony-forming units [CFU] per cm²). Mean FNR values for mRV-PCR analysis ranged from 0 to 0.917 for BAS and 0 to 0.875 for BG and increased as spore concentration decreased (over the concentrations investigated) for each surface material. FNRs based on mRV-PCR data were not statistically different for BAS and BG, but were significantly lower for glass than for vinyl tile. FNRs also tended to be lower for the mRV-PCR method compared to the culture method. The mRV-PCR LOD₉₅ was lowest for glass (0.429 CFU/cm² with BAS and 0.341 CFU/cm² with BG) and highest for vinyl tile (0.919 CFU/cm² with BAS and 0.917 CFU/cm² with BG). These mRV-PCR LOD₉₅ values were lower than the culture values (BAS: 0.678 to 1.023 CFU/cm² and BG: 0.820 to 1.489 CFU/cm²). The FNR and LOD₉₅ values reported in this work provide guidance for environmental sampling of Bacillus spores at low concentrations.

The Chemical Analysis Automation (CAA) project is developing standardized, software-driven, site-deployable robotic laboratory systems with the objective of lowering the per-sample analysis cost, decreasing sample turnaround time, and minimizing human exposure to hazardous and radioactive materials associated with DOE remediation projects. The first integrated system developed by the CAA project is designed to determine polychlorinated biphenyl (PCB) content in soil matrices. A demonstration and development testing of this system was conducted in conjunction with surface soil characterization activities at the David Witherspoon 1630 Site in Knoxville, Tennessee. The PCB system consists of five hardware standard laboratory modules (SLMs), one software SLM, the task sequence controller (TSC), and the human-computer interface (HCI). Four of the hardware SLMs included a four-channel Soxhlet extractor, a high-volume concentrator, a column cleanup, and a gas chromatograph. These SLMs performed the sample preparation and measurement steps within the total analysis protocol. The fifth hardware module was a robot that transports samples between the SLMs and the required consumable supplies to the SLMs. The software SLM is an automated data interpretation module that receives raw data from the gas chromatograph SLM and analyzes the data to yield the analyte information. The TSC is a software system that provides the scheduling, management of system resources, and the coordination of all SLM activities. The HCI is a graphical user interface that presents the automated laboratory to the analyst in terms of the analytical procedures and methods. Human control of the automated laboratory is accomplished via the HCI. Sample information required for processing by the automated laboratory is entered through the HCI. Information related to the sample and the system status is presented to the analyst via graphical icons.

This contribution describes a simple, fast, and sensitive application of the localized surface plasmon resonance effect of silver nanoparticles for simultaneous determination of a mixture of the antihypertensive drugs atenolol and amiloride in both pharmaceutical dosage forms and biological samples (urine). Silver nanoparticles were prepared by chemical reduction of silver nitrate using hydroxylamine HCl in an alkaline medium. Application of silver-hydroxylamine nanoparticles (SH NPs) provides many advantages, including reproducibility, sensitivity, and a cost-effective means of analyte determination. Amiloride has four amino groups which act as attachment points on the surface of the silver nanoparticles, resulting in a synergistic effect on the absorption intensity of atenolol and leading to increased sensitivity in the determination of both compounds. This method shows excellent advantages compared with previously reported methods, including accuracy, precision, and selectivity. The linear range of atenolol is 1 × 10−5–1 × 10−4 mol·L−1 and of amiloride is 1 × 10−6–1 × 10−5 mol·L−1. The limit of detection (LOD) values of atenolol and amiloride are 0.89 × 10−5 and 0.42 × 10−6 mol·L−1, respectively.

A sample processing device is disclosed, which comprises a first substrate and a second substrate, where the first substrate has a first surface comprising two area types, a first area type with a first contact angle with water and a second area type with a second contact angle with water, the first contact angle being smaller than the second contact angle. The first substrate defines an inlet system and a preparation system in areas of the first type, which are separated by a barrier system in an area of the second type. The inlet system is adapted to receive...

Audit sampling involves the application of audit procedures to less than 100% of the items within an account balance or class of transactions. Our article aims to study audit sampling in the audit of financial statements. As a widely used audit technique, in both its statistical and nonstatistical forms, the method is very important for auditors. It should be applied correctly to give a fair view of the financial statements and to satisfy the needs of all financial users. In order to be applied correctly, the method must be understood by all its users, and mainly by auditors; otherwise, incorrect application risks loss of reputation and discredit, litigation, and even prison. Since there is no unitary practice and methodology for applying the technique, the risk of applying it incorrectly is fairly high. SWOT analysis is a technique that shows advantages, disadvantages, threats, and opportunities. We applied SWOT analysis to the study of the sampling method from the perspective of three players: the audit company, the audited entity, and the users of financial statements. The study shows that by applying the sampling method the audit company and the audited entity both save time, effort, and money. The disadvantages of the method are the difficulty of applying it and of understanding its insight. Being widely used as an audit method and being a factor in a correct audit opinion, the sampling method's advantages, disadvantages, threats, and opportunities must be understood by auditors.

In this book, the authors cover the basic methods and advances within distance sampling that are most valuable to practitioners and in ecology more broadly. This is the fourth book dedicated to distance sampling. In the decade since the last book was published, there have been a number of new developments. The intervening years have also shown which advances are of most use. This self-contained book covers topics from the previous publications, while also including recent developments in method, software, and application. Distance sampling refers to a suite of methods, including line and point transect sampling, in which animal density or abundance is estimated from a sample of distances to detected individuals. The book illustrates these methods through case studies; data sets and computer code are supplied to readers through the book's accompanying website. Some of the case studies use the software Distance, while others use R code. The book is in three parts. The first part addresses basic methods, the ...
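
As a flavour of the line-transect calculations the book covers, the sketch below estimates density from perpendicular detection distances under a half-normal detection function. The data are simulated and the single-parameter estimator is a deliberate simplification of what the Distance software or the book's R code would do.

```python
import numpy as np

rng = np.random.default_rng(2)

# Line-transect distance sampling with a half-normal detection function
# g(x) = exp(-x^2 / (2 sigma^2)).  Density estimate: D = n / (2 * mu * L),
# where mu = sigma * sqrt(pi / 2) is the effective strip half-width.

# Hypothetical perpendicular distances (m) to detections along L = 10 km of transect.
distances = np.abs(rng.normal(0, 25.0, size=60))
L = 10_000.0

sigma_hat = np.sqrt(np.mean(distances ** 2))   # MLE of sigma for half-normal distances
mu_hat = sigma_hat * np.sqrt(np.pi / 2.0)      # effective strip half-width (m)
D_hat = len(distances) / (2.0 * mu_hat * L)    # animals per m^2
print(f"estimated density ~ {D_hat * 1e6:.1f} animals per km^2")
```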

In marine geology research it is necessary to obtain a sufficient quantity of seabed surface samples while also ensuring that the samples are in their original state. Currently, there are a number of seabed surface sampling devices available, but we find it is very difficult to obtain sand samples using these devices, particularly when dealing with fine sand. Machine-controlled seabed surface sampling devices are also available, but they are generally unable to dive into deeper regions of water. To obtain larger quantities of seabed surface sand samples in their original states, many researchers have tried to improve upon sampling devices, but these efforts have generally produced ambiguous results, in our opinion. To resolve this issue, we have designed an improved and highly effective seabed surface sand sampling device that incorporates the strengths of a variety of sampling devices. It is capable of diving into deep water to obtain fine sand samples and is also suited for use in streams, rivers, lakes, and seas with varying levels of depth (up to 100 m). This device can be used for geological mapping, underwater prospecting, geological engineering, and ecological and environmental studies in both marine and terrestrial waters.

Contaminants present at the top surface of superconducting radio frequency (SRF) cavities can act as field emitters and restrict the cavity accelerating gradient. A room-temperature in-situ plasma processing technology for SRF cavities, aimed at cleaning hydrocarbons from the inner surface of cavities, has recently been developed at the Spallation Neutron Source (SNS). Surface studies of the plasma-processed Nb samples by secondary ion mass spectrometry (SIMS) and scanning Kelvin probe (SKP) showed that the Ne/O2 plasma processing is very effective at removing carbonaceous contaminants from the top surface and improves the surface work function by 0.5 to 1.0 eV.

The coincidence methods currently used for the accurate activity standardisation of radionuclides require dead-time and resolving-time corrections which tend to become increasingly uncertain as count rates exceed about 10 K. To reduce the dependence on such corrections, Muller, in 1981, proposed the selective sampling method, using a fast multichannel analyser (50 ns ch⁻¹) for measuring the count rates. It is, in many ways, more convenient and possibly potentially more reliable to replace the MCA with scalers, and a circuit is described employing five scalers, two of them serving to measure the background correction. Comparisons of our new method and the coincidence method for measuring the activity of ⁶⁰Co sources yielded agreement within statistical uncertainties. (author)

Matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF-MS) is used as the first protein screening method in many laboratories because of its inherent simplicity, mass accuracy, sensitivity, and relatively high sample throughput. We present a simplified sample preparation method for MALDI-MS that enables in-gel digestion … for protein identification similar to that obtained by the traditional protocols for in-gel digestion and MALDI peptide mass mapping of human proteins, i.e. approximately 60%. The overall performance of the novel on-probe digestion method is comparable with that of the standard in-gel sample preparation protocol while being less labour intensive and more cost-effective due to minimal consumption of reagents, enzymes, and consumables. Preliminary data obtained on a MALDI quadrupole-TOF tandem mass spectrometer demonstrated the utility of the on-probe digestion protocol for peptide mass mapping and peptide...

Multistage sampling is commonly used for household surveys when there exists no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces a complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random without replacement sampling at the first stage, and with an arbitrary sampling design for further stages. We consider coupling ...

An estimate of the resistive losses in the LHC dipole beam screen is given from cold surface resistance measurements using the shielded pair technique. Several beam screen samples have been evaluated, with different copper coating methods, including a sample with ribbed surface envisaged to reduce electron cloud losses thanks to its low reflectivity. Experimental data, derived by a proper analysis of the measured Q-factors and including error estimates are compared with theoretical predictions of the anomalous skin effect.

Earth-moving vehicles (e.g., dump trucks, belly dumps) commonly haul radiologically contaminated materials from a site being remediated to a disposal site. Traditionally, each vehicle must be surveyed before being released. The logistical difficulties of implementing the traditional approach on a large scale demand that an alternative be devised. A statistical method (MIL-STD-105E, "Sampling Procedures and Tables for Inspection by Attributes") for assessing product quality from a continuous process was adapted to the vehicle decontamination process. This method produced a sampling scheme that automatically compensates for and accommodates fluctuating batch sizes and changing conditions without the need to modify or rectify the sampling scheme in the field. Vehicles are randomly selected (sampled) upon completion of the decontamination process to be surveyed for residual radioactive surface contamination. The frequency of sampling is based on the expected number of vehicles passing through the decontamination process in a given period and the confidence level desired. This process has been used successfully for 1 year at the former uranium mill site in Monticello, Utah (a CERCLA-regulated clean-up site). The method forces improvement in the quality of the decontamination process and results in a lower likelihood that vehicles exceeding the surface contamination standards are offered for survey. Implementation of this statistical sampling method on the Monticello Project has resulted in more efficient processing of vehicles through decontamination and radiological release, saved hundreds of hours of processing time, provided a high level of confidence that release limits are met, and improved the radiological cleanliness of vehicles leaving the controlled site.

Uniform sampling in metrology has known drawbacks, such as coherent spectral aliasing and a lack of efficiency in terms of measuring time and data storage. The requirement for intelligent sampling strategies has been outlined over recent years, particularly where the measurement of structured surfaces is concerned. Most of the present research on intelligent sampling has focused on dimensional metrology using coordinate-measuring machines, with little reported in the area of surface metrology. In the research reported here, potential intelligent sampling strategies for surface topography measurement of structured surfaces are investigated by using numerical simulation and experimental verification. The methods include the jittered uniform method, low-discrepancy pattern sampling, and several adaptive methods which originate from computer graphics, coordinate metrology, and previous research by the authors. By combining the use of advanced reconstruction methods and feature-based characterization techniques, the measurement performance of the sampling methods is studied using case studies. The advantages, stability, and feasibility of these techniques for practical measurements are discussed. (paper)
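
To make the flavour of these strategies concrete, the sketch below generates a jittered-uniform sample pattern and a 2D Halton low-discrepancy pattern over a rectangular measurement field. It is a generic illustration of the two named patterns, not the authors' implementation, and the field dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def jittered_uniform(nx, ny, lx, ly):
    """One random point in each cell of an nx-by-ny grid over an lx-by-ly field."""
    ix, iy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    x = (ix.ravel() + rng.uniform(size=nx * ny)) * (lx / nx)
    y = (iy.ravel() + rng.uniform(size=nx * ny)) * (ly / ny)
    return np.column_stack([x, y])

def radical_inverse(i, base):
    """Van der Corput radical inverse, the building block of a Halton sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton(n, lx, ly):
    """First n points of the (2, 3) Halton low-discrepancy sequence, scaled to the field."""
    return np.array([[radical_inverse(i, 2) * lx, radical_inverse(i, 3) * ly]
                     for i in range(1, n + 1)])

pts_jit = jittered_uniform(16, 16, 1.0, 1.0)   # 256 jittered sample locations
pts_hal = halton(256, 1.0, 1.0)                # 256 low-discrepancy locations
```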

Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)
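
A minimal example of Boltzmann-weighted sampling of a random surface on a hypercubic lattice is sketched below, using a discrete Gaussian (solid-on-solid) energy and single-site Metropolis updates. It illustrates the general idea only; it is not the authors' algorithm, and the model, lattice size, and temperature are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Metropolis sampling of a random surface: integer heights h(i, j) on an LxL lattice
# with Boltzmann weight exp(-beta * sum of squared nearest-neighbour height differences).
L, beta, n_sweeps = 16, 1.0, 200
h = np.zeros((L, L), dtype=int)

def local_energy(h, i, j, hij):
    nb = [h[(i + 1) % L, j], h[(i - 1) % L, j], h[i, (j + 1) % L], h[i, (j - 1) % L]]
    return sum((hij - n) ** 2 for n in nb)

for _ in range(n_sweeps * L * L):
    i, j = rng.integers(L, size=2)
    new = h[i, j] + rng.choice([-1, 1])
    dE = local_energy(h, i, j, new) - local_energy(h, i, j, h[i, j])
    if dE <= 0 or rng.random() < np.exp(-beta * dE):   # Metropolis acceptance rule
        h[i, j] = new

print("mean squared height:", np.mean(h ** 2))
```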

A series of experiments was conducted to explore the utility of composite-based collection of surface samples for the detection of a Bacillus anthracis surrogate using cellulose sponge samplers on a nonporous stainless steel surface. Two composite-based collection approaches were evaluated over a surface area of 3716 cm² (four separate 929 cm² areas), larger than the 645 cm² prescribed by the standard Centers for Disease Control and Prevention (CDC) cellulose sponge sampling protocol for use on nonporous surfaces. The CDC method was also compared to a modified protocol in which only one surface of the sponge sampler was used for each of the four areas composited. Differences in collection efficiency compared to positive controls and the potential for contaminant transfer for each protocol were assessed. The impact of the loss of wetting buffer from the sponge sampler onto additional surface areas sampled was evaluated. Statistical tests of the results using ANOVA indicate that the collection of composite samples using the modified sampling protocol is comparable to the collection of composite samples using the standard CDC protocol (p = 0.261). Most of the surface-bound spores are collected on the first sampling pass, suggesting that multiple passes with the sponge sampler over the same surface may be unnecessary. The effect of moisture loss from the sponge sampler on collection efficiency was not significant (p = 0.720) for both methods. Contaminant transfer occurs with both sampling protocols, but the magnitude of transfer is significantly greater when using the standard protocol than when the modified protocol is used (p < 0.001). The results of this study suggest that composite surface sampling, by either method presented here, could successfully be used to increase the surface area sampled per sponge sampler, resulting in reduced sampling times in the field and decreased laboratory processing cost and turn-around times.

Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics, and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized, and particulate surface contamination, sampled from small areas on a table, has been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method.
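
The kriging step mentioned above can be illustrated with an ordinary-kriging prediction at an unmeasured position, assuming an exponential covariance model whose sill and range would, in practice, be fitted to the empirical variogram of the measured surface-contamination data. Everything below, including the "measurements", is synthetic and meant only as a sketch of the estimator.

```python
import numpy as np

rng = np.random.default_rng(5)

# Exponential covariance model with sill s and range a (assumed, not fitted here).
def cov(d, s=1.0, a=0.2):
    return s * np.exp(-d / a)

obs_xy = rng.uniform(0, 1, size=(20, 2))                        # sampled positions on the table
obs_z = np.sin(3 * obs_xy[:, 0]) + 0.1 * rng.normal(size=20)    # stand-in contamination values

def krige(x0):
    d_oo = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
    d_o0 = np.linalg.norm(obs_xy - x0, axis=-1)
    n = len(obs_xy)
    # Ordinary-kriging system with a Lagrange multiplier enforcing unbiasedness.
    A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d_oo); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = cov(d_o0)
    w = np.linalg.solve(A, b)
    return w[:n] @ obs_z                                         # kriged estimate at x0

print(krige(np.array([0.5, 0.5])))
```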

We present a novel method for high-quality blue-noise sampling on mesh surfaces with prescribed cell sizes for the underlying tessellation (capacity constraint). Unlike the previous surface-sampling approach that only uses capacity constraints as a regularizer of the Centroidal Voronoi Tessellation (CVT) energy, our approach enforces an exact capacity constraint using the restricted power tessellation on surfaces. Our approach is a generalization of the previous 2D blue-noise sampling technique using an interleaving optimization framework. We further extend this framework to handle multi-capacity constraints. We compare our approach with several state-of-the-art methods and demonstrate that our results are superior to previous work in terms of preserving the capacity constraints.
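
A rough 2D stand-in for the optimization underlying such methods is plain Lloyd relaxation toward a centroidal Voronoi tessellation, sketched below on a dense point set discretizing the unit square. The capacity constraints, surface restriction, and power tessellation of the paper are not reproduced here; this only shows the basic CVT iteration that the paper builds on.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(8)

# Lloyd relaxation toward a Centroidal Voronoi Tessellation in the unit square.
domain = rng.uniform(0, 1, size=(20000, 2))   # dense points discretizing the domain
sites = rng.uniform(0, 1, size=(64, 2))       # initial sample sites

for _ in range(30):
    owner = cKDTree(sites).query(domain)[1]   # assign each domain point to its nearest site
    for k in range(len(sites)):               # move each site to the centroid of its cell
        cell = domain[owner == k]
        if len(cell):
            sites[k] = cell.mean(axis=0)

print(sites[:5])
```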

This paper intends to generate the approximate Voronoi diagram in the geodesic metric for a set of unbiased samples selected from the original points. The mesh model of the seeds is then constructed on the basis of the Voronoi diagram. Rather than constructing the Voronoi diagram for all original points, the proposed strategy works around the obstacle that geodesic distances among neighboring points are sensitive to the definition of the nearest neighbor. The reconstructed model is effectively a level-of-detail representation of the original points; hence, our main motivation is to deal with redundant scattered points. In the implementation, Poisson disk sampling is used to select seeds and helps to produce the Voronoi diagram; a sketch of this seed-selection step is given below. Adaptive reconstructions can be achieved by slightly changing the uniform strategy for selecting seeds. The behaviour of this method is investigated and accuracy evaluations are performed. Experimental results show the proposed method is reliable and effective.
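
To make the seed-selection step concrete, here is a minimal dart-throwing sketch of Poisson disk sampling in 2D, using Euclidean rather than the geodesic distance used in the paper. The radius, candidate budget and unit-square domain are illustrative assumptions, not the authors' parameters.

```python
import random
import math

def poisson_disk_dart_throwing(radius, n_candidates=5000, seed=1):
    """Naive dart throwing: accept a candidate only if it is at least `radius`
    away from every previously accepted sample in the unit square."""
    random.seed(seed)
    samples = []
    for _ in range(n_candidates):
        p = (random.random(), random.random())
        if all(math.dist(p, q) >= radius for q in samples):
            samples.append(p)
    return samples

seeds = poisson_disk_dart_throwing(radius=0.1)
print(f"accepted {len(seeds)} Poisson disk seeds")
```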

Currently available and recently developed sampling methods for slurry and solid manure were tested for bias and reproducibility in the determination of total phosphorus and nitrogen content of samples. Sampling methods were based on techniques in which samples were taken either during loading from

The strategy for sampling and sample preparation is outlined: the necessary number of samples; the analysis and treatment of the results received; the quantity of analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for drawing final conclusions and decisions on the basis of the results received. This strategy was tested in gamma-spectroscopic analysis of radionuclide contamination in the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their sample preparation with ASTM C 999. After preparation, the samples were sealed hermetically and measured. (author)

In this work, we studied the effects of the operating conditions of the X-ray photoelectron spectroscopy (XPS) analysis technique on the investigated samples. Firstly, the performance of the whole system was verified, as well as the accuracy of the analysis. Afterwards, the problem of analysing insulating samples, caused by charge buildup on the surface, was studied. A low-energy electron beam (<100 eV) was applied to compensate the surface charge. The effect of X-rays on the samples was assessed and found to be nondestructive within the analysis time. The effects of low- and high-energy electron beams on the sample surface were investigated; high-energy electrons were found to have a destructive effect on organic samples. The sample heating procedure was tested and its effect on the chemical state of the surface was followed. Finally, the ion source was used to determine the distribution of elements and the chemical state at different depths of the sample, and a method was proposed to determine these depths. (author)

The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of vertices with high node degree can capture most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (scale-free, random, and small-world networks) and in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
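
The following sketch illustrates the general idea of favouring high-degree nodes during sampling: nodes are stratified by degree and the high-degree stratum is sampled at a higher rate. It is a hypothetical illustration of degree-stratified sampling, not the authors' exact SRS- or SBS-based algorithms; the test graph, stratum boundary and sampling rates are assumptions.

```python
import random
import networkx as nx

def degree_stratified_sample(G, rate_high=0.8, rate_low=0.05, top_fraction=0.1, seed=0):
    """Sample a subnetwork by keeping a large share of the highest-degree nodes
    (top `top_fraction` by degree) and a small share of the remaining nodes."""
    random.seed(seed)
    nodes_by_degree = sorted(G.nodes, key=G.degree, reverse=True)
    n_top = max(1, int(top_fraction * G.number_of_nodes()))
    high, low = nodes_by_degree[:n_top], nodes_by_degree[n_top:]
    kept = [v for v in high if random.random() < rate_high]
    kept += [v for v in low if random.random() < rate_low]
    return G.subgraph(kept).copy()

G = nx.barabasi_albert_graph(2000, 3, seed=42)   # scale-free test network
S = degree_stratified_sample(G)
print(S.number_of_nodes(), "nodes sampled;",
      "average clustering:", round(nx.average_clustering(S), 3))
```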

National Aeronautics and Space Administration — Protecting the International Space Station (ISS) crew from microbial contaminants is of great importance. Bacterial and fungal contamination of air, surfaces and...

Sampling subnets is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method can preserve the similarity between the sampled subnet and the original network in terms of degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.

Extraction methods enabling faster removal and concentration of uranium compounds for improved trace and low-level assay are demonstrated for standard surface sampling material in support of nuclear safeguards efforts, health monitoring, and other nuclear analysis applications. A key problem with existing surface sampling swipes is the requirement for complete digestion of the sample and sampling matrix. This is a time-consuming and labour-intensive process that limits laboratory throughput, elevates costs, and increases background levels. Various extraction methods are explored for their potential to quickly and efficiently remove different chemical forms of uranium from standard surface sampling material. A combination of carbonate and peroxide solutions is shown to give the most rapid and complete extraction and dissolution of uranyl compounds. This rapid extraction process is demonstrated to be compatible with standard inductively coupled plasma mass spectrometry methods for uranium isotopic assay as well as screening techniques such as X-ray fluorescence. The general approach described has application beyond uranium to other analytes of nuclear forensic interest (e.g., rare earth elements and plutonium) as well as heavy metals for environmental and industrial hygiene monitoring.

This manual provides sampling methods for environmental samples of airborne dust, precipitated dust, precipitation (rain or snow), fresh water, soil, river or lake sediment, water discharged from a nuclear facility, grains, tea, milk, pasture grass, limnetic organisms, daily diet, index organisms, sea water, marine sediment, and marine organisms, as well as methods for tritium and radioiodine determination, for radiation monitoring of radioactive fallout or radioactivity released by nuclear facilities. The manual aims to present standard sampling procedures for environmental radioactivity monitoring regardless of the monitoring objective, and describes preservation methods for environmental samples acquired at the sampling point for radiation counting (for all samples except the human body). The sampling techniques adopted in this manual were selected on the criterion that they are suitable for routine monitoring and require no special skill. On this basis, the manual presents the outline and aims of sampling; the sampling position or object; the sampling quantity; the apparatus, equipment or vessels for sampling; the sampling location; sampling procedures; pretreatment and preparation procedures of a sample for radiation counting; necessary recording items for sampling; and sample transportation procedures. Special attention is given in the chapter on tritium and radioiodine, because these radionuclides may be lost under the sample preservation methods described for radiation counting of radionuclides less volatile than tritium or radioiodine. (Takagi, S.)

This article presents a discussion of mixed methods (MM) sampling techniques. MM sampling involves combining well-established qualitative and quantitative techniques in creative ways to answer research questions posed by MM research designs. Several issues germane to MM sampling are presented including the differences between probability and…

Application of SEM-EDX, AES and XPS is demonstrated, by way of example, for highly radioactive materials with ionizing dose rates of about 1 Sv near the surface. The samples studied are aerosols from the high-level waste vitrification process, post-precipitation in a pretreated fuel solution, and emulsifying sludge from a solvent extraction process. The chemical composition, resolved down to the microscopic level, reveals much more information about the history of a sample than is available from integral macro-analysis methods. Elucidation of the chemical composition and body structure at the micrometer level can give insight into the origin and generation processes of the samples under investigation. (orig.)

The purpose of this assessment is to compare underwater and above-water settler sludge sampling methods to determine whether the added cost of underwater sampling for the sole purpose of worker dose reduction is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP sludge underwater to achieve worker dose reductions. Additionally, the initial plan was to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision to sample settler sludge underwater needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation compares and contrasts the present method of above-water sampling with the underwater method planned by the Sludge Treatment Project (STP) and determines whether settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining worker doses As Low As Reasonably Achievable (ALARA), eliminating the need for costly redesigns, testing and personnel retraining.

A method is described for measuring surface density or thickness, preferably of coating layers, using radiation emitted by a suitable radionuclide, e.g. 241 Am. The radiation impinges on the measured material, e.g. a copper foil, and, depending on its surface density or thickness, part of the impinging radiation flux is reflected and part penetrates the material. The radiation that has penetrated the material excites, in a replaceable adjustable backing, characteristic radiation of an energy close to that of the impinging radiation (within ±30 keV). Part of the characteristic radiation flux spreads back towards the detector and penetrates the material, in which it is partly absorbed depending on the surface density or thickness of the coating layer. The flux of the transmitted characteristic radiation impinging on the face of the detector is thus a function of surface density or thickness. Only the part of the energy spectrum corresponding to the energy of the characteristic radiation is evaluated. (B.S.)
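
As a rough numerical illustration of the measurement principle (the detected characteristic radiation is attenuated on its way back through the coating, so the count rate maps to areal density), the sketch below inverts a simple exponential attenuation law for the surface density. The attenuation coefficient, reference count rate and single-pass geometry are illustrative assumptions; a real gauge would be calibrated against standards of known thickness.

```python
import math

def surface_density_from_counts(I, I0, mu_mass):
    """Invert I = I0 * exp(-mu_mass * rho_A) for the areal density rho_A (g/cm^2).

    I       : detected count rate behind the layer (counts/s)
    I0      : count rate with no layer present (counts/s)
    mu_mass : mass attenuation coefficient of the layer at the relevant energy (cm^2/g)
    """
    return math.log(I0 / I) / mu_mass

# Illustrative numbers only: mu ~ 1.2 cm^2/g, bare-backing rate 5000 cps, measured 3200 cps.
rho_A = surface_density_from_counts(I=3200.0, I0=5000.0, mu_mass=1.2)
thickness_um = rho_A / 8.96 * 1e4   # divide by copper density (8.96 g/cm^3), convert cm to um
print(f"areal density ~ {rho_A:.3f} g/cm^2, i.e. about {thickness_um:.0f} um of copper")
```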

This research project was conducted at the National Nuclear Security Administration's Kansas City Plant, operated by Honeywell Federal Manufacturing and Technologies, in conjunction with the Safety Sciences Department of Central Missouri State University, to compare the relative removal efficiencies of three wipe sampling techniques currently used at Department of Energy facilities. Efficiencies of removal of beryllium contamination from typical painted surfaces were tested by wipe sampling with dry Whatman 42 filter paper, with water-moistened (Ghost Wipe) materials, and with methanol-moistened wipes. Test plates were prepared using 100 mm x 15 mm Pyrex Petri dishes with interior surfaces spray-painted with a bond-coat primer. To achieve uniform deposition over the test plate surface, 10 ml aliquots of solution containing 1 beryllium and 0.1 ml of metal working fluid were transferred to the test plates and subsequently evaporated. Metal working fluid was added to simulate the slight oiliness common on surfaces in metal working shops where fugitive oil mist accumulates over time. Sixteen test plates for each wipe method (dry, water, and methanol) were processed and sampled using a modification of the wiping patterns recommended by OSHA Method 125G. Laboratory and statistical analysis showed that methanol-moistened wipe sampling removed significantly more (about twice as much) beryllium/oil-film surface contamination as water-moistened wipes (p < 0.001), which in turn removed significantly more (about twice as much) residue than dry wipes (p < 0.001). Evidence for beryllium sensitization via skin exposure argues in favor of wipe sampling with wetting agents that provide enhanced residue-removal efficiency.
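
A minimal sketch of the kind of statistical comparison described above (three wipe media, sixteen plates each, tested for differences in recovered residue) might look like the following. The data are simulated placeholders, and the analysis (one-way ANOVA followed by pairwise Welch t-tests) is only an illustrative stand-in for the study's actual statistical workup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated recovered-residue values (arbitrary units) for 16 plates per wipe medium.
dry      = rng.normal(loc=1.0, scale=0.2, size=16)
water    = rng.normal(loc=2.0, scale=0.3, size=16)
methanol = rng.normal(loc=4.0, scale=0.5, size=16)

# One-way ANOVA across the three wipe media.
f_stat, p_anova = stats.f_oneway(dry, water, methanol)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_anova:.2e}")

# Pairwise comparisons (Welch's t-test, no pooled-variance assumption).
for name, a, b in [("methanol vs water", methanol, water), ("water vs dry", water, dry)]:
    t, p = stats.ttest_ind(a, b, equal_var=False)
    print(f"{name}: t = {t:.1f}, p = {p:.2e}")
```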

Provided herein are fluidics platforms and related methods for performing integrated sample collection and solid-phase extraction of a target component of the sample all in one tube. The fluidics platform comprises a pump, particles for solid-phase extraction and a particle-holding means. The method comprises contacting the sample with one or more reagents in a pump, coupling a particle-holding means to the pump and expelling the waste out of the pump while the particle-holding means retains the particles inside the pump. The fluidics platform and methods herein described allow solid-phase extraction without pipetting and centrifugation.

Procedures and guidelines are given for the dissolution of a variety of selected materials using fusion, microwave, and Parr bomb techniques. These materials include germanium glass, corium-concrete mixtures, and zeolites. Emphasis is placed on sample-preparation approaches that produce a single master solution suitable for complete multielement characterization of the sample. In addition, data are presented on the soil microwave digestion method approved by the Environmental Protection Agency (EPA). Advantages and disadvantages of each sample-preparation technique are summarized.

To improve surveys of sparse objects, methods that use auxiliary information have been suggested. Guided transect sampling uses prior information, e.g., from aerial photographs, for the layout of survey strips. Instead of being laid out straight, the strips will wind between potentially more interesting areas. 3P sampling (probability proportional to prediction) uses...

A method of simultaneously sampling particulate mercury, organic mercurial vapors, and metallic mercury vapor in the working and occupational environment and determining the amount of mercury derived from each such source in the sampled air. A known volume of air is passed through a sampling tube containing a filter for particulate mercury collection, a first adsorber for the selective adsorption of organic mercurial vapors, and a second adsorber for the adsorption of metallic mercury vapor. Carbon black molecular sieves are particularly useful as the selective adsorber for organic mercurial vapors. The amount of mercury adsorbed or collected in each section of the sampling tube is readily quantitatively determined by flameless atomic absorption spectrophotometry.

were investigated in this study: Nine samples from different surface water bodies, two samples from two effluent sources ... Ezeagu, Udi, Nkanu, Oji River and some parts of Awgu and Aninri ..... Study of Stream Output from Small Catchments.

Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure, and thus caution is necessary when generalizing results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common across a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling method be problem-oriented, to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
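
For concreteness, the sketch below shows what uniform node sampling of a temporal network looks like when the network is stored as a time-stamped edge list: a fraction of nodes is chosen uniformly at random and only contacts between retained nodes are kept. The edge-list format and sampling fraction are assumptions for illustration, not the paper's data or code.

```python
import random

def uniform_node_sample(temporal_edges, fraction=0.5, seed=0):
    """Keep a uniform random fraction of nodes and retain only the
    time-stamped contacts for which both endpoints were kept."""
    random.seed(seed)
    nodes = {u for u, v, t in temporal_edges} | {v for u, v, t in temporal_edges}
    kept = set(random.sample(sorted(nodes), int(fraction * len(nodes))))
    return [(u, v, t) for u, v, t in temporal_edges if u in kept and v in kept]

# Tiny illustrative contact sequence: (node_u, node_v, timestamp).
edges = [(1, 2, 0), (2, 3, 1), (1, 3, 2), (3, 4, 2), (4, 5, 3), (2, 5, 4)]
sub = uniform_node_sample(edges, fraction=0.6)
print(f"{len(sub)} of {len(edges)} contacts survive the node subsampling")
```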

Surface waters collected in the field for chemical analyses are easily contaminated. This research note presents a step-by-step detailed description of how to avoid sample contamination when field collecting, processing, and transporting surface water samples for laboratory analysis.

When water samples are taken for the analysis of CFCs, regardless of the sampling method used, contamination of samples by contact with atmospheric air (with its 'high' CFC concentrations) is a major concern. This is because groundwaters usually have lower CFC concentrations than waters that have been exposed to modern air. Some groundwaters might not contain CFCs at all and are therefore most sensitive to trace contamination by atmospheric air. Thus, extreme precautions are needed to obtain uncontaminated samples when groundwaters, particularly those with older ages, are sampled. It is recommended at the start of any CFC investigation that samples from a CFC-free source be collected and analysed as a check on the sampling equipment and methodology. The CFC-free source might be a deep monitoring well or, alternatively, CFC-free water could be carefully prepared in the laboratory. It is especially important that all tubing, pumps and connections that will be used in the sampling campaign be checked in this manner.

The purpose of this paper is to summarize the sampling and examination techniques that were used in the collection and analysis of TMI-2 samples. Samples ranging from auxiliary building air to core debris were collected and analyzed. Handling of the larger samples and many of the smaller samples had to be done remotely and many standard laboratory analytical techniques were modified to accommodate the extremely high radiation fields associated with these samples. The TMI-2 samples presented unique problems with sampling and the laboratory analysis of prior molten fuel debris. 14 refs., 8 figs

Methods described for sampling amphibians and reptiles in Douglas-fir forests in the Pacific Northwest include pitfall trapping, time-constrained collecting, and surveys of coarse woody debris. The herpetofauna of this region differ in breeding and nonbreeding habitats and vagility, so that no single technique is sufficient for a community study. A combination of...

“Dissolved” concentrations of contaminants in sediment porewater (Cfree) provide a more relevant exposure metric for risk assessment than do total concentrations. Passive sampling methods (PSMs) for estimating Cfree offer the potential for cost-efficient and accurate in situ characterization...

Traditional methods for determining the frequency of suspended sediment sample collection often rely on measurements, such as water discharge, that are not well correlated to sediment concentration. Stream power is generally not a good predictor of sediment concentration for rivers that transport the bulk of their load as fines, due to the highly variable routing of...

Nowadays, there are many surface analysis methods, each having its specificity, its qualities, its constraints (for instance, vacuum) and its limits. Expensive in time and in investment, these methods have to be used deliberately. This article is aimed at non-specialists. It gives some elements of choice according to the information sought, the sensitivity, the constraints of use, or the precise question to be answered. After recalling the fundamental principles that govern these analysis methods, based on the interaction of radiation (ultraviolet, X-ray) or particles (ions, electrons) with matter, two methods are described in more detail: Auger electron spectroscopy (AES) and X-ray photoelectron spectroscopy (ESCA or XPS). They are indeed the most widespread methods in laboratories, the easiest to use, and probably the most productive for the analysis of surfaces of industrial materials or of samples subjected to treatments in aggressive media. (O.M.) 11 refs.

Poisson-disk sampling is one of the fundamental research problems in computer graphics that has many applications. In this paper, we study the problem of maximal Poisson-disk sampling on mesh surfaces. We present a simple approach that generalizes the 2D maximal sampling framework to surfaces. The key observation is to use a subdivided mesh as the sampling domain for conflict checking and void detection. Our approach improves the state-of-the-art approach in efficiency, quality and the memory consumption.

We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

Surface exposure dating using in situ cosmogenic nuclides has contributed to our understanding of Earth-surface processes. The precision of the ages estimated by this method is affected by the sample geometry; therefore, accurate measurement of the thickness and shape of the rock sample is crucial. However, it is sometimes difficult to meet these requirements with conventional sampling methods using a hammer and chisel. Here, we propose a new sampling technique using a portable electric rock cutter. This sampling technique is faster, produces more precisely shaped samples, and allows for a more precise age interpretation. A simple theoretical model demonstrates that the age error due to defective sample geometry increases as the total sample thickness increases, indicating the importance of precise sampling for surface exposure dating.
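
The sensitivity of the age to sample thickness can be illustrated with the commonly used depth dependence of cosmogenic production, P(z) ≈ P0 exp(-ρz/Λ): averaging over a slab of finite thickness gives a correction factor that drops as the slab gets thicker, so an error in the measured thickness propagates into the age. The sketch below computes this thickness correction under commonly assumed values (rock density 2.7 g/cm3, attenuation length 160 g/cm2); it illustrates the geometry effect only and is not the authors' specific error model.

```python
import math

def thickness_correction(thickness_cm, density=2.7, attenuation=160.0):
    """Average of exp(-rho*z/Lambda) over a slab of given thickness:
    (Lambda/(rho*L)) * (1 - exp(-rho*L/Lambda))."""
    x = density * thickness_cm / attenuation
    return (1.0 - math.exp(-x)) / x

for L_cm in (2.0, 5.0, 10.0):
    corr = thickness_correction(L_cm)
    print(f"{L_cm:4.1f} cm thick sample: production scaled by {corr:.3f}")

# A 1 cm error on a nominally 5 cm sample shifts the correction, and hence the age, by roughly:
bias = thickness_correction(6.0) / thickness_correction(5.0) - 1.0
print(f"relative age bias from a 1 cm thickness error on a 5 cm sample: {bias:+.1%}")
```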

In this paper, the magnetic flux leakage and eddy current methods were used to evaluate changes in material properties caused by stress. Seven samples made of ferromagnetic material, each with a different level of applied stress, were prepared. First, the leakage magnetic fields were measured by scanning the surface of the specimens with a GMR gradiometer. Next, the same samples were evaluated using an eddy current sensor. A comparison between the results obtained from both methods was carried out. Finally, selected parameters of the measured signal were calculated and used to evaluate the level of the applied stress. A strong correlation between the amount of applied stress and the maximum amplitude of the derivative was confirmed.

This article addresses the general issue of reducing the evidence investigation space in audit activities by means of sampling techniques, given that, when the data volume is significant, an exhaustive examination of the assessed population is not possible and/or effective. The presentation deals with sampling risk, in essence the risk that a selected sample may not be representative of the overall population, in correlation with the audit risk model and its component parts (inherent risk, control risk and non-detection risk), and highlights the inter-conditionings between these two models.
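
The audit risk model referred to above is commonly written multiplicatively, AR = IR x CR x DR, so for a planned overall audit risk the acceptable detection risk (and hence how much sampling evidence is needed) follows by division. The sketch below shows that arithmetic with placeholder risk levels; it is a textbook illustration rather than anything specific to this article.

```python
def detection_risk(audit_risk, inherent_risk, control_risk):
    """Rearranged audit risk model AR = IR * CR * DR  =>  DR = AR / (IR * CR)."""
    return audit_risk / (inherent_risk * control_risk)

# Placeholder assessments: 5% target audit risk, high inherent risk, moderate control risk.
dr = detection_risk(audit_risk=0.05, inherent_risk=0.9, control_risk=0.6)
print(f"acceptable detection risk: {dr:.1%}")  # lower DR means more sampling evidence is needed
```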

Previous studies have shown that olfactory cues are important for mammalian communication. However, many specific compounds that convey information between conspecifics are still unknown. To understand the mechanisms and functions of olfactory cues, olfactory signals such as volatile compounds emitted from individuals need to be assessed. Sampling of animals with and without scent glands has typically been conducted using cotton swabs rubbed over the skin or fur and analysed by gas chromatography-mass spectrometry (GC-MS). However, this method has various drawbacks, including a high level of contamination. Thus, we adapted two methods of volatile sampling from other research fields and compared them to sampling with cotton swabs. To do so, we assessed the body odor of common marmosets (Callithrix jacchus) using cotton swabs, thermal desorption (TD) tubes and, alternatively, a mobile GC-MS device containing a thermal desorption trap. Overall, TD tubes captured the most compounds (N = 113), with half of those compounds being volatile (N = 52). The mobile GC-MS captured the fewest compounds (N = 35), all of which were volatile. Cotton swabs contained an intermediate number of compounds (N = 55), but very few volatiles (N = 10). Almost all compounds found with the mobile GC-MS were also captured with TD tubes (94%). Hence, we recommend TD tubes for state-of-the-art sampling of the body odor of mammals or other vertebrates, particularly for field studies, as they can be easily transported, stored and analysed with high-performance instruments in the lab. Nevertheless, cotton swabs capture compounds which may still contribute to the body odor, e.g. after bacterial fermentation, while profiles from the mobile GC-MS include only the most abundant volatiles of the body odor.

An apparatus and method for continuously sampling a pulverous material flow includes means for extracting a representative subflow from a pulverous material flow. A screw conveyor is provided to cause the extracted subflow to be pushed upwardly through a duct to an overflow. Means for transmitting a radiation beam transversely to the subflow in the duct, and means for sensing the transmitted beam through opposite pairs of windows in the duct are provided to measure the concentration of one or more constituents in the subflow. (author)

Methods for conditional Poisson sampling (CP-sampling) and Sampford sampling are compared and the focus is on the efficiency of the methods. The efficiency is investigated by simulation in different sampling situations. It was of interest to compare methods since new methods for both CP-sampling and Sampford sampling were introduced by Bondesson, Traat & Lundqvist in 2004. The new methods are acceptance-rejection methods that use the efficient Pareto sampling method. They are found to be ...
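
As background for the comparison above, Pareto sampling itself is simple to state: each unit receives a ranking variable built from a uniform draw and its target inclusion probability, and the n units with the smallest ranking values are selected. The sketch below implements that rule; the inclusion probabilities are made-up illustrative values, and the acceptance-rejection step used by the new CP and Sampford methods is not shown.

```python
import random

def pareto_sample(p, n, seed=0):
    """Pareto pi-ps sampling: rank units by Q_i = [U_i*(1-p_i)] / [p_i*(1-U_i)]
    with U_i ~ Uniform(0, 1), and take the n smallest ranks."""
    random.seed(seed)
    q = []
    for p_i in p:
        u = random.random()
        q.append((u * (1.0 - p_i)) / (p_i * (1.0 - u)))
    return sorted(range(len(p)), key=lambda i: q[i])[:n]

# Hypothetical target inclusion probabilities for 8 units, summing to n = 3.
p = [0.6, 0.5, 0.45, 0.4, 0.35, 0.3, 0.25, 0.15]
print("selected units:", pareto_sample(p, n=3))
```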

This paper studies the Touch-and-Go (TAG) concept for enabling a spacecraft to take a sample from the surface of a small primitive body, such as an asteroid or comet. The idea behind the TAG concept is to let the spacecraft descend to the surface, make contact with the surface for several seconds, and then ascend to a safe location. Sampling would be accomplished by an end-effector that is active during the few seconds of surface contact. The TAG event is one of the most critical events in a primitive body sample-return mission. The purpose of this study is to evaluate the dynamic behavior of a representative spacecraft during the TAG event, i.e., immediately prior, during, and after surface contact of the sampler. The study evaluates the sample-collection performance of the proposed sampling end-effector, in this case a brushwheel sampler, while acquiring material from the surface during the contact. A main result of the study is a guidance and control (G&C) validation of the overall TAG concept, in addition to specific contributions to demonstrating the effectiveness of using nonlinear clutch mechanisms in the sampling arm joints, and increasing the length of the sampling arms to improve robustness.

This paper presents two practical methods for drawing the development of a surface that cannot be developed by the classical methods of descriptive geometry, the toroidal surface, which is frequently met in technical practice. The described methods are approximate; the development is obtained point by point. The accuracy of the methods is given by the number of points used in the drawing. As with any other approximate method, the development may need to be adjusted on site when it is actually manufactured.

Investigations at hazardous waste sites and sites of chemical spills often require on-site measurements and sampling activities to assess the type and extent of contamination. This document is a compilation of sampling methods and materials suitable to address most needs that arise during routine waste site and hazardous spill investigations. The sampling methods presented in this document are compiled by media and were selected on the basis of practicality, economics, representativeness, compatibility with analytical considerations, and safety, as well as other criteria. In addition to sampling procedures, sample handling and shipping, chain-of-custody procedures, instrument certification, equipment fabrication, and equipment decontamination procedures are described. Sampling methods for soil, sludges, sediments, and bulk materials cover the solids medium. Ten methods are detailed for surface waters, groundwater and containerized liquids; twelve are presented for ambient air, soil gases and vapors, and headspace gases. A brief discussion of ionizing radiation survey instruments is also provided.

Sample transport is an important requirement for in-situ analysis of samples in NASA planetary exploration missions. Tests have shown that powders or liquid drops on a surface can be transported by surface acoustic waves (SAWs) generated on the surface using interdigital transducers. The phenomena were investigated experimentally; to generate SAWs, interdigital electrodes were deposited on wafers of 128 deg rotated Y-cut LiNbO3. The transporting capability of the SAW device was tested using particles of various sizes and drops of liquids of various viscosities. Because of different interaction mechanisms with the SAWs, the powders and the liquid drops were observed to move in opposite directions. In preliminary tests, a speed of 180 mm/s was achieved for powder transportation. The detailed experimental setup and results are presented in this paper. The transport mechanism can potentially be applied to miniaturized sample analysis systems or "lab-on-chip" devices.

Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...
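
For readers unfamiliar with the underlying machinery, the plain Bayesian bootstrap (on which the BBC builds) approximates the posterior of the mean by repeatedly drawing Dirichlet(1, ..., 1) weights over the observed data and recording the weighted averages. The sketch below shows that basic step with simulated data; it does not implement the clone construction or the weighted-distribution extension discussed in the abstract, and all data and settings are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=30)   # hypothetical i.i.d. sample

def bayesian_bootstrap_mean(data, n_draws=4000, rng=rng):
    """Posterior draws of the mean under the Bayesian bootstrap:
    weights ~ Dirichlet(1, ..., 1), one weighted average per draw."""
    n = len(data)
    weights = rng.dirichlet(np.ones(n), size=n_draws)   # shape (n_draws, n)
    return weights @ data

draws = bayesian_bootstrap_mean(data)
lo, hi = np.quantile(draws, [0.025, 0.975])
print(f"posterior mean ~ {draws.mean():.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```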

A fast code for the simulation of common RBS spectra, including surface roughness effects, has been written and tested on virtual samples comprising either a rough layer deposited on a smooth substrate or a smooth layer deposited on a rough substrate, simulated at different geometries. The sample surface or interface relief is described by a polyline, and the simulated RBS spectrum is obtained as the sum of many particular spectra from randomly chosen particle trajectories. The code includes several procedures generating virtual samples with random and regular (periodic) roughness. The shape of the RBS spectra was found to change strongly with increasing sample roughness and with an increasing angle of the incoming ion beam.

... 40 Protection of Environment 25 2010-07-01 2010-07-01 false Representative Sampling Methods I...—Representative Sampling Methods The methods and equipment used for sampling waste materials will vary with the form and consistency of the waste materials to be sampled. Samples collected using the sampling...

The report describes the procedures used by Argonne National Laboratory to sample surface coal mine effluents in order to obtain field and laboratory data on 110 organic compounds or classes of compounds and 14 metals and minerals that are known as priority pollutants, plus 5-day biochemical oxygen demand (BOD/sub 5/), total organic carbon (TOC), chemical oxygen demand (COD), total dissolved solids (TDS), and total suspended solids (TSS). Included are directions for preparation of sampling containers and equipment, methods of sampling and sample preservation, and field and laboratory protocols, including chain-of-custody procedures. Actual analytical procedures are not described, but their sources are referenced.

The purpose is to develop a rapid surface (concrete, steel) contamination measurement system that will provide a ''quick-look'' indication of contaminated areas, an archival record, and an automated analysis. A bulk sampling oven is also being developed. The sampling device consists of a sampling head, a quick-look detector, and an archiving system (sorbent tube). The head thermally desorbs semi-volatiles, such as PCBs and oils, from concrete and steel surfaces; the volatilized materials are passed through the quick-look detector. The sensitivity of the detector can be attenuated for various contaminant levels. Volatilized materials are trapped in a tube filled with adsorbent. The tubes are housed in a magazine which also archives information about the sampling conditions. Analysis of the tubes can be done at a later date. The concrete sampling head is fitted with a tungsten-halogen lamp; in laboratory experiments it has extracted model contaminants by heating the top 4 mm of the surface to 250 C within 100-200 s. The steel sampling head has been tested on different types of steel and has extracted model contaminants within 30 s. A mathematical model of heat and mass transport in concrete has been developed. The rate of contaminant removal is at a maximum when the moisture content is about 100 kg/m3. The system will be useful during decontamination and decommissioning operations.

A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

Measurements of the concentrations of specific atmospheric radionuclides in air filter samples collected for the Environmental Measurements Laboratory's Surface Air Sampling Program (SASP) during 1990--1993, with the exception of April 1993, indicate that anthropogenic radionuclides, in both hemispheres, were at or below the lower limits of detection for the sampling and analytical techniques that were used to collect and measure them. The occasional detection of 137 Cs in some air filter samples may have resulted from resuspension of previously deposited debris. Following the April 6, 1993 accident and release of radionuclides into the atmosphere at a reprocessing plant in the Tomsk-7 military nuclear complex located 16 km north of the Siberian city of Tomsk, Russia, weekly air filter samples from Barrow, Alaska; Thule, Greenland and Moosonee, Canada were selected for special analyses. The naturally occurring radioisotopes that the authors measure, 7 Be and 210 Pb, continue to be detected in most air filter samples. Variations in the annual mean concentrations of 7 Be at many of the sites appear to result primarily from changes in the atmospheric production rate of this cosmogenic radionuclide. Short-term variations in the concentrations of 7 Be and 210 Pb continued to be observed at many sites at which weekly air filter samples were analyzed. The monthly gross gamma-ray activity and the monthly mean surface air concentrations of 7 Be, 95 Zr, 137 Cs, 144 Ce, and 210 Pb measured at sampling sites in SASP during 1990--1993 are presented. The weekly mean surface air concentrations of 7 Be, 95 Zr, 137 Cs, 144 Ce, and 210 Pb for samples collected during 1990--1993 are given for 17 sites

Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., to four) used in optimal seasons were not present. Specifically, over 90 % of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

We describe methods for efficient, accurate sampling of logs at landscape and stand scales to estimate density, total length, cover, volume, and weight. Our methods focus on optimizing the sampling effort by choosing an appropriate sampling method and transect length for specific forest conditions and objectives. Sampling methods include the line-intersect method and...
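
As a concrete example of the line-intersect method mentioned above, the classic estimator commonly attributed to Van Wagner gives coarse-woody-debris volume per unit area from the diameters of logs crossed by a transect: V = pi^2 * sum(d_i^2) / (8L). With diameters in centimetres and transect length in metres this conveniently yields cubic metres per hectare. The sketch below applies that estimator to made-up field data; transect layout, tallying rules and correction factors (slope, tilt) are omitted, and the figures are purely illustrative.

```python
import math

def line_intersect_volume(diameters_cm, transect_len_m):
    """Line-intersect estimator: V (m^3/ha) = pi^2 * sum(d_i^2) / (8 * L),
    with log diameters d_i in cm at the crossing point and transect length L in m."""
    return math.pi ** 2 * sum(d * d for d in diameters_cm) / (8.0 * transect_len_m)

# Hypothetical tally: diameters (cm) of logs crossed along a 100 m transect.
diameters = [12.0, 35.0, 8.5, 22.0, 15.5]
print(f"estimated CWD volume: {line_intersect_volume(diameters, 100.0):.1f} m^3/ha")
```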

The resonant nuclear reaction F-19(p, alpha gamma)O-16 was used to perform depth-sensitive analyses for both fluorine and hydrogen in lunar samples. The resonance at 0.83 MeV (center-of-mass) in this reaction was applied to the measurement of the distribution of trapped solar protons in lunar samples to depths of about 0.5 micrometer. These results are interpreted in terms of terrestrial H2O surface contamination and a redistribution of the implanted solar hydrogen, which has been influenced by heavy radiation damage in the surface region. Results are also presented for an experiment to test the penetration of H2O into laboratory glass samples which had been irradiated with O-16 to simulate the radiation-damaged surfaces of lunar glasses. Fluorine determinations were performed in a 1 micrometer surface layer on lunar samples using the same F-19(p, alpha gamma)O-16 resonance. The data are discussed from the standpoint of lunar fluorine and Teflon contamination. (U.S.)

Microbiological sampling methods presently used for enumeration of microorganisms on spacecraft surfaces require contact with easily damaged components. Estimation of viable particles on surfaces using air sampling methods in conjunction with a mathematical model would be desirable. Parameters necessary for the mathematical model are the effect of angled surfaces on viable particle collection and the number of viable cells per viable particle. Deposition of viable particles on angled surfaces closely followed a cosine function, and the number of viable cells per viable particle was consistent with a Poisson distribution. Other parameters considered by the mathematical model included deposition rate and fractional removal per unit time. A close nonlinear correlation between volumetric air sampling and airborne fallout on surfaces was established, with all fallout data points falling within the 95% confidence limits determined by the mathematical model.
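
A minimal sketch of the two model ingredients named above might look like the following: deposition onto a tilted surface is scaled by the cosine of the tilt angle, and the viable-cell count per deposited particle is treated as Poisson. The rate constants, exposure time and Poisson mean are placeholders; this is only an illustration of how such a fallout model could be assembled, not the published model.

```python
import math
import random

def expected_viable_particles(flux_per_m2_per_h, tilt_deg, area_m2, hours):
    """Deposition onto an angled surface scaled by cos(tilt) relative to horizontal."""
    return flux_per_m2_per_h * math.cos(math.radians(tilt_deg)) * area_m2 * hours

def simulate_viable_cells(n_particles, mean_cells_per_particle, seed=0):
    """Total viable cells when cells per particle are Poisson distributed."""
    random.seed(seed)
    total = 0
    for _ in range(n_particles):
        # Poisson draw via Knuth's method (adequate for small means).
        L, k, p = math.exp(-mean_cells_per_particle), 0, 1.0
        while True:
            p *= random.random()
            if p <= L:
                break
            k += 1
        total += k
    return total

n = round(expected_viable_particles(flux_per_m2_per_h=50.0, tilt_deg=30.0, area_m2=0.25, hours=8.0))
print(n, "particles expected;", simulate_viable_cells(n, mean_cells_per_particle=1.3), "viable cells simulated")
```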

Development of the best sampling design to answer a research question should be an interactive venture between the land manager or researcher and statisticians, and is the result of answering various questions. A series of questions that can be asked to guide the researcher in making decisions that will arrive at an effective sampling plan are described, and a case...

An electrospray system comprises a liquid extraction surface sampling probe. The probe comprises a probe body having a liquid inlet and a liquid outlet, and having a liquid extraction tip. A solvent delivery conduit is provided for receiving solvent liquid from the liquid inlet and delivering the solvent liquid to the liquid extraction tip. An open liquid extraction channel extends across an exterior surface of the probe body from the liquid extraction tip to the liquid outlet. An electrospray emitter tip is in liquid communication with the liquid outlet of the liquid extraction surface sampling probe. A system for analyzing samples, a liquid junction surface sampling system, and a method of analyzing samples are also disclosed.

Optical methods for determining the refractive index profile of layered materials are commonly used with spectroscopic ellipsometry or transmittance/reflectance spectrometry. Measurements of spectral reflection and transmission usually permit the characterization of optical materials and the determination of their refractive index. However, it is also possible to characterize samples with dopants, impurities and defects using optical methods. The microstructures of a hydrogenated crystalline Si wafer and of a layer of SiO2-ZrO2 composition are investigated. The first sample is a Si(001):H Czochralski-grown single-crystalline wafer with a 50 nm thick surface SiO2 layer. Hydrogen dose implantation (D) continues to be an important issue in microelectronic device and sensor fabrication. Hydrogen-implanted silicon (Si:H) has become a topic of remarkable interest, mostly because of the potential of implantation-induced platelets and micro-cavities for the creation of gettering-active areas and for Si layer splitting. Oxygen precipitation and atmospheric impurity are analysed. The second sample is a layer of co-evaporated SiO2 and ZrO2 deposited using two electron beam guns simultaneously in a reactive evaporation method. The composition and structure were investigated by X-ray photoelectron spectroscopy (XPS) and spectroscopic ellipsometry. The non-uniformity and composition of the layer are analysed using the average density method.

Measurements of the maximum tangential component of the magnetic intensity, Hτm, have been carried out. The measurements were taken on the surface of metal samples as a function of the rise time of a single half-sinusoidal current pulse in a linear current-carrying wire, with the purpose of determining the value of this component as a function of the thickness of aluminium samples. Time-resolved ranges of electric and magnetic properties, and of defects of sample continuity along the depth, were found. Empirical formulae for the dependence of Hτm on sample thickness have been derived, and their relation to the effective depth of penetration of the magnetic field into the metal has been established.

Following a wide-area release of biological materials, mapping the extent of contamination is essential for orderly response and decontamination operations. HVAC filters process large volumes of air and therefore collect highly representative particulate samples in buildings. HVAC filter extraction may have great utility in rapidly estimating the extent of building contamination following a large-scale incident. However, until now, no studies have been conducted comparing the two most appropriate sampling approaches for HVAC filter materials: direct extraction and vacuum-based sampling.

It has been assumed that the perturbed region, R_p, is large enough so that: (1) even without a great deal of biasing there is a substantial probability that an average source neutron will enter it; and (2) once having entered, the neutron is likely to make several collisions in R_p during its lifetime. Unfortunately, neither assumption is valid for the typical configurations one encounters in small-sample-worth experiments. In such experiments one measures the reactivity change induced when a very small void in a critical assembly is filled with a sample of some test material. Only a minute fraction of the fission-source neutrons ever gets into the sample and, of those neutrons that do, most emerge uncollided. Monte Carlo small-sample perturbation computations are described.

Poisson disk sampling has excellent spatial and spectral properties and plays an important role in a variety of visual computing tasks. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have been reported on the problem of generating Poisson disks on surfaces, due to the complicated nature of the surface. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.

The demands of production, such as thin films in microelectronics, rely on consideration of the factors influencing the interaction of dissimilar materials that make contact at their surfaces. Bond formation between surface layers of dissimilar condensed solids, termed adhesion, depends on the nature of the contacting bodies. Thus, it is necessary to determine the characteristics of the adhesion interaction of different materials from both applied and fundamental perspectives of surface phenomena. Given the difficulty in obtaining reliable experimental values of the adhesion strength of coatings, the theoretical approach to determining adhesion characteristics becomes more important. Surface Physics: Theoretical Models and Experimental Methods presents straightforward and efficient approaches and methods developed by the authors that enable the calculation of surface and adhesion characteristics for a wide range of materials: metals, alloys, semiconductors, and complex compounds. The authors compare results from the ...

... 19 Customs Duties 2 2010-04-01 2010-04-01 false Method of sampling by Customs. 151.70 Section 151... THE TREASURY (CONTINUED) EXAMINATION, SAMPLING, AND TESTING OF MERCHANDISE Wool and Hair § 151.70 Method of sampling by Customs. A general sample shall be taken from each sampling unit, unless it is not...

Annual sampling was conducted at the Rio Blanco, Colorado, site for the Long-Term Hydrologic Monitoring Program May 16-17, 2011, to monitor groundwater and surface water for potential radionuclide contamination. Sampling and analyses were conducted as specified in Sampling and Analysis Plan for the U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). A duplicate sample was collected from location Johnson Artesian WL. Samples were analyzed by the U.S. Environmental Protection Agency (EPA) Radiation & Indoor Environments National Laboratory in Las Vegas, Nevada. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectrometry, and for tritium using the conventional method. Tritium was not measured using the enrichment method because the EPA laboratory no longer offers that service. Results of this monitoring at the Rio Blanco site demonstrate that groundwater and surface water outside the boundaries have not been affected by project-related contaminants.

Generally, the surface of the sample should be smooth and flat in XRF analysis, but ancient ceramics can hardly match this condition. Two simple methods, within the fundamental method and the empirical correction method of XRF analysis, are put forward so that the analysis of a small sample or a sample with a curved surface can be easily completed.

A surface of a metal member, such as carbon steel, to be used in a corrosive environment such as a nuclear power plant or a thermoelectric plant is polished. A printing method is conducted to remove obstacles on the surface of the member: a photographic printing paper immersed in a diluted sulfuric acid solution is applied tightly to the portion whose surface has been polished smooth. Sulfur present in the material in the form of MnS, or present alone, reacts with the sulfuric acid to form a sulfur-bearing gas, which reacts with the Ag of the printing paper and discolors it brown. When a peeled printing paper is discolored brown, the sulfur printing is repeated. After confirming that the peeled printing paper is white, the surface is washed. Subsequently, surface plasticization is conducted by water jet peening or shot peening. (I.N.)

Coarse woody debris (CWD) plays an important role in many forest ecosystem processes. In recent years, a number of new methods have been proposed to sample CWD. These methods select individual logs into the sample using some form of unequal probability sampling. One concern with most of these methods is the difficulty in estimating the volume of each log. A new method...

Microplastic pollution in the marine environment is a scientific topic that has received increasing attention over the last decade. The majority of scientific publications address microplastic pollution of the sea surface. The protocol below describes the methodology for sampling, sample preparation, separation and chemical identification of microplastic particles. A manta net fixed on an "A-frame" attached to the side of the vessel was used for sampling. Microplastic particles caught in the cod end of the net were separated from samples by visual identification using stereomicroscopes. Particles were analyzed for their size using an image analysis program and for their chemical structure using ATR-FTIR and micro-FTIR spectroscopy. The described protocol is in line with the recommendations for microplastics monitoring published by the Marine Strategy Framework Directive (MSFD) Technical Subgroup on Marine Litter. This written protocol with video guide will support the work of researchers who deal with microplastics monitoring all over the world. PMID:28060297

Systems and methods are described for laser ablation of an analyte from a specimen and capturing of the analyte in a dispensed solvent to form a testing solution. A solvent dispensing and extraction system can form a liquid microjunction with the specimen. The solvent dispensing and extraction system can include a surface sampling probe. The laser beam can be directed through the surface sampling probe. The surface sampling probe can also serve as an atomic force microscopy probe. The surface sampling probe can form a seal with the specimen. The testing solution including the analyte can then be analyzed using an analytical instrument or undergo further processing.

This report describes a field experiment that identifies an optimal method for determination of compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements in predicting the average surface radium concentration and in estimating the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected at each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected at each site: 10-composite samples, 20-composite samples, and individual or post-hole samples. The 10-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs

Annual sampling was conducted at the Rio Blanco, Colorado, site for the Long-Term Hydrologic Monitoring Program May 14-16, 2013, to monitor groundwater and surface water for potential radionuclide contamination. Sampling and analyses were conducted as specified in Sampling and Analysis Plan for the U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). A duplicate sample was collected from location CER #1 Black Sulphur. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectrometry and for tritium using the conventional and enrichment methods.

A series of experiments was conducted using SRL 165 synthetic waste glass to investigate the effects of surface preparation and leaching solution composition on the alteration of the glass. Samples of glass with as-cast surfaces produced smooth reaction layers and some evidence for precipitation of secondary phases from solution. Secondary phases were more abundant in samples reacted in deionized water (DIW) than in those reacted in a silicate solution. Samples with saw-cut surfaces showed a large reduction in surface roughness after 7 days of reaction in either solution. Reaction in silicate solution for up to 91 days produced no further change in surface morphology, while reaction in DIW produced a spongy surface that formed the substrate for further surface layer development. The differences in the surface morphology of the samples may create microclimates that control the details of development of alteration layers on the glass; however, the concentrations of elements in leaching solutions show differences of 50% or less between samples prepared with different surface conditions for tests of a few months duration. 6 refs., 7 figs., 1 tab

Aspheric optics are being used more and more widely in modern optical systems, due to their ability to correct aberrations, enhance image quality, enlarge the field of view and extend the range of effect, while reducing the weight and volume of the system. With the development of optical technology, there is a more pressing requirement for large-aperture, high-precision aspheric surfaces. The original computer controlled optical surfacing (CCOS) technique cannot meet the challenge of precision and machining efficiency, a problem that has received considerable attention from researchers. To address the shortcomings of the original polishing process, an optimized method for manufacturing large aspheric surfaces is put forward. Subsurface damage (SSD), full-aperture errors and the full band of frequency errors are all controlled by this method. A smaller SSD depth can be obtained by using low-hardness tools and small abrasive grains in the grinding process. For full-aperture error control, edge effects can be controlled by using smaller tools and an amended model of the material removal function. For full-band frequency error control, low-frequency errors can be corrected with the optimized material removal function, while medium- and high-frequency errors are suppressed by using a uniform removal principle. With this optimized method, the accuracy of a K9 glass paraboloid mirror can reach rms 0.055 waves (where a wave is 0.6328 μm) in a short time. The results show that the optimized method can guide large aspheric surface manufacturing effectively.

Calculation of surface wave velocity is a classic problem dating back to the well-known Haskell transfer matrix method, which contributes to solutions of elastic wave propagation, global subsurface structure evaluation by simulating observed earthquake group velocities, and on-site evaluation of subsurface structure by simulating phase velocity dispersion curves and/or H/V spectra obtained by micro-tremor observation. Recently, inversion analysis of micro-tremor observations has required an efficient method of generating many model candidates and also stable, accurate, and fast computation of dispersion curves and the Rayleigh wave trajectory. The original Haskell transfer matrix method has been improved in terms of its divergence tendency, mainly by the generalized transmission and reflection matrix method with a formulation available for surface wave velocity; however, the root-finding algorithm has not been fully discussed, except for one that sets a threshold on the absolute value of the complex characteristic function. Since the surface wave number (the reciprocal of the surface wave velocity multiplied by the frequency) is a root of a complex-valued characteristic function, it is intractable to use a general root-finding algorithm. We examine the characteristic function in the phase plane to construct a two-dimensional bisection algorithm, with consideration of the layer to be evaluated, and an algorithm for tracking roots down along the frequency axis. (author)
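
Where an intuition for the root-tracking step helps, the following is a minimal sketch and not the paper's two-dimensional phase-plane algorithm: it assumes a simplified, real-valued characteristic function char_fn(f, c) (the actual surface-wave secular function is complex-valued, which is why the authors work in the phase plane) and tracks a single dispersion root along the frequency axis by bracketing a sign change and bisecting, narrowing the scan window around the previous root.

```python
import numpy as np

def track_dispersion_root(char_fn, freqs, c_min, c_max, n_scan=400, tol=1e-6):
    """Track one dispersion-curve root c(f) along the frequency axis.

    char_fn(f, c) : simplified real-valued characteristic function (assumption;
                    the true secular function is complex-valued in general)
    freqs         : frequencies, ordered along the tracking direction
    Returns an array of phase velocities; NaN where no sign change was bracketed.
    """
    roots = np.full(len(freqs), np.nan)
    lo, hi = c_min, c_max
    for i, f in enumerate(freqs):
        cs = np.linspace(lo, hi, n_scan)
        vals = np.array([char_fn(f, c) for c in cs])
        brackets = np.nonzero(np.diff(np.sign(vals)) != 0)[0]
        if brackets.size == 0:
            lo, hi = c_min, c_max          # lost the root; rescan the full range next time
            continue
        a, b = cs[brackets[0]], cs[brackets[0] + 1]
        while b - a > tol:                  # plain bisection within the bracket
            m = 0.5 * (a + b)
            if np.sign(char_fn(f, m)) == np.sign(char_fn(f, a)):
                a = m
            else:
                b = m
        roots[i] = 0.5 * (a + b)
        # narrow the next scan window around the root just found
        lo, hi = max(c_min, 0.8 * roots[i]), min(c_max, 1.2 * roots[i])
    return roots
```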

A mercury detection system that includes a flow cell having a mercury sensor, a light source and a light detector is provided. The mercury sensor includes a transparent substrate and a submonolayer of mercury absorbing nanoparticles, e.g., gold nanoparticles, on a surface of the substrate. Methods of determining whether mercury is present in a sample using the mercury sensors are also provided. The subject mercury detection systems and methods find use in a variety of different applications, including mercury detecting applications.

This paper discusses generating three dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods, are proposed and investigated in detail. ... The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster is with a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail ... and the other two methods should be considered.

Several methods for generating variates with univariate and multivariate Wallenius' and Fisher's noncentral hypergeometric distributions are developed. Methods for the univariate distributions include: simulation of urn experiments, inversion by binary search, inversion by chop-down search from the mode, ratio-of-uniforms rejection method, and rejection by sampling in the tau domain. Methods for the multivariate distributions include: simulation of urn experiments, conditional method, Gibbs sampling, and Metropolis-Hastings sampling. These methods are useful for Monte Carlo simulation of models of biased sampling and models of evolution and for calculating moments and quantiles of the distributions.
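
As an illustration of the simplest of the univariate approaches listed above, the sketch below simulates the urn experiment for Wallenius' noncentral hypergeometric distribution: balls are taken one at a time without replacement, each remaining ball being selected with probability proportional to its weight. Function and parameter names are illustrative, not the paper's.

```python
import random

def wallenius_urn_sample(m1, m2, n, omega, rng=random):
    """One variate of the univariate Wallenius noncentral hypergeometric
    distribution by direct simulation of the urn experiment.

    m1, m2 : initial numbers of red and white balls (red balls have weight omega,
             white balls have weight 1)
    n      : number of balls taken, one at a time, without replacement
    Returns the number of red balls among the n taken.
    """
    red_left, white_left, x = m1, m2, 0
    for _ in range(n):
        w_red = red_left * omega            # total weight of remaining red balls
        w_white = float(white_left)         # total weight of remaining white balls
        if rng.random() * (w_red + w_white) < w_red:
            red_left -= 1
            x += 1
        else:
            white_left -= 1
    return x

# Monte Carlo use: averaging many draws estimates the distribution's moments.
draws = [wallenius_urn_sample(m1=20, m2=30, n=15, omega=2.0) for _ in range(10_000)]
print(sum(draws) / len(draws))
```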

Disclosed is a method for characterizing a sample having a structure disposed on or within the sample, comprising the steps of applying a first pulse of light to a surface of the sample for creating a propagating strain pulse in the sample, applying a second pulse of light to the surface so that the second pulse of light interacts with the propagating strain pulse in the sample, sensing from a reflection of the second pulse a change in optical response of the sample, and relating a time of occurrence of the change in optical response to at least one dimension of the structure.

In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
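
For readers unfamiliar with the statistical idea referred to above, the sketch below shows the generic self-normalized importance sampling estimator E_p[f] ≈ Σ w_i f(x_i), with weights w ∝ p/q computed from samples of a biased (proposal) density q. It illustrates the identity only; it is not the Hartmann-Schütte or Valsson-Parrinello implementation.

```python
import numpy as np

def importance_sampling_mean(f, log_p, log_q, sample_q, n=100_000, seed=0):
    """Estimate E_p[f] from samples of a proposal density q using
    self-normalized importance weights w = p/q (normalizing constants cancel)."""
    rng = np.random.default_rng(seed)
    x = sample_q(rng, n)                 # samples from the biased distribution q
    log_w = log_p(x) - log_q(x)          # log importance weights (unnormalized)
    w = np.exp(log_w - log_w.max())      # stabilize before normalizing
    w /= w.sum()
    return np.sum(w * f(x))

# Toy check: the mean of N(1, 1) estimated from samples of N(0, sqrt(2))
est = importance_sampling_mean(
    f=lambda x: x,
    log_p=lambda x: -0.5 * (x - 1.0) ** 2,
    log_q=lambda x: -0.25 * x ** 2,
    sample_q=lambda rng, n: rng.normal(0.0, np.sqrt(2.0), n),
)
print(est)  # close to 1.0
```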

There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids

Nanotechnology is the science and engineering of manipulating matter at the nanoscale, which can be used to create many new materials and devices with a vast range of applications. As nanotech products increasingly enter the commercial marketplace, nanometrology becomes a stringent and enabling technology for the manipulation and quality control of nanotechnology. However, many measuring instruments, for instance scanning probe microscopes, are limited to relatively small areas of hundreds of micrometers with very low efficiency. Therefore, intelligent sampling strategies are required to improve the scanning efficiency when measuring large areas. This paper presents a Gaussian process based intelligent sampling method to address this problem. The method makes use of Gaussian process based Bayesian regression as a mathematical foundation to represent the surface geometry, and the posterior estimate of the Gaussian process is computed by combining the prior probability distribution with the maximum likelihood function. Each sampling point is then adaptively selected by determining the candidate position that is most likely to lie outside the required tolerance zone, and this point is inserted to update the model iteratively. Simulations on both nominal and manufactured nano-structured surfaces have been conducted to verify the validity of the proposed method. The results imply that the proposed method significantly improves the measurement efficiency in measuring large area structured surfaces.
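
A minimal sketch of the adaptive selection step described above, assuming scikit-learn's GaussianProcessRegressor as the regression engine (an assumption; the paper does not name an implementation): the next measurement point is the candidate whose Gaussian process predictive distribution gives the highest probability of lying outside a symmetric tolerance zone of half-width tol around the nominal surface.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def pick_next_point(x_meas, z_meas, x_cand, tol):
    """Return the candidate position most likely to deviate beyond +/- tol
    from the nominal (zero) surface, given the measurements so far.

    x_meas : (n, d) measured positions,  z_meas : (n,) measured deviations
    x_cand : (m, d) candidate positions, tol    : tolerance half-width
    """
    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-6),
        normalize_y=True,
    )
    gp.fit(x_meas, z_meas)
    mu, sigma = gp.predict(x_cand, return_std=True)
    # probability that the true deviation at each candidate exceeds the tolerance zone
    p_out = norm.sf((tol - mu) / sigma) + norm.cdf((-tol - mu) / sigma)
    best = int(np.argmax(p_out))
    return x_cand[best], float(p_out[best])
```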

Annual sampling was conducted at the Rio Blanco, Colorado, site for the Long-Term Hydrologic Monitoring Program May 9-10, 2012, to monitor groundwater and surface water for potential radionuclide contamination. Sampling and analyses were conducted as specified in Sampling and Analysis Plan for the U.S. Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated). A duplicate sample was collected from location Johnson Artesian WL. Samples were analyzed for gamma-emitting radionuclides by high-resolution gamma spectrometry and for tritium using the conventional and enrichment methods. Results of this monitoring at the Rio Blanco site demonstrate that groundwater and surface water outside the site boundaries have not been affected by project-related contaminants.

Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
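
To make the contrast between two of the probability designs concrete, the toy sketch below draws a simple random sample and a proportional stratified sample from a hypothetical patient roster; the roster, its column names, and the sizes are invented purely for illustration.

```python
import pandas as pd

# Hypothetical roster of 1000 patients attending three cardiac clinics
roster = pd.DataFrame({
    "patient_id": range(1, 1001),
    "clinic": ["A"] * 500 + ["B"] * 300 + ["C"] * 200,
})

# Simple random sampling: every patient has the same chance of inclusion
srs = roster.sample(n=100, random_state=1)

# Proportional stratified sampling: 10% drawn independently within each clinic,
# so the sample mirrors the clinic proportions of the population
stratified = (
    roster.groupby("clinic", group_keys=False)
          .apply(lambda g: g.sample(frac=0.10, random_state=1))
)
print(srs["clinic"].value_counts())
print(stratified["clinic"].value_counts())
```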

This paper presents a design of experiment (DOE) for a laser surface modification process of AISI H13 tool steel aimed at achieving maximum hardness and minimum surface roughness over a range of modified layer depths. A Rofin DC-015 diffusion-cooled CO2 slab laser was used to process AISI H13 tool steel samples. Samples of 10 mm diameter were sectioned to 100 mm length in order to process a predefined circumferential area. The parameters selected for examination were laser peak power, overlap percentage and pulse repetition frequency (PRF). The response surface method with the Box-Behnken design approach in Design Expert 7 software was used to design the H13 laser surface modification process. Metallographic study and image analysis were done to measure the modified layer depth. The modified surface roughness was measured using a two-dimensional surface profilometer. The correlation of the three laser processing parameters with the modified surface properties was characterized by plotting three-dimensional graphs. The hardness properties were tested at 981 mN force. From the metallographic study, the laser modified surface depth was between 37 μm and 150 μm. The average surface roughness recorded from the 2D profilometry was at a minimum value of 1.8 μm. The maximum hardness achieved was between 728 and 905 HV0.1. These findings are significant to the modern development of hard coatings for wear resistant applications.

Dead-end ultrafiltration (DEUF) has been reported to be a simple, field-deployable technique for recovering bacteria, viruses, and parasites from large-volume water samples for water quality testing and waterborne disease investigations. While DEUF has been reported for application to water samples having relatively low turbidity, little information is available regarding recovery efficiencies for this technique when applied to sampling turbid water samples such as those commonly found in lakes and rivers. This study evaluated the effectiveness of a DEUF technique for recovering MS2 bacteriophage, enterococci, Escherichia coli, Clostridium perfringens, and Cryptosporidium parvum oocysts in surface water samples having elevated turbidity. Average recovery efficiencies for each study microbe across all turbidity ranges were: MS2 (66%), C. parvum (49%), enterococci (85%), E. coli (81%), and C. perfringens (63%). The recovery efficiencies for MS2 and C. perfringens exhibited an inversely proportional relationship with turbidity; however, no significant differences in recovery were observed for C. parvum, enterococci, or E. coli. Although ultrafilter clogging was observed, the DEUF method was able to process 100-L surface water samples at each turbidity level within 60 min. This study supports the use of the DEUF method for recovering a wide array of microbes in large-volume surface water samples having medium to high turbidity.

In the current work we propose a new method to sample surface sediment during bivalve fishing surveys. Fishing institutes all around the world carry out regular surveys with the aim of monitoring the stocks of commercial species. These surveys often comprise more than one hundred sampling stations and cover large geographical areas. Although superficial sediment grain sizes are among the main drivers of benthic communities and provide crucial information for studies on coastal dynamics, overall there is a strong lack of this type of data, possibly because traditional surface sediment sampling methods use grabs, which require considerable time and effort to be deployed on a regular basis or over large areas. In view of these aspects, we developed an easy and inexpensive method to sample superficial sediments during bivalve fisheries monitoring surveys without increasing survey time or human resources. The method was successfully evaluated and validated during a typical bivalve survey carried out on the northwest coast of Portugal, confirming that it did not interfere with the survey objectives. Furthermore, the method was validated by collecting samples using a traditional Van Veen grab (traditional method), which showed a grain size composition similar to that of the samples collected by the new method at the same localities. We recommend that the procedure be implemented on regular bivalve fishing surveys, together with an image analysis system to analyse the collected samples. The new method will provide a substantial quantity of data on surface sediment in coastal areas in an inexpensive and efficient manner, with a high potential for application in different fields of research.

Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
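
One of the computationally heavy steps alluded to above is evaluating lagged statistics of the high-frequency trace over many time lags; a common ingredient of SR analyses is the set of structure functions S_n(r) of the scalar signal at orders 2, 3 and 5 (the Van Atta inputs). The sketch below is a generic, vectorized NumPy version of that calculation under those assumptions; it is not the authors' specific algorithm.

```python
import numpy as np

def structure_functions(trace, lags, orders=(2, 3, 5)):
    """Structure functions S_n(j) = mean((T_i - T_{i-j})**n) of a high-frequency
    scalar trace, for each sample lag j in `lags` and each order n in `orders`."""
    t = np.asarray(trace, dtype=float)
    out = {}
    for j in lags:
        d = t[j:] - t[:-j]                          # lagged differences, vectorized
        out[j] = {n: float(np.mean(d ** n)) for n in orders}
    return out

# Example: 30 minutes of synthetic 10 Hz data, lags of 0.1 s to 1.0 s
rng = np.random.default_rng(0)
trace = np.cumsum(rng.normal(size=18_000)) * 0.01
sf = structure_functions(trace, lags=range(1, 11))
```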

The aim of this work was to obtain superhydrophobic surfaces in a plasma medium. The experiment was carried out using the PECVD method in two different modes: constant and pulsed. The surface roughness was obtained by applying nanoparticles synthesized in a plasma in a mixture of argon and methane. The resulting particles were deposited on the surfaces of silicon and glass materials. The contact angle increased linearly with the number of cycles until it reached 160° at the 150th-160th cycle; beyond that, further cycles did not affect the contact angle, since the process had reached saturation. The effect of the working gas composition on the hydrophobicity of the surface was also studied. At a low methane concentration (1%) only particles are synthesized in the working gas and the hydrophobicity is unstable; with an increase in methane concentration (7%), nanofilms are synthesized from nanoclusters and the surface hydrophobicity is relatively stable. In addition, a pulsed plasma mode was used to obtain superhydrophobic surfaces. The hydrophobicity of that sample showed that the strength of the nanofilm was stable in comparison with the sample obtained in the first mode, but the contact angle was lower. The obtained samples were examined using SEM, SPM and optical analysis, and their contact angles were determined.

to temporal factors. Paired T-test between pre- and post-disturbance samples suggested that the above methods of sampling and variables like TC, protein and TOC could be used for monitoring disturbance....

This study proposes a new adaptive method to enable the number of interrogation windows and their positions in a particle image velocimetry (PIV) image interrogation algorithm to become self-adapted according to the seeding density. The proposed method can relax the constraint of uniform sampling rate and uniform window size commonly adopted in the traditional PIV algorithm. In addition, the positions of the sampling points are redistributed on the basis of the spring force generated by the sampling points. The advantages include control of the number of interrogation windows according to the local seeding density and smoother distribution of sampling points. The reliability of the adaptive sampling method is illustrated by processing synthetic and experimental images. The synthetic example attests to the advantages of the sampling method. Compared with that of the uniform interrogation technique in the experimental application, the spatial resolution is locally enhanced when using the proposed sampling method. (technical design note)

This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards questions). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness.
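
As background for the kind of quantity such sample-size methods target, the sketch below computes the classical attribute-sampling detection probability, i.e. the chance that a simple random sample of n items from a stratum of N items contains at least one of d assumed defects, and searches for the smallest n meeting a target probability. This is a generic textbook calculation, not the paper's systematic/random error models.

```python
from math import comb

def detection_probability(N, d, n):
    """P(a simple random sample of size n contains at least one of d defective items)."""
    if n > N - d:
        return 1.0
    return 1.0 - comb(N - d, n) / comb(N, n)

def required_sample_size(N, d, target=0.95):
    """Smallest sample size n achieving the target detection probability."""
    for n in range(1, N + 1):
        if detection_probability(N, d, n) >= target:
            return n
    return N

# e.g. stratum of 200 items, 10 assumed defective, 95% detection probability wanted
print(required_sample_size(N=200, d=10, target=0.95))
```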

This report summarizes previous laboratory studies to characterize the performance of methods for collecting, storing/transporting, processing, and analyzing samples from surfaces contaminated by Bacillus anthracis or related surrogates. The focus is on plate culture and count estimates of surface contamination for swab, wipe, and vacuum samples of porous and nonporous surfaces. Summaries of the previous studies and their results were assessed to identify gaps in information needed as inputs to calculate key parameters critical to risk management in biothreat incidents. One key parameter is the number of samples needed to make characterization or clearance decisions with specified statistical confidence. Other key parameters include the ability to calculate, following contamination incidents, the (1) estimates of Bacillus anthracis contamination, as well as the bias and uncertainties in the estimates, and (2) confidence in characterization and clearance decisions for contaminated or decontaminated buildings. Gaps in knowledge and understanding identified during the summary of the studies are discussed and recommendations are given for future studies.

At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than the classical analytical techniques now in use.

Radon-222 is a naturally occurring radioactive gas in the uranium-238 decay series that has traditionally been called, simply, radon. The lung cancer risks associated with the inhalation of radon decay products have been well documented by epidemiological studies on populations of uranium miners. The realization that radon is a public health hazard has raised the need for sampling and analytical guidelines for field personnel. Several sampling and analytical methods are being used to document radon concentrations in ground water and surface water worldwide but no convenient, single set of guidelines is available. Three different sampling and analytical methods - bubbler, liquid scintillation, and field screening - are discussed in this paper. The bubbler and liquid scintillation methods have high accuracy and precision, and small analytical method detection limits of 0.2 and 10 pCi/l (picocuries per liter), respectively. The field screening method generally is used as a qualitative reconnaissance tool.

The performance of superconducting radio frequency niobium cavities is sometimes limited by contamination present on the cavity surface. In recent years extensive research has been done to enhance the cavity performance by applying improved surface treatments such as mechanical grinding, electropolishing (EP), chemical polishing, tumbling, etc., followed by various rinsing methods such as ultrasonic pure water rinse, alcoholic rinse, high pressure water rinse, hydrogen peroxide rinse, etc. Although good cavity performance has been obtained lately by various post-EP cleaning methods, the detailed nature of the surface contaminants is still not fully characterized, and further efforts in this area are desired. Prior x-ray photoelectron spectroscopy (XPS) analyses of EPed niobium samples treated with fresh EP acid demonstrated that the surfaces were covered mainly with niobium oxide (Nb2O5) along with carbon; in addition, small quantities of sulfur and fluorine were also found in secondary ion mass spectroscopy (SIMS) analysis. In this article, the authors present the analyses of surface contamination for a series of EPed niobium samples located at various positions of a single cell niobium cavity followed by ultrapure water rinsing, as well as their endeavor to understand the aging effect of the EP acid solution in terms of the contamination present at the inner surface of the cavity, with the help of surface analytical tools such as XPS, SIMS, and scanning electron microscopy at KEK.

Systems and methods for self-synchronized data sampling are provided. In one embodiment, a system for capturing synchronous data samples is provided. The system includes an analog to digital converter adapted to capture signals from one or more sensors and convert the signals into a stream of digital data samples at a sampling frequency determined by a sampling control signal; and a synchronizer coupled to the analog to digital converter and adapted to receive a rotational frequency signal from a rotating machine, wherein the synchronizer is further adapted to generate the sampling control signal, and wherein the sampling control signal is based on the rotational frequency signal.

DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) provides applicable methods in use by the US Department of Energy (DOE) laboratories for sampling and analyzing constituents of waste and environmental samples. The development of DOE Methods is supported by the Laboratory Management Division (LMD) of the DOE. This document contains chapters and methods that are proposed for use in evaluating components of DOE environmental and waste management samples. DOE Methods is a resource intended to support sampling and analytical activities that will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the US Environmental Protection Agency (EPA), or others.

Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...

Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

A multielement method of atomic fluorescence analysis of environmental samples is suggested, based on sample decomposition by autoclave fluorination and gas-phase atomization of volatile compounds in an inductive argon plasma using a nondispersive polychromator. Detection limits of some elements (Be, Sr, Cd, V, Mo, Te, Ru, etc.) for different sample forms introduced into the analyzer are given.

Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation have encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of the measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparing the new method with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, the relative accuracy was 99.50-99.89%, and the reproducibility RSD was 0.45-0.89%, with the weighted incidence degree exceeding that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, and can be put into use in drug analysis.

Marketing and statistical literature available to practitioners provides a wide range of sampling methods that can be implemented in the context of marketing research. The ranking sampling method is based on dividing the general population into several strata, namely into several subdivisions that are relatively homogeneous regarding a certain characteristic. In fact, the sample will be composed by selecting, from each stratum, a certain number of components (which can be proportional or non-proportional to the size of the stratum) until the pre-established volume of the sample is reached. Using ranking sampling within marketing research requires the determination of some relevant statistical indicators - average, dispersion, sampling error, etc. To that end, the paper contains a case study which illustrates the actual approach used in order to apply the ranking sample method within a marketing research study made by a company which provides Internet connection services, on a particular category of customers – small and medium enterprises.

concentrations and bacteriological content. Evaluation of the results ... and Aninri local government areas of Enugu state. Surface water ... surface water bodies are prone to impacts from ... Coal Measures (Akamigbo, 1987). The geologic map ...

A method is presented for a more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by the ''essential dynamics'' analysis. A form of

There is little published guidance that systematically evaluates the different methods of neonatal blood gas sampling, where each method has its individual benefits and risks. This review critically surveys the available evidence to generate a comparison between arterial and capillary blood gas sampling, focusing on their ...

The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that can be used in retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

The flexibility of mixed methods research strategies makes such approaches especially suitable for multisite case studies. Yet the utilization of mixed methods to select sites for these studies is rarely reported. The authors describe their pragmatic mixed methods approach to select a sample for their multisite mixed methods case study of a…

At NMCC (Nishina Memorial Cyclotron Center) we conduct research on PET (positron emission computed tomography) in nuclear medicine and PIXE (particle induced X-ray emission) analysis using a compactly designed small cyclotron. The NMCC facilities have been opened to researchers of other institutions since April 1993. The present status of NMCC is described. Bio-samples (medical samples, plants, animals and environmental samples) have mainly been analyzed by PIXE at NMCC. Small amounts of bio-samples for PIXE are decomposed quickly and easily in a sealed PTFE (polytetrafluoroethylene) vessel with a microwave oven. This sample preparation method for bio-samples is also described. (author)

Specific surface areas of antimony oxide samples, one commercial, the other prepared from antimony trichloride were measured by heterogeneous isotope exchange, gas adsorption, air permeability and microscopic methods. Specific surface areas obtained by these four methods for the two samples were compared and the observed differences are explained.

Methods for sampling ecological assemblages strive to be efficient, repeatable, and representative. Unknowingly, common methods may be limited in terms of revealing species function and so of less value for comparative studies. The global decline in pollination services has stimulated surveys of flower-visiting invertebrates, using pan traps and net sampling. We explore the relative merits of these two methods in terms of species discovery, quantifying abundance, function, and composition, and responses of species to changing floral resources. Using a spatially-nested design we sampled across a 5000 km² area of arid grasslands, including 432 hours of net sampling and 1296 pan trap-days, between June 2010 and July 2011. Net sampling yielded 22% more species and 30% higher abundance than pan traps, and better reflected the spatio-temporal variation of floral resources. Species composition differed significantly between methods; from 436 total species, 25% were sampled by both methods, 50% only by nets, and the remaining 25% only by pans. Apart from being less comprehensive, if pan traps do not sample flower-visitors, the link to pollination is questionable. By contrast, net sampling functionally linked species to pollination through behavioural observations of flower-visitation interaction frequency. Netted specimens are also necessary for evidence of pollen transport. Benefits of net-based sampling outweighed minor differences in overall sampling effort. As pan traps and net sampling methods are not equivalent for sampling invertebrate-flower interactions, we recommend net sampling of invertebrate pollinator assemblages, especially if datasets are intended to document declines in pollination and guide measures to retain this important ecosystem service.

The objective of the present study was to evaluate and compare the efficiency of a filter-based sampling method and a high-volume sampling method for sampling airborne culturable fungi present in waste sorting facilities. The membrane filter (MF) method was compared with the surface air system (SAS) method. The selected sampling methods were modified and tested in 2 plastic waste sorting facilities. The total number of colony-forming units (CFU)/m³ of airborne fungi was dependent on the type of sampling device, on the time of sampling, which was carried out every hour from the beginning of the work shift, and on the type of cultivation medium. The concentration of airborne fungi ranged from 2×10² to 1.7×10⁶ CFU/m³ when using the MF method, and from 3×10² to 6.4×10⁴ CFU/m³ when using the SAS method. Both methods showed comparable sensitivity to the fluctuations of the concentrations of airborne fungi during the work shifts. The SAS method is adequate for a fast, indicative determination of the concentration of airborne fungi. The MF method is suitable for thorough assessment of working environment contamination by airborne fungi. Therefore we recommend the MF method for the implementation of a uniform standard methodology of airborne fungi sampling in working environments of waste treatment facilities.

We consider the direct sampling method (DSM) for the two-dimensional inverse scattering problem. Although DSM is fast, stable, and effective, some phenomena remain unexplained by the existing results. We show that the imaging function of the direct sampling method can be expressed by a Bessel function of order zero. We also clarify the previously unexplained imaging phenomena and suggest multi-frequency DSM to overcome the limitations of traditional DSM. Our method is evaluated in simulation studies using both single and multiple frequencies.
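
The sketch below is a generic direct-sampling-type indicator for 2D far-field data; it is not necessarily the exact formulation analyzed in the paper, and the sign in the test function depends on the far-field convention assumed. For a point-like scatterer the sum over observation directions behaves, up to constants, like J0(k|z - z0|), which is the Bessel-function-of-order-zero behavior the abstract refers to.

```python
import numpy as np

def dsm_indicator(u_far, obs_dirs, k, grid):
    """Direct-sampling-type indicator on a grid of sampling points.

    u_far    : (N,) complex far-field values at N observation directions
    obs_dirs : (N, 2) unit observation direction vectors
    k        : wavenumber
    grid     : (M, 2) sampling points z
    Returns the normalized indicator |sum_j u_far[j] * exp(i k x_hat_j . z)|,
    which peaks on the scatterer support.
    """
    phases = np.exp(1j * k * grid @ obs_dirs.T)   # (M, N) plane-wave test functions
    vals = np.abs(phases @ u_far)                 # inner products over directions
    return vals / vals.max()

# Toy use: far field of a point-like scatterer at z0 = (0.5, 0.0) (up to constants)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
dirs = np.column_stack([np.cos(t), np.sin(t)])
k, z0 = 10.0, np.array([0.5, 0.0])
u = np.exp(-1j * k * dirs @ z0)
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 81), np.linspace(-1, 1, 81)), -1).reshape(-1, 2)
indicator = dsm_indicator(u, dirs, k, grid)       # maximum lands near z0
```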

Wipe sampling is an important technique for the estimation of contaminant deposition in buildings, homes, or outdoor surfaces as a source of possible human exposure. Numerous methods of wipe sampling exist, and each method has its own specification for the type of wipe, we...

Synthetic aperture imaging methods have been employed widely in recent research in non-destructive testing (NDT), but uptake has been more limited in medical ultrasound imaging. Typically offering superior focussing power over more traditional phased array methods, these techniques have been employed in NDT applications to locate and characterise small defects within large samples, but have rarely been used to image surfaces. A desire to ultimately employ ultrasonic surface imaging for bone surface geometry measurement prior to surgical intervention motivates this research, and results are presented for initial laboratory trials of a surface reconstruction technique based on global thresholding of ultrasonic 3D point cloud data. In this study, representative geometry artefacts were imaged in the laboratory using two synthetic aperture techniques; the Total Focusing Method (TFM) and the Synthetic Aperture Focusing Technique (SAFT) employing full and narrow synthetic apertures, respectively. Three high precision metallic samples of known geometries (cuboid, sphere and cylinder) which featured a range of elementary surface primitives were imaged using a 5 MHz, 128 element 1D phased array employing both SAFT and TFM approaches. The array was manipulated around the samples using a precision robotic positioning system, allowing for repeatable ultrasound derived 3D surface point clouds to be created. A global thresholding technique was then developed that allowed the extraction of the surface profiles, and these were compared with the known geometry samples to provide a quantitative measure of error of 3D surface reconstruction. The mean errors achieved with optimised SAFT imaging for the cuboidal, spherical and cylindrical samples were 1.3 mm, 2.9 mm and 2.0 mm respectively, while those for TFM imaging were 3.7 mm, 3.0 mm and 3.1 mm, respectively. These results were contrary to expectations given the higher information content associated with the TFM images. However, it was
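
For readers unfamiliar with how TFM forms an image, the sketch below is a generic delay-and-sum implementation for full matrix capture (FMC) data under simplifying assumptions (a linear array at z = 0, constant sound speed, nearest-sample delays, no analytic-signal envelope); it is not the authors' code, and names such as fmc and elem_x are illustrative.

```python
import numpy as np

def tfm_image(fmc, elem_x, fs, c, grid_x, grid_z):
    """Total Focusing Method: delay-and-sum of full matrix capture data.

    fmc     : (n_el, n_el, n_t) time traces for every transmit/receive pair
    elem_x  : (n_el,) element x-positions, array assumed to lie on z = 0
    fs      : sampling frequency [Hz],  c : sound speed [m/s]
    grid_x, grid_z : 1D arrays defining the image grid
    """
    n_el, _, n_t = fmc.shape
    tx = np.arange(n_el)[:, None]
    rx = np.arange(n_el)[None, :]
    img = np.zeros((grid_z.size, grid_x.size))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)                        # element-to-pixel distances
            idx = np.rint((d[:, None] + d[None, :]) / c * fs)  # round-trip delays in samples
            idx = np.clip(idx.astype(int), 0, n_t - 1)
            img[iz, ix] = abs(fmc[tx, rx, idx].sum())          # sum over all tx/rx pairs
    return img
```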

DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Laboratory Management Division of the DOE. Methods are prepared for entry into DOE Methods as chapter editors, together with DOE and other participants in this program, identify analytical and sampling method needs. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations

A system for sampling a surface includes a surface sampling probe comprising a solvent liquid supply conduit and a distal end, and a sample collector for suspending a sample collection liquid adjacent to the distal end of the probe. A first electrode provides a first voltage to solvent liquid at the distal end of the probe. The first voltage produces a field sufficient to generate an electrospray plume at the distal end of the probe. A second electrode provides a second voltage and is positioned to produce a plume-directing field sufficient to direct the electrospray droplets and ions to the suspended sample collection liquid. The second voltage is less than the first voltage in absolute value. A voltage supply system supplies the voltages to the first electrode and the second electrode. The first electrode can apply the first voltage directly to the solvent liquid. A method for sampling a surface is also disclosed.

Samples of particulate matter (PM) collected in the city of Milan during wintertime were analyzed by X-ray photoelectron spectroscopy (XPS), thermal optical transmittance (TOT), ionic chromatography (IC) and X-ray fluorescence (XRF) in order to compare quantitative bulk analysis and surface analysis. In particular, the analysis of surface carbon is presented here following a new approach to the C1s curve fitting; this work aims to prove the capability of XPS to discriminate between elemental carbon (EC) and organic carbon (OC) and to quantify the carbon-based compounds that might be present in the PM. Since the surface of urban PM is found to be rich in carbon, it is important to be able to distinguish between the different species. XPS results indicate that aromatic and aliphatic species are adsorbed on the PM surface. Higher concentrations of EC are present in the bulk. Nitrogen and sulfur were also detected on the surfaces, and a qualitative and quantitative analysis is provided. The surface concentration of sulfate ion is equal to that found by bulk analysis; moreover, surface analysis shows an additional signal due to organic sulfur not detectable by the other methods. The surface also appears to be enriched in nitrogen.

Aerosil samples, heat treated and then silylated with various silanes at various temperatures, have been characterised by adsorption of ethanol at 293 K. Adsorption isotherms were plotted and the BET specific surface areas were determined. Contact angles were measured by the captive bubble method at the three-phase contact line in ethanol, on glass slides similarly modified. Silylation was found to alter the ethanol adsorptive properties of the aerosil and to increase the contact angles on the glass slides to extents that depend on the silane used as well as the concentration of residual silanols and that of surface silyl groups.

We consider the inverse elastic scattering of incident plane compressional and shear waves from the knowledge of the far field patterns. Specifically, three direct sampling methods for location and shape reconstruction are proposed using different components of the far field patterns. Only inner products are involved in the computation, so the novel sampling methods are very simple and fast to implement. With the help of the factorization of the far field operator, we give a lower bound of the proposed indicator functionals for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functionals decay like Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functionals depend continuously on the far field patterns, which further implies that the novel sampling methods are extremely stable with respect to data error. For the case when the observation directions are restricted to a limited aperture, we first introduce some data retrieval techniques to obtain the data that cannot be measured directly and then use the proposed direct sampling methods for location and shape reconstruction. Finally, some numerical simulations in two dimensions are conducted with noisy data, and the results further verify the effectiveness and robustness of the proposed sampling methods, even for multiple multiscale cases and limited-aperture problems.

In this paper, we study the generation of maximal Poisson-disk sets with varying radii on surfaces. Based on the concepts of power diagram and regular triangulation, we present a geometric analysis of gaps in such disk sets on surfaces, which

Relative efficiencies of standard dip-net sampling (SDN) versus collections of surface-floating pupal exuviae (SFPE) were determined for detecting Chironomidae at catchment and site scales and at subfamily/tribe, genus and species levels, based on simultaneous, equal-effort sampling on a monthly basis for one year during a biodiversity assessment of Bear Run Nature Reserve. Results showed SFPE was more efficient than SDN at catchment scales for detecting both genera and species. At site scales, SDN sampling was more efficient for assessment of a first-order site. No consistent pattern, except for better efficiency of SFPE to detect Orthocladiinae genera, was observed at genus level for two second-order sites. However, SFPE was consistently more efficient at detecting species of Orthocladiinae, Chironomini and Tanytarsini at the second-order sites. SFPE was more efficient at detecting both genera and species at two third-order sites. The differential efficiencies of the two methods are concluded to be related to stream order and size, substrate size, flow and water velocity, depth and habitat heterogeneity, and the differential ability to discriminate species among pupal exuviae specimens versus larval specimens. Although both approaches are considered necessary for comprehensive biodiversity assessments of Chironomidae, our results suggest that there is an optimal, but different, allocation of sampling effort for detecting Chironomidae across stream orders and at differing spatial and taxonomic scales.

Poisson disk sampling plays an important role in a variety of visual computing applications, due to its useful statistical properties in distribution and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has addressed the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
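
To convey the priority idea in its simplest form, the sketch below is a serial, planar analogue in the unit square (the paper itself works with the intrinsic metric on surfaces and processes candidates in parallel): every candidate dart receives a random priority, and processing candidates in priority order while rejecting any dart that conflicts with an already accepted one reproduces, sequentially, the conflict-resolution rule applied among simultaneous candidates. Names and parameters are illustrative.

```python
import numpy as np

def poisson_disk_by_priority(n_candidates, r, seed=0):
    """Serial sketch of priority-based dart throwing in the unit square.

    Each candidate dart gets a random, unique priority; darts are processed in
    priority order and accepted only if no previously accepted dart lies within
    distance r, mimicking the conflict resolution of the parallel algorithm.
    """
    rng = np.random.default_rng(seed)
    candidates = rng.random((n_candidates, 2))
    order = np.argsort(rng.random(n_candidates))   # random, distribution-unbiased priorities
    accepted = []
    for i in order:
        p = candidates[i]
        if all(np.hypot(*(p - q)) >= r for q in accepted):
            accepted.append(p)
    return np.array(accepted)

# Approaches a maximal disk set as the number of candidates grows
points = poisson_disk_by_priority(5000, r=0.05)
```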

Non-proliferation and International Security (NA-241) established a working group of researchers from Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) to evaluate the utilization of in-field mass spectrometry for safeguards applications. The survey of commercial off-the-shelf (COTS) mass spectrometers (MS) revealed that no instrumentation exists capable of meeting all the potential safeguards requirements for performance, portability, and ease of use. Additionally, fieldable instruments are unlikely to meet the International Target Values (ITVs) for accuracy and precision for isotope ratio measurements achieved with laboratory methods. The major gaps identified for in-field actinide isotope ratio analysis were in the areas of: 1. sample preparation and/or sample introduction, 2. size reduction of mass analyzers and ionization sources, 3. system automation, and 4. decreased system cost. Development work on items 2 through 4, enumerated above, continues in the private and public sectors. LANL is focusing on developing sample preparation/sample introduction methods for use with the different sample types anticipated for safeguards applications. Addressing sample handling and sample preparation methods for MS analysis will enable use of new MS instrumentation as it becomes commercially available. As one example, we have developed a rapid sample preparation method for dissolution of uranium and plutonium oxides using ammonium bifluoride (ABF). ABF is a significantly safer and faster alternative to digestion with boiling combinations of highly concentrated mineral acids. Actinides digested with ABF yield fluorides, which can then be analyzed directly or chemically converted and separated using established column chromatography techniques as needed prior to isotope analysis. The reagent volumes and the sample processing steps associated with ABF sample digestion lend themselves to automation and field ...

In this work we present a novel sampling method for time harmonic inverse medium scattering problems. It provides a simple tool to directly estimate the shape of the unknown scatterers (inhomogeneous media), and it is applicable even when ...

In this paper, we study the inverse electromagnetic medium scattering problem of estimating the support and shape of medium scatterers from scattered electric/magnetic near-field data. We shall develop a novel direct sampling method based ...

...point-centred-quarter method. The parameter which was most efficiently sampled was species composition (relative density), with 90% replicate similarity being achieved with 100 point-centred quarters. However, this technique cannot be recommended, even ...

Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to ...

Sample preparation is considered one of the limiting steps in microbial metabolome analysis. Eukaryotes and prokaryotes behave very differently during the several steps of classical sample preparation methods for analysis of metabolites. Even within the eukaryote kingdom there is a vast diversity...

A new sampling method for down coarse woody debris is proposed based on limiting the perpendicular distance from individual pieces to a randomly chosen sample point. Two approaches are presented that allow different protocols to be used to determine field measurements; estimators for each protocol are also developed. Both protocols are compared via simulation against ...

Many new methods for sampling down coarse woody debris have been proposed in the last dozen or so years. One of the most promising in terms of field application, perpendicular distance sampling (PDS), has several variants that have been progressively introduced in the literature. In this study, we provide an overview of the different PDS variants and comprehensive...

Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral. The new approach is based on sampling methods. Three different sampling methods were used to approximate the function: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH). Different argument values, covering a wide range, were used. The results of the sampling methods were compared with results obtained by Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
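
A minimal sketch of the sampling idea, using only the Latin hypercube variant: the exponential integral is rewritten as an integral over the unit interval and estimated from LHS points, with scipy's exp1 standing in for the Mathematica benchmark. The change of variables and parameter choices are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from scipy.stats import qmc
from scipy.special import exp1            # reference value, analogous to the Mathematica benchmark

def well_function_lhs(x, n_samples=10_000, seed=0):
    """Approximate the exponential integral E1(x) (the Theis well function)
    from Latin hypercube samples of the transformed integral
    E1(x) = int_0^1 exp(-x/u) / u du."""
    sampler = qmc.LatinHypercube(d=1, seed=seed)
    u = sampler.random(n_samples).ravel()
    u = np.clip(u, 1e-12, 1.0)             # avoid division by zero at u = 0
    return np.mean(np.exp(-x / u) / u)     # sample average over the unit interval

x = 0.5
print(well_function_lhs(x), exp1(x))       # LHS estimate vs reference (~0.5598)
```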

The invention is aimed at simplifying the technology and saving energy in modifying surfaces with the aid of electron beams. The described beam-object geometry makes it possible to dispense with additional heat treatments. It can be used for surface hardening.

The authors present a simple method for determining possible inhomogeneity of thin samples in wavelength-dispersive XRF analysis, after prior examination of the intensity distribution of the exciting radiation on the sample's surface. Investigations were carried out using microsamples of mono- and polycrystals as an example. Samples were prepared by digesting the analysed material directly on the substrate. The obtained results are presented graphically. (author)

The nuclear reaction analysis technique is mainly based on the relative method or the use of activation cross sections. In order to validate nuclear data for the calculated cross sections evaluated from systematic studies, we used the neutron activation analysis technique (NAA) to determine the concentrations of the various constituents of certified samples of animal blood, milk and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, which is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, it allows a measurement as accurate as the relative method. The results obtained by the absolute method showed that the values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
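
For reference, one common form of the activation equation underlying the absolute method is sketched below; the notation is generic and assumed here, not quoted from the abstract.

```latex
% Standard activation equation (generic notation, assumed for illustration):
% A_0    : activity at the end of irradiation
% m      : mass of the element, M : molar mass, theta : isotopic abundance
% N_A    : Avogadro's number, sigma : activation cross section, phi : neutron flux
% lambda : decay constant, t_irr : irradiation time
\[
  A_0 \;=\; \frac{m\,\theta\,N_A}{M}\,\sigma\,\phi\,
            \bigl(1 - e^{-\lambda t_{\mathrm{irr}}}\bigr)
\]
% The elemental mass then follows from the measured activity (after decay and
% counting corrections) once sigma and phi are known, which is the absolute method.
```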

DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) is a resource intended to support sampling and analytical activities for the evaluation of environmental and waste management samples from U.S. Department of Energy (DOE) sites. DOE Methods is the result of extensive cooperation from all DOE analytical laboratories. All of these laboratories have contributed key information and provided technical reviews as well as significant moral support leading to the success of this document. DOE Methods is designed to encompass methods for collecting representative samples and for determining the radioisotope activity and organic and inorganic composition of a sample. These determinations will aid in defining the type and breadth of contamination and thus determine the extent of environmental restoration or waste management actions needed, as defined by the DOE, the U.S. Environmental Protection Agency, or others. The development of DOE Methods is supported by the Analytical Services Division of DOE. Unique methods or methods consolidated from similar procedures in the DOE Procedures Database are selected for potential inclusion in this document. Initial selection is based largely on DOE needs and procedure applicability and completeness. Methods appearing in this document are one of two types, "Draft" or "Verified". "Draft" methods that have been reviewed internally and show potential for eventual verification are included in this document, but they have not been reviewed externally, and their precision and bias may not be known. "Verified" methods in DOE Methods have been reviewed by volunteers from various DOE sites and private corporations. These methods have delineated measures of precision and accuracy.

On many occasions, sample treatment is a critical step in electrophoretic analysis. As an alternative to batch procedures, in this work a new strategy is presented with a view to developing an on-capillary sample cleanup method. This strategy is based on partial filling of the capillary with carboxylated single-walled carbon nanotubes (c-SWNTs). The nanoparticles retain interferences from the matrix, allowing the determination and quantification of carbohydrates (viz. glucose, maltose and fructose). The precision of the method for the analysis of real samples ranged from 5.3 to 6.4%. The proposed method was compared with a method based on batch filtration of the juice sample through diatomaceous earth and subsequent electrophoretic determination. This method was also validated in this work. The RSD for this other method ranged from 5.1 to 6%. The results obtained by both methods were statistically comparable, demonstrating the accuracy and effectiveness of the proposed methods. Electrophoretic separation of carbohydrates was achieved using 200 mM borate solution as a buffer at pH 9.5 and applying 15 kV. During separation, the capillary temperature was kept constant at 40 °C. For the on-capillary cleanup method, a solution containing 50 mg/L of c-SWNTs prepared in 300 mM borate solution at pH 9.5 was introduced into the capillary for 60 s just before sample introduction. For the electrophoretic analysis of samples cleaned in batch with diatomaceous earth, it is also recommended to introduce into the capillary, just before the sample, a 300 mM borate solution, as it enhances the sensitivity and electrophoretic resolution.

NAA is a testing method that has not been standardized. To confirm that this method is valid, it must be validated with various standard reference materials. In this work, validation was carried out for food product samples using NIST SRM 1567a (wheat flour) and NIST SRM 1568a (rice flour). The results show that the validation of the method for testing nine elements (Al, K, Mg, Mn, Na, Ca, Fe, Se and Zn) in SRM 1567a and eight elements (Al, K, Mg, Mn, Na, Ca, Se and Zn) in SRM 1568a passes the tests of accuracy and precision. It can be concluded that this method has the power to give valid results in the determination of elements in food product samples. (author)

The alias method is a Monte Carlo sampling technique that offers significant advantages over more traditional methods. It equals the accuracy of table lookup and the speed of equal-probable bins. The original formulation of this method sampled from discrete distributions and was easily extended to histogram distributions. We have extended the method further to applications more germane to Monte Carlo particle transport codes: continuous distributions. This paper presents the alias method as originally derived and our extensions to simple continuous distributions represented by piecewise linear functions. We also present a method to interpolate accurately between distributions tabulated at points other than the point of interest. We present timing studies that demonstrate the method's increased efficiency over table lookup and show further speedup achieved through vectorization. 6 refs., 12 figs., 2 tabs
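
A minimal sketch of the discrete alias method (table construction plus sampling) is given below; the paper's extension to continuous, piecewise-linear distributions is not reproduced, and the helper names are illustrative.

```python
import random

def build_alias_table(probs):
    """Construct the alias table (Walker/Vose scheme) for a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l            # bin s keeps scaled[s], overflow goes to l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                          # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def alias_sample(prob, alias):
    """Draw one index: pick an equal-probability bin, then accept it or take its alias."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

prob, alias = build_alias_table([0.1, 0.2, 0.3, 0.4])
counts = [0, 0, 0, 0]
for _ in range(100_000):
    counts[alias_sample(prob, alias)] += 1
print(counts)    # roughly proportional to 0.1 : 0.2 : 0.3 : 0.4
```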

A soil sampler includes a fluidized bed for receiving a soil sample. The fluidized bed may be in communication with a vacuum for drawing air through the fluidized bed and suspending particulate matter of the soil sample in the air. In a method of sampling, the air may be drawn across a filter, separating the particulate matter. Optionally, a baffle or a cyclone may be included within the fluidized bed for disentrainment, or dedusting, so only the finest particulate matter, including asbestos, will be trapped on the filter. The filter may be removable, and may be tested to determine the content of asbestos and other hazardous particulate matter in the soil sample.

Body odor sampling is an essential tool in human chemical ecology research. However, methodologies of individual studies vary widely in terms of sampling material, length of sampling, and sample processing. Although these differences might have a critical impact on results obtained, almost no studies test validity of current methods. Here, we focused on the effect of freezing samples between collection and use in experiments involving body odor perception. In 2 experiments, we tested whether axillary odors were perceived differently by raters when presented fresh or having been frozen and whether several freeze-thaw cycles affected sample quality. In the first experiment, samples were frozen for 2 weeks, 1 month, or 4 months. We found no differences in ratings of pleasantness, attractiveness, or masculinity between fresh and frozen samples. Similarly, almost no differences between repeatedly thawed and fresh samples were found. We found some variations in intensity; however, this was unrelated to length of storage. The second experiment tested differences between fresh samples and those frozen for 6 months. Again no differences in subjective ratings were observed. These results suggest that freezing has no significant effect on perceived odor hedonicity and that samples can be reliably used after storage for relatively long periods.

When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. It is difficult to approach this kind of population with survey methodology because the response rate is low and its members are not entirely honest in their responses when probability sampling is used. The only known alternative that addresses the problems of previous methods such as snowball sampling is respondent-driven sampling (RDS), which was developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the seed dependence of RDS chain-referral sampling tends to diminish as the sample gets bigger, and it stabilizes as the waves progress. This shows that the final sample information can be completely independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it should be applied to a variety of domestic cases as well.

One of the many unexpected observations of asteroid 433 Eros by the Near Earth Asteroid Rendezvous (NEAR) mission was the many ponds of fine-grained materials [1-3]. The ponds have smooth surfaces and define equipotential surfaces up to tens of meters in diameter [4]. The ponds have a uniformly sub-cm grain size and appear to be cohesive or indurated to some degree, as revealed by slumping. The ponds appear to be concentrated within 30 degrees of the equator of Eros, where gravity is lowest. There is some insight into the mineralogy and composition of the pond surfaces from NEAR spectroscopy [2,4,5,6]. Compared to the bulk asteroid, ponds: (1) are distinctly bluer (high 550/760 nm ratio), (2) have a deeper 1 μm mafic band, and (3) have reflectance elevated by 5%.

We developed a new sampling system, the Nano Catcher, for measuring the surface chemical structure of polymers or industrial products and we evaluated the performance of the system. The system can directly pick up surface species whose depth is on the order of approximately 100 nm and can easily provide a sample for a Fourier transform infrared (FT-IR) system without the necessity of passing it over to a measurement plate. The FT-IR reflection data obtained from the Nano Catcher were compared with those obtained using the attenuated total reflection (ATR) method and sampling by hand. Chemical structural analysis of a depth region from a few tens of nanometers to a few hundred nanometers can be directly performed using this system. Such depths are beyond the scope of conventional X-ray photoelectron spectroscopy (XPS) and ATR methods. We can expect the use of the Nano Catcher system to lead to a great improvement in the detection of signals of surface species in these depth regions.

The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, the choice of methods is critical. The methods used to sample snakes often lack objective criteria, and the traditional methods have apparently carried more weight when making the choice. Consequently, studies using non-standardized methods are frequently found in the literature. We compared four commonly used methods for sampling snake assemblages in a semiarid area in Brazil. We compared the efficacy of each method based on the cost-benefit regarding the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects evaluated, and they were not complementary to the other methods in terms of species abundance and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.

This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing coldwater and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. The book also provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent the transfer of invasive species while sampling.

A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of the signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence, even for multiple scatterers. Numerical simulations are presented to show the good performance of the proposed method. (paper)
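
To make the subspace idea concrete, the sketch below implements a classical single-node MUSIC indicator for small point scatterers in 2-D under the Born approximation, not the combinatorial multi-dimensional variant proposed in the paper; the wavenumber, array layout and scatterer positions are assumptions for illustration.

```python
import numpy as np
from scipy.special import hankel1

k = 10.0                                                   # wavenumber (assumed)

def green(x, y):
    """2-D Helmholtz free-space Green's function."""
    return 0.25j * hankel1(0, k * np.linalg.norm(x - y))

# co-located transmit/receive array on a circle of radius 5 (assumed geometry)
n_array = 32
angles = 2 * np.pi * np.arange(n_array) / n_array
array_pts = 5.0 * np.column_stack([np.cos(angles), np.sin(angles)])

scatterers = np.array([[0.3, -0.2], [-0.5, 0.4]])          # small point scatterers (assumed)
K = np.zeros((n_array, n_array), dtype=complex)            # multi-static response matrix
for z in scatterers:
    g = np.array([green(x, z) for x in array_pts])
    K += np.outer(g, g)                                    # Born approximation, unit contrast

U, s, _ = np.linalg.svd(K)
Us = U[:, :len(scatterers)]                                # signal subspace

def music_indicator(z):
    """1 / distance of the test (steering) vector to the signal subspace."""
    g = np.array([green(x, np.asarray(z)) for x in array_pts])
    g /= np.linalg.norm(g)
    residual = g - Us @ (Us.conj().T @ g)                  # projection onto the noise subspace
    return 1.0 / np.linalg.norm(residual)

print(music_indicator([0.3, -0.2]), music_indicator([1.5, 1.5]))   # large at a scatterer, small away
```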

This book provides comprehensive coverage of the simulation of complex systems using Monte Carlo methods. Developing algorithms that are immune to the local trap problem has long been considered the most important topic in MCMC research. Various advanced MCMC algorithms that address this problem have been developed, including the modified Gibbs sampler, methods based on auxiliary variables, and methods making use of past samples. The focus of this book is on the algorithms that make use of past samples. The book includes the multicanonical algorithm, dynamic weighting, dynamically weight ...

Planar separations, which include thin layer chromatography and gel electrophoresis, are in widespread use as important and powerful tools for conducting separations of complex mixtures. To increase the utility of planar separations, new methods are needed that allow in situ characterization of the individual components of the separated mixtures. A large number of atmospheric pressure surface sampling and ionization techniques for use with mass spectrometry have emerged in the past several years, and several have been investigated as a means for mass spectrometric read-out of planar separations. In this article, we review the atmospheric pressure surface sampling and ionization techniques that have been used for the read-out of planar separation media. For each technique, we briefly explain the operational basics and discuss the analyte type for which it is appropriate and some specific applications from the literature.

A system and method utilizes distance-measuring equipment including a laser sensor for controlling the collection instrument-to-surface distance during a sample collection process for use, for example, with mass spectrometric detection. The laser sensor is arranged in a fixed positional relationship with the collection instrument, and a signal is generated by way of the laser sensor which corresponds to the actual distance between the laser sensor and the surface. The actual distance between the laser sensor and the surface is compared to a target distance between the laser sensor and the surface when the collection instrument is arranged at a desired distance from the surface for sample collecting purposes, and adjustments are made, if necessary, so that the actual distance approaches the target distance.

The Thermochemical Process Development Unit (TCPDU) at the National Renewable Energy Laboratory (NREL) is a highly instrumented half-ton/day pilot-scale plant capable of demonstrating industrially relevant thermochemical technologies for lignocellulosic biomass conversion, including gasification. Gasification creates primarily syngas (a mixture of hydrogen and carbon monoxide) that can be utilized with synthesis catalysts to form transportation fuels and other valuable chemicals. Biomass-derived gasification products are a very complex mixture of chemical components that typically contain sulfur and nitrogen species that can act as catalyst poisons for tar reforming and synthesis catalysts. Real-time hot online sampling techniques, such as Molecular Beam Mass Spectrometry (MBMS), and gas chromatographs with sulfur- and nitrogen-specific detectors can provide real-time analysis, giving operational indicators of performance. Sampling typically requires coated sampling lines to minimize trace sulfur interactions with steel surfaces. Other materials used inline have also shown conversion of sulfur species into new components and must be minimized. Residence time within the sampling lines must also be kept to a minimum to limit further reaction chemistry. Solids from ash and char contribute to plugging and must be filtered at temperature. Experience at NREL has shown several key factors to consider when designing and installing an analytical sampling system for biomass gasification products. They include minimizing sampling distance, effective filtering as close to the source as possible, proper line sizing, proper line materials or coatings, even heating of all components, minimizing pressure drops, and additional filtering or traps after pressure drops.

A superconducting niobium triaxial cavity has been designed and fabricated to study the residual surface resistance of planar superconducting materials. The edge of a 25.4 mm or larger diameter sample in the triaxial cavity is located outside the strong-field region. Therefore, edge effects and possible losses between the thin film and the substrate have been minimized, ensuring that induced RF losses are intrinsic to the test material. The fundamental resonant frequency of the cavity is the same as the working frequency of CEBAF cavities. The cavity has a compact size compared to its TE011 counterpart, which makes it more sensitive to the sample's loss. For even higher sensitivity, a calorimetry method has been used to measure the RF losses on the superconducting sample. At 2 K, a 2 μK temperature change can be resolved by using carbon resistor sensors. The temperature distribution caused by RF heating is measured by 16 carbon composition resistor sensors. A 0.05 μW heating power can be detected at such a resolution, which translates to a surface resistance of 0.02 nΩ at a surface magnetic field of 52 Oe. This is the most sensitive device for surface resistance measurements to date. In addition, losses due to the indium seal, coupling probes, field emission sites other than the sample, and all of the high-field resonator surface are excluded from the measurement. The surface resistance of both niobium and high-Tc superconducting thin films has been measured. A low Rs of 35.2 μΩ was measured for a 25.4 mm diameter YBa2Cu3O7 thin film at 1.5 GHz and 2 K. This is the first result for a large-area epitaxially grown thin film sample at such a low RF frequency. The abrupt disappearance of multipacting between two parallel plates has been observed and monitored with the 16 temperature-mapping sensors. Field emission or some field-dependent anomalous RF losses on the niobium plate have also been observed.

Quantitative determination of caffeine on reversed-phase C8 thin-layer chromatography plates using a surface sampling electrospray ionization system with tandem mass spectrometry detection is reported. The thin-layer chromatography/electrospray tandem mass spectrometry method employed a deuterium-labeled caffeine internal standard and selected reaction monitoring detection. Up to nine parallel caffeine bands on a single plate were sampled in a single surface scanning experiment requiring 35 min at a surface scan rate of 44 μm/s. A reversed-phase HPLC/UV caffeine assay was developed in parallel to assess the mass spectrometry method performance. Limits of detection for the HPLC/UV and thin-layer chromatography/electrospray tandem mass spectrometry methods determined from the calibration curve statistics were 0.20 ng injected (0.50 μL) and 1.0 ng spotted on the plate, respectively. Spike recoveries with standards and real samples ranged between 97 and 106% for both methods. The caffeine content of three diet soft drinks (Diet Coke, Diet Cherry Coke, Diet Pepsi) and three diet sport drinks (Diet Turbo Tea, Speed Stack Grape, Speed Stack Fruit Punch) was measured. The HPLC/UV and mass spectrometry determinations were in general agreement, and these values were consistent with the quoted values for two of the three diet colas. In the case of Diet Cherry Coke and the diet sports drinks, the determined caffeine amounts using both methods were consistently higher (by 8% or more) than the literature values.

The successive correction method was examined and evaluated statistically as a nowcasting method for surface meteorological parameters including temperature, dew point temperature, and horizontal wind vector components...

We developed an original preparation method for honey samples (samples in paste-like state) specifically designed for PIXE analysis. The results of PIXE analysis of thin targets prepared by adding a standard containing nine elements to honey samples demonstrated that the preparation method bestowed sufficient accuracy on quantitative values. PIXE analysis of 13 kinds of honey was performed, and eight mineral components (Si, P, S, K, Ca, Mn, Cu and Zn) were detected in all honey samples. The principal mineral components were K and Ca, and the quantitative value for K accounted for the majority of the total value for mineral components. K content in honey varies greatly depending on the plant source. Chestnuts had the highest K content. In fact, it was 2-3 times that of Manuka, which is known as a high quality honey. K content of false-acacia, which is produced in the greatest abundance, was 1/20 that of chestnuts. (author)

We have experimentally validated a single-sample variant of the doubly labeled water method for measuring metabolic rate and water turnover in a very small passerine bird, the verdin (Auriparus flaviceps). We measured CO2 production using the Haldane gravimetric technique and compared these values with estimates derived from isotopic data. Doubly labeled water results based on the one-sample calculations differed from Haldane values by less than 0.5% on average (range -8.3 to 11.2%, n = 9). Water flux computed by the single-sample method differed by -1.5% on average from results for the same birds based on the standard two-sample technique (range -13.7 to 2.0%, n = 9)

Forming representative gas hydrate-bearing laboratory samples is important so that the properties of these materials may be measured while controlling the composition and other variables. Natural samples are rare, and have often experienced pressure and temperature changes that may affect the property to be measured [Waite et al., 2008]. Forming methane hydrate samples in the laboratory has been done in a number of ways, each having advantages and disadvantages. The ice-to-hydrate method [Stern et al., 1996] contacts melting ice with methane at the appropriate pressure to form hydrate. The hydrate can then be crushed and mixed with mineral grains under controlled conditions, and then compacted to create laboratory samples of methane hydrate in a mineral medium. The hydrate in these samples will be part of the load-bearing frame of the medium. In the excess gas method [Handa and Stupin, 1992], water is distributed throughout a mineral medium (e.g. packed moist sand, drained sand, moistened silica gel, other porous media) and the mixture is brought to hydrate-stable conditions (chilled and pressurized with gas), allowing hydrate to form. This method typically produces grain-cementing hydrate from pendular water in sand [Waite et al., 2004]. In the dissolved gas method [Tohidi et al., 2002], water with sufficient dissolved guest molecules is brought to hydrate-stable conditions where hydrate forms. In the laboratory, this can be done by pre-dissolving the gas of interest in water and then introducing it to the sample under the appropriate conditions. With this method, it is easier to form hydrate from more soluble gases such as carbon dioxide. It is thought that this method more closely simulates the way most natural gas hydrate has formed. Laboratory implementation, however, is difficult, and sample formation is prohibitively time consuming [Minagawa et al., 2005; Spangenberg and Kulenkampff, 2005]. In another version of this technique, a specified quantity of gas

With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by...

Herein provided are a fluidics platform and method for sample preparation and analysis. The fluidics platform is capable of analyzing DNA from blood samples using amplification assays such as polymerase-chain-reaction assays and loop-mediated-isothermal-amplification assays. The fluidics platform can also be used for other types of assays and analyses. In some embodiments, a sample in a sealed tube can be inserted directly. The subsequent isolation, detection, and analyses can be performed without user intervention. The disclosed platform may also comprise a sample preparation system with a magnetic actuator, a heater, and an air-drying mechanism, and fluid manipulation processes for extraction, washing, elution, assay assembly, assay detection, and cleaning after reactions and between samples.

Deep-penetration problems have been among the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle transport random-walk system is built with the emission point as a sampling station. Then, an adaptive sampling scheme is derived to obtain a better solution from the accumulated information. The main advantage of the adaptive scheme is choosing the most suitable sampling number from the emission-point station to obtain the minimum value of the total cost in the process of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define the importance function according to the particle state and to ensure that the sampling number of emission particles is proportional to the importance function. The numerical results show that the adaptive scheme with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and that the adaptive importance sampling method also gives satisfactory results. (authors)
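
A toy example of the importance-sampling principle mentioned here (biasing sampling effort toward important particle states) is path-length stretching for a deep-penetration transmission estimate; the sketch below is a generic illustration with assumed cross sections, not the authors' adaptive scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, depth, n = 1.0, 10.0, 100_000            # macroscopic cross section, slab depth (assumed)

# analogue Monte Carlo: transmission probability exp(-sigma*depth) ~ 4.5e-5, so hits are rare
analogue = np.mean(rng.exponential(1.0 / sigma, n) > depth)

# importance sampling by path-length stretching: sample from a smaller cross section sigma_b
sigma_b = 0.2
x = rng.exponential(1.0 / sigma_b, n)
w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * x)   # likelihood-ratio weights
biased = np.mean(w * (x > depth))

print(analogue, biased, np.exp(-sigma * depth))  # the biased estimate is far less noisy
```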

We report a study of the correlation between three optical methods for characterizing surface roughness: a laboratory scatterometer measuring the bi-directional reflection distribution function (BRDF instrument), a simple commercial scatterometer (rBRDF instrument), and a confocal optical profiler. For each instrument, the effective range of spatial surface wavelengths is determined, and the common bandwidth used when comparing the evaluated roughness parameters. The compared roughness parameters are: the root-mean-square (RMS) profile deviation (Rq), the RMS profile slope (Rdq), and the variance of the scattering angle distribution (Aq). The twenty-two investigated samples were manufactured with several methods in order to obtain a suitable diversity of roughness patterns. Our study shows a one-to-one correlation of both the Rq and the Rdq roughness values when obtained with the BRDF and the confocal instruments, if the common bandwidth is applied. Likewise, a correlation is observed when determining the Aq value with the BRDF and the rBRDF instruments. Furthermore, we show that it is possible to determine the Rq value from the Aq value, by applying a simple transfer function derived from the instrument comparisons. The presented method is validated for surfaces with predominantly 1D roughness, i.e. consisting of parallel grooves of various periods, and a reflectance similar to stainless steel. The Rq values are predicted with an accuracy of 38% at the 95% confidence interval. (paper)

A method of extracting surfaces in three-dimensional data includes receiving as inputs three-dimensional data and a seed point p located on a surface to be extracted. The method further includes propagating a front outwardly from the seed point p and extracting a plurality of ridge curves based on the propagated front. A surface boundary is detected based on a comparison of distances between adjacent ridge curves and the desired surface is extracted based on the detected surface boundary.

In order to improve the technique for measuring oil and water entrained in a compressed air stream, a laboratory study has been made of some methods for sampling and measurement. For this purpose, water or oil as artificial contaminants were injected in thin streams into a test loop carrying dry compressed air. Sampling was performed in a vertical run, downstream of the injection point. Wall-attached liquid, coarse droplet flow, and fine droplet flow were sampled separately. The results were compared with two-phase flow theory and direct observation of liquid behaviour. In a study of sample transport through narrow tubes, it was observed that, below a certain liquid loading, the sample did not move, the liquid remaining stationary on the tubing wall. The basic analysis of the collected samples was made by gravimetric methods. Adsorption tubes were used with success to measure water vapour. A humidity meter with a sensor of the aluminium oxide type was found to be unreliable. Oil could be measured selectively by a flame ionization detector, the sample being pretreated in an evaporation-condensation unit

A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually affect trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared to those obtained by performing a manual sampling on the orthophotograph of the same rock face.

The authors attempt to give as systematic a treatment as possible of the sampling and sample preparation of biological material (particularly in human medicine) for trace analysis (e.g. neutron activation analysis, atomic absorption spectrometry). Contamination and loss problems are discussed, as well as the manifold problems arising from the different consistencies of solid and liquid biological materials and the stabilization of the sample material. The processes of dry and wet ashing are dealt with in particular, and new methods are also described. (RB)

Sample size and computational uncertainty were varied in order to investigate the sample efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate a LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 replicates of n samples was adopted as the convergence criterion for the method. An uncertainty of 75 pcm on the reactor k-eff was estimated by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)

Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
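
One way such inexpensive sensitivity derivatives can reduce sampling variance is to use the first-order Taylor expansion as a control variate, since its mean under the input distribution is known exactly. The sketch below illustrates this on a cheap stand-in function; the specific scheme of the paper may differ, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """Stand-in for an expensive analysis code (e.g. a structural response)."""
    return np.sin(x[..., 0]) + 0.5 * x[..., 1] ** 2

mu = np.array([0.3, 1.0])                       # nominal input values (assumed)
sigma = np.array([0.1, 0.2])                    # input standard deviations (assumed)
grad = np.array([np.cos(mu[0]), mu[1]])         # "cheap" sensitivity derivatives at mu

n = 2_000
x = mu + sigma * rng.standard_normal((n, 2))    # plain Monte Carlo input samples

plain = f(x)
linear = f(mu) + (x - mu) @ grad                # first-order Taylor model as control variate
cv = plain - (linear - f(mu))                   # its exact mean under the input law is f(mu)

print(plain.mean(), plain.std(ddof=1) / np.sqrt(n))   # plain estimate and standard error
print(cv.mean(), cv.std(ddof=1) / np.sqrt(n))         # same mean, much smaller error bar
```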

Characterization of structures using conventional optical microscopy is restricted by the diffraction limit. Techniques like atomic force and scanning electron microscopy can investigate smaller structures but are very time consuming. We show that using scatterometry, a technique based on optical diffraction, integrated into a commercial light microscope, we can characterize nano-textured surfaces in a few milliseconds. The adapted microscope has two detectors, a CCD camera used to easily find an area of interest and a spectrometer for the measurements. We demonstrate that the microscope has ...

Small polar molecules such as nucleosides, amines, amino acids are important analytes in biological, food, environmental, and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for rapid analysis of these polar small molecules in complex matrices. Some typical materials in sample preparation, including silica, polymer, carbon, boric acid and so on, are introduced in this paper. Meanwhile, the applications and developments of analytical methods of polar small molecules, such as reversed-phase liquid chromatography, hydrophilic interaction chromatography, etc., are also reviewed.

In-vitro bioassay monitoring is based on the determination of activity concentrations in biological samples excreted from the body and is most suitable for alpha and beta emitters. A truly representative bioassay sample is one containing all the voids collected during a 24-h period; however, as this is technically difficult, overnight urine samples collected by the workers are analyzed. These overnight urine samples are collected over 10-16 h; however, in the absence of any specific information, a 12-h duration is assumed and the observed results are corrected accordingly to obtain the daily excretion rate. To reduce the uncertainty due to the unknown duration of sample collection, the IAEA has recommended two methods, viz., measurement of specific gravity and of the creatinine excretion rate in the urine sample. Creatinine is a final metabolic product of creatine phosphate in the body and is excreted at a steady rate by people with normally functioning kidneys. It is, therefore, often used as a normalization factor for estimating the duration of sample collection. The present study reports the chemical procedure standardized and its application to the estimation of creatinine in urine samples collected from occupational workers. The chemical procedure for the estimation of creatinine in bioassay samples was standardized and applied successfully to bioassay samples collected from the workers. The creatinine excretion rate observed for these workers is lower than that reported in the literature. Further work is in progress to generate a data bank of creatinine excretion rates for most of the workers and also to study the variability in the creatinine coefficient for the same individual, based on the analysis of samples collected over different durations

For the detection of hidden objects by low-frequency electromagnetic imaging, the linear sampling method works remarkably well despite the fact that its rigorous mathematical justification is still incomplete. In this work, we give an explanation for this good performance by showing that in the low-frequency limit the measurement operator fulfils the assumptions for the fully justified variant of the linear sampling method, the so-called factorization method. We also show how the method has to be modified in the physically relevant case of electromagnetic imaging with divergence-free currents. We present numerical results to illustrate our findings, and to show that similar performance can be expected for the case of conducting objects and layered backgrounds

Three new methods to sample and prepare various carbonyl compounds for radiocarbon measurements were developed and tested. Two of these procedures utilized the Strecker synthetic method to form amino acids from carbonyl compounds with either sodium cyanide or trimethylsilyl cyanide. The third procedure used semicarbazide to form crystalline semicarbazones with the carbonyl compounds. The resulting amino acids and semicarbazones were then separated and purified using thin layer chromatography. The separated compounds were then combusted to CO2 and reduced to graphite to determine 14C content by accelerator mass spectrometry (AMS). All of these methods were also compared with the standard carbonyl compound sampling method, wherein a compound is derivatized with 2,4-dinitrophenylhydrazine and then separated by high-performance liquid chromatography (HPLC).

Statement of Problem: Considering the cost, the amount of time, and the quantity of tooth loss in the process of cavity preparation, repair of a restoration instead of its replacement would be much more efficient. Purpose: The aim of this study was to determine the effect of different methods of surface conditioning on the shear bond strength of repaired compomers. Materials and Methods: Sixty blocks of compomer were prepared in acrylic molds and then randomly divided into five groups of 12. Group I (control group) received no treatment. The remaining samples were immersed in 37 °C distilled water for one week, and then the surfaces were roughened with a coarse diamond bur. Samples in each group were prepared with a different surface treatment and conditioning: in group II, specimens were conditioned with 35% phosphoric acid for 20 s; in group III, specimens were etched with 10% polyacrylic acid for 20 s; in group IV, 1.23% acidulated phosphate fluoride was applied for 30 s; and in group V, compomer surfaces were sandblasted with 50 μm Al2O3 powder. After the initial preparations, all groups were treated with silane and resin before bonding of the second mix of compomer. Shear forces were applied with a universal testing machine at a cross-head speed of 5 mm/min. The data were analyzed using one-way ANOVA and Duncan's multiple range tests. Results: The mean shear bond strengths and standard deviations (in parentheses) for groups I to V were 31.56 (10.86), 20.02 (5.49), 17.74 (7.34), 19.31 (4.31) and 27.7 (6.33) MPa, respectively. The mean bond strengths for groups I and V were significantly higher than those of the other groups (P<0.05). Conclusion: The results showed that among the surface treatments used in this study, sandblasting with alumina could be the best surface preparation method for repairing compomer restorations.

Detecting biomarkers from complex sample solutions is the key objective of molecular diagnostics. Being able to do so in a simple approach that does not require laborious sample preparation, sophisticated equipment and trained staff is vital for point-of-care applications. Here, we report on the specific detection of the breast cancer biomarker sHER2 directly from serum and saliva samples by a nanorod-based homogeneous biosensing approach, which is easy to operate as it only requires mixing of the samples with the nanorod probes. By careful nanorod surface engineering and homogeneous assay design, we demonstrate that the formation of a protein corona around the nanoparticles does not limit the applicability of our detection method, but on the contrary enables us to conduct in-situ reference measurements, thus further strengthening the point-of-care applicability of our method. Making use of sandwich assays on top of the nanorods, we obtain a limit of detection of 110 pM and 470 pM in 10-fold diluted spiked saliva and serum samples, respectively. In conclusion, our results open up numerous applications in direct protein biomarker quantification, specifically in point-of-care settings where resources are limited and ease-of-use is of essence.

The paper presents various identification and measurement methods used for the expertise of a wide variety of suspect radioactive materials whose circulation was not legally declared. The main types of examined samples were: radioactive sources, illegally trafficked; suspect radioactive materials or radioactively contaminated devices; uranium tablets; fire detectors containing ²⁴¹Am sources; osmium samples containing radioactive ¹⁸⁵Os or enriched ¹⁸⁷Os. The types of analyses and determination methods were as follows: the chemical composition was determined by using identification reagents or by neutron activation analysis; the radionuclide composition was determined by using gamma-ray spectrometry; the activity and particle emission rates were determined by using calibrated radiometric equipment; the absorbed dose rate at the wall of all types of containers and samples was determined by using calibrated dose rate meters. The radiation exposure risk for the population due to these radioactive materials was evaluated for every case. (author)

The data obtained for the first round robin sample collected at Mesa 6-2 wellhead, East Mesa Test Site, Imperial Valley are summarized. Test results are listed by method used for cross reference to the analytic methods section. Results obtained for radioactive isotopes present in the brine sample are tabulated. The data obtained for the second round robin sample collected from the Woolsey No. 1 first stage flash unit, San Diego Gas and Electric Niland Test Facility are presented in the same manner. Lists of the participants of the two round robins are given. Data from miscellaneous analyses are included. Summaries of values derived from the round robin raw data are presented. (MHR)

BACKGROUND: There are challenges when extracting bacterial DNA from specimens for molecular diagnostics, since fecal samples also contain DNA from human cells and many different substances derived from food, cell residues and medication that can inhibit downstream PCR. The purpose of the study was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy l'Etoile, France) ... by easyMag® from the same fecal samples. Furthermore, DNA extracts obtained using easyMag® seemed to contain inhibitory compounds, since in order to perform a successful PCR analysis the sample should be diluted at least 10 times. DGGE performed on PCR from DNA extracted by QIAamp DNA Stool Mini Kit DNA ...

A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system that satisfies specific design criteria. The new design method will graphically show how the discrete...

The determination of phosphorus in milk samples by instrumental thermal neutron activation analysis is described. The procedure involves a short irradiation in a nuclear reactor and measurement of the beta radiation emitted by phosphorus-32 after a suitable decay period. The sources of error were studied and the established method was applied to standard reference materials of known phosphorus content. (author)

The quasicontinuum (QC) method reduces computational costs of atomistic calculations by using interpolation between a small number of so-called repatoms to represent the displacements of the complete lattice and by selecting a small number of sampling atoms to estimate the total potential energy of

Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…

Accurate estimation of the characteristics of log resources, or coarse woody debris (CWD), is critical to effective management of wildlife and other forest resources. Despite the importance of logs as wildlife habitat, methods for sampling logs have traditionally focused on silvicultural and fire applications. These applications have emphasized estimates of log volume...

Midmar) was harvested at three and four weeks after cutting and fertilizing with 200 kg nitrogen (N)/ha. Freshly cut herbage was used to investigate the following four sample preparation methods. In trial 1, herbage was (1) chopped with a paper-cutting guillotine into 5-10 mm lengths, representing fresh (FR) herbage; ...

Highlights: • We analyzed and modeled spectral envelopes of complex molybdenum oxides. • Molybdenum oxide films of varying valence and crystallinity were synthesized. • MoO₃ and MoO₂ line shapes from experimental data were created. • Informed amorphous sample model (IASM) developed. • Amorphous molybdenum oxide XPS envelopes were interpreted. - Abstract: Accurate elemental oxidation state determination for the outer surface of a complex material is of crucial importance in many science and engineering disciplines, including chemistry, fundamental and applied surface science, catalysis, semiconductors and many others. X-ray photoelectron spectroscopy (XPS) is the primary tool used for this purpose. The spectral data obtained, however, is often very complex and can be subject to incorrect interpretation. Unlike traditional XPS spectra fitting procedures using purely synthetic spectral components, here we develop and present an XPS data processing method based on vector analysis that allows creating XPS spectral components by incorporating key information, obtained experimentally. XPS spectral data, obtained from series of molybdenum oxide samples with varying oxidation states and degree of crystallinity, were processed using this method and the corresponding oxidation states present, as well as their relative distribution, were elucidated. It was shown that monitoring the evolution of the chemistry and crystal structure of a molybdenum oxide sample due to an invasive X-ray probe could be used to infer solutions to complex spectral envelopes.

In this study, spontaneous Raman scattering and surface-enhanced Raman scattering (SERS) spectra were investigated. The samples, which were kept in formalin solution, were selected from healthy and cancerous human colon tissues. The SERS spectra were collected by adding a colloidal solution containing silver nanoparticles on top of the samples. The recorded spectra were compared with the spontaneous Raman spectra of healthy and cancerous colon samples. The spontaneous and surface-enhanced Raman scattering data were also collected and compared for both healthy and damaged samples.

In recent years, many different water sources and foods have been reported to contain perchlorate. Studies indicate that significant levels of perchlorate are present in both human and dairy milk. The determination of perchlorate in milk is particularly important due to its potential health impact on infants and children. As for many other biological samples, sample preparation is more time consuming than the analysis itself. The concurrent presence of large amounts of fats, proteins, carbohydrates, etc., demands some initial cleanup; otherwise the separation column lifetime and the limit of detection are both greatly compromised. Reported milk processing methods require the addition of chemicals such as ethanol, acetic acid or acetonitrile. Reagent addition is undesirable in trace analysis. We report here an essentially reagent-free sample preparation method for the determination of perchlorate in milk. Milk samples are spiked with isotopically labeled perchlorate and centrifuged to remove lipids. The resulting liquid is placed in a disposable centrifugal ultrafilter device with a molecular weight cutoff of 10 kDa, and centrifuged. Approximately 5-10 ml of clear liquid, ready for analysis, is obtained from a 20 ml milk sample. Both bovine and human milk samples have been successfully processed and analyzed by ion chromatography-mass spectrometry (IC-MS). Standard addition experiments show good recoveries. The repeatability of the analytical result for the same sample in multiple sample cleanup runs ranged from 3 to 6% R.S.D. This processing technique has also been successfully applied for the determination of iodide and thiocyanate in milk

The idea for this book stemmed from a remark by Philip Jennings of Murdoch University in a discussion session following a regular meeting of the Australian Surface Science group. He observed that a text on surface analysis and applications to materials suitable for final year undergraduate and postgraduate science students was not currently available. Furthermore, the members of the Australian Surface Science group had the research experience and range of coverage of surface analytical techniques and applications to provide a text for this purpose. A list of techniques and applications to be included was agreed at that meeting. The intended readership of the book has been broadened since the early discussions, particularly to encompass industrial users, but there has been no significant alteration in content. The editors, in consultation with the contributors, have agreed that the book should be prepared for four major groups of readers: - senior undergraduate students in chemistry, physics, metallur...

In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with the combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
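As a hedged sketch of the general idea (not the paper's AP1000 model), an importance-sampling estimate of a failure probability weights samples drawn from a density shifted toward the failure region by the likelihood ratio against the nominal input density; the limit-state function below is a hypothetical surrogate standing in for a response-surface model.

    # Hedged sketch: importance-sampling estimate of P_f = P[g(X) < 0], with a
    # made-up surrogate limit-state function g playing the role of the response surface.
    import numpy as np

    rng = np.random.default_rng(1)

    def g(x):
        # Hypothetical response-surface surrogate: failure when g < 0.
        return 4.0 - x[..., 0] - 0.5 * x[..., 1] ** 2

    dim, n = 2, 20000
    mu_p, sigma = np.zeros(dim), np.ones(dim)      # nominal input density p (standard normal)
    mu_q = np.array([3.0, 1.5])                    # importance density q shifted toward failure

    x = rng.normal(mu_q, sigma, size=(n, dim))     # sample from q
    log_p = -0.5 * np.sum(((x - mu_p) / sigma) ** 2, axis=1)
    log_q = -0.5 * np.sum(((x - mu_q) / sigma) ** 2, axis=1)
    w = np.exp(log_p - log_q)                      # likelihood ratios p/q (same normalization)
    p_fail = np.mean((g(x) < 0) * w)
    print(f"estimated failure probability: {p_fail:.2e}")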

The zone of action of thermal disturbances around a circular heat source on the surface of a semi-infinite body is estimated with the aim of using contact methods for the determination of thermophysical properties of materials from core samples.

These notes introduce the BFS (Bozzolo-Ferrante-Smith) method for alloys, in the framework of what is available today in terms of computationally efficient and physically sound techniques for the modeling of atomic systems. The BFS method belongs to the family of semi-empirical methods, which aim to balance scientific rigour with practical applications. The goal is to provide a tool that aids in the process of material analysis and development, supplementing experimental work, which by itself has limitations in terms of time, money, technology and human resources. One of the main advantages of the BFS method, basically tailored to assist in the problem of alloy design, is that it is easily applicable to the analysis of surface structure with a satisfactory degree of accuracy. In these notes, first the role of semi-empirical methods among the available tools for atomistic simulations is reviewed, followed by a description of the BFS method and a simple application in order to understand the operational procedure; the notes conclude by reviewing some of the topics of current interest where techniques such as the BFS method play an important role in furthering the understanding of fundamental issues.

This Methods Manual provides a unified source of information on the sampling and analytical techniques that enable Department of Energy (DOE) facilities to comply with the requirements established in the current revision of the Transuranic Waste Characterization Quality Assurance Program Plan (QAPP) for the Waste Isolation Pilot Plant (WIPP) Transuranic (TRU) Waste Characterization Program (the Program) and the WIPP Waste Analysis Plan. This Methods Manual includes all of the testing, sampling, and analytical methodologies accepted by DOE for use in implementing the Program requirements specified in the QAPP and the WIPP Waste Analysis Plan. The procedures in this Methods Manual are comprehensive and detailed and are designed to provide the necessary guidance for the preparation of site-specific procedures. With some analytical methods, such as Gas Chromatography/Mass Spectrometry, the Methods Manual procedures may be used directly. With other methods, such as nondestructive characterization, the Methods Manual provides guidance rather than a step-by-step procedure. Sites must meet all of the specified quality control requirements of the applicable procedure. Each DOE site must document the details of the procedures it will use and demonstrate the efficacy of such procedures to the Manager, National TRU Program Waste Characterization, during Waste Characterization and Certification audits

This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulting surface height map is continuous or smoothly varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to...
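The following is a hedged sketch of the analytic re-sampling idea (assumptions only, not the released software): fit a few low-order Zernike-like polynomial terms (unnormalized Cartesian forms; normalization conventions vary) to a measured map by least squares and re-evaluate the analytic fit on a finer grid, so no pixel interpolation is involved and random measurement noise is filtered out by the fit.

    # Hedged sketch: re-sample a measured surface map by least-squares fitting
    # low-order polynomial (Zernike-like) terms and evaluating them on a new grid.
    import numpy as np

    def basis(x, y):
        r2 = x**2 + y**2
        return np.stack([np.ones_like(x), x, y,          # piston, tilt x, tilt y
                         2*r2 - 1, x**2 - y**2, 2*x*y],  # defocus, astigmatisms
                        axis=-1)

    # Hypothetical "measured" map on a coarse grid, with noise to be smoothed out.
    n_meas = 64
    xm, ym = np.meshgrid(np.linspace(-1, 1, n_meas), np.linspace(-1, 1, n_meas))
    rng = np.random.default_rng(0)
    z_meas = 0.3*(2*(xm**2 + ym**2) - 1) + 0.1*(xm**2 - ym**2) \
             + 0.01*rng.normal(size=xm.shape)

    A = basis(xm, ym).reshape(-1, 6)
    coeffs, *_ = np.linalg.lstsq(A, z_meas.ravel(), rcond=None)

    # Up-sample: evaluate the analytic fit on a 256 x 256 grid (no interpolation step).
    n_new = 256
    xn, yn = np.meshgrid(np.linspace(-1, 1, n_new), np.linspace(-1, 1, n_new))
    z_new = (basis(xn, yn).reshape(-1, 6) @ coeffs).reshape(n_new, n_new)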

A novel method is proposed for simulating free space field propagation from a source plane to a destination plane that is applicable for both small and large propagation distances. The angular spectrum method (ASM) was widely used for simulating near field propagation, but it caused a numerical error when the propagation distance was large because of aliasing due to undersampling. The band-limited ASM satisfied the Nyquist condition on sampling by limiting the bandwidth of the propagation field to avoid an aliasing error, so that it could extend the applicable propagation distance of the ASM. However, the band-limited ASM also made an error due to the decrease of the effective sampling number in Fourier space when the propagation distance was large. In the proposed wide range ASM, we use non-uniform sampling in Fourier space to keep a constant effective sampling number even though the propagation distance is large. As a result, the wide range ASM can produce simulation results with high accuracy for both far and near field propagation. For non-paraxial wave propagation, we applied the wide range ASM to a shifted destination plane as well. (paper)
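For context, a hedged sketch of the basic angular spectrum method (the starting point of the abstract, not the proposed wide-range variant): propagate a field over a distance z by multiplying its angular spectrum by the free-space transfer function and transforming back; the aperture and grid parameters below are arbitrary example values.

    # Hedged sketch of the plain ASM: H = exp(i*z*sqrt(k^2 - kx^2 - ky^2)),
    # with evanescent components set to zero.
    import numpy as np

    def asm_propagate(u0, wavelength, dx, z):
        n = u0.shape[0]                        # square n x n field assumed
        k = 2*np.pi / wavelength
        fx = np.fft.fftfreq(n, d=dx)
        fxx, fyy = np.meshgrid(fx, fx)
        kz2 = k**2 - (2*np.pi*fxx)**2 - (2*np.pi*fyy)**2
        h = np.where(kz2 > 0, np.exp(1j*z*np.sqrt(np.maximum(kz2, 0.0))), 0.0)
        return np.fft.ifft2(np.fft.fft2(u0) * h)

    # Example: propagate a small square aperture 5 mm at 633 nm on a 10 um grid.
    n = 512
    u0 = np.zeros((n, n), dtype=complex)
    u0[n//2-20:n//2+20, n//2-20:n//2+20] = 1.0
    u1 = asm_propagate(u0, wavelength=633e-9, dx=10e-6, z=5e-3)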

With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identify means to collaborate with scientists where standardization is limited but interest and need occur.

One of the problems that occur when working with regression models concerns the sample size: since the statistical methods used in inferential analyses are asymptotic, if the sample is small the analysis may be compromised because the estimates will be biased. An alternative is to use the bootstrap methodology, which in its non-parametric version does not need to guess or know the probability distribution that generated the original sample. In this work we used a set of soybean yield data and physical and chemical soil properties formed with fewer samples to determine a multiple linear regression model. Bootstrap methods were used for variable selection, identification of influential points and determination of confidence intervals of the model parameters. The results showed that the bootstrap methods enabled us to select the physical and chemical soil properties which were significant in the construction of the soybean yield regression model, construct the confidence intervals of the parameters and identify the points that had great influence on the estimated parameters. (Author)
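As a hedged illustration of the non-parametric bootstrap step (not the study's code), one can resample observations with replacement, refit the regression each time, and take percentile confidence intervals for the coefficients; the soil-property and yield data below are synthetic placeholders.

    # Hedged sketch: non-parametric bootstrap confidence intervals for multiple
    # linear regression coefficients (placeholder data, not the study's data set).
    import numpy as np

    rng = np.random.default_rng(42)
    n, p = 40, 3                                   # small sample, 3 predictors
    X = rng.normal(size=(n, p))                    # placeholder soil properties
    y = 5.0 + X @ np.array([2.0, 0.0, -1.5]) + rng.normal(scale=1.0, size=n)

    def fit_ols(X, y):
        A = np.column_stack([np.ones(len(y)), X])  # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    n_boot = 2000
    boots = np.empty((n_boot, p + 1))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample rows with replacement
        boots[b] = fit_ols(X[idx], y[idx])

    lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
    for j, (l, h) in enumerate(zip(lo, hi)):
        name = "intercept" if j == 0 else f"beta_{j}"
        print(f"{name}: 95% bootstrap CI [{l:.2f}, {h:.2f}]")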

Noble gases occur to some extent in the Earth's atmosphere, but the concentrations of all but argon are exceedingly low. Argon is plentiful, constituting almost 1% of the air. Fission product noble gases (FPNG) are produced by nuclear fission, and a large part of the FPNG inventory is produced in nuclear reactors. FPNG are beta-gamma emitters and contribute significantly to public dose. During normal operation of a reactor the release of FPNG is negligible, but it increases in case of fuel failure. Xenon, a member of the FPNG family, helps in the identification of fuel failure and its extent in PHWRs. For the above reasons it becomes necessary to assess FPNG release during operation of NPPs. The methodology of FPNG assessment presently used at almost all power stations is computer-based gamma-ray spectrometry, which provides fission product noble gas nuclide identification through a peak search of the spectra. The air sample for this is collected by the grab sampling method, which has inherent disadvantages. An alternate method that uses adsorption phenomena for the collection of air samples was developed at Rajasthan Atomic Power Station (RAPS) 3 and 4 for the assessment of FPNG. This report presents details of the sampling method for FPNG and noble gases in different systems of a nuclear power plant. (author)

Methods and articles for controlling the surface of an alloy substrate for deposition of an epitaxial layer. The invention includes the use of an intermediate layer to stabilize the substrate surface against oxidation for subsequent deposition of an epitaxial layer.

We apply Bayesian inference to the analytic continuation of quantum Monte Carlo (QMC) data from the imaginary axis to the real axis. Demanding a proper functional Bayesian formulation of any analytic continuation method leads naturally to the stochastic sampling method (StochS) as the Bayesian method with the simplest prior, while it excludes the maximum entropy method and Tikhonov regularization. We present a new efficient algorithm for performing StochS that reduces computational times by orders of magnitude in comparison to earlier StochS methods. We apply the new algorithm to a wide variety of typical test cases: spectral functions and susceptibilities from DMFT and lattice QMC calculations. Results show that StochS performs well and is able to resolve sharp features in the spectrum.

One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
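For orientation, a hedged sketch of the plain BAR estimator (Bennett's self-consistency equation, without the SGLD reweighting discussed in the abstract): the free-energy difference is the root of a residual built from forward and reverse work values; the work samples below are synthetic Gaussians chosen to be consistent with a known answer.

    # Hedged sketch: Bennett acceptance ratio (BAR) from forward/reverse work values.
    # dF solves  sum_F 1/(1+exp(M + beta*(W_F - dF))) = sum_R 1/(1+exp(-M + beta*(W_R + dF))),
    # with M = ln(n_F/n_R) and W_R the work values for the reverse switching direction.
    import numpy as np
    from scipy.optimize import brentq

    beta = 1.0                                   # 1/kT in reduced units
    rng = np.random.default_rng(3)
    # Synthetic Gaussian work distributions consistent with dF = 2.0 and variance 2.0.
    w_f = rng.normal(3.0, np.sqrt(2.0), 500)     # forward work
    w_r = rng.normal(-1.0, np.sqrt(2.0), 500)    # reverse work

    def bar_residual(df):
        m = np.log(len(w_f) / len(w_r))
        lhs = np.sum(1.0 / (1.0 + np.exp(m + beta * (w_f - df))))
        rhs = np.sum(1.0 / (1.0 + np.exp(-m + beta * (w_r + df))))
        return lhs - rhs                         # monotonically increasing in df

    df_bar = brentq(bar_residual, -10.0, 10.0)
    print(f"BAR estimate of dF: {df_bar:.3f}")   # should be close to 2.0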

We have extended the entropic sampling Monte Carlo method to the case of path integral representation of a quantum system. A two-dimensional density of states is introduced into path integral form of the quantum canonical partition function. Entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced
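As a hedged, purely classical illustration of the entropic-sampling (Wang-Landau) update rule referenced above (a toy 1D Ising ring, not the path-integral extension described in the abstract), the walk in energy accepts moves with probability min(1, g(E_old)/g(E_new)), updates ln g at each visited energy by ln f, and halves ln f whenever the visit histogram is roughly flat.

    # Hedged sketch: Wang-Landau estimation of the density of states g(E)
    # for a small 1D Ising ring with periodic boundary conditions.
    import numpy as np

    rng = np.random.default_rng(7)
    n_spin = 16
    spins = rng.choice([-1, 1], n_spin)

    def energy(s):
        return -np.sum(s * np.roll(s, 1))         # nearest-neighbour coupling, periodic

    e_vals = np.arange(-n_spin, n_spin + 1, 4)    # allowed ring energies: -N, -N+4, ..., N
    log_g = {e: 0.0 for e in e_vals}              # ln g(E), built up on the fly
    hist = {e: 0 for e in e_vals}
    ln_f = 1.0                                    # modification factor, reduced over time
    e_cur = energy(spins)

    while ln_f > 1e-4:
        for _ in range(20000):
            i = rng.integers(n_spin)
            spins[i] *= -1                        # trial spin flip
            e_new = energy(spins)
            if np.log(rng.random()) < log_g[e_cur] - log_g[e_new]:
                e_cur = e_new                     # accept
            else:
                spins[i] *= -1                    # reject, restore spin
            log_g[e_cur] += ln_f                  # entropic update at the visited energy
            hist[e_cur] += 1
        counts = np.array(list(hist.values()))
        if counts.min() > 0.8 * counts.mean():    # flat-histogram check
            ln_f *= 0.5
            hist = {e: 0 for e in e_vals}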

A waste stream sampling program was undertaken to determine which waste streams contained hazardous constituents and would therefore be regulated as hazardous waste under the Resource Conservation and Recovery Act. The waste streams also had the potential of containing radioactive material, either plutonium, americium, or depleted uranium. Because of the potential for contamination with radioactive material, a method of rapidly screening the liquid samples for radioactive material was required. A counting technique was devised to count a small aliquot of a sample, determine the plutonium concentration, and allow the samples to be shipped the same day they were collected. This technique utilized the low energy photons (x-rays) that accompany α decay. This direct, non-destructive x-ray analysis was applied to quantitatively determine Pu-239 concentrations in industrial samples. Samples contained a Pu-239/Am-241 mixture; the ratio and/or concentrations of these two radionuclides were not constant. A computer program was designed and implemented to calculate Pu-239 activity and concentration (g/ml) using the 59.5 keV Am-241 peak to determine Am-241's contribution to the 17 keV region. The Am contribution was subtracted, yielding net counts in the 17 keV region due to Pu. 2 figs., 1 tab
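The arithmetic described above can be sketched as follows; the interference ratio, efficiency and aliquot values are hypothetical placeholders rather than the program's calibration constants, and only the Pu-239 specific activity (roughly 2.3e9 Bq/g) is a known physical value.

    # Hedged sketch of the screening calculation: subtract Am-241's contribution
    # to the 17 keV region using the 59.5 keV peak, then convert the net Pu-239
    # counts to a concentration. Constants are illustrative placeholders.
    def pu239_concentration(counts_17kev, counts_59kev, live_time_s, aliquot_ml,
                            r_am=1.5,        # hypothetical Am-241 17 keV / 59.5 keV count ratio
                            eff_yield=0.02,  # hypothetical detection efficiency x photon yield
                            sa_pu239=2.3e9): # Pu-239 specific activity, ~2.3e9 Bq/g
        net_17 = counts_17kev - r_am * counts_59kev        # remove Am-241 interference
        activity_bq = net_17 / (eff_yield * live_time_s)   # net Pu-239 activity
        grams = activity_bq / sa_pu239
        return grams / aliquot_ml                          # g of Pu-239 per ml of sample

    print(f"{pu239_concentration(5200, 1800, 600, 1.0):.2e} g/ml")  # placeholder counts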

Contaminants released from wastewater treatment plants can persist in surface waters for substantial distances. Much research has gone into evaluating the fate and transport of these contaminants, but this work has often assumed constant flow from wastewater treatment plants. However, effluent discharge commonly varies widely over a 24-hour period, and this variation controls contaminant loading and can profoundly influence interpretations of environmental data. We show that methodologies relying on the normalization of downstream data to conservative elements can give spurious results, and should not be used unless it can be verified that the same parcel of water was sampled. Lagrangian sampling, which in theory samples the same water parcel as it moves downstream (the Lagrangian parcel), links hydrologic and chemical transformation processes so that the in-stream fate of wastewater contaminants can be quantitatively evaluated. However, precise Lagrangian sampling is difficult, and small deviations – such as missing the Lagrangian parcel by less than 1 h – can cause large differences in measured concentrations of all dissolved compounds at downstream sites, leading to erroneous conclusions regarding in-stream processes controlling the fate and transport of wastewater contaminants. Therefore, we have developed a method termed “verified Lagrangian” sampling, which can be used to determine if the Lagrangian parcel was actually sampled, and if it was not, a means for correcting the data to reflect the concentrations which would have been obtained had the Lagrangian parcel been sampled. To apply the method, it is necessary to have concentration data for a number of conservative constituents from the upstream, effluent, and downstream sites, along with upstream and effluent concentrations that are constant over the short-term (typically 2–4 h). These corrections can subsequently be applied to all data, including non-conservative constituents. Finally, we

Success of a future Mars Sample Return mission will depend on the correct choice of samples. Pyrolysis-FTIR can be employed as a triage instrument for Mars Sample Return. The technique can thermally dissociate minerals and organic matter for detection. Identification of certain mineral types can determine the habitability of the depositional environment, past or present, while detection of organic matter may suggest past or present habitation. In Mars' history, the Theiikian era represents an attractive target for life search missions and the acquisition of samples. The acidic and increasingly dry Theiikian may have been habitable and followed a lengthy neutral and wet period in Mars' history during which life could have originated and proliferated to achieve relatively abundant levels of biomass with a wide distribution. Moreover, the sulfate minerals produced in the Theiikian are also known to be good preservers of organic matter. We have used pyrolysis-FTIR and samples from a Mars analog ferrous acid stream with a thriving ecosystem to test the triage concept. Pyrolysis-FTIR identified those samples with the greatest probability of habitability and habitation. A three-tier scoring system was developed based on the detection of (i) organic signals, (ii) carbon dioxide and water, and (iii) sulfur dioxide. The presence of each component was given a score of A, B, or C depending on whether the substance had been detected, tentatively detected, or not detected, respectively. Single-step (for greatest possible sensitivity) or multistep (for more diagnostic data) pyrolysis-FTIR methods informed the assignments. The system allowed the highest-priority samples to be categorized as AAA (or A*AA if the organic signal was complex), while the lowest-priority samples could be categorized as CCC. Our methods provide a mechanism with which to rank samples and identify those that should take the highest priority for return to Earth during a Mars Sample Return mission.
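The scoring scheme described above can be captured in a few lines; this is a hedged sketch of the stated A/B/C logic with placeholder detection labels, not the authors' software.

    # Hedged sketch of the three-tier triage scoring: each component (organic signal,
    # CO2 + H2O, SO2) is graded A (detected), B (tentatively detected) or C (not
    # detected); a complex organic signal upgrades the first grade to A*.
    def triage_score(organics, co2_h2o, so2, organics_complex=False):
        def grade(level):
            return {"detected": "A", "tentative": "B", "none": "C"}[level]
        first = grade(organics)
        if first == "A" and organics_complex:
            first = "A*"
        return first + grade(co2_h2o) + grade(so2)

    # Highest-priority sample vs. lowest-priority sample:
    print(triage_score("detected", "detected", "detected", organics_complex=True))  # A*AA
    print(triage_score("none", "none", "none"))                                     # CCC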

Chloramphenicol (CAP) is a widely used amide alcohol antibiotic, which has been banned from use in food-producing animals in many countries. In this study, surface enhanced Raman scattering (SERS) coupled with gold colloidal nanoparticles was used for the rapid analysis of CAP. Density functional theory (DFT) calculations were conducted with Gaussian 03 at the B3LYP level using the 3-21G(d) and 6-31G(d) basis sets to analyze the assignment of vibrations. The theoretical Raman spectrum of CAP was in complete agreement with the experimental spectrum. Both exhibited three strong peaks characteristic of CAP at 1104 cm⁻¹, 1344 cm⁻¹ and 1596 cm⁻¹, which were used for rapid qualitative analysis of CAP residues in food samples. The use of SERS as a method for the measurement of CAP was explored by comparing the use of different solvents, gold colloidal nanoparticle concentrations and absorption times. The detection limit of the method was determined to be 0.1 μg/mL under optimum conditions. The Raman peak at 1344 cm⁻¹ was used as the index for quantitative analysis of CAP in food samples, with a linear correlation of R² = 0.9802. Quantitative analysis of CAP residues in foods revealed that the SERS technique with gold colloidal nanoparticles was sensitive, stable and linear, and suited for rapid analysis of CAP residues in a variety of food samples.
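The quantitative step amounts to a linear calibration of the 1344 cm⁻¹ peak intensity against CAP concentration; the sketch below uses placeholder intensities (not the study's data) to show how the slope, intercept and R² of such a calibration are obtained and inverted for an unknown.

    # Hedged sketch: linear calibration of SERS peak intensity vs. CAP concentration.
    import numpy as np

    conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])            # ug/mL standards
    peak = np.array([120., 540., 1050., 2080., 5150., 9900.])   # placeholder intensities

    slope, intercept = np.polyfit(conc, peak, 1)
    pred = slope * conc + intercept
    r2 = 1.0 - np.sum((peak - pred) ** 2) / np.sum((peak - peak.mean()) ** 2)
    print(f"I = {slope:.1f} * c + {intercept:.1f}, R^2 = {r2:.4f}")

    # An unknown sample's concentration follows by inverting the calibration:
    print(f"c_unknown = {(3000.0 - intercept) / slope:.2f} ug/mL")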

In case of a severe hypothetical accident in a pressurized water reactor, the reactor assembly melts partially or completely. The material formed, called corium, flows out and spreads at the bottom of the reactor. To limit and control the consequences of such an accident, the specifications of the basic O-U-Zr system must be known accurately. To achieve this goal, the corium mix was melted by electron bombardment at very high temperature (3000 K), followed by quenching of the ingot in the Isabel 1 evaporator. Metallographic analyses were then required to validate the thermodynamic databases used with the Thermo-Calc software. The study consists of defining an overall quantitative surface analysis method that is fast and reliable, in order to determine the overall corium composition. The analyzed ingot originated from a (U + Fe + Y + UO2 + ZrO2) mix with a total mass of 2253.7 grams. Several successive heatings at moderate power were performed before a very brief plateau at very high temperature, so that the ingot was formed progressively and without any evaporation liable to modify its initial composition. The central zone of the ingot was then analyzed by qualitative and quantitative global surface methods, to yield the volume composition of the analyzed zone. Corium sample analysis is very complex because of the variety and number of elements present, and also because of the presence of oxygen in a heavy-element matrix based on uranium. Three different global quantitative surface analysis methods were used: global EDS analysis (Energy Dispersive Spectrometry) with SEM, global WDS analysis (Wavelength Dispersive Spectrometry) with EPMA, and the coupling of image analysis with EDS or WDS point spectroscopic analyses. The difficulties encountered during the study arose from sample preparation (corium is very sensitive to oxidation) and from the choice of acquisition parameters for the images and analyses. The corium sample studied consisted of two zones displaying...

Analysis of surface water for mercury comprises the determination of both ionic and organically bound mercury in solution and that of the total mercury content of the suspended matter. Where applicable, metallic mercury has to be determined too. Requirements for the sampling procedure are given. A method for the routine determination of mercury in surface water and seawater was developed and applied to Dutch surface waters. The total sample volume is 2500 ml. About 500 ml is used for the determination of the content of suspended matter and the total amount of mercury in the water. The sample is filtered through a bed of previously purified active charcoal at a low flow-rate. The main portion (ca. 2000 ml) passes a flow-through centrifuge to separate the solid fraction. One liter is used to separate "inorganic" mercury by reduction, volatilization in an airstream and adsorption on active charcoal. The other liter is led through a column of active charcoal to collect all mercury. The procedures were checked with ¹⁹⁷Hg radiotracer, both as an ion and incorporated in organic compounds. The mercury is determined by thermal neutron activation, followed by volatilization in a tube furnace and adsorption on a fresh carbon bed. The limit of determination is approximately 1 ng l⁻¹. The rate of desorption from and adsorption on suspended material has been measured as a function of the pH of the solution for Hg²⁺ and various other ions. It can be concluded that only the procedure mentioned above does not disturb the equilibrium. The separation of mercury from air is obtained by suction of 1 m³ through a 0.22 μm filter and a charcoal bed. The determination is then performed as in the case of the water samples.

The data-quality objectives for samples collected at surface-water sites in the National Water-Quality Network include estimating the extent to which contamination, matrix effects, and measurement variability affect interpretation of environmental conditions. Quality-control samples provide insight into how well the samples collected at surface-water sites represent the true environmental conditions. Quality-control samples used in this program include field blanks, replicates, and field matrix spikes. This report describes the design for collection of these quality-control samples and the data management needed to properly identify these samples in the U.S. Geological Survey’s national database.

A simple method for the determination of Cr, Ni and Mo in stainless steels is presented. In order to minimize matrix effects, conditions for a liquid system to dissolve stainless steel chips were developed. Pure element solutions were used as standards. Preparation of synthetic solutions containing all the elements of the steel, as well as mathematical corrections, is avoided. The result is a simple chemical operation which simplifies the method of analysis. The variance analysis of the results obtained with steel samples shows that the three elements may be determined by comparison with the analytical curves obtained with the pure elements if the same parameters are used in the calibration curves. The accuracy and the precision were checked against other techniques using the British Chemical Standards of the Bureau of Analysed Samples Ltd. (England). (M.E.L.)

Increased incidence of microbial infection in distillate fuels has led to a demand for organisations such as the Institute of Petroleum to propose standards for microbiological quality, based on numbers of viable microbial colony forming units. Variations in quality requirements, and in the spoilage significance of contaminating microbes plus a tendency for temporal and spatial changes in the distribution of microbes, makes such standards difficult to implement. The problem is compounded by a diversity in the procedures employed for sampling and testing for microbial contamination and in the interpretation of the data obtained. The following paper reviews these problems and describes the efforts of The Institute of Petroleum Microbiology Fuels Group to address these issues and in particular to bring about harmonisation of sampling and testing methods. The benefits and drawbacks of available test methods, both laboratory based and on-site, are discussed.

This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 yr)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 yr)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
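For the trend-with-uncertainty quantities quoted above, a hedged sketch of the underlying calculation is an ordinary least-squares slope and its standard error on an annual anomaly series; the series below is synthetic, and sampling-error variances of the kind studied in the abstract would add to this purely statistical uncertainty.

    # Hedged sketch (synthetic data): least-squares decadal trend and its standard error.
    import numpy as np

    rng = np.random.default_rng(11)
    years = np.arange(1960, 2015)
    sat = 0.03 * (years - years.mean()) + rng.normal(0.0, 0.3, years.size)  # K anomalies

    x = years - years.mean()
    slope = np.sum(x * (sat - sat.mean())) / np.sum(x ** 2)       # K per year
    resid = sat - (sat.mean() + slope * x)
    se = np.sqrt(np.sum(resid ** 2) / (years.size - 2) / np.sum(x ** 2))

    print(f"trend: {10*slope:.2f} +/- {10*1.96*se:.2f} K per decade (95% CI)")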

The paper presents a simple calculation method which serves for an evaluation of the radiative properties of window glasses. The method is based on a computer simulation model of the energy balance of a thermally insulated box with selected glass samples. A temperature profile of the air inside the box with a glass sample exposed to the affecting radiation was determined for defined boundary conditions. The spectral range of the radiation was considered in the interval between 280 and 2500 nm. This interval corresponds to the spectral range of solar radiation affecting windows in building facades. The air temperature rise within the box was determined in response to the affecting radiation in the time between the beginning of the radiation exposure and the time of steady-state thermal conditions. The steady-state temperature inside the insulated box serves for the evaluation of the box energy balance and determination of the glass sample's radiative properties. These properties are represented by glass characteristics given as mean values of transmittance, reflectance and absorptance calculated for a defined spectral range. The data of the computer simulations were compared to experimental measurements on a real model of the insulated box. Results of both the calculations and the measurements are in good agreement. The method is recommended for preliminary evaluation of window glass radiative properties, which serve as data for the energy evaluation of buildings.

The modification of the surface of low-density polyethylene (LDPE) and polyurethane (PU) by means of the pulsed ion-plasma deposition of nanostructural carbon coatings at 20–60°C has been studied. The effect of this low-temperature treatment on the biocompatibility of the LDPE and PU has been assessed. Optimum technological parameters for the formation of mosaic carbon nanostructures with a thickness of 0.3–15 nm and a cluster lateral size of 10–500 nm are determined. ...

Surface wipe sampling in the occupational environment is a technique widely used by industrial hygienists. Although several organizations have promulgated standards for sampling lead and other metals, uncertainty still exists when trying to determine an appropriate wipe sampling strategy and how to interpret sampling results. Investigators from the National Institute for Occupational Safety and Health (NIOSH) Health Hazard Evaluation Program have used surface wipe sampling as part of their exposure assessment sampling strategies in a wide range of workplaces. This article discusses wipe sampling for measuring lead on surfaces in three facilities: (1) a battery recycling facility; (2) a firing range and gun store; and (3) an electronic scrap recycling facility. We summarize our findings from the facilities and what we learned by integrating wipe sampling into our sampling plan. Wipe sampling demonstrated lead on non-production surfaces in all three workplaces and showed that the potential existed for employees to take lead home to their families. We also found that the presence of metals such as tin can interfere with the colorimetric results. We also discuss the advantages and disadvantages of colorimetric analysis of surface wipe samples and the challenges we faced when interpreting wipe sampling results.

Current standard sources of radiochemistry methods are often inappropriate for use in evaluating US Department of Energy environmental and waste management (DOE/EM) samples. Examples of current sources include EPA, ASTM, Standard Methods for the Examination of Water and Wastewater, and HASL-300. Applicability of these methods is limited to specific matrices (usually water), radiation levels (usually environmental levels), and analytes (a limited number). Radiochemistry methods in DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) attempt to fill the applicability gap that exists between standard methods and those needed for DOE/EM activities. The Radiochemistry chapter in DOE Methods includes an "analysis and reporting" guidance section as well as radiochemistry methods. A basis for identifying the DOE/EM radiochemistry needs is discussed. Within this needs framework, the applicability of standard methods and targeted new methods is identified. Sources of new methods (consolidated methods from DOE laboratories and submissions from individuals) and the methods review process will be discussed. The processes involved in generating consolidated methods and editing individually submitted methods will be compared. DOE Methods is a living document and continues to expand by adding various kinds of methods. Radiochemistry methods are highlighted in this paper. DOE Methods is intended to be a resource for methods applicable to DOE/EM problems. Although it is intended to support DOE, the guidance and methods are not necessarily exclusive to DOE. The document is available at no cost through the Laboratory Management Division of DOE, Office of Technology Development.

The effect of packaging, shipping temperatures and storage times on recovery of Bacillus anthracis Sterne spores from swabs was investigated. Macrofoam swabs were pre-moistened, inoculated with Bacillus anthracis spores, and packaged in primary containment or secondary containment before storage at -15°C, 5°C, 21°C, or 35°C for 0-7 days. Swabs were processed according to validated Centers for Disease Control/Laboratory Response Network culture protocols, and the percent recovery relative to a reference sample (T0) was determined for each variable. No differences were observed in recovery between swabs held at -15°C and 5°C (p ≥ 0.23). These two temperatures provided significantly better recovery than swabs held at 21°C or 35°C (all 7 days pooled, p ≤ 0.04). The percent recovery at 5°C was not significantly different if processed on days 1, 2 or 4, but was significantly lower on day 7 (day 2 vs. day 7, 5°C, 10², p = 0.03). Secondary containment provided significantly better percent recovery than primary containment, regardless of storage time (5°C data, p ≤ 0.008). The integrity of environmental swab samples containing Bacillus anthracis spores shipped in secondary containment was maintained when stored at -15°C or 5°C and processed within 4 days to yield the optimum percent recovery of spores.
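The recovery metric used above reduces to a simple ratio against the reference sample; the sketch below shows that calculation with placeholder CFU counts (not the study's data).

    # Hedged sketch: percent recovery of spores from a stored swab relative to the
    # reference (T0) sample, computed from colony-forming-unit counts.
    def percent_recovery(cfu_swab, cfu_reference_t0):
        return 100.0 * cfu_swab / cfu_reference_t0

    print(f"{percent_recovery(7.2e4, 9.5e4):.1f} %")   # placeholder CFU values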

Detection of activity in natural samples is specific, especially because of its low level and high background interferences. Reduction of background interferences can be achieved using a low background chamber. A measurement geometry in the shape of a Marinelli beaker is commonly used owing to the low level of activity in natural samples. The Peak Net Area (PNA) method is the world-wide accepted technique for the analysis of gamma-ray spectra. It is based on the net area calculation of the full energy peak and therefore takes into account only a fraction of the measured gamma-ray spectrum. On the other hand, the Whole Spectrum Processing (WSP) approach to gamma analysis makes it possible to use the entire information in the spectrum. This significantly raises the efficiency and improves the energy resolution of the analysis. A principal step for the WSP application is building up a suitable response operator. Problems appear when suitable standard calibration sources are unavailable. This may occur in the case of large volume samples and/or in the analysis of a high energy range. Combined experimental and mathematical calibration may be a suitable solution. Many different detectors have been used to register the gamma ray and its energy. HPGe detectors produce the highest resolution commonly available today. Therefore they are the most often used detectors in the activity analysis of natural samples. Scintillation detectors analysed using the PNA method can also be used in simple cases, but for complicated spectra they are practically inapplicable. The WSP approach improves the resolution of scintillation detectors and expands their applicability. The WSP method allowed significant improvement of the energy resolution and separation of the ¹³⁷Cs 661 keV peak from the ²¹⁴Bi 609 keV peak. On the other hand, the statistical fluctuations in the lower part of the spectrum, highlighted by background subtraction, mean that this part is still not reliably analyzable. (authors)

An algorithm is developed to automatically screen the outliers from massive training samples for the Global Land Survey - Imperviousness Mapping Project (GLS-IMP). GLS-IMP is to produce a global 30 m spatial resolution impervious cover data set for the years 2000 and 2010 based on the Landsat Global Land Survey (GLS) data set. This unprecedented high resolution impervious cover data set is not only significant to urbanization studies but also desired by global carbon, hydrology, and energy balance research. A supervised classification method, regression tree, is applied in this project. A set of accurate training samples is the key to supervised classifications. Here we developed the global scale training samples from roughly 1 m resolution fine resolution satellite data (Quickbird and Worldview2), and then aggregated the fine resolution impervious cover map to 30 m resolution. In order to improve the classification accuracy, the training samples should be screened before being used to train the regression tree. It is impossible to manually screen 30 m resolution training samples collected globally. For example, in Europe alone, there are 174 training sites. The size of the sites ranges from 4.5 km by 4.5 km to 8.1 km by 3.6 km. The number of training samples is over six million. Therefore, we developed this automated statistics-based algorithm to screen the training samples at two levels: site and scene level. At the site level, all the training samples are divided into 10 groups according to the percentage of impervious surface within a sample pixel. The samples falling in each 10% interval form one group. For each group, both univariate and multivariate outliers are detected and removed. Then the screening process escalates to the scene level. A similar screening process, but with a looser threshold, is applied at the scene level considering the possible variance due to site differences. We do not perform the screening process across the scenes because the scenes might vary due to...
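As a hedged sketch of the site-level screening idea (not the project's code), one can bin training pixels by 10% impervious-cover intervals and, within each bin, drop univariate outliers by a z-score cut and multivariate outliers by a Mahalanobis-distance cut against a chi-square threshold; the thresholds and band layout below are assumptions.

    # Hedged sketch: two-stage outlier screening within 10% impervious-cover bins.
    import numpy as np
    from scipy import stats

    def screen_group(x, z_cut=3.0, alpha=0.001):
        # x: (n_samples, n_bands) spectral values for one impervious-cover bin
        z = np.abs((x - x.mean(axis=0)) / x.std(axis=0, ddof=1))
        keep = (z < z_cut).all(axis=1)                    # univariate screen
        xs = x[keep]
        inv = np.linalg.pinv(np.cov(xs, rowvar=False))
        d = xs - xs.mean(axis=0)
        md2 = np.einsum('ij,jk,ik->i', d, inv, d)         # squared Mahalanobis distance
        cut = stats.chi2.ppf(1.0 - alpha, df=x.shape[1])  # multivariate screen
        return np.flatnonzero(keep)[md2 < cut]            # indices of retained samples

    def screen_site(bands, impervious_frac):
        kept = []
        for b in range(10):
            lo, hi = b / 10.0, (b + 1) / 10.0
            mask = (impervious_frac >= lo) & (impervious_frac < hi)
            if b == 9:
                mask |= impervious_frac == 1.0
            idx = np.flatnonzero(mask)
            if idx.size > bands.shape[1] + 1:             # enough samples for a covariance
                kept.append(idx[screen_group(bands[idx])])
        return np.concatenate(kept) if kept else np.array([], dtype=int)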

The quality of wood surfaces after different kinds of machining processes is a property of great importance for the wood processing industries. The present work is a study whose objective was to evaluate different non-contact methods for measuring the quality of wood surfaces by correlating them with stylus tracing. A number of Scots pine samples were prepared by different kinds of wood machining processes. Surface roughness measurements were performed utilizing two optical non-contact methods. The results indicate that the laser scan method can measure surface roughness on sawn wood with a sufficient degree of accuracy. (author)

The use of Rutherford backscattering for structural analysis of single crystal surfaces is reviewed, and a new method is introduced. With this method, which makes use of the channeling and blocking phenomena of light ions of medium energy, surface atoms can be located with a precision of 0.02 Å. This is demonstrated in a measurement of surface relaxation for the Cu(110) surface. (Auth.)

In this paper, an analytical method to estimate the complex dielectric constant of liquids is presented. The method is based on the measurement of the transmission coefficient in an embedded microstrip line loaded with a complementary split ring resonator (CSRR), which is etched in the ground plane. From this response, the dielectric constant and loss tangent of the liquid under test (LUT) can be extracted, provided that the CSRR is surrounded by the LUT and the liquid level extends beyond the region where the electromagnetic fields generated by the CSRR are present. For that purpose, a liquid container acting as a pool is added to the structure. The main advantage of this method, which is validated by measuring the complex dielectric constant of olive and castor oil, is that reference samples for calibration are not required.

... four subsampling methods and five digestion methods, paying attention to the heterogeneity and the material characteristics of the waste fractions; it was possible to determine 61 substances with low detection limits, reasonable variance, and high accuracy. For most of the substances of environmental ... of variance (20-85% of the overall variation). Only by increasing the sample size significantly can this variance be reduced. The accuracy and short-term reproducibility of the chemical characterization were good, as determined by the analysis of several relevant certified reference materials. Typically, six to eight different certified reference materials representing a range of concentration levels and matrix characteristics were included. Based on the documentation provided, the methods introduced were considered satisfactory for characterization of the chemical composition of waste-material fractions ...

The decay daughters of Rn released from the powdered sample (soil) in a sealed bottle were collected on a piece of copper, and the radium in the sample can be measured by counting α-particles with an alphameter for uranium prospecting; thus it is called the radium method. This method has many advantages, such as high sensitivity (the lowest limit of detection for radium per gram of sample is 2.7 × 10⁻¹⁵ g), high efficiency, low cost and ease of use. On the basis of measuring more than 700 samples taken along 20 sections in 8 deposits, the results show that the radium method is better than γ-measurement and equal to the ²¹⁰Po method in its capability to discover anomalies. The author also summarizes the anomaly intensities of the radium method, the ²¹⁰Po method and γ-measurement at the surface above deep blind ores, with or without surficial mineralization, together with the shapes of their profiles and the variation of Ra/²¹⁰Po ratios. According to the above-mentioned distinguishing features, uranium mineralization located in deep and/or shallow parts can be distinguished. The combined application of the radium, ²¹⁰Po and γ-measurement methods may be regarded as one of the important methods for anomaly assessment. Based on the experiments of radium measurements with 771 stream sediment samples in an area of 100 km², it is demonstrated that the radium method can be used in the stages of uranium reconnaissance and prospecting.

Proposing different methods to obtain crystallographic information about biological materials is important, since the powder method is a nondestructive method. Slices are an approximation of what would be an in vivo analysis. Effects of sample preparation cause differences in scattering profiles compared with the powder method. The main inorganic component of bones and teeth is a calcium phosphate mineral whose structure closely resembles hydroxyapatite (HAp). The hexagonal symmetry, however, seems to work well with the powder diffraction data, and the crystal structure of HAp is usually described in space group P63/m. Ten third molar teeth were analyzed. Five teeth were separated into enamel, dentin and circumpulpal dentin powder and five into slices. All the scattering profile measurements were carried out at the X-ray diffraction beamline (XRD1) of the National Synchrotron Light Laboratory - LNLS, Campinas, Brazil. The LNLS synchrotron light source is composed of a 1.37 GeV electron storage ring, delivering approximately 4x10^10 photons/s at 8 keV. A double-crystal Si(111) pre-monochromator, upstream of the beamline, was used to select a small energy bandwidth at 11 keV. Scattering signatures were obtained at intervals of 0.04 deg for angles from 24 deg to 52 deg. The human enamel experimental crystallite sizes obtained in this work were 30(3) nm (112 reflection) and 30(3) nm (300 reflection). These values were obtained from measurements of powdered enamel. When comparing the diffraction patterns of the enamel slices, which gave 58(8) nm (112 reflection) and 37(7) nm (300 reflection), with those generated by the powder specimens, a few differences emerge. This work shows differences between the powder and slice methods, separating characteristics of the sample from the influence of the method. (author)
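The abstract does not state how the crystallite sizes were extracted from the diffraction peaks; a common approach for such data is the Scherrer relation, sketched below with placeholder peak parameters (this is an assumption, not necessarily the authors' procedure).

    # Hedged sketch (Scherrer-type estimate, assumed): crystallite size
    # t = K * lambda / (beta * cos(theta)), with beta the peak FWHM in radians
    # and theta the Bragg angle.
    import numpy as np

    def scherrer_size_nm(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
        beta = np.deg2rad(fwhm_deg)
        theta = np.deg2rad(two_theta_deg / 2.0)
        return k * wavelength_nm / (beta * np.cos(theta))

    # Placeholder values: 11 keV photons (~0.1127 nm) and a hypothetical 0.25 deg
    # FWHM for a reflection near 2-theta = 26 deg.
    print(f"{scherrer_size_nm(0.1127, 0.25, 26.0):.0f} nm")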

Pathogenic microbes on the surfaces of salad crops and growth chambers pose a threat to the health of crew on the International Space Station. For astronauts to safely consume space-grown vegetables produced in NASA's new vegetable production unit, VEGGIE, three technical challenges must be overcome: real-time sampling, microbiological analysis, and sanitation. Raphanus sativus cultivar Cherry Bomb II and Lactuca sativa cultivar Outredgeous, two salad crops to be grown in VEGGIE, were inoculated with Salmonella enterica serovar Typhimurium (S. Typhimurium), a bacterium known to cause food-borne illness. Tape- and swab-based sampling techniques were optimized for use in microgravity and assessed for effectiveness in recovery of bacteria from crop surfaces. Rapid pathogen detection and molecular analyses were performed via quantitative real-time polymerase chain reaction using the LightCycler® 480 and the RAZOR® EX, a scaled-down instrument that is undergoing evaluation and testing for future flight hardware. These methods were compared with conventional, culture-based methods for the recovery of S. Typhimurium colonies. A sterile wipe saturated with a citric acid-based, food-grade sanitizer was applied to two different surface materials used in VEGGIE flight hardware that had been contaminated with the bacterium Pseudomonas aeruginosa, another known human pathogen. To sanitize surfaces, wipes were saturated with either the sanitizer or sterile deionized water and applied to each surface. Colony-forming units of P. aeruginosa grown on tryptic soy agar plates were enumerated from surface samples after sanitization treatments. Depending on the VEGGIE hardware material, 2- to 4.5-log10 reductions in colony-forming units were observed after sanitization. The difference in recovery of S. Typhimurium between tape- and swab-based sampling techniques was insignificant. RAZOR® EX rapidly detected S. Typhimurium present in both raw culture and extracted DNA samples.

Radiochemistry methods in Department of Energy Methods for Evaluating Environmental and Waste Management Samples (DOE Methods) add to the repertoire of other standard methods in support of U.S. Department of Energy environmental restoration and waste management (DOE/EM) radiochemical characterization activities. Current standard sources of radiochemistry methods are not always applicable for evaluating DOE/EM samples. Examples of current sources include those provided by the US Environmental Protection Agency, the American Society for Testing and Materials, Standard Methods for the Examination of Water and Wastewater, and Environmental Measurements Laboratory Procedures Manual (HASL-300). The applicability of these methods is generally limited to specific matrices (usually water), low-level radioactive samples, and a limited number of analytes. DOE Methods complements these current standard methods by addressing the complexities of EM characterization needs. The process for determining DOE/EM radiochemistry characterization needs is discussed. In this context of DOE/EM needs, the applicability of other sources of standard radiochemistry methods is defined, and gaps in methodology are identified. Current methods in DOE Methods and the EM characterization needs they address are discussed. Sources of new methods and the methods incorporation process are discussed. The means for individuals to participate in (1) identification of DOE/EM needs, (2) the methods incorporation process, and (3) submission of new methods are identified

Electrostatic force microscopy (EFM) is a special design of non-contact atomic force microscopy used for detecting electrostatic interactions between the probe tip and the sample. Its resolution is limited by the finite probe size and the long-range characteristics of electrostatic forces. Therefore, quantitative analysis is crucial to understanding the relationship between the actual local surface potential distribution and the quantities obtained from EFM measurements. To study EFM measurements on bimetallic samples with surface potential inhomogeneities as a special case, we have simulated such measurements using the boundary element method and calculated the force component and force gradient component that would be measured by amplitude modulation (AM) EFM and frequency modulation (FM) EFM, respectively. Such analyses have been performed for inhomogeneities of various shapes and sizes, for different tip-sample separations and tip geometries, for different applied voltages, and for different media (e.g., vacuum or water) in which the experiment is performed. For a sample with a surface potential discontinuity, the FM-EFM resolution expression agrees with the literature; however, the simulation for AM-EFM suggests the existence of an optimal tip radius of curvature in terms of resolution. On the other hand, for samples with strip- and disk-shaped surface potential inhomogeneities, we have obtained quantitative expressions for the detectability size requirements as a function of experimental conditions for both AM- and FM-EFMs, which suggest that a larger tip radius of curvature is moderately favored for detecting the presence of such inhomogeneities
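
A commonly used lumped-capacitance picture (an assumption here, not taken from the abstract) relates the quantities measured by the two detection modes to the tip-sample capacitance $C(z)$ and the potential difference $\Delta V = V_{\mathrm{tip}} - V_{\mathrm{surf}}$:

\[
F(z) = \tfrac{1}{2}\,\frac{\partial C}{\partial z}\,\Delta V^{2},
\qquad
\frac{\partial F}{\partial z} = \tfrac{1}{2}\,\frac{\partial^{2} C}{\partial z^{2}}\,\Delta V^{2},
\qquad
\Delta f \approx -\frac{f_{0}}{2k}\,\frac{\partial F}{\partial z},
\]

so AM-EFM (amplitude detection) is sensitive to the force $F$, while FM-EFM (frequency-shift detection) is sensitive to the force gradient $\partial F/\partial z$.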

A method is proposed for predicting the probability of sinkhole-shaped subsidence, the number of funnel-shaped subsidences and the size of individual funnels. The following factors which influence sudden subsidence of the surface in the form of funnels are analyzed: geologic structure of the strata between mining workings and the surface, mining depth, the time factor, and geologic dislocations. Sudden surface subsidence is observed only in the case of workings situated up to a few dozen meters from the surface. The use of the proposed method is explained with some examples. It is suggested that the method produces correct results which can be used in coal mining and in ore mining. (1 ref.) (In Polish)

Surface analysis methods, such as Auger electron spectroscopy, X-ray photoelectron spectroscopy, secondary ion mass spectrometry, glow discharge optical emission spectrometry and so on, have become indispensable for characterizing the surface and interface of many kinds of steel. Although a number of studies on characterization of steel by these methods have been carried out, several problems still remain in quantification and depth profiling. Nevertheless, the methods have provided essential information on the concentration and chemical state of elements at the surface and interface. Recent results on the characterization of oxide layers, coated films, etc. on the surface of steel are reviewed here. (author). 99 refs

For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine 3-dimensional effects and to avoid the multi-group approximations made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, the implementation of this method in the TRIPOLI-4 code is discussed, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of a REP-like fuel cell depletion. This technique is then used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After having validated the method against a reference calculation we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author)
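
The abstract describes correlated sampling only at a high level; the following toy sketch (not TRIPOLI-4 code; the cross sections, tally, and weight formula are illustrative assumptions) shows the core idea of reusing one set of Monte Carlo histories and reweighting them by the ratio of perturbed to nominal probability densities:

import numpy as np

rng = np.random.default_rng(0)

# Nominal and perturbed "cross sections" for a toy 1-D absorber
sigma_nom = 1.0       # nominal macroscopic cross section (1/cm)
sigma_pert = 1.1      # perturbed value, e.g. after a depletion step (assumption)

# Sample free-flight distances from the NOMINAL exponential pdf only once
x = rng.exponential(scale=1.0 / sigma_nom, size=100_000)

# Tally: probability of flying farther than L without a collision
L = 2.0
score = (x > L).astype(float)

# Correlated-sampling weights: ratio of perturbed to nominal pdf at each sample
w = (sigma_pert * np.exp(-sigma_pert * x)) / (sigma_nom * np.exp(-sigma_nom * x))

est_nom = score.mean()
est_pert = (w * score).mean()           # perturbed tally from the same histories
print(est_nom, np.exp(-sigma_nom * L))  # compare with exact exp(-sigma*L)
print(est_pert, np.exp(-sigma_pert * L))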

We here present a code for performing analytic continuation of fermionic Green's functions and self-energies, as well as bosonic susceptibilities, on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000) and is written for the widely used CUDA platform from NVidia. Detailed scaling tests are presented for two different GPUs in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

A series of printed samples on a semi-gloss paper substrate, with color differences near the threshold magnitude, were prepared for scaling the visual color difference and for evaluating the performance of different color-difference methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against the Z-score. The visual color differences obtained were checked with the STRESS factor. The results indicate that only the scales have been changed, while the relative scales between pairs in the data are preserved.
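
The probit-style conversion described above can be illustrated with a short sketch; the proportions below are hypothetical, and scipy's inverse normal CDF stands in for whatever tabulation the authors used:

import numpy as np
from scipy.stats import norm

# Hypothetical proportions of observers judging each pair as "perceptibly different"
p_perceptible = np.array([0.16, 0.31, 0.50, 0.69, 0.84])

# Probit transform: convert probabilities to Z-scores on a standard normal scale
z = norm.ppf(p_perceptible)
print(z)   # ~[-1, -0.5, 0, 0.5, 1]: an interval scale of visual difference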

A radio frequency (RF) surface impedance characterization (SIC) system that uses a novel sapphire-loaded niobium cavity operating at 7.5 GHz has been developed as a tool to measure the RF surface impedance of flat superconducting material samples. The SIC system can presently make direct calorimetric RF surface impedance measurements on the central 0.8 cm(2) area of 5 cm diameter disk samples from 2 to 20 K exposed to RF magnetic fields up to 14 mT. To illustrate system utility, we present first measurement results for a bulk niobium sample.

Sludge samples from the DWPF Slurry Mix Evaporator (SME) heating coil frame and coil surface were characterized to identify differences that might help identify heat transfer fouling materials. The SME steam coils have seen increased fouling leading to lower boil-up rates. Samples of the sludge were taken from the coil frame somewhat distant from the coil (bulk tank material) and from the coil surface (coil surface sample). The results of the analysis indicate that the compositions of the two SME samples are very similar, with the exception that the coil surface sample shows ∼5-10X higher mercury concentration than the bulk tank sample. Elemental analyses and x-ray diffraction results did not indicate notable differences between the two samples. The ICP-MS and Cs-137 data indicate no significant differences in the radionuclide composition of the two SME samples. Semi-volatile organic analysis revealed numerous organic molecules; these likely result from antifoaming additives. The compositions of the two SME samples also match well with the analyzed composition of the SME batch, with the exception of significantly higher silicon, lithium, and boron content in the batch sample, indicating the coil samples are deficient in frit relative to the SME batch composition.

Interest in coral microbial ecology has been increasing steadily over the last decade, yet standardized methods of sample collection still have not been defined. Two methods were compared for their ability to sample coral-associated microbial communities: tissue punches and foam swabs, the latter being less invasive and preferred by reef managers. Four colonies of star coral, Montastraea annularis, were sampled in the Dry Tortugas National Park (two healthy and two with white plague disease). The PhyloChip™ G3 microarray was used to assess microbial community structure of amplified 16S rRNA gene sequences. Samples clustered based on methodology rather than coral colony. Punch samples from healthy and diseased corals were distinct. All swab samples clustered closely together with the seawater control and did not group according to the health state of the corals. Although more microbial taxa were detected by the swab method, there is a much larger overlap between the water control and swab samples than punch samples, suggesting some of the additional diversity is due to contamination from water absorbed by the swab. While swabs are useful for noninvasive studies of the coral surface mucus layer, these results show that they are not optimal for studies of coral disease.

A national comparison of volume sample activity measurement methods may be regarded as a step toward accomplishing the traceability of environmental and food chain activity measurements to national standards. For this purpose, the Radionuclide Metrology Laboratory has distributed 137 Cs and 134 Cs water-equivalent solid standard sources to 24 laboratories having responsibilities in this matter. Every laboratory has to measure the activity of the received source(s) using its own standards, equipment and methods and report the obtained results to the organizer. The 'measured activities' will be compared with the 'true activities'. A final report will be issued, which plans to evaluate the national level of precision of such measurements and give some suggestions for improvement. (Author)

On the basis of positive results about sorption of radionuclides in rock thin sections, an autoradiographic method applicable to measuring sorption of radionuclides on rough rock surfaces was developed. No such method was previously available because (1) a plane film cannot be used due to the roughness of rock surfaces, (2) the rock samples used in this investigation cannot be studied with microscopes, and (3) an autoradiogram cannot be studied while fixed on the surface of a rock sample because the colours of the minerals in the sample interfere with the interpretation. This report discusses the experimental work done to find a useful procedure. In developing the method, the main emphasis was put on the following steps: (1) preparation of the sample for equilibration and spiking; (2) properties of the covering paint for the rock surface; and (3) testing of autoradiographic methods using different nuclear emulsions. As a result of these experiments, promising autoradiograms were obtained with gel emulsion for sawed rock surfaces and with stripping film for rough rock surfaces. The mineralogic distribution of sorbed activity is easily seen in the autoradiograms. Much work must still be done to get reliable quantitative information from autoradiograms. For development of the autoradiographic method, sawed plane rock samples of quartz-feldspar intergrowth, pegmatite and limestone were used. In addition, core samples of tonalite and mica gneiss from Olkiluoto were utilized. The distribution coefficients (Ka) obtained for cesium were 560×10⁻⁴ and 620×10⁻⁴ m³/m² for tonalite and mica gneiss, respectively. The results are slightly higher than, but of the same order of magnitude as, those obtained by the autoradiographic method using rock thin sections and by the batch method using crushed samples. The natural fracture surface sorption study is a logical step in determining the scaling factor from laboratory to field studies. Field data will be needed to determine whether laboratory

This work demonstrates modern methods for the study of solid surfaces and their application to glasses. Study of the interaction of ions, electrons and photons with the glass surface provides information about the composition of the surface and its structure on an atomic scale. A qualitative analysis of a surface can be made with the aid of Auger electron spectroscopy (AES) and electron spectroscopy for chemical analysis (ESCA), and with ion scattering (ISS and RBS) and secondary ion mass spectrometry (SIMS). The structure of a surface can be studied by means of ion scattering and low-energy electron diffraction (LEED), and the topography of a surface by means of scanning electron microscopy (SEM). Ellipsometry is generally confined to measuring the thickness of very thin layers. The application of these methods to glass surfaces is demonstrated on a series of examples. (author)

Dental light-cured resins can undergo different degrees of polymerization when applied in vivo. When polymerization is incomplete, toxic monomers may be released into the oral cavity. The present study assessed the cytotoxicity of different materials, using sample preparation methods that mirror clinical conditions. Composite and bonding resins were used and divided into four groups according to sample preparation method: uncured; directly cured samples, which were cured after being placed on solidified agar; post-cured samples, which were polymerized before being placed on agar; and “removed unreacted layer” samples, which had their oxygen-inhibition layer removed after polymerization. Cytotoxicity was evaluated using an agar diffusion test, MTT assay, and confocal microscopy. Uncured samples were the most cytotoxic, while removed unreacted layer samples were the least cytotoxic (p < 0.05). In the MTT assay, cell viability increased significantly in every group as the concentration of the extracts decreased (p < 0.05). Extracts from post-cured and removed unreacted layer samples of bonding resin were less toxic than post-cured and removed unreacted layer samples of composite resin. Removal of the oxygen-inhibition layer resulted in the lowest cytotoxicity. Clinicians should remove unreacted monomers on the resin surface immediately after restoring teeth with light-curing resin to improve the restoration biocompatibility.

The aim of this research was to verify the spectrophotometric method for analyzing nitrate in water samples using the APHA 2012 Section 4500 NO3-B method. The verification parameters used were: linearity, method detection limit, limit of quantitation, level of linearity, accuracy and precision. Linearity was assessed using 0 to 50 mg/L nitrate standard solutions, and the correlation coefficient of the standard calibration linear regression was 0.9981. The method detection limit (MDL) was 0.1294 mg/L and the limit of quantitation (LOQ) was 0.4117 mg/L. The level of linearity (LOL) was 50 mg/L, and nitrate concentrations of 10 to 50 mg/L were linear at a confidence level of 99%. Accuracy was determined through the recovery value, which was 109.1907%. Precision was evaluated using the percent relative standard deviation (%RSD) of repeated measurements, and its result was 1.0886%. The tested performance criteria showed that the methodology was verified under the laboratory conditions.
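
The verification statistics named above can be reproduced on synthetic data; the concentrations, absorbances, and replicate values below are invented for illustration, and the MDL/LOQ conventions shown (Student-t times the replicate standard deviation, and roughly ten times it) are common choices rather than necessarily those used in the study:

import numpy as np
from scipy.stats import t

# Hypothetical calibration standards (mg/L nitrate) and measured absorbances
conc = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
absorb = np.array([0.002, 0.101, 0.199, 0.305, 0.398, 0.502])
r = np.corrcoef(conc, absorb)[0, 1]              # linearity: correlation coefficient

# Precision: %RSD of hypothetical replicate measurements of one sample (mg/L)
reps = np.array([9.8, 10.1, 9.9, 10.0, 10.2, 9.9, 10.1])
rsd = 100 * reps.std(ddof=1) / reps.mean()

# Accuracy: recovery of a hypothetical 10 mg/L spike
recovery = 100 * (20.9 - 10.0) / 10.0            # (spiked - unspiked result) / spike

# MDL from low-level replicates (one common convention: t(0.99, n-1) * s);
# LOQ is often taken near 10 * s, i.e. roughly 3 * MDL
s_low = np.array([0.42, 0.48, 0.45, 0.40, 0.47, 0.44, 0.46]).std(ddof=1)
mdl = t.ppf(0.99, 6) * s_low
loq = 10 * s_low

print(round(r, 4), round(rsd, 2), round(recovery, 1), round(mdl, 3), round(loq, 3))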

The environmental DNA (eDNA) method is a detection technique that is rapidly gaining credibility as a sensitive tool useful in the surveillance and monitoring of invasive and threatened species. Because eDNA analysis often deals with small quantities of short and degraded DNA fragments, methods that maximize eDNA recovery are required to increase detectability. In this study, we performed experiments at different stages of the eDNA analysis to show which combinations of methods give the best recovery rate for eDNA. Using the Oriental weatherloach (Misgurnus anguillicaudatus) as a study species, we show that various combinations of DNA capture, preservation and extraction methods can significantly affect DNA yield. Filtration using cellulose nitrate filter paper preserved in ethanol or stored in a -20°C freezer and extracted with the Qiagen DNeasy kit outperformed other combinations in terms of cost and efficiency of DNA recovery. Our results support the recommendation to filter water samples within 24 hours, but if this is not possible, our results suggest that refrigeration may be a better option than freezing for short-term storage (i.e., 3-5 days). This information is useful in designing eDNA detection of low-density invasive or threatened species, where small variations in DNA recovery can signify the difference between detection success or failure.

β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the BMAA concentration discrepancies in previous publications. In addition and according to the method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from cyanobacteria matrix. BMAA biological variability was also tested through the extraction of diatom and cyanobacteria species, revealing a high variance in BMAA levels (0.0080-2.5797 μg g(-1) DW).

Environmental DNA (eDNA) has recently been used for detecting the distribution of macroorganisms in various aquatic habitats. In this study, we applied an eDNA method to estimate the distribution of the Japanese clawed salamander, Onychodactylus japonicus, in headwater streams. Additionally, we compared the detection of eDNA and hand-capturing methods used for determining the distribution of O. japonicus. For eDNA detection, we designed a qPCR primer/probe set for O. japonicus using the 12S rRNA region. We detected the eDNA of O. japonicus at all sites (with the exception of one), where we also observed them by hand-capturing. Additionally, we detected eDNA at two sites where we were unable to observe individuals using the hand-capturing method. Moreover, we found that eDNA concentrations and detection rates of the two water sampling areas (stream surface and under stones) were not significantly different, although the eDNA concentration in the water under stones was more varied than that on the surface. We, therefore, conclude that eDNA methods could be used to determine the distribution of macroorganisms inhabiting headwater systems by using samples collected from the surface of the water.

Weak structural surfaces are one of the key factors controlling the stability of slopes. The stability of rock slopes is in general governed by sets of discontinuities. However, in soft rocks, failure can occur along surfaces approaching a circular failure surface. To better understand the position of the potential sliding surface, a new method called the simplex-finite stochastic tracking method is proposed. This method divides the sliding surface into two parts: one is described by a smooth curve obtained by random searching, the other is a polyline formed by the weak structural surface. Single or multiple sliding surfaces can be considered, and consequently several types of combined sliding surfaces can be simulated. The paper adopts the arc-polyline to simulate the potential sliding surface and analyzes the search process for the sliding surface. Accordingly, software for slope stability analysis using this method was developed and applied to real cases. The results show that, using the simplex-finite stochastic tracking method, it is possible to locate the position of a potential sliding surface in the slope.

microbeads modified with N-alkyl hydroxylamine and N-alkyl-O-methyl hydroxylamine surface groups by incubation of antigen and beads for 16 h at 40 °C without the need for coupling agents. The efficiency of the new method was evaluated by flow cytometry in model samples and serum samples containing antibodies...

A system and method for laser desorption of an analyte from a specimen and capturing of the analyte in a suspended solvent to form a testing solution are described. The method can include providing a specimen supported by a desorption region of a specimen stage and desorbing an analyte from a target site of the specimen with a laser beam centered at a radiation wavelength (λ). The desorption region is transparent to the radiation wavelength (λ), and the sampling probe and a laser source emitting the laser beam are on opposite sides of a primary surface of the specimen stage. The system can also be arranged where the laser source and the sampling probe are on the same side of a primary surface of the specimen stage. The testing solution can then be analyzed using an analytical instrument or undergo further processing.

This paper presents graphical methods to determine the parameters of a helicoidal staircase. The first part shows the methods used to generate helicoidal curves by means of descriptive geometry and reviews the state of the art of helical surface generation studies. The second part shows the representation of the helical stair surface using descriptive geometry methods. Two projections, the front and top views, are used to represent the helicoidal stairs. One method of representing the stairs is solved using dedicated CAD modelling software. Following the representation of the helical surface by both approaches, a comparative study of the two representation methods was carried out. Conclusions about the two representation methods are presented at the end of the paper.

This report describes the results of Phase 1 efforts to develop a Rapid Surface Sampling and Archival Record (RSSAR) System for the detection of semivolatile organic contaminants on concrete, transite, and metal surfaces. The characterization of equipment and building surfaces for the presence of contaminants as part of building decontamination and decommissioning activities is an immensely large task of concern to both government and industry. Contaminated and clean materials must be clearly identified and segregated so that the clean materials can be recycled or reused, if possible, or disposed of more cheaply as nonhazardous waste. Characterization of building and equipment surfaces will be needed during initial investigations, during cleanup operations, and during the final confirmatory process, increasing the total number of samples well beyond that needed for initial characterization. This multiplicity of information places a premium on the ability to handle and track data as efficiently as possible. Aware of the shortcomings of traditional surface characterization technology, GE, with DOE support, has undertaken a 12-month effort to complete Phase 1 of a proposed four-phase program to develop the RSSAR system. The objectives of this work are to provide instrumentation to cost-effectively sample concrete and steel surfaces, provide a quick-look indication of the presence or absence of contaminants, and collect samples for later, more detailed analysis in a readily accessible and addressable form. The Rapid Surface Sampling and Archival Record (RSSAR) System will be a modular instrument made up of several components: (1) sampling heads for concrete surfaces, steel surfaces, and bulk samples; (2) quick-look detectors for photoionization and ultraviolet detection; (3) a multisample trapping module to trap and store vaporized contaminants in a manner suitable for subsequent detailed lab-based analyses

Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
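
No software implementation is specified in the abstract; a minimal sketch of clustering a binary participant-by-code matrix with two of the methods named (hierarchical clustering and K-means), using hypothetical data and scipy/scikit-learn, might look like this:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical binary code matrix: 50 participants x 12 qualitative codes (1 = code present)
codes = rng.integers(0, 2, size=(50, 12))

# Hierarchical clustering on the binary profiles (Jaccard distance, average linkage)
Z = linkage(codes, method="average", metric="jaccard")
hier_labels = fcluster(Z, t=3, criterion="maxclust")

# K-means clustering on the same 0/1 vectors
km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(codes)

print(np.bincount(hier_labels), np.bincount(km_labels))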

Jupiter's moon Europa is embedded deep within the Jovian magnetosphere and is thus exposed to bombardment by charged particles, from thermal plasma to more energetic particles at radiation belt energies. In particular, energetic charged particles are capable of affecting the uppermost layer of surface material on Europa, in some cases down to depths of several meters (Johnson et al., 2004; Paranicas et al., 2009, 2002). Examples of radiation-induced surface alteration include sputtering, radiolysis and grain sintering; processes that are capable of significantly altering the physical properties of surface material. Radiolysis of surface ices containing sulfur-bearing contaminants from Io has been invoked as a possible explanation for hydrated sulfuric acid detected on Europa's surface (Carlson et al., 2002, 1999), and radiolytic production of oxidants represents a potential source of energy for life that could reside within Europa's sub-surface ocean (Chyba, 2000; Hand et al., 2007; Johnson et al., 2003; Vance et al., 2016). Accurate knowledge of Europa's surface radiation environment is essential to the interpretation of space- and Earth-based observations of Europa's surface and exosphere. Furthermore, future landed missions may seek to sample endogenic material emplaced on Europa's surface to investigate its chemical composition and to search for biosignatures contained within. Such material would likely be sampled from the shallow sub-surface, and thus it becomes crucial to know to what degree this material is expected to have been radiation processed. Here we will present modeling results of energetic electron and proton bombardment of Europa's surface, including interactions between these particles and surface material. In addition, we will present predictions for biosignature destruction at different geographical locations and burial depths and discuss the implications of these results for surface sampling by future missions to Europa's surface.

Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma-based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single cell cavities, pursuing the improvement of their rf performance.

Healthcare facilities (HF) represent an at-risk environment for legionellosis transmission occurring after inhalation of contaminated aerosols. In general, the control of water is preferred to that of air because, to date, there are no standardized sampling protocols. Legionella air contamination was investigated in the bathrooms of 11 HF by active sampling (Surface Air System and Coriolis ® μ) and passive sampling using settling plates. During the 8-hour sampling, hot tap water was sampled three times. All air samples were evaluated using culture-based methods, whereas liquid samples collected using the Coriolis ® μ were also analyzed by real-time PCR. Legionella presence in the air and water was then compared by sequence-based typing (SBT) methods. Air contamination was found in four HF (36.4%) by at least one of the culturable methods. The culturable investigation by Coriolis ® μ did not yield Legionella in any enrolled HF. However, molecular investigation using Coriolis ® μ resulted in eight HF testing positive for Legionella in the air. Comparison of Legionella air and water contamination indicated that Legionella water concentration could be predictive of its presence in the air. Furthermore, a molecular study of 12 L. pneumophila strains confirmed a match between the Legionella strains from air and water samples by SBT for three out of four HF that tested positive for Legionella by at least one of the culturable methods. Overall, our study shows that Legionella air detection cannot replace water sampling because the absence of microorganisms from the air does not necessarily represent their absence from water; nevertheless, air sampling may provide useful information for risk assessment. The liquid impingement technique appears to have the greatest capacity for collecting airborne Legionella if combined with molecular investigations.

Key science and exploration objectives of lunar robotic precursor missions can be achieved with the Lunar Explorer (LEx) low-cost, robotic surface mission concept described herein. Selected elements of the LEx concept can also be used to create a lunar surface sample return mission that we have called Boomerang.

Sound field parameters are predicted with numerical methods in sound control systems, in acoustic designs of building and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques were developed; one of them uses 2 microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary elements method, in which estimating acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.

Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is scaled through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
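
As a rough illustration of the variable-fidelity idea (not the authors' ASM-IHK algorithm), the sketch below scales a cheap low-fidelity surrogate to a handful of high-fidelity samples and models the remaining discrepancy with a second Gaussian process; the test functions and scikit-learn usage are assumptions:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy 1-D functions standing in for expensive/cheap simulations (assumptions)
def f_high(x): return (6 * x - 2) ** 2 * np.sin(12 * x - 4)
def f_low(x):  return 0.5 * f_high(x) + 10 * (x - 0.5) - 5

x_lf = np.linspace(0, 1, 11)[:, None]          # many cheap low-fidelity runs
x_hf = np.array([[0.0], [0.4], [0.6], [1.0]])  # few expensive high-fidelity runs

gp_lf = GaussianProcessRegressor(kernel=RBF(0.2)).fit(x_lf, f_low(x_lf).ravel())

# Scale the low-fidelity trend to the high-fidelity data (least-squares rho),
# then model the remaining discrepancy with a second GP
y_lf_at_hf = gp_lf.predict(x_hf)
rho = np.dot(y_lf_at_hf, f_high(x_hf).ravel()) / np.dot(y_lf_at_hf, y_lf_at_hf)
gp_disc = GaussianProcessRegressor(kernel=RBF(0.2)).fit(
    x_hf, f_high(x_hf).ravel() - rho * y_lf_at_hf)

def predict_vf(x):
    # Variable-fidelity prediction: scaled low-fidelity trend + discrepancy GP
    return rho * gp_lf.predict(x) + gp_disc.predict(x)

x_test = np.linspace(0, 1, 5)[:, None]
print(predict_vf(x_test))
print(f_high(x_test).ravel())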

A statistical analysis was made of the activity concentrations measured in surface and deep soil samples for natural and anthropogenic gamma-emitting radionuclides. Soil samples were obtained from 48 different locations in Gilgit, Pakistan, covering an area of about 50 km² at an average altitude of 1550 m above sea level. From each location two samples were collected: one from the top soil (2-6 cm) and another from a depth of 6-10 cm. Four radionuclides, ²²⁶Ra, ²³²Th, ⁴⁰K and ¹³⁷Cs, were quantified. The data were analyzed using a t-test to find differences in activity concentration between the surface and depth samples. At the surface, the median activity concentrations were 23.7, 29.1, 4.6 and 115 Bq kg⁻¹ for ²²⁶Ra, ²³²Th, ¹³⁷Cs and ⁴⁰K, respectively. For the same radionuclides, the activity concentrations were respectively 25.5, 26.2, 2.9 and 191 Bq kg⁻¹ for the depth samples. Principal component analysis (PCA) was applied to explore patterns within the data. A significant positive correlation was observed between the radionuclides ²²⁶Ra and ²³²Th. The data from PCA were further utilized in linear discriminant analysis (LDA) for the classification of surface and depth samples. LDA classified surface and depth samples with good predictability. (author)

This study proposes a cross-correlation based PIV image interrogation algorithm that adapts the number of interrogation windows and their size to the image properties and to the flow conditions. The proposed methodology releases the constraint of uniform sampling rate (Cartesian mesh) and spatial resolution (uniform window size) commonly adopted in PIV interrogation. Especially in non-optimal experimental conditions where the flow seeding is inhomogeneous, this leads either to loss of robustness (too few particles per window) or measurement precision (too large or coarsely spaced interrogation windows). Two criteria are investigated, namely adaptation to the local signal content in the image and adaptation to local flow conditions. The implementation of the adaptive criteria within a recursive interrogation method is described. The location and size of the interrogation windows are locally adapted to the image signal (i.e., seeding density). Also the local window spacing (commonly set by the overlap factor) is put in relation with the spatial variation of the velocity field. The viability of the method is illustrated over two experimental cases where the limitation of a uniform interrogation approach appears clearly: a shock-wave-boundary layer interaction and an aircraft vortex wake. The examples show that the spatial sampling rate can be adapted to the actual flow features and that the interrogation window size can be arranged so as to follow the spatial distribution of seeding particle images and flow velocity fluctuations. In comparison with the uniform interrogation technique, the spatial resolution is locally enhanced while in poorly seeded regions the level of robustness of the analysis (signal-to-noise ratio) is kept almost constant.

Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most widely applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively, and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected both through a double-crossed W-transect with samples taken every 10 steps (method 1) and through four randomly located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance

Purpose: Performing lobe-based quantitative analysis of the lung in computed tomography (CT) scans can assist in efforts to better characterize complex diseases such as chronic obstructive pulmonary disease (COPD). While airways and vessels can help to indicate the location of lobe boundaries, segmentations of these structures are not always available, so methods to define the lobes in the absence of these structures are desirable. Methods: The authors present a fully automatic lung lobe segmentation algorithm that is effective in volumetric inspiratory and expiratory computed tomography (CT) datasets. The authors rely on ridge surface image features indicating fissure locations and a novel approach to modeling shape variation in the surfaces defining the lobe boundaries. The authors employ a particle system that efficiently samples ridge surfaces in the image domain and provides a set of candidate fissure locations based on the Hessian matrix. Following this, lobe boundary shape models generated from principal component analysis (PCA) are fit to the particles data to discriminate between fissure and nonfissure candidates. The resulting set of particle points are used to fit thin plate spline (TPS) interpolating surfaces to form the final boundaries between the lung lobes. Results: The authors tested algorithm performance on 50 inspiratory and 50 expiratory CT scans taken from the COPDGene study. Results indicate that the authors' algorithm performs comparably to pulmonologist-generated lung lobe segmentations and can produce good results in cases with accessory fissures, incomplete fissures, advanced emphysema, and low dose acquisition protocols. Dice scores indicate that only 29 out of 500 (5.85%) lobes showed Dice scores lower than 0.9. Two different approaches for evaluating lobe boundary surface discrepancies were applied and indicate that algorithm boundary identification is most accurate in the vicinity of fissures detectable on CT. Conclusions: The

We present a discontinuous Galerkin method on a fully unstructured grid for the modeling of unsteady incompressible fluid flows with free surfaces. The surface is modeled by embedding and represented by a level set. We discuss the discretization of the flow equations and the level set equation...

A method and apparatus for surface scanning in medical imaging is provided. The surface scanning apparatus comprises an image source, a first optical fiber bundle comprising first optical fibers having proximal ends and distal ends, and a first optical coupler for coupling an image from the image...

Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method

With the fast development of sea surface target detection by optoelectronic sensors, machine learning has been adopted to improve detection performance. Many features can be learned from training images by machines automatically. However, field images of sea surface targets are not sufficient as training data. 3D scene simulation is a promising method to address this problem. For ocean scene simulation, sea surface height field generation is the key point for achieving high fidelity. In this paper, two spectrum-based height field generation methods are evaluated. A comparison between the linear superposition and linear filter methods is made quantitatively with a statistical model. 3D ocean scene simulation results show the different features of the methods, which can serve as a reference for synthesizing sea surface target images under different ocean conditions.
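
The paper's exact wave spectrum and filter are not given in the abstract; the sketch below illustrates the linear-superposition approach in one dimension, using a Pierson-Moskowitz-type spectrum as a stand-in:

import numpy as np

rng = np.random.default_rng(1)

# 1-D Pierson-Moskowitz-type spectrum (illustrative assumption, not the paper's model)
def spectrum(omega, U=10.0, g=9.81):
    omega = np.where(omega == 0, 1e-6, omega)
    return 8.1e-3 * g**2 / omega**5 * np.exp(-0.74 * (g / (U * omega))**4)

# Linear superposition: sum sinusoids whose amplitudes follow the spectrum
omega = np.linspace(0.2, 3.0, 200)        # angular frequencies (rad/s)
d_omega = omega[1] - omega[0]
amp = np.sqrt(2 * spectrum(omega) * d_omega)
phase = rng.uniform(0, 2 * np.pi, omega.size)
k = omega**2 / 9.81                       # deep-water dispersion relation

x = np.linspace(0, 500, 1000)             # spatial grid (m)
eta = (amp[None, :] * np.cos(k[None, :] * x[:, None] + phase[None, :])).sum(axis=1)
print(eta.std())                          # significant wave height ~ 4 * std(eta)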

A continuous active sampling method was compared to continuous passive and discrete sampling methods for the sampling of trace organic compounds (TOCs) in water. Results from each method are compared and contrasted in order to provide information for future investigators to use while selecting appropriate sampling methods for their research. The continuous low-level aquatic monitoring (CLAM) sampler (C.I.Agent® Storm-Water Solutions) is a submersible, low flow-rate sampler that continuously draws water through solid-phase extraction media. CLAM samplers were deployed at two wastewater-dominated stream field sites in conjunction with the deployment of polar organic chemical integrative samplers (POCIS) and the collection of discrete (grab) water samples. All samples were analyzed for a suite of 69 TOCs. The CLAM and POCIS samples represent time-integrated samples that accumulate the TOCs present in the water over the deployment period (19–23 h for CLAM and 29 days for POCIS); the discrete samples represent only the TOCs present in the water at the time and place of sampling. Non-metric multi-dimensional scaling and cluster analysis were used to examine patterns in both TOC detections and relative concentrations between the three sampling methods. A greater number of TOCs were detected in the CLAM samples than in corresponding discrete and POCIS samples, but TOC concentrations in the CLAM samples were significantly lower than in the discrete and (or) POCIS samples. Thirteen TOCs of varying polarity were detected by all three methods. TOC detections and concentrations obtained by the three sampling methods, however, are dependent on multiple factors. This study found that stream discharge, constituent loading, and compound type all affected TOC concentrations detected by each method. In addition, TOC detections and concentrations were affected by the reporting limits, bias, recovery, and performance of each method.

This Field Sampling Plan describes the Operable Unit 3-13, Group 3, Other Surface Soils, Phase II remediation field sampling activities to be performed at the Idaho Nuclear Technology and Engineering Center located within the Idaho National Laboratory Site. Sampling activities described in this plan support characterization sampling of new sites, real-time soil spectroscopy during excavation, and confirmation sampling that verifies that the remedial action objectives and remediation goals presented in the Final Record of Decision for Idaho Nuclear Technology and Engineering Center, Operable Unit 3-13 have been met.

To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT), the authors have developed a level-set based surface reconstruction method. The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking because it eliminates the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed formulation with an efficient narrowband evolving scheme. The method was evaluated on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degrees of noise and missing data, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations and quantitatively evaluated the reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter

The kernel density was determined based on sampling points obtained in a Markov chain simulation and was used as the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the metamodel was also calculated.
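
The following is a minimal Python sketch of the core importance-sampling step described above. The analytic limit-state function, the rejection step used to gather points near the limit state, and all numerical values are illustrative assumptions; in the paper the surrogate is a Kriging metamodel and the points come from a Markov chain simulation.

```python
# Minimal sketch (not the authors' code): importance-sampling estimate of a
# failure probability using a kernel density built from points near the limit
# state.  The analytic limit-state function g(x) stands in for the Kriging
# metamodel described in the abstract; names and values are illustrative only.
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

rng = np.random.default_rng(0)

def g(x):
    # hypothetical limit state: failure when g(x) <= 0
    return 5.0 - x[..., 0] - x[..., 1]

# Nominal input density f(x): independent standard normals
f = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2))

# Step 1: collect points close to the limit state (here by crude rejection
# from a wide proposal; the paper uses a Markov chain simulation instead)
proposal = rng.normal(0.0, 2.0, size=(200_000, 2))
near_limit = proposal[np.abs(g(proposal)) < 0.3][:2000]

# Step 2: fit a kernel density to those points -> importance density h(x)
h = gaussian_kde(near_limit.T)

# Step 3: importance sampling on the (surrogate) limit state
N = 20_000
x = h.resample(N, seed=1).T                      # samples from h
w = f.pdf(x) / h(x.T)                            # importance weights f/h
pf = np.mean((g(x) <= 0.0) * w)

print(f"importance-sampling estimate of P_f: {pf:.3e}")
# Crude Monte Carlo reference (needs many more samples for similar accuracy)
xmc = f.rvs(size=1_000_000, random_state=2)
print(f"crude Monte Carlo estimate:          {np.mean(g(xmc) <= 0.0):.3e}")
```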

A recently developed centroidal Voronoi tessellation (CVT) sampling method is investigated here to assess its suitability for use in statistical sampling applications. CVT efficiently generates a highly uniform distribution of sample points over arbitrarily shaped M-dimensional parameter spaces. On several 2-D test problems CVT has recently been found to provide exceedingly effective and efficient point distributions for response surface generation. Additionally, for statistical function integration and estimation of response statistics associated with uniformly distributed random-variable inputs (uncorrelated), CVT has been found in initial investigations to provide superior point sets when compared against Latin hypercube and simple-random Monte Carlo methods and Halton and Hammersley quasi-random sequence methods. In this paper, the performance of all these sampling methods and a new variant ('Latinized' CVT) is further compared for non-uniform input distributions. Specifically, given uncorrelated normal inputs in a 2-D test problem, statistical sampling efficiencies are compared for resolving various statistics of response: mean, variance, and exceedance probabilities.
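
As a rough illustration of the kind of comparison described above, the sketch below contrasts a small CVT point set with a simple-random Monte Carlo sample of the same size for estimating the mean of a response under uniform inputs. The test function and the Lloyd-iteration approximation of CVT are assumptions; the paper's non-uniform (normal-input) and Latinized variants are not reproduced here.

```python
# Minimal sketch (illustrative only): centroidal Voronoi tessellation (CVT)
# points generated with Lloyd's algorithm on the unit square, compared with
# simple random Monte Carlo points for estimating the mean of a test response.
import numpy as np

rng = np.random.default_rng(0)

def response(x):
    # hypothetical smooth response on [0, 1]^2
    return np.exp(-3.0 * x[:, 0]) * np.sin(4.0 * x[:, 1]) + x[:, 0] * x[:, 1]

def cvt_points(n_points, n_iter=40, n_probe=50_000):
    """Lloyd iteration: move each generator to the centroid of its Voronoi
    cell, estimated from a dense cloud of uniform probe points."""
    gen = rng.random((n_points, 2))
    for _ in range(n_iter):
        probe = rng.random((n_probe, 2))
        # nearest generator for every probe point
        d2 = ((probe[:, None, :] - gen[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        for k in range(n_points):
            cell = probe[owner == k]
            if len(cell):
                gen[k] = cell.mean(axis=0)
    return gen

n = 64
x_cvt = cvt_points(n)
x_mc = rng.random((n, 2))

# Reference value from a very large Monte Carlo sample
x_ref = rng.random((2_000_000, 2))
mu_ref = response(x_ref).mean()

print(f"reference mean       : {mu_ref:.5f}")
print(f"CVT estimate  (n={n}): {response(x_cvt).mean():.5f}")
print(f"MC estimate   (n={n}): {response(x_mc).mean():.5f}")
```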

A method for treatment of the surface of a CdZnTe (CZT) crystal that provides a native dielectric coating to reduce surface leakage currents and thereby improve the resolution of instruments incorporating detectors using CZT crystals. A two-step process is disclosed: etching the surface of a CZT crystal with a solution of the conventional bromine/methanol etch treatment, and, after attachment of electrical contacts, passivating the CZT crystal surface with a solution of 10 w/o NH4F and 10 w/o H2O2 in water.

The recent restoration works on the Santo Stefano Church facade (XV century) in Venice have revealed variously preserved traces of different kinds of surface finishes. These finishes were found on the brick surfaces both in the masonry and in the decorative elements. Different brick surface and decorative tile samples were investigated using several techniques: optical microscopy, scanning electron microscopy, thermal analysis, infrared spectroscopy and reflectance Fourier transform infrared microspectroscopy. The results were used to understand the decorative techniques and to identify the materials employed.

Growing concerns over the potential release and threat of silver nanoparticles (AgNPs) to environmental and biological systems urge researchers to investigate their fate and behavior. However, current analytical techniques cannot meet the requirements for rapidly, sensitively and reliably probing AgNPs in complex matrices. Surface-enhanced Raman spectroscopy (SERS) has shown great capability for rapid detection of AgNPs based on an indicator molecule that can bind on the AgNP surface. The objective of this study was to exploit SERS to detect AgNPs in environmental and biological samples through optimizing the Raman indicator for SERS. Seven indicator molecules were selected and determined to obtain their SERS signals at optimal concentrations. Among them, 1,2-di(4-pyridyl)ethylene (BPE), crystal violet and ferric dimethyl-dithiocarbamate (ferbam) produced the highest SERS intensities. Further experiments on binding competition between each two of the three candidates showed that ferbam had the highest AgNPs-binding ability. The underlying mechanism lies in the strong binding affinity of ferbam with AgNPs via multiple sulfur atoms. We further validated ferbam to be an effective indicator for SERS detection of as low as 0.1 mg/L AgNPs in genuine surface water and 0.57 mg/L in spinach juice. Moreover, limited interference on SERS detection of AgNPs was found from environmentally relevant inorganic ions, organic matter, inorganic particles, as well as biologically relevant components, demonstrating the ferbam-assisted SERS is an effective and sensitive method to detect AgNPs in complex environmental and biological samples. - Graphical abstract: SERS signal intensity of ferbam indicates the concentration of AgNPs. - Highlights: • Ferbam was found to be the best indicator for SERS detection of AgNPs. • SERS was able to detect AgNPs in both environmental and biological samples. • Major components in the two matrices had limited effect on AgNP detection.

Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
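
A hedged Python sketch of the general idea follows. The decision rules, sample sizes and prevalence threshold are invented for illustration and are not the Welfare Quality values or the paper's exact schemes; the simulation merely shows how a two-stage rule can approach the accuracy of a fixed-size sample with a smaller average sample size.

```python
# Minimal sketch (assumed decision rules): a two-stage sequential sampling
# simulation for classifying a herd as 'good' or 'bad' on lameness
# prevalence, compared with a fixed-size sample of the same maximum size.
import numpy as np

rng = np.random.default_rng(0)

FULL_N = 60          # assumed full sample size for the herd size
HALF_N = FULL_N // 2
THRESHOLD = 0.15     # herd classified 'bad' if estimated prevalence exceeds this
MARGIN = 0.05        # stop early only if the stage-1 estimate is this clear

def classify_sequential(true_prev):
    stage1 = rng.binomial(HALF_N, true_prev) / HALF_N
    if stage1 <= THRESHOLD - MARGIN:
        return "good", HALF_N
    if stage1 >= THRESHOLD + MARGIN:
        return "bad", HALF_N
    # inconclusive: sample the second half and decide on the pooled estimate
    stage2 = rng.binomial(HALF_N, true_prev) / HALF_N
    pooled = 0.5 * (stage1 + stage2)
    return ("bad" if pooled > THRESHOLD else "good"), FULL_N

def classify_fixed(true_prev):
    est = rng.binomial(FULL_N, true_prev) / FULL_N
    return ("bad" if est > THRESHOLD else "good"), FULL_N

for true_prev in (0.05, 0.15, 0.30):
    truth = "bad" if true_prev > THRESHOLD else "good"
    for name, rule in (("sequential", classify_sequential),
                       ("fixed", classify_fixed)):
        results = [rule(true_prev) for _ in range(100_000)]
        correct = np.mean([r[0] == truth for r in results])
        avg_n = np.mean([r[1] for r in results])
        print(f"prev={true_prev:.2f}  {name:10s}  "
              f"accuracy={correct:.3f}  mean n={avg_n:.1f}")
```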

Surface temperatures are estimated with high precision based on a multitemperature method for Fourier-transform spectrometers. The method is based on Planck's radiation law and a nonlinear least-squares fitting algorithm applied to two or more spectra at different sample temperatures and a single measurement at a known sample temperature, for example, at ambient temperature. The temperature of the sample surface can be measured rather easily at ambient temperature. The spectrum at ambient temperature is used to eliminate background effects from spectra measured at other surface temperatures. The surface temperatures of blackbody sources are estimated with an uncertainty of 0.2-2 K. The method is demonstrated for measuring the spectral emissivity of a brass specimen and an oxidized nickel specimen. (C) 1996 Optical Society of America
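
A simplified sketch of the core fitting step is given below: a grey-body Planck spectrum is simulated and its temperature and emissivity are recovered by nonlinear least squares. The band, temperature and noise level are assumptions, and the multi-spectrum background-elimination step of the actual method is omitted.

```python
# Minimal sketch (assumed, simplified): estimating a surface temperature by
# nonlinear least-squares fitting of Planck's radiation law to a simulated
# emission spectrum.
import numpy as np
from scipy.optimize import curve_fit

h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
kB = 1.381e-23     # Boltzmann constant, J/K

def planck(nu, T, emissivity):
    """Spectral radiance of a grey body at frequency nu [Hz], temperature T [K]."""
    return emissivity * (2.0 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

# Simulated measurement: a grey body at 623 K with emissivity 0.6 plus 1% noise
rng = np.random.default_rng(0)
nu = np.linspace(2e13, 1.2e14, 400)               # roughly the 2.5-15 um band
true_T, true_eps = 623.0, 0.6
measured = planck(nu, true_T, true_eps)
measured = measured * (1.0 + 0.01 * rng.standard_normal(nu.size))

# Nonlinear least-squares fit for temperature and emissivity
popt, pcov = curve_fit(planck, nu, measured, p0=(500.0, 0.5))
T_fit, eps_fit = popt
print(f"fitted T = {T_fit:.1f} K (true {true_T} K), emissivity = {eps_fit:.2f}")
```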

To limit environmental noise, it is necessary to determine and locate the sources of sound emitted from the surfaces of many machines and devices, so that suitable design changes can be implemented to reduce their nuisance. In this paper, the results of tests and calculations are presented for plane surface sources emitting acoustic waves. The tests were carried out with a scanning laser vibrometer, which enabled remote registration and spectral analysis of the surface vibrations. The known hybrid digital method developed for determining the sound wave emission from such surfaces, divided into small finite elements, was slightly modified by taking into account the phase correlations between the vibrating elements. The final method being developed may find use in a wide range of applications for different forms of vibration of plane surfaces.

Visualization of water surfaces is a hot topic in computer graphics. In this paper, we present a fast method to generate a wide range of water surface with good image quality both near and far from the viewpoint. The method uses a uniform mesh and fractal Perlin noise to model the water surface. Mipmapping is applied to the surface textures, which adjusts the resolution with respect to the distance from the viewpoint and reduces the computing cost. The lighting effect is computed based on shadow mapping, Snell's law and the Fresnel term. The render pipeline uses a CPU-GPU shared memory structure, which improves rendering efficiency. Experimental results show that our approach visualizes water surfaces with good image quality at real-time frame rates.

To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate the response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with the point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to further reduce the computational effort when design variables are distributional parameters of input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.

In this paper, the design and development of an optical probe for in situ measurement of surface roughness are discussed. Based on the light scattering principle, the probe, which consists of a laser diode, a measuring lens and a linear photodiode array, is designed to capture the light scattered from a test surface within a relatively large scattering angle ϕ (=28°). This capability increases the measuring range and enhances the repeatability of the results. The coaxial arrangement, which incorporates a dual laser beam and a constant compressed air stream, renders the proposed system insensitive to movement or vibration of the test surface as well as to surface conditions. Tests were conducted on workpieces mounted on a turning machine operated at different cutting speeds. Test specimens which underwent different machining processes and had different surface finishes were also studied. The results obtained demonstrate the feasibility of surface roughness measurement using the proposed method.

The aim of this study was to develop a method for sampling dissolved gases in groundwater pumped from a borehole. This report describes the developed method, called the Simple gas collector (YKK), and the first results obtained. Samples were collected from five sampling sections. The first test samplings were made from a multi-packered deep borehole (OL-KR1/523.2-528.2 m). The remaining samples were taken during prepumping for PAVE samplings. All samples were analysed with a mass spectrometer. Gas composition results were very reproducible, but gas concentration results varied in some sampling sections. The results were compared with gas results for groundwater samples taken with PAVE equipment. The YKK results were mainly comparable to the PAVE results, although differences were observed in both gas composition and concentration. When the gas concentration was small, the gas compositions were very comparable, and when the concentration was high, the compositions differed between the YKK and PAVE results. Gas concentration values were very comparable when the groundwater samples contained a lot of gas, but the differences were relatively larger when the amount of gas in the groundwater sample was small. According to the survey, comparable information on dissolved gases in groundwater can be obtained with the YKK method. A limitation of the method is that the pumped groundwater must be oversaturated with gases under sampling conditions. (orig.)

In recent times, technologies based on the imitation of nature have attracted considerable attention. Lotus leaves are known for their self-cleaning effect; the micrometer-scale papillae structure and the epicuticular wax on the lotus leaf contribute to this effect. In a manner similar to the self-cleaning property of lotus leaves, the wettability of solid surfaces is of great interest in daily life and industry.1-4 Wettability is controlled by both the geometrical structure of a surface and a low surface energy material coating. A superhydrophobic surface is defined by a water contact angle of more than 150° and a sliding angle of less than 10°. On such a surface, a water drop has a perfectly spherical shape and easily rolls off, removing deposited contaminants. A superhydrophobic surface thus protects a material from contamination, fogging, and snow deposition. We fabricated a superhydrophobic surface on magnesium by nickel deposition and surface coating of stearic acid. The fabricated surfaces were stable against acidic and basic solutions. In conclusion, we have developed a simple and inexpensive method for fabricating a superhydrophobic surface on magnesium by metal deposition and stearic acid coating.

Small samples of 12.5 mm in diameter made from pure tungsten were exposed to a dense plasma jet produced by a coaxial plasma gun operated at 2 kJ. The surface of the samples was analyzed using a scanning electron microscope (SEM) before and after applying consecutive plasma shots. Cracks and craters were produced in the surface due to surface tensions during plasma heating. Nanodroplets and micron-size droplets could be observed on the sample surfaces. An energy-dispersive spectroscopy (EDS) analysis revealed that the composition of these droplets coincided with that of the gun electrode material. Four types of samples were prepared by spark plasma sintering from powders with average particle sizes ranging from 70 nanometers up to 80 μm. The plasma power load to the sample surface was estimated to be ≈4.7 MJ m^-2 s^-1/2 per shot. The electron temperature and density in the plasma jet had peak values of 17 eV and 1.6 × 10^22 m^-3, respectively.

Sampling and detection of trace explosives is a key analytical process in modern transportation safety. In this work we have explored some of the fundamental analytical processes for the collection and detection of trace-level explosives on surfaces with the most widely utilized system, thermal desorption IMS. The performance of the standard muslin swipe material was compared with chemically modified fiberglass cloth. The fiberglass surface was modified to include phenyl functional groups. When compared to standard muslin, the phenyl-functionalized fiberglass sampling material showed better analyte release from the sampling material as well as improved response and repeatability over multiple uses of the same swipe. The improved sample release of the functionalized fiberglass swipes resulted in a significant increase in sensitivity. Various physical and chemical properties were systematically explored to determine optimal performance. The results herein have relevance to improving the detection of other explosive compounds and potentially to a wide range of other chemical sampling and field detection challenges.

The ion-bombardment-induced surface topography of polycrystalline silver was studied using the stereophotogrammetric method. The samples were irradiated with 30 keV argon ions at fairly high fluences (> 10^17 ions/cm^2). The influence of the inclination angle of the sample in the SEM on the cone shape in an SEM picture is discussed. To analyse the irradiated surfaces covered with cones, the SEM stereo technique is proposed. Measurements of the sample section perpendicular to the incidence plane were also carried out. (author)

We compare different approaches to measuring the surface area of aerosol agglomerates. The objective was to compare field methods, such as mobility and diffusion-charging based approaches, with a laboratory approach, the Brunauer-Emmett-Teller (BET) method used for bulk powder samples. To allow intercomparison of various surface area measurements, we defined the 'geometric surface area' of agglomerates (assuming agglomerates are made up of ideal spheres), and compared the various surface area measurements to the geometric surface area. Four different approaches for measuring the surface area of agglomerate particles in the size range of 60-350 nm were compared: (i) diffusion charging-based sensors from three different manufacturers, (ii) the mobility diameter of an agglomerate, (iii) the mobility diameter of an agglomerate assuming a linear chain morphology with uniform primary particle size, and (iv) surface area estimation based on tandem mobility-mass measurement and microscopy. Our results indicate that the tandem mobility-mass measurement, which can be applied directly to airborne particles unlike the BET method, agrees well with the BET method. It was also shown that the three diffusion charging-based surface area measurements of silver agglomerates were similar within a factor of 2 and were lower than those obtained from the tandem mobility-mass and microscopy method by a factor of 3-10 in the size range studied. Surface area estimated using the mobility diameter depended on the structure or morphology of the agglomerate, with significant underestimation at high fractal dimensions approaching 3.
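
The 'geometric surface area' convention can be illustrated with a short calculation; all particle dimensions below are invented for illustration and are not the study's data.

```python
# Minimal sketch (illustrative numbers): 'geometric surface area' of an
# agglomerate treated as N ideal primary spheres, compared with the surface
# area of a single sphere of the same mobility diameter.
import math

d_primary = 20e-9          # assumed primary particle diameter, m
n_primary = 150            # assumed number of primaries in the agglomerate
d_mobility = 150e-9        # assumed mobility diameter of the agglomerate, m

area_agglomerate = n_primary * math.pi * d_primary**2   # sum over primaries
area_mobility_sphere = math.pi * d_mobility**2           # single sphere

print(f"geometric surface area (N primaries): {area_agglomerate*1e18:9.1f} nm^2")
print(f"mobility-sphere surface area        : {area_mobility_sphere*1e18:9.1f} nm^2")
print(f"ratio                               : "
      f"{area_agglomerate/area_mobility_sphere:.2f}")
```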

The objective of this work was to develop and evaluate a series of methods and validate their capability to measure differences in oxidized versus reduced saltstone. Validated methods were then applied to samples cured under field conditions to simulate Performance Assessment (PA) needs for the Saltstone Disposal Facility (SDF). Four analytical approaches were evaluated using laboratory-cured saltstone samples. These methods were X-ray absorption spectroscopy (XAS), diffuse reflectance spectroscopy (DRS), chemical redox indicators, and thin-section leaching methods. XAS and thin-section leaching methods were validated as viable methods for studying oxidation movement in saltstone. Each method used samples that were spiked with chromium (Cr) as a tracer for oxidation of the saltstone. The two methods were subsequently applied to field-cured samples containing chromium to characterize the oxidation state of chromium as a function of distance from the exposed air/cementitious material surface.

Alcohol consumption has toxic effects on organs and tissues in the human body. The risks are essentially thought to be related to the ethanol content of alcoholic beverages. The identification of ethanol in blood samples requires rapid, non-destructive analysis with minimal sample handling, such as Raman spectroscopy. This study aims to apply Raman spectroscopy to the identification of ethanol in blood samples. Silver nanoparticles were synthesized to obtain surface-enhanced Raman spectroscopy (SERS) spectra of the blood samples. The SERS spectra were used with partial least squares (PLS) regression to determine ethanol quantitatively. To apply the PLS method, the 920-820 cm^-1 band interval was chosen, and the spectral changes were statistically associated with the observed concentrations. The blood samples were examined according to this model and the quantity of ethanol was determined as follows: first, a calibration model was established, and a strong relationship was observed between the known concentration values and the values obtained by the PLS method (R^2 = 1). Then, the quantities of ethanol in 40 blood samples were predicted according to the calibration model. Quantitative analysis of ethanol in blood was carried out by analyzing the data obtained by Raman spectroscopy with the PLS method.
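
A minimal sketch of a PLS calibration of this kind is shown below, using synthetic spectra in place of measured SERS data; the band limits follow the abstract, while the peak position, noise level and concentrations are assumptions.

```python
# Minimal sketch (synthetic data, assumed band selection): partial least
# squares (PLS) calibration relating spectra to ethanol concentration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

wavenumbers = np.arange(820, 921)                  # selected band, cm^-1
conc = rng.uniform(0.0, 2.0, size=40)              # ethanol, arbitrary units

# Simulated spectra: an ethanol peak near 880 cm^-1 scaling with concentration
peak = np.exp(-0.5 * ((wavenumbers - 880.0) / 8.0) ** 2)
spectra = (np.outer(conc, peak)
           + 0.02 * rng.standard_normal((conc.size, wavenumbers.size)))

# Calibration on half the samples, prediction on the other half
pls = PLSRegression(n_components=3)
pls.fit(spectra[:20], conc[:20])
pred = pls.predict(spectra[20:]).ravel()

r2 = 1.0 - (np.sum((pred - conc[20:]) ** 2)
            / np.sum((conc[20:] - conc[20:].mean()) ** 2))
print(f"prediction R^2 on held-out samples: {r2:.3f}")
```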

In this work, separation methods have been developed for the analysis of anthropogenic transuranium elements plutonium, americium, curium and neptunium from environmental samples contaminated by global nuclear weapons testing and the Chernobyl accident. The analytical methods utilized in this study are based on extraction chromatography. Highly varying atmospheric plutonium isotope concentrations and activity ratios were found at both Kurchatov (Kazakhstan), near the former Semipalatinsk test site, and Sodankylae (Finland). The origin of plutonium is almost impossible to identify at Kurchatov, since hundreds of nuclear tests were performed at the Semipalatinsk test site. In Sodankylae, plutonium in the surface air originated from nuclear weapons testing, conducted mostly by USSR and USA before the sampling year 1963. The variation in americium, curium and neptunium concentrations was great as well in peat samples collected in southern and central Finland in 1986 immediately after the Chernobyl accident. The main source of transuranium contamination in peats was from global nuclear test fallout, although there are wide regional differences in the fraction of Chernobyl-originated activity (of the total activity) for americium, curium and neptunium. The separation methods developed in this study yielded good chemical recovery for the elements investigated and adequately pure fractions for radiometric activity determination. The extraction chromatographic methods were faster compared to older methods based on ion exchange chromatography. In addition, extraction chromatography is a more environmentally friendly separation method than ion exchange, because less acidic waste solutions are produced during the analytical procedures. (orig.)

The simulation of viscous free-surface water flow is a subject that has reached a certain maturity and is nowadays used in industrial applications, like the simulation of the flow around ships. While almost all methods used are based on the Navier-Stokes equations, the discretisation methods for the

Secondhand smoke contains a mixture of pollutants that can persist in air, dust, and on surfaces for months or longer. This persistent residue is known as thirdhand smoke (THS). Here, we detail a simple method of wipe sampling for nicotine as a marker of accumulated THS on surfaces. We analyzed findings from 5 real-world studies to investigate the performance of wipe sampling for nicotine on surfaces in homes, cars, and hotels in relation to smoking behavior and smoking restrictions. The intraclass correlation coefficient for side-by-side samples was 0.91 (95% CI: 0.87-0.94). Wipe sampling for nicotine reliably distinguished between private homes, private cars, rental cars, and hotels with and without smoking bans and was significantly positively correlated with other measures of tobacco smoke contamination such as air and dust nicotine. The sensitivity and specificity of possible threshold values (0.1, 1, and 10 μg/m^2) were evaluated for distinguishing between nonsmoking and smoking environments. Sensitivity was highest at a threshold of 0.1 μg/m^2, with 74%-100% of smoker environments showing nicotine levels above threshold. Specificity was highest at a threshold of 10 μg/m^2, with 81%-100% of nonsmoker environments showing nicotine levels below threshold. The optimal threshold will depend on the desired balance of sensitivity and specificity and on the types of smoking and nonsmoking environments. Surface wipe sampling for nicotine is a reliable, valid, and relatively simple collection method to quantify THS contamination on surfaces across a wide range of field settings and to distinguish between nonsmoking and smoking environments.
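
The threshold evaluation can be illustrated with a short sketch; the nicotine loading distributions below are invented, and only the three candidate thresholds are taken from the study.

```python
# Minimal sketch (made-up wipe values, thresholds from the abstract):
# sensitivity and specificity of surface-nicotine thresholds for
# distinguishing smoking from nonsmoking environments.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surface nicotine loadings in ug/m^2 (log-normally distributed)
smoking = rng.lognormal(mean=np.log(30.0), sigma=1.2, size=200)
nonsmoking = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=200)

for threshold in (0.1, 1.0, 10.0):
    sensitivity = np.mean(smoking > threshold)       # smoking sites flagged
    specificity = np.mean(nonsmoking <= threshold)   # nonsmoking sites cleared
    print(f"threshold {threshold:5.1f} ug/m^2: "
          f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```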

The methods of interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the interval estimates of the mean are carried out for sample sizes of 4, 5 and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)
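
For concreteness, a small sketch comparing the classical, bootstrap and jackknife intervals on an invented five-point sample is given below; the Bayesian bootstrap and the spread method of the empirical characteristic distribution function are omitted.

```python
# Minimal sketch (assumed data): classical, bootstrap and jackknife interval
# estimates of the mean for a very small sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.array([4.8, 5.3, 4.1, 5.9, 5.0])    # hypothetical sample, n = 5
n = x.size

# Classical t-interval
mean = x.mean()
half = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
print(f"classical 95% CI : ({mean - half:.2f}, {mean + half:.2f})")

# Percentile bootstrap
boot_means = np.array([rng.choice(x, size=n, replace=True).mean()
                       for _ in range(10_000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"bootstrap 95% CI : ({lo:.2f}, {hi:.2f})")

# Jackknife (leave-one-out) standard error
jack_means = np.array([np.delete(x, i).mean() for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((jack_means - jack_means.mean()) ** 2))
half_j = stats.t.ppf(0.975, df=n - 1) * se_jack
print(f"jackknife 95% CI : ({mean - half_j:.2f}, {mean + half_j:.2f})")
```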

In order to enhance the thermal fatigue resistance of cast iron materials, samples with a biomimetic non-smooth surface were processed by a Neodymium:Yttrium Aluminum Garnet (Nd:YAG) laser. With a self-controlled thermal fatigue test method, the thermal fatigue resistance of smooth and non-smooth samples was investigated. The effects of striated laser tracks on thermal fatigue resistance were also studied. The results indicated that the biomimetic non-smooth surface was beneficial for improving the thermal fatigue resistance of the cast iron samples. The striated non-smooth units formed by laser tracks perpendicular to the thermal cracks had the best crack propagation resistance. The mechanisms behind these influences are discussed, and schematic drawings are introduced to describe them.

Apparatus has been constructed for treating the surface of U-Pu carbide fuel samples for EPMA. The treatment cleans the oxide layer off the surface and then coats it with an electrically conductive material. The apparatus, safe in handling plutonium, operates as follows. (1) To avoid oxidation of the analyzing surface by oxygen and water in the air, the cleaning and coating sequence, i.e. ion etching and ion coating or ion etching and vacuum evaporation, is done at the same time in an inert gas atmosphere. (2) Ion etching is possible on samples embedded in non-electrically-conductive and low heat-conductive resin. (3) Since the temperature rise in (2) is negligible, there is no deterioration of the samples. (author)

To accelerate the thermal equilibrium sampling of multi-level quantum systems, the infinite swapping limit of a recently proposed multi-level ring polymer representation is investigated. In the infinite swapping limit, the ring polymer evolves according to an averaged Hamiltonian with respect to all possible surface index configurations of the ring polymer and thus connects the surface hopping approach to the mean-field path-integral molecular dynamics. A multiscale integrator for the infinite swapping limit is also proposed to enable efficient sampling based on the limiting dynamics. Numerical results demonstrate the huge improvement of sampling efficiency of the infinite swapping compared with the direct simulation of path-integral molecular dynamics with surface hopping.

In gerontology the most recognized and elaborate discourse about sampling is generally thought to be in quantitative research associated with survey research and medical research. But sampling has long been a central concern in the social and humanistic inquiry, albeit in a different guise suited to the different goals. There is a need for more explicit discussion of qualitative sampling issues. This article will outline the guiding principles and rationales, features, and practices of sampli...

Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion tool kit is robust and can provide an estimate of the Earth's near surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (called trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates. We demonstrate that Bayesian inverse methods can provide quantitative uncertainty to estimates of near surface conductivity.

A study was conducted to compare four gravimetric methods of measuring fine particle (PM2.5) concentrations in air: the BGI, Inc. PQ200 Federal Reference Method PM2.5 (FRM) sampler; the Harvard-Marple Impactor (HI); the BGI, Inc. GK2.05 KTL Respirable/Thoracic Cyclone (KTL); and the AirMetrics MiniVol (MiniVol). Pairs of FRM, HI, and KTL samplers and one MiniVol sampler were collocated and 24-hr integrated PM2.5 samples were collected on 21 days from January 6 through April 9, 2000. The mean and standard deviation of PM2.5 levels from the FRM samplers were 13.6 and 6.8 microg/m3, respectively. Significant systematic bias was found between mean concentrations from the FRM and the MiniVol (1.14 microg/m3, p = 0.0007), the HI and the MiniVol (0.85 microg/m3, p = 0.0048), and the KTL and the MiniVol (1.23 microg/m3, p = 0.0078) according to paired t test analyses. Linear regression on all pairwise combinations of the sampler types was used to evaluate measurements made by the samplers. None of the regression intercepts was significantly different from 0, and only two of the regression slopes were significantly different from 1, that for the FRM and the MiniVol [beta1 = 0.91, 95% CI (0.83-0.99)] and that for the KTL and the MiniVol [beta1 = 0.88, 95% CI (0.78-0.98)]. Regression R2 terms were 0.96 or greater between all pairs of samplers, and regression root mean square error terms (RMSE) were 1.65 microg/m3 or less. These results suggest that the MiniVol will underestimate measurements made by the FRM, the HI, and the KTL by an amount proportional to PM2.5 concentration. Nonetheless, these results indicate that all of the sampler types are comparable if approximately 10% variation on the mean levels and on individual measurement levels is considered acceptable and the actual concentration is within the range of this study (5-35 microg/m3).
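
The two statistical comparisons used above can be reproduced in a few lines; the sketch below uses synthetic collocated concentrations, not the study data.

```python
# Minimal sketch (synthetic PM2.5 data): paired t-test and linear regression
# comparisons for collocated samplers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical collocated 24-h PM2.5 concentrations (ug/m^3) over 21 days
frm = rng.uniform(5.0, 35.0, size=21)
minivol = 0.91 * frm + rng.normal(0.0, 1.0, size=21)   # proportional low bias

# Paired t-test for systematic bias between the two samplers
t_stat, p_value = stats.ttest_rel(frm, minivol)
print(f"mean difference {np.mean(frm - minivol):.2f} ug/m^3, p = {p_value:.4f}")

# Linear regression of one sampler on the other
res = stats.linregress(frm, minivol)
print(f"slope {res.slope:.2f}, intercept {res.intercept:.2f}, "
      f"R^2 {res.rvalue**2:.3f}")
```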

The existing visualization methods for the virtual globe mainly use a projection grid to organize the ocean grid. This grid organization has defects in reflecting the differing characteristics of different ocean areas. A global ocean visualization method based on global discrete grids can make up for the defects of the projection grid method by matching the discrete space of the virtual globe, so it is more suitable for virtual ocean surface simulation. However, the available global discrete grid methods have many problems that limit their application, such as low rendering and loading efficiency and the need to repair grid crevices. To this point, we propose an optimization of the global discrete grid method. First, a GPU-oriented multi-scale grid model of the ocean surface, developed on the foundation of global discrete grids, was designed to organize and manage the ocean surface grids. Then, in order to achieve wind-driven wave dynamic rendering, this paper proposes a dynamic wave rendering method based on the multi-scale ocean surface grid model that supports real-time wind field updating. At the same time, considering the effect of repairing grid crevices on system efficiency, this paper presents an efficient method for repairing ocean surface grid crevices based on the characteristics of the ocean grid and GPU technology. Finally, the feasibility and validity of the method are verified by a comparison experiment. The experimental results show that the proposed method is efficient, stable and fast, and can compensate for the shortcomings of existing methods, so its application range is more extensive.

The spatial sampling interval, as related to the ability to digitize a soil profile with a certain number of features per unit length, depends on the profiling technique itself. Roughness parameters estimated from different profiling techniques are therefore obtained at different sampling intervals. Since soil profiles have continuous spectral components, it is clear that roughness parameters are influenced by the sampling interval of the measurement device employed. In this work, we address the question of the sampling interval at which profiles need to be measured to accurately account for the microwave response of agricultural surfaces. For this purpose, a 2-D laser profiler was built and used to measure surface soil roughness at field scale over agricultural sites in Argentina. Sampling intervals ranged from large (50 mm) to small (1 mm), with several intermediate values. Large- and intermediate-sampling-interval profiles were synthetically derived from the nominal 1 mm ones. With these data, the effect of sampling-interval-dependent roughness parameters on the backscatter response was assessed using the theoretical backscatter model IEM2M. Simulations demonstrated that the variation of roughness parameters depended on the working wavelength and was less important at L-band than at C- or X-band. In any case, an underestimation of the backscattering coefficient of about 1-4 dB was observed at larger sampling intervals. As a general rule, a sampling interval of 15 mm can be recommended for L-band and 5 mm for C-band.
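
A minimal sketch of how roughness parameters drift with the sampling interval is given below; the synthetic profile, its RMS height and its correlation structure are assumptions, and no backscatter model is included.

```python
# Minimal sketch (synthetic profile): RMS height s and correlation length l
# of a surface profile recomputed after resampling at coarser intervals.
import numpy as np

rng = np.random.default_rng(0)

dx0 = 0.001                                  # nominal sampling interval, 1 mm
n = 5000
# Synthetic profile: correlated Gaussian roughness via moving-average smoothing
white = rng.standard_normal(n + 200)
kernel = np.exp(-np.arange(-100, 101) ** 2 / (2 * 25.0 ** 2))
z = np.convolve(white, kernel / kernel.sum(), mode="valid")[:n]
z = 0.01 * z / z.std()                       # scale to 1 cm RMS height

def roughness(z, dx):
    zc = z - z.mean()
    s = zc.std()                             # RMS height
    acf = np.correlate(zc, zc, mode="full")[zc.size - 1:]
    acf /= acf[0]
    lcorr = np.argmax(acf < 1.0 / np.e) * dx # correlation length (1/e point)
    return s, lcorr

for step in (1, 5, 15, 50):                  # 1, 5, 15 and 50 mm intervals
    s, lcorr = roughness(z[::step], dx0 * step)
    print(f"interval {dx0*step*1000:4.0f} mm: "
          f"s = {s*100:.2f} cm, l = {lcorr*100:.1f} cm")
```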

This report describes the results of Phase 2 efforts to develop a Rapid Surface Sampling and Archival Record (RSSAR) System for the detection of semivolatile organic contaminants on concrete, transite, and metal surfaces. The characterization of equipment and building surfaces for the presence of contaminants as part of building decontamination and decommissioning activities is an immensely large task of concern to both government and industry. Because of the high cost of hazardous waste disposal, old, contaminated buildings cannot simply be demolished and scrapped. Contaminated and clean materials must be clearly identified and segregated so that the clean material can be recycled or reused, if possible, or disposed of more cheaply as nonhazardous waste. DOE has a number of sites requiring surface characterization. These sites are large, contain very heterogeneous patterns of contamination (requiring high sampling density), and will thus necessitate an enormous number of samples to be taken and analyzed. Characterization of building and equipment surfaces will be needed during initial investigations, during cleanup operations, and during the final confirmation process, increasing the total number of samples well beyond that needed for initial characterization. This multiplicity of information places a premium on the ability to handle and track data as efficiently as possible.

The relentless pressure for designs with new optical functions, small volume, and light weight has greatly increased the importance of aspheric surfaces. In this paper, we propose an annularly stitched aspheric surface (ASAS) description method to increase the freedom and flexibility of imaging system design. The rotationally symmetric ASAS consists of a circular central zone and one or more annular zones. Two neighboring zones are constrained to have the same derivatives on their joint curve, and this means the ASAS is C1 continuous. This finding is proved and verified by the mathematical deduction of the surface formulas. Two optimization strategies and two design methods with the C1 continuous constraints are also discussed. This surface can greatly facilitate the design and even achieve some previously impossible designs without increasing the fabrication difficulty. Two different systems with the proposed ASAS are optimized and the results are presented. The design results verified the practicability of the ASAS.

A new experimental setup to investigate the physical process of dust deposition and resuspension on and from surfaces is introduced. Dust deposition can reduce the airborne dust concentration considerably. As a basis for developing methods to eliminate dust-related problems in rooms, there is a need for better understanding of the mechanism of dust deposition and resuspension. With the presented experimental setup, the dust load on surfaces in a channel can be measured as a function of the environmental and surface conditions and the type of particles under controlled laboratory conditions.

Radioactive particles, tens of μm or more in diameter, are unlikely to be emitted directly from nuclear facilities with exhaust gas cleansing systems, but may arise in the case of an accident or where resuspension from contaminated surfaces is significant. Such particles may dominate deposition and, according to some workers, may contribute to inhalation doses. Quantitative sampling of large airborne particles is difficult because of their inertia and large sedimentation velocities. The literature describes conditions for unbiased sampling and the magnitude of sampling errors for idealised sampling inlets in steady winds. However, few air samplers for outdoor use have been assessed for adequacy of sampling. Many size selective sampling methods are found in the literature but few are suitable at the low concentrations that are often encountered in the environment. A number of approaches for unbiased sampling of large particles have been found in the literature. Some are identified as meriting further study, for application in the measurement of airborne radioactivity. (author)

A study was undertaken in order to understand recent environmental change in Mobile Bay, Alabama. For this study a series of surface sediment and box core samples was collected. The surface benthic foraminiferal data provide the modern baseline conditions of the bay and can be used as a reference for changing paleoenvironmental parameters recorded in the box cores. The 14 sampling locations were chosen in the bay to cover the wide diversity of fluvial and marine-influenced environments on both sides of the shipping channel.

In this chapter, the state of the art of flow injection and related approaches thereof for automation and miniaturization of sample processing regardless of the aggregate state of the sample medium is overviewed. The potential of the various generation of flow injection for implementation of in...

This paper presents a novel lander anchoring system based on sawing method for asteroid exploration. The system is composed of three robotic arms, three cutting discs, and a control system. The discs mounted at the end of the arms are able to penetrate into the rock surface of asteroids. After the discs cut into the rock surface, the self-locking function of the arms provides forces to fix the lander on the surface. Modeling, trajectory planning, simulations, mechanism design, and prototype fabrication of the anchoring system are discussed, respectively. The performances of the system are tested on different kinds of rocks, at different sawing angles, locations, and speeds. Results show that the system can cut 15 mm deep into granite rock in 180 s at sawing angle of 60°, with the average power of 58.41 W, and the "weight on bit" (WOB) of 8.637 N. The 7.8 kg anchoring system is capable of providing omni-directional anchoring forces, at least 225 N normal and 157 N tangent to the surface of the rock. The system has the advantages of low-weight, low energy consumption and balance forces, high anchoring efficiency and reliability, and could enable the lander to move and sample or assist astronauts and robots in walking and sampling on asteroids.

Acoustic levitation is used as a new tool to study concentration-dependent processes in fluorescence spectroscopy. With this technique, small amounts of liquid and solid samples can be measured without the need for sample supports or containers, which often limits signal acquisition and can even alter sample properties due to interactions with the support material. We demonstrate that, because of the small sample volume, fluorescence measurements at high concentrations of an organic dye are possible without the limitation of inner-filter effects, which hamper such experiments in conventional, cuvette-based measurements. Furthermore, we show that acoustic levitation of liquid samples provides an experimentally simple way to study distance-dependent fluorescence modulations in semiconductor nanocrystals. The evaporation of the solvent during levitation leads to a continuous increase of solute concentration and can easily be monitored by laser-induced fluorescence.

We report the results of surface characterizations of niobium (Nb) samples electropolished together with a single-cell superconducting radio-frequency accelerator cavity. These witness samples were located in three regions of the cavity, namely at the equator, the iris, and the beam pipe. Auger electron spectroscopy was utilized to probe the chemical composition of the topmost four atomic layers. Scanning electron microscopy with energy-dispersive x-ray analysis was used to observe the surface topography and chemical composition at the micrometer scale. A few atomic layers of sulfur (S) were found covering the samples nonuniformly. Niobium oxide granules with a sharp geometry were observed on every sample. Some Nb-O granules appeared to also contain sulfur.

As confirmed in this work, electrodeposition of α radionuclides gives a simple method for preparing α samples of high spectrometric quality compared to those prepared by evaporation. We then describe the methods for electrodeposition of α emitters used in our Department. Actinide α emitters are electroplated from a 1% H2SO4 medium with a recovery of about 90%. Samples of Ra are prepared by electrodeposition from a HCl + CH3COONH4 medium at pH ≈ 5. In this case the recovery ranges from 70 to 90%. For these measurements a Si surface barrier detector has been used; some of its features are discussed in the text. (author)

Optimum sampling methods in surface water and associated sediments for use in uranium exploration are being studied at thirty sites in Colorado, New Mexico, Arizona and Utah. For water samples, filtering is recommended to increase sample homogeneity and reproducibility, because for most elements studied, water samples which were allowed to remain unfiltered until the time of analysis contained higher concentrations than field-filtered samples of the same waters. Acidification of unfiltered samples resulted in still higher concentrations. This is predominantly because of leaching of the elements from the suspended fraction. U in water correlates directly with Ca, Mg, Na, K, Ba, B, Li and As. In stream sediments, U and other trace elements are concentrated in the finer size fractions. Accordingly, in prospecting, grain size fractions less than 90 μm (170 mesh) should be analyzed for U. A greater number of elements (21) show a significant positive correlation with U in stream sediments than in water. Results have revealed that anomalous concentrations of U found in water may not be detected in associated sediments and vice versa. Hence, sampling of both surface water and coexisting sediment is strongly recommended.

We have previously shown that liquid microjunction surface sampling of dried blood spots coupled with high resolution top-down mass spectrometry may be used for screening of common hemoglobin variants HbS, HbC, and HbD. In order to test the robustness of the approach, we have applied the approach to unknown hemoglobin variants. Six neonatal dried blood spot samples that had been identified as variants, but which could not be diagnosed by current screening methods, were analyzed by direct surface sampling top-down mass spectrometry. Both collision-induced dissociation and electron transfer dissociation mass spectrometry were employed. Four of the samples were identified as β-chain variants: two were heterozygous Hb D-Iran, one was heterozygous Hb Headington, and one was heterozygous Hb J-Baltimore. The fifth sample was identified as the α-chain variant heterozygous Hb Phnom Penh. Analysis of the sixth sample suggested that it did not in fact contain a variant. Adoption of the approach in the clinic would require speed in both data collection and interpretation. To address that issue, we have compared manual data analysis with freely available data analysis software (ProsightPTM). The results demonstrate the power of top-down proteomics for hemoglobin variant analysis in newborn samples.

Highlights: ► Cathodic delamination of epoxy coated steel samples was studied using SKP. ► Delamination of the coating decreased with increased substrate surface roughness. ► Delamination of the coating was faster on the substrate with parallel surface scratches. ► Delamination of the coating exposed to weathering conditions increased with prolonged exposure. - Abstract: The Scanning Kelvin Probe (SKP) technique was used to investigate the effects of surface roughness, texture and polymer degradation on cathodic delamination of epoxy coated steel. The cathodic delamination rate of the epoxy coatings dramatically decreased with increased surface roughness of the underlying steel substrate. The surface texture of the steel substrates also had a significant effect, in that samples with parallel abrasion lines exhibited faster cathodic delamination in the direction of the lines compared to the direction perpendicular to the lines. The cathodic delamination kinetics of epoxy coatings previously exposed to weathering conditions increased with prolonged exposure due to pronounced polymer degradation. SEM observation confirmed that the cyclic exposure to UV radiation and water condensation caused severe deterioration in the polymer structures with surface cracking and erosion. The SKP results clearly showed that the cathodic delamination of the epoxy coatings was significantly influenced by the surface features of the underlying steel substrates and the degradation of the coatings.

As water depth increases, the structural safety and reliability of a system become more and more important and challenging. Therefore, structural reliability methods must be applied in ocean engineering design such as offshore platform design. If the performance function is known in structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is often used because it has a very clear train of thought and simple programming. However, the traditional response surface method fits a response surface of quadratic polynomials, whose accuracy is limited because the true limit state surface can be fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed, which can be used when the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a response surface of the fuzzy neural network for the whole area is constructed first, and then the structural reliability is calculated by a genetic algorithm. In the proposed method, all the sample points for training the network come from the whole area, so the true limit state surface in the whole area can be fitted. Through calculation examples and comparative analysis, it can be seen that the proposed method is much better than the traditional response surface method of quadratic polynomials, because the amount of finite element analysis is largely reduced, the accuracy of calculation is improved, and the true limit state surface can be fitted very well in the whole area. Therefore, the method proposed in this paper is suitable for engineering application.

Arid regions represent nearly 30 % of the Earth's terrestrial surface, but their microbial biodiversity is not yet well characterized. The surface sands of deserts, a subset of arid regions, are generally subjected to large temperature fluctuations plus high UV light exposure and are low in organic matter. We examined surface sand samples from the Taklamaken (China, three samples) and Gobi (Mongolia, two samples) deserts, using pyrosequencing of PCR-amplified 16S V1/V2 rDNA sequences from total extracted DNA in order to gain an assessment of the bacterial population diversity. In total, 4,088 OTUs (using ≥97 % sequence similarity levels), with Chao1 estimates varying from 1,172 to 2,425 OTUs per sample, were discernable. These could be grouped into 102 families belonging to 15 phyla, with OTUs belonging to the Firmicutes, Proteobacteria, Bacteroidetes, and Actinobacteria phyla being the most abundant. The bacterial population composition was statistically different among the samples, though members from 30 genera were found to be common among the five samples. An increase in phylotype numbers with increasing C/N ratio was noted, suggesting a possible role in the bacterial richness of these desert sand environments. Our results imply an unexpectedly large bacterial diversity residing in the harsh environment of these two Asian deserts, worthy of further investigation.

This paper presents a novel method of FIB (focused ion beam) sample preparation to accurately evaluate critical dimensions and profiles of ArF photoresist patterns without the use of a protective coating on the photoresist. To accomplish this, the FIB micro-sampling method, an effective FIB milling and fabrication technique, was employed. First, a Si cap is picked up from a silicon wafer and fixed to the ArF photoresist patterns to protect them against ion beam irradiation. Then, a micro-sample, a piece of Si-capped ArF photoresist, is extracted from the bulk ArF photoresist. Throughout this procedure, the silicon cap protects the ArF photoresist patterns against ion beam irradiation. Next, the micro-sample is fixed to a needle stub of the FIB-STEM (scanning transmission electron microscopy) compatible rotation holder. The sample on the needle stub is rotated 180 degrees and milled from the side of the Si substrate. Lastly, the sample is milled to a thickness of 2 μm. In this process, the ion beam irradiates from the silicon substrate side to minimize ion beam irradiation damage to the ArF photoresist patterns. EDX (energy dispersive X-ray spectroscopy) analysis proved that no gallium ions were detected on the surface of the ArF photoresist patterns. The feasibility of high-accelerating-voltage STEM observation of the line edge roughness of a thick (2 μm) sample without shrinkage has been demonstrated.

In field ecological studies inferences must often be drawn from dissimilarities in numbers and species of organisms found in biological samples collected at different times and under various conditions....

.... Previous work in support of these efforts developed a compost sample preparation scheme, consisting of air drying followed by milling, to reduce analytical variability in the heterogeneous compost matrix...

This paper describes an outline and some examples of three-dimensional electric field calculations with a computer code developed at NIRS. In the code, a surface charge method is adopted because of its simplicity in the mesh-establishing procedure. The charge density within a triangular mesh element is assumed to vary as a linear function of position. The electric field distribution is calculated for a pair of drift tubes with focusing fingers on the opposing surfaces. The field distribution in an acceleration gap is analyzed with a Fourier-Bessel series expansion method. The calculated results excellently reproduce the data measured with a magnetic model. (author)
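
As a reminder of the principle (standard electrostatics, not a detail quoted from the report), the surface charge method solves for the charge density sigma on the electrode surfaces so that the boundary conditions are met, after which the potential anywhere follows from

\phi(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \int_S \frac{\sigma(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\,\mathrm{d}S', \qquad \mathbf{E} = -\nabla\phi,

with sigma assumed, as stated above, to vary linearly with position over each triangular mesh element.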

The diagnosis of candida balanitis should be based upon both clinical and mycological data. The procedure of material collection is a critical issue in confirming or ruling out the clinical diagnosis of candida balanitis. To compare direct impression of the glans on the agar surface of solid culture media with the collection of genital exudates with a cotton swab for the diagnosis of candida balanitis, a prospective cross-sectional study was carried out during a 36-month period. Sexually transmitted disease clinic attendees with balanitis and asymptomatic men were included. Specimens for yeast culture were collected from the glans penis and inner preputial layer using direct impression on CHROMagar Candida medium and by swabbing with a sterile cotton swab. Among 478 men enrolled, 189 had balanitis. The prevalence of candida balanitis was 17.8% (85/478) confirmed after culture by direct impression; the swab method detected only 54/85 (63.5%) of these men. Of the 289 asymptomatic men, 36 (12.5%) yielded Candida spp.; the swab method detected only 38.9% of these men. The risk of having candida balanitis was 8.9 (95% CI 2.48 to 32.04) whenever the number of candida colonies recovered by direct impression was greater than 10. Direct impression on CHROMagar Candida medium resulted in the highest Candida spp. recovery rate. More than 10 colonies yielded by impression culture were statistically associated with candida balanitis. This method shows the ideal profile for sampling the male genital area for yeasts and should be included in the management of balanitis.

... surface of an M-dimensional, unit radius hyper-sphere, (ii) relocating the N points on a representative set of N hyper-spheres of different radii, and (iii) transforming the coordinates of those points to lie on N different hyper-ellipsoids spanning the multivariate Gaussian distribution. The above method is applied in a dimensionality reduction context by defining flow-controlling points over which representative sampling of hydraulic conductivity is performed, thus also accounting for the sensitivity of the flow and transport model to the input hydraulic conductivity field. The performance of the various stratified sampling methods, LH, SL, and ME, is compared to that of SR sampling in terms of reproduction of ensemble statistics of hydraulic conductivity and solute concentration for different sample sizes N (numbers of realizations). The results indicate that ME sampling constitutes an equally if not more efficient simulation method than LH and SL sampling, as it can reproduce to a similar extent statistics of the conductivity and concentration fields, yet with smaller sampling variability than SR sampling. References [1] Gutjahr A.L. and Bras R.L. Spatial variability in subsurface flow and transport: A review. Reliability Engineering & System Safety, 42, 293-316, (1993). [2] Helton J.C. and Davis F.J. Latin hypercube sampling and the propagation of uncertainty in analyses of complex systems. Reliability Engineering & System Safety, 81, 23-69, (2003). [3] Switzer P. Multiple simulation of spatial fields. In: Heuvelink G, Lemmens M (eds) Proceedings of the 4th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Coronet Books Inc., pp 629-635 (2000).
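
The three-step construction described at the start of this entry can be sketched as follows; this is an illustrative Python interpretation under simplifying assumptions (stratified chi-distributed radii, Cholesky mapping), not the authors' exact ME algorithm or its hydraulic-conductivity application.

# Hedged sketch: generate N multivariate Gaussian realizations by (i) placing
# N random directions on the unit hypersphere, (ii) assigning stratified radii
# from the chi distribution, and (iii) mapping the points onto ellipsoids of
# the target N(mu, Sigma).
import numpy as np
from scipy.stats import chi

def stratified_gaussian(N, mu, Sigma, rng):
    M = len(mu)
    # (i) directions uniform on the unit hypersphere (normalized Gaussians)
    U = rng.standard_normal((N, M))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    # (ii) stratified radii: one radius per equal-probability shell of chi_M
    p = (np.arange(N) + rng.uniform(size=N)) / N
    r = chi.ppf(p, df=M)
    rng.shuffle(r)                      # decouple shell from direction
    # (iii) transform to the target ellipsoids via a Cholesky factor
    L = np.linalg.cholesky(Sigma)
    return mu + (r[:, None] * U) @ L.T

rng = np.random.default_rng(1)
X = stratified_gaussian(200, np.zeros(3), np.eye(3), rng)
print(X.mean(axis=0), X.var(axis=0))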

Dynamic, flow-through flux chambers are sometimes used to estimate ammonia emissions from livestock operations; however, ammonia emissions from the surfaces are affected by many factors which can be affected by the chamber. Ammonia emissions estimated using environmental flow-through chambers may be...

The effect of discrete structures such as macrobrush or castellated surfaces on power handling and deuterium retention of plasma facing components is to be assessed since such geometrical configurations are needed for increasing the lifetime of the armour to heat-sink joint. Four small macrobrush W and W + 1%La2O3 samples have been exposed in the Frascati Tokamak Upgrade (FTU) scrape-off layer up to the last closed flux surface by means of the Sample Introduction System. FTU is an all metal machine with no carbon source inside vacuum vessel; it exhibits ITER relevant energy and particle fluxes on the plasma facing components. Here, results on morphological surface changes (SEM), chemical composition (EDX) and deuterium retention (TDS) are reported.

Microwave induced heating is widely used in medical treatments and in scientific and industrial applications. The temperature field inside a microwave heated sample is often inhomogeneous; therefore, multiple temperature sensors are required for an accurate result. Nowadays, non-contact methods (infrared thermography or microwave radiometry) or direct contact temperature measurement methods (expensive and sophisticated fiber optic temperature sensors transparent to microwave radiation) are mainly used. IR thermography gives only the surface temperature and cannot be used for measuring temperature distributions in cross sections of a sample. In this paper we present a very simple experimental method for highlighting the temperature distribution inside a cross section of a liquid sample heated by microwave radiation through a coaxial applicator. The proposed method is able to offer qualitative information about the heating distribution, using a temperature sensitive liquid crystal sheet. Inhomogeneities as small as 1-2 °C produced by the symmetry irregularities of the microwave applicator can be easily detected by visual inspection or by computer-assisted color-to-temperature conversion. The microwave applicator is therefore tuned and verified with the described method until the temperature inhomogeneities are resolved.

With the rapid development of analytical techniques, it has become much easier to detect chemical and biological analytes, even at very low detection limits. In recent years, techniques based on vibrational spectroscopy, such as surface enhanced Raman spectroscopy (SERS), have been developed for non-destructive detection of pathogenic microorganisms. SERS is a highly sensitive analytical tool that can be used to characterize chemical and biological analytes interacting with SERS-active substrates. However, it has always been a challenge to obtain consistent and reproducible SERS spectroscopic results under complicated experimental conditions. Microfluidics, a tool for highly precise manipulation of small volume liquid samples, can be used to overcome the major drawbacks of SERS-based techniques. High reproducibility of SERS measurements can be obtained in the continuous flow generated inside microfluidic devices. This article provides a thorough review of the principles, concepts and methods of SERS-microfluidic platforms, and the applications of such platforms in trace analysis of chemical and biological analytes. (topical review)

Ceramics are among the most interesting materials for a large category of applications, including both industry and health. Among the characteristics of ceramic materials, the specific surface area is often difficult to evaluate. The paper presents a method of evaluating the specific surface area of two ceramic powders by means of scanning electron microscopy measurements and an original method of computing the specific surface area. Cumulative curves are used to calculate the specific surface area under the assumption that the particle diameters follow a log-normal distribution. For the two powder types, X7R and NPO, the results are the following: density ρ (g/cm³), 5.5 and 6.0, respectively; average diameter D̄ (μm), 0.51 and 0.53, respectively; σ, 1.465 and 1.385, respectively; specific surface area (m²/g), 1.248 and 1.330, respectively. The obtained results are in good agreement with the values measured by conventional methods. (authors)
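
One common way to obtain a specific surface area from a log-normal size distribution (shown here as a standard Hatch-Choate-type relation, not necessarily the exact computation used by the authors) is via the surface-volume (Sauter) mean diameter:

D_{32} = D_g \exp\!\left(2.5\,\ln^2\sigma_g\right), \qquad S_w = \frac{6}{\rho\,D_{32}},

where D_g is the geometric mean diameter, sigma_g the geometric standard deviation and rho the density, under the assumption of spherical, non-porous particles.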

Residents of high background radiation areas of Ramsar have lived in these areas for many generations and received radiation doses much higher than the dose limit recommended by ICRP for radiation workers. The radioactivity of the high background radiation areas of Ramsar is reported to be due to 226 Ra and its decay products, which have been brought to the surface by the waters of hot springs. Over the past years the department has focused on different aspects of the health effects of the elevated levels of natural radiation in Ramsar. This study was aimed to perform a preliminary investigation on the bioeffects of exposure to elevated levels of natural radiation on the microbiology of surface water samples. Water samples were collected from surface water streams in Talesh Mahalleh district, Ramsar as well as a nearby area with normal levels of background radiation. Only two strains of bacteria, that is, Providencia stuartii and Shimwellia blattae, could be isolated from the water samples collected from high background radiation areas, while seven strains (Escherichia coli, Enterobacter asburiae, Klebsiella pneumoniae, Shigella dysenteriae, Buttiauxella agerstis, Tatumella punctuata and Raoultella ornithinolytica) were isolated from the water samples collected from normal background radiation areas. All the bacteria isolated from water samples of high and normal background radiation areas were sensitive to ultraviolet radiation, heat, betadine, alcohol, and deconex. Although other investigators have reported that bacteria isolated from hot springs show radioresistance, the results reported here do not reveal any adaptive response. (author)

This report summarizes work conducted by Pacific Northwest National Laboratory to technically evaluate the current approach to staged feed sampling of high-level waste (HLW) sludge to meet waste acceptance criteria (WAC) for transfer from tank farms to the Hanford Waste Treatment and Immobilization Plant (WTP). The current sampling and analysis approach is detailed in the document titled Initial Data Quality Objectives for WTP Feed Acceptance Criteria, 24590-WTP-RPT-MGT-11-014, Revision 0 (Arakali et al. 2011). The goal of this current work is to evaluate and provide recommendations to support a defensible, technical and statistical basis for the staged feed sampling approach that meets WAC data quality objectives (DQOs).

Research and development of the renewable nanomaterial cellulose nanofibrils (CNFs) has received considerable attention. The effect of drying on the surface energy of CNFs was investigated. Samples of nanofibrillated cellulose (NFC) and cellulose nanocrystals (CNC) were each subjected to four separate drying methods: air-drying, freeze-drying, spray-drying, and...

In aqueous solution, solute conformational transitions are governed by intimate interplays of the fluctuations of solute-solute, solute-water, and water-water interactions. To promote molecular fluctuations to enhance sampling of essential conformational changes, a common strategy is to construct an expanded Hamiltonian through a series of Hamiltonian perturbations and thereby broaden the distribution of certain interactions of focus. Due to a lack of active sampling of the configuration response to Hamiltonian transitions, it is challenging for common expanded Hamiltonian methods to robustly explore solvent-mediated rare conformational events. The orthogonal space sampling (OSS) scheme, as exemplified by the orthogonal space random walk and orthogonal space tempering methods, provides a general framework for synchronous acceleration of slow configuration responses. To more effectively sample conformational transitions in aqueous solution, in this work we devised a generalized orthogonal space tempering (gOST) algorithm. Specifically, in the Hamiltonian perturbation part, a solvent-accessible-surface-area-dependent term is introduced to implicitly perturb near-solute water-water fluctuations; more importantly, in the orthogonal space response part, the generalized force order parameter is extended to a two-dimensional order parameter set, in which essential solute-solvent and solute-solute components are treated separately. The gOST algorithm is evaluated through a molecular dynamics simulation study on the explicitly solvated deca-alanine (Ala10) peptide. On the basis of a fully automated sampling protocol, the gOST simulation enabled repetitive folding and unfolding of the solvated peptide within a single continuous trajectory and allowed for detailed construction of Ala10 folding/unfolding free energy surfaces. The gOST result reveals that solvent cooperative fluctuations play a pivotal role in Ala10 folding/unfolding transitions. In addition, our assessment ...

There is an immediate need for rapid triage of the population in case of a large scale exposure to ionizing radiation. Knowing the dose absorbed by the body will allow clinicians to administer medical treatment for the best chance of recovery for the victim. In addition, today's radiotherapy treatment could benefit from additional information regarding the patient's sensitivity to radiation before starting the treatment. As of today, there is no system in place to respond to this demand. This paper will describe specific procedures to mimic the effects of human exposure to ionizing radiation creating the tools for optimization of administered radiation dosimetry for radiotherapy and/or to estimate the doses of radiation received accidentally during a radiation event that could pose a danger to the public. In order to obtain irradiated biological samples to study ionizing radiation absorbed by the body, we performed ex-vivo irradiation of human blood samples using the linear accelerator (LINAC). The LINAC was implemented and calibrated for irradiating human whole blood samples. To test the calibration, a 2 Gy test run was successfully performed on a tube filled with water with an accuracy of 3% in dose distribution. To validate our technique the blood samples were ex-vivo irradiated and the results were analyzed using a gene expression assay to follow the effect of the ionizing irradiation by characterizing dose responsive biomarkers from radiobiological assays. The response of 5 genes was monitored resulting in expression increase with the dose of radiation received. The blood samples treated with the LINAC can provide effective irradiated blood samples suitable for molecular profiling to validate radiobiological measurements via the gene-expression based biodosimetry tools.

Surface Acoustic Wave (SAW) technology is low cost, rugged, lightweight, and extremely low power, and can be used to develop passive wireless sensors. For these reasons, NASA is investigating the use of SAW technology for Integrated Vehicle Health Monitoring (IVHM) of aerospace structures. To facilitate rapid prototyping of passive SAW sensors for aerospace applications, SAW models have been developed. This paper reports on the comparison of three methods of modeling SAWs: the Impulse Response Method (a first-order model) and two second-order matrix methods, the conventional matrix approach and a modified matrix approach extended to include internal finger reflections. The second-order models are based upon matrices that were originally developed for analyzing microwave circuits using transmission line theory. Results from the models are presented together with measured data from devices. Keywords: Surface Acoustic Wave, SAW, transmission line models, Impulse Response Method.
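
For orientation, in the first-order impulse-response picture a uniform, unapodized interdigital transducer with N_p finger pairs and synchronous frequency f_0 is commonly approximated by a sinc-shaped frequency response (a textbook result, not a detail taken from this paper):

|H(f)| \propto N_p \left| \frac{\sin X}{X} \right|, \qquad X = N_p \pi \frac{f - f_0}{f_0}.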

The sea surface microlayer (SML) is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time, so the development of optimal storage protocols is paramount. We here briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA) and the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM) over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.

We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...
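
A minimal Python sketch of the underlying MMD machinery with a Gaussian kernel is given below: the unbiased two-sample statistic, where the second sample could be drawn from the fitted Gaussian. This is only a generic illustration; the paper's actual one-sample construction and its calibration are not reproduced here.

# Hypothetical illustration: unbiased squared MMD between two samples X and Y
# with a Gaussian RBF kernel. The statistic is close to zero when the two
# samples come from the same distribution.
import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_unbiased(X, Y, sigma=1.0):
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = rbf(X, X, sigma), rbf(Y, Y, sigma), rbf(X, Y, sigma)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 2))   # data to be tested
Y = rng.standard_normal((300, 2))   # reference Gaussian sample
print(mmd2_unbiased(X, Y))          # near 0 when the distributions match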

Two surface samples (HTF-10-17-30 and HTF-10-17-31) and two variable depth samples (HTF-10-17-32 and HTF-10-17-33) were collected from SRS Tank 10 during March 2017 and submitted to SRNL for characterization. At SRNL, the two surface samples were combined in one container, the two variable depth samples (VDSs) were combined in another container, and the two composite samples were each characterized by a series of physical, ionic, radiological, and elemental analysis methods. The surface sample composite was characterized primarily for Tank Farm corrosion control purposes, while the VDS composite was characterized primarily for Tank Closure Cesium Removal (TCCR) purposes.

Methods and apparatus whereby an optical interferometer is utilized to monitor and provide feedback control to an integrated energetic particle column, to create desired topographies, including the depth, shape and/or roughness of features, at the surface of a specimen. Energetic particle columns can direct energetic species including ions, photons and/or neutral particles to a surface to create features having in-plane dimensions on the order of 1 micron and a height or depth on the order of 1 nanometer. Energetic processes can include subtractive processes, such as sputtering, ablation and focused ion beam milling, and additive processes, such as energetic beam induced chemical vapor deposition. The integration of interferometric methods with processing by energetic species offers the ability to create desired topographies at surfaces, including planar and curved shapes.

A method, based on the multi-phase-field framework, is proposed that adequately accounts for the effects of a coupling between surface free energy and elastic deformation in solids. The method is validated via a number of analytically solvable problems. In addition to stress states at mechanical equilibrium in complex geometries, the underlying multi-phase-field framework naturally allows us to account for the influence of surface energy induced stresses on phase transformation kinetics. This issue, which is of fundamental importance on the nanoscale, is demonstrated in the limit of fast diffusion for a solid sphere, which melts due to the well-known Gibbs-Thomson effect. This melting process is slowed down when coupled to surface energy induced elastic deformation.
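
For context, the Gibbs-Thomson (size-dependent melting) effect invoked above is usually written in the form (standard expression, not quoted from the paper)

T_m(r) \approx T_m^{\infty}\left(1 - \frac{2\gamma_{sl}}{\rho_s L_f r}\right),

where gamma_sl is the solid-liquid interfacial energy, rho_s the solid density, L_f the specific latent heat of fusion and r the particle radius; in the model above this thermodynamic driving force is additionally coupled to the surface-energy-induced elastic stress state.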

Poly-n-isopropylacrylamide surface coatings demonstrate the useful property of being able to switch characteristics depending upon temperature. More specifically, these coatings switch from being hydrophilic at low temperature to hydrophobic at high temperature. Research has been conducted for many years to better characterize and control the properties of temperature sensitive coatings. The present invention provides novel temperature sensitive coatings on articles and novel methods of making temperature sensitive coatings that are disposed on the surfaces of various articles. These novel coatings contain the reaction products of n-isopropylacrylamide and are characterized by their properties such as advancing contact angles. Numerous other characteristics such as coating thickness, surface roughness, and hydrophilic-to-hydrophobic transition temperatures are also described. The present invention includes articles having temperature-sensitive coatings with improved properties as well as improved methods for forming temperature sensitive coatings.

Evaluating fungal contamination indoors is complicated because of the many different sampling methods utilized. In this study, fungal contamination was evaluated using five sampling methods and four matrices for results. The five sampling methods were a 48 hour indoor air sample ...

The application of ICP/mass spectrometry for the isotopic analysis of environmental samples, the use of drum assayers for measuring radionuclides in food and a rapid procedure for the measurement of the transuranic elements and thorium, performed at the Pacific Northwest Laboratory are discussed

Today, a wide variety of techniques is available for the preparation of (semi-) solid, liquid and gaseous samples, prior to their instrumental analysis by means of capillary gas chromatography (GC) or, increasingly, comprehensive two-dimensional GC (GC × GC). In the past two decades, a large number

An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.
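
A generic Python sketch of the importance-sampling principle for a rare bit-error event is shown below for a toy BPSK/AWGN link: the noise is drawn from a biased (mean-shifted) density and each sample is reweighted by the likelihood ratio of the nominal to the biased density. The Bayesian DFE, the bias-vector design procedure and the asymptotic-efficiency argument of the paper are not reproduced; the SNR convention and shift value are illustrative assumptions.

# Hypothetical illustration of importance sampling for a rare bit-error event.
# BPSK over AWGN: an error occurs when noise pushes a transmitted +1 below 0.
# The simulation density shifts the noise mean toward the error region and
# reweights each sample by the nominal-to-biased likelihood ratio.
import numpy as np

def ber_importance_sampling(snr_db, n_samples=100_000, shift=-1.0, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(0.5 / 10 ** (snr_db / 10.0))   # noise std for Eb = 1 (assumed)
    noise = rng.normal(shift, sigma, n_samples)    # samples from the biased density
    # likelihood ratio f_nominal(n) / f_biased(n) for Gaussian densities
    w = np.exp((-(noise ** 2) + (noise - shift) ** 2) / (2 * sigma ** 2))
    errors = (1.0 + noise) < 0.0                   # decision error for transmitted +1
    return np.mean(errors * w)

print(ber_importance_sampling(10.0))   # compare with the analytic Q(sqrt(2*Eb/N0))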

We develop a direct method for surface X-ray diffraction that exploits the holographic feature of a known reference wave from the substrate. A Bayesian analysis of the optimal inference to be made from an incomplete data set suggests a maximum entropy algorithm that balances agreement with the data and other statistical considerations.

A protease producing Bacillus sp. GA CAS10 was isolated from ascidian Phallusia arabica, Tuticorin, Southeast coast of India. Response surface methodology was employed for the optimization of different nutritional and physical factors for the production of protease. Plackett-Burman method was applied to identify ...

An effective disinfection method for strawberry (Fragaria x ananassa Duch.) cv. Senga Sengana micropropagation using runner tips and nodal segments as explants was developed. The explants were surface sterilized with different sterilants for different durations. The present studies on the effect of different regimes of ...

algorithm for feature recognition. To compare the methods, the mould insert and a number of replicated nano-patterned surfaces, injection moulded with an induction heating aid, were measured on nominally identical locations by means of an atomic force microscope mounted on a manual CMM....

In this study, we conducted surface zwitterionization of hydroxyapatite (HA) surfaces by immersing them in zwitterionic polymer solutions to provide anti-bacterial properties to the HA surface. Three different monomers containing various zwitterionic groups, i.e., phosphorylcholine (PC), sulfobetaine (SB), and carboxybetaine (CB), were copolymerized with the methacrylic monomer containing a Ca2+-binding moiety, using the free radical polymerization method. As a control, a copolymer containing the Ca2+-binding moiety was functionalized with a hydroxy group. The stable immobilization of the zwitterionic functional groups was confirmed by water contact angle analysis and X-ray photoelectron spectroscopy (XPS) measurements conducted after the sonication process. The zwitterionized HA surface showed significantly decreased protein adsorption, whereas the hydroxyl group-coated HA surface showed limited efficacy. The anti-bacterial adhesion property was confirmed by conducting Streptococcus mutans (S. mutans) adhesion tests for 6 h and 24 h. When furanone C-30, a representative anti-quorum sensing molecule for S. mutans, was used, only a small amount of bacteria adhered after 6 h and the population did not increase after 24 h. In contrast, zwitterionized HA surfaces showed almost no bacterial adhesion after 6 h and the effect was retained for 24 h, resulting in the lowest level of oral bacterial adhesion. These results confirm that surface zwitterionization is a promising method to effectively prevent oral bacterial adhesion on HA-based materials.

Bullet and cartridge case evidence may potentially link weapons and crimes through the comparison of toolmark patterns. This analysis relies on the clarity of the toolmarks and the ability of the examiner to identify patterns on the evidence. These patterns may be distorted by debris such as soil, blood, cyanoacrylate, and construction materials. Despite the potential importance of bullet and cartridge case evidence, few investigations of proper cleaning methods have been conducted. The present study was designed to examine the effects of various cleaning solutions and application methods on copper and brass bullets and cartridge cases. Additionally, this research investigated the efficacy of these cleaning protocols on the common evidence contaminants blood and cyanoacrylate. No cleaning method was found to be universally effective on both contaminant types and nondestructive to the metal surface. Ultrasonication was the most efficient application method employed when used in conjunction with an appropriate cleaning solution. Acetone proved to be safe and successful at removing heavy cyanoacrylate deposits from brass cartridge cases without damaging the metal. Although sulfuric acid removed most of the cyanoacrylate from the brass cartridge case, ultrasonication of the fumed cartridge cases in sulfuric acid caused the nickel-plated primer caps to turn black. Additionally, etching occurred when sulfuric acid was allowed to dry on the cartridge case surface. Citric acid, salt-flour-vinegar paste, Tergazyme™, and water did not effectively remove the cyanoacrylate from the cartridge cases, but the solutions were safe to use on the brass and sometimes resulted in a shinier surface. Regardless of the cleaning method employed, the bloodstained bullets retained most or all of the underlying brown tarnish. Ultrasonication with sulfuric acid was successful at removing some blood-initiated tarnishing; however, the removal of residues was not complete, making it difficult ...

Biomass pyrolysis has been a topic of increasing research interest, in particular as a replacement for crude oil. This process uses moderate temperatures to thermally deconstruct the biomass, which is then condensed into a mixture of liquid oxygenates to be used as fuel precursors. Pyrolysis oils contain more than 400 compounds, up to 60 percent of which do not re-volatilize for subsequent chemical analysis. Vapor chemical composition is also complicated, as additional condensation reactions occur during the condensation and collection of the product. Due to the complexity of the pyrolysis oil, and a desire to catalytically upgrade the vapor composition before condensation, online real-time analytical techniques such as Molecular Beam Mass Spectrometry (MBMS) are of great use. However, in order to properly sample hot pyrolysis vapors, many challenges must be overcome. Sampling must occur within a narrow range of temperatures to reduce product composition changes from overheating, or partial condensation or plugging of lines from condensed products. Residence times must be kept to a minimum to reduce further reaction chemistries. Pyrolysis vapors also form aerosols that are carried far downstream and can pass through filters, resulting in build-up in downstream locations. The co-produced bio-char and ash from the pyrolysis process can lead to plugging of the sample lines and must be filtered out at temperature, even with the use of cyclonic separators. Practical considerations for sampling system design, as well as lessons learned, are integrated into the hot analytical sampling system of the National Renewable Energy Laboratory's (NREL) Thermochemical Process Development Unit (TCPDU) to provide industrially relevant demonstrations of thermochemical transformations of biomass feedstocks at the pilot scale.

While adhesiveness is required for polymer surfaces in special applications, tacky surfaces are generally undesirable in many applications like automotive interior parts. The tackiness of a polymer surface results from a combination of composition and additivation, and it can change significantly during natural or accelerated ageing. Since there is no established, uniform method to characterize surface tack, the major focus of the present work was on the development of an objective quantification method. A setup having a soft die tip attached to a standard tensile tester was developed, aiming for correlation with the human sense of touch. Three different model thermoplastic polyolefin (TPO) compound formulations based on a high-impact isotactic polypropylene (iPP) composition with varying amounts and types of anti-scratch additives were used for these investigations. As the surface tack phenomenon is related to ageing and weathering, the materials were also examined after various intervals of weathering. The developed method allows a fast assessment of the effect of polymer composition variations and different additive formulations on surface tack and gives identical rankings as a standardized haptic panel.

A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association of antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period where the actual data collection was carried out. In the cross-sectional study samples were taken from 681 Danish pig farms, during five weeks from February to March 2015. The evaluation showed that the sampling

The invention relates to a surface refractive index scanning system for characterization of a sample. The system comprises a grating device for holding or receiving the sample, the device comprising at least a first grating region having a first grating width along a transverse direction and ... a grating period Λ2 in the longitudinal direction, where the longitudinal direction is orthogonal to the transverse direction. A grating period spacing ΔΛ = Λ1 - Λ2 is finite. Further, the first and second grating periods are chosen to provide optical resonances for light respectively in a first wavelength band and a second wavelength band, light being emitted, transmitted, or reflected in an out-of-plane direction, wherein the first wavelength band and the second wavelength band are at least partially non-overlapping in wavelength. The system further comprises a light source for illuminating ...

We examined the harmful side effects on indigenous soil microorganisms of two organic solvents, acetone and dichloromethane, that are normally used for spiking of soil with polycyclic aromatic hydrocarbons for experimental purposes. The solvents were applied in two contamination protocols to either ... higher than in control soil, probably due mainly to release of predation from indigenous protozoa. In order to minimize solvent effects on indigenous soil microorganisms when spiking native soil samples with compounds having a low water solubility, we propose a common protocol in which the contaminant ... tagged with luxAB::Tn5. For both solvents, application to the whole sample resulted in severe side effects on both indigenous protozoa and bacteria. Application of dichloromethane to the whole soil volume immediately reduced the number of protozoa to below the detection limit. In one of the soils ...

Molecular dynamics (MD) and Monte Carlo (MC) simulations have emerged as a valuable tool to investigate the statistical mechanics and kinetics of biomolecules and synthetic soft matter materials. However, major limitations for routine applications are the accuracy of the molecular mechanics force field and the maximum simulation time that can be achieved in current simulation studies. To improve sampling, a number of advanced sampling approaches have been designed in recent years. In particular, variants of the parallel tempering replica-exchange methodology are widely used in many simulation studies. Recent methodological advancements and a discussion of specific aims and advantages are given. This includes improved free energy simulation approaches and conformational search applications. (topical review)
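
For reference, the parallel tempering (temperature replica-exchange) variants mentioned above all rest on the standard Metropolis swap criterion between replicas i and j (a textbook relation, not specific to this review):

p_{\mathrm{acc}}(i \leftrightarrow j) = \min\left\{1,\ \exp\!\left[(\beta_i - \beta_j)(E_i - E_j)\right]\right\}, \qquad \beta = 1/(k_B T).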

Studies on the induction of mutation in Dendrobium orchid at MINT have produced a number of new orchid mutant cultivars. Tissue culture techniques on orchid seeds and meristem cloning are employed in preparing the samples for the mutation induction. Solid medium based on Murashige and Skoog (1962) and liquid medium based on Vacin and Went (1949) were found to be suitable for producing the protocorm-like bodies (PLBs) that are required for the irradiation treatment. (Author)

Two correlated Monte Carlo methods, the similar flight path and the identical flight path methods, have been improved to evaluate up to the second order change of the reactivity perturbation. Secondary fission neutrons produced by neutrons having passed through perturbed regions in both unperturbed and perturbed systems are followed in a way to have a strong correlation between secondary neutrons in both the systems. These techniques are incorporated into the general purpose Monte Carlo code MORSE, so as to be able to estimate also the statistical error of the calculated reactivity change. The control rod worths measured in the FCA V-3 assembly are analyzed with the present techniques, which are shown to predict the measured values within the standard deviations. The identical flight path method has revealed itself more useful than the similar flight path method for the analysis of the control rod worth. (auth.)

The U.S. Army Environmental Center (USAEC), formerly the U.S. Army Toxic and Hazardous Materials Agency, has evaluated composting methods for treatment of explosive-contaminated soils and sediments at Army installations...

Mass spectrometry-based methods play a crucial role in the quantification of the main iron metabolism regulator hepcidin by singling out the bioactive 25-residue peptide from the other naturally occurring N-truncated isoforms (hepcidin-20, -22, -24), which seem to be inactive in iron homeostasis. However, several difficulties arise in the MS analysis of hepcidin due to the "sticky" character of the peptide and the lack of suitable standards. Here, we propose the use of amino- and fluoro-silanized autosampler vials to reduce hepcidin interaction to laboratory glassware surfaces after testing several types of vials for the preparation of stock solutions and serum samples for isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC-MS/MS). Furthermore, we have investigated two sample preparation strategies and two chromatographic separation conditions with the aim of developing a LC-MS/MS method for the sensitive and reliable quantification of hepcidin-25 in serum samples. A chromatographic separation based on usual acidic mobile phases was compared with a novel approach involving the separation of hepcidin-25 with solvents at high pH containing 0.1% of ammonia. Both methods were applied to clinical samples in an intra-laboratory comparison of two LC-MS/MS methods using the same hepcidin-25 calibrators with good correlation of the results. Finally, we recommend a LC-MS/MS-based quantification method with a dynamic range of 0.5-40 μg/L for the assessment of hepcidin-25 in human serum that uses TFA-based mobile phases and silanized glass vials. Graphical abstract Structure of hepcidin-25 (Protein Data Bank, PDB ID 2KEF).

In a testing laboratory using the neutron activation analysis method, sample preparation is a main factor and cannot be neglected. Errors in sample preparation can give results with lower accuracy. This article explains the scheme of sample preparation, i.e., sample receipt administration, sample separation, fluid and solid sample preparation, sample grouping, irradiation, sample counting, and holding of samples post-irradiation. If sample management is well applied, based on Standard Operating Procedures, each sample has good traceability. Optimizing the management of samples requires trained and skilled personnel and a good facility. (author)

To assess the validity, feasibility, and acceptability of 2 methods of self-sampling compared to clinician sampling during a speculum examination. To improve screening for reproductive tract infections (RTIs) in resource-poor settings. In a public clinic in Cape Town, 450 women underwent a speculum

The measurement of ecosystem-scale energy and mass fluxes between the planetary surface and the atmosphere is crucial for understanding geophysical processes. Surface renewal is a flux measurement technique based on analyzing the turbulent coherent structures that interact with the surface. It is a less expensive technique because it does not require fast-response velocity measurements, but only a fast-response scalar measurement. It is therefore also a useful tool for the study of the global cycling of trace gases. Currently, surface renewal requires calibration against another flux measurement technique, such as eddy covariance, to account for the linear bias of its measurements. We present two advances in the surface renewal theory and methodology that bring the technique closer to becoming a fully independent flux measurement method. The first advance develops the theory of turbulent coherent structure transport associated with the different scales of coherent structures. A novel method was developed for identifying the scalar change rate within structures at different scales. Our results suggest that for canopies less than one meter in height, the second smallest coherent structure scale dominates the energy and mass flux process. Using the method for resolving the scalar exchange rate of the second smallest coherent structure scale, calibration is unnecessary for surface renewal measurements over short canopies. This study forms the foundation for analysis over more complex surfaces. The second advance is a sensor frequency response correction for measuring the sensible heat flux via surface renewal. Inexpensive fine-wire thermocouples are frequently used to record high frequency temperature data in the surface renewal technique. The sensible heat flux is used in conjunction with net radiation and ground heat flux measurements to determine the latent heat flux as the energy balance residual. The robust thermocouples commonly used in field experiments
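
For orientation, the calibrated surface renewal estimate of the sensible heat flux referred to above is commonly written as

H = \alpha \, z \, \rho \, c_p \, \frac{a}{\tau},

where a and tau are the amplitude and total period of the ramp-like coherent structures detected in the high-frequency temperature trace, z is the measurement height, rho and c_p are the air density and specific heat, and alpha is the calibration factor traditionally obtained against eddy covariance. This is the conventional form; the scale-resolved scheme described above is aimed precisely at removing the need for alpha over short canopies.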

With the background of leak-before-break (LBB) analysis of pressurized vessels and pipes in nuclear plants, the fatigue growth problem of either circumferential or longitudinal semi-elliptical surface cracks subjected to cyclic loading is studied using a continuum damage mechanics method. The fatigue damage is described by a scalar damage variable. From the damage evolution equation at the crack tip, a crack growth equation similar to the well-known Paris formula is derived, which reveals the physical meaning of the Paris formula. Thereby, a continuum damage mechanics approach is developed to analyze the configuration evolution of surface cracks during fatigue growth.
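
For reference, the Paris formula mentioned above relates the fatigue crack growth rate to the range of the stress intensity factor (standard form):

\frac{\mathrm{d}a}{\mathrm{d}N} = C \, (\Delta K)^m,

where a is the crack depth, N the number of load cycles, Delta K the stress intensity factor range at the crack tip, and C and m material constants; the damage-mechanics derivation summarized above recovers an equation of this form and thereby gives its constants a physical interpretation.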

We developed a new fabrication method for standard surface sources by using an inkjet printer with inks in which a radioactive material is mixed to print on a sheet of paper. Three printed test patterns have been prepared: (1) 100 mm × 100 mm uniformity-test patterns, (2) positional-resolution test patterns with different widths and intervals of straight lines, and (3) logarithmic intensity test patterns with different radioactive intensities. The results revealed that the fabricated standard surface sources had high uniformity, high positional resolution, arbitrary shapes and a broad intensity range.

Plastic, as a form of marine litter, is found in varying quantities and sizes around the globe from surface waters to deep-sea sediments. Identifying patterns of microplastic distribution will benefit an understanding of the scale of their potential effect on the environment and organisms. As sea ice extent is reducing in the Arctic, heightened shipping and fishing activity may increase marine pollution in the area. Microplastics may enter the region following ocean transport and local input, although baseline contamination measurements are still required. Here we present the first study of microplastics in Arctic waters, south and southwest of Svalbard, Norway. Microplastics were found in surface (top 16 cm) and sub-surface (6 m depth) samples using two independent techniques. Origins and pathways bringing microplastic to the Arctic remain unclear. Particle composition (95% fibres) suggests they may either result from the breakdown of larger items (transported over large distances by prevailing currents, or derived from local vessel activity), or input in sewage and wastewater from coastal areas. Concurrent observations of high zooplankton abundance suggest a high probability for marine biota to encounter microplastics and a potential for trophic interactions. Further research is required to understand the effects of microplastic-biota interaction within this productive environment.

Surface acoustic wave nebulisation (SAWN) mass spectrometry (MS) is a method to generate gaseous ions compatible with direct MS of minute samples at femtomole sensitivity. To perform SAWN, acoustic waves are propagated through a LiNbO3 sampling chip, and are conducted to the liquid sample, which ultimately leads to the generation of a fine mist containing droplets of nanometre to micrometre diameter. Through fission and evaporation, the droplets undergo a phase change from liquid to gaseous analyte ions in a non-destructive manner. We have developed SAWN technology for the characterisation of organic colourants in textiles. It generates electrospray-ionisation-like ions in a non-destructive manner during ionisation, as can be observed by the unmodified chemical structure. The sample size is decreased by tenfold to 1000-fold when compared with currently used liquid chromatography-MS methods, with equal or better sensitivity. This work underscores SAWN-MS as an ideal tool for molecular analysis of art objects as it is non-destructive, is rapid, involves minimally invasive sampling and is more sensitive than current MS-based methods.

Roughness, shape and structure of a surface offer information on the state, shape and surface characteristics of a component. In particular, the roughness of the surface dictates the subsequent polishing of the optical surface. Roughness is usually measured with a white light interferometer, which is limited by the size of the components. Using a moulding method, an imprint of surfaces that are difficult to reach is taken and analysed with regard to roughness and structure. This moulding compound method is successfully used in dental technology. The "replica method" has long been used in metal analysis and processing, where film is used to take an impression of a surface, which is then analysed for structures. In optical production, the moulding compound method appears advantageous for roughness determination in inaccessible spots or on large components (astronomical optics). In preliminary trials, different glass samples with different roughness levels were manufactured. Imprints were taken from these samples (based on DIN 54150 "Abdruckverfahren für die Oberflächenprüfung"). The objective of these feasibility tests was to determine the limits of this method (smallest and highest determinable roughness). The roughness of the imprint was compared with the roughness of the glass samples, and by comparing the results, the uncertainty of the measuring method was determined. The spectrum for the trials ranged from rough grind (0.8 μm rms), over finishing grind (0.6 μm rms), to polishing (0.1 μm rms).

Twin and family studies suggest that genetic variants contribute to the pathogenesis of bulimia nervosa (BN) and anorexia nervosa (AN). The Price Foundation has supported an international, multisite study of families with these disorders to identify these genetic variations. The current study presents the clinical characteristics of this sample as well as a description of the study methodology. All probands met modified criteria for BN or bulimia nervosa with a history of AN (BAN) as defined in the 4th ed. of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV; American Psychiatric Association, 1994). All affected relatives met DSM-IV criteria for BN, AN, BAN, or eating disorders not otherwise specified (EDNOS). Probands and affected relatives were assessed diagnostically using both trained-rater and self-report assessments. DNA samples were collected from probands, affected relatives, and available biologic parents. Assessments were obtained from 163 BN probands and 165 BAN probands. Overall, there were 365 relative pairs available for linkage analysis. Of the affected relatives of BN probands, 62 were diagnosed as BN (34.8%), 49 as BAN (27.5%), 35 as AN (19.7%), and 32 as EDNOS (18.0%). For the relatives of BAN probands, 42 were diagnosed as BN (22.5%), 67 as BAN (35.8%), 48 as AN (25.7%), and 30 as EDNOS (16.0%). This study represents the largest genetic study of eating disorders to date. Clinical data indicate that although there are a large number of individuals with BN disorders, a range of eating pathology is represented in the sample, allowing for the examination of several different phenotypes in molecular genetic analyses. Copyright 2004 by Wiley Periodicals, Inc. Int J Eat Disord 35: 556-570, 2004.

In this paper a method for designing waveforms for temporal encoding in medical ultrasound imaging is described. The method is based on least squares optimization and is used to design nonlinear frequency modulated signals for synthetic transmit aperture imaging. By using the proposed design method ... was compared to a linear frequency modulated signal with amplitude tapering, previously used in clinical studies for synthetic transmit aperture imaging. The latter had a relatively flat spectrum, which implied that the waveform tried to excite all frequencies including ones with low amplification. The proposed waveform, on the other hand, was designed so that only frequencies where the transducer had a large amplification were excited. Hereby, unnecessary heating of the transducer could be avoided and the signal-to-noise ratio could be increased. The experimental ultrasound scanner RASMUS was used to evaluate ...

The objective of this short-term LDRD project was to acquire the tools needed to use our chemical imaging precision mass analyzer (ChIPMA) instrument to analyze tissue samples. This effort was an outgrowth of discussions with oncologists on the need to find the cellular origin of signals in mass spectra of serum samples, which provide biomarkers for ovarian cancer. The ultimate goal would be to collect chemical images of biopsy samples, allowing the chemical images of diseased and non-diseased sections of a sample to be compared. The equipment needed to prepare tissue samples has been acquired and built. This equipment includes a cryo-ultramicrotome for preparing thin sections of samples and a coating unit. The coating unit uses an electrospray system to deposit small droplets of a UV-photo-absorbing compound on the surface of the tissue samples. Both units are operational. The tissue sample must be coated with the organic compound to enable matrix assisted laser desorption/ionization (MALDI) and matrix enhanced secondary ion mass spectrometry (ME-SIMS) measurements with the ChIPMA instrument. Initial plans to test the sample preparation using human tissue samples required development of administrative procedures beyond the scope of this LDRD. Hence, it was decided to make two types of measurements: (1) testing the spatial resolution of ME-SIMS by preparing a substrate coated with a mixture of an organic matrix and a bio standard and etching a defined pattern in the coating using a liquid metal ion beam, and (2) preparing and imaging C. elegans worms. Difficulties arose in sectioning the C. elegans for analysis, and funds and time to overcome these difficulties were not available in this project. The facilities are now available for preparing biological samples for analysis with the ChIPMA instrument. Some further investment of time and resources in sample preparation should make this a useful tool for chemical imaging applications.

Acid digestion, using the microwave power, was applied for ''dissolution'' of different materials corresponding to the radioactive waste matrices resulted from a nuclear power plant operation, including exchange resin (cationic and mixed), concrete, paper, textile and activated charcoals. A small aliquot of solid sample (0.1-0.5g) was mixed with a known volume of digestion reagents (HNO3 67% - H2O2 30% or HNO3 67% - HCl 37%, with HF addition if the SiO2 was present in matrices) in a 100 ml PTFE vessel and it was mineralized using a Berghof digestion system, Speedwave 4. Starting from the manufacturer procedures, the technical parameters (temperature and mineralization time), the types and quantities of digestion reagents were optimized. After the mineralization process, the samples were transferred in centrifuge tubes, separated at 3500 rot/min and visually analysed. The obtained solutions were clear, without suspended or deposed materials and separated phases, ready for future separation processes of the ''difficult to measure'' radioisotopes. (authors)

Soil samples were collected from four regions of the Armant area, Qena, Upper Egypt, to measure their natural radioactivity concentrations due to the Ra-226, Th-232 and K-40 radionuclides. Thirty-four surface soil samples were analyzed using low-level gamma-spectrometric analysis. The average activity concentrations for Ra-226 (in Bq/kg) in the collected soil samples were found to be 27.3 ± 3.2, 11.4 ± 1.09, 10.6 ± 1.2 and 11.4 ± 1.02, while the average values for Th-232 were 15.1 ± 1.4, 11.1 ± 0.77, 10.8 ± 0.72 and 11.1 ± 0.8 (Bq/kg) for soil samples from the North, South, West and East regions, respectively. The corresponding average values for K-40 were 521.4 ± 16.8, 463 ± 14.8, 488.9 ± 15.6 and 344.5 ± 10.7 (Bq/kg), respectively. Based on the radionuclide concentrations in the surface soil samples, the radiological effects can be assessed.
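
As an illustration of how such a radiological assessment is commonly carried out (the indices below are standard literature quantities and not necessarily the ones used in this study), the radium equivalent activity and external hazard index can be computed from the reported mean concentrations:

# Illustrative sketch (standard literature indices, not taken from this study):
# radium equivalent activity and external hazard index from mean activity
# concentrations in Bq/kg.
def radium_equivalent(a_ra, a_th, a_k):
    return a_ra + 1.43 * a_th + 0.077 * a_k            # Ra-eq (Bq/kg)

def external_hazard_index(a_ra, a_th, a_k):
    return a_ra / 370.0 + a_th / 259.0 + a_k / 4810.0   # Hex, should be below 1

# Mean values reported for the North region (Bq/kg)
print(radium_equivalent(27.3, 15.1, 521.4))        # ~ 89.0 Bq/kg
print(external_hazard_index(27.3, 15.1, 521.4))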

In this paper the focus is on environmental statistics, with the aim of estimating the concentration surface and related uncertainty of an air pollutant. We used air quality data recorded by a network of monitoring stations within a Bayesian framework to overcome difficulties in accounting for prediction uncertainty and to integrate information provided by deterministic models based on emissions, meteorology and the chemico-physical characteristics of the atmosphere. Several authors have proposed such integration, but all the proposed approaches rely on the representativeness and completeness of existing air pollution monitoring networks. We considered the situation in which the spatial process of interest and the sampling locations are not independent. This is known in the literature as the preferential sampling problem, which, if ignored in the analysis, can bias geostatistical inferences. We developed a Bayesian geostatistical model to account for preferential sampling, with the main interest in statistical integration and uncertainty. We used PM10 data arising from the air quality network of the Environmental Protection Agency of the Lombardy Region (Italy) and numerical outputs from the deterministic model. We specified an inhomogeneous Poisson process for the sampling-location intensity and a shared spatial random component model for the dependence between the spatial locations of the monitors and the pollution surface. We found greater predicted standard deviation differences in areas not properly covered by the air quality network. In conclusion, in this context inferences on prediction uncertainty may be misleading when geostatistical modelling does not take preferential sampling into account.
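
A shared-latent-process formulation of the kind described above can be sketched in the following form; the notation and the specific linear predictors are illustrative and not necessarily the authors' exact specification:

% Sketch of a shared-component preferential-sampling model (illustrative notation)
\begin{align*}
  Y(s_i) \mid S &= \mu(s_i) + S(s_i) + \varepsilon_i,
      \qquad \varepsilon_i \sim N(0,\tau^2)
      && \text{(PM$_{10}$ observed at monitor location $s_i$)}\\
  X \mid S &\sim \mathrm{IPP}\bigl(\lambda(s)\bigr),
      \qquad \log\lambda(s) = \alpha + \beta\, S(s)
      && \text{(inhomogeneous Poisson process of monitor locations)}
\end{align*}

Here S(s) is a Gaussian spatial process shared by the pollution surface and the log-intensity of the monitoring locations, mu(s) can carry the deterministic-model output as a covariate, and a non-zero beta is what encodes the preferential sampling.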

We compared the results of four methods used to assess the algal communities at 60 sites distributed among four rivers. Based on Principal Component Analysis of physical habitat data collected concomitantly with the algal data, sites were separated into those with a mean thalweg...

We describe an approach to use multiple-histogram methods in combination with static, biased Monte Carlo simulations. To illustrate this, we computed the force-extension curve of an athermal polymer from multiple histograms constructed in a series of static Rosenbluth Monte Carlo simulations. From...

...individual packaging, an operator can generate a large amount of waste that needs to be managed during a sampling mission. The U.S. Army Edgewood... prepared and spore spotting was performed in a biological safety cabinet. For the spore-spotting procedures, the surfaces were spotted with 1 mL of... 260 nm (A260) and 280 nm (A280). To determine the DNA concentration for each sample, the NanoDrop software used a modified Beer-Lambert equation and...

The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric mean density gradient theory passes through the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various...
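
For reference, the density-gradient-theory surface tension evaluated along a prescribed density path between the two bulk phases can be written as below; the notation is generic, and the exact geometric-mean and linear-path formulations used in the paper may differ in detail:

% Generic DGT surface-tension integral along a density path rho(s) (illustrative)
\begin{align*}
  \sigma &= \int_{\text{path}}
      \sqrt{\,2\,\Delta\omega(\boldsymbol{\rho})
      \sum_{i,j} c_{ij}\,\frac{d\rho_i}{ds}\,\frac{d\rho_j}{ds}}\;ds,\\
  \Delta\omega(\boldsymbol{\rho}) &= a_0(\boldsymbol{\rho})
      - \sum_i \rho_i\,\mu_i^{\mathrm{eq}} + p^{\mathrm{eq}},
\end{align*}

where Delta-omega is the tangent-plane distance to the bulk phases, whose saddle point the approximate density path is made to pass through, and c_ij are the influence parameters.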

We simulate a single component fluid condensing on 2D structured surfaces with different wettability. To simulate the two-phase fluid, we use the athermal Lattice Boltzmann Method (LBM) driven by a pseudopotential force. The pseudopotential force results in a non-ideal equation of state (EOS) which permits liquid-vapor phase change. To account for thermal effects, the athermal LBM is coupled to a finite volume discretization of the temperature evolution equation obtained using a thermal energy rate balance for the specific internal energy. We use the developed model to probe the effect of surface structure and surface wettability on the condensation rate in order to identify microstructure topographies promoting condensation. Financial support is acknowledged from Kimberly-Clark.
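
A minimal sketch of a pseudopotential (Shan-Chen-type) force on a D2Q9 lattice, of the kind used to obtain a non-ideal EOS, is given below; the interaction strength, reference density and periodic domain are illustrative assumptions rather than the parameters used in this work.

# Minimal sketch of a Shan-Chen pseudopotential force on a D2Q9 lattice.
# G, rho0 and the periodic 64x64 domain are illustrative assumptions.
import numpy as np

# D2Q9 lattice vectors and weights
E = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def shan_chen_force(rho, G=-5.0, rho0=1.0):
    """Return the (ny, nx, 2) pseudopotential force field for density rho."""
    psi = rho0 * (1.0 - np.exp(-rho / rho0))           # pseudopotential psi(rho)
    force = np.zeros(rho.shape + (2,))
    for e, w in zip(E, W):
        # psi at the neighbouring node in direction e (periodic boundaries)
        psi_nb = np.roll(np.roll(psi, -e[0], axis=1), -e[1], axis=0)
        force += w * psi_nb[..., None] * e
    return -G * psi[..., None] * force

rho = np.full((64, 64), 0.5)
rho[28:36, 28:36] = 2.0                                # a denser droplet seed
F = shan_chen_force(rho)
print(F.shape)                                         # (64, 64, 2)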

In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean with a large number of samples is used in place of the ensemble mean. However, in many situations the samples are taken from multiple levels. It is thus desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum of a cost function defined as a weighted summation of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat using a radiosonde on a tethered balloon, with which temperature and relative humidity profiles in the lowest 50 m were measured repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over the 'traditional' methods will be illustrated. Some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
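
A much simplified sketch of this weighted-misfit idea, assuming neutral stratification so that the MOST wind profile reduces to the log law (the full method also fits temperature and humidity profiles and includes stability corrections), could look as follows:

# Simplified sketch of the optimal-estimation idea: fit a log-law wind profile
# to multi-level mean winds by minimizing a variance-weighted misfit.
# Heights, winds and per-level variances below are hypothetical values.
import numpy as np
from scipy.optimize import minimize

KAPPA = 0.4  # von Karman constant

def fit_log_profile(z, u_obs, u_var):
    """Fit friction velocity u* and roughness length z0 to mean winds u_obs
    measured at heights z, weighting each level by its sample variance."""
    def cost(params):
        u_star, log_z0 = params
        u_model = (u_star / KAPPA) * np.log(z / np.exp(log_z0))
        return np.sum((u_obs - u_model) ** 2 / u_var)   # weighted misfit
    res = minimize(cost, x0=[0.3, np.log(1e-3)], method="Nelder-Mead")
    u_star, log_z0 = res.x
    return u_star, np.exp(log_z0)

z = np.array([2.0, 5.0, 10.0, 20.0, 40.0])             # measurement heights (m)
u = np.array([4.1, 4.8, 5.3, 5.8, 6.3])                # hypothetical mean winds (m/s)
var = np.array([0.04, 0.04, 0.05, 0.06, 0.08])         # per-level sample variance
print(fit_log_profile(z, u, var))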

Novel reporter bacteriophages are provided. Provided are compositions and methods that allow bacteriophages that are used for specific detection or killing of E. coli O157:H7 to be propagated in nonpathogenic E. coli, thereby eliminating the safety and security risks of propagation in E. coli O157:H7. Provided are compositions and methods for attaching active bacteriophages to the surface of a polymer in order to kill target bacteria with which the phage comes into contact. Provided are modified bacteriophages immobilized to a surface, which capture E. coli O157:H7 and cause the captured cells to emit light or fluorescence, allowing detection of the bacteria in a sample.

Sample preparation for the measurement of 99 Tc in large amounts of soil and water samples by ICP-MS has been developed using 95m Tc as a yield tracer. This method is based on the conventional method for small amounts of soil samples using incineration, acid digestion, extraction chromatography (TEVA resin) and ICP-MS measurement. Preconcentration of Tc by co-precipitation with ferric oxide has been introduced. The matrix materials in large samples were removed more thoroughly than with the previous method, while keeping a high recovery of Tc. The recovery of Tc was 70-80% for 100 g soil samples and 60-70% for 500 g of soil and 500 L of water samples. The detection limit of this method was evaluated as 0.054 mBq/kg for 500 g soil and 0.032 μBq/L for 500 L water. The determined value of 99 Tc in IAEA-375 (a soil sample collected near the Chernobyl Nuclear Reactor) was 0.25 ± 0.02 Bq/kg. (author)
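
The yield-tracer correction itself is straightforward; the sketch below is purely illustrative, with a hypothetical measured signal and a recovery taken from the mid-range of the values quoted above.

# Illustrative yield-tracer correction (hypothetical signal value; recovery
# taken from the mid-range of the 60-70 % quoted for 500 g soil samples).
def tc99_concentration(signal_mbq, recovery_95m, sample_mass_kg):
    """Correct a measured Tc-99 signal (in mBq) for the chemical recovery
    determined from the Tc-95m yield tracer; returns mBq/kg."""
    return signal_mbq / recovery_95m / sample_mass_kg

print(tc99_concentration(signal_mbq=0.12, recovery_95m=0.65, sample_mass_kg=0.5))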

Defence R and D Canada-Suffield (DRDC-Suffield) is responsible for analyzing samples that are suspected to contain chemical warfare agents, either collected by the Canadian Forces or by first-responders in the event of a terrorist attack in Canada. The analytical techniques used to identify the composition of the samples include gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS), Fourier-transform infrared spectroscopy (FT-IR) and nuclear magnetic resonance spectroscopy. GC-MS and LC-MS generally require solvent extraction and reconcentration, thereby increasing sample handling. The authors examined analytical techniques which reduce or eliminate sample manipulation. In particular, this paper presented a screening method based on solid phase microextraction (SPME) headspace sampling and GC-MS analysis for chemical warfare agents such as mustard, sarin, soman, and cyclohexyl methylphosphonofluoridate in contaminated soil samples. SPME is a method which uses small adsorbent polymer coated silica fibers that trap vaporous or liquid analytes for GC or LC analysis. Collection efficiency can be increased by adjusting sampling time and temperature. This method was tested on two real-world samples, one from excavated chemical munitions and the second from a caustic decontamination mixture. 7 refs., 2 tabs., 3 figs.

Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
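
A minimal unconditional sketch of the LULHS idea, combining a Latin Hypercube sample with a Cholesky (LU-type) factor of an assumed exponential covariance, is given below; the grid size and correlation length are illustrative assumptions, not the study's settings.

# Illustrative sketch (not the authors' code): unconditional generation of a
# spatially correlated Gaussian field by combining Latin Hypercube Sampling
# with a Cholesky (LU-type) factor of an assumed exponential covariance.
import numpy as np
from scipy.stats import norm, qmc

def lulhs_realizations(coords, corr_length=10.0, n_real=50, seed=0):
    """coords: (n_grid, 2) cell centres; returns (n_real, n_grid) random fields."""
    n = coords.shape[0]
    # Exponential covariance between all grid cells
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = np.exp(-d / corr_length)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))    # lower-triangular factor

    # Latin Hypercube sample in [0,1]^n (one stratified draw per grid cell),
    # converted to independent standard-normal scores
    u = qmc.LatinHypercube(d=n, seed=seed).random(n_real)
    z = norm.ppf(u)

    # Impose the spatial correlation: each realization is L @ z_k
    return z @ L.T

if __name__ == "__main__":
    xx, yy = np.meshgrid(np.arange(20.0), np.arange(20.0))
    coords = np.column_stack([xx.ravel(), yy.ravel()])
    fields = lulhs_realizations(coords)
    print(fields.shape)                                # (50, 400)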

A system for detecting trace concentrations of an analyte in air includes a preconcentrator for the analyte and an analyte detector. The preconcentrator includes an elongated tubular container comprising a wettable material. The wettable material is continuously wetted with an analyte-sorbing liquid which flows from one part of the container to a lower end. Sampled air flows through the container in contact with the wetted material with a swirling motion which results in efficient transfer of analyte vapors or aerosol particles to the sorbing liquid and preconcentration of traces of analyte in the liquid. The preconcentrated traces of analyte may be either detected within the container or removed therefrom for injection into a separate detection means or for subsequent analysis.

Emerging chemical pollutants (ECPs) are defined as new chemicals which do not have a regulatory status, but which may have an adverse effect on human health and the environment. The occurrence and concentrations of ECPs in South African water bodies are largely unknown, so monitoring is required in order to determine the potential threat that these ECPs may pose. Relevant surface water sampling sites in the Gauteng Province of South Africa were identified utilising a geographic information sy...

For reinforced concrete structures, a localisation of all significant critical areas can only be achieved by a full surface inspection. The economic advantages are obvious: uncritical areas do not have to be repaired at great expense. The first step of the assessment should always be a visual inspection. The range of deterioration causes can be limited and the degree of deterioration may be estimated roughly. The inspection program can then be adjusted to the requirements. By means of a full surface potential mapping, areas with a high risk of chloride-induced reinforcement corrosion can be localised, even though no deterioration is visually detectable at the concrete surface. In combination with concrete cover depth and resistivity measurements, areas with corrosion-promoting exposure conditions can be localised even if the reinforcement is not yet de-passivated. The following publication gives an overview of the essential full surface investigation methods used to localise critical areas with regard to corrosion of steel in concrete. The selection of methods is based on the inspection procedure given in reference 2. (authors)

Stable and radioactive cosmogenic nuclides and radiation damage effects such as cosmic ray tracks can provide information on the surface history of Mars. An overview of developments in cosmogenic nuclide research for historical studies of predominantly extraterrestrial materials was published previously. The information content of cosmogenic nuclides and radiation damage effects produced in the Martian surface is based on the different ways of interaction of the primary galactic and solar cosmic radiation (GCR, SCR) and the secondary particle cascade. Generally, the kind and extent of interactions as seen in the products depend on the following factors: (1) composition, energy and intensity of the primary SCR and GCR; (2) composition, energy and intensity of the GCR-induced cascade of secondary particles; (3) the target geometry, i.e., the spatial parameters of Martian surface features with respect to the primary radiation source; (4) the target chemistry, i.e., the chemical composition of the Martian surface at the sampling location down to the minor element level or lower; and (5) the duration of the exposure. These factors are not independent of each other and have a major influence on sampling strategies and techniques.

The environment is implicated as a source of healthcare-associated infections (HAIs) and there is a need for evidence-based approaches to environmental sampling to assess cleanliness and improve infection prevention and control. We assessed, in vitro, different approaches to sampling the environment for meticillin-resistant Staphylococcus aureus (MRSA). In a laboratory-based investigation, the recovery of MRSA from two common hospital environments using six different sampling methods was evaluated with a wild-type strain of MRSA. A 100 cm² section of mattress and a laboratory bench surface were contaminated with known inocula of MRSA. Bacteria were recovered by sampling at 30 min after inoculation, using either saline-moistened cotton swabs, neutralising buffer swabs, eSwabs or macrofoam swabs, which were all enriched in tryptone soya broth, or by sampling with direct contact plates or chromogenic 'sweep' plates. The sensitivity (i.e. the minimum number of bacteria inoculated on to a surface which subsequently produced a positive result) of each method was determined for each surface. The most sensitive methods were eSwabs and macrofoam swabs, requiring 6.1 × 10⁻¹ and 3.9 × 10⁻¹ MRSA/cm², respectively, to produce a positive result from the bench surface. The least sensitive swabbing method was saline-moistened cotton swabs, requiring 1.1 × 10³ MRSA/cm² of mattress. The recovery of bacteria from environmental samples varies with the swabs and methodology used, and negative culture results do not exclude a pathogen-free environment. Greater standardisation is required to facilitate the assessment of cleanliness of healthcare environments.

The US Department of Energy's (DOE's) environmental and waste management (EM) sampling and analysis activities require that large numbers of samples be analyzed for materials characterization, environmental surveillance, and site-remediation programs. The present document, DOE Methods for Evaluating Environmental and Waste Management Samples (DOE Methods), is a supplemental resource for analyzing many of these samples.