Composite mixed ionic-electronic conductors (MIECs) comprising one electronic conducting phase and one ionic conducting phase have been developed in this work. Such MIECs have applications in generating and separating hydrogen from hydrocarbon fuels at high process rates and high purities. The ionic conducting phase comprises rare-earth doped ceria and the electronic conducting phase rare-earth doped strontium titanate; these compositions are well suited to the hydrogen separation application. In the process studied in this project, steam at high temperature is fed to one side of the MIEC membrane and hydrocarbon fuel, or reformed hydrocarbon fuel, to the other side. Oxygen is transported from the steam side to the fuel side down the electrochemical potential gradient, thereby enriching the steam-side flow in hydrogen. The remnant water vapor can then be condensed to obtain high-purity hydrogen. In this work we have shown that two-phase MIECs comprising rare-earth doped ceria as the ionic conductor and doped strontium titanate as the electronic conductor are stable in the operating environment of the MIEC, and that no adverse reaction products are formed when these phases are in contact at elevated temperatures. The composite MIECs have been characterized using a transient electrical conductivity relaxation technique to measure the oxygen chemical diffusivity and the surface exchange coefficient. Oxygen permeation and hydrogen generation rates have been measured under a range of process conditions, and the results have been fit to a model incorporating the oxygen chemical diffusivity and the surface exchange coefficient from the transient measurements.
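Oxygen transport of the kind described above is often approximated as bulk diffusion in series with surface exchange at the two gas/solid interfaces. The following is a minimal sketch of such a series-resistance flux model, not the fitted model of this work; all function names and parameter values are illustrative:

```python
def oxygen_flux(delta_c, thickness, d_chem, k_surf):
    """Oxygen flux through a membrane, treating bulk diffusion and the
    two surface-exchange steps as series resistances (a common 1-D
    simplification; symbols are illustrative, not from the report).

    delta_c   -- oxygen concentration difference across the membrane
    thickness -- membrane thickness (m)
    d_chem    -- oxygen chemical diffusivity (m^2/s)
    k_surf    -- surface exchange coefficient (m/s)
    """
    r_bulk = thickness / d_chem   # bulk diffusion resistance
    r_surf = 2.0 / k_surf         # one exchange step per interface
    return delta_c / (r_bulk + r_surf)
```

In the limit of fast surface exchange the flux reduces to the familiar bulk-diffusion form delta_c * d_chem / thickness, and thinning the membrane raises the flux until the surface terms dominate.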

While finite non-commutative operator systems lie at the foundation of quantum measurement, they are also tools for understanding geometric iterations as used in the theory of iterated function systems (IFSs) and in wavelet analysis. Key is a certain splitting of the total Hilbert space and its recursive iterations to further iterated subdivisions. This paper explores some implications for associated probability measures (in the classical sense of measure theory), specifically their fractal components. We identify a fractal scale $s$ in a family of Borel probability measures $\mu$ on the unit interval which arises independently in quantum information theory and in wavelet analysis. The scales $s$ we find satisfy $s \in \mathbb{R}_{+}$ and ...

MiniBooNE reports the first absolute cross sections for neutral current single π^0 production on CH_2 induced by neutrino and antineutrino interactions, measured from the largest sets of NC π^0 events collected to date. The principal result consists of differential cross sections measured as functions of π^0 momentum and π^0 angle, averaged over the neutrino flux at MiniBooNE. We find total cross sections of (4.76 ± 0.05_stat ± 0.40_sys) × 10^-40 cm^2/nucleon at a mean energy of 808 MeV and (1.48 ± 0.05_stat ± 0.14_sys) × 10^-40 cm^2/nucleon at a mean energy of 664 MeV for neutrino- and antineutrino-induced production, respectively.

SOFTWARE-RELATED MEASUREMENT: RISKS AND OPPORTUNITIES. CEM KANER, J.D. ... Most managers I know have tried at least one measurement program and abandoned it. ... · Measurement theory and how it applies to software development metrics (which, at their core, are typically ...

NIST Frequency Measurement & Analysis Service: A Complete Solution to All Frequency Measurement & Calibration Problems. The NIST Frequency Measurement and Analysis Service makes it easy to measure and calibrate any quartz, rubidium, or cesium frequency standard. All measurements are made automatically ...

It is assumed that an arbitrary composite bipartite pure state in which the two subsystems are entangled is given, and it is investigated how the entanglement transmits the influence of measurement on only one of the subsystems to the state of the opposite subsystem. It is shown that any exact subsystem measurement has the same influence as ideal measurement on the opposite subsystem. In particular, the distant effect of subsystem measurement of a twin observable, i.e., so-called 'distant measurement', is always ideal measurement on the distant subsystem no matter how intricate the direct exact measurement on the opposite subsystem is.

Flatness of a plate is a parameter that has long been under consideration. Factors influencing the accuracy of this parameter have been recognized and examined carefully, but the results are scattered across the literature. Moreover, those reports have not always been in harmony with the Guide to the Expression of Uncertainty in Measurement (GUM), and mathematical equations clearly describing the flatness measurement have not appeared in them. We have collected these influencing factors for systematic reference, re-written the equation describing the profile measurement of the plate topography, and proposed an equation for flatness determination. An illustrative numerical example is also shown.
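One common convention for flatness determination is the peak-to-valley deviation of the measured profile points from a least-squares reference plane. The sketch below implements that convention; it is illustrative and not necessarily the equation proposed in the report, and all names are invented for the example:

```python
def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c through (x, y, z) points,
    solved via the 3x3 normal equations (pure-Python sketch)."""
    sxx = sxy = sx = syy = sy = n = sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    m = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    for i in range(3):                      # Gaussian elimination with pivoting
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            for c in range(i, 4):
                m[r][c] -= f * m[i][c]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                     # back substitution
        coef[i] = (m[i][3] - sum(m[i][j] * coef[j]
                                 for j in range(i + 1, 3))) / m[i][i]
    return coef

def flatness(points):
    """Peak-to-valley deviation of the points from the reference plane."""
    a, b, c = fit_plane(points)
    resid = [z - (a * x + b * y + c) for x, y, z in points]
    return max(resid) - min(resid)
```

A perfectly planar (even if tilted) set of points yields a flatness of essentially zero; a local bump shows up directly in the peak-to-valley value.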

The fidelity (Shannon mutual information between measurements and physical quantities) is proposed as a quantitative measure of the quality of physical measurements. The fidelity does not depend on the true value of unknown physical quantities (as does the Fisher information) and it allows for the role of prior information in the measurement process. The fidelity is general enough to allow a natural comparison of the quality of classical and quantum measurements. As an example, the fidelity is used to compare the quality of measurements made by a classical and a quantum Mach-Zehnder interferometer.
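For a discrete toy model, the fidelity defined above reduces to the Shannon mutual information between the physical quantity and the measurement record, computable directly from their joint distribution. A minimal sketch (names are illustrative; real measurements would involve continuous distributions):

```python
from math import log2

def mutual_information(joint):
    """Shannon mutual information I(X;Y) in bits from a joint
    probability table joint[x][y] over quantity X and outcome Y."""
    px = [sum(row) for row in joint]            # marginal over X
    py = [sum(col) for col in zip(*joint)]      # marginal over Y
    info = 0.0
    for x, row in enumerate(joint):
        for y, p in enumerate(row):
            if p > 0.0:
                info += p * log2(p / (px[x] * py[y]))
    return info
```

A perfect measurement of a fair bit gives 1 bit of fidelity; a measurement whose outcome is independent of the quantity gives 0, matching the intuition that such a measurement carries no information.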

Energy measurements play a very important role in a detailed energy analysis. The role is more important in industrial processes where wide variations of process conditions exist. Valid energy measurements make the decision making process easier...

An empirical method is presented for the remote sensing of steam quality that can be easily adapted to downhole steam-quality measurements by measuring the electrical properties of two-phase flow across electrode grids at low frequencies.

Apparatus and methods are provided for a system for measurement of a current in a conductor such that the conductor current may be momentarily directed to a current measurement element in order to maintain proper current without significantly increasing an amount of power dissipation attributable to the current measurement element or adding resistance to assist in current measurement. The apparatus and methods described herein are useful in superconducting circuits where it is necessary to monitor current carried by the superconducting elements while minimizing the effects of power dissipation attributable to the current measurement element.

A method and apparatus for measuring the through-thickness resistance or conductance of a thin electrolyte is provided. The method and apparatus include positioning a first source electrode on a first side of an electrolyte to be tested, positioning a second source electrode on a second side of the electrolyte, positioning a first sense electrode on the second side of the electrolyte, and positioning a second sense electrode on the first side of the electrolyte. Current is then passed between the first and second source electrodes, and the voltage between the first and second sense electrodes is measured.
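The four-electrode arrangement is a Kelvin (four-terminal) measurement: the sense electrodes carry negligible current, so lead and contact resistances drop out and the through-thickness resistance is simply the sensed voltage over the sourced current. The area-normalized quantities below are a common illustrative extension, not taken from the patent:

```python
def through_thickness_resistance(i_source, v_sense):
    """Kelvin-sensed resistance (ohm): sense voltage / source current."""
    return v_sense / i_source

def asr(i_source, v_sense, area_cm2):
    """Area-specific resistance (ohm*cm^2) of the electrolyte sample."""
    return through_thickness_resistance(i_source, v_sense) * area_cm2

def conductivity(i_source, v_sense, area_cm2, thickness_cm):
    """Ionic conductivity sigma = L / (R * A), in S/cm."""
    r = through_thickness_resistance(i_source, v_sense)
    return thickness_cm / (r * area_cm2)
```

For example, 10 mA sourced and 5 mV sensed across a 2 cm^2, 20 µm-thick electrolyte gives 0.5 ohm, an ASR of 1 ohm·cm^2, and a conductivity of 2 mS/cm.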

Any verification measurement performed on potentially classified nuclear material must satisfy two seemingly contradictory constraints. First and foremost, no classified information can be released. At the same time, the monitoring party must have confidence in the veracity of the measurement. An information barrier (IB) is included in the measurement system to protect the potentially classified information while allowing sufficient information transfer to occur for the monitoring party to gain confidence that the material being measured is consistent with the host's declarations concerning that material. The attribute measurement technique incorporates an IB and addresses both concerns by measuring several attributes of the nuclear material and displaying unclassified results through green (indicating that the material does possess the specified attribute) and red (indicating that the material does not possess the specified attribute) lights. The attribute measurement technique has been implemented in the AVNG, an attribute measuring system described in other presentations at this conference. In this presentation, we will discuss four techniques used in the AVNG: (1) the IB, (2) the attribute measurement technique, (3) the use of open and secure modes to increase confidence in the displayed results, and (4) the joint design as a method for addressing both host and monitor needs.

An apparatus for measuring the hydraulic axial thrust of a pump under operation conditions is disclosed. The axial thrust is determined by forcing the rotating impeller off of an associated thrust bearing by use of an elongate rod extending coaxially with the pump shaft. The elongate rod contacts an impeller retainer bolt where a bearing is provided. Suitable measuring devices measure when the rod moves to force the impeller off of the associated thrust bearing and the axial force exerted on the rod at that time. The elongate rod is preferably provided in a housing with a heat dissipation mechanism whereby the hot fluid does not affect the measuring devices. 1 fig.

The measurement of absolutely normalized cross sections for high-energy scattering processes is an important reference for theoretical models. This paper discusses the first determination of the luminosity for data of the COMPASS experiment, which is the basis for such measurements. The resulting normalization is validated via the determination of the structure function $F_2$ from COMPASS data, which is compared to literature.

Measuring Energy Sustainability. David L. Greene. Abstract: For the purpose of measurement, energy sustainability is defined as ensuring that future generations have energy resources that enable them to achieve ... that there are valid, more comprehensive understandings of sustainability and that energy sustainability as defined ...

A method for the measurement of the viscosity of a fluid uses a micromachined cantilever mounted on a moveable base. As the base is rastered while in contact with the fluid, the deflection of the cantilever is measured and the viscosity determined by comparison with standards.

This document contains descriptions of Federal Manufacturing & Technologies (FM&T) Metrology capabilities, traceability flow charts, and the measurement uncertainty of each measurement capability. Metrology provides NIST traceable precision measurements or equipment calibration for a wide variety of parameters, ranges, and state-of-the-art uncertainties. Metrology laboratories conform to the requirements of the Department of Energy Development and Production Manual Chapter 13.2, ANSI/ISO/IEC 17025:2005, and ANSI/NCSL Z540-1. FM&T Metrology laboratories are accredited by NVLAP for the parameters, ranges, and uncertainties listed in the specific scope of accreditation under NVLAP Lab code 200108-0. See the Internet at http://ts.nist.gov/Standards/scopes/2001080.pdf. These parameters are summarized. The Honeywell Federal Manufacturing & Technologies (FM&T) Metrology Department has developed measurement technology and calibration capability in four major fields of measurement: (1) Mechanical; (2) Environmental, Gas, Liquid; (3) Electrical (DC, AC, RF/Microwave); and (4) Optical and Radiation. Metrology Engineering provides the expertise to develop measurement capabilities for virtually any type of measurement in the fields listed above. A strong audit function has been developed to provide a means to evaluate the calibration programs of our suppliers and internal calibration organizations. Evaluation includes measurement audits and technical surveys.

A magnetic field measurement system was designed, built, and installed at MAX Lab, Sweden for the purpose of characterizing the magnetic field produced by insertion devices (see Figure 1). The measurement system consists of a large granite beam, roughly 2 feet square and 14 feet long, that has been polished beyond laboratory grade for flatness and straightness. The granite precision, coupled with the design of the carriage, yielded minimum position deviations as measured at the probe tip. The Hall probe data collection and compensation technique allows exceptional resolution and range while taking data on the fly at programmable sample spacing. An additional flip coil provides field-integral data.

We characterize the extremal points of the convex set of quantum measurements that are covariant under a finite-dimensional projective representation of a compact group, with action of the group on the measurement probability space which is generally non-transitive. In this case the POVM density is made of multiple orbits of positive operators, and, in the case of extremal measurements, we provide a bound for the number of orbits and for the rank of POVM elements. Two relevant applications are considered, concerning state discrimination with mutually unbiased bases and the maximization of the mutual information.

This thesis details a measurement setup and experimental procedures for emittance measurements using a Fourier transform infrared spectrometer. We calibrate the FTIR measurement system using measurements of a blackbody ...

A multipurpose in situ underground measurement system comprising a plurality of long electrical resistance elements in the form of rigid reinforcing bars, each having an open loop hairpin configuration of shorter length than the other resistance elements. The resistance elements are arranged in pairs in a unitized structure, and grouted in place in the underground volume. Measurement means are provided for obtaining for each pair the electrical resistance of each element and the difference in electrical resistance of the paired elements, which difference values may be used in analytical methods involving resistance as a function of temperature. A scanner means sequentially connects the resistance-measuring apparatus to each individual pair of elements. A source of heating current is also selectively connectable for heating the elements to an initial predetermined temperature prior to electrical resistance measurements when used as an anemometer.

The linear accelerator ELBE delivers high-brightness electron bunches to multiple user stations, including two IR-FEL oscillators [1], [2]. In the framework of an upgrade program, the current thermionic injector is being replaced by an SRF photoinjector [3], [4]. The SRF injector promises higher beam quality, especially required for future experiments with high power laser radiation. During the commissioning phase, the SRF injector was running in parallel to the thermionic gun. After installation of an injection beamline (dogleg), beam from the SRF injector can now be injected into the ELBE linac. Detailed characterization of the electron beam quality delivered by the new electron injector includes vertical slice emittance measurements in addition to measurements of projected emittance values. This report gives an overview of the status of the project and summarizes first measurement results as well as results of simulations performed with measurement settings.

To identify habits, previous research has relied upon measures of past behavior frequency. These studies have been unable to differentiate between habits and frequently performed behavior that is thoughtful and deliberate. Thoughtful initiation and performance...

This thesis presents the measurement of the charged current quasi-elastic (CCQE) neutrino-nucleon cross section at neutrino energies around 1 GeV. This measurement has two main physical motivations. On one hand, neutrino-nucleon interactions at a few GeV are a region where existing old data are sparse and of low statistics; the current measurement populates low energy regions with higher statistics and precision than previous experiments. On the other hand, the CCQE interaction is the most useful interaction in neutrino oscillation experiments: the CCQE channel is used to measure the initial and final neutrino fluxes in order to determine the neutrino fraction that disappeared. Neutrino oscillation experiments work at low neutrino energies, so precise measurements of CCQE interactions are essential for flux measurements. The main goal of this thesis is to measure the absolute CCQE neutrino cross section from the SciBooNE data. The SciBar Booster Neutrino Experiment (SciBooNE) is a neutrino and antineutrino scattering experiment with a neutrino energy spectrum around 1 GeV. SciBooNE ran from June 8, 2007 to August 18, 2008, collecting a total of 2.65 x 10^20 protons on target (POT). This thesis uses the full neutrino-mode data set, 0.99 x 10^20 POT. A CCQE selection has been performed, achieving a roughly 70% pure CCQE sample. A fit method has been developed specifically to determine the absolute CCQE cross section, with results presented in a neutrino energy range from 0.2 to 2 GeV. The results are compatible with the NEUT predictions. The SciBooNE measurement has been compared with both carbon (MiniBooNE) and deuterium (ANL and BNL) target experiments, showing good agreement in both cases.
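Schematically, an absolute flux-averaged cross section of this kind is the background-subtracted, efficiency-corrected event count divided by the integrated flux and the number of target nucleons. The single-bin sketch below is a simplification of the binned fit described above; all names and the example numbers are illustrative:

```python
def cross_section_per_nucleon(n_selected, purity, efficiency,
                              flux_per_cm2, n_nucleons):
    """Flux-averaged absolute cross section in cm^2/nucleon.

    n_selected   -- events passing the selection
    purity       -- signal fraction of the selected sample
    efficiency   -- selection efficiency for signal events
    flux_per_cm2 -- integrated neutrino flux (neutrinos/cm^2)
    n_nucleons   -- number of target nucleons in the fiducial volume
    """
    n_signal = n_selected * purity / efficiency  # corrected signal count
    return n_signal / (flux_per_cm2 * n_nucleons)
```

For instance, 1000 selected events with 70% purity and 35% efficiency correspond to 2000 true signal events; divided by an assumed 10^12 neutrinos/cm^2 on 10^30 nucleons, this gives 2 x 10^-39 cm^2/nucleon.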

The Hamiltonian structure of general relativity provides a natural canonical measure on the space of all classical universes, i.e., the multiverse. We review this construction and show how one can visualize the measure in terms of a 'magnetic flux' of solutions through phase space. Previous studies identified a divergence in the measure, which we observe to be due to the dilatation invariance of flat Friedmann-Lemaitre-Robertson-Walker universes. We show that the divergence is removed if we identify universes which are so flat they cannot be observationally distinguished. The resulting measure is independent of time and of the choice of coordinates on the space of fields. We further show that, for some quantities of interest, the measure is very insensitive to the details of how the identification is made. One such quantity is the probability of inflation in simple scalar field models. We find that, according to our implementation of the canonical measure, the probability for N e-folds of inflation in single-field, slow-roll models is suppressed by of order exp(-3N) and we discuss the implications of this result.

This document contains descriptions of Federal Manufacturing & Technologies (FM&T) Metrology capabilities, traceability flow charts, and the measurement uncertainty of each measurement capability. Metrology provides NIST traceable precision measurements or equipment calibration for a wide variety of parameters, ranges, and state-of-the-art uncertainties. Metrology laboratories conform to the requirements of the Department of Energy Development and Production Manual Chapter 8.4, ANSI/ISO/IEC 17025:2000, and ANSI/NCSL Z540-1 (equivalent to ISO Guide 25). FM&T Metrology laboratories are accredited by NVLAP for the parameters, ranges, and uncertainties listed in the specific scope of accreditation under NVLAP Lab code 200108-0. See the Internet at http://ts.nist.gov/ts/htdocs/210/214/scopes/2001080.pdf. These parameters are summarized in the table at the bottom of this introduction.

We summarize the current status of cosmological measurements using SNe Ia. Searches to an average depth of z~0.5 have found approximately 100 SNe Ia to date, and measurements of their light curves and peak magnitudes find these objects to be about 0.25mag fainter than predictions for an empty universe. These measurements imply low values for Omega_M and a positive cosmological constant, with high statistical significance. Searches out to z~1.0-1.2 for SNe Ia (peak magnitudes of I~24.5) will greatly aid in confirming this result, or demonstrate the existence of systematic errors. Multi-epoch spectra of SNe Ia at z~0.5 are needed to constrain possible evolutionary effects. I band searches should be able to find SNe Ia out to z~2. We discuss some simulations of deep searches and discovery statistics at several redshifts.

The present invention relates to an empirical electrical method for remote sensing of steam quality utilizing flow-through grids which allow measurement of the electrical properties of a flowing two-phase mixture. The measurement of steam quality in the oil field is important to the efficient application of steam-assisted recovery of oil. Because of the increased energy content of higher quality steam, it is important to maintain the highest possible steam quality at the injection sandface. Without a measurement of steam quality downhole, close to the point of injection, the effectiveness of a steaming operation would be difficult to determine. Therefore, a need exists for the remote sensing of steam quality.

Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.
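The transfer function correction mentioned above amounts to dividing each frequency bin of the measured signal by the measured complex gain/phase response of the sensor-amplifier chain, which removes the instrumentation gain and phase errors simultaneously. This is an illustrative sketch of that idea, not the paper's exact procedure:

```python
def correct_spectrum(measured, h_chain):
    """Compensate instrumentation errors in the frequency domain.

    measured -- complex spectrum values from the sensor channel, one per bin
    h_chain  -- measured complex transfer function (gain and phase) of the
                sensor + amplifier chain at the same frequency bins
    """
    return [m / h for m, h in zip(measured, h_chain)]
```

Because the division is complex, a bin where the chain has gain sqrt(2) and a 45-degree phase lead (h = 1 + 1j) has both its amplitude and its phase error removed at once.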

The present invention provides systems and methods for accurately characterizing thermodynamic and materials properties of electrodes and electrochemical energy storage and conversion systems. Systems and methods of the present invention are configured for simultaneously collecting a suite of measurements characterizing a plurality of interconnected electrochemical and thermodynamic parameters relating to the electrode reaction state of advancement, voltage and temperature. Enhanced sensitivity provided by the present methods and systems combined with measurement conditions that reflect thermodynamically stabilized electrode conditions allow very accurate measurement of thermodynamic parameters, including state functions such as the Gibbs free energy, enthalpy and entropy of electrode/electrochemical cell reactions, that enable prediction of important performance attributes of electrode materials and electrochemical systems, such as the energy, power density, current rate and the cycle life of an electrochemical cell.
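For electrode reactions, the state functions named above follow from the temperature dependence of the open-circuit voltage E at a fixed state of charge via the standard relations ΔG = -nFE, ΔS = nF(dE/dT), and ΔH = ΔG + TΔS. A minimal sketch using these textbook relations (function and variable names are illustrative, not from the patent):

```python
F = 96485.33  # Faraday constant, C/mol

def cell_thermodynamics(e_volts, de_dt, temp_k, n=1):
    """State functions of the cell reaction from measured E(T).

    e_volts -- open-circuit voltage at temperature temp_k (V)
    de_dt   -- temperature coefficient dE/dT at fixed state of charge (V/K)
    temp_k  -- absolute temperature (K)
    n       -- electrons transferred per formula unit
    Returns (dG, dS, dH) in J/mol, J/(mol K), J/mol.
    """
    dg = -n * F * e_volts        # Gibbs free energy of reaction
    ds = n * F * de_dt           # entropy from the voltage slope
    dh = dg + temp_k * ds        # enthalpy via dH = dG + T*dS
    return dg, ds, dh
```

A cell with E = 1 V and dE/dT = 0.1 mV/K at 300 K thus yields ΔG ≈ -96.5 kJ/mol and a small positive reaction entropy, the kind of quantities the thermodynamically stabilized measurements above are designed to resolve.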

The purpose of this measure guideline on ventilation cooling is to provide information on a cost-effective solution for reducing cooling system energy and demand in homes located in hot-dry and cold-dry climates. This guideline provides a prescriptive approach that outlines qualification criteria, selection considerations, and design and installation procedures.

MANDATORY MEASURES: DAYLIGHTING. Reference: Sub-Chapter 4, Section 130.1(d). SECTION 4: Daylighting ... daylighting controls. · Lighting in daylit zones should have multi-level steps, per Table 130.1-A. · Light levels provided at night should be available at all other times. · When sufficient daylight is available ...

MANDATORY MEASURES: DAYLIGHTING. Reference: Sub-Chapter 4, Section 130.1(d). SECTION 4: MINIMUM DAYLIGHTING FOR LARGE SPACES. Large enclosed spaces, such as large retail warehouses, are required to have a minimum amount of daylight available when using the prescriptive method of compliance. The minimum ...

October 16, 2008. High-energy photons from medical accelerators are used to treat tumors in cancer patients ... therapy is operated at energies up to 25 MeV. This high energy exceeds the photonuclear threshold energy ... alternative methods, since they are quicker, more flexible, and less rigorous than taking measurements ...

New Proton Radioactivity Measurements. Richard J. Irvine. Thesis submitted for the degree of Doctor ... to search for examples of proton emission from ground and low-lying states in odd-Z nuclei at the proton ... into a double-sided silicon strip detector system, where their subsequent particle decays (proton or alpha) were ...

... droplets are within the laser beam long enough so they can be sized. · A running average of droplet transit ... it is rejected from sizing but included in the running average. (Slide headings: Laser Beam Fraction Correction; Velocity; Activity Fraction; FSSP Particle Loss.) Percentage of particle losses based on the measured FSSP activity.

Experiments: Preparation and Measurement, by Arnold Neumaier, Vienna, March 1996. Abstract/Introduction: Experiments, properly arranged, provide information about a physical system by suitable ... the experimental set-up and the results of performing the experiment. Again, this is part of human culture ...

We present a definition of time measurement based on high energy photons and the fundamental length scale, and show that, for macroscopic time, it is in accord with the Lorentz transformation of special relativity. To do this we define observer in a different way than in special relativity.

The LCLS reference undulator has been measured 22 times during the course of undulator tuning. These measurements provide estimates of various statistical errors. This note gives a summary of the reference undulator measurements and provides estimates of the undulator tuning errors. We measured the reference undulator many times during the tuning of the LCLS undulators; these data sets give estimates of the random errors in the tuned undulators. The measured trajectories in the reference undulator are stable and straight to within ±2 µm. Changes in the phase errors are less than ±2 deg between data sets. The phase advance in the cell varies by less than ±2 deg between data sets. The rms variation between data sets of the first integral of B_x is 9.98 µTm, and the rms variation of the second integral of B_x is 17.4 µTm^2. The rms variation of the first integral of B_y is 6.65 µTm, and the rms variation of the second integral of B_y is 12.3 µTm^2. The rms variation of the x-position of the fiducialized beam axis is 35 µm in the final production run. This corresponds to an rms uncertainty in the K value of ΔK/K = 2.7 x 10^-5. The rms variation of the y-position of the fiducialized beam axis is 4 µm in the final production run.

Weak measurements are supposed to be essential for the so-called direct measurement of the quantum wavefunction [Nature (London) 474, 188 (2011)]. Here we show that direct measurement of the wavefunction can be obtained by using measurements of arbitrary strength. In particular, in the case of strong (i.e., projective) measurements, we compare the precision and the accuracy of the two methods, showing that strong measurements outperform weak measurements in both. We also give the exact expression of the reconstructed wavefunction obtained by the weak measurement approach, allowing one to define the range of applicability of that method.

I review recent progress in defining a probability measure in the inflationary multiverse. General requirements for a satisfactory measure are formulated and recent proposals for the measure are clarified and discussed.

Energy storage devices, primarily batteries, are now more important to consumers, industries, and the military. With increasing technical complexity and higher user expectations, there is also a demand for highly accurate state-of-health battery assessment techniques. The Impedance Measurement Box (IMB) incorporates patented, proprietary, and tested capabilities using control software and hardware that can be part of an embedded monitoring system. The IMB directly measures the wideband impedance spectrum in seconds during battery operation with no significant impact on service life. It can also be applied to batteries prior to installation, confirming health before entering active service, as well as during regular maintenance. For more information about this project, visit http://www.inl.gov/rd100/2011/impedance-measurement-box/

The top quark, with its extraordinarily large mass (nearly that of a gold atom), plays a significant role in the phenomenology of EWSB in the Standard Model. In particular, the top quark mass when combined with the W mass constrains the mass of the as yet unobserved Higgs boson. Thus, a precise determination of the mass of the top quark is a principal goal of the CDF and D0 experiments. With the data collected thus far in Runs 1 and 2 of the Tevatron, CDF and D0 have measured the top quark mass in both the lepton+jets and dilepton decay channels using a variety of complementary experimental techniques. The author presents an overview of the most recent of the measurements.

A statistical description and model of individual healthcare expenditures in the US has been developed for measuring value in healthcare. We find evidence that healthcare expenditures are quantifiable as an infusion-diffusion process, which can be thought of intuitively as a steady change in the intensity of treatment superimposed on a random process reflecting variations in the efficiency and effectiveness of treatment. The arithmetic mean represents the net average annual cost of healthcare; and when multiplied by the arithmetic standard deviation, which represents the effective risk, the result is a measure of healthcare cost control. Policymakers, providers, payors, or patients that decrease these parameters are generating value in healthcare. The model has an average absolute prediction error of approximately 10-12% across the range of expenditures which spans 6 orders of magnitude over a nearly 10-year period. For the top 1% of the population with the largest expenditures, representing 20%-30% of total ...

The measurement of transverse spin effects in semi-inclusive deep-inelastic scattering is an important part of the COMPASS physics program. From the analysis of the 2002-2004 data, new results for the transverse target spin asymmetry of z-ordered identified pion and kaon pairs are presented. In addition, a first result for the transverse target spin asymmetry of exclusively produced rho^0 mesons on the deuteron is shown.

their corrected outputs given their measured inputs, outputs, and the sensitivity relations of equation (9). For example, the gas turbine flow and exhaust temperatures are input variables to the respective HRSG, in addition to steam pressures and inlet ... feedwater temperatures, which are HRSG independent variables, as shown in Figure 2. In sliding pressure operation, there is an iterative calculation with the steam turbine due to the effect of pressure on HRSG steam generation and steam ...

of words and their neutral counterparts were designed for this experiment. Using a computer ST as opposed to a card ST was disregarded based on the Kindt, Bierman, and Brosschot (1996) study that showed the highest test-retest correlation for the standard... such as inattention to task) (Townshend & Duka, 2001). In total, 6.3% of participants were excluded (n = 5). There were no significant differences in the excluded participants' demographic characteristics. Instruments and measures. Questionnaires. Three self...

reserve margin and the probability of having such a reserve margin. An overview of other statistics measuring the continuity of supply quantities, such as the Customer Average Interruption Duration Index (CAIDI) or the Customer Minutes... continuity of Italian gas and electricity supplies based on the DGTren Reference Scenario for Italy in 2030 (DG Tren, 2009) but replacing the nuclear capacity with 10GW concentrated solar power imports. The decision which policy...

DRIVER EYE HEIGHT MEASUREMENT A Thesis by ANTHONY DANIEL ABRAHAMSON Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE December 1978 Major Subject: Civil... Engineering DRIVER EYE HEIGHT MEASUREMENT A Thesis by ANTHONY DANIEL ABRAHAMSON Approved as to style and content by: (Chairman of Committee) (Member) (Member) (Head of Department) December 1978 ABSTRACT Driver Eye Height Measurement. (December...

The purpose of this measure guideline on evaporative condensers is to provide information on a cost-effective solution for energy and demand savings in homes with cooling loads. This is a prescriptive approach that outlines selection criteria, design and installation procedures, and operation and maintenance best practices. This document has been prepared to provide a process for properly designing, installing, and maintaining evaporative condenser systems as well as understanding the benefits, costs, and tradeoffs.

Systems and methods are described for a wireless instrumented silicon wafer that can measure temperatures at various points and transmit those temperature readings to an external receiver. The device has particular utility in the processing of semiconductor wafers, where it can be used to map thermal uniformity on hot plates, cold plates, spin bowl chucks, etc. without the inconvenience of wires or the inevitable thermal perturbations attendant with them.

The system of the present invention contemplates a non-intrusive method for measuring the temperature rise of optical elements under high laser power optical loading to determine the absorption coefficient. The method comprises irradiating the optical element with a high average power laser beam, viewing the optical element with an infrared camera to determine the temperature across the optical element and calculating the absorption of the optical element from the temperature.
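Under a lumped-capacitance assumption, the calculation implied by this method reduces to simple calorimetry: absorbed power is the heat capacity times the initial temperature slope, and the absorption coefficient is that power divided by the incident power. A minimal sketch (function and parameter names are ours, not the patent's):

```python
def absorption_from_temperature_rise(mass_kg, specific_heat, dT_dt, incident_power_w):
    """Estimate fractional absorption of an optic from its initial
    temperature slope under laser loading (lumped-capacitance model)."""
    absorbed_power = mass_kg * specific_heat * dT_dt  # P_abs = m * c * dT/dt, in W
    return absorbed_power / incident_power_w

# Example: a 50 g fused-silica optic (c ~ 740 J/kg/K) warming at 0.02 K/s
# under a 1 kW beam absorbs ~0.74 W, i.e. ~7.4e-4 fractional absorption.
print(absorption_from_temperature_rise(0.050, 740.0, 0.02, 1000.0))
```

This neglects radiative and conductive losses, which matter once the optic approaches thermal equilibrium; the method as described infers absorption from the early-time temperature rise, where those losses are smallest.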

An auto-ranging AC resistance measuring instrument for remote measurement of the resistance of an electrical device or circuit connected to the instrument includes a signal generator which generates an AC excitation signal for application to a load, including the device and the transmission line, a monitoring circuit which provides a digitally encoded signal representing the voltage across the load, and a microprocessor which operates under program control to provide an auto-ranging function by which range resistance is connected in circuit with the load to limit the load voltage to an acceptable range for the instrument, and an auto-compensating function by which compensating capacitance is connected in shunt with the range resistance to compensate for the effects of line capacitance. After the auto-ranging and auto-compensation functions are complete, the microprocessor calculates the resistance of the load from the selected range resistance, the excitation signal, and the load voltage signal, and displays the measured resistance on a digital display of the instrument.
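The final microprocessor step amounts to solving a series voltage divider for the load resistance. A sketch of that arithmetic, assuming a simple series range-resistor model and ignoring the capacitive compensation (names ours):

```python
def load_resistance(v_excitation, v_load, r_range):
    """Solve the series voltage divider
    V_load = V_exc * R_load / (R_range + R_load)  for R_load."""
    if v_excitation <= v_load:
        raise ValueError("load voltage must be below the excitation voltage")
    return r_range * v_load / (v_excitation - v_load)

# Example: 10 V excitation, 2 V measured across the load, 10 kOhm range
# resistor -> R_load = 10000 * 2 / 8 = 2500 Ohm.
print(load_resistance(10.0, 2.0, 10_000.0))  # 2500.0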

The neutrino mass plays an important role in particle physics, astrophysics and cosmology. In recent years the detection of neutrino flavour oscillations proved that neutrinos carry mass. However, oscillation experiments are only sensitive to the mass-squared differences of the mass eigenvalues. In contrast to cosmological observations and neutrino-less double beta decay (0v2{beta}) searches, single {beta}-decay experiments provide a direct, model-independent way to determine the absolute neutrino mass by measuring the energy spectrum of decay electrons at the endpoint region with high accuracy. Currently the best kinematic upper limits on the neutrino mass, 2.2 eV, have been set by two experiments in Mainz and Troitsk, using tritium as the beta emitter. The next-generation tritium {beta}-decay experiment KATRIN is currently under construction in Karlsruhe, Germany by an international collaboration. KATRIN intends to improve the sensitivity by one order of magnitude to 0.2 eV. The investigation of a second isotope ({sup 187}Re) is being pursued by the international MARE collaboration using micro-calorimeters to measure the beta spectrum. The technology needed to reach 0.2 eV sensitivity is still in the R and D phase. This paper reviews the present status of neutrino-mass measurements with cosmological data, 0v2{beta} decay and single {beta}-decay.

MEASUREMENT SENSITIVITY AND ACCURACY VERIFICATION FOR AN ANTENNA MEASUREMENT SYSTEM Newlyn Hui San Luis Obispo, CA 93407 ABSTRACT An antenna measurement system was developed to complement a new... an RF link budget is calculated to evaluate the performance of the antenna measurement system. Keywords

We report a mathematically rigorous technique which facilitates the optimization of various optical properties of electromagnetic fields. The technique exploits the linearity of electromagnetic fields along with the quadratic nature of their interaction with matter. In this manner we may decompose the respective fields into optical quadratic measure eigenmodes (QME). Key applications include the optimization of the size of a focused spot, the transmission through photonic devices, and the structured illumination of photonic and plasmonic structures. We verify the validity of the QME approach through a particular experimental realization where the size of a focused optical field is minimized using a superposition of Bessel beams.

A method of measurement of objects to determine object flaws, Poisson's ratio ({sigma}) and shear modulus ({mu}) is shown and described. First, the frequency for expected degenerate responses is determined for one or more input frequencies, and then splitting of degenerate resonant modes is observed to identify the presence of flaws in the object. Poisson's ratio and the shear modulus can be determined by identifying resonances dependent only on the shear modulus, and then using that shear modulus to find Poisson's ratio from other modes dependent on both the shear modulus and Poisson's ratio. 1 fig.

A procedure and tools for quantifying surface cleanliness are described. Cleanliness of a target surface is quantified by wiping a prescribed area of the surface with a flexible, bright white cloth swatch, preferably mounted on a special tool. The cloth picks up a substantial amount of any particulate surface contamination. The amount of contamination is determined by measuring the reflectivity loss of the cloth before and after wiping on the contaminated system and comparing that loss to a previous calibration with similar contamination. In the alternative, a visual comparison of the contaminated cloth to a contamination key provides an indication of the surface cleanliness.
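The quantification step described above is a proportional scaling of reflectivity loss against a calibration point. A minimal sketch of that arithmetic (function name and the linearity assumption are ours):

```python
def surface_contamination(r_before, r_after, calib_loss, calib_contamination):
    """Scale the measured reflectivity loss of the wipe cloth by a prior
    calibration point (loss observed for a known contamination amount).
    Assumes loss is roughly proportional to particulate loading."""
    loss = r_before - r_after
    return calib_contamination * loss / calib_loss

# Example: cloth reflectivity drops from 0.92 to 0.86; a calibration run
# showed a 0.03 loss for 1.0 mg of similar contaminant -> ~2.0 mg picked up.
print(surface_contamination(0.92, 0.86, 0.03, 1.0))  # ~2.0
```

The proportionality only holds for light loadings and for contamination similar to the calibration material, which is why the procedure specifies a previous calibration with similar contamination.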

Through the reactor oversight process (ROP), the U.S. Nuclear Regulatory Commission (NRC) monitors the performance of utilities licensed to operate nuclear power plants. The process is designed to assure public health and safety by providing reasonable assurance that licensees are meeting the cornerstones of safety and designated cross-cutting elements. The reactor inspection program, together with performance indicators (PIs) and enforcement activities, forms the basis for the NRC's risk-informed, performance-based regulatory framework. While human performance is a key component in the safe operation of nuclear power plants and is a designated cross-cutting element of the ROP, there is currently no direct inspection or performance indicator for assessing human performance. Rather, when human performance is identified as a substantive cross-cutting element in any 1 of 3 categories (resources, organizational, or personnel), it is then evaluated for common themes to determine whether follow-up actions are warranted. However, human performance varies from day to day, across activities of differing complexity, and across workgroups, contributing to the uncertainty in performance outcomes. While some of this variability may be random, much of it may be attributable to factors that are not currently assessed. There is a need to identify and assess aspects of human performance that relate to plant safety and to develop measures that can be used to successfully assure licensee performance and indicate when additional investigation may be required. This paper presents research that establishes a technical basis for developing human performance measures.
In particular, we discuss: 1) how historical data already gives some indication of connection between human performance and overall plant performance, 2) how industry led efforts to measure and model human performance and organizational factors could serve as a data source and basis for a framework, 3) how our use of modeling and simulation techniques could be used to develop and validate measures of human performance, and 4) what the possible outcomes are from this research as the modeling and simulation efforts generate results.

We develop a circuit theory that enables us to analyze quantum measurements on a two-level system and on a continuous-variable system on an equal footing. As a measurement scheme applicable to both systems, we discuss a swapping state measurement which exchanges quantum states between the system and the measuring apparatus before the apparatus meter is read out. This swapping state measurement has an advantage in gravitational-wave detection over contractive state measurement in that the postmeasurement state of the system can be set to a prescribed one, regardless of the outcome of the measurement.

As the International Atomic Energy Agency (IAEA) implements a State Level Approach to its safeguards verification responsibilities, a number of countries are beginning new nuclear power programs and building new nuclear fuel cycle facilities. The State Level Approach is holistic and investigatory in nature, creating a need for transparent, non-discriminatory judgments about a state's nonproliferation posture. In support of this need, the authors previously explored the value of defining and measuring a state's safeguards culture. We argued that a clear definition of safeguards culture and an accompanying set of metrics could be applied to provide an objective evaluation and demonstration of a country's nonproliferation posture. As part of this research, we outlined four high-level metrics that could be used to evaluate a state's nuclear posture, and identified general data points. This paper elaborates on those metrics, further refining the data points to generate a measurable scale of safeguards culture. We believe that this work could advance the IAEA's goals of implementing a safeguards system that is fully information driven, while strengthening confidence in its safeguards conclusions.

LLNL has an ongoing research and development project that includes developing data acquisition systems with remote wireless communication for monitoring the vibrations of large civil engineering structures. In order to establish the capability of performing remote sensing over an extended period of time, the researchers needed to apply this technology to a real structure. The construction of the National Ignition Facility provided an opportunity to test the data acquisition system on a large structure to monitor whether the facility is remaining within the strict ambient vibration guidelines. This document will briefly discuss the NIF ambient vibration requirements and summarize the vibration measurements performed during the Spring and Summer of 1999. In addition, a brief description of the sensors and the data acquisition systems will be provided in Appendix B.

A monolithic sensor includes a reference channel and at least one sensing channel. Each sensing channel has an oscillator and a counter driven by the oscillator. The reference channel and the at least one sensing channel being formed integrally with a substrate and intimately nested with one another on the substrate. Thus, the oscillator and the counter have matched component values and temperature coefficients. A frequency determining component of the sensing oscillator is formed integrally with the substrate and has an impedance parameter which varies with an environmental parameter to be measured by the sensor. A gating control is responsive to an output signal generated by the reference channel, for terminating counting in the at least one sensing channel at an output count, whereby the output count is indicative of the environmental parameter, and successive ones of the output counts are indicative of changes in the environmental parameter.
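The gating scheme described here is ratiometric: the sensing counter runs until the reference channel completes a fixed number of cycles, so the final count depends only on the frequency ratio, cancelling matched drifts. A behavioral sketch (names and example values ours):

```python
def gated_count(f_sense_hz, f_ref_hz, ref_cycles):
    """Model the gating control: counting in the sensing channel is
    terminated after `ref_cycles` reference-oscillator cycles, so the
    output count is proportional to f_sense / f_ref."""
    gate_time = ref_cycles / f_ref_hz          # seconds the gate stays open
    return round(f_sense_hz * gate_time)       # cycles accumulated by sensing counter

# An impedance shift that moves the sensing oscillator from 1.000 MHz to
# 1.002 MHz changes the count over a 10 ms gate (1000 cycles of a 100 kHz
# reference) from 10000 to 10020.
print(gated_count(1_000_000, 100_000, 1_000))  # 10000
print(gated_count(1_002_000, 100_000, 1_000))  # 10020
```

Because both channels are nested on the same substrate, a common temperature drift multiplies f_sense and f_ref equally and drops out of the count.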

In this paper, we propose a scheme to enhance trapping of entanglement of two qubits in the environment of a photonic band gap material. Our entanglement trapping promotion scheme makes use of combined weak measurements and quantum measurement reversals. The optimal promotion of entanglement trapping can be acquired with a reasonable finite success probability by adjusting measurement strengths.

We introduce an operational discord-type measure for quantifying nonclassical correlations in bipartite Gaussian states based on using Gaussian measurements. We refer to this measure as operational Gaussian discord (OGD). It is defined as the difference between the entropies of two conditional probability distributions associated to one subsystem, which are obtained by performing optimal local and joint Gaussian measurements. We demonstrate the operational significance of this measure in terms of a Gaussian quantum protocol for extracting maximal information about an encoded classical signal. As examples, we calculate OGD for several Gaussian states in the standard form.

The purpose of this presentation is to provide a brief introduction to measurement uncertainty analysis, outline how it is done, and illustrate uncertainty analysis with examples drawn from the PV field, with particular emphasis on its use in PV performance measurements. The uncertainty information we know and state concerning a PV performance measurement or a module test result determines, to a significant extent, the value and quality of that result. What is measurement uncertainty analysis? It is an outgrowth of what has commonly been called error analysis. But uncertainty analysis, a more recent development, gives greater insight into measurement processes and tests, experiments, or calibration results. Uncertainty analysis gives us an estimate of the interval about a measured value or an experiment's final result within which we believe the true value of that quantity will lie. Why should we take the time to perform an uncertainty analysis? A rigorous measurement uncertainty analysis: increases the credibility and value of research results; allows comparisons of results from different labs; helps improve experiment design and identifies where changes are needed to achieve stated objectives (through use of the pre-test analysis); plays a significant role in validating measurements and experimental results, and in demonstrating (through the post-test analysis) that valid data have been acquired; reduces the risk of making erroneous decisions; and demonstrates that quality assurance and quality control measures have been accomplished. Here, valid data are defined as data having known and documented paths of origin (including theory), measurements, traceability to measurement standards, computations, and uncertainty analysis of results.

One of the missing keys in the present understanding of the spin structure of the nucleon is the contribution from the gluons: the so-called gluon polarisation. This quantity can be determined in DIS through the photon-gluon fusion process, in which two analysis methods may be used: (i) identifying open charm events or (ii) selecting events with high p_T hadrons. The data used in the present work were collected in the COMPASS experiment, where a 160 GeV/c naturally polarised muon beam, impinging on a polarised nucleon fixed target is used. Preliminary results for the gluon polarisation from high p_T and open charm analyses are presented. The gluon polarisation result for high p_T hadrons is divided, for the first time, into three statistically independent measurements at LO. The result from open charm analysis is obtained at LO and NLO. In both analyses a new weighted method based on a neural network approach is used.

In the Aharonov-Albert-Vaidman (AAV) weak measurement, it is assumed that the measuring device or the pointer is in a quantum mechanical pure state. In reality, however, it is often not the case. In this paper, we generalize the AAV weak measurement scheme to include more generalized situations in which the measuring device is in a mixed state. We also report an optical implementation of the weak value measurement in which the incoherent pointer is realized with the pseudo-thermal light. The theoretical and experimental results show that the measuring device under the influence of partial decoherence could still be used for amplified detection of minute physical changes and are applicable for implementing the weak value measurement for massive particles.

We study the recently proposed "stationary measure" in the context of the string landscape scenario. We show that it suffers neither from the "Boltzmann brain" problem nor from the "youngness" paradox that makes some other measures predict a high CMB temperature at present. We also demonstrate a satisfactory performance of this measure in predicting the results of local experiments, such as proton decay.

The Cable Measuring Engine (CME) is a tool which measures and records the cable dimensions in a nondestructive fashion. It is used in-line with the superconductor cable as it is being made. The CME is intended to be used as a standard method of measuring cable by the various manufacturers involved in the cable process.

The measurement of uranium holdup, the residual material left in process equipment such as pipes or ducts, is an integral element of material control and accountability. Not only are the measurements important for accountability, they are also important for criticality safety. The goal in measuring holdup is to quantify the amount of material in the pipes to verify that all material is accounted for (inventory in - [inventory out + holdup] = 0) and to ensure that the amount of material held up is not a criticality risk. There are a number of ways to measure holdup in process equipment; however, this paper will evaluate only two methods (i.e., the Holdup Measurement System 4 (HMS-4) and the In Situ Object Counting System (ISOCS)) for specific measurement scenarios. The comparison will use measurements of well-known reference materials in various configurations and will examine the results, uncertainties, repeatability, time required, portability, and cost of each system.

An apparatus for measuring TPV cell efficiencies at different radiation intensities and for different graybody emitter temperatures has been constructed. The apparatus has been used for measuring V-I characteristics, efficiencies and fill factors for several InGaAs TPV cells. Measured results are used to determine how cells may function together with edge filters, and those results are compared with theory. {copyright} {ital 1997 American Institute of Physics.}

This paper reviews a few liquid measurement techniques and their associated problems. In measuring liquid petroleum gas, the first obstacle to overcome is accommodating some form of volumetric measurement. This is usually accomplished by orifice, positive displacement, or turbine meters. Each of the three established methods is covered extensively by industry standards in the API Manual of Petroleum Measurement Standards. If the operator follows these standards, very accurate results can be achieved.

Measurements involving top quarks provide important tests of QCD. A selected set of top quark measurements in CMS including the strong coupling constant, top quark pole mass, constraints on parton distribution functions, top quark pair differential cross sections, ttbar+0 and >0 jet events, top quark mass studied using various kinematic variables in different phase-space regions, and alternative top quark mass measurements is presented. The evolution of expected uncertainties in future LHC runs for the standard and alternative top quark mass measurements is also presented.

Phasor measurement units struggle to make acceptable estimates of frequency and rate of change of frequency. The most important cause of the problem is that the quantity being measured is not actually a phasor. The paper substitutes a different equation for the phasor equation and obtains its solution by curve-fitting.
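The curve-fitting idea can be illustrated by a simple grid-search least-squares fit of a sinusoid to the sampled waveform. This is our own illustrative construction, not the paper's equation: for each trial frequency, the in-phase and quadrature amplitudes are fit by linear least squares, and the frequency with the smallest residual wins.

```python
import numpy as np

def fit_frequency(samples, t, f_grid):
    """Estimate signal frequency by least-squares fitting
    a*cos(2*pi*f*t) + b*sin(2*pi*f*t) over a grid of trial frequencies."""
    best_f, best_res = None, np.inf
    for f in f_grid:
        M = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
        coef = np.linalg.lstsq(M, samples, rcond=None)[0]
        r = np.sum((samples - M @ coef)**2)   # residual at this trial frequency
        if r < best_res:
            best_f, best_res = f, r
    return best_f

# A 59.95 Hz tone sampled at 1 kHz for 0.2 s is recovered from a grid
# of trial frequencies around nominal 60 Hz.
t = np.arange(0, 0.2, 1e-3)
x = 1.5 * np.cos(2*np.pi*59.95*t + 0.3)
print(fit_frequency(x, t, np.arange(59.5, 60.5, 0.01)))  # ~59.95
```

Unlike a fixed-frequency phasor model, the fitted model remains valid when the actual frequency is off-nominal, which is the failure mode the paper identifies.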

It is known from path integral studies of the chiral anomaly that the fermion measure has to depend on gauge fields interacting with the fermion. It is argued here that in the presence of axion fields interacting with the fermion, they too may be involved in the measure, with unexpected consequences.

Weak measurement is increasingly acknowledged as an important theoretical and experimental tool. Until now however, it was not known how to perform an efficient weak non-local measurement of a general operator. We propose a novel scheme for performing non-local weak measurement which is based on the principle of quantum erasure. This method is then demonstrated within a few gedanken experiments, and also applied to the case of measuring sequential weak values. Comparison with other protocols for extracting non-local weak values offers several advantages of the suggested algorithm. In addition to the practical merits, this scheme sheds new light on fundamental topics such as causality, non-locality, measurement and uncertainty.

This paper considers what it means to make a measurement, and the changes in measurement technology over the years. The impact of the latest changes, which have resulted in most electrical measurements being done digitally, is explored. It is argued that the process of measurement can be considered equivalent to one of data compression. The smart grid will certainly result in many more signals being made available, and therefore a great deal of data compression will be taking place. Measurements will be made in parts of the power system presently unmonitored, as well as parts that are already well covered by instrumentation. The smart grid engineer must decide what it means to have useful information. Unless care is taken, the signal processing may furnish information that is not useful, and may not even make sense. The paper concludes by examining the possibilities of data compression from multiple separate signals.

The National Residential Efficiency Measures Database is a publicly available, centralized resource of residential building retrofit measures and costs for the U.S. building industry. With support from the U.S. Department of Energy, NREL developed this tool to help users determine the most cost-effective retrofit measures for improving energy efficiency of existing homes. Software developers who require residential retrofit performance and cost data for applications that evaluate residential efficiency measures are the primary audience for this database. In addition, home performance contractors and manufacturers of residential materials and equipment may find this information useful. The database offers the following types of retrofit measures: 1) Appliances, 2) Domestic Hot Water, 3) Enclosure, 4) Heating, Ventilating, and Air Conditioning (HVAC), 5) Lighting, 6) Miscellaneous.

The multiverse/landscape paradigm that has emerged from eternal inflation and string theory, describes a large-scale multiverse populated by "pocket universes" which come in a huge variety of different types, including different dimensionalities. In order to make predictions in the multiverse, we need a probability measure. In $(3+1)d$ landscapes, the scale factor cutoff measure has been previously shown to have a number of attractive properties. Here we consider possible generalizations of this measure to a transdimensional multiverse. We find that a straightforward extension of scale factor cutoff to the transdimensional case gives a measure that strongly disfavors large amounts of slow-roll inflation and predicts low values for the density parameter $\\Omega$, in conflict with observations. A suitable generalization, which retains all the good properties of the original measure, is the "volume factor" cutoff, which regularizes the infinite spacetime volume using cutoff surfaces of constant volume expansion factor.

A multi-channel spectrometer and a light source are used to measure both the emitted and the reflected light from a surface which is at an elevated temperature relative to its environment. In a first method, the temperature of the surface and emissivity in each wavelength is calculated from a knowledge of the spectrum and the measurement of the incident and reflected light. In the second method, the reflected light is measured from a reference surface having a known reflectivity and the same geometry as the surface of interest and the emitted and the reflected light are measured for the surface of interest. These measurements permit the computation of the emissivity in each channel of the spectrometer and the temperature of the surface of interest.
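The arithmetic behind the second method can be sketched per spectrometer channel: the reference surface gives the incident radiance, the opaque-surface relation emissivity = 1 - reflectivity gives epsilon, and inverting Planck's law on emitted/epsilon gives the temperature. This is our own illustration of that chain, not the patent's exact procedure (names and the unit-reflectivity reference are assumptions):

```python
import math

H, C, K = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(wl_m, T):
    """Blackbody spectral radiance at wavelength wl_m (W sr^-1 m^-3)."""
    return (2*H*C**2 / wl_m**5) / math.expm1(H*C / (wl_m*K*T))

def emissivity_and_temperature(wl_m, emitted, reflected, reference_reflected):
    """One channel: infer reflectivity against the known reference surface,
    take emissivity = 1 - reflectivity (opaque surface, Kirchhoff's law),
    then invert Planck's law on the blackbody-equivalent radiance."""
    eps = 1.0 - reflected / reference_reflected
    radiance = emitted / eps
    T = (H*C / (wl_m*K)) / math.log1p(2*H*C**2 / (wl_m**5 * radiance))
    return eps, T

# Round trip: a surface at 900 K with emissivity 0.7 viewed at 2 um, with a
# perfectly reflecting reference (reflectivity 1.0) returning the full probe.
wl = 2e-6
eps, T = emissivity_and_temperature(wl, 0.7 * planck(wl, 900.0), 0.3, 1.0)
print(eps, T)  # ~0.7, ~900 K
```

Repeating this per channel yields the spectral emissivity curve; consistency of T across channels is a useful sanity check on the opaque-surface assumption.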

Recent work has revealed that the wave function of a pure state can be measured directly and that complementary knowledge of a quantum system can be obtained simultaneously by weak measurements. However, the original scheme applies only to pure states, and it is not efficient because most of the data are discarded by post-selection. Here, we propose tomography schemes for pure states and for mixed states via weak measurements, and our schemes are more efficient because we do not discard any data. Furthermore, we demonstrate that any matrix element of a general state can be directly read from an appropriate weak measurement. The density matrix (with all of its elements) represents all that is directly accessible from a general measurement.

to studies of the ocean beneath sea ice. Although icebreakers can penetrate sea ice, they cannot measure sea... proposition. First, there is the issue of data coverage. For many purposes, the ocean interior cannot...

The Specimen Coordinate Automated Measuring Machine (SCAMM) and the Fiducial Automated Measuring Machine (FAMM) are computer-controlled metrology systems capable of measuring length, width, and thickness, and of locating fiducial marks. SCAMM and FAMM have many similarities in their designs, and they can be converted from one to the other without taking them out of the hot cell. Both have means for: supporting a plurality of samples and a standard; controlling the movement of the samples in the +/- X and Y directions; determining the coordinates of the sample; compensating for temperature effects; and verifying the accuracy of the measurements and repeating as necessary. SCAMM and FAMM are designed to be used in hot cells.

in the power system. A robust state estimation should have the capability of keeping the system observable during different contingencies, as well as detecting and identifying the gross errors in measurement set and network topology. However, this capability...

We study the problem of mapping an unknown mixed quantum state onto a known pure state without the use of unitary transformations. This is achieved with the help of sequential measurements of two noncommuting observables only. We show that the overall success probability is maximized in the case of measuring two observables whose eigenstates define mutually unbiased bases. We find that for this optimal case the success probability quickly converges to unity as the number of measurement processes increases and that it is almost independent of the initial state. In particular, we show that to guarantee a success probability close to one the number of consecutive measurements must be larger than the dimension of the Hilbert space. We connect these results to quantum copying, quantum deleting, and entanglement generation.

A brief description of the experimental tools available for fusion neutronics experiments is given. Attention is paid to error estimates mainly for the measurement of tritium breeding ratio in simulated blankets using various techniques.

A mine roof bolt and a method of measuring the strain in mine roof bolts of this type are disclosed. According to the method, a flat portion on the head of the mine roof bolt is first machined. Next, a hole is drilled radially through the bolt at a predetermined distance from the bolt head. After installation of the mine roof bolt and loading, the strain of the mine roof bolt is measured by generating an ultrasonic pulse at the flat portion. The time of travel of the ultrasonic pulse reflected from the hole is measured. This time of travel is a function of the distance from the flat portion to the hole and increases as the bolt is loaded. Consequently, the time measurement is correlated to the strain in the bolt. Compensation for various factors affecting the travel time is also provided.
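Since the reflected pulse's travel time grows monotonically with elongation, a first-order conversion from measured travel times to strain can be sketched as below. The function name and the calibration factor are hypothetical; in practice the factor would be fit against a load frame, since the sound speed itself also changes with stress (the acoustoelastic effect).

```python
def strain_from_travel_time(t_loaded_s, t_unloaded_s, cal_factor=1.0):
    """Estimate axial strain from the increase in ultrasonic round-trip
    time between the machined flat and the reference hole.

    cal_factor lumps together geometric elongation and the acoustoelastic
    change in sound speed; it is obtained by prior calibration.
    """
    return cal_factor * (t_loaded_s - t_unloaded_s) / t_unloaded_s
```

For example, a round-trip time that grows from 10.00 us to 10.05 us corresponds, with unit calibration factor, to a fractional strain of 0.5%.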

This study advanced knowledge of the measurement properties of the Family Leisure Activity Profile (FLAP). The FLAP is a sixteen-item index based on the Core and Balance Model of Family Functioning. This study assessed three distinct scaling...

We give a short overview of various beam emittance measurement methods currently applied at different machine locations for the Run II collider physics program at Fermilab. All these methods are based on beam profile measurements, and we give some examples of the related instrumentation techniques. At the end we introduce a multi-megawatt proton source project, currently under investigation at Fermilab, with respect to the beam instrumentation challenges.

The U.S. Department of Energy National Nuclear Security Administration's Aerial Measuring System deployed personnel and equipment to partner with the U.S. Air Force in Japan to conduct multiple aerial radiological surveys. These were the first and most comprehensive sources of actionable information for U.S. interests in Japan and provided early confirmation to the government of Japan as to the extent of the release from the Fukushima Daiichi Nuclear Power Generation Station. Many challenges were overcome quickly during the first 48 hours, including installation and operation of Aerial Measuring System equipment on multiple U.S. Air Force Japan aircraft, flying over difficult terrain, and flying with talented pilots who were unfamiliar with the Aerial Measuring System flight patterns. These all combined to make for a dynamic and non-textbook situation. In addition, the data challenges of the multiple and on-going releases, and integration with the Japanese government to provide valid aerial radiological survey products that both military and civilian customers could use to make informed decisions, were extremely complicated. The Aerial Measuring System Fukushima response provided insight in addressing these challenges and opened an opportunity for the expansion of the Aerial Measuring System's mission beyond the borders of the US.

by TrueWind Solutions, LLC, Albany, New York, for the California Energy Commission, Sacramento, California ... was developed by TrueWind Solutions, hereafter referred to as TrueWind, to guide Task 4 of the Wind Energy Resource Modeling and Measurement Project, contract number 500-03-006, with the California Energy Commission

We use Naimark's dilation theorem in order to characterize the joint measurability of two POVMs. Then, we analyze the joint measurability of two commutative POVMs $F_1$ and $F_2$ which are the smearings of two self-adjoint operators $A_1$ and $A_2$ respectively. We prove that the compatibility of $F_1$ and $F_2$ is connected to the existence of two compatible self-adjoint dilations $A_1^+$ and $A_2^+$ of $A_1$ and $A_2$ respectively. As a corollary we prove that each pair of self-adjoint operators can be dilated to a pair of compatible self-adjoint operators. Next, we analyze the joint measurability of the unsharp position and momentum observables and show that it provides a master example of the scheme we propose. Finally, we give a sufficient condition for the compatibility of two effects.

It is argued that Hawking radiation has indeed been measured and shown to possess a thermal spectrum, as predicted. This contention is based on three separate legs. The first is that the essential physics of the Hawking process for black holes can be modelled in other physical systems. The second is that white hole horizons are the time inverse of black hole horizons, and thus the physics of both is the same. The third is that the quantum emission, which is the Hawking process, is completely determined by measurements of the classical parameters of a linear physical system. The experiment conducted in 2010 fulfills all of these requirements, and is thus a true measurement of Hawking radiation.

Pulses to steer the time evolution of quantum systems can be designed with optimal control theory. In most cases it is the coherent processes that can be controlled and one optimizes the time evolution towards a target unitary process, sometimes also in the presence of non-controllable incoherent processes. Here we show how to extend the GRAPE algorithm in the case where the incoherent processes are controllable and the target time evolution is a non-unitary quantum channel. We perform a gradient search on a fidelity measure based on Choi matrices. We illustrate our algorithm by optimizing a phase qubit measurement pulse. We show how this technique can lead to large measurement contrast close to 99%. We also show, within the validity of our model, that this algorithm can produce short 1.4 ns pulses with 98.2% contrast.

In recent years, membrane-based technologies have attracted much attention thanks to their simplicity in reactor design. The concept proposed is to use a mixed ionic-electronic conducting membrane (MIEC) in CO2 reuse and ...

This article begins with a review of quantum measure spaces. Quantum forms and indefinite inner-product spaces are then discussed. The main part of the paper introduces a quantum integral and derives some of its properties. The quantum integral's form for simple functions is characterized and it is shown that the quantum integral generalizes the Lebesgue integral. A bounded, monotone convergence theorem for quantum integrals is obtained and it is shown that a Radon-Nikodym type theorem does not hold for quantum measures. As an example, a quantum-Lebesgue integral on the real line is considered.

The procedure for installing Superconducting Super Collider (SSC) dipoles in their respective cryostats involves aligning the average direction of their field with the vertical to an accuracy of 0.5 mrad. The equipment developed for carrying out these measurements is described, and the measurements performed on the first few prototype SSC magnets are presented. The field angle as a function of position in these 16.6 m long magnets is a characteristic of the individual magnet, with possible feedback information to its manufacturing procedure. A comparison of this vertical alignment characteristic with a magnetic field intensity (by NMR) characteristic for one of the prototypes is also presented.

Based on a sample of 225.3 million $J/\\psi$ events accumulated with the BESIII detector at the BEPCII, the decays $\\eta'\\to\\pi^{+}\\pi^{-}l^{+}l^{-}$ are studied via $J/\\psi\\to\\gamma\\eta'$. A clear $\\eta'$ signal is observed in the $\\pi^{+}\\pi^{-}e^{+}e^{-}$ mass spectrum, and the branching fraction is measured to be $B(\\eta'\\to\\pi^{+}\\pi^{-}e^{+}e^{-})=(2.11\\pm0.12(\\mathrm{stat})\\pm0.14(\\mathrm{syst}))\\times10^{-3}$, which is in good agreement with theoretical predictions and the previous measurement, but is determined with much higher precision. No $\\eta'$ signal is found in the $\\pi^{+}\\pi^{-}\\mu^{+}\\mu^{-}$ mass spectrum, and the upper limit is determined to be $B(\\eta'\\to\\pi^{+}\\pi^{-}\\mu^{+}\\mu^{-})<2.9\\times10^{-5}$ at the 90% confidence level.

A beta ray flux measuring device in an activated member in-core instrumentation system for pressurized water reactors. The device includes collector rings positioned about an axis in the reactor's pressure boundary. Activated members such as hydroballs are positioned within respective ones of the collector rings. A response characteristic such as the current from or charge on a collector ring indicates the beta ray flux from the corresponding hydroball and is therefore a measure of the relative nuclear power level in the region of the reactor core corresponding to the specific exposed hydroball within the collector ring.

Our first shock temperature measurements on a cryogenic target are reported for NH$_3$. A new fast optical pyrometer and a cryogenic specimen holder for liquid NH$_3$ were developed to measure shock temperatures of 4400 and 3600 K at pressures of 61 and 48 GPa. These conditions correspond to those in the ice layers in Uranus and Neptune. The shock temperature data are in reasonable agreement with an equation of state based on an intermolecular potential derived from NH$_3$ Hugoniot data.

A measuring system for measuring axial displacement of a tube relative to an axially stationary component in a rotating rotor assembly includes at least one displacement sensor adapted to be located normal to a longitudinal axis of the tube; an insulated cable system adapted for passage through the rotor assembly; a rotatable proximitor module located axially beyond the rotor assembly to which the cables are connected; and a telemetry system operatively connected to the proximitor module for sampling signals from the proximitor module and forwarding data to a ground station.

A method and apparatus for measuring fluid flow in a duct is disclosed. The invention uses a novel high-velocity tracer injector system, an optional insertable folding mixing fan for homogenizing the tracer within the duct bulk fluid flow, and a perforated hose sampling system. A preferred embodiment uses CO2 as a tracer gas for measuring air flow in commercial and/or residential ducts. In extant commercial buildings, ducts not readily accessible behind hanging ceilings may be drilled with readily plugged small-diameter holes to allow for injection, optional mixing where desired using the novel insertable foldable mixing fan, and sampling via the perforated hose.

A device for measuring the levitation force of a high temperature superconductor sample with respect to a reference magnet includes a receptacle for holding several high temperature superconductor samples each cooled to superconducting temperature. A rotatable carousel successively locates a selected one of the high temperature superconductor samples in registry with the reference magnet. Mechanism varies the distance between one of the high temperature superconductor samples and the reference magnet, and a sensor measures levitation force of the sample as a function of the distance between the reference magnet and the sample. A method is also disclosed.

The mathematically rigorous definition and construction of the amplitudes in superstring theory is still an open problem. Here, we describe some recent developments in the construction of the superstring measures in $g=3,4$ and we point out some aspects that are not yet clear.

The Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring now are possible. Learn more about the lab's project on our facebook site http://www.facebook.com/idahonationallaboratory.

A previously-developed experimental facility has been used to determine gas-surface thermal accommodation coefficients from the pressure dependence of the heat flux between parallel plates of similar material but different surface finish. Heat flux between the plates is inferred from measurements of temperature drop between the plate surface and an adjacent temperature-controlled water bath. Thermal accommodation measurements were determined from the pressure dependence of the heat flux for a fixed plate separation. Measurements of argon and nitrogen in contact with standard machined (lathed) or polished 304 stainless steel plates are indistinguishable within experimental uncertainty. Thus, the accommodation coefficient of 304 stainless steel with nitrogen and argon is estimated to be 0.80 {+-} 0.02 and 0.87 {+-} 0.02, respectively, independent of the surface roughness within the range likely to be encountered in engineering practice. Measurements of the accommodation of helium showed a slight variation with 304 stainless steel surface roughness: 0.36 {+-} 0.02 for a standard machine finish and 0.40 {+-} 0.02 for a polished finish. Planned tests with carbon-nanotube-coated plates will be performed when 304 stainless-steel blanks have been successfully coated.

SGP-TR-186, "Downhole Enthalpy Measurement in Geothermal Wells with Fiber Optics," Nilufer Atalay, June 2008. Financial support was provided through the Stanford Geothermal Program (Interdisciplinary Research in Engineering and Earth Sciences, Stanford University).

An embodiment of the invention is directed to a pulse measuring system that measures a characteristic of an input pulse under test, particularly the pulse shape of a single-shot, nano-second duration, high shape-contrast optical or electrical pulse. An exemplary system includes a multi-stage, passive pulse replicator, wherein each successive stage introduces a fixed time delay to the input pulse under test, a repetitively-gated electronic sampling apparatus that acquires the pulse train including an entire waveform of each replica pulse, a processor that temporally aligns the replicated pulses, and an averager that temporally averages the replicated pulses to generate the pulse shape of the pulse under test. An embodiment of the invention is directed to a method for measuring an optical or an electrical pulse shape. The method includes the steps of passively replicating the pulse under test with a known time delay, temporally stacking the pulses, and temporally averaging the stacked pulses. An embodiment of the invention is directed to a method for increasing the dynamic range of a pulse measurement by a repetitively-gated electronic sampling device having a rated dynamic range capability, beyond the rated dynamic range of the sampling device; e.g., enhancing the dynamic range of an oscilloscope. The embodied technique can improve the SNR from about 300:1 to 1000:1. A dynamic range enhancement of four to seven bits may be achieved.
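The align-and-average step described above, slicing the acquired trace into replica windows separated by the known fixed delay and averaging them, might look like the pure-Python sketch below. Names are illustrative; a real implementation would also resample to correct sub-sample alignment errors between replicas.

```python
def stack_replicas(trace, n_replicas, delay_samples):
    """Cut a sampled trace containing n_replicas copies of the pulse,
    each offset by delay_samples, into windows and average them.

    Averaging N independently noisy replicas of the same single-shot
    pulse improves SNR by roughly sqrt(N).
    """
    window = delay_samples
    avg = [0.0] * window
    for r in range(n_replicas):
        segment = trace[r * window:(r + 1) * window]
        for i, value in enumerate(segment):
            avg[i] += value / n_replicas
    return avg
```

With 16 replicas this would give roughly a factor-of-four noise reduction, consistent in spirit with the SNR improvement quoted above.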

MANDATORY MEASURES: OUTDOOR LIGHTING CONTROLS (Reference: Sub-Chapter 4, Section 130.2) ... level of each multi-tier garage. · General lighting must have occupant sensing controls with at least one control step between 20% and 50% of design lighting power. · No more than 500 watts of rated

Resources. Ric Gale of IPC and Paul Kjellander of the Idaho Office of Energy Resources updated the Council of energy efficiency and demand response measures for all customer sectors, Gale said. IPC sees Karier, Power Committee chair, said the first topic for the committee was an update by John Fazio

A phase measurement system is disclosed which measures the phase shift between two signals by dithering a clock signal and averaging a plurality of measurements of the phase differences between the two signals.
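A minimal simulation of the dithering idea: a counter quantizes the delay between the two signals to whole clock periods, but adding a uniform random dither before quantization makes the expected value of the quantized reading equal the true delay, so averaging many readings recovers resolution far below one clock period. The function name, the uniform-dither model, and the parameters are illustrative assumptions, not the patented circuit.

```python
import random

def measure_phase_dithered(true_delay, clock_period, n_meas, seed=0):
    """Average many clock-quantized delay readings with a random dither.

    Without dither the counter always reads the same truncated value;
    with a uniform dither in [0, clock_period) the counter reads the
    floor or the next integer in proportions whose average equals the
    true delay exactly in expectation.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_meas):
        dither = rng.uniform(0.0, clock_period)
        ticks = int((true_delay + dither) // clock_period)  # whole periods counted
        total += ticks * clock_period
    return total / n_meas
```

The residual error after averaging n measurements shrinks like the standard error of a Bernoulli mixture, roughly clock_period / (2 * sqrt(n)).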

Methods, systems and computer program products are disclosed for measuring a performance of a program running on a processing unit of a processing system. In one embodiment, the method comprises informing a logic unit of each instruction in the program that is executed by the processing unit, assigning a weight to each instruction, assigning the instructions to a plurality of groups, and analyzing the plurality of groups to measure one or more metrics. In one embodiment, each instruction includes an operating code portion, and the assigning includes assigning the instructions to the groups based on the operating code portions of the instructions. In an embodiment, each type of instruction is assigned to a respective one of the plurality of groups. These groups may be combined into a plurality of sets of the groups.
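The weight-and-group step can be sketched as follows, assuming the logic unit reports a stream of opcodes and that per-instruction weights (for example, a per-class cycle cost) are supplied externally. Names and data shapes are hypothetical.

```python
from collections import defaultdict

def group_and_score(opcodes, weights):
    """Assign each executed instruction to a group keyed by its opcode
    and accumulate a raw count and a weighted count per group.

    Unknown opcodes default to weight 1.0; the per-group totals are the
    metrics that downstream analysis would combine into sets of groups.
    """
    groups = defaultdict(lambda: {"count": 0, "weighted": 0.0})
    for op in opcodes:
        w = weights.get(op, 1.0)
        groups[op]["count"] += 1
        groups[op]["weighted"] += w
    return dict(groups)
```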

Crystalline silicon has been proposed as a new test mass material in third-generation gravitational wave detectors such as the Einstein Telescope (ET). Birefringence can reduce the interferometric contrast and can produce dynamical disturbances in interferometers. In this work we use the method of polarisation-dependent resonance frequency analysis of Fabry-Perot cavities containing silicon as a birefringent medium. Our measurements show a birefringence of silicon along the (111) axis of the order of $\\Delta\\, n \\approx 10^{-7}$ at a laser wavelength of 1550 nm and room temperature. A model is presented that explains the results of different settings of our measurements as a superposition of elastic strains caused by external stresses in the sample and plastic strains possibly generated during the production process. An application of our theory to the proposed ET test mass geometry suggests no critical effect on birefringence due to elastic strains.

, in the cardiotoxic effects of doxorubicin chemotherapy for the treatment of acute lymphoblastic leukemia in childhood (Lipsitz et al., 2002; Fitzmaurice et al., 2003), the design points are not pre-defined but determined by the preceding response. This outcome-dependent feature of the measurements biases estimation of the regression line. As noted by Lipsitz et al. (2002) and Fitzmaurice et al. (2003), even the least-squares estimates will be biased, which does not require the distributional assumption of response error...

Properties of sharp observables (normalized PV measures) in relation to smearing by a Markov kernel are studied. It is shown that for a sharp observable $P$ defined on a standard Borel space, and an arbitrary observable $M$, the following properties are equivalent: (a) the range of $P$ is contained in the range of $M$; (b) $P$ is a function of $M$; (c) $P$ is a smearing of $M$.

We explore the phenomenological implications of generalizing measures to a multidimensional multiverse. We consider a simple model in which the vacua are nucleated from a $D$-dimensional parent spacetime through dynamical compactification of the extra dimensions, and compute the geometric contribution to the probability distribution of observations within the multiverse for each measure. We then study how the shape of this probability distribution depends on the timescales for the existence of observers, for vacuum domination, and for curvature domination ($t_{obs}, t_{\\Lambda},$ and $t_c$, respectively.) In this work we restrict ourselves to bubbles with positive cosmological constant, $\\Lambda$. In the case of the causal patch cutoff, when the bubble universes have $p+1$ large spatial dimensions with $p \\geq 2$, the shape of the probability distribution is such that we obtain the coincidence of timescales $t_{obs} \\sim t_{\\Lambda} \\sim t_c$. Moreover, the size of the cosmological constant is related to the size of the landscape. However, the exact shape of the probability distribution is different in the case $p = 2$, compared to $p \\geq 3$. In the case of the fat geodesic measure, the result is even more robust: the shape of the probability distribution is the same for all $p \\geq 2$, and we once again obtain the coincidence $t_{obs} \\sim t_{\\Lambda} \\sim t_c$. These results require only very mild conditions on the prior probability of the distribution of vacua in the landscape. Our work shows that the observed double coincidence of timescales is a robust prediction even when the multiverse is generalized to be multidimensional; that this coincidence is not a consequence of our particular universe being (3+1)-dimensional; and that this observable cannot be used to preferentially select one measure over another in a multidimensional multiverse.

Studies in the forward region of charged particle multiplicity and density, as well as energy flow, are presented. These measurements are performed using data from proton-proton collisions at a center-of-mass energy of 7 TeV, collected with the LHCb detector. The results are compared to predictions from a variety of Monte Carlo event generators and are used to test underlying event and hadronization models as well as the performance of event generator tunes in the forward region.

The Karlsruhe multi-detector set-ups KASCADE, KASCADE-Grande, and LOPES aim at measurements of cosmic rays in the energy range of the so-called knee between 10^14 eV and 10^18 eV. The multidimensional analysis of the air shower data measured by KASCADE indicates a distinct knee in the energy spectra of light primary cosmic rays and an increasing dominance of heavy ones towards higher energies. This provides, together with the results of large-scale anisotropy studies, implications for discriminating astrophysical models of the origin of the knee. To improve the reconstruction quality and statistics at higher energies, where the knee of the heavy primaries is expected at around 100 PeV, KASCADE has been extended by a factor of 10 in area to the new experiment KASCADE-Grande. LOPES is located on site of the KASCADE-Grande experiment. It measures radio pulses from extensive air showers with the goal of establishing this renewed detection technique for future large-scale experiments.

We study the thermophoretic motion of a micron-sized single colloidal particle in front of a flat wall by evanescent light scattering. To quantify thermophoretic effects we analyse the nonequilibrium steady state (NESS) of the particle in a constant temperature gradient perpendicular to the confining walls. We propose to determine thermophoretic forces from a 'generalized potential' associated with the probability distribution of the particle position in the NESS. Experimentally we demonstrate how this spatial probability distribution is measured and how thermophoretic forces can be extracted with 10 fN resolution. By varying the temperature gradient and ambient temperature, the temperature dependence of the Soret coefficient $S_T(T)$ is determined for $r = 2.5 \\mu m$ polystyrene and $r = 1.35 \\mu m$ melamine particles. The functional form of $S_T(T)$ is in good agreement with findings for smaller colloids. In addition, we measure and discuss hydrodynamic effects in the confined geometry. The theoretical and experimental technique proposed here extends thermophoresis measurements to so far inaccessible particle sizes and particle-solvent combinations.

Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
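The statistical-sampling idea, tracing only a small random subset of processes so that trace volume scales with the sample rather than with the machine, can be illustrated with a short sketch. The function name and parameters are hypothetical, not Libra's API.

```python
import random

def choose_traced_ranks(n_ranks, sample_fraction, seed=0):
    """Select a small random subset of process ranks to emit detailed
    traces. The fixed seed makes the selection reproducible across runs
    so the same ranks can be compared between experiments."""
    k = max(1, int(n_ranks * sample_fraction))
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_ranks), k))
```

On a 100,000-rank job, a 0.1% sample keeps only 100 trace streams, a data-volume reduction of three orders of magnitude before any wavelet compression is applied.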

A method is described for non-destructively assaying the radionuclide content of solid waste in a sealed container by analysis of the waste's gamma-ray spectrum and neutron emissions. Some radionuclides are measured by characteristic photopeaks in the gamma-ray spectrum; transuranic nuclides are measured by neutron emission rate; other radionuclides are measured by correlation with those already measured.

surrounded by a time-of-flight scintillation system and an electromagnetic shower detector consisting of 7800 thallium-doped cesium iodide crystals. The tracking system, time-of-flight scintillators, and calorimeter are installed inside a 1.5-T... at least seven interaction lengths of iron, and have $|\\cos\\theta| < 0.85$. Tight track quality cuts are demanded to reduce the contamination from kaon and pion decays in flight. Once a track is found, the muon identification efficiency, measured with radiative...

A procedure and tools for quantifying surface cleanliness are described. Cleanliness of a target surface is quantified by wiping a prescribed area of the surface with a flexible, bright white cloth swatch, preferably mounted on a special tool. The cloth picks up a substantial amount of any particulate surface contamination. The amount of contamination is determined by measuring the reflectivity loss of the cloth before and after wiping on the contaminated system and comparing that loss to a previous calibration with similar contamination. In the alternative, a visual comparison of the contaminated cloth to a contamination key provides an indication of the surface cleanliness.
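The comparison step reduces to simple arithmetic: the cloth's reflectivity loss, divided by a calibration factor measured beforehand with a similar contaminant, yields a contamination estimate. A minimal sketch with hypothetical names and units:

```python
def cleanliness_index(r_before, r_after, cal_loss_per_unit):
    """Convert the wipe cloth's reflectivity loss into a contamination
    estimate, in whatever areal units the calibration used.

    cal_loss_per_unit is the reflectivity loss per unit of contamination,
    determined by prior calibration with a similar contaminant.
    """
    loss = r_before - r_after
    return loss / cal_loss_per_unit
```

For example, a reflectivity drop from 0.92 to 0.80 with a calibrated loss of 0.02 per unit corresponds to 6 contamination units over the wiped area.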

The present invention relates to a method for measuring a surface temperature using a fluorescent temperature sensor or optical thermometer. The sensor includes a solution of 1,3-bis(1-pyrenyl)propane within a 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide ionic liquid solvent. The 1,3-bis(1-pyrenyl)propane remains unassociated when in the ground state while in solution. When subjected to UV light, an excited state is produced that exists in equilibrium with an excimer. The position of the equilibrium between the two excited states is temperature dependent.

A system for determining the thicknesses of thin films of materials exhibiting fluorescence in response to exposure to excitation energy from a suitable source of such energy. A section of film is illuminated with a fixed level of excitation energy from a source such as an argon ion laser emitting blue-green light. The amount of fluorescent light produced by the film over a limited area within the section so illuminated is then measured using a detector such as a photomultiplier tube. Since the amount of fluorescent light produced is a function of the thicknesses of thin films, the thickness of a specific film can be determined by comparing the intensity of fluorescent light produced by this film with the intensity of light produced by similar films of known thicknesses in response to the same amount of excitation energy. The preferred embodiment of the invention uses fiber optic probes in measuring the thicknesses of oil films on the operational components of machinery which are ordinarily obscured from view.
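The thickness lookup amounts to interpolating the measured fluorescence intensity on a calibration curve built from films of known thickness under the same excitation. A sketch, assuming intensity increases monotonically with thickness over the calibrated range; names and the linear-interpolation choice are illustrative.

```python
def thickness_from_fluorescence(intensity, cal_points):
    """Linearly interpolate a measured fluorescence intensity on a
    calibration curve of (intensity, thickness) pairs measured on films
    of known thickness at the same fixed excitation level."""
    pts = sorted(cal_points)  # sort by intensity
    for (i0, t0), (i1, t1) in zip(pts, pts[1:]):
        if i0 <= intensity <= i1:
            frac = (intensity - i0) / (i1 - i0)
            return t0 + frac * (t1 - t0)
    raise ValueError("intensity outside calibration range")
```

Readings outside the calibrated bracket raise an error rather than extrapolating, since fluorescence yield need not stay linear beyond the measured films.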

This report is a summary of the water performance analysis for the Denver, Colorado Wynkoop Building. The Wynkoop Building (Figure 1) was built in 2006 as the Environmental Protection Agency (EPA) Region 8 Headquarters, intended to house over 900 occupants in the 301,292 gross square feet (248,849 rentable square feet). The building was built on a brownfield in the Lower Downtown Historic District as part of an urban redevelopment effort. The building was designed and constructed through a public-private partnership, with the sustainable design elements developed jointly by the General Services Administration (GSA) and EPA. That partnership is still active with all parties still engaged to optimize building operations and use the building as a Learning Laboratory. The building design achieved U.S. Green Building Council Leadership in Energy and Environmental Design for New Construction (LEED-NC) Gold Certification in 2008 (Figure 2) and a 2008 EPA Energy Star Rating of 96, with design highlights that include: (1) Water use was designed to be 40% less than a typical design baseline. The design included low-flow fixtures, waterless urinals and dual-flush toilets; (2) Native and adaptive vegetation were selected to minimize the need for irrigation water for landscaping; and (3) Energy use intensity was modeled at 66.1 kBtus/gross square foot, which is 39% better than ASHRAE 90.1 1999. The Wynkoop Building water use (10 gallons/square foot) was measured to be lower than the industry average (15 gallons/square foot) and GSA goals (13 gallons/square foot); however, it was higher than building management expected it would be. The type of occupants and number of occupants can have a significant impact on fixture water use. The occupancy per floor varied significantly over the study time period, which added uncertainty to the data analysis.
Investigation of the fixture use on the 2nd, 5th, and 7th floors identified potential for water-use reduction if the flush direction of the dual-flush toilet handles was reversed. The building management retrofitted the building's toilets with handles that operated on reduced flush when pushed down (0.8 gallons) and full flush when pulled up (1.1 gallons). The water pressure on the 5th floor (<30 psi) is less than half the pressure on the 7th floor (>80 psi). The measured water savings post-retrofit was lower on the 5th floor than on the 7th floor; the difference in water pressure may have affected the quantity of water used per floor. The second-floor water use was examined prior to and following the toilet fixture retrofit. This floor is where conference rooms for non-building occupants are available for use, so occupancy is highly variable. The 3-day average volume per flush event was higher post-retrofit (0.79 gallons per event) than pre-retrofit (0.57 gallons per event). There were 40% more flush events post-retrofit, which affected the findings. Water use in the third-floor fitness center was also measured for a limited number of days. Because of water-line accessibility, only water use on the men's side of the fitness center was measured, and from that the total fitness center water use was estimated. Based on the limited data collected, the fitness center shower water use is approximately 2% of the whole-building water use. Overall water use in the Wynkoop Building is below the industry baseline and GSA expectations. The dual-flush fixture replacement appears to have produced additional water savings that should be reflected in the total annual water use.

Angular particle correlations are a powerful tool to study collective effects and in-medium jet modification, as well as their interplay, in the hot and dense medium produced in central heavy-ion collisions. We present measurements of two-particle angular correlations of inclusive charged and identified particles performed with the ALICE detector. The near-side peak in the short-range correlation region is quantitatively analyzed: while the rms of the peak in the $\phi$-direction is independent of centrality within uncertainties, we find a significant broadening in the $\eta$-direction from peripheral to central collisions. The particle content of the near-side peak is studied, finding that the $p/\pi$ ratio of particles associated to a trigger particle is much smaller than that in the bulk of the particles and is consistent with fragmentation of a parton in vacuum.

In this paper, we investigate the use of confidence measures for the evaluation of pronunciation models and the employment of these evaluations in an automatic baseform learning process. The confidence measures and ...

Measurement of the acoustic input immittance of the human ear (William M. Rabinowitz). ... For immittance measurements, a closed sound system was calibrated using known acoustic immittances as "loads" ...

Appendix C, Qualification of Artifacts and Measurement System, is intended for the calibration of PDL measurement equipment, and is not intended for the simulation of PDL ...

A method for measuring the texture of metal plates or sheets using non-destructive ultrasonic investigation includes measuring the velocity of ultrasonic energy waves in lower-order plate modes in one or more directions, and measuring the phase velocity dispersion of higher-order modes of the plate or sheet if needed. Texture, or preferred grain orientation, can be derived from these measurements with improved reliability and accuracy. The method can be utilized in production on moving metal plate or sheet. 9 figures.

Six individual ice cores were collected from the Barrow Environmental Observatory in Barrow, Alaska, in May 2013 as part of the Next Generation Ecosystem Experiment (NGEE). Each core was drilled from a different location and at a different depth. A few days after drilling, the cores were stored in coolers packed with dry ice and flown to Lawrence Berkeley National Laboratory (LBNL) in Berkeley, CA. Three-dimensional images of the cores were constructed using a medical X-ray computed tomography (CT) scanner at 120 kV. Hydraulic conductivity samples were extracted from these cores at the LBNL Richmond Field Station in Richmond, CA, in February 2014 by cutting 5- to 8-inch segments using a chop saw. Samples were packed individually and stored at freezing temperatures to minimize any changes in structure or loss of ice content prior to analysis. Hydraulic conductivity was determined through falling-head tests using a permeameter (ELE International, Model K-770B). After approximately 12 hours of thaw, initial falling-head tests were performed. Two to four measurements were collected on each sample, and collection stopped when the applied head load exceeded a 25% change from the original load. Analyses were performed two to three times for each sample. The final hydraulic conductivity calculations were computed using the methodology of Das et al. (1985).
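The falling-head permeameter analysis described above reduces, in its standard form, to a single formula, k = (aL / (At)) ln(h1/h2). A minimal sketch with illustrative numbers only, not values from the NGEE dataset:

```python
import math

def falling_head_k(a_cm2, A_cm2, L_cm, t_s, h1_cm, h2_cm):
    """Hydraulic conductivity (cm/s) from a falling-head test:
    k = (a * L / (A * t)) * ln(h1 / h2), where a is the standpipe
    cross-section, A the sample cross-section, L the sample length,
    t the elapsed time, and h1/h2 the initial/final head levels."""
    return (a_cm2 * L_cm) / (A_cm2 * t_s) * math.log(h1_cm / h2_cm)

# Illustrative values only:
print(falling_head_k(a_cm2=1.0, A_cm2=80.0, L_cm=15.0, t_s=600.0,
                     h1_cm=100.0, h2_cm=60.0))  # ~1.6e-4 cm/s
```

Repeated head readings over time give multiple k estimates per sample, which is consistent with the two-to-four measurements per sample described above.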

The dynamics of a system, made of a particle interacting with a field mode, thwarted by the action of repeated projective measurements on the particle, is examined. The effect of the partial measurements is discussed by comparing it with the dynamics in the absence of the measurements.

Cross-calibrations of charge diagnostics are conducted to verify their validity for measuring electron beams produced by laser plasma accelerators (LPAs). The employed diagnostics are a scintillating screen, an activation-based measurement, and an integrating current transformer. The diagnostics agreed within ±8%, showing that they can provide accurate charge measurements for LPAs provided they are used properly.

Measurements of top quark properties performed at the Large Hadron Collider are reviewed, with a particular emphasis on top-pair charge asymmetries, spin correlations and polarization measurements performed by the ATLAS and CMS collaborations. The measurements are generally in good agreement with predictions from next-to-leading-order QCD calculations, and no deviations from Standard Model expectations have been seen.

SGP-TR-169: Constant-Pressure Measurement of Steam-Water Relative Permeability (Peter A. O...). ... by measuring in-situ steam saturation more directly. Mobile steam mass fraction was established by separate steam and water inlets or by correlating with previous results. The measured steam-water relative ...

The notion of incompatibility of measurements in quantum theory is in stark contrast with the corresponding classical perspective, where all physical observables are jointly measurable. It is of interest to examine whether the results of two or more measurements in the quantum scenario can be perceived from a classical point of view, or whether they still exhibit non-classical features. Clearly, commuting observables can be measured jointly using projective measurements, and their statistical outcomes can be discerned classically. However, such a simple-minded association of compatibility of measurements with commutativity turns out to be limited in an extended framework, where the usual notion of sharp projective-valued measurements of self-adjoint observables is broadened to include unsharp measurements of generalized observables constituting positive operator valued measures (POVMs). There has recently been a surge of research activity towards gaining new physical insights into the emergence of classical behavior via joint measurability of unsharp observables. Here, we explore the entropic uncertainty relation for a pair of discrete observables (of Alice's system) when an entangled quantum memory of Bob is restricted to record outcomes of jointly measurable POVMs only. Within the joint measurability regime, the sum of entropies associated with Alice's measurement outcomes, conditioned by the results registered at Bob's end, is constrained to obey an entropic steering inequality. In this case, Bob's non-steerability reflects itself as his inability to predict the outcomes of Alice's pair of non-commuting observables with better precision, even when they share an entangled state. As a further consequence, the quantum advantage envisaged for the construction of security proofs in key distribution is lost when Bob's measurements are restricted to the joint measurability regime.

We consider the problem of lossy source coding with a mismatched distortion measure. That is, we investigate what distortion guarantees can be made with respect to a distortion measure $\tilde{\rho}$ for a source code designed such that it achieves distortion less than $D$ with respect to a distortion measure $\rho$. We find a single-letter characterization of this mismatch distortion and study properties of this quantity. These results give insight into the robustness of lossy source coding with respect to modeling errors in the distortion measure. They also provide guidelines on how to choose a good tractable approximation of an intractable distortion measure.

The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum of the accessible information over quantum ensembles, a quantity that depends on the measurement. We present some examples where the symmetry of the measurement allows its informational power to be derived analytically.

Time measurement plays a crucial role in particle identification in high-energy physics experiments. As physics goals are upgraded and electronics advance, modern time-measurement systems must meet requirements of excellent resolution as well as high integration. Thanks to the Field-Programmable Gate Array (FPGA), the FPGA time-to-digital converter (TDC) has become one of the mature and prominent time-measurement methods in recent years. To correct the time-walk effect caused by leading-edge timing, a time-over-threshold (TOT) measurement should be added to the FPGA TDC. TOT can be obtained by measuring the time interval between the signal's leading and trailing edges. Unfortunately, a traditional TDC can recognize only one kind of signal edge, either the leading or the trailing. Generally, to measure the interval, two TDC channels are used at the same time, one for the leading edge and the other for the trailing edge. However, this method increases the amount of FPGA resources used and unavoidably reduces the TDC's integration. This paper presents a method of TOT measurement implemented in a Xilinx Virtex-5 FPGA, in which TOT can be measured in a single TDC input channel, so that both resource consumption and time resolution can be guaranteed. Tests show that this TDC achieves a resolution better than 15 ps for leading-edge measurement and 37 ps for TOT measurement. Furthermore, the TDC's measurement dead time is about 2 clock cycles, which makes it well suited to applications with higher physics event rates.
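At the data level, the single-channel TOT idea comes down to differencing the leading- and trailing-edge timestamps produced by the TDC; a time-walk correction can then use the TOT value. The sketch below illustrates that arithmetic in plain Python; the 1/TOT walk parameterization and its coefficient are illustrative assumptions, not the paper's calibration:

```python
def tot_and_corrected_time(t_lead_ps, t_trail_ps, walk_coeff=0.0):
    """Time-over-threshold from one channel's edge timestamps.

    TOT = trailing-edge time minus leading-edge time. An optional
    time-walk correction of the assumed form a/TOT (a is a per-channel
    calibration constant) is subtracted from the leading-edge time.
    """
    tot = t_trail_ps - t_lead_ps
    corrected = t_lead_ps - walk_coeff / tot if tot > 0 else t_lead_ps
    return tot, corrected

tot, t0 = tot_and_corrected_time(1000.0, 1500.0)
print(tot, t0)  # 500.0 1000.0
```

In the FPGA itself both edges are time-stamped by the same channel's delay line; the subtraction and correction are cheap downstream operations.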

Casting customers continue to demand tighter dimensional tolerances for casting features. The foundry in turn places demands on the patternshop to produce more accurate patterns. Control of all sources of dimensional variability, including measurement-system variability in the foundry and patternshop, is important to ensure casting accuracy. Sources of dimensional casting errors are reviewed, focusing on the importance of accurate patterns. The foundry and patternshop together must work within the tolerance limits established by the customer. In light of contemporary pattern tolerances, the patternshop must review its current measurement methods. The measurement instrument must have sufficient resolution to detect part variability. In addition, the measurement equipment must be used consistently by all patternmakers to ensure adequacy of the measurement system. Without these precautions, measurement error can contribute significantly to overall pattern variability. Simple, robust methods to check the adequacy of pattern measurement systems are presented. These tests determine the variability contributed by the measurement equipment and by the operators. Steps to control measurement variability once it has been identified are also provided. Measurement-system errors for various types of measurement equipment are compared to the allowable pattern tolerances, which are established jointly by the foundry and patternshop.

Microwave measurement and tuning of accelerator structures are important issues for the current and next generation of high-energy physics machines. Application of these measurements both before and after high-power processing can reveal information about the structure, but the results may be misinterpreted if measurement conditions are not carefully controlled. For this reason, extensive studies to characterize the microwave measurements have been made at SLAC. For the beadpull, a reproducible measurement with less than 1 degree of total phase drift is needed in order to resolve issues such as phase changes due to structure damage during high-power testing. Factors contributing to measurement errors include temperature drift, mechanical vibration, and limitations of measurement equipment such as the network analyzer. Results of this continuing effort will be presented.

MINOS is a long-baseline neutrino oscillation experiment. It consists of two large steel-scintillator tracking calorimeters. The near detector is situated at Fermilab, close to the production point of the NuMI muon-neutrino beam. The far detector is 735 km away, 716 m underground in the Soudan mine in northern Minnesota. The primary purpose of the MINOS experiment is to make precise measurements of the 'atmospheric' neutrino oscillation parameters ($\Delta m^2_{\mathrm{atm}}$ and $\sin^2 2\theta_{\mathrm{atm}}$). The oscillation signal consists of an energy-dependent deficit of $\nu_\mu$ interactions in the far detector. The near detector is used to characterize the properties of the beam before oscillations develop. The two-detector design allows many potential sources of systematic error in the far detector to be mitigated by the near detector observations. This thesis describes the details of the $\nu_\mu$-disappearance analysis, and presents a new technique to estimate the hadronic energy of neutrino interactions. This estimator achieves a significant improvement in the energy resolution of the neutrino spectrum, and in the sensitivity of the neutrino oscillation fit. The systematic uncertainty on the hadronic energy scale was re-evaluated and found to be comparable to that of the energy estimator previously in use. The best-fit oscillation parameters of the $\nu_\mu$-disappearance analysis incorporating this new estimator were: $\Delta m^2 = 2.32^{+0.12}_{-0.08} \times 10^{-3}$ eV$^2$, $\sin^2 2\theta > 0.90$ (90% C.L.). A similar analysis, using data from a period of running in which the NuMI beam was operated in a configuration producing a predominantly $\bar{\nu}_\mu$ beam, yielded somewhat different best-fit parameters: $\Delta \bar{m}^2 = (3.36^{+0.46}_{-0.40}\,(\mathrm{stat.}) \pm 0.06\,(\mathrm{syst.})) \times 10^{-3}$ eV$^2$, $\sin^2 2\bar{\theta} = 0.86^{+0.11}_{-0.12}\,(\mathrm{stat.}) \pm 0.01\,(\mathrm{syst.})$.
The tension between these results is intriguing, and additional antineutrino data are currently being taken in order to investigate this apparent discrepancy further.

A radiation beam calorimetric power measurement system for measuring the average power of a beam such as a laser beam, including a calorimeter configured to operate over a wide range of coolant flow rates and being cooled by continuously flowing coolant for absorbing light from a laser beam to convert the laser beam energy into heat. The system further includes a flow meter for measuring the coolant flow in the calorimeter and a pair of thermistors for measuring the temperature difference between the coolant inputs and outputs to the calorimeter. The system also includes a microprocessor for processing the measured coolant flow rate and the measured temperature difference to determine the average power of the laser beam.
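Flow calorimetry of this kind reduces to P = mdot x c_p x dT, with the mass flow rate obtained from the flow meter and dT from the thermistor pair. A hedged sketch assuming water coolant; the density and specific-heat defaults are standard water values, not system specifics:

```python
def beam_power_watts(flow_lpm, delta_t_k,
                     density_kg_per_l=0.998, specific_heat_j=4186.0):
    """Average absorbed beam power P = m_dot * c_p * dT.

    flow_lpm: coolant flow in liters/minute (from the flow meter);
    delta_t_k: outlet-minus-inlet temperature difference in kelvin
    (from the thermistor pair). Defaults assume water near 20 C.
    """
    m_dot = flow_lpm / 60.0 * density_kg_per_l  # mass flow, kg/s
    return m_dot * specific_heat_j * delta_t_k

print(beam_power_watts(flow_lpm=2.0, delta_t_k=1.5))  # ~209 W
```

This is exactly the computation the abstract assigns to the microprocessor: combine the measured flow rate and temperature difference into an average power.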

Radar systems use time delay measurements between a transmitted signal and its echo to calculate range to a target. Ranges that change with time cause a Doppler offset in phase and frequency of the echo. Consequently, the closing velocity between target and radar can be measured by measuring the Doppler offset of the echo. The closing velocity is also known as radial velocity, or line-of-sight velocity. Doppler frequency is measured in a pulse-Doppler radar as a linear phase shift over a set of radar pulses during some Coherent Processing Interval (CPI). An Interferometric Moving Target Indicator (MTI) radar can be used to measure the tangential velocity component of a moving target. Multiple baselines, along with the conventional radial velocity measurement, allow estimating the true 3-D velocity of a target.
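The radial-velocity arithmetic described above is simple enough to state directly: the Doppler shift appears as a pulse-to-pulse phase slope over the CPI, and closing velocity follows from f_d = 2 v_r / lambda. A minimal illustration; the X-band wavelength is an assumed example value:

```python
import math

def doppler_from_phase_slope(delta_phi_rad, pri_s):
    """Doppler frequency (Hz) from the linear phase shift per pulse:
    f_d = (phase change per pulse repetition interval) / (2 * pi * PRI)."""
    return delta_phi_rad / (2.0 * math.pi * pri_s)

def closing_velocity(doppler_hz, wavelength_m):
    """Radial (closing) velocity from f_d = 2 * v_r / lambda."""
    return doppler_hz * wavelength_m / 2.0

# Assumed X-band example: 3 cm wavelength, 2 kHz Doppler shift.
print(closing_velocity(2000.0, 0.03))  # 30.0 m/s
```

The factor of 2 reflects the two-way path: the echo's wavelength is compressed on both the outbound and return legs.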

The authors present a measurement of the W boson mass in $W \to e\nu$ decays using 1 fb$^{-1}$ of data collected with the D0 detector during Run II of the Fermilab Tevatron collider. With a sample of 499,830 $W \to e\nu$ candidate events, they measure $M_W = 80.401 \pm 0.043$ GeV. This is the most precise measurement from a single experiment.

In this paper we show how weak joint measurement and local feedback can be used to control entanglement generation between two qubits. To do this, we make use of a decoherence-free subspace (DFS). Weak measurement and feedback can be used to drive the system into this subspace rapidly. Once within the subspace, feedback can generate entanglement rapidly, or turn off entanglement generation dynamically. We also consider, in the context of weak measurement, some of the differences between purification and generating entanglement.

... in this series. Extension clothing specialist, The Texas A&M University System. Measuring tools are important in fitting to obtain a symmetrical appearance. Most measuring tools available today are marked in inches and yards as well ... most straight, flat areas. Wood may warp or chip; if used, however, it should have a metal edge for accuracy. Sewing Gauge: sewing gauges are usually metal or plastic, in 6-inch (15 cm) lengths, and have a movable slide for marking certain lengths ...

We report a new method to probe the solid-liquid interface through the use of a thin liquid layer on a solid surface. An ambient pressure XPS (AP-XPS) endstation that is capable of detecting high-kinetic-energy photoelectrons (7 keV) at pressures up to 110 Torr has been constructed and commissioned. Additionally, we have deployed a "dip & pull" method to create a stable, nanometers-thick aqueous electrolyte on a platinum working electrode surface. Combining the newly constructed AP-XPS system and the dip & pull approach with a tender X-ray synchrotron source (2-7 keV), we are able to access the interface between liquid and solid dense phases with photoelectrons and directly probe important phenomena occurring at the narrow solid-liquid interface region in an electrochemical system. Using this approach, we have performed electrochemical oxidation of the Pt electrode at an oxygen evolution reaction (OER) potential. Under this potential, we observe the formation of both Pt²⁺ and Pt⁴⁺ interfacial species on the Pt working electrode in situ. We believe the thin-film approach and the use of tender AP-XPS highlighted in this study constitute an innovative new way to probe this key solid-liquid interface region of electrochemistry.


Accurate, precise wear measurements are a key element both in solving current wear problems and in basic wear research. Applications range from assessing the durability of micro-scale components to accurate screening of surface treatments and thin solid films. The need to distinguish small differences in wear rate presents formidable problems to those who are developing new materials and surface treatments. Methods for measuring wear in ASTM standard test methods are discussed. Errors arising from using alternate methods of wear measurement on the same test specimen are also described. Human judgmental factors are a concern in common methods for wear measurement, and an experiment involving measurement of a wear scar by ten different people is described. Precision in wear measurement is limited both by the capabilities of the measuring instruments and by the nonuniformity of the wear process. A method of measuring wear using nano-scale indentations is discussed. Current and future prospects for incorporating advanced, higher-precision wear measurement methods into standards are considered.

We study repeated (non-continuous) measurements on the electron spin in a quantum dot and find that the measurement technique may provide a different method or mechanism to realize nuclear spin polarization. While it may be used in any case, the method is aimed at further polarization, provided that the nuclear spins have already been polarized by existing electrical or optical methods. The feasibility of the method is analyzed. The existing techniques in electron spin measurement are applicable to this scheme. The repeated measurements deform the structure of the nuclear wave function and can also serve as "gates" to manipulate nuclear spins.

This Work Plan identifies and outlines interim measures to address nitrate contamination in groundwater at the Burn Site, Sandia National Laboratories/New Mexico. The New Mexico Environment Department has required implementation of interim measures for nitrate-contaminated groundwater at the Burn Site. The purpose of interim measures is to prevent human or environmental exposure to nitrate-contaminated groundwater originating from the Burn Site. This Work Plan details a summary of current information about the Burn Site, interim measures activities for stabilization, and project management responsibilities to accomplish this purpose.

Two recent measurements of beauty production in deep inelastic scattering based on data collected by the ZEUS detector are summarised. In the first, the beauty fraction in the data was obtained from events with a muon and a jet. In the second, beauty cross sections were measured using the decay-length significance and mass of inclusive secondary vertices. Differential cross sections are presented and compared to QCD predictions. The beauty contribution to the inclusive proton structure function $F_2^{b\bar{b}}$ was extracted for the jet+muon measurement and is compared to previous measurements and theoretical predictions.

This paper is the second in a series of papers proposing dedicated strategies for precision measurements of Standard Model parameters at the LHC. The common feature of these strategies is their robustness with respect to systematic measurement and modeling error sources. Their impact on the precision of the measured parameters is reduced using dedicated observables and dedicated measurement procedures which exploit the flexibility of the collider and detector running modes. In the present paper we focus our attention on the measurement of the charge asymmetry of the W-boson mass. This measurement is of central importance for the LHC experimental program, both as a direct test of the charge-sign-independent coupling of the W bosons to the matter particles and as a necessary first step towards a precision measurement of the charge-averaged W-boson mass. We propose and evaluate an LHC-specific strategy to measure the mass difference between the positively and negatively charged W bosons, $M_{W^+} - M_{W^-}$. We show that its present precision can be improved at the LHC by a factor of 20. We argue that such a precision is beyond the reach of the standard measurement and calibration methods imported to the LHC from the Tevatron program.

The Arctic is a challenging environment for making in-situ radiation measurements. A standard suite of radiation sensors is typically designed to measure the total, direct and diffuse components of incoming and outgoing broadband shortwave (SW) and broadband thermal infrared, or longwave (LW) radiation. Enhancements can include various sensors for measuring irradiance in various narrower bandwidths. Many solar radiation/thermal infrared flux sensors utilize protective glass domes and some are mounted on complex mechanical platforms (solar trackers) that rotate sensors and shading devices that track the sun. High quality measurements require striking a balance between locating sensors in a pristine undisturbed location free of artificial blockage (such as buildings and towers) and providing accessibility to allow operators to clean and maintain the instruments. Three significant sources of erroneous data include solar tracker malfunctions, rime/frost/snow deposition on the instruments and operational problems due to limited operator access in extreme weather conditions. In this study, a comparison is made between the global and component sum (direct [vertical component] + diffuse) shortwave measurements. The difference between these two quantities (that theoretically should be zero) is used to illustrate the magnitude and seasonality of radiation flux measurement problems. The problem of rime/frost/snow deposition is investigated in more detail for one case study utilizing both shortwave and longwave measurements. Solutions to these operational problems are proposed that utilize measurement redundancy, more sophisticated heating and ventilation strategies and a more systematic program of operational support and subsequent data quality protocols.
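The global-versus-component-sum comparison described above can be expressed as a single residual per sample; persistently large values flag tracker faults or rime/frost deposition. A sketch with illustrative values, not data from the study:

```python
import math

def component_sum_residual(global_sw, direct_normal, diffuse, sza_deg):
    """Closure residual between the global SW measurement and the
    component sum: global - (direct_normal * cos(SZA) + diffuse).
    Theoretically zero; the magnitude and seasonality of nonzero
    residuals indicate measurement problems."""
    mu0 = math.cos(math.radians(sza_deg))
    return global_sw - (direct_normal * mu0 + diffuse)

# Illustrative irradiances in W/m^2 at a 60-degree solar zenith angle:
print(component_sum_residual(450.0, 600.0, 150.0, 60.0))  # ~0.0
```

A riming event on the diffuse pyranometer, for example, would drive the residual positive until the dome is cleaned, which is the kind of signature the study uses to diagnose data quality.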

Dose of radiation to which a body of crystalline material has been exposed is measured by exposing the body to optical radiation at a first wavelength, which is greater than about 540 nm, and measuring optical energy emitted from the body by luminescence at a second wavelength, which is longer than the first wavelength. 9 figures.

Random Fractal Measures via the Contraction Method (John E. Hutchinson, Australian National University). ... a contraction-mapping method to prove various existence and uniqueness properties of (self-similar) random fractal measures ... in order to establish a.s. exponential convergence to the unique random fractal measure. The arguments used ...

The first and second field integrals in the LCLS undulators must be below a specified limit. To accurately measure the field integrals, a long coil system is used. This note describes a set of tests which were used to check the performance of the long coil system. A long coil system was constructed to measure the first and second field integrals of the LCLS undulators. The long coil measurements of the background fields were compared to field integrals obtained by sampling the background fields and numerically calculating the integrals. This test showed that the long coil has the sensitivity required to measure at the levels specified for the field integrals. Tests were also performed by making long coil measurements of short magnets of known strength placed at various positions. The long coil measurements agreed with the known field integrals obtained by independent measurements and calculation. Our tests showed that the long coil measurements are a valid way to determine whether the LCLS undulator field integrals are below the specified limits.
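The numerical cross-check mentioned above, sampling the background field and calculating the integrals, amounts to computing I1 = ∫B dz and I2 = ∫I1(z) dz by quadrature. A sketch using the trapezoidal rule, with an assumed uniform background field as the example input:

```python
import numpy as np

def field_integrals(z_m, b_t):
    """First and second field integrals from sampled field values,
    via the trapezoidal rule: I1 = int B dz (T*m) and
    I2 = int I1(z) dz (T*m^2), with I1(z) the running first integral."""
    dz = np.diff(z_m)
    i1_profile = np.concatenate(
        ([0.0], np.cumsum(dz * 0.5 * (b_t[:-1] + b_t[1:]))))
    i1 = float(i1_profile[-1])
    i2 = float(np.sum(dz * 0.5 * (i1_profile[:-1] + i1_profile[1:])))
    return i1, i2

# Assumed example: a uniform 1 mT background field sampled over 3 m.
z = np.linspace(0.0, 3.0, 301)
print(field_integrals(z, np.full_like(z, 1e-3)))  # ~(3e-3, 4.5e-3)
```

The second integral tracks the beam's transverse displacement, which is why both integrals, not just the first, carry a specification.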

Six of the key physics measurements that will be made by the LHCb experiment, concerning CP asymmetries and rare B decays, are discussed in detail. The "road map" towards the precision measurements is presented, including the use of control channels and other techniques to understand the performance of the detector with the first data from the LHC.

AIAA 2002-2738: PIV Measurements in Ribbed Ducts With and Without Rotation (Rahul Bharadwaj, James ...; Reston, VA 20191-4344). ... cooling [studies] have focused primarily on both simple and complex channel flow and the effect of turbulence ...

We present the latest measurements of the top quark mass from the Tevatron. The different top decay channels and measurement techniques used for these results are also described. The world average of the top quark mass, based on some of these new results combined with previous results, is $m_{\mathrm{top}} = 172.6 \pm 1.4$ GeV.

In connection with the problem of raising the sensitivity of gravitational-wave experiments, a study is made of the quantum limitations that can arise when a classical force is measured by the response of a quantum oscillator. Following up work done by the groups at Caltech and Moscow, and also by Unruh, attention is drawn to a class of nondemolition measurements that are free of quantum limitations on the accuracy with which a force can be measured. It is shown that such measurements can be realized in the case of observation of operators that are quantum integrals of the motion of the investigated system. The physical reasons for the presence or absence of a quantum sensitivity limit for an arbitrary choice of an observable are elucidated; they reside in the degree of uncertainty of the initial state of the system. In the case of integrals of the motion, this uncertainty can be reduced to zero by an initial precise measurement and subsequently remains zero (quasinondemolition measurement). It is shown further that one can make a choice of observables that do not depend on the initial state of the quantum system at all but retain information about an external influence. In this case, a precise continuous measurement can be realized without special preparation of the state of the system (strictly nondemolition measurement). The general rule for the construction of such an optimal variable is identical to the recommendations of the quantum theory of filtration.

A simple and accurate method for measuring the overall emittance of receiver pipes used with cylindrical concentrators is described. Experimental measurements obtained for steel pipes with a black chrome over nickel selective surface are presented. The observed strong temperature dependence of emittance indicates that the use of room temperature emittance data will substantially overestimate collector efficiency. (SPH)

Computing Physical Invariant Measures (Gary Froyland, Department of Mathematical Engineering). ... of the long-term distribution of most orbits of our system (M, T). The computational techniques we present ... in this situation, our method provides an automatic way to reconstruct the dynamics.

Measurement of Magnetic Field Using Collaborative AUVs. Jesse Pentzer, Brendan Crosbie, Thomas Bean. Tests using multiple types of AUVs to individually sample bathymetric data and water mass properties (salinity and temperature) were reported [2]. In 2007, an AUV was equipped to measure water properties ...

Constructive Contrasts Between Modeled and Measured Climate Responses Over a Regional Scale. Contrasts the response of simulated net primary production (NPP) to climate variables with the response observed in field measurements of NPP. Residual contrasts compared deviations of NPP from the empirical surface to identify groupings ...

Volume 1 of this manual documents the procedures and existing technology that are currently used by the Environmental Measurements Laboratory. A section devoted to quality assurance has been included. These procedures have been updated and revised and new procedures have been added. They include: sampling; radiation measurements; analytical chemistry; radionuclide data; special facilities; and specifications. 228 refs., 62 figs., 37 tabs.

A FRAMEWORK FOR MEASURING SUPERCOMPUTER PRODUCTIVITY. Marc Snir and David A. Bader, 10/30/2003. Abstract: We propose a framework for measuring the productivity of High Performance Computing (HPC) systems, based on common economic definitions of productivity and on Utility Theory. We discuss how ...

A method and apparatus for orienting a pulsed neutron source and a multi-angle diffractometer toward a sample of a ceramic-matrix or metal-matrix composite so that the measurement of internal strain (from which stress is calculated) is reduced to uncomplicated time-of-flight measurements.

Accurate performance measurement of silicon solar cells. William Murray Keogh, July 2001, a thesis. Accurate performance measurement is an important part of the solar cell manufacturing process. Two classes of measurement can be considered ... The light source is very important when calibrating solar cells. Commonly used light sources ...

Dose of radiation to which a body of crystalline material has been exposed is measured by exposing the body to optical radiation at a first wavelength, which is greater than about 540 nm, and measuring optical energy emitted from the body by luminescence at a second wavelength, which is longer than the first wavelength.

One of the most important factors which influence the behaviour of electrodeposited films is the strain induced by the electrodeposition process. In this communication the authors report a new optical fiber interferometer-based technique for the in situ measurement of strain during electrodeposition. The measurement system is shown.

The operations of linear algebra, calculus, and statistics are routinely applied to measurement scales but certain mathematical conditions must be satisfied in order for these operations to be applicable. We call attention to the conditions that lead to construction of measurement scales that enable these operations.

Knowledge about the optical properties of materials at high pressure and high temperature is needed for EOS research. Ellipsometry measures the change in the polarization of a probe beam reflected from a surface. From the change in polarization, the real and imaginary parts of the time-dependent complex index of refraction can be extracted. From the measured optical properties, fundamental physical properties of the material, such as emissivity, phase transitions, and electrical conductivity can be extracted. A dynamic ellipsometry measurement system with nanosecond resolution was built in order to measure all four Stokes parameters. A gas gun was used to accelerate the impact flyer. Our experiments concentrated on the optical properties of 1020 steel targets over an impact pressure range of 40-250 kbar. Although there are intrinsic difficulties with dynamic ellipsometric measurements, distinct changes were observed for 1020 steel under shock compression above 130 kbar, consistent with the alpha->epsilon phase transition.

The partitioning experiment is commonly performed with little or no attention to reducing measurement variance. Batch test procedures such as those used to measure K{sub d} values (e.g., ASTM D 4646 and EPA 402-R-99-004A) do not explain how to evaluate measurement uncertainty or how to minimize measurement variance. In fact, ASTM D 4646 prescribes a sorbent:water ratio that prevents variance minimization. Consequently, the variance of a set of partitioning measurements can be extreme and even absurd. Such data sets, which are commonplace, hamper probabilistic modeling efforts. An error-savvy design requires adjustment of the solution:sorbent ratio so that approximately half of the sorbate partitions to the sorbent. Results of Monte Carlo simulations indicate that this simple step can markedly improve the precision and statistical characterization of partitioning uncertainty.
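The design point above can be checked with a short simulation. The sketch below is a hypothetical Monte Carlo (not the authors' code): it assumes a fixed absolute uncertainty on the measured aqueous concentration and shows that the relative spread of the resulting K{sub d} values is smallest when roughly half of the sorbate partitions to the sorbent.

```python
import numpy as np

rng = np.random.default_rng(0)

def kd_relative_spread(f_sorbed, c0=100.0, sigma_abs=1.0, v_over_m=10.0, n=200_000):
    """Monte Carlo relative spread of Kd when a fraction f_sorbed of the
    sorbate partitions to the sorbent and the aqueous concentration is
    measured with a constant absolute uncertainty sigma_abs."""
    cw_true = c0 * (1.0 - f_sorbed)
    cw_meas = cw_true + rng.normal(0.0, sigma_abs, n)
    cw_meas = cw_meas[cw_meas > 0]          # discard non-physical readings
    kd = (c0 - cw_meas) / cw_meas * v_over_m
    return np.std(kd) / np.mean(kd)

spreads = {f: kd_relative_spread(f) for f in (0.1, 0.5, 0.9)}
# Under this noise model the spread is smallest near f = 0.5, where
# error propagation gives sigma_Kd/Kd ~ sigma_abs * c0 / (cw * (c0 - cw)).
```

The minimum at a half-sorbed design follows because the product cw * (c0 - cw) in the denominator is maximized at cw = c0/2.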

A modification of the Tulsi quantum search algorithm with intermediate measurements of the control is presented. In order to analyze the effect of measurements in quantum searches, a different choice of the angular parameter is used. The study is performed for several values of time lapses between measurements, finding close relationships between probabilities and correlations (Mutual Information and Cumulative Correlation Measure). The order of this modified algorithm is estimated, showing that for some time lapses the performance is improved, and that it becomes of order $O(N)$ (the classical brute-force search) when a measurement is taken at every step. The results indicate a possible way to analyze improvements to other quantum algorithms using one or more control qubits.

Frequency control is an essential requirement of reliable electric power system operations. Determination of frequency control depends on frequency measurement and the practices based on these measurements that dictate acceptable frequency management. This report chronicles the evolution of these measurements and practices. As technology progresses from analog to digital for calculation, communication, and control, the technical basis for frequency control measurement and practices to determine acceptable performance continues to improve. Before the introduction of digital computing, practices were determined largely by prior experience. In anticipation of mandatory reliability rules, practices evolved from a focus primarily on commercial and equity issues to an increased focus on reliability. This evolution is expected to continue and place increased requirements for more precise measurements and a stronger scientific basis for future frequency management practices in support of reliability.

In the framework of the Neutronic and Nuclear Assessment Task Group of the MEGAPIE experiment we measured the delayed neutron (DN) flux at the top of the target. The measurement was proposed mainly for radioprotection purposes since the DN flux at the top of the target has been estimated to be of the same order of magnitude as the prompt neutron flux. Given the strong model-dependence of DN predictions, the measurement of DN contribution to the total neutron activity at the top of the target was thus desired. Moreover, this measurement is complementary to the DN experiments performed at PNPI (Gatchina) on solid lead and bismuth targets. The DN measurement at MEGAPIE was performed during the start-up phase of the target. In this paper we present a detailed description of the experimental setup and some preliminary results on decay spectra.

The present invention provides systems and methods for measuring a load force associated with pulling a farm implement through soil that is used to generate a spatially variable map that represents the spatial variability of the physical characteristics of the soil. An instrumented hitch pin configured to measure a load force is provided that measures the load force generated by a farm implement when the farm implement is connected with a tractor and pulled through or across soil. Each time a load force is measured, a global positioning system identifies the location of the measurement. This data is stored and analyzed to generate a spatially variable map of the soil. This map is representative of the physical characteristics of the soil, which are inferred from the magnitude of the load force.

Optical profilers are valuable tools for the characterization of microelectromechanical systems (MEMSs). They use phase shifting interferometry (PSI) or vertical scanning interferometry to measure the topography of microscale structures with nanometer resolution. However, for many emerging MEMS applications, the sample needs to be imaged while placed in a liquid or in a package with a glass window. The increased refractive index of the transparent medium degrades the interference image contrast and prevents any measurement of the sample. We report on the modification of a Veeco NT1100 optical profiler to enable PSI measurements through refractive media. This approach can be applied to any other optical profiler with PSI capability. The modification consists of replacing the original illumination source with a custom-built narrow linewidth source, which increases the coherence length of the light and the contrast of the interference image. We present measurements taken with the modified configuration on samples covered with 3 mm of water or 500 {mu}m of glass, and we compare them to measurements of uncovered samples. We show that the measurement precision is only slightly reduced by the water and glass, and that it is still sufficiently high for typical MEMS applications. The described method can be readily used for measuring through other types and thicknesses of refractive materials.

We study the applicability of several galaxy environment measures (n^th-nearest-neighbor distance, counts in an aperture, and Voronoi volume) within deep redshift surveys. Mock galaxy catalogs are employed to mimic representative photometric and spectroscopic surveys at high redshift (z ~ 1). We investigate the effects of survey edges, redshift precision, redshift-space distortions, and target selection upon each environment measure. We find that even optimistic photometric redshift errors (\\sigma_z = 0.02) smear out the line-of-sight galaxy distribution irretrievably on small scales; this significantly limits the application of photometric redshift surveys to environment studies. Edges and holes in a survey field dramatically affect the estimation of environment, with the impact of edge effects depending upon the adopted environment measure. These edge effects considerably limit the usefulness of smaller survey fields (e.g. the GOODS fields) for studies of galaxy environment. In even the poorest groups and clusters, redshift-space distortions limit the effectiveness of each environment statistic; measuring density in projection (e.g. using counts in a cylindrical aperture or a projected n^th-nearest-neighbor distance measure) significantly improves the accuracy of measures in such over-dense environments. For the DEEP2 Galaxy Redshift Survey, we conclude that among the environment estimators tested the projected n^th-nearest-neighbor distance measure provides the most accurate estimate of local galaxy density over a continuous and broad range of scales.
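The projected n^th-nearest-neighbor statistic favored above can be illustrated with a minimal toy estimator. The sketch below is my own construction, not the DEEP2 pipeline: it assumes flat-sky positions and line-of-sight velocities, restricts neighbors to a velocity window to suppress redshift-space distortions, and measures density in projection.

```python
import numpy as np

def projected_nth_nn_density(x, y, v, n=3, dv_max=1000.0):
    """Toy projected n-th-nearest-neighbour density estimator.
    x, y: flat-sky positions (arbitrary units); v: line-of-sight
    velocities (km/s). Neighbours are restricted to |dv| < dv_max,
    then Sigma_n = n / (pi * d_n^2) with d_n the projected distance
    to the n-th nearest such neighbour."""
    x, y, v = map(np.asarray, (x, y, v))
    sigma = np.empty(len(x))
    for i in range(len(x)):
        sel = np.abs(v - v[i]) < dv_max   # velocity window
        sel[i] = False                    # exclude the galaxy itself
        d = np.hypot(x[sel] - x[i], y[sel] - y[i])
        d_n = np.sort(d)[n - 1]           # projected distance to n-th neighbour
        sigma[i] = n / (np.pi * d_n ** 2)
    return sigma

# Demo: a tight clump embedded in a sparse field should stand out.
rng = np.random.default_rng(1)
clump = 0.05 * rng.standard_normal((25, 2))
field = rng.uniform(-10.0, 10.0, (25, 2))
pos = np.vstack([clump, field])
sigma = projected_nth_nn_density(pos[:, 0], pos[:, 1], np.zeros(50))
```

A real survey application would also need edge corrections and a proper celestial metric, which the abstract's analysis addresses and this sketch deliberately omits.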

Proper substation grounding grid design requires good, accurate soil resistivity measurements. This data is essential to model the substation ground grid to design a safe ground grid with a satisfactory ground grid resistance at minimum cost. For substations with several decades of service, there is some concern that a grid may have deteriorated, been damaged during equipment installation or excavation, or that initial soil resistivity measurements were lost or may not have been correctly performed. Ground grid conductors change the substation surface voltage distribution. Any voltage measurements taken at the complete substation will also vary from the tests made without conductors present. During testing, current was injected in the soil by probes placed near the ground grid. The current tends to follow the ground grid conductors since copper is a far better conductor than the soil it is placed in. Resistance readings near grids will be lower than readings in undisturbed soil. Since computer models were unavailable for many years, analyzing the effect of the grid conductors on soil resistivity measurements was very difficult. As a result, soil resistivity measurements made close to substations were of little use to the engineer unless some means of correcting the measured values could be developed. This paper will present results of soil resistivity measurements near a substation ground grid before and after a ground grid has been installed and describes a means of calculating the undisturbed soil model.

A recent effort determined uranium holdup at a large fuel fabrication facility abroad where low enriched ({approx} 3%) uranium (LEU) oxide feeds the pellet manufacturing process. Measurements taken with both high- and low-resolution gamma-ray spectrometry systems include extensive data for the ventilation and vacuum systems. Equipment dimensions and the corresponding holdup deposit masses are large for LEU. Because deposits are infinitely thick to the 186 keV gamma ray in many locations in an LEU environment, measurements of both the 186 and 1001 keV gamma-rays were required, and self-attenuation was significant at 1001 keV in many cases. These wide-dynamic-range measurements used short count times, portable scintillator detectors, and portable MCAs. Because equipment is elevated above floor levels, most measurements were made with detectors mounted on extended telescoping poles. One of the main goals of this effort was to demonstrate and validate methods for measurement and quantitative analysis of LEU holdup using low-resolution detectors and the Generalized Geometry Holdup (GGH) techniques. The current GGH approach is applied elsewhere for holdup measurements of plutonium and high-enriched uranium. The recent experience is directly applicable to holdup measurements at LEU facilities such as the Paducah and Portsmouth gaseous diffusion enrichment plants and elsewhere, including LEU sites where D&D is active. This report discusses the measurement methodology, calibration of the measurement equipment, measurement control, analysis of the data, and the global and local assay results including random and systematic uncertainties. It includes field-validation exercises (multiple calibrated systems that perform measurements on the same extended equipment) as well as quantitative validation results obtained on reference materials assembled to emulate the deposits in an extended vacuum line that was also measured by these techniques. The paper examines the differences in assay results between the low-resolution system using the GGH method and the high-resolution system utilizing the commercially available ISOCS analysis method.

We generalize measurement using an expanded concept of cover, in order to provide a new approach to the size of a set other than cardinality. The generalized measurement has application backgrounds such as a generalized problem in dimension reduction, and is motivated by the existence of a minimum of both positive size and positive graduation, i.e., both minima equal the size of the set $\\{0\\}$. The minimum positive graduation in actual measurement reflects the possibility that an object cannot be partitioned arbitrarily; e.g., an interval $[0, 1]$ cannot be partitioned arbitrarily many times while remaining compatible with the minimum positive size. For the measurement of the size of a set, it can be assumed that this minimum is the size of $\\{0\\}$, in symbols $|\\{0\\}|$, or graduation 1. For a set $S$, we generalize any graduation as the size of a set $C_i$ with $\\exists x \\in S (x \\in C_i)$, and $|S|$ is represented by a pair, in symbols $(C, N(C))$, where $C = \\cup C_i$ and $N(C)$ is a set function on the $C_i$, with the $C_i$ independent of the order $i$ and $N(C)$ reflecting the quantity of the $C_i$. This pair is a generalized form of the box-counting dimension. The resulting size satisfies the properties of an outer measure in general, and the properties of a measure in the case of graduation 1; conversely, a measure is a size using the graduation given by the size of an interval. As for cardinality, the resulting size is a one-to-one correspondence in which only addition is allowed, a weak form of cardinality, and it rewrites the Continuum Hypothesis using dimension as $\\omega \\cdot |\\{0,1\\}| = 1$. Conversely, the cardinality of a set is a size in the graduation of that set. The generalized measurement provides a unified approach to dimension, measure, cardinality, and hence infinity.
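Since the pair $(C, N(C))$ is described as a generalized form of the box-counting dimension, a concrete numerical illustration may help: the sketch below (my construction, not the paper's) counts occupied boxes of the middle-thirds Cantor set at successively finer graduations and fits the scaling exponent.

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Fit the slope of log N(eps) against log(1/eps), where N(eps) is
    the number of boxes of size eps occupied by the 1-D point set."""
    counts = [len(np.unique(np.floor(np.asarray(points) / eps + 1e-9)))
              for eps in epsilons]            # 1e-9 guards float roundoff at box edges
    return np.polyfit(np.log(1.0 / np.asarray(epsilons)),
                      np.log(counts), 1)[0]

# Left endpoints of the level-10 middle-thirds Cantor construction.
pts = [0.0]
for _ in range(10):
    pts = [p / 3 for p in pts] + [2 / 3 + p / 3 for p in pts]

dim = box_counting_dimension(pts, [3.0 ** -k for k in range(1, 9)])
# dim is close to log 2 / log 3 ~ 0.6309
```

At graduation $3^{-k}$ the construction occupies exactly $2^k$ boxes, so the fitted exponent recovers the familiar Cantor dimension.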

We present the most recent CDF results in the measurements of the decay and production vertex of the top quark. New results on forward-backward asymmetry in top-antitop events are presented. Also, recent measurements of the branching fractions of the top quark are discussed. Finally, measurements in single top events, where the top quark is produced through electroweak processes, are presented. Despite the much larger number of top events collected at the LHC, due to the symmetric initial state and the better signal-to-background ratio in specific channels, some results will be a lasting heritage of the Tevatron.

It has been recently suggested that probabilities of different events in the multiverse are given by the frequencies at which these events are encountered along the worldline of a geodesic observer (the "watcher"). Here I discuss an extension of this probability measure to quantum theory. The proposed extension is gauge-invariant, as is the classical version of this measure. Observations of the watcher are described by a reduced density matrix, and the frequencies of events can be found using the decoherent histories formalism of Quantum Mechanics (adapted to open systems). The quantum watcher measure makes predictions in agreement with the standard Born rule of QM.

The goal of this document is to outline a procedure for dimensional measurement of Los Alamos National Laboratory's CMM Pit Artifact. This procedure will be used by the Manufacturing Practice's Inspection Technology Subgroup of the Interagency Manufacturing Operations Group and Joint Operations Weapon Operations Group (IMOG/JOWOG 39) round robin participants. The intent is to assess the state of industry within the Nuclear Weapons Complex for measurements made on this type of part and find which current measurement strategies and techniques produce the best results.

A method of electromagnetically measuring the distance between adjacent tube elements in a heat exchanger. A cylindrical, high magnetic permeability ferrite slug is placed in the tube adjacent the spacing to be measured. A bobbin or annular coil type probe operated in the absolute mode is inserted into a second tube adjacent the spacing to be measured. From prior calibrations on the response of the eddy current coil, the signals from the coil, when sensing the presence of the ferrite slug, are used to determine the spacing between the tubes.

The research contained in this thesis explores design attributes of the enterprise performance measurement system required for the transformation to the lean enterprise and its management. Arguments are made from the ...

We have developed a mission concept that uses 3-unit cubesats to perform new measurements of lunar magnetic fields, less than 100 meters above the Moon's surface. The mission calls for sending the cubesats on impact ...

We introduce on physical grounds a new measure of multipartite entanglement for pure states. The function we define is discriminant and monotone under LOCC and moreover can be expressed in terms of observables of the system.

CF4 gas is useful in many applications, especially as a drift gas in particle detection chambers. In order to make accurate measurements of incident particles the properties of the drift gas must be well understood. An ...

Corrections for inaccuracy in Vaisala radiosonde RH measurements have been applied to ARM SGP radiosonde soundings. The magnitude of the corrections can vary considerably between soundings. The radiosonde measurement accuracy, and therefore the correction magnitude, is a function of atmospheric conditions, mainly T, RH, and dRH/dt (humidity gradient). The corrections are also very sensitive to the RH sensor type, and there are 3 Vaisala sensor types represented in this dataset (RS80-H, RS90, and RS92). Depending on the sensor type and the radiosonde production date, one or more of the following three corrections were applied to the RH data: Temperature-Dependence correction (TD), Contamination-Dry Bias correction (C), Time Lag correction (TL). The estimated absolute accuracy of NIGHTTIME corrected and uncorrected Vaisala RH measurements, as determined by comparison to simultaneous reference-quality measurements from Holger Voemel's (CU/CIRES) cryogenic frostpoint hygrometer (CFH), is given by Miloshevich et al. (2006).

A fiber optic sensing technique is developed. An incident light beam from a semiconductor laser is coupled back into an optical fiber upon reflection from the metal surface. By measuring the diffused light power reflected from the metal surface, the diameter...

A new software tool was created at Jefferson Lab to measure the emittance of the CEBAF electron beams. The tool consists of device control and data analysis applications. The device control application handles the work of wire scanners and writes their measurement results as well as the information about accelerator settings during these measurements into wire scanner data files. The data analysis application reads these files and calculates the beam emittance on the basis of a wire scanner data processing model. Both applications are computer platform independent but are mostly used on LINUX PCs recently installed in the accelerator control room. The new tool significantly simplifies beam emittance measurement procedures for accelerator operations and contributes to a very high availability of the CEBAF machine for the nuclear physics program at Jefferson Lab.
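The abstract does not spell out the wire scanner data processing model, but a common approach can be sketched. The example below is illustrative only (the function name, matrices, and numbers are my assumptions, not the Jefferson Lab tool): it fits measured RMS beam sizes at several scanners, each reached through a known transfer matrix, to the transported sigma matrix and extracts the RMS emittance.

```python
import numpy as np

def emittance_from_scans(transfer_matrices, beam_sizes):
    """Least-squares RMS emittance from sizes measured at several wire
    scanners. Each 2x2 matrix M transports the beam from a reference
    point to a scanner, so the measured size obeys
        size^2 = M11^2*s11 + 2*M11*M12*s12 + M12^2*s22,
    and emittance = sqrt(s11*s22 - s12^2)."""
    A = np.array([[M[0, 0] ** 2, 2 * M[0, 0] * M[0, 1], M[0, 1] ** 2]
                  for M in transfer_matrices])
    s11, s12, s22 = np.linalg.lstsq(A, np.asarray(beam_sizes) ** 2,
                                    rcond=None)[0]
    return np.sqrt(s11 * s22 - s12 ** 2)

# Synthetic check: pure drifts of length L with a known sigma matrix.
lengths = (0.0, 1.0, 2.0, 3.0)
drifts = [np.array([[1.0, L], [0.0, 1.0]]) for L in lengths]
true_s11, true_s12, true_s22 = 2.0, 0.5, 1.0
sizes = [np.sqrt(true_s11 + 2 * L * true_s12 + L ** 2 * true_s22)
         for L in lengths]
eps = emittance_from_scans(drifts, sizes)  # recovers sqrt(1.75)
```

With at least three independent measurements the 3-parameter fit is determined; additional scanners overconstrain it and average down measurement noise.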

In this paper we review some of the most essential literature on the concept and measurement of quality of work. We show that different academic fields have conceptualized quality of work in distinct ways; however, there has been a convergence...

The international Muon Ionization Cooling Experiment (MICE), under construction at RAL, will test a prototype cooling channel for a future Neutrino Factory or Muon Collider. The cooling channel aims to achieve, using liquid hydrogen absorbers, a 10% reduction in transverse emittance. The change in 4D emittance will be determined with an accuracy of 1% by measuring muons individually. Step IV of MICE will make the first precise emittance-reduction measurements of the experiment. Simulation studies using G4MICE, based on GEANT4, find a significant difference in multiple scattering in low Z materials, compared with the standard expression quoted by the Particle Data Group. Direct measurement of multiple scattering using the scintillating-fibre trackers is found to be possible, but requires the measurement resolution to be unfolded from the data.

This report details the results of a scoping study funded by the Defense Waste Processing Facility (DWPF) for the measurement of melt viscosities for simulated glasses representative of Macrobatch 2 (Tank 42/51 feed).

I propose a new volume-weighted probability measure for cosmological 'multiverse' scenarios involving eternal inflation. The 'reheating-volume (RV) cutoff' calculates the distribution of observable quantities on a portion of the reheating hypersurface that is conditioned to be finite. The RV measure is gauge-invariant, does not suffer from the 'youngness paradox', and is independent of initial conditions at the beginning of inflation. In slow-roll inflationary models with a scalar inflaton, the RV-regulated probability distributions can be obtained by solving nonlinear diffusion equations. I discuss possible applications of the new measure to 'landscape' scenarios with bubble nucleation. As an illustration, I compute the predictions of the RV measure in a simple toy landscape.

Optical interferometry is amongst the most sensitive techniques for precision measurement. By increasing the light intensity a more precise measurement can usually be made. However, in some applications the sample is light sensitive. By using entangled states of light the same precision can be achieved with less exposure of the sample. This concept has been demonstrated in measurements of fixed, known optical components. Here we use two-photon entangled states to measure the concentration of the blood protein bovine serum albumin (BSA) in an aqueous buffer solution. We use an opto-fluidic device that couples a waveguide interferometer with a microfluidic channel. These results point the way to practical applications of quantum metrology to light sensitive samples.

Six analytical systems measuring delayed neutrons have been used for safeguards measurements at the Savannah River Site (SRS). A predecessor, the 252Cf Activation Analysis Facility installed at the Savannah River Technology Center (formerly the Savannah River Laboratory) has been used since 1974 to analyze small samples, measuring both delayed neutrons and gammas. The six shufflers, plus one currently being fabricated, were developed, designed and fabricated by the LANL N-1 group. These shufflers have provided safeguards measurements of product (2 each), in-process scrap (2 each plus a conceptual replacement) and process waste (2 each plus one being fabricated). One shuffler for scrap assay was the first shuffler to be installed (1978) in a process. Another (waste) was the first installed in a process capable of assaying barrels. A third (waste) is the first pass-through model and a fourth (product) is the most precise ({+-}0.12%) and accurate NDA instrument yet produced.

was or whether the ground cover completely covered the ground. Hurricane Jeanne destroyed the test configuration before air tightness measurements could be taken. However, we believe these crawlspaces were typical of poorly vented crawls one might find under...

As we enter the age of precision measurement in neutrino physics, improved flux sources are required. These must have a well defined flavor content with energies in ranges where backgrounds are low and cross-section ...

A "relaxoscope" (100) detects the degree of arterial endothelial function. Impairment of arterial endothelial function is an early event in atherosclerosis and correlates with the major risk factors for cardiovascular disease. An artery (115), such as the brachial artery (BA) is measured for diameter before and after several minutes of either vasoconstriction or vasorelaxation. The change in arterial diameter is a measure of flow-mediated vasomodification (FMVM). The relaxoscope induces an artificial pulse (128) at a superficial radial artery (115) via a linear actuator (120). An ultrasonic Doppler stethoscope (130) detects this pulse 10-20 cm proximal to the point of pulse induction (125). The delay between pulse application and detection provides the pulse transit time (PTT). By measuring PTT before (160) and after arterial diameter change (170), FMVM may be measured based on the changes in PTT caused by changes in vessel caliber, smooth muscle tone and wall thickness.

Measurements on three such arrays, each with a different disorder correlation length but identical average ...

Siemens Power Corporation (SPC) has performed reactor poolside gamma scanning measurements of fuel rods for fission gas release (FGR) detection for more than 10 yr. The measurement system has been previously described. Over the years, the data acquisition system, the method of spectrum analysis, and the means of reducing spectrum interference have been significantly improved. A personal computer (PC)-based multichannel analyzer (MCA) package is used to collect, display, and store high-resolution gamma-ray spectra measured in the fuel rod plenum. A PC spread sheet is used to fit the measured spectra and compute sample count rates after Compton background subtraction. A Zircaloy plenum spacer is often used to reduce positron annihilation interference that can arise from the INCONEL® ...

The origin and scaling of the current measured during steady electrospinning of polymer solutions in organic solvents are considered. For a specified electric field strength E, flow rate Q, and conductivity K, the total ...

Measurement-Induced NonLocality was introduced by Luo and Fu (Phys. Rev. Lett. \\textbf{106}, 120401 (2011)) as a measure of nonlocality in a bipartite state. In this paper we will discuss the monogamy property of measurement-induced nonlocality for some three- and four-qubit classes of states. Unlike discord, we find quite surprising results in this situation. Both the GHZ and W states satisfy monogamy relations in the three-qubit case; however, in general there are violations of monogamy relations in both the GHZ-class and W-class states. In the case of four-qubit systems, monogamy holds for most of the states in the generic class. The four-qubit GHZ state does not satisfy the monogamy relation, but the W state does. We provide several numerical results, including counterexamples, regarding the monogamy nature of measurement-induced nonlocality. We will also extend our results on the generalized W-class to n qubits.

A sample of recent results in muon scattering measurements from the COMPASS experiment at CERN will be reviewed. These include high energy processes with longitudinally polarised proton and deuteron targets. High energy polarised measurements provide important constraints for studying the nucleon spin structure and thus permit tests of the applicability of the theoretical framework of factorisation theorems and perturbative QCD. Specifically, latest results on longitudinal quark polarisation, quark helicity densities and gluon polarisation will be reviewed.

Complying with permitted emissions limits may be the most significant operations risk for a power plant. As limits are slowly ratcheted downward, understanding the accuracy and variation of measured pollutant levels becomes even more important. To avoid misunderstandings, regulators and plant owners should factor measurement uncertainty into air quality permit numbers, both when the permit is formulated and before any subsequent modifications. 4 figs., 2 tabs.

Temperature-sensitive features of particular phosphors were utilized for measuring the temperature T{sub p} of microparticles confined in the sheath of an rf plasma. The experiments were performed under variation of the argon pressure and rf power of the process plasma. T{sub p} was determined by evaluating characteristic fluorescence lines. The T{sub p} measurements depend strongly on rf power and gas pressure.

With a large and still increasing dataset, W and Z boson physics studies at the Tevatron p{bar p} collider are particularly useful for testing many aspects of the Standard Model. In these proceedings, we present measurements of electroweak boson properties, distributions, and charge asymmetries. We examine W and Z production both alone and in association with jets. These measurements are compared to NLO QCD predictions, are used to extract fundamental Standard Model parameters, and constrain parton distribution functions.

A new class of micromechanical dynamometers has been disclosed which is particularly suited to fabrication in parallel with other microelectromechanical apparatus. Forces in the micronewton regime and below can be measured with such dynamometers, which are based on a high-compliance deflection element (e.g. a ring or annulus) suspended above a substrate for deflection by an applied force, and one or more distance scales for optically measuring the deflection.

A system and method of efficiently obtaining distance measurements of a target. A modulated optical beam may be used to determine the distance to the target. A first beam splitter may be used to split the optical beam and a second beam splitter may be used to recombine a reference beam with a return ranging beam. An optical mixing detector may be used in a receiver to efficiently detect distance measurement information.

This technical bulletin documents measured peak equipment load data from 39 laboratory spaces in nine buildings across five institutions. The purpose of these measurements was to obtain data on the actual peak loads in laboratories, which can be used to rightsize the design of HVAC systems in new laboratories. While any given laboratory may have unique loads and other design considerations, these results may be used as a 'sanity check' for design assumptions.

The earth's atmosphere affects the velocity of propagation of microwave signals. This imparts a range error to radar range measurements that assume the typical simplistic model for propagation velocity. This range error is a function of atmospheric constituents, such as water vapor, as well as the geometry of the radar data collection, notably altitude and range. Models are presented for calculating atmospheric effects on radar range measurements, and compared against more elaborate atmospheric models.
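The correction described above can be sketched numerically. This is a minimal illustration, not one of the models the abstract compares: the exponential refractivity profile N(h) = Ns·exp(-h/h0), the constants Ns and h0, and the straight-line slant path are textbook simplifications assumed here for clarity (ray bending is ignored).

```python
import math

def range_error_m(radar_alt_m, target_alt_m, slant_range_m,
                  Ns=313.0, h0=6949.0):
    """Approximate one-way atmospheric range error (m) by averaging
    refractivity N(h) = Ns*exp(-h/h0) along a straight slant path.
    Ns and h0 are nominal exponential-atmosphere constants; N is in
    N-units, i.e. parts per million of (n - 1)."""
    n_steps = 1000
    total = 0.0
    for i in range(n_steps):
        f = (i + 0.5) / n_steps  # midpoint of each path segment
        h = radar_alt_m + f * (target_alt_m - radar_alt_m)
        total += Ns * math.exp(-h / h0)
    mean_N = total / n_steps
    return 1e-6 * mean_N * slant_range_m
```

At sea level the error is a few metres over a 10 km path, and it shrinks with altitude as the refractivity decays, which is the qualitative behavior the abstract's models quantify.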

A technique of dynamically defined measures is developed and its relation to the theory of equilibrium states is shown. The technique uses Carathéodory's method and the outer measure introduced in (I. Werner, Math. Proc. Camb. Phil. Soc. 140 (2) (2006) 333-347). As an application, equilibrium states for contractive Markov systems (I. Werner, J. London Math. Soc. 71 (2005), no. 1, 236-258) are obtained.

The report provides school administrators and facilities managers with instructions on how to test for the presence of radon. The findings from EPA's comprehensive studies of radon measurements in schools have been incorporated into these recommendations. The report supersedes Radon Measurements in Schools- An Interim Report (EPA 520/1-89-010). However, it does not invalidate tests in the process of being conducted under the interim report.

Energy savings are often used to help finance a facility's modernization program. The process of determining these savings has come under the microscope as facility owners often pay energy service companies based on savings measurements. Unfortunately, financiers and facility owners usually do not appreciate that the determination of savings is an art, subject to discretion. To them, savings are determined by accountants subtracting one year's costs from another year's costs. For those wishing to properly quantify avoided cost, engineering judgment must be added to this simplistic subtraction process. How do you measure something you don't have? This is the challenge in measuring energy savings. The absence of energy use cannot be measured. However, energy savings can be determined indirectly, using measurements of the presence of energy use, engineering judgment, and analysis. This article summarizes the state of the art in savings measurement, particularly as it may be applied by an energy service company (ESCO) in an energy performance contract.
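The indirect determination described above is commonly done by fitting a baseline model to pre-retrofit data and comparing its prediction, evaluated at post-retrofit conditions, with what was actually metered. A minimal sketch; the degree-day regression and the function names are illustrative assumptions, not taken from the article:

```python
def fit_baseline(dd, use):
    """Least-squares line use = a + b*dd fit to pre-retrofit billing data,
    where dd = degree-days per period and use = metered energy per period."""
    n = len(dd)
    mx = sum(dd) / n
    my = sum(use) / n
    b = sum((x - mx) * (y - my) for x, y in zip(dd, use)) \
        / sum((x - mx) ** 2 for x in dd)
    a = my - b * mx
    return a, b

def avoided_energy(a, b, dd_post, metered_post):
    """'Avoided use': baseline model evaluated at post-retrofit weather
    minus the post-retrofit metered energy."""
    baseline = sum(a + b * x for x in dd_post)
    return baseline - sum(metered_post)
```

The weather adjustment is exactly the engineering judgment the article refers to: without it, the naive year-over-year subtraction conflates savings with a mild or harsh season.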

Time measurement plays a crucial role in particle identification in high energy physics experiments. With upgraded physics goals and advances in electronics, modern time measurement systems must deliver excellent resolution as well as high integration. Thanks to the Field-Programmable Gate Array (FPGA), the FPGA time-to-digital converter (TDC) has become one of the mature and prominent time measurement methods in recent years. To correct the time-walk effect caused by leading-edge timing, a time-over-threshold (TOT) measurement should be added to the FPGA TDC. TOT can be obtained by measuring the time interval between a signal's leading and trailing edges. Unfortunately, a traditional TDC can recognize only one kind of signal edge, either the leading or the trailing. Generally, to measure the interval, two TDC channels are used at the same time, one for the leading edge and the other for the trailing edge. However, this method unavoidably increases FPGA resource usage and reduces the TDC's integration density...
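The two-channel interval measurement described above can be sketched in software. This is an illustrative host-side sketch, not the FPGA logic itself; the edge-pairing scheme and the linear walk-correction slope k are assumptions for demonstration:

```python
def tot_from_edges(leading_ts, trailing_ts):
    """Pair timestamps from the two TDC channels of the traditional
    scheme (one recording leading edges, one recording trailing edges,
    both sorted) into (arrival_time, TOT) tuples."""
    out = []
    j = 0
    for t_lead in leading_ts:
        # advance to the first trailing edge after this leading edge
        while j < len(trailing_ts) and trailing_ts[j] <= t_lead:
            j += 1
        if j < len(trailing_ts):
            out.append((t_lead, trailing_ts[j] - t_lead))
            j += 1
    return out

def walk_corrected(t_lead, tot, k=0.05):
    """Illustrative linear time-walk correction: larger pulses (larger TOT)
    cross the threshold earlier, so the leading-edge time is shifted by a
    hypothetical calibration slope k times the TOT."""
    return t_lead + k * tot
```

A real correction would use a calibration curve measured for the specific discriminator, but the structure, pair the edges, then correct the leading-edge time with the TOT, is the one the abstract describes.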

The magnetic measurements of HQ01e, a 1 m long LHC Accelerator Research Program (LARP) high-gradient quadrupole model, were performed at 4.4 K and above 40 K at the magnet test facility of LBNL in July 2011. The 120 mm aperture cos2θ Nb{sub 3}Sn magnet was designed with accelerator magnet features including alignment and field quality. The conductor-limited gradient was 195 T/m at 4.4 K. During the measurement, a ramp rate of 10 A/s was used and measurements at the nominal current of 14.2 kA (82% of the short-sample limit, with a gradient of 160 T/m) were performed using the 250 mm long printed-circuit board rotating probe developed by FNAL. At 14.2 kA, 2.7 units of b{sub 6} and 0.7 units of b{sub 10} were measured. A large persistent current contribution and strong dynamic effects were observed. We analyzed the allowed and non-allowed harmonics obtained during the measurements above 40 K and at the nominal current. A significant change of the skew sextupole occurred between 50 K and 95 K. The allowed multipoles and the low-order non-allowed multipoles at the straight section were explained through a rigid displacement of coil blocks with an amplitude of less than 100 μm. We also attempted to correlate the coil asymmetry (a{sub 3} and b{sub 3}) with the measured coil pole azimuthal strain. The dynamic multipole measured at the magnetic straight section varied linearly with the ramp rate of the magnet current ranging from 10 A/s to 60 A/s. It was attributed to inter-strand coupling currents with low crossover resistance. The crossover resistance of the cables at the inner layer of the magnet was estimated to range between 0.2 μΩ and 0.7 μΩ.

Accurate distances to pulsars can be used for a variety of studies of the Galaxy and its electron content. However, most distance measures to pulsars have been derived from the absorption (or lack thereof) of pulsar emission by Galactic H I gas, which typically implies that only upper or lower limits on the pulsar distance are available. We present a critical analysis of all measured H I distance limits to pulsars and other neutron stars, and translate these limits into actual distance estimates through a likelihood analysis that simultaneously corrects for statistical biases. We also apply this analysis to parallax measurements of pulsars in order to obtain accurate distance estimates, and find that the parallax and H I distance measurements are biased in different ways because of differences in the sampled populations. Parallax measurements typically underestimate a pulsar's distance because of the limited distance to which this technique works and the consequential strong effect of the Galactic pulsar distribution (i.e., the original Lutz-Kelker bias); in H I distance limits, however, the luminosity bias dominates the Lutz-Kelker effect, leading to overestimated distances because the bright pulsars to which this technique is applicable are more likely to be nearby given their brightness.

In circular machines, nonlinear dynamics can impact parameters such as beam lifetime and could result in limitations on the performance reach of the accelerator. Assessing and understanding these effects in experiments is essential to confirm the accuracy of the magnetic model and improve the machine performance. A direct measurement of the machine nonlinearities can be obtained by characterizing the dependency of the tune as a function of the amplitude of oscillations (usually defined as amplitude detuning). The conventional technique is to excite the beam to large amplitudes with a single kick and derive the tune from turn-by-turn data acquired with beam position monitors. Although this provides a very precise tune measurement it has the significant disadvantage of being destructive. An alternative, nondestructive way of exciting large amplitude oscillations is to use an ac dipole. The perturbation Hamiltonian in the presence of an ac dipole excitation shows a distinct behavior compared to the free oscillations which should be correctly taken into account in the interpretation of experimental data. The use of an ac dipole for direct amplitude detuning measurement requires careful data processing allowing one to observe the natural tune of the machine; the feasibility of such a measurement is demonstrated using experimental data from the Large Hadron Collider. An experimental proof of the theoretical derivations based on measurements performed at injection energy is provided as well as an application of this technique at top energy using a large number of excitations on the same beam.
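The measurement chain described above, from turn-by-turn position data to a tune, and from tunes at several excitation amplitudes to a detuning coefficient, can be illustrated in a few lines. The plain FFT peak search and linear fit below are generic textbook steps assumed for illustration, not the refined interpolated-tune analysis such an experiment would actually use:

```python
import numpy as np

def natural_tune(turn_by_turn):
    """Estimate the betatron tune as the dominant FFT frequency
    (in units of the revolution frequency) of centered turn-by-turn
    beam position data."""
    x = np.asarray(turn_by_turn, dtype=float)
    x = x - x.mean()
    spec = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spec[1:])) + 1  # skip the DC bin
    return k / len(x)

def detuning_slope(actions, tunes):
    """Linear amplitude-detuning coefficient dQ/dJ from (action, tune)
    pairs collected at several excitation amplitudes."""
    slope, _ = np.polyfit(np.asarray(actions), np.asarray(tunes), 1)
    return slope
```

With an ac dipole, the raw spectrum is dominated by the driven tune, which is why the abstract stresses the careful data processing needed before a peak search like this reveals the natural tune.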

A typical acoustic harmonic generation measurement comes with certain limitations. Firstly, the use of the plane wave-based analysis used to extract the nonlinear parameter, β, ignores the effects of diffraction, attenuation and receiver averaging which are common to most experiments, and may therefore limit the accuracy of a measurement. Secondly, the method usually requires data obtained from a through-transmission type setup, which may not be practical in a field measurement scenario where access to the component is limited. Thirdly, the technique lacks a means of pinpointing areas of damage in a component, as the measured nonlinearity represents an average over the length of signal propagation. Here we describe a three-dimensional model of harmonic generation in a sound beam, which is intended to provide a more realistic representation of a typical experiment. The presence of a reflecting boundary is then incorporated into the model to assess the feasibility of performing single-sided measurements. Experimental validation is provided where possible. Finally, a focusing acoustic source is modelled to provide a theoretical indication of the afforded advantages when the nonlinearity is localized.
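The plane-wave extraction whose limitations are listed above has a standard closed form: for fundamental and second-harmonic amplitudes A1 and A2 after propagation distance x, the nonlinearity parameter is beta = 8*A2/(k^2*x*A1^2) with wavenumber k = 2*pi*f/c. A minimal sketch; the default sound speed is a nominal value assumed here for illustration:

```python
import math

def beta_plane_wave(A1, A2, freq_hz, distance_m, c=5900.0):
    """Plane-wave estimate of the acoustic nonlinearity parameter:
    beta = 8*A2 / (k^2 * x * A1^2), with k = 2*pi*f/c.
    c defaults to a nominal longitudinal sound speed for steel.
    Diffraction, attenuation and receiver averaging are ignored,
    which is precisely the first limitation the text discusses."""
    k = 2.0 * math.pi * freq_hz / c
    return 8.0 * A2 / (k ** 2 * distance_m * A1 ** 2)
```

Because the formula averages over the whole path x, any localized damage is diluted, which motivates the three-dimensional beam model and the focused-source analysis the abstract proposes.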

The measurement of the Avogadro constant opened the way to a comparison of the watt-balance measurements of the Planck constant with the values calculated from the quotients of the Planck constant and the mass of a particle or an atom. Since the energy scales of these measurements span nine energy decades, these data provide insight into the consistency of our understanding of physics.

This document describes the Measurement and Verification (M&V) methodology for Phase One of the Texas Health and Human Services Commission (HHSC) energy savings performance contracting (ESPC) project. TAC-Tour Andover Controls was the energy service company (ESCO) chosen by HHSC to implement the ESPC. The M&V plan is based on the International Performance Measurement and Verification Protocol (IPMVP) Option C Whole Building Measurement and provides the methodology...

We report on the successful use of a laser-driven few-MeV proton source to measure the differential cross section of a hadronic scattering reaction, as well as on the measurement and simulation study of polarization observables of the laser-accelerated charged particle beams. These investigations were carried out with thin foil targets, illuminated by 100 TW laser pulses at the Arcturus laser facility; the polarization measurement is based on the spin dependence of hadronic proton scattering off nuclei in a silicon target. We find proton beam polarizations consistent with zero, which indicates that for these particular laser-target parameters the particle spins are not aligned by the strong magnetic fields inside the laser-generated plasmas.

Observations in Quantum Mechanics are subject to complex restrictions arising from the principle of energy conservation. Determining such restrictions, however, has been so far an elusive task, and only partial results are known. In this paper we discuss how constraints on the energy spectrum of a measurement device translate into limitations on the measurements which we can effect on a target system with non-trivial energy operator. We provide efficient algorithms to characterize such limitations and we quantify them exactly when the target is a two-level quantum system. Our work thus identifies the boundaries between what is possible or impossible to measure, i.e., between what we can see or not, when energy conservation is at stake.

Time and energy of quantum processes trade off against each other. We propose to ascribe to any given quantum process a time-energy cost to quantify how much computation it performs. Here, we analyze the time-energy costs for general quantum measurements, along a similar line as our previous work for quantum channels, and prove exact and lower-bound formulae for the costs. We use these formulae to evaluate the efficiencies of actual measurement implementations. We find that one implementation for a Bell measurement is optimal in time-energy. We also analyze the time-energy cost for unambiguous state discrimination and find evidence that only a finite time-energy cost is needed to distinguish any number of states.

A gas monitor detector was implemented and characterized at the Soft X-ray Research (SXR) instrument to measure the average, absolute and pulse-resolved photon flux of the LCLS beam in the energy range between 280 and 2000 eV. The detector is placed after the monochromator and addresses the need to provide reliable absolute pulse energy as well as pulse-resolved measurements for the various experiments at this instrument. This detector provides a reliable non-invasive measurement for determining flux levels on the samples in the downstream experimental chamber, for optimizing signal levels of secondary detectors, and for the essential need of data normalization. The design, integration into the instrument and operation are described, and examples of its performance are given.

Co- and counter-viewing bolometers aimed along a common tangency chord are being used to study power losses due to charge exchange (CX) of fast ions in neutral beam injection (NBI) heated TFTR plasmas. For unidirectional injection, tangential bolometers oriented to view CX loss of circulating fast ions detect losses from the thermal target plasma (impurity radiation and CX) plus power due to the fast ion CX loss, whereas bolometers oppositely directed measure only the target plasma contribution. The difference between the two signals is a measure of the fast ion CX loss. Additional information is obtained by comparing the tangential bolometer signals with those of perpendicularly viewing bolometer monitors and arrays. The measurements are compared to results of the TRANSP code analysis.

This report summarizes LDRD project number 151365, "Dynamic Temperature Measurements with Embedded Optical Sensors". The purpose of this project was to develop an optical sensor capable of detecting modest temperature states (<1000 K) with nanosecond time resolution, a recurring diagnostic need in dynamic compression experiments at the Sandia Z machine. Gold sensors were selected because the visible reflectance spectrum of gold varies strongly with temperature. A variety of static and dynamic measurements were performed to assess reflectance changes at different temperatures and pressures. Using a minimal optical model for gold, a plausible connection between static calibrations and dynamic measurements was found. With refinements to the model and diagnostic upgrades, embedded gold sensors seem capable of detecting minor (<50 K) temperature changes under dynamic compression.

A central goal of the research effort in quantum thermodynamics is the extension of standard thermodynamics to include small-scale and quantum effects. Here we lay out consequences of seeing measurement, one of the central pillars of quantum theory, not merely as a mathematical projection but as a thermodynamic process. We uncover that measurement, a component of any experimental realisation, is accompanied by work and heat contributions and that these are distinct in classical and quantum thermodynamics. Implications are far-reaching, giving a thermodynamic interpretation to quantum coherence, extending the link between thermodynamics and information theory, and providing key input for the construction of a future quantum thermodynamic framework. Repercussions for existing quantum thermodynamic relations that omitted the role of measurement are discussed, including quantum work fluctuation relations and single-shot approaches.

The nature of the dominant matter component of galaxies and clusters remains unknown. While the astrophysics community supports the cold dark matter (CDM) paradigm as a key factor in the current cosmological model, no direct CDM detections have been performed. Faber and Visser (2006) suggested a simple method for measuring the dark matter equation of state. By combining kinematical and gravitational lensing data it is possible to test the widely adopted assumption of pressureless dark matter. Following this formalism, we have measured the dark matter equation of state for the first time using improved techniques. We have found that the value of the equation-of-state parameter is consistent with pressureless dark matter within the errors. Nevertheless the measured value is lower than expected. This fact follows from the well-known differences between the masses determined by lensing and kinematical methods. We have tested our techniques using simulations and we have also analyzed possible sources of errors that c...

After two decades of phasor network deployment, phasor measurements are now available at many major substations and power plants. The North American SynchroPhasor Initiative (NASPI), supported by both the US Department of Energy and the North American Electricity Reliability Council (NERC), provides a forum to facilitate the efforts in phasor technology in North America. Phasor applications have been explored and some are in today's utility practice. The IEEE C37.118 Standard is a milestone in standardizing phasor measurements and defining performance requirements. To comply with IEEE C37.118 and to better understand the impact of phasor quality on applications, the NASPI Performance and Standards Task Team (PSTT) initiated and accomplished the development of two important documents to address characterization of PMUs and instrumentation channels, which leverage prior work (esp. in WECC) and international experience. This paper summarizes the accomplished PSTT work and presents the methods for phasor measurement evaluation.

As novel fibers with enhanced mechanical properties continue to be synthesized and developed, the ability to easily and accurately characterize these materials becomes increasingly important. Here we present a design for an inexpensive tabletop instrument to measure shear modulus (G) and other longitudinal shear properties of a micrometer-sized monofilament fiber sample, such as nonlinearities and hysteresis. This automated system applies twist to the sample and measures the resulting torque using a sensitive optical detector that tracks a torsion reference. The accuracy of the instrument was verified by measuring G for high purity copper and tungsten fibers, for which G is well known. Two industrially important fibers, IM7 carbon fiber and Kevlar{sup ®} 119, were also characterized with this system and were found to have G = 16.5 ± 2.1 and 2.42 ± 0.32 GPa, respectively.
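The conversion from measured torque and applied twist to G follows from linear torsion theory: G = T*L/(phi*J), with polar moment J = pi*d^4/32 for a circular filament. A minimal sketch, with function name and argument units chosen here for illustration:

```python
import math

def shear_modulus(torque_Nm, twist_rad, length_m, diameter_m):
    """Shear modulus of a circular monofilament from a torsion test:
    G = T*L / (phi*J), where T is the measured torque, phi the applied
    twist angle, L the gauge length, and J = pi*d^4/32 the polar moment
    of area. Assumes a uniform, linearly elastic circular cross-section."""
    J = math.pi * diameter_m ** 4 / 32.0
    return torque_Nm * length_m / (twist_rad * J)
```

The d^4 dependence of J explains why micrometer-scale fibers demand the sensitive optical torque detection described above: halving the diameter reduces the torque for a given twist by a factor of sixteen.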

After commenting briefly on the role of the typicality assumption in science, we advocate a phenomenological approach to the cosmological measure problem. Like any other theory, a measure should be simple, general, well defined, and consistent with observation. This allows us to proceed by elimination. As an example, we consider the proper time cutoff on a geodesic congruence. It predicts that typical observers are quantum fluctuations in the early universe, or Boltzmann babies. We sharpen this well-known youngness problem by taking into account the expansion and open spatial geometry of pocket universes. Moreover, we relate the youngness problem directly to the probability distribution for observables, such as the temperature of the cosmic background radiation. We consider a number of modifications of the proper time measure, but find none that would make it compatible with observation.

A MEM inertial sensor (e.g. accelerometer, gyroscope) having integral rotational means for providing static and dynamic bias compensation is disclosed. A bias compensated MEM inertial sensor is described comprising a MEM inertial sense element disposed on a rotatable MEM stage. A MEM actuator drives the rotation of the stage between at least two predetermined rotational positions. Measuring and comparing the output of the MEM inertial sensor in the at least two rotational positions allows for both static and dynamic bias compensation in inertial calculations based on the sensor's output. An inertial measurement unit (IMU) comprising a plurality of independently rotatable MEM inertial sensors and methods for making bias compensated inertial measurements are disclosed.

The Chiral Magnetic Wave (CMW) [1] predicts a dependence of the positive and negative particle elliptic flow on the event charge asymmetry. Such a dependence has been observed by the STAR Collaboration [2]. However, it is rather difficult to interpret the results of this measurement, as well as to perform cross-experiment comparisons, due to the dependence of the observable on experimental inefficiencies and the kinematic acceptance used to determine the net asymmetry. We propose another observable that is free from these deficiencies. It also provides possibilities for differential measurements clarifying the interpretation of the results. We use this new observable to study the effect of the local charge conservation that can mimic the effect of the CMW in charge dependent flow measurements.

Efficient PEM fuel cell performance requires effective water management. The materials used, their durability, and the operating conditions under which fuel cells run make efficient water management within a practical fuel cell system a primary challenge in developing commercially viable systems. We present experimental measurements of water content within operating fuel cells in response to operational conditions, including transients and freezing conditions. To help understand the effect of components and operations, we examine water transport in operating fuel cells, measure the fuel cell water in situ, and model the water transport within the fuel cell. High Frequency Resistance (HFR), AC impedance and neutron imaging (using NIST's facilities) were used to measure water content in operating fuel cells under various conditions, including current density, relative humidity, inlet flows, flow orientation and variable GDL properties. Ice formation in freezing cells was also monitored both during operation and shut-down conditions.

Energy is a key requirement for a healthy, productive life and a major driver of the emissions leading to an increasingly warm planet. The implications of a doubling and redoubling of per capita incomes over the remainder of this century for energy use are a critical input into understanding the magnitude of the carbon management problem. A substantial controversy about how the Special Report on Emissions Scenarios (SRES) measured income, and the potential implications of how income was measured for long-term levels of energy use, is revisited in the McKibbin, Pearce and Stegman article appearing elsewhere in this issue. The recent release of a new set of purchasing power estimates of national income, and the preparations for creating new scenarios to support the IPCC's fifth assessment, highlight the importance of the issues which have arisen surrounding income and energy use. Comparing the 1993 and 2005 ICP results on Purchasing Power Parity (PPP) based measures of income reveals not only that the 2005 ICP estimates share the same issue of common growth rates for real income as measured by PPP and US $, but also that the lack of coherence in the estimates of PPP incomes, especially for developing countries, raises yet another obstacle to resolving the best way to measure income. Further, the common use of an income term to mediate energy demand (as in the Kaya identity) obscures an underlying reality about per capita energy demands, leading to unreasonable estimates of the impact of changing income measures and of the recent high GDP growth rates in India and China. Significant new research is required to create both a reasonable set of GDP growth rates and long-term levels of energy use.

An apparatus and method for measuring the electric properties of solid matter which provides data for determining the polarizability of the electron distributions contained therein is disclosed. A sample of the solid to be studied is placed between the plates of a capacitor where it acts as a dielectric. The sample is excited by the interaction of electromagnetic radiation with an atomic species contained in the sample. The voltage induced across the capacitor is then measured as a function of time with the aid of a high Q circuit tuned to a frequency related to the frequency of the applied electromagnetic energy.

By measuring transverse single spin asymmetries one has access to the transversity distribution function $\\Delta_T q(x)$ and the transverse momentum dependent Sivers function $q_0^T(x,\\vec{k}_T)$. New measurements from identified hadrons and hadron pairs, produced in deep inelastic scattering of a transversely polarized $^6LiD$ target are presented. The data were taken in 2003 and 2004 by the COMPASS collaboration using the muon beam of the CERN SPS at 160 GeV/c, resulting in small asymmetries.

The study of elastic neutron scattering at intermediate energies is essential for understanding the isovector term in the nucleon-nucleus interaction, as well as for the development of macroscopic and microscopic optical potentials at these energies. The techniques used for neutron scattering measurements are presented in this paper, as well as the difficulties encountered. The few facilities that have been used are reviewed, and a newly installed setup for such measurements in Uppsala is described. Finally, the normalization problem is specifically addressed.

The eddy correlation (ECOR) flux measurement system provides in situ, half-hour measurements of the surface turbulent fluxes of momentum, sensible heat, latent heat, and carbon dioxide (CO2) (and methane at one Southern Great Plains extended facility (SGP EF) and at the North Slope of Alaska Central Facility (NSA CF)). The fluxes are obtained with the eddy covariance technique, which involves correlation of the vertical wind component with the horizontal wind component, the air temperature, the water vapor density, and the CO2 concentration.
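The eddy covariance computation itself is compact: each flux is the air density times the appropriate specific heat (or conversion factor) times the covariance of the vertical wind with the scalar of interest over the averaging period. A minimal sketch for the sensible heat flux, with nominal constants and a simple block average assumed for illustration (an operational system would add coordinate rotation, despiking, and density corrections):

```python
import numpy as np

RHO_AIR = 1.2    # kg/m^3, nominal air density
CP_AIR = 1005.0  # J/(kg K), specific heat of air at constant pressure

def sensible_heat_flux(w, T):
    """Eddy-covariance sensible heat flux H = rho * cp * mean(w'T'),
    where primes denote deviations from the averaging-period means of
    the vertical wind w (m/s) and air temperature T (K)."""
    w = np.asarray(w, dtype=float)
    T = np.asarray(T, dtype=float)
    return RHO_AIR * CP_AIR * np.mean((w - w.mean()) * (T - T.mean()))
```

The latent heat and CO2 fluxes follow the same pattern, with water vapor density and CO2 concentration replacing temperature.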

This Measure Guideline covers installation of high-efficiency gas furnaces. Topics covered include when to install a high-efficiency gas furnace as a retrofit measure, how to identify and address risks, and the steps to be used in the selection and installation process. The guideline is written for Building America practitioners and HVAC contractors and installers. It includes a compilation of information provided by manufacturers, researchers, and the Department of Energy as well as recent research results from the Partnership for Advanced Residential Retrofit (PARR) Building America team.

. Short-range telemetry (4, 5) involves placing a frequency-modulated transmitter on the rotating member and locating a receiver in close proximity such that the data may be transferred from the rotating member to the stationary readout. Numbers... was insignificant. The above is the situation for which the measuring system was to be designed. The accuracy desired for the measuring system was specified as + 5L' by Mr. Alexander (6) as needed for his research. The approximate critical speed of the shaft...

A system and method of efficiently obtaining distance measurements of a target by scanning the target. An optical beam is provided by a light source and modulated by a frequency source. The modulated optical beam is transmitted to an acousto-optical deflector capable of changing the angle of the optical beam in a predetermined manner to produce an output for scanning the target. In operation, reflected or diffused light from the target may be received by a detector and transmitted to a controller configured to calculate the distance to the target as well as the measurement uncertainty in calculating the distance to the target.
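Phase-shift ranging with a modulated beam, as in the system described above, reduces to a one-line conversion: the modulation envelope returns delayed by the round-trip time 2d/c, so a measured phase shift dphi at modulation frequency f gives d = c*dphi/(4*pi*f). A minimal sketch with illustrative function names; the patent's actual signal processing is not specified here:

```python
import math

C = 299792458.0  # speed of light in vacuum, m/s

def distance_from_phase(phase_shift_rad, mod_freq_hz):
    """Round-trip phase-shift ranging: the return envelope lags by
    2*d/c, so d = c * dphi / (4 * pi * f_mod). Valid only while the
    phase shift stays below 2*pi (see ambiguity_range)."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz):
    """Maximum unambiguous distance, c / (2 * f_mod): beyond this the
    phase wraps and the measurement is ambiguous."""
    return C / (2.0 * mod_freq_hz)
```

The tradeoff is visible directly in the formulas: a higher modulation frequency improves distance resolution per radian of phase but shrinks the unambiguous range, which is why practical rangefinders often combine several modulation tones.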

Beam single-spin asymmetries in semi-inclusive single neutral pion electroproduction off an unpolarized hydrogen target in the deep inelastic scattering regime (Q{sup 2} > 1 GeV{sup 2}, W{sup 2} > 4 GeV{sup 2}) have been measured using a 5.776 GeV polarized electron beam with the CEBAF Large Acceptance Spectrometer at the Thomas Jefferson National Accelerator Facility (JLab). The measured kinematical dependences are compared with published data and existing theoretical predictions.

We report the latest results on the top-quark mass and on the top-antitop mass difference from the CDF and D0 collaborations using data collected at the Fermilab Tevatron $p\\bar{p}$ collider at $\\sqrt{s}=1.96$ TeV. We discuss general issues in top-quark mass measurements and present new results from direct measurements and from top-pair production cross-section. We also report new results on the top-antitop mass difference.

We present entropic uncertainty relations for multiple measurement settings in quantum mechanics. These uncertainty relations are obtained both with and without the presence of quantum memory. They take concise forms which can be proven in a unified way and are easy to calculate. Our results recover the well-known entropic uncertainty relations for two observables, which express the uncertainties about the outcomes of two incompatible measurements. These uncertainty relations are applicable both in the foundations of quantum theory and in the security of many quantum cryptographic protocols.

Many Landsat images of Antarctica show distinctive flow and crevasse features in the floating part of ice streams and outlet glaciers immediately below their grounding zones. Some of the features, which move with the glacier or ice stream, remain visible over many years and thus allow time-lapse measurements of ice velocities. Measurements taken from Landsat images of features on Byrd Glacier agree well with detailed ground and aerial observations. The satellite-image technique thus offers a rapid and cost-effective method of obtaining average velocities, to a first order of accuracy, of many ice streams and outlet glaciers near their termini.

This paper describes a keyboard control mode developed for the DEC VAX computer, in which keyboard codes can be read while a program is running. During loop measurements or multitask operation, a keyboard code can be distinguished so as to stop the current operation or transfer to another operation while the previous information is held. Using this mode, the author successfully applied one-key control of loop measurements to test the Dual Input Memory module used in the rearranged Energy Trigger system for LEP 8-bunch operation.

Recent measurements of the D\\O\\ experiment related to the search for new phenomena beyond the Standard Model are reviewed. The new measurement of the like-sign dimuon charge asymmetry reveals a 3.2$\\sigma$ deviation from the SM prediction, while the updated study of the $B_s \\to J/\\psi \\phi$ decay demonstrates a better agreement with the SM. All experimental results on the $CP$ violation in mixing are currently consistent with each other. The D\\O\\ collaboration has much more statistics to analyze, and all these results can be significantly improved in the future.

The present invention is directed to an apparatus and method for measuring the viscosity of a fluid. This apparatus and method is particularly useful for the measurement of the viscosity of a liquid in a harsh environment characterized by high temperature and the presence of corrosive or deleterious gases and vapors which adversely affect conventional ball or roller bearings. The apparatus and method of the present invention employ one or more flexural or torsional bearings to suspend a bob capable of limited angular motion within a rotatable sleeve suspended from a stationary frame. 7 figs.

We derive a formalism of stochastic master equations (SMEs) describing the decoherence dynamics of a system in spin environments conditioned on the measurement record. The Markovian or non-Markovian nature of the environment can be revealed by a spectroscopy method based on weak quantum measurement (weak spectroscopy). Because correlated environments can lead to a nonlocal open system exhibiting strong non-Markovian effects even though the local dynamics are Markovian, the spectroscopy method can be used to demonstrate correlation between two environments.

A precision measurement of the gravitational constant $G$ has been made using a beam balance. Special attention has been given to determining the calibration, the effect of a possible nonlinearity of the balance and the zero-point variation of the balance. The equipment, the measurements and the analysis are described in detail. The value obtained for $G$ is $6.674252(109)(54) \\times 10^{-11}$ m$^3$ kg$^{-1}$ s$^{-2}$. The relative statistical and systematic uncertainties of this result are $16.3 \\times 10^{-6}$ and $8.1 \\times 10^{-6}$, respectively.
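
The quoted relative uncertainties follow directly from the parenthesized absolute uncertainties on the last digits of the value; a quick arithmetic check (variable names are ours):

```python
G = 6.674252e-11      # measured value, m^3 kg^-1 s^-2
stat = 0.000109e-11   # statistical uncertainty (first parenthesis)
syst = 0.000054e-11   # systematic uncertainty (second parenthesis)

rel_stat = stat / G   # ~16.3e-6, as quoted in the abstract
rel_syst = syst / G   # ~8.1e-6, as quoted in the abstract
```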

Simultaneous measurement of neutron flux and temperature is provided by a single sensor which includes a phosphor mixture having two principal constituents. The first constituent is neutron-sensitive {sup 6}LiF and the second is a rare-earth-activated Y{sub 2}O{sub 3} thermophosphor. The mixture is coated on the end of a fiber optic, while the opposite end of the fiber optic is coupled to a light detector. The detected light scintillations are quantified for neutron flux determination, and the decay is measured for temperature determination. 3 figs.

In the first part of this two-part article, we have introduced and analyzed a multidimensional model, called the 'general tension-reduction' (GTR) model, able to describe general quantum-like measurements with an arbitrary number of outcomes, and we have used it as a general theoretical framework to study the most general possible condition of lack of knowledge in a measurement, so defining what we have called a 'universal measurement'. In this second part, we present the formal proof that universal measurements, which are averages over all possible forms of fluctuations, produce the same probabilities as measurements characterized by 'uniform' fluctuations on the measurement situation. Since quantum probabilities can be shown to arise from the presence of such uniform fluctuations, we have proven that they can be interpreted as the probabilities of a first-order non-classical theory, describing situations in which the experimenter lacks complete knowledge about the nature of the interaction between the measuring apparatus and the entity under investigation. This same explanation can be applied -- mutatis mutandis -- to the case of cognitive measurements, made by human subjects on conceptual entities, or in decision processes, although it is not necessarily the case that the structure of the set of states would be in this case strictly Hilbertian. We also show that universal measurements correspond to maximally 'robust' descriptions of indeterministic reproducible experiments, and since quantum measurements can also be shown to be maximally robust, this adds plausibility to their interpretation as universal measurements, and provides a further element of explanation for the great success of the quantum statistics in the description of a large class of phenomena.

Commission, all recommend or demand that hospitals monitor hand hygiene compliance. Basic research ... hygiene practices by the WHO and the CDC, compliance rates among healthcare staff remain low ... measured compliance of staff members on the floor. The method proved to be reliable and provided

Azimuthal Asymmetries in unpolarized SIDIS can be used to probe the transverse momentum of quarks inside the nucleon. Furthermore, they give access to the so-far unmeasured Boer-Mulders function. We report on the first measurement of azimuthal asymmetries of the SIDIS cross section from scattering of muons off a deuteron target.

COMPASS experiment measurements of the gluon polarisation in the nucleon, DeltaG/G, are reviewed. Two different approaches based on tagging the Photon-Gluon Fusion process are described. They rely on the detection of open charm mesons or high-p_T hadron pairs.

MEASURING AND MODELING THE WEB. A DISSERTATION SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE ... in the World Wide Web. The Web has now become a ubiquitous channel for information sharing and dissemination ... contributions in an endeavor towards a better understanding of the Web. We focus on two major topics: (1

A capability for measuring the thermal conductivity of microelectromechanical systems (MEMS) materials using a steady state resistance technique was developed and used to measure the thermal conductivities of SUMMiT{trademark} V layers. Thermal conductivities were measured over two temperature ranges: 100K to 350K and 293K to 575K in order to generate two data sets. The steady state resistance technique uses surface micromachined bridge structures fabricated using the standard SUMMiT fabrication process. Electrical resistance and resistivity data are reported for poly1-poly2 laminate, poly2, poly3, and poly4 polysilicon structural layers in the SUMMiT process from 83K to 575K. Thermal conductivity measurements for these polysilicon layers demonstrate for the first time that the thermal conductivity is a function of the particular SUMMiT layer. Also, the poly2 layer has a different variation in thermal conductivity as the temperature is decreased than the poly1-poly2 laminate, poly3, and poly4 layers. As the temperature increases above room temperature, the difference in thermal conductivity between the layers decreases.
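
As a rough illustration of how a steady-state measurement yields a conductivity, the lumped one-dimensional Fourier form below relates heater power, bridge geometry, and temperature rise. The actual analysis of a Joule-heated SUMMiT bridge involves distributed heating and a fit to the temperature profile, so this function and its example numbers are simplifying assumptions:

```python
def thermal_conductivity(power_w: float, length_m: float,
                         area_m2: float, delta_t_k: float) -> float:
    """Lumped 1-D Fourier's law: k = P * L / (A * dT).

    P is the heat conducted along a bridge of length L and
    cross-section A that sustains a temperature difference dT
    between its ends."""
    return power_w * length_m / (area_m2 * delta_t_k)

# Illustrative numbers: 60 uW through a 100-um-long bridge with a
# 2e-11 m^2 cross-section and a 10 K drop gives k = 30 W/(m*K),
# a plausible magnitude for polysilicon near room temperature.
k = thermal_conductivity(60e-6, 100e-6, 2e-11, 10.0)
```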

The modal interpretation of quantum mechanics allows one to keep the standard classical definition of realism intact. That is, variables have a definite status for all time and a measurement only tells us which value they had. However, at present modal dynamics are only applicable to situations that are described in the orthodox theory by projective measures. In this paper we extend modal dynamics to include positive operator measures (POMs). That is, for example, rather than using a complete set of orthogonal projectors, we can use an overcomplete set of nonorthogonal projectors. We derive the conditions under which Bell's stochastic modal dynamics for projective measures reduce to deterministic dynamics, showing (incidentally) that Brown and Hiley's generalization of Bohmian mechanics [quant-ph/0005026 (2000)] cannot be thus derived. We then show how {\\em deterministic} dynamics for positive operators can also be derived. As a simple case, we consider a harmonic oscillator and the overcomplete set of coherent state projectors (i.e. the Husimi POM). We show that the modal dynamics for this POM in the classical limit correspond to the classical dynamics, even for the nonclassical number state $\\ket{n}$. This is in contrast to the Bohmian dynamics, which for energy eigenstates are always non-classical.

Power Control Using Stochastic Measurements. Sennur Ulukus and Roy D. Yates (WINLAB, Rutgers University). Abstract: For wireless communication systems, iterative power control algorithms have been proposed ... an iterative, distributed power control algorithm in which each user needs only to know its own channel gain

In this article I report on new and updated measurements of the CP-violating parameter beta (phi_1), which is related to the phase of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix of the electroweak interaction. Over the past few years, beta has become the most precisely known parameter of the CKM unitarity triangle that governs the B system. The results presented here were produced by the two B factories, BaBar and Belle, based on their most recent datasets of over 600 million BB events combined. The new world average for sin(2beta), measured in the theoretically and experimentally cleanest charmonium modes, such as B -> J/psi K0s, is sin(2beta) = 0.685 +- 0.032. In addition to these tree-level dominated decays, independent measurements of sin(2beta) are obtained from gluonic b --> s penguin decays, including B --> phi K0s, B --> eta' K0s and others. There are hints, albeit somewhat weaker than earlier this year, that these measurements tend to come out low compared to the charmonium average, giving rise to the tantalizing possibility that New Physics amplitudes could be contributing to the corresponding loop diagrams. Clearly, more data from both experiments are needed to elucidate these intriguing differences.

Ammonia is a reactive trace gas that is emitted in large quantities by animal agriculture and other sources in California, which subsequently forms aerosol particulate matter, potentially affecting visibility, climate, and human health. We performed initial measurements of NH{sub 3} at the Blodgett Forest Research Station (BFRS) during a 3 week study in June, 2006. The site is used for ongoing air quality research and is a relatively low-background site in the foothills of the Sierra Nevada. Measured NH{sub 3} mixing ratios were quite low (< 1 to {approx}2 ppb), contrasting with typical conditions in many parts of the Central Valley. Eddy covariance measurements showed NH{sub 3} fluxes that scaled with measured NH{sub 3} mixing ratio and calculated aerodynamic deposition velocity, suggesting dry deposition is a significant loss mechanism for atmospheric NH{sub 3} at BFRS. A simple model of NH{sub 3} transport to the site supports the hypothesis that NH{sub 3} is transported from the Valley to BFRS, but deposits on vegetation during the summer. Further work is necessary to determine whether the results obtained in this study can be generalized to other seasons.

5.2.4 Personal, Fatigue & Delay. 5.2.5 Occurrences. ... between a time study, pre-determined time systems, standard time data, and work sampling. Each topic defines which work measurement method to use for each situation at Karls. This section also goes into great detail on how to develop personal, fatigue...

Alternative Energy Leadership Study: Measuring Performance Through a Multidisciplinary Lens. February ... and a major cause of pollution, scientists and engineers have sought for decades to develop alternative energy ... alternative energy-related research is critical for understanding the potential solutions emerging from

We present recent top physics results from CDF, including updates of the top mass, the \\ttbar cross section, the single top search, the forward-backward asymmetry, and the differential cross section of \\ttbar production. Most of the measurements utilize an integrated luminosity of close to 3 fb$^{-1}$.

Synthetic Aperture Radar (SAR) measures radar soundings from a set of locations typically along the flight path of a radar platform vehicle. Optimal focusing requires precise knowledge of the sounding source locations in 3-D space with respect to the target scene. Even data-driven focusing techniques (i.e. autofocus) require some degree of initial fidelity in the measurements of the motion of the radar. These requirements may be quite stringent, especially for fine resolutions, long ranges, and low velocities. The principal instrument for measuring motion is typically an Inertial Measurement Unit (IMU), but these instruments have inherently limited precision and accuracy. The question is "How good does an IMU need to be for a SAR across its performance space?" This report analytically relates IMU specifications to parametric requirements for SAR. Acknowledgements: The preparation of this report is the result of an unfunded research and development activity. Although this report is an independent effort, it draws heavily from limited-release documentation generated under a CRADA with General Atomics Aeronautical Systems, Inc. (GA-ASI), and under the Joint DoD/DOE Munitions Program Memorandum of Understanding. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

The ability of a nation to participate in the global knowledge economy depends to some extent on its capacities in science and technology. In an effort to assess the capacity of different countries in science and technology, this article updates a classification scheme developed by RAND to measure science and technology capacity for 150 countries of the world.

This measure guideline provides information and guidance on rehabilitating, retrofitting, and replacing existing window assemblies in residential construction. The intent is to provide information regarding means and methods to improve the energy and comfort performance of existing wood window assemblies in a way that takes into consideration component durability, in-service operation, and long term performance of the strategies.

Water Harvesting Measurements with Biomimetic Surfaces. Zi Jun Wang and Prof. Anne ... Objectives: Water is one of the most essential natural resources. The easy accessibility of water ... Methodology: ... parameters that affect the water harvesting efficiencies of different surfaces; optimize the experimental ...

ROBUSTNESS MEASURE FOR FUZZY MAINTENANCE ACTIVITIES SCHEDULE. François MARMIER, Christophe VARNIER. ... activities: the preventive maintenance, whose activities can be long-term planned, and the corrective ... Especially in the field of maintenance services, where the different practical knowledge or skills ...

A method to measure scaling rate and the effect of scale control agents are discussed. It is based on calcium carbonate growth under controlled conditions in a capillary stainless steel column. The efficacy of blended compositions can be predicted when the response of individual components is known.

A method is presented that ascribes proper statistical variability to simulations that are derived from longer-duration measurements. This method is applicable to simulations of either real-value or integer-value data. An example is presented that demonstrates the applicability of this technique to the synthesis of gamma-ray spectra.
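
One common way to ascribe proper counting statistics to a spectrum synthesized from a longer-duration measurement is to scale each channel to the simulated live time and replace it with a Poisson draw. The sketch below illustrates that idea for integer-value data; it is our assumption of the approach, not the report's stated algorithm:

```python
import math
import random

def resample_counts(template_counts, live_time_ratio, seed=None):
    """Synthesize integer channel counts from a long-duration template
    spectrum.  Each channel mean is scaled by the ratio of simulated
    to measured live time, then replaced by a Poisson variate so the
    output carries realistic statistical variability."""
    rng = random.Random(seed)
    out = []
    for c in template_counts:
        mean = c * live_time_ratio
        # Knuth's multiplicative Poisson sampler (fine for modest means)
        threshold = math.exp(-mean)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                break
            k += 1
        out.append(k)
    return out
```

For real-value data the same scaling step applies, with the Poisson draw replaced by an appropriate continuous distribution (e.g. Gaussian with variance taken from the measurement).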

... ν̄-Fe differential cross sections. The extraction is performed in a physics model independent (PMI) ... is higher than current theoretical predictions. The ratio of the F2 (PMI) values measured in ν ... by extracting the ν and ν̄ structure functions in a physics model independent (PMI) way. We also report ...

... angular reflectance behavior and point the way toward future studies. Definition of BRDF and REFF ... the ridges seen in the enlarged light microscope image of the specimen (shown at right), reflectance exposure images, and combination of three measured wavelengths. Transmission electron microscopy (TEM ...

In this lecture, after recalling the basic definitions and facts about the running coupling in QCD, I present a critical discussion of the methods for measuring $\\alpha_s$ and select those that appear to me to be the most reliably precise.

Chapter 7: ON-LINE OPTIMIZATION AND SELECTION OF MEASUREMENTS. This is the last of three chapters that discuss optimal operation of a general heat exchanger network. A method that combines the use of steady state optimization and decentralized feedback control is proposed. A general steady state model

System Measures: §110-§113: HVAC equipment, water heaters, showerheads, faucets and all other regulated ...): Mandatory vapor barrier installed in Climate Zones 14 or 16. §150(l): Water absorption rate for slab edge insulation material alone without facings is no greater than 0.3%; water vapor permeance rate is no greater

45. Neutrino Cross Section Measurements. Written in April ... With the advent of intense neutrino sources for oscillation ... for such information in the interpretation of neutrino oscillation data. Scattering results on both charged current (CC) ... analysis techniques, and detector technologies.

... laws of quantum mechanics: measuring a system perturbs its state, and specifically destroys the phase ... on the qubit are not yet completely understood. In this article we report measurements where the Cooper-pair ... due to the minimized backaction of the SSET, we observe a 2e-periodic Coulomb staircase according to the two-level ...

Measurement Form Please fill out this measurement sheet to the best of your ability. It is easiest to get a friend or parent to help you take your measurements. If you don't have a measuring tape, use a piece of string, or ribbon, and then mark and measure it with a ruler or yardstick. The more accurate

The authors report on the final electroweak measurements performed with data taken at the Z resonance by the experiments operating at the electron-positron colliders SLC and LEP. The data consist of 17 million Z decays accumulated by the ALEPH, DELPHI, L3 and OPAL experiments at LEP, and 600 thousand Z decays by the SLD experiment using a polarized beam at SLC. The measurements include cross-sections, forward-backward asymmetries and polarized asymmetries. The mass and width of the Z boson, m{sub Z} and {Gamma}{sub Z}, and its couplings to fermions, for example the {rho} parameter and the effective electroweak mixing angle for leptons, are precisely measured: m{sub Z} = 91.1875 {+-} 0.0021 GeV; {Gamma}{sub Z} = 2.4952 {+-} 0.0023 GeV; {rho}{sub {ell}} = 1.0050 {+-} 0.0010; sin{sup 2}{theta}{sub eff}{sup lept} = 0.23153 {+-} 0.00016. The number of light neutrino species is determined to be 2.9840 {+-} 0.0082, in agreement with the three observed generations of fundamental fermions. The results are compared to the predictions of the Standard Model. At the Z-pole, electroweak radiative corrections beyond the running of the QED and QCD coupling constants are observed with a significance of five standard deviations, and in agreement with the Standard Model. Of the many Z-pole measurements, the forward-backward asymmetry in b-quark production shows the largest difference with respect to its Standard Model expectation, at the level of 2.8 standard deviations. Through radiative corrections evaluated in the framework of the Standard Model, the Z-pole data are also used to predict the mass of the top quark, m{sub t} = 173{sub -10}{sup +13} GeV, and the mass of the W boson, m{sub W} = 80.363 {+-} 0.032 GeV. These indirect constraints are compared to the direct measurements, providing a stringent test of the Standard Model.
Using in addition the direct measurements of m{sub t} and m{sub W}, the mass of the as yet unobserved Standard Model Higgs boson is predicted with a relative uncertainty of about 50% and found to be less than 285 GeV at 95% confidence level.

This document provides standard definitions of performance metrics and methods to determine them for the energy performance of building interior lighting systems. It can be used for existing buildings and for proposed buildings. The primary users for whom these documents are intended are building energy analysts and technicians who design, install, and operate data acquisition systems, and who analyze and report building energy performance data. Typical results from the use of this procedure are the monthly and annual energy used for lighting, energy savings from occupancy or daylighting controls, and the percent of the total building energy use that is used by the lighting system. The document is not specifically intended for retrofit applications. However, it does complement Measurement and Verification protocols that do not provide detailed performance metrics or measurement procedures.

This report focuses on work conducted at Pacific Northwest National Laboratory to better characterize aspects of backgrounds in RPMs deployed for homeland security purposes. Two polyvinyl toluene scintillators were utilized with supporting NIM electronics to measure the muon coincidence rate. Muon spallation is one mechanism by which background neutrons are produced. The measurements performed concentrated on a broad investigation of the dependence of the muon flux on a) variations in solid angle subtended by the detector; b) the detector inclination with the horizontal; c) depth underground; and d) diurnal effects. These tests were conducted inside at Building 318/133, outdoors at Building 331G, and underground at Building 3425 at Pacific Northwest National Laboratory.

A low detection limit analytical method was developed to measure a suite of benzoic acid and fluorinated benzoic acid compounds intended for use as tracers for enhanced oil recovery operations. Although the new high performance liquid chromatography separation successfully measured the tracers in an aqueous matrix at low part per billion levels, the low detection limits could not be achieved in oil field water due to interference problems with the hydrocarbon-saturated water using the system's UV detector. Commercial instrument vendors were contacted in an effort to determine if mass spectrometry could be used as an alternate detection technique. The results of their work demonstrate that low part per billion analysis of the tracer compounds in oil field water could be achieved using ultra performance liquid chromatography mass spectrometry.

Surface contamination evaluation is a difficult problem, since it is hard to isolate the radiation emitted by the surface, especially in a highly irradiating atmosphere. In that case the only possibility is to evaluate smearable (removable) contamination, since ex-situ counting is possible. Unfortunately, according to our experience at CEA, these values are not consistent and thus not relevant. In this study we show, using in-situ Fourier Transform Infra-Red spectrometry on contaminated metal samples, that fixed contamination seems to be chemisorbed and removable contamination seems to be physisorbed. The distribution between fixed and removable contamination appears to be variable. Chemical equilibria and reversible ion exchange mechanisms are involved and are closely linked to environmental conditions such as humidity and temperature. Measurements of smearable contamination only give an indication of the state of these equilibria between fixed and removable contamination at the time, and in the environmental conditions, the measurements were made.

Advanced technological uses of single-wall carbon nanotubes (SWCNTs) rely on the production of single length and chirality populations that are currently only available through liquid phase post processing. The foundation of all of these processing steps is the attainment of individualized nanotube dispersion in solution; an understanding of the colloidal properties of the dispersed SWCNTs can then be used to design appropriate conditions for separations. In many instances nanotube size, particularly length, is especially active in determining the achievable properties from a given population, and thus there is a critical need for measurement technologies for both length distribution and effective separation techniques. In this Progress Report, we document the current state of the art for measuring dispersion and length populations, including separations, and use examples to demonstrate the desirability of addressing these parameters.

Measurement of microfocus spot size can be important for several reasons: (1) quality assurance during manufacture of microfocus tubes; (2) tracking performance and stability of microfocus tubes; (3) determining magnification (especially important for digital radiography where the native spatial resolution of the digital system is not adequate for the application); (4) knowledge of unsharpness from the focal spot alone. The European Standard EN 12543-5 is based on a simple geometrical method of calculating focal spot size from the unsharpness of high-magnification film radiographs. When determining microfocus focal spot dimensions using unsharpness measurements, both the signal-to-noise ratio (SNR) and the magnification can be important. There is a maximum accuracy that is a function of SNR and therefore an optimal magnification. Greater than optimal magnification can be used, but it will not increase accuracy.
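
The geometry behind this kind of unsharpness method is simple to state: a point projection at magnification M smears a focal spot of size f into an image unsharpness U = f * (M - 1). A hedged one-line reduction of that relation (the standard itself adds SNR and procedural requirements that this ignores):

```python
def focal_spot_size(unsharpness_mm: float, magnification: float) -> float:
    """Geometric focal-spot estimate: the image unsharpness in a
    point-projection radiograph is U = f * (M - 1), so the focal
    spot size is f = U / (M - 1)."""
    return unsharpness_mm / (magnification - 1.0)

# e.g. 0.9 mm of measured unsharpness at 10x magnification -> 0.1 mm spot
spot = focal_spot_size(0.9, 10.0)
```

The formula also makes the magnification trade-off visible: at large M the unsharpness grows, so noise in its measurement matters less, until the SNR limit described above caps the achievable accuracy.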

Characterization of thermoelectric materials can pose many problems. A temperature difference can be established across these materials as an electrical current is passed, due to the Peltier effect. The thermopower of these materials is quite large, and thus large thermal voltages can contribute to many of the measurements necessary to investigate these materials. This paper will discuss the characterization techniques necessary to investigate these materials and provide an overview of some of the potential systematic errors which can arise. It will also discuss some of the corrections one needs to consider. This should provide an introduction to the characterization and measurement of thermoelectric materials and provide references for a more in-depth discussion of the concepts. It should also serve as an indication of the care that must be taken while working with thermoelectric materials.

A relationship between power anisotropy and wavevector anisotropy in turbulent fluctuations is derived. This can be used to interpret plasma turbulence measurements, for example in the solar wind. If fluctuations are anisotropic in shape then the ion gyroscale break point in spectra in the directions parallel and perpendicular to the magnetic field would not occur at the same frequency, and similarly for the electron gyroscale break point. This is an important consideration when interpreting solar wind observations in terms of anisotropic turbulence theories. Model magnetic field power spectra are presented assuming a cascade of critically balanced Alfven waves in the inertial range and kinetic Alfven waves in the dissipation range. The variation of power anisotropy with scale is compared to existing solar wind measurements and the similarities and differences are discussed.

A light-emitting diode (LED) pulser for testing the low-rate response of a photomultiplier tube (PMT) to scintillator-like pulses has been designed, developed, and implemented. This pulser is intended to simulate 80 ns full width at half maximum photon pulses over the dynamic range of the PMT, in order to precisely determine PMT linearity. This particular design has the advantage that, unlike many LED test rigs, it does not require the use of multiple calibrated LEDs, making it insensitive to LED gain drifts. Instead, a finite-difference measurement is made using two LEDs which need not be calibrated with respect to one another. These measurements give a better than 1% mapping of the response function, allowing for the testing and development of particularly linear PMT bases.
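
The finite-difference idea reduces to an additivity check: fire LED A alone, LED B alone, then both together, and any shortfall of the combined response maps the local nonlinearity. A minimal sketch of that reduction (our formulation, not the pulser's published analysis):

```python
def nonlinearity(r_a: float, r_b: float, r_ab: float) -> float:
    """Fractional deviation from additivity in a two-LED test.

    A perfectly linear PMT gives r_ab == r_a + r_b when both LEDs
    fire together; the fractional difference, measured at many light
    levels, maps the response function without requiring either LED
    to be calibrated or gain-stable in absolute terms."""
    return (r_ab - (r_a + r_b)) / (r_a + r_b)

# Equal pulses summing to 1.98 of the expected 2.00 -> 1% compression.
nl = nonlinearity(1.0, 1.0, 1.98)
```

Because only the difference between combined and summed responses enters, slow drifts in either LED's output cancel to first order, which is the insensitivity to gain drift the abstract highlights.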

To compute the spectrum of bubble collisions seen by an observer in an eternally-inflating multiverse, one must choose a measure over the diverging spacetime volume, including choosing an "initial" hypersurface below which there are no bubble nucleations. Previous calculations focused on the case where the initial hypersurface is pushed arbitrarily deep into the past. Interestingly, the observed spectrum depends on the orientation of the initial hypersurface; however, one's ability to observe the effect rapidly decreases with the ratio of inflationary Hubble rates inside and outside one's bubble. We investigate whether this conclusion might be avoided under more general circumstances, in particular placing the observer's bubble near the initial hypersurface. We find that it is not. As a point of reference, a substantial appendix reviews relevant aspects of the measure problem of eternal inflation.

We discuss the duality, conjectured in earlier work, between the wave function of the multiverse and a 3D Euclidean theory on the future boundary of spacetime. In particular, we discuss the choice of the boundary metric and the relation between the UV cutoff scale xi on the boundary and the hypersurfaces Sigma on which the wave function is defined in the bulk. We propose that in the limit of xi going to 0 these hypersurfaces should be used as cutoff surfaces in the multiverse measure. Furthermore, we argue that in the inflating regions of spacetime with a slowly varying Hubble rate H the hypersurfaces Sigma are surfaces of constant comoving apparent horizon (CAH). Finally, we introduce a measure prescription (called CAH+) which appears to have no pathological features and coincides with the constant CAH cutoff in regions of slowly varying H.

We present a powerful method to compute entanglement measures based on convex roof constructions. In particular, our method is applicable to measures that, for pure states, can be written as low-order polynomials of operator expectation values. We show how to compute the linear entropy of entanglement, the linear entanglement of assistance, and a bound on the dimension of the entanglement for bipartite systems. We discuss how to obtain the convex roof of the three-tangle for three-qubit states. We also show how to calculate the linear entropy of entanglement and the quantum Fisher information based on partial or device-independent information. We demonstrate the usefulness of our method with concrete examples.
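For a pure bipartite state the linear entropy of entanglement is E_lin(psi) = 2(1 - Tr[rho_A^2]), where rho_A is the reduced state of subsystem A; this is the pure-state quantity whose convex roof the method above bounds for mixed states. A minimal sketch of the pure-state formula (the convex-roof optimization itself is not reproduced here):

```python
import numpy as np

def linear_entropy_of_entanglement(psi, dim_a, dim_b):
    """E_lin = 2(1 - Tr[rho_A^2]) for a pure state vector psi on
    C^dim_a (x) C^dim_b. Zero for product states, 1 for a maximally
    entangled two-qubit state."""
    psi = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    rho_a = psi @ psi.conj().T          # reduced state: partial trace over B
    purity = np.real(np.trace(rho_a @ rho_a))
    return 2.0 * (1.0 - purity)

product = np.kron([1, 0], [1, 0])              # |00>: unentangled
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # maximally entangled
```

For example, `linear_entropy_of_entanglement(product, 2, 2)` vanishes while the Bell state gives 1, the two extremes the convex-roof construction interpolates between.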

Phase equilibrium measurements have been performed on nine binary mixtures. The PTx method was used to obtain vapor-liquid equilibrium data for the following systems at two temperatures each: (aminoethyl)piperazine + diethylenetriamine; 2-butoxyethyl acetate + 2-butoxyethanol; 2-methyl-2-propanol + 2-methylbutane; 2-methyl-2-propanol + 2-methyl-2-butene; methacrylonitrile + methanol; 1-chloro-1,1-difluoroethane + hydrogen chloride; 2-(hexyloxy)ethanol + ethylene glycol; butane + ammonia; propionaldehyde + butane. Equilibrium vapor and liquid phase compositions were derived from the PTx data using the Soave equation of state to represent the vapor phase and the Wilson or NRTL activity coefficient model to represent the liquid phase. A large immiscibility region exists in the butane + ammonia system at 0 °C. Therefore, separate vapor-liquid-liquid equilibrium measurements were performed on this system to more precisely determine the miscibility limits and the composition of the vapor phase in equilibrium with the two liquid phases.
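The binary Wilson model used in the data reduction can be sketched as below. The Lambda interaction parameters are illustrative placeholders; in practice they are regressed from the measured PTx data for each system.

```python
import math

def wilson_gammas(x1, lam12, lam21):
    """Binary Wilson activity-coefficient model: returns (gamma1, gamma2)
    for liquid mole fraction x1 of component 1 and Wilson parameters
    Lambda12, Lambda21. Reduces to an ideal solution (gammas = 1) when
    both parameters equal 1."""
    x2 = 1.0 - x1
    term = lam12 / (x1 + lam12 * x2) - lam21 / (x2 + lam21 * x1)
    ln_g1 = -math.log(x1 + lam12 * x2) + x2 * term
    ln_g2 = -math.log(x2 + lam21 * x1) - x1 * term
    return math.exp(ln_g1), math.exp(ln_g2)

# Hypothetical parameters, not values regressed in this work:
gamma1, gamma2 = wilson_gammas(0.4, 0.65, 1.2)
```

Combined with vapor-phase fugacity coefficients from the Soave equation of state, these activity coefficients close the modified Raoult's law relation from which the vapor and liquid compositions are back-calculated.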

An optical measurement system is presented that offers precision on-line monitoring of the quality of steam. Multiple wavelengths of radiant energy are passed through the steam from an emitter to a detector. By comparing the amount of radiant energy absorbed by the flow of steam at each wavelength, a highly accurate measurement of the steam quality can be determined on a continuous basis in real time. In an embodiment of the present invention, the emitter comprises three separate radiant energy sources for transmitting specific wavelengths of radiant energy through the steam. In a further embodiment, the wavelengths of radiant energy are combined into a single beam of radiant energy for transmission through the steam using time or wavelength division multiplexing. In yet a further embodiment, the single beam of radiant energy is transmitted using specialized optical elements.
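One way the wavelength comparison can recover steam quality is via the Beer-Lambert law: with known absorption coefficients for the vapor and liquid phases at two wavelengths, the measured absorbances form a linear system for the path-integrated mass of each phase, and quality is the vapor mass fraction. The coefficient values below are illustrative assumptions, not calibration data from the patent.

```python
import numpy as np

def steam_quality(absorbances, coeffs):
    """Solve A_i = sum_j k_ij * m_j for the phase masses m = [vapor, liquid]
    and return the steam quality m_vapor / (m_vapor + m_liquid)."""
    masses = np.linalg.solve(np.asarray(coeffs, float),
                             np.asarray(absorbances, float))
    return masses[0] / masses.sum()

# Hypothetical absorption coefficients k_ij (wavelength i, phase j):
k = np.array([[2.0, 0.5],    # wavelength 1: absorbed mostly by vapor
              [0.3, 1.8]])   # wavelength 2: absorbed mostly by liquid

# Synthetic flow carrying 0.9 mass units of vapor and 0.1 of liquid:
a = k @ np.array([0.9, 0.1])
```

A third wavelength, as in the three-source embodiment, would overdetermine the system and allow a least-squares solution that is more robust to noise.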

A level measurement system suitable for use in a high temperature and pressure environment to measure the level of coolant fluid within the environment, the system including a volume of coolant fluid located in a coolant region of the high temperature and pressure environment and having a level therein; an ultrasonic waveguide blade that is positioned within the desired coolant region of the high temperature and pressure environment; a magnetostrictive electrical assembly located within the high temperature and pressure environment and configured to operate in the environment and cooperate with the waveguide blade to launch and receive ultrasonic waves; and an external signal processing system located outside of the high temperature and pressure environment and configured for communicating with the electrical assembly located within the high temperature and pressure environment.

Subcritical source-driven noise measurements are simultaneous Rossi-alpha and randomly pulsed neutron measurements that provide measured quantities that can be related to the subcritical neutron multiplication factor. In fact, subcritical source-driven noise measurements should be performed in lieu of Rossi-alpha measurements because of the additional information that is obtained from noise measurements, such as the spectral ratio and the coherence functions. The basic understanding of source-driven noise analysis measurements can be developed from a point reactor kinetics model to demonstrate how the measured quantities relate to the subcritical neutron multiplication factor.
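As a concrete illustration of the point-kinetics connection, the prompt-neutron decay constant alpha measured in a Rossi-alpha experiment satisfies alpha = (beta_eff - rho) / Lambda with rho = (k_eff - 1) / k_eff, which can be inverted for the multiplication factor. The beta_eff and generation-time values below are illustrative assumptions, not measured system parameters.

```python
def k_eff_from_alpha(alpha, beta_eff=0.0065, gen_time=1.0e-5):
    """Invert the point-kinetics relation alpha = (beta_eff - rho)/Lambda
    for k_eff, with reactivity rho in dk/k units. beta_eff and the neutron
    generation time Lambda (gen_time, s) are placeholder values."""
    rho = beta_eff - alpha * gen_time
    return 1.0 / (1.0 - rho)

# At delayed critical (rho = 0) the decay constant is beta_eff / Lambda:
alpha_critical = 0.0065 / 1.0e-5    # s^-1
```

Faster prompt decay (larger alpha) corresponds to a more subcritical system (smaller k_eff), which is the qualitative behavior both Rossi-alpha and source-driven noise analyses exploit.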