Sample records for motorcycles mopeds large

6.1 MOTOR BIKES, MOPEDS, AND MOTOR SCOOTERS: Registration and Operation. Motor Bikes, Mopeds, and Motor Scooters are defined as motor vehicles and are subject to all regulations governing motor vehicle operation on the grounds of the University. Such a motor vehicle owned and operated by a member...

Motorcycle windshield wipers are essentially non-existent in the United States. Customer and market research reveals a demand for such a product. This paper explores the product viability of a modular motorcycle windshield ...

Control of Motorcycle Steering Instabilities. Simos Evangelou, David J.N. Limebeer, Robin S. Sharp, and Malcolm C. Smith. Advances in modeling bicycle and motorcycle dynamics are providing improved ... of these modes are wobble and weave. Wobble is a steering oscillation that is reminiscent of caster shimmy...

The reconstructed torque is used as the main control input to the virtual motorcycle dynamic model, in order to actuate the simulator's platform. The steering system is modeled as a haptic display subjected to the torque arising from the real tire-road contact. The control approach is based on a robust tracking problem for a reference...

A New Motorcycle Simulator Platform: Mechatronics Design, Dynamics Modeling and Control. L. Nehaoua ... and dynamics modeling will be presented. Some results are shown, validating the actuation requirements and platform control. 1. INTRODUCTION: Road safety has become a major political and economic issue. While all...

Experimental testing and modelling of a passive mechanical steering compensator for high-performance motorcycles ... of the method to the control of motorcycle steering instabilities. Simulation studies have shown ... This paper presents experimental results and a modelling study of a prototype mechanical device that repre...

... such as the roll angle and the steering torque. Given these features, the control of PTWs (powered two-wheelers) is still a challenge ... a motorcycle using nonlinear equations of motion derived from a highly simplified model. In [7, 8], control techniques are considered with more complex and realistic models but with the as...

... increasing. Thus France, as well as Europe more broadly, has launched several research programs to study ... INTRODUCTION: The number of deaths in road accidents has seen a large reduction, of about 50%, during the last decade. However, analysis of accident statistics shows that the number of deaths when a motorcycle...

horsepower. The top speed is 60-70 mph, adjustable via the choice of sprocket gearing. Its claimed range is 35-60 miles, depending on how you ride, from a 3.3 kilowatt-hour lithium battery pack. The battery pack can be recharged... of the bike's parameters. Performance comes from a liquid-cooled, 3-phase AC (alternating current) induction motor and a proprietary high-energy lithium-ion battery pack, plus adjustable regenerative braking to capture wasted energy for battery recharging...
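
A quick sanity check on the figures quoted in this record: a 3.3 kWh pack and a 35-60 mile range imply roughly 55-95 Wh per mile. A minimal Python sketch of the arithmetic, using only numbers taken from the record above:

```python
# Back-of-the-envelope efficiency check for the electric motorcycle
# described above: a 3.3 kWh pack with a claimed 35-60 mile range.
PACK_KWH = 3.3          # battery pack capacity, from the record
RANGE_MILES = (35, 60)  # claimed range, riding-style dependent

for miles in RANGE_MILES:
    wh_per_mile = PACK_KWH * 1000 / miles
    print(f"{miles} mi range -> {wh_per_mile:.0f} Wh/mile")
# 35 mi range -> 94 Wh/mile
# 60 mi range -> 55 Wh/mile
```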

The objective of this thesis is to present the foundation of an automated large-scale disease prediction system. Unlike previous work that has typically focused on a small self-contained dataset, we explore the possibility ...

The book deals with the following aspects of transformer engineering: general principles governing the function of transformers, iron cores, windings, stray losses caused by stray flux, the insulation of transformers, and the structural parts and accessories. This edition includes the developments in theory and practice on the basis of the authors' experience in design, manufacturing and testing of large transformers. New developments have been particularly extensive in the fields of new magnetic materials, cooling methods, dielectric strength for overvoltages of different types, and stray-load loss problems, which are presented in the book in detail. The many diagrams in the book can be used directly in the design, manufacture and testing of large transformers. In preparing their text, the authors have aimed to satisfy the demand for a work that summarizes the latest experience in development and design of large power transformers.

Disposing of large animal carcasses can be a problem for agricultural producers. Composting is a simple, low-cost method that yields a useful product that can be used as fertilizer. In this publication you'll learn the basics of composting, how...

Hyperspectral imaging produces a spectrum or vector at each image pixel. These spectra can be used to identify materials present in the image. In some cases, spectral libraries representing atmospheric chemicals or ground materials are available. The challenge is to determine if any of the library chemicals or materials exist in the hyperspectral image. The number of spectra in these libraries can be very large, far exceeding the number of spectral channels collected in the field. Suppose an image pixel contains a mixture of p spectra from the library. Is it possible to uniquely identify these p spectra? We address this question in this paper and refer to it as the Large Spectral Library (LSL) problem. We show how to determine if unique identification is possible for any given library. We also show that if p is small compared to the number of spectral channels, it is very likely that unique identification is possible. We show that unique identification becomes less likely as p increases.
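
In linear-algebra terms, the identifiability question above is the standard sparse-recovery condition: a mixture of p library spectra is uniquely identifiable if every 2p columns of the library matrix are linearly independent (spark greater than 2p). The sketch below illustrates that criterion; it is not the paper's own algorithm, and it caps the number of column subsets it examines, since exhaustive checking is combinatorial:

```python
import numpy as np
from itertools import combinations

def mixtures_identifiable(library, p, max_subsets=20000):
    """Sufficient-condition check for the LSL problem: a mixture of p
    library spectra is uniquely identifiable if every 2p columns of the
    library matrix (channels x spectra) are linearly independent.
    Only the first max_subsets column subsets are examined."""
    channels, n_spectra = library.shape
    if 2 * p > channels:  # rank can never reach 2p: criterion fails
        return False
    for i, cols in enumerate(combinations(range(n_spectra), 2 * p)):
        if i >= max_subsets:
            break  # give up on exhaustiveness for large libraries
        if np.linalg.matrix_rank(library[:, list(cols)]) < 2 * p:
            return False
    return True

# Toy usage: 50 channels, 200 library spectra, mixtures of p = 3.
lib = np.random.default_rng(1).normal(size=(50, 200))
print(mixtures_identifiable(lib, p=3))  # random spectra: almost surely True
```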

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

It's easy to take the myriad types of large-area and decorative coatings for granted. We probably don't even think about most of them: the low-e and heat-mirror coatings on our windows and car windows, the mirrors in displays, antireflection coatings on windows and displays, protective coatings on aircraft windows, heater coatings on windshields and aircraft windows, solar reflectors, thin film solar cells, telescope mirrors, the Hubble mirrors, transparent conductive coatings, and the list goes on. All these products require large deposition systems and chambers. Also, don't forget that large batches of small substrates or parts are coated in large chambers. To be cost effective, hundreds of ophthalmic lenses, automobile reflectors, display screens, lamp reflectors, cell phone windows, laser reflectors, or DWDM filters are coated in batches.

In this paper we discuss the problems associated with the description and manipulation of large systems when their sources are not maintained as single files. We show why and how tools that address these issues, such ...

We use analytic conformal bootstrap methods to determine the anomalous dimensions and OPE coefficients for large spin operators in general conformal field theories in four dimensions containing a scalar operator of conformal dimension $\Delta_\phi$. It is known that such theories will contain an infinite sequence of large spin operators with twists approaching $2\Delta_\phi+2n$ for each integer $n$. By considering the case where such operators are separated by a twist gap from other operators at large spin, we analytically determine the $n$, $\Delta_\phi$ dependence of the anomalous dimensions. We find that for all $n$, the anomalous dimensions are negative for $\Delta_\phi$ satisfying the unitarity bound, thus extending the Nachtmann theorem to non-zero $n$. In the limit when $n$ is large, we find agreement with the AdS/CFT prediction corresponding to the Eikonal limit of a 2-2 scattering with dominant graviton exchange.
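
Schematically, the large-spin spectrum described in this record takes the following form (a standard restatement of the twist behavior, not a formula quoted from the paper):

```latex
% Double-twist operators at large spin \ell: for each integer n the twist
% approaches 2\Delta_\phi + 2n, corrected by an anomalous dimension
% \gamma(n,\ell) that is negative and vanishes at infinite spin.
\tau(n,\ell) = 2\Delta_\phi + 2n + \gamma(n,\ell),
\qquad \gamma(n,\ell) < 0,
\qquad \lim_{\ell\to\infty} \gamma(n,\ell) = 0 .
```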

A review of research into the burning behaviour of large pool fires and fuel spill fires is presented. The features which distinguish such fires from smaller pool fires are mainly associated with the fire dynamics at low ...

We have developed a new approach to constructing large aperture optical switches for next generation inertial confinement fusion lasers. A transparent plasma electrode formed in low pressure ionized gas acts as a conductive coating to allow the uniform charging of the optical faces of an electro-optic material. In this manner large electric fields can be applied longitudinally to large aperture, high aspect ratio Pockels cells. We propose a four-electrode geometry to create the necessary high conductivity plasma sheets, and have demonstrated fast (less than 10 nsec) switching in a 5x5 cm aperture KD*P Pockels cell with such a design. Detailed modelling of Pockels cell performance with plasma electrodes has been carried out for 15 and 30 cm aperture designs.

CALIFORNIA ENERGY COMMISSION: Large HVAC Building Survey Information Database of Buildings over 100 ... Energy Systems: Productivity and Building Science Program. This program was funded by the California ... Portland Energy Conservation, Inc. Project Management: Cathy Higgins, Program Director for New Buildings...

This paper describes the removal and disposal of the large components from Maine Yankee Atomic Power Plant. The large components discussed include the three steam generators, pressurizer, and reactor pressure vessel. Two separate Exemption Requests, which included radiological characterizations, shielding evaluations, structural evaluations and transportation plans, were prepared and issued to the DOT for approval to ship these components; the first was for the three steam generators and one pressurizer, the second was for the reactor pressure vessel. Both Exemption Requests were submitted to the DOT in November 1999. The DOT approved the Exemption Requests in May and July of 2000, respectively. The steam generators and pressurizer have been removed from Maine Yankee and shipped to the processing facility. They were removed from Maine Yankee's Containment Building, loaded onto specially designed skid assemblies, transported onto two separate barges, tied down to the barges, then shipped 2750 miles to Memphis, Tennessee for processing. The Reactor Pressure Vessel Removal Project is currently under way and scheduled to be completed by Fall of 2002. The planning, preparation and removal of these large components have required extensive efforts in planning and implementation on the part of all parties involved.

The decontamination and decommissioning (D and D) of 1200 buildings within the US Department of Energy-Office of Environmental Management (DOE-EM) Complex will require the disposition of miles of pipe. The disposition of large-bore pipe, in particular, presents difficulties in the area of decontamination and characterization. The pipe is potentially contaminated internally as well as externally. This situation requires a system capable of decontaminating and characterizing both the inside and outside of the pipe. Current decontamination and characterization systems are not designed for application to this geometry, making the direct disposal of piping systems necessary in many cases. The pipe often creates voids in the disposal cell, which requires the pipe to be cut in half or filled with a grout material. These methods are labor intensive and costly to perform on large volumes of pipe. Direct disposal does not take advantage of recycling, which could provide monetary dividends. To facilitate the decontamination and characterization of large-bore piping and thereby reduce the volume of piping required for disposal, a detailed analysis will be conducted to document the pipe remediation problem set; determine potential technologies to solve this remediation problem set; design and laboratory test potential decontamination and characterization technologies; fabricate a prototype system; provide a cost-benefit analysis of the proposed system; and transfer the technology to industry. This report summarizes the activities performed during fiscal year 1997 and describes the planned activities for fiscal year 1998. Accomplishments for FY97 include the development of the applicable and relevant and appropriate regulations, the screening of decontamination and characterization technologies, and the selection and initial design of the decontamination system.

The Large Aperture GRB Observatory (LAGO) aims at the detection of the high energy (around 100 GeV) component of Gamma Ray Bursts, using the single particle technique in arrays of Water Cherenkov Detectors (WCD) at high mountain sites (Chacaltaya, Bolivia, 5300 m a.s.l.; Pico Espejo, Venezuela, 4750 m a.s.l.; Sierra Negra, Mexico, 4650 m a.s.l.). WCD at high altitude offer a unique possibility of detecting low gamma fluxes in the 10 GeV - 1 TeV range. The status of the Observatory and data collected from 2007 to date will be presented.

A large (10's of meters) aperture space telescope including two separate spacecraft: an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart, with the eyepiece directly behind the magnifying glass "aiming" at an intended target, their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets, which may be either earthbound or celestial.

In almost 30 years of operation, the Very Large Array (VLA) has proved to be a remarkably flexible and productive radio telescope. However, the basic capabilities of the VLA have changed little since it was designed. A major expansion utilizing modern technology is currently underway to improve the capabilities of the VLA by at least an order of magnitude in both sensitivity and in frequency coverage. The primary elements of the Expanded Very Large Array (EVLA) project include new or upgraded receivers for continuous frequency coverage from 1 to 50 GHz, new local oscillator, intermediate frequency, and wide bandwidth data transmission systems to carry signals with 16 GHz total bandwidth from each antenna, and a new digital correlator with the capability to process this bandwidth with an unprecedented number of frequency channels for an imaging array. Also included are a new monitor and control system and new software that will provide telescope ease of use. Scheduled for completion in 2012, the EVLA will prov...

Lead lengths were evaluated with five different design vehicles (large and small passenger cars, a pickup truck, a motorcycle, and a high-profile truck) with several detector units. Both passenger cars and the pickup truck were always detected with 4000...

A large optics stand provides a risk-free means of safely tilting large optics with ease, and a method of safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the optics' weight over a large area.

Visualization of Large-Scale Distributed Data. Jason Leigh, Andrew Johnson, Luc Renambot ... that are now considered the "lenses" for examining large-scale data ... representation of data and the interactive manipulation and querying of the visualization. Large-scale data...

The use of adaptive optics was originally conceived by astronomers seeking to correct the blurring of images made with large telescopes due to the effects of atmospheric turbulence. The basic idea is to use a device, a wave front corrector, to adjust the phase of light passing through an optical system, based on some measurement of the spatial variation of the phase transverse to the light propagation direction, using a wave front sensor. Although the original concept was intended for application to astronomical imaging, the technique can be more generally applied. For instance, adaptive optics systems have been used for several decades to correct for aberrations in high-power laser systems. At Lawrence Livermore National Laboratory (LLNL), the world's largest laser system, the National Ignition Facility, uses adaptive optics to correct for aberrations in each of the 192 beams, all of which must be precisely focused on a millimeter scale target in order to perform nuclear physics experiments.

Fires in urban areas caused by a nuclear burst are analyzed as a first step towards determining their smoke-generation characteristics, which may have grave implications for global-scale climatic consequences. A chain of events and their component processes which would follow a nuclear attack are described. A numerical code is currently being developed to ultimately calculate the smoke production rate for a given attack scenario. Available models for most of the processes are incorporated into the code. Sample calculations of urban fire-development history performed in the code for an idealized uniform city are presented. Preliminary results indicate the importance of the wind, thermal radiation transmission, fuel distributions, and ignition thresholds on the urban fire spread characteristics. Future plans are to improve the existing models and develop new ones to characterize smoke production from large urban fires. 21 references, 18 figures.

Large Margin Classification in Infinite Neural Networks. Youngmin Cho and Lawrence K. Saul. Abstract: We introduce a new family of positive-definite kernels for large margin classification in support vector machines (SVMs). These kernels mimic the computation in large neural networks...
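
This record matches the line of work on arc-cosine kernels. As an illustration, here is the degree-one member of that family, whose closed form is standard: it plays the role of an infinitely wide layer of rectified-linear units and can be plugged into any kernel SVM. The toy data are illustrative only:

```python
import numpy as np

def arc_cosine_kernel_deg1(x, y):
    """Degree-1 arc-cosine kernel: the inner product computed by an
    infinitely wide layer of rectified-linear units,
    k(x, y) = (1/pi) * |x||y| * (sin t + (pi - t) cos t),
    with t the angle between x and y. Usable as an SVM kernel."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(np.dot(x, y) / (nx * ny), -1.0, 1.0)  # guard rounding
    t = np.arccos(cos_t)
    return (nx * ny / np.pi) * (np.sin(t) + (np.pi - t) * np.cos(t))

# Gram matrix for a toy dataset; feed it to a kernel SVM, e.g.
# sklearn.svm.SVC(kernel="precomputed").fit(K, labels).
X = np.random.default_rng(0).normal(size=(5, 3))
K = np.array([[arc_cosine_kernel_deg1(a, b) for b in X] for a in X])
print(K.shape)  # (5, 5)
```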

Webinar introduces the "Large Scale Renewable Energy Guide." The webinar will provide an overview of this important FEMP guide, which describes FEMP's approach to large-scale renewable energy projects and provides guidance to Federal agencies and the private sector on how to develop a common process for large-scale renewable projects.

The etiology of the large-scale peculiar velocity (large-scale streaming motion) of clusters seems increasingly tenuous within the context of the gravitational instability hypothesis. Are there any alternative testable models that might account for such large-scale streaming of clusters?

A matrix model is constructed which describes a chiral version of the large $N$ $U(N)$ gauge theory on a two-dimensional sphere of area $A$. This theory has three separate phases. The large area phase describes the associated chiral string theory. An exact expression for the free energy in the large area phase is used to derive a remarkably simple formula for the number of topologically inequivalent covering maps of a sphere with fixed branch points and degree $n$.

Network Coding for Large Scale Content Distribution. IEEE Infocom 2005. Christos Gkantsidis ... We propose a new scheme for content distribution of large files that is based on network coding. With network coding, each node of the distribution network is able to generate and transmit encoded blocks...
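
To make the idea concrete, below is a toy sketch of random linear network coding over GF(2): each coded block is the XOR of a random subset of the original blocks, and a receiver can decode once its accumulated coefficient matrix has full rank. The paper's actual scheme operates block-wise over a larger field; this stripped-down version only illustrates the principle:

```python
import numpy as np

def encode_block(blocks, rng):
    """Random linear network coding over GF(2): a coded block is the XOR
    of a random subset of the original blocks, shipped together with its
    coefficient vector so intermediate nodes can re-combine."""
    coeffs = rng.integers(0, 2, size=len(blocks), dtype=np.uint8)
    if not coeffs.any():
        coeffs[rng.integers(len(blocks))] = 1  # avoid the useless all-zero combo
    payload = np.zeros_like(blocks[0])
    for c, b in zip(coeffs, blocks):
        if c:
            payload ^= b
    return coeffs, payload

def full_rank_gf2(rows):
    """Decoding is possible once the coefficient matrix has full rank
    over GF(2); computed here by Gaussian elimination with XOR."""
    m = [row.copy() for row in rows]
    rank, n = 0, len(m[0])
    for col in range(n):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                m[r] ^= m[rank]
        rank += 1
    return rank == n

rng = np.random.default_rng(0)
blocks = [rng.integers(0, 256, size=16, dtype=np.uint8) for _ in range(4)]
coeff_rows, payloads = [], []
while not (coeff_rows and full_rank_gf2(coeff_rows)):
    c, p = encode_block(blocks, rng)
    coeff_rows.append(c)
    payloads.append(p)  # kept for the back-substitution step (omitted)
print("decodable after", len(coeff_rows), "coded blocks")
```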

Microfluidic Large-Scale Integration: The Evolution of Design Rules for Biological Automation. (Keyword: polydimethylsiloxane.) Microfluidic large-scale integration (mLSI) refers to the development of microfluidic ... are discussed. Several microfluidic components used as building blocks to create effective, complex, and highly...

Prospective Climate Change Impact on Large Rivers in the US and South Korea. Pierre Y. Julien, Dept. of Civil and Environ. Eng., Colorado State University. Seoul, South Korea, August 11, 2009. Climate Change and Large Rivers: 1. Climatic changes have been on-going for some time; 2. Climate changes usually predict...

This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they are manageable and understandable. This practice is generally reasonable and prudent; however, there are times when large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy national laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

The next generation of ground-based optical/infrared (IR) telescopes will have primary mirrors of up to 42 m. To take advantage of the large potential increase in angular resolution, adaptive optics will be essential to ...

The goal of this whitepaper is to summarize the LAI research that applies to program management. The context of most of the research discussed in this whitepaper is large-scale engineering programs, particularly in the ...

We investigate the interactions of large composite dark matter (DM) states with the Standard Model (SM) sector. Elastic scattering with SM nuclei can be coherently enhanced by factors as large as A^2, where A is the number of constituents in the composite state (there exist models in which DM states of very large A > 10^8 may be realised). This enhancement, for a given direct detection event rate, weakens the expected signals at colliders by up to 1/A. Moreover, the spatially extended nature of the DM states leads to an additional, characteristic, form factor modifying the momentum dependence of scattering processes, altering the recoil energy spectra in direct detection experiments. In particular, energy recoil spectra with peaks and troughs are possible, and such features could be confirmed with only O(50) events, independently of the assumed halo velocity distribution. Large composite states also generically give rise to low-energy collective excitations potentially relevant to direct detection and indirec...

Stimulated Raman scattering is of concern to laser fusion since it can create a hot electron environment which can increase the difficulty of achieving high final fuel densities. In earlier experiments with one micron laser light, the energy measured in Raman-scattered light has been insignificant. But these experiments were done with, at most, about 100 joules of laser energy. The Raman instability has a high threshold which also requires a large plasma to be irradiated with a large diameter spot. Only with a long interaction length can the Raman-scattered light wave convectively grow to a large amplitude, and only in recent long pulse, high energy experiments (4000 joules in 2 ns) at the Shiva laser facility have we observed as much as several percent of the laser light to be Raman-scattered. We find that the Raman instability has a much lower intensity threshold for longer laser pulselength and larger laser spot size on a solid target.

The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.

We review previous studies of galaxy and quasar redshift discretisation. We also present investigations of the large-scale periodicity detected by pencil-beam observations, which revealed a 128 h^{-1} Mpc period, afterwards confirmed by supercluster studies. We present the theoretical possibility of obtaining such a periodicity using a toy model. We solved the Kepler problem, i.e. the equation of motion of a particle with null energy moving in the uniform, expanding Universe, described by the FLRW metric. It is possible to obtain theoretically a separation between large scale structures similar to the observed one.

We re-examine neutrino oscillations in the early universe. Contrary to previous studies, we show that large neutrino asymmetries can arise due to oscillations between ordinary neutrinos and sterile neutrinos. This means that the Big Bang Nucleosynthesis (BBN) bounds on the mass and mixing of ordinary neutrinos with sterile neutrinos can be evaded. Also, it is possible that the neutrino asymmetries can be large (i.e. $\stackrel{>}{\sim} 10\%$), and hence have a significant effect on BBN through nuclear reaction rates.

... with sampling of large particles such as those most often emitted from agricultural operations. Previous studies have characterized the performance of PM10 inlets across a wide range of particle sizes, including particles up to 25 µm AED (McFarland and Ortiz...). The fluorometric analysis methods used by McFarland and Ortiz (1984), which will be discussed in detail below, likely masked small sampling efficiency values when characterizing the performance of the original FRM PM10 for large particles. Relatively small...

A large volume flow-through radiation detector for use in large air flow situations such as incinerator stacks or building air systems comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has a light guide attached which transfers light generated inside the scintillating plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics which can record any radiation and provide an alarm if appropriate for the application.

Geological and geophysical data suggest that during the evolution of the earth and its species there have been many mass extinctions due to large impacts from comets and large asteroids, and major volcanic events. Today, technology has developed to the stage where we can begin to consider protective measures for the planet. Evidence of the ecological disruption and frequency of these major events is presented. Surveillance and warning systems are most critical to develop, wherein sufficient lead times for warnings exist so that appropriate interventions could be designed. The long term research undergirding these warning systems, implementation, and proof testing is rich in opportunities for collaboration for peace.

Adaptive Training for Large Vocabulary Continuous Speech Recognition. Kai Yu, Hughes Hall College ... for the degree of Doctor of Philosophy. Summary: In recent years, there has been a trend towards ... training is to train hidden Markov models (HMMs) on the whole data set as if all data comes from a single acoustic...

Tools for Large Graph Mining, by Deepayan Chakrabarti. Submitted to the Center for Automated Learning ... computer networks to sociology, biology, ecology and many more. What do such "normal" graphs look like? ... graph, which can be either weighted or unweighted. Ecology: food webs are self-graphs, with each node...

Foreign Fishery Developments: Nigeria Plans Large Fishing Fleet Expansion. Table 1: Nigerian fishing ... reported deliveries. Development Program: Nigeria's oil exports have enabled its Government to finance Africa's most ambitious development program. Nigeria has the largest population of any country in Africa...

Optimal Deployment of Large Wireless Sensor Networks. S. Toumpis, Member, IEEE, and Leandros ... sensor networks. I. INTRODUCTION. A. Wireless Sensor Networks: Wireless sensor networks are comprised of sensors that are equipped with wireless transceivers and so are able to form a wireless network [3]...

A method of analyzing relatively large soil samples for actinides, employing a separation process in which cerium fluoride precipitation removes the soil matrix and precipitates plutonium, americium, and curium with cerium and hydrofluoric acid, followed by separation of these actinides using chromatography cartridges.

Using lattice effective field theory, we study the ground state binding energy of N distinct particles in two dimensions with equal mass interacting weakly via an attractive SU(N)-symmetric short range potential. We find that in the limit of zero range and large N, the ratio of binding energies B_{N}/B_{N-1} approaches the value 8.3(6).

Computational Diagnostics based on Large Scale Gene Expression Profiles using MCMC. Rainer Spang ... [slide residue: an SVD factorization of the data into loadings, singular values, and expression levels of orthogonal "super genes"] ... Given the few profiles with known diagnosis, the uncertainty about the right model is high.

... below its threshold, whereas for achieving nearly transform-limited pulses with high peak power, bias levels ... the large change in the carrier density results in both large linear as well as large ... to compensate for this large linear chirp, a relatively shorter compressed pulse will be realised (at 1.5 µm...

We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range, from galaxies to rare bright quasars, we need to be able to cover a significant volume of the universe in our simulation without losing the important small-scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small-scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^{-1} Mpc with 10 h^{-1} Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-alpha forest.
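
The clumping-factor approximation mentioned in this record is, in its usual form, a sub-grid boost to the recombination rate inside coarse cells (a schematic restatement; the paper's precise definition may differ):

```latex
% Sub-grid clumping factor C, computed from small-scale simulations and
% applied to each coarse cell to boost the effective recombination rate:
C \equiv \frac{\langle n_{\mathrm{H}}^{2} \rangle}{\langle n_{\mathrm{H}} \rangle^{2}},
\qquad
\left.\frac{dn_{\mathrm{HII}}}{dt}\right|_{\mathrm{rec}}
  = -\, C\, \alpha_{B}\, \langle n_{e} \rangle \langle n_{\mathrm{HII}} \rangle .
```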

Two-dimensional conformal field theories with a large central charge and a small number of low-dimension operators are studied using the conformal block expansion. A universal formula is derived for the Renyi entropies of N disjoint intervals in the ground state, valid to all orders in a series expansion. This is possible because the full perturbative answer in this regime comes from the exchange of the stress tensor and other descendants of the vacuum state. Therefore, the Renyi entropy is related to the Virasoro vacuum block at large central charge. The entanglement entropy, computed from the Renyi entropy by an analytic continuation, decouples into a sum of single-interval entanglements. This field theory result agrees with the Ryu-Takayanagi formula for the holographic entanglement entropy of a 2d CFT, applied to any number of intervals, and thus can be interpreted as a microscopic calculation of the area of minimal surfaces in 3d gravity.
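
For reference, the quantities involved are the standard ones; these are textbook definitions, not results from the paper:

```latex
% Renyi entropies of the reduced density matrix \rho_A on the N intervals,
% and the entanglement entropy as their analytic continuation n -> 1.
S_n = \frac{1}{1-n}\,\log \operatorname{Tr} \rho_A^{\,n},
\qquad
S_{EE} = \lim_{n\to 1} S_n = -\operatorname{Tr}\left(\rho_A \log \rho_A\right).
```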

A very large (10's of meters) aperture space telescope including two separate spacecraft: an optical primary functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart, with the eyepiece directly behind the magnifying glass "aiming" at an intended target, their relative orientation determining the optical axis of the telescope and hence the targets being observed. The magnifying glass includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the magnifying glass, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets.

Crunching large graphs is the basis of many emerging applications, such as social network analysis and bioinformatics. Graph analytics algorithms exhibit little locality and therefore present significant performance challenges. Hardware multithreading systems (e.g., Cray XMT) show that with enough concurrency, we can tolerate long latencies. Unfortunately, this solution is not available with commodity parts. Our goal is to develop a latency-tolerant system built out of commodity parts and mostly in software. The proposed system includes a runtime that supports a large number of lightweight contexts, full/empty-bit synchronization, and a memory manager that provides a high-latency but high-bandwidth global shared memory. This paper lays out the vision for our system, and justifies its feasibility with a performance analysis of the runtime for latency tolerance.

It is pointed out that controlled release of thermal energy from fission type nuclear reactors can be used to alter weather patterns over significantly large geographical regions. (1) Nuclear heat creates a low pressure region, which can be used to draw moist air from oceans onto deserts. (2) Creation of low pressure zones over oceans using nuclear heat can lead to Controlled Cyclone Creation (CCC). (3) Nuclear heat can also be used to melt glaciers and control water flow in rivers.

Dark radiation is a compelling extension to $\Lambda$CDM: current experimental results hint at $\Delta N_{\rm eff} \gtrsim 0.5$, which is increased to $\Delta N_{\rm eff} \simeq 1$ if the recent BICEP2 results are included. In recent years dark radiation has been considered in the context of string theory models such as the LARGE Volume Scenario of type IIB string theory, forging a link between present-day cosmological observations and models of physics at the Planck scale. In this paper I consider an extension of the LARGE Volume Scenario in which the bulk volume is stabilised by two moduli instead of one. Consequently, the lightest modulus no longer corresponds to the compactification volume but instead to a transverse direction in the bulk geometry. I focus on scenarios in which sequestering of soft masses is achieved by localising the Standard Model on D3 branes at a singularity. The fraction of dark radiation produced in such models vastly exceeds experimental bounds, ruling out the sequestered LARGE Volume Scenario with two bulk moduli as a model of the early Universe.

This paper explores whether Eguchi-Kawai reduction for gauge theories with adjoint fermions is valid. The Eguchi-Kawai reduction relates gauge theories in different numbers of dimensions in the large $N$ limit provided that certain conditions are met. In principle, this relation opens up the possibility of learning about the dynamics of 4D gauge theories through techniques only available in lower dimensions. Dimensional reduction can be understood as a special case of large $N$ equivalence between theories related by an orbifold projection. In this work, we focus on the simplest case of dimensional reduction, relating a 4D gauge theory to a 3D gauge theory via an orbifold projection. A necessary condition for the large N equivalence between the 4D and 3D theories to hold is that certain discrete symmetries in the two theories must not be broken spontaneously. In pure 4D Yang-Mills theory, these symmetries break spontaneously as the size of one of the spacetime dimensions shrinks. An analysis of the effect of adjoint fermions on the relevant symmetries of the 4D theory shows that the fermions help stabilize the symmetries. We consider the same problem from the point of view of the lower dimensional 3D theory and find that, surprisingly, adjoint fermions are not generally enough to stabilize the necessary symmetries of the 3D theory. In fact, a rich phase diagram arises, with a complicated pattern of symmetry breaking. We discuss the possible causes and consequences of this finding.

A Large Bore Powder Gun (LBPG) is being designed to enable experimentalists to characterize material behavior outside the capabilities of the NNSS JASPER and LANL TA-55 PF-4 guns. The combination of these three guns will create a capability to conduct impact experiments over a wide range of pressures and shock profiles. The Large Bore Powder Gun will be fielded at the Nevada National Security Site (NNSS) U1a Complex. The Complex is nearly 1000 ft below ground with dedicated drifts for testing, instrumentation, and post-shot entombment. To ensure the reliability, safety, and performance of the LBPG, a qualification plan has been established and documented here. Requirements for the LBPG have been established and documented in WE-14-TR-0065 U A, Large Bore Powder Gun Customer Requirements. The document includes the requirements for the physics experiments, the gun and confinement systems, and operations at NNSS. A detailed description of the requirements is established in that document and is referred to and quoted throughout this document. Two Gun and Confinement Systems will be fielded. The Prototype Gun will be used primarily to characterize the gun and confinement performance and be the primary platform for qualification actions. This gun will also be used to investigate and qualify target and diagnostic modifications through the life of the program (U1a.104 Drift). An identical gun, the Physics Gun, will be fielded for confirmatory and Pu experiments (U1a.102D Drift). Both guns will be qualified for operation. The Gun and Confinement System design will be qualified through analysis, inspection, and testing using the Prototype Gun for the majority of process. The Physics Gun will be qualified through inspection and a limited number of qualification tests to ensure performance and behavior equivalent to the Prototype gun. Figure 1.1 shows the partial configuration of U1a and the locations of the Prototype and Physics Gun/Confinement Systems.

This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

This paper proposes an efficient technique for partitioning a large biometric database during identification. In this technique, a feature vector comprising global and local descriptors extracted from offline signatures is used by a fuzzy clustering technique to partition the database. As biometric features possess no natural order of sorting, it is difficult to index them alphabetically or numerically; hence, some supervised criterion is required to partition the search space. At the time of identification, the fuzziness criterion is introduced to find the nearest clusters for declaring the identity of the query sample. The system is tested using bin-miss rate and performs better in comparison to the traditional k-means approach.
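
A minimal sketch of the kind of pipeline the abstract describes: fuzzy c-means partitions the feature-vector database offline, and at identification time the query's fuzzy memberships select the nearest clusters to search. All names and the toy data below are illustrative, not the paper's code:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the membership
    matrix U (n_samples x c). Offline, it partitions the feature-vector
    database; memberships are soft, unlike k-means."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m                                   # fuzzified memberships
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))             # u_ik ~ d_ik^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

def candidate_clusters(query, centres, m=2.0, top=2):
    """Fuzziness criterion at identification time: rank clusters by the
    query's membership and search only the top few bins."""
    d = np.linalg.norm(centres - query, axis=1) + 1e-12
    u = 1.0 / d ** (2.0 / (m - 1.0))
    return np.argsort(u)[::-1][:top]

# Toy usage with random stand-ins for signature descriptors.
X = np.random.default_rng(1).normal(size=(300, 12))
centres, U = fuzzy_cmeans(X, c=5)
print(candidate_clusters(X[0], centres))  # the 2 bins to search
```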

Monte Carlo studies of $CP^{N-1}$ sigma models have shown that the structure of topological charge in these models undergoes a sharp transition at $N=N_c\approx 4$. For $N<N_c$, ... For $N>N_c$, it is dominated by extended, thin, 1-dimensionally coherent membranes of topological charge, which can be interpreted as domain walls between discrete quasi-stable vacua. These vacua differ by a unit of background electric flux. The transition can be identified as the delocalization of topological charge, or "instanton melting," a phenomenon first suggested by Witten to resolve the conflict between instantons and large $N$ behavior. Implications for $QCD$ are discussed.

A method of large-scale active THz imaging using a combination of a compact high power THz source (>1 watt), an optional optical system, and a camera for the detection of reflected or transmitted THz radiation, without the need for the burdensome power source or detector cooling systems required by similar prior-art devices. With such a system, one is able to image, for example, a whole person in seconds or less, whereas at present, using low power sources and scanning techniques, it takes several minutes or even hours to image even a 1 cm x 1 cm area of skin.

A study is being conducted of the resources and planning that would be required to clean up an extensive contamination of the outdoor environment. As part of this study, an assessment of the fleet of machines needed for decontaminating large outdoor surfaces of horizontal concrete will be attempted. The operations required are described. The performance of applicable existing equipment is analyzed in terms of area cleaned per unit time, and the comprehensive cost of decontamination per unit area is derived. Shielded equipment for measuring directional radiation and continuously monitoring decontamination work are described. Shielding of drivers' cabs and remote control vehicles is addressed.

By measuring and adjusting the beta-functions at the interaction point (IP), the luminosity is optimized. In LEP (Large Electron Positron Collider) this was done with the two closest doublet magnets. This approach is not applicable for the LHC (Large Hadron Collider) and RHIC (Relativistic Heavy Ion Collider) due to the asymmetric lattice. In addition, in the LHC both beams share a common beam pipe through the inner triplet magnets (in these regions changes of the magnetic field act on both beams). To control and adjust the beta-functions without perturbing other optics functions, quadrupole groups situated on both sides further away from the IP have to be used, where the two beams are already separated. The quadrupoles are excited in specific linear combinations, forming so-called "tuning knobs" for the IP beta-functions. For a specific correction, one of these knobs is scaled by a common multiplier. The different methods which were used to compute such knobs are discussed: (1) matching in MAD, (2) ...

The International Large Detector (ILD) is a concept for a detector at the International Linear Collider, ILC. The ILC will collide electrons and positrons at energies of initially 500 GeV, upgradeable to 1 TeV. The ILC has an ambitious physics program, which will extend and complement that of the Large Hadron Collider (LHC). A hallmark of physics at the ILC is precision. The clean initial state and the comparatively benign environment of a lepton collider are ideally suited to high precision measurements. To take full advantage of the physics potential of ILC places great demands on the detector performance. The design of ILD is driven by these requirements. Excellent calorimetry and tracking are combined to obtain the best possible overall event reconstruction, including the capability to reconstruct individual particles within jets for particle ow calorimetry. This requires excellent spatial resolution for all detector systems. A highly granular calorimeter system is combined with a central tracker which stresses redundancy and efficiency. In addition, efficient reconstruction of secondary vertices and excellent momentum resolution for charged particles are essential for an ILC detector. The interaction region of the ILC is designed to host two detectors, which can be moved into the beam position with a push-pull scheme. The mechanical design of ILD and the overall integration of subdetectors takes these operational conditions into account.

A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

Project Objective: The Massachusetts Clean Energy Center (CEC) will design, construct, and ultimately have responsibility for the operation of the Large Wind Turbine Blade Test Facility, an advanced blade testing facility capable of testing wind turbine blades up to at least 90 meters in length on three test stands. Background: Wind turbine blade testing is required to meet international design standards, and is a critical factor in maintaining high levels of reliability and mitigating the technical and financial risk of deploying mass-produced wind turbine models. Testing is also needed to identify specific blade design issues that may contribute to reduced wind turbine reliability and performance, and to optimize aerodynamics and structural performance, encouraging the development of new technologies and materials that make wind even more competitive. The objective of this project is to accelerate the design and construction of a large wind blade testing facility capable of testing blades with minimum queue times at a reasonable cost. This testing facility will encourage and provide the opportunity for the U.S. wind industry to conduct more rigorous testing of blades to improve wind turbine reliability.

... institutions have launched several preventive actions (radar, tickets, etc.) and research programs ... The shift of the vehicle fleet toward PTWs has been accompanied by an explosion in the number of accidents. For a long time, industrial ... avoided for several reasons: price, noise measurement, feasibility, etc. On the other hand, there are ...

A large-area liquid ion source comprises means for generating, over a large area of the surface of a liquid, an electric field of a strength sufficient to induce emission of ions from a large area of said liquid. Large areas in this context are those distinct from emitting areas in unidimensional emitters.

Impacts of Large Dams: A Global Assessment. Editors: Cecilia Tortajada, Dogan Altinbilek, Asit K. ... One of the most controversial issues of the water sector in recent years has been the impacts of large dams ... and environmental costs of large dams far exceed their benefits, and that the era of construction of large dams...

This report, which focuses on the meteorological aspects of siting large wind turbines (turbines with a rated output exceeding 100 kW), has four main goals. The first is to outline the elements of a siting strategy that will identify the most favorable wind energy sites in a region and that will provide sufficient wind data to make responsible economic evaluations of the site wind resource possible. The second is to critique and summarize siting techniques that were studied in the Department of Energy (DOE) Wind Energy Program. The third goal is to educate utility technical personnel, engineering consultants, and meteorological consultants (who may have not yet undertaken wind energy consulting) on meteorological phenomena relevant to wind turbine siting in order to enhance dialogues between these groups. The fourth goal is to minimize the chances of failure of early siting programs due to insufficient understanding of wind behavior.

The present invention relates to a system for inspecting large scale structural components such as concrete walls or the like. The system includes a mobile gamma radiation source and a mobile gamma radiation detector. The source and detector are constructed and arranged for simultaneous movement along parallel paths in alignment with one another on opposite sides of a structural component being inspected. A control system provides signals which coordinate the movements of the source and detector and receives and records the radiation level data developed by the detector as a function of source and detector positions. The radiation level data is then analyzed to identify areas containing defects corresponding to unexpected variations in the radiation levels detected.

This document provides a brief overview of the recently published report on the design of the Large Hadron Electron Collider (LHeC), which comprises its physics programme, accelerator physics, technology and main detector concepts. The LHeC exploits and develops challenging, though principally existing, accelerator and detector technologies. This summary is complemented by brief illustrations of some of the highlights of the physics programme, which relies on a vastly extended kinematic range, luminosity and unprecedented precision in deep inelastic scattering. Illustrations are provided regarding high precision QCD, new physics (Higgs, SUSY) and electron-ion physics. The LHeC is designed to run synchronously with the LHC in the twenties and to achieve an integrated luminosity of O(100) fb$^{-1}$. It will become the cleanest high resolution microscope of mankind and will substantially extend as well as complement the investigation of the physics of the TeV energy scale, which has been enabled by the LHC.

The Large Underground Xenon (LUX) collaboration has designed and constructed a dual-phase xenon detector, in order to conduct a search for Weakly Interacting Massive Particles (WIMPs), a leading dark matter candidate. The goal of the LUX detector is to clearly detect (or exclude) WIMPs with a spin-independent cross section per nucleon of $2\times 10^{-46}$ cm$^{2}$, equivalent to $\sim$1 event/100 kg/month in the inner 100-kg fiducial volume (FV) of the 370-kg detector. The overall background goals are set to have ...

The inflationary paradigm has enjoyed phenomenological success; however, a compelling particle physics realization is still lacking. Axions are among the best-motivated inflaton candidates, since the flatness of their potential is naturally protected by a shift symmetry. We reconsider the cosmological perturbations in axion inflation, consistently accounting for the coupling to gauge fields $c\,\phi F\tilde{F}$, which is generically present in these models. This coupling leads to production of gauge quanta, which provide a new source of inflaton fluctuations, $\delta\phi$. For $c \gtrsim 10^{2}\,M_p^{-1}$, these dominate over the vacuum fluctuations, and non-Gaussianity exceeds the current observational bound. This regime is typical for concrete realizations that admit a UV completion; hence, large non-Gaussianity is easily obtained in minimal and natural realizations of inflation.

The theory of quadratic-flux-minimizing (QFM) surfaces is reviewed, and numerical techniques that allow high-order QFM surfaces to be efficiently constructed for experimentally relevant, non-integrable magnetic fields are described. As a practical example, the chaotic edge of the magnetic field in the Large Helical Device (LHD) is examined. A precise technique for finding the boundary surface is implemented, the hierarchy of partial barriers associated with the near-critical cantori is constructed, and a coordinate system based on a selection of QFM surfaces, which we call chaotic coordinates, is constructed; it simplifies the description of the magnetic field, so that flux surfaces become “straight” and islands become “square.”

We have designed, constructed and put into operation a very large area CCD camera that covers the field of view of the 1.2 m Samuel Oschin Schmidt Telescope at the Palomar Observatory. The camera consists of 112 CCDs arranged in a mosaic of four rows with 28 CCDs each. The CCDs are 600 × 2400 pixel Sarnoff thinned, back-illuminated devices with 13 µm × 13 µm pixels. The camera covers an area of 4.6 deg × 3.6 deg on the sky with an active area of 9.6 square degrees. The camera has been installed at the prime focus of the telescope and commissioned, and scientific-quality observations on the Palomar-QUEST Variability Sky Survey were started in September 2003. The design considerations, construction features, and performance parameters of this camera are described in this paper.

Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image $I(x,y)$ is accomplished by first defining a plurality of discrete tile image data subsets $T_{ij}(x,y)$ that, upon superposition, form the complete set of image data $I(x,y)$. A seamless wavelet-based compression process is effected on $I(x,y)$ by successively inputting the tiles $T_{ij}(x,y)$ in a selected sequence to a DWT routine and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient space in the primary memory for data processing. The sequence of DWT operations on the tiles $T_{ij}(x,y)$ effectively calculates a seamless DWT of $I(x,y)$. Data retrieval consists of specifying a resolution and a region of $I(x,y)$ for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.
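
As an illustration of the tile-by-tile transform this record describes, here is a minimal sketch in Python (not the patented method itself; the tile size, halo width, and wavelet are illustrative choices, and a truly seamless transform requires careful sharing of boundary samples between neighbouring tiles):

```python
import pywt  # PyWavelets

def tiled_dwt(image, tile=256, halo=8, wavelet="db2"):
    """Yield single-level DWT coefficients of a large 2-D NumPy image, tile by tile.

    Each tile is read with a small halo so that interior coefficients
    match those of a whole-image DWT; only coefficients, never the full
    image, need to stay in primary memory at once.
    """
    H, W = image.shape
    for i in range(0, H, tile):
        for j in range(0, W, tile):
            i0, j0 = max(i - halo, 0), max(j - halo, 0)
            i1, j1 = min(i + tile + halo, H), min(j + tile + halo, W)
            block = image[i0:i1, j0:j1]
            cA, (cH, cV, cD) = pywt.dwt2(block, wavelet)
            # In the record's scheme these coefficients would now be
            # compressed and flushed to secondary memory.
            yield (i, j), (cA, (cH, cV, cD))
```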

We adopt a new chemical evolution model for the Large Magellanic Cloud (LMC) and thereby investigate its past star formation and chemical enrichment histories. The delay time distribution of Type Ia supernovae recently revealed by Type Ia supernova surveys is incorporated self-consistently into the new model. The principal results are summarized as follows. The present gas mass fraction and stellar metallicity, as well as the higher [Ba/Fe] in metal-poor stars at [Fe/H] < -1.5, can be more self-consistently explained by models with steeper initial mass functions. The observed higher [Mg/Fe] ($\geq$0.3) at [Fe/H] $\approx$ -0.6 and higher [Ba/Fe] (>0.5) at [Fe/H] $\approx$ -0.3 could be due to significantly enhanced star formation about 2 Gyr ago. The observed overall [Ca/Fe]-[Fe/H] relation and the remarkably low [Ca/Fe] (< -0.2) at [Fe/H] > -0.6 are consistent with models with short-delay Type Ia supernovae and with more efficient loss of Ca, possibly caused by the explosion mechanism of Type II supernovae. Although the metallicity distribution functions do not show double peaks in models with a starburst about 2 Gyr ago, they do show characteristic double peaks in models with double starbursts $\approx$200 Myr and $\approx$2 Gyr ago. The observed apparent dip of [Fe/H] around $\approx$1.5 Gyr ago in the age-metallicity relation can be reproduced by models in which a large amount ($\approx 10^{9}\,M_\odot$) of metal-poor ([Fe/H] < -1) gas is accreted onto the LMC.

The Netherlands' Roadmap for Large-Scale Research Facilities, National Roadmap Committee for Large-Scale Research Facilities (by Roselinde Supheert).

In this dissertation the author considers the numerical solution of large ($100 \le n \le 1000$) and very large ($n \ge 1000$), sparse Lyapunov equations $AX + XA' + Q = 0$. The author first presents a parallel version of the Hammarling algorithm for the solution of Lyapunov equations where the coefficient matrix $A$ is large and dense. The author then presents a novel parallel algorithm for the solution of Lyapunov equations where $A$ is large and banded. A detailed analysis of the computational requirements in tandem with the results of numerical experiments with these algorithms on an Alliant FX-8 multiprocessor is provided. In the second half of this dissertation, the author considers the numerical solution of Lyapunov equations where the coefficient matrix $A$ is very large and sparse. Under these conditions, the solution $X$ of the Lyapunov equation is typically full rank and dense. The associated excessive storage requirements compel us to compute low-rank approximations of the solution $X$ of the Lyapunov equation. The author presents in detail two methods for the low-rank approximate solution of the Lyapunov equation. The first method, Trace Maximization, computes an orthogonal matrix $V \in \mathbb{R}^{n\times k}$ that maximizes the trace of the solution $\Sigma_V$ of the associated reduced-order Lyapunov equation $(V'AV)\Sigma_V + \Sigma_V(V'A'V) + V'QV = 0$. While Trace Maximization is an effective method for low-rank approximation of explicitly specified Hermitian matrices, the author shows that Trace Maximization is not an effective strategy for low-rank approximation of positive semidefinite Hermitian matrices $X$ that are implicitly specified as the solution of a Lyapunov equation. Our second algorithm for low-rank approximate solution of Lyapunov equations, Approximate Power Iteration, attempts to directly compute an orthogonal basis of the dominant eigenspace of the solution $X$.
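
A minimal sketch of the reduced-order ("projected") Lyapunov solve that Trace Maximization scores subspaces with; the helper name is hypothetical, and SciPy's general-purpose dense solver stands in for the dissertation's parallel algorithms:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def reduced_lyapunov_trace(A, Q, V):
    """Solve (V'AV) S + S (V'A'V) + V'QV = 0 and return (S, trace(S))."""
    Ak = V.T @ A @ V                      # projected coefficient matrix
    Qk = V.T @ Q @ V                      # projected right-hand side
    # SciPy solves Ak X + X Ak' = Qk, so negate Qk to match the sign above.
    S = solve_continuous_lyapunov(Ak, -Qk)
    return S, np.trace(S)

# Toy usage: a stable dense A and a random 5-dimensional orthonormal basis.
rng = np.random.default_rng(0)
n, k = 100, 5
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
Q = np.eye(n)
V, _ = np.linalg.qr(rng.standard_normal((n, k)))
S, score = reduced_lyapunov_trace(A, Q, V)
```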

The Stanford Synchrotron Radiation Laboratory (SSRL) has successfully commissioned SPEAR3, its newly upgraded 3-GeV synchrotron light source. First stored beam occurred December 15, 2003, and 100 mA operation was reached on January 20, 2004. This paper describes the specification, design, and performance of the SPEAR3 DC magnet large power supplies (LGPS), which consist of tightly regulated (better than ±10 ppm) current sources ranging from 100 A to 225 A with output powers ranging from 70 kW to 135 kW. A total of 6 LGPS are in successful operation and are used to power strings of quadrupoles and sextupoles. The LGPS are isolated by a delta/delta-wye 60 Hz step-down transformer that provides power to 2 series-connected chopper stages operating phase-shifted at a switching frequency of 18 kHz to provide fast output response and high efficiency. Also described are outside procurement aspects, installation, in-house testing, and operation of the power supplies.

We consider the problem of conditioning a Markov process on a rare event and of representing this conditioned process by a conditioning-free process, called the effective or driven process. The basic assumption is that the rare event used in the conditioning is a large deviation-type event, characterized by a convex rate function. Under this assumption, we construct the driven process via a generalization of Doob's $h$-transform, used in the context of bridge processes, and show that this process is equivalent to the conditioned process in the long-time limit. The notion of equivalence that we consider is based on the logarithmic equivalence of path measures and implies that the two processes have the same typical states. In constructing the driven process, we also prove equivalence with the so-called exponential tilting of the Markov process, which is used with importance sampling to simulate rare events, and which gives rise, from the point of view of statistical mechanics, to a nonequilibrium version of the canonical ensemble. Other links between our results and the topics of bridge processes, quasi-stationary distributions, stochastic control, and conditional limit theorems are mentioned.
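
For orientation, one standard way to write the driven process (following the large-deviation literature; the notation is not taken verbatim from this record) is as a generalized Doob $h$-transform of the tilted generator $\mathcal{L}_k$:

$$\mathcal{L}^{\rm driven} f = r_k^{-1}\,\big(\mathcal{L}_k - \lambda(k)\big)(r_k f),$$

where $r_k$ is the principal right eigenfunction of $\mathcal{L}_k$ and $\lambda(k)$ the associated eigenvalue (the scaled cumulant generating function); the exponential tilting mentioned above corresponds to $\mathcal{L}_k$ itself, before the transform by $r_k$.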

Large-field inflation is an interesting and predictive scenario. Its non-trivial embedding in supergravity was intensively studied in the recent literature, whereas its interplay with supersymmetry breaking has been less thoroughly investigated. We consider the minimal viable model of chaotic inflation in supergravity containing a stabilizer field, and add a Polonyi field. Furthermore, we study two possible extensions of the minimal setup. We show that there are various constraints: first of all, it is very hard to couple an O'Raifeartaigh sector with the inflaton sector, the simplest viable option being to couple them only through gravity. Second, even in the simplest model the gravitino mass is bounded from above parametrically by the inflaton mass. Therefore, high-scale supersymmetry breaking is hard to implement in a chaotic inflation setup. As a separate comment we analyze the simplest chaotic inflation construction without a stabilizer field, together with a supersymmetrically stabilized Kähler modulus. Without a modulus, the potential of such a model is unbounded from below. We show that a heavy modulus cannot solve this problem.

Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new high-field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high-field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

We study general multi-axion systems, focusing on the possibility of large field inflation driven by axions. We find that through axion mixing from a non-diagonal metric on the moduli space and/or from Stückelberg coupling to a U(1) gauge field, an effectively super-Planckian decay constant can be generated without the need of "alignment" in the axion decay constants. We also investigate the consistency conditions related to the gauge symmetries in the multi-axion systems, such as vanishing gauge anomalies and the potential presence of generalized Chern-Simons terms. Our scenario applies generally to field theory models whose axion periodicities are intrinsically sub-Planckian, but it is most naturally realized in string theory. The types of axion mixings invoked in our scenario appear quite commonly in D-brane models, and we present its implementation in type II superstring theory. Explicit stringy models exhibiting all the characteristics of our ideas are constructed within the frameworks of Type IIA ...

Large area atmospheric-pressure plasma jet. A plasma discharge that can be operated at atmospheric pressure and near room temperature using 13.56 MHz rf power is described. Unlike plasma torches, the discharge produces a gas-phase effluent no hotter than 250 °C at an applied power of about 300 W, and shows distinct non-thermal characteristics. In the simplest design, two planar, parallel electrodes are employed to generate a plasma in the volume therebetween. A "jet" of long-lived metastable and reactive species that are capable of rapidly cleaning or etching metals and other materials is generated which extends up to 8 in. beyond the open end of the electrodes. Films and coatings may also be removed by these species. Arcing is prevented in the apparatus by using gas mixtures containing He, which limits ionization, by using high flow velocities, and by properly spacing the rf-powered electrode. Because of the atmospheric pressure operation, there is a negligible density of ions surviving for a sufficiently long distance beyond the active plasma discharge to bombard a workpiece, unlike the situation for low-pressure plasma sources and conventional plasma processing methods.

I explore many aspects of jet substructure at the Large Hadron Collider, ranging from theoretical techniques for jet calculations, to phenomenological tools for better searches with jets, to software for implementing and comparing such tools. I begin with an application of soft-collinear effective theory, an effective theory of QCD applied to high-energy quarks and gluons. This material is taken from Ref. 1, in which we demonstrate factorization and logarithmic resummation for a certain class of observables in electron-positron collisions. I then explore various phenomenological aspects of jet substructure in simulated events. After observing numerous features of jets at hadron colliders, I describe a method -- jet pruning -- for improving searches for heavy particles that decay to one or more jets. This material is a greatly expanded version of Ref. 2. Finally, I give an overview of the software tools available for these kinds of studies, with a focus on SpartyJet, a package for implementing and comparing jet-based analyses I have collaborated on. Several detailed calculations and software examples are given in the appendices. Sections with no new content are italic in the Table of Contents.

We review the main theoretical aspects of the structure formation paradigm which impinge upon wide angle surveys: the early universe generation of gravitational metric fluctuations from quantum noise in scalar inflaton fields; the well understood and computed linear regime of CMB anisotropy and large scale structure (LSS) generation; the weakly nonlinear regime, where higher order perturbation theory works well, and where the cosmic web picture operates, describing an interconnected LSS of clusters bridged by filaments, with membranes as the intrafilament webbing. Current CMB+LSS data favour the simplest inflation-based $\Lambda$CDM models, with a primordial spectral index within about 5% of scale invariant and $\Omega_\Lambda \approx 2/3$, similar to that inferred from SNIa observations, and with open CDM models strongly disfavoured. The attack on the nonlinear regime with a variety of N-body and gas codes is described, as are the excursion set and peak-patch semianalytic approaches to object collapse. The ingredients are mixed together in an illustrative gasdynamical simulation of dense supercluster formation.

Water clusters are multimers of water molecules held together by hydrogen bonds. In the present work, multiphoton ionization in the UV range coupled with time-of-flight mass spectrometry has been applied to water clusters with up to 160 molecules in order to obtain information on the electronic states of clusters of different sizes, up to dimensions that can approximate the bulk phase. The dependence of the ion intensities of water clusters and their metastable fragments produced by laser ionization at 355 nm on laser power density indicates a (3+1)-photon resonance-enhanced multiphoton ionization process. It also explains the large increase of ionization efficiency at 355 nm compared to that at 266 nm. Indeed, by applying both nanosecond and picosecond laser ionization at the two different UV wavelengths, it was found that no water cluster sequences beyond n = 9 could be observed at 266 nm, whereas water clusters up to m/z 2000 Th in reflectron mode and m/z 3000 Th in linear mode were detected at 355 nm. The agreement between our findings on water clusters, especially in the range n > 10, and reported data for liquid water supports the hypothesis that clusters above a critical dimension can approximate the liquid phase. It should thus be possible to study clusters of just over 10 water molecules to obtain information on the bulk-phase structure.

Specialized remote video systems have been successfully developed and deployed in a number of large radiological Underground Storage Tanks (USTs) that tolerate the hostile tank interior while providing high-resolution video to a remotely located operator. The deployment is through 100 mm (4 in) tank openings, while incorporating full video functions of the camera, lights, and zoom lens. The use of remote video minimizes the potential for personnel exposure to radiological and hazardous conditions, and maximizes the quality of the visual data used to assess the interior conditions of both tank and contents. The robustness of this type of remote system has a direct effect on the potential for radiological exposure that personnel may encounter. The USTs typical of the Savannah River and Hanford Department of Energy (DOE) sites are typically 4.5 million liter (1.2 million gal) units under earth or concrete overburden with limited openings to the surface. The interior is both highly contaminated and radioactive, with a wide variety of nuclear processing waste material. Some of the tanks are flammable-rated to Class 1, Division 1, and personnel presence at or near the openings should be minimized. The interior of these USTs must be assessed periodically as part of the ongoing management of the tanks and as a step towards tank remediation. The systems are unique in their deployment technology, which virtually eliminates the potential for entrapment in a tank, and in their ability to withstand flammable environments. A multiplicity of components used within a common packaging allows for cost-effective and appropriate levels of technology, with radiation-hardened components on some units and lesser requirements on others. All units are completely self-contained for video, zoom lens, lighting, and deployment, as well as being self-purging and modular in construction.

Hewlett-Packard's Industry Standard Servers (ISS) organization offers a large variety of server computers and accessories. The large range of options available to its customers gives way to complex processes and less than ...

Consultation response: Wellcome Trust response to the RCUK Large Facilities Roadmap, December 2007. The Wellcome Trust is pleased to have the opportunity to feed into the process of prioritising the RCUK Large Facilities Roadmap…

Cortical Hemisphere Registration Via Large Deformation Diffeomorphic Metric Curve Mapping. Anqi Qiu, Johns Hopkins University. We present large deformation diffeomorphic metric curve mapping … This is a powerful approach allowing us to study … the relation between individual brains and the atlas.

Detailed Execution Planning for Large Oil and Gas Construction Projects. Presented by James Lozon, University of Calgary. There is currently 55.8 billion dollars worth of large oil and gas construction projects scheduled or underway in the province of Alberta. Recently, large capital oil and gas projects…

Risø-R Report: Power fluctuations from large wind farms - Final report. Poul Sørensen, Pierre Pinson. Experience from power system operation with the first large offshore wind farm … acquired at the two large offshore wind farms in Denmark are applied to validate the models. Finally…

UpWind: Design limits and solutions for very large wind turbines - A 20 MW turbine is feasible. March 2011. Contents: 1. UpWind: Summary…

We review the existing weak-coupling results on the thermodynamic potential of deconfined QCD at small and large quark chemical potential and compare with results from lattice gauge theory as well as the exactly solvable case of large-$N_f$ QCD. We also discuss the new analytical results on non-Fermi-liquid effects in entropy and specific heat as well as in dispersion laws of quark quasiparticles at large quark chemical potential.

The generation of large-scale magnetic fields is studied in inflationary cosmology. We consider the violation of the conformal invariance of the Maxwell field by dilatonic as well as non-minimal gravitational couplings. We derive a general formula for the spectrum of large-scale magnetic fields for a general form of the coupling term and the formula for the spectral index. The result tells us clearly the (necessary) condition for the generation of magnetic fields with sufficiently large amplitude.
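
Schematically, the class of couplings considered can be summarized by a Maxwell term with a time-dependent prefactor (an illustrative form, not the paper's exact notation):

$$S = -\int d^{4}x\,\sqrt{-g}\;\frac{1}{4}\,I^{2}(\phi, R)\,F_{\mu\nu}F^{\mu\nu},$$

where $I$ depends on the dilaton $\phi$ and/or the curvature $R$; conformal invariance is broken whenever $I$ evolves during inflation, and the amplitude and spectral index of the generated field are controlled by that evolution.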

This paper describes an in-house designed large Electron Energy Filter (EEF) utilized in the Large Volume Plasma Device (LVPD) [S. K. Mattoo, V. P. Anita, L. M. Awasthi, and G. Ravi, Rev. Sci. Instrum. 72, 3864 (2001)] to secure the objectives of (a) removing the presence of remnant primary ionizing energetic electrons and non-thermal electrons, (b) introducing a radial gradient in plasma electron temperature without greatly affecting the radial profile of plasma density, and (c) providing control over the scale length of the gradient in electron temperature. A set of 19 independent coils of the EEF forms a variable aspect ratio, rectangular solenoid producing a magnetic field ($B_x$) of 100 G along its axis and transverse to the ambient axial field ($B_z \approx 6.2$ G) of the LVPD when all its coils are used. Outside the EEF, the magnetic field reduces rapidly to 1 G at a distance of 20 cm from the center of the solenoid on either side of the target and source plasma. The EEF divides the LVPD plasma into three distinct regions: source, EEF, and target plasma. We report that the target plasma ($n_e \approx 2 \times 10^{11}$ cm$^{-3}$ and $T_e \approx 2$ eV) has no detectable energetic electrons and that radial gradients in its electron temperature can be established with scale lengths between 50 and 600 cm by controlling the EEF magnetic field. Our observations reveal that the role of the EEF magnetic field is manifested in the energy dependence of transverse electron transport and the enhanced transport caused by plasma turbulence in the EEF plasma.

On the large COMPASS polarized deuteron target. J. Ball, G. Baum, N. Doshita, M. Finger, Jr., et al. The target has been used in the COMPASS experiment at CERN since 2001. To achieve high luminosities a large solid polarized target is used. The COMPASS polarized target consists of a high cooling power 3He/4He dilution refrigerator…

Structural analyses of large precision cathode strip chambers performed up to the date of this publication are documented. Mechanical property data for typical chamber materials are included. This information, originally intended to be an appendix to the "CSC Structural Design Bible," is presented as a guide for future designers of large chambers.

Adaptive Streaming and Rendering of Large Terrains using Strip Masks. Joachim Pouderoux and Jean-Eudes Marvie, IPARLA Project (LaBRI - INRIA Futurs), University of Bordeaux, France. Terrain rendering is an important factor in the rendering of virtual scenes. If they are large and detailed, digital terrains can…

ThemeRiver: Visualizing Thematic Changes in Large Document Collections. Susan Havre et al. ThemeRiver depicts thematic variations over time within a large collection of documents, using a river metaphor to convey several key notions: the document collection's time line, selected thematic content…

Summary report: The shadow effect of large wind farms - measurements, data analysis and modelling. Wind Energy Department, Risø-R-1615(EN), July 2007, ISSN 0106-2840, ISBN 978… of the project, by means of data from the demonstration wind farms Horns Rev and Nysted, analyses of these data…

Measuring Similarity in Large-scale Folksonomies. Giovanni Quattrone, Emilio Ferrara, Pasquale … Folksonomies are characterized by power-law distributions of tags, over which commonly used similarity metrics, including the Jaccard coefficient, … a measure to capture similarity in large-scale folksonomies that is based on a mutual reinforcement principle: that is…

Powers of Ten Thousand: Navigating in Large Information Spaces. Henry Lieberman, Media Laboratory. How can one navigate a large display space, for example a street map of the entire United States at a scale of at least 1 to 10,000? The traditional solution … The book and film Powers of Ten [Morrison]…

Deprogramming Large Software Systems. Yohann Coppel and George Candea, School of Computer … Lack of access to the patterns and designs behind a body of code makes it difficult to maintain large code bases … Such reverse processes are powerful tools for manipulating programs and systems…

Large Deformation Unbiased Diffeomorphic Nonlinear Image Registration: Theory and Implementation. A framework for constructing large deformation log-unbiased image registration models that generate theoretically … the statistical distributions of Jacobian maps in the logarithmic space. To demonstrate the power of the proposed…

Minimization of welding residual stress and distortion in large structures. P. Michaleris, Urbana, IL. Welding distortion in large structures is usually caused by buckling due to the residual stress. In cases where the design is fixed and minimum weld size requirements…

Cyber Threat Trees for Large System Threat Cataloging and Analysis. P. Ongsakorn, K. Turney, M. …, et al., Southern Methodist University. The implementation of cyber threat … Because large systems have many possible threats that may be interdependent, it is crucial…

Existing oscillation data point to nonzero neutrino masses with large mixings. We analyze the generic features of the neutrino Majorana mass matrix with inverted hierarchy and construct realistic minimal schemes for the neutrino mass matrix that can explain the large (but not maximal) …

Attack Containment Framework for Large-Scale Critical Infrastructures. Hoang Nguyen. We present an attack containment framework against value-changing attacks in large-scale critical infrastructures … a structure, called an attack container, which captures the trust behavior of a group of nodes and assists…

Risø-R-1518(EN): The necessary distance between large wind farms offshore - study. Sten Frandsen. As is often the case for offshore wind farms, the model handles a regular array geometry with straight rows … the necessary distance between large wind farms in the offshore environment. The main results are given in Section 1…

… of offshore wind farms, wind power fluctuations may introduce several challenges to reliable power system behaviour due to natural wind fluctuations. The rapid power fluctuations from large-scale wind farms … an Automatic Generation Control (AGC) system which includes large-scale wind farms for long-term stability simulation…

Probabilistic Damage Detection Based on Large Area Electronics Sensing Sheets. Yao Yao and Branko … Early-stage damage detection and characterization requires continuous sensing over large areas of structure … are not sensitive to damage. In this research, a probabilistic approach based on Monte Carlo (MC) simulations…

Large-Scale Eucalyptus Energy Farms and Power Cogeneration. Robert C. Noronla. The initiation of a large-scale cogeneration project, especially one that combines construction of the power generation … A supplemental fuel source must be sought if the cogeneration facility will consume more fuel than…

Reconstruction Algorithms in the Super-Kamiokande Large Water Cherenkov Detector. M. Shiozawa, on behalf of the Super-Kamiokande Collaboration, Institute for Cosmic Ray Research, University of Tokyo. The Super-Kamiokande experiment, using a large underground water Cherenkov detector, has started…

… its RNAs all have irregular shapes and fit together in the ribosome like the pieces of a three-dimensional jigsaw puzzle to form a large, monolithic structure. Proteins are abundant everywhere on its surface. Earlier this year, an approximate model of the RNA structure in the large subunit was constructed to fit…

In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

Air Effects on Large Droplet Impact. Frank T. Smith and Richard Purvis, UCL, London WC1E 6BT, UK. A study is presented of the interaction(s) between air and water in determining the motion of a large droplet … surrounding air motion.

Lessons Learned: Planning and Operating Power Systems with Large Amounts of Renewable Energy. … to their systems powered by as-available renewable energy sources (primarily wind and solar). The Big Island also…

Information Delivery in Large Wireless Networks with Minimum Energy Expense. Yi Xu and Wenye Wang. … in large-scale multihop wireless networks because of the limited energy supplies from batteries. By spending the energy resources in a wireless network wisely, the existing transmission paths [8], [9] … We…

Scalable Cache Memory Design for Large-Scale SMT Architectures. Muhamed F. Mudawar, Computer Science. The cache memory in existing SMT and superscalar processors is optimized for latency, but not for bandwidth. The size of the L1 … is not suitable for future large-scale SMT processors, which will demand high-bandwidth instruction and data…

Computer simulations of the injection into the atmosphere of a large quantity of smoke following a nuclear war are described. The focus is on what might happen to the smoke after it enters the atmosphere and what changes, or perturbations, could be induced in the atmospheric structure and circulation by the presence of a large quantity of smoke.

Doctoral Position: Aeroelastic Analysis of Large Wind Turbines. In the research project "Aeroelastic Analysis of Large Wind Turbines", funded by the German …, the in-house finite-element CFD code XNS is extended to enable the simulation of wind turbines. The ability… (Figure: horizontal-axis wind turbine and numerical model.)

Updatable Process Views for Adapting Large Process Models: The proView Demonstrator. Jens Kolb. The increasing adoption of process-aware information systems (PAISs) has resulted in large process model collections. To support users having different perspectives on these processes and related data, a PAIS should…

… road accidents, one of the three large programs of the last five years. In spite of encouraging results, few ITS are currently available specifically for motorcycles, although several emerging technologies were identified, and various road safety research organizations confirmed this report. However, there are emerging and existing…

Large Deviations in the Superstable Weakly Imperfect Bose Gas. J.-B. Bru and V. A. Zagrebnov, Fakultät für Physik, Universität Wien. … of the condensate. More interesting for our analysis is a discontinuity of the particle density from … > 0…

Large electric motors serve as the prime movers to drive high capacity pumps, fans, compressors, and generators in a variety of nuclear plant systems. This study examined the stressors that cause degradation and aging in large electric motors operating in various plant locations and environments. The operating history of these machines in nuclear plant service was studied by review and analysis of failure reports in the NPRDS and LER databases. This was supplemented by a review of motor designs, and their nuclear and balance of plant applications, in order to characterize the failure mechanisms that cause degradation, aging, and failure in large electric motors. A generic failure modes and effects analysis for large squirrel cage induction motors was performed to identify the degradation and aging mechanisms affecting various components of these large motors, the failure modes that result, and their effects upon the function of the motor. The effects of large motor failures upon the systems in which they are operating, and on the plant as a whole, were analyzed from failure reports in the databases. The effectiveness of the industry's large motor maintenance programs was assessed based upon the failure reports in the databases and reviews of plant maintenance procedures and programs.

A double inflationary model provides perturbation spectra with enhanced power at large scales (Broken Scale Invariant perturbations -- BSI), leading to a promising scenario for the formation of cosmic structures. We describe a series of high-resolution PM simulations with a model for the thermodynamic evolution of baryons in which we are capable of identifying 'galaxy' halos with a reasonable mass spectrum and following the genesis of large and super-large scale structures. The power spectra and correlation functions of 'galaxies' are compared with reconstructed power spectra of the CfA catalogue and the correlation functions of the Las Campanas Deep Redshift Survey.

The interplay between gravitational and dispersive forces in a multi-streamed medium leads to an effect which is exposed in the present note as the genuine driving force of stabilization of large-scale structure. The conception of `adhesive gravitational clustering' is advanced to interlock the fairly well-understood epoch of formation of large-scale structure and the onset of virialization into objects that are dynamically in equilibrium with their large-scale structure environment. The classical `adhesion model' is opposed to a class of more general models traced from the physical origin of adhesion in kinetic theory.

It is an object of the present invention to provide a procedure for desensitizing zirconium-based alloys to large grain growth (LGG) during thermal treatment above the recrystallization temperature of the alloy. It is a further object of the present invention to provide a method for treating zirconium-based alloys which have been cold-worked in the range of 2 to 8% strain to reduce large grain growth. It is another object of the present invention to provide a method for fabricating a zirconium alloy clad nuclear fuel element wherein the zirconium clad is resistant to large grain growth.

With nearly one billion online videos viewed everyday, an emerging new frontier in computer vision research is recognition and search in video. While much effort has been devoted to the collection and annotation of large ...

Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression ...

The webinar focused on specific Building America projects that are looking to gather and analyze large bodies of data on new and existing homes, and will feature opportunities for industry to collaborate with researchers to gather and analyze valuable data.

This paper presents an integrated air handling unit system (OAHU) for large commercial buildings. The system introduces outside air into the interior section and circulates the return air to the exterior section. Detailed analytical models...

We report a measurement of the large optical transmission matrix (TM) of a complex turbid medium. The TM is acquired using polarization-sensitive, full-field interferometric microscopy equipped with a rotating galvanometer ...

Concentrations of population and business activities result in high electricity demand in urban areas. This requires the construction of large-capacity underground substations. Oilless, non-flammable and non-explosive equipment is recommended for underground substations. Therefore, several types of large-capacity gas-insulated transformer have been developed. Because the gas forced cooling type was considered to be available up to approximately 60 MVA, all of these gas-insulated transformers are liquid cooled. But the liquid cooling type has the disadvantage of a complex structure for liquid cooling. For this reason, the authors have been studying the development of a simple design for a gas forced cooling, large-capacity gas-insulated transformer. This paper discusses research and development of cooling and insulation technology for a large-capacity gas-insulated transformer and the development of a 275 kV, 300 MVA gas-insulated transformer.

Discussion of the current manufacturing process of polydimethylsiloxane (PDMS) parts and the emergence of PDMS use in biomedical microfluidic devices addresses the need to develop large scale manufacturing processes for ...

Firms must continuously strive to grow through the creation of new sources of competitive advantage. The challenges to growth are more severe for large, established firms that derive a predominant amount of their present ...

Discriminative training for acoustic models has been widely studied to improve the performance of automatic speech recognition systems. To enhance the generalization ability of discriminatively trained models, a large-margin ...

Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
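
The record gives no formulas, but the low-rank role of the prototypes can be illustrated with a Nyström-style factorization (a sketch only; the PVM's actual prototype selection and regularizer differ in detail):

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_factor(X, m=50, gamma=1.0, seed=0):
    """Return F with K ~ F @ F.T built from m random prototype vectors."""
    rng = np.random.default_rng(seed)
    P = X[rng.choice(len(X), size=m, replace=False)]   # prototype vectors
    K_nm = rbf(X, P, gamma)                            # n x m cross-kernel
    K_mm = rbf(P, P, gamma)                            # m x m prototype kernel
    w, U = np.linalg.eigh(K_mm + 1e-8 * np.eye(m))     # stabilized inverse root
    return K_nm @ U / np.sqrt(np.maximum(w, 1e-12))
```

Downstream, the n x n kernel and graph regularizer are then manipulated only through the n x m factor, which is what makes such approaches scale to large n.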

We have done the numerical modeling of seismic response to multiple sets of vertical large fractures by using finite-difference method (FD), which can easily handle media with monoclinic anisotropy. We consider three types ...

… parallel implementation that admits a speed-up nearly proportional to the … On large-scale matrix completion tasks, Jellyfish is orders of magnitude more … a consistent build of NNLS with mex optimizations at the time of this submission.

In this thesis I present a novel method for constructing large scale mock galaxy and halo catalogues and apply this model to a number of important topics in modern cosmology. Traditionally such mocks are created through ...

Large superconducting magnets used in fusion reactors, as well as other applications, need a diagnostic that can non-invasively measure the temperature and strain throughout the magnet in real-time. A new fiber optic sensor ...

In this thesis, the persistent current qubit in the presence of large amplitude microwave radiation is studied. Three main results are presented in this work. A new coherent quasi classical regime has been observed, where ...

This thesis introduces the Bonsai technique which can efficiently improve the identification of large nonsolution spaces in the search space. The technique can make exponential improvements in the reduction of the search space using a linear method...

High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

In this thesis, advanced interference management techniques are designed and evaluated for large-scale wireless networks with realistic assumptions, such as signal propagation loss, random node distribution and ...

During two field campaigns (OP3 and ACES), which ran in Borneo in 2008, we measured large emissions of estragole (methyl chavicol; IUPAC systematic name 1-allyl-4-methoxybenzene; CAS number 140-67-0) in ambient air above ...

Companies depend on information systems to control their operations. During the last decade, Information Technology (IT) infrastructures have grown in scale and complexity. Any large company runs many enterprise applications ...

A set of large-scale laboratory experiments were conducted to study channel meander migration. Factors affecting the migration of banklines, including the ratio of curvature to channel width, bend angle, and the Froude ...

This thesis describes four novel superconducting machine concepts, in the pursuit of finding a suitable design for large offshore wind turbines. The designs should be reliable, modular and light-weight. The main novelty ...

We discuss several new ideas for reactor neutrino oscillation experiments with a Large Liquid Scintillator Detector. We consider two different scenarios for a measurement of the small mixing angle $\theta_{13}$ with a mobile $\bar{\nu}$ …

This thesis focuses on the development of infrastructure for research with large-scale autonomous marine vehicle fleets and the design of sampling trajectories for compressive sensing (CS). The newly developed infrastructure ...

This thesis introduces a framework and two methodologies that enable engineering management teams to assess the value of real options in programs of large-scale, partially standardized systems implemented a few times over ...

In recent years there has been a great deal of new activity at the interface of biology and computation. This has largely been driven by the massive in flux of data from new experimental technologies, particularly ...

We study the response of a model microelectrochemical cell to a large ac voltage of frequency comparable to the inverse cell relaxation time. To bring out the basic physics, we consider the simplest possible model of a ...

This document is designed to assist students to determine large email items and save them in their personal storage … select the attachment and then select Save. Note: Please ensure you copy the files to your personal storage devices (e.g. …).

Space solar power is a renewable, environment-friendly alternative to satisfy future terrestrial power needs. Space solar power stations will need to have large dimensions (on the order of hundreds of meters) to be able ...

We report on the successful treatment of hypertension by occlusion of a large iatrogenic renal transplant arteriovenous fistula using detachable embolization coils with concomitant flow reduction by occlusion balloon in two patients.

The distribution of linearly polarized gluons inside a large nucleus is studied in the framework of the color glass condensate. We find that the Weizsäcker-Williams distribution saturates the positivity bound at large transverse momenta and is suppressed at small transverse momenta, whereas the dipole distribution saturates the bound for any value of the transverse momentum. We also discuss processes in which both distributions of linearly polarized gluons can be probed.

The flat rotation curve obtained for the outer star clusters of the Large Magellanic Cloud is suggestive of an LMC dark matter halo. From the composite HI and star cluster rotation curve, I estimate the parameters of an isothermal dark matter halo added to a `maximum disk.' I then examine the possibility of detecting high energy gamma-rays from non-baryonic dark matter annihilations in the central region of the Large Magellanic Cloud.

Cleaning large cylinders used to transport low-enriched uranium hexafluoride (UF$_6$) presents several challenges to nuclear criticality safety. This paper presents a brief overview of the cleaning process, the criticality controls typically employed and their bases. Potential shortfalls in implementing these controls are highlighted, and a simple example to illustrate the difficulties in complying with the Double Contingency Principle is discussed. Finally, a summary of recommended criticality controls for large cylinder cleaning operations is presented.

The staged commissioning of the Large Hadron Collider presents an opportunity to map gross features of particle production over a significant energy range. I suggest a visual tool - event displays in (pseudo)rapidity-transverse-momentum space - as a scenic route that may help sharpen intuition, identify interesting classes of events for further investigation, and test expectations about the underlying event that accompanies large-transverse-momentum phenomena.

Evolution of large-scale scalar perturbations in the presence of stiff solid (solid with pressure to energy density ratio > 1/3) is studied. If the solid dominated the dynamics of the universe long enough, the perturbations could end up suppressed by as much as several orders of magnitude. To avoid too steep large-angle power spectrum of CMB, radiation must have prevailed over the solid long enough before recombination.

There has been considerable interest in developing dry processes which can effectively replace wet processing in the manufacture of large area photovoltaic devices. Environmental and health issues are a driver for this activity because wet processes generally increase worker exposure to toxic and hazardous chemicals and generate large volumes of liquid hazardous waste. Our work has been directed toward improving the performance of screen-printed solar cells while using plasma processing to reduce hazardous chemical usage.

… of personal devices as a MOPED, an autonomous set of MObile grouPEd Devices, which appears as a single entity to the user. As a user acquires multiple personal technology and communication devices, the efficiency … These personal devices can cooperate to achieve better resource utilization, such as by sharing … for the MOPED.

For economic energy, we need: tritium; large size to obtain hot fusing plasma; and high fields. A Component Test Facility is much needed; the ST appears simplest and most economic in tritium, but the high cost…

The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the safety of the public and property in the event of accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates, for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills, 21 and 81 m in diameter, were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.
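
For orientation, the nondimensional heat release rate used to scale such burner tests is conventionally defined as below; the flame-height relation shown is the widely used Heskestad-form correlation, quoted here only to illustrate the $L/D$ versus $Q^{*}$ dependence the tests probed, not as Sandia's reported fit:

$$Q^{*} = \frac{\dot{Q}}{\rho_{\infty}\, c_{p}\, T_{\infty}\, \sqrt{g D}\; D^{2}}, \qquad \frac{L}{D} \approx 3.7\,(Q^{*})^{2/5} - 1.02,$$

with $\dot{Q}$ the heat release rate, $D$ the pool (or burner) diameter, and $\rho_{\infty}$, $c_{p}$, $T_{\infty}$ the density, specific heat, and temperature of ambient air.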

We present measurements of the optical/UV emission lines for a large homogeneous sample of 993 quasars from the Large Bright Quasar Survey. Our largely automated technique accounts for continuum breaks and galactic reddening, and we perform multicomponent fits to emission line profiles, including the effects of blended iron emission, and of absorption lines both galactic and intrinsic. Here we describe the fitting algorithm and present the results of line fits to the LBQS sample, including upper limits to line equivalent widths when warranted. The distribution of measured line parameters, principally equivalent width and FWHM, are detailed for a variety of lines, including upper limits. We thus initiate a large-scale investigation of correlations between the high energy continuum and emission lines in quasars, to be extended to complementary samples using similar techniques. High quality, reproducible measurements of emission lines for uniformly selected samples will advance our understanding of active galaxies, especially in a new era of large surveys selected by a variety of complementary methods.

Liquid Argon Time Projection Chambers (LArTPCs) show promise as scalable devices for the large detectors needed for long-baseline neutrino oscillation physics. Over the last several years at Fermilab a staged approach to developing the technology for large detectors has been developed. The TPC detectors require ultra-pure liquid argon with respect to electronegative contaminants such as oxygen and water. The tolerable electronegative contamination level may be as pure as 60 parts per trillion of oxygen. Three liquid argon cryostats operated at Fermilab have achieved the extreme purity required by TPCs. These three cryostats used evacuation to remove atmospheric contaminants as the first purification step prior to filling with liquid argon. Future physics experiments may require very large detectors with tens of kilotonnes of liquid argon mass. The capability to evacuate such large cryostats adds significant cost to the cryostat itself in addition to the cost of a large scale vacuum pumping system. This paper describes a 30 ton liquid argon cryostat at Fermilab which uses purging to remove atmospheric contaminants instead of evacuation as the first purification step. This cryostat has achieved electronegative contamination levels better than 60 parts per trillion of oxygen equivalent. The results of this liquid argon purity demonstration will strongly influence the design of future TPC cryostats.

A new method for the determination of radiostrontium in large soil samples has been developed at the Savannah River Environmental Laboratory (Aiken, SC, USA) that allows rapid preconcentration and separation of strontium in large soil samples for the measurement of strontium isotopes by gas flow proportional counting. The need for rapid analyses in the event of a Radiological Dispersive Device (RDD) or Improvised Nuclear Device (IND) event is well-known. In addition, the recent accident at Fukushima Nuclear Power Plant in March, 2011 reinforces the need to have rapid analyses for radionuclides in environmental samples in the event of a nuclear accident. The method employs a novel pre-concentration step that utilizes an iron hydroxide precipitation (enhanced with calcium phosphate) followed by a final calcium fluoride precipitation to remove silicates and other matrix components. The pre-concentration steps, in combination with a rapid Sr Resin separation using vacuum box technology, allow very large soil samples to be analyzed for {sup 89,90}Sr using gas flow proportional counting with a lower method detection limit. The calcium fluoride precipitation eliminates column flow problems typically associated with large amounts of silicates in large soil samples.

In this work, we present results of a large-eddy simulation of the 48 multi-megawatt turbines composing the Lillgrund wind plant. Turbulent inflow wind is created by performing an atmospheric boundary layer precursor simulation and turbines are modeled using a rotating, variable-speed actuator line representation. The motivation for this work is that few others have done wind plant large-eddy simulations with a substantial number of turbines, and the methods for carrying out the simulations are varied. We wish to draw upon the strengths of the existing simulations and our growing atmospheric large-eddy simulation capability to create a sound methodology for performing this type of simulation. We have used the OpenFOAM CFD toolbox to create our solver.

A semiclassical simulation approach is presented for studying quantum noise in large-scale photonic circuits incorporating an ideal Kerr nonlinearity. A circuit solver is used to generate matrices defining a set of stochastic differential equations, in which the resonator field variables represent random samplings of the Wigner quasi-probability distributions. Although the semiclassical approach involves making a large-photon-number approximation, tests on one- and two-resonator circuits indicate satisfactory agreement between the semiclassical and full-quantum simulation results in the parameter regime of interest. The semiclassical model is used to simulate random errors in a large-scale circuit that contains 88 resonators and hundreds of components in total, and functions as a 4-bit ripple counter. The error rate as a function of on-state photon number is examined, and it is observed that the quantum fluctuation amplitudes do not increase as signals propagate through the circuit, an important property for scalability.
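
A minimal single-resonator version of this semiclassical scheme can be sketched as a truncated-Wigner stochastic differential equation: field samples start from the vacuum Wigner distribution and evolve under Euler-Maruyama steps with damping noise. All parameters below are illustrative; the paper's circuit solver, multi-resonator coupling, and component library are not reproduced.

```python
# Truncated-Wigner sketch for one driven, lossy Kerr resonator.
import numpy as np

rng = np.random.default_rng(0)
ntraj, nsteps, dt = 2000, 4000, 1e-3
kappa, delta, chi, drive = 1.0, 0.0, 0.01, 3.0   # toy parameters

# Vacuum Wigner samples: <|da|^2> = 1/2.
alpha = (rng.normal(size=ntraj) + 1j * rng.normal(size=ntraj)) / 2

for _ in range(nsteps):
    det = (-1j * delta - kappa / 2) * alpha \
          - 1j * chi * np.abs(alpha) ** 2 * alpha - 1j * drive
    # Complex Wiener increment with E[|dW|^2] = dt.
    dW = (rng.normal(size=ntraj) + 1j * rng.normal(size=ntraj)) * np.sqrt(dt / 2)
    alpha += det * dt + np.sqrt(kappa / 2) * dW

# Wigner averages are symmetrically ordered: <n> = <|a|^2> - 1/2.
n_mean = np.mean(np.abs(alpha) ** 2) - 0.5
print(f"steady-state photon number ~ {n_mean:.2f}")
```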

The invention is a method to produce large uniform hollow spherical shells by (1) forming uniform size drops of heat decomposable or vaporizable material, (2) evaporating the drops to form dried particles, (3) coating the dried particles with a layer of shell forming material and (4) heating the composite particles to melt the outer layer and to decompose or vaporize the inner particle to form an expanding inner gas bubble. The expanding gas bubble forms the molten outer layer into a shell of relatively large diameter. By cycling the temperature and pressure on the molten shell, nonuniformities in wall thickness can be reduced. The method of the invention is utilized to produce large uniform spherical shells, in the millimeter to centimeter diameter size range, from a variety of materials and of high quality, including sphericity, concentricity and surface smoothness, for use as laser fusion or other inertial confinement fusion targets as well as other applications.

Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

We examine the thermal infrared spectra of large dust grains of different chemical composition and mineralogy. Strong resonances in the optical properties result in detectable spectral structure even when the grain is much larger than the wavelength at which it radiates. We apply this to the thermal infrared spectra of compact amorphous and crystalline silicates. The weak resonances of amorphous silicates at 9.7 and 18 micron virtually disappear for grains larger than about 10 micron. In contrast, the strong resonances of crystalline silicates produce emission dips in the infrared spectra of large grains; these emission dips are shifted in wavelength compared to the emission peaks commonly seen in small crystalline silicate grains. We discuss the effect of a fluffy or compact grain structure on the infrared emission spectra of large grains, and apply our theory to the dust shell surrounding Vega.

The large eddy simulation (LES) approach—solving numerically the large scales of a turbulent system and accounting for the small-scale influence through a model—is applied to nonlinear gyrokinetic systems that are driven by a number of different microinstabilities. Comparisons between modeled, lower resolution, and higher resolution simulations are performed for an experimental measurable quantity, the electron density fluctuation spectrum. Moreover, the validation and applicability of LES is demonstrated through a series of diagnostics based on the free energetics of the system.

The focus of this research project was atoms with scattering lengths that are large compared to the range of their interactions and which therefore exhibit universal behavior at sufficiently low energies. Recent dramatic advances in cooling atoms and in manipulating their scattering lengths have made this phenomenon of practical importance for controlling ultracold atoms and molecules. This research project was aimed at developing a systematically improvable method for calculating few-body observables for atoms with large scattering lengths starting from the universal results as a first approximation. Significant progress towards this goal was made during the five years of the project.

We simulate three-dimensional, horizontally periodic Rayleigh-Bénard convection between free-slip horizontal plates, rotating about a horizontal axis. When both the temperature difference between the plates and the rotation rate are sufficiently large, a strong horizontal wind is generated that is perpendicular to both the rotation vector and the gravity vector. The wind is turbulent, large-scale, and vertically sheared. Horizontal anisotropy, engendered here by rotation, appears necessary for such wind generation. Most of the kinetic energy of the flow resides in the wind, and the vertical turbulent heat flux is much lower on average than when there is no wind.

Affinity diagramming is a powerful method for encouraging and capturing lateral thinking in a group environment. The Affinity+ Concept was designed to improve the collaborative brainstorm process through the use of large display surfaces in conjunction with mobile devices like smart phones and tablets. The system works by capturing the ideas digitally and allowing users to sort and group them manually on a large touch screen. Additionally, Affinity+ incorporates theme detection, topic clustering, and other processing algorithms that help bring structured analytic techniques to the process without requiring explicit leadership roles and other overhead typically involved in these activities.
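
As one hedged illustration of what such topic clustering could look like (the abstract does not specify Affinity+'s algorithms), brainstormed ideas can be vectorized with TF-IDF and grouped with k-means:

```python
# Toy topic clustering of brainstormed ideas; idea texts are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

ideas = [
    "add a search box to the dashboard",
    "dashboard should remember my filters",
    "offer a dark mode theme",
    "theme colors are hard to read",
    "export results as CSV",
    "let me download the raw data",
]
X = TfidfVectorizer(stop_words="english").fit_transform(ideas)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for group in range(3):
    print(f"group {group}:", [i for i, g in zip(ideas, labels) if g == group])
```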

Recent analysis of the WMAP three-year data suggests $f_{NL}^{local}\simeq86.8$ in the WMAP convention. It is necessary to determine whether general single field inflation can produce a large positive $f_{NL}$ before turning to other scenarios. We give some examples that generate a large positive $f_{NL}^{equil}$ in general single field inflation. Our models are different from ghost inflation. Due to the appearance of non-conventional kinetic terms, $f_{NL}^{equil}\gg1$ can be realized in single field inflation.

An auger-tube pump mixing device for mixing materials with large density differences while maintaining low stirring RPM and low power consumption. The mixing device minimizes the formation of vortexes and minimizes the incorporation of small bubbles in the liquid during mixing. By avoiding the creation of a vortex the device provides efficient stirring of full containers without spillage over the edge. Also, the device solves the problem of effective mixing in vessels where the liquid height is large compared to the diameter. Because of the gentle stirring or mixing by the device, it has application for biomedical uses where cell damage is to be avoided.

This report describes and summarizes activities, data, and preliminary data interpretation from the INEL Oversight Program R&D-1 project titled "Hydrologic Studies in Wells Open Through Large Intervals." The project is designed to use a straddle-packer system to isolate, hydraulically test, and sample specific intervals of monitoring wells that are open (uncased, unscreened) over large intervals of the Snake River Plain aquifer. The objectives of the project are to determine and compare vertical variations in water quality and aquifer properties that have previously only been determined in an integrated fashion over the entire thickness of the open interval of the observation wells.

Despite the potential for significant energy savings by reducing duct leakage or other thermal losses from duct systems in large commercial buildings, California Title 24 has no provisions to credit energy-efficient duct systems in these buildings. A substantial reason is the lack of readily available simulation tools to demonstrate the energy-saving benefits associated with efficient duct systems in large commercial buildings. The overall goal of the Efficient Distribution Systems (EDS) project within the PIER High Performance Commercial Building Systems Program is to bridge the gaps in current duct thermal performance modeling capabilities, and to expand our understanding of duct thermal performance in California large commercial buildings. As steps toward this goal, our strategy in the EDS project involves two parts: (1) developing a whole-building energy simulation approach for analyzing duct thermal performance in large commercial buildings, and (2) using the tool to identify the energy impacts of duct leakage in California large commercial buildings, in support of future recommendations to address duct performance in the Title 24 Energy Efficiency Standards for Nonresidential Buildings. The specific technical objectives for the EDS project were to: (1) Identify a near-term whole-building energy simulation approach that can be used in the impacts analysis task of this project (see Objective 3), with little or no modification. A secondary objective is to recommend how to proceed with long-term development of an improved compliance tool for Title 24 that addresses duct thermal performance. (2) Develop an Alternative Calculation Method (ACM) change proposal to include a new metric for thermal distribution system efficiency in the reporting requirements for the 2005 Title 24 Standards. The metric will facilitate future comparisons of different system types using a common "yardstick". (3) Using the selected near-term simulation approach, assess the impacts of duct system improvements in California large commercial buildings, over a range of building vintages and climates. This assessment will provide a solid foundation for future efforts that address the energy efficiency of large commercial duct systems in Title 24. This report describes our work to address Objective 1, which includes a review of past modeling efforts related to duct thermal performance, and recommends near- and long-term modeling approaches for analyzing duct thermal performance in large commercial buildings.

Generalized large deviation principles are developed for Colombeau-Ito SDEs with random coefficients, significantly extending the classical theory of large deviations for randomly perturbed dynamical systems developed by Freidlin and Wentzell. Using this generalized approach, jump phenomena in financial markets are also considered. In contrast with phenomenological approaches, such jump phenomena are explained from first principles, without any reference to a Poisson jump process.

The authors report on a method for producing freestanding single crystal metal films over large areas using electrodeposition and selective etching. The method can be turned into an inexpensive continuous process for making long ribbons or a large area of single crystal films. Results from a 5x5 mm{sup 2} Ni single crystal film using electron backscattering pattern pole figures and x-ray diffraction demonstrate that the quality of material produced is equivalent to the initial substrate without annealing or polishing.

An auger-tube pump mixing device is disclosed for mixing materials with large density differences while maintaining low stirring RPM and low power consumption. The mixing device minimizes the formation of vortexes and minimizes the incorporation of small bubbles in the liquid during mixing. By avoiding the creation of a vortex the device provides efficient stirring of full containers without spillage over the edge. Also, the device solves the problem of effective mixing in vessels where the liquid height is large compared to the diameter. Because of the gentle stirring or mixing by the device, it has application for biomedical uses where cell damage is to be avoided. 2 figs.

Within architectural design, a large variety of complex-shaped buildings can be found. Free-form modelling software nowadays allows for modelling of almost any possible shape, and with these tools designers can create complex structures; however, architecture with complex geometry brings along new challenges for manufacturers of building components.

In this paper the possibility of generating large scale curvature perturbations induced from the entropic perturbations during the waterfall phase transition of the standard hybrid inflation model is studied. We show that whether or not appreciable amounts of large scale curvature perturbations are produced during the waterfall phase transition depends crucially on the competition between the classical and the quantum mechanical back-reactions to terminate inflation. If one considers only the classical evolution of the system we show that the highly blue-tilted entropy perturbations induce highly blue-tilted large scale curvature perturbations during the waterfall phase transition which dominate over the original adiabatic curvature perturbations. However, we show that the quantum back-reactions of the waterfall field inhomogeneities produced during the phase transition dominate completely over the classical back-reactions. The cumulative quantum back-reactions of very small scale tachyonic modes terminate inflation very efficiently and shut off the curvature perturbation evolution during the waterfall phase transition. This indicates that the standard hybrid inflation model is safe against large scale curvature perturbations during the waterfall phase transition.

Large Neighborhood Search (LNS) repeatedly relaxes and re-optimizes a fragment of the variables of the current solution, combining the expressive power of Constraint Programming and the speed of Local Search. The LNS metaheuristic has three main parameters that must be specified (size of the fragment, search limit, and fragment selection procedure), and its performance depends greatly on their choice.

The parabolic Anderson model with a potential given by a field of independent, identically distributed random variables is intermittent: it is believed that there is a small number of relevant islands where the potential takes especially large values. We denote by <.> the expectation with respect to the potential. Throughout, we will assume that the logarithmic moment generating function of the potential is finite.

Private information retrieval (PIR) schemes are designed to prevent an adversary controlling the database from learning which records a client queries. The security of many related systems, such as Tor, relies on a model where the adversary does not have a global view. State-of-the-art PIR schemes have a high computational overhead that makes them expensive for querying large databases.

Materialized community ground models for large-scale earthquake simulation (Steven W. Schlosser et al.). This approach to ground motion simulations fully materializes ground model datasets into octrees stored as a service, an example of techniques in which scientific computation and storage services become more tightly intertwined.

Changes in Large Pulmonary Arterial Viscoelasticity in Chronic Pulmonary Hypertension (Zhijie Wang et al.). Stiffening of the large pulmonary arteries (PAs) is a feature of pulmonary arterial hypertension (PAH) and is an excellent predictor of mortality due to right ventricular failure. The dynamic elastic modulus (E), a material property, was measured at a physiologically relevant frequency (10 Hz) in hypertensive PAs.

Large Scale Graphene Production (ORNL 2013-G00021/tcc 02.2013, UT-B ID 201102606). Technology Summary: Graphene is an emerging one-atom-thick carbon material which has the potential for a wide range of applications. Since early research, graphene has quickly attained the status of a wonder nanomaterial and continues to draw attention.

Data sets of immense size are regularly generated on large scale computing resources. Even among more traditional methods for acquisition of volume data, such as MRI and CT scanners, data which is too large to be effectively visualized on standard workstations is now commonplace. One solution to this problem is to employ a 'visualization cluster,' a small to medium scale cluster dedicated to performing visualization and analysis of massive data sets generated on larger scale supercomputers. These clusters are designed to fit a different need than traditional supercomputers, and therefore their design mandates different hardware choices, such as increased memory, and more recently, graphics processing units (GPUs). While there has been much previous work on distributed memory visualization as well as GPU visualization, there is a relative dearth of algorithms which effectively use GPUs at a large scale in a distributed memory environment. In this work, we study a common visualization technique in a GPU-accelerated, distributed memory setting, and present performance characteristics when scaling to extremely large data sets.
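
A core operation in this setting is sort-last compositing, in which the partial images rendered by each node are blended with the "over" operator. The sketch below uses toy arrays in place of per-GPU renderings, and a production system would composite across nodes with MPI; this illustrates the general technique, not the specific algorithm studied in the paper.

```python
# Front-to-back "over" compositing of per-node partial RGBA images.
import numpy as np

h, w = 4, 4
rng = np.random.default_rng(0)
# One RGBA image per node, assumed already depth-sorted front to back.
partials = [rng.random((h, w, 4)) * 0.5 for _ in range(3)]

out = np.zeros((h, w, 4))
for img in partials:                                  # front to back
    a_acc = out[..., 3:4]
    out[..., :3] += (1 - a_acc) * img[..., 3:4] * img[..., :3]
    out[..., 3:4] = a_acc + (1 - a_acc) * img[..., 3:4]

print("final alpha range:", out[..., 3].min(), out[..., 3].max())
```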

A method of absolute calibration for large aperture optical systems is presented, using the example of the Pierre Auger Observatory fluorescence detectors. A 2.5 m diameter light source illuminated by an ultraviolet light emitting diode is calibrated with an overall uncertainty of 2.1% at a wavelength of 365 nm.

We apply a modified Iterated Local Search procedure to the Capacitated Vehicle Routing Problem ... arcs enter and leave the set S in such a way that at least $2k(S) - |\delta_R(S)|$ crossings occur ... the algorithm clearly outperformed, in terms of solution quality, those that dealt with large instances.

We give a detailed proof of the conjecture by Hohm and Zwiebach in double field theory. This result implies that their proposal for large gauge transformations in terms of the Jacobian matrix for coordinate transformations is, as required, equivalent to the standard exponential map associated with the generalized Lie derivative along a suitable parameter.

Asymmetric Structure-Preserving Subgraph Queries for Large Graphs (Zhe Fan, Byron Choi, Jianliang Xu, et al.). Subgraph queries (via subgraph isomorphism) are a fundamental and powerful tool for querying large graphs. The work proposes a novel cyclic-group-based encryption so that query processing is transformed into a series of private operations, and experiments indicate the techniques are effective.

Message-Passing Algorithms for Large Structured Decentralized POMDPs (Extended Abstract). The approach first transforms the policy optimization problem into one of likelihood maximization in a mixture model. Decentralized POMDPs' expressive power allows them to capture situations when agents must act based on local information under uncertainty [2].

Acoustic Modelling for Large Vocabulary Continuous Speech Recognition (Steve Young, Engineering Dept). The estimation of output probabilities is emphasised. Using this as a basis, two powerful methods are presented for dealing with limited data: firstly, transforms that can be robustly estimated using small amounts of adaptation data; secondly, MMI-based training.

HBR AT LARGE: "IT Doesn't Matter," by Nicholas G. Carr. As information technology's power and ubiquity have grown, its power and presence have begun to transform IT resources from potentially strategic resources into commodities. Today, no one would dispute the impact of the technologies, such as enterprise software and the Internet, that have transformed the business world.

Teaching Model Engineering in the Large (Petra Brosch, Gerti Kappel, Martina Seidl, et al.). Students build a model-driven development environment consisting of their own modeling languages and certain types of model transformations, in the spirit of domain-specific languages and software factories. All of these approaches rely on the power of models instead of code.

Resistive strip Micromegas detectors are discharge tolerant. They have been tested extensively as small detectors of about 10 x 10 cm$^2$ in size and they work reliably at high rates of 100 kHz/cm$^2$ and above. Tracking resolution well below 100 $\\mu$m has been observed for 100 GeV muons and pions. Micromegas detectors are meanwhile proposed as large area muon precision trackers of 2-3 m$^2$ in size. To investigate possible differences between small and large detectors, a 1 m$^2$ detector with 2048 resistive strips at a pitch of 450 $\\mu$m was studied in the LMU Cosmic Ray Measurement Facility (CRMF) using two 4 $\\times$ 2.2 m$^2$ large Monitored Drift Tube (MDT) chambers for cosmic muon reference tracking. A segmentation of the resistive strip anode plane in 57.6 mm x 93 mm large areas has been realized by the readout of 128 strips with one APV25 chip each and by eleven 93 mm broad trigger scintillators placed along the readout strips. This allows for mapping of homogeneity in pulse height and efficiency, d...

In this study, large scale rainfall simulation was used to evaluate runoff generation from canopy and intercanopy areas within an ashe juniper woodland of the Edwards Plateau. One 3 x 12 m site was established beneath the canopy of mature ashe...

Multipole-based preconditioners for large sparse linear systems (Sreekanth R. Sambavaram et al.). With hierarchical multipole approximations, the cost of computing and storing these preconditioners has been reduced dramatically. This paper describes the use of multipole operators as parallel preconditioners.

California agriculture is large, diverse, complex and dynamic. It generated nearly $37.5 billion in cash receipts in 2010, and California has been the nation's top agricultural state in cash receipts every year for decades, even as agriculture's relative share of the state economy declined from its 1960 level to about 12 percent in 2010. (University of California Agricultural Issues Center)

Data-reduction techniques have been shown to reduce storage overheads, with varying requirements for resources such as computation. Typical targets include archival data (stored permanently and accessed infrequently) and e-mail, in which large byte sequences are commonly repeated. There are numerous trade-offs between the effectiveness of data reduction and the resources required.

Modeling emergent large-scale structures of barchan dune fields (S. Worman, A.B. Murray, et al.). Barchan dune fields exhibit structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to external forcing; the model accounts for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements.

We present large-scale optimization techniques to model the energy function that underlies the folding process of proteins ... we get a system $A^T y \geq b$ ... What we believe is interesting in this context is that the results from ...

Large Margin Taxonomy Embedding with an Application to Document Categorization (Kilian Weinberger et al.). The key observation is that the topics are not just discrete classes, but are nodes in a complex taxonomy with rich inter-topic relationships. For example, web pages can be categorized into the Yahoo! web taxonomy, and medical journals can be organized within a medical subject taxonomy.

Spatial Energy Balancing in Large-scale Wireless Multihop Networks (Seung Jun Baek et al.). The focus is on optimizing trade-offs between the energy cost of spreading traffic and the improved spatial balance of energy. We propose a parameterized family of energy balancing strategies for grids and approximate continuum models.

Generating Large Instances of the Gong-Harn Cryptosystem (Kenneth J. Giuliani and Guang Gong, Centre for Applied Cryptographic Research, cacr.math.uwaterloo.ca). In 1999, Gong and Harn proposed a new cryptosystem based on third-order characteristic sequences. An implementation in C++ using NTL [7] is described, and timing results are presented.

Unicyclic graphs with large energy (Eric Ould Dadah Andriantiana and Stephan Wagner, Department of Mathematical Sciences, Mathematics Division, Stellenbosch University, Private Bag X1, Matieland 7602, South Africa). This work was supported by the National Research Foundation of South Africa under grant number 70560.

High purity tantalum ingots processed by electron beam melting are typical oligocrystalline materials. They are composed of a few coarse columnar grains aligned to the longitudinal ingot axis. The processing of this material into wires involves cold swaging up to large strains. The present work attempts to clarify the evolution of the microstructure during swaging, which determines the subsequent changes associated with annealing.

NANO EXPRESS: Fabrication of Large Area Periodic Nanostructures Using Nanosphere Photolithography. Periodic nanostructures have applications in areas such as photonic band-gap materials [1], high-density data storage [2], and photonic devices [3]. We have developed a maskless technique for fabricating such structures over large areas.

Self-organizing computational models with specific intracortical connections can explain many functional features of visual cortex, such as topographic orientation and ocular dominance maps. However, due to their computational requirements, it is difficult to use such detailed models to study large-scale phenomena like object segmentation and binding, object recognition, tilt illusions, optic flow, and fovea periphery interaction. This paper introduces two techniques that make large simulations practical. First, a set of general linear scaling equations for the RF-LISSOM self-organizing model is derived and shown to result in quantitatively equivalent maps over a wide range of simulation sizes. This capability makes it possible to debug small simulations and then scale them up to larger simulations only when needed. The scaling equations also facilitate the comparison of biological maps and parameters between individuals and species with different brain region sizes. Second, the equations are combined into a new growing map method called GLISSOM, which dramatically reduces the memory and computational requirements of large self-organizing networks. With GLISSOM it should be possible to simulate all of human V1 at the single-column level using existing supercomputers, making detailed computational study of large-scale phenomena possible.

The development of large technical systems: implications for hydrogen (Jim Watson, March 2002, Tyndall Centre). It is possible to imagine a new hydrogen energy economy in which hydrogen is generated, transported, stored and made widely available. Despite the case for hydrogen and its desirability, this hydrogen energy economy is not inevitable; the gap between where we are now and that vision remains large.

The Effect of Caching on the Sustainability of Large Wireless Networks (Georgios S. Paschos et al.). Caching is considered a key enabler of scalability and efficiency for future networks. Despite their worldwide deployment, wireless networks are mostly confined to short-range access, and the sustainability of large wireless networks is limited.

We show that for large-scale problems with favourable geometry, this ... an adjustable "aggressive" stepsize policy [8]; up to this policy, this is nothing but SMP ... the cost of building this representation is $O(1)km^2$ arithmetic operations ...

... techniques to the simplex method for the solution of large-scale instances ... instances with up to 5535 nodes and 666639 arcs, arising from an industrial application ... For each node $v \in T_F \cup A_F$ we build a "layered" graph rooted in $v$, where each layer ...

Solution of Large Eigenvalue Problems in Electronic Structure Calculations (Y. Saad et al.). Predicting the structural and electronic properties of complex systems is one of the outstanding problems in condensed matter physics. For example, it may be desirable in certain cases to follow the dynamics of atoms/electrons under external perturbations.

Large routers raise reliability concerns due to the large number of active components in the switch fabric. Router design must consider the limits imposed by electronic technology; in particular, it must take power into account. This motivates not only solutions that make use of optics inside electronic switches but also optical switching architectures.

Keywords: protein sequence index, approximate match. Building an appropriate index structure is one way of supporting indexed search on large biological sequence databases, but the construction cost of the index must be taken into account.

Information on the abundance of large whales in Greenland waters, including fin whales, is needed. Ship-based surveys were conducted in West Greenland by the Greenland Fisheries Research Institute (m/v Regina Maris). Between 1983 and 1993, visual aerial surveys were flown when survey conditions were optimal in Greenlandic waters.

The Storm Prediction Center (SPC) has taken a more traditional approach to exploring the pattern and frequency of large hail. The SPC maintains a database of reported severe thunderstorm events over the contiguous United States, in agreement with the entries in the NCDC publication Storm Data. Prior to that, data were ...

Parallel search methods have recently been successful in solving NP-complete problem instances of practical importance which were too large to be solved before, where we observed good relative speedup as well. For scientists and engineers who have NP-complete problems to solve, these results are of practical interest.

A process for manufacturing large, fully dense, high purity TiB.sub.2 articles by pressing powders with a sintering aid at relatively low temperatures to reduce grain growth. The process requires stringent temperature and pressure applications in the hot-pressing step to ensure maximum removal of sintering aid and to avoid damage to the fabricated article or the die.

Structure of neighborhoods in a large social network (Alina Stoica, Orange Labs and Liafa, Paris). Keywords: social networks, roles, patterns, complex networks, personal networks. The study of social networks has changed a lot since the early pioneering works of anthropologists.

Chapter 1: Large Blood Vessels. 1.1 Introduction: The Cardiovascular System. The heart is a pump that circulates blood to the lungs for oxygenation (pulmonary circulation) and then throughout the systemic arterial system, with a total cycle time of about one minute. From the left ventricle of the heart, blood ...

Detection of Macrosegregation in a Large Metallic Specimen Using XRF (E.J. Pickering, M. Holland). Conventional techniques require the removal of significant quantities of material. X-ray fluorescence (XRF) spectroscopy, on the other hand, can examine a specimen without it being enclosed in a vacuum or destroyed beyond basic surface preparation, though XRF spectroscopy has its limitations.

The analysis of economic feasibility for adding a cool storage facility to shift electric demand to off-peak hours for a large industrial facility is presented. DOE-2 is used to generate the necessary cooling load profiles for the analysis...

Improving large-sized PLC programs verification using abstractions (V. Gourcuff, O. de Smet, et al.). The approach builds formal models of PLC programs, which can be verified with well-known model-checkers like UPPAAL. Formal verification has seen little use in the development of industrial PLC programs up to now (Johnson [2007]); several reasons can explain this.

Load Distribution in Large Scale Network Monitoring Infrastructures (Josep Sanjuàs-Cuxart et al.). The goal is to build a scalable, distributed passive network monitoring system that can run several arbitrary monitoring applications; this work discusses the principal research challenges behind building such a system.

Worst Case Scenario for Large Distribution Networks with Distributed Generation (M. A. Mahmud et al.). Distributed generation (DG) in a distribution network has significant effects on the voltage profile for both customers and distribution utilities. This paper focuses on the variation of the voltage and the amount of DG that can be connected to distribution networks.

Climatology of Large Sediment Resuspension Events in Southern Lake Michigan (David J. Schwab et al.). The southern basin is subject to recurrent episodes of massive sediment resuspension by storm-induced waves; the largest events are examined. Our analysis indicates that significant resuspension events in southern Lake Michigan ...

A survey of the interrelationships between matrix models and field theories on the noncommutative torus is presented. The discretization of noncommutative gauge theory by twisted reduced models is described along with a rigorous definition of the large N continuum limit. The regularization of arbitrary noncommutative field theories by means of matrix quantum mechanics and its connection to noncommutative solitons is also discussed.

The assumption of a flat prior distribution plays a critical role in the anthropic prediction of the cosmological constant. In a previous paper we analytically calculated the distribution for the cosmological constant, including the prior and anthropic selection effects, in a large toy ``single-jump'' landscape model. We showed that it is possible for the fractal prior distribution we found to behave as an effectively flat distribution in a wide class of landscapes, but only if the single jump size is large enough. We extend this work here by investigating a large ($N \\sim 10^{500}$) toy ``multi-jump'' landscape model. The jump sizes range over three orders of magnitude and an overall free parameter $c$ determines the absolute size of the jumps. We will show that for ``large'' $c$ the distribution of probabilities of vacua in the anthropic range is effectively flat, and thus the successful anthropic prediction is validated. However, we argue that for small $c$, the distribution may not be smooth.

We calculate the strength of the tidal field produced by the large-scale density field acting on primordial density perturbations in power law models. By analysing changes in the orientation of the deformation tensor that result from smoothing the density field on different mass scales, we show that the large-scale tidal field can strongly affect the morphology and orientation of density peaks. The strength of the tidal field is measured as a function of the distance to the peak and of the spectral index. We find evidence that two populations of perturbations coexist: one with a misalignment between the main axes of their inertia and deformation tensors, which would lead to angular momentum acquisition and morphological changes; for the second population, the perturbations are found nearly aligned with the direction of the tidal field, which would imprint low angular momentum on them and would allow an alignment of structures such as those reported between clusters of galaxies in filaments, and between galaxies in clusters. Evidence is presented that the correlation between the orientation of perturbations and the large-scale density field could be a common property of Gaussian density fields with spectral indexes $n < 0$. We argue that alignment of structures can be used to probe the flatness of the spectrum on large scales but cannot determine the exact value of the spectral index.

Applications of Large Random Matrices in Communications Engineering (Ralf R. Müller). Tracking each degree of freedom of a large system individually becomes, sooner or later, a hopeless task; in a combustion engine, for instance, the many molecules of fuel and air can only be described statistically. In the same spirit, asymptotic eigenvalue distributions of many classes of large random matrices are given and their applications to communications engineering are treated.

This large-area photonic crystal (PhC) is an important enabling step towards the creation of high-density and low-cost optical devices, and it shows an unexpected but inherent robustness with respect to short-scale disorder such as fabrication roughness in the cladding. A scanning electron micrograph shows the large-area PhC possessing about $10^9$ lattice points.

Developing A Grid Portal For Large-scale Reservoir Studies (Center for Computation & Technology). Outline: advantages of grid technology; the proposed solution of the UCoMS team; what is a portal; quantification of reservoir uncertainty. Petroleum drilling involves many uncertainties, and the main objective is to optimize recovery in the presence of reservoir uncertainty.

We report on the use of visualization and visual analytics tools within a large automotive company (BMW Group). Laboratory studies offer controlled environments, whereas field experiments are a compromise strategy where features of the system are manipulated in their real setting. Within such an environment a wide range of real data analysis problems and tasks can be studied, informing visualization research in general.

The efficiency for detection of nu_e appearance events will be greater than 90% for GeV energies (this is 3 times the efficiency of ...). The detector technology is in a mature state, and readily scalable to large masses. More in need of further development is the software; more on this later. Maintenance and operational issues include repairability, failure modes and risks. The main detector ...

Research Investments in Large Indian Software Companies (Pankaj Jalote, Professor). Research produces knowledge that the rest of the company can use to improve the business. Research is typically not a business or a profit center; product companies invest in it to bring out newer products in the marketplace. But why does a service company need investment in research?

The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
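
The "reduce then sample" idea can be illustrated with a toy Metropolis-Hastings loop in which every posterior evaluation goes through a cheap reduced model rather than an expensive forward simulation. The algebraic forward map, prior, and data below are invented stand-ins for the project's PDE-based models.

```python
# Toy "reduce then sample": MCMC over a surrogate forward model.
import numpy as np

rng = np.random.default_rng(0)
data, sigma = 1.2, 0.1

def forward_reduced(theta):
    # Stand-in for a reduced-order model replacing an expensive PDE solve.
    return theta + 0.1 * theta ** 2

def log_post(theta):
    resid = (data - forward_reduced(theta)) / sigma
    return -0.5 * resid ** 2 - 0.5 * theta ** 2   # Gaussian prior

theta, lp, samples = 0.0, None, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.3 * rng.normal()             # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

print(f"posterior mean ~ {np.mean(samples[2000:]):.3f}")
```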

In this paper we analyse the alignment of galaxy groups with the surrounding large scale structure traced by spectroscopic galaxies from the Sloan Digital Sky Survey Data Release 7. We characterise these alignments by means of an extension of the classical two-point cross-correlation function, developed by Paz et al. 2008 (arXiv:0804.4477, MNRAS 389 1127). We find a strong alignment signal between the projected major axis of group shapes and the surrounding galaxy distribution up to scales of 30 Mpc/h. This observed anisotropy signal becomes larger as the galaxy group mass increases, in excellent agreement with the corresponding predicted alignment obtained from mock catalogues and LCDM cosmological simulations. These measurements provide new direct evidence of the adequacy of the gravitational instability picture to describe the large-scale structure formation of our Universe.

One of the greatest difficulties that space exploration faces is the lack of technology necessary to establish large volumes of habitable spaces on site. Both transporting the pre-built enclosures or transporting the equipment necessary for building them on site from conventional materials face the same enormous problem: the need to transport huge quantities of material into space, which is technically close to impossible. The current paper, explores the possibility and one approach of building these large spaces from an alternative material, water ice, a material that is a prerequisite for any settlement. The feasibility of dome shaped, pressurized, water ice buildings is analyzed from a structural integrity point of view and the possibility of building them with a technique using water sublimation and deposition onto a thin plastic film, a process which requires extremely little construction equipment with respect to the resulting habitable space.

The size of digital libraries is increasing, making navigation and access to information more challenging. Observing users' activities can help provide better services to users of very large digital libraries. In this paper we explain how the Invenio open-source software, used by the CERN Document Server (CDS), allows fine-grained logging of user behavior. In the first phase, the sequence of actions performed by users of CDS is captured, while in the second phase statistical data is calculated offline. This paper explains these two steps and the results. Although the analyzed system focuses on the high energy physics literature, the process could be applicable to other scientific communities with an international, large user base.
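
A sketch of what the offline second phase might compute is shown below: per-session action sequences reduced to action-to-action transition counts. The action names are hypothetical; Invenio's actual log schema is not reproduced here.

```python
# Reduce captured action sequences to transition statistics.
from collections import Counter
from itertools import pairwise   # Python 3.10+

sessions = [
    ["search", "view_record", "download"],
    ["search", "view_record", "view_record", "download"],
    ["browse", "view_record"],
]
transitions = Counter(t for s in sessions for t in pairwise(s))
for (a, b), n in transitions.most_common():
    print(f"{a} -> {b}: {n}")
```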

For more than a decade now the complete origin of the diffuse extragalactic gamma-ray background (EGRB) has been unknown. Major components like unresolved star-forming galaxies appear to account for much of the emission below about 10 GeV. Moreover, we show that, even though the gamma-ray emission arising from structure formation shocks at galaxy clusters is below previous estimates, these large scale shocks can still give an important, and even dominant at high energies, contribution to the EGRB. Future detections of cluster gamma-ray emission would make our upper limit on the extragalactic gamma-ray emission from structure-formation processes a firm prediction, and give us deeper insight into the evolution of these large scale shocks.

We study a significant nuclear suppression of the relative production rates (p(d)+A)/(p+d(p)) for the Drell-Yan process at large Feynman xF. Since this is the region of minimal values for the light-front momentum fraction variable x2 in the target nucleus, it is tempting to interpret this as a manifestation of coherence or of a Color Glass Condensate. We demonstrate, however, that this suppression mechanism is governed by the energy conservation restrictions in multiple parton rescatterings in nuclear matter. To eliminate nuclear shadowing effects coming from the coherence, we calculate nuclear suppression in the light-cone dipole approach at large dilepton masses and at energies accessible at FNAL. Our calculations are in good agreement with data from the E772 experiment. Using the same mechanism we also predict nuclear suppression at forward rapidities in the RHIC energy range.

A simple analysis is presented concerning an upper limit of the power density (power per unit land area) of a very large wind farm located at the bottom of a fully developed boundary layer. The analysis suggests that the limit of the power density is about 0.38 times $\tau_{w0}U_{F0}$, where $\tau_{w0}$ is the natural shear stress on the ground (that is observed before constructing the wind farm) and $U_{F0}$ is the natural or undisturbed wind speed averaged across the height of the farm to be constructed. Importantly, this implies that the maximum extractable power from such a very large wind farm will not be proportional to the cube of the wind speed at the farm height, or even the farm height itself, but will be proportional to $U_{F0}$.
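
Plugging illustrative atmospheric numbers into the stated limit shows the order of magnitude involved (the friction velocity and wind speed below are assumptions, not values taken from the paper):

```python
# Worked example of the stated limit: power density <= 0.38 * tau_w0 * U_F0.
rho = 1.2        # air density, kg/m^3
u_star = 0.5     # friction velocity, m/s (assumed)
U_F0 = 8.0       # undisturbed wind speed averaged over farm height, m/s

tau_w0 = rho * u_star ** 2            # natural ground shear stress, Pa
p_max = 0.38 * tau_w0 * U_F0          # W per m^2 of land area
print(f"tau_w0 = {tau_w0:.2f} Pa, power density limit ~ {p_max:.2f} W/m^2")
```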

This paper presents a study of prebreakdown and breakdown phenomena under AC voltage in mineral oil in large gaps up to 60 cm. The investigations presented concern the study of streamers and the measurement of breakdown voltages in rod-plane and sphere-plane gaps. Also, the influence of contamination by solid particles in the oil has been considered. A specific breakdown mode under AC voltage is evidenced, where bursts of streamers lead to the lowest breakdown fields recorded. Numerical values of the mean field in oil required for direct or burst breakdown modes are derived from the experiments. As a consequence, the great sensitivity to the presence of particles on EHV transformer insulation with large oil gaps is pointed out.

The gradual crowding out of singleton and small team science by large team endeavors is challenging key features of research culture. It is therefore important for the future of scientific practice to reflect upon the scientists' ethical responsibilities within teams. To facilitate this reflection we show labor force trends in the US revealing a skewed growth in academic ranks and increased levels of competition for promotion within the system; we analyze teaming trends across disciplines and national borders demonstrating why it is becoming difficult to distribute credit and to avoid conflicts of interest; and we use more than a century of Nobel prize data to show how science is outgrowing its old institutions of singleton awards. Of particular concern within the large team environment is the weakening of the mentor-mentee relation, which undermines the cultivation of virtue ethics across scientific generations. These trends and emerging organizational complexities call for a universal set of behavioral norm...

Results are given on tests of large particle trackers for the detection of neutrino interactions in long-baseline experiments. Module prototypes have been assembled using TiO$_2$-doped polycarbonate panels. These were subdivided into cells of $\sim 1$ cm$^2$ cross section and 6 m length, filled with liquid scintillator. A wavelength-shifting fibre inserted in each cell captured a part of the scintillation light emitted when a cell was traversed by an ionizing particle. Two different fibre-readout systems have been tested: an optoelectronic chain comprising an image intensifier and an Electron Bombarded CCD (EBCCD); and a hybrid photodiode (HPD). New, low-cost liquid scintillators have been investigated for applications in large underground detectors. Testbeam studies have been performed using a commercially available liquid scintillator. The number of detected photoelectrons for minimum-ionizing particles crossing a module at different distances from the fibre readout end was 6 to 12 with the EBCCD chain and ...

Stochastic orders are binary relations defined on probability distributions which capture intuitive notions like being larger or being more variable. This paper introduces stochastic ordering of interference distributions in large-scale networks modeled as point processes. Interference is the main performance-limiting factor in most wireless networks, thus it is important to understand its statistics. Since closed-form results for the distribution of interference for such networks are only available in limited cases, interferences of different networks are compared using stochastic orders, even when closed-form expressions for the interference are not tractable. We show that the interference from a large-scale network depends on the fading distributions with respect to the stochastic Laplace transform order. The condition for path-loss models is also established to have stochastic ordering between interferences. The stochastic ordering of interferences between different networks is also shown. Monte-Carlo simulations are us...
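
The Laplace transform order can be checked numerically: X precedes Y in that order if E[exp(-sX)] >= E[exp(-sY)] for all s > 0. The Monte Carlo sketch below compares interference in a toy Poisson disc network under Rayleigh versus Nakagami-m fading; the network model and parameters are illustrative, not the paper's exact setup.

```python
# Monte Carlo comparison of interference Laplace transforms.
import numpy as np

rng = np.random.default_rng(0)

def interference(fading_sampler, ntrials=5000, lam=0.1, radius=50, alpha=4.0):
    out = np.empty(ntrials)
    for i in range(ntrials):
        n = rng.poisson(lam * np.pi * radius ** 2)   # Poisson point count
        r = radius * np.sqrt(rng.random(n))          # uniform on a disc
        r = np.maximum(r, 1.0)                       # avoid the singularity
        out[i] = np.sum(fading_sampler(n) * r ** (-alpha))
    return out

i_rayleigh = interference(lambda n: rng.exponential(1.0, n))   # m = 1
i_nakagami = interference(lambda n: rng.gamma(4.0, 0.25, n))   # m = 4

for s in (0.5, 1.0, 5.0):
    l1 = np.mean(np.exp(-s * i_rayleigh))
    l2 = np.mean(np.exp(-s * i_nakagami))
    print(f"s={s}: L_rayleigh={l1:.4f}  L_nakagami={l2:.4f}")
```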

We analyze the interplay between K\\"ahler moduli stabilization and chaotic inflation in supergravity. While heavy moduli decouple from inflation in the supersymmetric limit, supersymmetry breaking generically introduces non-decoupling effects. These lead to inflation driven by a soft mass term, $m_\\varphi^2 \\sim m m_{3/2}$, where $m$ is a supersymmetric mass parameter. This scenario needs no stabilizer field, but the stability of moduli during inflation imposes a large supersymmetry breaking scale, $m_{3/2} \\gg H$, and a careful choice of initial conditions. This is illustrated in three prominent examples of moduli stabilization: KKLT stabilization, K\\"ahler Uplifting, and the Large Volume Scenario. Remarkably, all models have a universal effective inflaton potential which is flattened compared to quadratic inflation. Hence, they share universal predictions for the CMB observables, in particular a lower bound on the tensor-to-scalar ratio, $r \\gtrsim 0.05$.

Thermodynamic study is performed on nitrogen expander cycles for large capacity liquefaction of natural gas. In order to substantially increase the capacity, a Brayton refrigeration cycle with nitrogen expander was recently added to the cold end of the reputable propane pre-cooled mixed-refrigerant (C3-MR) process. Similar modifications with a nitrogen expander cycle are extensively investigated on a variety of cycle configurations. The existing and modified cycles are simulated with commercial process software (Aspen HYSYS) based on selected specifications. The results are compared in terms of thermodynamic efficiency, liquefaction capacity, and estimated size of heat exchangers. The combination of C3-MR with partial regeneration and pre-cooling of nitrogen expander cycle is recommended to have a great potential for high efficiency and large capacity.

This report details the progress made on the ASCR funded project Performance Health Monitoring (PHM) for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.
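
One minimal example of a PHM-style diagnostic is flagging a metric that drifts several standard deviations from its rolling baseline. The metric name, threshold, and injected fault below are invented for illustration; the actual PHM framework is far broader.

```python
# Rolling z-score fault detector on a synthetic performance metric.
import numpy as np

rng = np.random.default_rng(0)
bw = rng.normal(10.0, 0.3, 500)      # e.g. interconnect bandwidth, GB/s
bw[350:] -= 2.0                      # injected degradation (contention)

window, threshold = 50, 4.0
for t in range(window, len(bw)):
    base = bw[t - window:t]
    z = (bw[t] - base.mean()) / base.std()
    if abs(z) > threshold:
        print(f"performance fault detected at sample {t} (z = {z:.1f})")
        break
```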

Recently, Mark Steel Corp. of Salt Lake City completed a contract to build nine large driver tubes as part of a blast thermal simulator for the US Army. The finished device will be used to test the hardenability of various defense products under simulated nuclear blast forces. To create the shock wave component of the blast, the tubes will be charged with compressed nitrogen, which will be released to create an explosive force eight times that of hurricane Andrew, or equivalent to a 35-megaton bomb. To hold critical low-hydrogen specs while welding the large, complex parts, Mark Steel needed a combination of tight temperature control throughout the process, stringent test procedures, two versions of a new ultralow-hydrogen electrode, and a new ultralow-hydrogen submerged arc flux developed for the job by the Lincoln Electric Co. These specs are discussed here.

Models of large-field inflation based on axion-like fields with shift symmetries can be simple and natural, and make a promising prediction of detectable primordial gravitational waves. The Weak Gravity Conjecture is known to constrain the simplest case in which a single axion descends from a gauge field in an extra dimension. By supplementing the Weak Gravity Conjecture with considerations of how the mass spectrum of the theory varies across the axion moduli space, we obtain more powerful constraints that apply to a variety of multi-axion theories including N-flation and alignment models. In every case that we consider, plausible assumptions lead to field ranges that cannot be parametrically larger than the Planck scale. Our results are strongly suggestive of a general inconsistency in models of large-field inflation based on axions, and possibly of a more general principle forbidding super-Planckian field ranges.

In models with Large Extra Dimensions the smallness of neutrino masses can be naturally explained by introducing gauge singlet fermions which propagate in the bulk. The Kaluza-Klein modes of these fermions appear as towers of sterile neutrino states on the brane. We study the phenomenological consequences of this picture for the high energy atmospheric neutrinos. For this purpose we construct a detailed equivalence between a model with large extra dimensions and a (3 + n) scenario consisting of three active and n extra sterile neutrino states, which provides a clear intuitive understanding of Kaluza-Klein modes. Finally, we analyze the collected data of high energy atmospheric neutrinos by IceCube experiment and obtain bounds on the radius of extra dimensions.

We calculate analytically the probability of large deviations from its mean of the largest (smallest) eigenvalue of random matrices belonging to the Gaussian orthogonal, unitary and symplectic ensembles. In particular, we show that the probability that all the eigenvalues of an (N\\times N) random matrix are positive (negative) decreases for large N as \\exp[-\\beta \\theta(0) N^2] where the parameter \\beta characterizes the ensemble and the exponent \\theta(0)=(\\ln 3)/4=0.274653... is universal. We also calculate exactly the average density of states in matrices whose eigenvalues are restricted to be larger than a fixed number \\zeta, thus generalizing the celebrated Wigner semi-circle law. The density of states generically exhibits an inverse square-root singularity at \\zeta.
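
The leading-order scaling can be probed numerically for small N, keeping in mind that the formula is asymptotic in N, so small-N Monte Carlo only indicates the trend:

```python
# Monte Carlo check: P(all eigenvalues of an N x N GOE matrix are positive)
# should scale roughly as exp[-theta0 * N^2], theta0 = ln(3)/4 (beta = 1).
import numpy as np

theta0 = np.log(3) / 4
rng = np.random.default_rng(0)
trials = 100_000

for n in range(2, 6):
    hits = 0
    for _ in range(trials):
        g = rng.normal(size=(n, n))
        h = (g + g.T) / 2                 # GOE sample
        hits += np.linalg.eigvalsh(h).min() > 0
    print(f"N={n}: MC {hits / trials:.2e}  vs  "
          f"exp(-theta0 N^2) = {np.exp(-theta0 * n * n):.2e}")
```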

For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
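
For a flavor of the problem class, the sketch below builds the deterministic-equivalent ("extensive form") LP of a tiny two-stage problem with sampled scenarios and solves it directly; this brute-force formulation is only viable because the example is small, whereas the methodology described above combines decomposition with importance sampling precisely to avoid it. The cost data and demand distribution are invented.

```python
# Extensive-form LP for a tiny two-stage stochastic program (newsvendor-like).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
S = 200                                  # sampled demand scenarios
demand = rng.uniform(50, 150, S)
c, q = 1.0, 1.5                          # first-stage and recourse unit costs

# Variables: x, y_1..y_S ; minimize c*x + (q/S) * sum_s y_s.
cost = np.concatenate(([c], np.full(S, q / S)))
# Recourse constraints y_s >= d_s - x, written as -x - y_s <= -d_s.
A_ub = np.zeros((S, S + 1))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), 1 + np.arange(S)] = -1.0
res = linprog(cost, A_ub=A_ub, b_ub=-demand, bounds=[(0, None)] * (S + 1))

print(f"first-stage decision x = {res.x[0]:.1f}, objective = {res.fun:.1f}")
```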

Large fires are significant seasonal contributors to western visibility reduction. We have found that the relative concentration of supermicron size particles (assumed to be a mixture of mechanically generated particles by high winds associated with large fires and low density chain aggregates from coagulation in the fire) and high turbulence in fire plumes can radically change the aerosol sizes in the fire plume. This is especially important for aerosols with high visibility reduction and long range transport potential. This calculation was done with a 10 level one dimensional model with parameterized vertical and horizontal diffusion, sedimentation and coagulation. The optical effects of the evolving concentration and size distributions were modeled using Mie scattering and absorption assumptions.

Monitoring of microbial corrosion is always difficult because of the sessile nature of bacteria and the lack of meaningful correlation between routine bacteria counts and bacterial activity. This problem is further aggravated in a large oilfield water system because of size and sampling difficulties. This paper discusses some monitoring techniques currently used in the oil industry, their limitations, and possible areas for improvement. These improved techniques are in use or will be implemented in the Aramco systems.

the years in these areas, large dairy herds have developed that are often heavily dependent upon purchased feedstuffs. The competition for land for alternative uses and the high cost of irrigation water have curtailed forage production on many dairies.... In recent years the supply of high-quality replacement cows available for purchase has decreased. More emphasis is being placed upon raising replacements. A few dairymen start their own calves and then contract to have them grown out by an experienced...

This is one of the chapters in the book titled “Advances in batteries for large- and medium-scale energy storage: Applications in power systems and electric vehicles”, published by Woodhead Publishing Limited. The chapter discusses the basic electrochemical fundamentals of electrochemical energy storage devices with a focus on rechargeable batteries. Several practical secondary battery systems are also discussed as examples.

A device for imaging scenes with a very large range of intensity, having a pair of polarizers, a primary lens, an attenuating mask, and an imaging device optically connected along an optical axis. Preferably, a secondary lens, positioned between the attenuating mask and the imaging device, is used to focus light on the imaging device. The angle between the first polarization direction and the second polarization direction is adjustable.
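
The adjustable angle presumably sets the overall attenuation through the standard Malus's law for an ideal polarizer pair (a textbook optics relation, not a formula quoted from this record): $$I_{\rm out} = I_{\rm in}\,\cos^2\theta,$$ so rotating one polarizer from $\theta = 0$ toward $90^\circ$ sweeps the transmission continuously from maximum toward extinction, which is how the usable intensity range of the imager can be extended.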

A significant part of the experimental program in Hall B of the Jefferson Lab is dedicated to studies of the structure of baryons. The CEBAF Large Acceptance Spectrometer (CLAS), the availability of circularly and linearly polarized photon beams, and the recent addition of polarized targets provide a remarkable opportunity for single, double and in some cases triple polarization measurements in photoproduction. An overview of the experiments will be presented.

VFD for pump motors larger than 5 hp. Three-Way Constant Speed Systems with Blending Station (Figures 5 and 6) Figures 5 and 6 show a three-way-valve constant-speed system with a blending station. This type of system is more widely used than... by these large campuses. If the thermal distribution efficiency is improved, the overall energy consumption of the system is also improved (Deng et al., 2000). Several options that seem to improve the thermal transmission performance include: VFD systems...

extending up to three thousand feet from the producing well. Also, a model simulating a nuclear cavity was designed. This model simulated a well containing an eighty-foot-radius cavity with a fractured zone of one hundred times the reservoir permeability... of each system was prepared. The results of this study showed that all fractures of greater than one-thousand-foot radius had greater productivity and greater cumulative gas produced than did the nuclear cavity. It appears that large hydraulic...

EXO-200 uses 468 large-area avalanche photodiodes (LAAPDs) for detection of scintillation light in an ultra-low-background liquid xenon (LXe) detector. We describe initial measurements of dark noise, gain, and response to xenon scintillation light of LAAPDs at temperatures from room temperature down to 169 K, the temperature of liquid xenon. We also describe the individual characterization of more than 800 LAAPDs for selective installation in the EXO-200 detector.

At this Sixth International School on Field Theory and Gravitation, I was invited to talk to the students and researchers of Field Theory mainly about the LHC (Large Hadron Collider) and its results. I will try to summarize the daily life of high energy physics and give an idea of the experiments and the expectations for the near future. I will comment on the present results and on the prospects for LHC/CMS.

The work required to solve for the fully interacting N boson wave function, which is widely believed to scale exponentially with N, is rearranged so that the problem scales order by order in a perturbation series as N{sup 0}. The exponential complexity reappears as an exponential scaling with the order of our perturbation series, allowing exact analytical calculations for very large N systems through low order.

The purpose of this document is to outline the methodology used to baseline and maintain the cleanliness status of the newly built and installed Large Optic Cleaning Station (LOCS). The station has now been in use for eleven months, and after many cleaning studies and the implementation of the resulting improvements it appears to be cleaning optics to a level that is acceptable for the fabrication of Nano-Laminates.

The second peak in the Fe XVI 33.5 nm line irradiance observed during solar flares by the Extreme ultraviolet Variability Experiment (EVE) is known as the Extreme UltraViolet (EUV) late phase. Our previous paper (Liu et al. 2013) found that the main emissions in the late phase originate from large-scale loop arcades that are closely connected to, but distinct from, the post-flare loops (PFLs), and we also proposed that a long cooling process without additional heating could explain the late phase. In this paper, we define the extremely large late phase, which not only has a bigger peak in the warm 33.5 nm irradiance profile but also releases more EUV radiative energy than the main phase. Through detailed inspection of the EUV images from three points of view, it is found that, besides the late-phase loop arcades, the greater contribution to the extremely large late phase comes from a hot structure that fails to erupt. This hot structure is identified as a flux rope, which is quickly energized by the flare reconnection...

In this contribution I will present the current status of our project on stellar population analyses and spatial information for both Magellanic Clouds (MCs). The Magellanic Clouds - especially the LMC with its large size and small depth (<300 pc) - are suitable laboratories and testing grounds for theoretical models of star formation. With distance moduli of 18.5 and 18.9 mag for the LMC and SMC, respectively, and small galactic extinction, their stellar content can be studied in detail from the most massive stars of the youngest populations (<25 Myr) connected to H-alpha emission down to the low-mass end of about 1/10 of a solar mass. Based on broad-band photometry (U,B,V) I present results for the supergiant shell (SGS) SMC1, some regions on the LMC east side including LMC2 showing different overlapping young populations and the region around N171 with its large and varying colour excess, and LMC4. This best-studied SGS shows a coeval population aged about 12 Myr with little age spread and no correlation with distance from LMC4's centre. I will show that the available data are not compatible with many of the proposed scenarios like SSPSF or a central trigger (like a cluster or GRB), while a large-scale trigger like the bow shock of the rotating LMC can do the job.

Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
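
As a schematic of the multi-criteria idea, the sketch below (my own illustration; the criterion names, weights, and simple weighted-sum scoring are assumptions, not the tool's actual algorithm) computes a user-weighted suitability score per raster cell and ranks candidate sites, which is the kind of user-driven, transparent scoring such a GIS tool exposes.

    import numpy as np

    rng = np.random.default_rng(2)
    ny, nx = 100, 100                      # toy raster grid

    # Normalized (0-1) criterion layers; stand-ins for real GIS rasters.
    solar_resource = rng.random((ny, nx))
    grid_distance  = rng.random((ny, nx))  # 1 = close to transmission
    env_constraint = rng.random((ny, nx))  # 1 = least sensitive land

    # User-defined weights (summing to 1) express stakeholder priorities.
    weights = {"solar": 0.5, "grid": 0.3, "env": 0.2}

    suitability = (weights["solar"] * solar_resource
                   + weights["grid"] * grid_distance
                   + weights["env"]  * env_constraint)

    # Report the top 5 candidate cells.
    flat = np.argsort(suitability, axis=None)[::-1][:5]
    for rank, idx in enumerate(flat, 1):
        y, x = np.unravel_index(idx, suitability.shape)
        print(f"#{rank}: cell ({y},{x}) score {suitability[y, x]:.3f}")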

We study incomplete relaxation to quantum equilibrium at long wavelengths, during a pre-inflationary phase, as a possible explanation for the reported large-scale anomalies in the cosmic microwave background (CMB). Our scenario makes use of the de Broglie-Bohm pilot-wave formulation of quantum theory, in which the Born probability rule has a dynamical origin. The large-scale power deficit could arise from incomplete relaxation for the amplitudes of the primordial perturbations. We show, by numerical simulations for a spectator scalar field, that if the pre-inflationary era is radiation dominated then the deficit in the emerging power spectrum will have a characteristic shape (an inverse-tangent dependence on wavenumber k, with oscillations). It is found that our scenario is able to produce a power deficit in the observed region and of the observed (approximate) magnitude for an appropriate choice of cosmological parameters. We also discuss the large-scale anisotropy, which could arise from incomplete relaxation for the phases of the primordial perturbations. We present numerical simulations for phase relaxation, and we show how to define characteristic scales for amplitude and phase nonequilibrium. The extent to which the data might support our scenario is left as a question for future work. Our results suggest that we have a potentially viable model that might explain two apparently independent cosmic anomalies by means of a single mechanism.

This report examines the inherent vulnerability of nuclear power plant structures to the thermal environments arising from large, external fires. The inherent vulnerability is the capacity of the concrete safety-related structures to absorb thermal loads without exceeding the appropriate thermal and structural design criteria. The potential sources of these thermal environments are large, offsite fires arising from accidents involving the transportation or storage of large quantities of flammable gases or liquids. A realistic thermal response analysis of a concrete panel was performed using three limiting criteria: temperature at the first rebar location, erosion and ablation of the front (exterior) surface due to high heat fluxes, and temperature at the back (interior) surface. The results of this analysis yield a relationship between incident heat flux and the maximum allowable exposure duration. Example calculations for the break of a 0.91 m (3') diameter high-pressure natural gas pipeline and a 1 m/sup 2/ hole in a 2-1/2 million gallon gasoline tank show that the resulting fires do not pose a significant hazard for ranges of 500 m or greater.

The Large Synoptic Survey Telescope (LSST) will use an active optics system (AOS) to maintain alignment and surface figure on its three large mirrors. Corrective actions fed to the LSST AOS are determined from information derived from four curvature wavefront sensors located at the corners of the focal plane. Each wavefront sensor is a split detector such that the halves are 1 mm on either side of focus. In this paper we describe the extensions to published curvature wavefront sensing algorithms needed to address challenges presented by the LSST, namely the large central obscuration, the fast f/1.23 beam, off-axis pupil distortions, and vignetting at the sensor locations. We also describe corrections needed for the split sensors and the effects of the angular separation of the different stars providing the intra- and extra-focal images. Lastly, we present simulations that demonstrate convergence, linearity, and noise that is negligible compared to atmospheric effects when the algorithm extensions are applied to the LS...
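
For orientation, the classic curvature-sensing relation that such algorithms extend (the standard Roddier result, stated here from the general literature rather than from this paper) links the normalized difference of the intra- and extra-focal intensities to the wavefront Laplacian in the pupil interior: $$\frac{I_1 - I_2}{I_1 + I_2} \;\propto\; \nabla^2 W,$$ with an additional boundary term involving the wavefront's normal derivative at the pupil edge; the LSST extensions handle the obscured, distorted, and vignetted pupil where these idealizations break down.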

Summary The objective of this document is to present the designers of the next generation of large-mass, ultra-low-background experiments with lessons learned and design strategies from previous experimental work. Design issues, divided by topic into mechanical, thermal and electrical requirements, are addressed. Large-mass low-background experiments have been recognized by the scientific community as appropriate tools to aid in the refinement of the standard model. The design of these experiments is very costly and a rigorous engineering review is required for their success. The extreme conditions that the components of the experiment must withstand (heavy shielding, vacuum/pressure and temperature gradients), in combination with unprecedented noise levels, necessitate engineering guidance to support quality construction and safe operating conditions. Physical properties and analytical results of typical construction materials are presented. Design considerations for achieving ultra-low-noise data acquisition systems are addressed. Five large-mass, low-background conceptual designs for the one-tonne-scale germanium experiment are proposed and analyzed. The result is a series of recommendations for the engineering of future experiments and for the Majorana simulation task group to evaluate the different design approaches.

This paper describes technical challenges in supporting large-scale real-time data analysis for future power grid systems and discusses various design options to address these challenges. Even though the existing U.S. power grid has served the nation remarkably well over the last 120 years, big changes are on the horizon. The widespread deployment of renewable generation, smart grid controls, energy storage, plug-in hybrids, and new conducting materials will require fundamental changes in the operational concepts and principal components. The whole system becomes highly dynamic and needs constant adjustments based on real-time data. Even though millions of sensors such as phasor measurement units (PMUs) and smart meters are being widely deployed, a data layer that can support this amount of data in real time is still needed. Unlike the data fabric in cloud services, the data layer for smart grids must address some unique challenges. This layer must be scalable to support millions of sensors and a large number of diverse applications and still provide real-time guarantees. Moreover, the system needs to be highly reliable and highly secure because the power grid is a critical piece of infrastructure. No existing system can satisfy all the requirements at the same time. We examine various design options. In particular, we explore the special characteristics of power grid data to meet both scalability and quality-of-service requirements. Our initial prototype can improve performance by orders of magnitude over existing general-purpose systems. The prototype was demonstrated with several use cases from PNNL’s FPGI and was shown to be able to integrate huge amounts of data from a large number of sensors and a diverse set of applications.

We study the reach of the Large Hadron Collider with 1 fb⁻¹ of data at √s = 7 TeV for several classes of supersymmetric models with compressed mass spectra, using jets and missing transverse energy cuts like those employed by ATLAS for summer 2011 data. In the limit of extreme compression, the best limits come from signal regions that do not require more than 2 or 3 jets and that remove backgrounds by requiring more missing energy rather than a higher effective mass.
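
For reference, the effective mass used in such ATLAS-style searches is conventionally defined (standard usage in the SUSY search literature, not quoted from this record) as the scalar sum of the selected jet transverse momenta and the missing transverse energy: $$m_{\rm eff} \;=\; \sum_{i \in {\rm jets}} p_T^{(i)} \;+\; E_T^{\rm miss},$$ which is why compressed spectra, with their soft jets, are better probed by signal regions that lean on $E_T^{\rm miss}$ rather than on a high $m_{\rm eff}$.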

Using data from the Michelson Doppler Imager (MDI) instrument on board the Solar and Heliospheric Observatory (SOHO), we study the large-scale velocity fields in the outer part of the solar convection zone using the ring diagram technique. We use observations from four different times to study possible temporal variations in flow velocity. We find definite changes in both the zonal and meridional components of the flows. The amplitude of the zonal flow appears to increase with solar activity and the flow pattern also shifts towards lower latitude with time.

A brief history of electric energy metering at a large research laboratory is presented. A limited orientation on metering of power and energy quantities derived from single-dimension magnitudes is introduced. The operation and application of electromechanical watthour demand meters, solid-state single-function transducers, analog multifunction meters, and digital multifunction meters are discussed. Applications for interdepartmental revenue transfer based on energy and power flow, load profiling, system planning, and use as a troubleshooting tool are described. The material is presented from a perspective for persons familiar with protective relaying components, but lacking similar experience in energy metering.

Modern fossil and nuclear generating units require the support of a fairly large and complex electric auxiliary power system. The selection of an optimized and cost-effective auxiliary power transformer rating may be a difficult process, since the loading profile and coincident operation of the loads often cannot be firmly defined at an early stage of design. The authors believe that this important design process could be greatly aided by systematic field tests and recording of the actual auxiliary loading profiles during various modes of plant operations.

We study Compton scattering, gamma e -> gamma e, in the context of the recent proposal for Weak Scale Quantum Gravity (WSQG) with large extra dimensions. It is shown that, with an ultraviolet cutoff $M_S \sim 1$ TeV for the effective gravity theory, the cross section for this process at the Next Linear Collider (NLC) deviates significantly from the prediction of the Standard Model. Our results suggest that, for typical proposed NLC energies and luminosities, WSQG can be tested in the range 4 TeV $\lesssim M_S \lesssim$ 16 TeV, making gamma e -> gamma e an important test channel.

Methods for the statistical characterization of the large-scale structure in the Universe will be the main topic of the present text. The focus is on geometrical methods, mainly Minkowski functionals and the J-function. Their relations to standard methods used in cosmology and spatial statistics, and their application to cosmological datasets, will be discussed. This work is not only meant as a short review for cosmologists, but also attempts to illustrate these morphological methods and to make them accessible to scientists from other fields. Consequently, a short introduction to the standard picture of cosmology is given.

For liquid-scintillator neutrino detectors of kiloton scale, the transparency of the organic solvent is of central importance. The present paper reports on laboratory measurements of the optical scattering lengths of the organic solvents PXE, LAB, and dodecane, which are under discussion for next-generation experiments like SNO+, Hanohano, or LENA. Results cover the wavelength range from 415 to 440 nm. The contributions from Rayleigh and Mie scattering as well as from absorption/re-emission processes are discussed. Based on the present results, LAB appears to be the preferred solvent for a large-volume detector.

We present approximate formulas for the tensor BB, EE, TT, and TE multipole coefficients for large multipole order l. The error in using the approximate formula for the BB multipole coefficients is less than cosmic variance for l>10. These approximate formulas make various qualitative properties of the calculated multipole coefficients transparent: specifically, they show that, whatever values are chosen for the cosmological parameters, the tensor EE multipole coefficients will always be larger than the BB coefficients for all l>15, and that the two approach each other as l increases. They also make transparent how the multipole coefficients depend on the cosmological parameters.

An oxide or nitride layer is provided on an amorphous semiconductor layer prior to performing metal-induced crystallization of the semiconductor layer. The oxide or nitride layer facilitates conversion of the amorphous material into large grain polycrystalline material. Hence, a native silicon dioxide layer provided on hydrogenated amorphous silicon (a-Si:H), followed by deposited Al permits induced crystallization at temperatures far below the solid phase crystallization temperature of a-Si. Solar cells and thin film transistors can be prepared using this method.

One embodiment of the invention relates to a segmented photovoltaic (PV) module which is manufactured from laminate segments. The segmented PV module includes rectangular-shaped laminate segments formed from rectangular-shaped PV laminates and further includes non-rectangular-shaped laminate segments formed from rectangular-shaped and approximately-triangular-shaped PV laminates. The laminate segments are mechanically joined and electrically interconnected to form the segmented module. Another embodiment relates to a method of manufacturing a large-area segmented photovoltaic module from laminate segments of various shapes. Other embodiments relate to processes for providing a photovoltaic array for installation at a site. Other embodiments and features are also disclosed.

[Figure captions: Data Synchronization; Simple Inverter and Dynamic Power Dissipation; Single Wire Clock Network; H-Tree Clock Distribution Network; Balanced Clock Tree (after [8]); Figure 6: PLL Based Distributed System; Figure 7: Phase Lock...] ... is distributed to the various elements using such an architecture. This type of network has been successfully used in the Alpha chip [3]. A large buffer is used to drive a single wire laid across the chip. Several of the buffer-wire are connected in Vin 1...

At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead-time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in the precipitation data.

We study the consequences of spacetime torsion coexisting with gravity in the bulk in scenarios with large extra dimensions. Having linked torsion with the Kalb-Ramond antisymmetric tensor field arising in string theories, we examine its imprints on the visible 3-brane when the extra dimensions are compactified. It is found that while torsion would have led to parity violation in a four-dimensional framework, all parity-violating effects disappear on the visible brane when the torsion originates in the bulk. However, such a scenario is found to have characteristics of its own, some of which can be phenomenologically significant.

I discuss how global QCD fits of parton distribution functions can make the somewhat separated fields of high-energy particle physics and lower energy hadronic and nuclear physics interact to the benefit of both. In particular, I will argue that large rapidity gauge boson production at the Tevatron and the LHC has the highest short-term potential to constrain the theoretical nuclear corrections to DIS data on deuteron targets necessary for up/down flavor separation. This in turn can considerably reduce the PDF uncertainty on cross section calculations of heavy mass particles such as W' and Z' bosons.

ATP3 (Algae Testbed Public-Private Partnership) is hosting the Large-Scale Algal Cultivation, Harvesting and Downstream Processing Workshop on November 2–6, 2015, at the Arizona Center for Algae Technology and Innovation in Mesa, Arizona. Topics will include practical applications of growing and managing microalgal cultures at production scale (such as methods for handling cultures, screening strains for desirable characteristics, identifying and mitigating contaminants, scaling up cultures for outdoor growth, harvesting and processing technologies, and the analysis of lipids, proteins, and carbohydrates). Related training will include hands-on laboratory and field opportunities.

Projections of performance from small-area devices to large-area windows, together with enterprise marketing, have created high expectations for electrochromic glazings. As a result, this paper seeks to precipitate an objective dialog between material scientists and building-application scientists to determine whether actual large-area electrochromic devices will result in significant performance benefits and what material improvements, if any, are needed to make electrochromics more practical for commercial building applications. Few in-situ tests have been conducted with large-area electrochromic windows applied in buildings. This study presents monitored results from a full-scale field test of large-area electrochromic windows to illustrate how this technology will perform in commercial buildings. The visible transmittance (Tv) of the installed electrochromic ranged from 0.11 to 0.38. The data are limited to the winter period for a south-east-facing window. The effect of actual device performance on lighting energy use, direct sun control, discomfort glare, and interior illumination is discussed. No mechanical system loads were monitored. These data demonstrate the use of electrochromics in a moderate climate and focus on the most restrictive visual task: computer use in offices. Through this small demonstration, we were able to determine that electrochromic windows can indeed provide unmitigated transparent views and a level of dynamic illumination control never before seen in architectural glazing materials. Daily lighting energy use was 6-24 percent less than with the 11-percent-Tv glazing, with improved interior brightness levels. Daily lighting energy use ranged from 3 percent less to 13 percent more than with the 38-percent-Tv glazing, with improved window brightness control. The electrochromic window may not be able to fulfill both energy-efficiency and visual comfort objectives when low winter direct sun is present, particularly for computer tasks using cathode-ray tube (CRT) displays. However, window and architectural design as well as electrochromic control options are suggested as methods to broaden the applicability of electrochromics for commercial buildings. Without further modification, its applicability is expected to be limited during cold winter periods due to its slow switching speed.

We present a preliminary study to develop a large-area photodetector based on a semiconductor crystal placed inside a superconducting resonant cavity. Laser pulses are detected through a variation of the cavity impedance, as a consequence of the conductivity change in the semiconductor. A novel method, whereby the designed photodetector is simulated by finite element analysis, makes it possible to perform pulse-height spectroscopy on the reflected microwave signals. We measure an energy sensitivity of 100 fJ in averaging mode without the use of low-noise electronics and suggest possible ways to further reduce the single-shot detection threshold, based on the results of the described method.

A thin film at a liquid interface responds to uniaxial confinement by wrinkling and then by folding; its shape and energy have been computed exactly before self contact. Here, we address the mechanics of large folds, i.e. folds that absorb a length much larger than the wrinkle wavelength. With scaling arguments and numerical simulations, we show that the antisymmetric fold is energetically favorable and can absorb any excess length at zero pressure. Then, motivated by puzzles arising in the comparison of this simple model to experiments on lipid monolayers and capillary rafts, we discuss how to incorporate film weight, self-adhesion and energy dissipation.

In the course of its operation, the EGRET experiment detected high-energy gamma ray sources at energies above 100 MeV over the whole sky. In this communication, we search for large-scale anisotropy patterns among the catalogued EGRET sources using an expansion in spherical harmonics, accounting for EGRET's highly non-uniform exposure. We find significant excess in the quadrupole and octopole moments. This is consistent with the hypothesis that, in addition to the galactic plane, a second mid-latitude (5^{\\circ} < |b| < 30^{\\circ}) population, perhaps associated with the Gould belt, contributes to the gamma ray flux above 100 MeV.
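
A bare-bones version of such an analysis can be sketched with healpy (an illustrative sketch; the source list, the uniform exposure map, and the nside are placeholders, and a real analysis must model EGRET's highly non-uniform exposure far more carefully): a source-count map is exposure-corrected and expanded in spherical harmonics, and the low multipoles are inspected.

    import numpy as np
    import healpy as hp

    rng = np.random.default_rng(3)
    nside = 32

    # Placeholder source catalogue: isotropic directions (theta, phi).
    n_src = 271
    theta = np.arccos(rng.uniform(-1, 1, n_src))
    phi = rng.uniform(0, 2 * np.pi, n_src)

    # Bin the sources into a HEALPix count map.
    counts = np.bincount(hp.ang2pix(nside, theta, phi),
                         minlength=hp.nside2npix(nside)).astype(float)

    # Placeholder exposure map (uniform here); divide it out first.
    exposure = np.ones_like(counts)
    density = counts / exposure
    density /= density.mean()        # relative fluctuation map

    cl = hp.anafast(density - 1.0, lmax=8)
    print("quadrupole C_2 =", cl[2], " octopole C_3 =", cl[3])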

A complete family of statistical descriptors for the morphology of large-scale structure based on Minkowski functionals is presented. These robust and significant measures can be used to characterize the local and global morphology of spatial patterns formed by a coverage of point sets which represent galaxy samples. Basic properties of these measures are highlighted and their relation to the 'genus statistics' is discussed. Test models like a Poissonian point process and samples generated from a Voronoi model are put into perspective.

Originally published in 2001, this updated report provides a definition of the market for large rigid haulers in surface mining. The analysis covers changes to the mining market segments buying these machines, including the gains made by coal producers, retrenchment in copper mining, the consolidation taking place among gold mining companies, and the expansion of iron ore producers in Australia and Brazil. It includes a detailed accounting of 2001 truck shipments, and an analysis of trends in the Ultra-truck segment. It concludes with a revised forecast for shipments through 2006. 12 charts, 56 tabs., 2 apps.

We consider a self-gravitating string generated by a global vortex solution in general relativity. We investigate the Einstein and field equations of a global vortex in the region of its central line and at a distance from the centre of the order of the inverse of its Higgs boson mass. By combining the two we establish by a limiting process of large Higgs mass the dynamics of a self-gravitating global string. Under our assumptions the presence of gravitation restricts the world sheet of the global string to be totally geodesic.

Large area, surface discharge pumped, vacuum ultraviolet (VUV) light source. A contamination-free VUV light source having a 225 cm.sup.2 emission area in the 240-340 nm region of the electromagnetic spectrum with an average output power in this band of about 2 J/cm.sup.2 at a wall-plug efficiency of approximately 5% is described. Only ceramics and metal parts are employed in this surface discharge source. Because of the contamination-free, high photon energy and flux, and short pulse characteristics of the source, it is suitable for semiconductor and flat panel display material processing.

The role of turbulence in a spherically symmetric accreting system has been studied on very large spatial scales of the system. This is also a highly subsonic flow region, and here the accreting fluid has been treated as nearly incompressible. It is shown that the coupling of the mean flow and the turbulent fluctuations gives rise to a scaling relation for an effective "turbulent viscosity". This in turn leads to a dynamic scaling for sound propagation in the accretion process. As a consequence of this scaling, the sonic horizon of the transonic inflow solution is shifted inwards in comparison with the inviscid flow.

So far, the models used to study dust grain-plasma interactions in fusion plasmas neglect the effects of dust material vapor, which is always present around dust in the rather hot and dense edge plasma environments of fusion devices. However, when the vapor density and/or the number of ionized vapor atoms becomes large enough, they can alter the grain-plasma interactions. Somewhat similar processes occur during pellet injection into fusion plasmas. In this brief communication the applicability limits of models ignoring vapor effects in grain-plasma interactions are obtained.

Large scale obscuration and related climate effects of nuclear detonations first became a matter of concern in connection with the so-called "Nuclear Winter Controversy" in the early 1980s. Since then, the world has changed. Nevertheless, concern remains about the atmospheric effects of nuclear detonations, but the source of concern has shifted. Now it focuses less on global, and more on regional, effects and their resulting impacts on the performance of electro-optical and other defense-related systems. This bibliography reflects the modified interest.

In October 1996, an interdisciplinary team began a three-year LDRD project to study the plasma processes of reactive ion etching and plasma-enhanced chemical vapor deposition on large-area silicon devices. The goal was to develop numerical models that could be used in a variety of applications for surface cleaning, selective etching, and thin-film deposition. Silicon solar cells were chosen as the experimental vehicle for this project because an innovative device design was identified that would benefit from immediate performance improvement using a combination of plasma etching and deposition processes. This report presents a summary of the technical accomplishments and conclusions of the team.

Hydrodynamical analysis of experimental data from ultrarelativistic heavy ion collisions seems to indicate that the hot QCD matter created in the collisions thermalizes very quickly. Theoretically, we have no idea why this should be true. In this proceeding, I describe how the thermalization takes place in the most theoretically clean limit - that of large nuclei at asymptotically high energy per nucleon, where the system is described by weak-coupling QCD. In this limit, plasma instabilities dominate the dynamics from immediately after the collision until well after the plasma becomes nearly equilibrated, at a time $t \sim \alpha^{-5/2} Q^{-1}$.

In this paper we propose computationally efficient and robust methods for estimating the moment tensor and location of micro-seismic event(s) over large search volumes. Our contribution is two-fold. First, we propose a novel joint-complexity measure, namely the sum of nuclear norms, which, while imposing sparsity on the number of fractures (locations) over a large spatial volume, also captures the rank-1 nature of the induced wavefield pattern. This wavefield pattern is modeled as the outer product of the source signature with the amplitude pattern across the receivers from a seismic source. A rank-1 factorization of the estimated wavefield pattern at each location can therefore be used to estimate the seismic moment tensor using knowledge of the array geometry. In contrast to existing work, this approach allows us to drop any other assumption on the source signature. Second, we exploit the recently proposed first-order incremental projection algorithms for a fast and efficient implementation of the resulting...
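
The rank-1 factorization step can be illustrated in a few lines (a generic sketch, assuming a noisy time-by-receiver wavefield matrix W; the names and sizes are mine, and the paper's nuclear-norm machinery is not reproduced here): the leading singular vectors separate the source signature from the receiver amplitude pattern.

    import numpy as np

    rng = np.random.default_rng(4)
    nt, nr = 500, 24                         # time samples, receivers

    # Synthetic rank-1 wavefield: signature (time) x amplitude pattern.
    t = np.linspace(0, 1, nt)
    signature = np.exp(-((t - 0.3) / 0.02) ** 2) * np.sin(2 * np.pi * 40 * t)
    pattern = rng.standard_normal(nr)
    W = np.outer(signature, pattern) + 0.05 * rng.standard_normal((nt, nr))

    # Rank-1 factorization via the SVD: W ~ s0 * u0 v0^T.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    sig_est, pat_est = U[:, 0] * s[0], Vt[0]

    # Sign and scale are only fixed up to a constant; compare by correlation.
    corr = np.corrcoef(pattern, pat_est)[0, 1]
    print("amplitude-pattern correlation:", abs(corr))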

Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
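
FastBit's core idea is bitmap indexing; the toy sketch below (my own minimal illustration of the general technique, not FastQuery's API) builds one bitmap per value bin so that a range query reduces to ORing precomputed bitmaps plus a refinement of the edge bins, rather than rescanning the raw data.

    import numpy as np

    rng = np.random.default_rng(5)
    energy = rng.exponential(scale=10.0, size=1_000_000)   # toy record data

    # Build the index: one boolean bitmap per histogram bin.
    edges = np.linspace(0.0, 100.0, 51)
    bin_of = np.digitize(energy, edges)                    # bin id per record
    bitmaps = [bin_of == b for b in range(len(edges) + 1)]

    def range_query(lo, hi):
        """Records with lo <= energy < hi via bitmap OR plus edge refinement."""
        b_lo, b_hi = np.digitize([lo, hi], edges)
        mask = np.zeros_like(energy, dtype=bool)
        for b in range(b_lo, b_hi + 1):                    # OR candidate bins
            mask |= bitmaps[b]
        mask &= (energy >= lo) & (energy < hi)             # refine edge bins
        return np.flatnonzero(mask)

    print(len(range_query(42.0, 57.5)), "matching records")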

We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

The origin of dust in a galaxy is poorly understood. Recently, surveys of the Large Magellanic Cloud (LMC) have provided astrophysical laboratories for dust studies. By the method of population synthesis, we investigate the contributions of dust produced by asymptotic giant branch (AGB) stars, common envelope (CE) ejecta and type II supernovae (SNe II) to the total dust budget in the LMC. Based on our models, the dust production rates (DPRs) of AGB stars in the LMC are between about $2.5\times10^{-5}$ and $4.0\times10^{-6} M_\odot\,{\rm yr^{-1}}$. The uncertainty mainly results from different models for the dust yields of AGB stars. The DPR of CE ejecta is about $6.3\times10^{-6} M_\odot\,{\rm yr^{-1}}$ (assuming an initial binary fraction of 50%). These results are within the large scatter of several observational estimates. AGB stars mainly produce carbon grains, which is consistent with the observations. Most of the dust grains manufactured by CE ejecta are silicate and iron grains. The contributions of SNe II are very uncertain. Compared wi...

Using the quantum fluid model for self-gravitating quantum plasmas with the Bernoulli pseudopotential method, and taking into account the relativistic degeneracy effect, it is shown that gravity-induced large-amplitude density rarefaction solitons can exist in gravitationally balanced quantum plasmas. These nonlinear solitons are generated by the force imbalance between gravity and the quantum fluid pressure via local density perturbations, similar to those on shallow waters. It is found that both the fluid mass density and the atomic number of the constituent ions have a significant effect on the amplitude and width of these solitonic profiles. The existence of large-scale gravity-induced solitonic activity on a neutron-star surface, for instance, can be a possible explanation for the recently proposed resonant shattering mechanism [D. Tsang et al., Phys. Rev. Lett. 108, 011102 (2012)] causing the intense short gamma-ray burst phenomenon, in which a release of ~10{sup 46}-10{sup 47} ergs would be possible from the surface. The resonant shattering of the crust in a neutron star has been previously attributed to the crust-core interface mode and the tidal surface tensions. We believe that the current model can be a more natural explanation for the energy liberation by solitonic activity on neutron star surfaces, without a requirement for external mergers like other neutron stars or black holes for the crustal shatter.

The Large Observatory For x-ray Timing (LOFT) was studied within the ESA M3 Cosmic Vision framework and participated in the final down-selection for a launch slot in 2022-2024. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument, LOFT will study the behaviour of matter under extreme conditions, such as the strong gravitational field in the innermost regions of accretion flows close to black holes and neutron stars, and the supra-nuclear densities in the interior of neutron stars. The science payload is based on a Large Area Detector (LAD; 10 m^2 effective area, 2-30 keV, 240 eV spectral resolution, 1 deg collimated field of view) and a Wide Field Monitor (WFM; 2-50 keV, 4 steradian field of view, 1 arcmin source location accuracy, 300 eV spectral resolution). The WFM is equipped with an on-board system for bright-event (e.g. GRB) localization. The trigger time and position of these events are broadcast to the ground within 30 s of discovery. In this paper we ...

In an attempt to clarify the persisting controversy over the effect of the electrode area versus that of the stressed oil volume in large-oil-volume breakdown, a study was carried out under well defined conditions of the oil quality, particularly with regard to the degree of particle contamination. The results indicate that both the electrode area and the stressed oil volume can affect the dielectric strength of transformer oil, with the stressed-oil-volume effect being most pronounced under particle contamination conditions. Test results with technically clean transformer oil as currently accepted for use in power apparatus indicate that the degree of particle contamination in these oils is sufficient to produce an observable stressed-oil-volume effect. Finally, it is demonstrated that the observed phenomenon can be interpreted in terms of an apparent effect of either the electrode area or the stressed oil volume. This has led to the development of a semi-empirical method of quantitatively assessing the breakdown phenomenon in large oil volumes with reasonable accuracy.

This paper presents a method for the optimal operation of large-scale power systems similar to the one utilized by the Houston Lighting and Power Company. The main objective is to minimize the system fuel costs and maintain an acceptable system performance in terms of limits on generator real and reactive power outputs, transformer tap settings, and bus voltage levels. Minimizing the fuel costs of such large-scale systems enhances the performance of optimal real-power generator allocation and of optimal power flow, resulting in an economic dispatch. The gradient projection method (GPM) is utilized in solving the optimization problems. It is an iterative numerical procedure for finding an extremum of a function of several variables that are required to satisfy various constraining relations, without using penalty functions or Lagrange multipliers, among other advantages. Mathematical models are developed to represent the sensitivity relationships between dependent and control variables for both the real- and reactive-power optimization procedures, thus eliminating the use of B-coefficients. Data provided by the Houston Lighting and Power Company are used to demonstrate the effectiveness of the proposed procedures.
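
To show the gradient projection idea in miniature (an illustrative sketch under assumed quadratic fuel-cost curves and made-up generator data, not the Houston Lighting and Power formulation): each iteration takes a gradient step on total fuel cost and then projects back onto the feasible set defined by generator limits and the demand balance, with no penalty functions or explicit Lagrange multipliers in the outer loop.

    import numpy as np

    # Quadratic fuel costs f_i(P) = a_i P^2 + b_i P for three toy generators.
    a = np.array([0.010, 0.012, 0.008])
    b = np.array([8.0, 7.5, 9.0])
    lo = np.array([50.0, 40.0, 30.0])      # MW limits
    hi = np.array([300.0, 250.0, 200.0])
    demand = 500.0                          # MW to be balanced

    def project(z):
        """Project z onto {lo <= P <= hi, sum(P) = demand} by bisecting
        on the shift lam in P_i = clip(z_i - lam, lo_i, hi_i)."""
        f = lambda lam: np.clip(z - lam, lo, hi).sum() - demand
        a_, b_ = -1e4, 1e4
        for _ in range(100):                # plain bisection on lam
            m = 0.5 * (a_ + b_)
            if f(m) > 0:
                a_ = m                      # sum too large -> shift more
            else:
                b_ = m
        return np.clip(z - 0.5 * (a_ + b_), lo, hi)

    P = project(np.full(3, demand / 3))     # feasible starting point
    for _ in range(500):                    # projected gradient iterations
        grad = 2 * a * P + b                # marginal cost of each unit
        P = project(P - 0.5 * grad)

    print("dispatch:", P.round(1), " total:", P.sum().round(1), "MW")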

We examine the approximations made in using Hooke's law as a constitutive relation for an isotropic thermoelastic material subjected to large deformation by calculating the stress evolution equation from the free energy. For a general thermoelastic material, we employ the volume-preserving part of the deformation gradient to facilitate volumetric/shear strain decompositions of the free energy, its first derivatives (the Cauchy stress and entropy), and its second derivatives (the specific heat, Grueneisen tensor, and elasticity tensor). Specializing to isotropic materials, we calculate these constitutive quantities more explicitly. For deformations with limited shear strain, but possibly large changes in volume, we show that the differential equations for the stress components involve new terms in addition to the traditional Hooke's law terms. These new terms are of the same order in the shear strain as the objective derivative terms needed for frame indifference; unless the latter terms are negligible, the former cannot be neglected. We also demonstrate that accounting for the new terms requires that the deformation gradient be included as a field variable.
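
The volume-preserving part referred to here is the standard isochoric/volumetric split of continuum mechanics (stated from the general literature, with the free-energy split written schematically): $$\bar{F} = J^{-1/3} F, \qquad J = \det F, \qquad \det\bar{F} = 1,$$ so the free energy can be decomposed as $\psi = \psi_{\rm vol}(J, T) + \psi_{\rm iso}(\bar{F}, T)$, from which the Cauchy stress and entropy follow as first derivatives and the specific heat, Grueneisen tensor, and elasticity tensor as second derivatives.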

The Einstein radius of a cluster provides a relatively model-independent measure of the mass density of the cluster within a projected radius of ~150 kpc, large enough to be relatively unaffected by gas physics. We show that the observed Einstein radii of four well-studied massive clusters, for which reliable virial masses are measured, lie well beyond the predicted distribution of Einstein radii in the standard LambdaCDM model. Based on large samples of numerically simulated cluster-sized objects with virial masses ~10^15 solar masses, the predicted Einstein radii are only 15-25'', a factor of two below the observed Einstein radii of these four clusters. This is because the predicted mass profile is too shallow to exceed the critical surface density for lensing at a sizable projected radius. After carefully accounting for measurement errors as well as the biases inherent in the selection of clusters and the projection of mass measured by lensing, we find that the theoretical predictions are excluded at 4-sigma significance. Since most of the free parameters of the LambdaCDM model now rest on firm empirical ground, this discrepancy may point to an additional mechanism that promotes the collapse of clusters at an earlier time, thereby enhancing their central mass density.
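
For context, the Einstein radius and the lensing critical surface density invoked above are (standard lensing definitions, not taken from this record): $$\theta_E = \sqrt{\frac{4GM(<\theta_E)}{c^2}\,\frac{D_{ls}}{D_l D_s}}, \qquad \Sigma_{\rm cr} = \frac{c^2}{4\pi G}\,\frac{D_s}{D_l D_{ls}},$$ so a larger observed $\theta_E$ directly requires the projected mass profile to exceed $\Sigma_{\rm cr}$ out to a larger projected radius, which is the sense in which the predicted shallow profiles fall short.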

In this paper, I present the calculation of the third and fourth moments of both the distribution function of the large-scale density and the large-scale divergence of the velocity field, $\theta$. These calculations are made by means of perturbative calculations assuming Gaussian initial conditions and are expected to be valid in the linear or quasi-linear regime. The moments are derived for a top-hat window function and for any cosmological parameters $\Omega$ and $\Lambda$. It turns out that the dependence on $\Lambda$ is always very weak, whereas the moments of the distribution function of the divergence are strongly dependent on $\Omega$. A method to measure $\Omega$ using the skewness of this field has already been presented by Bernardeau et al. (1993). I show here that the simultaneous measurement of the skewness and the kurtosis allows one to test the validity of the gravitational instability scenario hypothesis. Indeed, there is a combination of the first three moments of $\theta$ that is almost independent of the cosmological parameters $\Omega$ and $\Lambda$, $$\frac{\langle\theta^4\rangle - 3\langle\theta^2\rangle^2}{\langle\theta^3\rangle^2} \approx 1.5,$$ (the value quoted is valid when the index of the power spectrum at the filtering scale is close to -1), so that any cosmic velocity field created by gravitational instabilities should verify such a property.
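
Estimating this combination from a sampled field is straightforward (a generic sketch; the sample field here is synthetic, and the estimator ignores the subtleties of window functions and cosmic variance):

    import numpy as np

    rng = np.random.default_rng(6)

    # Stand-in for a smoothed divergence field theta (mean removed).
    theta = rng.gamma(shape=4.0, scale=1.0, size=1_000_000)
    theta -= theta.mean()

    m2 = np.mean(theta**2)
    m3 = np.mean(theta**3)
    m4 = np.mean(theta**4)

    # The moment combination that should be ~1.5 under gravitational
    # instability (computed here on an arbitrary non-Gaussian field,
    # so the value will differ).
    ratio = (m4 - 3 * m2**2) / m3**2
    print("(<t^4> - 3<t^2>^2) / <t^3>^2 =", round(ratio, 3))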

We show that models of 'just enough' inflation, where the slow-roll evolution lasted only $50-60$ e-foldings, feature modifications of the CMB power spectrum at large angular scales. We perform a systematic and model-independent analysis of any possible non-slow-roll background evolution prior to the final stage of slow-roll inflation. We find a high degree of universality, since most common backgrounds like fast-roll evolution, matter- or radiation-dominance give rise to a power loss at large angular scales and a peak together with an oscillatory behaviour at scales around the value of the Hubble parameter at the beginning of slow-roll inflation. Depending on the value of the equation of state parameter, different pre-inflationary epochs lead instead to an enhancement of power at low $\ell$, and so seem disfavoured by recent observational hints of a lack of CMB power at $\ell \lesssim 40$. We also comment on the importance of initial conditions and the possibility of having multiple pre-inflationary stages.

A floating LNG plant design has been developed which is technically feasible, economical, safe, and reliable. This technology will allow monetization of small marginal fields and improve the economics of large fields. Mobil's world-scale plant design has a capacity of 6 million tons/year of LNG and up to 55,000 b/d of condensate produced from 1 bcfd of feed gas. The plant would be located on a large, secure concrete barge with a central moonpool. LNG storage is provided for 250,000 cu m and condensate storage for 650,000 bbl, and both products are off-loaded from the barge. Model tests have verified the stability of the barge structure: barge motions are low enough to permit the plant to continue operation in a 100-year storm in the Pacific Rim. Moreover, the barge is spread-moored, eliminating the need for a turret and swivel. Because the design is generic, the plant can process a wide variety of feed gases and operate in different environments, should the plant be relocated. This capability potentially gives the plant investment a much longer project life because its use is not limited to the life of only one producing area.

Historically, the conventional approach to large-volume offshore acid stimulation has been to use a vessel dedicated to offshore stimulation services or to use a semi-permanent installation on the rig or platform. This approach often results in long-term commitment to an offshore vessel restricted to stimulation work or a great reduction of the valuable space on the rig or platform for an extended time. Both of these options usually require continuous, large capital outlays with periods of little or no use. The fast response stimulation package (FRSP) described in this paper provides a 25-bbl/min, centrally controlled, modular, acid-stimulation system with 50,000-gal acid storage capacity, 25-bbl/min, computer-operated blending equipment, acid-resistant manifold modules, and high-pressure pumping units. All processes are monitored and operated from a central control cabin. The FRSP can be quickly installed on an offshore work vessel or drilling rig of sufficient size to perform matrix acidizing and acid frac, and then be removed between jobs. The equipment has recently completed acid-stimulation services in a five-well program in 30 days, delivering up to 41,000 gal of blended acid at rates of 0.5 to 17 bbl/min from two different drilling rigs. This equipment has provided for greater versatility and better use of operator assets while providing specified requirements for stimulation services.

This Technical Support Document documents the technical analysis and design guidance for large hospitals to achieve whole-building energy savings of at least 50% over ANSI/ASHRAE/IESNA Standard 90.1-2004 and represents a step toward determining how to provide design guidance for aggressive energy savings targets. This report documents the modeling methods used to demonstrate that the design recommendations meet or exceed the 50% goal. EnergyPlus was used to model the predicted energy performance of the baseline and low-energy buildings to verify that 50% energy savings are achievable. Percent energy savings are based on a nominal minimally code-compliant building and whole-building, net site energy use intensity. The report defines architectural-program characteristics for typical large hospitals, thereby defining a prototype model; creates baseline energy models for each climate zone that are elaborations of the prototype models and are minimally compliant with Standard 90.1-2004; creates a list of energy design measures that can be applied to the prototype model to create low-energy models; uses industry feedback to strengthen inputs for baseline energy models and energy design measures; and simulates low-energy models for each climate zone to show that when the energy design measures are applied to the prototype model, 50% energy savings (or more) are achieved.

Direct and large eddy simulations of hydrodynamic and hydromagnetic turbulence have been performed in an attempt to isolate artifacts from real and possibly asymptotic features in the energy spectra. It is shown that in a hydrodynamic turbulence simulation with a Smagorinsky subgrid scale model using 512^3 meshpoints two important features of the 4096^3 simulation on the Earth simulator (Kaneda et al. 2003, Phys. Fluids 15, L21) are reproduced: a k^{-0.1} correction to the inertial range with a k^{-5/3} Kolmogorov slope and the form of the bottleneck just before the dissipative subrange. Furthermore, it is shown that, while a Smagorinsky-type model for the induction equation causes an artificial and unacceptable reduction in the dynamo efficiency, hyper-resistivity yields good agreement with direct simulations. In the large-scale part of the inertial range, an excess of the spectral magnetic energy over the spectral kinetic energy is confirmed. However, a trend towards spectral equipartition at smaller scales in the inertial range can be identified. With magnetic fields, no explicit bottleneck effect is seen.

Scientific knowledge of natural clathrate hydrates has grown enormously over the past decade, with spectacular new findings of large exposures of complex hydrates on the sea floor, the development of new tools for examining the solid phase in situ, significant progress in modeling natural hydrate systems, and the discovery of exotic hydrates associated with sea floor venting of liquid CO{sub 2}. Major unresolved questions remain about the role of hydrates in response to climate change today, and correlations between the hydrate reservoir of Earth and the stable isotopic evidence of massive hydrate dissociation in the geologic past. The examination of hydrates as a possible energy resource is proceeding apace for the subpermafrost accumulations in the Arctic, but serious questions remain about the viability of marine hydrates as an economic resource. New and energetic explorations by nations such as India and China are quickly uncovering large hydrate findings on their continental shelves. In this report we detail research carried out in the period October 1, 2007 through September 30, 2008. The primary body of work is contained in a formal publication attached as Appendix 1 to this report. In brief we have surveyed the recent literature with respect to the natural occurrence of clathrate hydrates (with a special emphasis on methane hydrates), the tools used to investigate them and their potential as a new source of natural gas for energy production.

In this paper, we analyze the strong unidentified emission near 3.28 μm in Titan's upper daytime atmosphere recently discovered by Dinelli et al. We have studied it by using the NASA Ames PAH IR Spectroscopic Database. The polycyclic aromatic hydrocarbons (PAHs), after absorbing UV solar radiation, are able to emit strongly near 3.3 μm. By using current models for the redistribution of the absorbed UV energy, we have explained the observed spectral feature and have derived the vertical distribution of PAH abundances in Titan's upper atmosphere. PAHs have been found to be present in large concentrations, about (2-3) × 10^4 particles cm^-3. The identified PAHs have 9-96 carbons, with a concentration-weighted average of 34 carbons. The mean mass is ≈430 u; the mean area is about 0.53 nm^2; they are formed by 10-11 rings on average, and about one-third of them contain nitrogen atoms. Recently, benzene together with light aromatic species as well as small concentrations of heavy positive and negative ions have been detected in Titan's upper atmosphere. We suggest that the large concentrations of PAHs found here are the neutral counterpart of those positive and negative ions, which hence supports the theory that Titan's main haze layer originates in the upper atmosphere.
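
The concentration-weighted averages quoted above (e.g., 34 carbons) follow directly once each identified PAH has a fitted abundance. A minimal sketch, with entirely hypothetical species and abundances:

```python
import numpy as np

# Concentration-weighted average of a PAH property (e.g., number of carbons).
# The carbon counts and abundances below are hypothetical placeholders.
n_carbons = np.array([16, 24, 32, 54, 96])        # carbons per PAH species
abundance = np.array([9e3, 6e3, 4e3, 2e3, 5e2])   # particles cm^-3, hypothetical

weighted_mean = np.average(n_carbons, weights=abundance)
print(f"concentration-weighted mean: {weighted_mean:.1f} carbons")
```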

A Workshop on Large Scale Obscuration and Related Climate Effects was held 29-31 January 1992 in Albuquerque, New Mexico. The objectives of the workshop were: to determine through the use of expert judgement the current state of understanding of regional and global obscuration and related climate effects associated with nuclear weapons detonations; to estimate how large the uncertainties are in the parameters associated with these phenomena (given specific scenarios); to evaluate the impact of these uncertainties on obscuration predictions; and to develop an approach for the prioritization of further work on newly available data sets to reduce the uncertainties. The workshop consisted of formal presentations by the 35 participants, and subsequent topical working sessions on: the source term; aerosol optical properties; atmospheric processes; and electro-optical systems performance and climatic impacts. Summaries of the conclusions reached in the working sessions are presented in the body of the report. Copies of the transparencies shown as part of each formal presentation are contained in the appendices (microfiche).

Due to their transitional nature, yellow supergiants (YSGs) provide a critical challenge for evolutionary modeling. Previous studies within M31 and the Small Magellanic Cloud show that the Geneva evolutionary models do a poor job at predicting the lifetimes of these short-lived stars. Here, we extend this study to the Large Magellanic Cloud (LMC) while also investigating the galaxy's red supergiant (RSG) content. This task is complicated by contamination from Galactic foreground stars that color and magnitude criteria alone cannot weed out. Therefore, we use proper motions and the LMC's large systemic radial velocity (≈278 km s^-1) to separate out these foreground dwarfs. After observing nearly 2000 stars, we identified 317 probable YSGs, 6 possible YSGs, and 505 probable RSGs. Foreground contamination of our YSG sample was ≈80%, while that of the RSG sample was only 3%. By placing the YSGs on the Hertzsprung-Russell diagram and comparing them against the evolutionary tracks, we find that the new Geneva evolutionary models do an exemplary job at predicting both the locations and the lifetimes of these transitory objects.
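
A minimal sketch of the kind of kinematic cut described above; the acceptance thresholds are illustrative assumptions, not the survey's adopted criteria:

```python
import numpy as np

# Kinematic separation of LMC members from Galactic foreground dwarfs.
# Thresholds are illustrative assumptions, not the survey's adopted criteria.
V_LMC = 278.0      # km/s, LMC systemic radial velocity
RV_WINDOW = 60.0   # km/s, hypothetical acceptance window around V_LMC
PM_MAX = 1.0       # mas/yr, hypothetical proper-motion ceiling for members

def is_lmc_member(rv: np.ndarray, pm_total: np.ndarray) -> np.ndarray:
    """True where a star's radial velocity and proper motion are LMC-like."""
    return (np.abs(rv - V_LMC) < RV_WINDOW) & (pm_total < PM_MAX)

rv = np.array([275.0, 30.0, 290.0])   # km/s, toy measurements
pm = np.array([0.4, 5.2, 0.7])        # mas/yr
print(is_lmc_member(rv, pm))          # [ True False  True]
```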

The Sunflower large solar concentrator, developed in the early 1970s, is a salient example of a high-efficiency concentrator. The newly emphasized needs for solar dynamic power on the Space Station and for large, lightweight thermal sources are outlined. Existing concepts for high-efficiency reflector surfaces are examined with attention to the accuracy needed for concentration ratios of 1000 to 3000. Concepts using stiff reflector panels are deemed most likely to exhibit the long-term, consistent accuracy necessary for low-orbit operation, particularly at the higher concentration ratios. Quantitative results are shown for the effects of surface errors at various concentration ratios and focal-length-to-diameter ratios. Cost effectiveness is discussed. Principal sources of high cost include the need for variously dished panels for paraboloidal reflectors and the expense of ground testing and adjustment. A new configuration is presented that addresses both problems: a deployable Pactruss backup structure with identical panels installed on the structure after deployment in space. Analytical results show that, with reasonable pointing errors, this new concept is capable of concentration ratios greater than 2000.
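
The sensitivity of concentration ratio to surface accuracy can be bounded with a standard idealization: the thermodynamic limit C_max = 1/sin²θ for an effective beam half-angle θ. Broadening the solar half-angle by twice the reflector slope error is a common first-order approximation, not the analysis used in the paper:

```python
import math

# Ideal upper bound on 3D concentration ratio versus reflector slope error.
# theta_eff = solar half-angle + 2 * slope error is a first-order broadening
# approximation, not the paper's analysis.
THETA_SUN = 4.65e-3  # rad, solar half-angle

def c_max(slope_error_rad: float) -> float:
    theta_eff = THETA_SUN + 2.0 * slope_error_rad
    return 1.0 / math.sin(theta_eff) ** 2

for sigma_mrad in (0.0, 1.0, 2.0, 5.0):
    print(f"slope error {sigma_mrad} mrad -> C_max ~ {c_max(sigma_mrad * 1e-3):,.0f}")
```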

High-end computing is suffering a data deluge from experiments, simulations, and apparatus that creates overwhelming application dataset sizes. This has led to the proliferation of high-end mass storage systems, storage area clusters, and data centers. These storage facilities offer a large range of choices in terms of capacity and access rate, as well as strong data availability and consistency support. However, for most end-users, the "last mile" in their analysis pipeline often requires data processing and visualization at local computers, typically desktop workstations. End-user workstations, despite having more processing power than ever before, are ill-equipped to cope with such data demands due to insufficient secondary storage space and I/O rates. Meanwhile, a large portion of desktop storage sits unused. We propose the FreeLoader framework, which aggregates unused desktop storage space and I/O bandwidth into a shared cache/scratch space for hosting large, immutable datasets and exploiting data access locality. This article presents the FreeLoader architecture, component design, and performance results based on our proof-of-concept prototype. Its architecture comprises contributing benefactor nodes, steered by a management layer, providing services such as data integrity, high performance, load balancing, and impact control. Our experiments show that FreeLoader is an appealing low-cost solution for storing massive datasets, delivering higher data access rates than traditional storage facilities, namely local or remote shared file systems, storage systems, and Internet data repositories. In particular, we present novel data striping techniques that allow FreeLoader to efficiently aggregate a workstation's network communication bandwidth and local I/O bandwidth. In addition, the performance impact on the native workload of donor machines is small and can be effectively controlled. Further, we show that security features such as data encryption and integrity checks can be easily added as filters for interested clients. Finally, we demonstrate how legacy applications can use the FreeLoader API to store and retrieve datasets.
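
The striping idea, aggregating network and local I/O bandwidth by spreading a dataset across benefactor nodes, can be illustrated with a simple round-robin scheme. This is a sketch of the general technique only; FreeLoader's actual protocol and chunk sizes are not specified here:

```python
# Round-robin striping of a dataset across benefactor nodes: a sketch of the
# general technique, not FreeLoader's actual protocol or chunk size.
CHUNK_SIZE = 1 << 20  # 1 MiB chunks, illustrative

def stripe(data: bytes, nodes: list[str]) -> dict[str, list[tuple[int, bytes]]]:
    """Assign consecutive chunks to nodes in round-robin order.

    Returns {node: [(chunk_index, chunk_bytes), ...]} so that reads can
    fetch from all nodes in parallel and reassemble by chunk_index.
    """
    placement: dict[str, list[tuple[int, bytes]]] = {n: [] for n in nodes}
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        node = nodes[(i // CHUNK_SIZE) % len(nodes)]
        placement[node].append((i // CHUNK_SIZE, chunk))
    return placement

layout = stripe(b"x" * (5 * CHUNK_SIZE), ["node-a", "node-b", "node-c"])
print({n: [idx for idx, _ in chunks] for n, chunks in layout.items()})
# {'node-a': [0, 3], 'node-b': [1, 4], 'node-c': [2]}
```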

In order to evaluate the electron cyclotron (EC) heating power inside the Large Helical Device vacuum vessel and to investigate the physics of the interaction between the EC beam and the plasma, a direct measurement system for the EC beam transmitted through the plasma column was developed. The system consists of an EC beam target plate, made of isotropic graphite and facing the EC beam through the plasma, and an IR camera that measures the target plate's temperature increase due to the transmitted EC beam. The system is applicable at high magnetic field (up to 2.75 T) and high plasma density (up to 0.8 × 10^19 m^-3). It successfully evaluated the transmitted EC beam profile and its refraction.
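
The target-plate measurement is essentially calorimetric: the absorbed power density follows from the graphite's heat capacity and the observed heating rate. A minimal sketch under simplifying assumptions (adiabatic plate, uniform heating through the thickness); the material values are nominal for isotropic graphite and the geometry is hypothetical, not taken from the paper:

```python
# Calorimetric estimate of transmitted EC beam power density from the target
# plate's temperature rise. Assumes an adiabatic, uniformly heated plate;
# material values are nominal for isotropic graphite, not from the paper.
RHO = 1850.0      # kg/m^3, graphite density (nominal)
C_P = 710.0       # J/(kg K), graphite specific heat (nominal)
THICKNESS = 0.01  # m, hypothetical plate thickness

def power_density(dT_dt: float) -> float:
    """Absorbed power per unit area (W/m^2) from the heating rate dT/dt (K/s)."""
    return RHO * THICKNESS * C_P * dT_dt

print(f"{power_density(2.0) / 1e4:.1f} W/cm^2")  # 2 K/s heating rate -> ~2.6 W/cm^2
```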

Gamma-ray bursts (GRBs) that emit photons at GeV energies form a small but significant population of GRBs. However, the number of GRBs whose GeV-emitting period is simultaneously observed in X-rays remains small. We report gamma-ray observations of GRB 110625A using Fermi's Large Area Telescope in the energy range 100 MeV-20 GeV. Gamma-ray emission at these energies was clearly detected using data taken between 180 s and 580 s after the burst, an epoch after the prompt emission phase. The GeV light curve differs from a simple power-law decay, and probably consists of two emission periods. Simultaneous Swift X-ray Telescope observations did not show flaring behavior as in the case of GRB 100728A. We discuss the possibility that the GeV emission is the synchrotron self-Compton radiation of underlying ultraviolet flares.
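
Testing whether a GeV light curve departs from a simple power-law decay is a standard fitting exercise. A minimal sketch with synthetic data; the times and fluxes below are fabricated for illustration, not GRB 110625A measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a simple power-law decay F(t) = A * t**(-alpha) to a GeV light curve.
# The times and fluxes below are synthetic, not GRB 110625A data.
def power_law(t, A, alpha):
    return A * t ** (-alpha)

t = np.array([200.0, 260.0, 330.0, 420.0, 540.0])            # s after trigger
flux = 3e-4 * t ** (-1.3) * (1 + 0.05 * np.random.randn(5))  # synthetic flux

(A, alpha), _ = curve_fit(power_law, t, flux, p0=(1e-4, 1.0))
print(f"best-fit decay index alpha = {alpha:.2f}")
```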

The traveler attended the 33rd Session of CIGRE (the International Conference on Large High Voltage Electric Systems) in Paris, France, as a US technical expert advisor to Study Committee 15, Insulating Materials. Over 200 papers, contributed from more than 45 countries, were discussed at the conference, covering all aspects of electric power generation and transmission. Of special interest were a panel session on superconducting technology for electric power systems and participation in a new task force on electrical insulation at cryogenic temperatures. Significant insight was gained into the development of superconducting power technologies in Europe and Japan. CIGRE has set up a committee to follow developments in research on the biological effects of electric and magnetic fields. The traveler also visited the Centre for Electric Power Engineering at the University of Strathclyde, Glasgow, Scotland, and discussed research on the degradation of polymeric cable insulation and gas-insulated equipment.

From the viewpoint of safety and ease of maintenance, the development of large-capacity gas-insulated transformers has been desirable. In this type of transformer, coolant gas is circulated in the gaps between the coils to cool them. The flow pattern of the coolant strongly depends on the flow-path configuration formed by the coils. Therefore, in order to achieve high coil-cooling efficiency while reducing pressure loss, it is important to have sufficient knowledge of the flow behavior in the coil flow paths. In the present work, appropriate flow-path configurations were determined on the basis of numerical simulations of various coil configurations, and the validity of the computed results was tested by comparison with experimental data.
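
The trade-off described above, cooling efficiency versus pressure loss in the coil ducts, can be framed with standard duct-flow relations. A minimal sketch using the Darcy-Weisbach equation; all geometry and gas properties are illustrative assumptions, not values from the study:

```python
# Pressure loss in a coil cooling duct via the Darcy-Weisbach equation.
# Geometry and gas properties are illustrative, not values from the study.
RHO = 36.0   # kg/m^3, pressurized insulating coolant gas (illustrative)
MU = 1.5e-5  # Pa s, dynamic viscosity (illustrative)

def pressure_drop(v: float, d_h: float, length: float) -> float:
    """Darcy-Weisbach pressure drop (Pa) for mean velocity v (m/s),
    hydraulic diameter d_h (m), and duct length (m)."""
    re = RHO * v * d_h / MU
    f = 64.0 / re if re < 2300 else 0.316 * re ** -0.25  # laminar / Blasius
    return f * (length / d_h) * 0.5 * RHO * v * v

print(f"{pressure_drop(v=3.0, d_h=0.008, length=1.2):.0f} Pa")
```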

A plasma generating apparatus for plasma processing applications is based on a permanent-magnet line-cusp plasma confinement chamber coupled to a compact single-coil microwave waveguide launcher. The device creates an electron cyclotron resonance (ECR) plasma in the launcher, and a second ECR plasma is created in the line cusps due to a 0.0875 tesla magnetic field in that region. Additional magnetic field shaping reduces the field at the substrate to below 0.001 tesla. The resulting plasma source is capable of producing large-area (20-cm diam), highly uniform (±5%) ion beams with current densities above 5 mA/cm^2. The source has been used to etch photoresist on 5-inch diam silicon wafers with good uniformity.
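
The 0.0875 T cusp field quoted above is the electron cyclotron resonance field for standard 2.45 GHz microwave sources, as a quick check of f_ce = eB/(2π m_e) shows:

```python
import math

# Electron cyclotron resonance frequency f_ce = e*B / (2*pi*m_e).
E_CHARGE = 1.602176634e-19  # C
M_E = 9.1093837015e-31      # kg

def f_ce(b_tesla: float) -> float:
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_E)

print(f"{f_ce(0.0875) / 1e9:.2f} GHz")  # ~2.45 GHz, the standard magnetron band
```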

We report the measurement of a large optical reflection matrix (RM) of a highly disordered medium. Incident optical fields on a turbid sample are controlled by a spatial light modulator, and the corresponding fields reflected from the sample are measured using full-field Michelson interferometry. The number of modes in the measured RM is set to exceed the number of resolvable modes in the scattering medium. We successfully study the subtle intrinsic correlations in the RM, which agree with the prediction of random-matrix theory once the effect of the limited numerical aperture on the eigenvalue distribution of the RM is taken into account. The eigenvalue distribution also points to the possibility of enhanced delivery of incident energy into the scattering medium, which is promising for light-based therapeutic applications.
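
The random-matrix baseline referenced above can be explored numerically: the singular values of a large Gaussian random matrix approach the quarter-circle law, the uncorrelated reference against which intrinsic RM correlations stand out. A minimal sketch; dimensions are arbitrary and the paper's aperture effects are not modeled:

```python
import numpy as np

# Singular-value spectrum of a complex Gaussian random matrix, the
# uncorrelated baseline from random-matrix theory. Dimensions are arbitrary;
# the limited numerical aperture discussed in the paper is not modeled here.
n = 1024
rm = (np.random.randn(n, n) + 1j * np.random.randn(n, n)) / np.sqrt(2 * n)
singular_values = np.linalg.svd(rm, compute_uv=False)

# For this normalization, the density of singular values approaches the
# quarter-circle law on [0, 2] as n grows.
hist, edges = np.histogram(singular_values, bins=20, range=(0.0, 2.0), density=True)
print(np.round(hist, 2))
```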

The creation of complex entangled states, resources that enable quantum computation, can be achieved via simple 'probabilistic' operations that are individually likely to fail. However, typical proposals exploiting this idea carry a severe overhead in terms of the accumulation of errors. Here we describe a method that can rapidly generate large entangled states with an error accumulation that depends only logarithmically on the failure probability. We find that the approach may be practical for success rates in the sub-10% range, while ultimately becoming unfeasible at lower rates. The assumptions that we make, including parallelism and high connectivity, are appropriate for real systems, including measurement-induced entanglement. This result therefore demonstrates the feasibility of such an approach for real devices.
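
The role of the success probability can be illustrated with a toy repeat-until-success simulation of probabilistic entangling links. This sketches the general setting only; it does not reproduce the paper's logarithmic error-accumulation analysis:

```python
import random

# Toy repeat-until-success model of a probabilistic entangling operation:
# each link attempt succeeds with probability p. A sketch of the general
# setting only; it does not reproduce the paper's error analysis.
def attempts_to_link(p: float) -> int:
    n = 1
    while random.random() > p:
        n += 1
    return n

random.seed(0)
for p in (0.5, 0.1, 0.05):
    mean = sum(attempts_to_link(p) for _ in range(10_000)) / 10_000
    print(f"p = {p:>4}: mean attempts ~ {mean:.1f} (expected 1/p = {1/p:.0f})")
```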

Alpha Solarco Inc. announced on May 18, 1987 the signing of two $175 million exclusive development contracts with the Pawnee and Otoe-Missouria Tribes of Oklahoma to build two 70,000-kilowatt photovoltaic electric generating stations on Tribal lands in Oklahoma to supply Indian and other requirements. The projects, to be built in four phases, will each consist of 35,000 kilowatts of photovoltaic generating capacity, supplied by the company's proprietary Modular Solar-Electric Photovoltaic Generator (MSEPG), and 35,000 kilowatts of gas-fired cogeneration. Alpha Solarco is itself starting to build and finance a 500-kilowatt demonstration plant as the initial step in the first project. This plant will be used to demonstrate that the proven MSEPG design and technology can be integrated into electric utility systems, either as a base-load generator for small utilities or as a peak-shaving device for large ones.