A generalized method is developed and tested for analytically examining one-dimensional steady flow of perfect gases with area change, heat transfer, friction, and mass injection. Generalized flow functions are developed, and sample tables are calculated and tested for both simple cases and combined changes. Normal shocks are noted to occur from the supersonic portion of these loci to the subsonic portion, in a manner analogous to simple-change behavior. 9 refs.
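As a hedged illustration of the kind of flow function such tables comprise, the classical isentropic area-Mach relation (the simple area-change case) can be computed directly; the function name and the choice gamma = 1.4 are assumptions for this sketch, not taken from the paper:

```python
# Isentropic area-Mach relation A/A* for a perfect gas, one of the
# classical flow functions tabulated for simple area change.
# gamma is the ratio of specific heats (1.4 assumed for air).

def area_ratio(M, gamma=1.4):
    """Return A/A* for isentropic flow at Mach number M."""
    if M <= 0:
        raise ValueError("Mach number must be positive")
    term = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return term ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

# A/A* equals 1 at the sonic throat and grows on both sides of M = 1;
# each area ratio therefore has one subsonic and one supersonic
# solution, the two branches that a normal shock connects.
```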

The third annual ARPA-E Energy Innovation Summit was held in Washington, D.C., in February 2012. The event brought together key players from across the energy ecosystem (researchers, entrepreneurs, investors, corporate executives, and government officials) to share ideas for developing and deploying the next generation of energy technologies. A few videos were selected for showing to attendees during the Summit. These 'performer videos' highlight ongoing innovative research related to the main topics of the Summit's sessions. Featured in this video are David Marcus, Founder of GeneralCompression, and Eric Ingersoll, CEO of GeneralCompression. GeneralCompression, with the help of ARPA-E funding, has created an advanced air compression process that can store and release more than a week's worth of the energy generated by wind turbines.

RANDOM VARIATE GENERATION FOR THE DIGAMMA AND TRIGAMMA DISTRIBUTIONS. Luc Devroye, School of Computer Science … these distributions and selected generalized hypergeometric distributions. The generators can also be used for the discrete stable distribution, the Yule distribution, Mizutani's distribution, and the Waring distribution.

A general formulation was developed to represent material models for applications in dynamic loading. Numerical methods were devised to calculate response to shock and ramp compression, and ramp decompression, generalizing previous solutions for scalar equations of state. The numerical methods were found to be flexible and robust, and matched analytic results to a high accuracy. The basic ramp and shock solution methods were coupled to solve for composite deformation paths, such as shock-induced impacts, and shock interactions with a planar interface between different materials. These calculations capture much of the physics of typical material dynamics experiments, without requiring spatially-resolving simulations. Example calculations were made of loading histories in metals, illustrating the effects of plastic work on the temperatures induced in quasi-isentropic and shock-release experiments, and the effect of a phase transition.

We report on the discovery, isolation, and use of a novel yellow fluorescent protein. Lucigen Yellow (LucY) binds one FAD molecule within its core, thus shielding it from water and maintaining its structure so that fluorescence is 10-fold higher than freely soluble FAD. LucY displays excitation and emission spectra characteristic of FAD, with three excitation peaks at 276 nm, 377 nm, and 460 nm and a single emission peak at 530 nm. These excitation and emission maxima provide the large Stokes shift beneficial to fluorescence experimentation. LucY belongs to the MurB family of UDP-N-acetylenolpyruvylglucosamine reductases. The high resolution crystal structure shows that, in contrast to other structurally resolved MurB enzymes, LucY does not contain a potentially quenching aromatic residue near the FAD isoalloxazine ring, which may explain its increased fluorescence over related proteins. Using E. coli as a system in which to develop LucY as a reporter, we show that it is amenable to circular permutation and use as a reporter of protein-protein interaction. Fragmentation between its distinct domains renders LucY non-fluorescent, but fluorescence can be partially restored by fusion of the fragments to interacting protein domains. Thus, LucY may find application in Protein-fragment Complementation Assays for evaluating protein-protein interactions.

FROM GENETIC CODING TO PROTEIN FOLDING. Jean-Luc Jestin. ABSTRACT: A discrete classical mechanics (DCM) … of the genetic code. A DCM model for protein folding allows a set of folding nuclei to be derived for each … A PROTEIN FOLDING MODEL: Let us consider the following protein folding model. A chemical group of mass m …

COMPRESSIVE VIDEO CLASSIFICATION IN A LOW-DIMENSIONAL MANIFOLD WITH LEARNED DISTANCE METRIC. George Tzagkarakis1, Grigorios Tsagkatakis2, Jean-Luc Starck1 and Panagiotis Tsakalides2. 1 Commissariat à l'Énergie … of video classification based on a set of compressed features, without the need of accessing the original …

The Fast Multipole Method on the Cell processor. Pierre Fortin and Jean-Luc Lamotte, UPMC Univ Paris … fortin@lip6.fr. Abstract: This paper presents the first deployment of the Fast Multipole Method on the Cell … (Multipole with BLAS) in order to directly and efficiently offload the most time consuming operators of both …

corresponding author: jean-luc.maurice@polytechnique.edu. DEVELOPING LOW-COST GRAPHENE DEVICES. C. S… In spite of numerous efforts for developing the applications of graphene, it remains difficult to put … large-area (industrial) graphene includes in its structure and on its surfaces a significant density of defects that make …

Wavelet-Based Combined Signal Filtering and Prediction. Olivier Renaud, Jean-Luc Starck, and Fionn Murtagh. Abstract: We survey a number of applications of the wavelet transform in time series prediction … experimental assessment, we demonstrate the power of this methodology. Index Terms: Wavelet transform …

Entropy-based Power Attack. Houssem Maghrebi, Sylvain Guilley, Jean-Luc Danger, Florent Flament … to Higher-Order Differential Power Analysis (HO-DPA). For instance, an attack based on a variance analysis … to information-theoretic HO attacks, called the Entropy-based Power Analysis (EPA). This new attack gives …

DiSC: Benchmarking Secure Chip DBMS. Nicolas Anciaux, Luc Bouganim, Philippe Pucheral, and Patrick … irrelevant. The main problem faced by secure chip DBMS designers is to be able to assess various design choices and trade-offs for different applications. Our solution is to use a benchmark for secure chip DBMS …

A method and apparatus for embedding auxiliary information into the digital representation of host data created by a lossy compression technique, and a method and apparatus for constructing auxiliary data from the correspondence between values in a digital key-pair table and integer index values existing in a representation of host data created by a lossy compression technique. The methods apply to data compressed with algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as ordered sequences of blocks containing integer indices having redundancy and uncertainty of value by one unit, allowing indices which are adjacent in value to be manipulated to encode auxiliary data. Also included is a method to improve the efficiency of lossy compression algorithms by embedding white noise into the integer indices. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding stage is increased by manipulating the indices at the intermediate stage. Manipulation of the intermediate representation improves lossy compression performance by 1 to 10%.
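As a hedged sketch of the adjacency idea (not the patented algorithm itself): because each quantized index carries one unit of uncertainty, an index can be nudged to the adjacent value so that, say, its parity encodes one auxiliary bit. The function names here are invented for illustration.

```python
# Hedged illustration: hide auxiliary bits in quantized indices by
# nudging each index to the adjacent value whose parity matches the
# bit. Invented names; not the patented procedure.

def embed_bits(indices, bits):
    """Force each index's parity to equal the auxiliary bit."""
    out = []
    for idx, bit in zip(indices, bits):
        if idx % 2 != bit:
            idx += 1 if idx % 2 == 0 else -1  # move to an adjacent value
        out.append(idx)
    return out

def extract_bits(indices, n):
    """Read the first n auxiliary bits back from index parities."""
    return [idx % 2 for idx in indices[:n]]

indices = [10, 11, 12, 13]        # toy quantized coefficients
message = [1, 0, 1, 1]            # auxiliary bits to embed
stego = embed_bits(indices, message)
```

Each index moves by at most one unit, which is exactly the value uncertainty the abstract says lossy quantization already tolerates.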

Modern RDBMSs support the ability to compress data using methods such as null suppression and dictionary encoding. Data compression offers the promise of significantly reducing storage requirements and improving I/O performance for decision support queries. However, compression can also slow down update and query performance due to the CPU costs of compression and decompression. In this paper, we study how data compression affects choice of appropriate physical database design, such as indexes, for a given workload. We observe that approaches that decouple the decision of whether or not to choose an index from whether or not to compress the index can result in poor solutions. Thus, we focus on the novel problem of integrating compression into physical database design in a scalable manner. We have implemented our techniques by modifying Microsoft SQL Server and the Database Engine Tuning Advisor (DTA) physical design tool. Our techniques are general and are potentially applicable to DBMSs that support other co...

A method of embedding auxiliary information into the digital representation of host data created by a lossy compression technique is disclosed. The method applies to data compressed with lossy algorithms based on series expansion, quantization to a finite number of symbols, and entropy coding. Lossy compression methods represent the original data as integer indices having redundancy and uncertainty in value by one unit. Indices which are adjacent in value are manipulated to encode auxiliary data. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user. Lossy compression methods use loss-less compression, known also as entropy coding, to reduce the intermediate index representation to its final size. The efficiency of this entropy coding is increased by manipulating the indices at the intermediate stage in the manner taught by the method. 11 figs.

ROBUST SPEECH / MUSIC CLASSIFICATION IN AUDIO DOCUMENTS. Julien PINQUIER, Jean-Luc ROUAS and R… … coefficients are extracted and energy is computed in 40 perceptual channels. This energy is then filtered … energy. The relevance of these features is studied in a first experiment based on a development corpus …

Automatic Layout Optimization of an EMC filter. Thomas DE OLIVEIRA, Jean-Luc SCHANEN, Jean … France. Abstract: The transfer function of an EMC (Electro-Magnetic Compatibility) filter is strongly … for embedded applications. When looking at a power converter, the EMC filter represents 30% of the volume …

In this study, Luc Racaut seeks to recast the traditional interpretation of the Reformation in France by restoring Catholicism to the equation. Despite the tremendous vitality of the early Reformation, he notes that "its eventual achievements were limited… guaranteed religious toleration to the Reformed church in France." Why did France remain Catholic? By looking at a sample of French Catholic polemic published before the St. Bartholomew's Day massacre of 1572, Racaut hopes to provide a partial answer...

This project, under contract from California Energy Commission, developed the CASE (Compressed Air Supply Efficiency) Index as a stand-alone value for compressor central plant efficiency. This Index captures the overall efficiency of a compressed...

Compressed sensing allows perfect recovery of sparse signals (or signals sparse in some basis) using only a small number of random measurements. Existing results in compressed sensing literature have focused on characterizing ...
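A minimal sketch of this setting, assuming a generic greedy recovery method (orthogonal matching pursuit) rather than any solver from the literature above; all dimensions and signal values are arbitrary illustrative choices:

```python
import numpy as np

# Toy compressed-sensing demo: recover a k-sparse vector from m << n
# random Gaussian measurements y = A @ x via orthogonal matching
# pursuit (OMP). Illustrative sketch only.

def omp(A, y, k):
    """Greedily select k columns of A that best explain the residual."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit and update
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                       # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[[5, 20, 40]] = [1.0, -2.0, 0.5]         # a k-sparse signal
y = A @ x                                 # m random measurements
x_hat = omp(A, y, k)
```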

A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

… these requirements. A significant application of this program is to the problem of compression of speech files … problems associated with general waveform compression, namely predictive modelling and residual coding at the fixed bit rate. Similarly there has been much work in design of general purpose lossless compressors …

Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards: a wide variety of electrical power, pressurized air, and cooling water systems in use; crane and hoist loads; working at height; and handling compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

… efficiency for a given compression time and compression ratio. The main part of the heat transfer model … profile for a general heat transfer model. While the results show a good improvement both in the lumped …

Chapter 6: LAPPED TRANSFORMS FOR IMAGE COMPRESSION. Ricardo L. de Queiroz, Digital Imaging Technology … aspects of lapped transforms and their applications to image compression. It is a subject that has been extensively studied, mainly because lapped transforms are closely related to filter banks, wavelets, and time …

Compressed Gas Cylinder Safety. I. Background. Due to the nature of gas cylinders … hazards of a ruptured cylinder. There are almost 200 different types of materials in gas cylinders; there are several general procedures to follow for safe storage and handling of a compressed gas cylinder: II …

A COMPRESSED AIR REDUCTION PROGRAM. K. Dwight Hawks, General Motors Corporation - Buick-Oldsmobile-Cadillac Group, Warren, Michigan. ABSTRACT: The reason for implementing this program was to assist the plant in quantifying some of its leaks... in the equipment throughout the plant and to provide direction as to which leaks are generating high utility costs. The direction is very beneficial in making maintenance aware of problems within equipment as well as notifying them as to where thei...

From ICE to ICF, the effect of mean compression or expansion is important for predicting the state of the turbulence. When developing combustion models, we would like to know the mix state of the reacting species. This involves density and concentration fluctuations. To date, research has focused on the effect of compression on the turbulent kinetic energy. The current work provides constraints to help development and calibration for models of species mixing effects in compressed turbulence. The Cambon, et al., re-scaling has been extended to buoyancy driven turbulence, including the fluctuating density, concentration, and temperature equations. The new scalings give us helpful constraints for developing and validating RANS turbulence models.

GRIDS Project: GeneralCompression has developed a transformative, near-isothermal compressed air energy storage system (GCAES) that prevents air from heating up during compression and cooling down during expansion. When integrated with renewable generation, such as a wind farm, intermittent energy can be stored in compressed air in salt caverns or pressurized tanks. When electricity is needed, the process is reversed and the compressed air is expanded to produce electricity. Unlike conventional compressed air energy storage (CAES) projects, no gas is burned to convert the stored high-pressure air back into electricity. The result of this breakthrough is an ultra-efficient, fully shapeable, 100% renewable and carbon-free power product. The GCAES system can provide high quality electricity and ancillary services by effectively integrating renewables onto the grid at a cost that is competitive with gas, coal and nuclear generation.

… to develop, design, and test compressors built to meet the needs of the mechanically demanding industrial heat pump applications which often require high compression ratios and temperatures in excess of 200 degrees F. This paper will review the theoretical...

We commonly find plants using padding to transport liquids or light solids short distances from tankers into storage tanks. Padding can wreak havoc in compressed air systems with limited storage, undersized cleanup equipment (dryers and filters...

The present invention relates generally to the field of homogenous charge compression ignition engines, in which fuel is injected when the cylinder piston is relatively close to the bottom dead center position for its compression stroke. The fuel mixes with air in the cylinder during the compression stroke to create a relatively lean homogeneous mixture that preferably ignites when the piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. The present invention utilizes internal exhaust gas recirculation and/or compression ratio control to control the timing of ignition events and combustion duration in homogeneous charge compression ignition engines. Thus, at least one electro-hydraulic assist actuator is provided that is capable of mechanically engaging at least one cam actuated intake and/or exhaust valve.

The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
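The core quantization step can be sketched as a uniform scalar quantizer with a dead zone around zero; the bin width `Q` and dead-zone width `Z` below are illustrative values, not the per-subband parameters the WSQ standard actually derives:

```python
# Uniform scalar quantization with a dead zone around zero, the basic
# building block of wavelet/scalar quantization. Bin width Q and
# dead-zone width Z are illustrative, not the spec's derived values.

def quantize(coeffs, Q=0.5, Z=1.0):
    """Map coefficients to signed integer bin indices."""
    out = []
    for c in coeffs:
        if abs(c) <= Z / 2:
            out.append(0)                           # dead zone
        elif c > 0:
            out.append(int((c - Z / 2) / Q) + 1)
        else:
            out.append(-(int((-c - Z / 2) / Q) + 1))
    return out

def dequantize(indices, Q=0.5, Z=1.0):
    """Reconstruct each coefficient at the centre of its bin."""
    return [0.0 if i == 0
            else (i - 0.5) * Q + Z / 2 if i > 0
            else -((-i - 0.5) * Q + Z / 2)
            for i in indices]
```

The dead zone sends small (mostly noise) coefficients to zero, which is what makes the subsequent entropy-coding stage so effective.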

Compressive Rendering of Multidimensional Scenes. Pradeep Sen, Soheil Darabi, and Lei Xiao. Advanced … of using compressed sensing to reconstruct the 2D images produced by a rendering system, a process we called compressive rendering. In this work, we present the natural extension of this idea …

The present invention relates generally to the field of homogeneous charge compression engines. In these engines, fuel is injected upstream or directly into the cylinder when the power piston is relatively close to its bottom dead center position. The fuel mixes with air in the cylinder as the power piston advances to create a relatively lean homogeneous mixture that preferably ignites when the power piston is relatively close to the top dead center position. However, if the ignition event occurs either earlier or later than desired, lowered performance, engine misfire, or even engine damage, can result. Thus, the present invention divides the homogeneous charge between a controlled volume higher compression space and a lower compression space to better control the start of ignition.

Here we report quasi-isentropic dynamic compression and thermodynamic characterization of solid, precompressed deuterium over an ultrafast time scale (< 100 ps) and a microscopic length scale (< 1 µm). We further report a fast transition in shock wave compressed solid deuterium that is consistent with the ramp to shock transition, with a time scale of less than 10 ps. These results suggest that high-density dynamic compression of hydrogen may be possible on microscopic length scales.

The design of a compressed air system was formerly limited to the selection of an air compressor large enough to deliver sufficient compressed air for the estimated system requirements. As system air requirements grew, additional compressors were added... specification, selection and installation process will follow. BACKGROUND: For more than 100 years compressed air has been used throughout industry as a safe and reliable utility. The generation of this utility is performed by an air compressor. The first...

We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
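A toy version of this recipe, using a single-level Haar transform in place of the study's (unspecified) wavelet family; the threshold is an arbitrary illustrative value:

```python
# One-level Haar transform with coefficient thresholding: low/high
# pass split, zero the small (noisy) detail coefficients, reconstruct.
# A sketch of the wavelet-compression recipe, not the study's code.

def haar_forward(x):
    """Split an even-length signal into averages and details."""
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    """Invert haar_forward exactly."""
    out = []
    for a, d in zip(avg, det):
        out += [a + d, a - d]
    return out

def denoise(x, threshold):
    """Zero detail coefficients below the threshold, then invert."""
    avg, det = haar_forward(x)
    det = [d if abs(d) >= threshold else 0.0 for d in det]
    return haar_inverse(avg, det)

signal = [1.0, 1.1, 4.0, 4.2]
smoothed = denoise(signal, threshold=0.2)   # small details zeroed
```

The zeroed coefficients are exactly what gives the follow-on lossless coder its leverage: long runs of zeros compress very well.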

A compression algorithm is presented that uses the set of prime numbers. Sequences of numbers are correlated with the prime numbers, and labeled with the integers. The algorithm can be iterated on data sets, generating factors of doubles on the compression.

This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

A compressor is a device used to increase the pressure of a compressible fluid. The inlet pressure can vary from a deep vacuum to a high positive pressure. The discharge pressure can range from subatmospheric levels to tens of thousands of pounds per square inch. Compressors come in numerous forms, but for oilfield applications there are two primary types, reciprocating and rotary. Both reciprocating and rotary compressors are grouped in the intermittent mode of compression. Intermittent compression is cyclic in nature, in that a specific quantity of gas is ingested by the compressor, acted upon, and discharged before the cycle is repeated. Reciprocating compression is the most common form of compression used for oilfield applications. Rotary screw compressors have a long history but are relative newcomers to oilfield applications. The rotary screw compressor (technically a helical rotor compressor) dates back to 1878, when the first rotary screw was manufactured for the purpose of compressing air. Today thousands of rotary screw compression packages are being used throughout the world to compress natural gas.

A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

Air compressors are a significant industrial energy user and therefore a prime target for industrial energy audits. The project goal was to develop a software tool, AIRMaster, and supporting methodology for performing compressed air system audits...

In this study for the first time we generalize streamline models to compressible flow using a rigorous formulation while retaining most of its computational advantages. Our new formulation is based on three major elements and requires only minor...

We present a procedure for efficiently compressing astronomical radio data for high performance applications. Integrated, post-correlation data are first passed through a nearly lossless rounding step which compares the precision of the data to a generalized and calibration-independent form of the radiometer equation. This allows the precision of the data to be reduced in a way that has an insignificant impact on the data. The newly developed Bitshuffle lossless compression algorithm is subsequently applied. When the algorithm is used in conjunction with the HDF5 library and data format, data produced by the CHIME Pathfinder telescope is compressed to 28% of its original size and decompression throughputs in excess of 1 GB/s are obtained on a single core.
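The reduce-then-compress idea can be sketched as follows, with `zlib` standing in for the Bitshuffle/HDF5 stage and a made-up precision step in place of the radiometer-equation criterion:

```python
import struct
import zlib

# Sketch of reduce-then-compress: coarsen floating-point samples to a
# noise-determined precision, then apply a generic lossless coder.
# zlib stands in for Bitshuffle+HDF5; the step size is made up.

def round_to_precision(samples, step):
    """Quantize samples to multiples of `step` (the tolerable noise)."""
    return [round(s / step) * step for s in samples]

def packed_size(samples):
    """Size in bytes of the samples after lossless compression."""
    raw = struct.pack(f"<{len(samples)}d", *samples)
    return len(zlib.compress(raw, 9))

noisy = [1.0 + 1e-9 * i for i in range(1000)]   # tiny, meaningless jitter
coarse = round_to_precision(noisy, 1e-6)        # drop below-noise digits
# Rounding makes the byte stream far more repetitive, so the lossless
# stage compresses it much harder, at a bounded (half-step) error.
```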

We explore a generalization of quantum secret sharing (QSS) in which classical shares play a complementary role to quantum shares, exploring further consequences of an idea first studied by Nascimento, Mueller-Quade, and Imai [Phys. Rev. A 64, 042311 (2001)]. We examine three ways, termed inflation, compression, and twin thresholding, by which the proportion of classical shares can be augmented. This has the important application that it reduces quantum (information processing) players by replacing them with their classical counterparts, thereby making quantum secret sharing considerably easier and less expensive to implement in a practical setting. In compression, a QSS scheme is turned into an equivalent scheme with fewer quantum players, compensated for by suitable classical shares. In inflation, a QSS scheme is enlarged by adding only classical shares and players. In a twin-threshold scheme, we invoke two separate thresholds for classical and quantum shares based on the idea of information dilution.

The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic compression and smaller still after dynamic axial loading.

Simulation of quantum systems promises to deliver physical and chemical predictions for the frontiers of technology. Unfortunately, the exact representation of these systems is plagued by the exponential growth of dimension with the number of particles, or colloquially, the curse of dimensionality. The success of approximation methods has hinged on the relative simplicity of physical systems with respect to the exponentially complex worst case. Exploiting this relative simplicity has required detailed knowledge of the physical system under study. In this work, we introduce a general and efficient black box method for many-body quantum systems that utilizes technology from compressed sensing to find the most compact wavefunction possible without detailed knowledge of the system. It is a Multicomponent Adaptive Greedy Iterative Compression (MAGIC) scheme. No knowledge is assumed in the structure of the problem other than correct particle statistics. This method can be applied to many quantum systems such as spins, qubits, oscillators, or electronic systems. As an application, we use this technique to compute ground state electronic wavefunctions of hydrogen fluoride and recover 98% of the basis set correlation energy or equivalently 99.996% of the total energy with $50$ configurations out of a possible $10^7$. Building from this compactness, we introduce the idea of nuclear union configuration interaction for improving the description of reaction coordinates and use it to study the dissociation of hydrogen fluoride and the helium dimer.
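The abstract does not spell out the MAGIC algorithm, but the flavor of greedy iterative compression can be sketched with an orthogonal-matching-pursuit-style loop in NumPy. All names here are hypothetical, and the real method operates on many-body wavefunctions in a configuration basis, not on generic vectors.

```python
import numpy as np

def greedy_compress(target, basis, n_terms):
    """Greedily select basis columns ("configurations") that best
    reduce the residual, then refit coefficients by least squares.
    An OMP-style sketch, not the authors' MAGIC scheme."""
    residual = target.astype(float).copy()
    chosen = []
    coeffs = np.array([])
    for _ in range(n_terms):
        scores = np.abs(basis.T @ residual)
        scores[chosen] = -np.inf          # do not reselect a column
        chosen.append(int(np.argmax(scores)))
        sub = basis[:, chosen]
        coeffs, *_ = np.linalg.lstsq(sub, target, rcond=None)
        residual = target - sub @ coeffs
    return chosen, coeffs, residual
```

When the target really is a sparse combination of a few basis vectors, a handful of greedy iterations recovers it, which mirrors the paper's point that 50 configurations out of 10^7 can capture nearly all of the energy.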

We present a progressive compression technique for volumetric subdivision meshes based on the slow growing refinement algorithm. The system is comprised of a wavelet transform followed by a progressive encoding of the resulting wavelet coefficients. We compare the efficiency of two wavelet transforms. The first transform is based on the smoothing rules used in the slow growing subdivision technique. The second transform is a generalization of lifted linear B-spline wavelets to the same multi-tier refinement structure. Direct coupling with a hierarchical coder produces progressive bit streams. Rate distortion metrics are evaluated for both wavelet transforms. We tested the practical performance of the scheme on synthetic data as well as data from laser indirect-drive fusion simulations with multiple fields per vertex. Both wavelet transforms result in high quality trade off curves and produce qualitatively good coarse representations.
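On a 1-D periodic signal, the lifted linear B-spline wavelet that the second transform generalizes can be written as a two-step lifting scheme (predict, then update). This is the standard 1-D construction as a sketch, not the authors' multi-tier refinement version for subdivision meshes.

```python
import numpy as np

def lifted_linear_forward(x):
    """One level of the lifted linear wavelet on a 1-D periodic
    signal of even length (illustrative sketch)."""
    even = x[0::2].astype(float)
    odd = x[1::2].astype(float)
    # Predict: each odd sample from linear interpolation of neighbors.
    detail = odd - 0.5 * (even + np.roll(even, -1))
    # Update: correct the coarse samples to preserve the signal mean.
    coarse = even + 0.25 * (detail + np.roll(detail, 1))
    return coarse, detail

def lifted_linear_inverse(coarse, detail):
    # Lifting steps are exactly invertible by undoing them in reverse.
    even = coarse - 0.25 * (detail + np.roll(detail, 1))
    odd = detail + 0.5 * (even + np.roll(even, -1))
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Exact invertibility of lifting is what makes a progressive encoder lossless up to coefficient quantization.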

CHAPTER 9. Image Compression by Back Propagation: An Example of Extensional Programming. GARRISON W. ... as is the case with the computations associated with basic cognitive processes such as vision and audition. The technique we employ is known as back propagation, developed by Rumelhart, Hinton...

to precompression characteristics (Brockmann, 1966). Hsmdy (1962) found that acceptable compressed and freeze-dried spinach could be obtained by plasticizing the product to a moisture content of 9% before compression. Ishler (1962) reported that spraying... the dehydrated food before compression with either water, glycerine or propylene glycol produced bars with excellent rehydration characteristics. He recommended spraying freeze-dried cellular foods to 5-13% moisture, compressing, and redrying to less than...

Compressibility and strength of nanocrystalline tungsten boride under compression to 60 GPa. The compressibility and strength of nanocrystalline tungsten boride (WB) were investigated using radial x-ray diffraction (RXRD) in a diamond...

The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is, and will be, large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run down to 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment, with the best performance in the 75% to 80% range.
The goal of this advanced reciprocating compression program is to develop the technology for both high speed and low speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity. Retrofit technologies that address the challenges of slow-speed integral compression are: (1) optimum turndown using a combination of speed and clearance with single-acting operation as a last resort; (2) if single-acting is required, implement infinite length nozzles to address nozzle pulsation and tunable side branch absorbers for 1x lateral pulsations; and (3) advanced valves, either the semi-active plate valve or the passive rotary valve, to extend valve life to three years with half the pressure drop. This next generation of slow-speed compression should attain 95% efficiency, a three-year valve life, and expanded turndown. New equipment technologies that address the challenges of large-horsepower, high-speed compression are: (1) optimum turndown with unit speed; (2) tapered nozzles to effectively reduce nozzle pulsation with half the pressure drop and minimization of mechanical cylinder stretch induced vibrations; (3) tunable side branch absorber or higher-order filter bottle to address lateral piping pulsations over the entire extended speed range with minimal pressure drop; and (4) semi-active plate valves or passive rotary valves to extend valve life with half the pressure drop. This next generation of large-horsepower, high-speed compression should attain 90% efficiency, a two-year valve life, 50% turndown, and less than 0.75 IPS vibration. This program has generated proof-of-concept technologies with the potential to meet these ambitious goals. Full development of these identified technologies is underway. The GMRC has committed to pursue the most promising enabling technologies for their industry.

Compressed-air systems are used widely throughout industry for many operations, including pneumatic tools, packaging and automation equipment, conveyors, and other industrial process operations. Compressed-air systems are defined as a group of subsystems composed of air compressors, air treatment equipment, controls, piping, pneumatic tools, pneumatically powered machinery, and process applications using compressed air. A compressed-air system has three primary functional subsystems: supply, distribution, and demand. Air compressors are the primary energy consumers in a compressed-air system and are the primary focus of this protocol. The two compressed-air energy efficiency measures specifically addressed in this protocol are: high-efficiency/variable speed drive (VSD) compressor replacing modulating compressor; compressed-air leak survey and repairs. This protocol provides direction on how to reliably verify savings from these two measures using a consistent approach for each.

An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

Storing a word or a whole text segment requires substantial storage space; typically a character occupies 1 byte of memory. Compressing the memory footprint is therefore important for data management, and for text data this compression must be lossless. We propose a lossless memory-requirement compression method for text data. The proposed method compresses a text segment or text file using a two-level approach: first reduction, then compression. Reduction is done using a word lookup table rather than a traditional indexing system; compression is then done using currently available compression methods. The word lookup table is part of the operating system, and the reduction is performed by the operating system. Under this method each word is replaced by an address value. This method can quite effectively reduce the size of persistent memory required for text data. At the end of the first l...
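A minimal sketch of the two-level idea in Python, with zlib standing in for "currently available compression methods". The table layout and function names are illustrative; in the proposal the lookup table would live inside the operating system.

```python
import zlib

def build_table(vocabulary):
    # Hypothetical system-wide word lookup table: word -> address value.
    return {word: i for i, word in enumerate(vocabulary)}

def reduce_text(text, table):
    # Level 1 (reduction): replace each known word by its address value;
    # words missing from the table pass through unchanged.
    return " ".join(str(table[w]) if w in table else w
                    for w in text.split())

def compress_text(text, table):
    # Level 2 (compression): run a stock compressor over the reduced text.
    return zlib.compress(reduce_text(text, table).encode("utf-8"))
```

Because common words collapse to short numeric tokens, the second-level compressor sees a smaller, more regular input than the raw text.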

When we were approached to give a general discussion of some aspects of the Los Alamos flux compression program, we decided to present historical backgrounds of a few topics that are very much in the forefront of activities going on today. Of some thirty abstracts collected at Los Alamos for this conference, ten dealt with electromagnetic acceleration of materials, notably the compression of heavy liners, and five dealt with plasma compression. Both of these topics have been under investigation, off and on, from the time a formal flux compression program was organized at Los Alamos. We decided that a short overview of work done in these areas would be of some interest. Some of the work described below has been discussed in Laboratory reports that, while referenced and available, are not readily accessible. For completeness, some previously published, accessible work is also discussed but much more briefly. Perhaps the most striking thing about the early work in these two areas is how primitive much of it was when compared to the far more sophisticated, related activities of today. Another feature of these programs, actually of most programs, is their cyclic nature. Their relevance and/or funding seems to come and go. Eventually, many of the older programs come back into favor. Activities involving the dense plasma focus (DPF), about which some discussion will be given later, furnish a classic example of this kind, coming into and then out of periods of heightened interest. We devote the next two sections of this paper to a review of our work in magnetic acceleration of solids and of plasma compression. A final section gives a survey of our work in which thin foils are imploded to produce intense quantities of soft x-rays. The authors are well aware of much excellent work done elsewhere in all of these topics, but partly because of space limitations, have confined this discussion to work done at Los Alamos.

Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC) which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock.
When loads on the engine are low the compression ratio can be raised (to as much as 18:1) providing high engine efficiency. It is important to recognize that for a well designed VCR engine cylinder pressure does not need to be higher than found in current production turbocharged engines. As such, there is no need for a stronger crankcase, bearings and other load bearing parts within the VCR engine. The Envera VCR mechanism uses an eccentric carrier approach to adjust engine compression ratio. The crankshaft main bearings are mounted in this eccentric carrier or 'crankshaft cradle' and pivoting the eccentric carrier 30 degrees adjusts compression ratio from 9:1 to 18:1. The eccentric carrier is made up of a casting that provides rigid support for the main bearings, and removable upper bearing caps. Oil feed to the main bearings transits through the bearing cap fastener sockets. The eccentric carrier design was chosen for its low cost and rigid support of the main bearings. A control shaft and connecting links are used to pivot the eccentric carrier. The control shaft mechanism features compression ratio lock-up at minimum and maximum compression ratio settings. The control shaft method of pivoting the eccentric carrier was selected due to its lock-up capability. The control shaft can be rotated by a hydraulic actuator or an electric motor. The engine shown in Figures 3 and 4 has a hydraulic actuator that was developed under the current program. In-line 4-cylinder engines are significantly less expensive than V engines because an entire cylinder head can be eliminated. The cost savings from eliminating cylinders and an entire cylinder head will notably offset the added cost of the VCR and supercharging. Replacing V6 and V8 engines with in-line VCR 4-cylinder engines will provide high fuel economy at low cost. Numerous enabling technologies exist which have the potential to increase engine efficiency. 
The greatest efficiency gains are realized when the right combination of advanced and new technologies is packaged together to provide the greatest gains at the least cost. Aggressive engine downsizing...
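The geometry behind the VCR argument is just the definition CR = (Vd + Vc)/Vc: lowering the ratio enlarges the clearance volume Vc at top dead center, making room for more boosted charge at the same peak pressure. A quick check with illustrative (non-Envera) numbers:

```python
def clearance_volume(displacement_cc, compression_ratio):
    """TDC clearance volume from CR = (Vd + Vc) / Vc, i.e.
    Vc = Vd / (CR - 1).  Cylinder sizes here are illustrative only."""
    return displacement_cc / (compression_ratio - 1.0)

# For a 500 cc cylinder, pivoting from 18:1 down to 9:1 roughly
# doubles the volume available at top dead center.
vc_low_load  = clearance_volume(500.0, 18.0)   # high-efficiency setting
vc_high_load = clearance_volume(500.0, 9.0)    # high-boost setting
```

The more-than-doubled TDC volume at the 9:1 setting is what admits the extra boosted charge without raising peak cylinder pressure toward the knock limit.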

Energy efficient design, operation and maintenance of compressed air systems in industrial plants can provide substantial reductions in electric power and other operational costs. This guidebook will help identify cost effective, energy efficiency opportunities in compressed air system design, re-design, operation and maintenance. The guidebook provides: (1) a broad overview of industrial compressed air systems, (2) methods for estimating compressed air consumption and projected air savings, (3) a description of applicable, generic energy conservation measures, and, (4) a review of some compressed air system demonstration projects that have taken place over the last two years. The primary audience for this guidebook includes plant maintenance supervisors, plant engineers, plant managers and others interested in energy management of industrial compressed air systems.

A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to some 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mb card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's publication specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation under a high-rate assumption. Since the transform produces 64 subbands, quite a few bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems better founded theoretically. We then discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.
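The subband quantization at the heart of WSQ can be illustrated with a uniform dead-zone scalar quantizer. The step size would be set per subband by the bit allocation; this sketch omits the wavelet transform and the Huffman entropy coding entirely.

```python
import numpy as np

def quantize(band, step):
    """Uniform scalar quantizer with a dead zone around zero, of the
    kind applied per subband in WSQ-style coders (sketch)."""
    return np.sign(band) * np.floor(np.abs(band) / step)

def dequantize(indices, step):
    # Reconstruct nonzero bins at their centers; the dead zone maps to 0.
    return np.sign(indices) * (np.abs(indices) + 0.5) * step * (indices != 0)
```

The dead zone sends small coefficients to exactly zero, producing the long zero runs that make the subsequent entropy coder effective, while the reconstruction error stays bounded by the step size.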

This Letter proposes spatial scaling laws for the density-weighted energy spectrum of compressible flow in terms of the dissipation rate, wave number, and Mach number. The study shows that the compressible turbulence energy spectrum exhibits not complete similarity but incomplete similarity, as $E(k,Ma)=\left(C+\frac{D}{\ln Ma}\right)\dots$

Compressed Earth Blocks (CEB) are a developed earth technology in which unbaked bricks are produced by compressing raw soil using manual, hydraulic, or mechanical compressing machines. Revealing the potential of an affordable ...

in compressing the periodic data arising from 3D stator/rotor and flutter applications. ... Fourier compression can only be performed as a post-processing step. Also, the reconstruction at a particular instant...

Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiency of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression; during query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by trading some compression effectiveness for operation speed. In a number of tests on both synthetic data and real application data, we found that the new schemes significantly outperform the well-known compression schemes while using only modestly more space. For example, compared to the Byte-aligned Bitmap Code, the new schemes are 12 times faster and use only 50 percent more space. The new schemes use much less space (<30 percent) than the uncompressed scheme and are faster in a majority of the test cases.
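The core trade these schemes make, cheaper operations in exchange for slightly weaker compression, builds on word-aligned run-length coding. A toy word-aligned encoder over Python lists, in the spirit of WAH rather than the paper's exact scheme:

```python
def wah_encode(bits, w=31):
    """Group the bitmap into w-bit chunks; runs of identical all-0 or
    all-1 chunks collapse into one counted 'fill' word, mixed chunks
    are stored literally (illustrative sketch of WAH-style coding)."""
    out = []
    for i in range(0, len(bits), w):
        g = bits[i:i + w]
        if len(g) == w and len(set(g)) == 1:      # homogeneous chunk
            if out and out[-1][0] == 'fill' and out[-1][1] == g[0]:
                out[-1] = ('fill', g[0], out[-1][2] + 1)
            else:
                out.append(('fill', g[0], 1))
        else:                                     # mixed (literal) chunk
            out.append(('lit', tuple(g)))
    return out

def wah_decode(words, w=31):
    bits = []
    for word in words:
        if word[0] == 'fill':
            bits.extend([word[1]] * (w * word[2]))
        else:
            bits.extend(word[1])
    return bits
```

Because fill words stay word-aligned, logical AND/OR between two compressed bitmaps can proceed run-by-run without unpacking to individual bits, which is where the operation-speed advantage comes from.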

Smart Grids measure energy usage in real-time and tailor supply and delivery accordingly, in order to improve power transmission and distribution. For the grids to operate effectively, it is critical to collect readings from massively-installed smart meters at control centers in an efficient and secure manner. In this paper, we propose a secure compressed reading scheme to address this critical issue. We observe that our collected real-world meter data express strong temporal correlations, indicating they are sparse in certain domains. We adopt the Compressed Sensing technique to exploit this sparsity and design an efficient meter data transmission scheme. Our scheme achieves the substantial efficiency offered by compressed sensing without needing to know beforehand in which domain the meter data are sparse. This is in contrast to traditional compressed-sensing-based schemes, where such sparse-domain information is required a priori. We then design a specific dependable scheme to work with our compressed-sensing-based ...

Data compression is a ubiquitous aspect of modern information technology, and the advent of quantum information raises the question of what types of compression are feasible for quantum data, where it is especially relevant given the extreme difficulty involved in creating reliable quantum memories. We present a protocol in which an ensemble of quantum bits (qubits) can in principle be perfectly compressed into exponentially fewer qubits. We then experimentally implement our algorithm, compressing three photonic qubits into two. This protocol sheds light on the subtle differences between quantum and classical information. Furthermore, since data compression stores all of the available information about the quantum state in fewer physical qubits, it could provide a vast reduction in the amount of quantum memory required to store a quantum ensemble, making even today's limited quantum memories far more powerful than previously recognized.

An aerogel composite material produced by compression molding of aerogel microspheres (powders) mixed together with a small percentage of polymer binder to form monolithic shapes in a cost-effective manner is disclosed. The aerogel composites are formed by mixing aerogel microspheres with a polymer binder, placing the mixture in a mold and heating under pressure, which results in a composite with a density of 50--800 kg/m{sup 3} (0.05--0.80 g/cc). The thermal conductivity of the thus formed aerogel composite is below that of air, but higher than the thermal conductivity of monolithic aerogels. The resulting aerogel composites are attractive for applications such as thermal insulation since fabrication thereof does not require large and expensive processing equipment. In addition to thermal insulation, the aerogel composites may be utilized for filtration, ICF target, double layer capacitors, and capacitive deionization. 4 figs.

An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al{sub 2}O{sub 3}, AlN, B{sub 4}C, SiC, Si{sub 3}N{sub 4}, TiB{sub 2}, WC and ZrO{sub 2}. This report compiles the VISAR wave profiles and experimental impact parameters into a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation of high-strength, brittle solids.

PHELIX (Precision High Energy-density Liner Implosion eXperiment) is a concept for studying electromagnetic implosions using proton radiography. This approach requires a portable pulsed power and liner implosion apparatus that can be operated in conjunction with an 800 MeV proton beam at the Los Alamos Neutron Science Center. The high resolution (< 100 micron) provided by proton radiography combined with similar precision of liner implosions driven electromagnetically can permit close comparisons of multi-frame experimental data and numerical simulations within a single dynamic event. To achieve a portable implosion system for use at high energy-density in a proton laboratory area requires sub-megajoule energies applied to implosions only a few cms in radial and axial dimension. The associated inductance changes are therefore relatively modest, so a current step-up transformer arrangement is employed to avoid excessive loss to parasitic inductances that are relatively large for low-energy banks comprising only several capacitors and switches. We describe the design, construction and operation of the PHELIX system and discuss application to liner-driven, magnetic flux compression experiments. For the latter, the ability of strong magnetic fields to deflect the proton beam may offer a novel technique for measurement of field distributions near perturbed surfaces.

Arithmetic coding, in conjunction with a suitable probabilistic model, can provide nearly optimal data compression. In this article we analyze the effect that the model and the particular implementation of arithmetic ...

the kth-order empirical entropy of T, and σ is the size of the alphabet. In this paper we study compressed representations for another classical problem of string indexing, called dictionary matching in the literature. Precisely, a collection D...

We present the first sensitivity study of the material isentropes extracted from ramp compression experiments. We perform hydrodynamic simulations of representative experimental geometries associated with ramp compression experiments and discuss the major factors determining the accuracy of the equation of state information extracted from such data. In conclusion, we analyzed both qualitatively and quantitatively the major experimental factors that determine the accuracy of equations of state extracted from ramp compression experiments. Since in actual experiments essentially all the effects discussed here will compound, factoring out individual signatures and magnitudes, as done in the present work, is especially important. This study should provide some guidance for the effective design and analysis of ramp compression experiments, as well as for further improvements of ramp-generator performance.

are assessed. It is a common practice in facilities to simply add compressor capacity when faced with supply pressure or volume deficiencies, increasing the energy consumption associated with compressed air systems in industry. Additionally, in recent years...

This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

We present methods that can provide an exponential savings in the resources required to perform dynamic parameter estimation using quantum systems. The key idea is to merge classical compressive sensing techniques with ...

A laboratory test program was conducted to evaluate the one-dimensional (1D) compression and creep properties of intact sand (and silty-sand) samples from a deep borehole at the Malamocco Inlet to the Venice Lagoon. The ...

every industrial plant as a source of exergy for tools, actuators, and a myriad of manufacturing processes. For this analysis, a typical scenario is considered with a compressor installed indoors. Conditions for the indoor surroundings... are temperature T_I and pressure P_I, while the outdoor conditions, the environment, are T_o and P_o. The compressor system is defined as the compressor, dryer (aftercooler) and compressed air distribution system (piping). We assume that the compressed air exits...

Future orbiting observatories will survey large areas of sky in order to constrain the physics of dark matter and dark energy using weak gravitational lensing and other methods. Lossy compression of the resultant data will improve the cost and feasibility of transmitting the images through the space communication network. We evaluate the consequences of the lossy compression algorithm of Bernstein et al. (2010) for the high-precision measurement of weak-lensing galaxy ellipticities. This square-root algorithm compresses each pixel independently, and the information discarded is by construction less than the Poisson error from photon shot noise. For simulated space-based images (without cosmic rays) digitized to the typical 16 bits per pixel, application of the lossy compression followed by image-wise lossless compression yields images with only 2.4 bits per pixel, a factor of 6.7 compression. We demonstrate that this compression introduces no bias in the sky background. The compression introduces a small amount of additional digitization noise to the images, and we demonstrate a corresponding small increase in ellipticity measurement noise. The ellipticity measurement method is biased by the addition of noise, so the additional digitization noise is expected to induce a multiplicative bias on the galaxies' measured ellipticities. After correcting for this known noise-induced bias, we find a residual multiplicative ellipticity bias of m ≈ -4 × 10^-4. This bias is small when compared to the many other issues that precision weak lensing surveys must confront, and furthermore we expect it to be reduced further with better calibration of ellipticity measurement methods.
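The square-root transform described above can be sketched as a variance-stabilizing quantizer. The function names and the exact scaling are illustrative assumptions; this is not the precise codec of Bernstein et al. (2010):

```python
import numpy as np

def sqrt_compress(pixels):
    # After 2*sqrt(x), Poisson shot noise has roughly unit variance,
    # so rounding to integers discards less information than the noise.
    return np.round(2.0 * np.sqrt(np.maximum(pixels, 0.0))).astype(np.int32)

def sqrt_decompress(codes):
    # Invert the transform; residual error stays below the shot-noise level.
    return (codes.astype(np.float64) / 2.0) ** 2
```

The quantized codes take far fewer distinct values than raw 16-bit counts, which is why a subsequent lossless coder reaches a few bits per pixel.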

of transfer of energy. Typical applications in this category are motive applications, such as driving pneumatic tools and cylinders, operating instruments, pneumatic actuation and other such processes. - Active air, where the compressed air takes..., for ease. 3. MINIMIZING THE COSTS OF USAGE OF COMPRESSED AIR Within the factory, similar rules as for distribution would apply. Older factories must have their piping thoroughly checked for leakage in the pipelines. Tools such as ultrasonic leak...

Wavelet Based Color Image Compression and Mathematical... Abstract: One of the advantages of the Discrete Wavelet Transform (DWT) compared to the Fourier Transform (e
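As background to the fragment above, a one-level Haar transform (the simplest DWT, purely illustrative and not the paper's actual codec) splits a signal into averages and details:

```python
def haar_dwt(signal):
    # One level of the Haar wavelet transform: pairwise averages (coarse
    # approximation) and differences (detail coefficients).
    a = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    d = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return a, d

def haar_idwt(a, d):
    # Perfect reconstruction: each pair is (average + detail, average - detail).
    out = []
    for av, dv in zip(a, d):
        out += [av + dv, av - dv]
    return out
```

Compression comes from the detail coefficients being near zero in smooth regions, so they quantize and entropy-code cheaply.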

The onset of compression induces wrinkling in actuation devices based on EAP thin films, leading to a sudden decrease in performance and ultimately to failure. Based on classical tension field theory for thin elastic membranes, we provide a general framework for analyzing the onset of in-plane compression in membranes of electroactive polymers (EAPs). Our main result is the deduction of a (voltage-dependent) domain in stretch space that represents tensile configurations. Under the assumption of Mooney-Rivlin materials, we find that for growing values of the applied voltage the domain contracts, vanishing at a critical voltage above which the polymer is wrinkled for any stretch configuration. Our approach can be easily implemented in numerical simulations for more complex material behaviors and provides a tool for analyzing compression instability as a function of the elastic moduli.

Noisy network coding was recently proposed for the general multi-source network by Lim, Kim, El Gamal and Chung. This scheme builds on compress-forward (CF) relaying but involves three new ideas, namely no Wyner-Ziv binning, relaxed simultaneous decoding, and message repetition. In this paper, using the two-way relay channel as the underlying example, we analyze the impact of each of these ideas on the achievable rate region of relay networks. First, CF without binning but with joint decoding of both the message and the compression index can achieve a larger rate region than the original CF scheme for multi-destination relay networks. With binning and successive decoding, the compression rate at each relay is constrained by the weakest link from the relay to a destination; without binning, this constraint is relaxed. Second, simultaneous decoding of all messages over all blocks without uniquely decoding the compression indices can remove the constraints on compression rate completely, but is still subject to t...

A compressed full-text self-index occupies space close to that of the compressed text and simultaneously allows fast pattern matching and random access to the underlying text. Among the best compressed self-indexes, in theory and in practice, are several members of the FM-index family. In this paper, we describe new FM-index variants that combine nice theoretical properties, simple implementation and improved practical performance. Our main result is a new technique called fixed block compression boosting, which is a simpler and faster alternative to optimal compression boosting and implicit compression boosting used in previous FM-indexes.
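The core operation shared by all FM-index family members is backward search over the Burrows-Wheeler transform. A minimal uncompressed Python sketch follows; it illustrates the mechanism only and does not implement the paper's fixed block compression boosting:

```python
from collections import Counter

def build_fm(text):
    # Burrows-Wheeler transform via a naive suffix sort (fine for small inputs).
    t = text + "$"
    sa = sorted(range(len(t)), key=lambda i: t[i:])
    bwt = "".join(t[i - 1] for i in sa)
    # C[c]: number of characters in the text strictly smaller than c.
    counts = Counter(bwt)
    C, total = {}, 0
    for c in sorted(counts):
        C[c] = total
        total += counts[c]
    # occ[c][i]: occurrences of c in bwt[:i].
    occ = {c: [0] * (len(bwt) + 1) for c in counts}
    for i, ch in enumerate(bwt):
        for c in occ:
            occ[c][i + 1] = occ[c][i] + (c == ch)
    return C, occ, len(bwt)

def fm_count(pattern, C, occ, n):
    # Backward search: shrink the suffix-array interval [lo, hi)
    # one pattern character at a time, right to left.
    lo, hi = 0, n
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ[c][lo]
        hi = C[c] + occ[c][hi]
        if lo >= hi:
            return 0
    return hi - lo
```

Real FM-indexes replace the plain occ table with compressed rank structures; compression boosting is one way to bound their size by the text's higher-order entropy.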

Compressed Air Energy Storage (CAES) is a hybrid energy storage and generation concept that has many potential benefits, especially in a location with increasing percentages of intermittent wind energy generation. The objectives of the NYSEG Seneca CAES Project included: for Phase 1, development of a Front End Engineering Design for a 130 MW to 210 MW utility-owned facility including capital costs; project financials based on the engineering design and forecasts of energy market revenues; design of the salt cavern to be used for air storage; draft environmental permit filings; and draft NYISO interconnection filing; for Phase 2, objectives included plant construction with a target in-service date of mid-2016; and for Phase 3, objectives included commercial demonstration, testing, and two years of performance reporting. This Final Report is presented now at the end of Phase 1 because NYSEG has concluded that the economics of the project are not favorable for development in the current economic environment in New York State. The proposed site is located in NYSEG's service territory in the Town of Reading, New York, at the southern end of Seneca Lake, in New York State's Finger Lakes region. The landowner of the proposed site is Inergy, a company that owns the salt solution mining facility at this property. Inergy would have developed a new air storage cavern facility designed specifically for NYSEG for the Seneca CAES project. A large-volume natural gas storage facility owned and operated by Inergy is also located near this site and would have provided a source of high-pressure, pipeline-quality natural gas for use in the CAES plant. The site has an electrical take-away capability of 210 MW via two NYSEG 115 kV circuits located approximately one half mile from the plant site. Cooling tower make-up water would have been supplied from Seneca Lake.
NYSEGs engineering consultant WorleyParsons Group thoroughly evaluated three CAES designs and concluded that any of the designs would perform acceptably. Their general scope of work included development of detailed project construction schedules, capital cost and cash flow estimates for both CAES cycles, and development of detailed operational data, including fuel and compression energy requirements, to support dispatch modeling for the CAES cycles. The Dispatch Modeling Consultant selected for this project was Customized Energy Solutions (CES). Their general scope of work included development of wholesale electric and gas market price forecasts and development of a dispatch model specific to CAES technologies. Parsons Brinkerhoff Energy Storage Services (PBESS) was retained to develop an air storage cavern and well system design for the CAES project. Their general scope of work included development of a cavern design, solution mining plan, and air production well design, cost, and schedule estimates for the project. Detailed Front End Engineering Design (FEED) during Phase 1 of the project determined that CAES plant capital equipment costs were much greater than the $125.6- million originally estimated by EPRI for the project. The initial air storage cavern Design Basis was increased from a single five million cubic foot capacity cavern to three, five million cubic foot caverns with associated air production wells and piping. The result of this change in storage cavern Design Basis increased project capital costs significantly. In addition, the development time required to complete the three cavern system was estimated at approximately six years. This meant that the CAES plant would initially go into service with only one third of the required storage capacity and would not achieve full capability until after approximately five years of commercial operation. 
The market price forecasting and dispatch modeling completed by CES indicated that the CAES technologies would operate at only 10 to 20% capacity factors and the resulting overall project economics were not favorable for further development. As a result of all of these factors, the Phase 1 FEED developed an installe

In this study we have developed the techniques to investigate the hydrodynamic response of high-strength ceramics by mixing these powders with copper powder, preparing compacts, and performing shock compression tests on these mixtures. Hydrodynamics properties of silicon carbide, titanium diboride, and boron carbide to 30 GPa were examined by this method, and hydrodynamic compression data for these ceramics have been determined. We have concluded, however, that the measurement method is sensitive to sample preparation and uncertainties in shock wave measurements. Application of the experimental technique is difficult and further efforts are needed.

. Secondly, join us in the definition of compressed air as a system, the totality of which is comprised of the Supply Side and the Demand Side. The Supply Side is the compressors and their controls, receivers (primary storage tanks), aftercoolers, filters... and dryers, and ends at the Compressor Room door. The Demand Side is all of the distribution piping system and all of the end uses of the compressed air, including leaks. The function of the audit (be it walk-through, assessment or full audit...

Various technologies described herein pertain to compressive sensing electron microscopy. A compressive sensing electron microscope includes a multi-beam generator and a detector. The multi-beam generator emits a sequence of electron patterns over time. Each of the electron patterns can include a plurality of electron beams, where the plurality of electron beams is configured to impart a spatially varying electron density on a sample. Further, the spatially varying electron density varies between each of the electron patterns in the sequence. Moreover, the detector collects signals respectively corresponding to interactions between the sample and each of the electron patterns in the sequence.

The objective of this thesis is to evaluate a new carbon dioxide compression technology - shock compression - applied specifically to capture-enabled power plants. Global warming has increased public interest in carbon ...

Compressing Social Networks: The Minimum Logarithmic Arrangement Problem (Chad Waters). Motivation: determine the extent to which social networks can be compressed while supporting adjacency queries. Social networks are not random graphs; they exhibit distinctive local properties.

Energy use in compressed air systems typically accounts for 10% of total industrial electricity consumption. It also accounts for close to 99% of the CO2 footprint of an air compressor and approximately 80% of the life cycle costs of a...

at a Goulds Pumps manufacturing plant in Seneca Falls, New York, and is currently undergoing field testing. The compressed air system will optimize the energy efficiency of the 7 compressor system (1,850hp) at Goulds, while reducing system pressure...

Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report the first detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile and its relation to that of the electron plasma.

COMPRESSIVE sensing is a newly emerging signal-processing method [1,2] in information technologies. If some prior knowledge, such as an inherent model of the fluid process of interest, can be incorporated... noise by interaction of the fan and stator assembly [5,12]. This tonal frequency sound propagates through

Distributed Compressed Sensing in Dynamic Networks (Stacy Patterson, Department of Computer Science). ...theoretical results to develop a distributed version of IHT for dynamic networks. Evaluations show that our... throughout the network, it is desirable to perform this recovery within the network in a distributed fashion.
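The IHT algorithm referenced in the fragment can be sketched in its basic centralized form; the distributed, dynamic-network variant is the paper's contribution and is not shown here:

```python
import numpy as np

def iht(A, y, k, iters=50, step=None):
    # Iterative Hard Thresholding: take a gradient step on ||y - Ax||^2,
    # then keep only the k largest-magnitude entries of x.
    m, n = A.shape
    if step is None:
        # Conservative step size: 1 / squared spectral norm of A.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))
        x[np.argsort(np.abs(x))[: n - k]] = 0.0  # hard threshold to k terms
    return x
```

A distributed version splits the rows of A across nodes and replaces the global gradient A.T @ (y - A @ x) with a sum of local gradients exchanged over the network.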

, in addition to the traditional simple measures such as count and average. Such new measures will allow users... space. In this paper, we propose a fundamentally new class of measures, compressible measures... With years of research and development of data warehouse and OLAP technology [15], [7], [1], [34], a large

produces considerable static power during the charge accumulation stage. As both the mixer and integrator... for data compression. A random demodulator front-end, consisting of a dedicated mixer and an integrator, implements the CS front-end in [4]. In this architecture the mixer must operate at or above

It is shown that gravitational compression of a spherical body results in infinite growth of the energy of the body as its radius approaches $GM/c^2$. This gives rise to a negative mass defect and, due to an instability, to an expansion or an explosion. A rigorous proof of this statement can be obtained within General Relativity in harmonic coordinates.

contained compressed air systems.(1) Air compressors are generally driven by electric motors, often in large sizes and often operating continuously throughout the day. As a result, compressors can account for a substantial fraction of the electricity... consumption and peak demand in a given facility. A study by North Carolina A&T University found that air compressors accounted for as much as 49% of base energy consumption, and up to 58% of peak electrical demand, in the facilities they audited.(2...

The main objective of this project is to confirm on a well-instrumented prototype the theoretically derived claims of higher efficiency and coefficient of performance for geothermal heat pumps based on a new regenerative thermodynamic cycle, as compared to existing technology. In order to demonstrate the improved performance of the prototype, it will be compared to published parameters of commercially available geothermal heat pumps manufactured by US and foreign companies. Other objectives are to optimize the design parameters and to determine the economic viability of the new technology. Background (as stated in the proposal): The proposed technology closely relates to the EERE mission by improving energy efficiency, bringing clean, reliable and affordable heating and cooling to residential and commercial buildings, and reducing greenhouse gas emissions. It can provide the same amount of heating and cooling with considerably less use of electrical energy and consequently has the potential of reducing our nation's dependence on foreign oil. The theoretical basis for the proposed thermodynamic cycle was previously developed and was originally called a dynamic equilibrium method. This theory considers the dynamic equations of state of the working fluid and proposes methods for modifying the T-S trajectories of adiabatic transformation by changing dynamic properties of the gas, such as flow rate, speed and acceleration. The substance of this proposal is a thermodynamic cycle characterized by the regenerative use of the potential energy of two-phase flow expansion, which in traditional systems is lost in expansion valves. The essential new features of the process are: (1) The application of two-step throttling of the working fluid and two-step compression of its vapor phase. (2) Use of a compressor as the initial step of compression and a jet device as a second step, where throttling and compression are combined.
(3) Controlled ratio of a working fluid at the first and second step of compression. In the proposed system, the compressor compresses the vapor only to 50-60% of the final pressure, while the additional compression is provided by a jet device using internal potential energy of the working fluid flow. Therefore, the amount of mechanical energy required by a compressor is significantly reduced, resulting in the increase of efficiency (either COP or EER). The novelty of the cycle is in the equipment and in the way the multi-staging is accomplished. The anticipated result will be a new refrigeration system that requires less energy to accomplish a cooling task. The application of this technology will be for more efficient designs of: (1) Industrial chillers, (2) Refrigeration plants, (3) Heat pumps, (4) Gas Liquefaction plants, (5) Cryogenic systems.

Ground motion data has been recorded for many years at Nevada Test Site and is now stored on thousands of digital tapes. The recording format is very inefficient in terms of space on tape. This report outlines a method to compress the data onto a few hundred tapes while maintaining the accuracy of the recording and allowing restoration of any file to the original format for future use. For future digitizing a more efficient format is described and suggested.

An organic Rankine cycle system is combined with a vapor compression cycle system, with the turbine generator of the organic Rankine cycle generating the power necessary to operate the motor of the refrigerant compressor. The vapor compression cycle is applied with its evaporator cooling the inlet air into a gas turbine, and the organic Rankine cycle is applied to receive heat from a gas turbine exhaust to heat its boiler. In one embodiment, a common condenser is used for the organic Rankine cycle and the vapor compression cycle, with a common refrigerant, R-245a, being circulated within both systems. In another embodiment, the turbine-driven generator has a common shaft connected to the compressor, thereby eliminating the need for a separate motor to drive the compressor. In another embodiment, an organic Rankine cycle system is applied to an internal combustion engine to cool the fluids thereof, and the turbocharged air is cooled first by the organic Rankine cycle system and then by an air conditioner prior to passing into the intake of the engine.

We present a new class of adaptive algorithms that use compressed bitmap indexes to speed up evaluation of the range join query in relational databases. We determine the best strategy to process a join query based on a fast sub-linear time computation of the join selectivity (the ratio of the number of tuples in the result to the total number of possible tuples). In addition, we use compressed bitmaps to represent the join output compactly: the space requirement for storing the tuples representing the join of two relations is asymptotically bounded by min(h, n·c_b), where h is the number of tuple pairs in the result relation, n is the number of tuples in the smaller of the two relations, and c_b is the cardinality of the larger column being joined. We present a theoretical analysis of our algorithms, as well as experimental results on large-scale synthetic and real data sets. Our implementations are efficient, and consistently outperform well-known approaches for a range of join selectivity factors. For instance, our count-only algorithm is up to three orders of magnitude faster than the sort-merge approach, and our best bitmap index-based algorithm is 1.2x-80x faster than the sort-merge algorithm, for various query instances. We achieve these speedups by exploiting several inherent performance advantages of compressed bitmap indexes for join processing: an implicit partitioning of the attributes, space-efficiency, and tolerance of high-cardinality relations.
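The count-only idea can be illustrated with uncompressed bitmaps, using Python integers as bitsets; the paper's compressed-bitmap machinery, adaptivity, and range predicates are omitted:

```python
def build_bitmap_index(column):
    # Map each distinct value to a bitmap of the row positions holding it.
    index = {}
    for row, val in enumerate(column):
        index[val] = index.get(val, 0) | (1 << row)
    return index

def join_count(index_r, index_s):
    # Count matching tuple pairs of an equijoin without materializing them:
    # for each shared key, multiply the two bitmaps' population counts.
    total = 0
    for val, bm_r in index_r.items():
        bm_s = index_s.get(val)
        if bm_s:
            total += bin(bm_r).count("1") * bin(bm_s).count("1")
    return total
```

Because the count needs only popcounts per key, join selectivity can be estimated without ever enumerating the h output tuples.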

Consider the dynamics of a gas bubble in an inviscid, compressible liquid with surface tension. Kinematic and dynamic boundary conditions couple the bubble surface deformation dynamics with the dynamics of waves in the fluid. This system has a spherical equilibrium state, resulting from the balance of the pressure at infinity and the gas pressure within the bubble. We study the linearized dynamics about this equilibrium state in a center of mass frame: 1) We prove that the velocity potential and bubble surface perturbation satisfy point-wise in space exponential time-decay estimates. 2) The time-decay rate is governed by scattering resonances, eigenvalues of a non-selfadjoint spectral problem. These are pole singularities in the lower half plane of the analytic continuation of a resolvent operator from the upper half plane, across the real axis into the lower half plane. 3) The time-decay estimates are a consequence of resonance mode expansions for the velocity potential and bubble surface perturbations. 4) For small compressibility (Mach number, a ratio of bubble wall velocity to sound speed, $\epsilon$), this is a singular perturbation of the incompressible limit. The scattering resonances which govern the anomalously slow time-decay are Rayleigh resonances. Asymptotics, supported by high-precision numerical studies, indicate that the Rayleigh resonances which are closest to the real axis satisfy $\left|\frac{\Im \lambda_\star(\epsilon)}{\Re \lambda_\star(\epsilon)}\right| = \mathcal{O}\left(\exp(-\kappa\,\mathrm{We}\,\epsilon^{-2})\right)$, $\kappa>0$. Here $\mathrm{We}$ denotes the Weber number, a dimensionless ratio comparing inertia and surface tension. 5) To obtain the above results we prove a general result, of independent interest, estimating the Neumann to Dirichlet map for the wave equation, exterior to a sphere.

This EA will evaluate the potential environmental impacts associated with a proposal by Emera CNG, LLC. Emera's CNG plant would include facilities to receive, dehydrate, and compress gas to fill pressure vessels with an open International Organization for Standardization (ISO) container frame mounted on trailers. Emera plans to truck the trailers a distance of a quarter mile from its proposed CNG facility to a berth at the Port of Palm Beach, where the trailers will be loaded onto a roll-on/roll-off ocean-going carrier. Emera plans to receive natural gas at its planned compression facility from the Riviera Lateral, a pipeline owned and operated by Peninsula Pipeline Company. Although this would be the principal source of natural gas to Emera's CNG facility for export, during periods of maintenance at Emera's facility, or at the Port of Palm Beach, Emera may obtain CNG from other sources and/or export CNG from other general-use Florida port facilities. The proposed Emera facility will initially be capable of loading 8 million cubic feet per day (MMcf/day) of CNG into ISO containers and, after full build-out, would be capable of loading up to 25 MMcf/day. For the initial phase of the project, Emera intends to send these CNG ISO containers from Florida to Freeport, Grand Bahama Island, where the trailers will be unloaded and the CNG decompressed and injected into a pipeline for transport to electric generation plants owned and operated by Grand Bahama Power Company (GBPC). DOE is authorizing the exportation of CNG and is not providing funding or financial assistance for the Emera Project.

Secondary compression is important in peat deposits because they exist at high void ratios, exhibit high values of the compression index C_c, display the highest values of C_α/C_c among geotechnical materials, and primary consolidation is completed in weeks or months in typical field situations. Secondary compression of Middleton peat was investigated by oedometer tests on undisturbed specimens. The observed secondary compression behavior of this fibrous peat, with or without surcharging, is in accordance with the C_α/C_c concept of compressibility. Near the preconsolidation pressure, the secondary compression index, C_α, increases significantly with time. In the compression range C_α decreases only slightly with time, and for most practical purposes a constant C_α with time can be used to compute secondary settlement. The postsurcharge secondary compression index, C'_α, always increases with time. This is predicted by the C_α/C_c concept of compressibility. A secant postsurcharge secondary compression index, C''_α, is therefore introduced for a simple computation of secondary settlement.
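Assuming the standard secondary settlement relation (the symbols H for layer thickness, e_0 for initial void ratio, and the time window t_1 to t_2 are conventional geotechnical notation, not values from the abstract), a constant C_α yields a one-line computation:

```python
import math

def secondary_settlement(c_alpha, e0, H, t1, t2):
    # Secondary compression settlement for constant C_alpha:
    #   s = C_alpha / (1 + e0) * H * log10(t2 / t1)
    # c_alpha: secondary compression index, e0: initial void ratio,
    # H: layer thickness, t1..t2: time window after end of primary.
    return c_alpha / (1.0 + e0) * H * math.log10(t2 / t1)
```

For illustrative peat-like inputs (C_α = 0.05, e0 = 7, H = 2 m), a hundredfold increase in time gives 25 mm of secondary settlement.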

A few individuals have tried to broaden the understanding of specific and salient pulsed-power topics. One such attempt is this documentation of a workshop on magnetic switching as it applies primarily to pulse compression (power transformation), affording a truly international perspective by its participants under the initiative and leadership of Hugh Kirbie and Mark Newton of the Lawrence Livermore National Laboratory (LLNL) and supported by other interested organizations. During the course of the Workshop at Granlibakken, a great deal of information was amassed and a keen insight into both the problems and opportunities as to the use of this switching approach was developed. The segmented workshop format proved ideal for identifying key aspects affecting optimum performance in a variety of applications. Individual groups of experts addressed network and system modeling, magnetic materials, power conditioning, core cooling and dielectrics, and finally circuits and application. At the end, they came together to consolidate their input and formulate the workshop's conclusions, identifying roadblocks or suggesting research projects, particularly as they apply to magnetic switching's trump card--its high-average-power-handling capability (at least on a burst-mode basis). The workshop was especially productive both in the quality and quantity of information transfer in an environment conducive to a free and open exchange of ideas. We will not delve into the organization proper of this meeting, rather we wish to commend to the interested reader this volume, which provides the definitive and most up-to-date compilation on the subject of magnetic pulse compression from underlying principles to current state of the art as well as the prognosis for the future of magnetic pulse compression as a consensus of the workshop's organizers and participants.

A homogeneous charge compression ignition engine is set up by first identifying combinations of compression ratio and exhaust gas percentages for each speed and load across the engine's operating range. These identified ratios and exhaust gas percentages can then be converted into geometric compression ratio controller settings and exhaust gas recirculation rate controller settings that are mapped against speed and load, and made available to the electronic
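The mapping of controller settings against speed and load can be sketched as a lookup table; the class name, breakpoints, and table values below are hypothetical illustrations, not figures from the patent:

```python
import bisect

class HcciSettingsMap:
    """Hypothetical 2-D lookup of HCCI controller settings keyed by
    speed and load breakpoints (nearest cell at or below, for brevity)."""

    def __init__(self, speeds, loads, cr_table, egr_table):
        self.speeds, self.loads = speeds, loads          # sorted breakpoints
        self.cr_table, self.egr_table = cr_table, egr_table

    def lookup(self, speed, load):
        # Pick the breakpoint cell at or below the operating point.
        i = max(bisect.bisect_right(self.speeds, speed) - 1, 0)
        j = max(bisect.bisect_right(self.loads, load) - 1, 0)
        return self.cr_table[i][j], self.egr_table[i][j]
```

A production controller would interpolate between cells rather than snap to one, but the table-against-(speed, load) structure is the same.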

A method and apparatus for operating a compression ignition engine having a cylinder wall, a piston, and a head defining a combustion chamber. The method and apparatus includes delivering fuel substantially uniformly into the combustion chamber, the fuel being dispersed throughout the combustion chamber and spaced from the cylinder wall, delivering an oxidant into the combustion chamber sufficient to support combustion at a first predetermined combustion duration, and delivering a diluent into the combustion chamber sufficient to change the first predetermined combustion duration to a second predetermined combustion duration different from the first predetermined combustion duration.

The deformation of the coke cake and load on the side wall during pushing were studied using an electric furnace equipped with a movable wall. Coke cake was found to deform in three stages under compressive forces. The coke cake was shortened in the pushing direction in the cake deformation stage, and load was generated on the side walls in the high wall load stage. Secondary cracks in the coke cake were found to prevent load transmission on the wall. The maximum load transmission rate was controlled by adjusting the maximum fluidity and mean reflectance of the blended coal.

for processing, to a "computational signal processing" (CSP) paradigm, where analog signals are converted... nonlinear techniques. 1.1. Compressive sensing. CSP builds upon a core tenet of signal processing: a decorrelating transform to compact a correlated signal's energy into just a few essential coefficients.

Advances in DNA sequencing technology will soon result in databases of thousands of genomes. Within a species, individuals' genomes are almost exact copies of each other; e.g., any two human genomes are 99.9% the same. Relative Lempel-Ziv (RLZ) compression takes advantage of this property: it stores the first genome uncompressed or as an FM-index, then compresses the other genomes with a variant of LZ77 that copies phrases only from the first genome. RLZ achieves good compression and supports fast random access; in this paper we show how to support fast search as well, thus obtaining an efficient compressed self-index.
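A greedy RLZ parse can be sketched directly; this quadratic-time version is for clarity only, since real implementations find longest matches with a suffix structure or FM-index over the reference:

```python
def rlz_parse(reference, target):
    # Encode target as (pos, len) phrases copied from the reference,
    # falling back to single-character literals for unseen symbols.
    phrases, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        for j in range(len(reference)):
            l = 0
            while (j + l < len(reference) and i + l < len(target)
                   and reference[j + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len == 0:
            phrases.append(("lit", target[i]))
            i += 1
        else:
            phrases.append((best_pos, best_len))
            i += best_len
    return phrases

def rlz_decode(reference, phrases):
    out = []
    for p in phrases:
        if p[0] == "lit":
            out.append(p[1])
        else:
            pos, length = p
            out.append(reference[pos:pos + length])
    return "".join(out)
```

Because any two genomes of a species are nearly identical, the parse of each additional genome collapses to a handful of long (pos, len) phrases, which is what makes random access and search over the compressed collection fast.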

Saturable inductor and transformer for magnetic compression of an electronic pulse, using a continuous electrical conductor looped several times around a tightly packed core of saturable inductor material.

A mica-based compressive seal has been developed exhibiting superior thermal cycle stability when compared to other compressive seals known in the art. The seal is composed of compliant glass or metal interlayers and a sealing (gasket) member layer composed of mica that is infiltrated with a glass-forming material, which effectively reduces leaks within the seal. The compressive seal shows approximately a 100-fold reduction in leak rates compared with previously developed hybrid seals after 10 to about 40 thermal cycles under a compressive stress of 50 psi to 100 psi at temperatures in the range from 600°C to about 850°C.

Boiled down to its essentials, the grant's purpose was to develop and demonstrate the viability of compressed air energy storage (CAES) for use in renewable energy development. While everyone agrees that energy storage is the key component for enabling widespread adoption of renewable energy sources, a viable, scalable technology has been missing. The Department of Energy has focused on expanded battery research and improved forecasting, and the utilities have deployed renewable energy resources only to the extent of satisfying Renewable Portfolio Standards. The lack of dispatchability of solar and wind-based electricity generation has drastically increased the cost of operating with these resources. It is now clear that energy storage coupled with accurate solar and wind forecasting is the only combination that can make renewable energy resources dispatchable. Conventional batteries scale linearly in size, so price becomes a barrier for large systems. Flow batteries scale sub-linearly and promise to be useful if their performance can be shown to provide sufficient support for solar and wind-based electricity generation resources. Compressed air energy storage provides the most desirable answer in terms of scalability and performance in all areas except efficiency. With the support of the DOE, Tucson Electric Power, and Science Foundation Arizona, the Arizona Research Institute for Solar Energy (AzRISE) at the University of Arizona has had the opportunity to investigate CAES as a potential energy storage resource.

This was probably the largest pipeline project in the US last year, and the largest in Texas in the last decade. The new compressor station is a key element in this project. TECO, its servicing dealer, and the compression packager worked closely throughout the planning and installation stages of the project. To handle the amount of gas required, TECO selected the GEMINI F604-1 compressor, a four-throw, single-stage unit with a six-inch stroke manufactured by Weatherford Enterra Compression Co. (WECC) in Corpus Christi, TX. TECO also chose WECC to package the compressors. Responsibility for ongoing support of the units will be shared among TECO, the service dealer, and the packager. TECO is sending people to be trained by WECC, and because the G3600 family of engines is still relatively new, both the Caterpillar dealer and WECC sent people for advanced training at Caterpillar facilities in Peoria, IL. As part of its service commitment to TECO, the servicing dealer drew up a detailed product support plan encompassing five concerns: training; tooling; parts support; service support; and commissioning.

Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue with the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples is available. We apply the compressive sensing principle to the somewhat related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.

Supported by this award, the PI and his research group at the University of California, Irvine (UCI) have carried out computational and theoretical studies of instability, turbulence, and transport in laboratory and space plasmas. Several massively parallel, gyrokinetic particle simulation codes have been developed to study electromagnetic turbulence in space and laboratory plasmas. In space plasma projects, the simulation codes have been successfully applied to study the spectral cascade and plasma heating in kinetic Alfven wave turbulence, the linear and nonlinear properties of compressible modes including the mirror instability and the drift compressional mode, and the stability of current sheet instabilities with finite guide field in the context of collisionless magnetic reconnection. The research results have been published in 25 journal papers and presented at many national and international conferences. Reprints of publications, source codes, and other research-related information are also available to the general public on the PI's webpage (http://phoenix.ps.uci.edu/zlin/). Two PhD theses in space plasma physics are highlighted in this report.

Mesh Geometry Compression for Mobile Graphics. Jongseok Lee (POSTECH, thirdeye@postech.ac.kr), Sungyul... This paper presents a compression scheme for mesh geometry, which is suitable for mobile graphics. For mobile graphics, API standards such as OpenGL ES and JSR-184 have been proposed [8]. The main...

A Paradigm for Time-Periodic Sound Wave Propagation in the Compressible Euler Equations. Blake... ...consistent with time-periodic sound wave propagation in the 3 × 3 nonlinear compressible Euler equations; a description of shock-free waves that propagate through an oscillating entropy field without breaking or dis...

Lossless Wavelet Based Image Compression with Adaptive 2D Decomposition. Manfred Kopp, Technical University of Vienna (kopp@ieee.org, http://www.cg.tuwien.ac.at/~kopp/). Abstract: 2D wavelets are usually generated from 1D wavelets... wavelet functions based on the compression of the coefficients, but needs only the same number of 1D...

Recovery of partially occluded objects by applying compressive Fresnel holography. Yair Rivenson et al.; posted March 2, 2012 (Doc. ID 161160); published May 15, 2012. ...A compressive Fresnel holography... This may be regarded as a subsampling of the object's Fresnel field; hence the motivation...

Homogeneous Charge Compression Ignition: Formulation Effect of a Diesel Fuel on the Initiation and the Combustion. Potential Olefin Impact in a Diesel Base Fuel. D. Alseda (1,2), X. Montagne (1), and P. Dagaut (2).

Simplified Compression of Redundancy-Free Trellis Sections in a Turbo Decoder. Emmanuel Boutillon. ...that for an M-state turbo decoder, among the L compressed trellis stages, only m = 3 or even m = 2 are necessary... turbo-code and/or to reduce its power consumption. I. INTRODUCTION. The quality of an error control code...

Typically the same transforms, such as the 2-D Discrete Cosine Transform (DCT), are used to compress both images in image compression and prediction residuals in video compression. However, these two signals have different ...

As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

The pore space compressibility of a rock provides a robust, model-independent descriptor of porosity and pore fluid effects on effective moduli. The pore space compressibility is also the direct physical link between the dry and fluid-saturated moduli, and is therefore the basis of Gassmann`s equation for fluid substitution. For a fixed porosity, an increase in pore space compressibility increase the sensitivity of the modulus to fluid substitution. Two simple techniques, based on pore compressibility, are presented for graphically applying Gassmann`s relation for fluid substitution. In the first method, the pore compressibility is simply reweighted with a factor that depends only on the ratio of fluid to mineral bulk modulus. In the second technique, the rock moduli are rescaled using the Reuss average, which again depends only on the fluid and mineral moduli.
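
The two graphical techniques are specific to the paper, but the underlying Gassmann relation itself is standard and can be sketched as follows (moduli in GPa; quartz/water values chosen for illustration, not taken from the source):

```python
def gassmann_k_sat(k_dry, k_min, k_fl, phi):
    """Standard Gassmann fluid substitution: saturated bulk modulus from
    the dry-rock, mineral, and fluid bulk moduli and porosity phi."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2
    return k_dry + num / den

# Illustrative values in GPa: quartz mineral, water as pore fluid.
k_min, k_dry, k_fl, phi = 36.6, 12.0, 2.25, 0.20
k_sat = gassmann_k_sat(k_dry, k_min, k_fl, phi)
print(f"K_dry = {k_dry} GPa -> K_sat = {k_sat:.2f} GPa")
```

The saturated modulus always exceeds the dry modulus, and the size of the jump reflects the pore-space-compressibility sensitivity the abstract describes.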

Knowledge of the optical properties of materials at high pressure and high temperature is needed for EOS research. Ellipsometry measures the change in the polarization of a probe beam reflected from a surface. From the change in polarization, the real and imaginary parts of the time-dependent complex index of refraction can be extracted. From the measured optical properties, fundamental physical properties of the material, such as emissivity, phase transitions, and electrical conductivity, can be extracted. A dynamic ellipsometry measurement system with nanosecond resolution was built in order to measure all four Stokes parameters. A gas gun was used to accelerate the impact flyer. Our experiments concentrated on the optical properties of 1020 steel targets over an impact pressure range of 40-250 kbar. Although there are intrinsic difficulties with dynamic ellipsometric measurements, distinct changes were observed for 1020 steel under shock compression above 130 kbar, corresponding to the α→ε phase transition.

We study the evolution of a reactive field advected by a one-dimensional compressible velocity field and subject to an ignition-type nonlinearity. In the limit of small molecular diffusivity the problem can be described by a spatially discretized system, and this allows for an efficient numerical simulation. If the initial field profile is supported in a region of size l < lc one has quenching, i.e., flame extinction, where lc is a characteristic length-scale depending on the system parameters (reacting time, molecular diffusivity and velocity field). We derive an expression for lc in terms of these parameters and relate our results to those obtained by other authors for different flow settings.

This report provides a review and an analysis of potential environmental justice areas that could be affected by the New York State Electric & Gas (NYSEG) compressed air energy storage (CAES) project; it identifies existing environmental burdens in the area and evaluates the additional burden of any significant adverse environmental impact. The review assesses the socioeconomic and demographic conditions of the area surrounding the proposed CAES facility in Schuyler County, New York. Schuyler County is one of 62 counties in New York. Schuyler County's 2010 population of 18,343 makes it one of the least populated counties in the State (U.S. Census Bureau, 2010). This report was prepared for WorleyParsons by ERM and describes the study area investigated, the methods and criteria used to evaluate this area, and the findings and conclusions from the evaluation.

Recent work has shown that quantum simulation is a valuable tool for learning empirical models for quantum systems. We build upon these results by showing that small quantum simulators can be used to characterize and learn control models for larger devices, for wide classes of physically realistic Hamiltonians. This leads to a new application for small quantum computers: characterizing and controlling larger quantum computers. Our protocol achieves this by using Bayesian inference in concert with Lieb-Robinson bounds and interactive quantum learning methods to achieve compressed simulations for characterization. Whereas Fisher information analysis shows that current methods which employ short-time evolution are suboptimal, interactive quantum learning allows us to overcome this limitation. We illustrate the efficiency of our bootstrapping protocol by showing numerically that an 8-qubit Ising model simulator can be used to calibrate and control a 50-qubit Ising simulator while using only about 750 kilobits of experimental data.

Shock compression of very low density micro-cellular materials allows entirely new regimes of hot fluid states to be investigated experimentally. Using a two-stage light-gas gun to generate strong shocks, temperatures of several eV are readily achieved at densities of roughly 0.5-1 g/cm³ in large, uniform volumes. The conditions in these hot, expanded fluids are readily found using the Hugoniot jump conditions. We will briefly describe the basic methodology for sample preparation and experimental measurement of shock velocities. We present data for several materials over a range of initial densities. This paper will explore the applications of these methods to investigations of equations of state and phase diagrams, spectroscopy, and plasma physics. Finally, we discuss the need for future work on these and related low-density materials.

HEATS Project: UTRC is developing a new climate-control system for EVs that uses a hybrid vapor compression adsorption system with thermal energy storage. The targeted, closed system will use energy during the battery-charging step to recharge the thermal storage, and it will use minimal power to provide cooling or heating to the cabin during a drive cycle. The team will use a unique approach of absorbing a refrigerant on a metal salt, which will create a lightweight, high-energy-density refrigerant. This unique working pair can operate indefinitely as a traditional vapor compression heat pump using electrical energy, if desired. The project will deliver a hot-and-cold battery that provides comfort to the passengers using minimal power, substantially extending the driving range of EVs.

Using fast lattice Monte Carlo (FLMC) simulations [Q. Wang, Soft Matter 5, 4564 (2009)] and the corresponding lattice self-consistent field (LSCF) calculations, we studied a model system of grafted homopolymers, in both the brush and mushroom regimes, in an explicit solvent compressed by an impenetrable surface. Direct comparisons between FLMC and LSCF results, both of which are based on the same Hamiltonian (thus without any parameter-fitting between them), unambiguously and quantitatively reveal the fluctuations/correlations neglected by the latter. We studied both the structure (including the canonical-ensemble averages of the height and the mean-square end-to-end distances of grafted polymers) and the thermodynamics (including the ensemble-averaged reduced energy density and the related internal energy per chain, the differences in the Helmholtz free energy and entropy per chain from the uncompressed state, and the pressure due to compression) of the system. In particular, we generalized the method for calculating pressure in lattice Monte Carlo simulations proposed by Dickman [J. Chem. Phys. 87, 2246 (1987)], and combined it with the Wang-Landau optimized ensemble sampling [S. Trebst, D. A. Huse, and M. Troyer, Phys. Rev. E 70, 046701 (2004)] to efficiently and accurately calculate the free energy difference and the pressure due to compression. While we mainly examined the effects of the degree of compression, the distance between nearest-neighbor grafting points, the reduced number of chains grafted at each grafting point, and the system fluctuations/correlations in an athermal solvent, the θ-solvent is also considered in some cases.

Polymer-Electrolyte-Fuel-Cells (PEFCs) are promising candidates for powering vehicles and portable devices using renewable-energy sources. The core of a PEFC is the solid electrolyte membrane that conducts protons from anode to cathode, where water is generated. The conductivity of the membrane, however, depends on the water content of the membrane, which is strongly related to the cell operating conditions. The membrane and other cell components are typically compressed to minimize various contact resistances. Moreover, the swelling of the somewhat constrained membrane in the cell due to humidity changes generates additional compressive stresses in the membrane. These external stresses are balanced by the internal swelling pressure of the membrane and change the swelling equilibrium. It was shown using a fuel-cell setup that compression could reduce the water content of the membrane or alter the cell resistance. Nevertheless, the effect of compression on the membrane's transport properties is yet to be understood, as are its implications for the structure-function relationships of the membrane. We previously studied, both experimentally and theoretically, how compression affects the water content of the membrane [6]. However, more information is required to gain a fundamental understanding of the compression effects. In this talk, we present the results of our investigation of the in-situ conductivity of the membrane as a function of humidity and cell compression pressure. Moreover, to better understand the morphology of the compressed membrane, small-angle X-ray scattering (SAXS) experiments were performed. The conductivity data are then analyzed by investigating the size of the water domains of the compressed membrane determined from the SAXS measurements.

The measurement of the isothermal compression of solid nitromethane to 15 GPa at 298 K using high pressure x-ray diffraction techniques is described. The compression data are fit to a model from which bulk moduli are calculated. Most interesting are the linear compression data. The a axis direction, along which the C--N bonds are aligned, shows little increase in repulsion with increasing pressure above 5 GPa. This indicates that the nitro and methyl groups of neighboring molecules may be interacting.
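
The abstract does not name the model used for the fit; a commonly used choice for such pressure-volume compression data is the third-order Birch-Murnaghan equation of state, sketched here with purely illustrative parameters:

```python
def birch_murnaghan_p(v, v0, k0, k0p):
    """Third-order Birch-Murnaghan EOS: pressure P(V) in terms of the
    zero-pressure volume v0, bulk modulus k0, and its pressure
    derivative k0p."""
    x = (v0 / v) ** (1.0 / 3.0)
    return 1.5 * k0 * (x**7 - x**5) * (1.0 + 0.75 * (k0p - 4.0) * (x**2 - 1.0))

# Illustrative parameters (k0 in GPa, volume in arbitrary units).
v0, k0, k0p = 100.0, 10.0, 6.0
p1 = birch_murnaghan_p(0.99 * v0, v0, k0, k0p)
print(f"P at 1% compression: {p1:.4f} GPa")   # close to k0 * 0.01
```

Fitting such a form to measured P-V points yields the bulk modulus and its pressure derivative; in the small-compression limit the EOS reduces to P ≈ K0 ΔV/V0, which the printed value confirms.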

[List of figures: MTS and Anthony's models; mean resilient force on the model bale over time; ANSYS output of the stress and strain states in a compressed cotton bale; nodal forces.] ...compressed bales is important so that it can be handled in the channels of trade (McCaskill and Anthony, 1977). Anthony et al. (1994) reported that the packaging system consists of a battery condenser, lint slide, lint feeder, tramper, bale press, and bale...

We introduce the first grammar-compressed representation of a sequence that supports searches in time that depends only logarithmically on the size of the grammar. Given a text $T[1..u]$ that is represented by a (context-free) grammar of $n$ (terminal and nonterminal) symbols and size $N$ (measured as the sum of the lengths of the right-hand sides of the rules), a basic grammar-based representation of $T$ takes $N\lg n$ bits of space. Our representation requires $2N\lg n + N\lg u + \epsilon\, n\lg n + o(N\lg n)$ bits of space, for any $0...

The current article deals with analytical bunch compression studies for FLUTE whose results are compared to simulations. FLUTE is a linac-based electron accelerator with a design energy of approximately 40 MeV currently being constructed at the Karlsruhe Institute of Technology. One of the goals of FLUTE is to generate electron bunches with their length lying in the femtosecond regime. In the first phase this will be accomplished using a magnetic bunch compressor. This compressor forms the subject of the studies presented. The paper is divided into three parts. The first part deals with pure geometric investigations of the bunch compressor where space charge effects and the back reaction of bunches with coherent synchrotron radiation (CSR) are neglected. The second part is dedicated to the treatment of space charge effects and the third part gives some analytical results on the emission of CSR. The upshot is that the results of the first and the third part agree quite well with what is obtained from simulatio...

Annihilation processes, where the reacting particles are influenced by some external advective field, are among the simplest examples of nonlinear statistical systems. Processes of this type can be observed in miscellaneous chemical, biological, or physical systems. In low space dimensions the usual description by means of a kinetic rate equation is not sufficient, and the effect of density fluctuations must be taken into account. Using the perturbative renormalization group we study the influence of a random velocity field on the kinetics of the single-species annihilation reaction at and below its critical dimension $d_c = 2$. The advecting velocity field is modelled by a Gaussian variable, self-similar in space and finitely correlated in time (the Antonov-Kraichnan model). The effect of the compressibility of the velocity field is taken into account, and the model is analyzed near its critical dimension by means of a three-parameter expansion in $\epsilon$, $\Delta$ and $\eta$. Here $\epsilon$ is the deviation from the Kolmogorov scaling, $\Delta$ is the deviation from the (critical) space dimension 2, and $\eta$ is the deviation from the parabolic dispersion law. Depending on the values of these exponents and the value of the compressibility parameter $\alpha$, the studied model can exhibit various asymptotic (long-time) regimes corresponding to the infrared (IR) fixed points of the renormalization group. The possible regimes are summarized, and the decay rates for the mean particle number are calculated in the leading order of perturbation theory.

Homogeneous-charge, compression-ignition (HCCI) combustion is a new method of burning fuel in internal combustion (IC) engines. In an HCCI engine, the fuel and air are premixed prior to combustion, as in a spark-ignition ...

In this thesis we develop models for sentence compression. This text rewriting task has recently attracted a lot of attention due to its relevance for applications (e.g., summarisation) and simple formulation by means ...

When enterprises focus their attention on their compressed air system, it is mainly on the compressors. To address system pressure problems, the enterprise should first consider whether the existing array of compressors... because VSD compressors are able to...

We investigate the statistical properties of Lagrangian tracers transported by a time-correlated compressible renewing flow. We show that the preferential sampling of the phase space performed by tracers yields significant differences between the Lagrangian statistics and its Eulerian counterpart. In particular, the effective compressibility experienced by tracers has a non-trivial dependence on the time correlation of the flow. We examine the consequence of this phenomenon on the clustering of tracers, focusing on the transition from the weak- to the strong-clustering regime. We find that the critical compressibility at which the transition occurs is minimum when the time correlation of the flow is of the order of the typical eddy turnover time. Further, we demonstrate that the clustering properties in time-correlated compressible flows are non-universal and are strongly influenced by the spatio-temporal structure of the velocity field.

Gasoline-ethanol blends were explored as a strategy to mitigate engine knock, a phenomenon in spark-ignition engine combustion in which a portion of the end gas is compressed to the point of spontaneous auto-ignition. This ...

Selecting proper transforms for video compression has been based on the rate-distortion criterion. Transforms that appear reasonable are incorporated into a video coding system and their performance is evaluated. This ...

A laser modulator (10) having a low voltage assembly (12) with a plurality of low voltage modules (14) with first stage magnetic compression circuits (20) and magnetic assist inductors (28) with a common core (91), such that timing of the first stage magnetic switches (30b) is thereby synchronized. A bipolar second stage of magnetic compression (42) is coupled to the low voltage modules (14) through a bipolar pulse transformer (36) and a third stage of magnetic compression (44) is directly coupled to the second stage of magnetic compression (42). The low voltage assembly (12) includes pressurized boxes (117) for improving voltage standoff between the primary winding assemblies (34) and secondary winding (40) contained therein.

We describe a new outphasing energy recovery amplifier (OPERA) which replaces the isolation resistor in the conventional matched combiner with a resistance-compressed rectifier for improved efficiency. The rectifier recovers ...

The compressibility of nuclear matter has received significant attention in the last decade and a variety of approaches have been employed to extract this fundamental property of matter. Recently, significant differences have emerged between the results of relativistic and non-relativistic calculations of breathing mode giant monopole resonance (GMR). This is due to a lack of understanding of the dynamics of GMR and of its exact relationship to the compression modulus of the infinite nuclear matter. Here, I present an alternative approach based upon nuclear shell effects. The shell effects are known to manifest experimentally in terms of particle-separation energies with an exceedingly high precision. Within the framework of the non-relativistic density-dependent Skyrme theory, it is shown that the compressibility of nuclear matter has a significant influence on shell effects in nuclei. It is shown that 2-neutron separation energies and hence the empirical shell effects can be used to constrain the compressibility of nuclear matter.

We study the representation, approximation, and compression of functions in M dimensions that consist of constant or smooth regions separated by smooth (M-1)-dimensional discontinuities. Examples include images containing ...

An error correction and grid adaptive method is presented for improving the accuracy of functional outputs of compressible flow simulations. The procedure is based on an adjoint formulation in which the estimated error in ...

This paper considers mission design strategies for mobile robots whose task is to perform spatial sampling of a static environmental field, in the framework of compressive sensing. According to this theory, we can reconstruct ...

...of the thermal compression process. However, many applications either do not have adequate waste heat, or the waste heat is difficult to access economically. In these cases, traditional fuels can be used to provide...

THE MANY FACES OF A COMPRESSED AIR AUDIT. Air Power USA, Inc., PO Box 292, Pickerington, OH 43147; 740 862-4112; 740 862-8464 (fax); www.airpowerusainc.com. Industrial Energy Technology Conference, May 9-12, 2006, New Orleans, LA. ...with independent compressed air consultant organizations. Energy Audits Supplied by DOE-Sponsored University Students: university students, led by their instructors, perform these studies as part of their training. Often the reports are accurate about... Today...

...of a liquid phase. Gas compressibility factors are used in the gas material balance equations. These equations are used to estimate initial gas in place and reserves. Normally gas compressibility factors are used when a reservoir fluid depletion... 0 on a material balance plot. Dake's equation has limited usefulness because the initial gas in place G is an unknown in the field. Also note that Gp at the abandonment pressure is the ultimate reserves of the reservoir. We can also derive...
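
The straight-line material balance behind such plots, p/z = (p/z)_i (1 - Gp/G), lets the initial gas in place G be read off as the Gp-axis intercept. A minimal sketch with synthetic numbers (the data and units are illustrative, not from the source):

```python
import numpy as np

def initial_gas_in_place(gp, p_over_z):
    """Fit the straight-line gas material balance
    p/z = (p/z)_i * (1 - Gp/G) to depletion data and return the
    estimated initial gas in place G: the Gp value where the fitted
    p/z line extrapolates to zero."""
    slope, intercept = np.polyfit(gp, p_over_z, 1)
    return -intercept / slope

# Synthetic depletion data: (p/z)_i = 5000 psia, true G = 100 Bcf.
gp = np.array([0.0, 10.0, 25.0, 40.0])   # cumulative gas produced, Bcf
pz = 5000.0 * (1.0 - gp / 100.0)         # lies exactly on the line
g_est = initial_gas_in_place(gp, pz)
print(f"estimated initial gas in place G = {g_est:.1f} Bcf")
```

With real field data the points scatter about the line, and G is only as good as the fit; here the synthetic points recover G exactly.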

The program undertaken by this contract is intended to quantify the current state of knowledge in American industry concerning the energy efficient design and operation of industrial compressed air systems and system components. Since there is no standard reference for designers and operators of compressed air systems which provides guidelines for maximizing the energy efficiency of these systems, a major product of this contract was the preparation of a guidebook for this purpose.

There is a significant need to protect the nation's energy infrastructure from malicious actors using cyber methods. Supervisory, Control, and Data Acquisition (SCADA) systems may be vulnerable due to insufficient security implemented during the design and deployment of these control systems. This is particularly true of older legacy SCADA systems that are still commonly in use. The purpose of INL's research on the SCADA Protocol Anomaly Detection Utilizing Compression (SPADUC) project was to determine if and how data compression techniques could be used to identify and protect SCADA systems from cyber attacks. Initially, the concept was centered on how to train a compression algorithm to recognize normal control system traffic versus hostile network traffic. Because large portions of the TCP/IP message traffic (called packets) are repetitive, the concept of using compression techniques to differentiate non-normal traffic was proposed. In this manner, malicious SCADA traffic could be identified at the packet level before its payload is delivered. Previous research has shown that SCADA network traffic has traits desirable for compression analysis. This work investigated three different approaches to identifying malicious SCADA network traffic using compression techniques. The preliminary analyses and results presented herein clearly differentiate normal from malicious network traffic at the packet level, at a very high confidence level, for the conditions tested. Additionally, the master dictionary approach used in this research appears to provide a meaningful way to categorize and compare packets within a communication channel.

A system and method is disclosed for multimedia encryption. Within the system of the present invention, a data compression module receives and compresses a media signal into a compressed data stream. A data acquisition module receives and selects a set of data from the compressed data stream. And, a hashing module receives and hashes the set of data into a keyword. The method of the present invention includes the steps of compressing a media signal into a compressed data stream; selecting a set of data from the compressed data stream; and hashing the set of data into a keyword.
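
The three-module pipeline can be sketched in a few lines. The concrete choices below (zlib for the compressor, a first-64-bytes selection policy, SHA-256 for the hash) are stand-ins; the patent leaves the specific compressor, selection rule, and hash open:

```python
import hashlib
import zlib

def media_keyword(media_signal: bytes, select: int = 64) -> str:
    """Sketch of the three modules: compression, data acquisition
    (selection), and hashing. zlib, the first-`select`-bytes policy,
    and SHA-256 are illustrative stand-ins."""
    compressed = zlib.compress(media_signal)      # data compression module
    selected = compressed[:select]                # data acquisition module
    return hashlib.sha256(selected).hexdigest()   # hashing module -> keyword

keyword = media_keyword(b"example media signal " * 100)
print(keyword)
```

The resulting keyword is deterministic for a given signal, so it can serve as an identifier derived from the compressed stream rather than from the raw media.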

A method of storing information from filled-in form documents comprises extracting the unique user information in the foreground from the document form information in the background. The contrast of the pixels is enhanced by a gamma correction on an image array, and then the color value of each pixel is enhanced. The color pixels lying on edges of an image are converted to black, and an adjacent pixel is converted to white. The distance between black pixels and other pixels in the array is determined, and a filled-edge array of pixels is created. User information is then converted to a two-color format by creating a first two-color image of the scanned image, converting all pixels darker than a threshold color value to black and all pixels lighter than the threshold color value to white. Then a second two-color image of the filled-edge file is generated by converting all pixels darker than a second threshold value to black and all pixels lighter than the second threshold color value to white. The first two-color image and the second two-color image are then combined and filtered to smooth the edges of the image. The image may be compressed with a unique Huffman coding table for that image. The image file is also decimated to create a decimated-image file, which can later be interpolated back to produce a reconstructed image file using a bilinear interpolation kernel. 10 figs.
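
The thresholding step that produces a two-color image can be sketched as follows (the toy 2x3 grayscale array and threshold value are illustrative):

```python
def to_two_color(gray, threshold):
    """Thresholding as described above: pixels darker than the threshold
    become black (0); all lighter pixels become white (255)."""
    return [[0 if px < threshold else 255 for px in row] for row in gray]

# Toy 2x3 grayscale image (0 = black, 255 = white).
img = [[30, 200, 90],
       [250, 10, 128]]
bw = to_two_color(img, 128)
print(bw)  # [[0, 255, 0], [255, 0, 255]]
```

The same function with a second threshold yields the second two-color image of the filled-edge file.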

year in compressed air energy costs. Their system included three older oil-flooded screw compressors: two 25-horsepower (HP) units and one 75-HP unit. Although 100 HP of compressor capacity was running continuously, it was having trouble providing enough air... such as improperly installed or leaking distribution lines, outdated or inadequate controls, and excess compressor capacity. But efficiency improvements and speed controls could save over [7%. In the Pacific Northwest (PNW) compressed air systems consumed 4...

A method of recovering energy from a cool compressed gas, compressed liquid, vapor, or supercritical fluid is disclosed which includes incrementally expanding the compressed gas, compressed liquid, vapor, or supercritical fluid through a plurality of expansion engines and heating the gas, vapor, compressed liquid, or supercritical fluid entering at least one of the expansion engines with a low quality heat source. Expansion engines such as turbines and multiple expansions with heating are disclosed.

The Department of Energys Office of Electricity Delivery and Energy Reliability (DOE-OE) has a critical mission to secure the energy infrastructure from cyber attack. Through DOE-OEs Cybersecurity for Energy Delivery Systems (CEDS) program, the Idaho National Laboratory (INL) has developed a method to detect malicious traffic on Supervisory, Control, and Data Acquisition (SCADA) network using a data compression technique. SCADA network traffic is often repetitive with only minor differences between packets. Research performed at the INL showed that SCADA network traffic has traits desirable for using compression analysis to identify abnormal network traffic. An open source implementation of a Lempel-Ziv-Welch (LZW) lossless data compression algorithm was used to compress and analyze surrogate SCADA traffic. Infected SCADA traffic was found to have statistically significant differences in compression when compared against normal SCADA traffic at the packet level. The initial analyses and results are clearly able to identify malicious network traffic from normal traffic at the packet level with a very high confidence level across multiple ports and traffic streams. Statistical differentiation between infected and normal traffic level was possible using a modified data compression technique at the 99% probability level for all data analyzed. However, the conditions tested were rather limited in scope and need to be expanded into more realistic simulations of hacking events using techniques and approaches that are better representative of a real-world attack on a SCADA system. Nonetheless, the use of compression techniques to identify malicious traffic on SCADA networks in real time appears to have significant merit for infrastructure protection.

The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique comprises a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly well suited to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Taken together, these techniques may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
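
A toy version of the word-aligned idea can be written down directly. This sketch assumes 32-bit words (so 31 payload bits per group) and emits symbolic "fill" and "literal" words rather than packed machine words.

```python
def wah_encode(bits, word_size=32):
    g = word_size - 1  # payload bits carried per group
    words = []
    for i in range(0, len(bits), g):
        grp = bits[i:i + g]
        val = int("".join(map(str, grp)).ljust(g, "0"), 2)
        full_run = len(grp) == g and val in (0, (1 << g) - 1)
        if full_run:
            fill = 1 if val else 0
            # Extend a preceding fill word of the same bit value.
            if words and words[-1][0] == "fill" and words[-1][1] == fill:
                words[-1] = ("fill", fill, words[-1][2] + 1)
            else:
                words.append(("fill", fill, 1))
        else:
            words.append(("literal", val))
    return words

# 93 zero bits collapse into one fill word covering three groups;
# the mixed tail becomes a single literal word.
encoded = wah_encode([0] * 93 + [1, 0, 1])
```

In a real WAH index the fill and literal words are packed into actual 32-bit integers, with the most significant bit distinguishing the two kinds; bitwise AND/OR between two compressed bitmaps can then proceed word by word without decompression, which is what makes the queried operations fast.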

While conventional low-pressure LH2 dewars have existed for decades, advanced methods of cryogenic hydrogen storage have recently been developed. These advanced methods are cryo-compression and cryo-adsorption hydrogen storage, which operate best in the temperature range 30-100 K. We present a comparative analysis of both approaches for cryogenic hydrogen storage, examining how pressure and/or sorbent materials are used to effectively increase onboard H2 density and dormancy. We start by reviewing some basic aspects of LH2 properties and conventional means of storing it. From there we describe the cryo-compression and cryo-adsorption hydrogen storage methods, and then explore the relationship between them, clarifying the materials science and physics of the two approaches in trying to solve the same hydrogen storage task (~5-8 kg H2, typical of light duty vehicles). Assuming that the balance of plant and the available volume for the storage system in the vehicle are identical for both approaches, the comparison focuses on how the respective storage capacities, vessel weight and dormancy vary as a function of temperature, pressure and type of cryo-adsorption material (especially powder MOF-5 and MIL-101). By performing a comparative analysis, we clarify the science of each approach individually, identify the regimes where the attributes of each can be maximized, elucidate the properties of these systems during refueling, and probe the possible benefits of a combined hybrid system with both cryo-adsorption and cryo-compression phenomena operating at the same time. In addition, the relationships found between onboard H2 capacity, pressure vessel and/or sorbent mass, and dormancy as a function of rated pressure, type of sorbent material and fueling conditions are useful as general design guidelines in future engineering efforts using these two hydrogen storage approaches.

The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention includes a method for predicting the compressive strength of such a hardenable mixture, which is very important for planning a project. The invention also relates to hardenable mixtures comprising cement and fly ash which can achieve greater compressive strength than hardenable mixtures containing only concrete over the time period relevant for construction. In a specific embodiment, a formula is provided that accurately predicts compressive strength of concrete containing fly ash out to 180 days. In other specific examples, concrete and mortar containing about 15% to 25% fly ash as a replacement for cement, which are capable of meeting design specifications required for building and highway construction, are provided. Such materials can thus significantly reduce construction costs. 33 figs.

A gas turbine engine. The engine is based on the use of a gas turbine driven rotor having a compression ramp traveling at a local supersonic inlet velocity (based on the combination of inlet gas velocity and tangential speed of the ramp) which compresses inlet gas against a stationary sidewall. The supersonic compressor efficiently achieves high compression ratios while utilizing a compact, stabilized gasdynamic flow path. Operated at supersonic speeds, the inlet stabilizes an oblique/normal shock system in the gasdynamic flow path formed between the rim of the rotor, the strakes, and a stationary external housing. Part load efficiency is enhanced by use of a lean pre-mix system, a pre-swirl compressor, and a bypass stream to bleed a portion of the gas after passing through the pre-swirl compressor to the combustion gas outlet. Use of a stationary low NOx combustor provides excellent emissions results.

Oxide glasses exhibit significant densification under an applied isostatic pressure at the glass transition temperature. The glass compressibility is correlated with the chemical composition and atomic packing density, e.g., borate glasses with planar triangular BO{sub 3} units are more disposed for densification than silicate glasses with tetrahedral units. We here show that there is a direct relation between the plastic compressibility following hot isostatic compression and the extent of the indentation size effect (ISE), which is the decrease of hardness with indentation load exhibited by most materials. This could suggest that the ISE is correlated with indentation-induced shear bands, which should form in greater density when the glass network is more adaptable to volume changes through structural and topological rearrangements under an applied pressure.

The FBI has formulated national standards for digitization and compression of gray-scale fingerprint images. The compression algorithm for the digitized images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition, a technique referred to as the wavelet/scalar quantization method. The algorithm produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations. We will review the current status of the FBI standard, including the compliance testing process and the details of the first-generation encoder.
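
The core quantization step can be illustrated as follows. This is a simplified sketch of a dead-zone uniform scalar quantizer of the general kind used in the wavelet/scalar quantization method, not the FBI specification itself; the step size and dead-zone factor below are hypothetical.

```python
def quantize(coeffs, step, deadzone=1.2):
    # Coefficients inside the widened zero bin map to 0; the rest map
    # to signed bin indices (bin width = step outside the dead zone).
    z = deadzone * step / 2.0
    out = []
    for c in coeffs:
        if abs(c) <= z:
            out.append(0)
        elif c > 0:
            out.append(int((c - z) // step) + 1)
        else:
            out.append(-(int((-c - z) // step) + 1))
    return out

def dequantize(indices, step, deadzone=1.2):
    # Reconstruct each coefficient at the centre of its bin.
    z = deadzone * step / 2.0
    return [0.0 if q == 0 else (1 if q > 0 else -1) * (z + (abs(q) - 0.5) * step)
            for q in indices]
```

Small wavelet coefficients collapse to zero, producing the long zero runs that the subsequent entropy coder exploits; the step size per subband controls the rate/quality trade-off.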

We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using ...

Intense beams of heavy ions are well suited for heating matter to regimes of emerging interest. A new facility, NDCX-II, will enable studies of warm dense matter at {approx}1 eV and near-solid density, and of heavy-ion inertial fusion target physics relevant to electric power production. For these applications the beam must deposit its energy rapidly, before the target can expand significantly. To form such pulses, ion beams are temporally compressed in neutralizing plasma; current amplification factors of {approx}50-100 are routinely obtained on the Neutralized Drift Compression Experiment (NDCX) at LBNL. In the NDCX-II physics design, an initial non-neutralized compression renders the pulse short enough that existing high-voltage pulsed power can be employed. This compression is first halted and then reversed by the beam's longitudinal space-charge field. Downstream induction cells provide acceleration and impose the head-to-tail velocity gradient that leads to the final neutralized compression onto the target. This paper describes the discrete-particle simulation models (1-D, 2-D, and 3-D) employed and the space-charge-dominated beam dynamics being realized.

A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
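
The idea of mapping data into a smaller set of significant wavelet coefficients can be shown in one dimension. This sketch uses a single level of the Haar transform as a stand-in (the specific wavelet and algorithm of the method are not given here) and keeps only the largest-magnitude coefficients.

```python
def haar_1d(x):
    # One level of the orthonormal Haar wavelet transform:
    # pairwise averages followed by pairwise differences.
    s = 2 ** -0.5
    avg = [s * (a + b) for a, b in zip(x[0::2], x[1::2])]
    det = [s * (a - b) for a, b in zip(x[0::2], x[1::2])]
    return avg + det

def inv_haar_1d(c):
    s = 2 ** -0.5
    h = len(c) // 2
    out = []
    for a, d in zip(c[:h], c[h:]):
        out += [s * (a + d), s * (a - d)]
    return out

def compress(x, keep):
    # Zero all but the `keep` largest-magnitude coefficients.
    c = haar_1d(x)
    order = sorted(range(len(c)), key=lambda i: -abs(c[i]))
    kept = set(order[:keep])
    return [v if i in kept else 0.0 for i, v in enumerate(c)]

signal = [4.0, 4.0, 8.0, 8.0]
restored = inv_haar_1d(compress(signal, 2))
```

Analysis performed on the reduced coefficient matrix then touches far fewer values than the original pixel grid, which is the source of the computational savings described above.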

In this work a general relativistic generalization of the Bell inequality is suggested. Namely, it is proved that in practically any general relativistic metric there is a generalization of the Bell inequality. It can be satisfied within theories of local (subluminal) hidden variables, but it cannot be satisfied in the general case within the standard quantum mechanical formalism or within theories of nonlocal (superluminal) hidden variables. It is also shown that within theories of nonlocal hidden variables, but not in the standard quantum mechanical formalism, a paradox appears in the situation when one of the correlated subsystems arrives at a Schwarzschild black hole. Namely, there is no way for the black hole horizon to obstruct superluminal influences between the spin of the subsystem outside the horizon and the spin of the subsystem within the horizon; simply speaking, there is neither a black hole horizon nor a "no hair" theorem for subsystems with correlated spins. This implies that the standard quantum mechanical formalism yields a unique, consistent, and complete description of quantum mechanical phenomena.

We report studies on first-order Fermi acceleration in parallel modified shock waves with a large scattering center compression ratio expected from turbulence transmission models. Using a Monte Carlo technique we have modeled particle acceleration in shocks with a velocity ranging from nonrelativistic to ultrarelativistic and a thickness extending from nearly steplike to very wide structures exceeding the particle diffusion length by orders of magnitude. The nonrelativistic diffusion approximation is found to be surprisingly accurate in predicting the spectral index of a thick shock with large compression ratio even in the cases involving relativistic shock speeds.

We study the reach of the Large Hadron Collider with 1 fb⁻¹ of data at √s = 7 TeV for several classes of supersymmetric models with compressed mass spectra, using jets and missing transverse energy cuts like those employed by ATLAS for summer 2011 data. In the limit of extreme compression, the best limits come from signal regions that do not require more than 2 or 3 jets and that remove backgrounds by requiring more missing energy rather than a higher effective mass.

The Durability of Lightweight Composite Structures Project was established at Oak Ridge National Laboratory (ORNL) by the US Department of Energy to provide the experimentally-based, durability-driven design guidelines necessary to assure long-term structural integrity of automotive composite components. The initial focus of the ORNL Durability Project was on one representative reference material--an isocyanurate (polyurethane) reinforced with continuous strand, swirl-mat E-glass. The present report describes tensile and compressive testing and results for the reference composite. Behavior trends and proportional limit are established for both tension and compression. Damage development due to tensile loading and strain rate effects are discussed.

Signal processing techniques have been developed that use different strategies to bypass the Nyquist sampling theorem in order to recover more information than a traditional discrete Fourier transform. Here we examine three such methods: filter diagonalization, compressed sensing, and super-resolution. We apply them to a broad range of signal forms commonly found in science and engineering in order to discover when and how each method can be used most profitably. We find that filter diagonalization provides the best results for Lorentzian signals, while compressed sensing and super-resolution perform better for arbitrary signals.

. Therefore, high quality compressed air is a key component of that objective. The compressed air used in the manufacturing process at this facility is held to ISO (International Organization for Standardization) class 2 air quality standards.... The compressor room contained two wet storage tanks, with a total capacity of 1,800 gallons. A heated desiccant dryer with associated filters was also located in the compressor room. Dry air was sent into the plant to a 2,000 gallon dry storage tank, which...

A turbulent energy cascade has been recently identified in high-latitude solar wind data samples by using a Yaglom-like relation. However, an analogous scaling law, suitably modified to take into account compressible fluctuations, has been observed in a much more extended fraction of the same data set recorded by the Ulysses spacecraft. Thus, it seems that large scale density fluctuations, despite their low amplitude, play a major role in the basic scaling properties of turbulence. The compressive turbulent cascade, moreover, seems to be able to supply the energy needed to account for the local heating of the non-adiabatic solar wind.

Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.

feeders, and major equipment and systems including compressed air. For the compressed air system, monitored data included compressor amps, electrical demand and consumption, pressure and airflow. The resulting UtiliTRACK® reports and graphs showed a...

The effects of electrode compression on the performance of a polymer electrolyte fuel cell (PEFC) were investigated. Preliminary testing showed that considerable compression of the carbon cloth electrodes was provided by the PEFC structure. Further...

Despite the study of shock wave compression of condensed matter for over 100 years, scant progress has been made in understanding the microscopic details. This thesis explores microscopic phenomena in shock compression of ...

consideration of compression conditions as found in fuel cells. Given the input of a 3D microstructure of some compression states, an optimal vector field is estimated by simulated annealing. The model is applied to 3D im

providing the best compression performance and the latter being the most energy efficient in terms of local is based on the premise that compressed data implies the transmission of smaller packets. This, in turn

(www.gzip.org/) or bzip2 (sourceware.cygnus.com/bzip2/index.html). The problems with this approach existing text compressors. Another XML compression system, XMLPPM, refines this idea further by using different text compressors with different XML components, i.e. one model for element and attribute

and transmission. To save the energy in WSNs and thus prolong the network lifetime, we present a novel approach Energy Efficient Signal Acquisition in Wireless Sensor Networks: A Compressive Sensing Framework networks (WSNs) determines the rate of its energy consumption since most of the energy is used in sampling

. (1999), which requires O(n^{1+epsilon}) words of space and reports all occurrences in O(|T|loglog n + occ) time. Recently, there have been successes in compressing the dictionary matching index while keeping the query time optimal (Belazzougui, 2010, Hon...

POSTER PRESENTATION Open Access Compressed sensing with stochastic spikes David Rotermund*, Klaus of a high dimensional state can be possible also from much lower dimensional samples provided the state for non-negative activities can efficiently exploit the information contained in stochastic spike events

of the cold wall condition used in Ref. 6, and other differences. The computational method used in this study compressible flows because of the interest in designing high speed vehicles and the associated propulsion on temperature. Under the adiabatic conditions of the experiment, the temperature increases as the wall

frequency laser beam into the energy of a short lower-frequency laser pulse. The standard approach to generating high-intensity ultra-short laser pulses is Chirped Pulse Amplification [1] (CPA), in which a laser Garching, Germany Abstract: Laser pulses can be efficiently compressed to femtosecond duration when

Pellet Production Wood pellets are made by compressing clean, dry sawdust under very high pressure into a pellet as it cools. The material used for producing pellets usually comes from industries that are already pellets reduces the volume of material they have to treat as waste, reducing landfill. Pellets have

A homogeneous charge compression ignition engine is operated by compressing a charge mixture of air, exhaust and fuel in a combustion chamber to an autoignition condition of the fuel. The engine may facilitate a transition from a first combination of speed and load to a second combination of speed and load by changing the charge mixture and compression ratio. This may be accomplished in a consecutive engine cycle by adjusting both a fuel injector control signal and a variable valve control signal away from a nominal variable valve control signal. Thereafter, in one or more subsequent engine cycles, more sluggish adjustments are made to at least one of a geometric compression ratio control signal and an exhaust gas recirculation control signal to allow the variable valve control signal to be readjusted back toward its nominal variable valve control signal setting. By readjusting the variable valve control signal back toward its nominal setting, the engine will be ready for another transition to a new combination of engine speed and load.

that it is a context-based compressor of unbounded order, but those contexts are completely restructured by the sort that the requirements on the final compression stage are quite different from those in compressors of more conventional, eventually yielding a compressor which is among the best so far presented and is actually based

compressors. The original paper did little more than present the algorithm, with strong advice for efficient on aspects of its operation. Consideration of the possible efficiency of text compression leads to the revival of ideas by Shannon as the basis of a text compressor and then to the classification of the Block

It is well known that a complete description of the solar wind requires a kinetic description and that, particularly at sub-proton scales, kinetic effects cannot be ignored. It is nevertheless usually assumed that at scales significantly larger than the proton gyroscale r{sub L} , magnetohydrodynamics or its extensions, such as Hall-MHD and two-fluid models with isotropic pressures, provide a satisfactory description of the solar wind. Here we calculate the polarization and magnetic compressibility of oblique kinetic Alfven waves and show that, compared with linear kinetic theory, the isotropic two-fluid description is very compressible, with the largest discrepancy occurring at scales larger than the proton gyroscale. In contrast, introducing anisotropic pressure fluctuations with the usual double-adiabatic (or CGL) equations of state yields compressibility values which are unrealistically low. We also show that both of these classes of fluid models incorrectly describe the electric field polarization. To incorporate linear kinetic effects, we use two versions of the Landau fluid model that include linear Landau damping and finite Larmor radius (FLR) corrections. We show that Landau damping is crucial for correct modeling of magnetic compressibility, and that the anisotropy of pressure fluctuations should not be introduced without taking into account the Landau damping through appropriate heat flux equations. We also show that FLR corrections to all the retained fluid moments appear to be necessary to yield the correct polarization. We conclude that kinetic effects cannot be ignored even for kr{sub L} << 1.

In this paper we investigate the problem of partitioning an input string T in such a way that compressing individually its parts via a base-compressor C gets a compressed output that is shorter than applying C over the entire T at once. This problem was introduced in the context of table compression, and then further elaborated and extended to strings and trees. Unfortunately, the literature offers poor solutions: namely, we know either a cubic-time algorithm for computing the optimal partition based on dynamic programming, or few heuristics that do not guarantee any bounds on the efficacy of their computed partition, or algorithms that are efficient but work in some specific scenarios (such as the Burrows-Wheeler Transform) and achieve compression performance that might be worse than the optimal-partitioning by a $\Omega(\sqrt{\log n})$ factor. Therefore, computing efficiently the optimal solution is still open. In this paper we provide the first algorithm which is guaranteed to compute in $O(n \log_{1+\eps}...
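
For concreteness, the cubic-time dynamic program mentioned above can be sketched with zlib as the base compressor C (a stand-in choice; any compressor with a size function would do):

```python
import zlib

def c_size(s: bytes) -> int:
    # |C(s)|: compressed size of s under the base compressor.
    return len(zlib.compress(s, 9))

def optimal_partition(t: bytes):
    # best[i] = minimum total compressed size over all partitions of t[:i];
    # each of the O(n^2) candidate parts costs O(n) to compress -> cubic time.
    n = len(t)
    best = [0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            cost = best[j] + c_size(t[j:i])
            if cost < best[i]:
                best[i], cut[i] = cost, j
    parts, i = [], n
    while i > 0:
        parts.append(t[cut[i]:i])
        i = cut[i]
    return best[n], parts[::-1]

total, parts = optimal_partition(b"aaaaaaaaaaXYZXYZXYZX")
```

On short strings the per-part header overhead of a real compressor usually makes the single-part partition optimal; the benefit appears on longer, heterogeneous inputs, which is exactly the regime the paper targets.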

Risø-R-1393(EN) Compression Strength of a Fibre Composite Main Spar in a Wind Turbine Blade Find out in the project "Fundamentals for improved design of large wind turbine blade of fibre composites of a wind turbine blade is found and compared with a full-scale test, made in the same project. Especially

of the climate change. Such applications require the least control and intervention as well as minimum energy1 Random Access Compressed Sensing over Fading and Noisy Communication Channels Fatemeh Fazel on integrating random sensing with the communication architecture, and achieves overall efficiency in terms

The Neutralized Drift Compression Experiment (NDCX-II) is an 11 M$ induction accelerator project currently in construction at Lawrence Berkeley National Laboratory for warm dense matter (WDM) experiments investigating the interaction of ion beams with matter at elevated temperature and pressure. The machine consists of a lithium injector, induction accelerator cells, diagnostic cells, a neutralized drift compression line, a final focus solenoid, and a target chamber. The induction cells and some of the pulsed power systems have been reused from the decommissioned Advanced Test Accelerator at Lawrence Livermore National Laboratory after refurbishment and modification. The machine relies on a sequence of acceleration waveforms to longitudinally compress the initial ion pulse from 600 ns to less than 1 ns in {approx} 12 m. Radial confinement of the beam is achieved with 2.5 T pulsed solenoids. In the initial hardware configuration, 50 nC of Li{sup +} will be accelerated to 1.25 MeV and allowed to drift-compress to a peak current of {approx}40 A. The project started in the summer of 2009. Construction of the accelerator will be completed in the fall of 2011 and will provide a worldwide unique opportunity for ion-driven warm dense matter experiments as well as research related to novel beam manipulations for heavy ion fusion drivers.

Compression of redundancy free trellis stages in Turbo-Decoder E. Boutillon, J. Sánchez-Rojas and C. Marchand For turbo code with coding rate close to one, the high puncturing rate induces long sequences. The computation is reduced accordingly. Introduction: Turbo codes with coding rate close to one are specified

NUMERICAL INVESTIGATION OF CAVITATION IN MULTI-DIMENSIONAL COMPRESSIBLE FLOWS KRISTEN J. DEVAULT, cavitation, pseudospectral AMS subject classifications. 35Q30, 65M70, 76N99 1. Introduction. A long to first make a careful numerical study of the simplest possible scenario where one could expect cavitation

WT-based compression example (slide residue: time-frequency allocation diagram; original vs. decoded fingerprint images). See http://www.amara.com/IEEEwave/IW_fbi.html and http://www.c3.lanl.gov/~brislawn/FBI/FBI.html.

compressive stress/strain characteristics that enhance the thermomechanical performance of buckling or bending is used as an interior for a double-walled tube, its ability to suppress local buckling leads to slightly enhanced structural efficiency [Fig. 1(a)] [1]. Moreover, and more importantly, by suppressing lateral defor

Beyond Leaks: Demand-side Strategies for ..., by Bill Howe, PE, Director, Corporate Energy Services, E Source, Inc., Boulder, Colorado. SUMMARY: Staggering amounts of compressed air are wasted or misapplied in otherwise well-run manufacturing plants. Well-maintained plants lose about 10 percent of compressed air to leaks, while many more lose over 50 percent. In addition to leaks, wasteful application of compressed air can eat up another 5 to 40 percent of compressed air volume, even in otherwise well-run plants.

General perturbations of a spherical gas bubble in a compressible and inviscid fluid with surface tension were proved in Shapiro and Weinstein (2011), in the linearized approximation, to decay exponentially, $\sim e^{-\Gamma t}$, $\Gamma > 0$, as time advances. Formal asymptotic and numerical evidence led to the conjecture that $\Gamma \approx \frac{A}{\epsilon} \frac{We}{\epsilon^{2}} \exp(-B \frac{We}{\epsilon^2})$, where $0 ...

Fractal Volume Compression, by Wayne O. Cochran, John C. Hart, and Patrick J. Flynn (School of Electrical Engineering and Computer Science, Washington State University), October 18, 1995. Abstract: This research is the first application of fractal compression to volumetric data. The various components of the fractal image compression method extend simply and directly to the volumetric case.

LBNL-60891: Performance of Multi-Level and Multi-Component Compressed Bitmap Indexes, by Kesheng Wu et al. [Fragment] Bitmap indexes can perform well for high-cardinality attributes when certain compression methods are applied; this work examines subsets of compressed bitmap indexes that use multi-component and multi-level encodings.

Direct Numerical Simulation of Compressible Transition: An Overview, by M. Y. Hussaini and G. Erlebacher. [Fragment] ...in the field of compressible transition. As a result, new computational tools have made their appearance. Recently, however, research at Langley has begun to focus on the simulation of compressible transition.

[Fragment] ...of sampling and compression [1, 2]. CS enables the design of new kinds of compressive imaging systems. In the case of image classification, a ratio test exploits the fact that a set of images of a fixed ... Requiring only compressive image projections, we achieve high classification rates using many fewer measurements.

[Fragment] ...its performance. Index Terms: difference scaling, genetic algorithm, MS-SSIM. Introduction: lossy image compression techniques such as JPEG2000 allow high compression rates, but only at a cost as the compression rate is increased. ...to optimize the construction of a psychovisual scale; such a scale will serve...

BAP Sparsing: A Novel Approach to MPEG-4 Body Animation Parameter Compression, by Siddhartha ... [Fragment] A virtual human body model, such as a virtual human [1] [2], is animated using a stream of body animation parameters (BAPs), allowing virtual bodies and their animation to be compressed using a standard compression pipeline comprising...

Compressive Sampling for Non-Intrusive Appliance Load Monitoring (NALM) Using Current Waveforms. [Fragment] NALM enables advanced services like dynamic electricity pricing; the goal is to recover the on/off status of each appliance from the compressed measurement as if the original non-compressed measurement were available.

Partial Encryption of Compressed Images and Videos, by Howard Cheng and Xiaobo Li. [Fragment] It is difficult, if not impossible, to carry out real-time secure image and video communication and processing with full encryption; methods have been proposed that encrypt only part of the compressed data. Partial encryption is applied to several image and video compression algorithms.

The galilean genesis scenario is an alternative to inflation in which the universe starts expanding from Minkowski in the asymptotic past by stably violating the null energy condition. Several concrete models of galilean genesis have been constructed so far within the context of galileon-type scalar-field theories. We give a generic, unified description of the galilean genesis scenario in terms of the Horndeski theory, i.e., the most general scalar-tensor theory with second-order field equations. In doing so we generalize the previous models to include a new parameter (denoted by α) which controls the evolution of the Hubble rate. The background dynamics are investigated to show that the generalized galilean genesis solution is an attractor, similarly to the original model. We also study the nature of primordial perturbations in the generalized galilean genesis scenario. In all the models described by our generalized genesis Lagrangian, amplification of tensor perturbations does not occur as ...

An image represented in a first image array of pixels is first decimated in two dimensions before being compressed by a predefined compression algorithm such as JPEG. Another possible predefined compression algorithm can involve a wavelet technique. The compressed, reduced image is then transmitted over the limited bandwidth transmission medium, and the transmitted image is decompressed using an algorithm which is an inverse of the predefined compression algorithm (such as reverse JPEG). The decompressed, reduced image is then interpolated back to its original array size. Edges (contours) in the image are then sharpened to enhance the perceptual quality of the reconstructed image. Specific sharpening techniques are described. 22 figs.
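A minimal numerical sketch of this pipeline, with a block-average decimator, nearest-neighbour interpolation, and an unsharp-mask sharpener standing in for the specific techniques of the source; the JPEG/wavelet codec stage is omitted, and all function names are illustrative:

```python
import numpy as np

# Decimate -> (codec would go here) -> interpolate -> sharpen, as described
# in the abstract above. This is a generic sketch, not the patented method.

def decimate(img, f=2):
    """Reduce each dimension by factor f via block averaging."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def interpolate(img, f=2):
    """Restore the original array size by pixel replication (nearest neighbour)."""
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def sharpen(img, amount=1.0):
    """Unsharp mask: add back the difference between the image and a local blur."""
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1) + img) / 5.0
    return img + amount * (img - blur)

rng = np.random.default_rng(0)
original = rng.random((8, 8))
reduced = decimate(original)            # 4x fewer pixels to compress/transmit
restored = sharpen(interpolate(reduced))
print(original.shape, reduced.shape, restored.shape)
```

A real implementation would insert the predefined compression algorithm and its inverse between `decimate` and `interpolate`, and would use a proper low-pass filter before decimation to avoid aliasing.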


A general approach to the description of multigravity models in D-dimensional space-time is presented. Different possible generalizations of the invariant volume are given. The most general form of the interaction potential is then constructed, which for bigravity coincides with the Pauli-Fierz model. A thorough analysis of the model in the 3+1 decomposition formalism is carried out. It is shown that, in the absence of ghosts, the considered bigravity model is equivalent in the weak-field limit to massive gravity (the Pauli-Fierz model). This concrete example thus shows that the interaction between metrics leads to a nonvanishing graviton mass.

Kalmykov, Serguei; Shvets, Gennady [Department of Physics and Institute for Fusion Studies, University of Texas at Austin, One University Station C1500, Austin, Texas 78712 (United States)

2006-05-15

A train of few-laser-cycle relativistically intense radiation spikes with a terahertz repetition rate can be organized self-consistently in plasma from two frequency detuned co-propagating laser beams of low intensity. Large frequency bandwidth for the compression of spikes is produced via laser-induced periodic modulation of the plasma refractive index. The beat-wave-driven electron plasma wave downshifted from the plasma frequency creates a moving index grating thus inducing a periodic phase modulation of the driving laser (in spectral terms, electromagnetic cascading). The group velocity dispersion compresses the chirped laser beat notes to a few-cycle duration and relativistic intensity either concurrently in the same, or sequentially in different plasmas. Particle-in-cell simulations indicate that the effect persists in a realistic three-dimensional axisymmetric geometry.

We report on the complete characterization of time resolution in an ultrafast electron diffraction (UED) instrument based on radio-frequency electron pulse compression. The temporal impulse response function of the instrument was determined directly in pump-probe geometry by performing electron-laser pulse cross-correlation measurements using the ponderomotive interaction. With optimal settings, a stable impulse response of 334 ± 10 fs was measured at a bunch charge of 0.1 pC (6.24 × 10^5 electrons/pulse), a dramatic improvement compared to performance without pulse compression. Phase stability currently limits the impulse response of the UED diffractometer to the range of 334-500 fs for bunch charges ranging between 0.1 and 0.6 pC.

Shock compression experiments in the few-hundred-GPa (multi-Mbar) regime were performed on lithium deuteride (LiD) single crystals. This study utilized the high-velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ~200-600 GPa along the principal Hugoniot, the locus of end states achievable through compression by large-amplitude shock waves, as well as pressure and density of re-shock states up to ~900 GPa. The experimental measurements are compared with recent density functional theory calculations as well as a new tabular equation of state developed at Los Alamos National Laboratory.

A class of explosive magnetic flux compression generators is described that has been used successfully to power rail guns. A program to increase current magnitudes and pulse lengths is outlined. Various generator loss terms are defined and plans to overcome some of them are discussed. Included are various modifications of the conventional strip generators that are more resistant to undesirable expansion of generator components from magnetic forces. Finally, an integral rail gun is discussed that has coaxial geometry. Integral rail guns utilize the rails themselves as flux compression generator elements and, under ideal conditions, are theoretically capable of driving projectiles to arbitrarily high velocities. Integral coaxial rail guns should be superior in some regards to their square bore counterparts.

In this paper we report on the radiography of a shock-compressed target using laser produced proton beams. A low-density carbon foam target was shock compressed by long pulse high-energy laser beams. The shock front was transversally probed with a proton beam produced in the interaction of a high intensity laser beam with a gold foil. We show that from radiography data, the density profile in the shocked target can be deduced using Monte Carlo simulations. By changing the delay between long and short pulse beams, we could probe different plasma conditions and structures, demonstrating that the details of the steep density gradient can be resolved. This technique is validated as a diagnostic for the investigation of warm dense plasmas, allowing an in situ characterization of high-density contrasted plasmas.

Magnetic isentropic compression experiments (ICE) provide the most accurate shock-free compression data for materials at megabar stresses. Recent ICE experiments performed on the Sandia Z-machine (Asay, 1999) and at the Los Alamos High Explosive Pulsed Power facility (Tasker, 2006) are providing our nation with data on material properties in extreme dynamic high-stress environments. The LANL National High Magnetic Field Laboratory (NHMFL) can offer a less complex ICE experiment at high stresses (up to approximately 1 Mbar) with a high sample throughput and relatively low cost. This is not to say that the NHMFL technique will replace the other methods, but rather complement them. For example, NHMFL-ICE is ideal for the development of advanced diagnostics, e.g., to detect phase changes. We will discuss the physics of the NHMFL-ICE experiments and present data from the first proof-of-principle experiments, which were performed in September 2010.

This document is designed to help fleets understand the cost factors associated with fueling infrastructure for compressed natural gas (CNG) vehicles. It provides estimated cost ranges for various sizes and types of CNG fueling stations and an overview of factors that contribute to the total cost of an installed station. The information presented is based on input from professionals in the natural gas industry who design, sell equipment for, and/or own and operate CNG stations.

The first shock-compression experiments on liquid helium are reported. With a two-stage light-gas gun, liquid He at 4.3 K and 1 atm was shocked to 16 GPa and 12 000 K and double shocked to 56 GPa and 21 000 K. Liquid perturbation theory has been used to determine an effective interatomic potential from which the equation of state of He can be obtained over a wide range of densities and temperatures.
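For reference, the single- and double-shock states quoted above are constrained by the standard Rankine-Hugoniot jump conditions (conservation of mass, momentum, and energy across the shock front), which relate the shocked state (subscript 1) to the initial state (subscript 0) through the shock velocity $U_s$ and particle velocity $u_p$:

```latex
\rho_0 U_s = \rho_1 (U_s - u_p) \qquad \text{(mass)}
P_1 - P_0 = \rho_0 U_s u_p \qquad \text{(momentum)}
E_1 - E_0 = \tfrac{1}{2}(P_1 + P_0)(V_0 - V_1) \qquad \text{(energy)}
```

Here $V = 1/\rho$ is the specific volume; a double-shock state satisfies the same relations with the first-shock state taken as the initial state. These relations are standard and are quoted here as context, not taken from the source abstract.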

Kalmykov, Serguei; Shvets, Gennady [Department of Physics and Institute for Fusion Studies, University of Texas at Austin, One University Station C1500, Austin, Texas 78712 (United States)

2005-06-17

Compressing high-power laser beams in plasmas via generation of a coherent cascade of electromagnetic sidebands is described. The technique requires two copropagating beams detuned by a near-resonant frequency ω ≲ ω_p. The ponderomotive force of the laser beat wave drives an electron plasma wave which modifies the refractive index of the plasma so as to produce a periodic phase modulation of the laser field with the beat period τ_b = 2π/ω. A train of chirped laser beat notes (each of duration τ_b) is thus created. The group velocity dispersion of radiation in plasma can then compress each beat note to a few-laser-cycle duration. As a result, a train of sharp electromagnetic spikes separated in time by τ_b is formed. Depending on the plasma and laser parameters, chirping and compression can be implemented either concurrently in the same plasma or sequentially in different plasmas.
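For a rough sense of scale (with an assumed electron density that is not from the source), detuning near the plasma frequency of a tenuous plasma indeed gives a terahertz-rate spike train:

```python
import math

# Quick numerical check that beating two beams detuned by roughly the plasma
# frequency omega_p yields spikes separated by tau_b = 2*pi/omega on a
# picosecond (terahertz-rate) scale. Density below is illustrative.

e = 1.602176634e-19        # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e = 9.1093837015e-31     # electron mass, kg

n_e = 1e22                 # electron density, m^-3 (1e16 cm^-3, assumed)
omega_p = math.sqrt(n_e * e**2 / (eps0 * m_e))   # plasma frequency, rad/s
tau_b = 2 * math.pi / omega_p                    # beat period for omega ~ omega_p

print(f"f_p   ~ {omega_p / (2 * math.pi) / 1e12:.2f} THz")
print(f"tau_b ~ {tau_b * 1e12:.2f} ps between spikes")
```

At this density the repetition rate is about 0.9 THz with roughly 1.1 ps between spikes, consistent with the "terahertz repetition rate" quoted in the related abstract above.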

A magnetohydrodynamic loading technique was used to shocklessly compress beryllium to peak longitudinal stresses of 19-110 GPa and, subsequently, unload in order to determine both the compressive response and the shear stress supported upon release. Loading strain rates were on the order of 10^6 s^-1, while the unloading rates were nearly constant at 3 × 10^5 s^-1. Velocimetry was used to monitor the ramp and release behavior of a beryllium/lithium fluoride window interface. After applying window corrections to infer in situ beryllium velocities, a Lagrangian analysis was employed to determine the material response. The Lagrangian wavespeed-particle velocity response is integrated to generate the stress-strain path, the average change in shear stress over the elastic unloading, and estimates of the shear modulus at peak compression. These data are used to infer the pressure dependence of the flow strength at the unloading rate. Comparisons to several strength models reveal good agreement to 45 GPa, but the data indicate 20%-30% higher strength near 100 GPa.

An exact method is developed for computing the height of an elastic medium subjected to centrifugal compression, for arbitrary constitutive relation between stress and strain. Example solutions are obtained for power-law media and for cases where the stress diverges at a critical strain -- for example as required by packings composed of deformable but incompressible particles. Experimental data are presented for the centrifugal compression of thermo-responsive N-isopropylacrylamide (NIPA) microgel beads in water. For small radial acceleration, the results are consistent with Hertzian elasticity, and are analyzed in terms of the Young elastic modulus of the bead material. For large radial acceleration, the sample compression asymptotes to a value corresponding to a space-filling particle volume fraction of unity. Therefore we conclude that the gel beads are incompressible, and deform without deswelling. In addition, we find that the Young elastic modulus of the particulate gel material scales with cross-link density raised to the power 3.3 ± 0.8, somewhat larger than the Flory expectation.

A high-power RF switching device employs a semiconductor wafer positioned in the third port of a three-port RF device. A controllable source of directed energy, such as a suitable laser or electron beam, is aimed at the semiconductor material. When the source is turned on, the energy incident on the wafer induces an electron-hole plasma layer on the wafer, changing the wafer's dielectric constant, turning the third port into a termination for incident RF signals, and causing all incident RF signals to be reflected from the surface of the wafer. The propagation constant of RF signals through port 3, therefore, can be changed by controlling the beam. By making the RF coupling to the third port as small as necessary, one can reduce the peak electric field on the unexcited silicon surface for any level of input power from port 1, thereby reducing the risk of damaging the wafer with high-peak-power RF. The switch is useful in the construction of an improved pulse compression system to boost the peak power of microwave tubes driving linear accelerators. In this application, the high-power RF switch is placed at the coupling iris between the charging waveguide and the resonant storage line of a pulse compression system. This optically controlled high-power RF pulse compression system can handle hundreds of megawatts of power at X-band.

We present a construction method for mappings between generalized connections, comprising, e.g., the action of gauge transformations, diffeomorphisms and Weyl transformations. Moreover, criteria for continuity and measure preservation are stated.

[Fragment] Campus IT governance: the Campus Council for Information Technology (CCFIT) has about 30 members, serves an advisory evaluation and review role, and takes input from faculty, staff, and students, with formal representation on the steering team and subcommittees. A Technology Support Program provides technology support.

The confrontation between Einstein's theory of gravitation and experiment is summarized. Although all current experimental data are compatible with general relativity, the importance of pursuing the quest for possible deviations from Einstein's theory is emphasized.

We discuss the twisting of gauge symmetry in noncommutative gauge theories and show how this can be generalized to a whole continuous family of twisted gauge invariances. The physical relevance of these twisted invariances is discussed.

We introduce the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of new single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length, but also asymptotically achieve the quantum Hamming bound for large block length.

Recently, DiFrancesco and Zuber have characterized the RCFTs which have a description in terms of a fusion potential in one variable, and proposed a generalized potential to describe other theories. In this note we give a simple criterion to determine when such a generalized description is possible. We also determine which RCFTs can be described by a fusion potential in more than one variable, finding that in fact all RCFTs can be described in such a way, as conjectured by Gepner.

Homogeneous charge compression ignition (HCCI) engines are being considered as an alternative to diesel engines. The HCCI concept involves premixing fuel and air prior to induction into the cylinder (as is done in current spark-ignition engines), then igniting the fuel-air mixture through the compression process (as is done in current diesel engines). The combustion occurring in an HCCI engine is fundamentally different from that in a spark-ignition or diesel engine in that the heat release occurs as a global autoignition process, as opposed to the turbulent flame propagation or mixing-controlled combustion used in current engines. The advantage of this global autoignition is that the temperatures within the cylinder are uniformly low, yielding very low emissions of oxides of nitrogen (NOx, the chief precursors to photochemical smog). The inherent features of HCCI combustion allow for the design of engines with efficiency comparable to, or potentially higher than, diesel engines. While HCCI engines have great potential, several technical barriers currently prevent widespread commercialization of this technology. The most significant challenge is that the combustion timing cannot be controlled by typical in-cylinder means. Means of controlling combustion have been demonstrated, but a robust control methodology applicable to the entire range of operation has yet to be developed. This research focuses on understanding basic characteristics of controlling and operating HCCI engines. Experiments and detailed chemical kinetic simulations have been applied to characterize some of the fundamental operational and design characteristics of HCCI engines. Experiments have been conducted on single- and multi-cylinder engines to investigate general features of how combustion timing affects the performance and emissions of HCCI engines. Single-zone modeling has been used to characterize and compare the implementation of different control strategies.
Multi-zone modeling has been applied to investigate combustion chamber design with respect to increasing efficiency and reducing emissions in HCCI engines.
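As a back-of-the-envelope illustration of compression-driven autoignition (a minimal sketch, not the single- or multi-zone models referenced above), an isentropic estimate of the end-of-compression gas temperature with assumed, illustrative values already reaches the range where typical fuels autoignite:

```python
# Isentropic compression estimate: T2 = T1 * (V1/V2)^(gamma - 1).
# All numbers below are illustrative assumptions, not from the source.

T_intake = 350.0      # K, intake charge temperature (assumed)
CR = 16.0             # geometric compression ratio (assumed)
gamma = 1.35          # effective ratio of specific heats for the diluted charge

T_tdc = T_intake * CR ** (gamma - 1.0)
print(f"estimated top-dead-center temperature: {T_tdc:.0f} K")
```

The estimate lands near 920 K, around where many hydrocarbon fuels autoignite, which is why small changes in intake temperature or mixture composition shift HCCI combustion timing so strongly and why timing control is the central challenge noted above.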

We define a theory of noncommutative general relativity for canonical noncommutative spaces. We find a subclass of general coordinate transformations acting on canonical noncommutative spacetimes to be volume-preserving transformations. Local Lorentz invariance is treated as a gauge theory with the spin connection field taken in the so(3,1) enveloping algebra. The resulting theory appears to be a noncommutative extension of the unimodular theory of gravitation. We compute the leading order noncommutative correction to the action and derive the noncommutative correction to the equations of motion of the weak gravitation field.


TC 9-524, Chapter 4, Drilling Machines: General Information. [Fragment] Purpose: this chapter contains basic information pertaining to drilling machines. A drilling machine comes in many shapes and sizes, from small hand-held power drills to bench-mounted and, finally, floor-mounted models. They can perform operations...

[Fragment] Communication, general definition: "the process of conveying information from a sender to a receiver with the use of a medium in which the communicated information is understood the same way by both sender and receiver" (Wikipedia). Biological communication: action by one organism (individual...

Conformal transformations of the Euclidean (complex) plane possess a kind of completeness (sufficiency) for the solution of many mathematical and physical-mathematical problems formulated on this plane. There is no such completeness in the case of Euclidean, pseudo-Euclidean, and polynumber spaces of dimension greater than two. In the present paper we show that the concepts of analogous geometries allow us to generalize conformal transformations not only to Euclidean and pseudo-Euclidean spaces, but also to Finsler spaces, analogous to spaces with affine connection. Examples of such transformations for complex and hypercomplex numbers H_4 are presented. In the general case such transformations form a group of transitions, whose elements can be viewed as transitions between projective Euclidean geometries of a distinguished class fixed by the choice of a metric geometry admitting affine coordinates. The correlation between functions realizing generalized conformal transformations and generalized analytic functions may prove productive for the solution of fundamental problems in theoretical and mathematical physics.

We address the possibility to control high power pulses extracted from the maximally compressed pulse in a nonlinear optical fiber by adjusting the initial excitation parameters. The numerical results show that the power, location and splitting order number of the maximally compressed pulse and the transmission features of high power pulses extracted from the maximally compressed pulse can be manipulated through adjusting the modulation amplitude, width, and phase of the initial Gaussian-type perturbation pulse on a continuous wave background.

Five alternatives to vapor compression technology were qualitatively evaluated to determine their prospects for being better than vapor compression for space cooling and food refrigeration applications. The results of the assessment are summarized in the report. Overall, thermoacoustic and magnetic technologies were judged to have the best prospects for competing with vapor compression technology, with thermotunneling, thermoelectric, and thermionic technologies trailing behind in that order.

Compressed sensing (CS) schemes are proposed for monostatic as well as synthetic aperture radar (SAR) imaging of sparse targets with chirps. In particular, a simple method is developed to improve performance with off-grid targets. Tomographic formulation of spotlight SAR is analyzed by CS methods with several bases and under various bandwidth constraints. Performance guarantees are established via coherence bound and the restricted isometry property. CS analysis provides a fresh and clear perspective on how to optimize temporal and angular samplings for spotlight SAR.
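A toy sketch of sparse recovery in this spirit, using generic orthogonal matching pursuit on a random Gaussian sensing matrix rather than the chirp/SAR formulation of the source; sizes and names are illustrative:

```python
import numpy as np

# Orthogonal matching pursuit (OMP): greedily pick the sensing-matrix column
# most correlated with the residual, refit by least squares, repeat.

rng = np.random.default_rng(1)
n, m, k = 64, 24, 3                    # scene size, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k-sparse scene
y = A @ x                              # compressed measurements (m << n)

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))     # best-matching column
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef       # residual after least-squares refit

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x_hat - x))
```

Real SAR processing would replace `A` with the chirp/tomographic measurement operator and typically use an off-grid-aware solver, as the abstract notes.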

A homogeneous charge compression ignition engine operates by injecting liquid fuel directly in a combustion chamber, and mixing the fuel with recirculated exhaust and fresh air through an auto ignition condition of the fuel. The engine includes at least one turbocharger for extracting energy from the engine exhaust and using that energy to boost intake pressure of recirculated exhaust gas and fresh air. Elevated proportions of exhaust gas recirculated to the engine are attained by throttling the fresh air inlet supply. These elevated exhaust gas recirculation rates allow the HCCI engine to be operated at higher speeds and loads rendering the HCCI engine a more viable alternative to a conventional diesel engine.

A Homogeneous Charge Compression Ignition (HCCI) engine system includes an engine that produces exhaust gas. A vaporization means vaporizes fuel for the engine, and an air induction means provides air for the engine. An exhaust gas recirculation means recirculates the exhaust gas. A blending means blends the vaporized fuel, the exhaust gas, and the air. An induction means inducts the blended vaporized fuel, exhaust gas, and air into the engine. A control means controls both the blending of the vaporized fuel, the exhaust gas, and the air and the induction of the blended mixture into the engine.

Large scale coherent structures are intrinsic fluid mechanical characteristics of all free-shear flows, from incompressible to compressible, and laminar to fully turbulent. These quasi-periodic fluid structures, eddies of size comparable to the thickness of the shear layer, dominate the mixing process at the free-shear interface. As a result, large scale coherent structures greatly influence the operation and efficiency of many important commercial and defense technologies. Large scale coherent structures have been studied here in a research program that combines a synergistic blend of experiment, direct numerical simulation, and analysis. This report summarizes the work completed for this Sandia Laboratory-Directed Research and Development (LDRD) project.

Looking at rational solid-fluid mixture theories in the context of their biomechanical perspectives, this work aims at proposing a two-scale constitutive theory of a poroelastic solid infused with an inviscid compressible fluid. The propagation of steady-state harmonic plane waves in unbounded media is investigated in both cases of unconstrained solid-fluid mixtures and fluid-saturated poroelastic solids. Relevant effects on the resulting characteristic speed of longitudinal and transverse elastic waves, due to the constitutive parameters introduced, are finally highlighted and discussed.

A method of avoiding CSR induced beam quality defects in free electron laser operation by a) controlling the rate of compression and b) using a novel means of integrating the compression with the remainder of the transport system: both are accomplished by means of dispersion modulation. A large dispersion is created in the penultimate dipole magnet of the compression region leading to rapid compression; this large dispersion is demagnified and dispersion suppression performed in a final small dipole. As a result, the bunch is short for only a small angular extent of the transport, and the resulting CSR excitation is small.

A method for spectrally compressing data sets enables the efficient analysis of very large multivariate images. The spectral compression algorithm uses a factored representation of the data that can be obtained from Principal Components Analysis or other factorization technique. Furthermore, a block algorithm can be used for performing common operations more efficiently. An image analysis can be performed on the factored representation of the data, using only the most significant factors. The spectral compression algorithm can be combined with a spatial compression algorithm to provide further computational efficiencies.
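A minimal sketch of the factored-representation idea, using a truncated SVD as one way to obtain the Principal Components factorization mentioned above; the block algorithm and the spatial compression stage are not shown, and all sizes are illustrative:

```python
import numpy as np

# Keep only the top principal components of a (pixels x channels) data
# matrix and work in that low-rank basis instead of on the raw data.

rng = np.random.default_rng(2)
pixels, channels, rank = 500, 40, 4

# Synthetic multivariate image: a few spectral "pure components" mixed
# with random per-pixel abundances, plus a little noise.
S = rng.random((rank, channels))               # component spectra
C = rng.random((pixels, rank))                 # per-pixel concentrations
D = C @ S + 0.01 * rng.standard_normal((pixels, channels))

U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = rank                                        # significant factors retained
scores, loadings = U[:, :k] * s[:k], Vt[:k]     # factored representation

D_hat = scores @ loadings                       # reconstruction from k factors
rel_err = np.linalg.norm(D - D_hat) / np.linalg.norm(D)
print(f"stored {k * (pixels + channels)} numbers instead of {pixels * channels}")
print(f"relative reconstruction error: {rel_err:.3f}")
```

Downstream analyses (classification, clustering, least-squares fits) can then operate on `scores` and `loadings` directly, which is where the computational savings described in the abstract come from.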

Regulation XVI: General University Regulations, Application and Interpretation. 1. Unless stated otherwise, these and the following Regulations apply to students in all Faculties, including the International Faculty: General Regulations for First Degrees; General Regulations for Higher Degrees.

We present a general method to calculate radiative transfer, including scattering in the continuum as well as in lines, in spherically symmetric systems that are influenced by the effects of general relativity (GR). We utilize a comoving wavelength ansatz that allows us to resolve spectral lines throughout the atmosphere. The numerical solution used is an operator splitting (OS) technique based on a characteristic formal solution. The bending of photon paths and the wavelength shifts due to the effects of GR are fully taken into account, as is the treatment of image generation in a curved spacetime. We describe the algorithm we use and demonstrate the effects of GR on the radiative transport of a two-level-atom line in a neutron-star-like atmosphere for various combinations of continuous and line scattering coefficients. In addition, we present grey continuum models and discuss the effects of different scattering albedos on the emergent spectra and the determination of effective temperatures and radii of neutron star atmospheres.

The definition of relative accelerations and strains among a set of comoving particles is studied in connection with the geometric properties of the frame adapted to a "fiducial observer." We find that a relativistically complete and correct definition of strains must take into account the transport law of the chosen spatial triad along the observer's congruence. We use special congruences of (accelerated) test particles in some familiar spacetimes to elucidate this point. The celebrated idea of Szekeres' compass of inertia, which arises when studying geodesic deviation among a set of free-falling particles, is here generalized to the case of accelerated particles. In doing so we naturally contribute to the theory of the relativistic gravity gradiometer. Moreover, our analysis is made in an observer-dependent form, which will be very useful when considering general relativistic tests on space stations orbiting compact objects like black holes, as well as in other interesting gravitational situations.

The Resource Description Framework (RDF) is a popular data model for representing linked data sets arising from the web, as well as large scientific data repositories such as UniProt. RDF data intrinsically represents a labeled and directed multi-graph. SPARQL is a query language for RDF that expresses subgraph pattern-finding queries on this implicit multigraph in a SQL-like syntax. SPARQL queries generate complex intermediate join queries; to compute these joins efficiently, we propose a new strategy based on bitmap indexes. We store the RDF data in column-oriented structures as compressed bitmaps along with two dictionaries. This paper makes three new contributions. (i) We present an efficient parallel strategy for parsing the raw RDF data, building dictionaries of unique entities, and creating compressed bitmap indexes of the data. (ii) We utilize the constructed bitmap indexes to efficiently answer SPARQL queries, simplifying the join evaluations. (iii) To quantify the performance impact of using bitmap indexes, we compare our approach to the state-of-the-art triple-store RDF-3X. We find that our bitmap index-based approach to answering queries is up to an order of magnitude faster for a variety of SPARQL queries, on gigascale RDF data sets.
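To make the bitmap-join idea concrete, here is a hypothetical miniature in Python. The toy triples, the use of a Python integer as a bitset, and the per-predicate subject bitmaps are illustrative assumptions, not the paper's actual storage layout (which uses compressed bitmaps plus two dictionaries); the point is only that a join on a shared subject variable reduces to a bitwise AND.

```python
# Entities are dictionary-encoded to integer ids; for each predicate we
# keep a bitmap over subject ids, so a SPARQL-style join on a shared
# subject variable becomes a single bitwise AND of two bitmaps.

triples = [
    ("alice", "knows",   "bob"),
    ("alice", "worksAt", "acme"),
    ("bob",   "worksAt", "acme"),
    ("carol", "knows",   "dave"),
]

ids = {}                        # dictionary encoding: entity -> integer id
def encode(e):
    if e not in ids:
        ids[e] = len(ids)
    return ids[e]

pred_bitmap = {}                # one subject bitmap per predicate (int as bitset)
for s, p, o in triples:
    sid = encode(s)
    encode(o)
    pred_bitmap[p] = pred_bitmap.get(p, 0) | (1 << sid)

# Join:  ?x knows ?y .  ?x worksAt ?z   ->  subjects with both predicates
both = pred_bitmap["knows"] & pred_bitmap["worksAt"]
matches = [e for e, i in ids.items() if both >> i & 1]
print(matches)  # ['alice']
```

A real engine would use compressed bitmaps (e.g., word-aligned run-length encoding) instead of raw integers, but the join simplification is the same.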

A method and apparatus for the in situ repair of a failed compression fitting is provided. Initially, a portion of a guide tube is inserted coaxially in the bore of the compression fitting and locked therein. A close fit dethreading device is then coaxially mounted on the guide tube to cut the threads from the fitting. Thereafter, the dethreading device and guide tube are removed and a new fitting is inserted onto the dethreaded fitting with the body of the new fitting overlaying the dethreaded portion. Finally, the main body of the new fitting is welded to the main body of the old fitting whereby a new threaded portion of the replacement fitting is precisely coaxial with the old threaded portion. If needed, a bushing is located on the dethreaded portion which is sized to fit snugly between the dethreaded portion and the new fitting. Preferably, the dethreading device includes a cutting tool which is moved incrementally in a radial direction whereby the threads are cut from the threaded portion of the failed fitting in increments.

In this letter, we propose a scheme to generate tunable coherent X-ray radiation for future light source applications. This scheme uses an energy-chirped electron beam, a laser modulator, a laser chirper, and two bunch compressors to generate a prebunched kilo-ampere-current electron beam from a few-tens-of-ampere electron beam out of a linac. The wavelength of the initial energy modulation can be compressed by a factor of $1+h_b R_{56}^a$ in phase space, where $h_b$ is the energy-bunch-length chirp introduced by the laser chirper and $R_{56}^a$ is the momentum compaction factor of the first bunch compressor. As an illustration, we present an example that generates a more than $400$ MW, $170$-attosecond pulse of $1$ nm coherent X-ray radiation using a $60$ ampere electron beam out of the linac and a $200$ nm laser seed. Both the final wavelength and the radiation pulse length in the proposed scheme are tunable by adjusting the compression factor and the laser parameters.
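A back-of-envelope consistency check on the quoted numbers (assuming, for illustration, that the modulation wavelength scales inversely with the overall compression factor):

```latex
% Required overall compression factor to take the 200 nm seed
% modulation down to the 1 nm output wavelength:
C \;=\; 1 + h_b R_{56}^{a}
  \;=\; \frac{\lambda_{\text{seed}}}{\lambda_{\text{X-ray}}}
  \;=\; \frac{200\,\text{nm}}{1\,\text{nm}} \;=\; 200 .
% The same factor applied to the linac current is consistent with
% the quoted kilo-ampere regime:
I_{\text{final}} \;=\; C\,I_{\text{linac}} \;=\; 200 \times 60\,\text{A}
  \;=\; 12\,\text{kA} .
```

Here $\lambda_{\text{seed}}$, $\lambda_{\text{X-ray}}$, and $I_{\text{final}}$ are notation introduced for this check only; in the actual scheme the factor is shared between the two bunch compressors.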

EXTENDED NOSÉ THERMOSTAT. In this section, we show that the Nosé approach (and its corresponding real-time version, Nosé-Poincaré) is only the simplest realization of a vast range of generalized thermostating Hamiltonians. In particular, we show below... reason for the difficulty encountered in thermostating molecular systems with stiff bonds that are weakly coupled to the rest of the system [10]. The unthermostated Hamiltonian for this system is $H(p,q) = \frac{p^2}{2} + \frac{q^2}{2}$, where we have assumed unit mass...
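As a deliberately minimal illustration of thermostating this oscillator, here is a sketch of the closely related Nosé-Hoover dynamics for the unit-mass harmonic Hamiltonian above. The explicit Euler integrator and all parameter values are illustrative choices for this sketch, not the method of the excerpt (which concerns Nosé/Nosé-Poincaré and generalized Hamiltonians).

```python
# Nose-Hoover thermostating of the unit-mass harmonic oscillator
# H(p, q) = p^2/2 + q^2/2.  Equations of motion:
#   dq/dt = p,   dp/dt = -q - xi*p,   dxi/dt = (p^2 - kT)/Q
def nose_hoover(q, p, kT=1.0, Q=1.0, dt=1e-3, steps=10_000):
    xi = 0.0                        # thermostat friction variable
    for _ in range(steps):
        # explicit Euler step -- adequate for illustration only
        q, p, xi = (q + dt * p,
                    p + dt * (-q - xi * p),
                    xi + dt * (p * p - kT) / Q)
    return q, p, xi

print(nose_hoover(1.0, 0.0))
```

For a single harmonic oscillator this dynamics is famously non-ergodic, the same weak-coupling pathology the excerpt attributes to stiff bonds, which is precisely what motivates the generalized thermostating Hamiltonians it goes on to develop.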

A method and apparatus for more effectively squeezing moisture from wood chips and/or other "green" biomass materials. A press comprising a generally closed chamber having a laterally movable base at the lower end thereof, and a piston or ram conforming in shape to the cross-section of the chamber, is adapted to periodically receive a charge of biomass material to be dehydrated. The ram is forced against the biomass material with sufficient force to compress the biomass and to crush the matrix in which moisture is contained within the material, with the face of the ram being configured to cause a preferential flow of moisture from the center of the mass outwardly to the grooved walls of the chamber. Thus, the moisture is effectively squeezed from the biomass and flows through the grooves formed in the walls of the chamber to a collecting receptacle, and is not drawn back into the mass by capillary action when the force is removed from the ram.

Numerical Modeling of Hydro-acoustic Waves in Weakly Compressible Fluid. Ali Abdolali, James T. ..., Department of Civil Engineering, University of Roma Tre. Low-frequency hydro-acoustic waves are precursors of tsunamis. Detection of hydro-acoustic waves generated by the water column compression triggered by sudden seabed...

Compression Behaviour of Natural and Reconstituted Clays. Zhen-Shun Hong, Ling-Ling Zeng, Yu... the effect of the starting point on the compressibility of natural and reconstituted clays. It is found that ... of reconstituted clays is controlled solely by the water content at the remoulded yield stress and the liquid limit...

A Method for Compressing Test Data Based on Burrows-Wheeler Transformation. Takahiro J. Yamaguchi. Abstract: The overall throughput of automatic test equipment (ATE) is affected by the download time of test data. An effective approach to the reduction of the download time is to compress test data before...
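The snippet's key ingredient, the Burrows-Wheeler transform itself, can be sketched in a few lines of Python. This naive rotation sort is O(n² log n) and for illustration only; a production test-data compressor would pair a suffix-array BWT with a move-to-front and entropy-coding stage.

```python
# Forward Burrows-Wheeler transform: sort all rotations of the input
# (with a unique sentinel appended) and emit the last column, which
# groups similar contexts together and so compresses well downstream.
def bwt(s, sentinel="$"):
    s = s + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print(bwt("banana"))  # 'annb$aa'
```

The transform is invertible given the sentinel, which is what makes it usable as a lossless preprocessing step for test vectors.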

Compressing Magnetic Fields with High-Energy Lasers. J. P. Knauer, O. V. Gotchev, P. Y. ..., Rochester, New York 14623, USA; Department of Mechanical Engineering, University of Rochester, 250 East... Laser-driven magnetic-field compression producing a magnetic field of tens of megagauss is reported for the first time.

Improving MPEG-4 Coding Performance by Jointly Optimising Compression and Blocking Effect... Department of ... and Information Engineering, The Hong Kong Polytechnic University, Hong Kong. ABSTRACT: In most current block... taken into account in the compression, and the two processes can be jointly optimised. An example is also provided...

Stress State of Diamond and Gold under Nonhydrostatic Compression to 360 GPa. Jianghua Wang, ... Polycrystalline diamond can support a microscopic deviatoric stress ... for the polycrystalline gold under the highest load ... deviatoric stress of polycrystalline diamond and gold in the DAC under nonhydrostatic compression to above 300 GPa. The influence...

Density-Dependent Exchange Contribution to ∂μ/∂n and Compressibility in Graphene. E. H. Hwang, Ben... ∂μ/∂n (with μ the chemical potential and n the electron density), which is associated with the compressibility, in graphene as a function ... in the quasiparticle velocity of intrinsic graphene disappears in the extrinsic case. The calculated renormalized ∂μ/∂n...

Abstract: The new heat pump proposed here is intermediate between a compression heat pump and an absorption heat pump. Like the absorption heat pump, it uses a binary mixture made of a volatile solvent and a non-volatile solute. Like the compression heat pump, it uses...

... is not the paramount issue. Independent auditors should have no obvious or hidden agenda. CAT compressed air audit services are, in essence, holistic. The analysis of your facility's compressed air and gas systems is all-inclusive and consists of an engineered...

On the Performance of Lossy Compression Schemes for Energy-Constrained Sensor Networking. Davide ..., University of Padova. Lossy temporal compression is key for energy-constrained wireless sensor networks (WSNs), where ... complexity and energy consumption. Specifically, we first carry out a performance evaluation of existing...

A Scalable Fully Implicit Compressible Euler Solver for Mesoscale Nonhydrostatic Simulation. ... for the mesoscale nonhydrostatic simulation of atmospheric flows governed by the compressible Euler equations ... is of interest, as in mesoscale and cloud-resolving atmospheric simulations fast and efficient solution...

Coupling of Darcy-Forchheimer and Compressible Navier-Stokes Equations with Heat Transfer. M. Amara, ... are respectively described by the Darcy-Forchheimer and the compressible Navier-Stokes equations, together ... coordinates and consisting of the Darcy-Forchheimer equation coupled with an exhaustive energy balance, has...

In this paper, we are first interested in the compressible Navier-Stokes equations with density-dependent viscosities in bounded domains with non-homogeneous Dirichlet conditions. We study the well-posedness of such models with non-constant coefficients in the non-stationary and stationary cases. We apply the latter result in the thin-domain context, justifying the compressible Reynolds equations.

We present a simple and robust technique to retrieve the phase of ultrashort laser pulses, based on a chirped-mirror and glass-wedges compressor. It uses the compression system itself as a diagnostic tool, thereby eliminating the need for complementary diagnostics. We used this technique to compress and characterize 7.1 fs laser pulses from an ultrafast laser oscillator.

Compressing Rectilinear Pictures and Minimizing Access Control Lists. David L. Applegate, Gruia ... a model for the problem of minimizing access control lists (ACLs) in network routers, a model that also has applications to rectilinear picture compression and figure drawing in common graphics software...

Hydro-Mechanical Loading and Compressibility of Fibrous Media for Resin Infusion Processes. P. ... investigating the compressibility behaviour of composite preforms with a view to modelling resin infusion ... The need for manufacturing large composite parts in the aeronautic industry is ever increasing...

Compressible Air Cushioning in Liquid-Solid Impacts. Peter D. Hicks, Department of Mechanical ...; ... of Mathematics, University of East Anglia, Norwich, NR4 7TJ, UK (r.purvis@uea.ac.uk). Abstract: Air cushioning ... the influence of air compressibility. Building on earlier incompressible analyses, a local asymptotic model...

... is numerically studied for the compression of ultra-short pulses. Silica is therefore the only solid material ... are used in the compression of ultra-short pulses amplified by the so-called "frequency drift" (chirped-pulse) method ... laser-induced damage threshold. In comparison to gratings engraved on a dielectric stack (MLD)...

This report discusses the theory, implementation, and performance of a combinatorial fuzzy-binary and-or (FBAR) algorithm for lossless data compression (LDC) and decompression (LDD) on 8-bit characters. Combinatorial pairwise flags are utilized as new zero/nonzero and impure/pure bit-pair operators, whose combination forms a 4D hypercube to compress a sequence of bytes. The compressed sequence is stored in a grid file of constant size. Decompression uses a fixed-size translation table (TT) to access the grid file during I/O data conversions. Compared to other LDC algorithms, double-efficient (DE) entropies denoting 50% compression with reasonable bitrates were observed. Double-extending the usage of the TT component in the code exhibits a universal predictability via its negative growth of entropy for LDCs > 87.5% compression.

Phaseouts of CFCs and HCFCs to protect the stratospheric ozone layer have prompted many developments in replacement or alternative technologies for heat pumping. Some of this effort has been of an ``evolutionary`` nature, in which the designs of conventional vapor compression systems were adapted to use chlorine-free refrigerants. Other alternatives are more radical departures from conventional practice, such as operating above the critical point of an alternative refrigerant. Revolutionary changes in technology based on cycles or principles not commonly associated with refrigeration have also attracted interest. Many of these technologies are being touted because they are ``ozone-safe`` or because they do not use greenhouse gases as refrigerants. Basic principles and some advantages and disadvantages of each technology are discussed in this paper.

Dynamic time expansion or compression of a small-amplitude input signal generated with an initial time scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source having a bias signal amplitude that is large relative to the input signal, to vary the refractive index and the concomitant speed of propagation of the nonlinear waveguide, and to an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal, together with the input signal, alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide to expand or contract the initial time scale of the small-amplitude input signal.

A stem (34) extends from a second part (30) through a hole (28) in a first part (22). A groove (38) around the stem provides a non-threaded contact surface (42) for a ring element (44) around the stem. The ring element exerts an inward force against the non-threaded contact surface at an angle that creates axial tension (T) in the stem, pulling the second part against the first part. The ring element is formed of a material that shrinks relative to the stem by sintering. The ring element may include a split collet (44C) that fits partly into the groove, and a compression ring (44E) around the collet. The non-threaded contact surface and a mating distal surface (48) of the ring element may have conic geometries (64). After shrinkage, the ring element is locked onto the stem.

The Dark Matter Particle Explorer (DAMPE) is an upcoming scientific satellite mission for high-energy gamma-ray, electron, and cosmic-ray detection. The silicon tracker (STK) is a subdetector of the DAMPE payload with excellent position resolution (readout pitch of 242 µm); it measures the incident direction of particles, as well as their charge. The STK consists of 12 layers of silicon micro-strip detectors (SMDs), equivalent to a total silicon area of 6.5 m$^2$. The STK has 73,728 readout channels in total, which produce a huge amount of raw data to be processed. In this paper, we focus on the on-board data compression algorithm and procedure in the STK, which was initially verified by cosmic-ray measurements.
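The abstract does not describe the STK algorithm itself. As a hedged illustration of the kind of on-board data reduction commonly used for silicon strip detectors, here is a zero-suppression sketch in Python; the threshold, the ADC values, and the (strip address, value) output format are invented for this example and are not DAMPE's actual scheme.

```python
# Zero-suppression: instead of shipping every channel to the ground,
# keep only (strip address, ADC value) pairs above a noise threshold.
# For sparse occupancy this cuts the raw data volume dramatically.
def zero_suppress(adc, threshold=10):
    return [(i, v) for i, v in enumerate(adc) if v > threshold]

raw = [2, 3, 1, 55, 160, 42, 2, 0, 1, 3]   # one hit cluster around strip 4
print(zero_suppress(raw))  # [(3, 55), (4, 160), (5, 42)]
```

With 73,728 channels and typically only a handful of hit clusters per event, address-plus-value readout of this kind is why on-board compression is feasible at all.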

In 2003, Raytheon Company upgraded the efficiency of the compressed air system at its Integrated Air Defense Center in Andover, Massachusetts, to save energy and reduce costs. Worn compressors and dryers were replaced, a more sophisticated control strategy was installed, and an aggressive leak detection and repair effort was carried out. The total cost of these improvements was $342,000; however, National Grid, a utility service provider, contributed a $174,000 incentive payment. Total annual energy and maintenance cost savings are estimated at $141,500, and energy savings are nearly 1.6 million kWh. This case study was prepared for the U.S. Department of Energy's Industrial Technologies Program.

Properties of degenerate hydrogen and deuterium (D) at pressures of the order of terapascals are of key interest to planetary science and inertial confinement fusion. In order to recreate these conditions in the laboratory, we present a scheme in which a metal liner drives a cylindrically convergent quasi-isentropic compression in a D fill. We first determined an external pressure history for driving a self-similar implosion of a D shell from a fictitious-flow simulation [D. S. Clark and M. Tabak, Nucl. Fusion 47, 1147 (2007)]. It is then shown that this D implosion can be recreated inside a beryllium liner by shaping the current pulse. For a peak current of 10.8 MA, cold and nearly isochoric D is assembled at around 12,500 kg/m$^3$. Finally, our two-dimensional Gorgon simulations show the robustness of the implosion method to the magneto-Rayleigh-Taylor instability when a sufficiently thick liner is used.

Stirling engines have many unique advantages, including higher thermal efficiencies, preferable exhaust gas characteristics, multi-fuel capability, and low noise and vibration. On the other hand, heat pump systems are very attractive for space heating, space cooling, and industrial use because of their potential to save energy. In particular, Stirling-driven vapor-compression (SDVC) systems offer many environmental merits. This paper introduces a design method for the SDVC based on reliable mathematical models of the Stirling and Rankine cycles with reliable thermophysical information for refrigerants. The model treats a kinematic Stirling engine and a scroll compressor coupled by a belt. Some experimental coefficients are used to formulate the SDVC items. The obtained results show the performance behavior of the SDVC in detail. The measured performance of the actual system agrees with the calculated results. Furthermore, the calculated results indicate attractive SDVC performance using alternative refrigerants.

In this paper we study the Leray weak solutions of the incompressible Navier-Stokes equations in an exterior domain. We describe, in particular, a hyperbolic version of the so-called artificial compressibility method investigated by J. L. Lions and Temam. These approximations in general exhibit a lack of strong convergence due to the presence of acoustic waves. In this paper we face this difficulty by taking care of the dispersive nature of these waves by means of the Strichartz estimates for the wave equations satisfied by the pressure. We decompose the pressure into different acoustic components, each of which satisfies a specific initial boundary value problem. The strong convergence analysis of the velocity field is then achieved by using the associated Leray-Hodge decomposition.

The response of a reticulated, elastomeric foam filled with colloidal silica under dynamic compression is studied. Under compression beyond local strain rates on the order of 1 s$^{-1}$, the non-Newtonian, colloidal ...

Integrating a hydrothermal diamond anvil cell (HDAC) and a focused high-energy x-ray beam from the superconducting wiggler X17 beamline at the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory (BNL), we have successfully collected high-quality total x-ray scattering data for liquid gallium. The experiments were conducted over a pressure range from 0.1 GPa up to 2 GPa at ambient temperature. For the first time, pair distribution functions (PDFs) for liquid gallium at high pressure were derived up to 10 Å. Liquid gallium structure has been studied by x-ray absorption (Di Cicco & Filipponi, 1993; Wei et al., 2000; Comez et al., 2001), x-ray diffraction (Waseda & Suzuki, 1972), and molecular dynamics simulation (Tsay, 1993; Hui et al., 2002). These previous reports have focused on the first-nearest-neighbor structure, which tells us little about the atomic arrangement outside the first shell in non-crystalline materials. This study focuses on the structure of liquid gallium and the atomic structure change due to compression. The PDF results show that the observed atomic distance of the first nearest neighbor at 2.78 Å (first G(r) peak and its shoulder at the higher-Q position) is consistent with previous x-ray absorption studies (2.76 Å; Comez et al., 2001). We have also observed that the first-nearest-neighbor peak position did not change with increasing pressure, while the positions of the farther peaks in the intermediate distance range decreased with increasing pressure. This leads to the conclusion that 'locally rigid units' may exist in the liquid. With the addition of reverse Monte Carlo modeling, we have observed that the coordination number in the locally rigid units increases with pressure. The bulk modulus of liquid gallium derived from the volume compression curve at ambient temperature (300 K) is 12.1(6) GPa.

1. Definition of Subject. The purpose of this text is to provide an introduction to aspects of oceanic general circulation models (OGCMs), an important component of a Climate System or Earth System Model (ESM). The role of the ocean in ESMs is described in Chapter XX (EDITOR: PLEASE FIND THE COUPLED CLIMATE or EARTH SYSTEM MODELING CHAPTERS). The emerging need for understanding the Earth's climate system, and especially for projecting its future evolution, has encouraged scientists to explore the dynamical, physical, and biogeochemical processes in the ocean. Understanding the role of these processes in the climate system is an interesting and challenging scientific subject. For example, the research question of how much extra heat or CO2 generated by anthropogenic activities can be stored in the deep ocean is not only scientifically interesting but also important for projecting the future climate of the Earth. Thus, OGCMs have been developed and applied to investigate the various oceanic processes and their role in the climate system.

Modelling of the Effects of Friction and Compression on Explosives (ESGI80). Problem presented by John Curtis, Atomic Weapons Establishment. ... based on the compression of a sample of the explosive. The study group identified frictional heating...

This project has documented and demonstrated the feasibility of technologies and operational choices for companies that operate the large installed fleet of integral engine compressors in pipeline service. Continued operation of this fleet is required to meet the projected growth of the U.S. gas market. Applying the project results will meet the goals of the DOE-NETL Natural Gas Infrastructure program to enhance integrity, extend life, improve efficiency, and increase capacity, while managing NOx emissions. These benefits will translate into lower-cost, more reliable gas transmission, and options for increasing deliverability from the existing infrastructure on high-demand days. The power cylinders on large-bore slow-speed integral engine/compressors do not in general combust equally. Variations in cylinder pressure between power cylinders occur cycle to cycle. These variations affect both individual cylinder performance and unit average performance. The magnitude of the variations in power cylinder combustion is dependent on a variety of parameters, including air/fuel ratio. Large variations in cylinder performance and peak firing pressure can lead to detonation and misfires, both of which can be damaging to the unit. Reducing the variation in combustion pressure, and moving the high- and low-performing cylinders closer to the mean, is the goal of engine balancing. The benefit of improving the state of the engine ''balance'' is a small reduction in heat rate and a significant reduction in both crankshaft strain and emissions. A new method invented during the course of this project is combustion pressure ratio (CPR) balancing. This method is more effective than current methods because it naturally accounts for differences in compression pressure, which result from cylinder-to-cylinder differences in the amount of air flowing through the inlet ports and trapped at port closure.
It also helps avoid compensation for low compression pressure through the addition of excess fuel to equalize peak firing pressure, even if some of the compression pressure differences are attributable to differences in cylinder and piston geometry, clearance, and kinematics. The combination of high-pressure fuel injection and turbocharging should produce better mixing of fuel and air in lean mixtures. Test results documented modest improvements in heat rate and efficiency and significant improvements in emissions. The feasibility of closed-loop control of the waste-gate setting, which will maintain an equivalence ratio set point, has been demonstrated. This capability allows more direct tuning to enhance combustion stability, heat rate, or emissions. The project has documented the strong dependence of heat rate on load. The feasibility of directly measuring power and torque using the GMRC Rod Load Monitor (RLM) has been demonstrated. This capability helps to optimize heat rate while avoiding overload. The crankshaft Strain Data Capture Module (SDCM) has shown sensitivity to changes in operating conditions and how they influence crankshaft bending strain. The results indicate that balancing reduces the frequency of high-strain excursions, advanced timing directly increases crankshaft dynamic strain, reduced speed directly reduces strain, and high-pressure fuel injection reduces crankshaft strain slightly. The project demonstrated that when the timing is advanced, the heat rate is reduced, and when the timing is retarded, the heat rate is increased. One reason why timing is not advanced as much as it might be is the potential for detonation on hot days. A low-cost knock detector was demonstrated that allowed active control of timing so that the heat rate benefit could be realized safely.
High flow-resistance losses in the pulsation control systems installed on some compressors have been shown to hurt the efficiency of both the compressor and the engine/compressor system. Improved pulsation control systems have the potential to recover almost 10% of available engine power. Integrity enhancements and reduced component failure probability will enhance aggregate...
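A minimal numerical sketch of the CPR-balancing idea described above (all pressure values, names, and the proportional trim rule are invented for illustration; this is not GMRC's implementation):

```python
# Combustion pressure ratio (CPR) per cylinder:
#   CPR = peak firing pressure / compression pressure.
# Balancing on CPR rather than peak pressure alone avoids masking a
# low-compression cylinder by simply feeding it excess fuel.
peak_psi = [1180.0, 1250.0, 1100.0, 1230.0]   # per-cylinder peak firing pressure
comp_psi = [ 590.0,  610.0,  580.0,  600.0]   # per-cylinder compression pressure

cpr = [pk / cp for pk, cp in zip(peak_psi, comp_psi)]
target = sum(cpr) / len(cpr)                  # balance toward the fleet mean

# fractional fuel trim toward the mean CPR (proportional rule, illustrative)
trims = [(target - r) / target for r in cpr]
for cyl, (r, t) in enumerate(zip(cpr, trims), start=1):
    print(f"cyl {cyl}: CPR={r:.3f} trim={t:+.1%}")
```

By construction the trims sum to zero, so total fueling is redistributed rather than increased, which is the point of moving high and low cylinders toward the mean.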

In the United States, nearly 60,000 patients per day receive general anesthesia for surgery.1 General anesthesia is a drug-induced, reversible condition that includes specific behavioral and physiological traits  ...

This paper examines whether lossy compression can be used effectively in physics simulations as a possible strategy to combat the expected data-movement bottleneck in future high-performance computing architectures. We show that, for the codes and simulations we tested, compression levels of 35X can be applied without causing significant changes to important physical quantities. Rather than applying signal-processing error metrics, we utilize physics-based metrics appropriate for each code to assess the impact of compression. We evaluate three different simulation codes: a Lagrangian shock-hydrodynamics code, an Eulerian higher-order hydrodynamics turbulence modeling code, and an Eulerian coupled laser-plasma interaction code. We compress relevant quantities after each time step to approximate the effects of tightly coupled compression and study the compression rates to estimate memory and disk-bandwidth reduction. We find that the error characteristics of compression algorithms must be carefully considered in the context of the underlying physics being modeled.
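As a self-contained illustration of judging lossy compression with a physics-style conservation metric rather than a signal-processing one (the field, the uniform quantizer, and the total-"mass" metric below are invented stand-ins, not the paper's codes or metrics):

```python
# Lossily "compress" a field by uniform quantization, then judge the
# result by how well a physically meaningful aggregate (here the
# field's total "mass", i.e. its sum) is preserved, rather than by a
# generic pointwise error such as RMSE.
import math

def quantize(field, step):
    # uniform scalar quantization: the crudest possible lossy compressor
    return [round(v / step) * step for v in field]

field = [math.sin(0.1 * i) ** 2 for i in range(1000)]   # stand-in simulation field
lossy = quantize(field, step=0.01)

mass, mass_lossy = sum(field), sum(lossy)
rel_err = abs(mass - mass_lossy) / mass
print(f"relative mass error: {rel_err:.2e}")
```

Because the quantization errors largely cancel in the sum, the conserved aggregate can remain accurate even at coarse quantization, which is exactly why physics-based metrics can tolerate compression levels that would look alarming under pointwise error norms.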