Sample records for maximum horizontal compressive

For application to industrial heating of large pools by immersed heat exchangers, the so-called maximum allowable (or "critical") heat flux is studied for unconfined tube bundles aligned horizontally in a pool without forced flow. In general, we consider boiling after the pool reaches its saturation temperature, rather than the sub-cooled pool boiling expected during early stages of transient operation. A combination of literature review and simple approximate analysis has been used. To date, our main conclusion is that estimates of q″chf are highly uncertain for this configuration.

The main approach behind the present numerical investigation is to estimate the mass flow rate of air sucked into a horizontal open-ended louvered pipe from the surrounding atmosphere. The investigation has been performed by solving the conservation equations for mass, momentum and energy, along with a two-equation k-ε turbulence model, for a louvered horizontal cylindrical pipe by the finite volume method. It has been found that the mass suction rate of air into the pipe increases with increases in the louvered opening area and the number of nozzles used. Keeping other parameters fixed, for a given mass flow rate there exists an optimum nozzle protrusion for highest mass suction into the pipe. It was also found that increasing the pipe diameter increases the suction mass flow rate of air.

The monthly average daily values of the extraterrestrial irradiation on a horizontal surface (H0) and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by scientists each time they are needed, using approximate short-cut methods. Computations of these values have been made once and for all for latitudes from 60 deg. N to 60 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables should avoid the need for repeated approximate calculations and serve as a useful ready reference for solar energy scientists and engineers. (author)
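The tabulated quantities follow from standard solar-geometry relations (Cooper's declination approximation and the sunset hour angle). A minimal sketch of the computation, using the usual textbook formulas rather than the authors' exact procedure, might look like this:

```python
import math

GSC = 1367.0  # solar constant, W/m^2

def declination_deg(n):
    """Cooper's approximation for solar declination on day-of-year n."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))

def day_length_hours(lat_deg, n):
    """Maximum possible sunshine duration (astronomical day length), hours."""
    phi = math.radians(lat_deg)
    delta = math.radians(declination_deg(n))
    cos_ws = -math.tan(phi) * math.tan(delta)
    cos_ws = max(-1.0, min(1.0, cos_ws))      # clamp for polar day/night
    ws = math.degrees(math.acos(cos_ws))      # sunset hour angle, degrees
    return 2.0 * ws / 15.0

def h0_daily(lat_deg, n):
    """Daily extraterrestrial irradiation on a horizontal surface, J/m^2."""
    phi = math.radians(lat_deg)
    delta = math.radians(declination_deg(n))
    cos_ws = max(-1.0, min(1.0, -math.tan(phi) * math.tan(delta)))
    ws = math.acos(cos_ws)                    # sunset hour angle, radians
    e0 = 1.0 + 0.033 * math.cos(math.radians(360.0 * n / 365.0))
    return (24.0 * 3600.0 * GSC / math.pi) * e0 * (
        math.cos(phi) * math.cos(delta) * math.sin(ws)
        + ws * math.sin(phi) * math.sin(delta))
```

For example, `day_length_hours(0.0, 172)` gives 12 hours at the equator, and the clamp yields 24 hours of polar day at high northern latitudes near the June solstice.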

The objective of this paper is to reconsider the Maximum Entropy Production (MEP) conjecture in the context of a very simple two-dimensional zonal-vertical climate model able to represent the total material entropy production due simultaneously to both horizontal and vertical heat fluxes. MEP is applied first to a simple four-box model of climate which accounts for both horizontal and vertical material heat fluxes. It is shown that, under the condition of fixed insolation, a MEP solution is found with reasonably realistic temperatures and heat fluxes, thus generalising results from independent two-box horizontal or vertical models. It is also shown that the meridional and vertical entropy production terms are independently involved in the maximisation, and thus MEP can be applied to each subsystem with fixed boundary conditions. We then extend the four-box model by increasing its resolution, and compare it with GCM output. A MEP solution is found which is fairly realistic as far as the horizontal large-scale organisation of the climate is concerned, whereas the vertical structure looks unrealistic and presents seriously unstable features. This study suggests that the thermal meridional structure of the atmosphere is predicted fairly well by MEP once the insolation is given, but the vertical structure of the atmosphere cannot be predicted satisfactorily by MEP unless constraints are imposed to represent the determination of longwave absorption by water vapour and clouds as a function of the state of the climate. Furthermore, an order-of-magnitude estimate of the contributions to the material entropy production due to horizontal and vertical processes within the climate system is provided using two different methods. In both cases we found that approximately 40 mW m⁻² K⁻¹ of material entropy production is due to vertical heat transport and 5-7 mW m⁻² K⁻¹ to horizontal heat transport.

The paper is concerned with the analysis of rigid particle and compressible gas bubble motion in a horizontally oscillating vessel with a compressible fluid. A nonlinear differential equation describing motion of inclusions with respect to the vessel is derived and solved by the method of direct...... of the bubbles which are affected by the negligible vibrational force is found. Also, an approximate expression has been obtained for the average velocity of the bubble's motion in the fluid; the relationship between this velocity, the bubble radius and the vibration parameters has been revealed. A simple physical explanation...

The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points, provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal gradient locations to be fully located, and it can be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal gradient locations within the calculation grid. This additional condition improves the method's algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient can be helpful for connecting the edges of complicated source bodies.
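The core of the Blakely-Simpson search can be sketched as follows. This is an illustrative reimplementation, not the published code: the 3 × 3 comparison flags a point when it exceeds both neighbors along at least `n_min` of the four directions, and a second-order polynomial through the trio refines the peak magnitude (the choice of `n_min` and the bookkeeping are assumptions for the sketch):

```python
import numpy as np

def horizontal_gradient(field, dx=1.0, dy=1.0):
    """Magnitude of the horizontal gradient of a gridded field."""
    gy, gx = np.gradient(field, dy, dx)
    return np.hypot(gx, gy)

def gradient_maxima(g, n_min=2):
    """Flag a grid point when its gradient magnitude g exceeds both neighbors
    in at least n_min of the four directions of the 3x3 neighborhood, and
    refine the peak value with a second-order polynomial through the trio."""
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]  # row, column, two diagonals
    peaks = []
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            count, best = 0, g[i, j]
            for di, dj in dirs:
                a, b, c = g[i - di, j - dj], g[i, j], g[i + di, j + dj]
                if b > a and b > c:
                    count += 1
                    denom = a - 2.0 * b + c  # negative for a concave trio
                    if denom < 0.0:
                        # vertex value of the parabola through (-1,a),(0,b),(1,c)
                        best = max(best, b - (a - c) ** 2 / (8.0 * denom))
            if count >= n_min:
                peaks.append((i, j, best))
    return peaks
```

Relaxing `n_min` (the additional condition the paper proposes is in this spirit, though its exact form is not given in the abstract) admits more boundary points along edges of complicated sources.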

Brazilian design code ABNT NBR6118:2003 - Design of Concrete Structures - Procedures [1] proposes the use of simplified models for the consideration of non-linear material behavior in the evaluation of horizontal displacements in buildings. These models penalize the stiffness of columns and beams, representing the effects of concrete cracking and avoiding costly physically non-linear analyses. The objectives of the present paper are to investigate the accuracy and uncertainty of these simplified models, as well as to evaluate the reliabilities of structures designed following ABNT NBR6118:2003 [1] in the service limit state for horizontal displacements. Model error statistics are obtained from 42 representative plane frames. The reliabilities of three typical (4-, 8- and 12-floor) buildings are evaluated, using the simplified models and a rigorous, physically and geometrically non-linear analysis. Results show that the 70/70 (column/beam) stiffness reduction model is more accurate and less conservative than the 80/40 model. Results also show that the ABNT NBR6118:2003 [1] design criteria for horizontal displacement limit states (masonry damage according to ACI 435.3R-68 (1984) [10]) are conservative, and result in reliability indexes which are larger than those recommended in EUROCODE [2] for irreversible service limit states.

Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of errors that vary slowly and rapidly compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for the maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both the thermal spread and the velocity errors. The effects of the

The composition of naturally cemented deposits is very complicated; thus, estimating the maximum shear modulus (Gmax, or shear modulus at very small strains) of cemented sands using the previous empirical formulas is very difficult. The purpose of this experimental investigation is to evaluate the effects of particle size and cement type on the Gmax and unconfined compressive strength (qucs) of cemented sands, with the ultimate goal of estimating Gmax of cemented sands using qucs. Two sands were artificially cemented using Portland cement or gypsum under varying cement contents (2%-9%) and relative densities (30%-80%). Unconfined compression tests and bender element tests were performed, and the results from previous studies of two cemented sands were incorporated in this study. The results of this study demonstrate that the effect of particle size on the qucs and Gmax of four cemented sands is insignificant, and the variation of qucs and Gmax can be captured by the ratio between volume of void and volume of cement. qucs and Gmax of sand cemented with Portland cement are greater than those of sand cemented with gypsum. However, the relationship between qucs and Gmax of the cemented sand is not affected by the void ratio, cement type and cement content, revealing that Gmax of the complex naturally cemented soils with unknown in-situ void ratio, cement type and cement content can be estimated using qucs.

A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to ensure the sufficient condition, applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
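The generic idea behind such slope limiters (not the specific CE/SE limiter of this paper, whose details are not in the abstract) is to rescale each cell's reconstruction slope so that the interface values stay within the bounds set by neighboring cell averages, which is what enforces a discrete maximum principle. A minimal 1-D sketch:

```python
import numpy as np

def limit_slopes(u, s):
    """Scale cell slopes s so that the reconstructed interface values
    u[i] -/+ s[i]/2 stay within the local bounds set by the neighboring
    cell averages u[i-1], u[i], u[i+1] (a discrete maximum principle)."""
    s_lim = np.asarray(s, dtype=float).copy()
    for i in range(1, len(u) - 1):
        lo = min(u[i - 1], u[i], u[i + 1])
        hi = max(u[i - 1], u[i], u[i + 1])
        half = 0.5 * abs(s_lim[i])       # excursion of the linear profile
        if half > 0.0:
            room = min(hi - u[i], u[i] - lo)
            s_lim[i] *= min(1.0, room / half)  # shrink slope just enough
    return s_lim
```

At a local extremum the allowed room is zero, so the slope is flattened entirely; elsewhere the slope is reduced only as much as needed, which preserves accuracy in smooth regions.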

Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
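The stated scaling, an inverse geometric mean of the two relative errors, can be written as a one-line estimate. The O(1) prefactor `k` is an assumption here; the abstract gives only the proportionality:

```python
import math

def max_compression_ratio(tilt_error, thermal_spread, k=1.0):
    """Order-of-magnitude estimate of the maximum compression ratio when
    voltage errors dominate: inversely proportional to the geometric mean
    of the relative tilt error and the relative intrinsic energy spread.
    The prefactor k is an unknown O(1) constant (assumption)."""
    return k / math.sqrt(tilt_error * thermal_spread)
```

For instance, a 1% tilt error combined with a 0.01% intrinsic spread caps the compression near a factor of 1000 (times the unknown prefactor).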

A simulated full-scale plutonium storage cubicle with 22 horizontally positioned and heated 3013 canisters is proposed to confirm the effectiveness of natural circulation. Temperature and airflow measurements will be made for different heat generation and cubicle door configurations. Comparisons will be made to computer-based thermal-hydraulic models.

We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated by theoretical argument. We also discuss the relation between the belief-propagation-based reconstruction algorithm introduced in preceding works and our approach.
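The O(N²)-per-step cost contrasted with O(N³) LP solvers is characteristic of first-order ℓ1 methods whose per-iteration work is a pair of matrix-vector products. As a rough illustration (ISTA is a stand-in here, not the authors' posterior-maximization algorithm), a sparse vector can be recovered from underdetermined linear measurements like so:

```python
import numpy as np

def ista(A, y, lam, n_iter=3000):
    """Iterative soft-thresholding for min_x 0.5*||y - Ax||^2 + lam*||x||_1.
    Each iteration costs two matrix-vector products, i.e. O(M*N)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x + A.T @ (y - A @ x) / L          # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy compressed-sensing instance: 50 measurements of a 4-sparse signal in R^100.
rng = np.random.default_rng(0)
M, N, k = 50, 100, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x_true
x_hat = ista(A, y, lam=1e-3)
```

With a small regularization weight and noiseless measurements, `x_hat` lands close to `x_true`; the successful-reconstruction region as a function of sparsity and measurement ratio is exactly what the abstract's theoretical argument characterizes.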

This work is aimed at the study of the maximum available work and irreversibility (mixing, combustion, unburned fuel, and friction) of a dual-fuel diesel engine (hydrogen-diesel) using exergy analysis. The maximum available work increased with H2 addition due to a reduction in combustion irreversibility resulting from lower entropy generation. The irreversibility of unburned fuel also decreased with H2 because of the higher combustion temperature, whereas H2 had no effect on mixing and friction irreversibility. The maximum available work of the diesel engine at rated load increased from 29% in conventional base mode (without H2) to 31.7% in dual-fuel mode (18% H2 energy share), whereas the total irreversibility of the engine decreased from 41.2% to 39.3%. The energy efficiency of the engine with H2 increased by about 10%, with a 36% reduction in CO2 emission. The developed methodology could also be applied to find the effect and scope of different technologies, including exhaust gas recirculation and turbocharging, on the maximum available work and energy efficiency of diesel engines. - Highlights: • Energy efficiency of the diesel engine increases with hydrogen under dual-fuel mode. • Maximum available work of the engine increases significantly with hydrogen. • Combustion and unburned fuel irreversibility decrease with hydrogen. • No significant effect of hydrogen on mixing and friction irreversibility. • Reduction in CO2 emission along with HC, CO and smoke emissions

Structural ceramics are attracting attention in the development of space planes, aircraft and nuclear fusion reactors because they have excellent wear-resistant and heat-resistant characteristics. However, in some applications it is anticipated that they will be exposed to very-high-temperature environments of the order of thousands of degrees. Therefore, it is very important to investigate their thermal shock characteristics. In this report, the distributions of temperatures and thermal stresses of cylindrically shaped ceramics under irradiation by laser beams are discussed using the finite-element computer code (MARC) with arbitrary quadrilateral axisymmetric ring elements. The relationships between spot diameters of laser beams and maximum values of compressive thermal stresses are derived for various power densities. From these relationships, a critical fracture curve is obtained, and it is compared with the experimental results. (author)

Purpose: The prevalence of compression garment (CG) use is increasing, with athletes striving to take advantage of the purported benefits to recovery and performance. Here, we investigated the effect of CG on muscle force and movement velocity performance in athletes. Methods: Ten well-trained male rugby athletes wore a wrestling-style CG suit applying 13-31 mmHg of compressive pressure during a training circuit in a repeated-measures crossover design. Force and velocity data were collected during a 5-s isometric mid-thigh pull (IMTP) and repeated countermovement jumps (CMJ), respectively; the time to complete a 5-m horizontal loaded sled push was also measured. Results: IMTP peak force was enhanced in the CG condition by 139 ± 142 N (effect size [ES] = 0.36). Differences in CMJ peak velocity (ES = 0.08) and loaded sled-push sprint time between the conditions were trivial (ES = −0.01). A qualitative assessment of the effects of CG wear suggested that harm was unlikely in the CMJ and sled push, while a beneficial effect in the CMJ was possible, but not likely. Half of the athletes perceived a functional benefit in the IMTP and CMJ exercises. Conclusion: Consistent with other literature, there was no substantial effect of wearing a CG suit on CMJ and sprint performance. The improvement in peak force generation capability in an IMTP may be of benefit to rugby athletes involved in scrummaging or lineout lifting. The mechanism behind the improved force transmission is unclear, but may involve alterations in neuromuscular recruitment and proprioceptive feedback.

In this paper, we consider the wind farm layout optimization problem using a genetic algorithm. Both Horizontal-Axis Wind Turbines (HAWT) and Vertical-Axis Wind Turbines (VAWT) are considered. The goal of the optimization problem is to position the turbines within the wind farm such that the wake effects are minimized and the power production is maximized. Reasonably accurate modeling of the turbine wake is critical in determining the optimal layout of the turbines and the power generated. For HAWT, two wake models are considered; both are found to give similar answers. For VAWT, a very simple wake model is employed.
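The abstract does not name the wake models used; a common choice in HAWT layout studies of this kind is the Jensen (Park) top-hat model, sketched below. The decay constant `k` and thrust coefficient `ct` are assumed illustrative inputs:

```python
import math

def jensen_deficit(ct, x, r0, k=0.05):
    """Fractional velocity deficit a distance x downstream of a rotor of
    radius r0, in the Jensen (top-hat) wake model with decay constant k."""
    if x <= 0.0:
        return 0.0  # no upstream wake in this simple model
    return (1.0 - math.sqrt(1.0 - ct)) / (1.0 + k * x / r0) ** 2

def waked_speed(u0, ct, x, r0, k=0.05):
    """Wind speed seen by a turbine sitting in the wake of another."""
    return u0 * (1.0 - jensen_deficit(ct, x, r0, k))
```

In a layout optimizer, the genetic algorithm's fitness function would sum the power produced at each turbine's `waked_speed`, so that candidate layouts spacing turbines out of each other's wakes score higher.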

Planning the future hydro capacity in Brazil includes considering power reinforcements in the peak hours due to the increasing number of run-of-river plants. This means that the Brazilian reservoir profile has been changing over the years from five-year regularization to monthly regularization. This article presents a deterministic methodology to evaluate the capacity reserve of the Brazilian power system. It is based on the 'capacity reserve margin' calculation, done by a monthly comparison between peak availability and maximum peak load. This methodology was applied to the Decennial Energy Expansion Plan 2019 (PDE 2019), and its final results are shown here through Peak Balances considering different operating conditions and the whole historical inflow record. Additionally, a suggestion is presented on the evolution of peak evaluation criteria to be applied to the Brazilian power system in its expansion planning. (author)

We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed axisymmetric configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" outperforms the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
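The baseline that any DNA compressor must beat is the naive fixed-width encoding: with a four-letter alphabet, each base fits in exactly 2 bits. The sketch below shows that baseline (it is not the DNABIT scheme, whose repeat bit codes are not given in the abstract); DNABIT's 1.58 bits/base comes from additionally coding repeated and reverse-repeated fragments with shorter codes:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASES = "ACGT"

def pack(seq):
    """Pack a DNA string into bytes at exactly 2 bits per base."""
    bits = 0
    for ch in seq:
        bits = (bits << 2) | CODE[ch]       # append the 2-bit code
    nbytes = (2 * len(seq) + 7) // 8        # round up to whole bytes
    return len(seq), bits.to_bytes(nbytes, "big")

def unpack(n, data):
    """Recover the DNA string from its length n and packed bytes."""
    bits = int.from_bytes(data, "big")
    return "".join(BASES[(bits >> (2 * (n - 1 - i))) & 0b11]
                   for i in range(n))
```

A round trip through `pack`/`unpack` is lossless, and for sequences whose length is a multiple of four the rate is exactly 2.0 bits/base, which frames what an improvement to 1.58 bits/base means.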

Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

best for bits-per-pixel rates below 1.4 bpp, while HEVC obtains best performance in the range 1.4 to 6.5 bpp. The compression performance is also evaluated based on maximum errors. These results also show that HEVC can achieve a precision of 1°C with an average of 1.3 bpp....

The waste is filled into auxiliary barrels made of sheet steel and compressed, together with the auxiliary barrels, into steel jackets. These can be stacked in storage barrels. A hydraulic press is included in the plant, which has a horizontal compression chamber and a horizontal pressure piston that works against a counter-bearing slider. There is a filling and emptying device for the compression chamber behind the counter-bearing slider. The auxiliary barrels can be introduced into the compression chamber by the filling and emptying device. The pressure piston also pushes out the steel jackets formed, so that they are taken to the filling and emptying device. (orig./HP)

The first horizontal well was drilled in Saskatchewan in 1987. Since then, the number of horizontal wells drilled has escalated rapidly, averaging approximately 500 per year since 1993. When combined with horizontal wells drilled in Alberta, the major Canadian oil-producing province, the total number drilled in 1995 was 978. This total exceeds the National Energy Board (NEB) projected maximum of 816 wells per year. The NEB projections were based on a break-even point for the drilling of horizontal wells of a return of CDN $285,000 using a discount rate of 15%. This corresponded to a cumulative production from each individual well of some 11,000 m³. The introduction of a royalty-free production volume of 12,000 m³ per horizontal well in Saskatchewan was instrumental in stimulating the rapid expansion in the use of horizontal wells and helping Canada to exceed the forecasted drilling level. Within Saskatchewan, daily production from 1964 active horizontal wells is in excess of 20,000 m³. Comparative analysis indicates that the average daily production per well has increased by approximately 40% with the advent of horizontal wells. In total production terms, provincial production has increased from 11.7 million cubic metres in 1989 to 20.9 million m³ in 1996. This represents an increase of almost 79%, based primarily on the extensive use of horizontal wells. In the southeastern producing areas of Saskatchewan, the Williston Basin, oil production, previously declining, has jumped 100%, with horizontal wells accounting for approximately 50% of total regional production. In 1996, horizontal wells produced 36% of the province's oil from 12% of the active wells. Pay zones in this area, as in most of the province, tend to be relatively thin, with net pay frequently less than 5 m. The modest investment of some CDN $5 million in government research funding 10 years ago to stimulate the development of horizontal wells, combined with a favourable royalty structure, has been at

This article attempts to analyze the principle of subsidiarity in its two main manifestations, vertical and horizontal; to outline the principles of relations between the state and regions within vertical subsidiarity; and to describe the collaboration of government and civil society within horizontal subsidiarity. Scholars identify two types, or two levels, of the subsidiarity principle: vertical subsidiarity and horizontal subsidiarity. First, vertical (or territorial) subsidiarity concerns relations between the state and other levels of subnational government, such as regions and local authorities; second, horizontal (or functional) subsidiarity concerns the relationship between state and citizen (and civil society). Vertical subsidiarity is expressed in the distribution of administrative responsibilities from higher to lower levels of the state structure, i.e., giving more powers to local government. However, state intervention remains subsidiary with respect to local authorities in cases where the latter cannot cope on their own, i.e., higher bodies intervene only when the authority of the lower level is insufficient to achieve the goals. Horizontal subsidiarity operates within the relationship between power and freedom, and is based on the assumption that concern for the common good and the needs of the community can be addressed by community members themselves (as individuals and citizens' associations), with the role of government, in accordance with horizontal subsidiarity, limited to assistance, programming, coordination and possibly control.

One's effort to clarify the definition of horizontal labour violence is of great importance, due to the variety of definitions mentioned in the worldwide scientific literature. Furthermore, the occurrence of multiple forms of such violence within the nursing professional group is challenging as well. Another important fact is that any form of professional violence (horizontal violence, horizontal mobbing) in the workplace can escalate to include even physical abuse (bullying), besides its psychological and emotional impact on the victim. The definitions of horizontal violence, mobbing and bullying include a repeated negative behaviour emanating from at least one "predator" towards at least one "victim", with differences in work status and the presence or absence of physical abuse (bullying). Horizontal violence is a hostile, aggressive and harmful behaviour, either overt or concealed, directed from one individual to another of the same working rank, and it causes intense emotional pain to the victim. The manifestations vary from the assignment of humiliating tasks or the undermining of the victim's efforts to clearly aggressive behaviours (criticism, intimidation, sarcasm, etc.). The cause of this phenomenon is multifactorial, extending not only to the working environment but also to the personal characteristics of the "predator" as well as of the possible "victim". Researchers emphasize the high incidence of the phenomenon, as well as the cost induced by violent behaviours to both health professionals and the hospital. Finally, they point out the paradox of the presence of violence inside a system that is designed to promote health.

A complete understanding of the initiation, evolution, and termination of volcanic eruptions requires reliable monitoring techniques to detect changes in the conduit system during periods of activity, as well as corresponding knowledge of conduit structure and of magma physical properties. Case studies of stress field orientation prior to, during, and after magmatic activity can be used to relate changes in stress field orientation to the state of the magmatic conduit system. These relationships may be tested through modeling of induced stresses. Here I present evidence from case studies and modeling that horizontal rotation of the axis of maximum compressive stress at an active volcano indicates pressurization of a magmatic conduit, and that this rotation, when observed, may also be indicative of the physical properties of the ascending magma. Changes in the local stress field orientation during the 1992 eruption sequence at Crater Peak (Mt. Spurr), Alaska were analyzed by calculating and inverting subsets of over 150 fault-plane solutions. Local stress tensors for four time periods, corresponding approximately to changes in activity at the volcano, were calculated based on the misfit of individual fault-plane solutions to a regional stress tensor. Results indicate that for nine months prior to the eruption, local maximum compressive stress was oriented perpendicular to regional maximum compressive stress. A similar horizontal rotation was observed beginning in November of 1992, coincident with an episode of elevated earthquake and tremor activity indicating intrusion of magma into the conduit. During periods of quiescence the local stress field was similar to the regional stress field. Similar horizontal rotations have been observed at Mt. Ruapehu, New Zealand (Miller and Savage 2001, Gerst 2003), Usu Volcano, Japan (Fukuyama et al. 2001), Unzen Volcano, Japan (Umakoshi et al. 2001), and Mt. St. Helens Volcano, USA (Moran 1994) in conjunction with eruptive

Data relevant to curd compression in a horizontal, solid bowl decanter centrifuge have been obtained by studying the dewatering of acid casein curd in a batch laboratory centrifuge. Analysis of curd compression under centrifugal force predicts a moisture content gradient in the dewatered curd from a maximum at the curd-liquid interface to a minimum at the centrifuge bowl wall. This moisture content gradient was also measured experimentally, and its practical implications are discussed. Increases in centrifugal force, centrifugation time, and centrifugation temperature all caused a marked decrease in dewatered curd moisture content, whereas increases in precipitation pH and maximum washing temperature caused a smaller decrease in dewatered curd moisture content.

Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economic standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method, further enhancement by multiple pulsing becomes obvious. (author)
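
For context, the Rankine-Hugoniot strong-shock limit (a standard gas-dynamics result, not derived in the abstract) caps the density jump across a single shock at (γ + 1)/(γ − 1), which is why shaped or multiple power pulses are needed to reach ratios such as 27 or 1750:

```python
def strong_shock_density_ratio(gamma):
    """Rankine-Hugoniot strong-shock limit for the density ratio across
    a single shock in an ideal gas with specific heat ratio `gamma`."""
    return (gamma + 1.0) / (gamma - 1.0)

# For a monatomic ideal gas (gamma = 5/3) a single shock compresses
# the density by at most a factor of 4.
ratio = strong_shock_density_ratio(5.0 / 3.0)
```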

Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.
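
A minimal sketch of the linear prediction model underlying these coders, using the standard autocorrelation method with the Levinson-Durbin recursion (the synthetic signal and model order below are illustrative, not taken from any coding standard):

```python
import random

def lp_coefficients(signal, order=2):
    """Autocorrelation-method linear prediction with the Levinson-Durbin
    recursion; returns predictor coefficients and residual energy."""
    n = len(signal)
    # Autocorrelation estimates r[0..order]
    r = [sum(signal[t] * signal[t - k] for t in range(k, n))
         for k in range(order + 1)]
    a = [0.0] * order        # a[j]: weight of the sample j+1 steps back
    e = r[0]                 # prediction-error energy
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / e          # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        e *= 1.0 - k * k
    return a, e

# Fit a second-order predictor to a synthetic AR(2) signal:
# x[t] = 1.5 x[t-1] - 0.7 x[t-2] + noise
random.seed(0)
x = [0.0, 0.0]
for _ in range(5000):
    x.append(1.5 * x[-1] - 0.7 * x[-2] + random.gauss(0.0, 1.0))
a, err = lp_coefficients(x, order=2)
# a is close to [1.5, -0.7], recovering the generating model
```

In a real coder the residual (with energy `err`) is what gets quantized and transmitted, which is the source of the bit-rate savings.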

Aims: This study evaluated the horizontal magnification of images of adults and children taken with the PM 2002 CC Planmeca analogue machine. Materials and Methods: A series of 120 panoramic radiographs was obtained from 60 adults and 60 children. For all patients, negative impressions were used to make positive casts of the teeth. A caliper was used to measure the maximum mesiodistal length of the buccal surface of all teeth except canines on both casts and radiographs. The horizontal magnification factor was calculated for the incisor, premolar, and molar regions by dividing the values obtained from the casts by the values obtained from the radiographs. Statistical Analysis: Independent t-test and one-way analysis of variance (ANOVA) were used. Results: In adults, the maxillary and mandibular incisor regions, unlike the other two regions, did not show a significant difference in mean horizontal magnification (P = 0.5). In children, the comparison of mean magnification factors across all subgroups showed significant differences (P < 0.0001). Unlike the adults' radiographs, the children's radiographs showed significantly higher magnification than the index listed by the manufacturer of the radiographic machine used. Conclusion: The present results indicate that the PM 2002 CC Proline panoramic machine makes possible precise measurements in the horizontal dimension on radiographs of adults' jaws.

An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square-wave fatigue loading at a stress excitation level corresponding to 80% of the short term strength. Four...... frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested to be identified with damage, and a correlation...

We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.
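
The small-parsimony subproblem behind MP (scoring a fixed tree, as opposed to the NP-hard search over trees) is solvable in linear time by Fitch's algorithm, a standard method not specific to this paper; a minimal sketch with an invented four-taxon example:

```python
def fitch_score(tree, leaf_states):
    """Fitch's small-parsimony algorithm: minimum number of state changes
    needed to explain the leaf states on a fixed binary tree.
    `tree` is a nested tuple whose leaves are taxon names."""
    changes = 0

    def visit(node):
        nonlocal changes
        if isinstance(node, str):            # leaf: singleton state set
            return {leaf_states[node]}
        left, right = visit(node[0]), visit(node[1])
        both = left & right
        if both:                             # intersection: no change needed
            return both
        changes += 1                         # disjoint sets force one change
        return left | right

    visit(tree)
    return changes

# Four taxa with one binary character
tree = (("A", "B"), ("C", "D"))
score = fitch_score(tree, {"A": "0", "B": "0", "C": "1", "D": "1"})
# score == 1: a single 0 -> 1 change explains the character
```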

This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, and the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression appeared to be significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

The paper is published without modifications. Kolmogorov's manuscript was apparently prepared during his participation in one of the expeditions of the ship 'D. Mendeleev' to the Atlantic Ocean (1969) or in a circumnavigation of the world (1971) organized by the Institute for Oceanology, led at the time by A.S. Monin. As Kolmogorov himself wrote, the choice of the topic was stimulated by observations concerning '...meanders with horizontal sizes of hundreds of kilometers on a flow involving a layer of hundreds of meters, with subsequent disintegration of these meanders into vortices gradually decreasing in size to several kilometers'. In modern terminology, the paper is devoted to the problem of intense mixing in pycnoclines, that is, thin layers of stratified fluid, caused by internal waves whose frequencies are less than the Brunt-Väisälä frequency. Here I would like to note two circumstances. The first is the scientific insight characteristic of Kolmogorov; this very approach was later reflected in numerous publications (see, for instance, the monograph by V.S. Modevich, V.I. Nikulin, and A.G. Stetsenko 'Dynamics of internal mixing in a stratified medium', Institute for Hydromechanics, Academy of Sciences of Ukraine, Naukova Dumka, Kiev 1988). The second, the more significant in my opinion, is the genuine intellectual curiosity and breadth of thought of this great thinker, who studied not only the most abstract mathematical constructions but also came down from those heights with great interest to solve concrete applied problems.

The two-phase flow in the narrow short horizontal rectangular channels 1 millimeter in height was studied experimentally. The features of formation of the two-phase flow were studied in detail. It is shown that with an increase in the channel width, the region of the churn and bubble regimes increases, compressing the area of the jet flow. The areas of the annular and stratified flow patterns vary insignificantly.

Coarsely digitized maximum levels are recorded in blown fuses. The circuit feeds power to an accelerometer and makes a nonvolatile record of the maximum level to which the output of the accelerometer rises during a measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, the circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for the same purpose, the circuit is simpler, less bulky, consumes less power, costs less, and avoids the analysis of data recorded in magnetic or electronic memory devices. The circuit can be used, for example, to record the accelerations to which commodities are subjected during transportation on trucks.

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm outperforms existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio of 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
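
The paper's exact bit assignments are not reproduced here, but the naive baseline that repeat-aware schemes such as DNABIT Compress improve on is plain 2-bits-per-base packing, sketched below; the 1.58 bits/base figure comes from additionally replacing repeated fragments with shorter codes:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string at 2 bits per base (the naive baseline,
    versus 8 bits per base for ASCII text)."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    return bits.to_bytes((2 * len(seq) + 7) // 8, "big")

def unpack(data, length):
    """Recover the DNA string from its 2-bit packing."""
    rev = {v: k for k, v in CODE.items()}
    bits = int.from_bytes(data, "big")
    return "".join(rev[(bits >> shift) & 0b11]
                   for shift in range(2 * (length - 1), -1, -2))

seq = "ACGTACGTTG"
packed = pack(seq)            # 10 bases fit in 3 bytes: 2.0 bits/base
assert unpack(packed, len(seq)) == seq
```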

Drilling oil wells under Lake Erie calls for horizontal wells to be drilled from shore out into the pay-zone under the lake. The nature and characteristics of horizontal wells as compared to vertical wells are explored. Considerations that have to be taken into account in drilling horizontal wells are explained (the degree of curvature, drilling fluid quality, geosteering in the pay-zone, steering instrumentation, measurements while drilling (MWD), logging while drilling (LWD)). The concept of and reasons for extended reach wells are outlined, along with characteristic features of multilateral wells.

The general theory of compressed air/vacuum transportation is presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a "c-shaped" plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on the rail track). The proposed transportation system has the following merits: it is virtually accident free, highly energy efficient, and pollution free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.

The maximum entropy method for analytic continuation is extended by introducing the quantum relative entropy. This new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

The charge for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises with the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

Tsunami propagation in the open ocean is most commonly modeled by solving the shallow water wave equations. These equations require initial conditions on sea surface height and depth-averaged horizontal particle velocity or, equivalently, horizontal momentum. While most modelers assume that initial velocity is zero, Y.T. Song and collaborators have argued for nonzero initial velocity, claiming that horizontal displacement of a sloping seafloor imparts significant horizontal momentum to the ocean. They show examples in which this effect increases the resulting tsunami height by a factor of two or more relative to models in which initial velocity is zero. We test this claim with a "full-physics" integrated dynamic rupture and tsunami model that couples the elastic response of the Earth to the linearized acoustic-gravitational response of a compressible ocean with gravity; the model self-consistently accounts for seismic waves in the solid Earth, acoustic waves in the ocean, and tsunamis (with dispersion at short wavelengths). Full-physics simulations of subduction zone megathrust ruptures and tsunamis in geometries with a sloping seafloor confirm that substantial horizontal momentum is imparted to the ocean. However, almost all of that initial momentum is carried away by ocean acoustic waves, with negligible momentum imparted to the tsunami. We also compare tsunami propagation in each simulation to that predicted by an equivalent shallow water wave simulation with varying assumptions regarding initial velocity. We find that the initial horizontal velocity conditions proposed by Song and collaborators consistently overestimate the tsunami amplitude and predict an inconsistent wave profile. Finally, we determine tsunami initial conditions that are rigorously consistent with our full-physics simulations by isolating the tsunami waves from ocean acoustic and seismic waves at some final time, and backpropagating the tsunami waves to their initial state by solving the

A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

The maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation.
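
A brute-force way to see the definition at work in the bivariate case: scan unit projection directions on a grid and keep the largest absolute Pearson correlation of the projections. This is a toy sketch for intuition only; practical implementations compute the canonical correlation directly via an SVD:

```python
import math

def pearson(u, v):
    """Sample Pearson correlation of two equal-length lists."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

def max_association(X, Y, steps=60):
    """Grid-search approximation: maximize |corr(a'X, b'Y)| over unit
    directions a, b in the plane (X, Y are lists of 2-tuples)."""
    best = 0.0
    for i in range(steps):
        t = math.pi * i / steps
        xp = [math.cos(t) * x1 + math.sin(t) * x2 for x1, x2 in X]
        for j in range(steps):
            s = math.pi * j / steps
            yp = [math.cos(s) * y1 + math.sin(s) * y2 for y1, y2 in Y]
            best = max(best, abs(pearson(xp, yp)))
    return best

# Identical variables: some grid direction pair gives correlation 1
pts = [(math.cos(t), math.sin(2 * t)) for t in [0.1 * k for k in range(40)]]
r = max_association(pts, pts)
```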

A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and for larger power-rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
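
The hill-climbing (perturb-and-observe) idea can be sketched as follows. The power curve below is a made-up single-peak stand-in for a real panel characteristic, not measured data, and the step size and iteration count are illustrative:

```python
def pv_power(v):
    """Toy photovoltaic power curve with a single maximum near 18.5 V
    (an assumed model, not a real panel characteristic)."""
    return max(0.0, v * (3.0 - 0.05 * (v - 17.0) ** 2))

def mppt_hill_climb(v0=10.0, step=0.2, iters=200):
    """Perturb-and-observe hill climbing: keep stepping the operating
    voltage in the direction that last increased measured power."""
    v, direction = v0, +1
    p = pv_power(v)
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction   # power dropped: reverse direction
        v, p = v_new, p_new
    return v, p

v_final, p_final = mppt_hill_climb()
# v_final oscillates within a step or two of the true maximum power point
```

The steady-state oscillation around the peak is the classic drawback of plain perturb-and-observe; adaptive step sizes reduce it.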

The new research reactor in Garching, FRM-II, is equipped with 10 leak-tight horizontal beam tubes (BT1 - BT10), each of them consisting of a beam tube structure taking an insert with neutron channels. The design of all beam tube structures is similar, whereas the inserts are adapted to the special requirements of the use of each beam tube. Inside the reflector tank the beam tube structures are shaped by the inner cones, which are made of Al-alloy with circular and rectangular cross sections. They are located in the region of maximum neutron flux (exception: BT10), they are directly connected to the flanges of the reflector tank, their lengths are about 1.5 m (exception: BT10) and their axes are directed tangentially to the core centre, thus contributing to a low γ-noise at the experiments. (orig.)

Measurements of the beam emittance during bunch compression in the CLIC Test Facility (CTF-II) are described. The measurements were made with different beam charges and different energy correlations versus the bunch compressor settings which were varied from no compression through the point of full compression and to over-compression. Significant increases in the beam emittance were observed with the maximum emittance occurring near the point of full (maximal) compression. Finally, evaluation of possible emittance dilution mechanisms indicate that coherent synchrotron radiation was the most likely cause.

This paper proposes a taxonomy of the Stackelberg equilibria emerging from a standard game of horizontal differentiation à la Hotelling in which the strategy set of the sellers in the location stage is the real axis. Repeated leadership appears to be the most advantageous position. Furthermore, this endogenously yields vertical differentiation between products at equilibrium.

Diplopia is an infrequent complication after blepharoplasty. Most cases occur in the vertical form, due to trauma to the extraocular muscles. In this article, we present a case of horizontal diplopia following cosmetic upper blepharoplasty; we review the literature on this unexpected complication and offer some recommendations to avoid it.

For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)
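
Two entropy expressions that commonly appear in this debate are the Shannon-type form −Σ p log p and the Burg form Σ log f; these are illustrative choices, not necessarily the four variants compared in the paper. Both reward smooth, featureless images for a fixed total flux:

```python
import math

def shannon_entropy(f):
    """Shannon-type image entropy: -sum p log p of the normalized image."""
    total = sum(f)
    p = [x / total for x in f]
    return -sum(x * math.log(x) for x in p if x > 0)

def burg_entropy(f):
    """Burg entropy: sum log f, familiar from spectral estimation."""
    return sum(math.log(x) for x in f)

flat = [1.0, 1.0, 1.0, 1.0]
peaked = [3.7, 0.1, 0.1, 0.1]
# For fixed total flux, both functionals rank the flat image higher,
# which is how they encode "maximal non-committance" as a prior.
```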

In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO2. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.

Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
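
A toy version of the idea: score each candidate pattern by the total description length of the database when its non-overlapping occurrences are replaced by a single code-table symbol, and keep the pattern that compresses best. The cost model below simply counts symbols, a deliberate simplification of a real MDL encoding:

```python
def description_length(db, pattern):
    """Toy description length: one symbol per character, with every
    non-overlapping occurrence of `pattern` replaced by a single
    code-table symbol; the pattern itself is stored once."""
    cost = len(pattern)                       # code table entry
    for s in db:
        occ = s.count(pattern)                # non-overlapping occurrences
        cost += len(s) - occ * (len(pattern) - 1)
    return cost

def best_compressing_pattern(db, min_len=2, max_len=5):
    """Brute-force MDL-style search: the substring whose use as a
    code-table pattern minimizes the toy description length."""
    candidates = {s[i:i + k]
                  for s in db
                  for k in range(min_len, max_len + 1)
                  for i in range(len(s) - k + 1)}
    return min(candidates, key=lambda p: description_length(db, p))

db = ["abcabcabc", "xxabcxx", "abc"]
best = best_compressing_pattern(db)
# "abc" wins: it recurs in every sequence, so encoding it once pays off
```

The NP-hardness result in the abstract concerns the full combinatorial search; heuristic miners avoid the exhaustive candidate enumeration used here.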

Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
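
For the Mean Energy Model, the maximum-entropy distribution under a mean-energy constraint takes the Gibbs form p_i ∝ exp(−βE_i), with β acting as the Lagrange multiplier of the moment constraint. A small sketch that finds β by bisection (the energy levels and constraint values below are illustrative):

```python
import math

def maxent_distribution(energies, mean_energy, lo=-50.0, hi=50.0):
    """Maximum-entropy distribution under a mean-energy constraint.
    The solution is p_i ~ exp(-beta * E_i); beta is located by bisection
    so that the expected energy matches the constraint."""
    e0 = min(energies)  # shift energies for numerical stability

    def mean_for(beta):
        w = [math.exp(-beta * (e - e0)) for e in energies]
        z = sum(w)
        return sum(e * wi for e, wi in zip(energies, w)) / z

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > mean_energy:  # expected energy decreases in beta
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    w = [math.exp(-beta * (e - e0)) for e in energies]
    z = sum(w)
    return [wi / z for wi in w]

# Three levels with the constraint at the middle level: beta = 0, so the
# maximum-entropy distribution is uniform.
p = maxent_distribution([1.0, 2.0, 3.0], 2.0)
```

In the Code Length Game reading, this same distribution is the optimal strategy of the "nature" player against an observer who must commit to a code.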

The current seismic fragility capacity of horizontal shaft pumps is 1.6 × 9.8 m/s² (1.6 g), which was decided from previous vibration tests, and we believe that it must have a sufficient margin. The purpose of the fragility capacity test is to obtain the realistic seismic fragility capacity of horizontal shaft pumps by vibration tests. A Reactor Building Closed Cooling Water (RCW) pump was tested as a typical horizontal shaft pump, and then bearings and liner rings were tested as important parts to evaluate critical acceleration and dispersion. Regarding the RCW pump test, no damage was found, though the maximum input acceleration level was 6 × 9.8 m/s² (6 g). Some kinds of bearings and liner rings were tested in the element test. The input load was based on a seismic motion which was the same as in the RCW pump test, and the maximum load was equivalent to over 20 times the design seismic acceleration. There was no significant damage that would cause an emergency stop of the pump, but degradation of surface roughness was found on some kinds of bearings. It would cause a reduction of pump life, but such damage on bearings occurred under large seismic load conditions equivalent to over 10 to 20 g. Test results show that the realistic fragility capacity of horizontal shaft pumps would be at least four times as high as the current value which has been used for our seismic PSA. (authors)

This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.

The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab


Purpose: We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. Methods: MAF seeks linear combinations of the original variables that maximize autocorrelation between... We apply the method to biological shapes as well as reflectance spectra. Results: MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects. Conclusions: Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially...

In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
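
The robustness mechanism of MCC can be sketched for a linear predictor. This is a hedged illustration, not the paper's algorithm: half-quadratic optimization reduces correntropy maximization to iteratively re-weighted ridge regression, and the toy data, kernel width `sigma` and penalty `lam` are my own choices:

```python
import numpy as np

def mcc_linear_fit(X, y, sigma=0.5, lam=1e-3, iters=50):
    # Half-quadratic view of MCC: each sample gets weight exp(-r^2/(2 sigma^2)),
    # which shrinks toward zero as its residual r grows, so grossly mislabeled
    # samples are progressively ignored; the inner step is weighted ridge.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        r = y - X @ w
        a = np.exp(-r ** 2 / (2.0 * sigma ** 2))     # correntropy weights
        A = X.T @ (a[:, None] * X) + lam * n * np.eye(d)
        w = np.linalg.solve(A, X.T @ (a * y))
    return w

# Toy regression with two grossly mislabeled samples.
x = np.linspace(0.0, 1.0, 50)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0
y[0] += 10.0
y[1] -= 10.0
w = mcc_linear_fit(X, y)
```

Ordinary least squares would be dragged toward the two corrupted labels; the re-weighted fit recovers the slope 2 and intercept 1 almost exactly.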

The paper discusses questions related to the generation of increased crustal horizontal compressive stresses compared to the idea of the standard gravitational state at the elastic stage, or even the prevalence of horizontal compression over the vertical stress equal to the lithostatic pressure. We consider a variant of excess horizontal compression related to internal lithospheric processes occurring in the crust of orogens, shields, and plates. These include the vertical ascending movements caused at the sole of the crust or the lithosphere, and the concomitant exogenic processes giving rise to denudation and, in particular, to erosion of the surfaces of forming rises. The residual stresses of the gravitational stressed state in the upper crust of the Kola Peninsula have been estimated for the first time. These calculations are based on the volume of sediments that have been deposited in Arctic seas since the Mesozoic. The data point to a possible level of residual horizontal compressive stresses of up to 90 MPa in near-surface crustal units. This estimate is consistent with the results of in situ measurements that have been carried out at the Mining Institute of the Kola Science Center, Russian Academy of Sciences (RAS), for over 40 years. Using our concept of the genesis of horizontal overpressure, it is possible to forecast the horizontal stress gradient with depth, and this forecasting is important for studying the formation of endogenic deposits.
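
A hedged back-of-envelope check of the 90 MPa figure (the unloading relation is generic rock mechanics, and the density value is an assumption, not taken from the paper):

```python
RHO = 2700.0   # assumed crustal density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def eroded_thickness_for_residual(sigma_h_pa):
    # Erosion of an overburden of thickness dh unloads a vertical stress
    # rho*g*dh; in the limiting case where all of the horizontal stress it
    # induced stays locked in, a residual horizontal compression sigma_h
    # corresponds to a denudation depth of sigma_h / (rho*g).
    return sigma_h_pa / (RHO * G)

print(eroded_thickness_for_residual(90e6))  # ~3400 m of eroded section
```

A few kilometres of denudation since the Mesozoic is geologically plausible for a shield, which makes the 90 MPa upper bound at least dimensionally consistent.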

An oriented cluster perforating technology, which integrates the advantages of both cluster and oriented perforating, helps solve a series of technical complexities in horizontal well drilling. For better application in oil and gas development, a series of technologies was developed, including perforator self-weight eccentricity, matching of the electronic selective module codes with the surface program control, axial centralized contact signal transmission, and post-perforation inter-cluster sealing insulation. In this way, the following functions could be realized: cable-transmission horizontal well perforator self-weight orientation, dynamic signal transmission, reliable addressing and selective perforation, and post-perforation inter-cluster sealing. The combined perforation and bridge plug, or multi-cluster perforation, can be fulfilled in one trip of the perforation string. As a result, the horizontal-well oriented cluster perforating technology based on cable conveying was developed. This technology was successfully applied in unconventional gas reservoir exploitation, such as shale gas and coalbed methane, with accurate orientation, reliable selective perforation and satisfactory inter-cluster sealing. The horizontal-well oriented cluster perforating technology benefits the orientation of horizontal well drilling with a definite target and direction, which provides powerful support for subsequent reservoir stimulation. It also promotes sweeping of the principal pay zones by the fracturing fluid to the maximum extent. Moreover, it is conducive to the formation of complex fracture networks in the reservoirs, making quality and efficient development of unconventional gas reservoirs possible.

An experimental research program was undertaken to ascertain the effectiveness of a new technique for strengthening masonry wall panels using steel strips in terms of compressive and shear strength enhancement. The experimental work includes eight wall panels, four each for compressive and shear strength evaluation. This work was Phase 1 of an extensive research project which includes testing of strengthened masonry wall panels under monotonic load (Phase 1), static cyclic load (Phase 2) and dynamic load (Phase 3). The wall panels were strengthened with different steel strip arrangements, consisting of single/double face application of coarse and fine steel strip mesh with reduced spacing of horizontal strips. This paper investigates only the effectiveness of horizontal steel strips on strength enhancement. Four masonry wall panels were considered in two groups; in each group, one wall was retrofitted with coarse steel mesh on a single face, and on the second wall fine steel mesh was applied on one side. Furthermore, test results of strengthened specimens are also compared with the un-strengthened specimen (REFE). The mechanisms by which load was carried were observed, varying from the initial, uncracked state to the final, fully cracked state. The results demonstrate a quite significant increase in the compressive and shear capacity of strengthened panels as compared to the REFE panel. However, the increase in compressive strength of fine mesh above that of coarse mesh is negligible. The technique is found quite viable for strengthening of masonry walls, for rehabilitation of old deteriorated buildings, and for unreinforced masonry structures in seismic zones. (author)

A great number of applications of such a flow in geophysics are found in ... We have considered an infinite, horizontal, compressible, electrically conducting Walters' (Model B′) fluid layer ... Linearized stability theory and normal mode analysis ... boundaries the boundary conditions are (see Chandrasekhar, 1981).

The Horizontal Impact Rig has been designed to allow studies of the impact of radioactive material transport containers and their associated transport vehicles and impact limiters, using large scale models, and to allow physically large missiles to be projected for studying the impact behaviour of metal and concrete structures. It provides an adequately rigid support structure for impact experiments with targets of large dimensions. Details of its design, instrumentation, performance prediction and construction are given. (U.K.)

The action of horizontal divergence on diffusion near the ground is established through a very simple flow model. The shape of the well-known Pasquill-Gifford-Turner curves, which apparently take account in some way of divergence, is justified. The possibility of explaining the discrepancies between the conventional straight-line model and experimental results, mainly under low-wind-speed stable conditions, is considered. Some hints for further research are given. (auth.)

The interaction of a high-speed vortex ring with a shock wave is one of the fundamental issues as it is a source of sound in supersonic jets. The complex flow field induced by the vortex alters the propagation of the shock wave greatly. In order to understand the process, a compressible vortex ring is studied in detail using Particle Image Velocimetry (PIV) and shadowgraphic techniques. The high-speed vortex ring is generated from a shock tube, and the shock wave, which precedes the vortex, is reflected back by a plate and made to interact with the vortex. The shadowgraph images indicate that the reflected shock front is influenced by the non-uniform flow induced by the vortex and is decelerated while passing through the vortex. It appears that after the interaction the shock is "split" into two. The PIV measurements provided a clear picture of the evolution of the vortex at different time intervals. The centerline velocity traces show the maximum velocity to be around 350 m/s. The velocity field, unlike in incompressible rings, contains contributions from both the shock and the vortex ring. The velocity distribution across the vortex core, the core diameter and the circulation are also calculated from the PIV data.

The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
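
The block-DCT idea behind such schemes can be sketched in a few lines. This toy version keeps only the largest-magnitude coefficients per 8x8 block; the truncation rule and parameters are illustrative choices, not the paper's actual variant, which suppresses blocking adaptively:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(img, block=8, keep=10):
    # Transform each block, zero all but the `keep` largest-magnitude DCT
    # coefficients, and inverse-transform. Assumes image dimensions are a
    # multiple of the block size.
    out = np.zeros(img.shape, dtype=float)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            c = dctn(img[i:i + block, j:j + block].astype(float), norm='ortho')
            thresh = np.sort(np.abs(c).ravel())[-keep]
            c[np.abs(c) < thresh] = 0.0
            out[i:i + block, j:j + block] = idctn(c, norm='ortho')
    return out

img = np.arange(64, dtype=float).reshape(8, 8)   # smooth ramp: compresses well
approx = block_dct_compress(img, keep=6)
```

Smooth regions concentrate their energy in a few low-frequency coefficients, which is exactly why truncation hurts little except near sharp, high-contrast edges.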

The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body images and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original image and the image reconstructed at a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
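
The NMSE quality measure used in the study can be written directly. The exact definition is assumed here to be difference-image energy normalized by original-image energy; the dissertation may normalize slightly differently:

```python
import numpy as np

def nmse(original, reconstructed):
    # Energy of the difference image divided by the energy of the original.
    original = np.asarray(original, dtype=float)
    diff = original - np.asarray(reconstructed, dtype=float)
    return float((diff ** 2).sum() / (original ** 2).sum())

a = np.ones((4, 4))
print(nmse(a, 0.9 * a))   # 0.01: a uniform 10% amplitude error
```

Being a global, energy-based measure, NMSE summarizes overall fidelity but can understate localized artifacts, which is one reason the study inspects the difference images themselves.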

By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

This paper presents a comparative analysis between the pressure behavior of ... Green and source functions were used to evaluate the performance of horizontal wells and ... Nomenclature: ′ = derivative; D = dimensionless; h = horizontal; ... = change.

Pile foundations are frequently used in very loose and weak deposits, in particular soft marine clay deposits, to support various industrial structures, power plants, petrochemical complexes, compressor stations and residential multi-storeyed buildings. Under these circumstances, piles are predominantly subjected to horizontal dynamic loads, and the pile response to horizontal vibration is very critical due to its low stiffness. Though many analytical methods have been developed to estimate the horizontal vibration response, they are not well validated with experimental studies. This paper presents the results of horizontal vibration tests carried out on model aluminium single piles embedded in a simulated elastic half space filled with clay. The influence of various soil and pile parameters such as pile length, modulus of clay, magnitude of dynamic load and frequency of excitation on the horizontal vibration response of single piles was examined. Measurement of various response quantities, such as the load transferred to the pile, pile head displacement and the strain variation along the pile length, was done using a data acquisition system. It is found that the pile length, modulus of clay and dynamic load significantly influence the natural frequency and peak amplitude of the soil-pile system. The maximum bending moment occurs at the fundamental frequency of the soil-pile system. The maximum bending moment of long piles is about 2 to 4 times higher than that of short piles, and it increases drastically with the increase in the shear modulus of clay for both short and long piles. The active or effective pile length is found to increase under dynamic load, and empirical equations are proposed to estimate the active pile length under dynamic loads.

Highlights: • The bubble shapes in intermittent flows are presented experimentally. • The nose-tail inversion phenomenon appears at a low Froude number in a downward pipe. • Transition from plug to slug flow occurs when the bubble tail changes from a staircase pattern to a hydraulic jump. - Abstract: This paper presents an experimental study of the shape of isolated bubbles in horizontal and near-horizontal intermittent flows. It is found that the shapes of the nose and body of the bubble depend on the Froude number defined by the gas/liquid mixture velocity in a pipe, whereas the shape of the back of the bubble depends on both the Froude number and the bubble length. The photographic studies show that the transition from plug to slug flow occurs when the back of the bubble changes from a staircase pattern to a hydraulic jump with the increase of the Froude number and bubble length. The effect of pipe inclination on the characteristics of the bubble is significant: the bubble is inverted in a downwardly inclined pipe when the Froude number is low, and the transition from plug flow to slug flow occurs more readily in an upwardly inclined pipe than in a downwardly inclined one.

Vertical vibration with large acceleration was observed in the Kobe earthquake of 1995. Concerning the PWR fuel assembly, though the vertical response has so far been calculated by a static analysis, it is better calculated in detail by a dynamic analysis. Furthermore, mutual effects between horizontal and vertical motions attract our attention. For these reasons, a dynamic analysis method in the vertical direction was developed and linked with the previously developed method in the horizontal direction. This method takes the effect of vertical vibration into the horizontal vibration analysis as a change of horizontal stiffness, brought about by the axial compressive force. In this paper, fundamental test results for developing the method are introduced, and a summary of the advanced method's procedure and analysis results is also described. (authors)

An experimental study on natural convection heat transfer from two parallel horizontal cylinders in a horizontal cylindrical enclosure was carried out under the condition of constant surface temperature for the two cylinders and the cylindrical enclosure. The study included the effect of Rayleigh number, rotation angle (the confined angle between the horizontal plane passing through the cylindrical enclosure center and the line passing through the two cylinder centers), and the spacing between the two cylinders on their heat loss ability. An experimental set-up was used for this purpose which consists of a water container, a test section formed of a plastic cylinder that represents the cylindrical enclosure, and two heating elements formed of two copper cylinders (19 mm in diameter) heated internally by electrical sources, which represent the heat transfer and heat loss elements in this set-up. The experiments were done over a range of Rayleigh number ( ), cylinder rotation angle ( ), and spacing ratio ( ). The study showed that the ability of heat loss from the two cylinders is a function of Rayleigh number, cylinder rotation angle, and the spacing between them. This ability increases with increasing Rayleigh number, and it reaches a maximum value at the first cylinder and a minimum value at the second cylinder at spacing ratio (S/D = 3) and rotation angle ( ) for the first and ( ) for the second cylinder, respectively. The effective variables on natural convection heat transfer from the above two cylinders are related by two correlating equations, each one expressing a dimensionless relation for heat transfer from each cylinder, represented by Nusselt number, against Rayleigh number, rotation angle, and the spacing ratio between the two cylinders.

This paper reports that under sponsorship from the U.S. Department of Energy, technical personnel from the Savannah River Laboratory and other DOE laboratories, universities and private industry have completed a full scale demonstration of environmental remediation using horizontal wells. The test successfully removed approximately 7250 kg of contaminants. A large amount of characterization and monitoring data was collected to aid in interpretation of the test and to provide the information needed for future environmental restorations that employ directionally drilled wells as extraction or delivery systems

This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app...
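
The sparse-recovery step at the heart of compressed sensing can be sketched with the classic iterative shrinkage-thresholding algorithm (ISTA); the measurement matrix, sparsity level and penalty below are illustrative choices, not taken from the book:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=1000):
    # Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by a gradient step on the
    # quadratic term followed by soft-thresholding (the L1 proximal map).
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 200)) / np.sqrt(60)   # 60 measurements, 200 unknowns
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [3.0, -2.0, 4.0]        # 3-sparse signal
x_hat = ista(A, A @ x_true)
```

Note the regime: 200 unknowns are recovered from only 60 random measurements, which is exactly the "far fewer observations than traditionally held to be necessary" point the book makes.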

Background. Horizontal gene transfer (HGT), the non-genealogical transfer of genetic material between different organisms, is considered a potentially important mechanism of genome evolution in eukaryotes. Using phylogenomic analyses of expressed sequence tag (EST) data generated from a clonal cell line of a free-living dinoflagellate alga, Karenia brevis, we investigated the impact of HGT on genome evolution in unicellular chromalveolate protists. Results. We identified 16 proteins that have originated in chromalveolates through ancient HGTs before the divergence of the genera Karenia and Karlodinium, and one protein that was derived through a more recent HGT. Detailed analysis of the phylogeny and distribution of identified proteins demonstrates that eight have resulted from independent HGTs in several eukaryotic lineages. Conclusion. Recurring intra- and interdomain gene exchange provides an important source of genetic novelty, not only in parasitic taxa as previously demonstrated but, as we show here, also in free-living protists. Investigating the tempo and mode of evolution of horizontally transferred genes in protists will therefore advance our understanding of mechanisms of adaptation in eukaryotes.

The influence of well pattern involving the use of horizontal wells on the overall efficiency of the waterflooding process was analyzed. Three different scenarios were examined: (1) a pattern of using two parallel horizontal wells, one for injection, the other for production, (2) a pattern of one horizontal well for water injection and several vertical wells for production, and (3) a pattern of using vertical wells for injection and one horizontal well for production. In each case, the waterflooding process was simulated using a two phase two dimensional numerical model. Results showed that the pressure loss along the horizontal section had a large influence on the sweep efficiency whether the horizontal well was used for injection or production. Overall, the most successful combination appeared to be using vertical wells for injection and horizontal wells for production. 4 refs., 1 tab., 15 figs.

As part of the design effort for a free electron laser driven by the Next Linear Collider Test Accelerator (NLCTA), the author reports studies of bunch-length compression utilizing the existing infrastructure and hardware. In one possible version of the NLCTA FEL, bunches with 900-microm FWHM length, generated by an S-band photo-injector, would be compressed to an rms length of 60--120 microm before entering the FEL undulator. It is shown that, using the present magnetic chicane, the bunch compression is essentially straightforward, and that almost all emittance-diluting effects, e.g. wakefields, chromaticity, or space charge in the bending magnets, are small. The only exception to this finding is the predicted increase of the horizontal emittance due to coherent synchrotron radiation (CSR). Estimates based on existing theories of coherent synchrotron radiation suggest a tripling or quadrupling of the initial emittance, which seems to preclude bunch compression during regular FEL operation. Serendipitously, the magnitude of the predicted emittance growth would, on the other hand, make the NLCTA chicane an excellent tool for measuring the effects of coherent synchrotron radiation. This will be of considerable interest to many future projects, in particular to the Linac Coherent Light Source (LCLS). As an aside, it is shown that coherent synchrotron radiation in a bending magnet gives rise to a minimum possible bunch length, which is very reminiscent of the Oide limit on the vertical spot size at the interaction point of a linear collider
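
The linear part of the compression analyzed above follows from the textbook chicane relation z' = z + R56*delta with an imposed energy chirp delta = h*z. The numbers below (chirp and R56) are illustrative, not the NLCTA design values, and the FWHM-to-rms factor assumes a Gaussian bunch:

```python
import numpy as np

def compressed_rms_length(sigma_z, h, r56, sigma_delta=0.0):
    # A particle at z with correlated energy deviation delta = h*z leaves
    # the chicane at z' = (1 + h*r56)*z; an uncorrelated energy spread adds
    # r56*sigma_delta in quadrature.
    return float(np.hypot((1.0 + h * r56) * sigma_z, r56 * sigma_delta))

sigma_z0 = 900e-6 / 2.355              # 900-um FWHM -> rms, Gaussian assumed
final = compressed_rms_length(sigma_z0, h=-16.0, r56=0.05)   # 1 + h*r56 = 0.2
print(final)                           # ~76 um rms, inside the 60-120 um goal
```

The CSR-driven emittance growth discussed in the abstract is precisely what this linear picture leaves out: it enters through the bending magnets that produce R56, not through the compression relation itself.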

When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

Drag reduction and/or speed augmentation of marine vehicles by means of supercavitation is a topic of great interest. During the initial launch of a supercavitating vehicle, an artificial supercavity is required until the vehicle can reach conditions at which a natural supercavity can be sustained. Previous studies at Saint Anthony Falls Laboratory (SAFL) focused on the behavior of ventilated supercavities in steady horizontal flows. In open waters, vehicles can encounter unsteady flows, especially when traveling under waves. A study has been carried out at SAFL to investigate the effects of unsteady flow on axisymmetric supercavities. An attempt is made to duplicate sea states seen in open waters. In an effort to track cavity dimensions throughout a wave cycle, an automated cavity tracking script has been developed. Using a high speed camera and the proper software, it is possible to synchronize cavity dimensions with pressure measurements taken inside the cavity. Results regarding supercavity shape, ventilation demand, cavitation parameters and closure methods are presented. It was found that flow unsteadiness caused a decrease in the overall length of the supercavity while having only a minimal effect on the maximum diameter. The supercavity volume varied with cavitation number and a possible relationship between the two is being explored. (Supported by ONR)

Background. The domesticated silkworm, Bombyx mori, is the model insect for the order Lepidoptera, has economically important value, and has gained some representative behavioral characteristics compared to its wild ancestor. The genome of B. mori has been fully sequenced, while function analysis of the BmChi-h and BmSuc1 genes revealed that horizontal gene transfer (HGT) may bestow a clear selective advantage on B. mori. However, the role of HGT in the evolutionary history of B. mori is largely unexplored. In this study, we compare the whole genome of B. mori with those of 382 prokaryotic and eukaryotic species to investigate the potential HGTs. Results. Ten candidate HGT events were defined in B. mori by comprehensive sequence analysis using Maximum Likelihood and Bayesian methods combined with EST checking. Phylogenetic analysis of the candidate HGT genes suggested that one HGT was a plant-to-B. mori transfer while nine were bacteria-to-B. mori transfers. Furthermore, functional analysis based on expression, coexpression and related literature searching revealed that several HGT candidate genes have added important characters, such as resistance to pathogens, to B. mori. Conclusions. Results from this study clearly demonstrate that HGTs play an important role in the evolution of B. mori, although the number of HGT events in B. mori is in general smaller than those of microbes and other insects. In particular, interdomain HGTs in B. mori may give rise to functional, persistent, and possibly evolutionarily significant new genes. PMID:21595916

In a nuclear reactor having a reactor vessel, a reactor guard vessel, a thermal insulation shell and a horizontal seismic restraint, a restraint is described comprising: (a) a first ring on the wall of the reactor vessel; (b) a second ring on the wall of the reactor guard vessel in alignment with the first ring; (c) a first block attached to the second ring proximate the first ring so as to provide a predetermined clearance between the first block and the first ring which is reduced to zero during thermal expansion; (d) motion limit means extending through an aperture in the thermal insulation shell in alignment with the second ring and the first block; and (e) a second block attached to the motion limit means proximate the second ring and in alignment with the first block so as to provide a predetermined clearance between the second block and the second ring which is reduced to zero during thermal expansion.

This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners' spatial perception in a reverberant environment. Three compression schemes were considered using virtualized speech and noise bursts: independent compression at each ear, linked compression between the two ears, and "spatially ideal" compression operating solely on the dry source signal. Listeners indicated the location and extent of their perceived sound images on the horizontal plane. Linear processing was considered as the reference condition. The results showed that both independent and linked compression resulted in more diffuse and broader sound images as well as internalization and image splits, whereby more image splits were reported for the noise bursts than for speech. Only the spatially ideal compression provided the listeners with a spatial percept similar ...

Particle density and arrival time of muons have been measured in Horizontal Air Showers. 5,600 showers were recorded in 7,800 hours. Using stringent selection criteria, 155 showers were found to be horizontal (zenith angle larger than 70°) in the size range 4.1 < lg N < 5.5. The muons observed in these showers can be explained by a purely electromagnetic origin of horizontal showers. (orig.)

An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly ...

We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, no longer holds for general Bayesian networks. We thus present a new kind of maximum entropy model, which is computed sequentially. ...

Seismic refraction measurement is one of the geophysics exploration techniques to determine soil profile. Meanwhile, the borehole technique is an established way to identify the changes of soil layer based on number of blows penetrating the soil. Both techniques are commonly adopted for subsurface investigation. The seismic refraction test is a non-destructive and relatively fast assessment compared to borehole technique. The soil velocities of compressive wave and shear wave derived from the seismic refraction measurements can be directly utilised to calculate soil parameters such as soil modulus and Poisson’s ratio. This study investigates the seismic refraction techniques to obtain compressive and shear wave velocity profile. Using the vertical and horizontal geophones as well as vertical and horizontal strike directions of the transient seismic source, the propagation of compressive wave and shear wave can be examined, respectively. The study was conducted at Sejagung Sri Medan. The seismic velocity profile was obtained at a depth of 20 m. The velocity of the shear wave is about half of the velocity of the compression wave. The soil profiles of compressive and shear wave velocities were verified using the borehole data and showed good agreement with the borehole data. (paper)
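
The conversion from the measured wave velocities to soil parameters uses the standard isotropic-elasticity relations (generic formulas, with illustrative input values, not data from this survey); note that Vs of about half Vp, as reported here, implies a Poisson's ratio of 1/3:

```python
def elastic_moduli_from_velocities(vp, vs, rho):
    # Shear modulus, Poisson's ratio and Young's modulus from P- and S-wave
    # velocities (m/s) and bulk density (kg/m^3), isotropic elastic medium.
    g = rho * vs ** 2
    nu = (vp ** 2 - 2.0 * vs ** 2) / (2.0 * (vp ** 2 - vs ** 2))
    e = 2.0 * g * (1.0 + nu)
    return g, nu, e

# Illustrative values with Vs = Vp/2, as the abstract reports:
g, nu, e = elastic_moduli_from_velocities(vp=3000.0, vs=1500.0, rho=1800.0)
print(nu)   # 1/3 exactly whenever Vp = 2*Vs
```

This is why both vertical and horizontal geophones are needed: Vp alone fixes neither modulus nor Poisson's ratio, but the Vp/Vs pair does.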

Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents the effort of applying neural-network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unseen data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

Results of an analysis of transient pressure near a horizontal well using a coupled diffusion-deformation method are discussed. The results are compared with those obtained from the single diffusivity equation. Implications for practical applications such as well testing are addressed. Results indicate that the diffusion-deformation behaviour of porous material affects the transient pressure response near a horizontal well. Evaluation by conventional well testing, based as it is on the single diffusion equation, would likely result in an overestimate of the permeability value. Comparison of results between the coupled diffusion-deformation approach and the single diffusion equation suggests that a better prediction of pressure response could be derived from total compressibility than by using only fluid compressibility. 6 refs., 9 figs.

Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
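The paper's compressive ranging avoids measuring the return waveform directly; for contrast, a conventional illustration of why pseudorandom binary transmit waveforms localize range well is their sharp circular autocorrelation. A toy sketch (sequence length, delay, and the noise-free channel are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
tx = rng.choice([-1.0, 1.0], size=n)   # pseudorandom binary transmit waveform
true_delay = 37
rx = np.roll(tx, true_delay)           # idealized, noise-free delayed return

# Circular cross-correlation via FFT; the peak lag estimates the round-trip delay
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx))).real
est_delay = int(np.argmax(corr))
```

The correlation peak equals the full sequence length at the true lag, while sidelobes stay near sqrt(n), which is what makes subcentimeter range binning practical with binary patterns.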

A number of configuration features and maintenance operations are influenced by the choice of whether a design is based on vertical or horizontal access for replacing reactor components. The features which are impacted most include the first wall/blanket segmentation, the poloidal field coil locations, the toroidal field coil number and size, access port size for in-vessel components, and facilities. Since either configuration can be made to work, the choice between the two is not clear cut because both have certain advantages. It is apparent that there are large cost benefits in the poloidal field coil system for ideal coil locations for high elongation plasmas and marginal savings for the INTOR case. If we assume that a new tokamak design will require a higher plasma elongation, the recommendation is to arrange the poloidal field coils in a cost-effective manner while providing reasonable midplane access for heating interfaces and test modules. If a new design study is not based on a high elongation plasma, it still appears prudent to consider this approach so that in-vessel maintenance can be accomplished without moving very massive structures such as the bulk shield. 10 refs., 29 figs., 3 tabs

Horizontal gene transfer (HGT) refers to the acquisition of foreign genes by organisms. The occurrence of HGT among bacteria in the environment is assumed to have implications in the risk assessment of genetically modified bacteria which are released into the environment. First, introduced genetic sequences from a genetically modified bacterium could be transferred to indigenous micro-organisms and alter their genome and subsequently their ecological niche. Second, the genetically modified bacterium released into the environment might capture mobile genetic elements (MGE) from indigenous micro-organisms which could extend its ecological potential. Thus, for a risk assessment it is important to understand the extent of HGT and genome plasticity of bacteria in the environment. This review summarizes the present state of knowledge on HGT between bacteria as a crucial mechanism contributing to bacterial adaptability and diversity. In view of the use of GM crops and microbes in agricultural settings, in this mini-review we focus particularly on the presence and role of MGE in soil and plant-associated bacteria and the factors affecting gene transfer.

Taking geometric non-linearity into account, an oscillator in the form of a portal frame with a rigid traverse and ideal-elastic, ideal-plastic clamped-in columns behaves under horizontal excitation as an ideal-elastic, hardening/softening-plastic oscillator, given that the columns carry a tension/compression axial force. Assuming that the horizontal excitation of the traverse is Gaussian white noise, statistics related to the plastic displacement response are determined by use of simulation based on the Slepian model process method combined with envelope excursion properties. Besides giving physical insight, the method gives good approximations to results obtained by slow direct simulation of the total response. Moreover, the influence of a randomly varying axial column force is investigated by direct response simulation. This case corresponds to parametric excitation as generated by the vertical acceleration...

Explores the process of horizontal differentiation by examining events leading to the establishment of 30 new departments in five universities. Two types of horizontal differentiation processes--administrative and academic--were observed and each was associated with different organizational conditions. (Author/IRT)

The volume of trash deposited in the Belo Horizonte landfill (Southeast Brazil) is 500 tons per day. The organic material contained in these urban residues undergoes anaerobic decomposition, generating raw biogas. For the utilization of this source of energy, whose combustion releases non-toxic and non-polluting products, collection, depuration, and compression systems have been built, with a nominal production capacity of 400 Nm³ per hour of purified biogas. The experience obtained in the plant implementation and pre-operation has permitted the development of new kinds of collection wells, new ways of sealing landfill areas, and the adaptation of national equipment to the operational conditions of the depuration system. 1 ref., 2 figs., 1 tab

The results of an analytical and experimental study of the initiation of transverse fractures from horizontal wells are presented. Analytical criteria for the initiation of a single hydraulic fracture are reviewed, and a criterion for the initiation of multiple hydraulic fractures was developed by modifying the existing Drucker and Prager criterion for single hydraulic fracture initiation. The developed criterion for multiple fracture initiation was validated by comparison with actual hydraulic fracture initiation pressures, obtained from scaled laboratory experiments, and with numerical results from boundary element analysis. Other criteria are assessed against the experimental results. Experimentally obtained transverse fracture initiation pressures were found to be close to longitudinal fracture initiation pressures estimated from the maximum tensile stress criterion and the Hoek and Brown criterion. One possible explanation of this finding is presented. Results from the Drucker and Prager criteria for single and multiple fracture initiation were, however, found to be closer to the experimental values. Therefore, these criteria could be useful to engineers involved with hydraulic fracturing for predicting transverse fracture initiation pressures from horizontal wells drilled parallel to the minimum horizontal in-situ stress.
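For orientation, the classical Hubbert-Willis-type tensile-failure formula (not the modified Drucker-Prager criterion developed in the study) gives a quick estimate of breakdown pressure for fracture initiation at a borehole wall with a non-penetrating fluid. The stress values below are illustrative assumptions:

```python
def breakdown_pressure(s_hmin, s_hmax, tensile_strength, pore_pressure):
    """Classical (Hubbert-Willis type) breakdown pressure for tensile fracture
    initiation at a borehole wall, non-penetrating fluid, borehole axis normal
    to the plane of the two stresses. All values in MPa."""
    return 3.0 * s_hmin - s_hmax + tensile_strength - pore_pressure

# Illustrative in-situ stresses, rock tensile strength, and pore pressure (MPa)
pb = breakdown_pressure(s_hmin=30.0, s_hmax=45.0, tensile_strength=5.0, pore_pressure=20.0)
```

The abstract's finding, that transverse initiation pressures sit close to such longitudinal tensile-criterion estimates yet closer to the Drucker-Prager values, is exactly the kind of comparison this formula feeds.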

the compression ratio can be raised (to as much as 18:1) providing high engine efficiency. It is important to recognize that for a well designed VCR engine cylinder pressure does not need to be higher than found in current production turbocharged engines. As such, there is no need for a stronger crankcase, bearings and other load bearing parts within the VCR engine. The Envera VCR mechanism uses an eccentric carrier approach to adjust engine compression ratio. The crankshaft main bearings are mounted in this eccentric carrier or 'crankshaft cradle', and pivoting the eccentric carrier 30 degrees adjusts the compression ratio from 9:1 to 18:1. The eccentric carrier is made up of a casting that provides rigid support for the main bearings, and removable upper bearing caps. Oil feed to the main bearings transits through the bearing cap fastener sockets. The eccentric carrier design was chosen for its low cost and rigid support of the main bearings. A control shaft and connecting links are used to pivot the eccentric carrier. The control shaft mechanism features compression ratio lock-up at the minimum and maximum compression ratio settings, and was selected for this lock-up capability. The control shaft can be rotated by a hydraulic actuator or an electric motor. The engine shown in Figures 3 and 4 has a hydraulic actuator that was developed under the current program. In-line 4-cylinder engines are significantly less expensive than V engines because an entire cylinder head can be eliminated. The cost savings from eliminating cylinders and an entire cylinder head will notably offset the added cost of the VCR and supercharging. Replacing V6 and V8 engines with in-line VCR 4-cylinder engines will provide high fuel economy at low cost. Numerous enabling technologies exist which have the potential to increase engine efficiency. The greatest efficiency gains are realized when the right combination of advanced and new
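The 9:1 to 18:1 range follows directly from the clearance-volume geometry, CR = (V_d + V_c)/V_c. A rough sketch of how far the crankshaft cradle must effectively raise the piston at top dead center (bore and stroke are assumed values, not Envera's):

```python
import math

bore, stroke = 0.086, 0.086            # m, illustrative square 4-cylinder geometry (assumed)
v_d = math.pi / 4 * bore**2 * stroke   # displaced volume per cylinder

def clearance_for_cr(cr):
    # CR = (V_d + V_c) / V_c  =>  V_c = V_d / (CR - 1)
    return v_d / (cr - 1.0)

# Raising the crank centerline reduces the clearance volume and raises CR
dv = clearance_for_cr(9.0) - clearance_for_cr(18.0)
piston_lift_mm = dv / (math.pi / 4 * bore**2) * 1000.0   # ~5.7 mm for this geometry
```

The required lift is only a few millimeters, which is why a modest 30-degree pivot of an eccentric carrier can span the whole ratio range.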

Mechanical extending limit in horizontal drilling means the maximum horizontal extending length of a horizontal well under certain ground and down-hole mechanical constraint conditions. Around this concept, a constrained optimization model of mechanical extending limits is built, and simplified analytical results for pick-up and slack-off operations are deduced. The horizontal extending limits for various tubular strings under different drilling parameters are calculated and plotted. To improve extending limits, an optimal design model of drill strings is built and applied to a case study. The results indicate that horizontal extending limits are considerably underestimated when the effects of friction force on critical helical buckling loads are neglected. Horizontal extending limits first increase and then tend to stable values as vertical depth increases. Horizontal extending limits increase faster, but ultimately become smaller, with increasing horizontal pushing forces for tubular strings of smaller modulus-to-weight ratio. Sliding slack-off is the main limiting operation, and high axial friction is the main factor constraining horizontal extending limits. A sophisticated installation of multiple tubular strings can greatly inhibit helical buckling and increase horizontal extending limits. The optimal design model is called only once to obtain design results, which greatly increases the calculation efficiency.
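The role of helical buckling can be illustrated with the commonly quoted critical-load forms for pipe in an inclined hole: sinusoidal onset (Dawson-Paslay form) and a helical threshold at sqrt(2) times that value. This is a textbook sketch, not the paper's friction-corrected loads, and the input numbers are illustrative:

```python
import math

def buckling_loads(E, I, w, inclination_deg, r):
    """Commonly quoted critical buckling loads for pipe in an inclined wellbore.
    E: Young's modulus (Pa), I: second moment of area (m^4),
    w: buoyed weight per unit length (N/m), r: radial clearance (m).
    Returns (sinusoidal, helical) loads in N."""
    base = math.sqrt(E * I * w * math.sin(math.radians(inclination_deg)) / r)
    f_sin = 2.0 * base                    # Dawson-Paslay sinusoidal onset
    f_hel = 2.0 * math.sqrt(2.0) * base   # helical threshold, sqrt(2) higher
    return f_sin, f_hel

# Illustrative drill-pipe numbers in a horizontal section
f_sin, f_hel = buckling_loads(E=2.1e11, I=3.0e-6, w=300.0, inclination_deg=90.0, r=0.05)
```

Compressive slack-off force above f_hel locks the string helically against the borehole wall, multiplying friction, which is the mechanism behind the paper's sliding slack-off limit.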

In this paper, performance analysis and comparison based on maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable-temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant-volume heat addition and constant-pressure heat rejection. All results and conclusions are based purely on classical thermodynamic analysis. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it accounts for the effect of engine size as related to investment cost. The results show that an engine design based on maximum power density, with constant effectiveness of the hot- and cold-side heat exchangers or constant inlet temperature ratio of the heat reservoirs, will have a smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions.
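The air-standard Atkinson cycle (isentropic compression by ratio rc, constant-volume heat addition, full isentropic expansion by ratio re, constant-pressure rejection) has the closed-form efficiency eta = 1 - gamma*(re - rc)/(re^gamma - rc^gamma). A quick sketch comparing it with an Otto cycle at the same compression ratio; the ratios are chosen purely for illustration:

```python
def otto_efficiency(r, gamma=1.4):
    # Air-standard Otto cycle: compression and expansion ratios are equal
    return 1.0 - r ** (1.0 - gamma)

def atkinson_efficiency(rc, re, gamma=1.4):
    # Air-standard Atkinson cycle: over-expansion (re > rc) recovers extra work
    return 1.0 - gamma * (re - rc) / (re ** gamma - rc ** gamma)

eta_otto = otto_efficiency(9.0)        # ~0.585
eta_atk = atkinson_efficiency(9.0, 18.0)   # ~0.645
```

The expansion ratio exceeding the compression ratio is what buys the Atkinson cycle its efficiency edge, at the cost of the larger maximum specific volume that the power-density objective penalizes.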

A new concept of compressing a toroidal plasma using a plasma focus device is considered. The maximum compression ratio of the toroidal plasma is determined merely by the initial density ratio of the toroidal plasma to the sheet plasma in the focus device, because of the Rayleigh-Taylor instability. An initiation scenario of the plasma liner is also proposed, with a possible application of this concept to the creation of a burning plasma in reversed field configurations, i.e., a burning plasma vortex. (author)

The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

A new camless compressed air engine is proposed, which allows the compressed air energy to be reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results are in good agreement with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of the camless valve of compressed air engines.

Background: Back squats (BSQ) have been shown to transiently improve performance in explosive vertical movements such as the vertical jump (VJ). Still, understanding of this phenomenon, termed post-activation potentiation (PAP), remains nebulous as it relates to explosive horizontal movements. Objective: Therefore, the purpose of the present investigation was to assess whether heavy BSQ can potentiate both VJ and horizontal jump (HJ) performance. Method: Nine male ice hockey players from the Long Beach State ice hockey team performed five testing sessions separated by 96 hours. The first testing session consisted of a one-repetition maximum (1-RM) BSQ to determine subsequent testing loads. The four subsequent testing sessions, which were randomized for order, consisted of five repetitions of BSQ at 87% 1-RM followed by horizontal jump (BSQ-HJ), five repetitions of BSQ at 87% 1-RM followed by vertical jump (BSQ-VJ), horizontal jump only (CT-HJ), and vertical jump only (CT-VJ). During the potentiated conditions, rest intervals were set at five minutes between the BSQ and either VJ or HJ. The alpha level was set a priori at 0.05. Results: The results indicate that both vertical (p=0.017) and horizontal (p=0.003) jump performance significantly increased (VJ = +5.51 cm, HJ = +11.55 cm) following a BSQ. Conclusion: These findings suggest that BSQ may improve both vertical and horizontal jump performance in athletes who participate in sports emphasizing horizontal power, such as ice hockey.

We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal

The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of the PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR for any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended with the PCR, be achieved?

The authors made a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplasmic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with respect to pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 of 39 patients (82%) who came to hospital by their own means continued walking after treatment; 8 of 66 patients (12%) who came in a wheelchair or were bedridden could move about on their own after treatment; and 41 patients (64%) had partial alleviation of pain after treatment. In those who came by their own means and whose status did not change, functional improvement was observed. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

A new concept of compressing a plasma in a closed magnetic configuration by a version of the liner-implosion flux compression technique is considered. The liner consists of a dense plasma cylinder, i.e. a plasma liner. The maximum compression ratio of the toroidal plasma is determined simply by the initial density ratio of the toroidal plasma to the liner plasma, because of the Rayleigh-Taylor instability. A start-up scenario of the plasma liner is also proposed, with a possible application of this concept to the creation of a burning plasma in reversed field configurations, i.e. a burning plasma vortex. (author)

A simple and fast determination of the limiting depth to the sources may represent significant help to data interpretation. To this end we explore the possibility of determining those source parameters shared by all classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, for example by using the well-known Bott-Smith rules. These rules involve only knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth by using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity or magnetic). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of this method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimated maximum depth agrees with the seismic information.
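The depth-to-ratio relationship can be checked numerically for the simplest case, a point (spherical) source, where f_max/|∂f/∂x|_max is proportional to depth with constant c = (3/2)(4/5)^(5/2), derivable from the point-mass gravity profile. This is a back-of-envelope illustration of the ratio criterion, not the paper's Nmax-based procedure:

```python
import numpy as np

z_true = 2.0                                  # source depth (arbitrary units)
x = np.arange(-20.0, 20.0, 1e-3)
f = z_true / (x**2 + z_true**2) ** 1.5        # vertical gravity of a point mass (GM = 1)

f_max = f.max()
g_max = np.abs(np.gradient(f, x)).max()       # horizontal-gradient maximum

# For a point source, |f'| peaks at x = z/2, giving f_max/g_max = z/c with:
c = 1.5 * (4.0 / 5.0) ** 2.5
z_est = c * f_max / g_max                     # recovers z_true
```

For extended sources the same ratio bounds the depth rather than equaling it, which is the spirit of the Bott-Smith-style rules.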

The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so it may be applied to more general graphs. Tests on several datasets achieve space savings of about 10% over existing methods.
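A common building block in web-graph storage is gap encoding of each sorted adjacency list followed by variable-length byte packing. This sketch illustrates that baseline idea, not the paper's exact scheme:

```python
def encode_adjacency(neighbors):
    """Gap-encode a sorted, non-negative adjacency list, then pack each gap as a
    variable-length integer (7 payload bits per byte, LEB128-style)."""
    out = bytearray()
    prev = 0
    for n in neighbors:
        gap = n - prev
        prev = n
        while True:
            byte = gap & 0x7F
            gap >>= 7
            if gap:
                out.append(byte | 0x80)   # high bit set: more bytes follow
            else:
                out.append(byte)
                break
    return bytes(out)

def decode_adjacency(data):
    nodes, cur, shift, val = [], 0, 0, 0
    for byte in data:
        val |= (byte & 0x7F) << shift
        if byte & 0x80:
            shift += 7
        else:
            cur += val                    # undo the gap encoding
            nodes.append(cur)
            val, shift = 0, 0
    return nodes

adj = [3, 17, 18, 19, 4000, 4001]
blob = encode_adjacency(adj)              # 7 bytes instead of 24 for 4-byte ints
```

Because link locality makes most gaps small, most gaps fit in one byte; schemes that avoid URL reordering, as the abstract describes, must get their savings from structure like this rather than from renumbering.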

Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes' pioneering work in the 1950s, it has been used not only as a physical law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
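Jaynes' dice problem is the canonical small example of the principle: the maximum entropy distribution over faces 1-6 subject to a fixed mean is exponential in the face value, with the Lagrange multiplier found by a one-dimensional search. A self-contained sketch (the target mean of 4.5 is chosen for illustration):

```python
import math

def maxent_die(target_mean, tol=1e-10):
    """Maximum-entropy distribution over die faces 1..6 with a fixed mean:
    p_i proportional to exp(lam * i); lam found by bisection (mean is
    monotonically increasing in lam)."""
    faces = range(1, 7)

    def mean(lam):
        w = [math.exp(lam * i) for i in faces]
        z = sum(w)
        return sum(i * wi for i, wi in zip(faces, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in faces]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)   # skewed toward high faces, but as uniform as the constraint allows
```

Any other distribution matching the mean encodes extra, unjustified structure; this least-bias property is what the drug-discovery applications in the article exploit at larger scale.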

Horizontal steam generators are typical components of nuclear power plants with pressurized water reactors of the VVER type. The thermal-hydraulic behavior of horizontal steam generators is very different from that of the vertical U-tube steam generator, which has been extensively studied for several years. To contribute to the understanding of horizontal steam generator thermal-hydraulics, a computer program for 3-D steady state analysis of the PGV-1000 steam generator has been developed. By means of this computer program, a detailed thermal-hydraulic and thermodynamic study of the horizontal steam generator PGV-1000 has been carried out and a set of important steam generator characteristics has been obtained. The 3-D distribution of the void fraction and the 3-D level profile as functions of load and secondary side pressure have been investigated, and secondary side volumes and masses as functions of load and pressure have been evaluated. Some of the interesting results of the calculations are presented in the paper.

One of the key issues addressed was pressure drop in long horizontal wells and its influence on well performance. Very little information is available in the literature on flow in pipes with influx through the pipe walls. Virtually all of this work has been in small-diameter pipes and with single-phase flow. In order to address this problem, new experimental data on flow in horizontal and near-horizontal wells have been obtained. Experiments were conducted at an industrial facility on a typical 6 1/8-in. ID, 100-ft-long horizontal well model. The new data, along with available information in the literature, have been used to develop new correlations and mechanistic models. Thus it is now possible to predict, within reasonable accuracy, the effect of influx through the well on pressure drop in the well.

This thesis deals with horizontal cooperation in transport and logistics. It contains a comprehensive discussion of the available academic literature on this topic, many practical examples, and an empirical investigation of opportunities and impediments. Furthermore, three enabling concepts for

The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.
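The geometric-mean versus maximum distinction can be made concrete with a period-independent toy calculation: rotate a pair of horizontal histories through all angles, take the median over angles of the geometric mean of the two orthogonal peaks (a GMRotD50-style quantity), and compare with the overall maximum. The signals below are synthetic stand-ins, not NGA records:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 2000)
# Two synthetic, correlated horizontal components (illustrative only)
a1 = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.standard_normal(t.size)
a2 = 0.6 * np.sin(2 * np.pi * 1.0 * t + 1.0) + 0.3 * rng.standard_normal(t.size)

peaks, geomeans = [], []
for th in np.deg2rad(np.arange(0, 90)):
    p1 = np.abs(a1 * np.cos(th) + a2 * np.sin(th)).max()    # rotated component
    p2 = np.abs(-a1 * np.sin(th) + a2 * np.cos(th)).max()   # orthogonal partner
    peaks.extend([p1, p2])
    geomeans.append(np.sqrt(p1 * p2))

max_demand = max(peaks)                 # maximum rotated demand
gmrot50 = float(np.median(geomeans))    # rotation-median geometric mean
ratio = max_demand / gmrot50            # >= 1 by construction
```

The ratio exceeding one is guaranteed by construction; the abstract's point is that in the near-fault region its actual magnitude is large enough to matter for design.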

Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.
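A stripped-down version of the idea: acquire random linear combinations of the time slices within an acquisition period, then reconstruct by solving the linear system. The sketch below uses more measurements than slices so plain least squares suffices; genuine compressive sensing would use fewer measurements plus a sparsity prior, and all sizes here are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_slices, n_pix = 8, 16
slices = rng.standard_normal((n_slices, n_pix))   # "true" per-time-slice sensor data

# Each measurement dataset is a distinct random linear combination of the slices
n_meas = 12
A = rng.standard_normal((n_meas, n_slices))       # mixing weights per measurement
measurements = A @ slices

# Mathematical reconstruction: least squares (A has full column rank a.s.)
recovered, *_ = np.linalg.lstsq(A, measurements, rcond=None)
```

With fewer measurements than slices, the same `A @ x = y` system becomes underdetermined, and recovery would rely on sparsity of the slice data in some basis.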

A molecular gas is compressed in a quasi-adiabatic manner to produce pulsed radiation during each compressor cycle when the pressure and temperature are sufficiently high, and part of the energy is recovered during the expansion phase, as defined in U.S. Pat. No. 3,751,666; characterized by use of a cylinder with a reciprocating piston as a compressor

In a previous report, a method for ultra-high magnetic field compression using the pinch plasma was discussed. It is summarized as follows. The experiment is performed with a Mather-type plasma focus device (quarter period τ = 2 μs, I = 880 kA at V = 20 kV). An initial DC magnetic field is fed by an electromagnet embedded in the inner electrode. The axial component of the magnetic field diverges from the maximum field of 1 kG on the surface of the inner electrode. The density profile deduced from a Mach-Zehnder interferogram with a 2-ns N₂ laser shows a density dip lasting for 30 ns along the axis. Using the measured density of 8 × 10¹⁸ cm⁻³, the temperature of 1.5 keV and the pressure balance relation, the magnitude of the trapped magnetic field is estimated to be 1.0 MG. The magnitude of the compressed magnetic field is also measured by Faraday rotation in a single-mode quartz fiber and by a magnetic pickup coil. A protective polyethylene tube (3-mm o.d.) is used along the central axis through the inner electrode and the discharge chamber. The peak value of the compressed field ranges from 150 to 190 kG. No signal of the magnetic field appears up to the instant of the maximum pinch

Tsunami propagation in the open ocean is most commonly modeled by solving the shallow water wave equations. These equations require two initial conditions: one on sea surface height and another on depth-averaged horizontal particle velocity or, equivalently, horizontal momentum. While most modelers assume that initial velocity is zero, Y.T. Song and collaborators have argued for nonzero initial velocity, claiming that horizontal displacement of a sloping seafloor imparts significant horizontal momentum to the ocean. They show examples in which this effect increases the resulting tsunami height by a factor of two or more relative to models in which initial velocity is zero. We test this claim with a "full-physics" integrated dynamic rupture and tsunami model that couples the elastic response of the Earth to the linearized acoustic-gravitational response of a compressible ocean with gravity; the model self-consistently accounts for seismic waves in the solid Earth, acoustic waves in the ocean, and tsunamis (with dispersion at short wavelengths). We run several full-physics simulations of subduction zone megathrust ruptures and tsunamis in geometries with a sloping seafloor, using both idealized structures and a more realistic Tohoku structure. Substantial horizontal momentum is imparted to the ocean, but almost all momentum is carried away in the form of ocean acoustic waves. We compare tsunami propagation in each full-physics simulation to that predicted by an equivalent shallow water wave simulation with varying assumptions regarding initial conditions. We find that the initial horizontal velocity conditions proposed by Song and collaborators consistently overestimate the tsunami amplitude and predict an inconsistent wave profile. Finally, we determine tsunami initial conditions that are rigorously consistent with our full-physics simulations by isolating the tsunami waves (from ocean acoustic and seismic waves) at some final time, and backpropagating the tsunami
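The amplitude effect of the initial-velocity assumption is visible already in the 1-D linearized shallow-water (d'Alembert) solution: a surface hump released with zero velocity splits into two half-amplitude waves, while one released with the depth-averaged velocity of a purely right-going wave keeps full amplitude in that direction. A sketch with an assumed Gaussian initial hump:

```python
import numpy as np

g, h = 9.81, 4000.0
c = np.sqrt(g * h)                       # shallow-water wave speed, ~198 m/s

x = np.linspace(-400e3, 400e3, 4001)     # m
f0 = lambda s: np.exp(-(s / 20e3) ** 2)  # initial sea-surface hump (m), illustrative

t = 1000.0                               # seconds after the rupture
# Zero initial velocity: the hump splits into two half-amplitude waves
eta_zero_u = 0.5 * (f0(x - c * t) + f0(x + c * t))
# Initial velocity u0 = (c/h) * eta0, matched to a right-going wave: full amplitude
eta_matched = f0(x - c * t)

ratio = eta_matched.max() / eta_zero_u.max()   # factor of two in amplitude
```

This factor-of-two sensitivity is exactly why the choice of initial velocity debated in the abstract matters, although the full-physics simulations find that most of the imparted horizontal momentum leaves as ocean acoustic waves rather than feeding the tsunami.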

This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs

A clinical thermometer that stores the voltage corresponding to the maximum temperature in an analog memory is described. The end of the measurement is indicated by a lamp switching off. The measurement time is shortened by means of a low-thermal-inertia platinum probe. This portable thermometer is fitted with a cell-test and calibration system. [fr

The failure processes in concrete and other brittle materials result from the propagation, coalescence and interaction of many preexisting microcracks or voids. To understand the real behaviour of brittle materials, it is necessary to bridge the gap from the relatively mature understanding of single-crack behaviour to stochastically distributed imperfections, that is, to connect the microscopic mechanisms of crack propagation and interaction with the macroscopic parameters of brittle materials. Brittle failure in compression was studied theoretically by Horii and Nemat-Nasser (1986), who obtained a closed-form solution for a preexisting flaw or certain special regular flaws. Zaitsev and Wittmann (1981) published a paper on crack propagation in compression in so-called numerical concrete, but they did not take account of the interaction among the microcracks. As for modelling the influence of crack interaction on fracture parameters, many studies have also been reported. Researchers characterizing crack interaction by the ratio of stress intensity factors (SIFs) computed with and without interaction find amplifying or shielding effects that depend on the relative positions of the microcracks. The present paper attempts to simulate the whole failure process of a brittle specimen in compression, including the complicated coupling effects between the interaction and propagation of randomly distributed or other typical microcrack configurations, step by step. The lengths, orientations and positions of the microcracks are all taken as random variables. The crack interaction among many preexisting random microcracks is evaluated with the help of a simple interaction matrix (Yang and Liu, 1991). For the subcritically stable propagation of microcracks in mixed-mode fracture, the well-known maximum hoop stress criterion is adopted to compute branching lengths and directions at each crack tip.
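
The maximum hoop stress criterion mentioned above has a standard closed form for the kink (branching) angle; a minimal sketch, independent of the paper's specific implementation:

```python
import math

# Maximum hoop (tangential) stress criterion: the crack kinks at the angle
# theta0 solving  K_I*sin(theta0) + K_II*(3*cos(theta0) - 1) = 0,
# with the closed-form solution (for K_II != 0):
#   theta0 = 2*atan( (K_I - sqrt(K_I^2 + 8*K_II^2)) / (4*K_II) )

def kink_angle(KI, KII):
    """Branching angle (radians) from the maximum hoop stress criterion."""
    if KII == 0.0:
        return 0.0                      # pure mode I: straight-ahead growth
    return 2.0 * math.atan((KI - math.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

# Pure mode II gives the classical ~70.5 degree kink (negative by convention).
print(math.degrees(kink_angle(0.0, 1.0)))   # -> about -70.5
```

For mixed-mode loading the same formula gives intermediate angles, e.g. about -53.1 degrees for K_I = K_II.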

A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always yields a positive solution over the whole energy range. Moreover, the theory unifies the overdetermined and underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)
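
The positivity property claimed above is characteristic of maximum entropy solutions in general. A toy illustration of the principle only (Jaynes' loaded-die problem, not the Poisson-likelihood unfolding itself):

```python
import math

# Among all distributions on faces 1..6 with a prescribed mean, the
# entropy-maximizing one is p_i ∝ exp(-lam*i); lam is fixed by the mean
# constraint, here via bisection.  The exponential form guarantees a
# strictly positive solution over the whole support.

FACES = range(1, 7)

def mean_for(lam):
    w = [math.exp(-lam * i) for i in FACES]
    return sum(i * wi for i, wi in zip(FACES, w)) / sum(w)

def solve_lambda(target_mean, lo=-5.0, hi=5.0):
    # mean_for is decreasing in lam; bisect until the constraint is met
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = solve_lambda(4.5)          # constrain the mean to 4.5
print(round(mean_for(lam), 6))   # -> 4.5
```

The unfolding theory of the abstract couples this entropy maximization to a Poisson likelihood, which this sketch does not attempt to reproduce.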

"Adiabatic Liquid Piston Compressed Air Energy Storage" (ALP-CAES). The compression ratio of the gas in the vessel (ratio between maximum and minimum pressure) is relatively low; typical values would be <1.5, whereas the compression ratio in existing CAES systems can be higher than 100, because the air is compressed from atmospheric pressure to the storage pressure. This investigation leads to the conclusion that: 1) The mechanical/electrical efficiency of the ALP-CAES system is significantly higher than that of existing CAES systems due to a low or nearly absent compression heat loss. Furthermore, pumps/turbines, which use a liquid as a medium, are more efficient than air/gas compressors/turbines. In addition, no fuel is required during expansion. 2) The energy density of the ALP-CAES system is much lower than that of existing CAES systems (by a factor of 15-30), leading to a similar increase in investment in pressure vessel volume per stored MWh. Since the pressure vessel constitutes a relatively large fraction of the overall cost of a CAES system, an increase of 15-30 times renders the system economically unfeasible unless the operating conditions and the system design are very carefully selected to compensate for the low energy density. Future electricity prices may increase to the extent that the efficiency benefit of ALP-CAES partly compensates for the added investment. 3) When comparing ALP-CAES to an adiabatic CAES system, where compression heat is stored in thermal oil, the ALP-CAES system is found to be competitive only under a very specific set of operating/design conditions, including very high operating pressure and the use of very large caverns. 4) New systems are under development which show an interesting trend: they use near-isothermal compression and expansion of air (compression/expansion at almost constant temperature), eliminate compression heat loss, and still maintain nearly the same level of energy density as existing CAES systems.
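
The energy-density penalty in conclusion 2) can be given a crude back-of-envelope check. This is an idealized isothermal ideal-gas estimate with illustrative numbers, not the report's detailed figures:

```python
import math

# For isothermal compression/expansion of an ideal gas, the recoverable work
# per unit of vessel volume held at peak pressure p_max with compression
# ratio r scales as  E/V ≈ p_max * ln(r).  The penalty of a low-ratio
# liquid-piston vessel relative to a full-ratio cavern is then ln(r1)/ln(r2).

def energy_density(p_max_bar, r):
    """Idealized stored work per m^3 of gas volume at peak pressure (J/m^3)."""
    return p_max_bar * 1e5 * math.log(r)

alp_caes = energy_density(200.0, 1.5)     # liquid-piston vessel, r ~ 1.5
caes     = energy_density(200.0, 100.0)   # conventional cavern,  r ~ 100

print(round(caes / alp_caes, 1))          # -> 11.4
```

The resulting factor of roughly 11 is the same order as the 15-30x quoted in the abstract; the report's larger figure presumably reflects non-ideal effects this sketch ignores.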

... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

Using a numerical model that calculates the dynamics of Plasma Focus devices, we compared the results of three different compression models of the plasma pinch. One of the main objectives in this area is to develop a simplified model to calculate the neutron production of Plasma Focus devices, to study the influence of the main parameters on this neutron yield. The dynamics is thoroughly studied, and the model predicts fairly well values such as maximum currents and times for pinch collapse. Therefore, we evaluate here different models of pinch compression, to try to predict the neutron production in good agreement with the rest of the variables involved. To fulfill this requirement, we have experimental results of neutron production as a function of deuterium filling pressure in the chamber, and typical values of other main variables in the dynamics of the current sheet.

We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
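
The LZ78 scheme underlying the dictionary compressor above can be sketched in a few lines. This is the textbook algorithm, not the paper's engineered data structure:

```python
def lz78_compress(text):
    """LZ78: emit (dictionary index, next char) pairs; index 0 = empty phrase."""
    dictionary = {}          # phrase -> index
    output, phrase = [], ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch                      # keep extending the match
        else:
            output.append((dictionary.get(phrase, 0), ch))
            dictionary[phrase + ch] = len(dictionary) + 1
            phrase = ""
    if phrase:                                # flush a trailing match
        output.append((dictionary[phrase], ""))
    return output

def lz78_decompress(pairs):
    phrases = [""]                            # index 0 = empty phrase
    out = []
    for idx, ch in pairs:
        phrases.append(phrases[idx] + ch)
        out.append(phrases[-1])
    return "".join(out)

data = "ababababababab"
pairs = lz78_compress(data)
assert lz78_decompress(pairs) == data
print(len(pairs), "pairs for", len(data), "chars")   # -> 7 pairs for 14 chars
```

Repeated substrings grow the phrases quickly, which is why dictionaries with many repeats compress especially well.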

We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp–Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time...
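
The key algebraic property exploited by such data structures is that Karp–Rabin fingerprints compose. A minimal sketch with toy parameters (real implementations choose the base randomly):

```python
# Karp-Rabin fingerprints compose: phi(xy) = phi(x)*B^len(y) + phi(y) (mod P),
# so the fingerprint of a concatenation is computable from the children's
# fingerprints without touching the characters, which is exactly what a
# grammar-compressed structure needs at each production rule.

P = (1 << 61) - 1      # a Mersenne prime modulus
B = 256                # base (in practice drawn randomly from [1, P-1])

def fingerprint(s):
    h = 0
    for ch in s:
        h = (h * B + ord(ch)) % P
    return h

def combine(fx, fy, len_y):
    """Fingerprint of x+y from the fingerprints of x and y."""
    return (fx * pow(B, len_y, P) + fy) % P

x, y = "grammar", "compressed"
assert combine(fingerprint(x), fingerprint(y), len(y)) == fingerprint(x + y)
print("composition holds")
```

Storing a fingerprint and a length per grammar nonterminal then lets substring queries be assembled from O(depth) combine steps.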

Background: Attempts to successfully regenerate lost alveolar bone have always been a clinician's dream. Angular defects, at least, have a fairer chance, but the same cannot be said about horizontal bone loss. The purpose of the present study was to evaluate the prevalence of horizontal alveolar bone loss and vertical bone defects in periodontal patients, and later, to correlate it with the treatment modalities available in the literature for horizontal and vertical bone defects. Materials and Methods: The study was conducted in two parts. Part I was the radiographic evaluation of 150 orthopantomographs (OPGs) (of patients diagnosed with chronic periodontitis and seeking periodontal care), which were digitized and read using the AutoCAD 2006 software. All the periodontitis-affected teeth were categorized as teeth with vertical defects (if the defect angle was ≤45° and defect depth was ≥3 mm) or as having horizontal bone loss. Part II of the study comprised a search of the literature on treatment modalities for horizontal and vertical bone loss in four selected periodontal journals. Results: Out of the 150 OPGs studied, 54 (36%) OPGs showed one or more vertical defects. Totally, 3,371 teeth were studied, out of which horizontal bone loss was found in 3,107 (92.2%) teeth, and vertical defects were found in only 264 (7.8%) of the teeth, a statistically significant difference. The literature search in Part II identified 477 papers addressing vertical and horizontal types of bone loss specifically. Out of the 477 papers, 461 (96.3%) have addressed vertical bone loss, and 18 (3.7%) have addressed treatment options for horizontal bone loss. Two papers have addressed both types of bone loss and are included in both categories. Conclusion: Horizontal bone loss is more prevalent than vertical bone loss but has been sidelined by researchers as very few papers have been published on the subject of regenerative treatment modalities for

Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is set not by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

Full Text Available For wireless-network microseismic monitoring, which suffers from low compression ratios and high communication energy consumption, this paper proposes a segmentation compression algorithm, based on the characteristics of microseismic signals and compressive sensing (CS) theory, for use in the transmission process. The algorithm segments the collected data according to the number of nonzero elements and improves the accuracy of signal reconstruction by reducing the number of combinations of nonzero elements within each segment, while exploiting compressive sensing theory to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone refactoring (Q-CSDR) algorithm used for reconstruction, for signals with a sparsity above 40 and a compression ratio above 0.4, the mean square error is less than 0.01, and the network lifetime is prolonged by a factor of 2.

The problem of deformation of a horizontal plane layer of a compressible material is solved in the framework of the theory of small strains. The upper boundary of the layer is under the action of shear and compressing loads, and the no-slip condition is satisfied on the lower boundary of the layer. The loads increase in absolute value with time, then become constant, and then decrease to zero. Various plasticity conditions are considered with regard to the material compressibility, namely, the Coulomb-Mohr plasticity condition, the von Mises-Schleicher plasticity condition, and the same conditions with the viscous properties of the material taken into account. To solve the system of partial differential equations for the components of irreversible strains, a finite-difference scheme is developed for a spatial domain increasing with time. The laws of motion of elastoplastic boundaries are presented; the stresses, strains, rates of strain, and displacements are calculated; and the residual stresses and strains are found.

Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly or in the case of sudden failure of pumps. Determination of the maximum water hammer is considered one of the most important technical and economic considerations for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
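
A first estimate of the maximum surge discussed above comes from Joukowsky's relation. A minimal sketch with illustrative values (not figures from the study):

```python
# Joukowsky's relation for the peak water-hammer surge from a sudden
# (instantaneous) valve closure:
#   delta_p = rho * a * delta_v
# rho: fluid density (kg/m^3), a: pressure-wave speed in the pipe (m/s),
# delta_v: velocity change of the flow (m/s).

def joukowsky_surge(rho, wave_speed, delta_v):
    """Peak surge pressure (Pa) for an instantaneous velocity change."""
    return rho * wave_speed * delta_v

dp = joukowsky_surge(rho=1000.0, wave_speed=1000.0, delta_v=2.0)
print(dp / 1e5, "bar")   # -> 20.0 bar
```

Simulation tools refine this bound by accounting for finite closure times, pipe elasticity, and reflections, but the Joukowsky value is the standard worst-case screening number.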

Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses, whose genomes have been sequenced, as well as plants, which have fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms were used: maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ). Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among the species compared.

Horizontal wells can take advantage of gravity drainage mechanisms, which can be important in conventional heavy oil and bitumen recovery. Horizontal drilling will impact on the development of established conventional heavy oil pools by infill drilling and application of enhanced recovery techniques. There will also be an impact on the development of extensions to established and newly discovered heavy oil pools, as well as a major impact on development of bitumen resources. To assess the impact of horizontal drilling on heavy oil supply, high-impact and low-impact scenarios were evaluated under specified oil-price assumptions for four heavy oil areas in Saskatchewan and Alberta. Horizontal well potential for infill drilling, waterflood projects, and thermal projects was assessed and estimates were made of such developments as reserves additions and heavy oil development wells under the two scenarios. In the low case, projected supply of conventional heavy oil and bitumen stabilizes at a level of 90,000-94,000 m³/d after 1994. In the high case, overall supply grows continuously from 80,000 m³/d in 1992 to 140,000 m³/d in 2002. Through application of horizontal drilling, reserves additions in western Canada could be improved by ca 100 million m³ by 2002. 14 figs., 6 tabs

The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform
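
The sparse-recovery principle behind CS can be illustrated with a minimal greedy reconstruction. This is a generic sketch with a random Gaussian matrix and orthogonal matching pursuit, not the CS-ET implementation, which works with projection operators and a sparsifying transform:

```python
import numpy as np

# Recover a k-sparse signal of length n from m < n random linear measurements
# using orthogonal matching pursuit (OMP): greedily pick the best-matching
# column, then least-squares fit on the selected support.

rng = np.random.default_rng(0)
n, m, k = 64, 24, 3

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # k-sparse signal
A = rng.normal(size=(m, n)) / np.sqrt(m)                       # measurement matrix
y = A @ x                                                      # m << n samples

support, residual = [], y.copy()
for _ in range(k):                      # greedy atom selection
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print(f"relative residual: {np.linalg.norm(residual) / np.linalg.norm(y):.1e}")
```

In ET the "measurements" are tilt-series projections and the limited tilt range plays the role of m << n, which is why sparsity priors suppress the streaking and elongation artefacts mentioned above.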

An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
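
The block-wise scheme described above can be sketched with NumPy's Chebyshev routines. Block length and degree here are illustrative choices, not the flight parameters:

```python
import numpy as np

# Per fitting interval: map the block onto [-1, 1], fit a low-order Chebyshev
# series, transmit only the coefficients, and reconstruct by evaluating the
# series.  The near-uniform error of Chebyshev fits is what bounds the loss.

def compress_block(samples, degree):
    x = np.linspace(-1.0, 1.0, len(samples))
    return np.polynomial.chebyshev.chebfit(x, samples, degree)

def decompress_block(coeffs, length):
    x = np.linspace(-1.0, 1.0, length)
    return np.polynomial.chebyshev.chebval(x, coeffs)

t = np.linspace(0.0, 1.0, 200)               # one 200-sample block
signal = np.sin(2 * np.pi * t) + 0.3 * t     # smooth telemetry-like data

coeffs = compress_block(signal, degree=9)    # 200 samples -> 10 coefficients
recon = decompress_block(coeffs, len(signal))

err = float(np.max(np.abs(recon - signal)))
print(f"{len(signal) / len(coeffs):.0f}x compression, max error {err:.1e}")
```

For smooth data the maximum error stays tiny even at 20x compression; noisy or discontinuous blocks need higher degrees or shorter fitting intervals.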

A technique is described which yields an accurate measurement of the temperature of density maximum of fluids which exhibit such anomalous behaviour. The method relies on the detection of changes in convective flow in a rectangular cavity containing the test fluid. The normal single-cell convection which occurs in the presence of a horizontal temperature gradient changes to a double-cell configuration in the vicinity of the density maximum, and this transition manifests itself in changes in th...

Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.

Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Beside the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf

Full Text Available The main objective of horizontal drilling is to place a drain-hole over a long distance within the pay zone to enhance productivity or injectivity. In drilling horizontal wells, more serious problems appear than in drilling vertical wells. These problems are: poor hole cleaning, excessive torque and drag, hole filling, pipe sticking, wellbore instability, loss of circulation, formation damage, poor cement jobs, and difficulties in logging jobs. For that reason, successful drilling and production of a horizontal well depends largely on the fluid used during the drilling and completion phases. Several new fluids that fulfill some or all of the required properties (hole cleaning, cuttings suspension, good lubrication, and relatively low formation damage) are presented in this paper.

When a high-voltage direct-current is applied to two beakers filled with polar liquid dielectrics like water or methanol, a horizontal bridge forms between the two beakers. By repeating a version of Pellat's experiment, it is shown that a horizontal bridge is stable by the action of electrohydrodynamic pressure. Thus, the static and dynamic properties of the phenomenon called a 'floating water bridge' can be explained by the gradient of Maxwell pressure, replenishing the liquid within the bridge against any drainage mechanism. It is also shown that a number of liquids can form stable and long horizontal bridges. The stability of such a connection, and the asymmetry in mass flow through such bridges caused by the formation of ion clouds in the vicinity of the electrodes, is also discussed by two further experiments.

We conducted a cross-sectional study to assess vocal and swallowing functions after horizontal glottectomy. Our study population was made up of 22 men aged 45 to 72 years (mean: 58.3) who underwent horizontal glottectomy and completed at least 1 year of follow-up. To compare postoperative results, 20 similarly aged men were included as a control group; all glottectomy patients and all controls were smokers. We used three methods, namely acoustic and aerodynamic voice analyses, the GRBAS (grade, roughness, breathiness, asthenicity, and strain) scale, and the voice handicap index-30 (VHI-30), to assess vocal function objectively, perceptually, and subjectively, respectively. We also assessed swallowing function objectively by fiberoptic endoscopic evaluation of swallowing (FEES) and subjectively with the M.D. Anderson dysphagia inventory (MDADI). The 22 patients were also subcategorized into three groups according to the extent of their arytenoid cartilage resection, and their outcomes were compared. Acoustic and aerodynamic analyses showed that the mean maximum phonation time was significantly shorter and the fundamental frequency significantly lower in the glottectomy group than in the controls (p = 0.001 for both), and that the mean jitter and shimmer values and the mean harmonics-to-noise ratio were all significantly higher (p = 0.001 for all); there were no significant differences among the three arytenoid subgroups. Assessments revealed no statistically significant differences among the three subgroups in GRBAS scale scores except for the breathiness score (p = 0.045), which was lower in the arytenoid preservation subgroup than in the total resection subgroup; there were no statistically significant differences among the three subgroups in VHI-30 scores. Finally, swallow testing found no statistically significant differences in FEES scores or MDADI scores. We conclude that horizontal glottectomy caused a deterioration in vocal function, but

The feasibility of producing heavy oil from shallow formations using either horizontal wells or short horizontal wells fractured horizontally is demonstrated. The problem of optimum proppant placement is solved in two steps. In step one, the finite productivity performance is considered in general terms showing that the performance is a function of two dimensionless parameters. Following derivation of optimum conditions, the solution is applied to the horizontal fracture consideration. The limiting factor is that to create an effective finite conductivity fracture, the dimensionless fracture conductivity must be on the order of unity, a fracture that is difficult to realize in higher permeability formations. The best candidates for the suggested configuration are shallow or moderate formations, or formations otherwise proven to accept horizontal fractures, and formations with low permeability/viscosity ratio. 7 refs., 2 tabs., 10 figs., 2 appendices.
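
The screening quantity behind the "order of unity" limit discussed above is the dimensionless fracture conductivity. A minimal sketch with illustrative (hypothetical) property values:

```python
# Dimensionless fracture conductivity:
#   C_fD = (k_f * w) / (k * x_f)
# k_f: proppant-pack permeability, w: fracture width, k: formation
# permeability, x_f: fracture half-length.  Effective finite-conductivity
# behaviour requires C_fD on the order of unity or more.

def cfd(k_frac_md, width_m, k_form_md, half_length_m):
    """Dimensionless fracture conductivity (permeabilities in consistent units)."""
    return (k_frac_md * width_m) / (k_form_md * half_length_m)

# Low-permeability formation: the same fracture easily reaches C_fD ~ 1.
print(round(cfd(k_frac_md=50_000, width_m=0.005, k_form_md=5, half_length_m=50), 2))
# High-permeability heavy-oil sand: the identical fracture falls far short.
print(round(cfd(k_frac_md=50_000, width_m=0.005, k_form_md=2_000, half_length_m=50), 4))
```

This is why the abstract singles out low permeability/viscosity-ratio formations as the best candidates: in high-permeability sands a practically achievable fracture cannot reach C_fD near unity.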

1) An examination of the intellectual and material resources which have directed the French programme towards: a) the natural uranium and plutonium system, b) the use of compressed gas as heat transfer fluid (primary fluid). 2) The parts played in exploring the field by the piles EL2 and G1, EL2 being a natural uranium, heavy water and compressed gas pile, and G1 a natural uranium, graphite and atmospheric air pile. 3) Development of the neutronics of graphite piles: physical study of G1. 4) The examination of certain problems posed by centres equipped with natural uranium, graphite and compressed carbon dioxide piles: structure, special materials, fluid circuits, maximum efficiency. Economic aspects. 5) Aids to progress: a) piles for testing materials and for tests on canned fuel elements, b) laboratory and calculation facilities. 6) Possible new orientations of compressed gas piles: a) raising of the pressure, b) enriched fuel, c) higher temperatures, d) use of heavy water. (author) [fr

The general objective of the International Seminars of Horizontal Steam Generator Modelling has been the improvement in understanding of realistic thermal hydraulic behaviour of the generators when performing safety analyses for VVER reactors. The main topics presented in the fourth seminar were: thermal hydraulic experiments and analyses, primary collector integrity, feedwater distributor replacement, management of primary-to-secondary leakage accidents and new developments in the VVER safety technology. The number of participants, representing designers and manufacturers of the horizontal steam generators, plant operators, engineering companies, research organizations, universities and regulatory authorities, was 70 from 10 countries.

Full Text Available Micro data is a valuable source of information for research. However, publishing data about individuals for research purposes, without revealing sensitive information, is an important problem. The main objective of privacy preserving data mining algorithms is to obtain accurate results/rules by analyzing the maximum possible amount of data without unintended information disclosure. Data sets for analysis may be in a centralized server or in a distributed environment. In a distributed environment, the data may be horizontally or vertically partitioned. We have developed a simple technique by which horizontally partitioned data can be used for any type of mining task without information loss. The partitioned sensitive data at 'm' different sites are transformed using a mapping table or graded grouping technique, depending on the data type. This transformed data set is given to a third party for analysis. This may not be a trusted party, but it is still allowed to perform mining operations on the data set and to release the results to all the 'm' parties. The results are interpreted among the 'm' parties involved in the data sharing. The experiments conducted on real data sets prove that our proposed simple transformation procedure preserves one hundred percent of the performance of any data mining algorithm as compared to the original data set while preserving privacy.
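A hedged sketch of the mapping-table transformation described above. The site data, attribute names, and code format below are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: each of the m sites replaces sensitive categorical
# values with shared opaque codes before handing its horizontal partition
# to a third party for mining.

def build_mapping(values):
    """Assign an opaque code to each distinct sensitive value."""
    return {v: f"C{i}" for i, v in enumerate(sorted(set(values)))}

def transform(records, attr, mapping):
    """Replace the sensitive attribute with its code in every record."""
    return [{**r, attr: mapping[r[attr]]} for r in records]

# Two example horizontal partitions held at different sites (invented data):
site1 = [{"age": 34, "disease": "flu"}, {"age": 51, "disease": "cancer"}]
site2 = [{"age": 29, "disease": "flu"}]

mapping = build_mapping([r["disease"] for r in site1 + site2])
shared = transform(site1, "disease", mapping) + transform(site2, "disease", mapping)
# The third party can count, cluster, or mine `shared`; only the m sites
# hold `mapping` and can interpret the codes back to real values.
```

Because the transformation is a bijection on attribute values, frequency-based mining results on the transformed set match those on the originals, which is the property the abstract claims.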

Full Text Available Internal combustion engines are the primary energy conversion machines in both industry and transportation. Modern technologies are being implemented in engines to meet today's demand for low fuel consumption. Friction energy consumed by the rubbing parts of an engine is becoming an important parameter for higher fuel efficiency. The rate of friction loss is primarily affected by sliding speed and the load acting upon the rubbing surfaces. Compression ratio is the main parameter that increases the peak cylinder pressure and hence the normal load on components. The aim of this study is to investigate the effect of compression ratio on the total friction loss of a diesel engine. A variable compression ratio diesel engine was operated at four different compression ratios: 12.96, 15.59, 18.03 and 20.17. Brake power and speed were kept constant at predefined values while measuring the in-cylinder pressure. Friction mean effective pressure (FMEP) data were obtained from the in-cylinder pressure curves for each compression ratio. The ratio of friction power to indicated power of the engine increased from 22.83% to 37.06% as the compression ratio varied from 12.96 to 20.17. Considering the thermal efficiency, FMEP and maximum in-cylinder pressure, the optimum compression ratio interval of the test engine was determined as 18.8–19.6.
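The friction figures above follow from simple mean-effective-pressure arithmetic; a sketch with invented illustrative pressures (not the paper's measurements):

```python
def fmep(imep, bmep):
    """Friction mean effective pressure: indicated minus brake MEP (bar)."""
    return imep - bmep

def friction_fraction(imep, bmep):
    """Share of indicated work lost to friction."""
    return fmep(imep, bmep) / imep

# At constant brake load, a higher compression ratio raises peak cylinder
# pressure and hence FMEP, so the friction share grows (values invented):
low_cr  = friction_fraction(imep=9.0,  bmep=7.0)   # ~0.22 at low CR
high_cr = friction_fraction(imep=11.0, bmep=7.0)   # ~0.36 at high CR
```

With brake MEP held fixed, the extra indicated work at the higher compression ratio is entirely absorbed by friction, mirroring the 22.83% to 37.06% trend reported.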

The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images, and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

Full text: Compression algorithms have been proposed to minimize storage space and transmission times of digital images. We assessed the impact of high-level lossy compression using JPEG and wavelet algorithms on image quality and reporting accuracy of cardiac Sestamibi studies. Twenty stress/rest Sestamibi cardiac perfusion studies were reconstructed into horizontal short, vertical long and horizontal long axis slices using conventional methods. Each of these six sets of slices was aligned for reporting and saved (uncompressed) as a bitmap. This bitmap was then compressed using JPEG compression, then decompressed and saved as a bitmap for later viewing. This process was repeated using the original bitmap and wavelet compression. Finally, a second copy of the original bitmap was made. All 80 bitmaps were randomly coded to ensure blind reporting. The bitmaps were read blinded and by consensus of two experienced nuclear medicine physicians using a 5-point scale and 25 cardiac segments. Subjective image quality was also reported using a 3-point scale. Samples of the compressed images were also subtracted from the original bitmap for visual comparison of differences. Results showed an average compression ratio of 23:1 for wavelet and 13:1 for JPEG. Image subtraction showed only very minor discordance between the original and compressed images. There was no significant difference in subjective quality between the compressed and uncompressed images. There was no significant difference in reporting reproducibility of the identical bitmap copy, the JPEG image and the wavelet image compared with the original bitmap. Use of the high compression algorithms described had no significant impact on reporting reproducibility and subjective image quality of cardiac Sestamibi perfusion studies.

The vertical eye movements in humans produced in response to head-over-heels constant velocity pitch rotation about a horizontal axis resemble those of other species. At 60 degrees/s these are persistent and tend to have non-reversing slow components that are compensatory to the direction of rotation. In most, but not all, subjects the slow component velocity was well characterized by a rapid build-up followed by an exponential decay to a non-zero baseline. Superimposed was a cyclic or modulation component whose frequency corresponded to the time for one revolution and whose maximum amplitude occurred during a specific head orientation. All response components (exponential decay, baseline and modulation) were larger during pitch backward compared to pitch forward runs. Decay time constants were shorter during the backward runs; thus, unlike left to right yaw axis rotation, pitch responses display significant asymmetries between paired forward and backward runs.

This paper presents an optimization model for rotor blades of horizontal axis wind turbines. The model refers to the wind speed distribution function of the specific wind site, with the objective of maximizing annual energy output. To speed up the search process and guarantee a globally optimal result, the extended compact genetic algorithm (ECGA) is used to carry out the search. Compared with the simple genetic algorithm, ECGA runs much faster and obtains more accurate results with a much smaller population size and fewer function evaluations. Using the developed optimization program, blades of a 1.3 MW stall-regulated wind turbine were designed. Compared with the existing blades, the designed blades have markedly better aerodynamic performance.

Liquid-metal-cooled breeder reactors are expected to use large quantities of sodium or sodium-potassium alloy, and evaluation of the possible consequences of a liquid-metal fire, henceforth referred to as a sodium fire, is an important consideration. Of particular interest is the sodium aerosol concentration at the air intake ports that are used for reactor cooling, and which might suffer restricted flow under high aerosol concentrations. The authors have devised and applied a methodology for estimating the concentration of aerosols released vertically and horizontally from building surfaces and monitored at other building surface points. This methodology has been used to make calculations that indicate the time development of aerosol buildup, and the maximum aerosol concentration, at air intake ports. Building wake effects, momentum-driven plume rise, and density-driven plume rise are considered

Full Text Available In this study, model tests were conducted on short piles installed in sands under a horizontal pullout load to investigate their behavior characteristics. From the horizontal loading tests, in which the pile diameter, pile length and loading point were varied, the horizontal pullout resistance and the rotational and translational movement pattern of the pile were investigated. As a result, the horizontal pullout resistance of a pile embedded in sands was found to depend on the pile length, diameter, loading point, etc. The ultimate horizontal pullout load tended to increase as the loading point (h/L) moved from the top toward the bottom of the pile, regardless of the ratio between pile length and diameter (L/D); it reached its maximum value at h/L = 0.75 and decreased afterwards. When the horizontal pullout load acted on the upper part, above the middle of the pile, the pile rotated clockwise and moved in the pullout direction, and the pivot point of the pile was located at 150–360 mm depth below the ground surface. On the other hand, when the horizontal pullout load acted on the lower part of the pile, the pile rotated counterclockwise and travelled horizontally, and the rotational angle was very small.

There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50–70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90–100°C may occur for dry, darkish soils of low thermal conductivity (0.1–0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5–1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
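The balance argument above can be sketched numerically. The downwelling longwave flux, emissivity, and heat-transfer coefficient below are plausible assumptions for the stated extreme case, not the study's exact inputs; the simplified steady-state balance εσT⁴ + h(T − T_air) = S_abs + εL_down is solved by bisection:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp(S_abs=1000.0, L_down=400.0, eps=0.95, h=8.0, T_air=328.15):
    """Solve eps*SIGMA*T^4 + h*(T - T_air) = S_abs + eps*L_down for T (K).
    Assumed values: L_down, eps, h are illustrative; S_abs and T_air (55 C)
    are the upper values quoted in the text. Ground heat flux is neglected,
    consistent with very low soil thermal conductivity."""
    def residual(T):
        return eps * SIGMA * T**4 + h * (T - T_air) - (S_abs + eps * L_down)
    lo, hi = 250.0, 450.0  # residual is negative at lo, positive at hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

T_surface = surface_temp()  # kelvin; lands near the 90-100 C range quoted
```

With these inputs the solver converges to roughly 370 K, consistent with the 90–100°C vicinity the abstract argues for.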

High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, transient imaging methods suffer from either low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equally spaced pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns to later reconstruct a super resolution transient/3D image. Because the low fill factor of a SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time; moreover, it is not easy to reconstruct a high resolution image with only a single sensor, whereas an array needs only to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
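A minimal sketch of the very sparse measurement matrix idea, here as an Achlioptas-style random projection; the parameter names and demo sizes are our own, not the paper's:

```python
import random

def sparse_measurement_matrix(n_features, n_dims, s=3, seed=0):
    """Very sparse random projection matrix: entries are +sqrt(s), -sqrt(s),
    or 0 with probabilities 1/(2s), 1/(2s), and 1 - 1/s respectively.
    Such matrices approximately preserve structure (JL-style) while letting
    features be extracted with only a handful of additions per dimension."""
    rng = random.Random(seed)
    rows = []
    for _ in range(n_dims):
        row = []
        for _ in range(n_features):
            u = rng.random()
            if u < 1 / (2 * s):
                row.append(s ** 0.5)
            elif u < 1 / s:
                row.append(-(s ** 0.5))
            else:
                row.append(0.0)
        rows.append(row)
    return rows

def compress(matrix, x):
    """Project a high-dimensional feature vector into the compressed domain,
    skipping the (many) zero entries."""
    return [sum(m * v for m, v in zip(row, x) if m) for row in matrix]

# Demo: compress a 1024-dimensional feature vector to 50 dimensions.
R = sparse_measurement_matrix(n_features=1024, n_dims=50, seed=1)
feature = compress(R, [0.5] * 1024)
```

Roughly two-thirds of the matrix entries are zero for s = 3, which is what makes feature extraction cheap enough for the real-time tracking loop the abstract describes.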

Data compression is very necessary in business data processing, because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time savings. In communication, we always want to transmit data efficiently and noise-free. This paper presents some techniques for lossless compression of text data, together with comparative results for multiple versus single compression, which will help identify the better compression output and inform the development of compression algorithms.
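A small experiment along these lines, comparing single and double compression with Python's zlib (the sample text is invented for the demonstration):

```python
import zlib

# Highly redundant sample text compresses very well on the first pass.
text = ("business data processing benefits from compression " * 200).encode()

once = zlib.compress(text, 9)
twice = zlib.compress(once, 9)  # compressing already-compressed data

ratio_once = len(text) / len(once)
# The first pass removes essentially all redundancy; the compressed stream
# is close to high-entropy, so a second pass has little left to exploit
# and typically only adds header overhead.
```

This is the kind of multiple-versus-single comparison the abstract refers to: for lossless text compressors, chaining passes of the same algorithm rarely improves on a single well-tuned pass.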

MEL is a geometric music encoding language designed to allow for musical objects to be encoded parsimoniously as sets of points in pitch-time space, generated by performing geometric transformations on component patterns. MEL has been implemented in Java and coupled with the SIATEC pattern discovery algorithm to allow for compact encodings to be generated automatically from in extenso note lists. The MEL-SIATEC system is founded on the belief that music analysis and music perception can be modelled as the compression of in extenso descriptions of musical objects.

To avoid a high bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make the measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered under low-illumination conditions.

Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely dispersed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
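Back-of-envelope arithmetic for the multistage point above: the overall compression ratio is the product of the stage ratios, so two equal stages each need only the square root of the overall ratio. The inlet pressure below is an assumed illustrative value; the 875 bar target comes from the text:

```python
# Assumed illustrative inlet pressure (e.g. electrolyzer delivery).
inlet_bar = 150.0
target_bar = 875.0  # refueling requirement quoted in the text

overall_ratio = target_bar / inlet_bar      # ~5.8 overall
per_stage_ratio = overall_ratio ** 0.5      # ~2.4 for two equal stages
```

A modest per-stage ratio is what lets each MH stage operate with a smaller temperature swing than a single-stage compressor covering the full ratio would need.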

In the presentation some of the calculated results of horizontal steam generator PGV - 440 modelling with RELAP5/Mod3 are described. Two nodalization schemes have been used with different components in the steam dome. A study of parameters variation on the steam generator work and calculated results is made in cases with separator and branch.

The Third International Seminar on Horizontal Steam Generators held on October 18-20, 1994 in Lappeenranta, consisted of six sessions dealing with the topics: thermal hydraulic experiments and analyses, primary collector integrity, management of primary-to-secondary leakage accidents, feedwater collector replacement and discussion of VVER-440 steam generator safety issues.

A complete horizontal molecular orientation of a linear-shaped thermally activated delayed fluorescent guest emitter 2,6-bis(4-(10Hphenoxazin-10-yl)phenyl)benzo[1,2-d:5,4-d′] bis(oxazole) (cis-BOX2) was obtained in a glassy host matrix by vapor deposition. The orientational order of cis-BOX2 depended on the combination of deposition temperature and the type of host matrix. Complete horizontal orientation was obtained when a thin film with cis-BOX2 doped in a 4,4′-bis(N-carbazolyl)-1,1′-biphenyl (CBP) host matrix was fabricated at 200 K. The ultimate orientation of guest molecules originates from not only the kinetic relaxation but also the kinetic stability of the deposited guest molecules on the film surface during film growth. Utilizing the ultimate orientation, a highly efficient organic light-emitting diode with an external quantum efficiency of 33.4 ± 2.0% was realized. The thermal stability of the horizontal orientation of cis-BOX2 was governed by the glass transition temperature (Tg) of the CBP host matrix; the horizontal orientation was stable unless the film was annealed above Tg.

and demonstrate the link between horizontal inequalities and conflict. Section two will ... the US race riots in the 1960's and the 2005 Paris riots to the genocides that .... be seen as a combination of ethnic fighting between the black population.

Flow behaviour for the ESS horizontal target is studied experimentally using a two-dimensional water model. A velocity field of stationary flow in the reaction zone has been obtained. A three-dimensional effect was also studied as a spanwise flow structure. (author) 3 figs., 3 refs.

The study also reveals domestic investment plays an important role to enhance vertical as well as horizontal export diversification for East Asia, while it only ... resource-based industries and gradually shift production and exports from customary products to more dynamic ones by developing competitive advantage in the ...

A mathematical model was developed to permit dynamic simulation of nitrogen interaction in a pilot horizontal subsurface flow constructed wetland receiving effluents from primary facultative pond. The system was planted with Phragmites mauritianus, which was provided with root zone depth of 75 cm. The root zone was ...

Purpose: A posterior-anterior vertebral vector is proposed to facilitate visualization and understanding of scoliosis. The aim of this study was to highlight the interest of using vertebral vectors, especially in the horizontal plane, in clinical practice. Methods: We used an EOS two-/three-dimensional system; cases of a normal spine and a thoracic scoliosis are presented. Results: For a normal spine, vector projections in the transverse plane are aligned with the posterior-anterior anatomical axis. For a scoliotic spine, vector projections in the horizontal plane provide information on the lateral decompensation of the spine and the lateral displacement of vertebrae. In the horizontal plane view, vertebral rotation and projections of the sagittal curves can also be analyzed simultaneously. Conclusions: The use of the posterior-anterior vertebral vector facilitates the understanding of the 3D nature...

manipulated in HRTFs used for binaural synthesis of sound in the horizontal plane. The manipulation of cues resulted in HRTFs with cues ranging from correct combinations of spectral information and ITDs to combinations with severely conflicting cues. Both the ITD and the spectral information seem...

Dynamics of central collisions of heavy nuclei in the energy range from a few tens of MeV/nucleon to a couple of GeV/nucleon is discussed. As the beam energy increases and/or the impact parameter decreases, the maximum compression increases. It is argued that the hydrodynamic behaviour of matter sets in in the vicinity of the balance energy. At higher energies, shock fronts are observed to form within head-on reaction simulations, perpendicular to the beam axis and separating hot compressed matter from cold. In semi-central reactions a weak tangential discontinuity develops in between these fronts. The hot compressed matter exposed to the vacuum in directions parallel to the shock front begins to expand collectively into these directions. The expansion affects particle angular distributions and mean energy components, and further the shapes of spectra and mean energies of particles emitted into any one direction. The variation of slopes and the relative yields measured within the FOPI collaboration are in general agreement with the results of simulations. As to the FOPI data on stopping, they are consistent with a preference for transverse over longitudinal motion in head-on Au + Au collisions. Unfortunately, though, the data cannot be used to decide directly on that preference due to acceptance cuts. Tied to the spatial and temporal changes in the reactions are changes in the entropy per nucleon. (authors)

The full-frame bit-allocation algorithm for radiological image compression can achieve an acceptable compression ratio as high as 30:1. It involves two stages of operation: a two-dimensional discrete cosine transform and pixel quantization in the transformed space with pixel depth kept accountable by a bit-allocation table. The cosine transform hardware design took an expandable modular approach based on the VME bus system with a maximum data transfer rate of 48 Mbytes/sec and a microprocessor (Motorola 68000 family). The modules are cascadable and microprogrammable to perform 1,024-point butterfly operations. A total of 18 stages would be required for transforming a 1,000 x 1,000 image. Multiplicative constants and addressing sequences are to be software loaded into the parameter buffers of each stage prior to streaming data through the processor stages. The compression rate for 1K x 1K images is expected to be faster than one image per sec
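The transform stage above is a separable two-dimensional DCT; a naive pure-Python sketch (not the VME hardware implementation) showing why energy concentrates in the low-frequency coefficients, which is what makes bit allocation in the transformed space effective:

```python
import math

def dct_1d(x):
    """Naive orthonormal DCT-II, the per-row/per-column transform."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            for k in range(N)]

def dct_2d(block):
    """Separable 2-D DCT: transform the rows, then the columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# A flat 8x8 block concentrates all of its energy in the DC coefficient,
# so high-frequency coefficients can be allocated very few (or zero) bits:
flat = [[10.0] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
```

Real images are locally smooth rather than perfectly flat, but the same concentration holds approximately, which is why a bit-allocation table keyed to coefficient position achieves the high compression ratios quoted.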

The anomalous compressibility of vitreous silica has been known for nearly a century, but the mechanisms responsible for it remain poorly understood. Using GHz-ultrasonic interferometry, we measured longitudinal and transverse acoustic wave travel times at pressures up to 5 GPa in vitreous silica with fictive temperatures (Tf) ranging between 985 °C and 1500 °C. The maximum in ultrasonic wave travel times, corresponding to a minimum in acoustic velocities, shifts to higher pressure with increasing Tf for both acoustic waves, with complete reversibility below 5 GPa. These relationships reflect polyamorphism in the supercooled liquid, which results in a glassy state possessing different proportions of domains of high- and low-density amorphous phases (HDA and LDA, respectively). The relative proportion of HDA and LDA is set at Tf and remains fixed on compression below the permanent densification pressure. The bulk material exhibits compression behavior systematically dependent on synthesis conditions that arises from the presence of floppy modes in a mixture of HDA and LDA domains.

The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn trigger an output signal on one of a plurality of driver output lines n. The particular output line selected depends on the converted digital value. A microfuse memory device with n segments connects across the driver output lines. Each segment is associated with one driver output line and includes a microfuse that is blown when a signal appears on the associated driver output line.

The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
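The quoted Planck luminosity is easy to check numerically, using standard CODATA-style values for the constants:

```python
# Numerical check of the Planck luminosity bound L_P = c^5 / G.
c = 2.998e8    # speed of light, m/s
G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2

L_P = c**5 / G                        # Planck luminosity, ~3.6e52 W
critical_collapse_peak = 0.2 * L_P    # largest luminosity seen in simulations
```

Note that L_P contains no factor of ħ: it is a purely classical combination of c and G, which is why it can plausibly bound classical general-relativistic dynamics.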

In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
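A hedged sketch of the count-based entropy estimation of I(R; Y) described above; the toy responses and labels are invented for illustration, and the plug-in estimator stands in for whatever smoother estimate the paper's objective uses:

```python
import math
from collections import Counter

def entropy(outcomes):
    """Plug-in (count-based) entropy estimate, in nats."""
    n = len(outcomes)
    return -sum(c / n * math.log(c / n) for c in Counter(outcomes).values())

def mutual_information(responses, labels):
    """I(R; Y) = H(R) + H(Y) - H(R, Y), all estimated from counts."""
    return entropy(responses) + entropy(labels) - entropy(list(zip(responses, labels)))

# Toy data: responses that track the labels have high I(R; Y);
# responses carrying no label information give I(R; Y) near zero.
y          = [0, 0, 1, 1, 0, 1, 0, 1]
perfect    = [0, 0, 1, 1, 0, 1, 0, 1]
uninformed = [0, 1, 0, 1, 0, 1, 0, 1]

I_perfect = mutual_information(perfect, y)       # = H(y) = ln 2
I_uninformed = mutual_information(uninformed, y)  # much smaller
```

Maximizing this quantity over classifier parameters is the regularization idea: responses should reduce the label's uncertainty as much as possible.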

A scintillation counter, particularly for counting gamma-ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield can be disassembled into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample-receiving zone. A sample elevator, used to transport samples into the zone, is designed to present a maximum gamma-receiving aspect to maximize gamma detection efficiency. (U.S.)

During flight, a vehicle's propulsion energy must overcome gravity, displace air masses along the trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy imparted to air masses deflected by the vehicle. Flight optimization, increasing speed while reducing fuel consumption, has directed research toward aerodynamics. Vehicle shapes developed through wind-tunnel studies optimize the impact with air masses and the airflow along the vehicle. Through energy-balance studies of vehicles in flight, the author, Ioan Rusu, directed his research toward reducing the energy lost at the vehicle's impact with air masses. In contrast to the classical approach of shaping aerodynamic surfaces to reduce impact and friction with air masses, Ioan Rusu invented a device he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania (OSIM), deposit f 2011 0352. Mounted at the front of a flight vehicle, it largely eliminates the impact and friction of air masses with the vehicle's solid surface: the incoming air contacts the air inside the free compression tube, so air-solid friction is replaced by air-to-air friction.

It has been shown theoretically that intense microwave radiation is absorbed non-classically, by a newly enunciated mechanism, when interacting with hydrogen plasma. Fields > 1 MG and λ > 1 mm are within this regime. The predicted absorption, approximately P_rf·v_θ^e, has not yet been experimentally confirmed. The applications of such a coupling are many. If microwave bursts of ≳5 × 10^14 W and 5 ns can be generated, the net generation of power from pellet fusion, as well as various military applications, becomes feasible. The purpose, then, for considering gas-gun photon compression is to obtain the above experimental capability by converting the gas kinetic energy directly into microwave form. Energies of >10^5 J cm^-2 and powers of >10^13 W cm^-2 are potentially available for photon interaction experiments using presently available technology. The following topics are discussed: microwave modes in a finite cylinder, injection, compression, switchout operation, and system performance parameter scaling

The Karp-Rabin fingerprint of a string is a type of hash value that, due to its strong properties, has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N, compressed by a context-free grammar of size n, that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(logN) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(loglogN) query time. Hence, our data structures have the same time and space complexity as for random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension problem in query time O(logNlogℓ) and O…
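The query the structure answers can be illustrated without any compression: given prefix fingerprints F[k] of S[:k] and powers of the base, the fingerprint of any substring S[i..j] follows in O(1). A plain-array sketch, where the base and modulus are arbitrary choices rather than the paper's:

```python
P = (1 << 61) - 1   # Mersenne prime modulus
B = 256             # polynomial base (any base < P works)

def build(s):
    """F[k] = fingerprint of prefix s[:k]; pw[k] = B^k mod P."""
    F = [0] * (len(s) + 1)
    pw = [1] * (len(s) + 1)
    for k, ch in enumerate(s):
        F[k + 1] = (F[k] * B + ord(ch)) % P
        pw[k + 1] = (pw[k] * B) % P
    return F, pw

def fingerprint(F, pw, i, j):
    """Fingerprint of s[i..j] inclusive, recovered from two prefixes."""
    return (F[j + 1] - F[i] * pw[j - i + 1]) % P
```

The paper's contribution is achieving this recovery when S is only available as a grammar, without decompressing any characters.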

This report describes the development and use of a method for calculating the velocity at impact of a projectile fired from a compressed air gun. The method is based on a simple but effective approach which has been incorporated into a computer program. The method was developed principally for use with the Horizontal Impact Facility at AEE Winfrith but has been adapted so that it can be applied to any compressed air gun of a similar design. The method has been verified by comparison of predicted velocities with test data and the program is currently being used in a predictive manner to specify test conditions for the Horizontal Impact Facility at Winfrith. (author)
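The report's actual calculation method is not reproduced in the abstract; as a sketch of the kind of estimate involved, the following assumes isothermal expansion of the reservoir gas driving the projectile down the barrel, with friction and valve losses ignored (all parameter names are hypothetical):

```python
import math

def muzzle_velocity(p0, v0, barrel_area, barrel_len, mass, p_atm=101_325.0):
    """Energy-balance estimate of projectile velocity at the muzzle.
    p0, v0: initial reservoir pressure [Pa] and volume [m^3].
    Work done by the expanding gas, minus work pushing ambient air out
    of the barrel, goes into kinetic energy of the projectile."""
    v1 = v0 + barrel_area * barrel_len
    w_gas = p0 * v0 * math.log(v1 / v0)        # isothermal expansion work
    w_atm = p_atm * barrel_area * barrel_len   # work against the atmosphere
    w_net = w_gas - w_atm
    if w_net <= 0:
        return 0.0
    return math.sqrt(2.0 * w_net / mass)
```

A real predictive code such as the one described would also model valve opening, gas dynamics in the barrel and sabot friction, which is why it is verified against test data.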

The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.
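A minimal instance of the sparsity-exploiting reconstruction idea: iterative soft-thresholding (ISTA) for the ℓ1-regularized least-squares problem, run here on a synthetic underdetermined system standing in for a CT or MRI forward operator. This is a generic sketch, not any specific algorithm from the article:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=1000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the prototypical sparsity-exploiting reconstruction."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L              # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# A 3-sparse signal observed through fewer random measurements than unknowns.
rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

The point the article makes is that, because the object is compressible, far fewer measurements than unknowns still allow a high-quality reconstruction.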

In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
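For the Gaussian case with a parameter-independent covariance, the compression reduces to projecting the data residual onto the model derivatives. A sketch with one parameter and a linear mean model (the worked example in the paper is more general):

```python
import numpy as np

def score_compress(d, mu, dmu, C):
    """One compressed statistic per parameter: t = dmu^T C^{-1} (d - mu),
    the score of a Gaussian likelihood whose covariance is parameter-free."""
    return dmu.T @ np.linalg.solve(C, d - mu)

# Worked example: mean model mu(theta) = theta * x, expanded about theta0.
x = np.array([1.0, 2.0, 3.0])
C = np.eye(3)
theta0 = 1.5
d = 2.0 * x                        # noise-free data generated at theta = 2
t = score_compress(d, theta0 * x, x, C)
F = x @ np.linalg.solve(C, x)      # Fisher information of the full data
theta_hat = theta0 + t / F         # quadratic-estimator update
```

Because t is a monotonic function of the score, it carries the same Fisher information F as the full N data points, which is the sense of optimality in the text.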

The full-frame bit allocation algorithm for radiological image compression developed in the authors' laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm have been completed. It involves two stages of operations: a two-dimensional discrete cosine transform and pixel quantization in the transform space with pixel depth kept accountable by a bit allocation table. Their design took an expandable modular approach based on the VME bus system which has a maximum data transfer rate of 48 Mbytes per second and a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSP's built into a single-board transform module can process a 1K x 1K image in 1.7 seconds. Additional transform modules working in parallel can be added if even greater speeds are desired. The flexibility inherent in the microcode extends the capabilities of the system to incorporate images of variable sizes. Their design allows for a maximum image size of 2K x 2K
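The two stages, a 2-D discrete cosine transform followed by quantization governed by a bit-allocation table, can be sketched per block. The real system is full-frame and microcoded on DSPs; the 8-bit baseline depth assumed here is illustrative:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def compress_block(block, bits):
    """Forward 2-D DCT, quantize each coefficient to the depth given by the
    bit-allocation table `bits` (fewer bits = coarser step), inverse DCT."""
    C = dct_matrix(block.shape[0])
    coef = C @ block @ C.T
    step = 2.0 ** (8 - bits.astype(float))   # assumed 8-bit baseline depth
    q = np.round(coef / step) * step
    q[bits == 0] = 0.0                       # zero bits: coefficient dropped
    return C.T @ q @ C
```

Allocating more bits to low frequencies, as the table does, concentrates the quantization error in the high-frequency coefficients the eye tolerates best.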

The gas circulation in a gas centrifuge due to temperature differences, differential rotation and injection, and removal of fluid at the ends, as well as due to temperature gradients at the cylinder wall, is treated analytically. The motion consists of a small perturbation on a state of isothermal

An ideal system of equations with shock heating is used to describe a Z pinch in a gas with high atomic number. In this case the equations do not depend on the installation parameters. An approximate, simple solution of such a system is presented. Numerical calculations of the equations with radiative cooling and various dissipative effects have determined the conditions under which the ideal magnetohydrodynamic system of equations applies. 10 refs

Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape
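The nonrelativistic, adiabatic part of this statement is just flux conservation. A sketch for a cylindrical compressor (the radiated energy of the relativistic case, up to twice the compressed field energy, is not modeled here):

```python
def compressed_field(b0, r0, r1):
    """Flux conservation B * A = const for a cylindrical flux compressor:
    the axial field scales as the inverse square of the radius."""
    return b0 * (r0 / r1) ** 2

def field_energy_ratio(r0, r1):
    """Magnetic energy per unit length ~ B^2 * r^2, so adiabatic compression
    from r0 to r1 multiplies the stored field energy by (r0/r1)^2."""
    return (r0 / r1) ** 2
```

The work supplied by the compressor against magnetic pressure accounts for this energy increase; in the relativistic case the additional radiated channel can carry up to twice the adiabatic field energy.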

An electron beam ion source nicknamed NICE-I (Naked Ion Collision Experiments) has been constructed at IPP for studies of atomic processes in fusion plasmas. A superconducting magnet is adopted to generate a strong, stable and homogeneous magnetic field to compress a high-density electron beam. The solenoid is 1 m long, the inner diameter is 100 mm and the maximum magnetic field is 2 T. It is placed horizontally and coaxially with a liquid nitrogen (L-N2) reservoir and a vacuum vessel. In order to keep their axes immovable even when the reservoirs are cooled by L-N2 and He, a structure having spokes strained uniformly like a wheel is used between the vacuum vessel and the L-N2 reservoir, and also between the L-N2 reservoir and the solenoid bore. The electrodes, such as the electron gun, the drift tubes and so on, are mounted on the radiation shields fixed on the L-N2 reservoir, and they are centered to the solenoid bore within a precision of 0.1 mm. The evaporation rate of L-He is about 1.4 l/h, which is not much larger than the estimated value. This provides continuous operation for 16 hours with a charge of 50 l of L-He, including the precooling of the reservoir. An ultimate pressure of 4 × 10^-10 Torr is achieved in the vacuum vessel, and the residual gas pressure in the ionization region is expected to be much lower than 1 × 10^-10 Torr. Considerations of mechanical strength and of the heat conduction of the materials related to the design are described, as well as the details of the structure. (author)

Bayesian probability theory and Maximum Entropy methods are at the core of a new view of scientific inference. These 'new' ideas, along with the revolution in computational methods afforded by modern computers allow astronomers, electrical engineers, image processors of any type, NMR chemists and physicists, and anyone at all who has to deal with incomplete and noisy data, to take advantage of methods that, in the past, have been applied only in some areas of theoretical physics. The title workshops have been the focus of a group of researchers from many different fields, and this diversity is evident in this book. There are tutorial and theoretical papers, and applications in a very wide variety of fields. Almost any instance of dealing with incomplete and noisy data can be usefully treated by these methods, and many areas of theoretical research are being enhanced by the thoughtful application of Bayes' theorem. Contributions contained in this volume present a state-of-the-art overview that will be influential and useful for many years to come

Highlights: • Compression force and radiation dose for 17 951 screening mammograms were analyzed. • Large variations in mean applied compression force between the breast centers. • Limited association between compression force and radiation dose. - Abstract: Purpose: Compression force is used in mammography to reduce breast thickness, thereby decreasing radiation dose and improving image quality. There are no evidence-based recommendations regarding the optimal compression force. We analyzed compression force and radiation dose between screening centers in the Norwegian Breast Cancer Screening Program (NBCSP), as a first step towards establishing evidence-based recommendations for compression force. Materials and methods: The study included information from 17 951 randomly selected screening examinations among women screened with equipment from four different vendors at fourteen breast centers in the NBCSP, January-March 2014. We analyzed the applied compression force and radiation dose used on the craniocaudal (CC) and mediolateral-oblique (MLO) views of the left breast, by breast center and vendor. Results: Mean compression force used in the screening program was 116 N (CC: 108 N, MLO: 125 N). The maximum difference in mean compression force between the centers was 63 N for CC and 57 N for MLO. Mean radiation dose for each image was 1.09 mGy (CC: 1.04 mGy, MLO: 1.14 mGy), varying from 0.55 mGy to 1.31 mGy between the centers. Compression force alone had a negligible impact on radiation dose (r² = 0.8%, p < 0.001). Conclusion: We observed substantial variations in mean compression force between the breast centers. Breast characteristics and differences in automated exposure control between vendors might explain the low association between compression force and radiation dose. Further knowledge about different automated exposure controls and the impact of compression force on dose and image quality is needed to establish individualised and evidence-based recommendations.

Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications. New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises. Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science. Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

This paper is devoted to a numerical investigation of the free convection flow about a horizontal cylinder maintained at 0 °C in a water ambient close to the point of maximum density. Complete numerical solutions covering both the transient as well as steady state have been obtained. Principal results indicate that the proximity of the ambient temperature to the point of maximum density plays an important role in the type of convection pattern that may be obtained. When the ambient temperature is within 4.7 °C
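The density maximum the study hinges on can be captured by a commonly quoted parabolic fit near 3.98 °C; the coefficient below is an assumption of this sketch, not taken from the paper:

```python
def water_density(t_c, rho_max=999.972, t_max=3.98, alpha=8.0e-6):
    """Parabolic approximation to water density [kg/m^3] near its maximum:
    rho(T) ~ rho_max * (1 - alpha * (T - t_max)^2), with alpha ~ 8e-6 /degC^2
    (a commonly quoted fit coefficient, assumed here)."""
    return rho_max * (1.0 - alpha * (t_c - t_max) ** 2)
```

Because density decreases on both sides of 3.98 °C, the buoyancy force near a 0 °C cylinder reverses sign as the local temperature crosses the maximum, which is why ambient temperatures within a few degrees of it qualitatively change the convection pattern.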

An experimental study of the phenomenon of buoyancy-driven natural ventilation through single-sided horizontal openings was performed in a full-scale laboratory test rig. The measurements were made for opening ratios L/D ranging from 0.027 to 4.455, where L and D are the length and the diameter of the opening, respectively. The basic nature of airflow through single-sided openings, including airflow rate, air velocity, temperature difference between the rooms and the dimensions of the horizontal openings, was measured. A bi-directional airflow rate was measured using the constant… In some cases the measured airflow rates fit quite well with the Epstein's formula, but in other cases the measured data show clear deviations from the Epstein's formula. Thus, revised formulas for natural ventilation are proposed.

A horizontal, modular, dry, irradiated fuel storage system (10) includes a thin-walled canister (12) for containing irradiated fuel assemblies (20), which canister (12) can be positioned in a transfer cask (14) and transported in a horizontal manner from a fuel storage pool (18), to an intermediate-term storage facility. The storage system (10) includes a plurality of dry storage modules (26) which accept the canister (12) from the transfer cask (14) and provide for appropriate shielding about the canister (12). Each module (26) also provides for air cooling of the canister (12) to remove the decay heat of the irradiated fuel assemblies (20). The modules (26) can be interlocked so that each module (26) gains additional shielding from the next adjacent module (26). Hydraulic rams (30) are provided for inserting and removing the canisters (12) from the modules (26).

Evaluation of the two formation characteristics conducive to economic well production is important when tight shale formation characterization and completion design are being considered. This paper presents the basic understanding required to improve the efficiency of horizontal completions in oil and gas producing shales. Guidelines are defined for effective perforation and fracturing to improve the efficiency and sustainability of horizontal completions using extensive laboratory characterization of mechanical properties on core, core/log integration and continuous mapping of these properties by logging-while-drilling (LWD) methods. The objective is to improve completion design efficiency. This is accomplished by suitable selection of perforation intervals based on an understanding of the relevant physical processes and rock characterization. Conditions at two reservoir regions, the near-wellbore and the far-wellbore, are outlined and are essential to completion design. From the study, it can be concluded that tight shales are strongly anisotropic and cannot be approximated using isotropic models.

In Upper Zakum Field, the interest in horizontal drilling has continued. A second horizontal well was drilled during the second half of 1989. This necessitated running logging tools for well control and to evaluate the reservoir characteristics. The logging tool selected for this well was that of Sperry-Sun. The tool configuration and tolerances were found to fulfil SADCO's requirements and specifications. This paper reports on the services provided, which included Measurement While Drilling (MWD) directional services and Recorded Lithology Logging (RLL). The RLL services cover Dual Gamma Ray (DGR), Electromagnetic Wave Resistivity (EWR) and Compensated Neutron Porosity (CN porosity). All the RLL tools were an integrated part of the bottom-hole drilling assembly. Data acquired while surveying were recorded in a recording sub downhole and retrieved when the tools were up at the surface. A PC-assisted quick-look interpretation was carried out using Archie's equation in shale-free limestone to calculate effective porosity, water saturation and bulk water volume

Traumatic injuries of teeth are the main cause of emergency treatment in dental practice. Radicular fractures in permanent teeth are uncommon, accounting for only 0.5-7% of cases. Horizontal root fractures are observed most frequently in the maxillary anterior region of young male patients, where traumatic injuries vary in severity from enamel fractures to avulsions. The fracture occurs most often in the middle third of the root, followed by the apical and coronal thirds. The present report describes a clinical case of a horizontal root fracture located at the middle third of a maxillary left central incisor, treated endodontically after approximating the fracture segments with the help of an orthodontic appliance. After 6 months of follow-up, the tooth was asymptomatic, with normal periodontal health.

Helicopters are among the most complex machines ever made. While ensuring high performance from the aeronautical point of view, they are not very comfortable, due to vibration created mainly by the main rotor and by the interaction with the surrounding air. Among the most stressed structural elements of the vehicle are the horizontal stabilizers. These elements are particularly affected because of their composite structure which, while guaranteeing lightness and strength, is characterized by low damping. This work makes a preliminary analysis of the dynamics of the structure and proposes different solutions to actively suppress vibrations. Among them, the best in terms of the relationship between performance and weight/complexity of the system is that based on inertial actuators mounted on the inside of the horizontal stabilizers. The work addresses the design of the device and its use in the stabilizer from both the numerical and the experimental points of view.

AIM: To observe the effect of surgery for paralytic horizontal strabismus performed by the Jensen procedure combined with recession of the antagonist of the paralytic muscle, and by extra-large medial or lateral rectus resection/recession. METHODS: Fifteen cases (17 eyes) with complete or nearly complete paralytic horizontal strabismus, treated from January 2005 to August 2014 in our hospital, were assessed retrospectively. Seven eyes of 7 cases (group A) underwent the Jensen procedure combined with recession of the antagonist of the paralytic muscle; 10 eyes of 8 cases (group B) underwent extra-large medial or lateral rectus resection/recession. Seventeen eyes of 15 cases were followed up for an average of 21±8.71mo. RESULTS: All 17 eyes of 15 cases obtained satisfactory effects after the operation, and 16 eyes of 14 cases obtained an ideal long-term effect. One eye of one patient, with a 6mo follow-up, was undercorrected by 30△. We found varying degrees of postoperative improvement in visual function. There was a significant reduction in the strabismus angle for distance and near (t=28.71, t=36.21, t=17.96, t=9.20). CONCLUSION: The Jensen procedure combined with recession of the antagonist of the paralytic muscle, and extra-large medial or lateral rectus resection/recession, are safe and successful methods of treatment in complete or nearly complete paralytic horizontal strabismus. Patients achieve orthophoria, improvement of motor ability, and a larger field of binocular single vision for a long time.

The production process p̄p → l⁻l′⁺ + X, where the leptons belong to two different generations and X refers to spectator jets, provides a clear signature for horizontal (generation-changing) bosons when the leptons are emitted nearly back-to-back and p_T^miss = 0. Cross sections and p_T distributions for each lepton are presented, and discovery limits on M_H are extracted for several different channels

The Emlichheim oilfield is located in north-west Germany on the Dutch-German border, forming the southern downdip part of the Schoonebeek anticline. The field was discovered in 1943 and came on production in 1944. Since production startup, Wintershall has operated the field as owner of the concession (90% share of interest) in a joint venture with Mobil Erdgas-Erdoel GmbH (10%). For more than 50 years an average crude oil production of 150,000 t/year has been maintained. Starting with huff 'n' puff and hot water flooding in the late 1960s, the first steam flood project was implemented in 1981. Further steamflood projects started in 1989, 1992, 1993, 1994 and 1998 in different areas of the field. Until 1997, only vertical production wells were drilled in the field; wellbore stability seemed to be a major problem for drilling horizontal wells in the unconsolidated sandstone reservoir at that time. In 1999 an innovative steamflood project was started, with three newly drilled horizontal producers surrounding a vertical steam injector. First results show a significant improvement in performance compared to the earlier projects and offer new chances for further development of the field. Today, the field could no longer be operated without the steam projects, as roughly 95% of the field production comes from thermal EOR. This paper gives a brief overview of the field and its production history, of the planning and realization of a current steamflood project using horizontal well technology, and of its performance compared to the earlier projects. It also describes the experience of drilling horizontal wells in the unconsolidated sandstone. A brief outlook on the future field development is given. (orig.)

Spatial localization of sound is often described as unconscious evaluation of cues given by the interaural time difference (ITD) and the spectral information of the sound that reaches the two ears. Our present knowledge suggests the hypothesis that the ITD roughly determines the cone of the perce… independently in HRTFs used for binaural synthesis. The ITD seems to be dominant for localization in the horizontal plane even when the spectral information is severely degraded.
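The ITD cue for a source in the horizontal plane is often approximated by Woodworth's spherical-head formula; a sketch with a typical head radius (the specific values are textbook assumptions, not from this study):

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth's spherical-head estimate of the interaural time
    difference [s]: ITD = (r/c) * (sin(theta) + theta), theta in radians,
    for a distant source in the horizontal plane (0 deg = straight ahead)."""
    th = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(th) + th)
```

The formula yields roughly 0.65 ms at 90°, the familiar maximum ITD; because the mapping is monotonic over the frontal quadrant, the ITD alone pins down the lateral angle (the "cone") even when spectral cues are degraded, consistent with the finding above.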

The aim of this thesis is the study of soil dynamics for important structures such as nuclear power plants, offshore platforms, dams, etc. Experimental results are presented for horizontal vibrations of a pile partially anchored in a scale-model soil placed in a centrifuge. Mechanical similitude conditions derived from equilibrium equations or rheological laws are described. After a description of the testing equipment (centrifuge, electrodynamic exciter), the experimental results are interpreted with a model. Non-linearities in the frequency response curves are characterized.

Acrylonitrile-butadiene-styrene (ABS) is a commonly used material in the fused deposition modeling (FDM) process. In this work, ABS and ABS plus parts were built with different building parameters and tested according to the ASTM D695 standard. Compression strength results were compared to stock ABS material values. The fracture surfaces of selected specimens were examined under a scanning electron microscope (SEM) to determine the failure mode of the filament strands. Following this, a Stewart platform part was tested under compression in a tensile testing machine. The experimental results were employed to develop a finite element model of the Stewart platform part, in order to determine the maximum force the part can withstand. The finite element model results were in good agreement with the values measured in the Stewart platform compressive tests, demonstrating that the model developed is reliable. In these experiments, it was found that ABS parts built with a larger layer thickness showed lower compressive strength, which ABS plus did not. ABS specimens on average developed about half the compressive strength of the ABS plus specimens, while the ABS plus specimens showed lower compressive strength values than stock ABS material.

Lightweight foamcrete is a versatile material; it primarily consists of a cement-based mortar mixed with at least 20% air by volume. High flowability, low self-weight, minimal requirement for aggregate, controlled low strength and good thermal insulation properties are a few characteristics of foamcrete. Its dry density is typically below 1600 kg/m3, with compressive strengths up to a maximum of 15 MPa. The ASTM standard provision specifies a correction factor for concrete strengths of between 14 and 42 MPa to compensate for the reduced strength when the height-to-diameter aspect ratio of the specimen is less than 2.0, while the CEB-FIP provision specifically mentions the ratio of 150 x 300 mm cylinder strength to 150 mm cube strength. However, neither provision specifically clarifies the applicability and/or modification of the correction factors for the compressive strength of foamcrete. The proposed laboratory work is intended to study the effect of different dimensions and profiles on the axial compressive strength of concrete. Specimens of various dimensions and profiles are cast with square and circular cross-sections, i.e., cubes, prisms and cylinders, and their behavior in compression strength is investigated at 7 and 28 days. Hypothetically, compressive strength will decrease with increasing specimen dimension, and a cube specimen would yield compressive strength comparable to a cylinder (100 x 100 x 100 mm cube to 100 mm dia x 200 mm cylinder).
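The ASTM correction referred to (stated for normal-strength concrete of 14-42 MPa; whether it transfers to foamcrete is exactly the question the study raises) interpolates tabulated factors against the height-to-diameter ratio. The factors below are the commonly cited ASTM C39 values, reproduced here as an illustrative sketch:

```python
import numpy as np

# Commonly cited ASTM C39 strength-correction factors for cylinders
# with height-to-diameter ratio below 2.0; values between the
# tabulated points are interpolated.
_LD = np.array([1.00, 1.25, 1.50, 1.75, 2.00])
_FACTOR = np.array([0.87, 0.93, 0.96, 0.98, 1.00])

def corrected_strength(measured_mpa, height_mm, diameter_mm):
    """Apply the L/D correction factor to a measured cylinder strength."""
    ld = height_mm / diameter_mm
    if ld >= 2.0:
        return measured_mpa
    return measured_mpa * float(np.interp(ld, _LD, _FACTOR))
```

A stubby 100 x 100 mm cylinder thus has its measured strength scaled down by 0.87 before comparison with a standard 2:1 cylinder, the kind of adjustment the study questions for low-strength foamcrete.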

Results of uniaxial, unconfined compression tests on artificial diesel-contaminated and uncontaminated frozen silty soils are discussed. The testing program involved 59 specimens. The results show that, for the same fluid content, diesel contamination reduced the strength of the frozen specimens by increasing the unfrozen water content. For example, in specimens containing 50 per cent diesel oil of the fluid content by weight, the maximum strength was reduced by 95 per cent compared to the strength of an uncontaminated specimen. Diesel contamination was also shown to contribute to slippage between soil particles by acting as a lubricant, thus accelerating the loss of compressive strength. 13 refs., 18 figs

A design approach for determining the optimum number of stages in a magnetic pulse compression circuit and the gain per stage is presented. The limitation on the maximum gain per stage is discussed. The total system volume is minimized by considering the energy storage capacitor volume and magnetic core volume at each stage. At the end of the paper, the design of a magnetic-pulse-compression-based linear induction accelerator of 200 kV, 5 kA and 100 ns, with a repetition rate of 100 Hz, is discussed together with its experimental results
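The flavor of the stage-count optimization can be sketched under a deliberately simple cost model: equal gain g per stage, total gain G = g^n fixed, and total core volume growing like n·g. That model (an assumption of this sketch, not the paper's exact cost function) has its continuous optimum at per-stage gain g = e:

```python
import math

def optimal_stages(total_gain, n_max=64):
    """Minimize total volume ~ n * G**(1/n) over the number of stages n,
    for fixed total compression gain G shared equally among stages.
    The continuous optimum is n = ln(G), i.e. per-stage gain g = e."""
    best_n = min(range(1, n_max), key=lambda n: n * total_gain ** (1.0 / n))
    return best_n, total_gain ** (1.0 / best_n)
```

Real designs cap the per-stage gain below this for saturation and reset reasons, which is the "limitation on the maximum gain per stage" the paper discusses.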

The experimental work includes developing and using a thermal convection cell to obtain measurements of the heat flux and turbulent core temperature of a horizontal layer of fluid heated internally and subject to both stabilizing and destabilizing temperature differences. The ranges of Rayleigh numbers tested were 10^7 ≤ R_I ≤ 10^13 and -10^10 ≤ R_E ≤ 10^10. Power integral methods were found to be adequate for interpolating and extrapolating the data. The theoretical work consists of the derivation, solution and use of the mean field equations for the study of thermally driven convection in horizontal layers of infinite extent. The equations were derived by a separation-of-variables technique in which the horizontal directions are described by periodic structures and the vertical by some function of z. The derivation resulted in a coupled set of momentum and energy equations. The equations were simplified by taking the infinite Prandtl number limit and neglecting direct intermodal interaction. Solutions to these equations are used to predict the existence of multi-wavenumber flows at all supercritical Rayleigh numbers. Subsequent inspection of existing experimental photographs of convecting fluids confirms their existence. The onset of time dependence is found to coincide with the onset of the second convective mode. Each mode is found to consist of two wavenumbers, and typically the velocity and temperature fields of the right modal branch are found to be out of phase.

Steam assisted gravity drainage (SAGD) is a thermal recovery process used to recover bitumen and heavy oil. This paper presented a newly developed model to estimate cooling time and formation thermal diffusivity by using a thermal transient analysis along the horizontal wellbore under a steam heating process. This radial conduction heating model provides information on the heat influx distribution along a horizontal wellbore or elongated steam chamber, and is therefore important for determining the effectiveness of the heating process in the start-up phase in SAGD. Net heat flux estimation in the target formation during start-up can be difficult to measure because of uncertainties regarding heat loss in the vertical section; steam quality along the horizontal segment; distribution of steam along the wellbore; operational conditions; and additional effects of convection heating. The newly presented model can be considered analogous to pressure transient analysis of a buildup after a constant pressure drawdown. The model is based on an assumption of an infinite-acting system. This paper also proposed a new concept of a heating ring to measure the heat storage in the heated bitumen at the time of testing. Field observations were used to demonstrate how the model can be used to save heat energy, conserve steam and enhance bitumen recovery. 18 refs., 14 figs., 2 appendices.

Tidal signals have been largely studied with gravimeters, strainmeters and tiltmeters, but can also be retrieved from digital records of the output of long-period seismometers, such as STS-1, particularly if they are properly isolated. Horizontal components are often noisier than the vertical ones, due to sensitivity to tilt at long periods. Hence, horizontal components are often disturbed by local effects such as topography, geology and cavity effects, which imply a strain-tilt coupling. We use series of data (duration larger than 1 month) from several permanent broadband seismological stations to examine these disturbances. We search for a minimal set of observable signals (tilts, horizontal and vertical displacements, strains, gravity) necessary to reconstruct the seismological record. Such analysis gives a set of coefficients (per component for each studied station), which are stable over years and can then be used systematically to correct data for these disturbances without needing heavy numerical computation. Special attention is devoted to ocean loading for stations close to oceans (e.g. Matsushiro station in Japan (MAJO)), and to pressure correction when barometric data are available. Interesting observations are made for vertical seismometric components; in particular, we found an admittance between pressure and data 10 times larger than for gravimeters for periods larger than 1 day, while this admittance reaches the usual value of -3.5 nm/s^2/mbar for periods below 3 h. This observation may be due to instrumental noise, but the exact mechanism is not yet understood.

In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
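
The maximum-entropy derivation referenced above is easiest to see in its standard constrained form. Below is a minimal, hypothetical sketch of a doubly-constrained, entropy-maximizing trip-distribution (gravity) model solved by iterative balancing; the data, the exponential deterrence function, and the parameter values are invented for illustration and are not the paper's dependence-based formulation.

```python
import math

# Illustrative doubly-constrained, entropy-maximizing trip distribution
# (gravity) model solved by iterative balancing. All names and numbers are
# hypothetical; this is the standard constrained formulation the abstract
# contrasts with its constraint-free, dependence-based model.

origins = [1000.0, 2000.0]          # trips produced at each origin i
destinations = [1500.0, 1500.0]     # trips attracted to each destination j
cost = [[1.0, 2.0], [2.0, 1.0]]     # generalized travel cost c_ij
beta = 0.5                          # cost-sensitivity parameter

def deterrence(c):
    # Exponential deterrence function arising from maximum entropy
    return math.exp(-beta * c)

A = [1.0] * len(origins)            # origin balancing factors
B = [1.0] * len(destinations)       # destination balancing factors

for _ in range(100):                # alternate updates until convergence
    for i in range(len(origins)):
        A[i] = 1.0 / sum(B[j] * destinations[j] * deterrence(cost[i][j])
                         for j in range(len(destinations)))
    for j in range(len(destinations)):
        B[j] = 1.0 / sum(A[i] * origins[i] * deterrence(cost[i][j])
                         for i in range(len(origins)))

# Trip matrix T_ij reproduces both the origin and destination totals.
T = [[A[i] * origins[i] * B[j] * destinations[j] * deterrence(cost[i][j])
      for j in range(len(destinations))] for i in range(len(origins))]
row_sums = [sum(row) for row in T]
col_sums = [sum(T[i][j] for i in range(len(origins)))
            for j in range(len(destinations))]
```

In the dependence formulation described in the abstract, the information carried by these balancing constraints would instead be encoded in regression-estimated dependence coefficients.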

The Modified Cam Clay model is extended to account for the behaviour of unsaturated soils using Bishop’s stress. To describe the Loading – Collapse behaviour, the model incorporates a compressibility framework with suction and degree of saturation dependent compression lines. For simplicity, the present paper describes the model in the triaxial stress space with characteristic simulations of constant suction compression and triaxial tests, as well as wetting tests. The model reproduces an evolving post yield compressibility under constant suction compression, and thus, can adequately describe a maximum of collapse.

With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
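
The coefficient-zeroing step described above can be sketched with a single-level Haar transform (an assumption; the study does not name a specific wavelet). The signal and threshold below are invented: the small detail coefficients carry the noise, and zeroing them changes the reconstruction only slightly.

```python
import math

# Minimal sketch of wavelet compression by coefficient thresholding:
# single-level Haar transform, zeroing of small (high-frequency,
# low-amplitude) detail coefficients, then reconstruction.

def haar_forward(x):
    s = math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) / s for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)   # even sample
        x.append((a - d) / s)   # odd sample
    return x

# Invented signal: two large "target" levels plus small fluctuations.
signal = [10.0, 10.2, 9.9, 10.1, 50.0, 50.3, 49.8, 50.1]
approx, detail = haar_forward(signal)

threshold = 0.5                 # zero the low-amplitude noise coefficients
detail_c = [d if abs(d) > threshold else 0.0 for d in detail]
zeros = detail_c.count(0.0)     # zeroed coefficients compress well

reconstructed = haar_inverse(approx, detail_c)
max_error = max(abs(a - b) for a, b in zip(signal, reconstructed))
```

The run of zeros in `detail_c` is what a subsequent lossless entropy coder exploits, while `max_error` stays far below the amplitude of the large, low-frequency features.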

Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after a supernova. The state of cold matter at a few times nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
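
The DCT-plus-thresholding pipeline such a presentation covers can be illustrated numerically on a single 8-sample block (standing in for one row of an 8x8 image block); the sample values and threshold are invented.

```python
import math

# Small numerical illustration of DCT-based compression: orthonormal
# forward DCT-II, thresholding of small coefficients, inverse DCT-III.

def dct(x):
    """Orthonormal DCT-II of a list of samples."""
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                           for n in range(N)))
    return out

def idct(X):
    """Inverse of the orthonormal DCT-II (a scaled DCT-III)."""
    N = len(X)
    out = []
    for n in range(N):
        total = math.sqrt(1.0 / N) * X[0]
        total += sum(math.sqrt(2.0 / N) * X[k]
                     * math.cos(math.pi * (n + 0.5) * k / N)
                     for k in range(1, N))
        out.append(total)
    return out

block = [52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0]  # invented pixels
coeffs = dct(block)
kept = [c if abs(c) > 5.0 else 0.0 for c in coeffs]  # drop small terms
restored = idct(kept)
max_error = max(abs(a - b) for a, b in zip(block, restored))
```

Because the transform is orthonormal, the reconstruction error is bounded by the energy of the dropped coefficients, which is exactly the quantization/thresholding trade-off the presentation discusses.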

The performance of two methods for image compression in nuclear medicine was evaluated: the precise (lossless) LZW method and the approximate cosine-transform method. The results show that the approximate method produced images of agreeable quality for visual analysis, with compression rates considerably higher than the precise method. (C.G.C.)

Two magnetic pulse compression circuits (MPCC), for two different plasma devices, are presented. The first is a 20 J/pulse, 3-stage circuit designed to trigger a low pressure discharge. The circuit has a 16-18 kV working voltage and 200 nF in each stage. The saturable inductors are realized with toroidal 25 µm strip-wound cores, made of a Fe-Ni alloy, with 1.5 T saturation induction. The total magnetic volume is around 290 cm³. By using a 25 kV/1 A thyratron as a primary switch, the time compression is from 3.5 µs to 450 ns, in a short-circuit load. The second magnetic pulser is a 200 J/pulse circuit, designed to drive a high average power plasma focus soft X-ray source, with X-ray microlithography as the main application. The 3-stage pulser should supply a maximum load current of 100 kA with a rise-time of 250-300 ns. The maximum pulse voltage applied on the plasma discharge chamber is around 20-25 kV. The three saturable inductors in the circuit are made of toroidal strip-wound cores with METGLAS 2605 CO amorphous alloy as the magnetic material. The total, optimized mass of the magnetic material is 34 kg. The maximum repetition rate is limited to 100 Hz by the thyratron used in the first stage of the circuit, the driver supplying about 20 kW average power to the load. (author). 1 tab., 3 figs., 3 refs.

The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what … with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class…

It has been previously demonstrated that salinity can be reconstructed from sediment porewater. The goal of our study is to reconstruct high precision salinity during the Last Glacial Maximum (LGM). Salinity is usually determined at high precision via conductivity, which requires a larger volume of water than can be extracted from a sediment core, or via chloride titration, which yields lower than ideal precision. It has been demonstrated for water column samples that high precision density measurements can be used to determine salinity at the precision of a conductivity measurement using the equation of state of seawater. However, water column seawater has a relatively constant composition, in contrast to porewater, where variations from standard seawater composition occur. These deviations, which affect the equation of state, must be corrected for through precise measurements of each ion's concentration and knowledge of apparent partial molar density in seawater. We have developed a density-based method for determining porewater salinity that requires only 5 mL of sample, achieving density precisions of 10^-6 g/mL. We have applied this method to porewater samples extracted from long cores collected along a N-S transect across the western North Atlantic (R/V Knorr cruise KN223). Density was determined to a precision of 2.3x10^-6 g/mL, which translates to a salinity uncertainty of 0.002 g/kg if the effect of differences in composition is well constrained. Concentrations of anions (Cl- and SO4^2-) and cations (Na+, Mg^2+, Ca^2+, and K+) were measured. To correct salinities at the precision required to unravel LGM Meridional Overturning Circulation, our ion precisions must be better than 0.1% for SO4^2-/Cl- and Mg^2+/Na+, and 0.4% for Ca^2+/Na+ and K+/Na+. Alkalinity, pH and Dissolved Inorganic Carbon of the porewater were determined to precisions better than 4% when ratioed to Cl-, and used to calculate HCO3- and CO3^2-. Apparent partial molar densities in seawater were
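
The density-to-salinity inversion described above can be sketched with a linearized equation of state. The reference density and haline contraction coefficient below are approximate round numbers (assumptions, not values from the study); real work would use the full seawater equation of state plus the ion-composition corrections the abstract describes.

```python
# Hypothetical sketch of converting a high-precision density measurement to
# salinity via a linearized equation of state. RHO_REF and BETA are assumed
# illustrative values, not constants from the study.

RHO_REF = 1024.763   # approx. density (kg/m^3) of seawater, S = 35, 20 C (assumed)
S_REF = 35.0         # reference salinity (g/kg)
BETA = 0.76          # approx. d(rho)/dS near S = 35, in kg/m^3 per g/kg (assumed)

def salinity_from_density(rho):
    """Invert the linearized EOS: rho = RHO_REF + BETA * (S - S_REF)."""
    return S_REF + (rho - RHO_REF) / BETA

# A density precision of 2.3e-6 g/mL (2.3e-3 kg/m^3) then propagates to a
# salinity uncertainty of about 2.3e-3 / 0.76 ~ 0.003 g/kg, the same order
# as the 0.002 g/kg quoted in the abstract.
salinity_sigma = 2.3e-3 / BETA
```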

Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
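
The Fitch algorithm mentioned above computes the small-parsimony score on a tree bottom-up; a minimal sketch with invented character data follows (the paper's network extension additionally resolves conflicting assignments at reticulate vertices, which is not modeled here).

```python
# Minimal Fitch small-parsimony sketch on a binary tree. Trees are nested
# 2-tuples; leaves are single-character state strings. Data are invented.

def fitch(tree):
    """Return (candidate state set, parsimony score) for a subtree."""
    if isinstance(tree, str):               # leaf: observed character state
        return {tree}, 0
    left, right = tree
    lset, lcost = fitch(left)
    rset, rcost = fitch(right)
    inter = lset & rset
    if inter:                               # children agree: no substitution
        return inter, lcost + rcost
    return lset | rset, lcost + rcost + 1   # disagreement: one substitution

# ((A,A),(C,G)): one change within the right pair, one more at the root.
states, score = fitch((("A", "A"), ("C", "G")))
```

The Sankoff algorithm generalizes this recursion to arbitrary substitution-cost matrices, matching the unequal-cost setting the paper considers.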

The mechanical behaviors of rocks affected by high temperature and stress are generally believed to be significant for the stability of certain projects involving rocks, such as nuclear waste storage and geothermal resource exploitation. In this paper, veined marble specimens were subjected to high-temperature treatment and then used in conventional triaxial compression tests to investigate the effect of temperature, confining pressure, and vein angle on strength and deformation behaviors. The results show that the strength and deformation parameters of the veined marble specimens changed with the temperature, presenting a critical temperature of 600 °C. The triaxial compression strength of a horizontal vein (β = 90°) is obviously larger than that of a vertical vein (β = 0°). The triaxial compression strength, elasticity modulus, and secant modulus have an approximately linear relation to the confining pressure. Finally, Mohr-Coulomb and Hoek-Brown criteria were respectively used to analyze the effect of confining pressure on triaxial compression strength.
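
The reported linear relation between triaxial strength and confining pressure is exactly what a Mohr-Coulomb fit exploits. The sketch below fits sigma1 = sigma_c + k * sigma3 to invented data points (not the paper's marble results) and recovers the friction angle and cohesion from the slope and intercept.

```python
import math

# Illustrative Mohr-Coulomb fit: sigma1 = sigma_c + k * sigma3, with
# k = (1 + sin(phi)) / (1 - sin(phi)) and
# sigma_c = 2 * c * cos(phi) / (1 - sin(phi)). Data are invented.

sigma3 = [0.0, 10.0, 20.0, 30.0]        # confining pressures (MPa)
sigma1 = [100.0, 140.0, 180.0, 220.0]   # peak axial strengths (MPa)

# Ordinary least-squares fit of the line sigma1 = sigma_c + k * sigma3.
n = len(sigma3)
mx = sum(sigma3) / n
my = sum(sigma1) / n
k = sum((x - mx) * (y - my) for x, y in zip(sigma3, sigma1)) \
    / sum((x - mx) ** 2 for x in sigma3)
sigma_c = my - k * mx

# Invert the Mohr-Coulomb relations for friction angle and cohesion.
sin_phi = (k - 1.0) / (k + 1.0)
phi = math.asin(sin_phi)
phi_deg = math.degrees(phi)
cohesion = sigma_c * (1.0 - sin_phi) / (2.0 * math.cos(phi))
```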

We build a dynamic general equilibrium model with 2 countries, horizontal and vertical multinational activity and endogenous domestic and foreign investment. It is found that horizontal multinational activity always leads to a complementary relationship between domestic and foreign

Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both numerically lossless (reversible) and lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression as a primary applicable tool for medical applications was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features brought a set of mean rates given for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

This article shows how horizontal industry integration can arise from transferable asymmetry of technologies and endowments. The Nash bargaining solution suggests that greater technological diversity among coordinating parties yields greater gains from horizontal integration. The framework fits the case where a firm with a superior technology franchises the technology by horizontal integration. The results appear to fit hog production where integration has been primarily horizontal and, in pa...

It is well-known that energy harvesting from wind can be used to power remote monitoring systems. There are several studies that use wind energy in small-scale systems, mainly with vertical-axis wind turbines. However, there are very few studies with actual implementations of small wind turbines. This paper compares the performance of horizontal- and vertical-axis wind turbines for energy harvesting in wireless sensor network applications. The problem with the use of wind energy is that most of the time the wind speed is very low, especially in urban areas. Therefore, this work includes a study of the wind speed distribution in an urban environment and proposes a controller to maximize the energy transfer to the storage systems. The generated power is evaluated by simulation and experimentally for different load and wind conditions. The results demonstrate the increase in efficiency of wind generators that use maximum power transfer tracking, even at low wind speeds.
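
Maximum-power-transfer tracking of the kind mentioned above is commonly implemented as perturb-and-observe hill climbing. The toy sketch below uses an assumed, invented power-vs-duty-cycle curve; the paper's actual controller and generator characteristics are not specified here.

```python
# Perturb-and-observe maximum power point tracking on a hypothetical
# generator power curve (peak assumed at duty = 0.6). Illustrative only.

def power(duty):
    """Assumed power output vs. converter duty cycle, peaking at 0.6."""
    return max(0.0, 1.0 - (duty - 0.6) ** 2 * 10.0)

duty = 0.2            # initial operating point, far from the optimum
step = 0.01           # perturbation size
direction = 1.0       # current perturbation direction
last_p = power(duty)

for _ in range(500):
    duty += direction * step      # perturb the operating point
    p = power(duty)
    if p < last_p:
        direction = -direction    # power fell: reverse the perturbation
    last_p = p
# The controller ends up oscillating within a few steps of the peak.
```

At steady state the operating point hovers around the maximum with an amplitude set by the step size; practical trackers shrink the step near the peak to reduce this ripple.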

The estimation of gas production rates from hydrate bearing sediments requires complex numerical simulations. This manuscript presents a set of simple and robust analytical solutions to estimate the maximum depressurization-driven recoverable gas. These limiting-equilibrium solutions are established when the dissociation front reaches steady state conditions and ceases to expand further. Analytical solutions show the relevance of (1) relative permeabilities between the hydrate free sediment, the hydrate bearing sediment, and the aquitard layers, and (2) the extent of depressurization in terms of the fluid pressures at the well, at the phase boundary, and in the far field. Closed-form solutions for the size of the produced zone allow for expeditious financial analyses; results highlight the need for innovative production strategies in order to make hydrate accumulations an economically-viable energy resource. Horizontal directional drilling and multi-wellpoint seafloor dewatering installations may lead to advantageous production strategies in shallow seafloor reservoirs.

Recent studies have emphasized that the etiology of tendinopathy is not as simple as was once thought. The etiology is likely to be multifactorial. Etiologic factors may include some of the traditional factors such as overuse, inflexibility, and equipment problems; however, other factors need to be considered as well, such as age-related tendon degeneration and biomechanical considerations as outlined in this article. More research is needed to determine the significance of stress-shielding and compression in tendinopathy. If they are confirmed to play a role, this finding may significantly alter our approach in both prevention and in treatment through exercise therapy. The current biomechanical studies indicate that certain joint positions are more likely to place tensile stress on the area of the tendon commonly affected by tendinopathy. These joint positions seem to be different than the traditional positions for stretching exercises used for prevention and rehabilitation of tendinopathic conditions. Incorporation of different joint positions during stretching exercises may exert more uniform, controlled tensile stress on these affected areas of the tendon and avoid stress-shielding. These exercises may be able to better maintain the mechanical strength of that region of the tendon and thereby avoid injury. Alternatively, they could more uniformly stress a healing area of the tendon in a controlled manner, and thereby stimulate healing once an injury has occurred. Additional work will have to prove if a change in rehabilitation exercises is more efficacious than current techniques.

An apparatus for confining molten metal with a horizontal alternating magnetic field. In particular, this invention employs a magnet that can produce a horizontal alternating magnetic field to confine a molten metal at the edges of parallel horizontal rollers as a solid metal sheet is cast by counter-rotation of the rollers.

We studied the GABA sensitivity of horizontal cells in the isolated goldfish retina. After the glutamatergic input to the horizontal cells was blocked with DNQX, GABA depolarized the monophasic and biphasic horizontal cells. The pharmacology of these GABA-induced depolarizations was tested with the

This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method; the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

Kossow, AJ, DeChiara, TG, Neahous, SM, and Ebben, WP. Kinetic analysis of horizontal plyometric exercise intensity. J Strength Cond Res 32(5): 1222-1229, 2018-Plyometric exercises are frequently performed as part of a strength and conditioning program. Most studies assessed the kinetics of plyometric exercises primarily performed in the vertical plane. The purpose of this study was to evaluate the multiplanar kinetic characteristics of a variety of plyometric exercises, which have a significant horizontal component. This study also sought to assess sex differences in the intensity progression of these exercises. Ten men and 10 women served as subjects. The subjects performed a variety of plyometric exercises including the double-leg hop, standing long jump, single-leg standing long jump, bounding, skipping, power skipping, cone hops, and 45.72-cm hurdle hops. Subjects also performed the countermovement jump for comparison. All plyometric exercises were evaluated using a force platform. Dependent variables included the landing rate of force development and landing ground reaction forces for each exercise in the vertical, frontal, and sagittal planes. A 2-way mixed analysis of variance with repeated-measures for plyometric exercise type demonstrated main effects for exercise type for all dependent variables (p ≤ 0.001). There was no significant interaction between plyometric exercise type and sex for any of the variables assessed. Bonferroni-adjusted pairwise comparisons identified a number of differences between the plyometric exercises for the dependent variables assessed (p ≤ 0.05). These findings should be used to guide practitioners in the progression of plyometric exercise intensity, and thus program design, for those who require significant horizontal power in their sport.

The geosteering process should not be seen as a process solely designated for the most expensive or highest profile horizontal wells. It can be regarded as another tool for improving the odds of success by remaining in the productive zone for longer periods of drilling. Also, it can be used to optimize the positioning of a horizontal wellbore in the sweet spots within the reservoir. The current process has been successfully applied to large infill drilling programs at over 40 wells for heavy oil, tight gas, conventional oil and gas plays and for Mannville coalbed methane (CBM) in Alberta. The service has been provided irrespective of location, as long as the Wellsite Information Transfer Standard Markup Language (WITSML)/Pason Satellite service is available. Exploration and production (E&P) companies are continuously being driven to reduce the cost per barrel of oil equivalent (BOE). E&P needs and technologies related to advanced and accurate directional drilling, communication of vital data in real-time through the internet, as well as reduced cycle time associated with advanced forward-looking 3D geo-modelling and visualization technologies, are currently converging. The motivation to reduce costs has been responsible for advancing the horizontal well geosteering process by incorporating the Measurement While Drilling (MWD) tool into mainstream drilling practices. The universal economic benefits gained can be found in all resource play types (conventional oil and gas, heavy oil, tight gas and coalbed methane). It is important to note that the process described here is essentially collaborative. For best results, there must be cooperation between the E&P operational geologist, wellsite geologist, directional driller and geo-modelling staff, as well as the engineering consultants involved in the project (i.e. the team as a whole).

A study is being conducted of the resources and planning that would be required to clean up an extensive contamination of the outdoor environment. As part of this study, an assessment of the fleet of machines needed for decontaminating large outdoor surfaces of horizontal concrete will be attempted. The operations required are described. The performance of applicable existing equipment is analyzed in terms of area cleaned per unit time, and the comprehensive cost of decontamination per unit area is derived. Shielded equipment for measuring directional radiation and continuously monitoring decontamination work is described. Shielding of drivers' cabs and remote control vehicles is addressed.

An experimental study of the phenomenon of buoyancy driven natural ventilation through single-sided horizontal openings was performed in a full-scale laboratory test rig. The measurements were made for opening ratios L/D ranging from 0.027 to 4.455, where L and D are the length of the opening and the diameter of the opening, respectively. The basic nature of airflow through single-sided openings, including airflow rate, air velocity, temperature difference between the rooms and the dimensions...

Desalinated water supplies are one of the problems of nuclear power plants located by the sea. This paper explains saline water desalination by a Horizontal Tube Evaporator (HTE) and compares it with flash evaporation. A thermo-compressor research project using the HTE method has been designed, constructed, and operated at the Esfahan Nuclear Technology Center (ENTC). The project's ultimate goal is to obtain empirical formulae based on data gathered during operation of the unit and its subsequent development towards design and construction of desalination plants on an industrial scale.

A horizontal, floating, plastic-hose oil skimmer operates at −20 to +100°C as a moving belt, driven by a 0.7 kW motor at 1400 rpm, to pick up oil by adhesion from a surface such as that of used cooling water or cutting oil, for subsequent stripping and collection by gravity flow. Two models provide collection rates of 10-45 L/hr for diesel oil, 35-115 L/hr for hydraulic oil, and 170-455 L/hr for gear oils and heavy heating oils.

A horizontal multi-purpose microbeam system with a single electrostatic quadruplet focusing lens has been developed at the Columbia University Radiological Research Accelerator Facility (RARAF). It is coupled with the RARAF 5.5 MV Singleton accelerator (High Voltage Engineering Europa, the Netherlands) and provides a micrometer-sized beam for single-cell irradiation experiments. It is also used as the primary beam for neutron microbeam and microPIXE (particle induced x-ray emission) experiments because of its high particle fluence. The optimization of this microbeam has been investigated with ray-tracing simulations, and the beam spot size has been verified by different measurements.

A proton target whose polarization vector may be arbitrarily oriented in the horizontal plane relative to the beam has been developed and tested. A polarization of 70% is obtained. A temperature of 0.6 K is achieved by continuous-cycle pumping of 3He. 1,2-propylene glycol doped with Cr(V) was used as the working medium. The magnetic system takes the form of superconducting Helmholtz coils with a working current close to the critical one. Target polarization is measured by an NMR technique using an original proton-signal processing system.

The trajectory of a laminar buoyant jet discharged horizontally has been studied. The experimental observations were based on the injection of pure water into a brine solution. Under certain conditions the jet has been found to undergo bifurcation. The bifurcation of the jet occurs in a limited domain of Grashof number and Reynolds number. The regions in which bifurcation occurs have been mapped in the Reynolds number-Grashof number plane. There are three regions where bifurcation does not occur, and mechanisms that prevent bifurcation in each have been proposed.

An optical test problem was constructed to evaluate p log p maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. A comparison with maximum a posteriori restoration is made. 7 figures

We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group-velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, effectively addressing logistic difficulties in data acquisition. Traditionally, these challenges have hindered high-resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracking...

The time-averaged horizontal distribution of near-inertial waves (NIWs) in the western Gulf of Mexico (GoM) is investigated using horizontal velocity data obtained from Lagrangian trajectories of 200 surface drifters drogued at 50 m and deployed between September 2008 and September 2012. Preliminary results suggest a maximum time-averaged near-inertial circle radius of 2.6 km located in the southern Campeche Bay near [22N, 95W], implying an inertial velocity of about 0.14 m/s. Similar conclusions are drawn using horizontal velocity data obtained from 21 moorings deployed in the western GoM during the same period. Maximum near-inertial kinetic energy and clockwise spectral energy are found at mooring LNK3500, located at 21.850N and 94.028W. Maximum inertial circles measured with the mooring data, however, are about 1.6 km, implying inertial currents of 0.087 m/s, approximately 40% smaller. This discrepancy seems to be due to the different depth levels of the measurements and the bandwidth used to extract the near-inertial oscillations from the total flow. The time-averaged horizontal distributions of wind work computed from Lagrangian and Eulerian data are compared, and they are not consistent with the time-averaged NIW field. The differences are not well understood, but we speculate they may be due to the different time scales of wind fluctuations in the northwestern GoM compared to those observed in the Bay of Campeche, together with the change of sign of the background vorticity in the region, which is negative (anticyclonic) in the northern GoM and positive (cyclonic) in the Bay of Campeche.
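
As a rough consistency check of the numbers quoted above, the inertial-circle radius and inertial speed are related through the Coriolis parameter, U = f r with f = 2Ω sin(latitude). A minimal sketch (standard constants, no data from the study):

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate (rad/s)

def inertial_speed(radius_m, lat_deg):
    """Speed implied by an inertial circle of radius r: U = f*r,
    with Coriolis parameter f = 2*Omega*sin(latitude)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return f * radius_m

u_drifter = inertial_speed(2600.0, 22.0)  # 2.6 km circle near 22N (drifters)
u_mooring = inertial_speed(1600.0, 22.0)  # 1.6 km circle, same latitude (moorings)
print(round(u_drifter, 3), round(u_mooring, 3))
```

This reproduces the ~0.14 m/s and ~0.087 m/s figures, a ~38% reduction, consistent with the "approximately 40% smaller" statement.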

This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles, occurring during the irreversible adiabatic processes, is accounted for by using isentropic efficiencies of the compression and expansion processes. The performances of the cycles are obtained using engine design parameters such as the isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may serve as guidelines for engine designers.
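
The use of isentropic efficiencies to model internal irreversibility can be sketched for the simplest case, an air-standard Otto cycle; the parameter values below (eta_c, eta_e, tau, T1) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def otto_performance(rc, tau, eta_c=0.85, eta_e=0.90, gamma=1.4, T1=300.0):
    """Air-standard Otto cycle with internal irreversibility.
    rc: compression ratio; tau = T3/T1 (cycle temperature ratio);
    eta_c, eta_e: isentropic efficiencies of compression and expansion.
    Returns (net work per unit c_v, thermal efficiency)."""
    T2s = T1 * rc**(gamma - 1.0)      # ideal (isentropic) compression end state
    T2 = T1 + (T2s - T1) / eta_c      # actual end state: eta_c < 1 raises T2
    T3 = tau * T1                     # peak cycle temperature
    T4s = T3 * rc**(1.0 - gamma)      # ideal expansion end state
    T4 = T3 - eta_e * (T3 - T4s)      # actual end state: eta_e < 1 raises T4
    w_net = (T3 - T2) - (T4 - T1)     # q_in - q_out, per unit c_v
    return w_net, w_net / (T3 - T2)

# Sweep the compression ratio to locate the maximum-work design point
rcs = np.linspace(2.0, 20.0, 400)
works = np.array([otto_performance(rc, 6.0)[0] for rc in rcs])
rc_opt = rcs[int(np.argmax(works))]
```

With irreversibility included, net work peaks at a finite compression ratio instead of growing monotonically, which is why maximum-power and maximum-efficiency design points separate.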

Desalination plants based on Mechanical Vapour Compression (MVC) technology are inherently the most thermodynamically efficient; the thermodynamic efficiency of the MVC process derives from the application of the heat-pump principle. A single two-effect MVC desalination pilot plant of capacity 50 m3/day has recently been commissioned at Trombay, Mumbai. The desalination unit is very compact, unique among seawater desalination technologies, and is operated using electricity only. Horizontal-tube thin-film spray evaporators are used for efficient heat transfer. It is suitable for sites where the feed water is highly saline, condenser cooling water is absent, and a thermal heat source is not available. The unit produces high-quality, nearly demineralized (DM) water directly from seawater; no polishing unit is needed, and the product water can be used directly as boiler feed make-up and for other high-quality process water requirements in industry. This paper presents the design and highlights the technical features of this unit. (author)

Compressive Sensing Imaging (CSI) is a new framework for image acquisition which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4-2 dB compared with the current state of the art, while maintaining low computational complexity.
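
The reconstruction side of CS acquisition can be illustrated with a generic sparse-recovery sketch. This is not the paper's algorithm: it is a minimal Orthogonal Matching Pursuit (OMP) demo on synthetic data, with all problem sizes chosen arbitrarily:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))          # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                 # residual orthogonal to support
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 200, 5                                 # measurements, ambient dim, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)         # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                       # compressive measurements
x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the noiseless, sufficiently sparse regime sketched here, OMP typically recovers the support exactly, so `err` is at machine-precision level.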

The aim of this study was to find the relationships between the degree of cord compression as seen on MRI, persisting cord atrophy after decompression, and patient outcomes in spinal meningiomas. We undertook a retrospective analysis of 31 patients' pre- and postoperative MRIs, preoperative functional status and outcomes at follow-up. The following metrics were analysed: percentage cord area at maximum compression, percentage tumour occupancy and percentage cord occupancy. These were then compared with outcome as measured by the Nurick scale. Of the 31 patients, 27 (87%) had thoracic meningiomas, 3 (10%) cervical and 1 (3%) cervicothoracic. The meningiomas were pathologically classified as grade 1 (29) or grade 2 (2) according to the WHO classification. The average remaining cord cross-sectional area was 61% of the estimated original value. The average tumour occupancy of the canal was 72%. The average cord occupancy of the spinal canal at maximum compression was 20%. No correlation between cord cross-sectional area and Nurick scale was seen. On the postoperative scan, the average cord area had increased to 84%. No correlation was seen between this value and outcome. We found that cross-sectional area measurements on MRI scans have no obvious relationship with function before or after surgery. This is a basis for future research into the mechanism of cord recovery and other compressive cord conditions.

This project is designed to demonstrate in situ bioremediation of groundwater and sediment contaminated with chlorinated solvents. Indigenous microorganisms were stimulated to degrade TCE, PCE and their daughter products in situ by adding nutrients to the contaminated zone. In situ biodegradation is a highly attractive remediation technology because contaminants are destroyed, not simply moved elsewhere or immobilized, thus decreasing costs, risks, and time while increasing efficiency and public and regulatory acceptability. Bioremediation has been found to be among the least costly technologies in applications where it will work (Radian 1989). Subsurface soils and water adjacent to an abandoned process sewer line at the SRS have been found to have elevated levels of TCE (Marine and Bledsoe 1984). This area of subsurface and groundwater contamination is the focus of a current integrated demonstration of new remediation technologies utilizing horizontal wells. Bioremediation has the potential to enhance the performance of in situ air stripping as well as offering stand-alone remediation of this and other contaminated sites (Looney et al. 1991). Horizontal wells could also be used to enhance the recovery of groundwater contaminants for bioreactor conversion from deep or inaccessible areas (e.g., under buildings) and to enhance the distribution of nutrient or microbe additions in in situ bioremediation.

The class Mollicutes (trivial name "mycoplasma") is composed of wall-less bacteria with reduced genomes whose evolution was long thought to be driven only by gene losses. Recent evidence of massive horizontal gene transfer (HGT) within and across species provided a new frame to understand the successful adaptation of these minimal bacteria to a broad range of hosts. Mobile genetic elements are being identified in a growing number of mycoplasma species, but integrative and conjugative elements (ICEs) are emerging as pivotal in HGT. While sharing common traits with other bacterial ICEs, such as their chromosomal integration and the use of a type IV secretion system to mediate horizontal dissemination, mycoplasma ICEs (MICEs) revealed unique features: their chromosomal integration is totally random and driven by a DDE recombinase related to the Mutator-like superfamily. Mycoplasma conjugation is not restricted to ICE transmission, but also involves the transfer of large chromosomal fragments that generates progenies with mosaic genomes, nearly every position of the chromosome being mobile. Mycoplasmas have thus developed efficient ways to gain access to a considerable reservoir of genetic resources distributed among a vast number of species, expanding the concept of the minimal cell to the broader context of flowing information.

Here we show a simple mechanism by which changes in the rate of horizontal stirring by mesoscale ocean eddies can trigger or suppress plankton blooms and can lead to an abrupt change in the average plankton density. We consider a single-species phytoplankton model with logistic growth, grazing and a spatially non-uniform carrying capacity. The local dynamics have multiple steady states for some values of the carrying capacity, which can lead to localized blooms as fluid moves across regions with different properties. We show that for this model even small changes in the ratio of biological timescales to flow timescales can greatly enhance or reduce the global plankton productivity. Thus, this may be a possible mechanism by which changes in horizontal mixing can trigger plankton blooms or cause regime shifts in some oceanic regions. Comparison between the spatially distributed model and Lagrangian simulations that consider temporal fluctuations along fluid trajectories demonstrates that small-scale transport processes also play an important role in the development of plankton blooms, with a significant influence on global biomass.
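
The "multiple steady states" of the local dynamics can be illustrated with a logistic-growth model plus saturating grazing; the functional form and all parameter values below are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

# Illustrative local dynamics: dP/dt = r*P*(1 - P/K) - g*P**2/(h**2 + P**2)
r, K, g, h = 1.0, 10.0, 2.0, 1.0  # hypothetical parameter values

# Nonzero steady states satisfy r*(1 - P/K)*(h**2 + P**2) = g*P,
# i.e. the cubic  P**3 - K*P**2 + (h**2 + g*K/r)*P - K*h**2 = 0.
roots = np.roots([1.0, -K, h**2 + g * K / r, -K * h**2])
steady = sorted(p.real for p in roots if abs(p.imag) < 1e-9 and p.real > 0)
print(steady)  # three positive equilibria: low and high branches plus an unstable middle
```

For these parameters the cubic has three positive roots, so the local dynamics are bistable; varying K moves the system into and out of this bistable window, which is the ingredient the stirring mechanism exploits.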

A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

… nonlinear pulse propagation is governed by the nonlinear Schrödinger (NLS) equation [1]. … Optical pulse compression finds important applications in optical fibres. …
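
For reference, the NLS equation cited in this fragment is commonly written, in its standard lossless fibre-optics form (with pulse envelope A(z, T), group-velocity dispersion parameter β₂ and nonlinear coefficient γ):

```latex
i\,\frac{\partial A}{\partial z}
  - \frac{\beta_2}{2}\,\frac{\partial^2 A}{\partial T^2}
  + \gamma \lvert A \rvert^2 A = 0
```

In the anomalous-dispersion regime (β₂ < 0) with γ > 0 this equation supports solitons, the basis of soliton-effect pulse compression in optical fibres.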

Horizontal wells are of increasing interest in the oil and gas industry, as is evident from the increasing number of such wells being drilled. Horizontal well technology is used to improve production rates, notably in low-permeability formations; to capture reserves where a reservoir is not economic using non-horizontal wells; to manage breakthrough of sweep fluids and increase sweep efficiency; and to extend the areal reach from a single surface location, especially in offshore production. The types of horizontal wells, differentiated by how quickly the well becomes horizontal, are briefly outlined and a short history of horizontal wells is presented. Canadian accomplishments in this field are then described, including steerable drilling systems, measurement-while-drilling systems, management of hole drag and torque, and well completion techniques. About 25 horizontal wells are forecast to be drilled in Canada in 1989, indicating the favorable future of this technology. 2 figs., 5 tabs.

Maximum entropy deconvolution is presented for estimating receiver functions, with maximum entropy as the rule for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to derive the iterative formula of the error-prediction filter, from which the receiver function is estimated. During extrapolation the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy treatment of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
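
The Toeplitz/Levinson step can be sketched as the classical Levinson-Durbin recursion for the Yule-Walker equations; the autocorrelation values below are illustrative, not from the paper. Note the reflection coefficient |k| < 1, the condition the abstract credits for stability:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion for the Yule-Walker (Toeplitz) equations.
    r: autocorrelation values r[0..order].  Returns the prediction-error
    filter a = [1, a1, ..., a_order] and the final prediction error."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[j] * r[m - j] for j in range(1, m))
        k = -acc / err                      # reflection coefficient, |k| < 1
        a_prev = a.copy()
        for j in range(1, m):
            a[j] = a_prev[j] + k * a_prev[m - j]
        a[m] = k
        err *= (1.0 - k * k)                # prediction error shrinks each order
    return a, err

# Example autocorrelation sequence (illustrative values)
r = np.array([1.0, 0.5, 0.25, 0.1])
a, err = levinson_durbin(r, 3)

# Check against a direct Toeplitz solve of the Yule-Walker equations
T = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
direct = np.linalg.solve(T, -r[1:4])
print(np.allclose(a[1:], direct))  # True
```

The recursion solves the order-p Toeplitz system in O(p²) operations instead of the O(p³) of a general solver, which is why it is the standard tool for this filter design.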

In recent years the Lawrence Livermore National Laboratory (LLNL) has been conducting experiments that require pulsed high currents to be delivered into inductive loads. The loads fall into two categories: (1) pulsed high-field magnets and (2) the input stage of Magnetic Flux Compression Generators (MFCG). Three capacitor banks of increasing energy storage and controls sophistication have been designed and constructed to drive these loads. One bank was developed for the magnet-driving application (20 kV, ≈30 kJ maximum stored energy). Two banks were constructed as MFCG seed banks (12 kV, ≈43 kJ and 26 kV, ≈450 kJ). This paper describes the design of each bank, including switching, controls, circuit protection and safety.

We have used spatially resolved micro Raman spectroscopy to map the full width at half maximum (FWHM) of the graphene G-band and the 2D and G peak positions, for as-grown graphene on copper catalyst layers, for transferred CVD graphene and for micromechanically exfoliated graphene, in order to characterize the effects of a transfer process on graphene properties. Here we use the FWHM(G) as an indicator of the doping level of graphene, and the ratio of the shifts in the 2D and G bands as an indicator of strain. We find that the transfer process introduces an isotropic, spatially uniform, compressive strain in graphene, and increases the carrier concentration.

This document presents a comparative analysis of a turbocharged compression-ignition engine equipped with an EGR valve, operating with injection into its intake manifold of a maximum flow rate of 1 L/min of oxyhydrogen produced by water electrolysis, at two different injection pressures, namely 100 Pa and 3000 Pa, from the point of view of flue gas opacity. We found a substantial reduction of flue gas opacity in both cases compared to conventional diesel operation, but in different proportions.

In an experimental test using Waspmotes, the fixed-variable variant reduced power consumption by 56.58% while introducing a maximum error of ±0.00195 g and compressing the number of samples by 52.44%. This algorithm increased the network energy autonomy from 17 hours to 26.5 hours. Mathematical analysis shows that the variable-fixed technique reduces the power consumption of sensing-node transmissions by 74.81% and decreases the number of samples by 90%.

In this paper, an attempt was made to assess the effectiveness of finger jointing for utilising mango wood sections for various end uses such as furniture. The study was based on estimating the modulus of elasticity and modulus of rupture under static bending, and the maximum crushing stress and modulus of elasticity under compression parallel to the grain, of finger-jointed sections, and comparing them with the values measured for clear wood sections from the same lot. For joining the sections, the Poly...

The standard Indian PHWR incorporates a pressure-suppression type of containment system with a suppression pool. The design of the KAPS (Kakrapar Atomic Power Station) suppression pool system adopts a modified system of downcomers with horizontal vents, as compared to the vertical vents of NAPS (Narora Atomic Power Station). Hydrodynamic studies for vertical vents have been reported earlier; this paper presents hydrodynamic studies for the horizontal vent system during a LOCA. These studies include the phenomenon of vent clearing (where the water slug initially standing in the downcomer is injected into the wetwell due to rapid pressurization of the drywell) followed by pool swell (elevation of the pool water due to bubbles formed as the air mass entering the pool exits the horizontal vents from the drywell). The analysis performed for vent clearing and pool swell is based on rigorous thermal-hydraulic calculation consisting of conservation of air-steam mixture mass, momentum and thermal energy, and of air mass. The horizontal vent of the downcomer is modelled such that, during steam-air flow, the variation of flow area due to the oscillating water surface in the downcomer can be considered. The calculation predicts that the vent clears in about 1.0 second, with a corresponding downward slug velocity in the downcomer of 4.61 m/sec. The maximum pool swell for a conservative lateral expansion is calculated to be 0.56 m. (author). 3 refs., 12 figs

Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One technique used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are found by maximizing the power equation using differentiation. After the maximum values are found for each time of day, the voltage at maximum power, the current at maximum power, and the maximum power are each plotted as a function of the time of day.
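
The maximum-power calculation described above can be sketched numerically with a single-diode cell model; the I-V relation and every parameter value below are hypothetical illustrations, not from the project:

```python
import numpy as np

# Illustrative single-diode model:  I(V) = Isc - I0*(exp(V/(n*Vt)) - 1)
# Isc: short-circuit current, I0: saturation current,
# n: ideality factor, Vt: thermal voltage (all values hypothetical).
Isc, I0, n, Vt = 3.0, 1e-9, 1.5, 0.02585

V = np.linspace(0.0, 0.9, 20001)
I = Isc - I0 * (np.exp(V / (n * Vt)) - 1.0)
P = V * I                                   # power curve to be maximized
i_mp = int(np.argmax(P))
V_mp, I_mp, P_mp = V[i_mp], I[i_mp], P[i_mp]
```

A dense grid search stands in for the analytic differentiation used in the project: the maximum-power condition dP/dV = I + V·dI/dV = 0 is transcendental for this model, so it is usually solved numerically.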

At the forefront of computerized radiology and the filmless hospital are the possibilities for easy image retrieval, efficient storage, and rapid image communication. This paper represents the authors' continuing efforts in compression research on full-frame discrete wavelet (FFDWT) and full-frame discrete cosine (FFDCT) transforms for medical image compression. Prior to coding, it is important to evaluate the global entropy in the decomposed space, because it is at minimum entropy that maximum compression efficiency can be achieved. In this study, each image was split into its top three most significant bits (3MSB) and the remaining remapped least significant bits (RLSB). The 3MSB image was compressed by error-free contour coding, achieving an average of 0.1 bit/pixel. The RLSB image was transformed to either a multi-channel wavelet or a cosine transform domain for entropy evaluation. Ten x-ray chest radiographs and ten mammograms were randomly selected from our clinical database and used for the study. Our results indicated that the coding scheme in the FFDCT domain performed better than in the FFDWT domain for high-resolution digital chest radiographs and mammograms. From this study, we found that decomposition efficiency in the DCT domain is higher than in the DWT domain for relatively smooth images, whereas both schemes worked equally well for low-resolution digital images. We also found that the image characteristics of the 'Lena' image commonly used in the compression literature are very different from those of radiological images; the compression outcome for radiological images cannot be extrapolated from compression results based on 'Lena.'
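
The entropy criterion invoked above (lower entropy in the decomposed space permits higher compression efficiency) can be illustrated with a first-order Shannon entropy estimate. This histogram-based sketch on synthetic data is a simplification of the authors' decomposed-space evaluation:

```python
import numpy as np

def shannon_entropy(values, bins=256):
    """First-order Shannon entropy (bits/sample) of a data array,
    estimated from a histogram with the given number of bins."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
flat = rng.integers(0, 256, 100_000)    # noise-like data: near-maximal entropy
peaked = rng.normal(128, 2, 100_000)    # concentrated data: lower entropy
print(shannon_entropy(flat) > shannon_entropy(peaked))  # True
```

An effective decomposition concentrates coefficient values (like `peaked` above), lowering the entropy bound on the achievable bit rate; noise-like data approach the 8 bits/sample ceiling of a 256-bin histogram.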

Purpose. Horizontal settlers are among the most important elements in the technological scheme of water purification; their use is associated with the ability to pass a sufficiently large volume of water. An important task at the design stage is evaluating their effectiveness. The efficiency of a settler can be calculated by mathematical modelling. The empirical and analytical models and techniques currently used for this problem do not take into account the shape of the settler and various design features, which significantly affects the correctness of decisions on the size of the settling tank and its design. Analytical models are limited to one-dimensional solutions and cannot account for a non-uniform velocity field of the flow in the settler, while applying advanced turbulence models to the hydrodynamics of complex settler geometries currently requires very powerful computers, and the calculation of one settler variant may last dozens of hours. The aim of the paper is to build a numerical model to evaluate the effectiveness of a horizontal settling tank of modified design. Methodology. The numerical models are based on: (1) the potential flow equation; (2) the equation of inviscid vortex flow; (3) the equations of viscous fluid dynamics; and (4) the mass-transfer equation. Finite-difference schemes are used for the numerical simulation, which is carried out on a rectangular grid; markers are used to form the computational domain. Findings. The models allow calculating the clarification process in settlers of different forms and with different configurations of baffles. Originality. A new approach to investigating the mass-transfer process in a horizontal settler, based on the developed CFD models, was proposed; three fluid dynamics models were used for the numerical investigation of flows and wastewater purification.

An array of sensors is receiving radiation from a source of interest. The source and the array are in a one- or two-dimensional waveguide. The maximum-likelihood estimators of the coordinates of the source are analyzed under the assumption that the noise field is Gaussian. The Cramer-Rao lower bound is of the order of the number of modes which define the source excitation function. The results show that the accuracy of the maximum-likelihood estimator of source depth using a vertical array in an infinite horizontal waveguide (such as the ocean) is limited by the number of modes detected by the array, regardless of the array size.

Because some designers of aerosol transport systems use the assumption that aerosol penetration through a system is maximized if the flow Reynolds number is 2,800, we have conducted tests to determine whether such an assumption is appropriate. Although we do not believe that optimal performance of an aerosol sample transport system can be characterized solely in terms of the Reynolds number, we have presented our results in terms of that parameter to compare our work with the results of an earlier study. Two types of experiments were performed. First, the penetration of liquid aerosol particles through horizontal tubes was experimentally investigated for a range of design and operational conditions. For a particle size of 10 μm aerodynamic diameter, the maximum penetration through a 6.7 mm diameter tube was associated with a Reynolds number of approximately 2,000; the maximum penetration through a tube of 15.9 mm occurred at a Reynolds number of about 3,000; and the maximum penetration through a 26.7 mm diameter tube occurred at about 4,000. It was also experimentally demonstrated that for a fixed flow rate through a horizontal tube, there is an optimum tube diameter for which the aerosol penetration is a maximum. An earlier study dealing with aerosol particle penetration through a 16.8 mm inside-diameter loop of tubing (two vertical tubes, two horizontal tubes and three 90° bends) suggested there was a fixed Reynolds number for optimal aerosol penetration, independent of particle size. Those experiments were repeated here and the agreement with those tests is excellent. 16 refs., 8 figs., 3 tabs
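
The coupling between tube diameter and Reynolds number at fixed flow rate follows directly from Re = 4Q/(π ν D). A minimal sketch, using an assumed kinematic viscosity for room-temperature air and a hypothetical flow rate (the paper's flow rates are not given here):

```python
import math

def reynolds_number(Q_lpm, D_mm, nu=1.5e-5):
    """Pipe-flow Reynolds number Re = 4*Q/(pi*nu*D) for volumetric flow
    rate Q (litres/min) and tube diameter D (mm); nu is the kinematic
    viscosity of air (~1.5e-5 m^2/s at room temperature, assumed)."""
    Q = Q_lpm / 1000.0 / 60.0   # convert L/min to m^3/s
    D = D_mm / 1000.0           # convert mm to m
    return 4.0 * Q / (math.pi * nu * D)

# Hypothetical fixed flow rate of 100 L/min through the three tube sizes studied
for d_mm in (6.7, 15.9, 26.7):
    print(d_mm, round(reynolds_number(100.0, d_mm)))
```

Because Re scales as 1/D at fixed Q, choosing the tube diameter amounts to choosing the operating Reynolds number, which is why an "optimum diameter" and an "optimum Reynolds number" cannot be treated independently.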

An MWT (miniature wind turbine) has received great attention recently for powering WISPs (Wireless Intelligent Sensor Platforms). In this study, two MHAWTs (miniature horizontal-axis wind turbines), with and without gear transmission, were designed and fabricated. A physics-based model was proposed and the optimal load resistances of the MHAWTs were predicted. The open-circuit voltages, output powers and net efficiencies were measured under various ambient winds and load resistances. The experimental results showed that the optimal load resistances matched well with the predicted values; the MHAWT without gears obtained higher output power at wind speeds of 2 m/s to 6 m/s, while the geared MHAWT exhibited better performance at wind speeds above 6 m/s. In addition, a DCM (discontinuous conduction mode) buck-boost converter was adopted as an interface circuit to maximize the charging power from the MHAWTs to rechargeable batteries, exhibiting maximum efficiencies above 85%. The charging power reached about 8 mW and 36 mW at wind speeds of 4 m/s and 6 m/s respectively, indicating that the MHAWTs are capable of harvesting sufficient energy to power low-power electronics continuously. - Highlights: • Performance of the miniature wind turbines with and without gears was compared. • The physics-based model was established and validated. • The interface circuit with efficiency of more than 85% was designed

This paper describes a double compression method (DCM) for biomedical images. A comparison of the compression factors achieved by JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

In this paper the capability of the five- and six-equation models of the simulation code APROS to simulate the behaviour of the horizontal steam generator is discussed. Different nodalizations are used in the modelling and the results of the stationary state runs are compared. Exactly the same nodalizations have been created for the five- and six-equation models. The main simulation results studied in this paper are void fraction and mass flow distributions in the secondary side of the steam generator. It was found that quite a large number of simulation volumes is required to simulate the distributions with a reasonable accuracy. The simulation results of the different models are presented and their validity is discussed. (orig.). 4 refs.

Aiming at the requirements for modification of many old imported machine tools in industry, schemes suited to such renovation are presented in this paper. A horizontal boring and milling machine (HBM) involved in machining of the Al-Khalid tank has been modified using Mitsubishi FX-1N and FX-2N PLCs. The developed software controls all the functions of the said machine, including switching the oil pump on and off, spindle rotation and machine movement along all axes. All decisions required by the machine for actuation of instructions are based on data acquired from the control panel, timers and limit switches. The developed software also minimizes downtime, ensures operator safety and provides error-free actuation of instructions. (author)

Full Text Available The mixed convection flow past a horizontal plate aligned at a small angle of attack to a uniform free stream is considered in the limit of large Reynolds number and small Richardson number. Even a small angle of inclination of the wake is sufficient for the buoyancy force to accelerate the flow in the wake, which causes a velocity overshoot there. Moreover, a hydrostatic pressure difference across the wake induces a correction to the potential flow which influences the inclination of the wake. Thus the wake and the correction of the potential flow have to be determined simultaneously. However, it turns out that solutions exist only if the angle of attack is sufficiently large. Solutions are computed numerically and the influence of the buoyancy on the lift coefficient is determined.

A horizontal dilution refrigerator was constructed with a view to the spin-frozen target and the deuteron polarized target. High cooling power at relatively high temperature, 3.7 mW at 400 mK, serves to overcome the microwave heat load applied to polarize the nuclear spins in the target material. The cooling power at 50 mK was 50 μW, which is sufficient to hold the high nuclear polarization for a long time. The lowest temperature reached was 26 mK. The refrigerator has rather simple heat exchangers: a long stainless steel double-tube heat exchanger and two coaxial-type heat exchangers with sintered copper. The mixing chamber is made of polytetrafluoroethylene (PTFE) and is demountable so that the target material can be easily put into it. (Auth.)

An analysis is carried out of the spread of a flame along a horizontal solid fuel rod, for which a weak aiding natural convection flow is established in the underside of the rod by the action of the axial gradient of the pressure variation that gravity generates in the warm gas surrounding the flame. The spread rate is determined in the limit of infinitely fast kinetics, taking into account the effect of radiative losses from the solid surface. The effect of a small inclination of the rod is discussed, pointing out a continuous transition between upward and downward flame spread. Flame spread along flat-bottomed solid cylinders, for which the gradient of the hydrostatically generated pressure drives the flow both along and across the direction of flame propagation, is also analysed.

A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)
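
The paper's functional form is not given in the abstract; the sketch below shows only the general shape such an estimate can take, combining the empirical t^-1.2 (Way-Wigner) decay rule for mixed fission products with a debris cloud that dilutes as it spreads horizontally. The cylindrical-cloud geometry and all parameter choices are illustrative assumptions, not the paper's model:

```python
import math

def max_credible_conc(a1_bq, t_days, r0_km, spread_km_per_day, dz_km):
    """Hedged sketch of a maximum credible gross beta air concentration.
    a1_bq: total gross beta activity at 1 day after detonation (Bq);
    decay follows the empirical Way-Wigner t**-1.2 rule; the debris is
    assumed uniformly mixed in a cylinder of radius r0 + spread*t and
    vertical thickness dz (illustrative geometry). Returns Bq/m^3."""
    activity = a1_bq * t_days ** -1.2
    radius_m = (r0_km + spread_km_per_day * t_days) * 1e3
    volume_m3 = math.pi * radius_m ** 2 * (dz_km * 1e3)
    return activity / volume_m3
```

The concentration falls with time both through radioactive decay and through the growing horizontal extent of the cloud, matching the abstract's 1- to 10-day window of applicability.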

The test consisted of methane mixed with air into the contaminated aquifer via a horizontal well and extraction from the vadose zone via a parallel horizontal well. This configuration has the advantage of simultaneously stimulating methanotrophic activity in both the groundwater and vadose zone, and inhibiting spread of the contaminant plume. Groundwater was monitored biweekly from 13 wells for a variety of chemical and microbiological parameters. Groundwater from wells in affected areas showed increases in methanotrophs of more than 1 order of magnitude every 2 weeks for several weeks after 1% methane-in-air injection was started. Some wells had increases as much as 7 orders of magnitude. Simultaneous with the increase in methanotrophs was a decrease in water and soil gas concentrations of trichloroethylene (TCE) and tetrachloroethane (PCE). Two wells declined in TCE/PCE concentration in the water by more than 90% to below 2 ppb. All of the wells in the affected zone showed significant decreases in contaminants in less than one month. Chloride concentrations in the water were inversely correlated with TCE/PCE concentration. Four of five vadose zone piezometers declined from concentration as high as 10,000 ppm to less than 5 ppm in less than 6 weeks. The fifth cluster also declined by more than 95%. After only three months on injection, a decline in TCE/PCE in the sediment of more than 30% was also observed, with TCE/PCE being undetectable in most sediments at the end of the 14-month test. Gene probes and direct isolation from the water and sediment revealed that the right types of methanotrophs were being stimulated and that isolates could degrade TCE at a high rate

The best compression paddle position during air kerma measurement in mammography dosimetry was studied. The amount of forward scattering as a function of the compression paddle distance was measured with different X-ray spectra and different types of paddles and dose meters. The contribution of forward scattering to the air kerma did not show significant dependence on the beam quality or on the compression paddle type. The tested dose meter types detected different amounts of forward scattering due to different internal collimation. When the paddle was adjusted to its maximum clinical distance, the proportion of the detected forward scattering was only 1% for all dose meter types. The most consistent way of performing air kerma measurements is to position the compression paddle at the maximum distance from the dose meter and use a constant forward scattering factor for all dose meters. Thus, the dosimetric uncertainty due to the forward scatter can be minimised. (authors)

The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case, image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications
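
The perceptual optimization itself depends on psychophysical data not given here, but the mechanical role a DCT quantization matrix plays is easy to illustrate. The sketch below uses the informative luminance table from Annex K of the JPEG standard; the `scale` factor is an illustrative stand-in for the viewing-condition-dependent adjustment the formula described above would compute:

```python
import numpy as np

# Informative luminance quantization table from Annex K of the JPEG standard.
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block, q=Q_LUMA, scale=1.0):
    """Divide 8x8 DCT coefficients by the (scaled) quantization matrix
    and round; larger entries discard more high-frequency detail."""
    return np.round(dct_block / (q * scale)).astype(int)

def dequantize(levels, q=Q_LUMA, scale=1.0):
    """Approximate reconstruction of the DCT coefficients."""
    return levels * (q * scale)
```

Raising `scale` coarsens every frequency band uniformly; the perceptual approach instead reshapes the matrix itself so that the quantization error stays below the visibility threshold for the given viewing conditions.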

The NiTi alloy can be trained by repetitive loading or heating cycles. As a result of the training, a two-way shape memory effect (TWSME) can be induced. Considerable research has been reported regarding the TWSME trained by tensile loading. However, the TWSME trained by compressive loading has not been investigated nearly as much. In this paper, the TWSME is induced by compressive loading cycles and the two-way shape memory strain is evaluated by using two types of specimen: a solid cylinder type and a tube type. The TWSME trained by compressive loading is different from that trained by tensile loading owing to the severe tension/compression asymmetry as described in previous research. After repetitive compressive loading cycles, strain variation upon cooling is observed, and this result proves that the TWSME is induced by compressive loading cycles. By performing compressive loading cycles, plastic deformation in NiTi alloy occurs more than for tensile loading cycles, which brings about the appearance of TWSME. It can be said that the TWSME is induced by compressive loading cycles more easily. The two-way shape memory strain increases linearly as the maximum strain of compressive loading cycles increases, regardless of the shape and the size of the NiTi alloy; this two-way shape memory strain then shows a tendency towards saturation after some repeated cycles

The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

Full Text Available In recent years, great progress has been made in the geologic evaluation, engineering testing and development optimization of the Upper Ordovician Wufeng Fm–Lower Silurian Longmaxi Fm shale gas in the Sichuan Basin, and the main shale gas exploitation technologies have been understood preliminarily. In addition, scale productivity construction has been completed in the Jiaoshiba, Changning and Weiyuan blocks. In this paper, the Wufeng Fm–Longmaxi Fm shale gas wells in the Changning Block were taken as the study object to provide a technical reference for the development design of similar shale-gas horizontal wells. A technology combining geology with engineering, dynamic with static data, and statistical analysis with simulation prediction was applied to quantify the main factors controlling shale-gas well productivity, develop a shale-gas well production prediction model, and optimize the key technical parameters of the geologic targets of shale-gas horizontal wells in the block (e.g. roadway orientation, location and spacing, horizontal section length and gas well production index). In order to realize high productivity of shale gas wells, it is necessary to maximize the included angle between the horizontal section orientation and the maximum principal stress and fracture development direction, deploy the horizontal-well roadway in top-quality shale layers, and drill the horizontal section in type I reservoirs over 1000 m long. It is concluded that high productivity of shale gas wells is guaranteed by horizontal-well wellbore integrity and the optimized low-viscosity slickwater and ceramsite fracturing technology for complex fracture creation. Based on the research results, the technical policies for shale gas development of the Changning Block are prepared, and guidance and reference are provided for the shale gas development and productivity construction in the block and the development design of similar shale-gas horizontal wells.

Highlights: • We examine three kinds of tapes' maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • We examine the relationship between maximum permissible voltage and resistance and temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of CC needed in the design of an SFCL can be determined.

Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block-cipher-based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...
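
For concreteness, one of the 12 provably collision-resistant PGV schemes is the Davies-Meyer construction f(h, m) = E_m(h) XOR h. The sketch below uses a toy Feistel permutation as a stand-in for the idealized block cipher E; the toy cipher is purely illustrative and has no cryptographic value:

```python
import hashlib

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    """Stand-in 16-byte block cipher: a 4-round Feistel network whose
    round function is truncated SHA-256. Illustrative only, NOT secure."""
    left, right = block[:8], block[8:]
    for rnd in range(4):
        f = hashlib.sha256(key + bytes([rnd]) + right).digest()[:8]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right

def davies_meyer(h: bytes, m: bytes) -> bytes:
    """One PGV compression function (Davies-Meyer): f(h, m) = E_m(h) ^ h,
    where the message block m is used as the cipher key."""
    return bytes(a ^ b for a, b in zip(toy_block_cipher(m, h), h))
```

Iterating such a compression function over padded message blocks (Merkle-Damgård) yields a hash function whose collision resistance, in the ideal cipher model, reduces to the cipher's ideality.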

Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with as representative example the Probabilistic XML model (PXML) of [10,9]. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained with a combination of a PXML-specific technique and a rather simple generic DAG-compression technique.
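
A generic DAG-compression of the kind referred to can be sketched by hash-consing: every distinct subtree is stored exactly once, and repeated subtrees become shared references. A minimal sketch on (label, children) tuples standing in for XML elements:

```python
def dag_compress(tree, table=None):
    """Return a canonical node for `tree`, storing each distinct
    subtree exactly once in `table` (hash-consing)."""
    if table is None:
        table = {}
    label, children = tree
    key = (label, tuple(dag_compress(c, table) for c in children))
    return table.setdefault(key, key)

def tree_size(tree):
    """Number of nodes in the uncompressed tree."""
    label, children = tree
    return 1 + sum(tree_size(c) for c in children)
```

A document `<a><b/><b/></a>` has three tree nodes but only two distinct subtrees, so its DAG has two nodes; the savings grow with the amount of repeated structure, which probabilistic documents that enumerate alternative possibilities tend to have in abundance.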

These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a major radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, temperature increase of a factor 3, and current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed is presented and discussed. (U.S.)

Data compression techniques involve transforming data of a given format, called the source message, to data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, to another format, called ciphertext, using an encryption key or keys. Combining the processes of compression and encryption must therefore be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies which are inherently a part of regular text or speech. The aim of this research is to combine compression (using an existing scheme) with a new encryption scheme that is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
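
The TR-One scheme itself is not specified in the abstract, but the compress-then-encrypt ordering it relies on can be sketched generically. The SHA-256 counter keystream below is a toy stand-in for a real cipher, not the authors' scheme:

```python
import zlib, hashlib

def keystream(key: bytes):
    """Toy keystream: counter-mode SHA-256. Illustrative only, NOT secure."""
    counter = 0
    while True:
        yield from hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
        counter += 1

def compress_then_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Compression must come first: ciphertext is redundancy-free by
    # design, so encrypting first would leave nothing to compress.
    compressed = zlib.compress(plaintext)
    return bytes(b ^ k for b, k in zip(compressed, keystream(key)))

def decrypt_then_decompress(ciphertext: bytes, key: bytes) -> bytes:
    # Inverse pipeline: XOR with the same keystream, then decompress.
    return zlib.decompress(bytes(b ^ k for b, k in zip(ciphertext, keystream(key))))
```

Running the pipeline on repetitive text yields a ciphertext far shorter than the plaintext; swapping the two stages would produce output essentially the same size as the input.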

The objective of radiologic image compression is to reduce the data volume of, and to achieve a low bit rate in, the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

Vapor extraction wells used for site remediation of volatile organic chemicals in the vadose zone are typically vertical wells. Over the past few years, there has been an increased interest in horizontal wells for environmental remediation. Despite the interest and potential benefits of horizontal wells, there has been little study of the relative performance of horizontal and vertical vapor extraction wells. This study uses numerical simulations to investigate the relative performance of horizontal versus vertical vapor extraction wells under a variety of conditions. The most significant conclusion that can be drawn from this study is that in a homogeneous medium, a single, horizontal vapor extraction well outperforms a single, vertical vapor extraction well (with surface capping) only for long, linear plumes. Guidelines are presented regarding the use of horizontal wells

The limited penetrable horizontal visibility graph algorithm was recently introduced to map time series into complex networks. In this work, we extend this algorithm to create a directed-limited penetrable horizontal visibility graph and an image-limited penetrable horizontal visibility graph. We define two algorithms and provide theoretical results on the topological properties of these graphs associated with different types of real-value series. We perform several numerical simulations to check the accuracy of our theoretical results. Finally, we present an application of the directed-limited penetrable horizontal visibility graph to measure real-value time series irreversibility and an application of the image-limited penetrable horizontal visibility graph that discriminates noise from chaos. We also propose a method to measure the systematic risk using the image-limited penetrable horizontal visibility graph, and the empirical results show the effectiveness of our proposed algorithms.
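
The construction follows directly from its definition: in the limited penetrable variant with penetrable distance ρ, two points are linked if no more than ρ intermediate points block the horizontal line of sight between them, and ρ = 0 recovers the ordinary horizontal visibility graph. A brute-force sketch:

```python
def lphvg_edges(series, rho=0):
    """Limited penetrable horizontal visibility graph: points i < j are
    linked if at most `rho` intermediate points k satisfy
    y_k >= min(y_i, y_j), i.e. block the horizontal sight line."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            blockers = sum(1 for k in range(i + 1, j)
                           if series[k] >= min(series[i], series[j]))
            if blockers <= rho:
                edges.add((i, j))
    return edges
```

For the series [3, 1, 2, 4], the ordinary HVG (ρ = 0) omits the pair (1, 3) because the point of height 2 blocks it; allowing one penetration (ρ = 1) restores that edge, so the LPHVG edge set always contains the HVG edge set.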

The measured data of global and diffuse solar radiation on a horizontal surface, the number of bright sunshine hours, mean daily ambient temperature, maximum and minimum ambient temperatures, relative humidity and amount of cloud cover for Jeddah (lat. 21°42′37″N, long. 39°11′12″E), Saudi Arabia, during the period 1996-2007 are analyzed. The monthly averages of daily values for these meteorological variables have been calculated. The data are then divided into two sets. Sub-data set I (1996-2004) is employed to develop empirical correlations between the monthly average daily global solar radiation fraction (H/H0) and the various weather parameters. Sub-data set II (2005-2007) is then used to evaluate the derived correlations. Furthermore, the total solar radiation on horizontal surfaces is separated into its beam and diffuse components. Empirical correlations for estimating the diffuse solar radiation incident on horizontal surfaces have been proposed. The total solar radiation incident on a south-facing tilted surface Ht with different tilt angles is then calculated using both the Liu and Jordan isotropic model and Klucher's anisotropic model. It is inferred that the isotropic model estimates Ht more accurately than the anisotropic one. At the optimum tilt angle, the maximum value of Ht is ∼36 MJ/(m² day), obtained during January. Comparisons with the 22-year average data of the NASA SSE model showed that the proposed correlations are able to predict the total annual energy on horizontal and tilted surfaces in Jeddah with reasonable accuracy. It is also found that at Jeddah, solar energy devices have to be tilted to face south with a tilt angle equal to the latitude of the place in order to achieve the best performance all year round.
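
The Liu and Jordan isotropic transposition used in the comparison combines three contributions: beam radiation scaled by the geometric tilt factor R_b, sky-diffuse radiation weighted by the tilted surface's view factor of the sky, and ground-reflected radiation weighted by its view factor of the ground. A minimal sketch; the default ground albedo of 0.2 is an illustrative assumption:

```python
import math

def tilted_total(h_beam, h_diff, r_b, beta_deg, albedo=0.2):
    """Isotropic (Liu-Jordan) irradiation on a tilted surface.
    h_beam, h_diff: beam and diffuse irradiation on the horizontal;
    r_b: beam tilt factor; beta_deg: surface tilt from horizontal;
    albedo: ground reflectance (0.2 is a common default assumption)."""
    beta = math.radians(beta_deg)
    beam = h_beam * r_b
    sky_diffuse = h_diff * (1 + math.cos(beta)) / 2
    ground_reflected = (h_beam + h_diff) * albedo * (1 - math.cos(beta)) / 2
    return beam + sky_diffuse + ground_reflected
```

At zero tilt the sky view factor is 1 and the ground term vanishes, so the expression collapses to the horizontal total, which is a convenient sanity check on any implementation.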

We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
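
The signal-to-noise eigenvector basis singled out above can be sketched with a generalized eigendecomposition: whiten the data by the noise covariance, diagonalize the whitened signal covariance, and keep only the highest-S/N modes. A minimal sketch (the covariance matrices here are illustrative, not WMAP's):

```python
import numpy as np

def sn_compress(d, S, N, n_modes):
    """Compress data vector d onto its top signal-to-noise eigenmodes,
    i.e. solve the generalized problem S v = lambda N v via noise
    whitening. Returns the compressed coefficients and the basis B."""
    L = np.linalg.cholesky(N)          # N = L L^T
    Linv = np.linalg.inv(L)
    M = Linv @ S @ Linv.T              # whitened signal covariance
    w, V = np.linalg.eigh(M)           # eigenvalues in ascending order
    B = Linv.T @ V[:, ::-1][:, :n_modes]   # top-S/N modes, data space
    return B.T @ d, B
```

Discarded low-S/N modes carry almost no cosmological information, which is why a few thousand modes can replace the full pixel set at a small, quantifiable cost in power spectrum accuracy.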

The effectiveness of including a horizontal rebar compared to only a vertical rebar in concrete filled core interlocking concrete block retaining wall sections was investigated with respect to the horizontal retaining force. Experimental results for three specimens of interlocking blocks with vertical rebar and concrete filled cores showed an average horizontal retaining force of 24546 N ± 5.7% at an average wall deflection of 13.3 mm. Experimental results for three wall specimens of interloc...

The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to identify the main behaviors of a system and describe them approximately. The rules which describe the system in the first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained surface by describing the changes needed on the first surface to capture the real behaviors of the system. The rules used in the second stage are called the vertical rule base. Horizontal...

Background: A fundamental concept in biology is that heritable material is passed from parents to offspring, a process called vertical gene transfer. An alternative mechanism of gene acquisition is horizontal gene transfer (HGT), which involves movement of genetic material between different species. Horizontal gene transfer has been found to be prevalent in prokaryotes but very rare in eukaryotes. In this paper, we investigate horizontal gene transfer in the human genome. Results: From the pa...

As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage of large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes on which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code

Prediction of the chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one … cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable … in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...

A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, the repeated use of the technique permits the suppression of a multiple number of different constants. Of particular interest is the application of the constant suppression technique to databases the composite key of which is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme
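
The scheme described can be sketched as follows: store only the exceptions to the suppressed constant, sorted by index, and answer point queries with a binary search, which gives the logarithmic access time mentioned above. The class below is a minimal sketch of that idea, not the paper's exact storage layout:

```python
import bisect

class ConstantSuppressed:
    """Constant suppression: keep (index, value) pairs only for entries
    that differ from the suppressed constant; point access is a binary
    search over the sorted exception indices, O(log n)."""

    def __init__(self, data, constant=0):
        self.constant = constant
        self.length = len(data)
        self.idx = [i for i, v in enumerate(data) if v != constant]
        self.val = [data[i] for i in self.idx]

    def __getitem__(self, i):
        j = bisect.bisect_left(self.idx, i)
        if j < len(self.idx) and self.idx[j] == i:
            return self.val[j]
        return self.constant
```

As the abstract notes, the savings are greatest for stable databases in which the constant is heavily clustered, and the scheme can be applied repeatedly to suppress several different constants in turn.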

Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case, comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s² = 0.9c² when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.

The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain the uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

Compression of pulsed Nd : glass laser radiation under stimulated Brillouin scattering (SBS) in perfluorooctane is investigated. Compression of 16-ns pulses at a beam diameter of 30 mm is implemented. The maximum compression coefficient is 28 in the optimal range of laser pulse energies from 2 to 4 J. The Stokes pulse power exceeds that of the initial laser pulse by a factor of about 11.5. The Stokes pulse jitter (fluctuations of the Stokes pulse exit time from the compressor) is studied. The rms spread of these fluctuations is found to be 0.85 ns.

The frequency of spondylolysis and the relationship between spondylolysis and the sacro-horizontal angle in 143 athletes and 30 non-athletes is reported. Athletes had a larger sacro-horizontal angle than non-athletes. The sacro-horizontal angle was larger in athletes with spondylolysis as compared with those without. An increased incidence of spondylolysis with an increased angle was demonstrated. It is suggested that an increased sacro-horizontal angle may predispose to spondylolysis, especially in combination with the high mechanical loads sustained in certain sports. (orig.)

The present TESLA damping ring is designed for a normalized horizontal emittance of 8×10^-6 m. γ-γ collisions at the TESLA linear collider will benefit from a further decrease of the horizontal emittance. This paper reviews the processes which limit the horizontal emittance in the damping ring. Preliminary estimates of the smallest horizontal emittance for the present TESLA damping ring design, as well as an ultimate limit of the emittance reachable with the TESLA damping ring concept, are given.

Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study explored the relationship among the quality indexes of chest compression and identified the key points in chest compression training and practice. In total, 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hand placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females; however, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other, and the self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for female or weaker practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
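The projected-gradient approach can be illustrated on a single qubit; the sketch below makes several assumptions not fixed by the abstract (Nesterov-style momentum, a six-outcome Pauli POVM, a fixed step size) and is not the authors' exact scheme. The key ingredient is the projection onto the set of density matrices, done by projecting the eigenvalues onto the probability simplex.

```python
import numpy as np

def project_to_density(H):
    """Project a Hermitian matrix onto {rho >= 0, tr rho = 1}
    by projecting its eigenvalues onto the probability simplex."""
    w, U = np.linalg.eigh((H + H.conj().T) / 2)
    s = np.sort(w)[::-1]
    css = np.cumsum(s)
    k = np.nonzero(s - (css - 1) / np.arange(1, len(s) + 1) > 0)[0][-1]
    theta = (css[k] - 1) / (k + 1)
    w = np.maximum(w - theta, 0)
    return (U * w) @ U.conj().T

def mle_tomography(povm, freqs, dim, steps=2000, lr=0.1):
    """Accelerated projected-gradient descent on the negative
    log-likelihood -sum_k f_k log tr(E_k rho); a sketch."""
    rho = np.eye(dim, dtype=complex) / dim
    prev = rho.copy()
    for t in range(1, steps + 1):
        y = rho + (t - 1) / (t + 2) * (rho - prev)   # momentum step
        p = np.real(np.array([np.trace(E @ y) for E in povm]))
        grad = -sum(f / max(pk, 1e-12) * E
                    for f, pk, E in zip(freqs, p, povm))
        prev, rho = rho, project_to_density(y - lr * grad)
    return rho
```

With an informationally complete POVM and noiseless frequencies, the iteration recovers the true state to within the step-size-limited accuracy.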

The Department of Energy's Office of Technology Development initiated an integrated demonstration of innovative technologies and systems for cleanup of VOCs in soils and groundwater at the Savannah River Site (SRS) in 1989. The overall goal of the program is demonstration of multiple technologies and systems in the fields of drilling, characterization, monitoring, and remediation at a single test bed. Innovative technologies are compared to one another and to baseline technologies in terms of technical performance and cost effectiveness. Transfer of successfully demonstrated technologies and systems to DOE environmental restoration organizations, to other government agencies, and to industry is a critical part of the program. Directional drilling has been shown to be a successful technique for enhancing access to the subsurface, thus improving remediation systems, especially remediation systems which perform in situ. Demonstration of an innovative directional drilling system at the Integrated Demonstration Site at the SRS was initiated in June of 1992. The directional drilling system was designed to install an in situ remediation system. The drilling system is an experimental compaction/dry drilling technique developed by Charles Machine Works (Ditch Witch®) of Perry, Oklahoma. A horizontal well was installed in the M Area of the SRS below and parallel to an abandoned tile process sewer line. The installation of the horizontal well was a two-part process: part one consisted of drilling the borehole, and part two was the horizontal well completion.

boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

The resistance of a high-temperature gas-cooled reactor (HTGR) core with block-type fuels against earthquakes is not yet fully ascertained. Seismic studies must be made if such a reactor plant is to be installed in areas with frequent earthquakes. This paper presents test results on the seismic behavior of a half-scale two-dimensional horizontal slice core model, together with analysis. The more important results are summarized as follows. (1) When the core is subjected to single-axis excitation or simultaneous two-axis excitation across-corners, it exhibits elliptical motion; the core stays in lumped motion at low excitation frequencies. (2) When a load is applied to the side fixed reflector blocks from outside toward the core center, the core displacement and the reflector impact reaction force decrease. (3) The maximum displacement occurs at simultaneous two-axis excitations; among the single-axis tests, the maximum displacement occurs for excitation to the core across-flats. (4) The results of the two-dimensional horizontal slice core model were compared with those of the two-dimensional vertical one; it is clarified that the seismic response of an actual core can be predicted from the results of the two-dimensional vertical slice core model. (5) The maximum reflector impact reaction force for seismic waves was below 60 percent of that for sinusoidal waves. (6) Vibration behavior and impact response are in good agreement between test and analysis. (author)

The results of an experimental investigation of the mechanical behavior of Borsic/aluminum are presented. Composite laminates were tested in tension and compression for monotonically increasing load and also for variable loading cycles in which the maximum load was increased in each successive cycle. It is shown that significant strain-hardening, and corresponding increase in yield stress, is exhibited by the metal matrix laminates. For matrix dominated laminates, the current yield stress is essentially identical to the previous maximum stress, and unloading is essentially linear with large permanent strains after unloading. For laminates with fiber dominated behavior, the yield stress increases with increase in the previous maximum stress, but the increase in yield stress does not keep pace with the previous maximum stress. These fiber dominated laminates exhibit smaller nonlinear strains, reversed nonlinear behavior during unloading, and smaller permanent strains after unloading. Compression results from sandwich beams and flat coupons are shown to differ considerably. Results from beam specimens tend to exhibit higher values for modulus, yield stress, and strength.

Electrocardiogram (ECG) compression finds wide application in various patient-monitoring settings. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely a bit-rate control (BRC) criterion and an error control (EC) criterion, are used to select the optimal principal components, eigenvectors and their quantization level to achieve the desired bit rate or error measure. The selected principal components and eigenvectors are finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia Database (mitdb) data and with 60 normal and 30 diagnostic ECG data sets from the PTB Diagnostic ECG Database (ptbdb), all sampled at 1 kHz. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root-mean-squared difference, normalized (PRDN), and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
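The error-controlled branch of such a scheme can be sketched as follows; this is an illustrative simplification (no quantization or entropy coding, synthetic beats), with all names ours rather than the paper's: keep the fewest principal components whose reconstruction stays under a PRDN limit.

```python
import numpy as np

def prdn(x, y):
    """Percentage RMS difference, normalized (mean-removed reference)."""
    return 100 * np.linalg.norm(x - y) / np.linalg.norm(x - x.mean())

def pca_compress(beats, prdn_limit=5.0):
    """Error-controlled PCA compression of a beats-by-samples matrix:
    keep the fewest components meeting the PRDN limit."""
    mu = beats.mean(axis=0)
    U, S, Vt = np.linalg.svd(beats - mu, full_matrices=False)
    for k in range(1, len(S) + 1):
        scores = U[:, :k] * S[:k]          # per-beat coefficients
        recon = scores @ Vt[:k] + mu       # reconstruction from k components
        if prdn(beats.ravel(), recon.ravel()) <= prdn_limit:
            return scores, Vt[:k], mu, k
    return U * S, Vt, mu, len(S)
```

Only the scores, the k basis vectors and the mean need to be stored, which is where the compression comes from.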

Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

The recent interest in using microorganisms for biofuels is motivation enough to study bioconvection and cell dispersion in tubes subject to imposed flow. To optimize light and nutrient uptake, many microorganisms swim in directions biased by environmental cues (e.g. phototaxis in algae and chemotaxis in bacteria). Such taxes inevitably lead to accumulations of cells, which, as many microorganisms have a density different to the fluid, can induce hydrodynamic instabilities. The large-scale fluid flow and spectacular patterns that arise are termed bioconvection. However, the extent to which bioconvection is affected or suppressed by an imposed fluid flow and how bioconvection influences the mean flow profile and cell transport are open questions. This experimental study is the first to address these issues by quantifying the patterns due to suspensions of the gravitactic and gyrotactic green biflagellate alga Chlamydomonas in horizontal tubes subject to an imposed flow. With no flow, the dependence of the dominant pattern wavelength at pattern onset on cell concentration is established for three different tube diameters. For small imposed flows, the vertical plumes of cells are observed merely to bow in the direction of flow. For sufficiently high flow rates, the plumes progressively fragment into piecewise linear diagonal plumes, unexpectedly inclined at constant angles and translating at fixed speeds. The pattern wavelength generally grows with flow rate, with transitions at critical rates that depend on concentration. Even at high imposed flow rates, bioconvection is not wholly suppressed and perturbs the flow field.

The purpose of this work is to develop an axisymmetric two-phase flow model describing the growth of a single bubble squeezed between a horizontal heated upward-facing disc and an insulating surface placed parallel to the heated surface. Heat transfer at the liquid-vapour interfaces is predicted by the kinetic limit of vaporisation. The thicknesses of the liquid films deposited on the surfaces (heated surface and confinement surface) are determined using the Moriyama and Inoue correlation (1996). Transient heat transfer within the heated wall is taken into account. The model is applied to pentane bubble growth. The influence of the gap size, the initial temperature of the system, the thermal effusivity of the heated wall and the kinetic limit of vaporisation are studied. The results show that the expansion of the bubbles strongly depends on the gap size and can be affected by the effusivity of the material. Mechanical inertia effects are dominant mainly at the beginning of the bubble expansion. The pressure drop induced by viscous effects has to be taken into account at high capillary numbers. Heat transfer at the meniscus is negligible except at the early stages of bubble growth. (author)

Designers of a horizontal axis wind turbine yaw mechanism are faced with a difficult decision. They know that if they elect to use a yaw-controlled rotor, the system will suffer increased initial cost and increased inherent maintenance and reliability problems. On the other hand, if they elect to allow the rotor to yaw freely, they know they will have to account for unknown and random, though bounded, yaw rates. They will have a higher-risk design to trade off against the potential for cost savings and reliability improvement. The risk of a yaw-free system could be minimized if methods were available for analyzing and understanding yaw behavior. The complexity of yaw behavior has, until recently, discouraged engineers from developing a complete yaw analysis method. The objectives of this work are to (1) provide a fundamental understanding of free-yaw mechanics and the design concepts most effective at eliminating yaw problems, and (2) provide tested design tools and guidelines for use by free-yaw wind system manufacturers. The emphasis is on developing practical and sufficiently accurate design methods.

This paper compares horizontal versus vertical maintenance options for the internal components (blanket and segments) of the fusion reactors NET (Next European Torus) and INTOR. The mechanical options described are intended to ensure the handling of internals with the required precision, taking into account the problems raised by the safety and confinement requirements; handling is obviously performed remotely. The options are compared according to the criteria of feasibility, building size, duration of maintenance operations, safety, flexibility, availability and cost. The first conclusions indicate that the vertical handling option offers advantages as regards ease of handling and confinement possibilities. From the building-size point of view, the two solutions are almost equivalent, while the other criteria do not provide a basis for choice. It is emphasized that the confinement option, C.T.U. (Containment Transfer Unit) or T.I.C. (Tight Intermediate Confinement), should be the major factor in determining the best options. In addition, a comparative cost analysis identifies the best cost/benefit ratio among the different options studied.

Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

This study aims to aerodynamically design a 1 MW horizontal-axis wind turbine to obtain the maximum power coefficient by linearizing the chord and twist distributions. A new linearization method has been used for the chord and twist distributions by crossing a tangent line through… the geometry of the blades determines the power generated by the rotor, designing the blade is a very important issue. Herein, calculations are done for different airfoil families, namely Risø-A1-21, Risø-A1-18, S809, S814 and DU 93-W-210. Hence, the effect of selecting different airfoil families is also…

This paper proposes an efficient algorithm to compress the cubes in the progress of the parallel data cube generation. This low overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.
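The essence of tuple difference coding can be sketched as follows; this is our own minimal construction (the paper's block-by-block scheme adds more machinery): tuples over fixed attribute domains are linearized in mixed radix, sorted, and stored as a start index plus small gaps.

```python
def to_index(t, dims):
    """Linearize a tuple under mixed-radix `dims` (row-major order)."""
    idx = 0
    for v, d in zip(t, dims):
        idx = idx * d + v
    return idx

def tdc_encode(tuples, dims):
    """Tuple-difference coding: sorted linear indices -> first + gaps."""
    idxs = sorted(to_index(t, dims) for t in tuples)
    return idxs[0], [b - a for a, b in zip(idxs, idxs[1:])]

def tdc_decode(first, gaps, dims):
    """Rebuild indices by cumulative sum, then de-linearize each tuple."""
    idxs = [first]
    for g in gaps:
        idxs.append(idxs[-1] + g)
    out = []
    for idx in idxs:
        t = []
        for d in reversed(dims):
            idx, r = divmod(idx, d)
            t.append(r)
        out.append(tuple(reversed(t)))
    return out
```

The gaps are typically much smaller than the full indices, so they fit in fewer bits, which is where the compression ratio comes from.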

We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...
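The wear metric behind such schemes is the number of cells that change state on each write; a minimal sketch follows (the inversion trick is the well-known Flip-N-Write idea, used here for illustration and not necessarily one of the paper's three schemes):

```python
def bit_flips(old: int, new: int) -> int:
    """Number of PCM cells that change state when `new` overwrites `old`."""
    return bin(old ^ new).count("1")

def write_min_flips(old: int, new: int, width: int = 8) -> tuple[int, int]:
    """Flip-N-Write-style choice: store `new` directly or inverted
    (recorded in one tag bit), whichever flips fewer cells."""
    mask = (1 << width) - 1
    inv = ~new & mask
    if bit_flips(old, new & mask) <= bit_flips(old, inv):
        return new & mask, 0
    return inv, 1
```

Compression helps the same metric indirectly: fewer stored bits means fewer cells that can flip at all.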

Structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, X-ray diffraction (XRD), and infrared (IR) and Raman spectroscopies. The information obtained indicates that dehydration and polymerization of surface silanol, driven by the high shock and residual temperatures, are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may relax to larger rings, such as 6-membered rings, at high residual temperature; the residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovers its opalescence, the origin of which may be a layered structure produced by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa, and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In

Wire mesh sensors were used to investigate the void fraction distribution along a 9 meter long, 50.8 mm diameter, horizontal test section that contained two 90 degree bends. Deionised water and compressed air were used as the working fluids, with the bubbly flow regime achieved at a superficial liquid velocity of 3.5 m/s and superficial gas velocities that varied between 0.1 and 1.2 m/s. The effects of superficial gas velocity and axial location on the void fraction distribution were investigated. Bubble and slug flow patterns were identified using a probability density function analysis based on a Gaussian mixture model. (author)
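The Gaussian-mixture PDF analysis used to separate bubble and slug signatures can be sketched with a small EM fit; the paper does not state its fitting details, so the two-component model, quantile initialization and iteration count below are our assumptions.

```python
import numpy as np

def fit_gmm_1d(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM, the kind of
    probability-density analysis used to separate void-fraction peaks."""
    mu = np.array([np.quantile(x, 0.1), np.quantile(x, 0.9)])  # deterministic init
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = np.maximum((r * (x[:, None] - mu) ** 2).sum(axis=0) / n, 1e-10)
    return w, mu, var
```

A clearly bimodal fit (two well-separated means) would then be read as a slug signature, a unimodal one as bubbly flow.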

Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast

MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables
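MXLKID itself is not reproduced here, but the core idea (maximize a likelihood over the parameters of a dynamic model given noisy measurements) can be sketched on a toy exponential-decay system; the model, names and Gauss-Newton solver below are our assumptions, not the program's algorithm.

```python
import numpy as np

def mle_decay(t, y, a0=1.0, iters=50):
    """ML estimate of (a, sigma) for y(t) = exp(-a t) + N(0, sigma^2).
    For Gaussian noise, maximizing the likelihood in `a` reduces to
    nonlinear least squares (Gauss-Newton here); sigma then follows
    in closed form from the residuals."""
    a = a0
    for _ in range(iters):
        f = np.exp(-a * t)
        r = y - f                 # residuals
        J = t * f                 # dr/da for this model
        a -= (J @ r) / (J @ J)    # Gauss-Newton step
    sigma = np.sqrt(np.mean((y - np.exp(-a * t)) ** 2))
    return a, sigma
```

The same maximize-the-LF structure carries over to multi-parameter systems, with the scalar step replaced by a matrix solve.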

In contrast to the single-shock compression state, which can be obtained directly from experimental measurements, multi-shock compression states have to be calculated with the aid of theoretical models. In order to determine the multiple shock states experimentally, a diagnostic approach combining a Doppler pin system (DPS) and a pyrometer was used to probe multiple shocks in dense argon plasmas. The plasma was generated by a shock-reverberation technique. The shock was produced by flyer-plate impact, with the flyer accelerated up to ∼6.1 km/s by a two-stage light-gas gun, and was introduced into the plenum argon gas sample, which had been pre-compressed from ambient pressure to about 20 MPa. The time-resolved optical radiation histories were recorded with a multi-wavelength-channel optical radiance pyrometer. Simultaneously, the particle-velocity profiles at the LiF window were measured with the multi-channel DPS. The states of the multi-shock-compressed argon plasma were determined from the measured shock velocities combined with the particle-velocity profiles. The experiments on dense argon plasmas determined the principal Hugoniot up to 21 GPa, the re-shock pressure up to 73 GPa, and a maximum measured fourth-shock pressure of 158 GPa. The results are used to validate the existing self-consistent variational theory model in the partial-ionization region and to develop new theoretical models.
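The single-shock state such experiments measure follows from the Rankine-Hugoniot jump conditions; a minimal sketch is below. The linear Us-up coefficients in the usage example are placeholder numbers for illustration, not the argon equation of state used in the paper.

```python
def hugoniot_state(rho0, c0, s, up, p0=0.0):
    """Principal-Hugoniot state for a linear shock-speed fit Us = c0 + s*up.
    Returns shock speed Us, pressure from the momentum jump
    P = P0 + rho0*Us*up, and compressed density from the mass jump
    rho = rho0*Us/(Us - up). SI units throughout."""
    us = c0 + s * up
    p = p0 + rho0 * us * up
    rho = rho0 * us / (us - up)
    return us, p, rho
```

Re-shock states are obtained by applying the same jump conditions again from the first-shock state, with an impedance match at the window, which is why the theoretical model enters.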

This report is focused on tests with Variable Compression Ratio (VCR) engines according to the Alvar engine principle. Variable compression ratio means an engine design in which it is possible to change the nominal compression ratio. The purpose is to increase the fuel efficiency at part load by increasing the compression ratio. At maximum load, perhaps with supercharging (for example by turbocharger), it is not possible to keep a high compression ratio because of the knock phenomenon. Knock is a shock wave caused by self-ignition of the fuel-air mix; if knock occurs, the engine will be exposed to a destructive load. For these reasons it would be an advantage to be able to change the compression ratio continuously as the load changes. The Alvar engine provides a solution for variable compression ratio based on well-known engine components. This paper provides information about efficiency and emission characteristics from tests with two Alvar engines. Results from tests with a phase-shift mechanism (for automatic compression ratio control) for the Alvar engine are also reviewed. Examination paper. 5 refs, 23 figs, 2 tabs, 5 appendices

This paper is on the modification design of a petrol engine for alternative fuelling using Compressed Natural Gas (CNG). It provides an analytical background to the modification design process. A petrol engine, the Honda CR-V 2.0 auto, which has a compression ratio of 9.8, was selected as the case study. In order for this petrol engine to run on CNG, its compression ratio had to be increased. An optimal compression ratio of 11.97 was computed using the standard temperature-specific-volume relationship for an isentropic compression process. This computation is based on an inlet air temperature of 30 °C (representative of tropical ambient conditions) and a pre-combustion temperature of 540 °C (corresponding to the auto-ignition temperature of CNG). Using this value of compression ratio, a dimensional modification quantity of 1.803 mm was obtained using simple geometric relationships. This 1.803 mm can be accommodated by increasing the length of the connecting rod or the compression height of the piston, or by reducing the sealing plate's thickness. After the modification process, a CNG engine with an air-standard efficiency of 62.7% (a 4.67% increase over the petrol engine), capable of a maximum power of 83.6 kW at 6500 rpm, was obtained.
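The isentropic computation described can be reproduced in a few lines. With γ = 1.4 it lands near 11.8 rather than exactly the quoted 11.97, so the authors presumably used slightly different gas properties; treat the constants below as assumptions.

```python
def optimal_cr(t_in_c=30.0, t_end_c=540.0, gamma=1.4):
    """Compression ratio from the isentropic relation T2/T1 = r**(gamma-1),
    ending compression just below the CNG auto-ignition temperature."""
    t1, t2 = t_in_c + 273.15, t_end_c + 273.15
    return (t2 / t1) ** (1.0 / (gamma - 1.0))

def otto_efficiency(r, gamma=1.4):
    """Air-standard (Otto-cycle) efficiency for compression ratio r."""
    return 1.0 - r ** (1.0 - gamma)
```

Plugging the paper's r = 11.97 into the Otto relation does reproduce the quoted 62.7% air-standard efficiency to within rounding.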

Purpose – Over the last decades, horizontal cooperations between logistics service providers (LSPs) have become a well-established organizational form and their use is expected to grow even further in the future. In spite of this increasing importance of horizontal LSP cooperations, little research...

Rodent spatial cognition studies allow links to be made between neural and behavioural phenomena, and much is now known about the encoding and use of horizontal space. However, the real world is three dimensional, providing cognitive challenges that have yet to be explored. Motivated by neural findings suggesting weaker encoding of vertical than horizontal space, we examined whether rats show a similar behavioural anisotropy when distributing their time freely between vertical and horizontal movements. We found that in two- or three-dimensional environments with a vertical dimension, rats showed a prioritization of horizontal over vertical movements in both foraging and detour tasks. In the foraging tasks, the animals executed more horizontal than vertical movements and adopted a “layer strategy” in which food was collected from one horizontal level before moving to the next. In the detour tasks, rats preferred the routes that allowed them to execute the horizontal leg first. We suggest three possible reasons for this behavioural bias. First, as suggested by Grobety and Schenk [5], it allows minimisation of energy expenditure, inasmuch as costly vertical movements are minimised. Second, it may be a manifestation of the temporal discounting of effort, in which animals value delayed effort as less costly than immediate effort. Finally, it may be that at the neural level rats encode the vertical dimension less precisely, and thus prefer to bias their movements in the more accurately encoded horizontal dimension. We suggest that all three factors are related, and all play a part. PMID:21419172

Preemption plays a crucial role in firms' merger decisions. The author studies whether and under which circumstances preemptive merging occurs in vertically related industries. He finds that vertical mergers often preempt horizontal mergers and are dominant outcomes. Preempting the threat of a detrimental horizontal integration may be the main reason for vertically integrating. Copyright 1995 by Blackwell Publishing Ltd.

This paper presents a study for the horizontal and vertical seismic isolation of a nuclear power plant with a base isolation system, developed by the author, called the Alexisismon. This system -- which comprises different schemes for horizontal or vertical or both horizontal and vertical isolation -- is a linear system based on the principle of separation of functions. That is, horizontal and vertical isolation are realized through different components and act independently from each other. As far as horizontal isolation is concerned, the role of transmitting vertical loads is uncoupled from the role of inducing horizontal restoring forces so that both functions can be performed without instability. It is possible either to provide both horizontal and vertical isolation to the whole nuclear plant or to isolate the whole plant horizontally and to provide vertical isolation to sensitive and costly equipment only. When the fundamental period of the plant or equipment is 2 seconds and the vertical displacements are of the order of ±20 inches, the structure or equipment is protected against earthquakes up to 1.10 and 1.30 g for actual and 0.60 and 1.50 g for artificial accelerograms. In both cases all the isolation elements, as well as the superstructure and equipment, behave elastically up to these acceleration limits.

A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑB_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for "fat", small aspect ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10^3: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root mean square energy spread as λ^1.1; and the total current as λ^0.79. The field reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to "hollow out". This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed.
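The reported power laws are easy to apply directly. A small sketch that evaluates the ring parameters at a given compression ratio λ; the exponents are those quoted in the abstract, while the initial values passed in are illustrative placeholders:

```python
# Scaling laws for "fat", small-aspect-ratio ion rings under compression
# ratio lam = B_e(final) / B_e(initial), exponents taken from the abstract.
def compress_ring(lam, radius0, energy0, spread0, current0):
    return {
        "radius":  radius0 * lam ** -0.5,   # ring radius ~ lambda^(-1/2)
        "energy":  energy0 * lam ** 0.72,   # average single-particle energy
        "spread":  spread0 * lam ** 1.1,    # rms energy spread
        "current": current0 * lam ** 0.79,  # total ring current
    }

ring = compress_ring(100.0, 1.0, 1.0, 1.0, 1.0)  # lam = 100, unit initial values
```

Note that the spread grows faster than the mean energy (exponent 1.1 versus 0.72), so the relative energy spread of the ring increases during compression.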

Intermittent pneumatic compression has been established as a method of clinically preventing deep vein thrombosis, but the mechanism has not been documented. This study observed the effects of intermittent pneumatic compression of legs on the microcirculation of distant skeletal muscle. The cremaster muscles of 80 male rats were exposed, a specially designed intermittent pneumatic-compression device was applied to both legs for 60 minutes, and the microcirculation of the muscles was assessed by measurement of the vessel diameter in three categories (10-20, 21-40, and 41-70 μm) for 120 minutes. The results showed significant vasodilation in arterial and venous vessels during the application of intermittent pneumatic compression, which disappeared after termination of the compression. The vasodilation reached a maximum 30 minutes after initiation of the compression and could be completely blocked by an inhibitor of nitric oxide synthase, NG-monomethyl-L-arginine (10 μmol/min). A 120-minute infusion of NG-monomethyl-L-arginine, beginning coincident with 60 minutes of intermittent pneumatic compression, resulted in a significant decrease in arterial diameter that remained at almost the same level after termination of the compression. The magnitude of the decrease in diameter in the group treated with intermittent pneumatic compression and NG-monomethyl-L-arginine was comparable with that in the group treated with NG-monomethyl-L-arginine alone. The results imply that the production of nitric oxide is involved in the positive influence of intermittent pneumatic compression on circulation. It is postulated that the rapid increase in venous velocity induced by intermittent pneumatic compression produces strong shear stress on the vascular endothelium, which stimulates an increased release of nitric oxide and thereby causes systemic vasodilation.

We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

This is a case study which describes the planning and results of a horizontal well in a shallow Wilcox sandstone waterflood unit in central Louisiana. The Tremont H-13-1 was OXY USA Inc.'s first horizontal well. The analysis includes examination of the selection criteria, planning, execution, completion, and production. A variety of well and field data is presented and reviewed to assess the value of this information as it applies to other applications. The Cruse Waterflood Unit is a 2100 ft Wilcox formation in central Louisiana. Production improvements have been 500% or greater for the horizontal well versus adjacent vertical wells. The horizontal well paid out in less than 4 months. Results from this well indicate that not only was this project an economic success, but that other fields with similar conditions can be produced in a more profitable manner with horizontal wells.

Full Text Available The 1990s may become known in the oil field as the decade of the horizontal well. Horizontal wells can increase the production rate and the ultimate recovery, and can reduce the number of platforms or wells required to develop a reservoir. An empirical equation to calculate the inflow performance of two-phase flow for a vertical and a horizontal well in the dissolved-gas regime was presented by Vogel in 1968. His equation was based on the results of reservoir simulation. The created model, whose result (output) is the ratio of the productivity of a horizontal well to the productivity of a vertical well, expresses for a given area the number of vertical wells that can be replaced by one horizontal well. The model is applied to a concrete geological model.
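Vogel's 1968 dissolved-gas-drive inflow relation mentioned above is commonly written q/q_max = 1 − 0.2(p_wf/p_r) − 0.8(p_wf/p_r)². A small sketch of that reference curve:

```python
def vogel_rate_fraction(pwf, pr):
    """Vogel (1968) inflow performance relationship: fraction of the maximum
    oil rate delivered at flowing bottom-hole pressure pwf, for reservoir
    pressure pr, under dissolved-gas (solution-gas) drive."""
    x = pwf / pr
    return 1.0 - 0.2 * x - 0.8 * x * x

# At pwf = pr there is no drawdown (zero rate); at pwf = 0 the well
# produces its maximum rate q_max.
```

For example, drawing the well down to half the reservoir pressure delivers 70% of q_max, not 50%, which is exactly the curvature that distinguishes Vogel's relation from a straight-line productivity index.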

through horizontal openings. Two cases of full-scale measurements of buoyancy driven natural ventilation through horizontal openings are performed: one horizontal opening, and one horizontal opening combined with one vertical opening. For the case of one horizontal opening, the measurements are made… Computational fluid dynamics (CFD) is used to study these two air flow cases. The air flow rate and air flow pattern are predicted and compared with the full-scale measurements. The measurement data are used to compare two CFD models: the standard k-ε model and the large eddy simulation (LES) model. The cases… are transient, unstable and complex, and the air flow rates oscillate with time. Correlations between the Froude number Fr and the opening ratio L/D are obtained, which are in reasonable agreement with Epstein's formula derived from brine-water measurements, but the obtained Fr values show considerable deviations…

Over what region of space are horizontal disparities integrated to form the stimulus for vergence? The vergence system might be expected to respond to disparities within a small area of interest to bring them into the range of precise stereoscopic processing. However, the literature suggests that disparities are integrated over a fairly large parafoveal area. We report the results of six experiments designed to explore the spatial characteristics of the stimulus for vergence. Binocular eye movements were recorded using magnetic search coils. Each dichoptic display consisted of a central target stimulus that the subject attempted to fuse, and a competing stimulus with conflicting disparity. In some conditions the target was stationary, providing a fixation stimulus. In other conditions, the disparity of the target changed to provide a vergence-tracking stimulus. The target and competing stimulus were combined in a variety of conditions including those in which (1) a transparent textured-disc target was superimposed on a competing textured background, (2) a textured-disc target filled the centre of a competing annular background, and (3) a small target was presented within the centre of a competing annular background of various inner diameters. In some conditions the target and competing stimulus were separated in stereoscopic depth. The results are consistent with a disparity integration area with a diameter of about 5 degrees. Stimuli beyond this integration area can drive vergence in their own right, but they do not appear to be summed or averaged with a central stimulus to form a combined disparity signal. A competing stimulus had less effect on vergence when separated from the target by a disparity pedestal. As a result, we propose that it may be more useful to think in terms of an integration volume for vergence rather than a two-dimensional retinal integration area.

The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core directly, using the condition of maximum neutron flux and complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications which make it appropriate from the maximum principle point of view. The maximum principle, in the form applied here, is well suited to this application. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable loads which can be achieved by a mobile manipulator during a given trajectory are limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolve the redundancy are probably the most important factors. To resolve the extra D.O.F. introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and depends directly on the additional constraint functions applied to resolve the motion redundancy.

A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges such as the tree-like structure of the coronary vessels in digital angiograms. This method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform domain (DCT). Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols in an alphabet of size M transmitted by uncoded, full-response continuous phase modulation (CPM) over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends, the structures of which depend only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.

A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches.

The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are: limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which are causing reduced reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers are running down to 50% with the mean at about 80%. The major cause for this large disparity is due to installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are: cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby, causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high speed and low speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity

Regarding Structural Health Monitoring (SHM) for seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration is occupied within 100 Hz or less. In addition, the response motions on upper floors of a structure are activated at a natural frequency, resulting in induced shaking in a specified narrow band. Focusing on the vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmission data. We carry out a compressed sensing and transmission scheme by band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to move to the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data is transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper discusses the evaluation of the compressed sensing for seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration, in conditions where the acceleration was compressed to 1/32. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
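The transform/band-pass/inverse-transform pipeline can be illustrated with a plain DFT: keep only the bins inside the pass band, transmit those, and reconstruct with the inverse transform at the receiver. A stdlib-only sketch (the O(n²) DFT and the 3-8 bin band are illustrative choices, not the paper's; a real node would use an FFT):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def compress_bandpass(x, lo, hi):
    """Keep only DFT bins whose folded frequency index lies in [lo, hi]."""
    X = dft(x)
    n = len(X)
    kept = {k: X[k] for k in range(n) if lo <= min(k, n - k) <= hi}
    return kept, n  # transmit only the kept coefficients plus the record length

def restore(kept, n):
    """Inverse DFT from the sparse coefficient dictionary at the receiver."""
    return [sum(c * cmath.exp(2j * math.pi * k * t / n)
                for k, c in kept.items()).real / n for t in range(n)]

# A pure 5-cycle tone survives a 3..8 band intact while only 12 of 64 bins
# (conjugate pairs included) need to be transmitted.
signal = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
kept, n = compress_bandpass(signal, 3, 8)
rebuilt = restore(kept, n)
avg_error = sum(abs(a - b) for a, b in zip(signal, rebuilt)) / n
```

In the structural case the pass band would be set around the building's natural frequency, which is what lets the scheme reach compression factors like the 1/32 reported above while keeping the average error small.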

In this paper we consider the relationship between control tasks and image compression losses. The main idea of this approach is to identify the structural lines of a simplified image and to further compress the selected data.

A new reconstruction method for Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with less measurements utilizing this compressed sensing based method.

.... In this paper, 3D compression and expansion analysis of the tongue will be presented. Patterns of expansion and compression have been compared for different syllables and various repetitions of each syllable...

Feature extraction is very important for robust and real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers were based on the compressed Haar-like feature, and how to compress many more excellent high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original formula of NPD to two blocks. A CNBD feature can be obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
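The compressed feature described above is, at its core, a projection y = Φx of a high-dimensional feature vector through a random Gaussian measurement matrix Φ. A stdlib sketch of the two ingredients: a block-mean difference in NPD's (μA − μB)/(μA + μB) form, and the random projection (the dense Φ, block layout, and dimensions here are illustrative simplifications, not the paper's sparse construction):

```python
import random

def block_difference(block_a, block_b):
    """Normalized difference of two pixel-block means, (ma - mb) / (ma + mb),
    the block-level generalization of the pixel-level NPD feature."""
    ma = sum(block_a) / len(block_a)
    mb = sum(block_b) / len(block_b)
    return 0.0 if ma + mb == 0 else (ma - mb) / (ma + mb)

def gaussian_matrix(m, n, seed=0):
    """m x n random Gaussian measurement matrix (dense, for illustration)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]

def compress_feature(x, phi):
    """y = phi @ x: project the n-dimensional feature to m measurements."""
    return [sum(p * xi for p, xi in zip(row, x)) for row in phi]

features = [block_difference([10, 12], [4, 6]) for _ in range(100)]  # toy n = 100
phi = gaussian_matrix(10, 100)        # m = 10 compressed measurements
y = compress_feature(features, phi)
```

Compressive sensing theory is what justifies the step from n to m << n: for suitably sparse features, the random projection preserves enough information for discrimination.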

Partially pressure depleted reservoirs and unfavorable horizontal flow geometries can impact artificial lift designs and diagnostics. In addition, terrain slugging, drilling fines, high gas volume fractions, H{sub 2}S gas and high bottom hole temperatures also pose challenges. This paper provides an overview of various systems utilized by Amerada Hess, a company which examines methods of reducing gas lift gas volumes to achieve maximum flow. A description of naturally fractured reservoirs and limited natural fractures was provided. A comparison was presented between the original conditions at Beaver Lodge Madison and existing conditions with horizontal development. Various artificial lift challenges were examined. It was suggested that high volume lift utilizing gas lift was the preferred artificial lift system for high volume wells. It was noted that downhole sensors can be used as an indicator of potential run life. However, reliability is limited by downhole operating temperatures and electrical ground faults. A comparison of friendly and unfriendly flow systems was presented, as well as a gas lift pressure chart. A summary of average gas volume systems was provided as well as an example of a response to increase drawdown. Examples of downhole Electric Submersible Pump (ESP) sensors were provided, as well as possible flowing pressure profiles in horizontal completion because of the constraints of lift capacity. It was concluded that a single point injection and proven gas lift system is the next step in high volume lift strategy. 2 tabs, 16 figs.

Full Text Available This paper presents the design of a highly efficient pneumatic motor system. The air engine is currently the most generally used device to convert potential energy of compressed air into mechanical energy. However, the efficiency of the air engines is too low to provide sufficient operating range for the vehicle. In this study, the energy contained in compressed air/pressurized hydraulic oil is transformed by a hydraulic motor to mechanical energy to enhance the efficiency of using air power. To evaluate the theoretical efficiency, the principle of balance of energy is applied. The theoretical efficiency of converting air into hydraulic energy is found to be a function of pressure; thus, the maximum converting efficiency can be determined. To confirm the theoretical evaluation, a prototype of the pneumatic hydraulic system is built. The experiment verifies that the theoretical evaluation of the system efficiency is reasonable, and that the layout of the system is determined by the results of theoretical evaluation.

Full Text Available This paper aims to examine changes in common longevity and variability of the adult life span, and attempts to answer whether or not the compression of mortality continued in Switzerland in the years 1876-2005. The results show that the negative relationship between the large increase in the adult modal age at death, observed at least from the 1920s, and the decrease in the standard deviation of the ages at death occurring above it illustrates a significant compression of adult mortality. Typical adult longevity increased by about 10 years during the last fifty years in Switzerland, and adult heterogeneity in the age at death decreased in the same proportion. This analysis has not found any evidence suggesting that we are approaching longevity limits in terms of modal or even maximum life spans. It ascertains a slowdown in the reduction of adult heterogeneity in longevity, already observed in Japan and other low mortality countries.

Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
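NCD itself is defined as NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)) for a compressor C. A zlib-based sketch; zlib is exactly the kind of practical compressor whose idealized properties (e.g. idempotence, C(xx) ≈ C(x)) only approximately hold, which is the paper's point:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length under zlib at maximum effort: our stand-in for C()."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar inputs,
    near 1 for unrelated inputs (only approximately, in practice)."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = bytes(range(256)) * 8                                    # repetitive pattern
b = b"the quick brown fox jumps over the lazy dog " * 50     # unrelated text
```

Here ncd(a, a) stays close to 0 because zlib mostly deduplicates the repeated half, while ncd(a, b) is close to 1 because the two inputs share no structure; the residual gap from the ideal 0 and 1 is precisely the deviation from the theoretical compressor axioms.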

Image compression is necessary for data transportation, which saves both transferring time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we will investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG 2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100); then we will select the compression method of the highest IQ (SSIM or PSNR). Or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50); then we will select the compression method of the highest compression ratio. Our experiments, tested on thermal (long-wave infrared) images in gray scale, showed very promising results.
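Two of the IQ metrics named above have simple closed forms; a sketch of RMSE and PSNR over flattened pixel sequences (SSIM involves local means, variances, and covariances and is omitted here):

```python
import math

def rmse(a, b):
    """Root mean square error between two equal-length pixel sequences."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return math.inf if e == 0 else 20.0 * math.log10(peak / e)
```

For instance, a uniform error of 10 grey levels against an 8-bit peak of 255 gives PSNR = 20·log10(25.5) ≈ 28.13 dB, which is the kind of (IQ, parameter) sample point the regression models in step two are fitted to.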

Mostly, transforms, which are lossy algorithms, are used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
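The LBG algorithm at the heart of such a codec is essentially Lloyd/k-means iteration over training vectors: assign each vector to its nearest codeword, then move each codeword to its cluster centroid. A simplified stdlib sketch (without LBG's codebook-splitting initialization; the toy data and seed are ours):

```python
import random

def quantize(v, codebook):
    """Index of the nearest codeword under squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))

def train_codebook(vectors, k, iters=20, seed=1):
    """k-means-style codebook training: the core of LBG, minus splitting."""
    rng = random.Random(seed)
    codebook = list(rng.sample(vectors, k))      # init from training vectors
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[quantize(v, codebook)].append(v)
        for i, cl in enumerate(clusters):
            if cl:  # move each codeword to the centroid of its cluster
                codebook[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return codebook
```

Compression then consists of transmitting only the codeword indices (log2 k bits per vector) instead of the raw vectors, with the shared codebook used for reconstruction at the decoder.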

We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distributions. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

A biological compression model, expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
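The core idea of referential compression can be sketched as greedy matching against the reference: emit (offset, length) copy operations where the target matches the reference, and fall back to literals elsewhere. The brute-force O(n·m) match search below is purely illustrative; an indexed reference (e.g. a hash table of k-mers) is what lets tools like FRESCO reach their reported speeds:

```python
def ref_compress(target, reference, min_match=4):
    """Encode target as copies from reference plus literal characters."""
    ops, i = [], 0
    while i < len(target):
        best_len, best_off = 0, 0
        for off in range(len(reference)):          # brute-force longest match
            l = 0
            while (off + l < len(reference) and i + l < len(target)
                   and reference[off + l] == target[i + l]):
                l += 1
            if l > best_len:
                best_len, best_off = l, off
        if best_len >= min_match:
            ops.append(("copy", best_off, best_len))  # (offset, length) pair
            i += best_len
        else:
            ops.append(("lit", target[i]))            # literal fallback
            i += 1
    return ops

def ref_decompress(ops, reference):
    out = []
    for op in ops:
        out.append(reference[op[1]:op[1] + op[2]] if op[0] == "copy" else op[1])
    return "".join(out)
```

When target and reference are highly similar (two genomes of the same species, two revisions of a page), almost the whole target collapses into a handful of copy operations, which is where the extreme ratios come from.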

Horizontal wells have the potential to become an important tool for use in characterization, remediation and monitoring operations at hazardous waste disposal, chemical manufacturing, refining and other sites where subsurface pollution may develop from operations or spills. Subsurface pollution of groundwater aquifers can occur at these sites by leakage of surface disposal ponds, surface storage tanks, underground storage tanks (UST), subsurface pipelines or leakage from surface operations. Characterization and remediation of aquifers at or near these sites requires drilling operations that are typically shallow, less than 500 feet in depth. Due to the shallow nature of polluted aquifers, waste site subsurface geologic formations frequently consist of unconsolidated materials. Fractured, jointed and/or layered high compressive strength formations or compacted caliche type formations can also be encountered. Some formations are unsaturated and have pore spaces that are only partially filled with water. Completely saturated underpressured aquifers may be encountered in areas where the static ground water levels are well below the ground surface. Each of these subsurface conditions can complicate the drilling and completion of wells needed for monitoring, characterization and remediation activities. This report describes some of the equipment that is available from petroleum drilling operations that has direct application to groundwater characterization and remediation activities. A brief discussion of petroleum directional and horizontal well drilling methodologies is given to allow the reader to gain an understanding of the equipment needed to drill and complete horizontal wells. Equipment used in river crossing drilling technology is also discussed. The final portion of this report is a description of the drilling equipment available and how it can be applied to groundwater characterization and remediation activities

Radioactive spent fuel assemblies are a source of hazardous waste that will have to be dealt with in the near future. It is anticipated that the spent fuel assemblies will be transported to disposal sites in spent fuel transportation casks. In order to design a reliable and safe transportation cask, the maximum cladding temperature of the spent fuel rod arrays must be calculated. The maximum rod temperature is a limiting factor in the amount of spent fuel that can be loaded in a transportation cask. The scope of this work is to demonstrate that reasonable and conservative spent fuel rod temperature predictions can be made using commercially available thermal analysis codes. The demonstration is accomplished by a comparison between numerical temperature predictions made with a commercially available thermal analysis code and experimental temperature data for electrical rod heaters simulating a horizontally oriented spent fuel rod bundle.

Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths cannot quickly achieve arbitrary precision because of limitations in the algorithms (e.g. grid-search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and that the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to establish the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform-independent Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
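The correlation-maximization idea can be sketched as follows. This is a deliberately simple brute-force grid search over angles — the kind of approach the text says the non-linear estimation routine improves on — and the synthetic data and function names are illustrative, not the paper's software:

```python
import numpy as np

def corrected_north(test_n, test_e, theta_deg):
    """Rotate misoriented horizontal components back by theta_deg
    (azimuth of the sensor's 'north' axis, clockwise from true north)."""
    th = np.radians(theta_deg)
    return np.cos(th) * test_n - np.sin(th) * test_e

def estimate_azimuth(ref_n, test_n, test_e, step=0.5):
    """Find the rotation maximizing zero-lag correlation with the
    reference north channel."""
    best_theta, best_corr = 0.0, -np.inf
    for theta in np.arange(0.0, 360.0, step):
        c = np.corrcoef(ref_n, corrected_north(test_n, test_e, theta))[0, 1]
        if c > best_corr:
            best_theta, best_corr = theta, c
    return best_theta, best_corr

# synthetic check: a sensor misoriented by +30 degrees from the reference
rng = np.random.default_rng(0)
sig_n = rng.standard_normal(2000)
sig_e = rng.standard_normal(2000)
th = np.radians(30.0)
test_n = np.cos(th) * sig_n + np.sin(th) * sig_e    # recorded "north"
test_e = -np.sin(th) * sig_n + np.cos(th) * sig_e   # recorded "east"
theta_hat, corr = estimate_azimuth(sig_n, test_n, test_e)
```

Running the same search on many overlapping windows, as the abstract describes, gives a distribution of `theta_hat` values from which a confidence estimate can be formed.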

Lossy data compression generates distortion or error on the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The raw evaluation data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Analysis of variance was also used to identify which compression method is statistically better, and from what compression ratio the quality of a compressed image is judged poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm was significantly better in quality than the 2-D block DCT at significance level 0.05. Also, images compressed 10:1 with the interframe coding algorithm did not show any significant differences from the original at level 0.05.

In this paper, we studied the usage of H.264/AVC video compression tools by the flagship smartphones. The results show that only a subset of tools is used, meaning that there is still a potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...
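The Maltenfort-style correlation described above amounts to an ordinary least-squares fit of measured ECT against the SCT-based prediction. A minimal sketch with entirely hypothetical numbers (not data from the study):

```python
import numpy as np

# Hypothetical data: SCT-based predicted strength vs. measured ECT (kN/m).
sct_pred = np.array([5.1, 6.0, 6.8, 7.5, 8.2, 9.0, 9.7])
ect_meas = np.array([5.6, 6.7, 7.2, 8.3, 8.9, 9.8, 10.4])

# Fit ECT = a * predicted + b by ordinary least squares.
A = np.column_stack([sct_pred, np.ones_like(sct_pred)])
(a, b), *_ = np.linalg.lstsq(A, ect_meas, rcond=None)

# Goodness of fit: coefficient of determination.
residuals = ect_meas - (a * sct_pred + b)
r2 = 1 - residuals.var() / ect_meas.var()
```

The fitted slope and intercept play the role of the "linear regression constants" the text refers to.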

This article discusses shear horizontal (SH) guided waves that can be excited with a shear-type piezoelectric wafer active sensor (SH-PWAS). The paper starts with a review of the state of the art in SH wave modelling and its importance in non-destructive evaluation (NDE) and structural health monitoring (SHM). The basic piezoelectric sensing and actuation equations for the case of the shear horizontal piezoelectric wafer active sensor (SH-PWAS) with electro-mechanical coupling coefficient d35 are reviewed. Multiphysics finite element modelling (MP-FEM) was performed on a free SH-PWAS to show its resonance modeshapes. The actuation mechanism of the SH-PWAS is predicted by MP-FEM, and modeshapes of the excited structure are presented. The structural resonances are compared with experimental measurements and showed good agreement. Analytical prediction of SH waves was performed. An SH wave propagation experimental study was conducted between different combinations of SH-PWAS and regular in-plane PWAS transducers. Experimental results were compared with analytical predictions for aluminium plates and showed good agreement. 2D wave propagation effects were studied by MP-FEM. An analytical model was developed for SH wave power and energy. The normal mode expansion (NME) method was used to account for the superposition of multimodal SH waves. Modal participation factors are presented to show the contribution of every mode. Power and energy transfer between the SH-PWAS and the structure was analyzed. Finally, we present simulations of our developed wave power and energy analytical models.

We analyze theoretically and experimentally the wake behind a horizontal cylinder of diameter d horizontally translated at constant velocity U in a fluid rotating about the vertical axis at a rate Ω. Using particle image velocimetry measurements in the rotating frame, we show that the wake is stabilized by rotation for Reynolds number Re = Ud/ν much larger than in a nonrotating fluid. Over the explored range of parameters, the limit of stability is Re ≃ (275 ± 25)/Ro, with Ro = U/(2Ωd) the Rossby number, indicating that the stabilizing process is governed by the Ekman pumping in the boundary layer. At low Rossby number, the wake takes the form of a stationary pattern of inertial waves, similar to the wake of surface gravity waves behind a ship. We compare this steady wake pattern to a model, originally developed by Johnson [E. R. Johnson, J. Fluid Mech. 120, 359 (1982), 10.1017/S0022112082002808], assuming a free-slip boundary condition and a weak streamwise perturbation. Our measurements show quantitative agreement with this model for Ro ≲ 0.3. At larger Rossby number, the phase pattern of the wake is close to the prediction for an infinitely small line object. However, the wake amplitude and phase origin are not correctly described by the weak-streamwise-perturbation model, calling for an alternative model for the boundary condition at moderate rotation rate.

Transmission of Helicobacter pylori is thought to occur mainly during childhood, and predominantly within families. However, due to the difficulty of obtaining H. pylori isolates from large population samples and to the extensive genetic diversity between isolates, the transmission and spread of H. pylori remain poorly understood. We studied the genetic relationships of H. pylori isolated from 52 individuals of two large families living in a rural community in South Africa and from 43 individuals of 11 families living in urban settings in the United Kingdom, the United States, Korea, and Colombia. A 3,406 bp multilocus sequence haplotype was determined for a total of 142 H. pylori isolates. Isolates were assigned to biogeographic populations, and recent transmission was measured as the occurrence of non-unique isolates, i.e., isolates whose sequences were identical to those of other isolates. Members of urban families were almost always infected with isolates from the biogeographic population that is common in their location. Non-unique isolates were frequent in urban families, consistent with familial transmission between parents and children or between siblings. In contrast, the diversity of H. pylori in the South African families was much more extensive, and four distinct biogeographic populations circulated in this area. Non-unique isolates were less frequent in South African families, and there was no significant correlation between kinship and similarity of H. pylori sequences. However, individuals who lived in the same household did have an increased probability of carrying the same non-unique isolates of H. pylori, independent of kinship. We conclude that patterns of spread of H. pylori under conditions of high prevalence, such as the rural South African families, differ from those in developed countries. Horizontal transmission occurs frequently between persons who do not belong to a core family, blurring the pattern of familial

In order to provide a theoretical basis for land reclamation in subsidence areas, the mining subsidence area is divided into three zones: zone I (tension zone), zone II (compression zone) and zone III (neutral zone). On this basis, the changes in soil characteristics in the three zones of a horizontal coal seam mining subsidence area are studied. The results show that, due to stretching, cracks develop in the soil of zone I: soil continuity is damaged, integrity is poor, and water and fertilizer leak away severely, so this zone shows decreased soil water holding capacity, declining soil fertility, and a trend toward coarsening and impoverishment of the soil. The soil mass in zone II is compressed and the soil structure is relatively intact, but the soil bulk density increases correspondingly, while the soil porosity gradually decreases and the permeability declines. The soil in zone III mainly undergoes vertical deformation and its integrity is better, but the influence of the mined-out area causes water and nutrients to migrate to the lower soil layers. This paper suggests that the land reclamation process should adopt reclamation methods corresponding to the variation laws of the soil in the three zones of the mining subsidence area, so as to improve soil physicochemical properties and achieve effective reclamation.

This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually large, training of autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that the autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and structural similarity index. It is found from the experimental results that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method.
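Training a full neural autoencoder is framework-heavy, but the patch-based idea can be illustrated with a linear autoencoder, which (trained with mean squared error) learns the same subspace as PCA. The image and dimensions below are synthetic stand-ins, not mammogram data:

```python
import numpy as np

def extract_patches(img, p=8):
    """Split a 2-D image into non-overlapping p x p patches (rows = patches)."""
    h, w = img.shape
    img = img[: h - h % p, : w - w % p]
    blocks = img.reshape(img.shape[0] // p, p, img.shape[1] // p, p)
    return blocks.transpose(0, 2, 1, 3).reshape(-1, p * p)

def pca_autoencode(patches, k=12):
    """Encode each patch with the top-k principal components and reconstruct.
    A linear autoencoder with MSE loss learns the same subspace as PCA."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    codes = centered @ vt[:k].T          # "encoder": k numbers per patch
    return codes @ vt[:k] + mean         # "decoder": reconstruction

rng = np.random.default_rng(1)
# toy image: smooth structure plus mild noise
y, x = np.mgrid[0:128, 0:128]
img = np.sin(x / 9.0) + np.cos(y / 13.0) + 0.05 * rng.standard_normal((128, 128))
patches = extract_patches(img)
recon = pca_autoencode(patches)
mse = float(np.mean((patches - recon) ** 2))
```

Each 64-pixel patch is stored as 12 coefficients, and the reconstruction error (`mse`) plays the role of the first quality metric the abstract mentions.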

Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall in a cavity wall. Large cavity walls are furthermore loaded by differential movements from the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of them are iterative, since exact instability solutions are complex to derive, not to mention the extra complexity introduced by dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie...

Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remain the cornerstone of the diagnostic work-up, in certain cases, imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis.

Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support for diagnostic and follow-up procedures. However, the amount of information generated by image capture devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of the compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

Horizontal drilling in the mature Lake Maracaibo field in Venezuela as a means of stimulating production is discussed. The Miocene sand where the horizontal well technology was applied presented a number of intervals of unconsolidated sand of varied continuity, pay intervals at ten to twenty feet thickness, and reservoir pressures mostly at hydrostatic or below hydrostatic values. This paper evaluates a horizontal drilling program in the Lagunillas Lago Production Unit of Maracaibo, involving 91 wells to date (since 1995). When assessed in economic terms, results indicate that horizontal wells are a better economic alternative than vertical wells. The same results also showed that drainage from thin sand reservoirs resulted in better production with horizontal well technology than production from vertical wells. Payout was less than two years for 50 per cent of the horizontal wells while 40 per cent had payouts of between two and four years. Profit to investment ratio was greater than two in the case of about 70 per cent of the horizontal wells drilled in 1996. 2 tabs., 10 figs.

Solid glass spheres – Zn22Al2Cu composites, having different densities and microstructures, were elaborated and studied under compression. Their elaboration process involves alloy melting, submersion of the spheres into the liquid alloy and finally air cooling. The achieved composites, with densities 2.6884, 2.7936 and 3.1219 g/cm³, were studied in as-cast and thermally induced, fine-grain matrix microstructures. Test samples of the composites were compressed at a 10⁻³ s⁻¹ strain rate, and their microstructure characterized before and after compression by using optical and scanning electron microscopes. Although they exhibit different compression behavior depending on their density and microstructure, all of them show an elastic region at low strains, reach their maximum stress (σ_max) at hundreds of MPa before the stress falls or collapses down to a lowest yield point (LYP), followed by an important plastic deformation at nearly constant stress (σ_p); beyond this plateau, extra deformation can be reached, to a limited extent, only by a significant stress increase. This behavior under compression stresses is similar to that reported for metal foams, with the fine-microstructure composites behaving nearest to metal foams under this pattern. Nevertheless, the relative values of the elastic modulus and of the maximum and plateau stresses do not follow the Ashby equations when the relative density is changed. Generally, the studied composites behave as foams under compression, except for their peculiar parameter values (σ_max, LYP, and σ_p).

An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

such that the most common spatio-temporal queries can still be answered approximately after the compression has taken place. In the process, we develop an implementation of the Douglas–Peucker path-simplification algorithm which works efficiently even in the case where the polygonal path given as input is allowed to self-intersect. For a polygonal path of size n, the processing time is O(n log^k n) for k=2 or k=3, depending on the type of simplification.
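For reference, the textbook recursive form of Douglas–Peucker (without the efficiency improvements for self-intersecting paths that the text describes) looks like this; the sample path is illustrative:

```python
def douglas_peucker(points, eps):
    """Classic recursive Douglas-Peucker polyline simplification:
    keep the point farthest from the chord; recurse if it exceeds eps."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # perpendicular distance of every interior point to the chord
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right  # drop duplicated split point

path = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(path, eps=0.5)
```

The recursive version runs in O(n²) in the worst case; the point of the paper's implementation is to do better even when the path self-intersects.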

The regularity of a compressive knit is defined as its ability to perform its function on burnt skin. This property is essential to avoid rejection of the material or toxicity problems. Objective: to make knits biocompatible with severely burnt human skin. We fabricated knits of elastic material. To ensure good adhesion to the skin, the elastic material was knitted with a tight loop. The length of yarn absorbed per stitch and the raw material were changed with each sample. The physical properties of each sample were measured and compared. Surface modifications were made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for hygiene reasons. In addition, washing restores their compressive properties. The jojoba oil microcapsules hydrate the burnt human skin; this moisturizing contributes to the firmness of the wound and gives flexibility to the skin. Compressive knits are biocompatible with burnt skin. The mixture of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.

We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence with a focus on the fundamental mechanisms that are responsible for such effects using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6 and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We found that, like shear layers, mixing is reduced as Mach number increases. However, data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image

Purpose. The purpose of this study is to verify whether the headless cannulated compression screw (HCCS) has higher biomechanical stability than the ordinary cannulated compression screw (OCCS) in the treatment of vertical femoral neck fractures. Materials and Methods. 30 synthetic femur models were equally divided into 2 groups, with 50°, 60°, and 70° Pauwels angles of femoral neck fracture, under 3D printed guiding plates and C-arm fluoroscopic guidance. The femur models were fixed with three parallel OCCSs as the OCCS group and three parallel HCCSs as the HCCS group. All specimens were tested for compressive strength and maximum load to failure with a loading rate of 2 mm/min. Results. The result showed that there was no significant difference in compressive strength at Pauwels angles of 50° and 60°. However, we observed that the maximum load to failure at Pauwels angles of 50°, 60°, and 70°, and the compressive strength at 70°, of the HCCS group showed better performance than the OCCS group. Conclusion. HCCS performs with better biomechanical stability than OCCS in the treatment of vertical femoral neck fracture, especially at a Pauwels angle of 70°.

Soil bioengineering is a construction technique using biological components for hydraulic and civil engineering solutions, based on the application of living plants and other auxiliary materials, including among others log wood. Considering the reliability of the construction, it is important to know about the durability and the degradation process of the wooden logs in order to estimate and retain the integral performance of a soil bioengineering system. An important performance indicator is the compression strength, but this parameter is not easy to examine by non-destructive methods. The Rinntech Resistograph is an instrument to measure the drilling resistance of a 3 mm wide needle in a wooden log. It is a quasi-non-destructive method, as the remaining hole has no weakening effect on the wood. This is an easy procedure but results in values that are hard to interpret. To assign drilling resistance values to specific compression strengths, wooden specimens were tested in an experiment and analysed with the Resistograph. Afterwards, compression tests were done on the same specimens. This should allow an easier interpretation of drilling resistance curves in the future. For detailed analyses, specimens were examined for branch inclusions, cracks and distances between annual rings. Wood specimens were tested perpendicular to the grain. First results show a correlation between drilling resistance and compression strength when using the mean drilling resistance, the average width of the annual rings and the mean range of the minima and maxima values as factors for the drilling resistance. The extended limit of proportionality, the offset yield strength and the maximum strength were taken as parameters for compression strength. Further investigations at a second point in time strengthen these results.

Highlights: • We propose a harmonic/inter-harmonic analysis scheme with compressed sensing theory. • The sparseness of harmonic signals in electrical power systems is proved. • The ratio formula of fundamental and harmonic components sparsity is presented. • A Spectral Projected Gradient-Fundamental Filter reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of Integrated Energy Systems enabled various distributed energy sources to access the system through different power electronic devices. This development has made the harmonic environment more complex. Harmonic detection and analysis methods of low complexity and high precision are needed to improve power quality. To overcome the shortcomings of large data storage capacities and high compression complexity in sampling under the Nyquist sampling framework, this paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs the functions of compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the basis of the Discrete Fourier Transform (DFT) is numerically calculated first. This is followed by a proof that the necessary conditions for compressed sensing are satisfied. Binary sparse measurement is then leveraged to reduce the storage space in the sampling unit of the proposed scheme. In the recovery process, the scheme uses a novel reconstruction algorithm called the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm to enhance the reconstruction precision. One of the actual microgrid systems is used as a simulation example. The results of the experiment show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computing complexity, and has good
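The SPG-FF reconstruction algorithm is specific to this paper, but the underlying compressed-sensing workflow — a signal sparse in a cosine basis, randomly subsampled, then recovered by a greedy sparse solver — can be sketched with generic Orthogonal Matching Pursuit (all parameters illustrative):

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis; row k is the k-th cosine atom."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * t + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-fit the coefficients on the support."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s

n, m = 128, 64
psi = dct_basis(n).T               # columns are atoms: x = psi @ s
s_true = np.zeros(n)
s_true[5], s_true[35] = 1.0, 0.8   # "fundamental" plus one harmonic
x = psi @ s_true
rng = np.random.default_rng(0)
idx = rng.choice(n, size=m, replace=False)   # compressive time samples
s_hat = omp(psi[idx, :], x[idx], sparsity=2)
```

Here 64 samples suffice to recover a 2-sparse spectrum exactly; the paper's contribution is a reconstruction step tailored to the dominant fundamental component of power signals.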

Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
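The baseline that coil is compared against — plain Lempel-Ziv compression of a flat file — is easy to reproduce on synthetic data. The sequences below are randomly generated stand-ins, not GenBank records:

```python
import gzip
import random

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Ratio of original size to gzip (Lempel-Ziv) compressed size."""
    return len(data) / len(gzip.compress(data, compresslevel=level))

random.seed(0)
base = "".join(random.choice("ACGT") for _ in range(10_000))

def mutate(seq: str, rate: float = 0.01) -> str:
    """Introduce random point substitutions, mimicking related sequences."""
    out = list(seq)
    for i in range(len(out)):
        if random.random() < rate:
            out[i] = random.choice("ACGT")
    return "".join(out)

# a synthetic "flat file" of 50 highly similar FASTA-style records
records = "".join(f">seq{i}\n{mutate(base)}\n" for i in range(50))
ratio = compression_ratio(records.encode())
```

gzip's 32 KB window is the key limitation the abstract alludes to: redundancy between records far apart in a large flat file is invisible to it, which is what database-aware schemes like coil exploit.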

Results of a simulation of an openhole horizontal well that was treated with hydrajet fracturing, a new fracturing process wherein fractures are placed at different locations in a horizontal well without using sectional isolation techniques, are discussed. The process uses high-pressure jetting to concentrate fracturing energy at a precise fracture location, and data are obtained by means of surface and downhole pressure recorders, flow recorders, and tracers. This technique was used in a substantially depleted horizontal well in New Mexico with good results. The new process is reported to be expensive to implement, which prevents widespread application at the present time. 7 refs., 9 figs.

A well completion scheme currently in use in a thick, large, elongated carbonate anticline Middle-East oil reservoir is described. This method of well completion calls for a combination of an open hole horizontal section penetrating the top 10 feet of the reservoir and a cased or undisturbed vertical segment through the thick formation. The horizontal section is used for producing and the vertical segment is used for monitoring purposes. Field experience and supported reservoir simulation exercises have shown that the horizontal application is superior to conventional vertical completion both from the economic and from the sweep point of view. 4 refs., 12 figs.

At an industrial site in Bruchsal (Germany) a huge trichloroethene contamination was found. After common remedial actions proved to be widely ineffective, new investigations led to a highly contaminated thin aquifer above the main aquifer. The investigation and the beginning of the remediation of the thin aquifer by two horizontal wells is described in this paper. Special attention was given to the dependence between precipitation and the flow direction in the thin aquifer and to hydraulic connections between the thin and the main aquifer. Also a short introduction into a new remedial technique by horizontal wells and first results of the test phase of the horizontal wells are given.

The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

In the scheme of magnetic bunch compression, an electron bunch with a linear energy chirp (energy-bunch length correlation), imposed by an upstream RF cavity, is sent to a magnetic chicane. The bunch length at the exit of the chicane can thus be manipulated via the pathlength-energy dependence due to chicane dispersion. As a linearly energy-chirped bunch (δ-z correlation) is transported through a dispersive region (x-δ correlation), it acquires a linear horizontal-longitudinal (x-z) correlation in configuration space (bunch tilt). Compared with a nontilted bunch, this x-z correlation modifies the geometry of particle interaction with respect to the direction of particle motion, which consequently modifies the retardation solution and the effective CSR forces. The simulation result of the CSR field for a tilted thin beam was presented earlier by Dohlus [1]. In this paper, we first give an example of the bunch x-z correlation, or bunch tilt, in a bunch compression chicane. The effect of this x-z correlation on the retardation solution and the longitudinal effective force is then analyzed for a line bunch with linear energy chirp transported by the design optics.
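The chirp-plus-dispersion mechanism can be illustrated with a minimal linear-optics sketch. All numbers below (chirp h, dispersion η, R56, bunch length) are invented toy values, not parameters from the paper:

```python
import numpy as np

# Toy model: a bunch with linear energy chirp delta = h*z passing through a
# dispersive region acquires x = eta*delta (an x-z tilt), while the
# pathlength-energy dependence R56 changes the bunch length.
rng = np.random.default_rng(0)
z = rng.normal(0.0, 1e-3, 10000)                 # longitudinal position [m]
delta = 5.0 * z                                  # linear chirp, h = 5 per m (assumed)
eta, R56 = 0.3, -0.15                            # hypothetical dispersion / R56 [m]
x = eta * delta + rng.normal(0.0, 1e-4, z.size)  # dispersive offset + betatron spread
z_out = z + R56 * delta                          # compressed longitudinal coordinate
tilt = np.corrcoef(x, z_out)[0, 1]               # x-z correlation (bunch tilt)
print(f"tilt correlation {tilt:.3f}, compression factor {z.std() / z_out.std():.2f}")
```

With these toy values the bunch is compressed by 1/(1 + h·R56) = 4 and the x-z correlation is nearly perfect, which is the tilt geometry the analysis above refers to.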

EGRET data are usually analysed on the basis of the Maximum-Likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background, such as the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.

Usually, for bin packing problems, we try to minimize the number of bins used or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used … algorithms First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find …
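A toy sketch (not from the paper) of the two off-line heuristics mentioned above; under the maximum resource objective, the ordering that opens more bins is the better one. The item sizes are illustrative:

```python
def first_fit(items, capacity=1.0):
    """Place each item into the first open bin with room; open a new bin otherwise."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:  # tolerance for float sums
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

items = [0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1, 0.6]
ffi = first_fit(sorted(items))                  # First-Fit-Increasing
ffd = first_fit(sorted(items, reverse=True))    # First-Fit-Decreasing
print(len(ffi), len(ffd))                       # 5 4
```

On this instance First-Fit-Increasing opens 5 bins against First-Fit-Decreasing's 4, so for maximizing the number of bins used, the increasing order wins here.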

A prototype for the SDC end-cap electromagnetic (EM) calorimeter, complete with a pre-shower and a shower maximum detector, was tested in beams of electrons and pions at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wavelength-shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs.

A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

In recent commissioning of the 10 kW FEL at Jefferson Lab, as one varies the energy chirp of the electron bunches at the entrance of the chicane to make the bunch more and more compressed at the exit of the chicane, a sudden increase in the energy spread is observed [1] at the crossover of the full compression point. This phenomenon is accompanied by a significant increase of the THz radiation from the electron beam. A similar observation was made earlier in the CTF II CSR experiment at CERN [2]. For example, for 5 nC bunch charge, ''the mean momentum spread increased by a factor of 4 at full compression with respect to the initial spread, and decreased to a factor of 3 larger than the initial spread at overcompression''. There is also a sudden drop of mean momentum at full compression, along with a sudden increase in the horizontal emittance (see Fig. 5 of [2]). As a first step toward understanding this phenomenon, in this paper we analyze the effective longitudinal CSR force using our recent formulation of CSR dynamics [3], and show that there is a sudden increase in the magnitude of the effective longitudinal CSR force at the crossover of the full compression point. A numerical example is given for an LCLS-type chicane. The physical picture of this sudden increase is also discussed.

This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different file sizes. Image quality was then assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after Wavelet compression than after JPEG compression for a given image size. Overall, it was shown that images could be compressed to 2.5 percent of original size with JPEG and 1.7 percent with Wavelet compression before fine detail was lost or image quality became too poor to make a reliable diagnosis.
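The objective metric in (i) is simple to compute; a minimal sketch with a toy 4×4 image (the single-pixel "artifact" is illustrative, not real compression):

```python
import numpy as np

def rms_error(original, compressed):
    """Root-mean-square pixel difference between two images of equal shape."""
    diff = original.astype(float) - compressed.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
degraded = img.copy()
degraded[0, 0] += 4          # toy "compression" artifact on one of 16 pixels
print(rms_error(img, degraded))   # 1.0  (sqrt(16 / 16))
```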

Compression and expansion processes of cross-linked sodium polyacrylate hydrogels under mechanical pressure were investigated. A packed spherical gel bed shows irreversible deformation when the applied pressure is decreased; the expansion behavior depends on the maximum pressure applied to the gel bed. The time required to attain a certain degree of deformation is directly proportional to the square of the total solid volume of the gel bed; this relation is very similar to that observed in expression or expansion processes of ordinary solid-liquid mixtures. The driving force of the deformation is an effective osmotic pressure gradient in the gel bed, where the effective osmotic pressure of the gel is the difference between the swelling pressure of the gel and the pressure applied to the gel. The flow rate of liquid through any gel layer can be expressed by Darcy's equation. The deformation ceases when the swelling pressure of each gel particle is equal to the applied pressure. Thus, the deformation of a packed gel bed can be recognized as a process of equalizing the swelling pressure distribution in the bed. (author)

In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier-based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.


The purpose of data compression is storage and transmission of images with minimal memory for storage and bandwidth for transmission, while maintaining robustness in the presence of transmission noise or storage medium errors. Here, the fast Hartley transform (FHT) is used for transformation and a new thresholding method is devised. The FHT is used instead of the fast Fourier transform (FFT), providing calculation at least as fast as the fastest FFT algorithms. This real-valued transform requires only half the memory array space for storing transform coefficients and allows easy implementation on very large-scale integrated circuits, because the same formula serves for both forward and inverse transformation and the algorithm is conceptually straightforward. Threshold values were adaptively selected according to the correlation factor of each of the equally divided blocks of the image. This approach therefore provided a coding scheme that retained maximum information with minimum image bandwidth. Overall, the results suggested that the Hartley transform adaptive thresholding approach yields improved fidelity, shorter decoding time, and greater robustness in the presence of noise than previous approaches.
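As an illustration (not the paper's code), the discrete Hartley transform can be computed from the FFT, and is its own inverse up to a factor of N. The fixed threshold rule below is a simplified stand-in for the adaptive, correlation-based selection described above:

```python
import numpy as np

def dht(x):
    # Discrete Hartley transform via the FFT: cas kernel = cos + sin,
    # so H(k) = Re(X(k)) - Im(X(k)) for the DFT X of a real signal x.
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(H):
    # The DHT matrix squared equals N*I, so the inverse is the DHT itself / N.
    return dht(H) / len(H)

x = np.random.default_rng(0).normal(size=64)
H = dht(x)
thresh = 0.5 * np.abs(H).mean()      # hypothetical threshold choice
H_kept = np.where(np.abs(H) >= thresh, H, 0.0)   # discard small coefficients
x_rec = idht(H_kept)
print(np.max(np.abs(x - x_rec)))     # reconstruction error after thresholding
```

Without thresholding the round trip `idht(dht(x))` reconstructs `x` exactly, which is the self-inverse property the abstract exploits for shared forward/inverse hardware.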

We describe an electron bunch compression scheme suitable for use in a light source driven by a superconducting radio frequency (SRF) linac. The key feature is the use of a recirculating linac to perform the initial bunch compression. Phasing of the second pass beam through the linac is chosen to de-chirp the electron bunch prior to acceleration to the final energy in an SRF linac ('afterburner'). The final bunch compression is then done at maximum energy. This scheme has the potential to circumvent some of the most technically challenging aspects of current longitudinal matches; namely transporting a fully compressed, high peak current electron bunch through an extended SRF environment, the need for a RF harmonic linearizer and the need for a laser heater. Additional benefits include a substantial savings in capital and operational costs by efficiently using the available SRF gradient.

Time-resolved compression of a laser-driven solid deuterated plastic sphere with a cone was measured with flash Kα x-ray radiography. A spherically converging shockwave launched by nanosecond GEKKO XII beams was used for compression while a flash of 4.51 keV Ti Kα x-ray backlighter was produced by a high-intensity, picosecond laser LFEX (Laser for Fast ignition EXperiment) near peak compression for radiography. Areal densities of the compressed core were inferred from two-dimensional backlit x-ray images recorded with a narrow-band spherical crystal imager. The maximum areal density in the experiment was estimated to be 87 ± 26 mg/cm². The temporal evolution of the experimental areal density is in good agreement with that simulated with a 2-D radiation-hydrodynamics code.


Manufacturers' instructions for multi-component compression bandage systems state that these products can remain in place for up to 7 days during therapy of venous leg ulcers. This implies that the pressure needed is sustained during this time. The present research investigated the persistence of pressure of compression systems over 7 days. All 6 compression systems available in Germany at the time of the trial were tested on 35 volunteers without signs of venous leg disease. Bandaging with short-stretch bandages was included for comparison. Pressure was measured using PicoPress®. Initially, all products showed a sufficient resting pressure of 40 mm Hg as checked with a pressure monitor; in all but one system, however, the pressure fell by at least 23.8%, and by up to 47.5%, over the period of 7 days. The currently available compression systems are thus not fit to maintain the required pressure. Optimized products need to be developed.

Highlights: • The influence of compression on MSW flushing was evaluated using 13 tracer tests. • Compression has little effect on solute diffusion times in MSW. • Lithium tracer was conservative in non-degrading waste but not in degrading waste. • Bromide tracer was conservative, but deuterium was not. - Abstract: The effect of applied compression on the nature of liquid flow and hence the movement of contaminants within municipal solid waste was examined by means of thirteen tracer tests conducted on five separate waste samples. The conservative nature of bromide, lithium and deuterium tracers was evaluated and linked to the presence of degradation in the sample. Lithium and deuterium tracers were non-conservative in the presence of degradation, whereas the bromide remained effectively conservative under all conditions. Solute diffusion times into and out of less mobile blocks of waste were compared for each test under the assumption of dominantly dual-porosity flow. Despite the fact that hydraulic conductivity changed strongly with applied stress, the block diffusion times were found to be much less sensitive to compression. A simple conceptual model, whereby flow is dominated by sub-parallel low permeability obstructions which define predominantly horizontally aligned less mobile zones, is able to explain this result. Compression tends to narrow the gap between the obstructions, but not significantly alter the horizontal length scale. Irrespective of knowledge of the true flow pattern, these results show that simple models of solute flushing from landfill which do not include depth dependent changes in solute transport parameters are justified.

represented by curves from X-ray diffraction analysis and differential thermogravimetric analysis, as well as particle size distributions. PLS gave maximum explained variance in compressive strength at 1, 2, 7 and 28 days of 93%, 90%, 79% and 67%, respectively. The high explained variance makes the prediction...

Full Text Available Efficient image compression approaches can provide the best solutions to the recent growth of data-intensive and multimedia-based applications. As presented in many papers, Haar matrix-based methods and wavelet analysis can be used in various areas of image processing such as edge detection, preserving, smoothing or filtering. In this paper, color image compression analysis and synthesis based on Haar and modified Haar is presented. The standard Haar wavelet transformation with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank; the vertical and horizontal Haar filters are composed to construct four 2-dimensional filters, which are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd-based numbers, i.e. N=3 and N=5, to generate many solution sets; these sets are tested using the energy function or a numerical method to obtain the optimum one. The Haar transform is simple, efficient in memory usage due to its high spread of zero values (it can exploit the sparsity principle), and exactly reversible without the edge effects of the DCT (Discrete Cosine Transform). The implemented Matlab simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on Haar and modified Haar techniques in attaining an efficient compression ratio (C.R.) and a higher peak signal-to-noise ratio (PSNR), and the resulting images are much smoother than standard JPEG, especially for high C.R. A comparison between standard JPEG, Haar, and modified Haar techniques is given finally, which confirms that modified Haar performs best among them.
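A minimal single-level 2-D Haar analysis/synthesis sketch for the standard N=2 filter bank described above (the modified odd-N variants are not shown; zeroing a subband is a crude stand-in for real coefficient coding):

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar transform (N=2): low/high-pass along rows,
    then along columns, producing the LL, LH, HL, HH subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row low-pass (averages)
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row high-pass (differences)
    LL = (a[0::2] + a[1::2]) / 2.0
    LH = (a[0::2] - a[1::2]) / 2.0
    HL = (d[0::2] + d[1::2]) / 2.0
    HH = (d[0::2] - d[1::2]) / 2.0
    return LL, LH, HL, HH

def inv_haar2d_level(LL, LH, HL, HH):
    """Exact inverse of haar2d_level: undo column step, then row step."""
    rows, cols = LL.shape
    a = np.empty((2 * rows, cols)); d = np.empty((2 * rows, cols))
    a[0::2], a[1::2] = LL + LH, LL - LH
    d[0::2], d[1::2] = HL + HH, HL - HH
    img = np.empty((2 * rows, 2 * cols))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img

img = np.arange(64, dtype=float).reshape(8, 8)
LL, LH, HL, HH = haar2d_level(img)
crude = inv_haar2d_level(LL, LH, HL, np.zeros_like(HH))  # drop HH subband
```

The round trip with all four subbands is exact, which is the "exactly reversible" property noted in the abstract; dropping the HH subband gives a lossy approximation.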

Rare occurrence of the left maxillary horizontal third molar impaction, the right maxillary third molar vertical impaction and the left mandibular third molar vertical impaction with inferior alveolar nerve proximity in a 30 year old female: a case report.

Heritable microbial symbionts can have important effects on many aspects of their hosts' biology. Acquisition of a novel symbiont strain can provide fitness benefits to the host, with significant ecological and evolutionary consequences. We measured barriers to horizontal transmission by

This paper reports that logging-while-drilling (LWD) measurement of two resistivities with different characteristics has led to a new interpretation method for the analysis of horizontal wells. By logging deep and shallow resistivity in real time, marker beds were identified to help maintain the well bore trajectory. The resistivity measurements were split into vertical and horizontal components to provide additional information for formation evaluation. In 1945, Ark Fuel Co. discovered and began developing the Olla field on the crest of the La Salle arch in La Salle Parish, La. Oil production comes from the Wilcox formation, from alluvial sand packages that range in thickness from 3 ft to 120 ft. Now operated by Oxy U.S.A. Inc., Olla field was chosen in 1990 for a horizontal well pilot project. It was hoped that a horizontal well could alleviate water coning in one of the field's more productive sand packages, the 40-ft Cruse sand.

The horizontal-longitudinal correlations of the acoustic field in deep water are investigated based on the experimental data obtained in the South China Sea. It is shown that the horizontal-longitudinal correlation coefficients in the convergence zone are high, and the correlation length is consistent with the convergence zone width, which depends on the receiver depth and range. The horizontal-longitudinal correlation coefficients in the convergence zone also have a division structure for the deeper receiver. The signals from the second part of the convergence zone are still correlated with the reference signal in the first part. The horizontal-longitudinal correlation coefficients in the shadow zone are lower than that in the convergence zone, and the correlation length in the shadow zone is also much shorter than that in the convergence zone. The numerical simulation results by using the normal modes theory are qualitatively consistent with the experimental results. (paper)

The horizontal dimensions of ionosphere agitation provoked by underground nuclear explosions have been experimentally determined for 13 explosions conducted at the Balapan test site of the Semipalatinsk test site. (author)

This report summarizes the Horizontal Curve Virtual Peer Exchange sponsored by the Federal Highway Administration (FHWA) Office of Safety's Roadway Safety Professional Capacity Building Program on June 17, 2014. This virtual peer exchange was the f...

This thesis submitted to the Swiss Federal Institute of Technology ETH in Zurich presents the development and validation of a model for the condensation of steam in horizontal pipes. Condensation models were introduced and developed particularly for the application in the emergency cooling system of a Gen-III+ boiling water reactor. Such an emergency cooling system consists of slightly inclined horizontal pipes, which are immersed in a cold water tank. The pipes are connected to the reactor pressure vessel. They are responsible for a fast depressurization of the reactor core in the case of accident. Condensation in horizontal pipes was investigated with both one-dimensional system codes (RELAP5) and three-dimensional computational fluid dynamics software (ANSYS FLUENT). The performance of the RELAP5 code was not sufficient for transient condensation processes. Therefore, a mechanistic model was developed and implemented. Four models were tested on the LAOKOON facility, which analysed direct contact condensation in a horizontal duct

in reservoir characterisation, specifically in terms of fracture delineation. We propose a travel-time-based approach to estimate the anellipticity parameter η and the symmetry axis azimuth ϕ of a horizontal transversely isotropic medium, given an inhomogeneous

A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
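The "lossy plus residual coding" principle can be sketched as follows, with a rank-r SVD standing in for the paper's matrix/tensor decomposition and a uniform quantizer on the residual providing the specifiable maximum absolute error (all parameters are illustrative):

```python
import numpy as np

def near_lossless_encode(X, rank, eps):
    """Lossy layer: rank-r SVD approximation (stand-in for the paper's
    matrix/tensor decomposition). Residual layer: uniform quantization with
    step 2*eps, bounding the absolute reconstruction error by eps."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = U[:, :rank] * s[:rank] @ Vt[:rank]
    q = np.round((X - lossy) / (2 * eps)).astype(np.int64)  # entropy-code these
    return lossy, q

def near_lossless_decode(lossy, q, eps):
    return lossy + q * 2 * eps

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 256))                 # toy multichannel signal
lossy, q = near_lossless_encode(X, rank=4, eps=0.01)
X_rec = near_lossless_decode(lossy, q, eps=0.01)
print(np.max(np.abs(X - X_rec)) <= 0.01)       # True: error bounded by eps
```

The bound holds regardless of how well the lossy layer fits, because rounding the residual to multiples of 2·eps leaves at most eps of error per sample; this is the guarantee the abstract describes.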

Within the framework of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann entropy and Shannon entropy, is defined. A maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis entropy, in deriving power laws.

A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. … Yagi–Uda arrays with equal and unequal spacing have also been optimised, with experimental verification. …

represents maximum dry density, signifies plastic limit and is liquid limit. Researchers [6, 7] estimate compaction parameters. Aside from the correlation existing between compaction parameters and other physical quantities there are some other correlations that have been investigated by other researchers. The well-known.

The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
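The quoted order-of-magnitude relation can be checked with standard input values (the choice T_BBN ≈ 1 MeV is a rough convention, so only the order of magnitude is meaningful):

```python
# Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5), all in GeV.
T_BBN = 1e-3                       # BBN temperature, ~1 MeV
M_pl = 1.22e19                     # Planck mass
y_e = 2 ** 0.5 * 0.511e-3 / 246.0  # electron Yukawa from m_e and v = 246 GeV
v_h = T_BBN ** 2 / (M_pl * y_e ** 5)
print(f"v_h ~ {v_h:.0f} GeV")      # a few hundred GeV, consistent with O(300 GeV)
```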

Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

Inspired by Jacobson's thermodynamic approach, Cai et al. have shown the emergence of the Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example, we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

Full Text Available The two-phase flow in a short horizontal channel of rectangular cross-section of 1 × 19 mm² has been studied experimentally. Five conventional two-phase flow patterns have been detected (bubble, churn, stratified, annular and jet) and the transitions between them have been determined. It is shown that a change in the width of the horizontal channel has a substantial effect on the boundaries between the flow regimes.

Predicting the future performance of horizontal wells under varying pumping conditions requires estimates of basic aquifer parameters, notably transmissivity and storativity. For vertical wells, there are well-established methods for estimating these parameters, typically based on either the recovery from induced head changes in a well or from the head response in observation wells to pumping in a test well. Comparable aquifer parameter estimation methods for horizontal wells have not been presented in the ground water literature. Formation parameter estimation methods based on measurements of pressure in horizontal wells have been presented in the petroleum industry literature, but these methods have limited applicability for ground water evaluation and are based on pressure measurements in only the horizontal well borehole, rather than in observation wells. This paper presents a simple and versatile method by which pumping test procedures developed for vertical wells can be applied to horizontal well pumping tests. The method presented here uses the principle of superposition to represent the horizontal well as a series of partially penetrating vertical wells. This concept is used to estimate a distance from an observation well at which a vertical well that has the same total pumping rate as the horizontal well will produce the same drawdown as the horizontal well. This equivalent distance may then be associated with an observation well for use in pumping test algorithms and type curves developed for vertical wells. The method is shown to produce good results for confined aquifers and unconfined aquifers in the absence of delayed yield response. For unconfined aquifers, the presence of delayed yield response increases the method error.
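A sketch of the superposition idea described above, assuming the Theis solution W(u) = E₁(u) for each segment well; the function name, well geometry, and aquifer numbers are illustrative, not from the paper:

```python
import numpy as np
from scipy.special import exp1      # Theis well function W(u) = E1(u)
from scipy.optimize import brentq

def equivalent_distance(obs_xy, well_xys, T, S, t):
    """Distance at which ONE vertical well, pumping the horizontal well's
    total rate Q, produces the same Theis drawdown as N segment wells each
    pumping Q/N. Solves W(u_eq) = mean of the segment W(u_i) values."""
    u = lambda r: r ** 2 * S / (4.0 * T * t)
    W_avg = np.mean([exp1(u(np.hypot(obs_xy[0] - x, obs_xy[1] - y)))
                     for x, y in well_xys])
    # W(u(r)) decreases monotonically with r, so a sign change is bracketed
    return brentq(lambda r: exp1(u(r)) - W_avg, 1e-3, 1e5)

# Hypothetical example: a 100 m lateral discretized into 11 segment wells
segments = [(x, 0.0) for x in np.linspace(0.0, 100.0, 11)]
r_eq = equivalent_distance(obs_xy=(50.0, 40.0), well_xys=segments,
                           T=500.0, S=1e-4, t=1.0)   # T [m^2/d], t [d]
print(round(r_eq, 1))   # lies between the nearest and farthest segment distances
```

Because W is strictly decreasing in distance, the equivalent distance always falls between the closest and farthest segment-to-observation distances, which is what lets vertical-well type curves be reused as the paper describes.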

Horizontal gaze palsy with progressive scoliosis (HGPPS) is a rare congenital disorder characterized by absence of conjugate horizontal eye movements and progressive scoliosis developing in childhood and adolescence. We present a child with clinical and neuroimaging findings typical of HGPPS. CT and MRI of the brain demonstrated pons hypoplasia, absence of the facial colliculi, butterfly configuration of the medulla and a deep midline pontine cleft. We briefly discuss the imaging aspects of this rare entity in light of the current literature. (orig.)

The existence of two motive forces on a Crookes radiometer has complicated the investigation of either force independently. The thermal creep shear force in particular has been subject to differing interpretations of the direction in which it acts and its order of magnitude. In this article we provide a horizontal vane radiometer design which isolates the thermal creep shear force. The horizontal vane radiometer is explored through experiment, kinetic theory, and the Direct Simulation Monte Carlo method.

The effect of vertical shear on the horizontal dispersion properties of passive tracer particles on the continental shelf of South Mediterranean is investigated by means of observative and model data. In-situ current measurements reveal that vertical velocity gradients in the upper mixed layer decorrelate quite fast (∼ 1 day), whereas basin-scale ocean circulation models tend to overestimate such decorrelation time because of finite resolution effects. Horizontal dispers...

A fundamental concept in biology is that heritable material is passed from parents to offspring, a process called vertical gene transfer. An alternative mechanism of gene acquisition is horizontal gene transfer (HGT), which involves the movement of genetic material between different species. Horizontal gene transfer has been found to be prevalent in prokaryotes but very rare in eukaryotes. In this paper, we investigate horizontal gene transfer in the human genome. From the pair-wise alignments between the human genome and 53 vertebrate genomes, 1,467 human genome regions (2.6 million bases) from all chromosomes were found to be more conserved with non-mammals than with most mammals. These regions involve 642 known genes, which are enriched for ion binding. Compared to known horizontal gene transfer regions in the human genome, there was little overlap, which indicates that horizontal gene transfer is more common in the human genome than previously expected. Horizontal gene transfer affects hundreds of human genes, and this study provides insight into potential mechanisms of HGT in the human genome.
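The screening criterion described above, a region more conserved with non-mammals than with most mammals, can be sketched as a simple comparison of per-species alignment scores. This is a hypothetical illustration only; the function, the quantile cutoff, and the score values are assumptions, not the paper's actual pipeline:

```python
def is_hgt_candidate(mammal_scores, nonmammal_scores, quantile=0.75):
    """Flag a region whose best non-mammal conservation exceeds most mammal scores.

    Scores are per-species alignment identities in [0, 1]; the 0.75 quantile
    cutoff is an illustrative choice, not the paper's criterion.
    """
    if not mammal_scores or not nonmammal_scores:
        return False
    ms = sorted(mammal_scores)
    # conservation level exceeded by only the top quarter of mammals
    cutoff = ms[int(quantile * (len(ms) - 1))]
    return max(nonmammal_scores) > cutoff
```

Applied genome-wide to aligned windows, a test of this shape yields the candidate regions that are then checked against known HGT regions.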

The physical properties relating to the 4f electrons in cerium phosphide, especially their temperature dependence and the isomorphous transition that occurs at around 10 GPa, were studied by means of x-ray powder diffraction and charge density distribution maps derived by the maximum-entropy method. The compressibility of CeP was determined precisely using a helium pressure medium, and an anomaly indicating the isomorphous transition was observed in the compressibility. We also discuss the anisotropic charge density distribution of the Ce ions and its temperature dependence.

It is clinically important to evaluate tongue function in terms of rehabilitation of swallowing and eating ability. We have developed a disposable tongue pressure measurement device designed for clinical use. In this study we used this device to determine standard values of maximum tongue pressure in adult Japanese. Eight hundred fifty-three subjects (408 male, 445 female; 20-79 years) were selected for this study. None of the participants had a history of dysphagia, and all maintained occlusal contact in the premolar and molar regions with their own teeth. A balloon-type disposable oral probe was used to measure tongue pressure by asking subjects to compress it onto the palate for 7 s with maximum voluntary effort. Values were recorded three times for each subject, and the mean value was defined as the maximum tongue pressure. Although maximum tongue pressure was higher for males than for females in the 20-49-year age groups, there was no significant difference between males and females in the 50-79-year age groups. The maximum tongue pressure of the seventies age group was significantly lower than that of the twenties to fifties age groups. It may be concluded that maximum tongue pressure declines with primary aging. Males may lose tongue strength with age at a faster rate than females; however, subsequent decreases in strength proceeded in parallel for male and female subjects.

The trajectories of beam-edge electrons are calculated in the transition region between an electrostatic gun and an increasing magnetic field for various field shapes, transition lengths, and cathode fluxes, assuming that the resultant beam is of the Brillouin flow type. The results show that the axial gradient of the magnetic field is responsible both for the amount of magnetic compression and for the proper injection conditions. It therefore becomes possible to predict, from the known characteristics of any fairly laminar electrostatic gun, the necessary axial gradient of the magnetic field and the axial position of the gun with respect to the field build-up. (orig.)

Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

The interplay of thermal noise and molecular forces is responsible for surprising features of liquids at sub-micrometer length scales, in particular at interfaces. Not only does the surface tension depend on the size of an applied distortion, and nanoscopic thin liquid films dewet faster than would be expected from hydrodynamics, but the dispersion relation of capillary waves at the nanoscale also differs from the familiar macroscopic behavior. Starting with the stochastic Navier-Stokes equation, we study the coupling of capillary waves to acoustic surface waves, which is possible in compressible fluids. We find propagating 'acoustic-capillary waves' at nanometer wavelengths, where capillary waves in incompressible fluids are overdamped.