This document reports and compares results compiled from chemical analyses and biological testing of coal liquefaction process materials that were fractionally distilled, after production, into comparable boiling-point range cuts. Comparative analyses were performed on solvent refined coal (SRC)-I, SRC-II, H-Coal, EDS, and integrated two-stage liquefaction (ITSL) distillate materials. Mutagenicity and carcinogenicity assays were conducted in conjunction with chromatographic and mass spectrometric analyses to provide detailed, comparative chemical and biological assessments. Where possible, results obtained from the distillate cuts are compared to those from coal liquefaction materials with limited boiling ranges. Work reported here was conducted by investigators in the Biology and Chemistry Department at the Pacific Northwest Laboratory (PNL), Richland, WA. 38 refs., 16 figs., 27 tabs.

This document reports the results from chemical analyses and biological testing of process materials sampled during operation of the Wilsonville Advanced Coal Liquefaction Research and Development Facility (Wilsonville, Alabama) in both the noncoupled or nonintegrated (NTSL Run 241) and coupled or integrated (ITSL Run 242) two-stage liquefaction operating modes. Mutagenicity and carcinogenicity assays were conducted in conjunction with chromatographic and mass spectrometric analyses to provide detailed, comparative chemical and biological assessments of several NTSL and ITSL process materials. In general, the NTSL process materials were biologically more active and chemically more refractory than the analogous ITSL process materials. To provide perspective, the NTSL and ITSL results are compared with those from similar testing and analyses of other direct coal liquefaction materials from the solvent refined coal (SRC)-I, SRC-II, and EDS processes. Comparisons are also made between two-stage coal liquefaction materials from the Wilsonville pilot plant and the C.E. Lummus PDU-ITSL Facility in an effort to assess scale-up effects in these two similar processes. 36 references, 26 figures, 37 tables.

Coal-derived materials from two coal conversion processes were screened for potential ecological toxicity. We examined the toxicity of materials from different engineering or process options to an aquatic invertebrate and also related potential hazard to the relative concentration, composition, and stability of water-soluble components. Of the materials tested from the Integrated Two-Stage Liquefaction (ITSL) process, only the LC-Finer (LCF) 650°F distillate was highly soluble in water at 20°C. The LCF feed and Total Liquid Product (TLP) were not in a liquid state at 20°C and were relatively insoluble in water. Relative hazard to daphnids from ITSL materials was as follows: LCF 650°F distillate ≥ LCF feed ≥ TLP. For Exxon Donor Solvent (EDS) materials, process solvent produced in the bottoms recycle mode was more soluble in water than once-through process solvent and, hence, slightly more acutely toxic to daphnids. When compared to other coal liquids or petroleum products, the ITSL and EDS liquids were intermediate in toxicity; relative hazard ranged from 1/7 to 1/13 of that of the Solvent Refined Coal (SRC)-II distillable blend, but was several times greater than the relative hazard of No. 2 diesel fuel oil or Prudhoe Bay crude oil. Although compositional differences in the water-soluble fractions (WSF) were noted among materials, phenolics were the major compound class in all WSFs and probably the primary contributor to acute toxicity.

This report presents data from the chemical analysis and biological testing of coal liquefaction materials obtained from the Hydrocarbon Research, Incorporated (HRI) catalytic two-stage liquefaction (CTSL) process. Materials from both an experimental run and a 25-day demonstration run were analyzed. Chemical methods of analysis included adsorption column chromatography, high-resolution gas chromatography, gas chromatography/mass spectrometry, low-voltage probe-inlet mass spectrometry, and proton nuclear magnetic resonance spectroscopy. The biological activity was evaluated using the standard microbial mutagenicity assay and an initiation/promotion assay for mouse-skin tumorigenicity. Where applicable, the results obtained from the analyses of the CTSL materials have been compared to those obtained from the integrated and nonintegrated two-stage coal liquefaction processes. 18 refs., 26 figs., 22 tabs.

Coal-derived materials from experimental runs of Hydrocarbon Research Incorporated's (HRI) catalytic two-stage liquefaction (CTSL) process were chemically characterized and screened for microbial mutagenicity. This process differs from other two-stage coal liquefaction processes in that a catalyst is used in both stages. Samples from both the first and second stages were class-fractionated by alumina adsorption chromatography. The fractions were analyzed by capillary column gas chromatography; gas chromatography/mass spectrometry; direct-probe, low-voltage mass spectrometry; and proton nuclear magnetic resonance spectrometry. Mutagenicity assays were performed with the crudes and class fractions in Salmonella typhimurium strain TA98. Preliminary results of the chemical analyses indicate that >80% of the CTSL materials from both process stages were aliphatic hydrocarbon and polynuclear aromatic hydrocarbon (PAH) compounds. Furthermore, the gross and specific chemical composition of process materials from the first stage was very similar to that of the second stage. In general, the unfractionated materials were only slightly active in the TA98 mutagenicity assay. As with other coal liquefaction materials investigated in this laboratory, the nitrogen-containing polycyclic aromatic compound (N-PAC) class fractions were responsible for the bulk of the mutagenic activity of the crudes. Finally, it was shown that this activity correlated with the presence of amino-PAH. 20 figures, 9 tables.

The Two-Stage Coal Liquefaction process became operational at Wilsonville in May 1981, with the inclusion of an H-OIL ebullated-bed catalytic reactor. The two-stage process was initially operated in a nonintegrated mode and has recently been reconfigured to fully integrate the thermal and catalytic stages. This report focuses on catalyst activity trends observed in both modes of operation. A literature review of relevant catalyst screening studies in bench-scale and PDU units is presented. Existing kinetic and deactivation models were used to analyze process data over an extensive data base. Based on this analysis, three separate application studies were conducted. The first study seeks to elucidate the dependence of the catalyst deactivation rate on the type of coal feedstock used. The second study focuses on the significance of catalyst type and integration mode for SRC hydrotreatment. The third study presents characteristic deactivation trends observed in integrated operation at different first-stage thermal severities. In-depth analytical work was conducted at different research laboratories on aged catalyst samples from Run 242. Model hydrogenation and denitrogenation activity trends are compared with process activity trends and with changes observed in catalyst porosimetric properties. The accumulation of metals and coke deposits with increasing catalyst age, as well as their distribution across a pellet cross-section, is discussed. The effect of catalyst age and reactor temperature on the chemical composition of the flashed bottoms product is addressed. Results from regenerating spent catalysts are also presented. 35 references, 31 figures, 18 tables.

Samples for chemical characterization and biological testing were obtained from ITSL runs 3LCF7, 3LCF8, and 3LCF9. Chemical analysis of these materials showed that SCT products were composed of fewer compounds than analogous materials from Solvent Refined Coal (SRC) processes. The major components in the SCT materials were three-, four-, five-, and six-ring neutral polycyclic aromatic hydrocarbons (PAH). Methyl (C1) and C2 homologs of these compounds were present in relatively low concentrations compared to their non-alkylated parents. Organic nitrogen was primarily in the form of tertiary polycyclic aromatic nitrogen heterocycles and carbazoles. Little or no amino-PAH (APAH) or cyano-PAH were detected in samples taken during normal PDU operations; however, mutagenic APAH were produced during off-normal operation. Microbial mutagenicity appeared to be due mainly to the presence of APAH, which were probably formed in the LC-Finer owing to failure of the catalyst to promote deamination following carbon-nitrogen bond scission of nitrogen-containing hydroaromatics. This failure was observed for the off-normal runs, where it was likely that the catalyst had been deactivated. Carcinogenic activity of ITSL materials, as assessed by tumors per animal in the initiation/promotion mouse-skin painting assay, was slightly reduced for materials produced with good catalyst under normal operation compared to those collected during recycle of the LC-Finer feed. Initiation activity of the latter samples did not appear to be significantly different from that of other coal-derived materials with comparable boiling ranges. The observed initiation activity was not unexpected, considering analytical data which showed the presence of four-, five-, and six-ring PAH in ITSL materials.

Reported herein are the details and results of laboratory and bench-scale experiments using bituminous coal conducted at Hydrocarbon Research, Inc., under DOE contract during the period October 1, 1988, to December 31, 1992. The work described is primarily concerned with the application of coal cleaning methods and solids separation methods to the Catalytic Two-Stage Liquefaction (CTSL) process. Additionally, a predispersed catalyst was evaluated in a thermal/catalytic configuration, and an alternative nickel-molybdenum catalyst was evaluated for the CTSL process. Three coals were evaluated in this program: bituminous Illinois No. 6 (Burning Star) and sub-bituminous Wyoming Black Thunder and New Mexico McKinley Mine seams. The results from a total of 16 bench-scale runs are reported and analyzed in detail. The tests involving the Illinois coal are reported herein; the tests involving the Wyoming and New Mexico coals are described in Topical Report No. 1. On the laboratory scale, microautoclave tests evaluating coal, start-up oils, catalysts, thermal treatment, CO2 addition, and sulfur compound effects are reported in Topical Report No. 3. Other microautoclave tests, such as those on rejuvenated catalyst, coker liquids, and cleaned coals, are described in the Bench Run sections to which they refer. The microautoclave tests conducted for modelling the CTSL process are described in the CTSL Modelling section of Topical Report No. 3 under this contract.

This report presents the operating results for Run 243 at the Advanced Coal Liquefaction R and D Facility in Wilsonville, Alabama. The run was made in the Integrated Two-Stage Liquefaction (ITSL) mode using Illinois No. 6 coal from the Burning Star mine. The primary objective was to demonstrate the effect of a dissolver on the ITSL product slate, especially on net C1-C5 gas production and hydrogen consumption. Run 243 began on 3 February 1983 and continued through 28 June 1983. During this period, 349.8 tons of coal was fed in 2947 hours of operation. Thirteen special product workup material balances were defined, and the results are presented herein. 29 figures, 19 tables.

This study demonstrated the feasibility of using non-aqueous ion exchange liquid chromatography (NIELC) for the examination of the tetrahydrofuran (THF)-soluble distillation resids and THF-soluble whole oils derived from direct coal liquefaction. The technique can be used to separate the material into a number of acid, base, and neutral fractions. Each of the fractions obtained by NIELC was analyzed and then further fractionated by high-performance liquid chromatography (HPLC). The separation and analysis schemes are given in the accompanying report. With this approach, differences can be distinguished among samples obtained from different process streams in the liquefaction plant and among samples obtained at the same sampling location, but produced from different feed coals. HPLC was directly applied to one THF-soluble whole process oil without the NIELC preparation, with limited success. The direct HPLC technique used was directed toward the elution of the acid species into defined classes. The non-retained neutral and basic components of the oil were not analyzable by the direct HPLC method because of solubility limitations. Sample solubility is a major concern in the application of these techniques.

This is the final report of a four-year, ten-month contract, running from October 1, 1988, to July 31, 1993, with the US Department of Energy to study and improve Close-Coupled Catalytic Two-Stage Direct Liquefaction of coal by producing high yields of distillate with improved quality at lower capital and production costs than existing technologies. Laboratory, bench, and PDU scale studies on sub-bituminous and bituminous coals are summarized and referenced in this volume. Details are presented in the three topical reports of this contract: CTSL Process Bench Studies and PDU Scale-Up with Sub-Bituminous Coal (DE-88818-TOP-1), CTSL Process Bench Studies with Bituminous Coal (DE-88818-TOP-2), and CTSL Process Laboratory Scale Studies, Modelling and Technical Assessment (DE-88818-TOP-3). Results are summarized for experiments and studies covering several process configurations, cleaned coals, solids separation methods, additives, and catalysts, both dispersed and supported. Laboratory microautoclave-scale experiments, economic analysis, and modelling studies are also included, along with the PDU scale-up of CTSL processing of sub-bituminous Black Thunder Mine Wyoming coal. During this DOE/HRI effort, high distillate yields were maintained at higher throughput rates while quality was markedly improved using on-line hydrotreating and cleaned coals. Solids separation options of filtration and delayed coking were evaluated on a bench scale, with filtration successfully scaled to a PDU demonstration. Directions for future direct coal liquefaction work are outlined herein based on the results from this and previous programs.

Reported are the details and results of laboratory and bench-scale experiments using sub-bituminous coal conducted at Hydrocarbon Research, Inc., under DOE Contract No. DE-AC22-88PC88818 during the period October 1, 1988, to December 31, 1992. The work described is primarily concerned with testing of the baseline Catalytic Two-Stage Liquefaction (CTSL™) process, with comparisons to other two-stage process configurations, catalyst evaluations, and unit operations such as solids separation, pretreatments, and on-line hydrotreating, plus an examination of new concepts. In the overall program, three coals were evaluated: bituminous Illinois No. 6 (Burning Star) and sub-bituminous Wyoming Black Thunder and New Mexico McKinley Mine seams. The results from a total of 16 bench-scale runs are reported and analyzed in detail. The runs (experiments) concern process variables, variable reactor volumes, catalysts (supported, dispersed, and rejuvenated), coal cleaned by agglomeration, hot slurry treatments, reactor sequence, on-line hydrotreating, dispersed catalyst with pretreatment reactors, and CO2/coal effects. The tests involving the Wyoming and New Mexico coals are reported herein; the tests involving the Illinois coal are described in Topical Report No. 2. On a laboratory scale, microautoclave tests evaluating coal, start-up oils, catalysts, thermal treatment, CO2 addition, and sulfur compound effects were conducted and reported in Topical Report No. 3. Other microautoclave tests, such as those on rejuvenated catalyst, coker liquids, and cleaned coals, are described in the Bench Run sections to which they refer. The microautoclave tests conducted for modelling the CTSL™ process are described in the CTSL™ Modelling section of Topical Report No. 3 under this contract.

This is the tenth Quarterly Technical Progress Report under DOE Contract DE-AC22-89PC89883. Process oils from Wilsonville Run 262 were analyzed to provide information on process performance. Run 262 was operated from July 10 through September 30, 1991, in the thermal/catalytic Close-Coupled Integrated Two-Stage Liquefaction (CC-ITSL) configuration with ash recycle. The feed coal was Black Thunder Mine subbituminous coal. The high/low temperature sequence was used. Each reactor was operated at 50% of the available reactor volume. The interstage separator was in use throughout the run. The second-stage reactor was charged with aged Criterion 324 catalyst (Ni/Mo on 1/16-inch alumina extrudate support). Slurry catalysts and a sulfiding agent were fed to the first-stage reactor. Molyvan L, an organometallic compound containing 8.1% Mo, is commercially available as an oil-soluble lubricant additive. It was used in Run 262 as a dispersed hydrogenation catalyst precursor, primarily to alleviate the deposition problems which plagued past runs with Black Thunder coal. One test was made with little supported catalyst in the second stage. The role of phenolic groups in donor solvent properties was also examined: four samples of direct liquefaction process oils were subjected to O-methylation of the phenolic groups, followed by chemical analysis and solvent quality testing.

This is the eleventh Quarterly Technical Progress Report under DOE Contract DE-AC22-89PC89883. Major topics reported are: (1) The results of a study designed to determine the effects of the conditions employed at the Wilsonville slurry preheater vessel on coal conversion are described. (2) Stable carbon isotope ratios were determined and used to source the carbon of three product samples from Period 49 of UOP bench-scale coprocessing Run 37. The results from this coprocessing run agree with the general trends observed in other coprocessing runs that we have studied. (3) Microautoclave tests and chemical analyses were performed to "calibrate" the reactivity of the standard coal used for determining the donor solvent quality of process oils in this contract. (4) Several aspects of Wilsonville Close-Coupled Integrated Two-Stage Liquefaction (CC-ITSL) resid conversion kinetics were investigated, and the results are presented. Error limits associated with calculations of deactivation rate constants previously reported for Runs 258 and 261 are revised and discussed. A new procedure is described that relates the conversions of 850°F+, 1050°F+, and 850-1050°F material. Resid conversions and kinetic constants previously reported for Run 260 were incorrect; corrected data and discussion are found in Appendix I of this report.

Samples from the H-Coal process, a catalytic, single-stage coal liquefaction technology, were chemically characterized and screened for microbial mutagenicity. For these investigations, a blend of light and heavy H-Coal process oils was fractionally distilled into 50°F boiling-point cuts. The chemical analyses and biological testing results presented in this status report deal primarily with the blended material and the distillate fractions boiling above 650°F. Results from the microbial mutagenicity assays indicated that the onset of biological activity in the crude materials occurred above 700°F. Similar trends have been observed for Solvent Refined Coal (SRC) I, SRC II, Integrated Two-Stage Liquefaction (ITSL), and Exxon EDS process materials. After chemical class fractionation, the primary source of microbial mutagenicity in the crude boiling-point cuts was the nitrogen-containing polycyclic aromatic compound (N-PAC) fractions. Amino polycyclic aromatic hydrocarbons (amino-PAH) were present at sufficient concentrations in the N-PAC fractions to account for the observed mutagenic responses. In general, the chemical composition of the H-Coal materials studied was similar to that of other single-stage liquefaction materials. The degree of alkylation in these materials was greater than in the SRC and less than in the EDS process distillate cuts. 13 references, 8 figures, 11 tables.

The objective of this research program was to evaluate the effectiveness of selected nondonor solvents (i.e., solvents that are not generally considered to have hydrogen available for hydrogenolysis reactions) for the solubilization of coals. The principal criteria for selection of candidate solvents were that the compound should be representative of a major chemical class, should be present in reasonable concentration in coal liquid products, and should have the potential to participate in hydrogen redistribution reactions. Naphthalene, phenanthrene, pyrene, carbazole, phenanthridine, quinoline, 1-naphthol, and diphenyl ether were evaluated to determine their effect on coal liquefaction yields and were compared with phenol and two high-quality process solvents, Wilsonville SRC-I recycle solvent and Lummus ITSL heavy oil solvent. The high conversion efficacy of 1-naphthol may be attributed to its condensation to binaphthol and the consequent availability of hydrogen. The effectiveness of both the nitrogen heterocycles and the polycyclic aromatic hydrocarbon (PAH) compounds may be due to their polycyclic aromatic nature (i.e., their possible role as hydrogen shuttling or transfer agents) and their physical solvent properties. The greater effectiveness for coal conversion of the Lummus ITSL heavy oil solvent compared with the Wilsonville SRC-I process solvent may be attributed to the much higher concentration of 3-, 4-, and 5-ring PAH and hydroaromatic constituents in the Lummus solvent. The chemistry of coal liquefaction and the development of recycle, hydrogen donor, and nondonor solvents are reviewed. The experimental methodology for tubing-bomb tests is outlined, and experimental problem areas are discussed.

The project on biotechnology of indirect liquefaction was focused on conversion of coal derived synthesis gas to liquid fuels using a two-stage, acidogenic and solventogenic, anaerobic bioconversion process. The acidogenic fermentation used a novel and versatile organism, Butyribacterium methylotrophicum, which was fully capable of using CO as the sole carbon and energy source for organic acid production. In extended batch CO fermentations the organism was induced to produce butyrate at the expense of acetate at low pH values. Long-term, steady-state operation was achieved during continuous CO fermentations with this organism, and at low pH values (a pH of 6.0 or less) minor amounts of butanol and ethanol were produced. During continuous, steady-state fermentations of CO with cell recycle, concentrations of mixed acids and alcohols were achieved (approximately 12 g/l and 2 g/l, respectively) which are high enough for efficient conversion in stage two of the indirect liquefaction process. The metabolic pathway to produce 4-carbon alcohols from CO was a novel discovery and is believed to be unique to our CO strain of B. methylotrophicum. In the solventogenic phase, the parent strain ATCC 4259 of Clostridium acetobutylicum was mutagenized using nitrosoguanidine and ethyl methane sulfonate. The E-604 mutant strain of Clostridium acetobutylicum showed improved characteristics as compared to parent strain ATCC 4259 in batch fermentation of carbohydrates.

Although two-stage testing is not the most efficient form of adaptive testing, it has some advantages. In this paper, linear programming models are given for the construction of two-stage tests. In these models, practical constraints with respect to, among other things, test composition and administration …

Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly; instead, it is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
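The inflation described above can be checked with a short Monte Carlo sketch (all sample sizes and distributions below are illustrative assumptions, not values from the note): data are simulated with the true mean exactly at the allowed maximum, so any "pass" is a Type I error, and the common two-stage practice of adding data and reusing the nominal t critical value is compared with the planned single-stage test.

```python
import random
import statistics

def ucl(sample, tcrit):
    """One-sided upper confidence limit for the mean."""
    n = len(sample)
    return statistics.mean(sample) + tcrit * statistics.stdev(sample) / n ** 0.5

random.seed(1)
limit = 0.0            # allowed maximum; the true mean sits exactly on it
n1, n2 = 10, 10        # initial and follow-up sample sizes (illustrative)
t1, t2 = 1.833, 1.729  # t(0.95) critical values for df = 9 and df = 19

trials = 20000
pass_one = pass_two = 0
for _ in range(trials):
    x = [random.gauss(limit, 1.0) for _ in range(n1)]
    if ucl(x, t1) < limit:            # passes at stage one
        pass_one += 1
        pass_two += 1
    elif statistics.mean(x) < limit:  # mean below limit but bound above it:
        x += [random.gauss(limit, 1.0) for _ in range(n2)]  # take more data
        if ucl(x, t2) < limit:        # recompute as if n = 20 were planned
            pass_two += 1

p_one, p_two = pass_one / trials, pass_two / trials
print(p_one, p_two)
```

The single-stage rate comes out near the nominal 0.05, while the two-stage rate is noticeably larger, which is why the note recommends choosing the nominal α below the desired Type I error probability.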

…conditions and associated iteration procedure become more complex. This is due both to the increased number of components and to the time for a … solved for each stage in the two-stage solution. There are (3 + number of planets) degrees of freedom for each stage, plus two degrees of freedom … should be devised. It should be noted that this is not a minor task. In general, each stage plus an input or output shaft will have 2 times (4 + number …

Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds …

Decision theory is applied to the problem of setting thresholds in medical screening when it is organised in two stages. The first stage involves a less expensive procedure that can be applied on a mass scale; on its basis, an individual is classified as a negative or a likely positive. In the second stage, the likely positives are subjected to another test that classifies them as (definite) positives or negatives. The second-stage test is more accurate but also more expensive and more involved, so there are incentives to restrict its application. Robustness of the method with respect to the parameters, some of which have to be set by elicitation, is assessed by sensitivity analysis.
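A minimal sketch of the decision-theoretic idea (the score distributions, costs, and prevalence below are hypothetical placeholders, not values from the paper): the first-stage threshold is chosen to minimize the expected cost of referring individuals to the expensive second-stage test plus the expected cost of missed cases.

```python
from statistics import NormalDist

# Hypothetical model: first-stage scores are N(0,1) for unaffected
# individuals and N(1.5,1) for affected ones.
unaffected, affected = NormalDist(0.0, 1.0), NormalDist(1.5, 1.0)
prevalence = 0.02
c_stage2 = 1.0   # unit cost of one second-stage test
c_miss = 50.0    # cost of a missed case (false negative at stage one)

def expected_cost(t):
    """Expected cost per screened person at stage-1 threshold t."""
    p_refer = (1 - prevalence) * (1 - unaffected.cdf(t)) \
              + prevalence * (1 - affected.cdf(t))
    p_miss = prevalence * affected.cdf(t)
    return c_stage2 * p_refer + c_miss * p_miss

# Grid search for the cost-minimizing threshold.
grid = [i / 100 for i in range(-200, 401)]
best = min(grid, key=expected_cost)
print(round(best, 2))   # near the analytic optimum of 0.75 for these numbers
```

Raising `c_miss` pushes the threshold down (refer more people); raising `c_stage2` pushes it up, which is the trade-off the paper's sensitivity analysis probes.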

The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program, as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two-stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution, with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended, as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.

This article reviews the hydrothermal liquefaction of biomass with the aim of describing the current status of the technology. Hydrothermal liquefaction is a medium-temperature, high-pressure thermochemical process which produces a liquid product, often called bio-oil or bio-crude. During...... the hydrothermal liquefaction process, the macromolecules of the biomass are first hydrolyzed and/or degraded into smaller molecules. Many of the produced molecules are unstable and reactive and can recombine into larger ones. During this process, a substantial part of the oxygen in the biomass is removed...... by dehydration or decarboxylation. The chemical properties of bio-oil are highly dependent on the biomass substrate composition. Biomass consists of various components, such as proteins, carbohydrates, lignin and fat, and each of them produces a distinct spectrum of compounds during hydrothermal liquefaction...

The main objective of this publication was to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, using the example of modelling the real estate market for the purpose of real estate valuation and estimation of model parameters of foundations' vertical displacements. The first stage of the presented algorithm includes selection of a suitable form of the function model. In the classical algorithms based on function modelling, the prediction of the dependent variable is its value obtained directly from the model. The better the model reflects the relationship between the independent variables and their effect on the dependent variable, the more reliable is the model value. In this paper, an algorithm has been proposed which comprises adjustment of the value obtained from the model with a random correction determined from the residuals of the model for those cases which, in a separate analysis, were considered most similar to the object whose dependent variable we want to model. The effect of applying the developed quantitative procedures for calculating the corrections, and qualitative methods to assess the similarity, on the final outcome of the prediction and its accuracy was examined by statistical methods, mainly using appropriate parametric tests of significance. The idea of the presented algorithm has been designed so as to approximate the value of the dependent variable of the studied phenomenon to its value in reality and, at the same time, to have it "smoothed out" by a well-fitted modelling function.

The report presents a summary of the work performed under DOE Contract No. DE-AC22-95PC95050. Investigations performed under Task 4--Integrated Flow Sheet Testing are detailed. In this program, a novel direct coal liquefaction technology was investigated by CONSOL Inc. with the University of Kentucky Center for Applied Energy Research and LDP Associates. The process concept explored consists of a first-stage coal dissolution step in which the coal is solubilized by hydride ion donation. In the second stage, the products are catalytically upgraded to refinery feedstocks. Integrated first-stage and solids-separation steps were used to prepare feedstocks for second-stage catalytic upgrading. An engineering and economic evaluation was conducted concurrently with experimental work throughout the program. Approaches to reduce costs for a conceptual commercial plant were recommended at the conclusion of Task 3. These approaches were investigated in Task 4. The economic analysis of the process as it was defined at the conclusion of Task 4 indicates that the cost of producing refined product (gasoline) via this novel direct liquefaction technology is higher than that associated with conventional two-stage liquefaction technologies.

Coal liquefaction is an emerging technology receiving great attention as a possible liquid fuel source. Currently, four general methods of converting coal to liquid fuel are under active development: direct hydrogenation; pyrolysis/hydrocarbonization; solvent extraction; and indirect liquefaction. This work is being conducted at the pilot plant stage, usually with a coal feed rate of several tons per day. Several conceptual design studies have been published recently for large (tens of thousands of tons per day coal feed rate) commercial liquefaction plants, and these reports form the data base for this evaluation. Products from a liquefaction facility depend on the particular method and plant design selected, and these products range from synthetic crude oils up through the lighter hydrocarbon gases and, in some cases, electricity. Various processes are evaluated with respect to product compositions, thermal efficiency, environmental effects, operating and maintenance requirements, and cost. Because of the large plant capacities of current conceptual designs, it is not clear how, and on what scale, coal liquefaction may be considered appropriate as an energy source for Integrated Community Energy Systems (CES). Development work, both currently under way and planned for the future, should help to clarify and quantify the question of applicability.

Liquefaction is one of the critical problems in geotechnical engineering. High ground water levels and alluvial soils pose a high potential risk of damage due to liquefaction, especially in seismically active regions. The Eskişehir urban area, studied in this article, is situated within the second degree earthquake region on the seismic hazard zonation map of Turkey and is surrounded by the Eskişehir, North Anatolian, Kütahya and Simav Fault Zones. Geotechnical investigations were carried out in two stages: field and laboratory. In the first stage, 232 boreholes were drilled in different locations and the Standard Penetration Test (SPT) was performed. Test pits at 106 different locations were also excavated to support geotechnical data obtained from field tests. In the second stage, experimental studies were performed to determine the Atterberg limits and physical properties of soils. Liquefaction potential was investigated by a simplified method based on SPT. A scenario earthquake of magnitude M=6.4, produced by the Eskişehir Fault Zone, was used in the calculations. Analyses were carried out for PGA levels of 0.19, 0.30 and 0.47 g. The results of the analyses indicate that the presence of a high ground water level and alluvial soil increases the liquefaction potential, together with the seismic features of the region. Following the analyses, liquefaction potential maps were produced for different depth intervals and can be used effectively for development plans and risk management practices in Eskişehir.

Biomass is one of the most abundant sources of renewable energy, and will be an important part of a more sustainable future energy system. In addition to direct combustion, there is growing attention on conversion of biomass into liquid energy carriers. These conversion methods are divided...... into biochemical/biotechnical methods and thermochemical methods, such as direct combustion, pyrolysis, gasification, liquefaction etc. This chapter will focus on hydrothermal liquefaction, where high pressures and intermediate temperatures together with the presence of water are used to convert biomass...... into liquid biofuels, with the aim of describing the current status and development challenges of the technology. During the hydrothermal liquefaction process, the biomass macromolecules are first hydrolyzed and/or degraded into smaller molecules. Many of the produced molecules are unstable and reactive...

The project on biotechnology of indirect liquefaction was focused on conversion of coal derived synthesis gas to liquid fuels using a two-stage, acidogenic and solventogenic, anaerobic bioconversion process. The acidogenic fermentation used a novel and versatile organism, Butyribacterium methylotrophicum, which was fully capable of using CO as the sole carbon and energy source for organic acid production. In extended batch CO fermentations the organism was induced to produce butyrate at the expense of acetate at low pH values. Long-term, steady-state operation was achieved during continuous CO fermentations with this organism, and at low pH values (a pH of 6.0 or less) minor amounts of butanol and ethanol were produced. During continuous, steady-state fermentations of CO with cell recycle, concentrations of mixed acids and alcohols were achieved (approximately 12 g/l and 2 g/l, respectively) which are high enough for efficient conversion in stage two of the indirect liquefaction process. The metabolic pathway to produce 4-carbon alcohols from CO was a novel discovery and is believed to be unique to our CO strain of B. methylotrophicum. In the solventogenic phase, the parent strain ATCC 4259 of Clostridium acetobutylicum was mutagenized using nitrosoguanidine and ethyl methane sulfonate. The E-604 mutant strain of Clostridium acetobutylicum showed improved characteristics as compared to parent strain ATCC 4259 in batch fermentation of carbohydrates.

The treatment of cadmium dust with a two-stage leaching process was investigated to replace the existing sulphation roast-leaching processes. The process parameters in the first stage leaching were basically similar to the neutral leaching in zinc hydrometallurgy. The effects of process parameters in the second stage leaching on the extraction of zinc and cadmium were mainly studied. The experimental results indicated that zinc and cadmium could be efficiently recovered from the cadmium dust by the two-stage leaching process. The extraction percentages of zinc and cadmium in two-stage leaching reached 95% and 88% respectively under the optimum conditions. The total extraction percentage of Zn and Cd reached 94%.
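The benefit of a second leaching stage follows from a simple mass balance: if stage 1 extracts a fraction f1 of the metal and stage 2 extracts a fraction f2 of what remains, the overall extraction is 1 − (1 − f1)(1 − f2). A minimal sketch with hypothetical stage fractions (not the paper's reported figures):

```python
# Hypothetical stage extraction fractions, for illustration only.
f1, f2 = 0.80, 0.75
total = 1 - (1 - f1) * (1 - f2)   # overall two-stage extraction
print(f"overall extraction = {total:.0%}")
```

With these assumed fractions, an 80% first stage followed by a 75% second stage yields 95% overall, which is how modest per-stage recoveries compound into a high total.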

Significant progress was made in the May 1990--May 1991 contract period in three primary coal liquefaction research areas: catalysis, structure-reactivity studies, and novel liquefaction processes. A brief summary of the accomplishments in the past year in each of these areas is given.

This paper studies the coordination effects between stages for scheduling problems where decision-making is a two-stage process. Two stages are considered as one system. The system can be a supply chain that links two stages, one stage representing a manufacturer and the other a distributor. It can also represent a single manufacturer, with each stage representing a different department responsible for a part of operations. A problem that jointly considers both stages in order to achieve ideal overall system performance is defined as a system problem. In practice, at times, it might not be feasible for the two stages to make coordinated decisions due to (i) the lack of channels that allow decision makers at the two stages to cooperate, and/or (ii) the optimal solution to the system problem being too difficult (or costly) to achieve. Two practical approaches are applied to solve a variant of two-stage logistic scheduling problems. The Forward Approach is defined as a solution procedure by which the first stage of the system problem is solved first, followed by the second stage. Similarly, the Backward Approach is defined as a solution procedure by which the second stage of the system problem is solved prior to solving the first stage. In each approach, the two stages are solved sequentially and the solution generated is treated as a heuristic solution with respect to the corresponding system problem. When decision makers at the two stages make decisions locally without considering consequences to the entire system, ineffectiveness may result, even when each stage optimally solves its own problem. The trade-off between the time complexity and the solution quality is the main concern. This paper provides the worst-case performance analysis for each approach.
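The Forward/Backward idea can be sketched on a toy two-stage (flow-shop) instance. The job data and the specific per-stage sequencing rules below are assumptions made for the illustration, not the paper's model:

```python
import itertools

# Toy two-stage flow-shop instance; job times are invented for illustration.
jobs = [(9, 8), (1, 7), (2, 6), (3, 1)]   # (stage-1 time, stage-2 time)

def makespan(seq):
    t1 = t2 = 0
    for p1, p2 in seq:
        t1 += p1                  # stage 1 finishes this job
        t2 = max(t2, t1) + p2     # stage 2 waits for the job and the machine
    return t2

# System optimum by brute force over all job sequences.
opt = min(makespan(s) for s in itertools.permutations(jobs))
# Forward Approach stand-in: sequence from stage-1 considerations alone (SPT).
forward = makespan(sorted(jobs, key=lambda j: j[0]))
# Backward Approach stand-in: sequence from stage-2 considerations alone.
backward = makespan(sorted(jobs, key=lambda j: -j[1]))
print(opt, forward, backward)
```

On this instance the backward heuristic's makespan exceeds the brute-force system optimum, illustrating how locally sensible stage decisions can be globally ineffective.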

Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, American Society of Heating Refrigeration and Air-conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, the implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

The liquefaction of crop residues in the presence of ethylene glycol, ethylene carbonate, or polyethylene glycol using sulfuric acid as a catalyst was studied. For all experiments, the liquefaction was conducted at 160 °C and atmospheric pressure. The mass ratio of feedstock to liquefaction solvents used in all the experiments was 30:100. The results show that the acid-catalyzed liquefaction process fit a pseudo-first-order kinetics model. Liquefaction yields of 80, 74, and 60% were obtained i...

BESSY is proposing a demonstration facility, called STARS, for a two-stage high-gain harmonic generation free electron laser (HGHG FEL). STARS is planned for lasing in the wavelength range 40 to 70 nm, requiring a beam energy of 325 MeV. The facility consists of a normal conducting gun, three superconducting TESLA-type acceleration modules modified for CW operation, a single-stage bunch compressor and finally a two-stage HGHG cascaded FEL. This paper describes the facility layout and the rationale behind the operation parameters.

A two-stage gasification pilot plant was designed and built as a co-operative project between the Technical University of Denmark and the company REKA. A dynamic, mathematical model of the two-stage pilot plant was developed to serve as a tool for optimising the process and the operating conditions...... of the gasification plant. The model consists of modules corresponding to the different elements in the plant. The modules are coupled together through mass and heat conservation. Results from the model are compared with experimental data obtained during steady and unsteady operation of the pilot plant. A good...

The technical report Proceedings From the 2nd U.S.-Japan Workshop on Liquefaction, Large Ground Deformation and Their Effects on Lifelines is available free of charge from the National Center for Earthquake Engineering Research, headquartered at the State University of New York at Buffalo. The 499-page proceedings contain more than 30 reports on case studies of liquefaction and earthquake-induced ground deformation from previous earthquakes in the U.S. and Japan.

The invention relates to a method of producing ethanol by fermentation, said method comprising a secondary liquefaction step in the presence of a thermostable acid alpha-amylase or a thermostable maltogenic acid alpha-amylase.

This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in the risky security increases as the risk-free return decreases, and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

Group testing algorithms are very useful tools for DNA library screening. Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct in this paper new infinite classes of combinatorial structures, the existence of which are essential for attaining the minimum number of individual tests at the second stage of a two-stage disjunctive testing algorithm.

Based on more than 15 years of research and practical experience, the Technical University of Denmark (DTU) and COWI Consulting Engineers and Planners AS present the two-stage gasification process, a concept for high-efficiency gasification of biomass producing negligible amounts of tars. In the two-stage gasification concept, the pyrolysis and the gasification processes are physically separated. The volatiles from the pyrolysis are partially oxidized, and the hot gases are used as gasification medium to gasify the char. Hot gases from the gasifier and a combustion unit can be used for drying...... a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed-bed gasifier. This design is well proven during more than 1000 hours of testing with various...

Objective: To evaluate the effectiveness of free graft transplantation two-stage urethroplasty for hypospadias repair. Methods: Fifty-eight cases with different types of hypospadias, including 10 subcoronal, 36 penile shaft, 9 scrotal, and 3 perineal, were treated with free full-thickness skin graft or (and) buccal mucosal graft transplantation two-stage urethroplasty. Of the 58 cases, 45 were new cases and 13 had a history of previous failed surgeries. The operative procedure included two stages: the first stage is to correct penile curvature (chordee), prepare the transplanting bed, harvest and prepare the full-thickness skin graft or buccal mucosal graft, and perform graft transplantation. The second stage is to complete urethroplasty and glanuloplasty. Results: After the first stage operation, 56 of 58 cases (96.6%) were successful with grafts healing well; the other 2 foreskin grafts became gangrenous. After the second stage operation on 56 cases, 5 cases failed with newly formed urethras opened due to infection, 8 cases had fistulas, and 43 (76.8%) cases healed well. Conclusions: Free graft transplantation two-stage urethroplasty for hypospadias repair is an effective treatment with broad indications, a comparatively high success rate, fewer complications and good cosmetic results, and is indicated for various types of hypospadias repair.

In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

For the one-sample problem, a two-stage rank test is derived which realizes a required power against a given local alternative, for all sufficiently smooth underlying distributions. This is achieved using asymptotic expansions resulting in a precision of order m−1, where m is the size of the first

In this paper mixed integer linear programming models for customizing two-stage tests are given. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. It is not difficult to modify the models to make them use

The lowest frequency band (70 - 450 MHz) of the Square Kilometre Array will consist of sparse aperture arrays grouped into geographically-localised patches, or stations. Signals from thousands of antennas in each station will be beamformed to produce station beams which form the inputs for the central correlator. Two-stage beamforming within stations can reduce SKA-low signal processing load and costs, but has not been previously explored for the irregular station layouts now favoured in radio astronomy arrays. This paper illustrates the effects of two-stage beamforming on sidelobes and effective area, for two representative station layouts (regular and irregular gridded tile on an irregular station). The performance is compared with a single-stage, irregular station. The inner sidelobe levels do not change significantly between layouts, but the more distant sidelobes are affected by the tile layouts; regular tile creates diffuse, but regular, grating lobes. With very sparse arrays, the station effective area...

A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle, thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber, providing a high-pressure flow of a first refrigerant for the lower stage refrigeration cycle, within a second pressurization chamber providing a high-pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method was proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance, and can precisely identify models from fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method and takes too much computation, so it does not work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Some simulation examples are included to demonstrate the validity of the new method.
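The "recursive, online-capable" idea can be illustrated with a standard recursive least-squares (RLS) update. The EFOP recursion itself is not reproduced here; the model y = a·u + b and all numbers below are invented for the sketch:

```python
# Recursive least-squares (RLS) sketch of online identification; each new
# sample updates the estimate with a rank-1 correction instead of refitting
# the whole batch. True system y = 2.5*u + 1.0 is an assumed example.
theta = [0.0, 0.0]                  # running estimates of (a, b)
P = [[1000.0, 0.0], [0.0, 1000.0]]  # estimate "covariance"

for k in range(200):
    u = (k % 7) - 3.0               # persistently exciting input
    y = 2.5 * u + 1.0               # noise-free measurement for clarity
    x = [u, 1.0]                    # regressor
    Px = [P[0][0]*x[0] + P[0][1]*x[1], P[1][0]*x[0] + P[1][1]*x[1]]
    denom = 1.0 + x[0]*Px[0] + x[1]*Px[1]
    K = [Px[0]/denom, Px[1]/denom]  # gain vector
    err = y - (theta[0]*x[0] + theta[1]*x[1])
    theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
    P = [[P[0][0] - K[0]*Px[0], P[0][1] - K[0]*Px[1]],
         [P[1][0] - K[1]*Px[0], P[1][1] - K[1]*Px[1]]]

print(f"a ~ {theta[0]:.4f}, b ~ {theta[1]:.4f}")
```

The constant per-sample cost of the rank-1 update is what makes recursive formulations attractive online, in contrast to iterative batch methods.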

Natural language parsing is a task of great importance and extreme difficulty. In this paper, we present a full Chinese parsing system based on a two-stage approach. Rather than identifying all phrases by a uniform model, we utilize a divide-and-conquer strategy. We propose an effective and fast method based on a Markov model to identify the base phrases. Then we make the first attempt to extend one of the best English parsing models, i.e. the head-driven model, to recognize Chinese complex phrases. Our two-stage approach is superior to the uniform approach in two aspects. First, it creates synergy between the Markov model and the head-driven model. Second, it reduces the complexity of full Chinese parsing and makes the parsing system space- and time-efficient. We evaluate our approach in PARSEVAL measures on the open test set; the parsing system performs at 87.53% precision and 87.95% recall.

Two popular explanations of urban poverty are the "welfare-disincentive" and "urban-deindustrialization" theories. Using cross-sectional Census data, we develop a two-stage model to predict an SMSA's median family income and poverty rate. The model allows the city's welfare level and industrial structure to affect its median family income and poverty rate directly. It also allows welfare and industrial structure to affect income and poverty indirectly, through their effects on family structure...

Spectral emissivity is key in temperature measurement by radiation methods, but not easy to determine in a combustion environment, due to the interrelated influence of temperature and wavelength of the radiation. In multi-wavelength radiation thermometry, knowing the spectral emissivity of the material is a prerequisite. However, in many circumstances such a property is a complex function of temperature and wavelength, and reliable models are yet to be sought. In this study, a two-stage...

A two-stage collaborative exam is one in which students first complete the exam individually, and then complete the same or similar exam in collaborative groups immediately afterward. To quantify the learning effect from the group component of these two-stage exams in an introductory Physics course, a randomized crossover design was used where each student participated in both the treatment and control groups. For each of the two two-stage collaborative group midterm exams, questions were designed to form matched near-transfer pairs with questions on an end-of-term diagnostic which was used as a learning test. For learning test questions paired with questions from the first midterm, which took place six to seven weeks before the learning test, an analysis using a mixed-effects logistic regression found no significant differences in learning-test performance between the control and treatment group. For learning test questions paired with questions from the second midterm, which took place one to two weeks prio...

We present a 45-degree two-stage venous cannula that confers advantage to the surgeon using cardiopulmonary bypass. This cannula exits the mediastinum under the transverse bar of the sternal retractor, leaving the rostral end of the sternal incision free of apparatus. It allows for lifting of the heart with minimal effect on venous return and does not interfere with the radially laid out sutures of an aortic valve replacement using an interrupted suture technique.

In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular because of their efficiency and flexibility in modifying trial and/or statistical procedures of ongoing clinical trials. One of the most commonly considered adaptive designs is probably the two-stage seamless adaptive trial design, which combines two separate studies into one single study. In many cases, the study endpoints considered in a two-stage seamless adaptive design may be similar but different (e.g. a biomarker versus a regular clinical endpoint, or the same study endpoint with different treatment durations). In this case, it is important to determine how the data collected from both stages should be combined for the final analysis. It is also of interest to know how the sample size calculation/allocation should be done to achieve the study objectives originally set for the two stages (separate studies). In this article, formulas for sample size calculation/allocation are derived for cases in which the study endpoints are continuous, discrete (e.g. binary responses), or time-to-event data, assuming that there is a well-established relationship between the study endpoints at different stages and that the study objectives at different stages are the same. In cases in which the study objectives at different stages are different (e.g. dose finding at the first stage and efficacy confirmation at the second stage) and there is a shift in patient population caused by protocol amendments, the derived test statistics and formulas for sample size calculation and allocation are modified accordingly to control the overall type I error at the prespecified level.
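As a hedged sketch of the kind of calculation involved (not the article's actual derived formulas), the standard normal-approximation sample size for a continuous endpoint can be computed and then split across the two stages; the effect size, variance, quantile constants, and 50/50 allocation below are illustrative assumptions:

```python
import math

def total_sample_size(delta, sigma, z_alpha=1.959964, z_beta=0.841621):
    """Per-group n for a two-sample comparison of means.

    z_alpha and z_beta are the standard-normal quantiles for two-sided
    alpha = 0.05 and power = 0.80 (assumed values, hard-coded to avoid
    an inverse-CDF implementation)."""
    n = 2.0 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

def allocate_two_stages(n_total, frac_stage1=0.5):
    """Split the total per-group n across the two stages (50/50 assumed)."""
    n1 = math.ceil(n_total * frac_stage1)
    return n1, n_total - n1

n = total_sample_size(delta=0.5, sigma=1.0)   # standardized effect 0.5
n1, n2 = allocate_two_stages(n)
print(n, n1, n2)  # → 63 32 31
```

In a real seamless design the stage-1 data would enter the final test statistic with a weight reflecting the endpoint relationship; this sketch only shows the planning arithmetic.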

Dairy effluents contain a high organic load, and unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. While physico-chemical treatment is the common mode of treatment, immobilized microalgae can potentially be employed to treat the high organic content, offering numerous benefits alongside wastewater treatment. A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand-bed filtration technique. NH4+-N was completely removed, and a 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed at the end of a 96 h bioassay in zebrafish, which was used as a model. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer on rice seeds, and a 30% increase in root and shoot length was observed after its addition to the rice plants. We conclude that the two-stage treatment of dairy effluent is highly effective in removing BOD and COD as well as nutrients such as nitrates and phosphates. The treatment also allows treated wastewater to be discharged safely into receiving water bodies, since it is non-toxic to aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and it offers great potential as a biofertilizer.

We present test results for a two-stage integrated SQUID amplifier which uses a series array of d.c. SQUIDs to amplify the signal from a single input SQUID. The device was developed by Welty and Martinis at NIST, and recent versions have been manufactured by HYPRES, Inc. Shielding and filtering techniques were employed during the testing to minimize external noise. Energy resolution of 300 h was demonstrated using a d.c. excitation at frequencies above 1 kHz, and better than 500 h resolution was typical down to 300 Hz.

The paper presents a two-stage classification approach for handwritten Devanagari characters. The first stage uses structural properties such as the shirorekha and the spine of a character, and the second stage exploits intersection features of characters, which are fed to a feedforward neural network. A simple histogram-based method does not work for finding the shirorekha and the vertical bar (spine) in handwritten Devanagari characters, so we designed a differential-distance-based technique to find a near-straight line for the shirorekha and spine. This approach has been tested on 50,000 samples, achieving a success rate of 89.12%.

In response to adverse conditions, myxobacteria form aggregates which develop into fruiting bodies. We model myxobacteria aggregation with a lattice cell model based entirely on short-range (non-chemotactic) cell-cell interactions. Local rules result in a two-stage process of aggregation mediated by transient streams. Aggregates resemble those observed in experiment and are stable against even very large perturbations. Noise in individual cell behavior increases the effects of streams and results in larger, more stable aggregates. Phys. Rev. Lett. 93: 068301 (2004).

Additive-prepared straw pellets were gasified in the 100 kW two-stage gasifier at the Department of Mechanical Engineering of the Technical University of Denmark (DTU). The fixed-bed temperature range was 800-1000°C. In order to avoid bed sintering, as observed earlier with straw gasification ... residues were examined after the test. No agglomeration or sintering was observed in the ash residues. The tar content was measured both by the solid-phase amino adsorption (SPA) method and by cold trapping (Petersen method). Both showed low tar contents (~42 mg/Nm3 without gas cleaning). The particle content...

A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

Efficiency of an optimization process is largely determined by the search algorithm and its fundamental characteristics. In a given optimization, a single type of algorithm is used in most applications. In this paper, we investigate the Eagle Strategy recently developed for global optimization, which uses a two-stage strategy by combining two different algorithms to improve the overall search efficiency. We discuss this strategy with differential evolution and then evaluate their performance by solving real-world optimization problems such as pressure vessel and speed reducer design. Results suggest that we can reduce the computing effort by a factor of up to 10 in many applications.
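A minimal sketch of a two-stage search in the spirit of the Eagle Strategy, with a simple test function and parameter values not taken from the paper: stage one performs coarse random exploration (the paper pairs Lévy flights with DE; uniform sampling is a simplification here), and stage two runs a small differential-evolution loop seeded near the best explored point:

```python
import random

def sphere(x):
    # Simple unimodal benchmark with minimum 0 at the origin.
    return sum(xi * xi for xi in x)

def eagle_strategy(f, dim=2, bounds=(-5.0, 5.0), n_explore=200,
                   pop=20, gens=100, F=0.7, CR=0.9, seed=1):
    """Two-stage search: random global exploration, then DE/rand/1/bin
    intensification around the best explored point."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Stage 1: coarse global exploration.
    best = min((tuple(rng.uniform(lo, hi) for _ in range(dim))
                for _ in range(n_explore)), key=f)
    # Stage 2: differential evolution started from a cloud around `best`.
    P = [[min(hi, max(lo, b + rng.gauss(0, 0.5))) for b in best]
         for _ in range(pop)]
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.sample([p for j, p in enumerate(P) if j != i], 3)
            trial = [a[k] + F * (b[k] - c[k]) if rng.random() < CR else P[i][k]
                     for k in range(dim)]
            trial = [min(hi, max(lo, t)) for t in trial]
            if f(trial) <= f(P[i]):       # greedy selection
                P[i] = trial
    return min(P, key=f)

x = eagle_strategy(sphere)
print(sphere(x) < 1e-3)
```

The two-stage split mirrors the paper's point: a cheap explorer narrows the region, after which the intensive algorithm spends its budget where it counts.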

A review is presented of recent advances in seabed liquefaction and its implications for marine structures. The review is organized in seven sections: Residual liquefaction, including the sequence of liquefaction, mathematical modelling, centrifuge modelling and comparison with standard wave-flume results; Momentary liquefaction; Floatation of buried pipelines; Sinking of pipelines and marine objects; Liquefaction at gravity structures; Stability of rock berms in liquefied soils; and Impact of seismic-induced liquefaction.

When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
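A toy sketch of the two-stage structure described above, with hypothetical separation times, flight data, and a greedy assignment standing in for the paper's Branch & Bound integer program:

```python
import itertools
import random

def rank_sequences(classes, n_samples=200, seed=0):
    """Stage 1 (sketch): rank orderings of departure-class slots by expected
    makespan when each slot's separation time is stochastic. The separation
    matrix below is hypothetical, not real wake-separation data."""
    sep = {("H", "H"): 2, ("H", "L"): 3, ("L", "H"): 1, ("L", "L"): 2}
    rng = random.Random(seed)
    def expected_makespan(seq):
        total = 0.0
        for _ in range(n_samples):
            t = 0.0
            for lead, follow in zip(seq, seq[1:]):
                t += sep[(lead, follow)] + rng.uniform(0.0, 0.5)  # noise
            total += t
        return total / n_samples
    return sorted(set(itertools.permutations(classes)), key=expected_makespan)

def assign_flights(seq, pool):
    """Stage 2 (sketch): greedily fill each class slot with the earliest
    ready flight of that class (a stand-in for the integer program)."""
    pool = sorted(pool)             # tuples of (ready_time, flight_id, class)
    plan = []
    for slot in seq:
        i = next(k for k, f in enumerate(pool) if f[2] == slot)
        plan.append(pool.pop(i)[1])
    return plan

best = rank_sequences(["H", "H", "L", "L"])[0]
flights = [(0, "AA1", "H"), (1, "BA2", "L"), (2, "DL3", "H"), (3, "UA4", "L")]
print(best, assign_flights(best, flights))
```

The point of the decomposition is visible even at toy scale: ranking class sequences first keeps the combinatorial flight-assignment problem small in the second stage.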

This study focuses on the aircraft recovery problem (ARP). In real-life operations, disruptions always cause schedule failures and make airlines suffer great losses. Therefore, the main objective of the aircraft recovery problem is to minimize the total recovery cost and solve the problem within reasonable runtimes. An aircraft recovery model (ARM) is proposed herein to formulate the ARP, using feasible lines of flights as the basic variables in the model. We define a feasible line of flights (LOF) as a sequence of flights flown by an aircraft within one day. The number of LOFs grows exponentially with the number of flights. Hence, a two-stage heuristic is proposed to reduce the problem scale. The algorithm integrates a heuristic scoring procedure with an aggregated aircraft recovery model (AARM) to preselect LOFs. The approach is tested on five real-life test scenarios. The computational results show that the proposed model provides a good formulation of the problem and can be solved within reasonable runtimes with the proposed methodology. The two-stage heuristic significantly reduces the number of LOFs after each stage and finally reduces the number of variables and constraints in the aircraft recovery model.

In the last decade the interest for liquefied natural gas (LNG) is growing. A tendency is to produce and transport LNG on large floating vessels. One important choice in designing such a vessel is the liquefaction process. Several processes have been developed in recent years, ranging from mixed ref

Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have ... gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG ... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system relies on the combination of a HOG part-based approach, tracking based on a specific optimized feature, and porting to a real prototype.

Gall bladder torsion (GBT) is a relatively uncommon entity and is rarely diagnosed preoperatively. A constant factor in all occurrences of GBT is a freely mobile gall bladder due to congenital or acquired anomalies. GBT is commonly observed in elderly white females. We report a 77-year-old Caucasian lady who was originally diagnosed with gall bladder perforation but was eventually found to have a two-staged torsion of the gall bladder with twisting of Riedel's lobe (part of a tongue-like projection of liver segment 4A). This combination, to the best of our knowledge, has not been reported in the literature. We performed laparoscopic cholecystectomy and she had an uneventful postoperative period. GBT may create a diagnostic dilemma in the context of acute cholecystitis. Timely diagnosis and intervention are necessary, with extra care while operating as the anatomy is generally distorted. The fundus-first approach can be useful because of the altered anatomy in the region of Calot's triangle. Laparoscopic cholecystectomy has the benefit of early recovery.

The type of lightweight aggregate and its volume fraction in a mix determine the density of lightweight concrete. Minimizing the density obviously requires a higher volume fraction, but this usually causes aggregate segregation in a conventional mixing process. This paper proposes a two-stage casting process to produce lightweight concrete. The process involves placing lightweight aggregates in a frame and then filling the remaining interstitial voids with cementitious grout. The casting process results in the lowest density of lightweight concrete, which consequently has low compressive strength. Irregularly shaped aggregates compensate for the weakness in strength, while round-shaped aggregates provide a strength of 20 MPa. Therefore, the proposed casting process can be applied to manufacturing non-structural elements and structural composites requiring a very low density and a strength of at most 20 MPa.

A two-stage object recognition algorithm in the presence of occlusion is presented for microassembly. Coarse localization determines whether the template is present in the image and, if so, approximately where it is; fine localization gives its accurate position. In coarse localization, a local feature that is invariant to translation, rotation and occlusion is used to form signatures. By comparing the signature of the template with that of the image, an approximate transformation parameter from template to image is obtained, which is used as the initial parameter value for fine localization. An objective function of the transformation parameter is constructed in fine localization and minimized to achieve sub-pixel localization accuracy. Occluded pixels are not taken into account in the objective function, so the localization accuracy is not influenced by the occlusion.
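A one-dimensional toy version of the coarse-to-fine idea, with an invented template and signal: stage one scans integer shifts with a sum-of-squared-differences objective that skips occluded pixels, and stage two refines the minimum to sub-pixel accuracy by parabolic interpolation (a stand-in for the paper's objective-function minimization):

```python
def localize(template, signal, occluded=()):
    """Coarse integer alignment by SSD, then sub-pixel refinement by
    fitting a parabola through the SSD values around the minimum.
    Template indices listed in `occluded` are excluded from the
    objective, mirroring the paper's handling of occluded pixels."""
    m, n = len(template), len(signal)
    def ssd(shift):
        return sum((signal[shift + i] - template[i]) ** 2
                   for i in range(m) if i not in occluded)
    # Stage 1: coarse localization over all integer shifts.
    best = min(range(n - m + 1), key=ssd)
    # Stage 2: fine localization via parabolic interpolation.
    if 0 < best < n - m:
        y0, y1, y2 = ssd(best - 1), ssd(best), ssd(best + 1)
        denom = y0 - 2 * y1 + y2
        if denom > 0:
            return best + 0.5 * (y0 - y2) / denom
    return float(best)

# A ramp-like feature hidden at shift 3 in a longer signal (invented data).
template = [0.0, 1.0, 4.0, 1.0, 0.0]
signal = [0.0] * 3 + template + [0.0] * 4
print(localize(template, signal))  # → 3.0
```

As in the paper, dropping occluded indices from the objective leaves the minimum location unchanged when the visible pixels still match.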

The topic of applying two-stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there exists some methodological research on the application of group sequential designs in bioequivalence studies, implementation of adaptive approaches has focused up to now on superiority and non-inferiority trials. Especially, no comparison of the features and performance characteristics of these designs has been performed, and therefore, the question of which design to employ in this setting remains open. In this paper, we discuss and compare 'classical' group sequential designs and three types of adaptive designs that offer the option of mid-course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified, which show power characteristics that are similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example.

The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to temperatures between the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, so the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, along with some experimental results for a two-stage adsorption-compression cycle powered by solar collectors. The adsorption system is applied as the high-temperature cycle, and the low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low-temperature cycle and for the whole system.

Ambiguities in the use of the term liquefaction and in defining the relation between liquefaction and ground failure have led to encumbered communication between workers in various fields and between specialists in the same field, and the possibility that evaluations of liquefaction potential could be misinterpreted or misapplied. Explicit definitions of liquefaction and related concepts are proposed herein. These definitions, based on observed laboratory behavior, are then used to clarify the relation between liquefaction and ground failure. Soil liquefaction is defined as the transformation of a granular material from a solid into a liquefied state as a consequence of increased pore-water pressures. This definition avoids confusion between liquefaction and possible flow-failure conditions after liquefaction. Flow-failure conditions are divided into two types: (1) unlimited flow if pore-pressure reductions caused by dilatancy during flow deformation are not sufficient to solidify the material and thus arrest flow, and (2) limited flow if they are sufficient to solidify the material after a finite deformation. After liquefaction in the field, unlimited flow commonly leads to flow landslides, whereas limited flow leads at most to lateral-spreading landslides. Quick-condition failures such as loss of bearing capacity form a third type of ground failure associated with liquefaction.

The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines are presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments". The project was coordinated by the Institute of Mining Technologies at Silesian University of Technology and was part of the Polish strategic project "Improvement of safety in mines", financed by the National Centre of Research and Development. Climate indices are based only on the physical parameters of air and their measurements. Thermal indices include additional factors strictly connected with the work, e.g. the thermal resistance of clothing and the kind of work. Special emphasis has been put on the following indices: the substitute Silesian temperature (TS), which is considered a climate index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of two-stage application of these indices has been taken into consideration (preliminary and detailed estimation). Based on the examples, it was shown that by applying the thermal index (detailed estimation) it is possible to avoid additional technical solutions that, according to the climate index, would be necessary to reduce the thermal hazard at particular workplaces. The threshold limit value for TS has been set based on these results. It was shown that below TS = 24°C it is not necessary to perform a detailed estimation.

Product streams containing solids are generated in both direct and indirect coal liquefaction processes. This project seeks to improve the effectiveness of coal liquefaction by novel application of sonic and ultrasonic energy to separation of solids from coal liquefaction streams.

Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. A specifically designed self-compacting grout is then introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense and homogeneous and in general has improved engineering properties and durability. This paper presents the results of a study of the effect of silica fume (SF) and superplasticizer admixtures (SP) on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size 150 mm × 300 mm containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, while superplasticizer was added at a dosage of 2% of the cement weight. Results indicated that both the tensile and compressive strength of TSC can be statistically derived as a function of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, that an increase in the water/cement ratio leads to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of silica fume and superplasticizers caused a significant increase in strength relative to control mixes.
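To illustrate the kind of statistical derivation mentioned (strength expressed as a function of w/c and c/s), a hedged sketch fitting a linear model by ordinary least squares on synthetic data; the model form and every coefficient below are invented for illustration and are not the paper's results:

```python
def fit_plane(rows):
    """Ordinary least squares for y = a + b*x1 + c*x2 via the 3x3 normal
    equations, solved by Gaussian elimination (pure Python)."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x1, x2, y in rows:
        v = (1.0, x1, x2)
        for i in range(3):
            rhs[i] += v[i] * y
            for j in range(3):
                A[i][j] += v[i] * v[j]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 3):
                A[r][j] -= f * A[col][j]
            rhs[r] -= f * rhs[col]
    # Back substitution.
    beta = [0.0] * 3
    for i in (2, 1, 0):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j]
                                for j in (1, 2) if j > i)) / A[i][i]
    return beta

# Synthetic "strength" data lying exactly on a hypothetical plane
# ln(f) = 4.0 - 2.0*(w/c) + 0.4*(c/s), over the paper's tested ratios.
truth = (4.0, -2.0, 0.4)
rows = [(wc, cs, truth[0] + truth[1] * wc + truth[2] * cs)
        for wc in (0.45, 0.55, 0.85) for cs in (0.5, 1.0, 1.5)]
a, b_, c = fit_plane(rows)
print(round(a, 3), round(b_, 3), round(c, 3))  # → 4.0 -2.0 0.4
```

With real test data the fit would not be exact, and the correlation coefficient would quantify how well w/c and c/s explain the measured strengths.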

In coal liquefaction, improvement of liquefaction conditions and an increase in liquefied-oil yield are expected from suppressing recombination through rapid stabilization of the pyrolytic radicals formed at the initial stage of liquefaction. Two-stage liquefaction combining pre-thermal treatment and liquefaction was performed under various conditions to investigate the effects of the reaction conditions on the yields and properties of the products, as well as to increase the liquefied-oil yield. It was found that the catalyst contributes greatly to hydrogen transfer to the coal during the pre-thermal treatment. A high yield of n-hexane-soluble fraction, with products of low condensation degree, could be obtained by combining pre-thermal treatment in the presence of hydrogen and catalyst with concentration of the slurry after the treatment. This was attributed to a synergetic effect between the improvement of liquefaction achieved by suppressing polymerization/condensation at the initial stage of reaction through the pre-thermal treatment, and the effective hydrogen transfer accompanying the improved coal/catalyst contact efficiency obtained by concentrating the slurry at the liquefaction stage. 4 refs., 8 figs.

Through the sinusoid-loading dynamic triaxial test, the liquefaction properties of saturated loess and sand sampled from a civil airport in Lanzhou, Gansu, are examined. Based on the laboratory results, a comprehensive assessment of the earthquake liquefaction potential of the loess and sand is given, using the liquefaction-resistance shear stress method and the results of seismic hazard assessment. It is found that under ground motion with an exceedance probability of 10% within 50 years, the loess in the study is more susceptible to liquefaction than the sand.

The paper presents the results of an experimental study of the influence of clay content (in silt-clay and sand-clay mixtures) on liquefaction beneath progressive waves. The experiments showed that the influence of clay content is very significant. Susceptibility of silt to liquefaction is increased with increasing clay content up to 30%, beyond which the mixture of silt and clay is not liquefied. Sand may become prone to liquefaction with the introduction of clay, contrary to the general perception that this type of sediment is normally liquefaction-resistant under waves.

This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when the turbine stages are simulated simultaneously or separately. By considering multiple blades per row and the scaling technique, the computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space-time periodicity described by a double Fourier decomposition. Fast Fourier transform (FFT) analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, and each lobed structure of the pressure wave corresponds to the vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect dominates nearly the whole blade span, while downstream of the blades this effect dominates from hub to mid-span. Near the shroud, the prevailing effect is instead linked to the blade-tip flow structure.
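A minimal sketch of the FFT-style harmonic analysis described, applied to a synthetic pressure trace; the bin numbers and amplitudes are invented stand-ins for a blade-passing fundamental and its first multiple (a direct DFT is used for brevity):

```python
import cmath
import math

def dft_mag(x):
    """Magnitude spectrum of a real signal by direct DFT (O(N^2), fine
    for a short illustration; an FFT is the practical choice)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) / N
            for k in range(N // 2)]

# Hypothetical pressure trace: a fundamental at bin 8 plus its first
# multiple at bin 16, mimicking the harmonic families reported.
N = 64
x = [1.0 * math.sin(2 * math.pi * 8 * n / N)
     + 0.4 * math.sin(2 * math.pi * 16 * n / N) for n in range(N)]
mag = dft_mag(x)
peaks = sorted(range(len(mag)), key=lambda k: mag[k], reverse=True)[:2]
print(sorted(peaks))  # → [8, 16]
```

Picking the dominant bins recovers the harmonic family; in the turbine data those bins would correspond to multiples of the vane/blade passing frequency.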

The characteristics of a two-stage fluidized-bed hybrid coal gasification system to produce syngas from coal, lignite, and peat are described. Devolatilization heat of 823 K is supplied by recirculating gas heated by a solar receiver/coal heater. A second-stage gasifier maintained at 1227 K serves to crack remaining tar and light oil to yield a product free from tar and other condensables, and sulfur can be removed by hot clean-up processes. CO is minimized because the coal is not burned with oxygen, and the product gas contains 50% H2. Bench scale reactors consist of a stage I unit 0.1 m in diam which is fed coal 200 microns in size. A stage II reactor has an inner diam of 0.36 m and serves to gasify the char from stage I. A solar power source of 10 kWt is required for the bench model, and will be obtained from a central receiver with quartz or heat pipe configurations for heat transfer.

In order to understand the key mechanisms of the composting process, the municipal solid waste (MSW) composting process was divided into two stages, and the characteristics of typical experimental scenarios were analyzed from the viewpoint of microbial kinetics. Through experiments with an advanced composting reactor under controlled composting conditions, several equations were worked out to simulate the degradation rate of the substrate. The equations showed that the degradation rate was controlled by the concentration of microbes in the first stage. The substrate degradation rates of the inoculated composting systems Run A, B and C and the Control system were 13.61 g/(kg·h), 13.08 g/(kg·h), 15.671 g/(kg·h) and 10.5 g/(kg·h), respectively. The value for Run C is around 1.5 times that of the Control system. The decomposition rate in the second stage is controlled by the concentration of substrate. Although the organic matter decomposition rates were similar for all Runs, inoculation could reduce the values of the half-velocity coefficient and make the composting stabilize more efficiently. In particular, for Run C the decomposition rate was high in the first stage and low in the second stage. The results indicated that inoculation was efficient for the composting process.
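A hedged numerical sketch of the two-stage kinetics described (stage one rate controlled by microbe concentration, stage two by substrate concentration via saturation kinetics); all rate constants and the switch time below are invented for illustration and are not the fitted values from the experiments:

```python
def compost(S0=100.0, X0=1.0, t_switch=24.0, t_end=120.0, dt=0.1,
            mu=0.08, k=0.12, Ks=20.0, vmax=1.5):
    """Euler integration of a two-stage degradation model:
    stage 1: rate proportional to microbe concentration X (growing at mu),
    stage 2: Monod-type, substrate-limited rate vmax*S/(Ks+S)."""
    S, X, t = S0, X0, 0.0
    while t < t_end and S > 0.0:
        if t < t_switch:
            rate = k * X                 # stage 1: controlled by microbes
            X *= 1.0 + mu * dt           # exponential microbial growth
        else:
            rate = vmax * S / (Ks + S)   # stage 2: controlled by substrate
        S = max(0.0, S - rate * dt)      # substrate cannot go negative
        t += dt
    return S

S_final = compost()
print(0.0 <= S_final < 100.0)  # substrate is degraded, never negative
```

The switch between the two rate laws is what the stage division captures: early on, adding microbes (inoculation) speeds things up; later, only the remaining substrate matters.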

A novel gas loading system was designed for the specific application of remotely loading high-purity gases into targets for gas-gun-driven plate impact experiments. The high-purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas-gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu and are assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel, with wetted seals made from Kalrez and Teflon. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown. LA-UR-15-20521

Obtaining liquid fuels from coal that are economically competitive with those obtained from petroleum-based sources is a significant challenge for researchers as well as the chemical industry. Presently, the economics of coal liquefaction are unfavorable because of relatively severe processing conditions (temperatures of 430 °C and pressures of 2200 psig), use of a costly catalyst, and a low-quality product slate of relatively high-boiling fractions. The economics could be improved by achieving adequate coal conversions at less severe processing conditions and by improving the product slate. A study has been carried out to examine the effect of a surfactant in reducing particle agglomeration and improving hydrodynamics in the coal liquefaction reactor to increase coal conversions...

The primary objective of the study is to develop a computer model for a baseline direct coal liquefaction design based on two-stage, direct-coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a baseline design based on previous DOE/PETC results from the Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC staff to understand and use the computer model; thorough documentation of all underlying assumptions for the baseline economics; and a user manual and training material that will facilitate future updating of the model.

Helium liquefaction with a two-stage 4 K pulse tube cryocooler is introduced in this paper. The helium liquefier precools the incoming helium gas by exploiting the inefficiency of the second-stage regenerator of the pulse tube cryocooler. This precooling reduces the enthalpy of the incoming helium gas entering the condenser and significantly increases the condensation rate. Numerical analysis predicts that the precooling heat load on the second-stage regenerator decreases the second-stage cooling capacity of the cryocooler by only 11% of the heat actually absorbed into the regenerator. A prototype pulse tube helium liquefier was built with two precooling heat exchangers, one on the first-stage cold head and one on the second-stage regenerator. It continuously liquefies helium at a rate of 4.8 L/day under normal pressure while consuming 4.6 kW of input power.

The Linde Group, through its Australian subsidiary BOC Limited, has signed an agreement with Darwin LNG Pty Ltd for the supply of feed gas to Linde's new helium refining and liquefaction facility in Darwin, Australia. Linde Kryotechnik AG, located in Switzerland, has carried out the engineering and fabrication of the equipment for the turnkey helium plant. The raw feed gas flow of 20,730 Nm³/h contains up to 3 mol% helium. The purification process for the feed gas consists of partial condensation of nitrogen in two stages, cryogenic adsorption, and finally catalytic oxidation of hydrogen followed by a dryer system. Downstream of the purification, the refined helium is liquefied using a modified Brayton process and stored in a 30,000 gal LHe tank. For further distribution and export of the liquid helium, two stations are available for filling truck trailers and containers. The liquid nitrogen, which provides refrigeration capacity to the nitrogen removal stages in the purification process as well as pre-cooling of the pure helium in the liquefaction process, is generated on site during feed gas purification. The optimized process provides low power consumption, maximum helium recovery, and minimum helium loss.

A survey of coal liquefaction technology and analysis of projected relative performance of high potential candidates has been completed and the results are reported here. The key objectives of the study included preparation of a broad survey of the status of liquefaction processes under development, selection of a limited number of high potential process candidates for further study, and an analysis of the relative commercial potential of these candidates. Procedures which contributed to the achievement of the above key goals included definition of the characteristics and development status of known major liquefaction process candidates, development of standardized procedures for assessing technical, environmental, economic and product characteristics for the separate candidates, and development of procedures for selecting and comparing high potential processes. The comparisons were made for three production areas and four marketing areas of the US. In view of the broad scope of the objectives the survey was a limited effort. It used the experience gained during preparation of seven comprehensive conceptual designs/economic evaluations plus comprehensive reviews of the designs, construction and operation of several pilot plants. Results and conclusions must be viewed in the perspective of the information available, how this information was treated, and the full context of the economic comparison results. Comparative economics are presented as ratios; they are not intended to be predictors of absolute values. Because the true cost of constructing and operating large coal conversion facilities will be known only after commercialization, relative values are considered more appropriate. (LTN)

The present study experimentally investigates the performance of a two-stage solar adsorption refrigeration system with an activated carbon-methanol pair. Such a system was fabricated and tested under the conditions of the National Institute of Technology Calicut, Kerala, India. The system consists of a parabolic solar concentrator, two water tanks, two adsorbent beds, a condenser, an expansion device, an evaporator, and an accumulator. In this system the second water tank acts as a sensible heat storage device so that the system can also be used at night. The system has been designed for heating 50 litres of water from 25 °C to 90 °C as well as cooling 10 litres of water from 30 °C to 10 °C within one hour. The performance parameters studied are the specific cooling power (SCP), coefficient of performance (COP), solar COP, and exergetic efficiency. The dependence of the exergetic efficiency and cycle COP on the driving heat source temperature is also studied. The optimum heat source temperature for this system is determined to be 72.4 °C. The results show that the system performs better at night than during the day. The system has a mean cycle COP of 0.196 during the day and 0.335 at night. The mean SCP values during the day and night are 47.83 and 68.2, respectively. The experimental results also demonstrate that the refrigerator has a cooling capacity of 47 to 78 W during the day and 57.6 W to 104.4 W at night.
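
The performance metrics named above can be illustrated with their standard textbook definitions. The heat quantities below are illustrative placeholders; only the 72.4 °C optimum source temperature is taken from the study:

```python
def cycle_cop(q_evap, q_gen):
    """Cycle COP: cooling effect per unit driving heat input."""
    return q_evap / q_gen

def specific_cooling_power(q_evap_watts, adsorbent_mass_kg):
    """SCP: cooling power per kg of adsorbent (W/kg)."""
    return q_evap_watts / adsorbent_mass_kg

def carnot_cop(t_gen, t_amb, t_evap):
    """Ideal COP of a reversible three-temperature (heat-driven)
    cooling cycle; all temperatures in kelvin."""
    return (1.0 - t_amb / t_gen) * t_evap / (t_amb - t_evap)

def exergetic_efficiency(cop, t_gen, t_amb, t_evap):
    """Second-law efficiency: actual COP over the reversible COP."""
    return cop / carnot_cop(t_gen, t_amb, t_evap)

# Illustrative numbers (not the paper's measurements):
cop = cycle_cop(q_evap=300.0, q_gen=1500.0)   # 0.2
eta = exergetic_efficiency(cop, t_gen=345.55,  # 72.4 C source
                           t_amb=303.15, t_evap=283.15)
```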

Reconnaissance reports and pertinent research on seismic hazards show that liquefaction is one of the key sources of damage to geotechnical and structural engineering systems. Therefore, identifying site liquefaction conditions plays an important role in seismic hazard mitigation. One of the widely used approaches for detecting liquefaction is based on the time-frequency analysis of ground motion recordings, in which short-time Fourier transform is typically used. It is known that recordings at a site with liquefaction are the result of nonlinear responses of seismic waves propagating in the liquefied layers underneath the site. Moreover, Fourier transform is not effective in characterizing such dynamic features as time-dependent frequency of the recordings rooted in nonlinear responses. Therefore, the aforementioned approach may not be intrinsically effective in detecting liquefaction. An alternative to the Fourier-based approach is presented in this study, which proposes time-frequency analysis of earthquake ground motion recordings with the aid of the Hilbert-Huang transform (HHT), and offers justification for the HHT in addressing the liquefaction features shown in the recordings. The paper then defines the predominant instantaneous frequency (PIF) and introduces the PIF-related motion features to identify liquefaction conditions at a given site. Analysis of 29 recorded data sets at different site conditions shows that the proposed approach is effective in detecting site liquefaction in comparison with other methods.
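
The HHT pairs empirical mode decomposition with Hilbert spectral analysis; the instantaneous-frequency step underlying the PIF can be sketched with NumPy alone, here applied to a synthetic stationary tone rather than a real ground motion recording:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert):
    zero out negative frequencies, double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the derivative of the
    unwrapped analytic phase; fs is the sampling rate."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2.0 * np.pi)

# Synthetic check: a stationary 5 Hz tone should give ~5 Hz throughout.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t)
f_inst = instantaneous_frequency(x, fs)
pif = np.median(f_inst[50:-50])   # trim FFT edge effects
```

For a liquefied site, the expected signature is a drift of this instantaneous frequency toward lower values as the softened layer elongates the site period.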

This technology pathway case investigates the feasibility of using whole wet microalgae as a feedstock for conversion via hydrothermal liquefaction. Technical barriers and key research needs have been assessed in order for the hydrothermal liquefaction of microalgae to be competitive with petroleum-derived gasoline-, diesel-, and jet-range hydrocarbon blendstocks.

One-stage bilateral or two-stage unilateral video-assisted thoracoscopic sympathectomy can be performed in the treatment of primary focal hyperhidrosis. Here we present a case of compensatory sweating of the contralateral side after a two-stage operation.

This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

Two-stage exams--where students complete part one of an exam closed book and independently and part two is completed open book and independently (two-stage independent, or TS-I) or collaboratively (two-stage collaborative, or TS-C)--provide a means to include collaborative learning in summative assessments. Collaborative learning has been shown to…

A coal liquefaction system is disclosed with a novel preasphaltene recycle from a supercritical extraction unit to the slurry mix tank, wherein the recycle stream contains at least 90% preasphaltenes (benzene-insoluble, pyridine-soluble organics) along with other residual materials such as unconverted coal and ash. This process results in the production of asphaltene materials which can be hydrotreated to yield a substitute for No. 6 fuel oil. The preasphaltene-predominant recycle reduces hydrogen consumption for a process where asphaltene material is the desired product.

The liquefaction of crop residues in the presence of ethylene glycol, ethylene carbonate, or polyethylene glycol using sulfuric acid as a catalyst was studied. For all experiments, the liquefaction was conducted at 160 °C and atmospheric pressure. The mass ratio of feedstock to liquefaction solvent used in all the experiments was 30:100. The results show that the acid-catalyzed liquefaction process fits a pseudo-first-order kinetics model. Liquefaction yields of 80, 74, and 60% were obtained in 60 minutes of reaction when corn stover was liquefied with ethylene glycol, a mixture of polyethylene glycol and glycerol (9:1, w/w), and ethylene carbonate, respectively. When ethylene carbonate was used as solvent, the liquefaction yields of rice straw and wheat straw were 67% and 73%, respectively, lower than that of corn stover (80%). When a mixture of ethylene carbonate and ethylene glycol (8:2, w/w) was used as solvent, the liquefaction yields for corn stover, rice straw, and wheat straw were 78, 68, and 70%, respectively.
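
A pseudo-first-order model of the kind reported above can be fitted by linearising y(t) = 1 − exp(−kt) to ln(1 − y) = −kt. The sketch below recovers an assumed rate constant from synthetic noiseless data; it is illustrative, not the paper's fit:

```python
import math

def fit_pseudo_first_order(times, yields):
    """Fit y(t) = 1 - exp(-k t) by linearising ln(1 - y) = -k t;
    least-squares slope constrained through the origin."""
    num = sum(t * math.log(1.0 - y) for t, y in zip(times, yields))
    den = sum(t * t for t in times)
    return -num / den

# Synthetic data from an assumed rate constant chosen so that
# conversion reaches 80% at 60 min (consistent with the corn stover run):
k_true = math.log(5.0) / 60.0            # per minute
times = [10, 20, 30, 40, 50, 60]
yields = [1.0 - math.exp(-k_true * t) for t in times]
k_fit = fit_pseudo_first_order(times, yields)
```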

Progress reports are presented for the following tasks: coliquefaction of coal with waste materials; catalysts for coal liquefaction to clean transportation fuels; fundamental research in coal liquefaction; and in situ analytical techniques for coal liquefaction and coal liquefaction catalysts.

The objectives of this work are to test the application of steam pretreatment to direct coal liquefaction, to investigate the reaction of model compounds with water, and to explore the use of zeolites in these processes. Previous work demonstrated the effectiveness of steam pretreatment in a subsequent flash pyrolysis. Apparently, subcritical steam ruptures nearly all of the ether cross-links, leaving a partially depolymerized structure. It was postulated that very rapid heating of the pretreated coal to liquefaction conditions would be required to preserve the effects of such treatment. Accordingly, a method was adopted in which coal slurry is injected into a hot autoclave containing solvent. Since oxygen is capable of destroying the pretreatment effect, precautions were taken for its rigorous exclusion. Tests were conducted with Illinois No. 6 coal steam-treated at 340 °C and 750 psia for 15 minutes. Both raw and pretreated samples were liquefied in deoxygenated tetralin at high-severity (400 °C, 30 min) and low-severity (a: 350 °C, 30 min; b: 385 °C, 15 min) conditions under 1500 psia hydrogen. Substantial improvement in liquid product quality was obtained, and the need for rapid heating and oxygen exclusion was demonstrated. Under low-severity conditions, the oil yield was more than doubled, going from 12.5 to 29 wt%. The chemistry of the pretreatment process was also studied using aromatic ethers as model compounds. alpha-Benzylnaphthyl ether (alpha-BNE), alpha-naphthylmethyl phenyl ether (alpha-NMPE), and 9-phenoxyphenanthrene were exposed to steam and inert gas at pretreatment conditions and, in some cases, to liquid water at 315 °C. alpha-BNE and alpha-NMPE showed little difference in conversion between inert gas and steam; hence, these compounds are poor models for coal in steam pretreatment. Thermally stable 9-phenoxyphenanthrene, however, was completely converted in one hour by liquid water at 315 °C. At pretreatment conditions, mostly rearranged starting...

Monolith catalysts of MoO₃-CoO-Al₂O₃ were prepared and tested for coal liquefaction in a stirred autoclave. In general, the monolith catalysts were not as good as particulate catalysts prepared on Corning alumina supports. Measurements of O₂ chemisorption and BET surface area have been made on a series of Co/Mo/Al₂O₃ catalysts obtained from PETC. The catalysts were derived from Cyanamid 1442A and had been tested for coal liquefaction in batch autoclaves and continuous flow units. MoO₃-Al₂O₃ catalysts over the loading range 3.9 to 14.9 wt% MoO₃ have been studied with respect to BET surface area (before and after reduction), O₂ chemisorption at -78 °C, redox behavior at 500 °C, and activity for cyclohexane dehydrogenation at 500 °C. In connection with the fate of tin catalysts during coal liquefaction, calculations have been made of the relative thermodynamic stability of SnCl₂, Sn, SnO₂, and SnS in the presence of H₂, HCl, H₂S, and H₂O. Ferrous sulfate dispersed in methylnaphthalene has been shown to be reduced to ferrous sulfide under typical coal hydroliquefaction conditions (1 hour, 450 °C, 1000 psi initial H₂ pressure). This suggests that ferrous sulfide may be the common catalytic ingredient when either (a) ferrous sulfate impregnated on powdered coal or (b) finely divided iron pyrite is used as the catalyst. Earlier research on impregnated ferrous sulfate, impregnated ferrous halides, and pyrite is consistent with this assumption. Eight Co/Mo/Al₂O₃ catalysts from commercial suppliers, along with SnCl₂, have been studied for the hydrotreating of 1-methylnaphthalene (1-MN) in a stirred autoclave at 450 and 500 °C.

CONSOL R&D is conducting a three-year program to characterize process and product streams from direct coal liquefaction process development projects. The program objectives are twofold: (1) to obtain and provide appropriate samples of coal liquids for the evaluation of analytical methodology, and (2) to support ongoing DOE-sponsored coal liquefaction process development efforts. The two broad objectives have considerable overlap and together serve to provide a bridge between process development and analytical chemistry.

Worldwide primary energy consumption is entering an era of diversification and higher quality under the influence of rapid economic development, increasing energy shortages, and strict environmental policies. Although renewable energy technology is developing rapidly, fossil fuels (coal, oil, and gas) are still the dominant energy sources in the world. As a country rich in coal but short of oil and gas, China has seen its oil imports soar in the past few years. Government, research organizations, and enterprises in China are paying more and more attention to processes for converting coal into clean liquid fuels. Direct and indirect coal liquefaction technologies are compared in this paper based on China's current energy status and technological progress, both in China and worldwide.

Based on the characteristics of low-pressure natural gas, three closed-expander liquefaction systems for natural gas were designed. The PR (Peng-Robinson) equation was selected to calculate the phase equilibrium of the mixture, the chemical process simulation software PRO/II was used to simulate the different liquefaction systems, and the effects of some key thermodynamic parameters on the liquefaction systems were analyzed in detail. The results show that the liquefaction power with propane pre-cooling was less than that of either the single-stage liquefaction system without pre-cooling or the two-stage liquefaction system without pre-cooling, and gave a higher liquefaction rate. Based on these results, the liquefaction system with propane pre-cooling is the better choice. Finally, we note that the lower the throttling temperature, the higher the liquefaction rate and the smaller the liquefaction power; and the higher the expansion ratio, the higher the liquefaction rate.
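
As a minimal illustration of the PR equation used for the phase-equilibrium calculations, the sketch below solves the PR cubic for the compressibility factor of a pure component (methane near ambient conditions). A full mixture flash as performed in PRO/II would additionally require mixing rules and fugacity matching:

```python
import numpy as np

R = 8.314  # J/(mol K)

def pr_z_factor(T, P, Tc, Pc, omega):
    """Compressibility factor Z from the Peng-Robinson EOS;
    returns the largest real root (vapour branch). SI units."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega ** 2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2
    a = 0.45724 * R ** 2 * Tc ** 2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T) ** 2
    B = b * P / (R * T)
    # Cubic in Z: Z^3 - (1-B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0 * B ** 2 - 2.0 * B,
              -(A * B - B ** 2 - B ** 3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()

# Methane (Tc = 190.56 K, Pc = 4.599 MPa, omega = 0.011) near ambient:
z = pr_z_factor(T=300.0, P=0.1e6, Tc=190.56, Pc=4.599e6, omega=0.011)
```

At low pressure the gas is nearly ideal, so Z should come out just below 1.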

Volume I contains papers presented at the following sessions: AR-Coal Liquefaction; Gas to Liquids; and Direct Liquefaction. Selected papers have been processed separately for inclusion in the Energy Science and Technology Database.

This paper deals with the problem of preemptive scheduling in a two-stage supply chain framework. The supply chain environment contains two stages: production and transportation. In the production stage, jobs are processed on a manufacturer's bounded serial batching machine, and preemptions are allowed...

Batch coal liquefaction experiments using tubing bombs and continuous experiments using a cell liquefaction test facility were carried out. The main purpose was to maximize coal liquefaction yields by improving the activity of coal dissolution catalysts, which are oil-soluble transition metal naphthenates, and to supplement earlier, incomplete research results. In addition, the reaction characteristics of coal liquefaction and of coal-liquid upgrading catalysts with respect to sulfiding conditions and phosphorus addition have been studied (author). 102 refs., 35 figs.

The origin and influence factors of sand liquefaction were analyzed, and the relation between liquefaction and its influence factors was established. A model based on support vector machines (SVM) was built whose input parameters were selected from the following influence factors of sand liquefaction: magnitude (M), SPT value, effective overburden pressure, clay content, and average grain diameter. Sand was divided into two classes, liquefaction and non-liquefaction, and the class label was treated as the output parameter of the model. The model was then used to classify sand samples; 20 support vectors and 17 borderline support vectors were obtained, and after parameter optimization, 14 support vectors and 6 borderline support vectors were obtained, with the prediction precision reaching 100%. In order to verify the generalization of the SVM method, two other sets of practical sample data, from Tangshan in Hebei province and Sanshui in Guangdong province, were treated with a more complex multi-class model, which also took influence factors of sand liquefaction as input parameters and divided sand into four liquefaction grades as output: serious liquefaction, medium liquefaction, slight liquefaction, and non-liquefaction. The simulation results show that the latter model has very high precision, and that using an SVM model to assess sand liquefaction is completely feasible.
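
A minimal sketch of the binary SVM classification step, using the five input factors listed above. This is a from-scratch linear soft-margin SVM trained by subgradient descent on the hinge loss (Pegasos-style); the records are invented toy data, not the study's case histories, which presumably required a kernel SVM:

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=300, seed=0):
    """Minimal linear soft-margin SVM via Pegasos-style subgradient
    descent on the hinge loss. Labels must be +1 / -1."""
    rng = random.Random(seed)
    d = len(X[0])
    w, b, t = [0.0] * d, 0.0, 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            w = [wj * (1.0 - eta * lam) for wj in w]   # regularisation step
            if margin < 1.0:                           # hinge-loss subgradient
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Invented toy records: (magnitude, SPT N, overburden / 100 kPa,
# clay content / 10 %, mean grain diameter mm); +1 = liquefaction.
X = [[7.5, 5, 0.5, 0.5, 0.2], [7.0, 8, 0.6, 1.0, 0.25],
     [7.8, 4, 0.4, 0.3, 0.15], [6.0, 30, 1.5, 3.0, 0.6],
     [5.5, 35, 1.8, 2.5, 0.7], [6.2, 28, 1.4, 3.5, 0.5]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```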

This volume contains 55 papers presented at the conference. They are divided into the following topical sections: Direct liquefaction; Indirect liquefaction; Gas conversion (methane conversion); and Advanced research liquefaction. Papers in this last section deal mostly with coprocessing of coal with petroleum, plastics, and waste tires, and catalyst studies. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

This paper uses a well-regarded, hardware based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high stage compressor was modelled using a compressor map, and the low stage compressor was experimentally studied using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.

This report is based on the proceedings of the U.S. Department of Energy Bioenergy Technologies Office Biomass Indirect Liquefaction Strategy Workshop. The workshop, held March 20–21, 2014, in Golden, Colorado, discussed and detailed the research and development needs for biomass indirect liquefaction. Discussions focused on pathways that convert biomass-based syngas (or any carbon monoxide, hydrogen gaseous stream) to liquid intermediates (alcohols or acids) and further synthesize those intermediates to liquid hydrocarbons that are compatible as either a refinery feed or neat fuel.

In the current study, the performances of some decision tree (DT) techniques are evaluated for post-earthquake soil liquefaction assessment. A database containing 620 records of seismic parameters and soil properties is used in this study. Three decision tree techniques are used here in two different ways, from statistical and engineering points of view, to develop decision rules. The DT results are compared to a logistic regression (LR) model. The results of this study indicate that the DTs not only successfully predict liquefaction but can also outperform the LR model. The best DT models are interpreted and evaluated from an engineering point of view.
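
A depth-1 decision tree (a stump) illustrates how DT techniques of this kind derive interpretable decision rules from thresholds on input features. The single feature and records below are invented for illustration; the study's models were deeper trees built on 620 real records:

```python
def fit_stump(X, y, feature):
    """Depth-1 decision tree: choose the threshold on one feature that
    minimises misclassification, trying both leaf orientations."""
    values = sorted(set(row[feature] for row in X))
    cuts = [(a + b) / 2.0 for a, b in zip(values, values[1:])]
    best = None
    for cut in cuts:
        for sign in (1, -1):
            preds = [sign if row[feature] <= cut else -sign for row in X]
            errs = sum(p != t for p, t in zip(preds, y))
            if best is None or errs < best[0]:
                best = (errs, cut, sign)
    return best  # (errors, threshold, orientation)

# Invented toy records: [SPT blow count]; +1 = liquefied, -1 = not.
X = [[4], [6], [9], [12], [22], [27], [33], [40]]
y = [1, 1, 1, 1, -1, -1, -1, -1]
errors, threshold, sign = fit_stump(X, y, feature=0)
# -> rule: "liquefaction if blow count <= 17.0", zero training errors
```

The resulting rule is directly readable by an engineer, which is the interpretability advantage the study highlights over logistic regression coefficients.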

The objectives of this project are to investigate the chemistry and kinetics that occur in the initial stages of coal liquefaction and to determine the effects of hydrogen pressure, catalyst activity, and solvent type on the quantity and quality of the products produced. The project comprises three tasks: (1) preconversion chemistry and kinetics, (2) hydrogen utilization studies, and (3) assessment of kinetic models for liquefaction. The hydrogen utilization studies work will be the main topic of this report. However, the other tasks are briefly described.

A two-stage pulse tube refrigerator has the great advantage of having no moving parts at low temperatures; its drawback is low theoretical efficiency. In an ordinary two-stage pulse tube refrigerator, the expansion work of the first-stage pulse tube is rather large but is dissipated as heat, so the theoretical efficiency is lower than that of a Stirling refrigerator. A series two-stage pulse tube refrigerator was introduced to solve this problem. The hot end of the second-stage regenerator is connected to the hot end of the first-stage pulse tube, so the expansion work in the first-stage pulse tube becomes part of the input work of the second stage and the efficiency is increased. In a simulation of a step-piston type two-stage series pulse tube refrigerator, the efficiency is increased by 13.8%.

A two-stage stabilizer is compared with a one-stage stabilizer. Formulas have been obtained that make an engineering calculation possible, and an example calculation is given.

.... Pre-treatment of the residue prior to its anaerobic digestion (AD) was investigated using a two-stage pre-treatment approach with two fungal strains, CCHT-1 and Trichoderma reesei in succession in anaerobic batch bioreactors...

The process flow and the main devices of a new two-stage dry-fed coal gasification pilot plant with a throughput of 36 t/d are introduced in this paper. For comparison with traditional one-stage gasifiers, the influence of the coal feed ratio between the two stages on the performance of the gasifier was studied in detail through a series of experiments. The results reveal that two-stage gasification decreases the temperature of the syngas at the outlet of the gasifier, simplifies the gasification process, and reduces the size of the syngas cooler. Moreover, the cold gas efficiency of the gasifier can be improved by using two-stage gasification; in our experiments, the efficiency is about 3%-6% higher than that of existing one-stage gasifiers.

This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

A method of natural gas liquefaction may include cooling a gaseous NG process stream to form a liquid NG process stream. The method may further include directing the first tail gas stream out of a plant at a first pressure and directing a second tail gas stream out of the plant at a second pressure. An additional method of natural gas liquefaction may include separating CO.sub.2 from a liquid NG process stream and processing the CO.sub.2 to provide a CO.sub.2 product stream. Another method of natural gas liquefaction may include combining a marginal gaseous NG process stream with a secondary substantially pure NG stream to provide an improved gaseous NG process stream. Additionally, a NG liquefaction plant may include a first tail gas outlet, and at least a second tail gas outlet, the at least a second tail gas outlet separate from the first tail gas outlet.

The two-stage AC to AC direct power converter is an alternative matrix converter topology, which offers the benefits of sinusoidal input currents and output voltages, bidirectional power flow and controllable input power factor. The absence of any energy storage devices, such as electrolytic capacitors, has increased the potential lifetime of the converter. In this research work, a new multi-motor drive system based on a two-stage direct power converter has been proposed, with two motors c...

Chia-Chang Chien, Shu-Fen Huang, For-Wey Lung (Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; Calo Psychiatric Center, Pingtung County, Taiwan). Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We recruited 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased the area under the curve and the positive predictive value, and its cost decreased by 59%. Conclusion: Two-stage window screening is more accurate and economical than two-stage positive screening. Our results provide an example of the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised

The replacement of fossil fuels by renewable fuels such as biogas and biohydrogen will require efficient and economically competitive process technologies together with new kinds of biomass. A two-stage system for biogas production has several advantages over the widely used one-stage continuous stirred tank reactor (CSTR). However, it has not yet been widely implemented on a large scale. Biohydrogen can be produced in the anaerobic two-stage system. It is considered to be a useful fuel for t...

The microalgae Phaeodactylum tricornutum was processed by hydrothermal liquefaction in order to assess the influence of reaction temperature and reaction time on the product and elemental distribution. The experiments were carried out at different reaction times (5 and 15 min) and over a wide range...

Understanding how sedimentary basins respond to seismic-wave energy generated by earthquakes is a significant concern for seismic-hazard estimation and risk analysis. The main goal of this study is to assess the vulnerability index, Kg, as an indicator of liquefaction-prone sites in the Nile delta basin based on microtremor measurements. Horizontal-to-vertical spectral ratio (HVSR) analyses of ambient noise data, which were collected in 2006 at 120 sites covering the Nile delta from south to north, were reprocessed using Geopsy software. The HVSR amplification factor, A, and fundamental frequency, F, were calculated and Kg was estimated for each measurement. The Kg value varies widely from the southern to the northern delta, and potential liquefaction sites were identified. The higher vulnerability indices are associated with sites located in the southern part of the Nile delta and close to the branches of the Nile River. The HVSR factors were correlated with the geologic setting of the Nile delta and show good correlations with sediment thickness and subsurface stratigraphic boundaries. However, we note that sites in areas with the greatest percentage of sand also yielded relatively high Kg values with respect to sites in areas where clay is abundant. We conclude that any earthquake with a ground acceleration of more than 50 gal at hard rock can cause perceptible deformation of sandy sediments, and liquefaction can take place in weak zones with Kg ≥ 20. The worst potential liquefaction zones (Kg > 30) are frequently associated with the Damietta and Rosetta branches of the Nile River and the southern delta, where relatively coarser sand exists. The HVSR technique is a very sensitive tool for detecting lithological-stratigraphic variations in two dimensions and varying liquefaction susceptibility.

Two-stage underground coal gasification was studied to improve the calorific value of the syngas and to extend gas production times. A model test using the oxygen-enriched two-stage coal gasification method was carried out. The composition of the gas produced, the time ratio of the two stages, and the role of the temperature field were analysed. The results show that oxygen-enriched two-stage gasification shortens the time of the first stage and prolongs the time of the second stage. Feed oxygen concentrations of 30%, 35%, 40%, 45%, 60%, or 80% gave time ratios (first stage to second stage) of 1:0.12, 1:0.21, 1:0.51, 1:0.64, 1:0.90, and 1:4.0, respectively. Cooling rates of the temperature field after steam injection decreased with time from about 19.1-27.4 °C/min to 2.3-6.8 °C/min, but this rate increased with increasing oxygen concentration in the first stage. The calorific value of the syngas improves with increased oxygen concentration in the first stage. Injection of 80% oxygen-enriched air gave gas with the highest calorific value and also the longest production time. The calorific value of the gas obtained from the oxygen-enriched two-stage gasification method lies in the range 5.31 MJ/Nm3 to 10.54 MJ/Nm3.

From the studies on the magnetostriction characteristics of two-stage sintered polycrystalline CoFe{sub 2}O{sub 4} made from nanocrystalline powders, it is found that two-stage sintering at low temperatures is very effective for enhancing the density and for attaining higher magnetostriction coefficient. Magnetostriction coefficient and strain derivative are further enhanced by magnetic field annealing and relatively larger enhancement in the magnetostriction parameters is obtained for the samples sintered at lower temperatures, after magnetic annealing, despite the fact that samples sintered at higher temperatures show larger magnetostriction coefficients before annealing. A high magnetostriction coefficient of ∼380 ppm is obtained after field annealing for the sample sintered at 1100 °C, below a magnetic field of 400 kA/m, which is the highest value so far reported at low magnetic fields for sintered polycrystalline cobalt ferrite. - Highlights: • Effect of two-stage sintering on the magnetostriction characteristics of CoFe{sub 2}O{sub 4} is studied. • Two-stage sintering is very effective for enhancing the density and the magnetostriction parameters. • Higher magnetostriction for samples sintered at low temperatures and after magnetic field annealing. • Highest reported magnetostriction of 380 ppm at low fields after two-stage, low-temperature sintering.

Stirling-type pulse tube refrigerators have attracted academic and commercial interest in recent years due to their more compact configuration and higher efficiency compared with G-M type pulse tube refrigerators. In order to achieve a no-load cooling temperature below 20 K, a thermally coupled two-stage Stirling-type pulse tube refrigerator has been built. The thermally coupled arrangement was expected to minimize the interference between the two stages and to simplify the adjustment and optimization of the phase shifters. A no-load cooling temperature of 14.97 K has been realized with the two-stage cooler driven by one linear compressor with 200 W electric input. When the two stages are driven by two compressors respectively, with a total electric input of 400 W, the prototype has attained a no-load cooling temperature of 12.96 K, which is the lowest temperature ever reported for two-stage Stirling-type pulse tube refrigerators.

Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by stereomicroscope using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was found between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicate higher accuracy for the two-stage impression technique than for the one-stage impression technique.

The NiMo sulfide supported on Ketjen Black (KB) was more effective than the conventional NiMo/alumina catalyst and FeS2 catalyst during two-stage liquefaction combining low-temperature and high-temperature hydrogenation, yielding lighter oil products containing light fractions boiling below 300°C. Although the NiMo/alumina catalyst yielded more oil products during the two-stage liquefaction, the lighter oil fractions did not increase; it was mainly the heavier fractions that increased. This suggests that hydrogenation of aromatic rings and successive cleavage of the rings, which derive from sufficient hydrogenation of the aromatic rings over the catalyst, are necessary for producing the light oil. For the two-stage reaction with the NiMo/KB catalyst, it was considered that sufficient hydrogen was transferred directly to coal molecules in the first, low-temperature stage, which promoted the solubilization of the coal and the subsequent hydrogenation in the high-temperature reaction. Thus, high catalyst activity must be obtained. It is expected that distillates of still higher quality can be produced through optimization of catalysts and solvents in the two-stage reaction. 1 ref., 4 figs., 1 tab.

This report outlines the design and construction of the X-2 two-stage free-piston-driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact and lower overall cost free-piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free-piston driver concept.

In this paper, theoretical analysis and simulation calculations were conducted for a basic two-stage semiconductor thermoelectric module, which contains one thermocouple in the second stage and several thermocouples in the first stage. The study focused on the configuration of the two-stage semiconductor thermoelectric cooler, especially investigating the influence of parameters such as the current I1 of the first stage, the area A1 of each thermocouple, and the number n of thermocouples in the first stage on the cooling performance of the module. The results of the analysis indicate that changing the current I1 of the first stage, the area A1 of the thermocouples, and the number n of thermocouples in the first stage can improve the cooling performance of the module. These results can be used to optimize the configuration of the two-stage semiconductor thermoelectric module and provide guidance for the design and application of thermoelectric coolers.
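As a minimal sketch of the kind of stage-level calculation involved, the standard single-stage thermoelectric cooling-power balance (Peltier cooling minus half the Joule heating minus heat conducted back from the hot side) can be written as follows. This is the textbook relation, not code from the paper, and all parameter names and numbers are illustrative.

```python
def stage_cooling_power(n, alpha, current, resistance, conductance, t_cold, t_hot):
    """Net cooling power of one thermoelectric stage with n thermocouples:
    Peltier cooling term minus half the Joule heating minus the heat
    conducted back from the hot side across the couple."""
    peltier = alpha * current * t_cold
    joule = 0.5 * current ** 2 * resistance
    conduction = conductance * (t_hot - t_cold)
    return n * (peltier - joule - conduction)
```

With illustrative values (10 couples, a Seebeck coefficient of 2e-4 V/K, 3 A drive current, 0.01 ohm resistance, 0.005 W/K conductance, 280 K cold side, 300 K hot side) the stage delivers roughly 0.23 W of cooling; varying the current or the couple count, as the paper does with I1 and n, shifts this balance.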

Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and the physico-chemical characteristics and nutrient contents of the final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90-270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

Outcomes of primary anterior cruciate ligament (ACL) reconstruction have been reported to be far superior to those of revision reconstruction. However, as the incidence of ACL reconstruction is rapidly increasing, so is the number of failures. The subsequent need for revision ACL reconstruction is estimated to occur in up to 13,000 patients each year in the United States. Revision ACL reconstruction can be performed in one or two stages. A two-stage approach is recommended in cases of improper placement of the original tunnels or in cases of unacceptable tunnel enlargement. The aim of this study was to describe the technique for allograft ACL tunnel bone grafting in patients requiring a two-stage revision ACL reconstruction.

Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal time series with long-memory dependence. Two methods, the conditional sum of squares (CSS) method and the two-staged method introduced by Hosking (1984), have been proposed to estimate the parameters of SARFIMA models. However, no simulation study has been conducted in the literature, so it is not known how these methods behave under different parameter settings and sample sizes in SARFIMA models. The aim of this study is to show the behavior of these methods by a simulation study. According to the results of the simulation, the advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. As a result of the comparison, it is seen that the CSS method produces better results than the two-staged method.
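The comparison criterion used in the study, root mean square error over repeated simulated estimates, is straightforward to compute. A minimal stdlib-only sketch follows; the function name and the sample estimate values are illustrative, not taken from the paper.

```python
import math

def rmse(estimates, true_value):
    """Root mean square error of repeated parameter estimates
    against the true parameter value used in the simulation."""
    return math.sqrt(sum((e - true_value) ** 2 for e in estimates) / len(estimates))

# Illustrative comparison: whichever method's estimates give the lower
# RMSE is judged better for that parameter setting and sample size.
css_rmse = rmse([0.35, 0.45, 0.38, 0.42], 0.40)
two_staged_rmse = rmse([0.30, 0.50, 0.33, 0.47], 0.40)
```

In a full simulation study this would be repeated over many replications for each combination of parameter values and sample sizes.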

In order to investigate the influence of vertical vibration loading on the liquefaction of saturated sand, a one-dimensional model for saturated sand under vertical vibration is presented based on two-phase continuous media theory. The development of liquefaction and the liquefaction region are analyzed. It is shown that vertical vibration loading can induce liquefaction. The rate of liquefaction increases with increasing initial limit strain, initial porosity, or amplitude and frequency of loading, and with decreasing permeability or initial modulus. It is also shown that there is a phase lag in the sand column. When the sand permeability distribution is non-uniform, the pore pressure and the strain rise sharply where the permeability is smallest, and fracture might be induced. With the development of liquefaction, the strength of the soil foundation becomes smaller and smaller; in the limiting case, landslides or debris flows can occur.

By using a two-stage constructed wetland (CW) system operated with an organic load of 40 g COD/(m2·d) (2 m2 per person equivalent), average nitrogen removal efficiencies of about 50% and average nitrogen elimination rates of 980 g N/(m2·yr) could be achieved. Two vertical flow beds with intermittent loading were operated in series. The first stage uses sand with a grain size of 2-3.2 mm for the main layer and has a drainage layer that is impounded; the second stage uses sand with a grain size of 0.06-4 mm and a drainage layer with free drainage. The high nitrogen removal can be achieved without recirculation; it is thus possible to operate the two-stage CW system without energy input. The paper presents performance data for the two-stage CW system regarding removal of organic matter and nitrogen over the two-year operating period of the system. Additionally, its efficiency is compared with that of a single-stage vertical flow CW system designed and operated according to the Austrian design standards with 4 m2 per person equivalent. The comparison shows that a higher effluent quality could be reached with the two-stage system, although the two-stage CW system is operated with double the organic load, or half the specific surface area requirement, respectively. Another advantage is that the specific investment costs of the two-stage CW system amount to 1,200 EUR per person (without mechanical pre-treatment), which is only about 60% of the specific investment costs of the single-stage CW system. IWA Publishing 2008.

The start-up of a two-stage reactor configuration for the anaerobic digestion of sweet sorghum residues was evaluated. The sweet sorghum residues were a waste stream originating from the alcoholic fermentation of sweet sorghum and the subsequent distillation step. This waste stream contained a high concentration of solid matter (9% TS) and thus could be characterized as a semi-solid, not easily biodegradable wastewater with a high COD (115 g/l). The application of the proposed two-stage configuration (consisting of one thermophilic hydrolyser and one mesophilic methaniser) achieved a methane production of 16 l/l wastewater at a hydraulic retention time of 19 d. (author)

This paper presents a well-established fully differential sample-and-hold circuit, implemented in 180-nm CMOS technology. In this two-stage method, the first stage gives very high gain and the second stage gives a large voltage swing. The proposed opamp provides 149 MHz unity-gain bandwidth, a 78-degree phase margin, and a differential peak-to-peak output swing of more than 2.4 V, using the improved fully differential two-stage operational amplifier with 76.7 dB gain. The sample-and-hold circuit meets the requirements of the SNR specifications.

This paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, together with the preoperative patient evaluation, paying attention to the use of diagnostic tools. One-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique, followed by second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

A valveless linear compressor was built to drive a self-made two-stage pulse tube cryocooler. With a designed maximum swept volume of 60 cm3, the compressor can provide the cryocooler with a pressure-volume (PV) power of 400 W. Preliminary measurements of the compressor indicated that both an efficiency of 35%-55% and a pressure ratio of 1.3-1.4 could be obtained. The two-stage pulse tube cryocooler driven by this compressor achieved a lowest temperature of 14.2 K.

Based on comparative tests of anoxic and aerobic processes, a two-stage aerobic process with a biological selector was chosen to treat purified terephthalic acid (PTA) wastewater. By adopting the two-stage aerobic process, the CODCr in PTA wastewater could be reduced from 4000-6000 mg/L to below 100 mg/L; the COD loading in the first aerobic tank could reach 7.0-8.0 kg CODCr/(m3·d) and that of the second stage was 0.2 to 0.4 kg CODCr/(m3·d). Further research on the kinetics of substrate degradation was carried out.

The traditional two-stage vapor compression refrigeration cycle might be replaced by a two-stage ejector-vapor compression refrigeration cycle if the aim is to decrease the irreversibility during expansion...

Supercritical water liquefaction of scrap tire rubber and Ishikari coal, separately and in mixtures was investigated to study the possible synergetic effects of coliquefaction between the feedstocks...

Extensively utilizing a special advanced airbreathing propulsion archives database, as well as direct contacts with individuals who were active in the field in previous years, a technical assessment of cryogenic hydrogen-induced air liquefaction, as a prospective onboard aerospace vehicle process, was performed and documented. The resulting assessment report is summarized. Technical findings are presented relating the status of air liquefaction technology, both as a singular technical area, and also that of a cluster of collateral technical areas including: compact lightweight cryogenic heat exchangers; heat exchanger atmospheric constituents fouling alleviation; para/ortho hydrogen shift conversion catalysts; hydrogen turbine expanders, cryogenic air compressors and liquid air pumps; hydrogen recycling using slush hydrogen as heat sink; liquid hydrogen/liquid air rocket-type combustion devices; air collection and enrichment systems (ACES); and technically related engine concepts.

In support of the Bioenergy Technologies Office, the National Renewable Energy Laboratory (NREL) and the Pacific Northwest National Laboratory (PNNL) are undertaking studies of biomass conversion technologies to hydrocarbon fuels to identify barriers and target research toward reducing conversion costs. Process designs and preliminary economic estimates for each of these pathway cases were developed using rigorous modeling tools (Aspen Plus and Chemcad). These analyses incorporated the best information available at the time of development, including data from recent pilot and bench-scale demonstrations, collaborative industrial and academic partners, and published literature and patents. This pathway case investigates the feasibility of using whole wet microalgae as a feedstock for conversion via hydrothermal liquefaction. Technical barriers and key research needs have been assessed in order for the hydrothermal liquefaction of microalgae to be competitive with petroleum-derived gasoline, diesel and jet range blendstocks.

The primary objective of the study is to develop a computer model for a base line direct coal liquefaction design based on two-stage direct coupled catalytic reactors. This primary objective is to be accomplished by completing the following: a base line design based on previous DOE/PETC results from the Wilsonville pilot plant and other engineering evaluations; a cost estimate and economic analysis; a computer model incorporating the above two steps over a wide range of capacities and selected process alternatives; a comprehensive training program for DOE/PETC staff to understand and use the computer model; a thorough documentation of all underlying assumptions for baseline economics; and a user manual and training material which will facilitate updating of the model in the future.

Two-stage anaerobic digestion (AD) of two-phase olive mill solid waste (OMSW) was applied to reduce inhibiting factors by optimizing the acidification stage. Single-stage AD and co-fermentation with chicken manure were conducted simultaneously for direct comparison. Degradation of the polyphenols of up to 61% was observed during the methanogenic stage. Although the concentration of phenolic substances remained high, the two-stage fermentation stayed stable at an OLR of 1.5 kg VS/m³·day. The buffer capacity of the system was twice as high compared to the one-stage fermentation, without additives. The two-stage AD was a combined process (a thermophilic first stage and a mesophilic second stage), which proved to be the most profitable for AD of OMSW: the hydraulic retention time (HRT) was reduced from 230 to 150 days, and start-up of the fermentation was three times faster than for the single-stage process and the co-fermentation. The optimal HRT and incubation temperature for the first stage were determined to be four days and 55°C. The performance of the two-stage AD with respect to process stability was followed by co-digestion of OMSW with chicken manure as a nitrogen-rich co-substrate, which makes these viable options for waste disposal with concomitant energy recovery.

The Two-Stage Gasifier was operated for several weeks (465 hours) and of these 190 hours continuously. The gasifier is operated automatically unattended day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output a...... of the reactor had to be constructed in some other material....

In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of AD into two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors were operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.
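The operating quantities quoted above are linked by standard definitions: HRT is reactor volume divided by flow rate, and the volumetric OLR equals the influent COD concentration divided by the HRT. A small sketch of those definitions follows; the 6.5 g TCOD/(L·d) and 21 d figures are from the abstract, while the back-calculated influent strength is purely an illustration of the definitions, not a value reported by the paper.

```python
def hrt_days(volume_l, flow_l_per_day):
    """Hydraulic retention time in days: reactor volume over daily flow."""
    return volume_l / flow_l_per_day

def olr(influent_cod_g_per_l, hrt_d):
    """Volumetric organic loading rate in g COD per litre of reactor per day."""
    return influent_cod_g_per_l / hrt_d
```

For example, at a total HRT of 21 days, an OLR of 6.5 g TCOD/(L·d) would correspond under these idealised definitions to an influent strength of about 136.5 g TCOD/L.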

A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...... problem....

The two-stage hydraulic pump is commonly used in many high school and college courses to demonstrate hydraulic systems. Unfortunately, many textbooks do not provide a good explanation of how the technology works. Another challenge that instructors run into with teaching hydraulic systems is the cost of procuring an expensive real-world machine…

Two-stage sampling procedures for comparing two population means when variances are heterogeneous have been developed by D. G. Chapman (1950) and B. K. Ghosh (1975). Both procedures assume sampling from populations that are normally distributed. The present study reports on the effect that sampling from non-normal distributions has on Type I error…

We consider two-stage production lines with an intermediate buffer. A buffer is needed when fluctuations occur. For single-product production lines, fluctuations in capacity availability may be caused by random processing times, failures and random repair times. For multi-product producti

Lube-oil injection is used in positive-displacement compressors and, among them, in sliding-vane machines to guarantee the correct lubrication of the moving parts and to act as a seal preventing air leakage. Furthermore, lube-oil injection allows the lubricant to be exploited as a thermal ballast with a great thermal capacity to minimize the temperature increase during compression. This study presents the design of a two-stage sliding-vane rotary compressor in which the air cooling is performed by high-pressure cold oil injection into a connection duct between the two stages. The heat exchange between the atomized oil jet and the air results in a decrease of the air temperature before the second stage, improving the overall system efficiency. This cooling system is named here intracooling, as opposed to intercooling. The oil injection is realized via pressure-swirl nozzles, both within the compressors and inside the intracooling duct. The design of the two-stage sliding-vane compressor is accomplished by way of a lumped-parameter model. The model predicts an input power reduction as large as 10% for intercooled and intracooled two-stage compressors, the latter being slightly better, with respect to a conventional single-stage compressor for compressed-air applications. An experimental campaign was conducted on a first prototype that comprises the low-pressure compressor and the intracooling duct, indicating that a significant temperature reduction is achieved in the duct.

A mean value model was developed using the Matrixx/Systembuild simulation tool for designing real-time control algorithms for the two-stage engine. All desired characteristics are achieved, apart from a lower A/F ratio at lower engine speeds and the turbocharger matching calculations. The CANbus is used to

This paper presents a fundamental analysis of the processing steps in the production of methanol from southern red oak (Quercus falcata Michx.) by two-stage dilute sulfuric acid hydrolysis. Data for hemicellulose and cellulose hydrolysis are correlated using models. This information is used to develop and evaluate a process design.

A two-stage data envelopment analysis (DEA), which uses mathematical linear programming techniques, is applied to evaluate the efficiency of a system composed of two relational sub-processes, in which the outputs from the first sub-process (the intermediate outputs of the system) are the inputs for the second sub-process. The relative efficiencies of the system and its sub-processes can be measured by applying the two-stage DEA. According to the literature review on supply chain management, this technique can be used as a tool for evaluating the efficiency of a supply chain composed of two relational sub-processes. The technique can help to determine the inefficient sub-processes. Once the efficiency of an inefficient sub-process is improved, the aggregate efficiency of the supply chain improves. This paper presents a procedure for evaluating the efficiency of the supply chain by using the two-stage DEA under the assumption of constant returns to scale, with an example of internal supply chain efficiency measurement of insurance companies for illustration. Moreover, the authors also present some observations on the application of this technique.
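As a toy illustration of the stage-wise efficiency idea (deliberately not the linear-programming DEA formulation, which handles multiple weighted inputs and outputs per decision-making unit), a single-input, single-intermediate, single-output version can be computed with simple ratios, each stage normalised by the best performer:

```python
def two_stage_efficiencies(inputs, intermediates, outputs):
    """For each DMU, the stage-1 ratio is intermediate/input and the stage-2
    ratio is output/intermediate; each is normalised by the best ratio
    observed across all DMUs (constant returns to scale, single measures)."""
    r1 = [z / x for z, x in zip(intermediates, inputs)]
    r2 = [y / z for y, z in zip(outputs, intermediates)]
    best1, best2 = max(r1), max(r2)
    return [(a / best1, b / best2) for a, b in zip(r1, r2)]
```

With two DMUs having inputs [1, 1], intermediates [2, 1] and outputs [4, 3], DMU 0 is efficient in stage 1 only and DMU 1 in stage 2 only, which pinpoints the sub-process each unit should improve, exactly the diagnostic use described in the abstract.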

In this paper, register-based family studies provide the motivation for studying a two-stage estimation procedure in copula models for multivariate failure time data. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, generalising the approach by...

The valorisation of agroindustrial waste through anaerobic digestion represents a significant opportunity for refuse treatment and renewable energy production. This study aimed to improve the codigestion of cheese whey (CW) and cattle manure (CM) by an innovative two-stage process, based on concentric acidogenic and methanogenic phases, designed to enhance performance and reduce footprint. The optimum CW to CM ratio was evaluated under batch conditions. Thereafter, codigestion was implemented under continuous-flow conditions comparing one- and two-stage processes. The results demonstrated that the addition of CM in codigestion with CW greatly improved the anaerobic process. The highest methane yield was obtained by co-treating the two substrates at an equal ratio using the innovative two-stage process. The proposed system reached a maximum value of 258 mL(CH4)/g(VS), which was more than twice the value obtained by the one-stage process and 10% higher than the value obtained by the two-stage one.

The aim of the study was to compare the osseointegration success rate and time for delivery of the prosthesis among cases treated by two-stage or one-stage surgery for orbit rehabilitation between 2003 and 2011. Forty-five patients were included, 31 males and 14 females; 22 patients had two-stage surgery and 23 patients had one-stage surgery. A total 138 implants were installed, 42 (30.4%) on previously irradiated bone. The implant survival rate was 96.4%, with a success rate of 99.0% among non-irradiated patients and 90.5% among irradiated patients. Two-stage patients received 74 implants with a survival rate of 94.6% (four implants lost); one-stage surgery patients received 64 implants with a survival rate of 98.4% (one implant lost). The median time interval between implant fixation and delivery of the prosthesis for the two-stage group was 9.6 months and for the one-stage group was 4.0 months (P < 0.001). The one-stage technique proved to be reliable and was associated with few risks and complications; the rate of successful osseointegration was similar to those reported in the literature. The one-stage technique should be considered a viable procedure that shortens the time to final rehabilitation and facilitates appropriate patient follow-up treatment.

The Viking gasification plant at the Technical University of Denmark was built to demonstrate a continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-d...

The feasibility of anaerobic treatment of wastewater generated during purified terephthalic acid (PTA) production was studied in a two-stage upflow anaerobic sludge blanket (UASB) reactor system. The artificial influent of the system contained the main organic substrates of PTA wastewater: acetate, be

Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and its geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on conservation equations of mass, momentum and energy to determine heat and mass transfer characteristics of the system. The results showed that two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity are located in the range of 34-54 °C and 10-60 %, respectively. Moreover, as relative humidity of the ambient air rises, two-stage evaporative cooler with the smaller direct and larger indirect cooler will be needed. In building with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of required plate heat exchangers at various operating conditions.

The present study focused on the application of the Anaerobic Digestion Model 1 to the methane production from acidified sorghum extract generated from a hydrogen-producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were...

Describes a two-stage experiment that was designed to explain binomial distribution to undergraduate statistics students. A manual coin flipping exercise is explained as the first stage; a computerized simulation using MINITAB software is presented as stage two; and output from the MINITAB exercises is included. (two references) (LRW)
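The classroom exercise described above, flipping a coin repeatedly and tallying heads, can be mirrored in a few lines of code instead of MINITAB; this is a generic sketch (the function name, trial counts, and seed are illustrative, not taken from the lesson):

```python
import random
from collections import Counter

def flip_experiment(n_flips=10, n_trials=2000, p=0.5, seed=42):
    """Simulate n_trials repetitions of flipping a coin n_flips times,
    returning a tally of how many trials produced each heads count."""
    rng = random.Random(seed)
    return Counter(sum(rng.random() < p for _ in range(n_flips))
                   for _ in range(n_trials))

counts = flip_experiment()
# The most frequent outcome should sit near the binomial mean n*p = 5.
mode = max(counts, key=counts.get)
print(mode)
```

Plotting `counts` as a bar chart reproduces the familiar bell-shaped binomial histogram the two-stage exercise is designed to reveal.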

The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol.COM-33, no.10, p.1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved...

We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both post-operative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

This study was designed to apply the method of field ionization mass spectrometry (FIMS) for the analysis of direct coal liquefaction process-stream samples. The FIMS method was shown to have a high potential for application to direct coal liquefaction-derived samples in a Phase 1 project in this program. In this Phase 3 project, the FIMS method was applied to a set of samples produced in HRI bench-scale liquefaction Runs CC-15 and CC-16. FIMS was used to obtain the molecular weight profile of the samples and to identify specific prominent peaks in the low-end (160-420 Da) region of the molecular weight profile. In the samples examined in this study, species were identified which previously were recognized as precursors to the formation of high molecular weight structures associated with the formation of coke in petroleum vacuum gas oils.

The paper presents experimental research performed in Bucharest, including borehole data (Standard Penetration Test) and data obtained from seismic investigations (down-hole prospecting and surface-wave methods). The evaluation of soil liquefaction resistance, based on the results of the SPT, down-hole prospecting, and surface-wave tests together with earthquake records, is presented.

Although this report deals with potential regulatory constraints only on development of coal liquids, it should be noted that every basic industry in the national economy is constrained by a myriad of state, local, and federal laws, and many of these existing laws may eventually affect coal liquids development. The American Petroleum Institute has prepared a list of the 12 most generally applicable environmental laws; these are summarized. For the present study, the most comprehensive constraining regulations likely to apply to coal liquefaction were chosen from this list. The choices depended in part upon which laws could be complied with by appropriate facility design. Therefore, for this study, the regulations examined were those covering solid and hazardous wastes and emissions of air and water pollutants. It should be noted that there are at present no emission regulations pertaining specifically to coal liquefaction. A survey of analogous industries was therefore conducted to identify regulations on air and water pollutants and solid waste disposal that might pertain to coal synfuel plants. The Federal New Source Performance Standards (NSPS) for air and water pollutants were specified where applicable. Wherever federal standards for a particular emission source or pollutant did not exist but appeared necessary, appropriate standards were specified on the basis of state regulations. Estimates of emission and effluent standards that may be applicable to coal liquefaction facilities are presented. Emission standards are defined for coal driers, boilers, process and combustion equipment, and Claus sulfur plants. Effluent standards are provided for process, boiler, and miscellaneous waste streams. Sources of solid wastes from coal liquefaction and proposed disposal regulations for hazardous wastes are also described.

An improved process for liquefying solid carbonaceous materials wherein the solid carbonaceous material is slurried with a suitable solvent and then subjected to liquefaction at elevated temperature and pressure to produce a normally gaseous product, a normally liquid product and a normally solid product. The normally liquid product is further separated into a naphtha boiling range product, a solvent boiling range product and a vacuum gas-oil boiling range product. At least a portion of the solvent boiling-range product and the vacuum gas-oil boiling range product are then combined and passed to a hydrotreater where the mixture is hydrotreated at relatively severe hydrotreating conditions and the liquid product from the hydrotreater then passed to a catalytic cracker. In the catalytic cracker, the hydrotreater effluent is converted partially to a naphtha boiling range product and to a solvent boiling range product. The naphtha boiling range product is added to the naphtha boiling range product from coal liquefaction to thereby significantly increase the production of naphtha boiling range materials. At least a portion of the solvent boiling range product, on the other hand, is separately hydrogenated and used as solvent for the liquefaction. Use of this material as at least a portion of the solvent significantly reduces the amount of saturated materials in said solvent.

Under contract from the DOE, and in association with CONSOL Inc., Battelle, Pacific Northwest Laboratory (PNL) evaluated four principal and several complementary techniques for the analysis of non-distillable direct coal liquefaction materials in support of process development. Field desorption mass spectrometry (FDMS) and nuclear magnetic resonance (NMR) spectroscopic methods were examined for potential usefulness as techniques to elucidate the chemical structure of residual (nondistillable) direct coal liquefaction derived materials. Supercritical fluid extraction (SFE) and supercritical fluid chromatography/mass spectrometry (SFC/MS) were evaluated for effectiveness in compound-class separation and identification of residual materials. Liquid chromatography (including microcolumn) separation techniques, gas chromatography/mass spectrometry (GC/MS), mass spectrometry/mass spectrometry (MS/MS), and GC/Fourier transform infrared (FTIR) spectroscopy methods were applied to supercritical fluid extracts. The full report authored by the PNL researchers is presented here. The following assessment briefly highlights the major findings of the project, and evaluates the potential of the methods for application to coal liquefaction materials. These results will be incorporated by CONSOL into a general overview of the application of novel analytical techniques to coal-derived materials at the conclusion of CONSOL's contract.

Background: Surgical correction of severe proximal hypospadias represents a significant surgical challenge, and single-stage corrections are often associated with complications and reoperations. The Bracka two-stage repair is an attractive alternative surgical procedure with superior, reliable, and reproducible results. Purpose: To study the feasibility and applicability of the Bracka two-stage repair for severe proximal hypospadias and to analyze the outcomes and complications of this surgical technique. Materials and Methods: This prospective study was conducted from January 2011 to December 2013. Bracka two-stage repair was performed using inner preputial skin as a free graft in subjects with proximal hypospadias in whom a severe degree of chordee and/or a poor urethral plate was present. Only primary cases were included in this study. All subjects received three doses of intramuscular testosterone 3 weeks apart before the first stage. The second stage was performed 6 months after the first stage. Follow-up ranged from 6 months to 24 months. Results: A total of 43 patients underwent Bracka repair, of whom 30 completed the two-stage repair. The mean age of the patients was 4 years and 8 months. We achieved 100% graft uptake and no revision was required. Three patients developed fistula, while two had meatal stenosis. Glans dehiscence, urethral stricture, and residual chordee were not found during follow-up, and satisfactory cosmetic results with a good urinary stream were achieved in all cases. Conclusion: The Bracka two-stage repair is a safe and reliable approach in select patients in whom it is impractical to maintain the axial integrity of the urethral plate and a full-circumference urethral reconstruction therefore becomes necessary. It gives good results both in terms of restoration of normal function and minimal complications.

It has previously been shown that the use of two-phase screw expanders in power generation cycles can achieve an increase in the utilisation of available energy from a low-temperature heat source when compared with more conventional single-phase turbines. However, screw expander efficiencies are more sensitive to expansion volume ratio than turbines, and this sensitivity increases as the expander inlet vapour dryness fraction decreases. For single-stage screw machines with low inlet dryness, this can lead to under-expansion of the working fluid and low isentropic efficiency for the expansion process. The performance of the cycle can potentially be improved by using a two-stage expander, consisting of a low pressure machine and a smaller high pressure machine connected in series. By expanding the working fluid over two stages, the built-in volume ratios of the two machines can be selected to provide a better match with the overall expansion process, thereby increasing efficiency for particular inlet and discharge conditions. The mass flow rate through both stages must however be matched, and the compromise between increasing efficiency and maximising power output must also be considered. This research uses a rigorous thermodynamic screw machine model to compare the performance of single- and two-stage expanders over a range of operating conditions. The model allows optimisation of the required intermediate pressure in the two-stage expander, along with the rotational speed and built-in volume ratio of both screw machine stages. The results allow the two-stage machine to be fully specified in order to achieve maximum efficiency for a required power output.

Reported here are the results of laboratory and bench-scale experiments and supporting technical and economic assessments conducted under DOE Contract No. DE-AC22-91PC91040 during the period April 1, 1997 to June 30, 1997. This contract is with the University of Kentucky Research Foundation, which supports work with the University of Kentucky Center for Applied Energy Research, CONSOL, Inc., LDP Associates, and Hydrocarbon Technologies, Inc. This work involves the introduction of several novel concepts into the basic two-stage liquefaction process, including dispersed lower-cost catalysts, coal cleaning by oil agglomeration, and distillate hydrotreating and dewaxing. This report includes a data analysis of the ALC-2 run, which was the second continuous run in which Wyodak Black Thunder coal was fed to a two kg/h bench-scale unit. One of the objectives of that run was to determine the relative activity of several Mo-based coal-impregnated catalyst precursors. The precursors included ammonium heptamolybdate (100 mg Mo/kg dry coal), which was used alone as well as in combination with ferrous sulfate (1% Fe/dry coal) and nickel sulfate (50 mg Ni/kg dry coal). The fourth precursor tested was phosphomolybdic acid, which was used at a level of 100 mg Mo/kg dry coal. Because of difficulties in effectively separating solids from the product stream, considerable variation in the feed stream occurred. Although the coal feed rate was nearly constant, the amount of recycle solvent varied, which resulted in wide variations of resid, unconverted coal, and mineral matter in the feed stream. Unfortunately, steady state was not achieved in any of the four conditions that were run. It was reported earlier that the Ni-Mo catalyst appeared to give the best results, based upon speculative steady-state yields.

The Gulf of Jakarta is an area of active sedimentation, with a wide sediment deposition area on the north coast of Jakarta. Generally, these sediments have not been consolidated, so conditions in this area are an important factor in determining liquefaction potential. Liquefaction may occur because of earthquakes that cause loss of strength and stiffness in soils. Analysis of liquefaction potential was based on SPT data taken in the Gulf of Jakarta, including the susceptibility rating and the triggering factors. Liquefaction analysis methods were compared with each other to obtain the factor of safety against liquefaction according to the characteristics of the soil. Surface liquefaction analysis used the susceptibility rating factor (SRF). The SRF method is controlled by four factors: history, geology, composition, and groundwater, each with parameters that determine the value of SRF. From the analysis, the Gulf of Jakarta has an SRF value of 12-35, which shows that it is dominated by areas with medium to high susceptibility ratings; high susceptibility to liquefaction is concentrated in the coastal area.

Coal liquefaction by oxygen in alkaline slurries is reviewed from the chemical point of view. Available information is considered in the light of questions relating to coal liquefaction. A lack of chemical knowledge in this area is noted, especially on model compounds. 72 refs.

The 2003 M 6.5 San Simeon, California, earthquake caused liquefaction-induced lateral spreading at Oceano at an unexpectedly large distance from the seismogenic rupture. We conclude that the liquefaction was caused by ground motion that was enhanced by both rupture directivity in the mainshock and local site amplification by unconsolidated fine-grained deposits. Liquefaction occurred in sandy artificial fill and undisturbed eolian sand and fluvial deposits. The largest and most damaging lateral spread was caused by liquefaction of artificial fill; the head of this lateral spread coincided with the boundary between the artificial fill and undisturbed eolian sand deposits. Values of the liquefaction potential index, in general, were greater than 5 at liquefaction sites, the threshold value that has been proposed for liquefaction hazard mapping. Although the mainshock ground motion at Oceano was not recorded, peak ground acceleration was estimated to range from 0.25 to 0.28g on the basis of the liquefaction potential index and aftershock recordings. The estimates fall within the range of peak ground acceleration values associated with the modified Mercalli intensity = VII reported at the U.S. Geological Survey (USGS) "Did You Feel It?" web site.

An experimental study using a shaking table was conducted to investigate liquefaction. Samples used were sandy soils from the south of Yogyakarta Special Region Province. Analysis of liquefaction potential was performed by considering several factors: peak ground acceleration (PGA) of 0.3 g to 0.4 g, vibrational frequency of 1.8 Hz, and vibration durations of 8, 16, and 32 seconds, which reflect earthquake magnitudes of 5, 6, and 7. The pore water pressure was measured using a pressure transducer. Liquefaction potential was determined using the excess pore water pressure ratio (ru): liquefaction potentially occurs when ru > 1, whereas ru < 1 indicates that liquefaction does not occur. The test results showed that liquefaction could potentially occur under each applied dynamic load, since the maximum excess pore water pressure ratio (ru max) measured was equal to or larger than 1. The larger the applied peak ground acceleration, the earlier liquefaction began, the slower the pore water pressure dissipated, the longer liquefaction lasted, and the larger the maximum excess pore water pressure became.
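The ru criterion used in the study reduces to a one-line ratio; here is a minimal numeric illustration (the pressure readings and the 50 kPa effective stress are hypothetical, not values from the experiment):

```python
def excess_pore_pressure_ratio(excess_pore_pressure_kpa, initial_effective_stress_kpa):
    """ru = excess pore water pressure / initial vertical effective stress."""
    return excess_pore_pressure_kpa / initial_effective_stress_kpa

def liquefies(ru):
    """Flag liquefaction when ru reaches 1, the criterion used in the study."""
    return ru >= 1.0

# Hypothetical transducer readings (kPa) against a 50 kPa effective stress.
readings = [20.0, 35.0, 52.0]
flags = [liquefies(excess_pore_pressure_ratio(u, 50.0)) for u in readings]
print(flags)  # only the 52 kPa reading reaches the 50 kPa effective stress
```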

Lignocellulosic feedstock can be converted to bio-oil by direct liquefaction in a phenolic solvent such as guaiacol with an oil yield of >90 C% at 300–350 °C without the assistance of catalyst or reactive atmosphere. Despite good initial performance, the liquefaction was rapidly hindered by the form

Existing models of birdsong learning assume that brain area LMAN introduces variability into song for trial-and-error learning. Recent data suggest that LMAN also encodes a corrective bias driving short-term improvements in song. These later consolidate in area RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using a stochastic gradient descent approach, we derive how 'tutor' circuits should match plasticity mechanisms in 'student' circuits for efficient learning. We further describe a reinforcement learning framework with which the tutor can build its teaching signal. We show that mismatching the tutor signal and plasticity mechanism can impair or abolish learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

A leaching process for base metals recovery often introduces considerable amounts of impurities such as iron and arsenic into the solution. It is a challenge to separate the non-valuable metals into manageable and stable waste products for final disposal without losing the valuable constituents. Boliden Mineral AB has patented a two-stage precipitation process that gives a very clean iron-arsenic precipitate with a minimum of coprecipitation of base metals. The product obtained shows good sedimentation and filtration properties, which make it easy to recover the iron-arsenic-depleted solution by filtration and washing of the precipitate. Continuous bench-scale tests have been done, showing the excellent results achieved by the two-stage precipitation process.

A gain-flattened S-band erbium-doped fiber amplifier (EDFA) using standard erbium-doped fiber (EDF) is proposed and experimentally demonstrated. The proposed amplifier with two-stage double-pass configuration employs two C-band suppressing filters to obtain the optical gain in S-band. The amplifier provides a maximum signal gain of 41.6 dB at 1524 nm with the corresponding noise figure of 3.8 dB. Furthermore, with a well-designed short-pass filter as a gain flattening filter (GFF), we are able to develop the S-band EDFA with a flattened gain of more than 20 dB in 1504-1524 nm. In the experiment, the two-stage double-pass amplifier configuration improves performance of gain and noise figure compared with the configuration of single-stage double-pass S-band EDFA.

This paper attempts to develop a linearized model of automatic generation control (AGC) for an interconnected two-area reheat-type thermal power system in a deregulated environment. A comparison between a genetic-algorithm-optimized PID controller (GA-PID), a particle-swarm-optimized PID controller (PSO-PID), and the proposed two-stage PSO-optimized fuzzy logic controller (TSO-FLC) is presented. The proposed fuzzy-based controller is optimized in two stages: one is rule-base optimization and the other is scaling-factor and gain-factor optimization. It shows the best dynamic response following a step load change for different cases of bilateral contracts in the deregulated environment. In addition, the performance of the proposed TSO-FLC is also examined for ±30% changes in system parameters with different types of contractual demands between control areas, and compared with GA-PID and PSO-PID. MATLAB/Simulink® is used for all simulations.

We present a two-stage scheme integrating voxel reconstruction and human motion tracking. By combining voxel reconstruction with human motion tracking interactively, our method can work in a cluttered background where perfect foreground silhouettes are hardly available. For each frame, a silhouette-based 3D volume reconstruction method and a hierarchical tracking algorithm are applied in two stages. In the first stage, coarse reconstruction and tracking results are obtained; the reconstruction is then refined in the second stage. The experimental results demonstrate that our approach is promising. Although this paper focuses on human body voxel reconstruction and motion tracking, our scheme can be used to reconstruct voxel data and infer the pose of many specified rigid and articulated objects.

Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is demanded. In this study, we propose a new algorithm that consists of a two-stage classifier combining a random forest and a wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total subject verification accuracy of 99.52%, better than the 98.33% accuracy of the random forest alone and the 96.31% accuracy of the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security, or remote healthcare systems.
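The general shape of such a probabilistic-threshold cascade can be sketched in a few lines; the stand-in scoring functions, labels, and threshold below are hypothetical, not the paper's actual random-forest or wavelet-distance components:

```python
def two_stage_classify(sample, stage1_predict, stage2_predict, threshold=0.9):
    """Stage 1: accept the first classifier's decision if its confidence
    clears the threshold; otherwise defer to the second-stage scorer.
    Both predictors are stand-ins with assumed signatures:
    stage1_predict(sample) -> (label, confidence); stage2_predict(sample) -> label."""
    label, confidence = stage1_predict(sample)
    if confidence >= threshold:
        return label, "stage1"
    return stage2_predict(sample), "stage2"

# Toy stand-ins: stage 1 is only confident for positive-valued inputs.
stage1 = lambda x: ("subject_A", 0.95) if x > 0 else ("subject_A", 0.55)
stage2 = lambda x: "subject_B"

print(two_stage_classify(1.0, stage1, stage2))   # ('subject_A', 'stage1')
print(two_stage_classify(-1.0, stage1, stage2))  # ('subject_B', 'stage2')
```

The design point is that the cheap first stage handles confident cases, and only ambiguous samples pay the cost of the second-stage distance computation.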

The effect of two-stage aging on the microstructures and superplasticity of 01420 Al-Li alloy was investigated by means of OM, TEM analysis and stretching experiments. The results demonstrate that second-phase particles distributed more uniformly and with a larger volume fraction can be observed after the two-stage aging (120 ℃, 12 h + 300 ℃, 36 h) compared with the single aging (300 ℃, 48 h). After rolling and recrystallization annealing, fine grains with a size of 8-10 μm are obtained, and the superplastic elongation of the specimens reaches 560% at a strain rate of 8×10^-4 s^-1 and 480 ℃. Uniformly distributed fine particles precipitate both on grain boundaries and in grains at the lower temperature. When the sheet is aged at high temperature, the particles become coarser with a large volume fraction.

Ethanol, electricity, hydrogen and methane were produced in a two-stage bioethanol refinery setup based on a 10 L microbial fuel cell (MFC) and a 33 L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. The higher the stack potential, the more ethanol the stack configuration produced and the faster glucose was consumed. Under electrolytic conditions, ethanol productivity outperformed standard conditions and reached 96.3% of the theoretical best case. At lower external loads, currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5 V using nickel cathodes. The two-stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries.

Measuring the relative performance of insurance firms plays an important role in this industry. In this paper, we present a two-stage data envelopment analysis (DEA) to measure the performance of insurance firms that were active over the period 2006-2010. The proposed study performs the DEA method in two stages: the first stage considers five inputs and three outputs, while the second stage takes the outputs of the first stage as its inputs and uses three different outputs. The results of our survey indicate that while there were 4 efficient insurance firms, most other insurers were noticeably inefficient. This means the market was monopolized mostly by a limited number of insurance firms and competition was not fair enough to let other firms participate in the economy more efficiently.

The extended Kalman filter (EKF) has been widely applied for sensorless direct torque control (DTC) in induction machines (IMs). One key problem associated with the EKF is that the estimator suffers from computational burden and numerical problems resulting from high-order mathematical models. To reduce the computational cost, a two-stage extended Kalman filter (TEKF)-based solution is presented in this paper for closed-loop stator flux, speed, and torque estimation of an IM to achieve sensorless DTC-SVM operation. The novel observer can be derived similarly to the optimal two-stage Kalman filter (TKF), which has been proposed by several researchers. Compared to a straightforward implementation of a conventional EKF, the TEKF estimator reduces the number of arithmetic operations. Simulation and experimental results verify the performance of the proposed TEKF estimator for DTC of IMs.
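For context, the predict/update recursion that both the EKF and its two-stage variants build on can be shown in scalar form; this generic sketch is not the paper's TEKF decomposition, and the model and noise values are made up for illustration:

```python
def kalman_step(x, P, z, A=1.0, Q=1e-4, H=1.0, R=0.04):
    """One predict/update cycle of a scalar linear Kalman filter.
    x: state estimate, P: estimate variance, z: new measurement;
    A, Q, H, R: illustrative model and noise parameters."""
    # Predict: propagate the state and its variance through the model.
    x_pred = A * x
    P_pred = A * P * A + Q
    # Update: blend the prediction with the measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track a constant true value of 1.0 from noisy measurements.
x, P = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0, 0.95, 1.05]:
    x, P = kalman_step(x, P, z)
print(round(x, 2))
```

A two-stage formulation splits this single recursion into two coupled lower-order filters, which is where the arithmetic savings mentioned in the abstract come from.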

Thirty-five insulin-requiring adult diabetic patients underwent 38 Syme's Two-Stage amputations for gangrene of the forefoot with nonreconstructible peripheral vascular insufficiency. All had a minimum Doppler ischemic index of 0.5, serum albumin of 3.0 gm/dl, and total lymphocyte count of 1500. Thirty-one (81.6%) eventually healed and were uneventfully fit with a prosthesis. Regional anesthesia was used in all of the patients, with 22 spinal and 16 ankle block anesthetics. Twenty-seven (71%) returned to their preamputation level of ambulatory function. Six (16%) had major, and fifteen (39%) minor complications following the first stage surgery. The results of this study support the use of the Syme's Two-Stage amputation in adult diabetic patients with gangrene of the forefoot requiring amputation.

We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed-loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10^5 Φ0/s and the measured white flux noise was 1.3 μΦ0/√Hz at 4.2 K. The system is based on a conventional dc SQUID with a double relaxation oscillation SQUID (DROS) as the second stage. Because of the large flux-to-voltage transfer, the sensitivity of the system is completely determined by the sensor SQUID and not by the DROS or the room-temperature preamplifier. Decreasing the Josephson junction area enables a further improvement of the sensitivity of two-stage SQUID systems.

Generally, a two-stage design is employed in Phase II clinical trials to avoid giving patients an ineffective drug. If the number of patients with significant improvement, which is a binomial response, is greater than a pre-specified value at the first stage, then another binomial response at the second stage is also observed. This paper considers interval estimation of the response probability when the second stage is allowed to continue. Two asymptotic interval estimators, Wald and score, as well as two exact interval estimators, Clopper-Pearson and Sterne, are constructed according to the two binomial responses from this two-stage design, where the binomial response at the first stage follows a truncated binomial distribution. The mean actual coverage probability and expected interval width are employed to evaluate the performance of these interval estimators. According to the comparison results, the score interval is recommended for both Simon's optimal and minimax designs.
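For reference, the standard single-sample Wilson score interval, which the score interval recommended above generalizes to the truncated-binomial setting of two-stage designs, can be computed directly (this is the textbook formula, not the paper's two-stage estimator):

```python
import math

def wilson_score_interval(x, n, z=1.959964):
    """Wilson (score) confidence interval for a binomial proportion x/n
    at the confidence level implied by critical value z (default ~95%)."""
    p_hat = x / n
    denom = 1.0 + z * z / n
    center = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n
                                   + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_score_interval(5, 10)
print(round(lo, 3), round(hi, 3))  # approximately 0.237 0.763
```

Unlike the Wald interval, the score interval never escapes [0, 1] and keeps reasonable coverage at small n, which is one reason it fares well in the comparison the abstract describes.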

To study a centrifugal two-stage turbocharging system's surge and influencing factors, a special test bench was set up and the system surge test was performed. The test results indicate that the measured parameters such as air mass flow and rotation speed of the high pressure (HP) stage compressor can be converted into corrected parameters under a standard condition according to the Mach number similarity criterion, because the air flow in the HP stage compressor has entered the Reynolds number (Re) auto-modeling range. Accordingly, the reasons leading to a two-stage turbocharging system's surge can be analyzed according to the corrected mass flow characteristic maps and actual operating conditions of the HP and low pressure (LP) stage compressors.
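The Mach-number-similarity correction mentioned above is conventionally written as m_corr = m·√(T/T_ref)/(p/p_ref) and N_corr = N/√(T/T_ref); a small sketch with illustrative values (the reference conditions and inlet values are assumptions, not data from the test bench):

```python
import math

T_REF = 288.15   # K, assumed standard reference temperature
P_REF = 101.325  # kPa, assumed standard reference pressure

def corrected_mass_flow(m_dot, T_inlet, p_inlet):
    """Refer a measured compressor mass flow (kg/s) to standard inlet
    conditions, per the conventional Mach-number similarity correction."""
    return m_dot * math.sqrt(T_inlet / T_REF) / (p_inlet / P_REF)

def corrected_speed(n_rpm, T_inlet):
    """Refer rotational speed to the standard reference temperature."""
    return n_rpm / math.sqrt(T_inlet / T_REF)

# An HP-stage inlet is hotter and at higher pressure than ambient, so the
# corrected flow differs noticeably from the measured one (values illustrative).
m_corr = corrected_mass_flow(0.30, T_inlet=400.0, p_inlet=180.0)
print(round(m_corr, 4))
```

Plotting compressor maps against these corrected parameters is what lets the HP-stage surge line measured on the bench be compared across different inlet conditions.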

Performance-based earthquake engineering requires a probabilistic treatment of potential failure modes in order to accurately quantify the overall stability of the system. This paper is a summary of the application portions of the probabilistic liquefaction triggering correlations recently proposed by Moss and co-workers. To enable probabilistic treatment of liquefaction triggering, the variables comprising the seismic load and the liquefaction resistance were treated as inherently uncertain. Supporting data from an extensive Cone Penetration Test (CPT)-based liquefaction case history database were used to develop a probabilistic correlation. The methods used to measure the uncertainty of the load and resistance variables, how the interactions of these variables were treated using Bayesian updating, and how reliability analysis was applied to produce curves of equal probability of liquefaction are presented. The normalization for effective overburden stress, the magnitude correlated duration weighting factor, and the non-linear shear mass participation factor used are also discussed.

This report provides detailed reactor designs, capital costs, and operating cost estimates for the hydrothermal liquefaction reactor system, used for biomass-to-biofuels conversion, under development at Pacific Northwest National Laboratory. Five cases were developed, and the costs associated with all cases ranged from $22 MM/year to $47 MM/year.

Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to the pouch (n = 4), and ligation of colovesical fistula with end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull-through (APPT) of colon was performed in eight cases, and pouch excision with APPT of ileum in three. The mean age at the time of the definitive procedure was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Fecal continence at follow-up was good in six cases and fair in two, while three cases were lost to follow-up. There was no mortality following the definitive procedures among these 11 cases. Conclusions: Two-staged procedures can be performed safely with good results for all types of CPC. Importantly, the definitive procedure is done without a protective stoma, thereby avoiding stoma closure and its related complications, cost, and hospital stay.

The University of California at Berkeley has tested and satisfactorily modeled a hybrid staged Lysholm engine (positive displacement) with a two-stage Curtis wheel turbine. The system operates in a stable manner over its operating range (0/1-3/1 water ratio, 120 psia input). Proposals are made for controlling interstage pressure with a partial admission turbine and volume expansion to control mass flow and pressure ratio for the Lysholm engine.

Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. The aim of this work is to develop an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. The adaptive algorithm for noncausal two-stage filtration is developed. In the first stage, an adaptive one-dimensional causal filtration algorithm processes the rows and columns of the image independently. In the second stage, the obtained data are combined and a posteriori estimates are calculated. Results of experimental investigations. The developed adaptive algorithm for noncausal image filtration in the presence of observations with anomalous errors is investigated on a model sample by means of statistical modeling on a PC. The image is modeled as a realization of a Gaussian-Markov random field and corrupted with uncorrelated Gaussian noise. Regions of the image with anomalous errors are corrupted with uncorrelated Gaussian noise of higher power than the normal noise on the rest of the image. Conclusions. The analysis of the adaptive algorithm for noncausal two-stage filtration is presented, together with the accuracy characteristics of the computed estimates. The first and second stages of the developed adaptive algorithm are compared, and the adaptive algorithm is compared with the known uniform two-stage algorithm of image filtration. According to the obtained results, the uniform algorithm does not suppress anomalous noise, whereas the adaptive algorithm shows good results.

From measurements performed on a low-noise two-stage SQUID amplifier coupled to a high-Q electrical resonator we give a complete noise characterization of the SQUID amplifier around the resonator frequency of 11 kHz in terms of additive, back action and cross-correlation noise spectral densities. The minimum noise temperature evaluated at 135 mK is 10 μK and corresponds to an energy resolution of 18ℏ.

In the present work, we develop a two-stage allocation rule for binary responses using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a data set. We compare this rule with some existing rules by computing various performance measures.

By using group ⅢB or group ⅦB metals and modulating the characteristics of electric charges on the carrier surface, and by improving the catalyst preparation process and the techniques for loading the active metal components, a novel SY-2 catalyst earmarked for two-stage hydrogenation of pyrolysis gasoline has been developed. The catalyst evaluation results indicate that the novel catalyst is characterized by better hydrogenation activity, giving a higher aromatics yield.

This paper describes experimental results in which the no-load temperature of a two-stage Solvay refrigerator was lowered from the original 11.5 K into the liquid-helium temperature region by using a magnetic regenerative material instead of lead. The structure and technological characteristics of the prototype machine are presented. The effects of operating frequency and pressure on the refrigerating temperature are also discussed.

In the present work two novel two-stage hydrogen production processes from olive mill wastewater (OMW) have been introduced. The first two-stage process involved dark fermentation followed by a photofermentation process. Dark fermentation by activated sludge cultures and photofermentation by Rhodobacter sphaeroides O.U.001 were both performed in 55 ml glass vessels under anaerobic conditions. In some cases of dark fermentation, activated sludge was initially acclimatized to the OMW to adapt the microorganisms to its extreme conditions. The highest hydrogen production potential obtained was 29 l H₂/l OMW after photofermentation with 50% (v/v) effluent of dark fermentation with activated sludge. Photofermentation with 50% (v/v) effluent of dark fermentation with acclimated activated sludge had the highest hydrogen production rate (0.008 l l⁻¹ h⁻¹). The second two-stage process involved a clay treatment step followed by photofermentation by R. sphaeroides O.U.001. Photofermentation with the effluent of the clay pretreatment process (4% (v/v)) gave the highest hydrogen production potential (35 l H₂/l OMW), light conversion efficiency (0.42%) and COD conversion efficiency (52%). It was concluded that both pretreatment processes enhanced the photofermentative hydrogen production process. Moreover, hydrogen could be produced with highly concentrated OMW. The two-stage processes developed in the present investigation have a high potential for solving the environmental problems caused by OMW.

Back-arc extension in the Aegean, which was driven by slab rollback since 45 Ma, is described here for the first time in two stages. From the Middle Eocene to the Middle Miocene, deformation was localized, leading to i) the exhumation of high-pressure metamorphic rocks to crustal depths, ii) the exhumation of high-temperature metamorphic rocks in core complexes and iii) the deposition of sedimentary basins. Since the Middle Miocene, extension distributed over the whole Aegean domai...

This paper addresses the problem of making supply chain operation decisions for refineries under two types of uncertainty: demand uncertainty and incomplete information shared with suppliers and transport companies. Most of the literature focuses on only one uncertainty or treats multiple uncertainties identically. However, we note that in the real world refineries have more power to control uncertainties in procurement and transportation than in demand. Thus, a two-stage framework for dealing wit...

An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

A monolithic two-stage gain control amplifier has been developed using submicron gate length dual gate MESFETs fabricated on ion implanted material. The amplifier has a gain of 12 dB at 30 GHz with a gain control range of over 30 dB. This ion implanted monolithic IC is readily integrable with other phased array receiver functions such as low noise amplifiers and phase shifters.

In this study, exergy analyses of a two-stage vapor compression refrigeration cycle with intercooler using refrigerants R507, R407c and R404a were carried out. The thermodynamic values necessary for the analyses were calculated with the Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system under different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for the alternative refrigerants were compared.

The objective of this study is to show the effect of a single deflector plate on the performance of a combined Darrieus-Savonius water turbine. In order to overcome the low torque of a solo Darrieus turbine, a deflector plate mounted in front of the returning Savonius bucket of a combined water turbine composed of Darrieus and Savonius rotors has been proposed in this study. Some configurations of combined turbines with two-stage Savonius rotors were experimentally tested in a river of c...

A two-stage decision model is developed to assess the effect of perceived soy health benefits on consumers' decisions with respect to soy food. The first stage captures whether or not to consume soy food, while the second stage reflects how often to consume. A conceptual/analytical framework is also employed, combining Lancaster's characteristics model and Fishbein's multi-attribute model. Results show that perceived soy health benefits significantly influence both decision stages. Further, c...

In this paper, we report on mid-wavelength infrared interband cascade photodetectors grown on InAs substrates. We studied the transport properties of the photon-generated carriers in the interband cascade structures by comparing two different detectors, a single stage detector and a two-stage cascade detector. The two-stage device showed quantum efficiency around 19.8% at room temperature, and clear optical response was measured even at a temperature of 323 K. The two detectors showed similar Johnson-noise limited detectivity. The peak detectivity of the one- and two-stage devices was measured to be 2.15 × 10^14 cm·Hz^(1/2)/W and 2.19 × 10^14 cm·Hz^(1/2)/W at 80 K, and 1.21 × 10^9 cm·Hz^(1/2)/W and 1.23 × 10^9 cm·Hz^(1/2)/W at 300 K, respectively. The 300 K background limited infrared performance (BLIP) operation temperature is estimated to be over 140 K.
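The Johnson-noise-limited detectivity compared above follows the standard textbook relation for a photovoltaic detector (not given in the abstract itself), which ties D* to the zero-bias resistance-area product R₀A:

```latex
% Johnson-noise-limited specific detectivity of a photovoltaic detector
% (standard relation; \eta = quantum efficiency, \lambda = wavelength,
%  R_0 A = zero-bias resistance-area product, T = temperature)
D^{*}_{J} = \frac{\eta q \lambda}{h c}\sqrt{\frac{R_{0}A}{4 k_{B} T}}
```

Under this relation, similar detectivities for the one- and two-stage devices suggest that the cascade architecture's lower per-stage quantum efficiency is offset by a change in R₀A, consistent with the comparison reported above.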

A two-stage small Stirling cooler has been developed and tested for the infrared astronomical satellite ASTRO-F that is planned to be launched by Japanese M-V rocket in 2005. ASTRO-F has a hybrid cryogenic system that is a combination of superfluid liquid helium (HeII) and two-stage Stirling coolers. The mechanical cooler has a two-stage displacer driven by a linear motor in a cold head and a new linear-ball-bearing system for the piston-supporting structure in a compressor. The linear-ball-bearing supporting system achieves the piston clearance seal, the long piston-stroke operation and the low frequency operation. The typical cooling power is 200 mW at 20 K and the total input power to the compressor and the cold head is below 90 W without driver electronics. The engineering, the prototype and the flight models of the cooler have been fabricated and evaluated to verify the capability for ASTRO-F. This paper describes the design of the cooler and the results from verification tests including cooler performance test, thermal vacuum test, vibration test and lifetime test.

The major technical problems faced by stand-alone fluidized bed gasifiers (FBG) for waste-to-gas applications are intrinsically related to the composition and physical properties of waste materials such as RDF. The high quantity of ash and volatile material in RDF can decrease thermal output, create high ash clinkering, and increase emissions of tars and CO2, thus affecting the operability for clean syngas generation at industrial scale. By contrast, a two-stage process which separates primary gasification and selective tar and ash conversion would be inherently more forgiving and stable. This can be achieved with the use of a separate plasma converter, which has been successfully used in conjunction with conventional thermal treatment units for its ability to 'polish' the producer gas of organic contaminants and collect the inorganic fraction in a molten (and inert) state. This research focused on the performance analysis of a two-stage fluid bed gasification-plasma process to transform solid waste into clean syngas. Thermodynamic assessment using the two-stage equilibrium method was carried out to determine optimum conditions for the gasification of RDF and to understand the limitations and influence of the second stage on the process performance (gas heating value, cold gas efficiency, carbon conversion efficiency), along with other parameters. Comparison with a different thermal refining stage, i.e. thermal cracking (via partial oxidation), was also performed. The analysis is supported by experimental data from a pilot plant.

The oxidant Mn(3+)-malonate, generated by the ligninolytic enzyme versatile peroxidase in a two-stage system, was used for the continuous removal of endocrine disrupting compounds (EDCs) from synthetic and real wastewaters. One plasticizer (bisphenol-A), one bactericide (triclosan) and three estrogenic compounds (estrone, 17β-estradiol, and 17α-ethinylestradiol) were removed from wastewater at degradation rates in the range of 28-58 µg/L·min, with low enzyme inactivation. First, the optimization of three main parameters affecting the generation of Mn(3+)-malonate (hydraulic retention time as well as Na-malonate and H2O2 feeding rates) was conducted following a response surface methodology (RSM). Under optimal conditions, the degradation of the EDCs was proven at high (1.3-8.8 mg/L) and environmental (1.2-6.1 µg/L) concentrations. Finally, when the two-stage system was compared with a conventional enzymatic membrane reactor (EMR) using the same enzyme, a 14-fold increase of the removal efficiency was observed. At the same time, operational problems found during EDCs removal in the EMR system (e.g., clogging of the membrane and enzyme inactivation) were avoided by physically separating the stages of complex formation and pollutant oxidation, allowing the system to be operated for a longer period (∼8 h). This study demonstrates the feasibility of the two-stage enzymatic system for removing EDCs both at high and environmental concentrations.

A thermally coupled two-stage Stirling-type pulse tube cryocooler (PTC) with inertance tubes as phase shifters has been designed, manufactured and tested. In order to obtain a larger phase shift at the low acoustic power of about 2.0 W, a cold inertance tube as well as a cold reservoir for the second stage, precooled by the cold end of the first stage, was introduced into the system. The transmission line model was used to calculate the phase shift produced by the cold inertance tube. Effect of regenerator material, geometry and charging pressure on the performance of the second stage of the two-stage PTC was investigated based on the well known regenerator model REGEN. Experimental results of the two-stage PTC were carried out with an emphasis on the performance of the second stage. A lowest cooling temperature of 23.7 K and 0.50 W at 33.9 K were obtained with an input electric power of 150.0 W and an operating frequency of 40 Hz.

[Purpose] The primary aim of this study was to assess rehabilitation outcomes for early and two-stage repair of hand flexor tendon injuries. The secondary purpose was to compare the findings between treatment groups. [Subjects and Methods] Twenty-three patients were included in this study. Early repair (n=14) and two-stage repair (n=9) groups were included in a rehabilitation program that used hand splints. This retrospective study evaluated patients according to their demographic characteristics, including age, gender, injured hand, dominant hand, cause of injury, zone of injury, number of affected fingers, and accompanying injuries. Pain, range of motion, and grip strength were evaluated using a visual analog scale, goniometer, and dynamometer, respectively. [Results] Both groups showed significant improvements in pain and finger flexion after treatment compared with baseline measurements. However, no significant differences were observed between the two treatment groups. Similar results were obtained for grip strength and pinch grip, whereas gross grip was better in the early tendon repair group. [Conclusion] Early and two-stage reconstruction of flexor tendon injuries can be performed with similarly favorable responses and effective rehabilitation programs.

Background: The rapid international expansion of telemedicine reflects the growth of technological innovations. This technological advancement is transforming the way in which patients can receive health care. Materials and Methods: The study was conducted in Poland, at the Department of Cardiology of the Regional Hospital of Louis Rydygier in Torun. The researchers analyzed the delay in the treatment of patients with acute coronary syndrome. The study was conducted as a survey and examined 67 consecutively admitted patients treated invasively in a two-stage transport system. Data were analyzed statistically. Results: Two-stage transportation does not meet the timeframe guidelines for the treatment of patients with acute myocardial infarction. The intervals for the analyzed group of patients were statistically significant (p < 0.0001). Conclusions: Direct transportation of the patient to a reference center with an interventional cardiology laboratory has a significant impact on reducing in-hospital delay for patients with acute coronary syndrome. Perspectives: This article presents the results of two-stage transportation of patients with acute coronary syndrome. This measure could help clinicians who seek to assess the time needed for intervention. It also shows how important the time from the onset of chest pain is, and how it may contribute to patient disability, death or well-being.

Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, the TPMHV can be considered a safe and effective alternative to the standard TPCS.

The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
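The mechanics of the two estimators can be sketched in a simple linear simulation. Note the hedge: in a linear model both 2SPS and 2SRI coincide with two-stage least squares and are consistent; the paper's inconsistency result for 2SPS arises only with nonlinear second stages. The data-generating process and all names below are illustrative assumptions, not the paper's simulation design.

```python
import random

# Illustrative data-generating process: z is a valid instrument, u is an
# unobserved confounder, and the true effect of x on y is 2.0.
random.seed(0)
n = 50_000
z = [random.gauss(0, 1) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]
x = [0.8 * zi + ui for zi, ui in zip(z, u)]            # endogenous regressor
y = [1.0 + 2.0 * xi + ui + random.gauss(0, 1) for xi, ui in zip(x, u)]

def ols(cols, y):
    """OLS coefficients via normal equations and Gaussian elimination."""
    k = len(cols)
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    rhs = [sum(a * yi for a, yi in zip(cols[i], y)) for i in range(k)]
    for p in range(k):                      # forward elimination with pivoting
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv], rhs[p], rhs[piv] = A[piv], A[p], rhs[piv], rhs[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            A[r] = [arc - f * apc for arc, apc in zip(A[r], A[p])]
            rhs[r] -= f * rhs[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):          # back substitution
        beta[p] = (rhs[p] - sum(A[p][c] * beta[c]
                                for c in range(p + 1, k))) / A[p][p]
    return beta

ones = [1.0] * n
g0, g1 = ols([ones, z], x)                  # first stage: regress x on z
x_hat = [g0 + g1 * zi for zi in z]
resid = [xi - xh for xi, xh in zip(x, x_hat)]

b_naive = ols([ones, x], y)                 # plain OLS: biased by confounder u
b_2sps = ols([ones, x_hat], y)              # 2SPS: substitute first-stage prediction
b_2sri = ols([ones, x, resid], y)           # 2SRI: keep x, add first-stage residual
```

Here plain OLS is biased upward while both two-stage estimators recover the true slope of 2.0; the paper's contribution is showing that this equivalence breaks down, to 2SRI's advantage, once the second stage is nonlinear.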

While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.

A fluidized bed two-stage gasification process, consisting of a fluidized-bed (FB) pyrolyzer and a transport fluidized bed (TFB) gasifier, has been proposed to gasify biomass for fuel gas production with low tar content. On the basis of our previous fundamental study, an autothermal two-stage gasifier has been designed and built to gasify a Chinese herb residue with a treating capacity of 600 kg/h. Testing data from the stable operational stage of the industrial demonstration plant showed that, when the reaction temperatures of the pyrolyzer and gasifier were kept at about 700 °C and 850 °C respectively, the heating value of the fuel gas reached 1200 kcal/Nm(3), and the tar content in the produced fuel gas was about 0.4 g/Nm(3). The results from this pilot industrial demonstration plant fully verified the feasibility and technical features of the proposed FB two-stage gasification process.

In this paper, a theoretical analysis of the performance of a thermally driven two-stage four-bed adsorption chiller utilizing low-grade waste heat at temperatures between 50°C and 70°C, in combination with a heat sink (cooling water) of 30°C, for air-conditioning applications is described. Activated carbon (AC) of type Maxsorb III/HFC-134a has been examined as the adsorbent/refrigerant pair. A FORTRAN simulation program is developed to analyze the influence of operating conditions (hot and cooling water temperatures and adsorption/desorption cycle times) on the cycle performance in terms of cooling capacity and COP. The main advantage of this two-stage chiller is that it can be operated with smaller regenerating temperature lifts than other heat-driven single-stage chillers. Simulation results show that the two-stage chiller can be operated effectively with heat sources of 50°C and 70°C in combination with a coolant at 30°C.

Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

A concept for a two-stage injection-locked CW magnetron intended to drive Superconducting Cavities (SC) for intensity-frontier accelerators has been proposed. The concept considers two magnetrons in which the output power differs by 15-20 dB; the lower power magnetron, being frequency-locked from an external source, locks the higher power magnetron. The injection-locked two-stage CW magnetron can be used as an RF power source for Fermilab's Project-X to feed each of the 1.3 GHz SC of the 8 GeV pulsed linac separately. We expect an output/locking power ratio of about 30-40 dB assuming operation in a pulsed mode with a pulse duration of ~8 ms and a repetition rate of 10 Hz. The experimental setup of a two-stage magnetron utilising CW, S-band, 1 kW tubes operating at pulse durations of 1-10 ms, and the obtained results, are presented and discussed in this paper.

Fast response, high efficiency, and good reliability are very important characteristics of electric vehicle (EV) dc/dc converters. The two-stage dc-dc converter is a topology that can offer those characteristics to EVs. Presently, nonlinear control is an active area of research in the field of control algorithms for dc-dc converters; however, very few papers study the two-stage converter for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for the two-stage dc-dc converter are proposed, and a conventional linear controller (lag) is chosen as the comparison. The performances of the proposed FSFSM controller are compared with those obtained by the lag controller. The satisfactory simulation and experiment results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic response to the converter. Finally, additional simulation results are presented to prove that the DISM controller is a promising method for eliminating the steady-state error of the converter.

A preliminary hazard assessment was completed during February 2015 to evaluate the conceptual design of the modular hydrothermal liquefaction treatment system. The hazard assessment was performed in two stages. An initial assessment utilizing Hazard Identification and Preliminary Hazards Analysis (PHA) techniques identified areas with significant or unique hazards (process safety-related hazards) that fall outside the normal operating envelope of PNNL and warranted additional analysis. The subsequent assessment was based on a qualitative What-If analysis. This analysis was augmented, as necessary, by additional quantitative analysis for scenarios involving a release of hazardous material or energy with the potential for affecting the public.

SRI International evaluated two analytical methods for application to coal liquefaction. These included field ionization mass spectrometry and a technique employing iodotrimethylsilane for the derivatization of oxygen bound to alkyl carbon (alkyl ethers). The full report authored by the SRI researchers is presented here. The following assessment briefly highlights the major findings of the project, and evaluates the potential of the methods for application to coal-derived materials. These results will be incorporated by Consol into a general overview of the application of novel analytical techniques to coal-derived materials at the conclusion of this contract. (VC)

Currently, some catastrophe models used by the insurance industry account for liquefaction by applying a simple factor to shaking-induced losses. The factor is based only on local liquefaction susceptibility and this highlights the need for a more sophisticated approach to incorporating the effects of liquefaction in loss models. This study compares 11 unique models, each based on one of three principal simplified liquefaction assessment methods: liquefaction potential index (LPI) calculated from shear-wave velocity, the HAZUS software method and a method created specifically to make use of USGS remote sensing data. Data from the September 2010 Darfield and February 2011 Christchurch earthquakes in New Zealand are used to compare observed liquefaction occurrences to forecasts from these models using binary classification performance measures. The analysis shows that the best-performing model is the LPI calculated using known shear-wave velocity profiles, which correctly forecasts 78 % of sites where liquefaction occurred and 80 % of sites where liquefaction did not occur, when the threshold is set at 7. However, these data may not always be available to insurers. The next best model is also based on LPI but uses shear-wave velocity profiles simulated from the combination of USGS VS30 data and empirical functions that relate VS30 to average shear-wave velocities at shallower depths. This model correctly forecasts 58 % of sites where liquefaction occurred and 84 % of sites where liquefaction did not occur, when the threshold is set at 4. These scores increase to 78 and 86 %, respectively, when forecasts are based on liquefaction probabilities that are empirically related to the same values of LPI. This model is potentially more useful for insurance since the input data are publicly available. HAZUS models, which are commonly used in studies where no local model is available, perform poorly and incorrectly forecast 87 % of sites where liquefaction occurred, even at
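The binary classification performance measures used to compare the liquefaction forecast models above can be sketched as follows. The counts in the usage line are hypothetical, chosen only to mirror the 78 % / 80 % rates quoted for the best model (the study reports rates, not raw site counts).

```python
def binary_scores(tp, fn, tn, fp):
    """Performance measures for comparing binary liquefaction forecasts
    (liquefaction forecast yes/no) against field observations."""
    tpr = tp / (tp + fn)          # hit rate: liquefied sites correctly forecast
    tnr = tn / (tn + fp)          # correct-rejection rate for non-liquefied sites
    balanced_acc = 0.5 * (tpr + tnr)
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    mcc = (tp * tn - fp * fn) / den if den else 0.0  # Matthews correlation
    return tpr, tnr, balanced_acc, mcc

# Hypothetical counts mirroring the 78 % / 80 % rates quoted above:
tpr, tnr, bacc, mcc = binary_scores(tp=78, fn=22, tn=80, fp=20)
```

Reporting both rates (or a balanced summary such as MCC) matters here because liquefaction occurrence data are typically imbalanced, so raw accuracy alone can flatter a model that simply forecasts the majority class.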

The liquefaction analysis procedure conducted at a dam foundation associated with a layer of liquefiable sand is presented. In this case, the effects of the overlying dam and an embedded diaphragm wall on the liquefaction potential of foundation soils are considered. The analysis follows the stress-based approach, which compares the earthquake-induced cyclic stresses with the cyclic resistance of the soil; the cyclic resistance of the sand under complex stress conditions is the key issue. Comprehensive laboratory monotonic and cyclic triaxial tests are conducted to evaluate the static characteristics, dynamic characteristics and the cyclic resistance against liquefaction of the foundation soils. The distribution of the factor of safety considering liquefaction is given. It is found that the zones beneath the dam edges and near the upstream of the diaphragm wall are more susceptible to liquefaction than in the free field, whereas the zone beneath the center of the dam is less susceptible to liquefaction than in the free field. According to the results, strategies of ground improvement are proposed to mitigate the liquefaction hazards.

The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade, and it continues. The purpose of this paper is to direct the focus back to how the power is processed and not so much to the number of stages or the amount of power processed. The performance of the basic DC/DC topologies is reviewed with focus on the component stress. The knowledge obtained in this process is used to review some examples of the alternative PFC solutions and to compare these solutions with the basic two-stage PFC solution.

In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is the generalized Yule-Walker algorithm and the second is a two-stage algorithm based on the correlation technique. The basic idea is to directly identify the parameters of underlying single-rate models instead of the lifted models of dual-rate systems from the dual-rate input-output data, assuming that the measurement data are stationary and ergodic. An example is given.
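As a much-simplified illustration of the correlation idea behind both algorithms, the sketch below estimates a single-rate AR(1) parameter from sample autocovariances via the scalar Yule-Walker equation. This is a hypothetical toy, not the paper's generalized dual-rate algorithm: the lifting of dual-rate data is omitted and only the single-rate identification step is shown.

```python
import random

def yule_walker_ar1(x):
    """Estimate a in x[t] = a*x[t-1] + e[t] from sample autocovariances:
    the first-order Yule-Walker equation gives a = r(1)/r(0)."""
    n = len(x)
    mean = sum(x) / n
    r0 = sum((v - mean) ** 2 for v in x) / n                              # lag-0 autocovariance
    r1 = sum((x[t] - mean) * (x[t - 1] - mean) for t in range(1, n)) / n  # lag-1 autocovariance
    return r1 / r0

# Simulate a stationary AR(1) process with a = 0.6 (hypothetical data)
random.seed(0)
a_true, x = 0.6, [0.0]
for _ in range(20000):
    x.append(a_true * x[-1] + random.gauss(0.0, 1.0))
a_hat = yule_walker_ar1(x)   # close to 0.6
```

The estimate is consistent for stationary, ergodic data, which is exactly the assumption the paper makes on the measurement data.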

This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining, whereas the firms set wages in the non-union sector. In this model, firms and unions of the union sector have a commonality of interest in extending coverage of a minimum wage to the non-union sector. Furthermore, the union sector does not seek to increase the non-union wage to a level above the market-clearing wage. In fact, it is optimal for the union sector to impose a market-clearing wage on the non-union sector. Finally, coverage ...

Delving into present trends and anticipating future malware trends, a hybrid, SQL on the server-side, JavaScript on the client-side, self-replicating worm based on two-stage quines was designed and implemented on an ad-hoc scenario instantiating a very common software pattern. The proof of concept code combines techniques seen in the wild, in the form of SQL injections leading to cross-site scripting JavaScript inclusion, and seen in the laboratory, in the form of SQL quines propagated via RFIDs, resulting in a hybrid code injection. General features of hybrid worms are also discussed.

In this paper, the problem of fundamental frequency and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is addressed. The estimation procedure consists of two stages. Firstly, by making use of the subspace technique and Markov-based eigenanalysis, a multi-channel optimally weighted harmonic multiple signal classification (MCOW-HMUSIC) estimator is devised for the estimation of fundamental frequencies. Secondly, the spatio-temporal multiple signal classification (ST-MUSIC) estimator is proposed for the estimation of DOA with the estimated frequencies. Statistical evaluation with synthetic signals shows the high accuracy of the proposed methods compared with their non-weighting versions.

This paper presents the design, development, optimization experiments and performance of the SITP two-stage Stirling cryocooler. The geometry of the cooler, especially the diameter and length of the regenerator, was analyzed. Operating parameters were optimized experimentally to maximize the second-stage cooling performance. In the tests the cooler was operated at various drive frequencies, phase shifts between displacer and piston, and fill pressures. The experimental results indicate that the cryocooler has a high efficiency, with a performance of 0.85 W at 35 K for a compressor input power of 56 W at a phase shift of 65°, an operating frequency of 40 Hz and a 1 MPa fill pressure.

Electron bulk energization in the diffusion region during anti-parallel symmetric reconnection entails two stages. First, the inflowing electrons are adiabatically trapped and energized by an ambipolar parallel electric field. Next, the electrons gain energy from the reconnection electric field as they undergo meandering motion. These collisionless mechanisms have been described previously, and they lead to highly structured electron velocity distributions. Nevertheless, a simplified control-volume analysis gives estimates for how the net effective heating scales with the upstream plasma conditions, in agreement with fully kinetic simulations and spacecraft observations.

A membrane technology scheme is offered and presented as a two-stage countercurrent recirculating cascade, in order to solve the problem of natural gas dehydration and purification from CO2. The first stage is a single divider, and the second stage is a recirculating two-module divider. This scheme allows natural gas to be cleaned from impurities with any desired degree of methane extraction. In this paper, the optimal values of the basic parameters of the selected technological scheme are determined. An estimation of energy efficiency was carried out, taking into account the energy consumption of the interstage compressor and methane losses in energy units.

A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural change and yields good forecasting results.
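The two operators at the heart of this approach can be sketched numerically: fractional differencing with exponent d removes the long memory, and the fractional cumulation (the inverse operator, exponent -d) restores it. The series and the value of d below are hypothetical; the sketch simply verifies that cumulation inverts differencing, which is what makes the two-stage round trip valid.

```python
def fracdiff(x, d):
    """Apply the fractional differencing operator (1 - B)^d causally,
    using the binomial recursion w[0] = 1, w[k] = w[k-1]*(k - 1 - d)/k."""
    w = [1.0]
    for k in range(1, len(x)):
        w.append(w[-1] * (k - 1 - d) / k)
    # y[t] is the weighted sum of current and past observations
    return [sum(w[j] * x[t - j] for j in range(t + 1)) for t in range(len(x))]

x = [1.0, 2.0, 4.0, 7.0, 11.0, 16.0]   # hypothetical long-memory series
d = 0.4
y = fracdiff(x, d)        # step 1: difference to a weakly dependent series
x_back = fracdiff(y, -d)  # step 2: fractional cumulation restores the original
```

In the actual methodology the forecasting model is fitted to y, and only the forecasts are passed through the cumulation operator.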

The design and development of a positive displacement pump selected to operate as an essential part of the carbon dioxide removal assembly (CDRA) are described. An oilless two-stage rotary sliding vane pump was selected as the optimum concept to meet the CDRA application requirements. This positive displacement pump is characterized by low weight and small envelope per unit flow, ability to pump saturated gases and moderate amount of liquid, small clearance volumes, and low vibration. It is easily modified to accommodate several stages on a single shaft optimizing space and weight, which makes the concept ideal for a range of demanding space applications.

This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.
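For context, a single-user baseline against which such multitone results are typically judged is the textbook closed-form average BER of coherent Gray-coded QPSK over a flat Rayleigh fading channel, P_b = (1/2)(1 - sqrt(g/(1+g))) at average SNR per bit g. This is a standard reference formula, not the paper's TSMLE analysis.

```python
import math

def qpsk_rayleigh_ber(avg_snr_per_bit):
    """Textbook average bit error rate of coherent Gray-coded QPSK on a
    flat Rayleigh fading channel; avg_snr_per_bit is linear, not dB."""
    g = avg_snr_per_bit
    return 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))

ber = qpsk_rayleigh_ber(10.0)   # average BER at a linear SNR of 10
```

Unlike the AWGN case, the fading-averaged BER decays only inversely with SNR, which is why receiver techniques such as the paper's two-stage estimator matter in indoor fading channels.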

Increased environmental awareness in recent years has encouraged rapid growth of renewable energy sources (RESs), especially solar PV and wind. One of the effective solutions to compensate intermittencies in generation from the RESs is to enable consumer participation in demand response (DR). Being a sizable rated element, electric vehicles (EVs) can offer a great deal of demand flexibility in future intelligent grids. This paper first investigates and analyzes driving patterns and charging requirements of EVs. Secondly, a two-stage charging algorithm, namely local adaptive control ...

A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.

In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly addresses systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage ...

Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system, where prey and predator have two stages, an immature stage and a mature stage. The time delays are the time lengths between birth and maturity of the prey and predator species. Results on global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

This research considers a two-stage assembly-type flowshop scheduling problem with the objective of minimizing the total tardiness. The first stage consists of two independent machines, and the second stage consists of a single machine. Two types of components are fabricated in the first stage, and then they are assembled in the second stage. Dominance properties and lower bounds are developed, and a branch and bound algorithm is presented that uses these properties and lower bounds as well as an upper bound obtained from a heuristic algorithm. The algorithm performance is evaluated using a series of computational experiments on randomly generated instances and the results are reported.
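The objective being minimized can be sketched directly: for a given job sequence, each job's two components are fabricated on the two first-stage machines in sequence order, and assembly starts once both components are ready and the assembly machine is free. The processing times and due dates below are hypothetical, and only the evaluation of a fixed sequence is shown, not the branch and bound search.

```python
def total_tardiness(sequence):
    """Total tardiness of a job sequence in the two-stage assembly flowshop.
    Each job is (p1, p2, p_asm, due): component times on machines 1 and 2,
    assembly time on the second-stage machine, and the due date."""
    t1 = t2 = t_asm = 0
    tardiness = 0
    for p1, p2, p_asm, due in sequence:
        t1 += p1                       # component 1 finishes on machine 1
        t2 += p2                       # component 2 finishes on machine 2
        start = max(t1, t2, t_asm)     # assembly needs both components and a free machine
        t_asm = start + p_asm          # job completion time
        tardiness += max(0, t_asm - due)
    return tardiness

# Hypothetical instance: completion times are 5 and 10, so tardiness is 1 + 1 = 2
jobs = [(2, 3, 2, 4), (1, 4, 3, 9)]
tt = total_tardiness(jobs)
```

A branch and bound algorithm of the kind described would use bounds on this quantity to prune partial sequences.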

A pilot scale gasification unit with a novel co-current, updraft arrangement in the first stage and counter-current downdraft in the second stage was developed and exploited for studying the effects of two-stage gasification of biomass (wood pellets), in comparison with one-stage gasification, on fuel gas composition and attainable gas purity. Significant producer gas parameters (gas composition, heating value, content of tar compounds, content of inorganic gas impurities) were compared for the two-stage and the one-stage method of the gasification arrangement with only the upward moving bed (co-current updraft). The main novel features of the gasifier conception include a grate-less reactor, an upward moving bed of biomass particles (e.g. pellets) by means of a screw elevator with changeable rotational speed, and a gradually expanding diameter of the cylindrical reactor in the part above the upper end of the screw. The gasifier concept and arrangement are considered convenient for a thermal power range of 100-350 kW(th). The second stage of the gasifier served mainly for tar compound destruction/reforming at increased temperature (around 950°C) and for the gasification reaction of the fuel gas with char. The second stage used additional combustion of the fuel gas by preheated secondary air for attaining higher temperature and faster gasification of the remaining char from the first stage. The measurements of gas composition and tar compound contents confirmed the superiority of the two-stage gasification system: a drastic decrease of aromatic compounds with two and more benzene rings by 1-2 orders of magnitude. On the other hand, the two-stage gasification (with overall ER=0.71) led to a substantial reduction of the gas heating value (LHV=3.15 MJ/Nm3), an elevation of gas volume and an increase of nitrogen content in the fuel gas. The increased temperature (>950°C) at the entrance to the char bed also caused a substantial decrease of ammonia content in the fuel gas. The char with higher content of ash leaving the

Biomass production and carbohydrate reduction were determined for a two-stage continuous fermentation process with a simulated potato processing waste feed. The amylolytic yeast Saccharomycopsis fibuligera was grown in the first stage and a mixed culture of S. fibuligera and Candida utilis was maintained in the second stage. All conditions for the first and second stages were fixed except the flow of medium to the second stage, which was varied. Maximum biomass production occurred at a second stage dilution rate, D(2), of 0.27 h⁻¹. Carbohydrate reduction was inversely proportional to D(2) between 0.10 and 0.35 h⁻¹.

An evaluation is made of materials and structures technologies deemed capable of increasing the mass fraction-to-orbit of the Saenger two-stage launcher system while adequately addressing thermal-control and cryogenic fuel storage insulation problems. Except in its leading edges, nose cone, and airbreathing propulsion system air intakes, Ti alloy-based materials will be the basis of the airframe primary structure. Lightweight metallic thermal-protection measures will be employed. Attention is given to the design of the large lower stage element of Saenger.

Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of a first molar was prepared by the standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impression was made 20 times with the one-stage technique and 20 times with ...

High density carbon nanotubes (CNTs) have been synthesized from an agricultural hydrocarbon, camphor oil, using a one-hour synthesis time and a titanium dioxide sol gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is done using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results showed that the structural properties of CNTs are highly dependent on pyrolysis temperature changes.

A study was made of the aspects of forming highly resistant coatings in the surface zone of tool steels and solid carbide inserts by a two-stage method. At the first stage of the method, pure Ta or Nb coatings were electrodeposited on samples of tool steel and solid carbide insert in a molten salt bath containing Ta and Nb fluorides. At the second stage, the electrodeposited coating of Ta (Nb) was subjected to carburizing or boriding to form carbide (TaC, NbC) or boride (TaB, NbB) cladding layers.

All the products now obtained from oil can be provided by thermal conversion of the solid fuels biomass and coal. As a feedstock, biomass has many advantages over coal and has the potential to supply up to 20% of US energy by the year 2000 and significant amounts of energy for other countries. However, it is imperative that in producing biomass for energy we practice careful land use. Combustion is the simplest method of producing heat from biomass, using either the traditional fixed-bed combustion on a grate or the fluidized-bed and suspended combustion techniques now being developed. Pyrolysis of biomass is a particularly attractive process if all three products - gas, wood tars, and charcoal - can be used. Gasification of biomass with air is perhaps the most flexible and best-developed process for conversion of biomass to fuel today, yielding a low energy gas that can be burned in existing gas/oil boilers or in engines. Oxygen gasification yields a gas with higher energy content that can be used in pipelines or to fire turbines. In addition, this gas can be used for producing methanol, ammonia, or gasoline by indirect liquefaction. Fast pyrolysis of biomass produces a gas rich in ethylene that can be used to make alcohols or gasoline. Finally, treatment of biomass with high pressure hydrogen can yield liquid fuels through direct liquefaction.

The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

The advantages of biomass as a feedstock are examined and biomass conversion techniques are described. Combustion is the simplest method of producing heat from biomass, using either the traditional fixed bed combustion on a grate or the fluidized bed and suspended combustion techniques now being developed. Pyrolysis of biomass is a particularly attractive process if all three products gas, wood tars, and charcoal can be used. Gasification of biomass with air is perhaps the most flexible and best developed process for conversion of biomass to fuel, yielding a low energy gas that can be burned in existing gas/oil boilers or in engines. Oxygen gasification yields a gas with higher energy content that can be used in pipelines or to fire turbines. In addition, this gas can be used for producing methanol, ammonia, or gasoline by indirect liquefaction. Fast pyrolysis of biomass produces a gas rich in ethylene that can be used to make alcohols or gasoline. Finally, treatment of biomass with high pressure hydrogen can yield liquid fuels through direct liquefaction.

The main objective of the U.S. DOE, Office of Fossil Energy, is to ensure the US a secure energy supply at an affordable price. An integral part of this program was the demonstration of fully developed coal liquefaction processes that could be implemented if market and supply considerations so required. Demonstration of the technology, even if not commercialized, provides a security factor for the country if it is known that the coal-to-liquid processes are proven and readily available. Direct liquefaction breaks down and rearranges complex hydrocarbon molecules from coal, adds hydrogen, cracks the large molecules to those in the fuel range, removes hetero-atoms and gives the liquids characteristics comparable to petroleum-derived fuels. The current processes being scaled and demonstrated are based on two reactor stages that increase conversion efficiency and improve quality by providing the flexibility to adjust process conditions to accommodate favorable reactions. The first-stage conditions promote hydrogenation and some oxygen, sulfur and nitrogen removal. The second stage hydrocracks and speeds the conversion to liquids while removing the remaining sulfur and nitrogen. A third hydrotreatment stage can be used to upgrade the liquids to clean specification fuels.

In vibrating liquefaction experiments on tailings and fine iron ores, the change of pore water pressure when vibrating liquefaction takes place is observed and noted. Based on relevant suppositions, the equation of wave propagation in saturated granular media is obtained. This paper postulates the potential vector equation and the velocity expressions of three kinds of body waves under normal conditions. Utilizing the wave theory and the experimental results, the influence of the three body waves on pore water pressure and granules is analyzed in detail. This reveals the rapid increment mechanism of pore water pressure and the wave mechanism of vibrating liquefaction.

A five-dimensional (5D) controlled two-stage Colpitts oscillator is introduced and analyzed. This new electronic oscillator is constructed by considering the well-known two-stage Colpitts oscillator with two further elements (coupled inductors and a variable resistor). In contrast to current approaches based on a piecewise linear (PWL) model, we propose a smooth mathematical model (with exponential nonlinearity) to investigate the dynamics of the oscillator. Several issues, such as the basic dynamical behaviour, bifurcation diagrams, Lyapunov exponents, and frequency spectra of the oscillator, are investigated theoretically and numerically by varying a single control resistor. It is found that the oscillator moves from the state of fixed point motion to chaos via the usual paths of period-doubling and interior crisis routes as the single control resistor is monitored. Furthermore, an experimental study of the controlled Colpitts oscillator is carried out. An appropriate electronic circuit is proposed for the investigation of the complex dynamical behaviour of the system. A very good qualitative agreement is obtained between the theoretical/numerical and experimental results.

In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs. The ratio n1/n2 should be about 8.

This paper presents a method for generating moderate pulsed X rays with a series diode, which can be driven by a high voltage pulse to generate intense large-area uniform sub-100-keV X rays. A two-stage series diode was designed for the Flash-II accelerator and experimentally investigated. A compact support system for the floating converter/cathode was invented; the extra cathode is floating electrically and mechanically, by withdrawing three support pins several milliseconds before a diode electrical pulse. A double ring cathode was developed to improve the surface electric field and emission stability. The cathode radii and diode separation gap were optimized to enhance the uniformity of the X rays and the coincidence of the two diode voltages, based on simulation and theoretical calculation. The experimental results show that the two-stage series diode can work stably under 700 kV and 300 kA; the average energy of the X rays is 86 keV, and the dose is about 296 rad(Si) over a 615 cm2 area with uniformity 2:1 at 5 cm from the last converter. Compared with the single diode, the average X-ray energy is reduced from 132 keV to 88 keV, and the proportion of sub-100-keV photons increases from 39% to 69%.

This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirement of the high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single piston linear compressor, a two-stage free piston Stirling cryocooler and a passive oscillator. A single stepped displacer configuration was adopted. A numerical model based on the thermoacoustic theory was used to optimize the system operating and structure parameters. Distributions of pressure wave, phase differences between the pressure wave and the volume flow rate and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.
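The quoted relative Carnot efficiencies follow from dividing the measured COP by the Carnot COP, COP_Carnot = Tc/(Th - Tc). The heat-rejection temperature Th is not stated in the abstract, so 300 K is assumed in the sketch below, which reproduces the reported figures only approximately.

```python
def relative_carnot_efficiency(cooling_w, input_w, t_cold, t_hot=300.0):
    """Measured COP divided by the Carnot COP between t_cold and t_hot (K).
    t_hot = 300 K is an assumption, not a value given in the abstract."""
    cop = cooling_w / input_w                 # measured coefficient of performance
    cop_carnot = t_cold / (t_hot - t_cold)    # ideal COP between the two temperatures
    return cop / cop_carnot

eta_40 = relative_carnot_efficiency(78.0, 3200.0, 40.0)   # roughly 0.16
eta_77 = relative_carnot_efficiency(284.0, 3200.0, 77.0)  # roughly 0.26
```

The small differences from the reported 14.8% and 25.9% likely reflect the assumed Th and the actual input power at each operating point.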

Irrigation water management is crucial for agricultural production and livelihood security in many regions and countries throughout the world. In this study, a two-stage stochastic fractional programming (TSFP) method is developed for planning an agricultural water resources management system under uncertainty. TSFP can provide an effective linkage between conflicting economic benefits and the associated penalties; it can also balance conflicting objectives and maximize the system marginal benefit per unit of input under uncertainty. The developed TSFP method is applied to a real case of agricultural water resources management of the Zhangweinan River Basin, China, which is one of the main food and cotton producing regions in north China and faces serious water shortage. The results demonstrate that the TSFP model is advantageous in balancing conflicting objectives and reflecting complicated relationships among multiple system factors. Results also indicate that, under the optimized irrigation target, the optimized water allocation rates of the Minyou Channel and Zhangnan Channel are 57.3% and 42.7%, respectively, which adapts to the changes in the actual agricultural water resources management problem. Compared with the inexact two-stage water management (ITSP) method, TSFP could more effectively address the sustainable water management problem, provide more information regarding tradeoffs between multiple input factors and system benefits, and help the water managers maintain sustainable water resources development of the Zhangweinan River Basin.

A novel 4 K separate two-stage pulse tube cooler (PTC) was designed and tested. The cooler consists of two separate pulse tube coolers, in which the cold end of the first-stage regenerator is thermally connected with the middle part of the second regenerator. Compared to the traditional coupled multi-stage pulse tube cooler, the mutual interference between stages can be significantly eliminated. The lowest refrigeration temperature obtained at the first-stage pulse tube was 13.8 K, a new record for a single-stage PTC. With a two-compressor, two-rotary-valve driving mode, the separate two-stage PTC obtained a refrigeration temperature of 2.5 K at the second stage. Cooling capacities of 508 mW at 4.2 K and 15 W at 37.5 K were achieved simultaneously. A one-compressor, one-rotary-valve driving mode has been proposed to further simplify the structure of the separate-type PTC.

Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem considering some parameters of the linear constraints as interval-type discrete random variables with known probability distribution. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solution are analyzed in two-stage stochastic programming. To solve the stated problem, first we remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively. This highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.
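The two-stage structure (a here-and-now decision followed by scenario-dependent recourse) can be illustrated with a tiny numeric example solved by enumeration rather than the interval and weighting machinery of the paper; all costs, demands and probabilities below are hypothetical.

```python
def expected_total_cost(x, scenarios, c_first=1.0, q_short=4.0, q_hold=0.5):
    """First-stage cost c_first*x plus the expected second-stage (recourse)
    cost over demand scenarios given as (demand, probability) pairs."""
    cost = c_first * x
    for demand, prob in scenarios:
        shortage = max(0.0, demand - x)   # recourse penalty for undersupply
        surplus = max(0.0, x - demand)    # recourse penalty for oversupply
        cost += prob * (q_short * shortage + q_hold * surplus)
    return cost

# Hypothetical demand scenarios with probabilities summing to 1
scenarios = [(5.0, 0.3), (8.0, 0.5), (12.0, 0.2)]
# Enumerate integer first-stage decisions; a real model would solve an LP
best_x = min(range(16), key=lambda x: expected_total_cost(x, scenarios))  # 8
```

The optimal decision hedges between the scenarios instead of matching any single one, which is the essential behaviour that two-stage stochastic programs capture.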

The purpose was to propose two-stage single-compartment models for evaluating dissolution characteristics in the distal ileum and ascending colon, under conditions simulating the bioavailability and bioequivalence studies in fasted and fed state, by using the mini-paddle and the compendial flow-through apparatus (closed-loop mode). Immediate release products of two highly dosed active pharmaceutical ingredients (APIs), sulfasalazine and L-870,810, and one mesalamine colon targeting product were used for evaluating their usefulness. Change of a medium simulating the conditions in the distal ileum (SIFileum) to a medium simulating the conditions in the ascending colon in fasted state and in fed state was achieved by adding an appropriate solution to SIFileum. Data with immediate release products suggest that dissolution in the lower intestine is substantially different than in the upper intestine and is affected by regional pH differences > type/intensity of fluid convection > differences in concentration of other luminal components. Asacol® (400 mg/tab) was more sensitive to type/intensity of fluid convection. In all cases, data were in line with available human data. Two-stage single-compartment models may be useful for the evaluation of dissolution in the lower intestine. The impact of type/intensity of fluid convection and viscosity of media on luminal performance of other APIs and drug products requires further exploration.

Hepatectomy is an effective surgical treatment for multiple bilobar liver metastases from colon cancer; however, one of the primary obstacles to completing surgical resection in these cases is an insufficient volume of the future remnant liver, which may cause postoperative liver failure. To induce atrophy of the unilateral lobe and hypertrophy of the future remnant liver, procedures to occlude the portal vein have conventionally been used prior to major hepatectomy. We report the case of a 50-year-old woman in whom two-stage hepatectomy was performed in combination with intraoperative ligation of the portal vein and the bile duct of the right hepatic lobe. This procedure was designed to promote the atrophic effect on the right hepatic lobe more effectively than the conventional technique and, to the best of our knowledge, was used for the first time in the present case. Despite successful induction of the liver volume shift and completion of the subsequent procedure, the patient died of liver failure after developing recurrent tumors. We discuss the first case in which simultaneous ligation of the portal vein and the biliary system was successfully applied as part of the first step of two-stage hepatectomy.

It is often difficult to estimate parameters with high accuracy for a two-stage production process of blister copper (containing 99.4 wt.% Cu metal), as for most industrial processes, which leads to problems in process modeling and control. The first objective of this study was to model the flash smelting and Cu-matte converting stages using three different techniques: artificial neural networks, support vector machines, and random forests, which utilized noisy technological data. Subsequently, more advanced models were applied to optimize the entire process (the second goal of this research). The obtained optimal solution was a Pareto-optimal one because the process consisted of two stages, making the optimization problem a multi-criteria one. A sequential optimization strategy was employed, which sought optimal control parameters consecutively for both stages. The obtained optimal output parameters for the first (smelting) stage were used as input parameters for the second (converting) stage. Finally, a search for another optimal set of control parameters for the second stage of the Kennecott-Outokumpu process was performed. The optimization process was modeled using a Monte Carlo method, and both the modeling parameters and the computed optimal solutions are discussed.
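
The sequential strategy described above can be caricatured as follows; the two quadratic "surrogates" are invented stand-ins for the trained smelting/converting models, not the paper's data, and the grid search stands in for its optimizer.

```python
# Hedged sketch of sequential two-stage optimization: optimize stage 1,
# feed its output into stage 2, then optimize stage 2 (toy surrogates).
def stage1(control):                 # surrogate: matte grade vs. control
    return 60.0 + 10.0 * control - 4.0 * control ** 2

def stage2(matte_grade, control):    # surrogate: blister quality
    return matte_grade + 8.0 * control - 5.0 * control ** 2

grid = [i / 100 for i in range(201)]  # candidate controls in [0, 2]

# Stage 1: pick the control that maximizes the stage-1 surrogate ...
u1 = max(grid, key=stage1)
matte = stage1(u1)

# ... then use that output as the stage-2 input and optimize again.
u2 = max(grid, key=lambda u: stage2(matte, u))
print(u1, u2)
```

The point is only the hand-off: the stage-1 optimum becomes a fixed input to the stage-2 search, exactly the consecutive scheme the abstract describes.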

The major methods of biomass thermal conversion are combustion in excess oxygen, gasification in reduced oxygen, and pyrolysis in the absence of oxygen. The end products of these methods are heat, gas, liquid and solid fuels. From the point of view of energy production, none of these methods can be considered optimal. A two-stage thermal conversion of biomass, based on pyrolysis as the first stage and cracking of the pyrolysis products as the second stage, can be considered the optimal method for energy production, as it allows obtaining synthesis gas consisting of hydrogen and carbon monoxide and containing no liquid or solid particles. On the basis of the two-stage cracking technology, an experimental power plant with electric power up to 50 kW was designed. The power plant consists of a thermal conversion module and a gas engine power generator adapted for operation on syngas. The purposes of the work were to determine the optimal operating temperature of the thermal conversion module and the optimal mass ratio of processed biomass to charcoal in the cracking chamber of the module. Experiments on cracking the pyrolysis products at various temperatures show that the optimum cracking temperature is 1000 °C. From measurements of the volume of gas produced at different mass ratios of charcoal to processed wood biomass, it follows that the maximum gas volume is obtained at a mass ratio in the range of 0.5-0.6.

The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes from minimizing the transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations that minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. A problem may contain one or more objective functions, one or more transportation stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for a mill-stones company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of methods for treating transportation problems, the duality theory of linear programming, and methods for solving bi-criteria linear programming problems.
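
A minimal sketch of the weighted-sum scalarization for a bi-criteria transportation problem (all data invented; the paper's algorithm enumerates non-dominated extreme points rather than fixing one weight). With balanced 2x2 supplies and demands there is a single free variable, so a scan over it replaces a transportation solver.

```python
# Hedged toy: 2 sources x 2 destinations, two cost criteria.
C1 = [[4, 6], [5, 3]]          # first criterion, e.g. transport cost
C2 = [[2, 1], [4, 7]]          # second criterion, e.g. deterioration
supply, demand = [30, 40], [20, 50]   # balanced: 70 = 70

def plan(t):
    """Fixing x11 = t determines the whole 2x2 shipping plan."""
    return [[t, supply[0] - t],
            [demand[0] - t, supply[1] - (demand[0] - t)]]

def cost(C, x):
    return sum(C[i][j] * x[i][j] for i in range(2) for j in range(2))

w = 0.7                         # weight between the two criteria
feasible = range(0, 21)         # keeps every cell of plan(t) >= 0
best = min(feasible,
           key=lambda t: w * cost(C1, plan(t)) + (1 - w) * cost(C2, plan(t)))
print(best, plan(best))
```

Sweeping `w` over (0, 1) and collecting the distinct optimal plans recovers a set of efficient solutions in the spirit of the paper's non-dominated points.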

During typhoons, accurate forecasts of hourly inundation depths are essential for inundation warning and mitigation. Because observed inundation maps are scarce, sufficient data are not available for developing inundation forecasting models. In this paper, inundation depths simulated and validated by a physically based two-dimensional model (FLO-2D) are used as a database for inundation forecasting. A two-stage inundation forecasting approach based on Support Vector Machine (SVM) is proposed to yield 1- to 6-h lead-time inundation maps during typhoons. In the first stage (point forecasting), the proposed approach considers not only the rainfall intensity and inundation depth as model inputs but also the cumulative rainfall and forecasted inundation depths. In the second stage (spatial expansion), the geographic information of the inundation grids and the inundation forecasts of reference points are used to yield inundation maps. The results clearly indicate that the proposed approach effectively improves forecasting performance and decreases the negative impact of increasing forecast lead time. Moreover, the proposed approach is capable of providing accurate inundation maps for 1- to 6-h lead times. In conclusion, the proposed two-stage forecasting approach is suitable and useful for improving inundation forecasting during typhoons, especially for long lead times.
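
The two-stage structure (point forecasting, then spatial expansion) can be sketched schematically. The trivial rainfall rule below stands in for the paper's trained SVM, and every name and number is made up.

```python
# Structural sketch only: stage 1 forecasts depths at reference points,
# stage 2 expands them to a map via geography-derived weights.
def point_forecast(depth_now, rain_now, rain_cum, lead):
    """Stage 1 stand-in: depth forecast from current depth, rainfall
    intensity, cumulative rainfall and lead time (toy linear rule)."""
    return depth_now + 0.02 * rain_now * lead + 0.001 * rain_cum

def spatial_expand(ref_forecasts, grid_weights):
    """Stage 2: combine reference-point forecasts into per-grid depths
    using weights derived from geographic information."""
    return {g: sum(w * ref_forecasts[p] for p, w in pw.items())
            for g, pw in grid_weights.items()}

refs = {"P1": point_forecast(0.30, 25.0, 80.0, lead=3),
        "P2": point_forecast(0.10, 25.0, 80.0, lead=3)}
weights = {"grid_A": {"P1": 0.7, "P2": 0.3},
           "grid_B": {"P1": 0.2, "P2": 0.8}}
forecast_map = spatial_expand(refs, weights)
print(forecast_map)
```

In the actual approach each stage would be an SVM trained on the FLO-2D database; the split into a point model plus a spatial-expansion step is the part this sketch preserves.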

Two-stage gasification of biomass results in an almost tar-free producer gas suitable for multiple end-use purposes. In the present study, it is investigated to what extent the partial oxidation of the pyrolysis gas from the first stage is involved in direct and indirect tar destruction and conversion. The study identifies the following major impact factors regarding the tar content of the producer gas: oxidation temperature, excess air ratio, and biomass moisture content. In an experimental setup, wood pellets were pyrolyzed and the resulting pyrolysis gas was transferred to a heated partial oxidation zone. A high moisture content of the biomass enhances the decomposition of phenol and inhibits the formation of naphthalene; this enhances tar conversion and gasification in the char bed, and thus contributes indirectly to the tar destruction.

A mathematical model was presented in this paper for the combustion of municipal solid waste in a novel two-stage reciprocating grate furnace. Numerical simulations were performed to predict the temperature, flow and species distributions in the furnace, with practical operational conditions taken into account. The calculated results agree well with the test data, and the burning behavior of municipal solid waste in the novel two-stage reciprocating incinerator is demonstrated well. The thickness of the waste bed, the initial moisture content, the excess air coefficient and the secondary air are the major factors that influence the combustion process. If the initial moisture content of the waste is high, both the heating value of the waste and the temperature inside the incinerator are low, and less oxygen is necessary for combustion. The air supply rate and the primary air distribution along the grate should be adjusted according to the initial moisture content of the waste. A reasonable bed thickness and an adequate excess air coefficient can maintain a higher temperature, promote the burnout of combustibles, and consequently reduce the emission of dioxin pollutants. When the total air supply is constant, properly reducing the primary air and introducing secondary air can enhance turbulence and mixing, prolong the residence time of the flue gas, and promote the complete combustion of combustibles. This study provides an important reference for optimizing the design and operation of municipal solid waste furnaces.

The degradability of excess activated sludge from a wastewater treatment plant was studied. The objective was to establish the degree of degradation using either air or pure oxygen at different temperatures. Sludge treated with pure oxygen was degraded at temperatures from 22 °C to 50 °C, while samples treated with air were degraded between 32 °C and 65 °C. Using air, sludge is efficiently degraded at 37 °C and at 50-55 °C. With oxygen, sludge was most effectively degraded at 38 °C or at 25-30 °C. Two-stage anaerobic-aerobic processes were also studied. The first, anaerobic stage was always operated at 5 days HRT, and the second stage involved aeration with pure oxygen and an HRT between 5 and 10 days. Under these conditions, there is 53.5% VSS removal and 55.4% COD degradation at 15 days total HRT (5 days anaerobic, 10 days aerobic). Sludge digested with pure oxygen at 25 °C in a batch reactor converted 48% of the sludge total Kjeldahl nitrogen to nitrate. Adding an aerobic stage with pure-oxygen aeration to the anaerobic digestion enhances ammonium nitrogen removal. In a two-stage anaerobic-aerobic sludge digestion process with 8 days HRT in the aerobic stage, the removal of ammonium nitrogen was 85%.

We investigated the behaviors of an active control system of two-stage vibration isolation with the actuator installed in parallel with either the upper passive mount or the lower passive isolation mount. We revealed the relationships between the active control force of the actuator and the parameters of the passive isolators by studying the dynamics of two-stage active vibration isolation for the actuator at the foregoing two positions in turn. With the actuator installed beside the upper mount, a small active force can achieve a very good isolating effect when the frequency of the stimulating force is much larger than the natural frequency of the upper mount; a larger active force is required in the low-frequency domain; and the active force equals the stimulating force when the upper mount works within the resonance region, suggesting an approach to reducing wobble and ensuring desirable installation accuracy by increasing the upper-mount stiffness. In either the low or the high frequency region far away from the resonance region, the active force is smaller when the actuator is beside the lower mount than beside the upper mount.

This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supplemental LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not sent directly to the ADS but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option for transuranic (TRU) elements.

A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H2/g volatile solid (VS) added was generated in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added. This figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage. Furthermore, this study provided direct evidence that, in the dynamic fermentation process, an increase in hydrogen production was reflected by an increase in the acetate-to-butyrate ratio in the liquid phase.

The 8 and 10 mm diameter wire rods intended for use as concrete reinforcement were produced (hot rolled) from C-Mn steel containing various elements within the ranges C: 0.55-0.65, Mn: 0.85-1.50, Si: 0.05-0.09, S: 0.04 max, P: 0.04 max and N: 0.006 max wt.%. Depending upon the C and Mn contents, the product attained a pearlitic microstructure in the range of 85-93%, with the balance being polygonal ferrite transformed at prior austenite grain boundaries. The pearlitic microstructure in the wire rods helped in achieving yield strength, tensile strength, total elongation and reduction-in-area values within the ranges of 422-515 MPa, 790-950 MPa, 22-15% and 45-35%, respectively. Analysis of the tensile results revealed that the material experienced hardening in two stages, separable by a knee strain value of about 0.05. The occurrence of this two-stage hardening in the steel, with hardening coefficients of 0.26 and 0.09, could be demonstrated with the help of derived relationships between flow stress and strain.

Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674

Earth-to-orbit vehicle studies of future replacements for the Space Shuttle are needed to guide technology development. Previous studies that have examined single-stage vehicles have shown advantages for dual-fuel propulsion. Previous two-stage system studies have assumed all-hydrogen fuel for the Orbiters. The present study examined dual-fuel Orbiters and found that the system dry mass could be reduced with this concept. The possibility of staging the booster at a staging velocity low enough to allow coast-back to the launch site is shown to be beneficial, particularly in combination with a dual-fuel Orbiter. An engine evaluation indicated the same ranking of engines as did a previous single-stage study. Propane and RP-1 fuels result in lower vehicle dry mass than methane, and staged-combustion engines are preferred over gas-generator engines. The sensitivity to the engine selection is less for two-stage systems than for single-stage systems.

Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called an awareness cascade, during which individuals exhibit herd-like behavior because they make decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate epidemic spreading with an awareness cascade, we propose a local-awareness-controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc as the local awareness ratio α approaches 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc ≈ 0.5. The results give us a better understanding of why some epidemics cannot break out in reality and also provide a potential approach to suppressing and controlling awareness cascading systems.

To investigate the configuration of the expander in the transcritical carbon dioxide two-stage compression cycle, the best place in the cycle to reinvest the recovered work should be identified so as to improve the system efficiency. The expander and the compressor are connected to the same shaft and integrated into one unit, with the latter driven by the former; thus the transfer loss and leakage loss can be decreased greatly. In these systems, the expander can be connected with either the first-stage compressor (shortened to the DCDL cycle) or the second-stage compressor (shortened to the DCDH cycle), but the two configurations yield different performances. By setting up theoretical models for the two expander configurations in the transcritical carbon dioxide two-stage compression cycle, the first and second laws of thermodynamics are used to analyze the coefficient of performance, exergy efficiency, inter-stage pressure, discharge temperature and exergy losses of each component for the two cycles. The model results show that the performance of the DCDH cycle is better than that of the DCDL cycle. The analysis results provide a theoretical basis for practical design and operation.
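
One classical ingredient of any two-stage compression analysis is the choice of inter-stage pressure. The back-of-the-envelope sketch below uses textbook ideal-gas relations with made-up pressures, not the paper's transcritical CO2 model: for two ideal stages with intercooling, total work is minimized near the geometric mean of suction and discharge pressures.

```python
# Hedged sketch: verify numerically that the optimal inter-stage
# pressure of an ideal two-stage compression sits near sqrt(p1 * p2).
import math

def stage_work(p_in, p_out, k=1.3):
    """Specific work of one ideal adiabatic compression stage,
    up to a common constant factor (ideal-gas approximation)."""
    return (k / (k - 1)) * ((p_out / p_in) ** ((k - 1) / k) - 1)

p1, p2 = 35.0, 100.0                  # bar; made-up suction/discharge
p_geo = math.sqrt(p1 * p2)            # geometric-mean rule of thumb
candidates = [p1 + i * (p2 - p1) / 1000 for i in range(1, 1000)]
p_best = min(candidates,
             key=lambda pm: stage_work(p1, pm) + stage_work(pm, p2))
print(p_geo, p_best)                  # the two land close together
```

In the real transcritical cycle the expander work recovery and CO2 property behavior shift this optimum, which is precisely what the paper's first- and second-law models quantify.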

Within the wireless mesh network, a bottleneck problem arises as the number of concurrent traffic flows (NCTF) increases over a single common control channel, as it is for most conventional networks. To alleviate this problem, this paper proposes a two-stage coordination multi-radio multi-channel MAC (TSC-M2MAC) protocol that designates all available channels as both control channels and data channels in a time division manner through a two-stage coordination. At the first stage, a load balancing breadth-first-search-based vertex coloring algorithm for multi-radio conflict graph is proposed to intelligently allocate multiple control channels. At the second stage, a REQ/ACK/RES mechanism is proposed to realize dynamical channel allocation for data transmission. At this stage, the Channel-and-Radio Utilization Structure (CRUS) maintained by each node is able to alleviate the hidden nodes problem; also, the proposed adaptive adjustment algorithm for the Channel Negotiation and Allocation (CNA) sub-interval is ab...

Even though microalgal biomass is leading third-generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focused on establishing high-performance downstream dewatering operations for large-scale processing at optimal economy is limited. The enormous energy and associated cost required for dewatering large-volume microalgal cultures have been the primary hindrance to the development of the biomass quantity needed for industrial-scale microalgal biofuel production. The extremely dilute nature of large-volume microalgal suspensions and the small size of microalgae cells in suspension create significant processing costs during dewatering, and this has raised major concerns about the economic success of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis for establishing an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg of CO2 emissions and a cost of $0.0043 for producing 1 kg of microalgal biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

A two-stage method for image segmentation based on edge and region information is proposed. Different deformation schemes are used at the two stages to segment the object correctly in the image plane. At the first stage, the contour of the model is divided hierarchically into several segments, which deform respectively using affine transformations. After the contour is deformed to the approximate boundary of the object, a fine-match mechanism using statistical information of the local region to redefine the external energy of the model is used to make the contour fit the object's boundary exactly. The algorithm is effective, as the hierarchical segmental deformation makes use of the global and local information of the image, the affine transformation keeps the consistency of the model, and improved approaches to computing the internal and external energies are proposed to reduce the algorithm's complexity. The adaptive method of defining the search area at the second stage makes the model converge quickly. The experimental results indicate that the proposed model is effective, robust to local minima and able to search for concave objects.

A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can be transformed into a model selection problem where a compact model is selected from all candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
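
A toy sketch of the forward stage of orthogonal-least-squares subset selection with the error reduction ratio (ERR); the paper's backward refinement and term-exchanging scheme are omitted, and the data are made up.

```python
# Hedged sketch: greedy forward selection of regressors by ERR,
# orthogonalizing each candidate against the already-selected terms.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthogonalize(v, basis):
    """Classical Gram-Schmidt: remove components along chosen terms."""
    w = list(v)
    for q in basis:
        c = dot(w, q) / dot(q, q)
        w = [wi - c * qi for wi, qi in zip(w, q)]
    return w

def err(v, basis, y):
    """Error reduction ratio of candidate v given the selected basis."""
    w = orthogonalize(v, basis)
    return dot(w, y) ** 2 / (dot(w, w) * dot(y, y))

candidates = {"x1": [1, 0, 0, 1], "x2": [0, 1, 1, 0], "x3": [0, 0, 1, 1]}
y = [1.0, 0.0, 0.5, 1.5]          # generated as x1 + 0.5 * x3

selected, basis = [], []
for _ in range(2):                 # pick a 2-term model greedily
    name = max((n for n in candidates if n not in selected),
               key=lambda n: err(candidates[n], basis, y))
    selected.append(name)
    basis.append(orthogonalize(candidates[name], basis))
print(selected)                    # the two true regressors are found
```

Because the greedy stage can lock in a suboptimal term on harder data, the paper follows it with a backward stage that revisits and exchanges selected terms.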

Considering that the temperature distribution in the furnace of a tangentially fired pulverized coal boiler is difficult to measure and monitor, a two-stage numerical simulation method is put forward. First, multi-field coupling simulations of typical work conditions were carried out off-line with the software CFX-4.3, and the expression of the temperature profile as a function of the operating parameters was obtained. According to real-time operating parameters, the temperature at an arbitrary point of the furnace can then be calculated using this expression. Thus the temperature profile can be shown on-line, and monitoring of the combustion state in the furnace is realized. The simulation model was checked against parameters measured in an operating boiler, DG130-9.8/540. The maximum relative error is less than 12% and the absolute error is less than 120 °C, which shows that the proposed two-stage simulation method is reliable and able to satisfy the requirements of industrial application.
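
The off-line/on-line split can be sketched schematically. Below, a table of made-up temperatures at four hypothetical work conditions stands in for the off-line CFD results, and bilinear interpolation stands in for the paper's fitted temperature-profile expression.

```python
# Hedged sketch: off-line runs populate a table at typical operating
# points; on-line, temperature is interpolated from real-time inputs.
offline = {(100, 0.8): 1250.0, (100, 1.0): 1310.0,   # (load t/h,
           (130, 0.8): 1335.0, (130, 1.0): 1400.0}   #  excess air) -> T

def temperature(load, ex_air):
    """Bilinear interpolation between the four off-line conditions."""
    (l0, l1), (a0, a1) = (100, 130), (0.8, 1.0)
    u, v = (load - l0) / (l1 - l0), (ex_air - a0) / (a1 - a0)
    return ((1 - u) * (1 - v) * offline[(l0, a0)]
            + (1 - u) * v * offline[(l0, a1)]
            + u * (1 - v) * offline[(l1, a0)]
            + u * v * offline[(l1, a1)])

print(temperature(115, 0.9))   # on-line estimate between the corners
```

The payoff is the same as in the paper: the expensive simulation runs once off-line, and the on-line evaluation is cheap enough for real-time monitoring.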

A low-voltage sense amplifier with a reference current generator utilizing a two-stage operational amplifier clamp structure for flash memory is presented in this paper, capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain node of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current-mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only improves the sense window, enhancing read precision, but also saves power consumption. The sense amplifier was implemented in a flash memory fabricated in 90-nm flash technology. Experimental results show an access time of 14.7 ns with a power supply of 1.2 V at the slow corner and 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

Nowadays, the disposal of sewage sludge from wastewater treatment plants and the recovery of waste heat from the steel industry have become two important environmental issues; to integrate these two problems, a two-stage high-temperature sludge gasification approach using the waste heat in hot slags was investigated herein. The whole process was divided into two stages, i.e., low-temperature sludge pyrolysis at ≤900 °C in an argon agent and high-temperature char gasification at ≥900 °C in a CO2 agent, during which the heat required was supplied by hot slags in the corresponding temperature ranges. Both the thermodynamic and kinetic mechanisms were identified, and it was indicated that an Avrami-Erofeev model could best interpret the char gasification stage. Furthermore, a schematic concept of this strategy was portrayed, based on which the potential CO yield and CO2 emission reduction achievable in China could be ~1.92 × 10^9 m^3 and 1.93 × 10^6 t, respectively.
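
For context, the Avrami-Erofeev model mentioned above describes conversion as X(t) = 1 - exp(-(kt)^n). The sketch below fits it to synthetic data (not the paper's measurements) via the standard linearization ln(-ln(1 - X)) = n·ln(t) + n·ln(k).

```python
# Hedged sketch: recover Avrami-Erofeev parameters (k, n) from
# conversion-vs-time data by ordinary least squares on the
# linearized coordinates.
import math

k_true, n_true = 0.05, 1.5                     # illustrative values
ts = [20, 40, 60, 80, 100]
Xs = [1 - math.exp(-((k_true * t) ** n_true)) for t in ts]

xs = [math.log(t) for t in ts]                 # slope of y vs x is n,
ys = [math.log(-math.log(1 - X)) for X in Xs]  # intercept is n*ln(k)
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
n_fit = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
k_fit = math.exp(my / n_fit - mx)
print(n_fit, k_fit)   # recovers the generating n and k
```

With noise-free synthetic data the fit is exact; on real char-gasification data one would compare this model's fit quality against competing kinetic models, as the study did.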

Message propagation in social networks is becoming a popular topic in complex networks. One type of message in social networks is the broadcast message. It refers to a message which has a unique destination that is unknown to the publisher, such as a 'lost and found' notice. Its propagation always has two stages. Due to this feature, rumor propagation models and epidemic propagation models have difficulty describing this message's propagation accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed. We introduce the concepts of the first forwarding probability and the second forwarding probability. Another part of our work is determining the influence on the chance of successful message transmission at each level resulting from multiple factors, including the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks, and the results prove the model's effectiveness.
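
A schematic Monte Carlo sketch of the two-stage forwarding idea, under one plausible reading in which first-hop forwards use one probability and all later hops another; the graph, probabilities and stage rule are invented, not taken from the paper.

```python
# Hedged sketch: estimate the chance a broadcast message reaches its
# unknown destination on a toy graph with two forwarding probabilities.
import random

edges = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}

def delivered(p_recv, p1, p2, publisher=0, dest=4, seed=0):
    """One trial: hop-1 forwards use p1, later hops use p2; each link
    succeeds with probability p_recv * p_fwd."""
    rng = random.Random(seed)
    reached, frontier, hop = {publisher}, [publisher], 0
    while frontier:
        hop += 1
        p_fwd = p1 if hop == 1 else p2
        nxt = []
        for u in frontier:
            for v in edges[u]:
                if v not in reached and rng.random() < p_recv * p_fwd:
                    reached.add(v)
                    nxt.append(v)
        frontier = nxt
    return dest in reached

runs = 2000
rate = sum(delivered(0.9, 0.8, 0.3, seed=s) for s in range(runs)) / runs
print(rate)   # Monte-Carlo estimate of successful delivery
```

Varying the two forwarding probabilities and the publisher-destination distance in such a simulation is one way to probe the per-level transmission chances the paper analyzes.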

This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external coefficient or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, a global sensitivity analysis covers the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.
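
The external and internal coefficients referred to above are the p and q of Bass-type diffusion dynamics. The hedged sketch below (illustrative parameters, not the paper's calibrated model) shows how a free-sample seed enters such dynamics.

```python
# Hedged sketch: discrete-time Bass-type diffusion in which free
# samples seed the initial adopter pool.
def adopters_after(T, m=10000, p=0.01, q=0.4, sample_frac=0.05):
    """Cumulative adopters after T periods; sampling seeds
    m * sample_frac adopters at t = 0."""
    N = m * sample_frac
    for _ in range(T):
        N += (p + q * N / m) * (m - N)   # external + internal (imitation)
    return N

base = adopters_after(10, sample_frac=0.0)
seeded = adopters_after(10, sample_frac=0.05)
print(base, seeded)   # sampling shifts diffusion earlier
```

Intuitively, larger p or q makes diffusion take off on its own, which is consistent with the paper's finding that stronger external or internal coefficients reduce the sampling level worth paying for.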

Purpose: The purpose of this paper is to define the relatively optimal incentive contract to effectively encourage employees to improve work efficiency while actively implementing innovative behavior. Design/methodology/approach: This paper analyzes a two-stage incentive contract coordinating efficiency and innovation in Critical Chain Project Management (CCPM) using learning real options, based on principal-agent theory. A situational experiment is used to analyze the validity of the basic model. Findings: The two-stage incentive scheme is more suitable for employees to create and implement learning real options, engaging them efficiently in the innovation process in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include the individual characteristics of uncertainty perception, which might affect the consistency of external validity. The basic model and the experiment design need improvement. Practical implications: Project managers should pay closer attention to early innovation behavior and to monitoring feedback on competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods while limiting imitative free-riding on the creative ideas of other members of the team.

Two-stage breast reconstruction with a tissue expander and prosthesis is now a common method for achieving a satisfactory appearance in selected patients who have had a mastectomy, but its most common aesthetic drawback is an excessive volumetric increase of the superior half of the reconstructed breast, with a convexity of the profile in that area. A possible way to limit this effect, and to fill out the inferior pole, is to reduce the inferior tissue resistance by means of capsulotomies. This study reports the effects of various types of capsulotomies, performed in 72 patients after removal of the mammary expander, with the aim of emphasising the convexity of the inferior aspect of the expanded breast. For each kind of desired modification, possible solutions are described. On the basis of subjective and objective evaluations, an overall high degree of satisfaction was evidenced. The described selective capsulotomies, when properly carried out, can significantly improve the aesthetic results of two-stage reconstructed breasts, with no additional scars, minimal risk, and little lengthening of the operative time.

Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

According to the EPT and WRL, the two-stage nerve graft showed significant improvement (P=0.020 and P=0.017, respectively). The TOA showed no significant difference between the two groups. The total vascular index was significantly higher in the two-stage nerve graft group (P

OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed in either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140 received two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 months (range: 1–24). After surgery, the hands and axillae of all patients were dry and warm. Sixteen (12%) patients in the one-stage group and 15 (11%) in the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients in the one-stage group and in 11 (8%) in the two-stage group. Compensatory sweating occurred in 25 (19%) patients in the one-stage group and in 6 (4%) in the two-stage group (P = 0.0001). No patient developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, permanently improving the quality of life of patients with palmar and axillary hyperhidrosis. PMID:23442937

The results of laboratory and bench-scale experiments and supporting technical and economic assessments conducted under DOE Contract No. DE-AC22-91PC91040 are reported for the period July 1, 1997 to September 30, 1997. This contract is with the University of Kentucky Research Foundation, which supports work with the University of Kentucky Center for Applied Energy Research, CONSOL, Inc., LDP Associates, and Hydrocarbon Technologies, Inc. This work involves introducing into the basic two-stage liquefaction process several novel concepts, including dispersed lower-cost catalysts, coal cleaning by oil agglomeration, and distillate hydrotreating and dewaxing. Results are reported from experiments in which various methods were tested to activate dispersed Mo precursors. Several oxothiomolybdate precursors having S/Mo ratios from two to six were prepared; another, having an S/Mo ratio of eleven, was also prepared and contained an excess of sulfur. In the catalyst screening test, none of these precursors exhibited an activity enhancement that might suggest that adding sulfur into the structure of the Mo precursors would be beneficial to the process. In another series of experiments, AHM-impregnated coal slurried in the reaction mixture was pretreated with H2S/H2 under pressure and successively heated for 30 min at 120, 250 and 360 °C. THF conversions in the catalyst screening test were not affected, while resid conversions increased such that pretreated coals impregnated with 100 ppm Mo gave conversions equivalent to untreated coals impregnated with 300 ppm fresh Mo. Cobalt, nickel and potassium phosphomolybdates were prepared and tested as bimetallic precursors. The thermal stability of these compounds was evaluated by TG/MS to determine whether the presence of the added metal would stabilize the Keggin structure at reaction temperature. Coals impregnated with these salts showed that the Ni and Co salts gave the same THF conversion as PMA while the Ni salt gave higher

Hydrothermal liquefaction experiments on Nannochloropsis salina and Spirulina platensis at subcritical and supercritical water conditions were carried out to explore the feasibility of extracting lipids from wet algae, preserving nutrients in lipid-extracted algae solid residue, and recycling pro...

The necessary prerequisites for liquefaction of buffers and backfills in a KBS-3 repository exist, but the stress conditions and intended densities practically eliminate the risk of liquefaction for single earthquakes with magnitudes up to M=8 and normal duration. For buffers rich in expandable minerals it would be possible to reduce the density at water saturation to 1,700 - 1,800 kg/m³ or even less without any significant risk of liquefaction, while the density at saturation of backfills with 10 - 15% expandable clay should not be reduced to less than about 1,900 kg/m³. Since the proposed densities of both buffers and backfills will significantly exceed these minimum values, it is concluded that there is no risk of liquefaction of the engineered soil barriers in a KBS-3 repository even for very significant earthquakes.

Progress on seventeen projects related to coal liquefaction or the upgrading of coal liquids and supported by US DOE is reported with emphasis on funding, brief process description history and current progress. (LTN)

Utilizing dissipative structure theory, the evolutionary process of vibrating liquefaction in saturated granules was analyzed. When the irreversible force increases to some degree, the system enters a state far from equilibrium, and a new structure may occur. According to synergetics, the equation of liquefaction evolution was deduced, and the evolutionary process was analyzed dynamically. The evolutionary process of vibrating liquefaction is one in which period doubling leads to chaos, and fluctuation is the original driving force of system evolution. The liquefaction process was also analyzed by fractal geometry. The steady process of vibrating liquefaction obeys a scaling form and shows self-organized criticality in the course of vibration. With an increasing number of cycles, the stress of the saturated granules decreases rapidly or is lost completely, and the strain increases rapidly, so that the granules can no longer sustain load and the "avalanche" phenomenon takes place.

This report describes the base case yields and operating conditions for converting whole microalgae via hydrothermal liquefaction and upgrading to liquid fuels. This serves as the basis against which future technical improvements will be measured.

In the Advanced Coal Liquefaction Concept Proposal (ACLCP), carbon monoxide (CO) and water have been proposed as the primary reagents in the pretreatment process. The main objective of this project is to develop a methodology for pretreating coal under mild conditions based on a combination of existing processes which have shown great promise in liquefaction, extraction and pyrolysis studies. The aim of this pretreatment process is to partially depolymerise the coal, eliminate oxygen and diminish the propensity for retrograde reactions during subsequent liquefaction. The desirable outcomes of the CO pretreatment step should be: (1) enhanced liquefaction activity and/or selectivity toward products of higher quality due to chemical modification of the coal structure; (2) cleaner downstream products; (3) overall improvement in operability and process economics.

Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs that arise when facing a large atlas collection of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration in the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of the desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed by cross validation on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
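The two-stage selection idea can be sketched in a few lines. The code below is an illustrative toy, not the authors' implementation: `cheap_scores` stands in for the preliminary relevance metric from low-cost registration, and `refine` for the refined metric from full-fledged registration; the expensive scorer is only ever called on the augmented subset.

```python
import random

def two_stage_select(cheap_scores, refine, augmented_size, fusion_size):
    """Pick `fusion_size` atlases, calling the costly `refine` scorer
    only on an augmented subset chosen by the cheap preliminary metric."""
    # Stage 1: keep the top atlases by the low-cost preliminary metric.
    stage1 = sorted(cheap_scores, key=cheap_scores.get, reverse=True)[:augmented_size]
    # Stage 2: refined metric (full-fledged registration) on survivors only.
    refined = {atlas: refine(atlas) for atlas in stage1}
    return sorted(refined, key=refined.get, reverse=True)[:fusion_size]

random.seed(0)
true_relevance = {i: float(i) for i in range(100)}                       # ground-truth quality
cheap = {i: s + random.gauss(0, 5) for i, s in true_relevance.items()}   # noisy, cheap proxy
fusion = two_stage_select(cheap, lambda a: true_relevance[a],
                          augmented_size=20, fusion_size=5)
```

Making `augmented_size` larger raises the chance that every truly relevant atlas survives the noisy preliminary cut, at the cost of more full-fledged registrations, which is exactly the trade-off the authors' inference model quantifies.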

We calculated the liquefaction potential index for a grid of sites in the Evansville, Indiana area for two scenario earthquakes: a magnitude 7.7 in the New Madrid seismic zone and a M6.8 in the Wabash Valley seismic zone. For the latter event, peak ground accelerations range from 0.13 g to 0.81 g, sufficiently high to be of concern for liquefaction.

The specialized literature concerning the Geotechnical Engineering Field indicates the problems due to soil liquefaction and the aggravating consequences that liquefaction phenomenon may cause to buildings. Some procedures of foundation soil improvement for both existing and future foundations are presented. The paper also presents three soil remediation methods involving a low level of vibration generated in the process of foundation soil improvement and two case studies representing the usual method in Romania.

A liquefaction analysis procedure for a dam foundation containing a layer of liquefiable sand is presented. In this case, the effects of the overlying dam and an embedded diaphragm wall on the liquefaction potential of the foundation soils are considered. The analysis follows the stress-based approach, which compares the earthquake-induced cyclic stresses with the cyclic resistance of the soil; the cyclic resistance of the sand under complex stress conditions is the key issue. Compr...

Loosely packed sand that is saturated with water can liquefy during an earthquake, potentially causing significant damage. Once the shaking is over, the excess pore water pressures that developed during the earthquake gradually dissipate, while the surface of the soil settles, in a process called post-liquefaction reconsolidation. When examining reconsolidation, the soil is typically divided into liquefied and solidified parts, which are modelled separately. The aim of this paper is to show that this fragmentation is not necessary. By assuming that the hydraulic conductivity and the one-dimensional stiffness of liquefied sand have real, positive values, the equation of consolidation can be numerically solved throughout a reconsolidating layer. Predictions made in this manner show good agreement with geotechnical centrifuge experiments. It is shown that the variation of one-dimensional stiffness with effective stress and void ratio is the most crucial parameter in accurately capturing reconsolidation.
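As a rough illustration of solving the consolidation equation throughout a layer (a sketch only: the paper lets stiffness and conductivity vary with effective stress and void ratio, while this toy keeps the coefficient of consolidation constant), an explicit finite-difference solve of Terzaghi's equation du/dt = c_v d²u/dz² shows excess pore pressure dissipating from the drained surface downward:

```python
def dissipate(u0, cv, dz, dt, steps):
    """Explicit FTCS solve of du/dt = cv * d2u/dz2 for excess pore
    pressure u; drained top (u = 0), impermeable (no-flow) base."""
    r = cv * dt / dz ** 2
    assert r <= 0.5, "explicit scheme stability limit violated"
    u = list(u0)
    u[0] = 0.0                                   # drainage boundary
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            nxt[i] = u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        nxt[-1] = nxt[-2]                        # zero-gradient base
        u = nxt
    return u

# 1 m layer on 21 nodes: pressure drains from the top down (illustrative values).
profile = dissipate([100.0] * 21, cv=1e-6, dz=0.05, dt=1.0, steps=2000)
```

The profile stays monotone with depth: fully dissipated at the drainage boundary, still near the initial value at the base after this short simulated time.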

Undrained behaviour of sand under low cell pressure was studied in static and cyclic triaxial tests. It was found that very loose sand liquefies under static loading, with the relative density being a key parameter for the undrained behaviour of sand. In cyclic triaxial tests, pore water pressures built up during the cyclic loading and exceeded the confining cell pressure. This process was accompanied by a large, sudden increase in axial deformation. The number of cycles necessary to obtain liquefaction was related to the confining cell pressure, the amplitude of cyclic loading and the relative density of the sand. In addition, the patterns of pore water pressure response differ between sand samples of different relative densities. The test results are very useful for explaining the scour mechanism around coastal structures, since they relate to the low-stress behaviour of the sand.

The stability of any structure is possible only if its foundation is appropriately designed. Bandar Abbas is the largest and most important port of Iran; given the high seismicity and strong earthquakes in this territory, the soil mechanical properties of different parts of the city were selected as the subject of the current research. Data relating to foundation design for improvement of structures at different layers of the subsoil were collected and, accordingly, soil mechanical properties were evaluated. The results of laboratory experiments can be used for evaluating the geotechnical characteristics of the urban area in order to develop a region with a high level of structural stability. Ultimately, a new method for calculating the liquefaction force is suggested. It is applicable for improving geotechnical and structural codes and also for reanalysis of the stability of previously constructed buildings.

A two-stage pretreatment approach for biomass is developed in the current work, in which dilute acid (DA) pretreatment is followed by a solvent-based pretreatment (N-methylmorpholine N-oxide, NMMO). When the combined pretreatment (DAWNT) is applied to sugarcane bagasse and corn stover, the rates of hydrolysis and overall yields (>90%) improve dramatically, and under certain conditions 48 h can be taken off the hydrolysis time with the additional NMMO step to reach similar conversions. DAWNT shows a 2-fold increase in characteristic rates and also fractionates the different components of biomass: DA treatment removes the hemicellulose, while the remaining cellulose is broken down to simple sugars by enzymatic hydrolysis after NMMO treatment. The remaining residual solid is high-purity lignin. Future work will focus on developing a full-scale economic analysis of DAWNT for use in biomass fractionation.

In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used to reconstruct gene regulatory networks because of its advantages, but how to determine the network structure and parameters remains an important open question. This paper proposes a two-stage structure learning algorithm that integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated using both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can recover many of the known regulatory relationships from the literature and predict unknown ones with high validity and accuracy.

Total carbon emissions control is the ultimate goal of carbon emission reduction, and industrial carbon emissions are the basic units of the total. Building on existing research results, this paper proposes a two-stage input-output structure decomposition method that fully combines the input-output method with structure decomposition techniques. Compared with previous studies, more comprehensive technical progress indicators were chosen, including the utilization efficiency of all kinds of intermediate inputs such as energy and non-energy products, and these were mapped onto the factors affecting the carbon emissions of different industries. Through this analysis, the effect of each factor on industrial carbon emissions was quantified, providing a theoretical basis and data support for the total carbon emissions control of China from the perspective of industrial emissions.
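The input-output accounting such a decomposition rests on can be shown in a toy two-sector example (an illustrative sketch with made-up numbers, not the paper's model): output x solves the Leontief balance x = Ax + f, and sectoral emissions follow from an intensity vector e.

```python
import numpy as np

A = np.array([[0.1, 0.2],      # intermediate-input coefficients a_ij:
              [0.3, 0.1]])     # input of sector i per unit output of sector j
f = np.array([100.0, 50.0])    # final demand by sector
e = np.array([0.5, 0.8])       # carbon intensity (emissions per unit output)

x = np.linalg.inv(np.eye(2) - A) @ f   # Leontief inverse: x = (I - A)^-1 f
emissions = e * x                       # sectoral carbon emissions
```

A structure decomposition then attributes the change in `emissions` between two periods to changes in e (intensity), A (technology, including intermediate-input utilization efficiency) and f (final demand).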

This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD together with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled by the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given L∞ error bound in the pixel domain. We focus on the selection of the optimum bit rate for the lossy layer to achieve the minimum total bit rate. Unlike other similar lossy-plus-lossless approaches using a wavelet-based lossy layer, the proposed method does not require iterating decoding and the inverse discrete wavelet transform in succession to locate the optimum bit rate. We propose a simple method to estimate the optimal bit rate, with a theoretical justification based on the critical rate argument from rate-distortion theory and the independence of the residual error.
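The L∞ guarantee of the residual layer follows from a standard construction: quantize the integer residual with a uniform step of 2δ+1, and every pixel is reconstructed within δ of the original. A minimal sketch of that mechanism (assumed mechanics, not the paper's codec, which entropy-codes the quantizer indices arithmetically):

```python
def quantize_residual(orig, lossy, delta):
    """Quantize integer residuals with step 2*delta + 1 so that the
    reconstruction differs from the original by at most delta per pixel."""
    step = 2 * delta + 1
    q = [round((o - l) / step) for o, l in zip(orig, lossy)]  # indices to entropy-code
    recon = [l + qi * step for l, qi in zip(lossy, q)]
    return q, recon

orig  = [10, 200, 57, 3, 128]        # original pixel values
lossy = [14, 190, 60, 0, 128]        # hypothetical output of the lossy layer
q, recon = quantize_residual(orig, lossy, delta=2)
max_err = max(abs(o - r) for o, r in zip(orig, recon))   # bounded by delta = 2
```

Because the step is odd and residuals are integers, rounding can never place a residual farther than δ from the nearest reconstruction level, which is the near-lossless bound.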

With MIMO technology being adopted by the wireless communication standards LTE and HSPA+, MIMO OTA research has attracted wide interest from both industry and academia. Parallel studies are underway in COST2100, CTIA, and 3GPP RAN WG4. The major test challenge for MIMO OTA is how to create a repeatable scenario which accurately reflects MIMO antenna radiation performance in a realistic wireless propagation environment. Different MIMO OTA methods differ in the way they reproduce a specified MIMO channel model. This paper introduces a novel, flexible, and cost-effective method for measuring MIMO OTA using a two-stage approach. In the first stage, the antenna pattern is measured in an anechoic chamber using a nonintrusive approach, that is, without cabled connections or modifying the device. In the second stage, the antenna pattern is convolved with the chosen channel model in a channel emulator to measure throughput using a cabled connection.

We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Human Perception and Performance, 37, 1924-1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage.

Under autotrophic conditions, highly productive biodiesel production was achieved in Neochloris oleoabundans using a semi-continuous culture system. In particular, flue gas generated by combustion of liquefied natural gas, together with natural solar radiation, was used for a cost-effective microalgal culture system. In semi-continuous culture, the greater part (~80%) of the culture volume containing vegetative cells grown under nitrogen-replete conditions in a first photobioreactor (PBR) was directly transferred to a second PBR and cultured sequentially under nitrogen-deplete conditions to accelerate oil accumulation. As a result, in semi-continuous culture, the productivities of biomass and biodiesel were increased by 58% (growth phase) and 51% (induction phase), respectively, compared to the cells in batch culture. The semi-continuous culture system using two-stage photobioreactors is a very efficient strategy to further improve biodiesel production from microalgae under photoautotrophic conditions.

This paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in either the pre- or post-liberalization era. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.

In 24 patients with malabsorption, (¹⁴C)triolein breath tests were conducted before and together with the administration of pancreatic enzymes (Pancrease, Johnson and Johnson, Skillman, N.J.). Eleven patients with pancreatic insufficiency had a significant rise in peak percent dose per hour ¹⁴CO₂ excretion after Pancrease, whereas 13 patients with other causes of malabsorption had no increase in ¹⁴CO₂ excretion (2.61 +/- 0.96 vs. 0.15 +/- 0.45, p < 0.001). The two-stage (¹⁴C)triolein breath test appears to be an accurate and simple noninvasive test of fat malabsorption that differentiates steatorrhea secondary to pancreatic insufficiency from other causes of steatorrhea.

Lot streaming in hybrid flowshops (HFS) is encountered in many real-world problems. This paper presents a heuristic approach to lot streaming based on critical machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and such problems are known to be NP-hard. A mathematical model was developed for the selected problem. Simulation modelling and analysis were carried out in Extend V6 software, and a heuristic was developed for obtaining an optimal lot-streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments: all possible lot-streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded an optimal schedule consistently in all eleven cases. A procedure for identifying the best lot-streaming strategy is also suggested.
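The benefit of lot streaming in this two-stage configuration can be illustrated with a toy makespan evaluator (a hypothetical sketch, not the paper's model or heuristic): sublots go to the earliest-free of the two identical stage-1 machines and are processed in list order on the single critical stage-2 machine.

```python
def makespan(sublots):
    """Makespan of (stage1_time, stage2_time) sublots: stage 1 has two
    identical parallel machines (earliest-free assignment), stage 2 a
    single critical machine processing sublots in list order."""
    m1 = [0.0, 0.0]        # ready times of the two stage-1 machines
    stage2_free = 0.0      # ready time of the critical stage-2 machine
    for t1, t2 in sublots:
        k = 0 if m1[0] <= m1[1] else 1     # earliest-free stage-1 machine
        m1[k] += t1                        # sublot finishes stage 1
        start2 = max(m1[k], stage2_free)   # waits for the critical machine
        stage2_free = start2 + t2
    return stage2_free

whole = makespan([(10.0, 10.0)])             # one unsplit lot
split = makespan([(5.0, 5.0), (5.0, 5.0)])   # same lot streamed as two sublots
```

Streaming lets the critical second stage start while stage 1 is still working on the remaining sublots, which is why the split schedule finishes earlier than the unsplit one.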

In the food-processing industry, usually a limited number of storage tanks for intermediate storage is available, which are used for different products. The market sometimes requires extremely short lead times for some products, leading to prioritization of these products, partly through the dedication of a storage tank. This type of situation has hardly been investigated, although planners struggle with it in practice. This paper aims at investigating the fundamental effect of prioritization and dedicated storage in a two-stage production system, for various product mixes. We show...

This study established a comprehensive model to configure a new two-stage high solid anaerobic digester (HSAD) system designed for highly degradable organic fraction of municipal solid wastes (OFMSW). The HSAD reactor as the first stage was naturally separated into two zones due to biogas floatation and low specific gravity of solid waste. The solid waste was retained in the upper zone while only the liquid leachate resided in the lower zone of the HSAD reactor. Continuous stirred-tank reactor (CSTR) and advective-diffusive reactor (ADR) models were constructed in series to describe the whole system. Anaerobic digestion model No. 1 (ADM1) was used as reaction kinetics and incorporated into each reactor module. Compared with the experimental data, the simulation results indicated that the model was able to well predict the pH, volatile fatty acid (VFA) and biogas production.

The investigation of the photochemistry of a two-stage photobase generator (PBG) is described. Absorption of a photon by a latent PBG (1) (first step) produces a PBG (2). Irradiation of 2 in the presence of water produces a base (second step). This two-photon sequence (1 + hν → 2 + hν → base) is an important component in the design of photoresists for pitch division technology, a method that doubles the resolution of projection photolithography for the production of microelectronic chips. In the present system, the excitation of 1 results in a Norrish type II intramolecular hydrogen abstraction to generate a 1,4-biradical that undergoes cleavage to form 2 and acetophenone (Φ ∼ 0.04). In the second step, excitation of 2 causes cleavage of the oxime ester (Φ = 0.56) followed by base generation after reaction with water.

The present study focused on the application of Anaerobic Digestion Model 1 to methane production from acidified sorghum extract generated by a hydrogen-producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acid consumption were estimated through fitting of the model equations to data obtained from batch experiments. The simulation of the continuous reactor performance at all HRTs tested (20, 15 and 10 d) was very satisfactory. Specifically, the largest deviation of the theoretical predictions from the experimental data was 12% for the methane production rate at the HRT of 20 d, while the deviations for the 15 and 10 d HRTs were 1.9% and 1.1%, respectively. The model predictions regarding pH, methane percentage in the gas phase and COD removal were in very good agreement with the experimental data with a deviation...

Advances in high-performance sensing technologies enable the development of wind turbine condition monitoring systems to diagnose and predict the system-wide effects of failure events. This paper presents a vibration-based two-stage fault detection framework for failure diagnosis of rotating components in wind turbines. The proposed framework integrates an analytical defect detection method and a graphical verification method to ensure diagnosis efficiency and accuracy. The efficacy of the proposed methodology is demonstrated with a case study using the gearbox condition monitoring Round Robin study dataset provided by the National Renewable Energy Laboratory (NREL). The developed methodology successfully identified five faults out of seven in total, with accurate severity levels, without producing any false alarms in the blind analysis. The case study results indicate that the developed fault detection framework is effective for analyzing gear and bearing faults in wind turbine drive train systems based upon system vibration characteristics.

Urine contains the majority of the nutrients in urban wastewaters and is an ideal nutrient recovery target. In this study, stabilization of real undiluted urine through nitrification and subsequent microalgae cultivation was explored as a strategy for biological nutrient recovery. A nitrifying inoculum screening revealed a commercial aquaculture inoculum to have the highest halotolerance. This inoculum was compared with municipal activated sludge for the start-up of two nitrification membrane bioreactors. Complete nitrification of undiluted urine was achieved in both systems at a conductivity of 75 mS cm(-1) and a loading rate above 450 mg N L(-1) d(-1). The halotolerant inoculum shortened the start-up time by 54%. Nitrite oxidizers showed faster salt adaptation, and Nitrobacter spp. became the dominant nitrite oxidizers. Nitrified urine as a growth medium for Arthrospira platensis gave superior growth compared to untreated urine and resulted in a high protein content of 62%. This two-stage strategy is therefore a promising approach for biological nutrient recovery.

The possible intermittent impacts of a two-stage isolation system with rigid limiters have been investigated. The isolation system is under periodic external excitation disturbed by small stationary Gaussian white noise after a shock, during which the maximal impacts are considered. For the period after the shock, the zero-order approximate stochastic discrete model and the first-order approximate stochastic model are developed. The real isolation system of an MTU diesel engine is used to evaluate the established model. After calculation of the numerical example, the effects of noise excitation on the isolation system are discussed. The results show that the behaviour of the system is complicated due to intermittent impact, that the difference between the zero-order model and the first-order model may be great, and that the effect of even small noise is obvious. The results may be expected to be useful to naval designers.

A two-stage Stirling-type U-shape pulse tube cryocooler driven by a 10 kW-class linear compressor was designed, built and tested. A special feature of the cold head is the absence of a heat exchanger at the cold end of the first stage, since the intended application requires no cooling power at an intermediate temperature. Simulations were done using Sage software to find the optimum operating conditions and cold head geometry. Flow-impedance matching was required to connect the compressor, designed for 60 Hz operation, to the 40 Hz cold head. A cooling power of 12.9 W at 25 K with an electrical input power of 4.6 kW has been achieved so far. The lowest temperature reached is 13.7 K.

Recently, coherent x-ray sources have promoted the development of optical systems for focusing, imaging, and interferometry. In this paper, we propose a two-stage focusing optical system with the goal of achromatically focusing pulses from an x-ray free-electron laser (XFEL) to a focal width of 10 nm. In this optical system, the x-ray beam is expanded by a grazing-incidence aspheric mirror and then focused by a mirror shaped as a solid of revolution. We describe the design procedure and discuss the theoretical focusing performance. In theory, soft-XFEL pulses can be focused to a 10 nm area without chromatic aberration and with high reflectivity; this creates an unprecedented power density of 10^20 W cm^-2 in the soft-x-ray range.

Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the solar irradiance and temperature measurements that have been used in conventional power reserve control schemes to estimate the available PV power are not required, making this a sensorless approach with reduced cost. Experimental tests have been...

Due to the still increasing penetration level of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A reserved power control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the reserved power control for grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the irradiance measurements that have been used in conventional control schemes to estimate the available PV power are not required, making this a sensorless solution. Simulations and experimental tests have been performed on a 3-kW two-stage single...

A prey-predator model system is developed in which disease is introduced into the prey population. Here the prey population is taken as a pest, and the predators consume the selected pest. Moreover, we assume that the prey species is infected with a viral disease, dividing it into susceptible and two-stage infected classes, and that the early stage of infected prey is more vulnerable to predation by the predator. It is also assumed that the later stage of infected pests is not eaten by the predator. Different equilibria of the system are investigated, and their stability analysis and the Hopf bifurcation of the system around the interior equilibria are discussed. A modified model has been constructed by considering an alternative food source for the predator population, and the dynamical behavior of the modified model has been investigated. We demonstrate the analytical results numerically using a simulated set of parameter values.
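The staged-infection dynamics described above can be sketched as a small system of ODEs. The abstract does not give the paper's exact equations, so the following is an illustrative toy: susceptible prey S, early-stage infected prey I1 (more heavily predated), late-stage infected prey I2 (not predated), and predator P, with every coefficient hypothetical.

```python
# Illustrative eco-epidemic sketch (not the paper's exact model).
# All parameter values below are invented for demonstration.

def simulate(days=200.0, dt=0.01):
    r, K = 1.0, 10.0        # prey growth rate and carrying capacity
    lam = 0.15              # disease transmission rate
    m = 0.2                 # progression rate I1 -> I2
    a1, a2 = 0.08, 0.20     # predation rates on S and I1 (a2 > a1)
    d1, d2 = 0.05, 0.3      # disease-induced death rates
    c, delta = 0.4, 0.12    # conversion efficiency, predator death rate
    S, I1, I2, P = 5.0, 0.5, 0.1, 1.0
    for _ in range(round(days / dt)):  # forward-Euler integration
        dS = r * S * (1 - S / K) - lam * S * (I1 + I2) - a1 * S * P
        dI1 = lam * S * (I1 + I2) - (m + d1) * I1 - a2 * I1 * P
        dI2 = m * I1 - d2 * I2
        dP = c * (a1 * S + a2 * I1) * P - delta * P
        S = max(S + dt * dS, 0.0)
        I1 = max(I1 + dt * dI1, 0.0)
        I2 = max(I2 + dt * dI2, 0.0)
        P = max(P + dt * dP, 0.0)
    return S, I1, I2, P

final_state = simulate()
```

Numerical continuation of such a run (varying, e.g., the alternative-food parameter) is how the Hopf bifurcation mentioned above would be explored.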

This paper presents a two-stage motion compensation coding scheme for image sequences in hemodynamics. The first stage of the proposed method implements motion compensation, and the second stage corrects local pixel intensity distortions with a context-adaptive linear predictor. The proposed method is robust to the local intensity distortions and the noise that often degrade these image sequences, providing lossless and near-lossless quality. Our experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed by Scharcanski [1], respectively. The performance tends to improve for near-lossless compression. Therefore, this work presents experimental evidence that, for coding image sequences in hemodynamics, an adequate motion compensation scheme can be more efficient than the still-image coding methods often used nowadays.

An effective two-stage method for the estimation of the parameters of a linear regression is considered. For this purpose we introduce a certain quasi-estimator that, in contrast to the usual estimator, produces two alternative estimates. It is proved that, in comparison to the least squares estimate, one of the alternatives has a significantly smaller quadratic risk while retaining unbiasedness and consistency. These properties hold for one-dimensional, multi-dimensional, orthogonal and non-orthogonal problems. Moreover, a Monte Carlo simulation confirms the high robustness of the quasi-estimator to violations of the initial assumptions. At the first stage of the estimation we therefore calculate the two alternative estimates mentioned above. At the second stage we choose the better estimate of the two, using additional information, including but not limited to information of an a priori nature. In the case of two alternatives the volume of such information should be minimal. Furthermore, the additional ...
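The select-between-alternatives structure can be illustrated with a deliberately simple toy. The quasi-estimator itself is not specified in the abstract; the shrunken second estimate and the prior-based selection rule below are stand-ins for it, chosen only to show the two-stage shape.

```python
# Toy two-stage estimation for y = b*x + noise (no intercept).
# Stage 1 produces two alternative slope estimates; stage 2 picks one
# using additional (here: a priori) information. All choices are illustrative.
import random

def two_stage_slope(xs, ys, prior_b, shrink=0.8):
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b_ls = sxy / sxx                 # stage 1a: least-squares estimate
    b_alt = shrink * b_ls            # stage 1b: alternative (shrunken) estimate
    # stage 2: choose the alternative closer to the a priori value
    return b_ls if abs(b_ls - prior_b) <= abs(b_alt - prior_b) else b_alt

random.seed(0)
xs = [i / 10 for i in range(1, 51)]
ys = [2.0 * x + random.gauss(0, 0.5) for x in xs]
b = two_stage_slope(xs, ys, prior_b=2.0)
```

With the true slope equal to the prior, stage 2 retains the least-squares estimate; a misleading prior would instead select the shrunken alternative, which is where the risk comparison of the paper becomes relevant.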

In order to obtain an overall and systematic understanding of the performance of a two-stage light gas gun (TLGG), a numerical code simulating the processes occurring during a gun shot was developed based on the quasi-one-dimensional unsteady equations of motion, with the real gas effect, friction and heat transfer taken into account in a characteristic formulation for both the driver and propellant gas. Comparisons of projectile velocities and projectile pressures along the barrel with experimental results from JET (Joint European Torus) and with computational data obtained by the Lagrangian method indicate that this code can provide results with good accuracy over a wide range of gun geometries and loading conditions.

We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.
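The two-stage structure can be sketched as follows. Note that the paper's fuzzy location method and integer program are replaced here by brute-force site selection and a nearest-neighbour route, and all coordinates are invented; the sketch only mirrors the decomposition (locate and assign, then route).

```python
# Simplified two-stage location/routing sketch (stand-in methods, toy data).
from itertools import combinations
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def stage1(candidates, maps, k):
    """Choose k TDC sites minimising total MAP-to-nearest-TDC distance."""
    best = min(combinations(candidates, k),
               key=lambda tdcs: sum(min(dist(m, t) for t in tdcs) for m in maps))
    assignment = {t: [] for t in best}
    for m in maps:  # assign each MAP to its nearest chosen TDC
        assignment[min(best, key=lambda t: dist(m, t))].append(m)
    return assignment

def stage2(tdc, assigned):
    """Nearest-neighbour delivery route starting from the TDC."""
    route, pos, todo = [], tdc, list(assigned)
    while todo:
        nxt = min(todo, key=lambda m: dist(pos, m))
        todo.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

candidates = [(0, 0), (5, 5), (10, 0)]   # hypothetical TDC candidate sites
maps = [(1, 1), (4, 4), (9, 1), (6, 5)]  # hypothetical medical aid points
assignment = stage1(candidates, maps, k=2)
routes = {t: stage2(t, ms) for t, ms in assignment.items()}
```

Finding (i) above can be probed directly in this toy by re-running stage 1 with larger k and comparing the total assignment distances.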

The study in this paper is a numerical integration of second-order three-point boundary value problems under two imposed nonlocal boundary conditions at t=t0, t=ξ, and t=t1 in a general setting, where t0<ξ<t1. We construct a two-stage Lie-group shooting method for finding the unknown initial conditions, which are obtained through an iterative solution of derived algebraic equations in terms of a weighting factor r∈(0,1). The best r is selected by matching the target with a minimal discrepancy. Numerical examples are examined to confirm that the new approach has high efficiency and accuracy with a fast speed of convergence. Even for multiple solutions, the present method is effective in finding them.


Purchasers of speed reducers decide to buy those reducers that most closely satisfy their demands at much smaller cost. The amount of material used, i.e. the mass and dimensions of the gear unit, influences the gear unit's price. The mass and dimensions of a gear unit, besides output torque, gear unit ratio and efficiency, are among the most important parameters of the technical characteristics of gear units and their quality. Centre distance and the position of the shafts have a significant influence on output torque, gear unit ratio and gear unit mass through the overall dimensions of the gear unit housing; these characteristics are thus dependent on each other. This paper analyzes the influence of centre distance and shaft position on the output torque and ratio of universal two-stage gear units.

Based on the evaluation of the dynamic performance of feed drives in machine tools, this paper presents a two-stage tuning method for servo parameters. In the first stage, evaluation of the dynamic performance, parameter tuning and optimization are performed on a mechatronic integrated system simulation platform of the feed drives, yielding a servo parameter combination. In the second stage, the servo parameter combination from the first stage is set and tuned further in a real machine tool, whose dynamic performance is measured and evaluated using the cross grid encoder developed by Heidenhain GmbH. A case study shows that this method simplifies the test process effectively and results in good dynamic performance in a real machine tool.

The effects of hydraulic retention time (HRT) and gas volume on the efficiency of wastewater treatment are discussed based on a simulation experiment in which domestic sewage was treated by the two-stage bio-contact oxidation process. The results show that the average CODcr, BOD5, suspended solids (SS) and ammonia-nitrogen removal rates are 94.5%, 93.2%, 91.7% and 46.9%, respectively, under the conditions of a total air/water ratio of 5:1, an air/water ratio of 3:1 for oxidation tank 1 and 2:1 for oxidation tank 2, and a hydraulic retention time of 1 h for each stage. This method is suitable for domestic sewage treatment in residential communities and small towns.

We present the design, implementation and alignment procedure for a two-stage time-delay-compensating monochromator. The setup spectrally filters the radiation of a high-order harmonic generation source, providing wavelength-selected XUV pulses with a bandwidth of 300 to 600 meV in the photon energy range of 3 to 50 eV. XUV pulses as short as 12 ± 3 fs are demonstrated. Transmission of the 400 nm (3.1 eV) light facilitates precise alignment of the monochromator. This alignment strategy, together with the stable mechanical design of the motorized beamline components, enables us to automatically scan the XUV photon energy in pump-probe experiments that require XUV beam-pointing stability. The performance of the beamline is demonstrated by the generation of IR-assisted sidebands in XUV photoionization of argon atoms.

CANARY is an on-sky Laser Guide Star (LGS) tomographic AO demonstrator in operation at the 4.2 m William Herschel Telescope (WHT) in La Palma. From the early demonstration of open-loop tomography on a single deformable mirror using natural guide stars in 2010, CANARY has been progressively upgraded each year to reach its final goal in July 2015. It is now a two-stage system that mimics the future E-ELT: a GLAO-driven woofer based on four laser guide stars delivers a ground-layer-compensated field to a figure-sensor-locked tweeter DM, which achieves the final on-axis tomographic compensation. We present the overall system, the control strategy and an overview of its on-sky performance.

A two-stage axial-flow fan with a tip speed of 1450 ft/sec (442 m/sec) and an overall pressure ratio of 2.8 was designed, built, and tested. At design speed and pressure ratio, the measured flow matched the design value of 184.2 lbm/sec (83.55 kg/sec). The adiabatic efficiency at the design operating point was 85.7 percent. The stall margin at design speed was 10 percent. A first-bending-mode flutter of the second-stage rotor blades was encountered near stall at speeds between 77 and 93 percent of design, and also at high pressure ratios at speeds above 105 percent of design. A 5 deg closed reset of the first-stage stator eliminated second-stage flutter for all but a narrow speed range near 90 percent of design.

This paper presents an optimal two-stage extended Kalman filter (OTSEKF) for closed-loop flux, torque, and speed estimation of a permanent magnet synchronous motor (PMSM), to achieve sensorless DTC-SVPWM operation of the drive system. The novel observer is obtained by using the same transformation as in the linear Kalman observer proposed by C.-S. Hsieh and F.-C. Chen in 1999. The OTSEKF is an effective implementation of the extended Kalman filter (EKF) and provides recursive optimum state estimation for PMSMs using terminal signals that may be polluted by noise. Compared to a conventional EKF, the OTSEKF reduces the number of arithmetic operations. Simulation and experimental results verify the effectiveness of the proposed OTSEKF observer for DTC of PMSMs.
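For readers unfamiliar with the predict/update cycle that the OTSEKF reorganizes, a generic scalar EKF loop looks like the following. A toy nonlinear system stands in for the PMSM model, whose equations the abstract does not give; all noise levels and coefficients are illustrative.

```python
# Generic scalar EKF for the toy system x_k = a*sin(x_{k-1}) + w, z_k = x_k + v.
# (Not the PMSM observer; it only shows the predict/update structure.)
import math, random

def ekf(zs, a=0.9, q=0.01, r=0.1, x=0.0, p=1.0):
    estimates = []
    for z in zs:
        # predict: propagate state and covariance through the linearized model
        x_pred = a * math.sin(x)
        f_jac = a * math.cos(x)          # Jacobian of the state transition
        p_pred = f_jac * p * f_jac + q
        # update: correct with the measurement (observation Jacobian H = 1)
        k = p_pred / (p_pred + r)        # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1 - k) * p_pred
        estimates.append(x)
    return estimates

random.seed(1)
true_x, zs = 0.5, []
for _ in range(100):
    true_x = 0.9 * math.sin(true_x) + random.gauss(0, 0.1)
    zs.append(true_x + random.gauss(0, 0.3))
est = ekf(zs, q=0.01, r=0.09)
```

The arithmetic saving claimed for the OTSEKF comes from restructuring exactly these covariance recursions, not from changing the estimate they produce.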

The “methanation + anaerobic ammonia oxidation autotrophic denitrification” method was adopted using anaerobic sequencing batch reactors (ASBRs) and achieved satisfactory synchronous removal of chemical oxygen demand (COD) and ammonia-nitrogen (NH4+-N) from wastewater after 75 days of operation: 90% of COD was removed at a COD load of 1.2 kg/(m3·d) and 90% of TN was removed at a TN load of 0.14 kg/(m3·d). The anammox reaction ratio was estimated to be 1:1.32:0.26. The results show that synchronous rapid start-up of the methanation and anaerobic ammonia oxidation processes in two-stage ASBRs is feasible.

A Remote Liquid Loading System (RLLS) was designed and tested for loading high-hazard liquid materials into instrumented target cells for gas gun-driven plate impact experiments. These high-hazard liquids tend to react with confining materials in a short period of time, degrading target assemblies and potentially building up pressure through the evolution of gas in the reactions. The ability to load a gas gun target immediately prior to gun firing therefore provides the most stable and reliable target fielding approach. We present the design and evaluation of an RLLS built for the LANL two-stage gas gun. The system has been used successfully to interrogate the shock initiation behavior of ~98 wt% hydrogen peroxide (H2O2) solutions, using embedded electromagnetic gauges for in-situ measurement of shock wave profiles.

Spinal neurofibromas occur sporadically and typically in association with neurofibromatosis 1. Patients afflicted with neurofibromatosis 1 usually present with involvement of several nerve roots. This report describes the case of a 14-year-old child with a large intraspinal but extradural tumour with paraspinal extension: a dumbbell neurofibroma of the cervical region extending from the C2 to C4 vertebrae. The lesions were readily detected by MR imaging and were successfully resected in a two-stage surgery, with an interval of one month between the first and second operations. We provide a brief review of the literature regarding the various surgical approaches, emphasising the utility of the anterior and posterior approaches.

Background: Geriatric patients recently discharged from hospital are at risk of unplanned readmissions and admission to nursing home. When discharged directly from the Emergency Department (ED) the risk increases, as time pressure often requires focus on the presenting problem, although 80% of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems, given just after discharge from the ED and one and six months after. Method: We conducted a prospective ... to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to ED; admission to nursing home; and death. Secondary endpoints will be presented as physical function; depressive symptoms...

An improved two-stage model of colorimetric characterization for liquid crystal displays (LCDs) is proposed. The model includes an S-shaped nonlinear function with four coefficients for each channel to fit the tone reproduction curve (TRC), and a linear transfer matrix with black-level correction. For comparison with the simple model (SM), gain-offset-gain (GOG), S-curve and three one-dimensional look-up tables (3-1D LUTs) models, an identical LCD was characterized and the color differences were calculated and summarized using a set of 7 × 7 × 7 digital-to-analog converter (DAC) triplets as test data. The experimental results show that the proposed model outperforms the GOG and SM models, and performs close to the S-curve model and the 3-1D LUTs method.

A fast two-stage geometric active contour algorithm for image segmentation is developed. First, the Eikonal equation problem is quickly solved using an improved fast sweeping method, and a criterion of local minimum of area gradient (LMAG) is presented to extract the optimal arrival time. Then, the final time function is passed as an initial state to an area- and length-minimizing flow model, which adjusts the interface more accurately and prevents it from leaking. For an object with a complete and salient edge, using the first stage alone is able to obtain an ideal result, with a time complexity of O(M), where M is the number of points in each coordinate direction. Both stages are needed for convoluted shapes, but the computational cost remains drastically reduced. The efficiency of the algorithm is verified in segmentation experiments on real images with different features.

The present work concerns a parametric study of an autonomous, two-stage solar organic Rankine cycle for RO desalination. The main goal of the simulation is to estimate the efficiency and to calculate the annual mechanical energy available for desalination in the considered cases, in order to evaluate the influence of various parameters on the performance of the system. The parametric study concerns the variation of different parameters without actually changing the baseline case. The effects of the collectors' slope and of the total number of evacuated tube collectors used have been examined extensively. The total cost is also taken into consideration and is calculated for the different cases examined, along with the specific fresh water cost (EUR/m3). (author)

Introduction: We report a case of non-union with severe shortening of the femur following diaphysectomy for chronic osteomyelitis. Case Presentation: A boy aged 16 years presented with a dangling and excessively short left lower limb. He was using an elbow crutch in his right hand to help him walk. He had a history of diaphysectomy for chronic osteomyelitis at the age of 9. Examination revealed a freely mobile non-union of the left femur. The femur was the seat of an 18 cm shortening and a 4 cm defect at the non-union site; the knee joint was ankylosed in extension. The tibia and fibula were 10 cm short. Considering the extensive shortening in the femur and tibia in addition to osteoporosis, he was treated in two stages. In stage I, the femoral non-union was treated by open reduction, internal fixation and iliac bone grafting. The patient was then allowed to walk with full weight bearing in an extension brace for 7 months. In stage II, equalization of the leg length discrepancy (LLD) was achieved by simultaneous distraction of the femur and tibia using unilateral frames. At the 6-month follow-up, he was fully weight bearing without any walking aid, with a heel lift to compensate for the 1.5 cm residual shortening. Three years later he reported that he was satisfied with the result of the treatment and was leading a normal life as a university student. Conclusions: The two-stage treatment succeeded in restoring about 20 cm of femoral shortening in severely osteoporotic bone. It also succeeded in reducing the time in the external fixator.

This paper describes the design of a two-stage CMOS operational amplifier and analyzes the effect of various aspect ratios on the characteristics of this op-amp, which operates from a 1.8 V power supply using TSMC 0.18 μm CMOS technology. Trade-off curves are computed between characteristics such as gain, phase margin (PM), gain-bandwidth product (GBW), ICMRR, CMRR and slew rate. The op-amp is designed to exhibit a unity-gain frequency of 14 MHz and a gain of 59.98 dB with a 61.235° phase margin. The design was carried out in Mentor Graphics tools, and simulation results were verified using ModelSim Eldo and Design Architect IC. The task of CMOS operational amplifier design optimization is investigated in this work, focusing on the optimization of the various aspect ratios and the parameters that result. When this task is analyzed as a search problem, it can be translated into a multi-objective optimization application in which various op-amp specifications have to be taken into account, i.e., gain, GBW, phase margin and others. The results are compared with the standard characteristics of the op-amp with the help of graphs and tables, and simulation results agree with theoretical predictions. Simulations confirm that the settling time can be further improved by increasing the value of GBW; a settling time of 19 ns is achieved. It is demonstrated that as W/L increases, GBW increases and the settling time is reduced.
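The GBW/settling-time relationship noted at the end can be checked to first order: for a single-pole amplifier in unity-gain feedback the closed-loop bandwidth equals the GBW, so the linear settling time scales as 1/GBW. This ignores slewing and ringing in a real two-stage design, so it is only an order-of-magnitude sketch.

```python
# First-order settling-time estimate for a single-pole op-amp in unity gain:
# closed-loop time constant tau = 1/(2*pi*GBW); settling to tolerance eps
# takes ln(1/eps)*tau. Slew-rate limiting and phase-margin ringing ignored.
import math

def settling_time(gbw_hz, eps=0.001):
    tau = 1.0 / (2.0 * math.pi * gbw_hz)  # closed-loop time constant
    return math.log(1.0 / eps) * tau       # time to settle within eps

ts_14m = settling_time(14e6)   # GBW from the abstract: roughly 79 ns to 0.1%
ts_56m = settling_time(56e6)   # 4x the GBW gives 4x faster linear settling
```

The first-order estimate at 14 MHz is several times the 19 ns the paper reports after optimization, consistent with the claim that raising GBW is what buys the faster settling.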

Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike-timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony, where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies. PMID:27242500

The Tissint meteorite is a geochemically depleted, olivine-phyric shergottite. Olivine megacrysts contain 300-600 μm cores with uniform Mg# (80 ± 1) followed by concentric zones of Fe-enrichment toward the rims. We applied a number of tests to establish the relationship of these megacrysts to the host rock. Major and trace element compositions of the Mg-rich cores in olivine are in equilibrium with the bulk rock, within uncertainty, and rare earth element abundances of melt inclusions in Mg-rich olivines reported in the literature are similar to those of the bulk rock. Moreover, the P Kα intensity maps of two large olivine grains show no resorption between the uniform core and the rim. Taken together, these lines of evidence suggest the olivine megacrysts are phenocrysts. Among depleted olivine-phyric shergottites, Tissint is the first that behaves mostly as a closed system with olivine megacrysts as the phenocrysts. The texture and mineral chemistry of Tissint indicate a crystallization sequence of: olivine (Mg# 80 ± 1) → olivine (Mg# 76) + chromite → olivine (Mg# 74) + Ti-chromite → olivine (Mg# 74-63) + pyroxene (Mg# 76-65) + Cr-ulvöspinel → olivine (Mg# 63-35) + pyroxene (Mg# 65-60) + plagioclase, followed by late-stage ilmenite and phosphate. The crystallization of the Tissint meteorite likely occurred in two stages: the uniform olivine cores likely crystallized under equilibrium conditions, and a fractional crystallization sequence formed the rest of the rock. The two-stage crystallization without crystal settling is simulated using MELTS and the Tissint bulk composition, and can broadly reproduce the crystallization sequence and mineral chemistry measured in the Tissint samples. The transition between equilibrium and fractional crystallization is associated with a dramatic increase in cooling rate and might have been driven by an acceleration in the ascent rate or by an encounter with a steep thermal gradient in the Martian crust.

Many studies have investigated how to use focused ultrasound (FUS) to temporarily disrupt the blood-brain barrier (BBB) in order to facilitate the delivery of medication into lesion sites in the brain. In this study, through the setup of a real-time system, FUS irradiation and injections of ultrasound contrast agent (UCA) and Gadodiamide (Gd), an MRI contrast agent, can be conducted simultaneously during MRI scanning. Using this real-time system, we investigated in detail how the general kinetic model (GKM) is used to estimate Gd penetration in the FUS-irradiated area of a rat's brain resulting from UCA concentration changes after a single FUS irradiation. A two-stage GKM was proposed to estimate the Gd penetration in the FUS-irradiated area under experimental conditions with repeated FUS irradiation combined with different UCA concentrations. The results showed that the focal increase in the transfer rate constant Ktrans caused by BBB disruption was dependent on the dose of UCA. Moreover, the amount of in vivo penetration of Evans blue in the FUS-irradiated area under various FUS irradiation conditions was assessed and showed a positive correlation with the transfer rate constants. Compared to the GKM method, the two-stage GKM is more suitable for estimating the transfer rate constants of brains treated with repeated FUS irradiations. This study demonstrates that the entire process of BBB disruption by FUS can be quantitatively monitored by real-time dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
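The general kinetic model referred to above is, in its simplest (Tofts) form, a single linear ODE, dCt/dt = Ktrans·Cp(t) − kep·Ct(t). The sketch below integrates it for two hypothetical Ktrans values to mimic intact versus FUS-disrupted BBB; the plasma input function and all rate constants are invented for illustration.

```python
# Minimal Tofts-form kinetic model: tissue Gd concentration Ct driven by a
# hypothetical biexponential plasma input Cp after a bolus. Illustrative only.
import math

def simulate_ct(ktrans, kep, t_end=300.0, dt=0.1):
    def cp(t):  # hypothetical plasma concentration after bolus injection
        return 5.0 * (math.exp(-0.01 * t) - math.exp(-0.1 * t))
    ct, curve = 0.0, []
    for i in range(round(t_end / dt)):
        t = i * dt
        ct += dt * (ktrans * cp(t) - kep * ct)  # forward-Euler step
        curve.append(ct)
    return curve

low = simulate_ct(ktrans=0.01, kep=0.05)   # intact BBB: little Gd uptake
high = simulate_ct(ktrans=0.05, kep=0.05)  # disrupted BBB: higher Ktrans
```

Fitting this model to measured enhancement curves (rather than simulating it forward, as here) is how Ktrans is estimated from the DCE-MRI data; the two-stage GKM of the paper extends the fit across repeated FUS irradiations.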

In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of that piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components; for example, a fault in one component may lead to an increased vibration level both in the faulty component and in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault or, conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. A more holistic condition monitoring approach that can not only account for such interactions but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery is therefore highly desirable. In this paper, a two-stage Bayesian inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages: the first data fusion occurs at a local (component) level, and the second fusion combines data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes and a load, operating under a range of different fault conditions, are used to illustrate the efficacy of the method at pinpointing the root cause of a problem. The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, therefore improving the reliability of the health assessment of...
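The two-stage fusion idea can be sketched with a toy discrete Bayes update. The priors, likelihoods and the global combination rule below are invented for illustration, and the evidence is assumed conditionally independent; the paper's actual inference machinery is richer.

```python
# Toy two-stage Bayesian fusion: a local Bayes update per component,
# then a global comparison to rank the root cause. All numbers invented.
def bayes(prior, likelihoods):
    """Posterior over hypotheses given per-hypothesis evidence likelihoods."""
    post = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Stage 1 (local): each monitoring system fuses its own sensor evidence.
prior = {"healthy": 0.9, "faulty": 0.1}
motor = bayes(prior, {"healthy": 0.7, "faulty": 0.4})    # mild vibration rise
gearbox = bayes(prior, {"healthy": 0.2, "faulty": 0.9})  # strong fault signature

# Stage 2 (global): combine local posteriors to rank the likely root cause,
# here simply by each component's own probability of being faulty.
root_cause = max({"motor": motor["faulty"], "gearbox": gearbox["faulty"]}.items(),
                 key=lambda kv: kv[1])[0]
```

With these numbers the gearbox is flagged even though both components show elevated vibration, which is the masking/propagation scenario the two-stage fusion is meant to disentangle.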

Harvesting mechanical energy from ocean wave oscillations for conversion to electrical energy has long been pursued as an alternative or self-contained power source. The attraction to harvesting energy from ocean waves stems from the sheer power of the wave motion, which can easily exceed 50 kW per meter of wave front. The principal barrier to harvesting this power is the very low and varying frequency of ocean waves, which generally varies from 0.1 Hz to 0.5 Hz. In this paper the application of a novel class of two-stage electrical energy generators to buoyant structures is presented. The generators use the buoy's interaction with the ocean waves as a low-speed input to a primary system, which, in turn, successively excites an array of vibratory elements (secondary system) into resonance, like a musician strumming a guitar. The key advantage of the present system is that by having two decoupled systems, the low-frequency and highly varying buoy motion is converted into constant and much higher frequency mechanical vibrations. Electrical energy may then be harvested from the vibrating elements of the secondary system with high efficiency using piezoelectric elements. The operating principles of the novel two-stage technique are presented, including analytical formulations describing the transfer of energy between the two systems. Also, prototypical design examples are offered, as well as an in-depth computer simulation of a prototypical heaving-based wave energy harvester which generates electrical energy from the up-and-down motion of a buoy riding on the ocean's surface.
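The frequency up-conversion at the heart of the two-stage scheme can be illustrated with a toy simulation: once the slow primary stage "plucks" a resonant secondary element, the element rings down at its own natural frequency, far above the ~0.1-0.5 Hz wave input. The oscillator parameters below are invented for illustration and have nothing to do with the authors' design; a symplectic Euler integrator is used for simplicity.

```python
import math

def ring_down(f_n, zeta, x0, dt=1e-4, steps=20000):
    """Free vibration of a plucked secondary element:
    x'' + 2*zeta*w*x' + w^2*x = 0, integrated with symplectic Euler."""
    w = 2 * math.pi * f_n
    x, v = x0, 0.0
    xs = []
    for _ in range(steps):
        a = -2 * zeta * w * v - w * w * x
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

wave_freq = 0.3        # Hz, typical ocean-wave input frequency
element_freq = 50.0    # Hz, resonant vibratory element (illustrative)
xs = ring_down(element_freq, zeta=0.02, x0=1.0)

# Estimate the ring-down frequency from zero crossings over the 2 s record.
crossings = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
est_freq = crossings / 2 / (len(xs) * 1e-4)
```

The estimated ring-down frequency comes out near the element's 50 Hz natural frequency, i.e. more than two orders of magnitude above the wave input, which is the property that makes efficient piezoelectric conversion possible.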

Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, in which two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony in whom a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

Background: The aim of unilateral breast reconstruction after mastectomy is to craft a natural-looking breast with symmetry. The latissimus dorsi (LD) flap with implant is an established technique for this purpose. However, it is challenging to obtain adequate volume and satisfactory aesthetic results using a one-stage operation when considering factors such as muscle atrophy, wound dehiscence and excessive scarring. The two-stage reconstruction addresses these difficulties by using a tissue expander to gradually enlarge the skin pocket which eventually holds an appropriately sized implant. Methods: We analyzed nine patients who underwent unilateral two-stage LD reconstruction. In the first stage, an expander was placed along with the LD flap to reconstruct the mastectomy defect, followed by gradual tissue expansion to achieve overexpansion of the skin pocket. The final implant volume was determined by measuring the residual expander volume after aspirating the excess saline. Finally, the expander was replaced with the chosen implant. Results: The average volume of tissue expansion was 460 mL. The resultant expansion allowed an implant ranging in volume from 255 to 420 mL to be placed alongside the LD muscle. Seven patients scored less than six on the relative breast retraction assessment formula for breast symmetry, indicating excellent breast symmetry. The remaining two patients scored between six and eight, indicating good symmetry. Conclusions: This approach allows the size of the eventual implant to be estimated after the skin pocket has healed completely and the LD muscle has undergone natural atrophy. Optimal reconstruction results were achieved using this approach.

This paper describes the results of liquefaction tests of Mongolian coals using an autoclave and a flow micro reactor. Uvdughudag coal, Hootiinhonhor coal, and Shivee-Ovoo coal were used for liquefaction tests with an autoclave. The oil yields of Uvdughudag and Hootiinhonhor coals were 55.56 wt% and 55.29 wt%, respectively, similar to that of Wyoming coal. Similar results were obtained for produced gas and water yields. These coals were found to be suitable for coal liquefaction. A lower oil yield, 42.55 wt%, was obtained for Shivee-Ovoo coal, which was not suitable for liquefaction. Liquefaction tests were also conducted for Uvdughudag coal with a flow micro reactor. The oil yield was 55.7 wt%, again similar to that of Wyoming coal (56.1 wt%). The hydrogen consumption of Uvdughudag coal was also similar to that of Wyoming coal. From these results, Uvdughudag coal is a prospective coal for liquefaction. From the distillation distribution of the oil, the distillate fraction yield below 350 °C for Uvdughudag coal was 50.7 wt%, much higher than that of Wyoming coal (35.6 wt%). Uvdughudag coal thus gives a high light-oil fraction yield. 2 figs., 5 tabs.

This work investigates the correlation between a large number of widely used ground motion intensity measures (IMs) and the corresponding liquefaction potential of a soil deposit during earthquake loading. To this end, 32 sloping liquefiable site models consisting of layered cohesionless soil were subjected to 139 earthquake ground motions. Two sets of ground motions, consisting of 80 ordinary records and 59 pulse-like near-fault records, were used in the dynamic analyses. The liquefaction potential of the site is expressed in terms of the mean pore pressure ratio, the maximum ground settlement, the maximum ground horizontal displacement and the maximum ground horizontal acceleration. For each individual accelerogram, the values of the aforementioned liquefaction potential measures are determined. Then, the correlation between the liquefaction potential measures and the IMs is evaluated. The results reveal that the velocity spectrum intensity (VSI) shows the strongest correlation with the liquefaction potential of the sloping site. VSI also proves to be a sufficient intensity measure with respect to earthquake magnitude and source-to-site distance, and has good predictability, making it a prime candidate for seismic liquefaction hazard evaluation.
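The comparison underlying such a study reduces to computing, per ground motion, one IM value and one response value, then ranking IMs by correlation strength. The sketch below does this on synthetic data: the IM samples and the response model (pore pressure ratio tied to a VSI-like measure) are invented for illustration and are not the study's records or results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 139  # one value per ground motion record, as in the study

# Synthetic intensity measures (illustrative, not the study's data):
vsi = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # velocity spectrum intensity
pga = rng.lognormal(mean=0.0, sigma=0.5, size=n)  # peak ground acceleration

# Suppose the mean pore pressure ratio tracks VSI closely, PGA only loosely.
ru = 0.9 * np.log(vsi) + 0.1 * rng.standard_normal(n)

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length arrays."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

r_vsi = pearson(np.log(vsi), ru)
r_pga = pearson(np.log(pga), ru)
```

By construction r_vsi is close to 1 while r_pga is near 0; in the actual study this ranking over many candidate IMs is what identifies VSI as the strongest predictor.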

Coal is the most abundant domestic energy resource in the United States. The Fossil Energy Organization within the US Department of Energy (DOE) has been supporting a coal liquefaction program to develop improved technologies to convert coal to clean and cost-effective liquid fuels to complement the dwindling supply of domestic petroleum crude. The goal of this program is to produce coal liquids that are competitive with crude at $20 to $25 per barrel. Indirect and direct liquefaction routes are the two technologies being pursued under the DOE coal liquefaction program. This paper will give an overview of the DOE indirect liquefaction program. More detailed discussions will be given to the F-T diesel and DME fuels, which have shown great promise as clean-burning alternative diesel fuels. The authors will also briefly discuss the economics of indirect liquefaction and the hurdles and opportunities for the early commercial deployment of these technologies. Discussions will be preceded by two brief reviews on liquid- versus gas-phase reactors and natural gas- versus coal-based indirect liquefaction.

Process oil samples from HRI Catalytic Two-Stage Liquefaction (CTSL) Bench Unit Run CC-16 (227-76) were analyzed to provide information on process performance. Run CC-16 was operated in December 1992 with Burning Star 2 Mine (Illinois 6 seam) coal to test and validate Akzo EXP-AO-60 Ni/Mo catalyst (1/16 in. extrudate). Results were compared with those of four previous HRI CTSL bench unit runs made with Ni/Mo catalysts. Major conclusions from this work are summarized. (1) Akzo EXP-AO-60 gave process oil characteristics in Run CC-16 similar to those of other Ni/Mo catalysts tested in Runs I-13, I-16, I-17, and I-18 (by our analytical and empirical test methods). No distinct performance advantage for any of the catalysts emerges from the process oil characteristics and plant performance. Thus, for commercial coal liquefaction, a number of equivalent catalysts are available from competitive commercial sources. The similarity of run performance and process oil characteristics indicates consistent performance of HRI's bench unit operations over a period of several years. (2) The dominant effects on process oil properties in Run CC-16 were catalyst age and higher-temperature operation in Periods 10-13 (Condition 2). The properties affected were the aromaticities and phenolic -OH concentrations of most streams and the asphaltene and preasphaltene concentrations of the pressure-filter liquid (PFL) 850°F+ resid. The trends reflect decreasing hydrogenation and defunctionalization of the process streams with increasing catalyst age. Operation at higher temperature conditions seems to have partially offset the effects of catalyst age.

In this thesis, two-stage combustion of biomass was investigated experimentally and numerically in a multifuel reactor. The following emission issues have been the main focus of the work: (1) NOx and N2O, (2) unburnt species (CO and CxHy), and (3) corrosion-related emissions. The study focused on two-stage combustion in order to reduce pollutant emissions (primarily NOx emissions). It is well known that pollutant emissions are very dependent on process conditions such as temperature, reactant concentrations and residence times. On the other hand, emissions also depend on fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters with suitable biomass fuels was performed in order to optimize the various process conditions. Different experimental studies were carried out on biomass fuels in order to study the effect of fuel properties and combustion parameters on pollutant emissions. Process conditions typical for biomass combustion processes were studied, using advanced experimental equipment. The experiments clearly showed the effects of staged air combustion, compared to non-staged combustion, on emission levels. A NOx reduction of up to 85% was reached with staged air combustion using demolition wood as fuel. An optimum primary excess air ratio of 0.8-0.95 was found to minimize NOx emissions in staged air combustion. Air staging had, however, a negative effect on N2O emissions. Even though the trends showed a very small reduction in the NOx level as temperature increased for non-staged combustion, the effect of temperature was not significant for NOx and CxHy in either staged air or non-staged combustion, while it had a great influence on N2O and CO emissions, with decreasing levels at increasing temperature. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction.

The present study investigated a two-stage anaerobic hydrogen and methane process for increasing bioenergy production from organic wastes. A two-stage process with a hydraulic retention time (HRT) of 3 d for the hydrogen reactor and 12 d for the methane reactor obtained 11% more energy compared to a single-stage methanogenic process (HRT 15 d) under an organic loading rate (OLR) of 3 g VS/(L d). The two-stage process was still stable when the OLR was increased to 4.5 g VS/(L d), while the single-stage process failed. The study further revealed that by changing the HRThydrogen:HRTmethane ratio of the two-stage process from 3...

Predicting the existence of restriction enzyme sequences on recombinant DNA fragments after the manipulation reaction via a mathematical approach is considered a convenient route in DNA recombination. In mathematical terms, this characteristic of recombinant DNA strands that involve the recognition sites of restriction enzymes is called persistent and permanent. Normally, differentiating the persistency and permanency of two-stage recombinant DNA strands using wet-lab experiments is expensive and time-consuming, because the experiment must be run in two stages and additional restriction enzymes must be added to the reaction. Therefore, in this research, the difference between persistent and permanent splicing languages of some two-stage splicing systems is investigated using the Yusof-Goode (Y-G) model. Two theorems are provided, which show the persistency and non-permanency of two-stage DNA splicing languages.

This paper proposes a new two-stage hybrid logistic regression-ANN model for the construction of a financial distress warning system for the banking industry in an emerging market during 1998-2006...

The treatment of condensed molasses fermentation solubles (CMS) is a troublesome problem for glutamate manufacturing factories. However, CMS has high carbohydrate and nutrient contents and is an attractive and commercially potential feedstock for bioenergy production. The aim of this paper is to produce hydrogen and methane by a two-stage anaerobic fermentation process. Fermentative hydrogen production from CMS was conducted in a continuously stirred tank bioreactor (working volume 4 L) operated at a hydraulic retention time (HRT) of 8 h, an organic loading rate (OLR) of 120 kg COD/m³·d, a temperature of 35 °C and pH 5.5, with sewage sludge as seed. Anaerobic methane production was conducted in an up-flow bioreactor (working volume 11 L) operated at an HRT of 24-60 h, an OLR of 4.0-10 kg COD/m³·d, a temperature of 35 °C and pH 7.0, using anaerobic granule sludge from a fructose manufacturing factory as the seed and the effluent from the hydrogen production process as the substrate. These two reactors have been operated successfully for more than 400 days. The steady-state hydrogen content, hydrogen production rate and hydrogen production yield in the hydrogen fermentation system were 37%, 169 mmol-H2/L-d and 93 mmol-H2/g carbohydrate removed, respectively. In the methane fermentation system, the peak methane content and methane production rate were 66.5% and 86.8 mmol-CH4/L-d, with a methane production yield of 189.3 mmol-CH4/g COD removed at an OLR of 10 kg/m³·d. The energy production rate was used to elucidate the energy efficiency of this two-stage process. A total energy production rate of 133.3 kJ/L/d was obtained, with 5.5 kJ/L/d from hydrogen fermentation and 127.8 kJ/L/d from methane fermentation. (orig.)

A two-stage dark-fermentation and electrohydrogenesis process was used to convert recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluents consisted primarily of acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs), each pre-acclimated to a single substrate (single-substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with the synthetic effluent. Energy efficiencies based on the electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for the lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluents. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC process.

It is standard practice for clinicians and nurses to assess patients' wounds primarily via visual examination. This subjective method can be inaccurate in wound assessment and also represents a significant clinical workload. Hence, computer-based systems, especially those implemented on mobile devices, can provide automatic, quantitative wound assessment and can thus be valuable for accurately monitoring wound healing status. Of all wound assessment parameters, the measurement of the wound area is the most suitable for automated analysis. Most current wound boundary determination methods only process the image of the wound area along with a small amount of surrounding healthy skin. In this paper, we present a novel approach that uses a Support Vector Machine (SVM) to determine the wound boundary on a foot ulcer image captured with an image capture box, which provides controlled lighting, angle and range conditions. The Simple Linear Iterative Clustering (SLIC) method is applied for effective super-pixel segmentation. A cascaded two-stage classifier is trained as follows: in the first stage, a set of k binary SVM classifiers are trained and applied to different subsets of the entire training image dataset, and a set of incorrectly classified instances is collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from super-pixels that are used as input for each stage of the classifier training. Specifically, we apply the color and Bag-of-Words (BoW) representation of local Dense SIFT features (DSIFT) as the descriptor for ruling out irrelevant regions (first stage), and apply color and wavelet-based features as descriptors for distinguishing healthy tissue from wound regions (second stage). Finally, the detected wound boundary is refined by applying a Conditional Random Field (CRF) image processing technique. We have implemented the wound classification on a Nexus
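The cascade idea, a first-stage classifier on one descriptor, with a second-stage classifier trained only on the examples the first stage handles poorly, can be sketched without any SVM machinery. Below, simple 1-D threshold classifiers stand in for the binary SVMs, and the "color" and "texture" features and all sample values are toy data; this illustrates only the two-stage training and routing pattern, not the paper's actual features or classifiers.

```python
def stump(xs, ys):
    """Fit a 1-D threshold classifier (a stand-in for a binary SVM).
    Returns a scoring function: sign gives the label, magnitude a crude
    confidence (distance to the threshold)."""
    best = None
    for thr in sorted(set(xs)):
        for sign in (1, -1):
            acc = sum((sign if x >= thr else -sign) == y
                      for x, y in zip(xs, ys)) / len(ys)
            if best is None or acc > best[2]:
                best = (thr, sign, acc)
    thr, sign, _ = best
    return lambda x: sign * (x - thr)

# Toy super-pixel data: label +1 = wound, -1 = healthy skin.
color   = [0.1, 0.2, 0.3, 0.8, 0.85, 0.9, 0.45, 0.55]
texture = [0.0, 0.1, 0.2, 0.9, 0.95, 1.0, 0.2, 0.9]
labels  = [-1, -1, -1, 1, 1, 1, -1, 1]

stage1 = stump(color, labels)
# Stage 2 is trained only on samples stage 1 got wrong or barely right.
hard = [i for i in range(len(labels))
        if stage1(color[i]) * labels[i] <= 0.1]
stage2 = stump([texture[i] for i in hard], [labels[i] for i in hard])

def predict(c, t):
    s = stage1(c)
    if abs(s) > 0.1:                 # stage 1 is confident: accept its label
        return 1 if s > 0 else -1
    return 1 if stage2(t) >= 0 else -1   # hard case: defer to stage 2
```

On this toy set the color stump alone cannot confidently separate the two borderline samples, but routing them to the texture-based second stage classifies every sample correctly, mirroring how the paper's second-stage SVM cleans up the first stage's mistakes.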

Hydrogen is considered one of the possible main energy carriers for the future, thanks to its unique environmental properties. Indeed, its energy content (120 MJ/kg) can be exploited virtually without emitting any exhaust into the atmosphere except for water. Renewable production of hydrogen can be obtained through the common biological processes on which anaerobic digestion relies, a well-established technology in use at farm scale for treating different biomasses and residues. Although two-stage hydrogen- and methane-producing fermentation is a simple variant of traditional anaerobic digestion, it is a relatively new approach studied mainly at laboratory scale. It is based on biomass fermentation in two separate, sequential stages, each maintaining conditions optimized to promote specific bacterial consortia: in the first, acidophilic reactor, hydrogen is produced, while the volatile fatty acid-rich effluent is sent to the second reactor, where traditional methane-rich biogas production is accomplished. A two-stage pilot-scale plant was designed, manufactured and installed at the experimental farm of the University of Milano and operated using a biomass mixture of livestock effluents mixed with sugar/starch-rich residues (rotten fruits and potatoes and expired fruit juices), a feedstock mixture based on waste biomasses directly available in the rural area where the plant is installed. The hydrogenic and methanogenic reactors, both of CSTR type, had total volumes of 0.7 m³ and 3.8 m³ respectively, were operated in thermophilic conditions (55 ± 2 °C) without any external pH control, and were fully automated. After a brief description of the requirements of the system, this contribution gives a detailed description of its components and of the engineering solutions to the problems encountered during plant realization and start-up. The paper also discusses the results obtained in a first experimental run, which led to production in the range of previous

Highlights: • The adsorption isotherm of cesium on copper ferrocyanide followed a Freundlich model. • The decontamination factor for cesium was higher in the lab-scale test than in the jar test. • A countercurrent two-stage adsorption-microfiltration process was achieved. • The cesium concentration in the effluent could be calculated. • It is a new cesium removal process with a higher decontamination factor. - Abstract: Copper ferrocyanide (CuFC) was used as an adsorbent to remove cesium. Jar test results showed that the adsorption capacity of CuFC was better than that of potassium zinc hexacyanoferrate. Lab-scale tests were performed with an adsorption-microfiltration process, and the mean decontamination factor (DF) was 463 when the initial cesium concentration was 101.3 µg/L, the dosage of CuFC was 40 mg/L and the adsorption time was 20 min. The cesium concentration in the effluent continuously decreased with operation time, which indicated that the used adsorbent retained adsorption capacity. To use this capacity, experiments on a countercurrent two-stage adsorption (CTA)-microfiltration (MF) process were carried out with CuFC adsorption combined with membrane separation. A calculation method for determining the cesium concentration in the effluent was given, and batch tests in a pressure cup were performed to verify the calculation method. The results showed that the experimental values fitted the calculated values well in the CTA-MF process. The mean DF was 1123 when the dilution factor was 0.4, the initial cesium concentration was 98.75 µg/L and the dosage of CuFC and the adsorption time were the same as those used in the lab-scale test. The DF obtained by the CTA-MF process was more than three times higher than that of single-stage adsorption in the jar test.
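Why staging the same total adsorbent dose raises the decontamination factor (DF = C_in/C_out) can be shown with a Freundlich isotherm, q = K·C^(1/n), and a per-stage mass balance. The sketch below uses invented constants and a simple sequential (co-current) two-stage split; the paper's countercurrent scheme is more involved, so this only illustrates the qualitative benefit of staging.

```python
def batch_equilibrium(c0, dose, K, n):
    """Solve C0 - C = dose * K * C**(1/n) for the equilibrium concentration C
    by bisection (Freundlich isotherm q = K * C^(1/n); left side is the mass
    removed per unit volume, right side the mass adsorbed)."""
    lo, hi = 0.0, c0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid + dose * K * mid ** (1 / n) > c0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

K, n = 5.0, 2.0   # illustrative Freundlich constants (consistent units assumed)
c0 = 100.0        # influent cesium concentration, ug/L (illustrative)
dose = 4.0        # total adsorbent dose

c_single = batch_equilibrium(c0, dose, K, n)
# Same total adsorbent, split equally over two sequential stages.
c_two = batch_equilibrium(batch_equilibrium(c0, dose / 2, K, n),
                          dose / 2, K, n)

df_single = c0 / c_single
df_two = c0 / c_two
```

With these numbers the two-stage split roughly doubles the DF relative to a single contact, because the second stage sees fresh adsorbent at low concentration, where the Freundlich isotherm still offers capacity; the countercurrent arrangement in the paper exploits this even further.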

Stem-like cells in solid tumors are purported to contribute to cancer development and poor treatment outcome. The abilities to self-renew, differentiate, and resist anticancer therapies are hallmarks of these rare cells, and steering them into lineage commitment may be one strategy to curb cancer development or progression. Vitamin D is a prohormone that can alter cell growth and differentiation and may induce the differentiation of cancer stem-like cells. In this study, octamer-binding transcription factor 4 (OCT4)-positive/Nanog homeobox (Nanog)-positive lung adenocarcinoma stem-like cells (LACSCs) were enriched from spheroid-cultured SPC-A1 cells and differentiated by a two-stage induction (TSI) method, which involved knockdown of hypoxia-inducible factor 1-alpha (HIF1α) expression (first stage) followed by sequential induction with 1alpha,25-dihydroxyvitamin D3 (1,25(OH)2D3, VD3) and suberoylanilide hydroxamic acid (SAHA) treatment (second stage). The results showed that the HIF1α-knockdowned cells displayed diminished cell invasion and clonogenic activities. Moreover, the TSI cells highly expressed tumor suppressor protein p63 (P63) and forkhead box J1 (FOXJ1) and lost stem cell characteristics, including absent expression of OCT4 and Nanog. These cells regained sensitivity to cisplatin in vitro while losing tumorigenic capacity and showing decreased tumor cell proliferation in vivo. Our results suggest that induced transdifferentiation of LACSCs by vitamin D and SAHA may become a novel therapeutic avenue to alter tumor cell phenotypes and improve patient outcome. The development and progression of lung cancer may involve a rare population of stem-like cells that have the ability to grow, differentiate, and resist drug treatment. However, current therapeutic strategies have mostly focused on tumor characteristics and neglected the potential source of cells that may contribute to poor clinical outcome. We generated lung adenocarcinoma stem-like cells from spheroid culture and

In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and
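The "here & now" versus "wait & see" structure described above is the classic two-stage stochastic program with recourse: a first-stage GI investment is committed before uncertainty resolves, and each scenario then incurs a recourse cost for any remaining shortfall. The toy model below, with invented scenarios, costs, and a runoff-capture target, illustrates that structure by grid search; it is not the StormWISE formulation itself.

```python
# Scenarios for GI performance (runoff captured per unit of GI) and their
# probabilities; values are illustrative, in the spirit of expert elicitation.
scenarios = [(0.6, 0.3), (1.0, 0.5), (1.4, 0.2)]   # (effectiveness, prob)
target = 100.0   # runoff volume that must be captured
c_gi = 1.0       # unit cost of first-stage ("here & now") GI investment
c_gray = 3.0     # unit cost of second-stage ("wait & see") gray recourse

def expected_cost(x):
    """First-stage cost plus expected recourse cost of meeting the target."""
    cost = c_gi * x
    for eff, p in scenarios:
        shortfall = max(0.0, target - eff * x)
        cost += p * c_gray * shortfall
    return cost

# Crude grid search over the first-stage decision (an LP solver would be
# used in practice).
best_x = min((x * 0.5 for x in range(0, 401)), key=expected_cost)
```

The optimum lands where the marginal GI cost equals the expected marginal recourse saving, here at x = 100: the model deliberately under-builds for the worst scenario and absorbs that risk through recourse, which is exactly the trade-off an adaptive management plan formalizes.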

The traditional two-stage vapor compression refrigeration cycle might be replaced by a two-stage ejector-vapor compression refrigeration cycle if the aim is to decrease irreversibility during expansion. In this approach, the expansion valve is replaced with an ejector. The performance improvement is examined for the case of R404A as the refrigerant. Using the ejector as an expansion device yields a higher COP than the traditional cycle. On the basis...
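The COP gain from an ejector can be illustrated with simple cycle arithmetic: recovering part of the expansion work unloads the compressor, and COP = q_evap / w_comp rises accordingly. All numbers below are illustrative placeholders, not R404A property calculations, and the 12% work-recovery fraction is an assumption for the sketch.

```python
def cop(q_evap, w_comp):
    """Coefficient of performance of a refrigeration cycle."""
    return q_evap / w_comp

# Illustrative cycle quantities, kJ/kg (not from R404A property data):
q_evap = 110.0   # refrigerating effect
w_comp = 40.0    # total two-stage compressor work, expansion-valve cycle

cop_valve = cop(q_evap, w_comp)

# Replacing the expansion valve with an ejector recovers part of the
# expansion work, unloading the compressor (12% assumed here).
recovery = 0.12
cop_ejector = cop(q_evap, w_comp * (1 - recovery))

improvement = (cop_ejector / cop_valve - 1) * 100   # percent
```

Note that the relative COP improvement, 1/(1 − recovery) − 1, depends only on the assumed work-recovery fraction in this simplified view; a real analysis would compute it from the refrigerant's state points.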

Introduction: Although cleft lip and palate (CLP) is one of the most common congenital malformations, occurring in 1 in 700 live births, there is still no generally accepted treatment protocol. Numerous surgical techniques have been described for cleft palate repair; these techniques can be divided into one-stage (one operation) cleft palate repair and two-stage cleft palate closure. The aim of this study is to present our cleft palate team's experience in using the two-stage cleft palate closu...

Anaerobic digestion (AD) of sugar beet pressed pulp (SBPP) is a promising treatment concept. It produces biogas as a renewable energy source, making sugar production more energy efficient, and it turns SBPP from a residue into a valuable resource. In this study, one- and two-stage mono-fermentation at mesophilic conditions in continuous stirred tank reactors were compared. The optimal incubation temperature for the pre-acidification stage was also studied. The fastest pre-acidification, with a hydraulic retention time (HRT) of 4 days, occurred at a temperature of 55 °C. In the methanogenic reactor of the two-stage system, stable fermentation at a loading rate of 7 kg VS/(m³ d) was demonstrated. No artificial pH adjustment was necessary to maintain optimum levels in either the pre-acidification or the methanogenic reactor. The total HRT of the two-stage AD was 36 days, which is considerably lower than that of the one-stage AD (50 days). The frequently observed problem of foaming at high loading rates was less severe in the two-stage reactor. Moreover, the viscosity of digestate in the methanogenic stage of the two-stage fermentation was on average tenfold lower than in the one-stage fermentation. This decreases the energy input for reactor stirring by about 80%. These advantages make the two-stage process economically attractive, despite the higher investment for a two-reactor system.

Experimental liquefaction is reported of subbituminous Taiheiyo coal with tetralin solvent and a red mud-sulfur catalyst, at 440 °C and 85 kg/cm² initial hydrogen pressure. A study was made of the dependence of product composition and liquids yield on residence time. The results obtained were compared with corresponding results for Miike coal and Yallourn brown coal. Studies were also made of the influence of hydrotreating conditions on the properties of the hydrotreated oil, and of the hydrotreating of Taiheiyo coal SRC liquids. Possible uses for the hydrotreated product are diesel fuel, gas oil, hydrotreated oil with cetane number 45-60, and kerosene. 22 figs., 2 tabs.

The treatment strategy for unicystic ameloblastoma (UA) should be decided by its pathological subtype, luminal or mural. The luminal type of UA can be treated by enucleation alone, but UA with mural invasion should be treated aggressively, like conventional ameloblastomas. However, it is difficult to diagnose the subtype of UA from an initial biopsy. There is also the possibility that the lesion is an ordinary cyst or a keratocystic odontogenic tumor, which could lead to overtreatment. Therefore, in this study enucleation of the cyst wall and deflation were performed first, and when the pathological findings confirmed mural invasion into the cystic wall, a second surgery followed. The second surgery comprised enucleation of scar tissue, bone curettage, and deflation, and was able to contribute to a reduced recurrence rate by removing tumor nests in scar tissue or new bone, enhancing new bone formation, and shrinking the mandibular expansion by fenestration. In this study, a large UA with mural invasion involving the condyle was treated by "two-stage enucleation and deflation" in a 20-year-old patient.

In this investigation, we address the task of airborne LiDAR point cloud labelling for urban areas by presenting a contextual classification methodology based on a Conditional Random Field (CRF). A two-stage CRF is set up: in a first step, a point-based CRF is applied. The resulting labellings are then used to generate a segmentation of the classified points using a Conditional Euclidean Clustering algorithm. This algorithm combines neighbouring points with the same object label into one segment. The second step comprises the classification of these segments, again with a CRF. As the number of the segments is much smaller than the number of points, it is computationally feasible to integrate long range interactions into this framework. Additionally, two different types of interactions are introduced: one for the local neighbourhood and another one operating on a coarser scale. This paper presents the entire processing chain. We show preliminary results achieved using the Vaihingen LiDAR dataset from the ISPRS Benchmark on Urban Classification and 3D Reconstruction, which consists of three test areas characterised by different and challenging conditions. The utilised classification features are described, and the advantages and remaining problems of our approach are discussed. We also compare our results to those generated by a point-based classification and show that a slight improvement is obtained with this first implementation.
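The Conditional Euclidean Clustering step described above, which merges neighbouring points that share the same object label into one segment, can be sketched as follows. This is a minimal brute-force region-growing version with illustrative point data and radius; a real implementation would use a k-d tree for the neighbour search.

```python
from collections import deque

def conditional_euclidean_clustering(points, labels, radius):
    """Merge neighbouring points that share the same class label into one
    segment, by breadth-first region growing."""
    n = len(points)
    r2 = radius * radius
    segment = [-1] * n          # segment id per point, -1 = unassigned
    seg_id = 0
    for seed in range(n):
        if segment[seed] != -1:
            continue
        segment[seed] = seg_id
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            # Brute-force neighbour search; a k-d tree would be used in practice.
            for j in range(n):
                if segment[j] == -1 and labels[j] == labels[i]:
                    d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                    if d2 <= r2:
                        segment[j] = seg_id
                        queue.append(j)
        seg_id += 1
    return segment

pts = [(0.0, 0.0), (0.5, 0.0), (5.0, 0.0), (5.5, 0.0), (0.2, 0.2)]
labs = ["roof", "roof", "roof", "ground", "roof"]
print(conditional_euclidean_clustering(pts, labs, radius=1.0))
```

The three nearby "roof" points form one segment; the distant "roof" point and the "ground" point each form their own, since a segment grows only through neighbours with a matching label.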

In order to reduce CO content to below 10 ppm in the CO removal step of reformers for polymer electrolyte fuel cell (PEFC) co-generation systems, CO preferential methanation under various conditions was studied in this paper. The results showed that, with a single kind of catalyst, it was difficult to achieve both the required CO removal depth and a CO2 conversion ratio below 5%. Thus, a two-stage methanation process applying two kinds of catalysts is proposed: a catalyst with relatively low activity and high selectivity for the first stage at higher temperature, and a catalyst with relatively high activity and high selectivity for the second stage at lower temperature. Experimental results showed that CO content was decreased from 1% to below 0.1% at 250-300 °C in the first stage, and to below 10 ppm at 150-185 °C in the second stage, while CO2 conversion was kept below 5%. The influence of inlet CO content and GHSV on CO removal depth is also discussed.

This paper presents a low-power, high-slew-rate, high-gain, ultra-wide-band two-stage CMOS cascode operational amplifier for radio frequency applications. A current-mirror-based cascoding technique and a pole-zero cancellation technique are used to improve the gain and enhance the unity-gain bandwidth, respectively, which is the novelty of the circuit. In the cascode technique, a common-source transistor drives a common-gate transistor; cascoding enhances the output resistance and hence improves the overall gain of the operational amplifier with less complexity and less power dissipation. A current mirror is used to bias the common-gate transistor. The proposed circuit was designed and simulated using Cadence analog and digital system design tools in 45 nm CMOS technology. Simulation shows a DC gain of 63.62 dB, unity-gain bandwidth of 2.70 GHz, slew rate of 1816 V/µs, and phase margin of 59.53°; the supply is 1.4 V (rail-to-rail ±700 mV) and the power consumption is 0.71 mW. These specifications meet the requirements of radio frequency applications.

In order to deliver the maximum available power to the load under varying solar irradiation and ambient temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among MPPT schemes, the chaos method has been a hot topic in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the first carrier process, an improved logistic mapping with better ergodicity is used for the global search. After the current optimal solution is found to within a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and finally locate the maximum power point. Compared with the traditional chaos search method, the proposed method tracks changes quickly and accurately and gives better optimization results, providing a new and efficient way to track the maximum power point of a PV array.
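The two-carrier idea above can be sketched as follows. This is a generic reconstruction, not the authors' exact algorithm: the logistic map x ← 4x(1−x) serves as the first (global) carrier, and a second chaotic pass over a shrunken interval stands in for the power-function carrier; the toy power curve and all parameter values are assumptions.

```python
def two_stage_chaos_search(f, lo, hi, n1=200, n2=200, shrink=0.1):
    """Two-stage chaos optimization sketch: a logistic-map carrier explores
    [lo, hi] globally, then a second carrier refines a reduced interval
    around the best point found so far."""
    x = 0.3141  # chaotic seed; must avoid the map's fixed points (0, 0.75)
    best_v, best_p = None, float("-inf")
    # Stage 1: global search driven by the logistic map x <- 4x(1-x)
    for _ in range(n1):
        x = 4.0 * x * (1.0 - x)
        v = lo + (hi - lo) * x
        p = f(v)
        if p > best_p:
            best_v, best_p = v, p
    # Stage 2: refine in a shrunken interval around the current optimum
    half = shrink * (hi - lo)
    lo2, hi2 = max(lo, best_v - half), min(hi, best_v + half)
    for _ in range(n2):
        x = 4.0 * x * (1.0 - x)
        v = lo2 + (hi2 - lo2) * x
        p = f(v)
        if p > best_p:
            best_v, best_p = v, p
    return best_v, best_p

# Toy P-V curve with a single maximum near v = 17.2 V (illustrative only)
power = lambda v: -0.5 * (v - 17.2) ** 2 + 60.0
v_mpp, p_mpp = two_stage_chaos_search(power, 0.0, 22.0)
print(round(v_mpp, 2), round(p_mpp, 2))
```

The second stage is what gives the speed-up: once the global carrier has bracketed the optimum, the refined carrier wastes no samples on the rest of the voltage range.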

This investigation was carried out to establish a new domestic landfill gas (LFG) generation rate model that takes into account the impact of leachate recirculation. The first-order kinetics and two-stage reaction (FKTSR) model of the LFG generation rate includes mechanisms of the nutrient balance for the biochemical reaction in two main stages. In this study, the FKTSR model was modified by introducing an outflow function and an organic acid conversion coefficient to represent the in-situ condition of nutrient loss through leachate. Laboratory experiments were carried out to simulate the impact of leachate recirculation and verify the modified FKTSR model, which was then calibrated using the experimental data. The results suggested that the new model was in line with the experimental data. The main parameters of the modified FKTSR model, namely the LFG production potential (L0), the reaction rate constant in the first stage (K1), and the reaction rate constant in the second stage (K2), were 64.746 L, 0.202 d⁻¹, and 0.338 d⁻¹, respectively, compared with the old values of 42.069 L, 0.231 d⁻¹, and 0.231 d⁻¹. The new model better explains the mechanisms involved in LFG generation.
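The two-stage first-order kinetics at the heart of such a model can be illustrated with a generic sketch. This is not the authors' exact FKTSR formulation: the stage-switch time `t_switch` is a hypothetical parameter, and only the values of L0, K1 and K2 are taken from the abstract.

```python
import math

def lfg_cumulative(t, L0=64.746, K1=0.202, K2=0.338, t_switch=10.0):
    """Cumulative LFG yield (L) under a generic two-stage first-order decay:
    rate constant K1 before t_switch (days), K2 afterwards. Illustrative
    sketch only, not the exact FKTSR model."""
    if t <= t_switch:
        return L0 * (1.0 - math.exp(-K1 * t))
    g1 = L0 * (1.0 - math.exp(-K1 * t_switch))   # gas produced in stage 1
    remaining = L0 - g1                          # potential left for stage 2
    return g1 + remaining * (1.0 - math.exp(-K2 * (t - t_switch)))

for day in (5, 10, 20, 40):
    print(day, round(lfg_cumulative(day), 2))
```

The yield rises monotonically toward the production potential L0, with the faster constant K2 taking over after the switch, mirroring the acidogenic-then-methanogenic structure of the two reaction stages.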

Unique potential applications of electromagnetic railguns [R.S. Hawke, IEEE Trans. Nucl. NS-28 (2) (1981) 1542] have motivated a decade of continuous development throughout the world. This effort has led to routine acceleration of projectiles from 1 g to about 1 kg, to velocities of nearly 4 km/s. Attempts to reach higher velocities have met with problems in the 6- to 8-km/s range [J.V. Parker, Proc. 4th Symp. on Electromagnetic Launch Tech., Austin, TX, 1988, to be published in IEEE Trans. Mag.]. The principal problem is "restrike", which causes shunting of the propulsive plasma armature by the formation of a second plasma short circuit in the breech region of the railgun. One means of impeding restrike is the use of a two-stage light-gas gun (2SLGG) as a projectile injector. A joint development project was initiated in early 1986 between the Sandia National Laboratories Albuquerque (SNLA) and the Lawrence Livermore National Laboratory (LLNL). The project is based on the use of a 2SLGG to inject projectiles at about 7 km/s. The injection gas is hydrogen, which serves to inhibit formation of the secondary arc and to minimize barrel ablation and armature contamination. Results and status of this work are discussed.

The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a pressing problem, owing to the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents the results of an experimental study of a two-stage light-gas magnetoplasma launcher for macroparticle acceleration, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s; launching occurred in vacuum. A speed-measuring method was developed for projectile velocity control, with an error not exceeding 5%. The flight of the projectile from the barrel and its collision with a target were recorded with a high-speed camera, and results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, helium was used in these experiments for safety reasons. The range of mass and velocity of the accelerated particles can therefore be expected to extend with hydrogen as the accelerating gas.

A two-stage UASB reactor was employed to remove sulfate from acrylic fiber manufacturing wastewater. Mesophilic operation (35±0.5 °C) was performed with the hydraulic retention time (HRT) varied between 28 and 40 hr. Mixed liquor suspended solids (MLSS) in the reactor were maintained at about 8000 mg/L. The results indicated that sulfate removal was enhanced by increasing the COD/SO4²⁻ ratio; at low COD/SO4²⁻, the growth of the sulfate-reducing bacteria (SRB) was carbon-limited. The optimal sulfate removal efficiency was 75% when the HRT was no less than 38 hr. Sulfidogenesis occurred mainly in the sulfate-reducing stage, and methanogenesis in the methane-producing stage. Microbes in the sulfate-reducing stage granulated better than those in the methane-producing stage; the higher extracellular polymeric substances (EPS) content in the sulfate-reducing stage helped to adhere and connect the flocculent sludge particles. SRB accounted for about 31% in both the sulfate-reducing and methane-producing stages at a COD/SO4²⁻ ratio of 0.5, while it dropped dramatically from 34% in the sulfate-reducing stage to 10% in the methane-producing stage at a COD/SO4²⁻ ratio of 4.7. SRB and MPA were predominant in the sulfate-reducing stage and methane-producing stage, respectively.

The human cough is a significant vector in the transmission of respiratory diseases in indoor environments. The cough flow is characterized as a two-stage jet: the starting jet (when the cough starts and flow is released) and the interrupted jet (after the source supply is terminated). During the starting-jet stage, the flow rate is a function of time; three temporal profiles of the exit velocity (pulsation, sinusoidal and real-cough) were investigated in this study, and our results showed that the cough flow's maximum penetration distance was in the range of 50.6–85.5 opening diameters (D) under our experimental conditions. The real-cough and sinusoidal cases exhibited greater penetration ability than the pulsation cases under the same characteristic Reynolds number (Rec) and normalized cough expired volume (Q/AD, with Q the cough expired volume and A the opening area). However, the effects of Rec and Q/AD on the maximum penetration distance proved to be more significant: larger values of Rec and Q/AD produced cough flows with greater penetration distances. A protocol was developed to scale the particle experiments between the prototype in air and the model in water. The water tank experiments revealed that although medium and large particles deposit readily, their maximum spread distance is similar to that of small particles. Moreover, the leading vortex plays an important role in enhancing particle transport.

The problem of deep laser cooling of $^{24}$Mg atoms is studied theoretically. We propose a two-stage sub-Doppler cooling strategy using the electric-dipole transition $3^3P_2 \to 3^3D_3$ ($\lambda = 383.9$ nm). The first stage employs a magneto-optical trap with $\sigma^+$ and $\sigma^-$ light beams, while the second uses a lin$\perp$lin molasses. We focus on achieving a large number of ultracold atoms ($T_{eff}$ < 10 $\mu$K) in a cold atomic cloud. The calculations were done without many widely used approximations, based on a quantum treatment taking full account of the recoil effect. Steady-state average kinetic energies and linear momentum distributions of cold atoms are analysed for various light-field intensities and frequency detunings. The conducted quantum analysis reveals noticeable differences from the results of the semiclassical approach based on the Fokker-Planck equation. Under certain conditions the second cooling stage can provide substantially lower kinetic energies of atom...

In the wake of intensive fossil fuel usage and CO2 accumulation in the environment, research is targeted towards sustainable alternative bioenergy that can satisfy the growing need for fuel while leaving a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved and are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive-pressure airlift-driven bioreactors with a separate, synchronized high-lipid induction phase in nutrient-deplete open raceway ponds. Comparison with either a bioreactor or an open raceway pond cultivation system alone suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are added only to the closed bioreactors, while the open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

The recognition and subsequent repair of DNA damage are essential reactions for the maintenance of genome stability. A key general sensor of DNA lesions is xeroderma pigmentosum group C (XPC) protein, which recognizes a wide variety of helix-distorting DNA adducts arising from ultraviolet (UV) radiation, genotoxic chemicals and reactive metabolic byproducts. By detecting damaged DNA sites, this unique molecular sensor initiates the global genome repair (GGR) pathway, which allows for the removal of all the aforementioned lesions by a limited repertoire of excision factors. A faulty GGR activity causes the accumulation of DNA adducts leading to mutagenesis, carcinogenesis, neurological degeneration and other traits of premature aging. Recent findings indicate that XPC protein achieves its extraordinary substrate versatility by an entirely indirect readout strategy implemented in two clearly discernible stages. First, the XPC subunit uses a dynamic sensor interface to monitor the double helix for the presence of non-hydrogen-bonded bases. This initial screening generates a transient nucleoprotein intermediate that subsequently matures into the ultimate recognition complex by trapping undamaged nucleotides in the abnormally oscillating native strand, in a way that no direct contacts are made between XPC protein and the offending lesion itself. It remains to be elucidated how accessory factors like Rad23B, centrin-2 or the UV-damaged DNA-binding complex contribute to this dynamic two-stage quality control process.

Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The time-series modelled runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean Rt² = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrate.

Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model an urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system; its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm, incorporating the stochastic and fuzzy characteristics of the whole drainage process, is designed to solve this complex nondeterministic problem. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its computational flexibility and efficiency. Sensitivity analyses of four main parameters, namely the quantity of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
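The mean sojourn time of a tandem M/M/1→M/D/1 system can be sketched with standard queueing formulas. This is a deterministic sketch only (the paper's model additionally handles stochastic rainstorms and fuzzy draining), and the arrival and service rates below are illustrative; by Burke's theorem the departure process of an M/M/1 queue is again Poisson, so the two stages can be evaluated separately and summed.

```python
def mm1_sojourn(lam, mu):
    """Mean sojourn time in an M/M/1 queue (requires lam < mu)."""
    assert lam < mu
    return 1.0 / (mu - lam)

def md1_sojourn(lam, mu):
    """Mean sojourn time in an M/D/1 queue with deterministic service 1/mu,
    from the Pollaczek-Khinchine formula."""
    assert lam < mu
    rho = lam / mu
    return 1.0 / mu + rho / (2.0 * mu * (1.0 - rho))

# Illustrative rates (not from the paper): inflow 0.6, both stages serve at 1.0
lam, mu1, mu2 = 0.6, 1.0, 1.0
total = mm1_sojourn(lam, mu1) + md1_sojourn(lam, mu2)
print(round(total, 3))
```

At the same utilization, the deterministic-service second stage holds stormwater for less time than the first (1.75 vs 2.5 here), which is the qualitative reason for modelling the pumped outflow as M/D/1.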

With the continuous development of technology and falling investment costs, the proportion of renewable energy in the power grid is rising because of its clean, environmentally friendly characteristics; this may require larger-capacity energy storage devices, increasing cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (CHPG) microgrid is proposed in this paper to minimize cost by using virtual storage, without extending the existing storage system. P2G technology serves as virtual multi-energy storage in the CHPG microgrid, coordinating the operation of the electric energy network and the natural gas network simultaneously. Demand response is another form of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of virtual storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, while achieving a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.

A novel gas loading system was designed for the specific application of remotely loading high-purity gases into targets for gas-gun-driven plate impact experiments. The high-purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater-than-ambient pressures. The small volume of the gas samples is challenging, as slight changes in ambient temperature result in measurable pressure changes; the ability to load a gas-gun target and continually monitor the sample pressure prior to firing therefore provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu and assembled to form a gas containment cell with a volume of approximately 1.38 cc. Material compatibility was a major consideration in the design, particularly for use with corrosive gases: piping and valves are stainless steel, with wetted seals made from Kalrez® and Teflon®. Preliminary testing ensured the proper flow rate and that proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments are shown.

Image enhancement plays a vital role in various applications. There are many techniques to remove noise from an image and produce a clear visual result, and several filters and image smoothing techniques are available in the literature; all have certain limitations. Recently, neural networks have been found to be very efficient tools for image enhancement. A novel two-stage technique for noise removal and image enhancement is proposed in this paper. In the noise removal stage, an Adaptive Neuro-Fuzzy Inference System (ANFIS) with a Modified Levenberg-Marquardt training algorithm is used to eliminate impulse noise; the modified training algorithm reduces the execution time. In the image enhancement stage, fuzzy decision rules inspired by the Human Visual System (HVS) are used to categorize image pixels into perception-sensitive and nonsensitive classes and to enhance the quality of the image, using a hypertrapezoidal fuzzy membership function. To improve the sensitive regions with higher visual quality, a neural network (NN) is proposed. Experiments conducted on standard images show that the proposed FANFIS performs significantly better than existing methods.

This paper compares the syngas produced from methane with the syngas obtained from the gasification, in a two-stage reactor, of various waste feedstocks. The syngas composition and the gasification conditions were simulated using a simple thermodynamic model. The waste feedstocks considered are: landfill gas, waste oil, municipal solid waste (MSW) typical of a low-income country, the same MSW blended with landfill gas, refuse derived fuel (RDF) made from the same MSW, the same RDF blended with waste oil and a MSW typical of a high-income country. Energy content, the sum of H2 and CO gas percentages, and the ratio of H2 to CO are considered as measures of syngas quality. The simulation shows that landfill gas gives the best results in terms of both H2+CO and H2/CO, and that the MSW of low-income countries can be expected to provide inferior syngas on all three quality measures. Co-gasification of the MSW from low-income countries with landfill gas, and the mixture of waste oil with RDF from low-income MSW are considered as options to improve gas quality.
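Two of the three syngas quality measures used above (H2+CO and H2/CO) follow directly from the gas composition; the third, energy content, additionally requires species heating values. A minimal sketch with an assumed, purely illustrative composition:

```python
def syngas_quality(composition):
    """Compute two of the three quality measures from a dry syngas
    composition given in mole percent (energy content would also need
    per-species heating values)."""
    h2, co = composition["H2"], composition["CO"]
    return {"H2+CO": h2 + co, "H2/CO": h2 / co}

# Illustrative composition only, not simulation output from the paper
q = syngas_quality({"H2": 38.0, "CO": 22.0, "CO2": 12.0, "CH4": 3.0, "N2": 25.0})
print(q)
```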

The overall goal of this work was to develop a saccharification method for the production of third-generation biofuel (i.e., bioethanol) using feedstock of the invasive marine macroalga Gracilaria salicornia. Under optimum conditions (120 °C and 2% sulfuric acid for 30 min), dilute acid hydrolysis of the homogenized invasive plants yielded a low concentration of glucose (4.1 mM, or 4.3 g glucose/kg fresh algal biomass). However, two-stage hydrolysis of the homogenates (dilute acid hydrolysis combined with enzymatic hydrolysis) produced 13.8 g of glucose per kilogram of fresh algal feedstock. Batch fermentation produced 79.1 g EtOH per kilogram of dried invasive algal feedstock using the ethanologenic strain Escherichia coli KO11. Furthermore, ethanol production kinetics indicated that the invasive algal feedstock contained different types of sugar, including C5 sugars. This study represents the first report of third-generation biofuel production from invasive macroalgae, suggesting great potential for the production of renewable energy from marine invasive biomass.

The precise conversion of arbitrary text into its corresponding phoneme sequence (grapheme-to-phoneme, or G2P, conversion) is implemented in speech synthesis and recognition, pronunciation learning software, spoken term detection and spoken document retrieval systems. Because the quality of this module plays an important role in the performance of such systems, and many problems regarding G2P conversion have been reported, we propose a novel two-stage model-based approach, implemented within an existing weighted finite-state transducer-based G2P conversion framework, to improve the performance of the G2P conversion model. The first-stage model is built for automatic conversion of words to phonemes, while the second-stage model uses the input graphemes and the output phonemes obtained from the first stage to determine the best final output phoneme sequence. Additionally, we designed new grapheme generation rules, which provide extra detail for the vowel and consonant graphemes appearing within a word. Compared with previous approaches, the evaluation results indicate that our approach, using rules focusing on the vowel graphemes, slightly improved the accuracy on the out-of-vocabulary dataset and consistently increased the accuracy on the in-vocabulary dataset.

In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random, non-stationary process influenced by a number of factors, which makes it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed, based on different mathematical methods and offering different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, it is forecast in the first stage and then used in the second-stage model for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market, and the obtained results confirm the validity and advantage of the proposed approach.

Heating and acceleration of electrons in solar impulsive hard X-ray (HXR) flares are studied according to the two-stage acceleration model developed by Zhang for solar 3He-rich events. It is shown that electrostatic H-cyclotron waves can be excited at a parallel phase velocity less than about the electron thermal velocity and can thus significantly heat the electrons (up to 40 MK) through Landau resonance. The preheated electrons with velocities above a threshold are further accelerated to high energies in the flare-acceleration process. The flare-produced electron spectrum is obtained and shown to be thermal at low energies and power law at high energies; in the non-thermal energy range, the spectrum can be double power law if the spectral power index is energy dependent. The electron energy spectrum obtained in this study agrees quantitatively with the result derived from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) HXR observations of the 2002 July 23 flare. The total flux and energy flux of electrons accelerated in the solar flare also agree with the measurements.

Large-scale optimization problems arise in diverse fields. Decomposing a large-scale problem into small-scale subproblems according to the variable interactions, and optimizing them cooperatively, are critical steps in an optimization algorithm. To explore the variable interactions and perform the problem decomposition, we develop a two-stage variable interaction reconstruction algorithm. A learning model is proposed to explore part of the variable interactions as prior knowledge, and a marginalized denoising model is proposed to construct the overall variable interactions using that prior knowledge, with which the problem is decomposed into small-scale modules. To optimize the subproblems and relieve premature convergence, we propose a cooperative hierarchical particle swarm optimization framework, with operators for contingency leadership, interactional cognition, and self-directed exploitation. Finally, theoretical analysis shows that the proposed algorithm is guaranteed to converge to the global optimal solutions if the problems are correctly decomposed. Experiments on the CEC2008 and CEC2010 benchmarks demonstrate the effectiveness, convergence, and usefulness of the proposed algorithm.

To examine whether awareness of, and involvement with alcohol marketing at age 13 is predictive of initiation of drinking, frequency of drinking and units of alcohol consumed at age 15. A two-stage cohort study, involving a questionnaire survey, combining interview and self-completion, was administered in respondents' homes. Respondents were drawn from secondary schools in three adjoining local authority areas in the West of Scotland, UK. From a baseline sample of 920 teenagers (aged 12-14, mean age 13), in 2006, a cohort of 552 was followed up 2 years later (aged 14-16, mean age 15). Data were gathered on multiple forms of alcohol marketing and measures of drinking initiation, frequency and consumption. At follow-up, logistic regression demonstrated that, after controlling for confounding variables, involvement with alcohol marketing at baseline was predictive of both uptake of drinking and increased frequency of drinking. Awareness of marketing at baseline was also associated with an increased frequency of drinking at follow-up. Our findings demonstrate an association between involvement with, and awareness of, alcohol marketing and drinking uptake or increased drinking frequency, and we consider whether the current regulatory environment affords youth sufficient protection from alcohol marketing.

Performance evaluations often aim to achieve goals such as obtaining estimates of unit-specific means, ranks, and the distribution of unit-specific parameters. The Bayesian approach provides a powerful way to structure models for achieving these goals. While no single estimate can be optimal for achieving all three inferential goals, the communication and credibility of results will be enhanced by reporting a single estimate that performs well for all three. Triple goal estimates [Shen and Louis, 1998. Triple-goal estimates in two-stage hierarchical models. J. Roy. Statist. Soc. Ser. B 60, 455–471] have this performance and are appealing for performance evaluations. Because triple-goal estimates rely more heavily on the entire distribution than do posterior means, they are more sensitive to misspecification of the population distribution and we present various strategies to robustify triple-goal estimates by using nonparametric distributions. We evaluate performance based on the correctness and efficiency of the robustified estimates under several scenarios and compare empirical Bayes and fully Bayesian approaches to model the population distribution. We find that when data are quite informative, conclusions are robust to model misspecification. However, with less information in the data, conclusions can be quite sensitive to the choice of population distribution. Generally, use of a nonparametric distribution pays very little in efficiency when a parametric population distribution is valid, but successfully protects against model misspecification.

18O/16O and D/H ratios of coexisting feldspar, quartz, and biotite separates of twenty samples collected from the Ertaibei granite pluton, northern Xinjiang, China, are determined. The Ertaibei pluton is shown to have experienced two stages of isotopic exchange. The second stage of 18O/16O and D/H exchange with meteoric water brought about a marked decrease in the δ18O values of feldspar and biotite from the second group of samples; the D/H of biotite exhibits a higher sensitivity to the meteoric water alteration than its 18O/16O. However, the first stage of 18O/16O exchange, with 18O-rich aqueous fluid derived from dehydration within the deep crust, caused the Δ18O(Quartz-Feldspar) reversal. It is inferred that dehydration-melting may have been an important mechanism for anatexis, and that the deep fluid encircled the Ertaibei pluton like an envelope, serving as an effective screen against surface waters.

A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This event-related potential study comprised two experiments. The study found a two-stage categorization process reflected by the P2 and N400 components in brand extension evaluation. In experiment 1, a prime-probe paradigm presented pairs consisting of a brand name and a product name under three conditions, i.e., in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime-probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was significantly more negative than that elicited by major-category products, with no salient difference in P2. We speculate that P2 reflects the early, low-level, similarity-based processing of the first stage, whereas N400 reflects the late, analytic, category-based processing of the second stage.

This study numerically investigated the influence of using the second row of a double-row deswirl vane as the inlet guide vane of the second stage on the performance of the first stage in a two-stage refrigeration centrifugal compressor. The working fluid was R134a, and the turbulence model was the Spalart-Allmaras model. The parameters discussed included the cutting position of the deswirl vane, the staggered angle of the two rows of vanes, and the rotation angle of the second row. The results showed that performance at a staggered angle of 7.5° was better than at 15° or 22.5°. When the staggered angle was 7.5°, the performance when cutting at 1/3 and 1/2 of the original deswirl vane length differed only slightly from that of the original vane but was clearly better than when cutting at 2/3. When the staggered angle was 15°, the cutting position influenced the performance only slightly. At a low flow rate prone to surge, when the second row, at a staggered angle of 7.5° and cut at half the vane length, was rotated 10°, the efficiency was reduced by only about 0.6%, and 10% of the swirl remained as preswirl for the second stage, which is generally better than other designs.

Two-stage pressurized anaerobic digestion is a promising technology. This technology integrates in one process biogas production with upgrading and pressure boosting for grid injection. To investigate whether the efficiency of this novel system could be further increased, a water scrubbing system was integrated into the methanogenesis step. Six leach-bed reactors were used for hydrolysis/acidification, and a 30-L pressurized anaerobic filter operated at 9 bar was adopted for acetogenesis/methanogenesis. The fermentation liquid of the pressurized anaerobic filter was circulated periodically via a flash tank operating at atmospheric pressure. Due to the pressure drop, part of the dissolved carbon dioxide was released from the liquid phase into the flash tank. The depressurized fermentation liquid was then recycled to the pressurized reactor. Three different flow rates (0 L·day−1, 20 L·day−1, and 40 L·day−1) were tested with three repetitions. As the daily recycled flashed-liquid flow was increased from 0 to 40 L, six times the daily feeding, the methane content in the biogas increased from 75 molar percent (mol%) to 87 mol%. The pH value of the substrate in the methane reactor rose simultaneously from 6.5 to 6.7. The experimental data were verified by calculation.
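The CO2 stripping in the flash tank follows from Henry's law: equilibrium dissolved CO2 scales with its partial pressure, so dropping from 9 bar to atmospheric pressure releases the difference. A rough sketch with an assumed Henry constant and headspace composition (illustrative values, not from the study):

```python
# Assumed values for illustration only (not from the study)
K_H = 0.034      # mol/(L*bar), approximate Henry solubility of CO2 in water near 25 °C
x_co2 = 0.25     # assumed CO2 mole fraction in the pressurized headspace

def dissolved_co2(p_total_bar):
    """Equilibrium dissolved CO2 (mol/L), ideal dilute solution."""
    return K_H * x_co2 * p_total_bar

c_9bar = dissolved_co2(9.0)   # in the 9-bar anaerobic filter
c_1bar = dissolved_co2(1.0)   # after flashing to atmospheric pressure
released = c_9bar - c_1bar    # mol CO2 stripped per litre of circulated liquid
```

Each litre circulated through the flash tank therefore removes a fixed amount of CO2, which is why increasing the recycled flow from 0 to 40 L·day−1 raises the methane content of the remaining biogas.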

The conventional process for solids treatment is mesophilic anaerobic sludge digestion (at about 35°C). In this process about 44% of volatile solids (VSS) are removed at about 40-50 days retention time, with a specific biogas production of about 400 L/kg of VSS fed. At the National Institute of Chemistry Ljubljana, thermophilic sludge digestion at 50 to 60°C was studied. As a result of this research, a two-stage anaerobic-aerobic process was developed and patented. A case of 3+12 days retention time (anaerobic + aerobic stage) gave a specific biogas production equal to the mesophilic process (about 400 L/kg of VSS fed) and VSS removal of 62% in 15 days of total retention time. A case of 3+6 days retention time gave the same biogas production of 400 L/kg and VSS removal of 49% in 9 days of total retention time. The main problem with thermophilic sludge digestion is the heat requirement for sustaining the process: the biogas produced is not sufficient to cover it. This problem was resolved with heat regeneration between the outflow and inflow sludge, which made the process comparable to the mesophilic one. With the 80 to 95% excess heat from power production using the produced biogas, all heat requirements were covered. (author)

Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which did not consider slot purchase even though it affects the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes), the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines not only to provide a better service quality (via a higher service frequency) to meet more demand but also to obtain higher revenue and profit margins, by making optimal slot purchase and fleet planning decisions throughout the long-term planning horizon.
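The core of the second-stage decision — choosing how many aircraft to assign against stochastic demand — can be sketched as a one-period expected-profit calculation (all figures hypothetical; the paper's model is a multi-period probabilistic dynamic program with multiple aircraft types):

```python
# All figures are hypothetical: one aircraft type, one route, one period.
scenarios = [(80, 0.3), (120, 0.5), (160, 0.2)]   # (weekly demand, probability)
seats, fare, op_cost = 50, 200.0, 6000.0           # per-aircraft capacity and weekly cost

def expected_profit(n_aircraft):
    """Expected weekly profit of assigning n_aircraft to the route."""
    capacity = n_aircraft * seats
    revenue = sum(p * fare * min(d, capacity) for d, p in scenarios)
    return revenue - n_aircraft * op_cost

best_n = max(range(5), key=expected_profit)        # search 0..4 aircraft
```

With these numbers, two aircraft maximize expected profit: a third aircraft adds capacity that is only used in the low-probability high-demand scenario, so its operating cost outweighs the expected extra revenue.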

During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.

The vehicle routing problem (VRP) is a well-known combinatorial optimization issue in transportation and logistics network systems. There exist several limitations associated with the traditional VRP. Releasing the restricted conditions of traditional VRP has become a research focus in the past few decades. The vehicle routing problem with split deliveries and pickups (VRPSPDP) is particularly proposed to release the constraints on the visiting times per customer and vehicle capacity, that is, to allow the deliveries and pickups for each customer to be simultaneously split more than once. Few studies have focused on the VRPSPDP problem. In this paper we propose a two-stage heuristic method integrating the initial heuristic algorithm and hybrid heuristic algorithm to study the VRPSPDP problem. To validate the proposed algorithm, Solomon benchmark datasets and extended Solomon benchmark datasets were modified to compare with three other popular algorithms. A total of 18 datasets were used to evaluate the effectiveness of the proposed method. The computational results indicated that the proposed algorithm is superior to these three algorithms for VRPSPDP in terms of total travel cost and average loading rate.
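A stage-one initial-solution construction of the kind described can be sketched as a capacity-constrained nearest-neighbor heuristic (illustrative only; the paper's initial and hybrid heuristics additionally split each customer's delivery and pickup quantities across visits):

```python
import math

def nearest_neighbor_routes(depot, customers, demand, capacity):
    """Greedy stage-one construction: build capacity-feasible routes by always
    visiting the nearest customer whose full demand still fits the vehicle."""
    unvisited = set(customers)
    routes = []
    while unvisited:
        load, pos, route = 0, depot, []
        while True:
            feasible = [c for c in unvisited if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demand[nxt]
            unvisited.discard(nxt)
            pos = nxt
        if not route:   # remaining demand exceeds capacity (would need splitting)
            raise ValueError("customer demand exceeds vehicle capacity")
        routes.append(route)
    return routes

# Toy instance: three customers with unit demands, vehicle capacity of two
depot = (0.0, 0.0)
customers = [(1.0, 0.0), (2.0, 0.0), (0.0, 3.0)]
routes = nearest_neighbor_routes(depot, customers, {c: 1 for c in customers}, 2)
```

A stage-two improvement pass (e.g. relocating or swapping customers between routes, or splitting a demand across two routes as VRPSPDP allows) would then refine this starting solution.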

A two-stage process combining hydrogen and methane production from household solid waste was successfully demonstrated. A yield of 43 mL H2/g volatile solid (VS) added was generated in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added. This figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage; the optimum pH range for hydrogen production in this system was 5 to 5.5. The short hydraulic retention time (2 days) applied in the first stage was enough to separate acidogenesis from methanogenesis, and no additional control for preventing methanogenesis in the first stage was necessary. Furthermore, this study provided direct evidence, in a dynamic fermentation process, that increased hydrogen production was reflected by an increased acetate-to-butyrate ratio in the liquid phase.

Genome packaging is an essential step in virus replication and a potential drug target. Single-stranded RNA viruses have been thought to encapsidate their genomes by gradual co-assembly with capsid subunits. In contrast, using a single molecule fluorescence assay to monitor RNA conformation and virus assembly in real time, with two viruses from differing structural families, we have discovered that packaging is a two-stage process. Initially, the genomic RNAs undergo rapid and dramatic (approximately 20-30%) collapse of their solution conformations upon addition of cognate coat proteins. The collapse occurs with a substoichiometric ratio of coat protein subunits and is followed by a gradual increase in particle size, consistent with the recruitment of additional subunits to complete a growing capsid. Equivalently sized nonviral RNAs, including high copy potential in vivo competitor mRNAs, do not collapse. They do support particle assembly, however, but yield many aberrant structures in contrast to viral RNAs that make only capsids of the correct size. The collapse is specific to viral RNA fragments, implying that it depends on a series of specific RNA-protein interactions. For bacteriophage MS2, we have shown that collapse is driven by subsequent protein-protein interactions, consistent with the RNA-protein contacts occurring in defined spatial locations. Conformational collapse appears to be a distinct feature of viral RNA that has evolved to facilitate assembly. Aspects of this process mimic those seen in ribosome assembly.

A low-grade graphite run-of-mine (r.o.m.) ore from eastern India was studied for its amenability to beneficiation by flotation. The petrography studies indicate that the ore primarily consists of quartz and graphite with a minor quantity of mica. It analyzed 89.89% ash and 8.59% fixed carbon. The ore was crushed in stages, followed by primary coarse wet grinding to 212 μm (d80). Rougher flotation was carried out in a Denver flotation cell to eliminate as much gangue as possible in the form of primary tailings, with minimal loss of carbon. Diesel and pine oil were used as collector and frother, respectively. Regrinding of the rougher concentrate to 150 μm (d80) was carried out to further liberate the graphite values, followed by multi-stage cleaning. This two-stage grinding approach, involving primary coarse grinding and regrinding of the rougher float followed by its multi-stage cleaning, was found to yield the grade of concentrate required for applications such as refractories, batteries, and high-temperature lubricants. This approach is expected to retain the flake size of coarse, free, and liberated graphite, where available, during the primary coarse grinding and rougher flotation stage, with minimal grinding energy costs, as against the usual practice of single-stage grinding for many ores. A final concentrate of 8.97% weight recovery with 5.80% ash and 92.13% fixed carbon could be achieved.

A two-stage chromatography that yields highly purified ceruloplasmin (CP) from human plasma and from rat and rabbit serum is described. The isolation procedure is based on the interaction of CP with neomycin, and it provides a high yield of CP. Constants of inhibition by gentamycin, kanamycin, and neomycin of oxidase activity of CP in its reaction with p-phenylenediamine were assayed. The lowest K(i) for neomycin (11 µM) corresponded to the highest specific adsorption of CP on neomycin-agarose (10 mg CP/ml of resin). Isolation of CP from 1.4 liters of human plasma using ion-exchange chromatography on UNO-Sphere Q and affinity chromatography on neomycin-agarose yields 348 mg of CP with 412-fold purification degree. Human CP preparation obtained with A(610)/A(280) ~ 0.052 contained neither immunoreactive prothrombin nor active thrombin. Upon storage at 37°C under sterile conditions, the preparation remained stable for two months. Efficient preparation of highly purified CP from rat and rabbit sera treated according to a similar protocol suggests the suitability of our method for isolation of CP from plasma and serum of other animals. The yield of CP in three separate purifications was no less than 78%.

The study attempts to identify factors for growing SMBs. An evolutionary phase approach has been used. The study also aims to find out whether there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Among the conclusions that may be drawn from the study are that it is not easy to find a common definition of success, that it is important to stratify SMBs when studying them, that an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics, and that each industry has its own set of success variables. The study has identified three success variables for older firms that reflect contemporary strategic thinking: crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

Concepts of multifunctional solar dehumidification systems for heat supply, cooling, and air conditioning, based on the open absorption cycle with direct absorbent regeneration, are developed. The solar systems are based on preliminary dehumidification of the air stream and subsequent evaporative cooling, using evaporative coolers of both types (direct and indirect). The principle of two-stage absorbent regeneration is used in the solar systems and underlies both the liquid and the gas-liquid solar collectors. The main design solutions are developed for a new generation of gas-liquid solar collectors. An analysis of the heat losses in the gas-liquid solar collectors due to convection and radiation is made. Optimal flow rates of gas and liquid, as well as the basic dimensions and configuration of the working channel of the solar collector, are identified. The heat and mass transfer devices belonging to the evaporative cooling system are based on the interaction between a liquid film and the gas stream flowing over it. A multichannel structure of polymeric materials is used to create the packing. Evaporative coolers of both types (direct and indirect) for water and air are used in the cooling part of the solar systems. A preliminary analysis of the possibilities of multifunctional solar absorption systems for media cooling and air conditioning is made on the basis of the authors' experimental data. The designed solar systems feature low power consumption and environmental friendliness.

Antioxidants play an important role in inhibiting and scavenging free radicals, thus protecting humans against infections and degenerative diseases. Current research is now directed towards natural antioxidants originating from plants, owing to their safe therapeutic use. Moringa oleifera is used in Indian traditional medicine for a wide range of ailments. To understand the mechanism of its pharmacological actions, the antioxidant properties of Moringa oleifera leaf extracts were tested at two stages of maturity using standard in vitro models. The successive aqueous extract of Moringa oleifera exhibited a strong scavenging effect on the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical, superoxide, and nitric oxide radicals, and inhibition of lipid peroxidation. The free radical scavenging effect of Moringa oleifera leaf extract was comparable with that of the reference antioxidants. The data obtained in the present study suggest that the extracts of both mature and tender Moringa oleifera leaves have potent antioxidant activity against free radicals, prevent oxidative damage to major biomolecules, and afford significant protection against oxidative damage.

For spintronics it is a prerequisite to create devices capable of generating and detecting spin-polarized currents. We present an all-semiconductor nanostructure based on an InAs heterostructure, which can be used to generate spin-polarized currents all-electrically utilizing the intrinsic spin Hall effect in a Y-shaped junction. By cascading two spin filters the first acts as a generator of spin-polarized currents while the second acts as an all-electrical detector. Measurements applying an AC voltage to the two-stage spin-filter cascade have proven the feasibility of these devices as efficient generators and detectors for spin-polarized currents. For the investigation of the influence of magnetic and electric fields in different directions it is essential to use DC voltages to direct the electron flow in constant direction instead of alternating directions in the AC case and thereby keeping the geometric relation between the current flow and the applied field constant. To enable lock-in technique we apply a DC voltage with AC modulation to the spin-filter cascade.

This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through the data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if inputs are of low quality, and have large variations in viewpoint and face expression.

Uniform lithospheric extension predicts basic properties of non-volcanic rifted margins but fails to explain other important characteristics. Significant discrepancies are observed at 'type I' margins (such as the Iberia-Newfoundland conjugates), where large tracts of continental mantle lithosphere are exposed at the sea floor, and 'type II' margins (such as some ultrawide central South Atlantic margins), where thin continental crust spans wide regions below which continental lower crust and mantle lithosphere have apparently been removed. Neither corresponds to uniform extension. Instead, either crust or mantle lithosphere has been preferentially removed. Using dynamical models, we demonstrate that these margins are opposite end members: in type I, depth-dependent extension results in crustal-necking breakup before mantle-lithosphere breakup and in type II, the converse is true. These two-layer, two-stage breakup behaviours explain the discrepancies and have implications for the styles of the associated sedimentary basins. Laterally flowing lower-mantle lithosphere may underplate both type I and type II margins, thereby contributing to their anomalous characteristics.

Due to the relatively high cost of measuring sample plots in forest inventories, considerable attention is given to sampling and plot designs during the forest inventory planning phase. A two-stage design can be efficient from a field work perspective as spatially proximate plots are grouped into work zones. A comparison between subsampling with units of unequal size (SUUS) and a simple random sample (SRS) design in a panelized framework assessed the statistical and economic implications of using the SUUS design for a case study in the Northeastern USA. The sampling errors for estimates of forest land area and biomass were approximately 1.5-2.2 times larger with SUUS prior to completion of the inventory cycle. Considerable sampling error reductions were realized by using the zones within a post-stratified sampling paradigm; however, post-stratification of plots in the SRS design always provided smaller sampling errors in comparison. Cost differences between the two designs indicated the SUUS design could reduce the field work expense by 2-7 %. The results also suggest the SUUS design may provide substantial economic advantage for tropical forest inventories, where remote areas, poor access, and lower wages are typically encountered.
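The design effect behind these sampling-error ratios can be reproduced in a small Monte Carlo sketch: when plots within a zone are positively correlated, sampling whole zones yields a larger standard error than a simple random sample of the same number of plots. All numbers below are synthetic, and this simplified setup (equal-sized zones, no post-stratification) only illustrates the direction of the SUUS-versus-SRS comparison:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic forest: 100 zones x 20 plots; shared zone effects create
# positive intra-zone correlation in plot-level biomass
zone_effect = rng.normal(0.0, 4.0, size=(100, 1))
plots = zone_effect + rng.normal(0.0, 2.0, size=(100, 20))
population = plots.ravel()

def srs_mean(n_plots):
    """Mean of a simple random sample of plots."""
    return rng.choice(population, n_plots, replace=False).mean()

def zone_mean(n_zones):
    """Mean of all plots in a simple random sample of whole zones."""
    idx = rng.choice(100, n_zones, replace=False)
    return plots[idx].mean()

reps = 2000
se_srs = np.std([srs_mean(200) for _ in range(reps)])    # 200 plots, SRS
se_zone = np.std([zone_mean(10) for _ in range(reps)])   # 10 zones = 200 plots
```

With strong zone effects the zone-sample standard error is several times the SRS standard error; the field-cost saving of zoned work is traded against exactly this statistical penalty.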

Neural networks offer the potential for a novel solution to the problem of data compression through their ability to generate an internal data representation. This network, an application of the back-propagation network, accepts a large amount of image data, compresses it for storage or transmission, and subsequently restores it when desired. A new approach for reducing training time by reconstructing representative vectors is also proposed. Performance of the network has been evaluated using some standard real-world images. Neural networks can be trained to represent certain sets of data. After decomposing an image using the Discrete Cosine Transform (DCT), a two-stage neural network may be able to represent the DCT coefficients in less space than the coefficients themselves. After splitting the image and decomposing it using several methods, neural networks were trained to represent the image blocks. By saving the weights and biases of each neuron and using the Inverse DCT (IDCT) mechanism, an image segment can be approximately recreated. Compression can thus be achieved using neural networks. Current results have been promising except for the amount of time needed to train a neural network; one method of speeding up code execution is discussed. Plenty of future research work remains in this area. It is shown that the developed architecture and training algorithm provide a high compression ratio and low distortion while maintaining the ability to generalize, and are very robust as well.
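The DCT step can be sketched directly: an orthonormal 8×8 DCT-II matrix transforms an image block, low-frequency coefficients are kept, and the inverse transform reconstructs an approximation (the neural-network coding of the coefficients themselves is omitted; the block below is a toy example):

```python
import numpy as np

N = 8
n = np.arange(N)
# Orthonormal 8x8 DCT-II basis matrix (rows = frequencies)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(block):
    return C @ block @ C.T          # forward 2-D DCT

def idct2(coef):
    return C.T @ coef @ C           # inverse 2-D DCT

block = np.outer(np.arange(N), np.ones(N)) * 4.0   # smooth toy image block
coef = dct2(block)
coef_kept = coef.copy()
coef_kept[4:, :] = 0.0                              # drop high vertical frequencies
coef_kept[:, 4:] = 0.0                              # drop high horizontal frequencies
approx = idct2(coef_kept)                           # 4:1 coefficient reduction
```

Because the transform is orthonormal, the reconstruction error equals the energy of the discarded coefficients, which is small for smooth blocks; a network then needs to represent only the retained low-frequency coefficients.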

Background: Many classification methods have been proposed based on magnetic resonance images. Most methods rely on measures such as volume, cerebral cortical thickness, and grey matter density. These measures are susceptible to the performance of registration and are limited in their representation of anatomical structure. This paper proposes a two-stage local feature fusion method, in which deformable registration is not required and anatomical information is represented at a moderate scale. Methods: Keypoints are first extracted from scale-space to represent anatomical structure. Then, two kinds of local features are calculated around the keypoints, one for correspondence and the other for representation. Scores are assigned to keypoints to quantify their effect in classification. The sum of scores for all effective keypoints is used to determine which group the test subject belongs to. Results: We apply this method to magnetic resonance images of Alzheimer's disease and Parkinson's disease. The advantage of local features in correspondence and representation contributes to the final classification. With the help of the local feature (Scale Invariant Feature Transform, SIFT) in correspondence, the performance becomes better. The local feature (Histogram of Oriented Gradients, HOG) extracted from a 16×16 cell block obtains better results compared with 4×4 and 8×8 cell blocks. Discussion: This paper presents a method which combines the effect of the SIFT descriptor in correspondence and the representation ability of the HOG descriptor for anatomical structure. The method has potential for distinguishing patients with brain disease from controls. PMID:28207873

A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.

Widowhood is a common life event for married older women. Prior research has found disruptions in eating behaviors to be common among widows. Little is known about the process underlying these disruptions. The aim of this study was to generate a theoretical understanding of the changing food behaviors of older women during the transition of widowhood. Qualitative methods based on constructivist grounded theory guided by a critical realist worldview were used. Individual active interviews were conducted with 15 community-living women, aged 71-86 years, living alone, and widowed six months to 15 years at the time of the interview. Participants described a variety of educational backgrounds and levels of health, were mainly white and of Canadian or European descent, and reported sufficient income to meet their needs. The loss of regular shared meals initiated a two-stage process whereby women first fall into new patterns and then re-establish the personal food system, thus enabling women to redirect their food system from one that satisfied the couple to one that satisfied their personal food needs. Influences on the trajectory of the change process included the couple's food system, experience with nutritional care, food-related values, and food-related resources. Implications for research and practice are discussed.

The artificial tracheal prosthesis remains a challenge to the surgical field worldwide. Previously, all prostheses were "inner stents" that could not be integrated with the native trachea. Since there is an interface between the smooth surface of the prosthesis and living tissues, and the inner side of the prosthesis is not covered with a living membrane, infections always occur around the prosthesis. We therefore developed a new technique that combines a memory-alloy mesh with traditional operative procedures. Memory-alloy mesh is extensible and flexible, can maintain the shape of a tube, has very good tissue compatibility, and shows no antigenicity. Thus, it is currently the most suitable material for the frame of an artificial trachea constructed by a two-stage approach: first, a pre-shaped memory-alloy mesh is embedded under the skin or endothelium (pleura or peritoneum); then the resulting "sandwich" pedicled skin and muscle flap is formed into a pedicled artificial trachea and anastomosed with the native trachea. After a two-year experimental study, a patient received the artificial trachea replacement on April 18, 2002, with a good result.

One of the critical issues for facial expression recognition is eliminating the negative effect caused by variant poses and illuminations. In this paper a two-stage illumination estimation framework is proposed based on three-dimensional representative faces and clustering, which can estimate illumination directions under a series of poses. First, 256 training 3D face models are adaptively categorized into a certain number of facial structure types by k-means clustering, grouping people with similar facial appearance into clusters. Then the representative face of each cluster is generated to represent the facial appearance type of that cluster. Our training set is obtained by rotating all representative faces to a certain pose, illuminating them with a series of different illumination conditions, and then projecting them into two-dimensional images. Finally the saltire-over-cross feature is selected to train a group of SVM classifiers, and satisfactory performance is achieved when estimating a number of test sets, including images generated from 64 3D face models kept for testing, the CAS-PEAL face database, the CMU PIE database, and a small test set created by ourselves. Compared with other related works, our method is subject-independent and has lower computational complexity, O(C×N), without requiring 3D facial reconstruction.
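The clustering step — grouping face feature vectors and taking each cluster's centroid as its "representative face" — can be sketched with plain Lloyd's-algorithm k-means (toy data below; the paper clusters 256 3D face models by facial structure):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd's-algorithm k-means with a simple spread-out deterministic init."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # assign each point to nearest center
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Toy "facial structure" feature vectors forming two well-separated groups
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 3)), rng.normal(5.0, 0.1, (10, 3))])
labels, centers = kmeans(X, 2)    # each centroid plays the role of a representative face
```

Rendering each centroid under the chosen poses and illumination conditions then produces the training images for the SVM classifiers.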

Conversion of lignocellulosic material to ethanol by the yeast Saccharomyces cerevisiae is a major challenge in second-generation biofuel production. This report describes a novel strategy, termed "two-stage transcriptional reprogramming" (TSTR), in which key gene expression in both the glucose and xylose fermentation phases is optimized in engineered S. cerevisiae. Through combined genome-wide screening of stage-specific promoters and balancing of metabolic flux, ethanol yields and productivity from mixed sugars were significantly improved. In a medium containing 50 g/L glucose and 50 g/L xylose, the top-performing strain WXY12 consumed the glucose within 12 h and, within 84 h, consistently achieved an ethanol yield of 0.48 g/g total sugar, 94% of the theoretical yield. WXY12 uses a KGD1 inducible promoter to drive xylose metabolism, resulting in a much higher ethanol yield than a reference strain using the strong constitutive PGK1 promoter. These promising results validate the TSTR strategy of synthetically regulating the xylose assimilation pathway toward efficient xylose fermentation.
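As a quick sanity check on the reported numbers: the stoichiometric ceiling for sugar-to-ethanol fermentation is roughly 0.511 g ethanol per g sugar (a standard textbook value, not stated in the abstract), which is consistent with 0.48 g/g being about 94% of theoretical.

```python
# Stoichiometric ceiling for sugar-to-ethanol fermentation (standard value)
theoretical_max = 0.511   # g ethanol per g sugar
observed_yield = 0.48     # g/g total sugar, strain WXY12 (from the abstract)

fraction_of_theoretical = observed_yield / theoretical_max  # about 0.94
```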

Viroporins are small, α-helical, hydrophobic virus-encoded proteins that form homo-oligomeric hydrophilic pores in the host membrane. Viroporins participate in multiple steps of the viral life cycle, from entry to budding. Like any other membrane protein, viroporins must bury their hydrophobic regions in the lipid bilayer. Once within the membrane, the hydrophobic helices of viroporins interact with one another to form the higher-order structures required to perform their pore-forming activities. This two-step process resembles the two-stage model proposed for membrane protein folding by Engelman and Popot. In this review we use the membrane protein folding model as a leading thread to analyze the mechanisms and forces behind the membrane insertion and folding of viroporins. We start by describing the transmembrane segment architecture of viroporins, including the number and sequence characteristics of their membrane-spanning domains. Next, we connect the differences found among viroporin families to their viral genome organization, and we finish by focusing on the pathways viroporins use to reach the membrane and on the transmembrane helix-helix interactions required for proper folding and assembly.

In this paper, we introduce a cooperation model (CM) for a two-stage supply chain consisting of a manufacturer and a retailer. In this model, the manufacturer's objective is to maximise profit, while the retailer's objective is to minimise conditional value-at-risk (CVaR), controlling the risk originating from fluctuations in market demand. In reality, the manufacturer and the retailer each choose their own decisions on wholesale price and order quantity to optimise their own objectives, so the expected decision of the manufacturer may conflict with that of the retailer. To achieve cooperation, both the manufacturer and the retailer must therefore make some concessions. The proposed model coordinates the decisions of the manufacturer and the retailer and balances the concessions made by each in their cooperation. We introduce an s*-optimal equilibrium solution for this model, which determines the minimum concession that the manufacturer and the retailer need to make for their cooperation, and prove that the s*-optimal equilibrium solution can be obtained by solving a goal programming problem. The case of different concessions made by the manufacturer and the retailer is also discussed. Numerical results show that the CM is efficient in handling cooperation between the manufacturer and the retailer.
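CVaR at level α is the expected loss in the worst (1 − α) tail of the loss distribution. A small empirical sketch follows; the loss data and the level α are invented for illustration and are not the paper's retailer model.

```python
import numpy as np

def empirical_cvar(losses, alpha=0.9):
    """Mean of the losses at or above the alpha-quantile (the worst tail)."""
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)   # value-at-risk threshold
    return losses[losses >= var].mean()

# losses 1..100: the worst 10% tail is roughly the values 91..100
risk = empirical_cvar(range(1, 101), alpha=0.9)
```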

Nonpoint source (NPS) pollution caused by agricultural activities is a main reason that water quality in a watershed worsens and may even deteriorate severely. Moreover, pollution control is accompanied by falling revenue for the agricultural system. How to design and generate a cost-effective and environmentally friendly agricultural production pattern is therefore a critical issue for local managers. In this study, a risk-based interval two-stage programming (RBITSP) model was developed. Compared to a general ITSP model, the significant contribution of the RBITSP model is that it emphasizes the importance of financial risk under various probabilistic levels rather than concentrating only on expected economic benefit, where risk is expressed as the probability of not meeting a target profit under each individual scenario realization. This effectively avoids the inaccuracy in solutions caused by a purely expectation-based objective function and generates a variety of solutions by adjusting weight coefficients, reflecting the trade-off between system economy and reliability. A case study of agricultural production management in the Tai Lake watershed was used to demonstrate the superiority of the proposed model. The results obtained could serve as a basis for designing land-structure adjustment patterns and farmland retirement schemes and for balancing system benefit, system-failure risk, and water-body protection.
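The notion of risk used above, the probability of not meeting a target profit across scenario realizations, can be sketched in a few lines. The scenario probabilities, profits, and target are invented for illustration.

```python
# (probability, profit) pairs for three hypothetical scenario realizations
scenarios = [(0.3, 120.0), (0.5, 100.0), (0.2, 60.0)]
target_profit = 90.0

# expected economic benefit across scenarios
expected_profit = sum(p * v for p, v in scenarios)
# financial risk: total probability of falling short of the target
financial_risk = sum(p for p, v in scenarios if v < target_profit)
```

A risk-based objective would then trade off `expected_profit` against `financial_risk` via weight coefficients, as the model does across probabilistic levels.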

In order to increase the efficiency of waste utilization in thermal conversion processes, pre-treatment is advantageous. With the Herhof Stabilat® process, residual domestic waste is upgraded to waste-derived fuel by means of biological drying and mechanical separation of inerts and metals. The dried and homogenized waste-derived Stabilat® fuel has a relatively high calorific value and a high volatile matter content, which makes it suitable for gasification. As a result of the extensive mechanical treatment, the Stabilat® produced has a fluffy appearance and a low density. A two-stage gasifier, based on a parallel-arranged bubbling fluidized bed and a fixed bed reactor, has been developed to convert Stabilat® into hydrogen-rich product gas. This paper focuses on the design and construction of the laboratory-scale gasifier and on experience with its operation. Processing low-density, fluffy waste-derived fuel in small-scale equipment demands special technical solutions for the core components as well as for the peripheral equipment; these are discussed here. Operating results of Stabilat® gasification are also presented.

This retrospective study included eight consecutive cases with C2 vertebral body neoplastic lesions. The anterior retropharyngeal approach was used to remove the lesions and decompress the spinal cord. Spinal stabilization with occipitocervical plating in a second-stage operation makes the treatment more tolerable for patients. The object of this study was to determine the effectiveness of a two-stage operative strategy for these lesions. Eight patients were operated on via the anterior retropharyngeal approach and then stabilized posteriorly with occipitocervical plates in a second sitting. All neck pain and dysphagia problems resolved. Partial neurologic improvement was achieved in three of four patients. No postoperative infection was seen. The retropharyngeal approach to the upper cervical spine and anterior foramen magnum lesions is an effective alternative to transoral surgery because of its low complication rate. Neoplastic lesions of the upper cervical spine can be operated on safely and effectively with this technique. The general medical status of patients with malignancies does not permit excessively long, time-consuming operations; stabilizing the spine in a separate operation increases patient tolerability without added morbidity.

Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple unit (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach determines an optimal number of shunting drivers. The second stage is formulated as a bi-objective optimization model that jointly considers the objectives of minimizing the total walking distance and maximizing workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, regression analysis yields a driver-size predictor, and sensitivity analysis gives some interesting insights that are useful for decision makers. PMID:28704489
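The Pareto-filtering idea behind the second stage can be sketched with a generic non-dominated filter. The objective values below are invented, and the paper's normalized normal constraint machinery is omitted.

```python
def pareto_front(points):
    """Keep the non-dominated points; both objectives are minimized
    (e.g. total walking distance and workload imbalance)."""
    def dominates(q, p):
        # q dominates p if q is no worse in both objectives and differs
        return q[0] <= p[0] and q[1] <= p[1] and q != p
    return [p for p in points if not any(dominates(q, p) for q in points)]

# candidate assignments as (walking_distance, workload_imbalance) pairs
candidates = [(10, 5), (12, 3), (15, 2), (13, 4), (14, 6)]
front = pareto_front(candidates)
```

The surviving points are the trade-off solutions a decision maker would choose among.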

Decision-making in water resources is inherently uncertain, producing numerous risks ranging from operational (present) to planning (season-ahead) to design/adaptation (decadal) time-scales. These risks include human activity and climate variability and change. As the risks in designing and operating water systems and allocating available supplies vary systematically in time, prospects for predicting and managing such risks become increasingly attractive. Considerable effort has been devoted to improving seasonal forecast skill and advocating its integration to reduce risk, yet only minimal adoption is evident. The impediments are well defined, but tailoring forecast products and allowing for flexible adoption help overcome some obstacles. The semi-arid Elqui River basin in Chile is contending with increasing water stress and demand coupled with insufficient investment in infrastructure, taxing its ability to meet agricultural, hydropower, and environmental requirements. The basin is fed by a retreating glacier, with allocation principles founded on a system of water rights and markets. A two-stage seasonal streamflow forecast at leads of one and two seasons prescribes the probability of reductions in the value of each water right, allowing water managers to inform their constituents in advance. A tool linking the streamflow forecast to a simple reservoir decision model also allows water managers to select a level of confidence in the forecast information.

Sisal leaf decortication residue (SLDR) is among the most abundant agro-industrial residues in Tanzania and is a good feedstock for biogas production. Pre-treatment of the residue prior to anaerobic digestion (AD) was investigated using a two-stage pre-treatment approach with two fungal strains, CCHT-1 and Trichoderma reesei, applied in succession in anaerobic batch bioreactors. AD of residue pre-treated with CCHT-1 at 10% (wet weight inoculum/SLDR) inoculum concentration, incubated for four days, followed by eight days of incubation with 25% (wet weight inoculum/SLDR) T. reesei, gave a methane yield of 0.292 ± 0.04 m³ CH4/kg volatile solids (VS) added. When the succession of the fungal inocula was reversed using the same parameters, the methane yield after AD decreased by about 55%. Overall, increments in methane yield of 30–101% over un-treated SLDR were obtained. The results confirm the potential of CCHT-1 followed by Trichoderma reesei pre-treatment prior to AD to significantly improve biogas production from SLDR. PMID:20087466

The present work analyzes a two-stage cycle based on the ammonia-water absorption system with intermediate compression. The two generators of the system are heated by low-temperature geothermal energy. The study shows that this system permits operation at generator temperatures below the limits of previously proposed systems. For Tg = 335 K, Tc = Ta = 308 K and Te = 263 K, the system efficiency based on electric consumption is 8.2. A comparative study of the hybrid system and vapor compression systems shows the superiority of the proposed system. Supplied by the geothermal sources of the Tunisian south, the system would allow a pilot geothermal station to produce 75 tons of ice per day, reducing greenhouse gas emissions by about 2.38 tons of CO2 per day. Based on the typical geothermal energy sources in Tunisia, which present a global refrigeration potential of 4.4 MW, the daily quantity of ice that could be produced is about 865 tons; greenhouse gas emissions should thus be reduced by about 10,000 tons of CO2 per year.
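The abstract's figures are mutually consistent under linear scaling from the pilot station to the full 4.4 MW potential, as a quick check shows (the assumption of linear scaling of avoided emissions with ice production is ours):

```python
# Figures quoted in the abstract
pilot_ice = 75.0      # tons of ice per day at the pilot station
pilot_co2 = 2.38      # tons of CO2 avoided per day at the pilot station
total_ice = 865.0     # tons of ice per day at the full 4.4 MW potential

# Linear scaling of avoided emissions, annualized
annual_co2 = pilot_co2 * (total_ice / pilot_ice) * 365  # roughly 10,000 t/yr
```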

Data measurement for roller bearing condition monitoring is carried out on the basis of the Shannon sampling theorem, resulting in massive amounts of redundant information. This leads to a big-data problem that increases the difficulty of roller bearing fault diagnosis. To overcome this shortcoming, a two-stage compressed fault detection strategy is proposed in this study. First, a sliding window divides the original signals into several segments, and a selected symptom parameter represents each segment; this yields a symptom parameter wave that compresses the raw vibration signals to a certain level while retaining the fault information. Second, a fault detection scheme based on compressed sensing is applied to extract the fault features, compressing the symptom parameter wave thoroughly with a random matrix called the measurement matrix. The experimental results validate the effectiveness of the proposed method, and a comparison of the three selected symptom parameters is also presented.
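The first-stage compression can be sketched with root-mean-square (RMS) as the symptom parameter. RMS is a common choice for vibration signals, but the paper's three actual symptom parameters are not specified here, and the non-overlapping windowing is an assumption of this sketch.

```python
import numpy as np

def symptom_wave(signal, window):
    """Compress a vibration signal to one symptom parameter (RMS) per
    non-overlapping window, preserving fault-related energy changes."""
    n = len(signal) // window
    segments = np.asarray(signal[:n * window], dtype=float).reshape(n, window)
    return np.sqrt((segments ** 2).mean(axis=1))

# a toy signal whose amplitude drops halfway through
raw = [3.0, -3.0, 3.0, -3.0, 1.0, -1.0, 1.0, -1.0]
wave = symptom_wave(raw, window=4)   # two RMS values: 3.0 and 1.0
```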

Visual neuroscientists have discovered fundamental properties of neural representation through careful analysis of responses to controlled stimuli. Typically, different properties are studied and modeled separately. To integrate our knowledge, it is necessary to build general models that begin with an input image and predict responses to a wide range of stimuli. In this study, we develop a model that accepts an arbitrary band-pass grayscale image as input and predicts blood oxygenation level dependent (BOLD) responses in early visual cortex as output. The model has a cascade architecture consisting of two stages of linear and nonlinear operations. The first stage involves well-established computations (local oriented filters and divisive normalization), whereas the second stage involves novel computations: compressive spatial summation (a form of normalization) and a variance-like nonlinearity that generates selectivity for second-order contrast. The parameters of the model, which are estimated from BOLD data, vary systematically across visual field maps: compared to primary visual cortex, extrastriate maps generally have larger receptive field sizes, stronger normalization, and increased selectivity for second-order contrast. Our results provide insight into how stimuli are encoded and transformed in successive stages of visual processing.
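The divisive normalization step in the first stage can be sketched schematically. This is a generic textbook form with made-up parameters, not the fitted model from the study.

```python
import numpy as np

def divisive_normalization(responses, sigma=1.0, n=2.0):
    """Each unit's driven response is divided by a semisaturation constant
    plus the pooled response of the whole population."""
    r = np.asarray(responses, dtype=float) ** n
    return r / (sigma ** n + r.sum())

# one strongly driven unit among weakly driven neighbors
out = divisive_normalization([1.0, 1.0, 2.0])
```

Stronger normalization (a larger pooled term) compresses the response range, which is the property the study finds to increase in extrastriate maps.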

Reinforcement learning is a commonly used technique in robotics; however, traditional algorithms are unable to handle the large amounts of data coming from a robot's sensors, require long training times, cannot re-use learned policies in similar domains, and use discrete actions. This work introduces TS-RRLCA, a two-stage method that tackles these problems. In the first stage, low-level data from the robot's sensors is transformed into a more natural, relational representation based on rooms, walls, corners, doors and obstacles, significantly reducing the state space. We also use Behavioural Cloning, i.e., traces provided by the user, to learn in few iterations a relational policy that can be re-used in different environments. In the second stage, we use Locally Weighted Regression to transform the initial policy into a continuous-action policy. We tested our approach with a real service robot in different environments on different navigation and following tasks. Results show that the policies can be used in different domains and produce smoother, faster and shorter paths than the original policies.
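The second-stage smoothing can be illustrated with a one-dimensional kernel-weighted average in the spirit of Locally Weighted Regression (a zeroth-order sketch; true LWR fits a local linear model, and the state features, actions, and bandwidth below are invented, not the paper's):

```python
import numpy as np

def lwr_predict(x_query, X, y, tau=0.5):
    """Predict a continuous action as a distance-weighted average of the
    discrete actions recorded at nearby states (Gaussian kernel)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    w = np.exp(-((X - x_query) ** 2) / (2 * tau ** 2))
    return float((w * y).sum() / w.sum())

# discrete turn angles recorded at three states; query a state in between
angle = lwr_predict(0.5, X=[0.0, 1.0, 2.0], y=[-0.2, 0.0, 0.2])
```

Queries between the recorded states yield intermediate actions, which is what turns a discrete policy into a smooth continuous-action one.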