Sample records for generational comparison revised

The U.S. Geological Survey (USGS) developed revision and product generation (RevPG) software for updating digital line graph (DLG) data and producing maps from such data. This software is based on ARC/INFO, a geographic information system from the Environmental Systems Research Institute (ESRI). RevPG consists of ARC/INFO Arc Macro Language (AML) programs, C routines, and interface menus that permit operators to collect vector data using aerial images, to symbolize the data on-screen, and to produce plots and color-separated files for use in printing maps.

A scoping evaluation was made of various facility alternatives for test of LMFBR prototype steam generators and models. Recommendations are given for modifications to EBR-II and SCTI (Sodium Components Test Installation) for prototype SG testing, and for few-tube model testing. (DLC)

Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith in the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continue to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents; this information is used to improve the accuracy of the model's predictions. Several modifications have also been made to the model to enlarge the scope of problems that can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report, which summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.

The Chi index described in the article ‘A revision of the γ-evaluation concept for the comparison of dose distributions’ by Bakai et al (Phys. Med. Biol. 2003 48 3543-53) indicates that smooth acceptance tubes, defining upper and lower limits of dose difference and distance to agreement, can be pre-defined for a given dose distribution based on the local dose gradient. Mathematical analysis and simulations indicate that the Chi index as described by Bakai et al does not produce smooth acceptance criteria in rapidly varying dose gradients. Instead, ‘horns’ are generated in the acceptance tubes which lead to the production of unacceptably large acceptance criteria and the possibility of false negatives.

A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the two-dimensional π+ energy versus cosine distribution.

This is a revised final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, a reliability baseline (Space Shuttle Main Engine QRAS), component-level reliability/performance/cost for the six baseline cycles, and selection of three cycles for further study. The report further addresses technology improvement selection and component-level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans and recommendations for future studies.

These guidelines describe procedures to comply with all Federal and State laws and regulations and Lawrence Berkeley National Laboratory (LBNL) policy applicable to State-regulated medical and unregulated, but biohazardous, waste (medical/biohazardous waste). These guidelines apply to all LBNL personnel who: (1) generate and/or store medical/biohazardous waste, (2) supervise personnel who generate medical/biohazardous waste, or (3) manage a medical/biohazardous waste pickup location. Personnel generating biohazardous waste at the Joint Genome Institute/Production Genomics Facility (JGI/PGF) are referred to the guidelines contained in Section 9, which is the only part of these guidelines that applies to the JGI/PGF. Medical/biohazardous waste referred to in this Web site includes biohazardous, sharps, pathological, and liquid waste. Procedures for proper storage and disposal are summarized in the Solid Medical/Biohazardous Waste Disposal Procedures Chart. Contact the Waste Management Group at 486-7663 if you have any questions regarding medical/biohazardous waste management.

For the next generation of linear colliders, the energy loss due to beamstrahlung during the collision of the e⁺e⁻ beams is expected to substantially influence the effective center-of-mass energy distribution of the colliding particles. In this paper, we first derive analytical formulae for the electron and photon energy spectra under multiple beamstrahlung processes, and for the e⁺e⁻ and γγ differential luminosities. We then apply our formulation to various classes of 500 GeV e⁺e⁻ linear collider designs currently under study.

A database of vegetation, soil, and air tritium concentrations at gridded coordinate locations following nine accidental atmospheric releases is described. While none of the releases caused a significant dose to the public, the data collected are valuable for comparison with the results of tritium transport models used for risk assessment. The largest potential individual off-site dose from any of the releases was calculated to be 1.6 mrem. The population dose from this same release was 46 person-rem, which represents 0.04% of the natural background radiation dose to the population in the path of the release.
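As a quick arithmetic check of the figures quoted above, the collective natural-background dose implied by the 46 person-rem and 0.04% numbers follows directly (a back-of-envelope sketch, not a calculation from the report):

```python
def implied_background_dose(pop_dose_person_rem, fraction_of_background):
    """If a release's collective dose equals a given fraction of the
    natural background dose to the same population, the implied
    background collective dose is simply dose / fraction."""
    return pop_dose_person_rem / fraction_of_background

# Numbers from the record: 46 person-rem, 0.04% of background
implied = implied_background_dose(46.0, 0.0004)
print(round(implied))  # → 115000 (person-rem of background)
```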

Many of the technical problems of generating a large thin liquid sheet from 0.02 to 0.20 µm thick (3 to 40 µg/cm²) have been solved. It is shown that this perennial sheet is stable and constant in dimension. Several ion beam species from the SuperHILAC have been used for evaluation at 0.11 MeV/n. In one of three modes this sheet serves as an equivalent substitute for a carbon foil. The second mode is characterized by a solid-like charge-state distribution but with a varying fraction of unstripped ions. The third mode gives stripping performance akin to a vapor stripping medium.

DOE GO13028-0001. This report summarizes the work performed by Teledyne Energy Systems to understand high-pressure electrolysis mechanisms, investigate and address safety concerns related to high-pressure electrolysis, develop methods to test components and systems of a high-pressure electrolyzer, and produce design specifications for a low-cost high-pressure electrolysis system using lessons learned throughout the project. Included in this report are data on separator materials, electrode materials, structural cell design, and dissolved-gas tests, along with the results of trade studies for active area, component design analysis, high-pressure hydrogen/oxygen reactions, and control systems design. Several key pieces of a high-pressure electrolysis system were investigated in this project, and the results will be useful in further attempts at high-pressure and/or low-cost hydrogen generator projects. An important finding of the testing and research performed in this study concerns the safety issues present in a high-pressure electrolyzer system: they cannot easily be reduced to a level where units can be manufactured at the specified cost goals or operated by other than trained personnel in a well-safeguarded environment. The two key objectives of the program were to develop a system to supply hydrogen at a rate of at least 10,000 scf/day at a pressure of 5000 psi, and to meet cost goals of $600/kW in production quantities of 10,000/year. On these two points TESI was not successful. The project was halted due to concerns over the safety of high-pressure gas electrolysis and the associated costs of a system that reduced those safety concerns.

To inform the public about details of the employment security program and how it functions, this comparison of state unemployment insurance laws is presented. The report is based primarily on an analysis of state statutes. It examines state by state the types of workers and employers that are covered under the state law, the methods of financing…

This paper compares the current and historic operating performance of 12 large nuclear and coal-fired units now operated by Commonwealth Edison Co., and provides specific comparisons of busbar costs of electricity generated by those units in recent years. It also provides cost comparisons for future nuclear and coal-fired units, and attempts to deal realistically with the effect of future inflation upon these comparisons. The paper deals with the problem of uncertainty, the effect of future developments on present-day comparisons, and how published comparisons have varied over the past four or five years. 9 tables.

A previous analysis of the radiological impact of removing and replacing corroded steam generators has been updated based on experience gained during steam generator repairs at Surry Unit 2. Some estimates of occupational doses involved in the operation have been revised but are not significantly different from the earlier estimates. Estimates of occupational doses and radioactive effluents for new tasks have been added. Health physics concerns that arose at Surry included the number of persons involved in the operation, the training of workers, the handling of quantities of low-level waste, and the application of the ALARA principle. A review of these problem areas may help in the planning of other similar operations. A variety of processes could be used to decontaminate steam generators. Research is needed to assess these techniques and their associated occupational doses and waste volumes. Contaminated steam generators can be stored or disposed of after removal without significant radiological problems. Onsite storage and intact shipment have the least impact. In-place retubing, an alternative to steam generator removal, results in occupational doses and effluents similar to those from removal, but prior decontamination of the channel head is needed. The retubing option should be assessed further.

A comparison of self-scoring error rates for the Self-Directed Search (SDS) and the revised SDS is presented. The subjects were college freshmen and sophomores who participated in career planning as part of their orientation program and a career workshop. Subjects (N=190 in the first study and N=84 in the second) were then randomly assigned to the SDS…

With fast development and wide applications of next-generation sequencing (NGS) technologies, genomic sequence information is within reach to aid the achievement of goals to decode life mysteries, make better crops, detect pathogens, and improve life qualities. NGS systems are typically represented by SOLiD/Ion Torrent PGM from Life Sciences, Genome Analyzer/HiSeq 2000/MiSeq from Illumina, and GS FLX Titanium/GS Junior from Roche. Beijing Genomics Institute (BGI), which possesses the world's biggest sequencing capacity, has multiple NGS systems including 137 HiSeq 2000, 27 SOLiD, one Ion Torrent PGM, one MiSeq, and one 454 sequencer. We have accumulated extensive experience in sample handling, sequencing, and bioinformatics analysis. In this paper, technologies of these systems are reviewed, and first-hand data from extensive experience are summarized and analyzed to discuss the advantages and specifics associated with each sequencing system. Finally, applications of NGS are summarized. PMID:22829749

The primary aim of the paper is to increase the number of intuitions accounted for by Bock's rules. The revised set of rules accounts mainly for intuitions in the area of step-, half-, and in-law relationships. (CFM)

This study compared the hospital cost of primary and revision total hip arthroplasty (THA) after the introduction of cost-containment programs (clinical pathway, hip implant standardization, and competitive bid purchasing of hip implants). Hospital financial records for 290 primary and 85 revision THAs performed from October 1993 through September 1995 were analyzed. A cost-accounting system provided actual hospital cost data for each procedure. Accurate calculation of hospital income or loss was determined. Average hospital length of stay was 4.9 days for primary THA and 5.9 days for revision THA. Average hospital cost was $11,104 for primary THA and $14,935 for revision THA. Average net income (hospital revenue minus hospital expense) for primary THA was $2,486. Average loss from revision THA was $401. The payer mix included commercial insurance, Blue Cross/Blue Shield, managed care, Medicare, Medicaid, and workmen's compensation. For primary THA, all payers were profitable except Medicaid and selected managed care contracts. For revision THA, profit was achieved with payment from commercial insurance only. Despite the introduction of cost-containment programs, revision THA did not achieve profitability at our institution. PMID:10037332

A comprehensive waste-forecasting task was initiated in FY 1991 to provide a consistent, documented estimate of the volumes of waste expected to be generated as a result of U.S. Department of Energy-Oak Ridge Operations (DOE-ORO) Environmental Restoration (ER) OR-1 Project activities. Continual changes in the scope and schedules for remedial action (RA) and decontamination and decommissioning (D&D) activities have required that an integrated data base system be developed that can be easily revised to keep pace with changes and provide appropriate tabular and graphical output. The output can then be analyzed and used to drive planning assumptions for treatment, storage, and disposal (TSD) facilities. The results of this forecasting effort and a description of the data base developed to support it are provided herein. The initial waste-generation forecast results were compiled in November 1991. Since the initial forecast report, the forecast data have been revised annually. This report reflects revisions as of September 1994.

ORIGEN2 is a versatile point-depletion and decay computer code for use in simulating nuclear fuel cycles and calculating the nuclide compositions of materials contained therein. It is a revision and update of the original ORIGEN computer code, which has been distributed worldwide since the early 1970s. This report gives a detailed description of ORIGEN2, including the methods used to solve the nuclear depletion and decay equations, and documents input information necessary to use ORIGEN2 that has not appeared in supporting reports.
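The depletion and decay equations that a point-depletion code of this kind solves can be stated generically as a nuclide balance (a standard form for illustration, not quoted from the report):

```latex
\frac{dN_i}{dt} \;=\; \sum_{j} \ell_{ij}\,\lambda_j N_j
\;+\; \bar{\phi} \sum_{k} f_{ik}\,\sigma_k N_k
\;-\; \bigl(\lambda_i + \bar{\phi}\,\sigma_i\bigr) N_i
```

where $N_i$ is the atom density of nuclide $i$, $\lambda$ are decay constants, $\sigma$ are spectrum-averaged reaction cross sections, $\bar{\phi}$ is the flux, and $\ell_{ij}$, $f_{ik}$ are the fractions of decays and reactions of parents $j$, $k$ that produce nuclide $i$. ORIGEN-family codes solve this coupled linear system with a matrix exponential method.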

In this paper, we present a broad comparison of studies for a selected set of parameters for different nuclear reactor types including the next generation. This serves as an overview of key parameters which provide a semi-quantitative decision basis for selecting nuclear strategies. Out of a number of advanced reactor designs of the LWR type, gas-cooled type, and FBR type currently on the drawing board, the Advanced Light Water Reactors (ALWR) seem to have some edge over other types of the next generation of reactors for near-term application. This is based on a number of attributes related to the benefit of the vast operating experience with LWRs, coupled with an estimated low risk profile, economies of scale, degree of utilization of passive systems, simplification in the plant design and layout, and modular fabrication and manufacturing. 32 refs., 1 fig., 3 tabs.

The Chemical and Hydrogen Technology Section (CHT) of the Savannah River Technology Center (SRTC) has conducted a series of gas generation tests in support of the revision of the safety analysis report for packaging (SARP) for the 9975 container, developed at the Savannah River Site (SRS). The Packaging and Transportation Group of SRTC is coordinating the revision to this SARP. A Task Technical and Quality Assurance Plan directing this work was issued by CHT in February 1999. Initially, the primary interest in this testing was hydrogen generation. From that plan: "Gas generation tests can be tracked in real time by measuring the pressure of a sealed container of the materials being studied. Because multiple gas-phase reactions are produced in the radiation field of the sample, pressure measurements do not necessarily define the quantity of H₂ generated. However, the change in total molecules of gas can be calculated using the ideal gas law from the pressure measurement, known container volume, and sample temperature. A measurement of the actual headspace gases must be completed to calculate the H₂ generation rate for a particular sample." As the results from these tests were reviewed, however, questions arose regarding the oxygen in the headspace gases. Specifically, do the data from some tests indicate that oxygen was generated, and do the data from other tests indicate that oxygen was depleted? A statistical analysis of the oxygen data derived from these tests is provided in this report to help answer these questions.
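The ideal-gas bookkeeping described in the quoted passage can be sketched as follows; the container volume, temperature, and pressure change below are hypothetical illustration values, not data from the tests:

```python
R = 8.314  # universal gas constant, J/(mol*K)

def delta_moles(delta_p_pa, volume_m3, temp_k):
    """Change in total moles of headspace gas implied by a measured
    pressure change, via the ideal gas law: dn = dP * V / (R * T).
    This gives total gas generated, not the H2 fraction; a headspace
    gas analysis is still needed to apportion it to hydrogen."""
    return delta_p_pa * volume_m3 / (R * temp_k)

# Illustration only: a 10 kPa pressure rise in a 2 L headspace at 25 °C
dn = delta_moles(10.0e3, 2.0e-3, 298.15)  # about 8 millimoles of gas
```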

Mesh generation in regions in Euclidean space is a central task in computational science, and especially for commonly used numerical methods for the solution of partial differential equations, e.g., finite element and finite volume methods. We focus on the uniform Delaunay triangulation of planar regions and, in particular, on how one selects the positions of the vertices of the triangulation. We discuss a recently developed method, based on the centroidal Voronoi tessellation (CVT) concept, for effecting such triangulations and present two algorithms, including one new one, for CVT-based grid generation. We also compare several methods, including CVT-based methods, for triangulating planar domains. To this end, we define several quantitative measures of the quality of uniform grids. We then generate triangulations of several planar regions, including some having complexities that are representative of what one may encounter in practice. We subject the resulting grids to visual and quantitative comparisons and conclude that all the methods considered produce high-quality uniform grids and that the CVT-based grids are at least as good as any of the others.
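The CVT concept the paragraph above describes can be sketched with Lloyd's method, here using Monte Carlo estimation of the Voronoi-cell centroids on the unit square. This is an illustration of the concept only, not the authors' algorithms; the generator count, sample size, and iteration count are arbitrary:

```python
import numpy as np

def lloyd_cvt(n_generators=16, n_samples=20000, n_iters=40, seed=0):
    """Approximate a centroidal Voronoi tessellation of the unit
    square: repeatedly assign random sample points to their nearest
    generator (Voronoi cell membership) and move each generator to
    the mean of its samples (an approximate cell centroid)."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_generators, 2))
    for _ in range(n_iters):
        samples = rng.random((n_samples, 2))
        # squared distance from every sample to every generator
        d2 = ((samples[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
        owner = d2.argmin(axis=1)
        for k in range(n_generators):
            cell = samples[owner == k]
            if len(cell):
                pts[k] = cell.mean(axis=0)
    return pts

# Near-uniformly spread points; a Delaunay triangulation of these
# would give a CVT-based uniform grid of the square.
generators = lloyd_cvt()
```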

The horror at the scientific crimes of the Nazi period led the World Medical Association (WMA), in 1964, to establish through the Helsinki Declaration an ethical code for medical research on human beings. The code was subsequently modified to account for the developments of medical science in the past decades. In October 2000 the fifth and latest revision was approved in Edinburgh. Its preparation lasted three years and entailed a passionate but also profitable dispute between those who believe that ethical principles must be followed even though they can hamper scientific progress, and those who think that more articulate evaluations should prevail. The initial victory of the more intransigent party resulted in the maintenance of the norm that entails the greatest restriction on using placebo, but after a year and a half it was partially reshaped by a more permissive interpretation from the same WMA, while awaiting a new revision scheduled for this year. PMID:12387141

Objective: To compare the clinical, radiographic and medium-term follow-up results from two fixation methods for the tibial component in revision procedures on total knee prostheses: cemented (tray and stem) and hybrid (cemented tray and uncemented, nonporous canal-filling stem). Methods: Between August 1999 and November 2005, 30 revision procedures on total knee arthroplasties were performed on 26 patients, who were divided between group I (cemented fixation; 21 knees) and group II (hybrid fixation; nine knees). The mean follow-up was 52 months and no patients were lost to follow-up. Results: No differences in the scores from the WOMAC and Knee Society questionnaires were observed between the two groups. One patient in group I presented radiographic signs of loosening. Two patients (one in each group) complained of pain in the diaphyseal region, compatible with the location of the stem tip. The pedestal radiographic sign was observed in 89% of the knees with uncemented stems and in none of the cemented group. Conclusion: The comparative analysis between the two methods did not show any differences regarding clinical and radiographic parameters, or arthroplasty survival. PMID:27027058

Continuous-Wave Radar to Detect Defects Within Heat Exchangers and Steam Generator Tubes; revised September 3, 2003. A major cause of failures in heat exchangers and steam generators in nuclear power plants is degradation of the tubes within them. Tube failure is often caused by the development of cracks that begin on the outer surface of the tube and propagate both inwards and laterally. A new technique was researched for detection of defects within metal tubing using a continuous-wave radar method. The technique is 100% volumetric and may find smaller defects more rapidly and less expensively than present methods. The project described in this report was a joint development effort between Sandia National Laboratories (SNL) and New Mexico State University (NMSU) funded by the US Department of Energy. The goal of the project was to research, design, and develop a new concept utilizing continuous-wave radar to detect defects inside metallic tubes, in particular nuclear plant steam generator tubing. The project was divided into four parallel tracks: computational modeling, experimental prototyping, thermo-mechanical design, and signal detection and analysis.

Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C; 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library’s heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR; ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. DdPCR-Tail is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality. PMID:27048884

A contrast-detail phantom like the CDMAM phantom (Artinis Medical Systems, Zetten, NL) is suggested by the 'European protocol for the quality control of the physical and technical aspects of mammography screening' to evaluate image quality of digital mammography systems. In a recent paper the commonly used CDMAM 3.4 was evaluated according to its dose sensitivity in comparison to other phantoms. The successor phantom (CDMAM 4.0) features other disc diameters and thicknesses, adapted to correspond more closely to the image quality found in modern mammography systems. It is therefore natural to compare these two generations of phantoms with respect to a potential improvement. The time-current product was varied within a range of clinically used values (40-160 mAs). Image evaluation was performed using the automatic evaluation software provided by Artinis. The relative dose sensitivity was compared in dependence of different diameters. Additionally, the IQFinv parameter, which averages over the diameters, was computed to get a more global conclusion. We found that the dose dependence is considerably smoother with the CDMAM 4.0 phantom. The IQFinv parameter also shows a more linear behaviour than with the CDMAM 3.4. As the automatic evaluation shows different results on the two phantoms, conversion factors from automatic to human readouts have to be adapted accordingly.

The Department of Energy's (DOE's) Generation IV Nuclear Energy Systems Program will address the research and development (R&D) necessary to support next-generation nuclear energy systems. Such R&D will be guided by the technology roadmap developed for the Generation IV International Forum (GIF) over two years with the participation of over 100 experts from the GIF countries. The roadmap evaluated over 100 future systems proposed by researchers around the world. The scope of the R&D described in the roadmap covers the six most promising Generation IV systems. The effort ended in December 2002 with the issue of the final Generation IV Technology Roadmap [1.1]. The six most promising systems identified for next generation nuclear energy are described within the roadmap. Two employ a thermal neutron spectrum with coolants and temperatures that enable hydrogen or electricity production with high efficiency (the Supercritical Water Reactor - SCWR and the Very High Temperature Reactor - VHTR). Three employ a fast neutron spectrum to enable more effective management of actinides through recycling of most components in the discharged fuel (the Gas-cooled Fast Reactor - GFR, the Lead-cooled Fast Reactor - LFR, and the Sodium-cooled Fast Reactor - SFR). The Molten Salt Reactor (MSR) employs a circulating liquid fuel mixture that offers considerable flexibility for recycling actinides, and may provide an alternative to accelerator-driven systems. A few major technologies have been recognized by DOE as necessary to enable the deployment of the next generation of advanced nuclear reactors, including the development and qualification of the structural materials needed to ensure their safe and reliable operation. Accordingly, DOE has identified materials as one of the focus areas for Gen IV technology development.

This paper presents new calculation methods, recently implemented in the Serpent Monte Carlo code, and related to the production of homogenized few-group constants for deterministic 3D core analysis. The new methods fall under three topics: 1) Improved treatment of neutron-multiplying scattering reactions, 2) Group constant generation in reflectors and other non-fissile regions and 3) Homogenization in leakage-corrected criticality spectrum. The methodology is demonstrated by a numerical example, comparing a deterministic nodal diffusion calculation using Serpent-generated cross sections to a reference full-core Monte Carlo simulation. It is concluded that the new methodology improves the results of the deterministic calculation, and paves the way for Monte Carlo based group constant generation. (authors)
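The flux-weighted collapse that underlies any few-group constant generation can be written generically as (the textbook definition, not Serpent's specific estimators):

```latex
\Sigma_{g} \;=\;
\frac{\displaystyle\int_{E_{g}}^{E_{g-1}} \Sigma(E)\,\phi(E)\,dE}
     {\displaystyle\int_{E_{g}}^{E_{g-1}} \phi(E)\,dE}
```

where $\Sigma(E)$ is the continuous-energy macroscopic cross section, $\phi(E)$ the scalar flux, and $[E_{g}, E_{g-1}]$ the energy bounds of group $g$; homogenization additionally volume-averages over the node, and topic 3 in the abstract corresponds to performing this weighting with a leakage-corrected criticality spectrum for $\phi$.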

We empirically evaluated indexes derived from the Rorschach Comprehensive System (CS) and the Rorschach Performance Assessment System (R-PAS) that are used for the assessment of psychotic functioning in schizophrenia. We compared the Perceptual Thinking Index (PTI) and the Ego Impairment Index (EII-2) with their revised versions: Thought and Perception Composite (TP-Comp) and EII-3. We evaluated their predictive validity for differentiating schizophrenic from nonschizophrenic patients in a Serbian sample. The sample consisted of 211 (109 men and 102 women, 18-50 years old) inpatients in Serbia who were divided into 2 groups: schizophrenic (100) and nonschizophrenic (111). Test administration, coding, and form quality classification followed CS guidelines. Logistic regression analysis indicated that the new indexes TP-Comp and EII-3 have slightly better predictive power than their counterparts, PTI and EII-2, in identification of schizophrenia, and that TP-Comp performed better than other indexes, although all 4 indexes were successful in differentiating these groups. The results supported the use of TP-Comp in diagnosis of schizophrenia and generally provided evidence for the utility of the Rorschach in evaluating psychosis and for its use in a cross-national context. PMID:23844937

Existing schemes for the palynozonation of the Namurian deposits in western Europe require updating to reflect improvements in both independent biostratigraphical calibration and species distribution data. New biozonation proposals are presented which include the accurate positioning of all biozonal boundaries and the establishment of new sub-biozonal units within the Pendleian-Alportian sections. The base of the renamed Cingulizonates cf. capistratus-Bellispores nitidus (CN) Biozone is placed within the late Brigantian, and a new unit, the C. cf. capistratus (Cc) Sub-Biozone, with an upper boundary coincident with the Visean-Namurian Stage boundary, is proposed. The Pendleian part of the revised CN Biozone is established as the Verrucosisporites morulatus (Vm) Sub-Biozone. Additional data from the Lycospora subtriquetra-Kraeuselisporites ornatus (SO) Biozone, in the interval adjacent to the proposed Mid-Carboniferous Boundary, permit the establishment of the L. subtriquetra-Apiculatisporis variocorneus (SV) Sub-Biozone in the upper part of the Arnsbergian Stage and the L. subtriquetra-Cirratriradites rarus (SR) Sub-Biozone, which occupies the Chokierian and most of the Alportian stages. The base of the Crassispora kosankei-Grumosisporites varioreticulatus (KV) Biozone is repositioned into the upper part of the Alportian Stage. Comparable assemblages described from the Silesian Coal Basins of Poland are discussed and correlations between the palynozonations of both areas are suggested.

The objective of this document is to provide a resource for all states and compact regions interested in promoting the minimization of low-level radioactive waste (LLW). This project was initiated by the Commonwealth of Massachusetts, and Massachusetts waste streams have been used as examples; however, the methods of analysis presented here are applicable to similar waste streams generated elsewhere. This document is a guide for states/compact regions to use in developing a system to evaluate and prioritize various waste minimization techniques in order to encourage individual radioactive materials users (LLW generators) to consider these techniques in their own independent evaluations. This review discusses the application of specific waste minimization techniques to waste streams characteristic of three categories of radioactive materials users: (1) industrial operations using radioactive materials in the manufacture of commercial products, (2) health care institutions, including hospitals and clinics, and (3) educational and research institutions. Massachusetts waste stream characterization data from key radioactive materials users in each category are used to illustrate the applicability of various minimization techniques. The utility group is not included because extensive information specific to this category of LLW generators is available in the literature.

Paper describes the evaluation of hybrid-electric transit buses purchased by New York City Transit (NYCT) in an order group of 200 (Gen II) and compares their performance with that of similar hybrid-electric transit buses purchased by NYCT in an order group of 125 (Gen I).

The Hanford Pollution Prevention (P2) program is an organized, comprehensive, and continual effort to: systematically reduce the quantity and toxicity of hazardous, radioactive, mixed, and sanitary wastes; conserve resources; and prevent or minimize pollutant releases to all environmental media from all Hanford Site activities. The program has been developed to meet waste minimization and pollution prevention public law requirements, federal and state regulations, and US Department of Energy (DOE) requirements. The Hanford P2 program is implemented through the sitewide, contractor, and generator group programs.

DOE has selected the High Temperature Gas-cooled Reactor (HTGR) design for the Next Generation Nuclear Plant (NGNP) Project. The NGNP will demonstrate the use of nuclear power for electricity and hydrogen production. It will have an outlet gas temperature in the range of 950°C and a plant design service life of 60 years. The reactor design will be a graphite moderated, helium-cooled, prismatic or pebble-bed reactor and use low-enriched uranium, TRISO-coated fuel. The plant size, reactor thermal power, and core configuration will ensure passive decay heat removal without fuel damage or radioactive material releases during accidents. The NGNP Materials Research and Development (R&D) Program is responsible for performing R&D on likely NGNP materials in support of the NGNP design, licensing, and construction activities. Some of the general and administrative aspects of the R&D Plan include:
• Expand American Society of Mechanical Engineers (ASME) Codes and American Society for Testing and Materials (ASTM) Standards in support of the NGNP Materials R&D Program.
• Define and develop inspection needs and the procedures for those inspections.
• Support selected university materials related R&D activities that would be of direct benefit to the NGNP Project.
• Support international materials related collaboration activities through the DOE sponsored Generation IV International Forum (GIF) Materials and Components (M&C) Project Management Board (PMB).
• Support document review activities through the Materials Review Committee (MRC) or other suitable forum.

For several decades the NASA Glenn Research Center has been providing a file of thermodynamic data for use in several computer programs. These data are in the form of least-squares coefficients that have been calculated from tabular thermodynamic data by means of the NASA Properties and Coefficients (PAC) program. The source thermodynamic data are obtained from the literature or from standard compilations. Most gas-phase thermodynamic functions are calculated by the authors from molecular constant data using ideal gas partition functions. The Coefficients and Properties (CAP) program described in this report permits the generation of tabulated thermodynamic functions from the NASA least-squares coefficients. CAP provides considerable flexibility in the output format, the number of temperatures to be tabulated, and the energy units of the calculated properties. This report provides a detailed description of input preparation, examples of input and output for several species, and a listing of all species in the current NASA Glenn thermodynamic data file.
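As an illustration of what CAP computes, the sketch below evaluates the dimensionless thermodynamic functions from a set of least-squares coefficients. For brevity it uses the older 7-coefficient polynomial form; the current NASA Glenn file uses a 9-coefficient form with additional inverse-temperature terms. The coefficients here are made-up, roughly diatomic-gas-like values, not entries from the NASA file:

```python
import math

# Evaluate dimensionless thermodynamic functions from NASA least-squares
# coefficients (older 7-coefficient polynomial form).
def nasa7(T, a):
    """Return (Cp/R, H/RT, S/R) for temperature T and coefficients a[0..6]."""
    cp_R = a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4
    h_RT = a[0] + a[1]*T/2 + a[2]*T**2/3 + a[3]*T**3/4 + a[4]*T**4/5 + a[5]/T
    s_R = (a[0]*math.log(T) + a[1]*T + a[2]*T**2/2 + a[3]*T**3/3
           + a[4]*T**4/4 + a[6])
    return cp_R, h_RT, s_R

# Illustrative (made-up) coefficients, roughly diatomic-gas-like:
a = [3.5, 1.0e-4, 0.0, 0.0, 0.0, -1.0e3, 4.0]
cp_R, h_RT, s_R = nasa7(1000.0, a)
print(cp_R, h_RT, s_R)
```

Tabulation over a temperature range, as CAP does, is then just a loop over T with unit conversions applied to the dimensionless values.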

In this paper we describe the model-revising problem-solving strategies of two groups of students (one successful, one unsuccessful) as they worked (in a genetics course we developed) to revise Mendel's simple dominance model to explain the inheritance of a trait expressed in any of four variations. The two groups described in this paper were chosen with the intent that the strategies that they employed be used to inform the design of model-based instruction. Differences were found in the groups' abilities to recognize anomalous data, use existing models as templates for revisions, and assess revised models.

The NASA John H. Glenn Research Center initiated baseline testing of ultracapacitors for the Next Generation Launch Transportation (NGLT) project to obtain empirical data for determining the feasibility of using ultracapacitors for the project. There are large transient loads associated with NGLT that require either a very large primary energy source or an energy storage system. The primary power source used for these tests is a proton exchange membrane (PEM) fuel cell. The energy storage system can consist of devices such as batteries, flywheels, or ultracapacitors. Ultracapacitors were used for these tests. Ultracapacitors are ideal for applications such as NGLT where long life, maintenance-free operation, and excellent low-temperature performance are essential. State-of-the-art symmetric ultracapacitors were used for these tests. The ultracapacitors were interconnected in an innovative configuration to minimize interconnection impedance. PEM fuel cells provide excellent energy density, but not good power density. Ultracapacitors provide excellent power density, but not good energy density. The combination of PEM fuel cells and ultracapacitors provides a power source with excellent energy density and power density. The life of PEM fuel cells is shortened significantly by large transient loads. Ultracapacitors used in conjunction with PEM fuel cells reduce the transient loads applied to the fuel cell and thus appreciably improve its life. PEM fuel cells were tested with and without ultracapacitors to determine the benefits of ultracapacitors. The report concludes that the implementation of symmetric ultracapacitors in the NGLT power system can provide significant improvements in power system performance and reliability.

Problem: Research suggests that multiple generations of students (predominantly Generation X and millennials) are concurrently enrolled in online classes and that the number of online students continues to grow. The problem investigated in this study was to identify the level of satisfaction as well as the preferences of students from Generation X…

Reports a study that compared the effects of a university health promotion course, before and after major revisions, on students' knowledge, attitudes, and behaviors. Student questionnaires indicated that both versions were effective in improving lifestyle-related knowledge, attitudes, and behaviors. The revised course was superior in modifying…

Describes the model-revising problem-solving strategies of two groups of students (one successful, one unsuccessful) as they worked in a genetics course to revise Mendel's simple dominance model to explain the inheritance of a trait expressed in any of four variations. Finds differences in the groups' abilities to recognize anomalous data, use…

Background: The application of second-generation constrained condylar knee (CCK) prostheses has not been widely studied. This retrospective study was carried out to evaluate the clinical and radiographic outcomes of a second-generation CCK prosthesis for complex primary or revision total knee arthroplasty (TKA). Methods: In total, 51 consecutive TKAs (47 patients) were performed between June 2003 and June 2013 using second-generation modular CCK prostheses. Follow-up was conducted on the 3rd postoperative day, at 1, 6, and 12 months, and annually thereafter. Anteroposterior (AP), lateral, skyline, and long-standing AP radiographs of the affected knees were taken. The Hospital for Special Surgery (HSS) Knee Score, the Knee Society Knee Score (KSKS), the Knee Society Function Score (KSFS), and range of motion (ROM) were also recorded. Heteroscedastic two-tailed Student's t-tests were used to compare the HSS score and the Knee Society score between primary and revision TKAs. A value of P < 0.05 was considered statistically significant. Results: Four knees (two patients) were lost to follow-up, and 47 knees (31 primary TKAs and 16 revision TKAs) had a mean follow-up time of 5.5 years. The mean HSS score improved from 51.1 ± 15.0 preoperatively to 85.3 ± 8.4 points at the final follow-up (P < 0.05). Similar results were observed in terms of the KSKS and KSFS, which improved from 26.0 ± 13.0 to 80.0 ± 12.2 and from 40.0 ± 15.0 to 85.0 ± 9.3 points, respectively (P < 0.05). No significant difference in the HSS, KSKS, KSFS, or ROM was found between primary and revision TKAs (P > 0.05). Two complications were observed in the revision TKA group (one intraoperative distal femur fracture and one recurrence of infection) while one complication (infection) was observed in the primary TKA group. No prosthesis loosening, joint dislocation, patella problems, tibial fracture, or nerve injury were observed. Radiolucent lines were observed in 4% of the knees without progressive

This study is aimed at providing a relative comparison of the thermodynamic and economic performance in electric applications for fixed mirror distributed focus (FMDF) solar thermal concepts which have been studied and developed in the DOE solar thermal program. Following the completion of earlier systems comparison studies in the late 1970s, there have been a number of years of progress in solar thermal technology. This progress includes developing new solar components, improving component and system design details, constructing working systems, and collecting operating data on the systems. This study provides an update of the expected performance and cost of the major components, and an overall system energy cost for the FMDF concepts evaluated. The projections in this study are for the late 1990s and are based on the potential capabilities that might be achieved with further technology development.

The fuel-cycle energy use and greenhouse gas (GHG) emissions associated with the application of fuel cells to distributed power generation were evaluated and compared with the combustion technologies of microturbines and internal combustion engines, as well as the various technologies associated with grid-electricity generation in the United States and California. The results were primarily impacted by the net electrical efficiency of the power generation technologies and the type of employed fuels. The energy use and GHG emissions associated with the electric power generation represented the majority of the total energy use of the fuel cycle and emissions for all generation pathways. Fuel cell technologies exhibited lower GHG emissions than those associated with the U.S. grid electricity and other combustion technologies. The higher-efficiency fuel cells, such as the solid oxide fuel cell (SOFC) and molten carbonate fuel cell (MCFC), exhibited lower energy requirements than those for combustion generators. The dependence of all natural-gas-based technologies on petroleum oil was lower than that of internal combustion engines using petroleum fuels. Most fuel cell technologies approaching or exceeding the DOE target efficiency of 40% offered significant reduction in energy use and GHG emissions.
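The dominant role of net electrical efficiency can be sketched with a back-of-the-envelope stack-emissions estimate for natural-gas-fired generation. The 56 g CO2/MJ emission factor is an approximate combustion value for natural gas, and the efficiencies are illustrative, not values from the study (which also counts upstream fuel-cycle stages):

```python
# Stack CO2 per kWh(e) as a function of net electrical efficiency,
# for natural-gas-fired generators. Upstream (fuel-cycle) emissions
# are omitted in this sketch.
NG_CO2_PER_MJ = 56.0   # g CO2 per MJ of fuel, approximate combustion factor
MJ_PER_KWH = 3.6

def g_co2_per_kwh(efficiency):
    return NG_CO2_PER_MJ * MJ_PER_KWH / efficiency

# Illustrative efficiencies only:
for name, eta in [("ICE genset", 0.30), ("microturbine", 0.28),
                  ("MCFC", 0.42), ("SOFC", 0.45)]:
    print(f"{name:12s} eta={eta:.2f}  {g_co2_per_kwh(eta):6.0f} g CO2/kWh")
```

The inverse dependence on efficiency is why the higher-efficiency fuel cells come out ahead of the combustion generators on the same fuel.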

Purpose This study aimed to identify preoperative cautions for revision of infected total knee arthroplasty (TKA) by understanding the differences in hematologic and hemodynamic changes between primary TKA and revision of infected TKA. Materials and Methods The study included 40 patients in each of two groups: patients who underwent primary TKA and patients who underwent revision of infected TKA. All patients were matched for age and body mass index. The following data were compared between the groups: changes in blood pressure, variations in hemoglobin level, amount of postoperative blood loss and transfused blood, incidence of blood transfusion, white blood cell (WBC) count, albumin level, erythrocyte sedimentation rate (ESR), C-reactive protein (CRP), and liver enzyme level. Results The hemoglobin levels, transfusion rate, and the amount of blood loss were significantly higher in the revision group (p=0.012). In both groups, CRP reached the highest level on the 3rd postoperative day but was normalized 2 weeks postoperatively; however, the revision TKA group showed a greater tendency to normalization (p=0.029). There were significant differences between the groups in ESR, WBC, blood pressure, and changes in liver enzyme levels. Conclusions Revision of infected TKA results in greater hemodynamic variations than primary TKA. Therefore, more efforts should be made to identify pre- and postoperative hemodynamic changes and hematologic status. PMID:27274469

In recent years many software tools and applications have appeared that offer procedures, scripts and algorithms to process and visualize ALS data. This variety of software tools and of "point cloud" processing methods contributed to the aim of this study: to assess algorithms available in various software tools that are used to classify LIDAR "point cloud" data, through a careful examination of Digital Elevation Models (DEMs) generated from LIDAR data based on these algorithms. The work focused on the most important available software tools, both commercial and open source. Two sites in a mountain area were selected for the study. The area of each site is 0.645 sq km. DEMs generated with the analysed software tools were compared with a reference dataset, generated using manual methods to eliminate non-ground points. Surfaces were analysed using raster analysis. Minimum, maximum and mean differences between the reference DEM and the DEMs generated with the analysed software tools were calculated, together with the Root Mean Square Error. Differences between DEMs were also examined visually using transects along the grid axes in the test sites.
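The raster comparison reduces to simple array statistics once both surfaces are on the same grid; a sketch with toy 2×2 tiles (real DEMs would be large rasters loaded from GeoTIFFs):

```python
import numpy as np

# Compare a generated DEM against a reference DEM on the same grid,
# reporting the statistics used in the study: min, max and mean
# difference, plus RMSE. Arrays are small illustrative stand-ins.
def dem_stats(reference, generated):
    diff = generated - reference
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    return float(diff.min()), float(diff.max()), float(diff.mean()), rmse

ref = np.array([[100.0, 101.0], [102.0, 103.0]])   # reference elevations (m)
gen = np.array([[100.2, 100.8], [102.1, 103.4]])   # tool-generated DEM (m)
print(dem_stats(ref, gen))
```

Transect comparison is the same arithmetic restricted to one row or column of the difference raster.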

While much has been written about the increasing debt burden that college students incur, little research examines students' perceptions of debt. This study sought to determine if student loan debt literacy differs by generation status (first-generation and continuing-generation). The data for this study was collected from a sample of 156…

Monte Carlo generators are crucial to the analysis of high energy physics data, ideally giving a baseline comparison between state-of-the-art theoretical models and experimental data. Presented here is a comparison of three final-state distributions from the GENIE, Neut, NUANCE, and NuWro neutrino Monte Carlo event generators. The final-state distributions chosen for comparison are: the electromagnetic energy fraction in neutral-current interactions, the energy of the leading π0 vs. the scattering angle for neutral-current interactions, and the muon energy vs. scattering angle of νµ charged-current interactions.
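A comparison of this kind boils down to histogramming the same observable from each generator's event sample on a common binning; the sketch below uses synthetic stand-in samples, not GENIE or NuWro output:

```python
import numpy as np

# Hypothetical comparison of one final-state distribution (e.g. the
# electromagnetic energy fraction in NC events) between two event
# generators, using density-normalized histograms on a common binning.
rng = np.random.default_rng(7)
gen_a = rng.beta(2.0, 5.0, size=10_000)   # synthetic sample, generator A
gen_b = rng.beta(2.2, 5.0, size=10_000)   # slightly different model, B

bins = np.linspace(0.0, 1.0, 21)
h_a, _ = np.histogram(gen_a, bins=bins, density=True)
h_b, _ = np.histogram(gen_b, bins=bins, density=True)

# Integrated squared difference of the normalized shapes as a
# single-number shape-discrepancy metric.
width = np.diff(bins)
shape_diff = float(np.sum((h_a - h_b) ** 2 * width))
print(shape_diff)
```

The 2D distributions mentioned in the abstract (energy vs. scattering angle) are the same idea with `np.histogram2d` and per-bin ratios or pulls instead of a scalar metric.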

The power output variability of photovoltaic systems can affect local electrical grids in locations with high renewable energy penetrations or weak distribution or transmission systems. In those rare cases, quick controllable generators (e.g., energy storage systems) or loads can counteract the destabilizing effects by compensating for the power fluctuations. Previously, control algorithms for coordinated and uncoordinated operation of a small natural gas engine-generator (genset) and a battery for smoothing PV plant output were optimized using MATLAB/Simulink simulations. The simulations demonstrated that a traditional generation resource such as a natural gas genset in combination with a battery would smooth the photovoltaic output while using a smaller battery state of charge (SOC) range and extending the life of the battery. This paper reports on the experimental implementation of the coordinated and uncoordinated controllers to verify the simulations and determine the differences between the controllers. The experiments were performed with the PNM PV and energy storage Prosperity site and a gas engine-generator located at the Aperture Center at Mesa Del Sol in Albuquerque, New Mexico. Two field demonstrations were performed to compare the different PV smoothing control algorithms: (1) implementing the coordinated and uncoordinated controls while switching off a subsection of the PV array at precise times on successive clear days, and (2) comparing the results of the battery and genset outputs for the coordinated control on a high-variability day with simulations of the coordinated and uncoordinated controls. It was found that for certain PV power profiles the SOC range of the battery may be larger with the coordinated control, but the total amp-hours through the battery (which approximates battery wear) will always be smaller with the coordinated control.
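A battery-only smoothing loop of the kind underlying the uncoordinated case can be sketched as follows: the battery supplies the gap between a moving-average target and the raw PV output, and total absolute throughput serves as the wear proxy. All numbers and the controller itself are illustrative, not the paper's actual control laws:

```python
import numpy as np

# Minimal PV-smoothing sketch: battery power covers the difference
# between a moving-average target and the raw PV output; SOC and an
# amp-hour-like wear proxy are tracked.
def smooth_with_battery(pv_kw, window, soc0_kwh, dt_h=1 / 60):
    target = np.convolve(pv_kw, np.ones(window) / window, mode="same")
    battery_kw = target - pv_kw           # + = discharge, - = charge
    soc = soc0_kwh - np.cumsum(battery_kw) * dt_h
    wear_proxy = float(np.sum(np.abs(battery_kw)) * dt_h)  # kWh throughput
    return target, battery_kw, soc, wear_proxy

# Toy profile: a cloud passage cuts PV output from 500 kW to 200 kW.
pv = np.array([500.0] * 10 + [200.0] * 10 + [500.0] * 10)
target, batt, soc, wear = smooth_with_battery(pv, window=5, soc0_kwh=100.0)
```

Adding a genset, as in the coordinated controller, amounts to splitting `battery_kw` between two sources so the battery sees less throughput.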

Once writers complete a first draft, they are often encouraged to evaluate their writing and prioritize what to revise. Yet, this process can be both daunting and difficult. This study looks at how students used a semantic concept mapping tool to re-present the content and organization of their initial draft of an informational text. We examine…

Mobile networks and services have gone beyond voice-only communication and are rapidly developing towards data-centric services. Emerging mobile data services are expected to see the same explosive growth in demand that Internet and wireless voice services have seen in recent years. To support the rapid increase in traffic, active users, and advanced multimedia services implied by this growth rate, along with the diverse quality of service (QoS) and rate requirements set by these services, mobile operators need to transition rapidly to a simple, cost-effective, flat, all-IP network. This has accelerated the development and deployment of new wireless broadband access technologies, including fourth-generation (4G) mobile WiMAX and cellular Long-Term Evolution (LTE). Mobile WiMAX and LTE are two different (but not necessarily competing) technologies that will eventually be used to achieve data speeds of up to 100 Mbps, fast enough to potentially replace wired broadband connections with wireless ones. This paper introduces both of these next-generation technologies and then compares them.

A Chinese and a Swedish preschool teacher education programme were examined in search for commonalities and differences of the curriculum decision-making considerations involved in the respective programme revision process. Findings include: (1) the two programmes have shifted orientations and become similar, yet there was no fundamental…

Objective The Rosenberg Self-Esteem Scale (RSES) is a widely used instrument that has been tested for reliability and validity in many settings; however, some negatively worded items appear to have caused it to show low reliability in a number of studies. In this study, we revised the one negatively worded item that had produced the worst outcome for the structure of the scale in previous studies, then re-analyzed the new version for its reliability and construct validity, comparing it to the original version with respect to fit indices. Methods In total, 851 students from Chiang Mai University (mean age: 19.51±1.7, 57% of whom were female) participated in this study. Of these, 664 students completed the Thai version of the original RSES, containing five positively worded and five negatively worded items, while 187 students used the revised version containing six positively worded and four negatively worded items. Confirmatory factor analysis was applied, using a uni-dimensional model with method effects and a correlated uniqueness approach. Results The revised version showed the same level of reliability (good) as the original, but yielded a better model fit. The revised RSES demonstrated excellent fit statistics, with χ2=29.19 (df=19, n=187, p=0.063), GFI=0.970, TFI=0.969, NFI=0.964, CFI=0.987, SRMR=0.040 and RMSEA=0.054. Conclusion The revised version of the Thai RSES demonstrated an equivalent level of reliability but better construct validity when compared to the original. PMID:22396685
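The reliability side of such an analysis is commonly summarized by Cronbach's alpha; a self-contained sketch on simulated correlated item responses (not the study's data, which also required confirmatory factor analysis for the validity claims):

```python
import numpy as np

# Cronbach's alpha for a k-item scale: the standard internal-consistency
# reliability estimate for instruments like the RSES.
def cronbach_alpha(items):
    """items: (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated responses: 10 items each loading on one latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = latent + 0.5 * rng.normal(size=(200, 10))
print(round(cronbach_alpha(scores), 3))
```

Because every item here shares the same latent trait, alpha comes out high; reverse-scored (negatively worded) items must be recoded before this computation, which is exactly where method effects like those in the abstract arise.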

Purpose: The purpose of this paper is to describe some of the problems and issues faced by online library catalogues. It aims to establish how libraries have undertaken the mission of developing the next generation catalogues and how they compare to new tools such as Amazon. Design/methodology/approach: An expert study was carried out in January…

Discussion of two basic conceptions: Wilhelm von Humboldt's idea of language as "energeia" existing within and without man, and Noam Chomsky's idea of language generated by the speaker according to an innate apparatus. Revised version of lectures presented at the University of Bonn, West Germany, in August 1971. (RS)

In this paper we present a comparison of three different grids generated with a fractal method and used for fluid dynamic simulations through a kinetic approach. We start from the theoretical element definition and introduce some optimizations in order to fulfil the requirements. The study is performed by analysing results both in terms of friction factor at different Reynolds regimes and in terms of streamline paths.

The suitability of various binary encoding methods for electron-beam recording of computer generated holograms is systematically evaluated. Subject to the limitations of computing resources, a set of criteria is established according to which these encoding schemes are evaluated and compared. This comparison can be used to determine the optimum encoding method for desired wavefront properties. PMID:20523368

Efforts to parallelize the VGRIDSG unstructured surface grid generation program are described. The inherent parallel nature of the grid generation algorithm used in VGRIDSG was exploited on a cluster of Silicon Graphics IRIS 4D workstations using the message passing libraries Application Portable Parallel Library (APPL) and Parallel Virtual Machine (PVM). Comparisons of speed-up are presented for generating the surface grid of a unit cube and a Mach 3.0 High Speed Civil Transport. It was concluded that for this application both APPL and PVM give approximately the same performance; however, APPL is easier to use.

The paper examines losses and heating in the rotors of large synchronous generators following: sustained stator-terminal and HV busbar L-L short-circuits at full load and no load; L-L-L short-circuits on a weak line connected to the HV generator transformer busbar, with clearance at fault-current zeros, where the generator either remains in synchronism or falls out of synchronism; and worst-case malsynchronization. Comparisons are made with negative-sequence losses in solid generator rotors following these disturbances, given by I₂²t computed from detailed analysis and estimated from approximate analytical expressions for an unloaded machine.

Results are reported from a large number of simultaneous acoustic measurements around a large horizontal-axis, downwind-configuration wind turbine generator. In addition, comparisons are made between measurements and calculations of both the discrete-frequency rotational harmonics and the broadband noise components. Sound pressure time histories and noise radiation patterns, as well as narrowband and broadband noise spectra, are presented for a range of operating conditions. The data are useful for purposes of environmental impact assessment.

This paper presents four algorithms to generate random forecast error time series, including a truncated-normal distribution model, a state-space based Markov model, a seasonal autoregressive moving average (ARMA) model, and a stochastic-optimization based model. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets, used for variable generation integration studies. A comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics. This paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.

We report an experimental study of the transients generated by pulsed x-rays, heavy ions, and different laser wavelengths in a Si p-i-n photodiode. We compare the charge collected by all of the excitation methods to determine the equivalent LET for pulsed x-rays relative to heavy ions. Our comparisons show that pulsed x-rays from synchrotron sources can generate a large range of equivalent LET and generate transients similar to those excited by laser pulses and heavy-ion strikes. We also look at how the pulse width of the transients changes for the different excitation methods. We show that the charge collected with pulsed x-rays is greater than expected as the x-ray photon energy increases. Combined with their capability of focusing to small spot sizes and of penetrating metallization, pulsed x-rays are a promising new tool for high-resolution screening of SEE susceptibility.

We present the results of 62 consecutive acetabular revisions using impaction bone grafting and a cemented polyethylene acetabular component in 58 patients (13 men and 45 women) after a mean follow-up of 27 years (25 to 30). All patients were prospectively followed. The mean age at revision was 59.2 years (23 to 82). We performed Kaplan-Meier (KM) analysis and also a Competing Risk (CR) analysis because, with long-term follow-up, the presence of a competing event (i.e. death) prevents the occurrence of the endpoint of re-revision. A total of 48 patients (52 hips) had died or had been re-revised at final review in March 2011. None of the deaths were related to the surgery. The mean Harris hip score of the ten surviving hips in ten patients was 76 points (45 to 99). The KM survivorship at 25 years for the endpoint 're-revision for any reason' was 58.0% (95% confidence interval (CI) 38 to 73) and for 're-revision for aseptic loosening' 72.1% (95% CI 51 to 85). With the CR analysis we calculated that the KM analysis overestimates the failure rate, by 74% and 93% respectively, for these endpoints. The current study shows that acetabular impaction bone grafting revisions provide good clinical results at over 25 years. PMID:26430007

This paper presents four algorithms to generate random forecast error time series, and compares their performance. The error time series are used to create real-time (RT), hour-ahead (HA), and day-ahead (DA) wind and load forecast time series that statistically match historically observed forecasting data sets used in power grid operation, to study the net load balancing need in variable generation integration studies. The four algorithms are truncated-normal distribution models, state-space based Markov models, seasonal autoregressive moving average (ARMA) models, and a stochastic-optimization based approach. The comparison is made using historical DA load forecast and actual load values to generate new sets of DA forecasts with similar statistical forecast error characteristics (i.e., mean, standard deviation, autocorrelation, and cross-correlation). The results show that all methods generate satisfactory results. One method may preserve one or two required statistical characteristics better than the other methods, but may not preserve the remaining characteristics as well. Because the wind and load forecast error generators are used in wind integration studies to produce wind and load forecast time series for stochastic planning processes, it is sometimes critical to use multiple methods to generate the error time series to obtain a statistically robust result. Therefore, this paper discusses and compares the capabilities of each algorithm to preserve the characteristics of the historical forecast data sets.
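The truncated-normal variant can be sketched as rejection sampling from a normal distribution fitted to the historical errors, bounded by the observed extremes; the history and actuals below are toy data, and matching autocorrelation and cross-correlation (which the paper's other methods address) is deliberately out of scope here:

```python
import numpy as np

# Draw forecast errors from a normal fitted to historical errors,
# rejecting draws outside the historically observed bounds, then add
# them to actuals to synthesize a new forecast series.
def truncated_normal_errors(hist_errors, n, rng):
    mu, sd = np.mean(hist_errors), np.std(hist_errors, ddof=1)
    lo, hi = np.min(hist_errors), np.max(hist_errors)
    out = []
    while len(out) < n:                      # simple rejection sampling
        draws = rng.normal(mu, sd, size=n)
        out.extend(draws[(draws >= lo) & (draws <= hi)])
    return np.array(out[:n])

rng = np.random.default_rng(42)
hist = rng.normal(0.0, 50.0, size=1000)          # toy DA load errors (MW)
actual_load = 1000.0 + 100.0 * rng.random(24)    # toy actual load (MW)
synthetic_forecast = actual_load + truncated_normal_errors(hist, 24, rng)
```

By construction the synthetic errors preserve the historical mean, spread, and bounds, which is the matching criterion this particular method targets.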

This document provides performance standards that one, as a generator of hazardous chemical, radioactive, or mixed wastes at the Berkeley Lab, must meet in order to manage that waste so as to protect Berkeley Lab staff and the environment; comply with waste regulations and ensure the continued safe operation of the workplace; have the waste transferred to the correct Waste Handling Facility; and enable the Environment, Health and Safety (EH and S) Division to properly pick up, manage, and ultimately send the waste off site for recycling, treatment, or disposal. If one uses and generates any of these wastes, one must establish a Satellite Accumulation Area and follow the guidelines in the appropriate section of this document. Topics include minimization of wastes, characterization of the wastes, containers, segregation, labeling, empty containers, and spill cleanup and reporting.

Here, the utility of Generative Topographic Maps (GTM) for data visualization, structure-activity modeling and database comparison is evaluated using subsets of the Database of Useful Decoys (DUD). Unlike other popular dimensionality reduction approaches such as Principal Component Analysis, Sammon Mapping or Self-Organizing Maps, the great advantage of GTMs is that they provide data probability distribution functions (PDF), both in the high-dimensional space defined by molecular descriptors and in the 2D latent space. PDFs for the molecules of different activity classes were successfully used to build classification models in the framework of the Bayesian approach. Because PDFs are represented by a mixture of Gaussian functions, the Bhattacharyya kernel has been proposed as a measure of the overlap of datasets, which leads to an elegant method of global comparison of chemical libraries. PMID:27477099
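The library-overlap idea can be sketched numerically: the Bhattacharyya coefficient ∫√(p·q) dA between two latent-space PDFs, each a mixture of equal-weight isotropic Gaussians (the form GTM produces), evaluated on a 2D grid. The mixture centers, variance, and grid below are invented for illustration.

```python
import numpy as np

def gaussian2d(grid, mean, var):
    """Isotropic 2D Gaussian density evaluated at grid points (n, 2)."""
    d = grid - mean
    return np.exp(-(d ** 2).sum(axis=1) / (2 * var)) / (2 * np.pi * var)

def mixture_pdf(grid, means, var):
    """Equal-weight Gaussian mixture, as in a GTM latent-space responsibility map."""
    return np.mean([gaussian2d(grid, m, var) for m in means], axis=0)

# 2D latent-space grid
xs = np.linspace(-3, 3, 200)
X, Y = np.meshgrid(xs, xs)
grid = np.column_stack([X.ravel(), Y.ravel()])
cell = (xs[1] - xs[0]) ** 2                       # area element for the Riemann sum

pA = mixture_pdf(grid, np.array([[-1.0, 0.0], [0.0, 1.0]]), 0.3)
pB = mixture_pdf(grid, np.array([[1.0, 0.0], [0.0, -1.0]]), 0.3)

bc_AB = np.sum(np.sqrt(pA * pB)) * cell           # overlap of two different libraries
bc_AA = np.sum(np.sqrt(pA * pA)) * cell           # self-overlap, ~1 for a proper PDF
```

A coefficient near 1 indicates near-identical latent distributions (self-comparison), while dissimilar libraries score lower; this is the quantity an overlap kernel between libraries would be built on.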

A two-stage slagging coal combustor developed by TRW Corporation was successfully integrated with an MHD generator developed by the Avco Corporation when the two companies cooperated in an operational demonstration of a coal-fired MHD power train under the sponsorship of DOE. The experimental components, rated at a nominal 20 MW thermal input, are the engineering prototypes of 50 MWth hardware to be supplied by the contractors to the recently commissioned Component Development and Integration Facility (CDIF), a federal MHD test site in Butte, Montana. A second series of tests was conducted in which the same channel and operating parameters were employed with an oil-fired ash-injected combustor (AIC) to provide performance comparisons. The only significant performance variation uncovered in the comparison tests was attributable to a non-optimum method and location for seed injection in the coal-fired combustor. The corrective measures are deemed to be relatively straightforward.

The United States Department of Energy (DOE) commissioned a study of the suitability of different advanced reactor concepts to support materials irradiations (i.e. a test reactor) or to demonstrate an advanced power plant/fuel cycle concept (a demonstration reactor). As part of the study, an assessment of the technical maturity of the individual concepts was undertaken to see which, if any, can support near-term deployment. A Working Group composed of the authors of this document performed the maturity assessment using the Technology Readiness Levels as defined in DOE’s Technology Readiness Guide. One representative design was selected for assessment from each of the six Generation-IV reactor types: gas-cooled fast reactor (GFR), lead-cooled fast reactor (LFR), molten salt reactor (MSR), supercritical water-cooled reactor (SCWR), sodium-cooled fast reactor (SFR), and very high temperature reactor (VHTR). Background information was obtained from previous detailed evaluations such as the Generation-IV Roadmap, but other technical references were also used, including consultations with concept proponents and subject matter experts. Outside of Generation-IV activities in which the US is a party, non-U.S. experience or data sources were generally not factored into the evaluations, as one cannot assume that such data are easily available or of sufficient quality to be used for licensing a US facility. The Working Group established the scope of the assessment (which systems and subsystems needed to be considered), adapted a specific technology readiness scale, and scored each system through discussions designed to achieve internal consistency across concepts. In general, the Working Group sought to determine which of the reactor options have sufficient maturity to serve either the test or demonstration reactor missions.

During the past years the understanding of multi-scale interaction problems has increased significantly. However, at present there exists a variety of analytical models for investigating multi-scale interactions, and hardly any specific comparisons have been performed among these models. In this work two different models for the generation of zonal flows from ion-temperature-gradient (ITG) background turbulence are discussed and compared. The methods used are the coherent mode coupling model and the wave kinetic equation (WKE) model. It is shown that the two models give qualitatively the same results even though the assumption on the spectral difference is used in the WKE approach.

We experimentally realize a robust real-time random number generator by differentially comparing the signal from a chaotic semiconductor laser and its delayed signal through a 1-bit analog-to-digital converter. The probability density distribution of the output chaotic signal based on the differential comparison method possesses an extremely small coefficient of Pearson's median skewness (1.5 × 10⁻⁶), which can yield a balanced random sequence much more easily than the previously reported method that compares the signal from the chaotic laser with a certain threshold value. Moreover, we experimentally demonstrate that our method can stably generate good random numbers at rates of 1.44 Gbit/s with excellent immunity from external perturbations, while the previously reported method fails. PMID:22453429
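The core trick is that the difference between a signal and its own delayed copy has a (nearly) symmetric distribution even when the signal itself is skewed, so thresholding the difference at zero gives balanced bits without tuning a threshold. The sketch below uses an asymmetric random surrogate in place of the chaotic laser intensity; the delay and sample counts are arbitrary choices, not the experiment's values.

```python
import numpy as np

rng = np.random.default_rng(1)
# surrogate "chaotic" intensity trace with a strongly asymmetric distribution
x = rng.gamma(shape=2.0, scale=1.0, size=200_000)

tau = 37                         # delay in samples (arbitrary for the sketch)
diff = x[tau:] - x[:-tau]        # differential comparison: signal minus delayed signal
bits = (diff > 0).astype(int)    # 1-bit quantization of the difference

def median_skew(v):
    """Pearson's median skewness coefficient: 3*(mean - median)/std."""
    return 3 * (v.mean() - np.median(v)) / v.std()

skew_raw = median_skew(x)        # clearly skewed: thresholding x gives biased bits
skew_diff = median_skew(diff)    # near zero: the difference distribution is symmetric
bias = abs(bits.mean() - 0.5)    # deviation of the bit stream from perfect balance
```

Comparing `skew_raw` and `skew_diff` shows why the differential scheme is less sensitive to drifts in the signal's own distribution than a fixed-threshold comparator.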

An algorithm for generating deep-layer mean temperatures from satellite microwave observations is presented. Unlike traditional temperature retrieval methods, this algorithm does not require a first-guess temperature of the ambient atmosphere. By eliminating the first guess, a potential source of systematic error has been removed. The algorithm is expected to yield long-term records that are suitable for detecting small changes in climate. The atmospheric contribution to the deep-layer mean temperature is given by the averaging kernel. The algorithm computes the coefficients that best approximate a desired averaging kernel from a linear combination of the satellite radiometer's weighting functions. The coefficients are then applied to the measurements to yield the deep-layer mean temperature. Three constraints were used in deriving the algorithm: (1) the sum of the coefficients must be one, (2) the noise of the product is minimized, and (3) the shape of the approximated averaging kernel is well-behaved. Note that a trade-off between constraints 2 and 3 is unavoidable. The algorithm can also be used to combine measurements from a future sensor (i.e., the 20-channel Advanced Microwave Sounding Unit (AMSU)) to yield the same averaging kernel as that based on an earlier sensor (i.e., the 4-channel Microwave Sounding Unit (MSU)). This will allow a time series of deep-layer mean temperatures based on MSU measurements to be continued with AMSU measurements. The AMSU is expected to replace the MSU in 1996.
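The three constraints map naturally onto a small constrained least-squares problem: fit the target averaging kernel (constraint 3), add a ridge penalty on the coefficients since, for uncorrelated channel noise, product noise scales with the coefficient norm (constraint 2), and enforce sum-to-one exactly via a Lagrange multiplier (constraint 1). The weighting functions and target kernel below are invented Gaussian shapes, not MSU/AMSU data, so this is a structural sketch only.

```python
import numpy as np

def kernel_coeffs(W, k_target, lam=1e-6):
    """
    W        : (n_channels, n_levels) channel weighting functions
    k_target : (n_levels,) desired averaging kernel
    Minimizes ||W.T @ a - k_target||^2 + lam*||a||^2  subject to  sum(a) = 1,
    solved via the KKT linear system for the equality constraint.
    """
    n = W.shape[0]
    A = W @ W.T + lam * np.eye(n)
    b = W @ k_target
    ones = np.ones(n)
    M = np.block([[A, ones[:, None]],
                  [ones[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(M, np.concatenate([b, [1.0]]))
    return sol[:n]                       # last entry is the Lagrange multiplier

# illustrative weighting functions: four Gaussian-shaped channels over 51 levels
levels = np.linspace(0, 50, 51)
W = np.array([np.exp(-(levels - c) ** 2 / (2 * 8.0 ** 2)) for c in (10, 20, 30, 40)])
k_target = 0.5 * (W[1] + W[2])           # a kernel the channels can represent
a = kernel_coeffs(W, k_target)
approx_kernel = W.T @ a                  # linear combination applied to the levels
```

Raising `lam` trades kernel-shape fidelity for lower noise amplification, which is exactly the constraint-2-versus-3 trade-off the abstract calls unavoidable.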

In part one of this document the Governing Documents and Definitions sections provide general guidelines and regulations applying to the handling of hazardous chemical wastes. The remaining sections provide details on how you can prepare your waste properly for transport and disposal. They are correlated with the steps you must take to properly prepare your waste for pickup. The purpose of the second part of this document is to provide the acceptance criteria for the transfer of radioactive and mixed waste to LBL's Hazardous Waste Handling Facility (HWHF). These guidelines describe how you, as a generator of radioactive or mixed waste, can meet LBL's acceptance criteria for radioactive and mixed waste.

A coarse-mesh (8 by 10), 7-layer global climate model was used to compute 15 months of meteorological history in two perpetual January experiments on a water planet (without continents) with a zonally symmetric climatological January sea surface temperature field. In the first of the two water planet experiments the initial atmospheric state was a set of zonal mean values of specific humidity, temperature, and wind at each latitude. In the second experiment the model was initialized with globally uniform mean values of specific humidity and temperature on each sigma level surface, constant surface pressure (1010 mb), and zero wind everywhere. A comparison was made of the mean January climatic states generated by the two water planet experiments. The first two months of each 15-January run were discarded, and 13-month averages were computed from months 3 through 15.

Background Techniques enabling targeted re-sequencing of the protein coding sequences of the human genome on next generation sequencing instruments are of great interest. We conducted a systematic comparison of the solution-based exome capture kits provided by Agilent and Roche NimbleGen. A control DNA sample was captured with all four capture methods and prepared for Illumina GAII sequencing. Sequence data from additional samples prepared with the same protocols were also used in the comparison. Results We developed a bioinformatics pipeline for quality control, short read alignment, variant identification and annotation of the sequence data. In our analysis, a larger percentage of the high quality reads from the NimbleGen captures than from the Agilent captures aligned to the capture target regions. High GC content of the target sequence was associated with poor capture success in all exome enrichment methods. Comparison of mean allele balances for heterozygous variants indicated a tendency to have more reference bases than variant bases in the heterozygous variant positions within the target regions in all methods. There was virtually no difference in the genotype concordance compared to genotypes derived from SNP arrays. A minimum of 11× coverage was required to make a heterozygote genotype call with 99% accuracy when compared to common SNPs on genome-wide association arrays. Conclusions Libraries captured with NimbleGen kits aligned more accurately to the target regions. The updated NimbleGen kit most efficiently covered the exome with a minimum coverage of 20×, yet none of the kits captured all the Consensus Coding Sequence annotated exons. PMID:21955854

An interlaboratory comparison using relative-humidity (RH) and temperature probes at three national measurement institutes and two accredited laboratories has been carried out. The work had three purposes: firstly, to establish the instruments’ level of reproducibility and suitability for use as transfer standards within their specified range of operation; secondly, to show the agreement of a method of RH generation utilizing certified non-saturated salt RH standards when compared with a method of RH calibration using a chilled-mirror reference and platinum-resistance thermometers; and finally, from the results obtained it is possible to establish the equivalence between the participating laboratories, to the level of uncertainty achievable with the transfer standards used. A total of six RH probes were tested in two groups. The instruments of the first group were calibrated in the range from 10 %rh to 90 %rh at a temperature of 23 °C. The second group of instruments was measured in the same RH range, but at the temperatures of 5 °C, 23 °C, and 50 °C. The objective of the tests on the second group of instruments was to determine the effect of a wider operating temperature range on performance. This article presents and discusses the results of the comparison in the context of an international collaboration that provides confidence in the measurements performed by the participants within their respective accredited scopes and the ILAC or the CIPM mutual recognition arrangements.

Terraces eroded into sediment (cut-fill) and bedrock (strath) preserve a geomorphic record of river activity. River terraces are often thought to form when a river switches from a period of low vertical incision rates and valley widening to high vertical incision rates and terrace abandonment. Consequently, terraces are frequently interpreted to reflect landscape response to changing external drivers, including tectonics, sea-level, and most commonly, climate. In contrast, unsteady lateral migration in meandering rivers may generate river terraces even under constant vertical incision and without changes in external forcing. To explore this latter mechanism, we use a numerical model and an automated terrace detection algorithm to simulate landscape evolution by a vertically incising, meandering river and isolate the age and geometric fingerprints of intrinsically generated river terraces. Simulations indicate that terraces form for a wide range of lateral and vertical incision rates, and the time interval between unique terrace levels is limited by a characteristic timescale for relief generation. Surprisingly, intrinsically generated terraces are commonly paired, an attribute that is thought to be diagnostic of climate change. For low ratios of vertical-to-lateral erosion rates, modeled terraces are longitudinally extensive and typically dip toward the valley center, and terrace slope is proportional to the ratio of vertical to lateral erosion. Evolving, spatial differences in bank strength between bedrock and sediment reduce terrace formation frequency and length, and can explain sub-linear terrace margins at valley boundaries. Comparison of model predictions to natural river terraces indicates that terrace length is the most reliable indicator of terrace formation by pulses of vertical incision, and may contain the imprint of past climate change on landscapes.

Chemically amplified deep UV (CA-DUV) positive resists are the enabling materials for manufacture of devices at and below 0.18 micrometer design rules in the semiconductor industry. CA-DUV resists are typically based on a combination of an acid labile polymer and a photoacid generator (PAG). Upon UV exposure, a catalytic amount of a strong Bronsted acid is released and is subsequently used in a post-exposure bake step to deprotect the acid labile polymer. Deprotection transforms the acid labile polymer into a base soluble polymer and ultimately enables positive tone image development in dilute aqueous base. As CA-DUV resist systems continue to mature and are used in increasingly demanding situations, it is critical to develop a fundamental understanding of how robust these materials are. One of the most important factors to quantify is how much acid is photogenerated in these systems at key exposure doses. For the purpose of quantifying photoacid generation several methods have been devised. These include spectrophotometric methods, ion conductivity methods, and most recently an acid-base type titration similar to the standard addition method. This paper compares many of these techniques. First, comparisons between the most commonly used acid sensitive dye, tetrabromophenol blue sodium salt (TBPB), and a less common acid sensitive dye, Rhodamine B base (RB), are made in several resist systems. Second, the novel acid-base type titration based on the standard addition method is compared to the spectrophotometric titration method. During these studies, the makeup of the resist system is probed as follows: the photoacid generator and resist additives are varied to understand the impact of each of these resist components on the acid generation process.

This study compares kinetic parameters determined by open-system pyrolysis and hydrous pyrolysis using aliquots of source rocks containing different kerogen types. Kinetic parameters derived from these two pyrolysis methods not only differ in the conditions employed and products generated, but also in the derivation of the kinetic parameters (i.e., isothermal linear regression and non-isothermal nonlinear regression). Results of this comparative study show that there is no correlation between kinetic parameters derived from hydrous pyrolysis and open-system pyrolysis. Hydrous-pyrolysis kinetic parameters determine narrow oil windows that occur over a wide range of temperatures and depths depending in part on the organic-sulfur content of the original kerogen. Conversely, open-system kinetic parameters determine broad oil windows that show no significant differences with kerogen types or their organic-sulfur contents. Comparisons of the kinetic parameters in a hypothetical thermal-burial history (2.5 °C/my) show open-system kinetic parameters significantly underestimate the extent and timing of oil generation for Type-IIS kerogen and significantly overestimate the extent and timing of petroleum formation for Type-I kerogen compared to hydrous-pyrolysis kinetic parameters. These hypothetical differences determined by the kinetic parameters are supported by natural thermal-burial histories for the Naokelekan source rock (Type-IIS kerogen) in the Zagros basin of Iraq and for the Green River Formation (Type-I kerogen) in the Uinta basin of Utah. Differences in extent and timing of oil generation determined by open-system pyrolysis and hydrous pyrolysis can be attributed to the former not adequately simulating natural oil generation conditions, products, and mechanisms.
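How a kinetic-parameter pair controls the position of the oil window on a geologic heating ramp can be sketched with single-reaction first-order Arrhenius kinetics integrated along a 2.5 °C/my ramp. The (A, Ea) pairs below are illustrative round numbers, not values from either pyrolysis method, and real kinetic models typically use distributions of activation energies rather than a single reaction.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def conversion(A, Ea, rate_C_per_my, T0=20.0, T1=250.0, n=20000):
    """Fractional conversion x(T) for first-order kinetics dx/dt = A*exp(-Ea/RT)*(1-x)
    on a linear heating ramp; A in 1/s, Ea in J/mol, rate in degC per million years."""
    sec_per_my = 3.1536e13
    T = np.linspace(T0, T1, n) + 273.15
    dt = (T1 - T0) / rate_C_per_my / (n - 1) * sec_per_my   # seconds per step
    k = A * np.exp(-Ea / (R * T))
    x = 1 - np.exp(-np.cumsum(k) * dt)       # x(T) = 1 - exp(-integral of k dt)
    return T - 273.15, x

def t50(A, Ea):
    """Temperature (degC) at 50% conversion: the middle of the 'oil window'."""
    T, x = conversion(A, Ea, 2.5)
    return T[np.searchsorted(x, 0.5)]

t50_low = t50(1e13, 200e3)    # lower Ea/A pair: converts earlier (shallower)
t50_high = t50(1e14, 220e3)   # higher Ea/A pair: window shifted deeper/hotter
```

Shifting Ea and A together moves the computed window by tens of degrees Celsius, i.e., hundreds of meters of burial, which is why the two pyrolysis methods' differing parameters predict such different generation timing.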

Revision rhinoplasty is one of the most challenging operations the facial plastic surgeon performs, given the complex 3-dimensional anatomy of the nose and the psychological impact it has on patients. The intricate interplay of cartilages, bone, and soft tissue in the nose gives it its aesthetics and function. Facial harmony and attractiveness depend greatly on the nose, given its central position in the face. In the following article, the authors review common motivations and anatomic findings for patients seeking revision rhinoplasty, based on the senior author's 30-year experience with rhinoplasty and a review of the literature. PMID:26616705

Shale oils generated using different laboratory pyrolysis methods have been studied using standard oil characterization methods as well as Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) with electrospray ionization (ESI) and atmospheric photoionization (APPI) to assess differences in molecular composition. The pyrolysis oils were generated from samples of the Mahogany zone oil shale of the Eocene Green River Formation collected from outcrops in the Piceance Basin, Colorado, using three pyrolysis systems under conditions relevant to surface and in situ retorting approaches. Significant variations were observed in the shale oils, particularly the degree of conjugation of the constituent molecules and the distribution of nitrogen-containing compound classes. Comparison of FT-ICR MS results to other oil characteristics, such as specific gravity; saturate, aromatic, resin, asphaltene (SARA) distribution; and carbon number distribution determined by gas chromatography, indicated correspondence between higher average double bond equivalence (DBE) values and increasing asphaltene content. The results show that, based on the shale oil DBE distributions, highly conjugated species are enriched in samples produced under low pressure, high temperature conditions, and under high pressure, moderate temperature conditions in the presence of water. We also report, for the first time in any petroleum-like substance, the presence of N4 class compounds based on FT-ICR MS data. Using double bond equivalence and carbon number distributions, structures for the N4 class and other nitrogen-containing compounds are proposed.

Mitochondrial genome (mitogenome) sequences are being generated with increasing speed due to the advances of next-generation sequencing (NGS) technology and associated analytical tools. However, detailed comparisons to explore the utility of alternative NGS approaches applied to the same taxa have not been undertaken. We compared a 'traditional' Sanger sequencing method with two NGS approaches (shotgun sequencing and non-indexed, multiplex amplicon sequencing) on four different sequencing platforms (Illumina's HiSeq and MiSeq, Roche's 454 GS FLX, and Life Technologies' Ion Torrent) to produce seven (near-) complete mitogenomes from six species that form a small radiation of caecilian amphibians from the Seychelles. The fastest, most accurate method of obtaining mitogenome sequences that we tested was direct sequencing of genomic DNA (shotgun sequencing) using the MiSeq platform. Bayesian inference and maximum likelihood analyses using seven different partitioning strategies were unable to resolve compellingly all phylogenetic relationships among the Seychelles caecilian species, indicating the need for additional data in this case. PMID:27280454

An extensive comparison of the properties of positron beams produced by an ultra-intense femtosecond laser in direct and indirect schemes has been performed with two-dimensional particle-in-cell and Monte Carlo simulations. It is shown that the positron beam generated in the indirect scheme has a higher yield (10¹⁰), a higher temperature (28.8 MeV), a shorter pulse duration (5 ps), and a smaller divergence (8°) than in the direct case (10⁹ yield, 4.4 MeV temperature, 40 ps pulse duration, and 60° divergence). In addition, it was found that the positron/gamma ratio in the indirect scheme is one order of magnitude higher than that in the direct one, which represents a higher signal/noise ratio in positron detection. Nevertheless, the direct generation method still has its own unique advantage, the so-called target normal sheath acceleration, which can result in quasi-monoenergetic positron beams that may serve in some specialized applications.

A considerable number of methods for pansharpening remote-sensing images have been developed to generate higher spatial resolution multispectral images by the fusion of lower resolution multispectral images and higher resolution panchromatic images. Because pansharpening alters the spectral properties of multispectral images, method selection is one of the key factors influencing the accuracy of subsequent analyses such as land-cover classification or change detection. In this study, seven pixel-based pansharpening methods (additive wavelet intensity, additive wavelet principal component, generalized Laplacian pyramid with spectral distortion minimization, generalized intensity-hue-saturation (GIHS) transform, GIHS adaptive, Gram-Schmidt spectral sharpening, and block-based synthetic variable ratio) were compared using AVNIR-2 and PRISM onboard ALOS from the viewpoint of the preservation of spectral properties of AVNIR-2. A visual comparison was made between pansharpened images generated from spatially degraded AVNIR-2 and original images over urban, agricultural, and forest areas. The similarity of the images was evaluated in terms of the image contrast, the color distinction, and the brightness of the ground objects. In the quantitative assessment, three kinds of statistical indices, correlation coefficient, ERGAS, and Q index, were calculated by band and land-cover type. These scores were relatively superior in bands 2 and 3 compared with the other two bands, especially over urban and agricultural areas. Band 4 showed a strong dependency on the land-cover type. This was attributable to the differences in the observing spectral wavelengths of the sensors and local scene variances.
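Two of the quantitative indices named above have compact definitions that are easy to state in code: ERGAS (a band-averaged relative RMSE scaled by the resolution ratio, lower is better) and the universal image quality index Q (1 for identical images). The sketch below evaluates both on synthetic images; the arrays and noise level are invented, and the resolution ratio is passed in (for AVNIR-2 at 10 m sharpened with 2.5 m PRISM it would be 2.5/10 = 0.25).

```python
import numpy as np

def ergas(ref, fused, ratio):
    """ERGAS: 100 * ratio * sqrt(mean over bands of (RMSE_b / mean_b)^2).
    ref, fused: (bands, H, W); ratio: pan/MS pixel-size ratio (e.g. 0.25)."""
    bands = ref.shape[0]
    acc = 0.0
    for b in range(bands):
        rmse = np.sqrt(np.mean((ref[b] - fused[b]) ** 2))
        acc += (rmse / ref[b].mean()) ** 2
    return 100.0 * ratio * np.sqrt(acc / bands)

def q_index(ref, fused):
    """Universal image quality index Q: combines correlation, luminance,
    and contrast distortion; Q = 1 iff the images are identical."""
    x, y = ref.ravel(), fused.ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

rng = np.random.default_rng(4)
ref = rng.uniform(50, 200, size=(4, 64, 64))       # 4-band "reference" image
fused = ref + rng.normal(0, 2.0, size=ref.shape)   # fusion result with mild distortion
```

Both indices are computed per band in validation studies like the one above, which is what exposes the band-2/band-3 versus band-4 behavior reported.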

The differences between two types of pose-based UAV path generation methods, clothoid and Dubins, are analyzed in this thesis. The Dubins path is a combination of circular arcs and straight line segments; therefore its curvature will exhibit sudden jumps between constant values. The resulting path will have a minimum length if turns are performed at the minimum possible turn radius. The clothoid path consists of a similar combination of arcs and segments, but the difference is that the clothoid arcs have a linearly varying curvature and are generated based on Fresnel integrals. Geometrically, the generation of the clothoid arc starts with a large curvature that decreases to zero. The resulting clothoid path is longer than the Dubins path between the same two poses and for the same minimum turn radius. These two algorithms are the focus of this research because of their geometrical simplicity, flexibility, and low computational requirements. The comparison between the clothoid and Dubins algorithms relies on extensive simulation results collected using an ad-hoc developed automated data acquisition tool within the WVU UAV simulation environment. The model of a small jet engine UAV has been used for this purpose. The experimental design considers several primary factors, such as different trajectory tracking control laws, normal and abnormal flight conditions, relative configuration of poses, and wind and turbulence. A total of five different controllers have been considered: three conventional with fixed parameters and two adaptive. The abnormal flight conditions include locked or damaged actuators (stabilator, aileron, or rudder) and sensor bias affecting the roll, pitch, or yaw rate gyros that are used in the feedback control loop. The relative configuration of consecutive poses is considered in terms of heading (required turn angle) and relative location of start and end points (position quadrant). Wind and turbulence effects were analyzed for different wind speed and
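The geometric contrast between the two arc types comes entirely from their curvature profiles: a Dubins arc has piecewise-constant curvature (a jump at the arc boundary), while a clothoid arc ramps curvature linearly, which is what the Fresnel integrals parameterize in closed form. The sketch below integrates both profiles numerically (a stand-in for the Fresnel-integral evaluation); the step size, arc length, and curvature values are arbitrary illustration choices.

```python
import numpy as np

def path_from_curvature(kappa, ds, theta0=0.0):
    """Integrate a planar path (x, y, heading) from a sampled curvature profile."""
    theta = theta0 + np.cumsum(kappa) * ds   # heading is the integral of curvature
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return x, y, theta

ds, n = 0.01, 1000
s = np.arange(n) * ds
# Dubins-style arc: constant curvature (a sudden jump from zero at the arc start)
kap_dubins = np.full(n, 0.5)
# clothoid arc: curvature decreases linearly from 0.5 to 0 (Fresnel-integral geometry)
kap_clothoid = 0.5 * (1 - s / s[-1])

xd, yd, thd = path_from_curvature(kap_dubins, ds)
xc, yc, thc = path_from_curvature(kap_clothoid, ds)
```

Over the same arc length and the same peak curvature (i.e., the same minimum turn radius), the linear ramp accumulates only half the heading change of the constant-curvature arc, which is why a clothoid path needs more length than a Dubins path to complete the same turn.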

Revised high-heeled shoes (HHSs) were designed to improve the shortcomings of standard HHSs. This study was conducted to compare revised and standard HHSs with regard to joint angles and electromyographic (EMG) activity of the lower extremities during standing. The participants were five healthy young women. Data regarding joint angles and EMG activity of the lower extremities were obtained under three conditions: barefoot, when wearing revised HHSs, and when wearing standard HHSs. Lower extremity joint angles in the three-dimensional plane were confirmed using a VICON motion capture system. EMG activity of the lower extremities was measured using active bipolar surface EMG. Kruskal-Wallis one-way analysis of variance by ranks was applied to analyze differences among the three standing conditions. Compared with the barefoot condition, the standard HHSs condition differed more than the revised HHSs condition with regard to lower extremity joint angles during standing. EMG activity of the lower extremities differed in the revised HHSs condition, but the differences among the three conditions were not significant. Wearing revised HHSs may positively impact joint angles and EMG activity of the lower extremities by improving body alignment while standing. PMID:27163313

The characterization of radioactive emissions from unstable isotopes (intrinsic radiation) is necessary for shielding and radiological-dose calculations from radioactive materials. While most radiation transport codes, e.g., MCNP [X-5 Monte Carlo Team, 2003], provide the capability to input user-prescribed source definitions, such as radioactive emissions, they do not provide the capability to calculate the correct radioactive-source definition given the material compositions. Special modifications to MCNP have been developed in the past to allow the user to specify an intrinsic source, but these modifications have not been implemented into the primary source base [Estes et al., 1988]. To facilitate the description of the intrinsic-radiation source from a material with a specific composition, the Intrinsic Source Constructor library (LIBISC) and MCNP Intrinsic Source Constructor (MISC) utility have been written. The combination of LIBISC and MISC will be herein referred to as the ISC package. LIBISC is a statically linkable C++ library that provides the necessary functionality to construct the intrinsic-radiation source generated by a material. Furthermore, LIBISC provides the ability to use different particle-emission databases, radioactive-decay databases, and natural-abundance databases, allowing the user flexibility in the specification of the source if one database is preferred over others. LIBISC also provides functionality for aging materials and producing a thick-target bremsstrahlung photon source approximation from the electron emissions. The MISC utility links to LIBISC and facilitates the description of intrinsic-radiation sources in a format directly usable with the MCNP transport code. Through a series of input keywords and arguments the MISC user can specify the material, age the material if desired, and produce a source description of the radioactive emissions from the material in an MCNP-readable format. Further details of using the MISC utility can

Background and purpose Ceramic-on-ceramic (CoC) bearings were introduced in total hip arthroplasty (THA) to reduce problems related to polyethylene wear. We compared the 9-year revision risk for cementless CoC THA and for cementless metal-on-polyethylene (MoP) THA. Patients and methods In this prospective, population-based study from the Danish Hip Arthroplasty Registry, we identified all the primary cementless THAs that had been performed from 2002 through 2009 (n = 25,656). Of these, 1,773 THAs with CoC bearings and 9,323 THAs with MoP bearings were included in the study. To estimate the relative risk (RR) of revision, we used regression with the pseudo-value approach and treated death as a competing risk. Results 444 revisions were identified: 4.0% for CoC THA (71 of 1,773) and 4.0% for MoP THA (373 of 9,323). No statistically significant difference in the risk of revision for any reason was found for CoC and MoP bearings after 9 years of follow-up (adjusted RR = 1.3, 95% CI: 0.72–2.4). Revision rates due to component failure were 0.5% (n = 8) for CoC bearings and 0.1% (n = 6) for MoP bearings (p < 0.001). 6 patients with CoC bearings (0.34%) underwent revision due to ceramic fracture. Interpretation When compared to the “standard” MoP bearings, CoC THA had a 33% higher (though not statistically significantly higher) risk of revision for any reason at 9 years. PMID:25637339

The effect of service-learning courses on student growth was compared for 321 first-generation and 782 non-first-generation undergraduate students at a large urban university. Student growth encompassed both academic and professional skill development. The majority of students reported significant academic and professional development after…

NASA continues to focus on improving safety and reliability while reducing the annual cost of meeting human space flight and unique ISS and exploration needs. NASA's Space Transportation Architecture Study (STAS) Phase 2 in early 1998 focused on space transportation options. Subsequently, NASA directed parallel industry and government teams to conduct the Integrated Space Transportation Plan effort (STAS Phase 3). The objective of ISTP was to develop technology requirements, roadmaps, and a risk-reduction portfolio that considered an expanded definition of "clean-sheet" and Shuttle-derived second-generation Earth-to-orbit (ETO) transportation systems in support of a 2005 RLV competition for NASA missions beginning in 2010. NASA provided top-level requirements for improvements in safety, reliability, and cost, and a set of design reference missions representing NASA ISS, human exploration, commercial, and other civil and government needs. This paper addresses the challenges of meeting NASA's objectives while servicing the varied market segments represented in the ISTP design reference missions, and provides a summary of technology development needs and candidate system concepts. A comparison of driving requirements, architectures, and technology needs is discussed, and descriptions of viable Shuttle-derived and next-generation systems to meet the market needs are presented.

Minerals containing various forms of sulfur can generate AMD (Acid Mine Drainage) or ARD (Acid Rock Drainage) when exposed to air and/or water, with serious effects on the ecosystem and even on humans. To minimize the hazards of acid drainage, it is necessary to assess in advance the acid-generation potential of rocks and to estimate the amount of acid that may be generated. Because of its relatively simple and effective experimental procedure, the method of combining the results of ABA (Acid Base Accounting) and NAG (Net Acid Generation) tests has been commonly used in determining acid drainage conditions. The simplicity and effectiveness of this method, however, derive from sweeping assumptions that simplify the chemical reactions, and this often leads to samples being classified as UC (Uncertain), which then require additional experimental or field data to be reclassified properly. This paper therefore attempts to find the reasons that cause samples to be classified as UC and to suggest a new series of experiments by which such samples can be reclassified appropriately. Previous studies on evaluating potential acid generation and neutralization capacity were reviewed, and three individual experiments were selected in light of their applicability and their compatibility in minimizing unwanted interference among experiments. The proposed experiments comprise sulfur speciation, ABCC (Acid Buffering Characteristic Curve), and Modified NAG, which are improved versions of the existing Total S, ANC (Acid Neutralizing Capacity), and NAG experiments, respectively. To assure the applicability of the experiments, 36 samples from 19 sites with diverse geologies, field properties, and weathering conditions were collected. The samples were first subjected to the existing experiments, and 14 samples that either were classified as UC or could serve as a comparison group were selected. Afterwards, the selected samples were used to conduct the suggested
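The combined ABA/NAG screening described above reduces to a simple decision rule: classify a sample as acid forming or non-acid forming when the two tests agree, and as uncertain when they disagree. The sketch below uses commonly quoted screening thresholds (NAPP = 0 kg H2SO4/t and NAG pH = 4.5) as assumptions, not values taken from this paper.

```python
def classify_ard(napp, nag_ph):
    """Screening classification combining ABA and NAG test results.
    napp:   net acid producing potential (kg H2SO4/t), NAPP = MPA - ANC.
    nag_ph: pH of the NAG test liquor after peroxide oxidation.
    Thresholds (NAPP 0, NAG pH 4.5) follow common screening practice."""
    if napp > 0 and nag_ph < 4.5:
        return "PAF"   # potentially acid forming: both tests agree
    if napp <= 0 and nag_ph >= 4.5:
        return "NAF"   # non-acid forming: both tests agree
    return "UC"        # tests disagree -> uncertain, needs further work

# Illustrative samples: (NAPP, NAG pH)
samples = [(25.0, 3.1), (-40.0, 6.8), (15.0, 6.0)]
labels = [classify_ard(napp, ph) for napp, ph in samples]
```

The third sample shows how a UC result arises: a positive NAPP (ABA says acid forming) with a near-neutral NAG pH (NAG says non-acid forming), which is exactly the disagreement the proposed extra experiments aim to resolve.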

The use of cadavers for orthopaedic biomechanics research is well established, but it presents difficulties to researchers in terms of cost, biosafety, availability, and ease of use. High-fidelity composite models of human bone have been developed for use in biomechanical studies. While several studies have utilized composite models of the human pelvis for testing orthopaedic reconstruction techniques, few biomechanical comparisons of the properties of cadaveric and composite pelves exist. The aim of this study was to compare the mechanical properties of cadaveric pelves to those of the 4th-generation composite model. An Instron ElectroPuls E10000 mechanical testing machine was used to load specimens with orientation, boundary conditions, and degrees of freedom that approximated those occurring during the single-leg stance phase of walking, including the hip abductor force. Each specimen was instrumented with strain gauge rosettes. Overall specimen stiffness and principal strains were calculated from the test data. Composite specimens showed significantly higher overall stiffness and slightly less variability between specimens (composite K = 1448 ± 54 N/m, cadaver K = 832 ± 62 N/m; p < 0.0001). Strains measured at specific sites in the composite models and cadavers were similar (though not identical) only when the applied load was scaled to overall construct stiffness. This finding regarding strain distribution, together with the difference in overall stiffness, must be accounted for when using these composite models for biomechanics research. Altering the cortical wall thickness or tuning the elastic moduli of the composite material may improve future generations of the composite model. PMID:26839060

Two of the largest contributors to electron pitch angle diffusion in the plasmasphere are plasmaspheric hiss and lightning-generated whistler mode waves. Several modeling efforts have been made to describe the interaction between electrons and the waves associated with these natural processes, most notably by Abel and Thorne [1998] and Meredith et al. [2007, 2009]. We present an additional lightning-generated whistler diffusion model based on the recent VLF spectral density climatology of Colman and Starks [2013]. Monthly averages of the wave power distribution used to develop this model are provided. A polynomial fit to the spectral intensity profiles is used to describe the power distribution instead of the usual Gaussian formalism. Comparisons between these models are facilitated via a program based on quasi-linear theory, using input parameters representative of each model. Diffusion coefficients are presented as a function of equatorial pitch angle and L-shell for L-shells in the range 2.5–4.0 at electron energies of 0.1, 0.5, 1.0, and 5.0 MeV. The diffusion coefficients are applied to the CRRESELE radiation belt model to determine electron loss timescales. The diffused electron flux pitch angle distributions are presented for CRRESELE energies of 0.65, 2.0, 3.15, and 5.75 MeV and at elapsed times of 30 days, 90 days, 1 year, and 4 years after the start of diffusion. Our results are consistent with prior modeling determinations for small wave normal angle propagation, but less diffusive for large wave normal angles.
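The model replaces the usual Gaussian wave-power parameterization with a polynomial description of the spectral intensity profile. The sketch below uses purely synthetic values (not the Colman and Starks [2013] climatology) and an interpolating cubic as a stand-in for the least-squares polynomial fit used in the paper.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through points (xs, ys) at x.
    With four points this yields a cubic polynomial description of the
    profile, in place of a Gaussian parameterization."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Synthetic log10 spectral-density profile vs. frequency (kHz);
# the numbers are illustrative only.
freq_khz = [1.0, 3.0, 5.0, 9.0]
log_power = [-14.2, -13.2, -13.4, -14.8]

p_at_nodes = [lagrange_eval(freq_khz, log_power, f) for f in freq_khz]
p_mid = lagrange_eval(freq_khz, log_power, 4.0)  # interpolated log-power at 4 kHz
```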

This report evaluates two options for providing reliable power to rural areas in India. The benefits and costs are compared for biomass-based distributed generation (DG) systems versus a 1200-MW central grid coal-fired power plant. The biomass-based DG systems are examined both as alternatives to grid extension and as supplements to central grid power. The benefits are divided into three categories: those associated with providing reliable power from any source, those associated specifically with biomass-based DG technology, and the benefits of a central grid coal plant. The report compares the estimated delivered costs of electricity from the DG systems to those of the central plant. The analysis includes estimates for a central grid coal plant and four potential DG system technologies: Stirling engines, direct-fired combustion turbines, fuel cells, and biomass integrated gasification combined cycles. The report also discusses issues affecting India's rural electricity demand, including economic development, power reliability, and environmental concerns. The comparison of electricity costs between the biomass DG systems and the coal-fired central grid station demonstrated that the DG technologies may be able to produce very competitively priced electricity by the start of the next century. The use of DG technology may provide a practical means of addressing many rural electricity issues that India will face in the future. Biomass DG technologies in particular offer unique advantages for the environment and for economic development that will make them especially attractive. 58 refs., 31 figs.
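The delivered-cost comparison rests on a standard levelized-cost calculation: annualized capital plus O&M and fuel, divided by delivered energy. A minimal sketch follows; the discount rate, lifetime, and all cost inputs are illustrative assumptions, not the report's estimates.

```python
def capital_recovery_factor(rate, years):
    """Annualizes a capital cost over `years` at discount `rate`."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def lcoe(capital_cost, crf, fixed_om, fuel_cost, annual_mwh):
    """Levelized cost of electricity ($/MWh): annualized capital plus
    annual O&M and fuel, divided by annual generation."""
    return (capital_cost * crf + fixed_om + fuel_cost) / annual_mwh

# Illustrative inputs only, normalized per installed MW:
crf = capital_recovery_factor(0.10, 20)            # 10% discount, 20-y life
dg = lcoe(1.5e6, crf, 40_000, 120_000, 6_000)      # small biomass DG unit
coal = lcoe(1.2e6, crf, 30_000, 90_000, 7_000)     # share of central coal plant
```

In a full comparison like the report's, the DG side would also credit avoided transmission/distribution costs and reliability benefits, which this per-MW sketch omits.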

The advent of next-generation sequencing technologies has been accompanied by the development of many whole-genome sequence assembly methods and software tools, especially for de novo fragment assembly. Because little is known about the applicability and performance of these software tools, choosing a fitting assembler is a tough task. Here, we provide information on the applicability of each program and, above all, compare the performance of eight distinct tools against eight groups of simulated datasets from the Solexa sequencing platform. Considering computational time, maximum random-access memory (RAM) occupancy, assembly accuracy, and integrity, our study indicates that string-based assemblers and overlap-layout-consensus (OLC) assemblers are well suited for very short reads and for longer reads of small genomes, respectively. For large datasets of more than a hundred million short reads, De Bruijn graph-based assemblers are more appropriate. In terms of software implementation, string-based assemblers are superior to graph-based ones, among which SOAPdenovo requires a complex configuration file. Our comparison study will assist researchers in selecting a well-suited assembler and offers essential information for the improvement of existing assemblers and the development of novel ones. PMID:21423806

Consuming 40 to 50 percent of software development cost, software testing is one of the most resource-consuming activities in the software development lifecycle. To ensure an acceptable level of quality and reliability in a typical software product, it is desirable to test every possible combination of input data under various configurations. Owing to the combinatorial explosion problem, exhaustive testing is practically impossible. Resource constraints, cost factors, and strict time-to-market deadlines are among the main factors that inhibit such consideration. Earlier work suggests that a sampling strategy (i.e., one based on t-way parameter interaction, known as t-way testing) can effectively reduce the number of test cases without affecting fault-detection capability. However, for a very large system, even a t-way strategy will produce a large test suite that needs to be executed. In the end, only part of the planned test suite can be executed in order to meet the aforementioned constraints. Test engineers therefore need to measure the effectiveness of a partially executed test suite in order to assess the risk they are taking. Motivated by this problem, this paper presents an effectiveness comparison of partially executed t-way test suites generated by existing strategies, using a tuple-coverage method. With it, test engineers can predict the effectiveness of the testing process when only part of the original test cases is executed.
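Tuple coverage of a partially executed suite can be measured by enumerating every t-way combination of parameter values and checking whether some executed test contains it. A minimal sketch (the parameter model and executed tests below are illustrative, not from any strategy in the paper):

```python
from itertools import combinations, product

def tuple_coverage(params, tests, t):
    """Fraction of all t-way parameter-value tuples covered by `tests`.
    params: list of value lists, one per parameter.
    tests:  list of executed test cases (full value assignments)."""
    required, covered = 0, 0
    for cols in combinations(range(len(params)), t):
        for values in product(*(params[c] for c in cols)):
            required += 1
            # A tuple is covered if any executed test matches it exactly
            # on the chosen parameter positions.
            if any(all(test[c] == v for c, v in zip(cols, values))
                   for test in tests):
                covered += 1
    return covered / required

# Three boolean parameters; only 2 tests of a pairwise suite executed.
params = [[0, 1], [0, 1], [0, 1]]
executed = [(0, 0, 0), (1, 1, 1)]
cov = tuple_coverage(params, executed, t=2)  # 6 of 12 pairs covered
```

Running this after each executed test gives the incremental picture the paper uses: how much 2-way interaction coverage is already achieved when the suite is cut short.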

square meter) and suspended solids concentration, which allows comparison of two plots with different dimensions. Our results show the influence of field plot length, with a clear increase in soil loss with increasing plot length. The results suggest that the specific sediment concentration (related to one meter of plot length) measured at the large plot gauge is about twice the concentration generated by the small plot. A sample for grain-size estimation was taken during last year's experiments. The information was used for calculating and comparing the dragging forces on both plots, and the particle-size distribution of the eroded particles was also compared to the topsoil texture. The experiments were also analysed with the aim of validating the surface runoff parameters in the mathematical model SMODERP. The input parameters for validation were based on measured rainfall intensity, time of surface runoff initiation, infiltration and surface runoff discharge, mean velocity, and velocity in runoff preferential paths. The research has been supported by grants No. SGS14/180/OHK1/3T/11 and No. TA02020647.

A 3.5-year effort to characterize the aerodynamic behavior of the Ares I-X Flight Test Vehicle (AIX FTV) is described in this paper. The AIX FTV was designed to be representative of the Ares I Crew Launch Vehicle (CLV). While there are several differences in the outer mold line from the current revision of the CLV, the overall length, mass distribution, and flight systems of the two vehicles are very similar. This paper briefly touches on each of the aerodynamic databases developed in the program, describing the methodology employed, experimental and computational contributions to the generation of the databases, and how well the databases and underlying computations compare to actual flight test results.

The purpose of the study is to analyze the content validity of Public Personnel Selection Exam (KPSS), which is used for teacher recruitment in Turkey, in accordance with the teaching profession courses and Bloom's revised taxonomy of educational aims. For this purpose, the study was designed as a descriptive survey model. The data were…

Because of prevailing dissatisfaction with the 1967 version, a revised version of Chapter 12 of the Anglo-American Cataloging Rules was published in 1975. The scope of the chapter was greatly expanded to cover charts, dioramas, flash cards, games, kits, microscope slides, models, realia, slides, transparencies, and videorecordings as well as…

Compared the performance of 56 children on the 11 subscales of the Luria-Nebraska Neuropsychological Battery-Children's Revision. Results revealed significant differences on Receptive Speech and Expressive Language subscales, suggesting a possible differential sensitivity of the children's Luria-Nebraska to verbal and nonverbal cognitive deficits.…

A review of hospital records was conducted for children evaluated for autism spectrum disorders who completed both the Leiter International Performance Scale-Revised (Leiter-R) and Stanford-Binet Intelligence Scales, 5th Edition (SB5). Participants were between 3 and 12 years of age. Diagnoses were autistic disorder (n = 26, 55%) and pervasive…

Greater numbers of women are entering and working in higher education. Some of these women are the first in their families to attain academic degrees. They are known as first-generation students, and the care of children and others is often responsible for their withdrawal from academic study. This study addressed the void of information…

Widespread interest in human impacts on the Earth has prompted much questioning in fields of concern to the general public. One of these issues is the extent of the environmental impact of hydro-based power generation, once viewed as a clean energy source. From the early 1990s onwards, papers and studies have challenged this assumption with claims that hydroelectric dams also emit greenhouse gases, generated by the decomposition of the biomass flooded when these reservoirs are filled. Like other freshwater bodies, hydroelectric reservoirs produce gases underwater through the biological decomposition of organic matter, and some of these biogenic gases contribute to global warming. The decomposition occurs mainly under an anaerobic regime, emitting methane (CH4), nitrogen (N2), and carbon dioxide (CO2). This paper compares gross greenhouse-gas fluxes measured in Brazilian hydropower reservoirs with those of thermal power plants using different types of fuel and technology. Measurements were carried out in the Manso, Serra da Mesa, Corumbá, Itumbiara, Estreito, Furnas, and Peixoto reservoirs, located in the Cerrado biome, and in the Funil reservoir, located in the Atlantic Forest biome, all with well-defined climatological regimes. Fluxes of carbon dioxide and methane in each of the selected reservoirs, whether through bubbles and/or diffusive exchange between water and atmosphere, were assessed by sampling. The intensity of emissions shows great variability, and several environmental factors could be responsible for these variations: water and air temperature, depth, wind velocity, sunlight, physical and chemical parameters of the water, the composition of the underwater biomass, and the operational regime of the reservoir. Based on these calculations, it is possible to conclude that most of the hydropower plants studied perform better than thermal power sources in terms of atmospheric greenhouse emissions. The comparisons between the reservoirs studied

Abstract Next-generation sequencing (NGS) technologies have generated enormous amounts of shotgun read data, and assembly of the reads can be challenging, especially for organisms without template sequences. We study the power of genome comparison based on shotgun read data without assembly using three alignment-free sequence comparison statistics, D2, D2*, and D2S, both theoretically and by simulations. Theoretical formulas for the power of detecting the relationship between two sequences related through a common motif model are derived. It is shown that both D2* and D2S…

A study compared the Social Responsiveness Scale (SRS) with the Autism Diagnostic Interview-Revised in 61 children (ages 4-16) with autism. Correlations between the test scores for DSM-IV criterion sets were on the order of 0.7. SRS scores were unrelated to I.Q. and exhibited inter-rater reliability on the order of 0.8. (Contains references.)…

Most surgical patients end up with a scar, and most of them would want at least some improvement in its appearance. Using sound wound-closure techniques, surgeons can, to a certain extent, prevent suboptimal scars. This article reviews the principles of prevention and treatment of suboptimal scars. Surgical techniques of scar revision, i.e., Z-plasty, W-plasty, and geometric broken-line closure, are described, as are post-operative care and other adjuvant scar therapies. A short description of dermabrasion and lasers for the management of scars is given. It is hoped that this review helps the surgeon formulate a comprehensive plan for the management of these patients' scars. PMID:24516292

The report gives results of an evaluation of four leachate-generating procedures in terms of their general applicability, reproducibility, compatibility with environmental assessment methods, and leaching characteristics. The generated leachates were analyzed for nine metals by a...

The source of cortical variability and its influence on signal processing remain open questions. We address the latter by studying two types of balanced, randomly connected networks of quadratic integrate-and-fire neurons with irregular spontaneous activity: (a) a deterministic network with strong connections, generating noise through chaotic dynamics, and (b) a stochastic network with weak connections, receiving noisy input. Both are analytically tractable in the limit of large network size and channel time constant. Despite the different sources of noise, the spontaneous activity of these networks is identical unless a majority of neurons is recorded simultaneously. However, the two networks show remarkably different sensitivity to external stimuli: in the former, input reverberates internally and can be read out over a long time, whereas in the latter, inputs decay rapidly. This difference is further enhanced by activity-dependent plasticity at input synapses, producing a marked difference in the ability to decode inputs from neural activity. We show that this leads to distinct performance of the two networks in integrating temporally separate signals from multiple sources, with the activity of the deterministic chaotic network serving as a reservoir for Monte Carlo sampling to perform near-optimal Bayesian integration, unlike its stochastic counterpart.

As climate models (GCMs and RCMs) fail to reproduce the real-world surface weather regime satisfactorily, various statistical methods are applied to downscale GCM/RCM outputs into site-specific weather series. Stochastic weather generators are among the most favoured downscaling methods, capable of producing realistic (observed-like) meteorological inputs for agrological, hydrological, and other impact models used in assessing the sensitivity of various ecosystems to climate change and variability. Among their advantages, the generators may (i) produce arbitrarily long multi-variate synthetic weather series representing both present and changed climates (in the latter case, the generators are commonly modified by GCM/RCM-based climate change scenarios), (ii) be run at various time steps and for multiple weather variables (the generators reproduce the correlations among variables), and (iii) be interpolated (and thus run also for sites where no weather data are available to calibrate the generator). This contribution will compare two stochastic daily weather generators in terms of their ability to reproduce various features of daily weather series. M&Rfi is a parametric generator: a Markov chain model is used to model precipitation occurrence, precipitation amount is modelled by the Gamma distribution, and a first-order autoregressive model is used to generate non-precipitation surface weather variables. The non-parametric GoMeZ generator is based on the nearest-neighbours resampling technique, making no assumption about the distribution of the variables being generated. Various settings of both weather generators will be assumed in the present validation tests. The generators will be validated in terms of (a) extreme temperature and precipitation characteristics (annual and 30-year extremes, and maxima of the duration of hot/cold/dry/wet spells) and (b) selected validation statistics developed within the frame of the VALUE project. The tests will be based on observational weather series
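The parametric precipitation component described above (first-order Markov chain for wet/dry occurrence, Gamma-distributed amounts on wet days) can be sketched in a few lines; the transition probabilities and Gamma parameters below are illustrative assumptions, not calibrated M&Rfi values.

```python
import random

def generate_precip(n_days, p_wd, p_ww, shape, scale, rng):
    """Minimal parametric daily precipitation generator:
    first-order Markov chain for wet/dry occurrence, Gamma amounts (mm)
    on wet days. p_wd = P(wet | yesterday dry); p_ww = P(wet | yesterday wet)."""
    series, wet = [], False
    for _ in range(n_days):
        p_wet = p_ww if wet else p_wd       # transition probability
        wet = rng.random() < p_wet
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rng = random.Random(42)
# Illustrative parameters, not calibrated to any station.
precip = generate_precip(365, p_wd=0.2, p_ww=0.6, shape=0.8, scale=6.0, rng=rng)
wet_days = sum(1 for x in precip if x > 0)
```

Because p_ww > p_wd, wet days cluster into spells, which is exactly the behaviour the spell-duration validation statistics in the text probe.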

A total of 1922 first generation crossbred cows born between 2005 and 2012 produced by inseminating purebred Israeli Holstein cows with Norwegian Red semen, and 7487 purebred Israeli Holstein cows of the same age in the same 50 herds were analyzed for production, calving traits, fertility, calving diseases, body condition score, abortion rate and survival under intensive commercial management conditions. Holstein cows were higher than crossbreds for 305-day milk, fat and protein production. Differences were 764, 1244, 1231 for kg milk; 23.4, 37.4, 35.6 for kg fat, and 16.7, 29.8, 29.8 for kg protein; for parities 1 through 3. Differences for fat concentration were not significant; while crossbred cows were higher for protein concentration by 0.06% to 0.08%. Differences for somatic cells counts were not significant. Milk production persistency was higher for Holstein cows by 5, 8.3 and 8% in parities 1 through 3. Crossbred cows were higher for conception status by 3.1, 3.6 and 4.7% in parities 1 through 3. Rates of metritis for Holsteins were higher than the crossbred cows by 7.8, 4.6 and 3.4% in parities 1 to 3. Differences for incidence of abortion, dystocia, ketosis and milk fever were not significant. Holstein cows were lower than crossbred cows for body condition score for all three parities, with differences of 0.2 to 0.4 units. Contrary to comparisons in other countries, herd-life was higher for Holsteins by 79 days. A total of 6321 Holstein cows born between 2007 and 2011 were higher than 765 progeny of crossbred cows backcrossed to Israeli Holsteins of the same ages for milk, fat and protein production. Differences were 279, 537, 542 kg milk; 10.5, 17.7, 17.0 kg fat and 6.2, 12.9, 13.2 kg protein for parities 1 through 3. Differences for fat concentration were not significant, while backcross cows were higher for protein percentage by 0.02% to 0.04%. The differences for somatic cell score, conception rate, and calving diseases other than metritis, were not

In 1981, a series of 236 intranasal ethmoidectomy (INE) procedures was reported with a complication rate of 1.8%. Special attention has subsequently been directed to the surgical failures, namely recurrent nasal polyposis, which accounted for approximately 17%. The reason for recurrence in most instances was felt to be failure to perform a more thorough posterior ethmoidectomy and to enter and clean out the sphenoid sinuses. Subsequently, in all revision cases where a more thorough sphenoethmoidectomy (RSE) was performed, the overall long-term success rate rose to better than 90%. Skeletonizing the middle turbinate by stripping mucosa and leaving a thin bony shell is an important technical factor. An attempt is made to leave some of this bony skeletonized medial wall of the middle turbinate, as it represents the most crucial landmark in performing the surgery via the intranasal route. There remains approximately 8% to 10% of this patient population with nasal polyposis and sinusitis of such severity that surgery has offered only a temporary measure of relief. In dealing with this group it may be necessary to see patients postoperatively at four- to six-week intervals, carefully suctioning the ethmoid labyrinth and occasionally performing minor office "touch-up" ethmoidectomy-polypectomy procedures to clean off redundant mucosa or early polyposis. This paper offers a compromise between the two schools of intranasal ethmoidectomy surgery on the necessity of removing the middle turbinate in its entirety. PMID:3974381

Jumbo acetabular cups are commonly used in revision total hip arthroplasty (THA). A straightforward reaming technique is used, similar to that in primary THA. However, jumbo cups may also be associated with hip center elevation, limited screw fixation options, and anterior soft tissue impingement. A partially truncated hemispherical shell was designed with an offset center of rotation, a thick superior rim, and beveled anterior and superior rims as an alternative to a conventional jumbo cup. A three-dimensional computer simulation was used to assess head center position and safe screw trajectories. Results of this in vitro study indicate that a modified hemispherical implant geometry can reduce head center elevation while permitting favorable screw fixation trajectories into the pelvis, in comparison to a conventional jumbo cup. PMID:26253481

Groups naturally promote their strengths and prefer values and rules that give them an identity and an advantage. This shows up as generational tensions across cohorts who share common experiences, including common elders. Dramatic cultural events in America since 1925 can help create an understanding of the differing value structures of the Silents, the Boomers, Gen Xers, and the Millennials. Differences in how these generations see motivation and values, fundamental reality, relations with others, and work are presented, as are some applications of these differences to the dental profession. PMID:16623137

Background Welding fumes consist of a wide range of complex metal oxide particles, which can be deposited in all regions of the respiratory tract. The welding aerosol is not homogeneous and is generated mostly from the electrode/wire. Over 390,000 welders were reported in the U.S. in 2008, while over 1 million full-time welders were working worldwide. Many health effects of exposure to welding fumes are presently under investigation. Welding-fume pulmonary effects have been associated with bronchitis, metal fume fever, cancer, and functional changes in the lung. Our investigation focused on the generation of free radicals and reactive oxygen species (ROS) from stainless and mild steel welding fumes generated by a gas metal arc robotic welder. An inhalation exposure chamber located at NIOSH was used to collect the welding fume particles. Results Our results show that hydroxyl radicals (·OH) were generated from reactions with H2O2 and after exposure to cells. Catalase reduced the generation of ·OH from exposed cells, indicating the involvement of H2O2. The welding fume suspension also showed the ability to cause lipid peroxidation, affect O2 consumption, induce H2O2 generation in cells, and cause DNA damage. Conclusion The increase in oxidative damage observed in the cellular exposures correlated well with ·OH generation across the sizes and types of welding fumes, indicating the influence of metal type and transition state on radical production as well as on the associated damage. Our results demonstrate that both types of welding fumes are able to generate ROS and ROS-related damage over a range of particle sizes; however, the stainless steel fumes consistently showed significantly higher reactivity and radical-generation capacity. The chemical composition of the steel had a significant impact on the ROS generation capacity, with the stainless steel containing Cr and Ni causing more damage than the mild steel. Our results suggest that welding fumes may cause acute lung injury. Since type of

Variable-speed hydraulic turbine power generation has become a useful tool in the chemical processing industry. The technology that preceded and led to the use of variable-speed hydraulic turbines was Variable Speed Constant Frequency (VSCF) drives for power generation. The current state of the art in VSCF technology is the Insulated Gate Bipolar Transistor (IGBT) voltage-source Pulse Width Modulation (PWM) drive. A new PWM switch has been created: the Integrated Gate-Commutated Thyristor (IGCT) PWM device. Drives produced with this technology are current-source, so their operational voltages are easily configured for the medium-voltage range. Medium-voltage operation makes the new drives desirable because of the size of the generators. A common power range for variable-speed hydraulic power generation is 1 MW, and applications in the chemical processing industry trend toward higher powers. For this type of generator, if the operational voltage falls below 2300 V, the generator becomes too large, bulky, and expensive; windings are no longer wires but busbars. The current solution to the voltage problem is to include a transformer in the generator/drive connection.

The reactions ^124Xe + ^112,124Sn at E/A = 50 MeV have recently been measured. For mid-peripheral collisions, the projectile-like fragment has been measured in coincidence with emitted particles (charged particles and neutrons). The experimental data will be compared to those obtained with the event generator Elie [1]. This two-step event generator consists of an entrance-channel phase, which uses a random process to determine the initial partition, followed by kinematic propagation and secondary decay. Experimental and generated energy distributions, angular distributions, and Z distributions of charged products will be examined. Yields of isotopically resolved fragments will be studied, including the effect of the target N/Z. [1] Elie: an event generator for nuclear reactions, Dominique Durand, arXiv:0803.2159

This paper is a collection of overhead-projector material and graphs presented at the IAEA Nuclear Power Course on Electric System Expansion Planning. The group of viewgraphs compares nuclear and fossil-fired busbar generation costs in the US. Discussed is information on: (1) where nuclear power now stands in the US, (2) what is needed to perform a nuclear versus coal-fired busbar generation cost analysis, (3) results of a recent study, and (4) current considerations.

Two application systems, Tube Failure Diagnosis (TFD) and Electric Generator Failure Diagnosis (EGFD), are discussed in the paper. The TFD system was built using two different approaches: one with rule-chaining search algorithms and the other with a new neural network paradigm. The EGFD system combines the two artificial intelligence approaches: rule-chaining and neural networks. An analysis of the advantages and disadvantages of the two technologies, as applied to power generation applications, is included.

Gold nanoparticles (GNPs) are widely used for biomedical applications due to unique optical properties, established synthesis methods, and biological compatibility. Despite important applications of plasmonic heating in thermal therapy, imaging, and diagnostics, the lack of quantification in heat generation leads to difficulties in comparing the heating capability for new plasmonic nanostructures and predicting the therapeutic and diagnostic outcome. This study quantifies GNP heat generation by experimental measurements and theoretical predictions for gold nanospheres (GNS) and nanorods (GNR). Interestingly, the results show a GNP-type dependent agreement between experiment and theory. The measured heat generation of GNS matches well with theory, while the measured heat generation of GNR is only 30% of that predicted theoretically at peak absorption. This then leads to a surprising finding that the polydispersity, the deviation of nanoparticle size and shape from nominal value, significantly influences GNR heat generation (>70% reduction), while having a limited effect for GNS (<10% change). This work demonstrates that polydispersity is an important metric in quantitatively predicting plasmonic heat generation and provides a validated framework to quantitatively compare the heating capabilities between gold and other plasmonic nanostructures. PMID:27445172

Studies of the broader autism phenotype, and of subtle changes in autism symptoms over time, have been compromised by a lack of established quantitative assessment tools. The Social Responsiveness Scale (SRS-formerly known as the Social Reciprocity Scale) is a new instrument that can be completed by parents and/or teachers in 15-20 minutes. We compared the SRS with the Autism Diagnostic Interview-Revised (ADI-R) in 61 child psychiatric patients. Correlations between SRS scores and ADI-R algorithm scores for DSM-IV criterion sets were on the order of 0.7. SRS scores were unrelated to I.Q. and exhibited inter-rater reliability on the order of 0.8. The SRS is a valid quantitative measure of autistic traits, feasible for use in clinical settings and for large-scale research studies of autism spectrum conditions. PMID:12959421

The purpose of the present study was to examine the test-retest reliability of the alcohol and drug modules of the AUDADIS-ADR in three sites: Bangalore, India, Jebel, Romania and Sydney, Australia. The overall reliability of ICD-10, DSM-IV and DSM-III-R dependence diagnoses was found to be good to excellent for each substance, including alcohol, for each time frame, regardless of whether the total sample or user subsample figured into the calculations. Reliability associated with corresponding harmful use and abuse diagnoses were mixed, but generally lower. Reliability statistics for Bangalore were generally lower than those of the Jebel and Sydney sites, particularly for alcohol diagnostic criteria. Implications of these results are discussed, in conjunction with results from the discrepancy interview protocol analyses within sites, in terms of future revisions to the AUDADIS-ADR and its training procedures tailored to developing countries. PMID:9306043

A wide range of electronic cigarette (EC) devices, from small cigarette-like (first-generation) to new-generation high-capacity batteries with electronic circuits that provide high energy to a refillable atomizer, are available for smokers to substitute smoking. Nicotine delivery to the bloodstream is important in determining the addictiveness of ECs, but also their efficacy as smoking substitutes. In this study, plasma nicotine levels were measured in experienced users using a first- vs. new-generation EC device for 1 hour with an 18 mg/ml nicotine-containing liquid. Plasma nicotine levels were higher by 35–72% when using the new- compared to the first-generation device. Compared to smoking one tobacco cigarette, the EC devices and liquid used in this study delivered one-third to one-fourth the amount of nicotine after 5 minutes of use. New-generation EC devices were more efficient in nicotine delivery, but still delivered nicotine much slower compared to tobacco cigarettes. The use of 18 mg/ml nicotine-concentration liquid probably compromises ECs' effectiveness as smoking substitutes; this study supports the need for higher levels of nicotine-containing liquids (approximately 50 mg/ml) in order to deliver nicotine more effectively and approach the nicotine-delivery profile of tobacco cigarettes. PMID:24569565

Soil has the potential to be valuable forensic evidence linking a person or item to a crime scene; however, there is no established soil individualization technique. In this study, the utility of soil bacterial profiling via next-generation sequencing of the 16S rRNA gene was examined for associating soils with their place of origin. Soil samples were collected from ten diverse and nine similar habitats over time, and within three habitats at various horizontal and vertical distances. Bacterial profiles were analyzed using four methods: abundance charts and nonmetric multidimensional scaling provided simplification and visualization of the massive datasets, potentially aiding in expert testimony, while analysis of similarities and k-nearest neighbor offered objective statistical comparisons. The vast majority of soil bacterial profiles (95.4%) were classified to their location of origin, highlighting the potential of bacterial profiling via next-generation sequencing for the forensic analysis of soil samples. PMID:27122396
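A minimal sketch of the k-nearest-neighbor classification step described above, with hypothetical relative-abundance profiles rather than the study's sequencing data:

```python
import numpy as np

def knn_classify(profile, ref_profiles, ref_labels, k=3):
    """Assign a soil bacterial profile (relative-abundance vector) to the
    habitat whose reference profiles are its k nearest Euclidean neighbours."""
    d = np.linalg.norm(ref_profiles - profile, axis=1)   # distance to each reference
    nearest = np.argsort(d)[:k]                          # indices of k closest references
    labels, counts = np.unique(ref_labels[nearest], return_counts=True)
    return labels[np.argmax(counts)]                     # majority vote

# Hypothetical 4-taxon relative-abundance profiles from two habitats.
refs = np.array([[0.60, 0.20, 0.10, 0.10], [0.55, 0.25, 0.10, 0.10],
                 [0.10, 0.10, 0.50, 0.30], [0.15, 0.05, 0.50, 0.30]])
labels = np.array(["forest", "forest", "beach", "beach"])
query = np.array([0.58, 0.22, 0.10, 0.10])
pred = knn_classify(query, refs, labels, k=3)  # -> "forest"
```

In practice, abundance-aware distances such as Bray-Curtis are often preferred over Euclidean distance for community profiles.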

Whether in art or for QR codes, images have proven to be both powerful and efficient carriers of information. Spatial light modulators allow an unprecedented level of control over the generation of optical fields by using digital holograms. There is, however, no unique way of obtaining a desired light pattern, leaving many competing methods for hologram generation. In this paper, we test six hologram generation techniques in the creation of a variety of modes as well as a photographic image, rating the methods according to obtained mode quality and power. All techniques compensate for a non-uniform mode profile of the input laser and incorporate amplitude scaling. We find that all methods perform well and stress the importance of appropriate spatial filtering. We expect these results to be of interest to those working in the contexts of microscopy, optical trapping or quantum image creation. PMID:27136818

Individuals with a history of depression experience more stress that is dependent in part on their own actions. However, it is unclear whether stress generation is a unique feature of depression, or a universal process that is also present in other types of psychopathology, such as anxiety disorders. The current study addressed this issue by comparing adolescents with a history of “pure” (i.e., non-comorbid) depressive disorders, pure anxiety disorders, comorbid depression and anxiety, and no disorder, on their levels of dependent and independent stress. Results indicated that adolescents with pure depression experienced more dependent stress than adolescents with pure anxiety, and adolescents with any internalizing diagnosis experienced more dependent stress than controls. Further, adolescents with comorbid depression and anxiety reported the highest levels of stress generation. The results suggest that while stress generation may be more strongly associated with depression than anxiety in adolescence, it is not unique to depression. PMID:22724042

Background Nitric oxide (NO) is a vital signalling molecule in a variety of tissues including the neuronal, vascular and reproductive system. However, its high diffusibility and inactivation make characterisation of nitrergic signalling difficult. The use of NO donors is essential to characterise downstream signalling pathways but knowledge of donor release capacities is lacking, thus making comparisons of donor responses difficult. New method This study characterises NO profiles of commonly used NO donors. Donors were stored under defined conditions and temporal release profiles detected to allow determination of released NO concentrations. Results Using NO-sensitive microsensors we assessed release profiles of NO donors following different storage times and conditions. We found that donors such as NOC-5 and PAPA-NONOate decayed substantially within days, whereas SNP and GSNO showed greater stability releasing consistent levels of NO over days. In all donors tested, the amount of released NO differs between frozen and unfrozen stocks. Comparison with existing method(s) Fluorescent and amperometric approaches to measure NO concentrations yield a wide range of levels. However, due to a lack of characterisation of the release profiles, inconsistent effects on NO signalling have been widely documented. Our systematic assessment of release profiles of a range of NO donors therefore provides new essential data allowing for improved and defined investigations of nitrergic signalling. Conclusions This is the first systematic comparison of temporal release profiles of different NO donors allowing researchers to compare conditions across different studies and the use of defined NO levels by choosing specific donors and concentrations. PMID:25749567

Students are often encouraged to generate and answer their own questions on to-be-remembered material, because this interactive process is thought to enhance memory. But does this strategy actually work? In three experiments, all participants read the same passage, answered questions, and took a test to get accustomed to the materials in a…

The collection of essays and studies concerning generative grammar and first and second language acquisition includes: "The Optional-Infinitive Stage in Child English: Evidence from Negation" (Tony Harris, Ken Wexler); "Towards a Structure-Building Model of Acquisition" (Andrew Radford); "The Underspecification of Functional Categories in Early…

The Wada test is at present the method of choice for preoperative assessment of patients who require surgery close to cortical language areas. It is, however, an invasive test with an attached morbidity risk. By now, an alternative to the Wada test is to combine a lexical word generation paradigm with non-invasive imaging techniques. However,…

Although the development of next-generation (NextGen) sequencing technologies has revolutionized genomic research and medicine, the incorporation of these topics into the classroom is challenging, given an implied high degree of technical complexity. We developed an easy-to-implement, interactive classroom activity investigating the similarities…

Range-time-frequency distributions of surf-generated noise were measured within the surf zone during the SandyDuck'97 experiment at Duck, NC. A 24-phone, 138-m, bottom-mounted, linear array located along a line perpendicular to the shore at a depth of 1 to 3 m recorded the surf-generated noise. Concurrent video measurements of the location, size, and time-evolution of the individual breaking waves directly above the array were made from a nearby 43-m tower. Source level spectra are obtained by using a modified fast field program to account for water column and geoacoustic propagation from the distributed source region to an individual hydrophone. The length, location, and orientation of the leading edge of breakers are tracked in time from rectified video images. It is observed that the source levels from spilling breakers are lower (≈5-10 dB) than those produced by plunging breakers that occurred during the same time period. Plunging breakers generated time-frequency signatures with a sharp onset while spilling breakers' signatures had a gradual low-frequency precursor. Range-time signatures of plunging breakers indicate a burst of acoustic energy while spilling breakers' signatures depict sound being generated over a longer time period with the source region moving with the breaking surface wave.

Three different sorbent materials (Ti, Si and Ca based) were compared for their mercury capture efficiencies in an entrained flow reactor. Agglomerated particles with a high specific surface area were generated in situ by injecting gas phase sorbent precursors into a high tempera...

Examines grandparent behaviors in the United States and in the Republic of China to identify curriculum themes for helping them learn to adjust to their changing roles. Results revealed significant differences in perceptions about grandparents across cultures as well as between generations within cultures. Provides specific guidelines and…

This paper presents two studies in which an empirical approach was taken to understand and explain form generation and decisions taken in the design process. In particular, the activities addressing aesthetic aspects when exteriorising form ideas in the design process have been the focus of the present study. Diary methods were the starting point…

In order to obtain marketing authorization, breast implants (BI) must meet a number of standard requirements. The French and European standard ISO 14607 lists a number of official tests to be performed before an implant can be used clinically. However, the evolution of BI material characteristics over implantation time remains an unexplored research field. The goal of the present study is to compare the mechanical ageing of two breast implant generations and to assess whether the use of one generation rather than the other is advantageous in terms of durability. For that purpose, 21 explanted BI were analyzed in terms of biomechanical characteristics and compared. Twelve BI were textured anatomic specimens of the 5th generation and 10 BI were round textured specimens of the 4th generation. All specimens were produced by the same manufacturer. Implantation time ranged from 3 to 130 months. Both the shell and the gel of every specimen were analyzed. Results show that the mechanical properties decrease with implantation time for all the implants. Moreover, the shell of round implants appears to be less resistant than the shell of anatomic specimens, with 25% lower rupture forces. With regard to the gel, whatever the specimen, results show that the properties change with implantation time: the color turns from transparent to milky and finally yellow, while cohesion decreases, especially for the round specimens. Globally, the study brings out that BI degrade with implantation time and provides information that could help predict implant durability. PMID:25746931

This study was conducted to ascertain if repeated reading or question generation was more effective at improving reading fluency and comprehension of fourth- through sixth-grade students with learning disabilities or reading problems. Adult tutors trained by the investigator conducted the interventions. Instructional components and training within…

This paper examined whether FreeSurfer-generated data differed between a fully automated, unedited pipeline and an edited pipeline that included the application of control points to correct errors in white matter segmentation. In a sample of 30 individuals, we compared the summary statistics of surface area, white matter volumes, and cortical thickness derived from edited and unedited datasets for the 34 regions of interest (ROIs) that FreeSurfer (FS) generates. To determine whether applying control points would alter the detection of significant differences between patient and typical groups, effect sizes between edited and unedited conditions in individuals with the genetic disorder, 22q11.2 deletion syndrome (22q11DS) were compared to neurotypical controls. Analyses were conducted with data that were generated from both a 1.5 tesla and a 3 tesla scanner. For 1.5 tesla data, mean area, volume, and thickness measures did not differ significantly between edited and unedited regions, with the exception of rostral anterior cingulate thickness, lateral orbitofrontal white matter, superior parietal white matter, and precentral gyral thickness. Results were similar for surface area and white matter volumes generated from the 3 tesla scanner. For cortical thickness measures however, seven edited ROI measures, primarily in frontal and temporal regions, differed significantly from their unedited counterparts, and three additional ROI measures approached significance. Mean effect sizes for edited ROIs did not differ from most unedited ROIs for either 1.5 or 3 tesla data. Taken together, these results suggest that although the application of control points may increase the validity of intensity normalization and, ultimately, segmentation, it may not affect the final, extracted metrics that FS generates. Potential exceptions to and limitations of these conclusions are discussed. PMID:26539075
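Group comparisons of this kind commonly use Cohen's d with a pooled standard deviation; a minimal sketch with hypothetical cortical-thickness values (not data from the study):

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Hypothetical thickness values (mm) for one ROI in patients vs controls.
patients = [2.31, 2.28, 2.40, 2.25, 2.35]
controls = [2.52, 2.48, 2.60, 2.55, 2.47]
d = cohens_d(patients, controls)  # negative: patients thinner than controls
```

Comparing d computed from edited versus unedited ROI values then shows whether manual editing changes the detected group difference.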

The purpose of the present study was to examine the psychometric properties, factorial structure, and validity of the Padua Inventory-Washington State University Revision and of the Padua Inventory-Revised in a large sample of patients with obsessive-compulsive disorder (n = 228) and with anxiety disorders and/or depression (n = 213). The…

The article discusses the history of copyright laws, the directions which copyright revision can take, and the rationale behind revision. Regulations for protecting various media such as sound recordings, performances, and cable television are discussed. (JAB)

The global parameter fields used in the revised Simple Biosphere Model (SiB2) of Sellers et al. are reviewed. The most important innovation over the earlier SiB1 parameter set of Dorman and Sellers is the use of satellite data to specify the time-varying phenological properties of FPAR, leaf area index, and canopy greenness fraction. This was done by processing a monthly 1° normalized difference vegetation index (NDVI) dataset obtained from Advanced Very High Resolution Radiometer red and near-infrared data. Corrections were applied to the source NDVI dataset to account for (1) obvious anomalies in the data time series, (2) the effect of variations in solar zenith angle, (3) data dropouts in cold regions where a temperature threshold procedure designed to screen for clouds also eliminated cold land surface points, and (4) persistent cloud cover in the Tropics. An outline of the procedures for calculating the land surface parameters from the corrected NDVI dataset is given, and a brief description is provided of source material, mainly derived from in situ observations, that was used in addition to the NDVI data. The datasets summarized in this paper should be superior to prescriptions currently used in most land surface parameterizations in that the spatial and temporal dynamics of key land surface parameters, in particular those related to vegetation, are obtained directly from a consistent set of global-scale observations instead of being inferred from a variety of survey-based land-cover classifications. 55 refs., 24 figs., 6 tabs.

Accurate rainfall measurements are critical to river flow predictions. Areal and gauge rainfall measurements create different descriptions of the same storms. The purpose of this study is to characterize those differences. A stochastic rainfall generator was calibrated using an automatic search algorithm. Statistics describing several rainfall characteristics of interest were used in the error function. The calibrated model was then used to generate storms which were exhaustively sampled, sparsely sampled and sampled areally with 4 x 4 km grids. The sparsely sampled rainfall was also kriged to 4 x 4 km blocks. The differences between the four schemes were characterized by comparing statistics computed from each of the sampling methods. The possibility of predicting areal statistics from gauge statistics was explored. It was found that areally measured storms appeared to move more slowly, appeared larger, appeared less intense and have shallower intensity gradients.
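The contrast between exhaustive, gauge, and areal sampling can be illustrated with a toy synthetic storm (hypothetical field, grid sizes, and intensities; not the calibrated generator of the study):

```python
import numpy as np

# Synthetic 64 x 64 km storm at 1 km resolution: a smooth peaked intensity field.
x, y = np.meshgrid(np.arange(64), np.arange(64))
field = 20.0 * np.exp(-((x - 32) ** 2 + (y - 20) ** 2) / 200.0)  # mm/h

exhaustive = field                                       # every 1 km cell
gauges = field[::16, ::16]                               # sparse 16 km gauge network
areal = field.reshape(16, 4, 16, 4).mean(axis=(1, 3))    # 4 x 4 km block averages

# Block averaging smooths peaks, so areal maxima understate point maxima,
# consistent with areally measured storms appearing larger but less intense.
peak_ratio = areal.max() / exhaustive.max()
```

The gauge network can also miss the peak entirely when no gauge sits on it, which is a separate error source from the block-average smoothing.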

The design of heat exchangers traditionally focuses on the known constraints of the problem such as inlet and outlet temperatures, flow rates, and pressure drops. This leads mainly to a sizing problem where the designer must select surfaces, flow configuration, and materials to meet the minimum design objectives. An alternate approach based on an acceptable level of thermodynamic irreversibility (entropy generation) has been proposed. When the entropy generation level has been set, the geometric parameters of the heat exchanger can be determined. The design of a plate-fin type, gas-to-gas recuperator for a regenerative open Brayton cycle has been used as a demonstrative device. The resulting heat exchanger designs are then examined to determine what caused the differences and why either method should be preferred over the other.
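As a sketch of the entropy-generation bookkeeping for a two-stream gas-to-gas exchanger, the following uses the standard ideal-gas expression with temperature and pressure-drop terms; the inlet and outlet states are hypothetical, not the paper's recuperator design:

```python
import math

def entropy_generation(m1, m2, cp, R, T1_in, T1_out, T2_in, T2_out,
                       p1_in, p1_out, p2_in, p2_out):
    """Entropy generation rate [W/K] for a two-stream ideal-gas heat exchanger:
    sum over streams of m_dot * (cp*ln(T_out/T_in) - R*ln(p_out/p_in))."""
    s1 = m1 * (cp * math.log(T1_out / T1_in) - R * math.log(p1_out / p1_in))
    s2 = m2 * (cp * math.log(T2_out / T2_in) - R * math.log(p2_out / p2_in))
    return s1 + s2

# Hot stream cools 900 -> 500 K, cold stream heats 400 -> 780 K,
# each with a 2% pressure drop; air properties, 1 kg/s per stream.
S_gen = entropy_generation(1.0, 1.0, 1005.0, 287.0,
                           900.0, 500.0, 400.0, 780.0,
                           101325.0, 99298.5, 101325.0, 99298.5)
```

In the design approach described above, one would instead fix an acceptable S_gen and solve for the exchanger geometry that achieves it.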

At any neutron production facility, the energy spectrum at any meaningful distance from the target will be modified. For a facility used to provide reference irradiations of electronics and other devices at various target-to-device distances, it is important to know these spectral modifications. In addition, near real-time measurement capability is desirable. Advances in neutron metrology have made it possible to determine neutron energy spectra in real time with high accuracy. This paper outlines a series of experimental measurements and theoretical calculations designed to quantify the scattering effects at a 14 MeV neutron generator facility, and makes appropriate recommendations for near real-time measurements of these fields.

Coherent Stokes generation was explored as a means to investigate vibrational dephasing in both the ground state and first excited singlet state of pentacene in benzoic acid. The dephasing-induced coherent emission (DICE) was used to obtain the ground- and excited-state Raman linewidths between 1.6 K and 200 K. The broadening for both modes displayed an Arrhenius energy of ≈100 cm⁻¹.

Introduction. Cryoballoon (CB) ablation has emerged as a novel treatment for pulmonary vein isolation (PVI) for patients with paroxysmal atrial fibrillation (PAF). The second-generation Arctic Front Advance (ADV) was redesigned with technical modifications aiming at procedural and outcome improvements. We aimed to compare the efficacy of the two different technologies over a long-term follow-up. Methods. A total of 120 patients with PAF were enrolled. Sixty patients underwent PVI using the first-generation CB and 60 patients with the ADV catheter. All patients were evaluated over a follow-up period of 2 years. Results. There were no significant differences between the two groups of patients. Procedures performed with the first-generation CB showed longer fluoroscopy time (36.3 ± 16.8 versus 14.2 ± 13.5 min, resp.; p = 0.00016) and longer procedure times as well (153.1 ± 32 versus 102 ± 24.8 min, resp.; p = 0.019). The overall long-term success was significantly different between the two groups (68.3 versus 86.7%, resp.; p = 0.017). No differences were found in the lesion areas of left and right PV between the two groups (resp., p = 0.61 and 0.57). There were no significant differences in procedural-related complications. Conclusion. The ADV catheter compared to the first-generation balloon allows obtaining a significantly higher success rate after a single PVI procedure during the long-term follow-up. Fluoroscopy and procedural times were significantly shortened using the ADV catheter. PMID:27069711

Images from two sensors, the High-Resolution Imaging Science Experiment (HiRISE) and the Context Camera (CTX), both on-board the Mars Reconnaissance Orbiter (MRO), were used to generate high-quality DEMs (Digital Elevation Models) of the Martian surface. However, there were discrepancies between the DEMs generated from the images acquired by these two sensors due to various reasons, such as variations in boresight alignment between the two sensors during the flight in the complex environment. This paper presents a systematic investigation of the discrepancies between the DEMs generated from the HiRISE and CTX images. A combined adjustment algorithm is presented for the co-registration of HiRISE and CTX DEMs. Experimental analysis was carried out using the HiRISE and CTX images collected at the Mars Rover landing site and several other typical regions. The results indicated that there were systematic offsets between the HiRISE and CTX DEMs in the longitude and latitude directions. However, the offset in the altitude was less obvious. After combined adjustment, the offsets were eliminated and the HiRISE and CTX DEMs were co-registered to each other. The presented research is of significance for the synergistic use of HiRISE and CTX images for precision Mars topographic mapping.
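For a pure translation, the co-registration idea reduces to estimating a constant offset from matched tie points; a minimal sketch with hypothetical tie points, not the paper's combined adjustment algorithm:

```python
import numpy as np

def estimate_offset(pts_a, pts_b):
    """Estimate the constant horizontal offset between two DEMs from matched
    tie points (N x 2 arrays of lon, lat). For a pure translation, the
    least-squares solution is the mean coordinate difference."""
    return np.mean(pts_b - pts_a, axis=0)

# Hypothetical tie points: DEM B shifted by (+0.002 deg lon, -0.001 deg lat)
# relative to DEM A, plus small matching noise.
rng = np.random.default_rng(1)
a = rng.uniform(0.0, 1.0, size=(50, 2))
b = a + np.array([0.002, -0.001]) + rng.normal(0.0, 1e-5, size=(50, 2))

offset = estimate_offset(a, b)
a_registered = a + offset  # co-register DEM A onto DEM B's frame
```

A full combined adjustment would also solve for rotation, scale, and the vertical component, but the residual-minimization principle is the same.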

Purpose. To compare two methods of composite score generation in dry eye syndrome (DES). Methods. Male patients seen in the Miami Veterans Affairs eye clinic with normal eyelid, corneal, and conjunctival anatomy were recruited to participate in the study. Patients filled out the Dry Eye Questionnaire 5 (DEQ5) and underwent measurement of tear film parameters. DES severity scores were generated by independent component analysis (ICA) and latent class analysis (LCA). Results. A total of 247 men were included in the study. Mean age was 69 years (SD 9). Using ICA analysis, osmolarity was found to carry the largest weight, followed by eyelid vascularity and meibomian orifice plugging. Conjunctival injection and tear breakup time (TBUT) carried the lowest weights. Using LCA analysis, TBUT was found to be best at discriminating healthy from diseased eyes, followed closely by Schirmer's test. DEQ5, eyelid vascularity, and conjunctival injection were the poorest at discrimination. The adjusted correlation coefficient between the two generated composite scores was 0.63, indicating that the shared variance was less than 40%. Conclusions. Both ICA and LCA produced composite scores for dry eye severity, with weak to moderate agreement; however, agreement for the relative importance of single diagnostic tests was poor between the two methods. PMID:23942971

Weather generators are stochastic models that produce synthetic long time series of weather data, statistically similar to observations, from limited records of historical data. Long time series of synthetic daily weather data, especially precipitation, are required as climate inputs to hydrological and agricultural models in order to evaluate the performance of the associated physical systems. LARS-WG is a semiparametric weather generator that uses flexible semi-empirical distributions for the lengths of wet and dry day series and for daily precipitation. k-Nearest Neighbor, on the other hand, is a nonparametric technique that resamples data from historical records by conditioning on the preceding days (the feature vector). The model finds the k historical nearest neighbors of the current weather vector using the Euclidean distance and resamples from their successors. To preserve temporal persistence, the model calculates the Euclidean distance of vectors that have a similar sequence of wet and dry days. The objective of this study is to evaluate the performance of these two models in reproducing the interannual variability of precipitation at three stations in Germany. Keywords: Weather generator, k-Nearest-Neighbor, LARS-WG, daily precipitation
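As a rough illustration of the nearest-neighbor resampling idea, the following sketch (hypothetical, with the feature vector reduced to a single preceding day's precipitation rather than the full wet/dry-sequence conditioning) generates a synthetic daily series from a historical record:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_resample(history, n_days, k=5):
    """Generate a synthetic daily precipitation series by k-NN resampling:
    find the k historical days most similar to the current day, pick one at
    random, and emit its historical successor."""
    series = [history[0]]
    for _ in range(n_days - 1):
        d = np.abs(history[:-1] - series[-1])  # distance of each historical day to current
        nn = np.argsort(d)[:k]                 # indices of the k nearest neighbours
        pick = rng.choice(nn)                  # random draw among the neighbours
        series.append(history[pick + 1])       # emit the chosen neighbour's successor
    return np.array(series)

# Hypothetical historical record: gamma-distributed wet-day amounts, ~40% wet days.
hist = rng.gamma(0.5, 4.0, size=1000) * (rng.random(1000) < 0.4)
synth = knn_resample(hist, 365)
```

Because every synthetic value is drawn from the historical record, the marginal distribution is preserved by construction; the quality of the method lies in how well the conditioning reproduces persistence and interannual variability.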

Optoacoustic imaging represents a new modality that allows noninvasive in vivo molecular imaging with optical contrast and acoustical resolution. Whereas structural or functional imaging applications such as imaging of vasculature do not require contrast enhancing agents, nanoprobes with defined biochemical binding behavior are needed for molecular imaging tasks. Since the contrast of this modality is based on the local optical absorption coefficient, all particle or molecule types that show significant absorption cross sections in the spectral range of the laser wavelength used for signal generation are suitable contrast agents. Currently, several particle types such as gold nanospheres, nanoshells, nanorods, or polymer particles are used as optoacoustic contrast agents. These particles have specific advantages with respect to their absorption properties, or in terms of biologically relevant features (biodegradability, binding to molecular markers). In the present study, a comparative analysis of the signal generation efficiency of gold nanorods, polymeric particles, and magnetite particles using a 1064 nm Nd:YAG laser for signal generation is described. PMID:23207315

T-waves are underwater acoustic waves generated by earthquakes. Modeling of their generation and propagation is a challenging problem. Using a spectral element code-SPECFEM2D, this paper presents the first realistic simulations of T-waves taking into account major aspects of this phenomenon: The radiation pattern of the source, the propagation of seismic waves in the crust, the seismic to acoustic conversion on a non-planar seafloor, and the propagation of acoustic waves in the water column. The simulated signals are compared with data from the mid-Atlantic Ridge recorded by an array of hydrophones. The crust/water interface is defined by the seafloor bathymetry. Different combinations of water sound-speed profiles and sub-seafloor seismic velocities, and frequency content of the source are tested. The relative amplitudes, main arrival-times, and durations of simulated T-phases are in good agreement with the observed data; differences in the spectrograms and early arrivals are likely due to too simplistic source signals and environmental model. These examples demonstrate the abilities of the SPECFEM2D code for modeling earthquake generated T-waves. PMID:24116530

Nerve growth factor (NGF), a member of the neurotrophin family, is responsible for the maintenance and survival of cholinergic neurons in the basal forebrain. The degeneration of cholinergic neurons and reduced acetylcholine levels are hallmarks of Alzheimer's disease (AD) and are associated with learning and memory deficits. Thus far, NGF has proven to be the most potent neuroprotective molecule against cholinergic neurodegeneration. However, delivery of this factor into the brain remains difficult. Recent studies have begun to elucidate the potential use of monocytes as vehicles for therapeutic delivery into the brain. In this study, we employed different transfection and transduction methods to generate NGF-secreting primary rat monocytes. Specifically, we compared five methods for generating NGF-secreting monocytes: (1) cationic lipid-mediated transfection (Effectene and FuGene), (2) classical electroporation, (3) nucleofection, (4) protein delivery (Bioporter) and (5) lentiviral vectors. Here, we report that classical transfection methods (lipid-mediated transfection, electroporation, nucleofection) are inefficient tools for proper gene transfer into primary rat monocytes. We demonstrate that lentiviral infection and Bioporter can successfully transduce/load primary rat monocytes and produce effective NGF secretion. Furthermore, our results indicate that NGF is bioactive and that Bioporter-loaded monocytes do not appear to exhibit any functional disruptions (i.e. in their ability to differentiate and phagocytose beta-amyloid). Taken together, our results show that primary monocytes can be effectively loaded or transduced with NGF and provide information on the most effective method for generating NGF-secreting primary rat monocytes. This study also provides a basis for further development of primary monocytes as therapeutic delivery vehicles to the diseased AD brain. PMID:23474426

In 1996, Barnes-Jewish Hospital introduced the first fully robotic hematology system in North America. This first-generation Coulter/IDS robotic automated system consisted of a series of transport lanes, inlet and outlet stations, 2 robotic arms, and an on-line slide maker/stainer. The success of the system and its exceptional performance were detailed in an article published in 1998. In 2004, our laboratory replaced this system with a new third-generation robotics system, the LH 1502 from Beckman Coulter, Inc. The new system consists of 2 LH 755 workcells (LH 750 plus slide maker/stainer), an inlet/outlet unit, and a stockyard. The system has been interfaced to our laboratory computer system (Cerner Millennium) by customized software that allows auto-verification and testing rules to be applied to individual samples. Since its implementation in January 2005, we have monitored its performance characteristics relative to the previous first-generation system. We report here on our findings through June 2005. We compared and contrasted the two systems with respect to the following parameters: (1) sample handling; (2) reduction in staff exposure to hazardous materials; (3) stat and routine turnaround times; (4) productivity; (5) reduction in backup testing; (6) operating costs; (7) payback; and (8) reduction in FTEs. We found that manual sample handling was virtually identical between the two systems. The LH 1502 holds a slight edge over our older system with respect to staff exposure, mainly due to further reduction in manual backup testing. Stat TAT after introduction of the LH 1502 showed an additional 44% drop, from 50 to 28 minutes, while the routine TAT was reduced by 23%, down from 61 to 47 minutes. Gains achieved in productivity after installation of the first-generation system were maintained with the LH 1502, with significant extra volume capacity yet to be utilized. The space required for operating the system was also reduced by nearly 49%. There was a 20

Total extrusive and intrusive magma generated on Mars over the last approximately 3.8 billion years is estimated at 654 × 10^6 cubic kilometers, or 0.17 cubic kilometers per year, substantially less than the rates for Earth (26 to 34 km^3/yr) and Venus (less than 20 km^3/yr) but much more than for the Moon (0.025 km^3/yr). When scaled to Earth's mass, the Martian rate is much smaller than that for Earth or Venus and slightly smaller than that for the Moon.
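
The quoted per-year rate follows from simple division of the total magma volume by the time span:

```python
# Check of the long-term Martian magma production rate: the estimated
# total volume of 654e6 km^3 spread over ~3.8 billion years.
total_km3 = 654e6
years = 3.8e9
rate = total_km3 / years
print(f"{rate:.2f} km^3/yr")  # rounds to 0.17 km^3/yr, as stated
```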

In this paper, pressurized oxy-fuel combustion power generation processes are modeled and analyzed based on a 350 MW subcritical reheat boiler coupled with a condensing steam turbine, and performance results are presented. Furthermore, the influences of slurry concentration and coal properties on power plant performance are investigated. An oxy-fuel configuration operating at ambient pressure is also studied to compare its performance with the pressurized oxy-fuel configuration. Thermodynamic analysis reveals the true potential of the pressurized oxy-fuel process. Based on system integration, an improved configuration is proposed in which the plant efficiency of the pressurized oxy-fuel process is increased by 1.36%.

The decadal variability of surface net freshwater fluxes, and its predictability, are compared in a set of retrospective predictions, all using the same model setup and differing only in the implemented ocean initialisation method and ensemble generation method. The basic aim is to deduce the differences between the initialisation/ensemble generation methods in view of the uncertainty of the verifying observational data sets. The analysis will give an approximation of the uncertainties of the net freshwater fluxes, which up to now appear to be one of the most uncertain products in observational data and model outputs. All ensemble generation methods are implemented in the MPI-ESM earth system model in the framework of the ongoing MiKlip project (www.fona-miklip.de). Hindcast experiments are initialised annually between 2000 and 2004, and from each start year 10 ensemble members are initialised for 5 years each. Four different ensemble generation methods are compared: (i) a method based on the Anomaly Transform method (Romanova and Hense, 2015), in which the initial oceanic perturbations represent orthogonal and balanced anomaly structures in space and time and between the variables, taken from a control run; (ii) one-day-lagged ocean states from the MPI-ESM-LR baseline system; (iii) one-day-lagged ocean and atmospheric states with preceding full-field nudging to re-analysis in both the atmospheric and the oceanic component of the system (the baseline MPI-ESM-LR system); (iv) an Ensemble Kalman Filter (EnKF) implemented in the oceanic part of MPI-ESM (Brune et al. 2015), assimilating monthly subsurface oceanic temperature and salinity (EN3) using the Parallel Data Assimilation Framework (PDAF). The hindcasts are evaluated probabilistically using freshwater flux data from four different reanalysis data sets: MERRA, NCEP-R1, the GFDL ocean reanalysis and GECCO2. The assessments show no clear differences in the evaluation scores on regional scales. However, on the

Computer simulation of genomic data has become increasingly popular for assessing and validating biological models or for gaining an understanding of specific data sets. Several computational tools for the simulation of next-generation sequencing (NGS) data have been developed in recent years, which could be used to compare existing and new NGS analytical pipelines. Here we review 23 of these tools, highlighting their distinct functionality, requirements and potential applications. We also provide a decision tree for the informed selection of an appropriate NGS simulation tool for the specific question at hand. PMID:27320129

The capabilities, limitations, and associated costs of two total energy systems for a diesel power generation plant are compared. Both systems utilize waste heat from engine cooling water and from exhaust gases. The pressurized-water heat recovery system is simple in nature and requires no engine modifications, but operates at lower temperature ranges. On the other hand, a two-phase ebullient system operates the engine at constant temperature and provides higher-temperature water or steam to the load, but is more expensive.

This Waste Generation Forecast for DOE-ORO's Environmental Restoration OR-1 Project, FY 1994--FY 2001 is the third in a series of documents that report current estimates of the waste volumes expected to be generated as a result of Environmental Restoration activities at Department of Energy, Oak Ridge Operations Office (DOE-ORO), sites. Considered in the scope of this document are volumes of waste expected to be generated as a result of remedial action and decontamination and decommissioning activities taking place at these sites. The sites contributing to the total estimates make up the DOE-ORO Environmental Restoration OR-1 Project: the Oak Ridge K-25 Site, the Oak Ridge National Laboratory, the Y-12 Plant, the Paducah Gaseous Diffusion Plant, the Portsmouth Gaseous Diffusion Plant, and the off-site contaminated areas adjacent to the Oak Ridge facilities (collectively referred to as the Oak Ridge Reservation Off-Site area). Estimates are available for the entire life of all waste-generating activities. This document summarizes waste estimates forecasted for the 8-year period FY 1994--FY 2001. Updates with varying degrees of change are expected throughout the refinement of restoration strategies currently in progress at each of the sites. Waste forecast data are relatively fluid, and this document represents remediation plans only as reported through September 1993.

Radcalc for Windows Version 2.01 is a user-friendly software program developed by Waste Management Federal Services, Inc., Northwest Operations for the U.S. Department of Energy (McFadden et al. 1998). It is used for transportation and packaging applications in the shipment of radioactive waste materials. Among its applications are the classification of waste per the U.S. Department of Transportation regulations, the calculation of decay heat and daughter products, and the calculation of the radiolytic production of hydrogen gas. The Radcalc program has been extensively tested and validated (Green et al. 1995, McFadden et al. 1998) by comparison of each Radcalc algorithm to hand calculations. An opportunity to benchmark Radcalc hydrogen gas generation calculations against experimental data arose when the Rocky Flats Environmental Technology Site (RFETS) Residue Stabilization Program collected hydrogen gas generation data to determine compliance with requirements for shipment of waste in the TRUPACT-II (Schierloh 1998). The residue/waste drums tested at RFETS contain contaminated, solid, inorganic materials in polyethylene bags. The contamination is predominantly due to plutonium and americium isotopes. The information provided by Schierloh (1998) of RFETS includes decay heat, hydrogen gas generation rates, calculated G_eff values, and waste material type, making the experimental data ideal for benchmarking Radcalc. The following sections discuss the RFETS data and the Radcalc cases modeled with the data. Results are tabulated and also provided graphically.
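
Radcalc's internal algorithms are not reproduced here, but the standard radiolysis relation that links decay heat to a hydrogen generation rate through an effective G value (molecules of H2 produced per 100 eV of absorbed energy) can be sketched as follows; the function name and example inputs are illustrative assumptions.

```python
# Hedged sketch (not Radcalc's actual code): hydrogen generation rate
# from decay heat P and an effective radiolytic yield G_eff, where
# G_eff is expressed in molecules of H2 per 100 eV of absorbed energy.
EV_PER_JOULE = 1.0 / 1.602176634e-19   # eV in one joule
AVOGADRO = 6.02214076e23               # molecules per mole

def h2_rate_mol_per_s(decay_heat_watts, g_eff):
    ev_per_s = decay_heat_watts * EV_PER_JOULE   # absorbed energy rate, eV/s
    molecules_per_s = ev_per_s * g_eff / 100.0   # G_eff is per 100 eV
    return molecules_per_s / AVOGADRO

# Illustrative values only: 1 W of decay heat, G_eff = 1.0 molecules/100 eV
rate = h2_rate_mol_per_s(1.0, 1.0)
```

With these illustrative inputs the rate comes out on the order of 1e-7 mol/s; the benchmark described above compares such computed rates against the measured RFETS drum data.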

The advent and widespread application of next-generation sequencing (NGS) technologies to the study of microbial genomes has led to a substantial increase in the number of studies in which whole genome sequencing (WGS) is applied to the analysis of microbial genomic epidemiology. However, microorganisms such as Mycobacterium tuberculosis (MTB) present unique problems for sequencing and downstream analysis based on their unique physiology and the composition of their genomes. In this study, we compare the quality of sequence data generated using the Nextera and TruSeq isolate preparation kits for library construction prior to Illumina sequencing-by-synthesis. Our results confirm that MTB NGS data quality is highly dependent on the purity of the DNA sample submitted for sequencing and its guanine-cytosine content (or GC-content). Our data additionally demonstrate that the choice of library preparation method plays an important role in mitigating downstream sequencing quality issues. Importantly for MTB, the Illumina TruSeq library preparation kit produces more uniform data quality than the Nextera XT method, regardless of the quality of the input DNA. Furthermore, specific genomic sequence motifs are commonly missed by the Nextera XT method, as are regions of especially high GC-content relative to the rest of the MTB genome. As coverage bias is highly undesirable, this study illustrates the importance of appropriate protocol selection when performing NGS studies in order to ensure that sound inferences can be made regarding mycobacterial genomes. PMID:26849565

In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing demand for immersive game experiences. The Microsoft Kinect sensor allows the acquisition of RGB, IR and depth images at a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of the Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize each imaging sensor. Experimental results show that the quality of the delivered model improved when applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries. PMID:26528979
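
The paper's actual calibration model and measurements are not reproduced here, but the general idea of estimating depth error as a function of sensor-to-object distance can be sketched with an ordinary least-squares fit; the sample points and the linear form below are invented for illustration.

```python
# Hedged sketch: fit depth error as a linear function of distance,
# error ~ a + b * distance, by ordinary least squares. The data are
# invented; the paper's model and measurements are not reproduced.
def fit_line(distances, errors):
    n = len(distances)
    mx = sum(distances) / n
    my = sum(errors) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(distances, errors))
         / sum((x - mx) ** 2 for x in distances))
    a = my - b * mx
    return a, b

# Distance (m) vs observed depth error (mm), illustrative only
dist = [0.8, 1.2, 1.6, 2.0, 2.4]
err = [1.0, 1.5, 2.1, 2.4, 3.0]
a, b = fit_line(dist, err)
```

Once `a` and `b` are estimated, a raw depth reading at distance `d` can be corrected by subtracting the predicted error `a + b * d`, which is the essence of applying such a calibration to point clouds.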

The treatment of spent fuel produced in nuclear power generation is one of the most important issues to both the nuclear community and the general public. One of the viable options to long-term geological disposal of spent fuel is to extract plutonium, minor actinides (MA), and potentially long-lived fission products from the spent fuel and transmute them into short-lived or stable radionuclides in currently operating light-water reactors (LWR), thus reducing the radiological toxicity of the nuclear waste stream. One of the challenges is to demonstrate that the burnup-dependent characteristic differences between Reactor-Grade Mixed Oxide (RG-MOX) fuel and RG-MOX fuel with the minor actinides Np-237 and Am-241 are minimal, particularly the inert gas generation rate, such that the commercial MOX fuel experience base is applicable. Under the Advanced Fuel Cycle Initiative (AFCI), developmental fuel specimens in experimental assembly LWR-2 are being tested in the northwest (NW) I-24 irradiation position of the Advanced Test Reactor (ATR). The experiment uses MOX fuel test hardware and contains capsules with MOX fuel manufactured using reactor-grade plutonium (RG-Pu) and MOX fuel manufactured using RG-Pu with added Np/Am. This study will compare the fuel neutronics depletion characteristics of Case-1 RG-MOX and Case-2 RG-MOX with Np/Am.

T-lymphocytes genetically engineered with the chimeric antigen receptor (CAR-T) have shown great therapeutic potential in cancer treatment. A variety of preclinical studies and clinical trials of CAR-T therapy have been carried out to lay the foundation for future clinical application. In these studies, several gene-transfer methods were used to deliver CARs or other genes into T-lymphocytes, equipping CAR-modified T cells with the ability to recognize and attack antigen-expressing tumor cells in a major histocompatibility complex-independent manner. Here, we summarize the gene-transfer vectors commonly used in the generation of CAR-T cells, including retrovirus vectors, lentivirus vectors, the transposon/transposase system, the plasmid-based system, and the messenger RNA electroporation system. The following aspects were compared in parallel: efficiency of gene transfer, the integration method in the modified T cells, prospects for scaled-up production, and application and development in clinical trials. These aspects should be taken into account to generate the optimal CAR-gene vector that may be suitable for future clinical application. PMID:27333595

Various analogues of Titan haze particles (termed 'tholins') have been made in the laboratory. In certain geologic environments on Titan, these haze particles may come into contact with aqueous ammonia (NH3) solutions, hydrolyzing them into molecules of astrobiological interest. A Titan tholin analogue hydrolyzed in aqueous NH3 at room temperature for 2.5 years was analyzed for amino acids using highly sensitive ultra-high performance liquid chromatography coupled with fluorescence detection and time-of-flight mass spectrometry (UHPLC-FD/ToF-MS) analysis after derivatization with a fluorescent tag. We compare here the amino acids produced from this reaction sequence with those generated from room temperature Miller-Urey (MU) type electric discharge reactions. We find that most of the amino acids detected in low temperature MU CH4/N2/H2O electric discharge reactions are generated in Titan simulation reactions, as well as in previous simulations of Triton chemistry. This argues that many processes provide very similar mixtures of amino acids, and possibly other types of organic compounds, in disparate environments, regardless of the order of hydration. Although it is unknown how life began, it is likely that given reducing conditions, similar materials were available throughout the early Solar System and throughout the universe to facilitate chemical evolution.

Free energy calculation has long been an important goal for molecular dynamics simulation and force field development, but historically it has been challenged by limited performance, accuracy, and creation of topologies for arbitrary small molecules. This has made it difficult to systematically compare different sets of parameters to improve existing force fields, but in the past few years several authors have developed increasingly automated procedures to generate parameters for force fields such as Amber, CHARMM, and OPLS. Here, we present a new framework that enables fully automated generation of GROMACS topologies for any of these force fields and an automated setup for parallel adaptive optimization of high-throughput free energy calculation by adjusting lambda point placement on the fly. As a small example of this automated pipeline, we have calculated solvation free energies of 50 different small molecules using the GAFF, OPLS-AA, and CGenFF force fields and four different water models, and by including the often neglected polarization costs, we show that the common charge models are somewhat underpolarized. PMID:25343332
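
As a hedged illustration of the underlying idea (not the authors' actual GROMACS pipeline), thermodynamic integration accumulates a free energy difference from ensemble averages of dH/dlambda sampled at discrete lambda points, and an adaptive scheme can insert new lambda points where the integrand changes fastest. The integrand values below are fabricated for illustration.

```python
# Hedged sketch: thermodynamic integration estimates a free energy
# difference by integrating <dH/dlambda> over lambda in [0, 1].
def trapezoid_free_energy(lambdas, dhdl):
    """Trapezoid-rule estimate of dG from (lambda, <dH/dlambda>) samples."""
    dg = 0.0
    for i in range(len(lambdas) - 1):
        width = lambdas[i + 1] - lambdas[i]
        dg += 0.5 * width * (dhdl[i] + dhdl[i + 1])
    return dg

def next_lambda(lambdas, dhdl):
    """Place a new lambda point in the middle of the interval where
    the integrand jumps the most: a simple on-the-fly refinement rule."""
    jumps = [abs(dhdl[i + 1] - dhdl[i]) for i in range(len(dhdl) - 1)]
    i = jumps.index(max(jumps))
    return 0.5 * (lambdas[i] + lambdas[i + 1])

lambdas = [0.0, 0.5, 1.0]
dhdl = [10.0, 4.0, 1.0]   # fabricated integrand values for illustration
dG = trapezoid_free_energy(lambdas, dhdl)
new_point = next_lambda(lambdas, dhdl)
```

Here the refinement rule would add a point at lambda = 0.25, where the fabricated integrand is steepest; real adaptive schemes use more sophisticated criteria, but the principle of concentrating sampling where the integrand varies most is the same.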

The aim of this study was to enable a quantitative comparison of initial soil erosion processes in European vineyards using the same methodology and equipment. The study was conducted in four viticultural areas with different characteristics (Valencia and Málaga in Spain, Ruwer-Mosel valley and Saar-Mosel valley in Germany). Old and young vineyards, with conventional and ecological planting and management systems were compared. The same portable rainfall simulator with identical rainfall intensity (40 mm h⁻¹) and sampling intervals (30 min of test duration, collecting the samples at 5-min intervals) was used over a circular test plot of 0.28 m². The results of 83 simulations have been analysed and correlation coefficients were calculated for each study area to identify the relationship between environmental plot characteristics, soil texture, soil erosion, runoff and infiltration. The results allow for identification of the main factors related to soil properties, topography and management, which control soil erosion processes in vineyards. The most important factors influencing soil erosion and runoff were the vegetation cover for the ecological German vineyards (with 97.6±8% infiltration coefficients) and stone cover, soil moisture and slope steepness for the conventional land uses. PMID:27265730

Halo merger trees describe the hierarchical assembly of dark matter haloes, and are the backbone for modelling galaxy formation and evolution. Merger trees constructed using Monte Carlo algorithms based on the extended Press-Schechter (EPS) formalism are complementary to using N-body simulations and have the advantage that they are not trammelled by limited numerical resolution and uncertainties in identifying and linking (sub)haloes. This paper compares multiple EPS-based merger tree algorithms to simulation results using four diagnostics: progenitor mass function, mass assembly history (MAH), merger rate per descendant halo and the unevolved subhalo mass function. Spherical collapse-based methods typically overpredict major-merger rates, whereas ellipsoidal collapse dramatically overpredicts the minor-merger rate for massive haloes. The only algorithm in our comparison that yields results in good agreement with simulations is that by Parkinson et al. (P08). We emphasize, though, that the simulation results used as benchmarks in testing the merger trees are hampered by significant uncertainties themselves: MAHs and merger rates from different studies easily disagree by 50 per cent, even when based on the same simulation. Given this status quo, the P08 merger trees can be considered as accurate as those extracted from simulations.

The MB/BacT system (MB/BacT) with a revised antibiotic supplement kit was compared with the BACTEC 460 system (BACTEC 460) in a test of 488 specimens submitted for mycobacterial culture from 302 patients. Twenty-four Mycobacterium tuberculosis isolates were detected by the BACTEC 460 versus 23 isolates by the MB/BacT. Mean time until detection of M. tuberculosis isolates identified by both systems was 11.9 days for the BACTEC 460 versus 13.7 days for the MB/BacT (P = 0.046). M. avium complex was detected in 12 specimens by the MB/BacT versus 10 specimens by the BACTEC 460. Only 8 of 14 (57%) M. avium isolates were detected by both systems, with a mean time until detection of 10.1 days for the BACTEC 460 and 14.2 days for the MB/BacT (P = 0.009). The BACTEC 460 and the MB/BacT detected M. gordonae in four specimens, but only a single specimen was positive by both systems. One M. fortuitum isolate and one of five M. kansasii isolates were recovered only by the BACTEC 460. The bacterial overgrowth rate was 7.0% for the MB/BacT versus 4.1% for the BACTEC 460. We found the MB/BacT to be comparable to the BACTEC 460 for mycobacterial detection. Even though time until detection with the MB/BacT was slightly longer (1.8 days longer for M. tuberculosis and 4.1 days for M. avium [mean values]) and the bacterial overgrowth rate was somewhat higher, the decreased labor, the availability of a computerized data management system, and the noninvasive, nonradiometric aspects of the MB/BacT offset these relative disadvantages and make it an acceptable alternative for use in the diagnostic laboratory. PMID:9774571

For better or worse, natural gas has become the fuel of choice for new power plants being built across the United States. According to the US Energy Information Administration (EIA), natural gas combined-cycle and combustion turbine power plants accounted for 96% of the total generating capacity added in the US between 1999 and 2002--138 GW out of a total of 144 GW. Looking ahead, the EIA expects that gas-fired technology will account for 61% of the 355 GW new generating capacity projected to come on-line in the US up to 2025, increasing the nationwide market share of gas-fired generation from 18% in 2002 to 22% in 2025. While the data are specific to the US, natural gas-fired generation is making similar advances in other countries as well. Regardless of the explanation for (or interpretation of) the empirical findings, however, the basic implications remain the same: one should not blindly rely on gas price forecasts when comparing fixed-price renewable with variable-price gas-fired generation contracts. If there is a cost to hedging, gas price forecasts do not capture and account for it. Alternatively, if the forecasts are at risk of being biased or out of tune with the market, then one certainly would not want to use them as the basis for resource comparisons or investment decisions if a more certain source of data (forwards) existed. Accordingly, assuming that long-term price stability is valued, the most appropriate way to compare the levelized cost of these resources in both cases would be to use forward natural gas price data--i.e. prices that can be locked in to create price certainty--as opposed to uncertain natural gas price forecasts. This article suggests that had utilities and analysts in the US done so over the sample period from November 2000 to November 2003, they would have found gas-fired generation to be at least 0.3-0.6 cents/kWh more expensive (on a levelized cost basis) than otherwise thought. With some renewable resources, in particular wind

The authors studied the utility of the DSM-IV Global Assessment of Functioning (GAF) scale for improving interdisciplinary communication about patient care. Discharge GAF scores for 165 discharged inpatients were computer generated by 13 trained unit social workers and derived by eight psychiatrists on the basis of their clinical impressions. Differences between the scores obtained by the two disciplinary groups were tested by using the paired t test and the nonparametric signed-rank test. Agreement between scores for various GAF categories was tested with kappa agreement indexes. Interdisciplinary agreement on discharge GAF scores was observed across diagnostic categories and across most categories of length of stay. The results suggest that social workers, after receiving systematic training in computer-based GAF reports, can provide reasonable assessments of clients' functioning. PMID:11875231
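For readers unfamiliar with the kappa agreement index used in the study, a minimal pure-Python sketch of unweighted Cohen's kappa between two raters follows; the binned GAF ratings are invented, not the study's data.

```python
# Unweighted Cohen's kappa between two raters assigning categorical scores
# (e.g. binned GAF ranges). The example ratings are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence of the two raters.
    pa, pb = Counter(rater_a), Counter(rater_b)
    cats = set(pa) | set(pb)
    expected = sum((pa[c] / n) * (pb[c] / n) for c in cats)
    return (observed - expected) / (1 - expected)

social_worker = ["41-50", "51-60", "51-60", "61-70", "41-50", "61-70"]
psychiatrist  = ["41-50", "51-60", "61-70", "61-70", "41-50", "61-70"]
print(round(cohens_kappa(social_worker, psychiatrist), 3))  # -> 0.75
```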

Au nanoparticles were generated by laser ablation in PBS buffer and conjugated to immunoglobulin E (IgE) during ablation (in situ) and after ablation (ex situ). Exposure for 5 min to 532 nm pulses with a duration of 150 ps, an energy of 8 mJ and a repetition rate of 10 Hz yielded nanoparticles with a mean diameter of about 4 nm for in situ conjugation and of about 5 nm for ex situ conjugation. ELISA analysis showed that the conjugation efficiency was comparable for in situ and ex situ fabrication. ELISA for cytokine (IL-6) production by IgE-activated mast cells showed that the Au-IgE conjugates induced a response which coincided within error for conjugates prepared in situ and ex situ.

Four Air Force pilots were used as subjects to assess a battery of subjective and physiological workload measures in a flight simulation environment in which two computer-generated primary flight display configurations were evaluated. A high- and low-workload task was created by manipulating flight path complexity. Both SWAT and the NASA-TLX were shown to be effective in differentiating the high and low workload path conditions. Physiological measures were inconclusive. A battery of workload measures continues to be necessary for an understanding of the data. Based on workload, opinion, and performance data, it is fruitful to pursue research with a primary flight display and a horizontal situation display integrated into a single display.

Snow avalanches are a source of waves that are transmitted through the ground and the air. These wave fields are detected by seismic and infrasound sensors. During the winter seasons 2008-2016, a good-quality database of avalanches was obtained at the VdlS test site with accurate instrumentation. These avalanches were both natural and artificially triggered and were of varying types and sizes. Distances involved were 0.5-3 km. Seismic signals were acquired using three seismometers (3-component, 1 Hz) spaced 600 m apart along the avalanche track. One infrasound sensor (0.1 Hz) and one seismometer (3-component, 1 Hz) were placed next to each other, with a common time base, on the slope opposite the path. The database obtained enables us to compare the different signals generated. Differences in the frequency content and shape of the signals depending on the type and size of the avalanche are detected. A clear evolution of the recorded seismic signals along the path is observed. The cross-correlation of the infrasound and seismic signals generated by the avalanches allows us to determine different characteristics of the wave fields of powder, transitional and wet avalanches. The joint analysis of infrasound and seismic waves enables us to obtain valuable information about the internal parts of the avalanche as the source of each wave field. This study has repercussions on avalanche dynamics and on the selection of the appropriate avalanche detection system. This study is supported by the Spanish Ministry of Science and Innovation project CHARMA: CHAracterization and ContRol of MAss Movements. A Challenge for Geohazard Mitigation (CGL2013-40828-R), and RISKNAT group (2014GR/1243).
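The cross-correlation step can be sketched numerically as follows; the signals, sampling rate, and lag below are synthetic stand-ins, not VdlS data.

```python
# Minimal sketch of estimating the lag between co-located infrasound and
# seismic records by cross-correlation, using synthetic pulses (numpy only).
import numpy as np

fs = 100.0                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
pulse = np.exp(-((t - 4.0) ** 2) / 0.1)     # synthetic seismic arrival at 4 s
lag_true = 1.5                              # infrasound arrives 1.5 s later
infra = np.exp(-((t - 4.0 - lag_true) ** 2) / 0.1)

# Full cross-correlation of the mean-removed signals; the peak position
# gives the relative delay between the two records.
xcorr = np.correlate(infra - infra.mean(), pulse - pulse.mean(), mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs
lag_est = lags[np.argmax(xcorr)]
print(f"estimated lag: {lag_est:.2f} s")
```

In practice the same machinery is applied to band-passed field records, where the lag and correlation strength differ between powder, transitional and wet avalanches.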

The city of Astana, the capital of Kazakhstan, has a population of 804,474, has been experiencing rapid growth over the last 15 years, and generates approximately 1.39 kg capita(-1) day(-1) of municipal solid waste (MSW). Nearly 700 tonnes of MSW are collected daily, of which 97% is disposed of at landfills. The newest landfill was built using modern technologies, including a landfill gas (LFG) collection system. The rapid growth of Astana demands more energy on its path to development, and a viability analysis of using MSW to generate electricity is imperative. This paper presents a technical-economic pre-feasibility study comparing LFG utilization and waste incineration (WI) to produce electricity. The performance of LFG with a reciprocating engine and of WI with a steam turbine was compared through the corresponding greenhouse gas (GHG) reductions, cost of energy production (CEP), benefit-cost ratio (BCR), net present value (NPV) and internal rate of return (IRR) from the analyses. Results demonstrate that in the city of Astana, WI has the potential to reduce more than 200,000 tonnes of GHG per year, while LFG could reduce slightly less than 40,000 tonnes. LFG offers a CEP 5.7% larger than WI, while the latter presents a BCR two times higher than LFG. The WI technology analysis depicts an NPV exceeding 280% of the equity, while for LFG the NPV is less than the equity, which indicates an expected remarkable financial return for the WI technology and a marginal, risky scenario for the LFG technology. Only existing landfill facilities with an LFG collection system in place may turn LFG into a viable project. PMID:25819927
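The NPV and BCR metrics used in the pre-feasibility comparison can be sketched as follows; the cash flows, discount rate, and project sizes are invented for illustration and are not the study's figures.

```python
# Hedged sketch of two pre-feasibility metrics (NPV and benefit-cost ratio)
# with invented cash flows for hypothetical WI and LFG projects.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the year-0 outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def bcr(rate, benefits, costs):
    """Benefit-cost ratio: PV of the benefit stream over PV of the cost stream."""
    pv = lambda xs: sum(x / (1 + rate) ** t for t, x in enumerate(xs))
    return pv(benefits) / pv(costs)

rate = 0.10
wi  = [-50e6] + [9e6] * 20    # hypothetical waste-incineration net cash flows
lfg = [-10e6] + [1.2e6] * 20  # hypothetical landfill-gas net cash flows

wi_bcr = bcr(rate, [0] + [12e6] * 20, [50e6] + [3e6] * 20)
print(f"WI  NPV: {npv(rate, wi) / 1e6:6.1f} M$,  BCR: {wi_bcr:.2f}")
print(f"LFG NPV: {npv(rate, lfg) / 1e6:6.1f} M$")
```

With these invented numbers the WI project clears its outlay comfortably while the LFG project's NPV is barely positive, mirroring the "remarkable return vs marginal scenario" contrast drawn in the abstract.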

The estimated 721,800 hospital-acquired infections per year in the United States have necessitated development of novel environmental decontamination technologies such as ultraviolet germicidal irradiation (UVGI). This study evaluated the efficacy of a novel, portable UVGI generator (the TORCH, ChlorDiSys Solutions, Inc., Lebanon, NJ) to disinfect surface coupons composed of plastic from a bedrail, stainless steel, a chrome-plated light switch cover, and a porcelain tile that were inoculated with methicillin-resistant Staphylococcus aureus (MRSA) or vancomycin-resistant Enterococcus faecalis (VRE). Each surface type was placed at 6 different sites within a hospital room and treated by 10-min ultraviolet-C (UVC) exposures using the TORCH, with doses ranging from 0-688 mJ/cm(2) between sites. Organism reductions were compared with untreated surface coupons as controls. Overall, UVGI significantly reduced MRSA by an average of 4.6 log10 (GSD: 1.7 log10, 77% inactivation, p < 0.0001) and VRE by an average of 3.9 log10 (GSD: 1.7 log10, 65% inactivation, p < 0.0001). MRSA on bedrail was reduced significantly less (p < 0.0001) than on other surfaces, while VRE was reduced significantly less on chrome (p = 0.0004) and stainless steel (p = 0.0012) than on porcelain tile. Organisms out of the direct line of sight of the UVC generator were reduced significantly less (p < 0.0001) than those directly in line of sight. UVGI was found to be an effective method to inactivate nosocomial pathogens on surfaces within the hospital environment in the direct line of sight of UVGI treatment, with variation between organism and surface types. PMID:27028152
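The log10 reduction metric reported above is computed from paired control/treated counts; a minimal sketch with invented CFU counts follows.

```python
# Log-reduction arithmetic used in disinfection studies:
# reduction (log10) = log10(N_control / N_treated).
# The CFU counts below are invented, not the study's data.
import math

def log10_reduction(cfu_control, cfu_treated):
    return math.log10(cfu_control / cfu_treated)

# e.g. 2.0e6 CFU on untreated coupons vs 5.0e1 CFU after a UVC exposure
r = log10_reduction(2.0e6, 5.0e1)
print(f"{r:.1f} log10 reduction")
```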

Chlamydomonas reinhardtii possesses many potential advantages to be exploited as a biocatalyst in microbial fuel cells (MFCs) for electricity generation. In the present study, we performed computational studies based on flux balance analysis (FBA) to probe the maximum potential of C. reinhardtii for current output and identify the metabolic mechanisms supporting a high current generation in three different cultivation conditions, i.e., heterotrophic, photoautotrophic and mixotrophic growth. The results showed that flux balance limitations allow the highest current output for C. reinhardtii in the mixotrophic growth mode (2.368 A/gDW), followed by heterotrophic growth (1.141 A/gDW), with photoautotrophic growth lowest (0.7035 A/gDW). The significantly higher mediated electron transfer (MET) rate in the mixotrophic mode is in complete contrast to previous findings for a photosynthetic cyanobacterium, and was attributed to the fact that for C. reinhardtii photophosphorylation improved the efficiency of converting acetate into biomass and NADH production. Overall, the cytosolic NADH-dependent current production was mainly associated with five reactions in both the mixotrophic and photoautotrophic nutritional modes, whereas four reactions participated in the heterotrophic mode. The mixotrophic and photoautotrophic metabolisms were alike and shared the same set of reactions for maximizing current production, whereas in the heterotrophic mode the current production was additionally contributed by metabolic activities in two organelles: the glyoxysome and the chloroplast. In conclusion, C. reinhardtii has the potential to be exploited in MFCs of the MET mode to produce a high current output. PMID:24875305

Of the many challenges in rhinoplasty, achieving a satisfactory outcome at the first operation is important. There are multiple reasons for secondary surgery, and generally revisions can be broadly classified as minor (often one area of deficit) or a total redo. Understanding the common technical reasons for failure in primary surgery by analyzing the deformities has resulted in various error patterns emerging. Understanding these patterns means we can modify techniques in primary surgery to reduce the incidence of revision. This article describes our prospective revision rhinoplasty experience over 5 years and then 2 years, highlighting the main error patterns encountered. We also describe a stepwise analysis of four frequently encountered key problem areas alongside techniques to address them, and offer pearls to help prevent further revision. Comparison of two cohorts of patients from a teaching hospital setting and private practice with the same operating surgeon indicates an increasing tendency toward the open approach for revisions. The re-revision rates for these groups are 15.7% and 9%, respectively. Revision rhinoplasty is a difficult operation to perform to the satisfaction of both the surgeon and the patient. Understanding the common technical reasons for failure in primary surgery by fully analyzing the deformities means we can modify techniques in primary surgery to reduce the incidence of revision. PMID:27494585

Fourier transform ion cyclotron resonance mass spectrometry (FT ICR-MS) was applied in the analysis of shale oils generated using two different pyrolysis systems under laboratory conditions meant to simulate surface and in situ oil shale retorting. Significant variations were observed in the shale oils, particularly the degree of conjugation of the constituent molecules. Comparison of FT ICR-MS results to standard oil characterization methods (API gravity, SARA fractionation, gas chromatography-flame ionization detection) indicated correspondence between the average Double Bond Equivalence (DBE) and asphaltene content. The results show that, based on the average DBE values and DBE distributions of the shale oils examined, highly conjugated species are enriched in samples produced under low pressure, high temperature conditions and in the presence of water.
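Double Bond Equivalence is computed from the elemental composition assigned to each FT ICR-MS peak; a minimal sketch of the standard formula (O and S do not contribute) follows, with textbook example molecules.

```python
# Double Bond Equivalence for a formula CcHhNnOoSs, as commonly used in
# FT ICR-MS data analysis: DBE = c - h/2 + n/2 + 1.

def dbe(c, h, n=0):
    """Rings plus double bonds implied by an elemental composition."""
    return c - h / 2 + n / 2 + 1

print(dbe(6, 6))        # benzene C6H6      -> 4.0 (3 C=C + 1 ring)
print(dbe(10, 8))       # naphthalene C10H8 -> 7.0
print(dbe(9, 7, n=1))   # quinoline C9H7N   -> 7.0
```

Averaging DBE over all assigned formulas in a spectrum gives the "average DBE" figure the abstract correlates with asphaltene content.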

We present a detailed analysis and comparison of dielectric waveguides made of CdTe, GaP, GaAs and InP for modal phase matched optical difference frequency generation (DFG) in the terahertz domain. From the form of the DFG equations, we derived the definition of a very general figure of merit (FOM). In turn, this FOM enabled us to compare different configurations, by taking into account linear and nonlinear susceptibility dispersion, terahertz absorption, and a rigorous evaluation of the waveguide modes properties. The most efficient waveguides found with this procedure are predicted to approach the quantum efficiency limit with input optical power in the order of kWs.

The Generation IV International Forum (GIF) is a vehicle for the cooperative international development of future nuclear energy systems. The Generation IV program has established primary objectives in the areas of sustainability, economics, safety and reliability, and Proliferation Resistance and Physical Protection (PR&PP). In order to help meet the latter objective a program was launched in December 2002 to develop a rigorous means to assess nuclear energy systems with respect to PR&PP. The study of Physical Protection of a facility is a relatively well established methodology, but an approach to evaluate the Proliferation Resistance of a nuclear fuel cycle is not. This paper will examine the Proliferation Resistance (PR) evaluation methodology being developed by the PR group, which is largely a new approach and compare it to generally accepted nuclear facility safety evaluation methodologies. Safety evaluation methods have been the subjects of decades of development and use. Further, safety design and analysis is fairly broadly understood, as well as being the subject of federally mandated procedures and requirements. It is therefore extremely instructive to compare and contrast the proposed new PR evaluation methodology process with that used in safety analysis. By so doing, instructive and useful conclusions can be derived from the comparison that will help to strengthen the PR methodological approach as it is developed further. From the comparison made in this paper it is evident that there are very strong parallels between the two processes. Most importantly, it is clear that the proliferation resistance aspects of nuclear energy systems are best considered beginning at the very outset of the design process. Only in this way can the designer identify and cost effectively incorporate intrinsic features that might be difficult to implement at some later stage. Also, just like safety, the process to implement proliferation resistance should be a dynamic

We examined fungal communities associated with the PM10 mass of outdoor air samples from Rehovot, Israel, collected in the spring and fall seasons. Fungal communities were described by 454 pyrosequencing of the internal transcribed spacer (ITS) region of the fungal ribosomal RNA encoding gene. To allow for a more quantitative comparison of fungal exposure in humans, the relative abundance values of specific taxa were transformed to absolute concentrations by multiplying these values by the sample's total fungal spore concentration (derived from universal fungal qPCR). Next, the sequencing-based absolute concentrations for Alternaria alternata, Cladosporium cladosporioides, Epicoccum nigrum, and Penicillium/Aspergillus spp. were compared to taxon-specific qPCR concentrations for A. alternata, C. cladosporioides, E. nigrum, and Penicillium/Aspergillus spp. derived from the same spring and fall aerosol samples. Results of these comparisons showed that the absolute concentration values generated from pyrosequencing were strongly associated with the concentration values derived from taxon-specific qPCR (for all four species, p < 0.005, all R > 0.70). The correlation coefficients were greater for species present in higher concentrations. Our microbial aerosol population analyses demonstrated that fungal diversity (number of fungal operational taxonomic units) was higher in the spring compared to the fall (p = 0.02), and principal coordinate analysis showed distinct seasonal differences in taxa distribution (ANOSIM p = 0.004). Among genera containing allergenic and/or pathogenic species, the absolute concentrations of Alternaria, Aspergillus, Fusarium, and Cladosporium were greater in the fall, while Cryptococcus, Penicillium, and Ulocladium concentrations were greater in the spring. The transformation of pyrosequencing fungal population relative abundance data to absolute concentrations can improve next-generation DNA sequencing-based quantitative aerosol exposure assessment.
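The relative-to-absolute transformation described above is simple arithmetic: each taxon's relative sequence abundance is multiplied by the sample's total spore concentration from universal qPCR. A sketch with invented abundances and spore concentration follows.

```python
# Transform relative sequence abundances to absolute concentrations by
# scaling with the sample's total fungal spore concentration (from
# universal qPCR). All numbers are illustrative, not the study's data.

def absolute_concentrations(rel_abundance, total_spores_per_m3):
    return {taxon: frac * total_spores_per_m3
            for taxon, frac in rel_abundance.items()}

rel = {"Alternaria alternata": 0.12,
       "Cladosporium cladosporioides": 0.30,
       "Epicoccum nigrum": 0.05}
abs_conc = absolute_concentrations(rel, total_spores_per_m3=20000)
print(abs_conc)  # spores/m^3 per taxon
```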

This report discusses comparison tests for two methods of collecting vapor samples from the Hanford Site high-level radioactive waste tank headspaces. The two sampling methods compared are the truck-mounted vapor sampling system (VSS) and the cart-mounted in-situ vapor sampling (ISVS). Three tanks were sampled by both the VSS and ISVS methods from the same access risers within the same 8-hour period. These tanks have diverse headspace compositions and they represent the highest known level of several key vapor analytes.

Boron ion beams are widely used for semiconductor ion implantation and for surface modification for improving the operating parameters and increasing the lifetime of machine parts and tools. For the latter application, the purity requirements of boron ion beams are not as stringent as for semiconductor technology, and a composite cathode of lanthanum hexaboride may be suitable for the production of boron ions. We have explored the use of two different approaches to boron plasma production: vacuum arc and planar high power impulse magnetron in self-sputtering mode. For the arc discharge, the boron plasma is generated at cathode spots, whereas for the magnetron discharge, the main process is sputtering of cathode material. We present here the results of comparative test experiments for both kinds of discharge, aimed at determining the optimal discharge parameters for maximum yield of boron ions. For both discharges, the extracted ion beam current reaches hundreds of milliamps and the fraction of boron ions in the total extracted ion beam is as high as 80%. PMID:26931963

Because of the lack of an accurate and sensitive tool to evaluate the parasitemia level, treatment or prevention of leishmaniasis remains an important challenge worldwide. To monitor and track leishmanial infection by two parameters in real time, we generated stably transgenic Leishmania that express a bi-reporter protein as fused EGFP and firefly luciferase. Using two reporter genes (egfp-luc) simultaneously increases the experimental sensitivity for detection/diagnosis, and in vitro quantification of parasites as well as real-time infection in mice. Through different specific tools, EGFP and LUC signals from the parasite were detectable and measurable within a mammalian host and promastigotes. Here, the LUC protein provided a higher level of sensitivity than did EGFP, so that infection was detectable at an earlier stage of the disease in the footpad (injection site) and lymph nodes by bioluminescence. These results depicted that: (1) both quantitative reporter genes, EGFP and LUC, could be simultaneously used to detect parasitemia in vitro and in vivo and (2) sensitivity of firefly luciferase was 10-fold higher than that of EGFP in promastigotes. PMID:25637784

Targeted, capture-based DNA sequencing is a cost-effective method to focus sequencing on a coding region or other customized region of the genome. There are multiple targeted sequencing methods available, but none has been systematically investigated and compared. We evaluated four commercially available custom-targeted DNA technologies for next-generation sequencing with respect to on-target sequencing, uniformity, and ability to detect single-nucleotide variations (SNVs) and copy number variations. The technologies that used sonication for DNA fragmentation displayed impressive uniformity of capture, whereas the others had shorter preparation times, but sacrificed uniformity. One of those technologies, which uses transposase for DNA fragmentation, has a drawback requiring sample pooling, and the last one, which uses restriction enzymes, has a limitation depending on restriction enzyme digest sites. Although all technologies displayed some level of concordance for calling SNVs, the technologies that require restriction enzymes or transposase missed several SNVs largely because of the lack of coverage. All technologies performed well for copy number variation calling when compared to single-nucleotide polymorphism arrays. These results enable laboratories to compare these methods to make informed decisions for their intended applications. PMID:25528188

Aquatic oligochaetes are a common group of freshwater benthic invertebrates known to be very sensitive to environmental changes and currently used as bioindicators in some countries. However, more extensive application of oligochaetes for assessing the ecological quality of sediments in watercourses and lakes would require overcoming the difficulties related to morphology-based identification of oligochaete species. This study tested Next-Generation Sequencing (NGS) of a standard cytochrome c oxidase I (COI) barcode as a tool for the rapid assessment of oligochaete diversity in environmental samples, based on mixed-specimen samples. To know the composition of each sample, we Sanger sequenced every specimen present in these samples. Our study showed that a large majority of OTUs (Operational Taxonomic Units) could be detected by NGS analyses. We also observed congruence between the NGS and specimen abundance data for several but not all OTUs. Because the differences in sequence abundance data were consistent across samples, we exploited these variations to empirically design correction factors. We showed that such factors increased the congruence between the values of oligochaete-based indices inferred from the NGS data and those from the Sanger-sequenced specimen data. The validation of these correction factors by further experimental studies will be needed for the adaptation and use of NGS technology in biomonitoring studies based on oligochaete communities. PMID:26866802
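The correction-factor idea can be sketched as follows: per-OTU factors are fitted so that NGS read fractions match specimen-count fractions on a training set, then applied to new samples. The OTU names, fractions, and factors below are invented; this is only a schematic of the approach.

```python
# Schematic of empirical per-OTU correction factors that rescale NGS read
# fractions toward specimen-count fractions. All values are invented.

def correction_factors(ngs_frac, specimen_frac):
    """Factor per OTU so corrected NGS fractions match specimen fractions."""
    return {otu: specimen_frac[otu] / ngs_frac[otu] for otu in ngs_frac}

def apply_correction(ngs_frac, factors):
    raw = {otu: f * factors[otu] for otu, f in ngs_frac.items()}
    total = sum(raw.values())
    return {otu: v / total for otu, v in raw.items()}  # renormalize to 1

# Fit factors on one (training) sample with known specimen composition…
train_ngs  = {"OTU1": 0.60, "OTU2": 0.10, "OTU3": 0.30}
train_spec = {"OTU1": 0.40, "OTU2": 0.30, "OTU3": 0.30}
f = correction_factors(train_ngs, train_spec)

# …then apply them to a new sample where only NGS fractions are known.
new_sample = {"OTU1": 0.50, "OTU2": 0.20, "OTU3": 0.30}
corrected = apply_correction(new_sample, f)
print(corrected)
```

The key assumption, as in the study, is that each OTU's NGS-vs-specimen bias is consistent across samples.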

Human embryonic stem cells (ESCs) readily commit to the trophoblast lineage after exposure to bone morphogenetic protein-4 (BMP-4) and two small compounds, an activin A signaling inhibitor and a FGF2 signaling inhibitor (BMP4/A83-01/PD173074; BAP treatment). During differentiation, areas emerge within the colonies with the biochemical and morphological features of syncytiotrophoblast (STB). Relatively pure fractions of mononucleated cytotrophoblast (CTB) and larger syncytial sheets displaying the expected markers of STB can be obtained by differential filtration of dispersed colonies through nylon strainers. RNA-seq analysis of these fractions has allowed them to be compared with cytotrophoblasts isolated from term placentas before and after such cells had formed syncytia. Although it is clear from extensive gene marker analysis that both ESC- and placenta-derived syncytial cells are trophoblast, each with the potential to transport a wide range of solutes and synthesize placental hormones, their transcriptome profiles are sufficiently dissimilar to suggest that the two cell types have distinct pedigrees and represent functionally different kinds of STB. We propose that the STB generated from human ESCs represents the primitive syncytium encountered in early pregnancy soon after the human trophoblast invades into the uterine wall. PMID:27051068

Convective clouds in the ITCZ (Intertropical Convergence Zone) are a major source of nonstationary gravity waves that propagate to the stratosphere and result in upward displacements at low levels, which induce new convection. Simulations of wind fields are performed with the mesoscale meteorological model WRF (Advanced Research Weather Research and Forecasting) over a period of 2 days during active thunderstorm days. Simulations are carried out in a domain covering the ITCZ in West Africa using 2 nested grids with horizontal grid spacings of 27 and 9 km, respectively. Simulations are driven by ECMWF winds (defined by 91 levels from the surface to 80 km), using 100 levels from the surface to 50 Pa and a sponge layer above 45 km. The wave characteristics are compared to observations at the CTBT (Comprehensive Nuclear-Test-Ban Treaty) infrasound station in Ivory Coast. The aim of this study is to further understand the mechanisms of wave generation by deep convection and propagation to the stratosphere. In a second part, we also study the effects of gravity waves on the dynamics of the tropical atmosphere and perform sensitivity simulations with respect to the top height of the model.

In the high level waste tanks at the Savannah River Site (SRS), hydrogen is produced continuously by interaction of the radiation in the tank with water in the waste. Consequently, the vapor spaces of the tanks are purged to prevent the accumulation of H₂ and possible formation of a flammable mixture in a tank. Personnel at SRS have developed an empirical model to predict the rate of H₂ formation in a tank. The basis of this model is the prediction of the G value for H₂ production. This G value is the number of H₂ molecules produced per 100 eV of radiolytic energy absorbed by the waste. Based on experimental studies it was found that the G values for H₂ production from beta radiation and from gamma radiation were essentially equal. The G value for H₂ production from alpha radiation was somewhat higher. Thus, the model has two equations, one for beta/gamma radiation and one for alpha radiation. Experimental studies have also indicated that both G values are decreased by the presence of nitrate and nitrite ions in the waste. These are the main scavengers for the precursors of H₂ in the waste; thus the equations that were developed predict G values for hydrogen production as a function of the concentrations of these two ions in the waste. Knowing the beta/gamma and alpha heat loads in the waste allows one to predict the total generation rate for hydrogen in a tank. With this prediction a ventilation rate can be established for each tank to ensure that a flammable mixture is not formed in the vapor space of a tank. Recently personnel at Hanford have developed a slightly different model for predicting hydrogen G values. Their model includes the same precursor for H₂ as the SRS model but also includes an additional precursor not in the SRS model. Including the second precursor for H₂ leads to different empirical equations for predicting the G values for H₂ as a function of the nitrate and nitrite concentrations in
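Turning a predicted G value into a hydrogen generation rate is a unit-conversion exercise: G (molecules per 100 eV) times the absorbed radiolytic power (in eV/s) gives molecules/s. The sketch below uses illustrative G values and heat loads; it does not reproduce the SRS or Hanford correlations for G as a function of nitrate/nitrite concentration.

```python
# Convert a radiolytic G value and an absorbed heat load into a hydrogen
# generation rate. G values and heat loads here are illustrative only.

EV_PER_JOULE = 6.241509e18   # eV per joule
AVOGADRO = 6.02214e23        # molecules per mole

def h2_rate_mol_per_s(g_value_per_100ev, heat_load_watts):
    eV_per_s = heat_load_watts * EV_PER_JOULE          # absorbed energy rate
    molecules_per_s = g_value_per_100ev / 100.0 * eV_per_s
    return molecules_per_s / AVOGADRO

# e.g. a beta/gamma heat load of 5 kW at G = 0.2 plus an alpha load of
# 0.5 kW at G = 0.3 (two-equation structure, as in the SRS model)
rate = h2_rate_mol_per_s(0.2, 5000.0) + h2_rate_mol_per_s(0.3, 500.0)
print(f"{rate * 3600:.3f} mol H2 per hour")
```

A tank-specific purge rate would then be sized so this generation rate stays well below the lower flammability limit in the vapor space.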

Implementation of human leukocyte antigen (HLA) genotyping by next-generation sequencing (NGS) in the clinical lab brings new challenges to the laboratories performing this testing. With the advent of commercially available HLA-NGS typing kits, labs must make numerous decisions concerning capital equipment and address labor considerations. Therefore, careful and unbiased evaluation of available methods is imperative. In this report, we compared our in-house developed HLA NGS typing with two commercially available kits from Illumina and Omixon using 10 International Histocompatibility Working Group (IHWG) and 36 clinical samples. Although all three methods employ long range polymerase chain reaction (PCR) and have been developed on the Illumina MiSeq platform, the methodologies for library preparation show significant variations. There was 100% typing concordance between all three methods at the first field when a HLA type could be assigned. Overall, HLA typing by NGS using in-house or commercially available methods is now feasible in clinical laboratories. However, technical variables such as hands-on time and indexing strategies are sufficiently different among these approaches to impact the workflow of the clinical laboratory. PMID:27524804

Orthogonal time division multiplexing (OrthTDM) interleaves sinc-shaped pulses to form a high baud-rate signal, with a rectangular spectrum suitable for multiplexing into a Nyquist WDM (N-WDM)-like signal. The problem with generating sinc-shaped pulses is that they theoretically have infinite durations, and even if time-bounded for practical implementation, they still require a filter with a long impulse response, hence a large physical size. A method has previously been proposed that creates chirped orthogonal frequency division multiplexing (OFDM) pulses with a chirped arrayed waveguide grating (AWG) filter and then converts them into interleaved quasi-sinc pulses using dispersive fiber (DF). This produces a signal with a wider spectrum than the equivalent N-WDM signal. We show that a modification to the scheme enables the spectral extent to be reduced for the same data rate. We then analyse the key factors in designing an OrthTDM transmitter, and relate these to the performance of an N-WDM system. We show that the modified transmitter reduces the required guard band between the N-WDM channels. We also simulate a simpler scheme using an unchirped finite-impulse-response filter of similar size, which directly creates truncated-sinc pulses without needing a DF. This gives better system performance than either chirped scheme. PMID:26368149
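The guard-band penalty of time-truncating a sinc pulse can be illustrated numerically. The sketch below is illustrative only, not the chirped-AWG scheme of the paper: it samples a sinc pulse truncated to a finite number of symbol periods and measures how much of its spectral energy stays inside the nominal Nyquist band. All parameter values are arbitrary choices for the demonstration.

```python
import numpy as np

n_symbols = 32          # truncation window, in symbol periods
sps = 16                # samples per symbol period

t = np.arange(-n_symbols / 2, n_symbols / 2, 1.0 / sps)
pulse = np.sinc(t)      # np.sinc(x) = sin(pi x) / (pi x); bandwidth = baud rate

spectrum = np.fft.fftshift(np.fft.fft(pulse))
freqs = np.fft.fftshift(np.fft.fftfreq(len(pulse), d=1.0 / sps))

power = np.abs(spectrum) ** 2
# Fraction of spectral energy inside the nominal Nyquist band (+/- half the
# baud rate). An untruncated sinc would give exactly 1 (rectangular spectrum);
# time truncation spreads energy beyond the band edges.
in_band = power[np.abs(freqs) <= 0.5].sum() / power.sum()
```

Lengthening the truncation window pushes `in_band` toward 1, which is exactly the trade-off between filter length (physical size) and the guard band required between N-WDM channels.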

Computational Fluid Dynamics (CFD) simulations have emerged as a powerful tool for understanding multiphase flows that occur in a wide range of engineering applications and natural processes. A multiphase CFD code called MFIX has been under development at the National Energy Technology Laboratory (NETL) since the 1980s for modeling multiphase flows that occur in fossil fuel reactors. CFD codes such as MFIX are equipped with a number of numerical algorithms to solve a large set of coupled partial differential equations over three-dimensional grids consisting of hundreds of thousands of cells on parallel computers. Currently, the next generation version of MFIX is under development with the goal of building a multiphase problem solving environment (PSE) that would facilitate the simple reuse of modern software components by application scientists. Several open-source frameworks were evaluated to identify the best-suited framework for the multiphase PSE. There are many requirements for the multiphase PSE, and each of these open-source frameworks offers functionalities that satisfy the requirements to varying extents. Therefore, matching the requirements and the functionalities is not a simple task and requires a systematic and quantitative decision making procedure. We present a multi-criteria decision making approach to determining a major system design decision, and demonstrate its application on the framework selection problem.
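A systematic, quantitative framework selection of the kind described above can be sketched as a weighted-sum scoring, one common multi-criteria decision making scheme (the report may well use a more elaborate method). The criteria, weights, and scores below are hypothetical placeholders, not the actual NETL requirements or framework evaluations.

```python
# Hypothetical criterion weights (sum to 1.0) and per-framework scores (0-10).
weights = {
    "parallel_solvers": 0.30,
    "component_reuse": 0.25,
    "license": 0.20,
    "community_support": 0.25,
}

scores = {
    "framework_a": {"parallel_solvers": 8, "component_reuse": 6,
                    "license": 9, "community_support": 7},
    "framework_b": {"parallel_solvers": 7, "component_reuse": 9,
                    "license": 7, "community_support": 8},
}

def weighted_score(framework_scores, weights):
    """Collapse per-criterion scores into a single aggregate for ranking."""
    return sum(weights[c] * framework_scores[c] for c in weights)

# Rank candidate frameworks by aggregate score, best first.
ranking = sorted(scores, key=lambda f: weighted_score(scores[f], weights),
                 reverse=True)
```

The value of making the procedure explicit is that the decision becomes reproducible: changing a requirement weight immediately shows whether the ranking is sensitive to it.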

Interpretation of complex cancer genome data, generated by tumor target profiling platforms, is key for the success of personalized cancer therapy. How to draw therapeutic conclusions from tumor profiling results is not standardized and may vary among commercial and academically-affiliated recommendation tools. We performed targeted sequencing of 315 genes from 75 metastatic breast cancer biopsies using the FoundationOne assay. Results were run through 4 different web tools, including the Drug-Gene Interaction Database (DGidb), My Cancer Genome (MCG), Personalized Cancer Therapy (PCT), and cBioPortal, for drug and clinical trial recommendations. These recommendations were compared amongst each other and to those provided by FoundationOne. The identification of a gene as targetable varied across the different recommendation sources. Only 33% of cases had 4 or more sources recommend the same drug for at least one of the usually several altered genes found in tumor biopsies. These results indicate that further development and standardization of broadly applicable software tools to assist in the therapeutic interpretation of genomic data are needed. Existing algorithms for data acquisition, integration and interpretation will likely need to incorporate artificial intelligence tools to improve both content and real-time status. PMID:26980737

A computer system known as the Data Analysis, Retrieval, and Tabulation System (DARTS) was developed by the Energy Systems Division at Argonne National Laboratory to generate tables of descriptive statistics derived from analyses of housing and energy data sources. Through a simple input command, the user can request the preparation of a hierarchical table based on any combination of several hundred of the most commonly analyzed variables. The system was written in the Statistical Analysis System (SAS) language and designed for use on a large-scale IBM mainframe computer.

The Waste Tank Flammable Gas Stabilization Program was established in 1990 to provide for resolution of a major safety issue identified for 23 of the high-level waste tanks at the Hanford Site. This safety issue involves flammable gas mixtures, consisting mainly of hydrogen, nitrous oxide, and nitrogen, that are generated and periodically released in concentrations that exceed the lower flammability limit. Initial activities of the program have been directed at tank 241-SY-101 because it exhibits the largest risk. Activities conducted in fiscal year (FY) 1991 included waste sampling, waste sample analysis, development of tank models, conducting laboratory tests with synthetic wastes, upgrading of tank instrumentation and ventilation systems, evaluation of new methods for characterizing waste, and development of remedial actions. In addition to the work being conducted to resolve the flammable gas issue, programs have been established (Gasper and Reep 1992) to develop corrective actions for high priority safety issues associated with potential explosive mixtures of ferrocyanides in tanks, potential organic-nitrate reactions in tanks, and for the continued cooling of heat generation in tank 106-C. The purpose of this document is to provide a brief description of the FY 1992 priorities, logic, work breakdown structure (WBS), and task descriptions for the Waste Tank Flammable Gas Stabilization Program.

The lack of reference materials and standard procedures for faecal tests leads to major problems in the harmonisation of methods and does not allow the comparison of outcome data. In particular, the absence of standardisation of pre-analytical characteristics has been noted for faecal haemoglobin test methods, since different manufacturers have developed different sampling procedures and report units. Moreover, the physical characteristics of faecal specimens and the designs of specimen collection devices do not allow analysis of samples on different systems; in consequence, faecal tests cannot be compared using standard evaluation protocols. To improve the harmonisation of results generated by different analytical systems and the overall performance of tests on faecal materials, we propose the introduction of standard procedures for sampling and the pre-analytical phase, and the adoption of specific procedures, based on the use of artificial biological samples, for the comparison of methods. Harmonisation of sampling devices, with a standard design for pickers and a standard ratio between analyte and buffer across manufacturers, represents a mandatory step in the roadmap for the harmonisation of clinical laboratory measurements on faecal materials and can allow a significant standardisation of results generated by different devices. The creation of specific protocols for the evaluation and comparison of analytical methods for the analysis of faeces could lead to a significant improvement in the performance of methods and systems. PMID:24855037

To identify a basis for salinity resistance in chloroplastic metabolism, the halophyte Thellungiella salsuginea was compared with the glycophyte Arabidopsis thaliana. In control T.s. plants, increased ratios of chlorophyll a/b and of fluorescence emission at 77 K (F730/F685) were documented in comparison to A.t. This was accompanied by a higher YII and lower NPQ (non-photochemical quenching) values, and by a more active PSI (photosystem I). Another prominent feature of the photosynthetic electron transport (PET) in T.s. was the intensive production of H2O2 from the PQ (plastoquinone) pool. Salinity treatment (0.15 and 0.30 M NaCl for A.t. and T.s., respectively) led to a decrease in the ratios of chl a/b and F730/F685. In A.t., a salinity-driven enhancement of YII and NPQ was found, in association with the stimulation of H2O2 production from the PQ pool. In contrast, in salinity-treated T.s., these variables were similar to those in controls. The intensive H2O2 generation was accompanied by a high activity of PTOX (plastid terminal oxidase), whilst inhibition of this enzyme led to an increased H2O2 formation. It is hypothesized that the intensive H2O2 generation from the PQ pool might be an important element of stress preparedness in Thellungiella plants. In control T.s. plants, a higher activation state of ribulose-1,5-bisphosphate carboxylase/oxygenase (Rubisco, EC 4.1.1.39) was also documented, in concert with the attachment of Rubisco activase (RCA) to the thylakoid membranes. It is supposed that a closer contact of RCA with PSI in T.s. enables a more efficient Rubisco activation than in A.t. PMID:24961163

MicroRNA (miRNA) expression profiling has proven useful in diagnosing and understanding the development and progression of several diseases. Microarray is the standard method for analyzing miRNA expression profiles; however, it has several disadvantages, including its limited detection of miRNAs. In recent years, advances in genome sequencing have led to the development of next-generation sequencing (NGS) technologies, which significantly advance genome sequencing speed and discovery. In this study, we compared the expression profiles obtained by NGS with the profiles created using microarray to assess whether NGS could produce a more accurate and complete miRNA profile. Total RNA from 14 hepatocellular carcinoma (HCC) tumors and 6 matched non-tumor control tissues was sequenced with Illumina MiSeq 50-bp single-end reads. MicroRNA expression profiles were estimated using miRDeep2 software. As a comparison, miRNA expression profiles for 11 out of 14 HCCs were also established by microarray (Agilent human microRNA microarray). The average total sequencing exceeded 2.2 million reads per sample, and of those reads, approximately 57% mapped to the human genome. The average correlation for miRNA expression between microarray and NGS and subtraction were 0.613 and 0.587, respectively, while miRNA expression between technical replicates was 0.976. The diagnostic accuracy of HCC, p-value, and AUC were 90.0%, 7.22×10(-4), and 0.92, respectively. In summary, NGS created an miRNA expression profile that was reproducible and comparable to that produced by microarray. Moreover, NGS discovered novel miRNAs that were otherwise undetectable by microarray. We believe that miRNA expression profiling by NGS can be a useful diagnostic tool applicable to multiple fields of medicine. PMID:25215888
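The platform-agreement figures quoted above are Pearson correlations between paired expression profiles. Such a comparison can be sketched as follows; the values are made-up log-expression levels for illustration, not data from the study, whose real profiles cover hundreds of miRNAs per sample.

```python
import numpy as np

# Hypothetical log-expression levels for the same miRNAs on each platform.
ngs = np.array([5.1, 2.3, 7.8, 4.4, 6.0, 1.2, 3.3])
microarray = np.array([4.8, 2.9, 7.1, 4.0, 6.4, 1.9, 3.0])

# Pearson correlation between the paired profiles, the statistic behind the
# 0.613 microarray-vs-NGS figure (technical replicates gave 0.976).
r = np.corrcoef(ngs, microarray)[0, 1]
```

A correlation near 1 between technical replicates but noticeably lower across platforms, as reported, indicates systematic platform differences rather than random measurement noise.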

Fungal keratitis is an infection of the cornea by fungal pathogens. Diagnostic methods based on optical microscopy could be advantageous over the conventional microbiology method by allowing rapid and non-invasive examination. Reflectance confocal microscopy (RCM) and two-photon second harmonic generation microscopy (TPSHGM) have been applied to pre-clinical or clinical studies of fungal keratitis. In this report, RCM and TPSHGM were characterized and compared in the imaging of a fungal keratitis rabbit model ex vivo. Fungal infection was induced by using two strains of fungi: Aspergillus fumigatus and Candida albicans. The infected corneas were imaged in fresh condition by both modalities sequentially, and their images were analyzed. Both RCM and TPSHGM could detect both fungal strains within the cornea based on morphology: Aspergillus fumigatus had distinctive filamentous structures, and Candida albicans had round structures superficially and elongated structures in the corneal stroma. These imaging results were confirmed by histology. Comparison between RCM and TPSHGM showed several characteristics. Although RCM and TPSHGM images correlated well with each other, they differed slightly because of differences in contrast mechanisms. RCM had relatively low image contrast with the infected, turbid corneas due to high background signal. TPSHGM visualized cells and collagen in the cornea more clearly than RCM, but required higher laser power to compensate for low autofluorescence. Since these two modalities provide complementary information, a combination of RCM and TPSHGM would be useful for fungal keratitis detection, with each compensating for the other's weaknesses. PMID:26977371

The aim of a current study at the Institute of Hydraulic Engineering and Technical Hydromechanics at TU Dresden is to develop a new injection method for quick and economic sealing of dikes or dike bodies, based on a new synthetic material. To validate the technique, an artificial section of a sand dike was built in an experimental hall. The synthetic material was injected and subsequently spread through the inside of the dike. After the material had fully solidified, the surrounding sand was removed with an excavator. In this paper, two methods for the acquisition of a 3D point cloud of the remaining shapes, applying terrestrial laser scanning (TLS) and structure from motion (SfM) respectively, are described and compared. In combination with advanced software packages, a triangulated 3D model was generated and subsequently the volumes of vertical sections of the shape were calculated. As the calculation of the volume revealed differences between the TLS and the SfM 3D models, a thorough qualitative comparison of the two models is presented as well as a detailed accuracy assessment. The main influence on the accuracy is generalisation across gaps caused by occlusions in the 3D point cloud. Therefore, improvements to the data acquisition with TLS and SfM for such kinds of objects are suggested in the paper.

Intergranular (IG) attack and stress-corrosion cracks in alloy 600 tubing removed from the PWR steam generator #1 at Ringhals 2 have been characterized by analytical transmission electron microscopy (ATEM). Comparisons are made between environmentally induced cracks initiated on the primary-water ID surface versus those initiated on the secondary-water OD surface. General SCC crack morphologies were quite similar with branched IG cracking extending to approximately 50% through wall. Corrosion products in the open cracks were quite different with hydrated nickel phosphate seen filling the secondary-side crack, while the crack wall oxide in the primary-side crack was a Cr and Fe-rich spinel. Both samples revealed narrow (~10-nm wide), deeply penetrated, oxidized zones along most grain boundaries that intersect the open cracks. The local structures and chemistries in these corrosion-affected zones were examined by high-resolution TEM imaging, electron diffraction and fine-probe compositional analysis. These porous IG penetrations were nearly identical in appearance for both the primary- and secondary-side examples and contained Cr-rich oxides (Cr2O3 on the primary side and spinel plus Cr2O3 on the secondary side). Similarities between corrosion-induced structures for primary- and secondary-side cracking may indicate that the same degradation mechanism is operating in both cases. However, controlled experiments are needed where specific mechanisms can be properly distinguished.

Purpose: To compare IMRT planning strategies for prostate cancer patients with metal hip prostheses. Methods: All plans were generated fully automatically (i.e., no human trial-and-error interactions) using iCycle, the authors' in-house developed algorithm for multicriterial selection of beam angles and optimization of fluence profiles, allowing objective comparison of planning strategies. For 18 prostate cancer patients (eight with bilateral hip prostheses, ten with a right-sided unilateral prosthesis), two planning strategies were evaluated: (i) full exclusion of beams containing beamlets that would deliver dose to the target after passing a prosthesis (IMRT_remove) and (ii) exclusion of those beamlets only (IMRT_cut). Plans with optimized coplanar and noncoplanar beam arrangements were generated. Differences in PTV coverage and sparing of organs at risk (OARs) were quantified. The impact of beam number on plan quality was evaluated. Results: Especially for patients with bilateral hip prostheses, IMRT_cut significantly improved rectum and bladder sparing compared to IMRT_remove. For 9-beam coplanar plans, rectum V_60Gy reduced by 17.5% ± 15.0% (maximum 37.4%, p = 0.036) and rectum D_mean by 9.4% ± 7.8% (maximum 19.8%, p = 0.036). Further improvements in OAR sparing were achievable by using noncoplanar beam setups, reducing rectum V_60Gy by another 4.6% ± 4.9% (p = 0.012) for noncoplanar 9-beam IMRT_cut plans. Large reductions in rectum dose delivery were also observed when increasing the number of beam directions in the plans. For bilateral implants, the rectum V_60Gy was 37.3% ± 12.1% for coplanar 7-beam plans and reduced on average by 13.5% (maximum 30.1%, p = 0.012) for 15 directions. Conclusions: iCycle was able to automatically generate high quality plans for prostate cancer patients with prostheses. Excluding only beamlets that passed through the prostheses (IMRT_cut strategy) significantly improved OAR

Generation of orthotopic xenograft mouse models of leukemia is important to understand the mechanisms of leukemogenesis, cancer progression, its cross talk with the bone marrow microenvironment, and for preclinical evaluation of drugs. In these models, following intravenous injection, leukemic cells home to the bone marrow and proliferate there before infiltrating other organs, such as spleen, liver, and the central nervous system. Moreover, such models have been shown to accurately recapitulate the human disease and correlate with patient response to therapy and prognosis. Thus, various immune-deficient mice strains have been used with or without recipient preconditioning to increase engraftment efficiency. Mice homozygous for the severe combined immune deficiency (SCID) mutation and with non-obese diabetic background (NOD/SCID) have been used in the majority of leukemia xenograft studies. Later, NOD/SCID mice deficient for interleukin 2 receptor gamma chain (IL2Rγ) gene called NSG mice became the model of choice for leukemia xenografts. However, engraftment of leukemia cells without irradiation preconditioning still remained a challenge. In this study, we used NSG mice with null alleles for major histocompatibility complex class I beta2-microglobulin (β2m) called NSG-B2m. This is a first report describing the 100% engraftment efficiency of pediatric leukemia cell lines and primary samples in NSG-B2m mice in the absence of host preconditioning by sublethal irradiation. We also show direct comparison of the engraftment efficiency and growth rate of pediatric acute leukemia cells in NSG-B2m and NOD/SCID mice, which showed 80-90% engraftment efficiency. Secondary and tertiary xenografts in NSG-B2m mice generated by injection of cells isolated from the spleens of leukemia-bearing mice also behaved similar to the primary patient sample. We have successfully engrafted 25 acute lymphoblastic leukemia (ALL) and 5 acute myeloid leukemia (AML) patient samples with

...EPA is proposing to approve revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of the California State Implementation Plan (SIP). These revisions concern oxides of nitrogen (NOX), carbon monoxide (CO), oxides of sulfur (SO2) and particulate matter emissions from boilers, steam generators and process heaters greater than 5.0......

Merl Wittrock, a cognitive psychologist who had proposed a generative model of learning, was an essential member of the group that over a period of 5 years revised the "Taxonomy of Educational Objectives," originally published in 1956. This article describes the development of that 2001 revision (Anderson and Krathwohl, Editors) and Merl's…

...EPA is proposing to approve revisions to the Maricopa County Air Quality Department (MCAQD) portion of the Arizona State Implementation Plan (SIP). These revisions concern opacity standards related to multiple pollutants, including particulate matter (PM) emissions from several different types of sources, ranging from fugitive dust to diesel generators. We are approving a local rule that......

Pulsed laser generation in several Er3+,Yb3+:glasses thermally bonded with Co2+:MgAl2O4 was achieved. Peak powers in the range of 1.83-7.68 kW, with pulse durations between 2.9 and 4.2 ns and energies up to 24 μJ, were obtained. The output characteristics for different transmissions of the output couplers were investigated. To show the improvement gained by the thermal bonding procedure, thermally bonded and unbonded samples were compared in terms of generation efficiency, peak power, beam quality, generated spectra, and pulse-to-pulse jitter.
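As a rough consistency check on the reported figures: for an approximately rectangular pulse, peak power is about pulse energy divided by pulse duration. Using the extreme reported values:

```python
# Order-of-magnitude check: peak power ~ pulse energy / pulse duration
# (exact only for a rectangular temporal profile).
energy_j = 24e-6      # maximum reported pulse energy, 24 microjoules
duration_s = 2.9e-9   # shortest reported pulse duration, 2.9 ns

peak_power_w = energy_j / duration_s   # roughly 8.3 kW
```

This is the same order as the 7.68 kW maximum reported; the measured value falling somewhat below the rectangular-pulse estimate is consistent with an actual (non-rectangular) temporal pulse shape.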

The Defense Land Fallout Interpretive Code (DELFIC) was originally released in 1968 as a tool for modeling fallout patterns and for predicting exposure rates. Despite the continual advancement of knowledge of fission yields, decay behavior of fission products, and biological dosimetry, the decay data and logic of DELFIC have remained mostly unchanged since inception. Additionally, previous code revisions caused a loss of conservation of radioactive nuclides. In this report, a new revision of the decay database and the Particle Activity Module is introduced and explained. The database upgrades discussed are replacement of the fission yields with ENDF/B-VII data as formatted in the Oak Ridge Isotope Generation (ORIGEN) code, revised decay constants, revised exposure rate multipliers, revised decay modes and branching ratios, and revised boiling point data. Included decay logic upgrades represent a correction of a flaw in the treatment of the fission yields, extension of the logic to include more complex decay modes, conservation of nuclides (including stable nuclides) at all times, and conversion of key variables to double precision for nuclide conservation. Finally, recommended future work is discussed with an emphasis on completion of the overall radiation physics upgrade, particularly for dosimetry, induced activity, decay of the actinides, and fractionation.

What we have come to understand as education has a temporal dimension: the school year, progression based on time, timetables, and so on. Similarly, our understanding of teaching is framed by temporality, primarily through salary structures and an implicit coupling of performance with time in the field. We argue that this underlying generative…

Los Alamos National Laboratory is a participant in the Integral System Test (IST) program initiated in June 1983 for the purpose of providing integral system test data on specific issues/phenomena relevant to post-small-break loss-of-coolant accidents, loss of feedwater and other transients in Babcock & Wilcox (B&W) plant designs. The Multi-Loop Integral System Test (MIST) facility is the largest single component in the IST program. MIST is a 2 × 4 (two hot legs and steam generators (SGs), four cold legs and reactor coolant pumps) representation of the lowered-loop reactor system of the B&W design. It is a full-height, full-pressure facility with 1/817 power and volume scaling. Two other integral experimental facilities are included in the IST program: test loops at the University of Maryland, College Park, and at SRI International (SRI-2). The objective of the IST tests is to generate high-quality experimental data to be used for assessing thermal-hydraulic safety computer codes. Efforts are under way at Los Alamos to assess TRAC-PF1/MOD1 against data from each of the IST facilities. Calculations and data comparisons for TRAC-PF1/MOD1 assessment are presented for two transients run in the MIST facility. These are MIST Test 330302, a feed and bleed test with delayed high-pressure injection; and Test 3404AA, an SG tube-rupture test with the affected SG isolated. Only MIST assessment results are presented in this paper. The TRAC-PF1/MOD1 calculations completed to date for MIST tests are in reasonable agreement with the data from these tests. Reasonable agreement is defined as meaning that major trends are predicted correctly, although TRAC values are frequently outside the range of data uncertainty. We believe that correct conclusions will be reached if the code is used in similar applications despite minor code/model deficiencies. 7 refs., 5 figs., 2 tabs.

The aim of this study is to generate vector quantisation (VQ) codebooks by integrating the principal component analysis (PCA) algorithm, Linde-Buzo-Gray (LBG) algorithm, and evolutionary algorithms (EAs). The EAs include genetic algorithm (GA), particle swarm optimisation (PSO), honey bee mating optimisation (HBMO), and firefly algorithm (FF). The study provides performance comparisons between PCA-EA-LBG and PCA-LBG-EA approaches. The PCA-EA-LBG approaches contain PCA-GA-LBG, PCA-PSO-LBG, PCA-HBMO-LBG, and PCA-FF-LBG, while the PCA-LBG-EA approaches contain PCA-LBG, PCA-LBG-GA, PCA-LBG-PSO, PCA-LBG-HBMO, and PCA-LBG-FF. All training vectors of test images are grouped according to PCA. The PCA-EA-LBG used the vectors grouped by PCA as initial individuals, and the best solution gained by the EAs was given for LBG to discover a codebook. The PCA-LBG approach is to use the PCA to select vectors as initial individuals for LBG to find a codebook. The PCA-LBG-EA used the final result of PCA-LBG as an initial individual for EAs to find a codebook. The search schemes in PCA-EA-LBG first used global search and then applied local search skill, while in PCA-LBG-EA first used local search and then employed global search skill. The results verify that the PCA-EA-LBG indeed gains superior results compared to the PCA-LBG-EA, because the PCA-EA-LBG explores a global area to find a solution, and then exploits a better one from the local area of the solution. Furthermore the proposed PCA-EA-LBG approaches in designing VQ codebooks outperform existing approaches shown in the literature.
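The LBG step shared by all of these pipelines is a k-means-style refinement: assign each training vector to its nearest codeword, then move each codeword to the centroid of its cell. A minimal sketch follows; random initialisation stands in for the PCA grouping or EA solution that seeds LBG in the study, and the function name and toy data are illustrative.

```python
import numpy as np

def lbg(vectors, codebook_size, n_iters=20, seed=0):
    """Plain LBG refinement of a VQ codebook (k-means-style iterations)."""
    rng = np.random.default_rng(seed)
    # Initialise codewords from random training vectors (the paper seeds this
    # step with PCA groupings or an EA's best solution instead).
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(n_iters):
        # Assignment step: nearest codeword for every training vector.
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        # Update step: move each codeword to the centroid of its cell.
        for k in range(codebook_size):
            members = vectors[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

# Two well-separated clusters: LBG should place one codeword at each centre.
data = np.vstack([np.zeros((50, 2)), np.full((50, 2), 10.0)])
cb = lbg(data, codebook_size=2)
```

The EA variants in the study differ only in how this loop is initialised or post-processed: PCA-EA-LBG searches globally first and hands the best individual to this local refinement, while PCA-LBG-EA does the reverse.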

This article describes the revision of the White Racial Consciousness Development Scale (D. Claney & W. M. Parker, 1989). A multistage approach, including item generation, item refinement and selection, and evaluation of score validity and reliability, was used in test construction and validation. Implications for theory, practice, and future…

This study compared first generation and non-first generation doctoral students' levels of perceived stress, sense of coherence, and mindfulness. These variables were assessed both separately for each trainee group and in hypothesized relationships with each other. In addition, moderator analyses were conducted to assess whether key relationships…

The purpose of this study was to compare the retention factors between first-generation college students and second- and third-generation college students in the postsecondary educational setting. This study examined the differences in the preselected retention factors: faculty-student interaction, college mentor, academic support, residential…

Beliefs frequently undergo revisions, especially when new pieces of information are true but inconsistent with current beliefs. In previous studies, we showed that linguistic asymmetries provided by relational statements play a crucial role in spatial belief revision. Located objects (LO) are preferentially relocated compared with reference objects (RO), a regularity known as the LO principle. Here we establish a connection between spatial belief revision and grounded cognition. In three experiments, we explored whether imagined physical object properties influence which object is relocated and which remains at its initial position. Participants mentally revised beliefs about arrangements of objects which could be envisaged as light and heavy (Experiment 1), small and large (Experiment 2), or movable and immovable (Experiment 3). The results show that intrinsic object properties are taken into account differently during spatial belief revision. Object weight did not alter the LO principle (Experiment 1), whereas object size was found to influence which object was preferentially relocated (Experiment 2). Object movability did not affect relocation preferences but had an effect on relocation durations (Experiment 3). The findings support the simulation hypothesis within the grounded cognition approach and create new connections between the spatial mental model theory of reasoning and the idea of grounded cognition. PMID:25796056

Here, we present 2D numerical modeling of near-critical-density plasma using a fully implicit Vlasov-Fokker-Planck (VFP) code, IMPACTA, with the addition of a ray tracing package. In certain situations, such as at the critical surface at the walls of a hohlraum, magnetic fields are generated through crossed temperature and electron density gradients. Modeling shows 0.3 MG fields, and the strong heating also results in magnetization of the plasma up to ωτ ~ 5. In the case without magnetic field generation, the heat flows from the laser heating region are isotropic. Including magnetic fields causes the heat flow to form jets along the wall due to the Righi-Leduc effect. The heating of the wall region causes steeper temperature gradients. This serves as a positive feedback mechanism for the field generation rate, resulting in nearly twice the amount of field generated over 1 ns in comparison to the case without magnetic fields. The heat conduction, field generation, and calculation of other transport quantities are performed ab initio owing to the nature of the VFP equation set. In order to determine the importance of kinetic effects in IMPACTA, we perform a direct comparison with a classical (Braginskii) transport code with hydrodynamic motion (CTC+). The authors would like to acknowledge DOE Grant #DESC0010621 and Advanced Research Computing, UM-AA.

This Guide describes the procedures required to comply with all federal and state laws and regulations and Lawrence Berkeley Laboratory (LBL) policy applicable to medical and biohazardous waste. The members of the LBL Biological Safety Subcommittee participated in writing these policies and procedures. The procedures and policies in this Guide apply to LBL personnel who work with infectious agents or potentially infectious agents, publicly perceived infectious items or materials (e.g., medical gloves, culture dishes), and sharps (e.g., needles, syringes, razor blades). If medical or biohazardous waste is contaminated or mixed with a hazardous chemical or material, with a radioactive material, or with both, the waste will be handled in accordance with the applicable federal and State of California laws and regulations for hazardous, radioactive, or mixed waste.

In this paper, the Adolescent Sexual Abuser Project (ASAP) assessment pack-Dutch Revised Version (ASAP-D) is presented. The ASAP-D is an assessment instrument which measures the personality characteristics that are generally considered relevant in the literature for the development and perpetuation of sexually abusive behaviour in juveniles. After…

In order to produce useful proxy data for the GOES-R Geostationary Lightning Mapper (GLM) in regions not covered by VLF lightning mapping systems, we intend to employ data produced by ground-based (regional or global) VLF/LF lightning detection networks. Before using these data in GLM Risk Reduction tasks, it is necessary to have a quantitative understanding of the performance of these networks in terms of CG flash/stroke detection efficiency (DE), cloud flash/pulse DE, location accuracy, and CLD/CG classification error. This information is being obtained through inter-comparison with LMAs and well-quantified VLF/LF lightning networks. One of our approaches is to compare "bulk" counting statistics on the spatial scale of convective cells, in order to both quantify relative performance and observe variations in cell-based temporal trends provided by each network. In addition, we are using microsecond-level stroke/pulse time correlation to facilitate detailed inter-comparisons at a more fundamental level. The current development status of our ground-based inter-comparison and evaluation tools will be presented, and performance metrics will be discussed through a comparison of Vaisala's Global Lightning Dataset (GLD360) with the NLDN at locations within and outside the U.S.

The revised edition of this handbook represents a concerted effort to bring school safety to the forefront of business managers' daily and long-range planning activities. Although statistics show few fatalities on school grounds, schools appear to have a high frequency and incidence rate of nonfatal injuries. According to the introduction, school…

Alaska State Dept. of Education, Juneau. Div. of Adult and Vocational Education.

This revised curriculum gives information on the skills and knowledge students should acquire through a business education program. The competencies listed reflect the skills that employers see as necessary for success in clerical and accounting occupations. The handbook is organized in seven sections that cover the following: (1) the concept of…

Revision of a college mission statement through a broadly participatory process can provide a new and sharpened sense of direction and priorities and a powerful mechanism for institutional change. Although institutional circumstances and processes may differ, the experience of Wittenberg University (Ohio) serves as an example of a model for…

The purpose of this study is to examine, by three-dimensional numerical analysis, the influence of the cross-sectional shape of a scramjet-engine-driven experimental DCW-MHD generator on generator performance. We have designed MHD generators with symmetric square and circular cross-sections, based on the experimental MHD generator with an asymmetric square cross-section. Under the optimum load condition, the electric power output is 26.6 kW for the asymmetric square cross-section, 24.6 kW for the symmetric square cross-section, and 22.4 kW for the circular cross-section. The highest output is obtained for the experimental generator with the asymmetric square cross-section. The differences in electric power output are induced by differences in flow velocity and boundary layer thickness: for the generator with the asymmetric square cross-section, the average flow velocity is the highest and the boundary layer is the thinnest. Compression waves are generated depending on the channel shape, and their superposition induces the differences in flow velocity and boundary layer thickness.

This report compares a RELAP5 posttest calculation of the recovery portion of the Semiscale Mod-2B test S-SG-1 with the test data. The posttest calculation was performed with the RELAP5/MOD2 cycle 36.02 code without updates. The recovery procedure that was calculated mainly consisted of secondary feed and steam using auxiliary feedwater injection and the atmospheric dump valve of the unaffected steam generator (the steam generator without the tube rupture). A second procedure was initiated after the trends of the secondary feed and steam procedure had been established, and this was to stop the safety injection that had been provided by two trains of both the charging and high pressure injection systems. The Semiscale Mod-2B configuration is a small-scale (1/1705), nonnuclear, instrumented model of a Westinghouse four-loop pressurized water reactor power plant. S-SG-1 was a single-tube, cold-side, steam generator tube rupture experiment. The comparison of the posttest calculation and data included comparing the general trends and the driving mechanisms of the responses, the phenomena, and the individual responses of the main parameters.

Mie’s waves while sounding within coincident volumes. Being sensitive to the size of scatterers, Mie’s waves can give us additional information about the particle size distribution. But what about using several wavelengths corresponding only to Rayleigh’s diffraction on the scatterers? Can any effects be detected in such a case, and what performance characteristics of the equipment are required to detect them? The deceptive simplicity of a negative answer to the first part of this question will disappear if one collects different definitions of Rayleigh's scattering and considers them more closely than usual. Several definitions borrowed from introductory texts and the most popular textbooks and articles can be seen as one of the reasons for the research presented in the report. Based on a comparison of them all, one can conclude that Rayleigh's scattering has been analyzed extensively, but that despite this extensive analysis, fundamental ambiguities in introductory texts have not been completely eliminated to date. Moreover, unreasonably many examples can be found of how these ambiguities have already caused a foreseeable error to be published in one article, amplified in another, and then cited with approval in a third, before being finally corrected. Everything indicates that, in the light of all the lessons learned and based on modern experimental data, it is time to address these issues again. After the discussion of the ambiguities in Rayleigh's scattering concepts, developing corrections to the original ideas looks relatively easy. In particular, at least three characteristic regions of application of the revised models may be distinguished from the point of view of statistical averaging of the scattered field. The authors of the report suggest naming them Rayleigh’s region, Einstein’s region, and the region with compensations of the scattering intensity. The most important fact is that the limits of applicability of all

Revision of the International Practical Temperature Scale requires that there be changes for all accurately tabulated thermophysical values. Revised reference data for thermocouples have been generated in a program carried out by the National Bureau of Standards. The new reference data reflect not only revisions in the temperature scale, but also slight changes in the materials themselves and improvements in data fitting methods. A new NBS monograph that contains tables, analytic expressions, various approximations, and explanatory text has been prepared. A general discussion of the project and some specific examples are given.

Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared with both wind tunnel and flight test results. Excellent correlation was shown to exist between computations and flight results except when separated flow regimes were encountered. Smoothness of the input coordinates to the PROFILE computer program was found to be essential to obtain accurate comparisons of drag polars or transition location with either the flight or wind tunnel results.

The author considers the use of coal within a revised energy perspective, focusing on the factors that will drive which fuels are used to generate electricity going forward. He looks at the world markets for fossil fuels and the difficulties of predicting oil and natural gas supply and prices, as demonstrated by the variability in projections from one year to another in the EIA's Annual Energy Outlook. 4 refs., 1 tab.

Seven hypothetical power-generation cases were studied to estimate the cost effect in each case of coal cleaning. Three levels of coal preparation - no cleaning, partial cleaning, and intensive cleaning - were used to perform the analysis. Two-unit, 1000-MW power plants operating at 70% average load factor were assumed. These power plants were designed to comply with the proposed NSPS for SO₂ emissions (85% removal/24-hour averaging) under the 1977 Clean Air Act Amendments. Diverse coals and plant locations were selected. The estimated capital costs of the coal cleaning plants were consistently less than 5% of the capital costs estimated for the corresponding power plants. In 6 of the 7 study cases, the utilization of coal cleaning reduced overall capital costs, and in 5 cases the busbar-cost savings introduced by the use of cleaned coal more than offset the incremental cost of coal cleaning. In terms of 30-year levelized costs, the use of cleaned coal was estimated to be responsible for net busbar-cost savings of up to 2 mills/net kWh in the 5 cases where coal cleaning appeared cost effective. These results are considered conservative, since certain economic benefits of using cleaned coal (e.g., improved power plant availability and operability) were not included in the cost estimates due to lack of sufficient data.

Examines the concurrent validity of the Slosson Full-Range Intelligence Test (S-FRIT) by comparing S-FRIT scores to the scores of the Wechsler Intelligence Scale for Children-Third Edition (WISC-III) and the Woodcock-Johnson Tests of Achievement-Revised (WJ-R). Results revealed that the S-FRIT scores were more related to overall intelligence,…

This study examines the interaction of clay mineral particles and water vapor to determine the conditions required for cloud droplet formation. Droplet formation conditions are investigated for three clay minerals: illite, sodium-rich montmorillonite, and Arizona Test Dust. Using wet and dry particle generation coupled to a differential mobility analyzer (DMA) and cloud condensation nuclei counter, the critical activation of the clay mineral particles as cloud condensation nuclei is characterized. Electron microscopy (EM) is used to determine non-sphericity in particle shape. EM is also used to determine particle surface area and account for transmission of multiply charged particles by the DMA. Single particle mass spectrometry and ion chromatography are used to investigate soluble material in wet-generated samples and demonstrate that wet and dry generation yield compositionally different particles. Activation results are analyzed in the context of both κ-Köhler theory and Frenkel, Halsey, and Hill (FHH) adsorption activation theory. This study has two main results: (1) κ-Köhler theory is a suitable framework, less complex than FHH theory, to describe clay mineral nucleation activity despite apparent differences in κ with respect to size. For dry-generated particles the size dependence is likely an artifact of the shape of the size distribution: there is a sharp drop-off in particle concentration at ~300 nm, and a large fraction of particles classified with a mobility diameter less than ~300 nm are actually multiply charged, resulting in a much lower critical supersaturation for droplet activation than expected. For wet-generated particles, deviation from κ-Köhler theory is likely a result of the dissolution and redistribution of soluble material. (2) Wet generation is found to be unsuitable for simulating the lofting of fresh dry dust because it changes the size-dependent critical supersaturations by fractionating and re-partitioning soluble material.
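For sufficiently hygroscopic particles, κ-Köhler theory reduces to a closed-form critical supersaturation. A sketch under assumed standard water constants (the function name and parameter values are illustrative, not from the study, and the approximation loses accuracy for the very low κ typical of mineral dust):

```python
import math

def critical_supersaturation(d_dry, kappa, T=298.15):
    """Critical supersaturation (%) from kappa-Kohler theory via the
    approximation s_c = sqrt(4 A^3 / (27 kappa d^3)), valid for
    kappa >~ 0.1; d_dry is the dry diameter in meters."""
    sigma_w = 0.072    # surface tension of water, N/m
    M_w = 0.018015     # molar mass of water, kg/mol
    rho_w = 997.0      # density of water, kg/m^3
    R = 8.314          # gas constant, J/(mol K)
    A = 4.0 * sigma_w * M_w / (R * T * rho_w)   # Kelvin parameter, m
    return 100.0 * math.sqrt(4.0 * A**3 / (27.0 * kappa * d_dry**3))
```

Because the critical supersaturation scales as d_dry^(-3/2), misclassifying multiply charged (hence larger) particles as small ones produces exactly the artifact described above: activation at a much lower supersaturation than the nominal mobility diameter would predict.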

Limitations of the fission fuel resources will presumably mandate the replacement of thermal fission reactors by fast fission reactors that operate on a self-sufficient closed fuel cycle. This replacement might take place within the next one hundred years, so the direct competitors of fusion reactors will be fission reactors of the latter rather than the former type. Also, fast fission reactors, in contrast to thermal fission reactors, have the potential for transmuting long-lived actinides into short-lived fission products. The associated reduction of the long-term activation of radioactive waste due to actinides makes the comparison of radioactive waste from fast fission reactors to that from fusion reactors more rewarding than the comparison of radioactive waste from thermal fission reactors to that from fusion reactors. Radioactive waste from an experimental and a commercial fast fission reactor and an experimental and a commercial fusion reactor has been characterized. The fast fission reactors chosen for this study were the Experimental Breeder Reactor 2 and the Integral Fast Reactor. The fusion reactors chosen for this study were the International Thermonuclear Experimental Reactor and a Reduced Activation Ferrite Helium Tokamak. The comparison of radioactive waste parameters shows that radioactive waste from the experimental fast fission reactor may be less hazardous than that from the experimental fusion reactor. Inclusion of the actinides would reverse this conclusion only in the long-term. Radioactive waste from the commercial fusion reactor may always be less hazardous than that from the commercial fast fission reactor, irrespective of the inclusion or exclusion of the actinides. The fusion waste would even be far less hazardous, if advanced structural materials, like silicon carbide or vanadium alloy, were employed.

The paper presented four methods for hardware and software generation, in real time, of sine waves suitable for PWM circuits. The sine waves are derived from a truncated modified cosine Taylor series, an ωt(π − ωt) function, a digitally filtered trapezoid, and a second-order differential equation. Triplen (third-harmonic) injection is incorporated by the addition of a triangular waveform of defined magnitude at three times the fundamental frequency. Each sine wave generating technique is implemented, as applicable, in a programmable logic cell array and/or in microprocessor-based software. In each case, the output spectra and total harmonic distortion are compared with computer-simulated results.
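Two of the four methods are straightforward to illustrate. The sketch below shows a second-order difference-equation oscillator (the discrete analogue of y'' = −ω²y) and a truncated Taylor series; it assumes nothing about the paper's specific implementations, and the seeding and range reduction are conventional choices.

```python
import math

def sine_oscillator(f, fs, n):
    """Recursive sine generation from the second-order difference
    equation y[k] = 2*cos(w)*y[k-1] - y[k-2], the discrete analogue
    of y'' = -w^2 y; one multiply and one subtract per sample."""
    w = 2.0 * math.pi * f / fs
    c = 2.0 * math.cos(w)
    y0, y1 = 0.0, math.sin(w)      # two initial conditions seed the recurrence
    out = [y0, y1]
    for _ in range(max(0, n - 2)):
        y0, y1 = y1, c * y1 - y0
        out.append(y1)
    return out[:n]

def sine_taylor(x, terms=4):
    """Truncated Taylor series sin(x) = x - x^3/3! + x^5/5! - ...;
    range-reduce the argument to [-pi, pi) first."""
    x = (x + math.pi) % (2.0 * math.pi) - math.pi
    s, term = 0.0, x
    for k in range(terms):
        s += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return s
```

The recurrence is attractive in fixed-point hardware because it needs only one stored coefficient, at the cost of slow amplitude drift from rounding; the Taylor form trades multiplies for immunity to drift.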

In the present investigation, Crewstation Assessment of Reach (CAR) results in the form of male hand reach envelopes were generated and compared with an anthropometric survey performed by Kennedy (1978) to determine the extent of the validity of the CAR model with respect to experimentally-derived anthropometric data. The CAR-generated reach envelopes extensively matched the Kennedy envelopes. The match was particularly good in the areas to the front and side from which the reach originated. Attention is given to the crewstation model, the operator sample population, the CAR analysis, aspects of validation methodology, and the modeling of experimental parameters.

Computations of drag polars for a low-speed Wortmann sailplane airfoil are compared to both wind tunnel and flight results. Excellent correlation is shown to exist between computations and flight results except when separated flow regimes were encountered. Wind tunnel transition locations are shown to agree with computed predictions. Smoothness of the input coordinates to the PROFILE airfoil analysis computer program was found to be essential to obtain accurate comparisons of drag polars or transition location to either the flight or wind tunnel results.

Comparison of results of using the Revised Behavior Problem Checklist to identify mild or highly deviant behavior in 95 children (grades 3 through 6) by two regular education teachers of each child indicated little agreement between teachers. (Author/DB)

Education is often seen as the most important mobility channel for children of immigrants. To what extent is this true? In this article, we look at successful second generation Turkish professionals in Sweden, France, Germany and The Netherlands. What kind of pathways did they take to become a professional? Based on the large quantitative…

Power source is an important parameter that can affect the characteristics of atmospheric-pressure plasma jets (APPJs), because it can play a key role in the discharge characteristics and ionization process of APPJs. In this paper, the characteristics of helium APPJs sustained by both nanosecond-pulse and microsecond-pulse generators are compared from the aspects of plume length, discharge current, consumption power, energy, and optical emission spectrum. Experimental results showed that the pulsed APPJ was initiated near the high-voltage electrode with a small curvature radius, and then the stable helium APPJ could be observed when the applied voltage increased. Moreover, the discharge current of the nanosecond-pulse APPJ was larger than that of the microsecond-pulse APPJ. Furthermore, although the nanosecond-pulse generator consumed less energy than the microsecond-pulse generator, longer plume length, larger instantaneous power per pulse, and stronger spectral line intensity could be obtained in the nanosecond-pulse excitation case. In addition, some discussion indicated that the rise time of the applied voltage could play a prominent role in the generation of APPJs.

Sugarcane bagasse is used as a fuel in conventional bioethanol production, providing heat and power for the plant; therefore, the amount of surplus bagasse available for use as raw material for second generation bioethanol production is related to the energy consumption of the bioethanol production process. Pentoses and lignin, byproducts of the second generation bioethanol production process, may be used as fuels, increasing the amount of surplus bagasse. In this work, simulations of the integrated bioethanol production process from sugarcane, surplus bagasse and trash were carried out. Selected pre-treatment methods followed, or not, by a delignification step were evaluated. The amount of lignocellulosic materials available for hydrolysis in each configuration was calculated assuming that 50% of sugarcane trash is recovered from the field. An economic risk analysis was carried out; the best results for the integrated first and second generation ethanol production process were obtained for steam explosion pretreatment, high solids loading for hydrolysis and 24-48 h hydrolysis. The second generation ethanol production process must be improved (e.g., decreasing required investment, improving yields and developing pentose fermentation to ethanol) in order for the integrated process to be more economically competitive. PMID:20838849

The purpose of this study is to observe the differences in dietary intakes between two generations, male and female Korean American college students and their respective parents, living in the Los Angeles area. This study compared dietary nutrient intakes between older Koreans (KO) (n=28, average age: 53.4±6.4 years, with 13 males…

A growing body of literature in second-language writing suggests that the writing ability of international second language (L2) learners, who attend post-secondary education abroad after having completed high school in their home countries, and the so-called Generation 1.5 population, that is, L2 learners who enter post-secondary education after…

This paper compares two indirect adaptive neurocontrollers, namely a multilayer perceptron neurocontroller (MLPNC) and a radial basis function neurocontroller (RBFNC), for controlling a synchronous generator. The damping and transient performances of the two neurocontrollers are compared with those of conventional linear controllers and analyzed based on the Lyapunov direct method. PMID:15384538

The triboelectric nanogenerator (TENG) is a newly invented technology that converts mechanical energy into electricity using conventional organic materials with functionalized surfaces; it is lightweight, cost-effective, and easily scalable. Here, we present the first systematic analysis and comparison of the electromagnetic induction generator (EMIG) and the TENG in terms of their working mechanisms, governing equations, and output characteristics, aiming at establishing complementary applications of the two technologies for harvesting various mechanical energies. The equivalent transformation and conjunction operations of the two power sources for the external circuit are also explored, which provide appropriate evidence that the TENG can be considered a current source with a large internal resistance, while the EMIG is equivalent to a voltage source with a small internal resistance. The theoretical comparison and experimental validations presented in this paper establish the basis of using the TENG as a new energy technology that could be parallel to, or possibly equivalently important as, the EMIG for general power applications at large scale. It opens a field of organic nanogenerators for chemists and materials scientists, who can for the first time use conventional organic materials to convert mechanical energy into electricity at high efficiency. PMID:24677413
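The circuit picture in this abstract, a TENG modeled as a Norton-style current source with large internal resistance and an EMIG as a Thevenin-style voltage source with small internal resistance, can be sketched numerically. The component values below are purely illustrative, not taken from the paper:

```python
import numpy as np

def thevenin_power(v_s, r_int, r_load):
    """Power into r_load from a voltage source v_s behind series r_int (EMIG model)."""
    i = v_s / (r_int + r_load)
    return i**2 * r_load

def norton_power(i_s, r_int, r_load):
    """Power into r_load from a current source i_s shunted by r_int (TENG model)."""
    i_load = i_s * r_int / (r_int + r_load)   # current divider
    return i_load**2 * r_load

loads = np.logspace(0, 9, 10)                 # 1 ohm .. 1 Gohm sweep
p_emig = thevenin_power(5.0, 10.0, loads)     # small internal resistance (illustrative)
p_teng = norton_power(10e-6, 100e6, loads)    # large internal resistance (illustrative)
```

Each model delivers maximum power when the load matches its internal resistance, which is why the TENG favors high-impedance loads while the EMIG favors low-impedance ones.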

This work is focused on the analysis of the potentialities of radargrammetric DSM generation using high resolution SAR imagery acquired by three different platforms (COSMO-SkyMed, TerraSAR-X and Radarsat-2), with particular attention to geometric orientation models. Two orientation models have been tested in this work: the rigorous Toutin's model, developed at the Canada Centre for Remote Sensing (CCRS) and implemented in the commercial software package PCI Geomatica, and the radargrammetric model developed at the University of Rome La Sapienza and implemented in the scientific software SISAR. A full comparison and analysis has been carried out over the Beauport test site (Quebec, Canada), where a LIDAR ground truth and a dense set of GNSS CPs (check points) are available. Moreover, a preliminary comparison between the DSMs extracted with SISAR and PCI Geomatica, respectively, has been performed. The accuracy of the generated DSMs has been evaluated through the scientific software DEMANAL, developed by Prof. K. Jacobsen of the University of Hannover. As regards orientation models, the results show that the Toutin's model accuracy is slightly better than the SISAR one, even if it is important to underline that the SISAR model is computed without using a priori ground truth information. As concerns DSM assessment, the global DSM accuracy in terms of RMSE is around 4 meters, and the two radargrammetric approaches show similar performances.
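The quoted accuracy figure (RMSE ≈ 4 m against the LIDAR ground truth) is the usual grid-difference statistic. A minimal sketch of that computation follows; DEMANAL itself produces far more detailed statistics (e.g. slope-dependent error analysis), so this is only the headline number.

```python
import numpy as np

def dsm_rmse(dsm, reference):
    """Root-mean-square error between a generated DSM and a reference
    grid (e.g. LIDAR), ignoring no-data cells marked as NaN."""
    diff = np.asarray(dsm, float) - np.asarray(reference, float)
    diff = diff[~np.isnan(diff)]
    return float(np.sqrt(np.mean(diff ** 2)))
```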

Marine renewable energy is playing an increasingly significant role in many parts of the world, mainly due to rising awareness of climate change and its detrimental effects, and the increasing cost of natural resources. The Severn Estuary, located between South West England and South Wales, has a tidal range of up to 14 m, the second highest tidal range in the world. There are a number of barrage proposals amongst the various marine renewable energy schemes proposed to be built in the estuary. The Cardiff-Weston STPG (Severn Tidal Power Group) Barrage, which would be one of the world's largest tidal renewable energy schemes if built, is one of the most publicised schemes to date. This barrage would generate about 17 TWh/annum of power, which is approximately 5% of the UK's electricity consumption, whilst causing significant hydro-environmental and ecological impact on the estuary. This study mainly focuses on investigating the hydro-environmental impacts of the STPG barrage for the option of two-way generation, and compares this with the commonly investigated option of ebb-only generation. The impacts of the barrage were modelled by implementing a linked 1-D/2-D hydro-environmental model, with the capability of modelling several key environmental processes. The model predictions show that the hydro-environmental impacts of the barrage on the Severn Estuary and Bristol Channel, such as changes in the maximum velocity and reduction in suspended sediment and bacteria levels, were less significant for the two-way generation scheme when compared with the corresponding impacts for ebb-only generation.

Revised the Counselor Rating Form (CRF), creating the Counselor Rating Form-Short version (CRF-S). Tested reliability and validity of the CRF-S. Results indicated that reliabilities of the CRF-S scales were comparable to those for the CRF. Factor analytic comparison showed three correlated factors corresponding to the attractiveness, expertness, and…

A three-month study examined how interactive iconography impacts social studies and promotes critical writing skills. Groups of three middle-school immigrant students constructed museum labels using "Scope Out", an experimental online revision tool that makes iconography interactive. This study included three comparison groups and one control…

This study examines the interaction of clay mineral particles and water vapor to determine the conditions required for cloud droplet formation. Droplet formation conditions are investigated for two common clay minerals, illite and sodium-rich montmorillonite, and an industrially derived sample, Arizona Test Dust. Using wet and dry particle generation coupled to a differential mobility analyzer (DMA) and cloud condensation nuclei counter, the critical activation of the clay mineral particles as cloud condensation nuclei is characterized. Electron microscopy (EM) is used to determine non-sphericity in particle shape, as well as particle surface area, and to account for transmission of multiply charged particles by the DMA. Single particle mass spectrometry and ion chromatography are used to investigate soluble material in wet-generated samples and demonstrate that wet and dry generation yield compositionally different particles. Activation results are analyzed in the context of both κ-Köhler theory (κ-KT) and Frenkel-Halsey-Hill (FHH) adsorption activation theory. This study has two main results: (1) κ-KT is the suitable framework to describe clay mineral nucleation activity. Apparent differences in κ with respect to size arise from an artifact introduced by improper size-selection methodology. For dust particles with mobility sizes larger than ~300 nm, i.e., ones that are within an atmospherically relevant size range, both κ-KT and FHH theory yield similar critical supersaturations. However, the former requires a single hygroscopicity parameter instead of the two adjustable parameters required by the latter. For dry-generated particles, the size dependence of κ is likely an artifact of the shape of the size distribution: there is a sharp drop-off in particle concentration at ~300 nm, and a large fraction of particles classified with a mobility diameter less than ~300 nm are actually multiply charged, resulting in a much

Currently, there is an enormous amount of energy available from salinity gradients, which could be used for clean hydrogen production. Through the use of a favorable oxygen reduction reaction (ORR) cathode, the projected electrical energy generated by a single-pass ammonium bicarbonate reverse electrodialysis (RED) system approached 78 W h m(-3). However, if RED is operated with the less favorable (higher overpotential) hydrogen evolution electrode and hydrogen gas is harvested, the energy recovered increases by as much as ~1.5×, to 118 W h m(-3). Indirect hydrogen production through coupling an RED stack with an external electrolysis system was projected to achieve only 35 W h m(-3), or ~1/3 of that produced through direct hydrogen generation. PMID:24322796
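The ratios quoted above follow directly from the stated energy densities; a quick arithmetic sketch (using only the figures reported in the abstract) confirms them:

```python
# Energy recovered per volume of solution (W h m^-3), as reported above.
orr_cathode = 78.0    # single-pass RED with ORR cathode (electricity only)
direct_h2 = 118.0     # RED with hydrogen-evolution electrode, H2 harvested
indirect_h2 = 35.0    # RED stack coupled to an external electrolysis system

# Harvesting H2 directly increases recovery by roughly 1.5x over the ORR case.
gain = direct_h2 / orr_cathode
# Indirect production yields about one third of direct production.
fraction = indirect_h2 / direct_h2

print(round(gain, 2))      # -> 1.51
print(round(fraction, 2))  # -> 0.3
```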

Like other members of the Pestivirus genus, 'HoBi'-like pestiviruses cause economic losses for cattle producers due to both acute and persistent infections. The present study analyzed for the first time PI animals derived from a controlled infection with two different 'HoBi'-like strains, where the animals were maintained under conditions in which superinfection by other pestiviruses could be excluded. The sequences of the region coding for the viral glycoproteins E1/E2 from variants within the swarms of viruses present in the PI calves and the two viral inocula used to generate them were compared. Differences in the genetic composition of the viral swarms were observed, suggesting that host factors can play a role in genetic variation among PIs. Moreover, PIs generated with the same inoculum showed amino acid substitutions at similar sites of the polyprotein, even in serum from PIs with different quasispecies compositions, reinforcing that some specific sites in E2 are important for host adaptation. PMID:26971244

This report summarizes experimental work performed at Argonne National Laboratory on the failure of internally pressurized steam generator tubing at high temperatures (≤700°C). A model was developed for predicting failure of flawed and unflawed steam generator tubes under internal pressure and temperature histories postulated to occur during severe accidents. The model was validated by failure tests on specimens with part-through-wall axial and circumferential flaws of various lengths and depths, conducted under various constant and ramped internal pressure and temperature conditions. The failure temperatures predicted by the model for two temperature and pressure histories, calculated for severe accidents initiated by a station blackout, agree very well with tests performed on both flawed and unflawed specimens.

Second generation H1 antihistamines are considered first-line therapy for allergic rhinitis and chronic idiopathic urticaria, largely because of their nonsedating effects. Evaluating pharmacokinetic and pharmacodynamic parameters and clinical efficacy of a drug is important, but models to predict clinical efficacy are lacking. Receptor occupancy (RO), a predictor for human pharmacodynamics and antihistamine potency that takes into account the affinity of the drug for the receptor and its free plasma concentration, may be a more accurate way to predict a drug's clinical efficacy. This study was designed to assess the concept of RO as a surrogate for clinical efficacy, using examples of second generation oral antihistamines. A literature review was conducted using MEDLINE. Search terms included allergy, allergic rhinitis, drug efficacy, over-the-counter drugs, perennial allergic rhinitis, seasonal allergic rhinitis, second generation antihistamines, chronic idiopathic urticaria, and treatment outcomes. Abstracts and posters from recent allergy-related society meetings were also used. RO of several second generation H1 antihistamines was derived from noncomparative and head-to-head studies. Fexofenadine and levocetirizine showed similar RO at 4 hours, both higher than that of desloratadine. Levocetirizine established higher RO than fexofenadine or desloratadine at 12 and 24 hours. RO for these agents appeared to correlate with pharmacodynamic activity in skin wheal and flare studies and with efficacy in allergen challenge chamber studies. Parameters affecting RO included time from dosing, pH, and dosing regimen. RO did not appear to be linearly related to drug concentration. Results indicate that RO is an accurate predictor of in vivo pharmacodynamic activity and clinical efficacy. PMID:19335943
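The abstract defines RO in terms of a drug's receptor affinity and free plasma concentration. Under the standard single-site mass-action binding model (an assumption here; the abstract does not state a formula), occupancy is RO = C / (C + Kd), which is a minimal sketch of how RO can be estimated:

```python
def receptor_occupancy(free_conc_nM: float, kd_nM: float) -> float:
    """Fractional receptor occupancy under a single-site
    mass-action binding model: RO = C / (C + Kd)."""
    return free_conc_nM / (free_conc_nM + kd_nM)

# Illustrative, hypothetical numbers (not drug-specific values from the
# review): a drug with Kd = 3 nM at a free plasma concentration of 27 nM
# would occupy 90% of its receptors.
print(receptor_occupancy(27.0, 3.0))  # -> 0.9
```

Note that under this model RO is not linearly related to concentration, consistent with the abstract's observation.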

The generation of radiation via photoelectrons induced from a conducting surface was explored using Particle-In-Cell (PIC) code computer simulations. Using the MAGIC PIC code, the simulations were performed in one dimension to handle the diverse scale lengths of the particles and fields in the problem. The simulations involved monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. A sinusoidal, 100% modulated, 6.3263 ns pulse train, as well as unmodulated emission, was used to explore the behavior of the particles, fields, and generated radiation. A special postprocessor was written to convert the PIC-simulated electron sheath into far-field radiation parameters by means of rigorous retarded-time calculations. The results of the small-spot PIC simulations were used to generate various graphs showing resonance and nonresonance radiation quantities such as radiated lobe patterns, frequency, and power. A database of PIC simulation results was created and, using a nonlinear curve-fitting program, compared with theoretical scaling laws. Overall, the small-spot behavior predicted by the theoretical scaling laws was generally observed in the PIC simulation data, providing confidence in both the scaling laws and the PIC simulations.

AIM To compare the effects of first and second generation silicone hydrogel (SiH) contact lens wear on tear film osmolarity. METHODS Healthy subjects who had never worn contact lenses were enrolled in the study. Tear film osmolarity values of 16 eyes (group 1) wearing first generation SiH contact lenses were compared with those of 18 eyes (group 2) wearing second generation SiH contact lenses after three months of follow-up. RESULTS Before contact lens wear, tear film osmolarity of groups 1 and 2 was 305.02±49.08 milliosmole (mOsm) and 284.66±30.18 mOsm, respectively. After three months of contact lens wear, osmolarity values were 317.74±60.23 mOsm in group 1 and 298.40±37.77 mOsm in group 2. Although osmolarity values in both groups were slightly higher after three months of SiH contact lens wear than before, the differences were not statistically significant. CONCLUSION Contact lens wear may cause evaporation from the tear film and can increase tear film osmolarity, leading to symptoms of dry eye disease. In the current study, tear film osmolarity tended to increase in both groups of SiH contact lens wearers, but the difference was not statistically significant. PMID:24195046

In order to make the best choice between renewable energy technologies, it is important to be able to compare these technologies on the basis of their sustainability, which may include a variety of social, environmental, and economic indicators. This study examined the comparative sustainability of four renewable electricity technologies in terms of their life cycle CO2 emissions and embodied energy, from construction to decommissioning and including maintenance (periodic component replacement plus machinery use), using life cycle analysis. The models developed were based on case studies of power plants in New Zealand, comprising geothermal, large-scale hydroelectric, tidal (a proposed scheme), and wind-farm electricity generation. The comparative results showed that tidal power generation was associated with 1.8 g of CO2/kWh, wind with 3.0 g of CO2/kWh, hydroelectric with 4.6 g of CO2/kWh, and geothermal with 5.6 g of CO2/kWh (not including fugitive emissions), and that tidal power generation was associated with 42.3 kJ/kWh, wind with 70.2 kJ/kWh, hydroelectric with 55.0 kJ/kWh, and geothermal with 94.6 kJ/kWh. Other environmental indicators, as well as social and economic indicators, should be applied to gain a complete picture of the technologies studied. PMID:19746744
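The comparative figures above can be tabulated and ranked directly; the sketch below uses only the values reported in the abstract (geothermal fugitive emissions excluded, as stated) and shows that the two indicators do not produce the same ordering:

```python
# Life cycle results reported above: (g CO2 per kWh, kJ embodied energy per kWh)
results = {
    "tidal":         (1.8, 42.3),
    "wind":          (3.0, 70.2),
    "hydroelectric": (4.6, 55.0),
    "geothermal":    (5.6, 94.6),
}

# Rank technologies separately by each sustainability indicator.
by_co2 = sorted(results, key=lambda t: results[t][0])
by_energy = sorted(results, key=lambda t: results[t][1])

print(by_co2)     # tidal < wind < hydro < geothermal on CO2
print(by_energy)  # hydro beats wind on embodied energy, so the orders differ
```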

This paper describes a method for generating an unfolded view of organs based on elastic deformation. When we observe the inside of an organ that has a large cavity with a virtual endoscopy system, the viewpoint and the view direction need to be changed many times. Unfolded views can visualize the entire organ wall at a glance and are very useful for diagnosis. The unfolding process requires creating a model that approximates the shape of the target organ. The approximated shape is generated from the outer wall of the organ by extracting the organ wall. The user interactively specifies a cutting line on the approximated shape. The stretching is performed by adding forces and calculating the elastic deformation. The volumetric image where the target organ is unfolded is reconstructed from the original image by using the relation between the approximated and the stretched shapes. We applied the proposed method to six 3-D abdominal CT images. The experimental results showed that the method generates adequate unfolded views of the target organs.
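As a toy illustration of the final resampling step, suppose the approximated organ shape were a simple cylinder; unfolding the wall then reduces to mapping each (angle, height) pixel of the flat image back to Cartesian coordinates in the original volume. This is a deliberately simplified sketch (the paper's method uses an elastically deformed, organ-specific approximated shape, not a cylinder):

```python
import math

def unfold_cylinder_wall(sample, radius, height, n_theta=64, n_z=32,
                         cx=0.0, cy=0.0):
    """Build a 2-D unfolded image of a cylindrical wall.

    `sample(x, y, z)` returns the volume intensity at a 3-D point;
    each output row corresponds to a height z, each column to an
    angle theta around the wall.
    """
    image = []
    for iz in range(n_z):
        z = height * iz / (n_z - 1)
        row = []
        for it in range(n_theta):
            theta = 2.0 * math.pi * it / n_theta
            # Map the unfolded coordinate back into the volume.
            x = cx + radius * math.cos(theta)
            y = cy + radius * math.sin(theta)
            row.append(sample(x, y, z))
        image.append(row)
    return image

# Usage with a synthetic intensity field whose brightness equals height.
img = unfold_cylinder_wall(lambda x, y, z: z, radius=1.0, height=10.0)
print(len(img), len(img[0]))  # -> 32 64
```

In the actual method, the cylinder would be replaced by the stretched elastic model, and the lookup would use the correspondence between the approximated and stretched shapes.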

Assessment of the revisability of surgery after endoprosthetic replacement of vertebral discs shows that the surgical approach depends on the time of revision surgery and the reason why it is carried out. Our experience is based on nine revision operations out of 152 cervical vertebra prostheses of the Bryan and Prodisc C types implanted from 2003 to 2007 and 312 endoprostheses of the Charité and Prodisc types implanted from 1999 to 2007. Our own results show differing approaches in perioperative or late postoperative revision operations. Operations to exchange implants were not possible, whereas a change of surgical procedure is the rule. The same access route can usually be selected in the cervical spine, but in the lumbar spine this can only be done perioperatively; if revision surgery is carried out at a later date, an alternative access route must be used. Using strict indications for the primary implant is the only way to prevent postoperative revision surgery that is due to an inaccurate primary assessment and not to the vertebral endoprosthesis (e.g. post-discotomy syndrome, facet joint arthropathy, rotation instability, vertebral slip). The next generation of vertebral disc endoprostheses must reduce loading of the zygapophyseal joints and improve revisability. PMID:18340432

Generation of reactive oxygen species (ROS) – including superoxide (•O2−), hydrogen peroxide (HOOH), and hydroxyl radical (•OH) – has been suggested as one mechanism underlying the adverse health effects caused by ambient particulate matter (PM). In this study we compare HOOH and •OH production from fine and coarse PM collected at an urban (Fresno) and a rural (Westside) site in the San Joaquin Valley (SJV) of California, as well as from laboratory solutions containing dissolved copper or iron. Samples were extracted in a cell-free, phosphate-buffered saline (PBS) solution containing 50 μM ascorbate (Asc). In our laboratory solutions we find that Cu is a potent source of both HOOH and •OH, with approximately 90% of the electrons that can be donated from Asc ending up in HOOH and •OH after 4 h. In contrast, in Fe solutions there is no measurable HOOH and only modest production of •OH. Soluble Cu in the SJV PM samples is also a dominant source of HOOH and •OH. In both laboratory copper solutions and extracts of ambient particles we find much more production of HOOH than of •OH: e.g., HOOH generation is approximately 30–60 times faster than •OH generation. The formation of HOOH and •OH is positively correlated, with roughly 3% and 8% of HOOH converted to •OH after 4 and 24 h of extraction, respectively. Although the SJV PM produces much more HOOH than •OH, •OH is a much stronger oxidant, so it is unclear which species is more important for oxidant-mediated toxicity from PM inhalation. PMID:22267949

Background Next generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy. Results Sequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform. Conclusions All three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support. PMID:22827831

The high harmonic generation (HHG) from a 3D hydrogen (H) atom in three kinds of inhomogeneous fields is investigated by accurately solving the time-dependent Schrödinger equation (TDSE) with the time-dependent generalized pseudospectral method (TDGPS), and the results are compared. The corresponding time-frequency analysis and three-step model are also presented to explain the differences between the three cases. We also calculate the ionization probability and electron wavepacket as functions of time to further illustrate this phenomenon. By superposing a series of properly selected harmonics, isolated attosecond pulses can be obtained straightforwardly, the shortest of which is 64 as.

This paper presents an analytical procedure for the calculation of the eddy current losses of a permanent magnet synchronous generator (PMSG). The dc and ac loading effects on the eddy current are examined through the suggested analytical procedure, which considers the radial and tangential flux density waveforms through a phase current harmonic analysis. The corresponding test results are also presented to quantify and compare those loading effects on the eddy current. The results verify the suggested analytical procedure and show that the rotor eddy current losses for a PMSG with dc loads turn out to be more significant than those with ac loads.

This web site provides an up-to-date report on the radioactive solid waste expected to be managed by Hanford's Waste Management (WM) Project from onsite and offsite generators. It includes: an overview of Hanford-wide solid waste to be managed by the WM Project; program-level and waste class-specific estimates; background information on waste sources; and comparisons with previous forecasts and with other national data sources. This web site does not include: liquid waste (current or future generation); waste to be managed by the Environmental Restoration (EM-40) contractor (i.e., waste that will be disposed of at the Environmental Restoration Disposal Facility (ERDF)); or waste that has been received by the WM Project to date (i.e., inventory waste). The focus of this web site is on low-level mixed waste (LLMW) and transuranic waste (both non-mixed and mixed) (TRU(M)). Some details on low-level waste and hazardous waste are also provided. Currently, this web site is reporting data that was requested on 10/14/96 and submitted on 10/25/96. The data represent a life cycle forecast covering all reported activities from FY97 through the end of each program's life cycle. Therefore, these data represent revisions from the previous FY97.0 Data Version, due primarily to revised estimates from PNNL. There is some useful information about the structure of this report in the SWIFT Report Web Site Overview.

Mammaglobin A (MGA) is an organ-specific molecular biomarker for metastatic breast cancer diagnosis. However, there is still a need to develop optimal monoclonal antibodies (mAbs) to detect MGA expression in breast carcinoma by immunohistochemistry. In this study, we first generated mAbs against MGA. Then, we used epitope prediction and computer-assisted structural analysis to screen five dominant epitopes and identified mAbs against the five epitopes. Further immunohistochemical analysis of 42 breast carcinoma specimens showed that MHG1152 and MGD785 gave intensive staining mainly in the membrane, while CHH11617, CHH995 and MJF656 gave more intensive staining within the cytoplasm. MGA scoring results showed that MJF656 had the highest rate (92.8%) of positive staining among the five mAbs, including higher staining intensity compared with that of MHG1152 (p < 0.01) and CHH995 (p < 0.05), and the highest mean percentage of cells stained among the mAbs. Furthermore, we analyzed the relationship of the positive staining rate of each mAb with patient clinical characteristics. The results suggest that MJF656 was able to detect MGA expression, especially in early clinical stage, low-grade and lymph node metastasis-negative breast carcinoma. In conclusion, our study generated five mAbs against MGA and identified the best candidate for detection of MGA expression in breast cancer tissues. PMID:26272389

Electrical discharges in liquids can be generated in several electrode configurations; one of them is the pin-hole configuration. The discharge is created inside a small orifice in a dielectric barrier separating two chambers filled with a conductive solution; each chamber contains one electrode. Based on the orifice length/diameter ratio, the discharge is called a capillary or a diaphragm discharge. The present paper gives the first detailed observation of the dependence of discharge creation on the orifice shape for selected NaCl solution conductivities (250-1000 μS cm-1). As the dielectric barrier, ceramic discs with thicknesses varying from 0.3 to 1.5 mm were used. The diameter of the single central pin-hole was in the range of 0.25-1.00 mm. Non-pulsing dc high voltage up to 2 kV, with power up to 500 W, was used for the presented study. The bubble theory of discharge generation was confirmed under the conditions studied.

The presence of hot halos around some colliding and supposed merging galaxies, as detected in X-rays, suggests that galaxy interactions may be responsible for the production of significant amounts of hot-phase interstellar gas in some systems. Possible mechanisms for producing this hot material are large-scale shock heating due to the collision itself, as well as the subsequent supernova explosions and intense stellar winds from the massive stars that are formed in collision-induced starbursts. We are using numerical simulations of galaxy collisions and mergers to explore the possible contribution of these various physical mechanisms. These simulations are compared with observations of real systems. Here we report on results from the application of a new N-body/smoothed particle hydrodynamics simulation code that has been constructed to allow the representation of multiple phases in the interstellar medium (Hearn et al, in preparation). This simulation code has been used to explore the generation of hot interstellar gas due to the large-scale shock heating that occurs during the collision and merger of two gas-rich disk galaxies. This current study allows us to place limits on the effect of the collision itself (as opposed to the results of subsequent star formation) on the generation of hot halos. We compare our numerical results to the extensive observations of the collisional merging system Arp 220 and, in particular, with our recent Chandra observations of its extended X-ray halo (McDowell et al, in preparation).

Revision rhinoplasty is a complex operation with many variables that may influence the final esthetic and functional outcome of the procedure. Cartilage forms the structural framework of the lower two-thirds of the nose and is essential for long-term support and maintenance of a patent nasal airway. Autologous cartilage grafting is the primary source of this material, but it is limited by donor site quantity, quality, and harvest morbidity. Alloplastic materials, solid and injectable, are often used for augmentation purposes and may have devastating consequences. This article discusses past and current treatment concepts for various nasal deformities using available autologous grafting techniques. PMID:27400847

The aim of the study was to directly compare the threshold electrical charge density of the retina (retinal threshold) in rabbits for the generation of electrical evoked potentials (EEP) by delivering electrical stimulation with a custom-made microelectrode array (MEA) implanted into either the subretinal or suprachoroidal space. Nine eyes of seven Dutch-belted rabbits were studied. The electroretinogram (ERG), visual evoked potentials (VEP) and EEP were recorded. Electrodes for the VEP and EEP were placed on the dura mater overlying the visual cortex. The EEP was recorded following electrical stimulation of the MEA placed either subretinally beneath the visual streak of the retina or in the suprachoroidal space in the rabbit eye. An ab externo approach was used for placement of the MEA. Liquid perfluorodecaline (PFCL; 0.4 ml) was placed within the vitreous cavity to flatten the neurosensory retina on the MEA after subretinal implantation. The retinal threshold for generation of an EEP was determined for each MEA placement by three consecutive measurements consisting of 100 computer-averaged recordings. Animals were sacrificed at the conclusion of the experiment and the eyes were enucleated for histological examination. The retinal threshold to generate an EEP was 9 ± 7 nC (0.023 ± 0.016 mC cm-2) within the subretinal space and 150 ± 122 nC (0.375 ± 0.306 mC cm-2) within the suprachoroidal space. Histology showed disruption of the outer retina with subretinal but not suprachoroidal placement. The retinal threshold to elicit an EEP is significantly lower with subretinal placement of the MEA compared to suprachoroidal placement (P < 0.05). The retinal threshold charge density with a subretinal MEA is well below the published charge limit of 1 mC cm-2, which is the level below which chronic stimulation of the retina is considered necessary to avoid tissue damage (Shannon 1992 IEEE Trans. Biomed. Eng. 39 424-6). Supported in part by The Charles D Kelman, MD
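The charge densities quoted above are simply the injected charge divided by the stimulating electrode area (Q/A). Back-calculating from the stated subretinal figures gives an implied electrode area of roughly 3.9×10⁻⁴ cm²; this area is an inference from the reported numbers, since the array geometry itself is not given in the abstract:

```python
def charge_density_mC_per_cm2(charge_nC: float, area_cm2: float) -> float:
    """Charge density Q/A in mC cm^-2, given charge in nC (1 nC = 1e-6 mC)."""
    return (charge_nC * 1e-6) / area_cm2

# Electrode area implied by the subretinal numbers (9 nC -> 0.023 mC cm^-2).
area = (9.0 * 1e-6) / 0.023
print(round(area * 1e4, 2))  # -> 3.91  (i.e., ~3.9e-4 cm^2)

# The same area reproduces the suprachoroidal figure within rounding
# (150 nC over ~4.0e-4 cm^2 gives 0.375 mC cm^-2, as reported).
print(round(charge_density_mC_per_cm2(150.0, 4.0e-4), 3))  # -> 0.375
```

Both values sit well below the 1 mC cm⁻² chronic-stimulation limit cited in the abstract, though the suprachoroidal threshold is far closer to it.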

Background Alcohol drinking is a risk factor for harm and disease. A low level of drinking among non-Western immigrants may lead to less alcohol-related harm and disease. The first aim of this study was to describe frequency of drinking in two generations of immigrants in Oslo, contrasting the result to drinking frequency among ethnic Norwegians. The second aim was to study how frequency of drinking among adult immigrants was associated with social interaction with their own countrymen and ethnic Norwegians, acculturation, age, gender, socioeconomic factors and the Muslim faith. Method The Oslo Health Study (HUBRO) was conducted during the period 2000 to 2002 and consisted of three separate surveys: a youth study (15-16-year-olds, a total of 7343 respondents, response rate 88.3%); adult cohorts from 30 to 75 years old (18,770 respondents, response rate 46%); the five largest immigrant groups in Oslo (aged 20–60 years, a total of 3019 respondents, response rate 39.7%). Based on these three surveys, studies of frequency of drinking in the previous year (four categories) were conducted among 15-16-year-olds and their parents’ generation, 30-60-year-old Iranians, Pakistanis, Turks and ethnic Norwegians. A structural equation model with drinking frequency as outcome was established for the adult immigrants. Results Adults and youth of ethnic Norwegian background reported more frequent alcohol use than immigrants with backgrounds from Iran, Turkey and Pakistan. Iranians reported a higher drinking frequency than Turks and Pakistanis. In the structural equation model high drinking frequency was associated with high host culture competence and social interaction, while high own culture competence was associated with low drinking frequency. Adult first-generation immigrants with a longer stay in Norway, those of a higher age, and females drank alcohol less frequently, while those with a higher level of education and work participation drank more frequently. Muslim

Background It is widely acknowledged that there is value in examining cancers for genomic aberrations via next-generation sequencing (NGS). How commercially available NGS platforms compare with each other, and the clinical utility of the reported actionable results, are not well known. During the course of the current study, the Foundation One (F1) test generated data on a combination of somatic mutations, insertion and deletion polymorphisms, chromosomal abnormalities, and deoxyribonucleic acid (DNA) copy number changes at ~250× coverage, while the Paradigm Cancer Diagnostic (PCDx) test generated the same type of data at >5,000× coverage, plus provided messenger RNA (mRNA) expression levels. We sought to compare and evaluate paired formalin-fixed paraffin-embedded tumor tissue using these two platforms. Methods Samples from patients with advanced solid tumors were submitted to both the F1 and PCDx vendors for NGS analysis. Turnaround time (TAT) was calculated. Biomarkers were considered clinically actionable if they had a published association with treatment response in humans and were assigned to the following categories: commercially available drug (CA), clinical trial drug (CT), or neither option (hereafter referred to as “None”). Results The demographics of the 21 unique patient tumor samples included ten men and eleven women, with a median age of 56 years. Due to insufficient archival tissue from the same collection period, in one case, we used samples from different collections. PCDx reported first results faster than F1 in 20 cases. When received at both vendors on the same day, PCDx reported first results for 14 of 15 cases, with a median TAT of 9 days earlier than F1 (P<0.0001). Categorization of CA compared to CT and none significantly favored PCDx (P=0.012). Conclusion In the current analysis, commercially available NGS platforms provided clinically relevant actionable targets (CA or CT) in 47%–67% of diverse cancer types. In the samples

This paper discusses recent modifications to the Serpent Monte Carlo code methodology related to the calculation of few-group diffusion coefficients and reflector discontinuity factors. The new methods were assessed in the following manner. First, few-group homogenized cross sections calculated by Serpent for a reference PWR core were compared with those generated by the commercial deterministic lattice transport code HELIOS-2. Second, the Serpent and HELIOS-2 few-group cross section sets were employed by the nodal diffusion code DYN3D for modeling the reference PWR core. Finally, the nodal diffusion results obtained using both cross section sets were compared with the full-core Serpent Monte Carlo solution. The test calculations show that Serpent can calculate the parameters required for nodal analyses similarly to conventional deterministic lattice codes. (authors)
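For context, the few-group diffusion coefficient mentioned above is conventionally obtained from the group transport cross section as D_g = 1/(3Σ_tr,g). This is the textbook definition used in nodal diffusion practice, sketched below with hypothetical numbers; Serpent's actual methodology involves additional homogenization steps not shown here:

```python
def diffusion_coefficient(sigma_tr: float) -> float:
    """Group diffusion coefficient D = 1 / (3 * Sigma_tr),
    with Sigma_tr in cm^-1 and D in cm."""
    return 1.0 / (3.0 * sigma_tr)

# Illustrative two-group transport cross sections for a PWR lattice,
# in cm^-1 -- hypothetical values, not results from the paper.
sigma_tr = {"fast": 0.25, "thermal": 1.0}
D = {group: diffusion_coefficient(s) for group, s in sigma_tr.items()}
print(D)  # fast group ~1.33 cm, thermal group ~0.33 cm
```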

The second order nonlinear polarizability and dipole moment changes upon light excitation of light-adapted bacteriorhodopsin (BR), dark-adapted BR, blue membrane, and acid purple membrane have been measured by second harmonic generation. Our results indicate that the dipole moment changes of the retinal chromophore, delta mu, are very sensitive to both the chromophore structure and protein/chromophore interactions. Delta mu of light-adapted BR is larger than that of dark-adapted BR. The acid-induced formation of the blue membrane results in an increase in the delta mu value, and formation of acid purple membrane, resulting from further reduction of pH to 0, returns the delta mu to that of light-adapted BR. The implications of these findings are discussed. PMID:7811928

InVivoStat is a free-to-use statistical software package for analysis of data generated from animal experiments. The package is designed specifically for researchers in the behavioural sciences, where exploiting the experimental design is crucial for reliable statistical analyses. This paper compares the analysis of three experiments conducted using InVivoStat with other widely used statistical packages: SPSS (V19), PRISM (V5), UniStat (V5.6) and Statistica (V9). We show that InVivoStat provides results that are similar to those from the other packages and, in some cases, are more advanced. This investigation provides evidence of further validation of InVivoStat and should strengthen users' confidence in this new software package. PMID:22071578

Background: Technological advances in wire selection and bracket design have led to improved treatment efficiency and allowed longer time intervals between appliance adjustments. The wires remain in the mouth for a longer duration and are subjected to electrochemical reactions, mechanical forces of mastication and generalized wear. These cause different types of corrosion. This study was done to compare the galvanic currents generated between different combinations of brackets and archwires commonly used in orthodontic practice. Materials and Methods: The materials used for the study included different commercially available orthodontic archwires and brackets. The galvanic current generated by individual materials and by different combinations of these materials was tested and compared. The orthodontic archwires used were 0.019″ × 0.025″ heat-activated nickel-titanium (3M Unitek), 0.019″ × 0.025″ beta-titanium (3M Unitek) and 0.019″ × 0.025″ stainless steel (3M Unitek). The orthodontic brackets used were 0.022″ MBT laser-cut (Victory Series, 3M Unitek) and metal-injection molded (Leone Company) maxillary central incisor brackets. The ligature wire used for ligation was 0.009″ stainless steel ligature (HP Company). The galvanic current for individual archwires, brackets, and the different bracket-archwire-ligature combinations was measured using a potentiostat. The data were generated using linear sweep voltammetry and the OriginPro 8.5 Graphing and Data Analysis software. The study was conducted in two phases. Phase I comprised five groups for open circuit potential (OCP) and galvanic current (I), whereas Phase II comprised six groups for galvanic current alone. Results: Mean, standard deviation and range were computed for the OCP and galvanic current (I) values obtained. Results were subjected to statistical analysis through ANOVA. In Phase I, higher mean OCP was recorded in stainless steel archwire, followed by beta

This study revised the Teacher Beliefs Survey (TBS; S. Wooley and A. Wooley, 1999), an instrument to assess teachers' beliefs related to constructivist and behaviorist theories of learning, and then studied the validity of the revised TBS. Drawing on a literature review, researchers added items for the existing constructs of the TBS and added a new…

This study compared the effectiveness of the multifocal visual evoked cortical potentials (mfVEP) elicited by pattern pulse stimulation with that of pattern reversal in producing reliable responses (signal-to-noise ratio >1.359). Participants were 14 healthy subjects. Visual stimulation was obtained using a 60-sector dartboard display consisting of 6 concentric rings presented in either pulse or reversal mode. Each sector, consisting of 16 checks at 99% Michelson contrast and 80 cd/m² mean luminance, was controlled by a binary m-sequence in the time domain. The signal-to-noise ratio was generally larger in the pattern reversal than in the pattern pulse mode. The number of reliable responses was similar in the central sectors for the two stimulation modes. At the periphery, pattern reversal showed a larger number of reliable responses. Pattern pulse stimuli performed similarly to pattern reversal stimuli to generate reliable waveforms in R1 and R2. The advantage of using both protocols to study mfVEP responses is their complementarity: in some patients, reliable waveforms in specific sectors may be obtained with only one of the two methods. The joint analysis of pattern reversal and pattern pulse stimuli increased the rate of reliability for central sectors by 7.14% in R1, 5.35% in R2, 4.76% in R3, 3.57% in R4, 2.97% in R5, and 1.78% in R6. From R1 to R4 the reliability to generate mfVEPs was above 70% when using both protocols. Thus, for a very high reliability and thorough examination of visual performance, it is recommended to use both stimulation protocols. PMID:22782556
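
The reliability criterion used above (signal-to-noise ratio > 1.359) amounts to a simple per-sector threshold count. A minimal sketch with hypothetical SNR values for one ring:

```python
RELIABILITY_THRESHOLD = 1.359  # SNR cut-off for a "reliable" mfVEP response

def reliable_sectors(snr_values, threshold=RELIABILITY_THRESHOLD):
    """Return the indices of sectors whose SNR exceeds the reliability threshold."""
    return [i for i, snr in enumerate(snr_values) if snr > threshold]

# hypothetical SNRs for the 6 sectors of one ring (illustrative values only)
snrs = [2.1, 1.4, 1.2, 3.0, 1.36, 0.9]
print(reliable_sectors(snrs))  # → [0, 1, 3, 4]
```

Running the count separately for the pattern-reversal and pattern-pulse recordings, then taking the union of reliable sectors, mirrors the joint-analysis gain in reliability the study reports.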

To study the function and maturation of the human hematopoietic and immune system without endangering individuals, translational human-like animal models are needed. We compare the efficiency of CD34+ stem cells isolated from cryopreserved cord blood from a blood bank (CCB) and fresh cord blood (FCB) in generating highly engrafted humanized mice in NOD-SCID IL2Rγnull (NSG) rodents. Interestingly, the isolation of CD34+ cells from CCB results in a lower yield and purity compared to FCB. The purity of CD34+ isolation from CCB decreases with an increasing number of mononuclear cells, an effect that is not evident in FCB. Despite the lower yield and purity of CD34+ stem cell isolation from CCB compared to FCB, the overall reconstitution with human immune cells (CD45) and the differentiation of its subpopulations, e.g., B cells, T cells or monocytes, is comparable between both sources. In addition, independent of the cord blood origin, human B cells are able to produce high amounts of human IgM antibodies and human T cells are able to proliferate after stimulation with anti-CD3 antibodies. Nevertheless, T cells generated from FCB showed an increased response to restimulation with anti-CD3. Our study reveals that the use of CCB samples for the engraftment of humanized mice does not result in lower engraftment or a loss of differentiation and function of its subpopulations. Therefore, CCB is a reasonable alternative to FCB and allows the selection of specific genotypes (or any other criteria), making scientists independent of the daily changing birth rate. PMID:23071634

In this paper, we have investigated the combined effects of Newtonian heating and internal heat generation/absorption in the two-dimensional flow of an Eyring-Powell fluid over a stretching surface. The governing nonlinear partial differential equations are reduced to ordinary differential equations using similarity transformations. The resulting problems are computed for both series and numerical solutions. The series solution is constructed using the homotopy analysis method (HAM), whereas the numerical solution is obtained by two different techniques, namely the shooting method and bvp4c. A comparison of the homotopy solution with the numerical solution is also tabulated. Both solutions are found to be in excellent agreement. Dimensionless velocity and temperature profiles are plotted and discussed for various emerging physical parameters. PMID:26402366
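
The shooting method mentioned above can be sketched on a toy linear boundary value problem (not the Eyring-Powell similarity equations themselves): guess the unknown initial slope, integrate the ODE forward with RK4, and bisect on the far-boundary residual.

```python
import math

def rk4(f, y0, x0, x1, n=200):
    """Integrate y' = f(x, y) (y a list of components) with classical RK4."""
    h = (x1 - x0) / n
    x, y = x0, list(y0)
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(x + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(x + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h / 6 * (a + 2 * b + 2 * c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        x += h
    return y

# model BVP: y'' = -y, y(0) = 0, y(pi/2) = 1  (exact solution y = sin x, y'(0) = 1)
f = lambda x, y: [y[1], -y[0]]

def shoot(s):
    """Far-boundary residual for initial-slope guess s."""
    return rk4(f, [0.0, s], 0.0, math.pi / 2)[0] - 1.0

# bisection on the initial slope
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"recovered y'(0) ≈ {0.5 * (lo + hi):.6f}")
```

SciPy's `solve_bvp` (the Python analogue of MATLAB's bvp4c) solves the same class of problem by collocation rather than shooting, which is why comparing the two, as the paper does for HAM versus numerics, is a useful consistency check.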

An effective, state-of-the art secondary water chemistry control program is essential to maximize the availability and operating life of major PWR components. Furthermore, the costs related to maintaining secondary water chemistry will likely be less than the repair or replacement of steam generators or large turbine rotors, with resulting outages taken into account. The revised PWR secondary water chemistry guidelines in this report represent the latest field and laboratory data on steam generator corrosion phenomena. This document supersedes Interim PWR Secondary Water Chemistry Recommendations for IGA/SCC Control (EPRI report TR-101230) as well as PWR Secondary Water Chemistry Guidelines--Revision 2 (NP-6239).

The behavior of strong gravitational lens model software in the analysis of lens models is not necessarily consistent among the various software available, suggesting that the use of several models may enhance the understanding of the system being studied. Among the publicly available codes, the model input files are heterogeneous, making the creation of multiple models tedious. An enhanced method of creating model files and a method to easily create multiple models may increase the number of comparison studies. HydraLens simplifies the creation of model files for four strong gravitational lens model software packages, including Lenstool, Gravlens/Lensmodel, glafic and PixeLens, using a custom-designed GUI for each of the four codes that simplifies model entry, obviating the need to consult user manuals to set the values of the many flags and data fields. HydraLens is designed in a modular fashion, which simplifies the addition of other strong gravitational lens codes in the future. HydraLens can also translate a model generated for any of these four software packages into any of the other three. Models created using HydraLens may require slight modifications, since some information may be lost in the translation process. However, the computer-generated model greatly simplifies the process of developing multiple lens models. HydraLens may increase the number of direct software comparison studies and also assist in the education of young investigators in gravitational lens modeling. Future development of HydraLens will further enhance its capabilities.

To evaluate the seed shadow generated by wild Japanese martens (Martes melampus), we combined data on their ranging behavior from the northern foot of Mt. Fuji, central Japan (seven males and three females) with data on gut passage time obtained from martens in Toyama Municipal Family Park Zoo (three males and one female). The movement distances varied, and mean distances for 0-1, 2-3, and 4-5 h intervals were 152.4, 734.7, and 1,162.4 m, respectively, with no significant sex difference. The mean gut passage time of ingested seeds was 7.4 h (range: 0.6-51.7 h), and two-thirds were defecated within 12 h. Seeds of fleshy fruits were frequently transported to 501-1,000 m, and 20% of ingested seeds were transported > 1,000 m from feeding sites. We found positive correlations between body size and home range of the animals in Japan and their seed dispersal distances. We conclude that Japanese martens are medium-range dispersers that can transport seeds from the source to open habitats conducive for germination and/or growth, partly due to scent marking behaviors. PMID:27498794

We analyze the high-order harmonic generation (HHG) of Ar atoms and H2 molecules in 780 nm laser fields with the pulse duration about 30 fs and various peak intensities by means of the self-interaction-free time-dependent density functional theory (TDDFT). Since the ionization potentials of Ar and H2 are close to each other, the cutoff position of the HHG spectra for the specific intensity is expected approximately at the same harmonic order, according to the three-step model. In general, our TDDFT calculations agree with this prediction; however, in the high-energy part of the HHG spectra, the harmonic signal from H2 is considerably lower than that from Ar. On the other hand, the HHG spectrum of Ar has a prominent minimum at the photon energy 50 eV, especially for lower laser intensities. This minimum has the same nature as the well-known Cooper minimum in Ar observed in photoionization cross sections.
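
The three-step-model cutoff the abstract refers to, E_max = Ip + 3.17 Up, can be evaluated directly. The 1e14 W/cm² intensity below is an assumed illustrative value, not one quoted in the abstract; the ionization potentials are the standard tabulated values for Ar and H2.

```python
def cutoff_harmonic(Ip_eV, intensity_Wcm2, wavelength_nm):
    """Three-step-model HHG cutoff: E_max = Ip + 3.17 * Up.

    Up (ponderomotive energy, eV) = 9.33e-14 * I[W/cm^2] * (lambda[um])^2.
    Returns (cutoff energy in eV, cutoff harmonic order).
    """
    lam_um = wavelength_nm / 1000.0
    Up = 9.33e-14 * intensity_Wcm2 * lam_um ** 2
    photon_eV = 1239.842 / wavelength_nm      # photon energy from wavelength
    E_cut = Ip_eV + 3.17 * Up
    return E_cut, E_cut / photon_eV

# Ar (Ip = 15.76 eV) vs H2 (Ip = 15.43 eV) in a 780 nm field at an assumed 1e14 W/cm^2
for name, Ip in [("Ar", 15.76), ("H2", 15.43)]:
    E, n = cutoff_harmonic(Ip, 1e14, 780.0)
    print(f"{name}: cutoff ≈ {E:.1f} eV (harmonic ~{n:.0f})")
```

Because the two ionization potentials differ by only 0.33 eV, well under one 780 nm photon energy, the predicted cutoff orders nearly coincide, which is the expectation the TDDFT calculations test.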

Background Aortopathies are a group of disorders characterized by aneurysms, dilation, and tortuosity of the aorta. Because of the phenotypic overlap and genetic heterogeneity of diseases featuring aortopathy, molecular testing is often required for timely and correct diagnosis of affected individuals. In this setting next generation sequencing (NGS) offers several advantages over traditional molecular techniques. Methods The purpose of our study was to compare NGS enrichment methods for a clinical assay targeting the nine genes known to be associated with aortopathy. RainDance emulsion PCR and SureSelect RNA-bait hybridization capture enrichment methods were directly compared by enriching DNA from eight samples. Enriched samples were barcoded, pooled, and sequenced on the Illumina HiSeq2000 platform. Depth of coverage, consistency of coverage across samples, and the overlap of variants identified were assessed. These data were also compared to whole-exome sequencing data from ten individuals. Results Read depth was greater and less variable among samples that had been enriched using the RNA-bait hybridization capture enrichment method. In addition, samples enriched by hybridization capture had fewer exons with mean coverage less than 10, reducing the need for follow-up Sanger sequencing. Variant sets produced were 77% concordant, with both techniques yielding similar numbers of discordant variants. Conclusions When comparing the design flexibility, performance, and cost of the targeted enrichment methods to whole-exome sequencing, the RNA-bait hybridization capture enrichment gene panel offers the better solution for interrogating the aortopathy genes in a clinical laboratory setting. PMID:23148498

A prospective model of parenting and externalizing behavior spanning 3 generations (G1, G2, and G3) was examined for young men from an at-risk sample of young adult men (G2) who were in approximately the youngest one third of their cohort to become fathers. It was first predicted that the young men in G2 who had children the earliest would show high levels of antisocial behavior. Second, it was predicted that G1 poor parenting practices would show both a direct association with the G2 son's subsequent parenting and a mediated effect via his development of antisocial and delinquent behavior by adolescence. The young fathers had more arrests and were less likely to have graduated from high school than the other young men in the sample. Findings were most consistent with the interpretation that there was some direct effect of parenting from G1 to G2 and some mediated effect via antisocial behavior in G2. PMID:12735396

Objectives. Five different second-generation supraglottic airway devices, ProSeal LMA, Supreme LMA, i-gel, SLIPA, and Laryngeal Tube Suction-D, were studied. Operators were inexperienced users with a military background, combat lifesavers, nurses, and physicians. Methods. This was a prospective, randomized, single-blinded study. Devices were inserted in the operating room in low light conditions after induction of general anesthesia. Primary outcome was successful insertion on the first attempt while secondary aims were insertion time, number of attempts, oropharyngeal seal pressure, ease of insertion, fibre optic position of device, efficacy of ventilation, and intraoperative trauma or regurgitation of gastric contents. Results. In total, 505 patients were studied. First-attempt insertion success rate was higher in the Supreme LMA (96%), i-gel (87.9%), and ProSeal LMA (85.9%) groups than in the Laryngeal Tube Suction-D (80.6%) and SLIPA (69.4%) groups. Insertion time was shortest in the Supreme LMA (70.4 ± 32.5 s) and i-gel (74.4 ± 41.1 s) groups (p < 0.001). Oropharyngeal seal pressures were higher in the Laryngeal Tube Suction-D and ProSeal LMA groups than in other three devices. Conclusions. Most study parameters for the Supreme LMA and i-gel were found to be superior to the other three tested supraglottic airway devices when inserted by novice military operators. PMID:26495289

Rapid detection and reporting of third generation cephalosporin resistance (3GC-R) and of extended spectrum beta-lactamases in Enterobacteriaceae (ESBL-E) is a diagnostic and therapeutic priority to avoid inefficacy of the initial antibiotic regimen. In this study we evaluated a commercially available chromogenic screen for 3GC-R as a predictive and/or confirmatory test for ESBL and AmpC activity in clinical and veterinary Enterobacteriaceae isolates. The test was highly reliable in the prediction of cefotaxime and cefpodoxime resistance, but there was no correlation with ceftazidime and piperacillin/tazobactam minimal inhibitory concentrations. All human and porcine ESBL-E tested were detected, with the exception of one genetically positive but phenotypically negative isolate. By contrast, AmpC detection rates lay below 30%. Notably, exclusion of piperacillin/tazobactam-resistant, 3GC-susceptible K1+ Klebsiella isolates increased the sensitivity and specificity of the test for ESBL detection. Our data further imply that in regions with low prevalence of AmpC and K1 positive E. coli strains, chromogenic testing for 3GC-R can substitute for more time-consuming confirmatory ESBL testing in E. coli isolates tested positive by Phoenix or VITEK2 ESBL screen. We, therefore, suggest a diagnostic algorithm that distinguishes 3GC-R screening from primary culture and species-dependent confirmatory ESBL testing by βLACTA™ and discuss the implications of MIC distribution results on the choice of antibiotic regimen. PMID:27494134

A major interplate earthquake occurred on September 16th, 2015, near Illapel, central Chile. This event generated a tsunami of moderate height, however, one which caused significant near field damage. In this study, we model the tsunami produced by some rapid and preliminary fault models with the potential to be calculated within tens of minutes of the event origin time. We simulate tsunami signals from two different heterogeneous slip models, a homogeneous source based on parameters from the global CMT Project, and furthermore we used plate coupling data from GPS observations to construct a heterogeneous fault based on a priori knowledge of the subduction zone. We compare the simulated signals with the observed tsunami at tide gauges located along the Chilean coast and at offshore DART buoys. For this event, concerning rapid response, the homogeneous source and coupling model represent the tsunami at least as well as the heterogeneous sources. We suggest that the initial heterogeneous fault models could be better constrained with continuous GPS measurements in the rupture area, and additionally DART records directly in front of the rupture area, to improve the tsunami simulation based on quickly calculated models for near coastal areas. Additionally, in terms of tsunami modeling, the source estimated from prior plate coupling information in this case is representative of the event that later occurs; placing further importance on the need to monitor subduction zones with GPS.

This article examines how an online scholarly journal, "Kairos: Rhetoric, Technology, Pedagogy," mentors authors to revise their webtexts (interactive, digital media scholarship) for publication. Using an editorial pedagogy in which multimodal and rhetorical genre theories are merged with revision techniques found in process-based…

This final rule provides several necessary revisions to the regulation in order for TRICARE to be consistent with Medicare. These revisions affect: Hospice periods of care; reimbursement of physician assistants and assistant-at-surgery claims; and diagnosis-related group values, removing references to specific numeric diagnosis-related group values and replacing them with their narrative description. PMID:22737760

This study explored the ways gender is accomplished in varied social contexts during the peer revision process in a secondary English classroom. Using a post-structural feminist theoretical framework, an analysis of classroom discourse provided a basis for understanding the performance of gender during peer revision, the effects of gender…

Next-generation sequencing (NGS) has recently been used for analysis of HIV diversity, but this method is labor-intensive, costly, and requires complex protocols for data analysis. We compared diversity measures obtained using NGS data to those obtained using a diversity assay based on high-resolution melting (HRM) of DNA duplexes. The HRM diversity assay provides a single numeric score that reflects the level of diversity in the region analyzed. HIV gag and env from individuals in Rakai, Uganda, were analyzed in a previous study using NGS (n = 220 samples from 110 individuals). Three sequence-based diversity measures were calculated from the NGS sequence data (percent diversity, percent complexity, and Shannon entropy). The amplicon pools used for NGS were analyzed with the HRM diversity assay. HRM scores were significantly associated with sequence-based measures of HIV diversity for both gag and env (P < 0.001 for all measures). The level of diversity measured by the HRM diversity assay and NGS increased over time in both regions analyzed (P < 0.001 for all measures except for percent complexity in gag), and similar amounts of diversification were observed with both methods (P < 0.001 for all measures except for percent complexity in gag). Diversity measures obtained using the HRM diversity assay were significantly associated with those from NGS, and similar increases in diversity over time were detected by both methods. The HRM diversity assay is faster and less expensive than NGS, facilitating rapid analysis of large studies of HIV diversity and evolution. PMID:22785188
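
Shannon entropy, one of the three sequence-based diversity measures named above, is computed per alignment position and averaged. A minimal sketch on hypothetical aligned reads (not the Rakai data):

```python
from math import log2
from collections import Counter

def mean_shannon_entropy(reads):
    """Mean per-position Shannon entropy (bits) over an aligned set of reads."""
    length = len(reads[0])
    total = 0.0
    for pos in range(length):
        counts = Counter(r[pos] for r in reads)   # base frequencies at this column
        n = len(reads)
        total -= sum((c / n) * log2(c / n) for c in counts.values())
    return total / length

# clonal reads give zero entropy; diverse reads give positive entropy
clonal = ["ACGT", "ACGT", "ACGT", "ACGT"]
diverse = ["ACGT", "ACGA", "ATGT", "GCGT"]
print(mean_shannon_entropy(clonal), mean_shannon_entropy(diverse))
```

The HRM assay replaces this sequence-level computation with a single melt-curve-derived score, which is why the study's key result is the correlation between the two rather than identity of values.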

Aims Based on KRAS testing, the subset of patients with metastatic colorectal cancer (CRC) that could benefit from anti-EGFR therapy can be better delineated. Though KRAS testing has become significantly more prevalent over the last few years, methods for testing remain heterogeneous and discordance has been reported between methods. Methods In this study, we examined a CRC patient population and compared KRAS testing done in Clinical Laboratory Improvement Amendments (CLIA) approved laboratories as part of standard clinical care and by next-generation sequencing (NGS) using the Illumina platform. Discordances were further evaluated with manual review of the NGS testing. Results Out of 468 CRC patient samples, 77 had KRAS testing done by both CLIA assay and NGS. There were concordant results between testing methodologies in 74 out of 77 patients, or 96% (95% CI 89% to 99%). There were three patient samples that showed discordant results between the two methods of testing. Upon further investigation of the NGS results for the three discordant cases, one sample showed a low level of the mutation seen in the standard testing, one sample showed low tumour fraction and a third did not show any evidence of the mutation that was found with the standard assay. Five patients had KRAS mutations not typically tested with standard testing. Conclusions Overall there was a high concordance rate between NGS and standard testing for KRAS. However, NGS revealed mutations that are not tested for with standard KRAS assays that might have clinical impact with regards to the role for anti-EGFR therapy. PMID:25004944
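
The reported concordance (74 of 77 samples, 96%, 95% CI 89% to 99%) is reproducible with a Wilson score interval for a binomial proportion:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(74, 77)   # 74 of 77 concordant KRAS calls
print(f"concordance = {74/77:.0%}, 95% CI ({lo:.0%} to {hi:.0%})")
```

The interval is deliberately asymmetric: with a proportion this close to 1, a naive normal approximation would spill past 100%, whereas the Wilson interval stays inside [0, 1].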

Mutation-specific antibodies for BRAF V600E and IDH1 R132H offer convenient immunohistochemical (IHC) assays to detect these mutations in tumors. Previous studies using these antibodies have shown high sensitivity and specificity, but use in routine diagnosis with qualitative assessment has not been well studied. In this retrospective study, we reviewed BRAF and IDH1 mutation-specific IHC results compared with separately obtained clinical next-generation sequencing results. For 67 tumors with combined IDH1 IHC and mutation data, IHC was unequivocally reported as positive or negative in all cases. Sensitivity of IHC for IDH1 R132H was 98% and specificity was 100% compared with mutation status. Four IHC-negative samples showed non-R132H IDH1 mutations including R132C, R132G, and P127T. For 128 tumors with combined BRAF IHC and mutation data, IHC was positive in 33, negative in 82, and equivocal in 13 tumors. The sensitivity of IHC was 97% and specificity was 99% when including only unequivocally positive or negative results. If equivocal IHC cases were included in the analysis as negative, sensitivity fell to 81%. If equivocal cases were classified as positive, specificity dropped to 91%. Eight IHC-negative samples showed non-V600E BRAF mutations including V600K, N581I, V600M, and K601E. We conclude that IHC for BRAF V600E and IDH1 R132H is relatively sensitive and specific, but there is a discordance rate that is not trivial. In addition, a significant proportion of patients harbor BRAF non-V600E or IDH1 non-R132H mutations not detectable by IHC, potentially limiting utility of IHC screening for BRAF and IDH1 mutations. PMID:25634750

Purpose In addition to environmental causes such as TORCH infection, trauma and drug or chemical exposure, childhood cataracts (CC) frequently have a genetic basis. They may be isolated or syndromic and have been associated with mutations in over 110 genes. We have recently demonstrated that next-generation sequencing (NGS), a high throughput sequencing technique that enables the parallel sequencing of multiple genes, is ideally suited to the investigation of bilateral CC. This study assesses the diagnostic outcomes of traditional routine investigations and compares this with outcomes of NGS testing. Methods A retrospective review of the medical records of 27 consecutive patients with bilateral CC presenting in 2010-2012 was undertaken. The outcomes of routine investigations in these patients, including TORCH screen, urinalysis, karyotyping, and urinary and plasma organic amino acids, were collated. The success of routine genetic investigations undertaken over 10 years (2000-2010) was also assessed. Results By April 2014, the underlying cause of bilateral CC had been identified in just one of 27 patients despite 44% (n=12) receiving a full 'standard' investigative work-up and 22% (n=6) investigations in addition to the standard work-up. Fifteen of these patients underwent NGS testing and nine (60%) of these received a diagnosis for their CC. Conclusion The frequency of patients receiving a diagnosis for their CC after standard care and the time taken to diagnosis was disappointing. NGS testing improved diagnostic rates and time to diagnosis, as well as changing clinical management. These data serve as a baseline for future evaluation of novel diagnostic modalities. PMID:27315345

Diet is an important variable in toxicology. There are mixed reports on the impact of soy components on energy utilization, fat deposition, and reproductive parameters. Three generations of CD-1 mice were fed irradiated natural ingredient diets with varying levels of soy (NIH-41, 5K96, or 5008/5001), purified irradiated AIN-93 diet, or the AIN-93 formulation modified with ethanol-washed soy protein concentrate (SPC) or SPC with isoflavones (SPC-IF). NIH-41 was the control for pairwise comparisons. Minimal differences were observed among natural ingredient diet groups. F0 males fed AIN-93, SPC, and SPC-IF diets had elevated glucose levels and lower insulin levels compared with the NIH-41 group. In both sexes of the F1 and F2 generations, the SPC and SPC-IF groups had lower body weight gains than the NIH-41 controls and the AIN-93 group had an increased percent body fat at postnatal day 21. AIN-93 F1 pups had higher baseline glucose than NIH-41 controls, but diet did not significantly affect breeding performance or responses to glucose or uterotrophic challenges. Reduced testes weight and sperm in the AIN-93 group may be related to low thiamine levels. Our observations underline the importance of careful selection, manufacturing procedures, and nutritional characterization of diets used in toxicological studies. PMID:27234134

Objective Rett syndrome (RTT) is a severe neurodevelopmental disease that affects approximately 1 in 10,000 live female births and is often caused by mutations in Methyl-CpG-binding protein 2 (MECP2). Despite distinct clinical features, the accumulation of clinical and molecular information in recent years has generated considerable confusion regarding the diagnosis of RTT. The purpose of this work was to revise and clarify the 2002 consensus criteria for the diagnosis of RTT in anticipation of treatment trials. Method RettSearch members, representing the majority of the international clinical RTT specialists, participated in an iterative process to reach consensus on revised and simplified clinical diagnostic criteria for RTT. Results The clinical criteria required for the diagnosis of classic and atypical RTT were clarified and simplified. Guidelines for the diagnosis and molecular evaluation of specific variant forms of RTT were developed. Interpretation These revised criteria provide clarity regarding the key features required for the diagnosis of RTT and reinforce the concept that RTT is a clinical diagnosis based on distinct clinical criteria, independent of molecular findings. We recommend that these criteria and guidelines be utilized in any proposed clinical research. PMID:21154482

Most electromagnetic (EM) geophysical methods focus on the electrical properties of rocks and sediments to determine reliable images of the subsurface, images routinely used in a broad range of applications. Often laboratory measurements of the same EM properties return equivocal results that are difficult to reconcile with observations obtained by EM imaging techniques. These inconsistencies lead to major interpretation problems. Different numerical approaches have been investigated in order to understand the consequences of the presence or absence of interconnected networks of fractures and pores on EM field measurements. These networks have a crucial effect on the EM field measurements, given that they can be permeated by conductive fluids that enhance the conductivity measurements of the whole environment. Most of the above-mentioned studies restrict their examination to direct current (DC) sources only. Bearing in mind that the time-varying nature of natural electromagnetic sources plays a major role in field measurements, we numerically model the effects of such EM sources on the conductivity measured on the surface of a randomly generated three-dimensional body buried in a uniform conductivity host by simulating magnetotelluric (MT) station measurements on top of the target itself. As a second experiment we simulated a DC measurement of the target bulk conductivity. The spatial distribution and shape of the conductor network in fact allows the propagation of time-varying EM fields by induction, leading the two different methods to measure different numerical values for the bulk of the same physical property. We have compared the results from the simulated measurements obtained considering time-varying and DC sources with the electrical conductivity predicted by both Hashin-Shtrikman (HS) bounds and Archie's Law, and we have compared these results with statistical properties of the models themselves. Our results suggest that for time

Broadband (0.1-20 Hz) synthetic seismograms for finite-fault sources were produced for a model where stress drop is constant with seismic moment to see if they can match the magnitude dependence and distance decay of response spectral amplitudes found in the Next Generation Attenuation (NGA) relations recently developed from strong-motion data of crustal earthquakes in tectonically active regions. The broadband synthetics were constructed for earthquakes of M 5.5, 6.5, and 7.5 by combining deterministic synthetics for plane-layered models at low frequencies with stochastic synthetics at high frequencies. The stochastic portion used a source model where the Brune stress drop of 100 bars is constant with seismic moment. The deterministic synthetics were calculated using an average slip velocity, and hence, dynamic stress drop, on the fault that is uniform with magnitude. One novel aspect of this procedure is that the transition frequency between the deterministic and stochastic portions varied with magnitude, so that the transition frequency is inversely related to the rise time of slip on the fault. The spectral accelerations at 0.2, 1.0, and 3.0 sec periods from the synthetics generally agreed with those from the set of NGA relations for M 5.5-7.5 for distances of 2-100 km. At distances of 100-200 km some of the NGA relations for 0.2 sec spectral acceleration were substantially larger than the values of the synthetics for M 7.5 and M 6.5 earthquakes because these relations do not have a term accounting for Q. At 3 and 5 sec periods, the synthetics for M 7.5 earthquakes generally had larger spectral accelerations than the NGA relations, although there was large scatter in the results from the synthetics. The synthetics showed a sag in response spectra at close-in distances for M 5.5 between 0.3 and 0.7 sec that is not predicted from the NGA relations.
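The merging step described above, where deterministic low-frequency and stochastic high-frequency synthetics are combined at a magnitude-dependent transition frequency, can be sketched as complementary crossover weights. This is an illustrative sketch only: the cosine taper, its width, and the unit proportionality constant between transition frequency and inverse rise time are assumptions, not the authors' exact filters.

```python
import math

# Illustrative sketch (assumed taper shape, not the paper's exact filters):
# complementary crossover weights used to merge a deterministic low-frequency
# synthetic with a stochastic high-frequency synthetic at a transition
# frequency f_t taken here as the inverse of the slip rise time.

def transition_frequency(rise_time_s):
    """Transition frequency inversely related to rise time; the
    proportionality constant of 1.0 is an assumption for illustration."""
    return 1.0 / rise_time_s

def crossover_weights(f, f_t, width=0.5):
    """Cosine-taper weights over [f_t*(1-width), f_t*(1+width)].
    Returns (w_low, w_high) with w_low + w_high == 1 at every frequency."""
    f1, f2 = f_t * (1.0 - width), f_t * (1.0 + width)
    if f <= f1:
        w_low = 1.0
    elif f >= f2:
        w_low = 0.0
    else:
        w_low = 0.5 * (1.0 + math.cos(math.pi * (f - f1) / (f2 - f1)))
    return w_low, 1.0 - w_low

f_t = transition_frequency(rise_time_s=1.0)   # e.g. ~1 s rise time -> 1 Hz
for f in (0.1, 0.5, 1.0, 2.0, 10.0):
    w_lo, w_hi = crossover_weights(f, f_t)
    print(f, round(w_lo, 3), round(w_hi, 3))
```

Because the weights sum to one at every frequency, larger-magnitude events (longer rise times) automatically receive more of their bandwidth from the deterministic synthetic.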

, then the T_eff ≈ 300 K cloud-free model fluxes at K and W2 are too faint by 0.5-1.0 mag. We will address this discrepancy in our next generation of models, which will incorporate water clouds and mixing.

This mixed method investigation included a quasi-experiment examining if revision instruction enhanced the substantive revising behavior of 15 English language learner (ELL) and multilingual 10th grade students enrolled in an English class for underperforming students in comparison to 14 non-ELL and multilingual students from the same class who…

This study examines the effectiveness of written error corrections versus reformulations of second language learners' writing as two means of improving learners' grammatical accuracy on a three-stage composition-comparison-revision task. Concurrent verbal protocols were employed during the comparison stage in order to study the learners' reported…

When dealing with complex conceptual systems, low-prior-knowledge learners develop fragmentary and incorrect understanding. To learn complex topics deeply, these learners have to (a) monitor understanding to detect flaws and (b) generate explanations to revise and repair the flaws. In this research we explored whether the detection of a flaw in…

This publication is a complete revision of an earlier booklet brought on by the dramatic changes in the energy outlook of the United States that occurred in 1973 and 1974. The purpose of this document is to inform the public on the overall U.S. energy situation, in particular electricity generated from nuclear reactors and other sources. The…

Our enhanced iRRL approach greatly facilitates genotyping-by-sequencing and thus direct estimates of allele frequencies. Our direct comparison of three commonly used SNP callers emphasizes the need to question the accuracy of SNP and genotype calling, as we obtained considerably different SNP datasets depending on caller algorithms, sequencing depths and filtering criteria. These differences affected scans for signatures of natural selection, but will also exert undue influences on demographic inferences. This study presents the first effort to generate a population genomic dataset for wild-born orangutans with known population provenance. PMID:24405840

A numerical investigation of thermal non-equilibrium flows requires species-specific relaxation rates, which are often calculated using the Landau-Teller model. This model requires collision-specific relaxation times, which can be computed using Millikan and White's empirical formula. The sets of collision-pair coefficients used in this formula are assessed here. The focus of the investigation lies on their performance in hypersonic low-temperature (300-2,500 K) flows that occur at shock-tunnel nozzle exits or in supersonic combustion ramjets (scramjets) before combustion. Two experimental validation cases are chosen: a shock-tunnel nozzle and a sharp cone in hypersonic cross-flow. A comparison of the experimentally measured vibrational temperatures at the nozzle exit against numerical data shows large discrepancies for two commonly used coefficient sets. A revised set of coefficients is proposed that greatly improves the agreement between the numerical and experimental results. Furthermore, the numerically generated shock shape over the sharp cone using the revised set of coefficients correlates well with the experimental measurements.
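The Millikan-White correlation mentioned above can be written down directly. The sketch below uses the standard published correlation constants (1.16e-3, 0.015, 18.42) with the N2-N2 reduced mass and characteristic vibrational temperature as example inputs; it is illustrative only and does not use the revised coefficient set proposed in the paper.

```python
import math

# Illustrative sketch of the Millikan-White correlation for the vibrational
# relaxation time of a collision pair (the empirical formula assessed in the
# text). Constants are the standard correlation values; the N2-N2 pair data
# (reduced mass, characteristic temperature) are example inputs.

def millikan_white_tau(T, p_atm, mu_amu, theta_v):
    """Vibrational relaxation time (s) at temperature T (K), pressure p (atm).

    mu_amu  -- reduced mass of the collision pair (amu)
    theta_v -- characteristic vibrational temperature of the molecule (K)
    """
    a = 1.16e-3 * math.sqrt(mu_amu) * theta_v ** (4.0 / 3.0)
    b = 0.015 * mu_amu ** 0.25
    p_tau = math.exp(a * (T ** (-1.0 / 3.0) - b) - 18.42)   # atm*s
    return p_tau / p_atm

# N2-N2: mu = 28*28/(28+28) = 14 amu, theta_v = 3395 K
for T in (300.0, 1000.0, 2500.0):
    print(T, millikan_white_tau(T, p_atm=1.0, mu_amu=14.0, theta_v=3395.0))
```

The relaxation time falls steeply with temperature and scales inversely with pressure, which is why coefficient errors matter most in the low-temperature nozzle-exit regime the paper focuses on.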

A risk-adapted treatment strategy is mandatory for myelodysplastic syndromes (MDS). We refined the World Health Organization (WHO)-classification-based Prognostic Scoring System (WPSS) by determining the impact of the newer clinical and cytogenetic features, and we compared its prognostic power to that of the revised International Prognostic Scoring System (IPSS-R). A population of 5326 untreated patients with MDS was considered. We analyzed single WPSS parameters and confirmed that the WHO classification and severe anemia provide important prognostic information in MDS. A strong correlation was found between the WPSS including the new cytogenetic risk stratification and the WPSS adopting the original criteria. We then compared the WPSS with the IPSS-R prognostic system. A highly significant correlation was found between the WPSS and IPSS-R risk classifications. Discrepancies did occur among lower-risk patients in whom the number of dysplastic hematopoietic lineages as assessed by morphology did not reflect the severity of peripheral blood cytopenias and/or increased marrow blast count. Moreover, severe anemia has a higher prognostic weight in the WPSS than in the IPSS-R model. Overall, both systems represent well the prognostic risk of MDS patients defined by WHO morphologic criteria. This study provides relevant information for the implementation of risk-adapted strategies in MDS. PMID:25721895

This paper explores the annotation and classification of students' revision behaviors in argumentative writing. A sentence-level revision schema is proposed to capture why and how students make revisions. Based on the proposed schema, a small corpus of student essays and revisions was annotated. Studies show that manual annotation is reliable with…

A committee composed of members of The American Dietetic Association and the American Diabetes Association has revised the Exchange Lists for Meal Planning. Changes were made, as deemed necessary, on the basis of nutritional recommendations for persons with diabetes as understood in 1986. Major changes include rewriting the text to make it more useful in the education of persons with diabetes; changing the order of the exchange lists to emphasize a high-carbohydrate, high-fiber diet, as well as to better reflect the order of foods in menu planning; adding symbols to foods high in fiber and sodium; changing nutritive values for the starch/bread and fruit lists; adding lists of combination foods, free foods, and foods recommended only for occasional use; developing a data base; and initiating a plan for field testing and evaluation. The committee also developed a simplified meal planning tool, Healthy Food Choices, to be used for initial or "survival" level education. In poster format, foods are grouped by calories into six food groups. Approximate portion sizes of commonly used foods are listed. Blank lines are provided for the nutrition counselor to write in a suggested menu or meal plan for the client. Because the booklet does not use the word "diabetes" specifically, it is appropriate as a general teaching tool. PMID:3794130

After pressure from university administrators, the Office of Management and Budget (OMB) has issued a new plan for saving money on research overhead costs, in place of a controversial proposal that was originally published in February 1986 (Eos, May 20, 1986, p. 481). The agency made the new plan more palatable to administrators and faculty by choosing to cap the rate of reimbursement for the activity that researchers say they find among the most difficult to document: the time they spend on administration of federally sponsored grants and contracts. An amendment to a bill signed by President Ronald Reagan on July 2 might force OMB to make additional concessions to colleges and universities. How much money the federal government would save under this policy is a matter of dispute. The agency's revisions to OMB Circular A-21, “Cost Principles for Educational Institutions,” call for fixing the reimbursement rate at 3% of modified total direct costs for departmental administration work done by “department heads, directors of divisions, faculty, and professional staff.” The 3% figure represents about half of the current national average rate of reimbursement for these costs and would lead to federal government savings of $100 million a year, according to OMB.

In this paper, two bi-directional DC-DC converters for a 1 MW next-generation BTB system of a distribution system, as applied in Japan, are presented and compared with respect to design, efficiency and power density. One DC-DC converter applies commercially available Si devices and the other a high-voltage SiC switch, which consists of a SiC JFET cascode (MOSFET + 1 JFET) in series with five SiC JFETs. In the comparison, the high-frequency, high-voltage transformer, which ensures galvanic isolation and is a core element of the DC-DC converter, is also examined in detail by analytic calculations and FEM simulations. To validate the analytical considerations, a 20 kW SiC DC-DC converter has been designed in detail. Measurement results for the switching and conduction losses have been acquired for the SiC system and also for a Si system in order to calculate the losses of the scaled 1 MW system.

The binding properties of avidin, streptavidin, NeutrAvidin, and an antibiotin antibody to a biotinylated lipid bilayer were compared using second-harmonic generation. Protein binding assays were performed on a planar supported lipid bilayer of 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) containing 4 mol % biotinylated-cap-1,2-dioleoyl-sn-glycero-3-phosphoethanolamine (biotin-cap-DOPE). The equilibrium binding affinities of these biotin-protein interactions were determined, revealing the relative energetic contributions for each protein to the biotinylated lipid ligand. The results show that the binding affinities of avidin, streptavidin, and NeutrAvidin for biotin were all strengthened by protein-protein interactions, but the stronger protein-protein interactions observed for streptavidin and NeutrAvidin make their binding more energetically favorable. It was also shown that NeutrAvidin has the highest degree of nonspecific adsorption to a pure DOPC bilayer, compared to avidin and streptavidin. In addition, the biotin-binding affinity of the antibiotin antibody was found to be of the same order of magnitude as that of avidin, streptavidin, and NeutrAvidin. These findings provide important new insights into these biotin-bound protein complexes commonly used in several bioanalytical applications. PMID:22122646

A EUROMET collaborative project has been set up between Instituto Nacional de Técnica Aeroespacial (INTA) and E+E ELEKTRONIK Ges.m.b.H, the two designated laboratories of the Spanish and Austrian National Metrology Institutes, Centro Español de Metrología (CEM) and Bundesamt für Eich- und Vermessungswesen (BEV), respectively. The objective of the project is to provide INTA with a new standard that covers the dew-point temperature range from -27°C to +90°C with a gas flow of up to 5 L·min-1 in the “two-pressure” mode, extended to 95°C when operated as a continuous-flow “single-pressure” generator, and to investigate the importance of the enhancement factors in the uncertainty estimations used in support of the participants’ calibration and measurement capabilities (CMC) (The CIPM Mutual Recognition Arrangement, http://www.bipm.fr/en/cipm-mra/). The equivalence of the Spanish and Austrian national standards is also to be evaluated, further supporting the outcomes of the Key Comparisons in which both have already participated. The preliminary results obtained to date are reported and discussed in the context of the project and the consistency of the declared CMCs.

We conducted a systematic review to compare resistance to third-generation cephalosporins (TGCs) in Shigella strains between Europe-America and Asia-Africa from 1998 to 2012, based on a literature search of computerized databases. In Asia-Africa, the prevalence of resistance of total and different subtypes to ceftriaxone, cefotaxime and ceftazidime increased markedly, with a total prevalence of resistance of up to 14·2% [95% confidence interval (CI) 3·9-29·4], 22·6% (95% CI 4·8-48·6) and 6·2% (95% CI 3·8-9·1) during 2010-2012, respectively. By contrast, resistance rates to these TGCs in Europe-America remained relatively low, at less than 1·0% during the 15 years. A noticeable finding was that certain countries, both in Europe-America and Asia-Africa, had a rapidly rising trend in the prevalence of resistance of S. sonnei, which even outnumbered S. flexneri in some periods. Moreover, comparison between countries showed that currently the most serious problems concerning resistance to these TGCs appear in Vietnam (especially for ceftriaxone), China (especially for cefotaxime) and Iran (especially for ceftazidime). These data suggest that monitoring of the drug resistance of Shigella strains should be strengthened and that rational use of antibiotics is required. PMID:25553947

Information on the transfer of radionuclides to fruits was almost absent in the former TRS 364 "Handbook of parameter values for the prediction of radionuclide transfer in temperate environments". The revision of the Handbook, carried out under the IAEA Programme on Environmental Modelling for RAdiation Safety (EMRAS), takes into account the information generated in the years following the Chernobyl accident and the knowledge produced under the IAEA BIOMASS (Biosphere Modelling and Assessment) Programme in the years 1997-2000. This paper describes the most important processes concerning the behaviour of radionuclides in fruits reported in the IAEA TRS 364 Revision and provides recommendations for research and modelling. PMID:19027202

In this paper, a revised optimal homotopy asymptotic method (OHAM) is applied to derive an explicit analytical solution of the Falkner-Skan wedge flow problem. Comparisons of the present study with numerical solutions using a fourth-order Runge-Kutta scheme and with the analytical solution using HPM-Padé of order [4/4] and order [13/13] show that the revised form of OHAM is an extremely effective analytical technique. PMID:27186477
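The fourth-order Runge-Kutta benchmark mentioned above is the standard shooting solution of the Falkner-Skan boundary-value problem. The sketch below is an illustrative baseline implementation, not the paper's OHAM derivation; the integration domain and bisection bracket are assumed values.

```python
# Illustrative sketch (not the paper's OHAM solution): fourth-order
# Runge-Kutta shooting for the Falkner-Skan equation
#   f''' + f*f'' + beta*(1 - f'^2) = 0,  f(0) = f'(0) = 0,  f'(inf) = 1,
# solved by bisecting on the unknown wall value f''(0).

def rhs(y, beta):
    f, fp, fpp = y
    return (fp, fpp, -f * fpp - beta * (1.0 - fp * fp))

def integrate(s, beta, eta_max=10.0, n=2000):
    """RK4 march from eta = 0 with f''(0) = s; returns f'(eta_max)."""
    h = eta_max / n
    y = (0.0, 0.0, s)
    for _ in range(n):
        k1 = rhs(y, beta)
        k2 = rhs(tuple(y[i] + 0.5 * h * k1[i] for i in range(3)), beta)
        k3 = rhs(tuple(y[i] + 0.5 * h * k2[i] for i in range(3)), beta)
        k4 = rhs(tuple(y[i] + h * k3[i] for i in range(3)), beta)
        y = tuple(y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return y[1]

def shoot(beta, lo=0.0, hi=2.0, tol=1e-10):
    """Bisect on f''(0) until the far-field condition f'(eta_max) = 1 holds."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if integrate(mid, beta) < 1.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(shoot(beta=0.0))   # Blasius limit; classical value of f''(0) is ~0.4696
```

For beta = 0 the equation reduces to the Blasius-type flat-plate case, whose wall shear f''(0) is a well-tabulated reference value, making it a convenient check on the integrator.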

This article summarizes the updates and revisions to the second edition of the BI-RADS MRI lexicon. A new feature in the lexicon is background parenchymal enhancement and its descriptors. Another major focus is on revised terminology for masses and non-mass enhancement. A section on breast implants and associated lexicon terms has also been added. Because diagnostic breast imaging increasingly includes multimodality evaluation, the new edition of the lexicon also contains revised recommendations for combined reporting with mammography and ultrasound if these modalities are included as comparison, and clarification on the use of final assessment categories in MR imaging. PMID:23928239

Objective(s): Post-treatment evaluations by CT/MRI (based on the International Working Group/Cotswolds meeting guidelines) and PET (based on Revised Response Criteria), were examined in terms of progression-free survival (PFS) in patients with malignant lymphoma (ML). Methods: 79 patients, undergoing CT/MRI for the examination of suspected lesions and whole-body PET/CT before and after therapy, were included in the study during April 2007-January 2013. The relationship between post-treatment evaluations (CT/MRI and PET) and PFS during the follow-up period was examined, using Kaplan-Meier survival analysis. The patients were grouped according to the histological type into Hodgkin’s lymphoma (HL), diffuse large B-cell lymphoma (DLBCL), and other histological types. The association between post-treatment evaluations (PET or PET combined with CT/MRI) and PFS was examined separately. Moreover, the relationship between disease recurrence and serum soluble interleukin-2 receptor, lactic dehydrogenase, and C-reactive protein levels was evaluated before and after the treatment. Results: Patients with incomplete remission on both CT/MRI and PET had a significantly shorter PFS, compared to patients with complete remission on both CT/MRI and PET and those exhibiting incomplete remission on CT/MRI and complete remission on PET (P<0.001). Post-treatment PET evaluations were strongly correlated with patient outcomes in cases with HL or DLBCL (P<0.01) and other histological types (P<0.001). In patients with HL or DLBCL, incomplete remission on both CT/MRI and PET was associated with a significantly shorter PFS, compared to patients with complete remission on both CT/MRI and PET (P<0.05) and those showing incomplete remission on CT/MRI and complete remission on PET (P<0.01). In patients with other histological types, incomplete remission on both CT/MRI and PET was associated with a significantly shorter PFS, compared to cases with complete remission on both CT/MRI and PET (P<0

Background and purpose - The surgical treatment of periprosthetic knee infection is generally either a partial revision procedure (open debridement and exchange of the tibial insert) or a 2-stage exchange arthroplasty procedure. We describe the failure rates of these procedures on a nationwide basis. Patients and methods - 105 partial revisions (100 patients) and 215 potential 2-stage revision procedures (205 patients) performed due to infection from July 1, 2011 to June 30, 2013 were identified from the Danish Knee Arthroplasty Register (DKR). Failure was defined as surgically related death ≤ 90 days postoperatively, re-revision due to infection, or not reaching the second stage for a planned 2-stage procedure within a median follow-up period of 3.2 (2.2-4.2) years. Results - The failure rate of the partial revisions was 43%. 71 of the partial revisions (67%) were revisions of a primary prosthesis with a re-revision rate due to infection of 34%, as compared to 55% in revisions of a revision prosthesis (p = 0.05). The failure rate of the 2-stage revisions was 30%. Median time interval between stages was 84 (9-597) days. 117 (54%) of the 2-stage revisions were revisions of a primary prosthesis with a re-revision rate due to infection of 21%, as compared to 29% in revisions of a previously revised prosthesis (p = 0.1). Overall postoperative mortality was 0.6% in high-volume centers (> 30 procedures within 2 years) as opposed to 7% in the remaining centers (p = 0.003). Interpretation - The failure rates of 43% after the partial revision procedures and 30% after the 2-stage revisions in combination with the higher mortality outside high-volume centers call for centralization and reconsideration of surgical strategies. PMID:26900908

Purpose: To demonstrate that a “5DCT” technique which utilizes fast helical acquisition yields the same respiratory-gated images as a commercial technique for regular, mechanically produced breathing cycles. Methods: Respiratory-gated images of an anesthetized, mechanically ventilated pig were generated using a Siemens low-pitch helical protocol and 5DCT for a range of breathing rates and amplitudes and with standard and low-dose imaging protocols. 5DCT reconstructions were independently evaluated by measuring the distances between tissue positions predicted by a 5D motion model and those measured using deformable registration, as well as by reconstructing the originally acquired scans. Discrepancies between the 5DCT and commercial reconstructions were measured using landmark correspondences. Results: The mean distance between model-predicted tissue positions and deformably registered tissue positions over the nine datasets was 0.65 ± 0.28 mm. Reconstructions of the original scans were on average accurate to 0.78 ± 0.57 mm. Mean landmark displacement between the commercial and 5DCT images was 1.76 ± 1.25 mm, while the maximum lung tissue motion over the breathing cycle had a mean value of 27.2 ± 4.6 mm. An image composed of the average of 30 deformably registered images acquired with a low-dose protocol had 6 HU image noise (single standard deviation) in the heart versus 31 HU for the commercial images. Conclusions: An end-to-end evaluation of the 5DCT technique was conducted through landmark-based comparison to breathing-gated images acquired with a commercial protocol under highly regular ventilation. The techniques were found to agree to within 2 mm for most respiratory phases and most points in the lung. PMID:26133604

This revision of the classification of eukaryotes, which updates that of Adl et al. (2005), retains an emphasis on the protists and incorporates changes since 2005 that have resolved nodes and branches in phylogenetic trees. Whereas the previous revision was successful in re-introducing name stability to the classification, this revision provides a classification for lineages that were then still unresolved. The supergroups have withstood phylogenetic hypothesis testing with some modifications, but despite some progress, problematic nodes at the base of the eukaryotic tree still remain to be statistically resolved. Looking forward, subsequent transformations to our understanding of the diversity of life will be from the discovery of novel lineages in previously under-sampled areas and from environmental genomic information. PMID:23020233

Background and purpose In Norway, the proportion of revision knee arthroplasties increased from 6.9% in 1994 to 8.5% in 2011. However, there is limited information on the epidemiology and causes of subsequent failure of revision knee arthroplasty. We therefore studied survival rate and determined the modes of failure of aseptic revision total knee arthroplasties. Method This study was based on 1,016 aseptic revision total knee arthroplasties reported to the Norwegian Arthroplasty Register between 1994 and 2011. Revisions done for infections were not included. Kaplan-Meier and Cox regression analyses were used to assess the survival rate and the relative risk of re-revision with all causes of re-revision as endpoint. Results 145 knees failed after revision total knee arthroplasty. Deep infection was the most frequent cause of re-revision (28%), followed by instability (26%), loose tibial component (17%), and pain (10%). The cumulative survival rate for revision total knee arthroplasties was 85% at 5 years, 78% at 10 years, and 71% at 15 years. Revision total knee arthroplasties with exchange of the femoral or tibial component exclusively had a higher risk of re-revision (RR = 1.7) than those with exchange of the whole prosthesis. The risk of re-revision was higher for men (RR = 2.0) and for patients aged less than 60 years (RR = 1.6). Interpretation In terms of implant survival, revision of the whole implant was better than revision of 1 component only. Young age and male sex were risk factors for re-revision. Deep infection was the most frequent cause of failure of revision of aseptic total knee arthroplasties. PMID:25267502

Unicompartmental arthroplasty is an efficient and approved treatment option of unicompartmental arthritis of the knee, being performed with increasing frequency worldwide. Compared to total knee replacement, there are several advantages such as faster recovery, lower blood loss, better functional outcome and lower infection rates. However, higher revision rates are a frequent argument against the use of unicompartmental arthroplasty. The following article gives an overview of failure mechanisms and strategies for revision arthroplasty. This article is based on a selective literature review including PubMed and relevant print media. Our own clinical experience is considered as well. PMID:25209015

This report is a revision of the previous Hanford Environmental Dose Reconstruction (HEDR) Project modeling approach report. This revised report describes the methods used in performing scoping studies and estimating final radiation doses to real and representative individuals who lived in the vicinity of the Hanford Site. The scoping studies and dose estimates pertain to various environmental pathways during various periods of time. The original report discussed the concepts under consideration in 1991. The methods for estimating dose have been refined as understanding of existing data, the scope of pathways, and the magnitudes of dose estimates were evaluated through scoping studies.

STREAM is an emergency response code that predicts downstream pollutant concentrations for releases from the SRS area to the Savannah River. The STREAM code uses an algebraic equation to approximate the solution of the one-dimensional advective transport differential equation. This approach generates spurious oscillations in the concentration profile when modeling long-duration releases. To improve the capability of the STREAM code to model long-term releases, its calculation module was replaced by the WASP5 code. WASP5 is a US EPA water quality analysis program that simulates one-dimensional pollutant transport through surface water. Test cases were performed to compare the revised version of STREAM with the existing version. For continuous releases, results predicted by the revised STREAM code agree with physical expectations. The WASP5 code was benchmarked with the US EPA 1990 and 1991 dye tracer studies, in which the transport of the dye was measured from its release at the New Savannah Bluff Lock and Dam downstream to Savannah. The peak concentrations predicted by WASP5 agreed with the measurements within ±20.0%. The transport times of the dye concentration peak predicted by WASP5 agreed with the measurements within ±3.6%. These benchmarking results demonstrate that STREAM should be capable of accurately modeling releases from SRS outfalls.
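The spurious-oscillation problem described above is a generic property of non-monotone discretizations of the 1-D advective transport equation. The following sketch is illustrative only (not the STREAM or WASP5 source); it contrasts a centered-in-space explicit scheme, which oscillates for a step-like long-duration release, with a monotone first-order upwind scheme. Grid size, velocity, and step counts are arbitrary assumed values.

```python
# Illustrative sketch (not the STREAM/WASP5 source): why a naive
# discretization of the 1-D advection equation  dC/dt + u*dC/dx = 0
# produces spurious oscillations for a long-duration (step-like) release,
# while a first-order upwind scheme stays monotone.

def advect(n_cells=200, n_steps=120, cfl=0.5, scheme="upwind"):
    c = [0.0] * n_cells
    for _ in range(n_steps):
        c[0] = 1.0                       # continuous release at the inflow
        new = c[:]
        for i in range(1, n_cells - 1):
            if scheme == "upwind":       # one-sided difference: monotone
                new[i] = c[i] - cfl * (c[i] - c[i - 1])
            else:                        # centered in space (FTCS): oscillates
                new[i] = c[i] - 0.5 * cfl * (c[i + 1] - c[i - 1])
        c = new
    return c

up = advect(scheme="upwind")
ct = advect(scheme="centered")
print("upwind   min/max:", min(up), max(up))   # stays within [0, 1]
print("centered min/max:", min(ct), max(ct))   # under/overshoots the data
```

The upwind result remains bounded by the physical concentration range, while the centered scheme produces the growing under- and overshoots that motivated replacing the original calculation module.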

The purpose of this paper is to report on revised cross-section evaluations for 17 nuclides that have been prepared for ENDF/B-VI Revision 2. The nuclides considered include five fission products and various isotopes of cadmium and hafnium. The previous ENDF/B-VI evaluations for these 17 nuclides were carried over from ENDF/B-V and were completed in the 1974-1980 time period. By utilizing the experimental data that have become available since 1980, the revised evaluations will result in significant improvements in the evaluated nuclear data files. The primary emphasis was placed on the resolved and unresolved resonance regions, but new experimental data were also used to improve the cross sections for energies above the unresolved resonance region. Negative elastic scattering cross sections were encountered in some of the previous evaluations; since the revised evaluations use multilevel Breit-Wigner (MLBW) parameters rather than single-level Breit-Wigner (SLBW) parameters, this problem is eliminated.

Many educators teach students who are reluctant about the revision process in writing. However, this longitudinal study follows a group of students from kindergarten through 8th grade who embraced the importance of the revision process. (Contains 8 figures.)

The following details all additions and revisions made to the DHQ nutrient and food database. This revision history is provided as a reference for investigators who may have performed analyses with a previous release of the database.

This paper describes CLIPS-R, a theory revision system for the revision of CLIPS rule-bases. CLIPS-R may be used for a variety of knowledge-base revision tasks, such as refining a prototype system, adapting an existing system to slightly different operating conditions, or improving an operational system that makes occasional errors. We present a description of how CLIPS-R revises rule-bases, and an evaluation of the system on three rule-bases.

Objective To validate the targeted next-generation sequencing (NGS) platform-Ion Torrent PGM for KRAS exon 2 and expanded RAS mutations detection in formalin-fixed paraffin-embedded (FFPE) colorectal cancer (CRC) specimens, with comparison of Sanger sequencing and ARMS-Scorpion real-time PCR. Setting Beijing, China. Participants 51 archived FFPE CRC samples (36 men, 15 women) were retrospectively randomly selected and then checked by an experienced pathologist for sequencing based on histological confirmation of CRC and availability of sufficient tissue. Methods RAS mutations were detected in the 51 FFPE CRC samples by PGM analysis, Sanger sequencing and the Therascreen KRAS assay, respectively. Agreement among the 3 methods was assessed. Assay sensitivity was further determined by sequencing serially diluted DNA from FFPE cell lines with known mutation statuses. Results 13 of 51 (25.5%) cases had a mutation in KRAS exon 2, as determined by PGM analysis. PGM analysis showed 100% (51/51) concordance with Sanger sequencing (κ=1.000, 95% CI 1 to 1) and 98.04% (50/51) agreement with the Therascreen assay (κ=0.947, 95% CI 0.844 to 1) for detecting KRAS exon 2 mutations, respectively. The only discrepant case harboured a KRAS exon 2 mutation (c.37G>T) that was not covered by the Therascreen kit. The dilution series experiment results showed that PGM was able to detect KRAS mutations at a frequency of as low as 1%. Importantly, RAS mutations other than KRAS exon 2 mutations were also detected in 10 samples by PGM. Furthermore, mutations in other CRC-related genes could be simultaneously detected in a single test by PGM. Conclusions The targeted NGS platform is specific and sensitive for KRAS exon 2 mutation detection and is appropriate for use in routine clinical testing. Moreover, it is sample saving and cost-efficient and time-efficient, and has great potential for clinical application to expand testing to include mutations in RAS and other CRC-related genes. PMID
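The concordance statistics reported above can be reproduced with a short Cohen's kappa calculation. The 2x2 counts used below are inferred from the abstract (12 samples mutant by both PGM and Therascreen, 1 mutant by PGM only, 38 wild-type by both) and should be treated as illustrative assumptions rather than the study's raw data.

```python
# Illustrative sketch: Cohen's kappa for agreement between two mutation-calling
# methods applied to the same samples. The counts below are inferred from the
# abstract and are assumptions for illustration.

def cohens_kappa(both_pos, a_only, b_only, both_neg):
    n = both_pos + a_only + b_only + both_neg
    p_observed = (both_pos + both_neg) / n      # raw agreement
    p_a = (both_pos + a_only) / n               # method A positive rate
    p_b = (both_pos + b_only) / n               # method B positive rate
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (p_observed - p_chance) / (1 - p_chance)

# PGM vs Therascreen on 51 FFPE samples: 50/51 concordant calls
kappa = cohens_kappa(both_pos=12, a_only=1, b_only=0, both_neg=38)
print(round(kappa, 3))   # ~0.947, matching the reported value
```

A single discordant call out of 51 lowers kappa noticeably more than raw agreement because kappa discounts the agreement expected by chance alone.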

The identification and elimination of persistently infected (PI) cattle are the most effective measures for controlling bovine pestiviruses, including bovine viral diarrhea virus (BVDV) and the emerging HoBi-like viruses. Here, colostrum-deprived calves persistently infected with HoBi-like pestivirus (HoBi-like PI calves) were generated and sampled (serum, buffy coat, and ear notches) on the day of birth (DOB) and weekly for 5 consecutive weeks. The samples were subjected to diagnostic tests for BVDV—two reverse transcriptase PCR (RT-PCR) assays, two commercial real-time RT quantitative PCR (RT-qPCR) assays, two antigen capture enzyme-linked immunosorbent assays (ACE), and immunohistochemistry (IHC)—and to HoBi-like virus-specific RT-PCR and RT-qPCR assays. The rate of false negatives varied among the calves. The HoBi-like virus-specific RT-PCR detected HoBi-like virus in 83%, 75%, and 87% of the serum, buffy coat, and ear notch samples, respectively, while the HoBi-like RT-qPCR detected the virus in 83%, 96%, and 62%, respectively. In comparison, the BVDV RT-PCR test had a higher rate of false negatives in all tissue types, especially for the ear notch samples (missing detection in at least 68% of the samples). The commercial BVDV RT-qPCRs and IHC detected 100% of the ear notch samples as positive. While ACE based on the BVDV glycoprotein Erns detected infection in at least 87% of ear notches, no infections were detected using NS3-based ACE. The BVDV RT-qPCR, ACE, and IHC yielded higher levels of detection than the HoBi-like virus-specific assays, although the lack of differentiation between BVDV and HoBi-like viruses would make these tests of limited use for the control and/or surveillance of persistent HoBi-like virus infection. An improvement in HoBi-like virus tests is required before a reliable HoBi-like PI surveillance program can be designed. PMID:25122860

This revised and updated book is written to inform citizens about the nature, causes, and effects of air pollution. It is written in terms familiar to the layman, with the purpose of providing knowledge and motivation to spur community action on clean air policies. Numerous charts and drawings are provided to support the discussion of air pollution…

The problems that arise when reviewing another surgeon's work, the financial aspects of revision surgery, and the controversies that present in marketing and advertising will be explored. The technological advances of computer imaging and the Internet have introduced new problems that require our additional consideration. PMID:22872552

Written on the basis of senior Indian verbal relatings collected over a 23-year span, this revised edition on modern Indian psychology incorporates suggestions from Indian students and their teachers, Indian and non-Indian social studies experts, and other Indian people. The book contains 6 major divisions: (1) "Culture and Indian Values" relates…

Designed to assist educators in developing or revising school/library copyright policy, this packet provides the following materials: (1) a viewer's guide for the film "Copyright Law: What Every School, College, and Public Library Should Know"; (2) a statement of the primary missions of the Association for Information Media and Equipment (AIME);…

Fatigue cracking has been a principal cause of damage to North Sea structures and consequently considerable attention has been given to the development of guidance for the prediction of fatigue performance. The fatigue guidance of the Offshore Safety Division of the Health and Safety Executive (HSE) was recently revised and published, following a significant offshore industry review in the period 1987 to 1990, and is based on the results of a considerable amount of research and development work on the fatigue behavior of welded tubular and plated joints. As a result of this review, the revised fatigue guidance incorporates several new clauses and recommendations. The revised recommendations apply to joint classification, basic design S-N curves for welded joints and cast or forged steel components, the thickness effect, the effects of environment and the treatment of low and high stress ranges. Additionally, a new appendix on the derivation of stress concentration factors is included. The new clauses cover high strength steels, bolts and threaded connectors, moorings, repaired joints and the use of fracture mechanics analysis. This paper presents an overview of the revisions to the fatigue guidance, the associated background technical information and aspects of the fatigue behavior of offshore structures which are considered to require further investigation. 67 refs., 7 figs., 8 tabs.

Bootstrap loader and mode-control options for the Adage Graphics Computer System significantly simplify operating procedures. Normal load and control functions are performed quickly and easily from the control console. Operating characteristics of the revised system include greatly increased speed, convenience, and reliability.

The Financial Accounting Standards Board (FASB) has recently issued Statement of Financial Accounting Standards No. 141 (Revised 2007), Business Combinations. The objective of this Statement is to improve the relevance, representational faithfulness, and comparability of reported information about a business combination and its effects. This Statement…

A study investigated the effects of using a computer image projected on a large screen to teach revision to college students. Subjects, 19 students at DePauw University, enrolled in a writing intensive literature course in a Writing across the Curriculum program, were divided into test and control groups. It was hypothesized that the modeling of…

Examines three contemporary taxonomies of revision as proposed by Wallace Hildick, Lester Faigley and Stephen Witte, and Sondra Perl. Uses literary and cultural theory to bridge the gap between these theories and students' revision practices. Argues that while revision may be prescriptive, it must also be subordinate to the writer's intentions and…

... HOMELAND SECURITY INSURANCE AND HAZARD MITIGATION National Flood Insurance Program IDENTIFICATION AND MAPPING OF SPECIAL HAZARD AREAS § 65.7 Floodway revisions. (a) General. Floodway data is developed as part... revised floodway on the same topographic map used for the delineation of the revised flood boundaries....

In the years since the Williamsport Area Community College's Graphic Arts Program was last revised, the graphic arts industry has been changed by an influx of new technologies. The graphic arts program and curriculum were revised to provide graduates with skills required by the industry. The objectives of this revision were to (1) identify…

The use of cortical windows for revision elbow arthroplasty has not previously been widely reported. Their use aids safe revision of a well fixed humeral prosthesis and can be used in the setting of dislocation, periprosthetic fracture or aseptic loosening of the ulnar component. We describe our technique and results of cortical windows in the distal humerus for revision elbow arthroplasty surgery. PMID:27583011

...The Commission is proposing revisions to its rules of practice related to Postal Service requests for an advisory opinion from the Commission on a nationwide (or substantially nationwide) change in the nature of service. The proposed revisions are intended to expedite issuance of advisory opinions while preserving due process. The Commission invites public comment on the proposed revisions to…

Explains how a teacher uses the acronym SOAR (Sentences Organized and Revised) as the core of a game designed to motivate a class to revise their work through the promise of popcorn, free time, or snacks for their revision work. Describes a worksheet that forces students to pay attention to various parts of their paper. (TB)

This month, the director of the Magnet Recognition Program® provides an in-depth overview of the Magnet Recognition Program's Application Manual revision process. The history of the 2005 Manual revision, an evidence-based review of the literature, and revisions to the 2008 Manual are key elements of this article. PMID:22968115

As the prime contractor to the Department of Energy Idaho Operations Office (DOE-ID), Lockheed Martin Idaho Technologies Company (LMITCO) provides comprehensive waste management services to all contractors at the Idaho National Engineering and Environmental Laboratory (INEEL) through the Waste Management (WM) Program. This Program Management Plan (PMP) provides an overview of the Waste Management Program objectives, organization and management practices, and scope of work. This document will be reviewed at least annually and updated as needed to address revisions to the Waste Management Program's objectives, organization and management practices, and scope of work. The Waste Management Program is managed by the LMITCO Waste Operations Directorate. The Waste Management Program manages transuranic, low-level, mixed low-level, hazardous, special-case, and industrial wastes generated at or transported to the INEEL.

Context: Reconstruction of the anterior cruciate ligament (ACL) is one of the most common surgical procedures, with more than 200,000 ACL tears occurring annually. Although primary ACL reconstruction is a successful operation, success rates still range from 75% to 97%. Consequently, several thousand revision ACL reconstructions are performed annually and are unfortunately associated with inferior clinical outcomes when compared with primary reconstructions. Evidence Acquisition: Data were obtained from peer-reviewed literature through a search of the PubMed database (1988-2013) as well as from textbook chapters and surgical technique papers. Study Design: Clinical review. Level of Evidence: Level 4. Results: The clinical outcomes after revision ACL reconstruction are largely based on level IV case series. Much of the existing literature is heterogenous with regard to patient populations, primary and revision surgical techniques, concomitant ligamentous injuries, and additional procedures performed at the time of the revision, which limits generalizability. Nevertheless, there is a general consensus that the outcomes for revision ACL reconstruction are inferior to primary reconstruction. Conclusion: Excellent results can be achieved with regard to graft stability, return to play, and functional knee instability but are generally inferior to primary ACL reconstruction. A staged approach with autograft reconstruction is recommended in any circumstance in which a single-stage approach results in suboptimal graft selection, tunnel position, graft fixation, or biological milieu for tendon-bone healing. Strength-of-Recommendation Taxonomy (SORT): Good results may still be achieved with regard to graft stability, return to play, and functional knee instability, but results are generally inferior to primary ACL reconstruction: Level B. PMID:25364483

The SI second of the atomic clock was calibrated to match the Ephemeris Time (ET) second in a mutual four-year effort between the National Physical Laboratory (NPL) and the United States Naval Observatory (USNO). Ephemeris time is 'clocked' by observing the elapsed time it takes the Moon to cross two positions (usually occultations of stars relative to a position on Earth) and dividing that time span into the predicted seconds according to the lunar equations of motion. The last revision of the equations of motion was the Improved Lunar Ephemeris (ILE), which was based on E. W. Brown's lunar theory. Brown classically derived the lunar equations from a purely Newtonian gravity with no relativistic compensations. However, ET is very theory dependent and is affected by relativity, which was not included in the ILE. To investigate the relativistic effects, a new, noninertial metric for a gravitated, translationally accelerated and rotating reference frame has three sets of contributions, namely (1) Earth's velocity, (2) the static solar gravity field and (3) the centripetal acceleration from Earth's orbit. This last term can be characterized as a pseudogravitational acceleration. This metric predicts a time dilation calculated to be -0.787481 seconds in one year. The effect of this dilation would make the ET timescale run slower than had been originally determined. Interestingly, this value is within 2 percent of the average leap second insertion rate, which is the result of the divergence between International Atomic Time (TAI) and Earth's rotational time, called Universal Time (UT or UT1). Because the predictions themselves are significant, regardless of the comparison to TAI and UT, the authors will rederive the lunar ephemeris model in the manner of Brown, with the relativistic time dilation effects from the new metric, to determine a revised, relativistic ephemeris timescale that could be used to determine UT free of leap second adjustments.
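
The closing comparison is simple arithmetic: the predicted dilation is said to be within 2% of the average leap-second insertion rate. A minimal check, assuming an illustrative rate of 0.8 s/yr (that figure is a placeholder for this sketch, not a value taken from the source):

```python
# Percent difference between the predicted dilation magnitude and an
# assumed average leap-second insertion rate. The 0.8 s/yr rate is an
# illustrative assumption, not a figure from the source.

predicted_dilation = 0.787481   # seconds per year (magnitude)
assumed_leap_rate = 0.8         # seconds per year, hypothetical

percent_diff = abs(predicted_dilation - assumed_leap_rate) / assumed_leap_rate * 100
print(f"{percent_diff:.2f}%")  # 1.56%
assert percent_diff < 2.0      # consistent with the "within 2 percent" claim
```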

Livestock play an important role in agricultural carbon cycling through consumption of biomass and emissions of methane. Quantification and spatial distribution of methane and carbon dioxide produced by livestock is needed to develop bottom-up estimates for carbon monitoring. These estimates serve as stand-alone international emissions estimates, as input to global emissions modeling, and as comparisons or constraints to flux estimates from atmospheric inversion models. Recent results for the US suggest that the 2006 IPCC default coefficients may underestimate livestock methane emissions. In this project, revised coefficients were calculated for cattle and swine in all global regions, based on reported changes in body mass, quality and quantity of feed, milk production, and management of living animals and manure for these regions. New estimates of livestock methane and carbon dioxide emissions were calculated using the revised coefficients and global livestock population data. Spatial distribution of population data and associated fluxes was conducted using the MODIS Land Cover Type 5, version 5.1 (i.e. MCD12Q1 data product), and a previously published downscaling algorithm for reconciling inventory and satellite-based land cover data at 0.05 degree resolution. Preliminary results for 2013 indicate greater emissions than those calculated using the IPCC 2006 coefficients. Global total enteric fermentation methane increased by 6%, while manure management methane increased by 38%, with variation among species and regions resulting in improved spatial distributions of livestock emissions. These new estimates of total livestock methane are comparable to other recently reported studies for the entire US and the State of California. These new regional/global estimates will improve the ability to reconcile top-down and bottom-up estimates of methane production as well as provide updated global estimates for use in development and evaluation of Earth system models.
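
The bottom-up estimates described above reduce, at their core, to population × per-head emission coefficient, summed over species. The following is a minimal sketch with entirely hypothetical populations and coefficients, showing how revised coefficients propagate to a regional total; none of the numbers are the study's values:

```python
# Tier-1-style livestock emission estimate: emissions = population x
# per-head coefficient. All numbers below are hypothetical placeholders.

def regional_emissions(populations, coefficients):
    """Sum population (head) x emission factor (kg CH4/head/yr) -> Gg CH4/yr."""
    kg = sum(populations[sp] * coefficients[sp] for sp in populations)
    return kg / 1e6  # kg -> Gg

populations = {"cattle": 90e6, "swine": 60e6}    # head, hypothetical
ipcc_default = {"cattle": 53.0, "swine": 1.5}    # kg CH4/head/yr, hypothetical
revised = {"cattle": 56.2, "swine": 1.6}         # hypothetical revised factors

base = regional_emissions(populations, ipcc_default)
new = regional_emissions(populations, revised)
print(f"change: {100 * (new - base) / base:.1f}%")  # prints "change: 6.0%"
```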

Limb girdle muscular dystrophies (LGMD) are a heterogeneous group of inherited progressive muscle disorders affecting predominantly the shoulder and pelvic girdle muscles. They present both with autosomal dominant and autosomal recessive patterns of inheritance. Recent development, including results from Next Generation Sequencing technology, expanded the number of recognised forms. Therefore a revised genetic classification that takes into account the novel entities is needed, allowing clinicians and researchers to refer to a common nomenclature for diagnostic and research purposes. PMID:25323878

At the 2000 Fall Meeting in December, the AGU Council reaffirmed a revised version of AGU's position statement, “Meeting the Challenges of Natural Hazards.” This position was first adopted in 1996. The revised version (see accompanying text box) contains the same message as the original, but in concise language more easily understood by policy-makers and other non-scientists.The statement calls for more research in the geophysical processes to help understand the nature of natural hazards. However, it also clearly indicates that research alone will not improve the ability of society to withstand a natural disaster. Multidisciplinary approaches involving groups as disparate as builders, insurers, and relief organizations are required to improve mitigation efforts worldwide. The policy statement also emphasizes the need to communicate the results of scientific research to the public, especially those communities situated in areas particularly susceptible to extreme natural hazards.

This Reference Book contains a current copy of the Clean Air Act, as amended, and those regulations that implement the statute and appear to be most relevant to DOE activities. The document is provided to DOE and contractor staff for informational purposes only and should not be interpreted as legal guidance. This Reference Book has been completely revised and is current through February 15, 1994.

With a rapidly changing dietary environment, the dietary guidelines for Koreans were revised and relevant action guides were developed. First, the Dietary Guidelines Advisory Committee was established with experts and government officials from the fields of nutrition, preventive medicine, health promotion, agriculture, education and environment. The Committee set dietary goals for Koreans, aiming at a better nutritional state for all, after a thorough review and analysis of recent information related to the nutritional status and problems of the Korean population, changes in food production/supply, disease patterns, health policy and agricultural policy. Revised dietary guidelines were then proposed to accomplish these goals, in addition to 6 different sets of dietary action guides to accommodate the specific nutrition and health problems of respective age groups. Subsequently, these guidelines and guides were subjected to focus group review, consumer perception surveys, and a public hearing for general and professional comments. Lastly, the language was clarified in terms of public understanding and phraseology. The revised dietary guidelines for Koreans are as follows: eat a variety of grains, vegetables, fruits, fish, meat, poultry and dairy products; choose fewer salt-preserved foods, and use less salt when you prepare foods; increase physical activity for a healthy weight, and balance what you eat with your activity; enjoy every meal, and do not skip breakfast; if you drink alcoholic beverages, do so in moderation; prepare foods properly, and order sensible amounts; enjoy our rice-based diet. PMID:18296301

The results for an experimental study of a one wavelength MHD induction generator operating on a liquid flow are presented. First the design philosophy and the experimental generator design are summarized, including a description of the flow loop and instrumentation. Next a Fourier series method of treating the fact that the magnetic flux density produced by the stator is not a pure traveling sinusoid is described and some results summarized. This approach appears to be of interest after revisions are made, but the initial results are not accurate. Finally, some of the experimental data is summarized for various methods of excitation.

The third in a series of evaluative reports on "Me and My Environment", a group-centered biological sciences program for educable mentally handicapped (EMH) adolescents, provides information about the curriculum design, the analysis and revision of curriculum materials, the gathering and processing of field test data, and a comparison of two…

This paper presents a fully automated pipeline for thickness profile evaluation and analysis of the human corpus callosum (CC) in 3D structural T1-weighted magnetic resonance images. The pipeline performs the following sequence of steps: midsagittal plane extraction, CC segmentation, quality control, thickness profile generation, statistical analysis and results figure generation. The CC segmentation algorithm is a novel technique based on template-based initialisation with refinement using mathematical morphology operations. The algorithm is demonstrated to have high segmentation accuracy when compared to manual segmentations on two large, publicly available datasets. Additionally, the resultant thickness profiles generated from the automated segmentations are shown to be highly correlated with those generated from the ground truth segmentations. The manual editing tool provides a user-friendly environment for correction of errors and quality control. Statistical analysis and a novel figure generator are provided to facilitate group-wise morphological analysis of the CC. PMID:24968872
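
The morphological refinement step mentioned above can be illustrated with a toy binary opening (erosion followed by dilation), which removes small spurious components from a segmentation mask. This is a pure-Python sketch for illustration only; it does not reproduce the paper's actual implementation:

```python
# Binary opening with a 3x3 structuring element: erosion removes thin or
# isolated foreground, dilation restores the surviving shapes.

def erode(mask):
    h, w = len(mask), len(mask[0])
    return [[all(mask[i + di][j + dj]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1))
             if 0 < i < h - 1 and 0 < j < w - 1 else False
             for j in range(w)] for i in range(h)]

def dilate(mask):
    h, w = len(mask), len(mask[0])
    return [[any(mask[i + di][j + dj]
                 for di in (-1, 0, 1) for dj in (-1, 0, 1)
                 if 0 <= i + di < h and 0 <= j + dj < w)
             for j in range(w)] for i in range(h)]

def opening(mask):
    return dilate(erode(mask))

# A 3x3 blob plus an isolated one-pixel speck: opening keeps the blob
# and removes the speck.
mask = [[0] * 7 for _ in range(7)]
for i in range(1, 4):
    for j in range(1, 4):
        mask[i][j] = 1
mask[5][5] = 1
opened = opening(mask)
print(opened[5][5], opened[2][2])  # False True
```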

This document provides a formal, logical explanation of the parameters selected for the Figure of Merit algorithm used to evaluate lunar regolith simulant. The objectives, requirements, assumptions, and analysis behind the parameters are provided. From NASA's objectives for lunar simulants, a requirement is derived to verify and validate simulant performance against lunar regolith. This requirement leads to a specification that comparative measurements be taken the same way on the regolith and the simulant, which in turn leads to a set of 9 criteria with which to evaluate comparative measurements. Many of the potential measurements of interest are not defensible under these criteria; for example, many geotechnical properties of interest were not explicitly measured during Apollo, and they can only be measured in situ on the Moon. A 2005 workshop identified 32 properties of major interest to users (Sibille, Carpenter, Schlagheck, and French, 2006). Virtually all of these properties are tightly constrained, though not predictable, if just four parameters are controlled. Three of them, composition, size, and shape, are recognized as being definable at the particle level; the fourth, density, is a bulk property. In recent work a fifth parameter, spectroscopy, has been identified and will need to be added to future releases of the Figure of Merit.

In the past three decades, automated program verification has undoubtedly been one of the most successful contributions of formal methods to software development. However, when verification of a program against a logical specification discovers bugs, manual manipulation of the program is needed to repair it. Thus, given the numerous unverified and uncertified legacy programs in virtually any organization, tools that enable engineers to automatically verify and subsequently fix existing programs are highly desirable. In addition, since the requirements of software systems often evolve during the software life cycle, incomplete specifications have become a customary fact in many design and development teams. Automated techniques that revise existing programs according to new specifications are therefore of great assistance to designers, developers, and maintenance engineers. As a result, incorporating program synthesis techniques, in which an algorithm generates a program that is correct by construction, seems to be a necessity. The notion of manual program repair described above becomes even more complex when programs are integrated with large collections of sensors and actuators in hostile physical environments, in so-called cyber-physical systems. When such systems are safety- or mission-critical (e.g., avionics systems), it is essential that the system react to physical events such as faults, delays, signals, and attacks so that the system specification is not violated. In fact, since it is impossible to anticipate all such physical events at design time, it is highly desirable to have automated techniques that revise programs with respect to newly identified physical events according to the system specification.

Background The health-related quality of life (HRQoL) is currently weighted more heavily when evaluating health status, particularly regarding medical treatments and interventions. However, it is rarely used by physicians to compare responsiveness. Additionally, responsiveness estimates derived by the Harris Hip Score (HHS) and the Short Form 36 (SF-36) before and after revision total hip arthroplasty (THA) have not been clinically compared. This study compared responsiveness and minimal important differences (MID) between the HHS and the SF-36. Methods All revision THA patients completed the disease-specific HHS and the generic SF-36 before and 6 months after surgery. Scores using these instruments were interpreted by generalized estimating equation (GEE) before and after revision THA. The bootstrap estimation and modified Jackknife test were used to derive 95% confidence intervals for differences in the responsiveness estimates. Results Comparisons of effect size (ES), standardized response means (SRM), relative efficiency (RE) (>1) and MID indicated that the responsiveness of the HHS was superior to that of the SF-36. The ES and SRM for pain and physical functions in the HHS were significantly larger than those of the SF-36 (p < 0.001). Conclusion The data in this study indicated that clinicians and health researchers should weight disease-specific measures more heavily than generic measures when evaluating treatment outcomes. PMID:21070675
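
The two responsiveness statistics compared above have simple definitions: the effect size (ES) divides the mean score change by the standard deviation of the baseline scores, while the standardized response mean (SRM) divides it by the standard deviation of the change scores. A sketch with synthetic pre/post scores (not the study's data):

```python
# ES and SRM for a paired pre/post outcome measure. Scores are synthetic,
# chosen only to illustrate the two denominators.

from statistics import mean, stdev

def effect_size(baseline, followup):
    change = [f - b for b, f in zip(baseline, followup)]
    return mean(change) / stdev(baseline)   # denominator: baseline SD

def srm(baseline, followup):
    change = [f - b for b, f in zip(baseline, followup)]
    return mean(change) / stdev(change)     # denominator: SD of change

pre = [40, 45, 50, 55, 60, 42, 48]   # synthetic baseline scores
post = [70, 72, 78, 80, 85, 69, 75]  # synthetic 6-month scores
print(f"ES = {effect_size(pre, post):.2f}, SRM = {srm(pre, post):.2f}")
```

A larger value means a more responsive instrument; comparing the same statistic across two instruments on the same patients is what the study's RE ratio formalizes.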

This report documents the structural qualification for the existing equipment when subjected to seismic loading in the Plutonium Storage Complex. It replaces in entirety Revision 0 and reconciles the U.S. Department of Energy (DOE) comments on Revision 0. The Complex consists of 2736-Z Building (plutonium storage vault), 2736-ZA Building (vault ventilation equipment building), and 2736-ZB Building (shipping/receiving, repackaging activities). The existing equipment structurally qualified in this report are the metal storage racks for 7-inch and lard cans in room 2 of Building 2736-Z; the cubicles, can holders and pedestals in rooms 1, 3, and 4 of Building 2736-Z; the ventilation duct including exhaust fans/motors, emergency diesel generator, and HEPA filter housing in Building 2736-ZA; the repackaging glovebox in Building 2736-ZB; and the interface duct between Buildings 2736-Z and 2736-ZA.

The Expedited Technology Demonstration Project Plan, MWNT Revised Baseline 3.0, replaces and significantly modifies the current baseline. The revised plan will focus efforts specifically on the demonstration of an integrated Molten Salt Oxidation (MSO) system. In addition to the MSO primary unit, offgas, and salt recycle subsystems, the demonstrations will include the generation of robust final forms from process mineral residues. A simplified process flow chart for the expedited demonstration is shown. To minimize costs and to accelerate the schedule for deployment, the integrated system will be staged in an existing facility at LLNL equipped to handle hazardous and radioactive materials. The MSO systems will be activated in FY97, followed by the activation of final forms in FY98.

The vagus nerve stimulator (VNS) has been shown to provide a safe, albeit costly, treatment for intractable epilepsy. We aimed to analyze the incidence, timing, and clinical/demographic associations of revision surgery post-VNS implantation in epilepsy patients. The Thomson Reuters MarketScan database, containing data from 23-50 million individuals, was used. Epilepsy patients receiving VNS implantations from 2003 to 2009 were identified by Current Procedural Terminology and International Classification of Diseases Ninth Revision codes. Incidence and timing of subsequent implant-related surgeries were recorded. Events were described using time-to-event methodology, with Kaplan-Meier failure estimation and Cox proportional hazard models adjusted for clinical/demographic factors. In 1234 patients, the average incidences of revision surgeries over 6 years of follow-up were <1%, <3%, 4-10%, and <1% for VNS electrode revision, battery revision/removal, battery replacement/implantation, and infection washout, respectively. For electrode revision and battery revision/replacement, the incidence was higher in the first year, and for battery replacement in later years. Age, sex, insurance type, and geographic region did not significantly impact event occurrence. Implant-related revision surgeries are rare. Some events occur more often in certain follow-up years than others; none are significantly impacted by age, sex, insurance type, or geographic region. The most common reason for revision was battery replacement several years after VNS placement. PMID:27050913
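
The Kaplan-Meier failure estimation used above can be sketched with a toy product-limit estimator; the event times (years) and censoring flags below are synthetic, not the study's data:

```python
# Product-limit (Kaplan-Meier) estimate of revision-free survival.
# events[i] = 1 if a revision occurred at times[i], 0 if censored.

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each observed event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e == 1)  # events at t
        c = sum(1 for tt, e in data if tt == t)             # leaving risk set
        if d:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= c
        while i < len(data) and data[i][0] == t:
            i += 1
    return curve

times = [1, 2, 2, 3, 4, 5, 5, 6]    # synthetic follow-up times (years)
events = [1, 1, 0, 1, 0, 1, 0, 0]   # synthetic revision/censoring flags
print(kaplan_meier(times, events))
```

The failure probability at each time is simply 1 minus the survival estimate, which is how "incidence over 6 years of follow-up" figures are typically read off such a curve.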

The burnup code system Step-Wise Burnup Analysis Code System (SWAT) is revised for use in burnup credit analysis. An important feature of the revised SWAT is that its functions are achieved by calling validated neutronics codes without any changes to the original codes. This feature is realized with a system function of the operating system, which allows the revised SWAT to be independent of the development status of each code. A package of the revised SWAT contains the latest libraries based on JENDL-3.2 and the second version of the JNDC FP library. These libraries allow us to analyze burnup problems, such as an analysis of postirradiation examination (PIE), using the latest evaluated data of not only cross sections but also fission yields and decay constants. Another function of the revised SWAT is a library generator for the ORIGEN2 code, which is one of the most reliable burnup codes. ORIGEN2 users can obtain almost the same results with the revised SWAT using the library prepared by this function. The validation of the revised SWAT is conducted by calculation of the Organization for Economic Cooperation and Development/Nuclear Energy Agency burnup credit criticality safety benchmark Phase I-B and analyses of PIE data for spent fuel from Takahama Unit 3. The analysis of PIE data shows that the revised SWAT can predict the isotopic composition of the main uranium and plutonium isotopes within a deviation of 5% from experimental results taken from UO2 fuels of 17 × 17 fuel assemblies. Many results for fission products, including samarium, are within a deviation of 10%. This means that the revised SWAT has high reliability in predicting the isotopic composition of pressurized water reactor spent fuel.
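
The key design choice above, driving validated codes unchanged through an operating-system call, can be sketched as a subprocess driver: write an input deck, launch the external executable, check its exit status, and read the results. The stand-in "code" here is just the Python interpreter echoing its input deck; the file names and deck format are hypothetical, not SWAT's actual interface:

```python
# Driver pattern for chaining unmodified external codes via OS processes.
# The invoked executable and the deck format are hypothetical stand-ins.

import os
import subprocess
import sys
import tempfile

def run_external_code(argv, input_text, workdir):
    """Run one step of the chain as a child process; fail loudly on error."""
    deck = os.path.join(workdir, "input.dat")
    with open(deck, "w") as f:
        f.write(input_text)
    result = subprocess.run(argv + [deck], capture_output=True, text=True,
                            cwd=workdir)
    if result.returncode != 0:
        raise RuntimeError(f"step failed: {result.stderr}")
    return result.stdout

with tempfile.TemporaryDirectory() as tmp:
    # Stand-in "code": the Python interpreter echoing the deck's first line.
    stub = [sys.executable, "-c",
            "import sys; print(open(sys.argv[1]).readline().strip())"]
    out = run_external_code(stub, "burnup step 1\n", tmp)
    print(out.strip())  # burnup step 1
```

Because each code runs as an opaque child process, the driver depends only on each code's input and output files, not on its internals, which is the independence property the abstract highlights.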

Non-fusion spinal implants are designed to reduce the commonly occurring risks and complications of spinal fusion surgery, e.g. long duration of surgery, high blood loss, screw loosening and adjacent segment disease, by dynamic or movement preserving approaches. This principle could be shown for interspinous spacers, cervical and lumbar total disc replacement and dynamic stabilization; however, due to the continuing high rate of revision surgery, the indications for surgery require as much attention and evidence as comparative data on the surgical technique itself. PMID:26779646

The suggestions and comments on the UDC 52 revision, which was carried out by George Wilkins during 1995-1998, are presented:
* to assign the subclass ``522. Theoretical Astrophysics'' for general aspects and methodological problems of this science;
* to transform the subdivision ``524.8 The Universe. Metagalaxy. Cosmology.'' into a subclass 525, which should include theories of cosmology and observational confirmations of cosmological conclusions, relativistic astrophysics and gravitation theory, and high-energy and nuclear astrophysics;
* to introduce new computer-readable compilations of astronomical data (catalogues, atlases, various inquiry information, numerical and graphical data) into the corresponding subdivisions.

Note taking has been categorized as a two-stage process: the recording of notes and the review of notes. We contend that note taking might best involve a three-stage process where the missing stage is revision. This study investigated the benefits of revising lecture notes and addressed two questions: First, is revision more effective than…

Objectives To examine mortality and revision rates among patients with osteoarthritis undergoing hip arthroplasty and to compare these rates between patients undergoing cemented or uncemented procedures and to compare outcomes between men undergoing stemmed total hip replacements and Birmingham hip resurfacing. Design Cohort study. Setting National Joint Registry. Population About 275 000 patient records. Main outcome measures Hip arthroplasty procedures were linked to the time to any subsequent mortality or revision (implant failure). Flexible parametric survival analysis methods were used to analyse time to mortality and also time to revision. Comparisons between procedure groups were adjusted for age, sex, American Society of Anesthesiologists (ASA) grade, and complexity. Results As there were large baseline differences in the characteristics of patients receiving cemented, uncemented, or resurfacing procedures, unadjusted comparisons are inappropriate. Multivariable survival analyses identified a higher mortality rate for patients undergoing cemented compared with uncemented total hip replacement (adjusted hazard ratio 1.11, 95% confidence interval 1.07 to 1.16); conversely, there was a lower revision rate with cemented procedures (0.53, 0.50 to 0.57). These translate to small predicted differences in population averaged absolute survival probability at all time points. For example, compared with the uncemented group, at eight years after surgery the predicted probability of death in the cemented group was 0.013 higher (0.007 to 0.019) and the predicted probability of revision was 0.015 lower (0.012 to 0.017). In multivariable analyses restricted to men, there was a higher mortality rate in the cemented group and the uncemented group compared with the Birmingham hip resurfacing group. In terms of revision, the Birmingham hip resurfacings had a similar revision rate to uncemented total hip replacements. Both uncemented total hip replacements and Birmingham hip

Cross-section and related data generated by a modified version of the WIMS-D4 code for both plate- and rod-type research reactor fuel are compared with Monte Carlo data from the VIM code. The modifications include the introduction of a capability for generating broad-group microscopic data and for writing selected microscopic cross sections to an ISOTXS file format. The original WIMS-D4 library, with H in ZrH and ¹⁶⁶Er and ¹⁶⁷Er added, gives processed microscopic cross-section data that agree well with VIM ENDF/B-V-based data for both plate and TRIGA cells. Additional improvements are in progress, including the capability to generate an ENDF/B-V-based library.

The primary changes that have been made in this revision reflect the relocation of the Waste Certification Official (WCO) organizationally from the Quality Services Division (QSD) into the Laboratory Waste Services (LWS) Organization. Additionally, the responsibilities for program oversight have been differentiated between the QSD and LWS. The intent of this effort is to ensure that those oversight functions which properly belonged to the WCO moved with that function, while retaining an independent oversight function outside of the LWS Organization so that the potential for introducing organizational bias on programmatic and technical issues is minimized. The Waste Certification Program (WCP) itself has been modified to allow the waste certification function to be performed by any of the personnel within the LWS Waste Acceptance/Certification functional area. However, a single individual may not perform both the technical waste acceptance review and the final certification review on the same 2109 data package; those reviews must be performed by separate individuals in a peer review process. There will continue to be a designated WCO who will have lead programmatic responsibility for the WCP, exercise overall program operational oversight, and determine the overall requirements of the certification program. The quality assurance organization will perform independent, outside oversight to ensure that any organizational bias does not degrade the integrity of the waste certification process. The core elements of the previous WCP have been retained; however, the terms and process structure have been modified. There are now two ''control points'': (1) the data package enters the waste certification process with the signature of the Generator Interface/Generator Interface Equivalent (GI/GIE), and (2) the package is ''certified'', thus exiting the process. The WCP contains three steps, (1) the technical review for waste acceptance, (2) a review of the

Abstract: Microemboli are implicated in neurological injury; therefore, the extracorporeal circuit (ECC) should not generate microbubbles or transmit introduced air. The venous reservoir is the first component in the ECC designed to remove introduced air. The purpose of this study was to investigate the relative safety of two kinds of adult venous reservoirs—the closed soft-shell venous reservoir (SSVR [Medtronic CBMVR 1600]) and the open hard-shell venous reservoir (HSVR [Affinity NT CVR])—in terms of microbubble generation and introduced-air transmission. A recirculating in vitro circuit was used to compare the two reservoirs, with the SSVR further assessed in a fully closed or partially open state. Microbubbles were counted using a Hatteland CMD10 Doppler in the outflow of the reservoirs before (microbubble generation) and after infusing 20 mL/min of air into the venous line (microbubble transmission) while altering pump flow rates (3 L/min; 5 L/min) and reservoir prime (200 mL; 700 mL). Negligible bubble generation was noted in the SSVRs at both flow rates and either reservoir volume. However, microbubble generation was significant in the HSVR at the higher flow rate of 5 L/min and lower reservoir volume of 200 mL. When infusing air, a flow of 3 L/min was associated with insignificant to small increases in microbubble transmission for all reservoirs. Conversely, infusing air while flowing at 5 L/min was associated with significantly more microbubble transmission for all reservoirs at both low and high reservoir volumes. The SSVR is as safe as the HSVR in microbubble handling, since the generation and transmission of microbubbles by the SSVR is no greater than that by the HSVR over a range of prime volumes and flow rates. As both reservoirs transmitted microbubbles at higher pump flow rates regardless of reservoir volumes, it is important to eliminate venous air entrainment during cardiopulmonary bypass. PMID:22164449

Southeastern Power Administration has prepared an Environmental Assessment (DOE/EA-0811) evaluating the proposed revision of the Cumberland System Power Marketing Policy and Subsequent Contracts. The findings of the Environmental Assessment (EA) are that the only significant change from the existing policy will be the term of the contracts executed under the revised policy; there will be no addition of major new generation resources; there will be no new loads; and there will be no major changes in the operating parameters of power generation resources. This paper discusses the need for action, the alternative, and the environmental impacts.

We have recently proposed a new controller based on equilibrium point analysis for model power systems. In this paper, the Japanese standard one-machine infinite-bus system model is first formulated and its equilibrium points are analyzed. Next, complementary control inputs for the AVR and GOV with limiters of the model system are determined on the basis of this analysis. Finally, it is shown that the unstable equilibrium point is eliminated by adding the proposed inputs, and that the critical clearing time is thereby improved in comparison with the PSS of the standard model.
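For the one-machine infinite-bus model referenced above, the equilibrium points of the classical swing equation satisfy Pm = Pmax·sin δ. A minimal stdlib-only sketch of locating the stable and unstable equilibria (the power values are invented for illustration, not the Japanese standard model parameters):

```python
import math

# Classical one-machine infinite-bus model: electrical power Pe = Pmax*sin(delta).
# Equilibria of the swing equation satisfy Pm = Pmax*sin(delta):
#   stable point    delta_s = asin(Pm/Pmax)
#   unstable point  delta_u = pi - delta_s
# A transient stability controller aims to keep the rotor angle away from delta_u.

def equilibrium_points(p_m, p_max):
    """Return (stable, unstable) rotor-angle equilibria in radians."""
    if not 0 < p_m < p_max:
        raise ValueError("no equilibrium: mechanical power must satisfy 0 < Pm < Pmax")
    delta_s = math.asin(p_m / p_max)
    delta_u = math.pi - delta_s
    return delta_s, delta_u

# Illustrative values: Pm = 0.8 pu, Pmax = 1.6 pu
d_s, d_u = equilibrium_points(p_m=0.8, p_max=1.6)
print(math.degrees(d_s), math.degrees(d_u))  # ≈ 30 and 150 degrees
```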

The patient had a total knee replacement for arthritis secondary to Still's disease performed 35 years earlier, with 20 years of good function followed by 15 years of progressively worsening knee pain. A revision was completed, which improved the patient's quality of life and objective knee scores, with an increase in Oxford Knee Score from 22 to 42 and in American Knee Society Score from 76 to 170. We discuss the technical aspects of revising this knee replacement, which is the oldest that we are aware of. The result has been a good recovery, the first available in the literature for future comparison. PMID:26055586

This Acceptance Test Report (ATR) documents the completed testing and inspection of the new portable generator. The testing and inspection verify that the generator provided by the vendor meets the requirements of specification WHC-S-0252, Revision 2. Various other documentation supporting the inspection and testing is attached.

The development of information systems using an engineering approach that uses both traditional programming techniques and fourth generation software tools is described. Fourth generation applications tools are used to quickly develop a prototype system that is revised as the user clarifies requirements. (MLW)

A study investigated factors in Chinese language maintenance among balanced and pseudo-bilinguals who are second-generation Chinese-Americans. Subjects were 12 fifth-grade students in a Chinese-language school; half were balanced bilinguals (proficient in both languages) and half were pseudo-bilinguals (those in whom skills are more developed in…

We compared acquisition of, and preference for, manual signing (MS), picture exchange (PE), and speech-generating devices (SGDs) in four children with autism spectrum disorders (ASD). Intervention was introduced across participants in a non-concurrent multiple-baseline design and acquisition of the three communication modes was compared in an…

The intent of this study is to discuss some of the many factors involved in the development of the design and layout of a steam turbo-generation unit as part of a modular Generation IV nuclear power plant. Of the many factors involved in the design and layout, this research covers feed water system layout and optimization issues. The research is arranged in the hope that it can be generalized to any Generation IV system that uses a steam-powered turbo-generation unit. The research is done using the ORCENT-II heat balance codes and the Salisbury methodology reviewed herein. The Salisbury methodology is applied to an original cycle design by Famiani for the Westinghouse IRIS, and the effects of parameter variation are studied. The vital parameters of the Salisbury methodology are the incremental heater surface capital cost (S) in $/ft², the value of incremental power (I) in $/kW, and the overall heat transfer coefficient (U) in Btu/ft²·°F·hr. Each is varied in order to determine the effects on the cycle's overall heat rate and output, as well as the heater surface areas. The effects of each are shown. The methodology is then used to compare the optimized original Famiani design, consisting of seven regenerative feedwater heaters, with an optimized new cycle concept, INRC8, containing four regenerative heaters. The results are shown. It can be seen that a trade between the complexity of the seven-stage regenerative Famiani cycle and the simplicity of the INRC8 cycle can be made. It is hoped that this methodology can be used to evaluate modularity through the value of the size and complexity of the system as well as its performance. It also shows the effectiveness of the Salisbury methodology in the optimization of regenerative cycles for such an evaluation.

The U.S. Nuclear Regulatory Commission on July 27, 2001 approved Revision 19 of the TRUPACT-II Safety Analysis Report (SAR) and the associated TRUPACT-II Authorized Methods for Payload Control (TRAMPAC). Key initiatives in Revision 19 included matrix depletion, unlimited mixing of shipping categories, a flammability assessment methodology, and an alternative methodology for the determination of flammable gas generation rates. All U.S. Department of Energy (DOE) sites shipping transuranic (TRU) waste to the Waste Isolation Pilot Plant (WIPP) were required to implement Revision 19 methodology into their characterization and waste transportation programs by May 20, 2002. An implementation process was demonstrated by the Rocky Flats Environmental Technology Site (RFETS) in Golden, Colorado. The three-part process used by RFETS included revision of the site-specific TRAMPAC, an evaluation of the contact-handled TRU waste inventory against the regulations in Revision 19, and design and development of software to facilitate future inventory analyses.

During incineration of waste in waste incineration plants, polluted flue gases are generated which have to be subjected to flue gas purification. Although the legal requirements are nearly unambiguous, the question of whether wet flue gas purification is to be performed in a sewage-free or sewage-generating manner is discussed controversially by experts in the Federal Republic of Germany. As a contribution to this discussion, material flow studies of sewage-free and sewage-generating flue gas purification processes in waste incineration plants were performed by ITAS in cooperation with ITC-TAB. The study covered three waste incineration plants, two of which were operated in a sewage-generating and one in a sewage-free manner. The data and information submitted by most of the plant operators are not sufficient for a comprehensive balancing of flue gas purification systems in waste incineration plants. For this reason, plant operation often is not optimally tailored to the substances prevailing. During operation, at least temporarily strong superstoichiometric dosage of auxiliary chemicals cannot be excluded. By means of plausibility assumptions and model calculations, closed balancing of most plants could be achieved. Moreover, it was demonstrated by the balancing of technical-scale waste incineration plants that the material flows in wet flue gas purification are less dependent on the design of the flue gas purification section (sewage-free/sewage-generating), but considerably affected by the operation of the flue gas purification system (e.g., volume of absorption agents used). Hence, material flows can be controlled in a certain range.

Describes the Speaking Test, which forms part of the revised First Certificate of English (FCE) examination of the University of Cambridge Local Examinations Syndicate. Discusses key revisions, including use of paired-testing format, and notes the role of the oral examiners. Considers why the new design provides improvements in the assessment of…

The Administration's May Revision of the 2012-2013 state budget addresses a $15.7 billion shortfall through funding shifts, cuts, and new revenue sources that place children squarely in harm's way. California's kids are already grossly underserved relative to the rest of the nation's children. If the May Revise budget is passed by the Legislature,…

... National Park Service Notification of Boundary Revision AGENCY: National Park Service, Interior. ACTION: Notification of boundary revision. SUMMARY: Notice is hereby given that the boundary of the Chesapeake and Ohio... within the Park's boundary will make significant contributions to the purposes for which the Park...

Objectives Around 1% of patients who have a hip replacement have deep prosthetic joint infection (PJI) afterwards. PJI is often treated with antibiotics plus a single revision operation (1-stage revision), or antibiotics plus a 2-stage revision process involving more than 1 operation. This study aimed to characterise the impact and experience of PJI and treatment on patients, including comparison of 1-stage with 2-stage revision treatment. Design Qualitative semistructured interviews with patients who had undergone surgical revision treatment for PJI. Patients were interviewed between 2 weeks and 12 months postdischarge. Data were audio-recorded, transcribed, anonymised and analysed using a thematic approach, with 20% of transcripts double-coded. Setting Patients from 5 National Health Service (NHS) orthopaedic departments treating PJI in England and Wales were interviewed in their homes (n=18) or at hospital (n=1). Participants 19 patients participated (12 men, 7 women, age range 56–88 years, mean age 73.2 years). Results Participants reported receiving between 1 and 15 revision operations after their primary joint replacement. Analysis indicated that participants made sense of their experience through reference to 3 key phases: the period of symptom onset, the treatment period and protracted recovery after treatment. By conceptualising their experience in this way, and through themes that emerged in these periods, they conveyed the ordeal that PJI represented. Finally, in light of the challenges of PJI, they described the need for support in all of these phases. 2-stage revision had greater impact on participants’ mobility, and further burdens associated with additional complications. Conclusions Deep PJI impacted on all aspects of patients’ lives. 2-stage revision had greater impact than 1-stage revision on participants’ well-being because the time in between revision procedures meant long periods of immobility and related psychological distress

Thermospheric wind data obtained from the Atmosphere Explorer E and Dynamics Explorer 2 satellites have been combined with wind data for the lower and upper thermosphere from ground-based incoherent scatter radar and Fabry-Perot optical interferometers to generate a revision (HWM90) of the HWM87 empirical model and extend its applicability to 100 km. Comparison of the various data sets with the aid of the model shows in general remarkable agreement, particularly at mid and low latitudes. The ground-based data allow modeling of seasonal/diurnal variations, which are most distinct at midlatitudes. While solar activity variations are now included, they are found to be small and not always very clearly delineated by the current data. They are most obvious at the higher latitudes. The model describes the transition from predominately diurnal variations in the upper thermosphere to semidiurnal variations in the lower thermosphere and a transition from summer to winter flow above 140 km to winter to summer flow below. Significant altitude gradients in the wind are found to extend to 300 km at some local times and pose complications for interpretation of Fabry-Perot observations.

To support the ever-increasing demand for high-speed optical communications, Nyquist spectral shaping serves as a promising technique to improve spectral efficiency (SE) by generating near-rectangular spectra with negligible crosstalk and inter-symbol interference in wavelength-division-multiplexed (WDM) systems. Compared with specially designed optical methods, DSP-based electrical filters are more flexible, as they can generate different filter shapes and modulation formats. However, such a transmitter-side pre-filtering approach is sensitive to the limited taps of the finite-impulse-response (FIR) filter, since the complexity of the required DSP and digital-to-analog converter (DAC) is limited by the cost and power consumption of the optical transponder. In this paper, we investigate the performance and complexity of transmitter-side FIR-based DSP with polarization-division-multiplexing (PDM) high-order quadrature-amplitude-modulation (QAM) formats. Our results show that Nyquist 64-QAM, 16-QAM, and QPSK WDM signals can be sufficiently generated by digital FIR filters with 57, 37, and 17 taps, respectively. Then we explore the effects of the required spectral pre-emphasis, bandwidth, and resolution on the performance of Nyquist-WDM systems. To obtain negligible OSNR penalty with a roll-off factor of 0.1, a two-channel-interleaved DAC requires a Gaussian electrical filter with a bandwidth of 0.4-0.6 times the symbol rate for PDM-64QAM, 0.35-0.65 times for PDM-16QAM, and 0.3-0.8 times for PDM-QPSK, with required DAC resolutions of 8, 7, and 6 bits, respectively. As a tradeoff, PDM-64QAM can be a promising candidate for SE improvement in next-generation optical metro networks.
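The transmitter-side FIR pre-filtering discussed here can be illustrated with a stdlib-only root-raised-cosine tap generator. The 37-tap, roll-off-0.1 setting mirrors the 16-QAM case in the text; the samples-per-symbol value and unit-peak normalization are assumptions of this sketch:

```python
import math

def rrc_taps(n_taps, beta, sps):
    """Root-raised-cosine FIR taps (standard closed form).
    n_taps: odd tap count; beta: roll-off factor; sps: samples per symbol.
    Taps are normalized to unit peak."""
    assert n_taps % 2 == 1, "use an odd tap count for a symmetric filter"
    taps = []
    for k in range(n_taps):
        t = (k - (n_taps - 1) / 2) / sps  # time in symbol periods
        if t == 0:
            h = 1 + beta * (4 / math.pi - 1)
        elif beta > 0 and abs(abs(t) - 1 / (4 * beta)) < 1e-12:
            # limit value where the generic denominator vanishes
            h = (beta / math.sqrt(2)) * (
                (1 + 2 / math.pi) * math.sin(math.pi / (4 * beta))
                + (1 - 2 / math.pi) * math.cos(math.pi / (4 * beta)))
        else:
            h = (math.sin(math.pi * t * (1 - beta))
                 + 4 * beta * t * math.cos(math.pi * t * (1 + beta))) \
                / (math.pi * t * (1 - (4 * beta * t) ** 2))
        taps.append(h)
    peak = max(taps)
    return [h / peak for h in taps]

# 37 taps, roll-off 0.1 (the 16-QAM setting), 2 samples per symbol (assumed)
taps = rrc_taps(37, beta=0.1, sps=2)
```

Applying this filter at the transmitter (and its match at the receiver) yields the near-rectangular Nyquist spectrum the abstract describes; fewer taps truncate the impulse response and degrade the spectral shape.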

This paper presents the approach taken to analyze the radiological consequences of a postulated main steam line break event, with one or more tube ruptures, for the Palo Verde Nuclear Generating Station. The analysis was required to support the restart of PVNGS Unit 2 following the steam generator tube rupture event on March 14, 1993 and to justify continued operation of Units 1 and 3. During the post-event evaluation, the NRC expressed concern that Unit 2 could have been operating with degraded tubes and that similar conditions could exist in Units 1 and 3. The NRC therefore directed that a safety assessment be performed to evaluate a worst case scenario in which a non-isolable main steam line break occurs, inducing one or more tube failures in the faulted steam generator. This assessment was to use the generic approach described in NUREG 1477, Voltage-Based Interim Plugging Criteria for Steam Generator Tubes - Task Group Report. An analysis based on the NUREG approach was performed but produced unacceptable results for off-site and control room thyroid doses. The NUREG methodology, however, does not account for plant thermal-hydraulic transient effects, system performance, or operator actions which could be credited to mitigate dose consequences. To deal with these issues, a more detailed analysis methodology was developed using a modified version of the Combustion Engineering Plant Analysis Code, which examines the dose consequences for a main steam line break transient with induced tube failures for a spectrum equivalent to 1 to 4 double-ended guillotine U-tube breaks. By incorporating transient plant system responses and operator actions, the analysis demonstrates that the off-site and control room dose consequences for a MSLBGTR can be reduced to acceptable limits. This analysis, in combination with other corrective and recovery actions, provided sufficient justification for continued operation of PVNGS Units 1 and 3, and for the subsequent restart of Unit 2.

The objective of this study was the selection and adaptation of mixed microbial cultures (MMCs) able to ferment crude glycerol generated from animal fat-based biodiesel and to produce building blocks and green chemicals. Various adaptation strategies were investigated for the enrichment of suitable and stable MMCs, trying to overcome inhibition problems and enhance substrate degradation efficiency as well as generation of soluble fermentation products. Repeated transfers in small batches and fed-batch conditions were applied, comparing the use of different inocula, growth media, and kinetic control. The adaptation of activated sludge inoculum was performed successfully and continued unhindered for several months. The best results showed a substrate degradation efficiency of almost 100% (about 10 g/L glycerol in 21 h), and different dominant metabolic products were obtained depending on the selection strategy (mainly 1,3-propanediol, ethanol, or butyrate). On the other hand, anaerobic sludge exhibited inactivation after a few transfers. To circumvent this problem, fed-batch mode was used as an alternative adaptation strategy, which led to effective substrate degradation and high 1,3-propanediol and butyrate production. Changes in microbial composition were monitored by means of Next Generation Sequencing, revealing a dominance of glycerol-consuming species such as Clostridium, Klebsiella, and Escherichia. PMID:26509171

The purpose of this study was to compare the activation and deactivation forces generated during first-order archwire deflections when different sizes and types of NiTi wires are paired with conventional and self-ligating brackets (SLBs) and to evaluate the rotational control between these same archwire and bracket combinations. Four maxillary premolar SLBs (Damon 3MX, SmartClip, Carriere, and In-Ovation R) and one conventional twin bracket (Victory) were paired with seven archwires [0.014, 0.016, 0.018, 0.016 × 0.022 Ultra Therm (thermal, Af 80-90°F), 0.016, 0.018 SPEED Supercable, and 0.017 × 0.025 Turbo]. A cantilever test design was used and 10 trials per bracket/archwire combination were performed. Load/deflection data were captured over 4 mm first-order archwire deflections. Forces generated were compared across all bracket/archwire combinations. Among thermal archwires, for a given deflection, forces increased with increasing archwire size. Supercable archwires displayed less force than their same-size thermal counterparts. The Turbo archwire generated force values between those of the 0.016 and 0.018 thermal archwires. Rotational control improved with increasing wire dimensions. For a given archwire size, rotational control among brackets generally ranked as follows: In-Ovation R > SmartClip > Carriere and Damon 3MX. PMID:22045693

Plasma-source-generated nitrogen fertilizer is compared to conventional nitrogen fertilizers in water for plant growth. Root and shoot sizes and weights are used to examine differences between plant treatment groups. With a simple coaxial structure creating a large-volume atmospheric glow discharge, a 162 MHz generator drives the air plasma. The VHF plasma source emits a steady-state glow; the high drive frequency is believed to inhibit the glow-to-arc transition for non-thermal discharge generation. To create the plasma-activated water (PAW) solutions used for plant treatment, the discharge is held over distilled water until a 100 ppm nitrate aqueous concentration is achieved. The discharge is used to incorporate nitrogen species into aqueous solution, which is used to fertilize radishes, marigolds, and tomatoes. In a four-week experiment, these plants are watered with four different solutions: tap water, dissolved ammonium nitrate DI water, dissolved sodium nitrate DI water, and PAW. The ammonium nitrate solution has the same amount of total nitrogen as PAW; the sodium nitrate solution has the same amount of nitrate as PAW. T-tests are used to determine the statistical significance of plant-group growth differences. PAW fertilization chemical mechanisms are presented.
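The t-test comparison described here can be sketched with a stdlib-only Welch two-sample t statistic (unequal variances). The shoot-length numbers below are invented illustrations, not the study's measurements, and the p-value lookup against the t distribution is omitted:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    freedom; compare |t| against the t distribution with df to get a p-value."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Hypothetical shoot lengths (cm): PAW-watered group vs tap-water group
t, df = welch_t([5.1, 5.8, 6.0, 5.5, 6.2], [4.2, 4.8, 4.5, 4.9, 4.4])
print(round(t, 2), round(df, 2))
```

Welch's form is the safer default when the treatment groups cannot be assumed to share a variance, as is typical across fertilizer treatments.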

Background and objective Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode the narrative concept and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Methods Using HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the above two pipelines. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. Results The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80, p<0.0001), but a similar F-measure to that of the SAAP (0.89 vs 0.87). For the Procedure terms, the F-measure was not significantly different among the three pipelines. Conclusions The combination of a semi-automatic annotation approach and the NLP application seems to be a solution for generating entry-level interoperable clinical documents. PMID:25332357
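The F-measure figures reported above are the harmonic mean of precision and recall over the coded entries. A minimal sketch (the term counts below are illustrative, not the i2b2 evaluation numbers):

```python
def f_measure(true_pos, false_pos, false_neg):
    """F1 score: harmonic mean of precision and recall."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# e.g. 40 correctly coded Observation entries, 10 spurious, 10 missed
score = f_measure(40, 10, 10)
print(round(score, 2))  # 0.8
```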

An important step in advancing global health through informatics is to understand how systems support health professionals to deliver improved services to patients. Studies in several countries have highlighted the potential for clinical information systems to change patterns of work and communication, and in particular have raised concerns that they reduce nurses’ time in direct care. However measuring the effects of systems on work is challenging and comparisons across studies have been hindered by a lack of standardised definitions and measurement tools. This paper describes the Work Observation Method by Activity Time (WOMBAT) technique version 1.0 and the ways in which the data generated can describe different aspects of health professionals’ work. In 2011 a revised WOMBAT version 2.0 was developed specifically to facilitate its use by research teams in different countries. The new features provide opportunities for international comparative studies of nurses’ work to be conducted. PMID:24199139

Background The factors that lead to patients failing multiple anterior cruciate ligament (ACL) reconstructions are not well understood. Hypothesis Multiple-revision ACL reconstruction will have different characteristics than first-time revision in terms of previous and current graft selection, mode of failure, chondral/meniscal injuries, and surgical characteristics. Study Design Case-control study; Level of evidence, 3. Methods A prospective multicenter ACL revision database was utilized for the time period from March 2006 to June 2011. Patients were divided into those who underwent a single-revision ACL reconstruction and those who underwent multiple-revision ACL reconstructions. The primary outcome variable was Marx activity level. Primary data analyses between the groups included a comparison of graft type, perceived mechanism of failure, associated injury (meniscus, ligament, and cartilage), reconstruction type, and tunnel position. Data were compared by analysis of variance with a post hoc Tukey test. Results A total of 1200 patients (58% men; median age, 26 years) were enrolled, with 1049 (87%) patients having a primary revision and 151 (13%) patients having a second or subsequent revision. Marx activity levels were significantly higher (9.77) in the primary-revision group than in those patients with multiple revisions (6.74). The most common cause of reruptures was a traumatic, noncontact ACL graft injury in 55% of primary-revision patients; 25% of patients had a nontraumatic, gradual-onset recurrent injury, and 11% had a traumatic, contact injury. In the multiple-revision group, a nontraumatic, gradual-onset injury was the most common cause of recurrence (47%), followed by traumatic noncontact (35%) and nontraumatic sudden onset (11%) (P < .01 between groups). Chondral injuries in the medial compartment were significantly more common in the multiple-revision group than in the single-revision group, as were chondral injuries in the patellofemoral

This Lawrence Berkeley National Laboratory Radiological Control Manual (LBNL RCM) has been prepared to provide guidance for site-specific additions, supplements and interpretation of the DOE Radiological Control Manual. The guidance provided in this manual is one methodology to implement the requirements given in Title 10 Code of Federal Regulations Part 835 (10 CFR 835) and the DOE Radiological Control Manual. Information given in this manual is also intended to provide demonstration of compliance to specific requirements in 10 CFR 835. The LBNL RCM (Publication 3113) and LBNL Health and Safety Manual Publication-3000 form the technical basis for the LBNL RPP and will be revised as necessary to ensure that current requirements from Rules and Orders are represented. The LBNL RCM will form the standard for excellence in the implementation of the LBNL RPP.

Providing hepatitis B vaccine to all neonates within 24 hours of birth (Timely Birth Dose, TBD) is the key preventative measure to control perinatal hepatitis B virus infection. Previous Chinese studies of TBD only differentiated between migrant and non-migrant (local-born generation, LG) children. Our study is the first to stratify migrants in Beijing into first-generation migrants (FGM) and second-generation migrants (SGM). Based on a questionnaire survey of 2682 people in 3 Beijing villages, we identified 283 children aged 0-15 years, from 246 households, who were eligible for a TBD. Multinomial logistic regression and statistical analyses were used to examine factors explaining TBD rates for LG, FGM and SGM children. Surprisingly, the TBD rate for LG Beijing children was not significantly different from that of migrant children. But after stratifying migrant children into FGM and SGM, significant TBD differences were revealed across LG, FGM and SGM according to domicile (p-value < 0.001, OR = 3.24), first vaccination covered by government policy (p-value < 0.05, OR = 3.24), mother's knowledge of hepatitis B (p-value < 0.05, OR = 1.01) and the government's HBV policy environment (p-value < 0.05, OR = 2.338). Birthplace (p-value = 0.002, OR = 6.21) and better policy environments (p-value = 0.01, OR = 2.80) were associated with higher TBD rates for LG and SGM children. Compared with FGM children, SGM had a significantly poorer TBD rate (Fisher exact test of chi-square = 0.013). We identified SGM as a special risk group; proposed Hukou reform to improve SGM TBD; and called for Beijing health authorities to match TBD rates in other provinces, especially by improving practices by health authorities and knowledge of parents. PMID:27043864
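The odds ratios above come from the study's multinomial logistic regression; as a minimal illustration of what an unadjusted odds ratio measures, the 2×2-table form can be sketched as follows (the counts are hypothetical, not the study's data):

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Unadjusted odds ratio from a 2x2 table.

    Rows: exposed vs. unexposed group (e.g. one migrant stratum vs.
    another); columns: received the timely birth dose (yes) or not (no).
    All counts here are hypothetical illustrations, not study data.
    """
    odds_exposed = exposed_yes / exposed_no
    odds_unexposed = unexposed_yes / unexposed_no
    return odds_exposed / odds_unexposed

# Hypothetical example: 30/10 vaccinated/unvaccinated in one group
# vs. 15/15 in the other gives an odds ratio of 3.0.
print(odds_ratio(30, 10, 15, 15))
```

A regression-based OR, as reported in the abstract, additionally adjusts for covariates; this sketch shows only the crude measure.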

The mechanism for acid production in phenolic extreme ultraviolet (EUV) lithography films containing triphenylsulfonium triflate (Ph(3)S(+)TfO(-)) acid generator has been investigated by electron paramagnetic resonance (EPR) spectroscopy and by use of the acid indicator coumarin 6 (C6). Gamma radiolysis was substituted for the EUV radiation with the assumption that the chemistry generated by ionization of the matrix does not depend on the ionization source. Poly(4-hydroxystyrene) (PHS) was first investigated as a well-studied standard, after which the water-wheel-like cyclic oligomer derivative containing pendant adamantyl ester groups, noria-AD(50), was investigated. EPR measurements confirm that the dominant free radical product is a phenoxyl derivative (PHS-O(•) or noria-O(•)) that exhibits quite slow stretched exponential recombination kinetics at room temperature. Also observed at 77 K was the presence of a significant hydrogen atom product of radiolysis. The G value or yield of acid production in thin lithography films was measured with the C6 indicator on a fused silica substrate. It was found that a significant amount of acid is generated via energy transfer from the irradiated fused-silica substrate to the Ph(3)S(+)TfO(-) in the films. By varying the film thickness on the substrates, the substrate effect on the acid yield was quantitatively determined. After subtraction of the contribution from the substrates, the acid yield G value in the PHS film with 10 wt % Ph(3)S(+)TfO(-) and 5 wt % C6 was determined to be 2.5 ± 0.3 protons per 100 eV of radiation. The acid yield of noria-AD(50) films was found to be 3.2 ± 0.3 protons per 100 eV. PMID:22607084

This study quantified the mechanical interactions between an American football cleat and eight surfaces used by professional American football teams. Loading conditions were applied with a custom-built testing apparatus designed to represent play-relevant maneuvers of elite athletes. Two natural grass and six infill artificial surfaces were tested with the cleated portion of a shoe intended for use on either surface type. In translation tests with a 2.8-kN vertical load, the grass surfaces limited the horizontal force on the cleats by tearing. This tearing was not observed with the artificial surfaces, which allowed less motion and generated greater horizontal force (3.2 kN vs. 4.5 kN). In rotation tests, the cleats generated less angular displacement and greater torque on the artificial surfaces (145 N m vs. 197 N m) and generated less peak horizontal force on the natural surfaces than on the artificial surfaces (2.4 kN vs. 3.0 kN).

This paper presents an electromagnetic characteristic analysis of axial flux machines applied to 500 W class wind power generators. For a dramatic reduction in analysis time, an analytical method is applied, and a comparative analysis is performed according to the magnetization patterns of the permanent magnets. Owing to their structural features, a quasi-3-dimensional analysis is employed, and a correction function is introduced to account for the flux leakage of the machines. The analysis results are compared with results from the finite element method and from experiment, validating the method suggested in this paper with high reliability.

The design, analysis, and key features of the Modular Isotopic Thermoelectric Generator (MITG) were described in a 1981 IECEC paper; and the design, fabrication, testing, and post-test analysis of test assemblies simulating prototypical MITG modules were described in preceding papers in these proceedings. These analyses succeeded in identifying and explaining the principal causes of thermal-stress problems encountered in the tests, and in confirming the effectiveness of design changes for alleviating them. The present paper presents additional design improvements for solving these and other problems, and describes new thermoelectric material properties generated by independent laboratories over the past two years. Based on these changes and on a revised fabrication procedure, it presents a reoptimization of the MITG design and computes the power-to-weight ratio for the revised design. That ratio is appreciably lower than the 1981 prediction, primarily because of changes in material properties; but it is still much higher than the specific power of current-generation RTGs.

Although large quantities of work have been published on laser generation of nanoparticles, the comparative roles of laser wavelength and pulse width in controlling nanoparticle size, morphology and production rate remain unclear. In this investigation, Ag, Au and TiO2 nanoparticles were synthesised by nanosecond (λ = 532 nm, τ = 5 ns), picosecond (λ = 1064 nm, τ = 10 ps) and femtosecond (λ = 800 nm, τ < 100 fs) pulsed lasers in deionised water. They are compared in terms of their optical absorption spectra, morphology, size distribution and production rates, characterised by UV-Vis spectroscopy and transmission electron microscopy. The ablation rates of both Ag and Ti samples were shown as a function of laser pulse energy and the water level above the samples. The average size of the nanoparticles (10-50 nm) was found to be smaller for the shorter-wavelength (532 nm) nanosecond pulsed laser than for the picosecond and femtosecond lasers, demonstrating that laser wavelength plays a more dominant role than pulse width in particle size control. The ps laser generated more spherical Ag nanoparticles than the ns and fs lasers. Under the same laser processing conditions, Au nanoparticles are smaller than Ag, and TiO2 nanoparticles are the largest. The nanoparticle production rate is relatively independent of laser type, wavelength and pulse length, but is largely determined by the laser fluence and the energy deposited.

Four hydrophobic p38α mitogen-activated protein kinase inhibitors were refluxed with 7.5% hydrogen peroxide at 80°C and irradiated with visible light in order to generate more hydrophilic conversion products. The resulting mixtures were analyzed in a high-resolution screening (HRS) platform, featuring liquid chromatographic separation coupled in parallel with a fluorescence enhancement based continuous-flow affinity bioassay towards the p38α mitogen-activated protein kinase and with high-resolution (tandem) mass spectrometry on an ion-trap-time-of-flight hybrid instrument. The results were compared with similar data where chemical diversity was achieved by means of electrochemical conversion or incubation with either human liver microsomes or cytochrome P450s from Bacillus megaterium (BM3s). In total, more than 50 conversion products were identified. The metabolite-like compound libraries studied are discussed in terms of the reactions enabled, the retention of affinity, and the change in hydrophilicity by modification, in summary the ability to generate bioactive, more hydrophilic potential lead compounds. In this context, HRS is demonstrated to be an effective tool as it reduces the effort directed towards laborious synthesis and purification schemes. PMID:24090642

Porous carbon-based materials are commonly used to remove various organic and inorganic pollutants from gaseous and liquid effluents and products. In this study, the adsorption of dioxins on activated carbons and multi-walled carbon nanotubes was compared in a series of bench-scale experiments. A laboratory-scale dioxin generator was used to generate PCDD/Fs at a constant concentration (8.3 ng I-TEQ/Nm(3)). The results confirm that high-chlorinated congeners are more easily adsorbed on both activated carbons and carbon nanotubes than low-chlorinated congeners. Carbon nanotubes also achieved higher adsorption efficiency than activated carbons even though they have a smaller BET surface area. Carbon nanotubes reached a total removal efficiency above 86.8 %, compared with removal efficiencies of only 70.0 and 54.2 % for the two activated carbons tested. In addition, because of different adsorption mechanisms, the removal efficiency of carbon nanotubes dropped more slowly with time than was the case for activated carbons. This can be attributed to the abundant mesopores distributed on the surface of carbon nanotubes, which enhance the pore-filling process of dioxin molecules during adsorption. Moreover, strong interactions between the two benzene rings of dioxin molecules and the hexagonal arrays of carbon atoms on the surface give carbon nanotubes a larger adsorption capacity. PMID:25728198

Oxidative modification of HDLs (high-density lipoproteins) by MPO (myeloperoxidase) compromises its anti-atherogenic properties, which may contribute to the development of atherosclerosis. Although it has been established that HOCl (hypochlorous acid) produced by MPO targets apoA-I (apolipoprotein A-I), the major apolipoprotein of HDLs, the role of the other major oxidant generated by MPO, HOSCN (hypothiocyanous acid), in the generation of dysfunctional HDLs has not been examined. In the present study, we characterize the structural and functional modifications of lipid-free apoA-I and rHDL (reconstituted discoidal HDL) containing apoA-I complexed with phospholipid, induced by HOSCN and its decomposition product, OCN- (cyanate). Treatment of apoA-I with HOSCN resulted in the oxidation of tryptophan residues, whereas OCN- induced carbamylation of lysine residues to yield homocitrulline. Tryptophan residues were more readily oxidized on apoA-I contained in rHDLs. Exposure of lipid-free apoA-I to HOSCN and OCN- significantly reduced the extent of cholesterol efflux from cholesterol-loaded macrophages when compared with unmodified apoA-I. In contrast, HOSCN did not affect the anti-inflammatory properties of rHDL. The ability of HOSCN to impair apoA-I-mediated cholesterol efflux may contribute to the development of atherosclerosis, particularly in smokers who have high plasma levels of SCN- (thiocyanate). PMID:23088652

This study compares the first-generation antipsychotic (FGA) flupentixol to haloperidol and common second-generation antipsychotics (SGAs) as to drug utilization and severe adverse drug reactions (ADRs) in clinical treatment of schizophrenia inpatients using data from the drug safety program Arzneimittelsicherheit in der Psychiatrie (AMSP). AMSP drug utilization and reported ADR data were analyzed. Type and frequency of severe ADRs attributed to flupentixol were compared with haloperidol, clozapine, olanzapine, quetiapine, risperidone and amisulpride in a total of 56,861 schizophrenia inpatients exposed to these drugs. In spite of increasing prescription of SGAs, flupentixol was consistently used in schizophrenic inpatients (about 5 %) over time. Reporting rates of severe ADRs ranged from 0.38 to 1.20 % for the individual antipsychotics (drugs imputed alone); flupentixol ranked lowest. The type of ADR differed considerably; as to severe EPMS, flupentixol (0.27 %), like risperidone (0.28 %), held an intermediate position between haloperidol/amisulpride (0.55/0.52 %) and olanzapine/quetiapine (<0.1 %). The study is a heuristic approach, not a confirmatory test. Flupentixol has a stable place in the treatment of schizophrenia in spite of the introduction of different SGAs. Comparative ADR profiles suggest an intermediate position between FGAs and SGAs for flupentixol in clinical practice. PMID:23835526

The relative accuracy of a georeferenced raster data set captured by the Megavision 1024XM system using the Videk Megaplus CCD cameras is compared to a georeferenced raster data set generated from vector lines manually digitized through the ELAS software package on a Summagraphics X-Y digitizer table. The study also investigates the amount of time necessary to fully complete the rasterization of the two data sets, evaluating individual areas such as time necessary to generate raw data, time necessary to edit raw data, time necessary to georeference raw data, and accuracy of georeferencing against a norm. Preliminary results exhibit a high level of agreement between areas of the vector-parented data and areas of the captured file data where sufficient control points were chosen. Maps of 1:20,000 scale were digitized into raster files of 5 meter resolution per pixel and overall error in RMS was estimated at less than eight meters. Such approaches offer time and labor-saving advantages as well as increasing the efficiency of project scheduling and enabling the digitization of new types of data.

Whole-exome sequencing (WES) has been widely used for analysis of human genetic diseases, but its value for the pharmacogenomic profiling of individuals is not well studied. Initially, we performed an in-depth evaluation of the accuracy of WES variant calling in the pharmacogenes CYP2D6 and CYP2C19 by comparison with MiSeq(®) amplicon sequencing data (n = 36). This analysis revealed that the concordance rate between WES and MiSeq(®) was high, achieving 99.60% for variants that were called without exceeding the truth-sensitivity threshold (99%), defined during variant quality score recalibration (VQSR). Beyond this threshold, the proportion of discordant calls increased markedly. Subsequently, we expanded our findings beyond CYP2D6 and CYP2C19 to include more genes genotyped by the iPLEX(®) ADME PGx Panel in the subset of twelve samples. WES performed well, agreeing with the genotyping panel in approximately 99% of the selected pass-filter variant calls. Overall, our results have demonstrated WES to be a promising approach for pharmacogenomic profiling, with an estimated error rate of lower than 1%. Quality filters, particularly VQSR, are important for reducing the number of false variants. Future studies may benefit from examining the role of WES in the clinical setting for guiding drug therapy. PMID:26858644
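A concordance rate such as the 99.60% quoted here is, at its core, the fraction of shared variant sites at which two call sets agree. A minimal sketch of that comparison (the site keys and genotype encoding are illustrative assumptions, not the study's pipeline):

```python
def concordance_rate(calls_a, calls_b):
    """Fraction of shared sites where two variant call sets agree.

    calls_a, calls_b: dicts mapping a site key, e.g. (chromosome,
    position), to a called genotype string. Both the key format and
    the genotype encoding are illustrative, not from the study.
    """
    shared = set(calls_a) & set(calls_b)
    if not shared:
        raise ValueError("no shared sites to compare")
    agreeing = sum(1 for site in shared if calls_a[site] == calls_b[site])
    return agreeing / len(shared)

# Toy example: the two call sets agree at one of two shared sites.
wes = {("chr1", 100): "C/T", ("chr1", 200): "A/A"}
panel = {("chr1", 100): "C/T", ("chr1", 200): "A/G"}
print(concordance_rate(wes, panel))  # 0.5
```

In practice, as the abstract notes, the rate depends heavily on which calls pass quality filters such as VQSR before the comparison is made.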

The change of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this progress technically, but the interaction of the third dimension with humans is not yet clear. Previously, it was found that any increased load on the visual system can create visual fatigue, as with prolonged TV watching, computer work or video gaming. But watching S3D can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave subjective evaluations of their fatigue. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

Revision otologic surgery places a significant economic burden on patients and the healthcare system. We conducted a retrospective chart analysis to estimate the economic impact of revision canal-wall-down (CWD) mastoidectomy. We reviewed the medical records of all 189 adults who had undergone CWD mastoidectomy performed by the senior author between June 2006 and August 2011 at Loyola University Medical Center in Maywood, Ill. Institutional charges and collections for all patients were extrapolated to estimate the overall healthcare cost of revision surgery in Illinois and at the national level. Of the 189 CWD mastoidectomies, 89 were primary and 100 were revision procedures. The total charge for the revision cases was $2,783,700, and the net reimbursement (collections) was $846,289 (30.4%). Using Illinois Hospital Association data, we estimated that reimbursement for 387 revision CWD mastoidectomies that had been performed in fiscal year 2011 was nearly $3.3 million. By extrapolating our data to the national level, we estimated that 9,214 patients underwent revision CWD mastoidectomy in the United States during 2011, which cost the national healthcare system roughly $76 million, not including lost wages and productivity. Known causes of failed CWD mastoidectomies that often result in revision surgery include an inadequate meatoplasty, a facial ridge that is too high, residual diseased air cells, and recurrent cholesteatoma. A better understanding of these factors can reduce the need for revision surgery, which could have a positive impact on the economic strain related to this procedure at the local, state, and national levels. PMID:26991218
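The reimbursement figures above can be checked with simple arithmetic; a back-of-the-envelope sketch using only the numbers stated in the abstract:

```python
# Figures stated in the abstract for the 100 revision CWD mastoidectomies.
total_charge = 2_783_700   # USD charged for the revision cases
collections = 846_289      # USD actually reimbursed (net collections)
revision_cases = 100

reimbursement_rate = collections / total_charge
mean_charge_per_case = total_charge / revision_cases

print(f"{reimbursement_rate:.1%}")       # 30.4%, matching the abstract
print(f"${mean_charge_per_case:,.0f}")   # $27,837 per revision case
```

The state and national estimates then follow by scaling the per-case figure by the extrapolated case counts (387 in Illinois, 9,214 nationally).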

We conducted a cross-sectional study to compare the subjective experiences and clinical effects of first-generation long-acting injectable (FGA-LAI) antipsychotics with those of risperidone long-acting injectables (RIS-LAIs) in 434 schizophrenia patients. Compared with the RIS-LAI group, the patients treated with FGA-LAIs had a significantly longer duration of illness and LAI treatment and were older. Our results suggest that patients treated with FGA-LAI have more satisfactory subjective experiences compared with patients treated with RIS-LAI and that both FGA-LAI and RIS-LAI treatments can prevent relapses and hospitalization. Additional longitudinal studies determining the long-term benefits of RIS-LAI are warranted. PMID:27580495

Terahertz wave (THz) photoconductive (PC) antennas were fabricated on oxygen-implanted GaAs (GaAs:O) and low-temperature-grown GaAs (LT-GaAs). The measured cw THz power at 0.358 THz from the GaAs:O antenna is about twice that from the LT-GaAs antenna under the same testing conditions, with the former showing no saturation up to a bias of 40 kV/cm, while the latter is already beginning to saturate at 20 kV/cm. A modified theoretical model incorporating bias-field-dependent electron saturation velocity is employed to explain the results. It shows that GaAs:O exhibits a higher electron saturation velocity, which may be further exploited to generate even larger THz powers by reducing the ion dosage and optimizing the annealing process in GaAs:O. PMID:19340176

A comparative study of two different Photonic Integrated Circuit (PIC) structures for continuous-wave generation of millimeter-wave (MMW) signals is presented, each using a different approach. One approach is optical heterodyning, using an integrated dual-wavelength laser source based on an Arrayed Waveguide Grating. The other is based on Mode-Locked Laser Diodes (MLLDs). A novel building block, Multimode Interference Reflectors (MIRs), is used to integrate both structures on-chip, without the need for cleaved facets to define the laser cavity. This enables either structure to be placed at any location within the photonic chip. As will be shown, the MLLD structure provides a simple source for low frequencies, while higher frequencies are easier to achieve by optical heterodyning. Both types of structures have been fabricated at a generic foundry in a commercial MPW PIC technology.

Many data acquisition systems incorporate high-speed scanners to convert analog signals into digital format for further processing. Some systems multiplex many channels into a single scanner. A random access scanner whose scan sequence is specified by a table in random access memory will permit different scan rates on different channels. Generation of this scan table can be a tedious manual task when there are many channels (e.g. 50), when there are more than a few scan rates (e.g. 5), and/or when the ratio of the highest scan rate to the lowest scan rate becomes large (e.g. 100:1). An algorithm is developed which generates these scan sequences for the random access scanner, and the algorithm is implemented on a digital computer. Application of number theory to the mathematical statement of the problem led to the development of several algorithms, which were implemented in FORTRAN. The most efficient of these algorithms operates by partitioning the problem into a set of subproblems. Through recursion, each subproblem is solved by partitioning it repeatedly into even smaller parts, continuing until a set of simple problems is created. From this process, a pictorial representation, or wheel diagram, of the problem can be constructed. From the wheel diagram and a description of the original problem, a scan table can be constructed. In addition, the wheel diagram can be used as a method of storing the scan sequence in a smaller amount of memory. The most efficient partitioning algorithm solved most scan table problems in less than a second of CPU time. Some types of problems, however, required as much as a few minutes of CPU time. 26 figures, 2 tables.
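The report's algorithm is a recursive FORTRAN partitioning scheme; as a simplified sketch of the underlying idea, the following spaces each channel's samples evenly over a common period (assuming, for simplicity, that every rate divides the period evenly, a restriction the recursive partitioning does not have):

```python
from functools import reduce
from math import gcd


def lcm(a, b):
    return a * b // gcd(a, b)


def build_scan_table(channel_rates):
    """Build a simple scan table for a random access scanner.

    channel_rates: dict mapping channel id -> relative scan rate
    (samples per table period). Each rate is assumed to divide the
    hyperperiod evenly. Returns the scan sequence as a list in which
    each channel appears 'rate' times, spaced as evenly as possible.
    """
    period = reduce(lcm, channel_rates.values())  # hyperperiod in slots
    slots = [[] for _ in range(period)]           # channels due at each slot
    for channel, rate in channel_rates.items():
        stride = period // rate                   # slots between samples
        for k in range(rate):
            slots[k * stride].append(channel)
    # Flatten slot-by-slot into the sequence the scanner reads out.
    return [ch for slot in slots for ch in slot]


# Channel A sampled 4x, B 2x, C 1x per period.
print(build_scan_table({"A": 4, "B": 2, "C": 1}))
```

This is only the flat form of the table; the wheel diagram described above is essentially a compressed, hierarchical encoding of the same sequence.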

Objective The metabolic side effects of second-generation antipsychotics (SGA) are serious and have not been compared head to head in a meta-analysis. We conducted a meta-analysis of studies comparing the metabolic side effects of the following SGAs head-to-head: amisulpride, aripiprazole, clozapine, olanzapine, quetiapine, risperidone, sertindole, ziprasidone, zotepine. Method We searched the register of the Cochrane schizophrenia group (last search May 2007), supplemented by MEDLINE and EMBASE (last search January 2009) for randomized, blinded studies comparing the above mentioned SGA in the treatment of schizophrenia or related disorders. At least three reviewers extracted the data independently. The primary outcome was weight change. We also assessed changes of cholesterol and glucose. The results were combined in a meta-analysis. Results We included 48 studies with 105 relevant arms. Olanzapine produced more weight gain than all other second-generation antipsychotics except for clozapine where no difference was found. Clozapine produced more weight gain than risperidone, risperidone more than amisulpride, and sertindole more than risperidone. Olanzapine produced more cholesterol increase than aripiprazole, risperidone and ziprasidone. (No differences with amisulpride, clozapine and quetiapine were found). Quetiapine produced more cholesterol increase than risperidone and ziprasidone. Olanzapine produced more increase in glucose than amisulpride, aripiprazole, quetiapine, risperidone and ziprasidone; no difference was found with clozapine. Conclusions Some SGAs lead to substantially more metabolic side effects than other SGAs. When choosing an SGA for an individual patient these side effects with their potential cause of secondary diseases must be weighed against efficacy and characteristics of the individual patient. PMID:20692814
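As an illustration of the pooling step, a textbook fixed-effect inverse-variance estimator for a continuous outcome such as weight change (a generic sketch; the authors' exact statistical model is not specified in the abstract):

```python
def pooled_mean_difference(studies):
    """Fixed-effect inverse-variance pooling of mean differences.

    studies: list of (mean_difference, standard_error) pairs, one per
    study. This is a generic textbook estimator, not necessarily the
    exact model used in the cited meta-analysis.
    """
    weights = [1.0 / se ** 2 for _, se in studies]
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se


# Two hypothetical trials reporting weight-change differences (kg):
estimate, se = pooled_mean_difference([(2.0, 1.0), (4.0, 1.0)])
print(estimate, round(se, 3))  # 3.0 0.707
```

More precise studies (smaller standard errors) get proportionally larger weights, which is why head-to-head comparisons pooled this way can separate drugs whose individual trials are each underpowered.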

The mitochondrial genome's non-recombinant mode of inheritance and relatively rapid rate of evolution has promoted its use as a marker for studying the biogeographic history and evolutionary interrelationships among many metazoan species. A modest portion of the mitochondrial genome has been defined for 12 species and genotypes of parasites in the genus Trichinella, but its adequacy in representing the mitochondrial genome as a whole remains unclear, as the complete coding sequence has been characterized only for Trichinella spiralis. Here, we sought to comprehensively describe the extent and nature of divergence between the mitochondrial genomes of T. spiralis (which poses the most appreciable zoonotic risk owing to its capacity to establish persistent infections in domestic pigs) and Trichinella murrelli (which is the most prevalent species in North American wildlife hosts, but which poses relatively little risk to the safety of pork). Next generation sequencing methodologies and scaffold and de novo assembly strategies were employed. The entire protein-coding region was sequenced (13,917 bp), along with a portion of the highly repetitive non-coding region (1524 bp) of the mitochondrial genome of T. murrelli, with a combined average read depth of 250 reads. The accuracy of base calling, estimated from coding region sequence, was found to exceed 99.3%. Genome content and gene order were not found to be significantly different from those of T. spiralis. An overall inter-species sequence divergence of 9.5% was estimated. Significant variation was identified when the amount of variation between species at each gene was compared to the average amount of variation between species across the coding region. Next generation sequencing is a highly effective means to obtain previously unknown mitochondrial genome sequence. Particular to parasites, the extremely deep coverage achieved through this method allows for the detection of sequence heterogeneity between the multiple

Although biophysical models predict a difference in the ratio of interchromosomal to intrachromosomal interarm exchanges (F ratio) for low- and high-LET radiations, few experimental data support this prediction. However, the F ratios in experiments to date have been generated using data on chromosome aberrations in samples collected at the first postirradiation mitosis, which may not be indicative of the aberrations formed in interphase after exposure to high-LET radiations. In the present study, we exposed human lymphocytes in vitro to 2 and 5 Gy of gamma rays and 3 Gy of 1 GeV/nucleon iron ions (LET = 140 keV/micrometer), stimulated the cells to grow with phytohemagglutinin (PHA), and collected the condensed chromosomes after 48 h of incubation using both chemically induced premature chromosome condensation (PCC) and the conventional metaphase techniques. The PCC technique used here condenses chromosomes mostly in the G(2) phase of the cell cycle. The F ratio was calculated using data on asymmetrical chromosome aberrations in both the PCC and metaphase samples. It was found that the F ratios were similar for the samples irradiated with low- and high-LET radiation and collected at metaphase. However, for irradiated samples assayed by PCC, the F ratio was found to be 8.2 +/- 2.0 for 5 Gy gamma rays and 5.2 +/- 0.9 for 3 Gy iron ions. The distribution of the aberrations indicated that, in the PCC samples irradiated with iron ions, most of the centric rings occurred in spreads containing five or more asymmetrical aberrations. These heavily damaged cells, which were either less likely to reach mitosis or may reach mitosis at a later time, were responsible for the difference in the F ratios generated from interphase and metaphase analysis after exposure to iron ions.

The kinetics of T-helper immune responses generated in 16 mature outbred rhesus monkeys (Macaca mulatta) within a 10-month period by three different human immunodeficiency virus type 1 (HIV-1) vaccine strategies were compared. Immune responses to monomeric recombinant gp120SF2 (rgp120) when the protein was expressed in vivo by DNA immunization or when it was delivered as a subunit protein vaccine formulated either with the MF59 adjuvant or by incorporation into immune-stimulating complexes (ISCOMs) were compared. Virus-neutralizing antibodies (NA) against HIV-1SF2 reached similar titers in the two rgp120SF2 protein-immunized groups, but the responses showed different kinetics, while NA were delayed and their levels were low in the DNA-immunized animals. Antigen-specific gamma interferon (IFN-γ) T-helper (type 1-like) responses were detected in the DNA-immunized group, but only after the fourth immunization, and the rgp120/MF59 group generated both IFN-γ and interleukin-4 (IL-4) (type 2-like) responses that appeared after the third immunization. In contrast, rgp120/ISCOM-immunized animals rapidly developed marked IL-2, IFN-γ (type 1-like), and IL-4 responses that peaked after the second immunization. To determine which type of immune responses correlated with protection from infection, all animals were challenged intravenously with 50 50% infective doses of a rhesus cell-propagated, in vivo-titrated stock of a chimeric simian immunodeficiency virus-HIVSF13 construct. Protection was observed in the two groups receiving the rgp120 subunit vaccines. Half of the animals in the ISCOM group were completely protected from infection. In other subunit vaccinees there was evidence by multiple assays that virus detected at 2 weeks postchallenge was effectively cleared. Early induction of potent type 1- as well as type 2-like T-helper responses induced the most-effective immunity. PMID:10074183

Delta's Key to the TOEFL iBT: Advanced Skill Practice is a revised and updated edition of Delta's Key to the Next Generation TOEFL Test. Since the introduction of the TOEFL iBT in 2005, there have been significant changes to some of the test questions, particularly the integrated writing and integrated speaking tasks. The new 2011 edition of…

Background: The honey bee is a key model for social behavior and this feature led to the selection of the species for genome sequencing. A genetic map is a necessary companion to the sequence. In addition, because there was originally no physical map for the honey bee genome project, a meiotic map was the only resource for organizing the sequence assembly on the chromosomes. Results: We present the genetic (meiotic) map here and describe the main features that emerged from comparison with the sequence-based physical map. The genetic map of the honey bee is saturated and the chromosomes are oriented from the centromeric to the telomeric regions. The map is based on 2,008 markers and is about 40 Morgans (M) long, resulting in a marker density of one every 2.05 centiMorgans (cM). For the 186 megabases (Mb) of the genome mapped and assembled, this corresponds to a very high average recombination rate of 22.04 cM/Mb. Honey bee meiosis shows a relatively homogeneous recombination rate along and across chromosomes, as well as within and between individuals. Interference is higher than inferred from the Kosambi function of distance. In addition, numerous recombination hotspots are dispersed over the genome. Conclusion: The very large genetic length of the honey bee genome, its small physical size and an almost complete genome sequence with a relatively low number of genes suggest a very promising future for association mapping in the honey bee, particularly as the existence of haploid males allows easy bulk segregant analysis. PMID:17459148
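The marker density and recombination rate quoted above follow directly from the map's total genetic length, marker count, and assembled physical size. A minimal sketch of the arithmetic, assuming a total map length of roughly 4,100 cM (a round value consistent with the "about 40 Morgans" and the quoted 22.04 cM/Mb; the exact figure is not stated in the abstract):

```python
# Figures from the abstract; map_length_cM is an assumed round value.
n_markers = 2008
map_length_cM = 4100.0   # "about 40 Morgans" (1 Morgan = 100 cM)
assembled_Mb = 186.0     # mapped and assembled portion of the genome

marker_density = map_length_cM / n_markers   # cM between adjacent markers
recomb_rate = map_length_cM / assembled_Mb   # average cM per Mb

print(f"one marker every {marker_density:.2f} cM")        # ~2.04
print(f"average recombination rate {recomb_rate:.2f} cM/Mb")  # ~22.04
```

For comparison, the human genome averages roughly 1 cM/Mb, which is what makes the honey bee's ~22 cM/Mb rate exceptionally high and association mapping correspondingly attractive.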

Background Mesenchymal stromal cells are employed in various different clinical settings in order to modulate immune response. However, relatively little is known about the mechanisms responsible for their immunomodulatory effects, which could be influenced by both the cell source and culture conditions. Design and Methods We tested the ability of a 5% platelet lysate-supplemented medium to support isolation and ex vivo expansion of mesenchymal stromal cells from full-term umbilical-cord blood. We also investigated the biological/functional properties of umbilical cord blood mesenchymal stromal cells, in comparison with platelet lysate-expanded bone marrow mesenchymal stromal cells. Results The success rate of isolation of mesenchymal stromal cells from umbilical cord blood was in the order of 20%. These cells exhibited typical morphology, immunophenotype and differentiation capacity. Although they have a low clonogenic efficiency, umbilical cord blood mesenchymal stromal cells may possess high proliferative potential. The genetic stability of these cells from umbilical cord blood was demonstrated by a normal molecular karyotype; in addition, these cells do not express hTERT and telomerase activity, do express p16ink4a protein and do not show anchorage-independent cell growth. Concerning alloantigen-specific immune responses, umbilical cord blood mesenchymal stromal cells were able to: (i) suppress T- and NK-lymphocyte proliferation, (ii) decrease cytotoxic activity and (iii) only slightly increase interleukin-10, while decreasing interferon-γ secretion, in mixed lymphocyte culture supernatants. While an indoleamine 2,3-dioxygenase-specific inhibitor did not reverse mesenchymal stromal cell-induced suppressive effects, a prostaglandin E2-specific inhibitor hampered the suppressive effect of both umbilical cord blood- and bone marrow-mesenchymal stromal cells on alloantigen-induced cytotoxic activity. Mesenchymal stromal cells from both sources expressed HLA

The rate of textbook revision cycles is examined in light of the recent trend towards more rapid revisions (and adoptions of textbooks). The authors conduct background research to better understand the context for textbook revision cycles and the environmental forces that have been influencing what appears to be more rapid textbook revisions. A…

... IDENTIFICATION OF REGIONS AND AGENCIES FOR SOLID WASTE MANAGEMENT Submission and Revision of Identifications... for solid waste functions in the region. (b) Revisions or adjustments to the State plan may require... notified of such revisions by the State solid waste agency. (c) Major revisions or adjustments in...

How do you show students that revision is more than a classroom exercise to please the teacher? Take them into the real world of writing for publication. In Real Revision, award-winning author and teacher Kate Messner demystifies the revision process for teachers and students alike and provides tried-and-true revision strategies, field tested by…

In an attempt to explore effective instruction in the English as a Foreign Language (EFL) setting, this study investigated language errors identified by students and teachers in three different revision stages: self-revision, peer revision, and teacher revision. It gave the focus to the effects of the three different methods on learners' writing…

... strategies. 923.128 Section 923.128 Commerce and Foreign Trade Regulations Relating to Commerce and Foreign... Program § 923.128 Revisions to assessments and strategies. (a) A State, in consultation with the Assistant Administrator, may propose to revise its approved Strategy. Revision(s) to an approved Strategy must...