We evaluated a PCR-RFLP assay of the ribosomal internal transcribed spacer 2 (ITS2) region to distinguish species of Anopheles commonly reported in the Amazon and validated the method using reared F1 offspring. The following species of Anopheles were used for molecular analysis: An. (Nys.) benarrochi, An. (Nys.) darlingi, An. (Nys.) nuneztovari, An. (Nys.) konderi, An. (Nys.) rangeli, and An. (Nys.) triannulatus sensu lato (s.l.). In addition, three species of the subgenus Anopheles, An. (Ano.) forattini, An. (Ano.) mattogrossensis, and An. (Ano.) peryassui, were included for testing. Each of the nine species tested yielded a diagnostic banding pattern. The PCR-RFLP method successfully identified all life stages, including exuviae, even from small fractions of a sample. The assay is rapid and can be applied as an unbiased confirmatory method for identifying morphological variants, disputed samples, imperfectly preserved specimens, and life stages for which taxonomic keys do not allow definitive species determination. PMID:18337348
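The fragment-pattern logic behind a PCR-RFLP assay can be sketched in a few lines of Python. The amplicon sequences and the EcoRI-style recognition site below are hypothetical placeholders, not the actual ITS2 amplicons or restriction enzymes used in the study.

```python
# Minimal sketch of how PCR-RFLP banding patterns distinguish species.
# Sequences and the recognition site are illustrative, not from the study.

def digest(seq, site):
    """Cut a sequence after every occurrence of a restriction site and
    return the resulting fragment lengths (the 'banding pattern')."""
    fragments, start = [], 0
    pos = seq.find(site)
    while pos != -1:
        fragments.append(pos + len(site) - start)  # length up to the cut
        start = pos + len(site)
        pos = seq.find(site, start)
    fragments.append(len(seq) - start)             # trailing fragment
    return sorted(fragments, reverse=True)

# Hypothetical ITS2 amplicons; a real assay would use sequenced amplicons.
amplicons = {
    "species_A": "ATGCGAATTCGGTACCAAGAATTCTTGGC",
    "species_B": "ATGCGGTACCAAGTTGGCATCGGATCCAA",
}
for species, seq in amplicons.items():
    print(species, digest(seq, "GAATTC"))  # EcoRI-like recognition site
```

Species whose amplicons carry the site at different positions yield different fragment-length lists, which is what appears as a diagnostic banding pattern on a gel.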

Among Australian endemic tephritid fruit flies, the sibling species Bactrocera tryoni and Bactrocera neohumeralis have been serious horticultural pests since the introduction of horticulture in the nineteenth century. More recently, Bactrocera jarvisi has also been declared a pest in northern Australia. After several decades of genetic research there is now a range of classical and molecular genetic tools that can be used to develop improved Sterile Insect Technique (SIT) strains for control of these pests. Four-way crossing strategies have the potential to overcome the problem of inbreeding in mass-reared strains of B. tryoni. The ability to produce hybrids between B. tryoni and the other two species in the laboratory has proved useful for the development of genetically marked strains. The identification of Y-chromosome markers in B. jarvisi means that male and female embryos can be distinguished in any strain that carries a B. jarvisi Y chromosome. This has enabled the study of homologues of the sex-determination genes during development of B. jarvisi and B. tryoni, which is necessary for the generation of genetic-sexing strains. Germ-line transformation has been established and a draft genome sequence for B. tryoni has been released. Transcriptomes from various species, tissues and developmental stages, to aid in the identification of manipulation targets for improving SIT, have either been assembled or are in the pipeline. Broad analyses of the microbiome have revealed a metagenome that is highly variable within and across species and defined by the environment. More specific analyses detected Wolbachia at low prevalence in the tropics but found it absent in temperate regions, suggesting a possible role for this endosymbiont in future control strategies. PMID:25470996

A new patent-pending technique is proposed in this study to improve the mechanical and biological performance of ultra-high molecular weight polyethylene (UHMWPE) by uniformly coating nylon onto the UHMWPE fiber (Firouzi et al., 2012). Mechanical tests were performed on neat and nylon-coated UHMWPE fibers to examine the tensile strength and creep resistance of the samples at different temperatures. Cytotoxicity and osteolysis induced by wear debris of the materials were investigated using the MTT assay and RT-PCR for the osteolysis markers tumor necrosis factor alpha (TNFα) and interleukin 6 (IL-6). Mechanical test results showed substantial improvement in maximum creep time, maximum breaking force, and toughness values of Nylon 6,6 and Nylon 6,12 coated UHMWPE fibers, averaging between 15% and 60% at 25, 50, and 70°C. Furthermore, cytotoxicity studies demonstrated significantly improved cell viability for the nylon-coated UHMWPE over the neat fiber (72.4% vs 54.8% at 48 h, and 80.7% vs 5% at 72 h; P<0.01). Osteolysis test results showed that the expression levels of the TNFα and IL-6 markers induced by the neat UHMWPE fiber were significantly higher than those induced by the Nylon 6,6 coated UHMWPE (2.5-fold increase for TNFα at 48 h, and threefold increase for IL-6 at 72 h; P<0.01). This study suggests that nylon-coated UHMWPE could be used as a novel material in clinical applications, with lower cytotoxicity, less wear debris-induced osteolysis, and superior mechanical properties compared to neat UHMWPE. PMID:24487078

Billions of documents are stored and updated daily on the World Wide Web, yet most of this information is not organized efficiently enough to build knowledge from the stored data. Nowadays, search engines are mainly used by users who rely on their own skills to look for the information they need. This paper presents different techniques that search engine users can apply in Google Search to improve the relevance of search results. According to the Pew Research Center, the average person spends eight hours a month searching for the right information. A company that employs 1000 employees, for instance, wastes $2.5 million looking for nonexistent or unfindable information. The cost is high because decisions are made based on the information that is readily available: whenever the information necessary to formulate an argument is not available or cannot be found, poor decisions may be made and mistakes become more likely. The survey also indicates that only 56% of Google users feel confident in their current search skills, and that just 76% of the information available on the Internet is accurate.

This paper describes several approaches for implementing quality improvement initiatives to improve patient satisfaction, which enables health-care organizations to position themselves for success in today's global and increasingly competitive environment. Specifically, measuring the views of patients, improving patient satisfaction through a community-wide effort, and using a Six Sigma program are discussed. Each of these programs can be an effective mechanism for quality improvement. A key component of quality improvement techniques involves collaborative efforts by all health-care professionals and managers as they seek to increase patient satisfaction. PMID:15552388

Nanotechnology has developed rapidly in the past few decades, resulting in ever-greater human exposure to nanomaterials. The increased use of nanomaterials for industrial, commercial and everyday purposes, such as fillers, catalysts, semiconductors, paints, cosmetic additives and drug carriers, has both obvious and potential impacts on human health and the environment. Nanotoxicology, which studies the safety of nanomaterials, has grown in response. Molecular toxicology is a new subdiscipline that studies the interactions and impacts of materials at the molecular level. To better relate molecular toxicology to nanomaterials, this review summarizes the typical techniques and methods of molecular toxicology that are applied when investigating the toxicology of nanomaterials, in six categories: genetic mutation detection, gene expression analysis, DNA damage detection, chromosomal aberration analysis, proteomics, and metabolomics. Each category involves several experimental techniques and methods. PMID:27319209

This book serves as a primer for molecular techniques in insect pathology and is tailored for a wide scientific audience. Contributing authors are internationally recognized experts. The book comprises four sections: 1) pathogen identification and diagnostics, 2) pathogen population genetics and p...

In a practical sense, biotechnology is concerned with the production of commercial products generated by biological processes. More formally, biotechnology may be defined as "the application of scientific and engineering principles to the processing of material by biological agents to provide goods and services" (Cantor, 2000). From a historical perspective, biotechnology dates back to the time when yeast was first used for beer or wine fermentation, and bacteria were used to make yogurt. In 1972, the birth of recombinant DNA technology moved biotechnology to new heights and led to the establishment of a new industry. Progress in biotechnology has been truly remarkable. Within four years of the discovery of recombinant DNA technology, genetically modified organisms (GMOs) were making human insulin, interferon, and human growth hormone. Now, recombinant DNA technology and its products, GMOs, are widely used in environmental biotechnology (Glick and Pasternak, 1988; Cowan, 2000). Bioremediation is one of the most rapidly growing areas of environmental biotechnology. Use of bioremediation for environmental clean-up is popular due to its low costs and public acceptability. Indeed, bioremediation stands to benefit greatly and advance even more rapidly with the adoption of molecular techniques developed originally for other areas of biotechnology. The 1990s was the decade of molecular microbial ecology, when molecular techniques came into use in environmental biotechnology. Adoption of these molecular techniques made scientists realize that microbial populations in natural environments are far more diverse than traditional culture methods had suggested. Using molecular ecological methods, such as direct DNA isolation from environmental samples, denaturing gradient gel electrophoresis (DGGE), PCR methods, nucleic acid hybridization etc., we can now study microbial consortia relevant to pollutant degradation in the environment. These techniques promise to

Based on the assumption that a positive environment is an important component of a well-run school, this monograph offers techniques to school principals for evaluating and improving school climate. Topics covered include assessing school climate, planning for climate development, providing leadership for climate improvement, improving classroom…

Prosthetic joint infections (PJI) can be broadly classed into two groups: those where there is a strong clinical suspicion of infection and those with clinical uncertainty, including 'aseptic loosening'. Confirmation of infection and identification of the causative organism, along with provision of antibiotic susceptibility data, are important stages in the management of PJI. Conventional microbiological culture and susceptibility testing are usually sufficient to provide this. However, they may fail due to prior antimicrobial treatment or the presence of unusual and fastidious organisms. Molecular techniques, in particular specific real-time and broad-range PCR, are available for diagnostic use in suspected PJI. In this review, we describe the techniques available and their current strengths, limitations and future development. Real-time pathogen-specific and broad-range PCR (with single sequence determination) are suitable for use as part of the routine diagnostic algorithm for clinically suspected PJI. Further development of broad-range PCR with high-throughput (next-generation) sequencing is necessary to understand the microbiome of the prosthetic joint further before this technique can be used for routine diagnostics in clinically unsuspected PJI, including aseptic loosening. PMID:25135084

Molecular nanotechnology is the precise, three-dimensional control of materials and devices at the atomic scale. An important part of nanotechnology is the design of molecules for specific purposes. This paper describes early results using genetic software techniques to automatically design molecules under the control of a fitness function. The fitness function must be capable of determining which of two arbitrary molecules is better for a specific task. The software begins by generating a population of random molecules. The population is then evolved towards greater fitness by randomly combining parts of the better individuals to create new molecules. These new molecules then replace some of the worst molecules in the population. The unique aspect of our approach is that we apply genetic crossover to molecules represented by graphs, i.e., sets of atoms and the bonds that connect them. We present evidence suggesting that crossover alone, operating on graphs, can evolve any possible molecule given an appropriate fitness function and a population containing both rings and chains. Prior work evolved strings or trees that were subsequently processed to generate molecular graphs. In principle, genetic graph software should be able to evolve other graph representable systems such as circuits, transportation networks, metabolic pathways, computer networks, etc.
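The evolutionary loop described above can be sketched in Python. For brevity this toy restricts the graph representation to simple atom chains and uses an arbitrary fitness function (the count of C-C bonds) as a stand-in for the paper's task-specific comparator; the paper's approach operates on general molecular graphs including rings.

```python
import random

# A "molecule" as a graph: (atom symbols, set of bonds as index pairs).
# The fitness function here is illustrative, not from the paper.

def fitness(mol):
    atoms, bonds = mol
    return sum(1 for i, j in bonds if atoms[i] == "C" and atoms[j] == "C")

def random_chain(n):
    atoms = [random.choice("CNO") for _ in range(n)]
    bonds = {(i, i + 1) for i in range(n - 1)}  # a simple chain graph
    return atoms, bonds

def crossover(a, b):
    """Splice a prefix of one parent's chain onto a suffix of the other's."""
    cut = random.randint(1, len(a[0]) - 1)
    atoms = a[0][:cut] + b[0][cut:]
    bonds = {(i, i + 1) for i in range(len(atoms) - 1)}
    return atoms, bonds

random.seed(0)
pop = [random_chain(8) for _ in range(30)]        # random initial population
for _ in range(300):
    parent_a, parent_b = random.sample(pop, 2)
    child = crossover(parent_a, parent_b)
    worst = min(range(len(pop)), key=lambda i: fitness(pop[i]))
    if fitness(child) > fitness(pop[worst]):
        pop[worst] = child                        # child replaces the worst
best = max(pop, key=fitness)
print(fitness(best))  # C-C bond count of the fittest chain found
```

Selection pressure comes entirely from replacing low-fitness individuals with crossover offspring, mirroring the paper's claim that crossover alone can drive the search.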

Current sequencing-based and DNA microarray techniques for studying microbial diversity rely on an initial PCR (polymerase chain reaction) amplification step. However, a number of factors are known to bias PCR amplification and jeopardize the true representation of bacterial diversity. PCR amplification of a minor template appears to be suppressed by the exponential amplification of the more abundant template. It is widely acknowledged among environmental molecular microbiologists that genetic biosignatures identified from an environment represent only the most dominant populations. This technological bottleneck has obscured the presence of less abundant minority populations and led to an underestimation of their role in ecosystem maintenance. To generate PCR amplicons for subsequent diversity analysis, bacterial 16S rRNA genes are amplified by PCR using universal primers. Two distinct PCR regimes are employed in parallel: one using normal and the other using biotin-labeled universal primers. PCR products obtained with biotin-labeled primers are mixed with streptavidin-labeled magnetic beads and selectively captured in the presence of a magnetic field. Less abundant DNA templates that fail to amplify in this first round of PCR are then subjected to a second round of PCR using normal universal primers, and these PCR products are subjected to downstream diversity analyses such as conventional cloning and sequencing. The second round of PCR amplified the minority population and completed the picture of deep diversity in the environmental sample.

This book offers a plan for improved classroom practice through the supervisory process. It includes hands-on practices for developing a personalized supervision strategy, research-based and empirically tested strategies, field-tested tools and techniques for qualitative and quantitative observation, a comprehensive resource of traditional and…

This paper presents the results of a study of pecan spectral reflectances. It describes an experiment for measuring the contrast between several components of raw pecan product to be sorted. An analysis of the experimental data reveals high contrast ratios in the infrared spectrum, suggesting a potential improvement in sorting efficiency when separating pecan meat from shells. It is believed that this technique has the potential to dramatically improve the efficiency of current sorting machinery, and to reduce the cost of processing pecans for the consumer market.

Automatic mesh generation and adaptive refinement methods for complex three-dimensional domains have proven to be very successful tools for the efficient solution of complex applications problems. These methods can, however, produce poorly shaped elements that cause the numerical solution to be less accurate and more difficult to compute. Fortunately, the shape of the elements can be improved through several mechanisms, including face-swapping techniques that change local connectivity and optimization-based mesh smoothing methods that adjust grid point location. The authors consider several criteria for each of these two methods and compare the quality of several meshes obtained by using different combinations of swapping and smoothing. Computational experiments show that swapping is critical to the improvement of general mesh quality and that optimization-based smoothing is highly effective in eliminating very small and very large angles. The highest quality meshes are obtained by using a combination of swapping and smoothing techniques.
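As a toy illustration of improving element shape by grid-point relocation, the Python sketch below applies plain Laplacian averaging to a single badly placed interior point of a 2D patch. Production smoothers instead optimize element-quality measures such as minimum angle, which is what "optimization-based smoothing" refers to above; this is a deliberately simplified stand-in.

```python
# Sketch of point smoothing on a 2D patch: each free point is moved
# toward the centroid of its neighbors (Laplacian smoothing).

def laplacian_smooth(points, neighbors, fixed, iters=20):
    """Iteratively move each non-fixed point to its neighbors' centroid."""
    pts = dict(points)
    for _ in range(iters):
        new = {}
        for p, (x, y) in pts.items():
            if p in fixed:
                new[p] = (x, y)          # boundary points stay put
                continue
            nb = neighbors[p]
            cx = sum(pts[q][0] for q in nb) / len(nb)
            cy = sum(pts[q][1] for q in nb) / len(nb)
            new[p] = (cx, cy)
        pts = new
    return pts

# Four fixed corner points and one interior point crowded into a corner,
# which would produce badly shaped (sliver) triangles.
points = {"a": (0, 0), "b": (2, 0), "c": (2, 2), "d": (0, 2),
          "e": (1.8, 1.9)}
neighbors = {"e": ["a", "b", "c", "d"]}
smoothed = laplacian_smooth(points, neighbors, fixed={"a", "b", "c", "d"})
print(smoothed["e"])  # the interior point moves to the centroid (1.0, 1.0)
```

Face swapping, the other mechanism discussed, changes which points are connected rather than where they sit; the two are complementary, as the experiments above indicate.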

The luminosity of the Relativistic Heavy Ion Collider has improved significantly [1] over the first three physics runs. A number of special rf techniques have been developed to facilitate higher luminosity. The techniques described herein include: an ultra-low-noise rf source for the 197 MHz storage rf system, a frequency shift switch-on technique for transferring bunches from the acceleration to the storage system, synchronizing the rings during the energy ramp (including crossing the transition energy) to avoid incidental collisions, installation of dedicated 200 MHz cavities to provide longitudinal Landau damping on the ramp, and the development of a bunch merging scheme in the Booster to increase the available bunch intensity from the injectors.

A tremendous decline in cultivable land and resources and a huge increase in food demand call for immediate attention to crop improvement. Though molecular plant breeding serves as a viable solution and is considered the "foundation for twenty-first century crop improvement", a major stumbling block for crop improvement is the availability of only a limited functional gene pool for cereal crops. Advancements in next generation sequencing (NGS) technologies, integrated with tools like metabolomics, proteomics and association mapping studies, have facilitated the identification of candidate genes and their allelic variants and opened new avenues to accelerate crop improvement through the development and use of functional molecular markers (FMMs). FMMs are developed from sequence polymorphisms present within functional gene(s) that are associated with phenotypic trait variations. Since FMMs obviate the problems associated with random DNA markers, they are considered "the holy grail" of plant breeders who employ targeted marker-assisted selection (MAS) for crop improvement. This review article considers the current resources and novel methods, such as metabolomics, proteomics and association studies, for the identification of candidate genes and their validation through virus-induced gene silencing (VIGS) for the development of FMMs. A number of examples where FMMs have been developed and used for the improvement of cereal crops for agronomic, food quality, disease resistance and abiotic stress tolerance traits are considered. PMID:26171816

This paper describes several improvements in instrumental techniques for the analysis of low ppb concentrations of sulfur gases using gas chromatography (GC). This work has focused on the analytical problem of ambient air monitoring of the two main sulfur gas pollutants, hydrogen sulfide and sulfur dioxide. The most significant technical improvement reported here is the newly developed silica gel column for ppb concentrations of the light sulfur gases (COS, H2S, CS2, SO2, CH3SH). A simplified inlet system is described which improves reliability of the GC system. The flame photometric detector is used as the means of selectively and sensitively detecting the low concentrations of sulfur gases. Improvements are described which have yielded better performance than previously reported for this application of the detector. Also included in this paper is a report of field monitoring using this improved GC system. Reliability and repeatability of performance at the low ppb concentrations of sulfur gases are demonstrated.

The double-edge lidar technique for measuring wind using molecular backscatter is described. Two high-spectral-resolution edge filters are located in the wings of the Rayleigh-Brillouin profile. This doubles the signal change per unit Doppler shift (the sensitivity) and gives nearly a factor-of-two improvement in measurement accuracy. The use of a crossover region, where the sensitivities of molecular- and aerosol-based measurements are equal, is described; this desensitizes the molecular measurement to the effects of aerosol scattering over a range of +/- 100 m/s. We give methods for correcting short-term frequency jitter and drift using a laser reference frequency measurement, and methods for long-term frequency correction using a servo control system. The effects of Rayleigh-Brillouin scattering on the measurement are shown to be significant and are included in the analysis. Simulations for a conical scanning satellite-based lidar at 355 nm show an accuracy of 2-3 m/s for altitudes of 2 to 15 km for a 1 km vertical resolution, a satellite altitude of 400 km, and a 200 km x 200 km spatial resolution. Results of ground-based wind measurements are presented.
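The scale of the signal being measured can be checked with back-of-envelope numbers. The Python fragment below computes the round-trip Doppler shift per unit line-of-sight wind speed at the 355 nm wavelength mentioned above; the double-edge factor-of-two argument is paraphrased in the comments, not modeled.

```python
# Back-of-envelope numbers for a 355 nm Doppler wind lidar.

WAVELENGTH = 355e-9  # m (tripled Nd:YAG, as in the system above)

def doppler_shift(v_los):
    """Round-trip Doppler shift (Hz) of backscattered light for a
    line-of-sight wind speed v_los in m/s: delta_f = 2 * v / lambda."""
    return 2 * v_los / WAVELENGTH

print(f"{doppler_shift(1.0) / 1e6:.2f} MHz per m/s")  # about 5.63 MHz per m/s
# A single edge filter converts this shift into a signal change with some
# slope s. Placing filters on both wings of the Rayleigh-Brillouin profile
# gives +s on one filter and -s on the other, so the differenced signal
# changes by 2s: the doubled sensitivity described above.
```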

Sugar cane is a major source of food and fuel worldwide. Biotechnology has the potential to improve economically important traits in sugar cane as well as to diversify sugar cane beyond traditional applications such as sucrose production. High levels of transgene expression are key to the success of improving crops through biotechnology. Here we describe new molecular tools that both expand and improve gene expression capabilities in sugar cane. We have identified promoters that can be used to drive high levels of gene expression in the leaf and stem of transgenic sugar cane. One of these promoters, derived from the Cestrum yellow leaf curling virus, drives levels of constitutive transgene expression that are significantly higher than those achieved by the historical benchmark maize polyubiquitin-1 (Zm-Ubi1) promoter. A second promoter, the maize phosphoenolpyruvate carboxylase promoter, was found to be a strong, leaf-preferred promoter that enables levels of expression comparable to Zm-Ubi1 in this organ. Transgene expression was increased approximately 50-fold by gene modification, which included optimising the codon usage of the coding sequence to better suit sugar cane. We also describe a novel dual transcriptional enhancer that increased gene expression from different promoters, boosting expression from Zm-Ubi1 over eightfold. These molecular tools will be extremely valuable for the improvement of sugar cane through biotechnology. PMID:24150836
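Codon optimisation of the kind mentioned above re-encodes a coding sequence using the host's preferred synonymous codon for each amino acid, leaving the protein unchanged. The Python sketch below uses a deliberately tiny, made-up preference table; it is not an actual sugar cane codon-usage table.

```python
# Sketch of codon optimisation: swap each codon for a preferred synonym.
# Both tables below are illustrative fragments, not real usage data.

PREFERRED = {"M": "ATG", "K": "AAG", "L": "CTG", "G": "GGC", "*": "TGA"}

STANDARD = {  # minimal codon -> amino-acid map covering this example
    "ATG": "M", "AAA": "K", "AAG": "K", "TTA": "L", "CTG": "L",
    "GGA": "G", "GGC": "G", "TGA": "*",
}

def translate(dna):
    """Translate a coding sequence codon by codon."""
    return "".join(STANDARD[dna[i:i + 3]] for i in range(0, len(dna), 3))

def optimise(dna):
    """Re-encode each codon with the host-preferred synonym."""
    return "".join(PREFERRED[translate(dna[i:i + 3])]
                   for i in range(0, len(dna), 3))

original = "ATGAAATTAGGATGA"  # encodes M K L G *
optimised = optimise(original)
assert translate(optimised) == translate(original)  # same protein encoded
print(optimised)
```

Real optimisation pipelines also weigh GC content, mRNA structure, and avoidance of cryptic splice or restriction sites, which this sketch ignores.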

The study of marine microorganisms using molecular biological techniques is now widespread in the ocean sciences. These techniques target nucleic acids which record the evolutionary history of microbes, and encode for processes which are active in the ocean today. Here we review some of the most commonly used molecular biological techniques. Molecular biological techniques permit study of the abundance, distribution, diversity, and physiology of microorganisms in situ. These techniques include the polymerase chain reaction (PCR) and reverse-transcriptase PCR, quantitative PCR, whole assemblage "fingerprinting" approaches (based on nucleic acid sequence or length heterogeneity), oligonucleotide microarrays, and high-throughput shotgun sequencing of whole genomes and gene transcripts, which can be used to answer biological, ecological, evolutionary and biogeochemical questions in the ocean sciences. Moreover, molecular biological approaches may be deployed on ocean sensor platforms and hold promise for tracking of organisms or processes of interest in near-real time.

Myxomycetes are organisms characterized by a life cycle that includes a fruiting body stage. Myxomycete fruiting bodies contain spores, and wind dispersal of the spores is considered important for these organisms to colonize new areas. In this study, the presence of airborne myxomycetes and the temporal changes in the myxomycete composition of atmospheric particles (aerosols) were investigated with a polymerase chain reaction (PCR)-based method for Didymiaceae and Physaraceae. Twenty-one aerosol samples were collected on the roof of a three-story building located in Sapporo, Hokkaido Island, northern Japan. PCR analysis of DNA extracts from the aerosol samples indicated the presence of airborne myxomycetes in all the samples except the one collected during the snowfall season. Denaturing gradient gel electrophoresis (DGGE) analysis of the PCR products showed seasonally varying banding patterns. The detected DGGE bands were subjected to sequence analyses, and four of the nine obtained sequences were identical to those of fruiting body samples collected on Hokkaido Island. Differences in the fruiting period of each species appear to be correlated with the seasonal changes in the myxomycete composition of the aerosols. The molecular evidence shows that newly formed spores are released and dispersed in the air, suggesting that wind-driven dispersal of spores is an important process in the life history of myxomycetes. This study is the first to detect airborne myxomycetes with the use of molecular ecological analyses and to characterize their seasonal distribution.

Molecular biological methods, such as the polymerase chain reaction (PCR) and gel electrophoresis, are now commonly taught to students in introductory biology courses at the college and even high school levels. This often includes hands-on experience with one or more molecular techniques as part of a general biology laboratory. To assure that most…

This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction among various blade rows and by blade-to-blade variation of flow properties. The work will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

Earthquake location using relative arrival time measurements can lead to dramatically reduced location errors and a view of fault-zone processes with unprecedented detail. There are two principal reasons why this approach reduces location errors. The first is that the use of differenced arrival times to solve for the vector separation of earthquakes removes from the earthquake location problem much of the error due to unmodeled velocity structure. The second reason, on which we focus in this article, is that waveform cross correlation can substantially reduce measurement error. While cross correlation has long been used to determine relative arrival times with subsample precision, we extend correlation measurements to less similar waveforms, and we introduce a general quantitative means to assess when correlation data provide an improvement over catalog phase picks. We apply the technique to local earthquake data from the Calaveras Fault in northern California. Tests for an example streak of 243 earthquakes demonstrate that relative arrival times with normalized cross correlation coefficients as low as approximately 70%, interevent separation distances as large as 2 km, and magnitudes up to 3.5 as recorded on the Northern California Seismic Network are more precise than relative arrival times determined from catalog phase data. Also discussed are improvements made to the correlation technique itself. We find that for large time offsets, our implementation of time-domain cross correlation is often more robust and recovers more observations than the cross-spectral approach. Longer time windows give better results than shorter ones. Finally, we explain how thresholds and empirical weighting functions may be derived to optimize the location procedure for any given region of interest, taking advantage of the respective strengths of diverse correlation and catalog phase data on different length scales.
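The core measurement, cross correlation with subsample refinement of the peak, can be sketched as follows. The waveforms here are synthetic Gaussian pulses, and the parabolic-interpolation step is a common choice for subsample precision, not necessarily the exact implementation used in the study.

```python
import math

def xcorr_lag(a, b):
    """Return the lag of b relative to a, refined to subsample precision
    by parabolic interpolation around the integer-lag correlation peak."""
    n = len(a)
    cc = {}
    best_lag, best_cc = 0, -math.inf
    for lag in range(-n + 1, n):
        s = sum(b[i] * a[i - lag]
                for i in range(max(0, lag), min(n, n + lag)))
        cc[lag] = s
        if s > best_cc:
            best_lag, best_cc = lag, s
    # Fit a parabola through the peak and its two neighbors.
    y0 = cc.get(best_lag - 1, 0.0)
    y1 = cc[best_lag]
    y2 = cc.get(best_lag + 1, 0.0)
    denom = y0 - 2 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return best_lag + frac

# Synthetic waveforms: a Gaussian pulse and a copy delayed by 3 samples.
pulse = [math.exp(-((t - 10) ** 2) / 8.0) for t in range(40)]
delayed = [math.exp(-((t - 13) ** 2) / 8.0) for t in range(40)]
print(round(xcorr_lag(pulse, delayed), 2))  # recovers the 3-sample delay
```

Differencing such lags between station pairs is what yields the relative arrival times whose precision the article compares against catalog phase picks.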

Techniques are described for producing improved infrared bolometers from doped germanium. Ion implantation and sputter metalization have been used to make ohmic electrical contacts to Ge:Ga chips. This method results in a high yield of small monolithic bolometers with very little low-frequency noise. When one of these chips is used as the thermometric element of a composite bolometer, it must be bonded to a dielectric substrate. The thermal resistance of the conventional epoxy bond has been measured and found to be undesirably large. A procedure for soldering the chip to a metalized portion of the substrate is described which reduced this resistance. The contribution of the metal film absorber to the heat capacity of a composite bolometer has been measured. The heat capacity of a NiCr absorber at 1.3 K can dominate the bolometer performance. A Bi absorber has significantly lower heat capacity. A low temperature blackbody calibrator has been built to measure the optical responsivity of bolometers. A composite bolometer system with a throughput of approx. 0.1 sr sq cm was constructed using the new techniques. In negligible background it has an optical NEP of 3.6 x 10^-15 W/sqrt(Hz) at 1.0 K with a time constant of 20 ms. The noise in this bolometer is white above 2.5 Hz and is somewhat below the value predicted by thermodynamic equilibrium theory. It is in agreement with calculations based on a recent nonequilibrium theory.

The planetary radar (e.g. MARSIS) data inversion is based on the selection of groups of stationary frames, within the area under investigation, that are statistically analyzed after suitable correction. The selection step includes the recovery of bad or poor data and the estimation of the geometrical surface and subsurface features; these features are then used to obtain data that depend only on the material nature of the inclusions within the layer and of the interface. This paper addresses the techniques used for frame selection, recovery, and estimation of their geometric content. As a first step, frames were selected in areas of Mars where the surface and subsurface have a physical-optics behavior (i.e. are quite flat); surface flatness was estimated with a simulator based on MOLA (Mars Orbiter Laser Altimeter) data, while the subsurface was assessed from the content of the Doppler filters (i.e. filters 0, +1, -1). Because the surface and subsurface are quite flat, only small geometric contributions were estimated and used to correct the received echoes. To perform this task, surface and subsurface models were developed, under the Kirchhoff approximation hypothesis, for comparison with the experimental data. A figure showing the different material nature of different areas of the Mars South Pole has been drawn. The discovery of areas with a high dielectric constant led geologists to analyze those areas with other instruments to confirm the results obtained by MARSIS. This paper also outlines directions for future work on more complex surface and subsurface scenarios where conditions for geometric optics or fractal behavior may be present. In that case, it will be mandatory to develop a clutter-cancellation technique to avoid false subsurface echoes generated by surface and subsurface features not immediately below the nadir direction of observation. It will also be necessary

This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction among the various blade rows and by blade-to-blade variation of flow properties. The program tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows, and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

The empirical phase diagram (EPD) is a colored representation of overall structural integrity and conformational stability of macromolecules in response to various environmental perturbations. Numerous proteins and macromolecular complexes have been analyzed by EPDs to summarize results from large data sets from multiple biophysical techniques. The current EPD method suffers from a number of deficiencies including lack of a meaningful relationship between color and actual molecular features, difficulties in identifying contributions from individual techniques, and a limited ability to be interpreted by color-blind individuals. In this work, three improved data visualization approaches are proposed as techniques complementary to the EPD. The secondary, tertiary, and quaternary structural changes of multiple proteins as a function of environmental stress were first measured using circular dichroism, intrinsic fluorescence spectroscopy, and static light scattering, respectively. Data sets were then visualized as (1) RGB colors using three-index EPDs, (2) equiangular polygons using radar charts, and (3) human facial features using Chernoff face diagrams. Data as a function of temperature and pH for bovine serum albumin, aldolase, and chymotrypsin as well as candidate protein vaccine antigens including a serine threonine kinase protein (SP1732) and surface antigen A (SP1650) from S. pneumoniae and hemagglutinin from an H1N1 influenza virus are used to illustrate the advantages and disadvantages of each type of data visualization technique. PMID:22898970

Massive loss of valuable plant species in the past centuries and its adverse impact on environmental and socioeconomic values has triggered the conservation of plant resources. Appropriate identification and characterization of plant materials is essential for the successful conservation of plant resources and to ensure their sustainable use. Molecular tools developed in the past few years provide easy, less laborious means for assigning known and unknown plant taxa. These techniques answer many new evolutionary and taxonomic questions, which were not previously possible with only phenotypic methods. Molecular techniques such as DNA barcoding, random amplified polymorphic DNA (RAPD), amplified fragment length polymorphism (AFLP), microsatellites and single nucleotide polymorphisms (SNP) have recently been used for plant diversity studies. Each technique has its own advantages and limitations. These techniques differ in their resolving power to detect genetic differences, type of data they generate and their applicability to particular taxonomic levels. This review presents a basic description of different molecular techniques that can be utilized for DNA fingerprinting and molecular diversity analysis of plant species. PMID:20559503

This chapter of the new barley monograph summarizes current applications of molecular genetics and transformation to barley improvement. The chapter describes recent applications of molecular markers including association genetics, QTL mapping and marker assisted selection in barley programs, and in...

Liquid-liquid extraction technique speeds up separation of biological fluids into a number of compounds. This eliminates agitation, emulsion formation, centrifugation, mechanical separation of phases, filtration, and other steps that were used previously. Extraction efficiencies are equal to or better than those of current manual liquid-liquid extraction techniques.

Ion plating technique keeps the substrate surface clean until the film is deposited, allows extensive diffusion and chemical reaction, and joins insoluble or incompatible materials. The technique involves the deposition of ions on the substrate surface while it is being bombarded with inert gas ions.

… reduction, i.e., small χ² reduction with large changes of ΔK. Under the effects of random noise, the fitting solution tends to crawl toward these patterns and ends up with unrealistically large ΔK. Such a solution is not very useful in optics correction because, after the solution is dialed in, the quadrupoles will not respond as predicted by the lattice model due to magnet hysteresis. We will show that adding constraints to the fitting parameters is an effective way to combat this problem of LOCO. In fact, it improves optics calibration precision even for machines that don't show severe degeneracy behavior. LOCO fitting essentially solves a nonlinear least squares problem with an iterative approach. The linear least squares technique is applied in each iteration to move the solution toward the minimum. This approach is commonly referred to as the Gauss-Newton method. By using singular value decomposition (SVD) to invert the Jacobian matrix, this method has generally been very successful for LOCO. However, this method is based on a linear expansion of the residual vector over the fitting parameters, which is valid only when the starting solution is sufficiently close to the real minimum. The fitting algorithm can have difficulty converging when the initial guess is too far off. For example, it is possible for the χ² merit function to increase after an iteration instead of decreasing. This situation can be improved by using more robust nonlinear least squares fitting algorithms, such as the Levenberg-Marquardt method. We will discuss the degeneracy problem in section 2 and then show how constrained fitting can help in section 3. The application of the Levenberg-Marquardt method to LOCO is shown in section 4. A summary is given in section 5.
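The constrained fit described above can be sketched as an augmented least-squares problem: penalty rows on the parameter changes are appended below the Jacobian, and a damping term gives Levenberg-Marquardt behavior. This is illustrative code under stated assumptions, not the LOCO implementation; the function name, penalty weight, and damping value are all placeholders:

```python
import numpy as np

def constrained_gauss_newton(residual, jacobian, k0, weight=1e-2,
                             lam=1e-3, iters=20):
    """Toy LOCO-style fit: minimize |r(K)|^2 + weight*|K - k0|^2.
    The constraint rows penalize large parameter changes (e.g.
    quadrupole-strength deltas), suppressing degenerate directions;
    lam is a Levenberg-Marquardt damping term that keeps steps small
    when the linearization is poor."""
    k = k0.copy()
    n = len(k)
    for _ in range(iters):
        r = residual(k)
        J = jacobian(k)
        # augmented system: [J; sqrt(w) I] dk = -[r; sqrt(w)(k - k0)]
        A = np.vstack([J, np.sqrt(weight) * np.eye(n)])
        b = -np.concatenate([r, np.sqrt(weight) * (k - k0)])
        # damped normal equations: (A^T A + lam I) dk = A^T b
        dk = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        k = k + dk
    return k
```

With the penalty weight set to zero and lam set to zero this reduces to the plain Gauss-Newton iteration the abstract describes; increasing either tames the degenerate directions at the cost of a slight bias toward the starting solution.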

The rapid technological development in diagnostic pathology, especially of immunohistochemical and molecular techniques, also has a significant impact on diagnostic procedures for the evaluation of bone marrow trephine biopsies. The necessity for optimal morphology, combined with preservation of tissue antigens and nucleic acids on one hand and the wish for short turnaround times on the other hand require careful planning of the workflow for fixation, decalcification and embedding of trephines. Although any kind of bone marrow processing has its advantages and disadvantages, formalin fixation followed by EDTA decalcification can be considered a good compromise, which does not restrict the use of molecular techniques. Although the majority of molecular studies in haematological neoplasms are routinely performed on bone marrow aspirates or peripheral blood cells, there are certain indications, in which molecular studies such as clonality determination or detection of specific mutations need to be performed on the trephine biopsy. Especially, the determination of B- or T-cell clonality for the diagnosis of lymphoid malignancies requires stringent quality controls and knowledge of technical pitfalls. In this review, we discuss technical aspects of bone marrow biopsy processing and the application of diagnostic molecular techniques. PMID:23085692

The U.S. Environmental Protection Agency is interested in field screening hazardous waste sites for contaminants in the soil and surface and ground water. This study is an initial technical overview of the principal molecular spectroscopic techniques and instrumentation currently ...

molecules such as CH5+ with highly non-classical behavior, and for tests of fundamental physics. We have developed a new technique---frequency comb velocity-modulation spectroscopy---that is the first system to enable rapid, broadband spectroscopy of molecular ions with high resolution. We have demonstrated the ability to record 150 cm-1 of spectra consisting of 45,000 points in 30 minutes and have used this system to record over 1000 cm-1 of spectra of HfF+ in the near-infrared around 800 nm. After improvements, the system can now cover more than 3250 cm-1 (700-900 nm). We have combined this with standard velocity-modulation spectroscopy to measure and analyze 19 ro-vibronic bands of HfF+. These measurements enabled precision spectroscopy of trapped HfF+ for testing time-reversal symmetry. For this experiment, we perform Ramsey spectroscopy between spin states in the metastable 3Delta1 level to look for a permanent electric dipole moment of the electron with what we believe is the narrowest line observed in a molecular system (Fourier limited with 500 ms of coherence time). The long coherence time is a major advantage of using ions, but there are also some added complexities. We discuss various aspects of metastable state preparation, state detection, and spectroscopy in a rotating frame (due to the necessary rotating electric bias field) that were particularly challenging. In addition, we discuss limits to the coherence time---in particular, ion-ion collisions---as well as the sensitivity of the current measurements and provide a path towards a new limit on the electric dipole moment of the electron.

Tritrichomonas foetus (T. foetus) is the causative agent of bovine trichomonosis, a sexually transmitted disease leading to abortion (from 1 to 8 months gestation), infertility, and occasional pyometra. The annual losses to the U.S. beef industry are estimated to be in the hundreds of millions of dollars. Currently, the "gold standard" diagnostic test for trichomonosis in most countries is the cultivation of live organisms from reproductive secretions. The cultured organisms can then be followed by PCR assays with primers that amplify T. foetus to the exclusion of all other trichomonad species. Thus, a negative result presents as null data, indistinguishable from a failed PCR amplification during T. foetus-specific amplification. Our newly developed assay improves previously developed PCR-based techniques by using diagnostic size variants from within the internal transcribed spacer 1 (ITS1) region that lies between the 18S rRNA and 5.8S rRNA subunits. This new PCR assay amplifies trichomonad DNA from a variety of genera and positively identifies the causative agent of the bovine trichomonad infection. This approach eliminates false negatives found in some current assays while also identifying the causative agent of trichomonad infection. Additionally, our assay incorporates a fluorescently labeled primer enabling high sensitivity and rapid assessment of the specific trichomonad species. Moreover, electrophoretic separation of amplified samples can be outsourced, thus eliminating the need for diagnostic laboratories to purchase expensive analysis equipment. PMID:15619373
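The size-variant logic behind such an assay can be illustrated as a lookup with a tolerance window. The species list and fragment sizes below are placeholders for illustration only, not the validated diagnostic values from this assay:

```python
# Hypothetical ITS1 amplicon sizes in base pairs; real diagnostic
# values must come from the validated assay.
ITS1_SIZE_VARIANTS = {
    "Tritrichomonas foetus": 208,
    "Tetratrichomonas spp.": 220,
    "Pentatrichomonas hominis": 235,
}

def call_species(fragment_bp, tolerance=3):
    """Assign a trichomonad species from a fluorescently labeled
    fragment size, within +/- tolerance bp. Because the primers
    amplify across genera, any product at all rules out a PCR
    failure, so an unmatched size is still informative."""
    for species, size in ITS1_SIZE_VARIANTS.items():
        if abs(fragment_bp - size) <= tolerance:
            return species
    return "unrecognized trichomonad"
```

This is the key difference from T. foetus-exclusive primers: a sample that amplifies but does not match the T. foetus size is a true negative with an identified cause, not null data.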

A study examined the use of sensory integration techniques to reduce the maladaptive behaviors that interfered with the learning of nine high school students with mental impairments attending a special school. Maladaptive behaviors identified included rocking, toe walking, echolalia, resistance to change, compulsive behaviors, aggression,…

We report a modified technique for pulmonary endarterectomy (PEA) on a 67-year-old man with chronic thromboembolic pulmonary hypertension (CTEPH) who presented with dyspnea. He was referred to our medical center for coronary artery bypass grafting. CTEPH had not been detected in his first visit to another medical center, but upon re-evaluation, the diagnosis was confirmed. PEA was performed with a modified method, which seems to be safe and suitable for the removal of clot and fibrotic materials. Iatrogenic dissection was performed with normal saline injection in the pulmonary artery, and then, the clot was removed completely. Although the technique may not be applicable for all cases, it can be used as an alternative to using an aspirating dissector and a pair of forceps. PMID:25207229

Abdominal imaging is one of the important clinical applications of magnetic resonance imaging, but image degradation due to respiratory motion remains a major problem. The retrospective respiratory navigator gating technique is an effective approach to alleviate such degradation but is subject to long scan time and low signal-to-noise ratio (SNR) efficiency. In this study, a modified retrospective navigator gating technique with variable over-sampling ratio acquisition and a weighted average reconstruction algorithm is presented. Experiments in a phantom and the imaging results of seven volunteers demonstrated that the proposed method provided an enhanced SNR and reduced ghost-to-image ratio compared to the conventional method. The proposed method can also be used to reduce imaging time while maintaining comparable image quality. PMID:27079107
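A weighted average reconstruction of this kind can be sketched for a single k-space line: each repeated acquisition is weighted by how far its navigator-measured respiratory displacement lies from the reference position, rather than being accepted or rejected outright. The Gaussian weighting and its width are assumptions for illustration, not the published algorithm:

```python
import numpy as np

def weighted_average_line(acquisitions, displacements_mm, sigma_mm=2.0):
    """Combine repeated acquisitions of one k-space line.
    Each repeat is weighted by a Gaussian of its navigator
    displacement from the end-expiration reference, so repeats far
    outside the gating window contribute little instead of being
    discarded, which is what recovers SNR efficiency."""
    w = np.exp(-0.5 * (np.asarray(displacements_mm) / sigma_mm) ** 2)
    w /= w.sum()
    return np.tensordot(w, np.asarray(acquisitions), axes=1)
```

With a hard accept/reject window this degenerates to conventional gating; the soft weights are what let a variable over-sampling ratio trade scan time against motion suppression.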

A substantial fraction of fine mode aerosols are organic with the majority formed in the atmosphere through oxidation of gas phase compounds emitted from a variety of natural and man-made sources. As a result, organic aerosols are comprised of thousands of individual organic species whose complexity increases exponentially with carbon number and degree of atmospheric oxidation. Chemical characterization of individual compounds present in this complex mixture provides information on sources and transformation processes that are critical for apportioning organic carbon from an often convoluted mixture of sources and to constrain oxidation mechanisms needed for atmospheric models. These compounds also affect the physical and optical properties of the aerosol but the vast majority remain unidentified and missing from published mass spectral libraries because of difficulties in separating and identifying them. We have developed improved methodologies for chemical identification in order to better understand complex environmental mixtures. Our approach has been to combine two-dimensional gas chromatography with high resolution time of flight mass spectrometry (GC×GC-HRTOFMS) and both traditional electron ionization (EI) and vacuum ultraviolet (VUV) photoionization. GC×GC provides improved separation of individual compounds over traditional one dimensional GC and minimizes co-elution of peaks resulting in mass spectra that are virtually free of interferences. VUV ionization is a 'soft' ionization technique that reduces fragmentation and enhances the abundance of the parent or molecular ion, which when combined with high resolution mass spectrometry can provide molecular formulas for chromatographic peaks. We demonstrate our methodology by applying it to identify more than 500 individual compounds in aerosol filter samples collected at Blodgett Forest, a rural site in the Sierra Nevada Mountains. Using the EI NIST mass spectral library and molecular formulas determined

Coronary artery disease leads to the accumulation of atheromatous plaque leading to coronary stenosis. Coronary intervention techniques such as balloon angioplasty and atherectomy are used to address coronary stenosis and establish a stable lumen, thus enhancing blood flow to the myocardium. Restenosis, or re-blockage of the arteries, is a major limitation of the above-mentioned interventional techniques. Neointimal hyperplasia, the proliferation of cells in response to the vascular injury caused by coronary intervention, is considered to be one of the major causes of restenosis. Recent studies indicated that irradiation of the coronary lesion site, with radiation doses ranging from 15 to 30 Gy, leads to diminished neointimal hyperplasia with subsequent reduction in restenosis. The radiation dose is given by catheter-based radiation delivery systems using beta-emitters 90Sr/90Y and 32P and gamma-emitting 192Ir, among others. However, the dose schemes used for prescription with these sources are relatively simplistic, being based on calculations assuming a uniform homogeneous water or tissue medium and simple cylinder geometry. Stenotic coronary vessels are invariably lined with atheromatous plaque of heterogeneous composition, so dose distributions obtained from such dosimetry data can differ significantly from the actual dose received by a given patient. Such discrepancies in dose calculation can introduce relatively large uncertainties in the limits of the dose window for effective and safe application of intravascular brachytherapy, and consequently in the clinical evaluation of the efficacy of this modality. In this research study we investigated the effect of different geometrical and material heterogeneities, including residual plaque, catheter non-centering, lesion eccentricity and cardiac motion, on the radiation dose delivered at the lesion site. Correction factors including dose perturbation factors and dose variation factors have been calculated

Molecular imaging techniques have led to significant advances in understanding the pathophysiology of schizophrenia and contributed to knowledge regarding potential mechanisms of action of the drugs used to treat this illness. The aim of this article is to provide a review of the major findings related to the application of molecular imaging techniques that have furthered schizophrenia research. This article focuses specifically on neuroreceptor imaging studies with PET and SPECT. After providing a brief overview of neuroreceptor imaging methodology, we consider relevant findings from studies of receptor availability, and dopamine synthesis and release. Results are discussed in the context of current hypotheses regarding neurochemical alterations in the illness. We then selectively review pharmacological occupancy studies and the role of neuroreceptor imaging in drug development for schizophrenia. PMID:21243081

Early diagnosis and effective monitoring of rheumatoid arthritis (RA) are important for a positive outcome. Instant treatment often results in faster reduction of inflammation and, as a consequence, less structural damage. Anatomical imaging techniques have been in use for a long time, facilitating diagnosis and monitoring of RA. However, mere imaging of anatomical structures provides little information on the processes preceding changes in synovial tissue, cartilage, and bone. Molecular imaging might facilitate more effective diagnosis and monitoring in addition to providing new information on the disease pathogenesis. A limiting factor in the development of new molecular imaging techniques is the availability of suitable probes. Here, we review which cells and molecules can be targeted in the RA joint and discuss the advances that have been made in imaging of arthritis with a focus on such molecular targets as folate receptor, F4/80, macrophage mannose receptor, E-selectin, intercellular adhesion molecule-1, phosphatidylserine, and matrix metalloproteinases. In addition, we discuss a new tool that is being introduced in the field, namely the use of nanobodies as tracers. Finally, we describe additional molecules displaying specific features in joint inflammation and propose these as potential new molecular imaging targets, more specifically receptor activator of nuclear factor κB and its ligand, chemokine receptors, vascular cell adhesion molecule-1, αVβ3 integrin, P2X7 receptor, suppression of tumorigenicity 2, dendritic cell-specific transmembrane protein, and osteoclast-stimulatory transmembrane protein. PMID:25099015

The control system design of a dc to 10 kHz bandwidth 45 kVA current sourced power amplifier suitable for geophysical exploration applications is presented. A five-level modulation scheme has been implemented using a modified bridge topology with only four switches. This scheme gives an order of magnitude improvement in switching ripple and control performance over two-level modulation. Using this system, a 50 kHz switch frequency allows a 20 kHz, -3 dB bandwidth to be easily achieved. Simulation as well as tenth-scale model test results are presented. The current output waveform reproduction is of high quality over the rated dc to 10 kHz frequency range. The THD is 0.3% at 1 kHz.
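THD figures such as the 0.3% quoted above are typically computed from an FFT of the output waveform, taking the RMS of the harmonics relative to the fundamental. A minimal sketch (the window choice and harmonic count are illustrative, not from this paper):

```python
import numpy as np

def thd_percent(signal, fs, f0, nharm=10):
    """Total harmonic distortion: RMS amplitude of harmonics
    2..nharm relative to the fundamental f0, from an FFT of a
    record containing an integer number of periods."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)

    def amp(f):
        # amplitude of the bin nearest frequency f
        return spec[np.argmin(np.abs(freqs - f))]

    fund = amp(f0)
    harm = np.sqrt(sum(amp(k * f0) ** 2 for k in range(2, nharm + 1)))
    return 100.0 * harm / fund
```

Because the Hann window scales the fundamental and harmonic bins by the same coherent gain, the ratio (and hence the THD percentage) is unaffected by the windowing.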

Ion implantation and sputter metallization are used to produce ohmic electrical contacts to Ge:Ga chips. The method is shown to give a high yield of small monolithic bolometers with very little low-frequency noise. It is noted that when one of the chips is used as the thermometric element of a composite bolometer it must be bonded to a dielectric substrate. The thermal resistance of the conventional epoxy bond is measured and found to be undesirably large. A procedure for soldering the chip to a metallized portion of the substrate in such a way as to reduce this resistance is outlined. An evaluation is made of the contribution of the metal film absorber to the heat capacity of a composite bolometer. It is found that the heat capacity of a NiCr absorber at 1.3 K can dominate the bolometer performance. A Bi absorber possesses significantly lower heat capacity. A low-temperature blackbody calibrator is built to measure the optical responsivity of bolometers. A composite bolometer system with a throughput of approximately 0.1 sr sq cm is constructed using the new techniques. The noise in this bolometer is white above 2.5 Hz and is slightly below the value predicted by thermodynamic equilibrium theory.

The objective of this study was to develop concepts, specifications, designs, techniques, and procedures capable of significantly reducing the time required to connect and verify umbilicals for ground services to the space shuttle. The desired goal was to reduce the current time requirement of several shifts for the Saturn 5/Apollo to an elapsed time of less than one hour to connect and verify all of the space shuttle ground service umbilicals. The study was conducted in four phases: (1) literature and hardware examination, (2) concept development, (3) concept evaluation and tradeoff analysis, and (4) selected concept design. The final product of this study was a detail design of a rise-off disconnect panel prototype test specimen for a LO2/LH2 booster (or an external oxygen/hydrogen tank for an orbiter), a detail design of a swing-arm mounted preflight umbilical carrier prototype test specimen, and a part 1 specification for the umbilical connect and verification design for the vehicles as defined in the space shuttle program.

An improved diffusion welding technique has been developed for TD-NiCr sheet. In the most preferred form, the improved technique consists of diffusion welding 320-grit sanded plus chemically polished surfaces of unrecrystallized TD-NiCr at 760 C under 140 MN/m2 pressure for 1 hr followed by postheating at 1180 C for 2 hr. Compared to previous work, this improved technique has the advantages of shorter welding time, lower welding temperature, lower welding pressure, and a simpler and more reproducible surface preparation procedure. Weldments were made that had parent-metal creep-rupture shear strength at 1100 C.

A simple technique for sequentially Q-switching molecular lasers is discussed in which an optical scanner is used as an optical folding element in a laser cavity consisting of a stationary diffraction grating and partially reflecting mirror. Sequential Q-switching of a conventional CO2 laser is demonstrated in which over sixty-two transitions between 9.2 and 10.8 microns are observed. Rapid repetition rates (200 Hz) and narrow laser pulses (less than 5 microsec) allow conventional signal processing techniques to be used with this multiwavelength laser source which is a versatile tool for laser propagation studies, absorption spectroscopy, and gain measurements. Results of a preliminary experiment demonstrating the utility of measuring selective absorption of CO2 laser wavelengths by C2H4 are shown.

Modern processors are using increasingly larger sized on-chip caches. Also, with each CMOS technology generation, there has been a significant increase in their leakage energy consumption. For this reason, cache power management has become a crucial research issue in modern processor design. To address this challenge and also meet the goals of sustainable computing, researchers have proposed several techniques for improving energy efficiency of cache architectures. This paper surveys recent architectural techniques for improving cache power efficiency and also presents a classification of these techniques based on their characteristics. For providing an application perspective, this paper also reviews several real-world processor chips that employ cache energy saving techniques. The aim of this survey is to enable engineers and researchers to get insights into the techniques for improving cache power efficiency and motivate them to invent novel solutions for enabling low-power operation of caches.

Individual microspheres labeled with a unique barcode and a surface-bound probe are able to provide multiplexed biological assays in a convenient and high-throughput format. Typically, barcodes are created by impregnating microspheres with several colors of fluorophores mixed at different intensity levels. The number of barcodes is limited to hundreds primarily due to variability in fluorophore loading and difficulties in compensating for signal crosstalk. We constructed a molecular barcode based on differences in lifetimes rather than intensities. Lifetime-based measurements have an advantage in that signal from neighboring channels is reduced (because signal intensities are equal) and may be mathematically deconvoluted. The excited state lifetime of quantum dots (QDs) was systematically altered by attaching a variable number of quencher molecules to the surface. We have synthesized a series of ten QDs with distinguishable lifetimes all emitting at the same wavelength. The QDs were loaded into microspheres to determine the expected signal intensities. The uncertainty in lifetimes as a function of the interrogation time was determined. An acceptable standard deviation (3%) was obtained with a measurement time of approximately 10-30 μsec. Currently, we are expanding these studies to include multiple wavelengths and determining the maximal number of barcodes for a given spectral window.
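The lifetime readout can be illustrated simply: for a single-exponential decay, the maximum-likelihood estimate of the lifetime is the mean photon arrival time, which is then matched against the library of barcode lifetimes. The library values and tolerance below are assumptions for illustration, not the synthesized QD series:

```python
import numpy as np

def fit_lifetime(arrival_times_ns):
    """ML lifetime of a single-exponential decay: for arrival times
    t ~ Exp(1/tau) measured from the excitation instant, the ML
    estimate of tau is simply the sample mean."""
    return float(np.mean(arrival_times_ns))

def classify_barcode(tau_ns, library_ns, rel_tol=0.1):
    """Match a measured lifetime against a library of barcode
    lifetimes (all emitting at the same wavelength); returns the
    index of the nearest library entry."""
    library = np.asarray(library_ns, dtype=float)
    i = int(np.argmin(np.abs(library - tau_ns)))
    if abs(library[i] - tau_ns) > rel_tol * library[i]:
        raise ValueError("lifetime outside barcode library tolerance")
    return i
```

The 3% lifetime uncertainty quoted in the abstract sets how closely the library entries can be spaced within one spectral window; with 10 distinguishable lifetimes per wavelength, each additional wavelength multiplies the barcode count rather than adding to it.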

The molecular genetic tools used in fission yeast have generally been adapted from methods and approaches developed for use in the budding yeast, Saccharomyces cerevisiae. Initially, the molecular genetics of Schizosaccharomyces pombe was developed to aid gene identification, but it is now applied extensively to the analysis of gene function and the manipulation of noncoding sequences that affect chromosome dynamics. Much current research using fission yeast thus relies on the basic processes of introducing DNA into the organism and the extraction of DNA for subsequent analysis. Targeted integration into specific genomic loci is often used to create site-specific mutants or changes to noncoding regulatory elements for subsequent phenotypic analysis. It is also regularly used to introduce additional sequences that generate tagged proteins or to create strains in which the levels of wild-type protein can be manipulated through transcriptional regulation and/or protein degradation. Here, we draw together a collection of core molecular genetic techniques that underpin much of modern research using S. pombe. We summarize the most useful methods that are routinely used and provide guidance, learned from experience, for the successful application of these methods. PMID:27140925

Nano- and micro-confined fluid flows are often characterised by non-continuum effects that require special treatment beyond the scope of conventional continuum-fluid modelling. However, if the flow system has high-aspect-ratio components (e.g. long narrow channels) the computational cost of a fully molecular-based simulation can be prohibitive. In this talk we present some important elements of a heterogeneous molecular-continuum method that exploits the various degrees of scale separation in both time and space that are very often present in these types of flows. We demonstrate the ability of these techniques to predict the flow of water in aligned carbon nanotube (CNT) membranes: the tube diameters are 1-2 nm and the tube lengths (i.e. the membrane thicknesses) are 2-6 orders of magnitude larger. We compare our results with experimental data. We also find very good agreement with experimental results for a 1 mm thick membrane that has CNTs of diameter 1.59 nm. In this case, our hybrid multiscale simulation is orders of magnitude faster than a full molecular dynamics simulation.

An improved technique has been developed for studies of the shear viscosity of fluids. It utilizes an acoustic resonator as a four-terminal electrical device; the resonator's amplitude response may be determined directly and simply related to the fluid's viscosity. The use of this technique is discussed briefly, and data obtained in several fluids are presented.

Marketers in the business world have long used focus group interviews and survey techniques to explore the attitudes, behaviors, and perceptions of their customers. In the college setting, these same techniques are now being used to improve program quality, assess the effectiveness of publications, and explore the image of the college. At Durham…

The goals of this project were to: (1) assemble and analyze a comprehensive database of past waste injection operations; (2) develop improved diagnostic techniques for monitoring fracture growth and formation changes; (3) develop operating guidelines to optimize daily operations and the ultimate storage capacity of the target formation; and (4) apply these improved models and guidelines in the field.

Candida albicans is one of the most common fungal pathogens in humans, owing to its high frequency as an opportunistic pathogen causing superficial as well as invasive infections in immunocompromised patients. An understanding of gene function in C. albicans is necessary to study the molecular basis of its pathogenesis, virulence and drug resistance. Several manipulation techniques have been used for the investigation of gene function in C. albicans, including gene disruption, controlled gene expression, protein tagging, gene reintegration, and overexpression. In this review, the main cassettes containing selectable markers used for gene manipulation in C. albicans are summarized, and the advantages and limitations of these cassettes are discussed with respect to their influence on target gene expression and the virulence of the mutant strains. PMID:24759671

A laser technique is proposed which may be useful for the assignment of molecular spectra in the visible and infrared regions. The method is based on the resonant interaction of two monochromatic fields with a Doppler-broadened three-level system. Under the appropriate conditions the absorption line shape of one of the transitions shows a complex structure over a narrow section of the Doppler profile, and for sufficiently high laser power the line shape splits into a number of narrow peaks. Analysis of the resulting intensity pattern leads to unambiguous assignment of the angular momentum quantum numbers of the three levels involved. A simple set of rules is given to facilitate interpretation of spectra. The line shapes discussed are also relevant to monochromatic optical pumping of gases and unidirectional laser amplifiers.

The diagnosis of Whipple's disease (WD) is based on the existence of clinical signs and symptoms compatible with the disease and on the presence of PAS-positive diastase-resistant granules in the macrophages of the small intestine. If there is suspicion of the disease but no histological findings, or only isolated extraintestinal manifestations, species-specific PCR targeting different sequences of the Tropheryma whipplei genome in different tissue types and biological fluids is recommended. This study reports two cases: the first patient had diarrhea and the disease was suspected after an endoscopic examination of the ileum, while the second patient had multi-systemic manifestations, particularly abdominal, thoracic, and peripheral lymphadenopathies. In both cases, the diagnosis was confirmed by applying molecular biology techniques to samples from the small intestine or from a retroperitoneal lymph node, respectively. PMID:21526877

Engineering Molecular Mechanics (EMM) was developed as an alternative to conventional molecular simulation techniques to model high temperature (T > 0 K) phenomena. The EMM methodology was developed using thermal expansion and thermal energy as key thermal properties. Temperature-dependent interatomic potentials were developed to account for thermal effects; Lennard-Jones and Morse potentials were used to build these temperature-dependent potentials. The validity and effectiveness of EMM simulations were demonstrated by simulating temperature-dependent properties such as thermal expansion, elastic constants and thermal stress in copper and nickel. EMM simulations were significantly faster than molecular dynamics (MD) simulations for the same accuracy. A controversy regarding the definition of stress in an atomic system was resolved: using theoretical arguments and numerical examples, the equivalence of virial stress and Cauchy stress was proved. It was shown that neglecting the velocity term in the definition of virial stress (as suggested by some researchers) can cause significant errors in MD simulations at high temperatures. The nanoscale instabilities during phase transformation in Ni-Al shape memory alloys were studied using MD and EMM simulations. The phase transformation temperatures predicted by MD simulations agreed well with experiments. Some limitations of the EMM methodology and the minimization algorithm were discussed. The possibility of nanoscale material design of Ni-Al shape memory alloys was investigated. It was found that the distribution of nickel and aluminum atoms in the alloy can affect the phase transformation characteristics significantly. A new design criterion based on thermal expansion mismatch was introduced. The predicted results using the new criterion matched well with the phase transformation temperature and strain calculated using MD simulations. The new one-parameter design criterion was shown to be effective for designing Ni-Al shape memory alloys.
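
The virial stress at the center of the controversy above is conventionally defined (this is the standard form from the molecular-simulation literature, not an equation quoted from this work) as

```latex
\sigma_{ab} \;=\; \frac{1}{V}\sum_{i}\left( -\,m_i\, v_{i,a}\, v_{i,b}
  \;+\; \frac{1}{2}\sum_{j \neq i} r_{ij,a}\, f_{ij,b} \right)
```

where $V$ is the averaging volume, $m_i$ and $v_i$ are the mass and velocity of atom $i$, $r_{ij}$ is the separation of atoms $i$ and $j$, and $f_{ij}$ is the interatomic force between them. The first, kinetic term is the "velocity term" referred to above; because it scales with temperature, dropping it produces errors that grow as the simulation temperature rises, consistent with the high-temperature errors reported.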

Liverworts occupy a basal position in the evolution of land plants, and are a key group to address a wide variety of questions in plant biology. Marchantia polymorpha is a common, easily cultivated, dioecious liverwort species, and is emerging as an experimental model organism. The haploid gametophytic generation dominates the diploid sporophytic generation in its life cycle. Genetically homogeneous lines in the gametophyte generation can be established easily and propagated through asexual reproduction, which aids genetic and biochemical experiments. Owing to its dioecy, male and female sexual organs are formed in separate individuals, which enables crossing in a fully controlled manner. Reproductive growth can be induced at the desired times under laboratory conditions, which helps genetic analysis. The developmental process from a single-celled spore to a multicellular body can be observed directly in detail. As a model organism, molecular techniques for M. polymorpha are well developed; for example, simple and efficient protocols of Agrobacterium-mediated transformation have been established. Building on these, various strategies for molecular genetics, such as introduction of reporter constructs, overexpression, gene silencing and targeted gene modification, are available. Herein, we describe the technologies and resources for reverse and forward genetics in M. polymorpha, which offer an excellent experimental platform to study the evolution and diversity of regulatory systems in land plants. PMID:26116421

Microsporidia are obligate intracellular protozoan parasites that infect a broad range of vertebrates and invertebrates. These parasites are now recognized as one of the most common pathogens in human immunodeficiency virus-infected patients. For most patients with infectious diseases, microbiological isolation and identification techniques offer the most rapid and specific determination of the etiologic agent. This is not a suitable procedure for microsporidia, which are obligate intracellular parasites requiring cell culture systems for growth. Therefore, the diagnosis of microsporidiosis currently depends on morphological demonstration of the organisms themselves. Although the diagnosis of microsporidiosis and identification of microsporidia by light microscopy have greatly improved during the last few years, species differentiation by these techniques is usually impossible and transmission electron microscopy may be necessary. Immunofluorescent-staining techniques have been developed for species differentiation of microsporidia, but the antibodies used in these procedures are at present available only at research laboratories. During the last 10 years, the detection of infectious disease agents has begun to include the use of nucleic acid-based technologies. Diagnosis of infection caused by parasitic organisms is the last field of clinical microbiology to incorporate these techniques, and molecular techniques (e.g., PCR and hybridization assays) have recently been developed for the detection, species differentiation, and phylogenetic analysis of microsporidia. In this paper we review human microsporidial infections and describe and discuss these newly developed molecular techniques. PMID:10194459

The success of the green alga Chlamydomonas reinhardtii as a model organism is to a large extent due to the wide range of molecular techniques that are available for its characterization. Here, we review some of the techniques currently used to modify and interrogate the C. reinhardtii nuclear genome and explore several technologies under development. Nuclear mutants can be generated with ultraviolet (UV) light and chemical mutagens, or by insertional mutagenesis. Nuclear transformation methods include biolistic delivery, agitation with glass beads, and electroporation. Transforming DNA integrates into the genome at random sites, and multiple strategies exist for mapping insertion sites. A limited number of studies have demonstrated targeted modification of the nuclear genome by approaches such as zinc-finger nucleases and homologous recombination. RNA interference is widely used to knock down expression levels of nuclear genes. A wide assortment of transgenes has been successfully expressed in the Chlamydomonas nuclear genome, including transformation markers, fluorescent proteins, reporter genes, epitope-tagged proteins, and even therapeutic proteins. Optimized expression constructs and strains facilitate transgene expression. Emerging technologies such as the CRISPR/Cas9 system, high-throughput mutant identification, and a whole-genome knockout library are being developed for this organism. We discuss how these advances will propel future investigations. PMID:25704665

The Multi-Order Extreme Ultraviolet Spectrograph (MOSES) forms images of the transition region in the He II 30.4 nm line in three spectral orders. Subtle differences between these images encode line profile information. However, differences in the instrument point-spread function (PSF) between the three orders lead to non-negligible systematic errors in the retrieval of the line profiles. We describe an improved periodogram technique for equalizing the PSFs, and provide numerical verification of the technique's validity.

Ion implantation techniques offering improved cell performance and reduced cost have been studied. These techniques include non-mass-analyzed phosphorus implantation, argon implantation gettering, and low temperature boron annealing. It is found that cells produced by non-mass-analyzed implantation perform as well as mass-analyzed controls, and that the cell performance is largely independent of process parameters. A study of argon implantation gettering shows no improvement over non-gettered controls. Results of low temperature boron annealing experiments are presented.

Adenoma detection rate (ADR) is a key component of colonoscopy quality assessment and is directly linked to future mortality from colorectal cancer. A number of potential factors, both modifiable and non-modifiable, can affect ADR. As methods, understanding and technologies advance, so should our ability to improve ADRs and thus reduce colorectal cancer mortality. This article reviews new technologies and techniques that improve ADR, both in terms of the endoscopes themselves and adjuncts to current systems. In particular, it focuses on effective techniques and behaviours, developments in image enhancement, advances in endoscope design, and developments in accessories that may improve ADR. It also highlights the key role that continuing medical education plays in improving the quality of colonoscopy and thus ADR. The review aims to present a balanced summary of the evidence currently available and does not propose to serve as a guideline. PMID:26265990

During the past 25 years many variations have emerged in stapedectomy, most of which centered around either a change in the prosthesis itself or in the type of oval window seal. The small fenestra stapedectomy technique (SFT) represents a change in surgical procedure rather than in prosthetic design. This technique offers the opportunity to improve hearing results while reducing risks in stapedectomy surgery. Four areas of significant improvement are seen in patients in whom the SFT was used: (1) improved hearing in the high frequencies of 2000, 4000, and 8000 Hz, (2) improved speech discrimination scores, (3) a significant reduction in the number of reported vestibular complaints, and (4) a reduction in the number of serious postoperative sensorineural hearing losses. PMID:6417600

The focus of this paper is to familiarize business discipline faculty with cognitive psychology theories of how students learn together with teaching techniques to assist and improve student learning. Student learning can be defined as the outcome from the retrieval (free recall) of desired information. Student learning occurs in two processes.…

Foodborne diseases, caused by pathogenic microorganisms, are a major public health problem worldwide. Microbiological methods commonly used in the detection of these foodborne pathogens are laborious and time consuming. This situation, coupled with the demand for immediate results and with technological advances, has led to the development of a wide range of rapid methods in recent decades. On this basis, this review describes the advantages and limitations of the main molecular methods used in the detection and identification of foodborne pathogens. To this end, we considered the recency of the published information, the objective analysis of the topic, and its scope. Recent literature reports a significant number of alternative, sensitive and selective molecular techniques for the detection, enumeration and identification of pathogenic microorganisms in food. Polymerase chain reaction (PCR) is the most popular platform, while high-performance sequencing is emerging as a technique of wide applicability for the future. However, even with all the advantages of these new methodologies, their limitations should not be overlooked. For example, molecular methods lack standardized protocols, which hinders their use in some cases. For this reason, considerable work remains to overcome these limitations and improve the application of these techniques in complex matrices such as food systems. PMID:25418655

Distant genomic elements were found to interact within the folded eukaryotic genome. However, the experimental approach used (chromosome conformation capture, 3C) enables neither determination of the percentage of cells in which the interactions occur nor demonstration of simultaneous interaction of >2 genomic elements. Each of the above can be done using in-gel replication of interacting DNA segments, the technique reported here. Chromatin fragments released from formaldehyde-cross-linked cells by sodium dodecyl sulfate extraction and sonication are distributed in a polyacrylamide gel layer, followed by amplification of selected test regions directly in the gel by multiplex polymerase chain reaction. Fragments that have been cross-linked give rise to multicomponent molecular colonies, while separate fragments give rise to monocomponent colonies; the two can be distinguished and counted. Using in-gel replication of interacting DNA segments, we demonstrate that in material from mouse erythroid cells, the majority of fragments containing the promoters of active β-globin genes and their remote enhancers do not form complexes stable enough to survive sodium dodecyl sulfate extraction and sonication. This indicates that either these elements do not interact directly in the majority of cells at a given moment, or the DNA-protein complex formed cannot be stabilized by formaldehyde cross-linking. PMID:24369423

The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of introducing direct chemical bonding between the polymer matrix and the nanotube and hence provides a load transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined and the mechanical properties of these models are calculated using standard techniques.

In manufacturing soft ferrite materials the particle size of the raw material has a significant impact on the reactivity of calcination. The control of particle size distribution and final formulation at wet milling after calcining impacts the reactivity during sintering and the magnetic properties of the final product. This paper will deal with steps taken to improve process control during the grinding operations of raw material and calcine in soft ferrite production. Equipment modifications as well as changes to the grinding and material handling techniques will be included. All examples of process control and improvements will be supported by data.

The traditional phase-shifting profilometry technique is based on the projection of digital interference patterns and computation of the absolute phase map. Recently, a method was proposed that used phase interpolation for corner detection at subpixel accuracy in the projector image, improving the camera-projector calibration. We propose a general strategy to improve the accuracy of the search for correspondences that can be used to obtain high-precision three-dimensional reconstruction. Experimental results show that our strategy can outperform the precision of the phase-shifting method.
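
The phase computation that underlies the phase-shifting technique above can be sketched with the standard N-step algorithm (the fringe model, step count, and image sizes here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def wrapped_phase(frames):
    """Recover the wrapped phase from N equally phase-shifted fringe images.

    frames[n] is assumed to follow I_n = A + B*cos(phi + 2*pi*n/N),
    the standard N-step phase-shifting model.
    """
    N = len(frames)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    # The sums reduce to -(N*B/2)*sin(phi) and (N*B/2)*cos(phi)
    return np.arctan2(-num, den)

# Synthetic 4-step example with a linearly varying phase
x = np.linspace(-1.0, 1.0, 64)
phi_true = 2.0 * x                       # stays inside (-pi, pi]
frames = [0.5 + 0.4 * np.cos(phi_true + 2 * np.pi * n / 4) for n in range(4)]
phi_rec = wrapped_phase(frames)
```

For the calibration and correspondence refinements discussed in the paper, this wrapped phase would then be unwrapped and interpolated at subpixel positions.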

A general investigation into the improvement of modal scaling factors of an experimental modal model using a mass-additive technique is discussed. The database required by the proposed method consists of an experimental modal model (a set of complex eigenvalues and eigenvectors) of the original structure and a corresponding set of complex eigenvalues of the mass-added structure. Three analytical methods, i.e., first-order and second-order perturbation methods and the local eigenvalue modification technique, are proposed to predict the improved modal scaling factors. Difficulties encountered in scaling closely spaced modes are discussed. Methods to compute the necessary rotational modal vectors at the mass-additive points are also proposed to increase the accuracy of the analytical prediction.

We present many algorithmic improvements in our early region filling technique, which in a previous publication was already proved to be correct for all connected digital pictures. Ours is an integer-only method that also finds all interior points of any given digital picture by displaying and storing them in a locating matrix. Our filling/locating program is applicable both in computer graphics and image processing.
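
An integer-only flood fill that both finds and stores interior points in a locating matrix, in the spirit of the method described above, might look like this (the '#'/'.' picture encoding and the 'I' interior marker are illustrative assumptions, not the paper's data structures):

```python
from collections import deque

def interior_points(picture):
    """Integer-only location of all interior points of a digital picture.

    picture is a list of equal-length strings: '#' marks boundary pixels,
    '.' marks background.  Returns a locating matrix with 'I' written at
    every interior point.
    """
    rows, cols = len(picture), len(picture[0])
    outside = [[False] * cols for _ in range(rows)]
    # Seed a breadth-first flood with every non-boundary border pixel.
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if (r in (0, rows - 1) or c in (0, cols - 1))
              and picture[r][c] == '.')
    for r, c in q:
        outside[r][c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and picture[nr][nc] == '.' and not outside[nr][nc]):
                outside[nr][nc] = True
                q.append((nr, nc))
    # Anything neither boundary nor reachable from outside is interior.
    return [''.join('I' if picture[r][c] == '.' and not outside[r][c]
                    else picture[r][c] for c in range(cols))
            for r in range(rows)]

pic = ["......",
       ".####.",
       ".#..#.",
       ".####.",
       "......"]
filled = interior_points(pic)  # row 2 becomes '.#II#.'
```

The locating matrix can then be used both for display and for any subsequent image-processing pass, as the abstract suggests.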

Range resolution of a conventional pulsed Doppler radar is determined by the scattering volume defined by the transmitted pulse shape. To increase the resolution, the length of the pulse must be reduced. Reducing the pulse length, however, also reduces the transmitted power and hence the signal-to-noise ratio, unless the peak power capability of the transmitter is greatly increased. Improved range resolution may also be attained through the use of various pulse coding methods, but such methods are sometimes difficult to implement from a hardware standpoint. The frequency-hopping (F-H) technique described here increases the range resolution of pulse Doppler MST (mesosphere stratosphere troposphere) radar without the need for extensive modifications to the radar transmitter. The technique consists of sending a repeated sequence of pulses, each pulse in the sequence being transmitted at a unique radio frequency that is under the control of a microcomputer. This technique is discussed along with other radar parameters.
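
The stepped-frequency idea can be sketched as follows. The carrier, step size, and pulse count below are illustrative, not the MST radar's actual parameters, and the coherent recombination of the echoes that realizes the synthesized resolution is not shown:

```python
C = 299_792_458.0  # speed of light in m/s

def hop_schedule(f0_hz, step_hz, n_pulses):
    """Stepped-frequency schedule: pulse k is transmitted at f0 + k*step.
    A microcomputer steps through this list, one frequency per pulse."""
    return [f0_hz + k * step_hz for k in range(n_pulses)]

def synthesized_resolution_m(step_hz, n_pulses):
    """Range resolution from the synthesized bandwidth B = n_pulses * step:
    delta_R = c / (2 * B)."""
    return C / (2.0 * n_pulses * step_hz)

freqs = hop_schedule(50e6, 250e3, 8)      # 8 pulses stepped by 250 kHz
res = synthesized_resolution_m(250e3, 8)  # ~75 m from 2 MHz of bandwidth
```

The point of the technique is visible in the second function: resolution improves with the total hopped bandwidth, not with the length of any individual pulse.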

The US is embarking on an experiment to make significant and sustained improvements in weather forecasting. The effort stems from a series of community conversations that recognized the rapid advancements in observations, modeling and computing techniques in the academic, governmental and private sectors. The new directions and initial efforts will be summarized, including information on possibilities for international collaboration. Most new projects are scheduled to start in the last half of 2014. Advancements include ensemble forecasting with global models and new sharing of computing resources. Newly developed techniques for evaluating weather forecast models will be presented in detail. The approaches use statistical techniques that incorporate pair-wise comparisons of forecasts with observations and account for daily auto-correlation to assess appropriate uncertainty in forecast changes. Some of the new projects allow for international collaboration, particularly on the research components of the projects.
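
A generic sketch of the kind of pair-wise, autocorrelation-aware comparison described above (the lag-1 effective-sample-size correction and the toy error series are assumptions for illustration, not the evaluation suite's exact procedure):

```python
import math

def paired_forecast_comparison(err_a, err_b):
    """Mean difference in daily forecast errors, with the uncertainty
    widened via a lag-1 autocorrelation correction to the sample size."""
    d = [a - b for a, b in zip(err_a, err_b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    # Lag-1 autocorrelation of the paired differences
    num = sum((d[t] - mean) * (d[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in d)
    r1 = num / den if den else 0.0
    n_eff = n * (1 - r1) / (1 + r1)          # effective sample size
    stderr = math.sqrt(var / n_eff)
    return mean, stderr

# Toy daily error series for two forecast systems
mean_diff, se = paired_forecast_comparison(
    [1.2, 1.1, 1.3, 1.4, 1.2, 1.3, 1.5, 1.4],
    [1.0, 0.9, 1.1, 1.2, 1.0, 1.1, 1.2, 1.1],
)
```

Because day-to-day errors are correlated, n_eff is smaller than n, so the standard error is honestly wider than a naive paired t-test would report.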

Over the past years a new infrastructure for atomic and molecular databases has been developed within the framework of the Virtual Atomic and Molecular Data Centre (VAMDC). Standards for the representation of atomic and molecular data, as well as a set of protocols, have been established that now allow data to be retrieved from various databases through one portal and easily combined. Apart from spectroscopic databases such as the Cologne Database for Molecular Spectroscopy (CDMS), the Jet Propulsion Laboratory microwave, millimeter and submillimeter spectral line catalogue (JPL) and the HITRAN database, various databases on molecular collisions (BASECOL, KIDA) and reactions (UMIST) are connected. Together with other groups within the VAMDC consortium we are working on common user tools to simplify access for new customers and to tailor data requests for users with specific needs. This includes, in particular, tools to support the analysis of complex observational data obtained with the ALMA telescope. In this presentation, requests to CDMS and JPL will be used to explain the basic concepts and the tools provided by VAMDC. In addition, a new portal to CDMS will be presented which has a number of new features, in particular meaningful quantum numbers, references linked to data points, access to state energies and improved documentation. Fit files are accessible for download and queries to other databases are possible.

Tumor functional and molecular imaging has significantly contributed to cancer preclinical research and clinical applications. Among typical imaging modalities, ultrasonic and optical techniques are two commonly used methods; both share several common features such as cost efficiency, absence of ionizing radiation, relatively inexpensive contrast agents, and comparable maximum imaging depth. Ultrasonic and optical techniques are also complementary in imaging resolution, molecular sensitivity, and imaging space (vascular and extravascular). The marriage between ultrasonic and optical techniques takes advantage of both. This review introduces tumor functional and molecular imaging using microbubble-based ultrasound and ultrasound-mediated optical imaging techniques. PMID:23219728

Advancements in integrated circuit (IC) package technology are increasingly leading to size shrinkage of modern microelectronic packages. This size reduction presents a challenge for the detection and location of internal features/defects in the packages, which have approached the resolution limit of conventional acoustic microimaging, an important nondestructive inspection technique in the semiconductor industry. In this paper, to meet this challenge, the learning overcomplete representation technique is pursued to decompose an ultrasonic A-scan signal into overcomplete representations over a learned overcomplete dictionary. Ultrasonic echo separation and reflectivity function estimation are then performed by exploiting the sparse representability of ultrasonic pulses. An improved acoustic microimaging technique is proposed by integrating these operations into the conventional acoustic microimaging technique. Its performance is quantitatively evaluated through elaborated experiments on ultrasonic A-scan signals using acoustic microimaging (AMI) error criteria. Results obtained from both simulated and measured A-scans are presented to demonstrate the superior axial resolution and robustness of the proposed technique.

A three-dimensional ocean model and its adjoint model are used to simultaneously optimize the initial conditions (IC) and the wind stress drag coefficient (Cd) for improving storm surge forecasting. To demonstrate the effect of this proposed method, a number of identical twin experiments (ITEs) with a prescription of different error sources and two real data assimilation experiments are performed. Results from both the idealized and real data assimilation experiments show that adjusting IC and Cd simultaneously can achieve much more improvements in storm surge forecasting than adjusting IC or Cd only. A diagnosis on the dynamical balance indicates that adjusting IC only may introduce unrealistic oscillations out of the assimilation window, which can be suppressed by the adjustment of the wind stress when simultaneously adjusting IC and Cd. Therefore, it is recommended to simultaneously adjust IC and Cd to improve storm surge forecasting using an adjoint technique.

Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method (ITMM) operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method's execution time by up to approximately 50% when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick because sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver but could be improved by an as yet undeveloped, more efficient preconditioner.
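
The red-black colouring idea can be illustrated on a toy 1-D Poisson problem. This sketch is not the ITMM transport setting; it only shows why the two colour half-sweeps parallelise, since updating one colour reads only the other colour's values:

```python
def red_black_gauss_seidel(n=32, sweeps=2000):
    """Red-black Gauss-Seidel for -u'' = 1 on (0,1) with u(0) = u(1) = 0.

    Red points (even i) are updated first using only black values, then
    black points using only red values, so each half-sweep could run in
    parallel across sub-domains.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * n
    for _ in range(sweeps):
        for parity in (0, 1):          # red half-sweep, then black
            for i in range(parity, n, 2):
                left = u[i - 1] if i > 0 else 0.0
                right = u[i + 1] if i < n - 1 else 0.0
                u[i] = 0.5 * (left + right + h * h)   # source f = 1
    return u

u = red_black_gauss_seidel()
# Exact solution is u(x) = x(1-x)/2, so the peak value is near 0.125
```

In the PBJ-versus-PGS comparison above, the same colouring is applied to whole sub-domains rather than grid points, but the independence structure is the same.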

Colonoscopy has substantially evolved during the last 20 years and many different training techniques have been developed in order to improve the performance of endoscopists. The best known are mechanical simulators, virtual reality simulators, computer-simulated endoscopy, magnetic endoscopic imaging, and composite and explanted animal organ simulators. Current literature generally indicates that the use of simulators improves the performance of endoscopists and enhances the safety of patients, especially during the initial phase of training. Moreover, newer endoscopes and imaging techniques such as high-definition colonoscopes, chromocolonoscopy with dye spraying, and the third-eye retroscope have been incorporated into everyday practice, offering better visualization of the colon and detection of polyps. Despite the abundance of these different technological features, training devices are not widely used and no official guideline or specified training algorithm or technique for lower gastrointestinal endoscopy has evolved. In this review, we present the most important training methods currently available and evaluate them using the existing literature. We also propose a training algorithm for novice endoscopists. PMID:27099542

Tunable diode laser atomic absorption spectroscopy (DLAAS) combined with separation techniques and atomization in plasmas and flames is presented as a powerful method for analysis of molecular species. The analytical figures of merit of the technique are demonstrated by the measurement of Cr(VI) and Mn compounds, as well as molecular species including halogen atoms, hydrogen, carbon and sulfur. PMID:15561625

MARS (Molecular Adsorbent Recirculating System) is a new liver detoxification technique for patients with severe acute, or acute-on-chronic, hepatic failure. It has also shown its usefulness in the control of resistant pruritus in primary biliary cirrhosis. Because this technique is often delivered in Intensive Care Units (ICUs), we have reviewed the literature from 1999 to the present to describe the technique, its benefits and its main complications. The technique was developed in Germany, where it was first used in clinical practice in 1999. It was used for the first time in Spain in 2000, and in the Clínica Universitaria of Navarra in July 2001. Despite the short clinical experience with MARS, its obvious beneficial effects, such as the decrease of hepatic toxins and the improvement of encephalopathy and the hemodynamic situation, make it a very useful technique in these patients. MARS has been shown to be a safe procedure, well tolerated by patients and suited to use by specialised nurses. Despite the encouraging clinical results, its use is still limited; moreover, its high cost precludes widespread use and requires further studies. PMID:16022828

Super-resolution (SR) software-based techniques aim at generating a final image by combining several noisy, lower-resolution frames from the same scene. A comparative study on high-resolution high-angle annular dark field images of InAs/GaAs QDs has been carried out in order to evaluate the performance of the SR technique. The obtained SR images present enhanced resolution, higher signal-to-noise ratio (SNR) and sharpness relative to the experimental images. In addition, SR is also applied in the field of strain analysis using digital image processing applications such as geometrical phase analysis and peak pairs analysis. The precision of the strain mappings can be improved when SR methodologies are applied to experimental images. PMID:26501744
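
The noise-averaging benefit that SR draws on can be illustrated with a minimal frame-combination sketch (real SR pipelines also perform sub-pixel registration and upsampling, both omitted here; the scene and noise model are synthetic assumptions):

```python
import random

def average_frames(frames):
    """Pixel-wise average of already-registered frames: the combination
    step of software SR, stripped of registration and upsampling."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def rms_error(est, truth):
    return (sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)) ** 0.5

random.seed(0)
truth = [float(i % 7) for i in range(500)]                     # toy "scene"
noisy = [[t + random.gauss(0.0, 1.0) for t in truth] for _ in range(25)]
fused = average_frames(noisy)
# Averaging 25 frames should cut the noise level roughly fivefold (sqrt 25)
```

This is the statistical basis of the SNR gain reported above; the resolution gain additionally requires that the frames be shifted by sub-pixel amounts relative to one another.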

A grid adaptation technique is presented which improves grid quality. The method begins with an assessment of grid quality by defining an appropriate grid quality measure. Then, undesirable grid properties are eliminated by a grid-quality-adaptive grid generation procedure. The same concept has been used for geometry-adaptive and solution-adaptive grid generation. The difference lies in the definition of the grid control sources; here, they are extracted from the distribution of a particular grid property. Several examples are presented to demonstrate the versatility and effectiveness of the method.

Signal editing is a technique used to locate and erase unreliable data before error correction decoding. Consider a concatenated coding (CC) communication system in which the inner code employs convolutional encoding with Viterbi decoding and the outer code could employ either a convolutional or a Reed-Solomon code. In this study, we show that useful information can be derived from the inner Viterbi decoding process to perform two special operations: to locate and erase unreliable decoded data and to estimate the input channel noise level. As a result, the number of errors input to the outer decoder is reduced and the overall CC system performance is improved.
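The locate-and-erase idea can be shown with a toy sketch (this is not the paper's Viterbi/Reed-Solomon system, just an illustration of the principle): an inner soft-decision stage flags its least reliable symbol as an erasure, and a trivial outer single-parity-check code then repairs it, so a symbol that would have hard-decided incorrectly never reaches the outer decoder as an error.

```python
# Toy illustration of signal editing: the inner (soft-decision) stage flags
# its least reliable bit as an erasure; the outer single-parity-check code
# can then correct that one erasure exactly.

def inner_decode(received):
    """Hard-decide each soft sample; flag the least reliable one as an erasure."""
    bits = [1 if r > 0 else 0 for r in received]
    weakest = min(range(len(received)), key=lambda i: abs(received[i]))
    bits[weakest] = None  # erasure marker
    return bits

def outer_decode(bits):
    """Even-parity outer code: fill in a single erasure from the parity."""
    if None in bits:
        i = bits.index(None)
        known = [b for b in bits if b is not None]
        bits[i] = sum(known) % 2  # restore even parity
    return bits

# Codeword 1,0,1,0 (even parity). The third symbol was hit by strong noise
# and would hard-decide incorrectly, but it is also the least reliable sample.
received = [+0.9, -1.1, -0.05, -0.8]   # BPSK: +1 -> bit 1, -1 -> bit 0
decoded = outer_decode(inner_decode(received))
print(decoded)  # [1, 0, 1, 0] -- the unreliable symbol was erased and repaired
```

In the real concatenated system, the reliability information comes from the inner Viterbi decoding process rather than from raw sample amplitudes, but the payoff is the same: erasures are cheaper for the outer decoder to fix than errors.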

Concentration measurement in molecular gas mixtures using a snapshot spatial imaging technique is reported. The approach consists of measuring the birefringence of the molecular sample when field-free alignment takes place, each molecular component producing a signal with an amplitude depending on the molecular density. The concentration measurement is obtained on a single-shot basis by probing the time-varying birefringence through femtosecond time-resolved optical polarigraphy (FTOP). The relevance of the method is assessed in air.

With climatic change, many western states in the United States are experiencing drought conditions. Numerous irrigation districts are losing significant amounts of water from their canal systems due to leakage. Every year, an average of 2 million acres of prime cropland in the US is lost to soil erosion, waterlogging and salinity. Lining canals could save an enormous amount of water for irrigating crops, but at present, due to soaring costs of construction and environmental mitigation, adopting such a program on a large scale would be prohibitively expensive. Conventional techniques of seepage detection are expensive, time-consuming and labor-intensive, as well as not very accurate. Technological advancements in remote sensing have made it possible to investigate irrigation canals to identify seepage sites. In this research, band 9 in the near-infrared (NIR) region and band 45 in the thermal-infrared (TIR) region of airborne MASTER data were utilized to highlight anomalies along an irrigation canal at Phoenix, Arizona. High-resolution (1 to 4 meter pixel) satellite images, provided by private companies for scientific research and made available to the public by Google on Google Earth, were then successfully used to separate those anomalies into water-activity sites, natural vegetation, and man-made structures, thereby greatly improving the seepage detection ability of airborne remote sensing. This innovative technique is much faster and more cost-effective than conventional techniques and past airborne remote sensing techniques for verifying anomalies along irrigation canals. It also solves the long-standing problem of discriminating false impressions of seepage sites, caused by dense natural vegetation, terrain relief and low depressions of natural drainages, from true water-related activity sites.
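
The anomaly-highlighting step can be sketched in miniature: seepage keeps the canal bank wet, so those pixels tend to read anomalously cool in the thermal band. The sketch below flags such pixels with a simple z-score test; the TIR values and the 1.5-sigma threshold are illustrative assumptions, not the study's actual data or decision rule.

```python
import statistics

# Hypothetical thermal-band (TIR) readings sampled along a canal bank.
# Seepage keeps the ground wet, so such pixels tend to read anomalously cool.
tir = [31.8, 32.1, 31.9, 32.0, 27.5, 31.7, 32.2, 26.9, 31.6, 32.0]

mean = statistics.mean(tir)
std = statistics.pstdev(tir)

# Flag pixels more than 1.5 standard deviations cooler than the local mean.
anomalies = [i for i, t in enumerate(tir) if (t - mean) / std < -1.5]
print(anomalies)  # indices of candidate seepage sites
```

The flagged candidates would then be cross-checked against high-resolution imagery, as in the study, to separate true water activity from vegetation and man-made structures.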

We demonstrate a scheme for the preparation of molecular alignment and angular momentum orientation using a hybrid combination of two limits of Raman scattering. First a weak, impulsive pump pulse initializes the system via the nonresonant dynamic Stark effect. Then, having overcome the influence of the vacuum fluctuations, an amplification pulse selectively enhances the initial coherences by transient stimulated Raman scattering, generating alignment and angular momentum orientation of molecular hydrogen. The amplitude and phase of the resulting coherent dynamics are experimentally probed, indicating an amplification factor of 4.5. An analytic theory is developed to model the dynamics.

The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible-light photographs, some of which contain faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, significantly improved the detection precision. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives is about 96% on a test set.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.
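
The effect of cascading a second classifier over a high-recall detector can be illustrated with synthetic numbers (the ground-truth labels and classifier scores below are invented for illustration; they are not the NLM data or the paper's actual models): stage 1 emits many candidate regions at low precision, and stage 2 discards those the classifier scores as unlikely faces.

```python
# Toy illustration (synthetic data) of the two-stage idea: a high-recall
# detector emits many candidate face regions, and a second classifier
# rejects false positives, raising precision.

def precision(candidates):
    tp = sum(1 for is_face, _ in candidates if is_face)
    return tp / len(candidates)

# Stage 1 output: (ground_truth_is_face, second_stage_score) pairs.
# The detector fires on many non-faces, so precision starts low.
stage1 = [(True, 0.95), (True, 0.90), (True, 0.85),
          (False, 0.20), (False, 0.35), (False, 0.10),
          (False, 0.60), (False, 0.15), (False, 0.25), (False, 0.05)]

# Stage 2: keep only candidates the (hypothetical) classifier scores >= 0.5.
stage2 = [c for c in stage1 if c[1] >= 0.5]

print(round(precision(stage1), 2), round(precision(stage2), 2))  # 0.3 0.75
```

Here precision rises from 0.3 to 0.75 while all three true faces survive the filter, mirroring the precision gain reported for the combined Viola-Jones plus deep-learning pipeline.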

Background Molecular dynamics has emerged as an important research methodology covering systems up to the level of millions of atoms. However, insufficient sampling often limits its application. The limitation is due to rough energy landscapes, with many local minima separated by high-energy barriers, which govern biomolecular motion. Scope of review In the past few decades methods have been developed that address the sampling problem, such as replica-exchange molecular dynamics, metadynamics and simulated annealing. Here we present an overview of these sampling methods in an attempt to shed light on which should be selected depending on the type of system property studied. Major Conclusions Enhanced sampling methods have been employed for a broad range of biological systems, and the choice of a suitable method is connected to the biological and physical characteristics of the system, in particular system size. While metadynamics and replica-exchange molecular dynamics are the most widely adopted sampling methods for studying biomolecular dynamics, simulated annealing is well suited to characterizing very flexible systems. For a long time the use of annealing methods was restricted to simulations of small proteins; however, a variant of the method, generalized simulated annealing, can be employed at relatively low computational cost for large macromolecular complexes. General Significance Molecular dynamics trajectories frequently do not reach all relevant conformational substates, for example those connected with biological function, a problem that can be addressed by employing enhanced sampling algorithms. PMID:25450171
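
The barrier-hopping mechanism that simulated annealing relies on can be sketched on a toy 1-D rugged landscape (a Rastrigin-like function standing in for a biomolecular energy surface; the starting point, step size, and geometric cooling schedule are illustrative choices, not parameters from any MD package):

```python
import math
import random

random.seed(1)

def energy(x):
    """Rugged 1-D landscape: global minimum E(0) = 0, local minima near integers."""
    return x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)

x = best = 4.0               # start far away, in a high local basin
T = 5.0                      # initial temperature
for _ in range(20000):
    x_new = x + random.uniform(-0.5, 0.5)
    dE = energy(x_new) - energy(x)
    # Metropolis criterion: always accept downhill moves, occasionally uphill
    # ones, which lets the walker hop over barriers between local minima.
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = x_new
        if energy(x) < energy(best):
            best = x
    T = max(1e-3, T * 0.999)  # geometric cooling schedule

print(round(best, 2))  # typically close to the global minimum at 0
```

A plain gradient descent from x = 4 would stall in the nearest local basin; the early high-temperature phase is what allows the escape, which is the same reason annealing helps flexible systems with many competing conformational substates.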

It is clear that typical protocols used for soil analysis would certainly fail to adequately interrogate ground-water treatment systems unless they were substantially modified. The modifications found necessary to compensate for the low biomass include molecular tools and techniq...

Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering), high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus, since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low-contrast conditions whilst retaining colour content. They produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range, with some differences between the different enhancement systems.
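
The simplest operation underlying such contrast enhancement is a global contrast stretch, sketched below on a toy 8-bit "image" (commercial systems apply considerably more sophisticated, often locally adaptive, processing; this only shows why compressed intensities hide detail):

```python
# Minimal sketch of global contrast stretching on a hazy, low-contrast patch.

def stretch(pixels, lo=0, hi=255):
    """Linearly remap the occupied intensity range onto [lo, hi]."""
    pmin, pmax = min(pixels), max(pixels)
    scale = (hi - lo) / (pmax - pmin)
    return [round(lo + (p - pmin) * scale) for p in pixels]

# Haze crowds all values into a narrow band, here [110, 140] of 0..255.
hazy = [110, 115, 120, 125, 130, 135, 140, 128, 132, 118]
enhanced = stretch(hazy)
print(min(enhanced), max(enhanced))  # 0 255 -- full dynamic range restored
```

The high-spatial-frequency detail survives Mie scattering in relative form; the stretch simply re-expands the intensity differences so a monitor and a human observer can perceive them.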

This project will utilize the electro-phoretic deposition technique (EPD) in conjunction with nanofluids to deposit oxide coatings on prototypic zirconium alloy cladding surfaces. After demonstrating that this surface modification is reproducible and robust, the team will subject the modified surface to boiling and corrosion tests to characterize the improved nucleate boiling behavior and superior corrosion performance. The scope of work consists of the following three tasks: The first task will employ the EPD surface modification technique to coat the surface of a prototypic set of zirconium alloy cladding tube materials (e.g. Zircaloy and advanced alloys such as M5) with a micron-thick layer of zirconium oxide nanoparticles. The team will characterize the modified surface for uniformity using optical microscopy and scanning-electron microscopy, and for robustness using standard hardness measurements. After zirconium alloy cladding samples have been prepared and characterized using the EPD technique, the team will begin a set of boiling experiments to measure the heat transfer coefficient and critical heat flux (CHF) limit for each prepared sample and its control sample. This work will provide a relative comparison of the heat transfer performance for each alloy and the surface modification technique employed. As the boiling heat transfer experiments begin, the team will also begin corrosion tests for these zirconium alloy samples using a water corrosion test loop that can mimic light water reactor (LWR) operational environments. They will perform extended corrosion tests on the surface-modified zirconium alloy samples and control samples to examine the robustness of the modified surface, as well as the effect on surface oxidation.

There is increasing urgency to develop and deploy sustainable sources of energy to reduce our global dependency on finite, high-carbon fossil fuels. Lignocellulosic feedstocks, used in power and liquid fuel generation, are valuable sources of non-food plant biomass. They are cultivated with minimal inputs on marginal or degraded lands to prevent competition with arable agriculture and offer significant potential for sustainable intensification (the improvement of yield without the necessity for additional inputs) through advanced molecular breeding. This article explores progress made in next generation sequencing, advanced genotyping, association genetics, and genetic modification in second generation bioenergy production. Using poplar as an exemplar where most progress has been made, a suite of target traits is also identified giving insight into possible routes for crop improvement and deployment in the immediate future. PMID:26541073

This paper describes a Mycobacterium intracellulare variant strain causing an unusual infection. Several isolates obtained from an immunocompromised patient were identified as members of the Mycobacterium avium complex (MAC) by the commercial AccuProbe system and biochemical standard identification. Further molecular approaches were undertaken for a more accurate characterization of the bacteria. Up to seven different genomic sequences were analyzed, ranging from conserved mycobacterial genes such as 16S ribosomal DNA to MAC-specific genes such as mig (macrophage-induced gene). The results obtained identify the isolates as a variant of M. intracellulare, an example of the internal variability described for members of the MAC, particularly within that species. The application of other molecular approaches is recommended for more accurate identification of bacteria described as MAC members. PMID:11724827

Protocols are one of the main organizational resources in molecular biology. They are written instructions that specify ingredients, equipment, and sequences of steps for making technical preparations. Some protocols are published in widely used manuals, while others are hand-written variants used by particular laboratories and individual technicians. It is widely understood, both in molecular biology and in social studies of science, that protocols do not describe exactly what practitioners do in the laboratory workplace. In social studies of science, the difference between protocols and the actual practices of doing them often is used to set up ironic contrasts between 'messy' laboratory practices and the appearance of technical order. Alternatively, in ethnomethodological studies of work, the difference is examined as a constitutive feature, both of the lived-work of doing technical projects, and of the administrative work of regulating and evaluating such projects. The present article takes its point of departure from ethnomethodology, and begins with a discussion of local problems with performing molecular biology protocols on specific occasions. The discussion then moves to particular cases in criminal law in which defense attorneys cross-examine forensic technicians and lab administrators. In these interrogations, the distinction between protocols and actual practices animates the dialogue and becomes consequential for judgments in the case at hand. The article concludes with a discussion of administrative science: the work of treating protocols and paper trails as proxies for actual 'scientific' practices. PMID:12171609

The authors are developing an implantable centrifugal blood pump for short- and medium-term (1-6 months) left ventricular assist. They hypothesized that the application of result-dependent modifications to this pump would lead to overall improved performance in long-term implantation studies. Essential requirements for pump operation, such as durability and resistance to clot formation, have been achieved through specialized fabrication techniques. The antithrombogenic character of the pump has been improved through coating at the cannula-housing interfaces and the baffle seal, and through changing the impeller blade material from polysulfone to pyrolytic carbon. The electronic components of the pump have been sealed for implantable use through specialized processes of dipping and potting, and the surfaces of the internal pump components have been treated to increase durability. The device has demonstrated efficacy in five chronic sheep implantation studies of 14, 10, 28, 35, and 154 day duration. Post mortem findings from the 14 day experiment showed stable fibrin entangled around the impeller shaft and blades. After pump modification, autopsy findings of the 10 day study showed no evidence of clot. Additionally, the results of the 28 day experiment showed only a small (2.0 mm) ring of fibrin at the shaft-seal interface. In the 35 and 154 day experiments, redesign of the stators has resulted in improved motor corrosion resistance. The 35 day study showed a small, 0.5 mm wide fibrin deposit at the lip seal, but no motor failure. In the 154 day experiment, the motor failed because of stator fluid corrosion, while the explanted pump was devoid of thrombus. Based on these findings, the authors believe that these pump refinements have contributed significantly to improvements in durability and resistance to clot formation. PMID:8555619

Solution-processed molecular semiconductors for the fabrication of solar cells have emerged as a competitive alternative to their conjugated polymer counterparts, primarily because such materials systems exhibit no batch-to-batch variability, can be purified to a greater extent and offer precisely defined chemical structures. The highest power conversion efficiencies (PCEs) have been achieved through a combination of molecular design and the application of processing methods that optimize the bulk heterojunction (BHJ) morphology. However, one finds that the methods used for controlling structural order, for example the use of high-boiling-point solvent additives, have been inspired by examination of the conjugated polymer literature. It stands to reason that a different class of morphology modifiers should be sought that addresses challenges unique to molecular films, including difficulties in obtaining thicker films and avoiding the dewetting of active photovoltaic layers. Here we show that the addition of small quantities of high-molecular-weight polystyrene (PS) is a simple-to-use and economically viable additive approach that improves PCE. Remarkably, the PS spontaneously accumulates away from the electrodes as separate domains that do not interfere with charge extraction and collection or with the arrangement of the donor and acceptor domains in the BHJ blend.

This paper discusses recent optimization approaches to the protein side-chain prediction problem, protein structural alignment, and molecular structure determination from X-ray diffraction measurements. The machinery employed to solve these problems has included algorithms from linear programming, dynamic programming, combinatorial optimization, and mixed-integer nonlinear programming. Many of these problems are purely continuous in nature. Yet, to this date, they have been approached mostly via combinatorial optimization algorithms that are applied to discrete approximations. The main purpose of the paper is to offer an introduction and motivate further systems approaches to these problems. PMID:20160866

The performance of various holographic techniques can be substantially improved by homogenizing the intensity profile of the laser beam using beam-shaping optics, for example achromatic field-mapping refractive beam shapers such as the πShaper. The operating principle of these devices is the transformation of the laser beam intensity from a Gaussian to a flattop profile with high flatness of the output wavefront, preservation of beam consistency, a collimated output beam of low divergence, high transmittance, extended depth of field and negligible residual wave aberration; the achromatic design provides the capability to work with several laser sources at different wavelengths simultaneously. Applying these beam shapers brings significant benefits to Spatial Light Modulator based techniques such as Computer Generated Holography or Dot-Matrix mastering of security holograms, since uniform illumination of an SLM simplifies the mathematical calculations and increases the predictability and reliability of the imaging results. Another example is multicolour Denisyuk holography, where the achromatic πShaper provides uniform illumination of a field at various wavelengths simultaneously. This paper describes some design basics of field-mapping refractive beam shapers and optical layouts for their application in holographic systems. Examples of real implementations and experimental results are presented as well.

Using the traditional serological tests and the most novel techniques for DNA fingerprinting, forensic scientists scan different traits that vary from person to person and use the data to include or exclude suspects based on matching with the evidence obtained in a criminal case. Although the forensic application of these methods is well known,…

We compare and contrast the development of optical molecular imaging techniques with nuclear medicine, with a didactic emphasis for initiating readers into the field of molecular imaging. The nuclear imaging techniques of gamma scintigraphy, single-photon emission computed tomography, and positron emission tomography are first briefly reviewed. The molecular optical imaging techniques of bioluminescence and fluorescence using gene reporter/probes and gene reporters are described prior to introducing the governing factors of autofluorescence and excitation light leakage. The use of dual-labeled, near-infrared-excitable and radio-labeled agents is described, with comparative measurements between planar fluorescence and nuclear molecular imaging. The concept of time-independent and time-dependent measurements is described, with emphasis on integrating time-dependent measurements made in the frequency domain for 3-D tomography. Finally, we comment on the challenges and progress in translating near-infrared (NIR) molecular imaging agents for personalized medicine. PMID:19021311

Physicians, in their ever-demanding jobs, are looking to decision support systems for aid in clinical diagnosis. However, clinical decision support systems need to be of sufficiently high accuracy that they help, rather than hinder, the physician in his/her diagnosis. Decision support systems with patient-state determination accuracies greater than 80 percent are generally perceived to be sufficiently accurate to fulfill the role of helping the physician. We have previously shown that data mining techniques have the potential to provide the underpinning technology for clinical decision support systems. In this paper, an extension of the work in reference 2, we describe how changes in data mining methodologies for the analysis of 12-lead ECG data improve the accuracy with which data mining algorithms determine which patients are suffering from heart disease. We show that the accuracy of patient-state prediction, for all the algorithms we investigated, can be increased by up to 6 percent by using the combination of appropriate test-training ratios and 5-fold cross-validation. The use of cross-validation greater than 5-fold appears to reduce the improvement in algorithm classification accuracy gained by this validation method. The accuracy of 84 percent in patient-state predictions, obtained using the OC1 algorithm, suggests that this algorithm will be capable of providing the required accuracy for clinical decision support systems.
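
The 5-fold cross-validation protocol itself is simple to state in code. The sketch below runs it on synthetic 1-D data with a trivial nearest-class-mean classifier; everything here (the data, the classifier, the accuracy figures it produces) is illustrative and unrelated to the paper's ECG datasets and algorithms.

```python
import random

random.seed(3)

def kfold_indices(n, k=5):
    """Shuffle indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def nearest_mean_predict(train, test):
    """Trivial 1-D classifier: predict the class whose training mean is closer."""
    means = {c: sum(x for x, y in train if y == c) / sum(1 for _, y in train if y == c)
             for c in {y for _, y in train}}
    return [min(means, key=lambda c: abs(x - means[c])) for x, _ in test]

# Synthetic 1-D "feature" data: class 0 near 1.0, class 1 near 3.0.
data = [(random.gauss(1.0, 0.6), 0) for _ in range(50)] + \
       [(random.gauss(3.0, 0.6), 1) for _ in range(50)]

# 5-fold cross-validation: each fold serves once as the held-out test set.
accs = []
for fold in kfold_indices(len(data), k=5):
    test = [data[i] for i in fold]
    train = [data[i] for i in range(len(data)) if i not in set(fold)]
    preds = nearest_mean_predict(train, test)
    accs.append(sum(p == y for p, (_, y) in zip(preds, test)) / len(test))

print(round(sum(accs) / len(accs), 2))  # mean 5-fold accuracy
```

Averaging over the five held-out folds gives a more stable accuracy estimate than a single train/test split, which is the property the paper exploits when tuning test-training ratios.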

An optical survey is the main technique for detecting space debris. Due to the specific characteristics of observation, the pointing errors and tracking errors of the telescope as well as image degradation may be significant, which make it difficult for astrometric calibration. Here we present an improved method that corrects the pointing and tracking errors, and measures the image position precisely. The pipeline is tested on a number of CCD images obtained from a 1-m telescope administered by Xinjiang Astronomical Observatory while observing a GPS satellite. The results show that the position measurement error of the background stars is around 0.1 pixel, while the time cost for a single frame is about 7.5 s; hence the reliability and accuracy of our method are demonstrated. In addition, our method shows a versatile and feasible way to perform space debris observation utilizing non-dedicated telescopes, which means more sensors could be involved and the ability to perform surveys could be improved.

The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and of the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in activation analyses. Finer discretization, however, incurs large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution and operates in O(1) time, whereas the simpler direct discrete method requires O(log n) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling, particularly with conformal unstructured meshes, where the uniform sampling approach cannot be applied. (authors)
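
The alias method admits a compact illustration. The sketch below uses Vose's standard O(n) table construction and the O(1) draw (one uniform column pick plus one biased coin flip); the three-outcome distribution is an arbitrary example, not a voxel source from the paper.

```python
import random

def build_alias(probs):
    """Vose's O(n) construction of alias tables for a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - scaled[s]          # donate mass to fill column s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def sample(prob, alias):
    """O(1) per draw: one uniform column pick plus one biased coin flip."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

random.seed(7)
prob, alias = build_alias([0.5, 0.3, 0.2])
draws = [sample(prob, alias) for _ in range(100000)]
print([round(draws.count(k) / 1e5, 2) for k in range(3)])  # ~[0.5, 0.3, 0.2]
```

Because each draw costs a fixed amount of work regardless of the number of outcomes, the per-sample cost does not grow as the voxel mesh is refined, which is exactly what makes voxel sampling competitive with uniform volume sampling.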

Recently, the use of programmable DNA-binding proteins such as ZFP/ZFNs, TALE/TALENs and CRISPR/Cas has produced unprecedented advances in gene targeting and genome editing in prokaryotes and eukaryotes. These advances allow researchers to specifically alter genes, reprogram epigenetic marks, generate site-specific deletions and potentially cure diseases. Unlike previous methods, these precision genetic modification techniques (PGMs) are specific, efficient, easy to use and economical. Here we discuss the capabilities and pitfalls of PGMs and highlight the recent, exciting applications of PGMs in molecular biology and crop genetic engineering. Further improvement of the efficiency and precision of PGM techniques will enable researchers to precisely alter gene expression and biological/chemical pathways, probe gene function, modify epigenetic marks and improve crops by increasing yield, quality and tolerance to limiting biotic and abiotic stress conditions. PMID:24510124

We have developed and implemented a serial experiment for an undergraduate molecular cloning laboratory course for students majoring in biotechnology. The Pseudomonas putida xylE gene, encoding catechol 2,3-dioxygenase, was manipulated to teach molecular biology techniques. The integration of cloning, expression, and enzyme assay gave students a chance…

Internal coordinate molecular dynamics (ICMD) methods provide a more natural description of a protein by using bond, angle, and torsional coordinates instead of a Cartesian coordinate representation. Freezing high-frequency bonds and angles in the ICMD model gives rise to constrained ICMD (CICMD) models. There are several theoretical aspects that need to be developed to make the CICMD method robust and widely usable. In this article, we have designed a new framework for (1) initializing velocities for nonindependent CICMD coordinates, (2) efficient computation of center of mass velocity during CICMD simulations, (3) using advanced integrators such as Runge-Kutta, Lobatto, and adaptive CVODE for CICMD simulations, and (4) cancelling out the "flying ice cube effect" that sometimes arises in Nosé-Hoover dynamics. The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an implementation of a CICMD method that we have developed to study protein dynamics. GNEIMO allows for a hierarchy of coarse-grained simulation models based on the ability to rigidly constrain any group of atoms. In this article, we perform tests on the Lobatto and Runge-Kutta integrators to determine optimal simulation parameters. We also implement an adaptive coarse-graining tool using the GNEIMO Python interface. This tool enables the secondary structure-guided "freezing and thawing" of degrees of freedom in the molecule on the fly during molecular dynamics simulations and is shown to fold four proteins to their native topologies. With these advancements, we envision the use of the GNEIMO method in protein structure prediction, structure refinement, and in studying domain motion. PMID:23345138
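The classical fixed-step Runge-Kutta integration mentioned above can be shown on a toy system (a 1-D harmonic oscillator in Cartesian form, not GNEIMO's internal coordinates; the step size and duration are illustrative choices):

```python
import math

# State = (position, velocity); dynamics x'' = -x (harmonic oscillator).
def deriv(state):
    x, v = state
    return (v, -x)

def rk4_step(state, h):
    """One classical 4th-order Runge-Kutta step of size h."""
    def add(s, k, f):      # s + f*k, componentwise
        return tuple(si + f * ki for si, ki in zip(s, k))
    k1 = deriv(state)
    k2 = deriv(add(state, k1, h / 2))
    k3 = deriv(add(state, k2, h / 2))
    k4 = deriv(add(state, k3, h))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state, h = (1.0, 0.0), 0.01
for _ in range(628):                  # integrate for ~2*pi: one full period
    state = rk4_step(state, h)
print(round(state[0], 3))             # back near x = 1.0 (exact: cos(6.28))
```

The exact solution is x(t) = cos(t), and after one full period the RK4 trajectory returns to the starting position with error far below the step size; adaptive integrators such as CVODE automate the choice of h against a requested tolerance.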

The rapid diagnosis of various diseases is a critical advantage of many emerging biomedical tools. Due to advances in preventive medicine, tools for the accurate analysis of genetic mutations and associated hereditary diseases have attracted significant interest in recent years. The entire diagnostic process usually involves two critical steps, namely, sample pre-treatment and genetic analysis. The sample pre-treatment processes, such as extraction and purification of the target nucleic acids prior to genetic analysis, are essential in molecular diagnostics. The genetic analysis process may require specialized apparatus for nucleic acid amplification, sequencing and detection. Traditionally, pre-treatment of clinical biological samples (e.g. the extraction of deoxyribonucleic acid (DNA) or ribonucleic acid (RNA)) and the analysis of genetic polymorphisms associated with genetic diseases are typically lengthy and costly. These labor-intensive and time-consuming processes usually result in a high cost per diagnosis and hinder their practical applications. Moreover, the accuracy of the diagnosis may be affected owing to potential contamination from manual processing. Alternatively, due to significant advances in micro-electro-mechanical-systems (MEMS) and microfluidic technology, there are numerous miniature systems employed in biomedical applications, especially for the rapid diagnosis of genetic diseases. A number of advantages including automation, compactness, disposability, portability, lower cost, shorter diagnosis time, lower sample and reagent consumption, and lower power consumption can be realized by using these microfluidic-based platforms. As a result, microfluidic-based systems are becoming promising platforms for genetic analysis, molecular biology and the rapid detection of genetic diseases. In this review paper, microfluidic-based platforms capable of identifying genetic sequences and diagnosing genetic mutations are surveyed and reviewed.

There is currently no calibration available for the whole human mtDNA genome, incorporating both coding and control regions. Furthermore, as several authors have pointed out recently, linear molecular clocks that incorporate selectable characters are in any case problematic. We here confirm a modest effect of purifying selection on the mtDNA coding region and propose an improved molecular clock for dating human mtDNA, based on a worldwide phylogeny of > 2000 complete mtDNA genomes and calibrating against recent evidence for the divergence time of humans and chimpanzees. We focus on a time-dependent mutation rate based on the entire mtDNA genome and supported by a neutral clock based on synonymous mutations alone. We show that the corrected rate is further corroborated by archaeological dating for the settlement of the Canary Islands and Remote Oceania and also, given certain phylogeographic assumptions, by the timing of the first modern human settlement of Europe and resettlement after the Last Glacial Maximum. The corrected rate yields an age of modern human expansion in the Americas at ∼15 kya that, unlike the uncorrected clock, matches the archaeological evidence, but continues to indicate an out-of-Africa dispersal at around 55–70 kya, 5–20 ky before any clear archaeological record, suggesting the need for archaeological research efforts focusing on this time window. We also present improved rates for the mtDNA control region, and the first comprehensive estimates of positional mutation rates for human mtDNA, which are essential for defining mutation models in phylogenetic analyses. PMID:19500773

Density modification is a general standard technique which may be used to improve electron density derived from experimental phasing and also to refine densities obtained by ab initio approaches. Here, a novel method to expand density modification is presented, termed the Phantom derivative technique, which is based on non-existent structure factors and is of particular interest in molecular replacement. The Phantom derivative approach uses randomly generated ancil structures with the same unit cell as the target structure to create non-existent derivatives of the target structure, called phantom derivatives, which may be used for ab initio phasing or for refining the available target structure model. In this paper, it is supposed that a model electron density is available: it is shown that ancil structures related to the target obtained by shifting the target by origin-permissible translations may be employed to refine model phases. The method enlarges the concept of the ancil, is as efficient as the canonical approach using random ancils and significantly reduces the CPU refinement time. The results from many real test cases show that the proposed methods can substantially improve the quality of electron-density maps from molecular-replacement-based phases. PMID:27050134

Legionella bacteria are ubiquitous in freshwater aquatic systems, and humans are infected by them primarily through inhalation of contaminated aerosols. This study analyzed a total of 47 water samples from dental lines in private dental offices and in university and hospital dental clinics for Legionella using the polymerase chain reaction, direct fluorescent antibody staining and culture techniques. The typical temperature of dental waterlines (23°C), combined with Legionella's ability to form biofilms, stagnation of the water in the lines and a low chlorine residual, all potentially create a unique niche for this microorganism. PMID:8803394

Four different laser-based techniques were applied to study physical and chemical characteristics of biomolecules and dye molecules: hole-burning spectroscopy, single-molecule spectroscopy, time-resolved coherent anti-Stokes Raman spectroscopy and laser-induced fluorescence microscopy. Results from hole-burning and single-molecule spectroscopy suggested that two antenna states (C708 & C714) of photosystem I from the cyanobacterium Synechocystis PCC 6803 are connected by effective energy transfer, with a corresponding energy transfer time of ~6 ps. In addition, results from hole-burning spectroscopy indicated that the chlorophyll dimer of the C714 state has a large distribution of dimer geometries. Direct observation of vibrational peaks of coumarin 153 and their evolution in the electronic excited state was demonstrated using fs/ps CARS, a variation of time-resolved coherent anti-Stokes Raman spectroscopy. In three different solvents (methanol, acetonitrile and butanol), a vibrational peak related to the stretch of the carbonyl group exhibits different relaxation dynamics. Laser-induced fluorescence microscopy, together with biomimetic containers (liposomes), allows the measurement of the enzymatic activity of individual alkaline phosphatase molecules from bovine intestinal mucosa without potential interference from glass surfaces. The results showed a wide distribution of enzyme reactivity; protein structural variation is one of the major factors responsible for this highly heterogeneous behavior.

There are close to 20,000 cataloged manmade objects in space, the large majority of which are not active, functioning satellites. These are tracked by phased array and mechanical radars and ground and space-based optical telescopes, collectively known as the Space Surveillance Network (SSN). A better SSN schedule of observations could, using exactly the same legacy sensor resources, improve space catalog accuracy through more complementary tracking, provide better responsiveness to real-time changes, better track small debris in low earth orbit (LEO) through efficient use of applicable sensors, efficiently track deep space (DS) frequent revisit objects, handle increased numbers of objects and new types of sensors, and take advantage of future improved communication and control to globally optimize the SSN schedule. We have developed a scheduling algorithm that takes as input the space catalog and the associated covariance matrices and produces a globally optimized schedule for each sensor site as to what objects to observe and when. This algorithm is able to schedule more observations with the same sensor resources and have those observations be more complementary, in terms of the precision with which each orbit metric is known, to produce a satellite observation schedule that, when executed, minimizes the covariances across the entire space object catalog. If used operationally, the results would be significantly increased accuracy of the space catalog with fewer lost objects with the same set of sensor resources. This approach inherently can also trade-off fewer high priority tasks against more lower-priority tasks, when there is benefit in doing so. Currently the project has completed a prototyping and feasibility study, using open source data on the SSN's sensors, that showed significant reduction in orbit metric covariances. The algorithm techniques and results will be discussed along with future directions for the research.
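The covariance-driven scheduling idea can be sketched as a greedy loop: each observation is modeled as a Kalman covariance update, and each sensor slot is given to the object whose observation most shrinks the summed covariance trace. The measurement model and function names below are illustrative assumptions, not the project's actual (globally optimizing) algorithm:

```python
import numpy as np

def covariance_after_obs(P, H, R):
    # Kalman measurement update of the covariance:
    # P' = P - P H^T (H P H^T + R)^-1 H P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return P - K @ H @ P

def greedy_schedule(covs, H, R, n_slots):
    """Assign each sensor slot to the object whose observation
    most reduces the total covariance trace across the catalog."""
    covs = [P.copy() for P in covs]
    schedule = []
    for _ in range(n_slots):
        # predicted trace reduction for observing each object once
        gains = [np.trace(P) - np.trace(covariance_after_obs(P, H, R))
                 for P in covs]
        best = int(np.argmax(gains))
        covs[best] = covariance_after_obs(covs[best], H, R)
        schedule.append(best)
    return schedule, covs
```

With two objects, one far less certain than the other, the first slot goes to the uncertain object and the second to the other, mirroring the "complementary tracking" goal.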

When scanning with deoxyglucose, the crucial step in quantitation is the determination of the glucose utilization rate, R, from tissue uptake data. R is conventionally calculated using nominal rate constants k₁–k₄, which are needed to correct for free deoxyglucose in the tissue at the time of the scan. In general, the resulting R is not consistent with these nominal rate constants, so the answer is necessarily in error. By adjusting the rate constants for consistency, recalculating R, and repeating as necessary, an improvement in accuracy should be obtained. The method reported here iterates through modification of the third rate constant, k₃, since its value is determined by the hexokinase reaction, which is considered to be the rate-limiting step. Data from a representative sampling of the more than 150 patients scanned during the past year have been analyzed. As the glucose utilization rate moves away from the nominal rate for a subject, the self-consistency process developed by the iterative technique modifies the quoted rate by an extra 2% per 10% change in R. Further, the percentage change in k₃ varies approximately linearly, but at a rate roughly twice that of the change in R. This modification indeed corresponds to an improvement in accuracy insofar as the enzymatic reaction described by k₃ is the primary source of change in glucose kinetics for the tissue in question. The same iterative procedure could be used with other assumptions about the way the rate constants vary.
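The self-consistency loop can be illustrated with a toy model. Assuming the steady-state influx relation R = Cp·k₁k₃/(k₂+k₃) (a Sokoloff-type expression; the operational equation actually used in scanning is more elaborate), adjusting k₃ until the modeled rate matches the measured one is a simple fixed-point iteration. All parameter values here are hypothetical:

```python
def k3_for_rate(R_target, k1, k2, Cp, tol=1e-10, max_iter=100):
    """Solve R = Cp * k1*k3/(k2+k3) for k3 by fixed-point iteration,
    mimicking the self-consistency loop: adjust k3, recompute R, repeat."""
    k3 = 0.05  # hypothetical nominal starting value
    for _ in range(max_iter):
        R_model = Cp * k1 * k3 / (k2 + k3)
        # proportional correction of k3 toward the target rate
        k3_new = k3 * R_target / R_model
        if abs(k3_new - k3) < tol:
            break
        k3 = k3_new
    return k3
```

For this simple model the fixed point can be checked analytically: with k₁ = 0.1, k₂ = 0.15, Cp = 5 and a target R of 0.2, the consistent k₃ is R·k₂/(Cp·k₁ − R) = 0.1.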

Bioluminescence is a ubiquitous imaging modality for visualizing biological processes in vivo. This technique employs visible light and interfaces readily with most cell and tissue types, making it a versatile technology for preclinical studies. Here we review basic bioluminescence imaging principles, along with applications of the technology that are relevant to the medicinal chemistry community. These include noninvasive cell tracking experiments, analyses of protein function, and methods to visualize small molecule metabolites. In each section, we also discuss how bioluminescent tools have revealed insights into experimental therapies and aided drug discovery. Last, we highlight the development of new bioluminescent tools that will enable more sensitive and multi-component imaging experiments and, thus, expand our broader understanding of living systems.

Microsporidia are now recognized as important pathogens of AIDS patients; the ability of these parasites to cause disease in immunocompetent persons is still being elucidated. Improved diagnostic tests for microsporidial infection are continually being sought in order to avoid laborious electron microscopy studies that require invasively acquired biopsy specimens. Modified trichrome or chemofluorescent stains are useful for detecting microsporidia in bodily fluids and stool specimens, but they do not allow for speciation of microsporidia. Polymerase chain reaction with specific primers will allow the detection and speciation of microsporidia in biopsy tissue, bodily fluids, and stool specimens. PMID:8903228

The NASA White Sands Test Facility (WSTF) has an ongoing effort to reduce or eliminate usage of cleaning solvents such as CFC-113 and its replacements. These solvents are used in the final clean and cleanliness verification processes for flight and ground support hardware, especially for oxygen systems where organic contaminants can pose an ignition hazard. For the final cleanliness verification in the standard process, the equivalent of one square foot of surface area of parts is rinsed with the solvent, and the final 100 mL of the rinse is captured. The amount of nonvolatile residue (NVR) in the solvent is determined by weight after the evaporation of the solvent. An improved process of sampling this rinse, developed at WSTF, requires evaporation of less than 2 mL of the solvent to make the cleanliness verification. Small amounts of the solvent are evaporated in a clean stainless steel cup, and the cleanliness of the stainless steel cup is measured using a commercially available surface quality monitor. The effectiveness of this new cleanliness verification technique was compared to the accepted NVR sampling procedures. Testing with known contaminants in solution, such as hydraulic fluid, fluorinated lubricants, and cutting and lubricating oils, was performed to establish a correlation between amount in solution and the process response. This report presents the approach and results and discusses the issues in establishing the surface quality monitor-based cleanliness verification.

Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay a debt in principal or interest, measured in terms of the probability of default. Many models have been developed to estimate credit risk; rating agencies, which publish assessments of default probabilities and transition probabilities for various firms in their annual reports, date back to the 19th century. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses techniques developed in the physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. PMID:23163724
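A minimal sketch of accuracy-weighted combination: pool each agency's default-probability estimate in log-odds space, weighted by its historical accuracy. This is a simplified stand-in for the article's full Bayesian treatment; the function name and weighting scheme are illustrative assumptions:

```python
import math

def pooled_default_prob(estimates, accuracies):
    """Combine agency default-probability estimates by accuracy-weighted
    log-odds pooling (a simple proxy for Bayesian model combination)."""
    num = sum(w * math.log(p / (1 - p)) for p, w in zip(estimates, accuracies))
    logit = num / sum(accuracies)
    return 1.0 / (1.0 + math.exp(-logit))
```

Agreement between agencies is preserved, while disagreement is resolved toward the historically more accurate source.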

While pursuing a study of lysozyme levels in human tears, the authors were dissatisfied with the collection systems previously used. We therefore constructed a new, improved apparatus for tear collection using a thin, flexible polyethylene tube attached to a standard 25-gauge 1.0 cc tuberculin syringe. This system is superior to previously designed apparatuses in that it permits the collection of a relatively large volume of tears in a short time without risking damage to delicate ocular structures. This promises to enhance the reliability of tear chemistry research without significantly adulterating the chemical composition of the samples. Typical results for total protein, levels of IgA and lysozyme, and Minimum-Inhibitory-Concentration bioassay of tears are presented to support the authors' position that standard chemical assays are not altered. An additional bonus inherent in this technique is the potential for obtaining larger populations of experimental subjects due to the low level of irritation that subjects experience when tears are sampled. This apparatus is easily amenable to clinical and medical diagnostic studies of tear chemistry in patients suffering from various eye diseases. PMID:6512146

Internal coordinate molecular dynamics (ICMD) methods provide a more natural description of a protein by using bond, angle and torsional coordinates instead of a Cartesian coordinate representation. Freezing high frequency bonds and angles in the ICMD model gives rise to constrained ICMD (CICMD) models. There are several theoretical aspects that need to be developed in order to make the CICMD method robust and widely usable. In this paper we have designed a new framework for 1) initializing velocities for non-independent CICMD coordinates, 2) efficient computation of center of mass velocity during CICMD simulations, 3) using advanced integrators such as Runge-Kutta, Lobatto and adaptive CVODE for CICMD simulations, and 4) cancelling out the “flying ice cube effect” that sometimes arises in Nosé-Hoover dynamics. The Generalized Newton-Euler Inverse Mass Operator (GNEIMO) method is an implementation of a CICMD method that we have developed to study protein dynamics. GNEIMO allows for a hierarchy of coarse-grained simulation models based on the ability to rigidly constrain any group of atoms. In this paper, we perform tests on the Lobatto and Runge-Kutta integrators to determine optimal simulation parameters. We also implement an adaptive coarse graining tool using the GNEIMO Python interface. This tool enables the secondary structure-guided “freezing and thawing” of degrees of freedom in the molecule on the fly during MD simulations, and is shown to fold four proteins to their native topologies. With these advancements we envision the use of the GNEIMO method in protein structure prediction, structure refinement, and in studying domain motion. PMID:23345138
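Both item 2 above and the "flying ice cube" fix hinge on the center-of-mass velocity: energy that leaks into rigid-body translation must be identified and removed. A minimal sketch of COM-drift removal from a velocity set (an illustrative fragment, not the GNEIMO implementation) is:

```python
import numpy as np

def remove_com_velocity(velocities, masses):
    """Subtract the mass-weighted center-of-mass velocity so that
    accumulated COM drift (the 'flying ice cube' artifact) is cancelled."""
    masses = np.asarray(masses, dtype=float)
    v_com = (masses[:, None] * velocities).sum(axis=0) / masses.sum()
    return velocities - v_com
```

After the correction the mass-weighted mean velocity is exactly zero, so no kinetic energy resides in overall translation.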

Decreasing power consumption in small devices such as handhelds, cell phones and high-performance processors is now one of the most critical design concerns. On-chip cache memories dominate the chip area in microprocessors, hence the need for power-efficient cache memories. Caching is the simplest cost-effective method to attain a high-speed memory hierarchy, and its performance is extremely critical for high-speed computers. The cache is used by the microprocessor to bridge the performance gap between the processor and main memory (RAM); memory bandwidth is frequently a bottleneck that can significantly limit peak throughput. In the design of any cache system, the trade-offs of area/cost, performance, power consumption, and thermal management must be taken into consideration. Previous work has mainly concentrated on performance and area/cost constraints. More recent work has focused on low-power design, especially for portable devices and media-processing systems; however, less research has been done on the relationship between heat management, leakage power and cost per die. Lately, the focus of power dissipation in new generations of microprocessors has shifted from dynamic power to idle power, a previously underestimated form of power loss that drains battery charge and forces shutdown too early through wasted energy. The problem has been aggravated by aggressive process scaling, a device-level method originally used by designers to enhance performance, reduce dissipation and shrink increasingly dense digital circuits. This dissertation studies the impact of hotspots in the cache memory on leakage consumption and on microprocessor reliability and durability. The work first shows that by eliminating hotspots in the cache memory, leakage power is reduced and reliability is therefore improved. The second technique studied is data quality management that improves the quality of the data

In most imaging methods, contrast is generated either by physical properties of the sample (Differential Interference Contrast, Phase Contrast) or by fluorescent labels that are localized to a particular protein or organelle. Standard Raman and infrared methods for obtaining images are based upon the intrinsic vibrational properties of molecules, and thus obviate the need for attached fluorophores. Unfortunately, they have significant limitations for live-cell imaging. However, an active Raman method, called Coherent Anti-Stokes Raman Scattering (CARS), is well suited for microscopy, and provides a new means for imaging specific molecules. Vibrational imaging techniques, such as CARS, avoid the photobleaching and photo-induced toxicity often associated with the use of fluorescent labels in live cells. Because the laser configuration needed to implement CARS technology is similar to that used in other multiphoton microscopy methods, such as two-photon fluorescence and harmonic generation, it is possible to combine imaging modalities, thus generating simultaneous CARS and fluorescence images. A particularly powerful aspect of CARS microscopy is its ability to selectively image deuterated compounds, thus allowing the visualization of molecules, such as lipids, that are chemically indistinguishable from the native species.

Computational chemistry has always played a key role in anti-viral drug development. The challenges, and the rapid rise in public interest when a virus becomes a threat, have significantly influenced computational drug discovery. The most obvious example is anti-AIDS research, where HIV protease and reverse transcriptase have triggered enormous efforts in developing and improving computational methods. Methods applied to anti-viral research include (i) ligand-based approaches that rely on known active compounds to extrapolate biological activity, such as machine learning techniques or classical QSAR, (ii) structure-based methods that rely on an experimentally determined 3D structure of the targets, such as molecular docking or molecular dynamics, and (iii) universal approaches that can be applied in a structure- or ligand-based way, such as 3D QSAR or 3D pharmacophore elucidation. In this review we summarize these molecular modeling approaches as they have been applied to fight viral diseases and highlight their importance for anti-viral research. We discuss the role of computational chemistry in the development of small molecules as agents against HIV integrase, HIV-1 protease, HIV-1 reverse transcriptase, the influenza virus M2 channel protein, influenza virus neuraminidase, the SARS coronavirus main proteinase and spike protein, thymidine kinases of herpes viruses, hepatitis C virus proteins and other flaviviruses, as well as human rhinovirus coat protein and proteases, and other picornaviridae. We highlight how computational approaches have helped in discovering anti-viral activities of natural products and give an overview of polypharmacology approaches that help to optimize drugs against several viruses or to optimize the metabolic profile of an anti-viral drug. PMID:21303343

A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

The Direct Simulation Monte Carlo (DSMC) method typically used to model thermochemical nonequilibrium rarefied gases requires accurate total collision cross sections, reaction probabilities, and molecular internal energy exchange models. However, the baseline total cross sections are often determined from extrapolations of relatively low-temperature viscosity data, reaction probabilities are defined such that experimentally determined equilibrium reaction rates are replicated, and internal energy relaxation models are phenomenological in nature. Therefore, these models have questionable validity in modeling strongly nonequilibrium gases with temperatures greater than those possible in experimental test facilities. To rectify this deficiency, the Molecular Dynamics/Quasi-Classical Trajectories (MD/QCT) method can be used to accurately compute total collision cross sections, reaction probabilities, and internal energy exchange models based on first principles for hypervelocity collision conditions. In this thesis, MD/QCT-based models were used to improve simulations of two unique nonequilibrium rarefied gas systems: the Ionian atmosphere and hypersonic shocks in Earth's atmosphere. The Jovian plasma torus flows over Io at ≈ 57 km/s, inducing high-speed collisions between atmospheric SO2 and the hypervelocity plasma's O atoms and ions. The DSMC method is well-suited to model the rarefied atmosphere, so MD/QCT studies are therefore conducted to improve DSMC collision models of the critical SO2-O collision pair. The MD/QCT trajectory simulations employed a new potential energy surface that was developed using a ReaxFF fit to a set of ab initio calculations. Compared to the MD/QCT results, the baseline DSMC models are found to significantly under-predict total cross sections, use reaction probabilities that are unrealistically high, and give unphysical internal energies above the dissociation energy for non-reacting inelastic collisions and under-predicts post

Many newly discovered drug molecules have low aqueous solubility, which results in low bioavailability. One way to improve their dissolution is to formulate them as nanoparticles, which have high specific surface areas, consequently increasing the dissolution rate and solubility. Nanoparticles can be produced via top-down or bottom-up methods. Top-down techniques such as wet milling and high-pressure homogenisation involve reducing large particles to nano-sizes; some pharmaceutical products made by these processes have been marketed. Bottom-up methods such as precipitation and controlled droplet evaporation form nanoparticles from molecules in solution. To minimise aggregation upon drying and promote redispersion of the nanoparticles upon reconstitution or administration, hydrophilic matrix formers are added to the formulation. However, the nanoparticles eventually agglomerate after dispersion in the liquid, which hinders dissolution. Currently there is no pharmacopoeial method specified for nanoparticles. Amongst the current dissolution apparatus available for powders, the flow-through cell has been shown to be the most suitable. Regulatory and pharmacopoeial standards should be established in the future to standardise the dissolution testing of nanoparticles. More nanoparticle formulations of new hydrophobic drugs are expected to be developed in the future with the advancement of nanotechnology. However, the agglomeration problem is inherent and difficult to overcome, so the benefit of dissolution enhancement often cannot be fully realised. On the other hand, chemical strategies such as modifying the parent drug molecule to form a more soluble salt form, prodrug, or cyclodextrin complexation are well established and have been shown to be effective in enhancing dissolution. Thus the value of nanoformulations needs to be interpreted in the light of their limitations. Chemical approaches should also be considered in new product development. PMID
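The surface-area argument can be made concrete with the Noyes-Whitney relation, rate = D·A·(Cs − C)/h: at fixed drug mass, the total surface area of monodisperse spheres scales as 1/r, so halving the particle radius doubles the dissolution rate. A hedged sketch (parameter names and values are illustrative):

```python
def dissolution_rate(mass_g, radius_m, density, D, Cs, C, h):
    """Noyes-Whitney estimate: rate = D*A*(Cs - C)/h, where A is the
    total surface area of monodisperse spheres of the given radius
    at fixed total mass (A = 3*V/r, so A is proportional to 1/r)."""
    pi = 3.141592653589793
    volume = mass_g / density
    n_particles = volume / ((4.0 / 3.0) * pi * radius_m ** 3)
    area = n_particles * 4.0 * pi * radius_m ** 2
    return D * area * (Cs - C) / h
```

This is why nano-sizing boosts dissolution, and also why agglomeration, which effectively raises r, erodes the gain.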

The translocation of nucleic acid polymers across cell membranes is a fundamental requirement for complex life and has greatly contributed to genomic molecular evolution. The diverse pathways that have evolved to transport DNA and RNA across membranes include protein receptors, active and passive transporters, endocytic and pinocytic processes, and various types of nucleic acid conducting channels known as nanopores. We have developed a series of experimental techniques, collectively known as "Wicking", that greatly improve the biophysical analysis of nucleic acid transport through protein nanopores in planar lipid bilayers. We verified the Wicking method using numerous types of classical ion channels, including the well-studied chloride-selective channel CLIC1. We used the Wicking technique to reconstitute α-hemolysin and found that DNA translocation events of types A and B could be routinely observed. Furthermore, measurable differences were observed in the duration of blockade events as DNA length and composition were varied, consistent with previous reports. Finally, we tested the ability of the Wicking technique to reconstitute the dsRNA transporter Sid-1. Exposure to dsRNAs of increasing length and complexity showed measurable differences in the current transitions, suggesting that the charge carrier was dsRNA. However, the translocation events occurred so infrequently that a meaningful electrophysiological analysis was not possible. Alterations in the lipid composition of the bilayer had a minor effect on the frequency of translocation events, but not to such a degree as to permit rigorous statistical analysis. We conclude that in many instances the Wicking method is a significant improvement to the lipid bilayer technique, but is not an optimal method for analyzing transport through Sid-1. Further refinements to the Wicking method might have future applications in high throughput DNA sequencing, DNA computation, and

The electrochemical emission spectroscopy (EES) technique is a newly developed on-line corrosion monitoring technique, which is capable of detecting localized corrosion as well as measuring uniform corrosion. The main difference between this technique and the traditional electrochemical noise technique is the use of an inert microelectrode to sense the current signal from a working electrode, instead of using two identical working electrodes to generate the current signal. In this paper, the ability of the EES technique to monitor pitting corrosion is evaluated. Pitting corrosion is generated on three systems: stainless steel types 304 and 316 in aerated 3% NaCl solution at 50°C, and stainless steel type 304 in 6% FeCl₃ solution at room temperature. In all cases, the onset of pitting corrosion is clearly indicated in both the potential and current spectra. A parameter called the corrosion admittance, defined within the EES technique, is capable of indicating instantaneous localized corrosion activity.

Functional characterisation of the genes regulating metal(loid) homeostasis in plants is a major focus of crop biofortification, phytoremediation, and food security research. This paper focuses on the potential for advancing plant metal(loid) research by combining molecular biology and synchrotron-based techniques. Recent advances in X-ray focussing optics and fluorescence detection have greatly improved the potential of synchrotron techniques for plant science research, allowing metal(loids) to be imaged in vivo in hydrated plant tissues at sub-micron resolution. Laterally resolved metal(loid) speciation can also be determined. By using molecular techniques to probe the location of gene expression and protein localisation, and combining this with synchrotron-derived data, functional information can be effectively and efficiently assigned to specific genes. This paper provides a review of the state of the art in this field, and provides examples of how synchrotron-based methods can be combined with molecular techniques to facilitate functional characterisation of genes in planta. PMID:22200921

Background: Molecular breast imaging (MBI) is a novel breast imaging technique that uses Cadmium Zinc Telluride (CZT) gamma cameras to detect the uptake of Tc-99m sestamibi in breast tumors. Current techniques employ an administered dose of 20-30 mCi Tc-99m, delivering an effective dose of 6.5-10 mSv to the body, ~5-10 times that of mammography. The goal of this study was to reduce the radiation dose by a factor of 5-10 while maintaining image quality. Methods: A total of 4 dose reduction schemes were evaluated: a) optimized collimation, b) improved utilization of the energy spectrum below the photopeak, c) an adaptive geometric mean algorithm developed for combining images from opposing detectors, and d) non-local means filtering (NLMF) for noise reduction and image enhancement. Validation of the various schemes was performed using a breast phantom containing a variety of tumors, with activity matched to that observed in clinical studies. Results: Development of tungsten collimators with holes matched to the CZT pixels yielded a 2.1-2.9-fold gain in system sensitivity. Improved utilization of the energy spectrum yielded a 1.5-2.0-fold gain in sensitivity. Development of a modified geometric mean algorithm yielded a 1.4-fold reduction in image noise while retaining contrast. Images of the breast phantom demonstrated that a factor of 5 reduction in dose was achieved. Additional refinements to the NLMF should enable a further factor of 2 reduction in dose. Conclusion: Significant dose reduction in MBI, to levels comparable to mammography, can be achieved while maintaining image quality.
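Scheme (c) combines counts from opposing detectors. A plain (non-adaptive) pixelwise geometric mean, shown below as an illustrative simplification of the adaptive algorithm described above, already suppresses depth-dependent differences between the two views:

```python
import numpy as np

def geometric_mean_image(img_a, img_b, eps=1e-6):
    """Combine count images from opposing detectors by pixelwise
    geometric mean; eps guards against zero-count pixels."""
    return np.sqrt(np.maximum(img_a, eps) * np.maximum(img_b, eps))
```

A lesion near one detector appears bright in one view and dim in the other; the geometric mean yields a depth-balanced intermediate value.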

The authors describe a technique for determining mean lake depth utilizing a systematically aligned dot grid. This technique is, on the average, 55% faster than the traditional planimeter methods, depending on the type of planimeter and the size and complexity of the lake. No det...
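The dot-grid estimate itself is simple: the mean depth is the average of the sounded depths at grid dots falling inside the shoreline, with dots outside the lake discarded. A minimal sketch (representation of outside dots as None is an assumption):

```python
def mean_depth_dot_grid(depths):
    """Mean lake depth from a systematically aligned dot grid:
    average the sounded depth at every dot inside the shoreline
    (dots outside the lake are marked None and skipped)."""
    inside = [d for d in depths if d is not None]
    return sum(inside) / len(inside)
```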

Highlights: • Reducing atomic masses by 10-fold vastly improves sampling in MD simulations. • CLN025 folded in 4 of 10 × 0.5-μs MD simulations when masses were reduced by 10-fold. • CLN025 folded as early as 96.2 ns in 1 of the 4 simulations that captured folding. • CLN025 did not fold in 10 × 0.5-μs MD simulations when standard masses were used. • Low-mass MD simulation is a simple and generic sampling enhancement technique. - Abstract: CLN025 is one of the smallest fast-folding proteins. Until now it has not been reported that CLN025 can autonomously fold to its native conformation in a classical, all-atom, and isothermal–isobaric molecular dynamics (MD) simulation. This article reports the autonomous and repeated folding of CLN025 from a fully extended backbone conformation to its native conformation in explicit solvent in multiple 500-ns MD simulations at 277 K and 1 atm with the first folding event occurring as early as 66.1 ns. These simulations were accomplished by using AMBER forcefield derivatives with atomic masses reduced by 10-fold on Apple Mac Pros. By contrast, no folding event was observed when the simulations were repeated using the original AMBER forcefields of FF12SB and FF14SB. The results demonstrate that low-mass MD simulation is a simple and generic technique to enhance configurational sampling. This technique may propel autonomous folding of a wide range of miniature proteins in classical, all-atom, and isothermal–isobaric MD simulations performed on commodity computers—an important step forward in quantitative biology.
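The speed-up has a simple harmonic rationale: vibrational periods scale as √m, so reducing all masses 10-fold compresses the dynamics by √10 ≈ 3.16. As a rough rule of thumb (an inference from harmonic scaling, not a conversion stated in the article), a 500-ns low-mass run samples roughly as much motion as ~1580 ns of standard-mass dynamics:

```python
import math

def effective_sampling_ns(simulated_ns, mass_reduction):
    """Harmonic-oscillator estimate of the standard-mass time
    effectively sampled by a low-mass run: periods scale as sqrt(m),
    so dynamics speed up by sqrt(mass_reduction)."""
    return simulated_ns * math.sqrt(mass_reduction)
```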

Celiac nerve blocks have been performed without radiologic guidance, but recently several groups have reported computed tomography (CT)-guided techniques. The authors present a new technique of CT-guided celiac nerve block using an 18 gauge Teflon catheter, which permits a test block dose and permanent alcohol block with one procedure. The results of this new technique were very encouraging. Of nine cancer patients who had the test block, seven had good pain relief; these same patients had good pain control with the permanent block. Of six patients with pancreatitis, six had good pain relief from the test block, and three had some long-term relief from the permanent block.

Novel molecular ecological techniques were used to study changes in microbial community structure and population during degradation of polylactide (PLA)/organically modified layered silicates (OMLS) nanocomposites. Cloned gene sequences belonging to members of the phyla Actinobacteria and Ascomycota comprised the most dominant groups of microorganisms during biodegradation of PLA/OMLS nanocomposites. Given their numerical abundance, members of these microbial groups are likely to play an important role during the biodegradation process. This paper presents new insights into the biodegradability of PLA/OMLS nanocomposites and highlights the importance of using novel molecular ecological techniques for in situ identification of new microorganisms involved in biodegradation of polymeric materials. PMID:19148900

The literature on the use of hypnosis in an educational setting is briefly reviewed, and a hypnotic approach involving the use of the clenched fist as a conditioned trigger to improve examination performance is described. A study of 60 high school students indicates that the approach can improve test outcomes. (TJH)

This work presents an improved version of the Green's function molecular dynamics method (Kong et al., 2009; Campañá and Müser, 2004 [1,2]), which enables one to study the elastic response of a three-dimensional solid to an external stress field by taking into consideration only atoms near the surface. In the previous implementation, the effective elastic coefficients measured at the Γ-point were altered to reduce finite size effects: their eigenvalues corresponding to the acoustic modes were set to zero. This scheme was found to work well for simple Bravais lattices as long as only atoms within the last layer were treated as Green's function atoms. However, it failed to function as expected in all other cases. It turns out that a violation of the acoustic sum rule for the effective elastic coefficients at Γ (Kong, 2010 [3]) was responsible for this behavior. In the new version, the acoustic sum rule is enforced by adopting an iterative procedure, which is found to be physically more meaningful than the previous one. In addition, the new algorithm allows one to treat lattices with bases, and the Green's function slab is no longer confined to one layer. New version program summary. Program title: FixGFC/FixGFMD v1.12; Catalogue identifier: AECW_v1_1; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECW_v1_1.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 206 436; No. of bytes in distributed program, including test data, etc.: 4 314 850; Distribution format: tar.gz; Programming language: C++; Computer: All; Operating system: Linux; Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives; RAM: Depends on the problem; Classification: 7.7; External routines: LAMMPS (http://lammps.sandia.gov/), MPI (http
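The acoustic sum rule at the heart of the fix states that a rigid translation of the whole solid must generate no net force, i.e. each row of 3×3 blocks of the effective elastic-coefficient matrix must sum to zero. A hedged numpy sketch (a one-shot diagonal reset for illustration, not the paper's iterative procedure) shows the constraint being imposed and verified:

```python
import numpy as np

def enforce_asr(phi):
    """Acoustic sum rule: reset each atom's self-interaction block so that
    every row of blocks sums to zero, phi[i, i] = -sum_{j != i} phi[i, j].
    (A one-shot reset for illustration, not the paper's iterative scheme.)"""
    phi = phi.copy()
    n = phi.shape[0]
    for i in range(n):
        phi[i, i] = -sum(phi[i, j] for j in range(n) if j != i)
    return phi

# Toy check: random 3x3 blocks for 5 atoms, then verify that a rigid
# translation u of the whole lattice produces zero force on every atom.
rng = np.random.default_rng(0)
phi = enforce_asr(rng.normal(size=(5, 5, 3, 3)))
u = np.array([1.0, -2.0, 0.5])
forces = -np.einsum('ijab,b->ia', phi, u)   # F_i = -sum_j phi[i, j] @ u
# max |force| is zero to machine precision once the sum rule holds
```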

Preliminary studies from our laboratory showed that molecular breast imaging (MBI) can reliably detect tumors <2 cm in diameter. This study extends our work to a larger patient population and examines the technical factors that influence the ability of MBI to detect small breast tumors. Following injection of 740 MBq Tc-99m sestamibi, MBI was performed on 100 patients scheduled for biopsy of a lesion suspicious for malignancy that measured <2 cm on mammography or sonography. Using a small field of view gamma camera, patients were imaged in the standard mammographic views using light pain-free compression. Subjective discomfort, breast thickness, the amount of breast tissue in the detector field of view, and breast counts per unit area were measured and recorded. Follow-up was obtained in 99 patients; 53 patients had 67 malignant tumors confirmed at surgery. Of these, 57 were detected by MBI (sensitivity 85%). Sensitivity was 29%, 86%, and 97% for tumors <5 mm, 6-10 mm, and ≥11 mm in diameter, respectively. In seven patients, MBI identified eight additional mammographically occult tumors. Of 47 patients with no evidence of cancer at biopsy or surgery, there were 36 true negative and 11 false positive scans on MBI. MBI has potential for the regular detection of malignant breast tumors less than 2 cm in diameter. Work in progress to optimize the imaging parameters and technique may further improve sensitivity and specificity. PMID:17214787

Chlamydia trachomatis is a major cause of sexually transmitted bacterial disease worldwide. C. trachomatis is an intracellular bacterium and its growth in vitro requires cell culture facilities. The diagnosis is based on antigen detection and more recently on molecular nucleic acid amplification techniques (NAAT) that are considered fast, sensitive, and specific. In Belgium, External Quality Assessment (EQA) for the detection of C. trachomatis in urine by NAAT was introduced in 2008. From January 2008 to June 2012, nine surveys were organized. Fifty-eight laboratories participated in at least one survey. The EQA panels included positive and negative samples. The overall accuracy was 75.4%, the overall specificity was 97.6%, and the overall sensitivity was 71.4%. Two major issues were observed: the low sensitivity (45.3%) for the detection of low concentration samples and the incapacity of several methods to detect the Swedish variant of C. trachomatis. The reassuring point was that the overall proficiency of the Belgian laboratories tended to improve over time. PMID:26316982
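The pooled figures quoted above are standard confusion-matrix statistics. A small sketch (with hypothetical counts for illustration — the surveys' raw counts are not given in the abstract) shows how accuracy, sensitivity, and specificity are derived:

```python
def panel_metrics(tp, fp, tn, fn):
    """Pooled accuracy, sensitivity, and specificity from EQA counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,       # all correct calls / all calls
        "sensitivity": tp / (tp + fn),       # positives correctly detected
        "specificity": tn / (tn + fp),       # negatives correctly cleared
    }

# Hypothetical counts chosen only to illustrate the arithmetic:
m = panel_metrics(tp=50, fp=1, tn=41, fn=20)
# m["sensitivity"] = 50/70 ≈ 0.714, echoing the 71.4% reported above
```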

This is a quasi-experimental study applied to one group without a comparison group. It aims to determine whether the implementation of the integrative teaching technique influenced the speaking skill of students in the German Education Study Program of FKIP, Pattimura University. The research was held in the German Education…

The current experiment examined the relative advantage of an errorless learning technique over an errorful one in the acquisition of novel names for unfamiliar objects in typically developing children aged between 7 and 9 years. Errorless learning led to significantly better learning than did errorful learning. Processing speed and vocabulary…

Low-cost fabricating technique produces minute, complex air passages in fluidic devices. Air jet interactions in these function as electronic and electromechanical control systems. Wax cores are fabricated without distortion by two-wax process using nonsoluble pattern-wax and water-soluble wax. Significant steps in fabrication process are discussed.

Welding programs which show that parallel gap welding is a reliable process are discussed. When monitoring controls and nondestructive tests are incorporated into the process, parallel gap welding becomes more reliable and cost effective. The panel fabrication techniques and the HAC thermal cycling test indicate reliable product integrity. The design and building of automated tooling and fixturing for welding are discussed.

Titanium alloys are used for implant abutments onto which prostheses are attached. One major disadvantage of titanium abutments is their esthetics; the metallic gray color may show through the restorative material or through surrounding tissues. A laboratory technique using readily available household items is described that can alter the abutment color by anodization. PMID:26723096

This thesis is concerned with the development of techniques that facilitate the effective implementation of capable automatic chord transcription from music audio signals. Since chord transcriptions can capture many important aspects of music, they are useful for a wide variety of music applications and also useful for people who learn and perform…

Reports on a technique that could increase study time by reducing procrastination. Randomly selected college students (N=197) made written commitments to study for an exam. Students in the commitment condition reported significantly more study time than did students in a control group; they also performed significantly better on the exam. (RJM)

Various methods of meeting and classroom techniques are presented in this booklet. It is aimed at teachers and advisors seeking innovative ideas to encourage student interest and participation in the classroom. The methods examined include the lecture or speech, group discussion, panel discussion, colloquy, role playing, buzz session, forum,…

Microbial treatment of environmental contamination by anthropogenic halogenated organic compounds has become popular in recent decades, especially in subsurface environments. Molecular techniques such as polymerase chain reaction-based fingerprinting methods have been extensively used to closely monitor the presence and activities of dehalogenating microbes, which has also led to the discovery of new dehalogenating bacteria and novel functional genes. Nowadays, traditional molecular techniques are being further developed and optimized for higher sensitivity, specificity, and accuracy to better fit the contexts of dehalogenation. On the other hand, newly developed high-throughput techniques, such as microarray and next-generation sequencing, provide unsurpassed detection ability, which has enabled large-scale comparative genomic and whole-genome transcriptomic analysis. The aim of this review is to summarize applications of various molecular tools in the field of microbially mediated dehalogenation of various halogenated organic compounds. It is expected that traditional molecular techniques and nucleic-acid-based biomarkers will still be favoured in the foreseeable future because of their relatively low costs and high flexibility. Collective analyses of metagenomic sequencing data are still in need of information from individual dehalogenating strains and functional reductive dehalogenase genes in order to draw reliable conclusions. PMID:22070763

The GenTechnique project at Washington State University uses a networked learning environment for molecular genetics learning. The project is developing courseware featuring animation, hyper-link controls, and interactive self-assessment exercises focusing on fundamental concepts. The first pilot course featured a Web-based module on DNA…

We show how statistical learning techniques based on kriging (Gaussian Process regression) can be used for improving the predictions of classical and/or quantum scattering theory. In particular, we show how Gaussian Process models can be used for: (i) efficient non-parametric fitting of multi-dimensional potential energy surfaces without the need to fit ab initio data with analytical functions; (ii) obtaining scattering observables as functions of individual PES parameters; (iii) using classical trajectories to interpolate quantum results; (iv) extrapolation of scattering observables from one molecule to another; (v) obtaining scattering observables with error bars reflecting the inherent inaccuracy of the underlying potential energy surfaces. We argue that the application of Gaussian Process models to quantum scattering calculations may potentially elevate the theoretical predictions to the same level of certainty as the experimental measurements and can be used to identify the role of individual atoms in determining the outcome of collisions of complex molecules. We will show examples and discuss the applications of Gaussian Process models to improving the predictions of scattering theory relevant for the cold molecules research field. Work supported by NSERC of Canada.
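Points (i) and (v) can be illustrated with a bare-bones Gaussian Process regression: fit a handful of "ab initio" energies on a toy Morse-like curve and return both an interpolated mean and an error bar. This is a minimal numpy sketch with an RBF kernel and hypothetical hyperparameters, not the authors' code:

```python
import numpy as np

def rbf(a, b, ell=0.5, sig=1.0):
    """Squared-exponential kernel (hypothetical hyperparameters)."""
    return sig ** 2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_predict(x_tr, y_tr, x_te, jitter=1e-6):
    """GP posterior mean and standard deviation at test points x_te."""
    K = rbf(x_tr, x_tr) + jitter * np.eye(x_tr.size)
    Ks = rbf(x_te, x_tr)
    mean = Ks @ np.linalg.solve(K, y_tr)
    v = np.linalg.solve(K, Ks.T)
    var = rbf(x_te, x_te).diagonal() - np.sum(Ks * v.T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))   # mean and error bar

# Toy "ab initio" data: a Morse-like potential V(r) = (1 - exp(-(r - 1)))^2
V = lambda r: (1.0 - np.exp(-(r - 1.0))) ** 2
r_tr = np.linspace(0.6, 3.0, 12)
mean, err = gp_predict(r_tr, V(r_tr), np.array([1.5]))
```

The posterior variance is the "error bar reflecting the inherent inaccuracy of the underlying potential energy surface" mentioned in point (v): it grows away from the training geometries and shrinks near them.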

The feasibility and utility of using satellite data and computer-aided remote sensing analysis techniques to conduct range inventories were tested. This pilot study was focused over a 250,000 acre site in Galveston and Brazoria Counties along the Texas Gulf Coast. Rectified enlarged aircraft color infrared photographs of this site were used as the ground truth base. The different land categories were identified, delineated, and measured. Multispectral scanner (MSS) bulk data from LANDSAT-1 was received and analyzed with the Image 100 pattern recognition system. Features of interest were delineated on the image console giving the number of picture elements classified; the picture elements were converted to acreages and the accuracy of the technique was evaluated by comparison with data base results for three test sites. The accuracies for computer aided classification of coastal marshes ranged from 89% to 96%.
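The acreage conversion step above is a single scale factor per picture element. A minimal sketch, assuming the nominal LANDSAT-1 MSS ground pixel of 79 m × 57 m (an assumed figure, not stated in the text — the study's own calibration should be used in practice):

```python
# Nominal LANDSAT-1 MSS ground pixel of 79 m x 57 m: an assumed figure
# for illustration only; check mission documentation for the exact IFOV.
PIXEL_AREA_M2 = 79.0 * 57.0
M2_PER_ACRE = 4046.8564224          # square metres per acre (exact)

def pixels_to_acres(n_pixels):
    """Convert classified picture-element counts to acreage."""
    return n_pixels * PIXEL_AREA_M2 / M2_PER_ACRE
```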

This review article reports the recent progress in the development of a new group of molecule-based flow diagnostic techniques, which include molecular tagging velocimetry (MTV) and molecular tagging thermometry (MTT), for both qualitative flow visualization of thermally induced flow structures and quantitative whole-field measurements of flow velocity and temperature distributions. The MTV and MTT techniques can also be easily combined to result in a so-called molecular tagging velocimetry and thermometry (MTV&T) technique, which is capable of achieving simultaneous measurements of flow velocity and temperature distribution in fluid flows. Instead of using tiny particles, the molecular tagging techniques (MTV, MTT, and MTV&T) use phosphorescent molecules, which can be turned into long-lasting glowing marks upon excitation by photons of appropriate wavelength, as the tracers for the flow velocity and temperature measurements. The unique attraction and implementation of the molecular tagging techniques are demonstrated by three application examples, which include: (1) to quantify the unsteady heat transfer process from a heated cylinder to the surrounding fluid flow in order to examine the thermal effects on the wake instabilities behind the heated cylinder operating in mixed and forced heat convection regimes, (2) to reveal the time evolution of unsteady heat transfer and phase changing process inside micro-sized, icing water droplets in order to elucidate the underlying physics pertinent to aircraft icing phenomena, and (3) to achieve simultaneous droplet size, velocity and temperature measurements of "in-flight" droplets to characterize the dynamic and thermodynamic behaviors of flying droplets in spray flows.

To maximize the quality of sign-out documents within the internal medicine residency, a quality improvement intervention was developed and implemented. Written sign-outs were collected from general medicine ward teams and graded using an 11-point checklist; in-person feedback was then given directly to the ward teams. Documentation of many of the 11 elements improved: mental status (22% to 66%, P < .0001), decisionality (40% to 66%, P < .0001), lab/test results (63% to 69%, P < .0001), level of acuity (34% to 50%, P < .0001), anticipatory guidance (69% to 82%, P < .0001), and future plans (35% to 38%, P < .0005). The use of vague language declined (41% to 26%, P < .0001). The mean total scores improved from 7.0 to 8.2 out of a possible 11 (P < .0001). As new house staff rotated onto the services, improvement over time was sustained with 1 feedback session per team, per month. Similar interventions could be made in other programs and other institutions. PMID:24878514

Realizing the need to improve the capabilities of response personnel in dealing with cleanup operations involving contaminated sediments, the U.S. Coast Guard and the U.S. Environmental Protection Agency have jointly funded a research project to: (a) identify, characterize, and c...

In large-scale power transmission systems, predicting faults and preemptively taking corrective action to avoid them is essential to preventing rolling blackouts. The computational study of the constantly-shifting state of the power grid and its weaknesses is called contingency analysis. Multiple-contingency planning in the electrical grid is one example of a complex monitoring system where a full computational solution is operationally infeasible. We present a general framework for building and evaluating resource-aware models of filtering techniques for this type of monitoring.
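The topological core of a single-contingency (N-1) screen can be sketched in a few lines: remove each line in turn and test whether the grid stays connected. Real contingency analysis also re-solves the power flow for thermal and voltage violations; this hedged toy covers only the islanding check:

```python
def connected(n_buses, lines):
    """Depth-first connectivity test of an undirected bus/line graph."""
    adj = {b: set() for b in range(n_buses)}
    for a, b in lines:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {0}, [0]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return len(seen) == n_buses

def n_minus_1_islanding(n_buses, lines):
    """Return every line whose single outage islands part of the grid."""
    return [l for l in lines
            if not connected(n_buses, [m for m in lines if m != l])]

# 4-bus toy grid: a ring 0-1-2-0 plus a radial spur 2-3.
critical = n_minus_1_islanding(4, [(0, 1), (1, 2), (2, 0), (2, 3)])
# Only the spur is critical; losing any ring line leaves the grid whole.
```

A resource-aware filter of the kind the abstract describes would run a cheap screen like this first and reserve the expensive power-flow solutions for the contingencies it flags.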

A versatile double Langmuir probe technique has been developed by incorporating analytical fits to Laframboise's numerical results for ion current collection by biased electrodes of various sizes relative to the local electron Debye length. Application of these fits to the double probe circuit has produced a set of coupled equations that express the potential of each electrode relative to the plasma potential as well as the resulting probe current as a function of applied probe voltage. These equations can be readily solved via standard numerical techniques in order to determine electron temperature and plasma density from probe current and voltage measurements. Because this method self-consistently accounts for the effects of sheath expansion, it can be readily applied to plasmas with a wide range of densities and low ion temperature (Ti/Te ≪ 1) without requiring probe dimensions to be asymptotically large or small with respect to the electron Debye length. The presented approach has been successfully applied to experimental measurements obtained in the plume of a low-power Hall thruster, which produced a quasineutral, flowing xenon plasma during operation at 200 W on xenon. The measured plasma densities and electron temperatures were in the range of 1 × 10¹²–1 × 10¹⁷ m⁻³ and 0.5–5.0 eV, respectively. The estimated measurement uncertainty is +6%/-34% in density and ±30% in electron temperature. PMID:22852694
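For orientation, the textbook thin-sheath limit of the symmetric double probe reduces to I(V) = I_sat · tanh(V / 2Te) with Te in eV; the technique above generalizes this with Laframboise's sheath-expansion fits. A numpy-only sketch (all parameters hypothetical, not the paper's full model) that recovers Te from a synthetic sweep:

```python
import numpy as np

def fit_double_probe(V, I):
    """Least-squares fit of the thin-sheath model I = I_sat * tanh(V / (2*Te)):
    scan candidate Te values and solve the linear I_sat fit at each one."""
    best = None
    for Te in np.linspace(0.2, 10.0, 981):        # Te grid in eV, 0.01 steps
        basis = np.tanh(V / (2.0 * Te))
        i_sat = (I @ basis) / (basis @ basis)      # linear least squares
        resid = float(np.sum((I - i_sat * basis) ** 2))
        if best is None or resid < best[0]:
            best = (resid, i_sat, Te)
    return best[1], best[2]

# Synthetic sweep: "true" Te = 2.0 eV, ion saturation current 1 mA, with noise.
V = np.linspace(-20.0, 20.0, 200)
rng = np.random.default_rng(1)
I = 1e-3 * np.tanh(V / 4.0) + rng.normal(0.0, 2e-5, V.size)
I_sat_fit, Te_fit = fit_double_probe(V, I)   # recovers Te near 2 eV
```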

The use of analysis techniques such as simulation and formal verification for industrial controllers is complex in an industrial context, because these techniques can require substantial investment in skilled human resources with sufficient theoretical knowledge of those domains. This paper aims mainly to show that it is possible to obtain a timed automata model for formal verification purposes from the CAD model of a mechanical component. Companies can use this systematic approach for the analysis of industrial controller programs. The paper discusses how best to systematize these procedures; it describes only the first step of a complex process, surveys the main difficulties that may be encountered, and proposes a way of handling them. A library for formal verification purposes is obtained from original 3D CAD models using a Software as a Service (SaaS) platform, which has become a common delivery model for many applications because SaaS is typically accessed via the internet.

We demonstrate that a simple phenomenological approach can be used to simulate electronic conduction in molecular wires under thermal effects induced by the surrounding environment. This "Landauer-Büttiker probe technique" can properly replicate different transport mechanisms: phase-coherent nonresonant tunneling, ballistic behavior, and hopping conduction. Specifically, our simulations with the probe method recover the following central characteristics of charge transfer in molecular wires: (i) the electrical conductance of short wires falls off exponentially with molecular length, a manifestation of the tunneling (superexchange) mechanism. Hopping dynamics overtakes superexchange in long wires, demonstrating ohmic-like behavior. (ii) In off-resonance situations, weak dephasing effects facilitate charge transfer, but under large dephasing the electrical conductance is suppressed. (iii) At high enough temperatures, kBT/ɛB > 1/25, with ɛB as the molecular-barrier height, the current is enhanced by a thermal activation (Arrhenius) factor. However, this enhancement takes place for both coherent and incoherent electrons, and it does not readily indicate the underlying mechanism. (iv) At finite bias, dephasing effects may impede conduction in resonant situations. We further show that memory (non-Markovian) effects can be implemented within the Landauer-Büttiker probe technique to model the interaction of electrons with a structured environment. Finally, we examine experimental results of electron transfer in conjugated molecular wires and show that our computational approach can reasonably reproduce reported values and provide mechanistic information.
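Point (i) — exponential superexchange decay overtaken by ohmic hopping in long wires — can be caricatured in a few lines. The parameters below are illustrative, not fitted to any molecular wire:

```python
import math

beta, g0, hop_c = 0.8, 1.0, 0.02   # illustrative decay/prefactor constants

def wire_conductance(n_sites):
    """Toy two-channel wire: coherent tunneling decays exponentially with
    length, while incoherent hopping falls off only as 1/N (ohmic-like);
    the dominant channel carries the current."""
    tunnel = g0 * math.exp(-beta * n_sites)
    hopping = hop_c / n_sites
    return max(tunnel, hopping)

# Wire length at which hopping overtakes superexchange:
crossover = next(n for n in range(1, 100)
                 if hop_c / n > g0 * math.exp(-beta * n))
```

Below the crossover length the conductance drops steeply with every added site; beyond it the decline flattens to the gentle 1/N hopping trend, which is exactly the signature described in the abstract.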

High temperature is a major constraint to crop productivity, causing substantial reductions in yield and quality, and expected to become a more devastating factor due to global warming. A better understanding of molecular mechanisms of tolerance to high temperatures is necessary for designing and de...

A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask, molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even an alga. The alga released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

The goal of this project was to develop an improved understanding of factors governing performance and degradation of mixed-conducting SOFC cathodes. Two new diagnostic tools were developed to help achieve this goal: (1) microelectrode half-cells for improved isolation of cathode impedance on thin electrolytes, and (2) nonlinear electrochemical impedance spectroscopy (NLEIS), a variant of traditional impedance that allows workers to probe nonlinear rates as a function of frequency. After reporting on the development and efficacy of these tools, this document reports on the use of these and other tools to better understand performance and degradation of cathodes based on the mixed conductor La₁₋ₓSrₓCoO₃₋δ (LSC) on gadolinia- or samaria-doped ceria (GDC or SDC). We describe the use of NLEIS to measure O₂ exchange on thin-film LSC electrodes, and show that O₂ exchange is most likely governed by dissociative adsorption. We also describe parametric studies of porous LSC electrodes using impedance and NLEIS. Our results suggest that O₂ exchange and ion transport co-limit performance under most relevant conditions, but it is O₂ exchange that is most sensitive to processing, and subject to the greatest degradation and sample-to-sample variation. We recommend further work that focuses on electrodes of well-defined or characterized geometry, and probes the details of surface structure, composition, and impurities. Parallel work on primarily electronic conductors (LSM) would also be of benefit to developers, and to improved understanding of surface vs. bulk diffusion.

Background: Bone conduction (BC) threshold depression is not always due to sensorineural hearing loss; sometimes it is an artifact caused by middle ear pathologies and ossicular chain problems. In this research, the influence of ear surgery on bone conduction was evaluated. Materials and Methods: This study was conducted as a clinical trial. Ear surgery was performed on 83 patients classified into four categories: stapedectomy, tympanomastoid surgery, and partial or total ossicular reconstruction — Partial Ossicular Replacement Prosthesis (PORP) and Total Ossicular Replacement Prosthesis (TORP). Bone conduction thresholds were assessed at frequencies of 250, 500, 1000, 2000 and 4000 Hz before and after surgery. Results: In the stapedectomy group, the average BC threshold improved across all frequencies, by approximately 6 dB at 2000 Hz. In the tympanomastoid group, BC thresholds at 500, 1000 and 2000 Hz changed by 4 dB (P-value < 0.05). Moreover, in the PORP group, a 5 dB enhancement was seen at 1000 and 2000 Hz. In the TORP group, the results confirmed that BC thresholds improved at all frequencies, especially at 4000 Hz, by about 6.5 dB. Conclusion: According to the results of this study, BC threshold shifts were seen after several ear surgeries, namely stapedectomy, tympanoplasty, PORP and TORP. The average BC improvement was approximately 5 dB. It must be considered that BC depression might happen because of ossicular chain problems; therefore, resolving middle ear pathologies can improve BC thresholds and reduce the hearing problems patients face. PMID:24381615

A recent upgrade of the TSRV research flight system at NASA Langley Research Center retained the original monochrome display system. However, the display memory loading equipment was replaced requiring design and development of new methods of performing this task. This paper describes the new techniques developed to load memory in the display system. An outdated paper tape method for loading the BOOTSTRAP control program was replaced by EPROM storage of the characters contained on the tape. Rather than move a tape past an optical reader, a counter was implemented which steps sequentially through EPROM addresses and presents the same data to the loader circuitry. A cumbersome cassette tape method for loading the applications software was replaced with a floppy disk method using a microprocessor terminal installed as part of the upgrade. The cassette memory image was transferred to disk and a specific software loader was written for the terminal which duplicates the function of the cassette loader.

An efficient technique for increasing the transesterification activity of CaO obtained from calcination of CaCO(3) was proposed in order to make it highly suitable for use as a heterogeneous catalyst for biodiesel production. CaO was refluxed in water, followed by synthesis of the oxide from the hydroxide species. The characterization results indicate that this procedure substantially increases both the specific surface area and the number of basic sites. Hydration and subsequent calcination also generate a new calcium oxide with lower crystallinity. Transesterification of palm olein was used to determine the activity of the catalysts, showing that the decomposed-hydrated CaO exhibits higher catalytic activity than CaO generated from calcination of CaCO(3). The methyl ester content was enhanced by 18.4 wt.%. PMID:20089395

Software inspections are widely regarded as a cost-effective mechanism for removing defects in software, though performing them does not always reduce the number of customer-discovered defects. We present a case study in which an attempt was made to reduce such defects through inspection training that introduced program comprehension ideas. The training was designed to address the problem of understanding the artifact being reviewed, as well as other perceived deficiencies of the inspection process itself. Measures, both formal and informal, suggest that explicit training in program understanding may improve inspection effectiveness.

The controlled environment vitrification system (CEVS) permits cryofixation of hydrated biological and colloidal dispersions and aggregates from a temperature- and saturation-controlled environment. Otherwise, specimens prepared in an uncontrolled laboratory atmosphere are subject to evaporation and heat transfer, which may introduce artifacts caused by concentration, pH, ionic strength, and temperature changes. Moreover, it is difficult to fix and examine the microstructure of systems at temperatures other than ambient (e.g., biological systems at in vivo conditions and colloidal systems above room temperature). A system has been developed that ensures that a liquid or partially liquid specimen is maintained in its original state while it is being prepared before vitrification and, once prepared, is vitrified with little alteration of its microstructure. A controlled environment is provided within a chamber where temperature and chemical activity of volatile components can be controlled while the specimen is being prepared. The specimen grid is mounted on a plunger, and a synchronous shutter is opened almost simultaneously with the release of the plunger, so that the specimen is propelled abruptly through the shutter opening into a cryogenic bath. We describe the system and its use and illustrate the value of the technique with TEM micrographs of surfactant microstructures in which specimen preparation artifacts were avoided. We also discuss applications to other instruments like SEM, to other techniques like freeze-fracture, and to novel "on the grid" experiments that make it possible to freeze successive instants of dynamic processes such as membrane fusion, chemical reactions, and phase transitions. PMID:3193246

Large amounts of low-grade heat are emitted by various industries and exhausted into the environment. This heat energy can be used as a free source for pyroelectric power generation. A three-dimensional pattern helps to improve the temperature variation rates in pyroelectric elements by means of lateral temperature gradients induced on the sidewalls of the responsive elements. A novel method using sandblast etching is successfully applied in fabricating the complex pattern of a vortex-like electrode. Both experiment and simulation show that the proposed design of the vortex-like electrode improved the electrical output of the pyroelectric cells and enhanced the efficiency of pyroelectric harvesting converters. A three-dimensional finite element model is generated by commercial software for solving the transient temperature fields and exploring the temperature variation rate in the PZT pyroelectric cells with various designs. The vortex-like type has a larger temperature variation rate than the fully covered type, by about 53.9%. The measured electrical output of the vortex-like electrode exhibits an obvious increase in the generated charge and the measured current, as compared to the fully covered electrode, by about 47.1% and 53.1%, respectively. PMID:24025557

The module described and evaluated here was created in response to perceived learning difficulties in diagnostic test design and interpretation for students in third-year Clinical Microbiology. Previously, the activities in lectures and laboratory classes in the module fell into the lower cognitive operations of "knowledge" and "understanding." The new approach was to exchange part of the traditional activities with elements of interactive learning, where students had the opportunity to engage in deep learning using a variety of learning styles. The effectiveness of the new curriculum was assessed by means of on-course student assessment throughout the module, a final exam, an anonymous questionnaire on student evaluation of the different activities and a focus group of volunteers. Although the new curriculum enabled a major part of the student cohort to achieve higher pass grades (p < 0.001), it did not meet the requirements of the weaker students, and the proportion of the students failing the module remained at 34%. The action research applied here provided a number of valuable suggestions from students on how to improve future curricula from their perspective. Most importantly, an interactive online program that facilitated flexibility in the learning space for the different reagents and their interaction in diagnostic tests was proposed. The methods applied to improve and assess a curriculum refresh by involving students as partners in the process, as well as the outcomes, are discussed. Journal of Microbiology & Biology Education. PMID:26753024

A Monte Carlo computational model of a fluoroscopic imaging chain was used for deriving optimal technique factors for paediatric fluoroscopy. The optimal technique was defined as the one that minimizes the absorbed dose (or dose rate) in the patient with a constraint of constant image quality. Image quality was assessed for the task of detecting a detail in the image of a patient-simulating phantom, and was expressed in terms of the ideal observer's signal-to-noise ratio (SNR) for static images and in terms of the accumulating rate of the square of SNR for dynamic imaging. The entrance air kerma (or air kerma rate) and the mean absorbed dose (or dose rate) in the phantom quantified radiation detriment. The calculations were made for homogeneous phantoms simulating newborn, 3-, 10- and 15-year-old patients, barium and iodine contrast material details, several x-ray spectra, and for imaging with or without an antiscatter grid. The image receptor was modelled as a CsI x-ray image intensifier (XRII). For the task of detecting low- or moderate-contrast iodine details, the optimal spectrum can be obtained by using an x-ray tube potential near 50 kV and filtering the x-ray beam heavily. The optimal tube potential is near 60 kV for low- or moderate-contrast barium details, and 80-100 kV for high-contrast details. The low-potential spectra above require a high tube load, but this should be acceptable in paediatric fluoroscopy. A reasonable choice of filtration is the use of an additional 0.25 mm Cu, or a suitable K-edge filter. No increase in the optimal tube potential was found as phantom thickness increased. With the constraint of constant low-contrast detail detectability, the mean absorbed doses obtained with the above spectra are approximately 50% lower than those obtained with the reference conditions of 70 kV and 2.7 mm Al filter. For the smallest patient and x-ray field size, not using a grid was slightly more dose-efficient than using a grid, but when the patient
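The ideal-observer figure of merit used above can be illustrated with a simplified Poisson-noise model (an assumed textbook form, not the paper's full Monte Carlo chain): for a flat detail of area A and contrast C on a background with detected photon fluence φ, SNR² = C²·φ·A, and for dynamic imaging SNR² accumulates linearly with time.

```python
# Simplified ideal-observer detectability model (assumed, Poisson-limited):
# SNR^2 = contrast^2 * fluence * area for a flat detail on a uniform background.

def snr_squared(contrast, fluence_per_mm2, area_mm2):
    """Ideal observer SNR^2 for a static image."""
    return contrast**2 * fluence_per_mm2 * area_mm2

def snr_squared_rate(contrast, fluence_rate_per_mm2, area_mm2):
    """Accumulation rate of SNR^2 for dynamic (fluoroscopic) imaging."""
    return contrast**2 * fluence_rate_per_mm2 * area_mm2

# 5% contrast detail, 1e4 detected photons/mm^2, 1 mm^2 detail area
s2 = snr_squared(0.05, 1e4, 1.0)
print(s2)  # SNR^2 = 25, i.e. SNR = 5: near the classic detectability threshold
```

Holding such an SNR constant while varying spectrum and filtration is exactly the constraint under which the dose comparisons above are made.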

Many utilities have evaluated the cost of scrubbing versus fuel switching in various plans and scenarios to determine the most economical means for meeting the requirements of the new law. Presently, the future cost of removing a ton of SO2 is based on fuel switching, and the market values are in the range of $150-$250 per ton. The perceived cost of FGDS retrofits is $250-$400 per ton for eastern medium- to high-sulfur coal. ABB has studied the overall costs of FGDS and has developed a series of cost-reducing improvements and innovations. The improvements are manifested in ABB's new limestone FGDS technology known by the code phrase "Stealth FGDS". Stealth promises low capital and operating cost, high removal efficiencies for SO2 and other pollutants, little or positive environmental and economic impact on the local community, salable or non-hazardous by-products, ease of retrofit, and exceptionally short installation schedules. The concepts are being demonstrated in one system at the Niles Generating Station of Ohio Edison Company. Bearing the name "LS-2 Advanced SO2 Scrubbing", the Stealth scrubber at Niles is a 110 MWe turnkey, retrofit unit to be completed 20 months after the release of engineering. It will remove 20,000 or more tons per year of SO2 from the flue gases generated by both Unit 1 and Unit 2 boilers, producing wallboard-grade gypsum. Upon completion of a four-month test program, the plant will be operated by Ohio Edison for a four- to five-year reliability demonstration period. The performance and economic projections for LS-2 scrubbers show the technology to be quite attractive relative to projections for fuel switching when installed in a manner similar to the installation plan for Niles. The description and basis for these economic projections are described in this paper.

Interactive free-viewpoint selection applied to a 3D multi-view signal is a possible attractive feature of the rapidly developing 3D TV media. This paper explores a new rendering algorithm that computes a free-viewpoint based on depth image warping between two reference views from existing cameras. We have developed three quality enhancing techniques that specifically aim at solving the major artifacts. First, resampling artifacts are filled in by a combination of median filtering and inverse warping. Second, contour artifacts are processed while omitting warping of edges at high discontinuities. Third, we employ a depth signal for more accurate disocclusion inpainting. We obtain an average PSNR gain of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to recently published results. While experimenting with synthetic data, we observe that the rendering quality is highly dependent on the complexity of the scene. Moreover, experiments are performed using compressed video from surrounding cameras. The overall system quality is dominated by the rendering quality and not by coding.
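A toy scanline version of depth-image warping (an assumed simplification, not the paper's algorithm) shows the core mechanics: each pixel is shifted by a disparity derived from its depth, and the resampling holes that appear are filled with a median of valid neighbours, echoing the median-filter/inverse-warping repair step described above.

```python
import numpy as np

# Toy 1-D depth-image warp: shift pixels by per-pixel disparity, then fill
# resampling holes with a median of valid neighbours (crude disocclusion fill).
# The geometry and fill rule are illustrative assumptions.

def warp_scanline(colors, disparity, alpha):
    """Forward-warp one scanline toward a virtual view at fraction alpha."""
    w = len(colors)
    out = np.full(w, -1.0)                 # -1 marks a resampling hole
    for x in range(w):
        xt = int(round(x + alpha * disparity[x]))
        if 0 <= xt < w:
            out[xt] = colors[x]            # note: no z-ordering in this toy
    for x in range(w):
        if out[x] < 0:
            window = [v for v in out[max(0, x - 2):x + 3] if v >= 0]
            out[x] = np.median(window) if window else 0.0
    return out

line = np.array([10., 10., 50., 50., 50., 10., 10., 10.])
disp = np.array([0., 0., 2., 2., 2., 0., 0., 0.])  # foreground shifts by 2 px
virt = warp_scanline(line, disp, 0.5)
print(virt)
```

Even this tiny example produces a hole behind the shifted foreground, which is why the paper's contour handling and depth-guided inpainting matter for quality.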

Physicians frequently encounter patients who make decisions that contravene their long-term goals. Behavioral economists have shown that irrationalities and self-thwarting tendencies pervade human decision making, and they have identified a number of specific heuristics (rules of thumb) and biases that help explain why patients sometimes make such counterproductive decisions. In this essay, we use clinical examples to describe the many ways in which these heuristics and biases influence patients' decisions. We argue that physicians should develop their understanding of these potentially counterproductive decisional biases and, in many cases, use this knowledge to rebias their patients in ways that promote patients' health or other values. Using knowledge of decision-making psychology to persuade patients to engage in healthy behaviors or to make treatment decisions that foster their long-term goals is ethically justified by physicians' duties to promote their patients' interests and will often enhance, rather than limit, their patients' autonomy. We describe techniques that physicians may use to frame health decisions to patients in ways that are more likely to motivate patients to make choices that are less biased and more conducive to their long-term goals. Marketers have been using these methods for decades to get patients to engage in unhealthy behaviors; employers and policy makers are beginning to consider the use of similar approaches to influence healthy choices. It is time for clinicians also to make use of behavioral psychology in their interactions with patients. PMID:20458111

Dopamine is a neurotransmitter that is utilized in brain circuits associated with reward processing and motor activity. Advances in microelectrode techniques and cyclic voltammetry have enabled its extracellular concentration fluctuations to be examined on a subsecond time scale in the brain of anesthetized and freely moving animals. The microelectrodes can be attached to micropipettes that allow local drug delivery at the site of measurement. Drugs that inhibit dopamine uptake or its autoreceptors can be evaluated while only affecting the brain region directly adjacent to the electrode. The drugs are ejected by iontophoresis in which an electrical current forces the movement of molecules by a combination of electrical migration and electroosmosis. Using electroactive tracer molecules, the amount ejected can be measured with cyclic voltammetry. In this review we will give an introduction to the basic principles of iontophoresis, including a historical account on the development of iontophoresis. It will also include an overview of the use of iontophoresis to study neurotransmission of dopamine in the rat brain. It will close by summarizing the advantages of iontophoresis and how the development of quantitative iontophoresis will facilitate future studies. PMID:23276986

In this article a procedure is derived to obtain a performance gain for molecular dynamics (MD) simulations on existing parallel clusters. Parallel clusters connect multiple processors using a wide array of interconnection technologies that often operate at different speeds, such as the internal links of multiprocessor computers versus the network between them. It is demonstrated how to configure existing MD simulation programs to efficiently handle collective communication on parallel clusters whose processor interconnections have different speeds. PMID:15032512
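The benefit of speed-aware collective communication can be sketched with a simple latency/bandwidth cost model (an assumed model, not the paper's derivation): reducing first within each node over fast links, then across node leaders over slow links, beats a flat reduction in which every process sends over the slow network.

```python
# Illustrative cost model: time to gather n_msgs messages over one link type.
# All link parameters below are assumed example values.

def reduce_time(n_msgs, msg_bytes, latency_s, bandwidth_Bps):
    return n_msgs * (latency_s + msg_bytes / bandwidth_Bps)

PROCS_PER_NODE, NODES, MSG = 8, 16, 1_000_000
FAST = (1e-6, 10e9)   # intra-node: 1 us latency, 10 GB/s
SLOW = (50e-6, 1e9)   # inter-node: 50 us latency, 1 GB/s

# flat: all 128 processes send their contribution over the slow network
flat = reduce_time(PROCS_PER_NODE * NODES - 1, MSG, *SLOW)

# hierarchical: reduce inside each node first, then across node leaders
hier = (reduce_time(PROCS_PER_NODE - 1, MSG, *FAST)
        + reduce_time(NODES - 1, MSG, *SLOW))

print(flat, hier)  # hierarchical is substantially cheaper in this model
```

Only the number of messages crossing the slow interconnect changes, which is the lever the configuration procedure exploits.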

Clinical, administrative and demographic health information is fundamental to understanding the nature of health and evaluating the effectiveness of efforts to reduce morbidity and mortality of the population. The demographic data item 'location' is an integral part of any injury surveillance tool or injury prevention strategy. The true value of location data can only be realised once these data have been appropriately classified and quality assured. Geocoding as a means of classifying location is increasingly used in various health fields to enable spatial analysis of data. This article reports on research carried out in Australia at the National Coroners Information System (NCIS). Trends in the use of NCIS location-based data by researchers were identified. The research also aimed to establish the factors that impacted on the quality of geocoded data and the extent of this impact. A systematic analysis of the geocoding process identified source documentation, data cleaning, and software settings as key factors impacting on data quality. Understanding and application of these processes can improve data quality and therefore inform the analysis and interpretation of these data by researchers. PMID:23087078

This three-year project had two technical objectives. The first objective was to compare the effectiveness of gels in fluid diversion (water shutoff) with those of other types of processes. Several different types of fluid-diversion processes were compared, including those using gels, foams, emulsions, particulates, and microorganisms. The ultimate goals of these comparisons were to (1) establish which of these processes are most effective in a given application and (2) determine whether aspects of one process can be combined with those of other processes to improve performance. Analyses and experiments were performed to verify which materials are the most effective in entering and blocking high-permeability zones. The second objective of the project was to identify the mechanisms by which materials (particularly gels) selectively reduce permeability to water more than to oil. A capacity to reduce water permeability much more than oil or gas permeability is critical to the success of gel treatments in production wells if zones cannot be isolated during gel placement. Topics covered in this report include (1) determination of gel properties in fractures, (2) investigation of schemes to optimize gel placement in fractured systems, (3) an investigation of why some polymers and gels can reduce water permeability more than oil permeability, (4) consideration of whether microorganisms and particulates can exhibit placement properties that are superior to those of gels, and (5) examination of when foams may show placement properties that are superior to those of gels.

Recent advances have led to renewed interest in ballistocardiography (BCG), a noninvasive measure of the small movements of the body due to cardiovascular events. A broad range of platforms have been developed and verified for BCG measurement including beds, chairs, and weighing scales: while the body is coupled to such a platform, the cardiogenic movements are measured. Wearable BCG, measured with an accelerometer affixed to the body, may enable continuous, or more regular, monitoring during the day; however, the signals from such wearable BCGs represent local or distal accelerations of skin and tissue rather than the whole body. In this paper, we propose a novel method to reconstruct the BCG measured with a weighing scale (WS BCG) from a wearable sensor via a training step to remove these local effects. Preliminary validation of this method was performed with 15 subjects: the wearable sensor was placed at three locations on the surface of the body while WS BCG measurements were recorded simultaneously. A regularized system identification approach was used to reconstruct the WS BCG from the wearable BCG. Preliminary results suggest that the relationship between local and central disturbances is highly dependent on both the individual and the location where the accelerometer is placed on the body and that these differences can be resolved via calibration to accurately measure changes in cardiac output and contractility from a wearable sensor. Such measurements could be highly effective, for example, for improved monitoring of heart failure patients at home. PMID:25561589
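The regularized system-identification step can be sketched as ridge-regularized FIR fitting on synthetic signals (the paper's actual model structure and regularizer may differ): find filter taps h so that the filtered wearable signal best reconstructs the scale BCG, with a Tikhonov penalty stabilizing the fit.

```python
import numpy as np

# Ridge-regularized FIR identification: y[n] ~ sum_k h[k] * x[n-k].
# Signals here are synthetic stand-ins for the wearable and scale BCG.

def fit_fir_ridge(x, y, order=16, lam=1e-2):
    N = len(x)
    X = np.zeros((N, order))
    for k in range(order):
        X[k:, k] = x[:N - k]               # delayed copies of the input
    h = np.linalg.solve(X.T @ X + lam * np.eye(order), X.T @ y)
    return h, X @ h

rng = np.random.default_rng(0)
wearable = rng.standard_normal(500)
true_h = np.array([0.5, 0.3, -0.2, 0.1])   # assumed "local tissue" response
scale_bcg = np.convolve(wearable, true_h)[:500] + 0.01 * rng.standard_normal(500)

h, recon = fit_fir_ridge(wearable, scale_bcg)
err = np.linalg.norm(scale_bcg - recon) / np.linalg.norm(scale_bcg)
print(err)  # small relative reconstruction error
```

The per-subject, per-location calibration described above corresponds to fitting a separate h for each subject and sensor placement.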

The use of four-wave mixing techniques in femtosecond time-resolved spectroscopy has considerable advantages. Due to the many degrees of freedom offered e.g. by coherent anti-Stokes Raman scattering (CARS), the dynamics even of complex systems can be analyzed in detail. Using pulse shaping techniques in combination with a self-learning loop approach, molecular mode excitation can be controlled very efficiently in a multi-photon excitation process. Results obtained from the optimal control of CARS on β-carotene are discussed.

A successful approach for the fabrication and characterization of an optical fiber sensor for the detection of profenofos based on surface plasmon resonance (SPR) and molecular imprinting is introduced. Molecular imprinting technology is used to create three-dimensional binding sites, complementary in shape and size to the specific template molecule, over a polymer for recognition of the same. Binding of the template molecule to the molecularly imprinted polymer (MIP) layer changes the dielectric nature of the sensing surface (polymer) and is detected by the SPR technique. The spectral interrogation method is used for characterization of the sensing probe. The operating profenofos concentration range of the sensor is from 10^-4 to 10^-1 µg/L. A red shift of 18.7 nm in resonance wavelength is recorded over this profenofos concentration range. The maximum sensitivity of the sensor is 12.7 nm/log(µg/L) at 10^-4 µg/L profenofos concentration. The limit of detection (LOD) of the sensor is found to be 2.5×10^-6 µg/L. Selectivity measurements show the probe to be highly selective for the profenofos molecule. Besides high sensitivity due to the SPR technique and selectivity due to molecular imprinting, the proposed sensor has numerous other advantages, such as immunity to electromagnetic interference, fast response, low cost, and the capability of online monitoring and remote sensing of the analyte owing to fabrication of the probe on optical fiber. PMID:26706813
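The reported figures are mutually consistent: an 18.7 nm total shift over three decades of concentration implies an average sensitivity of about 6.2 nm per decade, below the 12.7 nm/log(µg/L) local maximum at the steep low-concentration end of the calibration curve.

```python
import math

# Consistency check of the reported sensor figures (values from the abstract).
total_shift_nm = 18.7
decades = math.log10(1e-1) - math.log10(1e-4)   # operating range spans 3 decades
avg_sensitivity = total_shift_nm / decades
print(avg_sensitivity)  # ~6.23 nm per decade, vs. the 12.7 nm/decade peak
```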

The Simplified Aid for EVA Rescue (SAFER) is a small propulsive backpack that was developed as an in-house effort at Johnson Space Center; it is a lightweight system which attaches to the underside of the Primary Life Support Subsystem (PLSS) backpack of the Extravehicular Mobility Unit (EMU). SAFER provides full six-axis control, as well as Automatic Attitude Hold (AAH), by means of a set of cold-gas nitrogen thrusters and a rate sensor-based control system. For compactness, a single hand controller is used, together with mode switching, to command all six axes. SAFER was successfully test-flown on the STS-64 mission in September 1994 as a Development Test Objective (DTO); development of an operational version is now proceeding. This version will be available for EVA self-rescue on the International Space Station and Mir, starting with the STS-86/Mir-7 mission in September 1997. The DTO SAFER was heavily instrumented, and produced in-flight data that was stored in a 12 MB computer memory on-board. This has allowed post-flight analysis to yield good estimates for the actual mass properties (moments and products of inertia and center of mass location) encountered on-orbit. By contrast, Manned Maneuvering Unit (MMU) post-flight results were generated mainly from analysis of video images, and so were not very accurate. The main goal of the research reported here was to use the detailed SAFER on-orbit mass properties data to optimize the design of future EVA maneuvering systems, with the aim being to improve flying qualities and/or reduce propellant consumption. The Automation, Robotics and Simulation Division Virtual Reality (VR) Laboratory proved to be a valuable research tool for such studies. A second objective of the grant was to generate an accurate dynamics model in support of the reflight of the DTO SAFER on STS-76/Mir-3. One complicating factor was the fact that a hand controller stowage box was added to the underside of SAFER on this flight; the position of

We demonstrate that a simple phenomenological approach can be used to simulate electronic conduction in molecular wires under thermal effects induced by the surrounding environment. This "Landauer-Büttiker probe technique" can properly replicate different transport mechanisms: phase-coherent nonresonant tunneling, ballistic behavior, and hopping conduction. Specifically, our simulations with the probe method recover the following central characteristics of charge transfer in molecular wires: (i) the electrical conductance of short wires falls off exponentially with molecular length, a manifestation of the tunneling (superexchange) mechanism. Hopping dynamics overtakes superexchange in long wires, demonstrating ohmic-like behavior. (ii) In off-resonance situations, weak dephasing effects facilitate charge transfer, but under large dephasing the electrical conductance is suppressed. (iii) At high enough temperatures, k_B T/ε_B > 1/25, with ε_B the molecular-barrier height, the current is enhanced by a thermal activation (Arrhenius) factor. However, this enhancement takes place for both coherent and incoherent electrons, so it does not by itself reveal the underlying mechanism. (iv) At finite bias, dephasing effects may impede conduction in resonant situations. We further show that memory (non-Markovian) effects can be implemented within the Landauer-Büttiker probe technique to model the interaction of electrons with a structured environment. Finally, we examine experimental results of electron transfer in conjugated molecular wires and show that our computational approach can reasonably reproduce reported values and provide mechanistic information.
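The superexchange signature in (i) can be reproduced with a minimal coherent-transport toy model (an assumed tight-binding sketch, not the paper's probe calculation): transmission through an N-site bridge with barrier height ε_B above the Fermi energy, computed from the Green's function with wide-band leads, falls off exponentially with length.

```python
import numpy as np

# Toy tight-binding bridge between wide-band leads; all parameters assumed.
# T(E) = Gamma_L * Gamma_R * |G_1N(E)|^2 for self-energies on the end sites.

def transmission(n_sites, eps_b=1.0, t=0.2, gamma=0.1, energy=0.0):
    H = (np.diag([eps_b] * n_sites)
         + np.diag([t] * (n_sites - 1), 1)
         + np.diag([t] * (n_sites - 1), -1))
    sigma = np.zeros((n_sites, n_sites), dtype=complex)
    sigma[0, 0] = sigma[-1, -1] = -0.5j * gamma    # wide-band lead self-energies
    G = np.linalg.inv(energy * np.eye(n_sites) - H - sigma)
    return gamma**2 * abs(G[0, -1])**2

Ts = [transmission(n) for n in range(2, 8)]
ratios = [Ts[i + 1] / Ts[i] for i in range(len(Ts) - 1)]
print(Ts)      # monotonically decreasing with length
print(ratios)  # near-constant ratio -> exponential length dependence
```

Adding Büttiker probes to each site would dephase the electrons and, for long chains, turn this exponential decay into the ohmic hopping regime described above.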

Recommends determining molecular weights of liquids by use of a thermocouple. Utilizing a mathematical gas equation, the molecular weight can be determined from the measurement of the vapor temperature upon complete evaporation. Lists benefits as reduced time and cost, and improved safety factors. (ML)
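The underlying gas equation can be made concrete: with a known mass m of liquid fully evaporated into volume V at measured vapour temperature T and pressure P, the ideal gas law gives M = mRT/(PV). The numbers below are illustrative, not from the article.

```python
# Molar mass from the ideal gas law, M = m * R * T / (P * V).
R = 8.314  # J/(mol*K)

def molar_mass(m_kg, T_K, P_Pa, V_m3):
    return m_kg * R * T_K / (P_Pa * V_m3)

# e.g. 0.50 g of a volatile liquid evaporated into 250 mL at 373 K, 101325 Pa
M = molar_mass(0.50e-3, 373.0, 101325.0, 250e-6)
print(M * 1000, "g/mol")  # ~61 g/mol
```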

Nonintrusive optical point-wise measurement techniques utilizing the principles of molecular Rayleigh scattering have been developed at the NASA Glenn Research Center to obtain time-averaged information about gas velocity, density, temperature, and turbulence, or dynamic information about gas velocity and density in unseeded flows. These techniques enable measurements that are necessary for validating computational fluid dynamics (CFD) and computational aeroacoustic (CAA) codes. Dynamic measurements allow the calculation of power spectra for the various flow properties. This type of information is currently being used in jet noise studies, correlating sound pressure fluctuations with velocity and density fluctuations to determine noise sources in jets. These nonintrusive techniques are particularly useful in supersonic flows, where seeding the flow with particles is not an option, and where the environment is too harsh for hot-wire measurements.

Molecular Dynamics (MD) and Monte Carlo (MC) simulations are the most popular simulation techniques for many-particle systems. Although they are often applied to similar systems, it is unclear to which extent one has to expect quantitative agreement of the two simulation techniques. In this work, we present a quantitative comparison of MD and MC simulations in the microcanonical ensemble. For three test examples, we study first- and second-order phase transitions with a focus on liquid-gas like transitions. We present MD analysis techniques to compensate for conservation law effects due to linear and angular momentum conservation. Additionally, we apply the weighted histogram analysis method to microcanonical histograms reweighted from MD simulations. By this means, we are able to estimate the density of states from many microcanonical simulations at various total energies. This further allows us to compute estimates of canonical expectation values. PMID:26450299
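The final step above, canonical expectation values from an estimated density of states, can be illustrated with a toy model whose density of states is known exactly (an assumed two-level-unit system, not one of the paper's test examples): ⟨A⟩ = Σ_E g(E) A(E) e^(-E/kT) / Z.

```python
import numpy as np
from math import comb

# N non-interacting two-level units; E = number of excited units,
# so the exact density of states is the binomial coefficient C(N, E).
N = 20
energies = np.arange(N + 1, dtype=float)
g = np.array([comb(N, int(E)) for E in energies], dtype=float)

def canonical_mean_energy(kT):
    """Canonical <E> by reweighting the density of states."""
    w = g * np.exp(-energies / kT)
    return float(np.sum(energies * w) / np.sum(w))

print(canonical_mean_energy(0.5))  # low T: few excitations
print(canonical_mean_energy(5.0))  # high T: approaches N/2 = 10
```

In the paper's workflow, g(E) is not known exactly but estimated by WHAM-combining microcanonical MD histograms; the reweighting step is the same.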

Rice is a staple food crop, vital to food security, and consumed by almost half of the world's population. More rice production is needed to keep pace with rapid population growth. Rice blast, caused by the fungus Magnaporthe oryzae, is one of the most destructive diseases of this crop in different parts of the world. Breakdown of blast resistance is the major cause of yield instability in several rice-growing areas. Strategies are needed that provide long-lasting resistance against a broad spectrum of pathogens, giving protection over a long time and a broad geographic area, and thus promising sustainable rice production in the future. So far, molecular breeding approaches involving DNA markers, such as QTL mapping, marker-aided selection, gene pyramiding, allele mining and genetic transformation, have been used to develop new resistant rice cultivars. Such techniques are now used as a low-cost, high-throughput alternative to conventional methods, allowing rapid introgression of disease resistance genes into susceptible varieties as well as the incorporation of multiple genes into individual lines for more durable blast resistance. This paper briefly reviews progress in this area to inform rice disease-resistance breeding, including examples of how advanced molecular methods have been used in breeding programs for improving blast resistance. New knowledge gained from research on strategies and challenges such as pyramiding disease resistance genes to create rice varieties with high resistance against multiple diseases will undoubtedly provide new insights into rice disease control. PMID:26635817

Recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (millions of times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques, (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multicomponent modeling methods, (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by including not just one intensity or frequency point of a molecular spectrum, but by utilizing the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify spectral component peaks of molecular structure, functional groups and biopolymers. By applying these multivariate and peak-modeling methods, inherent molecular structure, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.

The RITP-emulsion polymerization of styrene in the presence of molecular iodine has been successfully performed using potassium persulfate (KPS) as an initiator and 1-hexadecanesulfonate as an emulsifier under argon atmosphere at 80°C for 7 h in the absence of light. The effects of the iodine concentration, the molar ratio between KPS and iodine, and the solids content on the molecular weight of polystyrene (PS) were studied. As the iodine concentration increased from 0.05 to 0.504 mmol at a fixed [KPS]/[I2] ratio of 4.5, the weight-average molecular weight of PS substantially decreased from 126,120 to 35,690 g/mol, the conversion increased from 85.0% to 95.2%, and the weight-average particle diameter decreased from 159 to 103 nm. In addition, as the [KPS]/[I2] ratio increased from 0.5 to 6.0 at a fixed [I2] of 0.504 mmol, the weight-average molecular weight of PS decreased from 72,170 to 30,640 g/mol with high conversion between 81.7% and 96.5%. Moreover, when the styrene solids content increased from 10 to 40 wt.% at the fixed [KPS]/[I2] ratio of 4.5, the weight-average molecular weight of PS varied between 33,500 and 37,200 g/mol, the conversion varied between 94.9% and 89.7%, and the weight-average diameter varied from 122 to 205 nm. Thus, control of the molecular weight of PS below 100,000 g/mol with high conversion (95%) and particle stability up to 40 wt.% solids content was easily achieved through the use of iodine at a suitable [KPS]/[I2] ratio in the RITP-emulsion polymerization technique, which is of great industrial importance. PMID:20950818

In computational mechanics, molecular dynamics (MD) and finite element (FE) analysis are well developed and are the most popular methods for nanoscale and macroscale analysis, respectively. MD can simulate atomistic behavior very well, but cannot reach macroscale lengths and times due to computational limits. FE can simulate continuum mechanics (CM) problems very well, but lacks atomistic-level degrees of freedom. Multiscale modeling is an expedient methodology with the potential to connect different levels of modeling, such as quantum mechanics, molecular dynamics, and continuum mechanics. This study proposes a new multiscale modeling technique to couple MD with FE. The proposed method relies on a weighted-average momentum principle. A wave propagation example is used to illustrate the challenges in coupling MD with FE and to verify the proposed technique. Furthermore, a 2-dimensional problem is used to demonstrate how this method would translate into real-world applications. Highlights: • A weighted-averaging momentum method is introduced for bridging molecular dynamics (MD) with the finite element (FE) method. • The proposed method shows excellent coupling results in 1-D and 2-D examples. • The proposed method successfully reduces spurious wave reflection at the border of the MD and FE regions. • Major advantages of the proposed method are its simplicity and the low computational cost of multiscale analysis.
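A minimal sketch of the coupling idea, in the spirit of weighted-average momentum blending across an MD/FE handshake region (not the paper's exact formulation): a linear weight ramps from 1 (pure MD) to 0 (pure FE), and velocities in the overlap are reconstructed from the weighted-average momentum, so the solution transitions smoothly between the two descriptions.

```python
import numpy as np

# Hedged sketch: blend momenta in an MD/FE handshake (overlap) region.
n = 11
w = np.linspace(1.0, 0.0, n)     # coupling weight across the overlap
m = np.ones(n)                   # atom/node masses (assumed equal)
v_md = np.full(n, 2.0)           # MD velocities in the overlap
v_fe = np.full(n, 1.0)           # FE velocities interpolated to atom sites

# Weighted-average momentum, then recover the blended velocity field
p_blend = w * m * v_md + (1.0 - w) * m * v_fe
v_blend = p_blend / m
# v_blend ramps smoothly from the MD value (2.0) to the FE value (1.0),
# which is what suppresses spurious wave reflection at the interface.
```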

The creation of label-free biosensors capable of accurately detecting trace contaminants, particularly small organic molecules, is of significant interest for applications in environmental monitoring. This is achieved by pairing a high-sensitivity signal transducer with a biorecognition element that imparts selectivity towards the compound of interest. However, many environmental pollutants do not have corresponding biorecognition elements. Fortunately, biomimetic chemistries, such as molecular imprinting, allow for the design of artificial receptors with very high selectivity for the target. Here, we perform a proof-of-concept study to show how artificial receptors may be created from inorganic silanes using the molecular imprinting technique and paired with high-sensitivity transducers without loss of device performance. Silica microsphere Whispering Gallery Mode optical microresonators are coated with a silica thin film templated by a small fluorescent dye, fluorescein isothiocyanate, which serves as our model target. Oxygen plasma degradation and solvent extraction of the template are compared. Extracted optical devices are then exposed to the template molecule to confirm successful sorption of the template. Surface characterization is accomplished via fluorescence and optical microscopy, ellipsometry, optical profilometry, and contact angle measurements. The quality factors of the devices are measured to evaluate the impact of the coating on device sensitivity. The resulting devices show uniform surface coating with no microstructural damage and Q factors above 10⁶. This is the first report demonstrating the integration of these devices with molecular imprinting techniques, and it could lead to new routes to biosensor creation for environmental monitoring. PMID:27314397
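On the figure of merit used above: the quality factor of an optical resonance is its center wavelength divided by its linewidth (FWHM). The numbers below are illustrative, not measured values from the study.

```python
# Q factor of an optical resonance: Q = lambda_0 / FWHM linewidth.
def q_factor(resonance_wavelength_m, fwhm_m):
    return resonance_wavelength_m / fwhm_m

# A ~0.63 pm linewidth at a 633 nm resonance corresponds to Q of about 1e6,
# the regime reported for the coated microresonators (values illustrative).
q = q_factor(633e-9, 0.633e-12)
```

Maintaining a narrow linewidth after coating is what preserves the device's sensitivity to small refractive-index shifts.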

Precision medicine customizes treatment options for the individual patient based on personal genomic information. Colorectal cancer (CRC) is one of the most common cancers worldwide. The molecular heterogeneity of CRC, which includes the MSI phenotype, the hypermutation phenotype, and their relationship with clinical features, is believed to be one of the main factors responsible for the considerable variability in treatment response. The development of powerful next-generation sequencing (NGS) technologies allows us to further understand the biological behavior of colorectal cancer and to analyze prognosis and chemotherapeutic drug response by molecular diagnostic techniques, which can guide clinical treatment. This paper introduces the new findings in this field. Meanwhile, we integrate new progress on key pathways, including EGFR, RAS, PI3K/AKT and VEGF, and the experience of selecting patients through associated molecular diagnostic screening who gain better efficacy from targeted therapy. The technique for detecting circulating tumor DNA (ctDNA) is introduced here as well; it can identify patients at high risk of recurrence and indicate the risk of chemotherapy resistance. Mechanisms of tumor drug resistance may be revealed by dynamic observation of gene alterations during treatment. PMID:26797832

Oncology practice increasingly requires the use of molecular profiling of tumors to inform the use of targeted therapeutics. However, many oncologists use third-party laboratories to perform tumor genomic testing, and these laboratories may not have electronic interfaces with the provider's electronic medical record (EMR) system. The resultant reporting mechanisms, such as plain-paper faxing, can reduce report fidelity, slow down reporting procedures for a physician's practice, and make reports less accessible. Vanderbilt University Medical Center and its genomic laboratory testing partner have collaborated to create an automated electronic reporting system that incorporates genetic testing results directly into the clinical EMR. This system was iteratively tested, and causes of failure were discovered and addressed. Most errors were attributable to data entry or typographical errors that made reports unable to be linked to the correct patient in the EMR. By providing direct feedback to providers, we were able to significantly decrease the rate of transmission errors (from 6.29% to 3.84%; P < .001). The results and lessons of 1 year of using the system and transmitting 832 tumor genomic testing reports are reported. PMID:26813927
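The before/after comparison quoted above (6.29% vs. 3.84%, P < .001) is a two-proportion comparison; the sketch below shows one standard way to test it. The abstract does not give the denominators, so the counts used here are purely illustrative values chosen only to reproduce the reported rates.

```python
from math import sqrt, erf

# Two-proportion z-test (pooled), with a two-sided normal p-value.
def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))   # pooled standard error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts matching 6.29% (before) vs 3.84% (after feedback):
z, p = two_proportion_z(x1=315, n1=5008, x2=192, n2=5000)
```

With denominators of this size, the drop in transmission-error rate is highly significant, consistent with the reported P < .001.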

Dynamic nuclear polarization (DNP) enhanced solid-state NMR can provide orders of magnitude in signal enhancement. One of the most important aspects of obtaining efficient DNP enhancements is the optimization of the paramagnetic polarization agents used. To date, the most utilized polarization agents are nitroxide biradicals. However, the efficiency of these polarization agents is diminished when used with samples other than small-molecule model compounds. We recently demonstrated the effectiveness of nitroxide-labeled lipids as polarization agents for lipids and a membrane-embedded peptide. Here, we systematically characterize, via electron paramagnetic resonance (EPR), the dynamics of and the dipolar couplings between nitroxide-labeled lipids under conditions relevant to DNP applications. Complemented by DNP enhanced solid-state NMR measurements at 600 MHz/395 GHz, a molecular rationale for the efficiency of nitroxide-labeled lipids as DNP polarization agents is developed. Specifically, optimal DNP enhancements are obtained when the nitroxide moiety is attached to the lipid choline headgroup and local nitroxide concentrations yield an average e(-)-e(-) dipolar coupling of 47 MHz. On the basis of these measurements, we propose a framework for the development of DNP polarization agents optimal for membrane protein structure determination. PMID:27434371
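In the point-dipole approximation, an electron-electron dipolar coupling maps onto an average inter-spin distance via the standard EPR/DEER constant of 52.04 MHz·nm³. The conversion below is a textbook estimate, not the paper's analysis.

```python
# Point-dipole estimate of the average inter-electron (nitroxide-nitroxide)
# distance from the e-e dipolar coupling. 52.04 MHz*nm^3 is the standard
# electron-electron dipolar constant; this is a hedged textbook estimate.
def distance_nm(dipolar_coupling_mhz, k_mhz_nm3=52.04):
    return (k_mhz_nm3 / dipolar_coupling_mhz) ** (1.0 / 3.0)

r = distance_nm(47.0)   # the reported 47 MHz coupling -> roughly 1 nm
```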

High-speed laminar-to-turbulent transition and turbulence affect the control of flight vehicles, the heat transfer rate to a flight vehicle's surface, the material selected to protect such vehicles from high heating loads, the ultimate weight of a flight vehicle due to the presence of thermal protection systems, the efficiency of fuel-air mixing processes in high-speed combustion applications, etc. Gaining a fundamental understanding of the physical mechanisms involved in the transition process will lead to the development of predictive capabilities that can identify the transition location and its impact on parameters such as surface heating. Currently, there is no general theory that can completely describe the transition-to-turbulence process. However, transition research has led to the identification of the predominant pathways by which this process occurs. For a truly physics-based model of transition to be developed, the individual stages in the paths leading to the onset of fully turbulent flow must be well understood. This requires that each pathway be computationally modeled and experimentally characterized and validated. It may also lead to the discovery of new physical pathways. This document is intended to describe molecular-based measurement techniques that have been developed to address the needs of the high-speed transition-to-turbulence and high-speed turbulence research fields. In particular, we focus on techniques that have either been used to study high-speed transition and turbulence or that show promise for studying these flows. This review is not exhaustive. In addition to the probe-based techniques described above, several other classes of measurement techniques that are, or could be, used to study high-speed transition and turbulence are excluded from this manuscript. For example, surface measurement techniques such as pressure- and temperature-sensitive paint, phosphor thermography, skin friction measurements and

By using a new rapid screening platform based on molecular docking simulations and fluorescence quenching techniques, three new anti-HIV aptamers targeting the viral surface glycoprotein 120 (gp120) were selected, synthesized, and assayed. The use of the short synthetic fluorescent peptide V35-Fluo, mimicking the V3 loop of gp120, as the molecular target for fluorescence-quenching binding affinity studies made it possible to measure the binding affinities of the new aptamers for HIV-1 gp120 without the need to obtain and purify the full recombinant gp120 protein. The almost perfect correspondence between the calculated Kd and the experimental EC50 on HIV-infected cells confirmed the reliability of the platform as an alternative to existing methods for aptamer selection and the measurement of aptamer-protein equilibria. PMID:26810800

It is known that TCP throughput can degrade significantly over UBR service in a congested ATM network, and the early packet discard (EPD) technique has been proposed to improve the performance. However, recent studies show that EPD cannot ensure fairness among competing VCs in a congested network, but the degree of fairness can be improved using various forms of fair buffer allocation techniques. The authors propose an improved scheme that utilizes only a single shared FIFO queue for all VCs and admits simple implementation for high speed ATM networks. The scheme achieves nearly perfect fairness and throughput among multiple TCP connections, comparable to the expensive per-VC queuing technique. Analytical and simulation results are presented to show the validity of this new scheme and significant improvement in performance as compared with existing fair buffer allocation techniques for TCP over ATM.
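As a sketch of the general idea of fair buffer allocation over a single shared FIFO (a generic policy in the spirit of the schemes discussed, not the authors' exact algorithm): once occupancy passes a congestion threshold, a VC may enqueue only while its own occupancy stays below its fair share of the buffer, so no single connection can monopolize the queue.

```python
from collections import deque

# Hedged sketch of a fair-buffer-allocation policy on one shared FIFO.
class SharedFifo:
    def __init__(self, capacity, threshold):
        self.capacity = capacity
        self.threshold = threshold   # congestion-onset occupancy
        self.queue = deque()
        self.per_vc = {}             # VC id -> cells currently queued

    def enqueue(self, vc):
        total = len(self.queue)
        if total >= self.capacity:
            return False                              # buffer full: drop
        active = max(1, sum(1 for n in self.per_vc.values() if n > 0))
        fair_share = self.capacity / active
        if total >= self.threshold and self.per_vc.get(vc, 0) >= fair_share:
            return False                              # selective drop
        self.queue.append(vc)
        self.per_vc[vc] = self.per_vc.get(vc, 0) + 1
        return True

    def dequeue(self):
        vc = self.queue.popleft()
        self.per_vc[vc] -= 1
        return vc
```

Under congestion, a greedy VC that already holds its fair share is dropped while lightly loaded VCs continue to be admitted, which is the mechanism that equalizes TCP goodput across connections.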

Hemophilia is caused by a functional deficiency of one of the coagulation proteins. Therapy for no other group of genetic diseases has seen the progress that has been made for hemophilia over the past 40 years, from a life expectancy in 1970 of ∼20 years for a boy born with severe hemophilia to essentially a normal life expectancy in 2013 with current prophylaxis therapy. However, these therapies are expensive and require IV infusions 3 to 4 times each week. These are exciting times for hemophilia because several new technologies that promise extended half-lives for factor products, with potential for improvements in quality of life for persons with hemophilia, are in late-phase clinical development. PMID:24065241

We report the iAMOEBA (i.e. “inexpensive AMOEBA”) classical polarizable water model. iAMOEBA uses a direct approximation to describe electronic polarizability, which reduces the computational cost relative to a fully polarizable model such as AMOEBA. The model is parameterized using ForceBalance, a systematic optimization method that simultaneously utilizes training data from experimental measurements and high-level ab initio calculations. We show that iAMOEBA is a highly accurate model for water in the solid, liquid, and gas phases, with the ability to fully capture the effects of electronic polarization and predict a comprehensive set of water properties beyond the training data set including the phase diagram. The increased accuracy of iAMOEBA over the fully polarizable AMOEBA model demonstrates ForceBalance as a method that allows the researcher to systematically improve empirical models by optimally utilizing the available data. PMID:23750713

The few studies attempting to specifically characterize dermatophytes from hair samples of dogs and cats using PCR-based methodology relied on sequence-based analysis of selected genetic markers. The aim of the present investigation was to establish and evaluate a PCR-based approach employing genetic markers of nuclear DNA for the specific detection of dermatophytes in such specimens. Using 183 hair samples, we directly compared the test results of our one-step and nested PCR assays with those based on conventional microscopy and in vitro culture techniques (using the latter as the reference method). The one-step PCR was highly accurate (AUC > 90) for testing samples from dogs, but only moderately accurate (AUC = 78.6) for cats. The nested PCR was accurate (AUC = 93.6) for samples from cats, and achieved higher specificity (94.1 and 94.4%) and sensitivity (100 and 94.9%) for samples from dogs and cats, respectively. In addition, the nested PCR allowed the differentiation of Microsporum canis from Trichophyton interdigitale (zoophilic) and geophilic dermatophytes (i.e., Microsporum gypseum or Trichophyton terrestre), which was not possible using the one-step assay. The PCRs evaluated here provide practical tools for diagnostic applications to support clinicians in initiating prompt and targeted chemotherapy of dermatophytoses. PMID:22686247
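For reference, sensitivity and specificity of this kind of assay are computed from a 2×2 table against the culture reference method. The counts below are illustrative values chosen to land near the reported percentages, not the study's actual table.

```python
# Sensitivity/specificity from a 2x2 confusion table against the
# reference method (counts here are illustrative, not the study's data).
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of reference-positives detected
    specificity = tn / (tn + fp)   # fraction of reference-negatives cleared
    return sensitivity, specificity

# e.g. 37/39 positives detected (~94.9%), 135/143 negatives cleared (~94.4%)
sens, spec = sens_spec(tp=37, fn=2, tn=135, fp=8)
```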

Mature B cell neoplasms cover a spectrum of diseases involving lymphoid tissues (lymphoma) or blood (leukemia), with an overlap between these two presentations. Previous studies describing equine lymphoid neoplasias have not included analyses of clonality using molecular techniques. The objective of this study was to use molecular techniques to advance the classification of B cell lymphoproliferative diseases in five adult equine patients with a rare condition of monoclonal gammopathy, B cell leukemia, and concurrent lymphadenopathy (lymphoma/leukemia). The B cell neoplasms were phenotypically characterized by gene and cell surface molecule expression, secreted immunoglobulin (Ig) isotype concentrations, Ig heavy-chain variable (IGHV) region domain sequencing, and spectratyping. All five patients had hyperglobulinemia due to IgG1 or IgG4/7 monoclonal gammopathy. Peripheral blood leukocyte immunophenotyping revealed high proportions of IgG1- or IgG4/7-positive cells and relative T cell lymphopenia. Most leukemic cells lacked the surface B cell markers CD19 and CD21. IGHG1 or IGHG4/7 gene expression was consistent with surface protein expression, and secreted isotype and Ig spectratyping revealed one dominant monoclonal peak. The mRNA expression of the B cell-associated developmental genes EBF1, PAX5, and CD19 was high compared to that of the plasma cell-associated marker CD38. Sequence analysis of the IGHV domain of leukemic cells revealed mutated Igs. In conclusion, the protein and molecular techniques used in this study identified neoplastic cells compatible with a developmental transition between B cell and plasma cell stages, and they can be used for the classification of equine B cell lymphoproliferative disease. PMID:26311245

Although some areas of clinical health care are becoming adept at implementing continuous quality improvement (CQI) projects, there has been limited experimentation with CQI in health promotion. In this study, we examined the impact of a CQI intervention on health promotion in four Australian Indigenous primary health care centers. Our study objectives were to (a) describe the scope and quality of health promotion activities, (b) describe the status of health center system support for health promotion activities, and (c) introduce a CQI intervention and examine the impact on health promotion activities and health center systems over 2 years. Baseline assessments showed suboptimal health center systems support for health promotion and significant evidence-practice gaps. After two annual CQI cycles, there were improvements in staff understanding of health promotion, and systems for planning and documenting health promotion activities had been introduced. Actions to improve best-practice health promotion, such as community engagement and intersectoral partnerships, were inhibited by the way health center systems were organized, predominantly to support clinical and curative services. These findings suggest that CQI can improve the delivery of evidence-based health promotion by engaging front-line health practitioners in decision-making processes about the design/redesign of health center systems to support the delivery of best-practice health promotion. However, further and sustained improvements in health promotion will require broader engagement of management, senior staff, and members of the local community to address organizational and policy-level barriers. PMID:27066470

Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made 'reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving the resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory, etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify the vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving the reliability of computing systems.

Infectious pathogens are responsible for high utilisation of healthcare resources globally. Attributable morbidity and mortality remain exceptionally high. Vaccines offer the potential to prime a pathogen-specific immune response and subsequently reduce disease burden. Routine vaccination has fundamentally altered the natural history of many frequently observed and serious infections. Vaccination is also recommended for persons at increased risk of severe vaccine-preventable disease. Many current nonadjuvanted vaccines are poorly effective in elderly and immunocompromised populations, resulting in nonprotective postvaccine antibody titres, which serve as surrogate markers for protection. The vaccine-induced immune response is influenced by: (i) vaccine factors, i.e., the type and composition of the antigen(s); (ii) host factors, i.e., genetic differences in immune signalling or immunosenescence; and (iii) external factors such as immunosuppressive drugs or diseases. Adjuvanted vaccines offer the potential to compensate for a lack of immune stimulation and improve pathogen-specific protection. In this review we use influenza vaccine as a model in a discussion of the different mechanisms of action of the available adjuvants. In addition, we appraise new approaches using "vaccine-omics" to discover novel types of adjuvants. PMID:24844935

Correct gender identification in monomorphic species is often difficult, especially if males and females do not display obvious behavioral and breeding differences. We compared gender-specific morphology and behavior with recently developed DNA techniques for gender identification in the monomorphic Grasshopper Sparrow (Ammodramus savannarum). Gender was ascertained with DNA in 213 individuals using the 2550F/2718R primer set and 3% agarose gel electrophoresis. Field observations using behavior and breeding characteristics to identify gender matched DNA analyses with 100% accuracy for adult males and females. Gender was identified with DNA for all captured juveniles that did not display gender-specific traits or behaviors in the field. The molecular techniques used offered a high level of accuracy and may be useful in studies of dispersal mechanisms and winter assemblage composition in monomorphic species.

Nurse administrators are challenged to determine the best use of limited resources to support organizational patient safety improvement efforts. This article reviews the literature on techniques to reduce errors and improve patient safety in hospitals with a focus on team training initiatives. Implications for nurse administrators are discussed. PMID:25279512

An effusive molecular beam technique is described to measure alkane dissociative sticking coefficients, S(Tg, Ts; ϑ), on metal surfaces for which the impinging gas temperature, Tg, and surface temperature, Ts, can be independently varied, along with the angle of incidence, ϑ, of the impinging gas. Effusive beam experiments with Tg = Ts = T allow for determination of angle-resolved dissociative sticking coefficients, S(T; ϑ), which when averaged over the cos (ϑ)/π angular distribution appropriate to the impinging flux from a thermal ambient gas yield the thermal dissociative sticking coefficient, S(T). Nonequilibrium S(Tg, Ts; ϑ) measurements for which Tg ≠ Ts provide additional opportunities to characterize the transition state and gas-surface energy transfer at reactive energies. A resistively heated effusive molecular beam doser controls the Tg of the impinging gas striking the surface. The flux of molecules striking the surface from the effusive beam is determined from knowledge of the dosing geometry, chamber pressure, and pumping speed. Separate experiments with a calibrated leak serve to fix the chamber pumping speed. Postdosing Auger electron spectroscopy is used to measure the carbon of the alkyl radical reaction product that is deposited on the surface as a result of alkane dissociative sticking. As implemented in a typical ultrahigh vacuum chamber for surface analysis, the technique has provided access to a dynamic range of roughly 6 orders of magnitude in the initial dissociative sticking coefficient for small alkanes on Pt(111).
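The thermal averaging step described above can be written out explicitly: the angle-resolved coefficient is flux-weighted over the cos(ϑ)/π distribution of an ambient gas, S(T) = 2 ∫₀^{π/2} S(T; ϑ) cos(ϑ) sin(ϑ) dϑ. The numerical sketch below assumes a model form S(ϑ) = S₀ cosⁿϑ purely for illustration; the exact analytic average for that form is 2S₀/(n+2), which the code reproduces.

```python
import numpy as np

# Flux-weighted hemispherical average of an angle-resolved sticking
# coefficient over the cos(theta)/pi distribution of an ambient thermal gas:
#   S(T) = 2 * integral_0^{pi/2} S(T; theta) cos(theta) sin(theta) dtheta
def thermal_average(s_of_theta, n_points=20001):
    theta = np.linspace(0.0, np.pi / 2, n_points)
    f = s_of_theta(theta) * np.cos(theta) * np.sin(theta)
    h = theta[1] - theta[0]
    integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoid rule
    return 2.0 * integral

# Model form S = S0 * cos(theta)**n is an assumption for illustration only.
s0, n = 1.0e-6, 4.0
s_thermal = thermal_average(lambda th: s0 * np.cos(th) ** n)
# analytic result for this model: 2*s0/(n + 2) = s0/3
```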

This study was carried out for the characterization and discrimination of the indigenous Gram-positive, catalase-positive cocci (GCC) population in sucuk, a traditional Turkish dry-fermented sausage. Sucuk samples produced by the traditional method without starter culture were collected from 8 local producers in Kayseri/Turkey, and a total of 116 GCC isolates were identified using different molecular techniques. Two molecular fingerprinting methods, namely randomly amplified polymorphic DNA PCR (RAPD-PCR) and repetitive extragenic palindrome PCR (rep-PCR), were used for the clustering of isolates, and identification at the species level was carried out by full-length sequencing of 16S rDNA. Combining the results obtained from molecular fingerprinting and 16S rDNA sequencing showed that the dominant GCC species isolated from the sucuk samples was Staphylococcus saprophyticus, followed by Staphylococcus succinus and Staphylococcus equorum, all belonging to the Staphylococcus genus. Real-time PCR DNA melting curve analysis and high-resolution melting (HRM) analysis targeting the V1 + V3 regions of 16S rDNA were also applied for the discrimination of isolates belonging to different species. Statistically different Tm values and species-specific HRM profiles were observed for all but 2 species (S. saprophyticus and Staphylococcus xylosus) that have high 16S rDNA sequence similarity. The combination of rep-PCR and/or RAPD-PCR with 16S rRNA gene sequencing was an efficient approach for the characterization and identification of the GCC population in spontaneously fermented sucuk. On the other hand, intercalating dye assays were found to be a simple and very promising technique for the differentiation of the GCC population at the species level. PMID:24410408

Significance: Diabetic foot ulcers (DFU) are a major and growing public health problem. They pose difficulties in clinical practice in both diagnosis and management. Bacterial interactions on the skin surface are important in the pathophysiology of DFU and may contribute to a delay in healing. Fully identifying bacteria present in these wounds is difficult with traditional culture methods. New molecular tools, however, have greatly contributed to our understanding of the role of the cutaneous microbiota in DFU. Recent Advances: Molecular technologies revealed new information concerning how bacteria are organized in DFU. This has led to the concept of “functionally equivalent pathogroups,” meaning that certain bacterial species which are usually nonpathogenic (or at least incapable of maintaining a chronic infection on their own) may coaggregate symbiotically in a pathogenic biofilm and act synergistically to cause a chronic infection. The distribution of pathogens in multispecies biofilms is nonrandom. The high bacterial diversity is probably related to the development of a microbial biofilm that is irreversibly attached to the wound matrix. Critical Issues: Using molecular techniques requires a financial outlay for high-cost equipment. They are still too time-consuming to perform, and reporting is too delayed, for them to be used in routine practice. Finally, they do not differentiate live from dead or pathogenic from nonpathogenic microorganisms. Future Directions: Molecular tools have better documented the composition and organization of the skin flora. Further advances are required to elucidate which among the many bacteria in the DFU flora are likely to be pathogens, rather than colonizers. PMID:25566413

Recent progress made in the development of novel molecule-based flow diagnostic techniques, including molecular tagging velocimetry (MTV) and lifetime-based molecular tagging thermometry (MTT), to achieve simultaneous measurements of multiple important flow variables for micro-flows and micro-scale heat transfer studies is reported in this study. The focus of the work described here is the particular class of molecular tagging tracers that relies on phosphorescence. Instead of using tiny particles, specially designed phosphorescent molecules, which can be turned into long-lasting glowing marks upon excitation by photons of appropriate wavelength, are used as tracers for both flow velocity and temperature measurements. A pulsed laser is used to 'tag' the tracer molecules in the regions of interest, and the tagged molecules are imaged at two successive times within the photoluminescence lifetime of the tracer molecules. The measured Lagrangian displacement of the tagged molecules provides the estimate of the fluid velocity. The simultaneous temperature measurement is achieved by taking advantage of the temperature dependence of the phosphorescence lifetime, which is estimated from the intensity ratio of the tagged molecules in the two acquired phosphorescence images. The implementation and application of the molecular tagging approach for micro-scale thermal flow studies are demonstrated by two examples. The first example is the simultaneous flow velocity and temperature measurement inside a microchannel to quantify the transient behavior of electroosmotic flow (EOF) and elucidate the underlying physics associated with the effects of Joule heating on electrokinetically driven flows. The second example examines the time evolution of the unsteady heat transfer and phase-changing process inside micro-sized, icing water droplets, which is pertinent to the ice formation and accretion processes as water droplets impinge onto cold wind turbine blades.
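The lifetime estimate from the two-image ratio follows directly from a single-exponential decay: if I(t) = I₀ exp(−t/τ), then two images separated by Δt give τ = Δt / ln(I₁/I₂). The sketch below also maps τ to temperature through a linear calibration; the intensities, delay, and calibration constants are all assumptions for illustration, not values from the study.

```python
from math import log

# Lifetime-based MTT sketch: single-exponential phosphorescence decay
# I(t) = I0 * exp(-t / tau), two images taken delta_t apart.
def lifetime_from_ratio(i1, i2, delta_t_s):
    return delta_t_s / log(i1 / i2)

def temperature_from_lifetime(tau_s, tau0_s=4.0e-3, t0_c=20.0,
                              slope_c_per_ms=-10.0):
    # Assumed linear calibration T = T0 + slope * (tau - tau0); a real
    # experiment would use a measured tau(T) calibration curve.
    return t0_c + slope_c_per_ms * (tau_s - tau0_s) * 1e3

i1, i2, dt = 1000.0, 368.0, 4.0e-3   # synthetic intensities, 4 ms apart
tau = lifetime_from_ratio(i1, i2, dt)
temp_c = temperature_from_lifetime(tau)
```

Because the ratio I₁/I₂ cancels the unknown initial intensity I₀, the method is insensitive to local tracer concentration and excitation non-uniformity, which is what makes it attractive for micro-scale thermometry.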

Improved basis sets for the study of polymer dynamics by means of the diffusion theory, and tests on a melt of cis-1,4-polyisoprene decamers, and a toluene solution of a 71-mer syndiotactic trans-1,2-polypentadiene were presented recently [R. Gaspari and A. Rapallo, J. Chem. Phys. 128, 244109 (2008)]. The proposed hybrid basis approach (HBA) combined two techniques, the long time sorting procedure and the maximum correlation approximation. The HBA takes advantage of the strengths of these two techniques, and its basis sets proved to be very effective and computationally convenient in describing both local and global dynamics in cases of flexible synthetic polymers where the repeating unit is a unique type of monomer. The question then arises whether the same efficacy continues when the HBA is applied to polymers of different monomers, variable local stiffness along the chain and longer persistence length, which have local and global dynamical properties different from those of the above-mentioned systems. Important examples of this kind of molecular chain are proteins, so a fragment of the protein transthyretin is chosen as the system of the present study. This peptide corresponds to a sequence that is structured in β-sheets of the protein and is located on the surface of the channel with thyroxin. The protein transthyretin forms amyloid fibrils in vivo, whereas the peptide fragment has been shown [C. P. Jaroniec, C. E. MacPhee, N. S. Astrof, C. M. Dobson, and R. G. Griffin, Proc. Natl. Acad. Sci. U.S.A. 99, 16748 (2002)] to form amyloid fibrils in vitro in extended β-sheet conformations. For these reasons the latter has received considerable attention in the literature and has also been studied as an isolated fragment in water solution, where both experimental and theoretical efforts have indicated that the system has a propensity to form β turns or α helices but is otherwise predominantly unstructured. Differing from previous computational studies that employed implicit

Molecular biology techniques play a very important role in understanding biological activity. Students who major in biology should know not only how to perform experiments, but also the reasons for performing them. It is especially important to grasp how research is conducted by integrating various techniques. This paper introduces a…

The H2-H2O ambient-temperature equilibration technique for the determination of 2H/1H ratios in urinary waters from diabetic subjects provides improved accuracy over the conventional Zn reduction technique. The standard deviation, approximately 1-2‰, is at least a factor of three better than that of the Zn reduction technique on urinary waters from diabetic volunteers. Experiments with pure water and solutions containing glucose, urea and albumen indicate that there is no measurable bias in the hydrogen equilibration technique.

We describe a case of congenitally acquired candidiasis in a preterm female delivered by Caesarean section due to premature rupture of the amniotic membrane. The neonate presented with suspected chorioamnionitis and erythematous desquamative skin. Candida albicans was isolated from the placenta, mouth, groin, and periumbilical lesions. The infant developed candidemia due to Candida albicans, and the same yeast was also isolated from a catheter. Cultures inoculated with swabs from the mouth and vagina of the mother yielded C. albicans and C. krusei. All C. albicans isolates from the mother and the neonate were indistinguishable by molecular typing techniques, which included chromosomal karyotyping and restriction endonuclease analysis followed by pulsed-field gel electrophoresis. These findings confirmed the clinical condition as congenital acquisition of candidiasis in this case. PMID:19306215

Vibrio parahaemolyticus is a Gram-negative halophilic bacterium that is found in estuarine, marine and coastal environments. V. parahaemolyticus is the leading causal agent of human acute gastroenteritis following the consumption of raw, undercooked, or mishandled marine products. In rare cases, V. parahaemolyticus causes wound infection, ear infection or septicaemia in individuals with pre-existing medical conditions. V. parahaemolyticus has two hemolysin virulence factors: thermostable direct hemolysin (tdh), a pore-forming protein that contributes to the invasiveness of the bacterium in humans, and TDH-related hemolysin (trh), which plays a similar role as tdh in the disease pathogenesis. In addition, the bacterium also encodes adhesins and type III secretion systems (T3SS1 and T3SS2) to ensure its survival in the environment. This review discusses V. parahaemolyticus growth and characteristics, pathogenesis, prevalence and advances in molecular identification techniques. PMID:25566219

Wastewater treatment systems tend to be engineered to select for a few functional microbial groups that may be organized in various spatial structures such as activated sludge flocs, biofilm or granules and represented by single coherent phylogenetic groups such as ammonia-oxidizing bacteria (AOB) and polyphosphate-accumulating organisms (PAO). In order to monitor and control engineered microbial structure in wastewater treatment systems, it is necessary to understand the relationships between the microbial community structure and the process performance. This review focuses on bacterial communities in wastewater treatment processes, the quantity of microorganisms and the structure of microbial consortia in wastewater treatment bioreactors. The review shows that the application of molecular techniques in studies of engineered environmental systems has increased our insight into the vast diversity and interaction of microorganisms present in wastewater treatment systems.

This paper presents a multiscale study using the coupled Meshless technique/Molecular Dynamics (M²) method for exploring the deformation mechanism of mono-crystalline metal (focusing on copper) under uniaxial tension. In M², an advanced transition algorithm using transition particles is employed to ensure the compatibility of both displacements and their gradients, and an effective local quasi-continuum approach is also applied to obtain the equivalent continuum strain energy density based on the atomistic potentials and the Cauchy-Born rule. The key parameters used in M² are first investigated using a benchmark problem. Then, M² is applied to the multiscale simulation of a mono-crystalline copper bar. It was found that mono-crystalline copper has very good elongation properties, and that the ultimate strength and Young's modulus are much higher than those obtained at the macro-scale.

The first part of this work focused on the preparation and characterization of novel elastomers based on poly(tetrahydrofuran) (PTHF) networks. Elastomers were prepared by a hydrolysis-condensation reaction, which was followed by FTIR spectroscopy. The elastomers thus obtained were studied with regard to their equilibrium swelling in toluene at 25°C and their stress-strain isotherms in elongation. For some of the samples, high elongations seemed to bring about highly desirable strain-induced crystallization, as evidenced by upturns in the modulus. Swelling these samples with increasing amounts of the non-volatile diluent dibutyl phthalate caused the upturns to gradually disappear. The second part of this work focused on extending the newly developed sound wave propagation technique to characterize elastomeric polymer networks. The technique was applied to characterize polybutadiene (PBD) networks. The speed of wave propagation in PBD networks was found to be strongly dependent on network structural parameters such as the average molecular weight of chains between crosslinks and the entanglement molecular weight. Also, for the swollen networks, pulse speeds decreased with increasing degree of swelling. Upturns due to strain-induced crystallization at higher elongations were clearly evident in the pulse speeds. The third part of this work presented improvements in the mechanical properties of the thermoplastic biodegradable poly(3-hydroxybutyrate-co-3-hydroxyhexanoate) (Nodax(TM)) using a pre-orientation technique. This simple approach involved heating the polymer film to a temperature above its glass transition temperature, stretching it to the desired extension (%), and then quenching it to room temperature while in the stretched state. As expected, pre-orientation resulted in substantial improvements in the mechanical properties of the films. The pre-oriented films had higher values of the modulus, toughness, yield stress, and tensile

Recognition that some lesions typical of T cell-mediated rejection (TCMR) also occur in antibody-mediated rejection requires revision of the histologic TCMR definition. To guide this process, we assessed the relative importance of various lesions and the performance of new histology diagnostic algorithms, using molecular TCMR scores as histology-independent estimates of true TCMR. In 703 indication biopsies, random forest analysis and logistic regression indicated that interstitial infiltrate (i-lesions) and tubulitis (t-lesions) were the key histologic predictors of molecular TCMR, with arteritis (v-lesions) having less importance. Histology predicted molecular TCMR more accurately when diagnoses were assigned by strictly applying the Banff rules to the lesion scores and redefining isolated v-lesion TCMR. This improved prediction from area under the curve (AUC) 0.70 with existing rules to AUC 0.80. Further improvements were achieved by introducing more categories to reflect inflammation (AUC 0.84), by summing the lesion scores (AUC 0.85) and by logistic regression (AUC 0.90). We concluded that histologic assessment of TCMR can be improved by placing more emphasis on i- and t-lesions and incorporating new algorithms for diagnosis. Nevertheless, some discrepancies between histologic and molecular diagnoses persist, partially due to the inherent nonspecificity of i- and t-lesions, and molecular methods will be required to help resolve these cases. PMID:26730747
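The "summing the lesion scores" classifier mentioned above (AUC 0.85 in the text) is simple enough to sketch directly. The toy biopsy records and molecular TCMR labels below are invented for illustration; only the idea of ranking biopsies by the summed Banff i-, t- and v-lesion scores, and evaluating the ranking by AUC, comes from the abstract.

```python
def auc_score(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability that a random positive outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical Banff lesion scores (0-3) with a molecular TCMR label.
biopsies = [
    {"i": 3, "t": 3, "v": 1, "tcmr": 1},
    {"i": 2, "t": 2, "v": 0, "tcmr": 1},
    {"i": 1, "t": 0, "v": 0, "tcmr": 0},
    {"i": 0, "t": 1, "v": 0, "tcmr": 0},
    {"i": 0, "t": 0, "v": 0, "tcmr": 0},
]
scores = [b["i"] + b["t"] + b["v"] for b in biopsies]  # summed-lesion classifier
labels = [b["tcmr"] for b in biopsies]
auc = auc_score(scores, labels)
```

On real data the logistic-regression variant would learn per-lesion weights instead of the implicit equal weighting of a plain sum.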

Purpose: To evaluate the dosimetric outcomes of a simple planning technique for improving intensity-modulated radiotherapy (IMRT) for nasopharyngeal cancer (NPC). Methods: For 39 NPC cases, generally acceptable original plans were generated and then improved by each of two planning techniques: (1) a basal-dose-compensation (BDC) technique, in which the treatment plans were re-optimized based on the original plans; (2) a local-dose-control (LDC) technique, in which the original plans were re-optimized with constraints on hot and cold spots. The BDC, original, and LDC plans were then compared regarding the homogeneity index (HI) and conformity index (CI) of the planning target volumes (PTVs), organ-at-risk (OAR) sparing, and monitor units (MUs) per fraction. The whole planning times were also compared between the BDC and LDC plans. Results: The BDC plans had HIs superior by 13-24% and CIs superior by 3-243% relative to the original plans. Compared to the LDC plans, the BDC plans provided better HIs only for PTVnx (the PTV of the nasopharyngeal primary tumor), by 11%, and better CIs for all PTVs, by 2-134%. The BDC technique better spared most OARs, by 1-9%. The average MUs of the BDC, original, and LDC plans were 2149, 2068 and 2179, respectively. The average whole planning times were 48 and 69 minutes for the BDC and LDC plans, respectively. Conclusions: For IMRT of nasopharyngeal cancer, the BDC planning technique can improve target dose homogeneity, conformity and OAR sparing, with better planning efficiency. PMID:26132167

The aim of this report is to describe an alternative technique to record the neutral zone. An acrylic resin base with posterior occlusal rims was applied using a thermoplastic denture adhesive. After being worn for 2 days, the base was transferred into an acrylic resin complete denture. Most patients reported an improvement in denture stability and a reduction of pressure sores. This procedure seems to be helpful to improve denture function, especially in the mandible, in patients who cannot be treated with implants. However, because of its complexity, this neutral zone technique cannot be recommended for routine clinical use. PMID:22930774

Objective: To evaluate an improved technique for the treatment of internal hemorrhoids with the Nd:YAG laser and to assess its effective rate. Methods: 60 patients with internal hemorrhoids were treated with an Nd:YAG laser (10-15 mW) irradiating the mucosa of the lesions. Results: Among the 60 patients, 57 were cured with one treatment and 3 were cured with two treatments. The effective rate was 95% with one treatment, and it reached 100% with two treatments. Conclusions: The improved technique for the treatment of internal hemorrhoids with the Nd:YAG laser is effective and easy to perform.

An amplitude control technique has been employed for use with analog voice communication systems, which improves low-level phoneme reception and eliminates the received noise between words and syllables. Tests were conducted on a narrow-band frequency-modulation simplex voice communication channel employing the amplitude control technique. Presented for both the modified rhyme word tests and the phonetically balanced word tests are a series of graphical plots of the tests' score distribution, mean, and standard deviation as a function of received carrier-to-noise power density ratio. At low received carrier-to-noise power density ratios, a significant improvement in the intelligibility was obtained. A voice intelligibility improvement of more than 2 dB was obtained for the modified rhyme test words, and a voice intelligibility improvement in excess of 4 dB was obtained for the phonetically balanced word tests.

An improved method of estimating molecular weights of volatile organic compounds from their mass spectra has been developed and implemented with an expert system. The method is based on the strong correlation of MAXMASS, the highest mass with an intensity of 5% of the base peak in ...
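The MAXMASS descriptor quoted above can be computed directly from a peak list. A minimal sketch, assuming "an intensity of 5% of the base peak" means at least 5%, and using an invented toy spectrum:

```python
def maxmass(spectrum, threshold=0.05):
    """Return the highest m/z whose intensity is at least `threshold`
    (as a fraction of the base-peak intensity) -- the MAXMASS descriptor."""
    base = max(spectrum.values())  # base peak = most intense peak
    return max(mz for mz, inten in spectrum.items() if inten >= threshold * base)

# Toy EI spectrum as {m/z: relative intensity}; base peak at m/z 43.
spec = {43: 100.0, 58: 40.0, 71: 12.0, 86: 6.0, 87: 0.4}
mm = maxmass(spec)  # 87 falls below the 5% cutoff, so MAXMASS is 86
```

An expert system would then map MAXMASS (plus other spectral features) to a molecular-weight estimate via the correlation the abstract describes.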

Boosted excitation energy transfer in spiranic O-BODIPY/polyarene cassettes, when compared with the parent non-spiranic (flexible) system, is highlighted as a proof for the ability of a new structural design to improve the energy transfer in molecular cassettes. PMID:25207836

Ultra-high molecular weight polyethylene or UHMWPE is an extremely difficult material to coat with, as it is rubbery and chemically very inert. The Cold Spray process appears to be a promising alternative processing technique but polymers are in general difficult to deposit using this method. So, attempts to develop UHMWPE coatings were made using a downstream injection cold spray technique incorporating a few modifications. A conventional cold spray machine yielded only a few deposited particles of UHMWPE on the substrate surface, but with some modifications in the nozzle geometry (especially the length and inner geometry) a thin coating of 45 μm on Al substrate was obtained. Moreover, experiments with the addition of fumed nano-alumina to the feedstock yielded a coating of 1-4 mm thickness on Al and polypropylene substrates. UHMWPE was seen to be melt crystallized during the coating formation, as can be seen from the differential calorimetry curves. Influence of nano-ceramic particles was explained by observing the creation of a bridge bond between UHMWPE particles.

Tumour heterogeneity has, in recent times, come to play a vital role in how we understand and treat cancers; however, the clinical translation of this has lagged behind advances in research. Although significant advancements in oncological management have been made, personalized care remains an elusive goal. Inter- and intratumour heterogeneity, particularly in the clinical setting, has been difficult to quantify and therefore to treat. The histological quantification of heterogeneity of tumours can be a logistical and clinical challenge. The ability to examine not just the whole tumour but also all the molecular variations of metastatic disease in a patient is obviously difficult with current histological techniques. Advances in imaging techniques and novel applications, alongside our understanding of tumour heterogeneity, have opened up a plethora of non-invasive biomarker potential to examine tumours, their heterogeneity and the clinical translation. This review will focus on how various imaging methods that allow for quantification of metastatic tumour heterogeneity, along with the potential of developing imaging, integrated with other in vitro diagnostic approaches such as genomics and exosome analyses, have the potential role as a non-invasive biomarker for guiding the treatment algorithm. PMID:24597512

Classroom assessment techniques (CATs) or other closure activities are widely promoted for use in college classrooms. However, research on whether CATs improve student learning is mixed. The authors posit that the results are mixed because CATs were designed to "help teachers find out what students are learning in the classroom and how well…

This toolkit offers program managers a hands-on guide for implementing quality programming in the after-school hours. The kit includes tools and techniques that increased the quality of literacy programming and helped improve student reading gains in the Communities Organizing Resources to Advance Learning (CORAL) initiative of The James Irvine…

A simple method of improving the TWT and multistage depressed collector (MDC) efficiency has been demonstrated. The efficiency improvement was produced by the application of a thin layer of carbon to the copper electrodes of the MDC by means of a rapid low-cost technique involving the pyrolysis of hydrocarbon oil in electric arc discharges. Experimental results with a representative TWT and MDC showed an 11 percent improvement in both the TWT and MDC efficiencies as compared to those of the same TWT and MDC with machined copper electrode surfaces. An extended test with a 550-W CW TWT indicated good durability of the carbon-coated electrode surfaces.

A technique for improving the momentum resolution for low momentum charged particles in few-layer silicon-based trackers is presented. The particle momenta are determined from the measured Landau dE/dx distribution and the Bethe-Bloch formula in the 1/β² region. It is shown that a factor of two improvement of the momentum determination is achieved as compared to standard track fitting methods. This improvement is important in large-scale heavy ion experiments which cover the low transverse momentum spectra using stand-alone silicon tracking devices with a few planes, like the ones used in STAR at RHIC and ALICE at LHC.
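The momentum reconstruction described here amounts to inverting the 1/β² approximation of Bethe-Bloch: dE/dx ≈ k/β² yields β, and then p = mβγ. The sketch below is a hedged illustration with an arbitrary calibration constant k and a pion mass; it is not the experiments' actual calibration.

```python
import math

def momentum_from_dedx(dedx, mass, k):
    """Invert the 1/beta^2 region of Bethe-Bloch:
    dE/dx ~ k / beta^2  =>  beta^2 = k / dedx,  p = m * beta * gamma.
    `k` is a detector-specific calibration constant (assumed known)."""
    beta2 = k / dedx
    if not 0.0 < beta2 < 1.0:
        raise ValueError("dE/dx outside the 1/beta^2 region")
    beta = math.sqrt(beta2)
    gamma = 1.0 / math.sqrt(1.0 - beta2)
    return mass * beta * gamma

# Round trip for a pion (m = 0.1396 GeV/c^2): pick beta, synthesize dE/dx,
# then recover the momentum.
k = 1.0
beta = 0.6
dedx = k / beta**2
p = momentum_from_dedx(dedx, 0.1396, k)
```

In a real tracker the measured dE/dx would be a truncated mean over the few silicon planes to suppress the Landau tail before this inversion is applied.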

Accurate and timely molecular test results play an important role in patient management; consequently, there is a customer expectation of short testing turnaround times. Baseline data analysis revealed that the greatest challenge to timely result generation occurred in the preanalytic phase of specimen collection and transport. Here, we describe our efforts to improve molecular testing turnaround times by focusing primarily on redesign of preanalytic processes using the principles of LEAN production. Our goal was to complete greater than 90% of the molecular tests in less than 3 days. The project required cooperation from different laboratory disciplines as well as individuals outside of the laboratory. The redesigned processes involved defining and standardizing the protocols and approaching blood and tissue specimens as analytes for molecular testing. The LEAN process resulted in fewer steps, approaching the ideal of a one-piece flow for specimens through collection/retrieval, transport, and different aspects of the testing process. The outcome of introducing the LEAN process has been a 44% reduction in molecular test turnaround time for tissue specimens, from an average of 2.7 to 1.5 days. In addition, extending LEAN work principles to the clinician suppliers has resulted in a markedly increased number of properly collected and shipped blood specimens (from 50 to 87%). These continuous quality improvements were accomplished by empowered workers in a blame-free environment and are now being sustained with minimal management involvement. PMID:19661386

We propose color digital holography using a spectral estimation technique to improve the color reproduction of objects. In conventional color digital holography, there is insufficient spectral information in holograms, and the color of the reconstructed images depends only on the reflectances at the three discrete wavelengths used in the recording of the holograms. Therefore the color-composite image of the three reconstructed images is not accurate in color reproduction. In our proposed method, however, we applied the spectral estimation technique that has been reported in multispectral imaging. With this technique, the continuous spectrum of the object can be estimated and the color reproduction is improved. The effectiveness of the proposed method was confirmed by a numerical simulation and an experiment; in the results, the average color differences decreased from 35.81 to 7.88 and from 43.60 to 25.28, respectively. PMID:22193005

Purpose To improve 19F flip angle calibration and compensate for B1 inhomogeneities in quantitative 19F MRI of sparse molecular epitopes with perfluorocarbon (PFC) nanoparticle (NP) emulsion contrast agents. Materials and Methods Flip angle sweep experiments on PFC-NP point source phantoms with three custom-designed 19F/1H dual-tuned coils revealed a difference in required power settings for 19F and 1H nuclei, which was used to calculate a calibration ratio specific for each coil. An image-based correction technique was developed using B1-field mapping on 1H to correct for 19F and 1H images in two phantom experiments. Results Optimized 19F peak power differed significantly from that of 1H power for each coil (p<0.05). A ratio of 19F/1H power settings yielded a coil-specific and spatially independent calibration value (surface: 1.48±0.06; semi-cylindrical: 1.71±0.02, single-turn-solenoid: 1.92±0.03). 1H-image-based B1 correction equalized the signal intensity of 19F images for two identical 19F PFC-NP samples placed in different parts of the field, which were offset significantly by ~66% (p<0.001) before correction. Conclusion 19F flip angle calibration and B1-mapping compensations to the 19F images employing the more abundant 1H signal as a basis for correction result in a significant change in the quantification of sparse 19F MR signals from targeted PFC NP emulsions. PMID:25425244
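Both corrections in this abstract reduce to simple arithmetic: scaling the calibrated 1H transmit power by the coil-specific 19F/1H ratio, and dividing the 19F image by a normalized 1H-derived B1 map. The sketch below is a first-order illustration; the linear B1 division and the example signal values are assumptions, and only the mean ratio values come from the text.

```python
import numpy as np

# Coil-specific 19F/1H power-setting ratios reported in the text (means).
RATIOS = {"surface": 1.48, "semi_cylindrical": 1.71, "single_turn_solenoid": 1.92}

def f19_power(h1_power, coil):
    """Scale the calibrated 1H power setting by the coil's 19F/1H ratio
    to obtain the 19F setting for the same nominal flip angle."""
    return h1_power * RATIOS[coil]

def b1_correct(img_f19, b1_map_h1):
    """Divide the 19F image by the 1H-derived B1 map normalized to its
    mean -- a first-order, linear image-based correction."""
    b1 = np.asarray(b1_map_h1, float)
    return np.asarray(img_f19, float) / (b1 / b1.mean())

# Two identical samples whose raw signals differ only through B1:
# after correction the two intensities should agree.
corrected = b1_correct([120.0, 80.0], [1.2, 0.8])
```

A full treatment would account for the nonlinear dependence of signal on flip angle for the specific pulse sequence rather than the simple linear division used here.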

With the increase in crude oil prices, climate change concerns and limited reserves of fossil fuel, attention has been diverted to alternate renewable energy sources such as biofuel and biomass. Among the potential biofuel crops, Jatropha curcas L., a non-domesticated shrub, has been gaining importance as the most promising oilseed, as it does not compete with the edible oil supplies. Economic relevance of J. curcas for biodiesel production has promoted world-wide prospecting of its germplasm for crop improvement and breeding. However, lack of adequate genetic variation and non-availability of improved varieties limited its prospects of being a successful energy crop. In this review, we present the progress made in molecular breeding approaches with particular reference to tissue culture and genetic transformation, genetic diversity assessment using molecular markers, large-scale transcriptome and proteome studies, identification of candidate genes for trait improvement, whole genome sequencing and the current interest by various public and private sector companies in commercial-scale cultivation, which highlights the revival of Jatropha as a sustainable energy crop. The information generated from molecular markers, transcriptome profiling and whole genome sequencing could accelerate the genetic upgradation of J. curcas through molecular breeding. PMID:21584678

DNA hybridization is of tremendous importance in biology, bionanotechnology, and biophysics. Molecular beacons are engineered DNA hairpins with a fluorophore and a quencher labeled on each of the two ends. A target DNA can open the hairpin to give an increased fluorescence signal. To date, the majority of molecular beacon detections have been performed only in aqueous buffers. We describe herein DNA detection in nine different organic solvents, methanol, ethanol, isopropanol, acetonitrile, formamide, dimethylformamide (DMF), dimethyl sulfoxide (DMSO), ethylene glycol, and glycerol, varying each up to 75% (v/v). In comparison with detection in water, the detection in organic solvents showed several important features. First, the molecular beacon hybridizes to its target DNA in the presence of all nine solvents up to a certain percentage. Second, the rate of this hybridization was significantly faster in most organic solvents compared with water. For example, in 56% ethanol, the beacon showed a 70-fold rate enhancement. Third, the ability of the molecular beacon to discriminate single-base mismatch is still maintained. Lastly, the DNA melting temperature in the organic solvents showed a solvent concentration-dependent decrease. This study suggests that molecular beacons can be used for applications where organic solvents must be involved or organic solvents can be intentionally added to improve the molecular beacon performance. PMID:21062084

Assisted reproductive techniques (ARTs) have become the treatment of choice in many cases of infertility; however, the current success rates of these procedures remain suboptimal. Programmed cell death (apoptosis) most likely contributes to failed ART and to the decrease in sperm quality after cryopreservation. There is a likelihood that some sperm selected for ART will display features of apoptosis despite their normal appearance, which may be partially responsible for the low fertilization and implantation rates seen with ART. One of the features of apoptosis is the externalization of phosphatidylserine (PS) residues, which are normally present on the inner leaflet of the sperm plasma membrane. Colloidal superparamagnetic microbeads (approximately 50 nm in diameter) conjugated with annexin V bind to PS and are used to separate dead and apoptotic spermatozoa by magnetic-activated cell sorting (MACS). Cells with externalized PS will bind to these microbeads, whereas nonapoptotic cells with intact membranes do not bind and could be used during ARTs. We have conducted a series of experiments to investigate whether the MACS technology could be used to improve ART outcomes. Our results clearly indicate that integrating MACS as a part of sperm preparation techniques will improve semen quality and cryosurvival rates by eliminating apoptotic sperm. Nonapoptotic spermatozoa prepared by MACS display higher quality in terms of routine sperm parameters and apoptosis markers. The higher sperm quality is represented by an increased oocyte penetration potential and cryosurvival rates. Thus, the selection of nonapoptotic spermatozoa by MACS should be considered to enhance ART success rates. PMID:18077822

This paper describes an improved error separation technique (EST) for on-machine surface profile measurement which can be applied to optical lenses on precision and ultra-precision machine tools. With only one precise probe and a linear stage, improved EST not only reduces measurement costs, but also shortens the sampling interval, which implies that this method can be used to measure the profile of small-bore lenses. The improved EST with stitching method can be applied to measure the profile of high-height lenses as well. Since the improvement is simple, most of the traditional EST can be modified by this method. The theoretical analysis and experimental results in this paper show that the improved EST eliminates the slide error successfully and generates an accurate lens profile.

A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed derivations of simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these derivations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
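The variance-reduction idea can be sketched in a toy setting (an illustrative example, not the paper's derivation): estimating a small Gaussian tail probability. An IIS-style translation of the sampling density concentrates samples in the rare-event region and reweights each hit by the likelihood ratio.

```python
import math
import random

def mc_estimate(threshold, n, rng):
    # Plain Monte Carlo: fraction of standard-normal samples exceeding
    # the threshold; almost all samples land in the common region.
    hits = sum(1 for _ in range(n) if rng.gauss(0.0, 1.0) > threshold)
    return hits / n

def iis_estimate(threshold, n, rng):
    # Translation-based IS: sample from N(threshold, 1) so roughly half
    # the samples land in the rare region, and reweight each hit by the
    # likelihood ratio phi(x)/phi(x - t) = exp(-t*x + t^2/2).
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += math.exp(-threshold * x + threshold ** 2 / 2.0)
    return total / n

rng = random.Random(1)
p_true = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # P(N(0,1) > 4), about 3.2e-5
p_mc = mc_estimate(4.0, 20000, rng)             # crude at this sample budget
p_is = iis_estimate(4.0, 20000, rng)            # stable at the same budget
```

At this budget the plain MC estimator sees fewer than one exceedance on average, while the translated estimator tracks the true probability closely.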

As versatile tools in separation science, cyclodextrins and their derivatives, known as emerging functional monomers, have been used extensively in molecular imprinting techniques. Cyclodextrins and their derivatives are widely known for forming host-guest inclusion complexes between polymer and template. The imprinting technique yields molecularly imprinted polymers that are very robust, with long-term stability, reliability, cost-efficiency, and selectivity. Hence, molecularly imprinted polymers have gained popularity in chemical separation and analysis. Molecularly imprinted polymers containing either cyclodextrin or its derivatives demonstrate superior binding effects for a target molecule. As noted in previous studies, the functional monomers of cyclodextrins and their derivatives have been used in molecular imprinting for selective separation of a wide range of chemical compounds, including steroids, amino acids, polysaccharides, drugs, plant hormones, proteins, pesticides, and plastic additives. Therefore, the main goal of this review is to illustrate the diverse applications of imprinting techniques employing cyclodextrins and their derivatives as single or binary functional monomers in synthesizing molecularly imprinted polymers for separation science, by reviewing some of the latest studies reported in the literature. PMID:27324352

Multiple sclerosis (MS) is a complex and multifactorial neurological disease, and nutrition is one of the environmental factors possibly involved in its pathogenesis. At present, the role of nutrition is unclear, and MS therapy is not associated with a particular diet. MS clinical trials based on specific diets or dietary supplements are very few and in some cases controversial. To understand how diet can influence the course of MS and improve the wellness of MS patients, it is necessary to identify the dietary molecules, their targets and the molecular mechanisms involved in the control of the disease. The aim of this paper is to provide a molecular basis for nutritional intervention in MS by evaluating at the molecular level the effect of dietary molecules on the inflammatory and autoimmune processes involved in the disease. PMID:21461338

In an effort to achieve high success in knowledge and technique acquisition as a whole, a biochemistry and molecular biology experiment was established for upper-level biotechnology students after they had studied essential theory and received proper technique training. The experiment was based on cloning and expression of alkaline…

Spatial resolution improvements in computed tomography (CT) have been limited by the large and unique error propagation properties of this technique. The desire to provide maximum image resolution has resulted in the use of reconstruction filter functions designed to produce tomographic images with resolution as close as possible to the intrinsic detector resolution. Thus, many CT systems produce images with excessive noise with the system resolution determined by the detector resolution rather than the reconstruction algorithm. CT is a rigorous mathematical technique which applies an increasing amplification to increasing spatial frequencies in the measured data. This mathematical approach to spatial frequency amplification cannot distinguish between signal and noise and therefore both are amplified equally. We report here a method in which tomographic resolution is improved by using very small detectors to selectively amplify the signal and not noise. Thus, this approach is referred to as the signal amplification technique (SAT). SAT can provide dramatic improvements in image resolution without increases in statistical noise or dose because increases in the cutoff frequency of the reconstruction algorithm are not required to improve image resolution. Alternatively, in cases where image counts are low, such as in rapid dynamic or receptor studies, statistical noise can be reduced by lowering the cutoff frequency while still maintaining the best possible image resolution. A possible system design for a positron CT system with SAT is described.
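The frequency-amplification argument can be made concrete with a toy ramp filter applied to white noise (a sketch of the standard reconstruction filter's behaviour, not of SAT itself): raising the filter's cutoff frequency amplifies noise without adding any signal.

```python
import numpy as np

def ramp_filter(signal, cutoff):
    # Apply the |f| reconstruction filter in the frequency domain,
    # zeroing frequencies above `cutoff` (a fraction of Nyquist).
    freqs = np.fft.fftfreq(len(signal))           # cycles/sample in [-0.5, 0.5)
    response = np.abs(freqs)                      # increasing amplification
    response[np.abs(freqs) > 0.5 * cutoff] = 0.0  # hard cutoff
    return np.fft.ifft(np.fft.fft(signal) * response).real

# White noise stands in for measurement noise in one projection.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
low_cut = ramp_filter(noise, 0.5)   # lower cutoff: less noise passed
high_cut = ramp_filter(noise, 1.0)  # full cutoff: more noise amplified
```

Because the filter cannot distinguish signal from noise, the only way to lower the noise here is to lower the cutoff, which is exactly the resolution/noise trade-off the SAT approach is designed to avoid.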

Several aspects of alginate and PHB synthesis in Azotobacter vinelandii at a molecular level have been elucidated in articles published during the last ten years. It is now clear that alginate and PHB synthesis are under a very complex genetic control. Genetic modification of A. vinelandii has produced a number of very interesting mutants which have particular traits for alginate production. One of these mutants has been shown to produce the alginate with the highest mean molecular mass so far reported. Recent work has also shed light on the factors determining molecular mass distribution, the most important being dissolved oxygen tension and specific growth rate. The use of specific mutants has been very useful for the correct analysis and interpretation of the factors affecting polymerization. Recent scale-up/down work on alginate production has shown that oxygen limitation is crucial for producing alginate of high molecular mass, a condition which is optimized in shake flasks and which can now be reproduced in stirred fermenters. It is clear that the phenotypes of mutants grown on plates are not necessarily reproducible when the strains are tested in lab or bench scale fermenters. In the case of PHB, A. vinelandii has shown itself able to produce relatively large amounts of this polymer of high molecular weight on cheap substrates, even allowing for simple extraction processes. The development of fermentation strategies has also shown promising results in terms of improving productivity. The understanding of the regulatory mechanisms involved in the control of PHB synthesis, and of its metabolic relationships, has increased considerably, making way for new potential strategies for the further improvement of PHB production. Overall, the use of a multidisciplinary approach, integrating molecular and bioengineering aspects, is a necessity for optimizing alginate and PHB production in A. vinelandii. PMID:17306024

In early 2005, several groups of investigators studying myeloid malignancies described a novel somatic point mutation (V617F) in the conserved autoinhibitory pseudokinase domain of the Janus kinase 2 (JAK2) protein, which plays an important role in normal hematopoietic growth factor signaling. The V617F mutation is present in blood and marrow from a large proportion of patients with classic BCR/ABL-negative chronic myeloproliferative disorders and of a few patients with other clonal hematological diseases such as myelodysplastic syndrome, atypical myeloproliferative disorders, and acute myeloid leukemia. The JAK2 V617F mutation causes constitutive activation of the kinase, with deregulated intracellular signaling that mimics continuous hematopoietic growth factor stimulation. Within 7 months of the first electronic publication describing this new mutation, clinical molecular diagnostic laboratories in the United States and Europe began offering JAK2 mutation testing on a fee-for-service basis. Here, I review the various techniques used by research groups and clinical laboratories to detect the genetic mutation underlying JAK2 V617F, including fluorescent dye chemistry sequencing, allele-specific polymerase chain reaction (PCR), real-time PCR, DNA-melting curve analysis, pyrosequencing, and others. I also discuss diagnostic sensitivity, performance, and other practical concerns relevant to the clinical laboratorian in addition to the potential diagnostic utility of JAK2 mutation tests. PMID:16931578

One of the most important problems in tissue preparation for electron microscopic analysis at a molecular level is preserving the tissue without introducing extensive denaturation of the proteins. Low temperature is a most efficient condition for the inhibition of protein denaturation, and freeze-drying offers favourable conditions for transferring proteins to a dry state with minimal denaturation. However, embedding the dried tissue in plastic in the conventional way leads to extensive denaturation of the proteins, which very efficiently negates the advantages of the method. The situation becomes even worse when subjecting the tissue to freeze-substitution. To eliminate as far as possible the denaturing effect of plastic embedding, freeze-drying can be combined with low-temperature embedding in plastic. Freeze-fracturing allows a most efficient use of low temperature to reduce conformational changes in proteins. The value of the freeze-fracturing technique depends entirely on a precise knowledge of the location of the fracture planes. Since this location is not known, it must be determined by deduction. If this deduction is wrong, the method becomes misleading. Two methods which allow some testing of the correctness of the deduced location of the fracture planes are mentioned. PMID:6759657

The complex problem of a fixed-bed reactor consisting of catalytically active particles provides an exceptional opportunity of combining a wide range of NMR methods which have become available over time as tools to probe porous media. This work demonstrates the feasibility of different NMR techniques for the investigation of the intra- and interparticle pore space over length scales from nanometers up to centimeters. Many industrially relevant cracking reactions leave a coke residue on the inner surface of the porous catalyst particles so that the active sites become inaccessible to the reactants. Moreover, the pore space shrinks due to the formation of coke, thereby hindering molecular transport. The presence of the coke residue and its influence on the mobility of adsorbed fluid molecules are probed by 129Xe spectroscopy, NMR cryoporometry, relaxation dispersion measurements, and investigations of the reduced diffusivity in the intraporous space. The voids surrounding the random arrangement of catalyst pellets represent another pore space of much larger dimensions, the properties of which can be more directly investigated by mapping the fluid density and the velocity distribution from velocity-encoded imaging. Propagator representations averaged over large sample volumes are discussed and compared to velocity images obtained in selected axial slices of the reactor. PMID:12850717

Molecular signalling in living cells occurs at low copy numbers and is thereby inherently limited by the noise imposed by thermal diffusion. The precision at which biochemical receptors can count signalling molecules is intimately related to the noise correlation time. In addition to passive thermal diffusion, messenger RNA and vesicle-engulfed signalling molecules can transiently bind to molecular motors and are actively transported across biological cells. Active transport is most beneficial when trafficking occurs over large distances, for instance up to the order of 1 metre in neurons. Here we explain how intermittent active transport allows for faster equilibration upon a change in concentration triggered by biochemical stimuli. Moreover, we show how intermittent active excursions induce qualitative changes in the noise in effectively one-dimensional systems such as dendrites. Thereby they allow for significantly improved signalling precision in the sense of a smaller relative deviation in the concentration read-out by the receptor. On the basis of linear response theory we derive the exact mean field precision limit for counting actively transported molecules. We explain how intermittent active excursions disrupt the recurrence in the molecular motion, thereby facilitating improved signalling accuracy. Our results provide a deeper understanding of how recurrence affects molecular signalling precision in biological cells and novel medical-diagnostic devices.
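A minimal two-state random walk (an illustrative sketch with hypothetical parameters, not the authors' model) shows why intermittent active excursions speed up equilibration over large distances: diffusive first-passage times grow quadratically with distance, while motor-driven runs cover it ballistically.

```python
import random

def arrival_time(L, D, v, k_on, k_off, dt, rng):
    # 1D walker starting at a reflecting boundary (x = 0), absorbed at
    # the target x = L; it switches stochastically between passive
    # diffusion and ballistic motor-driven runs toward the target.
    x, t, active = 0.0, 0.0, False
    step = (2.0 * D * dt) ** 0.5
    while x < L:
        if active:
            x += v * dt
            if rng.random() < k_off * dt:
                active = False
        else:
            x += rng.gauss(0.0, step)
            if rng.random() < k_on * dt:
                active = True
        if x < 0.0:
            x = -x  # reflect at the origin
        t += dt
    return t

rng = random.Random(7)
trials = 30
# Pure diffusion (binding rate zero) versus intermittent active transport.
passive = sum(arrival_time(10.0, 1.0, 5.0, 0.0, 0.0, 0.01, rng)
              for _ in range(trials)) / trials
mixed = sum(arrival_time(10.0, 1.0, 5.0, 1.0, 0.1, 0.01, rng)
            for _ in range(trials)) / trials
```

With these (hypothetical) rates the mean arrival time of the intermittently transported walker is far below the diffusive value of roughly L²/2D, mirroring the faster equilibration discussed above.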

A general method useful in molecular electronics design is developed that integrates modelling on the nano-scale (using quantum-chemical software) and on the micro-scale (using finite-element methods). It is applied to the design of an n-bit shift register memory that could conceivably be built using accessible technologies. To achieve this, the entire complex structure of the device would be built to atomic precision using feedback-controlled lithography to provide atomic-level control of silicon devices, controlled wet-chemical synthesis of molecular insulating pillars above the silicon, and controlled wet-chemical self-assembly of modular molecular devices to these pillars that connect to external metal electrodes (leads). The shift register consists of n connected cells that read data from an input electrode, pass it sequentially between the cells under the control of two external clock electrodes, and deliver it finally to an output device. The proposed cells are trimeric oligoporphyrin units whose internal states are manipulated to provide functionality, covalently connected to other cells via dipeptide linkages. Signals from the clock electrodes are conveyed by oligoporphyrin molecular wires, and μ-oxo porphyrin insulating columns are used as the supporting pillars. The developed multiscale modelling technique is applied to determine the characteristics of this molecular device, with, in particular, utilization of the inverted region for molecular electron-transfer processes shown to facilitate latching and control at exceptionally low energy cost per logic operation compared with standard CMOS shift-register technology.
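The cell-to-cell data handoff described above can be modelled behaviourally (a logic-level sketch only; the physical device manipulates porphyrin internal states under the two clock electrodes):

```python
def shift_register_run(bits_in, n_cells):
    # Behavioural model of an n-cell shift register: on each clock
    # cycle every cell passes its stored bit to the next cell while the
    # first cell reads a new input bit; the input stream reappears at
    # the output after an n-cycle latency.
    cells = [0] * n_cells
    out = []
    for b in bits_in + [0] * n_cells:  # pad to flush the register
        out.append(cells[-1])          # output cell delivers its bit
        cells = [b] + cells[:-1]       # shift all bits one cell down
    return out[n_cells:]               # drop the initial latency
```

For example, a 2-cell register reproduces the input sequence [1, 0, 1] at its output two cycles later.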

Many students are being left behind by an educational system that some people believe is in crisis. Improving educational outcomes will require efforts on many fronts, but a central premise of this monograph is that one part of a solution involves helping students to better regulate their learning through the use of effective learning techniques. Fortunately, cognitive and educational psychologists have been developing and evaluating easy-to-use learning techniques that could help students achieve their learning goals. In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice. To offer recommendations about the relative utility of these techniques, we evaluated whether their benefits generalize across four categories of variables: learning conditions, student characteristics, materials, and criterion tasks. Learning conditions include aspects of the learning environment in which the technique is implemented, such as whether a student studies alone or with a group. Student characteristics include variables such as age, ability, and level of prior knowledge. Materials vary from simple concepts to mathematical problems to complicated science texts. Criterion tasks include different outcome measures that are relevant to student achievement, such as those tapping memory, problem solving, and comprehension. We attempted to provide thorough reviews for each technique, so this

A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints, and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, which includes correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.

A new technique for improving the accuracy of radiance calibrations for large-area integrating-sphere sources has been investigated. Such sources are used to calibrate numerous aircraft and spacecraft remote sensing instruments. Recent measurements performed at NIST and NASA Goddard Space Flight Center have demonstrated that the uncertainty of sphere-source radiance measurements can be improved from the present 5 to 10 percent level to a 1 to 2 percent level. Silicon detectors with bandpass filters mounted in front of them and calibrated for absolute spectral responsivity can be used to confirm and to monitor the absolute radiance of a sphere source.

Global solar radiation is the driving force in the hydrological cycle, especially for evapotranspiration (ET), yet it is quite infrequently measured. This has led to a reliance on indirect estimation techniques for data-scarce regions. This study presents an improved technique that uses information from a numerical weather prediction (NWP) model (the National Center for Atmospheric Research (NCAR) Mesoscale Meteorological model version 5, MM5) to determine a cloud cover index (CI), a major factor in the attenuation of incident solar radiation. The cloud cover index, together with the atmospheric transmission factor (KT) and output from a global clear-sky solar radiation model, was then used to estimate global solar radiation for the Brue catchment in the southwest of England. The results clearly show an improvement in the estimated global solar radiation in comparison with the prevailing approaches.
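The attenuation step can be sketched with the empirical Kasten-Czeplak cloud relation (an illustrative stand-in; the study's own CI formulation derived from MM5 output is not specified in the abstract):

```python
def global_radiation(r_clear, cloud_fraction):
    # Attenuate clear-sky global radiation (W/m^2) by fractional cloud
    # cover in [0, 1] using the Kasten-Czeplak empirical relation:
    # R = R_clear * (1 - 0.75 * N^3.4). This is a common textbook form,
    # not necessarily the formulation used in the paper.
    return r_clear * (1.0 - 0.75 * cloud_fraction ** 3.4)
```

Under a clear sky the estimate reduces to the clear-sky value; under full overcast it falls to a quarter of it, and it decreases monotonically in between.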

Sparsity techniques are applied to a wide range of problems in power systems analysis. In this paper the authors propose several analytical and computational improvements in sparsity applications. A new partial matrix refactorization method and ordering algorithm are presented. The proposed method is very efficient when applied to various kinds of programs, such as on-line load flow, optimal power flow, and steady-state security analysis. The proposed methodology is applied in a fast decoupled load flow program which includes the treatment of tap violations on under-load tap changing (ULTC) transformers and reactive power generation on PV buses. The effects of the proposed improvements are tested and documented on three networks: the 118-bus IEEE test network and two utility networks with 209 and 519 buses, respectively. Keywords: sparsity technique, load flow analysis, security analysis.

Due to its retroperitoneal location, the pancreas has historically been a mysterious organ that is difficult to examine, which has complicated treatment. The discovery of anesthesia and asepsis in the mid-19th century allowed laparotomic diagnosis, which was previously only possible at autopsy. The expectations of surgery were improved by the detection of blood groups, vitamin K synthesis, and the development of intensive care units. In addition, high levels of presurgical diagnosis and an unquestionable improvement of its results were enabled by advances in laboratory methods (serum quantification of amylase and lipase, tumoral markers, genetics, and techniques for measuring exocrine pancreatic function), imaging and endoscopic modalities, and fine tuning of surgical techniques. In this article, we review the history of the main milestones that have allowed progress in all these aspects. PMID:25500002

High affinity and specific binding are cardinal properties of nucleic acids in relation to their biological function and their role in biotechnology. To this end, structural preorganization of oligonucleotides can significantly improve their binding performance, and numerous examples of this can be found in Nature as well as in artificial systems. Here we describe the production and characterization of hybrid DNA-polymer nanoparticles (oligoMIP NPs) as a system in which we have preorganized the oligonucleotide binding by molecular imprinting technology. Molecularly imprinted polymers (MIPs) are cost-effective "smart" polymeric materials capable of antibody-like detection, but characterized by superior robustness and the ability to work in extreme environmental conditions. Especially in the nanoparticle format, MIPs are regarded as one of the most suitable alternatives to biological antibodies due to their selective molecular recognition properties, improved binding kinetics as well as size and dispersibility. Nonetheless, there have been very few attempts at DNA imprinting in the past due to the structural complexity associated with these templates. By introducing modified thymine bases into the oligonucleotide sequences, which allow covalent bonds to be established between the DNA and the polymer, we demonstrate that such hybrid oligoMIP NPs specifically recognize their target DNA, and that the unique strategy of incorporating the complementary DNA strands as "preorganized selective monomers" improves the recognition properties without affecting the NPs' physical properties such as size, shape or dispersibility. PMID:26509192

This Exploratory LDRD aimed to develop molecular breeding methodology for biofuel algal strain improvement for applications in waste-to-energy/commodity conversion technologies. Genome shuffling technologies, specifically protoplast fusion, are readily available for the rapid production of genetic hybrids for trait improvement and have been used successfully in bacteria, yeast, plants and animals. However, genome fusion has not been developed for exploiting the remarkable untapped potential of eukaryotic microalgae for large-scale integrated bio-conversion and upgrading of waste components to valued commodities, fuel and energy. The proposed molecular breeding technology is effectively sexual reproduction in algae, though compared with traditional breeding the molecular route is rapid, high-throughput, and permits selection/improvement of complex traits which cannot be accomplished by traditional genetics. Genome fusion technologies are the cutting edge of applied biotechnology. The goals of this Exploratory LDRD were to 1) establish reliable methodology for protoplast production among diverse microalgal strains, and 2) demonstrate genome fusion for hybrid strain production using a single gene-encoded trait as a proof of concept.

An improved electron ionization (EI) ion source is described, based on the modification of a Brink-type EI ion source through the addition of a second cage with a fine mesh outside the ion chamber. The added outer cage shields the inner ion cage (ionization zone) against the penetration of the filament and electron repeller potentials, and thus provides ions with a narrower ion energy distribution, hence improved ion-beam quality. The near-zero electric field inside the ion cage enables improved filtration (rejection) of ions produced from vacuum background compounds, based on the difference in ion energies of beam and background species. The improved background ion filtration and ion-beam quality resulted in 2.6 times higher mass spectrometric ion signal, combined with 6.4 times better signal-to-noise ratio, in comparison with the same ion source having a single cage. The dual cage ion source further provides a smaller or no reduction of the electron emission current upon lowering the electron energy for achieving softer EI and/or electron attachment ionization. It also improves the long-term mass spectral and signal reproducibility and enables fast, automated change of the electron energy. Consequently, the dual cage EI ion source is especially effective for use with gas chromatography mass spectrometry with supersonic molecular beams (SMB), liquid chromatography mass spectrometry with SMB, ion guns with SMB, and any other experimental systems with SMB or nonthermal molecular beams.

A maximum return of science and products with a minimum expenditure of time and resources is a major goal of mission payload integration. A critical component in successful mission payload integration, then, is the acquisition and analysis of experiment requirements from the principal investigator and payload element developer teams. One effort to use artificial intelligence techniques to improve the acquisition and analysis of experiment requirements within the payload integration process is described.

The self diagnostic accelerometer (SDA) is a sensor system designed to actively monitor the health of an accelerometer. In this case an accelerometer is considered healthy if it can be determined that it is operating correctly and its measurements may be relied upon. The SDA system accomplishes this by actively monitoring the accelerometer for a variety of failure conditions including accelerometer structural damage, an electrical open circuit, and most importantly accelerometer detachment. In recent testing of the SDA system in emulated engine operating conditions it has been found that a more robust signal processing technique was necessary. An improved accelerometer diagnostic technique and test results of the SDA system utilizing this technique are presented here. Furthermore, the real time, autonomous capability of the SDA system to concurrently compensate for effects from real operating conditions such as temperature changes and mechanical noise, while monitoring the condition of the accelerometer health and attachment, will be demonstrated.

Eddy current techniques are extremely sensitive to the presence of axial cracks in nuclear power plant steam generator tube walls, but they are equally sensitive to the presence of dents, fretting, support structures, corrosion products, and other artifacts. Eddy current signal interpretation is further complicated by cracking geometries more complex than a single axial crack. Although there has been limited success in classifying and sizing defects through artificial neural networks, the ability to predict tubing integrity has, so far, eluded modelers. In large part, this lack of success stems from an inability to distinguish crack signals from those arising from artifacts. We present here a new signal processing technique that deconvolves raw eddy current voltage signals into separate signal contributions from different sources, which allows signals associated with a dominant crack to be identified. The signal deconvolution technique, combined with artificial neural network modeling, significantly improves the prediction of tube burst pressure from bobbin-coil eddy current measurements of steam generator tubing.

This paper improves an inverted decoupling technique for a class of stable linear multivariable processes with multiple time delays and nonminimum-phase zeros. Two decoupling schemes are proposed based on the inverted decoupling technique. One is a developed inverted decoupling scheme, in which the decoupler is designed such that the inverted decoupling technique applies to a wider class of processes than the one introduced in the published literature. However, due to the stability issue, some multivariable processes still cannot be decoupled by the inverted decoupling structure. To solve this problem, another modified decoupling scheme with unity feedback structure is suggested for implementation. Internal Model Control (IMC) theory is applied here to design PI/PID controllers for the decoupled processes. Furthermore, in the presence of multiplicative input uncertainty, lower bounds on the control parameters are derived quantitatively to guarantee robust stability of the system. Simulations are presented to demonstrate the validity of the proposed control schemes. PMID:17343860
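For the decoupled loops, one widely used IMC-derived PI rule for a first-order-plus-dead-time process can be sketched as follows (an illustrative textbook rule with hypothetical process parameters; the paper's exact tuning may differ):

```python
def imc_pi(K, tau, theta, lam):
    # IMC-derived PI tuning for G(s) = K*exp(-theta*s)/(tau*s + 1):
    # Kc = tau / (K * (lam + theta)), Ti = tau, where lam is the IMC
    # filter time constant, the single knob trading speed for robustness.
    Kc = tau / (K * (lam + theta))
    Ti = tau
    return Kc, Ti

# Hypothetical decoupled-loop model: gain 2, time constant 10 s, delay 1 s.
Kc, Ti = imc_pi(K=2.0, tau=10.0, theta=1.0, lam=3.0)
```

Increasing lam detunes the loop, which is how robustness margins against the multiplicative input uncertainty mentioned above are typically bought.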

A method for estimating gas permeability through a zeolite membrane, using a molecular simulation technique and a theoretical permeation model, is presented. The estimate of permeability is derived from a combination of an adsorption isotherm and a self-diffusion coefficient based on the adsorption-diffusion model. The adsorption isotherm and self-diffusion coefficients needed for the estimation were calculated using conventional Monte Carlo and molecular dynamics simulations. The calculated self-diffusion coefficient was converted to the mutual diffusion coefficient and the permeability estimated using the Fickian equation. The method was applied to the prediction of permeabilities of methane and ethylene in silicalite at 301 K. Calculated permeabilities were larger than the experimental values by more than an order of magnitude. However, the anisotropic permeability was consistent with the experimental data and the results obtained using a grand canonical ensemble molecular dynamics technique (Pohl et al., Mol. Phys. 1996, 89(6), 1725--1731).
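The adsorption-diffusion chain of calculations can be sketched as follows (an illustrative implementation with hypothetical parameters: a Langmuir isotherm with a Darken-type correction from self- to transport diffusivity, evaluated at the mean occupancy; the paper's exact conversion may differ):

```python
def langmuir(p, q_sat, b):
    # Langmuir adsorption isotherm: loading q at pressure p.
    return q_sat * b * p / (1.0 + b * p)

def transport_diffusivity(d_self, theta):
    # Darken correction for a Langmuir isotherm:
    # D_t = D_s * dln(p)/dln(q) = D_s / (1 - theta).
    return d_self / (1.0 - theta)

def permeability(d_self, q_sat, b, p_feed, p_perm, thickness):
    # Adsorption-diffusion model: flux = D_t * (q_feed - q_perm)/L,
    # permeability = flux / (p_feed - p_perm). Units are left abstract.
    q_feed = langmuir(p_feed, q_sat, b)
    q_perm = langmuir(p_perm, q_sat, b)
    theta = 0.5 * (q_feed + q_perm) / q_sat  # mean occupancy (simplification)
    d_t = transport_diffusivity(d_self, theta)
    return d_t * (q_feed - q_perm) / (thickness * (p_feed - p_perm))

perm = permeability(d_self=1e-9, q_sat=2.0, b=0.5,
                    p_feed=2.0, p_perm=0.1, thickness=1e-5)
```

The occupancy-dependent correction factor is what makes the estimated permeability pressure-dependent rather than a single material constant.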

The increasing interest in molecular biology diagnostics is a result of the tremendous gain of scientific knowledge in genetics, made possible particularly since the introduction of amplification techniques. High expectations have been placed on genetic testing, and the number of laboratories now using the relevant technology is rapidly increasing, resulting in an obvious need for standardization and definition of laboratory organization. This communication is an effort towards that end. We address aspects that should be considered when structuring a new molecular diagnostic laboratory, and we discuss individual preanalytical and analytical procedures, from sampling to evaluation of assay results. In addition, different means of controlling contamination are discussed. Because the methodology is in constant change, no general standards can be defined. Accordingly, this publication is intended to serve as a recommendation for good laboratory practice and internal quality control and as a guide to troubleshooting, primarily in amplification techniques. PMID:9550553

The objective of the project is to increase oil production and reserves by the use of improved reservoir characterization and completion techniques in the Uinta Basin, Utah. To accomplish this objective, a two-year geologic and engineering characterization of the Bluebell field was conducted. The study evaluated surface and subsurface data, currently used completion techniques, and common production problems. It was determined that advanced cased- and open-hole logs could be effective in determining productive beds and that stage-interval (about 500 ft [150 m] per stage) and bed-scale isolation completion techniques could result in improved well performance. In the first demonstration well (the Michelle Ute well discussed in the previous technical report), dipole shear anisotropy and dual-burst thermal decay time (TDT) logs were run before the treatment, and an isotope tracer log was run after it. The logs were very helpful in characterizing the remaining hydrocarbon potential in the well. However, a mechanical failure resulted in a poor recompletion, and oil production from the well did not improve significantly.

Data mining techniques depend strongly on how the data are represented and how distance between samples is measured. High-dimensional data often contain a large number of irrelevant dimensions (features) for a given query. These features act as noise and obfuscate relevant information. Unsupervised approaches to mine such data require distance measures that can account for feature relevance. Molecular dynamics simulations produce high-dimensional data sets describing molecules observed in time. Here, we propose to globally or locally weight simulation features based on effective rates. This emphasizes, in a data-driven manner, slow degrees of freedom that often report on the metastable states sampled by the molecular system. We couple this idea to several unsupervised learning protocols. Our approach unmasks slow side chain dynamics within the native state of a miniprotein and reveals additional metastable conformations of a protein. The approach can be combined with most algorithms for clustering or dimensionality reduction. PMID:26574336
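The core idea above, weighting features by how slowly they decorrelate so that slow degrees of freedom dominate the distance metric, can be sketched as follows. This is a simplified illustration, not the paper's estimator: it uses a crude integrated autocorrelation time as the "effective rate" proxy and a global (not local) weighting.

```python
import numpy as np

def autocorr_time(x, max_lag=50):
    """Crude integrated autocorrelation time of one feature's time series."""
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    if var == 0.0:
        return 1.0
    tau = 1.0
    for lag in range(1, max_lag):
        c = np.dot(x[:-lag], x[lag:]) / ((len(x) - lag) * var)
        if c <= 0.0:
            break                      # stop summing once correlation dies out
        tau += 2.0 * c
    return tau

def rate_weights(traj):
    """Weight each feature (column) by its autocorrelation time, so slow
    degrees of freedom dominate the metric; normalized to sum to 1."""
    taus = np.array([autocorr_time(traj[:, j]) for j in range(traj.shape[1])])
    return taus / taus.sum()

def weighted_distance(a, b, w):
    """Feature-weighted Euclidean distance, usable in any clustering method."""
    return np.sqrt(np.sum(w * (a - b) ** 2))

# Toy trajectory: feature 0 evolves slowly, feature 1 is fast noise
rng = np.random.default_rng(0)
slow = 0.01 * np.cumsum(rng.normal(size=2000))
fast = rng.normal(size=2000)
w = rate_weights(np.column_stack([slow, fast]))
```

Because `weighted_distance` is an ordinary metric, it can be dropped into most clustering or dimensionality-reduction protocols, as the abstract notes.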

Use of NVM (non-volatile memory) devices such as ReRAM (resistive RAM) and STT-RAM (spin transfer torque RAM) for designing on-chip caches holds the promise of providing a high-density, low-leakage alternative to SRAM. However, the low write endurance of NVMs, along with the write variation introduced by existing cache management schemes, may significantly limit the lifetime of NVM caches. We present LastingNVCache, a technique for improving the lifetime of NVM caches by mitigating intra-set write variation. LastingNVCache is based on the key idea that periodically flushing a frequently written data-item forces it to reload into a cold block in its set the next time it is accessed. Future writes to that data-item are thereby redirected from a hot block to a cold block, which improves the cache lifetime. Microarchitectural simulations show that LastingNVCache provides 6.36X, 9.79X, and 10.94X improvements in lifetime for single, dual, and quad-core systems, respectively. Its implementation overhead is small, and it outperforms a recently proposed technique for improving the lifetime of NVM caches.
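The flush-and-redirect idea can be illustrated with a behavioural toy model. This is not the paper's microarchitecture; the set size, flush period, and replacement rule are illustrative assumptions chosen only to show how periodic flushing levels intra-set writes.

```python
class ToyNVSet:
    """Toy cache set: every `flush_period` writes, the most-written valid
    block is invalidated, so its data-item reloads into the least-written
    (cold) way on its next access."""
    def __init__(self, ways=4, flush_period=16):
        self.ways, self.flush_period = ways, flush_period
        self.tags = [None] * ways     # data-item resident in each way
        self.writes = [0] * ways      # physical writes absorbed by each way
        self.since_flush = 0

    def write(self, tag):
        if tag in self.tags:
            way = self.tags.index(tag)
        else:
            # fill: prefer an invalid way, coldest (fewest writes) first
            empties = [i for i, t in enumerate(self.tags) if t is None]
            pool = empties if empties else range(self.ways)
            way = min(pool, key=lambda i: self.writes[i])
            self.tags[way] = tag
        self.writes[way] += 1
        self.since_flush += 1
        if self.since_flush >= self.flush_period:
            valid = [i for i, t in enumerate(self.tags) if t is not None]
            hot = max(valid, key=lambda i: self.writes[i])
            self.tags[hot] = None     # flush the hottest block
            self.since_flush = 0

s = ToyNVSet()
for _ in range(200):                  # one heavily written data-item
    s.write("hot")
```

Without flushing, all 200 writes would land on a single way; with it, the writes spread across the four ways, which is exactly the intra-set write-variation reduction the technique targets.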

We present new results in the development of large silicon grisms (~25 x 25 mm2). Using photolithography, new etching techniques, and post-processing steps, we have obtained greatly improved results. Experiments were performed adding ammonium persulfate to the TMAH etching solution. This new method reduced the surface roughness by up to ~30% (from 32 nm rms to 22 nm), eliminated all hillock formation, and helped maintain constant etch rates. Combined with improved lithography processes, this has reduced the large-scale defects to fewer than a few per square inch. Other experiments in post-processing were also carried out. Thin layers of dry silicon dioxide were repeatedly grown on and removed from the grating surface, producing an additional improvement of ~20% in the surface roughness (from 22 nm rms to 16 nm), for a ~50% total improvement over previous results. Optical testing of a grating with 16 nm rms roughness shows less than 1% integrated scattered light at 0.6328 um. We are applying the new techniques in etching an 80 x 40 mm2 grating on a ~30 mm thick substrate to make an anamorphic silicon immersion grating, which can provide diffraction-limited spectral resolution (R = 200,000) at 2.2 microns.

This paper presents techniques for deriving motion vectors (MVs) at the video decoder side to improve the coding efficiency of B pictures. Because the MVs are derived at the decoder, their transmission from the encoder to the decoder can be skipped, yielding better coding efficiency. The proposed techniques derive block-based MVs at the decoder by exploiting the temporal correlation among the available pixels in previously decoded reference pictures. Decoder-side MV derivation can be added as a coding mode candidate that the encoder may select during mode decision to better trade off rate-distortion performance. Experiments on the ITU-T/VCEG Key Technology Area (KTA) reference software platform demonstrate an overall BD-rate improvement of about 7% for the hierarchical IbBbBbBbP coding structure under the common test conditions of the January 2010 joint ISO/MPEG and ITU-T call for proposals for the new video coding technology.
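One common realization of decoder-side MV derivation is mirror (bilateral) matching between the two reference pictures that bracket the B picture; a toy sketch follows. The linear-motion assumption, block size, and search range here are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def derive_mv(ref0, ref1, top, left, bsize=8, search=4):
    """Pick the displacement (dy, dx) whose mirrored blocks in the two
    reference pictures match best (minimum SAD), assuming linear motion
    through the current B picture, which sits midway between them."""
    best_sad, best_mv = None, (0, 0)
    h, w = ref0.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = top + dy, left + dx    # position in the earlier reference
            y1, x1 = top - dy, left - dx    # mirrored position in the later one
            if not (0 <= y0 <= h - bsize and 0 <= x0 <= w - bsize):
                continue
            if not (0 <= y1 <= h - bsize and 0 <= x1 <= w - bsize):
                continue
            b0 = ref0[y0:y0 + bsize, x0:x0 + bsize]
            b1 = ref1[y1:y1 + bsize, x1:x1 + bsize]
            sad = np.abs(b0 - b1).sum()     # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

# Toy check: a bright square moving 2 px right per picture; the derived
# displacement toward the earlier reference should be (0, -2).
ref0 = np.zeros((32, 32))
ref0[8:16, 6:14] = 255.0     # earlier reference picture
ref1 = np.zeros((32, 32))
ref1[8:16, 10:18] = 255.0    # later reference picture
mv = derive_mv(ref0, ref1, top=8, left=8)
```

Since both the encoder and the decoder run this same search on already-decoded pictures, they arrive at the same MV without any bits being transmitted, which is the source of the coding gain described above.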

We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), which consists of fixing the last observed data point on the LS extrapolation curve; the curve customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed: the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles is estimated using data of longer time span, and the other parameters are then estimated using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have strengthened short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of the AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of the AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
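The endpoint-fixing idea can be written as a constrained least-squares fit: the constraint that the curve pass through the last observation eliminates one parameter. The sketch below uses a line plus a single sinusoid of known period as the model; the synthetic series and its parameters are illustrative, not EOP data.

```python
import numpy as np

def fit_fixed_endpoint(t, y, omega):
    """Least-squares fit of a line plus one sinusoid,
        f(t) = a0 + a1*t + b*sin(omega*t) + c*cos(omega*t),
    constrained so the curve passes exactly through the last observation
    (t[-1], y[-1]); the constraint eliminates the offset a0."""
    tN, yN = t[-1], y[-1]
    # basis with each function's endpoint value subtracted, so every
    # column (and hence the fitted model) vanishes at t = tN
    B = np.column_stack([
        t - tN,
        np.sin(omega * t) - np.sin(omega * tN),
        np.cos(omega * t) - np.cos(omega * tN),
    ])
    (a1, b, c), *_ = np.linalg.lstsq(B, y - yN, rcond=None)
    a0 = yN - a1 * tN - b * np.sin(omega * tN) - c * np.cos(omega * tN)
    return a0, a1, b, c

def predict(params, t, omega):
    a0, a1, b, c = params
    return a0 + a1 * t + b * np.sin(omega * t) + c * np.cos(omega * t)

# Synthetic series: trend plus one oscillation of known period
omega = 2 * np.pi / 30.0
t = np.arange(120.0)
y = 2.0 + 0.01 * t + 0.5 * np.sin(omega * t) + 0.2 * np.cos(omega * t)
params = fit_fixed_endpoint(t, y, omega)
```

Extrapolating with `predict` beyond `t[-1]` then starts exactly at the last observed value, which is the point of the technique.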

Total Quality Management (TQM) is a cooperative form of doing business that relies on the talents of everyone in an organization to continually improve quality and productivity, using teams and an assortment of statistical and measurement tools. The objective of the activities described in this paper was to implement effective improvement tools and techniques in order to build work processes which support good management and technical decisions and actions which are crucial to the success of the ACRV project. The objectives were met by applications in both the technical and management areas. The management applications involved initiating focused continuous improvement projects with widespread team membership. The technical applications involved applying proven statistical tools and techniques to the technical issues associated with the ACRV Project. Specific activities related to the objective included working with a support contractor team to improve support processes, examining processes involved in international activities, a series of tutorials presented to the New Initiatives Office and support contractors, a briefing to NIO managers, and work with the NIO Q+ Team. On the technical side, work included analyzing data from the large-scale W.A.T.E.R. test, landing mode trade analyses, and targeting probability calculations. The results of these efforts will help to develop a disciplined, ongoing process for producing fundamental decisions and actions that shape and guide the ACRV organization.

The aim of this study was to evaluate the impact of image fusion techniques on vegetation classification accuracies in a complex wetland system. Fusion of panchromatic (PAN) and multispectral (MS) Quickbird satellite imagery was undertaken using four image fusion techniques: Brovey, hue-saturation-value (HSV), principal components (PC), and Gram-Schmidt (GS) spectral sharpening. These four fusion techniques were compared in terms of their mapping accuracy to a normal MS image using maximum-likelihood classification (MLC) and support vector machine (SVM) methods. The Gram-Schmidt fusion technique yielded the highest overall accuracy and kappa value with both the MLC (67.5% and 0.63, respectively) and SVM methods (73.3% and 0.68, respectively). This compared favorably with the accuracies achieved using the MS image. Overall, improvements of 4.1%, 3.6%, 5.8%, 5.4%, and 7.2% in overall accuracy were obtained with SVM over MLC for the Brovey, HSV, GS, PC, and MS images, respectively. Visual and statistical analyses of the fused images showed that the Gram-Schmidt spectral sharpening technique preserved spectral quality much better than the principal components, Brovey, and HSV fusions. Other factors, such as the growth stage of species and the presence of extensive background water in many parts of the study area, also had an impact on classification accuracies.
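Of the four fusion techniques compared, the Brovey transform is the simplest to state explicitly and makes the pan-sharpening idea concrete; a minimal sketch follows (it assumes the MS bands have already been resampled to the PAN grid, and the toy data are random placeholders).

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey transform pan-sharpening: scale each multispectral band by the
    ratio of the panchromatic band to the sum of the MS bands, so the fused
    image inherits PAN's spatial detail while preserving MS band ratios.
    ms:  (bands, H, W) multispectral image, resampled to the PAN grid
    pan: (H, W) panchromatic image"""
    denom = ms.sum(axis=0)
    denom = np.where(denom == 0, 1e-12, denom)   # guard against division by zero
    return ms * (pan / denom)

# Toy demo with random placeholder imagery
rng = np.random.default_rng(3)
ms = rng.uniform(0.1, 1.0, size=(3, 4, 4))
pan = rng.uniform(0.5, 2.0, size=(4, 4))
fused = brovey_fusion(ms, pan)
```

Two properties follow directly from the formula: the fused bands sum to the PAN image (spatial detail injected), and the per-pixel band ratios of the MS input are preserved (spectral shape kept), which is why spectral-quality comparisons like those above are meaningful.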

The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold, and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated, or simply imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependence on an optimal channel model. In particular, we propose two weighted least squares techniques, based on the standard hyperbolic and circular positioning algorithms, that explicitly account for the accuracies of the different measurements to obtain a better estimate of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network, and a Bluetooth network). The algorithms not only produce better localization results with very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling. PMID:22164092
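A weighted least-squares circular-positioning step can be sketched as follows. This is one textbook-style linearization (subtracting the first anchor's range equation), not necessarily the paper's exact formulation; the anchor layout and weights in the demo are illustrative.

```python
import numpy as np

def wls_circular(anchors, ranges, weights):
    """Weighted least-squares circular positioning: linearize the range
    equations |p - a_i|^2 = d_i^2 by subtracting the first anchor's equation,
    then solve the resulting linear system with per-measurement weights
    (e.g. inverse variances of the RSS-derived distances)."""
    a = np.asarray(anchors, dtype=float)
    d = np.asarray(ranges, dtype=float)
    w = np.sqrt(np.asarray(weights, dtype=float)[1:])  # rows use anchors 2..N
    A = 2.0 * (a[1:] - a[0])
    b = d[0]**2 - d[1:]**2 + (a[1:]**2).sum(axis=1) - (a[0]**2).sum()
    pos, *_ = np.linalg.lstsq(A * w[:, None], b * w, rcond=None)
    return pos

# Demo: four anchors at the corners of a 10 m square, noise-free ranges
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
pos = wls_circular(anchors, ranges, np.ones(4))
```

In a real deployment, the weights would grade how much each RSS-derived distance is trusted, so measurements taken through poorly modeled channel conditions contribute less to the solution.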

Current planetary protection policies require that spacecraft targeted to sensitive solar system bodies be assembled and readied for launch in controlled cleanroom environments. A better understanding of the distribution and frequency at which high-risk contaminant microbes are encountered on spacecraft surfaces would significantly aid in assessing the threat of forward contamination. However, despite a growing understanding of the diverse microbial populations present in cleanrooms, less abundant microbial populations are probably not adequately taken into account due to technological limitations. This novel approach encompasses a wide spectrum of microbial species and will represent the true picture of spacecraft cleanroom-associated microbial diversity. All of the current microbial diversity assessment techniques are based on an initial PCR amplification step. However, a number of factors are known to bias PCR amplification and jeopardize the true representation of bacterial diversity. PCR amplification of a minor template appears to be suppressed by the amplification of a more abundant template. It is widely acknowledged among environmental molecular microbiologists that genetic biosignatures identified from an environment only represent the most dominant populations. This technological bottleneck overlooks the presence of the less abundant minority populations and may underestimate their role in ecosystem maintenance. DNA intercalating agents such as propidium monoazide (PMA) covalently bind to DNA molecules upon photolysis using visible light, making the DNA unavailable to the polymerase enzyme during polymerase chain reaction (PCR). Environmental DNA samples will be treated with a suboptimal PMA concentration, enough to intercalate with 90-99% of the total DNA. The probability of PMA binding with DNA from abundant bacterial species will be much higher than binding with DNA from less abundant species. This will increase the relative DNA concentration of

Water clusters formed in a molecular beam are predissociated by tunable, pulsed, infrared radiation in the frequency range 2900-3750 cm^-1. The recoiling fragments are detected off axis from the molecular beam using a rotatable mass spectrometer. Arguments are presented which show that the measured frequency-dependent signal at a fixed detector angle is proportional to the absorption spectrum of the clusters. It is found that the spectra of clusters containing three or more water molecules are remarkably similar to the liquid-phase spectrum. Dynamical information on the predissociation process is obtained from the velocity distribution of the fragments. An upper limit to the excited vibrational state lifetime of ~1 microsecond is observed for the results reported here. The most probable dissociation process concentrates the available excess energy into the internal motions of the fragment molecules. Both the time scale and translational energy distribution are consistent with the qualitative predictions of current theoretical models for cluster predissociation. From adiabatic dissociation trajectories and Monte Carlo simulations it is seen that the strong coupling present in the water polymers probably invalidates the simpler "diatomic" picture formulations of cluster predissociation. Instead, the energy can be extensively shared among the intermolecular motions in the polymer before dissociation. Comparison between current intermolecular potentials describing liquid water and the observed frequencies is made in the normal mode approximation. The inability of any potential to predict the gross spectral features (the number of bands and their observed frequency shift from the gas-phase monomer) suggests that substantial improvements in the potential energy functions are possible, but that more accurate methods of solving the vibrational wave equation are necessary before a proper explanation of the spectral fine structure is possible. The observed differences

Methane (CH4) is the third most important greenhouse gas after water vapour and carbon dioxide (CO2). Since the industrial revolution the mixing ratio of CH4 in the atmosphere has risen to ~1800 ppb, a value never reached within the last 800 000 years. This CH4 increase can only be assessed in comparison with its natural changes in the past. Firn air and air enclosures in polar ice cores represent the only direct paleoatmospheric archive. The latter show that atmospheric CH4 concentrations changed in concert with northern hemisphere temperature during both glacial/interglacial transitions and rapid climate changes (Dansgaard-Oeschger events). Since the different sources of atmospheric methane exhibit distinct carbon and hydrogen isotopic compositions (δ13CH4 and δD(CH4)), reconstructions of these parameters from ice cores allow individual CH4 source/sink changes to be constrained. δD(CH4) also reflects water cycle changes, as hydrogen from precipitation is traced into methane produced in wetland/thermokarst/permafrost systems (Bock et al. 2010, Science). Here we present an updated high-precision on-line gas chromatography pyrolysis isotope-ratio monitoring mass spectrometry technique (GC/P/irmMS) for the analysis of δD(CH4) extracted from ice cores. It is based on earlier developments (Bock et al. 2010, RCM) and improves on them in terms of sample size and precision. The main achievement is post-pyrolysis trapping (PPT) of molecular hydrogen after the high-temperature conversion of methane, leading to a better signal-to-noise ratio. Air from only 350 g of ice with CH4 concentrations as low as 350 ppb can now be measured with a precision of ~2‰. Such ice samples contain only approximately 30 mL of air and less than 1 nmol CH4. The new method was applied to ice samples from the EDML and EDC ice cores (European Project for Ice Coring in Antarctica, Dronning Maud Land, Dome Concordia). We present the first δD(CH4) records covering the penultimate termination and interglacial from EDML

A new optical transmission technique for black carbon (BC) analysis was developed to minimize interferences due to scattering effects in filter samples. A standard thermal analysis method (VDI, 1999) is used to link light attenuation by the filter samples to elemental carbon (EC) concentration. Scattering effects are minimized by immersion of the filters in oil of a similar refractive index, as is often done for microscopy purposes. Light attenuation was measured using both a white light source and a red LED at 650 nm. The usual increase in the overestimation of BC concentrations with decreasing BC amount in filter samples was considerably reduced. Some effects of BC properties (e.g. fractal dimension, microstructure, and size distribution) on the specific attenuation coefficient B_ATN, however, are still present for the treated samples. B_ATN was found to be close to 1 m^2 g^-1 for dry-dispersed industrial BC and 7 m^2 g^-1 for nebulized BC. Good agreement was found between the oil immersion, integrating sphere, and polar photometer techniques and Mie calculations. The average specific attenuation coefficient of ambient samples in oil varied between 7 and 11 m^2 g^-1 for white light and between 6 and 9 m^2 g^-1 for red light (LED). B_ATN showed much less site-to-site variation for the treated than for the untreated samples. The oil immersion technique also improved the correlation with thermally analyzed EC. This new immersion technique therefore represents a considerable improvement over conventional optical transmission techniques and may serve as a simple, fast, and cost-effective alternative to thermal methods.

Spatial filtering and directional discrimination have been shown to be an effective pre-processing approach for noise reduction in microphone array systems. In dual-microphone hearing aids, fixed and adaptive beamforming techniques are the most common solutions for enhancing the desired speech and rejecting unwanted signals captured by the microphones. In fact, beamformers are widely utilized in systems where the spatial properties of the target source (usually in front of the listener) are assumed to be known. In this dissertation, some dual-microphone coherence-based speech enhancement techniques applicable to hearing aids are proposed. All proposed algorithms operate in the frequency domain and (like traditional beamforming techniques) are purely based on the spatial properties of the desired speech source; they do not require any knowledge of noise statistics to calculate the noise reduction filter. This benefit gives our algorithms the ability to address adverse noise conditions, such as situations where interfering talkers speak simultaneously with the target speaker. In such cases, adaptive beamformers lose their effectiveness in suppressing interference, since the noise (reference) channel cannot be built and updated accordingly. This difference is the main advantage of the techniques proposed in the dissertation over traditional adaptive beamformers. Furthermore, since the suggested algorithms are independent of noise estimation, they offer significant improvement in scenarios where the power level of the interfering sources is much higher than that of the target speech. The dissertation also shows that the premise behind the proposed algorithms can be extended and applied to binaural hearing aids. The main purpose of the investigated techniques is to enhance the intelligibility level of speech, measured through subjective listening tests with normal hearing and cochlear implant listeners. However, the improvement in quality of the output speech achieved by the
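A minimal member of this family of coherence-based filters can be sketched as follows. The frame sizes, smoothing constant, and the MSC-used-directly-as-gain rule are illustrative choices for the sketch, not the dissertation's algorithms.

```python
import numpy as np

def coherence_gain(x1, x2, frame=256, hop=128, alpha=0.8, floor=0.1):
    """Minimal dual-microphone coherence filter: recursively smooth the auto-
    and cross-power spectra, form the magnitude-squared coherence (MSC) per
    frequency bin, and apply it directly as a spectral gain. A coherent
    frontal talker gives MSC near 1 (bins kept); diffuse or uncorrelated
    noise gives low MSC (bins attenuated). Returns enhanced frames of x1."""
    win = np.hanning(frame)
    p11 = np.full(frame // 2 + 1, 1e-8)
    p22 = np.full(frame // 2 + 1, 1e-8)
    p12 = np.zeros(frame // 2 + 1, dtype=complex)
    frames = []
    for start in range(0, len(x1) - frame + 1, hop):
        X1 = np.fft.rfft(win * x1[start:start + frame])
        X2 = np.fft.rfft(win * x2[start:start + frame])
        p11 = alpha * p11 + (1 - alpha) * np.abs(X1) ** 2
        p22 = alpha * p22 + (1 - alpha) * np.abs(X2) ** 2
        p12 = alpha * p12 + (1 - alpha) * X1 * np.conj(X2)
        msc = np.abs(p12) ** 2 / (p11 * p22)        # bounded in [0, 1]
        frames.append(np.fft.irfft(np.maximum(msc, floor) * X1, n=frame))
    return frames

# Coherent demo input: identical signals at both microphones pass through
tone = np.sin(2 * np.pi * 0.05 * np.arange(4096))
enhanced = coherence_gain(tone, tone)
```

Note how no noise statistics enter the gain: the filter depends only on how correlated the two channels are, which is the property the abstract highlights as the advantage over adaptive beamformers.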

Bright, long-lasting and non-phototoxic organic fluorophores are essential to the continued advancement of biological imaging. Traditional approaches towards achieving photostability, such as the removal of molecular oxygen and the use of small-molecule additives in solution, suffer from potentially toxic side effects, particularly in the context of living cells. The direct conjugation of small-molecule triplet state quenchers, such as cyclooctatetraene (COT), to organic fluorophores has the potential to bypass these issues by restoring reactive fluorophore triplet states to the ground state through intra-molecular triplet energy transfer. Such methods have enabled marked improvement in cyanine fluorophore photostability spanning the visible spectrum. However, the generality of this strategy to chemically and structurally diverse fluorophore species has yet to be examined. Here, we show that the proximal linkage of COT increases the photon yield of a diverse range of organic fluorophores widely used in biological imaging applications, demonstrating that the intra-molecular triplet energy transfer mechanism is a potentially general approach for improving organic fluorophore performance and photostability. PMID:26700693

Concepts of neuronal damage and repair date back to ancient times. Research on this topic has been growing ever since, and numerous nerve repair techniques have evolved throughout the years. Owing to our greater understanding of nerve injuries and repair, we now distinguish between the central and peripheral nervous systems. In this review, we have chosen to concentrate on peripheral nerve injuries, in particular those involving the hand. There are no reviews bringing together and summarizing the latest research evidence concerning the most up-to-date techniques used to improve hand function. Therefore, by identifying and evaluating all the published literature in this field, we have summarized all the available information about the advances in peripheral nerve techniques used to improve hand function. The most important ones are the use of resorbable poly[(R)-3-hydroxybutyrate] (PHB), epineural end-to-end suturing, graft repair, nerve transfer, side-to-side neurorrhaphy and end-to-side neurorrhaphy between the median, radial, and ulnar nerves, nerve transplant, nerve repair, external neurolysis and epineural sutures, adjacent neurotization without nerve suturing, the Agee endoscopic operation, tourniquet-induced anesthesia, toe transfer and meticulous intrinsic repair, free autologous nerve grafting, use of distally based neurocutaneous flaps, and tubulization. At the same time, we found that the patient’s age, tension of the repair, timing of repair, level of injury, and scar formation following surgery affect the prognosis. Despite the thorough findings of this systematic review, we suggest that further research in this field is needed. PMID:22431951

The spectrogram is one of the best-known time-frequency distributions suitable to analyze signals whose energy varies both in time and frequency. In reflectometry, it has been used to obtain the frequency content of FM-CW signals for density profile inversion and also to study plasma density fluctuations from swept and fixed frequency data. Being implemented via the short-time Fourier transform, the spectrogram is limited in resolution, and for that reason several methods have been developed to overcome this problem. Among those, we focus on the reassigned spectrogram technique that is both easily automated and computationally efficient requiring only the calculation of two additional spectrograms. In each time-frequency window, the technique reallocates the spectrogram coordinates to the region that most contributes to the signal energy. The application to ASDEX Upgrade reflectometry data results in better energy concentration and improved localization of the spectral content of the reflected signals. When combined with the automatic (data driven) window length spectrogram, this technique provides improved profile accuracy, in particular, in regions where frequency content varies most rapidly such as the edge pedestal shoulder. PMID:21061480

This paper introduces a technique for improving the sensitivity of RF subsamplers in radar and coherent receiver applications. The technique, referred to herein as “delta modulation” (DM), feeds the time-average output of a monobit analog-to-digital converter (ADC) back to the ADC input, but with opposite polarity. Assuming pseudo-stationary modulation statistics on the sampled RF waveform, the feedback signal corrects for aggregate DC offsets present in the ADC that otherwise degrade ADC sensitivity. Two RF integrated circuits (RFICs) are designed to demonstrate the approach. One uses analog DM to create the feedback signal; the other uses digital DM to achieve the same result. A series of tests validates the designs. The dynamic time-domain response confirms the feedback loop’s basic operation. Measured output quantization imbalance, under noise-only input drive, significantly improves with the use of the DM circuit, even for large, deliberately induced DC offsets and wide temperature variation from -55°C to +85°C. Examination of the corrected vs. uncorrected baseband spectrum under swept input signal-to-noise ratio (SNR) conditions demonstrates the effectiveness of this approach for realistic radar and coherent receiver applications. In conclusion, two-tone testing shows no impact of the DM technique on ADC linearity.
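The feedback loop can be illustrated with a behavioural simulation. This is a sketch of the principle only (a digital accumulator standing in for the time-average feedback path; the loop gain and offset values are illustrative, not the RFIC design): the running average of the monobit output is subtracted at the input, driving the aggregate DC offset toward zero so the output bits stay balanced under noise-only drive.

```python
import numpy as np

def monobit_dm(x, dc_offset, gain=0.01):
    """Delta-modulation sketch: a monobit sampler outputs +1/-1; its
    accumulated (time-averaged) output is fed back with opposite polarity
    at the input, cancelling the aggregate DC offset."""
    fb = 0.0
    out = np.empty(len(x))
    for i, v in enumerate(x):
        bit = 1.0 if (v + dc_offset - fb) >= 0.0 else -1.0
        out[i] = bit
        fb += gain * bit      # feedback integrator tracks the residual offset
    return out

rng = np.random.default_rng(0)
noise = rng.normal(size=20000)                 # noise-only input drive
raw = np.sign(noise + 0.5)                     # uncorrected: offset skews the bits
corrected = monobit_dm(noise, dc_offset=0.5)   # DM feedback cancels the offset
```

The quantization imbalance (the mean of the output bits) is large without correction and collapses toward zero once the loop settles, mirroring the measured behaviour reported above.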

The primary function of pancreatic beta-cells is to produce and release insulin in response to increases in the extracellular glucose concentration, thus maintaining glucose homeostasis. Deficient beta-cell function can have profound metabolic consequences, leading to the development of hyperglycemia and, ultimately, diabetes mellitus. Therefore, strategies aimed at maintaining normal function and protecting pancreatic beta-cells from injury or death might be crucial in the treatment of diabetes. This narrative review will update the evidence on recently identified molecular regulators that preserve beta-cell mass and function, in order to suggest potential therapeutic targets against diabetes. The review will also highlight novel molecular pathways with potential for ameliorating beta-cell dysfunction. PMID:23737653

We have derived and implemented a stress tensor formulation for the van der Waals density functional (vdW-DF) with spin-polarization-dependent gradient correction (GC) recently proposed by the authors [J. Phys. Soc. Jpn. 82, 093701 (2013)] and applied it to nonmagnetic and magnetic molecular crystals under ambient conditions. We found that the cell parameters of the molecular crystals obtained with vdW-DF show an overall improvement compared with those obtained using local density and generalized gradient approximations. In particular, the original vdW-DF with GC gives equilibrium structural parameters of solid oxygen in the α-phase that are in good agreement with experiment.

The Bluebell field is productive from the Tertiary lower Green River and Colton (Wasatch) Formations of the Uinta Basin, Utah. The productive interval consists of thousands of feet of interbedded fractured clastic and carbonate beds deposited in the ancestral Lake Uinta. Wells in the Bluebell field are typically completed by perforating 40 or more beds over 1000 to 3000 vertical ft (300-900 m), then stimulating the entire interval with hydrochloric acid. This technique is often referred to as the shot-gun completion. Completion techniques used in the Bluebell field were discussed in detail in the Second Annual Report (Curtice, 1996). The shot-gun technique is believed to leave many potentially productive beds damaged and/or untreated, while allowing water-bearing and low-pressure (thief) zones to communicate with the wellbore. A two-year characterization study involved detailed examination of outcrop, core, well logs, surface and subsurface fractures, produced oil-field waters, and engineering parameters of the two demonstration wells, and analysis of past completion techniques and their effectiveness. The study was intended to improve the geologic characterization of the producing formations and thereby develop completion techniques specific to the producing beds or facies instead of a shot-gun approach of stimulating all the beds. The characterization did not identify predictable facies or fracture trends within the vertical stratigraphic column as originally hoped. Advanced logging techniques can, however, identify productive beds in individual wells. A field-demonstration program was developed to use cased-hole advanced logging techniques in two wells and to recomplete the wells at two different scales based on the logging. The first well was to be completed at the interval scale using a multiple-stage completion technique (about 500 ft [150 m] per stage). The second well will be recompleted at the bed scale using a bridge plug and packer to isolate three or more

In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full-coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performance. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As no single statistical technique outperformed the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied; these showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses, and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainty, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
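The ensemble step described above can be sketched in a few lines: the suitability probabilities from several models are averaged into a single surface, which is then thresholded and scored. This is a minimal illustration under assumptions, not the study's actual pipeline; the function names and the simple mean-of-probabilities combination rule are illustrative choices.

```python
import numpy as np

def ensemble_predict(model_probs):
    """Average habitat-suitability probabilities from several models
    (one row per model, one column per site) into an ensemble map."""
    return np.mean(np.asarray(model_probs), axis=0)

def accuracy(probs, labels, threshold=0.5):
    """Fraction of sites whose thresholded prediction matches the
    observed presence/absence label."""
    return float(np.mean((probs >= threshold) == labels))

# Toy example: three models that each misclassify one site;
# the ensemble recovers all four sites correctly.
labels = np.array([1, 1, 0, 0])
a = np.array([0.9, 0.4, 0.2, 0.1])
b = np.array([0.6, 0.8, 0.6, 0.3])
c = np.array([0.7, 0.6, 0.4, 0.6])
ens = ensemble_predict([a, b, c])
```

In this toy case each single model scores 0.75 while the ensemble scores 1.0, mirroring the abstract's observation that combining model outputs can beat any single model.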

The root zone storage capacity (Sr) greatly influences runoff generation, soil water movement, and vegetation growth and is hence an important variable for ecological and hydrological modelling. However, due to the great heterogeneity in soil texture and structure, there is at present no effective approach for monitoring or estimating Sr at the catchment scale. To fill this gap, in this study the Mass Curve Technique (MCT) was improved by incorporating a snowmelt module for the estimation of Sr at the catchment scale in different climatic regions. The "range of perturbation" method was also used to generate different scenarios for determining the sensitivity of the improved MCT-derived Sr to its influencing factors, after evaluating the plausibility of Sr derived from the improved MCT. The results show that: (i) Sr estimates of different catchments varied greatly, from ∼10 mm to ∼200 mm, with changes in climatic conditions and underlying surface characteristics. (ii) The improved MCT is a simple but powerful tool for Sr estimation in different climatic regions of China, and incorporation of more catchments into Sr comparisons can further improve our knowledge of the variability of Sr. (iii) Variation of Sr values is an integrated consequence of variations in rainfall, snowmelt water, and evapotranspiration; Sr values are most sensitive to variations in ecosystem evapotranspiration. In addition, Sr values with a longer return period are more stable than those with a shorter return period when affected by fluctuations in their influencing factors.

The objective was to apply powder metallurgy techniques to the production of improved bearing elements, specifically balls and races, for advanced cryogenic turbopump bearings. The materials and fabrication techniques evaluated were judged on the basis of their ability to improve the fatigue life, wear resistance, and corrosion resistance of Space Shuttle Main Engine (SSME) propellant bearings over the currently used 440C. An extensive list of candidate bearing alloys in five different categories was considered, including tool/die steels, through-hardened stainless steels, cobalt-base alloys, and gear steels. Testing of alloys for final consideration included hardness, rolling contact fatigue, cross-cylinder wear, elevated-temperature wear, room-temperature and cryogenic fracture toughness, stress corrosion cracking, and five-ball (rolling-sliding element) testing. Results of the program identified two alloys that showed promise for improved bearing elements: MRC-2001 and X-405. 57-mm bearings were fabricated from the MRC-2001 alloy for further actual-hardware rig testing by NASA-MSFC.

A multitude of concrete-based structures are typically part of a light water reactor (LWR) plant, providing foundation, support, shielding, and containment functions. This use has made their long-term performance crucial for the safe operation of commercial nuclear power plants (NPPs). Extending reactor life to 60 years and beyond will likely increase the susceptibility to, and severity of, known forms of degradation. While the standard Synthetic Aperture Focusing Technique (SAFT) is adequate for many defects with shallow concrete cover, some defects located under deep concrete cover are not easily identified using standard SAFT. For many defects, particularly those under deep cover, the use of frequency-banded SAFT improves detectability over standard SAFT. In addition to the improved detectability, frequency-banded SAFT also provides improved scan depth resolution, which can be important in determining the suitability of a particular structure to perform its designed safety function. Specially designed and fabricated test specimens can provide realistic flaws that are similar to actual flaws in terms of how they interact with a particular NDE technique. Because conditions in the laboratory are controlled, the number of unknown variables can be decreased, making it possible to focus on specific aspects, investigate them in detail, and gain further information on the capabilities and limitations of each method. To validate the advantages of frequency-banded SAFT on thick concrete, a 2.134 m × 2.134 m × 1.016 m concrete test specimen with twenty deliberately embedded defects was fabricated.

Robust spacetime gauge conditions are critically important to the stability and accuracy of numerical relativity (NR) simulations involving puncture black holes. Most of the NR community continues to use the highly robust, though nearly decade-old, "moving-puncture" gauge conditions for such simulations. We present improved gauge conditions and evolution techniques that reduce constraint violations by more than an order of magnitude on adaptive-mesh refinement (AMR) grids. It has been found that high-frequency waves propagating away from puncture black holes (e.g., in binary systems) cross progressively lower levels of refinement until they become under-resolved and reflect off an AMR boundary, leading to noisy gravitational waveforms. Such noise does not converge away cleanly with increasing resolution, effectively setting a hard upper limit on waveform accuracy using puncture techniques at computationally feasible resolutions. We demonstrate that our improved puncture gauge conditions reduce this noise by nearly an order of magnitude, and point to possible directions for future improvements.

Diffusion-weighted imaging (DWI) is an established functional imaging technique that interrogates the delicate balance of water movement at the cellular level. Technological advances enable this technique to be applied to whole-body MRI. Theory, b-value selection, common artifacts, and target-to-background optimization for viewing will be reviewed for applications in the neck, chest, abdomen, and pelvis. Whole-body imaging with DWI allows novel applications of MRI to aid in evaluation of conditions such as multiple myeloma, lymphoma, and skeletal metastases, while the quantitative nature of this technique permits evaluation of response to therapy. Persisting signal at high b-values from restricted hypercellular tissue and viscous fluid also permits applications of DWI beyond oncologic imaging. DWI, when used in conjunction with routine imaging, can assist in detecting hemorrhagic degradation products, infection/abscess, and inflammation in colitis, while aiding discrimination of free fluid from empyema and limiting the need for intravenous contrast. DWI in conjunction with routine anatomic images provides a platform to improve lesion detection and characterization, with findings rivaling other combined anatomic and functional imaging techniques and the added benefit of no ionizing radiation. PMID:23960006

Purpose: Total body irradiation (TBI) with megavoltage photon beams has been accepted as an important component of management for a number of hematologic malignancies, generally as part of bone marrow conditioning regimens. The purpose of this paper is to present and discuss the authors' TBI technique, which both simplifies the treatment process and improves the treatment quality. Methods: An AP/PA TBI treatment technique producing uniform dose distributions using sequential collimator reductions during each fraction was implemented, and a sample calculation worksheet is presented. Using this methodology, the dosimetric characteristics of both 6 and 18 MV photon beams, including lung dose under cerrobend blocks, were investigated. A method of estimating midplane lung doses based on measured entrance and exit doses was proposed, and the estimated results were compared with measurements. Results: Whole-body midplane dose uniformity of ±10% was achieved with no more than two collimator-based beam modulations. The proposed model predicted midplane lung doses 5% to 10% higher than the measured doses for 6 and 18 MV beams. The estimated total midplane doses were within ±5% of the prescribed midplane dose on average, except for the lungs, where the doses were 6% to 10% lower than the prescribed dose on average. Conclusions: The proposed TBI technique can achieve dose uniformity within ±10%. This technique is easy to implement and does not require complicated dosimetry and/or compensators.
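The abstract does not specify the authors' estimation model, so the following is only a hedged sketch of one common textbook approximation: under roughly exponential attenuation, the midplane dose can be estimated as the geometric mean of the measured entrance and exit doses. The function name is hypothetical and the formula is not claimed to be the paper's method.

```python
import math

def midplane_dose_estimate(d_entrance, d_exit):
    """Geometric-mean estimate of midplane dose from measured entrance
    and exit doses; reasonable when attenuation is roughly exponential
    and the measurement points are symmetric about the midplane."""
    return math.sqrt(d_entrance * d_exit)

# Example: 2.0 Gy at entrance, 0.5 Gy at exit gives a 1.0 Gy
# midplane estimate under this approximation.
mid = midplane_dose_estimate(2.0, 0.5)
```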

Solid dispersion has emerged as a method of choice and has been extensively investigated to ascertain the improved in vivo performance of many drug formulations. It generally involves dispersion of a drug in amorphous particles (clusters) or in crystalline particles. In the last decade, amorphous drug-polymer solid dispersion has evolved into a platform technology for delivering poorly water-soluble small molecules. However, the success of this technique in the pharmaceutical industry mainly relies on drug-polymer attributes such as physico-chemical stability, bioavailability and manufacturability. The present review showcases the efficacy of the amorphous solid dispersion technique in the research and development of different drug formulations, particularly those with poor aqueous solubility. Apart from the numerous mechanisms of action involved, a comprehensive summary of the key parameters required for solubility enhancement and their translational efficacy to the clinic is also provided. PMID:26306524

The efficacy of cryoanalgesia for the control of post-thoracotomy pain has led to the acceptance of the technique as a routine procedure in this unit. A study of 600 consecutive patients in whom an improved technique was used is now reported. The freezing time for each intercostal nerve in this group was reduced to one 30-second exposure instead of the two 30-second exposures previously used. This reduced the duration of cutaneous numbness, with no loss of pain control. Freezing above the fifth intercostal nerve is no longer practiced in women. Modifications to the probe have simplified the procedure. Pulmonary function studies and blood-gas analysis are also described. PMID:3736085

The two techniques that have provided most of the information on interface states in MIS-C (metal-insulator-semiconductor-capacitor) structures are the 'quasi-static method' and the 'conductance method'. Sher et al. (1979) and Su et al. (1980) have suggested a number of improvements to these methods. The present investigation aims to extend the earlier results and to offer a new tentative interpretation of the data. A critical review is conducted of the data collection and reduction techniques for the quasi-static method, taking into account the sample, the quasi-static capacitance, and the surface potential. In connection with a discussion of the conductance method, attention is given to parallel conductance and capacitance measurements, interface-state densities, time constants, and measurements on a (110) surface orientation.

This paper describes a new technique for reducing phase noise in oscillator circuits. Our method uses an external crystal resonant circuit that acts as a frequency reference and is based on correlation with negative feedback control. We present the circuit configuration and the transfer function used in this method, as well as measured single-sideband (SSB) phase noise characteristics. Our experiments show that the phase noise of an LC oscillator can be decreased to near its theoretical value. Furthermore, we examine the application of the method to a voltage-controlled crystal oscillator (VCXO). As a result, the phase noise characteristics are improved over those of the original VCXO without sacrificing its frequency tuning range.

Optical variable devices (OVDs), such as holograms, are now common in the field of document security. Until now, mass-produced embossed holograms and other types of mass-produced OVDs have been used not only for banknotes but also for personalized documents, such as passports, ID cards, travel documents, driving licenses, credit cards, etc. This means that identical OVDs are used on documents issued to different individuals. Today, there is a need for a higher degree of security on such documents, and this chapter covers new techniques for making improved mass-produced or personalized OVDs.

An optimization technique for generating antenna illumination tapers allows improved microwave transmission efficiencies from proposed solar power satellite (SPS) systems and minimizes sidelobe levels to meet preset environmental standards. The cumulative microwave power density levels from 50 optimized SPS systems are calculated at the centroids of each of the 3073 counties in the continental United States. These cumulative levels are compared with Environmental Protection Agency (EPA) measured levels of electromagnetic radiation in seven eastern cities. Effects of rectenna relocations upon the power levels/population exposure rates are also studied.

A new technique for signal improvement has been developed within the framework of the Empirical Mode Decomposition (EMD) method. It identifies noise in the signal from estimates of the correlation coefficient, computed in both the frequency and time domains, between the intrinsic mode functions (IMFs) and the given signal itself. Each of the Fast-Fourier-transformed IMFs reflects the complete picture of the frequencies involved in the given signal; therefore, the correlation curve obtained in the time domain can be used to identify the noise components. The proposed method has been applied to pulse-shape data from a liquid-scintillator-based neutron detector.
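As a rough sketch of the correlation-based selection step (not the authors' exact implementation), the following assumes the IMFs have already been computed by an EMD routine and keeps only those whose time-domain correlation with the original signal exceeds a threshold; the threshold value and function name are illustrative assumptions.

```python
import numpy as np

def denoise_by_imf_correlation(signal, imfs, threshold=0.2):
    """Reconstruct the signal from only those IMFs whose correlation
    with the original signal exceeds the threshold; low-correlation
    IMFs are treated as noise and discarded."""
    kept = [imf for imf in imfs
            if abs(np.corrcoef(signal, imf)[0, 1]) >= threshold]
    return np.sum(kept, axis=0) if kept else np.zeros_like(signal)

# Toy example: a 5 Hz tone plus a weak 80 Hz component standing in
# for noise; the two components play the role of precomputed IMFs.
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 5 * t)
noise = 0.05 * np.sin(2 * np.pi * 80 * t)
sig = clean + noise
out = denoise_by_imf_correlation(sig, [noise, clean], threshold=0.2)
```

The weak high-frequency component correlates poorly with the composite signal and is dropped, so the reconstruction recovers the clean tone.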


Spraying techniques have been undergoing continuous evolution in recent decades. This paper presents part of the research work carried out in Spain in the field of sensors for characterizing vineyard canopies and monitoring spray drift in order to improve vineyard spraying and make it more sustainable. Some methods and geostatistical procedures for mapping vineyard parameters are proposed, and the development of a variable rate sprayer is described. All these technologies are interesting in terms of adjusting the amount of pesticides applied to the target canopy. PMID:24451462

A method for on-line accurate monitoring and precise control of molecular beam epitaxial growth of Groups III-III-V or Groups III-V-V layers in an advanced semiconductor device incorporates reflection mass spectrometry. The reflection mass spectrometry is responsive to intentional perturbations in molecular fluxes incident on a substrate by accurately measuring the molecular fluxes reflected from the substrate. The reflected flux is extremely sensitive to the state of the growing surface and the measurements obtained enable control of newly forming surfaces that are dynamically changing as a result of growth. 3 figs.


We propose a nanoscale switch, giving a nonlinear function with two conductive states separated by a sharp transition region, on the basis of an array of molecular dipoles. We show theoretically that the local interactions between dipoles result in cooperative phenomena that can significantly improve the switching characteristics. We demonstrate the general validity of the concept in the cases of (i) an electrical switch robust to the finite size and variability effects inherent to the nanoscale and (ii) a sensing layer based on the voltage and ligand concentration dependence of the dipole array conductance.

As an alternative to the partial oxidation of methane to synthesis gas followed by methanol synthesis and the subsequent generation of olefins, we have studied the production of light olefins (ethylene and propylene) from the reaction of methyl bromide over various modified microporous silico-aluminophosphate molecular-sieve catalysts with an emphasis on SAPO-34. Some comparisons of methyl halides and methanol as reaction intermediates in their conversion to olefins are presented. Increasing the ratio of Si/Al and incorporation of Co into the catalyst framework improved the methyl bromide yield of light olefins over that obtained using standard SAPO-34. PMID:21203621

It is incredibly easy to ignore the medical practice team that is doing a good job. However, when we allow good performers to continue as they are, they probably won't improve. Their performance may even worsen. This is unfortunate because with a little bit of effort and support, good performers can often learn to excel. This article offers 12 techniques medical practice managers can use to bring their team members from good performance to excellent. It describes how to use goal-setting, work assignments, modeling, confidence building, team retreats, rewards, incentives, and reinforcement to ratchet up a good medical practice team's performance. This article also identifies the signs of medical employee mediocrity. It describes why setting higher expectations of your medical practice employees will ultimately improve their performance. Finally, this article suggests 10 practical and affordable strategies that medical practice managers can use to reinforce excellent performance in their good employees. PMID:23866656

Although DOE's Environmental Management program has made steady progress in cleaning up environmental legacies throughout the DOE complex, there are still significant remediation issues that remain to be solved. For example, DOE faces difficult challenges related to potential mobilization of radionuclides (e.g., actinides) and other hazardous contaminants in soils, removal and final treatment of high-level waste and residuals from leaking tanks, and the long-term stewardship of remediated sites and engineered disposal facilities, to name just a few. In some cases, new technologies and technology applications will be required based on current engineering expertise. In others, however, basic scientific research is needed to understand the mechanisms of how contaminants behave under specific conditions and how they interact with the environment, from which new engineering solutions can emerge. At Brookhaven National Laboratory (BNL) and Stony Brook University, scientists have teamed to use state-of-the-art synchrotron techniques to help understand the basic interactions of contaminants in the environment. Much of this work is conducted at the BNL National Synchrotron Light Source (NSLS), which is a user facility that provides high energy X-ray and ultraviolet photon beams to facilitate the examination of contaminants and materials at the molecular level. These studies allow us to determine how chemical speciation and structure control important parameters such as solubility, which in turn drive critical performance characteristics such as leaching. In one study for example, we are examining the effects of microbial activity on actinide contaminants under conditions anticipated at the Waste Isolation Pilot Plant. One possible outcome of this research is the identification of specific microbes that can trap uranium or other contaminants within the intracellular structure and help mitigate mobility. In another study, we are exploring the interaction of contaminants with

Nanoimprint lithography (NIL) technology is in the spotlight as a next-generation semiconductor manufacturing technique for integrated circuits at 22 nm and beyond. NIL is an unmagnified lithography technique using templates replicated from master templates, which are currently fabricated by electron-beam (EB) lithography [1]. In the near future, finer patterns below 15 nm will be required on master templates, and EB data volume will increase exponentially, so we face a difficult challenge: a higher-resolution EB mask writer and a high-performance fabrication process will be required. In our previous study, we investigated the potential of the photomask fabrication process for finer patterning and achieved a 15.5 nm line-and-space (L/S) pattern on a template using a variable-shaped-beam (VSB) EB mask writer and a chemically amplified resist. However, we found that contrast loss due to backscattering degrades the performance of finer patterning. For semiconductor device manufacturing, we must fabricate complicated patterns that include both high- and low-density regions simultaneously, not just consecutive L/S patterns, so it is important to develop a technique that can produce patterns of various sizes and coverages all at once. In this study, a small-feature pattern was experimentally formed on a master template with a dose-modulation technique, which applies the appropriate exposure dose for each pattern size. As a result, we succeeded in improving the performance of finer patterning in the bright-field area. These results show that the current EB lithography process has the potential to fabricate NIL templates.

Peroxisome proliferator-activated receptors (PPARs) have grown greatly in importance due to their role in the metabolic profile. Among the three subtypes (α, γ and δ), we here consider the least investigated δ subtype to explore the molecular fingerprints of selective PPARδ agonists. Validated QSAR models (regression-based 2D-QSAR, HQSAR and KPLS) and molecular docking with dynamics analyses support the inference of classification-based Bayesian and recursive models. Chemometric studies indicate that the presence of ether linkages and heterocyclic rings has optimum influence in imparting selective bioactivity. Pharmacophore models and docking with molecular dynamics analyses postulate the occurrence of aromatic rings, an HB acceptor and a hydrophobic region as crucial molecular fragments for the development of PPARδ modulators. Multi-chemometric studies suggest the essential structural requirements of a molecule for imparting potent and selective PPARδ modulation. PMID:25986170

We present a new approach to improving the convergence of Monte Carlo (MC) simulations of molecular systems with complex energy landscapes: the problem is redefined in terms of the dynamic allocation of MC move frequencies depending on their past efficiency, measured with respect to a relevant sampling criterion. We introduce various empirical criteria with the aim of accounting for proper convergence in phase-space sampling. The dynamic allocation is performed over parallel simulations by means of a new evolutionary algorithm involving 'immortal' individuals. The method is benchmarked against conventional procedures on a model for melt linear polyethylene. We record significant improvements in sampling efficiency, and thus in computational load, while the optimal sets of move frequencies can provide interesting physical insight into the particular systems simulated. This last aspect should provide a new tool for designing more efficient new MC moves.

Aqueous solubility is a key requirement for many functional molecules, e.g., drug candidates. Decreasing the partition coefficient (log P) by chemical modification, i.e., introducing hydrophilic group(s) into molecules, is a classical strategy for improving aqueous solubility. We have been investigating alternative strategies for improving the aqueous solubility of pharmaceutical compounds by disrupting intermolecular interactions. Here, we show that introducing a bend into the molecular structure of retinoic acid receptor (RAR) agonists by changing the substitution pattern from para to meta or ortho dramatically enhances aqueous solubility, by up to 890-fold. We found that the meta analogs exhibit similar hydrophobicity to the parent para compound and have lower melting points, supporting the idea that the increase in aqueous solubility was due to decreased intermolecular interactions in the solid state as a result of the structural changes. PMID:27378357

Two- and three-dimensional (2-D and 3-D) models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data presents a very unstable and ill-posed mathematical problem, given that solutions are non-unique and small changes in parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. Through the implementation of an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures representing known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and to gravitational data obtained from the Rio Grande Rift and the Cooper Flat Mine region located in Sierra County, New Mexico. Specifically, we improve the 2-D and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible), thereby reducing the solution space.
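As a toy stand-in for the interior-point solver (which is not reproduced here), the following projected-gradient sketch shows how box constraints on density contrast shrink the space of admissible models for a linear forward problem G·m = d; all names and the solver choice are illustrative assumptions.

```python
import numpy as np

def bounded_inversion(G, d, lo, hi, steps=500, lr=None):
    """Least-squares inversion of G @ m = d subject to box constraints
    lo <= m <= hi, via projected gradient descent (a simple stand-in
    for an interior-point constrained solver)."""
    G, d = np.asarray(G, float), np.asarray(d, float)
    if lr is None:
        lr = 1.0 / np.linalg.norm(G, 2) ** 2   # stable step size
    m = np.clip(np.zeros(G.shape[1]), lo, hi)  # feasible start
    for _ in range(steps):
        grad = G.T @ (G @ m - d)               # gradient of the misfit
        m = np.clip(m - lr * grad, lo, hi)     # project onto bounds
    return m

# Toy problem: the unconstrained optimum [2, -3] lies outside the
# admissible density-contrast range [0, 1], so the constrained
# solution lands on the bounds.
G = np.eye(2)
d = np.array([2.0, -3.0])
m = bounded_inversion(G, d, lo=0.0, hi=1.0)
```

The projection step is what "gets rid of" geologically unfeasible solutions: every iterate stays inside the stated density-contrast bounds.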

The seminal Marshmallow Test (Mischel & Ebbesen, 1970) has reliably demonstrated that children who can delay gratification are more likely to be emotionally stable and successful later in life. However, this is not good news for those children who can't delay. Therefore, this study aimed to explore whether a metacognitive therapy technique, Attention Training (ATT: Wells, 1990) can improve young children's ability to delay gratification. One hundred children participated. Classes of 5-6 year olds were randomly allocated to either the ATT or a no-intervention condition and were tested pre and post-intervention on ability to delay gratification, verbal inhibition (executive control), and measures of mood. The ATT intervention significantly increased (2.64 times) delay of gratification compared to the no-intervention condition. After controlling for age and months in school, the ATT intervention and verbal inhibition task performance were significant independent predictors of delay of gratification. These results provide evidence that ATT can improve children's self-regulatory abilities with the implication that this might reduce psychological vulnerability later in life. The findings highlight the potential contribution that the Self-Regulatory Executive Function (S-REF) model could make to designing techniques to enhance children's self-regulatory processes. PMID:26708331

In this paper, an improved surface seeding and shell growth technique was developed to prepare Ag-polystyrene core-shell composites. Polyethyleneimine (PEI) acts both as the linker between Ag ions (Ag nanoparticles) and polystyrene (PS) colloids and as the reducing agent in the formation of Ag nanoparticles. Owing to this multi-functional character of PEI, Ag seeds formed in situ and were immobilized on the surface of the PEI-modified PS colloids, and no free Ag clusters coexisted with the Ag-seeded PS colloids in the system. The additional agents could then be added directly to the resulting dispersions to produce a thick Ag nanoshell. An Ag nanoshell with controllable thickness was formed on the surface of the PS by this 'one-pot' surface seeding and shell growth method. The Ag coverage increased gradually with increasing AgNO3/PS mass ratio, and the optical properties of the Ag-PS colloids could be tailored by changing the Ag coverage.

The aim of this study was to objectively evaluate the voices of patients suffering from unilateral vocal cord paralysis, before and after endoscopic augmentation and thyroplasty. In the past, we used injectable Teflon to treat this condition; later techniques included collagen injection and Isshiki thyroplasty. In the last 7 years, preferred treatment methods have included Bioplastique injection and lipoaugmentation of the vocal cords as well as medialization thyroplasty using a titanium implant according to Friedrich. Pre- and postoperative data were evaluated and compared for 25 patients. Appropriate glottic closure of the vocal cords was achieved in every case, in most cases after the first intervention. We used voice range profile measurements to evaluate the results. An objective evaluation was performed using the Friedrich dysphonia index. Significant improvements were found: the dysphonia index decreased in every case, from an average of 2.47 preoperatively to an average of 1.18 postoperatively. In agreement with earlier studies, voice pitch range was the only parameter that did not improve significantly. There was no statistical difference between lipoaugmentation and thyroplasty according to Friedrich. We concluded that both the endoscopic methods and thyroplasty can be used to achieve an optimal result. Cases must be evaluated individually so that the best technique, or combination of methods, can be determined. PMID:16896756

Quantum effects due to the spatial delocalization of light atoms are treated in molecular simulation via the path integral technique. Among several methods, Path Integral (PI) Molecular Dynamics (MD) is nowadays a powerful tool for investigating properties induced by the spatial delocalization of atoms; however, the technique is computationally very demanding. This limitation restricts PIMD applications to relatively small systems and short time scales. One possible way to overcome the size and time limitations is to introduce PIMD algorithms into the Adaptive Resolution Simulation Scheme (AdResS). AdResS treats a relatively small region at the path integral level and embeds it in a large molecular reservoir of generic spherical coarse-grained molecules. It was previously shown that a simple realization of this idea produced reasonable results for toy or test systems such as liquid parahydrogen. Encouraged by those results, in this paper we present simulations of liquid water at ambient conditions in which AdResS, in its latest and more accurate Grand-Canonical-like version (GC-AdResS), is merged with two of the most relevant PIMD techniques in the literature. The comparison of our results with those reported in the literature and/or obtained from full PIMD simulations shows highly satisfactory agreement.
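
As a minimal illustration of the path integral mapping these methods build on (not the AdResS/GC-AdResS machinery itself), the sketch below computes the harmonic spring energy of the closed ring polymer that PI methods associate with each quantum particle; the 1-D particle and unit constants are assumptions for illustration only.

```python
import numpy as np

def ring_polymer_spring_energy(beads, mass, beta, hbar=1.0):
    """Spring energy of the closed ring polymer a PI method maps each atom onto.

    E = sum_k 0.5 * m * omega_P**2 * (x_{k+1} - x_k)**2, omega_P = P / (beta * hbar),
    with bead P connected back to bead 1 (cyclic boundary).
    """
    P = len(beads)
    omega_P = P / (beta * hbar)
    diffs = np.diff(beads, append=beads[0])  # cyclic bead-to-bead differences
    return 0.5 * mass * omega_P**2 * np.sum(diffs**2)

# a classical (fully localized) particle has all beads collapsed: zero spring energy
e_classical = ring_polymer_spring_energy(np.zeros(16), mass=1.0, beta=1.0)
# spatially delocalized beads pay a spring-energy penalty
e_spread = ring_polymer_spring_energy(np.array([0.0, 1.0]), mass=1.0, beta=1.0)
```

The cost of evaluating P such replicas per atom is what makes full PIMD expensive and motivates restricting it to a small AdResS region.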

We propose a three-dimensional imaging technique that could be used to measure the internal energy of asymmetric diatomic molecular ions such as HeH+ and CO+. The detection scheme is similar to the one used for symmetric diatomic molecular ions, which accesses the internal energy of the ion through the kinetic energy release in a resonant dissociative charge transfer. In that technique, the fragments hit two detectors, which send the positions of the impacts, along with the difference between the times of impact, to a computer. The computed kinetic energy release is related to the vibrational excitation level of the initial molecular ion. In the case of an asymmetric ion, the lighter fragment has a higher recoil velocity and travels further transversally from the center-of-mass direction; the heavier fragment will not hit the first detector if the beam is judiciously misaligned. We can therefore distinguish between the two particles. Details of the technique will be presented. The authors wish to give special thanks to the Pacific Union College Student Senate for their financial support.

This work is devoted to improving the electrical efficiency of a photovoltaic/thermal (PV/T) system by reducing the rate of thermal energy buildup. This is achieved by designing a cooling technique consisting of a heat exchanger and water-circulating pipes placed at the PV module's rear surface, to solve the problem of the high heat stored inside the PV cells during operation. An experimental rig was designed to investigate and evaluate PV module performance with the proposed cooling technique, which is the first such work in Iraq to dissipate heat from a PV module. The experimental results indicated that, owing to the heat removed by convection between the water and the PV panel's surface, an increase in output power is achieved. It was found that without active cooling the temperature of the PV module was high and the solar cells could only achieve a conversion efficiency of about 8%. When the PV module was operated under active water cooling, the temperature dropped from 76.8°C to 70.1°C. This temperature drop increased the electrical efficiency of the solar panel to 9.8% at the optimum mass flow rate (0.2 L/s), with a thermal efficiency of 12.3%.

In recent years the Distributed Point Source Method (DPSM) has been used to model various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM, several point sources are placed near the transducer face, interfaces and anomaly boundaries, and the ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of the different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose, the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can mitigate the shadow-region problem to some extent; its complete removal can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to point sources in the shadow region, are compared with those from the proposed modified technique, which nullifies their contributions. One application of this research is improved modelling of real-time ultrasonic non-destructive evaluation experiments.
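
The superposition step described above can be sketched as follows; the spherical Green's function exp(ikr)/r, the unit source strengths and the geometry are illustrative assumptions, not the paper's calibrated sources.

```python
import numpy as np

def dpsm_field(target, src_pts, src_strengths, k):
    """Superpose spherical point-source contributions exp(ikr)/r at one point.

    target: (3,) observation point; src_pts: (M, 3) source positions;
    src_strengths: (M,) complex amplitudes; k: wavenumber.
    """
    r = np.linalg.norm(src_pts - target, axis=1)
    return np.sum(src_strengths * np.exp(1j * k * r) / r)

# field at the origin from two unit sources, each at distance 1
srcs = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
amps = np.ones(2, dtype=complex)
p_total = dpsm_field(np.zeros(3), srcs, amps, k=2 * np.pi)
```

In a full DPSM model the source strengths are solved from boundary conditions; the modified technique of the paper would additionally zero out (or redirect) the sources whose contribution falls in a shadow region.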

Over the last two decades, numerous human MRI studies of neuroplasticity have shown compelling evidence for extensive and rapid experience-induced brain plasticity in vivo. To date, most of these studies have consisted of simply detecting a difference in structural or functional images, with little concern for their lack of biological specificity. Recent reviews and public debates have stressed the need for advanced imaging techniques to gain a better understanding of the nature of these differences, characterizing their extent in time and space and their underlying biological and network dynamics. The purpose of this article is to give cognitive neuroscientists an overview of advanced imaging techniques that can assist them in the design and interpretation of future MRI studies of neuroplasticity. The review encompasses MRI methods that probe the morphology, microstructure, function, and connectivity of the brain with improved specificity. We underline the possible physiological underpinnings of these techniques and their recent applications within the framework of learning- and experience-induced plasticity in healthy adults. Finally, we discuss the advantages of a multi-modal approach for obtaining a more nuanced and comprehensive description of the process of learning. PMID:26318050

We present a new technique for overcoming confusion noise in deep far-infrared Herschel space telescope images by making use of prior information from shorter wavelengths (λ < 2 μm). For the deepest images obtained by Herschel, the flux limit due to source confusion is about a factor of three brighter than the flux limit due to instrumental noise and the (smooth) sky background. We have investigated the possibility of de-confusing simulated Herschel PACS 160 μm images by using strong Bayesian priors on the positions and weak priors on the fluxes of sources. We find the blended sources, group them together, and simultaneously fit their fluxes. We derive the posterior probability distribution function of the fluxes subject to these priors through Markov Chain Monte Carlo (MCMC) sampling by fitting the image. Assuming we can predict the FIR flux of sources to within an order of magnitude from the ultraviolet-optical part of their SEDs, the simulations show that we can obtain reliable fluxes and uncertainties at least a factor of three fainter than the confusion noise limit of 3σ_c = 2.7 mJy in our simulated PACS 160 μm image. This technique could in principle be used to mitigate the effects of source confusion in any situation where one has prior information about the positions and plausible fluxes of blended sources. For Herschel, application of this technique will improve our ability to constrain the dust content in normal galaxies at high redshift.
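
The core of the de-confusion step, simultaneously fitting the fluxes of blended sources at positions known from prior information, can be sketched with a deterministic linear least-squares fit; the paper instead samples the posterior with MCMC, and the Gaussian PSF and noise-free image below are simplifying assumptions.

```python
import numpy as np

def fit_blended_fluxes(image, positions, sigma_psf):
    """Simultaneously fit fluxes of sources at known (prior) pixel positions.

    Builds a design matrix of unit-total-flux Gaussian PSFs and solves
    image ≈ sum_i f_i * PSF_i by linear least squares.
    """
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cols = []
    for (y0, x0) in positions:
        psf = np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * sigma_psf ** 2))
        cols.append((psf / psf.sum()).ravel())
    A = np.column_stack(cols)
    fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return fluxes

# two heavily blended sources, separated by about one PSF width
truth, pos, s = np.array([5.0, 2.0]), [(16, 14), (16, 18)], 3.0
ny = nx = 32
yy, xx = np.mgrid[0:ny, 0:nx]
img = np.zeros((ny, nx))
for f, (y0, x0) in zip(truth, pos):
    g = np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / (2 * s ** 2))
    img += f * g / g.sum()
fluxes = fit_blended_fluxes(img, pos, s)  # ≈ [5.0, 2.0]
```

With noise present, the same design matrix would enter a likelihood, and the flux priors from the UV-optical SEDs would be imposed during the MCMC sampling.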

The current paper examines a new DSMC approach to hypersonic flow simulation, consisting of a combination of the Simplified Bernoulli Trials (SBT) collision algorithm and the transient adaptive subcell (TAS) selection procedure. The SBT collision algorithm has already been introduced as a scheme that provides accurate results with quite a small number of particles per cell, and its combination with the TAS technique enables SBT to use coarser grid sizes as well. In the current research, the no-time-counter (NTC) collision algorithm and nearest-neighbor (NN) pair selection procedure of Bird's DS2V code are replaced by SBT-TAS, and comparisons between the new algorithm and NTC-NN are made on appropriate test cases, including hypersonic cylinder flow and axisymmetric biconic flow. Hypersonic cylinder flow is a well-known benchmark problem with a wide collision frequency range, while the biconic flow exhibits laminar shock/shock and shock/boundary-layer interactions. Improvements implemented in the SBT-TAS technique, including subcell volume estimation, a surface properties filter, and a time controller, are discussed in detail. The simulations of these hypersonic test cases demonstrate that, from the viewpoint of consumed sample size, SBT-TAS is an efficient collision technique.

Conventional Langmuir probe techniques are difficult to use in processing plasmas where dielectric compounds form, owing to rapid failure by surface insulation. A solution to this problem, the so-called harmonic probe technique, has been proposed and shown to be effective. In this study, the technique was investigated in detail by changing the bias signal amplitude V0, and its accuracy was evaluated by comparison with a conventional Langmuir probe. It was found that the measured electron temperature Te increased with V0 but showed a relatively stable region when V0 > Te/e, in which it was close to the true Te value. This is contrary to the general expectation that V0 should be smaller than Te/e for accurate measurement of Te; the phenomenon is explained by the non-negligible change of the ion current with V0 at low V0 values. On the other hand, the measured ion density ni also increased with V0 owing to sheath expansion, and to improve the accuracy of ni the ni-V0 trend needs to be linearly extrapolated to V0 = 0. The results were applied to diagnosis of plasmas used for chemical vapor deposition of diamond-like carbon thin films, and the relationship between plasma parameters and film deposition rates was obtained.
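
The linear extrapolation of the measured ni-V0 trend back to V0 = 0 can be sketched as follows; the bias amplitudes and the density trend are hypothetical numbers, not measurements from this study.

```python
import numpy as np

def extrapolate_ni(v0, ni):
    """Linearly extrapolate the measured ion density vs. bias amplitude to V0 = 0.

    Returns the intercept of a first-order fit, i.e. ni free of the
    sheath-expansion overestimate that grows with V0.
    """
    slope, intercept = np.polyfit(v0, ni, 1)
    return intercept

# hypothetical bias amplitudes (V) and a sheath-expansion trend in ni (m^-3)
v0 = np.array([2.0, 4.0, 6.0, 8.0])
ni = 1.0e16 + 2.0e14 * v0
ni_corrected = extrapolate_ni(v0, ni)  # ≈ 1.0e16
```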

Many current and future dark matter and neutrino detectors are designed to measure scintillation light with a large array of photomultiplier tubes (PMTs). The energy resolution and particle identification capabilities of these detectors depend in part on the ability to accurately identify individual photoelectrons in PMT waveforms despite large variability in pulse amplitudes and pulse pileup. We describe a Bayesian technique that can identify the times of individual photoelectrons in a sampled PMT waveform without deconvolution, even when pileup is present. To demonstrate the technique, we apply it to the general problem of particle identification in single-phase liquid argon dark matter detectors. Using the output of the Bayesian photoelectron counting algorithm described in this paper, we construct several test statistics for rejection of backgrounds in dark matter searches in argon. Compared to simpler methods based on either observed charge or peak finding, the photoelectron counting technique improves both energy resolution and particle identification of low-energy events in calibration data from the DEAP-1 detector and in simulations of the larger MiniCLEAN dark matter detector.

Brain tumors represent a leading cause of cancer death for people under the age of 40, and the probability of complete surgical resection of brain tumors remains low owing to the invasive nature of these tumors and the consequences of damaging healthy brain tissue. Molecular imaging is an emerging approach that has the potential to improve surgeons' ability to correctly discriminate between healthy and cancerous tissue; however, conventional molecular imaging approaches in the brain suffer from significant background signal in healthy tissue or an inability to target the more invasive sections of the tumor. This work presents initial studies investigating the ability of novel dual-tracer molecular imaging strategies to overcome the major limitations of conventional "single-tracer" molecular imaging. The approach is evaluated in simulations and in an in vivo mouse study with animals inoculated orthotopically with fluorescent human glioma cells. An epidermal growth factor receptor (EGFR)-targeted Affibody-fluorescent marker was employed as the targeted imaging agent, and various FDA-approved untargeted fluorescent tracers (e.g., fluorescein and indocyanine green) were evaluated for their ability to account for nonspecific uptake and retention of the targeted imaging agent. The signal-to-background ratio was used to measure and compare the amount of reporter in the tissue between the targeted and untargeted tracers. The initial findings suggest that these FDA-approved fluorescent imaging agents are ill-suited to act as untargeted imaging agents for dual-tracer fluorescence-guided brain surgery, as they suffer from poor delivery to healthy brain tissue and therefore cannot be used to distinguish nonspecific from specific uptake of the targeted imaging agent where current surgery is most limited.

Purpose: In positron emission tomography (PET), the single scatter simulation (SSS) algorithm is widely used for scatter estimation in clinical scans. However, bias usually occurs at the essential step of scaling the computed SSS distribution to real scatter amounts by employing the scatter-only projection tail. The bias can be amplified when the scatter-only projection tail is too small, resulting in incorrect scatter correction. To this end, we propose a novel scatter calibration technique that accurately estimates the amount of scatter using a pre-determined scatter fraction (SF) function instead of the scatter-only tail information. Methods: As the SF depends on the radioactivity distribution and the attenuating material of the patient, an accurate theoretical relation cannot be devised. Instead, we constructed an empirical transformation function between SFs and average attenuation coefficients based on a series of phantom studies with different sizes and materials. From the average attenuation coefficient, the predicted SF was calculated using the empirical transformation function, and the real scatter amount was then obtained by scaling the SSS distribution with the predicted SF. The simulation was conducted using SimSET, with the Siemens Biograph™ 6 PET scanner modeled in this study. The Software for Tomographic Image Reconstruction (STIR) was employed to estimate the scatter and reconstruct images. The EEC phantom was adopted to evaluate the performance of the proposed technique. Results: The scatter-corrected image of our method demonstrated improved image contrast over that of SSS. For the reconstructed images of our technique and SSS, the normalized standard deviations were 0.053 and 0.182, respectively, and the root mean squared errors were 11.852 and 13.767, respectively. Conclusion: We have proposed an alternative method to calibrate SSS (C-SSS) to absolute scatter amounts using the SF. This method can avoid the bias caused by insufficient scatter-only tail information.
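
The scaling step of the proposed C-SSS method can be sketched as follows; the linear SF-vs-attenuation function and all numerical values are illustrative stand-ins for the empirically determined transformation, not the study's calibration.

```python
def calibrate_sss(sss_profile, prompts, mu_avg, sf_of_mu):
    """Scale an SSS scatter shape to absolute counts via a predicted scatter fraction.

    sf_of_mu: pre-determined SF(average attenuation coefficient) transformation.
    The absolute scatter is SF * prompts; the SSS shape is rescaled to that total,
    so no scatter-only tail is needed.
    """
    sf = sf_of_mu(mu_avg)                 # predicted scatter fraction for this object
    target = sf * prompts                 # absolute scatter counts implied by the SF
    scale = target / sum(sss_profile)
    return [s * scale for s in sss_profile]

# hypothetical linear SF-vs-attenuation calibration (illustrative numbers only)
sf_model = lambda mu: 0.2 + 1.5 * mu
sss_shape = [1.0, 2.0, 3.0, 2.0, 1.0]     # arbitrary SSS sinogram profile
scaled = calibrate_sss(sss_shape, prompts=1.0e6, mu_avg=0.096, sf_of_mu=sf_model)
```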

Vehicular traffic congestion is a significant problem in many cities, caused by the increasing number of vehicles driving on city roads of limited capacity. Vehicular congestion significantly impacts travel distance, travel time, fuel consumption and air pollution. Avoiding traffic congestion and providing drivers with optimal paths are not trivial tasks. The key contribution of this work is an approach for the dynamic calculation of optimal traffic routes. Two attributes (the average travel speed of the traffic and the roads' length) are utilized by the proposed method to find the optimal paths. The average travel speed values can be obtained from sensors deployed in smart cities and communicated to vehicles via the Internet of Vehicles and roadside communication units. The performance of the proposed algorithm is compared to three other algorithms: the simulated annealing weighted sum, the simulated annealing technique for order preference by similarity to the ideal solution (TOPSIS) and the Dijkstra algorithm. The weighted sum and TOPSIS methods are used to formulate the different attributes in the simulated annealing cost function. For the Sheffield scenario, simulation results show that the improved simulated annealing TOPSIS method improves traffic performance in the presence of congestion by an overall average of 19.22% in terms of travel time, fuel consumption and CO₂ emissions compared to the other algorithms; similar performance patterns were achieved for the Birmingham test scenario. PMID:27376289
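
The technique for order preference by similarity to the ideal solution (TOPSIS), used here inside the simulated annealing cost function, can be sketched on its own as follows; the two criteria (average travel speed as a benefit, road length as a cost) follow the abstract, while the route data and equal weights are illustrative assumptions.

```python
import numpy as np

def topsis_scores(matrix, weights, benefit):
    """Score alternatives by relative closeness to the ideal solution (TOPSIS).

    matrix: (n_routes, n_criteria); benefit[j] is True if larger is better.
    Higher score = closer to the ideal, farther from the anti-ideal.
    """
    norm = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - worst, axis=1)
    return d_neg / (d_pos + d_neg)

# candidate routes: [average travel speed (km/h), road length (km)]
routes = np.array([[60.0, 12.0],
                   [40.0, 8.0],
                   [25.0, 6.0]])
scores = topsis_scores(routes, weights=np.array([0.5, 0.5]),
                       benefit=np.array([True, False]))
best_route = int(np.argmax(scores))
```

In the paper's method, a score like this would be evaluated for candidate paths proposed by the simulated annealing search.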

By considering momentum transfer in the Fermi constraint procedure, the stability of the initial nuclei and of the fragments produced in heavy-ion collisions can be further improved in quantum molecular dynamics simulations. Cases of phase-space occupation probability larger than one are effectively reduced with the proposed procedure. Simultaneously, energy conservation can be better described for both individual nuclei and heavy-ion reactions. With the revised version of the improved quantum molecular dynamics model, the fusion excitation function of 16O+186W and the central collisions of Au+Au at 35 AMeV are re-examined. The fusion cross sections at sub-barrier energies and the charge distribution of fragments are better reproduced owing to the reduction of spurious nucleon emission. The charge and isotope distributions of fragments in Xe+Sn, U+U and Zr+Sn at intermediate energies are also predicted. More unmeasured extremely neutron-rich fragments with Z = 16-28 are observed in central collisions of 238U+238U than in 96Zr+124Sn, which indicates that multifragmentation of U+U may offer a fruitful pathway to new neutron-rich isotopes.

Proteochemometric (PCM) methods, which use descriptors of both interacting species, i.e. the drug and the target, are being successfully employed for the prediction of drug-target interactions (DTIs). However, the unavailability of a non-interacting dataset and determination of the applicability domain (AD) of a model are main concerns in PCM modeling. In the present study, traditional PCM modeling was improved by devising novel methodologies for reliable negative dataset generation and fingerprint-based AD analysis. In addition, various types of descriptors and classifiers were evaluated for their performance. The Random Forest and Support Vector Machine models outperformed the other classifiers (accuracies >98% and >89% for 10-fold cross-validation and external validation, respectively). The type of protein descriptor had a negligible effect on the developed models, encouraging the use of sequence-based descriptors over structure-based ones. To establish the practical utility of the built models, targets were predicted for approved anticancer drugs of natural origin, and the molecular recognition interactions between the predicted drug-target pairs were quantified with the help of a reverse molecular docking approach. The majority of the predicted targets are known targets for anticancer therapy; these results thus correlate well with the anticancer potential of the selected drugs. Interestingly, out of all predicted DTIs, thirty were found to be reported in the ChEMBL database, further validating the adopted methodology. The outcome of this study suggests that the proposed approach, involving the improved PCM methodology and molecular docking, can be successfully employed to elucidate the intricate mode of action of drug molecules as well as to reposition them for new therapeutic applications. PMID:26822863
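
The construction of a PCM descriptor and a fingerprint-based AD check can be sketched as follows; the tiny binary fingerprints and the 0.3 Tanimoto threshold are illustrative assumptions, not the study's actual descriptors or cutoff.

```python
def pcm_descriptor(drug_fp, target_desc):
    """Concatenate a ligand fingerprint and a protein descriptor into one PCM vector."""
    return list(drug_fp) + list(target_desc)

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints of equal length."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 0.0

def in_applicability_domain(query_fp, train_fps, threshold=0.3):
    """Fingerprint-based AD check: trust a prediction only if the query drug is
    similar enough to at least one training-set drug."""
    return max(tanimoto(query_fp, t) for t in train_fps) >= threshold

train = [[1, 1, 0, 1, 0], [0, 1, 1, 0, 1]]
inside = in_applicability_domain([1, 1, 0, 0, 0], train)   # Tanimoto 2/3 vs first fp
outside = in_applicability_domain([0, 0, 0, 0, 0], train)  # no bits set
```

Vectors built this way (one per drug-target pair) would then be fed to a classifier such as a Random Forest, with the AD check filtering which external predictions are considered reliable.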

The inexorable exposure of plants to combinations of abiotic stresses has affected the worldwide food supply. Crop improvement against these abiotic stresses has been a captivating approach to increasing yield and enhancing stress tolerance. Using traditional and modern breeding methods, characters that confer tolerance to these stresses have been accomplished. Genetic engineering and molecular breeding have undoubtedly helped in comprehending the intricate nature of the stress response, and understanding the cellular pathways involved in abiotic stress provides vital information on such responses. At the same time, genomic research for crop improvement has opened new avenues for breeding new varieties against abiotic stresses. Interpreting the responses of crop plants under stress is of great significance given the central role of crops in food and biofuel production. This review presents genomic-based approaches revealing the complex networks controlling the mechanisms of abiotic stress tolerance, and discusses possible modes of assimilating the information attained by these approaches, enabled by advances in the isolation and functional analysis of genes controlling yield and abiotic stress tolerance. PMID:26440315

Digital analysis of remotely sensed data has become an important component of many earth-science studies. These data are often processed through a set of preprocessing or "clean-up" routines that includes a correction for atmospheric scattering, often called haze. Various methods to correct or remove the additive haze component have been developed, including the widely used dark-object subtraction technique. A problem with most of these methods is that the haze values for each spectral band are selected independently. This can create problems because atmospheric scattering is highly wavelength-dependent in the visible part of the electromagnetic spectrum and the scattering values are correlated with each other. Therefore, multispectral data such as those from the Landsat Thematic Mapper and Multispectral Scanner must be corrected with haze values that are spectral-band dependent. An improved dark-object subtraction technique is demonstrated that allows the user to select a relative atmospheric scattering model to predict the haze values for all the spectral bands from a selected starting-band haze value. The improved method normalizes the predicted haze values for the different gain and offset parameters used by the imaging system. Examples of haze value differences between the old and improved methods for Thematic Mapper Bands 1, 2, 3, 4, 5, and 7 are 40.0, 13.0, 12.0, 8.0, 5.0, and 2.0 vs. 40.0, 13.2, 8.9, 4.9, 16.7, and 3.3, respectively, using a relative scattering model of a clear atmosphere. In one Landsat Multispectral Scanner image the haze value differences for Bands 4, 5, 6, and 7 were 30.0, 50.0, 50.0, and 40.0 for the old method vs. 30.0, 34.4, 43.6, and 6.4 for the new method using a relative scattering model of a hazy atmosphere.
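
The improved method's prediction of band-dependent haze values from a single starting-band value can be sketched as follows; the band centres, gains, offsets and the simple wavelength**(-n) relative scattering model are illustrative stand-ins, not the paper's calibrated values.

```python
def predict_haze(band_centers, start_band, start_dn, n, gain, offset):
    """Predict per-band haze DNs from one measured starting-band haze value.

    Relative scattering model: scattering ∝ wavelength**(-n)
    (larger n for clearer atmospheres, smaller n for hazier ones).
    Gain/offset normalization converts between radiance-domain haze and DN-domain haze.
    """
    lam0 = band_centers[start_band]
    # haze radiance in the starting band, removing that band's gain and offset
    rad0 = (start_dn - offset[start_band]) / gain[start_band]
    haze = {}
    for band, lam in band_centers.items():
        rad = rad0 * (lam / lam0) ** (-n)       # relative scattering model
        haze[band] = gain[band] * rad + offset[band]  # back to this band's DN scale
    return haze

# hypothetical TM-like band centres (μm), gains and offsets (illustrative only)
band_centers = {1: 0.485, 2: 0.56, 3: 0.66, 4: 0.83}
gain = {1: 1.0, 2: 1.3, 3: 1.1, 4: 0.9}
offset = {1: 2.0, 2: 2.3, 3: 1.5, 4: 1.0}
haze = predict_haze(band_centers, start_band=1, start_dn=40.0, n=4.0,
                    gain=gain, offset=offset)
```

The starting band round-trips to its own measured value, while the predicted haze decreases toward longer wavelengths as the scattering model dictates.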

A multitude of concrete-based structures are typically part of a light water reactor (LWR) plant, providing foundation, support, shielding, and containment functions. This use has made their long-term performance crucial for the safe operation of commercial nuclear power plants (NPPs). Extending reactor life to 60 years and beyond will likely increase the susceptibility to and severity of known forms of degradation. We seek to improve and extend the usefulness of results produced using the synthetic aperture focusing technique (SAFT) on ultrasonic data collected from thick, complex concrete structures such as those in NPPs. Towards these goals, we apply the time-frequency technique of wavelet packet decomposition and reconstruction using a mother wavelet that possesses the exact-reconstruction property. Instead of analyzing the coefficients of each decomposition node, however, we select and reconstruct specific nodes based on the frequency bands they contain, producing a frequency-band-specific time-series representation. SAFT is then applied to these frequency-specific reconstructions, allowing SAFT to visualize the reflectivity of a frequency band and that band's interaction with the contents of the concrete structure. Specially designed and fabricated test specimens can provide realistic flaws that are similar to actual flaws in terms of how they interact with a particular NDE technique. Artificial test blocks allow the isolation of certain testing problems as well as the variation of certain parameters. Because conditions in the laboratory are controlled, the number of unknown variables can be decreased, making it possible to focus on specific aspects, investigate them in detail, and gain further information on the capabilities and limitations of each method. To minimize artifacts caused by boundary effects, the dimensions of the specimens should not be too compact. In this paper, we apply this enhanced SAFT technique to a 2.134 m × 2.134 m × 1.016 m concrete test specimen.
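
The node-selective reconstruction can be sketched with a two-level Haar wavelet packet; Haar shares the exact-reconstruction property assumed here, though the mother wavelet actually used in the study may differ, and the naive node labelling below ignores the frequency reordering of general packet trees.

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step: low-pass (approximation) and high-pass (detail) halves."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_merge(a, d):
    """Exact inverse of haar_split."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def band_reconstruction(x, keep):
    """Two-level Haar wavelet-packet split; reconstruct keeping only one node.

    keep is one of 'aa', 'ad', 'da', 'dd'; all other nodes are zeroed, yielding a
    frequency-band-specific time series to which SAFT could then be applied.
    """
    a, d = haar_split(x)
    nodes = {}
    nodes['aa'], nodes['ad'] = haar_split(a)
    nodes['da'], nodes['dd'] = haar_split(d)
    kept = {k: (v if k == keep else np.zeros_like(v)) for k, v in nodes.items()}
    return haar_merge(haar_merge(kept['aa'], kept['ad']),
                      haar_merge(kept['da'], kept['dd']))

x = np.arange(8, dtype=float)
# exact reconstruction: the four single-node reconstructions sum back to the signal
total = sum(band_reconstruction(x, k) for k in ('aa', 'ad', 'da', 'dd'))
```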

Two efforts to improve the sensitivity and limits of detection for MCE with electrochemical detection are presented here. One is the implementation of a capillary expansion (bubble cell) at the detection zone to increase the exposed working electrode surface area. Bubble cell widths were varied from 1× to 10× the separation channel width (50 μm) to investigate the effects of electrode surface area on detection sensitivity, LOD, and separation efficiency. Improved detection sensitivity and decreased detection limits were obtained with increasing bubble cell width, and the LODs of dopamine and catechol detected in a 5× bubble cell were 25 nM and 50 nM, respectively. Meanwhile, fluorescence imaging results demonstrated ~8% and ~12% losses in separation efficiency in the 4× and 5× bubble cells, respectively. The other effort to reduce the LOD involves using field-amplified sample injection (FASI) for gated injection and field-amplified sample stacking (FASS) for hydrodynamic injection. Stacking effects are shown for both methods using amperometric detection and pulsed amperometric detection (PAD). The LODs of dopamine in a 4× bubble cell were 8 nM and 20 nM using FASI and FASS, respectively. However, improved LODs were not obtained for anionic analytes using either stacking technique. PMID:19802848

The effect of pulmonary absorption enhancers on the stability of active ingredients is an important factor for successful inhalation therapy, as are their effects on pharmacological activity and safety. We examined the effect of pulmonary absorption enhancers on the stability of insulin in dry powders prepared by a spray-drying technique. Although the hypoglycemic effect was greatly improved when a dry insulin powder containing citric acid (MIC SD) was administered, the insulin in MIC SD was unstable compared with the other powders examined. Bacitracin and Span 85, which are potent pulmonary absorption enhancers of insulin formulated in solutions, showed no deteriorative effect on the stability of dry insulin powder; however, they did not improve the hypoglycemic effect of insulin in dry powders. We modified the insulin dosage form with citric acid to improve insulin stability at room temperature without loss of hypoglycemic activity. MIC Mix was formulated as a combination of an insulin powder (MI') and a citric acid powder (MC). MIC Mix showed hypoglycemic activity comparable to MIC SD, while its insulin stability was much better than that of MIC SD under dry conditions at 60 °C. However, moisture lowered the insulin stability and changed the particle morphology of MIC Mix over time at 60 °C/75% relative humidity, suggesting that packaging that prevents moisture absorption is necessary for the MIC Mix powder. PMID:15129972

The structure of human protein HSPC034 has been determined by both solution NMR spectroscopy and X-ray crystallography. Refinement of the NMR structure ensemble, using a Rosetta protocol in the absence of NMR restraints, resulted in significant improvements not only in structure quality, but also in molecular replacement (MR) performance with the raw X-ray diffraction data using MOLREP and Phaser. This method has recently been shown to be generally applicable, with improved MR performance demonstrated for eight NMR structures refined using Rosetta [1]. Additionally, NMR structures of HSPC034 calculated by standard methods that include NMR restraints show improvements in the RMSD to the crystal structure and in MR performance in the order DYANA, CYANA, XPLOR-NIH, and CNS with explicit water refinement (CNSw). Further Rosetta refinement of the CNSw structures, perhaps owing to more thorough conformational sampling and/or a superior force field, was capable of finding alternative low-energy protein conformations that were equally consistent with the NMR data according to the RPF scores. Upon further examination, the additional MR-performance shortfall of the NMR-refined structures compared with the X-ray structure was attributed, in part, to crystal-packing effects, real structural differences, and inferior hydrogen bonding in the NMR structures. A good correlation between a decrease in the number of buried unsatisfied hydrogen-bond donors and improved MR performance demonstrates the importance of hydrogen-bond terms in the force field for improving NMR structures. The superior hydrogen-bond network in Rosetta-refined structures demonstrates that correct identification of hydrogen bonds should be a critical goal of NMR structure refinement. Inclusion of non-bivalent hydrogen bonds identified from Rosetta structures as additional restraints in the structure calculation results in NMR structures with improved MR performance. PMID:18816799

Transition metal dichalcogenides (TMDs) such as molybdenum disulfide (MoS2) have garnered significant interest in recent years. With a layered structure similar to graphene, TMDs also have an intrinsic band gap. This band gap makes them an attractive alternative to graphene in many applications. MoS2 in particular has received attention due to the placement and tunability of its band gap via functionalization, mechanical manipulation or physisorption. The latter of these is of interest in biosensor devices. Such applications depend on understanding physisorption on the MoS2 surface at the molecular level. This can be difficult to probe experimentally but is possible via computer simulation techniques such as molecular dynamics (MD) simulations. MD simulations, however, require a force field accurate for the process being modeled. Such a force field must correctly describe non-bonded interactions between substrate layers and between the surface and adsorbates. The force fields we are aware of have focused on intra-layer covalent bonding for structural and vibrational analysis. This work seeks to develop a more accurate parameterization of non-bonded interactions for MoS2 through DFT and MD simulations together with experimental characterization of surface adsorption.
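As a minimal illustration of the kind of non-bonded term such a parameterization targets, the sketch below evaluates a 12-6 Lennard-Jones pair energy; the epsilon and sigma values are placeholders for illustration only, not fitted MoS2 parameters.

```python
import math

def lj_energy(r, epsilon, sigma):
    """12-6 Lennard-Jones pair energy: 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Illustrative (not fitted) parameters for a hypothetical pair interaction
epsilon = 0.46   # well depth, kcal/mol (assumed)
sigma = 3.0      # zero-crossing distance, angstrom (assumed)

# The 12-6 potential has its minimum of -epsilon at r_min = 2^(1/6) * sigma
r_min = 2 ** (1 / 6) * sigma
assert abs(lj_energy(r_min, epsilon, sigma) + epsilon) < 1e-9
```

Fitting a non-bonded force field then amounts to tuning epsilon and sigma (per atom-type pair) so that energies and geometries from such terms reproduce DFT and experimental adsorption data.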

Ultrasound imaging, having the advantages of low cost and non-invasiveness over MRI and X-ray CT, has been reported by several studies to be an adequate complement to fluorescence molecular tomography, with the prospect of improving localization and quantification of fluorescent molecular targets in vivo. Building on previous work, an improved dual-modality fluorescence-ultrasound imaging system was developed and then validated in an imaging study with a preclinical tumor model. Ultrasound imaging and a profilometer were used to obtain the anatomical prior information and the 3D surface, respectively, to precisely extract the tissue boundary on both sides of the sample and thereby achieve improved fluorescence reconstruction. Furthermore, a pattern-based fluorescence reconstruction on the detection side was incorporated to enable dimensional reduction of the dataset while keeping the information useful for reconstruction. Because of the role attenuation plays in the current imaging geometry and the chosen reconstruction technique, we developed an attenuation-compensated Born-normalization method to reduce attenuation effects and cancel out experimental factors when collecting quantitative fluorescence datasets over a large area. Results of both simulation and phantom studies demonstrated that fluorescent targets could be recovered accurately and quantitatively using this reconstruction mechanism. Finally, an in vivo experiment confirmed that the imaging system, together with the proposed image reconstruction approach, was able to extract both functional and anatomical information, thereby improving quantification and localization of molecular targets.
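The Born-normalization step mentioned above divides each fluorescence measurement by the excitation measurement at the same source-detector pair, cancelling shared gain and coupling factors; a minimal sketch of that ratio (the attenuation-compensation part of the method is not reproduced here):

```python
def born_normalized(fluo, exc, floor=1e-12):
    """Born-normalized data: per source-detector pair, the ratio of the
    fluorescence measurement to the excitation measurement. Shared system
    gains and optode coupling factors cancel in the ratio. `floor` guards
    against division by a vanishing excitation signal."""
    return [f / max(e, floor) for f, e in zip(fluo, exc)]

# Toy measurements at three source-detector pairs (arbitrary units)
fluo = [2.0, 1.0, 0.5]
exc = [4.0, 4.0, 4.0]
print(born_normalized(fluo, exc))  # [0.5, 0.25, 0.125]
```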

The molecular interaction between hemoglobin (HHb), the major human heme protein, and the acridine dyes acridine orange (AO) and 9-aminoacridine (9AA) was studied by various spectroscopic, calorimetric and molecular modeling techniques. The dyes formed stable ground-state complexes with HHb, as revealed by spectroscopic data. Temperature-dependent fluorescence data showed that the strength of the dye-protein complexation was inversely proportional to temperature and that the fluorescence quenching was static in nature. The binding-induced conformational change in the protein was investigated using circular dichroism, synchronous fluorescence, 3D fluorescence and FTIR spectroscopy. Circular dichroism data also quantified the change in the α-helicity of hemoglobin due to the binding of acridine dyes. Calorimetric studies revealed the binding to be endothermic in nature for both AO and 9AA, though the latter had higher affinity, as was also observed from spectroscopic data. The binding of both dyes was entropy driven. pH-dependent fluorescence studies revealed the existence of electrostatic interactions between the protein and the dye molecules. Molecular modeling studies specified the binding site and the non-covalent interactions involved in the association. Overall, the results revealed that a small change in the acridine chromophore leads to remarkable alterations in the structural and thermodynamic aspects of binding to HHb. PMID:27077554

EPA's Office of Research and Development (ORD) develops innovative methods for use in environmental monitoring and assessment by scientists in Regions, states, and Tribes. Molecular-biology-based methods are not yet established in the environmental monitoring "tool box". SRI (Sci...

Anthracnose of strawberry may be caused by any of three Colletotrichum species: C. acutatum, C. gloeosporioides or C. fragariae. These destructive pathogens may infect the fruit, leaves, petioles, crowns or roots and may cause plant death. Traditional and molecular approaches were used to identify a...

A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
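The core update step can be sketched as a linear least-squares solve of the sensitivity relation S·Δp = Δm for the physical-parameter changes Δp given measured changes Δm. The report uses constrained optimization with singular value decomposition; this illustrative version uses plain normal equations on a toy sensitivity matrix:

```python
def lstsq_update(S, dm):
    """Least-squares solve of S @ dp = dm for small dense S via the normal
    equations (S^T S) dp = S^T dm. (The report's method is constrained
    optimization with SVD; this is the unconstrained analogue for
    illustration only.)"""
    rows, cols = len(S), len(S[0])
    # Form A = S^T S and b = S^T dm
    A = [[sum(S[k][i] * S[k][j] for k in range(rows)) for j in range(cols)]
         for i in range(cols)]
    b = [sum(S[k][i] * dm[k] for k in range(rows)) for i in range(cols)]
    # Gaussian elimination with partial pivoting
    for i in range(cols):
        p = max(range(i, cols), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, cols):
            f = A[r][i] / A[i][i]
            for c in range(i, cols):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    # Back substitution
    dp = [0.0] * cols
    for i in reversed(range(cols)):
        dp[i] = (b[i] - sum(A[i][c] * dp[c] for c in range(i + 1, cols))) / A[i][i]
    return dp

# Toy sensitivity matrix: 3 measured modal changes, 2 physical parameters
S = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]
dm = [0.5, 1.0, 1.0]
dp = lstsq_update(S, dm)  # parameter updates that best reproduce dm
```

In practice S would be assembled from partial derivatives of the system matrices with respect to each selected physical parameter, and the SVD-based solve would additionally truncate poorly observable directions.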

In initial use, the high-speed digital data acquisition systems at Langley Research Center's National Transonic Facility produced data containing unacceptably high noise levels. Described is a process whereby the contributing noise sources were identified and eliminated. The effects of 60 Hz power, system grounding, EMI/RFI, and other problems are discussed and the corrective action taken is outlined. The overall effort resulted in an improvement of greater than 5:1 in system performance. Although the report describes a system specifically used for wind tunnel data acquisition, the corrective techniques employed are generally applicable to large scale high-speed data systems where signal resolution in the low microvolts range is important.

This paper describes an approach using machine-learning pattern-recognition procedures to classify land-surface objects by their spectral and textural features on remotely sensed hyperspectral images, together with retrieval of biological parameters for the recognized classes of forests. A modified Bayesian classifier is used to improve the related procedures in the spatial and spectral domains. Direct and inverse problems of atmospheric optics are solved based on modeling results for the projective cover and density of the forest canopy for the selected classes of forests of different species and ages. Applying the proposed techniques to process images of high spectral and spatial resolution, we have detected object classes including forests within their contours on a particular image and can retrieve the phytomass amount of leaves/needles as well as the relevant total biomass amount for the forest canopy. PMID:26698785
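The paper's modified Bayesian classifier is not specified in detail here; as a hedged illustration of the underlying idea, the sketch below trains a plain Gaussian naive Bayes model on per-class spectral statistics and assigns a pixel to the maximum-likelihood class (the band values and class names are invented):

```python
import math

def train_gaussian_nb(samples):
    """Estimate per-class, per-band mean and variance (naive Bayes model).
    `samples` maps class label -> list of spectral vectors."""
    model = {}
    for label, vecs in samples.items():
        n, bands = len(vecs), len(vecs[0])
        means = [sum(v[b] for v in vecs) / n for b in range(bands)]
        variances = [max(sum((v[b] - means[b]) ** 2 for v in vecs) / n, 1e-6)
                     for b in range(bands)]
        model[label] = (means, variances)
    return model

def classify(model, pixel):
    """Return the maximum-likelihood class for a pixel's spectral vector,
    assuming independent Gaussian bands."""
    def loglik(means, variances):
        return sum(-0.5 * math.log(2 * math.pi * s) - (x - m) ** 2 / (2 * s)
                   for x, m, s in zip(pixel, means, variances))
    return max(model, key=lambda c: loglik(*model[c]))

# Toy two-band "spectra" for two forest classes (illustrative values only)
train = {"pine":  [[0.10, 0.40], [0.12, 0.42], [0.11, 0.38]],
         "birch": [[0.30, 0.60], [0.28, 0.62], [0.31, 0.58]]}
model = train_gaussian_nb(train)
print(classify(model, [0.11, 0.40]))  # pine
```

A "modified" Bayesian classifier of the kind described would additionally exploit spatial context (neighboring pixels) alongside these per-pixel spectral likelihoods.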

This paper presents an improvement in lead (Pb) recovery and sulphate removal from used Pb acid batteries (ULAB) through an electrokinetic technique, a process aimed at eliminating the environmental pollution that arises from the emission of gases and metal particles in the existing high-temperature pyrometallurgical process. Two different cell configurations were designed for the Pb and sulphate separation from ULAB: (1) one with a Nafion membrane placed between the anode and middle compartments and an Agar membrane between the cathode and middle compartments, and (2) another with only Agar membranes placed on both sides of the middle compartment. This paper concludes that the cell with only Agar membranes performed better than the cell with Nafion and Agar membranes in combination, and also explains the mechanism underlying the chemical and electrochemical processes in the cell. PMID:22483596

The major objective of this report is to help the US Nuclear Regulatory Commission (NRC) in its regulatory mission, particularly with respect to improving the use of cost-benefit analysis and the economic evaluation of resources within the NRC. The objectives of this effort are: (1) to identify current and future NRC requirements (e.g., licensing) for valuing nonmarket goods; (2) to identify, highlight, and present the relevant efforts of selected federal agencies, some with over two decades of experience in valuing nonmarket goods, in this area; and (3) to review methods for valuing nonmarket impacts and to provide estimates of their magnitudes. Recently proposed legislation may result in a requirement for not only more sophisticated valuation analyses, but also more extensive applications of these techniques to issues of concern to the NRC. This paper is intended to provide the NRC with information to more efficiently meet such requirements.

Two topics are discussed separately in this thesis. In the first part, a semiclassical quark model, called the Thomas-Fermi quark model, is reviewed. After a modified approach to spin in the model is introduced, I present the calculation of the spectra of octet and decuplet baryons. The six-quark doubly strange H-dibaryon state is also investigated. In the second part, two numerical techniques which improve lattice QCD calculations are covered. The first one, which we call Polynomial-Preconditioned GMRES-DR (PP-GMRESDR), is used to speed up the solution of large systems of linear equations in LQCD. The second one, called the Polynomial-Subtraction method, is used to help reduce the noise variance of the calculations for disconnected loops in LQCD.

Because the number of new poorly soluble active pharmaceutical ingredients (APIs) is increasing, it is important to investigate possibilities for improving their solubility in order to obtain final pharmaceutical formulations with enhanced bioavailability. One strategy to increase drug solubility is the inclusion of APIs in cyclodextrins. The aim of this study was to investigate the possibility of improving aripiprazole solubility by inclusion in (2-hydroxy)propyl-β-cyclodextrin (HPBCD) with simultaneous manipulation of the pH of the medium and addition of polyvinylpyrrolidone (PVP). Aripiprazole-HPBCD complexes were prepared by spray drying aqueous drug-HPBCD solutions, and their properties were compared with those prepared by solvent-drop co-grinding and physical mixing. The obtained powders were characterized by thermoanalytical methods (TGA and DSC) and FTIR spectroscopy, and their dissolution properties were assessed, while the binding of aripiprazole into the cavity of HPBCD was studied by molecular docking simulations. The solubilization capacity was found to be dependent on pH as well as on the ionic composition of the buffer solution. The presence of PVP in the formulation could affect the solubilization capacity significantly, but further experimentation is required before its effect is fully understood. On the basis of solubility studies, the drug/HPBCD stoichiometry was found to be 1:3. The spray-dried products were free of crystalline aripiprazole, possessed a higher solubility and dissolution rate, and were sufficiently stable over a prolonged period of storage. Spray drying of cyclodextrin solutions proved to be an appropriate and efficient technique for the preparation of highly soluble inclusion compounds of aripiprazole and HPBCD. PMID:22535520

Rapid monitoring of the response to treatment in cancer patients is essential to predict the outcome of a therapeutic regimen early in the course of treatment. Conventional methods are laborious, time-consuming, subjective and lack the ability to study different biomolecules and their interactions simultaneously. Since the mechanisms of cancer and its response to therapy depend on molecular interactions rather than on single biomolecules, an assay capable of studying molecular interactions as a whole is preferred. Fourier transform infrared (FTIR) spectroscopy has become a popular technique in the field of cancer therapy, with an ability to elucidate molecular interactions. The aim of this study was to explore the utility of the FTIR technique along with multivariate analysis to understand whether the method has the resolution to identify differences in the mechanism of therapeutic response. Towards achieving this aim, we utilized a mouse xenograft model of retinoblastoma and nanoparticle-mediated targeted therapy. The results indicate that the mechanism underlying the response differed between the treated and untreated groups, which can be elucidated by the unique spectral signatures generated by each group. The study establishes the efficiency of the non-invasive, label-free and rapid FTIR method in assessing the interactions of nanoparticles with cellular macromolecules towards monitoring the response to cancer therapeutics. PMID:26568521

The X-ray phase imaging method has been applied to observe soft biological tissues, which can be imaged by exploiting the so-called “Talbot effect” produced by an X-ray grating. One type of X-ray phase imaging has been reported that combines an X-ray imaging microscope, equipped with a Fresnel zone plate, with a phase grating. Using the fringe scanning technique, a high-precision phase shift image can be obtained by displacing the grating step by step and measuring dozens of sample images. The number of images is selected to reduce the error caused by the non-sinusoidal component of the Talbot self-image at the imaging plane. A larger number suppresses the error more but increases radiation exposure and requires higher mechanical stability of the equipment. In this paper, we analyze the approximation error of the fringe scanning technique for X-ray microscopy using just one grating and propose an improved algorithm. We compute the approximation error by iteration and substitute it into the process of reconstructing the phase shift. This procedure suppresses the error even with few sample images. The results of simulation experiments show that the precision of the phase shift image reconstructed by the proposed algorithm with 4 sample images is almost the same as that reconstructed by the conventional algorithm with 40 sample images. We have also succeeded in an experiment with real data.
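For reference, the conventional M-step fringe-scanning estimate recovers the phase at each pixel from M phase-stepped intensity samples as phi = atan2(Σ I_k sin(2πk/M), Σ I_k cos(2πk/M)). The sketch below demonstrates this baseline on a purely sinusoidal self-image; the paper's iterative error-correction refinement for non-sinusoidal self-images is not reproduced here.

```python
import math

def fringe_scan_phase(intensities):
    """Conventional M-step fringe-scanning phase estimate for one pixel:
    phi = atan2(sum_k I_k*sin(2*pi*k/M), sum_k I_k*cos(2*pi*k/M)).
    Exact when the self-image is purely sinusoidal."""
    m = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / m) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / m) for k, I in enumerate(intensities))
    return math.atan2(s, c)

# Simulate a purely sinusoidal self-image with a known phase of 0.7 rad, M = 4
true_phi = 0.7
M = 4
I = [1.0 + 0.5 * math.cos(2 * math.pi * k / M - true_phi) for k in range(M)]
est = fringe_scan_phase(I)  # recovers ~0.7 rad
```

With a non-sinusoidal self-image this estimator acquires the approximation error the paper analyzes, which is why a naive 4-step scan normally needs many more steps or, as proposed, an iterative correction.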

The development of a heuristic method for the solution of pure integer linear programming problems is documented and demonstrated. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions against a given set of constraints, it offers significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing the computational effort involved in an algorithm, this algorithm is compared to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC-compatible Pascal, is also presented and discussed.
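A stripped-down version of the idea (start from a feasible integer point, then make ±1 exploratory moves per variable while they improve the objective and stay feasible) can be sketched as follows. This is a simplified coordinate search, not the report's full type 1/type 2 procedure, and like any such heuristic it may stop at a local optimum:

```python
def exploratory_integer_search(c, feasible, x0, max_iter=1000):
    """Maximize c.x over integer points by +/-1 exploratory moves from a
    starting point (a simplified Hooke-and-Jeeves-style neighborhood search).
    `feasible` is a predicate implementing the constraint set."""
    def value(x):
        return sum(ci * xi for ci, xi in zip(c, x))
    x = list(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for step in (1, -1):
                cand = x[:]
                cand[i] += step
                # Accept the move only if it stays feasible and improves
                if feasible(cand) and value(cand) > value(x):
                    x, improved = cand, True
        if not improved:
            break
    return x

# Toy problem: maximize 3x + 2y subject to x + y <= 5, 0 <= x <= 3, y >= 0.
# (The report starts from a rounded simplex solution; (0, 0) is used here
# simply to exercise the search.)
feasible = lambda v: v[0] + v[1] <= 5 and 0 <= v[0] <= 3 and v[1] >= 0
best = exploratory_integer_search([3, 2], feasible, [0, 0])
print(best)  # [3, 2]
```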

Osteoporosis is a medical condition affecting men and women of different age groups and populations. The compromised bone quality caused by this disease represents an important challenge when a surgical procedure (e.g., spinal fusion) is needed after failure of conservative treatments. Different pedicle screw designs and instrumentation techniques have been explored to enhance spinal device fixation in bone of compromised quality. These include alterations of screw thread design, optimization of pilot hole size for non-self-tapping screws, modification of the implant's trajectory, and bone cement augmentation. While the true benefits and limitations of any procedure may not be realized until they are observed in a clinical setting, axial pullout tests, due in large part to their reproducibility and ease of execution, are commonly used to estimate the device's effectiveness by quantifying the change in force required to remove the screw from the body. The objective of this investigation is to provide an overview of the different pedicle screw designs and the associated surgical techniques either currently utilized or proposed to improve pullout strength in osteoporotic patients. Mechanical comparisons as well as potential advantages and disadvantages of each consideration are provided herein. PMID:24724097

A new technique to improve the tracking performance of a ship-borne mobile telemetry antenna system in the marine environment is presented for the Korea Space Launch Vehicle-I (KSLV-I) mission. The concept of 'pointing bias' is introduced to compensate for the instability or inaccuracy of a sensor in the ship-borne mobile telemetry antenna system. LEO satellites, in the form of a 'tracking campaign', are used to measure the pointing bias. The proposed technique was verified through tests in the Jeju sea and the Pacific sea in the period of November 23-28, 2012. The results demonstrated that azimuth pointing biases of about -0.27° and -0.49° appeared in the Jeju sea and the Pacific sea, respectively, and the pointing bias proved to be mainly related to the heading measurement error of the gyrocompass. Taking the pointing bias results in the Jeju sea and the Pacific sea into consideration, the gyrocompass was corrected by 0.25° to compensate for the heading measurement error on the ship-borne mobile telemetry antenna system. The correction value of 0.25° was selected to reduce the risk in tracking. Due to the insufficient correction, a residual pointing bias of about -0.12° was observed in the Pacific sea, but successful tracking of KSLV-I was achieved on January 30, 2013.

Outdoor residual sprays are among the most common methods for targeting pestiferous ants in urban pest management programs. If impervious surfaces such as concrete are treated with these insecticides, the active ingredients can be washed from the surface by rain or irrigation. As a result, residual sprays with fipronil and pyrethroids are found in urban waterways and aquatic sediments. Given the amount of insecticides applied to urban settings for ant control and their possible impact on urban waterways, the development of alternative strategies is critical to decrease the overall amounts of insecticides applied, while still achieving effective control of target ant species. Herein we report a "pheromone-assisted technique" as an economically viable approach to maximize the efficacy of conventional sprays targeting the Argentine ant. By applying insecticide sprays supplemented with an attractive pheromone compound, (Z)-9-hexadecenal, Argentine ants were diverted from nearby trails and nest entrances and subsequently exposed to insecticide residues. Laboratory experiments with fipronil and bifenthrin sprays indicated that the overall kill of the insecticides on Argentine ant colonies was significantly improved (a 57-142% increase) by incorporating (Z)-9-hexadecenal in the insecticide sprays. This technique, once successfully implemented in practical pest management programs, has the potential to provide maximum control efficacy with a reduced amount of insecticides applied in the environment. PMID:24665716

The fecal monitoring technique for measuring the absorption of Mn, Se and Fe was studied in eight piglets using high-resolution gamma spectrometry. Four-day-old piglets were fed a complete liquid diet for five days prior to the administration of an isotope dose (75Se, 54Mn, 59Fe) equilibrated with the milk feeding. 51CrCl3 was used as a fecal marker. Subsequently, stool and urine samples were collected daily for 15-21 days. Following counting, the % fecal excretion of the administered dose was calculated. As 0 to 33% of the administered 51CrCl3 was absorbed, this fecal marker is inappropriate for piglets. Results indicate that endogenous excretion for each of the isotopes was not constant but decreased exponentially with time. An improved method for calculating the endogenous excretion was therefore developed. This method is based on the pattern of endogenous excretion in comparable piglets injected intravenously with the same isotopes, and on the level of endogenous excretion in the orally fed animals in the post-absorptive phase of excretion. These findings have important implications for the estimation of endogenous excretion in future fecal monitoring absorption studies. Previous results using the latter technique have frequently underestimated true absorption.
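The exponentially decreasing endogenous excretion described above can be estimated by a log-linear least-squares fit, which is one simple way to implement the kind of correction described; the daily excretion values below are synthetic, not the study's data:

```python
import math

def fit_exponential(ts, ys):
    """Log-linear least-squares fit of y = y0 * exp(-k * t).
    Returns (y0, k). Requires strictly positive ys."""
    n = len(ts)
    ly = [math.log(y) for y in ys]
    mt = sum(ts) / n
    my = sum(ly) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ly))
             / sum((t - mt) ** 2 for t in ts))
    y0 = math.exp(my - slope * mt)
    return y0, -slope

# Synthetic daily endogenous-excretion values decaying with k = 0.2 per day
ts = [1, 2, 3, 4, 5]
ys = [5.0 * math.exp(-0.2 * t) for t in ts]
y0, k = fit_exponential(ts, ys)  # recovers y0 ~ 5.0 and k ~ 0.2
```

With such a fit, the endogenous component could be extrapolated back over the absorption period and subtracted from total fecal excretion before computing true absorption.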

Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While ASL traditionally employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to its inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full-brain perfusion images were acquired at a 3×3×5 mm³ nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects were acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement with 3D GRASE in CBF quantification, 3DGP demonstrated reduced through-plane blurring, improved anatomical detail, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211

In individuals with ALS, rehabilitation is mainly designed to prevent fatigue and contractures, to improve independence and activities for as long as possible, to optimize the ability to live with the handicap, and finally to maximize quality of life. The functional impairment must be defined, and physical therapy techniques have to be adapted to each patient and reevaluated frequently during the course of the disease. Various types of massage and exercise, monitored by a physical therapist, are effective. Strengthening or endurance exercises are controversial, as exercise may injure muscle fibres and motor neurons. Isometric exercise, short of fatigue, of unaffected muscles is recommended. Range-of-motion exercise is critically important for preventing contractures. Assistive and adaptive equipment is essential for maintaining the patient's activities of daily living, and home equipment preserves independence. Several orthoses for hand, arm, foot or cervical weakness are available. A wheelchair is an important adaptive device when walking becomes too fatiguing or impossible. The choice of special options and features may require attention. Pulmonary complications are prevented with techniques adapted to bronchial obstruction. Based on the degree of weakness of limb and axial muscles, six stages of functional impairment can be defined, ranging from fully ambulatory in stage I to bedridden and totally dependent in stage VI. This staging provides a framework for physical therapy evaluation and guidance for appropriate rehabilitation in ALS patients. PMID:17128118

Human rhinoviruses (RVs), comprising three species (A, B, and C) of the genus Enterovirus, are responsible for the majority of upper respiratory tract infections and are associated with severe lower respiratory tract illnesses such as pneumonia and asthma exacerbations. High genetic diversity and continuous identification of new types necessitate regular updating of the diagnostic assays for the accurate and comprehensive detection of circulating RVs. Methods for molecular typing based on phylogenetic comparisons of a variable fragment in the 5′ untranslated region were improved to increase assay sensitivity and to eliminate nonspecific amplification of human sequences, which are observed occasionally in clinical samples. A modified set of primers based on new sequence information and improved buffers and enzymes for seminested PCR assays provided higher specificity and sensitivity for virus detection. In addition, new diagnostic primers were designed for unequivocal species and type assignments for RV-C isolates, based on phylogenetic analysis of partial VP4/VP2 coding sequences. The improved assay was evaluated by typing RVs in >3,800 clinical samples. RVs were successfully detected and typed in 99% of the samples that were RV positive in multiplex diagnostic assays. PMID:24789198

Rice is one of the main pillars of food security in India. Its improvement for higher yield in a sustainable agriculture system is also vital to meet the energy and nutritional needs of a growing world population, expected to reach more than 9 billion by 2050. The high-quality genome sequence of rice has provided a rich resource to mine information about the diversity of genes and alleles which can contribute to the improvement of useful agronomic traits. Defining the function of each gene and regulatory element of rice remains a challenge for the rice community in the coming years. Subsequent to its participation in the IRGSP, India has continued to contribute in the areas of diversity analysis, transcriptomics, functional genomics, marker development, QTL mapping and molecular breeding, through national and multi-national research programs. These efforts have helped generate resources for rice improvement, some of which have already been deployed to mitigate losses due to environmental stress and pathogens. With renewed efforts, Indian researchers are making new strides, along with the international scientific community, in both basic research and the realization of its translational impact. PMID:26743769

The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow for robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open-source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented, and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction of convergence time.

IESIP, an Improved Exploratory Search Technique for Pure Integer Linear Programming Problems, addresses the problem of optimizing an objective function of one or more variables subject to a set of confining functions or constraints by a method called discrete optimization or integer programming. Integer programming is based on a specific form of the general linear programming problem in which all variables in the objective function and all variables in the constraints are integers. While more difficult, integer programming is required for accuracy when modeling systems with small numbers of components such as the distribution of goods, machine scheduling, and production scheduling. IESIP establishes a new methodology for solving pure integer programming problems by utilizing a modified version of the univariate exploratory move developed by Robert Hooke and T.A. Jeeves. IESIP also takes some of its technique from the greedy procedure and the idea of unit neighborhoods. A rounding scheme uses the continuous solution found by traditional methods (simplex or other suitable technique) and creates a feasible integer starting point. The Hooke and Jeeves exploratory search is modified to accommodate integers and constraints and is then employed to determine an optimal integer solution from the feasible starting solution. The user-friendly IESIP allows for rapid solution of problems up to 10 variables in size (limited by DOS allocation). Sample problems compare IESIP solutions with the traditional branch-and-bound approach. IESIP is written in Borland's TURBO Pascal for IBM PC series computers and compatibles running DOS. Source code and an executable are provided. The main memory requirement for execution is 25K. This program is available on a 5.25 inch 360K MS DOS format diskette. IESIP was developed in 1990. IBM is a trademark of International Business Machines. TURBO Pascal is registered by Borland International.

The electrostatic interaction between a chemical and its site of biological action is often important in determining biological activity. In order to include this interaction in methods to assess the potential biological activity of large molecules, rapid and reliable techniques ...

Chalcones are naturally occurring aromatic ketones, which consist of an α,β-unsaturated carbonyl system joining two aryl rings. These compounds are reported to exhibit several pharmacological activities, including antiparasitic, antibacterial, antifungal, anticancer, immunomodulatory, nitric oxide inhibition and anti-inflammatory effects. In the present work, a Quantitative Structure-Activity Relationship (QSAR) study is carried out to classify chalcone derivatives with respect to their antileishmanial activity (active/inactive) on the basis of molecular descriptors. For this purpose, two descriptor-selection techniques are employed: the Successive Projections Algorithm (SPA) and the Genetic Algorithm (GA). The selected descriptors are initially employed to build Linear Discriminant Analysis (LDA) models. An additional investigation is then carried out to determine whether the results can be improved by using a non-parametric classification technique (One Nearest Neighbour, 1NN). In a case study involving 100 chalcone derivatives, the 1NN models were found to provide better rates of correct classification than LDA, in both the training and test sets. The best result was achieved by a SPA-1NN model with six molecular descriptors, which provided correct classification rates of 97% and 84% for the training and test sets, respectively. PMID:24090733
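
A one-nearest-neighbour classifier of the kind compared here is simple to sketch (illustrative only; the two-descriptor values and labels below are invented, not the chalcone dataset):

```python
import math

def one_nn(train_X, train_y, x):
    # assign x the class label of its nearest training sample (Euclidean distance)
    dists = [math.dist(x, xi) for xi in train_X]
    return train_y[dists.index(min(dists))]

# invented two-descriptor training set
train_X = [(0.1, 0.2), (0.9, 0.8), (0.2, 0.1), (0.8, 0.9)]
train_y = ["inactive", "active", "inactive", "active"]
print(one_nn(train_X, train_y, (0.7, 0.95)))  # → active
```

Because 1NN makes no assumption about class boundaries, it can outperform LDA when the active/inactive separation in descriptor space is non-linear, as observed in the study.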

Peripheral nerve injury is common especially among young individuals. Although injured neurons have the ability to regenerate, the rate is slow and functional outcomes are often poor. Several potential therapeutic agents have shown considerable promise for improving the survival and regenerative capacity of injured neurons. These agents are reviewed within the context of their molecular mechanisms. The PI3K/Akt and Ras/ERK signaling cascades play a key role in neuronal survival. A number of agents that target these pathways, including erythropoietin, tacrolimus, acetyl-l-carnitine, n-acetylcysteine and geldanamycin have been shown to be effective. Trk receptor signaling events that up-regulate cAMP play an important role in enhancing the rate of axonal outgrowth. Agents that target this pathway including rolipram, testosterone, fasudil, ibuprofen and chondroitinase ABC hold considerable promise for human application. A tantalizing prospect is to combine different molecular targeting strategies in complementary pathways to optimize their therapeutic effects. Although further study is needed prior to human trials, these modalities could open a new horizon in the clinical arena that has so far been elusive. PMID:25220611

Alfalfa (Medicago sativa L.) is a major forage legume grown extensively worldwide with important agronomic and environmental attributes. Insufficient cold hardiness is a major impediment to its reliable production in northern climates. Improvement of freezing tolerance using conventional breeding approaches is slowed by the quantitative nature of inheritance and strong interactions with the environment. The development of gene-based markers would facilitate the identification of genotypes with superior stress tolerance. Successive cycles of recurrent selection were applied using an indoor screening method to develop populations with significantly higher tolerance to freezing (TF). Bulk segregant analysis of heterogeneous TF populations identified DNA variations that are progressively enriched in frequency in response to selection. Polymorphisms resulting from intragenic variations within a dehydrin gene were identified and could potentially lead to the development of robust selection tools. Our results illustrate the benefits of feedback interactions between germplasm development programs and molecular physiology for a deeper understanding of the molecular and genetic bases of cold hardiness. PMID:22452626

We report resistance versus magnetic field measurements for a La0.65Sr0.35MnO3/SrTiO3/La0.65Sr0.35MnO3 tunnel junction grown by molecular-beam epitaxy, that show a large field window of extremely high tunneling magnetoresistance (TMR) at low temperature. Scanning the in-plane applied field orientation through 360°, the TMR shows fourfold symmetry, i.e., biaxial anisotropy, aligned with the crystalline axis but not the junction geometrical long axis. The TMR reaches ~1900% at 4 K, corresponding to an interfacial spin polarization of >95% assuming identical interfaces. These results show that uniaxial anisotropy is not necessary for large TMR, and lay the groundwork for future improvements in TMR in manganite junctions.

Lithium-based polymer batteries for aerospace applications need the ability to operate in temperatures ranging from −70 to 70 °C. Current state-of-the-art solid polymer electrolytes (based on amorphous polyethylene oxide, PEO) have acceptable ionic conductivities (10⁻⁴ to 10⁻³ S/cm) only above 60 °C. Higher conductivity can be achieved in the current systems by adding solvent or plasticizers to the solid polymer to improve ion transport. However, this can compromise the dimensional and thermal stability of the electrolyte, as well as compatibility with electrode materials. One of NASA Glenn Research Center's objectives in the PERS program is to develop new electrolytes having unique molecular architectures and/or novel ion transport mechanisms, leading to good ionic conductivity at room temperature and below without solvents or plasticizers.

The first stage of production of any oil reservoir involves oil displacement by natural drive mechanisms such as solution gas drive, gas cap drive and gravity drainage. Typically, improved oil recovery (IOR) methods are applied to oil reservoirs that have been depleted naturally. In more recent years, IOR techniques have been applied to reservoirs even before their natural energy drive is exhausted by primary depletion. Descriptive screening criteria for IOR methods are used to select the appropriate recovery technique according to the fluid and rock properties. This methodology helps in assessing the most suitable recovery process for field deployment of a candidate reservoir. However, the already published screening guidelines neither provide information about the expected reservoir performance nor suggest a set of project design parameters that can be used towards the optimization of the process. In this study, artificial neural networks (ANN) are used to build a high-performance neuro-simulation tool for screening different improved oil recovery techniques: miscible injection (CO2 and N2), waterflooding and steam injection processes. The simulation tool consists of proxy models that implement a multilayer cascade feedforward back propagation network algorithm. The tool is intended to narrow the ranges of possible scenarios to be modeled using conventional simulation, reducing the extensive time and energy spent in dynamic reservoir modeling. A commercial reservoir simulator is used to generate the data to train and validate the artificial neural networks. The proxy models are built considering four different well patterns with different well operating conditions as the field design parameters. Different expert systems are developed for each well pattern. The screening networks predict oil production rate and cumulative oil production profiles for a given set of rock and fluid properties, and design parameters. The results of this study show that the networks are

Research over several decades by several institutions has shown that alkali-promoted metal sulfide catalysts are capable of producing mixed alcohols from syngas with high selectivity and yield. Unfortunately, process models suggest that syngas to mixed alcohol processes, and especially thermochemical biomass to mixed alcohol processes, require improvements to sulfide catalyst activity and/or selectivity for acceptable economics. These improvements, if incremental, cannot result in increased process complexity, capital expenditure, or catalyst costs. It is well accepted among catalyst researchers that thermal processing techniques like calcining and reduction can have profound effects on the properties and performance of finished catalysts, and that small variations in thermal processing do not usually affect the overall cost of the catalyst. Metal sulfide catalysts are no exception but surprisingly, little attention has been given to the effects of thermal treatment on bulk metal sulfide mixed alcohol catalysts. This presentation will discuss how parameters like temperature, dwell time, metal ratios, and purge gas affect the performance and physical properties of K-Co/Mo catalysts.

A blackout can take place in an entire power system, or in part of it, due to extreme voltage instability (voltage collapse) that can appear abruptly. Prediction of instability and continuous monitoring of power system performance are therefore essential. This paper begins with a broad overview of the voltage stability indices previously studied in the literature that share a common foundation in their formulation. An improved voltage stability indicator is then introduced, obtained through multi-criteria integration and enhancement of the original indices using linear algebra methods. The proposed algorithm is found to overcome the computational limitations of the existing indices. A comparative analysis of the indices is then presented against the established techniques of modal analysis (sensitivity, eigenvalues, right eigenvectors, and bus participation factors), taken as a precise reference algorithm. Finally, the IEEE 14-bus and 30-bus test systems are used to verify the algorithm and to compare the performance of the improved indicator with the existing indices.
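
The modal-analysis quantities named above (eigenvalues, right and left eigenvectors, bus participation factors) can be sketched for an illustrative reduced power-flow Jacobian (the 2×2 matrix below is invented for demonstration, not an IEEE test-system Jacobian):

```python
import numpy as np

def modal_indices(J_R):
    """Smallest-eigenvalue mode of the reduced power-flow Jacobian J_R.

    A small eigenvalue signals proximity to voltage collapse; the bus
    participation factors (elementwise products of the right and left
    eigenvector entries) rank the buses most involved in that critical mode.
    Assumes J_R is diagonalizable.
    """
    vals, rvec = np.linalg.eig(J_R)
    lvec = np.linalg.inv(rvec)          # left eigenvectors as rows
    k = np.argmin(np.abs(vals))         # critical (smallest-magnitude) mode
    p = np.abs(rvec[:, k] * lvec[k, :])
    return vals[k].real, p / p.sum()

J_R = np.array([[4.0, 1.0],
                [1.0, 2.0]])            # invented 2-bus reduced Jacobian
lam, participation = modal_indices(J_R)
print(round(lam, 4), participation.argmax())  # critical eigenvalue, weakest bus index
```

In an actual study, J_R would be the reduced Jacobian obtained from the power-flow equations of the 14-bus or 30-bus system, and the weakest bus identified this way is the natural candidate for reactive-power support.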

Recent advances in real-time synthetic scene generation for Hardware-in-the-loop (HWIL) testing at the U.S. Army Aviation and Missile Command (AMCOM) Aviation and Missile Research, Development, and Engineering Center (AMRDEC) improve both performance and fidelity. Modeling ground target scenarios requires tradeoffs because of limited texture memory for imagery and limited main memory for elevation data. High-resolution insets have been used in the past to provide better fidelity in specific areas, such as in the neighborhood of a target. Improvements for ground scenarios include smooth transitions for high-resolution insets to reduce high spatial frequency artifacts at the borders of the inset regions and dynamic terrain paging to support large area databases. Transport lag through the scene generation system, including sensor emulation and interface components, has been dealt with in the past through the use of sub-window extraction from oversize scenes. This compensates for spatial effects of transport lag but not temporal effects. A new system has been developed and used successfully to compensate for a flashing coded beacon in the scene. Other techniques have been developed to synchronize the scene generator with the seeker under test (SUT) and to model atmospheric effects, sensor optics and electronics, and angular emissivity attenuation.

An improved neutron activation technique is analyzed that can be used for the characterization of the neutron field in low-neutron-flux environments, such as medical linacs. Because of the much lower neutron fluence rates, thick rather than thin materials have been used. The study focuses on the calculation of the basic components of neutron activation analysis required for accurate results, such as the efficiency of the gamma detector used for γ-spectrometry, as well as the crucial correction factors required when dealing with thick samples of different geometries and forms. A Monte Carlo detector model, implemented in the Geant4 MC code, was adjusted in accordance with results from various measurements performed. Moreover, to estimate the self-shielding correction factors, a new approach combining Monte Carlo and analytical calculations is presented. This improvement gives more accurate results, which are important for both activation and shielding studies in many facilities. Quite good agreement between the neutron fluxes is achieved; according to the data obtained, a mean value of (2.13±0.34)×10⁵ n cm⁻² s⁻¹ is representative for the isocenter of the specific linac, corresponding to a fluence of (5.53±0.94)×10⁶ n cm⁻² Gy⁻¹. Comparable fluences are reported in the literature for similar linacs operating with photon beams at 15 MeV.

Citizens Gas and Coke Utility operates three coke oven batteries, producing both foundry coke and blast furnace coke, under the trade name Indianapolis Coke. Active participation in the regulation negotiation process by the Vice President of Indianapolis Coke allowed the company to accurately anticipate the environmental regulations long before they were set in law. Several improvements were put into motion that helped the company meet the new environmental regulations. Better trained operators with new job positions dedicated solely to environmental compliance, an extensive environmental training program, and two innovations, a portable oven door milling and cleaning machine and three new computer applications, are the result of team efforts. The focus of this paper is the development of the computer applications designed to enhance three areas of environmental compliance. The three areas addressed by the applications are documentation and information deployment, problem solving, and resource allocation. Through quality improvement techniques and team-oriented problem solving, new approaches to environmental data collection and analysis have helped Indianapolis Coke meet the ever-tightening environmental regulations.

Current realizations of the Six-Port technique for measuring the complex reflection coefficient require accurate information about the system calibration constants. An examination of the sensitivity of the solution of the reflection coefficient to calibration constants is undertaken. An improved set of design specifications is derived based on the sensitivity of the solution and constraints on power measurement accuracy. It has been recognized that the system equations for the Six-Port are an overdetermined set of equations. Previously, this overdeterminedness has not been used to reduce the sensitivity of the solution of the reflection coefficient to variation of the calibration constants. An algorithm is described which makes use of the extra calibration constants. A Fortran program is presented which implements the algorithm. A hardware realization based on the improved set of design specifications is described. The circuit operates at a center frequency of 900 MHz and performs over a 16% band of frequencies without recalibration. Errors in phase measurement for this Six-Port implementation are less than two degrees except at the band edges. Errors in magnitude measurement are less than 10% except for measurements of small values of reflection coefficient magnitude (0.3) for which errors are 20%.
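
Exploiting the overdetermined Six-Port equations to reduce sensitivity amounts to solving for the unknowns in a least-squares sense, using all the redundant readings rather than an exactly determined subset. A generic sketch of this idea (the matrix and readings below are invented placeholders, not the actual Six-Port calibration equations, which are nonlinear in the reflection coefficient):

```python
import numpy as np

# Overdetermined linear system A x ≈ b: four equations, two unknowns,
# standing in for redundant calibrated power readings.
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
b = np.array([1.0, 2.0, 3.1, -0.9])   # noisy "measurements"

# The least-squares solution uses every equation, so errors in any one
# reading (or calibration constant) perturb the answer less than they
# would in an exactly determined 2-equation subset.
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)
```

The residual vector also provides a built-in consistency check: an unusually large residual flags a power reading or calibration constant that has drifted since calibration.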

The low success rate of animal cloning by somatic cell nuclear transfer (SCNT) is believed to be associated with epigenetic errors, including abnormal DNA hypermethylation. Recently, using round spermatids, we elucidated that after nuclear transfer, treatment of zygotes with trichostatin A (TSA), an inhibitor of histone deacetylase, can remarkably reduce abnormal DNA hypermethylation depending on the origins of the transferred nuclei and their genomic regions [S. Kishigami, N. Van Thuan, T. Hikichi, H. Ohta, S. Wakayama, E. Mizutani, T. Wakayama, Epigenetic abnormalities of the mouse paternal zygotic genome associated with microinsemination of round spermatids, Dev. Biol. (2005) in press]. Here, we found that 5-50 nM TSA treatment for 10 h following oocyte activation resulted in 2- to 5-fold more efficient in vitro development of somatic cloned embryos to the blastocyst stage, depending on the donor cells, which included tail tip cells, spleen cells, neural stem cells, and cumulus cells. This TSA treatment also led to a more than 5-fold increase in the success rate of mouse cloning from cumulus cells without obvious abnormality, but failed to improve ES cell cloning success. Further, we succeeded in establishing nuclear transfer-embryonic stem (NT-ES) cells from TSA-treated cloned blastocysts at a rate three times higher than from untreated cloned blastocysts. Thus, our data indicate that TSA treatment after SCNT in mice can dramatically improve the practical application of current cloning techniques.

A polypyrrole derivative monolayer was investigated for application as a molecular wire. First, a pyrrole derivative monolayer was prepared as a chemically adsorbed (self-assembled) monolayer (CAM) of 6-pyrrolylhexyl-12,12,12-trichloro-12-siladodecanoate (PEN) on a glass substrate. Then, the monolayer was polymerized in the presence of pure water by electro-oxidation. The surface characterization of the molecular interaction was investigated by measuring the properties of CAMs attached to the glass substrate in the lateral direction. We formed PEN having polypyrrolyl groups using Pt-patterned electrodes on glass surfaces and measured the conductance under a small bias voltage using a conductive atomic force microscopy (AFM) cantilever. The polypyrrole derivative monolayer thus synthesized was covalently bonded to the glass substrate and, after electro-oxidation, showed conductivity as high as 3.05×10³ S/cm. This method of preparing a conductive polymer monolayer by combining chemical adsorption and electro-oxidation yields many molecular wires perpendicular to the Pt electrodes, and is one of the key technologies for molecular devices.

Background The prevalence and amounts of periodontal pathogens detected in bacteraemia samples induced by tooth brushing were compared across four diagnostic techniques, three culture-based and one molecular-based. Material and Methods Blood samples were collected from thirty-six subjects with different periodontal status (17 healthy, 10 with gingivitis and 9 with periodontitis) at baseline and 2 minutes after tooth brushing. Each sample was analyzed by three culture-based methods [direct anaerobic culturing (DAC), hemo-culture (BACTEC), and lysis-centrifugation (LC)] and one molecular-based technique [quantitative polymerase chain reaction (qPCR)]. With culture, any bacterial isolate was detected and quantified, while with qPCR only Porphyromonas gingivalis and Aggregatibacter actinomycetemcomitans were detected and quantified. Descriptive analyses, ANOVA and Chi-squared tests were performed. Results Neither BACTEC nor qPCR detected any type of bacteria in the blood samples. Only LC (2.7%) and DAC (8.3%) detected bacteraemia, although not in the same patients. Fusobacterium nucleatum was the most frequently detected bacterial species. Conclusions The disparity in the results when the same samples were analyzed with four different microbiological detection methods highlights the need for proper validation of the methodology used to detect periodontal pathogens in bacteraemia samples, particularly given that periodontal pathogens were very seldom present in blood samples after tooth brushing. Key words: Bacteraemia, periodontitis, culture, PCR, tooth brushing. PMID:26946197

Parasitoid detection and identification is a necessary step in the development and implementation of fruit fly biological control strategies employing parasitoid augmentative release. In recent years, DNA-based methods have been used to identify natural enemies of pest species where morphological differentiation is problematic. Molecular techniques also offer a considerable advantage over traditional morphological methods of fruit fly and parasitoid discrimination, as well as within-host parasitoid identification, which currently relies on dissection of immature parasitoids from the host, or lengthy and labour-intensive rearing methods. Here we review recent research focusing on the use of molecular strategies for fruit fly and parasitoid detection and differentiation and discuss the implications of these studies on fruit fly management. PMID:26466628

For over a century, vibrational spectroscopy has enhanced the study of materials. Yet, assignment of particular molecular motions to vibrational excitations has relied on indirect methods. Here, we demonstrate that applying group theoretical methods to the dynamic pair distribution function analysis of neutron scattering data provides direct access to the individual atomic displacements responsible for these excitations. Applied to the molecule-based frustrated magnet with a potential magnetic valence-bond state, LiZn2Mo3O8, this approach allows direct assignment of the constrained rotational mode of Mo3O13 clusters and internal modes of MoO6 polyhedra. We anticipate that coupling this well known data analysis technique with dynamic pair distribution function analysis will have broad application in connecting structural dynamics to physical properties in a wide range of molecular and solid state systems. PMID:26429001

A new molecular design for obtaining molecular glasses has been developed by linking triphenylmethyl moieties to a chromophore core via a flexible C-C bridge. Compounds capable of forming a stable amorphous phase with good optical quality have been obtained, with increased chemical and thermal stability compared to the previously reported design. The NLO activity of the compounds was measured after corona-discharge poling. Compared to the previously synthesized trityloxy-fragment-containing compounds, an increase of the d33 coefficient by up to 17 times was achieved for compounds containing the same chromophore core.

Aim: Tropical theileriosis is a fatal hemoprotozoal disease of dairy animals caused by Theileria annulata. The aim of the present study was to detect T. annulata and to compare the results of molecular and microscopic techniques. Materials and Methods: A total of 52 blood samples were collected from cattle suspected of theileriosis across the Banaskantha district. All samples were screened for theileriosis using Giemsa's staining technique and polymerase chain reaction (PCR). Results: Totals of 17 (32.69%) and 24 (46.15%) samples were found positive for theileriosis by microscopic examination and the PCR test, respectively. This revealed that the study area is endemic for theileriosis, and that the microscopic technique has 70.83% sensitivity and 100% specificity with respect to the PCR technique. Conclusion: It may be concluded from the present study that PCR is a more sensitive technique than microscopic examination and may be recommended for field screening of theileriosis in the study area, where a high prevalence of the disease has been reported due to intensive dairy farming. PMID:27047045
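
The reported sensitivity and specificity follow directly from the counts in the abstract (52 samples, 24 PCR-positive, 17 microscopy-positive), taking PCR as the reference standard and assuming, as the 100% specificity implies, that every microscopy-positive sample was also PCR-positive:

```python
def diagnostic_performance(n_total, ref_pos, test_pos, both_pos):
    """Sensitivity/specificity of an index test against a reference standard."""
    tp = both_pos                # positive by both tests
    fn = ref_pos - both_pos      # reference positives missed by the index test
    fp = test_pos - both_pos     # index-test positives the reference rejects
    tn = n_total - tp - fn - fp  # negative by both tests
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = diagnostic_performance(52, ref_pos=24, test_pos=17, both_pos=17)
print(f"{sens:.2%} {spec:.0%}")  # → 70.83% 100%
```

The 70.83% sensitivity is simply 17/24: microscopy recovered 17 of the 24 PCR-confirmed infections and produced no false positives among the 28 PCR-negative samples.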

The systematic microbiological evaluation of endophthalmitis allows the confirmation of the infectious nature of the disease and the possible adaptation of treatment at the individual level and, at the collective level, the epidemiological characterization of the bacterial spectrum of endophthalmitis. Long reserved for research, the use of molecular biology techniques to complement conventional culture techniques has become important for the diagnosis of endophthalmitis in recent years. These new diagnostic techniques are particularly useful for the microbiological study of bacteria that are difficult or impossible to grow because of their intrinsic properties, their presence in only a small inoculum, their sequestration on prosthetic materials, or their inactivation by prior antibiotic treatment. These techniques are based on the polymerase chain reaction (PCR), which allows the amplification and detection of extracted bacterial deoxyribonucleic acid (DNA) that is initially present in minute quantities in an ocular sample. In practice, these conventional or real-time PCRs allow either the a priori detection of bacterial DNA (universal PCR) or the identification of a specific DNA fragment of a bacterial genus or species (specific PCR). New techniques of PCR will allow more rapid bacterial identification and also characterization of genotypic properties, such as genes of virulence or antibiotic resistance. PMID:24359808

We demonstrate the pump-induced coherent Stokes Raman scattering (CSRS) technique by measuring vibrational cooling in low-temperature crystals of pentacene in naphthalene following excitation of a vibration 747 cm⁻¹ above the S₁ origin. Using picosecond photon echoes and a two-color pump-probe technique, we find that the initial state decays in 33 ps and reappears at the origin 25 ps later. We show that pump-induced CSRS simultaneously measures the decay from the initial state and the reappearance at the origin. This technique has many of the advantages of conventional coherent Raman techniques (e.g. intense coherent signals), but is a direct measure of the population dynamics in the initial and final states.

Thyroid fine-needle aspiration (FNA) cytology is a fast-growing field. One of the most rapidly developing areas is molecular testing applied to cytological material. The patients who could benefit the most from these tests are those diagnosed as 'indeterminate' on FNA. They could be better stratified in terms of malignancy risk and thus oriented with more confidence to the appropriate management. Taking into consideration the need to improve and maintain the high yield of thyroid FNA, professionals from various fields (i.e. molecular biologists, endocrinologists, nuclear medicine physicians and radiologists) are refining and fine-tuning their diagnostic instruments. In particular, all these developments aim at increasing the negative predictive value of FNA to improve the selection of patients for diagnostic surgery. These advances involve terminology, the application of next-generation sequencing to thyroid FNA, the use of immunocyto- and histochemistry, the development of new sampling techniques and the increasing use of nuclear medicine as well as molecular imaging in the management of patients with a thyroid nodule. Herein, we review the recent advances in thyroid FNA cytology that could be of interest to the 'thyroid-care' community, with particular focus on the indeterminate diagnostic category. PMID:26450171

Objective: Facial aging is characterized by skin laxity and loss of skin elasticity. Hyaluronic acid, a biological component of the extracellular matrix whose level decreases during aging, plays structural, rheological, and physiological roles in the skin. Hyaluronic acid can have different molecular weights: low-molecular-weight hyaluronic acid (from 50 kDa) and high-molecular-weight hyaluronic acid (up to 2 million kDa). This monocentric, retrospective, observational study investigates the efficacy, safety, and tolerability of a new injectable low- and high-molecular-weight hyaluronic acid for facial skin rejuvenation. Methods: Eleven women received, once a month for 2 months, 2 mL of the product in the subcutaneous layer of the right and left malar/submalar areas. Facial skin echography, facial skin hydration, elasticity, and transepidermal water loss were assessed before (T0), after 1 month (T1), and after 3 months of treatment (T2). The injection characteristics of the product, physician subjective satisfaction, and patient satisfaction were also recorded. Results: Facial skin hydration, elasticity, and transepidermal water loss values significantly improved at T1 and T2 (P < .01). Patients were very satisfied at the end of the treatment, and the physician's evaluation of the compound's benefit was optimal, in the absence of local side effects. Conclusions: This treatment represents a good option to restore the vitality and turgidity of skin showing signs of aging, in the absence of intolerance symptoms. PMID:26491508

Silicon monoxide is one of the major gas phase silicon bearing components observed in astronomical environments. Silicon oxide serves as the major rock forming material for terrestrial and meteoritic bodies. It is known that several gas phase reactions produce mass independent isotopic fractionations which possess the same delta(O-17)/delta(O-18) ratio observed in Allende inclusions. The general symmetry dependence of the chemically produced mass independent isotopic fractionation process suggests that there are several plausible reactions which could occur in the early solar system which may lead to production of the observed meteoritic oxygen isotopic anomalies. An important component in exploring the role of such processes is the need to experimentally determine the isotopic fractionations for specific reactions of relevance to the early solar system. It has already been demonstrated that atomic oxygen reaction with CO, a major nebular oxygen bearing species, produces a large (approximately 90 percent), mass independent isotopic fractionation. The next hurdle regarding assessing the involvement of symmetry dependent isotopic fractionation processes in the pre-solar nebula is to determine isotopic fractionation factors associated with gas phase reactions of metallic oxides. In particular, a reaction such as O + SiO yields SiO2 is a plausible nebular reaction which could produce a delta(O-17) is approximately delta(O-18) fractionation based upon molecular symmetry considerations. While the isotopic fractionations during silicate evaporation and condensation have been determined, there are no isotopic studies of controlled, gas phase nucleation processes. In order to carefully control the reaction kinetics, a molecular beam apparatus has been constructed. This system produces a supersonic, collimated beam of SiO molecules which is reacted with a second beam of oxygen atoms. An important feature of molecular beams is that they operate at sufficiently low pressures

The objective of this project was to increase oil production and reserves through improved reservoir characterization and completion techniques in the Uinta Basin, Utah. To accomplish this objective, a two-year geologic and engineering characterization of the Bluebell field was conducted. The study evaluated surface and subsurface data, currently used completion techniques, and common production problems. It was determined that advanced cased- and open-hole logs could be effective in determining productive beds and that staged-interval (about 500 ft [150 m] per stage) and bed-scale isolation completion techniques could result in improved well performance.

Remote sensing has the potential to provide information useful in improving the modelling of pollution transport in agricultural catchments. The realisation of this potential will depend on the availability of the raw data, the development of techniques for extracting information from this data, and the assimilation of the derived information into models. Each of these aspects has been explored and assessed by this group, and this presentation will describe its findings. Two main objectives were defined in this work. Firstly, to define, quantify, and demonstrate how future spaceborne Earth Observation (EO) derived information can be utilised to improve the accuracy and spatial resolution of input parameters to pollution models of nitrate (N) and phosphate (P) export at the field and catchment scale. Secondly, to quantify the impact of improved accuracy and spatial resolution of input data on model predictions of nutrient export from agricultural fields. These objectives were explored by acquiring high spatial resolution hyperspectral data and laser altimetry of two farm sites near Hereford, UK. The data were then analysed to generate information such as topography, crop type and fractional vegetation cover at different resolutions. A technique was developed to identify the soil and vegetation endmembers within high resolution hyperspectral imagery of a field, enabling an estimation of the vegetation fraction to be extracted. The results of this technique were compared to conventional methods used with fewer spectral bands. Aerially-acquired laser altimetry was also processed to produce high resolution Digital Elevation Models of the site. Nutrient flow models were then developed to assimilate this information and predict N and P run-off. At the sub-field scale the hypothesis that higher resolution topography will make a substantial difference to contaminant transport was tested using the AGricultural Non-Point Source (AGNPS) model at 20m, 50m and 100m cell

This report summarizes the development of in situ spectral reflectance as a tool for improving the quality, reproducibility, and yield of device structures grown from compound semiconductors. Although initially targeted at MBE (Molecular Beam Epitaxy) machines, equipment difficulties forced the authors to test most of their ideas on a MOCVD (Metal Organic Chemical Vapor Deposition) reactor. A pre-growth control strategy using in situ reflectance has led to an unprecedented demonstration of process control on one of the most difficult device structures that can be grown with compound semiconductor materials. Hundreds of vertical cavity surface emitting lasers (VCSELs) were grown with only ±0.3% deviations in the Fabry-Perot cavity wavelength, a nearly ten-fold improvement over current calibration methods. The success of the ADVISOR (Analysis of Deposition using Virtual Interfaces and Spectroscopic Optical Reflectance) method has led to a great deal of interest from the commercial sector, including use by Hewlett Packard and Honeywell. The algorithms, software and reflectance design are being evaluated for patents and/or license agreements. A small company, Filmetrics, Inc., is incorporating the ADVISOR analysis method in its reflectometer product.

Beneficial microorganisms have been considered an important tool for crop improvement. Native isolates of Azospirillum spp. were obtained from the rhizospheres of different rice fields. Phenotypic, biochemical and molecular characterization of these isolates led to the identification of six efficient strains of Azospirillum. PCR amplification of the nif genes (nifH, nifD and nifK) and the protein profiles of the Azospirillum strains revealed inter-generic and inter-specific diversity among the strains. In vitro nitrogen fixation performance and plant growth promotion activities, viz. siderophore, HCN, salicylic acid, IAA, GA, zeatin, ABA, NH3, phosphorus metabolism, ACC deaminase and iron tolerance, were found to vary among the Azospirillum strains. The effect of Azospirillum formulations on the growth of rice var. Khandagiri under field conditions was evaluated; the native formulation of Azospirillum from the CRRI field (As6) was the most effective, elevating endogenous nutrient content and producing improved growth and better yield. The 16S rRNA sequence revealed the novelty of the native Azospirillum lipoferum (As6) (JQ796078) in the NCBI database. PMID:24414168

The plasma immersion ion implantation and deposition (PIIID) technique was used to implant zinc (Zn) ions into smooth surfaces of pure titanium (Ti) disks for investigation of tooth implant surface modification. The aim of the present study was to evaluate the surface structure and chemical composition of a modified Ti surface following Zn ion implantation and deposition and to examine the effect of such modification on osteoblast biocompatibility. Using the PIIID technique, Zn ions were deposited onto the smooth surface of pure Ti disks. The physical structure and chemical composition of the modified surface layers were characterized by scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS), respectively. In vitro culture assays using the MG-63 bone cell line were performed to determine the effects of Zn-modified Ti surfaces following PIIID on cellular function. Acridine orange staining was used to detect cell attachment to the surfaces and cell cycle analysis was performed using flow cytometry. SEM revealed a rough ‘honeycomb’ structure on the Zn-modified Ti surfaces following PIIID processing and XPS data indicated that Zn and oxygen concentrations in the modified Ti surfaces increased with PIIID processing time. SEM also revealed significantly greater MG-63 cell growth on Zn-modified Ti surfaces than on pure Ti surfaces (P<0.05). Flow cytometric analysis revealed increasing percentages of MG-63 cells in S phase with increasing Zn implantation and deposition, suggesting that MG-63 apoptosis was inhibited and MG-63 proliferation was promoted on Zn-PIIID-Ti surfaces. The present results suggest that modification with Zn-PIIID may be used to improve the osteoblast biocompatibility of Ti implant surfaces. PMID:25673139

Remote sensing can potentially provide information useful in improving pollution transport modelling in agricultural catchments. Realisation of this potential will depend on the availability of the raw data, the development of information extraction techniques, and the impact of assimilating the derived information into models. High spatial resolution hyperspectral imagery of a farm near Hereford, UK is analysed. A technique is described to automatically identify the soil and vegetation endmembers within a field, enabling vegetation fractional cover estimation. Aerially-acquired laser altimetry is used to produce digital elevation models of the site. At the subfield scale, the hypothesis that higher resolution topography will make a substantial difference to contaminant transport is tested using the AGricultural Non-Point Source (AGNPS) model. Slope aspect and direction information are extracted from the topography at different resolutions to study the effects on soil erosion, deposition, runoff and nutrient losses. Field-scale models are often used to model drainage water, nitrate and runoff/sediment loss, but their demanding input data requirements make scaling up to catchment level difficult. By determining the input range of spatial variables gathered from EO data, and comparing the response of models to the range of variation measured, the critical model inputs can be identified. Response surfaces to variation in these inputs constrain uncertainty in model predictions and are presented. Although optical earth observation analysis can provide fractional vegetation cover, cloud cover and semi-random weather patterns can hinder data acquisition in Northern Europe. A Spring and Autumn cloud cover analysis is carried out over seven UK sites close to agricultural districts, using historic satellite image metadata, climate modelling and historic ground weather observations. Results are assessed in terms of acquisition probability and its implications.
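The slope and aspect extraction at multiple grid resolutions described above can be sketched as follows. The synthetic DEM, the 20 m/100 m cell sizes, and the block-averaging resampler are illustrative assumptions for this sketch, not the project's actual processing chain:

```python
import numpy as np

def slope_aspect(dem, cell_size):
    """Compute slope (degrees) and aspect (degrees clockwise from north) from a DEM grid."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)            # rise per metre along rows, columns
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360  # one common aspect convention
    return slope, aspect

def coarsen(dem, factor):
    """Resample a DEM to a coarser grid by block-averaging (factor x factor cells)."""
    h = (dem.shape[0] // factor) * factor
    w = (dem.shape[1] // factor) * factor
    return dem[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A synthetic 100 x 100 DEM on 20 m cells: a plane tilted to the east plus noise.
rng = np.random.default_rng(0)
x = np.arange(100) * 20.0
dem = np.tile(0.05 * x, (100, 1)) + rng.normal(0, 0.5, (100, 100))

s20, _ = slope_aspect(dem, 20.0)                # native 20 m resolution
s100, _ = slope_aspect(coarsen(dem, 5), 100.0)  # degraded to 100 m cells
print(f"mean slope at 20 m: {s20.mean():.2f} deg, at 100 m: {s100.mean():.2f} deg")
```

Degrading the grid smooths out short-wavelength relief, so slope statistics (and hence modelled erosion and runoff) systematically change with cell size, which is exactly the sensitivity the AGNPS experiments probe.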

Crowdsourcing is a new approach for solving data processing problems for which conventional methods appear to be inaccurate, expensive, or time-consuming. Nowadays, the development of new crowdsourcing techniques is mostly motivated by so-called Big Data problems, including problems of assessment and clustering for large datasets obtained in aerospace imaging, remote sensing, and even in social network analysis. By involving volunteers from all over the world, the Geo-Wiki project tackles problems of environmental monitoring with applications to flood resilience, biomass data analysis and classification of land cover. For example, the Cropland Capture Game, a gamified version of Geo-Wiki, was developed to aid in the mapping of cultivated land and was used to gather 4.5 million image classifications of the Earth's surface. More recently, the Picture Pile game, a more generalized version of Cropland Capture, aims to identify tree loss over time from pairs of very high resolution satellite images. Despite recent progress in image analysis, the solution to these problems is hard to automate, since human experts still outperform the majority of machine learning algorithms and artificial systems on certain image recognition tasks in this field. The replacement of rare and expensive experts by a team of distributed volunteers seems promising, but this approach leads to challenging questions: how can individual opinions be aggregated optimally, how can confidence bounds be obtained, and how can the unreliability of volunteers be dealt with? In this paper, on the basis of several known machine learning techniques, we propose a technical approach to improve the overall performance of the majority voting decision rule used in the Cropland Capture Game. The proposed approach increases the estimated consistency with expert opinion from 77% to 86%.
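As a hedged illustration of why weighting volunteers can beat plain majority voting (the paper's actual aggregation method is not detailed here), the sketch below estimates each volunteer's reliability from agreement with the majority answer and re-votes with log-odds weights, a simplified Dawid-Skene-style step on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 200 binary image-classification tasks, 15 volunteers
# whose individual accuracies vary from near-chance to expert-like.
truth = rng.integers(0, 2, size=200)
accuracy = rng.uniform(0.55, 0.95, size=15)
votes = np.where(rng.random((15, 200)) < accuracy[:, None], truth, 1 - truth)

# Baseline: unweighted majority vote per task.
majority = (votes.mean(axis=0) > 0.5).astype(int)

# One refinement step: estimate each volunteer's reliability from agreement
# with the majority answer, then re-vote with log-odds weights.
est_acc = (votes == majority).mean(axis=1).clip(0.51, 0.99)
w = np.log(est_acc / (1 - est_acc))
weighted = (w @ (2 * votes - 1) > 0).astype(int)   # votes mapped to {-1, +1}

print("majority accuracy:", (majority == truth).mean())
print("weighted accuracy:", (weighted == truth).mean())
```

Iterating the reliability estimate and the weighted vote until convergence gives the classical EM-style aggregation; a single step already shows the mechanism.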

Tsetse flies (Diptera: Glossinidae) are the cyclical vectors of the trypanosomes, which cause human African trypanosomosis (HAT) or sleeping sickness in humans and African animal trypanosomosis (AAT) or nagana in animals. Due to the lack of effective vaccines and inexpensive drugs for HAT, and the development of resistance of the trypanosomes against the available trypanocidal drugs, vector control remains the most efficient strategy for sustainable management of these diseases. Among the control methods used for tsetse flies, the Sterile Insect Technique (SIT), in the frame of area-wide integrated pest management (AW-IPM), represents an effective tactic to suppress and/or eradicate tsetse flies. One constraint in implementing SIT is the mass production of target species. Tsetse flies harbor obligate bacterial symbionts and a salivary gland hypertrophy virus which modulate the fecundity of the infected flies. In support of the future expansion of the SIT for tsetse fly control, the Joint FAO/IAEA Programme of Nuclear Techniques in Food and Agriculture implemented a six-year Coordinated Research Project (CRP) entitled “Improving SIT for Tsetse Flies through Research on their Symbionts and Pathogens”. The consortium focused on the prevalence of and the interaction between the bacterial symbionts and the virus, the development of strategies to manage virus infections in tsetse colonies, the use of entomopathogenic fungi to control tsetse flies in combination with SIT, and the development of symbiont-based strategies to control tsetse flies and trypanosomosis. The results of the CRP and the solutions envisaged to alleviate the constraints of the mass rearing of tsetse flies for SIT are presented in this special issue. PMID:22841636

Absolute frequencies of unperturbed ¹²C¹⁶O transitions from the near-infrared (3-0) band were measured with uncertainties five-fold lower than those of previously available data. The frequency axis of the spectra was linked to the primary frequency standard. Three different cavity enhanced absorption and dispersion spectroscopic methods and various approaches to data analysis were used to estimate potential systematic instrumental errors. In addition to the well-established frequency-stabilized cavity ring-down spectroscopy, we applied cavity mode-width spectroscopy and one-dimensional cavity mode-dispersion spectroscopy for measurement of absorption and dispersion spectra, respectively. We demonstrated the highest quality of dispersion line shape measured in optical spectroscopy so far. We obtained line positions of the Doppler-broadened R24 and R28 transitions with relative uncertainties at the level of 10⁻¹⁰. The pressure shifting coefficients were measured and the influence of line asymmetry on unperturbed line positions was analyzed. Our dispersion spectra are the first demonstration of molecular spectroscopy with both axes of the spectra directly linked to the primary frequency standard, which is particularly desirable for future reference-grade measurements of molecular spectra. PMID:27276950

Molecular and immunological probes were used to identify various life stages of Perkinsus olseni, a protozoan parasite of the Manila clam Ruditapes philippinarum, from a marine environment and decomposing clam tissue. Western blotting revealed that the antigenic determinants of the rabbit anti-P. olseni antibody developed in this study were peptides with molecular masses of 55.9, 24.0, and 19.2 kDa. Immunofluorescent assay indicated that the rabbit anti-P. olseni IgG was specific to all life stages, including the prezoosporangium, trophozoite, and zoospore. Perkinsus olseni prezoosporangium-like cells were successfully isolated from marine sediment collected from Hwangdo on the west coast of Korea, where P. olseni-associated clam mortality has recurred for the past decade. Purified cells were positively stained with the rabbit anti-P. olseni antibody in an immunofluorescence assay, confirming for the first time the presence of P. olseni in marine sediment. Actively replicating zoospores inside the prezoosporangia were observed in the decomposing clam tissue collected from Hwangdo. P. olseni was also isolated from the feces and pseudofeces of infected clams and confirmed by PCR. The clams released 1-2 prezoosporangia per day through feces. The data suggested that fecal discharge and decomposition of the infected clam tissue could be the two major P. olseni transmission routes. PMID:20691188

FBP. Veo reconstructions showed slight improvement over STD FBP reconstructions (4%–9% increase in accuracy). The most improved ID and WA% measures were for the smaller airways, especially for low dose scans reconstructed at half DFOV (18 cm) with the EDGE algorithm in combination with 100% ASIR to mitigate noise. Using the BONE + ASIR at half BONE technique, measures improved by a factor of 2 over STD FBP even at a quarter of the x-ray dose. Conclusions: The flexibility of ASIR in combination with higher frequency algorithms, such as BONE, provided the greatest accuracy for conventional and low x-ray dose relative to FBP. Veo provided more modest improvement in qCT measures, likely due to its compatibility only with the smoother STD kernel.

simulations done with and without particle splitting were within the accepted clinical tolerance of 2%, with a 0.4% statistical uncertainty. For the two patient geometries considered, head and prostate, the efficiency gain was 20.9 and 14.7, respectively, with the percentages of voxels with gamma indices lower than unity 98.9% and 99.7%, respectively, using 2% and 2 mm criteria. Conclusions: The authors have implemented an efficient variance reduction technique with significant speed improvements for proton Monte Carlo simulations. The method can be transferred to other codes and other treatment heads. PMID:23556888

Irrigated agriculture is a significant activity in water-stressed semi-arid regions (e.g., the Sahel), so yield and water management are fundamental aspects of irrigation success. Small farmers often have difficulty managing crops and evaluating water needs, resulting in low yields with excessive water consumption, elevated pumping costs and soil degradation. In different proportions, this overuse of water concerns all irrigation techniques: gravity flow from reservoirs, watering-can irrigation from groundwater wells, and micro- or drip irrigation. Baseline requirements for supporting sustainable technology are low cost, easy installation, minimal maintenance, and local production. We present and discuss results from the Info4Dourou2.0 explorative project in Burkina Faso, whose main goal is to improve small-scale agriculture through the use of sensing and communication technologies. In particular, a support system that couples autonomous and continuous measurements of meteorological variables, soil matric potential and soil humidity with agronomic models has been tested in drip-irrigated fields over a three-year period. The system collects data from three water potential sensors at different locations per field and informs the farmers, through a simple interface, of the correct amount of water needed by the plants. In its simplicity, this system provides an irrigation management setup that is easy to install and use, making it an ideal candidate for sustainability. Info4Dourou2.0 pilot experiments have shown that farmers can obtain significantly higher yields using lower amounts of water. Overall, this methodology addresses multiple urgent problems at once: using environmental data to improve agricultural production while conserving ecosystems, food security, and adaptation to climate-change scenarios.
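The kind of advisory logic such a sensor-plus-model system could use can be sketched as follows: average the field's matric potential readings, compare against a crop refill point, and recommend a dose proportional to the deficit. The thresholds, dose coefficient and field readings below are hypothetical, not values from the Info4Dourou2.0 project:

```python
# Hypothetical crop thresholds; matric potential is negative, and more
# negative means drier soil.
REFILL_POINT_KPA = -60    # drier than this: irrigate
FIELD_CAPACITY_KPA = -10  # wetter than this: soil holds no more water

def irrigation_advice(readings_kpa, dose_mm_per_10kpa=1.5):
    """Average the field's sensor readings; if the soil is drier than the
    refill point, recommend a water dose proportional to the deficit."""
    avg = sum(readings_kpa) / len(readings_kpa)
    if avg >= REFILL_POINT_KPA:           # wetter than the refill point
        return avg, 0.0
    deficit = REFILL_POINT_KPA - avg      # kPa below the refill point
    return avg, round(deficit / 10 * dose_mm_per_10kpa, 1)

# Three sensors per field, as in the deployed system; readings are invented.
avg, dose = irrigation_advice([-75, -82, -68])
print(f"mean matric potential {avg:.0f} kPa -> irrigate {dose} mm")
```

A real deployment would also fold in the meteorological measurements and the agronomic model, but the threshold comparison above is the core of a simple farmer-facing interface.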

A solid dispersion of a drug can be made using a polymer carrier to improve solubility. Generally, drugs become amorphized when a solid dispersion is formed with a polymer carrier, and in this high-energy state the solubility of the drug molecule is increased. We previously prepared solid dispersions using a spray-drying technique and reported their solubility and crystallinity. In this study, hydroxypropylmethylcellulose (HPMC) was used as the carrier, and the water-insoluble tolbutamide was the model drug. Solubility was evaluated by preparing a solid dispersion using a newly developed 4-fluid nozzle spray dryer. Observation of particle morphology by scanning electron microscopy (SEM) revealed that the spray-dried particles were atomized to several microns and had become spherical. Assessment of the crystallinity of the spray-dried particles by powder X-ray diffraction and differential scanning calorimetry demonstrated that the tolbutamide had been amorphized, forming a solid dispersion. The apparent release rate constant K of the drug from the spray-dried particles was 4 to 6 times larger than that of the original drug at pH 1.2, and 1.5 to 1.9 times larger at pH 6.8. The 70% release time (T(70)) of the drug from the spray-dried particles was 20 to 30 times shorter than that of the original drug in pH 1.2 solution, and 2 to 3 times shorter in pH 6.8 solution. Pharmaceutical preparations prepared in this way using the 4-fluid nozzle spray dryer formed composite particles, resulting in remarkably improved dissolution rates of the drug. PMID:15340191
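The relationship between the apparent release rate constant K and the 70% release time can be made concrete under a first-order release model f(t) = 1 - exp(-K t). This kinetic model is an assumption for illustration (the abstract does not state which model was fitted), and the rate constants below are invented:

```python
import math

def t70(K):
    """Time to 70% release under first-order kinetics f(t) = 1 - exp(-K*t).

    Solving 1 - exp(-K*t) = 0.7 gives t = ln(1/0.3) / K, so T70 scales
    inversely with K (same time units as 1/K)."""
    return math.log(1 / 0.3) / K

# Hypothetical constants (min^-1), chosen so the spray-dried K is 5x larger.
K_original, K_spray_dried = 0.02, 0.10
print(f"T70 original:    {t70(K_original):.1f} min")
print(f"T70 spray-dried: {t70(K_spray_dried):.1f} min")
print(f"speed-up factor: {t70(K_original) / t70(K_spray_dried):.1f}x")
```

Note that under pure first-order kinetics a 4- to 6-fold larger K shortens T70 by the same 4- to 6-fold factor; the 20- to 30-fold T70 improvement reported suggests the measured release profiles deviate from this single-exponential idealization.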

We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar population synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

There are three major objectives to this phase of the work. (1) Improvement of Information Retrieval (IR) methods for Independent Verification and Validation (IV&V) requirements tracing. Information Retrieval methods are typically developed for very large document collections (on the order of millions to tens of millions of documents or more), and therefore most successfully used methods sacrifice some precision and recall in order to achieve efficiency. At the same time, typical IR systems treat all user queries as independent of each other and assume that the relevance of documents to queries is subjective for each user. The IV&V requirements tracing problem has a much smaller data set to operate on, even for large software development projects; the set of queries is predetermined by the high-level specification document, and individual requirements considered as query input to IR methods are not necessarily independent of each other. Namely, knowledge about the links for one requirement may be helpful in determining the links of another requirement. Finally, while the final decision on the exact form of the traceability matrix still belongs to the IV&V analyst, his/her decisions are much less arbitrary than those of an Internet search engine user. All this suggests that the information available to us in the framework of the IV&V tracing problem can be successfully leveraged to enhance standard IR techniques, which in turn would lead to increased recall and precision. We developed several new methods during Phase II. (2) IV&V requirements tracing IR toolkit. Based on the methods developed in Phase I and their improvements developed in Phase II, we built a toolkit of IR methods for IV&V requirements tracing. The toolkit has been integrated, at the data level, with SAIC's SuperTracePlus (STP) tool. (3) Toolkit testing. We tested the methods included in the IV&V requirements tracing IR toolkit on a number of projects.
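A minimal sketch of the vector-space IR machinery (TF-IDF weighting plus cosine similarity) that such a tracing toolkit builds on. The toy requirements, design elements, and weighting scheme are illustrative assumptions, not the internals of the toolkit or of SuperTracePlus:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF vectors (raw term frequency, smoothed IDF)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(toks).items()} for toks in tokenized]

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Toy corpus: two high-level requirements traced against three design elements.
requirements = ["the system shall log all telemetry data",
                "the operator shall authenticate before access"]
design = ["telemetry data is written to the log file each cycle",
          "login screen verifies operator credentials before granting access",
          "the scheduler dispatches periodic maintenance tasks"]

vecs = tfidf_vectors(requirements + design)
req_vecs, des_vecs = vecs[:2], vecs[2:]
for i, rv in enumerate(req_vecs):
    score, j = max((cosine(rv, dv), j) for j, dv in enumerate(des_vecs))
    print(f"requirement {i} -> design element {j} (score {score:.2f})")
```

A real tracing tool would rank all candidate links above a threshold for analyst review rather than returning only the top match, and the feedback exploited in the paper corresponds to updating these vectors as the analyst accepts or rejects links.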

Scattering techniques have played a key role in our understanding of the structure and function of phospholipid membranes. These techniques have been applied widely to study how different molecules (e.g., cholesterol) can affect phospholipid membrane structure. However, there has been much less attention paid to the effects of molecules that remain in the aqueous phase. One important example is the role played by small solutes, particularly sugars, in protecting phospholipid membranes during drying or slow freezing. In this paper, we present new results and a general methodology, which illustrate how contrast variation small angle neutron scattering (SANS) and synchrotron-based X-ray scattering (small angle (SAXS) and wide angle (WAXS)) can be used to quantitatively understand the interactions between solutes and phospholipids. Specifically, we show the assignment of lipid phases with synchrotron SAXS and explain how SANS reveals the exclusion of sugars from the aqueous region in the particular example of hexagonal II phases formed by phospholipids.

We studied the effect of the ratio between the monomer and cross-linker molecules in azobenzene-containing liquid crystal polymer films using the heterodyne transient grating (HD-TG) technique, a time-resolved measurement technique. The magnitude of the refractive index change, its anisotropy, and the lifetime of the cis isomer of azobenzene generated by UV pulse irradiation all depended on this ratio. Increasing the cross-linker ratio reduced the refractive index change and its anisotropy, indicating less ability for motion, while increasing the monomer ratio lengthened the lifetime, indicating that the film has difficulty returning to its original shape under visible-light irradiation. The observed dynamics were consistent with the functionality of the films.

Background: Diagnosis of some diseases is difficult because it requires invasive sampling. Urine has been proposed as a non-invasive and convenient alternative; it has many advantages and is easily accessible, but some technical problems must first be resolved. The main objective of the present research was to find a suitable extraction method for improving urine DNA quantity and quality, so that urine can replace invasive specimens in the molecular diagnosis of some infectious diseases. Method: Toxoplasmosis was selected as an experimental model because of its congenital and ocular forms, its prevalence, and its requirement for invasive samples for diagnosis. Samples were prepared by adding defined numbers of Toxoplasma gondii (RH strain) tachyzoites to normal urine. Several urine DNA extraction and purification methods were tested comparatively to find the best one. The amount of extracted DNA was assessed using a NanoDrop spectrophotometer, and a multiplex semi-nested PCR was designed to evaluate the results. Results: Urine samples with known numbers of tachyzoites were purified comparatively by the five better-performing methods. The results revealed that the Cinnagen kit performed with the greatest efficacy; it works well down to 1-5 tachyzoites µl⁻¹ of urine. The amount and quality of extracted DNA from more than 100 urine samples with defined tachyzoite counts were analyzed using a nested PCR method. The methods were sensitive enough to detect the DNA of a single tachyzoite in the final PCR reaction. Conclusion: This method was sufficiently reliable and sensitive for molecular tests for different purposes, for instance detecting toxoplasmosis from urine samples as a convenient and non-invasive method, although further experiments using patient samples compared with gold-standard methods would be advisable. PMID:23682277

Background: Usher syndrome (USH) combines sensorineural deafness with blindness. It is inherited in an autosomal recessive mode. Early diagnosis is critical for adapted educational and patient management choices, and for genetic counseling. To date, nine causative genes have been identified for the three clinical subtypes (USH1, USH2 and USH3). Current diagnostic strategies make use of a genotyping microarray that is based on the previously reported mutations. The purpose of this study was to design a more accurate molecular diagnosis tool. Methods: We sequenced the 366 coding exons and flanking regions of the nine known USH genes, in 54 USH patients (27 USH1, 21 USH2 and 6 USH3). Results: Biallelic mutations were detected in 39 patients (72%) and monoallelic mutations in an additional 10 patients (18.5%). In addition to biallelic mutations in one of the USH genes, presumably pathogenic mutations in another USH gene were detected in seven patients (13%), and another patient carried monoallelic mutations in three different USH genes. Notably, none of the USH3 patients carried detectable mutations in the only known USH3 gene, whereas they all carried mutations in USH2 genes. Most importantly, the currently used microarray would have detected only 30 of the 81 different mutations that we found, of which 39 (48%) were novel. Conclusions: Based on these results, complete exon sequencing of the currently known USH genes stands as a definite improvement for molecular diagnosis of this disease, which is of utmost importance in the perspective of gene therapy. PMID:21569298

Our team is carrying out a systematic study devoted to the design of a SPECT detector with submillimeter resolution and adequate sensitivity (1 cps/kBq). Such a system will be used for functional imaging of biological processes at the molecular level in small animals. The system requirements have been defined by two relevant applications: the characterization of atherosclerotic plaques and the study of stem cell diffusion and homing. In order to minimize costs and implementation time, the gamma detector will be based, as much as possible, on conventional components: a scintillator crystal and position-sensitive photomultipliers read by individual-channel electronics. A coded aperture collimator should be adapted to maximize the efficiency. The optimal selection of the detector components is investigated by systematic use of Monte Carlo simulations (and laboratory validation tests); preliminary results are presented and discussed here.

In order to meet the Renewable Fuels Standard demands for 30 billion gallons of biofuels by the end of 2020, new technologies for the generation of cellulosic ethanol must be exploited. Breaking down cellulose with cellulase enzymes is essential for this purpose, but these enzymes are not thermostable and degrade at the higher temperatures found in bioreactors. Towards the creation of a more ecologically friendly method of producing bioethanol from cellulosic waste, we attempted to produce recombinant, higher-temperature-resistant cellulases for use in bioreactors. The project involved molecular cloning of genes for cellulose-degrading enzymes from a bacterial source, expressing the recombinant proteins in E. coli, and optimizing enzymatic activity. We were able to generate in vitro bacterial expression systems to produce recombinant His-tag-purified protein which showed cellulase-like activity. PMID:27468362

We outline an approach yielding proper motions with higher precision than exists in present catalogs for a sample of stars in the Kepler field. To increase proper-motion precision, we combine first-moment centroids of Kepler pixel data from a single season with existing catalog positions and proper motions. We use this astrometry to produce improved reduced-proper-motion diagrams, analogous to a Hertzsprung-Russell (H-R) diagram, for stars identified as Kepler objects of interest. The more precise the relative proper motions, the better the discrimination between stellar luminosity classes. Using UCAC4 and PPMXL epoch 2000 positions (and proper motions from those catalogs as quasi-Bayesian priors), astrometry for a single test Channel (21) and Season (0) spanning 2 yr yields proper motions with an average per-coordinate proper-motion error of 1.0 mas yr⁻¹, which is over a factor of three better than existing catalogs. We apply a mapping between a reduced-proper-motion diagram and an H-R diagram, both constructed using Hubble Space Telescope parallaxes and proper motions, to estimate Kepler object of interest K-band absolute magnitudes. The techniques discussed apply to any future small-field astrometry as well as to the rest of the Kepler field.
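The reduced proper motion that underlies these diagrams has a standard definition, analogous to absolute magnitude with proper motion standing in for parallax. A minimal sketch (standard formula; the star below is illustrative, not from the catalog):

```python
import math

def reduced_proper_motion(m, mu_mas_per_yr):
    """Standard reduced proper motion H = m + 5*log10(mu) + 5,
    with mu in arcsec/yr; the proper-motion analogue of absolute
    magnitude M = m + 5*log10(parallax_arcsec) + 5."""
    mu_arcsec_per_yr = mu_mas_per_yr / 1000.0
    return m + 5.0 * math.log10(mu_arcsec_per_yr) + 5.0

# Illustrative star: apparent magnitude 12, proper motion 100 mas/yr
H = reduced_proper_motion(12.0, 100.0)
```

Because H replaces an unknown distance with the measured proper motion, more precise proper motions translate directly into tighter sequences in the H-versus-color diagram, which is why the luminosity classes separate better.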

Velazquez and Curilef [J. Stat. Mech. (2010); J. Stat. Mech. (2010)] have proposed a methodology to extend Monte Carlo algorithms based on the canonical ensemble. According to our previous study, their proposal allows us to overcome slow-sampling problems in systems that undergo any type of temperature-driven phase transition. After a comprehensive review of the ideas and connections of this framework, we discuss the application of a reweighting technique to improve the accuracy of microcanonical calculations, specifically the well-known multiple-histogram method of Ferrenberg and Swendsen [Phys. Rev. Lett. 63, 1195 (1989)]. As an example of application, we reconsider the study of the four-state Potts model on the square lattice L×L with periodic boundary conditions. This analysis allows us to detect the existence of a very small latent heat per site qL at the temperature-driven phase transition of this model, whose size dependence seems to follow a power law qL(L)∝(1/L)^z with exponent z≃0.26±0.02. We discuss the compatibility of these results with the continuous character of the temperature-driven phase transition when L→+∞. PMID:25871247
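An exponent like z in qL(L) ∝ (1/L)^z is the kind of quantity extracted by a linear least-squares fit in log-log coordinates. A minimal sketch with synthetic, noiseless data (not the paper's measurements):

```python
import math

def power_law_exponent(L_values, q_values):
    """Least-squares slope of log(q) versus log(1/L), i.e. the
    exponent z in q(L) = a * (1/L)**z."""
    xs = [math.log(1.0 / L) for L in L_values]
    ys = [math.log(q) for q in q_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data generated with z = 0.26 (illustrative only):
Ls = [32, 64, 128, 256]
qs = [2.0 * (1.0 / L) ** 0.26 for L in Ls]
z = power_law_exponent(Ls, qs)
```

With noisy microcanonical estimates of qL, the same fit would also carry an uncertainty on the slope, which is how an error bar like ±0.02 arises.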

Surgeons have been slow to incorporate industrial reliability techniques. Process control methods were applied to surgeon waiting time between cases and to length of stay (LOS) after colon surgery. Waiting times between surgeries were evaluated by auditing the operating room records of a single hospital over a 1-month period. The medical records of 628 patients undergoing colon surgery over a 5-year period were reviewed. The average surgeon wait time between cases was 53 min, and the busiest surgeon spent 29.5 hours in 1 month waiting between surgeries. Process control charting demonstrated poor overall control of the room turnover process. Average LOS after colon resection also demonstrated very poor control. Mean LOS was 10 days. Weibull's conditional analysis revealed a conditional LOS of 9.83 days. Serious process management problems were identified in both analyses. These process issues are expensive and adversely affect the quality of service offered by the institution. Process control mechanisms were suggested or implemented to improve these surgical processes. Industrial reliability and quality management tools can easily and effectively identify process control problems that occur on surgical services. PMID:20946422
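A control chart of the kind described flags points that fall outside limits derived from the process's own variation. A simplified Shewhart-style sketch with hypothetical turnover times (a production individuals chart would estimate sigma from the average moving range rather than the raw standard deviation):

```python
import statistics

def control_limits(samples, k=3.0):
    """Center line and k-sigma control limits for a Shewhart-style
    chart (simplified: sigma taken as the population std. dev.)."""
    center = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return center - k * sigma, center, center + k * sigma

# Hypothetical surgeon wait times between cases, in minutes:
waits = [48, 55, 60, 41, 53, 58, 47, 62, 50, 56]
lcl, center, ucl = control_limits(waits)
out_of_control = [w for w in waits if not lcl <= w <= ucl]
```

"Poor control" in the process-control sense means points landing outside these limits, or non-random runs within them, signaling special-cause variation worth investigating.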

The authors have devised and demonstrated the successful operation of a low-cost, high-mass-throughput technique capable of performing bulk-matter searches for fractionally charged particles, based on an improved Millikan liquid-drop method. The method uses a stroboscopic lamp and a CCD video camera to image the trajectories of silicone oil drops falling through air in the presence of a vertical, alternating electric field. The images of the trajectories are computer processed in real time, the electric charge on a drop being measured with an rms error of 0.025 of an electron charge. This error is dominated by Brownian motion. In the first use of this method, they examined 5,974,941 drops and found no evidence for fractional charges in 1.07 mg of oil. With 95% confidence, the concentration of isolated quarks with ±1/3 e or ±2/3 e in silicone oil is less than one per 2.14 × 10²⁰ nucleons.
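In a search of this kind, each drop's measured charge (in units of e) is compared with the nearest integer: a residual near 1/3 would signal a fractional charge, while residuals small compared with the 0.025 e rms error are consistent with integer charge. A minimal sketch (the example drop is hypothetical):

```python
def charge_residual(q_over_e):
    """Distance of a measured drop charge, in units of e, from the
    nearest integer multiple of e. A free quark of charge ±1/3 e or
    ±2/3 e on a drop would give a residual near 1/3."""
    return abs(q_over_e - round(q_over_e))

RMS_ERROR = 0.025  # per-drop charge measurement error, in units of e

# A drop measured at 4.02 e is consistent with an integer charge:
r = charge_residual(4.02)
integer_consistent = r < 3 * RMS_ERROR
```

The small per-drop error matters because the residual distribution of millions of drops must be narrow enough that a genuine 1/3 e outlier would stand clear of the integer-charge peak.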

Near-infrared spectroscopy (NIRS) calculates hemoglobin parameters, such as oxygenated hemoglobin (oxyHb) and deoxygenated hemoglobin (deoxyHb), using near-infrared light around a wavelength of 800 nm. It is based on the modified Lambert-Beer law, under which changes in absorbance are proportional to changes in the hemoglobin parameters. Many conventional measurement methods use only a few wavelengths; in this research, however, NIRS measurement was examined by acquiring information over a wide wavelength range. A venous occlusion test was performed using a blood pressure cuff around the upper arm, with a pressure of 100 mmHg applied for about 3 minutes. During the venous occlusion, the spectrum of the lower-arm muscles was measured every 15 seconds over the range of 600 to 1100 nm. It was found that other wavelength bands also hold information correlated with the venous occlusion task. Techniques for improving the performance of NIRS measurement using spectroscopic methods are important for brain science.
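With absorbance changes at (at least) two wavelengths, the modified Lambert-Beer relation becomes a small linear system for the oxyHb and deoxyHb changes. A sketch using Cramer's rule; the extinction-coefficient numbers below are illustrative placeholders, not tabulated physiological constants:

```python
def solve_hb_changes(dA1, dA2, eps, pathlength):
    """Invert dA_i = (e_oxy_i*d_oxy + e_deoxy_i*d_deoxy) * pathlength
    at two wavelengths. eps = ((e_oxy1, e_deoxy1), (e_oxy2, e_deoxy2))."""
    (a, b), (c, d) = eps
    det = a * d - b * c
    r1, r2 = dA1 / pathlength, dA2 / pathlength
    d_oxy = (r1 * d - r2 * b) / det
    d_deoxy = (a * r2 - c * r1) / det
    return d_oxy, d_deoxy

# Hypothetical extinction coefficients at two wavelengths (not real values):
eps = ((0.6, 1.5), (1.1, 0.8))
# Forward-simulate a venous-occlusion-like change, then recover it:
true_oxy, true_deoxy, L = 2.0, 3.0, 1.0
dA1 = (eps[0][0] * true_oxy + eps[0][1] * true_deoxy) * L
dA2 = (eps[1][0] * true_oxy + eps[1][1] * true_deoxy) * L
d_oxy, d_deoxy = solve_hb_changes(dA1, dA2, eps, L)
```

Acquiring the full 600-1100 nm spectrum turns this 2x2 inversion into an overdetermined least-squares problem, which is one way the extra wavelength bands can improve the measurement.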

In structural vibration tests, one of the main factors that disturb the reliability and accuracy of the results is the noise contaminating the measured signals. To overcome this deficiency, this paper presents a discrete wavelet transform (DWT) approach to denoise the measured signals. The denoising performance of DWT is governed by several processing parameters, including the type of wavelet, the decomposition level, the thresholding method, and the threshold selection rule. To overcome the disadvantages of the traditional hard- and soft-thresholding methods, an improved thresholding technique called the sigmoid function-based thresholding scheme is presented. The procedure is validated using four benchmark signals with three degrees of degradation, as well as a real signal measured in a shaking-table experiment on a three-story reinforced concrete scale model. The performance of the proposed method is evaluated by computing the signal-to-noise ratio (SNR) and the root-mean-square error (RMSE) after denoising. Results reveal that the proposed method outperforms the traditional methods whether the embedded noise is heavy or light. PMID:23112652
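The paper's exact sigmoid scheme is not reproduced here; the following generic sigmoid-weighted shrinkage illustrates the underlying idea of interpolating smoothly between hard thresholding (large coefficients kept intact) and suppression of small, noise-dominated coefficients:

```python
import math

def sigmoid_threshold(w, thr, beta=10.0):
    """Shrink a wavelet coefficient by a logistic weight centered on
    the threshold: coefficients well above |thr| pass nearly unchanged,
    those well below are driven toward zero, with transition steepness
    beta. (Illustrative sketch; not necessarily the paper's formula.)"""
    gate = 1.0 / (1.0 + math.exp(-beta * (abs(w) - thr)))
    return gate * w

coeffs = [0.05, -0.2, 1.5, -3.0]  # small noise-like and large signal-like terms
denoised = [sigmoid_threshold(w, thr=0.5) for w in coeffs]
```

Hard thresholding is discontinuous at the threshold and soft thresholding biases large coefficients; a sigmoid gate avoids both defects, which motivates schemes of this family.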

Ensemble modeling is a method of prediction based on the use of a representative sample of possible future states. Global models of the solar corona and inner heliosphere are now maturing to the point of becoming predictive tools; thus, it is both meaningful and necessary to quantitatively assess their uncertainty and limitations. In this study, we apply simple ensemble modeling techniques as a first step towards these goals. We focus on one relatively quiescent time period, Carrington rotation 2062, which occurred during the late declining phase of solar cycle 23. To illustrate and assess the sensitivity of the model results to variations in boundary conditions, we compute solutions using synoptic magnetograms from seven solar observatories. Model sensitivity is explored using (1) different combinations of models, (2) perturbations in the base coronal temperature (a free parameter in one of the model approximations), and (3) the spatial resolution of the numerical grid. We present variance maps, "whisker" plots, and "Taylor" diagrams to summarize the accuracy of the solutions and compute skill scores, which demonstrate that the ensemble mean solution outperforms any of the individual realizations. Our results provide a baseline against which future model improvements can be compared.
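A common skill score for such comparisons is SS = 1 - MSE_model/MSE_reference, positive when the forecast beats the reference. The toy ensemble below is contrived so that the member errors cancel exactly in the mean (illustrative data only, not the solar-wind solutions):

```python
def mse(pred, obs):
    """Mean squared error between a forecast and observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def skill_score(pred, ref, obs):
    """SS = 1 - MSE(pred)/MSE(ref); 1 is perfect, 0 matches the
    reference, negative is worse than the reference."""
    return 1.0 - mse(pred, obs) / mse(ref, obs)

obs = [1.0, 2.0, 3.0, 4.0]
members = [[1.5, 1.5, 3.5, 3.5],
           [0.5, 2.5, 2.5, 4.5]]
ens_mean = [sum(col) / len(members) for col in zip(*members)]
# Skill of the ensemble mean, taking the first member as reference:
ss = skill_score(ens_mean, members[0], obs)
```

Averaging cancels the uncorrelated part of the members' errors, which is the mechanism behind the finding that the ensemble mean outperforms any individual realization.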

Background The sterile insect technique (SIT) is an environment-friendly method used in area-wide pest management of the Mediterranean fruit fly Ceratitis capitata (Wiedemann; Diptera: Tephritidae). The ionizing radiation used to generate reproductive sterility in the mass-reared populations before release reduces their competitiveness. Results Here, we present a first alternative reproductive sterility system for medfly based on transgenic embryonic lethality. This system depends on newly isolated medfly promoter/enhancer elements of genes expressed specifically during cellularization. These elements differ in expression strength and in their ability to drive lethal effector gene activation. Moreover, position effects strongly influence the efficiency of the system. Out of 60 combinations of driver and effector construct integrations, several lines showed larval and pupal lethality, with one line showing complete embryonic lethality. This line was highly competitive with wild-type medfly in laboratory and field-cage tests. Conclusion The high competitiveness of the transgenic lines and the achieved 100% embryonic lethality, causing reproductive sterility without the need for irradiation, can improve the efficacy of operational medfly SIT programs. PMID:19173707

The ITOS with an improved attitude control system is described. A Hall-generator brushless dc torque motor will replace the brush dc torque motor of ITOS-1 and ITOS-A (NOAA-1). The four attitude horizon sensors will be replaced with two CO2 sensors for better horizon definition. An earth-horizon splitting technique will be used to keep the earth-facing side of the satellite toward earth even if the desired circular orbit is not achieved. The external appearance of the pitch control subsystem differs from TIROS-M (ITOS-1) and ITOS-A (NOAA-1) in that two pitch control electronics (PCE) boxes are used instead of one, two horizon sensors are used instead of four, and one mirror instead of two is used for sensor scanning. The brushless motor eliminates the requirement for brushes, strain gages, and brush-wear telemetry. A single rotating flywheel, supported by a single bearing, provides the gyroscopic stability and the momentum interchange required to keep one side of the satellite facing the earth. Magnetic torquing against the earth's magnetic field eliminates the need for expendable propellants, which would limit satellite life in orbit.

We devised a global optimization (GO) strategy for optimizing molecular properties with respect to both geometry and chemical composition. A relative index of thermodynamic stability (RITS) is introduced to allow meaningful energy comparisons between different chemical species. We use the RITS by itself, or in combination with another calculated property, to create an objective function F to be minimized. Including the RITS in the definition of F ensures that the solutions have some degree of thermodynamic stability. We illustrate how the GO strategy works with three test applications, with F calculated in the framework of Kohn-Sham density functional theory (KS-DFT) with the Perdew-Burke-Ernzerhof exchange-correlation functional. First, we searched the composition and configuration space of CmHnNpOq (m = 0-4, n = 0-10, p = 0-2, q = 0-2, and 2 ≤ m + n + p + q ≤ 12) for stable molecules. The GO discovered familiar molecules like N2, CO2, acetic acid, acetonitrile, ethane, and many others, after a small number (5000) of KS-DFT energy evaluations. Second, we carried out a GO of the geometry of CumSnn+ (m = 1, 2 and n = 9-12). A single GO run produced the same low-energy structures found in an earlier study where each CumSnn+ species had been optimized separately. Finally, we searched bimetallic clusters AmBn (3 ≤ m + n ≤ 6; A, B = Li, Na, Al, Cu, Ag, In, Sn, Pb) for species and configurations having a low RITS and a large energy gap (Eg) between the highest occupied molecular orbital (MO) and the lowest unoccupied MO. We found seven bimetallic clusters with Eg > 1.5 eV. PMID:26772561

The research described in this product was performed in part in the Environmental Molecular Sciences Laboratory, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory. Using broken-symmetry unrestricted density functional theory quantum mechanical (QM) methods in concert with mixed quantum mechanics/molecular mechanics (QM/MM) methods, the hydroxylation of methane and substituted methanes by intermediate Q in methane monooxygenase hydroxylase (MMOH) has been quantitatively modeled. This protocol allows the protein environment to be included throughout the calculations and its effects (electrostatic, van der Waals, strain) upon the reaction to be accurately evaluated. With the current results, recent kinetic data for CH₃X (X = H, CH₃, OH, CN, NO₂) substrate hydroxylation in MMOH (Ambundo, E. A.; Friesner, R. A.; Lippard, S. J. J. Am. Chem. Soc. 2002, 124, 8770-8771) can be rationalized. Results for methane, which provide a quantitative test of the protocol, including a substantial kinetic isotope effect (KIE), are in reasonable agreement with experiment. Specific features of the interaction of each of the substrates with MMO are illuminated by the QM/MM modeling, and the resulting effects upon substrate binding are quantitatively incorporated into the calculations. The results as a whole point to the success of the QM/MM methodology and enhance our understanding of MMOH catalytic chemistry. We also identify systematic errors in the evaluation of the free energy of binding of the Michaelis complexes of the substrates, which most likely arise from inadequate sampling and/or the use of harmonic approximations to evaluate the entropy of the complex. More sophisticated sampling methods will be required to achieve greater accuracy in this aspect of the calculation.

The interaction of terbutaline sulfate (TS) with calf thymus DNA (ctDNA) was investigated by fluorescence quenching, UV-vis absorption, viscosity measurements, ionic strength effects, DNA melting experiments, and molecular docking. The binding constants (Ka) of TS to ctDNA were determined as 4.92×10⁴, 1.26×10⁴ and 1.16×10⁴ L mol⁻¹ at 17, 27 and 37 °C, respectively. Stern-Volmer plots suggested that the quenching of TS fluorescence by ctDNA was static. The absorption spectra of TS with ctDNA revealed a slight blue shift and a hyperchromic effect. The relative viscosity of ctDNA was hardly changed by TS, and the melting temperature varied only slightly. For the TS-ctDNA system, the fluorescence intensity decreased with increasing ionic strength. Also, the Ka for TS with double-stranded DNA (dsDNA) was clearly weaker than that for TS with single-stranded DNA (ssDNA). All these results revealed that the binding mode of TS with ctDNA should be groove binding. The enthalpy and entropy changes suggested that van der Waals forces or hydrogen bonds were the main binding forces between TS and ctDNA. Furthermore, the quantum yield of TS was measured by comparison with a standard solution. Based on Förster resonance energy transfer (FRET) theory, the binding distance between the acceptor and donor was calculated. Molecular docking showed that TS was a minor-groove binder of ctDNA and preferentially bound to A-T rich regions. PMID:26123508
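The temperature dependence of Ka quoted above can itself be turned into the thermodynamic parameters via a two-point van 't Hoff analysis; both ΔH and ΔS come out negative, the usual signature of van der Waals forces and hydrogen bonding. A sketch using the abstract's Ka values at 17 °C and 37 °C:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def vant_hoff(K1, T1, K2, T2):
    """Two-point van 't Hoff estimate from ln K = -dH/(R*T) + dS/R,
    assuming dH and dS are constant over the temperature range."""
    dH = -R * math.log(K1 / K2) / (1.0 / T1 - 1.0 / T2)
    dS = dH / T1 + R * math.log(K1)
    return dH, dS

# Ka (L/mol) from the abstract at 290.15 K (17 °C) and 310.15 K (37 °C):
dH, dS = vant_hoff(4.92e4, 290.15, 1.16e4, 310.15)
# Both come out negative (dH on the order of tens of kJ/mol below zero).
```

Negative ΔH and ΔS together are the conventional Ross-Subramanian criterion for van der Waals / hydrogen-bonding interactions, consistent with the abstract's conclusion.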

The synergistic effect of combining molecular imprinting and surface acoustic wave (SAW) technologies for the selective and label-free detection of sulfamethizole, as a model antibiotic, in an aqueous environment was demonstrated. A molecularly imprinted polymer (MIP) for sulfamethizole (SMZ) selective recognition was prepared in the form of a homogeneous thin film on the sensing surfaces of a SAW chip by oxidative electropolymerization of m-phenylenediamine (mPD) in the presence of SMZ acting as a template. Special attention was paid to the rational selection of the functional monomer using computational and spectroscopic approaches. SMZ template incorporation and its subsequent release from the polymer were supported by IR microscopic measurements. Precise control of the thicknesses of the SMZ-MIP and respective nonimprinted reference films (NIP) was achieved by correlating the electrical charge dosage during electrodeposition with spectroscopic ellipsometry measurements, in order to ensure accurate interpretation of label-free responses originating from the MIP-modified sensor. The fabricated SMZ-MIP films were characterized in terms of their binding affinity and selectivity toward the target by analyzing the binding kinetics recorded with the SAW system. The SMZ-MIPs had an SMZ binding capacity more than eight times higher than the respective NIP and were able to discriminate among structurally similar molecules, i.e., sulfanilamide and sulfadimethoxine. The presented approach for the facile integration of a sulfonamide antibiotic-sensing layer with SAW technology allowed observation of the real-time binding events of the target molecule at nanomolar concentration levels and could be potentially suitable for cost-effective fabrication of a multianalyte chemosensor for analysis of hazardous pollutants in an aqueous environment. PMID:26704414

Although several studies have recently addressed phylogenetic relationships among Asian pikas (Ochotona spp.), the North American species have been relatively neglected and their monophyly generally unquestioned or assumed. Given the high degree of intraspecific diversity in pelage and call structure, the recent identification of previously unrecognized species of pika in Asia, and the increasing evidence for multiple trans-Beringian dispersals in several small mammal lineages, the monophyly of North American pikas warrants reexamination. In addition, previous studies have applied an externally calibrated rate to examine the timing of diversification within the genus. This method has been increasingly shown to return results that, at the very least, are overly narrow in their confidence intervals, and at worst can be entirely spurious. For this study we combined GenBank sequences from the mitochondrial genes cyt b and ND4 with newly generated sequence data from O. hyperborea and O. collaris to investigate the origin of the North American lineages and the timing of phylogenetic diversification within the genus Ochotona. Specifically, we address three goals: (1) summarize and reanalyze the molecular evidence for relationships within the genus using statistically supported models of evolution; (2) add additional sequences from O. collaris and O. hyperborea to rigorously test the monophyly of North American pikas; (3) examine the timing of diversification within the genus using relaxed molecular clock methods. We found no evidence of multiple trans-Beringian dispersals into North America, thereby supporting the traditional hypothesis of a single invasion of North America. We also provide evidence that the major splits within the genus occurred in the Miocene, and that the Nearctic pikas diverged sometime before the Pleistocene. PMID:19501176

"The truth is, the Science of Nature has been already too long made only a work of the Brain and the Fancy: It is now high time that it should return to the plainness and soundness of Observations on material and obvious things," proudly declared Robert Hooke in his highly successful picture book of microscopic and telescopic images, "Micrographia" in 1665. Hooke's statement has remained true in chemistry, where a considerable work of the brain and the fancy is still necessary. Single-molecule, real-time transmission electron microscope (SMRT-TEM) imaging at an atomic resolution now allows us to learn about molecules simply by watching movies of them. Like any dream come true, the new analytical technique challenged the old common sense of the communities, and offers new research opportunities that are unavailable by conventional methods. With its capacity to visualize the motions and the reactions of individual molecules and molecular clusters, the SMRT-TEM technique will become an indispensable tool in molecular science and the engineering of natural and synthetic substances, as well as in science education. PMID:23280645

We analyzed by next-generation sequencing (NGS) 67 epilepsy genes in 19 patients with different types of either isolated or syndromic epileptic disorders, and in 15 controls, to investigate whether a quick and cheap molecular diagnosis could be provided. The average number of nonsynonymous and splice-site mutations per subject was similar in the two cohorts, indicating that, even with relatively small targeted platforms, finding the disease gene is not a univocal process. Our diagnostic yield was 47%, with nine cases in which we identified a very likely causative mutation. In most of them no interpretation would have been possible in the absence of detailed phenotype and familial information. Seven out of 19 patients had a phenotype suggesting the involvement of a specific gene. Disease-causing mutations were found in six of these cases. Among the remaining patients, we could find a probably causative mutation in only three. None of the genes affected in the latter cases had been suspected a priori. Our protocol requires 8-10 weeks, including the investigation of the parents, with a cost per patient comparable to sequencing of 1-2 medium-to-large-sized genes by conventional techniques. The platform we used, although providing much less information than whole-exome or whole-genome sequencing, has the advantage that it can also be run on 'benchtop' sequencers, combining rapid turnaround times with higher manageability. PMID:24848745

Lithium-ion batteries are vital energy storage devices due to their high specific energy density, lack of memory effect, and long cycle life. While they are predominantly used in small consumer electronics, new strategies for improving battery safety and lifetime are critical to the successful implementation of high-capacity, fast-charging materials required for advanced Li-ion battery applications. Currently, the presence of a volatile, combustible electrolyte and an oxidizing agent (lithium oxide cathodes) makes the Li-ion cell susceptible to fire and explosions. Thermal overheating, electrical overcharging, or mechanical damage can trigger thermal runaway and, if left unchecked, combustion of battery materials. To improve battery safety, autonomic, thermally induced shutdown of Li-ion batteries is demonstrated by depositing thermoresponsive polymer microspheres onto battery anodes. When the internal temperature of the cell reaches a critical value, the microspheres melt and conformally coat the anode and/or separator with an ion-insulating barrier, halting Li-ion transport and shutting down the cell permanently. Charge and discharge capacity is measured for Li-ion coin cells containing microsphere-coated anodes or separators as a function of capsule coverage. Scanning electron microscopy images of electrode surfaces from cells that have undergone autonomic shutdown provide evidence of melting, wetting, and re-solidification of polyethylene (PE) into the anode and of polymer film formation at the anode/separator interface. As an extension of this autonomic shutdown approach, a particle-based separator capable of performing autonomic shutdown, but which reduces the shorting hazard posed by current bi- and tri-polymer commercial separators, is presented. This dual-particle separator is composed of hollow glass microspheres acting as a physical spacer between electrodes, and PE microspheres to impart autonomic shutdown functionality. An oil-immersion technique is

Absorption of a high energy photon (greater than 6 eV) by an isolated molecule results in the formation of highly excited quasi-discrete or continuum states which evolve through a wide range of direct and indirect photochemical processes. These are: photoionization and autoionization, photodissociation and predissociation, and fluorescence. The ultimate goal is to understand the dynamics of the excitation and decay processes and to quantitatively measure the absolute partial cross sections for all processes which occur in photoabsorption. Typical experimental techniques and the status of observational results of particular interest to solar system observations are presented.

We extend the application of the adaptive resolution technique (AdResS) to liquid systems composed of alkane chains of different lengths. The aim of the study is to develop and test the modifications of AdResS required in order to handle the change of representation of large molecules. The robustness of the approach is shown by calculating several relevant structural properties and comparing them with the results of full atomistic simulations. The extended scheme represents a robust prototype for the simulation of macromolecular systems of interest in several fields, from material science to biophysics.

This exploratory study compares and contrasts two types of critical thinking techniques: one a philosophical analysis technique, the other an applied ethical analysis technique. The two techniques are used to analyse an ethically challenging ICT-related situation raised in a recent media article, to demonstrate their ability to develop the ethical analysis skills of…

A new modeling technique for arriving at the three-dimensional (3-D) structure of an RNA stem-loop has been developed, based on a conformational search by a genetic algorithm followed by refinement by energy minimization. The genetic algorithm simultaneously optimizes a population of conformations in the predefined conformational space and generates 3-D models of RNA. The fitness function to be optimized by the algorithm has been defined to reflect the satisfaction of known conformational constraints. In addition to a term for distance constraints, the fitness function contains a term to constrain each local conformation to be near a prepared template conformation. The technique has been applied to two loops of tRNA, the anticodon loop and the T-loop, and has found good models with small root mean square deviations from the crystal structure. Slightly different models have also been found for the anticodon loop. The analysis of a collection of alternative models has revealed statistical features of local variations at each base position. PMID:7533901
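The two-term fitness function described above can be sketched in miniature. Everything below is illustrative: conformations are 2-D point lists rather than RNA models, and the constraint list, weights, and coordinates are invented for the example.

```python
import math

def fitness(coords, dist_constraints, template, w_dist=1.0, w_tmpl=0.1):
    """Toy fitness in the spirit of the abstract: a penalty for violated
    distance constraints plus a penalty for deviating from a template
    conformation (lower is better; a GA would maximize the negative)."""
    penalty = 0.0
    for i, j, lo, hi in dist_constraints:
        d = math.dist(coords[i], coords[j])
        if d < lo:
            penalty += (lo - d) ** 2
        elif d > hi:
            penalty += (d - hi) ** 2
    template_dev = sum(math.dist(p, q) ** 2 for p, q in zip(coords, template))
    return w_dist * penalty + w_tmpl * template_dev

template = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
constraints = [(0, 2, 1.5, 2.5)]  # positions 0 and 2 must lie 1.5-2.5 apart
good = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
bad = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
f_good = fitness(good, constraints, template)
f_bad = fitness(bad, constraints, template)
```

The template term plays the role described in the abstract: it keeps each local conformation near a prepared reference while the distance term enforces the experimentally known constraints.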

Raoultella terrigena ATCC 33257, a representative of the coliform group, is commonly used as a challenge organism in water purifier efficacy testing. In addition to being time consuming, traditional culturing techniques and metabolic identification systems (including automated systems) fail to accurately differentiate this organism from its closely related neighbors within the Enterobacteriaceae. Molecular techniques, such as real-time quantitative polymerase chain reaction (qPCR) and enterobacterial repetitive intergenic consensus (ERIC)-PCR fingerprinting, are preferred methods of detection because of their accuracy, reproducibility, specificity, and sensitivity, along with shorter turnaround times. ERIC-PCR performed with the 1R primer set demonstrated stable, unique banding patterns (~800 and ~300 bp) for R. terrigena ATCC 33257, different from the patterns observed for R. planticola and R. ornithinolytica. The primer pair developed from the gyrase A (gyrA) sequence of R. terrigena for the SYBR Green qPCR assay, designed using the AlleleID® 7.0 primer-probe design software, was highly specific and sensitive for the target organism. The sensitivity of the assay was 10¹ colony forming units (CFU)/ml for whole cells and 4.7 fg with genomic DNA. The primer pair was successful in determining the concentration (5.5 ± 0.3 × 10⁶ CFU/ml) of R. terrigena in water samples spiked with equal concentrations of Escherichia coli and R. terrigena. Based on these results from the ERIC-PCR and SYBR Green qPCR assays, these molecular techniques can be efficiently used for rapid identification and quantification of R. terrigena during water purifier testing. PMID:21132347
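Absolute quantification by qPCR of this kind rests on a standard curve, Ct = slope·log10(concentration) + intercept. The slope and intercept below are hypothetical, chosen only to show how a measured Ct maps back to CFU/ml and how the slope maps to amplification efficiency:

```python
def quantify(ct, slope, intercept):
    """Interpolate an unknown sample on a qPCR standard curve
    Ct = slope * log10(CFU/ml) + intercept."""
    return 10.0 ** ((ct - intercept) / slope)

def efficiency(slope):
    """Amplification efficiency E = 10**(-1/slope) - 1;
    perfect doubling per cycle gives slope ≈ -3.32 and E ≈ 1."""
    return 10.0 ** (-1.0 / slope) - 1.0

# Hypothetical standard curve (not fitted to the paper's data):
slope, intercept = -3.32, 38.0
cfu_per_ml = quantify(15.6, slope, intercept)  # on the order of 10^6 CFU/ml
eff = efficiency(slope)                        # near 1: near-perfect doubling
```

A slope far from -3.32 would indicate inhibition or poor primer design, which is why efficiency is routinely checked alongside the curve's linear range.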

A systematic vibrational spectroscopic assignment and analysis of Carbamazepine has been carried out using FT-IR, FT-Raman and UV spectral data. The vibrational analysis was aided by electronic structure calculations - ab initio (RHF) and hybrid density functional (B3LYP) methods performed with the standard 6-31G(d,p) basis set. Molecular equilibrium geometries, electronic energies, natural bond orbital analysis, harmonic vibrational frequencies and IR intensities have been computed. A detailed interpretation of the vibrational spectra of the molecule has been made on the basis of the Potential Energy Distribution (PED) calculated with the VEDA program. The UV-visible spectrum of the compound was also recorded, and electronic properties such as the HOMO and LUMO energies and λmax were determined by the time-dependent HF/6-311++G(d,p) method. The thermodynamic functions of the title molecule were also computed using the RHF and DFT methods. Restricted Hartree-Fock and density functional theory-based nuclear magnetic resonance (NMR) calculations were also performed and used for assigning the ¹³C and ¹H NMR chemical shifts of Carbamazepine. PMID:25682215

The emergence of new applications of molecular dynamics (MD) simulation calls for the development of mass-statting procedures that insert or delete particles on-the-fly. In this paper we present a new mass-stat which we term FADE, because it gradually "fades-in" (inserts) or "fades-out" (deletes) molecules over a short relaxation period within an MD simulation. FADE applies a time-weighted relaxation to the intermolecular pair forces between the inserting/deleting molecule and any neighbouring molecules. The weighting function we propose in this paper is a piecewise polynomial that can be described entirely by two parameters: the relaxation time scale and the order of the polynomial. FADE inherently conserves overall system momentum independent of the form of the weighting function. We demonstrate various simulations of insertions of atomic argon, polyatomic TIP4P water, polymer strands, and C60 Buckminsterfullerene molecules. We propose FADE parameters and a maximum density variation per insertion-instance that restrict spurious potential-energy changes entering the system to within desired tolerances. We also demonstrate that FADE compares very well to an existing insertion algorithm called USHER in terms of accuracy, insertion rate (in dense fluids), and computational efficiency. The USHER algorithm is applicable to monatomic and water molecules only, but we demonstrate that FADE can be applied generally to various forms and sizes of molecules, such as polymeric molecules of long aspect ratio and spherical carbon fullerenes with hollow interiors.
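
The fade-in/fade-out weighting can be illustrated with a simple monomial ramp. The exact piecewise polynomial used by FADE may differ; this is one plausible form defined by the same two parameters, the relaxation time scale and the polynomial order:

```python
def fade_weight(t, tau, n=3, inserting=True):
    """One plausible FADE-style weight: ramps from 0 to 1 (insertion) or
    1 to 0 (deletion) over the relaxation time tau, with polynomial order n.
    Illustrative sketch, not the paper's exact weighting function."""
    x = min(max(t / tau, 0.0), 1.0)        # clamp elapsed fraction to [0, 1]
    w = x ** n                             # fade-in ramp
    return w if inserting else 1.0 - w     # fade-out reverses the ramp

def scaled_pair_force(f_pair, t, tau, n=3, inserting=True):
    """Scale an intermolecular pair force on the inserting/deleting molecule."""
    return f_pair * fade_weight(t, tau, n, inserting)
```

Because every pair force between the fading molecule and its neighbours is scaled by the same weight, Newton's third law still holds pairwise, which is why overall momentum is conserved independent of the weighting function's form.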

We describe a diode-laser spectrometer for obtaining direct-absorption rovibrational spectra of monomers and/or weakly bound molecular complexes formed in supersonic expansions. The spectrometer incorporates a tunable semiconductor diode-laser source and a pulsed-gas slit nozzle. White cell optics are used in the vacuum chamber to increase the effective path length, and a Fabry-Perot etalon is used for relative frequency calibration. Stabilization of the source output is accomplished by locking onto a zero crossing of the etalon fringe-spacing pattern with a gated integrator. The diode laser is scanned rapidly (~0.2 cm⁻¹/ms) to modulate absorption signals at frequencies which can be electronically filtered from source noise. For 2000 scans, absorbances as small as 1.3×10⁻⁵ (0.003% absorption) can be detected. Amplitude fluctuations in the detected signal due to interference effects in the optics and gain variations in the diode laser are eliminated by recording data with and without gas flow from the nozzle, then performing the appropriate subtractions. Because source drift and multiple crossing-angle effects contribute ≤0.0005 cm⁻¹, observed linewidths (0.003 cm⁻¹) were determined to be laser limited. Data obtained on the van der Waals molecule (Ar·CO) are presented and discussed.
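
The quantities underlying the detection limit are standard: absorbance from the with-gas and without-gas intensities, and a limit that improves as the square root of the number of averaged scans. A minimal sketch of these generic formulas (not the instrument's actual processing chain):

```python
import math

def absorbance(i_transmitted, i_reference):
    """Absorbance from transmitted (gas on) vs reference (gas off) intensity;
    the with/without-gas subtraction in the text removes etalon-fringe and
    laser-gain structure before this ratio is taken."""
    return -math.log10(i_transmitted / i_reference)

def min_detectable_absorbance(single_scan_limit, n_scans):
    """Signal averaging: for uncorrelated noise the detection limit
    improves as 1/sqrt(N)."""
    return single_scan_limit / math.sqrt(n_scans)
```

For small absorbances, A ≈ (1 - I/I0)/ln(10), so an absorbance of 1.3×10⁻⁵ corresponds to roughly 0.003% absorption, consistent with the figures quoted above.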

Fibromyalgia (FM) is a complex disorder that affects up to 5% of the general population worldwide. Its pathophysiological mechanisms are difficult to identify, and current drug therapies demonstrate limited effectiveness. Both mitochondrial dysfunction and coenzyme Q10 (CoQ10) deficiency have been implicated in FM pathophysiology. We investigated the effect of CoQ10 supplementation in a randomized, double-blind, placebo-controlled trial evaluating the clinical and gene expression effects of forty days of CoQ10 supplementation (300 mg/day) in 20 FM patients. This study was registered with controlled-trials.com (ISRCTN 21164124). A marked clinical improvement was evident after CoQ10 versus placebo treatment, with a reduction in the FIQ (p<0.001), most prominently in the pain (p<0.001), fatigue, and morning tiredness (p<0.01) FIQ subscales. Furthermore, we observed a substantial reduction in the pain visual scale (p<0.01) and in tender points (p<0.01), together with recovery of inflammation, antioxidant enzyme, mitochondrial biogenesis, and AMPK gene expression levels, associated with increased AMPK phosphorylation. These results support the hypothesis that CoQ10 has a potential therapeutic effect in FM and indicate new potential molecular targets for therapy of this disease. AMPK could be implicated in the pathophysiology of FM. PMID:23458405

In order to begin to prepare a novel orthopedic implant that mimics the natural bone environment, the objective of this in vitro study was to synthesize nanocrystalline hydroxyapatite (NHA) and coat it on titanium (Ti) using molecular plasma deposition (MPD). NHA was synthesized through a wet chemical process followed by a hydrothermal treatment. NHA and micron-sized hydroxyapatite (MHA) coatings were prepared by processing NHA coatings at 500°C and 900°C, respectively. The coatings were characterized before and after sintering using scanning electron microscopy, atomic force microscopy, and X-ray diffraction. The results revealed that the post-MPD heat treatment of up to 500°C effectively restored the structural and topographical integrity of NHA. To determine the in vitro biological responses to the MPD-coated surfaces, the attachment and spreading of osteoblasts (bone-forming cells) on the uncoated, NHA-coated, and MHA-coated anodized Ti were investigated. Most importantly, the NHA-coated substrates supported a larger number of adherent cells than the MHA-coated and uncoated substrates. The morphology of these cells was assessed by scanning electron microscopy, and the observed shapes were different for each substrate type. The present results are the first report of using MPD for hydroxyapatite coatings on Ti to enhance osteoblast responses, and they encourage further studies on MPD-based hydroxyapatite coatings on Ti for improved orthopedic applications. PMID:25609958

The narrow genetic base and complex allotetraploid genome of cotton (Gossypium hirsutum L.) are stimulating efforts to obtain the polymorphism required for marker-based breeding. The availability of draft genome sequences of G. raimondii and G. arboreum, together with next-generation sequencing (NGS) technologies, has facilitated the development of high-throughput marker technologies in cotton. The concepts of genetic diversity, QTL mapping, and marker-assisted selection (MAS) are evolving into the more efficient concepts of linkage disequilibrium, association mapping, and genomic selection, respectively. The objective of the current review is to analyze the pace of evolution of molecular marker technologies in cotton during the last ten years in the following four areas: (i) comparative analysis of low- and high-throughput marker technologies available in cotton, (ii) genetic diversity in the available wild and improved gene pools of cotton, (iii) identification of the genomic regions within the cotton genome underlying economic traits, and (iv) marker-based selection methodologies. Moreover, the applications of marker technologies to enhance breeding efficiency in cotton are also summarized. The aforementioned genomic technologies and the integration of several other omics resources are expected to enhance cotton productivity and meet global fiber quantity and quality demands. PMID:25401149
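
As an example of the linkage-disequilibrium statistics underpinning association mapping, the standard r² measure between two biallelic markers can be computed directly from the haplotype and allele frequencies:

```python
def ld_r2(p_ab, p_a, p_b):
    """Standard r^2 linkage disequilibrium between two biallelic loci.
    p_ab: frequency of the AB haplotype; p_a, p_b: allele frequencies of
    A and B.  D = p_ab - p_a*p_b, and r^2 = D^2 / (p_a(1-p_a) p_b(1-p_b))."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1.0 - p_a) * p_b * (1.0 - p_b))
```

r² ranges from 0 (loci in linkage equilibrium) to 1 (perfectly correlated markers); in genomic selection, dense markers are chosen so that every QTL is in high r² with at least one marker.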

Arthrogryposis, Renal dysfunction and Cholestasis (ARC) syndrome is a multi-system autosomal recessive disorder caused by germline mutations in VPS33B. The detection of germline VPS33B mutations removes the need for diagnostic organ biopsies (these carry a >50% risk of life-threatening haemorrhage due to platelet dysfunction); however, VPS33B mutations are not detectable in approximately 25% of patients. To further define the molecular basis of ARC, we performed mutation analysis and mRNA and protein studies in patients with a clinical diagnosis of ARC. Here we report novel mutations in VPS33B in patients from Eastern Europe and South East Asia. One of the mutations was present in seven unrelated Korean patients. Reduced expression of VPS33B and a cellular phenotype were detected in fibroblasts from patients clinically diagnosed with ARC, with and without known VPS33B mutations. One mutation-negative patient was found to have normal mRNA and protein levels. This patient's clinical condition improved and he is alive at the age of 2.5 years. Thus we show that all patients with a classical clinical course of ARC had decreased expression of VPS33B, whereas normal VPS33B expression was associated with good prognosis despite an initial diagnosis of ARC. PMID:18853461

Certain molecular structures such as carbon nanotubes (CNTs) can potentially improve the thermal conductivity of composite materials. However, their thermal boundary resistance is an obstacle to their effective implementation as a medium for heat flow. We are concerned with overcoming this Kapitza resistance with the aid of chemical functional groups. These functional groups will, in principle, eliminate phonon mismatch between our structures and their matrix. The result will maximize the transmission of thermal vibrations to and from the surrounding medium. We develop a method to predict the thermal properties of our functionalized materials through the calculation of vibrational normal modes and Green's functions. We show how the configuration of attached functional groups affects the samples' thermal conductivity (κ) and attempt to find the arrangement in which κ is maximized. We make explicit comparisons with thermal conductivity experiments done on nanocomposites of functionalized and pristine CNTs. We discuss how the bonds connecting the functional groups to the CNT affect κ. We compare these results to measurements on our particular synthesized materials and discuss how to better optimize their design. This work was supported by NSF Grant DMR-1310407.
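
The normal-mode step of such a calculation can be sketched in a few lines: mass-weight the Hessian of the interatomic potential, diagonalize it, and convert the eigenvalues to vibrational frequencies. This is a generic textbook sketch, not the authors' Green's-function code:

```python
import numpy as np

def normal_modes(hessian, masses):
    """Vibrational normal modes from a (3N x 3N) Hessian and N atomic masses.
    Mass-weight the Hessian, diagonalize, and take omega = sqrt(eigenvalue);
    negative eigenvalues (numerical noise at minima) are clipped to zero."""
    m = np.repeat(np.asarray(masses, dtype=float), 3)   # one mass per x,y,z
    mw = hessian / np.sqrt(np.outer(m, m))              # mass-weighted Hessian
    evals, evecs = np.linalg.eigh(mw)                   # symmetric eigenproblem
    freqs = np.sqrt(np.clip(evals, 0.0, None))          # angular frequencies
    return freqs, evecs
```

The resulting mode frequencies and eigenvectors are the ingredients for phonon-transmission (Green's-function) estimates of how functional-group bonds couple the CNT to its matrix.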

Globally, there is a long history in reprocessing of evaluating the shipper/receiver difference (SRD) for spent nuclear fuel (SNF) received and processed. Typically, the declared shipper's values for uranium and plutonium in SNF (based on calculations involving the initial manufacturer's data and reactor operating history) are used as the input quantities to the head-end process of the facility. Problems have been encountered when comparing these values with measured results of the input accountability tank contents. A typical comparison yields a systematic bias indicating a loss of 5-7 percent of the plutonium (Pu) and approximately 1 percent of the uranium (U). Studies suggest that such deviation can be attributed to the non-linear nature of the axial burnup profile of the SNF. Oak Ridge National Laboratory and Texas A&M University are co-investigating the development of a new method, via nondestructive assay (NDA) techniques, to improve the accuracy of burnup and Pu content quantification. Two major components have been identified to achieve this objective. The first component calculates a measurement-based burnup profile along the axis of a fuel rod. Gamma-ray data are collected at numerous locations along the axis of the fuel rod using a high-purity germanium (HPGe) detector designed for a wide range of gamma-ray energies. Using two fission products, ¹³⁷Cs and ¹³⁴Cs, the burnup is calculated at each measurement location and a profile created along the axis of the rod from the individual measurement locations. The second component measures the U/Pu ratio using an HPGe detector configured for relatively low-energy gamma rays, including x-rays. Fluorescence x-rays from U and Pu are measured and compared to the U/Pu ratio determined from a destructive analysis of the sample. This will be used to establish a relationship between the measured and actual values. This relationship will be combined with the burnup analysis results to establish a relationship
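
The burnup-profile component can be illustrated with a hedged sketch: burnup is commonly correlated with the ¹³⁴Cs/¹³⁷Cs activity ratio through a calibration anchored to destructive assay. The linear form and the calibration constant below are illustrative assumptions only, not the project's actual calibration:

```python
def burnup_from_cs_ratio(a_cs134, a_cs137, k=30.0):
    """Hypothetical linear calibration BU [GWd/tU] = k * (A_134Cs / A_137Cs).
    k is a placeholder constant; in practice it would come from destructive
    analysis of reference samples and depend on cooling time."""
    return k * (a_cs134 / a_cs137)

def axial_profile(measurements, k=30.0):
    """Apply the estimate at each axial position (z, A_134Cs, A_137Cs) to
    build the measurement-based burnup profile along the rod."""
    return [(z, burnup_from_cs_ratio(a4, a7, k)) for z, a4, a7 in measurements]
```

Because the ratio of the two cesium activities, rather than either absolute activity, drives the estimate, detector-efficiency and geometry factors largely cancel at each axial location.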

Recent advances in turbulence modeling have brought more and more sophisticated turbulence closures (e.g. k-ε, k-ε-v²-f, Second Moment Closures), where the governing equations for the model parameters involve advection, diffusion and reaction terms. Numerical instabilities can be generated by the dominant advection or reaction terms. Classical stabilized formulations such as the Streamline-Upwind/Petrov-Galerkin (SUPG) formulation (Brooks and Hughes, Comput Methods Appl Mech Eng 32:199-255, 1982; Hughes and Tezduyar, Comput Methods Appl Mech Eng 45:217-284, 1984) are very well suited for preventing the numerical instabilities generated by the dominant advection terms. A different stabilization, however, is needed for instabilities due to the dominant reaction terms. An additional stabilization term, called the diffusion for reaction-dominated (DRD) term, was introduced by Tezduyar and Park (Comput Methods Appl Mech Eng 59:307-325, 1986) for that purpose and improves the SUPG performance. In recent years a new class of variational multiscale (VMS) stabilizations (Hughes, Comput Methods Appl Mech Eng 127:387-401, 1995) has been introduced, and this approach, in principle, can deal with advection-diffusion-reaction equations. However, it was pointed out in Hauke (Comput Methods Appl Mech Eng 191:2925-2947) that this class of methods also needs some improvement in the presence of high reaction rates. In this work we show the benefits of using the DRD operator to enhance core stabilization techniques such as the SUPG and VMS formulations. We also propose a new operator called the DRDJ (DRD with the local variation jump) term, targeting the reduction of numerical oscillations in the presence of both high reaction rates and sharp solution gradients. The methods are evaluated in the context of two stabilized methods: the classical SUPG formulation and a recently developed VMS formulation called the V-SGS (Corsini et al., Comput Methods Appl Mech Eng 194:4797-4823, 2005
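
For a 1D advection-diffusion-reaction model problem, stabilization parameters of this family are often built by combining the advective, diffusive and reactive limits in inverse form. The definition below is one widely used variant, shown for illustration only; the papers cited in the text each define their own τ:

```python
import math

def tau_supg_drd(a, nu, s, h):
    """One common inverse combination of the three limiting stabilization
    parameters for the 1D advection-diffusion-reaction equation:
        tau = 1 / sqrt( (2|a|/h)^2 + (4 nu / h^2)^2 + s^2 )
    a: advection speed, nu: diffusivity, s: reaction coefficient,
    h: element size.  Recovers h/(2|a|), h^2/(4 nu) and 1/s in the
    advection-, diffusion- and reaction-dominated limits respectively."""
    return 1.0 / math.sqrt((2.0 * abs(a) / h) ** 2
                           + (4.0 * nu / h ** 2) ** 2
                           + s ** 2)
```

The reactive term s in the root is what a DRD-type enhancement contributes: without it, τ does not shrink as the reaction rate grows, and node-to-node oscillations appear in reaction-dominated regions.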

The present research was aimed at the enhancement of the dissolution rate of atorvastatin calcium by the solid dispersion technique using modified locust bean gum. Solid dispersions (SD) using modified locust bean gum were prepared by the modified solvent evaporation method. Other mixtures were also prepared by physical mixing, co-grinding, and the kneading method. The locust bean gum was subjected to heat for modification. The prepared solid dispersions and other mixtures were evaluated for equilibrium solubility studies, content uniformity, FTIR, DSC, XRD, in vitro drug release, and in vivo pharmacodynamic studies. The equilibrium solubility was enhanced in the solid dispersions (in a drug:polymer ratio of 1:6) and other mixtures such as the co-grinding mixture (CGM) and kneading mixture (KM). Maximum dissolution rate was observed in the solid dispersion batch SD3 (i.e. 50% within 15 min) with maximum drug release after 2 h (80%) out of all solid dispersions. The co-grinding mixture also exhibited a significant enhancement in the dissolution rate among the other mixtures. FTIR studies revealed the absence of drug-polymer interaction in the solid dispersions. Minor shifts in the endothermic peaks of the DSC thermograms of SD3 and CGM indicated slight changes in drug crystallinity. XRD studies further confirmed the results of DSC and FTIR. Topological changes were observed in SEM images of SD3 and CGM. In vivo pharmacodynamic studies indicated an improved efficacy of the optimized batch SD3 as compared to the pure drug at a dose of 3 mg/kg/day. Modified locust bean gum can be a promising carrier for solubility enhancement of poorly water-soluble drugs. The lower viscosity and wetting ability of MLBG, reduction in particle size, and decreased crystallinity of the drug are responsible for the dissolution enhancement of atorvastatin. The co-grinding mixture can be a good alternative to solid dispersions prepared by modified solvent evaporation due to its ease of

In Chapter 1, an introduction to the basic principles of MRI is given, including the physical principles, basic pulse sequences, and basic hardware. Following the introduction, five published and as-yet-unpublished papers for improving the utility of MRI are presented. Chapter 2 discusses a small-rodent imaging system that was developed for a clinical 3 T MRI scanner. The system integrated specialized radiofrequency (RF) coils with an insertable gradient, enabling 100 µm isotropic resolution imaging of the guinea pig cochlea in vivo, doubling the body gradient strength, slew rate, and contrast-to-noise ratio, and resulting in twice the signal-to-noise ratio (SNR) when compared to the smallest conforming birdcage. Chapter 3 discusses a system using BOLD MRI to measure T2* and invasive fiberoptic probes to measure renal oxygenation (pO2). The significance of this experiment is that it demonstrated previously unknown physiological effects on pO2, such as breath-holds that caused an immediate (<1 s) pO2 decrease (~6 mmHg), and bladder pressure that caused pO2 increases (~6 mmHg). Chapter 4 determined the correlation between indicators of renal health and renal fat content. The R² correlation between renal fat content and eGFR, serum cystatin C, urine protein, and BMI was less than 0.03, with a sample size of ~100 subjects, suggesting that renal fat content will not be a useful indicator of renal health. Chapter 5 presents a hardware and pulse-sequence technique for acquiring multinuclear ¹H and ²³Na data within the same pulse sequence. Our system demonstrated a very simple, inexpensive solution to SMI, acquired both nuclei on two ²³Na channels using external modifications, and is the first demonstration of radially acquired SMI. Chapter 6 discusses a composite sodium and proton breast array that demonstrated a 2-5x improvement in sodium SNR and similar proton SNR when compared to a large coil with a linear sodium and linear proton channel. This coil is unique in that sodium

Most of the world's great earthquakes (Mw > 8.5, usually known as mega-earthquakes) occur at shallow depths along the subduction thrust fault (STF), i.e., the frictional interface between the subducting and overriding plates. The spatiotemporal occurrence of mega-earthquakes and the physics governing them remain ambiguous, as tragically demonstrated by the underestimation of recent megathrust events (e.g., 2011 Tohoku). To help unravel the seismic cycle at the STF, analogue modelling has become a key tool. The first properly scaled analogue models with realistic geometries (i.e., wedge-shaped) suitable for studying interplate seismicity have been realized using granular elasto-plastic [e.g., Rosenau et al., 2009] and viscoelastic materials [i.e., Corbi et al., 2013]. In particular, viscoelastic laboratory experiments realized with type A gelatin at 2.5 wt% simulate, in a simplified yet robust way, the basic physics governing the subduction seismic cycle and the related rupture process. Despite the strength of this approach, analogue earthquakes are not perfectly comparable to their natural prototype. In this work, we try to improve subduction seismic cycle analogue models by modifying the rheological properties of the analogue material and adopting a new image analysis technique (i.e., PEP - ParticlE and Prediction velocity). We test the influence of lithosphere elasticity by using type A gelatin at a greater concentration (i.e., 6 wt%). Results show that gelatin elasticity plays an important role in controlling the seismogenic behaviour of the STF, tuning the mean and maximum magnitudes of analogue earthquakes. In particular, by increasing gelatin elasticity, we observe a decreasing mean magnitude, while the maximum magnitude remains the same. Experimental results therefore suggest that lithosphere elasticity could be one of the parameters that tune the seismogenic behaviour of the STF. Increasing gelatin elasticity also implies improving similarities with the natural prototype in terms of coseismic

One source of light for shading does show all morphologic features needed for description. Additionally, more details such as fault lines, overlaps and characteristic edges of complex shell structures are clearly detected by simply changing the illumination on the shaded digital surface model. In a further study, the potential for edge detection of the individual shells will be analyzed statistically by keeping track of the local cumulative shading gradient. The results are compared to manually identified edges. In a following study phase, the detected edges will be improved by graph-cut segmentation. We assume that this technique can lead to an automatically extracted training set for object segmentation in a complex environment. The project is supported by the Austrian Science Fund (FWF P 25883-N29).
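
The illumination-dependent shading used here is, in essence, a hillshade of the digital surface model; the standard formula makes explicit why changing the light's azimuth or altitude highlights different edges:

```python
import math

def hillshade(dem, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Standard hillshade of a gridded surface model (list of rows of
    elevations).  Slope and aspect come from central differences; the shade
    value is sin(alt)*cos(slope) + cos(alt)*sin(slope)*cos(az - aspect).
    Border cells are left at 0 for simplicity."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # map azimuth to math angle
    alt = math.radians(altitude_deg)
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            dzdx = (dem[r][c + 1] - dem[r][c - 1]) / (2.0 * cellsize)
            dzdy = (dem[r + 1][c] - dem[r - 1][c]) / (2.0 * cellsize)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.sin(alt) * math.cos(slope)
                     + math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            out[r][c] = max(0.0, shade)             # clip self-shadowed cells
    return out
```

Edges whose faces tilt away from one light direction go dark, so re-rendering under several azimuths and tracking the local shading gradient, as proposed above, exposes fault lines and overlaps that a single illumination misses.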

Cerenkov luminescence endoscopy (CLE) is an optical technique that captures the Cerenkov photons emitted from highly energetic moving charged particles (β+ or β−) and can be used to monitor the distribution of many clinically available radioactive probes. A main limitation of CLE is its limited sensitivity to small concentrations of radiotracer, especially when used with a light guide. We investigated the improvement in the sensitivity of CLE brought about by using a β− radiotracer that improved the Cerenkov signal due to both higher β-particle energy and lower γ noise in the imaging optics because of the lack of positron annihilation. Methods: The signal-to-noise ratio (SNR) of 90Y was compared with that of 18F in both phantoms and small-animal tumor models. Sensitivity and noise characteristics were demonstrated using vials of activity both at the surface and beneath 1 cm of tissue. Rodent U87MG glioma xenograft models were imaged with radiotracers bound to arginine-glycine-aspartate (RGD) peptides to determine the SNR. Results: γ noise from 18F was demonstrated by both an observed blurring across the field of view and a more pronounced fall-off with distance. A decreased γ background and increased energy of the β particles resulted in a 207-fold improvement in the sensitivity of 90Y compared with 18F in phantoms. 90Y-bound RGD peptide produced a higher tumor-to-background SNR than 18F in a mouse model. Conclusion: The use of 90Y for Cerenkov endoscopic imaging enabled superior results compared with an 18F radiotracer. PMID:25300598

A total of 28 unrelated isolates of the Salmonella enterica subsp. enterica serovar dublin (S. dublin) collected during a 6-year period, as well as four samples of the S. dublin live vaccine strain Bovisaloral and its prototype strain S. dublin 442/039, were investigated by different molecular typing methods for the following reasons: (i) to find the most discriminatory method for the epidemiological typing of isolates belonging to this Salmonella serovar and (ii) to evaluate these methods for their capacity to discriminate among the live vaccine strain Bovisaloral, its prototype strain S. dublin 442/039, and field isolates of the serovar dublin. Five different plasmid profiles were observed; a virulence plasmid of 76 kbp as identified by hybridization with an spvB-spvC gene probe was present in all isolates. The detection of 16S rRNA genes and that of IS200 elements proved to be unsuitable for the epidemiological typing of S. dublin; only one hybridization pattern could be observed with each of these methods. The results obtained from macrorestriction analysis strongly depended on the choice of restriction enzyme. While the enzyme NotI yielded the lowest discriminatory index among all enzymes tested, it was the only enzyme that allowed discrimination between the Bovisaloral vaccine strain and its prototype strain. In contrast to the enzymes XbaI and SpeI, which only differentiated among the S. dublin field isolates, XhoI as well as AvrII also produced restriction fragment patterns of the Bovisaloral strain and of its prototype strain that were not shared by any of the S. dublin field isolates. Macrorestriction analysis proved to be the most discriminatory method not only for the epidemiological typing of S. dublin field isolates but also for the identification of the S. dublin live vaccine strain Bovisaloral. PMID:8904430
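
The discriminatory index referred to above is conventionally computed as Simpson's index of diversity (the Hunter-Gaston index) from the number of isolates assigned to each type:

```python
def discriminatory_index(type_counts):
    """Hunter-Gaston discriminatory index: the probability that two
    unrelated isolates sampled at random are assigned to different types.
    type_counts: number of isolates in each distinct typing pattern.
    D = 1 - (1 / (N(N-1))) * sum_j n_j (n_j - 1)."""
    n = sum(type_counts)
    return 1.0 - sum(c * (c - 1) for c in type_counts) / (n * (n - 1))
```

A method that places all isolates in one pattern (as 16S rRNA and IS200 typing did here) scores 0, while a method giving every isolate a unique pattern scores 1; comparing this index across restriction enzymes is how the most discriminatory macrorestriction scheme is chosen.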

conventional assays for clinical follow-up of patients. This review article discusses the diagnostic value of molecular methods in the evaluation of pulmonary and extrapulmonary tuberculosis in the light of the current literature. PMID:22639322

The maintenance mechanism of the supersaturated state of the poorly water-soluble drugs glibenclamide (GLB) and chlorthalidone (CLT) in hydroxypropyl methylcellulose acetate succinate (HPMC-AS) solution was investigated at a molecular level. HPMC-AS suppressed drug crystallization from supersaturated drug solution and maintained a high supersaturation level of the drugs with a small amount of HPMC-AS for 24 h. However, the dissolution of crystalline GLB in HPMC-AS solution failed to produce supersaturated concentrations, although supersaturated concentrations were achieved by adding amorphous GLB to HPMC-AS solution. HPMC-AS did not improve drug dissolution and/or solubility but efficiently inhibited drug crystallization from supersaturated drug solutions. This inhibiting effect led to long-term maintenance of the amorphous state of GLB in HPMC-AS solution. NMR measurements showed that HPMC-AS suppressed the molecular mobility of CLT depending on the supersaturation level. Highly supersaturated CLT in HPMC-AS solution formed a gel-like structure with HPMC-AS in which the molecular mobility of the CLT was strongly suppressed. The gel-like structure of HPMC-AS could inhibit the reorganization of drug prenuclear aggregates into crystal nuclei and delay the formation of drug crystals. This delay subsequently led to redissolution of the aggregated drugs in aqueous solution and formed an equilibrium state at the supersaturated drug concentration in HPMC-AS solution. The formation of an equilibrium state of supersaturated drugs by HPMC-AS should be an essential mechanism underlying the marked drug concentration improvement. PMID:25723893

A molecular beam time-of-flight technique is studied as a means of determining surface stay times for physical adsorption. The experimental approach consists of pulsing a molecular beam, allowing the pulse to strike an adsorbing surface and detecting the molecular pulse after it has subsequently desorbed. The technique is also found to be useful for general studies of adsorption under nonequilibrium conditions including the study of adsorbate-adsorbate interactions. The shape of the detected pulse is analyzed in detail for a first-order desorption process. For mean stay times, tau, less than the mean molecular transit times involved, the peak of the detected pulse is delayed by an amount approximately equal to tau. For tau much greater than these transit times, the detected pulse should decay as exp(-t/tau). However, for stay times of the order of the transit times, both the molecular speed distributions and the incident pulse duration time must be taken into account.
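The limiting behaviours described above (a peak delay of roughly tau for short stay times, and an exp(-t/tau) tail for long ones) follow from convolving the incident pulse with the first-order residence-time distribution (1/tau)exp(-t/tau). A minimal numerical sketch with arbitrary units; it ignores the molecular speed distribution that the full analysis must take into account:

```python
import numpy as np

def detected_pulse(t, pulse_width, tau):
    """Square incident pulse convolved with the first-order
    residence-time distribution (1/tau) * exp(-t/tau).
    The spread of molecular speeds is ignored in this sketch."""
    dt = t[1] - t[0]
    incident = ((t >= 0) & (t < pulse_width)).astype(float)
    kernel = np.exp(-t / tau) / tau
    return np.convolve(incident, kernel)[: len(t)] * dt

t = np.linspace(0.0, 50.0, 5001)          # time grid, dt = 0.01
sig = detected_pulse(t, pulse_width=1.0, tau=5.0)

# Long after the pulse ends, the signal decays as exp(-t/tau):
# points one tau apart differ by a factor exp(-1).
print(round(sig[2500] / sig[2000], 3))    # t = 25 vs t = 20 → 0.368
```

For tau much shorter than the pulse width the detected shape instead tracks the incident pulse, shifted by roughly tau, consistent with the two limits stated in the abstract.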

Creating catalysts with enhanced selectivity and activity requires precise control over particle shape, composition, and size. Here we report the use of atomic layer deposition (ALD) to synthesize supported Ni, Pt, and Ni-Pt catalysts in the size regime (<3 nm) where nanoscale properties can have a dramatic effect on reaction activity and selectivity. This thesis presents the first ALD synthesis of non-noble metal nanoparticles, achieved by depositing Ni on Al2O3 with two half-reactions of Ni(Cp)2 and H2. By changing the number of ALD cycles, Ni weight loadings were varied from 4.7 wt% to 16.7 wt% and average particle sizes ranged from 2.5 to 3.3 nm, which increased the selectivity for C3H6 hydrogenolysis by an order of magnitude over a much larger Ni/Al2O3 catalyst. Pt particles were deposited by varying the number of ALD cycles and the reaction chemistry (H2 or O2) to control the particle size from approximately 1 to 2 nm, which allowed lower-coordinated surface atoms to populate the particle surface. These Pt ALD catalysts demonstrated some of the highest selectivities for the oxidative dehydrogenation of propane (37%) among Pt catalysts synthesized by a scalable technique. Dry reforming of methane (DRM) is a reaction of interest owing to the recent increased recovery of natural gas, but it is hindered from industrial implementation because Ni catalysts are plagued by deactivation from sintering and coking. This work utilized Ni ALD and NiPt ALD catalysts for the DRM reaction. These catalysts did not form destructive carbon whiskers and showed enhanced reaction rates due to increased bimetallic interaction. To further limit sintering, the Ni and NiPt ALD catalysts were coated with a porous alumina matrix by molecular layer deposition (MLD). The catalysts were evaluated for DRM at 973 K, and the MLD-coated Ni catalysts outperformed the uncoated Ni catalysts in either activity (with 5 MLD cycles) or stability (with 10 MLD cycles). In summary, this thesis developed a

Sorafenib is a clinically important oral tyrosine kinase inhibitor for the treatment of various cancers. However, the oral bioavailability of the sorafenib tablet (Nexavar) is merely 38-49% relative to the oral solution, due to the low aqueous solubility of sorafenib and its relatively high daily dose. It is desirable to improve the oral bioavailability of sorafenib to expand the therapeutic window, reduce drug resistance, and enhance patient compliance. In this study, we observed that the solubility of sorafenib could be increased ∼50-fold in the presence of both poly(vinylpyrrolidone-vinyl acetate) (PVP-VA) and sodium lauryl sulfate (SLS), owing to the formation of PVP-VA/SLS complexes at a lower critical aggregation concentration. The enhanced solubility provided a faster initial sorafenib dissolution rate, analogous to a forceful "spring" releasing drug into solution, from tablets containing both PVP-VA and SLS. However, SLS appears to impair the ability of PVP-VA to act as an efficient "parachute" that keeps the drug in solution and maintains supersaturation. Using 2D (1)H NMR, (13)C NMR, and FT-IR analysis, we concluded that the solubility enhancement and supersaturation of sorafenib were achieved by PVP-VA/SLS complexes and PVP-VA/sorafenib interaction, respectively, both through molecular interactions hinged on the VA groups of PVP-VA. Therefore, a balance between "spring" and "parachute" must be carefully considered in formulation design. To confirm the in vivo relevance of these molecular interaction mechanisms, we prepared three tablet formulations containing PVP-VA alone, SLS alone, and PVP-VA/SLS in combination. The USP II in vitro dissolution test and in vivo pharmacokinetic evaluation in dogs showed clear differentiation among the three formulations, as well as good in vitro-in vivo correlation. The formulation containing PVP-VA alone demonstrated the best bioavailability, with 1.85-fold and 1.79-fold increases in Cmax and AUC, respectively, compared with the

Background and objectives: In patients with breast cancer (BC), the sentinel node (SN) is the first node in the axillary basin that receives the primary lymphatic flow and can be used to accurately assess the axillary nodal status without removal of the axillary contents. Currently, histology and/or immunohistochemistry are the routine methods of SN analysis. The primary objective of this study was to develop a reproducible reverse transcription (RT) PCR assay, with emphasis on achieving high specificity for accurate detection of BC micrometastases in the SN. To correct for the heterogeneity of BC cells, a multimarker approach was followed, with the further aim of improving the detection rate of the assay. Methods: In total, 73 markers were evaluated, of which 7 were breast epithelial markers and 66 were either cancer testis or tumour-associated antigens. Twelve BC cell lines and 30 SNs (from 30 patients) were analysed using RT-PCR to determine the in vitro and in vivo detection rates for each of the markers. In addition, 20 axillary nodes obtained from a patient with brain death were used as controls to optimise the PCR cycle numbers for all the markers. Results: Of the 30 SNs, 37% (11/30) were positive on haematoxylin and eosin analysis. Extensive immunohistochemical (IHC) analyses of the haematoxylin and eosin negative nodes confirmed the presence of very small numbers of BC cells in an additional 40% (12/30) of SNs. Molecular analysis with hMAM-A alone identified metastases in 70% (21/30) of SNs. Using MAGE-A3 in combination with hMAM-A identified metastases in 90% (27/30) of patients. Seven SNs (23%) were negative for micrometastases (with haematoxylin and eosin and IHC) but RT-PCR positive for either hMAM-A or MAGE-A3. Conclusions: As IHC analysis resulted in a 77% detection rate compared with 37% for haematoxylin and eosin analysis, we consider that IHC is essential in order not to miss SN micrometastases. Molecular analysis with hMAM-A and
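The detection rates quoted above are simple proportions of the 30 sentinel nodes analysed; a quick arithmetic check of the reported percentages, using the counts stated in the results:

```python
# Counts reported for the 30 sentinel nodes analysed
n = 30
he_pos = 11        # positive on haematoxylin and eosin
ihc_extra = 12     # additional positives found only by IHC
hmam_a = 21        # positive by hMAM-A RT-PCR alone
combined = 27      # positive by hMAM-A or MAGE-A3

pct = lambda k: round(100 * k / n)
print(pct(he_pos), pct(he_pos + ihc_extra), pct(hmam_a), pct(combined))
# → 37 77 70 90, matching the 37%, 77%, 70%, and 90% figures reported
```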

Fungi samples from filters collected in Castle Bruce, Dominica, from March through July 2002 were previously purified and identified to genus level using classic macroscopic and microscopic techniques. A total of 105 isolated colonies were cultured in liquid media, and the mycelial mats were used for DNA extraction. PCR was used to amplify the ITS region of the rDNA using the ITS1 and ITS4 primers. Both strands of the amplified products were sequenced, and the final identification to species level was completed by a GenBank search. Fourteen different species and one fungal endophyte were identified from the genera Aspergillus, Penicillium, Fusarium, Cladosporium, Curvularia, and Phanerochaete. Some of these species, such as A. fumigatus, A. japonicus, P. citrinum, and C. cladosporioides, are known to cause respiratory disorders in humans. A. fumigatus causes an aggressive pulmonary allergic response that might result in allergic bronchopulmonary aspergillosis. Other species, such as F. equiseti and C. brachyspora, are plant pathogens affecting economically important crops. Sahara dust is an important source of fungal spores of species that are not common in the Caribbean region.

The current report describes the use of a molecular technique to identify immature Fascioloides magna. An 18-month-old Brangus heifer was found dead in the field without any prior clinical signs. The cause of death was exsanguination into the thoracic cavity associated with pulmonary embolization and infection by immature Fascioloides magna, resulting in 2 large foci of pulmonary necrosis and focal arteriolar and lung rupture. The liver had a few random migratory tracts with typical iron and porphyrin fluke exhaust, but no fluke larvae were identified there. A single immature fluke was found in the lungs, and species-level identification as F. magna was confirmed by DNA sequence analysis of the ribosomal internal transcribed spacer regions (ITS1 region, 5.8S rRNA gene, and ITS2) and of a partial 28S rRNA gene sequence. This is one of only a few pulmonary fascioloidiasis cases associated with hemothorax in the veterinary literature. PMID:27423736

To fully account for electron-vibrational coupling and vibrational relaxation in the course of electron motion through a molecular wire, a density operator approach is utilized. When combined with a particular projection operator technique, a generalized master equation can be derived that governs the populations of the electronic wire states. The respective memory kernels are determined beyond any perturbation theory with respect to the electron-vibrational coupling and can be classified via so-called Liouville space pathways. An ordering of the different contributions to the current-voltage characteristics becomes possible by introducing an electron transmission coefficient that describes ballistic as well as inelastic electron transport through the wire. The general derivations are illustrated by numerical calculations, which demonstrate the drastic influence of the electron-vibrational coupling on the wire transmission coefficient as well as on the current-voltage characteristics.