Despite intense, continued interest in global analyses of signaling cascades through mass spectrometry-based studies, the large-scale, systematic production of phosphoproteomics data has been hampered in part by inefficient fractionation strategies subsequent to phosphopeptide enrichment. Here we explore two novel multidimensional fractionation strategies for analysis of phosphopeptides. In the first technique we utilize aliphatic ion pairing agents to improve retention of phosphopeptides at ...

Phosphopeptides are often detected with low efficiency by MALDI MS analysis of peptide mixtures. In an effort to improve the phosphopeptide ion response in MALDI MS, we investigated the effects of adding low concentrations of organic and inorganic acids during peptide sample preparation in 2,5-dihydroxybenzoic acid (2,5-DHB) matrix. Phosphoric acid in combination with 2,5-DHB matrix significantly enhanced phosphopeptide ion signals in MALDI mass spectra of crude peptide mixtures derived from the phosphorylated proteins alpha-casein and beta-casein. The beneficial effects of adding up to 1% phosphoric acid to 2,5-DHB were also observed in LC-MALDI-MS analysis of tryptic phosphopeptides of B. subtilis PrkC phosphoprotein. Finally, the mass resolution of MALDI mass spectra of intact proteins was significantly improved by using phosphoric acid in 2,5-DHB matrix.

Introduction: Mass spectrometry (MS) is a powerful technology for the study of PTMs, including protein phosphorylation. Due to the low abundance of many phosphoproteins and the relatively poor ionization efficiency of phosphopeptides, specific enrichment of phosphopeptides prior to MS analysis is necessary. ... (MSA) method was used for phosphopeptide fragmentation. The resulting fragment ion spectra were processed with Proteome Discoverer software (Thermo Electron, Bremen, Germany). Results: We first investigated the global phosphorylation profile of plant plasma membrane proteins by enriching ...

Protein phosphorylation controls many cellular processes and activities. One of the major challenges in the proteomic study of phosphorylation is the enrichment of substoichiometric phosphorylated peptides from complex mixtures. Titanium dioxide (TiO2)-based chromatography is now widely applied to isolate phosphopeptides because of its efficiency and flexibility. In this study, a novel TiO2-coated matrix-assisted laser desorption/ionization plate is presented and tested for the purification of phosphopeptides from complex mixtures. The novel feature of this approach is the deposition of a nanostructured TiO2 film on stainless steel plates by pulsed laser deposition (PLD). Using tryptic digests of alpha-casein, beta-casein, and other nonphosphorylated proteins, successful enrichment of phosphopeptides was achieved with this novel device, called the T-plate, even when working in the low fmol range, making the sample ready for mass spectrometric analysis in a few minutes.

The phosphorylation of proteins is a major post-translational modification that is required for the regulation of many cellular processes and activities. Mass spectrometry signals of low-abundance phosphorylated peptides are commonly suppressed by the presence of abundant non-phosphorylated peptides. Therefore, one of the major challenges in the detection of low-abundance phosphopeptides is their enrichment from complex peptide mixtures. Titanium dioxide (TiO2) has been proven to be a highly efficient approach for phosphopeptide enrichment and is widely applied. In this study, a novel TiO2 plate was developed by coating TiO2 particles onto polydimethylsiloxane (PDMS)-coated MALDI plates, glass, or plastic substrates. The TiO2-PDMS plate (TP plate) could be used for on-target MALDI-TOF analysis, or as a purification plate from which phosphopeptides were eluted and subjected to MALDI-TOF or nanoLC-MS/MS analysis. The detection limit of the TP plate was ∼10-fold lower than that of a TiO2-packed tip approach. The capacity of the ∼2.5 mm diameter TiO2 spots was estimated to be ∼10 μg of β-casein. Following TiO2 plate enrichment of SCC4 cell lysate digests and nanoLC-MS/MS analysis, ∼82% of the detected proteins were phosphorylated, illustrating the sensitivity and effectiveness of the TP plate for phosphoproteomic study.

Although offline enrichment of phosphorylated peptides is widely used, enrichment for phosphopeptides using TiO2 is often performed manually, which is labor-intensive and can lead to irreproducible results. To address the problems associated with offline enrichment and to improve the effectiveness of phosphopeptide detection, we developed an automated online enrichment system for phosphopeptide analysis. A standard protein mixture comprising BSA, fetuin, crystallin, α-casein, β-casein, and ovalbumin was assessed using our new system. Our multidimensional system has four main parts: a sample pump, a 20-mm TiO2-based column, a mixed weak anion-exchange/strong cation-exchange (2:1 WAX:SCX) separation column, and LC/MS. Phosphorylated peptides were successfully detected using the TiO2-based online system with little interference from nonphosphorylated peptides. Our results confirmed that our online enrichment system is a simple and efficient method for detecting phosphorylated peptides.

In this work, titania-nanoparticle-coated carbon nanotubes (denoted CNTs/TiO2 composites) were synthesized through a facile but effective solvothermal reaction using titanium isopropoxide as the titania source and isopropyl alcohol as the solvent, with a basic catalyst, in the presence of hydrophilic carbon nanotubes. Characterization using scanning electron microscopy (SEM) and transmission electron microscopy (TEM) indicates that the CNTs/TiO2 composites consist of a CNT core and a rough outer layer formed by titania nanoparticles (5-10 nm). Measurements using wide-angle X-ray diffraction (WAXRD), zeta potential, and N2 sorption reveal that the titania shell is formed by anatase titania nanoparticles, and that the composites have a high specific surface area of about 104 m²/g. Exploiting their high surface area and affinity for phosphopeptides, the CNTs/TiO2 composites were applied to selectively enrich phosphopeptides for mass spectrometry analysis. The high selectivity and capacity of the CNTs/TiO2 composites have been demonstrated by effective enrichment of phosphopeptides from digests of a phosphoprotein, protein mixtures of β-casein and bovine serum albumin, human serum, and rat brain samples. These results point to a promising application of the novel CNTs/TiO2 composites in the selective enrichment of phosphopeptides.

We report the development of photocatalytically patterned TiO2 arrays for selective on-plate enrichment and direct matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) analysis of phosphopeptides. A thin TiO2 nanofilm with controlled porosity is prepared on gold-covered glass slides by a layer-by-layer (LbL) deposition/calcination process. The highly porous and rough nanostructure offers a high surface area for selective binding of phosphorylated species. The patterned arrays are generated using an octadecyltrichlorosilane (OTS) coating in combination with UV irradiation through a photomask, followed by NaOH etching. The resulting hydrophilic TiO2 spots are thus surrounded by a hydrophobic OTS layer, which can facilitate the enrichment of low-abundance components by confining a large-volume sample into a small area. The TiO2 arrays exhibit high specificity toward phosphopeptides in complex samples including phosphoprotein digests and human serum, and detection can be made in the fmol range. Additional advantages of the arrays include excellent stability, reusability/reproducibility, and low cost. This method has been successfully applied to the analysis of phosphopeptides in nonfat milk. The patterned TiO2 arrays provide an attractive interface for performing on-plate reactions, including selective capture of target species for MALDI-MS analysis, and can serve as a versatile lab-on-a-chip platform for high-throughput analysis in phosphoproteome research.

Highlights: • A new micropipette tip, TMTipPPY-C18, was developed for desalting of phosphopeptides. • TMTipPPY-C18 is based on polypyrrole in tandem with C18 chromatographic material. • TMTipPPY-C18 combines electrostatic, π-π stacking, and hydrophobic interactions. • TMTipPPY-C18 can be used under both acidic and basic experimental conditions. - Abstract: Desalting and concentration of peptides using reversed-phase (RP) C18 chromatographic material based on hydrophobic interaction is a routine approach in mass spectrometry (MS)-based proteomics. However, MS detection of small hydrophilic peptides, in particular phosphopeptides that bear multiple negative charges, is challenging due to insufficient binding to the C18 stationary phase. We describe here the development of a new desalting method that exploits the unique properties of polypyrrole (PPY). The positively charged nitrogen atoms under acidic conditions and the polyunsaturated bonds of polypyrrole offer enhanced adsorption of phosphopeptides and hydrophilic peptides through additional electrostatic and π-π stacking interactions on top of hydrophobic interactions. In tandem with reversed-phase C18 chromatographic material, the new desalting method, termed TMTipPPY-C18, can significantly improve the MS detection of phosphopeptides with multiple phosphate groups and other small hydrophilic peptides. It has been applied not only to tryptic digests of model proteins but also to the analysis of complex lysates of zebrafish eggs. The number of detected phosphate groups per peptide ranged from 1 to 6. Notably, the polypyrrole-based method can also be used under basic conditions, providing a useful means to handle peptides that may not be detectable under acidic conditions. It can be envisioned that TMTipPPY-C18 should be able to ...

Alzheimer's disease (AD) is the most common form of dementia, characterized by progressive loss of cognitive function. One of the pathological hallmarks of AD is the formation of neurofibrillary tangles composed of abnormally hyperphosphorylated tau protein, but global deregulation of protein phosphorylation in AD has not been well analyzed. Here, we report a pilot investigation of the AD phosphoproteome by titanium dioxide enrichment coupled with high-resolution LC-MS/MS. During optimization of the enrichment method, we found that phosphate ion at a low concentration (e.g., 1 mM) worked efficiently as a nonphosphopeptide competitor to reduce background. The procedure was further tuned with respect to peptide-to-bead ratio, phosphopeptide recovery, and purity. Using this refined method and 9 h LC-MS/MS, we analyzed the phosphoproteome of one milligram of digested AD brain lysate, identifying 5243 phosphopeptides containing 3715 nonredundant phosphosites on 1455 proteins, including 31 phosphosites on the tau protein. This modified enrichment method is simple and highly efficient. The AD case study demonstrates the feasibility of dissecting the phosphoproteome of a limited amount of postmortem human brain. All MS data have been deposited in the ProteomeXchange with identifier PXD001180 (http://proteomecentral.proteomexchange.org/dataset/PXD001180).

In this study, a novel on-plate IMAC technique was developed for highly selective enrichment and isolation of phosphopeptides with high-throughput MALDI-TOF-MS analysis. First, a MALDI plate was coated with polydopamine (PDA), and Ti4+ was then immobilized on the PDA-coated plate. The resulting IMAC plate was successfully applied to the highly selective enrichment and isolation of phosphopeptides from protein digests and human serum. Because no sample is lost, the on-plate IMAC platform exhibits excellent selectivity and sensitivity in the enrichment and isolation of phosphopeptides, providing a promising technique for the detection of low-abundance phosphopeptides in biological samples.

Shotgun proteomics typically uses multidimensional LC/MS/MS analysis of enzymatically digested proteins, where strong cation-exchange (SCX) and reversed-phase (RP) separations are coupled to increase the separation power and dynamic range of analysis. Here we report an online multidimensional LC method using an anion- and cation-exchange mixed bed for the first separation dimension. The mixed-bed ion-exchange resin improved peptide recovery over SCX resins alone and showed better orthogonality to RP separations in two-dimensional separations. The Donnan effect, which was enhanced by the introduction of fixed opposite charges in one column, is proposed as the mechanism responsible for improved peptide recovery, producing higher fluxes of salt cations and lower populations of salt anions proximal to the SCX phase. An increase in orthogonality was achieved by a combination of increased retention for acidic peptides and moderately reduced retention of neutral to basic peptides by the added anion-exchange resin. The combination of these effects led to an approximately 100% increase in the number of identified peptides from an analysis of a tryptic digest of a yeast whole-cell lysate. The application of the method to phosphopeptide-enriched samples increased phosphopeptide identifications by 94% over SCX alone. The lower pKa of phosphopeptides led to specific enrichment in a single salt step, resolving acidic phosphopeptides from other phospho- and non-phosphopeptides. Unlike previous methods that use anion exchange to alter selectivity or enrich phosphopeptides, the proposed format is unique in that it works with the typical acidic buffer systems used in electrospray ionization, making it feasible for online multidimensional LC/MS/MS applications.
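
The Donnan argument above can be made quantitative with a textbook idealization (our illustration, not a derivation taken from the paper). For a 1:1 salt equilibrated between the solution phase (s) and a resin phase (r) carrying fixed negative charges at concentration $X$, the ideal Donnan condition and resin-phase electroneutrality read

$$[\mathrm{C}^+]_r\,[\mathrm{A}^-]_r = [\mathrm{C}^+]_s\,[\mathrm{A}^-]_s, \qquad [\mathrm{C}^+]_r = [\mathrm{A}^-]_r + X.$$

With $[\mathrm{C}^+]_s = [\mathrm{A}^-]_s = c$, solving gives $[\mathrm{A}^-]_r = \tfrac{1}{2}\left(\sqrt{X^2 + 4c^2} - X\right) < c$ and hence $[\mathrm{C}^+]_r = c^2/[\mathrm{A}^-]_r > c$: any fixed negative charge concentrates salt cations and depletes salt anions near the SCX phase, which is the effect invoked above.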

Obtaining high phosphoproteome coverage requires specific enrichment of phosphorylated peptides from the often extremely complex peptide mixtures generated by proteolytic digestion of biological samples, as well as extensive chromatographic fractionation prior to liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis. Due to the sample loss resulting from fractionation, this procedure is mainly performed when large quantities of sample are available. To make large-scale phosphoproteomics applicable to smaller amounts of protein, we have recently combined highly specific TiO2-based phosphopeptide enrichment with sequential elution from immobilized metal affinity chromatography (SIMAC) for fractionation of mono- and multi-phosphorylated peptides, prior to capillary-scale hydrophilic interaction liquid chromatography (HILIC)-based fractionation of monophosphorylated peptides. In the following protocol we describe the procedure step by step to allow comprehensive coverage of the phosphoproteome using only a few hundred micrograms of protein.

... or peptides altered in hydrophilicity, such as phosphopeptides. We used microcolumns to compare the ability of RP resin or graphite powder to retain phosphopeptides. A number of standard phosphopeptides and a biologically relevant phosphoprotein, dynamin I, were analyzed. MS revealed that some phosphopeptides did not bind the RP resin but were retained efficiently on the graphite. Those that did bind the RP resin often produced much stronger signals from the graphite powder. In particular, the method revealed a doubly phosphorylated peptide in a tryptic digest of dynamin I purified from rat brain nerve ... and a doubly phosphorylated peptide in dynamin III, analogous to the dynamin I sequence. A pair of dynamin III phosphorylation sites were found at Ser-759 and Ser-763 by tandem MS. The results directly define the in vivo phosphorylation sites in dynamins I and III for the first time. The findings indicate ...

Tristetraprolin/zinc finger protein 36 (TTP/ZFP36) binds and destabilizes some pro-inflammatory cytokine mRNAs. TTP-deficient mice develop a profound inflammatory syndrome due to excessive production of pro-inflammatory cytokines. TTP expression is induced by various factors including insulin and extracts from cinnamon and green tea. TTP is highly phosphorylated in vivo and is a substrate for several protein kinases. Multiple phosphorylation sites have been identified in human TTP, but it is difficult to assign major vs. minor phosphorylation sites. This study aimed to generate additional information on TTP phosphorylation using phosphopeptide mapping and mass spectrometry (MS). Wild-type and site-directed mutant TTP proteins were expressed in transfected human cells followed by in vivo radiolabeling with [32P]-orthophosphate. Histidine-tagged TTP proteins were purified with Ni-NTA affinity beads and digested with trypsin and lysyl endopeptidase. The digested peptides were separated on a C18 column by high-performance liquid chromatography. Wild-type and all mutant TTP proteins were localized in the cytosol, phosphorylated extensively in vivo, and capable of binding to ARE-containing RNA probes. Mutation of S90 and S93 resulted in the disappearance of a major phosphopeptide peak. Mutation of S197 resulted in another major phosphopeptide peak eluting earlier than in the wild-type. Additional mutations at S186, S296, and T271 exhibited little effect on phosphopeptide profiles. MS analysis identified the peptide that was missing in the S90/S93 mutant protein as LGPELSPSPTSPTATSTTPSR (corresponding to amino acid residues 83-103 of human TTP). MS also identified a major phosphopeptide associated with the first zinc-finger region. These analyses suggest that the tryptic peptide containing S90 and S93 is a major phosphopeptide in human TTP.

Defining alterations in signalling pathways in normal and malignant cells is becoming a major field in proteomics. A number of different approaches have been established to isolate, identify, and quantify phosphorylated proteins and peptides. In the current report, a comparison between SCX prefractionation and an antibody-based approach, both coupled to TiO2 enrichment and applied to TMT-labelled cellular lysates, is described. The antibody strategy was more complete for enriching phosphopeptides and allowed the identification of a large set of proteins known to be phosphorylated (715 protein groups) with a minimal number of not previously known phosphorylated proteins (2).

Immobilized metal affinity chromatography (IMAC) has been the method of choice for phosphopeptide enrichment prior to mass spectrometric analysis for many years and it is still used extensively in many laboratories. Using the affinity of negatively charged phosphate groups towards positively charged metal ions such as Fe3+, Ga3+, Al3+, Zr4+, and Ti4+ has made it possible to enrich phosphorylated peptides from peptide samples. However, the selectivity of most of the metal ions is limited when working with highly complex samples, e.g., whole-cell extracts, resulting in contamination from ...

Background: The objective of this study was to evaluate, by means of elemental analysis, the mineral density and the calcium and phosphorus weight percent of sound, demineralized, and CPP-ACP-treated enamel. Elemental analysis determines the elemental and isotopic composition of a biological sample. It can be qualitative (determining what elements are present) and quantitative (determining how much of each is present). The INCA Energy 250 (Oxford Analytical Instruments Ltd., UK) energy-dispersive X-ray spectroscopy system for elemental analysis was used on randomly assigned samples. Methods: 12 sound premolars were extracted for orthodontic reasons. Each tooth was sectioned with a double-faced diamond microtome under water cooling into three sections, for a total of 36 samples, randomly assigned to three groups of 12 samples each: Group 1 (control), Group 2 (WS: white spot), and Group 3 (WST: white spot treated). Samples in Groups 2 and 3 were equally subjected to 24 h and 48 h acid baths. All treated samples (Group 3) were then coated with CPP-ACP for 5 min before immersion into water twice a day. Group 2 served as the control for enamel damage evaluation. Inca Point & ID, an analytic platform software for SEM, was used for elemental analysis of samples from Group 1 (C), Group 2 (WS), and Group 3 (WST) in order to determine the weight % and atomic % of Ca and P. Results: The analyses of the three groups show different weight % and atomic % of Ca and P, clearly reflecting the different mineralization rates. Conclusions: 10% casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) complex promotes remineralization in vitro. The results of this in vitro study fully agree with this statement. Clinical studies investigating the intraoral effectiveness of topical applications of CPP-ACP on white spot lesions are required to confirm these results.

Although thousands of metal-organic frameworks (MOFs) have been fabricated and widely applied in gas storage/separations, adsorption, catalysis, and so on, few kinds of MOFs have been used as adsorption materials while simultaneously serving as matrixes to analyze small molecules for laser desorption/ionization mass spectrometry (LDI-MS). Herein, a new concept is introduced to design and synthesize MOFs as both adsorption materials and matrixes according to the structure of ligands and common matrixes. The proof-of-concept design was demonstrated by selection of 2,5-pyridinedicarboxylic acid (PDC) and 2,5-dihydroxyterephthalic acid (DHT) as ligands for synthesis of MOFs. Two Zr(IV)-based MOFs, UiO-66-PDC and UiO-66-(OH)2, were synthesized and applied for the first time as new matrixes for analysis of small molecules by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). Both of them showed low matrix interferences, high ionization efficiency, and good reproducibility when used as matrixes. A variety of small molecules, including saccharides, amino acids, nucleosides, peptides, alkaline drugs, and natural products, were analyzed. In addition, UiO-66-(OH)2 exhibited potential for application in the quantitative determination of glucose and pyridoxal 5'-phosphate. Furthermore, thanks to its intrinsically large surface area and highly ordered pores, UiO-66-(OH)2 also showed sensitive and specific enrichment of phosphopeptides prior to MS analysis. These results demonstrated that this strategy can be used to efficiently screen tailor-made MOFs as matrixes to analyze small molecules by MALDI-TOF-MS.

Phosphorylation is a protein posttranslational modification. It is responsible for the activation/inactivation of disease-related pathways, thanks to its role as a “molecular switch.” The study of phosphorylated proteins has become a key point for proteomic analyses focused on the identification of diagnostic/therapeutic targets. Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) is the most widely used analytical approach. Although unmodified peptides are automatically identified by consolidated algorithms, phosphopeptides still require automated tools to avoid time-consuming manual interpretation. To improve phosphopeptide identification efficiency, a novel procedure was developed and implemented in a Perl/C tool called PhosphoHunter, here proposed and evaluated. It includes a preliminary heuristic step for filtering out the MS/MS spectra produced by nonphosphorylated peptides before sequence identification. A method to assess the statistical significance of identified phosphopeptides was also formulated. PhosphoHunter performance was tested on a dataset of 1500 MS/MS spectra and compared with two other tools: Mascot and Inspect. Comparisons demonstrated that a strong point of PhosphoHunter is sensitivity, suggesting that it is able to identify real phosphopeptides with superior performance. Performance indexes depend on a single parameter (an intensity threshold) that users can tune according to the study aim. All three tools localized >90% of phosphosites.
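
The abstract does not spell out the heuristic filter itself; a common prefilter of this kind flags CID spectra showing the characteristic H3PO4 neutral loss from the precursor ion. Below is a minimal Python sketch under that assumption; the 97.9769 Da loss mass, the m/z tolerance, and the `intensity_threshold` parameter are illustrative choices, not PhosphoHunter's actual rule.

```python
# Hypothetical neutral-loss prefilter for phospho-MS/MS spectra: a spectrum
# passes if it contains a sufficiently intense peak at the precursor m/z
# minus the H3PO4 neutral loss divided by the precursor charge.

H3PO4 = 97.9769  # monoisotopic mass of the neutral loss, Da

def passes_phospho_filter(peaks, precursor_mz, charge,
                          intensity_threshold=0.5, tol_mz=0.5):
    """peaks: list of (mz, intensity) tuples. Returns True if a peak near
    the expected neutral-loss m/z reaches the given fraction of the base
    peak intensity, suggesting the precursor was phosphorylated."""
    target = precursor_mz - H3PO4 / charge
    base = max(intensity for _, intensity in peaks)
    return any(abs(mz - target) <= tol_mz and inten >= intensity_threshold * base
               for mz, inten in peaks)

# Toy spectrum: a 2+ precursor at m/z 700.0 with a dominant peak near
# 651.0 (= 700.0 - 97.9769/2) passes the filter.
spectrum = [(300.2, 120.0), (651.0, 900.0), (700.0, 300.0)]
print(passes_phospho_filter(spectrum, precursor_mz=700.0, charge=2))  # True
```

Raising `intensity_threshold` trades sensitivity for specificity, mirroring the single tunable parameter described above.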

Titanium dioxide has very high affinity for phosphopeptides and it has become an efficient alternative to already existing methods for phosphopeptide enrichment from complex samples. Peptide loading in a highly acidic environment in the presence of 2,5-dihydroxybenzoic acid (DHB), phthalic acid, or glycolic acid has been shown to improve selectivity significantly by reducing unspecific binding from nonphosphorylated peptides. The enriched phosphopeptides bound to the titanium dioxide are subsequently eluted from the micro-column using an alkaline buffer. Titanium dioxide chromatography is extremely tolerant towards most buffers used in biological experiments. It is highly robust and as such it has become one of the methods of choice in large-scale phosphoproteomics. Here we describe the protocol for phosphopeptide enrichment using titanium dioxide chromatography followed by desalting ...

Titanium dioxide (TiO2) is one of the metal oxides most widely used for phosphopeptide enrichment in phosphoproteomic research. However, it can bind some non-phosphorylated peptides containing one or more aspartic acid and/or glutamic acid residues. These non-phosphorylated peptides can be eluted along with phosphorylated peptides and reduce the selectivity. Conventional inhibitors of the non-specific binding of non-phosphorylated peptides can often contaminate the ion source of the mass spectrometer, and therefore their application in liquid chromatography-mass spectrometry (LC-MS) is limited. In this study, aspartic acid is reported as a novel non-specific binding inhibitor for phosphopeptide enrichment by titanium dioxide. First, tryptic peptide mixtures of 3 and 9 standard proteins were used to compare the enrichment efficiency of titanium dioxide in the presence of aspartic acid, in the presence of glutamic acid, and without inhibitor. The results showed that aspartic acid can greatly improve the selectivity of titanium dioxide for phosphopeptide enrichment. Aspartic acid was then used for the enrichment of a tryptic peptide mixture of C57BL/6J mouse liver lysate, and good results were also obtained, demonstrating that aspartic acid is a promising non-specific binding inhibitor for complex biological samples. Moreover, no contamination of the ion source occurred during the mass spectrometric analysis.

Although many affinity adsorbents have been developed for phosphopeptide enrichment, capturing multi-phosphopeptides with high specificity remains a major challenge. Here, we investigated the mechanism of phosphate ion coordination and substitution on affinity adsorbent surfaces and modulated the selectivity of affinity adsorbents toward multi-phosphopeptides based on the different capabilities of mono- and multi-phosphopeptides to competitively substitute the pre-coordinated phosphate ions under strongly acidic conditions. We demonstrated that both the species of pre-coordinated phosphate ions and the substitution conditions played crucial roles in modulating the enrichment selectivity toward multi-phosphopeptides, and that pre-coordinated affinity materials with relatively more positive surface charges exhibited better enrichment efficiency due to the cooperative effect of electrostatic interaction and competitive substitution. Finally, an enrichment selectivity of 85% toward multi-phosphopeptides was achieved, with a 66% improvement in identification numbers for a complex protein sample extracted from HepG2 cells. Data are available via ProteomeXchange with identifier PXD004252.

Graphical abstract: -- Highlights: • Derivatization of diamond nanopowder as IMAC and RP media. • Characterization with SEM, EDX, and FT-IR. • Phosphopeptide enrichment from standard as well as real samples. • Desalting and human serum profiling with reproducible results. • MALDI-MS analysis with database identification. -- Abstract: Diamond is known for its high affinity for and biocompatibility with biomolecules and is used extensively in separation sciences and life science research. In the present study, diamond nanopowder was derivatized as an immobilized metal ion affinity chromatography (IMAC) material for phosphopeptide enrichment and as a reversed-phase (C-18) medium for the desalting of complex mixtures and human serum profiling through MALDI-TOF-MS. The functionalized diamond nanopowder was characterized by Fourier transform infrared (FT-IR) spectroscopy, scanning electron microscopy (SEM), and energy-dispersive X-ray (EDX) spectroscopy. Diamond-IMAC was applied to a standard protein (β-casein), spiked human serum, egg yolk, and non-fat milk for phosphopeptide enrichment. The results show the selectivity of the synthesized IMAC-diamond immobilized with Fe3+ and La3+ ions. To demonstrate its broader use, diamond-IMAC was also applied to serum samples from gall bladder carcinoma patients in search of potential biomarkers. The database search was carried out with the Mascot program (www.matrixscience.com) for the assignment of phosphorylation sites. Diamond nanopowder is thus a separation medium with multifunctional use and can be applied to cancer protein profiling for diagnosis and biomarker identification.

Introduction: Immobilized metal ion affinity chromatography (IMAC) is a widely used technique for phosphopeptide enrichment prior to mass spectrometry. Fe(III)-IMAC is based on the strong affinity between positively charged metal ions (Fe(III)) and negatively charged phosphate groups. Many reports ... that highly selective enrichment can be achieved by the improved method even when using highly diluted phosphopeptide samples in a background of peptides (1:1000). The improved method also proved to be advantageous in minimizing sample loss. The explanation of the improvement might result from the enhanced ...

Several bioanalytical enrichment techniques are based on the interactions of phosphopeptides with Ln(III) ions. In order to gain an improved understanding of these complexes and the respective ion-peptide interactions, hybrid quantum mechanics/molecular mechanics (QM/MM) molecular dynamics (MD) simulations of La(III) coordinating to the phosphopeptide VPQLEIVPNSpAEER were conducted. Simulations of di- as well as monoanionic phosphate groups were carried out. The La(III) ion and its first hydration layer, including the sidechain of the phosphoserine residue, were treated quantum mechanically at the RI-MP2/triple-zeta level, whereas the remaining part of the system was treated with classical potentials. The simulation of the dianionic phosphopeptide revealed a 9-fold coordinated La(III) ion, with the phosphopeptide binding bi- as well as monodentate. The mean residence times (τ) of the first-shell water molecules were 82 ps and 37 ps for the bi- and monodentate complexes, respectively, which is much longer than for free La(III) in aqueous solution (τ = 17 ps). The simulation of the monoanionic La(III)-phosphopeptide complex revealed a bidentate coordination throughout the 80 ps sampling period. An intramolecular hydrogen bond between the hydrogen of the phosphate group and the backbone was observed, and a τ value of 14 ps was obtained, which is much lower than for the dianionic complex.
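
For readers unfamiliar with how τ is extracted from a trajectory, the simplest "direct counting" estimate averages the lengths of uninterrupted first-shell visits. The Python sketch below assumes per-frame sets of water IDs inside the first-shell cutoff and a made-up frame spacing; the study's actual MRT protocol (e.g., any tolerance for brief excursions) may differ.

```python
# Direct-counting mean residence time (MRT) from per-frame shell membership.
from itertools import groupby

def mean_residence_time(shell_members, dt_ps):
    """shell_members: one set of water IDs per frame (waters within the
    first-shell cutoff). Returns the mean uninterrupted stay in ps."""
    episodes = []
    for w in set().union(*shell_members):
        presence = [w in frame for frame in shell_members]
        # Lengths of consecutive runs of 'present' frames.
        episodes += [sum(1 for _ in run) for inside, run in groupby(presence) if inside]
    return dt_ps * sum(episodes) / len(episodes)

# Toy trajectory (0.2 ps/frame): water 1 stays 3 frames, leaves, returns.
frames = [{1, 2}, {1, 2}, {1}, {2}, {1, 2}]
print(mean_residence_time(frames, dt_ps=0.2))  # 0.4
```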

In this study, zirconium oxide (ZrO2) aerogel was synthesized via a green sol-gel approach, with zirconium oxychloride, instead of the commonly used and highly toxic alkoxide, as the precursor. With this material, phosphopeptides from digests of 4 pmol of β-casein in the presence of a 100-fold molar excess of BSA could be selectively captured and identified by MALDI-TOF MS. Due to the large surface area (416.0 m²/g) and the mesoporous structure (average pore size of 10.2 nm) of the ZrO2 aerogel, a 20-fold higher loading capacity for the phosphopeptide YKVPQLEIVPN[pS]AEER (MW 1952.12) was obtained compared to that of commercial ZrO2 microspheres (341.5 vs. 17.87 mg/g). The metal oxide aerogel was further applied to the enrichment of phosphopeptides from 100 ng of nonfat milk, and 17 phosphopeptides were positively identified, a 1.5-fold improvement in phosphopeptide detection compared with previously reported results. These results demonstrate that ZrO2 aerogel can be a powerful enrichment material for phosphoproteome study.

The basic idea of this study was to recover phosphopeptides after trypsin-assisted digestion of phosphoproteins precipitated using trivalent lanthanide ions. In the first step, phosphoproteins were extracted from the protein solution by precipitation with La3+ and Ce3+ ions, forming stable pellets. The precipitated lanthanide-phosphoprotein complexes were then resuspended and directly digested on-pellet using trypsin. Non-phosphorylated peptides were released into the supernatant by enzymatic cleavage, while phosphopeptides remained bound to the precipitated pellet. Further washing steps improved the removal of non-phosphorylated peptides. For the recovery of phosphopeptides, the precipitated pellets were dissolved in 3.7% hydrochloric acid. The performance of this method was evaluated in several experiments using MALDI-TOF MS measurements and delivered the highest selectivity for phosphopeptides. This can be explained by the overwhelming preference of lanthanides for binding to oxygen-containing anions such as phosphates. The developed enrichment method was evaluated with several types of biological samples, including fresh milk and egg white. The uniqueness and main advantage of the presented approach are the enrichment at the protein level and the recovery of phosphopeptides at the peptide level. This allows much easier handling, because the number of distinct molecules is unavoidably higher at the peptide level, complicating every enrichment strategy carried out there.

Automated phosphopeptide enrichment prior to MS analysis by means of immobilized metal affinity chromatography (IMAC) and metal oxide affinity chromatography (MOAC) was probed with packed columns. We compared POROS-Fe³⁺ and TiO₂ (IMAC and MOAC media, respectively), using a simple mixture of peptides from casein and albumin and a complex mixture of peptides isolated from mouse liver. With these samples, the selectivities of POROS-Fe³⁺ and TiO₂ were pH-dependent. In the case of the liver extract, selectivity increased from 12-18% to 58-60% when the loading buffer contained 0.1 M acetic acid or 0.1 M trifluoroacetic acid, respectively. However, with the POROS-Fe³⁺ column, the number of identifications decreased from 356 phosphopeptides with 0.1 M acetic acid to 119 phosphopeptides with 0.1 M TFA. This decrease in the binding capacity of POROS-Fe³⁺ was associated with strong Fe³⁺ leaching. Furthermore, repetitive use of IMAC-Fe³⁺ with the 0.5 M NH₄OH solution required for phosphopeptide elution induced Fe₂O₃ accumulation in the column. By comparison, MOAC columns packed with TiO₂ support did not present any stability problems under the same conditions and provide a reliable solution for packed-column phosphopeptide enrichment.

Protein phosphorylation is a significant biological process, but separation of phosphorylated peptide isomers is often challenging for many analytical techniques. We developed a microchip electrophoresis (MCE) method for rapid separation of phosphopeptides, with on-chip electrospray ionization (ESI) facilitating online sample introduction to the mass spectrometer (MS). With this method, two monophosphorylated positional isomers of an insulin receptor peptide (IR1A and IR1B) and a triply phosphorylated insulin receptor peptide (IR3), all with the same amino acid sequence, were separated from the nonphosphorylated peptide (IR0) in less than one minute. For efficient separation of the positional peptide isomers from each other, derivatization with 9-fluorenylmethyl reagents (either the chloroformate, Fmoc-Cl, or the N-succinimidyl carbonate, Fmoc-OSu) was required before the analysis. The derivatization improved not only the separation of the monophosphorylated positional peptide isomers in MCE, but also the identification of the phosphorylation site by MS/MS.

Phospholipid membranes can be considered a prime example of the ability of nature to produce complex yet ordered structures by spontaneous and efficient self-assembly. Inspired by the unique properties and architecture of phospholipids, we designed simple amphiphilic decapeptides, intended to fold at the center of the peptide sequence, with a phosphorylated serine "head" located within a central turn segment and two hydrophobic "tails". The molecular design also included the diphenylalanine motif, previously shown to facilitate self-assembly and increase nanostructure stability. Secondary structure analysis of the peptides indeed indicated the presence of stabilized conformations in solution, with a central turn connecting two hydrophobic "tails" and interactions between the hydrophobic strands. The mechanisms of assembly into supramolecular structures involved structural transitions between different morphologies, which occurred over several hours, leading to the formation of distinctive nanostructures, including half-elliptical nanosheets and curved tapes. The phosphopeptide building blocks appear to self-assemble via a particular combination of aromatic, hydrophobic, and ionic interactions, as well as hydrogen bonding, as demonstrated by simulated models constructed for the peptides and self-assembled nanostructures. Molecular dynamics simulations also gave insight into the mechanisms of structural transitions of the nanostructures at the molecular level. Because of the biocompatibility of peptides, the phosphopeptide assemblies expand the library of biomolecular nanostructures available for the future design and application of biomedical devices.

Qualitative and quantitative characterization of phosphopeptides by means of mass spectrometry (MS) is the main goal of MS-based phosphoproteomics, but suffers from their low abundance in the large haystack of various biological molecules. Herein, we introduce two-dimensional (2D) metal oxides to tackle this biological separation issue. A nanocomposite composed of titanoniobate nanosheets embedded with Fe₃O₄ nanocrystals (Fe₃O₄-TiNbNS) is constructed via a facile cation-exchange approach, and adopted for the capture and isotope labeling of phosphopeptides. In this nanoarchitecture, the 2D titanoniobate nanosheets offer an enlarged surface area and a spacious microenvironment for capturing phosphopeptides, while the Fe₃O₄ nanocrystals not only incorporate a magnetic response into the composite but, more importantly, also disrupt the restacking process between the titanoniobate nanosheets and thus preserve a greater specific surface for binding phosphopeptides. Owing to the extended active surface, abundant Lewis acid sites, and excellent magnetic controllability, Fe₃O₄-TiNbNS demonstrates superior sensitivity, selectivity, and capacity over homogeneous bulk metal oxides, layered oxides, and even restacked nanosheets in phosphopeptide enrichment, and further allows in situ isotope labeling to quantify aberrantly regulated phosphopeptides from the sera of leukemia patients. This composite nanosheet greatly contributes to the MS analysis of phosphopeptides and gives inspiration in the pursuit of 2D structured materials for the separation of other biological molecules of interest.

The characterization of the phosphorylation state(s) of a protein is best accomplished by using isolated or enriched phosphoprotein samples or their corresponding phosphopeptides. The process is typically time-consuming as, often, a combination of analytical approaches must be used. To increase throughput in the study of phosphoproteins, a microreactor enabling a novel strategy for performing fast proteolytic digestion and selective phosphopeptide enrichment was developed. The microreactor was fabricated using 100 μm i.d. fused-silica capillaries packed with 1-2 mm beds of C18 and/or TiO2 particles. Proteolytic digestion-only, phosphopeptide enrichment-only, and sequential proteolytic digestion/phosphopeptide enrichment microreactors were developed and tested with standard protein mixtures. The protein samples were adsorbed on the C18 particles, quickly digested with a proteolytic enzyme infused over the adsorbed proteins, and further eluted onto the TiO2 microreactor for phosphopeptide enrichment. A number of parameters were optimized to speed up the digestion and enrichment processes, including microreactor dimensions, sample concentrations, digestion time, flow rates, buffer compositions, and pH. The effective time for the proteolytic digestion and enrichment steps was less than 5 min. For simple samples, such as standard protein mixtures, this approach provided equivalent or better results than conventional bench-top methods, in terms of both enzymatic digestion and selectivity. Analysis times and reagent costs were reduced ~10- to 15-fold. Preliminary analysis of cell extracts and recombinant proteins indicated the feasibility of integrating these microreactors into more advanced workflows amenable to handling real-world biological samples.

Phosphopeptides are valuable reagent probes for studying protein-protein and protein-ligand interactions. The cellular delivery of phosphopeptides is challenging because of the presence of the negatively charged phosphate group. The cellular uptake of a number of fluorescently labeled phosphopeptides, including F'-GpYLPQTV, F'-NEpYTARQ, F'-AEEEIYGEFEAKKKK, F'-PEpYLGLD, F'-pYVNVQN-NH2, and F'-GpYEEI (F' = fluorescein), was evaluated in the presence or absence of [WR]4, a cyclic peptide containing alternating arginine (R) and tryptophan (W) residues, in human leukemia cells (CCRF-CEM) after 2 h incubation using flow cytometry. [WR]4 significantly improved the cellular uptake of all phosphopeptides. PEpYLGLD is a sequence that mimics pTyr1246 of ErbB2, which is responsible for binding to the Chk SH2 domain. The cellular uptake of F'-PEpYLGLD was enhanced dramatically, by 27-fold, in the presence of [WR]4 and was found to be time-dependent. Confocal microscopy of a mixture of F'-PEpYLGLD and [WR]4 in live cells showed intracellular localization and significantly higher cellular uptake compared to F'-PEpYLGLD alone. Transmission electron microscopy (TEM) and isothermal titration calorimetry (ITC) were used to study the interaction of PEpYLGLD and [WR]4. TEM results showed that the mixture of PEpYLGLD and [WR]4 formed noncircular nanosized structures with a width and height of 125 and 60 nm, respectively. ITC binding studies confirmed the interaction between [WR]4 and PEpYLGLD. The binding isotherm curves, derived from sequential binding models, showed an exothermic interaction driven by entropy. These studies suggest that the amphiphilic peptide [WR]4 can be used as a cellular delivery tool for cell-impermeable, negatively charged phosphopeptides.

Phosphopeptide binding domains mediate the directed and localized assembly of protein complexes essential to intracellular kinase signaling. To identify phosphopeptide binding proteins, we developed a proteomic screening method using immobilized, partially degenerate phosphopeptide mixtures combined with SILAC and microcapillary LC-MS/MS. The method was used to identify proteins that specifically bound to phosphorylated peptide library affinity matrices, including pTyr and the motifs pSer/pThr-Pro, pSer/pThr-X-X-X-pSer/pThr, pSer/pThr-Glu/Asp, and pSer/pThr-pSer/pThr in degenerate sequence contexts. Heavy and light SILAC lysates were applied to columns containing these phosphorylated and nonphosphorylated (control) peptide libraries, respectively, and bound proteins were eluted, combined, digested, and analyzed by LC-MS/MS using a hybrid quadrupole-TOF mass spectrometer. Heavy/light peptide ion ratios were calculated, and peptides that yielded ratios greater than ∼3:1 were considered as coming from potential phosphopeptide binding proteins, since this ratio represents the lowest ratio from a known positive control. Many of the proteins identified were known phosphopeptide-binding proteins, including the SH2 domain-containing p85 subunit of PI3K bound to pTyr, 14-3-3 bound to pSer/pThr-Asp/Glu, the polo-box domain-containing PLK1 and Pin1 bound to pSer/pThr-Pro, and pyruvate kinase M2 bound to pTyr. Approximately half of the hits identified by the peptide library screens were novel. Protein domain enrichment analysis revealed that most pTyr hits contain SH2 domains, as expected, and to a lesser extent SH3, C1, STAT, Tyr phosphatase, Pkinase, C2, and PH domains; the pSer/pThr motifs, however, did not reveal enriched domains across hits.
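
The ratio screen described above is simple to express in code. The Python sketch below is illustrative only: the peptide sequences and intensities are invented, and the ∼3:1 cutoff is the value quoted in the abstract.

```python
# SILAC screen: flag peptides whose heavy/light intensity ratio
# (phospho-library column vs. control column) exceeds a cutoff,
# here ~3:1 per the abstract.

def silac_hits(quant, ratio_cutoff=3.0):
    """quant: dict peptide -> (heavy_intensity, light_intensity).
    Returns {peptide: ratio} for candidate phosphopeptide binders."""
    return {pep: heavy / light
            for pep, (heavy, light) in quant.items()
            if light > 0 and heavy / light >= ratio_cutoff}

example = {
    "VLDLSHVTSK": (9.2e6, 2.1e6),   # ~4.4:1 -> candidate binder
    "AGFAGDDAPR": (3.0e6, 2.9e6),   # ~1:1  -> nonspecific background
}
print(silac_hits(example))  # {'VLDLSHVTSK': 4.38...}
```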

The rapid development of shotgun proteomics is paving the way for extensive proteome profiling, while providing extensive information on the various post-translational modifications (PTMs) that occur in a proteome of interest. For example, current phosphoproteomic methods can yield more than 10,000 phosphopeptides identified from a proteome sample. Despite these developments, it remains challenging to pinpoint the true phosphorylation sites, especially when multiple sites in a peptide are possible candidates for phosphorylation. We developed the Phospho-UMC filter, a simple method of localizing the site of phosphorylation that uses unique mass class (UMC) information to differentiate phosphopeptides with different phosphorylation sites and increase the confidence in phosphorylation site localization. The method was applied to large-scale phosphopeptide profiling data and was demonstrated to be effective in reducing the ambiguity associated with the tandem mass spectrometric data analysis of phosphopeptides.
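
The abstract does not detail the UMC construction; in the accurate-mass tradition, a UMC groups LC-MS features that share a monoisotopic mass (within a ppm tolerance) and elute close together, so positional phosphopeptide isomers of identical mass but different retention times fall into separate classes. A hedged Python sketch with illustrative tolerances and a naive greedy grouping:

```python
# Greedy grouping of (monoisotopic_mass, retention_time) features into
# unique mass classes (UMCs). Tolerances are placeholders for illustration.

def group_umcs(features, ppm_tol=10.0, rt_tol_min=1.0):
    """features: list of (mass_da, rt_min). Returns a list of UMCs,
    each a list of member features."""
    umcs = []
    for mass, rt in sorted(features):
        for umc in umcs:
            m0, rt0 = umc[0]
            if abs(mass - m0) / m0 * 1e6 <= ppm_tol and abs(rt - rt0) <= rt_tol_min:
                umc.append((mass, rt))
                break
        else:
            umcs.append([(mass, rt)])
    return umcs

# Two isobaric phosphopeptide isomers ~3 min apart land in separate UMCs,
# while repeat observations of the same isomer stay together.
feats = [(1251.52, 24.1), (1251.52, 24.3), (1251.52, 27.2)]
print(len(group_umcs(feats)))  # 2
```

Site assignments from MS/MS spectra falling inside one UMC can then be cross-checked for consistency, which is the intuition behind the filter.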

Background: The secretion of heterologous animal proteins in filamentous fungi is usually limited by bottlenecks in the vesicle-mediated secretory pathway. Results: Using the secretion of bovine chymosin in Aspergillus awamori as a model, we found a drastic increase (40- to 80-fold) in cells grown with casein or casein phosphopeptides (CPPs). CPPs are rich in phosphoserine, but phosphoserine itself did not increase the secretion of chymosin. The stimulatory effect is reduced by about 50% using partially dephosphorylated casein and is not exerted by casamino acids. The phosphopeptide effect was not exerted at the transcriptional level; instead, it was clearly observed on the secretion of chymosin by immunodetection analysis. Proteomics studies revealed very interesting metabolic changes in response to phosphopeptide supplementation. Oxidative metabolism was reduced, since enzymes involved in fermentative processes were overrepresented. An oxygen-binding hemoglobin-like protein was overrepresented in the proteome following phosphopeptide addition. Most interestingly, the intracellular pre-protein enzymes, including pre-prochymosin, were depleted (most of them are underrepresented in the intracellular proteome after the addition of CPPs), whereas the extracellular mature forms of several of these secretable proteins and cell-wall biosynthetic enzymes were greatly overrepresented in the secretome of phosphopeptide-supplemented cells. Another important 'moonlighting' protein (glyceraldehyde-3-phosphate dehydrogenase), which has been described to have vesicle-fusogenic and cytoskeleton-formation-modulating activities, was clearly overrepresented in phosphopeptide-supplemented cells. Conclusions: In summary, CPPs cause a reprogramming of cellular metabolism that leads to massive secretion of extracellular proteins.

Mutations in leucine-rich repeat kinase 2 (LRRK2) that increase its kinase activity are associated with familial forms of Parkinson disease (PD). As phosphorylation determines the functional state of most protein kinases, we systematically mapped LRRK2 phosphorylation sites by mass spectrometry. Our analysis revealed a high degree of constitutive phosphorylation in a narrow serine-rich region preceding the LRR domain. Allowing de novo autophosphorylation of purified LRRK2 in an in vitro autokinase assay prior to mass spectrometric analysis, we discovered multiple sites of autophosphorylation. Only serine and threonine residues were found phosphorylated, suggesting that LRRK2 is a true serine/threonine kinase. Autophosphorylation mainly targets the ROC GTPase domain, and its clustering around the GTP binding pocket of ROC suggests cross-regulatory activity between the kinase and ROC domains. In conclusion, the phosphoprotein LRRK2 functions as an autocatalytically active serine/threonine kinase. The clustering of phosphosites within two discrete domains suggests that phosphorylation may regulate its biological functions in an as yet unknown fashion.

Polo-like kinase 1 (PLK1) is an important regulator of diverse aspects of the cell cycle and proliferation. The protein has a highly conserved polo-box domain (PBD) in its C-terminal noncatalytic region, which exhibits a relatively broad sequence specificity in recognizing and binding phosphorylated substrates to control substrate phosphorylation by the kinase. In order to elucidate the structural basis, thermodynamic properties, and biological implications underlying PBD-substrate recognition and association, a systematic amino acid preference profile of phosphopeptide interaction with the PLK1 PBD domain was established via virtual mutagenesis analysis and mutation energy calculation, from which the contribution of different amino acids at each residue position of two reference phosphopeptides to domain-peptide binding was characterized comprehensively and quantitatively. With this profile, we are able to determine the favorable, neutral, and unfavorable amino acid types for each position of PBD-binding phosphopeptides, and we also explored the molecular origin of the broad sequence specificity of PBD-substrate recognition. To put the computational findings into practice, the profile was employed to guide the rational design of potent PBD binders; three 6-mer phosphopeptides (i.e., IQSpSPC, LQSpTPF, and LNSpTPT) were successfully developed, which can efficiently target the PBD domain with high affinity (Kd = 5.7 ± 1.1, 0.75 ± 0.18, and 7.2 ± 2.6 μM, respectively), as measured by a fluorescence anisotropy assay. The complex structure of the PLK1 PBD domain with a newly designed, potent phosphopeptide, LQSpTPF, as well as the diverse noncovalent chemical forces at the complex interface, such as H-bonds and hydrophobic interactions, were examined in detail to reveal the molecular mechanism underlying the high affinity and stability of the complex system.
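
To illustrate what such a preference profile looks like operationally, the Python sketch below classifies per-position mutation energies into the favorable/neutral/unfavorable categories mentioned above. The ΔΔG values and the ±0.5 kcal/mol cutoff are invented for demonstration; the study's energy function and thresholds are not reproduced here.

```python
# Turn per-position mutation energies into a preference profile.

def classify_profile(ddg_matrix, cutoff=0.5):
    """ddg_matrix: {position: {amino_acid: ddG (kcal/mol) relative to the
    reference phosphopeptide}}. Negative ddG = mutation improves binding."""
    return {pos: {aa: ("favorable" if ddg <= -cutoff else
                       "unfavorable" if ddg >= cutoff else "neutral")
                  for aa, ddg in energies.items()}
            for pos, energies in ddg_matrix.items()}

toy = {-1: {"S": -0.9, "A": 0.1, "E": 1.4}}  # one motif position
print(classify_profile(toy))
# {-1: {'S': 'favorable', 'A': 'neutral', 'E': 'unfavorable'}}
```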

We developed and compared two approaches for automated validation of phosphopeptide tandem mass spectra identified using database searching algorithms. Phosphopeptide identifications were obtained through SEQUEST searches of a protein database appended with its decoy (reversed sequences). Statistical evaluation and iterative searches were employed to create a high-quality data set of phosphopeptides. Automation of postsearch validation was approached with two different strategies. Using statistical multiple testing, we calculate a p value for each tentative peptide phosphorylation. In the second method, we use a support vector machine (SVM; a machine learning algorithm) binary classifier to predict whether a tentative peptide phosphorylation is true. We show good agreement (85%) between postsearch validation of phosphopeptide/spectrum matches by multiple testing and that from support vector machines. The automatic methods conform very well to manual expert validation in a blinded test. Additionally, the algorithms were tested on the identification of synthetic phosphopeptides. We show that phosphate neutral losses in tandem mass spectra can be used to assess the correctness of phosphopeptide/spectrum matches. An SVM classifier with a radial basis function provided classification accuracies from 95.7% to 96.8% on the positive data set, depending on the search algorithm used. Establishing the efficacy of an identification is a necessary step for further postsearch interrogation of the spectra for complete localization of phosphorylation sites. Our current implementation performs validation of phosphoserine/phosphothreonine-containing peptides having one or two phosphorylation sites from data gathered on an ion trap mass spectrometer. The SVM-based algorithm has been implemented in the software package DeBunker. We illustrate the application of the SVM-based software DeBunker on a large phosphorylation data set.
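
As a concrete illustration of the SVM approach (not the DeBunker implementation itself), the sketch below trains an RBF-kernel classifier on two made-up features, a relative neutral-loss intensity and a search-engine score; DeBunker's actual feature set and training data are not shown in the abstract. Assumes scikit-learn is available.

```python
# Minimal RBF-kernel SVM validator for phosphopeptide/spectrum matches.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Rows: [relative H3PO4 neutral-loss intensity, search-engine score].
X = np.array([[0.82, 3.1], [0.75, 2.8], [0.70, 3.5],   # correct matches
              [0.05, 1.2], [0.10, 0.9], [0.02, 1.5]])  # incorrect matches
y = np.array([1, 1, 1, 0, 0, 0])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)

candidate = np.array([[0.60, 2.9]])
print(clf.predict(candidate))            # [1] -> match accepted
print(clf.decision_function(candidate))  # signed margin distance
```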

Phosphoproteins/phosphopeptides with clusters of acidic residues are found throughout nature, where they aid in the prevention of unwanted precipitation of solid calcium phosphates. The acidic residues, particularly phosphoserine, interact with calcium and stabilize clusters of calcium and phosphate. Saliva and milk are two examples of biological fluids that contain such phosphoprotein/phosphopeptide-stabilized calcium phosphates, and both share a similar evolutionary pathway. Saliva has been shown to have remineralization potential and is of critical importance in maintaining the mineral content of teeth in the oral environment. Milk can be enzymatically modified to release casein phosphopeptides that contain the clusters of residues that allow milk to stabilize high concentrations of calcium and phosphate. These casein phosphopeptide-stabilized amorphous calcium phosphate nanocomplexes (CPP-ACP) can stabilize even higher concentrations of calcium and phosphate than milk and can be considered a salivary biomimetic, since they share many similarities with statherin. The mechanisms of action and the growing body of scientific evidence that supports the use of CPP-ACP to augment fluoride in inhibiting demineralization and enhancing the remineralization of white-spot lesions are reviewed.

Glycosylation and phosphorylation are important post-translational modifications in biological processes and biomarker research. The difficulty in analyzing these modifications lies mainly in their low abundance and the dissociation of labile regions such as sialic acids or phosphate groups. One solution in matrix-assisted laser desorption/ionization (MALDI) mass spectrometry is to improve matrices for glycopeptides, carbohydrates, and phosphopeptides by increasing the sensitivity and suppressing dissociation of the labile regions. Recently, the liquid matrix 3-aminoquinoline (3-AQ)/α-cyano-4-hydroxycinnamic acid (CHCA) (3-AQ/CHCA), introduced by Kolli et al. in 1996, has been reported to increase sensitivity for carbohydrates or phosphopeptides, but it has not been systematically evaluated for glycopeptides. In addition, 3-AQ/CHCA enhances the dissociation of labile regions. In contrast, a liquid matrix based on the 1,1,3,3-tetramethylguanidinium (TMG, G) salt of p-coumaric acid (CA) (G3CA) was reported to suppress dissociation of sulfate groups or sialic acids of carbohydrates. Here we introduce the liquid matrix 3-AQ/CA for glycopeptides, carbohydrates, and phosphopeptides. All of the analytes were detected as [M + H]+ or [M - H]- with higher or comparable sensitivity using 3-AQ/CA compared with 3-AQ/CHCA or 2,5-dihydroxybenzoic acid (2,5-DHB). The sensitivity was increased 1- to 1000-fold using 3-AQ/CA. The dissociation of labile regions such as sialic acids or phosphate groups and the fragmentation of neutral carbohydrates were suppressed more using 3-AQ/CA than using 3-AQ/CHCA or 2,5-DHB. 3-AQ/CA was thus determined to be an effective MALDI matrix for high sensitivity and for suppressing the dissociation of labile regions in glycosylation and phosphorylation analyses.

A new tantalum-based sol-gel material was synthesized via a unique sol-gel synthesis pathway in which PEG was incorporated into the sol-gel structure without a calcination step. This improved the material's chemical and physical properties for high-capacity, selective enrichment of phosphopeptides from protein digests in complex biological media. The specificity of the tantalum-based sol-gel material for phosphopeptides was evaluated and compared with tantalum(V) oxide (Ta2O5) in different phosphopeptide enrichment applications. The tantalum-based sol-gel and tantalum(V) oxide were characterized in detail using FT-IR spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and a surface area and pore size analyzer. These characterization studies established the surface morphology, pore volume, and crystallinity of the materials and confirmed PEG incorporation into the sol-gel structure to produce a more hydrophilic material. Comparison of the X-ray diffractograms of the two materials showed that the broad signals of the tantalum-based sol-gel clearly reflected its amorphous structure, which is more likely to create sufficient surface area and provide more accessible tantalum atoms for phosphopeptide adsorption than the neat, more crystalline structure of Ta2O5. Accordingly, the phosphopeptide enrichment performance of the tantalum-based sol-gel was found to be remarkably higher than that of the more crystalline Ta2O5 in our studies. Phosphopeptides at femtomole levels could be selectively enriched using the tantalum-based sol-gel and detected with a higher signal-to-noise ratio by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). Moreover, phosphopeptides in a tryptic digest of non-fat bovine milk, a complex real-world biological sample, were retained with higher yield using the tantalum-based sol-gel. Additionally, the sol-gel material…

This study aims to investigate the effect of topical applications of 10% casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) on white spot lesions (WSL) detected after treatment with fixed orthodontic appliances. Sixty healthy adolescents with ≥1 clinically visible WSL at debonding were recruited and randomly allocated to a randomised controlled trial with two parallel groups. The intervention group was instructed to topically apply a CPP-ACP-containing agent (Tooth Mousse, GC Europe) once daily, and the subjects of the control group brushed their teeth with standard fluoride toothpaste. The intervention period was 4 weeks and the endpoints were quantitative light-induced fluorescence (QLF) on buccal surfaces of the upper incisors, cuspids and first premolars, and visual scoring from digital photos. The attrition rate was 15%, mostly due to technical errors, and 327 lesions were included in the final evaluation. A statistically significant (p < 0.05) effect was found: treating white spot lesions after debonding of orthodontic appliances with a casein phosphopeptide-stabilised amorphous calcium phosphate agent resulted in significantly reduced fluorescence and a reduced area of the lesions after 4 weeks, as assessed by QLF. The improvement was, however, not superior to the "natural" regression following daily use of fluoride toothpaste.

In this work, novel magnetic polymeric core-shell structured microspheres with immobilized Ce(IV), Fe3O4@SiO2@PVPA-Ce(IV), were rationally designed and successfully synthesized via a facile route for the first time. Magnetic Fe3O4@SiO2 microspheres were first prepared by directly coating a thin layer of silica onto Fe3O4 magnetic particles using a sol-gel method. A poly(vinylphosphonic acid) (PVPA) shell was then coated onto the Fe3O4@SiO2 microspheres through a radical polymerization reaction to form Fe3O4@SiO2@PVPA microspheres, and finally Ce(IV) ions were robustly immobilized onto the Fe3O4@SiO2@PVPA microspheres through strong chelation between Ce(IV) ions and the phosphate moieties of the PVPA. The applicability of the Fe3O4@SiO2@PVPA-Ce(IV) microspheres for selective enrichment and rapid separation of phosphopeptides from proteolytic digests of standard and real protein samples was investigated. The results demonstrated that the core-shell structured Fe3O4@SiO2@PVPA-Ce(IV) microspheres, with abundant Ce(IV) affinity sites and excellent magnetic responsiveness, can effectively purify phosphopeptides from complex biosamples for MS detection, taking advantage of rapid magnetic separation and the selective affinity between Ce(IV) ions and the phosphate moieties of the phosphopeptides. Furthermore, they can be effectively recycled, show good reusability, and perform better than commercial TiO2 beads and homemade Fe3O4@PMAA-Ce(IV) microspheres. Thus the Fe3O4@SiO2@PVPA-Ce(IV) microspheres can greatly benefit the mass spectrometric qualitative analysis of phosphopeptides in phosphoproteome research.

Programmers can no longer depend on new processors to have significantly improved single-thread performance. Instead, gains have to come from other sources, such as the compiler and its optimization passes. Advanced passes make use of information on the dependencies related to loops. We improve the quality of that information by reusing the information given by the programmer for parallelization. We have implemented a prototype based on GCC, into which we also add a new optimization pass. Our approach improves the amount of correctly classified dependencies, resulting in a 46% average improvement…

The production of structurally significant product ions during the dissociation of phosphopeptides is key to the successful determination of phosphorylation sites. These diagnostic ions can be generated using the widely adopted MS/MS approach, MS3 (Data Dependent Neutral Loss, DDNL), or multistage activation (MSA). The main purpose of this work is to introduce a false-localization-rate (FLR) probabilistic model to enable unbiased phosphoproteomics studies. Briefly, our algorithm infers a probabilistic function from the distribution of the identified phosphopeptides' XCorr Delta scores (XD-Scores) in the current experiment, deriving p-values from Gaussian mixture models and a logistic function. We demonstrate the usefulness of our probabilistic model by revisiting the "to MSA, or not to MSA" dilemma. For this, we used human leukemia-derived cells (K562) as a study model and enriched for phosphopeptides using hydroxyapatite (HAP) chromatography. The aliquots were analyzed with and without MSA on an Orbitrap-XL. Our XD-scoring analysis revealed that the MS/MS approach provides more identifications because of its faster scan rate, but that for the same given scan rate higher-confidence spectra can be achieved with MSA. Our software is integrated into PatternLab for proteomics, freely available to the academic community at http://www.patternlabforproteomics.org. Biological significance: Assigning statistical confidence to phosphorylation sites is necessary for proper phosphoproteomic assessment. Here we present a rigorous statistical model, based on Gaussian mixture models and a logistic function, which overcomes the shortcomings of previous tools. The algorithm described herein is made readily available to the scientific community by integrating it into the widely adopted PatternLab for proteomics. This article is part of a Special Issue entitled: Computational Proteomics.
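The scoring model can be sketched compactly: fit a two-component Gaussian mixture to the XD-scores, treat the lower-mean component as the incorrect-identification population, and derive a right-tail p-value, optionally mapped through a logistic function. All constants and data below are invented; this is not the PatternLab implementation.

```python
# Hedged sketch of a GMM + logistic confidence model for XD-scores.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic XD-scores: a low-scoring (incorrect) and a high-scoring (correct) population
xd = np.concatenate([rng.normal(0.5, 0.3, 800), rng.normal(2.0, 0.5, 200)])

gmm = GaussianMixture(n_components=2, random_state=1).fit(xd.reshape(-1, 1))
lo = int(np.argmin(gmm.means_.ravel()))          # component modeling incorrect IDs
mu, sd = gmm.means_.ravel()[lo], np.sqrt(gmm.covariances_.ravel()[lo])

def p_value(score):
    """Right-tail probability of the score under the 'incorrect' component."""
    return norm.sf(score, loc=mu, scale=sd)

def logistic_confidence(score, midpoint=1.2, steepness=4.0):
    """Logistic mapping of a score to a 0-1 confidence (illustrative constants)."""
    return 1.0 / (1.0 + np.exp(-steepness * (score - midpoint)))

print(p_value(2.0), logistic_confidence(2.0))
```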

A new method based upon adding ammonium phosphate as a matrix additive to enhance the ionization efficiency of phosphopeptides in matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is described. The influence of different phosphate salts at various concentrations on phosphopeptide ionization efficiency was investigated systematically: the signal intensity of the phosphopeptide 48FQ[pS]EEQQQTEDELQDK63, derived from a β-casein digest, increased 5- to 8-fold under optimized conditions with 10 mmol/L monobasic ammonium phosphate, or 3- to 4-fold with 10 mmol/L dibasic ammonium phosphate, as an additive to the 2,5-dihydroxybenzoic acid matrix. Compared with the best matrix system currently reported for phosphopeptide ionization, the signal intensity of this phosphopeptide was further doubled when 5 mmol/L dibasic ammonium phosphate was introduced into the 2,4,6-trihydroxyacetophenone matrix. In addition, a mechanism is discussed in which the cooperative action of the ammonium cation and the phosphate anion is of central importance in enhancing phosphopeptide ionization efficiency in MALDI-MS.

Deregulation of signaling pathways involving phosphorylation is a hallmark of malignant transformation. Degradation of phosphoproteins generates cancer-specific phosphopeptides that are associated with MHC-I and II molecules and recognized by T-cells. We identified 95 phosphopeptides presented on the surface of primary hematological tumors and normal tissues, including 61 that were tumor-specific. Phosphopeptides were more prevalent on more aggressive and malignant samples. CD8 T-cell lines specific for these phosphopeptides recognized and killed both leukemia cell lines and HLA-matched primary leukemia cells ex vivo. Healthy individuals showed surprisingly high levels of CD8 T-cell responses against many of these phosphopeptides within the circulating memory compartment. This immunity was significantly reduced or absent in some leukemia patients, which correlated with clinical outcome, and was restored following allogeneic stem cell transplantation. These results suggest that phosphopeptides may be targets of cancer immune surveillance in humans, and point to their importance for development of vaccine-based and T-cell adoptive transfer immunotherapies.

Some compounds of low abundance in biological samples play important roles in bioprocesses. However, the detection of these compounds at inherently trace concentrations, with interference from a complex matrix, is difficult. New materials for sample pretreatment are essential for the removal of interferences and for selective enrichment. In this study, echinus-like Fe3O4@TiO2 core-shell-structured microspheres (echinus-like microspheres) have been synthesized for the first time. Rutile-phase TiO2 nanorods with a length of approximately 300 nm and a width of approximately 60 nm are arranged regularly on the surface of the microspheres. This novel type of material exhibited good selectivity and adsorption capacity toward phosphate-containing compounds. In proteomics research, the echinus-like microspheres were used to selectively enrich phosphopeptides from complex peptide mixtures. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF/MS) analysis showed that fourteen phosphopeptides were detected from α-casein tryptic digests after enrichment. Even in peptide mixtures that contained highly abundant nonphosphorylated peptides, with interference from bovine serum albumin, these phosphopeptides could still be selectively trapped with little nonspecific adsorption. In metabolomics studies, the echinus-like microspheres were further used to selectively remove phosphocholines (PCs) and lysophosphocholines (LPCs), which are the main matrix interferences for the detection of metabolites of low abundance in plasma. Liquid chromatography-quadrupole time-of-flight mass spectrometry was used to perform the metabolic profiling of plasma. The high concentrations of PCs and LPCs were effectively eliminated, and many endogenous metabolites of low abundance were enhanced or even observed for the first time. All of the results suggest that echinus-like microspheres have potential applications in proteomics and metabolomics to improve the detection of low-abundance compounds.

Chk2/CHEK2/hCds1 is a modular serine-threonine kinase involved in transducing DNA damage signals. Phosphorylation by ataxia telangiectasia-mutated kinase (ATM) promotes Chk2 self-association, autophosphorylation, and activation. Here we use expressed protein ligation to generate a Chk2 N-terminal regulatory region encompassing a forkhead-associated (FHA) domain, a stoichiometrically phosphorylated Thr-68 motif, and the intervening linker. Hydrodynamic analysis reveals that Thr-68 phosphorylation stabilizes the weak FHA-FHA interactions that occur in the unphosphorylated species to form a high-affinity dimer. Although dimerization is clearly a prerequisite for Chk2 activation in vivo, we show that it modulates potential phosphodependent interactions with effector proteins and substrates through either the pThr-68 site or the canonical FHA phosphobinding surface with which it is tightly associated. We further show that the dimer-occluded pThr-68 motif is released by intra-dimer autophosphorylation of the FHA domain at the highly conserved Ser-140 position, a major pThr contact in all FHA-phosphopeptide complex structures, revealing a mechanism of Chk2 dimer dissociation following kinase domain activation.

The combination of immobilized metal affinity chromatography (IMAC) and mass spectrometry is a widely used technique for enrichment and sequencing of phosphopeptides. In the IMAC method, negatively charged phosphate groups interact with positively charged metal ions (Fe3+, Ga3+, and Al3+)…

Introduction: Immobilized metal ion affinity chromatography (IMAC) makes use of matrix-bound metals to affinity-purify phosphoproteins and phosphopeptides. Commonly used metals in early studies, such as Ni(2+), Co(2+), Zn(2+), and Mn(2+), were shown to bind strongly to proteins with a high density of…

This study was carried out to evaluate the difference between bonding to demineralized enamel and enamel remineralized using casein phosphopeptide-amorphous calcium phosphate with fluoride (CPP-ACFP) or without fluoride (CPP-ACP), compared to normal enamel. Another aim was to test whether the newly introduced Single Bond Universal adhesive system would show better bonding to any enamel condition in comparison to the other tested adhesive systems. The lingual enamel surfaces of 40 noncarious human third molars were divided into four main groups according to the enamel condition (ground normal enamel [negative control]; demineralized enamel [positive control]; and enamel remineralized with CPP-ACP or with CPP-ACFP, respectively). Within each main group, the lingual enamel surface of each tooth was sectioned into three slabs, resulting in 30 slabs that were distributed into three subgroups according to the adhesive system utilized (Clearfil S3 Bond Plus, Single Bond Universal, or G-aenial Bond). Two resin composite microcylinder buildups were made on each enamel slab using Filtek Z350 XT. The microshear bond strength (μSBS) was evaluated at a crosshead speed of 0.5 mm/min. Modes of failure were detected using an environmental scanning electron microscope at 300× magnification. Two-way analysis of variance with repeated measures revealed a significant effect for the enamel condition; however, there was no significant effect for the type of adhesive system, and the interaction between the enamel condition and the type of adhesive system was also not significant. Modes of failure were mainly adhesive, except for the demineralized enamel, which showed a mixed type of failure in which cohesive failure in enamel was recorded. All single-step self-etch adhesives revealed comparable μSBS values on ground enamel and on enamel remineralized with CPP-ACP or CPP-ACFP. Bonding to demineralized enamel was ineffective. For any given enamel condition, no tested single-step self-etch adhesive was superior in its bonding.

Background: The benefits of remineralizing agents in a wide variety of formulations have been proved in caries management. The casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) nanocomplex has been recommended and used as a remineralizing agent. Nano-hydroxyapatite (n-HAp) is one of the most biocompatible and bioactive materials, with a wide range of applications in dentistry, but does it perform better than CPP-ACP? Aims: To evaluate and compare the remineralizing efficiency of pastes containing nano-hydroxyapatite and casein phosphopeptide-amorphous calcium phosphate. Settings and Design: The study was an in vitro, single-blinded study with lottery-method randomization, approved by the Institutional Ethics Committee. Materials and Methods: Thirty noncarious premolar teeth were demineralized, divided into two groups, and subjected to remineralization. The samples were analysed for surface hardness and mineral content. Statistical Analysis: Student's t test and repeated-measures ANOVA were applied. Results: Average hardness in the nano-hydroxyapatite group increased to 340 ± 31.70 SD and 426 ± 50.62 SD at 15 and 30 days, respectively, and in the CPP-ACP group to 355.83 ± 38.55 SD and 372.67 ± 53.63 SD. The change in hardness values was not statistically significant (P = 0.39, P > 0.05). Calcium and phosphorus levels increased in both groups, but the increase was not significant. Conclusion: Both agents are effective in causing remineralization of enamel. Nano-hydroxyapatite is more effective than casein phosphopeptide-amorphous calcium phosphate in increasing the calcium and phosphorus content of enamel, and this effect is more evident over a longer treatment period. Key Message: Remineralizing agents are a boon for caries management; with the advent of many formulations it is difficult to select the agent clinically. This study compares the remineralizing potential of casein phosphopeptide-amorphous calcium phosphate and nano-hydroxyapatite.

Aims and Objectives: To determine the demineralization inhibitory potential of fluoride varnish and casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) and to compare and evaluate the additive effect of fluoride varnish + CPP-ACP. Materials and Methods: Ten healthy premolar teeth that were extracted for orthodontic purposes were collected, and each tooth was longitudinally sectioned buccolingually and mesiodistally into four sections. The teeth were then assigned to four different treatment groups namely fluoride varnish, CPP-ACP, F− varnish followed by CPP-ACP and control. The prepared enamel samples were suspended in an artificial caries challenge for 10 days. The demineralizing inhibitory effects of the groups were recorded using polarized light microscopy. Statistical Analysis Used: Statistical analysis was carried out using analysis of variance and Duncan's multiple range tests. Results: The mean lesion depths of all the groups were Group 1 (fluoride varnish): 104.71, Group 2 (CPP-ACP): 127.09, Group 3: (F− varnish + CPP-ACP): 82.34, Group 4 (control): 146.93. Conclusion: Demineralization inhibitory potential on the additive use of F− varnish and casein phosphopeptide was superior to fluoride varnish or CPP-ACP applied alone on the enamel of young permanent teeth.

A number of germ-line mutations in the BRCA1 gene confer susceptibility to breast and ovarian cancer. However, it remains difficult to determine whether many single amino-acid (missense) changes in the BRCA1 protein that are frequently detected in the clinical setting are pathologic or not. Here, we used a combination of functional, crystallographic, biophysical, molecular and evolutionary techniques, and classical genetic segregation analysis to demonstrate that the BRCA1 missense variant M1775K is pathogenic. Functional assays in yeast and mammalian cells showed that the BRCA1 BRCT domains carrying the amino-acid change M1775K displayed markedly reduced transcriptional activity, indicating that this variant represents a deleterious mutation. Importantly, the M1775K mutation disrupted the phosphopeptide-binding pocket of the BRCA1 BRCT domains, thereby inhibiting the BRCA1 interaction with the proteins BRIP1 and CtIP, which are involved in DNA damage-induced checkpoint control. These results indicate that the integrity of the BRCT phosphopeptide-binding pocket is critical for the tumor suppression function of BRCA1. Moreover, this study demonstrates that multiple lines of evidence obtained from a combination of functional, structural, molecular and evolutionary techniques, and classical genetic segregation analysis are required to confirm the pathogenicity of rare variants of disease-susceptibility genes and obtain important insights into the underlying pathogenetic mechanisms.

Enzyme-catalyzed dephosphorylation is essential for biomineralization and bone metabolism. Here we report the exploration of using an enzymatic reaction to transform biocomposites of phosphopeptides and calcium (or strontium) ions into supramolecular hydrogels as a mimic of the enzymatic dissolution of biominerals. 31P NMR shows that the strong affinity between the phosphopeptides and alkaline-earth metal ions (e.g., Ca2+ or Sr2+) induces the formation of biocomposites as precipitates. Electron microscopy reveals that the enzymatic reaction regulates the morphological transition from particles to nanofibers. Rheology confirms the formation of a rigid hydrogel. As the first example of enzyme-instructed dissolution of a solid to form supramolecular nanofibers/hydrogels, this work provides an approach to generate soft materials with desired properties, expands the application of supramolecular hydrogelators, and offers insights for controlling the demineralization of calcified soft tissues.

We developed a probability-based machine-learning program, Colander, to identify tandem mass spectra that are highly likely to represent phosphopeptides prior to database search. We identified statistically significant diagnostic features of phosphopeptide tandem mass spectra based on ion trap CID MS/MS experiments. Statistics for the features were calculated from 376 validated phosphopeptide spectra and 376 nonphosphopeptide spectra. A probability-based support vector machine (SVM) program, Colander, was then trained on five selected features. Data sets were assembled from LC/LC-MS/MS analyses of large-scale phosphopeptide enrichments from proteolyzed cells and tissues, and from synthetic phosphopeptides. These data sets were used to evaluate the capability of Colander to select pS/pT-containing phosphopeptide tandem mass spectra. When applied to unknown tandem mass spectra, Colander can routinely remove 80% of tandem mass spectra while retaining 95% of phosphopeptide tandem mass spectra. The program significantly reduced the computational time spent on database search, by 60-90%. Furthermore, prefiltering tandem mass spectra representing phosphopeptides can increase the number of phosphopeptide identifications under a predefined false positive rate.
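The prefiltering trade-off quoted above (remove ~80% of spectra while retaining ~95% of phosphopeptide spectra) comes down to threshold selection on classifier scores. The sketch below, on synthetic scores, picks the threshold at the 5th percentile of the positive class; all numbers are invented and this is not the Colander code.

```python
# Illustrative threshold selection for spectrum prefiltering: keep ~95% of
# validated phosphopeptide spectra, then measure how much background is removed.
import numpy as np

rng = np.random.default_rng(2)
phospho_scores = rng.normal(1.0, 0.5, 376)      # validated phosphopeptide spectra
background_scores = rng.normal(-0.5, 0.6, 5000) # all other MS/MS spectra

threshold = np.quantile(phospho_scores, 0.05)   # retain ~95% of phosphopeptides
removed = np.mean(background_scores < threshold)
print(f"threshold={threshold:.2f}, background removed={removed:.1%}")
```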

Protein kinases play important regulatory roles in intracellular signal transduction pathways. The aberrant activities of protein kinases are closely associated with the development of various diseases, which necessitates the development of practical and sensitive assays for monitoring protein kinase activities as well as for screening potential kinase-targeted drugs. We demonstrate here a robust luminescence resonance energy transfer (LRET)-based protein kinase assay using NaYF4:Yb,Er, one of the most efficient upconversion nanophosphors (UCNPs), as an autofluorescence-free LRET donor and a tetramethylrhodamine (TAMRA)-labeled substrate peptide as the acceptor. Besides acting as the LRET donor, the NaYF4:Yb,Er UCNPs also serve as the phosphopeptide-recognizing matrix, because the intrinsic rare earth ions of the UCNPs can specifically capture the fluorescent phosphopeptides generated by protein kinases over the unphosphorylated ones. A sensitive and generic protein kinase assay is thereby achieved in an extremely simple mix-and-read format without any requirement for surface modification, substrate immobilization, separation, or washing steps, showing great potential in protein kinase-related clinical diagnosis and drug discovery. To the best of our knowledge, this is the first report of the use of rare earth-doped UCNPs as both the phospho-recognizing and signal-reporting elements for protein kinase analysis.

Rare-earth phosphate microspheres with unique structures were developed as affinity probes for the selective capture and tagging of phosphopeptides. Prickly, monodisperse REPO4 (RE = Yb, Gd, Y) microspheres that have hollow structures, low densities, high specific surface areas, and large adsorptive capacities were prepared by an ion-exchange method. The elemental compositions and crystal structures of these affinity probes were confirmed by energy-dispersive spectroscopy (EDS), powder X-ray diffraction (XRD), and Fourier-transform infrared (FTIR) spectroscopy. The morphologies of these compounds were investigated using scanning electron microscopy (SEM), transmission electron microscopy (TEM), and nitrogen-adsorption isotherms. The potential of these microspheres for selectively capturing and labeling target biological molecules was evaluated by protein-digestion analysis and a real sample, as well as by comparison with the widely used TiO2 affinity microspheres. These results show that these porous rare-earth phosphate microspheres are highly promising probes for the rapid purification and recognition of phosphopeptides.

A new strategy for the manufacture of a turn-on fluorescent molecularly imprinted polymer (CdTe/SiO2/MIP) receptor for detecting tyrosine phosphopeptides (pTyr peptides) is proposed. The receptor was prepared by a surface imprinting procedure and the epitope approach, with silica-capped CdTe quantum dots (QDs) as the core substrate and fluorescent signal, phenylphosphonic acid (PPA) as the dummy template, 1-[3-(trimethoxysilyl)propyl]urea as the functional monomer, and octyltrimethoxysilane as the cross-linker. The synthetic CdTe/SiO2/MIP was able to selectively capture the template PPA and the corresponding target pTyr peptide, with fluorescence enhancement arising from the specific interaction between them and the recognition cavities. The receptor exhibited linear fluorescence enhancement toward the pTyr peptide in the range of 0.5-35 μM, with a detection limit of 0.37 μM. The precision for five replicate detections of the pTyr peptide at 20 μM was 2.60% (relative standard deviation). Combining the fluorescence properties of the CdTe QDs with the merits of the surface imprinting technique and the epitope approach, the receptor not only offered high recognition-site accessibility and good binding affinity for the target pTyr peptide, but also improved the fluorescence selectivity of the CdTe QDs, and demonstrated the feasibility of fabricating a turn-on fluorescence probe using the surface imprinting procedure and the epitope approach.

Enamel decalcification in orthodontics is a concern for dentists, and methods to remineralize these lesions are the focus of intense research. The aim of this study was to evaluate the remineralizing effect of casein phosphopeptide amorphous calcium phosphate (CPP-ACP) nanocomplexes on enamel decalcification in orthodontics. Twenty orthodontic patients with decalcified enamel lesions during fixed orthodontic therapy were recruited to this study as the test group, and twenty orthodontic patients with a similar condition served as the control group. GC Tooth Mousse, the main component of which is CPP-ACP, was used by each patient of the test group every night after tooth-brushing for six months. In the control group, each patient was asked to brush teeth with toothpaste containing 1100 parts per million (ppm) of fluoride twice a day. Standardized intraoral images were taken for all patients and the extent of enamel decalcification was evaluated before and after treatment over the study period. Measurements were statistically compared by t test. After using CPP-ACP for six months, the enamel decalcification index (EDI) of all patients had decreased; the mean EDI before using CPP-ACP was 0.191 ± 0.025 and that after using CPP-ACP was 0.183 ± 0.023, a significant difference (t = 5.169, P < 0.01). For the control group, the mean EDI before treatment was 0.188 ± 0.037 and that after treatment was 0.187 ± 0.046, a non-significant difference (t = 1.711, P > 0.05). CPP-ACP can effectively improve demineralized enamel lesions during orthodontic treatment, so it has some remineralization potential for enamel decalcification in orthodontics.

Objective: To evaluate the ability of casein phosphopeptide/amorphous calcium phosphate (CPP/ACP) and lysozyme, lactoferrin, and lactoperoxidase (LLL) added to glass ionomer cement (GIC) to inhibit the growth of S. mutans in a caries model. Material and methods: Eighty permanent third molars were selected. The dentin of these teeth was exposed and flattened. Except for the coronal dentin, the specimens were waterproofed, autoclaved, and submitted to cariogenic challenge with a standard strain of S. mutans. The carious lesions were sealed as follows: group 1 (n=20): GIC without additives; group 2 (n=20): GIC + CPP/ACP; group 3 (n=20): GIC + LLL; group 4 (n=20): GIC + CPP/ACP + LLL. S. mutans counts were performed before the caries were sealed (n=5), after 24 hours (n=5), at 1 month (n=5), and at 6 months (n=5). The results were analyzed using descriptive statistical analysis and the Kruskal-Wallis test (Student-Newman-Keuls test). Results: GIC + LLL caused a significant reduction of S. mutans counts 1 month after sealing of the carious lesions (p < 0.05).

For the phosphoproteome, highly specific and efficient capture of diverse phosphopeptides from intricate biological samples is of great significance for comprehensive, in-depth phosphoproteomics research; until now, however, it has remained a challenge. In this study, a novel porous immobilized metal ion affinity chromatography (IMAC) material was designed and fabricated to improve the selectivity and detection limit for phosphopeptides by coating a metal-organic framework (MOF) shell onto Fe3O4 nanoparticles using a layer-by-layer method (the synthesized nanoparticles are denoted Fe3O4@MIL-100(Fe)). The thick shell endows the nanoparticles with a highly hydrophilic character, a very large surface area, a high loading of immobilized Fe3+ ions, and a distinctive porous structure. Specifically, the as-synthesized MOF-decorated magnetic nanoparticles possess an ultralarge surface area of up to 168.66 m2 g-1, two appropriate pore sizes of 1.93 and 3.91 nm with a narrow size distribution, and rapid separation under a magnetic field. These unique features give the synthesized nanoparticles an excellent ability to enrich phosphopeptides, with high selectivity for β-casein (molar ratio of β-casein/BSA, 1:500), large enrichment capacity (60 mg g-1), low detection limit (0.5 fmol), excellent phosphopeptide recovery (above 84.47%), fine size-exclusion of high-molecular-weight proteins, good reusability, and desirable batch-to-batch repeatability. Furthermore, encouraged by these results, we successfully applied the as-prepared porous IMAC nanoparticles to the specific capture of phosphopeptides from human serum (both healthy and unhealthy) and nonfat milk, proving them a good candidate for the enrichment and detection of low-abundance phosphopeptides from complicated biological samples.

Casein phosphopeptides (CPPs) containing chelated calcium drastically increase the secretion of extracellular homologous and heterologous proteins in filamentous fungi. Casein phosphopeptides released by digestion of alpha- and beta-casein are rich in phosphoserine residues (SerP). They stimulate enzyme secretion in the gastrointestinal tract and enhance the immune response in mammals, and are used as food supplements. It is well known that casein phosphopeptides transport Ca2+ across membranes and play an important role in Ca2+ homeostasis in the cells. Addition of CPPs drastically increases the production of heterologous proteins in Aspergillus as a host for industrial enzyme production. Recent proteomics studies showed that CPPs drastically alter the vesicle-mediated secretory pathway in filamentous fungi, apparently because they change the calcium concentration in organelles that act as calcium reservoirs. A major role in organellar calcium homeostasis is played by the pmr1 gene, which encodes a Ca2+/Mn2+ transport ATPase localized in the Golgi complex; this transporter controls the balance between intra-Golgi and cytoplasmic Ca2+ concentrations. A Golgi-located casein kinase (CkiA) governs the ER-to-Golgi directionality of the movement of secretory proteins by interacting with the COPII coat of secretory vesicles when they reach the Golgi. Mutants defective in the casein kinase CkiA show abnormal targeting of some secretory proteins, including cytoplasmic membrane amino acid transporters that in ckiA mutants are mistargeted to vacuolar membranes. Interestingly, addition of CPPs increases a glyceraldehyde-3-phosphate dehydrogenase protein that is known to associate with microtubules and act as a vesicle/membrane fusogenic agent. In summary, CPPs alter the protein secretory pathway in fungi, adapting it to a deregulated protein traffic through the organelles and vesicles, which results in a drastic increase in the secretion of heterologous and also of homologous proteins.

…labeling of proteins and peptides for in vitro cell culture systems (stable isotope labeling using amino acids in cell culture, SILAC) or isobaric peptide labels such as isobaric tags for relative and absolute quantitation (iTRAQ) and tandem mass tags (TMT) for both in vitro and in vivo systems. … These quantitation strategies have also been successfully applied to phosphoproteomics studies for the investigation of signal transduction pathways. Here we describe major drawbacks associated with isobaric labeling for the identification and quantitation of phosphopeptides using electrospray tandem mass…

Our recently discovered, selective, on-resin route to N(τ)-alkylated imidazolium-containing histidine residues affords new strategies for peptide mimetic design. Here we demonstrate the use of this chemistry to prepare a series of macrocyclic phosphopeptides in which imidazolium groups serve as ring-forming junctions. These cationic moieties subsequently serve to charge-mask the phosphoamino acid group that directed their formation. Furthermore, neighbor-directed histidine N(τ)-alkylation opens the door to new families of phosphopeptidomimetics for use in a range of chemical biology contexts.

A rugged sample-preparation method for comprehensive affinity enrichment of phosphopeptides from protein digests has been developed. The method uses a series of chemical reactions to incorporate, efficiently and specifically, a thiol-functionalized affinity tag into the analyte by barium hydroxide-catalyzed β-elimination with Michael addition using 2-aminoethanethiol as the nucleophile, and subsequent thiolation of the resulting amino group with sulfosuccinimidyl 2-(biotinamido)ethyl-1,3-dithiopropionate. Gentle oxidation of cysteine residues, followed by acetylation of α- and ε-amino groups before these reactions, ensured selectivity of reversible capture of the modified phosphopeptides by covalent chromatography on activated thiol sepharose. The use of C18 reversed-phase supports as a miniaturized reaction bed facilitated optimization of the individual modification steps for throughput and completeness of derivatization. Reagents were exchanged directly on the supports, eliminating sample transfer between the reaction steps and thus allowing the immobilized analyte to be carried through the multistep reaction scheme with minimal sample loss. The use of this sample-preparation method for phosphopeptide enrichment was demonstrated with low-level amounts of in-gel-digested protein. As applied to tryptic digests of α-S1- and β-casein, the method enabled the enrichment and detection of the phosphorylated peptides contained in the mixture, including the tetraphosphorylated species of β-casein, which has escaped chemical procedures reported previously. The isolates proved highly suitable for mapping the sites of phosphorylation by collisionally induced dissociation. β-Elimination with consecutive Michael addition expanded the use of the solid-phase-based enrichment strategy to phosphothreonyl peptides, to phosphoseryl/phosphothreonyl peptides derived from proline-directed kinase substrates, and to their O-sulfono- and O-linked β-N-acetylglucosamine (O-GlcNAc) modified counterparts.

Two templating approaches to produce imprinted phosphotyrosine capture beads with a controllable pore structure are reported and compared with respect to their ability to enrich phosphopeptides from a tryptic peptide mixture. The beads were prepared by the polymerization of urea-based host monomers and crosslinkers inside the pores of macroporous silica beads with both free and immobilized template. In the final step the silica was removed by fluoride etching, resulting in mesoporous polymer replicas with narrow pore size distributions, pore diameters ≈ 10 nm, and surface areas > 260 m2 g-1. The beads displayed…

…under three different conditions. Fe(III)-nitrilotriacetic acid (NTA) IMAC resin was chosen due to its superior performance in all tests. We further investigated the change in solution ionization efficiency of the phosphoryl group and the carboxylic group in different acetonitrile-water solutions and observed that the ionization efficiencies of the phosphoryl group and the carboxylic group changed differently when the acetonitrile concentration was increased. A magnified difference was achieved in solutions with high acetonitrile content. On the basis of this concept, an optimized phosphopeptide enrichment protocol was established…

The intermittency analysis of single-event data (particle moments) in multiparticle production is improved, taking into account corrections due to the reconstruction of the history of a particle cascade. This approach is tested within the framework of the α-model.

A SWOT (strengths, weaknesses, opportunities, and threats) analysis of a teacher education program, or any program, can be the driving force for implementing change. A SWOT analysis is used to assist faculty in initiating meaningful change in a program and to use the data for program improvement. This tool is useful in any undergraduate or degree…

Rapid and selective enrichment of phosphopeptides from complex biological samples is essential and challenging in phosphoproteomics. We present the direct growth of an ultrathin YPO4 shell on the surface of polyacrylate-capped secondary Fe3O4 microspheres (PA-Fe3O4@YPO4) for the rapid and selective trapping of phosphopeptides from complex samples. The prepared PA-Fe3O4@YPO4 could be rapidly harvested in the presence of an applied magnetic field and easily redispersed in solution after removing the external magnet. The ultrathin YPO4 shell on the super-hydrophilic PA-Fe3O4 has the advantages of fast adsorption/desorption dynamics and low nonspecific adsorption; thus, trapping of phosphopeptides from the tryptic digest mixture of β-casein/BSA at a molar ratio of 1/300 was achieved within 20 s of adsorption/desorption. Two phosphopeptides could still be detected with a signal-to-noise ratio (S/N) over 3 when the amount of β-casein was as low as 8 fmol.

Signal transducers and activators of transcription (STAT) 1 and STAT3 are activated by overlapping but distinct sets of cytokines. STATs are recruited to the different cytokine receptors through their Src homology (SH) 2 domains that make highly specific interactions with phosphotyrosine-docking sites on the receptors. We used a degenerate phosphopeptide library synthesized on 35-μm TentaGel beads and fluorescence-activated bead sorting to determine the sequence specificity of the peptide-binding sites of the SH2 domains of STAT1 and STAT3. The large bead library allowed not only peptide sequencing of pools of beads but also of single beads. The method was validated through surface plasmon resonance measurements of the affinities of different peptides to the STAT SH2 domains. Furthermore, when selected peptides were attached to a truncated erythropoietin receptor and stably expressed in DA3 cells, activation of STAT1 or STAT3 could be achieved by stimulation with erythropoietin. The combined analysis of pool sequencing, the individual peptide sequences, and plasmon resonance measurements allowed the definition of SH2 domain binding motifs. STAT1 preferentially binds peptides with the motif phosphotyrosine-(aspartic acid/glutamic acid)-(proline/arginine)-(arginine/proline/glutamine), whereby a negatively charged amino acid at +1 excludes a proline at +2 and vice versa. STAT3 preferentially binds peptides with the motif phosphotyrosine-(basic or hydrophobic)-(proline or basic)-glutamine. For both STAT1 and STAT3, specific high affinity phosphopeptides were identified that can be used for the design of inhibitory molecules.

Global characterization and in-depth understanding of the phosphoproteome by mass spectrometry (MS) urgently require a highly efficient affinity probe for sample preparation. In this work, a ternary nanocomposite of magnetite/ceria-codecorated titanoniobate nanosheets (MC-TiNbNS) was synthesized by the electrostatic assembly of Fe3O4 nanospheres and in situ growth of CeO2 nanoparticles on pre-exfoliated titanoniobate, and was ultimately utilized as the probe and catalyst for the enrichment and dephosphorylation of phosphopeptides. The two-dimensional (2D) titanoniobate nanosheet not only promotes the capture of phosphopeptides through its enlarged surface area, but also functions as a substrate for anchoring the magnetic Fe3O4, enabling magnetic separation, and the phosphatase-mimicking CeO2, producing identifying signatures of phosphopeptides. Compared to single-component TiNbNS or CeO2 nanoparticles, the ternary nanocomposite provided direct evidence of the number of phosphorylation sites while maintaining enrichment efficiency. Moreover, by altering the on-sheet CeO2 coverage, the dephosphorylation activity could be fine-tuned, generating continuously adjustable signal intensities of both phosphopeptides and their dephosphorylated tags. Exhaustive detection of both mono- and multiphosphorylated peptides, with precise counting of their phosphorylation sites, was achieved in the primary mass spectra for digests of a standard phosphoprotein and skim milk, as well as a more complex biological sample, human serum. With the resulting highly informative mass spectra, this multifunctional probe can serve as a promising tool for the fast and comprehensive characterization of phosphopeptides in MS-based phosphoproteomics.

Aircraft dynamic analyses are demanding of computer simulation capabilities. The modeling complexities of semi-monocoque construction, irregular geometry, high-performance materials, and high-accuracy analysis are present. At issue are the safety of the passengers and the integrity of the structure for a wide variety of flight-operating and emergency conditions. The technology which supports engineering of aircraft structures using computer simulation is examined. Available computer support is briefly described and improvement of accuracy and efficiency are recommended. Improved accuracy of simulation will lead to a more economical structure. Improved efficiency will result in lowering development time and expense.

The tumour suppressor gene BRCA1 encodes a 220 kDa protein that participates in multiple cellular processes. The BRCA1 protein contains a tandem of two BRCT repeats at its carboxy-terminal region. The majority of disease-associated BRCA1 mutations affect this region, indicating a central role for the BRCT repeats in the BRCA1 tumour suppressor function. The BRCT repeats have been shown to mediate phospho-dependent protein-protein interactions; they recognize phosphorylated peptides using a recognition groove that spans both repeats. We previously identified an interaction between the tandem BRCA1 BRCT repeats and ACCA, which was disrupted by germ-line BRCA1 mutations that affect the BRCT repeats. We recently showed that BRCA1 modulates ACCA activity through its phospho-dependent binding to ACCA. To delineate the region of ACCA that is crucial for the regulation of its activity by BRCA1, we searched for potential phosphorylation sites in the ACCA sequence that might be recognized by the BRCA1 BRCT repeats. Using sequence analysis and structure modelling, we proposed the Ser1263 residue as the most favourable candidate among six residues for recognition by the BRCA1 BRCT repeats. Using experimental approaches, such as a GST pull-down assay with Bosc cells, we clearly showed that phosphorylation of Ser1263 alone was essential for the interaction of ACCA with the BRCT repeats. We finally demonstrated, by immunoprecipitation of ACCA in cells, that the whole BRCA1 protein interacts with ACCA when it is phosphorylated on Ser1263.

In a Digital Particle Image Velocimetry (DPIV) system, the correlation of digital images is normally used to acquire the displacement information of particles and to estimate the flow field. The accuracy and robustness of the correlation algorithm directly affect the validity of the analysis result. In this article, an improved correlation-analysis algorithm is proposed that optimizes the selection of the correlation window, analysis area, and search path. This algorithm not only greatly reduces the amount of calculation but also effectively improves the accuracy and reliability of the correlation analysis. The algorithm was demonstrated to be accurate and efficient in the measurement of the velocity field in a flocculation pool.
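The core correlation operation is easy to sketch. The example below uses standard FFT-based cross-correlation of two interrogation windows to recover a known particle-pattern displacement; it illustrates the baseline method, not the article's improved window/search-path selection.

```python
# Estimate particle displacement between two DPIV interrogation windows
# from the peak of their cross-correlation (standard FFT-based approach).
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))  # known displacement (3, -2)

# Cross-correlation computed as convolution with the reversed window
corr = fftconvolve(frame_b, frame_a[::-1, ::-1], mode="full")
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = peak[0] - (frame_a.shape[0] - 1), peak[1] - (frame_a.shape[1] - 1)
print(dy, dx)  # -> 3 -2 (edge wrap-around effects aside)
```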

Aim: This study aims to determine and compare the extent of inhibition of demineralization and promotion of remineralization of permanent molar enamel with and without the application of three remineralizing agents. Materials and Methods: Forty extracted permanent molars were randomly divided into two groups, 1 and 2, longitudinally sectioned into four, and divided into subgroups A, B, C, and D. The sections were coated with nail varnish, leaving a window of 3 mm × 3 mm. All sections of Group 1 were treated with their respective subgroup-specific agent: casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) paste for subgroup A, CPP-amorphous calcium phosphate fluoride (CPP-ACPF) paste for subgroup B, and CPP-ACPF varnish for subgroup C, while subgroup D served as a control. The sections were then subjected to demineralization for 12 days, after which lesion depth was measured under the stereomicroscope. All the sections of Group 2 were subjected to demineralization for 12 days and examined for lesion depth, then treated with their respective subgroup-specific agents and immersed in artificial saliva for 7 days. The sections were then examined again under the stereomicroscope to measure the lesion depth. Results: CPP-ACPF varnish caused significant inhibition of demineralization. All three agents showed significant remineralization of previously demineralized lesions. However, CPP-ACPF varnish showed the greatest remineralization, followed by CPP-ACPF paste and then CPP-ACP paste. Conclusion: This study shows that CPP-ACPF varnish is effective in preventing demineralization as well as promoting remineralization of enamel. Thus, it can be used as an effective preventive measure for pediatric patients, where compliance with the use of tooth mousse may be questionable.

As one of the most studied post-translational modifications (PTM), protein phosphorylation plays an essential role in almost all cellular processes. Current methods are able to predict and determine thousands of phosphorylation sites, whereas stoichiometric quantification of these sites is still challenging. Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS)-based targeted proteomics is emerging as a promising technique for site-specific quantification of protein phosphorylation using proteolytic peptides as surrogates of proteins. However, several issues may limit its application, one of which relates to the phosphopeptides with different phosphorylation sites and the same mass (i.e., isobaric phosphopeptides). While employment of site-specific product ions allows for these isobaric phosphopeptides to be distinguished and quantified, site-specific product ions are often absent or weak in tandem mass spectra. In this study, linear algebra algorithms were employed as an add-on to targeted proteomics to retrieve information on individual phosphopeptides from their common spectra. To achieve this simultaneous quantification, a LC-MS/MS-based targeted proteomics assay was first developed and validated for each phosphopeptide. Given the slope and intercept of calibration curves of phosphopeptides in each transition, linear algebraic equations were developed. Using a series of mock mixtures prepared with varying concentrations of each phosphopeptide, the reliability of the approach to quantify isobaric phosphopeptides containing multiple phosphorylation sites (≥ 2) was discussed. Finally, we applied this approach to determine the phosphorylation stoichiometry of heat shock protein 27 (HSP27) at Ser78 and Ser82 in breast cancer cells and tissue samples.
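The linear-algebra step lends itself to a small worked example. The sketch below assumes invented calibration slopes and intercepts for two isobaric phosphopeptides measured across two shared transitions, modeling each observed transition intensity as the sum of the per-peptide linear calibration responses; it illustrates the system-solving idea, not the validated assay from the study.

```python
# Recover individual concentrations of two co-eluting isobaric phosphopeptides
# from shared transition intensities by solving a small linear system.
import numpy as np

# Calibration model: intensity_ij = slope[i, j] * conc[j] + intercept[i, j]
# for transition i and phosphopeptide j (all values illustrative).
slope = np.array([[2.0, 0.4],
                  [0.3, 1.8]])
intercept = np.array([[0.1, 0.05],
                      [0.02, 0.2]])

observed = np.array([10.0, 8.0])                  # summed signal per transition
b = observed - intercept.sum(axis=1)              # subtract the summed intercepts
conc, *_ = np.linalg.lstsq(slope, b, rcond=None)  # least-squares solution
print(conc)  # estimated concentrations of phosphopeptides A and B
```

With more transitions than peptides the same least-squares formulation becomes overdetermined, which is what makes the approach robust when individual site-specific product ions are weak.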

Phosphoproteins and phosphopeptides were expressed in E. coli to give yields of 30-200 mg of purified protein per litre, with an average degree of phosphorylation at multiple sites of 61-83%. The method employed two compatible cohabiting plasmids of low and high copy number, expressing a protein kinase and, more abundantly, the substrate (poly)peptide, respectively. It was used to phosphorylate recombinant beta-casein or osteopontin at multiple casein kinase-2 sites. Two constructs were designed to produce shorter peptides containing one or more clusters of phosphorylation sites resembling the phosphate centres of caseins. In the first, a 53-residue 6-His tagged phosphopeptide was expressed at a 5-fold higher molar yield. The second had multiple tandem repeats of a tryptic phosphopeptide sequence, giving a similar increase in efficiency. Each recombinant phosphopeptide was purified (30-100 mg), and small-angle X-ray scattering measurements showed that they, like certain casein and osteopontin phosphopeptides, sequester amorphous calcium phosphate to form calcium phosphate nanoclusters. In principle, the method can provide novel phosphopeptides for the control of biocalcification, or be adapted for use with other kinases and cognate proteins or peptides to study the effect of specific phosphorylations on protein structure. Moreover, the insertion of a phosphate centre sequence, possibly with a linker peptide, may allow thermodynamically stable, biocompatible nanoparticles to be made from virtually any sequence.

The attachment of affinity proteins onto zirconium phosphonate coated glass slides was investigated by fusing a short phosphorylated peptide sequence at one extremity to enable selective bonding to the active surface via the formation of zirconium phosphate coordinate covalent bonds. In a model study, the binding of short peptides containing zero to four phosphorylated serine units and a biotin end-group was assessed by surface plasmon resonance-enhanced ellipsometry (SPREE) as well as in a microarray format using fluorescence detection of AlexaFluor 647-labeled streptavidin. Significant binding to the zirconated surface was only observed in the case of the phosphopeptides, with the best performance, as judged by streptavidin capture, observed for peptides with three or four phosphorylation sites and when spotted at pH 3. When fusing similar phosphopeptide tags to the affinity protein, the presence of four phosphate groups in the tag allows efficient immobilization of the proteins and efficient capture of their target.

Molar Incisor Hypomineralization (MIH) is characterized by a developmentally derived deficiency in enamel mineral. Affected teeth present demarcated enamel opacities, ranging from white to brown; hypoplasia can also be associated. Patients frequently report aesthetic discomfort if anterior teeth are involved, and this problem leads them to request a bleaching treatment to improve aesthetic conditions. Nevertheless, hydrogen peroxide can produce serious side effects resulting from further mineral loss. Microabrasion and/or a composite restoration are the treatments of choice in teeth with mild/moderate MIH, but they also entail enamel loss. Recently, a new remineralizing agent based on Casein Phosphopeptide-Amorphous Calcium Phosphate (CPP-ACP) has been proposed as effective in hypomineralized enamel, also improving aesthetic conditions. The present paper reports the case of a young man with white opacities on the incisors treated with a combined use of CPP-ACP mousse and hydrogen peroxide gel to correct the aesthetic defect. The patient was instructed to use CPP-ACP for two hours per day for three months in order to obtain enamel remineralization, followed by a combined use of CPP-ACP and the bleaching agent for a further two months. At the end of this five-month treatment, a noticeable aesthetic improvement of the opacities was observed.

Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities.
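To make the SEA side of the comparison concrete, the following sketch computes a one-sided hypergeometric enrichment p-value for a single GO term; all counts are invented, and IFA itself integrates several SEA/GSEA variants beyond this single test.

```python
# One-sided hypergeometric (Fisher-style) over-representation test for a
# GO term in a candidate gene list.
from scipy.stats import hypergeom

N = 20000   # genes in the background (universe)
K = 300     # background genes annotated with the GO term
n = 150     # candidate (e.g., differentially expressed) genes
k = 12      # candidate genes annotated with the GO term

# P(X >= k): probability of drawing at least k annotated genes by chance
p = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p:.3g}")
```

Repeating this test across all terms and correcting for multiple testing is the basic SEA recipe; GSEA instead ranks all genes and scores the distribution of each term's members along the ranking, which is why the two approaches can return complementary term lists.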

Using agile methods during the implementation of a system that meets mission-critical requirements can be a real challenge. Change in a system built of dozens or even hundreds of specialized devices with embedded software requires the cooperation of a large group of engineers. This article presents a solution that supports the parallel work of groups of system analysts and software developers. Applying formal rules to requirements written in natural language enables formal analysis of artifacts that bridge software and system requirements. The formalism and textual form of the requirements allowed the automatic generation of a message flow graph for the (sub)system, called the "big-picture model". Flow diagram analysis helped to avoid a large number of defects whose repair cost in extreme cases could undermine the legitimacy of agile methods in projects of this scale. Retrospectively, a reduction of technical debt was observed. Continuous analysis of the "big picture model" improves control of the quality parameters of the software architecture. The article also tries to explain why a commercial platform based on the UML modeling language may not be sufficient in projects of this complexity.
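
A hedged sketch of the kind of pipeline described above: requirements constrained to a formal textual rule are parsed into a message-flow graph whose structure can then be checked automatically. The requirement shape, component names and messages below are invented for illustration:

    import re

    # Invented formal requirement shape: "<Sender> sends <Message> to <Receiver>".
    requirements = [
        "GPS sends PositionReport to FusionServer",
        "Radar sends PlotData to FusionServer",
        "FusionServer sends TrackUpdate to Display",
    ]

    RULE = re.compile(r"^(\w+) sends (\w+) to (\w+)$")
    edges = []
    for req in requirements:
        match = RULE.match(req)
        if match is None:
            raise ValueError(f"requirement not in formal shape: {req!r}")
        edges.append(match.groups())          # (sender, message, receiver)

    # "Big picture": print the message-flow graph and flag pure sinks,
    # i.e. components that receive messages but never send any onward.
    senders = {s for s, _, _ in edges}
    for sender, message, receiver in edges:
        note = "" if receiver in senders else "   <- sink component"
        print(f"{sender} --{message}--> {receiver}{note}")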

TCR signaling critically depends on protein phosphorylation across many proteins. Localization of each phosphorylation event relative to the T-cell receptor (TCR) and canonical T-cell signaling proteins will provide clues about the structure of TCR signaling networks. Quantitative phosphoproteomic analysis by mass spectrometry provides a wide-scale view of cellular phosphorylation networks. However, analysis of phosphorylation by mass spectrometry is still challenging due to the relative low abundance of phosphorylated proteins relative to all proteins and the extraordinary diversity of phosphorylation sites across the proteome. Highly selective enrichment of phosphorylated peptides is essential to provide the most comprehensive view of the phosphoproteome. Optimization of phosphopeptide enrichment methods coupled with highly sensitive mass spectrometry workflows significantly improves the sequencing depth of the phosphoproteome to over 10,000 unique phosphorylation sites from complex cell lysates. Here we describe a step-by-step method for phosphoproteomic analysis that has achieved widespread success for identification of serine, threonine, and tyrosine phosphorylation. Reproducible quantification of relative phosphopeptide abundance is provided by intensity-based label-free quantitation. An ideal set of mass spectrometry analysis parameters is also provided that optimize the yield of identified sites. We also provide guidelines for the bioinformatic analysis of this type of data to assess the quality of the data and to comply with proteomic data reporting requirements.

In the computer networking world, the needs for security and proper systems of control are obvious, as is the need to find intruders who modify data. Nowadays, fraud in companies is committed not only by outsiders but also by insiders. An insider may perform an illegal activity and try to hide it. Companies would like to be assured that such illegal activity, i.e. tampering, has not occurred, or that if it does, it is quickly discovered. Mechanisms now exist that detect tampering of a database through the use of cryptographically strong hash functions. This paper contains a survey which explores the various approaches to database forensics through different methodologies, using forensic algorithms and tools for investigations. Forensic analysis algorithms are used to determine who tampered with the data, when, and what data were tampered with. The Tiled Bitmap Algorithm introduces the notion of a candidate set (all possible locations of detected tampering(s)) and provides a complete characterization of the candidate set and its cardinality. An improved tiled bitmap algorithm would overcome the drawbacks of the existing tiled bitmap algorithm.
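
The hash-based detection idea the survey builds on can be sketched in a few lines: each granule of audit data is hashed at notarization time, and later re-validation against the stored digests yields the candidate set of tampered granules. A minimal illustration (the data and granularity are invented):

    # Minimal sketch of hash-based tamper localization in an audit log, in the
    # spirit of the surveyed forensic algorithms: each granule of data is hashed,
    # and re-validation against previously notarized digests yields the set of
    # candidate tampered granules.
    import hashlib

    def digest(granule: bytes) -> str:
        return hashlib.sha256(granule).hexdigest()

    granules = [b"tx1;tx2", b"tx3;tx4", b"tx5;tx6"]
    notarized = [digest(g) for g in granules]   # stored off-site at notarization

    granules[1] = b"tx3;tx4-modified"           # simulated insider tampering

    candidates = [i for i, g in enumerate(granules) if digest(g) != notarized[i]]
    print("candidate tampered granules:", candidates)   # -> [1]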

Transport of penicillin intermediates and penicillin secretion are still poorly characterized in Penicillium chrysogenum (re-identified as Penicillium rubens). Calcium (Ca(2+)) plays an important role in the metabolism of filamentous fungi, and casein phosphopeptides (CPP) are involved in Ca(2+) internalization. In this study we observe that the effect of CaCl2 and CPP is additive and promotes an increase in penicillin production of up to 10-12 fold. The combination of CaCl2 and CPP greatly promotes expression of the three penicillin biosynthetic genes. Comparative proteomic analysis by 2D-DIGE identified 39 proteins differentially represented in P. chrysogenum Wisconsin 54-1255 after CPP/CaCl2 addition. The most interesting group of overrepresented proteins comprised a peroxisomal catalase, three proteins of the methylcitrate cycle, two aminotransferases and cystathionine β-synthase, which are directly or indirectly related to the formation of penicillin amino acid precursors. Importantly, two of the enzymes of the penicillin pathway (isopenicillin N synthase and isopenicillin N acyltransferase) are clearly induced after CPP/CaCl2 addition. Most of these overrepresented proteins are either authentic peroxisomal proteins or microbody-associated proteins. This evidence suggests that addition of CPP/CaCl2 promotes the formation of penicillin precursors and the penicillin biosynthetic enzymes in peroxisomes and vesicles, which may be involved in transport and secretion of penicillin.

Aims: The aim of this study was to evaluate and compare the remineralization potential of bioactive glass (BAG; Novamin®/calcium sodium phosphosilicate) and casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) containing dentifrices. Materials and Methods: A total of 30 sound human premolars were decoronated, coated with nail varnish except for a 4 mm × 4 mm window on the buccal surface of the crown, and randomly divided into two groups (n = 15): Group A — BAG dentifrice and Group B — CPP-ACP dentifrice. The baseline surface microhardness (SMH) was measured for all specimens using a Vickers microhardness testing machine. Artificial enamel carious lesions were created by immersing the specimens in demineralizing solution for 96 h, after which the SMH of the demineralized specimens was evaluated. A 10-day pH-cycling regimen was carried out, and the SMH of the remineralized specimens was then evaluated. Statistical Analysis: Data were analyzed using ANOVA, and multiple comparisons within groups were made using the Bonferroni method (post-hoc tests) to detect significant differences at P < 0.05. BAG showed greater remineralization of the artificial carious lesion when compared with CPP-ACP. PMID:24554851

Sport organizations exist to perform tasks that can only be executed through cooperative effort, and sport management is responsible for the performance and success of these organizations. The main aim of the paper is to analyze several issues in the management of sports organizations in order to assess their quality of management. To this end, a questionnaire was designed for performing a survey analysis through a statistical approach. The investigation was conducted over a period of 3 months, during which a number of football managers and coaches working in football clubs in the counties of Timis and Arad, at the level of training for children and juniors, were questioned. The results suggest that there is significant interest in the improvement of management across children's teams and under-21 clubs, with emphasis on players' participation and rewarding performance. Furthermore, we can state that in the sports clubs a vision and a mission are established, as well as general club objectives referring to both sporting and financial performance.

Symplekin (Pta1 in yeast) is a scaffold in the large protein complex that is required for 3'-end cleavage and polyadenylation of eukaryotic messenger RNA precursors (pre-mRNAs); it also participates in transcription initiation and termination by RNA polymerase II (Pol II). Symplekin mediates interactions between many different proteins in this machinery, although the molecular basis for its function is not known. Here we report the crystal structure at 2.4 Å resolution of the amino-terminal domain (residues 30-340) of human symplekin in a ternary complex with the Pol II carboxy-terminal domain (CTD) Ser5 phosphatase Ssu72 and a CTD Ser5 phosphopeptide. The N-terminal domain of symplekin has the ARM or HEAT fold, with seven pairs of antiparallel α-helices arranged in the shape of an arc. The structure of Ssu72 has some similarity to that of low-molecular-mass phosphotyrosine protein phosphatase, although Ssu72 has a unique active-site landscape as well as extra structural features at the C terminus that are important for interaction with symplekin. Ssu72 is bound to the concave face of symplekin, and engineered mutations in this interface can abolish interactions between the two proteins. The CTD peptide is bound in the active site of Ssu72, with the pSer5-Pro6 peptide bond in the cis configuration, which contrasts with all other known CTD peptide conformations. Although the active site of Ssu72 is about 25 Å from the interface with symplekin, we found that the symplekin N-terminal domain stimulates Ssu72 CTD phosphatase activity in vitro. Furthermore, the N-terminal domain of symplekin inhibits polyadenylation in vitro, but only when coupled to transcription. Because catalytically active Ssu72 overcomes this inhibition, our results show a role for mammalian Ssu72 in transcription-coupled pre-mRNA 3'-end processing.

…the final state of the G transform is mapped with a transform to a 256-bit digest. In this paper, we present some improved as well as new analytical results on Fugue-256 (with length padding). First we improve Aumasson and Phan's integral distinguisher on the 5.5 rounds of the G transform to 16.5 rounds, thus showing weak diffusion in the G transform. Next we improve the designers' meet-in-the-middle preimage attack on Fugue-256 from 2^480 time and memory to 2^416. Next we study the security of Fugue-256 against free-start distinguishers and free-start collisions. In this direction, we use an improved variant of the differential…

The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component, such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely, the inlet conditions of the third component. The computations follow similarly for the rest of the system, back to the first component.
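
The marching procedure described above is essentially a fixed-point iteration around the loop: guess the state at the first component's inlet, propagate component by component, and iterate until the loop closes. A minimal sketch, with invented linear component models standing in for manufacturer data:

    # Hedged sketch of the component-marching procedure: the outlet of each
    # component model is the inlet of the next, and the loop is iterated with
    # relaxation until the computed state reproduces the guessed inlet state.
    def pump(t):       return t + 1.0     # mechanical pump adds a little heat
    def evaporator(t): return t + 6.5     # evaporator absorbs heat
    def radiator(t):   return 0.7 * t     # rejection grows with inlet temperature

    loop = [pump, evaporator, radiator]

    t_guess = 20.0                         # estimated inlet temperature, deg C
    for iteration in range(200):
        t = t_guess
        for component in loop:
            t = component(t)               # outlet of one = inlet of the next
        if abs(t - t_guess) < 1e-6:        # loop closure: inlet reproduced
            break
        t_guess = 0.5 * (t_guess + t)      # relax toward the closed solution
    print(f"converged inlet temperature: {t_guess:.3f} C "
          f"after {iteration + 1} iterations")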

Aim: The aim of this study was to investigate the remineralization potential of casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) on enamel eroded by cola drinks. Subjects and Methods: A total of 30 healthy subjects were selected from a random sample of 1200 children and divided into two groups of 15 each, wherein calcium and phosphorus analyses and scanning electron microscope (SEM) analysis were carried out to investigate the remineralization of the enamel surface. A total of 30 non-carious premolar teeth were selected from the human tooth bank (HTB) to prepare the in-situ appliance. Three enamel slabs were prepared from these. One enamel slab was used to obtain baseline values and the other two were embedded into the upper palatal appliances prepared on the subjects' maxillary working models. The subjects wore the appliance, after which a 30 ml cola drink exposure was given. After 15 days, the slabs were removed and subjected to the respective analyses. Statistical Analysis Used: Means of all readings of soluble calcium and phosphorus levels at baseline, post-cola-drink exposure and post-CPP-ACP application were subjected to statistical analysis using SPSS version 11.5. Comparisons within and between groups were carried out using ANOVA and F-values at the 1% level of significance. Results: A decrease in the calcium solubility of enamel was seen in the CPP-ACP application group as compared to the post-cola-drink exposure group (P < 0.05). A distinctive change in the surface topography of enamel in the post-CPP-ACP application group as compared to the post-cola-drink exposure group was observed. Conclusion: CPP-ACP significantly promoted remineralization of enamel eroded by cola drinks, as revealed by significant morphological changes seen under SEM magnification and by spectrophotometric analyses.

The modified version of the Yahalom protocol improved by Burrows, Abadi, and Needham (BAN) still has security drawbacks. This study analyzed such flaws in a detailed way from the standpoint of strand spaces, a novel method of analyzing a protocol's security. First, a mathematical model of the BAN-Yahalom protocol is constructed. Second, penetrators' abilities are restricted with a rigorous and formalized definition. Moreover, to increase the security of this protocol against potential attackers in practice, a further improvement is made to the protocol. Future application of this re-improved protocol is also discussed.

New monolithic capillary columns with embedded commercial hydroxyapatite nanoparticles have been developed and used for protein separation and selective enrichment of phosphopeptides. The rod-shaped hydroxyapatite nanoparticles were incorporated into the poly(2-hydroxyethyl methacrylate-co-ethylene dimethacrylate) monolith by simply admixing them in the polymerization mixture followed by in situ polymerization. The effect of percentages of monomers and hydroxyapatite nanoparticles in the polymerization mixture on the performance of the monolithic column was explored in detail. We found that the loading capacity of the monolith is on par with other hydroxyapatite separation media. However, the speed at which these columns can be used is higher due to the fast mass transport. The function of the monolithic columns was demonstrated with the separations of a model mixture of proteins including ovalbumin, myoglobin, lysozyme, and cytochrome c as well as a monoclonal antibody and its aggregates with protein A. Selective enrichment and MALDI/MS characterization of phosphopeptides fished-out from complex peptide mixtures of ovalbumin, α-casein, and β-casein digests were also achieved using the hydroxyapatite monolith.

The potential impact of behavior analysis is limited by the public's dim awareness of the field. The mass media rarely cover behavior analysis, other than to echo inaccurate negative stereotypes about control and punishment. The media instead play up appealing but less-evidence-based approaches to problems, a key example being the touting of dubious diets over behavioral approaches to losing excess weight. These sorts of claims distort or skirt scientific evidence, undercutting the fidelity of behavior analysis to scientific rigor. Strategies for better connecting behavior analysis with the public might include reframing the field's techniques and principles in friendlier, more resonant form; pushing direct outcome comparisons between behavior analysis and its rivals in simple terms; and playing up the "warm and fuzzy" side of behavior analysis.

Several mass spectrometry-based assays have emerged for the quantitative profiling of cellular tyrosine phosphorylation. Ideally, these methods should reveal the exact sites of tyrosine phosphorylation, be quantitative, and not be cost-prohibitive. The latter is often an issue as typically several milligrams of (stable isotope-labeled) starting protein material are required to enable the detection of low abundance phosphotyrosine peptides. Here, we adopted and refined a peptide-centric immunoaffinity purification approach for the quantitative analysis of tyrosine phosphorylation by combining it with a cost-effective stable isotope dimethyl labeling method. We were able to identify by mass spectrometry, using just two LC-MS/MS runs, more than 1100 unique non-redundant phosphopeptides in HeLa cells from about 4 mg of starting material without requiring any further affinity enrichment, as close to 80% of the identified peptides were tyrosine phosphorylated peptides. Stable isotope dimethyl labeling could be incorporated prior to the immunoaffinity purification, even for the large quantities (mg) of peptide material used, enabling the quantification of differences in tyrosine phosphorylation upon pervanadate treatment or epidermal growth factor stimulation. Analysis of the epidermal growth factor-stimulated HeLa cells, a frequently used model system for tyrosine phosphorylation, resulted in the quantification of 73 regulated unique phosphotyrosine peptides. The quantitative data were found to be exceptionally consistent with the literature, evidencing that such a targeted quantitative phosphoproteomics approach can provide reproducible results. In general, the combination of immunoaffinity purification of tyrosine phosphorylated peptides with large scale stable isotope dimethyl labeling provides a cost-effective approach that can alleviate variation in sample preparation and analysis as samples can be combined early on. Using this approach, a rather complete qualitative…
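
The dimethyl labeling arithmetic underlying this approach is straightforward: each free amine (the peptide N terminus plus every lysine side chain) carries one label, and the light/heavy mass spacing separates the two samples in MS1 so their intensities can be ratioed. A sketch with an invented peptide and intensities (the label masses are the standard light and heavy dimethyl values):

    # Sketch of stable isotope dimethyl labeling mass arithmetic. The peptide,
    # its unlabeled mass and the intensities are invented for illustration.
    LIGHT = 28.0313    # +(CH3)2 from CH2O / NaBH3CN, per labeled amine
    HEAVY = 36.0757    # from 13CD2O / NaBD3CN, per labeled amine

    def label_count(sequence: str) -> int:
        return 1 + sequence.count("K")     # N terminus + lysine side chains

    peptide, mono_mass = "SAMPLEK", 1234.5678   # hypothetical unlabeled mass
    n = label_count(peptide)
    print(f"light form: {mono_mass + n * LIGHT:.4f} Da")
    print(f"heavy form: {mono_mass + n * HEAVY:.4f} Da")

    # Relative quantification: ratio of MS1 intensities of the two forms.
    light_intensity, heavy_intensity = 3.2e6, 1.1e6   # hypothetical intensities
    print(f"stimulated/control ratio: {heavy_intensity / light_intensity:.2f}")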

…and evaluated. A new spectrum analysis system designed to detect moving targets is presented. Comparison is made of the detection capabilities of all four noise radar systems in the presence of extraneous noise.

The stability of any structure is possible only if its foundation is appropriately designed. Bandar Abbas is the largest and most important port of Iran; given the high seismicity and strong earthquakes occurring in this territory, the soil mechanical properties of different parts of the city were selected as the subject of the current research. Data relating to the design of foundations for the improvement of structures at different layers of subsoil have been collected and, accordingly, soil mechanical properties have been evaluated. The results of laboratory experiments can be used for evaluation of the geotechnical characteristics of the urban area for developing a region with a high level of structural stability. Ultimately, a new method for calculation of liquefaction force is suggested. It is applicable for improving geotechnical and structural codes and also for reanalysis of the structural stability of previously constructed buildings.

Item analysis was used to find out which biochemical explanations need to be improved in biochemistry teaching, not which items should be discarded, improved, or reused in biochemistry examinations. The analysis revealed basic facts about which less able students had more misunderstandings than able students. Identifying these basic facts helps…

Abstract Background Synthetic peptides have played a useful role in studies of protein kinase substrates and interaction domains. Synthetic peptide arrays and libraries, in particular, have accelerated the process. Several factors have hindered or limited the applicability of various techniques, such as the need for deconvolution of combinatorial libraries, the inability or impracticality of achieving full automation using two-dimensional or pin solid phases, the lack of convenient interfacing with standard analytical platforms, or the difficulty of compartmentalization of a planar surface when contact between assay components needs to be avoided. This paper describes a process for synthesis of peptides and phosphopeptides on microtiter plate wells that overcomes previous limitations and demonstrates utility in determination of the epitope of an autophosphorylation site phospho-motif antibody and utility in substrate utilization assays of the protein tyrosine kinase p60c-src. Results The overall reproducibility of phosphopeptide synthesis and multiplexed EGF receptor (EGFR) autophosphorylation site (pY1173) antibody ELISA (9H2) was within 5.5 to 8.0%. Mass spectrometric analyses of the released (phospho)peptides showed homogeneous peaks of the expected molecular weights. An overlapping peptide array of the complete EGFR cytoplasmic sequence revealed a high redundancy of 9H2 reactive sites. The eight reactive phosphopeptides were structurally related and, interestingly, the most conserved antibody-reactive peptide motif coincided with a subset of other known EGFR autophosphorylation and SH2 binding motifs and an EGFR optimal substrate motif. Finally, peptides based on known substrate specificities of c-Src and related enzymes were synthesized in microtiter plate array format and were phosphorylated by c-Src with the predicted specificities. The level of phosphorylation was proportional to c-Src concentration, with sensitivities below 0.1 Units of…

Background: Casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) is applied for remineralization of early caries lesions or tooth sensitivity conditions and may affect subsequent resin bonding. This in vitro study investigated the effect of CPP-ACP on the shear bond strength of dental adhesives to enamel. Materials and Methods: Sixty extracted human molar teeth were selected and randomly divided into three groups and six subgroups. Buccal or lingual surfaces of the teeth were prepared to create a flat enamel surface. The adhesives used were Tetric N-Bond, AdheSE and AdheSE One F. In three subgroups, before applying the adhesives, enamel surfaces were treated with Tooth Mousse CPP-ACP for one hour, rinsed, and stored at 37°C and 100% humidity. This procedure was repeated for 5 days; the adhesives were then applied and Tetric N-Ceram composite was adhered to the enamel. This procedure was also carried out for the other three subgroups, without CPP-ACP treatment. After 24 hours of water storage, samples were subjected to a shear bond strength test in a universal testing machine. Failure modes were determined by stereomicroscope. Data were analyzed by t-test and one-way analysis of variance, with P < 0.05 considered statistically significant. In the CPP-ACP-treated subgroups, differences among the adhesives were not statistically significant (P > 0.05). In the non-applied CPP-ACP subgroups, there were statistically significant differences among all subgroups: Tetric N-Bond had the highest and AdheSE One F the lowest shear bond strength. Conclusion: CPP-ACP application reduces the shear bond strength of AdheSE and AdheSE One F to enamel, but not that of Tetric N-Bond. PMID:25878683

Background: Development of white spot lesions on enamel is a significant and common problem during fixed orthodontic treatment, and various preventive methods have been suggested. The purpose of this study was to evaluate the preventive potential of MI Paste Plus, Er:YAG laser, and their combination against demineralization under similar in vitro conditions. Materials and Methods: In this experimental in vitro study, 60 extracted premolars were randomly allocated to four groups (n = 15): control, MI Paste Plus, laser, and MI + laser (MIL). The enamel surface of each group was treated with one of the above approaches before and during pH cycling for 12 days, through a daily procedure of demineralization and remineralization for 3 h and 20 h, respectively. Teeth were sectioned and evaluated quantitatively by cross-sectional microhardness testing at 20 μm intervals from the outer enamel surface toward the dentinoenamel junction, up to 160 μm, and the data were analyzed using one-way analysis of variance and the Tukey test. P < 0.05 was considered significant. Results: The MIL group had the least demineralization (P < 0.001). The control group (C group) had the greatest relative mineral loss; the laser group (L group) had 45% less mineral loss than the C group, and there was no significant difference between the MI Paste Plus and L groups (P = 0.154). Conclusion: Based on these results, the Er:YAG laser was able to decrease demineralization and is a potential alternative in preventive dentistry; it was more effective when combined with casein phosphopeptide-amorphous calcium phosphate products.

In-bore balloting motion simulation shows that a reduction in residual spin of about 5% results in a drastic 56% reduction in first maximum yaw; a correlation between first maximum yaw and residual spin is observed. The results of the data analysis are used in design modifications for existing ammunition. A number of designs were evaluated numerically before freezing five designs for further study. These designs were critically assessed in terms of their comparative performance during the in-bore travel and external ballistics phases. The results were validated by free-flight trials of the finalised design.

A 1-D transport code is used to examine the ignition of plasma in the improved confinement mode and the impact of profile effects on burning performance. Energy transport, He-ash particle transport and poloidal magnetic field transport equations are solved with a thermal diffusivity given by the current-diffusive ballooning mode model. The ratio of the thermal diffusivity to the He-ash diffusivity is introduced as a parameter and assumed to be constant. For a fixed current profile, the existence of an ignited state is shown. An internal transport barrier forms autonomously even if parameters lie at the L-mode boundary condition. It is found that the sensitivity of the ignition condition to the density is strong, and there is no margin of ignition at the density limit when the density profile is flat. However, if a peaked density profile is chosen, solutions which satisfy the density limit exist. The long-time sustainment of ignition is also shown by solving poloidal magnetic field transport simultaneously. It is shown that ignition is sustained on the burn-time scale; however, MHD stability should be considered on the time scale of current diffusion.

Phosphorylation/dephosphorylation is probably the most common and important reversible post-translational modification of proteins. Analyzing the functional effects of phosphorylation is helpful for understanding the biological functions of proteins, and identification of the phosphorylation sites of a phosphorylated protein is a prerequisite for research on phosphorylation. In this work, an effective and simple method for identification of protein phosphorylation sites has been developed. Phosphopeptides were selectively enriched with immobilized metal affinity chromatography (IMAC) and subsequently chemically modified with 4-sulfophenyl isothiocyanate, and the chemically modified phosphopeptides were then sequenced with post-source decay (PSD) matrix-assisted laser desorption/ionization (MALDI) time-of-flight mass spectrometry to detect the phosphorylation sites. Derivatization with 4-sulfophenyl isothiocyanate introduces a negatively charged sulfonic acid group at the N-terminus of a peptide and enables the selective detection of only a single series of C-terminal y-type ions. This chemically assisted method greatly simplifies the extremely complex pattern of PSD fragment ions and makes the PSD spectra much easier to interpret. The phosphorylation sites of a synthesized model phosphopeptide and of human c-myc protein have been successfully identified by this method.
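
The single y-ion series that the sulfonation strategy isolates can be predicted directly from residue masses, which is what makes the simplified PSD spectra readable. A sketch for a hypothetical phosphopeptide (lowercase "s" marking phospho-Ser; the masses are standard monoisotopic values):

    # Sketch of the y-ion ladder used to read a phosphorylation site: singly
    # protonated y ions for an invented phosphopeptide, with +79.96633 Da
    # (HPO3) added at the phosphorylated residue ("s" = phospho-Ser).
    RESIDUE = {"A": 71.03711, "S": 87.03203, "s": 87.03203 + 79.96633,
               "E": 129.04259, "L": 113.08406, "K": 128.09496, "G": 57.02146}
    H2O, PROTON = 18.010565, 1.007276

    def y_ions(peptide):
        """m/z of y1..yn at 1+ charge, built from the C terminus leftward."""
        ions, running = [], H2O + PROTON
        for residue in reversed(peptide):
            running += RESIDUE[residue]
            ions.append(running)
        return ions

    for i, mz in enumerate(y_ions("GAsELK"), start=1):
        print(f"y{i}: {mz:.4f}")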

Improving Research and Scientific Publications in Africa: Analysis of a ... factors that can guarantee career success in the field of biomedical science in Nigeria. ... in the article, c) Number of figures and presentation of data within the figures, ...

Single-crystalline hyperbranched nanostructures of iron hydroxyl phosphate Fe5(PO4)4(OH)3·2H2O (giniite) with orthorhombic phase were synthesized through a simple route. They have a well-defined dendrite fractal structure with a pronounced trunk and highly ordered branches. Toxicity testing shows that the hyperbranched nanostructures have good biocompatibility and a low toxicity level, giving them potential applications in the life sciences. The study herein demonstrated that the obtained hyperbranched giniite nanostructures show highly selective capture of phosphopeptides and could be used as a promising nanomaterial for the specific capture of phosphopeptides from complex tryptic digests, with detection by MALDI-TOF mass spectrometry.

This article describes how job analysis, a method commonly used in personnel research and organizational psychology, provides a systematic method for documenting program staffing and service delivery that can improve evaluators' knowledge about program operations. Job analysis data can be used to increase evaluators' insight into how staffs…

Aims and Objectives: To determine the demineralization inhibitory potential of fluoride varnish and casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) and to compare and evaluate the additive effect of fluoride varnish + CPP-ACP. Materials and Methods: Ten healthy premolar teeth that were extracted for orthodontic purposes were collected, and each tooth was longitudinally sectioned buccolingually and mesiodistally into four sections. The teeth were then assigned to four different tre...

Highlights: • A new IMAC material (Fe3O4@PD-Nb5+) was synthesized. • The strong magnetic behavior of the microspheres ensures fast and easy separation. • The enrichment ability was tested on human serum and nonfat milk. • The results were compared with other IMAC materials, including commercial kits. • All results proved the good enrichment ability, especially for multiphosphopeptides. - Abstract: Rapid and selective enrichment of phosphopeptides from complex biological samples is essential and challenging in phosphoproteomics. In this work, for the first time, niobium ions were directly immobilized on the surface of polydopamine-coated magnetic microspheres through a facile and effective synthetic route. The Fe3O4@polydopamine-Nb5+ (denoted as Fe3O4@PD-Nb5+) microspheres possess the merits of high hydrophilicity and good biological compatibility, and demonstrated a low limit of detection (2 fmol). The selectivity was also satisfactory (β-casein:BSA = 1:500) for capturing phosphopeptides. They were also successfully applied to the enrichment of phosphopeptides from real biological samples such as human serum and nonfat milk. Compared with Fe3O4@PD-Ti4+ microspheres, the Fe3O4@PD-Nb5+ microspheres exhibit superior selectivity toward multi-phosphorylated peptides, and thus may be complementary to conventional IMAC materials.

The CONTINUE feature in transient analysis as implemented in the standard release of COSMIC/NASTRAN has inherent errors associated with it. As a consequence, the results obtained by a CONTINUEd restart run do not, in general, match the results that would be obtained in a single run without the CONTINUE feature. These inherent errors were eliminated by improvements to the restart logic that were developed by RPK Corporation and that are available on all RPK-supported versions of COSMIC/NASTRAN. These improvements ensure that the results of a CONTINUEd transient analysis run are the same as those of a non-CONTINUEd run. In addition, the CONTINUE feature was extended to transient analysis involving uncoupled modal equations. The improvements and enhancement were illustrated by examples.

Deoxynivalenol (DON) is a widespread trichothecene mycotoxin that commonly contaminates cereal crops and has various toxic effects in animals and humans. DON primarily targets the gastrointestinal tract, the first barrier against ingested food contaminants. In this study, an isobaric tag for relative and absolute quantitation (iTRAQ)-based phosphoproteomic approach was employed to elucidate the molecular mechanisms underlying DON-mediated intestinal toxicity in porcine epithelial cells (IPEC-J2) exposed to 20 μM DON for 60 min. There were 4153 unique phosphopeptides, representing 389 phosphorylation sites, detected in 1821 phosphoproteins. We found that 289 phosphopeptides corresponding to 255 phosphoproteins were differentially phosphorylated in response to DON. Comprehensive Gene Ontology (GO) analysis combined with Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment revealed that, in addition to previously well-characterized mitogen-activated protein kinase (MAPK) signaling, DON exposure altered phosphatidylinositol 3-kinase/Akt (PI3K/Akt) and Janus kinase/signal transducer and activator of transcription (JAK/STAT) pathways. These pathways are involved in a wide range of biological processes, including apoptosis, the intestinal barrier, intestinal inflammation, and the intestinal absorption of glucose. DON-induced changes are likely to contribute to intestinal dysfunction. Overall, identification of the relevant signaling pathways yielded new insights into the molecular mechanisms underlying DON-induced intestinal toxicity, and might help in the development of improved mechanism-based risk assessments in animals and humans.

A runtime analysis of the Simple Genetic Algorithm (SGA) for the OneMax problem has recently been presented, proving that the algorithm with population size μ ≤ n^(1/8−ε) requires exponential time with overwhelming probability. This paper presents an improved analysis which overcomes some limitations… this is a major improvement towards the reusability of the techniques in future systematic analyses of GAs. Finally, we consider the more natural SGA using selection with replacement rather than without replacement, although the results hold for both algorithmic versions. Experiments are presented to explore…
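
For readers unfamiliar with the algorithm under analysis, a minimal sketch of the SGA on OneMax as described above, with fitness-proportional selection with replacement, one-point crossover and standard bit-flip mutation (the population size and other parameters are illustrative, not those used in the proofs):

    # Minimal sketch of the Simple Genetic Algorithm on OneMax (maximise the
    # number of ones). Parameters are illustrative only.
    import random

    n, mu, generations = 32, 20, 200

    def onemax(x):
        return sum(x)

    def select(pop):
        # fitness-proportional (roulette-wheel) selection with replacement;
        # +1 avoids zero total weight for an all-zero population
        return random.choices(pop, weights=[onemax(x) + 1 for x in pop], k=2)

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    for gen in range(generations):
        nxt = []
        while len(nxt) < mu:
            a, b = select(pop)
            cut = random.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < 1 / n)  # bit-flip mutation, rate 1/n
                     for bit in child]
            nxt.append(child)
        pop = nxt
    print("best fitness:", max(onemax(x) for x in pop), "of", n)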

The employment of actively controlled segmented mirror architectures has become increasingly common in the development of current astronomical telescopes. Optomechanical analysis of such hardware presents unique issues compared to that of monolithic mirror designs. The work presented here is a review of current capabilities and improvements in the methodology of the analysis of mechanically induced surface deformation of such systems. The recent improvements include the capability to differentiate surface deformation at the array and segment levels. This differentiation, which allows surface deformation analysis at the individual segment level, offers useful insight into the mechanical behavior of the segments that is unavailable from analysis solely at the parent array level. In addition, the capability to characterize the full displacement-vector deformation of collections of points allows analysis of predicted mechanical disturbances of assembly interfaces relative to other assembly interfaces. This capability, called racking analysis, allows engineers to develop designs for segment-to-segment phasing performance in assembly integration, 0g release, and thermal stability of operation. The performance predicted by racking analysis has the advantage of being comparable to the measurements used in assembly of the hardware. Approaches to all of the above issues are presented and demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
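
The array-versus-segment differentiation can be illustrated by the standard decomposition step: fit and remove each segment's best-fit rigid-body plane (piston, tip, tilt) from the surface-normal displacements, leaving the segment-level deformation. A sketch with synthetic node data (SigFit's actual implementation is not shown here):

    # Sketch: separate segment-level deformation from rigid-body motion by
    # least-squares fitting a piston/tip/tilt plane to FEA surface-normal
    # displacements of one segment. Node data are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)   # node coordinates
    w = 2e-6 * x - 1e-6 * y + 5e-7 * (x**2 + y**2)            # normal displacement

    A = np.column_stack([np.ones_like(x), x, y])              # piston, tip, tilt
    coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
    residual = w - A @ coeffs                   # segment-level deformation only

    print("piston, tip, tilt:", coeffs)
    print(f"RMS before / after rigid-body removal: "
          f"{w.std():.3e} / {residual.std():.3e}")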

The aim of this study was to evaluate the efficacy of a tooth mousse containing 10% casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) in reducing tooth sensitivity associated with in-office vital tooth whitening. In-office tooth whitening was performed for 51 participants using 35% hydrogen peroxide gel in a single visit. After the procedure, each participant was randomly assigned to one of three groups: gel without desensitizing agent (n=17), gel with 2% sodium fluoride (n=17), or gel with 10% CPP-ACP (n=17). A small amount of the desensitizing gel assigned to each participant was applied directly on the labial surfaces of the teeth and left undisturbed for three minutes. The participants were asked to apply the gel assigned to them for three minutes twice daily after brushing their teeth, and to continue this for 14 days. The participants were asked to return for follow-up visits after 24 hours and on days 3, 7, and 14, at which time teeth shade changes were assessed by one evaluator using a value-oriented Vita classic shade guide. The incidence, duration, and intensity of tooth sensitivity experienced were self-assessed on a daily basis for the 14-day study period using a visual analog scale (VAS). The effect of the three gels on tooth sensitivity was assessed using one-way analysis of variance and a χ2 test (α=0.05). The general linear model was used to compare intensity-level differences in the three studied groups and for shade stability over the follow-up period. The results of this study showed that all three gels decreased the intensity of sensitivity associated with tooth whitening. The intensity of sensitivity was lower in the fluoride group than in the other two groups; however, the difference was not statistically significant (p=0.112 and p=0.532 on day 1 and day 2, respectively). The average shade change was 6.8. None of the tested materials affected the efficacy of tooth whitening, but the shade change in the fluoride group showed more color stability.

This study was done to evaluate the remineralizing potential of bioactive glasses (BAGs) and amorphous calcium phosphate-casein phosphopeptide (ACP-CPP) on early enamel lesions. Twenty freshly extracted mandibular premolars were sectioned sagittally. The buccal half was impregnated in acrylic resin blocks and treated with 37% phosphoric acid in liquid form to demineralize the enamel surface and simulate an early enamel lesion. The samples were divided into two groups: samples in Group I were treated with ACP-CPP (GC Tooth Mousse) and those in Group II with BAG (Sensodyne Repair and Protect), and all were stored in saliva to prevent dehydration. The samples were tested for microhardness. The data obtained were analyzed using ANOVA with post hoc multiple comparisons and the independent-sample t-test, and are presented as means and standard deviations. All samples showed a decrease in microhardness after demineralization. After application of the remineralizing agents, Group II showed a highly significant increase in microhardness (P < 0.05) after 10 days, while Group I showed a significant increase in microhardness after 15 days (P < 0.05). Both remineralizing agents tested in this study can be considered effective in repair and prevention of demineralization. BAG showed better results initially, but eventually both have similar remineralizing potential.

The 14-3-3 proteins are a class of eukaryotic acidic adapter proteins, with seven isoforms in humans. 14-3-3 proteins mediate their biological function by binding to target proteins and influencing their activity. They are involved in pivotal pathways in the cell such as signal transduction, gene expression, enzyme activation, cell division and apoptosis. The Yes-associated protein (YAP) is a WW-domain protein that exists in two transcript variants of 48 and 54 kDa in humans. By transducing signals from the cytoplasm to the nucleus, YAP is important for transcriptional regulation. In both variants, interaction with 14-3-3 proteins after phosphorylation of Ser127 is important for nucleocytoplasmic trafficking, via which the localization of YAP is controlled. In this study, 14-3-3σ has been cloned, purified and crystallized in complex with a phosphopeptide from the YAP 14-3-3-binding domain, which led to a crystal that diffracted to 1.15 Å resolution. The crystals belonged to space group C222(1), with unit-cell parameters a = 82.3, b = 112.1, c = 62.9 Å.

Introduction: To promote remineralization by an ionic exchange mechanism instead of invasive techniques, many remineralizing agents can be used. Objective: To evaluate the remineralization effects of casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) on white spot lesions (WSLs) and its inhibitory effect on Streptococcus mutans colonization. Materials and Methods: The study group consisted of 60 subjects exhibiting at least 1 WSL. Subjects were randomly divided into 2 groups: a test group using CPP-ACP cream (GC Tooth Mousse, Leuven, Belgium) and a control group using only fluoride-containing toothpaste, for a period of 3 months. Baseline WSLs were scored using a DIAGNOdent device (KaVo, Germany) and saliva samples were collected to measure S. mutans counts. After the 3-month period, the WSLs were again recorded and the saliva collection was repeated. Result: DIAGNOdent measurements increased over time (P = 0.002) in the control group, while no statistically significant difference (P = 0.217) was found in the test group over the 3-month period. In both groups, the mutans counts decreased over the 3-month experimental period. Conclusion: These clinical and laboratory results suggest that the CPP-ACP-containing cream had a slight remineralization effect on the WSLs over the 3-month evaluation period; however, longer observation is recommended to confirm whether the greater change in WSLs is maintained. PMID:23956538

Although in-office bleaching has been proven successful for whitening teeth, controversy exists regarding alterations in enamel morphology due to mineral loss, and regarding tooth sensitivity. This preliminary study aimed to evaluate the efficacy of a novel in-office tooth bleaching technique modified with a casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) paste (MI Paste; MI) and its effect on enamel morphology and tooth sensitivity. Three patients received a 35% hydrogen peroxide (Whiteness HP; HP) dental bleaching system. HP was prepared and applied on the teeth of one hemiarch, whilst teeth on the other hemiarch were bleached with a mixture of HP and MI. Tooth color, epoxy resin replicas, and sensitivity levels were evaluated in the upper incisors. The results were analyzed descriptively. Right and left incisors showed similar color change after bleaching. Incisors bleached with the mixture of HP and MI presented unaltered enamel surfaces and lower sensitivity levels. The tested tooth bleaching technique did not reduce the gel's effectiveness while decreasing hypersensitivity levels and protecting the enamel against surface alterations caused by the high-concentration bleaching peroxide tested. The concomitant use of MI Paste and high-concentration hydrogen peroxide might be a successful method for decreasing tooth sensitivity and limiting changes in enamel morphology during in-office bleaching.

We aimed to evaluate the efficacy of oral hygiene instruction, fluoride varnish and casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) for remineralizing white spot lesions (WSL), and the effect of these on the dmft index in primary teeth. In this 1-year randomized clinical trial, 140 children aged 12-36 months with WSL in the anterior maxillary teeth were selected and randomly divided into 4 groups of 35 children each. Group 1 (control) received no preventive intervention. In group 2, there was oral hygiene and dietary counseling. In group 3, there was oral hygiene and the application of fluoride varnish at 4, 8 and 12 months after baseline. In group 4, there was oral hygiene and tooth mousse applied by the parents twice a day over a 12-month period. At baseline and at 4, 8 and 12 months after the intervention, the size of WSL in millimeters and the dmft index were recorded. One hundred and twenty-two children completed the study. Data were analyzed using the repeated-measures ANOVA test. In group 1, the mean percent WSL area and dmft index values had increased significantly at 12 months after baseline (p < 0.05). Fluoride varnish applications or constant CPP-ACP use during the 12-month period reduced the size of WSL in the anterior primary teeth and caused only a small increase in dmft index values.

The appearance of lip wrinkles is problematic when it adversely influences lipstick make-up, causing incomplete color tone, pigment spreading, and pigment remnants. It is necessary to develop an objective assessment method for lip wrinkle status by which the potential of wrinkle-improving lip products can be screened. The present study aimed to find useful parameters from image analysis of lip wrinkles as they affect lipstick application. Digital photographic images of the lips before and after lipstick application were assessed in 20 female volunteers. Color tone was measured by Hue, Saturation and Intensity parameters, and time-related pigment spread was calculated as the area beyond the vermilion border using image-analysis software (Image-Pro). The efficacy of a wrinkle-improving lipstick containing asiaticoside was evaluated in 50 women using subjective and objective methods, including image analysis, in a double-blind placebo-controlled fashion. The color tone and spread phenomenon after lipstick make-up were remarkably affected by lip wrinkles. The standard deviation of the saturation value was revealed to be a good parameter for lip wrinkles. After use of the lipstick containing asiaticoside for 8 weeks, the change in visual grading scores and replica analysis indicated a wrinkle-improving effect. As the depth and number of wrinkles were reduced, the lipstick make-up appearance assessed by image analysis also improved significantly. The lip wrinkle pattern together with lipstick make-up can be evaluated by the image-analysis system in addition to traditional assessment methods. Thus, this evaluation system is expected to be useful for testing the efficacy of wrinkle-reducing lipsticks, which has not been described in previous dermatologic clinical studies.
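
The saturation-dispersion parameter identified above is simple to reproduce. A sketch, assuming the HSI definition of saturation and synthetic pixel data in place of a real lip-region image:

    # Sketch of the saturation-dispersion metric: convert the lip region to
    # HSI saturation, S = 1 - 3*min(R,G,B)/(R+G+B), and report its standard
    # deviation, the parameter that tracked wrinkle severity. Pixels are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    rgb = rng.uniform(0.2, 0.9, size=(64, 64, 3))   # stand-in lip-region pixels

    saturation = 1.0 - 3.0 * rgb.min(axis=2) / rgb.sum(axis=2)

    print(f"mean saturation: {saturation.mean():.3f}")
    print(f"saturation SD (wrinkle parameter): {saturation.std():.3f}")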

This study focuses on programmatic factors that predict retention for individuals in drug and alcohol treatment programs through secondary analysis of data from the National Treatment Improvement Evaluation Study (NTIES). It addresses the relationships between completion rates, lengths of stay, and treatment modality. It examines the effect of…

After investigation of the new core arrangement for the JMTR reactor intended to enhance fuel burn-up and consequently extend the operation period, the "improved LEU core", which utilized 2 additional fuel elements in place of formerly installed reflector elements, was adopted. This report describes the results of the thermal-hydraulic analysis of the improved LEU core as part of the safety analysis for licensing. The analysis covers steady state, abnormal operational transients and accidents, which were described in the annexes of the licensing documents as design-basis events. Calculation conditions for the computer codes were conservatively determined based on the neutronic analysis results and others. The results of the analysis, which revealed that the safety criteria were satisfied for fuel temperature, DNBR and primary coolant temperature, were used in the licensing. The operation license for the JMTR with the improved LEU core was granted in March 2001, and reactor operation with the new core started in November 2001 as the 142nd operation cycle.

Large-scale phosphoproteomic analysis employing liquid chromatography-tandem mass spectrometry (LC-MS/MS) often requires a significant amount of manual manipulation of phosphopeptide datasets in the post-acquisition phase. To assist in this process, we have created software, PhosphoPIC (PhosphoPeptide Identification and Compilation), which can perform a variety of useful functions including automated selection and compilation of phosphopeptide identifications from multiple MS levels, estimation of dataset false discovery rate, and application of appropriate cross-correlation (XCorr) filters. In addition, the output files generated by this program are compatible with downstream phosphorylation site assignment using the Ascore algorithm, as well as phosphopeptide quantification via QUOIL. In this report, we utilized this software to analyze phosphoproteins from short-term vasopressin-treated rat kidney inner medullary collecting duct (IMCD). A total of 925 phosphopeptides representing 173 unique proteins were identified from membrane-enriched fractions of IMCD with a false discovery rate of 1.5%. Of these proteins, 106 were found only in the membrane-enriched fraction of IMCD cells and not in whole IMCD cell lysates. These identifications included a number of well-studied ion and solute transporters including ClC-1, LAT4, MCT2, NBC3, and NHE1, all of which contained novel phosphorylation sites. Using a label-free quantification approach, we identified phosphoproteins that changed in abundance with vasopressin exposure including aquaporin-2 (AQP2), Hnrpa3, IP3 receptor 3, and pur-beta.
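
The dataset false discovery rate and XCorr filtering mentioned above follow, in the generic case, the standard target-decoy arithmetic; PhosphoPIC's internals are not shown in the abstract, so the sketch below illustrates only that general calculation, with invented scores:

    # Generic target-decoy FDR estimation and XCorr threshold selection.
    # Scores are invented; FDR is approximated as decoy hits / target hits
    # above the score cutoff.
    def fdr_at(threshold, targets, decoys):
        t = sum(s >= threshold for s in targets)
        d = sum(s >= threshold for s in decoys)
        return (d / t) if t else 0.0

    target_xcorr = [3.9, 3.4, 3.1, 2.8, 2.6, 2.2, 1.9, 1.7]  # real-database hits
    decoy_xcorr = [2.1, 1.8, 1.6, 1.3]                       # reversed-database hits

    # Find the lowest XCorr cutoff keeping the estimated FDR under 1.5%.
    for cutoff in sorted(target_xcorr, reverse=True):
        if fdr_at(cutoff, target_xcorr, decoy_xcorr) <= 0.015:
            best = cutoff
    print(f"XCorr cutoff {best}: "
          f"FDR = {fdr_at(best, target_xcorr, decoy_xcorr):.3f}")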

Background. Timely pulmonary function testing is crucial to improving diagnosis and treatment of pulmonary diseases. Perceptions of poor access at an academic pulmonary function laboratory prompted analysis of system demand and capacity to identify factors contributing to poor access. Methods. Surveys and interviews identified stakeholder perspectives on operational processes and access challenges. Retrospective data on testing demand and resource capacity was analyzed to understand utilization of testing resources. Results. Qualitative analysis demonstrated that stakeholder groups had discrepant views on access and capacity in the laboratory. Mean daily resource utilization was 0.64 (SD 0.15), with monthly average utilization consistently less than 0.75. Reserved testing slots for subspecialty clinics were poorly utilized, leaving many testing slots unfilled. When subspecialty demand exceeded number of reserved slots, there was sufficient capacity in the pulmonary function schedule to accommodate added demand. Findings were shared with stakeholders and influenced scheduling process improvements. Conclusion. This study highlights the importance of operational data to identify causes of poor access, guide system decision-making, and determine effects of improvement initiatives in a variety of healthcare settings. Importantly, simple operational analysis can help to improve efficiency of health systems with little or no added financial investment.

Context: Demineralization of the tooth by erosion is caused by frequent contact between the tooth surface and acids present in soft drinks. Aim: The objective of the present study was to evaluate the remineralization potential of casein-phosphopeptide-amorphous calcium phosphate (CPP-ACP) paste, 1.23% acidulated phosphate fluoride (APF) gel, and an iron supplement on dental erosion by soft drinks in human primary and permanent enamel using atomic force microscopy (AFM). Materials and Methods: Specimens were made from 15 extracted primary and 15 permanent teeth, which were randomly divided into three treatment groups: CPP-ACP paste, APF gel, and iron supplement. AFM was used for baseline readings, followed by a demineralization and remineralization cycle. Results and Statistics: Almost all groups of samples showed remineralization, that is, a reduction in surface roughness, which was higher with CPP-ACP paste. Statistical analysis was performed using one-way ANOVA and the Mann-Whitney U-test with P < 0.05. Conclusions: It can be concluded that the application of CPP-ACP paste is effective in preventing dental erosion from soft drinks.

Software process improvement aims at improving the development process of software systems. It is initiated by process assessment identifying strengths and weaknesses, and based on the findings, improvement plans are developed. In general, a process reference model (e.g., CMMI) is used throughout the process of software process improvement as the base. CMMI defines a set of process areas involved in software development and what is to be carried out in these process areas in terms of goals and practices. Process areas and their elements (goals and practices) are often correlated due to the iterative nature of the software development process. However, in current practice, correlations of process elements are often overlooked in the development of an improvement plan, which diminishes the efficiency of the plan. This is mainly attributed to the significant effort required and the lack of required expertise. In this paper, we present a process correlation analysis model that helps identify correlations of process elements from the results of process assessment. This model is defined based on CMMI and empirical data of improvement practices. We evaluate the model using industrial data.

Abundant neutral losses of 98 Da are often observed upon ion trap CID-MS/MS of protonated phosphopeptide ions. Two competing fragmentation pathways are involved in this process, namely, the direct loss of H3PO4 from the phosphorylated residue and the combined losses of HPO3 and H2O from the phosphorylation site and from an additional site within the peptide, respectively. These competing pathways produce product ions with different structures but the same m/z values, potentially limiting the utility of CID-MS3 for phosphorylation site localization. To quantify the relative contributions of these pathways and to determine the conditions under which each pathway predominates, we have examined the ion trap CID-MS/MS fragmentation of a series of regioselective 18O-phosphate ester labeled phosphopeptides prepared using novel solution-phase amino acid synthesis and solid-phase peptide synthesis methodologies. By comparing the intensity of the -100 Da (-H3PO3 18O) versus -98 Da (-[HPO3 + H2O]) neutral loss product ions formed upon MS/MS, quantification of the two pathways was achieved. Factors that affect the extent of formation of the competing neutral losses were investigated, with the combined loss pathway predominantly occurring under conditions of limited proton mobility, and with increased combined losses observed for phosphothreonine compared with phosphoserine-containing peptides. The combined loss pathway was found to be less dominant under ion activation conditions associated with HCD-MS/MS. Finally, the contribution of carboxylic acid functional groups and backbone amide bonds to the water loss in the combined loss fragmentation pathway was determined via methyl esterification and by examination of a phosphopeptide lacking side-chain hydroxyl groups.
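
Given the extracted intensities of the two neutral-loss product ions from a labeled peptide, quantifying the relative pathway contributions reduces to a simple ratio. A minimal arithmetic sketch, with hypothetical intensity values:

    # Hypothetical intensities of the two neutral-loss product ions from an
    # 18O-labeled phosphopeptide (arbitrary units; values are assumptions).
    i_direct = 4.2e5    # -100 Da ion: direct loss of H3PO4 carrying the 18O label
    i_combined = 1.8e5  # -98 Da ion: combined loss of HPO3 + H2O
    total = i_direct + i_combined
    print(f"direct pathway:   {i_direct / total:.1%}")    # 70.0%
    print(f"combined pathway: {i_combined / total:.1%}")  # 30.0%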

Background: Bleaching can affect the mechanical properties of the enamel-dentin complex, such as flexural strength. Casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) is often used following bleaching treatment to reduce hypersensitivity and to reduce demineralization of the tooth. Purpose: The purpose of the study was to investigate the effect of CPP-ACP on the flexural strength of the enamel-dentin complex following extracoronal bleaching. Methods: Forty-eight enamel-dentin plates (size 8 x 2 x 3 mm) were randomly assigned into 6 groups, each consisting of 8 samples. Group 1, no bleaching, immersed in artificial saliva. Group 2, no bleaching, CPP-ACP application only. Group 3, bleaching using 15% carbamide peroxide. Group 4, similar to group 3, except for application of CPP-ACP between bleaching sessions. Group 5, bleaching with 40% hydrogen peroxide. Group 6, similar to group 5, except for application of CPP-ACP between bleaching sessions. Flexural strength of each enamel-dentin plate was tested by a three-point bending test using a universal testing machine. Results: The results showed that 15% carbamide peroxide and 40% hydrogen peroxide significantly reduced the flexural strength of enamel-dentin (216.25±26.44 MPa and 206.67±32.07 MPa, respectively). Conversely, application of CPP-ACP following both bleachings increased flexural strength (266.75±28.27 MPa and 254.58±36.59 MPa, respectively). A two-way ANOVA revealed that extracoronal bleaching agents significantly reduced flexural strength (p<0.05). Conclusion: Extracoronal bleaching agents reduce flexural strength, whereas application of CPP-ACP following bleaching with either 15% carbamide peroxide or 40% hydrogen peroxide can increase the flexural strength of the enamel-dentin complex.

Glass ionomer cements (GIC) are dental restorative materials that are suitable for modification to help prevent dental plaque (biofilm) formation. The aim of this study was to determine the effects of incorporating casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) into a GIC on the colonisation and establishment of Streptococcus mutans biofilms, and the effects of aqueous CPP-ACP on established S. mutans biofilms. S. mutans biofilms were either established in flow cells before a single ten min exposure to 1% w/v CPP-ACP treatment or cultured in static wells or flow cells with either GIC or GIC containing 3% w/w CPP-ACP as the substratum. The biofilms were then visualised using confocal laser scanning microscopy after BacLight LIVE/DEAD staining. A significant decrease in biovolume and average thickness of S. mutans biofilms was observed in both static and flow cell assays when 3% CPP-ACP was incorporated into the GIC substratum. A single ten min treatment with aqueous 1% CPP-ACP resulted in a 58% decrease in biofilm biomass and thickness of established S. mutans biofilms grown in a flow cell. The treatment also significantly altered the structure of these biofilms compared with controls. The incorporation of 3% CPP-ACP into GIC significantly reduced S. mutans biofilm development, indicating another potential anticariogenic mechanism of this material. Additionally, aqueous CPP-ACP disrupted established S. mutans biofilms. The use of CPP-ACP-containing GIC combined with regular CPP-ACP treatment may lower the S. mutans challenge.

Spiral analysis is a computerized method that measures human motor performance from handwritten Archimedean spirals. It quantifies normal motor activity, and detects early disease as well as dysfunction in patients with movement disorders. The clinical utility of spiral analysis is based on kinematic and dynamic indices derived from the original spiral trace, which must be detected and transformed into mathematical expressions with great precision. Accurately determining the center of the spiral and reducing spurious low frequency noise caused by center selection error is important to the analysis. Handwritten spirals do not all start at the same point, even when marked on paper, and drawing artifacts are not easily filtered without distortion of the spiral data and corruption of the performance indices. In this report, we describe a method for detecting the optimal spiral center and reducing the unwanted drawing artifacts. To demonstrate overall improvement to spiral analysis, we study the impact of the optimal spiral center detection in different frequency domains separately and find that it notably improves the clinical spiral measurement accuracy in low frequency domains.
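
One way to realize the optimal-center search described above is to fit the Archimedean model r = a + b*theta for each candidate center and keep the center with the smallest least-squares residual. A minimal numpy sketch under that assumption (the report's actual algorithm may differ):

    import numpy as np

    def spiral_residual(x, y, cx, cy):
        # Residual of fitting r = a + b*theta about candidate center (cx, cy).
        r = np.hypot(x - cx, y - cy)
        theta = np.unwrap(np.arctan2(y - cy, x - cx))
        A = np.column_stack([np.ones_like(theta), theta])
        _, res, *_ = np.linalg.lstsq(A, r, rcond=None)
        return res[0] if res.size else np.inf

    def best_center(x, y, candidates):
        return min(candidates, key=lambda c: spiral_residual(x, y, *c))

    # Synthetic spiral with its true center offset from the origin.
    t = np.linspace(0.1, 6 * np.pi, 400)
    x, y = 0.5 + 2 * t * np.cos(t), -0.3 + 2 * t * np.sin(t)
    grid = [(i, j) for i in np.linspace(-1, 1, 9) for j in np.linspace(-1, 1, 9)]
    print(best_center(x, y, grid))  # the candidate nearest (0.5, -0.3) should win

Choosing the center this way suppresses the spurious low-frequency component that an offset center injects into the unwrapped radius signal.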

Electron transfer dissociation (ETD) is a recently introduced mass spectrometric technique that provides a more comprehensive coverage of peptide sequences and posttranslational modifications. Here, we evaluated the use of ETD for a global phosphoproteome analysis. In all, we identified a total...... fragment ions that facilitated localization of phosphorylation sites. Although our data indicate that ETD is superior to CID for phosphorylation analysis, the two methods can be effectively combined in alternating ETD and CID modes for a more comprehensive analysis. Combining ETD and CID, from this single...

Non-enzymatic glycation of tissue proteins has important implications in the development of complications of diabetes mellitus. Herein we report improved methods for the enrichment and analysis of glycated peptides using boronate affinity chromatography and electron transfer dissociation mass spectrometry, respectively. The enrichment of glycated peptides was improved by replacing an off-line desalting step with an on-line wash of column-bound glycated peptides using 50 mM ammonium acetate. The analysis of glycated peptides by MS/MS was improved by considering only higher charged (≥3) precursor ions during data-dependent acquisition, which increased the number of glycated peptide identifications. Similarly, the use of supplemental collisional activation after electron transfer (ETcaD) resulted in more glycated peptide identifications when the MS survey scan was acquired with enhanced resolution. In general, acquiring ETD-MS/MS data at a normal MS survey scan rate, in conjunction with the rejection of both 1+ and 2+ precursor ions, increased the number of identified glycated peptides relative to ETcaD or the enhanced MS survey scan rate. Finally, an evaluation of trypsin, Arg-C, and Lys-C showed that tryptic digestion of glycated proteins was comparable to digestion with Lys-C and that both were better than Arg-C in terms of the number of glycated peptides identified by LC-MS/MS.

The objective of this paper is to perform an innovation design for improving the recognition of a captured QR code image with blur through Pillbox filter analysis. QR code images can be captured by digital video cameras. Many factors contribute to QR code decoding failure, such as the low quality of the image. Focus is an important factor that affects the quality of the image. This study discusses the out-of-focus QR code image and aims to improve the recognition of the contents in the QR code image. Many studies have used the pillbox filter (circular averaging filter) method to simulate an out-of-focus image. This method is also used in this investigation to improve the recognition of a captured QR code image. A blurred QR code image is separated into nine levels. In the experiment, four different quantitative approaches are used to reconstruct and decode an out-of-focus QR code image. The nine QR code images reconstructed using these methods are then compared. The final experimental results indicate improvements in identification.

X-ray microanalysis by SEM-EDS requires corrections for the many physical processes that affect emitted intensity for elements present in the material. These corrections will only be accurate provided a number of conditions are satisfied and it is essential that the correct elements are identified. As analysis is pushed to achieve results on smaller features and more challenging samples it becomes increasingly difficult to determine if all conditions are upheld and whether the analysis results are valid. If a theoretical simulated spectrum based on the measured analysis result is compared with the measured spectrum, any marked differences will indicate problems with the analysis and can prevent serious mistakes in interpretation. To achieve the necessary accuracy a previous theoretical model has been enhanced to incorporate new line intensity measurements, differential absorption and excitation of emission lines, including the effect of Coster-Kronig transitions and an improved treatment of bremsstrahlung for compounds. The efficiency characteristic has been measured for a large area SDD detector and data acquired from an extensive set of standard materials at both 5 kV and 20 kV. The parameterized model has been adjusted to fit measured characteristic intensities and both background shape and intensity at the same beam current. Examples are given to demonstrate how an overlay of an accurate theoretical simulation can expose some non-obvious mistakes and provide some expert guidance towards a valid analysis result. A new formula for calculating the effective mean atomic number for compounds has also been derived that is appropriate and should help improve accuracy in techniques that calculate the bremsstrahlung or use a bremsstrahlung measurement for calibration.

Paraffin wax is usually used as an embedding medium for histological analysis of natural tissue. However, it is not easy to obtain a sufficient number of satisfactory sectioned slices because of the difference in mechanical properties between the paraffin and the embedded tissue. We describe a modified paraffin wax that can improve the histological analysis efficiency of natural tissue, composed of paraffin and ethylene vinyl acetate (EVA) resin (0, 3, 5, and 10 wt %). The softening temperature of the paraffin/EVA media was similar to that of paraffin (50-60 degrees C). The paraffin/EVA media dissolved completely in xylene after 30 min at 50 degrees C. Physical properties such as the amount of load under the same compressive displacement, elastic recovery, and crystal intensity increased with increased EVA content. The 5 wt % EVA medium was regarded as the optimal composition, based on the sectioning efficiency measured by the number of unimpaired sectioned slices, the amount of load under the same compressive displacement, and the elastic recovery test. Based on staining tests of sectioned slices embedded in the 5 wt % EVA medium with hematoxylin and eosin (H&E), Masson trichrome (MT), and other stains, it was concluded that the modified paraffin wax can improve the histological analysis efficiency with various natural tissues.

Three main parts of generalized cell mapping are improved for global analysis. A simple method, which is not based on the theory of digraphs, is presented to locate complete self-cycling sets that correspond to attractors and unstable invariant sets involving saddle, unstable periodic orbit and chaotic saddle. Refinement for complete self-cycling sets is developed to locate attractors and unstable invariant sets with high degree of accuracy, which can start with a coarse cell structure. A nonuniformly interior-and-boundary sampling technique is used to make the refinement robust. For homeomorphic dissipative dynamical systems, a controlled boundary sampling technique is presented to make generalized cell mapping method with refinement extremely accurate to obtain invariant sets. Recursive laws of group absorption probability and expected absorption time are introduced into generalized cell mapping, and then an optimal order for quantitative analysis of transient cells is established, which leads to the minimal computational work. The improved method is applied to four examples to show its effectiveness in global analysis of dynamical systems.

This paper provides an overview of ongoing efforts to develop, evaluate, and validate different tools for improved aerodynamic modeling and systems analysis of Hybrid Wing Body (HWB) aircraft configurations. Results are being presented for the evaluation of different aerodynamic tools including panel methods, enhanced panel methods with viscous drag prediction, and computational fluid dynamics. Emphasis is placed on proper prediction of aerodynamic loads for structural sizing as well as viscous drag prediction to develop drag polars for HWB conceptual design optimization. Data from transonic wind tunnel tests at the Arnold Engineering Development Center's 16-Foot Transonic Tunnel was used as a reference data set in order to evaluate the accuracy of the aerodynamic tools. Triangularized surface data and Vehicle Sketch Pad (VSP) models of an X-48B 2% scale wind tunnel model were used to generate input and model files for the different analysis tools. In support of ongoing HWB scaling studies within the NASA Environmentally Responsible Aviation (ERA) program, an improved finite element based structural analysis and weight estimation tool for HWB center bodies is currently under development. Aerodynamic results from these analyses are used to provide additional aerodynamic validation data.

The rising number of applications serving millions of users and dealing with terabytes of data requires faster processing paradigms. Recently, there has been growing enthusiasm for the notion of big data analysis. Big data analysis has become a very important aspect of improving productivity, reliability, and quality of service (QoS). Processing big data on a single powerful machine is not an efficient solution. Consequently, companies have focused on using Hadoop software for big data analysis, because Hadoop is designed to support parallel and distributed data processing. Hadoop provides a distributed file processing system that stores and processes data at large scale. It provides fault tolerance by replicating data on three or more machines to avoid data loss. Hadoop is based on a client-server model and uses a single master machine called the NameNode. However, Hadoop has several drawbacks affecting its performance and reliability in big data analysis. In this paper, a new framework is proposed to improve big data analysis and overcome specified drawbacks of Hadoop. These drawbacks are task replication, the centralized master node, and node failure. The proposed framework is called MapReduce Agent Mobility (MRAM). MRAM is developed using mobile agents and the MapReduce paradigm under the Java Agent Development Framework (JADE).
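
For readers unfamiliar with the paradigm the framework builds on, the following is a minimal, self-contained Python sketch of MapReduce-style word counting (the shuffle stage is what Hadoop distributes across nodes). It is illustrative only and is unrelated to the MRAM/JADE code itself.

    from collections import defaultdict

    def map_step(document):
        # Emit (key, value) pairs; here, one pair per word occurrence.
        return [(word, 1) for word in document.split()]

    def shuffle(pairs):
        # Group values by key; in Hadoop this is the distributed shuffle/sort.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_step(groups):
        # Aggregate each group; here, sum the counts.
        return {key: sum(values) for key, values in groups.items()}

    docs = ["big data analysis", "big data processing"]
    pairs = [p for d in docs for p in map_step(d)]
    print(reduce_step(shuffle(pairs)))  # {'big': 2, 'data': 2, 'analysis': 1, 'processing': 1}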

The Topography X-ray Laboratory of the Advanced Photon Source (APS) at Argonne National Laboratory operates as a collaborative effort with APS users to produce high performance crystals for APS X-ray beamline experiments. For many years the topography laboratory has worked closely with an on-site optics shop to help ensure the production of crystals with the highest quality, most stress-free surface finish possible. It has been instrumental in evaluating and refining methods used to produce high quality crystals. Topographical analysis has shown to be an effective method to quantify and determine the distribution of stresses, to help identify methods that would mitigate the stresses and improve the Rocking curve, and to create CCD images of the crystal. This paper describes the topography process and offers methods for reducing crystal stresses in order to substantially improve the crystal optics.

To improve construction site safety, emphasis has been placed on the implementation of safety programs. In order to successfully gain from safety programs, factors that affect their improvement need to be studied. Sixteen critical success factors of safety programs were identified from the safety literature, and these were validated by safety experts. This study was undertaken by surveying 70 respondents from medium- and large-scale construction projects. It explored the importance and the actual status of critical success factors (CSFs). Gap analysis was used to examine the differences between the importance of these CSFs and their actual status. This study found that the most critical problems characterized by the largest gaps were management support, appropriate supervision, sufficient resource allocation, teamwork, and effective enforcement. Raising these priority factors to satisfactory levels would lead to successful safety programs, thereby minimizing accidents.

We present an improved ant colony algorithm-based approach to assess the vulnerability of a road network and identify the critical infrastructures. This approach improves computational efficiency and allows for its applications in large-scale road networks. This research involves defining the vulnerability conception, modeling the traffic utility index and the vulnerability of the road network, and identifying the critical infrastructures of the road network. We apply the approach to a simple test road network and a real road network to verify the methodology. The results show that vulnerability is directly related to traffic demand and increases significantly when the demand approaches capacity. The proposed approach reduces the computational burden and may be applied in large-scale road network analysis. It can be used as a decision-supporting tool for identifying critical infrastructures in transportation planning and management.

Resilient Packet Ring (RPR), specified by IEEE 802.17, is a new standard for Metropolitan Area Networks (MANs). One of RPR's characteristics is that it can support three priorities of traffic in a single datapath, i.e., class A, class B and class C, ranging from high priority to low priority, respectively. Different entities such as shaping, scheduling, fairness, topology and protection coordinate to guarantee the Quality of Service (QoS) for different services. Various pieces of the datapath in RPR are tied together through logical queues; thus, we investigate the datapath from the view of logical queues in this paper. With a detailed analysis of the MAC shaping mechanism in RPR, we propose some improvements to achieve better transport performance for RPR's three priorities of traffic. Simulation results show that our improvement is efficient.

With growing pressure to identify skilled resources in the Clinical Data Management (CDM) world of clinical research organizations, most CDM organizations are planning to improve skills within the organization in order to provide quality deliverables. In the changing CDM landscape, the ability to build, manage, and leverage the skills of clinical data managers is critical, and CDM must proactively identify, analyze, and address skill gaps for all the roles involved. In addition to domain skills, the evolving role of a clinical data manager demands diverse skill sets such as project management, six sigma, analytical skills, decision making, and communication. This article proposes a methodology of skill gap analysis (SGA) management as one potential solution to the big skill challenge that CDM is gearing up for, bridging the gap in skills. This would in turn strengthen CDM capability, scalability, and consistency across geographies, along with improving the productivity and quality of deliverables.

We present an improved analysis of the smoothed aggregation (SA) algebraic multigrid method (AMG) extending the original proof in [SA] and its modification in [Va08]. The new result imposes fewer restrictions on the aggregates that makes it easier to verify in practice. Also, we extend a result in [Van] that allows us to use aggressive coarsening at all levels due to the special properties of the polynomial smoother, that we use and analyze, and thus provide a multilevel convergence estimate with bounds independent of the coarsening ratio.

OBJECTIVE: To test a method for improving the selection of indicators of general practitioners' prescribing. METHODS: We conducted a prescription database study including all 180 general practices in the County of Funen, Denmark, approximately 472,000 inhabitants. Principal factor analysis was used...... indicators directly quantifying choice of coxibs, indicators measuring expenditure per Defined Daily Dose, and indicators taking risk aspects into account, (2) "Frequent NSAID prescribing", comprising indicators quantifying prevalence or amount of NSAID prescribing, and (3) "Diverse NSAID choice", comprising...... appropriate and inappropriate prescribing, as revealed by the correlation of the indicators in the first factor. CONCLUSION: Correlation and factor analysis is a feasible method that assists the selection of indicators and gives better insight into prescribing patterns....
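
The kind of indicator factor analysis the study applies can be sketched as follows, assuming a practices-by-indicators matrix. The data, the number of factors, and the use of scikit-learn are assumptions for illustration only.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(180, 9))        # hypothetical: 180 practices x 9 NSAID indicators
    corr = np.corrcoef(X, rowvar=False)  # indicator-indicator correlation matrix
    fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
    loadings = fa.components_.T          # 9 indicators x 3 factors
    # Indicators that load heavily on the same factor measure the same
    # prescribing dimension, guiding which indicators to keep or merge.
    print(np.round(loadings, 2))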

The Allan variance (AVAR) is widely used to measure the stability of experimental time series. Specifically, AVAR is commonly used in space applications such as monitoring the clocks of the global navigation satellite systems (GNSSs). In these applications, the experimental data present some peculiar aspects which are not generally encountered when the measurements are carried out in a laboratory. Space clocks' data can in fact present outliers, jumps, and missing values, which corrupt the clock characterization. Therefore, an efficient preprocessing is fundamental to ensure a proper data analysis and improve the stability estimation performed with the AVAR or other similar variances. In this work, we propose a preprocessing algorithm and its implementation in a robust software code (in MATLAB language) able to deal with time series of experimental data affected by nonstationarities and missing data; our method is properly detecting and removing anomalous behaviors, hence making the subsequent stability analysis more reliable.
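
A Python analogue of that workflow (the paper's implementation is in MATLAB): robust outlier rejection and gap filling, followed by the non-overlapping Allan variance. The rejection threshold and the interpolation choice are assumptions.

    import numpy as np

    def preprocess(y, k=5.0):
        # Median/MAD outlier rejection, then linear interpolation over gaps.
        y = y.astype(float).copy()
        med = np.nanmedian(y)
        mad = np.nanmedian(np.abs(y - med)) or 1e-12
        y[np.abs(y - med) > k * 1.4826 * mad] = np.nan
        idx, ok = np.arange(y.size), ~np.isnan(y)
        return np.interp(idx, idx[ok], y[ok])

    def avar(y, m):
        # Non-overlapping Allan variance at averaging factor m:
        # AVAR = 0.5 * mean((ybar_{i+1} - ybar_i)^2)
        ybar = y[: y.size // m * m].reshape(-1, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(ybar) ** 2)

    y = np.random.default_rng(1).normal(size=10000)
    y[::997] = 50.0  # injected outliers standing in for clock anomalies
    print(avar(preprocess(y), m=10))

Without the preprocessing step, the injected outliers would dominate the squared first differences and inflate the stability estimate.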

Modeling of multimedia environmental issues is extremely complex due to the intricacy of the systems and the many factors to consider. In this study, an improved environmental multimedia model is developed, and a number of related test problems are examined and compared using standard numerical and analytical methodologies. The results indicate the flux output of the new model is lower in the unsaturated zone and groundwater zone compared with the traditional environmental multimedia model. Furthermore, about 90% of the total benzene flux was distributed to the air zone from the landfill sources and only 10% of the total flux was emitted into the unsaturated and groundwater zones under non-uniform conditions. This paper also includes model sensitivity analysis to optimize model parameters such as the Peclet number (Pe). The analysis results show that Pe can be considered a deterministic input variable for transport output. The oscillatory behavior is eliminated as Pe decreases. In addition, the numerical methods are more accurate than analytical methods as Pe increases. In conclusion, the improved environmental multimedia model system and its sensitivity analysis can be used to address the complex fate and transport of pollutants in multimedia environments and thereby help to manage environmental impacts.
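
For reference, the Peclet number referred to above is conventionally defined as (an assumption here, since the abstract does not restate its definition):

    Pe = \frac{vL}{D}

where v is the advective velocity, L a characteristic length, and D the dispersion (or diffusion) coefficient. Large Pe means advection-dominated transport, the regime in which numerical schemes tend to oscillate, which is consistent with the observation that the oscillatory behavior disappears as Pe decreases.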

Background: Tracing cell dynamics in the embryo becomes tremendously difficult when cell trajectories cross in space and time and tissue density obscures individual cell borders. Here, we used the chick neural crest (NC) as a model to test multicolor cell labeling and multispectral confocal imaging strategies to overcome these roadblocks. Results: We found that multicolor nuclear cell labeling and multispectral imaging led to improved resolution of in vivo NC cell identification by providing a unique spectral identity for each cell. NC cell spectral identity allowed for more accurate cell tracking and was consistent during short-term time-lapse imaging sessions. Computer model simulations predicted significantly better object counting for increasing cell densities with 3-color compared to 1-color nuclear cell labeling. To better resolve cell contacts, we show that a combination of 2-color membrane and 1-color nuclear cell labeling dramatically improved the semi-automated analysis of NC cell interactions, yet preserved the ability to track cell movements. We also found that channel versus lambda scanning of multicolor labeled embryos significantly reduced the time and effort of image acquisition and analysis of large 3D volume data sets. Conclusions: Our results reveal that multicolor cell labeling and multispectral imaging provide a cellular fingerprint that may uniquely determine a cell's position within the embryo. Together, these methods offer a spectral toolbox to resolve in vivo cell dynamics in unprecedented detail.

In this, the third and final article in a series on practice skill analysis, attention is given to imaginative ways of improving a practice skill. Having analysed and evaluated a chosen skill in the previous two articles, it is time to look at new ways to proceed. Creative people are able to be analytical and imaginative. The process of careful reasoning involved in analysing and evaluating a skill will not necessarily be used to improve it. To advance a skill, there is a need to engage in more imaginative, free-thinking processes that allow the nurse to think afresh about his or her chosen skill. Suggestions shared in this article are not exhaustive, but the material presented does illustrate measures that in the author's experience seem to have potential. Consideration is given to how the improved skill might be envisaged (an ideal skill in use). The article is illustrated using the case study of empathetic listening, which has been used throughout this series.

Purpose. Energy and economic evaluation of the improved plasma waste utilization technological process, as well as substantiation of the expediency of the improved plasma technology by comparing its energy consumption with other thermal methods of utilization. Methodology. Analysis of existing modern and advanced methods of waste management and their impact on environmental safety; consideration of the energy and monetary costs of implementing two different waste management technologies. Results. Studies have shown that the gases from regular and plasma gasification differ in heating value, owing in part to the significant amount of nitrogen present in regular gasification. From the point of view of minimizing energy and monetary costs and ensuring environmental safety, the proposed improved plasma technology for waste is more promising. To assess the energy appropriateness of the considered technologies, a comparative calculation was carried out at standard conditions. The processing of waste yields useful products, such as liquefied methane, synthetic gas (94% methane), and a fuel gas for heating, all suitable for sale, which ensures the cost-effectiveness of this technology. Originality. The ecological and economic efficiency of the proposed improved plasma waste utilization technology is shown and evaluated in comparison with other thermal techniques. Practical value. The energy and monetary costs of implementing two different waste management technologies, namely ordinary gasification and gasification using plasma generators, are considered and substantiated. The proposed plasma waste utilization technology makes it possible to obtain useful products, such as liquefied methane, synthetic gas, and a fuel gas for heating, which are suitable for sale. A plant implementing the improved plasma waste utilization process can compensate for daily and seasonal fluctuations in electricity and heat consumption by allowing storage of the obtained fuel products.

In credit card scoring and loan management, the prediction of an applicant's future behavior is an important decision support tool and a key factor in reducing the risk of loan default. Many data mining and classification approaches have been developed for credit scoring. To the best of our knowledge, building a credit scorecard by analyzing the textual data in the application form has not been explored so far. This paper proposes a comprehensive credit scorecard modeling technique that improves credit scorecards through textual data analysis. This study uses a sample of loan application forms from a financial institution providing loan services in Yemen, which represents a real-world credit scoring and loan management situation. The sample contains a set of Arabic textual attributes describing the applicants. A credit scoring model based on text mining pre-processing and logistic regression techniques is proposed and evaluated through comparison with a group of credit scorecard modeling techniques that use only the numeric attributes in the application form. The results show that adding textual attribute analysis achieves higher classification effectiveness and outperforms the traditional numerical-data-only techniques.
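
A minimal sketch of the modeling idea, text features feeding a logistic regression, using scikit-learn. The example texts, labels, and pipeline choices are assumptions, and the paper's Arabic-specific pre-processing is not reproduced here.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical application-form snippets with 1 = default, 0 = repaid.
    texts = ["owns shop in city market", "no fixed income, new business",
             "salaried employee ten years", "seasonal farm work only"]
    defaulted = [0, 1, 0, 1]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, defaulted)
    print(model.predict_proba(["salaried employee, owns shop"])[0, 1])  # P(default)

In practice the text-derived score would be combined with the numeric scorecard attributes rather than used alone, which is the comparison the paper evaluates.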

The high-resolution analysis and nowcasting system INCA (Integrated Nowcasting through Comprehensive Analysis) developed at the Austrian national weather service provides three-dimensional fields of temperature, humidity, and wind on an hourly basis, and two-dimensional fields of precipitation rate in 15 min intervals. The system operates on a horizontal resolution of 1 km and a vertical resolution of 100–200 m. It combines surface station data, remote sensing data (radar, satellite), forecast fields of the numerical weather prediction model ALADIN, and high-resolution topographic data. An important application of the INCA system is nowcasting of convective precipitation. Based on fine-scale temperature, humidity, and wind analyses, a number of convective analysis fields are routinely generated. These fields include convective boundary layer (CBL) flow convergence and specific humidity, lifted condensation level (LCL), convective available potential energy (CAPE), convective inhibition (CIN), and various convective stability indices. Based on the verification of areal precipitation nowcasts, it is shown that the pure translational forecast of convective cells can be improved by using a decision algorithm based on a subset of the above fields, combined with satellite products.
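
As a pointer to how such convective fields are derived, the lifted condensation level admits a well-known first-order estimate from the surface dewpoint depression (Espy's approximation; INCA's exact formulation may differ):

    def lcl_height_m(temp_c, dewpoint_c):
        # ~125 m of lifting per degree Celsius of dewpoint depression.
        return 125.0 * (temp_c - dewpoint_c)

    print(lcl_height_m(25.0, 15.0))  # ~1250 m above ground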

This paper considers the current trends and prospects of venture financing for new innovative enterprises, one of the most effective alternative sources of financing for an entity, albeit one with a high degree of risk. The features of venture financing distinguish it from other sources of business financing: income from venture capital investments can greatly exceed the volume of investment, but at the same time the financing risks are significant, which makes it necessary to build an effective system of venture capital investment management. The study also reveals problems in the analysis and minimization of risks in the venture financing of innovative enterprises. Defining the characteristics of analysis and risk assessment in venture financing helps to find ways to minimize, systematize, avoid, and prevent risks in the deployment of venture capital. The study also identified the major areas of improvement of the analysis of venture capital for management decisions.

In this research, the bioremediation of dispersed crude oil, based on the amount of nitrogen and phosphorus supplementation in a closed system, was optimized by the application of response surface methodology and central composite design. Correlation analysis of the mathematical regression model demonstrated that a quadratic polynomial model could be used to optimize the hydrocarbon bioremediation (R^2 = 0.9256). Statistical significance was checked by analysis of variance and residual analysis. Natural attenuation removed 22.1% of the crude oil in 28 days. The highest removal under un-optimized conditions, 68.1%, was observed using 20.00 mg/L nitrogen and 2.00 mg/L phosphorus in 28 days, while the optimized process achieved crude oil removal of 69.5% with 16.05 mg/L nitrogen and 1.34 mg/L phosphorus in 27 days; optimization can therefore improve biodegradation in a shorter time with less nutrient consumption.
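
The quadratic polynomial fitted through a central composite design has, for the two factors here (x_1 = nitrogen, x_2 = phosphorus), the general form below; the paper's fitted coefficients are not reproduced:

    y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2 + \varepsilon

Optimization then amounts to maximizing the fitted response y (percent removal) over the experimental region.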

Firstly, this paper establishes a principal component regression model to analyze the data quantitatively, obtaining three principal component factors of flight delays from principal component analysis. Then the least squares method is used to analyze the factors, and the regression equation is obtained by substitution; the analysis shows that the main cause of flight delays is the airlines, followed by weather and traffic. Aiming at these problems, this paper focuses on the controllable aspect of traffic flow control. To address traffic flow control, an adaptive genetic queuing model is established for the runway terminal area. The paper develops an optimization method in which fifteen planes land simultaneously on three runways, based on Beijing Capital International Airport; comparing the results with the existing FCFS algorithm proves the superiority of the model.

The transient deflagration code DPAC (Deflagration Pressure Analysis Code) has been upgraded for use in modeling hydrogen deflagration transients. The upgraded code is benchmarked using data from vented hydrogen deflagration tests conducted at the HYDRO-SC Test Facility at the University of Pisa. DPAC originally was written to calculate peak deflagration pressures for deflagrations in radioactive waste storage tanks and process facilities at the Savannah River Site. Upgrades include the addition of a laminar flame speed correlation for hydrogen deflagrations and a mechanistic model for turbulent flame propagation, incorporation of inertial effects during venting, and inclusion of the effect of water vapor condensation on vessel walls. In addition, DPAC has been coupled with CEA, a NASA combustion chemistry code. The deflagration tests are modeled as end-to-end deflagrations. The improved DPAC code successfully predicts both the peak pressures during the deflagration tests and the times at which the pressure peaks.

Semantic search of cultural content is of major importance in current digital libraries, such as Europeana. Content metadata constitute the main features of cultural items that are analysed, mapped, and used to interpret users' queries, so that the most appropriate content is selected and presented to the users. Multimedia analysis, especially visual analysis, has not been a main component in these developments. This paper presents a new semantic search methodology, including a query answering mechanism which meets the semantics of users' queries and enriches the answers by exploiting appropriate visual features, both local and MPEG-7, through an interweaved knowledge- and machine-learning-based approach. An experimental study is presented, using content from the Europeana digital library and involving both thematic knowledge and visual features extracted from Europeana images, illustrating the improved performance of the proposed semantic search approach.

The Travelling Salesman Problem is one of the most fundamental and most studied problems in approximation algorithms. For more than 30 years, the best algorithm known for general metrics has been Christofides's algorithm with an approximation factor of 3/2, even though the so-called Held-Karp LP relaxation of the problem is conjectured to have an integrality gap of only 4/3. Very recently, significant progress has been made for the important special case of graphic metrics, first by Oveis Gharan et al., and then by Momke and Svensson. In this paper, we provide an improved analysis for the approach introduced by Momke and Svensson, yielding a bound of 35/24 on the approximation factor, as well as a bound of 19/12+epsilon for any epsilon>0 for the more general Travelling Salesman Path Problem in graphic metrics.

Latent Semantic Analysis (LSA) offers a technique for improving lessons learned and knowledge management systems. These systems are expected to become more widely used in the nuclear industry, as experienced personnel leave and are replaced by younger, less-experienced workers. LSA is a machine learning technology that allows searching of text based on meaning rather than predefined keywords or categories. Users can enter and retrieve data using their own words, rather than relying on constrained language lists or navigating an artificially structured database. LSA-based tools can greatly enhance the usability and usefulness of knowledge management systems and thus provide a valuable tool to assist nuclear industry personnel in gathering and transferring worker expertise. (authors)

The improvements and the modifications of the NASA Aircraft Noise Prediction Program (ANOPP) and the Propeller Analysis System (PAS) are described. Comparisons of the predictions and the test data are included in the case studies for the flat plate model in the Boundary Layer Module, for the effects of applying compressibility corrections to the lift and pressure coefficients, for the use of different weight factors in the Propeller Performance Module, for the use of the improved retarded time equation solution, and for the effect of the number of grids in the Transonic Propeller Noise Module. The DNW tunnel test data of a propeller at different angles of attack and the Dowty Rotol data are compared with ANOPP predictions. The effect of the number of grids on the Transonic Propeller Noise Module predictions and the comparison of ANOPP TPN and DFP-ATP codes are studied. In addition to the above impact studies, the transonic propeller noise predictions for the SR-7, the UDF front rotor, and the support of the enroute noise test program are included.

In recent years, many efforts have been made to study the performance of treatment planning systems in deriving an accurate dosimetry of the complex radiation fields involved in boron neutron capture therapy (BNCT). The computational model of the patient's anatomy is one of the main factors involved in this subject. This work presents a detailed analysis of the performance of the 1 cm based voxel reconstruction approach. First, a new and improved material assignment algorithm implemented in NCTPlan treatment planning system for BNCT is described. Based on previous works, the performances of the 1 cm based voxel methods used in the MacNCTPlan and NCTPlan treatment planning systems are compared by standard simulation tests. In addition, the NCTPlan voxel model is benchmarked against in-phantom physical dosimetry of the RA-6 reactor of Argentina. This investigation shows the 1 cm resolution to be accurate enough for all reported tests, even in the extreme cases such as a parallelepiped phantom irradiated through one of its sharp edges. This accuracy can be degraded at very shallow depths in which, to improve the estimates, the anatomy images need to be positioned in a suitable way. Rules for this positioning are presented. The skin is considered one of the organs at risk in all BNCT treatments and, in the particular case of cutaneous melanoma of extremities, limits the delivered dose to the patient. Therefore, the performance of the voxel technique is deeply analysed in these shallow regions. A theoretical analysis is carried out to assess the distortion caused by homogenization and material percentage rounding processes. Then, a new strategy for the treatment of surface voxels is proposed and tested using two different irradiation problems. For a parallelepiped phantom perpendicularly irradiated with a 5 keV neutron source, the large thermal neutron fluence deviation present at shallow depths (from 54% at 0 mm depth to 5% at 4 mm depth) is reduced to 2% on average

Full Text Available Bioinformatics and genomic signal processing use computational techniques to solve various biological problems. They aim to study the information allied with genetic materials such as the deoxyribonucleic acid (DNA, the ribonucleic acid (RNA, and the proteins. Fast and precise identification of the protein coding regions in DNA sequence is one of the most important tasks in analysis. Existing digital signal processing (DSP methods provide less accurate and computationally complex solution with greater background noise. Hence, improvements in accuracy, computational complexity, and reduction in background noise are essential in identification of the protein coding regions in the DNA sequences. In this paper, a new DSP based method is introduced to detect the protein coding regions in DNA sequences. Here, the DNA sequences are converted into numeric sequences using electron ion interaction potential (EIIP representation. Then discrete wavelet transformation is taken. Absolute value of the energy is found followed by proper threshold. The test is conducted using the data bases available in the National Centre for Biotechnology Information (NCBI site. The comparative analysis is done and it ensures the efficiency of the proposed system.

Bioinformatics and genomic signal processing use computational techniques to solve various biological problems. They aim to study the information allied with genetic materials such as the deoxyribonucleic acid (DNA), the ribonucleic acid (RNA), and the proteins. Fast and precise identification of the protein coding regions in DNA sequence is one of the most important tasks in analysis. Existing digital signal processing (DSP) methods provide less accurate and computationally complex solution with greater background noise. Hence, improvements in accuracy, computational complexity, and reduction in background noise are essential in identification of the protein coding regions in the DNA sequences. In this paper, a new DSP based method is introduced to detect the protein coding regions in DNA sequences. Here, the DNA sequences are converted into numeric sequences using electron ion interaction potential (EIIP) representation. Then discrete wavelet transformation is taken. Absolute value of the energy is found followed by proper threshold. The test is conducted using the data bases available in the National Centre for Biotechnology Information (NCBI) site. The comparative analysis is done and it ensures the efficiency of the proposed system.
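
A minimal sketch of the pipeline described above: EIIP mapping to a numeric sequence, a discrete wavelet transform, and thresholding of the absolute coefficient energy. The wavelet choice ('db4' here) and the mean-based threshold are assumptions; the EIIP values are the published per-nucleotide constants.

    import numpy as np
    import pywt

    EIIP = {"A": 0.1260, "G": 0.0806, "T": 0.1335, "C": 0.1340}

    def coding_energy(seq, wavelet="db4"):
        numeric = np.array([EIIP[b] for b in seq.upper()])
        _, detail = pywt.dwt(numeric, wavelet)   # single-level DWT
        return np.abs(detail)                    # energy-like measure to threshold

    seq = "ATGGCGTACGCTTAGATGCCGAATCGGTAC"
    energy = coding_energy(seq)
    print((energy > energy.mean()).astype(int))  # crude coding/non-coding mask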

The identification of Propionibacterium acnes in cultures of bone and joint samples is always difficult to interpret because of the ubiquity of this microorganism. The aim of this study was to propose a diagnostic strategy to distinguish infections from contaminations. This was a retrospective analysis of all patient charts of those patients with ≥1 deep sample culture-positive for P. acnes. Every criterion was tested for sensitivity, specificity, and positive likelihood ratio, and then the diagnostic probability of combinations of criteria was calculated. Among 65 patients, 52 (80%) were considered truly infected with P. acnes, a diagnosis based on a multidisciplinary process. The most valuable diagnostic criteria were: ≥2 positive deep samples, peri-operative findings (necrosis, hardware loosening, etc.), and ≥2 surgical procedures. However, no single criterion was sufficient to ascertain the diagnosis. The following combinations of criteria had a diagnostic probability of >90%: ≥2 positive cultures + 1 criterion among: peri-operative findings, local signs of infection, ≥2 previous operations, orthopaedic devices; or 1 positive culture + 3 criteria among: peri-operative findings, local signs of infection, ≥2 previous surgical operations, orthopaedic devices, inflammatory syndrome. The diagnosis of P. acnes osteomyelitis was greatly improved by combining different criteria, allowing differentiation between infection and contamination.
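
For reference, the positive likelihood ratio used to rank the criteria is defined as

    LR^{+} = \frac{\text{sensitivity}}{1 - \text{specificity}}

so a criterion with, say, sensitivity 0.80 and specificity 0.90 (hypothetical values) has LR+ = 0.80/0.10 = 8, meaning a positive finding is eight times more likely in an infected case than in a contaminated one; combining independent criteria multiplies their likelihood ratios onto the pre-test odds.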

The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii (FOCS 2006) aimed at closing this gap, and they proved a bound of poly(n^k, 1/σ) on the smoothed running-time of the k-means method, where n is the number of data points and σ is the standard deviation of the Gaussian perturbation. This bound, though better than the worst-case bound, is still much larger than the running-time observed in practice. We improve the smoothed analysis of the k-means method by showing two upper bounds on the expected running-time of k-means. First, we prove that the expected running-time is bounded by a polynomial in n^(sqrt k) and 1/σ. Second, we prove an upper bound of k^(kd) · poly(n, 1/σ), where d is the dimension of the data space. The polynomial is independent of k and d, and w...

Corneal confocal microscopy (CCM) is a noninvasive clinical method to analyse and quantify corneal nerve fibres in vivo. Although the CCM technique is in constant progress, there are methodological limitations in terms of sampling of images and objectivity of the nerve quantification. The aim of this study was to present a randomized sampling method of the CCM images and to develop an adjusted area-dependent image analysis. Furthermore, a manual nerve fibre analysis method was compared to a fully automated method. 23 idiopathic small-fibre neuropathy patients were investigated using CCM. Corneal nerve fibre length density (CNFL) and corneal nerve fibre branch density (CNBD) were determined in both a manual and automatic manner. Differences in CNFL and CNBD between (1) the randomized and the most common sampling method, (2) the adjusted and the unadjusted area and (3) the manual and automated quantification method were investigated. The CNFL values were significantly lower when using the randomized sampling method compared to the most common method (p = 0.01). There was not a statistical significant difference in the CNBD values between the randomized and the most common sampling method (p = 0.85). CNFL and CNBD values were increased when using the adjusted area compared to the standard area. Additionally, the study found a significant increase in the CNFL and CNBD values when using the manual method compared to the automatic method (p ≤ 0.001). The study demonstrated a significant difference in the CNFL values between the randomized and common sampling method indicating the importance of clear guidelines for the image sampling. The increase in CNFL and CNBD values when using the adjusted cornea area is not surprising. The observed increases in both CNFL and CNBD values when using the manual method of nerve quantification compared to the automatic method are consistent with earlier findings. This study underlines the importance of improving the analysis of the

Criticality safety, the prevention of nuclear chain reactions, depends on Monte Carlo computer codes for most commercial applications. One major shortcoming of these codes is the limited accuracy of the atomic and nuclear data files they depend on. In order to apply a code and its data files to a given criticality safety problem, the code must first be benchmarked against similar problems for which the answer is known. The difference between a code prediction and the known solution is termed the "bias" of the code. Traditional calculations of the bias for application to commercial criticality problems are generally full of assumptions and lead to large uncertainties which must be conservatively factored into the bias as statistical tolerances. Recent trends in storing commercial nuclear fuel---narrowed regulatory margins of safety, degradation of neutron absorbers, the desire to use higher enrichment fuel, etc.---push the envelope of criticality safety. They make it desirable to minimize uncertainty in the bias to accommodate these changes, and they make it vital to understand what assumptions are safe to make under what conditions. A set of improved procedures is proposed for (1) developing multivariate regression bias models, and (2) applying multivariate regression bias models. These improved procedures lead to more accurate estimates of the bias and much smaller uncertainties about this estimate, while also generally providing more conservative results. The drawback is that the procedures are not trivial and are highly labor intensive to implement. The payback in savings in margin to criticality and conservatism for calculations near regulatory and safety limits may be worth this cost. To develop these procedures, a bias model using the statistical technique of weighted least squares multivariate regression is developed in detail. Problems that can occur from a weak statistical analysis are highlighted, and a solid statistical method for developing the bias

emissions under low temperature combustion (LTC) regimes. An invention disclosure was submitted to ORNL for the virtual sensor under the CRADA. Industrial in-kind support was available throughout the project period. Review of the research results was carried out on a regular basis (annual reports and meetings), followed by suggestions for improvement in ongoing work and direction for future work. A significant portion of the industrial support was in the form of experimentation, data analysis, data exchange, and technical consultation.

We show that the Zhang-Yang-Zhu-Zhang identity-based authenticatable ring signcryption scheme is not secure against chosen plaintext attacks. Furthermore, we propose an improved scheme that remedies the weakness of the Zhang-Yang-Zhu-Zhang scheme. The improved scheme has shorter ciphertext size than the Zhang-Yang-Zhu-Zhang scheme. We then prove that the improved scheme satisfies confidentiality, unforgeability, anonymity and authenticatability.

Flash floods are one of the natural hazards that cause major damage worldwide. Especially in Mediterranean areas they provoke high economic losses every year. In mountain areas with high stream gradients, flood events are characterized by extremely high flow and debris transport rates. Flash flood analysis in mountain areas presents specific scientific challenges. On one hand, there is a lack of information on precipitation and discharge due to a lack of spatially well-distributed gauge stations with long records. On the other hand, gauge stations may not record correctly during extreme events when they are damaged or the discharge exceeds the recordable level. In this case, no systematic data allows improvement of the understanding of the spatial and temporal occurrence of the process. Since historic documentation is normally scarce or even completely missing in mountain areas, tree-ring analysis can provide an alternative approach. Flash floods may influence trees in different ways: (1) tilting of the stem through the unilateral pressure of the flowing mass or individual boulders; (2) root exposure through erosion of the banks; (3) injuries and scars caused by boulders and wood transported in the flow; (4) decapitation of the stem and resulting candelabra growth through the severe impact of boulders; (5) stem burial through deposition of material. The trees react to these disturbances with specific growth changes such as abrupt change of the yearly increment and anatomical changes like reaction wood or callus tissue. In this study, we sampled 90 cross sections and 265 increment cores of trees heavily affected by past flash floods in order to date past events and to reconstruct recurrence intervals in two torrent channels located in the Spanish Central System. The first study site is located along the Pelayo River, a torrent in natural conditions. Based on the external disturbances of trees and their geomorphological position, 114 Pinus pinaster (Ait

Despite all these efforts, however, crop productivities on farmers' fields still … sampling units (PSUs) or clusters and then a simple random sampling … to 44%, 50% and 54% conditional on the adoption of improved varieties of wheat, faba.

In line with military requirements, and in view of the problems with equipment maintenance support methods in high-tech battles, each element supporting equipment maintenance is analyzed, and methods for improving equipment maintenance are proposed.

The keyboard is among the most important input devices for a computer. With the development of technology, a keyboard need no longer remain confined to its basic functionalities; rather, it can go beyond them. There are several inventions which attempt to improve the efficiency of a conventional keyboard. This article illustrates 10 inventions from the US Patent database, all of which propose very interesting methods for improving the efficiency of a computer keyboard. Some in...

We are investigating the value of incorporating chronotopographic analysis into undergraduate geology courses using terrestrial laser scanning (TLS) to improve student understanding of the rates and styles of geomorphic processes. Repeat high-resolution TLS surveys can track the evolution of active landscapes, including sites of active faulting, glaciation, landslides, fluvial systems and coastal dynamics. We hypothesize that geology students who collect and analyze such positional data for local active landscapes will develop a better sense of the critical (and non-steady) geomorphic processes affecting landscape change and develop a greater interest in pursuing opportunities for geology field work. We have collected baseline TLS scans of actively evolving landscapes identified in cooperation with land-use agencies. The project team is developing inquiry activities for each site and assessing their impact. For example, our faculty partners at 2-year colleges are interested in rapid retreat of coastal bluffs near their campuses. In this situation, TLS will be part of a laboratory activity in which students compare historic air photos to predict areas of the most active long-term bluff retreat; join their instructor to collect TLS data at the site (replicating the baseline scan); sketch outcrops in the field and suggest areas of the site for higher resolution scanning; and in the following class compare their predictions to the deformation maps that are the output of the repeated TLS scans. A brief two-question assessment instrument was developed to address both the content and attitudinal targets. It was given to WWU Geomorphology classes in 3 sequential quarters of the 2009/2010 academic year: 2 that did not work with the TLS technology (pre-treatment) and one that did participate in the redesigned activities (post-treatment). Additionally, focus group interviews were conducted with the post-treatment students so they could verbalize their experience with the TLS. The content

There is little evidence to direct health systems toward providing efficient interventions to address medical errors, defined as an unintended act of omission or commission or one not executed as intended that may or may not cause harm to the patient but does not achieve its intended outcome. We believe that lack of guidance on what is the most efficient way to reduce medical errors and improve the quality of health-care limits the scale-up of health system improvement interventions. Challenges to economic evaluation of these interventions include defining and implementing improvement interventions in different settings with high fidelity, capturing all of the positive and negative effects of the intervention, using process measures of effectiveness rather than health outcomes, and determining the full cost of the intervention and all economic consequences of its effects. However, health system improvement interventions should be treated similarly to individual medical interventions and undergo rigorous economic evaluation to provide actionable evidence to guide policy-makers in decisions of resource allocation for improvement activities among other competing demands for health-care resources.

Failure mode and effects analysis and value engineering are well-established methods in the manufacturing industry, commonly applied to optimize product reliability and cost, respectively. Both processes, however, require cross-functional teams to identify and evaluate the product/process functions and are resource-intensive, hence their application is mostly limited to large organizations. In this article, we present a methodology involving the concurrent execution of failure mode and effects analysis and value engineering, assisted by a set of hierarchical functional analysis diagram models, along with the outcomes of a pilot application in a UK-based manufacturing small and medium enterprise. Analysis of the results indicates that this new approach could significantly enhance the resource efficiency and effectiveness of both failure mode and effects analysis and value engineering processes.

In order to realize accurate measurement of the distortion parameters of an aircraft power supply system, and to satisfy the requirements of the corresponding equipment in the aircraft, a novel power parameter test system based on an improved filtering algorithm is introduced in this paper. The hardware of the test system is portable and provides high-speed data acquisition and processing; the software part uses LabWindows/CVI as the development environment and adopts a pre-processing technique together with the added filtering algorithm. Compared with the traditional filtering algorithm, the improved filtering algorithm adopted by the test system increases the test accuracy. Application shows that the test system with the improved filtering algorithm achieves accurate test results and meets the design requirements.

We analyze and improve low rank representation (LRR), the state-of-the-art algorithm for subspace segmentation of data. We prove that for the noiseless case, the optimization model of LRR has a unique solution, which is the shape interaction matrix (SIM) of the data matrix. So in essence LRR is equivalent to factorization methods. We also prove that the minimum value of the optimization model of LRR is equal to the rank of the data matrix. For the noisy case, we show that LRR can be approximated as a factorization method combined with noise removal by column-sparse robust PCA. We further propose an improved version of LRR, called Robust Shape Interaction (RSI), which uses the corrected data as the dictionary instead of the noisy data. RSI is more robust than LRR when the corruption in the data is heavy. Experiments on both synthetic and real data testify to the improved robustness of RSI.
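
To make the noiseless result concrete: for a data matrix X with skinny SVD X = U S V^T, the shape interaction matrix is V V^T, which the abstract identifies as the unique LRR solution in the noiseless case. A hedged numpy sketch:

```python
import numpy as np

def shape_interaction_matrix(X, rank=None):
    """SIM = V V^T from the skinny SVD of the data matrix X (d x n).
    For data drawn from independent subspaces, the SIM is block-diagonal
    (after permutation), which reveals the subspace segmentation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = rank if rank is not None else int(np.sum(s > 1e-10))  # numerical rank
    V = Vt[:r].T
    return V @ V.T  # n x n affinity; cluster it e.g. with spectral clustering
```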

The security of lattice-based cryptosystems such as NTRU, GGH and Ajtai-Dwork essentially relies upon the intractability of computing a shortest non-zero lattice vector and a closest lattice vector to a given target vector in high dimensions. The best algorithms for these tasks are due to Kannan, and, though remarkably simple, their complexity estimates have not been improved for more than twenty years. Kannan's algorithm for solving the shortest vector problem is in particular crucial in Schnorr's celebrated block reduction algorithm, on which the best known attacks against the lattice-based encryption schemes mentioned above are based. Precisely understanding Kannan's algorithm is of prime importance for providing meaningful key sizes. In this paper we improve the complexity analyses of Kannan's algorithms and discuss the possibility of improving the underlying enumeration strategy.

We report a new headspace analytical method in which multiple headspace extraction is incorporated with the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on the measurement accuracy of conventional full evaporation headspace analysis. The results (using ethanol solution as the model sample) showed that the present technique effectively minimizes this problem. The proposed full evaporation multiple headspace extraction technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis.
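
In multiple headspace extraction, successive extractions of the same vial yield peak areas that decay geometrically, so the total analyte response can be extrapolated from a few extractions. A short sketch of that standard calculation follows (illustrative numbers, not data from the study):

```python
import numpy as np

# Peak areas from consecutive headspace extractions of one vial
# (illustrative values; areas decay geometrically: A_i = A_1 * q**(i-1)).
areas = np.array([1000.0, 620.0, 384.0, 238.0])

# Fit ln(A_i) = ln(A_1) + (i-1) * ln(q) by linear regression.
i = np.arange(len(areas))
slope, intercept = np.polyfit(i, np.log(areas), 1)
q = np.exp(slope)

# Total response for exhaustive extraction: sum of the geometric series.
total_area = np.exp(intercept) / (1.0 - q)
print(q, total_area)  # q ~ 0.62, total ~ 2630 area units
```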

Advocates for educational reform frequently call for policies to increase competition between schools because it is argued that market forces naturally lead to greater efficiencies, including improved student learning, when schools face competition. Researchers examining this issue are confronted with difficulties in defining reasonable measures…

Background The purpose of this situation analysis was to explore the views of health and non-health professionals working with women of childbearing age on current and future delivery of preconception...

consideration given the origin. The new method includes the origin information. Using parametric and non-parametric statistical tests and data analysis techniques, we show that the addition of requisition origin information … apply a weighting scale to those elements. Kunadhamrak and Hanaoka constructed a multi-criteria metric using a combination of a fuzzy-

Cysteine is a rare and conserved amino acid involved in most cellular functions. The thiol group of cysteine can be subjected to diverse oxidative modifications that regulate many physio-pathological states. In the present work, a Cysteine-specific Phosphonate Adaptable Tag (CysPAT) was synthesized to selectively label cysteine-containing peptides (Cys peptides), followed by their enrichment with titanium dioxide (TiO2) and subsequent mass spectrometric analysis. The CysPAT strategy was developed using a synthetic peptide and a standard protein, and subsequently the strategy was applied to protein lysates from HeLa cells, achieving high specificity and enrichment efficiency. In particular, for Cys proteome analysis, the method led to the identification of 7509 unique Cys peptides from 500 μg of HeLa cell lysate starting material. Furthermore, the method was developed to simultaneously enrich Cys peptides and phosphorylated peptides. This strategy was applied to SILAC-labeled HeLa cells subjected to 5 min of epidermal growth factor (EGF) stimulation. In total, 10440 unique reversibly modified Cys peptides (3855 proteins) and 7339 unique phosphopeptides (2234 proteins) were simultaneously identified from 250 μg of starting material. Significant regulation was observed in both phosphorylation and reversible Cys modification of proteins involved in EGFR signaling. Our data indicate that EGF stimulation can activate the well-known phosphorylation of EGFR and downstream signaling molecules, such as the mitogen-activated protein kinases MAPK1 and MAPK3; however, it also leads to substantial modulation of reversible cysteine modifications in numerous proteins. Several protein tyrosine phosphatases (PTPs) showed a reduction of the catalytic Cys site in the conserved putative phosphatase HC(X)5R motif, indicating an activation and subsequent de-phosphorylation of proteins involved in the EGF signaling pathway. Overall, the CysPAT strategy is a straightforward, easy and promising

With the advances in industry and commerce, passengers have become more accepting of environmental sustainability issues; thus, more people now choose to travel by bus. Government administration constitutes an important part of bus transportation services as the government gives the right-of-way to transportation companies allowing them to provide services. When these services are of poor quality, passengers may lodge complaints. The increase in consumer awareness and developments in wireless communication technologies have made it possible for passengers to easily and immediately submit complaints about transportation companies to government institutions, which has brought drastic changes to the supply–demand chain comprised of the public sector, transportation companies, and passengers. This study proposed the use of big data analysis technology including systematized case assignment and data visualization to improve management processes in the public sector and optimize customer complaint services. Taichung City, Taiwan, was selected as the research area. There, the customer complaint management process in the public sector was improved, effectively solving such issues as station-skipping, allowing the public sector to fully grasp the service level of transportation companies, improving the sustainability of bus operations, and supporting the sustainable development of the public sector–transportation company–passenger supply chain.

and improve the quality of the seam, the major cost centers - work in process, operator handling, and production control - were left far behind. Was it … skilled operation when performed manually; the curve must be stitched without puckering or irregular stitches and the line must remain parallel to the … equipped with side seam expanders. In this plant the Ajax Presses had been modified with a spring steel clamp on the back of the buck to secure the box

Noise is a very important factor which, in most cases, plays an antagonistic role in the vast field of image processing. Thus noise needs to be studied in great depth in order to improve the quality of images. The quantity of signal in an image corrupted by noise is generally described by the signal-to-noise ratio (SNR). Capturing multiple photos at different focus settings is a powerful approach for improving SNR. The paper analyses a framework for optimally balancing the tradeoffs between defocus and sensor noise by experimenting on synthetic as well as real video sequences. The method is first applied to a synthetic image, where the improvement in SNR is studied through the ability of the Hough transform to extract the number of lines with respect to the variation in SNR. The paper further experiments on real-time video sequences, where the improvement in SNR is analyzed using different edge operators such as Sobel, Canny, Prewitt, Roberts and Laplacian. The main aim is to detect the edges at different values of SNR, which is a prominent measure of the signal strength as well as the clarity of an image. The paper also explains in depth the modeling of noise, leading to a better understanding of SNR. The results obtained from both the synthetic image and the real-time video sequences demonstrate the increase in SNR with the increment in the total number of time slices in a fixed budget, leading to clearer pictures. This technique can be very effectively applied to capture high quality images from long distances.
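
As a toy illustration of the kind of measurement involved, the sketch below adds Gaussian noise to a synthetic image, computes the SNR, and extracts Sobel edges (scipy-based and illustrative; not the authors' code).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic image: a bright square on a dark background.
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0

noise = rng.normal(scale=0.2, size=clean.shape)
noisy = clean + noise

# SNR in dB: ratio of signal power to noise power.
snr_db = 10 * np.log10(np.mean(clean**2) / np.mean(noise**2))

# Sobel gradient magnitude as a simple edge detector.
edges = np.hypot(ndimage.sobel(noisy, axis=0), ndimage.sobel(noisy, axis=1))
print(f"SNR = {snr_db:.1f} dB, max edge response = {edges.max():.2f}")
```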

This paper investigates different strategies that can be used to improve the tracking accuracy of heliostats at Solar Two. The different strategies are analyzed using a geometrical error model to determine their performance over the course of a day. By using the performance of heliostats in representative locations of the field and on representative days of the year, an estimate of the annual performance of each strategy is presented.

Metaphor analysis procedures for uncovering participant conceptualizations have been well-established in qualitative research settings since the early 1980s; however, one common criticism of metaphor analysis is the trustworthiness of the findings. Namely, accurate determination of the conceptual metaphors held by participants based on the investigation of linguistic metaphors has been identified as a methodological issue because of the subjectivity involved in the interpretation; that is, because they are necessarily situated in specific social and cultural milieus, meanings of particular metaphors are not universally constructed nor understood. In light of these critiques, this article provides examples of two different triangulation methods that can be employed to supplement the trustworthiness of the findings when metaphor analysis methodologies are used.

In our work we present an application of some biometrical methods useful in genotype stability evaluation, namely the AMMI model, Joint Regression Analysis (JRA) and multiple comparison tests. A genotype stability analysis of oat (Avena sativa L.) grain yield was carried out using data of the Portuguese Plant Breeding Board: a sample of 22 different genotypes grown during the years 2002, 2003 and 2004 in six locations. In Ferreira et al. (2006) the authors state the relevance of the regression models and of the Additive Main Effects and Multiplicative Interactions (AMMI) model to study and to estimate phenotypic stability effects. As computational techniques we use the Zigzag algorithm to estimate the regression coefficients and the agricolae package available in R software for the AMMI model analysis.
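
The multiplicative part of an AMMI model is the SVD of the double-centered genotype-by-environment table; a minimal numpy sketch of that decomposition is shown below (illustrative, not the agricolae implementation).

```python
import numpy as np

def ammi_components(Y, n_terms=2):
    """AMMI: additive main effects (grand mean, genotype, environment)
    plus an SVD of the residual interaction matrix.
    Y is a genotypes x environments table of mean yields."""
    grand = Y.mean()
    g_eff = Y.mean(axis=1) - grand           # genotype main effects
    e_eff = Y.mean(axis=0) - grand           # environment main effects
    resid = Y - grand - g_eff[:, None] - e_eff[None, :]
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    # First n_terms multiplicative interaction terms (IPCA scores).
    g_scores = U[:, :n_terms] * np.sqrt(s[:n_terms])
    e_scores = Vt[:n_terms].T * np.sqrt(s[:n_terms])
    return g_eff, e_eff, g_scores, e_scores
```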

Accounting practices are flawed. As a consequence, the accounting data generated by firms are generally open to interpretation, often misleading and sometimes patently false. Yet, financial analysts place tremendous confidence in accounting data when appraising investments and investment strategies. The implications of financial analysis based on questionable information are numerous, and range from inexact analysis to acute investment error. To rectify this situation, this paper identifies a set of simple, yet highly effective corrective measures, which have the capacity to move accounting practice into a realm wherein accounting starts to ‘count what counts’. The net result would be delivery of accounting data that more accurately reflect firms’ economic realities and, as such, are more useful in the task of financial analysis.

research and medical applications. Wavelet transform (WT) is a new multi-resolution time-frequency analysis method. WT possesses localization features both … wavelet transform, the EEG signals are successfully decomposed and denoised. In this paper we also use a 'quasi-detrending' method for classification of EEG

Students' attitude towards science (SAS) is often a subject of investigation in science education research. Survey of rating scale is commonly used in the study of SAS. The present study illustrates how Rasch analysis can be used to provide psychometric information of SAS rating scales. The analyses were conducted on a 20-item SAS scale used in an…

The arrangement of plant cortical microtubules can reflect the physiological state of cells. However, little attention has been paid so far to quantitative image analysis of plant cortical microtubules. In this paper, the Bidimensional Empirical Mode Decomposition (BEMD) algorithm was applied in the preprocessing of the original microtubule image. The Intrinsic Mode Function 1 (IMF1) image obtained by the decomposition was then selected for texture analysis based on the Grey-Level Co-occurrence Matrix (GLCM) algorithm. In order to further verify its reliability, the proposed texture analysis method was used to distinguish different images of Arabidopsis microtubules. The results showed that the BEMD algorithm preserved edges well while reducing noise, and the geometrical characteristic of the texture was obvious. Four texture parameters extracted by GLCM clearly reflected the different arrangements in the two images of cortical microtubules. In summary, the results indicate that this method is feasible and effective for quantitative image analysis of plant cortical microtubules. It not only provides a new quantitative approach for the comprehensive study of the role played by microtubules in cell life activities but also supplies a reference for other similar studies.
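
For readers unfamiliar with GLCM texture features, a hedged scikit-image sketch follows; the four statistics named in the code are common GLCM parameters, though the abstract does not specify which four were used.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image

# Co-occurrence of grey levels at distance 1 for four directions.
glcm = graycomatrix(image, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

# Four common GLCM texture statistics (assumed; the paper does not name them).
for prop in ("contrast", "correlation", "energy", "homogeneity"):
    print(prop, graycoprops(glcm, prop).mean())
```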

We delineated 8 watersheds contributing to previously defined river reaches within the 1,468-km2 historical floodplain of the tidally influenced lower Columbia River and estuary. We assessed land-cover change at the watershed, reach, and restoration site scales by reclassifying remote-sensing data from the National Oceanic and Atmospheric Administration Coastal Change Analysis Program's land cover/land change product into forest, wetland, and urban categories. The analysis showed a 198.3 km2 loss of forest cover during the first 6 years of the Columbia Estuary Ecosystem Restoration Program, 2001–2006. Total measured urbanization in the contributing watersheds of the estuary during the full 1996–2006 change analysis period was 48.4 km2. Trends in forest gain/loss and urbanization differed between watersheds. Wetland gains and losses were within the margin of error of the satellite imagery analysis. No significant land-cover change was measured at restoration sites, although it was visible in aerial imagery; therefore, the 30-m land-cover product may not be appropriate for assessment of early-stage wetland restoration. These findings suggest that floodplain restoration sites in reaches downstream of watersheds with decreasing forest cover will be subject to increased sediment loads, and those downstream of urbanization will experience the effects of increased impervious surfaces on hydrologic processes.

To better engage Maine's family forest landowners our study used social network analysis: a computational social science method for identifying stakeholders, evaluating models of engagement, and targeting areas for enhanced partnerships. Interviews with researchers associated with a research center were conducted to identify how social network…

Eccentricity is one of the frequent faults of induction motors, and it may cause rub between the rotor and the stator. Early detection of significant rub from pure eccentricity can prolong the lifespan of induction motors. This paper is devoted to such mixed-fault diagnosis: eccentricity plus rub fault. The continuous wavelet transform (CWT) is employed to analyze vibration signals obtained from the motor body. An improved continuous wavelet transform is proposed to alleviate the frequency aliasing. Experimental results show that the proposed method can effectively distinguish the two types of faults: the single fault of eccentricity and the mixed fault of eccentricity plus rub.
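
A minimal PyWavelets sketch of a CWT scalogram on a synthetic vibration-like signal is shown below (generic CWT only; the paper's anti-aliasing improvement is not reproduced here, and the frequencies are illustrative).

```python
import numpy as np
import pywt

fs = 2000.0                        # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
# Synthetic vibration: 50 Hz rotation component plus a 400 Hz rub-like burst.
vib = np.sin(2 * np.pi * 50 * t)
vib[800:900] += 0.5 * np.sin(2 * np.pi * 400 * t[800:900])

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(vib, scales, "morl", sampling_period=1 / fs)

# |coefs| is the scalogram: rows = scales/frequencies, columns = time.
print(coefs.shape, freqs[[0, -1]])
```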

In April 2007, pp reactions at 3.5 GeV were measured with the HADES spectrometer at GSI. In these elementary reactions, one can reconstruct resonances via the missing-mass technique. A kinematic refit has been employed to recalculate the momenta of measured tracks in exclusive reactions under the assumption of physical constraints. This well-known procedure can dramatically improve both the invariant- and missing-mass distributions, increasing in this way the signal-to-background ratio. The mathematical procedure underlying the kinematic fit is presented, as well as experimental results achieved for the reconstruction of the η and ω mesons and the Σ(1385)+ resonance.
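
The standard least-squares kinematic fit with Lagrange multipliers has a closed form for linearized constraints; a hedged numpy sketch follows (a generic textbook formulation, not the HADES implementation).

```python
import numpy as np

def kinematic_fit(x0, V, H, d):
    """Least-squares refit of measured parameters x0 (covariance V)
    subject to linearized constraints H @ x = d, via Lagrange multipliers:
    minimize (x - x0)^T V^-1 (x - x0)  s.t.  H x = d."""
    r = H @ x0 - d                      # constraint residuals
    S = H @ V @ H.T                     # covariance of the residuals
    lam = np.linalg.solve(S, r)         # Lagrange multipliers
    x = x0 - V @ H.T @ lam              # constrained (refitted) parameters
    chi2 = float(r @ lam)               # fit quality
    return x, chi2

# Toy example: two measurements constrained to sum to 10.
x, chi2 = kinematic_fit(np.array([4.0, 5.0]), np.eye(2),
                        np.array([[1.0, 1.0]]), np.array([10.0]))
print(x, chi2)  # -> [4.5, 5.5], chi2 = 0.5
```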

With recent events such as the Chinese ASAT test in 2007 and the USA 193 intercept in 2008, many satellite operators are becoming increasingly aware of the potential threat to their satellites as the result of orbital debris or even other satellites. However, to be successful at conjunction monitoring and collision avoidance requires accurate orbital information for as many space objects (payloads, dead satellites, rocket bodies, and debris) as possible. Given the current capabilities of the US Space Surveillance Network (SSN), approximately 18,500 objects are now being tracked and orbital data (in the form of two-line element sets) is available to satellite operators for 11,750 of them (as of 2008 September 1). The capability to automatically process this orbital data to look for close conjunctions and provide that information to satellite operators via the Internet has been continuously available on CelesTrak, in the form of Satellite Orbital Conjunction Reports Assessing Threatening Encounters in Space (SOCRATES), since May 2004. Those reports are used by many operators as one way to keep apprised of these potential threats. However, the two-line element sets (TLEs) are generated using non-cooperative tracking via the SSN's network of radar and optical sensors. As a result, the relatively low accuracy of the data results in a large number of false alarms that satellite operators must routinely deal with. Yet, satellite operators typically perform orbit maintenance for their own satellites, using active ranging and GPS systems. These data are often an order of magnitude more accurate than those available using TLEs. When combined (in the form of ephemerides) with maneuver planning information, the ability to maintain predictive awareness increases significantly. And when satellite operators share this data, the improved space situational awareness, particularly in the crowded geosynchronous belt, can be dramatic and the number of false alarms can be reduced

Software Process Improvement (SPI) has played a dominant role in systems development innovation research and practice for more than 20 years. However, while extant theory acknowledges the political nature of SPI initiatives, researchers have yet to empirically investigate and theorize about how organizational politics might impact outcomes. Against this backdrop, we apply metatriangulation to build new theory based on rich data from an SPI project in four business units at a high-tech firm. Reflecting the diverse ways in which politics manifests, we first analyze behaviors and outcomes in each unit...

Recently, the addition of casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) into glass ionomer cements (GICs) has attracted interest due to its remineralization of teeth and its antibacterial effects. However, it should be verified that the incorporation of CPP-ACP does not have significant adverse effects on the mechanical properties. The purpose of this study was to evaluate the effects of the addition of CPP-ACP on the mechanical properties of a luting and lining GIC. The first step was to synthesize the CPP-ACP. Then CPP-ACP at concentrations of 1%, 1.56% and 2% was added into a luting and lining GIC. GIC without CPP-ACP was used as a control group. The results revealed that the incorporation of CPP-ACP up to 1.56% (w/w) increased the flexural strength (29%), diametral tensile strength (36%) and microhardness (18%), followed by a reduction in these mechanical properties at 2% (w/w) CPP-ACP. The wear rate was significantly decreased (23%) at the 1.56% (w/w) concentration of CPP-ACP and increased at 2% (w/w). Accordingly, the addition of 1.56% (w/w) CPP-ACP into luting and lining GIC had no adverse effect on its mechanical properties and could be used in clinical practice.

Aim: The purpose was to compare the effect of 0.2% sodium fluoride mouthwash and casein phosphopeptide-amorphous calcium phosphate paste on prevention of dentin erosion. Materials and Methods: Buccal surfaces of 36 sound premolar teeth were ground flat and polished with abrasive discs. Half the polished surfaces were covered with tape to maintain a reference surface. Samples were randomly allocated into three groups. Group A was pretreated with tooth mousse (TM) 4 times a day for 5 days. Group B was pretreated with 0.2% sodium fluoride mouthwash 4 times a day for 5 days. Group C was considered as the control group with no pretreatment. In the next step, the samples were exposed to Coca-Cola 4 times a day for 3 days. After each erosive cycle, the samples were rinsed with deionized water and stored in artificial saliva. The surface loss was determined using profilometry. Results: The erosion in both Groups A and B was less than in the control group. The surface loss in the mouthwash group was significantly lower than in the control group. Erosion in the TM group was more than in the mouthwash group and less than in the control group. Conclusion: Sodium fluoride mouthwash is more effective for prevention of dentin erosion.

To assess the comparative efficacy of fluoride varnish and casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) complex vis-à-vis Streptococcus mutans in plaque, and thereby the role that these two agents could play in the prevention of dental caries. A cluster sample of 120 caries-inactive individuals belonging to moderate and high caries-risk groups was selected from the 3-5-year-old age group based on the criteria given by Krasse, and randomized to four groups, namely, fluoride varnish - Group I, CPP-ACP complex - Group II, mixture of CPP-ACP complex and fluoride - Group III, and routine oral hygiene procedures as control - Group IV. The results thus obtained were analyzed using Statistical Package for the Social Sciences (SPSS) version 16. A statistically significant difference in the pre- and post-application scores of S. mutans (P … fluoride group being the most proficient. Materials such as fluoride varnish, CPP-ACP, and CPP-ACP plus fluoride protect the tooth structure, preserving the integrity of the primary dentition, with the most encouraging results being with CPP-ACP plus fluoride.

The effects of casein phosphopeptide amorphous calcium fluoride phosphate (CPP-ACFP) paste vs. control paste on the remineralization of white spot caries lesions and on plaque composition were tested in a double-blind prospective randomized clinical trial. Fifty-four orthodontic patients, with multiple white spot lesions observed upon the removal of fixed appliances, were followed up for 3 months. Subjects were included and randomly assigned to either CPP-ACFP paste or control paste, for use supplementary to their normal oral hygiene. Caries regression was assessed on quantitative light-induced fluorescence (QLF) images captured directly after debonding and 6 and 12 wk thereafter. The total counts and proportions of aciduric bacteria, Streptococcus mutans, and Lactobacillus spp. were measured in plaque samples obtained just before debonding, and 6 and 12 wk afterwards. A significant decrease in fluorescence loss was found with respect to baseline for both groups and no difference was found between groups. The size of the lesion area did not change significantly over time or between the groups. The percentages of aciduric bacteria and of S. mutans decreased from 47.4 to 38.1% and from 9.6 to 6.6%, respectively. No differences were found between groups. We observed no clinical advantage for use of the CPP-ACFP paste supplementary to normal oral hygiene over the time span of 12 wk.

Statement of the Problem: With the recent focus of research on the development of non-invasive treatment modalities, the non-invasive treatment of early carious lesions by remineralization would bring a major advance in the clinical management of these dental defects. Casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) is considered to be effective in tooth remineralization. Purpose: The aim of this in-vitro study was to compare the effects of whey and CPP-ACP in increasing enamel microhardness. Materials and Method: Microhardness of 30 sound human permanent premolars was measured before and after 8-minute immersion of the samples in Coca-Cola. The teeth were then randomly divided into 3 groups and immersed in artificial saliva, whey, and tooth mousse for 10 minutes. The changes of microhardness within each group and among the groups were recorded and analyzed using paired t-test. Results: The microhardness increased within each group and between the groups; this increase was statistically significant (p = 0.009). Conclusion: The effect of whey on increasing the enamel microhardness was greater than that of tooth mousse.

Scale-Invariant Feature Transform (SIFT) is being investigated more and more to realize a less-constrained hand vein recognition system. Contrast enhancement (CE), compensating for deficient dynamic range, is a must for a SIFT-based framework to improve the performance. However, our experiments provide evidence of a negative influence of CE on SIFT matching. The number of keypoints extracted by gradient-based detectors increases greatly with different CE methods, while on the other hand the matching result of the extracted invariant descriptors is negatively influenced in terms of Precision-Recall (PR) and Equal Error Rate (EER). Rigorous experiments with state-of-the-art and other CE methods adopted in published SIFT-based hand vein recognition systems demonstrate the influence. What is more, an improved SIFT model that imports the kernel of RootSIFT and a Mirror Match Strategy into a unified framework is proposed to make use of the positive change in keypoints and make up for the negative influence brought by CE.
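
RootSIFT is a simple descriptor transform: L1-normalize each SIFT descriptor, then take element-wise square roots, so that Euclidean distances between descriptors approximate the Hellinger kernel. A short numpy sketch:

```python
import numpy as np

def root_sift(descriptors, eps=1e-7):
    """Convert SIFT descriptors (n x 128) to RootSIFT:
    L1-normalize each descriptor, then take element-wise square roots;
    the resulting rows have unit L2 norm."""
    d = descriptors / (np.abs(descriptors).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(d)

# Example with random stand-in descriptors (real ones come from a SIFT detector).
desc = np.random.default_rng(0).random((5, 128)).astype(np.float32)
print(root_sift(desc).shape)  # (5, 128)
```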

The consumption of conventional energy sources and environmental concerns have resulted in rapid growth in the amount of renewable energy introduced to power systems. With the help of distributed generation (DG), improvements in power loss and voltage profile can be salient benefits. However, studies show that improper placement and sizing of an energy storage system (ESS) lead to undesired power loss and a risk to voltage stability, especially in the case of high renewable energy penetration. To solve the problem, this paper sets up a microgrid based on the IEEE 34-bus distribution system which consists of a wind power generation system, a photovoltaic generation system, a diesel generation system, and an energy storage system associated with various types of load. Furthermore, the particle swarm optimization (PSO) algorithm is proposed in the paper to minimize the power loss and improve the system voltage profiles by optimally managing the different sorts of distributed generation under consideration of the worst condition of renewable energy production. The established IEEE 34-bus system is adopted to perform case studies. The detailed simulation results for each case clearly demonstrate the necessity of optimal management of the system operation and the effectiveness of the proposed method.
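
For orientation, a minimal generic PSO loop is sketched below with a stand-in objective; the actual study would minimize network power loss evaluated by a load-flow calculation, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for the power-loss evaluation (a real study would run a
    # load-flow here); the sphere function keeps the sketch self-contained.
    return np.sum(x**2, axis=1)

n_particles, dim, iters = 30, 4, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration terms

x = rng.uniform(-5, 5, (n_particles, dim))     # positions (e.g. ESS size/site)
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = x + v
    val = objective(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[pbest_val.argmin()]

print(gbest, pbest_val.min())                  # near the origin for the sphere
```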

The radial electric field (E_r) properties in LHD have been investigated to provide guidance towards improved confinement with possible E_r transition and bifurcation. The ambipolar E_r is obtained from the neoclassical flux based on analytical formulae. This approach is appropriate for clarifying ambipolar E_r properties over a wide range of temperature and density in a more transparent way. The comparison between the calculated E_r and the experimentally measured one has shown qualitatively good agreement, such as the threshold density for the transition from ion root to electron root. The calculations also reproduce well the experimentally observed tendency that the electron root becomes possible with increasing temperatures even at higher density, and that the ion root is enhanced at higher density. Based on the usefulness of this approach for analyzing E_r in LHD, calculations over a wide range have been performed to clarify the parameter region of interest where multiple solutions of E_r can exist. This is the region where E_r transition and bifurcation may be realized, as already experimentally confirmed in CHS. The systematic calculations give a comprehensive understanding of the experimentally observed E_r properties, which indicates an optimum path towards improved confinement.

Electric water heaters are widely used all over the world and can be categorized into two types, instant water heaters and storage-type water heaters. For 6-litre water heaters, the energy consumption of the storage type is much higher. As energy is an important factor for the economic development of a country, there is a need to save energy, which puts the focus on storage-type water heaters. In this work, an existing 6-litre water heater model is converted from a 4-star to a 5-star rating by thermal analysis and insulation. The theoretical calculation of the required glass wool thickness is followed by practical testing of the product against BEE norms, which confirmed the 5-star results. Finally, thermal analysis is carried out for theoretical and practical verification of the product.

Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new "joint effects" statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al
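
As a baseline for what these statistics are compared against, here is a hedged sketch of the standard logistic-regression interaction test (a 1-df likelihood-ratio test of the g1*g2 term) on simulated genotype data; the effect sizes are illustrative, not from the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000
g1 = rng.integers(0, 3, n)                    # genotype coded 0/1/2 at locus 1
g2 = rng.integers(0, 3, n)                    # genotype at locus 2
logit = -1.0 + 0.2*g1 + 0.2*g2 + 0.3*g1*g2    # simulated interaction effect
case = (rng.random(n) < 1/(1 + np.exp(-logit))).astype(float)

X_full = sm.add_constant(np.column_stack([g1, g2, g1*g2]))
X_null = sm.add_constant(np.column_stack([g1, g2]))
full = sm.Logit(case, X_full).fit(disp=0)
null = sm.Logit(case, X_null).fit(disp=0)

# 1-df likelihood-ratio test for the interaction term.
lr = 2 * (full.llf - null.llf)
print(lr, chi2.sf(lr, df=1))
```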

filter patch containing the ferromagnetic debris is typically of most interest, as critical oil-wetted components are typically made from ferrous alloys … are typically manufactured using special steels with specific alloying elements. Elemental analysis using a Scanning Electron Microscope (SEM) with … [figure: filter debris patch (left) and extracted ferrous debris patch (right)] Results: A total of 48 filters were analysed during the trial from all four

The important role of histone posttranslational modifications, particularly methylation and acetylation, in Plasmodium falciparum gene regulation has been established. However, the role of histone phosphorylation remains understudied. Here, we investigate histone phosphorylation utilizing liquid chromatography and tandem mass spectrometry to analyze histones extracted from asexual blood stages using two improved protocols to enhance preservation of PTMs. Enrichment for phosphopeptides led to the detection of 14 histone phospho-modifications in P. falciparum. The majority of phosphorylation sites were observed at the N-terminal regions of various histones and were frequently observed adjacent to acetylated lysines. We also report the identification of one novel member of the P. falciparum histone phosphosite binding protein repertoire, Pf14-3-3I. Recombinant Pf14-3-3I protein bound to purified parasite histones. In silico structural analysis of Pf14-3-3 proteins revealed that residues responsible for binding to histone H3 S10ph and/or S28ph are conserved at the primary and the tertiary structure levels. Using a battery of H3-specific phosphopeptides, we demonstrate that Pf14-3-3I preferentially binds to H3S28ph over H3S10ph, independent of modification of neighbouring residues like H3S10phK14ac and H3S28phS32ph. Our data provide key insight into histone phosphorylation sites. The identification of a second member of the histone modification reading machinery suggests a widespread use of histone phosphorylation in the control of various nuclear processes in malaria parasites.

Continuous Phase Modulation (CPM) schemes are advantageous for low-power radios. The constant-envelope transmit signal is more efficient for both linear and non-linear amplifier architectures. A standard coherent CPM receiver can take advantage of modulation memory and is more complex than a coherent Phase Shift Keyed receiver. But the CPM signal can be demodulated non-coherently and still take advantage of the trellis structure inherent in the modulation. Prior analyses of several different non-coherent CPM schemes have been provided, many achieving coherent or near-coherent performance. In this paper we discuss a new, reduced-complexity decoder that improves upon non-coherent performance. In addition, this new algorithm generates soft decision metrics that allow the addition of a forward error correction scheme (an outer code) with coherent-equivalent performance gains.

As there is enormous growth in the web in terms of web sites, the size of web usage data is also increasing gradually. This web usage data plays a vital role in the effective management of web sites, and it is stored by the web server in a file called the weblog. In order to discover the knowledge required for improving the performance of websites, we need to apply the best preprocessing methodology to the server weblog file. Data preprocessing is a phase which automatically identifies meaningful patterns and user behavior. Analyzing weblog data has so far been a challenging task in the area of web usage mining. In this paper we propose an effective and enhanced data preprocessing methodology which produces efficient usage patterns and reduces the size of the weblog down to 75-80% of its initial size. Experimental results are also shown in the following chapters.
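
A small sketch in the spirit the abstract describes: parse Common Log Format lines, then drop entries (static resources, robot probes, failed requests) that carry no user-behavior information. The regex and filter rules are illustrative assumptions, not the authors' methodology.

```python
import re

# Common Log Format: host ident user [time] "request" status size
LOG_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

STATIC = (".css", ".js", ".png", ".jpg", ".gif", ".ico")

def preprocess(lines):
    """Keep successful page requests; drop static files and robot probes."""
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue                       # malformed entry
        host, time, method, url, status, size = m.groups()
        if method != "GET" or not status.startswith("2"):
            continue                       # errors, POSTs, redirects
        if url.lower().endswith(STATIC) or url == "/robots.txt":
            continue                       # no user-behavior information
        yield {"host": host, "time": time, "url": url}

sample = ['1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326']
print(list(preprocess(sample)))
```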

In the Phase I project we concentrated on three technical objectives to demonstrate the feasibility of the Phase II project: (1) the development of a parallel MDSplus data handler, (2) the parallelization of existing fusion data analysis packages, and (3) the development of techniques to automatically generate parallelized code using pre-compiler directives. We summarize the results of the Phase I research for each of these objectives below. We also describe below additional accomplishments related to the development of the TaskDL and mpiDL parallelization packages.

This paper describes the application of task analysis within the design process of a Web-based information system for managing clinical information in hemophilia care, in order to improve the requirements elicitation and, consequently, to validate the domain model obtained in a previous phase of the design process (system analysis). The use of task analysis in this case proved to be a practical and efficient way to improve the requirements engineering process by involving users in the design process.

EVALUATING AND IMPROVING THE SAMA (SEGMENTATION ANALYSIS AND MARKET ASSESSMENT) RECRUITING MODEL, by William N. Marmion, Master's Thesis, June 2015. … and potential within their areas of responsibility. One of the tools used by USAREC is the Segmentation Analysis and Market Assessment (SAMA) tool

This work presents a new software tool for morphometric analysis of drainage networks based on the methods of Hack (1973) and Etchebehere et al. (2004). This tool is applicable to studies of morphotectonics and neotectonics. The software used a digital elevation model (DEM) to identify the relief breakpoints along drainage profiles (knickpoints). The program was coded in Python for use on the ArcGIS platform and is called Knickpoint Finder. A study area was selected to test and evaluate the software's ability to analyze and identify neotectonic morphostructures based on the morphology of the terrain. For an assessment of its validity, we chose an area of the James River basin, which covers most of the Piedmont area of Virginia (USA), which is an area of constant intraplate seismicity and non-orogenic active tectonics and exhibits a relatively homogeneous geodesic surface currently being altered by the seismogenic features of the region. After using the tool in the chosen area, we found that the knickpoint locations are associated with the geologic structures, epicenters of recent earthquakes, and drainages with rectilinear anomalies. The regional analysis demanded the use of a spatial representation of the data after processing using Knickpoint Finder. The results were satisfactory in terms of the correlation of dense areas of knickpoints with active lineaments and the rapidity of the identification of deformed areas. Therefore, this software tool may be considered useful in neotectonic analyses of large areas and may be applied to any area where there is DEM coverage.
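
The Knickpoint Finder tool itself is tied to the ArcGIS platform, but its core idea, flagging slope breaks along a stream's distance-elevation profile, can be sketched in a few lines of Python; the threshold below is an illustrative assumption, not the tool's criterion.

```python
import numpy as np

def find_knickpoints(distance, elevation, threshold=0.02):
    """Flag breakpoints along a longitudinal stream profile where the
    local channel slope changes abruptly between adjacent segments.
    distance/elevation: arrays along the profile (downstream order);
    threshold: minimum slope change (illustrative value)."""
    slope = np.diff(elevation) / np.diff(distance)    # segment slopes
    slope_change = np.abs(np.diff(slope))             # change between segments
    return np.where(slope_change > threshold)[0] + 1  # indices of breakpoints

# Toy profile with a sharp slope break halfway down the reach.
d = np.linspace(0, 5000, 51)
z = np.where(d < 2500, 500 - 0.05*d, 375 - 0.01*(d - 2500))
print(find_knickpoints(d, z))  # -> segment index near the break at 2500 m
```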

Because of the high neutron and gamma-ray intensities generated during bombardment of a thallium-203 target, a thallium target-room shield and different ways of improving it have been investigated. Leakage neutron and gamma-ray dose rates at various points behind the shield are calculated by simulating the transport of neutrons and photons using the Monte Carlo N-Particle (MCNP) transport computer code. By considering the target-room geometry, its associated shield, and the neutron and gamma-ray source strengths and spectra, three designs for enhancing shield performance have been analysed: a shielding door at the maze entrance, covering the maze walls with layers of some effective materials, and adding a shadow-shield in the target room in front of the radiation source. Dose calculations were carried out separately for different materials and dimensions for all the shielding scenarios considered. The shadow-shield proved to be a suitable option for reducing the neutron and gamma dose equivalent. A 7.5-cm thick polyethylene shadow-shield reduces both the dose equivalent rate at the maze entrance door and the leakage from the shield by a factor of 3.

This paper addresses the statistical performance of subspace DoA estimation using a sensor array, in the asymptotic regime where the number of samples and sensors both converge to infinity at the same rate. Improved subspace DoA estimators (termed G-MUSIC) were derived in previous works and were shown to be consistent and asymptotically Gaussian distributed in the case where the number of sources and their DoA remain fixed. In this case, which models widely spaced DoA scenarios, it is proved in the present paper that the traditional MUSIC method also provides consistent DoA estimates having the same asymptotic variances as the G-MUSIC estimates. The case of DoA that are spaced on the order of a beamwidth, which models closely spaced sources, is also considered. It is shown that G-MUSIC estimates are still able to consistently separate the sources, while this is no longer the case for the MUSIC ones. The asymptotic variances of G-MUSIC estimates are also evaluated.
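
For reference, the classical MUSIC pseudospectrum that G-MUSIC refines is easy to state in numpy; the sketch below is textbook MUSIC for a uniform linear array, not the G-MUSIC correction.

```python
import numpy as np

def music_spectrum(R, n_sources, n_sensors, grid):
    """Classical MUSIC: project ULA steering vectors onto the noise
    subspace of the covariance matrix R (n_sensors x n_sensors);
    peaks of the returned spectrum locate the DoA."""
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    En = eigvecs[:, :n_sensors - n_sources]         # noise subspace
    spectrum = []
    for theta in grid:                              # angles in radians
        a = np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a)**2)
    return np.array(spectrum)
```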

Assessing waste production in schools highlights the contribution of school children and school staff to the total amount of waste generated in a region, as well as any poor recycling practices (the so-called separate collection of waste) in schools by the students, which could be improved through educational activities. Educating young people about the importance of environmental issues is essential, since instilling the right behavior in school children also benefits the behavior of their families. The way waste management was carried out in different schools in Trento (northern Italy) was analyzed: a primary school, a secondary school, and three high schools were taken as case studies. The possible influence of the age of the students and of the various activities carried out within the schools on behaviors in separating waste was also evaluated. The results showed that waste production depended not only on the size of the institutes and the number of occupants but, especially, on the type of activities carried out in addition to ordinary classes and on the habits of both pupils and staff. In light of the results obtained, corrective measures were proposed to the schools, aimed at increasing students' awareness of the importance of correct waste management behavior and the application of good recycling practices.

In a previous study it was shown that a simplified expression for the stiffness of the plate member in a bolt-plate assembly can be found. The stiffnesses of the bolt and the connected plates are the primary quantities that control the lifetime of a dynamically loaded connection. The present study...... of stiffnesses is extended to include different material parameters by including the influence of Poisson's ratio. Two simple practical formulas are suggested and their accuracies are documented for different bolts and different materials (Poisson's ratio). Secondly, the contact analysis between the bolt head...... and the plate is extended by the possibility of designing a gap, that is, a nonuniform distance between the bolt and plate before prestressing. Designing the gap function generates the possibility for a better stress field by which the stiffness of the bolt is lowered, and at the same time the stiffness...
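
The paper's own two formulas are elided above. As a stand-in, the sketch below implements the classical 30-degree pressure-cone (frustum) member-stiffness approximation found in machine-design texts such as Shigley, together with the simplest cylindrical bolt model; these are explicitly not the formulas proposed in the study.

```python
import math

def member_stiffness(E, d, l):
    """Plate/member stiffness by the classical 30-degree pressure-cone
    (frustum) approximation from machine-design texts (e.g. Shigley),
    with washer-face diameter 1.5*d. NOT the study's formulas.
    E: elastic modulus, d: bolt diameter, l: grip length (consistent units).
    """
    t = math.tan(math.radians(30))  # ~0.5774
    return math.pi * E * d * t / math.log(5 * (t * l + 0.5 * d) /
                                          (t * l + 2.5 * d))

def bolt_stiffness(E, d, l):
    """Simplest bolt model: a uniform cylinder of diameter d and length l."""
    A = math.pi * d ** 2 / 4
    return A * E / l

# Fraction of an external load carried by the bolt in a prestressed joint:
# C = k_b / (k_b + k_m); lowering k_b (e.g. via the gap design discussed
# above) reduces the alternating load seen by the bolt.
```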

Health technology assessment (HTA) is the multidisciplinary study of the implications of the development, diffusion and use of health technologies. It supports health-policy decisions by providing a joint knowledge base for decision-makers. To increase its policy relevance, HTA tries to extend beyond effectiveness and costs to also considering the social, organizational and ethical implications of technologies. However, a commonly accepted method for analysing the ethical aspects of health technologies is lacking. This paper describes a model for ethical analysis of health technology that is easy and flexible to use in different organizational settings and cultures. The model is part of the EUnetHTA project, which focuses on the transferability of HTAs between countries. The EUnetHTA ethics model is based on the insight that the whole HTA process is value laden. It is not sufficient......

Analysis of saponins by thin layer chromatography (TLC) is reported. The solvent system was n-butanol:water:acetic acid (84:14:7). Detection of saponins on the TLC plates after development and air-drying was done by immersion in a suspension of sheep erythrocytes, followed by washing off the excess blood on the plate surface. Saponins appeared as white spots against a pink background. The protocol provided specific detection of saponins in saponin-enriched extracts from Aesculus indica (Wall. ex Camb.) Hook.f., Lonicera japonica Thunb., Silene inflata Sm., Sapindus mukorossi Gaertn., Chlorophytum borivilianum Santapau & Fernandes, Asparagus adscendens Roxb., Asparagus racemosus Willd., Agave americana L., and Camellia sinensis [L.] O. Kuntze. The protocol is convenient and inexpensive, does not require any corrosive chemicals, and provides specific detection of saponins.

The contribution of functional virtual prototyping to vehicle chassis development is presented. The topics considered were reform analysis and improvement design during vehicle chassis development. A coordinate frame based on the digital model was established, and the main CAE analysis methods, multi-body system dynamics and finite element analysis, were applied to the digital model built with CAD/CAM software. The method was applied to vehicle chassis reform analysis and improvement design; all analysis and design projects were implemented on a uniform digital model, and the development was carried through effectively.

This presentation describes the basic components of software-based energy analysis for residential buildings, explores the concepts of 'error' and 'accuracy' when analysis predictions are compared to measured data, and explains how NREL is working to continuously improve the accuracy of energy analysis methods.

The author has identified the following significant results. The spectral, spatial, and temporal characteristics of wheat and other signatures in LANDSAT multispectral scanner data were examined through empirical analysis and simulation. Irrigation patterns varied widely within Kansas: 88 percent of wheat acreage in Finney was irrigated and 24 percent in Morton, as opposed to less than 3 percent for the western two-thirds of the state. Irrigation practice was clearly correlated with the observed spectral response, and wheat variety differences produced observable spectral differences due to leaf coloration and different dates of maturation. Between-field differences were generally greater than within-field differences, and boundary pixels produced spectral features distinct from those within field centers. Multiclass boundary pixels contributed much of the observed bias in proportion estimates. The variability between signatures obtained from different draws of training data decreased as the sample size became larger; the resulting signatures also became more robust, and the particular decision threshold value became less important.

An area of increasing interest for the next generation of aircraft is autonomy and the integration of increasingly autonomous systems into the national airspace. Such integration requires humans to work closely with autonomous systems, forming human and autonomous agent teams. The intention behind such teaming is that a team composed of both humans and autonomous agents will operate better than homogeneous teams. Procedures exist for licensing pilots to operate in the national airspace system, and work is under way to define methods for validating the function of autonomous systems; however, there is no method in place for assessing the interaction of these two disparate systems. Moreover, these systems are currently operated primarily by subject matter experts, limiting their use and the benefits of such teams. Providing additional information about the ongoing mission to the operator can increase usability and allow for operation by non-experts. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables prediction of the operator's intent and allows the interface to supply the information the operator desires.

……monitoring of content that is accessible. The study examines risks associated with information security, technological change, and the continued popularity of Scouting. Mitigation is based on the system functions that are defined. The approach to developing an improved system for facilitating Boy Scout leader functions was iterative, with insights into capabilities emerging in the course of working through the use cases and sequence diagrams.

The current regulatory analysis of the financial condition of insolvent organizations has some disadvantages and does not account for the features of analysis based on consolidated financial statements under IFRS and GAAP. In this work, on the basis of a comparative analysis of the financial condition of a number of large Russian companies, calculated from their accounting statements prepared under Russian accounting standards, IFRS and GAAP, proposals are developed to improve the analysis of the financial condition of insolvent institutions.

High manganese steels can damage the differential thermal analysis (DTA) instrument due to manganese evaporation during high-temperature experiments. After analyzing the relationship between residual oxygen and manganese evaporation, tantalum metal was employed to modify the crucible of the DTA, and a zirconium getter together with strict gas purification measures was applied to control the volatilization of manganese. With these modifications, the problems of thermocouple damage and DTA instrument contamination were successfully resolved. Cobalt samples were adopted to calibrate the accuracy of the DTA instrument under the same trial conditions as the high manganese steel samples, and the detection error was confirmed to be less than 1 °C. Liquidus and solidus temperatures of high Mn steels were measured by the improved DTA method. It was found that the liquidus temperatures of the samples increased linearly with the heating rate. To eliminate the effect of the heating rate, the equilibrium liquidus temperature was determined by fitting the liquidus temperatures measured at different heating rates, and is referred to as the real liquidus temperature. No clear relationship between solidus temperature and heating rate was found, and the solidus temperature was finally set as the average value of several experimental data points.
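
The zero-heating-rate extrapolation described above is a simple linear fit. A minimal sketch, with assumed heating rates and liquidus readings standing in for the measured data:

```python
import numpy as np

# Hypothetical DTA readings: the apparent liquidus (deg C) rises roughly
# linearly with heating rate; extrapolating to zero rate gives the
# "equilibrium" (real) liquidus, as described above.
heating_rate = np.array([5.0, 10.0, 15.0, 20.0])        # K/min (assumed)
liquidus = np.array([1395.0, 1398.1, 1401.2, 1403.9])   # deg C (assumed)

slope, intercept = np.polyfit(heating_rate, liquidus, 1)
print(f"equilibrium liquidus ~ {intercept:.1f} deg C")  # value at zero rate
```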

Despite expert guidelines, gaps persist in the quality of care for children with asthma. This study sought to identify barriers and potential interventions to improve compliance with national asthma prevention guidelines at a single academic pediatric primary care clinic. Using the plan-do-check-act (PDCA) quality improvement framework and fishbone analysis, several barriers to consistent asthma processes and possible interventions were identified by a group of key stakeholders. Two interventions were implemented using the electronic medical record (EMR). Physician documentation of asthma quality measures was analyzed before the intervention and at 2 subsequent time points over 16 months. Documentation of asthma action plans (core group P...... asthma care in a pediatric primary care setting.

Although many regard it as the most important step of life cycle assessment, improvement analysis is given relatively little attention in the literature. Most available improvement approaches are highly subjective, and traditional LCA methods often do not account for resources other than fossil fuels. In this work exergy is evaluated as a thermodynamically rigorous way of identifying process improvement opportunities. As a case study, a novel process for producing titanium dioxide nanoparticles is considered. A traditional impact assessment, a first-law energy analysis, and an exergy analysis are done at both the process and life cycle scales. The results indicate that exergy analysis provides insights not available via other methods, especially for identifying the unit operations with the greatest potential for improvement. Exergetic resource accounting at the life cycle scale shows that other materials are at least as significant as fossil fuels for the production of TiO2 nanoparticles in this process.
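
As context for the exergy accounting, the specific physical (flow) exergy of a stream relative to a dead state is the standard expression ex = (h - h0) - T0 (s - s0). A minimal helper, with property values assumed to come from elsewhere (e.g., steam tables):

```python
def flow_exergy(h, s, h0, s0, T0=298.15):
    """Specific physical (flow) exergy of a stream relative to a dead state.

    h, s   : specific enthalpy (kJ/kg) and entropy (kJ/kg.K) of the stream
    h0, s0 : the same properties evaluated at the dead state (T0, p0)
    T0     : dead-state temperature in kelvin
    """
    return (h - h0) - T0 * (s - s0)
```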

Objective: The aim of this study was to compare the effects of casein phosphopeptide-amorphous calcium phosphate (CPP-ACP) and 1.23% acidulated phosphate fluoride (APF; pH 3.5) on the microhardness of enamel treated with a bleaching agent. Materials and Method: Enamel slices (n=32; 2×4 mm) were obtained from 8 mandibular permanent molar teeth. Specimens were embedded into acrylic resin blocks with the enamel surfaces facing upwards. Vickers microhardness (VHN) values of the specimens were recorded at baseline. The specimens were randomly divided into 4 experimental groups as follows: Group 1: no treatment (control), Group 2: 35% hydrogen peroxide (HP), Group 3: HP + CPP-ACP, Group 4: HP + APF application. After the treatments, VHN values were measured and recorded again. Specimens were stored in artificial saliva at 37 °C for 1 week. After 1 week a second application was done and the VHN of the specimens was registered once more. Data were statistically analyzed with ANOVA and Tukey post hoc tests. Values obtained at baseline and at the first and second applications were compared using paired samples t-tests (α=0.05). Results: In inter-group comparisons, no statistically significant difference in the enamel microhardness values was found between the baseline and the first and second applications (p>0.05). In intra-group comparisons, again, no statistically significant difference in the enamel microhardness values was found between the baseline and the first and second applications (p>0.05). Conclusion: Within the limitations of this study, it can be concluded that neither the HP application nor the CPP-ACP or APF application after HP had any significant effect on enamel microhardness.

Phosphorylation is a key regulatory factor in all aspects of eukaryotic biology, including the regulation of plant membrane-bound transport proteins. To date, mass spectrometry (MS) has been introduced as a powerful technology for the study of post-translational modifications (PTMs), including protein...... important physiological functions, such as stomata aperture, cell elongation, or cellular pH regulation. It is known that the activity of plant plasma membrane H+-ATPase is regulated by phosphorylation. Therefore, we first investigated the phosphorylation profile of plant H+-ATPase by enriching the phosphopeptides with optimized TiO2 and IMAC enrichment methods prior to MS analysis. We further investigated the global phosphorylation profile of the whole plant plasma membrane proteins using the combination of our recently established phosphopeptide enrichment method, calcium phosphate precipitation......

Scientific documentation of neurologic improvement following carotid endarterectomy (CEA) has not been established. The purpose of this prospective study was to investigate whether CEA performed for an internal carotid artery flow lesion improves gait and cerebrovascular hemodynamic status in patients with gait disturbance. We prospectively performed pre- and post-CEA gait analysis and acetazolamide stress brain perfusion SPECT (Acz-SPECT) with Tc-99m ECD in 91 patients (M/F: 81/10, mean age: 64.1 y) who had gait disturbance before receiving CEA. Gait performance was assessed using a Vicon 370 motion analyzer. Gait improvement after CEA was correlated with cerebrovascular hemodynamic change as well as symptom duration. As a control, 12 hemiparetic stroke patients (M/F: 9/3, mean age: 51 y) who did not receive CEA underwent gait analysis twice at a one-week interval to evaluate whether repeat testing of gait performance shows a learning effect. Of the 91 patients, 73 (80%) showed gait improvement (change in gait speed > 10%) and 42 (46%) showed marked improvement (change in gait speed > 20%), whereas no improvement was observed in the control group on repeat testing. Post-operative cerebrovascular hemodynamic improvement was noted in 49 (54%) of the 91 patients. There was marked gait improvement in the patient group with cerebrovascular hemodynamic improvement compared to the no-change group (p<0.05). Marked gait improvement and cerebrovascular hemodynamic improvement were noted in 53% and 61%, respectively, of the patients who had less than a 3-month history of symptoms, compared to 31% and 24% of the patients with a history longer than 3 months (p<0.05). Marked gait improvement was obtained in patients whose cerebrovascular hemodynamic status improved on Acz-SPECT after CEA. These results suggest that functional improvement such as gait can result from improved perfusion of the misery perfusion area, which remains viable for a longer period than previously reported in the literature.

This study focuses on the mathematics department at a South African university and in particular on teaching of calculus to first year engineering students. The paper reports on a cause-effect analysis, often used for business improvement. The cause-effect analysis indicates that there are many factors that impact on secondary school teaching of…

Methods for the analysis of work accidents are discussed, and a description is given of the use of a causal situation analysis in terms of a 'variation tree' in order to explain the course of events of the individual cases and to identify possible improvements. The difficulties in identifying 'ca...

This draft concerns the error analysis of a collocation method based on the moving least squares (MLS) approximation for integral equations, which improves the results of [2] in the analysis part. This is mainly a translation from Persian of some parts of Chapter 2 of the author's PhD thesis in 2011.

Window factor analysis (WFA) is a powerful tool for analyzing evolutionary processes. However, WFA was found to be highly sensitive to the noise in the original data matrix. An error analysis showed that the concentration profiles resolved by conventional WFA are easily distorted by the noise retained by abstract factor analysis (AFA), and a modified algorithm for WFA is therefore proposed. Both simulated and experimental HPLC-DAD data were investigated by the conventional and improved methods. The results show that the improved method yields less noise-distorted concentration profiles than the conventional method and greatly enhances the ability to resolve noisy data sets.

In order to analyze and control the use-related risk of medical devices effectively, quantitative methodologies must be applied. Failure Mode and Effects Analysis (FMEA) is a proactive technique for error detection and risk reduction. In this article, an improved FMEA based on fuzzy mathematics and grey relational theory is developed to better carry out use-related risk analysis for medical devices. As an example, the analysis process using this improved FMEA method is described for a particular medical device (a C-arm X-ray machine).
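
As an illustration of the grey-relational half of such a method, the sketch below ranks failure modes by their grey relational grade against an ideal (least-risky) reference series. The scoring data, the distinguishing coefficient of 0.5, and the equal-weight averaging are assumptions; the paper's full fuzzy treatment is not reproduced.

```python
import numpy as np

def grey_relational_grade(scores, zeta=0.5):
    """Grey relational grades for FMEA-style risk ranking (illustrative).

    scores : array (n_modes, n_factors) of severity/occurrence/detection
             ratings; lower is better, so the reference series is the
             column-wise minimum.
    zeta   : distinguishing coefficient, conventionally 0.5.
    """
    x = np.asarray(scores, dtype=float)
    ref = x.min(axis=0)                      # ideal (least-risky) series
    delta = np.abs(x - ref)
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + zeta * dmax) / (delta + zeta * dmax)
    return coeff.mean(axis=1)                # higher grade = closer to ideal

# Example: three failure modes scored 1-10 on S, O, D (invented numbers)
grades = grey_relational_grade([[7, 3, 5], [4, 6, 2], [9, 8, 7]])
print(grades.argsort())  # ascending grade: first index = highest risk
```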

In this results-oriented era of accountability, educator preparation programs are called upon to provide comprehensive data related to student and program outcomes while also providing evidence of continuous improvement. Collaborative Analysis of Student Learning (CASL) is one approach for fostering critical inquiry about student learning. Graduate educator preparation programs in our university used collaborative analysis as the basis for continuous improvement during an accreditation cycle. As authors of this study, we sought to better understand how graduate program directors and faculty used collaborative analysis to inform practice and improve programs. Our findings suggest that CASL has the potential to foster collective responsibility for student learning, but only with a strong commitment from administrators and faculty, purposefully designed protocols and processes, fidelity to the CASL method, and a focus on professional development. Through CASL, programs can produce meaningful data related to student and program outcomes and meet the requirements for accreditation.

……recommendations, the method proposed identifies very explicit countermeasures. Improvements require a change in human decisions during equipment design, work planning, or the execution itself. The use of a model of human behavior drawing a distinction between automated skill-based behavior, rule-based 'know-how' and knowledge-based analysis is proposed for identification of the human decisions which are most sensitive to improvements...

A culture of continuous service improvement underpins safe, efficient and cost-effective health and social care. This paper reports a qualitative research study of assessment material from one cohort of final-year pre-registration health and social care students' interprofessional service improvement learning experience. Initially introduced to the theory of service improvement, students were linked with an interprofessional buddy group and subsequently planned and, where possible, implemented a small-scale service improvement project within a practice placement setting. Assessment was by oral project presentation and written reflection on learning. Summative assessment materials from 150 students were subjected to content analysis to identify: service user triggers for service improvement; ideas to address the identified areas for improvement; and perceptions of service improvement learning. Triggers for service improvements included service user disempowerment, poor communication, gaps in service provision, poor transitions, lack of information, lack of role clarity and role duplication, and differed between professions. Ideas for improvement included both the implementation of evidence-based best practice protocols in a local context and innovative approaches to problem solving. Students described both intrapersonal and interprofessional learning as a result of engaging with service improvement theory and practice. Service improvement learning in an interprofessional context has positive learning outcomes for health and social care students. Students can identify improvement opportunities that may otherwise go undetected. Engaging positively in interprofessional service improvement learning as a student is an important rehearsal for life as a qualified practitioner. It can help students develop the ability to challenge unsafe practice elegantly, thereby acting as advocates for the people in their care. Universities can play a key support role by working……

We first give a stabilized improved moving least squares (IMLS) approximation, which has better computational stability and precision than the IMLS approximation. Then, analysis of the improved element-free Galerkin method is provided theoretically for both linear and nonlinear elliptic boundary value problems. Finally, numerical examples are given to verify the theoretical analysis. Project supported by the National Natural Science Foundation of China (Grant No. 11471063), the Chongqing Research Program of Basic Research and Frontier Technology, China (Grant No. cstc2015jcyjBX0083), and the Educational Commission Foundation of Chongqing City, China (Grant No. KJ1600330).
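
A minimal 1-D moving least squares approximation (linear basis, Gaussian weight) conveys the basic construction that the IMLS and its stabilized variant refine; the weight function, support radius, and node layout below are arbitrary assumptions, and no stabilization is included.

```python
import numpy as np

def mls_fit(x_nodes, u_nodes, x_eval, radius=0.3):
    """1-D moving least squares approximation with a linear basis [1, x]
    and a Gaussian weight (a minimal sketch, not the stabilized IMLS)."""
    out = np.empty_like(np.asarray(x_eval, dtype=float))
    for k, x in enumerate(np.atleast_1d(x_eval)):
        w = np.exp(-((x_nodes - x) / radius) ** 2)            # node weights
        P = np.column_stack([np.ones_like(x_nodes), x_nodes])  # basis at nodes
        A = P.T @ (w[:, None] * P)                            # moment matrix
        b = P.T @ (w * u_nodes)
        a = np.linalg.solve(A, b)                             # local coefficients
        out[k] = a[0] + a[1] * x
    return out

# Usage: approximate sin(2*pi*x) from 11 equally spaced nodes
xs = np.linspace(0, 1, 11)
print(mls_fit(xs, np.sin(2 * np.pi * xs), [0.25, 0.5]))
```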

NEWRAP, an improved version of the previous multiple linear regression programs RAPIER, CREDUC, and CRSPLT, allows for a complete regression analysis including cross plots of the independent and dependent variables, correlation coefficients, regression coefficients, analysis-of-variance tables, t-statistics and their probability levels, rejection of independent variables, plots of residuals against the independent and dependent variables, and a canonical reduction of quadratic response functions useful in optimum-seeking experimentation. A major improvement over RAPIER is that all regression calculations are done in double precision arithmetic.

Improved waste minimization practices at the Department of Energy's (DOE) Idaho National Engineering and Environmental Laboratory (INEEL) are leading to a 15% reduction in the generation of hazardous and radioactive waste. Bechtel BWXT Idaho, LLC (BBWI), the prime management and operations contractor at the INEEL, applied the Six Sigma improvement process to the INEEL Waste Minimization Program to review existing processes and define opportunities for improvement. Our Six Sigma analysis team, composed of an executive champion, a process owner, a black belt and yellow belt, and technical and business team members, used this statistics-based process approach to analyze work processes and produced ten recommendations for improvement. Recommendations ranged from waste generator financial accountability for newly generated waste to enhanced employee recognition programs for waste minimization efforts. These improvements have now been implemented to reduce waste generation rates and are producing positive results.

The ability to understand and interpret data is a critical aspect of scientific thinking. However, although data analysis is often a focus in biology majors classes, many textbooks for allied health majors classes are primarily content-driven and do not include substantial amounts of experimental data in the form of graphs and figures. In a lower-division allied health majors microbiology class, students were exposed to data from primary journal articles as take-home assignments and their data analysis skills were assessed in a pre-/posttest format. Students were given 3 assignments that included data analysis questions. Assignments ranged from case studies that included a figure from a journal article to reading a short journal article and answering questions about multiple figures or tables. Data were represented as line or bar graphs, gel photographs, and flow charts. The pre- and posttest was designed incorporating the same types of figures to assess whether the assignments resulted in any improvement in data analysis skills. The mean class score showed a small but significant improvement from the pretest to the posttest across three semesters of testing. Scores on individual questions testing accurate conclusions and predictions improved the most. This supports the conclusion that a relatively small number of out-of-class assignments through the semester resulted in a significant improvement in data analysis abilities in this population of students.

SPECIFICATION IMPROVEMENT THROUGH ANALYSIS OF PROOF STRUCTURE (SITAPS): HIGH ASSURANCE SOFTWARE DEVELOPMENT (BAE Systems, February …; contract FA8750-13-C-0240). …General adoption of these techniques has had limited penetration in the software development community. Two interrelated causes may account for…

This chapter highlights analysis techniques for identifying energy efficiency opportunities to improve operations and controls. A free tool, Energy Charting and Metrics (ECAM), is used to assist in the analysis of whole-building data, sub-metered data, and/or data from the building automation system (BAS). Appendix A describes the features of ECAM in more depth and also provides instructions for downloading ECAM and all resources pertaining to its use.

The lecture is a technique for delivering knowledge and information cost-effectively to large classes in medical education. The aim of this study was to analyze teaching quality, based on a triangle analysis of video recordings of medical lectures, in order to strengthen teaching competency in medical school. The subjects of this study were 13 medical professors who taught 1st- and 2nd-year medical students and agreed to a triangle analysis of video recordings of their lectures. The triangle analysis consisted of a professional analysis of the video recordings, self-assessment by the teaching professors, and feedback from students; the data were cross-checked by five school consultants for reliability and consistency. Most of the distress that teachers experienced during lectures occurred in uniform teaching environments, such as larger lecture classes. Larger lectures that relied primarily on PowerPoint as the medium for delivering information resulted in poor interaction with students. Other distressing factors in the lecture were personal characteristics and a lack of strategic faculty development. Triangle analysis of video recordings of medical lectures gives teachers an opportunity and a motive to improve teaching quality. Faculty development and various improvement strategies based on this analysis are expected to help teachers succeed as effective, efficient, and attractive lecturers while improving the quality of larger lecture classes.

An on-plate specific enrichment method is presented for the direct analysis of peptide phosphorylation. An array of sintered TiO2 nanoparticle spots was prepared on a stainless steel plate to provide a porous substrate with a very large specific surface and durable function. These spots were used to selectively capture phosphorylated peptides from peptide mixtures, and the immobilized phosphopeptides could then be analyzed directly by MALDI MS after washing away the nonphosphorylated peptides. beta-Casein and protein mixtures were employed as model samples to investigate the selection efficiency. In this strategy, the steps of phosphopeptide capture, purification, and subsequent mass spectrometry analysis are all accomplished on a single target plate, which greatly reduces sample loss and simplifies the analytical procedure. The low detection limit, small sample size, and rapid selective entrapment show that this on-plate strategy is promising for online enrichment of phosphopeptides, which is essential for the analysis of minute amounts of sample in high-throughput proteome research.

Dynamic geometry software (DGS) aims to enhance mathematics education. This systematic review and meta-analysis evaluated the quasi-experimental studies on the effectiveness of DGS-based instruction in improving students' mathematical achievement. Research articles published between 1990 and 2013 were identified from major databases according to a…

In this article, we perform cost-effectiveness analysis on interventions that improve the rate of high school completion. Using the What Works Clearinghouse to select effective interventions, we calculate cost-effectiveness ratios for five youth interventions. We document wide variation in cost-effectiveness ratios between programs and between…

Temporal clustering analysis (TCA) has recently been proposed as a method to detect time windows of brain responses in functional MRI (fMRI) studies when the timing and location of the activation are completely unknown. Modifications to the TCA technique are introduced in this report to further improve the sensitivity of detecting brain activation.
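
In its basic form, TCA asks at which time point each voxel attains its maximum signal and histograms those peak times; windows with many simultaneous peaks suggest activation. A minimal sketch of that baseline (not the modifications introduced in the report):

```python
import numpy as np

def temporal_clustering(data):
    """Baseline temporal clustering analysis (TCA) sketch.

    data : 2-D array, shape (n_voxels, n_timepoints).
    Returns, for each time point, the number of voxels whose signal
    attains its maximum at that time point; peaks in this curve mark
    candidate activation windows.
    """
    peak_times = np.argmax(data, axis=1)
    return np.bincount(peak_times, minlength=data.shape[1])
```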

This paper presents the methodology and subsequent findings of a performance-improvement routine that employs automated finite element (FE) analysis to increase the torque-per-kilogram-magnet (TPKM) of a permanent magnet coupling (PMC). The routine is applied to a commercially available cylindrical...

This report was developed to address the need for comprehensive analysis of U.S. Forest Service (USFS) Region 1 air quality monitoring data. The monitoring data includes Phase 3 (long-term data) lakes, National Atmospheric Deposition Program (NADP), and Interagency Monitoring of Protected Visual Environments (IMPROVE). Annual and seasonal data for the periods of record...

The pushover analysis (POA) procedure is difficult to apply to high-rise buildings because it cannot account for the contributions of higher modes. To overcome this limitation, a modal pushover analysis (MPA) procedure was proposed by Chopra et al. (2001). However, invariant lateral force distributions are still adopted in the MPA. In this paper, an improved MPA procedure is presented to estimate the seismic demands of structures, considering the redistribution of inertia forces after the structure yields. The improved procedure is verified with numerical examples of 5-, 9- and 22-story buildings. It is concluded that the improved MPA procedure is more accurate than either the POA or the original MPA procedure. In addition, the proposed procedure avoids a large computational effort by adopting a two-phase lateral force distribution.
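
The starting point of any MPA variant is the modal lateral force distribution s_n = M·φ_n obtained from the elastic eigenproblem. The sketch below computes those distributions for an assumed three-story shear building; the post-yield force redistribution that constitutes the paper's improvement is not modeled.

```python
import numpy as np
from scipy.linalg import eigh

def modal_force_distributions(K, M):
    """Elastic modal lateral force patterns s_n = M @ phi_n used in MPA.

    K, M : stiffness and mass matrices of a lumped-mass building model.
    Returns the natural circular frequencies and one force pattern per
    column (one per mode).
    """
    w2, phi = eigh(K, M)     # generalized eigenproblem K phi = w2 M phi
    return np.sqrt(w2), M @ phi

# Assumed 3-story shear building with uniform story stiffness and mass
k, m = 1.0e8, 2.0e5          # N/m and kg (illustrative values)
K = k * np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 1.0]])
M = m * np.eye(3)
omega, s = modal_force_distributions(K, M)
```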

Mine systems such as the ventilation system, strata support system, and flameproof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is preferably done during the planning and design stage. However, existing safety analysis methods do not handle the accident initiation and progression of mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine system design. The approach combines ET and FT modeling with a redundancy allocation technique. A concept of top hazard probability is introduced for identifying the system failure probability, and redundancy is allocated to the system at either the component or the system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
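
The probability algebra underlying the ET/FT combination is simple for independent basic events: OR gates combine as 1 - Π(1 - p_i) and AND gates as Π p_i. A toy sketch with invented event probabilities (not the case-study values):

```python
def gate_or(probs):
    """Probability that at least one independent basic event occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def gate_and(probs):
    """Probability that all independent basic events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Hypothetical methane-explosion top event: ignition source present AND
# (ventilation failure OR sensor failure). Illustrative numbers only.
p_top = gate_and([0.01, gate_or([0.05, 0.02])])
print(f"top hazard probability ~ {p_top:.2e}")
```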

One of the priority trends in health care in the Russian Federation and abroad is the minimization of occupational risks. The authors present an evaluation of the legislative basis for occupational risk analysis. The most promising direction for improving national legislation is to develop it on the basis of internationally accepted documents, which provide the legal basis for analyzing workers' health risks. The findings are that a complete evaluation of occupational risk requires combining data on work conditions with occupational monitoring data, and sometimes with the results of special research. Further improvement is needed in justifying hygienic norms by applying criteria of allowable risk to workers' health. The development of risk analysis methodology now enables quantitative evaluation of health risk via mathematical models, including those describing risk evolution.

The purposes of this study were to develop a latent human error analysis process, to explore the factors behind latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was then applied to evaluate the error factors. Results show that (1) adverse physiological states, (2) physical/mental limitations, and (3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed most easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings of existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology.
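
For orientation, the crisp TOPSIS closeness coefficient on which the fuzzy variant builds can be computed as below. The decision matrix, weights, and benefit/cost designations are invented for illustration; the paper's fuzzy numbers and its four specific criteria are not reproduced.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS ranking (sketch; the study uses a fuzzy variant).

    matrix  : (alternatives x criteria) scores
    weights : criteria weights summing to 1
    benefit : boolean per criterion, True if larger-is-better
    """
    X = np.asarray(matrix, dtype=float)
    R = X / np.linalg.norm(X, axis=0)          # vector normalization
    V = R * np.asarray(weights)                # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)  # distance to ideal solution
    d_neg = np.linalg.norm(V - worst, axis=1)  # distance to anti-ideal
    return d_neg / (d_pos + d_neg)             # closeness: higher = better

# Four error factors scored on four hypothetical criteria
cc = topsis([[7, 2, 6, 4], [5, 5, 5, 5], [8, 3, 4, 6], [4, 6, 7, 3]],
            [0.3, 0.2, 0.3, 0.2], [True, False, True, True])
print(np.argsort(-cc))  # improvement priority order
```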

The knowledge generated from evidence-based interventions in mental health systems research is seldom translated into policy and practice in low- and middle-income countries (LMIC). Stakeholder analysis is a potentially useful tool in health policy and systems research to improve understanding of policy stakeholders and increase the likelihood of knowledge translation into policy and practice. The aims of this study were to conduct stakeholder analyses in the five countries participating in the Programme for Improving Mental health carE (PRIME); to evaluate a template used for cross-country comparison of stakeholder analyses; and to assess the utility of stakeholder analysis for future use in mental health policy and systems research in LMIC. Using an adapted stakeholder analysis instrument, PRIME country teams in Ethiopia, India, Nepal, South Africa and Uganda identified and characterised stakeholders in relation to the proposed action: scaling up mental health services. Qualitative content analysis was conducted for stakeholder groups across countries, and a force field analysis was applied to the data. The stakeholder analysis of PRIME identified policy makers (WHO, Ministries of Health, non-health-sector Ministries and Parliament), donors (DFID UK, DFID country offices and other donor agencies), mental health specialists, the media (national and district) and universities as the most powerful and most supportive actors for scaling up mental health care in the respective PRIME countries. Force field analysis provided a means of evaluating cross-country stakeholder power and positions, particularly for prioritising potential stakeholder engagement in the programme. Stakeholder analysis has been helpful as a research uptake management tool for identifying targeted and acceptable strategies to stimulate the demand for research amongst knowledge users, including policymakers and practitioners. Implementing these strategies amongst stakeholders at a country level will…

Several sophisticated menu analysis methods have been compared in studies using theoretical restaurant menus. Institutional and especially hospital cafeterias differ from commercial restaurants in ways that may influence the effectiveness of these menu analysis methods. In this study, we compared three different menu analysis methods - menu engineering, goal value analysis, and marginal analysis in an institutional setting, to evaluate their relative effectiveness for menu management decision-making. The three methods were used to analyze menu cost and sales data for a representative cafeteria in a large metropolitan hospital. The results were compared with informal analyses by the manager and an employee to determine accuracy and value of information for decision-making. Results suggested that all three methods would improve menu planning and pricing, which in turn would enhance customer demand (revenue) and profitability. However, menu engineering was ranked the easiest of the three methods to interpret.

DESCRIPTION This book addresses and clearly explains soccer match analysis, looks at the very latest in match analysis research, and examines the innovative technologies used by professional clubs. The handbook also bridges the gap between research, theory and practice. Its methods can be used by coaches, sport scientists and fitness coaches to improve: styles of play, technical ability and physical fitness; objective feedback to players; the development of specific training routines; use of available notation software, video analysis and manual systems; and understanding of current academic research in soccer notational analysis. PURPOSE The aim is to provide a practical manual on soccer match analysis for coaches and sport scientists, so that professionals in this field can gather objective data on the players and the team, which in turn can be used by coaches and players to learn more about performance as a whole and gain a competitive advantage as a result. The book efficiently meets these objectives. AUDIENCE The book is aimed at the athlete, the coach, the sports science professional, and any sport-conscious person who wishes to analyze relevant soccer performance. The editors and contributors are authorities in their respective fields, and this handbook draws on their extensive experience and knowledge accumulated over the years. FEATURES The book demonstrates how a notation system can be established to produce data to analyze and improve performance in soccer. It is composed of 9 chapters which present the information in a logical and progressive order, as in most texts. The chapter headings are: 1. Introduction to Soccer Match Analysis, 2. Developing a Manual Notation System, 3. Video and Computerized Match Analysis Technology, 4. General Advice on Analyzing Match Performance, 5. Analysis and Presentation of the Results, 6. Motion Analysis and Consequences for Training, 7. What Match…

Background & Aims of the Study: Family functioning is among the most important factors ensuring the mental health of family members, and disorder or disturbance in family functioning causes many psychological problems for family members. The current study examined the effectiveness of transactional analysis group counseling in improving couples' family functioning. Materials & Methods: The study used a semi-experimental design with pretest, posttest, follow-up and a control group. The statistical population consisted of all couples referred to the psychological and counseling centers of Rasht city in 2012. Samples were first selected by an availability sampling method and, after completing the family assessment device and obtaining a qualifying score for entry into the research, were randomly assigned to experimental and control groups (N = 8 couples per group). The experimental group participated in 12 sessions of group counseling based on transactional analysis, and the control group received no intervention. The gathered data were analyzed using analysis of covariance. Results: There were significant differences between the pretest and posttest scores of the experimental group, significant at the 0.05 level. It therefore appears that transactional analysis group therapy improved the dimensions of family functioning in couples. Conclusions: The results indicate that transactional analysis group counseling can improve family functioning, and the use of this approach in working with couples is recommended.

An improved thermodynamic analysis method for vapor-phase epitaxy is proposed. In the conventional method, the mass-balance constraint equations are expressed in terms of variations in partial pressure. Although the conventional method is appropriate for gas-solid reactions occurring near the growth surface, it is not suitable for gas reactions that involve changes in the number of gas molecules. We reconsider the constraint equations in order to predict the effect of gas reactions on semiconductor growth processes. To demonstrate the feasibility of the improved method, the growth process of group-III nitrides by metalorganic vapor-phase epitaxy has been investigated.

This paper combines wavelet analysis and wavelet transform theory with an artificial neural network, pretreating point feature attributes before intrusion detection so that they are suitable for an improved wavelet neural network. The resulting intrusion classification model achieves better adaptability and self-learning ability, greatly enhances the wavelet neural network's ability to solve the intrusion detection problem, reduces storage space, improves the performance of the constructed neural network, and reduces training time. Finally, simulation experiments on the KDD Cup 99 data set show that this method reduces the complexity of constructing the wavelet neural network while ensuring the accuracy of intrusion classification.
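
A plausible reading of the pretreatment step is to compress each record into per-sub-band wavelet energies before classification. The sketch below does this with PyWavelets and a small scikit-learn MLP as a stand-in for the wavelet neural network; the wavelet choice, decomposition level, and random toy data are assumptions, not the paper's configuration.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_energy_features(x, wavelet="db4", level=3):
    """Compress a raw record into the energy of each wavelet sub-band
    (assumed stand-in for the paper's exact pretreatment)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Toy usage on random records standing in for KDD Cup 99 rows
rng = np.random.default_rng(0)
X = np.vstack([wavelet_energy_features(r)
               for r in rng.normal(size=(200, 64))])
y = rng.integers(0, 2, size=200)  # fake normal/attack labels
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
```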

We report an improved haplotype analysis of the human myelin basic protein gene (MBP) short tandem repeat (STR) polymorphism. The polymorphic G-->A transition and 2 conventional STR polymorphisms, MBPA and MBPB, were simultaneously determined by an amplified product length polymorphism technique. After the MBPC fragments containing MBPA and MBPB were amplified, the linkage of these 2 STR loci was determined by a second amplification, using polymerase chain reaction (PCR) technique, of the isolated MBPC fragments. The present haplotype analysis dispensed with family studies for the haplotyping of MBPA and MBPB. Polymorphisms of the MBP loci studied in German and Japanese populations showed a high genomic variation. Haplotype analysis of the MBP loci showed distinct differences between the German and the Japanese populations. Consequently, haplotype analysis of the MBP loci promises to be useful in forensic identification and paternity testing.

The time for performance of a project is usually of the essence to the employer and the contractor. This has made it imperative for contracting parties to analyse project delays for the purpose of making the right decisions on potential time and/or cost compensation claims. Over the years, existing delay analysis techniques (DATs) for aiding this decision-making have been helpful but have not succeeded in curbing the high incidence of disputes associated with the resolution of delay claims. A major source of the disputes lies in the limitations and capabilities of the techniques in their practical use. Developing a good knowledge of these aspects of the techniques is of paramount importance in understanding the real problematic issues involved and their improvement needs. This paper seeks to develop such knowledge and understanding (as part of a wider research work) via an evaluation of the most common DATs based on a case study, a review of the key relevant issues often not addressed by the techniques, and the necessary improvement needs. The evaluation confirmed that the various techniques yield different analysis results for the same delay claims scenario, mainly due to their unique application procedures. The issues that are often ignored in the analysis but would also affect delay analysis results are: the functionality of the programming software employed for the analysis, resource loading and levelling requirements, resolving concurrent delays, and delay-pacing strategy. Incorporating these issues in the analysis and focusing on them in future research work are the key recommendations of the study.

Value stream mapping is a tool that lets the business leader of XYZ Hospital see what is actually happening in the business process that has caused longer lead times for self-produced medicines in its pharmacy unit, a problem that has triggered many complaints from patients. After deploying this tool, the team found that, in processing the medicine, the pharmacy unit lacked storage and capsule-packing tools, a condition that caused much wasted time in the process. The team therefore proposed that the business leader procure the required tools in order to shorten the process. This work shortened the lead time from 45 minutes to 30 minutes, as required by the government through the Indonesian health ministry, and increased the value-added percentage (%VA), or Process Cycle Efficiency (PCE), from 66% to 68% (considered lean because it is above the required 30%). This result shows that process effectiveness was increased by the improvement.

Background: Today, learning communication skills such as conflict solving is very important. The purpose of the present study was to investigate the efficiency of cognitive and transactional analysis group therapy in improving conflict-solving skill. Materials and Method: This experimental study used a pretest-posttest design with a control group. Forty-five clients referred to the counseling and psychological services center of Ferdowsi University of Mashhad were chosen by a screening method and randomly divided into three equal groups: a control group (15 participants), a cognitive therapy group (15 participants) and a transactional analysis group (15 participants). A conflict-solving questionnaire was used to collect data, and the interventions were cognitive and transactional analysis group therapy administered during 8 weekly two-hour sessions. Mean and standard deviation were used for descriptive analysis, and one-way ANOVA was used at the inference level. Results: The conflict-solving skills in the two experimental groups increased significantly. Conclusion: These findings indicate that both cognitive and transactional analysis group therapy can be effective interventions for improving conflict-solving skills.

The focus of this article is to improve the precipitation accumulation analysis, with special focus on intense precipitation events. Two main objectives are addressed: (i) the assimilation of lightning observations together with radar and gauge measurements, and (ii) the analysis of the impact of different integration periods in the radar-gauge correction method. The article is a continuation of previous work by Gregow et al. (2013) in the same research field. A new lightning data assimilation method has been implemented and validated within the Finnish Meteorological Institute - Local Analysis and Prediction System. Lightning data improve the analysis when no radars are available, and even with radar data, lightning data have a positive impact on the results. The radar-gauge assimilation method is highly dependent on the statistical relationships between radar and gauges when performing the correction to the precipitation accumulation field. Here, we investigate the use of different time integration intervals: 1, 6, 12, 24 h and 7 days. This changes the amount of data used and affects the statistical calculation of the radar-gauge relations. Verification shows that the real-time analysis using the 1 h integration time gives the best results.

In this work, Factorial Kriging analysis for the filtering of seismic attributes applied to reservoir characterization is considered. Factorial Kriging works in the spatial domain in a way similar to spectral analysis in the frequency domain. The incorporation of filtered attributes as a secondary variable in the Kriging system is discussed. Results prove that Factorial Kriging is an efficient technique for filtering seismic attribute images, in which geologic features are enhanced. The attribute filtering improves the correlation between the attributes and the well data and improves the estimates of the reservoir properties. The differences between the estimates obtained by External Drift Kriging and Collocated Cokriging are also reduced.

The cooperative system under development needs spatial analysis and related data mining technology to detect subject conflict and redundancy, and the ID3 algorithm is an important data mining algorithm for this purpose. Because the logarithmic part of the traditional ID3 decision-tree algorithm is computationally cumbersome, this paper derives a new formula for information gain by optimizing the logarithmic part of the algorithm. Experimental comparison and theoretical analysis show that the IID3 (Improved ID3) algorithm achieves higher computational efficiency and accuracy and is thus worth adopting.
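
The quantity being optimized is the standard ID3 information gain, Gain(S, A) = H(S) - Σ_v (|S_v|/|S|) H(S_v). The reference implementation below computes it directly (without the paper's logarithm optimization), so an improved formula can be checked against it:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array: H = -sum p * log2(p)."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """ID3 splitting criterion: Gain(S, A) = H(S) - sum |Sv|/|S| * H(Sv)."""
    total = entropy(labels)
    n = len(labels)
    for v in np.unique(feature):
        mask = feature == v
        total -= mask.sum() / n * entropy(labels[mask])
    return total

# Tiny example: the feature fully determines the label, so the gain is 1 bit
f = np.array(["sunny", "rain", "sunny", "rain"])
y = np.array([0, 1, 0, 1])
print(information_gain(f, y))  # 1.0
```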

A majority of original articles published in biomedical journals include some form of statistical analysis. Unfortunately, many of the articles contain errors in statistical design and/or analysis. These errors are worrisome, as the misuse of statistics jeopardizes the process of scientific discovery and the accumulation of scientific knowledge. To help avoid these errors and improve statistical reporting, four approaches are suggested: (1) development of guidelines for statistical reporting that could be adopted by all journals, (2) improvement in statistics curricula in biomedical research programs with an emphasis on hands-on teaching by biostatisticians, (3) expansion and enhancement of biomedical science curricula in statistics programs, and (4) increased participation of biostatisticians in the peer review process along with the adoption of more rigorous journal editorial policies regarding statistics. In this chapter, we provide an overview of these issues with emphasis to the field of molecular biology and highlight the need for continuing efforts on all fronts.

In this paper, an improved finite element (FE) model is proposed to investigate the temperature distribution in gas-insulated transmission lines (GILs). The solution of the Joule losses from an eddy-current field analysis is indirectly coupled into the fluid and thermal fields. Unlike traditional methods, the air surrounding the GIL is included in the model to avoid assuming a constant convective heat transfer coefficient; a multiple-species transport technique is therefore employed to deal with the problem of two fluid types in a single model. In addition, the temperature-dependent electrical and thermal properties of the materials are considered. The steady-state and transient thermal analyses of the GIL are performed separately with the improved model, and the corresponding temperature distributions are compared with experimental results reported in the literature.

This paper presents a transient voltage stability analysis of an AC system with multi-infeed HVDC links, including a traditional LCC HVDC link and a VSC HVDC link. It is found that the voltage supporting capability of the VSC-HVDC link is significantly influenced by the tie-line distance between the two links and the size of the loads. In order to improve transient voltage stability, a voltage adjusting method is proposed in this paper: a voltage increment component is introduced into the outer voltage control loop under emergency situations caused by severe grid faults. In order to verify the theoretical analysis and the improved control method, a real-time simulation model of a hybrid multi-infeed HVDC system based on the western Danish power system is established in RTDS™. Simulation results show that enhanced transient voltage stability can be achieved.

Flux balance analysis (FBA) has been widely used to calculate steady-state flux distributions that provide important information for metabolic engineering. Several thermodynamics-based methods, for example, quantitative assignment of reaction directionality and energy balance analysis, have been developed to improve the prediction accuracy of FBA. However, these methods can only generate a thermodynamically feasible range rather than the most thermodynamically favorable solution. We therefore developed a novel optimization method, termed thermodynamic optimum searching (TOS), to calculate the thermodynamically optimal solution, based on the second law of thermodynamics, the minimum magnitude of the Gibbs free energy change, and the maximum entropy production principle (MEPP). TOS was then applied to five physiological conditions of Escherichia coli to evaluate its effectiveness. The prediction accuracy was found to be significantly improved (by 10.7-48.5%) in comparison with the (13)C-fluxome data, indicating that TOS can be considered an advanced calculation and prediction tool in metabolic engineering.
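
Plain FBA, the baseline that TOS constrains further, is a linear program: maximize a flux objective subject to steady-state mass balance S·v = 0 and flux bounds. A toy sketch with an invented three-reaction network (not the E. coli model):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A --v1--> B --v2--> biomass, with uptake flux v_in of A.
# Steady state requires S v = 0; maximize the biomass flux v2,
# i.e. minimize -v2. Purely illustrative, not the TOS method itself.
S = np.array([[1, -1,  0],    # metabolite A: made by v_in, consumed by v1
              [0,  1, -1]])   # metabolite B: made by v1, consumed by v2
c = np.array([0.0, 0.0, -1.0])           # objective: maximize v2
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution: [10, 10, 10]
```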

A praxeological approach is proposed to improve a forecasting process through the employment of forecast value added (FVA) analysis. This may be interpreted as a manifestation of lean management in forecasting. The author discusses the concepts of the effectiveness and efficiency of forecasting. The former, defined in praxeology as the degree to which goals are achieved, refers to the accuracy of forecasts. The latter reflects the relation between the benefits accruing from the results of forecasting and the costs incurred in the process. Since measuring the benefits of forecasting is very difficult, a simplification is proposed according to which this benefit is a function of forecast accuracy. This enables the efficiency of the forecasting process to be evaluated. Since improving this process may consist of either reducing forecast error or decreasing costs, FVA analysis, which expresses the concept of lean management, may be applied to reduce the waste accompanying forecasting.
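
Operationally, FVA compares the error of the forecasting process against a naive benchmark; a process step adds value only if it beats that benchmark. A minimal sketch using mean absolute error and a last-value naive forecast (the benchmark choice and the toy numbers are assumptions):

```python
import numpy as np

def forecast_value_added(actuals, forecasts):
    """FVA sketch: MAE reduction of the forecasting process relative to a
    naive last-value forecast. Positive FVA means the process adds value;
    negative FVA signals waste in the lean-management sense."""
    actuals = np.asarray(actuals, dtype=float)
    naive = actuals[:-1]                # naive forecast for t is actual at t-1
    mae_naive = np.mean(np.abs(actuals[1:] - naive))
    mae_proc = np.mean(np.abs(actuals[1:] - np.asarray(forecasts[1:],
                                                       dtype=float)))
    return mae_naive - mae_proc

print(forecast_value_added([100, 110, 105, 120], [98, 108, 107, 118]))  # 8.0
```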

fuel economy of the M1113 and XM1124 based on analysis of the HEVEA test data set. A repower option of the HMMWV M1113 engine (6.5 L V8 turbo...fuel economy benefits of an engine repower option for the M1113. Table 5 summarizes the results of the computer simulations using both the...current and recommended repower option. The fuel economy improvements reported in Table 5 do not account for the additional cooling requirements for the

to be used up beyond recovery or repair—and provides these items to the services when requisitioned in support of approximately 2,400 weapon...distribution functions for depot maintenance operations varies across the services . For example, the Air Force and DLA have agreed to a local recovery ...DEFENSE INVENTORY Further Analysis and Enhanced Metrics Could Improve Service Supply and Depot Operations Report

The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Building on this method, the bootstrap method is introduced and a numerical criterion for discriminating the transition type is proposed.

Experimentation ( DISE ) research group where he leads multidisciplinary studies ranging from leading the Analyst Capability Working Group for the U.S. Air...Information and Systems Experimentation ( DISE ). Dr. Gallup has a multidisciplinary science, engineering, and analysis background, including microbiology...4), 19–31. Retrieved from http://www.nps.edu/Academics/Schools/GSOIS/Departments/IS/ DISE /docs/improving- use-and-understanding-of-data-dod.pdf Zhou

Full Text Available The main purpose of this research is to analyze the occupational risks in a renal clinic located in central Rio Grande do Sul (RS), Brazil. Based on observational analysis of risk maps and data gathered with collection instruments, improvements were implemented on site. The results showed that the implemented changes were significant and that further changes are needed to reduce occupational disorders and promote a better quality of life for the clinic's professionals.

A comparative 3D numerical simulation study of fluid flow and coal-firing processes was carried out for flame combustion of Kansk-Achinsk brown coal in a vortex furnace of improved design with bottom injection of secondary air. The engineering performance of this furnace was analyzed for several operational modes as a function of coal grinding fineness and coal input rate, and a preferable operational regime for the furnace was identified.

analysis alone. Since detection of FCDs on MRI during the presurgical evaluation markedly improves the chance of becoming seizure free postoperatively, we apply morphometric analysis in all patients who are MRI-negative after conventional visual analysis at our centre.

Introduction It is unclear whether using peers can improve adherence to antiretroviral therapy (ART). To inform the World Health Organization's global guidance on adherence interventions, we conducted a systematic review and network meta-analysis to determine the effectiveness of using peers for achieving adequate adherence and viral suppression. Methods We searched for randomized clinical trials of peer-based interventions to promote adherence to ART in HIV populations. We searched six electronic databases from inception to July 2015 and major conference abstracts from the last three years. We examined the outcomes of adherence and viral suppression among trials done worldwide and those specific to low- and middle-income countries (LMIC) using pairwise and network meta-analyses. Results and discussion Twenty-two trials met the inclusion criteria. We found similar results between pairwise and network meta-analyses, and between the global and LMIC settings. Peer support plus telephone contact was superior to standard-of-care in improving adherence in both the global network (odds ratio [OR]=4.79, 95% credible interval [CrI]: 1.02, 23.57) and the LMIC settings (OR=4.83, 95% CrI: 1.88, 13.55). Peer support alone, however, did not lead to improvement in ART adherence in either setting. For viral suppression, we found no difference in effects among interventions, owing to the limited number of trials. Conclusions Our analysis showed that peer support leads to modest improvement in adherence. These modest effects may be due to the fact that in many settings, particularly in LMICs, programmes already include peer supporters, adherence clubs and family disclosure for treatment support. Rather than introducing new interventions, a focus on improving the quality of delivery of existing services may be a more practical and effective way to improve adherence to ART. PMID:27914185

Achieving economic competitiveness with LWRs and other Generation IV (Gen-IV) reactors is one of the major requirements for large-scale investment in commercial sodium-cooled fast reactor (SFR) power plants. Advances in R&D on advanced SFR fuel and structural materials provide key long-term opportunities to improve SFR economics, and other new opportunities are emerging as well. This paper provides an overview of potential ideas from the perspective of thermal hydraulics to improve SFR economics. These include a new hybrid loop-pool reactor design to further optimize the economics, safety, and reliability of SFRs with more flexibility; a multiple-reheat and intercooling helium Brayton cycle to improve plant thermal efficiency and reduce safety-related overnight and operation costs; and modern multi-physics thermal analysis methods to reduce analysis uncertainties and the associated requirements for over-conservatism in reactor design. The paper reviews advances in all three of these areas and their potential beneficial impacts on SFR economics.

The improved line sampling (LS) technique, an effective numerical simulation method, is employed to analyze the probabilistic characteristics and reliability sensitivity of flutter with random structural parameters in transonic flow. The improved LS technique is a novel methodology for reliability and sensitivity analysis of high-dimensional, low-probability problems with implicit limit state functions, and it does not require any approximating surrogate of the implicit limit state equation. The improved LS is used to estimate the flutter reliability and sensitivity of a two-dimensional wing, in which structural properties such as frequency, gravity-center parameters, and mass ratio are treated as random variables. A computational fluid dynamics (CFD) based unsteady aerodynamic reduced-order model (ROM) is used to construct the aerodynamic state equations. Coupling the structural state equations with the aerodynamic state equations, the safety margin of flutter is formulated using the critical flutter velocity. The results show that the improved LS technique can effectively decrease the computational cost of random uncertainty analysis of flutter. The reliability sensitivity, defined as the partial derivative of the failure probability with respect to the distribution parameter of a random variable, can help to identify the important parameters and guide structural design optimization.
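
For orientation, the sketch below estimates a small failure probability by crude Monte Carlo, the baseline that line sampling is designed to outperform with far fewer model evaluations. The limit-state function and parameter distributions are toy assumptions, not the flutter model of the study.

```python
# Crude Monte Carlo estimate of a small failure probability. The limit
# state g (flutter margin) and the parameter distributions are toy
# assumptions, not the CFD/ROM flutter model of the study.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
freq = rng.normal(10.0, 0.5, n)          # hypothetical natural frequency
mass_ratio = rng.normal(60.0, 5.0, n)    # hypothetical mass ratio

v_crit = 12.0 * freq + 0.5 * mass_ratio  # illustrative critical flutter speed
g = v_crit - 130.0                       # safety margin; failure when g < 0

p_f = np.mean(g < 0.0)
print(f"estimated failure probability: {p_f:.2e}")
```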

Surgical management of children with short bowel syndrome (SBS) changed with the introduction of the serial transverse enteroplasty procedure (STEP). We conducted a systematic review and meta-analysis using MEDLINE and SCOPUS to determine if children with SBS had improved enteral tolerance following STEP. Studies were included if information about a child's pre- and post-STEP enteral tolerance was provided. A random effects meta-analysis provided a summary estimate of the proportion of children with enteral tolerance increase following STEP. From 766 abstracts, seven case series involving 86 children were included. Mean percent tolerance of enteral nutrition improved from 35.1 to 69.5. Sixteen children had no enteral improvement following STEP. A summary estimate showed that 87 % (95 % CI 77-95 %) of children who underwent STEP had an increase in enteral tolerance. Compilation of the literature supports the belief that SBS subjects' enteral tolerance improves following STEP. Enteral nutritional tolerance is a measure of efficacy of STEP and should be presented as a primary or secondary outcome. By standardizing data collection on children undergoing STEP procedure, better determination of nutritional benefit from STEP can be ascertained.

A method of interference correction for improving the sensitivity of a non-dispersive infrared (NDIR) gas analysis system is demonstrated. Based on the proposed method, the interference due to water vapor and carbon dioxide in the NDIR NO analyzer is corrected. After interference correction, the absorbance signal at the NO filter channel is controlled only by the absorption of NO, and the sensitivity of the analyzer is greatly improved. In a field experiment on pollution-source emission monitoring, the NO concentration trend monitored by the NDIR analyzer is in good agreement with that of a differential optical absorption spectroscopy NO analyzer. Small variations in NO concentration can also be resolved, and the measurement correlation coefficient between the two analyzers is 94.28%.

Proteomic analysis of bacterial samples provides valuable information about cellular responses and functions under different environmental pressures. Proteomic analysis is dependent upon efficient extraction of proteins from bacterial samples without introducing bias toward extraction of particular protein classes. While no single method can recover 100% of the bacterial proteins, selected protocols can improve overall protein isolation, peptide recovery, or enrich for certain classes of proteins. The method presented here is technically simple and does not require specialized equipment such as a mechanical disrupter. Our data reveal that for particularly challenging samples, such as B. anthracis Sterne spores, trichloroacetic acid extraction improved the number of proteins identified within a sample compared to bead beating (714 vs 660, respectively). Further, TCA extraction enriched for 103 known spore specific proteins whereas bead beating resulted in 49 unique proteins. Analysis of C. botulinum samples grown to 5 days, composed of vegetative biomass and spores, showed a similar trend with improved protein yields and identification using our method compared to bead beating. Interestingly, easily lysed samples, such as B. anthracis vegetative cells, were equally as effectively processed via TCA and bead beating, but TCA extraction remains the easiest and most cost effective option. As with all assays, supplemental methods such as implementation of an alternative preparation method may provide additional insight to the protein biology of the bacteria being studied.

We present an updated and improved simulation analysis of precision measurements of neutrino oscillation parameters from the study of charged-current interactions of atmospheric neutrinos in the Iron Calorimeter (ICAL) detector at the proposed India-based Neutrino Observatory (INO). The present analysis is done in the extended muon energy range of 0.5--25 GeV, as compared to previous analyses which were limited to the 1--11 GeV range of muon energy. A substantial improvement is observed in the precision measurement of the oscillation parameters in the 2--3 sector, including the magnitude and sign of the 2--3 mass-squared difference $\Delta m^2_{32}$ and especially $\theta_{23}$. The sensitivities are further improved by the inclusion of an additional systematic which constrains the ratio of neutrino to anti-neutrino fluxes. The best $1\sigma$ precision on $\sin^2 \theta_{23}$ and $|\Delta m^2_{32}|$ achievable with the new analysis for a 500 kTon yr exposure of ICAL is $\sim9\%$ and $\sim2.5\%$ respective...

To improve the accuracy of quantitative analysis in laser-induced breakdown spectroscopy (LIBS), the plasma produced by a Nd:YAG laser from steel targets was confined by a cavity. A number of elements with low concentrations, such as vanadium (V), chromium (Cr), and manganese (Mn), in the steel samples were investigated. After optimization of the cavity dimension and laser fluence, significant enhancement factors of 4.2, 3.1, and 2.87 in the emission intensity of V, Cr, and Mn lines, respectively, were achieved at a laser fluence of 42.9 J/cm² using a hemispherical cavity (diameter: 5 mm). More importantly, the correlation coefficient of V I 440.85/Fe I 438.35 nm was increased from 0.946 (without the cavity) to 0.981 (with the cavity); similar results were obtained for Cr I 425.43/Fe I 425.08 nm and Mn I 476.64/Fe I 492.05 nm. It was therefore demonstrated that the accuracy of quantitative analysis of low-concentration elements in steel samples was improved, because the plasma became more uniform under spatial confinement. The results of this study provide a new pathway for improving the accuracy of quantitative LIBS analysis.

Gravity data are the result of the gravity force field interaction from all underground sources. The objects of detection are always submerged in the background field, and thus one of the crucial problems in gravity data interpretation is how to improve the resolution of the observed information. The wavelet transform operator has recently been introduced into this domain both as a filter and as a powerful source analysis tool. This paper studied the effects of improving the resolution of gravity data with wavelet analysis and spectral methods, and revealed the geometric characteristics of density heterogeneities described by simply shaped sources. First, the basic theory of multiscale wavelet analysis, its lifting scheme, and the spectral method are introduced. Through an experimental study on the forward simulation of anomalies given by the superposition of six objects, and on measured data from the Songliao plain, Northeast China, the shape, size, and depth of the buried objects were estimated. The results were also compared with those obtained by conventional techniques, which demonstrated that this method greatly improves the resolution of gravity anomalies.
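
A minimal sketch of this multiscale idea, assuming the PyWavelets package and a synthetic gravity profile in place of the field data:

```python
# Multiscale decomposition of a 1-D gravity profile with the discrete
# wavelet transform (PyWavelets). The synthetic profile, a broad regional
# trend plus a narrow local anomaly, stands in for the field data.
import numpy as np
import pywt

x = np.linspace(-50.0, 50.0, 512)
regional = 5.0 * np.exp(-(x / 40.0) ** 2)         # deep, broad source
local = 1.0 * np.exp(-((x - 10.0) / 3.0) ** 2)    # shallow, narrow source
profile = regional + local

coeffs = pywt.wavedec(profile, "db4", level=4)
approx, details = coeffs[0], coeffs[1:]
# The coarse approximation tracks the regional field; the fine-scale detail
# coefficients isolate the shallow anomaly, sharpening the interpretation.
print("approximation coeffs:", len(approx),
      "| detail coeffs per level:", [len(d) for d in details])
```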

The traditional method of applying a digital optimal filter to measure X-ray pulses from transition-edge sensor (TES) devices does not achieve the best energy resolution when the signals have a highly non-linear response to energy, or when the noise is non-stationary during the pulse. We present an implementation of a method to analyze X-ray data from TESs based upon principal component analysis (PCA). Our method separates the X-ray signal pulse into orthogonal components that have the largest variance. We typically recover pulse height, arrival time, differences in pulse shape, and the variation of pulse height with detector temperature. These components can then be combined to form a representation of pulse energy. An added value of this method is that by reporting information on more descriptive parameters (as opposed to a single number representing energy), we generate a much more complete picture of the pulse received. Here we report on progress in developing this technique for future implementation on X-ray telescopes. We used an ⁵⁵Fe source to characterize Mo/Au TESs. On the same dataset, the PCA method recovers a spectral resolution that is better by a factor of two than that achievable with digital optimal filters.
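
The core of the approach can be sketched with a plain SVD-based PCA on a matrix of pulse records; the synthetic pulses below are illustrative assumptions, not TES data.

```python
# PCA of pulse records via SVD: each row is one digitized pulse, and the
# leading components typically capture pulse-height and shape variation.
# The synthetic pulses below are illustrative, not TES data.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(512)
template = np.exp(-t / 120.0) - np.exp(-t / 20.0)   # generic pulse shape

heights = 1.0 + 0.05 * rng.standard_normal(300)     # varying pulse heights
pulses = heights[:, None] * template[None, :]
pulses += 0.01 * rng.standard_normal(pulses.shape)  # additive noise

X = pulses - pulses.mean(axis=0)                    # center the records
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                                      # projections onto PCs

# The first component's score serves as a pulse-height (energy) proxy.
print("variance captured by PC1: {:.1%}".format(s[0] ** 2 / np.sum(s ** 2)))
```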

Reliable results represent the pinnacle assessment of quality of an analytical laboratory, and variability is therefore considered a critical quality problem associated with the selenium analysis method executed at the Western Cape Provincial Veterinary Laboratory (WCPVL). The elimination and control of variability is undoubtedly of significant importance because of the narrow margin of safety between toxic and deficient doses of the trace element for good animal health. A quality methodology known as Lean Six Sigma was believed to present the most feasible solution for overcoming the adverse effect of variation, through steps towards analytical process improvement. Lean Six Sigma represents a form of scientific method that is empirical, inductive, deductive, and systematic, relies on data, and is fact-based. The Lean Six Sigma methodology comprises five macro-phases, namely Define, Measure, Analyse, Improve and Control (DMAIC). Both qualitative and quantitative laboratory data were collected in terms of these phases. Qualitative data were collected using quality tools, namely an Ishikawa diagram, a Pareto chart, Kaizen analysis and a Failure Mode Effect analysis tool. Quantitative laboratory data, based on the analytical chemistry test method, were collected through a controlled experiment. The controlled experiment entailed 13 replicated runs of the selenium test method, whereby 11 samples were repetitively analysed, whilst Certified Reference Material (CRM) was also included in 6 of the runs. Laboratory results obtained from the controlled experiment were analysed using statistical methods commonly associated with quality validation of chemistry procedures. Analysis of both sets of data yielded an improved selenium analysis method, believed to provide greater reliability of results, in addition to a greatly reduced cycle time and superior control features. Lean Six Sigma may therefore be regarded as a valuable tool in any laboratory, and

One of the hallmarks of blood bank stored red blood cells (RBCs) is the irreversible transition from a discoid to a spherocyte-like morphology with membrane perturbation and cytoskeleton disorders. Identification of the storage-associated modifications in the protein-protein interactions between the cytoskeleton and the lipid bilayer may therefore help elucidate the molecular mechanisms involved in the alteration of the mechanical properties of stored RBCs. Here we report the results obtained by analyzing RBCs after 0, 21 and 35 days of storage under standard blood banking conditions in label-free mass spectrometry (MS)-based experiments. We could quantitatively measure changes in the phosphorylation level of crucial phosphopeptides belonging to β-spectrin, ankyrin-1, α-adducin, dematin, glycophorin A and glycophorin C. Data were validated by both western blotting and pseudo-Multiple Reaction Monitoring (MRM). Although each phosphopeptide showed a distinctive trend, a sharp increase in phosphorylation level over the storage period was observed. Phosphopeptide mapping and structural modeling analysis indicated that the phosphorylated residues localize in protein functional domains fundamental to the maintenance of membrane structural integrity. Along with previous morphological evidence acquired by electron microscopy, our results seem to indicate that 21-day storage may represent a key point for the molecular processes leading to the erythrocyte deformability reduction observed during blood storage. These findings could therefore be helpful in understanding and preventing the morphology-linked mechanisms responsible for the post-transfusion survival of preserved RBCs.

Full Text Available The impulse turbo expander (ITE) is employed to replace the throttling valve in the vapor compression refrigeration cycle to improve system performance. An improved ITE and the corresponding cycle are presented. In the new cycle, the ITE not only acts as an expansion device with work extraction, but also serves as an economizer with vapor injection. An increase of 20% in isentropic efficiency can be attained for the improved ITE compared with the conventional ITE, owing to the reduction of friction losses at the rotor. The performance of the novel cycle is investigated based on energy and exergy analysis, and a correlation for the optimum intermediate pressure in terms of ITE efficiency is developed. The improved ITE cycle increases the exergy efficiency by 1.4%–6.1% over the conventional ITE cycle, 4.6%–8.3% over the economizer cycle and 7.2%–21.6% over the base cycle. Furthermore, the improved ITE cycle is also preferred due to its lower exergy loss.

Full Text Available Speech enhancement has become an essential issue in the field of speech and signal processing, because of the need to enhance the performance of voice communication systems in noisy environments. A number of research works have been carried out in speech processing, but there is always room for improvement. The main aim is to enhance the apparent quality of the speech and to improve intelligibility. Signal representation and enhancement in the cosine transform domain is observed to provide significant results, and the Discrete Cosine Transform (DCT) has been widely used for speech enhancement. In this work, instead of the DCT, the Advanced DCT (ADCT) is used, which simultaneously offers energy compaction along with critical sampling and flexible window switching. In order to deal with frame-to-frame deviations of the cosine transforms, the ADCT is integrated with Pitch Synchronous Analysis (PSA). Moreover, in order to improve the noise minimization performance of the system, an improved iterative Wiener filtering approach called Constrained Iterative Wiener Filtering (CIWF) is used. Thus, this approach combines ADCT-based speech enhancement with an improved iterative filtering algorithm integrated with PSA.
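
For context, a plain DCT-domain denoising pass (the baseline that ADCT, PSA and CIWF improve upon) can be sketched as follows; the frame length, tone, noise level and threshold are invented for illustration.

```python
# Plain DCT-domain denoising of one speech frame (SciPy). Frame length,
# tone, noise level, and threshold are invented for illustration; the
# paper's ADCT, pitch-synchronous analysis, and CIWF are not reproduced.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
fs = 8000
t = np.arange(0, 0.032, 1.0 / fs)               # one 32 ms frame (256 samples)
clean = 0.8 * np.sin(2 * np.pi * 440.0 * t)     # toy voiced segment
noisy = clean + 0.2 * rng.standard_normal(t.size)

C = dct(noisy, norm="ortho")
C[np.abs(C) < 0.5] = 0.0                        # hard-threshold small coeffs
denoised = idct(C, norm="ortho")

def snr_db(ref, sig):
    return 10.0 * np.log10(np.sum(ref ** 2) / np.sum((sig - ref) ** 2))

print(f"noisy SNR:    {snr_db(clean, noisy):.1f} dB")
print(f"denoised SNR: {snr_db(clean, denoised):.1f} dB")
```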

Kaizen, or continuous improvement, lies at the core of lean. Kaizen is implemented through practices that enable employees to propose ideas for improvement and solve problems. The aim of this study is to describe the types of issues and improvement suggestions that hospital employees feel empowered to address through kaizen practices, in order to understand when and how kaizen is used in healthcare. We analysed 186 structured kaizen documents containing improvement suggestions that were produced by 165 employees at a Swedish hospital. Directed content analysis was used to categorise the suggestions into the following categories: type of situation (proactive or reactive) triggering an action; type of process addressed (technical/administrative, support and clinical); complexity level (simple or complex); and type of outcomes aimed for (operational or sociotechnical). Compliance with the kaizen template was calculated. 72% of the improvement suggestions were reactions to a perceived problem. Support, technical and administrative, and primary clinical processes were involved in 47%, 38% and 16% of the suggestions, respectively. The majority of the kaizen documents addressed simple situations and focused on operational outcomes. The degree of compliance with the kaizen template was high for several items concerning the identification of problems and the proposed solutions, and low for items related to the testing and implementation of solutions. There is a need to combine kaizen practices with improvement and innovation practices that help staff and managers to address complex issues, such as the improvement of clinical care processes. The limited focus on sociotechnical aspects and the partial compliance with kaizen templates may indicate a limited understanding of the entire kaizen process and of how it relates to overall organisational goals. This in turn can hamper the sustainability of kaizen practices and results.

Using data from a farm in Yantai City, the theory of cost-volume-profit (CVP) analysis, and financial management methods, this paper constructs a multi-factor analysis model for improving profit management in shellfish farming projects using Excel 2007, and describes the procedure for constructing such a model. The model can quickly calculate profit, improve the level of profit management, find the break-even point, and enhance the decision-making efficiency of businesses. It is also intended as a simple analysis tool to support government decisions and corporate economic decisions. While effort has been exerted to construct a four-variable model, some equally important variables may not be discussed sufficiently due to limitations of the paper's space and the authors' knowledge. All variables can be listed in Excel 2007 and associated in a logical way to manage the profit of shellfish farming projects more efficiently and more practically.
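
A minimal sketch of the underlying CVP arithmetic, with hypothetical figures in place of the Yantai farm data:

```python
# Cost-volume-profit arithmetic for a shellfish farming project. All
# figures are hypothetical placeholders, not the Yantai farm data.
price_per_kg = 12.0        # selling price
var_cost_per_kg = 7.0      # variable cost (seed, feed, labour)
fixed_costs = 50_000.0     # annual fixed costs
volume_kg = 15_000.0       # planned harvest

margin = price_per_kg - var_cost_per_kg
profit = volume_kg * margin - fixed_costs
breakeven_kg = fixed_costs / margin

print(f"profit: {profit:,.0f}")                       # 25,000
print(f"break-even volume: {breakeven_kg:,.0f} kg")   # 10,000 kg
```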

Multivariate analysis techniques have been applied to near-infrared (NIR) spectra of coals to investigate the relationship between nine coal properties (moisture (%), ash (%), volatile matter (%), fixed carbon (%), heating value (kcal/kg), carbon (%), hydrogen (%), nitrogen (%) and sulphur (%)) and the corresponding predictor variables. In this work, the whole set of coal samples was grouped into six more homogeneous clusters following the ASTM reference method for classification prior to the application of calibration methods to each coal set. The results obtained showed a considerable improvement in the determination error compared with calibration over the whole sample set. For some groups, the established calibrations approached the quality required by the ASTM/ISO norms for laboratory analysis. To predict property values for a new coal sample, it is necessary to assign that sample to its respective group. Thus, the ability to discriminate and classify coal samples by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS) in the NIR range was also studied by applying Soft Independent Modelling of Class Analogy (SIMCA) and Linear Discriminant Analysis (LDA) techniques. Modelling of the groups by SIMCA led to overlapping models that cannot discriminate for unique classification. On the other hand, the application of Linear Discriminant Analysis improved the classification of the samples, but not enough to be satisfactory for every group considered.
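
A minimal sketch of the group-assignment step with scikit-learn's LDA, using random data in place of the real NIR/DRIFTS spectra:

```python
# Group assignment with linear discriminant analysis (scikit-learn).
# Random data with shifted means stands in for the real NIR/DRIFTS spectra
# and the six ASTM coal groups (two groups shown here).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_per_group, n_features = 40, 20
X = np.vstack([rng.normal(0.0, 1.0, (n_per_group, n_features)),
               rng.normal(0.8, 1.0, (n_per_group, n_features))])
y = np.array([0] * n_per_group + [1] * n_per_group)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy: {:.2f}".format(lda.score(X, y)))

new_sample = rng.normal(0.8, 1.0, (1, n_features))   # unknown coal sample
print("assigned group:", lda.predict(new_sample)[0])
```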

Underground electromagnetic interference (EMI) has become so serious that false alarms have occurred in monitoring systems, causing trouble for coal mine production safety. In order to overcome the difficulties caused by the explosion-proof enclosures of the equipment and the limitation on multiple start-stop cycles in transient processes during EMI measurement, a novel technique was proposed to measure the underground EMI distribution indirectly and enhance the electromagnetic compatibility (EMC) of the monitoring system. Wavelet time-frequency analysis was introduced into the underground monitoring system, so that the sources, start times, durations and waveforms of EMI could be ascertained correctly from the running records of underground electrical equipment. The electrical fast transient/burst (EFT/B) was studied to verify the validity of the wavelet analysis. The EMI filter was improved in accordance with the EMI distribution obtained from the wavelet analysis, and power-port immunity was markedly improved. In addition, the method of setting wavelet thresholds was amended based on the conventional thresholds used in wavelet filter design, so that the EFT/B at the data port was markedly restrained by the wavelet filtering. The combined effect of the EMI power filter and the wavelet filter evidently reduces false alarms in the monitoring system. It is concluded that wavelet analysis and the improved EMI filter have clearly enhanced the EMC of the monitoring system.

The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on board LPF will consist of an exhaustive suite of experiments, and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need an understanding of every instrument on board and must parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system; a very promising one implemented for LISA Pathfinder data analysis is Markov chain Monte Carlo. A series of experiments will take place during flight operations, and each experiment will provide essential information for the next in the sequence. It is therefore a priority to optimize and improve the tools available for data analysis during the mission. Using a Bayesian framework allows us to apply prior knowledge for each experiment, which means that we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal: creating a robust and reliable tool for parameter estimation during the LPF mission.
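
A minimal Metropolis sampler illustrates the Bayesian idea of carrying a prior from one experiment into the next; the one-parameter model, data, and prior below are invented assumptions, not the LTP noise models.

```python
# Minimal Metropolis MCMC for a one-parameter model, with a Gaussian prior
# standing in for knowledge carried over from an earlier experiment. The
# model, data, and prior are invented, not the LTP noise models.
import numpy as np

rng = np.random.default_rng(4)
true_gain = 1.3
data = true_gain + 0.1 * rng.standard_normal(200)    # toy measurements

def log_posterior(gain):
    log_prior = -0.5 * ((gain - 1.0) / 0.5) ** 2     # prior from earlier run
    log_like = -0.5 * np.sum(((data - gain) / 0.1) ** 2)
    return log_prior + log_like

chain, current = [], 1.0
lp_current = log_posterior(current)
for _ in range(5000):
    proposal = current + 0.02 * rng.standard_normal()
    lp_proposal = log_posterior(proposal)
    if np.log(rng.uniform()) < lp_proposal - lp_current:   # accept/reject
        current, lp_current = proposal, lp_proposal
    chain.append(current)

samples = np.array(chain[1000:])                     # discard burn-in
print(f"posterior gain = {samples.mean():.3f} +/- {samples.std():.3f}")
```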

In modern computer systems, system event logs have always been the primary source for checking system status. As computer systems become more and more complex, the interaction between software and hardware increases frequently. The components will generate enormous log information, including running reports and fault information. The sheer quantity of data is a great challenge for analysis relying on the manual method. In this paper, we implement a management and analysis system of log information, which can assist system administrators to understand the real-time status of the entire system, classify logs into different fault types, and determine the root cause of the faults. In addition, we improve the existing fault correlation analysis method based on the results of system log classification. We apply the system in a cloud computing environment for evaluation. The results show that our system can classify fault logs automatically and effectively. With the proposed system, administrators can easily detect the root cause of faults.

Laparoscopic appendectomy for acute appendicitis has become increasingly used over the past decade. The objective of this trend analysis is to assess whether clinical outcomes after laparoscopic appendectomy have improved over the past 12 years. The analysis is based on the prospective database of the Swiss Association of Laparoscopic and Thoracoscopic Surgery. All patients undergoing emergency laparoscopic appendectomy for acute appendicitis from 1995 to 2006 were included. The following outcomes were assessed for each of the 12 years: conversion rates, intraoperative complications, surgical postoperative complications, general postoperative complications, rate of reoperations, and length of hospital stay. Unadjusted and risk-adjusted multivariable analyses were performed, with statistical significance set at P < 0.05 for trend tests. This trend analysis is the first in the literature encompassing more than a decade and reporting clinical outcomes after laparoscopic appendectomy for acute appendicitis, which represents an important quality control.

For the structural dynamic analysis of large space structures, advances in structural synthesis technology and the development of structural analysis software have increased the capability to predict the dynamic characteristics of a structural system. The various subsystems comprising the system are represented by displacement functions, which are then combined to represent the total structure. Experience has indicated that even when subsystem mathematical models are verified by test, the mathematical representation of the total system is often in error, because the mathematical models of the structural elements that are significant when loads are applied at the interconnection points are not adequately verified by test. A multiple-test concept, based upon the Multiple Boundary Condition Test (MBCT), is presented which increases the accuracy of the system mathematical model by improving the subsystem test and test/analysis correlation procedure.

Full Text Available Many industries, for example automotive, have well-defined product development process definitions and risk evaluation methods. FMEA (Failure Mode and Effects Analysis) is a first-line risk analysis method in design, which has been implemented in development and production for decades. Although the first applications focused on mechanical and electrical design and functionality, today software components are implemented in many modern vehicle systems. However, neither standards nor industry-specific associations specify any "best practice" for designing the interactions of multiple entities in one model. This case study focuses on modelling interconnections and on improving the FMEA modelling process in the automotive industry. Selecting and grouping software components for the analysis is discussed, but software architecture design patterns are excluded from the study.

The quantity and quality of first-strand cDNA directly influence the accuracy of transcriptional analysis and quantification. Using a plant-derived α-tubulin as a model system, the effect of oligo sequence and DTT on the quality and quantity of first-strand cDNA synthesis was assessed via a combination of semi-quantitative PCR and real-time PCR. The results indicated that anchored oligo dT significantly improved the quantity and quality of α-tubulin cDNA compared to the conventional oligo dT. Similarly, omitting DTT from the first-strand cDNA synthesis also enhanced the levels of transcript. This is the first time that a comparative analysis has been undertaken for a plant system and it shows conclusively that small changes to current protocols can have very significant impact on transcript analysis.

Polarization analysis has been used to analyze the polarization characteristics of waves in various fields, for example electromagnetics, optics, and seismology. In seismology, polarization analysis is used to discriminate seismic phases or to enhance a specific phase (e.g., Flinn, 1965) [1], by taking advantage of differences in the polarization characteristics of seismic phases. In earthquake early warning, polarization analysis is used to estimate the epicentral direction from a single station, based on the polarization direction of the P-wave portion of seismic records (e.g., Smart and Sproules (1981) [2]; Noda et al. (2012) [3]). Improvement of the Estimation of Epicentral Direction by Polarization Analysis (EEDPA) therefore directly enhances the accuracy and promptness of earthquake early warning. In this study, the author tried to improve EEDPA using seismic records of events that occurred around Japan from 2003 to 2013. The author selected events satisfying the following conditions: MJMA larger than 6.5 (JMA: Japan Meteorological Agency), and seismic records available at no fewer than 3 stations within 300 km in epicentral distance. Seismic records obtained at stations with no information on seismometer orientation were excluded, so that a precise and quantitative evaluation of the accuracy of EEDPA became possible. In the analysis, polarization was calculated by the method of Vidale (1986) [4], which extended the method proposed by Montalbetti and Kanasewich (1970) [5] to use the analytic signal. As a result, the author found that the accuracy of EEDPA improves by about 15% if velocity records, rather than displacement records, are used, contrary to the author's expectation. The use of velocity records enables a reduction of CPU time in the integration of seismic records and an improvement in the promptness of EEDPA, although this analysis is still rough and further scrutiny is essential. At this moment, the author used seismic records obtained by simply integrating acceleration
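
The covariance-based polarization estimate at the heart of such methods can be sketched as follows; the synthetic three-component record and the simplified azimuth convention are assumptions.

```python
# Polarization direction from a 3-component record: the dominant
# polarization is the leading eigenvector of the data covariance matrix.
# The synthetic rectilinear arrival and the simplified azimuth convention
# (no quadrant resolution) are assumptions.
import numpy as np

rng = np.random.default_rng(5)
n = 400
pulse = np.sin(2 * np.pi * 5 * np.arange(n) / 100.0) * np.exp(-np.arange(n) / 80.0)
az_true = np.deg2rad(60.0)                      # assumed back-azimuth
east = np.sin(az_true) * pulse
north = np.cos(az_true) * pulse
vert = 0.5 * pulse
X = np.vstack([east, north, vert]) + 0.05 * rng.standard_normal((3, n))

C = np.cov(X)
w, V = np.linalg.eigh(C)                        # eigenvalues ascending
p = V[:, -1]                                    # dominant polarization vector
est_az = np.rad2deg(np.arctan2(p[0], p[1])) % 180.0
print(f"estimated azimuth (mod 180 deg): {est_az:.1f}")   # ~60
```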

A cost-benefit analysis using deterministic and stochastic modelling was conducted to identify the net benefits for households that adopt (1) vaccination of individual birds against Newcastle disease (ND) or (2) improved management of chick rearing, providing coops to protect chicks from predation and chick starter feed in a creep feeder to support chicks' nutrition, in village chicken flocks in Myanmar. Partial budgeting was used to assess the additional costs and benefits associated with each of the two interventions relative to neither strategy. In the deterministic model, over the first 3 years after the introduction of the interventions, the cumulative sum of the net differences from neither strategy was 13,189 Kyat for ND vaccination and 77,645 Kyat for improved chick management (effective exchange rate in 2005: 1000 Kyat = 1 US$). Both interventions were also profitable after discounting over a 10-year period; Net Present Values for ND vaccination and improved chick management were 30,791 and 167,825 Kyat, respectively. The Benefit-Cost Ratio for ND vaccination was very high (28.8). It was lower for improved chick management, due to the greater costs of the intervention, but still favourable at 4.7. Using both interventions concurrently yielded a Net Present Value of 470,543 Kyat and a Benefit-Cost Ratio of 11.2 over the 10-year period in the deterministic model. Using the stochastic model, for the first 3 years following the introduction of the interventions, the mean cumulative sums of the net differences were similar to the values obtained from the deterministic model. Sensitivity analysis indicated that the cumulative net differences were strongly influenced by grower bird sale income, particularly under improved chick management. The effects of the strategies on the odds of households selling and consuming birds after 7 months, and the numbers of birds sold or consumed after this period, also influenced profitability. Cost variations for
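
The discounting arithmetic behind the reported Net Present Values and Benefit-Cost Ratios can be sketched as follows, with hypothetical cash flows in place of the study's figures:

```python
# Discounting arithmetic behind NPV and the Benefit-Cost Ratio. The cash
# flows and the 10% discount rate are hypothetical, not the study's figures.
costs    = [5000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000]
benefits = [2000, 6000, 8000, 9000, 9000, 9000, 9000, 9000, 9000, 9000]
rate = 0.10

def present_value(flows, r):
    # Discount each year's flow back to year 0
    return sum(f / (1.0 + r) ** t for t, f in enumerate(flows))

pv_benefits = present_value(benefits, rate)
pv_costs = present_value(costs, rate)
print(f"NPV = {pv_benefits - pv_costs:,.0f}")
print(f"Benefit-Cost Ratio = {pv_benefits / pv_costs:.2f}")
```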

Full Text Available BACKGROUND. Demographic and clinical predictors of aphasia recovery have been identified in the literature. However, little attention has been devoted to identifying and distinguishing predictors of improvement for different outcomes, e.g., production of treated vs. untreated materials. These outcomes may rely on different mechanisms, and therefore be predicted by different variables. Furthermore, treatment features are not typically accounted for when studying predictors of aphasia recovery. This is partly due to the small numbers of cases reported in studies, but also to limitations of the data analysis techniques usually employed. METHOD. We reviewed the literature on predictors of aphasia recovery, and conducted a meta-analysis of single-case studies designed to assess the efficacy of treatments for verb production. The contribution of demographic, clinical, and treatment-related variables was assessed by means of Random Forests (a machine-learning technique used in classification and regression). Two outcomes were investigated: production of treated verbs (for 142 patients) and untreated verbs (for 166 patients). RESULTS. Improved production of treated verbs was predicted by a three-way interaction of pre-treatment scores on tests of verb comprehension and word repetition, and the frequency of treatment sessions. Improvement in production of untreated verbs was predicted by an interaction including the use of morphological cues, the presence of grammatical impairment, pre-treatment scores on a test of noun comprehension, and the frequency of treatment sessions. CONCLUSION. Improvement in the production of treated verbs occurs frequently. It may depend on restoring access to and/or knowledge of lexeme representations, and requires relative sparing of semantic knowledge (as measured by verb comprehension) and phonological output abilities (including working memory, as measured by word repetition). Improvement in the production of untreated verbs has not been

Background Although acute congestive heart failure (CHF) patients typically present with abnormal auscultatory findings on lung examination, lung sounds are not normally subjected to rigorous analysis. The goals of this study were to use a computerized analytic acoustic tool to evaluate lung sound patterns in CHF patients during acute exacerbation and after clinical improvement, and to compare CHF profiles with those of normal individuals. Methods Lung sounds throughout the respiratory cycle were captured using a computerized acoustic-based imaging technique. Thirty-two consecutive CHF patients were imaged at the time of presentation to the emergency department and after clinical improvement. Digital images were created, and the geographical area of the images and lung sound patterns were quantitatively analyzed. Results The geographical areas of the vibration energy image of acute CHF patients without and with radiographically evident pulmonary edema (REPE) were (67.9±4.7) and (60.3±3.5) kilo-pixels, respectively (P<0.05). After clinical improvement, the geographical area of the vibration energy image of lung sounds in CHF patients without and with REPE increased to (74.5±4.4) and (73.9±3.9) kilo-pixels (P<0.05), respectively. Vibration energy decreased in CHF patients with REPE following clinical improvement by an average of (85±19)% (P<0.01). Conclusions With clinical improvement of acute CHF exacerbations, there was a more homogeneous distribution of lung vibration energy, as demonstrated by the increased geographical area of the vibration energy image. Lung sound analysis may be useful for tracking acute CHF exacerbations.

Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2002-1 (''Quality Assurance for Safety-Related Software'') identified a number of quality assurance issues on the use of software in Department of Energy (DOE) facilities for analyzing hazards, and designing and operating controls to prevent or mitigate potential accidents. Over the last year, DOE has begun several processes and programs as part of the Implementation Plan commitments, and in particular, has made significant progress in addressing several sets of issues particularly important in the application of software for performing hazard and accident analysis. The work discussed here demonstrates that through these actions, Software Quality Assurance (SQA) guidance and software tools are available that can be used to improve resulting safety analysis. Specifically, five of the primary actions corresponding to the commitments made in the Implementation Plan to Recommendation 2002-1 are identified and discussed in this paper. Included are the web-based DOE SQA Knowledge Portal and the Central Registry, guidance and gap analysis reports, electronic bulletin board and discussion forum, and a DOE safety software guide. These SQA products can benefit DOE safety contractors in the development of hazard and accident analysis by precluding inappropriate software applications and utilizing best practices when incorporating software results to safety basis documentation. The improvement actions discussed here mark a beginning to establishing stronger, standard-compliant programs, practices, and processes in SQA among safety software users, managers, and reviewers throughout the DOE Complex. Additional effort is needed, however, particularly in: (1) processes to add new software applications to the DOE Safety Software Toolbox; (2) improving the effectiveness of software issue communication; and (3) promoting a safety software quality assurance culture.

The scope of this work is an overview of current state-of-the-art flow analysis techniques applied to the environmental determination of organic compounds expressed as total indices. Flow analysis techniques are proposed as effective tools for rapidly obtaining preliminary chemical information about the occurrence of organic compounds in the environment prior to the use of more complex, time-consuming and expensive instrumental techniques. Recently improved flow-based methodologies for the determination of chemical oxygen demand, halogenated organic compounds and phenols are presented and discussed in detail. The aim of the present work is to highlight flow-based techniques as vanguard tools for the determination of organic compounds in environmental water samples.

CAT (Cryogenic Analysis Tools) is a software package developed in the LabVIEW and ROOT environments to analyze the performance of large cryostats, where many parameters, inputs, and control variables need to be acquired and studied at the same time. The present paper describes how CAT works and the main improvements achieved in the new version, CAT 2. New graphical user interfaces have been developed to make the full package more user-friendly, and a process of resource optimization has been carried out. Offline analysis of the full cryostat performance is available both through the ROOT command-line interface and through the new graphical interfaces.

In recent years, several policies have been proposed by governments and global institutions to improve the efficient use of energy in industries worldwide. However, projects in industrial motor systems require a new approach, mainly in the area of decision making, considering the organizational barriers to energy efficiency. Despite their wide application elsewhere, multicriteria methods have until now remained unexplored in industrial motor systems. This paper proposes a multicriteria model using the PROMETHEE II method, with the aim of ranking alternatives for induction motor replacement. A comparative analysis of the model, applied to a Brazilian industry, shows that multicriteria analysis yields better performance in energy savings as well as return on investment than a single criterion. The paper strongly recommends the dissemination of multicriteria decision aiding as a policy to support decision makers in industry and to improve energy efficiency in electric motor systems. - Highlights: > The lack of a decision model in industrial motor systems is the main motivation of the research. > A multicriteria model based on the PROMETHEE method is proposed to support decision makers in industry. > The model can help overcome some barriers within industries, improving energy efficiency in industrial motor systems.

This study demonstrates the use of a geographically weighted principal components analysis (GWPCA) of remote sensing imagery to improve land cover classification accuracy. A principal components analysis (PCA) is commonly applied in remote sensing but generates global, spatially-invariant results. GWPCA is a local adaptation of PCA that locally transforms the image data, and in doing so can describe spatial change in the structure of multi-band imagery, thus directly reflecting the fact that many landscape processes are spatially heterogeneous. In this research, the GWPCA localised loadings of MODIS data are used as textural inputs, along with GWPCA localised ranked scores and the image bands themselves, to three supervised classification algorithms. Using a reference data set for land cover to the west of Jakarta, Indonesia, the classification procedure was assessed via training and validation data splits of 80/20, repeated 100 times. For each classification algorithm, the inclusion of the GWPCA loadings data was found to significantly improve classification accuracy. Further, more moderate improvements in accuracy were found by additionally including GWPCA ranked scores as textural inputs, data that provide information on spatial anomalies in the imagery. The critical importance of considering both the spatial structure and the spatial anomalies of the imagery in the classification is discussed, together with the transferability of the new method to other studies. Research topics for method refinement are also suggested.

An improved threshold shift-invariant wavelet transform de-noising algorithm for high-resolution gamma-ray spectroscopy is proposed to optimize the threshold function of the wavelet transform and reduce the signal distortion resulting from pseudo-Gibbs artificial fluctuations. The algorithm was applied to a segmented gamma scanning system with large samples, in which high continuum levels caused by Compton scattering are routinely encountered. De-noising of gamma-ray spectra measured by the segmented gamma scanning system was evaluated for the improved, shift-invariant, and traditional wavelet transform algorithms. The improved wavelet transform method generated significantly enhanced performance in the figure of merit, the root mean square error, the peak area, and the sample attenuation correction in segmented gamma scanning assays. Spectrum analysis also showed that the gamma energy spectrum can be viewed as the superposition of a low-frequency signal and high-frequency noise, and that the smoothed spectrum is suitable for straightforward automated quantitative analysis.

This dissertation describes a variety of studies meant to improve the analytical performance of inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation (LA) ICP-MS. The emission behavior of individual droplets and LA-generated particles in an ICP is studied using a high-speed, high frame rate digital camera. Phenomena are observed during the ablation of silicate glass that would cause elemental fractionation during analysis by ICP-MS. Preliminary work on ICP torch developments specifically tailored to the improvement of LA sample introduction is presented. An abnormal scarcity of metal-argon polyatomic ions (MAr⁺) is observed during ICP-MS analysis. Evidence shows that MAr⁺ ions are dissociated by collisions with background gas in a shockwave near the tip of the skimmer cone. Method development towards the improvement of LA-ICP-MS for environmental monitoring is described. A method is developed to trap small particles in a collodion matrix and analyze each particle individually by LA-ICP-MS.

Misinterpretation of the maternal heart rate (MHR) as fetal may lead to significant errors in fetal heart rate (FHR) interpretation. In this study we hypothesized that the removal of these MHR-FHR ambiguities would improve FHR analysis during the final hour of labor. Sixty-one MHR and FHR recordings were simultaneously acquired in the final hour of labor. Removal of MHR-FHR ambiguities was performed by subtracting MHR signals from their FHR counterparts when the absolute difference between the two was less than or equal to 5 beats per minute. Major MHR-FHR ambiguities were defined as those exceeding 1% of the tracing. Maternal, fetal and neonatal characteristics were evaluated in cases where major MHR-FHR ambiguities occurred, and computer analysis of the FHR recordings was compared before and after removal of the ambiguities. Seventy-two percent of tracings (44/61) exhibited episodes of major MHR-FHR ambiguities, which were not significantly associated with any maternal, fetal or neonatal characteristics, but were associated with MHR accelerations, FHR signal loss and decelerations. Removal of MHR-FHR ambiguities resulted in a significant decrease in FHR decelerations and an improvement in FHR tracing classification. FHR interpretation during the final hour of labor can therefore be significantly improved by the removal of MHR-FHR ambiguities.
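
One plausible reading of the ambiguity rule, sketched on toy traces; the flagged FHR samples are simply discarded here, and the study's exact signal handling may differ.

```python
# Toy illustration of the <= 5 bpm ambiguity rule: flagged FHR samples are
# discarded (set to NaN) here; the study's exact signal handling may differ.
import numpy as np

fhr = np.array([140., 138., 90., 92., 139., 141.])   # fetal trace (bpm)
mhr = np.array([ 88.,  89., 91., 90.,  88.,  90.])   # maternal trace (bpm)

ambiguous = np.abs(fhr - mhr) <= 5.0                 # candidate MHR pickup
fhr_clean = np.where(ambiguous, np.nan, fhr)

print("ambiguous samples:", int(ambiguous.sum()))    # 2
print("cleaned FHR:", fhr_clean)
```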

Earlier studies have yielded conflicting evidence on whether or not cardiac resynchronization therapy (CRT) improves left ventricular (LV) rotation mechanics. In dogs with left bundle branch block and pacing-induced heart failure (n=7), we studied the effects of CRT on LV rotation mechanics in vivo by 3-dimensional tagged magnetic resonance imaging with a temporal resolution of 14 ms. CRT significantly improved hemodynamic parameters but did not significantly change the LV rotation or rotation rate. LV torsion, defined as LV rotation of each slice with respect to that of the most basal slice, was not significantly changed by CRT. CRT did not significantly change the LV torsion rate. There was no significant circumferential regional heterogeneity (anterior, lateral, inferior, and septal) in LV rotation mechanics in either left bundle branch block with pacing-induced heart failure or CRT, but there was significant apex-to-base regional heterogeneity. CRT acutely improves hemodynamic parameters without improving LV rotation mechanics. There is no significant circumferential regional heterogeneity of LV rotation mechanics in the mechanically dyssynchronous heart. These results suggest that LV rotation mechanics is an index of global LV function, which requires coordination of all regions of the left ventricle, and improvement in LV rotation mechanics appears to be a specific but insensitive index of acute hemodynamic response to CRT.

Full Text Available Abstract Background The extensive use of DNA microarray technology in the characterization of the cell transcriptome is leading to an ever-increasing amount of microarray data from cancer studies. Although similar questions for the same type of cancer are addressed in these different studies, a comparative analysis of their results is hampered by the use of heterogeneous microarray platforms and analysis methods. Results In contrast to a meta-analysis approach, where results of different studies are combined at an interpretative level, we investigate here how to directly integrate raw microarray data from different studies for the purpose of supervised classification analysis. We use median rank scores and quantile discretization to derive numerically comparable measures of gene expression from different platforms. These transformed data are then used to train classifiers based on support vector machines. We apply this approach to six publicly available cancer microarray gene expression data sets, which consist of three pairs of studies, each examining the same type of cancer, i.e. breast cancer, prostate cancer or acute myeloid leukemia. For each pair, one study was performed by means of cDNA microarrays and the other by means of oligonucleotide microarrays. In each pair, high classification accuracies (>85%) were achieved with training and testing on data instances randomly chosen from both data sets in a cross-validation analysis. To exemplify the potential of this cross-platform classification analysis, we use two leukemia microarray data sets to show that important genes with regard to the biology of leukemia are selected in an integrated analysis, which are missed in either single-set analysis. Conclusion Cross-platform classification of multiple cancer microarray data sets yields discriminative gene expression signatures that are found and validated on a large number of microarray samples, generated by different laboratories and
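
A simplified rank-transform sketch of the cross-platform harmonization step, standing in for the paper's median rank scores and quantile discretization, with toy matrices in place of real expression data:

```python
# Simplified rank transform for cross-platform integration: each sample's
# gene expression values are replaced by ranks scaled to [0, 1], making
# studies from different platforms numerically comparable. Toy matrices
# replace real cDNA / oligonucleotide data.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(6)
study_a = rng.normal(8.0, 2.0, (5, 100))      # 5 samples x 100 genes
study_b = rng.normal(500.0, 150.0, (5, 100))  # same genes, different scale

def rank_transform(X):
    ranks = np.vstack([rankdata(row) for row in X])
    return (ranks - 1.0) / (X.shape[1] - 1.0)

combined = np.vstack([rank_transform(study_a), rank_transform(study_b)])
print("combined training matrix:", combined.shape)   # ready for an SVM
```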

An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi-span balance is used to illustrate the application of the improved approach.
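
Schematically, and with the load-term expansion abbreviated (the exact set of load terms is balance-specific and not given above), the augmented gage-output regression has the form:

```latex
r_g \;=\; \beta_0 \;+\; \sum_i \beta_i\,F_i \;+\; \sum_{i \le j} \beta_{ij}\,F_i F_j
\;+\; c_1\,\Delta T \;+\; c_2\,(\Delta T)^2,
\qquad \Delta T \;=\; T - T_{\mathrm{ref}}
```

where r_g is a gage output, the F_i are the calibration loads, and T_ref is the primary calibration temperature, so that ΔT = 0 at the reference condition and the tare load iteration can proceed as usual.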

Background Economic evaluation of public policies has been advocated but rarely performed. Studies from a systematic review of the health impacts of housing improvement included data on costs and some economic analysis. Examination of these data provides an opportunity to explore the difficulties and the potential for economic evaluation of housing. Methods Data were extracted from all studies included in the systematic review of housing improvement which had reported costs and economic analysis (n=29/45). The reported data were assessed for their suitability to economic evaluation. Where an economic analysis was reported the analysis was described according to pre-set definitions of various types of economic analysis used in the field of health economics. Results 25 studies reported cost data on the intervention and/or benefits to the recipients. Of these, 11 studies reported data which was considered amenable to economic evaluation. A further four studies reported conducting an economic evaluation. Three of these studies presented a hybrid ‘balance sheet’ approach and indicated a net economic benefit associated with the intervention. One cost-effectiveness evaluation was identified but the data were unclearly reported; the cost-effectiveness plane suggested that the intervention was more costly and less effective than the status quo. Conclusions Future studies planning an economic evaluation need to (i) make best use of available data and (ii) ensure that all relevant data are collected. To facilitate this, economic evaluations should be planned alongside the intervention with input from health economists from the outset of the study. When undertaken appropriately, economic evaluation provides the potential to make significant contributions to housing policy. PMID:23929616

Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential for estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that certain hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at an early stage. The present paper proposes an improved algorithm that can detect a blade vibration frequency shift due to a crack whose size is very small compared to the blade width. Such a technique could be implemented for condition-based maintenance, allowing non-contact methods to be used for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving precious information about the wheel's health. This configuration determines an acquisition time for each blade which becomes shorter as the machine's rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis yields poor frequency resolution and is not suitable for detecting small frequency shifts. Non-Harmonic Fourier Analysis, instead, showed high reliability in vibration frequency estimation even with data samples collected over a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
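
The core idea, estimating frequency on a grid much finer than the DFT bin spacing fs/N, can be illustrated in a few lines. This is a generic fine-grid projection sketch under invented signal parameters, not the authors' algorithm:

```python
import numpy as np

def estimate_frequency(x, fs, f_lo, f_hi, n_grid=2001):
    """Scan a fine frequency grid (not restricted to the DFT bins
    k*fs/N) and return the frequency whose complex exponential has
    the largest projection onto the signal."""
    x = np.asarray(x, dtype=float)
    t = np.arange(x.size) / fs
    freqs = np.linspace(f_lo, f_hi, n_grid)
    power = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x)
    return freqs[np.argmax(power)]

# Short record: 50 ms at 10 kHz; the DFT bin spacing is fs/N = 20 Hz,
# yet the grid search resolves the true 437.3 Hz to a fraction of 1 Hz.
fs = 10_000.0
t = np.arange(500) / fs
x = np.sin(2 * np.pi * 437.3 * t)
print(estimate_frequency(x, fs, 400.0, 480.0))
```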

Recent improvements to the 2-8 GHz CP-FTMW spectrometer at University of Virginia have improved the structural and spectroscopic analysis of the sevoflurane-benzene cluster. Previously reported results, although robust, were limited to a fit of the a-type transitions of the normal species in the determination of the six-fold barrier to benzene internal rotation. Structural analysis was limited to the benzene hydrogen atom positions using benzene-d_{1}. The increased sensitivity of the new 2-8 GHz setup allows for a full internal rotation analysis of the a- and c-type transitions of the normal species, which was performed with BELGI. A fit value for V_{6} of 32.868(11) cm^{-1} is determined. Additionally, a full substitution structure of the benzene carbon atom positions was determined in natural abundance. Also, new measurements of a sevoflurane/benzene-d_{1} mixture enabled detection of 33 of the 60 possible ^{2}D / ^{13}C double isotopologues. This abundance of isotopic data, a total of 45 isotopologues, enabled a full heavy atom least-squares r_{0} structure fit for the complex, including positions for all seven fluorines in sevoflurane. N. A. Seifert, D. P. Zaleski, J. L. Neill, B. H. Pate, A. Lesarri, M. Vallejo, E. J. Cocinero, F. Castaño. 67th OSU Int. Symp. On Mol. Spectrosc., Columbus, OH, 2012, MH13.

Although until April 2012, all Spanish citizens regardless of their origin, residence status and work situation were entitled to health care, available evidence suggested inadequate access for immigrants. Following the Aday and Andersen model, we conducted an analysis of policy elements that affect immigrants' access to health care in Spain, based on documentary analysis of national policies and selected regional policies related to migrant health care. Selected documents were (a) laws and plans in force at the time containing migrant health policies and (b) evaluations. The analysis included policy principles, objectives, strategies and evaluations. Results show that the national and regional policies analyzed are based on the principle that health care is a right granted to immigrants by law. These policies include strategies to facilitate access to health care, reducing barriers for entry to the system, for example simplifying requirements and raising awareness, but mostly they address the necessary qualities for services to be able to attend to a more diverse population, such as the adaptation of resources and programs, or improved communication and training. However, limited planning was identified in terms of their implementation, necessary resources and evaluation. In conclusion, the policies address relevant barriers of access for migrants and signal improvements in the health system's responsiveness, but reinforcement is required in order for them to be effectively implemented.

We present new, more precise measurements of the mass and distance of our Galaxy's central supermassive black hole, Sgr A*. These results stem from a new analysis that more than doubles the time baseline for astrometry of faint stars orbiting Sgr A*, combining two decades of speckle imaging and adaptive optics data. Specifically, we improve our analysis of the speckle images by using information about a star's orbit from the deep adaptive optics data (2005 - 2013) to inform the search for the star in the speckle years (1995 - 2005). When this new analysis technique is combined with the first complete re-reduction of Keck Galactic Center speckle images using speckle holography, we are able to track the short-period star S0-38 (K-band magnitude = 17, orbital period = 19 years) through the speckle years. We use the kinematic measurements from speckle holography and adaptive optics to estimate the orbits of S0-38 and S0-2 and thereby improve our constraints on the mass ($M_{bh}$) and distance ($R_o$) of Sgr A*: ...

Radiology tests, such as MRI, CT-scan, X-ray and ultrasound, are cost intensive and insurance pre-approvals are necessary to get reimbursement. In some cases, tests may be denied for payments by insurance companies due to lack of pre-approvals, inaccurate or missing necessary information. This can lead to substantial revenue losses for the hospital. In this paper, we present a simulation study of a centralized scheduling process for outpatient radiology tests at a large community hospital (Central Baptist Hospital in Lexington, Kentucky). Based on analysis of the central scheduling process, a simulation model of information flow in the process has been developed. Using such a model, the root causes of financial losses associated with errors and omissions in this process were identified and analyzed, and their impacts were quantified. In addition, "what-if" analysis was conducted to identify potential process improvement strategies in the form of recommendations to the hospital leadership. Such a model provides a quantitative tool for continuous improvement and process control in radiology outpatient test scheduling process to reduce financial losses associated with process error. This method of analysis is also applicable to other departments in the hospital.

A cost-benefit analysis of measures to improve air quality in an existing air-conditioned office building (11581 m2, 864 employees) was carried out for hot, temperate and cold climates and for two operating modes: Variable Air Volume (VAV) with economizer; and Constant Air Volume (CAV) with heat recovery. The annual energy cost and first cost of the HVAC system were calculated using DOE 2.1E for different levels of air quality (10-50% dissatisfied). This was achieved by changing the outdoor air supply rate and the pollution loads. Previous studies have documented a 1.1% increase in office productivity for every 10% reduction in the proportion of occupants entering a space who are dissatisfied with the air quality. With this assumption, the annual benefit due to improved air quality was always at least 10 times higher than the increase in annual energy and maintenance costs. The payback time ...
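
The stated productivity relationship reduces to simple arithmetic. In the sketch below, everything except the 1.1%-per-10% rule and the 864-employee headcount is a hypothetical placeholder, not a value from the study:

```python
# Back-of-envelope version of the cost-benefit logic above.
employees           = 864
salary_per_employee = 40_000.0   # USD/year, hypothetical
dissatisfied_before = 0.30       # 30% dissatisfied with air quality
dissatisfied_after  = 0.20       # after a ventilation upgrade

# Documented relationship: +1.1% productivity per 10% fewer dissatisfied.
productivity_gain = 0.011 * (dissatisfied_before - dissatisfied_after) / 0.10
annual_benefit = productivity_gain * employees * salary_per_employee

extra_energy_cost = 50_000.0     # USD/year, hypothetical
print(annual_benefit, annual_benefit / extra_energy_cost)
# ~380,160 USD/year benefit, ~7.6x the added cost in this toy case
```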

Wind towers for passive evaporative cooling offer a real opportunity for improving ambient comfort conditions in buildings whilst reducing the energy consumption of air-conditioning systems. This study aims at assessing the thermal performance of bioclimatic housing using wind towers realized in a hot dry region of Algeria. Performance monitoring and site measurement of the system provide data which assist model validation. The analysis and site measurements are encouraging, and they confirm the advantage of applying this passive cooling strategy in a hot dry climate. A mathematical model is developed using heat and mass transfer balances. For more effective evaporative cooling, a number of improvements to wind tower configurations are proposed. (author)

Pacific Northwest Laboratory provided support to the Office of Conservation and Renewable Energy (CE), under the Office of Planning and Assessment, to develop improved energy and environmental analysis tools. Commercial building sector energy models from the past decade were analyzed in order to provoke comment and stimulate discussion between potential model users and developers as to the appropriate structure and capability of a commercial sector energy model supported by CE. Three specific areas were examined during this review. These areas provide (1) a look at recent suggestions and guidance as to what constitutes a minimal set of requirements and capabilities for a commercial buildings energy model for CE, (2) a review of several existing models in terms of their general structure and how they match up with the requirements listed previously, and (3) an overview of a proposed improved commercial sector energy model.

The increased competitiveness in the market encourages the ongoing development of production systems and processes. The aim is to increase production efficiency so that production costs and waste are reduced as far as possible, making the product more competitive. The objective of this study was to analyze the overall results of implementing the Kaizen philosophy at a construction machinery manufacturer, using action research methodology to study in situ the macro production process, from receipt of parts to the end of the assembly line, prioritizing the analysis of shipping and handling times. The results show that the continuous improvement activities directly impact the elimination of waste from the assembly process, mainly waste related to shipping and handling, improving production efficiency by 30% in the studied processes.

Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by nonoptimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery.
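
For context, the Lennard-Jones form whose parameters (ε, σ) are being tuned, together with the first-order sensitivity logic implied above, can be written as follows; this is a schematic of the approach, not the authors' exact estimator:

```latex
U_{\mathrm{LJ}}(r) \;=\; 4\varepsilon\!\left[\left(\frac{\sigma}{r}\right)^{12}
 - \left(\frac{\sigma}{r}\right)^{6}\right],
\qquad
\Delta H(\varepsilon + \delta\varepsilon) \;\approx\; \Delta H(\varepsilon)
 \;+\; \frac{\partial\,\Delta H}{\partial \varepsilon}\,\delta\varepsilon
```

Sensitivities such as ∂ΔH/∂ε estimated from simulation indicate how small parameter adjustments would move the calculated binding enthalpy toward the calorimetric value, which is what makes the tuning efficient compared with brute-force reparametrization.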

The structure importance in Fault Tree Analysis (FTA) reflects how important Basic Events are to the Top Event. Attributes at the alternative level in the Analytic Hierarchy Process (AHP) likewise reflect their importance to the general goal. Based on the coherence of these two methods, an improved AHP is put forward. Using this improved method, how important an attribute is to the fire safety of a public building can be analyzed more credibly because of the reduction of subjective judgment. Olympic venues are very important public buildings in China, and their fire safety evaluation will be a big issue for engineers. The improved AHP is a useful tool for the safety evaluation of these Olympic venues, and it will guide evaluation in other areas.
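
A generic AHP weight computation is sketched below for reference. The comparison matrix is invented, and the paper's improved step (replacing subjective pairwise judgments with structure-importance information from FTA) is not reproduced:

```python
import numpy as np

def ahp_weights(P):
    """Weights from a pairwise-comparison matrix P via the principal
    eigenvector, with Saaty's consistency ratio (CR < 0.1 is usually
    taken as acceptably consistent)."""
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)          # principal eigenvector
    w /= w.sum()                         # normalize to sum to 1
    ci = (vals[k].real - n) / (n - 1)    # consistency index
    ri = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]  # Saaty
    cr = ci / ri[n - 1] if n > 2 else 0.0
    return w, cr

# Three fire-safety attributes compared pairwise (illustrative values).
P = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(P)   # w sums to 1; cr is small for a consistent matrix
```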

Orthopedists at the rehabilitation hospital found that disorders caused by sports injuries to the feet or by the lower back are improved by wearing dynamic shoe insoles, which improve walking balance and stability. However, the relationship between the lower back and the knees, and the rate of increase in stability, had not been quantitatively analyzed. In this study, using two accelerometers, we quantitatively analyzed the reciprocal spatiotemporal contributions between the lower back and knee of patients with left lower-back pain by means of Relative Power Contribution Analysis. When the insoles were worn, the contribution of the left and right knee relative to the left lower-back pain was up to 26% (p<0.05), and the corresponding contribution in the analysis of the left and right knee decreased by up to 67% (p<0.05). This shows an increase in stability.

In this study, we analyze the improvement of human resource management within Chinese enterprises under economic globalization. China's entry into the WTO has accelerated the economic globalization of Chinese enterprises, and the Chinese economy is further integrating with the global economy on a global scope. Human resources are what the economic globalization of Chinese enterprises relies on: the first resource for China to participate in international competition and also the key to making effective use of other resources. Nevertheless, against the background of economic globalization, human resource management in Chinese enterprises still faces quite a lot of challenges and problems. In order to establish a globalized concept of human resource management and set up a human resource management mechanism that responds to economic globalization, this study discusses and proposes management methods and improvement measures for reference.

To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established, and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full-closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video module gathers video signals and sends them to the host computer, where remote monitoring software (Visual Basic 6.0) displays the servo motor's running state in real time. The main error sources are analyzed in detail: quantitative analysis of the errors contributed by bandwidth and by the gyro sensor makes the proportion of each error in the total more intuitive and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
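
The drift-filtering step lends itself to a compact illustration. Below is a minimal scalar Kalman filter for an AR(1) drift model; the AR order, coefficients and noise levels are illustrative assumptions, since the paper's identified AR model is not given above:

```python
import numpy as np

def kalman_ar1(y, phi, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for an AR(1) gyro-drift model:
        x[k] = phi * x[k-1] + w,  w ~ N(0, q)   (drift state)
        y[k] = x[k] + v,          v ~ N(0, r)   (measured rate error)
    Returns the filtered drift estimate for each sample."""
    x, p = x0, p0
    out = np.empty(len(y))
    for k, yk in enumerate(y):
        # Predict
        x = phi * x
        p = phi * p * phi + q
        # Update
        kgain = p / (p + r)
        x = x + kgain * (yk - x)
        p = (1.0 - kgain) * p
        out[k] = x
    return out

# Illustrative use: slowly varying drift buried in sensor noise.
rng = np.random.default_rng(1)
drift = np.cumsum(rng.normal(0, 0.01, 1000))
y = drift + rng.normal(0, 0.5, 1000)
est = kalman_ar1(y, phi=0.999, q=1e-4, r=0.25)
```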

Background: Isobutanol is considered a leading candidate for the replacement of current fossil fuels, and is expected to be produced biotechnologically. Owing to its valuable features, Bacillus subtilis has been engineered as an isobutanol producer, but it needs to be further optimized for more efficient production. Since elementary mode analysis (EMA) is a powerful tool for systematic analysis of metabolic network structures and cell metabolism, it could be of great importance in rational strain improvement. Results: The metabolic network of the isobutanol-producing B. subtilis BSUL03 was first constructed for EMA. Considering the actual cellular physiological state, 239 elementary modes (EMs) were screened from the total of 11,342 EMs for potential target prediction. On this basis, lactate dehydrogenase (LDH) and the pyruvate dehydrogenase complex (PDHC) were predicted as the most promising inactivation candidates according to flux flexibility analysis and intracellular flux distribution simulation. Then, the in silico designed mutants were experimentally constructed. The maximal isobutanol yield of the LDH- and PDHC-deficient strain BSUL05 reached 61% of the theoretical value, at 0.36 ± 0.02 C-mol isobutanol/C-mol glucose, which was 2.3-fold that of BSUL03. Moreover, this mutant produced approximately 70% more isobutanol, up to a maximal titer of 5.5 ± 0.3 g/L in fed-batch fermentations. Conclusions: EMA was employed as a guiding tool to direct rational improvement of the engineered isobutanol-producing B. subtilis. The consistency between model prediction and experimental results demonstrates the rationality and accuracy of this EMA-based approach for target identification. This network-based rational strain improvement strategy could serve as a promising concept for engineering efficient B. subtilis hosts for isobutanol, as well as other valuable products.

By mixing different types of polymer into concrete, the toughness of the concrete is investigated, and the results indicate that polymer has an obvious effect in improving the toughness of concrete. The microstructure of polymer-modified concrete was studied with an environmental scanning electron microscope and a digital micro-hardness tester; the results show that the polymer acts as a flexible filler and reinforcement in the concrete and alters the microstructure of the mortar and the interfacial transition zone (ITZ). Crack-path prediction and energy-consumption analysis show that the crack path of polymer-modified concrete is more tortuous and consumes more energy than that of ordinary concrete.

Network flow control is formulated as a global optimization problem of user profit. A general global-optimization flow control model is established. This model, combined with a stochastic model of TCP, is used to study the global rate allocation characteristics of TCP. Analysis shows that when active queue management is used in the network, TCP rates tend to be allocated to maximize the aggregate of a user utility function Us (called Us fairness). The TCP throughput formula is derived. An improved TCP congestion control mechanism is proposed. Simulations show that its throughput is TCP-friendly when competing with existing TCP and that its rate changes are smoother. It is therefore suitable for carrying multimedia applications.
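
The underlying optimization is the standard network-utility-maximization problem; written schematically (the paper's exact utility function Us is not reproduced here):

```latex
\max_{x_s \,\ge\, 0} \;\; \sum_{s \in S} U_s(x_s)
\qquad \text{subject to} \qquad
\sum_{s:\; l \,\in\, s} x_s \;\le\; c_l \quad \forall\, l \in L
```

where x_s is the sending rate of source s, c_l is the capacity of link l, and each link constraint sums the rates of all sources routed through that link.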

In this paper we propose the use of principal component analysis to process measured acceleration data in order to determine the direction of acceleration with the highest variance at a given frequency of interest. This method can be used to improve the power generated by inertial energy harvesters: their power output is highly dependent on the excitation acceleration magnitude and frequency, but the axes of the acceleration measurements might not always be perfectly aligned with the directions of movement, so the generated power output might be severely underestimated in simulations, possibly leading to false conclusions about the feasibility of using an inertial energy harvester for the examined application.
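
A sketch of this idea: band-limit the 3-axis measurements around the excitation frequency, then take the first principal axis of the covariance as the dominant acceleration direction. The sample rate, filter order and test signal are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dominant_axis(acc, fs, f0, bw=2.0):
    """Direction of maximum acceleration variance near a frequency of
    interest, via principal component analysis.

    acc : (N, 3) array of x/y/z accelerations; fs : sample rate [Hz]
    f0  : excitation frequency of interest [Hz]; bw : band half-width
    Returns (unit direction vector, projected 1-D acceleration)."""
    nyq = fs / 2.0
    b, a = butter(4, [(f0 - bw) / nyq, (f0 + bw) / nyq], "bandpass")
    x = filtfilt(b, a, np.asarray(acc, dtype=float), axis=0)
    x = x - x.mean(axis=0)
    # Right singular vectors of the centered data = principal axes.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    direction = vt[0]
    return direction, x @ direction

# Example: excitation mostly along a tilted axis at 12 Hz.
fs = 200.0
t = np.arange(4000) / fs
axis = np.array([0.6, 0.7, 0.39]); axis /= np.linalg.norm(axis)
acc = np.outer(np.sin(2 * np.pi * 12 * t), axis) \
      + 0.05 * np.random.randn(4000, 3)
d, a1 = dominant_axis(acc, fs, f0=12.0)   # d is close to +/- axis
```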

In recent years, some quantum proxy signature schemes based on controlled teleportation have been proposed by Cao et al. In these schemes, the properties of quantum mechanics are directly applied to ensure security. In this paper, we summarize a general model from these quantum proxy signature schemes. Furthermore, it can be seen that some loopholes exist that were not considered in previous analyses. Specifically, the receiver can forge a valid signature, and these schemes are not immune to collusive attack. To overcome these loopholes, some improved ideas are presented in this paper.

IT project governance involves establishing authority structures, policies and mechanisms for IT projects. However, the way governance arrangements are implemented can sometimes exclude or marginalise important stakeholders. In this paper, we use critical systems thinking, and the notions of boundary critique and entrenched structural conflict, to inform a critical re-analysis of a case study where the governance proved relatively ineffective. We use the ‘twelve questions’ from the critical systems heuristics (CSH) approach to diagnose problems with governance arrangements and suggest solutions. Based on this, we suggest the CSH approach has theoretical and practical efficacy for improving IT project governance in general.

The Vertical Shear Instability is one of two known mechanisms potentially active in the so-called dead zones of protoplanetary accretion disks. A recent analysis indicates that a subset of unstable modes shows unbounded growth - both as resolution is increased and when the nominal lid of the atmosphere is extended - possibly indicating ill-posedness in previous linear analyses. The reduced equations governing the instability are revisited, and the generated solutions are examined using both the previously assumed separable forms and an improved non-separable solution form that is introduced here. Analyzing the reduced equations using the separable form shows that, while the low-order body modes have converged eigenvalues and eigenfunctions (as both the vertical boundaries of the atmosphere are extended and with increased radial resolution), it is also confirmed that the corresponding high-order body modes and the surface modes do indeed show unbounded growth rates. However, the energy contained ...

The goal of this paper is to emphasize the importance of developing complete and unambiguous requirements early in the project cycle (prior to Preliminary Design Phase). Having a complete set of requirements early in the project cycle allows sufficient time to generate a traceability matrix. Requirements traceability and analysis are the key elements in improving verification and validation process, and thus overall software quality. Traceability can be most beneficial when the system changes. If changes are made to high-level requirements it implies that low-level requirements need to be modified. Traceability ensures that requirements are appropriately and efficiently verified at various levels whereas analysis ensures that a rightly interpreted set of requirements is produced.

In past years, modern mathematical methods for image analysis have led to a revolution in many fields, from computer vision to scientific imaging. However, some recently developed image processing techniques successfully exploited by other sectors have rarely, if ever, been tried on astronomical observations. We present here tests of two classes of variational image enhancement techniques - "structure-texture decomposition" and "super-resolution" - showing that they are effective in improving the quality of observations. Structure-texture decomposition makes it possible to recover faint sources previously hidden by the background noise, effectively increasing the depth of available observations. Super-resolution yields a higher-resolution and better-sampled image out of a set of low-resolution frames, thus mitigating problems in data analysis arising from the difference in resolution/sampling between different instruments, as in the case of the EUCLID VIS and NIR imagers.

Lack of reproducibility is an ongoing problem in some areas of the biomedical sciences. Poor experimental design and a failure to engage with experienced statisticians at key stages in the design and analysis of experiments are two factors that contribute to this problem. The RIPOSTE (Reducing IrreProducibility in labOratory STudiEs) framework has been developed to support early and regular discussions between scientists and statisticians in order to improve the design, conduct and analysis of laboratory studies and, therefore, to reduce irreproducibility. This framework is intended for use during the early stages of a research project, when specific questions or hypotheses are proposed. The essential points within the framework are explained and illustrated using three examples (a medical equipment test, a macrophage study and a gene expression study). Sound study design minimises the possibility of bias being introduced into experiments and leads to higher quality research with more reproducible results.

The paper presents the results of an analysis performed to search for feasible design improvements for a capacitive micromachined ultrasonic transducer. The search was aided by sensitivity analysis and the application of the Response Surface Method. A multiphysics approach has been taken in the elaborated finite element model of one cell of the described transducer, in order to include the significant physical phenomena present in the modelled microdevice. The set of twelve uncertain input and design parameters consists of geometric, material and control properties. The amplitude of the dynamic membrane deformation of the transducer has been chosen as the studied parameter. The objective of the performed study was defined as the task of finding robust design configurations of the transducer, i.e. configurations characterized by a maximal value of the deformation amplitude with minimal variation.

The aim of the paper is to highlight the significance of applying risk analysis and quality control methods for the improvement of the parameters of a lead molding process. For this purpose, a Failure Mode and Effect Analysis (FMEA) was developed at the conceptual stage of a new product, TC-G100-NR. However, the final product was faulty (a complete lack of adhesion of the brass insert, leading to leakage), regardless of the previously defined potential problem and its preventive action. This contributed to the recognition of root causes and corrective actions, and to a change of production parameters. It showed how these methods, the level of their organization, and systematic and rigorous study affect molding process parameters.

Awkward work posture is associated with the development of musculo-skeletal disorders. Previous workplace investigations in new building construction have shown that physical work affects workers' health in 46% of jobs. There is, however, a need for detailed analysis of jobs having physical workload and ergonomics problems. OWAS (Ovako Working Posture Analysing System) is a simple observation method for postural analysis, but there has been no study of its use in the building construction industry. The work described here examined (a) the use of the OWAS method to analyse work postures in building construction, (b) the development of a portable computer system for the OWAS method, (c) improvement of work postures identified as poor, and (d) use of the results as part of the ergonomics training programme of the company. Suggestions for work redesign measures are given.

This paper presents a practical approach for the rational choice of silica nanopowders as modifiers to control and improve the performance of protective coating systems operating in harsh environmental conditions. The approach is based on multiparameter analysis of the reactivity of similar silica nanoparticles synthesized by chemical and by physical methods. The analysis indicates distinct adsorption centers due to the differences in particle formation; the features of the formation and adsorption mechanisms lead to a higher diffusion capacity of the nanoparticles synthesized by physical methods into a paint material, and finally result in stronger chemical bonds between the system elements. The approach allows the consumption of paint materials to be reduced by 30% or more and the coating adhesion, and hence the system life, to be increased at least 2-3-fold. The validity of the approach is illustrated through data obtained from comparative modeling, factory testing, and practical use of modified systems.

The Essentials of Baccalaureate Education for Professional Nursing Practice provides a framework for building the baccalaureate education for the twenty-first century. One of the exemplars included in the essentials toolkit includes student participation in an actual root cause analysis (RCA) or failure mode effects analysis. To align with this exemplar, faculty at the University of Michigan School of Nursing developed a pilot RCA project for the senior-level Leadership and Management course. While working collaboratively with faculty and unit liaisons at the University Health System, students completed an RCA on a nursing sensitive indicator (pain assessment or plan of care compliance). An overview of the pilot project, including the implementation process, is described. Each team of students identified root causes and recommendations for improvement on clinical and documentation practice within the context of the unit. Feedback from both the unit liaisons and the students confirmed the pilot's success.

Probabilistic risk assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has limitations in the analysis of human actions that have long been recognized as a constraint when using PRA; in fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors associated with inappropriate interventions by operators in operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification.

Time-series analysis of Synthetic Aperture Radar (SAR) data using the two techniques of Small BAseline Subset (SBAS) and Persistent Scatterer Interferometric SAR (PSInSAR) extends the capability of the conventional interferometry technique for deformation monitoring and mitigates many of its limitations. Using dual/quad-polarized data provides an additional source of information with which to further improve the capability of InSAR time-series analysis. In this paper we use dual-polarized data and combine Amplitude Dispersion Index (ADI) optimization of pixels with a phase-stability criterion for PSInSAR analysis. ADI optimization is performed using a simulated annealing algorithm to increase the number of Persistent Scatterer Candidates (PSC). The phase stability of the PSCs is then measured using their temporal coherence to select the final set of pixels for deformation analysis. We evaluate the method on a dataset comprising 17 dual-polarization (HH/VV) TerraSAR-X acquisitions from July 2013 to January 2014 over a subsidence area in Iran, and compare the effectiveness of the method for agricultural and urban regions. The results reveal that using the optimum scattering mechanism decreases the ADI values in urban and non-urban regions. Compared with single-pol data, the use of the optimized polarization initially increases the number of PSCs by about a factor of three and improves the final PS density by about 50%, in particular in regions with a high rate of deformation, which suffer from losing phase stability over time. Classification of PS pixels based on their optimum scattering mechanism revealed that the dominant scattering mechanism of the PS pixels in the urban area is double-bounce, while for the non-urban regions (ground surfaces and farmlands) it is mostly single-bounce.
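
The ADI criterion itself is a one-line statistic over the amplitude stack. The sketch below shows the conventional single-channel computation; the polarimetric optimization by simulated annealing described above is omitted, and the stack size and the 0.25 threshold are common illustrative choices rather than values from this paper:

```python
import numpy as np

def amplitude_dispersion_index(amp_stack):
    """Amplitude Dispersion Index per pixel for PS candidate selection.

    amp_stack : (n_images, n_pixels) SAR amplitude stack.
    ADI = sigma_A / mu_A; low ADI marks pixels whose phase is likely
    stable over time (the classic persistent-scatterer criterion)."""
    amp = np.asarray(amp_stack, dtype=float)
    return amp.std(axis=0) / amp.mean(axis=0)

# Select persistent scatterer candidates from a toy 17-image stack.
rng = np.random.default_rng(2)
stack = rng.gamma(shape=4.0, scale=1.0, size=(17, 10_000))
adi = amplitude_dispersion_index(stack)
psc_mask = adi < 0.25   # illustrative threshold
```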

Comparison of proximate analyses obtained using ASTM (American Society for Testing of Materials) methods with those from the Fisher coal analyzer shows that the analyzer gives consistently low moisture and ash values, and high volatile matter values. While the accuracy of moisture and ash determinations can be improved by introducing various instrument and crucible modifications, volatile matter values are less accurate, mainly because of differences in heating rates. However, reproducibility of results is very good and, with modifications, the instrument can be used to advantage for internal purposes, chiefly because of its large sample capacity. In ultimate analysis of coals using the Perkin-Elmer elemental analyzer, the main problem is that the initial purge-gas flushing period after sample introduction partially removes water from the sample. Trials of various methods of sample drying have shown that the best approach is to dry the sample directly in the instrument at the temperature used for moisture determination; with this modification of the analytical cycle, excellent reproducibility and correlation with the ASTM method have been achieved. The proximate and ultimate analyses of samples of extracts and extract residues are impaired by the presence of residual solvent. The samples can contain up to 10% residual solvent, which appears as moisture in the proximate analysis. The report describes several ways of removing the solvent so that accurate analyses can be obtained. The foregoing modifications to procedures and equipment have considerably improved both the accuracy and the reliability of results obtained by instrumental methods. In consequence, considerably more samples can be handled than by using ASTM standard procedures. 4 refs., 1 fig., 19 tabs.

Spinal cord stimulation has been in clinical use for the treatment of chronic pain for over four decades. Since its initial use by Norman Shealy, the indications for its use have steadily increased to include neuropathic pain owing to failed back surgery syndrome, complex regional pain syndrome and painful diabetic peripheral neuropathies. To date, the precise mechanism of action of spinal cord stimulation remains unclear, yet it is still one of the most expensive interventional treatment modalities available in pain medicine, with increasing application across the world. Given the worldwide focus on cost-effective care, there is an opportunity to focus on process analysis as a mechanism for optimizing the operations within and between all specialties engaged in the provision of care in pain medicine. Here, we propose a process-analysis approach to model, measure and improve the delivery of disease-based care, to enhance effective treatment with a costly modality. Systems-based process analysis is not widely utilized in pain medicine, and there is a limited body of evidence for its application. The purpose of this article is to generate interest in the discipline of process analysis in pain medicine, as it has found value in other healthcare settings and industries. We note its applicability across countries and specialties in the hope of raising awareness of this concept and generating interest in further examination by investigators, leading to the development of highly efficient and effective healthcare delivery processes and systems across the globe.

Fetal heart rate (FHR) monitoring is used routinely in labor, but conventional methods have a limited capacity to detect fetal hypoxia/acidosis. An exploratory study was performed on the simultaneous assessment of maternal heart rate (MHR) and FHR variability, to evaluate their evolution during labor and their capacity to detect newborn acidemia. MHR and FHR were simultaneously recorded in 51 singleton term pregnancies during the last two hours of labor and compared with newborn umbilical artery blood (UAB) pH. Linear/nonlinear indices were computed separately for MHR and FHR. Interaction between MHR and FHR was quantified through the same indices on FHR-MHR and through their correlation and cross-entropy. Univariate and bivariate statistical analysis included nonparametric confidence intervals and statistical tests, receiver operating characteristic curves and linear discriminant analysis. Progression of labor was associated with a significant increase in most MHR and FHR linear indices, whereas entropy indices decreased. FHR alone and in combination with MHR as FHR-MHR evidenced the highest auROC values for prediction of fetal acidemia, with 0.76 and 0.88 for the UAB pH thresholds 7.20 and 7.15, respectively. The inclusion of MHR on bivariate analysis achieved sensitivity and specificity values of nearly 100 and 89.1%, respectively. These results suggest that simultaneous analysis of MHR and FHR may improve the identification of fetal acidemia compared with FHR alone, namely during the last hour of labor.

The laparoscopic suturing task is a complex procedure that requires objective assessment of surgical skills. An analysis of the components of the laparoscopic suturing task was performed to improve current objective assessment tools. Twelve subjects participated in this study, in three groups of four surgeons (novices, intermediates and experts). A box trainer and organic tissue were used to perform the experiment while tool movements were recorded with an augmented-reality haptic system. All subjects were right-handed and tied a surgeon's knot. The laparoscopic suturing procedure was decomposed into four subtasks, and different objective metrics were applied during tool-motion analysis (TMA). Statistical analysis was performed, and results from the three groups were compared using the Jonckheere-Terpstra test, with differences considered significant when P ≤ 0.05. Several metrics of the first, second and fourth subtasks differed significantly between the three groups; subtasks 1 and 2 had more significantly different metrics than subtask 4. Almost all metrics showed superior task execution by experts (lower time, total path length and number of movements) compared with intermediates and novices. The most important subtasks during the suture learning process are needle puncture and the first knot. TMA could be a useful objective assessment tool to discriminate surgical experience and could in the future be used to measure and certify surgical proficiency.

Proteomic analysis of bacterial samples provides valuable information about cellular responses and functions under different environmental pressures. Analysis of cellular proteins is dependent upon efficient extraction from bacterial samples, which can be challenging with increasing complexity and refractory characteristics. While no single method can recover 100% of the bacterial proteins, selected protocols can improve overall protein isolation, peptide recovery, or enrichment for certain classes of proteins. The method presented here is technically simple, does not require specialized equipment such as a mechanical disrupter, and is effective for protein extraction of the particularly challenging sample type of Bacillus anthracis Sterne spores. The ability of trichloroacetic acid (TCA) extraction to isolate proteins from spores and enrich for spore-specific proteins was compared to the traditional mechanical disruption method of bead beating. TCA extraction improved the total average number of proteins identified within a sample as compared to bead beating (547 vs 495, respectively). Further, TCA extraction enriched for 270 spore proteins, including those typically identified by first isolating the spore coat and exosporium layers. Bead beating enriched for 156 spore proteins more typically identified from whole spore proteome analyses. The total average number of proteins identified was equal using TCA or bead beating for easily lysed samples, such as B. anthracis vegetative cells. As with all assays, supplemental methods such as implementation of an alternative preparation method may simplify sample preparation and provide additional insight into the protein biology of the organism being studied.

In this paper, an improved joint time-frequency (TF) analysis method based on the reassigned smoothed pseudo Wigner-Ville distribution (RSPWVD) is proposed for interference detection in Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, a two-dimensional low-pass smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time the reassignment method is adopted to improve the TF concentration of the auto-terms of the signal components. The proposed interference detection method is evaluated in experiments on GPS L1 signals in jamming scenarios and compared with state-of-the-art interference detection approaches. The analysis results show that the proposed technique effectively overcomes the cross-term problem while preserving good TF localization, which proves effective in enhancing the interference detection performance of GNSS receivers, particularly in jamming environments.

Block copolymers constitute a fascinating class of polymeric materials that are used in a broad range of applications. The performance of these materials is strongly coupled to the physical and chemical properties of the constituent block copolymers. Traditionally, the composition of block copolymers is obtained by 1H NMR spectroscopy on purified copolymer fractions: the integrals of a properly selected set of 1H resonances are compared and used to infer the number-average molecular weight (M(n)) of one of the blocks from the (typically known) M(n) value of the other. As a corollary, compositional determinations performed on imperfectly purified samples lead to serious errors, especially when isolation of the block copolymer from the initial macroinitiator is tedious. This investigation shows that Diffusion Ordered NMR Spectroscopy (DOSY) can be used to assess the degree of advancement of the copolymerization reaction and of its purification, in order to optimize them and hence contribute to an improved compositional analysis of the resulting copolymer. To this purpose, a series of amphiphilic polystyrene-b-poly(ethylene oxide) block copolymers, obtained by controlled free-radical nitroxide-mediated polymerization, were analyzed, and it is shown that, under proper experimental conditions, DOSY allows an improved compositional analysis of these block copolymers.

As the essential foundation of noise reduction, many noise-source identification methods have been developed and applied in engineering practice. To identify the noise sources of different engine parts over a broad frequency band at various typical speeds, this paper presents an integrated noise-source identification method based on ensemble empirical mode decomposition (EEMD), coherent power spectrum analysis, and an improved analytic hierarchy process (AHP). The measured noise is decomposed into several IMFs with physical meaning, which ensures that the coherence analysis between the IMFs and the vibration signals is meaningful. An improved AHP is developed by introducing an objective weighting function to replace the traditional subjective evaluation, which makes the results independent of subjective judgments and provides better consistency at the same time. The proposed noise identification model is applied to identifying the surface-radiated noise of a diesel engine. As a result, the frequency-dependent contributions of different engine parts to different test points at different speeds are obtained, and an overall weight order is obtained as oil pan > left body > valve chamber cover > gear chamber casing > right body > flywheel housing, which provides effectual guidance for noise reduction.
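
The coherent-power step can be illustrated with a standard magnitude-squared coherence computation; the EEMD decomposition into IMFs is omitted here (third-party implementations exist), and the signals, sampling rate and window length are invented for the example:

```python
import numpy as np
from scipy.signal import coherence

def coherent_power(noise, vibration, fs, nperseg=1024):
    """Magnitude-squared coherence between a measured noise signal
    (or one of its EEMD modes) and a part's vibration signal.

    Frequencies where coherence is high indicate that the part
    contributes strongly to the radiated noise in that band."""
    f, cxy = coherence(noise, vibration, fs=fs, nperseg=nperseg)
    return f, cxy

# Toy example: one part's vibration leaks into the noise near 315 Hz.
fs = 8192.0
t = np.arange(8 * int(fs)) / fs
vib = np.sin(2 * np.pi * 315 * t) + 0.3 * np.random.randn(t.size)
noise = 0.8 * vib + 0.7 * np.random.randn(t.size)
f, cxy = coherent_power(noise, vib, fs)   # cxy peaks around 315 Hz
```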

Highlights: • Sensitivity analysis is performed on the reflood model of RELAP5. • The selected influential models are discussed and modified. • The modifications are assessed against the FEBA experiment and better predictions are obtained. - Abstract: Reflooding is an important and complex process for the safety of a nuclear reactor during a loss of coolant accident (LOCA). Accurate prediction of reflooding behavior is one of the challenging tasks in current system-code development. RELAP5, as a widely used system code, has the capability to simulate this process, but with limited accuracy, especially for low-inlet-flow-rate reflooding conditions. Through a preliminary assessment with six FEBA (Flooding Experiments with Blocked Arrays) tests, it is observed that the peak cladding temperature (PCT) is generally underestimated and bundle quench is predicted too early compared with the experimental data. In this paper, the improvement of constitutive models related to reflooding is carried out based on single-parameter sensitivity analysis. The film boiling heat transfer model and the interfacial friction model of dispersed flow are selected as the models most influential on the results of interest. Studies and discussions are then focused on these sensitive models, and proper modifications are recommended. The proposed improvements are implemented in the RELAP5 code and assessed against the FEBA experiment. Better agreement between calculations and measured data is obtained for both cladding temperature and quench time.

Internet of Things is a ubiquitous concept where physical objects are connected over the internet and are provided with unique identifiers to enable their self-identification to other devices, together with the ability to continuously generate data and transmit them over a network. Hence, the security of the network, data and sensor devices is a paramount concern in the IoT network as it grows very fast in terms of exchanged data and interconnected sensor nodes. This paper analyses the authentication and access control method for the Internet of Things presented by Jing et al. (Authentication and Access Control in the Internet of Things. In Proceedings of the 2012 32nd International Conference on Distributed Computing Systems Workshops, Macau, China, 18-21 June 2012, pp. 588-592). According to our analysis, Jing et al.'s protocol is costly in message exchange, and its security assessment is not strong enough for such a protocol. We therefore propose improvements to the protocol to fill the discovered weakness gaps. The protocol enhancements facilitate many services for the users, such as user anonymity, mutual authentication, and secure session-key establishment. Finally, performance and security analysis shows that the improved protocol possesses many advantages against popular attacks and achieves better efficiency at low communication cost.

The performance of the prompt gamma neutron activation analysis (PGNAA) facility at the MIT Research Reactor has been improved by a series of modifications. These modifications have increased the flux at the sample position by a factor of three, to 1.7 × 10^7 n/cm^2·s, and have increased the sensitivity, on average, by a factor of 2.5. The background for many samples of interest is dominated by unavoidable neutron interactions that occur in or near the sample; other background components comprise only 20% of the total background count rate. The implementation of fast electronics has helped to keep dead time reasonable, in spite of the increased count rates. The PGNAA facility at the MIT Research Reactor continues to serve as a major analytical tool for quantifying 10B in biological samples for Boron Neutron Capture Therapy (BNCT) research. The sensitivity for boron-10 in water is 18 750 cps/mg. The sensitivities for pure elements suitable for PGNAA analysis are reported. Possible further improvements are discussed.

Near-infrared (NIR) spectroscopy will be an even more promising tool for quantitative analysis if the predictive ability of the calibration model can be further improved. To achieve this goal, a new ensemble calibration method based on uninformative variable elimination (UVE)-partial least squares (PLS) is proposed, named ensemble PLS (EPLS), meaning a fusion of multiple PLS models. In this method, different calibration sets are first generated by bootstrapping, and different PLS models are obtained. Then, UVE is used to shrink the original variable space into a specific subspace. By repeating this process, a fixed number of candidate PLS member models are obtained. Finally, a smaller subset of the candidate models is integrated to produce an ensemble model. In order to verify the performance of EPLS, three NIR spectral datasets from the food industry were used for illustration. Single-model full-spectrum PLS and UVEPLS were used as references. It was found that the proposed method leads to lower RMSEP (root mean square error of prediction) values than PLS and UVEPLS, and that the improvement is statistically significant according to a paired t-test. The results show that the method is of value for enhancing the predictive ability of PLS-based calibration involving complex NIR matrices in food analysis.
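
A simplified sketch of the ensemble idea: bootstrap the calibration set, fit one PLS model per replicate, keep the models with the lowest out-of-bag error, and average their predictions. The UVE step (discarding variables whose regression coefficients are no more reliable than added noise variables) is omitted for brevity, and all sizes and data are invented:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def ensemble_pls_predict(X, y, X_new, n_models=50, n_keep=10,
                         n_components=5, seed=None):
    """Bootstrap-ensemble PLS: fit many PLS models on bootstrap
    replicates, rank them by out-of-bag RMSE, average the best."""
    rng = np.random.default_rng(seed)
    n = len(y)
    models, oob_rmse = [], []
    for _ in range(n_models):
        idx = rng.integers(0, n, n)                  # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag rows
        if oob.size == 0:
            continue
        m = PLSRegression(n_components=n_components).fit(X[idx], y[idx])
        err = np.sqrt(np.mean((m.predict(X[oob]).ravel() - y[oob]) ** 2))
        models.append(m)
        oob_rmse.append(err)
    best = np.argsort(oob_rmse)[:n_keep]             # keep best models
    preds = np.column_stack([models[i].predict(X_new).ravel()
                             for i in best])
    return preds.mean(axis=1)

# Toy NIR-like data: 80 calibration spectra, 200 wavelengths.
rng = np.random.default_rng(3)
X = rng.normal(size=(80, 200))
beta = np.zeros(200); beta[40:60] = 1.0              # informative band
y = X @ beta + 0.1 * rng.normal(size=80)
y_hat = ensemble_pls_predict(X, y, X[:5], seed=4)
```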

The plasticity-based distortion prediction method was improved to address the computationally intensive nature of welding simulations. Plastic strains, which are typically first computed using either two-dimensional (2D) or three-dimensional (3D) thermo-elastic-plastic analysis (EPA) on finite element models of simple weld geometry, are mapped to the full structure finite element model to predict distortion by conducting a linear elastic analysis. To optimize welding sequence to control distortion, a new theory was developed to consider the effect of weld interactions on plastic strains. This improved method was validated with experimental work on a Tee joint and tested on two large-scale welded structures—a light fabrication and a heavy fabrication—by comparing against full-blown distortion predictions using thermo-EPA. 3D solid and shell models were used for the heavy and light fabrications, respectively, to compute plastic strains due to each weld. Quantitative comparisons between this method and thermo-EPA indicate that this method can predict distortions fairly accurately—even for different welding sequences—and is roughly 1-2 orders of magnitude faster. It was concluded from these findings that, with further technical development, this method can be an ideal solver for optimizing welding sequences.

OBJECTIVE: Whilst regular exercise is advocated for people with type 1 diabetes, the benefits of this therapy are poorly delineated. Our objective was to review the evidence for a glycaemic benefit of exercise in type 1 diabetes. RESEARCH DESIGN AND METHODS: Electronic database searches were carried out in MEDLINE, Embase, Cochrane's Controlled Trials Register and SPORTDiscus. In addition, we searched for as yet unpublished but completed trials. Glycaemic benefit was defined as an improvement in glycosylated haemoglobin (HbA1c). Both randomised and non-randomised controlled trials were included. RESULTS: Thirteen studies were identified in the systematic review. Meta-analysis of twelve of these (including 452 patients) demonstrated an HbA1c reduction, but this was not statistically significant (standardised mean difference (SMD) -0.25; 95% CI, -0.59 to 0.09). CONCLUSIONS: This meta-analysis does not reveal evidence for a glycaemic benefit of exercise as measured by HbA1c. Reasons for this finding could include increased calorie intake, insulin dose reductions around the time of exercise, or lack of power. We also suggest that HbA1c may not be a sensitive indicator of glycaemic control, and that improvement in glycaemic variability may not be reflected in this measure. Exercise does, however, have other proven benefits in type 1 diabetes, and remains an important part of its management.
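
For readers unfamiliar with the pooling behind a figure like "SMD -0.25; 95% CI, -0.59 to 0.09", a minimal fixed-effect inverse-variance pooling is sketched below; the study-level inputs are invented, and the review itself may well have used a random-effects model:

```python
import numpy as np

def pooled_smd(smd, se):
    """Fixed-effect inverse-variance pooling of standardised mean
    differences. Returns the pooled SMD and its 95% CI."""
    smd = np.asarray(smd, dtype=float)
    se = np.asarray(se, dtype=float)
    w = 1.0 / se**2                       # inverse-variance weights
    est = np.sum(w * smd) / np.sum(w)
    se_pooled = 1.0 / np.sqrt(np.sum(w))
    return est, (est - 1.96 * se_pooled, est + 1.96 * se_pooled)

# Illustrative numbers only (not the review's study-level data).
est, ci = pooled_smd([-0.4, -0.1, -0.3], [0.20, 0.25, 0.30])
# A CI that crosses zero, as here, means no statistically
# significant pooled effect.
```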

This paper provides results of the application of a holistic, systematic approach to water accounting, using remote sensing and GIS coupled with groundwater modeling, to evaluate water-saving options by tracking non-beneficial evaporation in the Liuyuankou Irrigation System (LIS) of China. Groundwater rise is a major issue in the LIS, where groundwater levels have risen alarmingly close to the ground surface (within 1 m) near the Yellow River. The lumped water balance analysis showed high fallow evaporation losses, which need to be reduced to improve water productivity.

The seasonal actual evapotranspiration (ETs) was estimated by applying the SEBAL algorithm to eighteen NOAA AVHRR-12 images over the year 1990–1991. This analysis was aided by an unsupervised land use classification applied to two Landsat 5 TM images of the study area. SEBAL results confirmed that a significant amount (116.7 MCM) of water can be saved by reducing ETs from fallow land, which would improve water productivity in the irrigation system. The water accounting indicator (for the analysis period) shows that the process fraction per unit of depleted water (PFdepleted) is 0.52 for the LIS, meaning that 52% of the depleted water is consumed by agricultural crops and 48% is lost through non-process depletion.

Finally, groundwater modeling was applied to simulate three land use and water management interventions and assess their effectiveness for water savings and their impact on groundwater in the LIS. MODFLOW's Zone Budget code calculates the groundwater budget of user-specified subregions and the exchange of flows between subregions, as well as a volumetric water budget for the entire model at the end of each time step. The simulation results showed that fallow evaporation could be reduced by between 14.2% (25.51 MCM) and 45.3% (81.36 MCM) through interventions such as canal lining and ground

The past oil crises have caused dramatic improvements in fuel efficiency in all industrial sectors. The aviation sector, aircraft manufacturers and airlines, has also made significant efforts to improve fuel efficiency through more advanced jet engines, high-lift wing designs, and lighter airframe materials. However, the innovations in energy-saving aircraft technologies do not coincide with the oil crisis periods. The largest improvement in aircraft fuel efficiency took place in the 1960s, while the high oil prices of the 1970s and onward did not induce manufacturers or airlines to achieve a faster rate of innovation. In this paper, we employ a historical analysis to examine the socio-economic reasons behind the relatively slow technological innovation in aircraft fuel efficiency over the last 40 years. Based on the industry and passenger behaviors studied and prospects for alternative fuel options, this paper offers insights for the aviation sector to shift toward more sustainable technological options in the medium term. Second-generation biofuels could be the feasible option with a meaningful reduction in aviation's lifecycle environmental impact if they can achieve sufficient economies of scale.

Working memory (WM), the ability to store and manipulate information for short periods of time, is an important predictor of scholastic aptitude and a critical bottleneck underlying higher-order cognitive processes, including controlled attention and reasoning. Recent interventions targeting WM have suggested plasticity of the WM system by demonstrating improvements in both trained and untrained WM tasks. However, evidence on transfer of improved WM into more general cognitive domains such as fluid intelligence (Gf) has been more equivocal. Therefore, we conducted a meta-analysis focusing on one specific training program, n-back. We searched PubMed and Google Scholar for all n-back training studies with Gf outcome measures, a control group, and healthy participants between 18 and 50 years of age. In total, we included 20 studies in our analyses that met our criteria and found a small but significant positive effect of n-back training on improving Gf. Several factors that moderate this transfer are identified and discussed. We conclude that short-term cognitive training on the order of weeks can result in beneficial effects in important cognitive functions as measured by laboratory tests.

This study determined the relative importance of attributes of food safety improvement in the production chain of fluid pasteurized milk. The chain was divided into 4 blocks: "feed" (compound feed production and its transport), "farm" (dairy farm), "dairy processing" (transport and processing of raw milk, delivery of pasteurized milk), and "consumer" (retailer/catering establishment and pasteurized milk consumption). The concept of food safety improvement focused on 2 main groups of hazards: chemical (antibiotics and dioxin) and microbiological (Salmonella, Escherichia coli, Mycobacterium paratuberculosis, and Staphylococcus aureus). Adaptive conjoint analysis was used to investigate food safety experts' perceptions of the attributes' importance. Preference data from individual experts (n = 24) on 101 attributes along the chain were collected in a computer-interactive mode. Experts perceived the attributes from the "feed" and "farm" blocks as being more vital for controlling the chemical hazards; whereas the attributes from the "farm" and "dairy processing" were considered more vital for controlling the microbiological hazards. For the chemical hazards, "identification of treated cows" and "quality assurance system of compound feed manufacturers" were considered the most important attributes. For the microbiological hazards, these were "manure supply source" and "action in salmonellosis and M. paratuberculosis cases". The rather high importance of attributes relating to quality assurance and traceability systems of the chain participants indicates that participants look for food safety assurance from the preceding participants. This information has substantial decision-making implications for private businesses along the chain and for the government regarding the food safety improvement of fluid pasteurized milk.

A method is proposed to find the key components of traffic networks with homogeneous and heterogeneous topologies, along which heavier traffic flow is transported. One component, called the skeleton, is the minimum spanning tree (MST) based on the zero-flow cost (ZCMST). The other component is the incipient infinite percolation cluster (IIC), which represents the spine of the traffic network. A new method is then given to analyze the properties of bottlenecks in a large-scale traffic network from a macroscopic, statistical viewpoint. Moreover, three effective strategies are proposed to alleviate traffic congestion. The significance of the findings is that one can significantly improve global transport by enhancing the capacity of a few links in the ZCMST, while for improving local traffic properties, upgrading a tiny fraction of the traffic network in the IIC is effective. The results can help traffic managers prevent and alleviate traffic congestion in time, guard against the formation of congestion bottlenecks, and make appropriate policies for traffic demand management. The method also has theoretical significance and practical worth in optimizing traffic organization, traffic control, and emergency response.
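The skeleton extraction described above can be pictured with a few lines of Python using networkx; this is a hedged sketch on a toy graph, with edge weights standing in for the zero-flow cost.

```python
import networkx as nx

# toy network: (node, node, zero-flow cost)
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1.0), ("B", "C", 2.5), ("A", "C", 2.0),
    ("C", "D", 1.5), ("B", "D", 3.0),
])

# the ZCMST skeleton: minimum spanning tree under the zero-flow cost
zcmst = nx.minimum_spanning_tree(G, weight="weight")
print(sorted(zcmst.edges(data="weight")))
```

Per the study, concentrating capacity upgrades on these few skeleton links is what most improves global transport.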

Repetitive transcranial magnetic stimulation is a noninvasive treatment technique that can directly alter cortical excitability and improve cerebral functional activity in unconscious patients. To investigate the effects and electrophysiological changes of repetitive transcranial magnetic stimulation treatment, 10 stroke patients with non-severe brainstem lesions and disturbance of consciousness were treated with repetitive transcranial magnetic stimulation, and a quantitative electroencephalography spectral power analysis was performed. The absolute power in the alpha band increased immediately after the first treatment, while power in the delta band was reduced. The alpha band relative power values decreased slightly at 1 day post-treatment, then increased and reached a stable level at 2 weeks post-treatment. Glasgow Coma Score and JFK Coma Recovery Scale-Revised score improved, and the relative power in the alpha band was positively related to both scores. These data suggest that repetitive transcranial magnetic stimulation is a noninvasive, safe, and effective treatment for improving brain functional activity and promoting awakening in unconscious stroke patients.
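A minimal sketch of the spectral power measure used above, assuming a Welch power spectral density from SciPy; the EEG trace here is synthetic and the band edges are conventional choices, not necessarily those of the study.

```python
import numpy as np
from scipy.signal import welch

fs = 256                                    # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # 10 Hz + noise

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # power spectral density

def band_power(f, psd, lo, hi):
    m = (f >= lo) & (f < hi)
    return np.trapz(psd[m], f[m])           # absolute power in the band

total = band_power(f, psd, 0.5, 45)
print("relative alpha power:", band_power(f, psd, 8, 13) / total)
print("relative delta power:", band_power(f, psd, 0.5, 4) / total)
```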

Using an improved Hilbert-Huang transform (HHT), this paper investigates the analysis and interpretation of the energy spectrum of a blast wave. It has previously been established that the energy spectrum is an effective feature by which to characterize a blast wave: the higher the energy spectrum of a blast wave in a frequency band, the greater the damage to a target in that same band. However, most current research focuses on analyzing wave signals in the time or frequency domain rather than on the energy spectrum. We propose an improved HHT method combined with a wavelet packet to extract the energy spectrum feature of a blast wave. In the improved HHT, the signal is first roughly decomposed into a series of intrinsic mode functions (IMFs) by empirical mode decomposition, and the wavelet packet method is applied to each IMF to suppress noise in the energy spectrum. A coefficient is then introduced to remove unrelated IMFs. The energy at each instantaneous frequency is derived through the Hilbert transform, and the energy spectrum is obtained by summing the contributions of the effective IMFs after wavelet packet filtering and coefficient-based screening. The effectiveness of the proposed method is demonstrated on 12 groups of experimental data, and an energy attenuation model is established from these data. The improved HHT is a precise method for blast wave signal analysis; for other shock wave signals from blasting experiments, an energy-frequency-time distribution and energy spectrum can also be obtained with this method, allowing for more practical applications.
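A compact sketch of the pipeline, assuming the PyEMD package (pip install EMD-signal) and SciPy; the wavelet-packet denoising stage is omitted here, and the screening coefficient is rendered simply as a correlation threshold against the raw signal, so this illustrates the idea rather than reproducing the authors' method.

```python
import numpy as np
from PyEMD import EMD
from scipy.signal import hilbert

fs = 10_000
t = np.arange(0, 0.1, 1 / fs)
x = np.exp(-50 * t) * np.sin(2 * np.pi * 500 * t) + 0.1 * np.random.randn(t.size)

imfs = EMD()(x)                                        # empirical mode decomposition
keep = [imf for imf in imfs
        if abs(np.corrcoef(imf, x)[0, 1]) > 0.2]       # screen out unrelated IMFs

energy, freq = [], []
for imf in keep:                                       # Hilbert transform per IMF
    a = hilbert(imf)
    inst_f = np.diff(np.unwrap(np.angle(a))) * fs / (2 * np.pi)
    energy.append(np.abs(a[:-1]) ** 2)                 # instantaneous energy
    freq.append(inst_f)                                # instantaneous frequency

# energy spectrum: energy of the effective IMFs accumulated over frequency bins
bins = np.linspace(0, fs / 2, 100)
spectrum, _ = np.histogram(np.concatenate(freq), bins=bins,
                           weights=np.concatenate(energy))
```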

The left atrium (LA) can change in size and shape due to atrial fibrillation (AF)-induced remodeling. These alterations can be linked to poorer outcomes of AF ablation. In this study, we propose a novel comprehensive computational analysis of LA anatomy to identify what features of LA shape can optimally predict post-ablation AF recurrence. To this end, we construct smooth 3D geometrical models from the segmentation of the LA blood pool captured in pre-procedural MR images. We first apply this methodology to characterize the LA anatomy of 144 AF patients and build a statistical shape model that includes the most salient variations in shape across this cohort. We then perform a discriminant analysis to optimally distinguish between recurrent and non-recurrent patients. From this analysis, we propose a new shape metric called vertical asymmetry, which measures the imbalance of size along the anterior to posterior direction between the superior and inferior left atrial hemispheres. Vertical asymmetry was found, in combination with LA sphericity, to be the best predictor of post-ablation recurrence at both 12 and 24 months (area under the ROC curve: 0.71 and 0.68, respectively) outperforming other shape markers and any of their combinations. We also found that model-derived shape metrics, such as the anterior-posterior radius, were better predictors than equivalent metrics taken directly from MRI or echocardiography, suggesting that the proposed approach leads to a reduction of the impact of data artifacts and noise. This novel methodology contributes to an improved characterization of LA organ remodeling and the reported findings have the potential to improve patient selection and risk stratification for catheter ablations in AF.
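The discriminant step can be pictured with a short scikit-learn sketch; the two standardised shape metrics and the labels below are synthetic stand-ins, and the actual study would score the classifier on held-out patients rather than on the training data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 144
X = rng.normal(size=(n, 2))   # hypothetical [vertical asymmetry, sphericity]
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)) > 0  # recurrence label

lda = LinearDiscriminantAnalysis().fit(X, y)
print("AUC:", roc_auc_score(y, lda.decision_function(X)))  # cf. 0.71 at 12 months
```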

Purpose: The treatment of ventral hernias (VH) has been a challenging problem for medical care. Repair of these hernias is fraught with failure; recurrence rates ranging from 24% to 43% have been reported, even with the use of biocompatible mesh. Currently, computed tomography (CT) is used to guide intervention through expert but qualitative clinical judgments; notably, quantitative image-processing metrics are not used. The authors propose that image segmentation methods that capture the three-dimensional structure of the abdominal wall and its abnormalities will provide a foundation on which to measure geometric properties of hernias and surrounding tissues and, therefore, to optimize intervention. Methods: In this study of 20 clinically acquired CT scans of postoperative patients, the authors demonstrate a novel approach to geometric classification of the abdominal wall. The approach uses a texture analysis based on Gabor filters to extract feature vectors, followed by fuzzy c-means clustering to estimate voxelwise probability memberships for eight clusters. The memberships estimated from the texture analysis help identify anatomical structures with inhomogeneous intensities. The membership was used to guide the level set evolution, as well as to derive an initialization close to the abdominal wall. Results: Segmentation results for the abdominal wall were validated quantitatively and qualitatively with surface errors based on manually labeled ground truth. Using texture, mean surface errors for the outer surface of the abdominal wall were less than 2 mm, with 91% of the outer surface less than 5 mm away from the manual tracings; errors were significantly greater (2-5 mm) for methods that did not use texture. Conclusions: The approach establishes a baseline for characterizing the abdominal wall for improving VH care. Inherent texture patterns in CT scans are helpful to the tissue classification, and texture
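A hedged sketch of the texture stage: Gabor filter banks from scikit-image produce per-pixel feature vectors, which are then clustered into eight classes. Plain k-means from scikit-learn stands in here for the fuzzy c-means step, and the image is a random placeholder rather than a CT slice.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.cluster import KMeans

image = np.random.rand(64, 64)                  # placeholder for a CT slice

features = []
for frequency in (0.1, 0.2, 0.4):               # filter bank over scale...
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):  # ...and orientation
        real, imag = gabor(image, frequency=frequency, theta=theta)
        features.append(np.hypot(real, imag))   # magnitude response
X = np.stack(features, axis=-1).reshape(-1, len(features))  # per-pixel vectors

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
cluster_map = labels.reshape(image.shape)       # per-pixel texture class
```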

Health centers in Ghana play an important role in health care delivery, especially in deprived communities. They usually serve as the first line of service and meet basic health care needs. Unfortunately, these facilities are faced with inadequate resources. While health policy makers seek to increase the resources committed to primary healthcare, it is important to understand the nature of the inefficiencies that exist in these facilities. The objectives of this study are therefore threefold: (i) estimate efficiency among primary health facilities (health centers), (ii) examine the potential fiscal space from improved efficiency, and (iii) investigate efficiency disparities between public and private facilities. Data were from the 2015 Access Bottlenecks, Cost and Equity (ABCE) project conducted by the Institute for Health Metrics and Evaluation. Stochastic Frontier Analysis (SFA) was used to estimate the efficiency of health facilities, and the efficiency scores were then used to compute potential savings from improved efficiency. Outpatient visits were used as output, while the number of personnel, hospital beds, and expenditure on other capital items and administration were used as inputs. Disparities in efficiency between public and private facilities were estimated using the Nopo matching decomposition procedure. The average efficiency score across all health centers in the sample was estimated to be 0.51, with averages of about 0.65 and 0.50 for private and public facilities, respectively. Significant disparities in efficiency were identified across the various administrative regions. With regard to potential fiscal space, we found that, on average, facilities could save about GH₵11,450.70 (US$7,633.80) if efficiency were improved. We also found that the fiscal space from efficiency gains varies across rural/urban as well as private/public facilities, if best practices are followed. The matching decomposition showed an efficiency gap of 0.29 between private

and demonstrates that the improved DFT dissimilarity measure is an efficient and effective similarity measure for DNA sequences. Owing to its high efficiency and accuracy, the proposed DFT similarity measure is successfully applied to phylogenetic analysis of individual genes and large whole bacterial genomes.

The change in T1-hypointense lesion ("black hole") volume is an important marker of pathological progression in multiple sclerosis (MS). Black hole boundaries often have low contrast and are difficult to determine accurately, and most (semi-)automated segmentation methods first compute the T2-hyperintense lesions, which are a superset of the black holes and are typically more distinct, to form a search space for the T1w lesions. Two main potential sources of measurement noise in longitudinal black hole volume computation are partial volume and variability in the T2w lesion segmentation. A paired analysis approach is proposed herein that uses registration to equalize partial volume and lesion mask processing to combine T2w lesion segmentations across time. The scans of 247 MS patients are used to compare a selected black hole computation method with an enhanced version incorporating paired analysis, using rank correlation to a clinical variable (MS functional composite) as the primary outcome measure. The comparison is done at nine different levels of intensity, as a previous study suggests that darker black holes may yield stronger correlations. The results demonstrate that paired analysis can strongly improve longitudinal correlation (from -0.148 to -0.303 in this sample) and may produce segmentations that are more sensitive to clinically relevant changes.

A computational fluid dynamics code is used to model the primary natural circulation loop of a proposed small modular reactor for comparison to experimental data and best-estimate thermal-hydraulic code results. Recent advances in computational fluid dynamics code modeling capabilities make them attractive alternatives to the current conservative approach of coupled best-estimate thermal hydraulic codes and uncertainty evaluations. The results from a computational fluid dynamics analysis are benchmarked against the experimental test results of a 1:3 length, 1:254 volume, full pressure and full temperature scale small modular reactor during steady-state power operations and during a depressurization transient. A comparative evaluation of the experimental data, the thermal hydraulic code results and the computational fluid dynamics code results provides an opportunity to validate the best-estimate thermal hydraulic code's treatment of a natural circulation loop and provide insights into expanded use of the computational fluid dynamics code in future designs and operations. Additionally, a sensitivity analysis is conducted to determine those physical phenomena most impactful on operations of the proposed reactor's natural circulation loop. The combination of the comparative evaluation and sensitivity analysis provides the resources for increased confidence in model developments for natural circulation loops and provides for reliability improvements of the thermal hydraulic code.

For prevention and mitigation of containment failure during severe accidents, this study focuses on severe accident phenomena, especially those occurring inside the cavity, and is intended to improve existing models and develop analytical tools for the assessment of severe accidents. A correlation for the flame velocity of premixed H2/air/steam gas has been suggested, and combustion flame characteristics were analyzed using a developed computer code. For the analysis of the expansion phase of vapor explosions, a mechanical model has been developed. The development of a debris entrainment model in a reactor cavity with captured volume has been continued, to review and examine the limitations and deficiencies of the existing models. Pre-test calculations were performed to support the severe accident experiment for the molten corium-concrete interaction study, and the crust formation process and heat transfer characteristics of the crust have been examined. A stress analysis code based on the finite element method was developed for reactor vessel lower head failure analysis. Through the international PHEBUS-FP program and participation in software development, research on the core degradation process and fission product release and transport is under way. The CONTAIN and MELCOR codes were continuously updated in cooperation with the USNRC, and French-developed computer codes such as ICARE2, ESCADRE, and SOPHAEROS were installed on SUN workstations.

Structures in recurrence plots (RPs), preserving the rich information of nonlinear invariants and trajectory characteristics, have been increasingly analyzed in dynamic discrimination studies. Conventional analysis of RPs focuses mainly on quantifying the overall diagonal and vertical line structures through a method called recurrence quantification analysis (RQA). This study more extensively explores the information in RPs by quantifying local complex RP structures. To do this, an approach was developed to analyze the combination of three major RQA variables: determinism, laminarity, and recurrence rate (DLR) in a metawindow moving over an RP. It was then evaluated in two experiments discriminating (1) ideal nonlinear dynamic series emulated from the Lorenz system with different control parameters and (2) data sets of human heart rate regulation with normal sinus rhythm (n = 18) and congestive heart failure (n = 29). Finally, DLR was compared with seven major RQA variables in terms of discriminatory power, measured by standardized mean difference (DSMD). In the two experiments, DLR achieved the highest discriminatory power, with DSMD = 2.53 and 0.98, respectively, which were 7.41 and 2.09 times the best performance from RQA. The study also revealed that the optimal RP structures for the discriminations were neither typical diagonal structures nor vertical structures. These findings indicate that local complex RP structures contain rich information unexploited by RQA, and future research extensively analyzing complex RP structures could improve the effectiveness of RP analysis in dynamic discrimination studies.
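To make the three DLR ingredients concrete, here is a compact NumPy sketch of a recurrence plot and the global recurrence rate, determinism, and laminarity; the threshold and minimum line length are illustrative, and the study computes these inside a metawindow sliding over the plot rather than globally.

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence matrix of a scalar series under threshold eps."""
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

def line_fraction(R, diagonal, lmin=2):
    """Fraction of recurrence points on diagonal (DET) or vertical (LAM)
    lines of length >= lmin; the main diagonal is excluded for DET."""
    n = R.shape[0]
    lines = ([np.diag(R, k) for k in range(-n + 1, n) if k != 0] if diagonal
             else [R[:, j] for j in range(n)])
    in_lines = 0
    for line in lines:
        run = 0
        for v in np.append(line, 0):        # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    in_lines += run
                run = 0
    return in_lines / max(R.sum(), 1)

x = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * np.random.randn(200)
R = recurrence_plot(x, eps=0.3)
RR, DET, LAM = R.mean(), line_fraction(R, True), line_fraction(R, False)
```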

This paper proposes a novel security-constrained unit commitment model to calculate the optimal spinning reserve (SR) amount. The model combines cost-benefit analysis with an improved multiscenario risk analysis method capable of considering various uncertainties, including load and wind power forecast errors as well as forced outages of generators. In this model, cost-benefit analysis is utilized to simultaneously minimize the operation cost of conventional generators, the expected cost of load shedding, the penalty cost of wind power spillage, and the carbon emission cost. It remedies the defects of the deterministic and probabilistic methods of SR calculation. In cases where load and wind power generation are negatively correlated, the model, based on multistep modeling of net demand, can consider wind power curtailment to maximize the overall economic efficiency of system operation, so that the optimal economic values of wind power and SR are achieved. In addition, the impact of non-normal probability distributions of the wind power forecast error on SR optimization can be taken into account. Using a mixed integer linear programming method, simulation studies on a modified IEEE 26-generator reliability test system connected to a wind farm are performed to confirm the effectiveness and advantages of the proposed model.

High-throughput (HT) SELEX combines SELEX (Systematic Evolution of Ligands by EXponential Enrichment), a method for aptamer discovery, with massively parallel sequencing technologies. This emerging technology provides data for a global analysis of the selection process and for simultaneous discovery of a large number of candidates, but it currently lacks dedicated computational approaches for their analysis. To close this gap, we developed novel in-silico methods to analyze HT-SELEX data and utilized them to study the emergence of polymerase errors during HT-SELEX. Rather than considering these errors as a nuisance, we demonstrated their utility for guiding aptamer discovery. Our approach builds on two main advancements in aptamer analysis: AptaMut, a novel technique allowing the identification of polymerase errors conferring improved binding affinity relative to the 'parent' sequence, and AptaCluster, an aptamer clustering algorithm which is, to the best of our knowledge, the only currently available tool capable of efficiently clustering entire aptamer pools. We applied these methods to an HT-SELEX experiment developing aptamers against the Interleukin-10 receptor alpha chain (IL-10RA) and experimentally confirmed our predictions, thus validating our computational methods.

To present the results of a strengths, weaknesses, opportunities and threats (SWOT) analysis used as part of a process aimed at reorganising services provided within a pediatric rehabilitation programme (PRP) in Quebec, Canada, and to report the perceptions of the planning committee members regarding the usefulness of the SWOT in this process. Thirty-six service providers working in the PRP completed a SWOT questionnaire and reported what they felt worked and what did not work in the existing model of care. Their responses were used by a planning committee over a 12-month period to assist in the development of a new service delivery model. Committee members shared their thoughts about the usefulness of the SWOT. Current programme strengths included a favourable organisational climate and interdisciplinary work, whereas weaknesses included a lack of psychosocial support for families and long waiting times for children. Opportunities included working with community partners, whereas fear of losing professional autonomy with the new service model was a threat. The SWOT results helped the planning committee redefine the programme goals and make decisions to improve service coordination. SWOT analysis was deemed a very useful tool to help guide service reorganisation, and appears to be an interesting evaluation tool to promote awareness among service providers regarding the current functioning of a rehabilitation programme. It fosters their active participation in the reorganisation of a new service delivery model for pediatric rehabilitation.

Accurate measurement of volatile organic compounds (VOCs) in the troposphere is critical for understanding emissions and the physical and chemical processes that impact both air quality and climate. Airborne VOC measurements have proven challenging due to the requirements of short sample collection times (≤10 s), to maximize spatial resolution and sampling frequency, and high sensitivity (pptv) to chemically diverse hydrocarbons, halocarbons, and oxygen- and nitrogen-containing VOCs. NOAA ESRL CSD has built an improved whole air sampler (iWAS), based on the NCAR HAIS Advanced Whole Air Sampler [Atlas and Blake], which collects compressed ambient air samples in electropolished stainless steel canisters. Post-flight chemical analysis is performed with a custom-built gas chromatograph-mass spectrometer system that pre-concentrates analyte cryostatically via a Stirling cooler, an electromechanical chiller that precludes the need for liquid nitrogen to reach trapping temperatures. For the 2015 Shale Oil and Natural Gas Nexus Study (SONGNEX), CSD conducted iWAS measurements on 19 flights aboard the NOAA WP-3D aircraft between March 19th and April 27th. Nine oil and natural gas production regions were surveyed during SONGNEX, and more than 1500 air samples were collected and analyzed. For the first time, we employed real-time mapping of sample collection combined with live data from fast time-response measurements (e.g., ethane) for more uniform surveying and improved target plume sampling. Automated sample handling allowed more than 90% of iWAS canisters to be analyzed within 96 hours of collection; for the second half of the campaign, improved efficiencies reduced the median sample age at analysis to 36 hours. A new chromatography peak-fitting software package was developed to reduce data reduction time by an order of magnitude without loss of precision or accuracy. Here we report mixing ratios for aliphatic and aromatic hydrocarbons (C2-C8) along with select

OBJECTIVE Existing studies have shown a high overall rate of adverse events (AEs) following pediatric neurosurgical procedures. However, little is known regarding the morbidity of specific procedures or the associated risk factors that could help guide quality improvement (QI) initiatives. The goal of this study was to describe the 30-day mortality and AE rates for pediatric neurosurgical procedures by using the American College of Surgeons (ACS) National Surgical Quality Improvement Program-Pediatrics (NSQIP-Peds) database platform. METHODS Data on 9996 pediatric neurosurgical patients were acquired from the 2012-2014 NSQIP-Peds participant user file. Neurosurgical cases were analyzed by the NSQIP-Peds targeted procedure categories, including craniotomy/craniectomy, defect repair, laminectomy, shunts, and implants. The primary outcome measure was 30-day mortality, with secondary outcomes including individual AEs, composite morbidity (all AEs excluding mortality and unplanned reoperation), surgical-site infection, and unplanned reoperation. Univariate analysis was performed between individual AEs and patient characteristics using Fisher's exact test. Associations between individual AEs and continuous variables (duration from admission to operation, work relative value unit, and operation time) were examined using the Student t-test. Patient characteristics and continuous variables associated with any AE by univariate analysis were used to develop category-specific multivariable models through backward stepwise logistic regression. RESULTS The authors analyzed 3383 craniotomy/craniectomy, 242 defect repair, 1811 laminectomy, and 4560 shunt and implant cases and found a composite overall morbidity of 30.2%, 38.8%, 10.2%, and 10.7%, respectively. Unplanned reoperation rates were highest for defect repair (29.8%). The mortality rate ranged from 0.1% to 1.2%. Preoperative ventilator dependence was a significant predictor of any AE for all procedure groups, whereas

The authors describe the application of a technique called Patient Flow Analysis aimed at improving clinic personnel efficiency and reducing patient waiting time. Results were satisfactory and encourage further experiences.

To mitigate serious air pollution, the State Council of China promulgated the Air Pollution Prevention and Control Action Plan in 2013. To verify the feasibility and validity of the industrial energy-saving and emission-reduction policies in the action plan, we conducted a cost-benefit analysis of implementing these policies in 31 provinces for the period 2013 to 2017. We also completed a scenario analysis to assess the cost-effectiveness of individual measures within the energy-saving and emission-reduction policies. The data were derived from field surveys, statistical yearbooks, government documents, and the published literature. The results show that the total cost and total benefit are 118.39 and 748.15 billion Yuan, respectively, and the estimated benefit-cost ratio is 6.32 in the S3 scenario. For all scenarios, these policies are cost-effective, and the eastern region has higher satisfactory values. Furthermore, the end-of-pipe scenario has greater emission reduction potential than the energy-saving scenario. Regression analysis of possible influencing factors showed that gross domestic product and population are significantly correlated with the benefit-cost ratio. The sensitivity analysis demonstrates that the benefit-cost ratio is most sensitive to unit emission-reduction cost, unit subsidy, growth rate of gross domestic product, and discount rate among all the parameters. Compared with other provinces, the benefit-cost ratios of Beijing and Tianjin are more sensitive to changes in unit subsidy than in unit emission-reduction cost. These findings may have significant implications for improving China's air pollution prevention policy.

The Japan subduction zone comprises a complex set of plate interfaces with significant trench-parallel variability in great earthquakes and transient deep slip events. Within the Japan arc, the Nankai segment of the Eurasian-Philippine plate boundary is one of the classic subduction zone segments, and it last produced a set of temporally linked great earthquakes in the 1940s. Recently, down-dip of the Nankai seismogenic portion of the plate interface, transient slip events and seismic tremor have been observed. Through analysis of the GEONET GPS data, the spatial and higher-frequency temporal characteristics of transient slip events can be captured. We describe our analysis methods, the spatial filtering technique developed for use on large networks, a periodic-signal filtering method that improves on commonly used sinusoidal function models, and the resultant velocities and time series. Our newly developed analysis method, the GPS Network Processor, gives us the ability to process large volumes of data extremely quickly. The basis of the GPS Network Processor is the JPL-developed GIPSY-OASIS GPS analysis software and the JPL-developed precise point positioning technique. The Network Processor was designed to efficiently implement precise point positioning and bias fixing on a 1000-node (2000-cpu) Beowulf cluster. The entire 10-year, ~1000-station GEONET data set can be reanalyzed in a matter of days. This permits us to test different processing strategies, each with potentially large influence on our ability to detect strain transients from the subduction zones. For example, we can test different ocean loading models, which can affect the diurnal positions of coastal GPS sites by up to 2 cm. We can also test other potentially important factors, such as using reprocessed satellite orbits and clocks, the parameterization of the tropospheric delay, or the implementation of refined solid body tide estimates. We will
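As a point of reference for the periodic-signal filtering mentioned above, here is a minimal sketch, on invented numbers, of the commonly used sinusoidal baseline it improves on: a linear trend plus annual and semiannual terms fitted by least squares, with transients sought in the residual.

```python
import numpy as np

t = np.linspace(0, 10, 3650)                     # time in years, daily samples
y = 2.0 * t + 3.0 * np.sin(2 * np.pi * t) + np.random.randn(t.size)  # mm, synthetic

# design matrix: offset, rate, and annual/semiannual sine-cosine pairs
cols = [np.ones_like(t), t]
for period in (1.0, 0.5):                        # years
    w = 2 * np.pi / period
    cols += [np.sin(w * t), np.cos(w * t)]
A = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares fit
residual = y - A @ coef                          # strain transients live here
```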

Methicillin-resistant Staphylococcus aureus is one of the most significant pathogens associated with health care. For efficient surveillance, control and outbreak investigation, S. aureus typing is essential. A high resolution melting curve analysis was developed and evaluated for rapid identification of the most frequent spa types found in an Austrian hospital consortium covering 2,435 beds. Among 557 methicillin-resistant Staphylococcus aureus isolates, 38 different spa types were identified by sequence analysis of the hypervariable region X of the protein A gene (spa). Identification of spa types through their characteristic high resolution melting curve profiles was considerably improved by double spiking with genomic DNA from spa types t030 and t003, and allowed unambiguous and fast identification of the ten most frequent spa types t001 (58%), t003 (12%), t190 (9%), t041 (5%), t022 (2%), t032 (2%), t008 (2%), t002 (1%), t5712 (1%) and t2203 (1%), together representing 93% of all isolates within this hospital consortium. The performance of the assay was evaluated by testing samples with unknown spa types from the daily routine and by testing three different high resolution melting curve analysis real-time PCR instruments. The ten most frequent spa types were identified from all samples and on all instruments with 100% specificity and 100% sensitivity. Compared to classical spa typing by sequence analysis, this gene scanning assay is faster, cheaper and can be performed in a single closed-tube assay format. It is therefore an optimal screening tool to detect the most frequent endemic spa types and to exclude non-endemic spa types within a hospital.

Value-added analysis is the most robust, statistically significant method available for helping educators quantify student progress over time. This powerful tool also reveals tangible strategies for improving instruction. Built around the work of Battelle for Kids, this book provides a field-tested continuous improvement model for using…

As research progresses towards nanoscale materials, a more efficient and effective way is needed to obtain ultra-thin samples for imaging under the transmission electron microscope (TEM) for atomic resolution analysis. Various methods are used to obtain thin samples; those cut from epoxy composites, however, are often of poor quality due to sample cutting difficulties. Such poor-quality samples are characterized by uneven thickness, overlapping features, overall darkness due to large thickness, and defects such as cutting scratches. This research is a continuing effort to study and improve the ultramicrotome cutting technique to provide an effective and reliable approach for obtaining ultra-thin (25-50 nm) cross sections of CNT/polymer composites for high resolution TEM analysis. Improvements were achieved by studying the relationships between the chosen cutting parameters, sample characteristics, and TEM image quality. From this information, a cutting protocol was established so that ultra-thin sample slices can be produced by different microtome operators for high resolution TEM analysis. In addition, a custom tool was created to aid in the sample collection process. In this research, three composite samples were studied for both microtome cutting and TEM analysis: 1) unidirectional (UD) IM7/BMI composite; 2) single-layer CNT buckypaper (BP)/epoxy nanocomposite; 3) 3-layer CNT BP/BMI nanocomposite. The resultant TEM images revealed a clear microstructure consisting of amorphous resin and graphitic crystalline packing. The UD IM7/BMI composite TEM results did not reveal an interfacial region, indicating a need for even thinner cross sections. TEM results for the single-layer CNT BP/epoxy nanocomposite revealed the alignment direction of the nanotubes and numerous stacks of CNT bundles; in addition, there was visible flattening of CNT packing into dumbbell shapes, similar to results obtained by Alan Windle. TEM results for the 3-layer CNT BP/BMI nanocomposite

Loads on a gearbox casing of a certain type of tracked vehicle were calculated according to the engine's full-load characteristic curve and the worst load condition under which the gearbox operates while the tracked vehicle is running; the stiffness and strength of the casing were then analyzed by means of the Patran/Nastran software. The analysis showed that the casing satisfied the Mises yield condition; however, the stress distribution was heterogeneous: stresses near the bearing saddle bores of the casing were higher, while those in other regions were much lower than the allowable stress. For this reason, the wall thicknesses of the casing at the bearing assembly holes needed to be increased, while those in other places could be decreased. After several rounds of structural improvement and re-analysis, an optimal casing design was found whose weight decreased by 5%; the casing still satisfied the Mises yield criterion and the stress distribution was more homogeneous.

Today, hoteliers struggle with managing their online reputation due to bad reviews received on social networks. The aim of this research is to identify the key factors to consider in the operation of each hotel to avoid negative comments and increase its online reputation. The ratings received through virtual means by 57 Latin American hotels belonging to the GHL Hotel Chain from March 31st, 2015 to March 31st, 2016 were collected, and the reviews were analyzed by department using the Revinate software. The reviews were then classified in order to develop a manual of good practices. From the analysis of those comments, recommendations were made for six areas of the hotels: Rooms, Food and Beverage, Front Desk, Business Center, Security, and Management, to optimize quality in the hotels and thus improve their online reputation.

The analysis of the composition of polysaccharides, e.g. dextran, by total acid hydrolysis, in the presence or absence of oxygen and with different methods of neutralizing the hydrolysate, is presented. It was found that hydrolysis of polysaccharides under a nitrogen atmosphere, in the absence of oxygen, diminishes the possibility of decomposition of the monosaccharides formed during hydrolysis. Neutralizing the acid hydrolysate by passing it through a column of weak-base ion-exchange resin (Amberlite IRA-94), instead of neutralizing with Ba(OH)2, diminishes the possibility of epimerization of glucose to other saccharides. This improved method gives more reliable results, even in the presence of readily decomposed polysaccharides.

A multidisciplinary safety initiative transformed blood transfusion practices at St. Luke's Episcopal Hospital in Houston, Texas. An intense analysis of a mistransfusion, using the principles of a Just Culture and the process of Cause Mapping, identified system and human performance factors that led to the transfusion error. Multiple initiatives were implemented, spanning technology, education, and human behaviour change. The wireless Pyxis Transfusion Verification technology by CareFusion is effective, with a rapid infusion module that is efficient for use in critical care. Improvements in blood transfusion safety were accomplished by thoroughly evaluating the transfusion process and by implementing wireless electronic transfusion verification technology. During the 27 months following implementation of CareFusion Transfusion Verification, there were zero cases of mismatched blood transfusion.

The high performance liquid chromatographic (HPLC) method of Flores and Galston (1982 Plant Physiol 69: 701) for the separation and quantitation of benzoylated polyamines in plant tissues has been widely adopted by other workers. However, due to previously unrecognized problems associated with the derivatization of agmatine, this important intermediate in plant polyamine metabolism cannot be quantitated using this method. In addition, two polyamines, putrescine and diaminopropane, are not well resolved by this method. A simple modification of the original HPLC procedure greatly improves the separation and quantitation of these amines, and further allows the simultaneous analysis of phenethylamine and tyramine, which are major monoamine constituents of tobacco and other plant tissues. We have used this modified HPLC method to characterize amine titers in suspension-cultured carrot (Daucus carota L.) cells and tobacco (Nicotiana tabacum L.) leaf tissues.

Conceptual homogeneity is one determinant of the quality of text documents. A concept remains the same even if the words used (termini) change [1, 2]; in other words, termini can vary while the concept retains the same meaning. Human beings are able to handle concepts and termini because of their semantic network, which connects termini to the actual context and thus identifies the adequate meaning of the termini. Problems can arise when humans have to learn new content and, correspondingly, new concepts. Since content is imparted basically by text via particular termini, it is a challenge to establish the right concept from the text with its termini. A term might be known, but have a different meaning [3, 4]. Therefore, it is very important to build up the correct understanding of concepts within a text. This is only possible when concepts are explained by the right termini, within an adequate context, and above all, homogeneously. So, when setting up or using text documents for teaching or application, it is essential to provide conceptual homogeneity. Understandably, the quality of documents is, ceteris paribus, reciprocally proportional to variations of termini. Therefore, an analysis of variations of termini could form a basis for specific improvement of conceptual homogeneity. Consequently, an exposition of variations of termini as control and improvement parameters is carried out in this investigation. This paper describes the functionality and benefits of a tool called TermAnalysis. TermAnalysis is a software tool developed

The aim of the paper is to investigate possible improvements in the design and operation of a tubular solid oxide fuel cell. To achieve this purpose, a CFD model of the cell is introduced. The model includes thermo-fluid dynamics, chemical reactions and electrochemistry. The fluid composition and mass flow rates at the inlet sections are obtained through a finite difference model of the whole stack. This model also provides boundary conditions for the radiation heat transfer. All of these conditions account for the position of each cell within the stack. The analysis of the cell performances is conducted on the basis of the entropy generation. The use of this technique makes it possible to identify the phenomena provoking the main irreversibilities, understand their causes and propose changes in the system design and operation.

Continuous flow analysis (CFA) is a well-established method for obtaining information about impurity content in ice cores as an indicator of past changes in the climate system. A section of an ice core is continuously melted on a melter head, supplying a sample water flow which is analyzed online. This provides high depth and time resolution of the ice core records and very efficient sample decontamination, as only the inner part of the ice sample is analyzed. Here we present an improved CFA system which has been completely redesigned for significantly enhanced overall efficiency and flexibility, signal quality, compactness, and ease of use. These are critical requirements especially for CFA operation during field campaigns, e.g., in Antarctica or Greenland. Furthermore, a novel device to measure the total air content of the ice was developed. The air bubbles are now extracted continuously from the sample water flow for subsequent gas measurements.

With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally use fixed parameters, such as the process and observation noise variances, whose values have a large influence on overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work we show that, by applying frequency analysis to determine motion intensity and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, providing physicians with reliable objective data they can use in their daily practice.
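A toy version of that adaptation on a scalar filter, purely illustrative: motion intensity is read off the high-frequency energy of a sliding window, and the observation-noise variance R is raised when motion is intense, since the (e.g. accelerometer-derived) observation is then less trustworthy. All thresholds and variances below are invented.

```python
import numpy as np

def hf_energy(window, fs, f_cut=2.0):
    """Fraction of spectral energy above f_cut Hz: a motion-intensity proxy."""
    spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(window.size, 1 / fs)
    return spec[freqs > f_cut].sum() / max(spec.sum(), 1e-12)

def adaptive_kalman(z, fs, q=1e-4, r_quiet=0.05, r_moving=0.5, win=64):
    """Scalar Kalman filter whose measurement-noise variance tracks intensity."""
    x, p = float(z[0]), 1.0
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        window = z[max(0, k - win):k + 1]
        r = r_moving if k > 8 and hf_energy(window, fs) > 0.2 else r_quiet
        p += q                     # predict (random-walk state model)
        g = p / (p + r)            # Kalman gain
        x += g * (zk - x)          # update with measurement zk
        p *= 1 - g
        out[k] = x
    return out
```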

We present improved results on the measurement of the angular power spectrum of the Cosmic Microwave Background (CMB) temperature anisotropies using the data from the last Archeops flight. This refined analysis is obtained by using the 6 most sensitive photometric pixels in the CMB bands centered at 143 and 217 GHz and 20% of the sky, mostly clear of foregrounds. Using two different cross-correlation methods, we obtain very similar results for the angular power spectrum. Consistency checks are performed to test the robustness of these results paying particular attention to the foreground contamination level which remains well below the statistical uncertainties. The multipole range from l=10 to l=700 is covered with 25 bins, confirming strong evidence for a plateau at large angular scales (the Sachs-Wolfe plateau) followed by two acoustic peaks centered around l=220 and l=550 respectively. These data provide an independent confirmation, obtained at different frequencies, of the WMAP first year results.

A slender and flat shaft is a key part of the track recorder in marine vessels. However, the axial straightness of the shaft often exceeds standard measurements after it is machined. It has also been found that its precision does not last a long time. After thorough analysis of these problems the main reasons that affect machining quality are identified-and a process modification plan is put forward that meets design requirements of the shaft. The production and practice indicate that the precision of the shaft is stable for a long period and the quality of products improved substantially after new measures were employed, securing the e accuracy of the track recording of the marine vessel.

The frequency-domain fast boundary element method (BEM) combined with the exponential window technique leads to an efficient yet simple method for elastodynamic analysis. In this paper, the efficiency of this method is further enhanced by three strategies. Firstly, we propose to use exponential window with large damping parameter to improve the conditioning of the BEM matrices. Secondly, the frequency domain windowing technique is introduced to alleviate the severe Gibbs oscillations in time-domain responses caused by large damping parameters. Thirdly, a solution extrapolation scheme is applied to obtain better initial guesses for solving the sequential linear systems in the frequency domain. Numerical results of three typical examples with the problem size up to 0.7 million unknowns clearly show that the first and third strategies can significantly reduce the computational time. The second strategy can effectively eliminate the Gibbs oscillations and result in accurate time-domain responses.
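The exponential window technique itself fits in a few lines; below is a hedged NumPy demonstration in which a single-degree-of-freedom oscillator stands in for the BEM system: the input is damped by exp(-ηt), the response is computed at the shifted Laplace variable iω + η, and the damping is removed afterwards. Larger η improves conditioning but, as the text notes, amplifies late-time (Gibbs) artifacts.

```python
import numpy as np

n, dt = 1024, 1e-3
t = np.arange(n) * dt
eta = 2.0 / (n * dt)                         # window damping parameter

f_exc = np.zeros(n); f_exc[0] = 1.0          # impulse excitation
f_w = f_exc * np.exp(-eta * t)               # exponentially windowed input

omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
s = 1j * omega + eta                         # shifted Laplace variable
H = 1.0 / (s**2 + 2 * 0.02 * 10.0 * s + 10.0**2)   # toy oscillator FRF

u_w = np.fft.irfft(np.fft.rfft(f_w) * H, n)  # frequency-domain solve
u = u_w * np.exp(eta * t)                    # undo the window in time
```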

Digital signal processing techniques were employed to investigate the joint use of charge division and risetime analysis for a resistive anode (RA) coupled to a microchannel plate (MCP) detector. In contrast to the typical approach of using only the relative charge at each corner of the RA, this joint approach yields significantly improved position resolution. A conventional charge division analysis utilizing analog signal processing provides a measured position resolution of 170 μm (FWHM). By using the correlation between risetime and position, we were able to obtain a measured resolution of 92 μm (FWHM), corresponding to an intrinsic resolution of 64 μm (FWHM) for a single Z-stack MCP detector.

We conducted a two-group longitudinal, partially nested randomized controlled trial to examine whether young adolescent youth-parent dyads participating in Mission Possible: Parents and Kids Who Listen would, in contrast to a comparison group, demonstrate improved problem-solving skill. The intervention is based on the Circumplex Model and Social Problem-Solving Theory. The Circumplex Model posits that families who are balanced, that is, characterized by high cohesion, flexibility, and open communication, function best. Social Problem-Solving Theory informs the process and skills of problem solving. The conditional latent growth modeling analysis revealed no statistically significant differences in problem solving among the final sample of 127 dyads in the intervention and comparison groups. Analyses of effect sizes indicated large-magnitude group effects for selected scales for youth and dyads, suggesting potential efficacy and identifying for whom the intervention may be efficacious if study limitations and lessons learned are addressed.

Background: Existing methods for analyzing bacterial CGH data from two-color arrays are based on log-ratios only, a paradigm inherited from expression studies. We propose an alternative approach in which microarray signals are used in a different way and sequence identity is predicted using a supervised learning approach. Results: A data set containing 32 hybridizations of sequenced versus sequenced genomes was used to test and compare methods. A ROC analysis was performed to illustrate the ability to rank probes with respect to Present/Absent calls. Classification into Present and Absent is compared with that of a Gaussian mixture model. Conclusion: The results indicate that our proposed method improves on existing methods with respect to the ranking and classification of probes, especially for multi-genome arrays.

The high-performance liquid chromatographic (HPLC) method of Flores and Galston (1982 Plant Physiol 69: 701) for the separation and quantitation of benzoylated polyamines in plant tissues has been widely adopted by other workers. However, due to previously unrecognized problems associated with the derivatization of agmatine, this important intermediate in plant polyamine metabolism cannot be quantitated using this method. In addition, two polyamines, putrescine and diaminopropane, are not well resolved by this method. A simple modification of the original HPLC procedure greatly improves the separation and quantitation of these amines, and further allows the simultaneous analysis of phenethylamine and tyramine, which are major monoamine constituents of tobacco and other plant tissues. We have used this modified HPLC method to characterize amine titers in suspension-cultured carrot (Daucus carota L.) cells and tobacco (Nicotiana tabacum L.) leaf tissues.

In the past decades, on-line monitoring of batch processes using multi-way independent component analysis (MICA) has received considerable attention in both academia and industry. This paper focuses on two troublesome issues: selecting the dominant independent components in the absence of a standard criterion, and determining the control limits of monitoring statistics in the presence of non-Gaussian distributions. To optimize the number of key independent components, we introduce a novel concept of system deviation, which evaluates the reconstructed observations obtained with different sets of independent components. The monitoring statistics are transformed to Gaussian-distributed data by means of the Box-Cox transformation, which makes the control limits straightforward to determine. The proposed method is applied to on-line monitoring of a fed-batch penicillin fermentation simulator, and the experimental results indicate the advantages of the improved MICA monitoring over conventional methods.
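
The Box-Cox step can be sketched as follows (Python/SciPy): transform a non-Gaussian monitoring statistic so a control limit can be set from a normal quantile, then map the limit back. The in-control statistic values and the 99% level are synthetic, illustrative stand-ins.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Synthetic in-control values of a non-Gaussian monitoring statistic.
t2 = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.5, size=500)

transformed, lam = stats.boxcox(t2)          # lambda estimated by MLE
mu, sigma = transformed.mean(), transformed.std(ddof=1)
ucl_t = mu + stats.norm.ppf(0.99) * sigma    # Gaussian 99% control limit

ucl = inv_boxcox(ucl_t, lam)                 # limit on the original scale
print("upper control limit:", ucl)
```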

Introduction: “The Match” has become the accepted selection process for graduate medical education. Otomatch.com has provided an online forum for Otolaryngology-Head and Neck Surgery (OHNS) Match-related questions for over a decade. Herein, we aim to (1) delineate the types of posts on Otomatch to better understand the perspective of medical students applying for residency and (2) provide recommendations to potentially improve the Match process. Methods: Discussion forum posts on Otomatch between December 2001 and April 2014 were reviewed. The title of each thread and the total number of views were recorded for quantitative analysis. Each thread was assigned to one of six major categories and one of eighteen subcategories based on its chronology within the application cycle and its topic. National Resident Matching Program (NRMP) data were used for comparison. Results: We identified 1,921 threads corresponding to over 2 million page views. Over 40% of threads concerned questions about specific programs, and 27% were discussions about interviews. Views, a surrogate measure of popularity, reflected different trends: the majority of individuals viewed posts on interviews (42%), program-specific questions (20%), and how to rank programs (11%). The increase in viewership tracked the rise in applicant numbers reported in NRMP data. Conclusions: Our study provides an in-depth analysis of a popular discussion forum for medical students interested in the OHNS Match. The most viewed posts concern interview dates and questions about specific programs. We provide suggestions to address unmet needs of medical students and potentially improve the Match process. PMID:25550223

The Lyapunov exponent is an important index for describing the behavior of chaotic systems, and the largest Lyapunov exponent can be used to determine whether a system is chaotic. For discrete-time dynamical systems, the Lyapunov exponents are calculated by an eigenvalue method. In theory, the eigenvalue method yields more accurate Lyapunov exponents as the number of iterations increases, and the limits exist. In practice, however, due to the finite precision of computers and other factors, the results may overflow, become undefined, or be inaccurate: (1) the number of iterations cannot be too large, otherwise the simulation returns NaN or Inf; (2) even when NaN or Inf does not appear, as the number of iterations grows all computed Lyapunov exponents drift toward the largest one, leading to inaccurate results; and (3) from the viewpoint of numerical calculation, if the number of iterations is too small, the results are also inaccurate. Based on this analysis of Lyapunov-exponent calculation in discrete-time systems, this paper investigates two improved algorithms, based on QR and SVD orthogonal decompositions, that solve the above problems. Finally, examples are given to illustrate the feasibility and effectiveness of the improved algorithms.
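
A minimal version of the QR-based approach, illustrated with the Hénon map (the SVD variant replaces the QR factorization with a singular value decomposition), might look as follows; the map, iteration counts, and transient length are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

def henon_jacobian(x, a=1.4, b=0.3):
    return np.array([[-2.0 * a * x[0], 1.0],
                     [b, 0.0]])

def lyapunov_qr(x0, n_iter=10000, n_skip=100):
    x = np.asarray(x0, float)
    for _ in range(n_skip):                    # discard the transient
        x = henon(x)
    Q = np.eye(2)
    log_r = np.zeros(2)
    for _ in range(n_iter):
        # Re-orthogonalize the tangent vectors at every step so the
        # accumulated products never overflow or collapse onto the
        # direction of the largest exponent.
        Q, R = np.linalg.qr(henon_jacobian(x) @ Q)
        log_r += np.log(np.abs(np.diag(R)))
        x = henon(x)
    return log_r / n_iter

print(lyapunov_qr([0.1, 0.1]))   # roughly [0.42, -1.62] for the Henon map
```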

Global Navigation Satellite Systems (GNSS) are now recognized as cost-effective tools for ionospheric studies, providing global coverage through worldwide networks of GNSS stations. While GNSS networks continue to expand and improve the observability of the ionosphere, the amount of poor-quality GNSS observation data is also increasing, and the use of such data degrades the accuracy of ionospheric measurements. This paper develops a comprehensive method to assess the quality of GNSS observations for ionospheric studies. The algorithms are designed specifically to compute the key GNSS data-quality parameters that affect the quality of ionospheric products. The quality of data collected from the Continuously Operating Reference Stations (CORS) network in the conterminous United States (CONUS) is analyzed. The resulting quality varies widely from station to station, and the data quality of an individual station persists over extended time periods. Compared with conventional methods, the quality parameters obtained from the proposed method correlate more strongly with the quality of the ionospheric data. The results suggest that a set of data-quality parameters, used in combination, can effectively select stations with high-quality GNSS data and improve the performance of ionospheric data analysis.

The first draft of the common marmoset (Callithrix jacchus) genome was published by the Marmoset Genome Sequencing and Analysis Consortium (MGSAC). The draft was based on whole-genome shotgun sequencing, and the current assembly version is Callithrix_jacchus-3.2.1; however, the draft genome still contains 187,214 undetermined gap regions, as well as supercontigs and relatively short contigs that are unmapped to chromosomes. We performed resequencing and assembly of the common marmoset genome by deep sequencing with high-throughput sequencing technology. Several sequencing runs on Illumina platforms were executed, yielding 181 Gbp of high-quality bases, including mate pairs with long insert lengths of 3, 8, 20, and 40 Kbp, corresponding to approximately 60× coverage. The resequencing significantly improved the MGSAC draft genome sequence: the N50 of the contigs, a statistical measure used to evaluate assembly quality, doubled. As a result, 51% of the contigs (total length: 299 Mbp) that were unmapped to chromosomes in the MGSAC draft were merged with chromosomal contigs, and the improved genome sequence helped to detect 5,288 new genes homologous to human cDNAs and to completely fill the gaps in 5,187 transcripts of the Ensembl gene annotations.
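
For reference, N50, the contig length at which contigs of that length or longer cover at least half of the total assembly, can be computed as in this short sketch (Python):

```python
def n50(contig_lengths):
    """Length L such that contigs of length >= L hold >= 50% of all bases."""
    lengths = sorted(contig_lengths, reverse=True)
    half, running = sum(lengths) / 2.0, 0
    for length in lengths:
        running += length
        if running >= half:
            return length

print(n50([100, 80, 60, 40, 20]))   # -> 80 (100 + 80 covers half of 300)
```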

As a group, students with intellectual disabilities display difficulties in a wide range of academic skills, including the acquisition of basic skills such as literacy. Early writing and reading skills must be supported to prepare students with intellectual disabilities to learn to read and write. The goal of this study was to replicate and extend the current research on Brief Experimental Analysis (BEA) with letter formation. Three students with intellectual disabilities participated in the study. A brief multi-element design was used to test the effectiveness of four interventions on letter formation: goal setting plus contingent reinforcement, graphical feedback, error correction, and modeling. For one student, modeling was effective; for the two remaining students, goal setting plus contingent reinforcement was effective. The results extend the BEA literature by investigating the effects of interventions for improving letter formation in students with intellectual disabilities. The findings suggest that using BEA to assess the relative contribution of each intervention can identify the most effective interventions for improving letter formation in these students.

The US Department of Energy (US-DOE) manages diverse facilities, ranging from laboratory complexes to nuclear reactors and waste repositories. It is self-regulating in the areas of radiological safety, occupational protection, and environmental disturbances. In these areas the US-DOE has obtained mostly good results, but at high expense, by using conservative and unsystematic approaches. In an effort to improve both safety and the use of resources, a project has been undertaken to better understand how risk assessment techniques can be used to obtain improved safety outcomes and their regulation. The Test Reactor Area Hot Cell (TRAHC) at the Idaho National Engineering and Environmental Laboratory (INEEL) is the subject of a simple probabilistic risk assessment (PRA) covering radiological releases to the environment and occupational hazards. To our knowledge this is the first attempt to apply quantitative risk analysis to the management of non-radiological occupational risks. Its purpose is to examine the feasibility of using risk assessment to supplant the currently employed, less formal hazard analysis as the basis for allocating safety-related resources. Problems of data and modeling adequacy have proven to be important; results to date indicate areas where revised resource allocation should be considered.

Self-control is positively associated with a host of beneficial outcomes. Therefore, psychological interventions that reliably improve self-control are of great societal value. A prominent idea suggests that training self-control by repeatedly overriding dominant responses should lead to broad improvements in self-control over time. Here, we conducted a random-effects meta-analysis based on robust variance estimation of the published and unpublished literature on self-control training effects. Results based on 33 studies and 158 effect sizes revealed a small-to-medium effect of g = 0.30, 95% confidence interval [0.17, 0.42]. Moderator analyses found that training effects tended to be larger for (a) self-control stamina rather than strength, (b) studies with inactive compared to active control groups, (c) males than females, and (d) studies (co)authored by proponents of the strength model of self-control. Bias-correction techniques suggested the presence of small-study effects and/or publication bias and arrived at smaller effect size estimates (range of corrected g: 0.13 to 0.24). The mechanisms underlying the effect are poorly understood, and there is not enough evidence to conclude that the repeated control of dominant responses is the critical element driving training effects.
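
For orientation, a basic DerSimonian-Laird random-effects pooled estimate can be computed as below (Python). Note that the robust variance estimation used in the paper additionally handles dependent effect sizes from the same study, which this simple sketch, with placeholder effect sizes and variances, does not.

```python
import numpy as np

# Placeholder Hedges' g values and their sampling variances.
g = np.array([0.40, 0.10, 0.50, 0.25, 0.30])
v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])

w = 1.0 / v
gbar = np.sum(w * g) / np.sum(w)
q = np.sum(w * (g - gbar) ** 2)              # heterogeneity statistic Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)      # between-study variance

w_re = 1.0 / (v + tau2)                      # random-effects weights
pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"g = {pooled:.2f}, 95% CI [{pooled - 1.96 * se:.2f}, "
      f"{pooled + 1.96 * se:.2f}]")
```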

To obtain cassava starch films with mechanical properties improved relative to the synthetic polymers used in packaging production, a complete 2³ factorial design was carried out to investigate which factors significantly influence the tensile strength of the biofilm. The factors investigated were the cassava starch, glycerol, and modified clay contents. Modified bentonite clay was used as the filler of the biofilm, and glycerol was the plasticizer used to thermoplasticize the cassava starch. The factorial analysis suggested a regression model capable of predicting the optimal mechanical property of the cassava starch film by maximizing the tensile strength. The reliability of the regression model was tested against the experimental data by means of a Pareto chart. The modified clay was the factor of greatest statistical significance for the response variable, contributing most to the improvement of the mechanical properties of the starch film. The factorial experiments also showed that the interaction of glycerol with both modified clay and cassava starch was significant for the reduction of biofilm ductility: modified clay and cassava starch contributed to maximizing biofilm ductility, while glycerol contributed to minimizing it.
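
The kind of model fitted here can be sketched as follows (Python): a main-effects plus two-factor-interaction regression on the coded (-1/+1) levels of a 2³ design. The tensile-strength responses below are hypothetical values, not the study's measurements.

```python
import numpy as np

# Coded (-1/+1) levels for starch, glycerol, and modified clay in a
# full 2^3 design, with hypothetical tensile-strength responses (MPa).
runs = np.array([[s, g, c] for s in (-1, 1)
                           for g in (-1, 1)
                           for c in (-1, 1)], dtype=float)
y = np.array([3.1, 3.4, 2.8, 2.9, 4.0, 4.3, 3.5, 3.9])

s, g, c = runs.T
X = np.column_stack([np.ones(8), s, g, c, s * g, s * c, g * c])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

names = ["intercept", "starch", "glycerol", "clay",
         "starch:glycerol", "starch:clay", "glycerol:clay"]
for name, b in zip(names, beta):
    print(f"{name:>16}: {b:+.3f}")
# For the factor terms, the classical effect is twice the coded
# coefficient; ranking |effects| yields the Pareto chart.
```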

In general, vehicle vibration is non-stationary and has a non-Gaussian probability distribution; yet existing testing methods for packaging design employ Gaussian distributions to represent vibration induced by road profiles. This frequently results in over-testing and/or over-design of the packaging to meet a specification, and correspondingly leads to wasteful packaging and product waste, which represent $15bn per year in the USA and €3bn per year in the EU. The purpose of this paper is to enable a measured non-stationary acceleration signal to be replaced by a constructed signal that retains, as far as possible, the non-stationary characteristics of the original. The constructed signal consists of a concatenation of decomposed shorter-duration signals, each having its own kurtosis level. Wavelet analysis is used to decompose the measured signal into inner and outlier components. The constructed signal has a power spectral density (PSD) similar to that of the original signal, without incurring excessive acceleration levels. This allows an improved, more representative simulated input signal to be generated for the current generation of shaker tables. The wavelet decomposition method is also demonstrated experimentally through two correlation studies. It is shown that significant improvements over current international standards for packaging testing are achievable; hence more efficient packaging system design is possible.
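
A minimal sketch of the decomposition step is shown below (Python, PyWavelets); the wavelet, level, and 3-sigma thresholding rule are illustrative assumptions rather than the paper's exact settings, and the heavy-tailed test signal stands in for a measured acceleration record.

```python
import numpy as np
import pywt
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
signal = rng.standard_t(df=4, size=4096)    # heavy-tailed stand-in record

coeffs = pywt.wavedec(signal, "db4", level=5)
thresholds = [3.0 * np.std(c) for c in coeffs]

# Split each band: small coefficients -> "inner" (near-Gaussian) part,
# large coefficients -> "outlier" (high-kurtosis) part.
inner = [np.where(np.abs(c) <= t, c, 0.0) for c, t in zip(coeffs, thresholds)]
outlier = [np.where(np.abs(c) > t, c, 0.0) for c, t in zip(coeffs, thresholds)]

inner_sig = pywt.waverec(inner, "db4")
outlier_sig = pywt.waverec(outlier, "db4")
print("kurtosis (inner, outlier):",
      kurtosis(inner_sig, fisher=False), kurtosis(outlier_sig, fisher=False))
```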

Subspace-based algorithms for operational modal analysis have been extensively studied in the past decades. In the postprocessing of subspace-based algorithms, the stabilization diagram is often used to determine modal parameters. In this paper, an improved stabilization diagram is proposed for stochastic subspace identification. First, a model order selection method based on singular entropy theory is proposed: the singular entropy increment is calculated from the nonzero singular values of the output covariance matrix, and the model order is selected where the variation of the singular entropy increment approaches zero. Then, a stabilization diagram with confidence intervals, established from the uncertainty of the modal parameters, is presented. Finally, a simulation example of a four-story structure and a full-scale cable-stayed footbridge application are employed to illustrate the improved stabilization diagram method. The study demonstrates that the model order can be reasonably determined by the proposed method and that the stabilization diagram with confidence intervals can effectively remove spurious modes.
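
The order-selection idea can be sketched as follows (Python); the entropy expression below is one common formulation of the singular entropy increment, and the tolerance and placeholder covariance matrix are illustrative assumptions.

```python
import numpy as np

def singular_entropy_order(cov_matrix, tol=1e-3):
    """Pick the model order where the singular entropy increment levels off."""
    s = np.linalg.svd(cov_matrix, compute_uv=False)
    s = s[s > 0]
    p = s / s.sum()
    increment = -p * np.log(p)      # singular entropy increment per order
    for k in range(1, len(increment)):
        if abs(increment[k] - increment[k - 1]) < tol:
            return k                # variation has approached zero
    return len(increment)

# Usage on a placeholder low-rank output covariance matrix plus noise:
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 50))
print(singular_entropy_order(A + 0.01 * rng.standard_normal((50, 50))))
```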

To study selection indices for improving rice grain yield, a cross was made in 2006 between an Iranian traditional rice (Oryza sativa L.) variety, Tarommahalli, and an improved indica rice variety, Khazar. The traits of the parents (30 plants), the F1 (30 plants), and the F2 generation (492 individuals) were evaluated at the Rice Research Institute of Iran (RRII) during 2007. The heritabilities of the number of panicles per plant, plant height, days to heading, and panicle exsertion were greater than that of grain yield. Selection indices were developed using the results of multivariate analysis. To evaluate selection strategies for maximizing grain yield, 14 selection indices were calculated based on two methods (optimum and base) and combinations of 12 traits with various economic weights. The results showed that selection for grain weight, number of panicles per plant, and panicle length, using their phenotypic and/or genotypic direct effects (path coefficients) as economic weights, should serve as an effective selection criterion under either the optimum or the base index.
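
For context, the classical optimum (Smith-Hazel) index takes the form b = P⁻¹Ga, with P and G the phenotypic and genotypic (co)variance matrices and a the vector of economic weights (here, the path coefficients mentioned above). The sketch below (Python) uses placeholder matrices for three traits, not the study's estimates.

```python
import numpy as np

# Placeholder phenotypic (P) and genotypic (G) covariance matrices for
# three traits, and economic weights a (e.g. path coefficients).
P = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.1],
              [0.2, 0.1, 1.0]])
G = np.array([[0.6, 0.2, 0.1],
              [0.2, 0.5, 0.1],
              [0.1, 0.1, 0.4]])
a = np.array([0.5, 0.3, 0.2])

b = np.linalg.solve(P, G @ a)       # optimum index coefficients: b = P^-1 G a
x = np.array([1.2, -0.3, 0.8])      # standardized trait record of one plant
print("index score:", b @ x)        # plants are ranked by this score
```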

The application of bracing to increase the lateral stiffness of building structures is a seismic improvement technique to which engineers frequently have recourse. Accordingly, investigating the role of bracing in concrete structures, along with the development of seismic fragility curves, is of central concern to civil engineers. In this research, an ordinary RC building designed according to the 1st edition of the Iranian seismic code was selected for examination; according to FEMA 356, this building is considered vulnerable. To improve its seismic performance, three different types of bracing were employed, Concentrically Braced Frames, Eccentrically Braced Frames, and Buckling Restrained Frames, and each bracing element was distributed in three different locations in the building. Fragility curves were developed using 30 earthquake records scaled on the Peak Ground Acceleration (PGA) intensity scale for time-history analysis. Two damage measures, Inter-Story Drift and Plastic Axial Deformation, were used. The numerical results confirm that Plastic Axial Deformation is more reliable than conventional approaches for developing fragility curves for retrofitted frames. On this basis, the suitable damage measure was selected, and log-normal fragility curves were developed and compared, first for the original and then for the retrofitted building.
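
As an illustration of the fragility-fitting step, the sketch below (Python/SciPy) fits a log-normal fragility curve P(damage ≥ state | PGA) = Φ(ln(PGA/θ)/β) by maximum likelihood from binary exceedance outcomes. The PGA values and outcomes are placeholders, not the study's 30-record results.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Placeholder record intensities (g) and binary damage-state exceedance.
pga = np.array([0.10, 0.15, 0.20, 0.30, 0.40, 0.50, 0.60, 0.80])
exceed = np.array([0, 0, 0, 1, 0, 1, 1, 1])

def neg_log_like(params):
    theta, beta = params                          # median and dispersion
    p = norm.cdf(np.log(pga / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(exceed * np.log(p) + (1 - exceed) * np.log(1 - p))

res = minimize(neg_log_like, x0=[0.4, 0.5],
               bounds=[(1e-3, None), (1e-3, None)])
print("median PGA (theta), dispersion (beta):", res.x)
```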

An important open problem in neuroimaging analysis is the development of analytical methods that ensure precise inferences about the neural activity underlying the fMRI BOLD signal despite the known presence of confounds. Here, we develop and test a new meta-algorithm for semi-blind (i.e., without knowledge of stimulus timings) deconvolution of the BOLD signal that estimates, via bootstrapping, both the underlying neural events driving BOLD and the confidence of these estimates. Our approach includes two improvements over the current best-performing deconvolution approach: (1) we optimize the parametric form of the deconvolution feature space, and (2) we pre-classify neural event estimates into two subgroups, known or unknown, based on the confidence of the estimates prior to conducting neural event classification. This knows-what-it-knows approach significantly improves neural event classification over the current best-performing algorithm, as tested in a detailed computer simulation of a highly confounded fMRI BOLD signal. We then implemented a massively parallelized version of the bootstrapping-based deconvolution algorithm and executed it on a high-performance computer to conduct large-scale (i.e., voxelwise) estimation of the neural events for a group of 17 human subjects. We show that by restricting the computation of inter-regional correlation to only those neural events estimated with high confidence, the method showed higher sensitivity for identifying the default mode network than a standard BOLD signal correlation analysis across subjects.
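
A deliberately simplified sketch of the bootstrapping idea is given below (Python): neural events are estimated by ridge-regularized deconvolution against a crude HRF, residuals are resampled to obtain a confidence interval per estimate, and only narrow-interval ("known") estimates would be passed on. The HRF shape, regularization, and confidence threshold are all illustrative assumptions, not the paper's meta-algorithm.

```python
import numpy as np
from scipy.linalg import toeplitz

def hrf(t):
    # Crude gamma-shaped HRF (illustrative, not the canonical double gamma).
    return t ** 5 * np.exp(-t) / 120.0

def deconvolve(bold, h, lam=1.0):
    # Ridge-regularized least-squares deconvolution: (H'H + lam*I) x = H'y.
    n = len(bold)
    H = toeplitz(np.r_[h, np.zeros(n - len(h))], np.zeros(n))
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ bold)

h = hrf(np.arange(20.0))
rng = np.random.default_rng(0)
bold = rng.standard_normal(200)                   # placeholder BOLD series

est = deconvolve(bold, h)
resid = bold - np.convolve(est, h)[: len(bold)]
boots = np.array([deconvolve(bold + rng.choice(resid, len(bold)), h)
                  for _ in range(200)])
lo, hi = np.percentile(boots, [2.5, 97.5], axis=0)
known = (hi - lo) < np.median(hi - lo)            # high-confidence estimates
```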