Background We present the results of EGASP, a community experiment to assess the state of the art in genome annotation within the ENCODE regions, which span 1% of the human genome sequence. The experiment had two major goals: the assessment of the accuracy of computational methods to predict protein-coding genes; and the overall assessment of the completeness of the current human genome annotations as represented in the ENCODE regions. For the computational prediction assessment, eighteen groups contributed gene predictions. We evaluated these submissions against each other based on a 'reference set' of annotations generated as part of the GENCODE project. These annotations were not available to the prediction groups prior to the submission deadline, so that their predictions were blind and an external advisory committee could perform a fair assessment. Results The best methods correctly predicted at least one gene transcript for close to 70% of the annotated genes. Nevertheless, multiple-transcript accuracy, which takes alternative splicing into account, reached only approximately 40% to 50%. At the coding nucleotide level, the best programs reached an accuracy of 90% in both sensitivity and specificity. Programs relying on mRNA and protein sequences were the most accurate in reproducing the manually curated annotations. Experimental validation shows that only a very small percentage (3.2%) of the 221 selected computationally predicted exons outside of the existing annotation could be verified. Conclusion This is the first such experiment in human DNA, and we have followed the standards established in a similar experiment, GASP1, in Drosophila melanogaster. We believe the results presented here contribute to the value of ongoing large-scale annotation projects and should guide the choice of experimental methods as annotation efforts are scaled up to the entire human genome sequence. PMID:16925836
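The nucleotide-level figures quoted above follow the convention commonly used in gene-prediction assessments, where sensitivity is the fraction of annotated coding nucleotides that are predicted and specificity is the fraction of predicted coding nucleotides that are annotated (i.e. precision). A minimal sketch, with invented position sets for illustration:

```python
def coding_sn_sp(annotated, predicted):
    """Nucleotide-level sensitivity and specificity of a gene prediction.

    annotated, predicted: sets of genomic positions labelled as coding.
    Sensitivity = true coding positions recovered / all annotated positions;
    specificity (gene-finding convention) = true coding positions recovered /
    all predicted positions.
    """
    tp = len(annotated & predicted)  # positions both call coding
    sn = tp / len(annotated) if annotated else 0.0
    sp = tp / len(predicted) if predicted else 0.0
    return sn, sp

# Toy example: a 10 bp annotated exon vs. a prediction shifted by 2 bp.
annotated = set(range(100, 110))
predicted = set(range(102, 112))
sn, sp = coding_sn_sp(annotated, predicted)
print(round(sn, 2), round(sp, 2))  # 0.8 0.8
```

In a real evaluation the position sets would be built from the exon coordinates of the reference annotation and the submitted predictions.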

While the C. elegans genome is extensively annotated, relatively little information is available for other Caenorhabditis species. The nematode genome annotation assessment project (nGASP) was launched to objectively assess the accuracy of protein-coding gene prediction software in C. elegans, and to apply this knowledge to the annotation of the genomes of four additional Caenorhabditis species and other nematodes. Seventeen groups worldwide participated in nGASP, and submitted 47 prediction sets for 10 Mb of the C. elegans genome. Predictions were compared to reference gene sets consisting of confirmed or manually curated gene models from WormBase. The most accurate gene-finders were 'combiner' algorithms, which made use of transcript and protein alignments and multi-genome alignments, as well as gene predictions from other gene-finders. Gene-finders that used alignments of ESTs, mRNAs and proteins came in second place. There was a tie for third place between gene-finders that used multi-genome alignments and ab initio gene-finders. The median gene-level sensitivity of combiners was 78% and their specificity was 42%, which is nearly the same accuracy as reported for combiners in the human genome. C. elegans genes with exons of unusual hexamer content, as well as those with many exons, short exons, long introns, a weak translation start signal, weak splice sites, or poorly conserved orthologs were the most challenging for gene-finders.

This paper harnesses collaborative annotations by students as learning feedback on online formative assessments to improve the learning achievements of students. Through the developed Web platform, students can conduct formative assessments, collaboratively annotate, and review historical records in a convenient way, while teachers can generate…

The Critical Assessment of Function Annotation meeting was held July 14-15, 2011 at the Austria Conference Center in Vienna, Austria. There were 73 registered delegates at the meeting. We thank the DOE for this award, which helped us organize and support the scientific meeting AFP 2011 as a special interest group (SIG) meeting associated with the ISMB 2011 conference in Vienna, Austria, in July 2011. The AFP SIG was held on July 15-16, 2011, immediately preceding the conference. The meeting consisted of two components, the first being a series of talks (invited and contributed) and discussion sections dedicated to protein function research, with an emphasis on the theory and practice of computational methods utilized in functional annotation. The second component provided a large-scale assessment of computational methods through participation in the Critical Assessment of Functional Annotation (CAFA).

This annotated bibliography lists and summarizes the key points of 33 resource materials that focus specifically on portfolio assessment. Compiled in Spring 1993 as part of a demonstration project funded by the Wisconsin State Board of Vocational, Technical, and Adult Education, the bibliography is intended to serve as a tool for use in developing…

A selected annotated bibliography on projects, training, and strategies for generating income, intended for persons actively engaged in non-formal education for development, reflects a growing number of projects on income generation by and for women's groups, and a reliance upon indigenous associations and group action. Documents dating from 1969…

This bibliography covers aspects of the Detection and Early Warning of Proliferation from Online INdicators of Threat (DEWPOINT) project including 1) data management and querying, 2) baseline and advanced methods for classifying free text, and 3) algorithms to achieve the ultimate goal of inferring intent from free text sources. Metrics for assessing the quality and correctness of classification are addressed in the second group. Data management and querying include methods for efficiently storing, indexing, searching, and organizing the data we expect to operate on within the DEWPOINT project.

Over 260 books, textbooks, articles, pamphlets, periodicals, films, and multi-media packages appropriate for the analysis of global issues at the college level are briefly annotated. Entries include classic books and articles as well as a number of recent (1976-1981) publications. The purpose is to assist students and educators in developing a…

Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM. PMID:19294468

The Rice Annotation Project Database (RAP-DB) was created to provide the genome sequence assembly of the International Rice Genome Sequencing Project (IRGSP), manually curated annotation of the sequence, and other genomics information that could be useful for a comprehensive understanding of rice biology. Since the last publication of the RAP-DB, the IRGSP genome has been revised and reassembled. In addition, a large number of rice-expressed sequence tags have been released, and functional genomics resources have been produced worldwide. Thus, we have thoroughly updated our genome annotation by manual curation of all the functional descriptions of rice genes. The latest version of the RAP-DB contains a variety of annotation data as follows: clone positions, structures and functions of 31,439 genes validated by cDNAs, RNA genes detected by massively parallel signature sequencing (MPSS) technology and sequence similarity, flanking sequences of mutant lines, transposable elements, etc. Other annotation data such as Gnomon can be displayed along with those of RAP for comparison. We have also developed a new keyword search system to allow the user to access useful information. The RAP-DB is available at: http://rapdb.dna.affrc.go.jp/ and http://rapdb.lab.nig.ac.jp/. PMID:18089549

The results of an extensive literature survey on decision analysis, with specific application to problems in research and development project management, are summarized in bibliographic form. Approximately 215 references are organized by subject matter and also summarized and annotated (several lines per reference) in a separate listing.

The MATADOR project is focused on developing methods to infer the operational mode of facilities that have the potential to be used in weapons development programs. Our central hypothesis is that by persistent, non-intrusive monitoring of such facilities, differences between various use scenarios can be reliably discovered. The impact of success in this area is that new tools and techniques for monitoring and treaty verification would make it easier to reliably discover and document weapons development activities. This document captures the literature that will serve as a basis to approach this task. The relevant literature is divided into topical areas that relate to the various aspects of expected MATADOR project development. We have found that very little work that is directly applicable for our purposes has been published, which has motivated the development of novel methods under the project. Therefore, the manuscripts referenced in this document were selected based on their potential use as foundational blocks for the methods we anticipate developing, or so that we can understand the limitations of existing methods.

This is a report with an attached annotated bibliography. This study explores the literature for analytical techniques that can support the complex decision-making process associated with Corps of Engineers environmental projects. The literature review focuses on opportunities for using trade-off methodologies and group processes in environmental plan formulation and evaluation. The work was conducted under the Evaluation Framework Work Unit within the Evaluation of Environmental Investments Research Program.

Assessment is one of the most essential parts of any instructive learning process which aims to evaluate a learner's knowledge about learning concepts. In this work, a new method for learner assessment based on learner annotations is presented. The proposed method exploits the M-BLEU algorithm to find the most similar reference annotations…

Identifying, tracking and reasoning about tumor lesions is a central task in cancer research and clinical practice that could potentially be automated. However, information about tumor lesions in imaging studies is not easily accessed by machines for automated reasoning. The Annotation and Image Markup (AIM) information model recently developed for the cancer Biomedical Informatics Grid provides a method for encoding the semantic information related to imaging findings, enabling their storage and transfer. However, it is currently not possible to apply automated reasoning methods to image information encoded in AIM. We have developed a methodology and a suite of tools for transforming AIM image annotations into OWL, and an ontology for reasoning with the resulting image annotations for tumor lesion assessment. Our methods enable automated inference of semantic information about cancer lesions in images. PMID:20351880

The online encyclopedia Wikipedia has become one of the most important online references in the world and has a substantial and growing scientific content. A search of Google with many RNA-related keywords identifies a Wikipedia article as the top hit. We believe that the RNA community has an important and timely opportunity to maximize the content and quality of RNA information in Wikipedia. To this end, we have formed the RNA WikiProject (http://en.wikipedia.org/wiki/Wikipedia:WikiProject_RNA) as part of the larger Molecular and Cellular Biology WikiProject. We have created over 600 new Wikipedia articles describing families of noncoding RNAs based on the Rfam database, and invite the community to update, edit, and correct these articles. The Rfam database now redistributes this Wikipedia content as the primary textual annotation of its RNA families. Users can, therefore, for the first time, directly edit the content of one of the major RNA databases. We believe that this Wikipedia/Rfam link acts as a functioning model for incorporating community annotation into molecular biology databases. PMID:18945806

Background The amount of data deposited in the Gene Expression Omnibus (GEO) has expanded significantly. It is important to ensure that these data are properly annotated with clinical data and descriptions of experimental conditions so that they can be useful for future analysis. This study assesses the adequacy of documented asthma markers in GEO. Three objective measures (coverage, consistency and association) were used for evaluation of annotations contained in 17 asthma studies. Results There were 918 asthma samples with 20,640 annotated markers. Of these markers, only 10,419 had documented values (50% coverage). In one study carefully examined for consistency, there were discrepancies in drug name usage, with brand name and generic name used in different sections to refer to the same drug. Annotated markers showed adequate association with other relevant variables (i.e. the use of medication only when its corresponding disease state was present). Conclusions There is inadequate variable coverage within GEO and usage of terms lacks consistency. Association between relevant variables, however, was adequate. PMID:21044366
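The 50% coverage figure in this abstract follows directly from the reported counts; a quick check using only the numbers quoted above:

```python
# Coverage = documented markers / total annotated markers
# (counts taken from the GEO asthma-studies abstract).
total_markers = 20640
documented = 10419
coverage = documented / total_markers
print(f"{coverage:.1%}")  # 50.5%
```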

Background Analysis of large-scale experimental datasets frequently produces one or more sets of proteins that are subsequently mined for functional interpretation and validation. To this end, a number of computational methods have been devised that rely on the analysis of functional annotations. Although current methods provide valuable information (e.g. significantly enriched annotations, pairwise functional similarities), they do not specifically measure the degree of homogeneity of a protein set. Results In this work we present a method that scores the degree of functional homogeneity, or coherence, of a set of proteins on the basis of the global similarity of their functional annotations. The method uses statistical hypothesis testing to assess the significance of the set in the context of the functional space of a reference set. As such, it can be used as a first step in the validation of sets expected to be homogeneous prior to further functional interpretation. Conclusion We evaluate our method by analysing known biologically relevant sets as well as random ones. The known relevant sets comprise macromolecular complexes, cellular components and pathways described for Saccharomyces cerevisiae, which are mostly significantly coherent. Finally, we illustrate the usefulness of our approach for validating 'functional modules' obtained from computational analysis of protein-protein interaction networks. Matlab code and supplementary data are available at PMID:18937846
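The abstract does not give the authors' exact scoring function, so the sketch below only illustrates the general idea: score a protein set by the global similarity of its members' annotations (here, mean pairwise Jaccard similarity of term sets) and assess significance against random sets drawn from the reference. All protein and term names are invented.

```python
import random
from itertools import combinations

def set_coherence(proteins, annotations):
    """Mean pairwise Jaccard similarity of the proteins' annotation term sets."""
    pairs = list(combinations(sorted(proteins), 2))
    if not pairs:
        return 0.0
    def jaccard(a, b):
        ta, tb = annotations[a], annotations[b]
        return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

def coherence_pvalue(query, annotations, n_perm=1000, seed=0):
    """Permutation p-value: fraction of equally sized random sets drawn from
    the reference that score at least as high as the query set."""
    rng = random.Random(seed)
    universe = sorted(annotations)
    observed = set_coherence(query, annotations)
    hits = sum(
        set_coherence(rng.sample(universe, len(query)), annotations) >= observed
        for _ in range(n_perm)
    )
    return observed, (hits + 1) / (n_perm + 1)

# Toy reference: three proteins sharing both terms, seven unrelated ones.
annotations = {"p1": {"T1", "T2"}, "p2": {"T1", "T2"}, "p3": {"T1", "T2"}}
annotations.update({f"q{i}": {f"U{i}"} for i in range(7)})
observed, p = coherence_pvalue(["p1", "p2", "p3"], annotations)
```

For the coherent set {p1, p2, p3} the observed score is 1.0 and the permutation p-value is small, which is the behavior the method relies on to flag homogeneous sets before further interpretation.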

The GENCODE Consortium aims to identify all gene features in the human genome using a combination of computational analysis, manual annotation, and experimental validation. Since the first public release of this annotation data set, few new protein-coding loci have been added, yet the number of alternative splicing transcripts annotated has steadily increased. The GENCODE 7 release contains 20,687 protein-coding and 9640 long noncoding RNA loci and has 33,977 coding transcripts not represented in UCSC genes and RefSeq. It also has the most comprehensive annotation of long noncoding RNA (lncRNA) loci publicly available with the predominant transcript form consisting of two exons. We have examined the completeness of the transcript annotation and found that 35% of transcriptional start sites are supported by CAGE clusters and 62% of protein-coding genes have annotated polyA sites. Over one-third of GENCODE protein-coding genes are supported by peptide hits derived from mass spectrometry spectra submitted to Peptide Atlas. New models derived from the Illumina Body Map 2.0 RNA-seq data identify 3689 new loci not currently in GENCODE, of which 3127 consist of two exon models indicating that they are possibly unannotated long noncoding loci. GENCODE 7 is publicly available from gencodegenes.org and via the Ensembl and UCSC Genome Browsers. PMID:22955987

Motivation: Ever-increasing amounts of biological interaction data are being accumulated worldwide, but they are currently not readily accessible to the biologist at a single site. New techniques are required for retrieving, sharing and presenting data spread over the Internet. Results: We introduce the DASMI system for the dynamic exchange, annotation and assessment of molecular interaction data. DASMI is based on the widely used Distributed Annotation System (DAS) and consists of a data exchange specification, web servers for providing the interaction data and clients for data integration and visualization. The decentralized architecture of DASMI affords the online retrieval of the most recent data from distributed sources and databases. DASMI can also be extended easily by adding new data sources and clients. We describe all DASMI components and demonstrate their use for protein and domain interactions. Availability: The DASMI tools are available at http://www.dasmi.de/ and http://ipfam.sanger.ac.uk/graph. The DAS registry and the DAS 1.53E specification are found at http://www.dasregistry.org/. Contact: mario.albrecht@mpi-inf.mpg.de Supplementary information: Supplementary data and all figures in color are available at Bioinformatics online. PMID:19420069

This annotated bibliography was compiled to assist physical education majors, especially those having a major interest in football and football coaching. The bibliography is limited to the areas of coaching techniques and philosophy, fundamentals, offense, defense, injuries, and conditioning at the high school and college level. These broader…

This annotated bibliography, which includes publishers' addresses and prices, provides an overview of the research on effective schools, the implications of this research, and implementation efforts. Titles of four videotapes, two cassette tapes, and seven books are included. Also listed are the names of charter members of the editorial board of…

National Center for Education in Maternal and Child Health, Washington, DC.

An annotated listing is presented of projects offering maternal and child health care services. These projects, referred to as special projects of regional and national significance (SPRANS), are supported by the Office of Maternal and Child Health of the Department of Health and Human Services. The first section provides information on services…

The Gene Ontology (GO) is a collaborative effort that provides structured vocabularies for annotating the molecular function, biological role, and cellular location of gene products in a highly systematic way and in a species-neutral manner with the aim of unifying the representation of gene function across different organisms. Each contributing member of the GO Consortium independently associates GO terms to gene products from the organism(s) they are annotating. Here we introduce the Reference Genome project, which brings together those independent efforts into a unified framework based on the evolutionary relationships between genes in these different organisms. The Reference Genome project has two primary goals: to increase the depth and breadth of annotations for genes in each of the organisms in the project, and to create data sets and tools that enable other genome annotation efforts to infer GO annotations for homologous genes in their organisms. In addition, the project has several important incidental benefits, such as increasing annotation consistency across genome databases, and providing important improvements to the GO's logical structure and biological content. PMID:19578431

National Center for Education in Maternal and Child Health, Washington, DC.

This annotated listing provides brief descriptions of the 591 projects funded during 1991 by federal set-aside funds of the Maternal and Child Health (MCH) Services Block Grant and identified as special projects of regional and national significance (SPRANS). Preliminary information includes an introduction, an organization chart of the Maternal…

The curriculum materials developed by 34 projects are described in this directory. The discussions are organized by discipline: Anthropology, Economics, Geography, History, Political Science, Social Psychology, Sociology, and General and Interdisciplinary. Each individual project note includes: project name, director, address, and a summary of the…

This report is part of ongoing research engaged in transforming knowledge of how human communication works into improvements in man-machine communication of existing and planned computer systems. The methodology includes having a trained "Observer" annotate transcripts of human communication in a prescribed manner. One of the issues, therefore, in…

The Ensembl gene annotation system has been used to annotate over 70 different vertebrate species across a wide range of genome projects. Furthermore, it generates the automatic alignment-based annotation for the human and mouse GENCODE gene sets. The system is based on the alignment of biological sequences, including cDNAs, proteins and RNA-seq reads, to the target genome in order to construct candidate transcript models. Careful assessment and filtering of these candidate transcripts ultimately leads to the final gene set, which is made available on the Ensembl website. Here, we describe the annotation process in detail. Database URL: http://www.ensembl.org/index.html. PMID:27337980

The Rice Annotation Project Database (RAP-DB, http://rapdb.dna.affrc.go.jp/) has been providing a comprehensive set of gene annotations for the genome sequence of rice, Oryza sativa (japonica group) cv. Nipponbare. Since the first release in 2005, RAP-DB has been updated several times along with the genome assembly updates. Here, we present our newest RAP-DB based on the latest genome assembly, Os-Nipponbare-Reference-IRGSP-1.0 (IRGSP-1.0), which was released in 2011. We detected 37,869 loci by mapping transcript and protein sequences of 150 monocot species. To provide plant researchers with highly reliable and up-to-date rice gene annotations, we have been incorporating literature-based manually curated data, and 1,626 loci currently incorporate literature-based annotation data, including commonly used gene names or gene symbols. Transcriptional activities are shown at the nucleotide level by mapping RNA-Seq reads derived from 27 samples. We also mapped the Illumina reads of a Japanese leading japonica cultivar, Koshihikari, and a Chinese indica cultivar, Guangluai-4, to the genome and show alignments together with the single nucleotide polymorphisms (SNPs) and gene functional annotations through a newly developed browser, Short-Read Assembly Browser (S-RAB). We have developed two satellite databases, Plant Gene Family Database (PGFD) and Integrative Database of Cereal Gene Phylogeny (IDCGP), which display gene family and homologous gene relationships among diverse plant species. RAP-DB and the satellite databases offer simple and user-friendly web interfaces, enabling plant and genome researchers to access the data easily and facilitating a broad range of plant research topics. PMID:23299411

We describe the organization of a nascent international effort - the "Functional Annotation of ANimal Genomes" project - whose aim is to produce comprehensive maps of functional elements in the genomes of domesticated animal species…

This U.S. Bureau of Mines publication is intended to provide mining industry representatives and regulatory authorities with a reference package dealing with hydrology and underground mine subsidence related studies. An annotated bibliography and a list of additional sources are given and represent references published prior to June 1993. Ninety-seven references were identified and 75 obtained for assessment. Annotating the references included evaluating the study methodology and summarizing the basic results. Table 1 compiles the references based on geographic location and will aid the user in identifying articles of particular interest. Appendix A is an annotated bibliography of the literature listed in Table 1, and Appendix B is a listing of additional sources on the subject.

A set of instructional materials aimed at the in-service education of teachers on the topic of student assessment was developed. The Student Assessment Project comprises seven modules in slide-tape format covering the topics of test design, item writing, analysis of norm-referenced and criterion-referenced tests, etc. (Author/MLW)

Background Large-scale sequencing projects have now become routine lab practice, and this has led to the development of a new generation of tools involving function prediction methods, bringing the latter back to the fore. The advent of Gene Ontology, with its structured vocabulary and paradigm, has provided computational biologists with an appropriate means for this task. Methodology We present here a novel method called ARGOT (Annotation Retrieval of Gene Ontology Terms) that is able to quickly process thousands of sequences for functional inference. The tool exploits for the first time an integrated approach which combines clustering of GO terms, based on their semantic similarities, with a weighting scheme which assesses retrieved hits sharing a certain number of biological features with the sequence to be annotated. These hits may be obtained by different methods, and in this work we have based ARGOT processing on BLAST results. Conclusions The extensive benchmark involved 10,000 protein sequences, the complete S. cerevisiae genome and a small subset of proteins for purposes of comparison with other available tools. The algorithm was shown to outperform existing methods and to be suitable for function prediction of single proteins due to its high degree of sensitivity, specificity and coverage. PMID:19247487

The annotated bibliography describes foreign language assessment instruments currently used in elementary and middle schools. The instruments are drawn from a wide variety of program models: Foreign Language in the Elementary School (FLES), middle school sequential instruction, and immersion (total, two-way, partial). The bibliography has six…

The compilation of this annotated bibliography of selected materials was undertaken to provide a thorough review of the literature concerning the cooperative and project methods of instruction in the field of distributive education to be used in "A Pilot Program Comparing Cooperative and Project Methods of Teaching Distributive Education" (ED 016…

Citrus is one of the most important and widely grown fruit crops, with global production ranking first among all fruit crops in the world. Sweet orange accounts for more than half of Citrus production in both fresh fruit and processed juice. We have sequenced the draft genome of a double-haploid sweet orange (C. sinensis cv. Valencia) and constructed the Citrus sinensis annotation project (CAP) to store and visualize the sequenced genomic and transcriptome data. CAP provides GBrowse-based organization of sweet orange genomic data, which integrates ab initio gene prediction with EST, RNA-seq and RNA paired-end tag (RNA-PET) evidence-based gene annotation. Furthermore, we provide a user-friendly web interface to show the predicted protein-protein interactions (PPIs) and metabolic pathways in sweet orange. CAP provides comprehensive information beneficial to researchers of sweet orange and other woody plants, and is freely available at http://citrus.hzau.edu.cn/. PMID:24489955

An annotated bibliography of methodology for the assessment of undiscovered oil and gas resources is presented as a useful reference for those engaged in resource assessment. The articles included deal only with quantitative assessment of undiscovered or inferred resources. The articles in this bibliography are classified largely according to the major assessment method applied in each situation. Major assessment methods include areal and volumetric yield methods, field size distributions, historical extrapolation, deposit modeling, organic geochemical mass balance methods, and direct expert assessment. Other categories include mathematical tools, reserve growth/confirmation, quantitative characterization of undiscovered resources, and general topics. For the purpose of future updates, we solicit contributions of articles that may have been missed in the preparation of this bibliography. © 1995 Oxford University Press.

Missouri State Dept. of Elementary and Secondary Education, Jefferson City.

This document provides supplemental assessment information to "Missouri's Framework for Curriculum Development in Health Education and Physical Education (Healthy, Active Living) K-12." The assessment annotations found in the third column of this document are intended to provide information for administrators, curriculum directors, and teachers…

The South Florida Ecosystem Assessment Project is an innovative, large-scale monitoring and assessment program designed to measure current and changing conditions of ecological resources in South Florida using an integrated holistic approach. Using the United States Environmenta...

An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human (or machine) observer. An image markup is the graphical symbols placed over the image to depict an annotation. In the majority of current, clinical and research imaging practice, markup is captured in proprietary formats and annotations are referenced only in free text radiology reports. This makes these annotations difficult to query, retrieve and compute upon, hampering their integration into other data mining and analysis efforts. This paper describes the National Cancer Institute's Cancer Biomedical Informatics Grid's (caBIG) Annotation and Image Markup (AIM) project, focusing on how to use AIM to query for annotations. The AIM project delivers an information model for image annotation and markup. The model uses controlled terminologies for important concepts. All of the classes and attributes of the model have been harmonized with the other models and common data elements in use at the National Cancer Institute. The project also delivers XML schemata necessary to instantiate AIMs in XML as well as a software application for translating AIM XML into DICOM S/R and HL7 CDA. Large collections of AIM annotations can be built and then queried as Grid or Web services. Using the tools of the AIM project, image annotations and their markup can be captured and stored in human and machine readable formats. This enables the inclusion of human image observation and inference as part of larger data mining and analysis activities. PMID:19964202
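The core idea the AIM abstract describes — annotations captured as structured XML with controlled-terminology codes, so they can be queried rather than buried in free text — can be illustrated with a small sketch. This is a simplified, hypothetical document structure; the element names, attributes, and codes below are invented for illustration and do not follow the actual AIM schema:

```python
import xml.etree.ElementTree as ET

# A simplified, hypothetical annotation collection. Real AIM XML conforms to
# the published AIM schemata and uses controlled terminologies for findings.
doc = """
<annotationCollection>
  <imageAnnotation id="a1">
    <finding codeMeaning="mass" codingScheme="RadLex" codeValue="RID3874"/>
    <markup type="circle" cx="120" cy="88" r="15"/>
  </imageAnnotation>
  <imageAnnotation id="a2">
    <finding codeMeaning="nodule" codingScheme="RadLex" codeValue="RID3875"/>
    <markup type="polyline" points="10,10 20,15 30,12"/>
  </imageAnnotation>
</annotationCollection>
"""

root = ET.fromstring(doc)

# Query: ids of annotations whose finding is coded as a "mass".
# Because the finding is a coded attribute rather than free text,
# this kind of retrieval is a simple structured lookup.
mass_ids = [a.get("id")
            for a in root.findall("imageAnnotation")
            if a.find("finding").get("codeMeaning") == "mass"]
print(mass_ids)  # -> ['a1']
```

The point of the sketch is the contrast the abstract draws: once markup and annotation live in a machine-readable model with coded concepts, queries like the one above become trivial, whereas the same question against free-text radiology reports requires text mining.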

This article discusses the process and findings of a study in which video annotation (VideoANT) and a learning management system (LMS) were implemented together in the microteaching lessons of fourth-year geography student teachers. The aim was to ensure adequate assessment of and feedback for each student, since these aspects are, in general, a…

We have evaluated the new Swine Protein-Annotated Oligonucleotide Microarray (http://www.pigoligoarray.org) by analyzing transcriptional profiles for longissimus dorsi muscle (LD), Bronchial lymph node (BLN) and Lung. Four LD samples were used to assess the stringency of hybridization conditions com...

As the scientific literature grows, leading to an increasing volume of published experimental data, so does the need to access and analyze this data using computational tools. The most commonly used method to convert published experimental data on gene function into controlled vocabulary annotations relies on a professional curator, employed by a model organism database or a more general resource such as UniProt, to read published articles and compose annotation statements based on the articles' contents. A more cost-effective and scalable approach capable of capturing gene function data across the whole range of biological research organisms in computable form is urgently needed. We have analyzed a set of ontology annotations generated through collaborations between the Arabidopsis Information Resource and several plant science journals. Analysis of the submissions entered using the online submission tool shows that most community annotations were well supported and the ontology terms chosen were at an appropriate level of specificity. Of the 503 individual annotations that were submitted, 97% were approved and community submissions captured 72% of all possible annotations. This new method for capturing experimental results in a computable form provides a cost-effective way to greatly increase the available body of annotations without sacrificing annotation quality. Database URL: www.arabidopsis.org. PMID:22859749

An ecological assessment in the Tensas River Basin, Louisiana, has been completed by the U.S. EPA in partnership with the Louisiana Department of Environmental Quality and other stakeholder groups. This assessment, conducted using landscape ecology and water quality methods, can...

This annotated bibliography and index presents nearly 2,000 references that are substantially unique to African or African American teaching and learning. Designed to support teacher education, the bibliography features references that were chosen if they were culturally relevant, recognized the African or African American experience, and drew…

This document provides a composite index of the first five sets of software annotations produced by Project SEED. The software has been indexed by title, subject area, and grade level, and it covers sets of annotations distributed in September 1986, April 1987, September 1987, November 1987, and February 1988. The date column in the index…

Aspergillus oryzae is widely used for the industrial production of enzymes. In A. oryzae metabolism, transporters appear to play crucial roles in controlling the flux of molecules for energy generation, nutrients delivery, and waste elimination in the cell. While the A. oryzae genome sequence is available, transporter annotation remains limited and thus the connectivity of metabolic networks is incomplete. In this study, we developed a metabolic annotation strategy to understand the relationship between the sequence, structure, and function for annotation of A. oryzae metabolic transporters. Sequence-based analysis with manual curation showed that 58 genes of 12,096 total genes in the A. oryzae genome encoded metabolic transporters. Under consensus integrative databases, 55 unambiguous metabolic transporter genes were distributed into channels and pores (7 genes), electrochemical potential-driven transporters (33 genes), and primary active transporters (15 genes). To reveal the transporter functional role, a combination of homology modeling and molecular dynamics simulation was implemented to assess the relationship between sequence to structure and structure to function. As in the energy metabolism of A. oryzae, the H+-ATPase encoded by the AO090005000842 gene was selected as a representative case study of multilevel linkage annotation. Our developed strategy can be used for enhancing metabolic network reconstruction. PMID:27274991

The well-established inaccuracy of purely computational methods for annotating genome sequences necessitates an interactive tool to allow biological experts to refine these approximations by viewing and independently evaluating the data supporting each annotation. Apollo was developed to meet this need, enabling curators to inspect genome annotations closely and edit them. FlyBase biologists successfully used Apollo to annotate the Drosophila melanogaster genome and it is increasingly being used as a starting point for the development of customized annotation editing tools for other genome projects. PMID:12537571

Fifty-five published projects, theses, and dissertations dealing with the Native American and written by Arizona State University students are listed in this annotated bibliography. Arranged alphabetically according to authors and topics, the publications cover the period from 1943 to 1974. Topics include: (1) attitudes/achievement, (2)…

The goal of the Western Airborne Contaminants Assessment Project (WACAP) is to assess the deposition of airborne contaminants in Western National Parks, providing regional and local information on exposure, accumulation, impacts, and probable sources. This project is being desig...

This bibliography, developed by Project RIMES (Reading Instructional Methods of Efficacy with Students) lists 80 software packages for teaching early reading and spelling to students at risk for reading and spelling failure. The software packages are presented alphabetically by title. Entries usually include a grade level indicator, a brief…

This case study examines the reasoning of a clinical supervisor as she assesses preservice teacher candidates with a state-mandated performance assessment instrument. The supervisor's evaluations were recorded using video annotation software, which allowed her to record her observations in real-time. The study reveals some of the inherent…

The purpose of this research activity was to develop a list for NASA of major U.S. government information systems contacts who are able to cooperate with NASA on technical interchange. The list contains the names of appropriate managers involved in major information system projects, U.S. government office officials, and their hierarchy up to the highest officials whose major responsibilities include government information systems development.

The eTUMOUR (eT) multi-centre project gathered in vivo and ex vivo magnetic resonance (MR) data, as well as transcriptomic and clinical information from brain tumour patients, with the purpose of improving the diagnostic and prognostic evaluation of future patients. In order to carry this out, among other work, a database--the eTDB--was developed. In addition to complex permission rules and software and management quality control (QC), it was necessary to develop anonymization, processing and data visualization tools for the data uploaded. It was also necessary to develop sophisticated curation strategies that involved on one hand, dedicated fields for QC-generated meta-data and specialized queries and global permissions for senior curators and on the other, to establish a set of metrics to quantify its contents. The indispensable dataset (ID), completeness and pairedness indices were set. The database contains 1317 cases created as a result of the eT project and 304 from a previous project, INTERPRET. The number of cases fulfilling the ID was 656. Completeness and pairedness were heterogeneous, depending on the data type involved. PMID:23180768

A central problem for 21st century science is annotating the human genome and making this annotation useful for the interpretation of personal genomes. My talk will focus on annotating the 99% of the genome that does not code for canonical genes, concentrating on intergenic features such as structural variants (SVs), pseudogenes (protein fossils), binding sites, and novel transcribed RNAs (ncRNAs). In particular, I will describe how we identify regulatory sites and variable blocks (SVs) based on processing next-generation sequencing experiments. I will further explain how we cluster together groups of sites to create larger annotations. Next, I will discuss a comprehensive pseudogene identification pipeline, which has enabled us to identify >10K pseudogenes in the genome and analyze their distribution with respect to age, protein family, and chromosomal location. Throughout, I will try to introduce some of the computational algorithms and approaches that are required for genome annotation. Much of this work has been carried out in the framework of the ENCODE, modENCODE, and 1000 genomes projects.

Abstract BACKGROUND: Progress in genome sequencing is proceeding at an exponential pace, and several new algal genomes are becoming available every year. One of the challenges facing the community is the association of protein sequences encoded in the genomes with biological function. While most genome assembly projects generate annotations for predicted protein sequences, they are usually limited and integrate functional terms from a limited number of databases. Another challenge is the use of annotations to interpret large lists of 'interesting' genes generated by genome-scale datasets. Previously, these gene lists had to be analyzed across several independent biological databases, often on a gene-by-gene basis. In contrast, several annotation databases, such as DAVID, integrate data from multiple functional databases and reveal underlying biological themes of large gene lists. While several such databases have been constructed for animals, none is currently available for the study of algae. Due to renewed interest in algae as potential sources of biofuels and the emergence of multiple algal genome sequences, a significant need has arisen for such a database to process the growing compendiums of algal genomic data. DESCRIPTION: The Algal Functional Annotation Tool is a web-based comprehensive analysis suite integrating annotation data from several pathway, ontology, and protein family databases. The current version provides annotation for the model alga Chlamydomonas reinhardtii, and in the future will include additional genomes. The site allows users to interpret large gene lists by identifying associated functional terms and their enrichment. Additionally, expression data for several experimental conditions were compiled and analyzed to provide an expression-based enrichment search. A tool to search for functionally-related genes based on gene expression across these conditions is also provided. Other features include dynamic visualization of genes on KEGG

Non-domestic buildings account for about one-sixth of the U.K.'s entire CO2 emissions and one-third of building-related ones [2]. Their proportion of energy consumption, particularly electricity, has also been growing [2]. New buildings are not necessarily better, with energy use often proving to be much higher than their designers anticipated [2]. Annual CO2 emissions of two and sometimes three times design expectations are far from unusual, leaving a massive credibility gap [2]. These and other global environmental and human-health concerns have motivated an increasing number of designers, developers and building users to pursue more environmentally sustainable designs and construction strategies [5]. However, these buildings can be difficult to evaluate, since they are large in scale, complex in materials and function, and temporally dynamic due to the limited service life of building components and changing user requirements [5]. All of these factors make environmental assessment of the buildings challenging. Previous Post Occupancy Review of Buildings and their Engineering (PROBE) building investigations have uncovered serious shortcomings in facilities management, or at least mismatches between a building's management needs and the ability of the occupiers to provide the right level of management [1]. Consequently, large differences between energy performance expectations and outcomes can occur virtually unnoticed, while designers continue to repeat flawed descriptions [2]. This investigation attempts to evaluate the building's operation and to help achieve demonstrable improvements in energy efficiency and occupant satisfaction. The scope of this study is to evaluate the actual environmental performance of a building notable for its advanced design. The Education Resource Centre at the Eden Project was selected to compare design expectations with post-occupancy performance. This report contains a small-scale survey of user satisfaction with the

The Town of Lakeview is proposing to construct and operate a geothermal direct use district heating system in Lakeview, Oregon. The proposed project would be in Lake County, Oregon, within the Lakeview Known Geothermal Resources Area (KGRA). The proposed project includes the following elements: Drilling, testing, and completion of a new production well and geothermal water injection well; construction and operation of a geothermal production fluid pipeline from the well pad to various Town buildings (i.e., local schools, hospital, and Lake County Industrial Park) and back to a geothermal water injection well. This EA describes the proposed project, the alternatives considered, and presents the environmental analysis pursuant to the National Environmental Policy Act. The project would not result in adverse effects to the environment with the implementation of environmental protection measures.

The availability of multi-media technologies in education has made the option of independent learning increasingly attractive. Whilst independent learning presents learners with a more flexible learning context, it also presents new challenges in assessment in that the onus is placed upon the learners themselves to monitor and evaluate their own…

Task 1 of the Hawaii Geothermal Project Interagency Agreement between the Fish and Wildlife Service and the Department of Energy-Oak Ridge National Laboratory (DOE) includes an annotated bibliography of published and unpublished documents that cover biological issues related to the lowland rain forest in Puna, adjacent areas, transmission corridors, and in the proposed Hawaii Geothermal Project (HGP). The 51 documents reviewed in this report cover the main body of biological information for these projects. The full table of contents and bibliography for each document is included along with two copies (as requested in the Interagency Agreement) of the biological sections of each document. The documents are reviewed in five main categories: (1) geothermal subzones (29 documents); (2) transmission cable routes (8 documents); (3) commercial satellite launching facility (Spaceport; 1 document); (4) manganese nodule processing facility (2 documents); (5) water resource development (1 document); and (6) ecosystem stability and introduced species (11 documents).

Genome sequencing continues to be a rapidly evolving technology, yet most downstream aspects of genome annotation pipelines remain relatively stable or are even being abandoned. To date, the perceived value of manual curation for genome annotations is not offset by the real cost and time associated with the process. In order to balance the large number of sequences generated, the annotation process is now performed almost exclusively in an automated fashion for most genome sequencing projects. One possible way to reduce errors inherent to automated computational annotations is to apply data from 'omics' measurements (i.e. transcriptional and proteomic) to the un-annotated genome with a proteogenomic-based approach. This approach does require additional experimental and bioinformatics methods to include omics technologies; however, the approach is readily automatable and can benefit from rapid developments occurring in those research domains as well. The annotation process can be improved by experimental validation of transcription and translation and aid in the discovery of annotation errors. Here the concept of annotation refinement has been extended to include a comparative assessment of genomes across closely related species, as is becoming common in sequencing efforts. Transcriptomic and proteomic data derived from three highly similar pathogenic Yersiniae (Y. pestis CO92, Y. pestis pestoides F, and Y. pseudotuberculosis PB1/+) was used to demonstrate a comprehensive comparative omic-based annotation methodology. Peptide and oligo measurements experimentally validated the expression of nearly 40% of each strain's predicted proteome and revealed the identification of 28 novel and 68 previously incorrect protein-coding sequences (e.g., observed frameshifts, extended start sites, and translated pseudogenes) within the three current Yersinia genome annotations. Gene loss is presumed to play a major role in Y. pestis acquiring its niche as a virulent pathogen, thus

The Conservation Effects Assessment Project (CEAP) is a unique effort to quantify the environmental benefits of conservation practices at watershed scales and nationally. Such a large-scale project cannot be accomplished without the cooperation and communication of a wide range of experts and stakeh...

Projects are extended pieces of work completed over a period of time. They provide contexts for the assessment of general skills, as well as the ability to apply subject-specific knowledge and skills. Some of the general skills that projects demonstrate are collecting and organizing information, solving problems, working in a group, and…

This paper reviews the literature on projective techniques of personality assessment and their use by school psychologists. Following a brief survey of the development of projective techniques, several of the most widely used techniques are briefly discussed, i.e., the Thematic Apperception Test (TAT), the Children's Apperception Test (CAT), the…

Background Accurate gene structure annotation is a fundamental but somewhat elusive goal of genome projects, as witnessed by the fact that (model) genomes typically undergo several cycles of re-annotation. In many cases, it is not only different versions of annotations that need to be compared but also different sources of annotation of the same genome, derived from distinct gene prediction workflows. Such comparisons are of interest to annotation providers, prediction software developers, and end-users, who all need to assess what is common and what is different among distinct annotation sources. We developed ParsEval, a software application for pairwise comparison of sets of gene structure annotations. ParsEval calculates several statistics that highlight the similarities and differences between the two sets of annotations provided. These statistics are presented in an aggregate summary report, with additional details provided as individual reports specific to non-overlapping, gene-model-centric genomic loci. Genome browser styled graphics embedded in these reports help visualize the genomic context of the annotations. Output from ParsEval is both easily read and parsed, enabling systematic identification of problematic gene models for subsequent focused analysis. Results ParsEval is capable of analyzing annotations for large eukaryotic genomes on typical desktop or laptop hardware. In comparison to existing methods, ParsEval exhibits a considerable performance improvement, both in terms of runtime and memory consumption. Reports from ParsEval can provide relevant biological insights into the gene structure annotations being compared. Conclusions Implemented in C, ParsEval provides the quickest and most feature-rich solution for genome annotation comparison to date. The source code is freely available (under an ISC license) at http://parseval.sourceforge.net/. PMID:22852583
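The kind of statistics ParsEval aggregates — and that EGASP and nGASP report, such as nucleotide-level sensitivity and specificity between a predicted and a reference annotation — can be sketched in a few lines. This is a simplified illustration of the underlying metric, not ParsEval's actual implementation; the interval representation and function names are assumptions:

```python
def covered(exons):
    """Set of nucleotide positions covered by (start, end) exon intervals, inclusive."""
    positions = set()
    for start, end in exons:
        positions.update(range(start, end + 1))
    return positions

def compare_annotations(reference_exons, predicted_exons):
    """Nucleotide-level sensitivity and specificity of a prediction vs. a reference.

    Sensitivity: fraction of reference coding positions recovered by the prediction.
    Specificity: fraction of predicted coding positions confirmed by the reference.
    """
    ref = covered(reference_exons)
    pred = covered(predicted_exons)
    tp = len(ref & pred)  # positions annotated as coding by both sets
    sensitivity = tp / len(ref) if ref else 0.0
    specificity = tp / len(pred) if pred else 0.0
    return sensitivity, specificity

# Toy example: a two-exon reference gene vs. a prediction with one shifted exon.
sn, sp = compare_annotations([(100, 200), (300, 400)], [(100, 200), (320, 420)])
print(round(sn, 3), round(sp, 3))  # -> 0.901 0.901
```

Real comparison tools extend this idea to exon-, transcript- and gene-level agreement, and (as ParsEval does) partition the genome into non-overlapping loci so that each report stays local and interpretable.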

A preliminary wind energy resource assessment of Mexico that produced wind resource maps for both utility-scale and rural applications was undertaken as part of the Mexico-U.S. Renewable Energy Cooperation Program. This activity has provided valuable information needed to facilitate the commercialization of small wind turbines and windfarms in Mexico and to lay the groundwork for subsequent wind resource activities. A surface meteorological data set of hourly data in digital form was utilized to prepare a more detailed and accurate wind resource assessment of Mexico than otherwise would have been possible. Software was developed to perform the first ever detailed analysis of the wind characteristics data for over 150 stations in Mexico. The hourly data set was augmented with information from weather balloons (upper-air data), ship wind data from coastal areas, and summarized wind data from sources in Mexico. The various data were carefully evaluated for their usefulness in preparing the wind resource assessment. The preliminary assessment has identified many areas of good-to-excellent wind resource potential and shows that the wind resource in Mexico is considerably greater than shown in previous surveys.

Technology assessments provide a status of the development maturity of specific technologies. Along with benefit analysis, the risks the project assumes can be quantified. Normally, due to budget constraints, the competing technologies are prioritized and decisions are made on which ones to fund. A detailed technology development plan is produced for the selected technologies to provide a roadmap to reach the desired maturity by the project's critical design review. Technology assessments can be conducted for both technology-only tasks and product development programs. This paper is primarily biased toward the product development programs. The paper discusses the Ares Project's approach to technology assessment. System benefit analysis, risk assessment, technology prioritization, and technology readiness assessment are addressed. A description of the technology readiness level tool being used is provided.

Biomedical annotation is a common and effective artifact for researchers to discuss, show opinions, and share discoveries. It is becoming increasingly popular in many online research communities and carries much useful information. Ranking biomedical annotations is a critical problem for data users seeking to retrieve information efficiently. As the annotator's knowledge about the annotated entity normally determines the quality of the annotations, we evaluate that knowledge, that is, the semantic relationship between them, in two ways. The first is extracting relational information from credible websites by mining association rules between an annotator and a biomedical entity. The second is frequent pattern mining from historical annotations, which reveals common features of biomedical entities that an annotator can annotate with high quality. We propose a weighted and concept-extended RDF model to represent an annotator, a biomedical entity, and their background attributes, and merge information from the two ways as the context of an annotator. Based on that, we present a method to rank annotations by evaluating their correctness according to users' votes and the semantic relevancy between the annotator and the annotated entity. The experimental results show that the approach is applicable and efficient even when the data set is large. PMID:24899918
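The final ranking step the abstract describes — blending vote-based correctness with the semantic relevance of the annotator to the annotated entity — might look like the following minimal sketch. The linear weighting scheme, field names, and normalization are assumptions for illustration, not the paper's actual model:

```python
def rank_annotations(annotations, vote_weight=0.5, relevance_weight=0.5):
    """Order annotations by a weighted blend of vote-based correctness and
    annotator-to-entity semantic relevance. Both scores are assumed to be
    pre-normalized to [0, 1]."""
    def score(a):
        return (vote_weight * a["vote_score"]
                + relevance_weight * a["annotator_relevance"])
    return sorted(annotations, key=score, reverse=True)

# Toy data: a highly-voted annotation from a weakly-related annotator (n1)
# vs. a moderately-voted one from a strongly-related annotator (n2).
annotations = [
    {"id": "n1", "vote_score": 0.9, "annotator_relevance": 0.2},
    {"id": "n2", "vote_score": 0.6, "annotator_relevance": 0.8},
    {"id": "n3", "vote_score": 0.4, "annotator_relevance": 0.5},
]
ranked = [a["id"] for a in rank_annotations(annotations)]
print(ranked)  # -> ['n2', 'n1', 'n3']
```

With equal weights, n2 outranks n1 despite fewer votes, which mirrors the abstract's claim that annotator knowledge, not popularity alone, should drive the ranking.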

In the summer and spring of 1980 Mercer County Community College undertook a large-scale employer needs assessment project, during which 1,140 Mercer County employers were contacted in order to: (1) assess employers' practices and preferences in the recruitment of personnel; (2) determine employer satisfaction with the College's ability to educate…

The Information Systems Assessment Report documents the results from assessing the Project Hanford Management Contract (PHMC) Hanford Data Integrator 2000 (HANDI 2000) system, Business Management System (BMS) and Work Management System phases (WMS), with respect to the System Engineering Capability Assessment Model (CAM). The assessment was performed in accordance with the expectations stated in the fiscal year (FY) 1999 Performance Agreement 7.1.1, item (2) which reads, ''Provide an assessment report on the selected Integrated Information System by July 31, 1999.'' This report assesses the BMS and WMS as implemented and planned for the River Protection Project (RPP). The systems implementation is being performed under the PHMC HANDI 2000 information system project. The project began in FY 1998 with the BMS, proceeded in FY 1999 with the Master Equipment List portion of the WMS, and will continue the WMS implementation as funding provides. This report constitutes an interim quality assessment providing information necessary for planning RPP's information systems activities. To avoid confusion, HANDI 2000 will be used when referring to the entire system, encompassing both the BMS and WMS. A graphical depiction of the system is shown in Figure 2-1 of this report.

The purpose of this project is to use Ruby on Rails to create a web application that replaces a spreadsheet used to track training courses and tasks. The goal is a fast, easy-to-use web application that lets users track their progress on training courses and update all of the training required of them. Training courses will be organized by group and by user, improving readability, and group leads and administrators will be able to see how everyone is progressing. Currently, updating and finding information in the spreadsheet is a long and tedious task. With a web application, finding and updating information, as well as adding new training courses and tasks, will be far easier. Accessing the data will also be simpler: users just go to a website and log in with NDC credentials rather than request the relevant spreadsheet from its holder. In addition to Ruby on Rails, I will use JavaScript, CSS, and jQuery to add functionality and ease of use. The application will include a number of features to help update and track training progress. For example, one feature will track the progress of a whole group of users to show how the group as a whole is progressing; another will assign tasks to either a user or a group of users. Together these will create a user-friendly and functional web application.

Conservation projects occur under many types of uncertainty. Where this uncertainty can affect achievement of a project’s objectives, there is risk. Understanding risks to project success should influence a range of strategic and tactical decisions in conservation, and yet, formal risk assessment rarely features in the guidance or practice of conservation planning. We describe how subjective risk analysis tools can be framed to facilitate the rapid identification and assessment of risks to conservation projects, and how this information should influence conservation planning. Our approach is illustrated with an assessment of risks to conservation success as part of a conservation plan for the work of The Nature Conservancy in northern Australia. Risks can be both internal and external to a project, and occur across environmental, social, economic and political systems. Based on the relative importance of a risk and the level of certainty in its assessment we propose a series of appropriate, project level responses including research, monitoring, and active amelioration. Explicit identification, prioritization, and where possible, management of risks are important elements of using conservation resources in an informed and accountable manner.

The Clean Coal Technology (CCT) Demonstration Program is a government and industry co-funded technology development effort to demonstrate a new generation of innovative coal utilization processes. One goal of the program is to furnish the energy marketplace with a variety of energy efficient, environmentally superior coal-based technologies. Demonstration projects seek to establish the commercial feasibility of the most promising coal technologies that have proceeded beyond the proof-of-concept stage. This report is a post-project assessment of the DOE CCT Demonstration Program, the Tidd PFBC Demonstration Project. A major objective of the CCT Program is to provide the technical data necessary for the private sector to proceed confidently with the commercial replication of the demonstrated technologies. An essential element of meeting this goal is the dissemination of results from the demonstration projects. This post-project assessment (PPA) report is an independent DOE appraisal of the successes that the completed project had in achieving its objectives and aiding in the commercialization of the demonstrated technology. The report also provides an assessment of the expected technical, environmental, and economic performance of the commercial version of the technology, as well as an analysis of the commercial market.

This project will assist Douglas County and other conservation partners by assessing the types and locations of wetland resources in the watershed. This study will involve site visits by ecologists, botanists, and other wetland experts. Study results will be mapped using GIS so...

This report documents the educational needs of Native Hawaiians across ecosystem levels. Identifying the unique educational needs of Native Hawaiians, and effective Native American and local programs that meet those needs, this project works within certain parameters: (1) part of a continuous needs assessment,…

Research and development organizations that push the innovation edge of technology frequently encounter challenges when attempting to identify an investment strategy and to accurately forecast the cost and schedule performance of selected projects. Fast moving and complex environments require managers to quickly analyze and diagnose the value of returns on investment versus allocated resources. Our Project Assessment Framework through Design (PAFTD) tool facilitates decision making for NASA senior leadership to enable more strategic and consistent technology development investment analysis, beginning at implementation and continuing through the project life cycle. The framework takes an integrated approach by leveraging design principles of usability, feasibility, and viability and aligns them with methods employed by NASA's Independent Program Assessment Office for project performance assessment. The need exists to periodically revisit the justification and prioritization of technology development investments as changes occur over project life cycles. The framework informs management rapidly and comprehensively about diagnosed internal and external root causes of project performance.

Project Spectrum is a pilot project to fuse assessment and the curriculum of preschool and daycare programs. The article reviews standard assessment methods, describes alternative notions of intelligence, and examines the implementation of Project Spectrum in detail. (JL)

The US Department of Energy's (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is responsible for developing the Civilian Radioactive Waste Management System (CRWMS) to accept spent nuclear fuel from commercial facilities. The objective of the Facility Interface Capability Assessment (FICA) project was to assess the capability of each commercial spent nuclear fuel (SNF) storage facility, at which SNF is stored, to handle various SNF shipping casks. The purpose of this report is to present and analyze the results of the facility assessments completed within the FICA project. During Phase 1, the data items required to complete the facility assessments were identified and the database for the project was created. During Phase 2, visits were made to 122 facilities on 76 sites to collect data and information, the database was updated, and assessments of the cask-handling capabilities at each facility were performed. Each assessment of cask-handling capability contains three parts: the current capability of the facility (planning base); the potential enhanced capability if revisions were made to the facility licensing and/or administrative controls; and the potential enhanced capability if limited physical modifications were made to the facility. The main conclusion derived from the planning base assessments is that the current facility capabilities will not allow handling of any of the FICA Casks at 49 of the 122 facilities evaluated. However, consideration of potential revisions and/or modifications showed that all but one of the 49 facilities could be adapted to handle at least one of the FICA Casks. For this to be possible, facility licensing, administrative controls, and/or physical aspects of the facility would need to be modified.

Gene Ontology (GO) has established itself as the undisputed standard for protein function annotation. Most annotations are inferred electronically, i.e. without individual curator supervision, but they are widely considered unreliable. At the same time, we crucially depend on those automated annotations, as most newly sequenced genomes are non-model organisms. Here, we introduce a methodology to systematically and quantitatively evaluate electronic annotations. By exploiting changes in successive releases of the UniProt Gene Ontology Annotation database, we assessed the quality of electronic annotations in terms of specificity, reliability, and coverage. Overall, we not only found that electronic annotations have significantly improved in recent years, but also that their reliability now rivals that of annotations inferred by curators when they use evidence other than experiments from primary literature. This work provides the means to identify the subset of electronic annotations that can be relied upon—an important outcome given that >98% of all annotations are inferred without direct curation. PMID:22693439

This annotated bibliography reviews selected literature focusing on the concept of staff differentiation. Included are 62 items (dated 1966-1970), along with a list of mailing addresses where copies of individual items can be obtained. Also a list of 31 staff differentiation projects receiving financial assistance from the U.S. Office of Education…

The Middle Urals is an important Russian industrial region. The key industries are also the most environmentally damaging: mining, metallurgical and chemical industries. There are some 600 large-sized and medium-sized enterprises located within the Middle Urals region. Their annual solid and gaseous chemical releases have led to exceeding some maximum permissible contaminant concentrations by factors of tens and hundreds. The environmental problems of the Middle Urals are of such magnitude, seriousness, and urgency that the limited available resources can be applied only to the problems of the highest priority in the most cost-effective way. By the combined efforts of scientists from Lawrence Livermore National Laboratory (USA), Institute of Industrial Ecology (Ekaterinburg, Russia) and Russian Federal Nuclear Center (Snezhinsk, Russia) the project on Environmental Priorities Assessment was initiated in 1993. Because the project will cut across a spectrum of Russian environmental, social, and political issues, it has been established as a genuine Russian effort led by Russian principals. Russian participants are the prime movers and decision-makers, and LLNL participants are advisors. A preliminary project has been completed to gather relevant environmental data and to develop a formal proposal for the full priorities assessment project for submittal to the International Science and Technology Center. The proposed priorities assessment methodology will be described in this paper. The specific objectives of this project are to develop and to implement a methodology to establish Russian priorities for future pollution prevention efforts in a limited geographic region of the Middle Urals (a part of Chelyabinsk and Sverdlovsk Oblasts). This methodology will be developed on two geographic levels: local (town scale) and regional (region scale). Detailed environmental analysis will be performed on a local scale and extrapolated to the regional scale.

Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary.
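The contrast between likelihood-based and parsimony-based gap filling can be illustrated with a small selection step. The data structures, the averaging rule, and the low default prior for reactions without genomic support are assumptions for this sketch, not the authors' implementation:

```python
def pick_gapfill_solution(candidates, likelihood, prior=0.01):
    """Among candidate reaction sets that each restore model feasibility,
    likelihood-based gap filling prefers the set best supported by genomic
    evidence, rather than simply the smallest set.

    candidates: list of lists of reaction ids
    likelihood: dict mapping reaction id -> likelihood in [0, 1]
    prior: low default for reactions with no genomic support
    """
    def support(reactions):
        return sum(likelihood.get(r, prior) for r in reactions) / len(reactions)
    return max(candidates, key=support)


def pick_parsimonious(candidates):
    """Parsimony baseline: the fewest added reactions wins."""
    return min(candidates, key=len)
```

With likelihoods attached, a two-reaction solution backed by strong homology evidence can beat a single poorly supported reaction, which is the "more genomically consistent solutions" behavior described above.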

Literature on the effects of general noise on human performance is reviewed in an attempt to identify (1) those characteristics of noise that have been found to affect human performance; (2) those characteristics of performance most likely to be affected by the presence of noise, and (3) those characteristics of the performance situation typically associated with noise effects. Based on the characteristics identified, a theoretical framework is proposed that will permit predictions of possible effects of time-varying aircraft-type noise on complex human performance. An annotated bibliography of 50 articles is included.

This catalog contains annotations for 170 bilingual vocational training materials. Most of the materials are written in English, but materials written in 13 source languages and directed toward speakers of 17 target languages are provided. Annotations are provided for the following different types of documents: administrative, assessment and…

The majority of the material cited in this annotated bibliography is in Spanish although bilingual and English materials are also included. Each annotation is presented both in English and in Spanish. The bibliography is part of a modular sequence for teaching reading to bilingual learners. The bibliography covers the following areas: (a) general…

The goal of the U.S. Department of Energy's (DOE) Clean Coal Technology (CCT) Program is to provide the energy marketplace with advanced, more efficient, and environmentally responsible coal utilization options by conducting demonstrations of new technologies. These demonstration projects are intended to establish the commercial feasibility of promising advanced coal technologies that have been developed to a level at which they are ready for demonstration testing under commercial conditions. This document serves as a DOE post-project assessment (PPA) of the Healy Clean Coal Project (HCCP), selected under Round III of the CCT Program, and described in a Report to Congress (U.S. Department of Energy, 1991). The desire to demonstrate an innovative power plant that integrates an advanced slagging combustor, a heat recovery system, and both high- and low-temperature emissions control processes prompted the Alaska Industrial Development and Export Authority (AIDEA) to submit a proposal for this project. In April 1991, AIDEA entered into a cooperative agreement with DOE to conduct this project. Other team members included Golden Valley Electric Association (GVEA), host and operator; Usibelli Coal Mine, Inc., coal supplier; TRW, Inc., Space & Technology Division, combustor technology provider; Stone & Webster Engineering Corp. (S&W), engineer; Babcock & Wilcox Company (which acquired the assets of Joy Environmental Technologies, Inc.), supplier of the spray dryer absorber technology; and Steigers Corporation, provider of environmental and permitting support. Foster Wheeler Energy Corporation supplied the boiler. GVEA provided oversight of the design and provided operators during demonstration testing. The project was sited adjacent to GVEA's Healy Unit No. 1 in Healy, Alaska. The objective of this CCT project was to demonstrate the ability of the TRW Clean Coal Combustion System to operate on a blend of run-of-mine (ROM) coal and waste coal, while meeting strict

In 2012, we began a project of nationwide Probabilistic Tsunami Hazard Assessment (PTHA) in Japan to support various measures (Fujiwara et al., 2013, JpGU; Hirata et al., 2014, AOGS). The most important strategy in the nationwide PTHA is the predominance of aleatory uncertainty in the assessment, with the use of epistemic uncertainty kept to a minimum, because the number of possible combinations of epistemic uncertainties diverges quickly as their number increases; we consider only the type of earthquake occurrence probability distribution as an epistemic uncertainty. Briefly, the nationwide PTHA proceeds as follows: (i) we consider all possible future earthquakes, including those already assessed by the Headquarters for Earthquake Research Promotion (HERP) of the Japanese Government; (ii) we construct a set of simplified earthquake fault models, called "Characterized Earthquake Fault Models (CEFMs)", for all of these earthquakes by following prescribed rules (Toyama et al., 2014, JpGU; Korenaga et al., 2014, JpGU); (iii) for all initial water surface distributions caused by the CEFMs, we calculate tsunamis by solving a nonlinear long-wave equation using FDM, including runup calculation, over a nesting grid system with a minimum grid size of 50 meters; and (iv) we integrate the information about the tsunamis calculated from the numerous CEFMs to obtain nationwide tsunami hazard assessments. One of the most popular representations of the integrated information is a tsunami hazard curve for coastal tsunami heights, incorporating uncertainties inherent in tsunami simulation and earthquake fault slip heterogeneity (Abe et al., 2014, JpGU). We will show a PTHA along the eastern coast of Honshu, Japan, based on approximately 1,800 tsunami sources located within the subduction zone along the Japan Trench, as a prototype of the nationwide PTHA. This study is supported by part of the research
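The integration step that produces a tsunami hazard curve can be sketched as follows: each source contributes an annual occurrence rate and a simulated maximum coastal height, and for every height threshold the rates of all exceeding sources are summed. The Poissonian independence assumption and all numbers are illustrative, not the project's actual treatment of uncertainty:

```python
import numpy as np

def tsunami_hazard_curve(max_heights, annual_rates, thresholds):
    """For each height threshold, sum the annual rates of all sources whose
    simulated coastal height meets or exceeds it, then convert the total
    rate to an annual exceedance probability assuming independent
    Poissonian occurrence of the sources."""
    h = np.asarray(max_heights, dtype=float)
    r = np.asarray(annual_rates, dtype=float)
    rate = np.array([r[h >= t].sum() for t in thresholds])
    prob = 1.0 - np.exp(-rate)  # P(at least one exceedance per year)
    return rate, prob
```

Plotting `prob` against `thresholds` gives the hazard curve for one coastal point; repeating this at every grid point yields a nationwide assessment.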

The Ecological Compliance Assessment Project (ECAP) began full operation on March 1, 1994. The project is designed around a baseline environmental data concept that includes intensive biological field surveys of key areas of the Hanford Site where the majority of Site activities occur. These surveys are conducted at biologically appropriate times of year to ensure that the data gathered are current and accurate. The data are entered into the ECAP database, which serves as a reference for the evaluation of review requests coming in to the project. This methodology provided the basis for over 90 percent of the review requests received. Field surveys conducted under ECAP are performed to document occurrence information for species of concern and to obtain habitat descriptions. There are over 200 species of concern on the Hanford Site, including plants, birds, mammals, reptiles, amphibians, fish, and invertebrates. In addition, Washington State has designated mature sagebrush-steppe habitat as a Priority Habitat meriting special protective measures. Of the projects reviewed, 17 resulted or will result in impacts to species or habitats of concern on the Hanford Site. The greatest impact has been on big sagebrush habitat. Most of the impact has been or will be within the 600 Area of the Site.

To meet BPA's contractual obligation to supply electrical power to its customers, BPA proposes to acquire power generated by the Klickitat Cogeneration Project. BPA has prepared an environmental assessment evaluating the proposed project. Based on the EA analysis, BPA's proposed action is not a major Federal action significantly affecting the quality of the human environment within the meaning of the National Environmental Policy Act of 1969 for the following reasons: (1) it will not have a significant impact on land use, upland vegetation, wetlands, water quality, geology, soils, public health and safety, visual quality, historical and cultural resources, recreation and socioeconomics, and (2) impacts to fisheries, wildlife resources, air quality, and noise will be temporary, minor, or sufficiently offset by mitigation. Therefore, the preparation of an environmental impact statement is not required and BPA is issuing this FONSI (Finding of No Significant Impact).

This second session at the Wind Energy and Birds/Bats workshop consisted of two presentations followed by a discussion/question and answer period. The focus of the presentations was on the practices and methodologies used in the wind energy industry for assessing risk to birds and bats at candidate project sites. Presenters offered examples of pre-development siting evaluation requirements set by certain states. Presentation one was titled ''Practices and Methodologies and Initial Screening Tools'' by Richard Curry of Curry and Kerlinger, LLC. Presentation two was titled ''State of the Industry in the Pacific Northwest'' by Andy Linehan, CH2MHILL.

The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travel beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR) and, for IV&V process validation, correlates IV&V findings to and from the selected IV&V tasking and capabilities. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.

Paper describes two assessment projects that provide examples of how preservice physical educators design, implement, and understand assessment, focusing on formative assessment. A volleyball project occurred in a middle school where students had physical education twice weekly. A gymnastics project occurred in an elementary school where teachers…

A goal of the Chromosome-centric Human Proteome Project is to identify all human protein species. With 3844 proteins annotated as “missing”, this is challenging. Moreover, proteolytic processing generates new protein species with characteristic neo-N termini that are frequently accompanied by altered half-lives, function, interactions, and location. Enucleated and largely void of internal membranes and organelles, erythrocytes are simple yet proteomically challenging cells due to the high hemoglobin content and wide dynamic range of protein concentrations that impedes protein identification. Using the N-terminomics procedure TAILS, we identified 1369 human erythrocyte natural and neo-N-termini and 1234 proteins. Multiple semitryptic N-terminal peptides exhibited improved mass spectrometric identification properties versus the intact tryptic peptide enabling identification of 281 novel erythrocyte proteins and six missing proteins identified for the first time in the human proteome. With an improved bioinformatics workflow, we developed a new classification system and the Terminus Cluster Score. Thereby we described a new stabilizing N-end rule for processed protein termini, which discriminates novel protein species from degradation remnants, and identified protein domain hot spots susceptible to cleavage. Strikingly, 68% of the N-termini were within genome-encoded protein sequences, revealing alternative translation initiation sites, pervasive endoproteolytic processing, and stabilization of protein fragments in vivo. The mass spectrometry proteomics data have been deposited to ProteomeXchange with the data set identifier . PMID:24555563

With the completion of the human genome sequence and genome sequence available for other vertebrate genomes, the task of manual annotation at the large genome scale has become a priority. Possibly even more important, is the requirement to curate and improve this annotation in the light of future data. For this to be possible, there is a need for tools to access and manage the annotation. Ensembl provides an excellent means for storing gene structures, genome features, and sequence, but it does not support the extra textual data necessary for manual annotation. We have extended Ensembl to create the Otter manual annotation system. This comprises a relational database schema for storing the manual annotation data, an application-programming interface (API) to access it, an extensible markup language (XML) format to allow transfer of the data, and a server to allow multiuser/multimachine access to the data. We have also written a data-adaptor plugin for the Apollo Browser/Editor to enable it to utilize an Otter server. The otter database is currently used by the Vertebrate Genome Annotation (VEGA) site (http://vega.sanger.ac.uk), which provides access to manually curated human chromosomes. Support is also being developed for using the AceDB annotation editor, FMap, via a perl wrapper called Lace. The Human and Vertebrate Annotation (HAVANA) group annotators at the Sanger center are using this to annotate human chromosomes 1 and 20. PMID:15123593

Various projections of the relation between future CO₂ concentrations and future emissions were undertaken as part of the scientific assessment for Working Group 1 of the Intergovernmental Panel on Climate Change. There were three types of calculation: (1) forward projections, calculating the atmospheric CO₂ concentrations resulting from specified emission scenarios, (2) inverse calculations determining the emission rates that would be required to achieve stabilization of CO₂ concentrations via specified pathways, and (3) impulse response function calculations required for determining Global Warming Potentials. The use of a standardized set of conditions allows an intercomparison of models. The ocean models used in the calculations presented here span a range of forms, from response function descriptions to general circulation models. The general issue for all levels of modelling is whether the model parameters can reasonably be regarded as being the same in the future as at present. Sensitivity studies explore other aspects of the uncertainties of such projections. This report documents the specifications, the models that were used and the results that were obtained. Some preliminary interpretations of the results are included.
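A forward projection of type (1) using a response-function ocean model can be sketched as a convolution: each year's emission pulse decays according to an impulse response r(t), and the remaining airborne carbon is summed. The response coefficients, pre-industrial baseline, and GtC-to-ppm conversion below are illustrative placeholders, not the values used in the assessment:

```python
import numpy as np

def project_co2(emissions_gtc, response, c0_ppm=278.0, ppm_per_gtc=0.471):
    """Forward projection C(t) = C0 + k * sum_{s<=t} E(s) * r(t - s),
    where r(dt) is the airborne fraction of a unit emission pulse
    remaining in the atmosphere dt years after release."""
    conv = np.convolve(emissions_gtc, response)[:len(emissions_gtc)]
    return c0_ppm + ppm_per_gtc * conv
```

The inverse calculation of type (2) amounts to deconvolving a prescribed concentration pathway through the same response function to recover the permissible emissions.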

Describes the key features of the Collegiate Learning Assessment (CLA) project, which assesses the "value added" of an institution. The project assesses the institutional contribution to student learning through a focus on general education skills and the assessment of student performance relative to other students and through a pretest-posttest…

The Fish and Wildlife Program of the Northwest Power Planning Council (NPPC) prescribes several approaches to achieve its goal of doubling the salmon and steelhead runs of the Columbia River. Among those approaches are habitat restoration, improvements in adult and juvenile passage at dams and artificial propagation. Supplementation will be a major part of the new hatchery programs. The purpose of the Regional Assessment of Supplementation Project (RASP) is to provide an overview of ongoing and planned supplementation activities, to construct a conceptual framework and model for evaluating the potential benefits and risks of supplementation and to develop a plan for better regional coordination of research and monitoring and evaluation of supplementation. RASP has completed its first year of work. Progress toward meeting the first year's objectives and recommendations for future tasks are contained in this report.

This paper describes the Knowledge Encapsulation Framework (KEF), a suite of tools to enable automated knowledge annotation for modeling and simulation projects. This framework can be used to capture evidence (e.g., facts extracted from journal articles and government reports), discover new evidence (from similar peer-reviewed material as well as social media), enable discussions surrounding domain-specific topics and provide automatically generated semantic annotations for improved corpus investigation. The current KEF implementation is presented within a wiki environment, providing a simple but powerful collaborative space for team members to review, annotate, discuss and align evidence with their modeling frameworks.

The US Department of Energy (DOE) has prepared an environmental assessment (EA) for the Rapid Reactivation Project at Sandia National Laboratories, New Mexico. The EA analyzes the potential effects of a proposal to increase production of neutron generators from the current capability of 600 units per year up to 2,000 units per year. The project would use existing buildings and infrastructure to the maximum extent possible to meet the additional production needs. The increased production levels would necessitate modifications and additions involving a total area of approximately 26,290 gross square feet at Sandia National Laboratories, New Mexico, Technical Area 1. Additional production equipment would be procured and installed. The no-action alternative would be to continue production activities at the current capability of 600 units per year. The EA analyzes effects on health, safety, and air quality, resulting from construction and operation and associated cumulative effects. A detailed description of the proposed action and its environmental consequences is presented in the EA.

With the existence of large publicly available plant gene expression data sets, many groups have undertaken data analyses to construct gene coexpression networks and functionally annotate genes. Often, a large compendium of unrelated or condition-independent expression data is used to construct gene networks. Condition-dependent expression experiments consisting of well-defined conditions/treatments have also been used to create coexpression networks to help examine particular biological processes. Gene networks derived from either condition-dependent or condition-independent data can be difficult to interpret if a large number of genes and connections are present. However, algorithms exist to identify modules of highly connected and biologically relevant genes within coexpression networks. In this study, we have used publicly available rice (Oryza sativa) gene expression data to create gene coexpression networks using both condition-dependent and condition-independent data and have identified gene modules within these networks using the Weighted Gene Coexpression Network Analysis method. We compared the number of genes assigned to modules and the biological interpretability of gene coexpression modules to assess the utility of condition-dependent and condition-independent gene coexpression networks. For the purpose of providing functional annotation to rice genes, we found gene modules identified by coexpression analysis of condition-dependent gene expression experiments to be more useful than gene modules identified by analysis of a condition-independent data set. We have incorporated our results into the MSU Rice Genome Annotation Project database as additional expression-based annotation for 13,537 genes, 2,980 of which lack a functional annotation description. These results provide two new types of functional annotation for our database. Genes in modules are now associated with groups of genes that constitute a collective functional annotation of those genes.
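As a rough illustration of the module-detection approach the abstract describes, the core WGCNA steps (soft-thresholded correlation followed by hierarchical clustering of the resulting dissimilarities) can be sketched as follows. The matrix sizes, the soft-thresholding power `beta`, and the average-linkage clustering are illustrative assumptions for a toy example, not the study's actual parameters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def coexpression_modules(expr, beta=6, n_modules=2):
    """Toy WGCNA-style module detection on a genes x samples matrix."""
    corr = np.corrcoef(expr)                  # gene-gene Pearson correlations
    adjacency = np.abs(corr) ** beta          # soft thresholding
    dissimilarity = np.clip(1.0 - adjacency, 0.0, None)
    np.fill_diagonal(dissimilarity, 0.0)
    # condense the symmetric matrix for average-linkage clustering
    condensed = dissimilarity[np.triu_indices_from(dissimilarity, k=1)]
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=n_modules, criterion="maxclust")

# Two synthetic co-regulated groups of three genes over eight samples
rng = np.random.default_rng(0)
profile_a, profile_b = rng.normal(size=8), rng.normal(size=8)
expr = np.vstack(
    [profile_a + 0.05 * rng.normal(size=8) for _ in range(3)]
    + [profile_b + 0.05 * rng.normal(size=8) for _ in range(3)]
)
modules = coexpression_modules(expr)
```

In this sketch the two synthetic gene groups land in separate modules because their within-group correlations, raised to the power `beta`, dominate the between-group ones.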

Large-scale sequencing of prokaryotic genomes demands the automation of certain annotation tasks currently manually performed in the production of the SWISS-PROT protein knowledgebase. The HAMAP project, or 'High-quality Automated and Manual Annotation of microbial Proteomes', aims to integrate manual and automatic annotation methods in order to enhance the speed of the curation process while preserving the quality of the database annotation. Automatic annotation is only applied to entries that belong to manually defined orthologous families and to entries with no identifiable similarities (ORFans). Many checks are enforced in order to prevent the propagation of wrong annotation and to spot problematic cases, which are channelled to manual curation. The results of this annotation are integrated in SWISS-PROT, and a website is provided at http://www.expasy.org/sprot/hamap/. PMID:12798039

The facilities needed to maintain and repair the Bonneville Power Administration's (BPA) electrical equipment in northwest Montana are currently in two locations: a maintenance headquarters at the Kalispell Substation, and a temporary leased facility south of Kalispell. The present situation is not efficient. There is not enough space to accommodate the equipment needed at each site, and coordination and communication between the two sites is difficult. Also, maintaining two sites means duplication of equipment and facilities. BPA needs a single, centralized facility that would efficiently accommodate all the area's maintenance activities and equipment. BPA proposes to build a maintenance headquarters facility consisting of 2 to 4 single-story buildings totaling about 35,000 square feet (office spaces and workshop areas); an open-ended vehicle storage building (carport style); a fenced-in storage yard; a storage building for flammables, herbicides, and hazardous wastes; and a parking lot. The facility would require developing about 6 to 10 acres of land. Two sites are being considered for the proposed project (see the attached map for locations). This report is the environmental assessment of the two options.

The South Florida Ecosystem Assessment Project is an innovative, large-scale monitoring and assessment program designed to measure current and changing conditions of ecological resources in South Florida using an integrated holistic approach. Using the United States Environmenta...

Missouri State Dept. of Elementary and Secondary Education, Jefferson City. Div. of Instruction.

This document includes the left-hand column ("What All Students Should Know") and the center column ("What All Students Should Be Able To Do") from "Missouri's Framework for Curriculum Development in Communication Arts K-12." Next to these two columns has been added a column which includes assessment notes for those grade levels which will be…

The purpose of this quantitative study was to assess the relationship between ethical project management and information technology (IT) project success. The success of IT projects is important for organizational success, but the success rate of IT projects is historically low, costing billions of dollars annually. Using four key ethical variables…

Protein functional annotation consists of associating proteins with textual descriptors elucidating their biological roles. The bulk of annotation is done via automated procedures that ultimately rely on annotation transfer. Despite a large number of existing protein annotation procedures, the ever-growing protein space is never completely annotated. One of the facets of annotation incompleteness derives from annotation uncertainty. Often, when protein function cannot be predicted with enough specificity, it is instead conservatively annotated with more generic terms. In protein families or functionally related (or even dissimilar) sets, this makes it more difficult to use annotations to compare the extent of functional relatedness among all family or set members. However, we postulate that identifying sub-sets of functionally coherent proteins annotated at a very specific level can help the annotation extension of other incompletely annotated proteins within the same family or functionally related set. As an example, we analyse the status of annotation of a set of CAZy families belonging to the Polysaccharide Lyase class. We show that through the use of visualization methods and semantic-similarity-based metrics it is possible to identify families, and respective annotation terms within them, that are suitable for possible annotation extension. Based on our analysis, we then propose a semi-automatic methodology leading to the extension of single annotation terms within these partially annotated protein sets or families. PMID:24130572
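To illustrate one simple form of the semantic-similarity metrics this abstract mentions, annotation terms in an ontology can be compared by the overlap of their ancestor sets (a Jaccard-style measure). The mini-ontology below is a made-up fragment for demonstration, not the actual CAZy or GO term hierarchy used in the paper:

```python
# Hypothetical mini-ontology: each term maps to its direct parents.
PARENTS = {
    "lyase activity": [],
    "carbon-oxygen lyase activity": ["lyase activity"],
    "pectate lyase activity": ["carbon-oxygen lyase activity"],
    "pectin lyase activity": ["carbon-oxygen lyase activity"],
}

def ancestors(term):
    """All terms on paths from `term` to the root, including `term` itself."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in PARENTS[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def jaccard_similarity(a, b):
    """Shared-ancestor (Jaccard) similarity between two ontology terms."""
    sa, sb = ancestors(a), ancestors(b)
    return len(sa & sb) / len(sa | sb)
```

Sibling terms such as the two specific lyase activities share most of their ancestry and thus score higher than a specific term compared with the generic root, which is the intuition behind flagging generically annotated proteins for extension.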

The document presents the final report of the Assessment and Improvement of Related Services (AIRS) Project, an effort to assess the impact and effectiveness of special education related services in Hawaii. Each of the four project objectives focused on accomplishment of one of the evaluation types specified in the Context-Input-Process-Product…

The "Multiple Intelligences, Curriculum and Assessment Project" at University College Cork was a collaborative project carried out between 1995 and 1999. The key research question focused on whether Howard Gardner's theory of Multiple Intelligences could be applied to, and enhance, aspects of curriculum and assessment at primary and second level…

This annual report summarizes the activities and accomplishments of the Solar Radiation Resource Assessment Project during fiscal year 1992 (1 October 1991 to 30 September 1992). Managed by the Analytic Studies Division of the National Renewable Energy Laboratory, this project is the major activity of the US Department of Energy's Resource Assessment Program.

Technological Omics breakthroughs, including next generation sequencing, bring avalanches of data which need to undergo effective data management to ensure integrity, security, and maximal knowledge-gleaning. Data management system requirements include flexible input formats, diverse data entry mechanisms and views, user friendliness, attention to standards, hardware and software platform definition, as well as robustness. Relevant solutions elaborated by the scientific community include Laboratory Information Management Systems (LIMS) and standardization protocols facilitating data sharing and managing. In project planning, special consideration has to be made when choosing relevant Omics annotation sources, since many of them overlap and require sophisticated integration heuristics. The data modeling step defines and categorizes the data into objects (e.g., genes, articles, disorders) and creates an application flow. A data storage/warehouse mechanism must be selected, such as file-based systems and relational databases, the latter typically used for larger projects. Omics project life cycle considerations must include the definition and deployment of new versions, incorporating either full or partial updates. Finally, quality assurance (QA) procedures must validate data and feature integrity, as well as system performance expectations. We illustrate these data management principles with examples from the life cycle of the GeneCards Omics project (http://www.genecards.org), a comprehensive, widely used compendium of annotative information about human genes. For example, the GeneCards infrastructure has recently been changed from text files to a relational database, enabling better organization and views of the growing data. Omics data handling benefits from the wealth of Web-based information, the vast amount of public domain software, increasingly affordable hardware, and effective use of data management and annotation principles as outlined in this chapter.
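As a minimal sketch of the text-files-to-relational-database migration described above, the snippet below models genes and their annotations as two related tables. The table and column names are illustrative assumptions, not the actual GeneCards schema:

```python
import sqlite3

# In-memory stand-in for a relational annotation store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gene (
        symbol TEXT PRIMARY KEY,
        description TEXT
    );
    CREATE TABLE annotation (
        gene_symbol TEXT REFERENCES gene(symbol),
        source TEXT,          -- which annotation category the fact came from
        term TEXT,
        version TEXT          -- supports full or partial update cycles
    );
""")
conn.execute("INSERT INTO gene VALUES ('TP53', 'tumor protein p53')")
conn.executemany(
    "INSERT INTO annotation VALUES (?, ?, ?, ?)",
    [("TP53", "disorders", "Li-Fraumeni syndrome", "v1"),
     ("TP53", "articles", "PMID:1905840", "v1")],
)
rows = conn.execute(
    "SELECT term FROM annotation WHERE gene_symbol = 'TP53' ORDER BY term"
).fetchall()
```

Keeping a `version` column on each annotation row is one simple way to support the full-or-partial update cycles the chapter discusses.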

Students in the Physiotherapy Programme carried out a group project in their final year of studies. The objectives of the project were that the students learn and appreciate the process and activities involved in research, acquire deeper understanding of a topic in their professional interest, learn to work as a team, manage their own time,…

The authors discuss an environmental impact assessment (EIA) of the Dulang Oilfield Development Project, conducted to determine whether the project could proceed in a safe and environmentally acceptable manner. This is the first EIA for an offshore oilfield in Malaysian waters, and was conducted in anticipation of the Environmental Quality (Prescribed Activities) (Environmental Impact Assessment) Order 1987, which requires an EIA to be conducted for major oil and gas field development projects.

As part of the USGCRP's First National Assessment effort, EPA's Global Change Research Program sponsored the first Mid-Atlantic Regional Assessment. A multi-disciplinary team of 14 Pennsylvania State University (Penn State) faculty members led this regional assessment effort.

The 1993 edition of the Directory provides detailed citations and abstracts for 357 major natural resource and environmental studies (257 of which were added since the 1990 edition) covering 129 developing countries and 12 regional groupings. Most of the studies (93%) were prepared between 1987 and 1992. Included are 66 national reports prepared for the U.N. Conference on Environment and Development, held in Rio de Janeiro in June 1992. Thirty-three countries, including Algeria, Bhutan, Cuba, Republic of Korea, Laos, Namibia, South Africa, Suriname, Taiwan, and Venezuela, are new to this edition of the directory, which focuses on reports that link quantitative assessment of a country's natural resources to economic development and the maintenance of ecosystems, with particular emphasis on natural resource management strategies and action plans.

The proposed Happy Valley project consists of construction of a new BPA customer service 69-kV substation south of Sequim in Clallam County, Washington. A tie line, to be constructed by the customer as part of this project, will link the new BPA facility to the existing customer's transmission system in the area. This project responds to rapid load growth in the Olympic Peninsula, and will strengthen the existing BPA system and interconnected utility systems. It will reduce transmission losses presently incurred, especially on the BPA system supplying power to the Olympic Peninsula. This report describes the potential environmental impact of the proposed actions. 2 figs., 1 tab.

A comprehensive annotated compilation of books, journals, periodicals, and reports on energy and energy-related topics, containing approximately 10,000 technical and nontechnical references from bibliographic and other sources dated January 1975 through May 1977.

The purpose of this collaborative project between NREL and industry is: (1) provide high quality solar measurements in support of deploying Concentrating Solar Thermal projects; and (2) provide NREL with research-quality data sets for refining solar models and developing solar forecasting capabilities. The benefits of this project are: (1) lends NREL credibility to data sets used for economic analyses and commercial justification; (2) helps minimize costly mistakes in estimating capacity and economic return on investment; (3) helps maximize the development of projects for which adequate solar resources exist; (4) provides data to NREL for research to improve/validate models and explore RA innovations; and (5) helps maintain collaborative channels between NREL and industry.

This paper discusses the impact of a specially developed assessment and feedback system implemented within a second year industrial design module at Coventry University, UK. The "Assessment Buddy" system was developed in response to the need for a successful assessment and feedback method that could cope with the complexities of a creative…

Functional analysis using the Gene Ontology (GO) is crucial for array analysis, but it is often difficult for researchers to assess the amount and quality of GO annotations associated with different sets of gene products. In many cases the source of the GO annotations and the date the GO annotations were last updated is not apparent, further complicating a researcher's ability to assess the quality of the GO data provided. Moreover, GO biocurators need to ensure that the GO quality is maintained and optimal for the functional processes that are most relevant for their research community. We report the GO Annotation Quality (GAQ) score, a quantitative measure of GO quality that includes breadth of GO annotation, the level of detail of annotation and the type of evidence used to make the annotation. As a case study, we apply the GAQ scoring method to a set of diverse eukaryotes and demonstrate how the GAQ score can be used to track changes in GO annotations over time and to assess the quality of GO annotations available for specific biological processes. The GAQ score also allows researchers to quantitatively assess the functional data available for their experimental systems (arrays or databases). PMID:18187504
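To make the three ingredients of such a score concrete (breadth, term detail, and evidence type), a toy scoring function might weight each annotation by its GO-term depth and evidence code and sum over a gene's annotations. The evidence weights and depths below are placeholders for illustration; the actual GAQ weighting scheme is defined in the cited paper:

```python
# Illustrative evidence-code weights (experimental codes outrank
# electronic inference); NOT the paper's actual weights.
EVIDENCE_WEIGHT = {"EXP": 5, "IDA": 5, "ISS": 3, "IEA": 1}

def gaq_like_score(annotations):
    """Sum of (GO-term depth x evidence weight) over a gene's annotations.

    annotations: list of (term_depth, evidence_code) pairs, where
    term_depth is the term's distance from the ontology root, so deeper
    (more detailed) terms contribute more.
    """
    return sum(depth * EVIDENCE_WEIGHT[code] for depth, code in annotations)

def mean_score(genes):
    """Average score over a gene set (e.g. an array's probe list)."""
    return sum(gaq_like_score(a) for a in genes.values()) / len(genes)

genes = {
    "geneA": [(7, "IDA"), (5, "ISS")],   # deep, well-supported annotations
    "geneB": [(2, "IEA")],               # shallow, electronic-only
}
```

Tracking this average over successive annotation releases gives the kind of longitudinal quality signal the abstract describes.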

This annotated bibliography lists publications and World Wide Web sites dealing with health communication and literacy. The 51 publications, which were all published between 1982 and 1998, contain information about and/or for use in the following areas: assessment, assessment tools, elderly adults, empowerment, maternal and child health, patient…

This chapter provides an overview of the use of professional standards of practice in assessment and of the Council for the Advancement of Standards in Higher Education (CAS). It outlines a model for conducting program self-studies and discusses the importance of implementing change based on assessment results.

As part of the USGCRP's First National Assessment effort, EPA is sponsoring the Gulf Coast Regional Assessment. Southern University and A&M College and its collaborators are analyzing and evaluating the potential consequences of climate variability and change for the region in th...

Described are national methods of assessing and monitoring the achievement in science of students of 11, 13, and 16 years old in England and Wales. The tasks of the Assessment of Performance Unit (APU), a unit within the Department of Education and Science, are also described. (HM)

Annotations delineating regions of interest can provide valuable information for training medical image classification and segmentation methods. However the process of obtaining annotations is tedious and time-consuming, especially for high-resolution volumetric images. In this paper we present a novel learning framework to reduce the requirement of manual annotations while achieving competitive classification performance. The approach is evaluated on a dataset with 59 3D optical projection tomography images of colorectal polyps. The results show that the proposed method can robustly infer patterns from partially annotated images with low computational cost. PMID:24505790

Background A protein annotation database, such as the Universal Protein Resource knowledge base (UniProtKb), is a valuable resource for the validation and interpretation of predicted 3D structure patterns in proteins. Existing studies have focussed on point mutation extraction methods from biomedical literature which can be used to support the time consuming work of manual database curation. However, these methods were limited to point mutation extraction and do not extract features for the annotation of proteins at the residue level. Results This work introduces a system that identifies protein residues in MEDLINE abstracts and annotates them with features extracted from the context written in the surrounding text. MEDLINE abstract texts have been processed to identify protein mentions in combination with taxonomic species and protein residues (F1-measure 0.52). The identified protein-species-residue triplets have been validated and benchmarked against reference data resources (UniProtKb, average F1-measure of 0.54). Then, contextual features were extracted through shallow and deep parsing and the features have been classified into predefined categories (F1-measure ranges from 0.15 to 0.67). Furthermore, the feature sets have been aligned with annotation types in UniProtKb to assess the relevance of the annotations for ongoing curation projects. Altogether, the annotations have been assessed automatically and manually against reference data resources. Conclusion This work proposes a solution for the automatic extraction of functional annotation for protein residues from biomedical articles. The presented approach is an extension to other existing systems in that a wider range of residue entities are considered and that features of residues are extracted as annotations. PMID:19758468
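The benchmarking figures quoted above are F1-measures. As a brief illustration of the metric (the true/false-positive counts here are invented for the example, not taken from the paper):

```python
def f1_measure(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, the accuracy metric
    used to benchmark extraction against reference resources."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented counts yielding an F1 near the 0.52 reported for
# protein-mention identification.
score = f1_measure(52, 40, 56)
```

The harmonic mean penalizes imbalance, so a system cannot reach a high F1 by maximizing recall at the expense of precision or vice versa.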

Electronic annotation of scientific data is very similar to annotation of documents. Both types of annotation amplify the original object, add related knowledge to it, and dispute or support assertions in it. In each case, annotation is a framework for discourse about the original object, and, in each case, an annotation needs to clearly identify its scope and its own terminology. However, electronic annotation of data differs from annotation of documents: the content of the annotations, including expectations and supporting evidence, is more often shared among members of networks. Any consequent actions taken by the holders of the annotated data could be shared as well. But even those current annotation systems that admit data as their subject often make it difficult or impossible to annotate at fine-enough granularity to use the results in this way for data quality control. We address these kinds of issues by offering simple extensions to an existing annotation ontology and describe how the results support an interest-based distribution of annotations. We are using the result to design and deploy a platform that supports annotation services overlaid on networks of distributed data, with particular application to data quality control. Our initial instance supports a set of natural science collection metadata services. An important application is the support for data quality control and provision of missing data. A previous proof of concept demonstrated such use based on data annotations modeled with XML-Schema. PMID:24223697

The aim of this study is to test whether projection bias exists in consumers' purchasing decisions for food products. To achieve our aim, we used a non-hypothetical experiment (i.e., experimental auction), where hungry and non-hungry participants were incentivized to reveal their willingness to pay (WTP). The results confirm the existence of projection bias when consumers made their decisions on food products. In particular, projection bias existed because currently hungry participants were willing to pay a higher price premium for cheeses than satiated ones, both in hungry and satiated future states. Moreover, participants overvalued the food product more when they were delivered in the future hungry condition than in the satiated one. Our study provides clear, quantitative and meaningful evidence of projection bias because our findings are based on economic valuation of food preferences. Indeed, the strength of this study is that findings are expressed in terms of willingness to pay which is an interpretable amount of money. PMID:26828930

Different learning methods such as project-based learning, spiral learning and peer assessment have been implemented in science disciplines with different outcomes. This paper presents a proposal for a project management course in the context of a computer science degree. Our proposal combines three well-known methods: project-based learning, spiral learning and peer assessment. Namely, the course is articulated during a semester through the structured (progressive and incremental) development of a sequence of four projects, whose duration, scope and difficulty of management increase as the student gains theoretical and instrumental knowledge related to planning, monitoring and controlling projects. Moreover, the proposal is complemented using peer assessment. The proposal has already been implemented and validated for the last 3 years in two different universities. In the first year, project-based learning and spiral learning methods were combined. Such a combination was also employed in the other 2 years; but additionally, students had the opportunity to assess projects developed by university partners and by students of the other university. A total of 154 students participated in the study. We observe a gain in the quality of the successive projects derived from the spiral project-based learning. Moreover, this gain is significantly bigger when peer assessment is introduced. In addition, high-performance students take advantage of peer assessment from the first moment, whereas the improvement in poor-performance students is delayed.

The Brazil/US Aspen Global Forum on Climate Change Policies and Programs has facilitated a dialogue between key Brazil and US public and private sector leaders on the subject of the Clean Development Mechanism (CDM). With support from the US government, a cooperative effort between Lawrence Berkeley National Laboratory and the University of Sao Paulo conducted an assessment of a number of projects put forth by Brazilian sponsors. Initially, we gathered information and conducted a screening assessment for ten projects in the energy sector and six projects in the forestry sector. Some of the projects appeared to offer greater potential to be attractive for CDM, or had better information available. We then conducted a more detailed assessment of 12 of these projects, and two other projects that were submitted after the initial screening. An important goal was to assess the potential impact of Certified Emission Reductions (CERs) on the financial performance of projects. With the exception of the two forestry-based fuel displacement projects, the impact of CERs on the internal rate of return (IRR) is fairly small. This is true for both the projects that displace grid electricity and those that displace local (diesel-based) electricity production. The relative effect of CERs is greater for projects whose IRR without CERs is low. CERs have a substantial effect on the IRR of the two short-rotation forestry energy substitution projects. One reason is that the biofuel displaces coke and oil, both of which are carbon-intensive. Another factor is that the product of these projects (charcoal and woodfuel, respectively) is relatively low value, so the revenue from carbon credits has a strong relative impact. CERs also have a substantial effect on the NPV of the carbon sequestration projects. Financial and other barriers pose a challenge for implementation of most of the projects. In most cases, the sponsor lacks sufficient capital, and loans are available only at high interest

The Columbia Power Cooperative Association (CPCA), Monument, Oregon, proposes to upgrade a 69-kV transmission line in Wasco and Wheeler Counties, Oregon, between the Antelope Substation and the Bonneville Power Administration's (BPA) Fossil Substation. The project involves rebuilding and reconductoring 23.2 miles of transmission line, including modifying it for future use at 115 kV. Related project activities will include setting new wood pole structures, removing and disposing of old structures, conductors, and insulators, and stringing new conductor, all within the existing right-of-way. No new access roads will be required. A Borrower's Environmental Report was prepared for the 1992--1993 Work Plan for Columbia Power Cooperative Association in March 1991. This report investigated cultural resources, threatened or endangered species, wetlands, and floodplains, and other environmental issues, and included correspondence with appropriate Federal, state, and local agencies. The report was submitted to the Rural Electrification Administration for their use in preparing their environmental documentation for the project.

The purpose of this annotated bibliography is to list books, articles, and bulletins (written from 1900 to 1968) related to small towns in the United States. The work contributes to the project "Population Changes in Small Towns," sponsored by the Division of Social Sciences of the National Science Foundation and by the University of Wisconsin…

The document is a summarized final report of the Multi-County Assessment of Adult Needs Project (MAP) which took place in central Texas (Bosque, Falls, Hill, and McLennan Counties). It summarizes the major activities and accomplishments of the project and contains all materials except Attachments 1 and 2, the reports on Phase I (Survey of Adult…

This article describes a project which required students to write assessment items for a personality inventory. The 104 items generated were administered to 126 subjects. Results showed the items were reasonably reliable and valid. The pedagogical value of the project is discussed. (Author/JDH)

Well-designed school health education should provide students with the knowledge and skills to prevent the health risk behaviors most responsible for the major causes of morbidity and mortality. This paper reports the methodology and findings of a West Virginia statewide health education assessment initiative and describes how the findings are…

The Algal Functional Annotation Tool is a web-based comprehensive analysis suite integrating annotation data from several pathway, ontology, and protein family databases. The current version provides annotation for the model alga Chlamydomonas reinhardtii, and in the future will include additional genomes. The site allows users to interpret large gene lists by identifying associated functional terms, and their enrichment. Additionally, expression data for several experimental conditions were compiled and analyzed to provide an expression-based enrichment search. A tool to search for functionally-related genes based on gene expression across these conditions is also provided. Other features include dynamic visualization of genes on KEGG pathway maps and batch gene identifier conversion.
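The abstract does not specify how the enrichment search is computed; a common approach for functional-term enrichment, sketched below under that assumption (the function name and inputs are illustrative, not the tool's API), is a one-sided hypergeometric test for over-representation of a term in a gene list:

```python
from math import comb

def enrichment_p(genes_in_list, list_size, genes_with_term, genome_size):
    """One-sided hypergeometric p-value: probability of seeing at least
    `genes_in_list` genes carrying a functional term in a list of
    `list_size` genes drawn from a genome of `genome_size` genes, of
    which `genes_with_term` carry the term."""
    p = 0.0
    upper = min(list_size, genes_with_term)
    for k in range(genes_in_list, upper + 1):
        p += (comb(genes_with_term, k)
              * comb(genome_size - genes_with_term, list_size - k)
              / comb(genome_size, list_size))
    return p

# e.g. 8 of 20 listed genes carry a term annotated to 50 of 1,000 genes
p = enrichment_p(8, 20, 50, 1000)
```

A small p-value flags the term as enriched; in practice the p-values would also be corrected for testing many terms at once.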

Project M is a Johnson Space Center mission to send an autonomous humanoid robot (also known as Robonaut 2) to the moon in 1,000 days. The robot will travel in a lander fueled by liquid oxygen and liquid methane, land on the moon while avoiding hazardous obstacles, and perform tasks such as maintenance, construction, and simple student experiments. The mission is also being used as inspiration for new advancements in technology. I consider three of the design assumptions that contribute to determining mission feasibility: maturity of robotic technology, launch vehicle determination, and the LOX/methane-fueled spacecraft.

Discusses the five steps involved in conducting risk assessment for fieldwork using two examples of typical student projects: (1) identify the hazards; (2) identify who might be harmed; (3) evaluate the risks; (4) record the findings; and (5) review the assessment periodically. Addresses expeditions and work overseas. (CMK)

Inquiry provides both the impetus and experience that helps students acquire problem solving and lifelong learning skills. Teachers on the Strategies for Assessment of Inquiry Learning in Science Project (SAILS) strengthened their inquiry pedagogy, through focusing on seeking assessment evidence for formative action. This paper reports on both the…

This paper describes the GenScope Assessment Project, a project that is exploring ways of using multimedia computers to teach complex science content, refining sociocultural views of assessment and motivation, and considering different ways of reconciling the differences between these newer views and prior behavioral and cognitive views. The…

Morgantown Energy Technology Center proposes to conduct fundamental research on fluidization technology by designing, constructing, and operating a 2-foot diameter, 50-foot high, pressurized fluidized-bed unit. The anticipated result of the proposed project would be a better understanding of fluidization phenomena under pressurized and high-velocity conditions. This improved understanding would provide a sound basis for design and scale-up of pressurized circulating fluidized-bed combustion (PCFBC) processes for fossil energy applications. Based on the analysis in the EA, DOE has determined that the proposed action is not a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an Environmental Impact Statement is not required and the Department is issuing this FONSI.

DIAGNOSER is an Internet-based tool for classroom instruction. It delivers continuous formative assessment and feedback to high school physics students and their teachers about the correct and incorrect concepts and ideas the students may hold regarding physical situations. That is, it diagnoses misconceptions that underlie wrong answers of students, such as a confusion of velocity with acceleration. We use data about patterns of student responses, particularly consistency of errors from question to question, to improve the system's understanding of student concepts. PMID:15354688
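DIAGNOSER's internals are not described in this abstract; one simple way to operationalize "consistency of errors from question to question" (the function and facet codes below are illustrative, not the tool's actual design) is to diagnose a misconception only when the same wrong-answer facet recurs often enough across a student's responses:

```python
from collections import Counter

def dominant_misconception(facets, min_consistency=0.6):
    """facets: facet codes triggered by a student's wrong answers,
    one per question (e.g. 'velocity-as-acceleration'). Returns the
    facet the student applies consistently enough to diagnose with
    some confidence, else None (errors look like slips, not a stable
    misconception)."""
    if not facets:
        return None
    facet, hits = Counter(facets).most_common(1)[0]
    return facet if hits / len(facets) >= min_consistency else None
```

Targeted feedback would then be keyed to the returned facet rather than to each wrong answer in isolation.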

A case is made for the importance of high quality semantic and coreference annotation. The challenges of providing such annotation are described. Asperger's Syndrome is introduced, and the connections are drawn between the needs of text annotation and the abilities of persons with Asperger's Syndrome to meet those needs. Finally, a pilot program is recommended wherein semantic annotation is performed by people with Asperger's Syndrome. The primary points embodied in this paper are as follows: (1) Document annotation is essential to the Natural Language Processing (NLP) projects at Lawrence Livermore National Laboratory (LLNL); (2) LLNL does not currently have a system in place to meet its need for text annotation; (3) Text annotation is challenging for a variety of reasons, many related to its very rote nature; (4) Persons with Asperger's Syndrome are particularly skilled at rote verbal tasks, and behavioral experts agree that they would excel at text annotation; and (5) A pilot study is recommended in which two to three people with Asperger's Syndrome annotate documents and then the quality and throughput of their work is evaluated relative to that of their neuro-typical peers.

Designing effective and accurate tools for identifying the functional and structural elements in a genome remains at the frontier of genome annotation owing to incompleteness and inaccuracy of the data, limitations in the computational models, and shifting paradigms in genomics, such as alternative splicing. We present a methodology for the automated annotation of genes and their alternatively spliced mRNA transcripts based on existing cDNA and protein sequence evidence from the same species or projected from a related species using syntenic mapping information. At the core of the method is the splice graph, a compact representation of a gene, its exons, introns, and alternatively spliced isoforms. The putative transcripts are enumerated from the graph and assigned confidence scores based on the strength of sequence evidence, and a subset of the high-scoring candidates are selected and promoted into the annotation. The method is highly selective, eliminating the unlikely candidates while retaining 98% of the high-quality mRNA evidence in well-formed transcripts, and produces annotation that is measurably more accurate than some evidence-based gene sets. The process is fast, accurate, and fully automated, and combines the traditionally distinct gene annotation and alternative splicing detection processes in a comprehensive and systematic way, thus considerably aiding in the ensuing manual curation efforts. PMID:15632090
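The splice graph is described only at a high level in this abstract; as a toy illustration (not the authors' implementation), a splice graph can be held as an exon adjacency map, with candidate transcripts enumerated as paths from initial to terminal exons:

```python
def transcripts(splice_graph, starts, ends):
    """Enumerate candidate transcripts as exon paths through a splice
    graph given as {exon: [downstream exons joined by an intron]}.
    In a real annotator each path would then be scored by the strength
    of its cDNA/protein evidence before promotion into the gene set."""
    paths = []
    def walk(exon, path):
        path = path + [exon]
        if exon in ends:
            paths.append(path)
        for nxt in splice_graph.get(exon, []):
            walk(nxt, path)
    for s in starts:
        walk(s, [])
    return paths

# toy gene in which exon 2 is alternatively skipped
g = {1: [2, 3], 2: [3]}
alt = transcripts(g, starts={1}, ends={3})
```

Here `alt` contains both isoforms, the exon-2-inclusion path and the skipping path, which is what makes the representation compact: shared exons are stored once however many isoforms use them.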

Automated analysis and annotation of video sequences are important for digital video libraries, content-based video browsing, and data mining projects. A successful video annotation system should provide users with a useful video content summary in a reasonable processing time. Given the wide variety of video genres available today, automatically extracting meaningful video content for annotation remains difficult with currently available techniques. However, a wide range of video has inherent structure, so some prior knowledge about the video content can be exploited to improve our understanding of the high-level video semantic content. In this paper, we develop tools and techniques for analyzing structured video using the low-level information available directly from MPEG compressed video. Working directly in the compressed domain can greatly reduce processing time and enhance storage efficiency. As a testbed, we have developed a basketball annotation system which combines the low-level information extracted from the MPEG stream with prior knowledge of basketball video structure to provide high-level content analysis, annotation and browsing for events such as wide-angle and close-up views, fast breaks, steals, potential shots, number of possessions and possession times. We expect our approach can also be extended to structured video in other domains.

The current first phase (2006-2011) has three major goals: 1) optimizing the conventional cancer risk models currently used, based on the double-detriment life-table and radiation quality functions; 2) integrating biophysical models of acute radiation syndromes; and 3) developing new systems radiation biology models of cancer processes. The first phase also includes continued uncertainty assessment of space radiation environmental models and transport codes, and of relative biological effectiveness (RBE) factors, based on flight data and NSRL results, respectively. The second phase (2012-2016) will: 1) develop biophysical models of central nervous system (CNS) risks; 2) achieve comprehensive systems biology models of cancer processes using data from proton and heavy ion studies performed at NSRL; and 3) begin to identify computational models of biological countermeasures. Goals for the third phase (2017-2021) include: 1) the development of a systems biology model of cancer risks for operational use at NASA; 2) development of models of degenerative risks; 3) quantitative models of countermeasure impacts on cancer risks; and 4) individual-based risk assessments. Finally, we will support a decision point to continue NSRL research in support of NASA's exploration goals beyond 2021, and create an archive of NSRL research results for continued analysis. Details on near-term goals, plans for a Web-based data resource of NSRL results, and a space radiation Wikipedia are described.

The Goldstone Deep Space Communications Complex (GDSCC), located in the Mojave Desert, is part of the National Aeronautics and Space Administration's (NASA's) Deep Space Network (DSN), the world's largest and most sensitive scientific telecommunications and radio navigation network. The Goldstone Complex is operated for NASA by the Jet Propulsion Laboratory. At present, activities at the GDSCC support the operation of nine parabolic dish antennas situated at five separate locations known as 'sites.' Each of the five sites at the GDSCC has one or more antennas, called 'Deep Space Stations' (DSS's). In the course of operation of these DSS's, various hazardous and non-hazardous wastes are generated. In 1992, JPL retained Kleinfelder, Inc., San Diego, California, to quantify the various streams of hazardous and non-hazardous wastes generated at the GDSCC. In June 1992, Kleinfelder, Inc., submitted a report to JPL entitled 'Waste Minimization Assessment.' This present volume is a JPL-expanded version of the Kleinfelder, Inc. report. The 'Waste Minimization Assessment' report did not find any deficiencies in the various waste-management programs now practiced at the GDSCC, and it found that these programs are being carried out in accordance with environmental rules and regulations.

Manual annotation of genomic data is extremely valuable to produce an accurate reference gene set but is expensive compared with automatic methods and so has been limited to model organisms. Annotation tools that have been developed at the Wellcome Trust Sanger Institute (WTSI, http://www.sanger.ac.uk/.) are being used to fill that gap, as they can be used remotely and so open up viable community annotation collaborations. We introduce the ‘Blessed’ annotator and ‘Gatekeeper’ approach to Community Annotation using the Otterlace/ZMap genome annotation tool. We also describe the strategies adopted for annotation consistency, quality control and viewing of the annotation. Database URL: http://vega.sanger.ac.uk/index.html PMID:22434843

New guidelines have been developed and trialled in Australia to assist urban stormwater managers to assess options for projects that aim to improve urban waterway health. These guidelines help users to examine the financial, ecological and social dimensions of projects (i.e., the so-called "triple-bottom-line"). Features of the assessment process described in the guidelines include use of multi criteria analysis, input from technical experts as well as non-technical stakeholders, and provision of three alternative levels of assessment to suit stormwater managers with differing needs and resources. This paper firstly provides a background to the new guidelines and triple-bottom-line assessment. The assessment methodology promoted in the new guidelines is then briefly summarised. This methodology is compared and contrasted with European guidelines from the "SWARD" project that have been primarily developed for assessing the relative sustainability of options involving urban water supply and sewerage assets. Finally, the paper discusses how assessment methodologies that evaluate the financial, ecological and social dimensions of projects can, under some circumstances, be used to evaluate the relative progress of options for urban water management on a journey towards the widely pursued, but vaguely defined goal of "sustainable development". PMID:17120681

Background: Whilst interest has focused on the origin and nature of the savant syndrome for over a century, it is only within the past two decades that empirical group studies have been carried out. Methods: The following annotation briefly reviews relevant research and also attempts to address outstanding issues in this research area.…

In this paper, we focus on metadata for self-created movies like those found on YouTube and Google Video, the durations of which are increasing in line with falling upload restrictions. While simple tags may have been sufficient for most purposes for traditionally very short video footage that contains a relatively small amount of semantic content, this is not the case for movies of longer duration which embody more intricate semantics. Creating metadata is a time-consuming process that takes a great deal of individual effort; however, this effort can be greatly reduced by harnessing the power of Web 2.0 communities to create, update and maintain it. Consequently, we consider the annotation of movies within Web 2.0 environments, such that users create and share that metadata collaboratively, and propose an architecture for collaborative movie annotation. This architecture arises from the results of an empirical experiment where metadata creation tools, YouTube and an MPEG-7 modelling tool, were used by users to create movie metadata. The next section discusses related work in the areas of collaborative retrieval and tagging. Then, we describe the experiments that were undertaken on a sample of 50 users. Next, the results are presented which provide some insight into how users interact with existing tools and systems for annotating movies. Based on these results, the paper then develops an architecture for collaborative movie annotation.

An annotated bibliography which presents approximately 300 references from 1951 to 1973 on the education of severely/profoundly handicapped persons. Citations are grouped alphabetically by author's name within the following categories: characteristics and treatment, gross motor development, sensory and motor development, physical therapy for the…

Drawn from communication journals, historical and news magazines, business and industrial magazines, political science and world affairs journals, general interest periodicals, and literary and political review magazines, the approximately 90 entries in this annotated bibliography discuss ghostwriting as practiced through the ages and reveal the…

This effort's objective was to identify and hybridize a suite of technologies enabling the development of predictive decision aids for use principally in combat environments, but also in any complex information terrain. The technologies required included formal concept analysis for knowledge representation and information operations, Peircean reasoning to support hypothesis generation, Mill's canons to begin defining information operators that support the first two technologies, and co-evolutionary game theory to provide the environment/domain to assess predictions from the reasoning engines. The intended application domain is the IED problem because of its inherent evolutionary nature. While a fully functioning integrated algorithm was not achieved, the hybridization and demonstration of the technologies was accomplished, and utility was demonstrated for a number of ancillary queries.

The specific problem addressed in this study was the low success rate of information technology (IT) projects in the U.S. Due to the abstract nature and inherent complexity of software development, IT projects are among the most complex projects encountered. Most existing schools of project management theory are based on the rational systems…

ASAP (a systematic annotation package for community analysis of genomes) is a relational database and web interface developed to store, update and distribute genome sequence data and functional characterization (https://asap.ahabs.wisc.edu/annotation/php/ASAP1.htm). ASAP facilitates ongoing community annotation of genomes and tracking of information as genome projects move from preliminary data collection through post-sequencing functional analysis. The ASAP database includes multiple genome sequences at various stages of analysis, corresponding experimental data and access to collections of related genome resources. ASAP supports three levels of users: public viewers, annotators and curators. Public viewers can currently browse updated annotation information for Escherichia coli K-12 strain MG1655, genome-wide transcript profiles from more than 50 microarray experiments and an extensive collection of mutant strains and associated phenotypic data. Annotators worldwide are currently using ASAP to participate in a community annotation project for the Erwinia chrysanthemi strain 3937 genome. Curation of the E. chrysanthemi genome annotation as well as those of additional published enterobacterial genomes is underway and will be publicly accessible in the near future. PMID:12519969

This paper describes two methods, Technology Roadmapping and Project Risk Assessment, which were used to identify and manage the technical risks relating to the treatment of sodium bearing waste at the Idaho National Engineering and Environmental Laboratory. The waste treatment technology under consideration was Direct Vitrification. The primary objective of the Technology Roadmap is to identify technical data uncertainties for the technologies involved and to prioritize the testing or development studies to fill the data gaps. Similarly, project management's objective for a multi-million dollar construction project includes managing all the key risks in accordance with DOE O 413.3, "Program and Project Management for the Acquisition of Capital Assets." In the early stages, the Project Risk Assessment is based upon a qualitative analysis of each risk's probability and consequence. In order to clearly prioritize the work to resolve the technical issues identified in the Technology Roadmap, the issues must be cross-referenced to the project's Risk Assessment. This will enable the project to get the best value for the cost to mitigate the risks.

In order to make decisions on how to invest limited research dollars on asteroid surveillance and mitigation options, an analytic understanding of the risks posed by impacts is necessary. Qualitative and quantitative studies have been performed to assess such risks, and some reasonable point estimates have been proposed. However, since consequential asteroid impacts tend to be rare events, point estimates and expected annual death rates do not adequately convey the heavy tail of the distribution, potentially leading to misguided resource allocations. We propose and develop a framework for new risk measures, including a distribution over the number of fatalities from asteroid impacts and the probability of a globally consequential impact. We implement a simulation of asteroid impacts using probabilistic inputs for impactor characteristics, and a Poisson process for asteroid arrivals over the next 100 years. Simulation results indicate that a significant portion of the risk to humans comes from asteroids in the 300-1000 meter diameter range; this is because asteroid impacts in this range can produce global effects, and are more frequent than those from asteroids greater than 1 km in diameter. The relative importance of this size regime in overall asteroid impact risk is robust in simulation results, and we find the magnitude of risks is still sensitive to factors that contribute to global effects from an asteroid impact. Initial results are provided on the sensitivity of impact risks to various mitigation measures, including 'civil defense' methods. These results underscore the need for next-generation survey missions, and can help provide the basis for setting future space telescope observation requirements.
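The abstract specifies only that arrivals follow a Poisson process with probabilistic impactor inputs; a stripped-down Monte Carlo sketch of that setup (the arrival rate and fatality model below are illustrative placeholders, not the study's values) could look like:

```python
import math
import random

def poisson_draw(rng, lam):
    # Knuth's multiplication method; adequate for the small expected
    # counts used here (lam << 1 over the simulated horizon)
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def simulate(rate_per_year, years, fatality_sampler, trials, seed=0):
    """Empirical distribution of total fatalities over a time horizon:
    impact count per trial is Poisson, each impact's toll is drawn
    from fatality_sampler. The full distribution, not just its mean,
    exposes the heavy tail the abstract emphasizes."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        n = poisson_draw(rng, rate_per_year * years)
        totals.append(sum(fatality_sampler(rng) for _ in range(n)))
    return totals

# placeholder inputs: one impactor per 100,000 yr over a 100-yr horizon,
# log-uniform fatalities between 1e4 and 1e8 per impact
dist = simulate(1e-5, 100, lambda r: 10 ** r.uniform(4, 8), 20000)
tail_prob = sum(t > 0 for t in dist) / len(dist)
```

Risk measures such as the probability of any fatal impact (`tail_prob`) or high quantiles of `dist` then follow directly from the simulated distribution.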

This article describes a project designed to ensure that class participation in a large introductory commercial law course is assessed fairly and reliably. The subjectivity often associated with this type of assessment is minimized by involving students in the specification of clear criteria and the assessment process as they were asked to assess…

The planning phases of quality improvement projects are commonly overlooked. Disorganized planning and implementation can escalate chaos, intensify resistance to change, and increase the likelihood of failure. Two important steps in the planning phase are (1) assessing local resources available to aid in the quality improvement project and (2) evaluating the culture in which the desired change is to be implemented. Assessing local resources includes identifying and engaging key stakeholders and evaluating if appropriate expertise is available for the scope of the project. This process also involves engaging informaticists and gathering available IT tools to plan and automate (to the extent possible) the data-gathering, analysis, and feedback steps. Culture in a department is influenced by the ability and willingness to manage resistance to change, build consensus, span boundaries between stakeholders, and become a learning organization. Allotting appropriate time to perform these preparatory steps will increase the odds of successfully performing a quality improvement project and implementing change. PMID:25467724

The aims of this project were to develop improved methods for computational genome annotation and to apply these methods to improve the annotation of genomic sequence data with a specific focus on human genome sequencing. The project resulted in a substantial body of published work. Notable contributions of this project were the identification of basecalling and lane tracking as error processes in genome sequencing and contributions to improved methods for these steps in genome sequencing. This technology improved the accuracy and throughput of genome sequence analysis. Probabilistic methods for physical map construction were developed. Improved methods for sequence alignment, alternative splicing analysis, promoter identification and NF kappa B response gene prediction were also developed.

The goal of the U.S. Department of Energy (DOE) Clean Coal Technology Program (CCT) is to furnish the energy marketplace with a number of advanced, more efficient, and environmentally responsible coal utilization technologies through demonstration projects. These projects seek to establish the commercial feasibility of the most promising advanced coal technologies that have developed beyond the proof-of-concept stage. This document serves as a DOE post-project assessment (PPA) of a project selected in CCT Round IV, the Wabash River Coal Gasification Repowering (WRCGR) Project, as described in a Report to Congress (U.S. Department of Energy 1992). Repowering consists of replacing an existing coal-fired boiler with one or more clean coal technologies to achieve significantly improved environmental performance. The desire to demonstrate utility repowering with a two-stage, pressurized, oxygen-blown, entrained-flow, integrated gasification combined-cycle (IGCC) system prompted Destec Energy, Inc., and PSI Energy, Inc., to form a joint venture and submit a proposal for this project. In July 1992, the Wabash River Coal Gasification Repowering Project Joint Venture (WRCGRPJV, the Participant) entered into a cooperative agreement with DOE to conduct this project. The project was sited at PSI Energy's Wabash River Generating Station, located in West Terre Haute, Indiana. The purpose of this CCT project was to demonstrate IGCC repowering using a Destec gasifier and to assess long-term reliability, availability, and maintainability of the system at a fully commercial scale. DOE provided 50 percent of the total project funding (for capital and operating costs during the demonstration period) of $438 million.

The Bonneville Power Administration proposes funding the Hellsgate Winter Range Wildlife Mitigation Project in cooperation with the Colville Confederated Tribes and the Bureau of Indian Affairs. This Preliminary Environmental Assessment examines the potential environmental effects of acquiring and managing property for wildlife and wildlife habitat within a large project area. The proposed action is intended to meet the need for mitigation of wildlife and wildlife habitat that were adversely affected by the construction of Grand Coulee and Chief Joseph Dams and their reservoirs.

This assessment describes the potential Year 2000 (Y2K) problems and describes the methods for achieving Y2K Compliance for Project W-420, Ventilation Stack Monitoring Systems Upgrades. The purpose of this assessment is to give an overview of the project. This document will not be updated and any dates contained in this document are estimates and may change. The project work scope includes upgrades to ventilation stacks and generic effluent monitoring systems (GEMS) at the 244-A Double Contained Receiver Tank (DCRT), the 244-BX DCRT, the 244-CR Vault, tanks 241-C-105 and 241-C-106, the 244-S DCRT, and the 244-TX DCRT. A detailed description of system dates, functions, interfaces, potential Y2K problems, and date resolutions cannot be provided since the project is in the definitive design phase. This assessment will describe the methods, protocols, and practices to ensure that equipment and systems do not have Y2K problems.

An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.

We have written a software suite designed to facilitate solar data analysis by scientists, students, and the public, anticipating enormous datasets from future instruments. Our “STAR” suite includes an interactive learning section explaining 15 classes of solar events. Users learn software tools that exploit humans’ superior ability (over computers) to identify many events. Annotation tools include time slice generation to quantify loop oscillations, the interpolation of event shapes using natural cubic splines (for loops, sigmoids, and filaments) and closed cubic splines (for coronal holes). Learning these tools in an environment where examples are provided prepares new users to comfortably utilize annotation software with new data. Upon completion of our tutorial, users are presented with media of various solar events and asked to identify and annotate the images, to test their mastery of the system. Goals of the project include public input into the data analysis of very large datasets from future solar satellites, and increased public interest and knowledge about the Sun. In 2010, the Solar Dynamics Observatory (SDO) will be launched into orbit. SDO’s advancements in solar telescope technology will generate a terabyte per day of high-quality data, requiring innovation in data management. While major projects develop automated feature recognition software, so that computers can complete much of the initial event tagging and analysis, that software still cannot annotate features such as sigmoids, coronal magnetic loops, coronal dimming, etc., due to large amounts of data concentrated in relatively small areas. Previously, solar physicists manually annotated these features, but with the imminent influx of data it is unrealistic to expect specialized researchers to examine every image that computers cannot fully process. A new approach is needed to efficiently process these data. Providing analysis tools and data access to students and the public have proven
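The STAR tools are only described, not listed; as a self-contained sketch of the natural-cubic-spline interpolation the abstract mentions for tracing loops, sigmoids, and filaments (the standard tridiagonal formulation, not STAR's own code):

```python
def natural_spline_second_derivs(x, y):
    """Second derivatives of the natural cubic spline through the
    annotated points (x, y): tridiagonal solve with zero curvature
    imposed at both endpoints."""
    n = len(x)
    y2 = [0.0] * n
    u = [0.0] * n
    for i in range(1, n - 1):
        sig = (x[i] - x[i - 1]) / (x[i + 1] - x[i - 1])
        p = sig * y2[i - 1] + 2.0
        y2[i] = (sig - 1.0) / p
        u[i] = ((y[i + 1] - y[i]) / (x[i + 1] - x[i])
                - (y[i] - y[i - 1]) / (x[i] - x[i - 1]))
        u[i] = (6.0 * u[i] / (x[i + 1] - x[i - 1]) - sig * u[i - 1]) / p
    for k in range(n - 2, 0, -1):  # back-substitution
        y2[k] = y2[k] * y2[k + 1] + u[k]
    return y2

def spline_eval(x, y, y2, xv):
    """Evaluate the spline at xv: locate the bracketing knot interval
    by bisection, then apply the cubic interpolation formula."""
    lo, hi = 0, len(x) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if x[mid] > xv:
            hi = mid
        else:
            lo = mid
    h = x[hi] - x[lo]
    a = (x[hi] - xv) / h
    b = (xv - x[lo]) / h
    return (a * y[lo] + b * y[hi]
            + ((a ** 3 - a) * y2[lo] + (b ** 3 - b) * y2[hi]) * h * h / 6.0)
```

A user clicking a handful of points along a coronal loop would feed them through `natural_spline_second_derivs` once, then call `spline_eval` densely to draw the smooth curve; the closed cubic splines mentioned for coronal holes instead use periodic end conditions.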

The focus of this paper is on the design, implementation, and validation of asynchronous multimedia annotations designed for Web-based collaboration in educational and research settings. The two key questions we explore in this paper are: How useful are such annotations and what purpose do annotations serve? What is the ease of use of our specific implementation of annotations? The context of our project has been in the area of multimedia information usage and collaboration in the biological sciences. We have developed asynchronous annotations for HTML and image data. Our annotations can be executed via any browser and require no downloads. They are stored in a central database allowing search and asynchronous access by all registered users. An easy to use user interface allows users to add, view and search annotations. We also performed a usability study to validate our implementation of text annotations.

Large-scale genome projects have generated a rapidly increasing number of DNA sequences. Therefore, development of computational methods to rapidly analyze these sequences is essential for progress in genomic research. Here we present an automatic annotation system for preliminary analysis of DNA sequences. The gene annotation tool (GATO) is a bioinformatics pipeline designed to facilitate routine functional annotation and easy access to annotated genes. It was designed in view of the frequent need of genomic researchers to access data pertaining to a common set of genes. In the GATO system, annotation is generated by querying some of the Web-accessible resources, and the information is stored in a local database, which keeps a record of all previous annotation results. GATO may be accessed from anywhere over the Internet, or may be run locally if a large number of sequences are going to be annotated. It is implemented in PHP and Perl and may be run on any suitable Web server. Usually, installation and application of annotation systems require experience and are time consuming, but GATO is simple and practical, allowing anyone with basic skills in informatics to use it without any special training. GATO can be downloaded at [http://mariwork.iq.usp.br/gato/]. The minimum free disk space required is 2 MB. PMID:16258624
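The query-then-store design described above, where remote lookups are recorded locally so repeat requests reuse prior results, can be sketched roughly as follows. `fetch_remote` stands in for the real Web queries, and all names here are hypothetical rather than GATO's actual PHP/Perl code.

```python
import json
import sqlite3


def fetch_remote(gene_id):
    # Stand-in for querying a Web-accessible resource (the real system
    # queries remote annotation databases over the network).
    return {"gene": gene_id, "function": "unknown"}


def annotate(con, gene_id):
    """Return an annotation, reusing any previously stored result."""
    con.execute(
        "CREATE TABLE IF NOT EXISTS annotations (gene TEXT PRIMARY KEY, data TEXT)"
    )
    row = con.execute(
        "SELECT data FROM annotations WHERE gene = ?", (gene_id,)
    ).fetchone()
    if row:                                  # a previous result is on record
        return json.loads(row[0])
    data = fetch_remote(gene_id)             # otherwise query and store it
    con.execute("INSERT INTO annotations VALUES (?, ?)",
                (gene_id, json.dumps(data)))
    con.commit()
    return data


con = sqlite3.connect(":memory:")
result = annotate(con, "dnaA")
```

The local record both speeds up repeated lookups for a common gene set and preserves a history of all previous annotation results, as the abstract describes.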

The annotated bibliography serves as the initial literature base for ecological indicators for regional monitoring within the U.S. EPA's Environmental Monitoring and Assessment Program (EMAP). Five hundred fifty-six (556) citations were obtained through a combination of computeriz...

Presents 36 annotations of journal articles (published between January and June, 2001) dealing with assessment, bilingual/foreign language education, literacy, professional development, reading, teaching and learning of literature, teaching and learning of writing, and technology and literacy. (SG)

A capability for rapidly performing quantitative risk assessments has been developed by JSC Safety and Mission Assurance for use in project design trade studies early in the project life cycle, i.e., concept development through preliminary design phases. A risk assessment tool set has been developed consisting of interactive and integrated software modules that allow a user/project designer to assess the impact of alternative design or programmatic options on the probability of mission success or other risk metrics. The risk and design trade space includes interactive options for selecting parameters and/or metrics for numerous design characteristics, including component reliability characteristics, functional redundancy levels, item or system technology readiness levels, and mission event characteristics. This capability is intended for use on any project or system development with a defined mission, and an example project will be used for demonstration and descriptive purposes, e.g., landing a robot on the moon. The effects of various alternative design considerations and the impact of these decisions on mission success (or failure) can be measured in real time on a personal computer. This capability provides a high degree of efficiency for quickly providing information in NASA's evolving risk-based decision environment.

We present a statistical mechanical theory of the process of annotating an object with terms selected from an ontology. The term selection process is formulated as an ideal lattice gas model, but in a highly structured inhomogeneous field. The model enables us to explain patterns recently observed in real-world annotation data sets, in terms of the underlying graph structure of the ontology. By relating the external field strengths to the information content of each node in the ontology graph, the statistical mechanical model also allows us to propose a number of practical metrics for assessing the quality of both the ontology, and the annotations that arise from its use. Using the statistical mechanical formalism we also study an ensemble of ontologies of differing size and complexity; an analysis not readily performed using real data alone. Focusing on regular tree ontology graphs we uncover a rich set of scaling laws describing the growth in the optimal ontology size as the number of objects being annotated increases. In doing so we provide a further possible measure for assessment of ontologies.
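As one concrete reading of "information content of each node", the usual corpus-based definition IC(t) = -log2 p(t) can be computed directly from annotation counts, where p(t) is the fraction of annotated objects reaching term t (counts propagated up to ancestors). The tiny ontology below is illustrative, not from the paper.

```python
import math


def information_content(counts, total):
    """IC(term) = -log2 p(term); rarer terms carry more information."""
    return {term: -math.log2(n / total)
            for term, n in counts.items() if n > 0}


# Hypothetical propagated annotation counts on a tiny tree-shaped ontology;
# every object reaches the root, so the root's IC is zero.
counts = {"root": 100, "process": 60, "binding": 40, "kinase": 5}
ic = information_content(counts, total=100)
```

Under this reading, deeper, more specific nodes of the ontology graph attract stronger "external fields", which is how the model connects graph structure to the observed annotation patterns.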

This project at the Texas College of Osteopathic Medicine (Fort Worth) evaluated the use of an artificial-intelligence-derived measure, "Knowledge-Based Inference Tool" (KBIT), as the basis for assessing medical students' diagnostic capabilities and designing instruction to improve diagnostic skills. The instrument was designed to address the…

The Choptank River is a benchmark watershed in the Conservation Effectiveness Assessment Project. It is an estuary and tributary of the Chesapeake Bay. Land use in the watershed (2057 square km) is classified as 52% agriculture, 26% forested, and 5% developed. Agricultural production is centered ...

We designed scenarios for impact assessment that explicitly address policy choices and uncertainty in climate response. Economic projections and the resulting greenhouse gas emissions for the “no climate policy” scenario and two stabilization scenarios: at 4.5 W/m2 and 3.7 W/m2 b...

Following the proposal of the Chancellor of the California Community Colleges to shift from attendance-based funding to performance-based funding, Palomar College (California) has articulated its intention to judge its quality and formulate its policies primarily on learning outcomes. Out of this imperative, the Assessment of Learning Project (ALP) was…

This Safety Assessment is based on information derived from the Conceptual Design Report for the Environmental Restoration Disposal Facility (DOE/RL 1994) and ancillary documentation developed during the conceptual design phase of Project W-296. The Safety Assessment has been prepared to support the Solid Waste Burial Ground Interim Safety Basis document. The purpose of the Safety Assessment is to provide an evaluation of the design to determine if the process, as proposed, will comply with US Department of Energy (DOE) limits for radioactive and hazardous material exposures and be acceptable from an overall health and safety standpoint. The evaluation considered effects on the worker, onsite personnel, the public, and the environment.

A quantitative method was developed for estimating the expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables: the examiner's expertise, the examinee's achieved expertise, the assessment task difficulty, and the examinee's performance. The method applies to complex assessments such as final-year project thesis assessment, including peer assessment. A guide map can be generated by the method for finding expected uncertainties prior to the assessment implementation with a given set of variables. It employs a scale for visualisation of expertise levels, the derivation of which is based on quantified clarities of mental images for levels of the examiner's expertise and the examinee's achieved expertise. To identify the relevant expertise areas that depend on the complexity of the assessment format, a graphical continuum model was developed. The continuum model consists of the assessment task, assessment standards, and the criterion for the transition towards complex assessment owing to the relativity between implicitness and explicitness, and is capable of identifying the areas of expertise required for scale development.

The US Department of Energy (DOE) has considered a proposal from the State of Colorado, Office of Energy Conservation (OEC), for funding construction of the Expanded Ponnequin Wind Project in Weld County, Colorado. OEC plans to enter into a contracting arrangement with Public Service Company of Colorado (PSCo) for the completion of these activities. PSCo, along with its subcontractors and business partners, is jointly developing the Expanded Ponnequin Wind Project. The purpose of this Final Environmental Assessment (EA) is to provide DOE and the public with information on potential environmental impacts associated with the Expanded Ponnequin Wind Energy Project. This EA, and public comments received on it, were used in DOE's deliberations on whether to release funding for the expanded project under the Commercialization Ventures Program.

The goal of the U.S. Department of Energy (DOE) Clean Coal Technology Program (CCT) is to furnish the energy marketplace with a number of advanced, more efficient, and environmentally responsible coal utilization technologies through demonstration projects. These projects seek to establish the commercial feasibility of the most promising advanced coal technologies that have developed beyond the proof-of-concept stage. This document serves as a DOE post-project assessment (PPA) of a project selected in CCT Round IV, the Wabash River Coal Gasification Repowering (WRCGR) Project, as described in a Report to Congress (U.S. Department of Energy 1992). Repowering consists of replacing an existing coal-fired boiler with one or more clean coal technologies to achieve significantly improved environmental performance. The desire to demonstrate utility repowering with a two-stage, pressurized, oxygen-blown, entrained-flow, integrated gasification combined-cycle (IGCC) system prompted Destec Energy, Inc., and PSI Energy, Inc., to form a joint venture and submit a proposal for this project. In July 1992, the Wabash River Coal Gasification Repowering Project Joint Venture (WRCGRPJV, the Participant) entered into a cooperative agreement with DOE to conduct this project. The project was sited at PSI Energy's Wabash River Generating Station, located in West Terre Haute, Indiana. The purpose of this CCT project was to demonstrate IGCC repowering using a Destec gasifier and to assess long-term reliability, availability, and maintainability of the system at a fully commercial scale. DOE provided 50 percent of the total project funding (for capital and operating costs during the demonstration period) of $438 million. Construction for the demonstration project was started in July 1993. Pre-operational tests were initiated in August 1995, and construction was completed in November 1995. Commercial operation began in November 1995, and the demonstration period was completed in December

USDA initiated the Conservation Effects Assessment Project (CEAP) in 2002 to analyze societal and environmental benefits gained from the increased conservation program funding provided in the 2002 Farm Bill. The Natural Resources Conservation Service (NRCS), Agricultural Research Service (ARS), and...

The U.S. Department of Energy (DOE) has considered a proposal from the State of Colorado, Office of Energy Conservation (OEC), for funding construction of the Expanded Ponnequin Wind Project in Weld County, Colorado. OEC plans to enter into a contracting arrangement with Public Service Company of Colorado (PSCo) for the completion of these activities. PSCo, along with its subcontractors and business partners, is jointly developing the Expanded Ponnequin Wind Project. DOE completed an environmental assessment of the original proposed project in August 1997. Since then, the geographic scope and the design of the project changed, necessitating additional review of the project under the National Environmental Policy Act. The project now calls for the possible construction of up to 48 wind turbines on State and private lands. PSCo and its partners have initiated construction of the project on private land in Weld County, Colorado. A substation, access road and some wind turbines have been installed. However, to date, DOE has not provided any funding for these activities. DOE, through its Commercialization Ventures Program, has solicited applications for financial assistance from state energy offices, in a teaming arrangement with private-sector organizations, for projects that will accelerate the commercialization of emerging renewable energy technologies. The Commercialization Ventures Program was established by the Renewable Energy and Energy Efficiency Technology Competitiveness Act of 1989 (P.L. 101-218) as amended by the Energy Policy Act of 1992 (P.L. 102-486). The Program seeks to assist entry into the marketplace of newly emerging renewable energy technologies, or of innovative applications of existing technologies. In short, an emerging renewable energy technology is one which has already proven viable but which has had little or no operational experience. The Program is managed by the Department of Energy, Office of Energy Efficiency and Renewable Energy. The

An assessment of the Mod-2 Wind Turbine project is presented based on initial goals and present results. Specifically, the Mod-2 background, project flow, and a chronology of events/results leading to Mod-2 acceptance is presented. After checkout/acceptance of the three operating turbines, NASA/LeRC will continue management of a two year test program performed at the DOE Goodnoe Hills test site. This test program is expected to yield data necessary for the continued development and optimization of wind energy systems. These test activities, the implementation of, and the results to date are also presented.

Background The creation of accurate quantitative Systems Biology Markup Language (SBML) models is a time-intensive, manual process often complicated by the many data sources and formats required to annotate even a small and well-scoped model. Ideally, the retrieval and integration of biological knowledge for model annotation should be performed quickly, precisely, and with a minimum of manual effort. Results Here we present rule-based mediation, a method of semantic data integration applied to systems biology model annotation. The heterogeneous data sources are first syntactically converted into ontologies, which are then aligned to a small domain ontology by applying a rule base. We demonstrate proof-of-principle of this application of rule-based mediation using off-the-shelf semantic web technology through two use cases for SBML model annotation. Existing tools and technology provide a framework around which the system is built, reducing development time and increasing usability. Conclusions Integrating resources in this way accommodates multiple formats with different semantics, and provides richly-modelled biological knowledge suitable for annotation of SBML models. This initial work establishes the feasibility of rule-based mediation as part of an automated SBML model annotation system. Availability Detailed information on the project files as well as further information on and comparisons with similar projects is available from the project page at http://cisban-silico.cs.ncl.ac.uk/RBM/. PMID:20626923
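The core idea of rule-based mediation, rewriting statements from syntactically converted source ontologies into a small domain ontology by applying a rule base, can be sketched without any semantic web machinery. The predicates below are hypothetical and far simpler than the OWL-level alignment the paper performs.

```python
# Mapping rules from source-ontology predicates to domain-ontology
# predicates; all identifiers here are made up for illustration.
RULES = {
    "uniprot:catalyzes": "domain:hasCatalyticActivity",
    "kegg:participatesIn": "domain:partOfPathway",
}


def mediate(source_triples, rules=RULES):
    """Apply the rule base; keep only triples the domain ontology models."""
    return [(s, rules[p], o) for (s, p, o) in source_triples if p in rules]


source = [
    ("P12345", "uniprot:catalyzes", "GO:0004672"),
    ("P12345", "uniprot:sequenceLength", "412"),   # not modelled -> dropped
]
domain_triples = mediate(source)
```

The mediated triples share one vocabulary regardless of which heterogeneous source they came from, which is what makes them suitable for uniform SBML model annotation downstream.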

This document presents an annotated bibliography of books and articles on topics relevant to widowhood. These annotations are included: (1) 21 annotations on the grief process; (2) 11 annotations on personal observations about widowhood; (3) 16 annotations on practical problems surrounding widowhood, including legal and financial problems and job…

This poster describes the assessment of commercially available and prototype parallel optics modules for possible use as back end components for the Versatile Link common project. The assessment covers SNAP12 transmitter and receiver modules as well as optical engine technologies in dense packaging options. Tests were performed using vendor evaluation boards (SNAP12) as well as custom evaluation boards (optical engines). The measurements obtained were used to compare the performance of these components with single channel SFP+ components operating at a transmission wavelength of 850 nm over multimode fibers.

Shotgun sequencing of the nuclear genome of Chlamydomonas reinhardtii (Chlamydomonas throughout) was performed at an approximate 10X coverage by JGI. Roughly half of the genome is now contained on 26 scaffolds, all of which are at least 1.6 Mb, and the coverage of the genome is ~95%. There are now over 200,000 cDNA sequence reads that we have generated as part of the Chlamydomonas genome project (Grossman, 2003; Shrager et al., 2003; Grossman et al. 2007; Merchant et al., 2007); other sequences have also been generated by the Kazusa sequencing group (Asamizu et al., 1999; Asamizu et al., 2000) or individual laboratories that have focused on specific genes. Shrager et al. (2003) placed the reads into distinct contigs (an assemblage of reads with overlapping nucleotide sequences), and contigs that group together as part of the same genes have been designated ACEs (assembly of contigs generated from EST information). All of the reads have also been mapped to the Chlamydomonas nuclear genome and the cDNAs and their corresponding genomic sequences have been reassembled, and the resulting assemblage is called an ACEG (an Assembly of contiguous EST sequences supported by genomic sequence) (Jain et al., 2007). Most of the unique genes or ACEGs are also represented by gene models that have been generated by the Joint Genome Institute (JGI, Walnut Creek, CA). These gene models have been placed onto the DNA scaffolds and are presented as a track on the Chlamydomonas genome browser associated with the genome portal (http://genome.jgi-psf.org/Chlre3/Chlre3.home.html). Ultimately, the meeting grant awarded by DOE has helped enormously in the development of an annotation pipeline (a set of guidelines used in the annotation of genes) and resulted in high quality annotation of over 4,000 genes; the annotators were from both Europe and the USA. Some of the people who led the annotation initiative were Arthur Grossman, Olivier Vallon, and Sabeeha Merchant (with many individual

Gene Ontology (GO) is developed to provide standard vocabularies of gene products in different databases. The process of annotating GO terms to genes requires curators to read through lengthy articles. Methods for speeding up or automating the annotation process are thus of great importance. We propose a GO annotation approach using full-text biomedical documents for directing more relevant papers to curators. This system explores word density and gravitation relationships between genes and GO terms. Different density and gravitation models are built and several evaluation criteria are employed to assess the effects of the proposed methods. PMID:17503384
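A word-density relationship of the kind mentioned above might be scored as the number of GO-term words occurring near mentions of a gene in the full text. The window size and scoring below are illustrative assumptions, not the paper's actual density or gravitation models.

```python
def density_score(tokens, gene, term_words, window=10):
    """Average count of term words within +/-window tokens of each gene mention.

    A hypothetical sketch: higher scores suggest the document discusses the
    gene in the context of that GO term, directing the paper to curators.
    """
    positions = [i for i, tok in enumerate(tokens) if tok == gene]
    score = 0
    for i in positions:
        nearby = tokens[max(0, i - window): i + window + 1]
        score += sum(nearby.count(w) for w in term_words)
    return score / max(len(positions), 1)   # normalise by gene mentions


doc = "brca1 regulates dna repair and brca1 binds dna damage sites".split()
s = density_score(doc, "brca1", {"dna", "repair"})
```

A "gravitation" variant could additionally down-weight term words by their distance from the gene mention, analogous to an inverse-square law, but that refinement is omitted here.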

This report documents the results of the Environmental Management Assessment performed at the Fernald Environmental Management Project (FEMP) in Fernald, Ohio. During this assessment, the activities conducted by the assessment team included review of internal documents and reports from previous audits and assessments; interviews with US Department of Energy (DOE) and FEMP contractor personnel; and inspection and observation of selected facilities and operations. The onsite portion of the assessment was conducted from March 15 through April 1, 1993, by DOE's Office of Environmental Audit (EH-24) located within the Office of the Assistant Secretary for Environment, Safety, and Health (EH-1). EH-24 carries out independent assessments of DOE facilities and activities as part of the EH-1 Environment, Safety, and Health (ES&H) Oversight Audit Program. The EH-24 program is designed to evaluate the status of DOE facilities and activities with respect to compliance with Federal, state, and local environmental laws and regulations; compliance with DOE Orders, Guidance and Directives; conformance with accepted industry practices and standards of performance; and the status and adequacy of management systems developed to address environmental requirements. The Environmental Management Assessment of FEMP focused on the adequacy of environmental management systems. Further, in response to requests by the Office of Environmental Restoration and Waste Management (EM) and Fernald Field Office (FN), Quality Assurance and Environmental Radiation activities at FEMP were evaluated from a programmatic standpoint. The results of the evaluation of these areas are contained in the Environmental Protection Programs section in this report.

The purpose of this document is to provide a summary of the basic tools that will be used in conducting assessments under the Environmentally Conscious Manufacturing (ECM) Project assessment program. ECM can cover a wide range of issues including: finding safer alternatives to toxic materials; changing processes to become more efficient; environmental costs and regulatory compliance; waste reduction; energy conservation; product packaging; and product reuse/recycling. The assessments performed as part of this program will try to identify opportunities to implement technologies/actions that will promote the types of results listed above. The general methodology, or sequence of events, that will be used in conducting assessments is as follows: 1. Form an Assessment Team; 2. Map Process by flow diagrams and materials accounting; 3. Identify opportunities for ECM by activity based accounting and pareto analysis; 4. Identify and evaluate ECM/pollution prevention alternatives; 5. Implement alternatives; 6. Monitor progress. All of the assessment steps listed above are addressed in this document except for forming the assessment team. The tools discussed in this document are well known, widely used process analysis or quality improvement tools which have been adapted for use in evaluating opportunities for ECM/Pollution prevention.

Despite the great effort to design efficient systems allowing the electronic indexation of information concerning genes, proteins, structures, and interactions published daily in scientific journals, some problems are still observed in specific tasks such as functional annotation. The annotation of function is a critical issue for bioinformatic routines, such as, for instance, in functional genomics and the further prediction of unknown protein function, which are highly dependent on the quality of existing annotations. Some information management systems evolve to efficiently incorporate information from large-scale projects, but often, annotation of single records from the literature is difficult and slow. In this short report, functional characterizations of a representative sample of the entire set of uncharacterized proteins from Escherichia coli K12 were compiled from Swiss-Prot, PubMed, and EcoCyc, demonstrating a functional annotation deficit in biological databases. Some issues are postulated as causes of the lack of annotation, and different solutions are evaluated and proposed to avoid them. The hope is that as a consequence of these observations, there will be new impetus to improve the speed and quality of functional annotation and ultimately provide updated, reliable information to the scientific community. PMID:20050264

Stream-habitat assessment for evaluation of restoration projects requires the examination of many parameters, both watershed-scale and reach-scale, to incorporate the complex non-linear effects of geomorphic, riparian, watershed and hydrologic factors on aquatic ecosystems. Rapid geomorphic assessment tools used by many jurisdictions to assess natural channel design projects seldom include watershed-level parameters, which have been shown to have a significant effect on benthic habitat in stream systems. In this study, Artificial Neural Network (ANN) models were developed to integrate complex non-linear relationships between the aquatic ecosystem health indices and key watershed-scale and reach-scale parameters. Physical stream parameters, based on QHEI parameters, and watershed characteristics data were collected at 112 sites on 62 stream systems located in Southern Ontario. Benthic data were collected separately and benthic invertebrate summary indices, specifically Hilsenhoff's Biotic Index (HBI) and Richness, were determined. The ANN models were trained on the randomly selected 3/4 of the dataset of 112 streams in Ontario, Canada and validated on the remaining 1/4. The R2 values for the developed ANN model predictions were 0.86 for HBI and 0.92 for Richness. Sensitivity analysis of the trained ANN models revealed that Richness was directly proportional to Erosion and Riparian Width and inversely proportional to Floodplain Quality and Substrate parameters. HBI was directly proportional to Velocity Types and Erosion and inversely proportional to Substrate, % Treed and 1:2 Year Flood Flow parameters. The ANN models can be useful tools for watershed managers in stream assessment and restoration projects by allowing consideration of watershed properties in the stream assessment.
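The reported fits are coefficients of determination. For reference, R² can be computed from observed and predicted index values as follows; the HBI numbers below are made up for illustration, not the study's data.

```python
def r_squared(observed, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1 - ss_res / ss_tot


# Hypothetical HBI values at four validation sites:
obs = [4.2, 5.1, 6.3, 5.8]
pred = [4.0, 5.3, 6.1, 5.9]
r2 = r_squared(obs, pred)
```

An R² near 0.9, as reported for the trained ANN models, means the model explains about 90% of the variance in the benthic index across the validation sites.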

A key advantage of trenchless construction methods compared with traditional open-cut methods is their ability to install or rehabilitate underground utility systems with limited disruption to the surrounding built and natural environments. The equivalent monetary values of these disruptions are commonly called social costs. Social costs are often ignored by engineers or project managers during project planning and design phases, partially because they cannot be calculated using standard estimating methods. In recent years some approaches for estimating social costs were presented. Nevertheless, the cost data needed for validation of these estimating methods is lacking. Development of such social cost databases can be accomplished by compiling relevant information reported in various case histories. This paper identifies eight most important social cost categories, presents mathematical methods for calculating them, and summarizes the social cost impacts for two pipeline construction projects. The case histories are analyzed in order to identify trends for the various social cost categories. The effectiveness of the methods used to estimate these values is also discussed. These findings are valuable for pipeline infrastructure engineers making renewal technology selection decisions by providing a more accurate process for the assessment of social costs and impacts. - Highlights: • Identified the eight most important social cost factors for pipeline construction • Presented mathematical methods for calculating those social cost factors • Summarized social cost impacts for two pipeline construction projects • Analyzed those projects to identify trends for the social cost factors.
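As an example of the kind of calculation such estimating methods formalize, one commonly cited social-cost component is the monetized cost of traffic delay during construction. The formula and numbers below are illustrative assumptions, not the paper's actual categories or case-history data.

```python
def traffic_delay_cost(vehicles_per_day, delay_hours_per_vehicle,
                       value_of_time_per_hour, duration_days):
    """Monetized delay cost = vehicles x delay x value of time x duration."""
    return (vehicles_per_day * delay_hours_per_vehicle
            * value_of_time_per_hour * duration_days)


# Hypothetical 30-day open-cut lane closure on a 12,000-vehicle/day road,
# 15 minutes of delay per vehicle, $20/hour assumed value of travel time:
cost = traffic_delay_cost(12_000, 0.25, 20.0, 30)
```

Summing such components across categories (delay, detours, lost business, pavement damage, noise, and so on) is what allows open-cut and trenchless alternatives to be compared on social cost, not just direct construction cost.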

Focusing on the similarities and differences in men's and women's verbal and nonverbal communication behavior, this 33-item annotated bibliography presents a sample of articles appearing in speech communication publications on the subject. Categories of the annotated bibliography are books, sexism and sexual harassment in academia, theoretic…

This bibliography consists of a total of 215 entries dealing with drug education, including curriculum guides, and drawn from documents in the ERIC system. There are two sections, the first containing 130 annotated citations of documents and journal articles, and the second containing 85 citations of journal articles without annotations, but with…

This annotated bibliography is designed to survey the field of women in communication. The bibliography is centered on a specific context: who are and who were the women who worked in the communication field, and specifically, what were their writings like? The 56 annotations date from 1949 through 1990 and deal mostly with books (especially…

Proprietary approaches for representing annotations and image markup are serious barriers for researchers to share image data and knowledge. The Annotation and Image Markup (AIM) project is developing a standards-based information model for image annotation and markup in health care and clinical trial environments. The complex hierarchical structures of the AIM data model pose new challenges for managing such data in terms of performance and support of complex queries. In this paper, we present our work on managing AIM data through a native XML approach, and supporting complex image and annotation queries through native extension of the XQuery language. Through integration with xService, AIM databases can now be conveniently shared through caGrid. PMID:21218167
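Hierarchical annotation data of this kind is naturally queried with path expressions. The sketch below uses Python's standard ElementTree on a made-up document purely to illustrate the idea; it is not the actual AIM schema, nor the paper's native XML database or XQuery extensions.

```python
import xml.etree.ElementTree as ET

# A made-up, AIM-like annotation document (hypothetical element names):
doc = """<annotation>
  <imagingObservation name="mass"/>
  <geometricShape type="circle"><radius>12.5</radius></geometricShape>
</annotation>"""

root = ET.fromstring(doc)
# Path-style queries over the nested structure:
shapes = [s.get("type") for s in root.iter("geometricShape")]
radius = float(root.find("./geometricShape/radius").text)
```

A native XML database applies the same path-based model, but with indexing and a full XQuery processor, which is what makes the complex hierarchical queries performant at scale.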

Substantial changes in the hydrological cycle are projected for the 21st century, with potential major impacts, particularly at regional scale. However, the projections are subject to major uncertainties and the metrics generally used to assess such changes do not fully account for the hydroclimatological characteristics of the land surface. In this context, the 'dry gets drier, wet gets wetter' paradigm is often used as a simplifying summary. However, recent studies have challenged the validity of the paradigm both for observations (Greve et al., 2014) and projections (Roderick et al., 2014), especially casting doubt on applying the widely used P-E (precipitation - evapotranspiration) metric over global land surfaces. Here we show in a comprehensive assessment that projected changes in mean annual P-E are generally not significant in most land areas, with the exception of the northern high latitudes where significant changes towards wetter conditions are found. We further show that the combination of decreasing P and increasing atmospheric demand (potential evapotranspiration, Ep) leads to a significant increase in aridity in many subtropical and neighbouring regions, thus confirming the paradigm for some dry regions, but invalidating it for the relatively large fraction of the affected area which is currently in a humid or transitional climate regime. Combining both metrics (P-E and P-Ep) we conclude that the 'dry gets drier, wet gets wetter' paradigm is generally not confirmed for projected changes in most land areas (despite notable exceptions in the high latitudes and subtropics), because of a lack of robustness of the projected changes in some regions (tropics) and because humid to transitional regions are shifting to drier conditions, i.e. not following the paradigm. References Greve, P., Orlowsky, B., Mueller, B., Sheffield, J., Reichstein, M., & Seneviratne, S. I. Global assessment of trends in wetting and drying over land. Nature Geosci. 7, 716-721 (2014
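The paradigm test can be caricatured as comparing a region's aridity before and after: a region follows 'dry gets drier, wet gets wetter' only if its wet/dry state agrees with its direction of change. The aridity ratio P/Ep, the humidity threshold, and the numbers below are illustrative assumptions, simpler than the metrics combined in the assessment.

```python
def follows_paradigm(p_hist, ep_hist, p_fut, ep_fut, humid_threshold=0.65):
    """True if a humid region gets wetter or an arid region gets drier.

    Aridity is summarised by the ratio P/Ep (an assumption of this sketch);
    mixed cases, e.g. a humid region drying, break the paradigm.
    """
    was_humid = (p_hist / ep_hist) > humid_threshold
    getting_wetter = (p_fut / ep_fut) > (p_hist / ep_hist)
    return was_humid == getting_wetter


# A transitional-to-humid region with decreasing P and increasing Ep
# (the case the abstract argues invalidates the paradigm):
ok = follows_paradigm(p_hist=700, ep_hist=1000, p_fut=650, ep_fut=1100)
```

In this sketch the humid region shifting to drier conditions returns False, mirroring the abstract's conclusion that such regions do not follow the paradigm.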

Unified, structured vocabularies and classifications freely provided by the Gene Ontology (GO) Consortium are widely accepted in most of the large-scale gene annotation projects. Consequently, many tools have been created for use with the GO ontologies. WEGO (Web Gene Ontology Annotation Plot) is a simple but useful tool for visualizing, comparing and plotting GO annotation results. Unlike other commercial charting software, WEGO is designed to deal with the directed acyclic graph structure of GO to facilitate histogram creation of GO annotation results. WEGO has been used widely in many important biological research projects, such as the rice genome project and the silkworm genome project. It has become one of the daily tools for downstream gene annotation analysis, especially when performing comparative genomics tasks. WEGO, along with two other tools, namely External to GO Query and GO Archive Query, is freely available for all users at http://wego.genomics.org.cn. There are two available mirror sites at http://wego2.genomics.org.cn and http://wego.genomics.com.cn. Any suggestions are welcome at wego@genomics.org.cn. PMID:16845012
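The DAG handling such a tool needs amounts to propagating each gene's annotations to all ancestor terms before counting, since a GO term may have several parents. A minimal sketch on a hypothetical miniature ontology (not the real GO graph):

```python
# child -> parents; GO is a DAG, so a term may have more than one parent.
PARENTS = {
    "kinase activity": ["catalytic activity"],
    "catalytic activity": ["molecular_function"],
    "molecular_function": [],
}


def annotated_terms(direct_terms):
    """All terms a gene counts toward: its direct terms plus every ancestor."""
    seen = set()
    stack = list(direct_terms)
    while stack:
        term = stack.pop()
        if term not in seen:
            seen.add(term)
            stack.extend(PARENTS[term])   # walk up the DAG
    return seen


terms = annotated_terms(["kinase activity"])
```

Tallying these propagated term sets across a gene list yields the per-category counts a WEGO-style histogram displays, and makes counts from two annotation sets directly comparable.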

Human rights impact assessment (HRIA) is a process for systematically identifying, predicting and responding to the potential impact on human rights of a business operation, capital project, government policy or trade agreement. Traditionally, it has been conducted as a desktop exercise to predict the effects of trade agreements and government policies on individuals and communities. In line with a growing call for multinational corporations to ensure they do not violate human rights in their activities, HRIA is increasingly incorporated into the standard suite of corporate development project impact assessments. In this context, the policy world's non-structured, desk-based approaches to HRIA are insufficient. Although a number of corporations have commissioned and conducted HRIA, no broadly accepted and validated assessment tool is currently available. The lack of standardisation has complicated efforts to evaluate the effectiveness of HRIA as a risk mitigation tool, and has caused confusion in the corporate world regarding company duties. Hence, clarification is needed. The objectives of this paper are (i) to describe an HRIA methodology, (ii) to provide a rationale for its components and design, and (iii) to illustrate implementation of HRIA using the methodology in two selected corporate development projects—a uranium mine in Malawi and a tree farm in Tanzania. We found that as a prognostic tool, HRIA could examine potential positive and negative human rights impacts and provide effective recommendations for mitigation. However, longer-term monitoring revealed that recommendations were unevenly implemented, dependent on market conditions and personnel movements. This instability in the approach to human rights suggests a need for on-going monitoring and surveillance. -- Highlights: • We developed a novel methodology for corporate human rights impact assessment. • We piloted the methodology on two corporate projects—a mine and a plantation. • Human

Bonneville Power Administration (BPA) proposes to fund the Hellsgate Winter Range Wildlife Mitigation Project (Project) in a cooperative effort with the Colville Confederated Tribes and the Bureau of Indian Affairs (BIA). The proposed action would allow the sponsors to secure property and conduct wildlife management activities within the boundaries of the Colville Indian Reservation. This Final Environmental Assessment (EA) examines the potential environmental effects of acquiring and managing property for wildlife and wildlife habitat within a large project area. This area consists of several separated land parcels, of which 2,000 hectares (4,943 acres) have been purchased by BPA and an additional 4,640 hectares (11,466 acres) have been identified by the Colville Confederated Tribes for inclusion in the Project. Four proposed activities (habitat protection, habitat enhancement, operation and maintenance, and monitoring and evaluation) are analyzed. The proposed action is intended to meet the need for mitigation of wildlife and wildlife habitat that was adversely affected by the construction of Grand Coulee and Chief Joseph Dams and their reservoirs.

The Assessment of Smolt Condition for Travel Time Analysis Project (Bonneville Power Administration Project 87-401) monitored attributes of salmonid smolt physiology in the Columbia and Snake River basins from 1987 to 1997, under the Northwest Power Planning Council Fish and Wildlife Program, in cooperation with the Smolt Monitoring Program of the Fish Passage Center. The primary goal of the project was to investigate the physiological development of juvenile salmonids related to migration rates. The assumption was made that the level of smolt development, interacting with environmental factors such as flow, would be reflected in travel times. The Fish Passage Center applied the physiological measurements of smolt condition to Water Budget management, to regulate flows so as to decrease travel time and increase survival.

Under the Pacific Northwest Electric Power Planning and Conservation Act of 1980, and the subsequent Northwest Power Planning Council's Columbia River Basin Fish and Wildlife Program, a wildlife habitat impact assessment and identification of mitigation objectives have been developed for the US Army Corps of Engineers' Chief Joseph Dam Project in north-central Washington. This study will form the basis for future mitigation planning and implementation.

The goal of the Gene Ontology (GO) project is to provide a uniform way to describe the functions of gene products from organisms across all kingdoms of life and thereby enable analysis of genomic data. Protein annotations are either based on experiments or predicted from protein sequences. Since most sequences have not been experimentally characterized, most available annotations need to be based on predictions. To make as accurate inferences as possible, the GO Consortium's Reference Genome Project is using an explicit evolutionary framework to infer annotations of proteins from a broad set of genomes from experimental annotations in a semi-automated manner. Most components in the pipeline, such as selection of sequences, building multiple sequence alignments and phylogenetic trees, retrieving experimental annotations and depositing inferred annotations, are fully automated. However, the most crucial step in our pipeline relies on software-assisted curation by an expert biologist. This curation tool, Phylogenetic Annotation and INference Tool (PAINT) helps curators to infer annotations among members of a protein family. PAINT allows curators to make precise assertions as to when functions were gained and lost during evolution and record the evidence (e.g. experimentally supported GO annotations and phylogenetic information including orthology) for those assertions. In this article, we describe how we use PAINT to infer protein function in a phylogenetic context with emphasis on its strengths, limitations and guidelines. We also discuss specific examples showing how PAINT annotations compare with those generated by other highly used homology-based methods. PMID:21873635
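The core inference PAINT supports can be sketched in miniature: a curator asserts tree nodes where a function was gained or lost, and the function is then inferred for every descendant protein below a gain that does not sit under a loss. The following is an illustrative toy, not PAINT's actual algorithm or data model; the tree and protein names are hypothetical:

```python
# node -> children; leaves are proteins (all names are made up)
TREE = {
    "root": ["anc1", "protC"],
    "anc1": ["protA", "protB"],
    "protA": [], "protB": [], "protC": [],
}

def infer_leaves(tree, node, gains, losses, inherited=False):
    """Return the set of leaf proteins inferred to carry the function.

    A node carries the function if it inherits it or is a curated gain,
    unless the node itself is a curated loss.
    """
    has = (inherited or node in gains) and node not in losses
    if not tree[node]:  # leaf protein
        return {node} if has else set()
    out = set()
    for child in tree[node]:
        out |= infer_leaves(tree, child, gains, losses, has)
    return out

# Function gained at ancestral node anc1, lost again in protB:
print(infer_leaves(TREE, "root", gains={"anc1"}, losses={"protB"}))
```

Recording gains and losses as events on internal nodes, as sketched here, is what lets curators make precise assertions about when a function arose during evolution rather than annotating each leaf independently.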

This assessment describes the potential Year 2000 (Y2K) problems and describes the methods for achieving Y2K compliance for Project W-151, Tank 101-AZ Waste Retrieval System. The purpose of this assessment is to give an overview of the project. This document will not be updated and any dates contained in this document are estimates and may change. Two mixer pumps and instrumentation have been or are planned to be installed in waste tank 101-AZ to demonstrate solids mobilization. The information and experience gained during this process test will provide data for comparison with sludge mobilization prediction models and provide indication of the effects of mixer pump operation on an Aging Waste Facility tank. A limited description of system dates, functions, interfaces, potential Y2K problems, and date resolutions is presented. The project is presently on hold, and definitive design and procurement have been completed. This assessment will describe the methods, protocols, and practices to ensure that equipment and systems do not have Y2K problems.

The Gene Ontology (GO) Consortium (GOC, http://www.geneontology.org) is a community-based bioinformatics resource that classifies gene product function through the use of structured, controlled vocabularies. Over the past year, the GOC has implemented several processes to increase the quantity, quality and specificity of GO annotations. First, the number of manual, literature-based annotations has grown at an increasing rate. Second, as a result of a new 'phylogenetic annotation' process, manually reviewed, homology-based annotations are becoming available for a broad range of species. Third, the quality of GO annotations has been improved through a streamlined process for, and automated quality checks of, GO annotations deposited by different annotation groups. Fourth, the consistency and correctness of the ontology itself has increased by using automated reasoning tools. Finally, the GO has been expanded not only to cover new areas of biology through focused interaction with experts, but also to capture greater specificity in all areas of the ontology using tools for adding new combinatorial terms. The GOC works closely with other ontology developers to support integrated use of terminologies. The GOC supports its user community through the use of e-mail lists, social media and web-based resources. PMID:23161678

The SEED Project is a cooperative effort to annotate ever-expanding genomic data so researchers can conduct effective comparative analyses of genomes. Launched in 2003 by the Fellowship for Interpretation of Genomes (FIG), the project is one of several initiatives in ongoing development of data curation systems. SEED is designed to be used by scientists from numerous centers and with varied research objectives. As such, several institutions have since joined FIG in a consortium, including the University of Chicago, DOE’s Argonne National Laboratory (ANL), the University of Illinois at Urbana-Champaign, and others. As one example, ANL has used SEED to develop the National Microbial Pathogen Data Resource. Other agencies and institutions have used the project to discover genome components and clarify gene functions such as metabolism. SEED also has enabled researchers to conduct comparative analyses of closely related genomes and has supported derivation of stoichiometric models to understand metabolic processes. The SEED Project has been extended to support metagenomic samples and concomitant analytical tools. Moreover, the number of genomes being introduced into SEED is growing very rapidly. Building a framework to support this growth while providing highly accurate annotations is centrally important to SEED. The project’s subsystem-based annotation strategy has become the technological foundation for addressing these challenges. (Copied from Appendix 7 of Systems Biology Knowledgebase for a New Era in Biology, A Genomics:GTL Report from the May 2008 Workshop, DOE/SC-0113, Grequrick, S; Fredrickson, J.K.; Stevens, R., Pub March 1, 2009.)

The Department of Energy (DOE) has prepared an environmental assessment for a proposed Sewer System Upgrade Project at the Idaho National Engineering Laboratory (INEL) near Idaho Falls, Idaho. The proposed action would include activities conducted at the Central Facilities Area, Test Reactor Area, and the Containment Test Facility at the Test Area North at INEL. The proposed action would consist of replacing or remodeling the existing sewage treatment plants at the Central Facilities Area, Test Reactor Area, and Containment Test Facility. Also, a new sewage testing laboratory would be constructed at the Central Facilities Area. Finally, the proposed action would include replacing, repairing, and/or adding sewer lines in areas where needed.

A major focus of sequencing projects is to identify genes in genomes. However, it is first necessary to define the variety of genes and the criteria for identifying them. In this work we present discrepancies and dependencies arising from the application of different bioinformatic programs for structural annotation, performed on the cucumber data set from the Polish Consortium of Cucumber Genome Sequencing. We used Fgenesh, GenScan and GeneMark for automated structural annotation, and the results were compared to the reference annotation.

Submarine outfalls need to be evaluated as part of an integrated environmental protection system for coastal areas. Although outfalls are tied to the diversity of economic activities along a densely populated coastline, with effluent treatment and effluent reuse being a sign of economic prosperity, precautions must be taken in the construction of these structures. They must be designed so as to have the least possible impact on the environment and at the same time be economically viable. This paper outlines the initial phases of a risk assessment procedure for submarine outfall projects. This approach includes a cost-benefit analysis in which risks are systematically minimized or eliminated. The methods used in this study also allow for randomness and uncertainty. The input for the analysis is a wide range of information and data concerning the failure probability of outfalls and the consequences of an operational stoppage or failure. As part of this risk assessment, target design levels of reliability, functionality, and operationality were defined for the outfalls. These levels were based on an inventory of risks associated with such construction projects, and thus afforded the possibility of identifying possible failure modes. This assessment procedure was then applied to four case studies in Portugal. The results obtained were the values concerning the useful life of the outfalls at the four sites and their joint probability of failure against the principal failure modes assigned to ultimate and serviceability limit states. Also defined were the minimum operationality of these outfalls, the average number of admissible technical breakdowns, and the maximum allowed duration of a stoppage mode. It was found that these values were in consonance with the nature of the effluent (tourist-related, industrial, or mixed) as well as its importance for the local economy. Even more important, this risk assessment procedure was able to measure the impact of the outfalls on

A methodology for a thematic and scientifically-credible assessment of Open Ocean waters as a part of the Global Environment Facility (GEF) Transboundary Waters Assessment Project (TWAP) has been developed in the last 18 months by the Intergovernmental Oceanographic Commission of UNESCO, and is presented for feedback and comment. While developed to help the GEF International Waters focal area target investment to manage looming environmental threats in interlinked freshwater and marine systems (a very focused decision support system), the assessment methodology could contribute to other assessment and management efforts in the UN system and elsewhere. Building on a conceptual framework that describes the relationships between human systems and open ocean natural systems, and on mapping of the human impact on the marine environment, the assessment will evaluate and make projections on a thematic basis, identifying key metrics, indices, and indicators. These themes will include the threats on key ecosystem services of climate change through sea level rise, changed stratification, warming, and ocean acidification; vulnerabilities of ecosystems, habitats, and living marine resources; the impact and sustainability of fisheries; and pollution. Global-level governance arrangements will also be evaluated, with an eye to identifying scope for improved global-level management. The assessment will build on sustained ocean observing systems, model projections, and an assessment of scientific literature, as well as tools for combining knowledge to support identification of priority concerns and in developing scenarios for management. It will include an assessment of key research and observing needs as one way to deal with the scientific uncertainty inherent in such an exercise, and to better link policy and science agendas.

Transcript-based annotation and pedigree analysis are two basic steps in the computational analysis of whole-exome sequencing experiments in genetic diagnostics and disease-gene discovery projects. Here, we present Jannovar, a stand-alone Java application as well as a Java library designed to be used in larger software frameworks for exome and genome analysis. Jannovar uses an interval tree to identify all transcripts affected by a given variant, and provides Human Genome Variation Society-compliant annotations both for variants affecting coding sequences and splice junctions as well as untranslated regions and noncoding RNA transcripts. Jannovar can also perform family-based pedigree analysis with Variant Call Format (VCF) files with data from members of a family segregating a Mendelian disorder. Using a desktop computer, Jannovar requires a few seconds to annotate a typical VCF file with exome data. Jannovar is freely available under the BSD2 license. Source code as well as the Java application and library file can be downloaded from http://compbio.charite.de (with tutorial) and https://github.com/charite/jannovar. PMID:24677618
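The transcript-lookup step behind variant annotation can be illustrated with a simplified query. Jannovar itself uses an interval tree for efficiency; the naive scan below only demonstrates the same query semantics, and the coordinates and transcript names are hypothetical:

```python
# (start, end, name) with 1-based inclusive genomic coordinates;
# the transcripts and positions are illustrative, not real data.
TRANSCRIPTS = [
    (100, 500, "TX1"),
    (450, 900, "TX2"),
    (1200, 1500, "TX3"),
]

def transcripts_at(pos, transcripts):
    """All transcripts whose genomic span contains the variant position."""
    return [name for start, end, name in transcripts if start <= pos <= end]

print(transcripts_at(470, TRANSCRIPTS))   # position inside TX1 and TX2 → ['TX1', 'TX2']
print(transcripts_at(1000, TRANSCRIPTS))  # intergenic position → []
```

In a real annotator, each hit would then be refined against the transcript's exon structure to classify the variant (coding, splice-region, UTR, intronic) and to emit an HGVS-style description; an interval tree makes the containment query logarithmic rather than linear in the number of transcripts.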

Constructing highways in dense urban areas is always a challenge. In the Sao Paulo Metropolitan Region, heavy truck traffic contributes to clogging streets and expressways alike. Because part of the traffic neither originates in nor heads to the region, a peripheral highway has been proposed to reduce traffic problems. This project, called Rodoanel, is an expressway approximately 175 km long. The fact that the projected south and north sections would cross catchments that supply most of the metropolis water demand was strongly disputed and made the environmental permitting process particularly difficult. The agency in charge commissioned a strategic environmental assessment (SEA) of a revamped project, and called it the Rodoanel Programme. However, the SEA report failed to satisfactorily take account of significant strategic issues. Among these, the highway's potential effect of inducing urban sprawl over water protection zones is the most critical issue, as it emerged later as a hurdle to project licensing. The conclusion is that, particularly where no agreed-upon framework for SEA exists, when vertical tiering with downstream project EIA is sought, a careful scoping of strategic issues is more than necessary. If an agreement on 'what is strategic' is not reached and not recognized by influential stakeholders, then the unsettled conflicts will be transferred to project EIA. In such a context, SEA will have added another loop to the usually long road to project approval.

Because of the extreme impact of genome sequencing projects, protein sequences without accompanying experimental data now dominate public databases. Homology searches, by providing an opportunity to transfer functional information between related proteins, have become the de facto way to address this. Although a single, well annotated, close relationship will often facilitate sufficient annotation, this situation is not always the case, particularly if mutations are present in important functional residues. When only distant relationships are available, the transfer of function information is more tenuous, and the likelihood of encountering several well annotated proteins with different functions is increased. The consequence for a researcher is a range of candidate functions with little way of knowing which, if any, are correct. Here, we address the problem directly by introducing a computational approach to accurately identify and segregate related proteins into those with a functional similarity and those where function differs. This approach should find a wide range of applications, including the interpretation of genomics/proteomics data and the prioritization of targets for high-throughput structure determination. The method is generic, but here we concentrate on enzymes and apply high-quality catalytic site data. In addition to providing a series of comprehensive benchmarks to show the overall performance of our approach, we illustrate its utility with specific examples that include the correct identification of haptoglobin as a nonenzymatic relative of trypsin, discrimination of acid-d-amino acid ligases from a much larger ligase pool, and the successful annotation of BioH, a structural genomics target. PMID:16037208

This report summarizes the activities of the US Department of Energy's (DOE) Hydropower Program for fiscal years 1990 and 1991, and provides an annotated bibliography of research, engineering, operations, regulations, and costs of projects pertinent to hydropower development. The Hydropower Program is organized as follows: background (including Technology Development and Engineering Research and Development); Resource Assessment; National Energy Strategy; Technology Transfer; Environmental Research; and, the bibliography discusses reports written by both private and non-Federal Government sectors. Most reports are available from the National Technical Information Service. 5 figs., 2 tabs.

the assessment, further exploration was proposed. In cases where rerouting was constrained, mitigation via structural measures was proposed. This paper further discusses the cost, schedule and resource challenges of planning and executing such a large-scale geotechnical investigation, the interfaces between the various disciplines involved during the assessment, the innovative tools employed for the field mapping, the classifications developed for mapping landslides, karst geology, and trench excavatability, determining liquefaction stretches and the process for the site localization of the Above Ground Installations (AGI). It finally discusses the objectives of the FEED study in terms of providing a route, a ± 20% project cost estimate and a schedule, and the additional engineering work foreseen to take place in the detailed engineering phase of the project.

Today's notice announces BPA's proposal to fund land acquisition or acquisition of a conservation easement and a wildlife management plan to protect and enhance wildlife habitat at the Willow Creek Natural Area in Eugene, Oregon. This action would provide partial mitigation for wildlife and wildlife habitat lost by the development of Federal hydroelectric projects in the Willamette River Basin. The project is consistent with BPA's obligations under provisions of the Pacific Northwest Electric Power Planning and Conservation Act of 1980 as outlined by the Northwest Power Planning Council's 1994 Columbia River Basin Fish and Wildlife Program. BPA has prepared an environmental assessment (DOE/EA-1023) evaluating the proposed project. Based on the analysis in the EA, BPA has determined that the proposed action is not a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an environmental impact statement (EIS) is not required and BPA is issuing this FONSI.

This paper assesses the uncertainties involved in the projections of seasonal temperature and precipitation changes over South America in the twenty-first century. Climate simulations generated by 24 general circulation models are weighted according to the reliability ensemble averaging (REA) approach. The results show that the REA mean temperature change is slightly smaller over South America compared to the simple ensemble mean. Higher reliability in the temperature projections is found over the La Plata basin, and a larger uncertainty range is located in the Amazon. A temperature increase exceeding 2 °C is found to have a very likely (>90 %) probability of occurrence for the entire South American continent in all seasons, and a more likely than not (>50 %) probability of exceeding 4 °C by the end of this century is found over northwest South America, the Amazon Basin, and Northeast Brazil. For precipitation, the projected changes have the same magnitude as the uncertainty range and are comparable to natural variability.
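The weighting step at the heart of the REA approach can be sketched as a reliability-weighted average of per-model projected changes. This is a minimal illustration of the idea only; the change values and weights below are made-up numbers, not results from the study, and the full REA method also derives the weights from model bias and convergence criteria:

```python
def rea_mean(changes, weights):
    """Reliability-weighted average of per-model projected changes."""
    total_w = sum(weights)
    return sum(c * w for c, w in zip(changes, weights)) / total_w

changes = [2.1, 3.0, 4.2]   # projected temperature changes (deg C), one per model
weights = [1.0, 0.5, 0.25]  # higher weight = model judged more reliable
print(round(rea_mean(changes, weights), 3))  # → 2.657
```

Because less reliable outlier models are down-weighted, the REA mean can differ from the simple ensemble mean (here 3.1 deg C), which is the effect the abstract describes for South American temperature projections.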

United States. Bonneville Power Administration; United States. Bureau of Indian Affairs; Spokane Tribe of the Spokane Reservation, Washington

1994-11-01

Bonneville Power Administration (BPA) proposes to fund that portion of the Washington Wildlife Agreement pertaining to the Blue Creek Winter Range Wildlife Mitigation Project (Project) in a cooperative effort with the Spokane Tribe, Upper Columbia United Tribes, and the Bureau of Indian Affairs (BIA). If fully implemented, the proposed action would allow the sponsors to protect and enhance 2,631 habitat units of big game winter range and riparian shrub habitat on 2,185 hectares (5,400 acres) of Spokane Tribal trust lands, and to conduct long term wildlife management activities within the Spokane Indian Reservation project area. This Final Environmental Assessment (EA) examines the potential environmental effects of securing land and conducting wildlife habitat enhancement and long term management activities within the boundaries of the Spokane Indian Reservation. Four proposed activities (habitat protection, habitat enhancement, operation and maintenance, and monitoring and evaluation) are analyzed. The proposed action is intended to meet the need for mitigation of wildlife and wildlife habitat adversely affected by the construction of Grand Coulee Dam and its reservoir.

Topics included in this annotated bibliography on patient education are (1) background on development of patient education programs, (2) patient education interventions, (3) references for health professionals, and (4) research and evaluation in patient education. (TA)

Fully understanding the genetic potential of a microbial community requires functional annotation of all the genes it encodes. The recently developed deep metagenome sequencing approach has enabled rapid identification of millions of genes from a complex microbial community without cultivation. Current homology-based gene annotation fails to detect distantly-related or structural homologs. Furthermore, homology searches with millions of genes are very computationally intensive. To overcome these limitations, we developed rhModeller, a homology-independent software pipeline to efficiently annotate genes from metagenomic sequencing projects. Using cellulases and carbonic anhydrases as two independent test cases, we demonstrated that rhModeller is much faster than HMMER but with comparable accuracy, at 94.5% and 99.9% accuracy, respectively. More importantly, rhModeller has the ability to detect novel proteins that do not share significant homology to any known protein families. As approximately 50% of the 2 million genes derived from the cow rumen metagenome failed to be annotated based on sequence homology, we tested whether rhModeller could be used to annotate these genes. Preliminary results suggest that rhModeller is robust in the presence of missense and frameshift mutations, two common errors in metagenomic genes. Applying the pipeline to the cow rumen genes identified 4,990 novel cellulase candidates and 8,196 novel carbonic anhydrase candidates. In summary, we expect rhModeller to dramatically increase the speed and quality of metagenomic gene annotation.

Recent technological advances have opened unprecedented opportunities for large-scale sequencing and analysis of populations of pathogenic species in disease outbreaks, as well as for large-scale diversity studies aimed at expanding our knowledge across the whole domain of prokaryotes. To meet the challenge of timely interpretation of the structure, function and meaning of this vast genetic information, a comprehensive approach to automatic genome annotation is critically needed. In collaboration with Georgia Tech, NCBI has developed a new approach to genome annotation that combines alignment-based methods with methods of predicting protein-coding and RNA genes and other functional elements directly from sequence. A new gene-finding tool, GeneMarkS+, uses the combined evidence of protein and RNA placement by homology as an initial map of annotation to generate and modify ab initio gene predictions across the whole genome. Thus, NCBI's new Prokaryotic Genome Annotation Pipeline (PGAP) relies more on sequence similarity when confident comparative data are available, while it relies more on statistical predictions in the absence of external evidence. The pipeline provides a framework for generation and analysis of annotation on the full breadth of prokaryotic taxonomy. For additional information on PGAP see https://www.ncbi.nlm.nih.gov/genome/annotation_prok/ and the NCBI Handbook, https://www.ncbi.nlm.nih.gov/books/NBK174280/. PMID:27342282

This report is a post-project assessment of the ENCOAL® Mild Coal Gasification Project, which was selected under Round III of the U.S. Department of Energy (DOE) Clean Coal Technology (CCT) Demonstration Program. The CCT Demonstration Program is a government and industry cofunded technology development effort to demonstrate a new generation of innovative coal utilization processes in a series of commercial-scale facilities. The ENCOAL® Corporation, a wholly-owned subsidiary of Bluegrass Coal Development Company (formerly SMC Mining Company), which is a subsidiary of Ziegler Coal Holding Company, submitted an application to the DOE in August 1989, soliciting joint funding of the project in the third round of the CCT Program. The project was selected by DOE in December 1989, and the Cooperative Agreement (CA) was approved in September 1990. Construction, commissioning, and start-up of the ENCOAL® mild coal gasification facility was completed in June 1992. In October 1994, ENCOAL® was granted a two-year extension of the CA with the DOE, that carried through to September 17, 1996. ENCOAL® was then granted a six-month, no-cost extension through March 17, 1997. Overall, DOE provided 50 percent of the total project cost of $90,664,000. ENCOAL® operated the 1,000-ton-per-day mild gasification demonstration plant at Triton Coal Company's Buckskin Mine near Gillette, Wyoming, for over four years. The process, using Liquids From Coal (LFC™) technology originally developed by SMC Mining Company and SGI International, utilizes low-sulfur Powder River Basin (PRB) coal to produce two new fuels, Process-Derived Fuel (PDF™) and Coal-Derived Liquids (CDL™). The products, as alternative fuel sources, are capable of significantly lowering current sulfur emissions at industrial and utility boiler sites throughout the nation thus reducing pollutants causing acid rain. In support of this overall objective

Expressed sequence tags (ESTs) present a special set of problems for bioinformatic analysis. They are partial and error-prone, and large datasets can have significant internal redundancy. To facilitate analysis of small EST datasets from in-house projects, we present an integrated "pipeline" of tools that take EST data from sequence trace to database submission. These tools also can be used to provide clustering of ESTs into putative genes and to annotate these genes with preliminary sequence similarity searches. The systems are written to use the public-domain LINUX environment and other openly available analytical tools. PMID:15153624
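The clustering step of such a pipeline, grouping redundant ESTs into putative genes, can be sketched with single-linkage clustering on shared k-mers. This is a hedged stand-in for a real overlap or assembly criterion, not the pipeline's actual tools, and the sequences below are toy data:

```python
def kmers(seq, k=8):
    """All k-mers of a sequence as a set."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def cluster_ests(ests, k=8, min_shared=3):
    """Single-linkage clustering of ESTs by shared k-mer count (union-find)."""
    parent = list(range(len(ests)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    sets = [kmers(s, k) for s in ests]
    for i in range(len(ests)):
        for j in range(i + 1, len(ests)):
            if len(sets[i] & sets[j]) >= min_shared:
                parent[find(i)] = find(j)  # merge the two clusters

    clusters = {}
    for i in range(len(ests)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

ests = [
    "ATGGCGTACGTTAGCTAGCT",  # overlaps the next EST over 15 bases
    "GTACGTTAGCTAGCTAAGGC",
    "CCCCTTTTGGGGAAAACCCC",  # unrelated: forms its own cluster
]
print(cluster_ests(ests))
```

Each resulting cluster stands in for a putative gene, whose consensus would then be carried forward to the similarity-search annotation stage; production pipelines use dedicated clustering/assembly tools rather than this pairwise scan.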

Despite the ever-growing body of life cycle assessment (LCA) literature on electricity generation technologies, inconsistent methods and assumptions hamper comparison across studies and pooling of published results. Synthesis of the body of previous research is necessary to generate robust results to assess and compare the environmental performance of different energy technologies for the benefit of policy makers, managers, investors, and citizens. With funding from the U.S. Department of Energy, the National Renewable Energy Laboratory initiated the LCA Harmonization Project in an effort to rigorously leverage the numerous individual studies to develop collective insights. The goals of this project were to: (1) understand the range of published results of LCAs of electricity generation technologies, (2) reduce the variability in published results that stems from inconsistent methods and assumptions, and (3) clarify the central tendency of published estimates to make the collective results of LCAs available to decision makers in the near term. The LCA Harmonization Project's initial focus was evaluating life cycle greenhouse gas (GHG) emissions from electricity generation technologies. Six articles from this first phase of the project are presented in a special supplemental issue of the Journal of Industrial Ecology on Meta-Analysis of LCA: coal (Whitaker et al. 2012), concentrating solar power (Burkhardt et al. 2012), crystalline silicon photovoltaics (PVs) (Hsu et al. 2012), thin-film PVs (Kim et al. 2012), nuclear (Warner and Heath 2012), and wind (Dolan and Heath 2012). Harmonization is a meta-analytical approach that addresses inconsistency in the methods and assumptions of previously published life cycle impact estimates. It has been applied in a rigorous manner to estimates of life cycle GHG emissions from many categories of electricity generation technologies in articles that appear in this special supplemental issue, reducing the variability and

One of the most significant challenges faced by modern-day society is global warming. An exclusive focus on reducing greenhouse gas (GHG) emissions will not suffice, and therefore technologies capable of removing CO2 directly from the atmosphere at low or minimal cost are gaining increased attention. The production and use of biochar is an example of such an emerging mitigation strategy. However, as with any novel product, process or technology, it is vital to assess the entire life cycle in order to determine the environmental impacts of the new concept, in addition to analysing the other sustainability criteria. Life Cycle Assessment (LCA), standardized by ISO (2006a), is an example of a tool used to calculate the environmental impacts of a product or process. Imperial College London will follow the guidelines and recommendations of the ISO 14040 series (ISO 2002, ISO 2006a-b) and the International Life Cycle Data System (ILCD) Handbook (EC JRC IES, 2010a-e), and will use the SimaPro software to conduct an LCA of the biochar supply chains for the EuroChar project. EuroChar ('biochar for Carbon sequestration and large-scale removal of GHG from the atmosphere') is a project funded by the European Commission under its Seventh Framework Programme (FP7). EuroChar aims to investigate and reduce uncertainties around the impacts of, and opportunities for, biochar and, in particular, to explore a possible introduction into modern agricultural systems in Europe, thereby moving closer to determining the true potential of biochar. EuroChar will use various feedstocks for biochar production, ranging from wheat straw to olive residues and poplar, and will focus on two conversion technologies, Hydrothermal Carbonization (HTC) and Thermochemical Carbonization (TC), followed by application of the biochar in crop-growth field trials in England, France and Italy. In April 2012, the EuroChar project will be at its halfway mark and

Background The Gene Ontology project integrates data about the function of gene products across a diverse range of organisms, allowing the transfer of knowledge from model organisms to humans, and enabling computational analyses for interpretation of high-throughput experimental and clinical data. The core data structure is the annotation, an association between a gene product and a term from one of the three ontologies comprising the GO. Historically, it has not been possible to provide additional information about the context of a GO term, such as the target gene or the location of a molecular function. This has limited the specificity of knowledge that can be expressed by GO annotations. Results The GO Consortium has introduced annotation extensions that enable manually curated GO annotations to capture additional contextual details. Extensions represent effector–target relationships such as localization dependencies, substrates of protein modifiers and regulation targets of signaling pathways and transcription factors as well as spatial and temporal aspects of processes such as cell or tissue type or developmental stage. We describe the content and structure of annotation extensions, provide examples, and summarize the current usage of annotation extensions. Conclusions The additional contextual information captured by annotation extensions improves the utility of functional annotation by representing dependencies between annotations to terms in the different ontologies of GO, external ontologies, or an organism’s gene products. These enhanced annotations can also support sophisticated queries and reasoning, and will provide curated, directional links between many gene products to support pathway and network reconstruction. PMID:24885854
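The effector–target relationships described above attach relation(identifier) pairs to a core annotation. As a hedged sketch, the gene product and substrate accessions below are hypothetical placeholders, although the relation names and the relation(identifier) rendering follow the extension format the abstract describes:

```python
# Sketch of an extended GO annotation. Accessions P00001/P00002 are
# hypothetical; GO:0016301 is "kinase activity" and CL:0000235 is
# "macrophage" in the Cell Ontology.
annotation = {
    "gene_product": "UniProtKB:P00001",       # hypothetical kinase
    "go_term": "GO:0016301",                  # kinase activity
    "evidence": "IDA",
    # Annotation extensions add context: the substrate acted on and
    # the cell type in which the activity was observed.
    "extensions": [
        ("has_input", "UniProtKB:P00002"),    # hypothetical substrate
        ("occurs_in", "CL:0000235"),          # macrophage
    ],
}

def render_extensions(exts):
    """Render extensions in the relation(identifier) form used by GO."""
    return ",".join(f"{rel}({target})" for rel, target in exts)

print(render_extensions(annotation["extensions"]))
# → has_input(UniProtKB:P00002),occurs_in(CL:0000235)
```

This is why extensions improve specificity: the bare GO:0016301 annotation says only "this protein has kinase activity", while the extended form also records what it phosphorylates and where.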

The DOE-JGI Microbial Genome Annotation Pipeline (MGAP) performs structural and functional annotation of microbial genomes, which are subsequently included in the Integrated Microbial Genome (IMG) comparative analysis system. MGAP is applied to assembled nucleotide sequence datasets submitted via the IMG submission site. Dataset submission for annotation first requires description of the project and its associated metadata in GOLD. MGAP sequence data processing consists of feature prediction, including identification of protein-coding genes, non-coding RNAs and regulatory RNA features, as well as CRISPR elements. Structural annotation is followed by assignment of protein product names and functions. PMID:26512311

Despite severe regulation of building in flood-prone areas, 80% of these areas are already built up in Greater Paris (Paris, Val-de-Marne, Hauts-de-Seine and Seine-Saint-Denis). Land use in flood-prone areas is presented as one of the main solutions to the ongoing real-estate pressure. For instance, some of the industrial wastelands located along the river are currently being redeveloped and residential buildings are planned. Land use in flood-prone areas is therefore a key issue in the development of the Greater Paris area. To deal with floods there are resilience tools, whether structural (such as perimeter barriers or building aperture barriers) or non-structural (such as warning systems). The technical solutions are available and most of the time efficient [1]. Still, we notice that these tools are not widely implemented; stakeholders and inhabitants seem largely uninterested. This paper focuses on the integration of resilience tools in urban projects. Indeed, one of the blockages in the implementation of an efficient flood-risk prevention policy is the lack of concern among land-use stakeholders and inhabitants for the risk [2]. We conducted a large number of interviews with stakeholders involved in various urban projects, and in this communication we assess to what extent the improvement of resilience to floods is considered a main issue in the execution of an urban project, how this concern is maintained or could be maintained throughout the project, and whether there is a dilution of this concern. To develop this topic we rely on a case study. The "Ardoines" is a project aiming at redeveloping an industrial site (South-East Paris) into residential and office buildings and other amenities. In order to elaborate the master plan, the urban planning authority brought together flood-risk experts. According to the comments of the experts, the architect in charge of the

The Electric Power Research Institute was awarded this grant to continue the joint effort initiated by EPRI and VE International to proceed beyond the prototype phase of the electric G-Van development. The goal of EPRI and VEHMA was to develop a market for the electric G-Van and to distribute the vans to commercial fleet operators. The objective of this project was to produce G-Vans in a production facility, comparable in quality, reliability, durability and safety to the GMC Truck internal-combustion-engine Vandura Van produced by General Motors. An initial market assessment/demonstration phase of sixty (60) vehicles was to be undertaken, with the ability to expand production volume quickly to meet market demand. A brief description of each task of this grant is given, along with the actions taken by EPRI to complete them.

The purpose of this study was to assess contaminated soil and groundwater for the urban redevelopment of a rapid-transit railway and a new mega-shopping area. Contaminated soil and groundwater may interfere with the progress of this project, and residents and shoppers may be exposed to human health risks. The study area had already been remediated by a first application of remediation technologies. Nevertheless, several sites within the area were still contaminated by waste materials and petroleum. For zinc (Zn) contamination, high Zn concentrations were detected because waste materials had been disposed of across the entire area. For petroleum contamination, high total petroleum hydrocarbon (TPH) and hydrocarbon-degrading microbe concentrations were observed at a depth of 7 m because an underground petroleum storage tank had previously been located at this site. Correlation results suggest that the TPH concentration in soil is still related to the TPH concentration in groundwater; the relationship is quantified by the Spearman coefficient (α). PMID:23307052
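The Spearman coefficient mentioned above measures how well the soil and groundwater TPH values agree in rank order. A minimal, self-contained sketch follows, using invented measurements rather than the study's data; a real analysis would typically call a statistics library such as scipy.stats.spearmanr instead.

```python
# Spearman rank correlation from scratch: rank both series (averaging
# ranks for ties), then take the Pearson correlation of the ranks.
def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

soil_tph = [120, 85, 300, 40, 210]    # hypothetical mg/kg
gw_tph   = [0.9, 0.4, 2.5, 0.1, 1.8]  # hypothetical mg/L
print(round(spearman(soil_tph, gw_tph), 3))  # → 1.0 (perfectly monotonic)
```

Because the hypothetical pairs are perfectly monotonic, the coefficient is 1.0; field data like the study's would yield an intermediate value.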

Idaho National Laboratory (INL) has an ongoing research and development (R&D) project to remove excess conservatism from seismic probabilistic risk assessment (SPRA) calculations. These risk calculations should focus on providing best-estimate results, and associated insights, for evaluation and decision-making. This report presents a plan for improving the current traditional SPRA process using a seismic event recorded at a nuclear power plant site, with known outcomes, to improve the decision-making process. SPRAs are intended to provide best estimates of the various combinations of structural and equipment failures that can lead to a seismically induced core damage event. In general, however, this approach has been conservative and can mask other important events (for instance, it was not the seismic motions that caused the Fukushima core melt events, but the tsunami ingress into the facility).

The ENCODE Project has generated a wealth of experimental information mapping diverse chromatin properties in several human cell lines. Although each such data track is independently informative toward the annotation of regulatory elements, their interrelations contain much richer information for the systematic annotation of regulatory elements. To uncover these interrelations and to generate an interpretable summary of the massive datasets of the ENCODE Project, we apply unsupervised learning methodologies, converting dozens of chromatin datasets into discrete annotation maps of regulatory regions and other chromatin elements across the human genome. These methods rediscover and summarize diverse aspects of chromatin architecture, elucidate the interplay between chromatin activity and RNA transcription, and reveal that a large proportion of the genome lies in a quiescent state, even across multiple cell types. The resulting annotation of non-coding regulatory elements correlates strongly with mammalian evolutionary constraint, and provides an unbiased approach for evaluating metrics of evolutionary constraint in humans. Lastly, we use the regulatory annotations to revisit previously uncharacterized disease-associated loci, resulting in focused, testable hypotheses through the lens of the chromatin landscape. PMID:23221638

Nitrogen (N) availability plays a key role in food and fiber production. Providing plant-available N through synthetic fertilizer in the 20th and early 21st centuries has been a major contributor to the increased production required to feed and clothe the growing human population. To continue to meet global demands and to minimize environmental problems, significant improvements are needed in the efficiency with which fertilizer N is utilized within production systems. There are still major uncertainties regarding the fate of fertilizer N added to agricultural soils and the potential for reducing losses to the environment. Enhancing the technical and economic efficiency of fertilizer N is seen to promote a favorable situation for both agricultural production and the environment, and this has provided much of the impetus for a new N fertilizer project. To address this important issue, a rapid assessment project on N fertilizer (NFRAP) was conducted by SCOPE (the Scientific Committee on Problems of the Environment) during late 2003 and early 2004. This was the first formal project of the International Nitrogen Initiative (INI). As part of this assessment, a successful international workshop was held in Kampala, Uganda on 12–16 January 2004. This workshop brought together scientists from around the world to assess the fate of synthetic fertilizer N in the context of overall N inputs to agricultural systems, with a view to enhancing the efficiency of N use and reducing negative impacts on the environment. Regionalization of the assessment highlighted the problems of too little N for crop production to meet the nutrient requirements of sub-Saharan Africa and the oversupply of N in the major rice-growing areas of China. The results of the assessment are presented in a book (SCOPE 65) which is now available to provide a basis for further discussions on N fertilizer. PMID:16512199

Background Gene annotation is a pivotal component in computational genomics, encompassing prediction of gene function, expression analysis, and sequence scrutiny. Hence, quantitative measures of the annotation landscape constitute a pertinent bioinformatics tool. GeneCards® is a gene-centric compendium of rich annotative information for over 50,000 human gene entries, building upon 68 data sources, including Gene Ontology (GO), pathways, interactions, phenotypes, publications and many more. Results We present the GeneCards Inferred Functionality Score (GIFtS), which allows a quantitative assessment of a gene's annotation status by exploiting the unique wealth and diversity of GeneCards information. The GIFtS tool, linked from the GeneCards home page, facilitates browsing the human genome by searching for the annotation level of a specified gene, retrieving a list of genes within a specified range of GIFtS values, obtaining random genes with a specific GIFtS value, and experimenting with the GIFtS weighting algorithm for a variety of annotation categories. The bimodal shape of the GIFtS distribution suggests a division of the human gene repertoire into two main groups: the high-GIFtS peak consists almost entirely of protein-coding genes; the low-GIFtS peak consists of genes from all of the categories. Cluster analysis of GIFtS annotation vectors provides the classification of gene groups by detailed positioning in the annotation arena. GIFtS also provides measures which enable the evaluation of the databases that serve as GeneCards sources. An inverse correlation is found (for GIFtS>25) between the number of genes annotated by each source and the average GIFtS value of genes associated with that source. Three typical source prototypes are revealed by their GIFtS distribution: genome-wide sources, sources comprising mainly highly annotated genes, and sources comprising mainly poorly annotated genes. The degree of accumulated knowledge for a given gene measured by
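The general idea of a weighted annotation score can be illustrated as follows. The weights and categories below are invented for the example and are not the actual GeneCards GIFtS algorithm; the sketch only shows how per-category weights combine into a single annotation-status number.

```python
# Hypothetical GIFtS-like score: a weighted fraction of annotation
# categories in which a gene has at least one entry. Weights are
# illustrative, not GeneCards' actual weighting.
WEIGHTS = {"GO": 3.0, "pathways": 2.0, "interactions": 2.0,
           "phenotypes": 1.0, "publications": 1.0}

def annotation_score(annotations):
    """annotations: dict mapping category -> entry count for one gene.
    Returns a 0-100 score over the weighted categories covered."""
    covered = sum(w for cat, w in WEIGHTS.items()
                  if annotations.get(cat, 0) > 0)
    return 100.0 * covered / sum(WEIGHTS.values())

well_annotated = {"GO": 12, "pathways": 4, "interactions": 30,
                  "publications": 95}
poorly_annotated = {"publications": 1}
print(round(annotation_score(well_annotated), 2))    # → 88.89
print(round(annotation_score(poorly_annotated), 2))  # → 11.11
```

A gene repertoire scored this way would naturally separate into high- and low-scoring groups, mirroring the bimodal distribution the abstract reports.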

This report summarizes the activities and accomplishments of NREL's Solar Radiation Resource Assessment Project (SRRAP) during fiscal year 1991. Currently, the primary focus of the SRRAP is to produce a 1961–1990 National Solar Radiation Data Base, providing hourly values of global horizontal, diffuse, and direct normal solar radiation at approximately 250 sites around the United States. Because these solar radiation quantities were measured intermittently at only about 50 of these sites, models were developed and applied to the majority of the stations to provide estimates of these parameters. Although approximately 93 percent of the data base consists of modeled data, this represents a significant improvement over the SOLMET/ERSATZ 1952–1975 data base. The magnitude and importance of this activity are such that the majority of SRRAP human and financial resources were devoted to the data base development. However, in FY 1991 the SRRAP was involved in many other activities, which are reported here. These include the continued maintenance of a solar radiation monitoring network in the southeast United States at six Historically Black Colleges and Universities (HBCUs), the transfer of solar radiation resource assessment technology through a variety of activities, participation in international programs, and the maintenance and operation of NREL's Solar Radiation Research Laboratory.

The DOE is proposing to provide financial assistance to the Kotzebue Electric Association to expand its existing wind installation near Kotzebue, Alaska. Like many rural Alaska towns, Kotzebue uses diesel-powered generators to produce its electricity, the high cost of which is currently subsidized by the Alaska State government. In an effort to provide a cost effective and clean source of electricity, reduce dependence on diesel fuel, and reduce air pollutants, the DOE is proposing to fund an experimental wind installation to test commercially available wind turbines under Arctic conditions. The results would provide valuable information to other Alaska communities experiencing similar dependence on diesel-powered generators. The environmental assessment for the proposed wind installation assessed impacts to biological resources, land use, electromagnetic interference, coastal zone, air quality, cultural resources, and noise. It was determined that the project does not constitute a major Federal action significantly affecting the quality of the human environment. Therefore, the preparation of an environmental impact statement is not required, and DOE has issued a Finding of No Significant Impact.

Community organizations addressing health and human service needs generally have minimal capacity for research and evaluation. As a result, they are often inadequately equipped to independently carry out activities that can be critical for their own success, such as conducting needs assessments, identifying best practices, and evaluating outcomes. Moreover, they are unable to develop equitable partnerships with academic researchers to conduct community-based research. This paper reports on the progress of the Community Research Scholar Initiative (CRSI), a program that aims to enhance community research and evaluation capacity through training of selected employees from Greater Cleveland community organizations. The intensive 2-year CRSI program includes didactic instruction, fieldwork, multiple levels of community and academic engagement, leadership training, and a mentored research project. The first cohort of CRSI Scholars, their community organizations, and other community stakeholders have incorporated program lessons into their practices and operations. The CRSI program evaluation indicates: the importance of careful Scholar selection; the need to engage executive leadership from Scholar organizations; the value of a curriculum integrating classwork, fieldwork, and community engagement; and the need for continual scholar skill and knowledge assessment. These findings and lessons learned guide other efforts to enhance community organization research and evaluation capacity. PMID:26073663

SITE-94 is a research project conducted as a performance assessment of a hypothetical repository for spent nuclear fuel, but with real pre-excavation data from a real site. The geosphere, the engineered barriers and the processes for radionuclide release and transport comprise an integrated, interdependent system, which is described by an influence diagram (PID) that reflects how different Features, Events or Processes (FEPs) inside the system interact. Site evaluation is used to obtain information on transport paths in the geosphere and to deliver information on geosphere interaction with the engineered barriers. A three-dimensional geological structure model of the site, as well as alternative conceptual models consistent with the existing hydrological field data, have been analyzed. Groundwater chemistry is evaluated, and a model for the origin of the different waters, fairly consistent with the flow model, has been developed. The geological structure model is also used for analyzing the mechanical stability of the site. Several phenomena of relevance for copper corrosion in a repository environment have been investigated. For Reference Case conditions and regardless of flow variability, output is dominated by I-129, which, for a single canister, may give rise to drinking-water well doses on the order of 10⁻⁶ Sv/yr. Finally, it appears that the procedures involved in the development of influence diagrams may be a promising tool for quality assurance of performance assessments.

The enormous increase of popularity and use of the worldwide web has led in the recent years to important changes in the ways people communicate. An interesting example of this fact is provided by the now very popular social annotation systems, through which users annotate resources (such as web pages or digital photographs) with keywords known as “tags.” Understanding the rich emergent structures resulting from the uncoordinated actions of users calls for an interdisciplinary effort. In particular concepts borrowed from statistical physics, such as random walks (RWs), and complex networks theory, can effectively contribute to the mathematical modeling of social annotation systems. Here, we show that the process of social annotation can be seen as a collective but uncoordinated exploration of an underlying semantic space, pictured as a graph, through a series of RWs. This modeling framework reproduces several aspects, thus far unexplained, of social annotation, among which are the peculiar growth of the size of the vocabulary used by the community and its complex network structure that represents an externalization of semantic structures grounded in cognition and that are typically hard to access. PMID:19506244
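The random-walk picture described above can be sketched on a toy graph. Everything here, from the graph to the walk length and number of users, is illustrative rather than the authors' actual model: each "annotation act" is a short random walk over a semantic graph of tags, and the community vocabulary is the union of tags the walks have touched.

```python
import random

# Toy semantic graph: tags as nodes, edges as semantic relatedness.
# Graph, walk length, and user count are all illustrative.
random.seed(42)

graph = {
    "photo": ["image", "camera"],
    "image": ["photo", "picture"],
    "camera": ["photo", "lens"],
    "picture": ["image"],
    "lens": ["camera"],
}

def annotate(start, steps):
    """One user's annotation act: a short random walk emitting tags."""
    node, tags = start, [start]
    for _ in range(steps):
        node = random.choice(graph[node])  # uncoordinated, memoryless step
        tags.append(node)
    return tags

# Many uncoordinated users: the community vocabulary is the union of
# tags touched by all walks and grows as the semantic space is explored.
vocabulary = set()
for _ in range(20):
    vocabulary.update(annotate("photo", steps=3))
print(sorted(vocabulary))
```

Tracking the vocabulary size as a function of the number of walks reproduces, in miniature, the sublinear vocabulary growth the paper analyzes.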

The Federal action addressed by this Environmental Assessment (EA) is joint funding of the retrofitting of a heating and hot water system in a hospital at Marlin, Texas, with a geothermal preheat system. The project will be located within the existing hospital boiler room. One supply well was drilled in an existing adjacent parking lot. It was necessary to drill the well prior to completion of this environmental assessment in order to confirm the reservoir and to obtain fluids for analysis in order to assess the environmental effects of fluid disposal. Fluid from operation will be disposed of by discharging it directly into existing street drains, which will carry the fluid to Park Lake and eventually the Brazos River. Fluid disposal activities are regulated by the Texas Railroad Commission. The local geology is determined by past displacements in the East Texas Basin. Boundaries are marked by the Balcones and the Mexia-Talco fault systems. All important water-bearing formations are in the Cretaceous sedimentary rocks and are slightly to highly saline. Geothermal fluids are produced from the Trinity Group; they range from approximately 3600 to 4000 ppm TDS. Temperatures are expected to be above 64°C (147°F). Surface water flows southeastward as a part of the Brazos River Basin. The nearest perennial stream is the Brazos River 5.6 km (3.5 miles) away, to which surface fluids will eventually discharge. Environmental impacts of construction were small because of the existing structures and paved areas. Construction run-off and geothermal flow-test fluid passed through a small pond in the city park, lowering its water quality, at least temporarily. Construction noise was not out of character with existing noises around the hospital.

Parsons was selected by the United States Agency for International Development (USAID) as the general contractor for construction management for the construction of 2,500 housing units within the Russian Federation. These housing units, to be occupied by Russian officers returning from the Baltic States, are under construction on 15 sites, selected from an initial list of 200 based on habitability, capability of successful final construction, cost meeting USAID guidelines, and impacts on the environment. USAID fulfilled NEPA requirements by preparing, with the assistance of Parsons Engineering Science, a Programmatic Environmental Assessment and 15 site-specific Environmental Assessments for the project. The sites were scattered over the entire Russian Federation west of the Ural Mountains. The site offerors completed an environmental checklist covering a broad range of possible impacts. Significant environmental issues and concerns were further identified during scoping meetings held at the site locations. The most important issues discussed were: soil contamination; gaseous, liquid, and solid pollutants to which the site may be exposed; incompatible adjacent land uses; ready access to utilities and social services; and a socioeconomic situation favorable to resettlement of Russian military officers. No major environmental issues or concerns were identified for the 15 selected sites. Certificates indicating the absence of chemical and radiological surface and subsurface contamination at the proposed sites were provided by the local environmental officers. Polynuclear aromatic hydrocarbons were found at one of the sites considered in a preliminary selection, which was later rejected due to the failure of contractual negotiations. The environmental assessments included mitigation and monitoring measures for construction and operation (occupancy) impacts.

This annotated bibliography is designed to assist rural leaders seeking ways to effectively structure successful job development projects in their communities. The 120 entries are listed in the main body alphabetically by author, and are grouped in the index into categories reflecting Thomas's "seven hallmarks of successful rural development": (1)…

The Biology Department at Salt Lake Community College has used the IMG-ACT toolbox to introduce a genome mapping and annotation exercise into the laboratory portion of its Cell Biology course. This project provides students with an authentic inquiry-based learning experience while introducing them to computational biology and contemporary learning…

State Univ. of New York, Ithaca. School of Hotel Administration at Cornell Univ.

This joint project of the American Library Association, Children's Services Division, and the African American Institute is intended to provide kindergarten through ninth grade children with an annotated listing of appropriate works concerning the land and peoples of the African continent. The listings are presented by country and include…

This paper introduces a new application called multimedia annotation, currently being developed in the European Union-funded project DIANE. A system for instant multimedia authoring, with special features for supporting the creation of multimedia documents in a distributed working environment such as distance education, is described. The system…

The California State Water Resources Control Board’s (SWRCB) GAMA Program is a comprehensive assessment of statewide groundwater quality in California. From 2004 to 2012, the GAMA Program’s Priority Basin Project focused on assessing groundwater resources used for public drinking-water supplies. More than 2,000 public-supply wells were sampled by the U.S. Geological Survey (USGS) for this effort. Starting in 2012, the GAMA Priority Basin Project began an assessment of water resources in shallow aquifers in California. These shallow aquifers provide water for domestic and small community-supply wells, which are often drilled to shallower depths in the groundwater system than public-supply wells. Shallow aquifers are of interest because shallow groundwater may respond more quickly and be more susceptible to contamination from human activities at the land surface than the deeper aquifers. The SWRCB’s GAMA Program was developed in response to the Groundwater Quality Monitoring Act of 2001 (Water Code sections 10780-10782.3): a public mandate to assess and monitor the quality of groundwater resources used for drinking-water supplies, and to increase the availability of information about groundwater quality to the public. The U.S. Geological Survey is the technical lead of the Priority Basin Project. Stewardship of California’s groundwater resources is a responsibility shared among well owners, communities, and the State. Participants and collaborators in the GAMA Program include Regional Water Quality Control Boards, the Department of Water Resources, the Department of Public Health, local and regional groundwater management entities, county and local water agencies, community groups, and private citizens. Well-owner participation in the GAMA Program is entirely voluntary.

Missouri State Dept. of Elementary and Secondary Education, Jefferson City. Div. of Instruction.

This document includes the left-hand column ("What all Students Should Know") and the center column ("What All Students Should Be Able To Do") from "Missouri's Framework for Curriculum Development in Communication Arts K-12." Next to these two columns has been added a column which includes assessment notes for those grade levels which will be…

Missouri State Dept. of Elementary and Secondary Education, Jefferson City. Div. of Instruction.

This document includes the left-hand column ("What All Students Should Know") and the center column ("What All Students Should Be Able To Do") from "Missouri's Framework for Curriculum Development in Communication Arts K-12." Next to these two columns has been added a column which includes assessment notes for those grade levels which will be…

Missouri State Dept. of Elementary and Secondary Education, Jefferson City. Div. of Instruction.

This document includes the left-hand column ("What All Students Should Know") and the center column ("What All Students Should Be Able To Do") from "Missouri's Framework for Curriculum Development in Communication Arts K-12." Next to these two columns has been added a column which includes assessment notes for those grade levels which will be…

This document includes both a booklet and a presentation guide. The booklet contains the anchor papers used to score the 2001 Washington Assessment of Student Learning (WASL) in writing, grade 4. Anchor papers are concrete examples that illustrate the intent of the scoring guides. The papers in the booklet exemplify the full range of score points…

Washington Office of the State Superintendent of Public Instruction, Olympia.

This document includes a booklet and a presentation guide. The booklet contains the anchor papers used to score the 2001 Washington Assessment of Student Learning (WASL) in writing, grade 10. Anchor papers are concrete examples that illustrate the intent of the scoring guides. The papers in the booklet exemplify the full range of score points…

This document includes a booklet and presentation guide. The booklet contains the anchor papers used to score the 2001 Washington Assessment of Student Learning (WASL) in writing, grade 7. Anchor papers are concrete examples that illustrate the intent of the scoring guides. The papers in the booklet exemplify the full range of score points…

Objective: To review the literature on whiplash injury including an overview, collision mechanics, pathophysiology, neurobehavioral, imaging, treatment/management, prognosis, outcomes, and litigation. Design: An annotated bibliography. Methods: A literature search of MEDLINE from 1987 to 1995 and CHIROLARS from 1900 to 1996, with emphasis on the last ten years, was performed. Conference proceedings and the personal files of the authors were searched for relevant citations. Key words utilized in the search were whiplash injury, acceleration/deceleration injury, neck pain, head pain, cognitive impairment, treatment, imaging, prognosis and litigation. Results: This annotated bibliography identifies key studies and potential models for future research. Conclusions: There is currently a lack of clinical consensus both in practice and in the literature regarding the evaluation and management of an episode of whiplash injury. This annotated bibliography has been developed in an attempt to provide an overview of the literature regarding various issues surrounding an episode of whiplash injury.

Purpose: To assess the Luneburg Sustainable University Project (the Project) in a non-European international context; to relate the project scholarly approach to selected scholarly and practice-oriented North American sustainability in higher education (SHE) methods; to analyze project innovations against North American initiatives.…

Functional context for biological sequences is provided in the form of annotations. However, within a group of similar sequences there can be annotation heterogeneity in terms of coverage and specificity. This in turn can introduce issues regarding the interpretation of actual functional similarity and overall functional coherence of such a group. One way to mitigate such issues is through the use of visualization and statistical techniques. Therefore, in order to help interpret this annotation heterogeneity, we created a web application that generates Gene Ontology annotation graphs for protein sets and their associated statistics, from simple frequencies to enrichment values and Information Content-based metrics. The publicly accessible website http://xldb.di.fc.ul.pt/gryfun/ currently accepts lists of UniProt accession numbers in order to create user-defined protein sets for subsequent annotation visualization and statistical assessment. GRYFUN is a freely available web application that allows GO annotation visualization of protein sets and can be used for annotation coherence and cohesiveness analysis and for annotation extension assessment within under-annotated protein sets. PMID:25794277
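The Information Content metric mentioned in the abstract above is conventionally derived from annotation frequencies: rarer, more specific GO terms carry higher information. A minimal sketch, with invented GO term counts purely for illustration (the function name and the data are assumptions, not taken from GRYFUN):

```python
import math

def information_content(term_counts, n_proteins):
    """IC(t) = -log2(p(t)), where p(t) is the fraction of proteins
    in the set annotated with term t. Terms annotated to every
    protein carry zero information; rare terms score highest."""
    return {t: -math.log2(c / n_proteins) for t, c in term_counts.items()}

# Hypothetical set of 8 proteins: all carry the root-like term,
# half carry a catalytic-activity term, one carries a kinase term.
counts = {"GO:0008150": 8, "GO:0003824": 4, "GO:0016301": 1}
ic = information_content(counts, n_proteins=8)
```

A spread of IC values across the set is one simple signal of the annotation heterogeneity the abstract describes.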

The integration of genome annotations is critical to the identification of genetic variants that are relevant to studies of disease or other traits. However, comprehensive variant annotation with diverse file formats is difficult with existing methods. Here we describe vcfanno, which flexibly extracts and summarizes attributes from multiple annotation files and integrates the annotations within the INFO column of the original VCF file. By leveraging a parallel "chromosome sweeping" algorithm, we demonstrate substantial performance gains by annotating ~85,000 variants per second with 50 attributes from 17 commonly used genome annotation resources. Vcfanno is available at https://github.com/brentp/vcfanno under the MIT license. PMID:27250555

Commonly, it is said that there is a lack of communication among scientists, conservators, restorers, project managers and architects. But sometimes this communication flows, with enormous benefits from and for all the participating agents. Such is the case presented in this work, in which the technical agents in charge of the restoration of a heritage building asked for scientific advice to carry out the restoration. The results were successful for both parties, one asking for consultation and the other answering the demands and resolving real problems. The building is a marvellous Renaissance palace (the Medinaceli Dukes' palace, 15th-16th centuries) in the central area of Spain (Cogolludo, Guadalajara). Within the restoration project, we were asked for consultancy on matters such as the assessment of the cleaning method already specified in the project for the stone façades; the efficacy and durability of some conservation products to be applied; the presence or absence of a patina on the stone; the viability of using certain restoration mortars; and the origin of some efflorescences that appeared just after a restoration rendering mortar was applied to the building. These matters were addressed by performing tests both in the lab and on site in the building. The efficiency and effects on the stone of the blasting cleaning method were assessed by first analysing the nature and thickness of the surface deposits to be removed (SEM-EDS analyses); secondly, roughness and colour measurements were performed; and thirdly, SEM-EDS analyses were carried out again to determine whether the cleaning method removed the surface deposits partially or completely, or even part of the stone substrate. Some conservation products were tested on stone specimens for both efficacy and durability, concluding that it was better not to apply any of them. A patina was found on the stone façade under SEM

This is an informative listing for educators, librarians, and others interested in materials for bilingual multicultural education. There are two main sections, annotations and analyses. Annotated entries are arranged under the following headings: (1) assessment and evaluation; (2) bibliographies; (3) classroom resources; (4) English as a second…

A bibliography of internal and external documents produced by the Jet Propulsion Laboratory, based on the work performed by the Photovoltaics Program Analysis and Integration Center, is presented with annotations. As shown in the Table of Contents, the bibliography is divided into three subject areas: (1) Assessments, (2) Methodological Studies, and (3) Supporting Studies. Annotated abstracts are presented for 20 papers.

Alignment-Annotator is a novel web service designed to generate interactive views of annotated nucleotide and amino acid sequence alignments (i) de novo and (ii) embedded in other software. All computations are performed at the server side. Interactivity is implemented in HTML5, a language native to web browsers. The alignment is initially displayed using default settings and can be modified with the graphical user interfaces. For example, individual sequences can be reordered or deleted using drag and drop, amino acid color code schemes can be applied and annotations can be added. Annotations can be made manually or imported (BioDAS servers, UniProt, the Catalytic Site Atlas and the PDB). Some edits take immediate effect while others require server interaction and may take a few seconds to execute. The final alignment document can be downloaded as a zip-archive containing the HTML files. Because of the use of HTML, the resulting interactive alignment can be viewed on any platform including Windows, Mac OS X, Linux, Android and iOS in any standard web browser. Importantly, no plugins or Java are required, and therefore Alignment-Annotator represents the first interactive browser-based alignment visualization. Availability: http://www.bioinformatics.org/strap/aa/ and http://strap.charite.de/aa/. PMID:24813445

Video annotation is important for web indexing and browsing systems. Indeed, in order to evaluate the performance of video query and mining techniques, databases with concept annotations are required. Therefore, it is necessary to generate a database with a semantic indexing that represents the digital content of the Mexican bullfighting atmosphere. This paper proposes a scheme for making complex annotations in a video within the frame of a multimedia search engine project. Each video is partitioned using our segmentation algorithm, which creates shots of different lengths and different numbers of frames. In order to make complex annotations about the video, we use the ELAN software. The annotations are done in two steps: first, we take notes on the whole content of each shot; second, we describe the actions in terms of camera parameters such as direction, position and depth. As a consequence, we obtain a more complete descriptor of every action. In both cases we use the concepts of the TRECVid 2014 dataset, and we also propose new concepts. This methodology allows generating a database with the information necessary to create descriptors and algorithms capable of detecting actions, in order to automatically index and classify new bullfighting multimedia content.

Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Format (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.

Numerous databases containing information about DNA, RNA, and protein variations are available. Gene-specific variant databases (locus-specific variation databases, LSDBs) are typically curated and maintained for single genes or groups of genes for a certain disease(s). These databases are widely considered as the most reliable information source for a particular gene/protein/disease, but it should also be made clear they may have widely varying contents, infrastructure, and quality. Quality is very important to evaluate because these databases may affect health decision-making, research, and clinical practice. The Human Variome Project (HVP) established a Working Group for Variant Database Quality Assessment. The basic principle was to develop a simple system that nevertheless provides a good overview of the quality of a database. The HVP quality evaluation criteria that resulted are divided into four main components: data quality, technical quality, accessibility, and timeliness. This report elaborates on the developed quality criteria and how implementation of the quality scheme can be achieved. Examples are provided for the current status of the quality items in two different databases, BTKbase, an LSDB, and ClinVar, a central archive of submissions about variants and their clinical significance. PMID:26919176

In the September 2010 issue of JGME, the Pediatric Milestones Working Group published "The Pediatrics Milestones: Conceptual Framework, Guiding Principles, and Approach to Development", a document that describes the construction of the first iteration of the Pediatric Milestones. These Milestones were developed by the Working Group as a group of practical behavioral expectations for each of the 52 sub-competencies. In constructing these Milestones, the authors were cognizant of the need to ground the Milestones themselves in evidence, theories or other conceptual frameworks that would provide the basis for the ontogeny of development for each sub-competency. During this next phase of the Milestones development, the process will continue with consultation with content experts and consideration of assessment of Milestones. We have described possible measurement tools, explored threats to validity, establishment of benchmarks, and possible approaches to reporting of performance. The vision of the Pediatrics Milestone Project is to understand the development of a pediatrician from entry into medical school through the twilight of a physician's career, and the work will require a collaborative effort of the undergraduate and graduate medical education communities, and the accrediting and certifying bodies. PMID:22132281

This paper depicts the method used to quantify the environmental impact of mining activities in surface mine projects. The affected environment was broken down into thirteen components, such as human health and immunity, surface water, and air quality. The effect of twenty impacting factors from the mining and milling activities was then calculated for each environmental component. Environmental assessments are often performed by using matrix methods in which one dimension of the matrix is the "Impacting Factor" and the other is the "Environmental Component". For the presented matrix method, each Impacting Factor was first given a magnitude between -10 and 10. These factors are used to set up a matrix named the Impacting Factor Matrix, whose elements represent the Impacting Factor values. The effects of each Impacting Factor on each Environmental Component were then quantified by multiplying the Impacting Factor Matrix by the Weighting Factor Matrix; the elements of the Weighting Factor Matrix reflect the effects of each Impacting Factor on each Environmental Component. The outlined method was originally developed for a mining and milling operation in Iran, but it can successfully be used for mining ventures and more general industrial activities in other countries in accordance with their environmental regulations and laws. PMID:19286301
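The matrix arithmetic described above, a weighting matrix applied to a vector of impacting-factor magnitudes, can be sketched in a few lines. The factor values and weights below are invented for illustration and are not taken from the Iranian case study:

```python
def impact_vector(weights, factors):
    """Multiply the weighting matrix (one row per environmental
    component, one column per impacting factor) by the vector of
    factor magnitudes to score each environmental component."""
    return [sum(w * f for w, f in zip(row, factors)) for row in weights]

# Hypothetical magnitudes on the -10..10 scale (negative = adverse)
factors = [-6.0, 3.0, -2.0]

# Hypothetical weights: how strongly each factor affects a component
weights = [
    [0.8, 0.1, 0.3],   # e.g. surface water
    [0.2, 0.0, 0.9],   # e.g. air quality
]

impacts = impact_vector(weights, factors)
# Negative totals flag components with a net adverse impact.
```

In the full method the matrices would be 13 x 20 rather than 2 x 3, but the computation is the same row-by-row weighted sum.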

Idaho National Laboratory, along with Idaho State University’s Idaho Accelerator Center and Los Alamos National Laboratory, is developing an electron accelerator-based, photonuclear inspection technology, called the Pulsed Photonuclear Assessment (PPA) system, for the detection of nuclear material concealed within air-, rail-, and, primarily, maritime-cargo transportation containers. This report summarizes the advances and progress of the system’s development in 2005. The contents of this report include an overview of the prototype inspection system, selected Receiver-Operator-Characteristic curves for system detection performance characterization, a description of the approach used to integrate the three major detection components of the PPA inspection system, highlights of the gray-scale density mapping technique being used for significant shield material detection, and higher electron beam energy detection results to support an evaluation for an optimal interrogating beam energy. This project is supported by the Department of Homeland Security Office of Research and Development and, more recently, the Domestic Nuclear Detection Office.

The purpose of this Environmental Assessment (EA) is to update the ``Test Area North Pool Stabilization Project`` EA (DOE/EA-1050) and finding of no significant impact (FONSI) issued May 6, 1996. This update analyzes the environmental and health impacts of a drying process for the Three Mile Island (TMI) nuclear reactor core debris canisters now stored underwater in a facility on the Idaho National Engineering and Environmental Laboratory (INEEL). A drying process was analyzed in the predecision versions of the EA released in 1995, but that particular process was determined to be ineffective and dropped from the EA/FONSI issued May 6, 1996. The origin and nature of the TMI core debris and the proposed drying process are described and analyzed in detail in this EA. As did the 1996 EA, this update analyzes the environmental and health impacts of removing various radioactive materials from underwater storage, dewatering these materials, constructing a new interim dry storage facility, and transporting and placing the materials into the new facility. Also, as did the 1996 EA, this EA analyzes the removal, treatment and disposal of water from the pool, and placement of the facility into a safe, standby condition. The entire action would take place within the boundaries of the INEEL. The materials are currently stored underwater in the Test Area North (TAN) building 607 pool; the new interim dry storage facility would be constructed at the Idaho Chemical Processing Plant (ICPP), which is about 25 miles south of TAN.

The purpose of this Environmental Assessment (EA) is to update the ``Test Area North Pool Stabilization Project`` EA (DOE/EA-1050) and finding of no significant impact (FONSI) issued May 6, 1996. This update analyzes the environmental and health impacts of a drying process for the Three Mile Island (TMI) nuclear reactor core debris canisters now stored underwater in a facility on the Idaho National Engineering and Environmental Laboratory (INEEL). A drying process was analyzed in the predecision versions of the EA released in 1995, but that particular process was determined to be ineffective and dropped from the EA/FONSI issued May 6, 1996. A new drying process was subsequently developed and is analyzed in Section 2.1.2 of this document. As did the 1996 EA, this update analyzes the environmental and health impacts of removing various radioactive materials from underwater storage, dewatering these materials, constructing a new interim dry storage facility, and transporting and placing the materials into the new facility. Also, as did the 1996 EA, this EA analyzes the removal, treatment and disposal of water from the pool, and placement of the facility into a safe, standby condition. The entire action would take place within the boundaries of the INEEL. The materials are currently stored underwater in the Test Area North (TAN) building 607 pool; the new interim dry storage facility would be constructed at the Idaho Chemical Processing Plant (ICPP), which is about 25 miles south of TAN.

This document updates the bibliography published in Liquefied Gaseous Fuels Safety and Environmental Control Assessment Program: third status report (PNL-4172) and is a complete listing of literature reviewed and reported under the LNG Technical Surveillance Task. The bibliography is organized alphabetically by author.

The McLennan Community College Multi-County Needs Assessment Project's (MAP) survey, assessing the felt and perceived needs, problems, and interests of the local population relative to education and training programs, is discussed in the document. The Needs Assessment Survey, one component of MAP, was conducted in the central Texas area (Bosque,…

Project C-018H is one of the fourteen subprojects to the Hanford Environmental Compliance (HEC) Project. Project C-018H provides treatment and disposal for the 242-A Evaporator and PUREX plant process condensate waste streams. This project used the Integrated Management Team (IMT) approach proposed by RL. The IMT approach included all affected organizations on the project team to coordinate and execute all required project tasks, while striving to integrate and satisfy all technical, operational, functional, and organizational objectives. The HEC Projects were initiated in 1989. Project C-018H began in early 1990, with completion of construction currently targeted for mid-1995. This assessment was performed to evaluate the effectiveness of the management control on design documents and quality assurance records developed and submitted for processing, use, and retention for the Project. The assessment focused primarily on the overall adequacy and quality of the design documentation currently being submitted to the project document control function.

This annotated bibliography presents annotations of 31 books and journal articles dealing with systems theory and its relation to organizational communication, marketing, information theory, and cybernetics. Materials were published between 1963 and 1992 and are listed alphabetically by author. (RS)

One hundred and forty citations comprise this annotated bibliography of books, articles, and selected dissertations that encompass trends in music theory and k-16 music education since the late 19th century. Special emphasis is upon writings since the 1950's. During earlier development, music analysts concentrated upon the elements of music (i.e.,…

This annotated bibliography lists 40 items, published between 1966 and 1971, that have to do with teacher aides. The listing is arranged alphabetically by author. In addition to the abstract and standard bibliographic information, addresses where the material can be purchased are often included. The items cited include handbooks, research studies,…

Intended for parents, health professionals and allied health workers, and others involved in caring for infants and young children, this annotated bibliography brings together in one selective listing a review of over 700 current publications related to infant feeding. Reflecting current knowledge in infant feeding, the bibliography has as its…

This annotated bibliography represents a first step toward compiling a comprehensive overview of current research on issues related to English language learners (ELLs). It is intended to be a resource for researchers, policymakers, administrators, and educators who are engaged in efforts to bridge the divide between research, policy, and practice…

This annotated bibliography on Vietnamese Amerasians includes primary and secondary sources as well as reviews of three documentary films. Sources were selected in order to provide an overview of the historical and political context of Amerasian resettlement and a review of the scant available research on coping and adaptation with this…

This selective annotated bibliography covers various sources of information on the radiocarbon dating method, including journal articles, conference proceedings, and reports, reflecting the most important and useful sources of the last 25 years. The bibliography is divided into five parts--general background on radiocarbon, radiocarbon dating,…

An annotated bibliography lists 74 articles and reports on instructional materials centers (IMC) which appeared from 1967-70. The articles deal with such topics as the purposes of an IMC, guidelines for setting up an IMC, and the relationship of an IMC to technology. Most articles deal with use of an IMC on an elementary or secondary level, but…

This annotated bibliography on bibliotherapy is composed of 138 citations ranging in date from 1936 to 1967. It is designed to aid teachers and librarians in modifying the attitudes and behavior of boys and girls. Its listings are arranged alphabetically according to author under the general divisions of books, periodicals, and unpublished…

This annotated bibliography lists books, films, filmstrips, recordings, and booklets on sex equity. Entries are arranged according to the following topics: career resources, curriculum resources, management, sex equity, sex roles, women's studies, student activities, and sex-fair fiction. Included in each entry are name of author, editor or…

In his introduction to the 86-item annotated bibliography by Mueller and Poliakoff, McKenna discusses his views on teacher evaluation and his impressions of the documents cited. He observes, in part, that the current concern is with the process of evaluation and that most researchers continue to believe that student achievement is the most…

This report, which is based on a review of practitioner-oriented sources and scholarly journals, uses a three-part framework to organize annotated bibliographies that, together, list a total of 104 sources that provide the following three perspectives on work force reduction issues: organizational, organizational-individual relationship, and…

Massachusetts Dept. of Education, Boston. Bureau of Nutrition Education and School Food Services.

This annotated bibliography on nutrition is for the use of teachers at the elementary grade level. It contains a list of books suitable for reading about nutrition and foods for pupils from kindergarten through the sixth grade. Films and audiovisual presentations for classroom use are also listed. The names and addresses from which these materials…

This bibliography compiles annotations of 178 books, journal articles, ERIC documents, and dissertations on Appalachian women and their social, cultural, and economic environment. Entries were published 1966-93 and are listed in the following categories: (1) authors and literary criticism; (2) bibliographies and resource guides; (3) economics,…

This annotated bibliography describes 53 books, papers, and articles written about efforts toward integrating and improving human services for children, youth, and families living in poverty. The bibliography has been developed for individuals working on and interested in service integration, including policymakers, program administrators,…

Project-level impact assessment was originally conceived as a snapshot taken in advance of project implementation, contrasting current conditions with a likely future scenario involving a variety of predicted impacts. Current best practice guidance has encouraged a shift towards longitudinal assessments from the pre-project stage through the implementation and operating phases. Experience and study show, however, that assessment of infrastructure-intensive projects rarely endures past the project's construction phase. Negative consequences for environmental, social and health outcomes have been documented. Such consequences clarify the pressing need for longitudinal assessment in each of these domains, with human rights impact assessment (HRIA) as an umbrella over, and critical augmentation of, environmental, social and health assessments. Project impacts on human rights are more closely linked to political, economic and other factors beyond immediate effects of a company's policy and action throughout the project lifecycle. Delineating these processes requires an adequate framework, with strategies for collecting longitudinal data, protocols that provide core information for impact assessment and guidance for adaptive mitigation strategies as project-related effects change over time. This article presents general principles for the design and implementation of sustained, longitudinal HRIA, based on experience assessing and responding to human rights impact in a uranium mining project in Malawi. The case study demonstrates the value of longitudinal assessment both for limiting corporate risk and improving human welfare. - Graphical abstract: Assessing changes in human rights condition as affected by both project and context, over time. - Highlights: • Corporate capital projects affect human rights in myriad ways. • Ongoing, longitudinal impact assessment techniques are needed. • We present an approach for conducting longitudinal human rights impact assessment

For their ambitious project, called America at War, high school juniors at Da Vinci Charter Academy in the Davis (California) Joint Unified School District didn't just study history. They became historians. Their project offers compelling evidence of what students can accomplish through project-based learning (PBL), an instructional approach that…

This annotated bibliography describes materials that can be helpful to adult educators working with exceptional adults. The bibliography includes 186 citations of resource materials, assessment materials, training guides, curriculum guides, research findings, films, and general information. The opening section consists of citations of general…

Noting that the use of multi-rater assessment tools (also called 360-degree feedback) in organizations has increased dramatically in recent years, this booklet provides an introduction to this growing practical literature. In addition to annotations of 56 books and journal articles published between 1990 and 1997, the booklet answers frequently…

Magnifying Genomes (MaGe) is a microbial genome annotation system based on a relational database containing information on bacterial genomes, as well as a web interface for carrying out genome annotation projects. Our system allows one to initiate the annotation of a genome at the early stage of the finishing phase. MaGe's main features are (i) integration of annotation data from bacterial genomes enhanced by a gene coding re-annotation process using accurate gene models, (ii) integration of results obtained with a wide range of bioinformatics methods, among which exploration of gene context by searching for conserved synteny and reconstruction of metabolic pathways, (iii) an advanced web interface allowing multiple users to refine the automatic assignment of gene product functions. MaGe is also linked to numerous well-known biological databases and systems. Our system has been thoroughly tested during the annotation of complete bacterial genomes (Acinetobacter baylyi ADP1, Pseudoalteromonas haloplanktis, Frankia alni) and is currently used in the context of several new microbial genome annotation projects. In addition, MaGe allows for annotation curation and exploration of already published genomes from various genera (e.g. Yersinia, Bacillus and Neisseria). MaGe can be accessed at http://www.genoscope.cns.fr/agc/mage. PMID:16407324

The U.S. Department of Energy's (DOE) Office of River Protection (ORP) needs to collect engineering and technical information on (1) the physical response and behavior of a Phase I grout fill in an actual tank, (2) field deployment of grout production equipment and (3) the conduct of component closure activities for single-shell tank (SST) 241-C-106 (C-106). Activities associated with this Accelerated Tank Closure Demonstration (ATCD) project include placement of grout in C-106 following retrieval, and associated component closure activities. The activities will provide information that will be used in determining future closure actions for the remaining SSTs and tank farms at the Hanford Site. This information may also support preparation of the Environmental Impact Statement (EIS) for Retrieval, Treatment, and Disposal of Tank Waste and Closure of Single-Shell Tanks at the Hanford Site, Richland, Washington (Tank Closure EIS). Information will be obtained from the various activities associated with the component closure activities for C-106 located in the 241-C tank farm (C tank farm) under the "Resource Conservation and Recovery Act of 1976" (RCRA) and the Hanford Federal Facility Agreement and Consent Order (HFFACO) (Ecology et al. 1989). The impacts of retrieving waste from C-106 are bounded by the analysis in the Tank Waste Remediation System (TWRS) EIS (DOE/EIS-0189), hereinafter referred to as the TWRS EIS. DOE has conducted and continues to conduct retrieval activities at C-106 in preparation for the ATCD Project. For major federal actions significantly affecting the quality of the human environment, the "National Environmental Policy Act of 1969" (NEPA) requires that federal agencies evaluate the environmental effects of their proposed and alternative actions before making decisions to take action. The President's Council on Environmental Quality (CEQ) has developed regulations for implementing NEPA. These regulations are found in Title 40 of the Code

This paper explores the annotation and classification of students' revision behaviors in argumentative writing. A sentence-level revision schema is proposed to capture why and how students make revisions. Based on the proposed schema, a small corpus of student essays and revisions was annotated. Studies show that manual annotation is reliable with…

This 873-item annotated bibliography cites books, pamphlets, leaflets, and other materials produced for education about alcohol from 1950 to May 1973. The major part of each annotation is a brief summary of the contents. The annotation also contains a statement of orientation or type of presentation and evaluative comments. Each item is classified…

This article presents a new approach for extracting high level semantic concepts from digital histopathological images. This strategy provides not only annotation of several biological concepts, but also a coarse location of these concepts. The proposed approach is composed of five main steps: (1) a stain decomposition stage, which separates the contribution of hematoxylin and eosin dyes, (2) a color standardization that corrects color image differences, (3) a part-based representation, which describes the image in terms of the conditional probability of relevant local patches, selected by their stain contributions, (4) a discriminative classification model, which bridges the found patterns and the biological concepts, (5) a block-based annotation strategy that identifies the multiple biological concepts within an image. A set of 655 skin images containing 10 biological concepts of skin tissue was used to assess the proposed approach, obtaining a sensitivity of 84% and a specificity of 67% when annotating images with multiple concepts. PMID:21997952
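The stain-decomposition step described above (separating hematoxylin and eosin contributions) can be sketched with a Beer-Lambert optical-density transform. The stain vectors below are generic illustrative values assumed for this sketch, not the calibrated vectors from the paper, and the normalized dot-product projection stands in for a full least-squares deconvolution:

```python
import math

# Illustrative stain vectors in optical-density (OD) space
# (rows: hematoxylin, eosin). Not the paper's calibrated values.
STAINS = {
    "hematoxylin": (0.65, 0.70, 0.29),
    "eosin":       (0.07, 0.99, 0.11),
}

def optical_density(rgb, background=255.0):
    """Beer-Lambert transform: OD = -log10(I / I0), per channel."""
    return tuple(-math.log10(max(c, 1.0) / background) for c in rgb)

def stain_contributions(rgb):
    """Project a pixel's OD vector onto each stain vector.

    A real pipeline would solve a least-squares deconvolution per pixel;
    a normalized dot product keeps the sketch short.
    """
    od = optical_density(rgb)
    out = {}
    for name, vec in STAINS.items():
        norm = math.sqrt(sum(v * v for v in vec))
        out[name] = sum(o * v for o, v in zip(od, vec)) / norm
    return out
```

A bluish-purple pixel projects more strongly onto the hematoxylin vector than onto eosin, while a white background pixel has near-zero contribution for both.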

The Kinder Lernen Deutsch (LKD) materials evaluation project identifies materials appropriate for the elementary school German classrooms in grades K-8. This guide consists of an annotated bibliography, with ratings, of these materials. The guiding principles by which the materials were assessed were: use of the communicative approach; integration…

The US Department of Energy (DOE) proposes to build a beamline on the Fermi National Accelerator Laboratory (Fermilab) site to accommodate an experimental research program in neutrino physics. The proposed action, called Neutrino Beams at the Main Injector (NuMI), is to design, construct, operate and decommission a facility for producing and studying a high flux beam of neutrinos in the energy range of 1 to 40 GeV (1 GeV is one billion, or 10^9, electron volts). The proposed facility would initially be dedicated to two experiments, COSMOS (Cosmologically Significant Mass Oscillations) and MINOS (Main Injector Neutrino Oscillation Search). The neutrino beam would pass underground from Fermilab to northern Minnesota. A tunnel would not be built in this intervening region because the neutrinos easily pass through the earth, not interacting, similar to the way that light passes through a pane of glass. The beam is pointed towards the MINOS detector in the Soudan Underground Laboratory in Minnesota. Thus, the proposed project also includes construction, operation and decommissioning of the facility located in the Soudan Underground Laboratory in Minnesota that houses this MINOS detector. This environmental assessment (EA) has been prepared by the US Department of Energy (DOE) in accordance with the DOE's National Environmental Policy Act (NEPA) Implementing Procedures (10 CFR 1021). This EA documents DOE's evaluation of potential environmental impacts associated with the proposed construction and operation of NuMI at Fermilab and its far detector facility located in the Soudan Underground Laboratory in Minnesota. Any future use of the facilities on the Fermilab site would require the administrative approval of the Director of Fermilab and would undergo a separate NEPA review. Fermilab is a Federal high-energy physics research laboratory in Batavia, Illinois, operated on behalf of the DOE by Universities Research Association, Inc.

Background Traditional genome annotation systems were developed in a very different computing era, one where the World Wide Web was just emerging. Consequently, these systems are built as centralized black boxes focused on generating high quality annotation submissions to GenBank/EMBL supported by expert manual curation. The exponential growth of sequence data drives a growing need for increasingly higher quality and automatically generated annotation. Typical annotation pipelines utilize traditional database technologies, clustered computing resources, Perl, C, and UNIX file systems to process raw sequence data, identify genes, and predict and categorize gene function. These technologies tightly couple the annotation software system to hardware and third party software (e.g. relational database systems and schemas). This makes annotation systems hard to reproduce, inflexible to modification over time, difficult to assess, difficult to partition across multiple geographic sites, and difficult to understand for those who are not domain experts. These systems are not readily open to scrutiny and therefore not scientifically tractable. The advent of Semantic Web standards such as Resource Description Framework (RDF) and OWL Web Ontology Language (OWL) enables us to construct systems that address these challenges in a new comprehensive way. Results Here, we develop a framework for linking traditional data to OWL-based ontologies in genome annotation. We show how data standards can decouple hardware and third party software tools from annotation pipelines, thereby making annotation pipelines easier to reproduce and assess. An illustrative example shows how TURTLE (Terse RDF Triple Language) can be used as a human readable, but also semantically-aware, equivalent to GenBank/EMBL files. Conclusions The power of this approach lies in its ability to assemble annotation data from multiple databases across multiple locations into a representation that is understandable to
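A minimal sketch of the TURTLE idea described above: rendering one GenBank-style CDS feature as human-readable triples. The `ex:` namespace and predicate names are invented for illustration and are not the ontology used by the authors:

```python
def feature_to_turtle(locus_tag, start, end, strand, product):
    """Render a GenBank-style CDS feature as a TURTLE fragment.

    The ex: namespace and predicates are placeholders, not a real ontology.
    """
    prefix = "@prefix ex: <http://example.org/genome#> .\n"
    body = (
        f"ex:{locus_tag} a ex:CDS ;\n"
        f"    ex:start {start} ;\n"
        f"    ex:end {end} ;\n"
        f"    ex:strand \"{strand}\" ;\n"
        f"    ex:product \"{product}\" .\n"
    )
    return prefix + body
```

The resulting text stays readable to a curator while remaining parseable by any RDF toolchain, which is the decoupling argument the abstract makes.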

This paper focuses on the difficulties of assessing multi-year team projects, in which a team of students drawn from all three years of a full-time degree course works on a problem with and for a real-life organization. Although potential solutions to the problem of assessing team projects may be context-dependent, we believe that discussing these…

... Geological Survey Notice of Availability of the Final Environmental Assessment for Solar Roof Project AGENCY..., the US Geological Survey (USGS) has prepared a Final Environmental Assessment for the Solar Roof... for the Solar Roof Project should immediately contact the USGS S.O. Conte Anadromous Fish...

Identifies the demand for distributional impact assessment related to forest policies and projects and the linkages between distributional impacts and sustainable development. An integrated model is developed to assess the distributional impact of forest policies and projects. Studying the impact of the introduction of structural particleboard…

The Southeastern Isolated Wetlands Assessment is the new Regional Environmental Monitoring and Assessment Program (REMAP) project in EPA Region 4. The project will produce data and synthesis on the ways that isolated wetlands can protect downstream water quality at a watershed s...

Report highlights the increase in resources, project speed, and scale that is required to achieve the U.S. Department of Defense (DoD) energy efficiency and renewable energy goals and summarizes the net zero energy installation assessment (NZEI) process and the lessons learned from NZEI assessments and large-scale renewable energy project implementations at DoD installations.

This annual report summarizes the activities and accomplishments of the Solar Radiation Resource Assessment Project during fiscal year 1992 (1 October to 30 September 1992). Managed by the Analytic Studies Division of the National Renewable Energy Laboratory, this project is the major activity of the US Department of Energy's Resource Assessment Program.

... Federal Highway Administration Environmental Assessment for the I-395 Air Rights Project AGENCY: Federal... Department of Transportation are announcing the availability for public review of the Environmental Assessment (EA) prepared for the I-395 Air Rights Projects in conjunction with the District Department...

This article explores changes in teachers' beliefs and practice concerning assessment after participating in a project for improving assessment practices in Norwegian schools. The project was initiated by the Norwegian Directorate for Education and Training in 2008, and included a total of 77 schools, more than 600 teachers and a sample of their…

Dual Organellar GenoMe Annotator (DOGMA) automates the annotation of extra-nuclear organellar (chloroplast and animal mitochondrial) genomes. It is a web-based package that allows the use of comparative BLAST searches to identify and annotate genes in a genome. DOGMA presents a list of putative genes to the user in a graphical format for viewing and editing. Annotations are stored on our password-protected server. Complete annotations can be extracted for direct submission to GenBank. Furthermore, intergenic regions of specified length can be extracted, as well as the nucleotide sequences and amino acid sequences of the genes.

The Water Resources Council is completing a water assessment of synfuels development in the Upper Missouri River Basin. This is being done under Section 13(a) of the Federal Nonnuclear Energy Research and Development Act. The assessment area includes the coal deposits in the Mercer County project site. Levels of North Dakota coal gasification development that are several times the production level of the Great Plains gasification project are being examined. This report assesses: (1) the availability of adequate water supplies to meet the water requirements of the project, supporting activities, and other development induced by the project; and (2) the changes in the water resources that will result from the project. Findings of the 13(a) assessment show that water supplies are physically available within the mainstem of the Missouri River in North Dakota to supply the requirements of the gasification facilities and the supporting activities - mining and reclamation, electricity, and project-induced population increases.

This report was prepared under the provisions of paragraph (c) of Sect. 13 of the Federal Nonnuclear Energy Research and Development Act of 1974. It assesses (1) the availability of adequate water supplies to meet the water requirement of the project, supporting activities and other development induced by the project and (2) the changes in the water resources that will result from the project.

Augusta Newsprint undertook a plant-wide energy efficiency assessment of its Augusta, Georgia, plant in 2001. The assessment helped the company decide to implement five energy efficiency projects. Four of the five projects will save the company 11,000 MWh of electrical energy (about $369,000) each year. The remaining project will produce more than $300,000 annually, from sale of the byproduct turpentine. The largest annual savings, $881,000, will come from eliminating Kraft pulp by using better process control. All of the projects could be applied to other paper mills and most of the projects could be applied in other industries.

Since 2002, information on individual microRNAs (miRNAs), such as reference names and sequences, has been stored in miRBase, the reference database for miRNA annotation. As a result of progressive insights into the miRNome and its complexity, miRBase underwent addition and deletion of miRNA records, changes in annotated miRNA sequences and adoption of more complex naming schemes over time. Unfortunately, miRBase does not allow straightforward assessment of these ongoing miRNA annotation changes, which has resulted in substantial ambiguity regarding miRNA identity and sequence in public literature, in target prediction databases and in content on various commercially available analytical platforms. As a result, correct interpretation, comparison and integration of miRNA study results are compromised, which we demonstrate here by assessing the impact of ignoring sequence annotation changes. To address this problem, we developed miRBase Tracker (www.mirbasetracker.org), an easy-to-use online database that keeps track of all historical and current miRNA annotation present in the miRBase database. Three basic functionalities allow researchers to keep their miRNA annotation up-to-date, reannotate analytical miRNA platforms and link published results with outdated annotation to the latest miRBase release. We expect miRBase Tracker to increase the transparency and annotation accuracy in the field of miRNA research. Database URL: www.mirbasetracker.org PMID:25157074
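The kind of release-to-release bookkeeping that miRBase Tracker provides can be sketched as a history table keyed by a stable accession. The accession, release labels, and records below are hypothetical, not actual miRBase entries:

```python
# Hypothetical annotation history keyed by a stable accession.
# Values map release label -> (name, sequence) as recorded in that release.
history = {
    "MIMAT9999999": {  # placeholder accession, not a real miRBase record
        "v17": {"name": "hsa-miR-21",    "seq": "UAGCUUAUCAGACUGAUGUUGA"},
        "v21": {"name": "hsa-miR-21-5p", "seq": "UAGCUUAUCAGACUGAUGUUGA"},
    },
}

def annotation_changes(accession):
    """Report name/sequence changes between consecutive releases."""
    releases = sorted(history[accession])
    changes = []
    for old, new in zip(releases, releases[1:]):
        a, b = history[accession][old], history[accession][new]
        if a["name"] != b["name"]:
            changes.append(("name", old, new, a["name"], b["name"]))
        if a["seq"] != b["seq"]:
            changes.append(("seq", old, new, a["seq"], b["seq"]))
    return changes
```

Keeping the accession stable while names and sequences drift is exactly what lets outdated results in the literature be linked back to current annotation.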

Recent technology assessment studies sponsored by NASA are reviewed, and a summary of the technical results as well as a critique of the methodologies are presented. The reviews include Assessment of Lighter-Than-Air Technology, Technology Assessment of Portable Energy RDT&P, Technology Assessment of Future Intercity Passenger Transportation Systems, and Technology Assessment of Space Disposal of Radioactive Nuclear Waste. The use of workshops has been introduced as a unique element of some of these assessments. Also included in this report is a brief synopsis of a method of quantifying opinions obtained through such group interactions. Representative of the current technology assessments, these studies cover a broad range of socio-political factors and issues in greater depth than previously considered in NASA sponsored studies. In addition to the lessons learned through the conduct of these studies, a few suggestions for improving the effectiveness of future technology assessments are provided.

Although the medical scanners are rapidly moving towards a three-dimensional paradigm, the manipulation and annotation/labeling of the acquired data is still performed in a standard 2D environment. Editing and annotation of three-dimensional medical structures is currently a complex task and rather time-consuming, as it is carried out in 2D projections of the original object. A major problem in 2D annotation is the depth ambiguity, which requires 3D landmarks to be identified and localized in at least two of the cutting planes. Operating directly in a three-dimensional space enables the implicit consideration of the full 3D local context, which significantly increases accuracy and speed. A three-dimensional environment is also more natural, optimizing the user's comfort and acceptance. The 3D annotation environment requires a three-dimensional manipulation device and display. By means of two novel and advanced technologies, the Wii Nintendo Controller and the Philips 3D WoWvx display, we define an appropriate 3D annotation tool and a suitable 3D visualization monitor. We define a non-coplanar setting of four infrared LEDs with a known and exact position, which are tracked by the Wii and from which we compute the pose of the device by applying a standard pose estimation algorithm. The novel 3D renderer developed by Philips either uses the Z-value of a 3D volume or computes the depth information from a 2D image, to provide a real 3D experience without the need for special glasses. Within this paper we present a new framework for manipulation and annotation of medical landmarks directly in three-dimensional volume.

User's information needs and the tasks they face change over the stages of a research project. In previous research by Peiling Wang, a cognitive model of users' document selection behavior for their research projects was developed. This study looks at the general applicability of Wang's model to subsequent decision-making about items selected…

The goal of the U.S. Department of Energy's (DOE) Clean Coal Technology (CCT) program is to furnish the energy marketplace with a number of advanced, more efficient, and environmentally responsible coal-utilization technologies through demonstration projects. These projects seek to establish the commercial feasibility of the most promising advanced coal technologies that have developed beyond the proof-of-concept stage.

Project-based learning (PBL) that has authenticity in the pupils' world enables the teaching of science and technology to pupils from a variety of backgrounds. PBL has the potential to enable pupils to research, plan, design, and reflect on the creation of technological projects (Doppelt, 2000). Engineering education, which is common in Israel,…

This notice announces BPA's decision to fund the Oregon Department of Fish and Wildlife (ODFW), the Washington Department of Fish and Wildlife (WDFW), and the Clatsop Economic Development Committee for the Lower Columbia River Terminal Fisheries Research Project (Project). The Project will continue the testing of various species/stocks, rearing regimes, and harvest options for terminal fisheries, as a means to increase lower river sport and commercial harvest of hatchery fish, while providing both greater protection of weaker wild stocks and increasing the return of upriver salmon runs to potential Zone 6 Treaty fisheries. The Project involves relocating hatchery smolts to new, additional pen locations in three bays/sloughs in the lower Columbia River along both the Oregon and Washington sides. The sites are Blind Slough and Tongue Point in Clatsop County, Oregon, and Grays Bay/Deep River, Wahkiakum County, Washington. The smolts will be acclimated for various lengths of time in the net pens and released from these sites. The Project will expand upon an existing terminal fisheries project in Youngs Bay, Oregon. The Project may be expanded to other sites in the future, depending on the results of this initial expansion. BPA has determined the project is not a major Federal action significantly affecting the quality of the human environment, within the meaning of the National Environmental Policy Act (NEPA) of 1969. Therefore, the preparation of an environmental impact statement is not required, and BPA is issuing this FONSI.

This discussion of methods used to assess the effectiveness of training for U.S. Army personnel identifies various types of training, describes methods currently used, and suggests ways of improving the assessment process. The methodology and results of assessments of effectiveness, including the costs associated with the level of performance, are…

In the United Kingdom, the majority of national assessments involve human raters. The processes by which raters determine the scores to award are central to the assessment process and affect the extent to which valid inferences can be made from assessment outcomes. Thus, understanding rater cognition has become a growing area of research in the…

Translational cancer genomics research aims to ensure that experimental knowledge is subject to computational analysis, and integrated with a variety of records from omics and clinical sources. The data retrieval from such sources is not trivial, due to their redundancy and heterogeneity, and the presence of false evidence. In silico marker identification, therefore, remains a complex task that is mainly motivated by the impact that target identification from the elucidation of gene co-expression dynamics and regulation mechanisms, combined with the discovery of genotype-phenotype associations, may have for clinical validation. Based on the reuse of publicly available gene expression data, our aim is to propose cancer marker classification by integrating the prediction power of multiple annotation sources. In particular, with reference to the functional annotation for colorectal markers, we indicate a classification of markers into diagnostic and prognostic classes combined with susceptibility and risk factors. PMID:23928109

Microbial genome sequencing projects produce numerous sequences of deduced proteins, only a small fraction of which have been or will ever be studied experimentally. This leaves sequence analysis as the only feasible way to annotate these proteins and assign to them tentative functions. The Clusters of Orthologous Groups of proteins (COGs) database (http://www.ncbi.nlm.nih.gov/COG/), first created in 1997, has been a popular tool for functional annotation. Its success was largely based on (i) its reliance on complete microbial genomes, which allowed reliable assignment of orthologs and paralogs for most genes; (ii) orthology-based approach, which used the function(s) of the characterized member(s) of the protein family (COG) to assign function(s) to the entire set of carefully identified orthologs and describe the range of potential functions when there were more than one; and (iii) careful manual curation of the annotation of the COGs, aimed at detailed prediction of the biological function(s) for each COG while avoiding annotation errors and overprediction. Here we present an update of the COGs, the first since 2003, and a comprehensive revision of the COG annotations and expansion of the genome coverage to include representative complete genomes from all bacterial and archaeal lineages down to the genus level. This re-analysis of the COGs shows that the original COG assignments had an error rate below 0.5% and allows an assessment of the progress in functional genomics in the past 12 years. During this time, functions of many previously uncharacterized COGs have been elucidated and tentative functional assignments of many COGs have been validated, either by targeted experiments or through the use of high-throughput methods. A particularly important development is the assignment of functions to several widespread, conserved proteins many of which turned out to participate in translation, in particular rRNA maturation and tRNA modification. The new version of the
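The orthology-based annotation transfer described above can be sketched in a few lines: every protein assigned to a COG inherits the function curated for that COG. All identifiers, functions, and memberships below are placeholders, not actual COG records:

```python
# Placeholder curated functions per COG (not real COG database content).
cog_function = {
    "COG0001": "predicted aminotransferase",
    "COG0002": "predicted reductase",
}
# Placeholder ortholog memberships: protein ids grouped by COG.
cog_members = {
    "COG0001": ["genomeA_0042", "genomeB_1337"],
    "COG0002": ["genomeA_0099"],
}

def annotate_proteins():
    """Propagate each COG's curated function to all its member proteins."""
    return {prot: cog_function[cog]
            for cog, members in cog_members.items()
            for prot in members}
```

The leverage comes entirely from the careful manual curation of the per-COG function: one curated label annotates every ortholog across all genomes at once.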

Helicopter EMS (HEMS) and its possible association with outcomes improvement continues to be a subject of discussion. As is the case with other scientific discourse, debate over HEMS usefulness should be framed around an evidence-based assessment of the relevant literature. In an effort to facilitate the academic pursuit of assessment of HEMS utility, in late 2000 the National Association of EMS Physicians' (NAEMSP) Air Medical Task Force prepared annotated bibliographies of the HEMS-related outcomes literature. As a result of that work, two review articles, one covering HEMS use in nontrauma and the other in trauma, published in 2002 in Prehospital Emergency Care surveyed HEMS outcomes-related literature published between 1980 and mid-2000. The project was extended with two subsequent reviews covering the literature through 2006. This review continues the series, outlining outcomes-associated HEMS literature for the three-year period 2007 through the first half of 2011. PMID:22288016

Contemporary undergraduates in the biological sciences have unprecedented access to scientific information. Although many of these students may be savvy technologists, studies from the field of library and information science consistently show that undergraduates often struggle to locate, evaluate, and use high-quality, reputable sources of information. This study demonstrates the efficacy and pedagogical value of a collaborative teaching approach designed to enhance information literacy competencies among undergraduate biology majors who must write a formal scientific research paper. We rely on the triangulation of assessment data to determine the effectiveness of a substantial research paper project completed by students enrolled in an upper-level biology course. After enhancing library-based instruction, adding an annotated bibliography requirement, and using multiple assessment techniques, we show fundamental improvements in students' library research abilities. Ultimately, these improvements make it possible for students to more independently and effectively complete this challenging science-based writing assignment. We document critical information literacy advances in several key areas: student source-type use, annotated bibliography enhancement, plagiarism reduction, as well as student and faculty/librarian satisfaction. PMID:18056306

This report is a post-project assessment of the Nucla Circulating Fluidized-Bed (CFB) Demonstration Project, the second project to be completed in the DOE Clean Coal Technology Program. Nucla was the first successful utility repowering project in the US, increasing the capacity of the original power station from 36 MW(e) to 110 MW(e) and extending its life by 30 years. In the CFB boiler, combustion and desulfurization both take place in the fluidized bed. Calcium in the sorbent captures sulfur dioxide, and the relatively low combustion temperatures limit NOx formation. Hot cyclones separate the larger particles from the gas and recirculate them to the lower zones of the combustion chambers. This continuous circulation of coal char and sorbent particles is the novel feature of CFB technology. This demonstration project significantly advanced the environmental, operational, and economic potential of atmospheric CFB technology, precipitating a large number of orders for atmospheric CFB equipment. By 1994, more than 200 atmospheric CFB boilers had been constructed worldwide. Although at least six CFB units have been operated, the Nucla project's CFB database continues to be an important and unique resource for the design of yet larger atmospheric CFB systems. The post-project assessment report is an independent DOE appraisal of the success a completed project had in achieving its objectives and aiding in the commercialization of the demonstrated technology. The report also provides an assessment of the expected technical, environmental, and economic performance of the commercial version of the technology as well as an analysis of the commercial market.

The identification of translation initiation sites (TISs) constitutes an important aspect of sequence-based genome analysis. An erroneous TIS annotation can impair the identification of regulatory elements and N-terminal signal peptides, and also may flaw the determination of descent for any particular gene. We have formulated a reference-free method to score the TIS annotation quality. The method is based on a comparison of the observed and expected distribution of all TISs in a particular genome given prior gene-calling. We have assessed the TIS annotations for all available NCBI RefSeq microbial genomes and found that approximately 87% are of appropriate quality, whereas 13% need substantial improvement. We have analyzed a number of factors that could affect TIS annotation quality, such as GC-content, taxonomy, the fraction of genes with a Shine-Dalgarno sequence, and the year of publication. The analysis showed that only the first factor has a clear effect. We have then formulated a straightforward Principal Component Analysis-based TIS identification strategy to self-organize and score potential TISs. The strategy is independent of reference data and a priori calculations. A representative set of 277 genomes was subjected to the analysis and we found a clear increase in TIS annotation quality for the genomes with a low quality score. The PCA-based annotation was also compared with annotation by the current tool of reference, Prodigal. The comparison for the model genome of Escherichia coli K12 showed that both methods supplement each other and that prediction agreement can be used as an indicator of a correct TIS annotation. Importantly, the data suggest that the addition of a PCA-based strategy to a Prodigal prediction can be used to ‘flag’ TIS annotations for re-evaluation and in addition can be used to evaluate a given annotation in case a Prodigal annotation is lacking. PMID:26204119
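The reference-free scoring idea, comparing the observed distribution of annotated TISs against an expected one, can be sketched in a few lines. This is a minimal illustration only, not the paper's actual method: the function name, the offset representation, and the use of total variation distance as the comparison metric are all assumptions.

```python
from collections import Counter

def tis_quality_score(observed_offsets, expected_probs):
    """Score TIS annotation quality as 1 minus the total variation
    distance between the observed distribution of TIS offsets
    (relative to the called gene start) and an expected distribution.
    A score near 1.0 means the annotation matches expectation."""
    counts = Counter(observed_offsets)
    total = sum(counts.values())
    # half the summed absolute differences over expected offsets...
    tvd = 0.5 * sum(
        abs(counts.get(k, 0) / total - p)
        for k, p in expected_probs.items()
    )
    # ...plus mass at observed offsets absent from the expectation
    tvd += 0.5 * sum(
        c / total for k, c in counts.items() if k not in expected_probs
    )
    return 1.0 - tvd
```

A genome whose annotated starts fall where expected scores near 1.0; systematic mis-annotation (e.g. many starts shifted downstream) pushes the score toward 0.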

The Bonneville Power Administration (BPA), a Federal power marketing agency, has statutory responsibilities to supply electrical power to its utility, industrial, and other customers in the Pacific Northwest. BPA's latest load/resource balance forecast projects the capability of existing resources to satisfy projected Federal system loads. The forecast indicates a potential resource deficit. The underlying need for action is to satisfy BPA customers' demand for electrical power.

Identifying concepts and relationships in biomedical text enables knowledge to be applied in computational analyses. Many biological natural language processing (BioNLP) projects attempt to address this challenge, but the state of the art still leaves much room for improvement. Progress in BioNLP research depends on large, annotated corpora for evaluating information extraction systems and training machine learning models. Traditionally, such corpora are created by small numbers of expert annotators often working over extended periods of time. Recent studies have shown that workers on microtask crowdsourcing platforms such as Amazon’s Mechanical Turk (AMT) can, in aggregate, generate high-quality annotations of biomedical text. Here, we investigated the use of the AMT in capturing disease mentions in PubMed abstracts. We used the NCBI Disease corpus as a gold standard for refining and benchmarking our crowdsourcing protocol. After several iterations, we arrived at a protocol that reproduced the annotations of the 593 documents in the ‘training set’ of this gold standard with an overall F measure of 0.872 (precision 0.862, recall 0.883). The output can also be tuned to optimize for precision (max = 0.984 when recall = 0.269) or recall (max = 0.980 when precision = 0.436). Each document was completed by 15 workers, and their annotations were merged based on a simple voting method. In total, 145 workers combined to complete all 593 documents in the span of 9 days at a cost of $0.066 per abstract per worker. The quality of the annotations, as judged with the F measure, increases with the number of workers assigned to each task; however, minimal performance gains were observed beyond 8 workers per task. These results add further evidence that microtask crowdsourcing can be a valuable tool for generating well-annotated corpora in BioNLP. Data produced for this analysis are available at http://figshare.com/articles/Disease_Mention_Annotation
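The "simple voting method" for merging worker annotations can be sketched as follows. This is an illustrative reconstruction under assumed data shapes (one list of character-offset spans per worker), not the authors' actual protocol code; the function name and the per-character voting granularity are assumptions.

```python
from collections import Counter

def merge_annotations(worker_spans, min_votes):
    """Merge mention spans from multiple workers by voting: a character
    position is kept if at least `min_votes` workers highlighted it;
    contiguous kept positions form the merged spans."""
    votes = Counter()
    for spans in worker_spans:          # one list of (start, end) per worker
        for start, end in spans:
            for pos in range(start, end):
                votes[pos] += 1
    kept = sorted(p for p, v in votes.items() if v >= min_votes)
    merged, run = [], []
    for p in kept:
        if run and p == run[-1] + 1:
            run.append(p)
        else:
            if run:
                merged.append((run[0], run[-1] + 1))
            run = [p]
    if run:
        merged.append((run[0], run[-1] + 1))
    return merged
```

Raising `min_votes` trades recall for precision, which mirrors the tunable precision/recall behavior the abstract reports.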

Earth sciences have become more data-intensive, requiring access to heterogeneous data collected from multiple places, times, and thematic scales. For example, research on climate change may involve exploring and analyzing observational data such as the migration of animals and temperature shifts across the earth, as well as various model-observation inter-comparison studies. Recently, DataONE, a federated data network built to facilitate access to and preservation of environmental and ecological data, has been established. ONEMercury has recently been implemented as part of the DataONE project to serve as a portal for discovering and accessing environmental and observational data across the globe. ONEMercury harvests metadata from the data hosted by multiple data repositories and makes it searchable via a common search interface built upon cutting-edge search engine technology, allowing users to interact with the system, intelligently filter the search results on the fly, and fetch the data from distributed data sources. Linking data from heterogeneous sources always has a cost. A problem that ONEMercury faces is the different levels of annotation in the harvested metadata records. Poorly annotated records tend to be missed during the search process as they lack meaningful keywords. Furthermore, such records would not be compatible with the advanced search functionality offered by ONEMercury, as the interface requires that a metadata record be semantically annotated. The explosion of the number of metadata records harvested from an increasing number of data repositories makes it impossible to annotate the harvested records manually, underscoring the need for a tool capable of automatically annotating poorly curated metadata records. In this paper, we propose a topic-model (TM) based approach for automatic metadata annotation. Our approach mines topics in the set of well annotated records and suggests keywords for poorly annotated records based on topic similarity. We utilize the
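The core idea, suggesting keywords for a poorly annotated record based on its similarity to well-annotated ones, can be sketched with plain bag-of-words cosine similarity standing in for topic similarity. This is a simplified stand-in, not the paper's topic-model approach; the function names, the record dictionary layout, and the keyword-borrowing heuristic are all assumptions.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def suggest_keywords(poor_text, annotated_records, top_n=3):
    """Suggest keywords for a poorly annotated metadata record by
    finding the most textually similar well-annotated record and
    borrowing its curated keywords."""
    poor_vec = Counter(poor_text.lower().split())
    best = max(
        annotated_records,
        key=lambda r: cosine(poor_vec, Counter(r["text"].lower().split())),
    )
    return best["keywords"][:top_n]
```

A topic model replaces the raw term vectors with lower-dimensional topic distributions, which lets records match even when they share no literal words.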

It is becoming increasingly imperative to integrate renewable energy, such as solar and wind, into electricity generation due to increased regulations on air and water pollution and a sociopolitical desire to develop more clean energy sources. This increased spotlight on renewable energy requires evaluating competing projects using either conventional economic analysis techniques or other economics-based models and approaches in order to select a subset of the projects to be funded. Even then, there are reasons to suspect that techniques applied to renewable energy projects may result in decisions that reject viable projects due to the use of a limited number of quantifiable and tangible attributes about the projects. This paper presents a framework for economic evaluation of renewable energy projects. The framework is based on a systems approach in which the processes within the entire network of the system, from generation to consumption, are accounted for. Furthermore, the framework uses fuzzy-systems concepts to calculate the value of information under conditions of uncertainty.

Next-generation sequencing (NGS) has brought human genomic research to an unprecedented era. RNA-Seq is a branch of NGS that can be used to quantify gene expression and depends on accurate annotation of the human genome (i.e., the definition of genes and all of their variants or isoforms). Multiple annotations of the human genome exist with varying complexity. However, it is not clear how the choice of genome annotation influences RNA-Seq gene expression quantification. We assess the effect of different genome annotations in terms of (1) mapping quality, (2) quantification variation, (3) quantification accuracy (i.e., by comparing to qRT-PCR data), and (4) the concordance of detecting differentially expressed genes. External validation with qRT-PCR suggests that more complex genome annotations result in higher quantification variation.

Neville Chemical conducted a plant-wide energy efficiency assessment of its Anaheim, California, plant in the spring of 2002. The assessment justified five projects that would significantly reduce electricity and fuel costs. Four of the five projects, when complete, will save 436,200 kilowatt-hours, or $31,840 of electrical energy each year. The remaining project will save 7,473 million British thermal units, or $43,600 in fossil fuel, each year. One year later, the same assessment team applied its knowledge of Neville's processes in a plant-wide assessment at Neville's Pittsburgh plant, and identified 15 projects with more than $715,000 in projected annual savings.

Peer assessment has been increasingly integrated in educational settings as a strategy to foster student learning. Yet little has been studied about how students at different learning levels may benefit from peer assessment. This study examined how peer-assessment and students' learning levels influenced students' project performance using a…

Rubric assessment of information literacy is an important tool for librarians seeking to show evidence of student learning. The authors, who collaborated on the Rubric Assessment of Informational Literacy Skills (RAILS) research project, draw from their shared experience to present practical recommendations for implementing rubric assessment in a…

Background Gene function annotations, which are associations between a gene and a term of a controlled vocabulary describing gene functional features, are of paramount importance in modern biology. Datasets of these annotations, such as the ones provided by the Gene Ontology Consortium, are used to design novel biological experiments and interpret their results. Despite their importance, these sources of information have some known issues. They are incomplete, since biological knowledge is far from being definitive and it rapidly evolves, and some erroneous annotations may be present. Since the curation process of novel annotations is a costly procedure, in both economic and time terms, computational tools that can reliably predict likely annotations, and thus quicken the discovery of new gene annotations, are very useful. Methods We used a set of computational algorithms and weighting schemes to infer novel gene annotations from a set of known ones. We used the latent semantic analysis approach, implementing two popular algorithms (Latent Semantic Indexing and Probabilistic Latent Semantic Analysis), and we propose a novel method, the Semantic IMproved Latent Semantic Analysis, which adds a clustering step on the set of considered genes. Furthermore, we propose the improvement of these algorithms by weighting the annotations in the input set. Results We tested our methods and their weighted variants on the Gene Ontology annotation sets of the genes of three model organisms (Bos taurus, Danio rerio, and Drosophila melanogaster). The methods showed their ability in predicting novel gene annotations, and the weighting procedures were shown to provide a valuable improvement, although the obtained results vary according to the dimension of the input annotation set and the considered algorithm. Conclusions Out of the three considered methods, the Semantic IMproved Latent Semantic Analysis is the one that provides better results. In particular, when coupled with a proper

Recent policy developments in England have, to some extent, relaxed the hold of external, high-stakes assessment on teachers of students in the early years of secondary education. In such a context, there is the opportunity for teachers to reassert the importance of teacher assessment as the most reliable means of judging a student's abilities. A…

CEAP is a multi-agency effort to quantify the environmental benefits of conservation practices used by private landowners participating in selected USDA conservation programs. Within CEAP are a national assessment, focusing on regional and national-scale modeling, and a watershed assessment of cropl...

The Gene Ontology (GO) is widely recognised as the gold standard bioinformatics resource for summarizing functional knowledge of gene products in a consistent and computable, information-rich language. GO describes cellular and organismal processes across all species, yet until now there has been a considerable gene annotation deficit within the neurological and immunological domains, both of which are relevant to Parkinson's disease. Here we introduce the Parkinson's disease GO Annotation Project, funded by Parkinson's UK and supported by the GO Consortium, which is addressing this deficit by providing GO annotation to Parkinson's-relevant human gene products, principally through expert literature curation. We discuss the steps taken to prioritise proteins, publications and cellular processes for annotation, examples of how GO annotations capture Parkinson's-relevant information, and the advantages that a topic-focused annotation approach offers to users. Building on the existing GO resource, this project collates a vast amount of Parkinson's-relevant literature into a set of high-quality annotations to be utilized by the research community. PMID:26825309

Background Annotated patient-provider encounters can provide important insights into clinical communication, ultimately suggesting how it might be improved to effect better health outcomes. But annotating outpatient transcripts with Roter or General Medical Interaction Analysis System (GMIAS) codes is expensive, limiting the scope of such analyses. We propose automatically annotating transcripts of patient-provider interactions with topic codes via machine learning. Methods We use a conditional random field (CRF) to model utterance topic probabilities. The model accounts for the sequential structure of conversations and the words comprising utterances. We assess predictive performance via 10-fold cross-validation over GMIAS-annotated transcripts of 360 outpatient visits (over 230,000 utterances). We then used automated annotations in place of manual ones to reproduce an analysis of 116 additional visits from a randomized trial that used GMIAS to assess the efficacy of an intervention aimed at improving communication around antiretroviral (ARV) adherence. Results With respect to six topic codes, the CRF achieved a mean pairwise kappa compared with human annotators of 0.49 (range: 0.47, 0.53) and a mean overall accuracy of 0.64 (range: 0.62, 0.66). With respect to the RCT re-analysis, results using automated annotations agreed with those obtained using manual ones. According to the manual annotations, the median number of ARV-related utterances without and with the intervention was 49.5 versus 76, respectively (paired sign test p=0.07). Using automated annotations, the respective numbers were 39 versus 55 (p=0.04). Limitations While moderately accurate, the predicted annotations are far from perfect. Conversational topics are intermediate outcomes; their utility is still being researched. Conclusions This foray into automated topic inference suggests that machine learning methods can classify utterances comprising patient-provider interactions into clinically relevant
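The value of modeling "the sequential structure of conversations" can be illustrated with a tiny Viterbi decoder that combines per-utterance scores with transition scores, which is the decoding step a linear-chain CRF also performs. This is a generic sketch, not the study's model; the state names, score values, and dictionary-based API are all illustrative assumptions.

```python
def viterbi(emissions, transitions, states):
    """Decode the most likely topic sequence for a conversation.
    `emissions[i][s]` is a log-scale score for topic s at utterance i;
    `transitions[(s1, s2)]` scores moving from topic s1 to s2.
    Transition scores let context override a weak per-utterance signal."""
    best = {s: emissions[0][s] for s in states}
    back = []
    for em in emissions[1:]:
        prev, best, ptr = best, {}, {}
        for s in states:
            p, score = max(
                ((q, prev[q] + transitions[(q, s)] + em[s]) for q in states),
                key=lambda x: x[1],
            )
            best[s], ptr[s] = score, p
        back.append(ptr)
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In the example below, the middle utterance slightly favors "other" on its own, but the cost of switching topics keeps the decoded sequence on "arv", the behavior that makes sequence models outperform per-utterance classifiers.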

Triggered by the sequencing of the human genome, personalized medicine has been one of the fastest growing research areas in the last decade. Multiple software and hardware technologies have been developed by several projects, culminating in the exponential growth of genetic data. Considering the technological developments in this field, it is now fairly easy and inexpensive to obtain genetic profiles for unique individuals, such as those performed by several genetic analysis companies. The availability of computational tools that simplify genetic data analysis and the disclosure of biomedical evidence are of utmost importance. We present Variobox, a desktop tool to annotate, analyze, and compare human genes. Variobox obtains variant annotation data from WAVe, protein metadata annotations from Protein Data Bank, and sequences from Locus Reference Genomic or RefSeq databases. To explore the data, Variobox provides an advanced sequence visualization that enables agile navigation through genetic regions. DNA sequencing data can be compared with reference sequences retrieved from LRG or RefSeq records, identifying and automatically annotating new potential variants. These features and data, ranging from patient sequences to HGVS-compliant variant descriptions, are combined in an intuitive interface to analyze genes and variants. Variobox is a Java application, available at http://bioinformatics.ua.pt/variobox. PMID:24186831

We present a design for a web-based image annotation interface developed to assist in supervised classification of organisms and substrate for habitat assessment from multiple, heterogeneous oceanographic imaging systems. The interface enables human image annotators to count, identify, and measure targets and classify substrate in a variety of kinds of imagery including benthic surveys and imaging flow cytometry. These annotations are then used to build training sets for supervised classification algorithms for purposes of characterizing community structure and habitat assessment. The Ocean Imaging Informatics team at WHOI used the Tetherless World Constellation's collaborative design methodology to develop a shared formal information model and system design that applies to a variety of image annotation use cases. Because the information model represents consensus between researchers with differing instrumentation and science needs, it assists with rapid prototyping and establishes a baseline against which existing and forthcoming image annotation tools can be evaluated. A technology review suggested that there are few general-purpose image annotation tools suitable for annotation of high-volume oceanographic imagery. Most tools require too many steps for operations that must be repeated thousands of times, and/or lack critical features such as display of instrument metadata, QA/QC, and management of annotator tasks. While some of these problems are user interface limitations, others suggest that existing tools are missing critically important concepts. For example, QA/QC appears in our information model as an "activity stream" associated with each image annotation, consisting of events indicating review status, specific image quality issues, etc. The model also includes "identification modes" that contextualize annotations according to the annotator's assigned task, assisting both with interpreting annotations and with providing contextual user interface shortcuts

Identifying functional regions in the human genome is a major goal in human genetics. Great efforts have been made to functionally annotate the human genome either through computational predictions, such as genomic conservation, or high-throughput experiments, such as the ENCODE project. These efforts have resulted in a rich collection of functional annotation data of diverse types that need to be jointly analyzed for integrated interpretation and annotation. Here we present GenoCanyon, a whole-genome annotation method that performs unsupervised statistical learning using 22 computational and experimental annotations, thereby inferring the functional potential of each position in the human genome. With GenoCanyon, we are able to predict many of the known functional regions. The ability to predict functional regions, as well as its generalizable statistical framework, makes GenoCanyon a unique and powerful tool for whole-genome annotation. The GenoCanyon web server is available at http://genocanyon.med.yale.edu. PMID:26015273

Thermal annealing is the only known method for mitigating the effects of neutron irradiation embrittlement in reactor pressure vessel (RPV) steels. In May 1996, the US Department of Energy (DOE), in conjunction with the American Society of Mechanical Engineers, Westinghouse, Cooperheat, Electric Power Research Institute (with participating utilities), Westinghouse Owner's Group, Consumers Power, Electricité de France, Duquesne Light, and the Central Research Institute of the Electric Power Industry (Japan), sponsored an annealing demonstration project (ADP) at Marble Hill. The Marble Hill Plant, located in Madison, Indiana, is a Westinghouse 4-loop design. The plant was nearly 70% completed when the project was canceled. Hence, the RPV was never irradiated. The paper will present highlights from the NRC's independent evaluation of the Marble Hill Annealing Demonstration Project.

This report describes efforts by IBACOS, a Building America research team, in the St. Bernard Project, a nonprofit, community-based organization whose mission is to assist Hurricane Katrina survivors to return to their homes in the New Orleans area. The report focuses on energy modeling results of two plans that the St. Bernard Project put forth as 'typical' building types and on quality issues that were observed during the field walk and best practice recommendations that could improve the energy efficiency and durability of the renovated homes.

In support of the international effort to obtain a reference sequence of the bread wheat genome and to provide plant communities dealing with large and complex genomes with a versatile, easy-to-use online automated tool for annotation, we have developed the TriAnnot pipeline. Its modular architecture allows for the annotation and masking of transposable elements, the structural and functional annotation of protein-coding genes with an evidence-based quality indexing, and the identification of conserved non-coding sequences and molecular markers. The TriAnnot pipeline is parallelized on a 712-CPU computing cluster that can run a 1-Gb sequence annotation in less than 5 days. It is accessible through a web interface for small-scale analyses or through a server for large-scale annotations. The performance of TriAnnot was evaluated in terms of sensitivity, specificity, and general fitness using curated reference sequence sets from rice and wheat. In less than 8 h, TriAnnot was able to predict more than 83% of the 3,748 CDS from rice chromosome 1 with a fitness of 67.4%. On a set of 12 reference Mb-sized contigs from wheat chromosome 3B, TriAnnot predicted and annotated 93.3% of the genes, among which 54% were perfectly identified in accordance with the reference annotation. It also allowed the curation of 12 genes based on new biological evidence, increasing the percentage of perfect gene prediction to 63%. TriAnnot systematically showed a higher fitness than other annotation pipelines that are not optimized for wheat. As it is easily adaptable to the annotation of other plant genomes, TriAnnot should become a useful resource for the annotation of large and complex genomes in the future. PMID:22645565

Annotation on the reference genome of the C57BL6/J mouse has been an ongoing project ever since the draft genome was first published. Initially, the principal focus was on the identification of all protein-coding genes, although today the importance of describing long non-coding RNAs, small RNAs, and pseudogenes is recognized. Here, we describe the progress of the GENCODE mouse annotation project, which combines manual annotation from the HAVANA group with Ensembl computational annotation, alongside experimental and in silico validation pipelines from other members of the consortium. We discuss the more recent incorporation of next-generation sequencing datasets into this workflow, including the usage of mass-spectrometry data to potentially identify novel protein-coding genes. Finally, we will outline how the C57BL6/J genebuild can be used to gain insights into the variant sites that distinguish different mouse strains and species. PMID:26187010

This study describes channel changes following completion of the Provo River Restoration Project (PRRP), the largest stream restoration project in Utah and one of the largest projects in the United States in which a gravel-bed river was fully reconstructed. We summarize project objectives and the design process, and we analyze monitoring data collected during the first 7 years after project completion. Post-project channel adjustment during the study period included two phases: (i) an initial phase of rapid, but small-scale, adjustment during the first years after stream flow was introduced to the newly constructed channel and (ii) a subsequent period of more gradual topographic adjustment and channel migration. Analysis of aerial imagery and ground-survey data demonstrate that the channel has been more dynamic in the downstream 4 km where a local source contributes a significant annual supply of bed material. Here, the channel migrates and exhibits channel adjustments that are more consistent with project objectives. The upstream 12 km of the PRRP are sediment starved, the channel has been laterally stable, and this condition may not be consistent with large-scale project objectives.

The U.S. Department of Energy's Hydropower Advancement Project (HAP) was initiated to characterize and trend hydropower asset conditions across the U.S.A.'s existing hydropower fleet and to identify and evaluate the upgrading opportunities. Although HAP includes both detailed performance assessments and condition assessments of existing hydropower plants, this paper focuses on the performance assessments. Plant performance assessments provide a set of statistics and indices that characterize the historical extent to which each plant has converted the potential energy at a site into electrical energy for the power system. The performance metrics enable benchmarking and trending of performance across many projects in a variety of contexts (e.g., river systems, power systems, and water availability). During FY2011 and FY2012, assessments will be performed on ten plants, with an additional fifty plants scheduled for FY2013. This paper focuses on the performance assessments completed to date, details the performance assessment process, and describes results from the performance assessments.

Service learning involves providing service to the community while requiring students to meet learning goals in a specific course. A service learning project was implemented in a general biology course at Rockhurst University to involve students in promoting scientific education in conjunction with community partner educators. Students were…

The purpose of this study is to characterize the technical performance of the PNM Prosperity electricity storage project, and to identify lessons learned that can be used to improve similar projects in the future. The PNM Prosperity electricity storage project consists of a 500 kW/350 kWh advanced lead-acid battery with integrated supercapacitor (for energy smoothing) and a 250 kW/1 MWh advanced lead-acid battery (for energy shifting), and is co-located with a 500 kW solar photovoltaic (PV) resource. The project received American Reinvestment and Recovery Act (ARRA) funding. The smoothing system is effective in smoothing intermittent PV output. The shifting system exhibits good round-trip efficiencies, though the AC-to-AC annual average efficiency is lower than one might hope. Given the current utilization of the smoothing system, there is an opportunity to incorporate additional control algorithms in order to increase the value of the energy storage system.

Application of game theory to small-group project evaluation in higher education instruction finds that the best strategy for students wishing high grades may not be a strategy that promotes teamwork and cooperation. Suggests that putting students into groups may randomly disadvantage some students relative to others, producing serious unfairness…

Notes that reforms in engineering education have caused a shift from the traditional stand-alone courses in technical communication for engineering students towards communication training integrated in courses and design projects that allows students to develop four levels of competence. Describes three formats for integrated communication…

Describes the use of monthly portfolio projects to track student progress and improve science performance in a 7th grade life science course. Students explain concepts they have learned, produce products to present what they have learned, and use the concepts to critique magazine articles, advertisements, or news stories. Provides an evaluation…

One of the most persistent problems in the use of projective techniques is the need to develop objective, reliable and valid scoring systems. The sample consisted of 100 college students enrolled in an introductory psychology course. Ss were administered the DAPIR along with an extensive biographical questionnaire. Additionally, Ss were rated by…

Building on the materials in the two previous successful editions, this book features approximately 40% all new material and updates the previous information. The authors use the DDD-E model (Decide, Design, Develop--Evaluate) to show how to select and plan multimedia projects, use presentation and development tools, manage graphics, audio, and…

This study provides an analysis of potential changes that may take place in a Uranium Mill Tailings Remedial Action (UMTRA) Project disposal cell cover system as a result of plant biointrusion. Potential changes are evaluated by performing a sensitivity analysis of the relative impact of root penetrations on radon flux out of the cell cover and/or water infiltration into the cell cover. Data used in this analysis consist of existing information on vegetation growth on selected cell cover systems and information available from published studies and/or other available project research. Consistent with the scope of this paper, no new site-specific data were collected from UMTRA Project sites. Further, this paper does not focus on the issue of plant transport of radon gas or other contaminants out of the disposal cell cover, though it is acknowledged that such transport has the potential to be a significant pathway for contaminants to reach the environment during portions of the design life of a disposal cell where plant growth occurs. Rather, this study was performed to evaluate the effects of physical penetration and soil drying caused by plant roots that have grown and are expected to continue to grow in UMTRA Project disposal cell covers. An understanding of the biological and related physical processes that take place within the cover systems of the UMTRA Project disposal cells helps the U.S. Department of Energy (DOE) determine if the presence of a plant community on these cells is detrimental, beneficial, or of mixed value in terms of the cover system's designed function. Results of this investigation provide information relevant to the formulation of a vegetation control policy.

Identifying genomic annotations that differentiate causal from trait-associated variants is essential to fine mapping disease loci. Although many studies have identified non-coding functional annotations that overlap disease-associated variants, these annotations often colocalize, complicating the ability to use these annotations for fine mapping causal variation. We developed a statistical approach (Genomic Annotation Shifter [GoShifter]) to assess whether enriched annotations are able to prioritize causal variation. GoShifter defines the null distribution of an annotation overlapping an allele by locally shifting annotations; this approach is less sensitive to biases arising from local genomic structure than commonly used enrichment methods that depend on SNP matching. Local shifting also allows GoShifter to identify independent causal effects from colocalizing annotations. Using GoShifter, we confirmed that variants in expression quantitative trait loci drive gene-expression changes through DNase-I hypersensitive sites (DHSs) near transcription start sites and independently through 3′ UTR regulation. We also showed that (1) 15%–36% of trait-associated loci map to DHSs independently of other annotations; (2) loci associated with breast cancer and rheumatoid arthritis harbor potentially causal variants near the summits of histone marks rather than full peak bodies; (3) variants associated with height are highly enriched in embryonic stem cell DHSs; and (4) we can effectively prioritize causal variation at specific loci. PMID:26140449
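The local-shifting idea behind GoShifter can be illustrated with a minimal sketch (an assumption-laden toy, not the published implementation): annotation intervals within a locus are circularly shifted by a random offset, which preserves their spacing, and the observed SNP-annotation overlap is compared against overlaps obtained under the random shifts.

```python
import random

def overlaps(snps, annots):
    """Count SNP positions falling inside any annotation interval [a, b)."""
    return sum(any(a <= s < b for a, b in annots) for s in snps)

def shift_annotations(annots, delta, lo, hi):
    """Circularly shift all annotation intervals by delta within the locus [lo, hi).
    Intervals keep their widths; wrap-around is handled naively for this toy."""
    width = hi - lo
    out = []
    for a, b in annots:
        a2 = lo + (a - lo + delta) % width
        out.append((a2, a2 + (b - a)))
    return out

def goshifter_pvalue(snps, annots, lo, hi, n_perm=1000, seed=0):
    """Empirical p-value: fraction of random local shifts whose
    SNP-annotation overlap is at least the observed overlap."""
    rng = random.Random(seed)
    obs = overlaps(snps, annots)
    hits = sum(
        overlaps(snps, shift_annotations(annots, rng.randrange(hi - lo), lo, hi)) >= obs
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)
```

Because the shift moves all annotations together within the locus, the null distribution respects local genomic structure in a way that genome-wide SNP matching does not.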

Assessment of high-throughput omics data initially focuses on relative or raw levels of a particular feature, such as an expression value for a transcript, protein, or metabolite. At a second level, analyses of annotations, including known or predicted functions and associations of each individual feature, attempt to distill biological context. Most currently available comparative- and meta-analysis methods are dependent on the availability of identical features across data sets, and concentrate on determining features that are differentially expressed across experiments, some of which may be considered “biomarkers.” The heterogeneity of measurement platforms and the inherent variability of biological systems confound the search for robust biomarkers indicative of a particular condition. In many instances, however, multiple data sets show involvement of common biological processes or signaling pathways, even though individual features are not commonly measured or differentially expressed between them. We developed a methodology, categoryCompare, for cross-platform and cross-sample comparison of high-throughput data at the annotation level. We assessed the utility of the approach using hypothetical data, as well as by determining similarities and differences in the set of processes in two instances: (1) denervated skin vs. denervated muscle, and (2) colon from Crohn's disease vs. colon from ulcerative colitis (UC). The hypothetical data showed that in many cases comparing annotations gave superior results to comparing only at the gene level. Analytical results also depended on the number of genes included in the annotation term, the amount of noise relative to the number of genes expressed in unenriched annotation categories, and the specific method by which samples are combined. In the skin vs. muscle denervation comparison, the tissues demonstrated markedly different responses. The Crohn's vs. UC comparison showed gross similarities in inflammatory
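Annotation-level comparison of the kind categoryCompare performs can be sketched as follows (a hypothetical simplification, not the package's actual API): each data set's differentially expressed genes are tested for annotation-term enrichment with a hypergeometric tail test, and the enriched terms, rather than the genes themselves, are compared across data sets.

```python
from math import comb

def hypergeom_tail(k, K, n, N):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): probability of drawing at
    least k annotated genes when n genes are picked from a universe of N,
    of which K carry the annotation."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def enriched_terms(diff_genes, categories, universe, alpha=0.05):
    """Return the annotation terms over-represented among diff_genes."""
    N, n = len(universe), len(diff_genes)
    hits = set()
    for term, members in categories.items():
        K = len(members & universe)
        k = len(members & diff_genes)
        if K and hypergeom_tail(k, K, n, N) < alpha:
            hits.add(term)
    return hits
```

Two gene lists that share no genes at all can still agree at the annotation level when both are enriched for the same term, which is the cross-platform scenario the abstract describes.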

PAZAR is an open-access and open-source database of transcription factor and regulatory sequence annotation with associated web interface and programming tools for data submission and extraction. Curated boutique data collections can be maintained and disseminated through the unified schema of the mall-like PAZAR repository. The Pleiades Promoter Project collection of brain-linked regulatory sequences is introduced to demonstrate the depth of annotation possible within PAZAR. PAZAR, located at , is open for business. PMID:17916232

DOE has made a commitment to compliance with all applicable environmental regulatory requirements. In this respect, it is important to consider and design all tritium supply alternatives so that they can comply with these requirements. The management of waste is an integral part of this activity and it is therefore necessary to estimate the quantities and specific wastes that will be generated by all tritium supply alternatives. A thorough assessment of waste streams includes waste characterization, quantification, and the identification of treatment and disposal options. The waste assessment for APT has been covered in two reports. The first report was a process waste assessment (PWA) that identified and quantified waste streams associated with both target designs and fulfilled the requirements of APT Work Breakdown Structure (WBS) Item 5.5.2.1. This second report is an expanded version of the first that includes all of the data of the first report, plus an assessment of treatment and disposal options for each waste stream identified in the initial report. The latter information was initially planned to be issued as a separate Waste Treatment and Disposal Options Assessment Report (WBS Item 5.5.2.2).

Integrating and relating images with clinical and molecular data is a crucial activity in translational research, but challenging because the information in images is not explicit in standard computer-accessible formats. We have developed an ontology-based representation of the semantic contents of radiology images called AIM (Annotation and Image Markup). AIM specifies the quantitative and qualitative content that researchers extract from images. The AIM ontology enables semantic image annotation and markup, specifying the entities and relations necessary to describe images. AIM annotations, represented as instances in the ontology, enable key use cases for images in translational research such as disease status assessment, query, and inter-observer variation analysis. AIM will enable ontology-based query and mining of images, and integration of images with data in other ontology-annotated bioinformatics databases. Our ultimate goal is to enable researchers to link images with related scientific data so they can learn the biological and physiological significance of the image content. PMID:21347180

For a class project students in an industrial/organizational psychology course had to construct a performance appraisal instrument for assessing the instructor's performance. Evaluations revealed that students found the exercise useful. (RM)

This article reviews the Maricopa Integrated Risk Assessment (MIRA) project and discusses its challenges and successes. Strategies and resources are offered for assisting community college administrators, faculty, and staff to successfully implement enterprise risk management at their institutions.

An attempt was made to explore personality dimensions with projective and verbal tests. The Holtzman Inkblot Technique (HIT), the California Psychological Inventory (CPI) and the Rokeach Dogmatism Scale (RDS) were administered to 161 college students of both sexes. A description of the canonical correlations between the two subsets of projective and verbal instruments is presented, as well as three separate factor analyses, one of the HIT, another of the CPI-RDS and the third of the HIT and CPI-RDS together. The results support the conclusion that, with the exception of one factor that includes HIT and CPI-RDS variables, the HIT factors have no relationship with the CPI-RDS factors. Furthermore, of the 19 canonical correlations only the first is significant. PMID:1165283

Programs and projects employing payments for ecosystem services (PES) interventions achieve their objectives by linking buyers and sellers of ecosystem services. Although PES projects are popular conservation and development interventions, little is known about their adherence to basic ecological principles. We conducted a quantitative assessment of the degree to which a global set of PES projects adhered to four ecological principles that are basic scientific considerations for any project focused on ecosystem management: collection of baseline data, identification of threats to an ecosystem service, monitoring, and attention to ecosystem dynamics or the formation of an adaptive management plan. We evaluated 118 PES projects in three markets (biodiversity, carbon, and water), compiled using websites of major conservation organizations; ecology, economic, and climate-change databases; and three scholarly databases (ISI Web of Knowledge, Web of Science, and Google Scholar). To assess adherence to ecological principles, we constructed two scientific indices (one additive [ASI] and one multiplicative [MSI]) based on our four ecological criteria and analyzed index scores by relevant project characteristics (e.g., sector, buyer, seller). Carbon-sector projects had higher ASI values (P < 0.05) than water-sector projects and marginally higher ASI scores (P < 0.1) than biodiversity-sector projects, demonstrating their greater adherence to ecological principles. Projects financed by public-private partnerships had significantly higher ASI values than projects financed by governments (P < 0.05) and marginally higher ASI values than those funded by private entities (P < 0.1). We did not detect differences in adherence to ecological principles based on the inclusion of cobenefits, the spatial extent of a project, or the size of a project's budget. These findings suggest, at this critical phase in the rapid growth of PES projects, that fundamental ecological principles should be
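The difference between the additive and multiplicative indices can be made concrete with a small sketch (the criterion names and the 0/1 scoring below are illustrative assumptions; the study's exact scoring may differ):

```python
# The four ecological criteria named in the abstract, scored 0 (unmet) or 1 (met).
CRITERIA = ("baseline_data", "threat_identification", "monitoring", "ecosystem_dynamics")

def scientific_indices(project):
    """project maps each criterion name to a 0/1 adherence score.
    ASI sums the scores; MSI multiplies them, so a single unmet
    criterion drives MSI to zero."""
    scores = [project[c] for c in CRITERIA]
    asi = sum(scores)
    msi = 1
    for s in scores:
        msi *= s
    return asi, msi
```

The multiplicative index is the stricter of the two: ASI degrades gradually as criteria are missed, while one missed criterion zeroes MSI outright.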

In 2007, the National Children's Center for Rural and Agricultural Health and Safety (NCCRAHS) published Agritourism Health and Safety Guidelines for Children to provide helpful recommendations for protecting the health and safety of children visiting agritourism farms. Supplement A: Policies and Procedures Guide and Supplement B: Worksite Guide were subsequently published in 2009 and provided agritourism farms with checklists to use in reviewing, planning, and implementing their own health and safety practices. In order to better understand what would be required of a farm wishing to implement the guidelines using Supplements A and B, the North Carolina Agromedicine Institute conducted a single-family farm demonstration project with support from the NCCRAHS. The aims of the project were to (1) determine child health and safety risks associated with an existing agritourism farm; (2) determine the cost of making improvements necessary to reduce risks; and (3) use project findings to motivate other agritourism farms, Cooperative Extension agents, and agritourism insurers to adopt or recommend Agritourism Health and Safety Guidelines for Children for their own farms or farms with which they work. At the conclusion of the study, the target farm was in compliance with an average of 86.9% of items in Supplements A and B. Furthermore, 89% of individuals self-identifying as farmers or farm workers and 100% of Cooperative Extension agents and agritourism insurers attending an end-of-project workshop indicated their intent to adopt or recommend Agritourism Health and Safety Guidelines for Children for their own farms or farms with which they work. PMID:23540301

PEVs can represent a significant power resource for the grid. An IVCI with bi-directional V2G capabilities would allow PEVs to provide grid support services and thus generate a source of revenue for PEV owners. The fleet of EV Project vehicles represents a power resource between 30 MW and 90 MW, depending on the power rating of the grid connection (5-15 kW). Aggregation of vehicle capacity would allow PEVs to participate in wholesale reserve capacity markets. One of the key insights from EV Project data is the fact that vehicles are connected to an EVSE much longer than is necessary to deliver a full charge. During these hours when the vehicles are not charging, they can participate in wholesale power markets, providing the high-value services of regulation and spinning reserves. The annual gross revenue potential for providing these services using the fleet of EV Project vehicles ranges from several hundred thousand dollars to several million dollars annually, depending on the power rating of the grid interface, the number of hours providing grid services, and the market being served. On a per-vehicle basis, providing grid services can generate several thousand dollars over the life of the vehicle.
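The back-of-envelope arithmetic behind these revenue figures can be sketched as follows (the fleet size, available hours, and $/MW-h capacity price below are illustrative assumptions, not EV Project data):

```python
def fleet_power_mw(n_vehicles, kw_per_vehicle):
    """Aggregate grid-connection capacity of a PEV fleet, in MW."""
    return n_vehicles * kw_per_vehicle / 1000.0

def annual_gross_revenue(n_vehicles, kw_per_vehicle, hours_per_day, price_per_mw_hour):
    """Gross annual revenue from selling capacity (e.g. regulation or spinning
    reserve) at a capacity price in $/MW-h, for the hours each day the fleet
    is plugged in but not charging."""
    return fleet_power_mw(n_vehicles, kw_per_vehicle) * hours_per_day * 365 * price_per_mw_hour
```

For example, 6,000 vehicles on 5 kW connections give 30 MW (the low end quoted above); at 12 available hours per day and an assumed $10/MW-h capacity price, gross revenue comes to about $1.3 million per year, consistent with the "several hundred thousand to several million dollars" range.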

The mission of the Solar Radiation Resource Assessment Project is to provide essential information about the solar radiation resource to users and planners of solar technologies so that they can make informed and timely decisions concerning applications of those technologies. The project team accomplishes this by producing and disseminating relevant and reliable information about solar radiation. Topics include: variability of solar radiation, measurements of solar radiation, spectral distribution of solar radiation, and assessment of the solar resource. FY 1993 accomplishments are detailed.

An anticipatory project assessment is discussed, characterized as the capacity to perform, and the disposition to take into account in relevant decisional areas, the following operations: identification of the significant effects which will result from the introduction of a specified project configuration into alternative projected future social environments during the planning, implementation, and operational stages; and evaluation of such effects in terms of social impacts on affected participants and social value-institutional processes, in accord with specified concepts of social justice.

The New Standards Project, attempting to create an assessment system made up of performance exams, portfolios, and projects, may not have fulfilled its promise to create a national system of tests worth teaching to. However, NSP did deliver high-quality professional-development opportunities for hundreds of teachers across the nation. (Contains…

This paper is about using projects for assessment of student learning in different courses of an Information Systems (IS) program. An overview of the role of educational projects in student learning is presented. The various aspects of defining standardized rubrics across an IS program are discussed. A methodology for the use of such rubrics in…

This paper presents an empirical study of how Second Life (SL) was utilized for a highly successful project-based graduate interdisciplinary communication course. Researchers found that an integrated threefold approach emphasizing project-based pedagogy, technical training and support, and assessment/research was effective in cultivating and…

Honours research projects in the School of Civil, Environmental and Mining Engineering at the University of Adelaide are run with small groups of students working with an academic supervisor in a chosen area for one year. The research project is mainly self-directed study, which makes it very difficult to fairly assess the contribution of…

The purpose of this literature review and Ex Post Facto descriptive study was to determine which type of benchmark assessment, multiple-choice or project-based, provides the best indication of general success on the history portion of the CST (California Standards Tests). The result of the study indicates that although the project-based benchmark…

Aim of the study is to assess the project works and application processes in science education of 6th, 7th and 8th grades in primary education upon teachers' perspectives. The research was fulfilled upon the descriptive survey model to collect data. Participants of the research were 90 teachers, having project management experience in science…

An Assessment Audit is described consisting of 47 questions, each being scored 0 to 4, by the module team depending on the extent to which the audit point was satisfied. Scores of 2 or less indicated unsatisfactory provision. Audits were carried out on 14 bioscience- or medicine-based modules in 13 universities. There was great variability between…

Purpose: The purpose of this paper is to provide assessment guidelines which help to implement research-based education in science and technology areas, which would benefit from the quality of this type of education within this subject area. Design/methodology/approach: This paper is a reflection on, and analysis of, different aspects of…

College graduates are entering the world of work with record levels of school-related debt despite being generally unprepared to face the financial challenges of life after college. Findings from a needs assessment survey of financial topics of interest to college students and staff/faculty and preferences for how the information should be…

Purpose: The purpose of this paper is to chart developments in a community engagement scheme run by two Universities in the North East, offering students academic credit in return for work within the local community. The particular focus is on how learning has been assessed from this work experience, within the requirements of higher education…

Fully automated decoding of human activities and intentions from direct neural recordings is a tantalizing challenge in brain-computer interfacing. Implementing Brain Computer Interfaces (BCIs) outside carefully controlled experiments in laboratory settings requires adaptive and scalable strategies with minimal supervision. Here we describe an unsupervised approach to decoding neural states from naturalistic human brain recordings. We analyzed continuous, long-term electrocorticography (ECoG) data recorded over many days from the brain of subjects in a hospital room, with simultaneous audio and video recordings. We discovered coherent clusters in high-dimensional ECoG recordings using hierarchical clustering and automatically annotated them using speech and movement labels extracted from audio and video. To our knowledge, this represents the first time techniques from computer vision and speech processing have been used for natural ECoG decoding. Interpretable behaviors were decoded from ECoG data, including moving, speaking and resting; the results were assessed by comparison with manual annotation. Discovered clusters were projected back onto the brain revealing features consistent with known functional areas, opening the door to automated functional brain mapping in natural settings. PMID:27148018
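The two-stage pipeline described above, clustering the recordings first and then naming the clusters with externally derived labels, can be sketched in miniature (a naive single-linkage stand-in for the paper's hierarchical clustering, with toy two-dimensional "neural features" and hypothetical behavior labels):

```python
from collections import Counter

def single_linkage(points, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    with the smallest minimum pairwise Euclidean distance."""
    def d(i, j):
        return sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        a, b = min(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: min(d(i, j) for i in clusters[ab[0]] for j in clusters[ab[1]]),
        )
        clusters[a] += clusters.pop(b)  # b > a, so index a is unaffected
    return clusters

def annotate_clusters(clusters, labels):
    """Name each cluster by the majority label among its members
    (standing in for labels extracted from audio/video)."""
    return [Counter(labels[i] for i in c).most_common(1)[0][0] for c in clusters]
```

The majority vote is what makes the annotation automatic: the clustering never sees the labels, yet each discovered cluster still inherits an interpretable behavioral name.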

Background DNA barcoding and other DNA sequence-based techniques for investigating and estimating biodiversity require explicit methods for associating individual sequences with taxa, as it is at the taxon level that biodiversity is assessed. For many projects, the bioinformatic analyses required pose problems for laboratories whose prime expertise is not in bioinformatics. User-friendly tools are required for both clustering sequences into molecular operational taxonomic units (MOTU) and for associating these MOTU with known organismal taxonomies. Results Here we present jMOTU, a Java program for the analysis of DNA barcode datasets that uses an explicit, determinate algorithm to define MOTU. We demonstrate its usefulness for both individual specimen-based Sanger sequencing surveys and bulk-environment metagenetic surveys using long-read next-generation sequencing data. jMOTU is driven through a graphical user interface, and can analyse tens of thousands of sequences in a short time on a desktop computer. A companion program, Taxonerator, that adds traditional taxonomic annotation to MOTU, is also presented. Clustering and taxonomic annotation data are stored in a relational database, and are thus amenable to subsequent data mining and web presentation. Conclusions jMOTU efficiently and robustly identifies the molecular taxa present in survey datasets, and Taxonerator decorates the MOTU with putative identifications. jMOTU and Taxonerator are freely available from http://www.nematodes.org/. PMID:21541350
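A determinate MOTU-definition step of the kind jMOTU performs can be sketched as single-linkage clustering under a mismatch cutoff (a toy using Hamming distance on equal-length barcodes; jMOTU's actual algorithm works on alignments of real barcode sequences):

```python
def hamming(a, b):
    """Mismatch count between two equal-length sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def motu_clusters(seqs, cutoff):
    """Group sequences into MOTU: connected components (union-find) of the
    graph linking every pair within `cutoff` mismatches."""
    parent = list(range(len(seqs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(seqs)):
        for j in range(i + 1, len(seqs)):
            if hamming(seqs[i], seqs[j]) <= cutoff:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(seqs)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Single linkage means any chain of pairwise-similar sequences collapses into one MOTU, so the cutoff directly controls how finely the survey's diversity is split into taxa.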

A checklist is presented for assessing preschool handicapped and nonhandicapped children's development in six skill areas: language, gross motor, cognitive, fine motor, socio-emotional, and self help. The assessment describes a specific observable skill and includes information on the chronological age range. The chart provides space for recording…

Accurate gene model annotation of reference genomes is critical for making them useful. The modENCODE project has improved the D. melanogaster genome annotation by using deep and diverse high-throughput data. Since transcriptional activity that has been evolutionarily conserved is likely to have an advantageous function, we have performed large-scale interspecific comparisons to increase confidence in predicted annotations. To support comparative genomics, we filled in divergence gaps in the Drosophila phylogeny by generating draft genomes for eight new species. For comparative transcriptome analysis, we generated mRNA expression profiles on 81 samples from multiple tissues and developmental stages of 15 Drosophila species, and we performed cap analysis of gene expression in D. melanogaster and D. pseudoobscura. We also describe conservation of four distinct core promoter structures composed of combinations of elements at three positions. Overall, each type of genomic feature shows a characteristic divergence rate relative to neutral models, highlighting the value of multispecies alignment in annotating a target genome that should prove useful in the annotation of other high priority genomes, especially human and other mammalian genomes that are rich in noncoding sequences. We report that the vast majority of elements in the annotation are evolutionarily conserved, indicating that the annotation will be an important springboard for functional genetic testing by the Drosophila community. PMID:24985915

The Habitat Evaluation Procedures were used to evaluate pre- and post-construction habitat conditions of the US Bureau of Reclamation's Palisades Project in eastern Idaho. Eight evaluation species were selected, with losses expressed in the number of Habitat Units (HU's). One HU is equivalent to one acre of prime habitat. The evaluation estimated losses of 2454 HU's of mule deer habitat, 2276 HU's of mink habitat, 2622 HU's of mallard habitat, 805 HU's of Canada goose habitat, 2331 HU's of ruffed grouse habitat, 5941 and 18,565 HU's for breeding and wintering bald eagles, and 1336 and 704 HU's for forested and scrub-shrub wetland nongame species as a result of the project. The study area currently has 29 active osprey nests located around the reservoir, and the mudflats probably provide more feeding habitat for migratory shore birds and waterfowl than was previously available along the river. A comparison of flow conditions on the South Fork of the Snake River below the dam between pre- and post-construction periods also could not substantiate claims that water releases from the dam were causing more Canada goose nest losses than flow in the river prior to construction. 41 refs., 16 figs., 9 tabs.
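The Habitat Unit bookkeeping used by the Habitat Evaluation Procedures reduces to a product of habitat suitability and area; the sketch below shows the arithmetic (the suitability and acreage values are hypothetical, not figures from the Palisades study):

```python
def habitat_units(hsi, acres):
    """Habitat Units = Habitat Suitability Index (0.0-1.0) times area,
    so one acre of prime habitat (HSI = 1.0) equals one HU."""
    return hsi * acres

def hu_loss(pre, post):
    """Loss of HU's between pre- and post-construction conditions,
    each given as an (hsi, acres) pair."""
    return habitat_units(*pre) - habitat_units(*post)
```

For instance, a species whose habitat drops from 5,000 acres at HSI 0.5 to 4,000 acres at HSI 0.25 loses 1,500 HU's, even though only 1,000 acres were physically lost.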

United States. Bonneville Power Administration; Washington Department of Fish and Wildlife; Confederated Tribes and Bands of the Yakama Nation (1999-01-01)

Before the Bonneville Power Administration (BPA) decides whether to fund a program to reintroduce coho salmon to mid-Columbia River basin tributaries, research is needed to determine the ecological risks and biological feasibility of such an effort. Since the early 1900s, the native stock of coho has been decimated in the tributaries of the middle reach of the Columbia River. The four Columbia River Treaty Tribes identified coho reintroduction in the mid-Columbia as a priority in the Tribal Restoration Plan. It is a comprehensive plan put forward by the Tribes to restore the Columbia River fisheries. In 1996, the Northwest Power Planning Council (NPPC) recommended the tribal mid-Columbia reintroduction project for funding by BPA. It was identified as one of fifteen high-priority supplementation projects for the Columbia River basin, and was incorporated into the NPPC's Fish and Wildlife Program. The release of coho from lower Columbia hatcheries into mid-Columbia tributaries is also recognized in the Columbia River Fish Management Plan.

This environmental assessment analyzes the environmental impacts of replacing the Winnett School District complex's existing oil-fired heating system with a new coal-fired heating system with funds provided from a grant under the Institutional Conservation Program. This Assessment has been prepared in accordance with the provisions of the National Environmental Policy Act (NEPA); the Council on Environmental Quality's regulations; the Department's Implementing Procedures and Guidelines Revocation; and the May 1993 "Recommendations for the Preparation of Environmental Assessments and Environmental Impact Statements," by the Department's Office of NEPA Oversight. Under the Institutional Conservation Program, created by the National Energy Conservation Policy Act (PL 95-619), the Department is authorized to encourage energy conservation by providing funding for up to 50 percent of the costs of installation of qualified energy conservation measures by entities such as schools, hospitals, and other buildings owned by local governments. This proposed action to partially fund the installation of a new coal-fired heating system for the Winnett School District is part of this energy conservation program.

The paper examines hand-written annotation, its many features, difficulties and strengths as a feedback tool. It extends and clarifies what modest evidence is in the public domain and offers an evaluation of how to use annotation effectively in the support of student feedback [Marshall, C.M., 1998a. The Future of Annotation in a Digital (paper) World. Presented at the 35th Annual GLSLIS Clinic: Successes and Failures of Digital Libraries, June 20-24, University of Illinois at Urbana-Champaign, March 24, pp. 1-20; Marshall, C.M., 1998b. Toward an ecology of hypertext annotation. Hypertext. In: Proceedings of the Ninth ACM Conference on Hypertext and Hypermedia, June 20-24, Pittsburgh, Pennsylvania, US, pp. 40-49; Wolfe, J.L., Neuwirth, C.M., 2001. From the margins to the centre: the future of annotation. Journal of Business and Technical Communication, 15(3), 333-371; Diyanni, R., 2002. One Hundred Great Essays. Addison-Wesley, New York; Wolfe, J.L., 2002. Marginal pedagogy: how annotated texts affect writing-from-source texts. Written Communication, 19(2), 297-333; Liu, K., 2006. Annotation as an index to critical writing. Urban Education, 41, 192-207; Feito, A., Donahue, P., 2008. Minding the gap: annotation as preparation for discussion. Arts and Humanities in Higher Education, 7(3), 295-307; Ball, E., 2009. A participatory action research study on handwritten annotation feedback and its impact on staff and students. Systemic Practice and Action Research, 22(2), 111-124; Ball, E., Franks, H., McGrath, M., Leigh, J., 2009. Annotation is a valuable tool to enhance learning and assessment in student essays. Nurse Education Today, 29(3), 284-291]. Although a significant number of studies examine annotation, this is largely related to on-line tools and computer mediated communication and not hand-written annotation as comment, phrase or sign written on the student essay to provide critique. Little systematic research has been conducted to consider how this latter form

The needs assessment, a popular tool for helping businesses determine what they need to do to improve their operations, services, and products, can serve as an excellent vehicle for meeting many of the objectives in an introductory business communication course. A needs assessment project, as with other university-business collaborations, provides…

The guide, developed by the Secondary Transition and Employment Project (STEP) in Idaho, describes a rationale and model for implementing secondary/vocational assessment of students with disabilities that is integrated with curriculum and transition strategies. Assessment and curricular strategies are particularly intended for students in rural…

The Vocational Assessment of Special Needs Individuals Project originated with the regional vocational schools and educational collaboratives of the Assabet and Blackstone Valleys cooperating to determine a meaningful process through which vocational assessment information could be collected, organized, and used in formulating a basis for…

The Project to Assess the Status of Women Students and Employees in Vocational Education assessed number and distribution of women students and employees in vocational programs, their perceptions of equal access, and extent of efforts to address sex equity. Its main activity was a survey of 3,609 vocational students, 455 faculty, 126 counselors,…

The ILSI Health and Environmental Sciences Institute (HESI) established a project in 2009 on Animal Alternatives in Environmental Risk Assessment (AA-ERA) following a successful two-year emerging issues assessment of the topic. The early stages of this work included the execution...

The California New Teacher Project (CNTP) commissioned pilot tests of assessment instruments during 1990. This document is the final report and analysis of the administration and scoring of these assessment instruments. The document, organized into 11 chapters, begins with an introduction describing research on new and experienced teachers,…

This study proposes an advanced method to factor in the contributions of individual group members engaged in an integrated group project using peer assessment procedures. Conway et al. proposed the Individual Weight Factor (IWF) method for peer assessment which has been extensively developed over the years. However, most methods associated with…
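
The core idea behind Individual Weight Factor (IWF) methods can be illustrated by a minimal sketch: each member's share of the group mark is scaled by their peer score relative to the group mean. The names and scores below are invented, and the simple score/mean weighting is only an approximation of the Conway et al. method and its later refinements, which differ in detail.

```python
def individual_marks(group_mark, peer_scores):
    """Distribute a group mark using a simple Individual Weight Factor.

    peer_scores maps each member to the total score awarded by peers.
    A member's IWF is their score divided by the group's mean score;
    their individual mark is the group mark scaled by that factor.
    """
    mean_score = sum(peer_scores.values()) / len(peer_scores)
    return {name: group_mark * (score / mean_score)
            for name, score in peer_scores.items()}

# Illustrative example: a group mark of 70 redistributed among three members.
marks = individual_marks(70, {"ana": 90, "ben": 80, "cho": 70})
```

A member scoring exactly the group mean receives the unadjusted group mark; higher or lower peer ratings shift the mark proportionally.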

The emphasis in this annotated bibliography is citizen participation in education in the areas of decision making, policy development, and school governance. The focus is on the public school and school system rather than on private and parochial schools. One hundred fifty books, parts of books, and published reports are annotated, together with…

Discusses studies of German students using "CyberBuch," a hypermedia application for reading German texts that contains annotations for words in the form of text, pictures, and video. The article examines incidental vocabulary learning, the effectiveness of different types of annotations for vocabulary acquisition, and the effect of look-up…

In October 1994 climate researchers met at the Forum on Global Change Modeling to create a consensus document summarizing the debate on issues related to the use of climate models to influence policy. The charge to the Forum was to develop a brief statement on the credibility of projections of climate change provided by General Circulation Models. The Forum focused specifically on the climate aspects of the entire global change issue, not on emission scenarios, the consequences of change to ecosystems and natural resource systems, or the socio-economic implications and potential for responses. The Forum report put thoughts on this often divisive issue into perspective for use by the General Accounting Office in developing and considering national policy options. The Forum was organized in response to requests from the White House Office of Science and Technology by the Subcommittee on Global Change Research, a branch of the new Committee on Earth and Natural Resources set up by the Clinton administration.

This summary report describes efforts to bridge developments in the use of technology in delivering higher education services with the traditional roles and responsibilities of state agencies, accrediting bodies, and institutions (Project ALLTEL). Designed to suggest policies and procedures to states and accrediting institutions charged with…

This is the first of two articles in which the author shares experiences gained from the development and delivery of a business/industry project-based capstone course. The course integrates research, proposal development and design experience based on knowledge and skills acquired in earlier coursework. The course also incorporates standards and…

Starting a new research project can be a challenge, but especially so in education research because the literature is scattered throughout many journals. Relevant astronomy education research may be in psychology journals, science education journals, physics education journals, or even in science journals. Tracking the vast realm of literature is difficult, especially because libraries frequently do not subscribe to many of the relevant journals and abstracting services. The Searchable Annotated Bibliography of Education Research (SABER) is an online resource that was started to serve the needs of the astronomy education community, specifically to reduce this "scatter" by compiling an annotated bibliography of education research articles in one electronic location. Although SABER started in 2001, the database has a new URL—http://astronomy.uwp.edu/saber/—and has recently undergone a major update.

This is a detailed description of an assessment that can be used in a graduate level of study in the area of public school finance. This has been approved by NCATE as meeting all of the stipulated ELCC standards for which it is designed (1.1, 1.2, 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 3.1, 3.2, 3.3, 4.1, 4.2, 4.3, 5.1, 5.2, 5.3.). This course of…

Image retrieval at the semantic level mostly depends on image annotation or image classification. Image annotation performance largely depends on three issues: (1) automatic image feature extraction; (2) semantic image concept modeling; (3) an algorithm for semantic image annotation. To address the first issue, multilevel features are extracted to construct the feature vector, which represents the contents of the image. To address the second issue, a domain-dependent concept hierarchy is constructed for interpretation of image semantic concepts. To address the third issue, automatic multilevel code generation is proposed for image classification and multilevel image annotation. We make use of the existing image annotation to address the second and third issues. Our experiments on a specific domain of X-ray images have given encouraging results. PMID:17846834
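
The multilevel annotation idea above can be sketched in a few lines: an image classified at a leaf concept of a domain-dependent hierarchy inherits all of its ancestor concepts as annotations. The hierarchy and concept names below are invented for illustration, not the paper's actual X-ray ontology.

```python
# Toy domain-dependent concept hierarchy: child concept -> parent concept.
HIERARCHY = {
    "chest x-ray": "x-ray",
    "hand x-ray": "x-ray",
    "x-ray": "medical image",
    "medical image": None,  # root
}

def multilevel_annotation(leaf_concept):
    """Walk up the hierarchy, collecting annotations from leaf to root."""
    labels = []
    node = leaf_concept
    while node is not None:
        labels.append(node)
        node = HIERARCHY[node]
    return labels

labels = multilevel_annotation("chest x-ray")
```

An image assigned the leaf "chest x-ray" is thereby also retrievable under the broader concepts "x-ray" and "medical image".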

The McLennan Community College Multi-County Needs Assessment Project's (MAP) survey to identify employer needs is discussed in the document. The Business and Industry Survey, one component of MAP, was conducted in the central Texas area (Bosque, Falls, Hill, and McLennan counties) during 1974-1975. Survey development and procedures for its…

Systems that allow users to store and retrieve spatial data, provide for analyses of spatial data, and offer highly detailed display of spatial data are referred to as geographic information systems, or more typically, GIS. Since their initial usage in the 1960s, GISs have evolved as a means of assembling and analyzing diverse data pertaining to specific geographical areas, with spatial locations of the data serving as the organizational basis for the information systems. The structure of GISs is built around spatial identifiers and the methods used to encode data for storage and manipulation. This paper examines how GIS has been used in typical environmental assessment, its use for cumulative impact assessment, and explores litigation that occurred in the United States Federal court system where GIS was used in some aspect of cumulative effects. The paper also summarizes fifteen case studies that range from area wide transportation planning to wildlife and habitat impacts, and draws together a few lessons learned from this review of literature and litigation.

Presenting science fair projects gave students an opportunity to complete a performance assessment that comprised a meaningful task focused on process and subject to standards-based assessment. Students presented science inquiry and engineering design projects to judges at a regional science fair. The judges used the domains of the Potter Rubrics to assess the students' work and assigned a Quality score to each project. Using multiple regression, this study found that the mean scores on the Methods and Analysis domains predicted the mean Quality scores. Analyzing the technical quality of the Potter Rubrics addressed some of the measurement and generalizability concerns about performance assessments. Recommendations for future research and implications for practice were examined.

Background Manually annotated corpora are critical for the training and evaluation of automated methods to identify concepts in biomedical text. Results This paper presents the concept annotations of the Colorado Richly Annotated Full-Text (CRAFT) Corpus, a collection of 97 full-length, open-access biomedical journal articles that have been annotated both semantically and syntactically to serve as a research resource for the biomedical natural-language-processing (NLP) community. CRAFT identifies all mentions of nearly all concepts from nine prominent biomedical ontologies and terminologies: the Cell Type Ontology, the Chemical Entities of Biological Interest ontology, the NCBI Taxonomy, the Protein Ontology, the Sequence Ontology, the entries of the Entrez Gene database, and the three subontologies of the Gene Ontology. The first public release includes the annotations for 67 of the 97 articles, reserving two sets of 15 articles for future text-mining competitions (after which these too will be released). Concept annotations were created based on a single set of guidelines, which has enabled us to achieve consistently high interannotator agreement. Conclusions As the initial 67-article release contains more than 560,000 tokens (and the full set more than 790,000 tokens), our corpus is among the largest gold-standard annotated biomedical corpora. Unlike most others, the journal articles that comprise the corpus are drawn from diverse biomedical disciplines and are marked up in their entirety. Additionally, with a concept-annotation count of nearly 100,000 in the 67-article subset (and more than 140,000 in the full collection), the scale of conceptual markup is also among the largest of comparable corpora. The concept annotations of the CRAFT Corpus have the potential to significantly advance biomedical text mining by providing a high-quality gold standard for NLP systems. The corpus, annotation guidelines, and other associated resources are freely available at http

Background Modeling results from chicken microarray studies is challenging for researchers due to the limited functional annotation associated with these arrays. The Affymetrix GeneChip chicken genome array, one of the largest arrays serving as a key research tool for the study of chicken functional genomics, is among the few arrays that link gene products to Gene Ontology (GO). However, the GO annotation data presented by Affymetrix is incomplete; for example, it does not show references linked to manually annotated functions. In addition, there is no tool that allows microarray researchers to directly retrieve functional annotations for their datasets from the annotated arrays. This costs researchers a significant amount of time searching multiple GO databases for functional information. Results We have improved the breadth of functional annotations of the gene products associated with probesets on the Affymetrix chicken genome array by 45% and the quality of annotation by 14%. We have also identified the most significant diseases and disorders, different types of genes, and known drug targets represented on the Affymetrix chicken genome array. To facilitate functional annotation of other arrays and microarray experimental datasets, we developed an Array GO Mapper (AGOM) tool to help researchers quickly retrieve corresponding functional information for their datasets. Conclusion Results from this study will directly facilitate annotation of other chicken arrays and microarray experimental datasets. Researchers will be able to quickly model their microarray datasets into more reliable biological functional information by using the AGOM tool. The diseases, disorders, gene types and drug targets revealed in the study will allow researchers to learn more about how genes function in complex biological systems and may lead to new drug discovery and development of therapies. The GO annotation data generated will be available for public use via the AgBase website and will be updated on a regular basis.
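
The kind of lookup such a mapper performs can be sketched as a simple table join from probeset identifiers to GO terms. The probeset IDs and GO annotations below are invented for illustration; AGOM's actual data sources and interface are not reproduced here.

```python
# Illustrative annotation table: probeset ID -> list of GO term annotations.
go_annotation = {
    "Gga.1234": ["GO:0006915 apoptotic process"],
    "Gga.5678": ["GO:0006412 translation",
                 "GO:0003735 structural constituent of ribosome"],
}

def annotate_dataset(probeset_ids, annotation_table):
    """Return GO annotations for each probeset, flagging unannotated ones."""
    return {pid: annotation_table.get(pid, ["unannotated"])
            for pid in probeset_ids}

result = annotate_dataset(["Gga.1234", "Gga.9999"], go_annotation)
```

Flagging unannotated probesets explicitly, rather than dropping them, makes annotation gaps in an experimental dataset visible at a glance.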

This unit describes how to use the genome annotation and curation tools MAKER and MAKER-P to annotate protein coding and non-coding RNA genes in newly assembled genomes, update/combine legacy annotations in light of new evidence, add quality metrics to annotations from other pipelines, and map existing annotations to a new assembly. MAKER and MAKER-P can rapidly annotate genomes of any size, and scale to match available computational resources. PMID:25501943

We report the current status of the FlyBase annotated gene set for Drosophila melanogaster and highlight improvements based on high-throughput data. The FlyBase annotated gene set consists entirely of manually annotated gene models, with the exception of some classes of small non-coding RNAs. All gene models have been reviewed using evidence from high-throughput datasets, primarily from the modENCODE project. These datasets include RNA-Seq coverage data, RNA-Seq junction data, transcription start site profiles, and translation stop-codon read-through predictions. New annotation guidelines were developed to take into account the use of the high-throughput data. We describe how this flood of new data was incorporated into thousands of new and revised annotations. FlyBase has adopted a philosophy of excluding low-confidence and low-frequency data from gene model annotations; we also do not attempt to represent all possible permutations for complex and modularly organized genes. This has allowed us to produce a high-confidence, manageable gene annotation dataset that is available at FlyBase (http://flybase.org). Interesting aspects of new annotations include new genes (coding, non-coding, and antisense), many genes with alternative transcripts with very long 3′ UTRs (up to 15–18 kb), and a stunning mismatch in the number of male-specific genes (approximately 13% of all annotated gene models) vs. female-specific genes (less than 1%). The number of identified pseudogenes and mutations in the sequenced strain also increased significantly. We discuss remaining challenges, for instance, identification of functional small polypeptides and detection of alternative translation starts. PMID:26109357

A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final…

A group of research projects based at HP-Labs Bristol, the University of Bristol (England) and ARKive (a new large multimedia database project focused on the world's biodiversity, based in the United Kingdom) are working to develop a flexible model for the indexing of multimedia collections that allows users to annotate content utilizing extensible…

This annotated bibliography of 1,997 projects dated from 1955-1969 and administered by branches of the Social and Rehabilitation Service is divided into four parts: (1) Research and Demonstration Projects which were authorized by the 1954 amendments of the Vocational Rehabilitation Act, listed under 22 subject headings, (2) Cooperative Research or…

While bacterial genome annotations have significantly improved in recent years, techniques for bacterial proteome annotation (including post-translational chemical modifications, signal peptides, proteolytic events, etc.) are still in their infancy. At the same time, the number of sequenced bacterial genomes is rising sharply, far outpacing our ability to validate the predicted genes, let alone annotate bacterial proteomes. In this study, we use tandem mass spectrometry (MS/MS) to annotate the proteome of Shewanella oneidensis MR-1, an important microbe for bioremediation. In particular, we provide the first comprehensive map of post-translational modifications in a bacterial genome, including a large number of chemical modifications, signal peptide cleavages and cleavage of N-terminal methionine residues. We also detect multiple genes that were missed or assigned incorrect start positions by gene prediction programs and suggest corrections to improve the gene annotation. This study demonstrates that complementing every genome sequencing project by an MS/MS project would significantly improve both genome and proteome annotations for a reasonable cost.

The DOE-JGI Metagenome Annotation Pipeline (MAP v.4) performs structural and functional annotation for metagenomic sequences that are submitted to the Integrated Microbial Genomes with Microbiomes (IMG/M) system for comparative analysis. The pipeline runs on nucleotide sequences provided via the IMG submission site. Users must first define their analysis projects in GOLD and then submit the associated sequence datasets consisting of scaffolds/contigs with optional coverage information and/or unassembled reads in fasta and fastq file formats. The MAP processing consists of feature prediction including identification of protein-coding genes, non-coding RNAs and regulatory RNAs, as well as CRISPR elements. Structural annotation is followed by functional annotation including assignment of protein product names and connection to various protein family databases. PMID:26918089

Genome annotation projects can produce incorrect results if they are based on obsolete data or inappropriate models. We have developed an automatic re-annotation system that uses agents to perform repetitive tasks and reports the results to the user. These tasks involve BLAST searches on biological databases (GenBank) and the use of detection tools (GeneMark and Glimmer) to identify new open reading frames. Several agents execute these tools and combine their results to produce a list of open reading frames that is sent back to the user. Our goal was to reduce the manual work, executing most tasks automatically by computational tools. A prototype was implemented and validated using the original annotated genomes of Mycoplasma pneumoniae and Haemophilus influenzae. The results reported by the system identify most of the new features present in the re-annotated versions of these genomes. PMID:16342042
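
The open-reading-frame detection step that such agents automate can be sketched as a codon scan: find an ATG, read in-frame until a stop codon, and keep candidates above a length threshold. This is a minimal forward-strand sketch only; a real pipeline like the one described would also scan the reverse complement and cross-check candidates against BLAST hits.

```python
STOP_CODONS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=3):
    """Scan the forward strand in all three frames for ATG-to-stop ORFs.

    Returns (start, end) 0-based half-open coordinates, stop codon included,
    for ORFs of at least `min_codons` codons (start and stop counted).
    """
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i + 3] == "ATG":
                for j in range(i + 3, len(seq) - 2, 3):
                    if seq[j:j + 3] in STOP_CODONS:
                        if (j + 3 - i) // 3 >= min_codons:
                            orfs.append((i, j + 3))
                        i = j  # resume scanning after this stop codon
                        break
            i += 3
    return orfs
```

For example, `find_orfs("ATGAAATAA")` reports the single three-codon ORF spanning the whole sequence.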

Under the Tritium Facility Modernization & Consolidation (TFM&C) Project (S-7726) at the Savannah River Site (SRS), all tritium processing operations in Building 232-H, with the exception of extraction and obsolete/abandoned systems, will be reestablished in Building 233-H. These operations include hydrogen isotopic separation, loading and unloading of tritium shipping and storage containers, tritium recovery from zeolite beds, and stripping of nitrogen flush gas to remove tritium prior to stack discharge. The scope of the TFM&C Project also provides for a new replacement R&D tritium test manifold in 233-H, upgrading of the 233-H Purge Stripper and 233-H/234-H building HVAC, a new 234-H motor control center equipment building, and relocating the 232-H Materials Test Facility metallurgical laboratories (met labs), flow tester, and life storage program environment chambers to 234-H.

The intent of this internal memo is to provide a recommendation for the transfer of tank 241-C-106 waste, Attachment 2, to tank 241-AY-102. This internal memo also identifies additional requirements which have been deemed necessary for safely receiving and storing the waste documented in Attachment 2 from tank 241-C-106 in tank 241-AY-102. This waste transfer is planned in support of tank 241-C-106 solids sluicing activities. Approximately 200,000 gallons of waste and flush water are expected to be pumped from tank 241-C-106 into tank 241-AY-102. Several transfers will be necessary to complete the sluicing of tank 241-C-106 solids. To assure ourselves that this waste transfer will not create any compatibility concerns, a waste compatibility assessment adhering to current waste compatibility requirements has been performed.

A series of scoping analyses have been completed investigating the thermal-hydraulic performance and feasibility of the Spent Nuclear Fuel Project (SNFP) Integrated Process Strategy (IPS). The SNFP was established to develop engineered solutions for the expedited removal, stabilization, and storage of spent nuclear fuel from the K Basins at the U.S. Department of Energy's Hanford Site in Richland, Washington. The subject efforts focused on independently investigating, quantifying, and establishing the governing heat production and removal mechanisms for each of the IPS operations and configurations, obtaining preliminary results for comparison with and verification of other analyses, and providing technology-based recommendations for consideration and incorporation into the design bases for the SNFP. The goal was to develop a series of thermal-hydraulic models that could respond to all process and safety-related issues that may arise pertaining to the SNFP. A series of sensitivity analyses were also performed to help identify those parameters that have the greatest impact on energy transfer and, hence, temperature control. It is anticipated that the subject thermal-hydraulic models will form the basis for a series of advanced and more detailed models that will more accurately reflect the thermal performance of the IPS and alleviate the necessity for some of the more conservative assumptions and oversimplifications, as well as form the basis for the final process and safety analyses.

The AFGD process as demonstrated by Pure Air at the Bailly Station offers a reliable and cost-effective means of achieving a high degree of SO₂ emissions reduction when burning high-sulfur coals. Many innovative features have been successfully incorporated in this process, and it is ready for widespread commercial use. The system uses a single-loop cocurrent scrubbing process with in-situ oxidation to produce wallboard-grade gypsum instead of wet sludge. A novel wastewater evaporation system minimizes effluents. The advanced scrubbing process uses a common absorber to serve multiple boilers, thereby saving on capital through economies of scale. Major results of the project are: (1) SO₂ removal of over 94 percent was achieved over the three-year demonstration period, with a system availability exceeding 99.5 percent; (2) a large, single absorber handled the combined flue gas of boilers generating 528 MWe of power, and no spares were required; (3) direct injection of pulverized limestone into the absorber was successful; (4) wastewater evaporation eliminated the need for liquid waste disposal; and (5) the gypsum by-product was used directly for wallboard manufacture, eliminating the need to dispose of waste sludge.

This report documents the results of the Environment, Safety, and Health (ES&H) Progress Assessment of the Fernald Environmental Management Project (FEMP), Fernald, Ohio, conducted from October 15 through October 25, 1991. The Secretary of Energy directed that small, focused, ES&H Progress Assessments be performed as part of the continuing effort to institutionalize line management accountability and the self-assessment process in the areas of ES&H. The FEMP assessment is the pilot assessment for this new program. The objectives for the FEMP ES&H Progress Assessment were to assess: (1) how the FEMP has progressed since the 1989 Tiger Assessment; (2) how effectively the FEMP has corrected specific deficiencies and associated root causes identified by that team; and (3) whether the current organization, resources, and systems are sufficient to proactively manage ES&H issues.

This publication presents the reports from two studies, Math Online (MOL) and Writing Online (WOL), part of the National Assessment of Educational Progress (NAEP) Technology-Based Assessment (TBA) project. Funded by the National Center for Education Statistics (NCES), the Technology-Based Assessment project is intended to explore the use of new…

Impacts of climate change on agricultural production are likely to negatively affect food security. However, large uncertainties exist in future projections of agricultural yields, as well as regional differences in the direction and magnitude of the projected changes. An important question with regard to uncertainties in future crop yield projections is how to translate the modelling range into results meaningful for impact analyses and provide policy-relevant information. One way of addressing this question is to use a risk-based approach, analysing the risk of yield reductions at different levels of temperature increase on the basis of modelling intercomparison data (AgMIP). To assess regional-scale differences in yield changes, we look at aggregates of agricultural production within the 26 regions defined in the IPCC SREX report. Using the available output of the AgMIP project, we assess the projected risk of regional yield reductions for maize, rice, wheat and soy at incremental steps of 0.5°C warming. Based on production areas of the year 2000 (MIRCA2000, Portmann, 2011), we assess projected yield changes only within current production areas, thereby excluding potential cropland expansion. Our approach provides an additional viewpoint to the existing analyses of the output of the AgMIP project. References: Portmann, F.T. (2011): Global estimation of monthly irrigated and rainfed crop areas on a 5 arc-minute grid. Frankfurt Hydrology Paper 09, Institute of Physical Geography, University of Frankfurt, Frankfurt am Main, Germany.

To perform task-based image quality assessment in CT, it is desirable to have a large number of realistic patient images with known diagnostic truth. One effective way to achieve this objective is to create hybrid images that combine patient images with simulated lesions. Because conventional hybrid images generated in the image domain fail to reflect the impact of scan and reconstruction parameters on lesion appearance, this study explored a projection-domain approach. Liver lesion models were forward projected according to the geometry of a commercial CT scanner to acquire lesion projections. The lesion projections were then inserted into patient projections (decoded from commercial CT raw data with the assistance of the vendor) and reconstructed to acquire hybrid images. To validate the accuracy of the forward projection geometry, simulated images reconstructed from the forward projections of a digital ACR phantom were compared to physically acquired ACR phantom images. To validate the hybrid images, lesion models were inserted into patient images and visually assessed. Results showed that the simulated phantom images and the physically acquired phantom images had great similarity in terms of HU accuracy and high-contrast resolution. The lesions in the hybrid images had a realistic appearance and merged naturally into the liver background. In addition, the inserted lesions demonstrated reconstruction-parameter-dependent appearance. Compared to the conventional image-domain approach, our method enables more realistic hybrid images for image quality assessment.
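
Projection-domain insertion works because forward projection is linear: adding a lesion's projections to the patient's projections yields exactly the projections of the hybrid (patient plus lesion) image. A toy two-view parallel-beam sketch illustrates the principle; the commercial scanner's fan/cone-beam geometry is not reproduced here, and the images are random stand-ins.

```python
import numpy as np

def forward_project(image):
    """Toy parallel-beam projector: line integrals at 0° and 90° only."""
    return np.concatenate([image.sum(axis=0), image.sum(axis=1)])

rng = np.random.default_rng(0)
patient = rng.random((64, 64))       # stand-in for a patient image
lesion = np.zeros((64, 64))
lesion[30:34, 30:34] = 0.5           # simple synthetic lesion model

# By linearity, inserting the lesion in the projection domain equals
# projecting the hybrid image directly:
hybrid_sino = forward_project(patient) + forward_project(lesion)
direct_sino = forward_project(patient + lesion)
```

This equality is what lets lesion projections be inserted into decoded raw data and carried through any reconstruction, so the lesion inherits the scan- and reconstruction-parameter-dependent appearance.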

The Test Area North (TAN) Pool is located within the fenced TAN facility boundaries on the Idaho National Engineering Laboratory (INEL). The TAN Pool stores 344 canisters of core debris from the March 1979 Three Mile Island (TMI) Unit 2 reactor accident; fuel assemblies from Loss-of-Fluid Tests (LOFT); and Government-owned commercial fuel rods and assemblies. The LOFT and Government-owned commercial fuel rods and assemblies are hereafter referred to collectively as "commercial fuels" except where distinction between the two is important to the analysis. DOE proposes to remove the canisters of TMI core debris and commercial fuels from the TAN Pool and transfer them to the Idaho Chemical Processing Plant (ICPP) for interim dry storage until an alternate storage location other than at the INEL, or a permanent federal spent nuclear fuel (SNF) repository, is available. The TAN Pool would be drained and placed in an industrially and radiologically safe condition for refurbishment or eventual decommissioning. This environmental assessment (EA) identifies and evaluates environmental impacts associated with (1) constructing an Interim Storage System (ISS) at ICPP; (2) removing the TMI and commercial fuels from the pool and transporting them to ICPP for placement in an ISS; and (3) draining and stabilizing the TAN Pool. Miscellaneous hardware would be removed and decontaminated or disposed of in the INEL Radioactive Waste Management Complex (RWMC). This EA also describes the environmental consequences of the no-action alternative.

The Ocean Thermal Energy Conversion (OTEC) principle is discussed along with general system and cycle types, specific OTEC designs, applications, and the ocean thermal resource. The historic development and present status of OTEC are reviewed. Power system components of the more technically advanced closed-cycle OTEC concept are discussed: heat exchangers, corrosion and biofouling countermeasures, working fluids, ammonia power systems, and on-platform seawater systems. Several open-cycle features are also discussed. A critical review of the ocean engineering aspects of the OTEC power system is presented. Major subsystems such as the platform, cold water pipe, mooring system, dynamic positioning system and power transmission cable system are assessed for their relationships with the ocean environment and with each other. Nine available studies of OTEC costs are reviewed, and tentative comparisons are made between OTEC and traditional fuel costs. OTEC products and markets are considered. Possible environmental and social effects of OTEC development are discussed. International and national laws regulating OTEC plants are reviewed, specifically the United Nations Third Conference on the Law of the Sea and the Ocean Thermal Energy Conversion Act of 1980. Coast Guard regulations, OSHA laws, and state and local government regulations are also considered, as well as attitudes of the utilities. (LEW)

As Digital Libraries (DL) become more aligned with the web architecture, their functional components need to be fundamentally rethought in terms of URIs and HTTP. Annotation, a core scholarly activity enabled by many DL solutions, exhibits a clearly unacceptable characteristic when existing models are applied to the web: due to the representations of web resources changing over time, an annotation made about a web resource today may no longer be relevant to the representation that is served from that same resource tomorrow. We assume the existence of archived versions of resources, and combine the temporal features of the emerging Open Annotation data model with the capability offered by the Memento framework that allows seamless navigation from the URI of a resource to archived versions of that resource, and arrive at a solution that provides guarantees regarding the persistence of web annotations over time. More specifically, we provide theoretical solutions and proof-of-concept experimental evaluations for two problems: reconstructing an existing annotation so that the correct archived version is displayed for all resources involved in the annotation, and retrieving all annotations that involve a given archived version of a web resource.
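
The navigation from a resource's URI to an archived version relies on Memento's datetime negotiation (RFC 7089), in which a client sends an `Accept-Datetime` header to a TimeGate for the resource. This minimal snippet only builds that header; the TimeGate URI, the follow-up redirect to a memento, and the Open Annotation reconstruction logic are omitted.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def memento_headers(when):
    """Build the Accept-Datetime header for Memento datetime negotiation.

    `when` must be a timezone-aware UTC datetime; the header value uses the
    HTTP-date format required by RFC 7089.
    """
    return {"Accept-Datetime": format_datetime(when, usegmt=True)}

headers = memento_headers(datetime(2012, 3, 20, 12, 0, 0, tzinfo=timezone.utc))
```

A client annotating (or re-displaying an annotation of) a page as it existed at a given time would attach these headers to its request to the resource's TimeGate.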

Sequencing centers such as the Human Genome Center at LBNL are producing an ever-increasing flood of genetic data. Annotation can greatly enhance the biological value of these sequences. Useful annotations include possible gene locations, homologies to known genes, and gene signals such as promoters and splice sites. Genotator is a workbench for automated sequence annotation and annotation browsing. The back end runs a series of sequence analysis tools on a DNA sequence, handling the various input and output formats required by the tools. Genotator currently runs five different gene-finding programs, three homology searches, and searches for promoters, splice sites, and ORFs. The results of the analyses run by Genotator can be viewed with the interactive graphical browser. The browser displays color-coded sequence annotations on a canvas that can be scrolled and zoomed, allowing the annotated sequence to be explored at multiple levels of detail. The user can view the actual DNA sequence in a separate window; when a region is selected in the map display, it is highlighted automatically in the sequence display, and vice versa. By displaying the output of all of the sequence analyses, Genotator provides an intuitive way to identify the significant regions (for example, probable exons) in a sequence. Users can interactively add personal annotations to label regions of interest. Additional capabilities of Genotator include primer design and pattern searching. PMID:9253604

Hypothetical proteins (HPs) are proteins predicted to be expressed from an open reading frame; they make up a substantial fraction of the proteomes of both prokaryotes and eukaryotes. Genome projects have led to the identification of many therapeutic targets, the putative functions of proteins, and their interactions. In this review we survey various methods linking annotation to structural and functional prediction of HPs that assist in the discovery of new structures and functions serving as markers and pharmacological targets for drug design, discovery, and screening. Further, we give an overview of how mass spectrometry, as an analytical technique, is used to validate protein characterisation. We discuss how microarrays and protein expression profiles help in understanding biological systems through a systems-wide study of proteins and their interactions with other proteins and non-proteinaceous molecules that control complex processes in cells. Finally, we articulate the challenges, focusing on how next-generation sequencing methods have accelerated multiple areas of genomics, with special attention to uncharacterized proteins. PMID:25873935

Basic technical and economic examinations of Austrian mass waste landfills, concerning the recovery of secondary raw materials, have been carried out for the first time in Austria by the 'LAMIS - Landfill Mining Austria' pilot project. A main focus of the research - and the subject of this article - was to subject a pilot landfill, for the first time, to an integrated ecological and economic assessment so that the feasibility of a landfill mining project could be verified before it commenced. A Styrian mass waste landfill was chosen for this purpose; it had been put into operation in 1979 and received mechanically-biologically pre-treated municipal waste until 2012. The whole assessment procedure was divided into preliminary and main assessment phases to evaluate the general suitability of a landfill mining project with little financial and human resource expense. A portfolio chart, based on a questionnaire, was created for the preliminary assessment that, as a result, provided a recommendation for the subsequent investigation - the main assessment phase. In this phase, specific economic criteria were assessed by net present value calculation, while ecological or socio-economic criteria were rated by utility analysis, transferring the result into a utility-net present value chart. For the examined pilot landfill, the landfill mining project produced a higher utility but a lower net present value than leaving the landfill in aftercare. Since no clearly preferable scenario could be identified this way, a cost-revenue analysis was carried out in addition, which determined a dimensionless ratio: the 'utility - net present value quotient' of both scenarios. Comparing this quotient showed unmistakably that, in the overall assessment, 'leaving the landfill in aftercare' was preferable to a 'landfill mining project' in that specific case. PMID:27170192

The U.S. Department of Energy's National Renewable Energy Laboratory collaborates with the solar industry to establish high quality solar and meteorological measurements. This Solar Resource and Meteorological Assessment Project (SOLRMAP) provides high quality measurements to support deployment of power projects in the United States. The no-funds-exchanged collaboration brings NREL solar resource assessment expertise together with industry needs for measurements. The end result is high quality data sets to support the financing, design, and monitoring of large scale solar power projects for industry in addition to research-quality data for NREL model development. NREL provides consultation for instrumentation and station deployment, along with instrument calibrations, data acquisition, quality assessment, data distribution, and summary reports. Industry participants provide equipment, infrastructure, and station maintenance.

The hidden Markov model (HMM) has been widely applied in bearing performance degradation assessment. As a machine learning-based model, its accuracy consequently depends on the sensitivity of the features used to estimate the degradation performance of bearings. It is a major challenge to extract effective features that are not influenced by qualities or attributes uncorrelated with the bearing's degradation condition. In this paper, a bearing performance degradation assessment method based on HMM and nuisance attribute projection (NAP) is proposed. NAP can filter out the effect of nuisance attributes in the feature space through projection. The new feature space projected by NAP is more sensitive to bearing health changes and barely influenced by other interferences occurring in the operating condition. To verify the effectiveness of the proposed method, two different experimental databases are utilized. The results show that the combination of HMM and NAP can effectively improve the accuracy and robustness of the bearing performance degradation assessment system.
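
The core of nuisance attribute projection is to project feature vectors onto the orthogonal complement of a subspace spanned by nuisance directions. The sketch below is a minimal illustration of that projection step only (not the authors' implementation, and without the HMM stage); the matrix `W` of nuisance directions is a hypothetical input that would in practice be estimated from data.

```python
import numpy as np

def nap_projection(features, nuisance_dirs):
    """Project feature vectors onto the complement of the nuisance subspace.

    features      : (n_samples, d) feature matrix
    nuisance_dirs : (d, k) matrix whose columns span the nuisance subspace
    """
    # Orthonormalize the nuisance directions so W @ W.T is a valid projector.
    W, _ = np.linalg.qr(nuisance_dirs)
    # P projects onto the orthogonal complement of span(W).
    P = np.eye(W.shape[0]) - W @ W.T
    return features @ P.T

# Toy example: treat the first coordinate axis as a "nuisance" direction,
# so the projection zeroes out the first feature while keeping the second.
X = np.array([[1.0, 2.0], [3.0, 4.0]])
W = np.array([[1.0], [0.0]])
X_clean = nap_projection(X, W)  # first column becomes 0
```

After projection, any variation along the nuisance directions is removed, so downstream models (here, the HMM) see only the degradation-relevant part of the feature space.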

This paper discusses the features of an annotation language that we believe to be essential for optimizing user-defined abstractions. These features should capture semantics of function, data, and object-oriented abstractions, express abstraction equivalence (e.g., a class represents an array abstraction), and permit extension of traditional compiler optimizations to user-defined abstractions. Our future work will include developing a comprehensive annotation language for describing the semantics of general object-oriented abstractions, as well as automatically verifying and inferring the annotated semantics.

With the high number of sequences and structures streaming in from genomic projects, there is a need for more powerful and sophisticated annotation tools. The most problematic of the annotation efforts is predicting gene and protein function. Over the past few years there has been considerable progress in automated protein function prediction, using a diverse set of methods. Nevertheless, no single method reports all the information possible, and molecular biologists resort to ‘shopping around’ using different methods: a cumbersome and time-consuming practice. Here we present the Joined Assembly of Function Annotations, or JAFA, server. JAFA queries several function prediction servers with a protein sequence and assembles the returned predictions in a legible, non-redundant format. In this manner, JAFA combines the predictions of several servers to provide a comprehensive view of a protein's predicted functions. JAFA also offers its own output, and the individual programs' predictions, for further processing. JAFA is available for use from . PMID:16845030

For the first time in Austria, fundamental technological and economic studies on recovering secondary raw materials from large landfills have been carried out, based on the 'LAMIS - Landfill Mining Austria' pilot project. A main focus of the research - and the subject of this article - was to develop an assessment or decision-making procedure that allows landfill owners to thoroughly examine the feasibility of a landfill mining project in advance. Currently there are no standard procedures that would sufficiently cover all the multiple-criteria requirements. The basic structure of the multiple attribute decision making process was used to narrow down on selection, conceptual design and assessment of suitable procedures. Along with a breakdown into preliminary and main assessment, the entire foundation required was created, such as definitions of requirements to an assessment method, selection and accurate description of the various assessment criteria and classification of the target system for the present 'landfill mining' vs. 'retaining the landfill in after-care' decision-making problem. Based on these studies, cost-utility analysis and the analytical-hierarchy process were selected from the range of multiple attribute decision-making procedures and examined in detail. Overall, both methods have their pros and cons with regard to their use for assessing landfill mining projects. Merging these methods or connecting them with single-criteria decision-making methods (like the net present value method) may turn out to be reasonable and constitute an appropriate assessment method. PMID:26123349

The paper takes China's authoritative Environmental Impact Statement for the Yangzi (Yangtze) Three Gorges Project (TGP) in 1992 as a benchmark against which to evaluate emerging major environmental outcomes since the initial impoundment of the Three Gorges reservoir in 2003. The paper particularly examines five crucial environmental aspects and associated causal factors. The five domains include human resettlement and the carrying capacity of local environments (especially land), water quality, reservoir sedimentation and downstream riverbed erosion, soil erosion, and seismic activity and geological hazards. Lessons from the environmental impact assessments of the TGP are: (1) hydro project planning needs to take place at a broader scale, and a strategic environmental assessment at a broader scale is necessary in advance of individual environmental impact assessments; (2) national policy and planning adjustments need to react quickly to the impact changes of large projects; (3) long-term environmental monitoring systems and joint operations with other large projects in the upstream areas of a river basin should be established, and the cross-impacts of climate change on projects and possible impacts of projects on regional or local climate considered.

There are four primary accident types at steel building construction (SC) projects: falls (tumbles), object falls, object collapse, and electrocution. Several systematic safety risk assessment approaches, such as fault tree analysis (FTA) and failure mode and effect criticality analysis (FMECA), have been used to evaluate safety risks at SC projects. However, these traditional methods address dependencies among safety factors at various levels ineffectively and fail to provide early warnings to prevent occupational accidents. To overcome the limitations of traditional approaches, this study develops a safety risk-assessment model for SC projects by establishing Bayesian networks (BN) based on fault tree (FT) transformation. The BN-based safety risk-assessment model was validated against the safety inspection records of six SC building projects and nine projects in which site accidents occurred. The ranks of posterior probabilities from the BN model were highly consistent with the accidents that occurred at each project site. The model accurately supports site safety management by calculating the probabilities of safety risks and further analyzing the causes of accidents based on their relationships in BNs. In practice, based on the analysis of accident risks and significant safety factors, proper preventive safety management strategies can be established to reduce the occurrence of accidents on SC sites. PMID:23499984

Combining assessment and research components on a large development and research project is a complex task. There are many descriptions of how either assessment or research should be conducted, but detailed examples illustrating integration of such strategies in complex projects are scarce. This paper provides definitions of assessment,…

Bonneville Power Administration (BPA) proposes to fund that portion of the Washington Wildlife Mitigation Agreement pertaining to the Lower Yakima Valley Wetlands and Riparian Restoration Project (Project) in a cooperative effort with the Yakama Indian Nation and the Bureau of Indian Affairs (BIA). The proposed action would allow the sponsors to secure property and conduct wildlife management activities for the Project within the boundaries of the Yakama Indian Reservation. This Environmental Assessment examines the potential environmental effects of acquiring and managing property for wildlife and wildlife habitat within a large 20,340-hectare (50,308-acre) project area. As individual properties are secured for the Project, three site-specific activities (habitat enhancement, operation and maintenance, and monitoring and evaluation) may be subject to further site-specific environmental review. All required Federal/Tribal coordination, permits and/or approvals would be obtained prior to ground disturbing activities.

The Bayer Corporation undertook a plant-wide energy efficiency assessment of its New Martinsville, West Virginia, plant in 2001. The objective was to identify energy-saving projects in the utilities area. The projects, when complete, will save the company an estimated 236,000 MMBtu ($1.16 million) annually in energy lost from burning and leaking fossil fuels. Certain other projects will save the company 6,300,000 kWh ($219,000) of electrical energy each year. All of the projects could be duplicated in other chemical manufacturing facilities, and most could be duplicated in other industries utilizing steam, pumps, and/or compressed air.

Due to ambiguity, search engines for scientific literature may not return the right search results. One efficient solution to this problem is to automatically annotate the literature and attach semantic information to it. Generally, semantic annotation requires identifying entities before attaching semantic information to them. However, due to abbreviations and other reasons, it is very difficult to identify entities correctly. This paper presents a Semantic Annotation System for Literature (SASL), which utilizes Wikipedia as a knowledge base to annotate literature. SASL mainly attaches semantic information to terminology, academic institutions, conferences, journals, etc. Many of these are usually abbreviations, which induces ambiguity. Here, SASL uses regular expressions to extract the mapping between the full names of entities and their abbreviations. Since the full names of several entities may map to a single abbreviation, SASL introduces a Hidden Markov Model to implement name disambiguation. Finally, the paper presents the experimental results, which confirm that SASL achieves good performance.
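
The regular-expression step can be sketched with the common "Full Name (ABBR)" convention, where an acronym in parentheses follows a run of capitalized words. The pattern and the initials check below are a hypothetical simplification for illustration, not SASL's actual rules:

```python
import re

# Hypothetical pattern: one or more capitalized words, then an
# uppercase acronym of length >= 2 in parentheses.
PATTERN = re.compile(r'((?:[A-Z][a-z]+\s+)+)\(([A-Z]{2,})\)')

def extract_abbreviations(text):
    """Map each abbreviation to its full name when the acronym's
    letters match the initials of the preceding phrase."""
    mapping = {}
    for m in PATTERN.finditer(text):
        full, abbr = m.group(1).strip(), m.group(2)
        initials = ''.join(word[0] for word in full.split())
        if initials.upper() == abbr:  # keep only consistent pairs
            mapping[abbr] = full
    return mapping

text = "We use the Hidden Markov Model (HMM) for disambiguation."
print(extract_abbreviations(text))  # {'HMM': 'Hidden Markov Model'}
```

Because several distinct full names can map to the same acronym across a corpus, a mapping built this way is inherently ambiguous, which is where the HMM-based disambiguation described in the abstract comes in.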

Biological databases have been developed with a special focus on the efficient retrieval of single records or the efficient computation of specialized bioinformatics algorithms over the whole database, such as in sequence alignment. The continuous production of biological knowledge, spread across several biological databases and ontologies such as Gene Ontology, and the availability of efficient techniques to handle such knowledge, such as annotation and semantic similarity measures, enable the development of novel bioinformatics applications that explicitly use and integrate this knowledge. After introducing the annotation process and the main semantic similarity measures, this paper shows how annotations and semantic similarity can be exploited to improve the extraction and analysis of biologically relevant data from protein interaction databases. As case studies, the paper presents two novel software tools, OntoPIN and CytoSeVis, both based on the use of Gene Ontology annotations, for the advanced querying of protein interaction databases and for the enhanced visualization of protein interaction networks.
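
As a minimal illustration of an annotation-based similarity measure, the sketch below scores two proteins by the Jaccard overlap of their GO term sets. This is a deliberately simple stand-in for the information-content measures (e.g., Resnik or Lin) typically used in this setting, and the GO identifiers shown are hypothetical examples:

```python
def jaccard_similarity(annotations_a, annotations_b):
    """Annotation-overlap similarity between two proteins, each given
    as a set of GO term identifiers. Returns a value in [0, 1]."""
    a, b = set(annotations_a), set(annotations_b)
    if not a and not b:
        return 0.0  # no annotations to compare
    return len(a & b) / len(a | b)

# Hypothetical GO annotations for two proteins
p1 = {"GO:0005524", "GO:0004672", "GO:0006468"}
p2 = {"GO:0005524", "GO:0006468", "GO:0005737"}
print(jaccard_similarity(p1, p2))  # 2 shared terms of 4 total -> 0.5
```

A score like this can rank candidate interaction partners or filter spurious edges in a protein interaction network, which is the kind of use the abstract describes.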

This annotated bibliography contains references to general graduate education and to education for the following professional fields: architecture, business, clinical psychology, dentistry, engineering, law, library science, medicine, nursing, social work, teaching, and theology.

Due to the student-centered nature of problem-based learning (PBL) and project-based science (PBS), it is easy for teachers "not" to provide students with adequate feedback or enough support to promote critical thinking. However, research has shown that PBL and PBS are most effective when appropriate learning goals are defined, embedded supports…

Recent reports on mathematics education reform have focused the attention of educational practitioners and policymakers on new goals for mathematics education and new descriptions of mathematical proficiency. QUASAR is a national project (Quantitative Understanding: Amplifying Student Achievement and Reasoning) designed to improve the mathematics…

Biofuels may contribute to both rural economic development and climate change mitigation and adaptation. The Gota Verde Project in Yoro, Honduras, attempts to demonstrate the technical and economic feasibility of small-scale biofuel production for local use by implementing a distinctive approach to feedstock production that encourages small farm…

The paper discusses social impact assessments (SIA) for mining projects in light of the international principles and guidelines for such assessments and the academic literature in the field. The data consist of environmental impact assessment (EIA) programmes and reports for six mining projects that have started up in northern Finland in the 2000s. A first observation is that the role of the SIAs in the EIA programmes and reports studied was quite minor: measured in number of pages, the assessments account for three or four percent of the total. This study analyses the data collection, research methodology and conceptual premises used in the SIAs. It concludes that the assessments do not fully meet the high standards of the international principles and guidelines set out for them: for example, elderly men are over-represented in the data and no efforts were made to identify and bring to the fore vulnerable groups. Moreover, the reliability of the assessments is difficult to gauge, because the qualitative methods are not described and, where quantitative methods were used, details such as non-response rates to questionnaires are not discussed. At the end of the paper, the SIAs are discussed in terms of Jürgen Habermas' theory of knowledge interests, with the conclusion that the assessments continue the empirical analytical tradition of the social sciences and exhibit a technical knowledge interest. -- Highlights: • Paper investigates social impact assessments in Finnish mining projects. • Role of social impact assessment is minor in the whole EIA process. • Mining SIAs give voice to elderly men; vulnerable groups are not identified. • Assessment of SIAs is difficult because of a lack of transparency in reporting. • SIAs belong to the empirical analytical tradition with a technical knowledge interest.

We have developed a portable and easily configurable genome annotation pipeline called MAKER. Its purpose is to allow investigators to independently annotate eukaryotic genomes and create genome databases. MAKER identifies repeats, aligns ESTs and proteins to a genome, produces ab initio gene predictions, and automatically synthesizes these data into gene annotations having evidence-based quality indices. MAKER is also easily trainable: Outputs of preliminary runs are used to automatically retrain its gene-prediction algorithm, producing higher-quality gene-models on subsequent runs. MAKER’s inputs are minimal, and its outputs can be used to create a GMOD database. Its outputs can also be viewed in the Apollo Genome browser; this feature of MAKER provides an easy means to annotate, view, and edit individual contigs and BACs without the overhead of a database. As proof of principle, we have used MAKER to annotate the genome of the planarian Schmidtea mediterranea and to create a new genome database, SmedGD. We have also compared MAKER’s performance to other published annotation pipelines. Our results demonstrate that MAKER provides a simple and effective means to convert a genome sequence into a community-accessible genome database. MAKER should prove especially useful for emerging model organism genome projects for which extensive bioinformatics resources may not be readily available. PMID:18025269

Background Genome annotation can be viewed as an incremental, cooperative, data-driven, knowledge-based process that involves multiple methods to predict gene locations and structures. This process might have to be executed more than once and might be subjected to several revisions as the biological (new data) or methodological (new methods) knowledge evolves. In this context, although many annotation platforms already exist, there is still a strong need for computer systems that handle not only the primary annotation, but also the update and advancement of the associated knowledge. In this paper, we propose to adopt a blackboard architecture for designing such a system. Results We have implemented a blackboard framework (called Genepi) for developing automatic annotation systems. The system is not bound to any specific annotation strategy. Instead, the user specifies a blackboard structure in a configuration file, and the system instantiates and runs this particular annotation strategy. The characteristics of this framework are presented and discussed. Specific adaptations to the classical blackboard architecture were required, such as describing the activation patterns of the knowledge sources using an extended set of Allen's temporal relations. Although the system is robust enough to be used on real-size applications, it is of primary use to bioinformatics researchers who want to experiment with blackboard architectures. Conclusion In the context of genome annotation, blackboards have several interesting features related to the way methodological and biological knowledge can be updated. They can readily handle the cooperative (several methods are involved) and opportunistic (the flow of execution depends on the state of our knowledge) aspects of the annotation process. PMID:17038181

Identifying concepts and relationships in biomedical text enables knowledge to be applied in computational analyses. Many biological natural language processing (BioNLP) projects attempt to address this challenge, but the state of the art still leaves much room for improvement. Progress in BioNLP research depends on large, annotated corpora for evaluating information extraction systems and training machine learning models. Traditionally, such corpora are created by small numbers of expert annotators, often working over extended periods of time. Recent studies have shown that workers on microtask crowdsourcing platforms such as Amazon's Mechanical Turk (AMT) can, in aggregate, generate high-quality annotations of biomedical text. Here, we investigated the use of the AMT in capturing disease mentions in PubMed abstracts. We used the NCBI Disease corpus as a gold standard for refining and benchmarking our crowdsourcing protocol. After several iterations, we arrived at a protocol that reproduced the annotations of the 593 documents in the 'training set' of this gold standard with an overall F measure of 0.872 (precision 0.862, recall 0.883). The output can also be tuned to optimize for precision (max = 0.984 when recall = 0.269) or recall (max = 0.980 when precision = 0.436). Each document was completed by 15 workers, and their annotations were merged based on a simple voting method. In total, 145 workers combined to complete all 593 documents in the span of 9 days at a cost of $0.066 per abstract per worker. The quality of the annotations, as judged by the F measure, increases with the number of workers assigned to each task; however, minimal performance gains were observed beyond 8 workers per task. These results add further evidence that microtask crowdsourcing can be a valuable tool for generating well-annotated corpora in BioNLP. Data produced for this analysis are available at http://figshare.com/articles/Disease_Mention_Annotation_with_Mechanical_Turk/1126402
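
A "simple voting method" for merging per-worker annotations can be sketched as follows: count how many workers marked each text span and keep spans that reach a vote threshold. This is a hedged illustration of the general technique, not the study's exact protocol; the spans and threshold below are hypothetical.

```python
from collections import Counter

def merge_by_vote(worker_annotations, min_votes):
    """Merge per-worker annotation sets by majority vote.

    worker_annotations : iterable of sets of (start, end) character spans,
                         one set per worker
    min_votes          : keep a span only if at least this many workers
                         marked it
    """
    counts = Counter(
        span for worker in worker_annotations for span in set(worker)
    )
    return {span for span, n in counts.items() if n >= min_votes}

# Hypothetical disease-mention spans from three workers on one abstract
workers = [
    {(10, 18), (40, 52)},
    {(10, 18)},
    {(10, 18), (40, 52), (60, 66)},
]
print(merge_by_vote(workers, min_votes=2))  # {(10, 18), (40, 52)}
```

Raising `min_votes` trades recall for precision, which mirrors the tunable precision/recall trade-off reported in the abstract.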