Sample records for information retrieval workshop

Discussion of information retrieval focuses on an Interaction Information Retrieval model in which documents are interconnected; queries and documents are treated in the same way; and retrieval is the result of the interconnection between query and documents. A theoretical mathematical formulation of this type of retrieval is given. (Contains 31…

Discussion of connectionist views for adaptive clustering in information retrieval focuses on a connectionist clustering technique and an activation-spreading-based information retrieval model using the interaction information retrieval method. Presents theoretical as well as simulation results regarding computational complexity and includes…

This paper was prepared to provide a frame of reference for the role of microfilm within the information retrieval world and to provide an opportunity for evaluation of the use of microforms for active retrieval applications. The paper discusses the principles of information retrieval, considers subject and classification indexing, and describes…

The Fort Detrick Information Retrieval System is a system of computer programs written in COBOL for a CDC 3150 to store and retrieve information about the scientific and technical reports and documents of the Fort Detrick Technical Library. The documents and reports have been abstracted and indexed. This abstract, the subject matter descriptors,…

Describes an information retrieval system in which advanced natural language processing is used to enhance the effectiveness of term-based document retrieval by preprocessing the documents; discovering interterm dependencies and building a conceptual hierarchy specific to the database domain; and processing the user's natural language requests into…

Generalized information storage and retrieval system capable of generating and maintaining a file, gathering statistics, sorting output, and generating final reports for output is reviewed. File generation and file maintenance programs written for the system are general purpose routines.

The materials in this collection were used at workshops designed to assist school library media specialists and learning resources center professionals in making effective use of "Information Power," a recent joint publication of the Association for Educational Communications and Technology (AECT) and the American Association of School Librarians…

Discussed in this paper are the information problems in physics and the current program of the American Institute of Physics (AIP) being conducted in an attempt to develop an information retrieval system. The seriousness of the need is described by means of graphs indicating the exponential rise in the number of physics publications in the last…

Researchers from the University of Washington, Microsoft Research, Boeing, and Risoe National Laboratory in Denmark have embarked on a project to explore the manifestations of Collaborative Information Retrieval (CIR) in work settings and to propose technological innovations and organizational changes that can support, facilitate, and improve CIR.…

As Information Retrieval (IR) has evolved, it has become a highly interactive process, rooted in cognitive and situational contexts. Consequently, the traditional cybernetic-based IR model does not suffice for interactive IR or the human approach to IR. Reviews different views of feedback in IR and their relationship to cybernetic and social…

Examines role and contributions of natural-language processing in information retrieval and artificial intelligence research in the context of large operational information retrieval systems and services. State-of-the-art information retrieval systems combining the functional capabilities of conventional inverted file term adjacency approach with…

Intuition suggests that one way to enhance the information retrieval process would be the use of phrases to characterize the contents of text. A number of researchers, however, have noted that phrases alone do not improve retrieval effectiveness. In this paper we briefly review the use of phrases in information retrieval and then suggest extensions to this paradigm using semantic information. We claim that semantic processing, which can be viewed as expressing relations between the concepts represented by phrases, will in fact enhance retrieval effectiveness. The availability of the UMLS domain model, which we exploit extensively, significantly contributes to the feasibility of this processing. PMID:8130547
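
The idea of scoring on relations between phrase-level concepts, not just phrase overlap, can be sketched as follows. This is a minimal illustration, not the paper's method: the phrase-to-concept map and the relation set are invented stand-ins for the UMLS domain model.

```python
# Sketch of phrase-plus-semantics matching (hypothetical data standing in
# for the UMLS domain model described in the record above).

# Map surface phrases to concept identifiers.
PHRASE_TO_CONCEPT = {
    "myocardial infarction": "C_MI",
    "heart attack": "C_MI",
    "aspirin": "C_ASPIRIN",
}

# Semantic relations between concepts (the extension the paper proposes).
RELATIONS = {("C_ASPIRIN", "treats", "C_MI")}

def concepts_in(text):
    """Return the set of concepts whose phrases occur in the text."""
    t = text.lower()
    return {c for p, c in PHRASE_TO_CONCEPT.items() if p in t}

def score(query, doc):
    """Concept overlap plus a bonus for each relation linking doc and query concepts."""
    qc, dc = concepts_in(query), concepts_in(doc)
    overlap = len(qc & dc)
    relation_bonus = sum(1 for s, _, o in RELATIONS if s in dc and o in qc)
    return overlap + relation_bonus

print(score("heart attack", "Aspirin therapy after myocardial infarction"))
```

Note how "heart attack" and "myocardial infarction" match through a shared concept, and the treats-relation adds further evidence that plain phrase matching would miss.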

This is an individually administered rating scale designed to evaluate teacher trainee attitudes toward an information retrieval system. A major goal of the scale is to seek responses that measure students' reactions to the cognitive interest and motivational nature of the information retrieval system through the use of Likert-type items. The…

Bell Canada, the Public School and Collegiate Institute Boards of Ottawa, and the Ontario Institute for Studies in Education are collaborating on an educational television project which will provide a retrieval system that can supply any given program at any time under the control of the classroom teacher. Four schools in Ottawa will participate…

In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query, via web services, a medical information retrieval engine that optimizes the amount of data to be transferred over wireless connections. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show good usability of the software. Future use in clinical environments has the potential of increasing the quality of patient care through bedside access to the medical literature in context.

This report is a brief review of results of an experiment to determine the information retrieval efficiency of a manual specialized information system based on 700,000 documents in the fields of endocrinology, stress, mast cells, and anaphylactoid reactions. The system receives 30,000 publications annually. Detailed information is represented by…

Explains the development of an electronic information network in Maryland called Seymour that offers bibliographic records; full-text databases; community information databases; the ability to request information and materials; local, state, and federal information; and access to the Internet. Policy issues are addressed, including user fees and…

This survey paper highlights some of the recent, influential work in multimedia information retrieval (MIR). MIR is a branch of multimedia (MM) research. This young and fast-growing area has received strong industrial and academic support in the United States and around the world (see Section 7 for a list of major conferences and journals of the community). The term "information retrieval" may be misleading to those with different computer science or information technology backgrounds. As shown in our discussion later, it indeed includes topics from user interaction, data analytics, machine learning, feature extraction, information visualization, and more.

An approach for the retrieval of price information from Internet sites is applied to real-world application problems in this paper. The Web Information Retrieval System (WIRS) utilizes a Hidden Markov Model (HMM) for its powerful capability to process temporal information. The HMM is an extremely flexible tool and has been successfully applied to a wide variety of stochastic modeling tasks. In order to compare the prices and features of products from various web sites, WIRS extracts prices and descriptions of various products within web pages. WIRS is evaluated on real-world problems and compared with a conventional method, and the results are reported in this paper.
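
HMM-based extraction of this kind can be sketched with a tiny Viterbi decoder that labels page tokens as price or description. The states, probabilities, and tokens below are invented for illustration; they are not WIRS's actual model.

```python
import re

# Toy HMM tagger in the spirit of WIRS: label page tokens as DESC or PRICE.
STATES = ["DESC", "PRICE"]
START = {"DESC": 0.8, "PRICE": 0.2}
TRANS = {"DESC": {"DESC": 0.7, "PRICE": 0.3},
         "PRICE": {"DESC": 0.9, "PRICE": 0.1}}

def emit(state, token):
    """Emission probability: price-like tokens favor the PRICE state."""
    looks_price = bool(re.fullmatch(r"\$?\d+(\.\d{2})?", token))
    if state == "PRICE":
        return 0.9 if looks_price else 0.1
    return 0.2 if looks_price else 0.8

def viterbi(tokens):
    """Most likely state sequence for the token list."""
    v = [{s: (START[s] * emit(s, tokens[0]), [s]) for s in STATES}]
    for tok in tokens[1:]:
        layer = {}
        for s in STATES:
            prob, path = max(
                (v[-1][p][0] * TRANS[p][s] * emit(s, tok), v[-1][p][1])
                for p in STATES)
            layer[s] = (prob, path + [s])
        v.append(layer)
    return max(v[-1].values())[1]

print(viterbi(["USB", "cable", "$9.99"]))
```

The transition probabilities encode the temporal structure the record mentions: a price token usually follows a run of description tokens, which is exactly what an HMM captures and a per-token regex alone cannot.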

Discusses ways in which online searching promotes the development of research, computer, and critical thinking skills, and reviews two databases that offer access to information on elementary school science. Lesson activities for grades four through six are suggested, and information on the vendor, system requirements, and cost of each database is…

Research syntheses are increasingly being conducted within the fields of ecology and environmental management. Information retrieval is crucial in any synthesis in identifying data for inclusion whilst potentially reducing biases in the dataset gathered, yet the nature of ecological information presents several challenges when compared with…

Introduction: This paper presents an initial proposal for a formal framework that, by studying the metric variables involved in information retrieval, can establish the sequence of events involved and how to perform it. Method: A systematic approach from the equations of Shannon and Weaver to establish the decidability of information retrieval…

Information retrieval in the context of virtual universities deals with the representation, organization, and access to learning objects. The representation and organization of learning objects should provide the learner with easy access to the learning objects. In this article, we give an overview of the ONES system, and analyze the relevance…

This report discusses the problem of the meanings of words used in information retrieval systems, and shows how semantic tools can aid in the communication that takes place between indexers and searchers via index terms. After treating the differing use of semantic tools in different types of systems, two tools (classification tables and…

Reviews research and practice in cross-language information retrieval (CLIR) that seeks to support the process of finding documents written in one natural language with automated systems that can accept queries expressed in other languages. Addresses user needs, document preprocessing, query formulation, matching strategies, sources of translation…

Expert systems have considerable potential to assist computer users in managing the large volume of information available to them. One possible use of an expert system is to model the information retrieval interests of a human user and then make recommendations to the user as to articles of interest. At Cal Poly, a prototype expert system written in the C Language Integrated Production System (CLIPS) serves as an Automated Information Retrieval System (AIRS). AIRS monitors a user's reading preferences, develops a profile of the user, and then evaluates items returned from the information base. When prompted by the user, AIRS returns a list of items of interest to the user. In order to minimize the impact on system resources, AIRS is designed to run in the background during periods of light system use.

Accurate measurements of global distributions of cloud parameters and their diurnal, seasonal, and interannual variations are needed to improve understanding of the role of clouds in the weather and climate system, and to monitor their time-space variations. Cloud properties retrieved from satellite observations, such as cloud vertical placement, cloud water path, and cloud particle size, play an important role in such studies. In order to give climate and weather researchers more confidence in the quality of these retrievals, their validity needs to be determined and their error characteristics must be quantified. The purpose of the Cloud Retrieval Evaluation Workshop (CREW), held 15-18 Nov. 2011 in Madison, Wisconsin, USA, is to enhance knowledge of state-of-the-art cloud property retrievals from passive imaging satellites, and to pave the way towards optimizing these retrievals for climate monitoring as well as for the analysis of cloud parameterizations in climate and weather models. CREW also seeks to observe and understand methods used to prepare daily and monthly cloud parameter climatologies. An important workshop component is discussion of the results of the algorithm and sensor comparisons and validation studies. To this end, a common database with about 12 different cloud property retrievals from passive imagers (MSG, MODIS, AVHRR, POLDER and/or AIRS), complemented with cloud measurements that serve as a reference (CLOUDSAT, CALIPSO, AMSU, MISR), was prepared for a number of "golden days". The passive imager cloud property retrievals were inter-compared and validated against Cloudsat, Calipso, and AMSU observations. In our presentation we summarize the outcome of the inter-comparison and validation work done in the framework of CREW, and elaborate on reasons for observed differences. More in-depth discussions were held on retrieval principles and validation, and on the utilization of cloud parameters for climate research. This was done in parallel breakout sessions on

This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on October 26-27, 1995, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival, and retrieval of large quantities of data in future Earth and space science missions. It consisted of fourteen presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The Workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

This document is the proceedings from the 'Science Information Management and Data Compression Workshop,' which was held on September 26-27, 1994, at the NASA Goddard Space Flight Center, Greenbelt, Maryland. The Workshop explored promising computational approaches for handling the collection, ingestion, archival and retrieval of large quantities of data in future Earth and space science missions. It consisted of eleven presentations covering a range of information management and data compression approaches that are being or have been integrated into actual or prototypical Earth or space science data information systems, or that hold promise for such an application. The workshop was organized by James C. Tilton and Robert F. Cromp of the NASA Goddard Space Flight Center.

Describes rough sets theory and discusses the advantages it offers for information retrieval, including the implicit inclusion of Boolean logic, term weighting, ranked retrieval output, and relevance feedback. Rough set formalism is compared to Boolean, vector, and fuzzy models of information retrieval and a small scale evaluation of rough sets is…

If scientists and researchers working to solve the tank waste challenges, technical program office managers at the tank sites, and others understand the connection between retrieval and pretreatment activities, more efficient processes and reduced costs can be achieved. To make this possible, researchers involved in retrieval and pretreatment activities met at the Conference Center in Richland, Washington, on July 16 and 17, 1997, to discuss the connections between these activities. The purpose of the workshop was to help participants (1) gain a better understanding of retrieval and pretreatment process needs and experiences; (2) gain practical knowledge of the applications, capabilities, and requirements of retrieval and pretreatment technologies being developed and deployed; and (3) focus on identifying and troubleshooting interface issues and problems. The end product of this meeting was a checklist of retrieval and pretreatment parameters to consider when developing new technologies or managing work at the sites in these areas. For convenience, the information is also organized by pretreatment parameter and retrieval-pretreatment parameter in Section 5.0.

This work presents a new dictionary-based approach to biomedical cross-language information retrieval (CLIR) that addresses many of the general and domain-specific challenges in current CLIR research. Our method is based on a multilingual lexicon that was generated partly manually and partly automatically, and currently covers six European languages. It contains morphologically meaningful word fragments, termed subwords. Using subwords instead of entire words significantly reduces the number of lexical entries necessary to sufficiently cover a specific language and domain. Mediation between queries and documents is based on these subwords as well as on lists of word-n-grams that are generated from large monolingual corpora and constitute possible translation units. The translations are then sent to a standard Internet search engine. This process makes our approach an effective tool for searching the biomedical content of the World Wide Web in different languages. We evaluate this approach using the OHSUMED corpus, a large medical document collection, within a cross-language retrieval setting.
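
The subword idea can be illustrated with greedy longest-match segmentation against a small lexicon. The tiny lexicon below is invented for illustration and is far smaller than the multilingual lexicon the record describes.

```python
# Greedy longest-match segmentation into subwords, in the spirit of the
# subword lexicon described above. The lexicon here is a toy example.
SUBWORDS = {
    "gastr": "stomach", "enter": "intestine", "itis": "inflammation",
    "o": "",  # connecting vowel carries no meaning
}

def segment(word):
    """Split a word into known subwords, longest match first."""
    parts, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in SUBWORDS:
                parts.append(word[i:j])
                i = j
                break
        else:
            parts.append(word[i])  # unknown character kept as-is
            i += 1
    return parts

def translate(word):
    """Map each meaningful subword to its English gloss."""
    return [SUBWORDS[p] for p in segment(word) if SUBWORDS.get(p)]

print(segment("gastroenteritis"))    # ['gastr', 'o', 'enter', 'itis']
print(translate("gastroenteritis"))  # ['stomach', 'intestine', 'inflammation']
```

Four lexicon entries cover a whole family of compounds (gastritis, enteritis, gastroenteritis, …), which is the coverage argument for subwords over full-word dictionaries.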

Results of fifty different retrieval methods as applied in three experimental retrieval systems were subjected to an analysis suggested by statistical decision theory. The analysis uses a previously proposed measure of effectiveness and demonstrates several of its properties. Some of these properties are: (1) it enables the retrieval system to…

Discusses the potential of recent work in artificial intelligence, especially expert systems, for the development of more effective information retrieval systems. Highlights include the role of an expert bibliographic retrieval system and a prototype expert retrieval system, PROBIB-2, that uses MicroProlog to provide deductive reasoning…

Information on the 48 courses, workshops, seminars, and other educational opportunities listed in this guide was gathered by questionnaires sent to schools of library and/or information science and organizations sponsoring training opportunities in indexing in the United States and Canada. The definition of indexing used is broad: it includes…

Conceptual information retrieval systems use structured document indices, domain knowledge and a set of heuristic retrieval strategies to match user queries with a set of indices describing the document's content. Such retrieval strategies increase the set of relevant documents retrieved (increase recall), but at the expense of returning additional irrelevant documents (decrease precision). Usually in conceptual information retrieval systems this tradeoff is managed by hand and with difficulty. This paper discusses ways of managing this tradeoff by the application of standard induction algorithms to refine the retrieval strategies in an engineering design domain. We gathered examples of query/retrieval pairs during the system's operation using feedback from a user on the retrieved information. We then fed these examples to the induction algorithm and generated decision trees that refine the existing set of retrieval strategies. We found that (1) induction improved the precision on a set of queries generated by another user, without a significant loss in recall, and (2) in an interactive mode, the decision trees pointed out flaws in the retrieval and indexing knowledge and suggested ways to refine the retrieval strategies.
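
The induction step over query/retrieval feedback pairs can be sketched with an information-gain calculation picking the feature that best predicts relevance. The features and examples below are invented; the record's system uses full decision trees rather than this single-split sketch.

```python
import math

# Minimal decision-stump induction over query/retrieval feedback examples,
# standing in for the induction algorithm the record describes.
# Each example: (features of the retrieval, user judged the result relevant?)
EXAMPLES = [
    ({"strategy": "synonym", "index_depth": "deep"}, True),
    ({"strategy": "synonym", "index_depth": "shallow"}, True),
    ({"strategy": "generalize", "index_depth": "deep"}, False),
    ({"strategy": "generalize", "index_depth": "shallow"}, False),
    ({"strategy": "synonym", "index_depth": "deep"}, True),
    ({"strategy": "generalize", "index_depth": "deep"}, True),
]

def entropy(labels):
    n = len(labels)
    ps = [labels.count(v) / n for v in set(labels)]
    return -sum(p * math.log2(p) for p in ps)

def information_gain(feature):
    """How much knowing this feature reduces uncertainty about relevance."""
    base = entropy([y for _, y in EXAMPLES])
    rem = 0.0
    for v in {f[feature] for f, _ in EXAMPLES}:
        subset = [y for f, y in EXAMPLES if f[feature] == v]
        rem += len(subset) / len(EXAMPLES) * entropy(subset)
    return base - rem

best = max(["strategy", "index_depth"], key=information_gain)
print(best)
```

Here the split exposes that the "generalize" strategy is the main source of irrelevant retrievals, which is the kind of flaw in the retrieval knowledge the authors report the trees pointing out.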

Reports some of the progress made over the years toward exploring information beyond the text domain. Describes the Multimedia Analysis and Retrieval Systems (MARS), developed to increase access to non-textual information. Addresses the following aspects of MARS: (1) visual feature extraction; (2) retrieval models; (3) query reformulation…

In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…
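
One simple form of such feedback is widening a Boolean query with terms frequent in documents the user judged relevant. This is only a sketch of the conventional-logic case, with invented data; the study's extended-logic operations are not modeled here.

```python
from collections import Counter

# Sketch of relevance feedback for Boolean queries: OR in terms that are
# frequent in documents the user judged relevant. Data is invented.
relevant = ["solar energy storage battery",
            "battery storage for solar power"]

def expand(query_terms, relevant_docs, top_n=2):
    """Return a Boolean query string widened with top feedback terms."""
    counts = Counter(w for d in relevant_docs for w in d.split()
                     if w not in query_terms and len(w) > 3)
    extra = [t for t, _ in counts.most_common(top_n)]
    base = " AND ".join(query_terms)
    return f"({base}) OR ({' OR '.join(extra)})" if extra else base

print(expand(["solar", "energy"], relevant))
```

The original AND query stays intact, so precision on the initial intent is preserved while the OR clause recovers relevant documents phrased differently.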

Addresses the application of automatic classification methods to the problems associated with computerized document retrieval. Different kinds of classifications are described, and both document and term clustering methods are discussed. References and notes are provided. (Author/JD)

Provides an overview of some of the main ideas in the philosophy of language that have relevance to the issues of information retrieval, focusing on the description of the intellectual content. Highlights include retrieval problems; recall and precision; words and meanings; context; externalism and the philosophy of language; and scaffolding and…

Because information handling applies to all forms of learning, one Microelectronics Education Programme (MEP) INSET strategy is devoted to a discussion of the generation, storage, retrieval, communication, and use of information in all subject areas. Discusses the nature of information; MEP and the information domain; information gathering, use,…

The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the users, so that much irrelevant information is returned, burdening users with picking useful answers out of these irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
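
The contrast with keyword-only matching can be sketched by expanding query terms through a small ontology before ranking. The ontology and documents below are invented stand-ins, not the paper's actual resources.

```python
# Minimal sketch of ontology-based query expansion for ranking.
ONTOLOGY = {
    "car": {"synonyms": ["automobile"], "subclasses": ["sedan", "suv"]},
}

def expand(term):
    """A term plus its ontology synonyms and subclasses."""
    node = ONTOLOGY.get(term, {})
    return {term, *node.get("synonyms", []), *node.get("subclasses", [])}

def search(query, docs):
    """Rank documents by how many expanded query terms they mention."""
    terms = set().union(*(expand(t) for t in query.split()))
    return sorted(docs, key=lambda d: -len(terms & set(d.lower().split())))

docs = ["used sedan listings", "fruit market prices", "automobile insurance"]
print(search("car", docs))
```

A keyword-only engine would return nothing for "car" here; the expansion surfaces both the synonym match ("automobile") and the subclass match ("sedan") while demoting the off-domain document.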

In order to stabilize and improve the quality of its information retrieval service, the information retrieval team of Daicel Corporation has devoted effort to standard operating procedures, an interview sheet for information retrieval, a structured format for search reports, and search expressions for some of Daicel's technological fields. These activities and efforts will also lead to skill sharing and skill transfer between searchers. In addition, skill improvements are needed not only for each searcher individually, but also for the information retrieval team as a whole as searchers take on new roles.

WALT (Washington University's Approach to Lots of Text) is a prototype interface designed to support information retrieval research. The WALT interface serves as a "front end" to a wide array of retrieval engines, including those based on Boolean retrieval, latent semantic indexing, term frequency-inverse document frequency, and Bayesian inference techniques. The WALT interface is composed of seven distinct components: a document examination component known as the Document Browsing Area; four navigation components called the Book Shelf, the Book Spine, the Table of Contents, and the Path Clipboard; a term-based information retrieval component called the Control Panel; and a relevance feedback component known as the Reader Feedback Panel. WALT's most unique feature may be its use of "book shelf" and "book spine" metaphors both to facilitate navigation and to provide a histogram-based display showing documents deemed appropriate for answering user queries. PMID:1807717

The proceedings of a workshop on the study of information, computation, and cognition, a field of interdisciplinary research that includes communication research in artificial intelligence, computer science, linguistics, logic, philosophy, and psychology, give an overview of the status of funding support for the field and the concerns of…

Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification, and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to efficiently use SSA to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations can be used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard Text REtrieval Conference (TREC) collections; experimental results show the proposed models consistently outperform existing Wikipedia-based retrieval methods.
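
Combining a conceptual representation with a bag-of-words one inside the language model framework is commonly done by linear interpolation of the two word probabilities. The sketch below assumes a toy word-to-concept map in place of SSA's Wikipedia-derived concepts, and a simple probability floor in place of proper collection smoothing.

```python
import math

# Sketch: interpolate bag-of-words and conceptual (SSA-style) language
# models. The concept map is an invented stand-in for SSA output.
CONCEPTS = {"car": "vehicle", "truck": "vehicle", "apple": "fruit"}

def p_bow(word, doc):
    toks = doc.split()
    return toks.count(word) / len(toks)

def p_concept(word, doc):
    c = CONCEPTS.get(word)
    doc_concepts = [CONCEPTS.get(w) for w in doc.split()]
    return doc_concepts.count(c) / len(doc_concepts) if c else 0.0

def score(query, doc, lam=0.6):
    """Log query likelihood under the interpolated model."""
    s = 0.0
    for w in query.split():
        p = lam * p_bow(w, doc) + (1 - lam) * p_concept(w, doc)
        s += math.log(p) if p > 0 else math.log(1e-9)  # floor for unseen terms
    return s

doc = "truck repair shop"
print(score("car", doc) > score("apple", doc))  # concept match beats none
```

"car" never occurs in the document, yet it scores higher than "apple" because it shares the vehicle concept with "truck"; that is the vocabulary-mismatch gap the conceptual component is meant to close.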

The second Greenhouse Gas Information System (GHGIS) workshop was held May 20-22, 2009 at the Sandia National Laboratories in Albuquerque, New Mexico. The workshop brought together 74 representatives from 28 organizations including U.S. government agencies, national laboratories, and members of the academic community to address issues related to the understanding, operational monitoring, and tracking of greenhouse gas emissions and carbon offsets. The workshop was organized by an interagency collaboration between NASA centers, DOE laboratories, and NOAA. It was motivated by the perceived need for an integrated interagency, community-wide initiative to provide information about greenhouse gas sources and sinks at policy-relevant temporal and spatial scales in order to significantly enhance the ability of national and regional governments, industry, and private citizens to implement and evaluate effective climate change mitigation policies. This talk provides an overview of the second Greenhouse Gas Information System workshop, presents its key findings, and discusses current status and next steps in this interagency collaborative effort.

Discusses lifelong learning and the need for information retrieval skills, and describes how Northwest Missouri State University incorporates a heuristic model of library instruction in which students continually evaluate and refine information-seeking practices while progressing through all levels of courses in diverse disciplines. (Author/LRW)

Aerometric Information Retrieval System (AIRS) is a computer-based repository of information about airborne pollution in the United States and various World Health Organization (WHO) member countries. AIRS is administered by the U.S. Environmental Protection Agency, and runs on t...

Describes the function of the Open Systems Interconnection (OSI) Z39.50 protocol, which allows for construction of information "servers"--i.e., resources attached to a computer communications network that can be accessed by client machines to retrieve information. The relationship of Z39.50 to other OSI protocols is explained. (23…

These guidelines are intended to provide realistic and practical guidance about the options available to teachers and planners of education and training programs wholly or partly concerned with online information retrieval, particularly those in academic departments of library and information studies. Seven sections address: (1) the aims and scope…

pertinent information. This session will focus on the various Internet search engines, directories, and how to improve the user experience through the use of such techniques as metadata, meta-search engines, subject-specific search tools, and other developing technologies.

Audio is an information-rich component of multimedia. Information can be extracted from audio in a number of different ways, and thus there are several established audio signal analysis research fields. These fields include speech recognition, speaker recognition, audio segmentation and classification, and audio fingerprinting. The information that can be extracted from tools and methods developed in these fields can greatly enhance multimedia systems. In this paper, we present the current state of research in each of the major audio analysis fields. The goal is to introduce enough background for someone new in the field to quickly gain high-level understanding and to provide direction for further study.

The first Applied Information Systems Research Program (AISRP) Workshop provided the impetus for several groups involved in information systems to review current activities. The objectives of the workshop included: (1) to provide an open forum for interaction and discussion of information systems; (2) to promote understanding by initiating a dialogue with the intended benefactors of the program, the scientific user community, and to discuss options for improving their support; (3) to create advocacy by having science users and investigators of the program meet together and establish the basis for direction and growth; and (4) to support the future of the program by building collaborations and interaction to encourage an investigator working group approach to conducting the program.

Document image understanding techniques have been widely used in many application domains. Various kinds of documents have been researched and different methods developed for information retrieval purposes. In this paper we present a practical method to extract information items from Chinese business cards. Before retrieving information from a business card, the card image is segmented into small text regions and each text region is recognized. Because the typesetting of business cards is variable, and both English and Chinese characters are used, there are errors in the segmentation and recognition results. We focus on building a robust model that can tolerate errors and extract the syntax pattern of each text line in the business card, using both layout information and logical information. With this model, many errors can be identified and adjusted. Finally, the correct property is assigned to each text region in the business card, and recognition errors are corrected.

Presents an indexing and information retrieval method that, based on the vector space model, incorporates term dependencies and thus obtains semantically richer representations of documents. Highlights include term context vectors; techniques for estimating the dependencies among terms; term weights; experimental results on four text collections;…
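
A term context vector of the kind this record mentions can be sketched as a term's co-occurrence counts over a corpus: terms that never co-occur can still be similar if they appear in similar contexts. The three-document corpus below is invented for illustration.

```python
import math
from collections import Counter

# Sketch of term context vectors: each term is represented by its
# co-occurrence counts with other terms across the corpus.
CORPUS = ["doctor treats patient", "nurse treats patient", "pilot flies plane"]

def context_vector(term):
    """Co-occurrence counts of `term` with other terms in the corpus."""
    v = Counter()
    for doc in CORPUS:
        words = doc.split()
        if term in words:
            v.update(w for w in words if w != term)
    return v

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(x * x for x in a.values()))
    nb = math.sqrt(sum(x * x for x in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "doctor" and "nurse" never co-occur, yet share contexts (treats, patient).
print(cosine(context_vector("doctor"), context_vector("nurse")) >
      cosine(context_vector("doctor"), context_vector("pilot")))
```

This captured dependency between terms is what lets the augmented document representations match semantically related vocabulary that plain term vectors would treat as orthogonal.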

Fifth graders were taught to use an electronic card catalog to retrieve information and materials for class assignments and leisure reading materials. Groups of 10 or 12 students were seen twice a week for periods lasting up to 30 minutes. At these sessions they were introduced to computer components, proper handling, how to log into a network…

An introduction is presented to data transmission technology and networks for informationretrieval purposes. Data signals are analyzed, modulation techniques are discussed, communication procedures between terminals and the central processing unit are surveyed, and possible network configurations are considered. (Author/PF)

Discussion of the need for distributed information retrieval systems focuses on a model system, Fulcrum FUL/Text. Differences from distributed database management systems are described; system design is discussed; implementation requirements are explained including remote operation calls (ROCs); and a prototype simulation model based on FUL/Text…

This paper concerns an experiment in teaching a graduate seminar on Information Retrieval using telediscussion techniques. Outstanding persons from Project INTREX, MEDLARS, Chemical Abstracts, the University of Georgia, the SUNY biomedical Network, AEC, NASA, and DDC gave hour-long telelectures. A Conference Telephone Set was used with success.…

The present invention relates to resistive random-access memory (RRAM or ReRAM) systems, as well as methods of employing multiple state variables to form degenerate states in such memory systems. The methods herein allow for precise write and read steps to form multiple state variables, and these steps can be performed electrically. Such an approach allows for multilevel, high density memory systems with enhanced information storage capacity and simplified information retrieval.

Europeans are now taking steps to homogenize policies and standardize procedures in electronic publishing (EP) in astronomy and space sciences. This arose from an open meeting organized in Oct. 1991 at Strasbourg Observatory (France) and another business meeting held late Mar. 1992 with the major publishers and journal editors in astronomy and space sciences. The ultimate aim of EP might be considered as the so-called 'intelligent information retrieval' (IIR) or, better named, 'advanced information retrieval' (AIR), taking advantage of the fact that the material to be published appears at some stage in a machine-readable form. It is obvious that the combination of desktop and electronic publishing with networking and new structuring of knowledge bases will profoundly reshape not only our ways of publishing, but also our procedures of communicating and retrieving information. It should be noted that a world-wide survey among astronomers and space scientists, carried out before the October 1991 colloquium on the various packages and machines used, indicated that TEX-related packages were already used by the majority of our community. It has also been stressed at each meeting that the European developments should be carried out in collaboration with what is done in the US (the STELLAR project, for instance). American scientists and journal editors actually attended both meetings mentioned above. The paper will offer a review of the status of electronic publishing in astronomy and its possible contribution to advanced information retrieval in this field. It will also report on recent meetings such as the 'Astronomy from Large Databases-2 (ALD-2)' conference dealing with the latest developments in networking, in data, information, and knowledge bases, as well as in the related methodologies.

The Oklahoma Geographic Information Retrieval System (OGIRS) is a highly interactive data entry, storage, manipulation, and display software system for use with geographically referenced data. Although originally developed for a project concerned with coal strip mine reclamation, OGIRS is capable of handling any geographically referenced data for a variety of natural resource management applications. A special effort has been made to integrate remotely sensed data into the information system. The timeliness and synoptic coverage of satellite data are particularly useful attributes for inclusion into the geographic information system.

A Programming Language (APL) is a precise, concise, and powerful computer programming language. Several features make APL useful to managers and other potential computer users. APL is interactive; therefore, the user can communicate with his program or data base in near real-time. This, coupled with the fact that APL has excellent debugging features, reduces program checkout time to minutes or hours rather than days or months. Of particular importance is the fact that APL can be utilized as a management science tool using such techniques as operations research, statistical analysis, and forecasting. The gap between the scientist and the manager could be narrowed by showing how APL can be used to do what the scientist and the manager each need to do: retrieve information. Sometimes the information needs to be retrieved rapidly, and in that case APL is ideally suited to the challenge.

Focuses on natural language processing (NLP) in information retrieval. Defines the seven levels at which people extract meaning from text/spoken language. Discusses the stages of information processing; how an information retrieval system works; advantages to adding full NLP to information retrieval systems; and common problems with information…

Plans give structure to behavior by specifying whether and when different tasks must be performed. However, the structure of behavior need not mirror the structure of the plan. To investigate this idea, the authors studied how plan information is retrieved in the context of a novel sequence-position cuing procedure, wherein subjects memorize two task sequences, then perform trials on which they are randomly cued to perform a task at one of the serial positions in a sequence. Several empirical effects were consistent with retrieval from a hierarchically structured representation (but not a non-hierarchical representation), including large sequence-repetition benefits, position-repetition benefits only for sequence repetitions, and a lack of robust task-repetition benefits. The data were successfully modeled by assuming that retrieval was time-consuming, susceptible to priming, cue-dependent, structurally constrained, and token-specific. In tandem, the empirical data and modeling work provide deeper insight into the representation of and access to information in memory that comprises a plan for guiding behavior.

Discusses the inability of the standard Boolean logic model of information retrieval to deal effectively with the inherent fallibility of retrieval decisions. Recent advances in information retrieval research are reviewed, and their practical potential for overcoming the deficiencies of the Boolean model is examined. (45 references) (Author/CLB)

All of the methods currently used to assess information retrieval (IR) systems have limitations in their ability to measure how well users are able to acquire information. We utilized a new approach to assessing information obtained, based on a short-answer test given to senior medical students. Students took the ten-question test and then searched one of two IR systems on the five questions for which they were least certain of their answer. Our results showed that pre-searching scores on the test were low but that searching yielded a high proportion of answers with both systems. These methods are able to measure information obtained, and will be used in subsequent studies to assess differences among IR systems. PMID:7950053

information retrieval. For instance, information retrieval tools must contend with obstacles such as polysemy, which refers to words with multiple…meanings, and synonymy, which is used to describe multiple words with the same meaning. Many of these problems can be minimised when the query is…with the proportion of retrieved items that are relevant. Information retrieval systems aim to maximise both of these measures, and
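The two measures this fragment refers to, precision and recall, can be sketched in a few lines (the document identifiers and relevance judgments below are invented for illustration):

```python
def precision_recall(retrieved, relevant):
    """Precision: fraction of retrieved items that are relevant.
    Recall: fraction of relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 4 documents retrieved, 3 judged relevant, 2 of them retrieved.
p, r = precision_recall(retrieved=["d1", "d2", "d3", "d4"],
                        relevant=["d1", "d3", "d7"])
```

A system can trivially maximise recall by retrieving everything, or precision by retrieving only its single most confident hit, which is why the two are always reported together.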

Describes an information retrieval, visualization, and manipulation model which offers the user multiple ways to exploit the retrieval set, based on weighted query terms, via an interactive interface. Outlines the mathematical model and describes an information retrieval application built on the model to search structured and full-text files.…

Contents: The problem of information storage and retrieval - data banking; The nature of information and communication between minds; The five steps of data banking; A classification system for data banking processes; A partial analysis of the problem of retrieval; A retrieval solution; and Implementation of the solution.

Presented are the proceedings from a workshop held in Rabat, Morocco from May 24-28, 1976. The main objective of the workshop was to evaluate progress made in information transfer and prepare a program of action enabling documentation centers created or assisted by UNESCO to cooperate in such areas as staff training, translation activities, and…

Introduces a new method for the visualization of information retrieval called TOFIR (Tool of Facilitating Information Retrieval). Discusses the use of angle attributes of a document to construct the angle-based visual space; two-dimensional and three-dimensional visual tools; ambiguity; and future research directions. (Author/LRW)

Searching for useful information in unstructured medical multimedia data has been a difficult problem in information retrieval. This paper reports an effective semantic medical multimedia retrieval approach which can reflect the users' query intent. Firstly, semantic annotations are given to the multimedia documents in the medical multimedia database. Secondly, the ontology representing semantic information is hidden in the head of the multimedia documents. The main innovations of this approach are cross-type retrieval support and semantic information preservation. Experimental results indicate good precision and efficiency of our approach for medical multimedia retrieval in comparison with some traditional approaches. PMID:24082915

Modern medical information retrieval systems are paramount to manage the insurmountable quantities of clinical data. These systems empower health care experts in the diagnosis of patients and play an important role in the clinical decision process. However, the ever-growing heterogeneous information generated in medical environments poses several challenges for retrieval systems. We propose a medical information retrieval system with support for multimodal medical case-based retrieval. The system supports medical information discovery by providing multimodal search, through a novel data fusion algorithm, and term suggestions from a medical thesaurus. Our search system compared favorably to other systems in 2013 ImageCLEFMedical.
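The abstract does not specify its novel data fusion algorithm, so the sketch below shows only a classic late-fusion baseline (CombSUM with min-max score normalization) over hypothetical text and image scores, as one way multimodal search results can be combined:

```python
def combsum(score_lists):
    """CombSUM late fusion: min-max normalize each modality's scores
    to [0, 1], then sum the normalized scores per document."""
    fused = {}
    for scores in score_lists:
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # avoid division by zero
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + (s - lo) / span
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical per-modality scores for three medical cases.
text_scores = {"case1": 12.0, "case2": 7.0, "case3": 3.0}
image_scores = {"case2": 0.9, "case3": 0.8, "case1": 0.1}
ranking = combsum([text_scores, image_scores])
```

Normalization matters here because raw text and image scores live on different scales; without it, one modality would dominate the sum.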


Background: During the last fifty years, improved information retrieval techniques have become necessary because of the huge amount of information people have available, which continues to increase rapidly due to the use of new technologies and the Internet. Stemming is one of the processes that can improve information retrieval in terms of…

Transcriptome analysis is essential to understand the mechanisms regulating key biological processes and functions. The first step usually consists of identifying candidate genes; to find out which pathways are affected by those genes, however, functional analysis (FA) is mandatory. The most frequently used strategies for this purpose are Gene Set and Singular Enrichment Analysis (GSEA and SEA) over Gene Ontology. Several statistical methods have been developed and compared in terms of computational efficiency and/or statistical appropriateness. However, whether their results are similar or complementary, the sensitivity to parameter settings, or possible bias in the analyzed terms has not been addressed so far. Here, two GSEA and four SEA methods and their parameter combinations were evaluated in six datasets by comparing two breast cancer subtypes with well-known differences in genetic background and patient outcomes. We show that GSEA and SEA lead to different results depending on the chosen statistic, model and/or parameters. Both approaches provide complementary results from a biological perspective. Hence, an Integrative Functional Analysis (IFA) tool is proposed to improve information retrieval in FA. It provides a common gene expression analytic framework that grants a comprehensive and coherent analysis. Only a minimal user parameter setting is required, since the best SEA/GSEA alternatives are integrated. IFA utility was demonstrated by evaluating four prostate cancer and the TCGA breast cancer microarray datasets, which showed its biological generalization capabilities.

Personalized information retrieval systems can help users orient themselves amid information overload by determining which items are relevant to their interests. One type of information retrieval is content-based filtering, in which items contain words in natural language. Meanings of words in natural language are often ambiguous, and the problem of word meaning disambiguation is often decomposed into determining the semantic similarity of words. In this paper, the architecture of personalized information retrieval based on user interest is presented. The architecture includes a user interface model, a user interest model, an interest detection model, and an update model. A user model based on a user-interest keyword list is established on the client, supplying a personalized information retrieval service through the communication and collaboration of all modules of the architecture.
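A minimal sketch of scoring items against a user-interest keyword list (the keywords, weights, and items below are invented illustrations, not the paper's actual models, which would also handle word-similarity rather than exact matches):

```python
def interest_score(item_words, interest_keywords):
    """Score an item by the summed weights of the user's interest
    keywords that appear among the item's words."""
    words = {w.lower() for w in item_words}
    return sum(w for kw, w in interest_keywords.items() if kw in words)

# Hypothetical user-interest keyword list with per-keyword weights.
profile = {"retrieval": 0.9, "ontology": 0.7, "music": 0.4}

items = {
    "a": "semantic ontology retrieval for clinical notes".split(),
    "b": "music pitch search".split(),
}
ranked = sorted(items, key=lambda k: interest_score(items[k], profile),
                reverse=True)
```

An update module in such an architecture would adjust the profile weights as the user's clicks reveal shifting interests; here the profile is static for brevity.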

Considers some of the most fundamental problems in music information retrieval, challenging the common assumption that searching on pitch alone is likely to be satisfactory for all purposes. Discusses special issues related to polyphonic music, user-interface issues, and the notion of relevance for music information retrieval. (Contains 52…

Discusses XML and information retrieval and describes a query language, ELIXIR (expressive and efficient language for XML information retrieval), with a textual similarity operator that can be used for similarity joins. Explains the algorithm for answering ELIXIR queries to generate intermediate relational data. (Author/LRW)

Discusses the requirements of information retrieval systems to support creative thinking as well as more convergent thinking. Highlights include the nature of creative thinking; similarity relationships; serendipity; machine processing of similarities; high order knowledge representation; and fuzzy and parallel information retrieval. (Contains 34…

This document first outlines considerations relative to a systems approach to evaluation, and then argues for such an approach to the evaluation of information retrieval systems (ISR). The criterion of such evaluations should be the utility of the information retrieved to the user, and the ISR ought to be regarded as one of three interrelated…

In 7 experiments, we explored the role of retrieval in associative updating, that is, in incorporating new information into an associative memory. We tested the hypothesis that retrieval would facilitate incorporating a new contextual detail into a learned association. Participants learned 3 pieces of information--a person's face, name, and…

Information retrieval models usually represent content only, and not other considerations such as authority, cost, and recency. How could multiple criteria be utilized in information retrieval, and how would that affect the results? In our experiments, using multiple user-centric criteria always produced better results than a single criterion.
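One simple way to combine such criteria, shown here as a hedged sketch (the criteria names, weights, and per-criterion scores are invented, and the paper's actual combination method may differ), is a weighted linear combination of per-criterion scores:

```python
def multi_criteria_score(doc_scores, weights):
    """Weighted linear combination of per-criterion scores in [0, 1].
    Weights express how much the user values each criterion."""
    return sum(weights[c] * doc_scores[c] for c in weights)

# Hypothetical user-centric weighting: content still dominates, but
# authority, recency, and cost also contribute.
weights = {"content": 0.6, "authority": 0.2, "recency": 0.15, "cost": 0.05}
doc = {"content": 0.8, "authority": 0.9, "recency": 0.3, "cost": 1.0}
score = multi_criteria_score(doc, weights)
```

Because the weights sum to 1 and each criterion score lies in [0, 1], the combined score stays in [0, 1] and documents remain directly comparable.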

Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…

The algorithms are characterized that were used for production processing by the major suppliers of ozone data to show quantitatively: how the retrieved profile is related to the actual profile (This characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.

The development, capabilities, and products of the computer-based retrieval system of the Jet Propulsion Laboratory Library are described. The system handles books and documents, produces a book catalog, and provides a machine search capability. (Author)

Manufacturing Execution System (MES) is one of the crucial technologies to implement informatization management in manufacturing enterprises, and the construction of its information model is the base of MES database development. Basis on the analysis of the manufacturing process information in mechanical blanking workshop and the information requirement of MES every function module, the IDEF1X method was adopted to construct the information model of MES oriented to mechanical blanking workshop, and a detailed description of the data structure feature included in MES every function module and their logical relationship was given from the point of view of information relationship, which laid the foundation for the design of MES database.

The objective of this research is to develop technology for the automated characterization and interactive retrieval and visualization of very large, complex scientific data sets. Technologies will be developed for the following specific areas: (1) rapidly archiving data sets; (2) automatically characterizing and labeling data in near real-time; (3) providing users with the ability to browse contents of databases efficiently and effectively; (4) providing users with the ability to access and retrieve system independent data sets electronically; and (5) automatically alerting scientists to anomalies detected in data.

Discusses the motivation for integrating information retrieval and database management systems, and proposes a probabilistic retrieval model in which records in a file may be composed of attributes (formatted data items) and descriptors (content indicators). The details and resolutions of difficulties involved in integrating such systems are…

Reviews previous work on producing knowledge by the use of information retrieval or classification schemes, and describes techniques by which hidden knowledge may be retrieved, i.e., serendipity in browsing and use of appropriate search strategies. Possible future methods based on relational indexing or artificial intelligence are also explored.…

Explains how a competition-based connectionist model for diagnostic problem-solving is adapted to information retrieval. Topics include probabilistic causal networks; Bayesian networks; the neural network model; empirical studies of test collections that evaluated retrieval performance; precision results; and the use of a thesaurus to provide…

Presents a review of recent and current work in the United Kingdom on developing computer assisted learning aids (tape-slide presentations, teaching packages, simulations and emulations of commercial retrieval services) for the teaching of online information retrieval at library schools. Seventeen references are cited. (EJS)

Discussion of the interpretation of user queries in information retrieval highlights theoretical models that utilize user characteristics maintained in the form of a user profile. Various query/profile interaction models are identified, and an experiment is described that tested the relevance of retrieved documents based on various models. (29…

This paper reports an experiment in on-line retrieval using man-machine dialogue on a remote console. Message editing procedures and the use of two command languages are described. The system employs a PDP-8 computer for generating, proofreading, and editing messages, and an IBM 7040 computer for information retrieval processing. The symbolic…

Unlike conventional information retrieval systems, natural language processing (NLP) systems translate queries automatically into the language of the system. This paper discusses the potential impact of NLP on both the indexing and retrieval of text and examines some current NLP projects and systems that have established knowledge bases in narrow…

This study examines existent and new methods for evaluating the success of information retrieval systems. The theory underlying current methods is not robust enough to allow testing retrieval using different meta-tagging schemas. Traditional measures rely on judgments of whether a document is relevant to a particular question. A good system…

In 1992, the Clearinghouse project initiated a series of workshops based on the premise that good information, appropriately tailored, is critical to all aspects of programming and decision making in health. The workshops emphasize group problem-solving and participation, rather than a top-down transfer of preformulated information. All workshop participants develop an information strategy for their organization and prepare a plan for evaluating their information activities. Participants become members of the "Information for Action" (IFA) network for ongoing capacity building, technology transfer, collaboration, and information exchange; all receive regular information packets and updates from the Clearinghouse. The workshops also draw on the experiences and resources of the Panos Institute, Advocates for Youth, Communications Development Group, the Benton Foundation, and the Advocacy Institute. The Clearinghouse project has been producing and disseminating information since 1980; its collection is organized into four databases and includes more than 20,000 documents. The circulation of "Mothers and Children" is 45,000. The Clearinghouse is decentralizing by strengthening and supporting ongoing efforts of organizations, building on what is already in place, rather than by establishing new satellite libraries.

This is a lecture given at the 15th anniversary of the JICST Kyushu Branch. In medical science there are many fields of study, classified by differences of approach. Each field is closely related to the others, and to study one field, knowledge of other fields is also needed. This characteristic of medical study has long been a problem in searching the medical literature. Online information retrieval such as JOIS has made retrieval much easier; however, some difficulties due to this characteristic still remain. The importance of training specialists in information retrieval, the construction of specialized databases, making databases easier to use, and so on are suggested.

Patients' health related information is stored in electronic health records (EHRs) by health service providers. These records include sequential documentation of care episodes in the form of clinical notes. EHRs are used throughout the health care sector by professionals, administrators and patients, primarily for clinical purposes, but also for secondary purposes such as decision support and research. The vast amounts of information in EHR systems complicate information management and increase the risk of information overload. Therefore, clinicians and researchers need new tools to manage the information stored in the EHRs. A common use case is, given a - possibly unfinished - care episode, to retrieve the most similar care episodes among the records. This paper presents several methods for information retrieval, focusing on care episode retrieval, based on textual similarity, where similarity is measured through domain-specific modelling of the distributional semantics of words. Models include variants of random indexing and the semantic neural network model word2vec. Two novel methods are introduced that utilize the ICD-10 codes attached to care episodes to better induce domain-specificity in the semantic model. We report on experimental evaluation of care episode retrieval that circumvents the lack of human judgements regarding episode relevance. Results suggest that several of the methods proposed outperform a state-of-the-art search engine (Lucene) on the retrieval task. PMID:26099735
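The retrieval task itself can be sketched with a plain TF-IDF cosine baseline (the toy "episodes" below are invented; the paper's own methods use distributional-semantic models such as random indexing and word2vec rather than TF-IDF):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical, heavily simplified care-episode texts.
episodes = [
    "chest pain troponin elevated".split(),
    "chest pain discharged stable".split(),
    "fracture left radius cast".split(),
]
vecs = tfidf_vectors(episodes)
query = vecs[0]  # treat episode 0 as the (possibly unfinished) query episode
best = max(range(1, len(vecs)), key=lambda i: cosine(query, vecs[i]))
```

The cardiac episode is ranked above the fracture episode because of shared terms; the paper's semantic models aim to capture such relatedness even without exact term overlap.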


An evaluation of an informationretrieval system using a Boolean-based retrieval engine and inverted file architecture and WAIS, which uses a vector-based engine, was conducted. Four research questions in aeronautical engineering were used to retrieve sets of citations from the NASA Aerospace Database which was mounted on a WAIS server and available through Dialog File 108 which served as the Boolean-based system (BBS). High recall and high precision searches were done in the BBS and terse and verbose queries were used in the WAIS condition. Precision values for the WAIS searches were consistently above the precision values for high recall BBS searches and consistently below the precision values for high precision BBS searches. Terse WAIS queries gave somewhat better precision performance than verbose WAIS queries. In every case, a small number of relevant documents retrieved by one system were not retrieved by the other, indicating the incomplete nature of the results from either retrieval system. Relevant documents in the WAIS searches were found to be randomly distributed in the retrieved sets rather than distributed by ranks. Advantages and limitations of both types of systems are discussed.

Three firms which offer online information retrieval are compared. The firms are Lockheed Information Service, System Development Corporation and the Western Research Application Center. Comparison tables provide information such as hours accessible, coverage, file update, search elements and cost figures for 15 data bases. In addition, general…

Discusses users' search behavior and decision making in data mining and information retrieval. Describes iterative information seeking as a Markov process during which users advance through states of nodes; and explains how the information system records the decision as weights, allowing the incorporation of users' decisions into the Markov…
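A hedged sketch of such a Markov process (the state names and transition weights below are invented; the abstract does not specify the actual state space): recorded decision weights are normalized into transition probabilities, and a search session is a walk through the states until the user stops.

```python
import random

# Hypothetical weights recorded from logged user decisions.
weights = {
    "query": {"results": 8, "reformulate": 2},
    "results": {"document": 6, "reformulate": 3, "stop": 1},
    "reformulate": {"results": 10},
    "document": {"results": 4, "stop": 6},
}

def step(state, rng):
    """One Markov step: choose the next state with probability
    proportional to its recorded weight."""
    options = weights[state]
    r = rng.random() * sum(options.values())
    for s, w in options.items():
        r -= w
        if r <= 0:
            return s
    return s  # numerical edge case: return the last option

rng = random.Random(0)  # fixed seed for a reproducible walk
state, path = "query", ["query"]
while state != "stop" and len(path) < 500:  # bound guards against long walks
    state = step(state, rng)
    path.append(state)
```

Updating the weights after each observed decision is what lets the system incorporate users' choices back into the model.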

The new trend on the Web has totally changed today's information access environment. The traditional information overload problem has evolved into the qualitative level beyond the quantitative growth. The mode of producing and consuming information is changing and we need a new paradigm for accessing information. Personalized search is one of…

A high amount of relevant information is contained in reports stored in electronic patient records and associated metadata. R-oogle is a project aiming at developing information retrieval engines adapted to these reports and designed for clinicians. The system consists of a data warehouse (full-text reports and structured data) imported from two different hospital information systems. Information retrieval is performed using metadata-based semantic and full-text search methods (as in Google). Applications may include biomarker identification in a translational approach, searching for specific cases, constitution of cohorts, professional practice evaluation, and quality control assessment.

This packet offers information about NIE (Newspaper in Education) credit-granting courses and workshops (some of them cooperative press/school ventures) on the use of newspapers in instructional programs. The packet is in four major sections, containing: (1) case studies of two exceptional programs at the University of Wisconsin-Madison and at…

The Earth and space science participants were able to see where the current research can be applied in their disciplines and computer science participants could see potential areas for future application of computer and information systems research. The Earth and Space Science research proposals for the High Performance Computing and Communications (HPCC) program were under evaluation. Therefore, this effort was not discussed at the AISRP Workshop. OSSA's other high priority area in computer science is scientific visualization, with the entire second day of the workshop devoted to it.

monolingual and cross-language Arabic retrieval, and did not submit any runs based on novel approaches. We submitted three monolingual runs and one…performance by a substantial amount. 2. Information Retrieval Engines We used INQUERY [2] for two of our three monolingual runs and our cross-language…run, and language modeling (LM) for one monolingual run. The processing was carried out using in-house software which implemented both engines, to

In this paper, we present an organized survey of the existing literature on music information retrieval systems in which descriptor features are extracted directly from the compressed audio files, without prior decompression to pulse-code modulation format. Avoiding the decompression step and utilizing the readily available compressed-domain information can significantly lighten the computational cost of a music information retrieval system, allowing application to large-scale music databases. We identify a number of systems relying on compressed-domain information and form a systematic classification of the features they extract, the retrieval tasks they tackle and the degree to which they achieve an actual increase in the overall speed, as well as any resulting loss in accuracy. Finally, we discuss recent developments in the field, and the potential research directions they open toward ultra-fast, scalable systems.

Urbana-Champaign, Urbana-Champaign, IL, USA, willis8@illinois.edu; Richard Medlin, School of Information & Library Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, rich_medlin@med.unc.edu; Jaime Arguello, School of Information & Library Science, University of North Carolina at Chapel Hill…published prior to time tQ. The School of Information and Library Science at the University of North Carolina at Chapel Hill submitted four runs to the Microblog

This paper opens with a brief history of hypertext and hypermedia in the context of information management during the 'information age.' Relevant terms are defined and the approach of the paper is explained. Linear and hypermedia information access methods are contrasted. A discussion of hyperprogramming in the handling of complex scientific and technical information follows. A selection of innovative hypermedia systems is discussed. An analysis of the Clinical Practice Library of Medicine NASA STI Program hypermedia application is presented. The paper concludes with a discussion of the NASA STI Program's future hypermedia project plans.

This examination of the use of local area networks (LANs) by libraries summarizes the findings of a nationwide survey of 600 libraries and information centers and 200 microcomputer networking system manufacturers and vendors, which was conducted to determine the relevance of currently available networking systems for library and information center…

Briefly described is the County of San Mateo Online System (COSMOS) which was developed and is used by the San Mateo Educational Resources Center (SMERC) to access the Educational Resources Information Center (ERIC) and Fugitive Information Data Organizer (FIDO) databases as well as the curriculum guides housed at SMERC. (TG)

Among the key recommendations of a recent WCRP Workshop on Drought Predictability and Prediction in a Changing Climate is the development of an experimental global drought information system (GDIS). The timeliness of such an effort is evidenced by the wide array of relevant ongoing national and international (as well as regional and continental scale) efforts to provide drought information, including the US and North American drought monitors, and various integrating activities such as GEO and the Global Drought Portal. The workshop will review current capabilities and needs, and focus on the steps necessary to develop a GDIS that will build upon the extensive worldwide investments that have already been made in developing drought monitoring (including new space-based observations), drought risk management, and climate prediction capabilities.

Information overproduction and the lack of an adequate system for its storage and retrieval have frustrated integrative efforts and hindered orderly progress in the area of psychological testing. Describes a prototype repository system to handle and service the mass of information produced. (Authors)

A project that attempts to overcome the principal obstacles and to provide an efficient and effective method of teaching information retrieval skills to second-year medical students is described. The method includes a pretest, a diagnosis of deficiencies in information skills, a self-paced learning module, and a posttest. (Author/MLW)

This introduction to a special issue devoted to modeling data, information, and knowledge briefly describes the origins of the papers presented and the topics covered, which include: Boolean logic; probability theory; artificial intelligence; organizing and encoding information and data; and characteristics of users of retrieval systems. (12…

Discusses latent semantic indexing (LSI); considers the high cost associated with the singular value decomposition (SVD) of the large term-by-document matrix, which becomes a barrier to its application in scalable information retrieval; and shows that information filtering using level search techniques can reduce the SVD computation cost for LSI.…
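
As a sketch of the LSI machinery this record refers to, the rank-k truncated SVD of a term-by-document matrix gives the latent space into which both documents and queries are folded. The tiny matrix and the choice k=2 below are illustrative assumptions, not data from the cited work.

```python
import numpy as np

# Toy term-by-document matrix (rows = terms, columns = documents).
A = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
], dtype=float)

# Truncated SVD: keep only the k largest singular values/vectors.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-k LSI approximation

# Fold a query into the same k-dimensional latent space:
# q_k = q^T U_k S_k^{-1}; documents live in the rows of Vt[:k, :].T.
q = np.array([1, 0, 1, 0], dtype=float)
q_k = q @ U[:, :k] @ np.diag(1.0 / s[:k])
doc_vecs = Vt[:k, :].T
```

The SVD of the full matrix is the expensive step the record targets; filtering the matrix first, as the abstract describes, shrinks the input to this decomposition.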

As the fourteenth report in a series describing research in automatic information storage and retrieval, this document covers work carried out on the SMART project for approximately one year (summer 1967 to summer 1968). The document is divided into four main parts: (1) SMART systems design, (2) analysis and search experiments, (3) user feedback…

The following topics are discussed to show various forms for cooperative ventures in the information field which countries within a region might consider: (1) building up basic resources, collections, and stores; (2) collective retrieval tools to the combined resources of the region; (3) regional information service activities; (4) communication…

We show that, contrary to common prejudice, a measurement of an open quantum system can reduce its decoherence rate. We demonstrate this in an example of indirect measurement of a qubit, where information regarding its state is hidden in the environment. This information is extracted by a distant device, coupled with the environment only. We also show that the reduction of decoherence generated by this device is accompanied by diminution of the environmental noise in the vicinity of the qubit. An interpretation of these results in terms of quantum interference on large scales is presented.

Computer handling of mass spectra serves two main purposes: the interpretation of the occasional, problematic mass spectrum, and the identification of the large number of spectra generated in the gas-chromatographic-mass spectrometric (GC-MS) analysis of complex natural and synthetic mixtures. Methods available fall into the three categories of library search, artificial intelligence, and learning machine. Optional procedures for coding, abbreviating and filtering a library of spectra minimize time and storage requirements. Newer techniques make increasing use of probability and information theory in accessing files of mass spectral information.

We describe a web-based data warehousing method for retrieving and analyzing neurological multimedia information. The web-based method supports convenient access, effective search and retrieval of clinical textual and image data, and on-line analysis. To improve the flexibility and efficiency of multimedia information query and analysis, a three-tier multimedia data warehouse for epilepsy research has been built. The data warehouse integrates clinical multimedia data related to epilepsy from disparate sources and archives them into a well-defined data model.

The ability to retrieve relevant information is at the heart of every aspect of research and development in the life sciences industry. Information is often distributed across multiple systems and recorded in a way that makes it difficult to piece together the complete picture. Differences in data formats, naming schemes and network protocols amongst information sources, both public and private, must be overcome, and user interfaces not only need to be able to tap into these diverse information sources but must also assist users in filtering out extraneous information and highlighting the key relationships hidden within an aggregated set of information. The Semantic Web community has made great strides in proposing solutions to these problems, and many efforts are underway to apply Semantic Web techniques to the problem of information retrieval in the life sciences space. This article gives an overview of the principles underlying a Semantic Web-enabled information retrieval system: creating a unified abstraction for knowledge using the RDF semantic network model; designing semantic lenses that extract contextually relevant subsets of information; and assembling semantic lenses into powerful information displays. Furthermore, concrete examples of how these principles can be applied to life science problems, including a scenario involving a drug discovery dashboard prototype called BioDash, are provided.

In information retrieval (IR), the aim of genetic algorithms (GA) is to help a system find, in a huge document collection, a good reply to a query expressed by the user. Analysis of phenomena seen during the implementation of a GA for IR has led to a new crossover operation, which is introduced and compared to other learning methods.…

The use of an on-line information retrieval system by the scientists themselves is described. MEDUSA was designed to allow physicians to interrogate the MEDLARS data base. A brief description is given of the system and details of an experiment to test its effectiveness. (8 references) (Author)

The last three decades have shown a marked development in new technologies for storing and retrieving information: microform in the 1960s, online databases in the 1970s, and CD-ROM in the 1980s. While microform lacks volatility and multiple access points, preservation programs find it to be an ideal storage medium: it has a longer life expectancy than…

A study, using a specific patient encounter as the focal point for each student's research, is described that documents the skills of entering freshmen medical students before and immediately after a short course emphasizing information retrieval, and at follow-up one year later. (MLW)

The twenty-second in a series, this report describes research in information organization and retrieval conducted by the Department of Computer Science at Cornell University. The report covers work carried out during the period summer 1972 through summer 1974 and is divided into four parts: indexing theory, automatic content analysis, feedback…

Presents notations and definitions necessary to identify the concepts and relationships that are important in modelling information retrieval objects and processes in the context of vector spaces. Earlier work on the use of vector models is evaluated in terms of the concepts introduced, and certain problems are identified. (Author/EM)
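
As a minimal illustration of the vector-space view this record formalizes, documents and queries can be represented as term-weight vectors and ranked by cosine similarity. The vectors below are invented for the example.

```python
import math

def cosine(u, v):
    """Cosine of the angle between two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical term weights for two documents and a query.
docs = {
    "d1": [1.0, 0.0, 2.0],
    "d2": [0.0, 1.0, 0.0],
}
query = [1.0, 0.0, 1.0]

# Rank documents by decreasing similarity to the query.
ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
```

Because d1 shares weighted terms with the query and d2 shares none, d1 ranks first.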

Discusses political questions raised by the existence and growth potential of the online retrieval industry because it is privately owned and profit oriented. The user fee debate, information ownership, danger of manipulation, libraries as marketers, library action, and a library database consortium are highlighted. Seventeen references are cited.…

Compares support vector machines (SVMs) to the Rocchio, Ide regular, and Ide dec-hi algorithms in information retrieval (IR) of text documents using relevance feedback. If the preliminary search is so poor that one has to search through many documents to find at least one relevant document, then SVM is preferred. Includes nine tables. (Contains 24…
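
The Rocchio algorithm compared against SVM here is the classic relevance-feedback update: the query vector moves toward the centroid of judged-relevant documents and away from the non-relevant ones. A minimal sketch, with conventional (but here assumed) weights alpha, beta, gamma:

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: pull the query toward relevant
    documents and push it away from non-relevant ones."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return np.maximum(q, 0.0)  # negative term weights are usually clipped

# Hypothetical term-weight vectors for one feedback round.
q0 = np.array([1.0, 0.0, 0.0])
rel = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 1.0]])
nonrel = np.array([[1.0, 0.0, 1.0]])
q1 = rocchio(q0, rel, nonrel)
```

The updated query now carries weight on terms it never contained, which is exactly the behavior relevance feedback is meant to produce.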

The emphasis of this text is that the problems of information retrieval are the intellectual ones of subject analysis and description. These problems are not easily solved by technology alone, although certain inroads are being made by knowledge-based expert systems and other computer aids to indexing. Linguistic approaches are also showing…

Content-based image retrieval is based on the idea of extracting visual features from images and using them to index images in a database. Proposes similarity measures and an indexing algorithm based on information theory that permits an image to be represented as a single number. When used in conjunction with vectors, this method displays…

Two major uses were identified for the Sorption Information Retrieval System: (1) to aid geochemists in the elucidation of sorption mechanisms; and (2) to aid safety assessment modelers in the selection of Kds for any given scenario. Other benefits, such as providing an auditable vehicle for Kd selection, were also discussed.

We propose two symmetrically-private information retrieval protocols based on quantum key distribution, which provide a good degree of database and user privacy while being flexible, loss-resistant, and easily generalized to a large database, as in previous works. Furthermore, one protocol is robust to collective-dephasing noise, and the other is robust to collective-rotation noise.

Purpose: The purpose of this paper is to make a scientific contribution to web information retrieval (IR). Design/methodology/approach: A multiagent system for web IR is proposed based on new technologies: Hierarchical Small-Worlds (HSW) and Possibilistic Networks (PN). This system is based on a possibilistic qualitative approach which extends the…

Investigates an automatic method for Cross-Language Information Retrieval (CLIR) that utilizes the multilingual Unified Medical Language System (UMLS) Metathesaurus to translate Spanish natural-language queries into English. Results indicate that for Spanish, the UMLS Metathesaurus-based CLIR method is at least equivalent to, if not better than…

The results of manual and online searching are compared during a unit on online chemical information retrieval taught at Hebrew University. Strategies and results obtained are provided for student searches on the synthesis of vitamin K(3) from 2-methylnaphthalene and on polywater. (JN)

Describes the use of PROLOG to program knowledge-based information retrieval systems, in which the knowledge contained in a document is translated into machine-processable logic. Several examples of the resulting search process, and the program rules supporting the process, are given. (10 references) (CLB)

We give an outline of JOIS (JICST On-line Information System)-III, which has been developed as a successor to JOIS-II, now in service at the Japan Information Center of Science and Technology, and which is scheduled to begin service in January 1990. In this report we explain mainly the new retrieval functions, such as multiple-file searches and proximity searching. (1st part Vol. 32, No. 3; 3rd part Vol. 32, No. 5)

The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for a platform on which ARNIE will be utilized is necessary for creating the ARNIE library but is not necessary for the execution of the software.

Problems and methods of automating information retrieval operations in a data bank used for long-term storage and retrieval of data from scientific experiments are discussed. Existing information retrieval languages are analyzed, along with those being developed. The results of studies discussing the application of the descriptive 'Kristall' language used in the 'ASIOR' automated information retrieval system are presented. The development and use of a specialized language of the classification-descriptive type, using universal decimal classification indices as the main descriptors, is described.

The workshop was held to discuss the status of marketing ocean energy information and to develop an understanding of information needs and how to satisfy them. Presentations were made by the Solar Energy Research Institute (SERI) staff and media consultants about the effective use of audio-visual and print products, the mass media, and audience needs. Industry and government representatives reported on current efforts in each of their communication programs and outlined future plans. Four target audiences (DOE contractors, researchers, influencers, and general public) were discussed with respect to developing priorities for projects to enhance the commercialization of ocean energy technology.

We give an outline of JOIS (JICST On-line Information System)-III, which has been developed as a successor to JOIS-II, now in service at the Japan Information Center of Science and Technology, and which is scheduled to begin service in January 1990. JOIS-III is an on-line information retrieval system intended to be a totally upgraded JOIS-II, with functional expansion, enriched databases, and high-speed processing of large quantities of data. We also describe the structures of the files to be retrieved, such as the bit-map file, which has been introduced to cope with large databases, and the group structure. (2nd part Vol. 32, No. 4; 3rd part Vol. 32, No. 5)

The fourth in a series of articles on the evaluation of microcomputer software for information storage and retrieval, conducted by the Netherlands Association of Users of Online Information Systems (VOGIN), presents test results for six indexing and full-text retrieval programs--Ask-It, KAware, Texplore, TextMaster, WordCruncher, and ZYindex. (13…

In this paper, we propose an intelligent bird information retrieval system which aims to support a mobile-learning activity using up-to-date wireless technology. The system consists of a Tablet PC and PDAs with wireless networking capabilities. The PDA is equipped with a friendly retrieval interface and a good learning environment. In our system, users only need to click buttons or input keywords to retrieve bird information. Besides, users can discuss or share their information and knowledge via the wireless network. Our system stores bird information in five categories: "Introduction," "Images," "Sound," "Streaming Media," and "Ecological Memo." The integrated knowledge helps users understand more about birds. Data mining and fuzzy association rules are applied to recommend to users those birds they may be interested in. A streaming server on the Tablet PC is built to provide streaming media for PDA users. In this way, PDA users can enjoy multimedia from the Tablet PC in real time without downloading it completely. Finally, the system is a perfect tool for outdoor teaching and can easily be extended to provide navigation and touring services for national parks or museums.

This paper presents an Information Retrieval mechanism to facilitate the writing of technical documents in the space domain. To address the need for document exchange between partners in a given project, documents are standardized. The writing of a new document requires the re-use of existing documents or parts thereof. These parts can be identified by "tagging" the logical structure of documents and restored by means of a purpose-built Information Retrieval System (I.R.S.). The I.R.S. implemented in our writing-assistance tool uses natural language queries and is based on a statistical linguistic approach which is enhanced by the use of a document-structure module.

Sequence retrieval in genomic databases is used for finding sequences related to a query sequence specified by a user. Comparison is the main part of the retrieval system in genomic databases, and an efficient sequence comparison algorithm is critical in bioinformatics. There are several different algorithms for sequence comparison, such as suffix-array-based database search, divergence measurement, methods that rely upon the existence of a local similarity between the query sequence and sequences in the database, or common mutual information between the query and sequences in the database. In this paper we describe a new method for DNA sequence retrieval based on data mining techniques. Data mining tools generally find patterns among data and have been successfully applied in industry to improve marketing, sales, and customer support operations. We have applied descriptive data mining techniques to find relevant patterns that are significant for comparing genetic sequences. A relevance feedback score based on common patterns is developed and employed to compute the distance between sequences. The contigs of human chromosomes are used to test the retrieval accuracy, and the experimental results are presented.
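
The record does not give the paper's actual pattern-scoring formula; as a simple stand-in, shared fixed-length substrings (k-mers) yield a common-pattern similarity between DNA sequences. All names and sequences below are hypothetical.

```python
def kmers(seq, k=3):
    """Set of length-k substrings (k-mers) of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def pattern_score(a, b, k=3):
    """Jaccard similarity over shared k-mers: a simple stand-in
    for a common-pattern relevance score between two sequences."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

# Retrieve the database sequence with the highest pattern score.
db = {"s1": "ACGTACGT", "s2": "TTTTGGGG"}
query = "ACGTAC"
best = max(db, key=lambda name: pattern_score(query, db[name]))
```

Real genomic retrieval works over far longer sequences and richer pattern sets, but the scoring idea, rank by shared patterns, is the same.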

The paper presents a knowledge-based framework for skills and talent management based on advanced matchmaking between profiles of candidates and available job positions. Notably, the informative content of top-k retrieval is enriched through semantic capabilities. The proposed approach allows one to: (1) express a requested profile in terms of both hard and soft constraints; (2) provide a ranking function based also on qualitative attributes of a profile; and (3) explain the resulting outcomes (given a job request, a motivation for the obtained score of each selected profile is provided). Top-k retrieval allows the most promising candidates to be selected according to an ontology formalizing the domain knowledge. This knowledge is further exploited to provide a semantic-based explanation of missing or conflicting features in retrieved profiles. These also indicate additional profile characteristics emerging from the retrieval procedure for further request refinement. A concrete case study, followed by an exhaustive experimental campaign, is reported to prove the approach's effectiveness.

EM-21 is the Waste Processing Division of the Office of Engineering and Technology, within the U.S. Department of Energy's (DOE) Office of Environmental Management (EM). In August of 2008, EM-21 began an initiative to develop a Retrieval Knowledge Center (RKC) to provide the DOE, high-level waste retrieval operators, and technology developers with a centralized and focused location to share knowledge and expertise that will be used to address retrieval challenges across the DOE complex. The RKC is also designed to facilitate information sharing across the DOE Waste Site Complex through workshops and a searchable database of waste retrieval technology information. The database may be used to research effective technology approaches for specific retrieval tasks and to take advantage of the lessons learned from previous operations. It is also expected to be effective for remaining current with the state of the art in retrieval technologies and ongoing development within the DOE Complex. To encourage collaboration of DOE sites with waste retrieval issues, the RKC team is co-led by the Savannah River National Laboratory (SRNL) and the Pacific Northwest National Laboratory (PNNL). Two RKC workshops were held in the Fall of 2008. The purpose of these workshops was to define top-level waste retrieval functional areas, exchange lessons learned, and develop a path forward to support a strategic business plan focused on technology needs for retrieval. The primary participants in these workshops included retrieval personnel and laboratory staff associated with the Hanford and Savannah River Sites, since the majority of remaining DOE waste tanks are located at these sites. This report summarizes and documents the results of the initial RKC workshops. Technology challenges identified from these workshops and presented here are expected to be a key component in defining future RKC-directed tasks designed to facilitate tank waste retrieval solutions.

Discusses distributed information retrieval systems that take into account the weights of descriptors from thesauri. Topics addressed include a mathematical model for information retrieval subsystems; organization of inverted files; models for distributed homogeneous information systems; and a distributed information retrieval system based on…

The article describes the main features of the website SIBIL (Sistema Informativo per la Bioetica In Linea), implemented within the framework of a research project of the ISS for collecting, indexing and disseminating Italian literature on bioethics since 1995 through an integrated electronic system. The site, addressed to a wide range of people interested in bioethics to varying degrees, offers a comprehensive overview of activities, such as courses and meetings, on the major ethical issues at stake in Italy, as well as a survey of the most important activities at both national and international level. The main feature of SIBIL is a database of a large collection of documents retrieved from sources or through exploitation of the most important international electronic databases. A thesaurus of 1,600 terms, available in Italian and English, was created in order to organize documents with the standardized criteria currently adopted in the Italian scientific environment. Future trends of the website are also discussed, with a view to sharing experiences with other countries and laying the basis for a European portal on bioethics.

Retrieval in a multimedia database usually involves combining information from different modalities of data, such as text and images. However, all modalities of the data may not be available to form the query. The retrieval results from such a partial query are often less than satisfactory. In this paper, we present an approach to complete a partial query by estimating the missing features in the query. Our experiments with a database of images and their associated captions show that, with an initial text-only query, our completion method has similar performance to a full query with both image and text features. In addition, when we use relevance feedback, our approach outperforms the results obtained using a full query.
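
One simple way to realize the query-completion idea, not necessarily the authors' estimator, is to fill in the missing modality with the average features of the database items closest to the query in the available modality. The feature vectors below are invented for illustration.

```python
import numpy as np

# Hypothetical database: each row pairs a text-feature vector
# with the image-feature vector of the same item.
text_feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
img_feats = np.array([[0.2, 0.8], [0.3, 0.7], [0.9, 0.1]])

def complete(query_text, k=2):
    """Estimate the missing image features of a text-only query as
    the mean over the k items nearest in text-feature space."""
    dists = np.linalg.norm(text_feats - query_text, axis=1)
    nearest = np.argsort(dists)[:k]
    return img_feats[nearest].mean(axis=0)

q_img = complete(np.array([1.0, 0.0]))  # estimated image half of the query
```

The completed (text + estimated image) query can then be scored against the database like any full multimodal query.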

This paper describes the requirements and prototype development for an intelligent document management and information retrieval system that will be capable of handling millions of pages of text or other data. Technologies for scanning, Optical Character Recognition (OCR), magneto-optical storage, and multiplatform retrieval using a Structured Query Language (SQL) are discussed. The semantic ambiguity inherent in the English language is somewhat compensated for through the use of coefficients, or weighting factors, for partial synonyms. Such coefficients are used both for defining structured query trees for routine queries and for establishing long-term interest profiles that can be used on a regular basis to alert individual users to the presence of relevant documents that may have just arrived from an external source, such as a news wire service. Although this attempt at evidential reasoning is limited in comparison with the latest developments in AI expert systems technology, it has the advantage of being commercially available.
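
The partial-synonym coefficients can be pictured as term weights that contribute fractional evidence toward a query concept; a document matching only weak synonyms scores lower than one matching exact terms. The weights and terms below are invented for illustration.

```python
# Hypothetical coefficients: how strongly each term counts
# as evidence for the query concept "vessel".
synonyms = {"ship": 1.0, "boat": 0.8, "vessel": 1.0, "craft": 0.5}

def score(doc_terms, weights):
    """Sum the weighted evidence of the partial synonyms
    that actually appear in the document."""
    return sum(w for term, w in weights.items() if term in doc_terms)

doc = {"the", "boat", "and", "craft"}
s = score(doc, synonyms)  # matches "boat" (0.8) and "craft" (0.5)
```

An interest profile in this scheme is just such a weighted term set applied repeatedly to incoming documents, alerting the user when the score crosses a threshold.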

The workshop theme is Cyber Security: Beyond the Maginot Line. Recently the FBI reported that computer crime has skyrocketed, costing over $67 billion in 2005 alone and affecting more than 2.8 million businesses and organizations. Attack sophistication is unprecedented, along with the availability of open-source concomitant tools. Private, academic, and public sectors invest significant resources in cyber security. Industry primarily performs cyber security research as an investment in future products and services. While the public sector also funds cyber security R&D, the majority of this activity focuses on the specific mission(s) of the funding agency. Thus, broad areas of cyber security remain neglected or underdeveloped. Consequently, this workshop endeavors to explore issues involving cyber security and related technologies, toward strengthening such areas and enabling the development of new tools and methods for securing our information infrastructure's critical assets. We aim to assemble new ideas and proposals about robust models on which we can build the architecture of a secure cyberspace, including but not limited to: * Knowledge discovery and management * Critical infrastructure protection * De-obfuscating tools for the validation and verification of tamper-proofed software * Computer network defense technologies * Scalable information assurance strategies * Assessment-driven design for trust * Security metrics and testing methodologies * Validation of security and survivability properties * Threat assessment and risk analysis * Early accurate detection of the insider threat * Security-hardened sensor networks and ubiquitous computing environments * Mobile software authentication protocols * A new "model" of the threat to replace the "Maginot Line" model and more . . .

Holography offers a tremendous opportunity for dense information storage, theoretically one bit per cubic wavelength of material volume, with rapid retrieval of up to thousands of pages of information simultaneously. However, many factors prevent the theoretical storage limit from being reached, including dynamic range problems and imperfections in recording materials. This research explores new ways of moving closer to practical holographic information storage and retrieval by altering the recording materials, in this case photorefractive crystals, and by increasing the current storage capacity while improving the information retrieved. As an experimental example of the techniques developed, the information retrieved is the correlation peak from an optical recognition architecture, but the materials and methods developed are applicable to many other holographic information storage systems. Optical correlators can potentially solve any signal or image recognition problem. Military surveillance, fingerprint identification for law enforcement or employee identification, and video games are but a few examples of applications. A major obstacle keeping optical correlators from being universally accepted is the lack of a high-quality, thick (high-capacity) holographic recording material that operates with the red or infrared wavelengths available from inexpensive diode lasers. This research addresses the problems from two positions: find a better material for use with diode lasers, and reduce the requirements placed on the material while maintaining an efficient and effective system. This research found that the solutions are new dopants introduced into photorefractive lithium niobate to improve wavelength sensitivities and the use of a novel inexpensive diffuser that reduces the dynamic range and optical element quality requirements (which reduces the cost) while improving performance. A uniquely doped set of 12 lithium niobate crystals was specified and

With a view to improving information systems and services in Eastern, Central, and Southern Africa, this Information Systems Workshop was organized by the East and Southern African Management Institute (ESAMI) with assistance from the Coordinating Centre for Regional Information Training (CRIT). The specific aims of the workshop were to acquaint…

We exploit quantitative metrics to investigate the information content in retrievals of atmospheric aerosol parameters (with a focus on single-scattering albedo), contained in multi-angle and multi-spectral measurements with sufficient dynamical range in the sunglint region. The simulations are performed for two classes of maritime aerosols with optical and microphysical properties compiled from measurements of the Aerosol Robotic Network. The information content is assessed using the inverse formalism and is compared to that deriving from observations not affected by sunglint. We find that there indeed is additional information in measurements containing sunglint, not just for single-scattering albedo, but also for aerosol optical thickness and the complex refractive index of the fine aerosol size mode, although the amount of additional information varies with aerosol type.

The National Aeronautics and Space Administration (NASA) and the American Society for Engineering Education (ASEE) have sponsored faculty fellowship programs in systems engineering design for the past several years. During the summer of 1972 four such programs were conducted by NASA, with Auburn University cooperating with Marshall Space Flight Center (MSFC). The subject for the Auburn-MSFC design group was ERISTAR, an acronym for Earth Resources Information Storage, Transformation, Analysis and Retrieval, which represents an earth resources information management network of state information centers administered by the respective states and linked to federally administered regional centers and a national center. The considerations for serving the users and the considerations that must be given to processing data from a variety of sources are described. The combination of these elements into a national network is discussed and an implementation plan is proposed for a prototype state information center. The compatibility of the proposed plan with the Department of Interior plan, RALI, is indicated.

The main thrust of this paper is the application of a novel data mining approach to logs of user feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining, and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and help to eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weight. The authors proposed an improved Bayesian algorithm for data mining. Experiments showed that the proposed algorithm was efficient.

We propose a quantum solution to the classical private information retrieval (PIR) problem, which allows one to query a database in a private manner. The protocol offers privacy thresholds and allows the user to obtain information from a database in a way that offers the potential adversary, in this model the database owner, no possibility of deterministically establishing the query contents. This protocol may also be viewed as a solution to the symmetrically private information retrieval problem in that it can offer database security (inability for a querying user to steal its contents). Compared to classical solutions, the protocol offers substantial improvement in terms of communication complexity. In comparison with the recent quantum private queries [Phys. Rev. Lett. 100, 230502 (2008)] protocol, it is more efficient in terms of communication complexity and the number of rounds, while offering a clear privacy parameter. We discuss the security of the protocol and analyze its strengths and conclude that using this technique makes it challenging to obtain the unconditional (in the information-theoretic sense) privacy degree; nevertheless, in addition to being simple, the protocol still offers a privacy level. The oracle used in the protocol is inspired both by the classical computational PIR solutions as well as the Deutsch-Jozsa oracle.
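The classical PIR constructions that inspired the oracle can be illustrated with the textbook two-server XOR scheme (background only, not the quantum protocol itself): the user sends a random index subset to one server and the same subset with the target index toggled to the other; XORing the two one-bit answers recovers the target bit, while each server alone sees only a uniformly random subset.

```python
import secrets

def xor_pir_query(n, i):
    """Two-server information-theoretic PIR query for an n-bit database.

    Server 1 receives a uniformly random subset s1 of indices; server 2
    receives s1 with index i toggled. Each subset alone is uniformly
    random, so neither server learns which bit the user wants.
    """
    s1 = {j for j in range(n) if secrets.randbits(1)}
    s2 = s1 ^ {i}            # symmetric difference toggles index i
    return s1, s2

def server_answer(db, subset):
    """Each server returns the XOR of the requested database bits."""
    ans = 0
    for j in subset:
        ans ^= db[j]
    return ans

db = [1, 0, 1, 1, 0, 0, 1, 0]
i = 5
s1, s2 = xor_pir_query(len(db), i)
# XOR over s1 combined with XOR over s2 cancels everything except bit i.
bit = server_answer(db, s1) ^ server_answer(db, s2)
assert bit == db[i]
```

The scheme is information-theoretically private only if the two servers do not collude, which is exactly the limitation the quantum single-database protocols aim to avoid.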

A management information system was developed for the Contra Costa County, California, Department of Education's Educational Information Retrieval Center. The system was designed to determine needed operational changes, to measure the effects of these changes, to monitor the center's operation, and to obtain information for dissemination. Data…

Purpose: The purpose of this paper is to introduce the concept of human information behaviour and to explore the relationship between information behaviour of users and the existing approaches dominating design and evaluation of information retrieval (IR) systems and also to describe briefly new design and evaluation methods in which extensive…

The lack of a nursing thesaurus in Finnish has been felt by nursing professionals searching for nursing knowledge and by librarians indexing literature for databases. The Finnish Nursing Education Society launched a project focusing on the development of a nursing vocabulary and the compilation of a thesaurus. The content of the vocabulary was created by six experts using the Delphi technique. The validity of the vocabulary was tested twice for indexing nursing research and has since been revised. The vocabulary can be used for indexing and information retrieval purposes. The main challenge is ensuring that nurses can easily find national as well as international nursing research in databases and enhance research utilization.

Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
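A minimal sketch of the scoring pipeline described above: candidate phrases are runs of content words split at delimiters and stop words; each word is scored as co-occurrence degree divided by frequency; a candidate's keyword score is the sum of its word scores. The stop-word list is an invented toy; the patent covers the general method, not this exact code.

```python
import re
from collections import defaultdict

# Toy stop-word list for illustration only.
STOP_WORDS = {"a", "an", "and", "the", "of", "in", "on", "for",
              "to", "with", "over", "is", "are"}

def rake(text, stop_words=STOP_WORDS):
    """Sketch of rapid automatic keyword extraction (RAKE-style scoring)."""
    # 1. Candidate phrases: runs of content words between delimiters/stop words.
    phrases = []
    for fragment in re.split(r"[.,;:!?()\n]", text.lower()):
        current = []
        for w in re.findall(r"[a-z]+", fragment):
            if w in stop_words:
                if current:
                    phrases.append(current)
                current = []
            else:
                current.append(w)
        if current:
            phrases.append(current)

    # 2. Word scores: co-occurrence degree divided by frequency.
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)   # words co-occurring with w in this phrase
    word_score = {w: degree[w] / freq[w] for w in freq}

    # 3. Keyword score: sum of member word scores, highest first.
    scored = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(scored.items(), key=lambda kv: -kv[1])

print(rake("Compatibility of systems of linear constraints over the set of natural numbers")[0])
# ('linear constraints', 4.0)
```

Multi-word phrases score higher than single frequent words because the degree of each member word counts all of its phrase-mates, which is the heart of the scheme.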

The items presented in this compilation are divided into two sections. Section one treats computer usage devoted to the retrieval of information, affording the user rapid entry into voluminous collections of data on a selective basis. Section two is a more generalized collection of computer options for the user who needs to take such data and reduce them to an analytical study within a specific discipline. These programs, routines, and subroutines should prove useful to users who do not have access to more sophisticated and expensive computer software.

Reducing custom software development effort is an important goal in information retrieval (IR). This study evaluated a generalizable approach involving no custom software or rules development. The study used documents “consistent with cancer” to evaluate system performance in the domains of colorectal (CRC), prostate (PC), and lung (LC) cancer. Using an end-user-supplied reference set, the automated retrieval console (ARC) iteratively calculated the performance of combinations of natural language processing-derived features and supervised classification algorithms. Training and testing involved 10-fold cross-validation for three sets of 500 documents each. Performance metrics included recall, precision, and F-measure. Annotation time for five physicians was also measured. Top-performing algorithms had recall, precision, and F-measure values as follows: for CRC, 0.90, 0.92, and 0.89, respectively; for PC, 0.97, 0.95, and 0.94; and for LC, 0.76, 0.80, and 0.75. In all but one case, conditional random fields outperformed maximum entropy-based classifiers. Algorithms had good performance without custom code or rules development, but performance varied by specific application. PMID:20595303
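The reported metrics are the standard ones. For reference, a sketch of how precision, recall, and the balanced F-measure are computed from confusion counts (the counts below are invented, not taken from the study):

```python
def prf(tp, fp, fn):
    """Precision, recall, and balanced F-measure from confusion counts."""
    precision = tp / (tp + fp)          # fraction of retrieved docs that are relevant
    recall = tp / (tp + fn)             # fraction of relevant docs that are retrieved
    f = 2 * precision * recall / (precision + recall)   # harmonic mean
    return precision, recall, f

# e.g. 90 true positives, 8 false positives, 10 false negatives:
p, r, f = prf(90, 8, 10)
```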

One of the goals of the Advanced Fuel Cycle Initiative (AFCI) is to preserve the knowledge that has been gained in the United States on Liquid Metal Reactors (LMR). In addition, preserving LMR information and knowledge is part of a larger international collaborative activity conducted under the auspices of the International Atomic Energy Agency (IAEA). A similar program is being conducted for EBR-II at the Idaho National Laboratory (INL), and international programs are also in progress. Knowledge preservation at the FFTF is focused on the areas of design, construction, startup, and operation of the reactor. As the primary function of the FFTF was testing, the focus is also on preserving information obtained from irradiation testing of fuels and materials. This information will be invaluable when, at a later date, international decisions are made to pursue new LMRs. In the interim, this information may be of potential use for international exchanges with other LMR programs around the world. At least as important, in the United States, which is emphasizing large-scale computer simulation and modeling, this information provides the basis for creating benchmarks for validating and testing these large-scale computer programs. Although the preservation activity with respect to FFTF information as discussed below is still underway, the authors are currently retrieving and providing experimental and design information to the LMR modeling and simulation efforts for use in validating their computer models. On the Hanford Site, the FFTF reactor plant is one of the facilities intended for decontamination and decommissioning consistent with the cleanup mission on this site. The reactor facility has been deactivated and is being maintained in a cold and dark minimal surveillance and maintenance mode until final decommissioning is pursued. In order to ensure protection of information at risk, the program to date has focused on sequestering and secure retrieval

A new information storage and retrieval system has been developed for the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory to replace mainframe systems that have become obsolete. The database contains citations and abstracts of literature which were selected by RSIC analysts and indexed with terms from a controlled vocabulary. The database, begun in 1963, has been maintained continuously since that time. The new system, called JANE, incorporates automatic indexing techniques and on-line retrieval using the RSIC Data General Eclipse MV/4000 minicomputer. Automatic indexing and retrieval techniques based on fuzzy-set theory allow the presentation of results in order of Retrieval Status Value. The fuzzy-set membership function depends on term frequency in the titles and abstracts and on Term Discrimination Values which indicate the resolving power of the individual terms. These values are determined by the Cover Coefficient method. The use of a commercial database to store and retrieve the indexing information permits rapid retrieval of the stored documents. Comparisons of the new and currently used systems for actual searches of the literature indicate that it is practical to replace the mainframe systems with a minicomputer system similar to the present version of JANE. 18 refs., 10 figs.
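The fuzzy ranking idea (per-term memberships derived from term frequency and term discrimination, combined into a Retrieval Status Value) can be sketched loosely as follows. The membership formula and the discrimination values below are simplified placeholders, not JANE's actual formulas, and the Cover Coefficient computation is not reproduced.

```python
def fuzzy_rsv(query_terms, doc_terms, disc):
    """Retrieval Status Value as a fuzzy union of per-term memberships.

    Membership of a document in a term's fuzzy set is modeled as relative
    term frequency scaled by a term discrimination value, clipped to [0, 1].
    Illustrative placeholder formulas, not JANE's actual ones.
    """
    rsv = 0.0
    for t in query_terms:
        tf = doc_terms.count(t) / max(len(doc_terms), 1)
        mu = min(1.0, tf * disc.get(t, 0.0))
        rsv = rsv + mu - rsv * mu   # algebraic-sum fuzzy union (OR)
    return rsv

docs = {
    "d1": "shielding neutron shielding code".split(),
    "d2": "reactor physics overview".split(),
}
disc = {"shielding": 5.0, "neutron": 3.0}   # high value = discriminating term
ranked = sorted(docs, key=lambda d: -fuzzy_rsv(["shielding", "neutron"], docs[d], disc))
print(ranked)  # d1 contains the query terms, so it ranks first
```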

Discusses the importance of research on the use of mathematical, logical, and formal methods in information retrieval to help enhance retrieval effectiveness and clarify underlying concepts of information retrieval. Highlights include logic; probability; spaces; and future research needs. (Author/LRW)

Describes initial design considerations and implementation issues for a hypertext retrieval system for structured bibliographic data. Previous research on information retrieval and hypertext is reviewed; HyperLynx, a hypertext prototype information retrieval system that emphasized ease of use and browsing capabilities, is explained; and further…

Discusses research into chemical information and document retrieval systems at the University of Sheffield. Highlights include the use of cluster analysis methods for document retrieval and drug design, representation and searching of files of generic chemical structures, and the application of parallel computer hardware to information retrieval.…

Outlines an approach to information retrieval which integrates the existing theory of probabilistic retrieval into a practical methodology based on Boolean searches. Basic concepts, search methodology, and examples of Boolean searching are noted. Twenty-six sources are appended. (EJS)

An on-line, terminal-oriented data storage and retrieval system is presented which allows a user to extract and process information from stored data bases. The use of on-line terminals for extracting and displaying data from the data bases provides a fast and responsive method for obtaining needed information. The system consists of general-purpose computer programs that together provide its overall capabilities. The system can process any number of data files via a Dictionary (one for each file) which describes the data format to the system. New files may be added to the system at any time, and reprogramming is not required. Illustrations of the system are shown, and sample inquiries and responses are given.
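The dictionary-driven design can be sketched as follows. The field names and fixed-width layout are invented for illustration, since the report does not specify the Dictionary format; the point is that adding a new file means supplying a new dictionary, not new code.

```python
# Hypothetical Dictionary: each entry is (field name, start column, width).
DICTIONARY = [
    ("doc_id",  0,  6),
    ("year",    6,  4),
    ("title",  10, 20),
]

def parse_record(line, dictionary):
    """Slice a fixed-width record into named fields as the dictionary directs."""
    return {name: line[start:start + width].strip()
            for name, start, width in dictionary}

line = "D00042" + "1972" + "ERISTAR overview".ljust(20)
record = parse_record(line, DICTIONARY)
print(record)  # {'doc_id': 'D00042', 'year': '1972', 'title': 'ERISTAR overview'}
```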

Establishing open, unified, seamless access and ad hoc analytics on cross-disciplinary, multi-source, multi-dimensional, spatiotemporal Earth Science data of extreme size, along with their supporting metadata, are the main challenges of the EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme. One of EarthServer's main objectives is to provide users with higher-level coverage and metadata search, retrieval, and processing capabilities for multi-disciplinary Earth Science data. Six Lighthouse Applications are being established, each one providing access to Cryospheric, Airborne, Atmospheric, Geology, Oceanography, and Planetary science raster data repositories through strictly WCS 2.0 standard-based service endpoints. EarthServer's information retrieval subsystem aims to exploit the WCS endpoints through a physically and logically distributed service-oriented architecture, foreseeing the collaboration of several standard-compliant services capable of exploiting modern large grid and cloud infrastructures and of dynamically responding to the availability and capabilities of underlying resources. Towards furthering technology for integrated, coherent service provision based on WCS and WCPS, the concept of a query language (QL) unifying coverage and metadata processing and retrieval is introduced. EarthServer's information retrieval subsystem receives QL requests involving high volumes of all Earth Science data categories, executes them on the services that reside on the infrastructure, and sends the results back to the requester through a high-performance pipeline. In this contribution we briefly discuss EarthServer's service-oriented coverage data and metadata search and retrieval architecture and further elaborate on the potential of EarthServer's query language, called xWCPS (XQuery-compliant WCPS). xWCPS aims to merge the paths that the two widely adopted standards (W3C XQuery, OGC WCPS) have paved, into a

Background On-line biodiversity information databases are growing quickly and being integrated into general bioinformatics systems due to advances in fast gene-sequencing technologies and the Internet. These can reduce the cost and effort of performing biodiversity surveys and genetic searches, which allows scientists to spend more time researching and less time collecting and maintaining data. This will increase the rate of knowledge build-up and improve conservation efforts. The biodiversity databases in Korea have been scattered among several institutes and local natural history museums with incompatible data types. Therefore, a comprehensive database and a nationwide web portal for biodiversity information are necessary in order to integrate diverse information resources, including molecular and genomic databases. Results The Korean Natural History Research Information System (NARIS) was built and serviced as the central biodiversity information system to collect and integrate the biodiversity data of various institutes and natural history museums in Korea. This database aims to be an integrated resource that contains additional biological information, such as genome sequences and molecular-level diversity. Currently, twelve institutes and museums in Korea are integrated via the DiGIR (Distributed Generic Information Retrieval) protocol, with the Darwin Core 2.0 format as its metadata standard for data exchange. Data quality control and statistical analysis functions have been implemented. In particular, integrating molecular and genetic information from the National Center for Biotechnology Information (NCBI) databases with NARIS was recently accomplished. NARIS can also be extended to accommodate other institutes abroad, and the whole system can be exported to establish local biodiversity management servers. Conclusion A Korean data portal, NARIS, has been developed to efficiently manage and utilize biodiversity data, including genetic resources. NARIS aims

Online information retrieval in pharmacy and related fields is described. Factors involved in determining whether to conduct an online search are discussed, including characteristics of appropriate and less suitable topics, advantages and limitations of online searching versus manual searching, and possible types of searches. The process of preparing for an online search, involving the determination of search vocabulary, relevant citations, important authors, time frame, special categories (such as language, publication type, and reviews), and the number of citations needed, as well as choosing a database, is explained. Sample search strategies on MEDLINE and IPA are illustrated to demonstrate the basic search commands and to compare file retrievals on the sample subject. Pharmacy-related bibliographic databases, general-interest databases, end-user search services, and full-text and numeric databases are profiled. Online database searching can be a cost-efficient and flexible alternative to manual literature searching for pharmacists. Although most online searching is currently conducted by librarian-search specialists, end-user searching is a growing trend, as is the availability of full-text databases.

Describes a simulation method for estimating recall and fallout in a document retrieval system. Earlier research on simulating document retrieval systems is reviewed, examples are presented of the current method, a probabilistic justification of the method is given, theoretical concerns dealing with retrieval precision are discussed, and further…

... Other Information; Public Workshop; Request for Comments AGENCY: Food and Drug Administration, HHS... the potential electronic submission of tobacco product applications and other information. This... input from regulated industry and other stakeholders and interested parties on the potential...

Written in German, this report summarizes a workshop on teaching and research activities in information science that was held at the City University, London, and attended by faculty and students from the university's Department of Information Science and H.-R. Simon of the GID (Gesellschaft für Information und Dokumentation), Frankfurt am Main,…

Universal blind quantum computation (UBQC) is a new secure quantum computing protocol which allows a user, Alice, who does not have any sophisticated quantum technology to delegate her computing to a server, Bob, without leaking any privacy. Using the features of UBQC, we propose a protocol to achieve symmetrically private information retrieval, which allows a quantum-limited Alice to query an item from Bob, who has a fully fledged quantum computer; meanwhile, the privacy of both parties is preserved. The security of our protocol is based on the assumption that the malicious Alice has no quantum computer, which avoids the impossibility proof of Lo. An honest Alice is almost classical and requires only minimal quantum resources to carry out the proposed protocol. Therefore, she does not need any expensive laboratory that can maintain the coherence of complicated quantum experimental setups.

Describes research conducted at the TREC (Text Retrieval Conference) interactive track that compared Boolean and natural language searching, showing they achieved comparable results; and assessed the validity of batch-oriented retrieval evaluations, showing that the results from batch evaluations were not comparable to those obtained in…

A standard approach to cross-language information retrieval (CLIR) uses Latent Semantic Analysis (LSA) in conjunction with a multilingual parallel aligned corpus. This approach has been shown to be successful in identifying similar documents across languages - or more precisely, retrieving the most similar document in one language to a query in another language. However, the approach has severe drawbacks when applied to a related task, that of clustering documents 'language-independently', so that documents about similar topics end up closest to one another in the semantic space regardless of their language. The problem is that documents are generally more similar to other documents in the same language than they are to documents in a different language, but on the same topic. As a result, when using multilingual LSA, documents will in practice cluster by language, not by topic. We propose a novel application of PARAFAC2 (which is a variant of PARAFAC, a multi-way generalization of the singular value decomposition [SVD]) to overcome this problem. Instead of forming a single multilingual term-by-document matrix which, under LSA, is subjected to SVD, we form an irregular three-way array, each slice of which is a separate term-by-document matrix for a single language in the parallel corpus. The goal is to compute an SVD for each language such that V (the matrix of right singular vectors) is the same across all languages. Effectively, PARAFAC2 imposes the constraint, not present in standard LSA, that the 'concepts' in all documents in the parallel corpus are the same regardless of language. Intuitively, this constraint makes sense, since the whole purpose of using a parallel corpus is that exactly the same concepts are expressed in the translations. We tested this approach by comparing the performance of PARAFAC2 with standard LSA in solving a particular CLIR problem. From our results, we conclude that PARAFAC2 offers a very promising alternative to LSA not only for
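The standard multilingual-LSA baseline that the paper starts from can be sketched with a single SVD over a stacked bilingual term-by-document matrix. The toy counts below are invented; PARAFAC2 itself, which fits one matrix per language under a shared V, is not implemented here.

```python
import numpy as np

# Toy term-by-document matrix for a 3-document parallel corpus; rows are
# English terms followed by their Spanish translations (counts invented).
terms = ["cat", "dog", "fish", "gato", "perro", "pez"]
X = np.array([
    [3, 0, 1],   # cat
    [0, 2, 0],   # dog
    [1, 0, 2],   # fish
    [3, 0, 1],   # gato
    [0, 2, 0],   # perro
    [1, 0, 2],   # pez
], dtype=float)

# Standard multilingual LSA: one SVD over the stacked bilingual matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]     # term positions in the k-dim latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def sim(t1, t2):
    return cosine(term_vecs[terms.index(t1)], term_vecs[terms.index(t2)])

# Translation pairs land on top of each other; unrelated terms do not.
print(sim("gato", "cat"), sim("gato", "dog"))
```

Because translation pairs have identical rows here, they coincide exactly in the latent space; with real corpora they only come close, and, as the abstract notes, same-language documents tend to cluster together, which is the failure mode PARAFAC2 targets.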

Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for batch query searching, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected-component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41% and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases. PMID:27875548
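The pruning idea (compare the query with cluster representatives first, then scan only the winning cluster) can be sketched generically. Feature vectors and Euclidean distance stand in for profile-HMMs and alignment scores, and the overlapping-clusters extension is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for HMM scoring: "profiles" are feature vectors in
# three well-separated families, and similarity is negative Euclidean
# distance. Only the cluster-based pruning idea is sketched, not HMM math.
profiles = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(50, 8)) for c in (-3.0, 0.0, 3.0)
])
labels = np.repeat([0, 1, 2], 50)                      # cluster assignments
centroids = np.vstack([profiles[labels == j].mean(0)   # one representative
                       for j in range(3)])             # per cluster

def cluster_search(query):
    """Score the query against representatives first, then scan only the
    winning cluster, a fraction of the full database."""
    best = np.argmin(((centroids - query) ** 2).sum(-1))
    members = np.flatnonzero(labels == best)
    dists = ((profiles[members] - query) ** 2).sum(-1)
    return int(members[np.argmin(dists)])

# A query drawn near the third family is found by scanning ~1/3 of the data,
# and here it matches the brute-force answer over all 150 profiles.
query = rng.normal(loc=3.0, scale=0.3, size=8)
assert cluster_search(query) == int(np.argmin(((profiles - query) ** 2).sum(-1)))
```

When the true nearest profile sits near a cluster boundary this pruning can miss it, which is exactly why the paper adds overlapping clusters to recover sensitivity.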

We investigated the role of lexical syntactic information such as grammatical gender and category in spoken word retrieval processes by using a blocking paradigm in picture and written word naming experiments. In Experiments 1, 3, and 4, we found that the naming of target words (nouns) from pictures or written words was faster when these target words were named within a list where only words from the same grammatical category had to be produced (homogeneous category list: all nouns) than when they had to be produced within a list comprising also words from another grammatical category (heterogeneous category list: nouns and verbs). On the other hand, we detected no significant facilitation effect when the target words had to be named within a homogeneous gender list (all masculine nouns) compared to a heterogeneous gender list (both masculine and feminine nouns). In Experiment 2, using the same blocking paradigm by manipulating the semantic category of the items, we found that naming latencies were significantly slower in the semantic category homogeneous in comparison with the semantic category heterogeneous condition. Thus semantic category homogeneity caused an interference, not a facilitation effect like grammatical category homogeneity. Finally, in Experiment 5, nouns in the heterogeneous category condition had to be named just after a verb (category-switching position) or a noun (same-category position). We found a facilitation effect of category homogeneity but no significant effect of position, which showed that the effect of category homogeneity found in Experiments 1, 3, and 4 was not due to a cost of switching between grammatical categories in the heterogeneous grammatical category list. These findings supported the hypothesis that grammatical category information impacts word retrieval processes in speech production, even when words are to be produced in isolation. They are discussed within the context of extant theories of lexical production. PMID

Assesses the promotion and development of online information retrieval in China. Highlights include opening of the first online retrieval center at China Overseas Building Development Company Limited; establishment and activities of a cooperative network; online retrieval seminars; telecommunication lines and terminal installations; and problems…

Four papers are included in Part One of the eighteenth report on Salton's Magical Automatic Retriever of Texts (SMART) project. The first paper: "Content Analysis in Information Retrieval" by S. F. Weiss presents the results of experiments aimed at determining the conditions under which content analysis improves retrieval results as well…

The use of information retrieval (IR) systems is evolving towards larger, more complicated queries. Both the IR industrial and research communities have generated significant evidence indicating that in order to continue improving retrieval effectiveness, increases in retrieval model complexity may be unavoidable. From an operational perspective,…

Presents an approach to a Genetic Information Retrieval Agent Filter (GIRAF) that filters and ranks documents retrieved from the Internet according to users' preferences by using a Genetic Algorithm and fuzzy set theory to handle the imprecision of users' preferences and users' evaluation of the retrieved documents. (Author/LRW)

Describes how operations on local inverted files are to be modified in order to use them in distributed information retrieval systems based on thesauri. The presented rules may be viewed as the logical approach in implementing a distributed retrieval system consisting of n local retrieval systems. (Author/MBR)
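The basic shape of such a system, n local inverted files whose posting lists are merged by a coordinating layer, can be sketched as follows. The paper's thesaurus-based modification rules are not reproduced; this shows only the simplest broadcast-and-merge rule.

```python
from collections import defaultdict

class LocalIndex:
    """One of n local retrieval systems holding its own inverted file."""
    def __init__(self, docs):
        self.index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in text.split():
                self.index[term].add(doc_id)

    def postings(self, term):
        return self.index.get(term, set())

class DistributedIndex:
    """Broadcasts each term to every local index and merges the postings.

    Real systems route terms via a thesaurus/term-to-site mapping; querying
    all sites is simply the most basic correct rule.
    """
    def __init__(self, local_systems):
        self.local_systems = local_systems

    def search(self, *terms):
        # AND semantics: intersect the merged posting list of each term.
        result = None
        for term in terms:
            merged = set().union(*(s.postings(term) for s in self.local_systems))
            result = merged if result is None else result & merged
        return result or set()

site_a = LocalIndex({"a1": "fuzzy retrieval systems", "a2": "thesauri and retrieval"})
site_b = LocalIndex({"b1": "distributed retrieval systems"})
dist = DistributedIndex([site_a, site_b])
print(sorted(dist.search("retrieval", "systems")))  # ['a1', 'b1']
```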

In extant quantum secret sharing protocols, once the secret is shared in a quantum network (qnet) it cannot be retrieved, even if the dealer wishes that his/her secret no longer be available in the network. For instance, if the dealer is part of two qnets, say Q1 and Q2, and subsequently finds that Q2 is more reliable than Q1, he/she may wish to transfer all his/her secrets from Q1 to Q2. Known protocols are inadequate to address such a revocation. In this work we address this problem by designing a protocol that enables the source/dealer to bring back the information shared in the network, if desired. Unlike classical revocation, the no-cloning theorem automatically ensures that the secret is no longer shared in the network. The implications of our results are multi-fold. One interesting implication of our technique is the possibility of routing qubits in asynchronous qnets. By asynchrony we mean that the requisite data/resources are intermittently available (but not necessarily simultaneously) in the qnet. For example, we show that a source S can send quantum information to a destination R even though (a) S and R share no quantum resource, (b) R's identity is unknown to S at the time of sending the message, but is subsequently decided, (c) S herself can be R at a later date and/or in a different location to bequeath her information ('backed up' in the qnet) and (d) importantly, the path chosen for routing the secret may hit a dead end due to resource constraints, congestion, etc. (therefore the information needs to be back-tracked and sent along an alternate path). Another implication of our technique is the possibility of using insecure resources. For instance, if the quantum memory within an organization is insufficient, it may safely store (using our protocol) its private information with a neighboring organization without (a) revealing critical data to the host and (b) losing control over retrieving the data. Putting the

... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF THE INTERIOR Office of the Secretary Vendor Outreach Workshop for Small Information Technology (IT) Businesses in the... hosting a Vendor Outreach Workshop for small IT businesses in the National Capitol region of the...

Discusses the test collections developed in the TREC (Text REtrieval Conference) workshops for information retrieval research and describes a study by NIST (National Institute of Standards and Technology) that verified their reliability by investigating the effect changes in the relevance assessments have on the evaluation of retrieval results.…

Recently considerable attention has been given in the online informationretrieval literature to techniques for producing a weighted output of documents in response to a request. One approach tries to maintain the form of and relationships among requests as they appear in current Boolean logic-based systems, while extending it to permit a weighted…
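A weighted extension of Boolean-style matching of the kind this record discusses can be sketched as follows. This is a minimal illustration with hypothetical function names and data, not the system described in the record:

```python
def weighted_query_score(doc_terms, query_weights):
    """Score a document by the fraction of total query weight its terms
    match; weights express the relative importance of each query term."""
    total = sum(query_weights.values())
    if total == 0:
        return 0.0
    matched = sum(w for term, w in query_weights.items() if term in doc_terms)
    return matched / total

# Hypothetical documents (as term sets) and a weighted query.
docs = [
    {"information", "retrieval", "boolean"},
    {"weighted", "output", "request"},
]
query = {"information": 1.0, "retrieval": 0.8, "weighted": 0.3}

# Sorting by score produces a weighted (ranked) output rather than a
# strict Boolean match/no-match partition of the collection.
ranked = sorted(docs, key=lambda d: weighted_query_score(d, query), reverse=True)
```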

centralized index. The harvested files are consistently indexed with the SOLR search API, providing simple, fielded, spatial, and temporal searches across projects spanning land, atmosphere, and ocean ecology. Mercury also provides data sharing capabilities using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). In this paper we discuss best practices for archiving data and metadata, new searching techniques, efficient ways of data retrieval, and information display.
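The OAI-PMH harvesting mentioned above is driven by simple HTTP requests; a minimal sketch of building one (the endpoint is a placeholder, not Mercury's actual base URL, while `verb`, `metadataPrefix`, `from`, and `until` are standard OAI-PMH parameters):

```python
from urllib.parse import urlencode

def oai_pmh_request_url(base_url, verb="ListRecords", metadata_prefix="oai_dc",
                        from_date=None, until_date=None):
    """Build an OAI-PMH harvesting request URL from standard
    protocol query parameters."""
    params = {"verb": verb, "metadataPrefix": metadata_prefix}
    if from_date:
        params["from"] = from_date      # selective (incremental) harvesting
    if until_date:
        params["until"] = until_date
    return f"{base_url}?{urlencode(params)}"

# Placeholder endpoint for illustration only.
url = oai_pmh_request_url("https://example.org/oai", from_date="2009-01-01")
```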

Introduction: Research and theory on the topics of information seeking and retrieval have been plagued by some fundamental problems for several decades. Many of the difficulties spring from mechanistic and instrumental thinking and modelling. Method: Existing models of information retrieval and information seeking are examined for efficacy in a…

Investigated adult age differences in accessing and retrieving information from long-term memory. Results showed that older adults (N=26) were slower than younger adults (N=35) at feature extraction, lexical access, and accessing category information. The age deficit was proportionally greater when retrieval of category information was required.…

Content-based visual information (or image) retrieval (CBIR) has been an extremely active research domain within medical imaging over the past ten years, with the goal of improving the management of visual medical information. Many technical solutions have been proposed, and application scenarios for image retrieval as well as image classification have been set up. However, in contrast to medical information retrieval using textual methods, visual retrieval has only rarely been applied in clinical practice. This is despite the large amount and variety of visual information produced in hospitals every day. This information overload imposes a significant burden upon clinicians, and CBIR technologies have the potential to help the situation. However, in order for CBIR to become an accepted clinical tool, it must demonstrate a higher level of technical maturity than it has to date. Since 2004, the ImageCLEF benchmark has included a task for the comparison of visual information retrieval algorithms for medical applications. In 2005, a task for medical image classification was introduced and both tasks have been run successfully for the past four years. These benchmarks allow an annual comparison of visual retrieval techniques based on the same data sets and the same query tasks, enabling the meaningful comparison of various retrieval techniques. The datasets used from 2004-2007 contained images and annotations from medical teaching files. In 2008, however, the dataset used was made up of 67,000 images (along with their associated figure captions and the full text of their corresponding articles) from two Radiological Society of North America (RSNA) scientific journals. This article describes the results of the medical image retrieval task of the ImageCLEF 2008 evaluation campaign. We compare the retrieval results of both visual and textual information retrieval systems from 15 research groups on the aforementioned data set. The results show clearly that, currently

Most of the essential information contained in Electronic Medical Records is stored as text, imposing several difficulties on automated data extraction and retrieval. Natural language processing is an approach that can unlock clinical information from free texts. The proposed methodology uses the specialized natural language processor MEDLEE, developed for the English language. To use this processor on Portuguese medical texts, chest x-ray reports were machine translated into English. The result of the serial coupling of MT and NLP is tagged text, which needs further investigation for extracting clinical findings. The objective of this experiment was to investigate normal reports and reports with device description on a set of 165 chest x-ray reports. We obtained sensitivity and specificity of 1 and 0.71 for the first condition and 0.97 and 0.97 for the second, respectively. The reference was formed by the opinion of two radiologists. The results of this experiment indicate the viability of extracting clinical findings from chest x-ray reports through coupling MT and NLP. PMID:17911745
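Sensitivity and specificity figures like those reported above are derived from confusion-matrix counts against the reference standard; a minimal sketch (the counts used below are illustrative, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from confusion-matrix counts."""
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    return sens, spec

# Illustrative counts only -- not the 165-report experiment's data.
sens, spec = sensitivity_specificity(tp=97, fn=3, tn=97, fp=3)
```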

Stored memories enter a temporary state of vulnerability following retrieval known as ‘reconsolidation’, a process that can allow memories to be modified to incorporate new information. Although reconsolidation has become an attractive target for treatment of memories related to traumatic past experiences, we still do not know what new information triggers the updating of retrieved memories. Here, we used biochemical markers of synaptic plasticity in combination with a novel behavioral procedure to determine what was learned during memory reconsolidation under normal retrieval conditions. We eliminated new information during retrieval by manipulating animals' training experience and measured changes in proteasome activity and GluR2 expression in the amygdala, two established markers of fear memory lability and reconsolidation. We found that eliminating new contextual information during the retrieval of memories for predictable and unpredictable fear associations prevented changes in proteasome activity and glutamate receptor expression in the amygdala, indicating that this new information drives the reconsolidation of both predictable and unpredictable fear associations on retrieval. Consistent with this, eliminating new contextual information prior to retrieval prevented the memory-impairing effects of protein synthesis inhibitors following retrieval. These results indicate that under normal conditions, reconsolidation updates memories by incorporating new contextual information into the memory trace. Collectively, these results suggest that controlling contextual information present during retrieval may be a useful strategy for improving reconsolidation-based treatments of traumatic memories associated with anxiety disorders such as post-traumatic stress disorder. PMID:26062788

Discussion of problems with fuzzy subsets in document retrieval highlights attempts to invent a system of weighted fuzzy queries in which weights correspond to relative importance of each term in query as whole, and use of Kantor's Logic for Retrieval as an alternative to Boolean queries. Six references are cited. (EJS)

The vision of the Semantic Web is to build a global Web of machine-readable data to be consumed by intelligent applications. As the first step to make this vision come true, the initiative of linked open data has fostered many novel applications aimed at improving data accessibility in the public Web. Comparably, the enterprise environment is so different from the public Web that most potentially usable business information originates in an unstructured form (typically in free text), which poses a challenge for the adoption of semantic technologies in the enterprise environment. Considering that the business information in a company is highly specific and centred around a set of commonly used concepts, this paper describes a pilot study to migrate the concept of linked data into the development of a domain-specific application, i.e. the vehicle repair support system. The set of commonly used concepts, including the part name of a car and the phenomenon term on the car repairing, are employed to build the linkage between data and documents distributed among different sources, leading to the fusion of documents and data across source boundaries. Then, we describe the approaches of semantic information retrieval to consume these linkages for value creation for companies. The experiments on two real-world data sets show that the proposed approaches outperform the best baseline by 6.3-10.8% and 6.4-11.1% in terms of top-five and top-10 precisions, respectively. We believe that our pilot study can serve as an important reference for the development of similar semantic applications in an enterprise environment.
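Top-k precision figures like those reported above can be computed as follows; a minimal sketch with hypothetical document IDs (result lists shorter than k are handled by dividing by the number actually retrieved):

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top = ranked_ids[:k]
    if not top:
        return 0.0
    return sum(1 for doc in top if doc in relevant_ids) / len(top)

# Hypothetical ranking and relevance judgments.
ranked = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d4"}
p5 = precision_at_k(ranked, relevant, 5)  # 2 of the top 5 are relevant
```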

One interesting issue in artificial intelligence (Al) currently is the relative merits of, and relationship between, the symbolic and connectionist approaches to intelligent systems building. The performance of more-traditional symbolic systems has been striking, but getting these systems to learn truly new symbols has proven difficult. Recently, some researchers have begun to explore a distinctly different type of representation, similar in some respects to the nerve nets of several decades past. In these massively parallel, connectionist models, symbols arise implicitly, through the interactions of many simple and subsymbolic elements. The work described here was done in two phases. The first phase concentrated on mapping the informationretrieval (IR) task into a connectionist network; it is shown that IR is very amendable to this representation. The second, more central phase of the research has shown that this network can also adapt. AIR translates the browsing behaviors of its users into a feedback signal used by a Hebbian-like local learning rule to change the weights on some links. Experience with a series of alternative learning rules are reported, and the results of experiments using human subjects to evaluate the results of AIR's learning are presented.
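A Hebbian-like local learning rule of the kind AIR is described as using can be sketched as below. The names and the feedback encoding are assumptions for illustration, not AIR's actual implementation:

```python
def hebbian_update(weight, pre, post, feedback, rate=0.1):
    """Hebbian-like local rule: strengthen a link when the nodes at both
    ends are co-active and user feedback is positive; weaken it when
    feedback is negative."""
    return weight + rate * feedback * pre * post

# A tiny network: term->document link weights (hypothetical).
weights = {("neural", "doc1"): 0.5}

# The user browsed doc1 after activating 'neural' and judged it
# relevant (feedback = +1), so the link is strengthened.
weights[("neural", "doc1")] = hebbian_update(
    weights[("neural", "doc1")], pre=1.0, post=1.0, feedback=+1)
```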

At the Lister Hill National Center for Biomedical Communications, a research and development division of the National Library of Medicine (NLM), we are developing a prototype multimedia database system to provide World Wide Web access to biomedical databases. WebMIRS (Web-based Medical InformationRetrieval System) will allow access to databases containing text and images and will allow database query by standard SQL, by image content, or by a combination of the two. The system is being developed in the form of Java applets, which will communicate with the Informix DBMS on an NLM Sun workstation running the Solaris operating system. The system architecture will allow access from any hardware platform, which supports a Java-enabled Web browser, such as Netscape or Internet Explorer. Initial databases will include data from two national health surveys conducted by the National Center for Health Statistics (NCHS), and will include x-ray images from those surveys. In addition to describing in- house research in database access systems, this paper describes ongoing work toward querying by image content. Image content search capability will include capability to search for x-ray images similar to an input image with respect to vertebral morphometry used to characterize features such as fractures and disc space narrowing.

Multiplicity distributions (MDs) P(N) measured in multiparticle production processes are most frequently described by the negative binomial distribution (NBD). However, with increasing collision energy some systematic discrepancies have become more and more apparent. They are usually attributed to the possible multi-source structure of the production process and described using a multi-NBD form of the MD. We investigate the possibility of keeping a single NBD but with its parameters depending on the multiplicity N. This is done by modifying the widely known clan model of particle production leading to the NBD form of P(N). This is then confronted with the approach based on the so-called cascade-stochastic formalism which is based on different types of recurrence relations defining P(N). We demonstrate that a combination of both approaches allows the retrieval of additional valuable information from the MDs, namely the oscillatory behavior of the counting statistics apparently visible in the high energy data.
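The NBD form of P(N) mentioned above, parameterized by the mean multiplicity ⟨N⟩ and shape parameter k, can be evaluated as follows. This is the standard single-NBD form, not the paper's modified clan model with N-dependent parameters:

```python
from math import lgamma, log, exp

def nbd_pmf(n, mean, k):
    """Negative binomial P(N=n) with mean <N> and shape k:
    P(N) = Gamma(N+k) / (Gamma(k) N!) * (mean/k)^N / (1+mean/k)^(N+k),
    evaluated in log space for numerical stability."""
    r = mean / k
    logp = (lgamma(n + k) - lgamma(k) - lgamma(n + 1)
            + n * log(r) - (n + k) * log(1.0 + r))
    return exp(logp)

# The probabilities sum to 1 and reproduce the requested mean.
probs = [nbd_pmf(n, 5.0, 2.0) for n in range(200)]
```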

A series of workshops was conducted in September, 1982, by staff of the Virginia Office, Mid-Atlantic District of the U.S. Geological Survey for the benefit of coal operators, consultants, and resource managers in the coal-producing area of southwest Virginia. The purpose of the workshop series was to present hydrologic information and data collected by the U.S. Geological Survey during four years of study of the coal hydrology of Virginia. The workshop series was held one day a week for three weeks at two locations within southwest Virginia in order to reduce travel time and overnight travel costs for participants. Results of an independently conducted questionnaire indicate the series was a success. The report summarizes the workshop preparation, organization, and presentation, and gives general conclusions and suggestions for those interested in conducting workshops.

Motivation Weighted semantic networks built from text-mined literature can be used to retrieve known protein-protein or gene-disease associations, and have been shown to anticipate associations years before they are explicitly stated in the literature. Our text-mining system recognizes over 640,000 biomedical concepts: some are specific (e.g., names of genes or proteins), others generic (e.g., ‘Homo sapiens’). Generic concepts may play important roles in automated information retrieval, extraction, and inference but may also result in concept overload and confound retrieval and reasoning with low-relevance or even spurious links. Here, we attempted to optimize the retrieval performance for protein-protein interactions (PPI) by filtering generic concepts (node filtering) or links to generic concepts (edge filtering) from a weighted semantic network. First, we defined metrics based on network properties that quantify the specificity of concepts. Then using these metrics, we systematically filtered generic information from the network while monitoring retrieval performance of known protein-protein interactions. We also systematically filtered specific information from the network (inverse filtering), and assessed the retrieval performance of networks composed of generic information alone. Results Filtering generic or specific information induced a two-phase response in retrieval performance: initially the effects of filtering were minimal, but beyond a critical threshold network performance suddenly drops. Contrary to expectations, networks composed exclusively of generic information demonstrated retrieval performance comparable to unfiltered networks that also contain specific concepts. Furthermore, an analysis using individual generic concepts demonstrated that they can effectively support the retrieval of known protein-protein interactions. For instance, the concept “binding” is indicative for PPI retrieval and the concept “mutation abnormality” is

When using information retrieval (IR) systems, users often pose short and ambiguous query terms. It is critical for IR systems to obtain a more accurate representation of users' information need, their document preferences, and the context they are working in, and then incorporate them into the design of the systems to tailor retrieval to…

Investigates the properties of a global model consisting of "n" local information retrieval systems based on thesauri. Definitions of a distributed information retrieval system (thesaurus, document set, set of queries) and proofs of theorems denoting further properties of the systems are highlighted. Five references are included. (EJS)

Discusses the inclusion of contextual information in indexing and retrieval systems to improve results and the ability to carry out text analysis by means of linguistic knowledge. Presents research that investigated whether discourse variables have an impact on information retrieval and classification algorithms. (Author/LRW)

Information storage medium comprising a semiconductor doped with first and second impurities or dopants. Preferably, one of the impurities is introduced by ion implantation. Conductive electrodes are photolithographically formed on the surface of the medium. Information is recorded on the medium by selectively applying a focused laser beam to discrete regions of the medium surface so as to anneal discrete regions of the medium containing lattice defects introduced by the ion-implanted impurity. Information is retrieved from the storage medium by applying a focused laser beam to annealed and non-annealed regions so as to produce a photovoltaic signal at each region.

We describe an entirely statistics-based, unsupervised, and language-independent approach to multilingual information retrieval, which we call Latent Morpho-Semantic Analysis (LMSA). LMSA overcomes some of the shortcomings of related previous approaches such as Latent Semantic Analysis (LSA). LMSA has an important theoretical advantage over LSA: it combines well-known techniques in a novel way to break the terms of LSA down into units which correspond more closely to morphemes. Thus, it has a particular appeal for use with morphologically complex languages such as Arabic. We show through empirical results that the theoretical advantages of LMSA can translate into significant gains in precision in multilingual information retrieval tests. These gains are not matched either when a standard stemmer is used with LSA, or when terms are indiscriminately broken down into n-grams.
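The sub-word decomposition that motivates LMSA can be loosely illustrated with character n-grams, a crude stand-in for morpheme-like units (LMSA itself combines such units with latent semantic analysis via SVD, which this sketch omits):

```python
from math import sqrt

def char_ngrams(term, n=3):
    """Break a term into overlapping character n-grams, padded with
    boundary markers -- a rough proxy for morpheme-like units."""
    padded = f"#{term}#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def ngram_vector(term):
    """Bag-of-n-grams vector for a term."""
    vec = {}
    for g in char_ngrams(term):
        vec[g] = vec.get(g, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Morphologically related terms share sub-word units even when
# whole-term matching would treat them as completely distinct.
sim = cosine(ngram_vector("retrieval"), ngram_vector("retrieving"))
```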

An existing experimental document and reference retrieval system operating in batch processing, tape oriented mode was converted to an on-line mode with user interaction. Objectives were faster response time, integration of functions, and system accessibility at user location. Effects on file organization, addressing techniques, file maintenance,…

One of the outstanding problems in data assimilation has been and continues to be how best to utilize satellite data while balancing the tradeoff between accuracy and computational cost. A number of weather prediction centers have recently achieved remarkable success in improving their forecast skill by changing the method by which satellite data are assimilated into the forecast model from the traditional approach of assimilating retrievals to the direct assimilation of radiances in a variational framework. The operational implementation of such a substantial change in methodology involves a great number of technical details, e.g., pertaining to quality control procedures, systematic error correction techniques, and tuning of the statistical parameters in the analysis algorithm. Although there are clear theoretical advantages to the direct radiance assimilation approach, it is not obvious at all to what extent the improvements that have been obtained so far can be attributed to the change in methodology, or to various technical aspects of the implementation. The issue is of interest because retrieval assimilation retains many practical and logistical advantages which may become even more significant in the near future when increasingly high-volume data sources become available. The central question we address here is: how much improvement can we expect from assimilating radiances rather than retrievals, all other things being equal? We compare the two approaches in a simplified one-dimensional theoretical framework, in which problems related to quality control and systematic error correction are conveniently absent. By assuming a perfect radiative transfer model and perfect knowledge of radiance and background error covariances, we are able to formulate a nonlinear local error analysis for each assimilation method. Direct radiance assimilation is optimal in this idealized context, while the traditional method of assimilating retrievals is suboptimal because it

An effective man-machine interactive retrieval system is not achieved by simply placing a terminal on each end of an existing machine retrieval...many of these needs was developed and tested. The objective of the development of this system, BROWSER, was to investigate the effectiveness of a free-form query with a combinatorial search algorithm and the effectiveness of various techniques and components to facilitate online browsing.

In fMRI analyses, the posterior parietal cortex (PPC) is particularly active during the successful retrieval of episodic memory. To delineate the neural correlates of episodic retrieval more succinctly, we compared retrieval of recently learned spatial locations (photographs of buildings) with retrieval of previously familiar locations (photographs of familiar campus buildings). Episodic retrieval of recently learned locations activated a circumscribed region within the ventral PPC (anterior angular gyrus and adjacent regions in the supramarginal gyrus) as well as medial PPC regions (posterior cingulate gyrus and posterior precuneus). Retrieval of familiar locations activated more posterior regions in the ventral PPC (posterior angular gyrus, LOC) and more anterior regions in the medial PPC (anterior precuneus and retrosplenial cortex). These dissociable effects define more precisely the PPC regions involved in the retrieval of recent, contextually bound information as opposed to regions involved in other processes, such as visual imagery, scene reconstruction, and self-referential processing.

Information retrieval has progressed from a reliance on traditional print sources to the modern era of computer databases and online networks. Surgeons, many from remote areas not served by professional medical libraries, must develop and maintain skills in information retrieval and management in both electronic and standard formats. One hundred thirty-three New Mexico general surgeons were surveyed to identify their information-seeking patterns in five areas: retrieval purposes, retrieval sources, barriers to access, techniques used, and continuing education needs. Ninety-nine (74.4%) surgeons responded to the survey. Ninety-five percent utilize professional meetings, the medical literature, and physician colleagues as information sources. Only 17% utilize the outreach services of the state's only medical school library. Common retrieval barriers were practice demands (71%), isolation from medical schools (30%), computer illiteracy (28%), and rural environment (25%). Continuing education topics related to information management would be valuable to 61% of the surgeons. Sixty-nine percent believe their current ability to access biomedical information is adequate, despite most frequently accessing their personal libraries for information related to decision-making or patient management. These data suggest that, despite significant information needs, surgeons have not embraced newer forms of information retrieval. It is imperative that surgeons acquire and maintain modern information retrieval skills as a means of remaining up-to-date in their profession. Professional surgical organizations and medical librarians should collaborate on these continuing education ventures.

The third Workshop of the Applied Information Systems Research Program (AISRP) met at the University of Colorado's Laboratory for Atmospheric and Space Physics in August of 1993. The presentations were organized into four sessions: Artificial Intelligence Techniques; Scientific Visualization; Data Management and Archiving; and Research and Technology.

Milberg, et al. (1996) postulated that significant intrasubtest scatter on the Wechsler Information subtest reflects impaired retrieval. From a pool of 205 male referrals at a VA medical center with complete WAIS-III and WMS-III protocols, 28 participants with impaired retrieval (Group I) defined by a high Retrieval Composite score were identified. A sample (Group II) without similar evidence of impaired retrieval was matched to Group I on age, education, Full Scale IQ, race, and diagnosis. Intrasubtest scatter on the Information subtest was the same across groups (Group I M = 6.3, SD = 2.7; Group II M = 6.9, SD = 3.4). A second study identified impaired retrieval using the WMS-III Word Lists subtest. 21 participants (Group III) had impaired retrieval indicated by a Recognition scaled score being > or = 4 points higher than the Delayed Recall scaled score. A matched sample (Group IV) of VA patients without similar evidence of impaired retrieval was constituted. Intrasubtest scatter on the Information subtest did not differ across groups (Group III M = 6.6, SD = 2.4; Group IV M = 6.0, SD = 2.5). Evaluations of the retrieval deficit hypothesis should be based on responses of participants whose Information performance is characterized by abnormal amounts of intrasubtest scatter. It is possible that a specific amount of response variability must be present within the subtest before retrieval problems can be detected.

A fundamental goal of the new National Science Foundation (NSF) initiative National Ecological Observatory Network (NEON) is to provide timely and broad access to the ecological data collected at NEON sites. Information management and data collection will be critical components to achieving this goal and a successful NEON implementation. The Southeast Ecological Observatory Network (SEEON) working group recognized the importance of information management and sensor technology in its first planning workshop and recommended that interested parties in the region come together to discuss these subjects in the context of the needs and capabilities of a southeast regional ecological observatory network. In February 2004, 28 participants from 14 organizations including academic institutions, state and federal agencies, private and non-profit entities convened at the Space Life Sciences Laboratory (SLSL) at the Kennedy Space Center, Florida for two days of presentations and discussions on ecological sensors and information management. Some of the participants were previously involved in the first SEEON workshop or other meetings concerned with NEON, but many were somewhat new to the NEON community. Each day focused on a different technical component, i.e. ecological sensors the first day and cyber-infrastructure the second day, and was structured in a similar manner. The mornings were devoted to presentations by experts to help stimulate discussions on aspects of the focal topic held in the afternoon. The formal and informal discussions held during the workshop succeeded in validating some concerns and needs identified in the first SEEON workshop, but also served to bring to light other questions or issues that will need to be addressed as the NEON planning and design stages move forward. While the expansion of the SEEON community meant that some of the presentation and discussion time was needed to help bring the newcomers up to speed on the goals, objectives and current

This paper analyzes the design goals of a medical instrumentation standard information retrieval system. Based on the B/S structure, we established a medical instrumentation standard retrieval system in the .NET environment with the ASP.NET C# programming language, the IIS Web server, and a SQL Server 2000 database. The paper also introduces the system structure, retrieval system modules, system development environment, and detailed design of the system.

Three experiments investigated whether retrieval of information about different dimensions of a visual object varies as a function of the perceptual properties of those dimensions. The experiments involved two perception-based matching tasks and two retrieval-based matching tasks. A signal-to-respond methodology was used in all tasks. A stochastic…

Presents a dynamic model of user behavior when scanning an information storage and retrieval system output list, compares rules for determining the user's optimum stopping point, presents an algorithm for implementing the Bayesian model, and discusses implications for retrieval system design. Provided are 13 figures and 15 references. (Author/RBF)

Analyzes the collaborative patterns of the information retrieval research field using co-authored articles retrieved from "Social Science Citation Index" for a period of 11 years, from 1987 to 1997. Results reveal an upward trend of collaborative research with interdisciplinary and intra-disciplinary scholarly communication. (Author/LRW)

Review of office filing facility filing and retrieval mechanisms for unstructured and mixed media information focuses on free text methods. Also discussed are the state of the art in handling voice and image data, problems with searching text surrogates to implement free text content retrieval, and work of Project Minstrel. (Author/MBR)

A theory for storage and retrieval of associative information is presented. Items or events are represented as random vectors. Convolution is used as the storage operation, correlation as the retrieval operation. A distributed memory system is assumed. The theory applies to recognition and recall and covers both accuracy and latency. (Author/RD)
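Storage by convolution and retrieval by correlation, as the theory above proposes, can be demonstrated directly. This is a minimal sketch using circular convolution/correlation and random vectors with elements drawn from N(0, 1/n), as such distributed-memory models typically assume:

```python
import random
from math import sqrt

def circ_conv(a, b):
    """Circular convolution: the storage operation."""
    n = len(a)
    return [sum(a[k] * b[(i - k) % n] for k in range(n)) for i in range(n)]

def circ_corr(a, b):
    """Circular correlation: the (approximate inverse) retrieval operation."""
    n = len(a)
    return [sum(a[k] * b[(i + k) % n] for k in range(n)) for i in range(n)]

def rand_vec(n, rng):
    """Random item vector with elements ~ N(0, 1/n)."""
    return [rng.gauss(0, 1 / sqrt(n)) for _ in range(n)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

rng = random.Random(0)
n = 256
cue, item, other = (rand_vec(n, rng) for _ in range(3))

trace = circ_conv(cue, item)      # store the association in a single trace
recalled = circ_corr(cue, trace)  # probe the trace with the cue
# 'recalled' is a noisy copy of 'item': it resembles the stored item
# far more than an unrelated vector, enabling recognition and recall.
```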

This study, conducted to determine the impact of representation on the retrieval of information items in terms of precision, recall, and overlap, used the INSPEC "Computers and Control Abstracts" loaded on DIATOM, an online retrieval system based on DIALOG, as the database to be searched. Sixty-nine users provided 84 queries, which were…

Several algorithms were investigated which would allow a user to interact with an automatic document retrieval system by requesting relevance judgments on selected sets of documents. Two viewpoints were taken in evaluation. One measured the movement of queries toward the optimum query as defined by Rocchio; the other measured the retrieval…
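Rocchio's relevance-feedback update, referenced above as the optimum-query benchmark, can be sketched as follows. The α/β/γ weights shown are conventional defaults, not values from the study:

```python
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query vector toward judged-relevant documents and away
    from judged-nonrelevant ones; negative weights are dropped."""
    terms = set(query)
    for doc in relevant + nonrelevant:
        terms |= set(doc)
    new_query = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in relevant) / len(relevant) if relevant else 0.0
        neg = sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant) if nonrelevant else 0.0
        w = alpha * query.get(t, 0.0) + beta * pos - gamma * neg
        if w > 0:
            new_query[t] = w
    return new_query

# Hypothetical term-weight vectors for the query and judged documents.
q = {"retrieval": 1.0}
rel = [{"retrieval": 0.5, "feedback": 0.8}]
nonrel = [{"boolean": 0.9}]
q2 = rocchio(q, rel, nonrel)
```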

Although a great deal of empirical evidence has indicated that retrieval practice is an effective means of promoting learning and memory, very few studies have investigated the strategy in the context of an actual class. The primary purpose of this study was to determine if a series of very brief retrieval quizzes could significantly improve the retention of previously tested information throughout an anatomy and physiology course. A second purpose was to determine if there were any significant differences between expanding and uniform patterns of retrieval that followed a standardized initial retrieval delay. Anatomy and physiology students were assigned to either a control group or groups that were repeatedly prompted to retrieve a subset of previously tested course information via a series of quizzes that were administered on either an expanding or a uniform schedule. Each retrieval group completed a total of 10 retrieval quizzes, and the series of quizzes required (only) a total of 2 h to complete. Final retention of the exam subset material was assessed during the last week of the semester. There were no significant differences between the expanding and uniform retrieval groups, but both retained an average of 41% more of the subset material than did the control group (ANOVA, F = 129.8, P = 0.00, ηp² = 0.36). In conclusion, retrieval practice is a highly efficient and effective strategy for enhancing the retention of anatomy and physiology material.

The rapid development of computerized information storage and retrieval techniques has introduced the possibility of extending the word processing concept to document processing. A major advantage of computerized document processing is the relief of the tedious task of manual editing and composition usually encountered by traditional publishers through the immense speed and storage capacity of computers. Furthermore, computerized document processing provides an author with centralized control, the lack of which is a handicap of the traditional publishing operation. A survey of some computerized document processing techniques is presented with emphasis on related information storage and retrieval issues. String matching algorithms are considered central to document information storage and retrieval and are also discussed.

The perceived value of information can influence one's motivation to successfully remember that information. This study investigated how information value can affect memory search and evaluation processes (i.e., retrieval monitoring). In Experiment 1, participants studied unrelated words associated with low, medium, or high values. Subsequent memory tests required participants to selectively monitor retrieval for different values. False memory effects were smaller when searching memory for high-value than low-value words, suggesting that people more effectively monitored more important information. In Experiment 2, participants studied semantically-related words, and the need for retrieval monitoring was reduced at test by using inclusion instructions (i.e., endorsement of any word related to the studied words) compared with standard instructions. Inclusion instructions led to increases in false recognition for low-value, but not for high-value words, suggesting that under standard-instruction conditions retrieval monitoring was less likely to occur for important information. Experiment 3 showed that words retrieved with lower confidence were associated with more effective retrieval monitoring, suggesting that the quality of the retrieved memory influenced the degree and effectiveness of monitoring processes. Ironically, unless encouraged to do so, people were less likely to carefully monitor important information, even though people want to remember important memories most accurately.

Outlines a prototype of an intelligent information-retrieval tool to facilitate information access for an undergraduate seeking information for a term paper. Topics include diagnosing the information need, Kuhlthau's information-search-process model, Shannon's mathematical theory of communication, and principles of uncertainty expansion and…

The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

variety of representations characterizing human knowledge, coupled with the necessary invention of new compatible retrieval interfaces. A textile dyer...solutions... new structures, and new access processes. The work presented here represents a simple ongoing effort in that direction. It basically

Patients who confabulate retrieve personal habits, repeated events or over-learned information and mistake them for actually experienced, specific unique events. Although some hypotheses favour a disruption of frontal/executive functions operating at retrieval, the respective involvement of encoding and retrieval processes in confabulation is still controversial. The present study sought to investigate experimentally the involvement of encoding and retrieval processes and the interference of over-learned information in the confabulation of Alzheimer's disease patients. Twenty Alzheimer's disease patients and 20 normal controls encoded and retrieved unknown stories, well-known fairy tales (e.g. Snow White) and modified well-known fairy tales (e.g. Little Red Riding Hood is not eaten by the wolf) under three experimental conditions: (i) full attention at encoding and at retrieval; (ii) divided attention at encoding (i.e. performing an attention demanding secondary task) and full attention at retrieval; (iii) full attention at encoding and divided attention at retrieval. We found that confabulations in Alzheimer's disease patients were more frequent for the modified well-known fairy tales and when encoding was weakened by a concurrent secondary task (61%), compared with the other types of stories and experimental conditions. Confabulations in the modified fairy tales always consisted of elements of the original version of the fairy tale (e.g. Little Red Riding Hood is eaten by the wolf). This is the first experimental evidence showing that poor encoding and over-learned information are involved in confabulation in Alzheimer's disease.

GIFT workshops are two-and-a-half-day teacher enhancement workshops organized by the EGU Committee on Education and held in conjunction with the EGU annual General Assembly. The program of each workshop focuses on a different general theme each year. Past themes have included, for example, "Mineral Resources", "Our Changing Planet", "Natural Hazards", "Water", "Evolution and Biodiversity" and "Energy and Sustainable Development". These workshops combine scientific presentations on current research in the Earth and Space Sciences, given by prominent scientists attending EGU General Assemblies, with hands-on, inquiry-based activities that can be used by the teachers in their classrooms to explain related scientific principles or topics. Participating teachers are also invited to present their own classroom activities to their colleagues, even when not directly related to the current program. The main objective of these workshops is to communicate first-hand scientific information to teachers in primary and secondary schools, significantly shortening the time between discovery and textbook. The GIFT workshop provides the teachers with materials that can be directly incorporated into their classrooms, as well as those of their colleagues at home institutions. In addition, the full immersion of science teachers in a truly scientific context (EGU General Assemblies) and the direct contact with leading geoscientists stimulate curiosity towards research that the teachers can transmit to their pupils. In addition to their scientific content, the GIFT workshops are of high societal value. The value of bringing teachers from many nations together includes the potential for networking and collaborations, the sharing of experiences and an awareness of science education as it is presented in other countries. Since 2003, the EGU GIFT workshops have brought together more than 700 teachers from more than 25 nations. At all previous EGU GIFT workshops teachers mingled with others

GIFT workshops are two-and-a-half-day teacher enhancement workshops organized by the EGU Committee on Education and held in conjunction with the EGU annual General Assembly. The program of each workshop focuses on a different general theme each year. Past themes have included, for example, "Mineral Resources", "Our Changing Planet", "Natural Hazards", "Water" and "Biodiversity and Evolution". These workshops combine scientific presentations on current research in Earth and Space Sciences, given by prominent scientists attending EGU General Assemblies, with hands-on, inquiry-based activities that can be used by the teachers in their classrooms to explain related scientific principles or topics. Teachers are also invited to present their own classroom activities to their colleagues, regardless of the scientific topic. The main objective of these workshops is to communicate first-hand scientific information to teachers in primary and secondary schools, significantly shortening the time between discovery and textbook. The GIFT workshop provides the teachers with materials that can be directly incorporated into their classrooms, as well as those of their colleagues at home institutions. In addition, the full immersion of science teachers in a truly scientific context (EGU General Assemblies) and the direct contact with leading geoscientists stimulate curiosity towards research that the teachers can transmit to their pupils. In addition to their scientific content, the GIFT workshops are of high societal value. The value of bringing teachers from many nations together includes the potential for networking and collaborations, the sharing of experiences and an awareness of science education as it is presented in other countries. Since 2003, the EGU GIFT workshops have brought together more than 700 teachers from more than 25 nations. At all previous EGU GIFT workshops teachers mingled with others from outside their own country and informally interacted with the

As future professionals, graduate students must be information literate; however, information literacy instruction of graduate students is often neglected. To address this need, we created literature review workshops to serve graduate students from a wide range of subject disciplines at a point of shared need. Not only did this strategy prove to…

However, evaluating the effectiveness of these suggestions has remained quite subjective, with a vast majority of the past work relying on expensive...addressing the polysemy problem while QR suggestions can effectively address both the problems. For example, query expansion may be able to retrieve...However, these evaluation measures can be quite expensive since they involve user interaction. Another work on evaluation that has similar objectives to the

Amid today's rapid growth of information is the increasing challenge for people to navigate its magnitude. The dynamics and heterogeneity of large information spaces such as the Web challenge information retrieval in these environments. Collection of information in advance and centralization of IR operations are hardly possible because…

Objective. To utilize a skills-based workshop series to develop pharmacy students’ drug information, writing, critical-thinking, and evaluation skills during the final didactic year of training. Design. A workshop series was implemented to focus on written (researched) responses to drug information questions. These workshops used blinded peer-grading to facilitate timely feedback and strengthen assessment skills. Each workshop was aligned to the didactic coursework content to complement and extend learning, while bridging and advancing research, writing, and critical thinking skills. Assessment. Attainment of knowledge and skills was assessed by rubric-facilitated peer grades, faculty member grading, peer critique, and faculty member-guided discussion of drug information responses. Annual instructor and course evaluations consistently revealed favorable student feedback regarding workshop value. Conclusion. A drug information workshop series using peer-grading as the primary assessment tool was successfully implemented and was well received by pharmacy students. PMID:25657378

GIFT workshops are two-and-a-half-day teacher enhancement workshops organized by the EGU Committee on Education and held in conjunction with the EGU annual General Assembly. The program of each workshop focuses on a different general theme each year. Past themes have included, for example, "The Polar Regions", "The Carbon Cycle" and "The Earth From Space". These workshops combine scientific presentations on current research in the Earth and Space Sciences, given by prominent scientists attending EGU General Assemblies, with hands-on, inquiry-based activities that can be used by the teachers in their classrooms to explain related scientific principles or topics. Participating teachers are also invited to present their own classroom activities to their colleagues, even when not directly related to the current program. The main objective of these workshops is to spread first-hand scientific information to teachers in primary and secondary schools, significantly shortening the time between discovery and textbook. The GIFT workshop provides the teachers with materials that can be directly incorporated into their classrooms, as well as those of their colleagues at home institutions. In addition, the full immersion of science teachers in a truly scientific context (EGU General Assemblies) and the direct contact with leading geoscientists stimulate curiosity towards research that the teachers can transmit to their pupils. In addition to their scientific content, the GIFT workshops are of high societal value. The value of bringing teachers from many nations together includes the potential for networking and collaborations, the sharing of experiences and an awareness of science education as it is presented in other countries. Since 2003, the EGU GIFT workshops have brought together more than 500 teachers from more than 20 nations. At all previous EGU GIFT workshops teachers mingled with others from outside their own country, informally interacted with the scientists

GIFT workshops are two-and-a-half-day teacher enhancement workshops organized by the EGU Committee on Education and held in conjunction with the EGU annual General Assembly. The program of each workshop focuses on a different general theme each year. Past themes have included, for example, "Biodiversity and Evolution", "The Polar Regions", "The Carbon Cycle" and "The Earth from Space". These workshops combine scientific presentations on current research in the Earth and Space Sciences, given by prominent scientists attending EGU General Assemblies, with hands-on, inquiry-based activities that can be used by the teachers in their classrooms to explain related scientific principles or topics. Participating teachers are also invited to present their own classroom activities to their colleagues, even when not directly related to the current program. The main objective of these workshops is to spread first-hand scientific information to teachers in primary and secondary schools, significantly shortening the time between discovery and textbook. The GIFT workshop provides the teachers with materials that can be directly incorporated into their classrooms, as well as those of their colleagues at home institutions. In addition, the full immersion of science teachers in a truly scientific context (EGU General Assembly) and the direct contact with leading geoscientists stimulate curiosity towards research that the teachers can transmit to their pupils. In addition to their scientific content, the GIFT workshops are of high societal value. The value of bringing teachers from many nations together includes the potential for networking and collaborations, the sharing of experiences and an awareness of science education as it is presented in other countries. Since 2003, the EGU GIFT workshops have brought together more than 500 teachers from more than 25 nations. At all previous EGU GIFT workshops teachers mingled with others from outside their own country and informally

GIFT workshops are two-and-a-half-day teacher enhancement workshops organized by the EGU Committee on Education and held in conjunction with the EGU annual General Assembly. The program of each workshop focuses on a different general theme each year. Past themes have included, for example, "Natural Hazards", "Biodiversity and Evolution", "The Polar Regions", "The Carbon Cycle" and "The Earth from Space". These workshops combine scientific presentations on current research in Earth and Space Sciences, given by prominent scientists attending EGU General Assemblies, with hands-on, inquiry-based activities that can be used by the teachers in their classrooms to explain related scientific principles or topics. Participating teachers are also invited to present their own classroom activities to their colleagues, regardless of the scientific topic. The main objective of these workshops is to communicate first-hand scientific information to teachers in primary and secondary schools, significantly shortening the time between discovery and textbook. The GIFT workshop provides the teachers with materials that can be directly incorporated into their classrooms, as well as those of their colleagues at home institutions. In addition, the full immersion of science teachers in a truly scientific context (EGU General Assemblies) and the direct contact with leading geoscientists stimulate curiosity towards research that the teachers can transmit to their pupils. In addition to their scientific content, the GIFT workshops are of high societal value. The value of bringing teachers from many nations together includes the potential for networking and collaborations, the sharing of experiences and an awareness of science education as it is presented in other countries. Since 2003, the EGU GIFT workshops have brought together more than 600 teachers from more than 25 nations. At all previous EGU GIFT workshops teachers mingled with others from outside their own country and informally

GIFT workshops are two-and-a-half-day teacher enhancement workshops organized by the EGU Committee on Education and held in conjunction with the EGU annual General Assembly. The program of each workshop focuses on a different general theme each year. Past themes have included, for example, "Water!", "Natural Hazards", "Biodiversity and Evolution", "The Polar Regions", "The Carbon Cycle" and "The Earth from Space". These workshops combine scientific presentations on current research in the Earth and Space Sciences, given by prominent scientists attending EGU General Assemblies, with hands-on, inquiry-based activities that can be used by the teachers in their classrooms to explain related scientific principles or topics. Participating teachers are also invited to present their own classroom activities to their colleagues, even when not directly related to the current program. The main objective of these workshops is to communicate first-hand scientific information to teachers in primary and secondary schools, significantly shortening the time between discovery and textbook. The GIFT workshop provides the teachers with materials that can be directly incorporated into their classrooms, as well as those of their colleagues at home institutions. In addition, the full immersion of science teachers in a truly scientific context (EGU General Assemblies) and the direct contact with leading geoscientists stimulate curiosity towards research that the teachers can transmit to their pupils. In addition to their scientific content, the GIFT workshops are of high societal value. The value of bringing teachers from many nations together includes the potential for networking and collaborations, the sharing of experiences and an awareness of science education as it is presented in other countries. Since 2003, the EGU GIFT workshops have brought together more than 600 teachers from more than 25 nations. At all previous EGU GIFT workshops teachers mingled with others from outside

The purpose of the K-12 workshop is to stimulate a cross-pollination of inter-center activity and introduce the regional centers to cutting-edge K-12 activities. The format of the workshop consists of project presentations, working groups, and working group reports, all contained in a three-day period. The agenda is aggressive and demanding. The K-12 Education Project is a multi-center activity managed by the Information Infrastructure Technology and Applications (IITA)/K-12 Project Office at the NASA Ames Research Center (ARC). This workshop is conducted in support of executing the K-12 Education element of the IITA Project. The IITA/K-12 Project funds activities that use the National Information Infrastructure (NII) (e.g., the Internet) to foster reform and restructuring in mathematics, science, computing, engineering, and technical education.

The amount of narrative clinical text documents stored in Electronic Patient Records (EPR) of Hospital Information Systems is increasing. Physicians spend a lot of time finding relevant patient-related information for medical decision making in these clinical text documents. Thus, efficient and topical retrieval of relevant patient-related information is an important task in an EPR system. This paper describes the prototype of a medical information retrieval system (MIRS) for clinical text documents. The open-source information retrieval framework Apache Lucene has been used to implement the prototype of the MIRS. Additionally, a multi-label classification system based on the open-source data mining framework WEKA generates metadata from the clinical text document set. The metadata is used to influence the rank order of documents retrieved by physicians. Combining information retrieval and automated document classification offers an enhanced approach to let physicians, and in the near future patients, define their information needs for information stored in an EPR. The system has been designed as a J2EE Web application. First findings are based on a sample of 18,000 unstructured clinical text documents written in German.
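The abstract does not specify how the WEKA-generated metadata alters the Lucene rank order, so the following is only a toy sketch of the general idea: compute a base relevance score, then boost documents whose classifier-assigned labels match the searcher's declared interest. All document texts, labels, and weights here are hypothetical:

```python
def term_overlap_score(query, doc_text):
    """Toy base relevance: fraction of query terms present in the document."""
    q, d = set(query.lower().split()), set(doc_text.lower().split())
    return len(q & d) / len(q) if q else 0.0

def rank(query, docs, wanted_labels, boost=0.6):
    """Order document ids by overlap score plus a fixed boost for every
    classifier-assigned label that matches the searcher's interest."""
    scored = [(term_overlap_score(query, doc["text"])
               + boost * len(wanted_labels & doc["labels"]), doc["id"])
              for doc in docs]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)]

docs = [
    {"id": "d1", "text": "discharge summary pneumonia treatment", "labels": {"radiology"}},
    {"id": "d2", "text": "pneumonia follow-up report", "labels": {"discharge"}},
]
print(rank("pneumonia discharge", docs, wanted_labels={"discharge"}))  # ['d2', 'd1']
```

With no label preference (`wanted_labels=set()`), d1 wins on term overlap alone; the metadata boost is what moves d2 ahead, illustrating how classification metadata can reorder a retrieval result.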

The feasibility of having a common information management network for space shuttle data is studied. Identified are the information types required, sources and users of the information, and existing techniques for acquiring, storing and retrieving the data. The study concluded that a decentralized system is feasible, and described a recommended development plan for it.

This report, summarizing the activities of the Vision Information Center (VIC) in the field of computer-assisted instruction from December, 1966 to August, 1967, describes the methodology used to load a large body of information--a programmed text on basic ophthalmology--onto a computer for subsequent information retrieval and computer-assisted…

On-line retrieval system design is discussed in the two papers which make up Part Five of this report on Salton's Magical Automatic Retriever of Texts (SMART) project. The first paper, "A Prototype On-Line Document Retrieval System" by D. Williamson and R. Williamson, outlines a design for a SMART on-line document retrieval system…

A spectral inversion technique for retrieval of the atmospheric gas and aerosol contents is proposed. The technique is based upon a preliminary measurement or retrieval of the spectral optical thickness. The existence of a priori information about the spectral cross sections of some atmospheric components makes it possible to retrieve the relative contents of these components in the atmosphere. A smooth-filtration method makes it possible to estimate the contents of atmospheric aerosols with known cross sections and to filter out other aerosols; this is done independently of their relative contribution to the optical thickness.
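In its simplest linear form, this kind of inversion amounts to solving tau(lambda_j) = sum_i sigma_i(lambda_j) * c_i for the contents c_i, given measured optical thicknesses tau and a priori cross sections sigma. The sketch below is a minimal least-squares version for two components, with entirely hypothetical numbers and none of the paper's smooth-filtration machinery:

```python
def solve2(A, b):
    """Least-squares solution of A c = b for two unknowns, via the normal
    equations (A^T A) c = A^T b solved in closed form (Cramer's rule)."""
    n = len(A)
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(2)]
           for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(n)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    c0 = (atb[0] * ata[1][1] - atb[1] * ata[0][1]) / det
    c1 = (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det
    return c0, c1

# Hypothetical cross sections sigma_i(lambda_j) for two gases at four
# wavelengths; the optical-thickness spectrum is synthesized from known
# contents (2.0, 3.0) so the inversion can be checked against them.
sigma = [[0.10, 0.40], [0.30, 0.20], [0.50, 0.10], [0.20, 0.30]]
tau = [s[0] * 2.0 + s[1] * 3.0 for s in sigma]
print(solve2(sigma, tau))  # recovers approximately (2.0, 3.0)
```

In practice the system is larger, noisy, and regularized, and a library solver (e.g. numpy.linalg.lstsq) would replace the hand-rolled normal equations; the closed form above is only to keep the sketch self-contained.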

Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the most of an automatic by-product of retrieval from memory, namely, retrieval fluency. In 4 experiments, the authors show that retrieval fluency can be a proxy for real-world quantities, that people can discriminate between two objects' retrieval fluencies, and that people's inferences are in line with the fluency heuristic (in particular fast inferences) and with experimentally manipulated fluency. The authors conclude that the fluency heuristic may be one tool in the mind's repertoire of strategies that artfully probes memory for encapsulated frequency information that can veridically reflect statistical regularities in the world.

To determine the impact of an assignment and workshop intended to increase students' information literacy skills, we conducted a quasi-experiment using a pretest-posttest assessment with undergraduate students in four sections of an introduction to developmental psychology course. Two sections (N = 81) received the assignment and instructions…

Objectives of the workshop were to produce an information exchange between the NASA centers on current efforts in advanced artificial intelligence, robotics, and automation and to identify current efforts throughout NASA, and gaps in those efforts, in relation to Space Station automation and robotics. The agenda was a set of technical overview presentations from the NASA centers on current efforts in advanced artificial intelligence, robotics and automation, and a joint session with the (SSIS) workshop being held at JSC the same week.

Research in social neuroscience has uncovered a social knowledge network that is particularly attuned to making social judgments. However, the processes that are being performed by both regions within this network and those outside of this network that are nevertheless engaged in the service of making a social judgment remain unclear. To help address this, we drew upon research in semantic memory, which suggests that making a semantic judgment engages 2 distinct control processes: a controlled retrieval process, which aids in bringing goal-relevant information to mind from long-term stores, and a selection process, which aids in selecting the information that is goal-relevant from the information retrieved. In a neuroimaging study, we investigated whether controlled retrieval and selection for social information engage distinct portions of both the social knowledge network and regions outside this network. Controlled retrieval for social information engaged an anterior ventrolateral portion of the prefrontal cortex, whereas selection engaged both the dorsomedial prefrontal cortex and temporoparietal junction within the social knowledge network. These results suggest that the social knowledge network may be more involved with the selection of social information than the controlled retrieval of it and incorporates lateral prefrontal regions in accessing memory for making social judgments. PMID:23300111

president of the USA", and passes the information to the data-base manager for consistency checking and storage. In order to pass the information...information to the data-base manager. Let us see a brief dialog with DYPAR. For simplicity, we start out with an empty data base. Items in italics below...for additional information required by the integrity-checker in the data-base manager.] What is NICE? +Nice is a disposition. Storing assertion in

Information professionals, whether designers, intermediaries, database producers, or vendors, bear some responsibility for the information that they make available to users of information systems. The users of such systems may tend to make many assumptions about the information that a system provides, such as believing: that the data are comprehensive, current, and accurate; that the information resources or databases have the same degree of quality and consistency of indexing; that the abstracts, if they exist, correctly and adequately reflect the content of the article; that there is consistency in forms of author names or journal titles or indexing within and across databases; that there is standardization in and across databases; that once errors are detected, they are corrected; that appropriate choices of databases or information resources are a relatively easy matter; etc. The truth is that few of these assumptions are valid in commercial or corporate or organizational databases. However, given these beliefs and assumptions by many users, often promoted by information providers, information professionals should, where possible, intervene to warn users about the limitations and constraints of the databases they are using. With the growth of the Internet and end-user products (e.g., CD-ROMs), such interventions have significantly declined. In such cases, information should be provided on start-up or through interface screens, indicating to users the constraints and orientation of the system they are using. The principle of "caveat emptor" is naive and socially irresponsible: information professionals or systems have an obligation to provide some framework or context for the information that users are accessing.

The theme of this NASA Scientific and Technical Information Program Coordinating Council meeting was the role of controlled vocabularies (thesauri) in information retrieval. Included are summaries of the presentations and the accompanying visuals. Dr. Raya Fidel addressed 'Retrieval: Free Text, Full Text, and Controlled Vocabularies.' Dr. Bella Hass Weinberg spoke on 'Controlled Vocabularies and Thesaurus Standards.' The presentations were followed by a panel discussion with participation from NASA, the National Library of Medicine, the Defense Technical Information Center, and the Department of Energy; this discussion, however, is not summarized in any detail in this document.

The currently developed multi-level language interfaces of information systems are generally designed for experienced users. These interfaces commonly ignore the nature and needs of the largest user group, i.e., casual users. This research identifies the importance of natural language query system research within information storage and retrieval system development; addresses the topics of developing such a query system; and finally, proposes a framework for the development of natural language query systems in order to facilitate the communication between casual users and information storage and retrieval systems.

Holographic interferograms can contain large amounts of information about flow and temperature fields. Their information content can be very high because they can be viewed from many different directions. This multidirectionality and fringe localization add to the information contained in the fringe pattern if diffuse illumination is used. Additional information and increased accuracy can be obtained through the use of dual reference wave holography to add reference fringes or to effect discrete phase shifting or heterodyne interferometry. Automated analysis of fringes is possible if interferograms are of simple structure and good quality. However, in practice a large number of practical problems can arise, so that a difficult image processing task results.

This study examines the indexing of drugs in the literature and compares actual drug indexing to stated indexing policies in selected databases. The goal is to aid health science information specialists, end-users, and/or non-subject experts to improve recall and comprehensiveness when searching for drug information by identifying the most useful…

There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However, most of this knowledge about contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism for users to add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and does not require any a priori statistical computation or an extended training period. It is currently being implemented in the Computer Integrated Documentation system, which enables integration of various technical documents in a hypertext framework.
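
The record-and-generalize loop described above can be reduced to a short sketch: relevance votes are stored per (query, context, reference) triple and transferred to new queries via term overlap. The class name, the Jaccard similarity, and the sample documents are illustrative assumptions, not the actual model of the Computer Integrated Documentation system.

```python
from collections import defaultdict

def jaccard(a, b):
    """Set overlap, used as a crude similarity between queries or contexts."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

class RelevanceNetwork:
    """Toy compositional relevance network: records which reference a user
    found relevant for a (query, context) pair, then generalizes that
    feedback to similar pairs via term-overlap similarity."""
    def __init__(self):
        self.feedback = []  # (query_terms, context_terms, reference, vote)

    def record(self, query, context, reference, relevant=True):
        self.feedback.append((query, context, reference,
                              1.0 if relevant else -1.0))

    def rank(self, query, context):
        scores = defaultdict(float)
        for q, c, ref, vote in self.feedback:
            # Similar queries posed in similar contexts transfer their votes.
            scores[ref] += vote * jaccard(query, q) * jaccard(context, c)
        return sorted(scores.items(), key=lambda kv: -kv[1])
```

A reference voted relevant for an identical query and context scores highest; a reference from an unrelated context contributes nothing.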

The general scope of SPIRAL is storage of free-flowing text information into a machine-readable library and recall of any portions of this stored information that are relevant to an inquiry. The major objectives in the design of the system were (1) to make it easy to use by persons unfamiliar with computer systems; and (2) to make it efficient, in…

Access to online information sources of aerospace, scientific, and engineering data, a mission focus for NASA's Scientific and Technical Information Program, has always been limited by factors such as telecommunications, query language syntax, lack of standardization in the information, and the lack of adequate tools to assist in searching. Today, the NASA STI Program's NASA Access Mechanism (NAM) prototype offers a solution to these problems by providing the user with a set of tools that provide a graphical interface to remote, heterogeneous, and distributed information in a manner adaptable to both casual and expert users. Additionally, the NAM provides access to many Internet-based services such as Electronic Mail, the Wide Area Information Servers system, Peer Locating tools, and electronic bulletin boards.

Data entry forms are employed in all types of enterprises to collect hundreds of customers' records on a daily basis. The information is filled in manually by the customers. Hence, it is laborious and time consuming to use human operators to transfer this customer information into computers manually. Additionally, it is expensive, and human errors might cause serious flaws. The automatic interpretation of scanned forms has, from the standpoint of speed and accuracy, facilitated many real applications such as keyword spotting, sorting of postal addresses, script matching, and writer identification. This research deals with different strategies for extracting, interpreting, and classifying customer information from these scanned forms. Accordingly, extracted information is segmented into characters for classification and finally stored as records in databases for further processing. This paper presents a detailed discussion of these semantics-based analysis strategies for forms processing. Finally, new directions are recommended for future research.

The growing volume of heterogeneous and distributed information on the World Wide Web has made it increasingly difficult for existing tools to retrieve relevant information. To improve the performance of these tools, this paper suggests how to handle two aspects of the problem. The first aspect concerns a better representation and description of…

This research studies the development of a new prototype of visualization in support of movie retrieval. The goal of information visualization is to unveil large amounts of data or abstract data sets using visual presentation. With this knowledge, the main goal is to develop a 2D presentation of information on movies from the IMDB (Internet Movie…

The use of computers in retrieving bibliographic chemical information is traced through the SDI, batch, and online modes, and related changes are noted in such areas as data base availability, cost, software, and amount of user control. The impact of these changes on both the quality and quantity of chemical information use is discussed, as well…

Traces the development of information retrieval/services and suggests that the creation of large digital libraries seems inevitable. Examines possibilities for increasing electronic access and the role of artificial intelligence. Highlights include: searching full text; sending full texts; selective dissemination of information (SDI) profiling and…

Millennial students make up a large portion of undergraduate students attending colleges and universities, and they have a variety of online resources available to them to complete academically related information searches, primarily Web-based and library-based online information retrieval systems. The content, ease of use, and required search…

Academic libraries are increasingly collecting e-books, but little research has investigated how students use e-books compared to print texts. This study used a prompted think-aloud method to gain an understanding of the information retrieval behavior of students in both formats. Qualitative analysis identified themes that will inform instruction…

Reviews the literature of networked information retrieval tools to explore the concept of an interactive text-based virtual reality environment that would encompass resources currently available on the Internet. Highlights include academic libraries, electronic mail, hypertext navigation systems, wide area information servers, knowbots and…

We implemented a "how to study" workshop for small groups of students (6-12) for N = 93 consenting students, randomly assigned from a large introductory biology class. The goal of this workshop was to teach students self-regulating techniques with visualization-based exercises as a foundation for learning and critical thinking in two areas: information processing and self-testing. During the workshop, students worked individually or in groups and received immediate feedback on their progress. Here, we describe two individual workshop exercises, report their immediate results, describe students' reactions (based on the workshop instructors' experience and student feedback), and report student performance on workshop-related questions on the final exam. Students rated the workshop activities highly and performed significantly better on workshop-related final exam questions than the control groups. This was the case for both lower- and higher-order thinking questions. Student achievement (i.e., grade point average) was significantly correlated with overall final exam performance but not with workshop outcomes. This long-term (10 wk) retention of a self-testing effect across question levels and student achievement is a promising endorsement for future large-scale implementation and further evaluation of this "how to study" workshop as a study support for introductory biology (and other science) students.

Long-term memories can undergo destabilization/restabilization processes, collectively called reconsolidation. However, the parameters that trigger memory reconsolidation are poorly understood and are a matter of intense investigation. In particular, memory retrieval is widely held to be requisite to initiate reconsolidation. This assumption makes sense, since only relevant cues will induce reconsolidation of a specific memory. However, recent studies show that pharmacological inhibition of retrieval does not prevent memory from undergoing reconsolidation, indicating that memory reconsolidation occurs through a process that can be dissociated from retrieval. We propose that retrieval is not a unitary process but has two dissociable components, one leading to the expression of memory and the other to reconsolidation, referred to herein as the executer and the integrator, respectively. The executer would lead to the behavioral expression of the memory; this component would be the one disrupted in the studies that show reconsolidation's independence from retrieval. The integrator would deal with reconsolidation; this component of retrieval would lead to long-term memory destabilization when specific conditions are met. We think that an important number of reports are consistent with the hypothesis that reconsolidation is initiated only when updating information is acquired. We suggest that the integrator would initiate reconsolidation to integrate updating information into long-term memory.

We propose an algorithm for jointly designing two phase-only masks (POMs) that allow for the encryption and noise-free retrieval of triple images. The images required for optical retrieval are first stored in quick-response (QR) codes for noise-free retrieval and flexible readout. Two sparse POMs are respectively calculated from two different images used as references for authentication, based on a modified Gerchberg-Saxton algorithm (GSA) and pixel extraction, and are then used as support constraints in a modified double-phase retrieval algorithm (MPRA), together with the above-mentioned QR codes. No visible information about the target images or the reference images can be obtained from either of these authenticated POMs. This approach allows users to authenticate the two POMs used for image reconstruction without visual observation of the reference images. It also allows users friendly access and readout with mobile devices. PMID:26494213
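
The modified GSA above builds on the classic error-reduction Gerchberg-Saxton iteration: alternate between the object and Fourier domains, each time keeping the current phase but replacing the magnitude with the known one. A bare-bones, stdlib-only sketch of the unmodified loop follows (no sparsity, QR codes, or dual masks); the tiny hand-rolled DFT and 1D signal are purely for illustration.

```python
import cmath, math, random

def dft(x):
    """Naive discrete Fourier transform, adequate for tiny demo signals."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def gerchberg_saxton(obj_mag, fourier_mag, iters=50, seed=0):
    """Error-reduction GSA: starting from a random phase guess, enforce the
    measured magnitude in each domain while keeping the current phase.
    Returns the retrieved field (phase is recovered up to a global offset)
    and the Fourier-magnitude error per iteration (nonincreasing)."""
    rng = random.Random(seed)
    g = [a * cmath.exp(2j * math.pi * rng.random()) for a in obj_mag]
    errors = []
    for _ in range(iters):
        G = dft(g)
        errors.append(sum((abs(Gk) - Ak) ** 2
                          for Gk, Ak in zip(G, fourier_mag)))
        # Fourier-domain constraint: keep phase, impose measured magnitude.
        G = [Ak * cmath.exp(1j * cmath.phase(Gk))
             for Gk, Ak in zip(G, fourier_mag)]
        g = idft(G)
        # Object-domain constraint: keep phase, impose known amplitude.
        g = [a * cmath.exp(1j * cmath.phase(gk))
             for a, gk in zip(obj_mag, g)]
    return g, errors
```

The monotone decrease of the error is the classic property of the error-reduction variant; the modified algorithms in the paper add further constraints on top of this loop.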

With the vast amount of biomedical data, we face the necessity of improving information retrieval processes in the biomedical domain. The use of biomedical ontologies facilitates the combination of various data sources (e.g., scientific literature, clinical data repositories) by increasing the quality of information retrieval and reducing maintenance efforts. In this context, we developed Ontology Look-up Services (OLS) based on the NEWT and MeSH vocabularies. Our services were involved in information retrieval tasks such as gene/disease normalization. The implementation of the OLS services significantly accelerated the extraction of particular biomedical facts by structuring and enriching the data context. Precision in the normalization tasks improved by about 20%.
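
At its core, the normalization step an ontology look-up service performs is mapping free-text mentions to canonical concept identifiers. A minimal sketch, with an invented class name and a two-concept vocabulary standing in for the real NEWT/MeSH data:

```python
class OntologyLookup:
    """Minimal ontology look-up: maps free-text synonyms to a canonical
    concept identifier, the core step of gene/disease normalization.
    The concepts below are illustrative, not a real vocabulary load."""
    def __init__(self, concepts):
        # concepts: {canonical_id: [synonym, ...]}
        self.index = {}
        for cid, synonyms in concepts.items():
            for s in synonyms:
                self.index[s.lower()] = cid

    def normalize(self, mention):
        """Return the canonical ID for a mention, or None if unknown."""
        return self.index.get(mention.lower())

lookup = OntologyLookup({
    "D009369": ["neoplasm", "tumor", "tumour", "cancer"],
    "D003920": ["diabetes mellitus", "diabetes"],
})
```

Case-insensitive synonym matching alone resolves spelling variants such as "tumour"/"tumor"; a production service layers fuzzy matching and context on top of this table.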

Background The movement towards evidence-based practice makes explicit the need for access to current best evidence to improve health. Advances in electronic technologies have made health information more available, but does availability affect the rate of use of evidence in practice? Objectives To assess the effectiveness of interventions intended to provide healthcare providers with electronic access to health information in order to improve practice and patient care. Search methods We obtained studies from computerized searches of multiple electronic bibliographic databases, supplemented by checking reference lists and consultation with experts. Selection criteria Randomized controlled trials (RCTs), including cluster randomized trials (CRCTs), controlled clinical trials (CCTs), and interrupted time series analyses (ITS), of any language or publication status, examining the effectiveness of interventions for electronic retrieval of health information by healthcare providers. Data collection and analysis Duplicate relevancy screening of searches, data abstraction, and risk of bias assessment were undertaken. Main results We found two studies that examined this question. Neither study found any changes in professional behavior following an intervention that facilitated electronic retrieval of health information. There was some evidence of improvements in knowledge about the electronic sources of information reported in one study. Neither study assessed changes in patient outcomes or the costs of provision of the electronic resource and the implementation of the recommended evidence-based practices. Authors' conclusions Overall there was insufficient evidence to support or refute the use of electronic retrieval of healthcare information by healthcare providers to improve practice and patient care. PMID:19588361

It is proposed that the different styles of online searching can be described as either formal (highly precise) or informal, with the needs of the client dictating which is most applicable at a particular moment. The background and personality of the searcher also come into play. Particular attention is focused on meatball searching, which is a form of online searching characterized by deliberate vagueness. It requires generally comprehensive searches, often on unusual topics and with tight deadlines. It is most likely to occur in search centers serving many different disciplines and levels of client information sophistication. Various information needs are outlined, as well as the laws of meatball searching and the adversarial approach. Traits and characteristics important to successful searching include: (1) concept analysis, (2) flexibility of thinking, (3) ability to think in synonyms, and (4) anticipation of variant word forms and spellings.

We give an outline of JOIS (JICST On-line Information System)-III, which has been developed by the Japan Information Center of Science and Technology as a successor to the JOIS-II system now in service, with service to start in January 1990. In this report we show the input format, the new and function-expanded commands that replace those of JOIS-II, and examples of use. (for 1st part, see vol. 32, no. 3; 2nd part, vol. 32, no. 4)

This article examines the increasing use of the humanities in the education of health professionals and posits that the approach may be of use in teaching health professionals information search and retrieval skills. However, little evidence exists to support the educational effectiveness of using the humanities. This lack of evidence raises concerns about the costs of financing this approach to learning; these costs include the issue of copyright, which cannot be ignored. While the humanities might provide a more attractive approach to teaching information search and retrieval skills, further research is needed to justify the costs of this approach to learning in more general terms, and the question deserves urgent attention.

The information retrieval system of the National Museum of Ethnology made its debut in 1979 and now enables us to search for books not only in the Museum but throughout the country and abroad by means of JAPAN MARC and LC MARC. The author briefly presents the outline and development of the information managing system, and then a practical case of using the retrieval system in particular. The problems to be solved in the course of the future plan are also mentioned.

The basics of online searching for bibliographic citations are presented through use of the "Economic Literature Index" (ELI), which is available as File 139 on Dialog Information Services. This article focuses on the choice of media for bibliographic searches, search strategies, and the selection of alternative databases, such as…

The electronic information access skills outlined in this guide for teachers and library media specialists expand the online searching skills discussed in the previous Wisconsin Educational Media Association handbook, and further delineate skill development in this crucial area. This publication is designed to serve as a broad planning and…

Discusses the current status of 4 developments that will determine the emergence of generally available multimedia information services: high-bandwidth networks; inexpensive user appliances capable of handling multimedia; adoption of standards for representation, compression, packaging, and transmission; and development of a corpus of multimedia…

Proposes an alternative real-valued representation of color based on the information-theoretic concept of entropy. A theoretical presentation of image entropy is accompanied by a practical description of the merits and limitations of image entropy compared to color histograms. Results suggest that image entropy is a promising approach to image…
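
The entropy measure discussed above is simple to state: treat the normalized intensity histogram as a probability distribution and take its Shannon entropy. A minimal stdlib-only sketch over a flat list of grayscale pixel values (the record itself concerns color images; grayscale is used here only to keep the example short):

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of an image's intensity distribution.
    A constant image gives 0; a uniform histogram gives the maximum,
    log2 of the number of distinct levels."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For example, an image using four intensity levels equally often has entropy of exactly 2 bits, regardless of how the pixels are arranged, which is also the measure's main limitation relative to spatial features.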

Monotonic logic requires reexamination of the entire logic string when there is a contradiction. Nonmonotonic logic allows the user to withdraw conclusions in the face of contradiction without harm to the logic string, which has considerable application to the field of information searching. Artificial intelligence models and neural networks based…
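
The contrast between the two logics can be shown with the classic default-reasoning example: a conclusion drawn by default is withdrawn when a contradicting fact arrives, without reexamining anything else in the knowledge base. This toy sketch is a generic illustration, not the implementation of any particular AI system mentioned in the record:

```python
class DefaultReasoner:
    """Tiny nonmonotonic sketch: a default rule's conclusion holds until
    its exception becomes a fact, at which point only that conclusion is
    withdrawn -- the rest of the conclusions are untouched."""
    def __init__(self):
        self.facts = set()
        self.defaults = []  # (premise, conclusion, exception)

    def add_default(self, premise, conclusion, exception):
        self.defaults.append((premise, conclusion, exception))

    def tell(self, fact):
        self.facts.add(fact)

    def conclusions(self):
        out = set(self.facts)
        for premise, conclusion, exception in self.defaults:
            # Draw the default conclusion only while no exception is known.
            if premise in self.facts and exception not in self.facts:
                out.add(conclusion)
        return out
```

With the rule "birds fly unless they are penguins", asserting "bird" yields "flies"; later asserting "penguin" retracts "flies" while "bird" survives, which is exactly the behavior monotonic logic cannot express without reexamining the whole logic string.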

There has been substantial interest in optical imaging in and through random media in applications as diverse as environmental sensing and tumor detection. The rich scatter environment also leads to multiple paths or channels, which may provide higher capacity for communication. Coherent light passing through random media produces an intensity speckle pattern when imaged, as a result of multiple scatter and the imaging optics. When polarized coherent light is used, the speckle pattern is sensitive to the polarization state, depending on the amount of scatter, and such measurements provide information about the random medium. This may form the basis for enhanced imaging of random media and provide information on the scatterers themselves. Second and third order correlations over laser scan frequency are shown to lead to the ensemble averaged temporal impulse response, with sensitivity to the polarization state in the more weakly scattering regime. A new intensity interferometer is introduced that provides information about two signals incident on a scattering medium. The two coherent beams, which are not necessarily overlapping, interfere in a scattering medium. A sinusoidal modulation in the second order intensity correlation with laser scan frequency is shown to be related to the relative delay of the two incident beams. An intensity spatial correlation over input position reveals that decorrelation occurs over a length comparable to the incident beam size. Such decorrelation is also related to the amount of scatter. Remarkably, with two beams incident at different angles, the intensity correlation over the scan position has a sinusoidal modulation that is related to the incidence angle difference between the two input beams. This spatial correlation over input position thus provides information about input wavevectors.

This study is concerned with the difficulties encountered by casual users wishing to employ Information Storage and Retrieval Systems. A casual user is defined as a professional who has neither time nor desire to pursue in depth the study of the numerous and varied retrieval systems. His needs for on-line search are only occasional, and not limited to any particular system. The paper takes a close look at the state of the art of research concerned with aiding casual users of Information Storage and Retrieval Systems. Current experiments such as LEXIS, CONIT, IIDA, CITE, and CCL are presented and discussed. Comments and proposals are offered, specifically in the areas of training, learning and cost as experienced by the casual user. An extensive bibliography of recent works on the subject follows the text.

This dissertation examines the impact of exploration and learning upon eDiscovery information retrieval; it is written in three parts. Part I contains foundational concepts and background on the topics of information retrieval and eDiscovery. This part informs the reader about the research frameworks, methodologies, data collection, and…

We show that, in order to preserve the equivalence principle until late times in unitarily evaporating black holes, the thermodynamic entropy of a black hole must be primarily entropy of entanglement across the event horizon. For such black holes, we show that the information entering a black hole becomes encoded in correlations within a tripartite quantum state, the quantum analogue of a one-time pad, and is only decoded into the outgoing radiation very late in the evaporation. This behavior generically describes the unitary evaporation of highly entangled black holes and requires no specially designed evolution. Our work suggests the existence of a matter-field sum rule for any fundamental theory.

Normal aging is characterized by deficits that cross multiple cognitive domains including episodic memory and attention. Compared to young adults (YA), older adults (OA) not only show reduction in true memories, but also an increase in false memories. In this study we aim to elucidate how the production of confabulation is influenced by encoding and retrieval processes. We hypothesized that in OA, compared to YA, over-learned information interferes with the recall of specific, unique past episodes and this interference should be more prominent when a concurrent task perturbs the encoding of the episodes to be recalled. We tested this hypothesis using an experimental paradigm in which a group of OA and a group of YA had to recall three different types of story: a previously unknown story, a well-known fairy tale (Snow White), and a modified well-known fairy tale (Little Red Riding Hood is not eaten by the wolf), in three different experimental conditions: (1) free encoding and free retrieval; (2) Divided attention (DA) at encoding and free retrieval; and (3) free encoding and DA at retrieval. Results showed that OA produced significantly more confabulations than YA, particularly, in the recall of the modified fairy tale. Moreover, DA at encoding markedly increased the number of confabulations, whereas DA at retrieval had no effect on confabulation. Our findings reveal the implications of two phenomena in the production of confabulation in normal aging: the effect of poor encoding and the interference of strongly represented, over-learned information in episodic memory recall.

A general overview is given of the National Space Science Data Center (NSSDC) Standard Information Retrieval System (SIRS). The information system that contains the data files and the software system that processes and manipulates the files maintained at the Data Center are described in general terms. Emphasis is placed on providing users with an overview of the capabilities and uses of SIRS. The examples given are taken from the files at the Data Center. Detailed information about NSSDC data files is documented in a set of File Users Guides, with one user's guide prepared for each file processed by SIRS. Detailed information about SIRS is presented in the SIRS Users Guide.

Medical centers collect and store a significant amount of valuable data pertaining to patients' visits in the form of medical free text. In addition, standardized diagnosis codes (International Classification of Diseases, Ninth Revision, Clinical Modification: ICD9-CM) related to those dictated reports are usually available. In this work, we have created a framework in which image searches can be initiated through a combination of free-text reports and ICD9 codes. This framework enables more comprehensive search over existing large sets of patient data in a systematic way. The free-text search is enriched by computer-aided inclusion of additional search terms drawn from a thesaurus. This combination of enriched search allows users to access a larger set of relevant results from a patient-centric PACS in a simpler way. Therefore, such a framework is of particular use in tasks such as gathering images for desired patient populations, building disease models, and so on. As the motivating application of our framework, we implemented a search engine. This search engine processed two years of patient data from the OSU Medical Center's Information Warehouse and identified lung nodule location information using a combination of UMLS Metathesaurus-enhanced text report searches and ICD9 code searches on patients who had been discharged. Five different queries with various ICD9 codes involving lung cancer were carried out on 172,552 cases. Each search completed in under a minute on average per ICD9 code, and the inclusion of the UMLS thesaurus increased the number of relevant cases by 45% on average.
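
The core query pattern is a structured code filter combined with thesaurus-expanded free-text matching. A sketch under stated assumptions: the records, the one-entry thesaurus, and the `search` helper are all invented for illustration (the real system queried a clinical warehouse with the UMLS Metathesaurus), and the matching is plain substring search rather than proper tokenization.

```python
# Hypothetical synonym table standing in for UMLS Metathesaurus expansion.
THESAURUS = {"nodule": {"nodule", "mass", "lesion"}}

# Hypothetical report records: ICD9 code plus dictated free text.
reports = [
    {"id": 1, "icd9": "162.9", "text": "spiculated mass in right upper lobe"},
    {"id": 2, "icd9": "162.9", "text": "no acute findings"},
    {"id": 3, "icd9": "486",   "text": "left lower lobe nodule noted"},
]

def search(reports, icd9, term):
    """Return IDs of reports matching the ICD9 code whose text contains
    the term or any thesaurus synonym -- the synonym expansion is what
    widens recall over a literal keyword match."""
    synonyms = THESAURUS.get(term, {term})
    return [r["id"] for r in reports
            if r["icd9"] == icd9
            and any(s in r["text"] for s in synonyms)]
```

Searching for "nodule" under code 162.9 finds report 1 even though its text says "mass", which a literal keyword search would miss.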

An ontology is an explicit specification of the concepts of an information domain shared by a group of users. Incorporating ontologies into information retrieval is a common method of improving the retrieval of the relevant information users require. Matching keywords against a historical or domain vocabulary is significant in recent approaches to finding the best match for specific input queries. This research presents an improved querying mechanism for information retrieval which integrates ontology queries with keyword search. The ontology-based query is translated into first-order predicate logic, which is used for routing the query to the appropriate servers. Matching algorithms represent an active area of research in computer science and artificial intelligence. In text matching, it is more reliable to use a semantic model and to query on conditions of semantic matching. This research develops semantic matching between input queries and information in the ontology field. The contributed algorithm is a hybrid method based on matching instances extracted from the queries and the information field. The queries and information domain are focused on semantic matching, to discover the best match and to improve the retrieval process. In conclusion, the hybrid ontology in the semantic web is sufficient to retrieve the documents when compared to standard ontology.

A simulation study of library-based information retrieval systems is described. Basic models for each of several important aspects are presented: (1) user behavior, emphasizing response to quality and delays in services; (2) the scheduling of services and the organization of the machine-readable files; and (3) the distribution of conventional…

Presents a genetic relevance optimization process performed in an information retrieval system that uses genetic techniques for solving multimodal problems (niching) and query reformulation techniques. Explains that the niching technique allows the process to reach different relevance regions of the document space, and that query reformulations…

Genetic algorithms, a class of nondeterministic algorithms in which the role of chance makes the precise nature of a solution impossible to guarantee, seem to be well suited to combinatorial-optimization problems in information retrieval. Provides an introduction to techniques and characteristics of genetic algorithms and illustrates their…
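
As a hedged illustration of how a genetic algorithm can attack a combinatorial IR problem, the toy sketch below evolves binary term-selection vectors for a query against a two-document collection. The fitness function, terms, and parameters are all invented for this example, not taken from the article.

```python
import random

random.seed(0)  # reproducible runs

TERMS = ["genetic", "algorithm", "retrieval", "banana", "opera"]
DOCS = [
    {"genetic", "algorithm", "retrieval"},
    {"retrieval", "algorithm"},
]

def fitness(chrom):
    # Reward selected terms that occur in documents; penalize noise terms.
    chosen = {t for t, bit in zip(TERMS, chrom) if bit}
    return sum(len(chosen & doc) for doc in DOCS) - len(chosen - DOCS[0] - DOCS[1])

def evolve(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in TERMS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TERMS))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:              # occasional mutation
                i = random.randrange(len(TERMS))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sorted(t for t, bit in zip(TERMS, best) if bit))
```

The outcome is stochastic by nature, which is exactly the "role of chance" the abstract refers to; elitism keeps the best chromosome from being lost between generations.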

Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
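
A minimal sketch of the idea of folding term similarity and inverse document frequency into one matching score (not the article's logical formulation): a query term is scored against a document by the best similarity to any document term, weighted by that term's IDF. The similarity table and documents are illustrative.

```python
import math

DOCS = [
    ["retrieval", "model", "logic"],
    ["retrieval", "ranking"],
    ["cooking", "recipes"],
]

# Hypothetical inter-term similarity scores (symmetric lookup below).
SIM = {("retrieval", "ranking"): 0.6}

def idf(term):
    """Inverse document frequency: log(N / document frequency)."""
    n = sum(term in d for d in DOCS)
    return math.log(len(DOCS) / n) if n else 0.0

def sim(a, b):
    if a == b:
        return 1.0
    return SIM.get((a, b), SIM.get((b, a), 0.0))

def score(query_term, doc):
    # Best similarity to any document term, weighted by that term's IDF:
    # a rarer related term can outscore an exact match on a common term.
    return max(sim(query_term, t) * idf(t) for t in doc)

print(round(score("retrieval", DOCS[1]), 3))  # → 0.659
```

Here the related but rarer term "ranking" (IDF ln 3 ≈ 1.099, similarity 0.6) outweighs the exact match on the common term "retrieval" (IDF ln 1.5 ≈ 0.405).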

Discusses the possibility and usefulness of applying Habermas' universal pragmatics to analyze information retrieval interaction. Examines current studies of human computer interaction (HCI) and presents a case study that investigated the initiation and development of verification of validity claims in HCI from the universal pragmatics…

Discusses reuse of existing software for new purposes as a key aspect of efficient software engineering by matching formal written requirements used to define the new and the old software. Explores two matching methodologies that use information retrieval techniques and describes test results from a comparison of two military systems. (Author/LRW)

Discussion of visual information retrieval systems focuses on an approach for testing novel interfaces that uses bottom-up, stepwise testing to allow evaluation of a visualization itself, rather than restricting evaluation to the system instantiating it. Presents a case study of undergraduates that compares a new visualization technique to more…

This article summarizes the author's dissertation regarding search strategies of millennial undergraduate students in Web and library online information retrieval systems. Millennials bring a unique set of search characteristics and strategies to their research since they have never known a world without the Web. Through the use of search engines,…

This handbook is a set of guidelines to assist authors in preparing publications to meet two sets of criteria: requirements of federal and state government sponsors and requirements of information retrieval systems. The guidelines include both a set of written instructions and a physical model, and are sufficiently flexible to apply to research…

Reviews information retrieval (IR) studies since 1986 from the user's perspective. Identifies two main approaches that advocate user-centered design theory: (1) the cognitive approach; and (2) the holistic approach. Also explores other approaches--systems thinking/action research and usability techniques that may have potential for IR research and…

Present-day shortcomings in information retrieval are the result of a failure to properly contend with the problem of data representation. The index provides the necessary linkage between a multiplicity of sources and a single receiver. Whether considering the source/document-space interface or the query/index interface, the elements of the…

Evaluates the current state of natural language processing information retrieval systems from the user's point of view, focusing on the structure and components of the systems' help mechanisms. Topics include user/system interaction; semantic parsing; syntactic parsing; semantic mapping; and concept matching. (Author/LRW)

Study was started to discover and state explicitly the fundamentals of data banking (more commonly called information storage and retrieval). A clear...framework or hierarchical tree is displayed that includes all possible data banking processes and shows their similarities and differences. The basis of

The purpose of this study was to describe and interpret the cognition of a graduate student during information retrieval using the World Wide Web. The participant was a doctoral student in psychology with little experience using the Internet, and even less experience with the World Wide Web. The student performed an open search of her dissertation…

Three specific contributions to the field of information retrieval are presented. The first two describe the establishment of an adaptive, interactive man-machine dialogue that produces a form of unsolicited librarian-like assistance for the user in his selection of index terms to characterize an indexing function. The data set upon which the…

Discusses how to improve research and design more effective information retrieval systems. Topics include human-system interaction; knowledge integration; pluralistic research approaches; enhanced access to research data; more multidisciplinary integrative devices and conceptual mappings; establishing a greater mass of published research findings…

Using examples of data from the areas of information retrieval and of multivariate data analysis, six hierarchic clustering algorithms (single link, median, centroid, group average, complete link, Ward's) are examined and evaluated by using three proposed coefficients of hierarchic structure. Nine references are cited. (EJS)
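
Two of the linkage criteria named above (single link and complete link) differ only in how inter-cluster distance is derived from pairwise distances; a plain-Python agglomerative sketch, with an invented distance table, is:

```python
def linkage_distance(c1, c2, dist, mode):
    """Single link uses the closest pair; complete link the farthest pair."""
    pair_dists = [dist[(min(a, b), max(a, b))] for a in c1 for b in c2]
    return min(pair_dists) if mode == "single" else max(pair_dists)

def cluster(points, dist, k, mode="single"):
    """Greedily merge the two closest clusters until only k remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: linkage_distance(
                clusters[ij[0]], clusters[ij[1]], dist, mode),
        )
        clusters[i] += clusters.pop(j)
    return [sorted(c) for c in clusters]

# Pairwise distances over four items: 0-1 close, 2-3 close, groups far apart.
D = {(0, 1): 1.0, (0, 2): 9.0, (0, 3): 9.5,
     (1, 2): 8.5, (1, 3): 9.0, (2, 3): 1.2}
print(cluster([0, 1, 2, 3], D, 2))  # → [[0, 1], [2, 3]]
```

On this toy data the two linkages agree; the algorithms diverge (and the article's structure coefficients become interesting) only when clusters have ambiguous, chained shapes.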

A project is being conducted to test the feasibility of an information storage and retrieval system for museum specimen data, particularly for natural history museums. A pilot data processing system has been developed, with the specimen records from the national collections of birds, marine crustaceans, and rocks used as sample data. The research…

Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques…

Presents an efficient spoken-access approach for both Chinese text and Mandarin speech information retrieval. Highlights include human-computer interaction via voice input, speech query recognition at the syllable level, automatic term suggestion, relevance feedback techniques, and experiments that show an improvement in the effectiveness of…

Presents adaptations and tests undertaken to allow an information retrieval system to forecast the likelihood of avalanches on a particular day; the forecasting process uses historical data of the weather and avalanche conditions for a large number of days. Describes a method for adapting these data into a form usable by a text-based IR system and…

Describes a study of graduate students that examined search behavior and affective response to a hypertext-based bibliographic information retrieval system called HyperLynx for searchers with different search skills and backgrounds. Previous experience with hypertext or Boolean searching is examined, and search times are discussed. (21 references)…

Presents six principles for building and evaluating Web-based information retrieval interfaces: help the user develop an understanding of the interface and search process, judge the value of continuing search paths, and refine search queries or search topics; avoid complex navigation; make system actions explicit; and provide verbal labels…

The publication of papers describing activity in computer-based storage and retrieval of geoscience information has continued at a vigorous pace since release of the last bibliography, which covered the period 1946-69 (ED 076 203). A total of 211 references are identified, nearly all of which were published during the three-year period 1970-72…

Wastewater treatment, microbiology, biochemistry, and engineering are the major subject areas covered in this preliminary thesaurus designed for use in a private information retrieval system. The thesaurus was developed through meetings where each descriptor was discussed, necessary scope notes were written, definition and cross references were…

For those unfamiliar with the Stanford Physics Information Retrieval System (SPIRES) an introduction and background section is provided in this 1969-70 annual report. This is followed by: (1) the SPIRES I prototype, (2) developing a production system--SPIRES II and (3) system scope and requirements analysis. The appendices present: (1) Stanford…

Presents a metric for comparing the performance of common classes of Internet information retrieval tools, including human indexed catalogs of Web resources and automatically indexed databases of Web pages. The benefit of the proposed metric is that it is relevance-based, and it facilitates the comparison of the performance of different classes of…
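
A relevance-based comparison of retrieval tools can be as simple as scoring each tool's ranked output against a shared relevant set; the sketch below uses precision at a cutoff with invented result lists, not the metric or data from the article.

```python
# Ground-truth relevant documents for one query (illustrative).
RELEVANT = {"d1", "d3", "d5"}

def precision_at_k(ranked, k):
    """Fraction of the top-k results that are relevant."""
    hits = sum(1 for d in ranked[:k] if d in RELEVANT)
    return hits / k

catalog_results = ["d1", "d3", "d7", "d5"]   # human-indexed catalog (toy)
crawler_results = ["d2", "d1", "d6", "d4"]   # automatically indexed DB (toy)

print(precision_at_k(catalog_results, 4))  # → 0.75
print(precision_at_k(crawler_results, 4))  # → 0.25
```

Because both tools are judged against the same relevant set, the scores are directly comparable across tool classes, which is the property the abstract highlights.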

Proposes a method of evaluating information retrieval systems by concentrating on individual tools (commands, their menus or graphic interface equivalents, or a move/stratagem). A user would assess the relative success of a small part of a search, and every tool used in that part would be credited with a contribution to the result. Cumulative…

Discusses the CODER system, which was developed to investigate the application of artificial intelligence methods to increase the effectiveness of information retrieval systems, particularly those involving heterogeneous documents. Highlights include the use of PROLOG programming, blackboard-based designs, knowledge engineering, lexicological…

Applying basic data assimilation techniques to the evaluation of remote-sensing products can clarify the impact of sensor design issues on the value of retrievals for hydrologic applications. For instance, the impact of incidence angle on the accuracy of radar surface soil moisture retrievals is largely unknown due to discrepancies in theoretical backscatter models as well as limitations in the availability of sufficiently extensive ground-based soil moisture observations for validation purposes. In this presentation we will describe and apply a data assimilation evaluation technique for scatterometer-based surface soil moisture retrievals that does not require ground-based soil moisture observations to examine the sensitivity of retrieval skill to variations in incidence angle. Past results with the approach have shown that it is capable of detecting relative variations in the correlation between anomalies in remotely-sensed surface soil moisture retrievals and ground-truth soil moisture measurements. Application of the evaluation approach to the TU-Wien WARP5.0 European Remote Sensing (ERS) soil moisture data set over two regional-scale (~1000 km) domains in the Southern United States indicates a relative reduction in anomaly correlation-based skill of between 20% and 30% when moving between the lowest (< 26 degrees) and highest (> 50 degrees) ERS incidence angle ranges. These changes in anomaly-based correlation provide a useful proxy for relative variations in the value of estimates for data assimilation applications and can therefore be used to inform the design of appropriate retrieval algorithms. For example, the observed sensitivity of correlation-based skill with incidence angle is in approximate agreement with soil moisture retrieval uncertainty predictions made using the WARP5.0 backscatter model. However, the coupling of a bare soil backscatter model with the so-called "vegetation water cloud" model is shown to generally over-estimate the impact of

AIRNET was a thematic network project (2002–2004) initiated to stimulate the interaction between researchers in air pollution and health in Europe. As part of AIRNET’s communication strategy, a standardized workshop model was developed to organize national meetings on air pollution and health (AIRNET network days). Emphasis was placed on tailoring the national workshop information and related activities to the specific needs of a wider range of stakeholders (e.g., policy makers, nongovernmental organizations, industry representatives). In this report we present an overview of the results of four workshops held in western, northern, central/eastern, and southern regions of Europe in 2004. Overall, workshop experiences indicated that by actively involving participants in the planning of each meeting, AIRNET helped create an event that addressed participants’ needs and interests. A wide range of communication formats used to discuss air pollution and health also helped stimulate active interaction among participants. Overall, the national workshops held by AIRNET offered a way to improve communication among the different stakeholders. Because a broad stakeholder involvement in decision making can positively affect the development of widely supported policies, such meetings should be continued for Europe and elsewhere. PMID:16835066

The Idaho Drug Information Service has been in operation since 1972. During this time, five different files and manual methods of filing have evolved. As a result of confusion over indexing terms, information became lost within the filing systems, and the files fell into disuse. A reorganization of the files was undertaken in an attempt to develop a filing system that would be functional and efficient. Methods of manual filing are briefly reviewed. A computerized on-line key word indexing system for information storage and retrieval was initiated. The development and operation of the Drug Information Retrieval Terminal System (DIRTS) is described completely. At this time, DIRTS is fully operational. The system has eliminated the previous problems encountered with the manual filing systems, and user response has been good.

This paper presents a novel approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. The visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. The system can also assist the user in interactively selecting a region of interest (ROI) and searching for similar image ROIs, and a spatial verification step is used in post-processing to improve retrieval results based on location information. The hypothesis that such approaches improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles in four different collections.

We discuss a scheme to retrieve transient conformational molecular structure information using photoelectron angular distributions (PADs) that have been averaged over partial alignments of isolated molecules. The photoelectron is pulled out from a localized inner-shell molecular orbital by an X-ray photon. We show that a transient change in the atomic positions from their equilibrium will lead to a sensitive change in the alignment-averaged PADs, which can be measured and used to retrieve the former. Exploiting the experimental convenience of changing the photon polarization direction, we show that it is advantageous to use PADs obtained from multiple photon polarization directions. Lastly, a simple single-scattering model is proposed and benchmarked to describe the photoionization process and to perform the retrieval using a multiple-parameter fitting method.

Several lines of evidence in humans and experimental animals suggest that the hippocampus is critical for the formation and retrieval of spatial memory. However, although the hippocampus is reciprocally connected to adjacent cortices within the medial temporal lobe and they, in turn, are connected to the neocortex, little is known regarding the function of these cortices in memory. Here, using a reference spatial memory task in the radial maze, we show that neurotoxic perirhinal cortex lesions produce a profound retrograde amnesia when learning-surgery intervals of 1 or 50 d are used (Experiment 1). With the aim of dissociating between consolidation and retrieval processes, we injected lidocaine either daily after training (Experiment 2) or before a retention test once the learning had been completed (Experiment 3). Results show that reversible perirhinal inactivation impairs retrieval but not consolidation. However, the same procedure followed in Experiment 2 disrupted consolidation when the lidocaine was injected into the dorsal hippocampus. The results of Experiment 4 rule out the possibility that the deficit in retrieval is due to a state-dependent effect. These findings demonstrate the differential contribution of various regions of the medial temporal lobe to memory, suggesting that the perirhinal cortex plays a key role in the retrieval of spatial information for a long period of time.

We need current information to support effective practice. This information can be accessed relatively quickly and inexpensively, using nothing more than a computer and an Internet connection. This demands skills in searching and accessing that information. In many hospitals, computers with an Internet connection are readily available in the clinical wards, allowing those with skills in retrieving medical information to find that information near the bedside where it is needed. This report describes techniques to access information on the Internet from various sources. This information exists at high levels, such as PubMed, the National Guideline Clearinghouse, and the Cochrane Library. However, there is also much information available on the Internet that has not been validated or subjected to peer review. Thus, it is important not only to find information but also to separate useful from useless information.

The role and place of the machine in scientific and technical information are explored, including: basic trends in the development of information retrieval systems; preparation of engineering and scientific cadres with respect to mechanization and automation of information works; the logic of descriptor retrieval systems; the 'SETKA-3' automated…

Presents an algorithmic approach to addressing the vocabulary problem in scientific information retrieval and information sharing, using the molecular biology domain as an example. A cognitive study and a follow-up document retrieval study were conducted using first a conjoined fly-worm thesaurus and then an actual worm database and the conjoined…

The current study sought to examine the relative contributions of encoding and retrieval processes in accessing contextual information in the absence of item memory using an extralist cuing procedure in which the retrieval cues used to query memory for contextual information were "related" to the target item but never actually studied.…

The manual describes and documents the retrieval system in terms of its tape and disk file programs and its search programs as used by the Lehigh Center for the Information Sciences for selected current literature of the information sciences, about 2,500 document references. The system is presently on-line via teletype and conversion is in process…

The U.S. National Institutes of Health, through its National Library of Medicine, developed ClinicalTrials.gov to provide the public with easy access to information on clinical trials on a wide range of conditions or diseases. Only English language information retrieval is currently supported. Given the growing number of Spanish speakers in the U.S. and their increasing use of the Web, we anticipate a significant increase in Spanish-speaking users. This study compares the effectiveness of two common cross-language information retrieval methods using machine translation, query translation versus document translation, using a subset of genuine user queries from ClinicalTrials.gov. Preliminary results conducted with the ClinicalTrials.gov search engine show that in our environment, query translation is statistically significantly better than document translation. We discuss possible reasons for this result and we conclude with suggestions for future work.
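
The two cross-language strategies being compared can be contrasted with a toy sketch: translate the Spanish query into English, or translate the English documents into Spanish, then match in a common language. The tiny dictionary below stands in for a real machine translation system and is purely illustrative.

```python
# Hypothetical bilingual lexicon (stand-in for machine translation).
ES_TO_EN = {"cancer": "cancer", "pulmon": "lung", "ensayo": "trial"}
EN_TO_ES = {v: k for k, v in ES_TO_EN.items()}

# Toy English document collection, tokenized.
DOCS_EN = [["lung", "cancer", "trial"], ["diabetes", "trial"]]

def query_translation(query_es):
    """Translate the query into English, then match English documents."""
    query_en = [ES_TO_EN.get(w, w) for w in query_es]
    return [i for i, d in enumerate(DOCS_EN) if set(query_en) & set(d)]

def document_translation(query_es):
    """Translate every document into Spanish, then match the raw query."""
    docs_es = [[EN_TO_ES.get(w, w) for w in d] for d in DOCS_EN]
    return [i for i, d in enumerate(docs_es) if set(query_es) & set(d)]

print(query_translation(["cancer", "pulmon"]))    # → [0]
print(document_translation(["cancer", "pulmon"]))  # → [0]
```

With a perfect lexicon the two strategies agree; in practice they diverge because translation errors hit one short query versus many long documents differently, which is what the study measures.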

Nurses' information needs relate to nursing orders, and nursing orders have many contexts, including body systems, safety practices, and other clinical categories. When searching for information related to orders, one search term might retrieve documents related to multiple orders. We clustered nursing orders into sets related by the same logical clinical context. We then generated clusters and their search terms from a data set of 636 orders obtained from a CIS/CPOE system at an academic medical center. We refined those cluster search terms by searching an electronic nursing procedure manual to retrieve resources that could answer one of six generic nursing questions. Sixty-three cluster search terms were identified. The search terms for 100 (16%) of the orders were validated in a second hospital's electronic nursing procedure manual; precision was 32.5%. Our process of identifying cluster search terms may be a useful method to obtain clinically relevant information resources.

Direct beam spectral extinction measurements of solar radiation contain important information on atmospheric composition in a form that is essentially free from multiple scattering contributions that otherwise tend to complicate the data analysis and information retrieval. Such direct beam extinction measurements are available from the solar occultation satellite-based measurements made by the Stratospheric and Aerosol Gas Experiment (SAGE II) instrument and by ground-based Multi-Filter Shadowband Radiometers (MFRSRs). The SAGE II data provide cross-sectional slices of the atmosphere twice per orbit at seven wavelengths between 385 and 1020 nm with approximately 1 km vertical resolution, while the MFRSR data provide atmospheric column measurements at six wavelengths between 415 and 940 nm but at one minute time intervals. We apply the same retrieval technique of simultaneous least-squares fit to the observed spectral extinctions to retrieve aerosol optical depth, effective radius and variance, and ozone, nitrogen dioxide, and water vapor amounts from the SAGE II and MFRSR measurements. The retrieval technique utilizes a physical model approach based on laboratory measurements of ozone and nitrogen dioxide extinction, line-by-line and numerical k-distribution calculations for water vapor absorption, and Mie scattering constraints on aerosol spectral extinction properties. The SAGE II measurements have the advantage of being self-calibrating in that deep space provides an effective zero point for the relative spectral extinctions. The MFRSR measurements require periodic clear-day Langley regression calibration events to maintain accurate knowledge of instrument calibration.

Nowadays, the web has enabled an explosive growth of information sharing (there are currently over 4 billion pages covering most areas of human endeavor), so the web faces a new challenge of information overload. The challenge now before us is not only to help people locate relevant information precisely but also to access and aggregate a variety of information from different resources automatically. Current web documents are in human-oriented formats suitable for presentation, but machines cannot understand the meaning of the documents. To address this issue, Berners-Lee proposed the concept of the semantic web. With semantic web technology, web information can be understood and processed by machines, providing new possibilities for automatic web information processing. A main problem of semantic web information retrieval is that when such a system lacks sufficient knowledge, it returns a large number of meaningless results to users. In this paper, we present the architecture of an information retrieval system based on the semantic web. In addition, our system employs an inference engine to check whether a query should be posed to the keyword-based search engine or to the semantic search engine.

A method for fast information retrieval from a probe storage device is considered. It is shown that information can be stored and retrieved using the optical diffraction patterns obtained by the illumination of a large array of cantilevers by a monochromatic light source. In thermo-mechanical probe storage, the information is stored as a sequence of indentations on the polymer medium. To retrieve the information, the array of probes is actuated by applying a bending force to the cantilevers. Probes positioned over indentations experience deflection by the depth of the indentation, probes over the flat media remain un-deflected. Thus the array of actuated probes can be viewed as an irregular optical grating, which creates a data-dependent diffraction pattern when illuminated by laser light. We develop a low complexity modulation scheme, which allows the extraction of information stored in the pattern of indentations on the media from Fourier coefficients of the intensity of the diffraction pattern. We then derive a low-complexity maximum-likelihood sequence detection algorithm for retrieving the user information from the Fourier coefficients. The derivation of both the modulation and the detection schemes is based on the Fraunhofer formula for data-dependent diffraction patterns. The applicability of Fraunhofer diffraction theory to the optical set-up relevant for probe storage is established both theoretically and experimentally. We confirm the potential of the optical readout technique by demonstrating that the impairment characteristics of probe storage channels (channel noise, global positioning errors, small indentation depth) do not lead to an unacceptable increase in data recovery error rates. We also show that for as long as the Fresnel number F ≤ 0.1, the optimal channel detector derived from Fraunhofer diffraction theory does not suffer any significant performance degradation.
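
The data dependence of the diffraction pattern can be illustrated with a toy far-field computation: under the Fraunhofer approximation the far field is (up to constants) the Fourier transform of the grating's phase profile, so different bit patterns yield different intensity spectra. The phase depth and array size below are arbitrary assumptions, and no detection step is attempted.

```python
import cmath

def farfield_intensity(bits, depth_phase=cmath.pi / 2, n_freq=8):
    """Toy Fraunhofer pattern: |DFT|^2 of a bit-dependent phase profile."""
    # Each stored bit deflects its probe, adding a phase shift
    # proportional to the (assumed) indentation depth.
    field = [cmath.exp(1j * depth_phase * b) for b in bits]
    intensities = []
    for k in range(n_freq):
        coeff = sum(f * cmath.exp(-2j * cmath.pi * k * n / len(field))
                    for n, f in enumerate(field))
        intensities.append(abs(coeff) ** 2)
    return intensities

i1 = farfield_intensity([1, 0, 1, 0, 1, 0, 1, 0])
i2 = farfield_intensity([1, 1, 0, 0, 1, 1, 0, 0])
print(i1 != i2)  # the diffraction pattern is data-dependent
```

Note that the zero-frequency component depends only on how many bits are set, so patterns with equal bit counts share it; the information that distinguishes them sits in the higher Fourier coefficients, which is why the paper's modulation scheme reads those out.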

under study for peptic ulcer disease. There are significant new developments in diagnosing and treating this common and often serious disorder. The...foundation for an expanded Human Genetics Knowledge Base that covers some 3,000 monogenic traits. Through consensus, a panel of subject matter experts is

The classic workshop model for science educators in the past has been largely in situ: you show up somewhere, meet your fellow workshoppers, get personal treatment from instructors over several intensive days of content delivery, illustrative activities, and practice in technique, and try to incorporate what you've learned once you get back. But in an age when everybody's digitally connected, and many can't afford to travel, can an "online" workshop be as effective? This was a key question in the Astronomical Society of the Pacific (ASP) NSF-funded project "Astronomy from the Ground Up," designed to increase astronomy education capacity at small and medium-sized science and nature centers and museums around the U.S. Together with its institutional partners, the Association of Science and Technology Centers (ASTC) and the Institute for Learning Innovation (ILI), and a cadre of individual partners, the ASP conducted both on-site and online workshops and created an online community of practice to increase informal educator capacity to present astronomy to their audiences, and to evaluate the relative effectiveness of the on-site and online delivery schemes. The presenter(s) will share some initial results and findings of the project.

A Family Workshop is an informal, multidisciplined educational program for adults and children, organized by a team of teachers. This article discusses the Lavender Hill Family Workshop, one of many, which attempts to provide education in various subject areas for adults and for children while also integrating both objectives in order to educate…

An algorithm to retrieve aerosol optical depth (AOD), single scattering albedo (SSA), and aerosol loading height is developed for GEMS (Geostationary Environment Monitoring Spectrometer) measurement. The GEMS is planned to be launched in geostationary orbit in 2018, and employs hyper-spectral imaging with 0.6 nm resolution to observe solar backscatter radiation in the UV and visible range. In the UV range, the low surface contribution to the backscattered radiation and strong interaction between aerosol absorption and molecular scattering can be advantageous in retrieving aerosol information such as AOD and SSA [Torres et al., 2007; Torres et al., 2013; Ahn et al., 2014]. However, the large contribution of atmospheric scattering increases the sensitivity of the backward radiance to aerosol loading height. Thus, the assumption of aerosol loading height becomes an important issue for obtaining accurate results. Accordingly, this study focused on the simultaneous retrieval of aerosol loading height with AOD and SSA by utilizing the optimal estimation method. For the RTM simulation, the aerosol optical properties were analyzed from AERONET inversion data (level 2.0) at 46 AERONET sites over Asia. Also, a 2-channel inversion method is applied to estimate a priori values of the aerosol information to solve the Levenberg-Marquardt equation. The GEMS aerosol algorithm is tested with the OMI level-1B dataset, a provisional data set for GEMS measurement, and the result is compared with the OMI standard aerosol product and AERONET values. The retrieved AOD and SSA show reasonable distributions compared with OMI products, and are well correlated with the values measured from AERONET. However, retrieval uncertainty in aerosol loading height is relatively larger than for the other products.

The theory of linguistics teaches us that a hierarchical structure exists in linguistic expressions, from letter to word root, and on to word and sentence. By applying syntax and semantics beyond words, one can further recognize the grammatical relationships among words and the meaning of a sequence of words. This layered view of a spoken language is useful for effective analysis and automated processing. It is therefore interesting to ask whether a similar hierarchy of representation exists for visual information. A class of techniques similar in nature to linguistic parsing is found in the Lempel-Ziv incremental parsing scheme. Based on a new class of multidimensional incremental parsing algorithms extended from Lempel-Ziv incremental parsing, a new framework for image retrieval, which takes advantage of the source characterization property of the incremental parsing algorithm, was proposed recently. With the incremental parsing technique, a given image is decomposed into a number of patches, called a parsed representation. This representation can be thought of as a morphological interface between elementary pixels and a higher-level representation. In this work, we examine the properties of the two-dimensional parsed representation in the context of imagery information retrieval and in contrast to vector quantization, i.e., fixed square-block representations with minimum average distortion criteria. We implemented four image retrieval systems for the comparative study: three, called IPSILON image retrieval systems, use the parsed representation with different perceptual distortion thresholds, and one uses conventional vector quantization for visual pattern analysis. We observe that different perceptual distortion thresholds in visual pattern matching do not have serious effects on retrieval precision, although allowing looser perceptual thresholds in image compression results in poor reconstruction fidelity. We compare the effectiveness of the use of the…

The traditional approach to ship design involves a method that takes a form earlier called the 'general design diagram' and now known as the 'design spiral': an iterative ship design process that allows for an increase in complexity and precision across the design cycle. Several advancements have been made to the design spiral; however, it remains inefficient for handling complex simultaneous design changes, especially when later variable changes affect the ship's performance characteristics evaluated in earlier stages. Reviewed in this paper are several advancements in high-speed planing craft design at the preliminary design stage. An optimization framework for high-speed planing craft is discussed which consists of a hull surface information retrieval module, a suite of state-of-the-art optimization algorithms, and standard naval architectural performance estimation tools. A summary of the implementation of the proposed hull surface information retrieval and several case studies are presented to demonstrate the capabilities of the framework.

Vector-product information retrieval (IR) systems produce retrieval results superior to all other searching methods but presently have no commercial implementations beyond the personal computer environment. The NASA Electronic Library System (NELS) provides a ranked list of the most likely relevant objects in collections in response to a natural language query. Additionally, the system is constructed using standards and tools (Unix, X-Windows, Motif, and TCP/IP) that permit its operation in organizations that possess many different hosts, workstations, and platforms. There are no known commercial equivalents to this product at this time. The product has applications in all corporate management environments, particularly those that are information intensive, such as finance, manufacturing, biotechnology, and research and development.
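The ranked-list behaviour described for NELS can be illustrated with a minimal vector-model sketch. The documents, query, and raw term-frequency weighting below are invented for illustration and are not the NELS implementation:

```python
import math
from collections import Counter

# Toy document collection and natural language query (invented).
docs = {
    "d1": "solar array power systems for spacecraft",
    "d2": "library catalog search and retrieval",
    "d3": "natural language query retrieval ranking",
}
query = "natural language retrieval"

def vec(text):
    # Term-frequency vector (a real system would apply weighting like TF-IDF).
    return Counter(text.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

q = vec(query)
# Rank documents by similarity to the query vector, most relevant first.
ranked = sorted(docs, key=lambda d: cosine(q, vec(docs[d])), reverse=True)
print(ranked)
```

The document sharing the most query terms ranks first; documents with no overlap fall to the bottom of the list.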

Background and objective Doc'CISMeF (DC) is a semantic search engine used to find resources in CISMeF-BP, a quality-controlled health gateway that gathers guidelines available on the internet in French. Visualization of Concepts in Medicine (VCM) is an iconic language that may ease information retrieval tasks. This study aimed to describe the creation and evaluation of an interface integrating VCM in DC in order to make this search engine easier to use. Methods Focus groups were organized to suggest ways to enhance information retrieval tasks using VCM in DC. A VCM interface was created and improved using an ergonomic evaluation approach. 20 physicians were recruited to compare the VCM interface with the non-VCM one. Each evaluator answered two different clinical scenarios in each interface. The ability and time taken to select a relevant resource were recorded and compared. A usability analysis was performed using the System Usability Scale (SUS). Results The VCM interface contains a filter based on icons, and icons describing each resource, following focus group recommendations. Some ergonomic issues were resolved before evaluation. Use of VCM significantly increased the success of information retrieval tasks (OR=11; 95% CI 1.4 to 507). Nonetheless, it took significantly more time to find a relevant resource with the VCM interface (101 vs 65 s; p=0.02). SUS revealed ‘good’ usability with an average score of 74/100. Conclusions VCM was successfully implemented in DC as an option. It increased the success rate of information retrieval tasks, despite requiring slightly more time, and was well accepted by end-users. PMID:24650636

The goal of Cross-Language Information Retrieval (CLIR) is to support the task of searching multilingual collections by allowing users to enter queries in…37,600 entry) ELRA Basic Multilingual Lexicon covered common terms quite well, with 97% of the 1,000 most common English words being found (after…text), 33 English topic descriptions, and binary (yes-no) relevance judgments for topic-document pairs. We used this monolingual test collection…

This document presents the user's guide, system description, and mathematical specifications for the Langley Atmospheric Information Retrieval System (LAIRS). It also includes a description of an optimal procedure for operational use of LAIRS. The primary objective of the LAIRS program is to make it possible to obtain accurate estimates of atmospheric pressure, density, temperature, and winds along Shuttle reentry trajectories for use in postflight data reduction.

Scientific data received from satellites are characterized as a multi-dimensional time series whose terms are vector functions of a vector of measurement conditions. Information retrieval methods are used to construct lower-dimensional samples on the basis of the condition vector, in order to obtain these data and to construct partial relations. The methods are applied to the joint Soviet-French Arkad project.

Satellite measurements of tropospheric carbon monoxide (CO) enable a wide array of applications including studies of air quality and pollution transport. The MOPITT (Measurements of Pollution in the Troposphere) instrument on the Earth Observing System Terra platform has been measuring CO concentrations globally since March 2000. As indicated by the Degrees of Freedom for Signal (DFS), the standard metric for trace-gas retrieval information content, MOPITT retrieval performance varies over a wide range. We show that both instrumental and geophysical effects yield significant geographical and temporal variability in MOPITT DFS values. Instrumental radiance uncertainties, which describe random errors (or "noise") in the calibrated radiances, vary over long time scales (e.g., months to years) and between the four elements of MOPITT's linear detector array. MOPITT retrieval performance depends on several factors including thermal contrast, fine-scale variability of surface properties, and CO loading. The relative importance of these effects is highly variable, as demonstrated by analyses of monthly mean DFS values for the United States and the Amazon Basin. An understanding of the geographical and temporal variability of MOPITT retrieval performance is potentially valuable to data users seeking to limit the influence of the a priori through data filtering. To illustrate, we demonstrate that calculated regional-average CO mixing ratios may be improved by excluding observations from a subset of pixels in MOPITT's linear detector array.
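DFS is conventionally computed as the trace of the retrieval averaging kernel. The sketch below illustrates how larger radiance noise reduces DFS; the three-level Jacobian and covariances are invented, not actual MOPITT weighting functions:

```python
import numpy as np

# Hypothetical 3-level retrieval: Jacobian K and covariances are toy values.
K = np.array([[0.9, 0.4, 0.1],
              [0.3, 0.8, 0.3],
              [0.1, 0.4, 0.7]])
S_a = np.diag([0.5, 0.5, 0.5])              # a priori covariance
S_e_low_noise = np.diag([0.01, 0.01, 0.01])  # small radiance errors
S_e_high_noise = np.diag([0.5, 0.5, 0.5])    # large radiance errors

def dfs(K, S_a, S_e):
    # Averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K;
    # DFS is its trace (number of independent pieces of information).
    G = np.linalg.inv(K.T @ np.linalg.inv(S_e) @ K + np.linalg.inv(S_a))
    A = G @ K.T @ np.linalg.inv(S_e) @ K
    return np.trace(A)

print(dfs(K, S_a, S_e_low_noise), dfs(K, S_a, S_e_high_noise))
```

With low noise the DFS approaches the number of retrieved levels; as the radiance uncertainty grows, the retrieval relaxes toward the a priori and the DFS falls, which is the effect the abstract describes across MOPITT's detector elements.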

Nursing professionals have long recognized the importance of research to practice and the value of research evidence. Yet nurses still do not use research findings in practice. The purpose of this paper was to describe nurses' skills in using literature databases and the Internet in psychiatric hospitals, and the associations of nurses' gender, age, and job position with their information retrieval skills. The study was carried out in 2004 among nursing staff (N=183) on nine acute psychiatric wards in two psychiatric hospitals in Finland (n=180, response rate 98%). The Finnish version of the European Computer Driving Licence test (ECDL) was used as the data collection instrument. The study showed clear deficits in information retrieval skills among nurses working in psychiatric hospitals. Thus, nurses' competence does not support the realization of evidence-based practice in these hospitals. It is therefore important to increase nurses' information retrieval skills through tailored continuing education modules. It would also be advisable to develop centralized systems for the internal dissemination of research findings for the use of nursing staff.

An informal workshop was held to discuss aspects of the calculation of range and energy deposition distributions which are of interest in ion implantation experiments. Topics covered include: problems encountered in using published range and energy deposition tabulations; some limitations in the solutions of range/energy transport equations; the effect of the scattering cross section on straggle; Monte Carlo calculations of ranges and straggling; damage studies in aluminum; simulation of heavy-ion irradiation of gold using MARLOWE; and MARLOWE calculations of range distribution parameters and their dependence on input data and calculational model. (GHT)

Executive Summary The International Atomic Energy Agency (IAEA) implements nuclear safeguards and verifies that countries are compliant with their international nuclear safeguards agreements. One of the key provisions in the safeguards agreement is the requirement that the country provide nuclear facility design and operating information to the IAEA relevant to safeguarding the facility, and at a very early stage. This provides the opportunity for the IAEA to verify the safeguards-relevant features of the facility and to periodically ensure that those features have not changed. The national authorities (State System of Accounting for and Control of Nuclear Material - SSAC) provide the design information for all facilities within a country to the IAEA. The design information is conveyed using the IAEA's Design Information Questionnaire (DIQ) and specifies: (1) identification of the facility's general character, purpose, capacity, and location; (2) description of the facility's layout and nuclear material form, location, and flow; (3) description of the features relating to nuclear material accounting, containment, and surveillance; and (4) description of existing and proposed procedures for nuclear material accounting and control, with identification of nuclear material balance areas. The DIQ is updated as required by written addendum. IAEA safeguards inspectors examine and verify this information in design information examination (DIE) and design information verification (DIV) activities to confirm that the facility has been constructed or is being operated as declared by the facility operator and national authorities, and to develop a suitable safeguards approach. Under the Next Generation Safeguards Initiative (NGSI), the National Nuclear Security Administration's (NNSA) Office of Non-Proliferation and International Security identified the need for more effective and efficient verification of design information by the IAEA for improving international safeguards

Testing is a powerful means to boost the retention of information. The extent to which the benefits of testing generalize to nontested information, however, is not clear. In three experiments, we found that completing cued-recall tests for a subset of studied materials enhanced retention for the specific information tested, as well as for associated, nontested information during later free-recall testing. In Experiment 1, this generalized benefit was revealed for lists of category-exemplar pairs. Experiment 2 extended the effect to unrelated words, suggesting that retrieval can enhance later free recall of nontested information that is bound solely through episodic context. In Experiment 3, we manipulated the format of the final test and found facilitation in free-recall, but not in cued-recall, testing. The results suggest that testing may facilitate later free recall in part by enhancing access to information that was present in a prior temporal or list context. More generally, these findings suggest that retrieval-induced facilitation extends to a broader range of conditions than has previously been suggested, and they further motivate the adoption of testing as a practical and effective learning tool.

The application of formal concept analysis to the problem of information retrieval has been shown to be useful but has lacked any real analysis of relevance ranking of search results. SearchSleuth is a program developed to experiment with automated local analysis of Web search using formal concept analysis. SearchSleuth extends a standard search interface to include a conceptual neighbourhood centred on a formal concept derived from the initial query. This neighbourhood of the concept derived from the search terms is decorated with its upper and lower neighbours, representing more general and more specific concepts, respectively. SearchSleuth is in many ways an archetype of search engines based on formal concept analysis, with some novel features. In SearchSleuth, the notion of related categories, which are themselves formal concepts, is also introduced. This allows the retrieval focus to shift to a new formal concept called a sibling. This movement across the concept lattice needs to relate one formal concept to another in a principled way. This paper presents the issues concerning exploring, searching, and ordering the space of related categories. The focus is on understanding the use and meaning of proximity and semantic distance in the context of information retrieval using formal concept analysis.
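The formal concept that such a system centres its neighbourhood on is built with the standard FCA derivation operators. A minimal sketch on an invented object-attribute context (not SearchSleuth's code) looks like this:

```python
# Toy formal context: objects are web pages, attributes are terms.
# The derivation operators below are standard FCA; the context is invented.
context = {
    "page1": {"jaguar", "car"},
    "page2": {"jaguar", "animal"},
    "page3": {"jaguar", "car", "dealer"},
    "page4": {"animal", "zoo"},
}

def extent(attrs):
    # All objects that have every attribute in `attrs`.
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):
    # All attributes shared by every object in `objs`.
    return set.intersection(*(context[o] for o in objs)) if objs else set()

# The formal concept derived from the query terms {"jaguar", "car"}:
objs = extent({"jaguar", "car"})
concept = (objs, intent(objs))
print(sorted(concept[0]), sorted(concept[1]))
```

A concept is a pair (extent, intent) closed under the two operators; upper and lower neighbours of this concept in the lattice then give the more general and more specific categories the abstract mentions.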

To remember a previous event, it is often helpful to use goal-directed control processes to constrain what comes to mind during retrieval. Behavioral studies have demonstrated that incidental learning of new "foil" words in a recognition test is superior if the participant is trying to remember studied items that were semantically encoded compared to items that were non-semantically encoded. Here, we applied subsequent memory analysis to fMRI data to understand the neural mechanisms underlying the "foil effect". Participants encoded information during deep semantic and shallow non-semantic tasks and were tested in a subsequent blocked memory task to examine how orienting retrieval towards different types of information influences the incidental encoding of new words presented as foils during the memory test phase. To assess memory for foils, participants performed a further surprise old/new recognition test involving foil words that were encountered during the previous memory test blocks as well as completely new words. Subsequent memory effects, distinguishing successful versus unsuccessful incidental encoding of foils, were observed in regions that included the left inferior frontal gyrus and posterior parietal cortex. The left inferior frontal gyrus exhibited disproportionately larger subsequent memory effects for semantic than non-semantic foils, and significant overlap in activity during semantic, but not non-semantic, initial encoding and foil encoding. The results suggest that orienting retrieval towards different types of foils involves re-implementing the neurocognitive processes that were involved during initial encoding.

This paper summarizes the results of the first of three workshops planned to assess the information needed by the Office of Conservation and Solar Energy (CS) to effectively evaluate the pending Energy Management Partnership Act (EMPA); the workshop concentrated on issues of the EMPA hierarchical partnership. The approach taken offers two major benefits to CS. First, by considering the problem of program evaluation while EMPA is still in the planning stage, this study identifies any baseline information that should be collected prior to implementation of EMPA, and also provides CS with the opportunity to include evaluation considerations in the operating guidelines for the program. Second, by identifying the potential problems and benefits inherent in EMPA and then identifying the information necessary to evaluate them, information requirements tied to the reasons for needing that information are generated, rather than a long, unrelated laundry list of information requirements. Drafting of EMPA is not yet complete. When the term EMPA is used here, it refers to a set of bills that are presently being melded together. The original EMPA bill, which originated in DOE, was designed to expand the role of state and local governments in achieving national energy goals. Specifically, EMPA would provide a total of $110 million annually to state and local governments over a five-year period to (1) develop an overall state energy plan, (2) consolidate three existing federal energy grant programs, (3) allow the Secretary to fund innovative projects directly at the local level, and (4) provide additional assistance to states to cover the administrative costs of existing energy programs. Other bills, which may be passed in conjunction with EMPA or incorporated into EMPA, place additional emphasis on the local level by allocating as much as $400 million annually to local governments.

We hypothesize that brain activity can be used to control future informationretrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements.

Few studies have been performed within cross-language information retrieval (CLIR) in the field of psychology and psychotherapy. The aim of this paper is to analyze and assess the quality of available query translation methods for CLIR on a health portal for psychology. A test base of 100 user queries, 50 Multi Word Units (WUs) and 50 Single WUs, was used. Swedish was the source language and English the target language. Query translation methods based on machine translation (MT) and dictionary look-up were used to submit query translations to two search engines: Google Site Search and Quick Ask. Standard IR evaluation measures and a qualitative analysis were used to assess the results. The lexicon extracted by word alignment of the portal's parallel corpus provided the better statistical results among the dictionary look-ups. Google Translate provided more linguistically correct translations overall and also delivered the better retrieval results among the MT methods.
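Dictionary look-up query translation of the kind evaluated here can be sketched in a few lines. The Swedish-English lexicon entries below are invented toy examples, far simpler than the aligned-corpus lexicon used in the study:

```python
# Toy bilingual lexicon (invented entries); a real lexicon would come from
# word alignment of a parallel corpus or a published dictionary.
sv_en = {
    "ångest": ["anxiety"],
    "depression": ["depression"],
    "behandling": ["treatment", "therapy"],
}

def translate_query(query):
    terms = []
    for word in query.lower().split():
        # Dictionary look-up; out-of-vocabulary words pass through unchanged.
        terms.extend(sv_en.get(word, [word]))
    return " ".join(terms)

print(translate_query("behandling depression"))
```

Note that ambiguous entries expand into several target terms ("behandling" yields both "treatment" and "therapy"), which is one reason dictionary look-up and MT can produce quite different retrieval results.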

Utilizing external collections to improve retrieval performance is a challenging research problem because various test collections are created for different purposes. Improving medical information retrieval has also gained much attention as various types of medical documents have become available to researchers since they began to be stored in machine-processable formats. In this paper, we propose an effective method of utilizing external collections based on the pseudo-relevance feedback approach. Our method incorporates the structure of external collections in estimating the individual components of the final feedback model. Extensive experiments on three medical collections (TREC CDS, CLEF eHealth, and OHSUMED) were performed, and the results were compared with a representative expansion approach utilizing external collections to show the superiority of our method.
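A basic pseudo-relevance feedback loop, the starting point such methods build on, can be sketched as follows. The documents and query are invented, and the real method additionally weights feedback components by the structure of the external collections:

```python
from collections import Counter

# Hypothetical top-k documents returned by an initial retrieval run.
top_docs = [
    "myocardial infarction treatment aspirin",
    "aspirin dosage myocardial infarction",
    "infarction risk factors smoking",
]
query = ["infarction"]

def expand(query, feedback_docs, n_terms=2):
    # Assume the top-ranked documents are relevant, count their terms,
    # and add the most frequent non-query terms to the query.
    counts = Counter()
    for doc in feedback_docs:
        counts.update(t for t in doc.split() if t not in query)
    return query + [t for t, _ in counts.most_common(n_terms)]

result = expand(query, top_docs)
print(result)
```

A production feedback model would weight terms probabilistically (e.g. a relevance model) rather than by raw counts, and would mix evidence from each external collection separately before forming the final feedback model.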

The semantic content, or the meaning, is the essence of autobiographical memories. In comparison to previous research, which has mainly focused on the phenomenological experience and the age distribution of retrieved events, the present study provides a novel view on the retrieval of event information by quantifying the information as semantic representations. We investigated the semantic representation of sensory-cued autobiographical events and studied the modality hierarchy within the multimodal retrieval cues. The experiment comprised a cued recall task, where the participants were presented with visual, auditory, olfactory or multimodal retrieval cues and asked to recall autobiographical events. The results indicated that the three different unimodal retrieval cues generate significantly different semantic representations. Further, the auditory and the visual modalities contributed the most to the semantic representation of the multimodally retrieved events. Finally, the semantic representation of the multimodal condition could be described as a combination of the three unimodal conditions. In conclusion, these results suggest that the meaning of the retrieved event information depends on the modality of the retrieval cues.

Digital dashboard systems in hospitals have been reported to provide a user interface (UI) that can centrally manage and retrieve various patient-related information on a single screen, support the real-time decision-making of medical professionals by integrating scattered medical information systems and core workflows, enhance the competence and decision-making ability of medical professionals, and reduce the probability of misdiagnosis. However, the hospital digital dashboard systems reported to date have some limitations when medical professionals use them for the general treatment of inpatients, because they were used only for the work processes of certain departments or were developed to improve specific disease-related indicators. Seoul National University Bundang Hospital developed a new concept of EMR system to overcome such limitations. The system allows medical professionals to easily access all information on inpatients and effectively retrieve important information from any part of the hospital by displaying inpatient information in the form of a digital dashboard. In this study, we introduce the structure, development methodology, and usage of this new system.

With the increasing amount of medical data available on the Web, looking for health information has become one of the most widely searched topics on the Internet. Patients and people of various backgrounds now use Web search engines to acquire medical information, including information about a specific disease, a medical treatment, or professional advice. Nonetheless, due to a lack of medical knowledge, many laypeople have difficulty forming appropriate queries to articulate their inquiries, which renders their search queries imprecise due to the use of unclear keywords. The use of these ambiguous and vague queries to describe patients' needs has resulted in the failure of Web search engines to retrieve accurate and relevant information. One of the most natural and promising methods to overcome this drawback is query expansion. In this paper, an original approach based on the Bat Algorithm is proposed to improve the retrieval effectiveness of query expansion in the medical field. In contrast to the existing literature, the proposed approach uses the Bat Algorithm to find the best expanded query among a set of expanded query candidates, while maintaining low computational complexity. Moreover, this new approach allows the length of the expanded query to be determined empirically. Numerical results on MEDLINE, the on-line medical information database, show that the proposed approach is more effective and efficient than the baseline.

This report describes the activities, conclusions, and recommendations of the Workshop on Evaluation Systems for Renewable Energy Systems sponsored by the Agency for International Development and SERI, held 20-22 February 1980 in Golden, Colorado. The primary objective of the workshop was to explore whether it was possible to establish common information elements that would describe the operation and impact of renewable energy projects in developing countries. The workshop provided a forum for development program managers to discuss the information they would like to receive about renewable energy projects and to determine whether common data could be agreed on to facilitate information exchange among development organizations. Such information could be shared among institutions and used to make informed judgments on the economic, technical, and social feasibility of the technologies. Because developing countries and foreign assistance agencies will be financing an increasing number of renewable energy projects, these organizations need information on the field experience of renewable energy technologies. The report describes the substance of the workshop discussions, includes the papers presented on information systems and technology evaluation, and provides lists of important information elements generated by both the plenary sessions and the small working groups.

Discrepancy between the abundance of cognate protein and RNA molecules is frequently observed. A theoretical understanding of this discrepancy remains elusive, and it is frequently described as a surprise and/or a technical difficulty in the literature. Protein and RNA represent different steps of the multi-step cellular genetic information flow process, in which they are dynamically produced and degraded. This paper explores a comparison with a similar process in computers: the multi-step flow of information from the storage level to the execution level. Functional similarities can be found in almost every facet of the retrieval process. First, a common architecture is shared, as the ribonome (RNA space) and the proteome (protein space) are functionally similar to the computer's primary memory and cache memory, respectively. Second, the retrieval process functions, in both systems, to support the operation of dynamic networks: biochemical regulatory networks in cells and, in computers, the virtual networks (of CPU instructions) that the CPU travels through while executing programs. Moreover, many regulatory techniques are implemented in computers at each step of the information retrieval process, with the goal of optimizing system performance, and cellular counterparts can readily be identified for these techniques. In other words, this comparative study attempts to use theoretical insight from computer system design principles as a catalyst to sketch an integrative view of the gene expression process, that is, how it functions to ensure efficient operation of the overall cellular regulatory network. In the context of this bird's-eye view, the discrepancy between protein and RNA abundance becomes a logical observation one would expect. It is suggested that this discrepancy, when interpreted in the context of system operation, serves as a potential source of information for deciphering the regulatory logic underlying biochemical network operation.

Digital technologies enable the storage of vast amounts of information, accessible with remarkable ease. However, along with this facility comes the challenge of finding pertinent information among volumes of nonrelevant information. The present article describes the pearl-harvesting methodological framework for information retrieval. Pearl…

This article examines an after-school program entitled Silk City Media Workshop. Briefly, the workshop engages youth in digital storytelling as a means of enhancing both their technology and literacy skills. Transcending these goals, this workshop also provides opportunities for youth to reveal multiple aspects of their unfolding identities as…

This paper is the first part of a two-part study that aims to retrieve aerosol particle size distribution (PSD) and refractive index from the multispectral and multiangular polarimetric measurements taken by the new-generation Sun photometer as part of the Aerosol Robotic Network (AERONET). It provides theoretical analysis and guidance to the companion study, in which we have developed an inversion algorithm for retrieving 22 aerosol microphysical parameters associated with a bimodal PSD function from real AERONET measurements. Our theoretical analysis starts with generating synthetic measurements at four spectral bands (440, 675, 870, and 1020 nm) with a Unified Linearized Vector Radiative Transfer Model for various types of spherical aerosol particles. Subsequently, the quantitative information content for retrieving aerosol parameters is investigated in four observation scenarios: I1, I2, P1, and P2. Measurements in scenario I1 comprise the solar direct radiances and almucantar radiances that are used in the current AERONET operational inversion algorithm. The other three scenarios include different additional measurements: (I2) the solar principal plane radiances, (P1) the solar principal plane radiances and polarization, and (P2) the solar almucantar polarization. Results indicate that adding polarization measurements can increase the degrees of freedom for signal by 2-5 in scenario P1, while not as much of an increase is found in scenarios I2 and P2. Correspondingly, the smallest retrieval errors are found in scenario P1: 2.3% (2.9%) for the fine-mode (coarse-mode) aerosol volume concentration, 1.3% (3.5%) for the effective radius, 7.2% (12%) for the effective variance, 0.005 (0.035) for the real-part refractive index, and 0.019 (0.068) for the single-scattering albedo. These errors represent a reduction from their counterparts in scenario I1 of 79% (57%), 76% (49%), 69% (52%), 66% (46%), and 49% (20%), respectively. We further…

Automated information retrieval is critical for enterprise information systems to acquire knowledge from vast data sets. One challenge in information retrieval is text classification. Current practice relies heavily on the classical naïve Bayes algorithm because of its simplicity and robustness. However, results from this algorithm are not always satisfactory. In this article, the limitations of the naïve Bayes algorithm are discussed, and it is found that the assumption of term independence is the main reason for unsatisfactory classification in many real-world applications. To overcome this limitation, the dependent factors are considered by integrating a term frequency-inverse document frequency (TF-IDF) weighting algorithm into the naïve Bayes classification. Moreover, the TF-IDF algorithm itself is improved so that both frequency and distribution information are taken into consideration. To illustrate the effectiveness of the proposed method, two simulation experiments were conducted; comparisons with other classification methods show that the proposed method outperforms existing algorithms in terms of precision and recall.
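The basic idea of weighting naive Bayes counts by TF-IDF can be sketched as follows. This is a generic illustration on an invented toy corpus, not the authors' improved TF-IDF variant:

```python
import math
from collections import Counter, defaultdict

# Tiny labeled corpus; documents and labels are invented for illustration.
train = [
    ("cheap pills buy now", "spam"),
    ("buy cheap watches", "spam"),
    ("meeting agenda attached", "ham"),
    ("project meeting notes", "ham"),
]

docs = [d for d, _ in train]
vocab = {t for d in docs for t in d.split()}
N = len(docs)
# Smoothed IDF: rarer terms get larger weight.
idf = {t: math.log(N / sum(1 for d in docs if t in d.split())) + 1.0
       for t in vocab}

# Per-class term mass accumulated with IDF weights instead of raw counts.
mass = defaultdict(Counter)
for d, y in train:
    for t in d.split():
        mass[y][t] += idf[t]

def predict(text):
    scores = {}
    for y in mass:
        total = sum(mass[y].values()) + len(vocab)   # Laplace smoothing
        score = math.log(sum(1 for _, c in train if c == y) / N)  # prior
        for t in text.split():
            if t in vocab:
                score += math.log((mass[y][t] + 1.0) / total)
        scores[y] = score
    return max(scores, key=scores.get)

print(predict("buy cheap pills"), predict("meeting notes"))
```

Replacing raw counts with IDF-weighted counts down-weights terms that appear in every class, which is one simple way of softening the term-independence assumption the abstract criticizes.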

A simple cue can be sufficient to elicit vivid recollection of a past episode. Theoretical models suggest that upon perceiving such a cue, disparate episodic elements held in neocortex are retrieved through hippocampal pattern completion. We tested this fundamental assumption by applying functional magnetic resonance imaging (fMRI) while objects or scenes were used to cue participants' recall of previously paired scenes or objects, respectively. We first demonstrate functional segregation within the medial temporal lobe (MTL), showing domain specificity in perirhinal and parahippocampal cortices (for object-processing vs scene-processing, respectively), but domain generality in the hippocampus (retrieval of both stimulus types). Critically, using fMRI latency analysis and dynamic causal modeling, we go on to demonstrate functional integration between these MTL regions during successful memory retrieval, with reversible signal flow from the cue region to the target region via the hippocampus. This supports the claim that the human hippocampus provides the vital associative link that integrates information held in different parts of cortex. PMID:23986252

Soil moisture retrievals from the Soil Moisture and Ocean Salinity (SMOS) instrument are assimilated into the Noah land surface model (LSM) within the NASA Land Information System (LIS). Before assimilation, SMOS retrievals are bias-corrected to match the model climatological distribution using a Cumulative Distribution Function (CDF) matching approach. Data assimilation is done via the Ensemble Kalman Filter. The goal is to improve the representation of soil moisture within the LSM, and ultimately to improve numerical weather forecasts through better land surface initialization. We present a case study of a large irrigated area in the lower Mississippi River Valley, a region with extensive rice agriculture. High soil moisture values in this region are observed by SMOS, but not captured in the forcing data. After assimilation, the model fields reflect the observed geographic patterns of soil moisture. Plans for a modeling experiment and operational use of the data are given. This work helps prepare for the assimilation of Soil Moisture Active/Passive (SMAP) retrievals in the near future.
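The CDF matching step can be sketched as quantile mapping: each retrieval value is replaced by the model-climatology value at the same empirical quantile. The sketch below uses synthetic Gaussian samples whose distributions and sizes are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (distributions invented): wet-biased satellite
# retrievals vs. the model's soil-moisture climatology, both in m3/m3.
obs = rng.normal(0.35, 0.08, 2000)
model = rng.normal(0.25, 0.05, 2000)

obs_sorted = np.sort(obs)
model_sorted = np.sort(model)

def cdf_match(x):
    """Map retrieval values to model values at the same empirical quantile."""
    q = np.clip(np.searchsorted(obs_sorted, x) / len(obs_sorted), 0.0, 1.0)
    return np.quantile(model_sorted, q)

# Bias-corrected retrievals now follow the model's climatological distribution.
corrected = cdf_match(obs)
```

After mapping, `corrected` has (to within sampling noise) the mean and spread of the model climatology, so the subsequent filter update corrects random errors rather than systematic bias.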

This study explored lexical-syntactic information - syntactic information that is stored in the lexicon - and its relation to syntactic and lexical impairments in aphasia. We focused on two types of lexical-syntactic information: predicate argument structure (PAS) of verbs (the number and types of arguments the verb selects) and grammatical gender of nouns. The participants were 17 Hebrew-speaking individuals with aphasia who had a syntactic deficit (agrammatism) or a lexical retrieval deficit (anomia) located at the semantic lexicon, the phonological output lexicon, or the phonological output buffer. After testing the participants' syntactic and lexical retrieval abilities and establishing the functional loci of their deficits, we assessed their PAS and grammatical gender knowledge. This assessment included sentence completion, sentence production, sentence repetition, and grammaticality judgment tasks. The participants' performance on these tests yielded several important dissociations. Three agrammatic participants had impaired syntax but unimpaired PAS knowledge. Three agrammatic participants had impaired syntax but unimpaired grammatical gender knowledge. This indicates that lexical-syntactic information is represented separately from syntax, and can be spared even when syntax at the sentence level, such as embedding and movement, is impaired. All 5 individuals with phonological output buffer impairment and all 3 individuals with phonological output lexicon impairment had preserved lexical-syntactic knowledge. These selective impairments indicate that lexical-syntactic information is represented at a lexical stage prior to the phonological lexicon and the phonological buffer. Three participants with impaired PAS (aPASia) and impaired grammatical gender who showed intact lexical-semantic knowledge indicate that the lexical-syntactic information is represented separately from the semantic lexicon. This led us to conclude that lexical-syntactic information is

The third annual report (covering the 18-month period from January 1969 to June 1970) of the Stanford Physics Information REtrieval System (SPIRES) project, which is developing an augmented bibliographic retrieval capability, is presented in this document. A first section describes the background of the project and its association with Project…

Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality. Monitoring of surface air quality and O3 mixing ratios is primarily conducted using in situ measurement networks. This is partially due to high-quality information related to air quality being limited from space-borne platforms due to coarse spatial resolution, limited temporal frequency, and minimal sensitivity to lower tropospheric and surface-level O3. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite is designed to address these limitations of current space-based platforms and to improve our ability to monitor North American air quality. TEMPO will provide hourly data of total column and vertical profiles of O3 with high spatial resolution to be used as a near-real-time air quality product. TEMPO O3 retrievals will apply the Smithsonian Astrophysical Observatory profile algorithm developed based on work from GOME, GOME-2, and OMI. This algorithm uses a priori O3 profile information from a climatological database developed from long-term ozonesonde measurements (tropopause-based (TB) O3 climatology). It has been shown that satellite O3 retrievals are sensitive to a priori O3 profiles and covariance matrices. During this work we investigate the climatological data to be used in TEMPO algorithms (TB O3) and simulated data from the NASA GMAO Goddard Earth Observing System (GEOS-5) Forward Processing (FP) near-real-time (NRT) model products. These two data products are evaluated with ground-based lidar data from the Tropospheric Ozone Lidar Network (TOLNet) at various locations in the US. This study evaluates the TB climatology, GEOS-5 climatology, and 3-hourly GEOS-5 data against lower tropospheric observations to demonstrate the accuracy of a priori information that could be used in TEMPO O3 algorithms. Here we present our initial analysis and the theoretical impact on TEMPO retrievals in the lower troposphere.

This paper describes a hand held device developed to assist people to locate and retrieve information about objects in a home. The system developed is a standalone device to assist persons with memory impairments, such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.

Teaching of differential diagnostic skills in medical education is often nonsystematic and touched on only in a disease-based manner in the context of patient cases. We conducted a controlled study in which a portion of fifth-year students received systematic teaching of differential diagnostics and information retrieval for a period of ten weeks, whereas the rest continued conventional basic training. We tested the problem-solving skills of both groups with a computer-assisted test. Students in the intervention group were more successful in the test and settled on the correct diagnosis more often than students in the control group.

Methods and systems for automatically generating lists of stop words for information retrieval and analysis. Generation of the stop words can include providing a corpus of documents and a plurality of keywords. From the corpus of documents, a term list of all terms is constructed and both a keyword adjacency frequency and a keyword frequency are determined. If a ratio of the keyword adjacency frequency to the keyword frequency for a particular term on the term list is less than a predetermined value, then that term is excluded from the term list. The resulting term list is truncated based on predetermined criteria to form a stop word list.
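The procedure can be sketched roughly as follows. The corpus, keyword set, ratio threshold, and truncation length are all invented for illustration; a real implementation would operate on a large corpus.

```python
from collections import Counter

# Toy corpus and keyword set (illustrative only).
docs = [
    "the quantum sensor measures the magnetic field",
    "a magnetic sensor is placed near the coil",
    "the coil and the sensor share a housing",
]
keywords = {"sensor", "magnetic", "coil", "quantum", "field", "housing"}

term_freq = Counter()   # how often each term occurs
adj_freq = Counter()    # how often it occurs adjacent to a keyword

for doc in docs:
    tokens = doc.split()
    term_freq.update(tokens)
    for i, t in enumerate(tokens):
        neighbors = tokens[max(0, i - 1):i] + tokens[i + 1:i + 2]
        if any(n in keywords for n in neighbors):
            adj_freq[t] += 1

# Terms whose adjacency-to-frequency ratio falls below the threshold are
# excluded; the survivors, truncated by frequency, form the stop word list.
RATIO = 0.5
candidates = [t for t in term_freq
              if t not in keywords and adj_freq[t] / term_freq[t] >= RATIO]
stop_words = sorted(candidates, key=term_freq.get, reverse=True)[:5]
```

The intuition is that function words like "the" constantly border content keywords, so their adjacency ratio is high, while incidental terms that rarely border keywords are filtered out before truncation.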

To improve the performance of distributed information retrieval in search engines, we propose a two-level cache structure based on the queries of the users' logs. We extract the highest rank queries of users from the static cache, in which the queries are the most popular. We adopt the dynamic cache as an auxiliary to optimize the distribution of the cache data. We propose a distribution strategy of the cache data. The experiments prove that the hit rate, the efficiency, and the time consumption of the two-level cache have advantages compared with other structures of cache. PMID:24363621
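A two-level structure of this general kind can be sketched as a frozen static cache seeded from a query log plus an LRU dynamic cache in front of the backend. The class and parameter names below are illustrative, not from the paper.

```python
from collections import Counter, OrderedDict

class TwoLevelCache:
    """Static cache for historically popular queries; LRU dynamic cache for the rest."""

    def __init__(self, log, static_size, dynamic_size, backend):
        top = Counter(log).most_common(static_size)
        self.static = {q: backend(q) for q, _ in top}   # precomputed, read-only
        self.dynamic = OrderedDict()                     # LRU order
        self.dynamic_size = dynamic_size
        self.backend = backend
        self.hits = self.misses = 0

    def get(self, query):
        if query in self.static:                 # level 1: static, never evicted
            self.hits += 1
            return self.static[query]
        if query in self.dynamic:                # level 2: dynamic LRU
            self.hits += 1
            self.dynamic.move_to_end(query)
            return self.dynamic[query]
        self.misses += 1                         # miss: ask the backend, cache it
        result = self.backend(query)
        self.dynamic[query] = result
        if len(self.dynamic) > self.dynamic_size:
            self.dynamic.popitem(last=False)     # evict least recently used
        return result

backend = lambda q: f"results for {q}"           # stand-in for the search backend
log = ["a", "a", "b", "a", "c"]                  # historical query log
cache = TwoLevelCache(log, static_size=1, dynamic_size=2, backend=backend)
```

With this log, query `"a"` is always served from the static level, while less popular queries flow through the LRU level and compete for its limited slots.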

The goal in information retrieval is to enable users to automatically and accurately find data relevant to their queries. One possible approach to this problem is to use the vector space model, which models documents and queries as vectors in the term space. The components of the vectors are determined by the term weighting scheme, a function of the frequencies of the terms in the document or query as well as throughout the collection. We discuss popular term weighting schemes and present several new schemes that offer improved performance.
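A minimal sketch of the vector space model with one classic weighting scheme, (1 + log tf) · log(N/df), and cosine ranking. The documents and the particular scheme are illustrative; the paper's own schemes differ.

```python
import math
from collections import Counter

# Tiny illustrative collection.
docs = {
    "d1": "information retrieval ranks documents by relevance",
    "d2": "the vector space model represents documents as vectors",
    "d3": "term weighting determines each vector component",
}

N = len(docs)
tokenized = {d: text.split() for d, text in docs.items()}
df = Counter(t for toks in tokenized.values() for t in set(toks))

def weight_vector(tokens):
    """Classic tf-idf weighting: (1 + log tf) * log(N / df)."""
    tf = Counter(tokens)
    # Terms in every document (or in none) carry no discriminating weight.
    return {t: (1 + math.log(tf[t])) * math.log(N / df[t])
            for t in tf if 0 < df[t] < N}

def cosine(u, v):
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query):
    """Return document ids ordered by cosine similarity to the query."""
    q = weight_vector(query.split())
    scores = {d: cosine(q, weight_vector(toks)) for d, toks in tokenized.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

For example, `rank("vector space model")` puts `d2` first, since it shares the most highly weighted terms with the query.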

Pursuant to the specifications of a research contract entered into in December 1983 with NASA, the Computer Science Departments of the University of Southwestern Louisiana and Southern University will work jointly to address a variety of research and educational issues relating to the use, by non-computer professionals, of some of the largest and most sophisticated interactive information storage and retrieval systems available. Over the projected 6- to 8-year life of the project, in addition to NASA/RECON, the following systems will be examined: Lockheed DIALOG, DOE/RECON, DOD/DTIC, EPA/CSIN, and LLNL/TIS.

Objective: To assess the impact of clinicians' use of an online information retrieval system on their performance in answering clinical questions. Design: Pre-/post-intervention experimental design. Measurements: In a computer laboratory, 75 clinicians (26 hospital-based doctors, 18 family practitioners, and 31 clinical nurse consultants) provided 600 answers to eight clinical scenarios before and after the use of an online information retrieval system. We examined the proportion of correct answers pre- and post-intervention, direction of change in answers, and differences between professional groups. Results: System use resulted in a 21% improvement in clinicians' answers, from 29% (95% confidence interval [CI] 25.4–32.6) correct pre-use to 50% (95% CI 46.0–54.0) post-use. In 33% of cases (95% CI 29.1–36.9), answers were changed from incorrect to correct. In 21% (95% CI 17.1–23.9), correct pre-test answers were supported by evidence found using the system, and in 7% (95% CI 4.9–9.1), correct pre-test answers were changed incorrectly. For 40% (95% CI 35.4–43.6) of scenarios, incorrect pre-test answers were not rectified following system use. Despite significant differences in professional groups' pre-test scores [family practitioners: 41% (95% CI 33.0–49.0), hospital doctors: 35% (95% CI 28.5–41.2), and clinical nurse consultants: 17% (95% CI 12.3–21.7); χ2 = 29.0, df = 2, p < 0.01], there was no difference in post-test scores (χ2 = 2.6, df = 2, p = 0.73). Conclusions: The use of an online information retrieval system was associated with a significant improvement in the quality of answers provided by clinicians to typical clinical problems. In a small proportion of cases, use of the system produced errors. While there was variation in the performance of clinical groups when answering questions unaided, performance did not differ significantly following system use. Online information retrieval systems can be an effective tool in improving the accuracy of

The present paper proposes a virtual environment for visualizing virtualized cultural and historical sites. The proposed environment is based on a distributed asynchronous architecture and supports stereo vision and a tiled wall display. The system is mobile and can run from two laptops. This virtual environment addresses the problems of intellectual property protection and multimedia information retrieval through encryption and content-based management, respectively. Experimental results with a fully textured 3D model of the Crypt of Santa Cristina in Italy are presented, evaluating the performance of the proposed virtual environment.

The effect of a prior gist-based versus item-specific retrieval orientation on recognition of objects and words was examined. Prior item-specific retrieval increased item-specific recognition of episodically related but not previously tested objects relative to both conceptual- and perceptual-gist retrieval. An item-specific retrieval advantage…

To improve the access precision of polar geospatial information services on the web, a new methodology for retrieving global spatial information services based on geospatial service search and ontology reasoning is proposed: the geospatial service search finds coarse services on the web, and the ontology reasoning refines the coarse services. The proposed framework includes standardized distributed geospatial web services, a geospatial service search engine, an extended UDDI registry, and a multi-protocol geospatial information service client. Key technologies addressed include service discovery based on a search engine, and service ontology modeling and reasoning in the Antarctic geospatial context. Finally, an Antarctic multi-protocol OWS portal prototype based on the proposed methodology is introduced.

Workshops were conducted as part of community planning projects by developing a workshop program to reduce the working time taken by preparations for town watching and map making, and to concentrate on community planning discussions. The workshop program allowed participants to introduce town watching and map making by using a geographic information system (GIS). We showed that a group using a GIS would take less time for map making. As a result, more time would be available for community planning discussion. However, there were some conflicts in the memories of the participants about the places discovered during town watching. Producing a map using a GIS resulted in more comprehensive and informative maps. The extra time available for community planning discussions allows a greater number of specific factors to be considered.

Background Unlike traditional information retrieval (IR), diversity-oriented IR takes into consideration the relationships between documents in order to promote novelty and reduce redundancy, thus providing diversified results that satisfy various user intents. Diversity IR in the biomedical domain is especially important, as biologists sometimes want diversified results pertinent to their query. Methods A combined learning-to-rank (LTR) framework is learned through a general ranking model (gLTR) and a diversity-biased model. The former is learned from general ranking features by a conventional learning-to-rank approach; the latter is constructed with added diversity-indicating features, which are extracted based on the retrieved passages' topics detected using Wikipedia and the ranking order produced by the general learning-to-rank model; final ranking results are given by the combination of both models. Results Compared with the baselines BM25 and DirKL on the 2006 and 2007 collections, gLTR achieves aspect-level mean average precision (Aspect MAP) of 0.2292 (+16.23% and +44.1% improvement over BM25 and DirKL, respectively) and 0.1873 (+15.78% and +39.0% improvement over BM25 and DirKL, respectively). The LTR method outperforms gLTR on the 2006 and 2007 collections with 4.7% and 2.4% improvement in terms of Aspect MAP. Conclusions The learning-to-rank method is an efficient way to do biomedical information retrieval, and the diversity-biased features are beneficial for promoting diversity in ranking results. PMID:25560088
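The final combination of the two models could, in the simplest case, be a linear interpolation of their per-passage scores. The λ value, passage ids, and scores below are invented; the paper's actual combination may be more elaborate.

```python
# Hypothetical score combination: final score is a weighted sum of the
# general LTR score and a diversity-biased score (all values illustrative).
def combine(general_scores, diversity_scores, lam=0.7):
    """Linearly interpolate two rankers' scores per passage."""
    return {d: lam * general_scores[d] + (1 - lam) * diversity_scores.get(d, 0.0)
            for d in general_scores}

general = {"p1": 0.9, "p2": 0.8, "p3": 0.75}
diversity = {"p1": 0.1, "p2": 0.9, "p3": 0.8}   # p2/p3 cover novel aspects
final = combine(general, diversity)
ranking = sorted(final, key=final.get, reverse=True)
```

Here the diversity signal promotes `p2` above the generically top-scored `p1`, which is exactly the redundancy-reducing behavior the framework aims for.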

Recognition errors of proper nouns and foreign words significantly decrease the performance of ASR-based speech applications such as voice dialing systems, speech summarization, spoken document retrieval, and spoken query-based information retrieval (IR). The reason is that proper nouns and words that come from other languages are usually the most important key words. The loss of such words due to misrecognition in turn leads to a loss of significant information from the speech source. This paper focuses on how to improve the performance of Indonesian ASR by alleviating the problem of pronunciation variation of proper nouns and foreign words (English words in particular). To improve the proper noun recognition accuracy, proper-noun specific acoustic models are created by supervised adaptation using maximum likelihood linear regression (MLLR). To improve English word recognition, the pronunciation of English words contained in the lexicon is fixed by using rule-based English-to-Indonesian phoneme mapping. The effectiveness of the proposed method was confirmed through spoken query based Indonesian IR. We used Inference Network-based (IN-based) IR and compared its results with those of the classical Vector Space Model (VSM) IR, both using a tf-idf weighting scheme. Experimental results show that IN-based IR outperforms VSM IR.

Association rule extraction from a binary relation, as well as reasoning and information retrieval, are generally based on the initial representation of the binary relation as an adjacency matrix. This presents some inconvenience in terms of memory space and knowledge organization. A coverage of a binary relation by a minimal number of non-enlargeable rectangles generally reduces memory space consumption without any loss of information. It also has the advantage of organizing the objects and attributes contained in the binary relation into a conceptual representation. In this paper, we propose new algorithms to extract association rules (i.e. data mining), to derive conclusions from initial attributes (i.e. reasoning), and to retrieve all the objects satisfying some initial attributes, using only the minimal coverage. Finally, we propose an incremental approximate algorithm to update a binary relation organized as a set of non-enlargeable rectangles. Two main operations are used during the organization process: first, separation of existing rectangles when some pairs are deleted; second, joining of rectangles when common properties are discovered after addition or removal of elements from a binary context. The objective is the minimization of the number of rectangles and the maximization of their structure. The article also raises the problems of equational modeling of the minimization criteria, as well as incrementally providing equations to maintain them.
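The non-enlargeable rectangles of a binary relation are exactly its formal concepts: pairs (O, A) where O is precisely the set of objects sharing all attributes in A, and A is precisely the set of attributes common to all objects in O. A naive enumeration, fine for tiny contexts but exponential in general, can be sketched as follows (the toy relation is invented).

```python
from itertools import combinations

# Binary relation as object -> set of attributes (illustrative toy context).
relation = {
    "o1": {"a", "b"},
    "o2": {"a", "b", "c"},
    "o3": {"b", "c"},
}
objects = set(relation)
attributes = set().union(*relation.values())

def common_attrs(objs):
    """Attributes shared by every object in objs."""
    return set.intersection(*(relation[o] for o in objs))

def common_objs(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o in objects if attrs <= relation[o]}

# A non-enlargeable rectangle (formal concept) is a closed pair (O, A):
# common_attrs(O) == A and common_objs(A) == O.  Close every object subset.
rectangles = set()
for r in range(1, len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        attrs = common_attrs(set(objs))
        rectangles.add((frozenset(common_objs(attrs)), frozenset(attrs)))
```

Within a rectangle (O, A), any subset of A implies the rest of A with confidence 1 over the objects in O, which is the association-rule reading the paper exploits.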

This second part of a two-part study evaluates retrievals of aerosol optical depths, τ1 and τ2, in Advanced Very High Resolution Radiometer (AVHRR) channels 1 and 2 centered at λ1 = 0.63 and λ2 = 0.83 μm, and an effective Ångström exponent, α, derived therefrom as α = −ln(τ1/τ2)/ln(λ1/λ2). The retrievals are made with the Second Simulation of the Satellite Signal in the Solar Spectrum (6S) radiative transfer model from four NOAA-14 AVHRR datasets, collected between February 1998 and May 1999 in the latitudinal belt of 5°-25°S. A series of quality control (QC) checks applied to the retrievals to identify outliers are described. These remove a total of 1% of points, which presumably originate from channel misregistration, residual cloud in AVHRR cloud-screened pixels, and substantial deviations from the assumptions used in the retrieval model (e.g., bright coastal and high-altitude inland waters). First, from examining histograms of the derived parameters it is found that τ and α are accurately fit by lognormal and normal probability distribution functions (PDFs), respectively. Second, the scattergrams of τ1 versus τ2 are analyzed to see if they form a coherent pattern. They do indeed converge at the origin, as expected, but frequently fall outside of the expected domain in τ1-τ2 space, defined by two straight lines corresponding to α = 0 and α = 2. This results in a low bias in α, which tends to fill in an interval of [−1, 1] rather than
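The exponent above follows from the power-law assumption τ(λ) ∝ λ^(−α), so it is a one-line computation from the two channel optical depths (wavelengths in μm; the numeric values in the usage below are invented for illustration):

```python
import math

def angstrom_exponent(tau1, tau2, lam1=0.63, lam2=0.83):
    """Effective Angstrom exponent from optical depths at two wavelengths (um)."""
    return -math.log(tau1 / tau2) / math.log(lam1 / lam2)

# Equal optical depths in both channels give a spectrally flat aerosol (alpha = 0);
# tau falling off as 1/lambda gives alpha = 1.
flat = angstrom_exponent(0.2, 0.2)
steep = angstrom_exponent(0.2, 0.2 * 0.63 / 0.83)
```

Larger α corresponds to smaller particles (stronger spectral dependence), which is why the retrievals are expected to fall between the α = 0 and α ≈ 2 lines in τ1-τ2 space.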

With the near-overload of online information, it is necessary to equip our students with the skills necessary to deal with Information Problem Solving (IPS). This study also intended to help students develop major IPS strategies with the assistance of an instructor's scaffolding in a designed IPS course as well as on an Online Information…

This report of a two-day workshop on non-traditional careers in librarianship provides the text of five major presentations as well as summaries of the three alternative sessions which followed. Each of the speakers described a different type of career: (1) Alice S. Warner of Warner-Eddison Associates is an information broker; (2) Robert E. Herz…

We developed a phase retrieval algorithm that utilizes pre-determined partial phase information to overcome an insufficient oversampling ratio in diffraction data. By implementing the Fourier modulus projection and a modified support projection that incorporates the pre-determined information, generalized difference map and HIO (Hybrid Input-Output) algorithms are developed. Optical laser diffraction data as well as simulated x-ray diffraction data are used to illustrate the validity of the proposed algorithm, revealing both its strengths and its limitations. Finally, the proposed algorithm is applied to reconstruct images from coherent x-ray diffraction data of Au patterns. The proposed algorithm can expand the applicability of diffraction-based image reconstruction.
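For orientation, a generic HIO iteration (not the authors' modified version) alternates a Fourier modulus projection with a feedback update outside the support. The 1D signal, support, and β below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
support = np.zeros(n, dtype=bool)
support[:20] = True                      # object known to lie in this region
x_true = np.zeros(n)
x_true[support] = rng.random(20)
modulus = np.abs(np.fft.fft(x_true))     # "measured" Fourier magnitudes

beta = 0.9
x = rng.random(n)                        # random initial guess
for _ in range(1000):
    # Fourier modulus projection: keep current phases, impose measured magnitudes
    F = np.fft.fft(x)
    x_p = np.real(np.fft.ifft(modulus * np.exp(1j * np.angle(F))))
    # HIO update: accept the projection where it satisfies the constraints,
    # apply negative feedback where it violates support or non-negativity
    violating = ~support | (x_p < 0)
    x = np.where(violating, x - beta * x_p, x_p)
```

The paper's contribution amounts to tightening the object-side projection with known partial phases, which is what makes reconstruction feasible when the oversampling ratio alone is insufficient.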

The 1972 Systems Engineering Program at Marshall Space Flight Center, in which 15 participants representing 15 U.S. universities, 1 NASA/MSFC employee, and another specially assigned faculty member took part in an 11-week program, is discussed. The Fellows became acquainted with the philosophy of systems engineering and, as a training exercise, used this approach to produce a conceptual design for an Earth Resources Information Storage, Transformation, Analysis, and Retrieval System. The program was conducted in three phases: approximately 3 weeks were devoted to seminars, tours, and other presentations to expose the participants to technical and other aspects of the information management problem. The second phase, 5 weeks in length, consisted of evaluating alternative solutions to problems, effecting initial trade-offs, and performing preliminary design studies and analyses. The last 3 weeks were occupied with final trade-off sessions, final design analyses, and preparation of a final report and oral presentation.

Following the loss of NASA's Space Shuttle Columbia in 2003, it was determined that problems in the agency's organization created an environment that led to the accident. One component of the proposed solution resulted in the formation of the NASA Engineering Network (NEN), a suite of information retrieval and knowledge sharing tools. This paper describes the implementation of this set of search, portal, content management, and semantic technologies, including a unique meta search capability for data from distributed engineering resources. NEN's communities of practice are formed along engineering disciplines where users leverage their knowledge and best practices to collaborate and take informal learning back to their personal jobs and embed it into the procedures of the agency. These results offer insight into using traditional engineering disciplines for virtual teaming and problem solving.

A novel optical information verification and encryption method is proposed based on an inference principle and phase retrieval with sparsity constraints. In this method, a target image is encrypted into two phase-only masks (POMs), which comprise sparse phase data used for verification. Both POMs need to be authenticated before being applied for decryption. The target image can be optically reconstructed when the two authenticated POMs are Fourier transformed and convolved with the correct decryption key, which is also generated in the encryption process. No holographic scheme is involved in the proposed optical verification and encryption system, and there is no problem of information disclosure in the two authenticable POMs. Numerical simulation results demonstrate the validity and good performance of the proposed method.

The Rice TOGO Browser is an online public resource designed to facilitate integration and visualization of mapping data of bacterial artificial chromosome (BAC)/P1-derived artificial chromosome (PAC) clones, genes, restriction fragment length polymorphism (RFLP)/simple sequence repeat (SSR) markers and phenotype data represented as quantitative trait loci (QTLs) onto the genome sequence, and to provide a platform for more efficient utilization of genome information from the point of view of applied genomics as well as functional genomics. Three search options, namely keyword search, region search and trait search, generate various types of data in a user-friendly interface with three distinct viewers, a chromosome viewer, an integrated map viewer and a sequence viewer, thereby providing the opportunity to view the position of genes and/or QTLs at the chromosomal level and to retrieve any sequence information in a user-defined genome region. Furthermore, the gene list, marker list and genome sequence in a specified region delineated by RFLP/SSR markers and any sequences designed as primers can be viewed and downloaded to support forward genetics approaches. An additional feature of this database is the graphical viewer for BLAST search to reveal information not only for regions with significant sequence similarity but also for regions adjacent to those with similarity but with no hits between sequences. An easy to use and intuitive user interface can help a wide range of users in retrieving integrated mapping information including agronomically important traits on the rice genome sequence. The database can be accessed at http://agri-trait.dna.affrc.go.jp/.

This is a short document that explains the materials that will be transmitted to LLNL and DNN HQ regarding the ICP-MS Workshop held at PNNL on June 17-19. The goal is to provide LLNL with information regarding the planning and preparations for the workshop at PNNL, in preparation for the SIMS workshop at LLNL.

This report summarizes a public workshop that was held on April 27, 1999, in Rockville, Maryland. The workshop was conducted as part of the US Nuclear Regulatory Commission's (NRC) efforts to further develop its understanding of the risks associated with low power and shutdown operations at US nuclear power plants. A sufficient understanding of such risks is required to support decision-making for risk-informed regulation, in particular Regulatory Guide 1.174, and the development of a consensus standard. During the workshop the NRC staff discussed and requested feedback from the public (including representatives of the nuclear industry, state governments, consultants, private industry, and the media) on the risk associated with low-power and shutdown operations.

This proceedings contains information from the IPHE Infrastructure Workshop, a two-day interactive workshop held on February 25-26, 2010, to explore the market implementation needs for hydrogen fueling station development.

The present paper reviews progress in methods of retrieving vegetation water content using remote sensing spectral information, including using vegetation spectral reflectance information (VIR, SWIR, and NIR) to directly extract vegetation water content and establish vegetation water indices (WI), i.e., NDWI = (R860 - R1240)/(R860 + R1240) and PWI = R970/R900, and using radiative transfer (RT) models such as PROSAIL to detect plant water content information. The authors analyze the method of retrieving vegetation water content under low crop coverage conditions. Plant water can be estimated first by using canopy physiological parameters, and second by using vegetation indices and radiative transfer models, which can eliminate the soil background effect. Estimation of agricultural drought and vegetation water content using multi-angle polarized reflectance and the bidirectional reflectance distribution function (BRDF) is also discussed in this paper. In the end, possible development trends of retrieval methods for plant water information under low plant coverage conditions are discussed.
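The two indices quoted above are direct band arithmetic on reflectances; the reflectance values in the usage lines are invented for illustration.

```python
def ndwi(r860, r1240):
    """Normalized Difference Water Index from reflectances at 860 and 1240 nm."""
    return (r860 - r1240) / (r860 + r1240)

def pwi(r970, r900):
    """Plant Water Index: reflectance ratio of the 970 nm and 900 nm bands."""
    return r970 / r900

# A well-watered canopy absorbs more strongly at 1240 nm than at 860 nm,
# so its NDWI is positive (illustrative values).
wet = ndwi(0.4, 0.3)
ratio = pwi(0.45, 0.5)
```

Water absorption deepens the 1240 nm and 970 nm features, so higher canopy water content raises NDWI and lowers PWI.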

The retrieval and manipulation of data from large public databases like the U.S. National Health and Nutrition Examination Survey (NHANES) may require sophisticated statistical software and significant expertise that may be unavailable in the university setting. In response, we have developed the Data Retrieval And Manipulation System (DReAMS), an automated information system that handles all processes of data extraction and cleaning and then joins different subsets to produce analysis-ready output. The system is a browser-based data warehouse application in which the input data from flat files or operational systems are aggregated in a structured way so that the desired data can be read, re-coded, queried, and extracted efficiently. The current pilot implementation of the system provides access to a limited portion of the NHANES database. We plan to increase the amount of data available through the system in the near future and to extend the techniques to other large databases from the CDU archive, with a current holding of about 53 databases. PMID:23920922

Recent research points to a crucial role of eye fixations on the same spatial locations where an item appeared when learned, for the successful retrieval of stored information (e.g., Laeng et al. in Cognition 131:263-283, 2014. doi: 10.1016/j.cognition.2014.01.003 ). However, evidence about whether the specific temporal sequence (i.e., scanpath) of these eye fixations is also relevant for the accuracy of memory remains unclear. In the current study, eye fixations were recorded while looking at a checkerboard-like pattern. In a recognition session (48 h later), animations were shown where each square that formed the pattern was presented one by one, either according to the same, idiosyncratic, temporal sequence in which they were originally viewed by each participant or in a shuffled sequence although the squares were, in both conditions, always in their correct positions. Afterward, participants judged whether they had seen the same pattern before or not. Showing the elements serially according to the original scanpath's sequence yielded a significantly better recognition performance than the shuffled condition. In a forced fixation condition, where the gaze was maintained on the center of the screen, the advantage of memory accuracy for same versus shuffled scanpaths disappeared. In conclusion, gaze scanpaths (i.e., the order of fixations and not simply their positions) are functional to visual memory, and physically re-enacting the original, embodied perception can facilitate retrieval.

Soil moisture is a crucial variable for weather prediction because of its influence on evaporation. It is of critical importance for drought and flood monitoring and prediction and for public health applications. The NASA Short-term Prediction Research and Transition Center (SPoRT) has implemented a new module in the NASA Land Information System (LIS) to assimilate observations from ESA's Soil Moisture and Ocean Salinity (SMOS) satellite. SMOS Level 2 retrievals from the Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument are assimilated into the Noah LSM within LIS via an Ensemble Kalman Filter. The retrievals have a target volumetric accuracy of 4% at a resolution of 35-50 km. Parallel runs with and without SMOS assimilation are performed with precipitation forcing from intentionally degraded observations and then validated against a model run using the best available precipitation data, as well as against selected station observations. The goal is to demonstrate how SMOS data assimilation can improve modeled soil states in the absence of dense rain gauge and radar networks.
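
The analysis step of an Ensemble Kalman Filter can be illustrated in scalar form. The soil moisture members, observation, and error variance below are invented, and the actual LIS/Noah assimilation updates full model states, but the update has this structure:

```python
import random

def enkf_update(ensemble, obs, obs_err_var, rng):
    """One scalar EnKF analysis step with perturbed observations: each
    member moves toward the observation, weighted by the ratio of
    ensemble spread to observation error."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_err_var)  # Kalman gain for an identity observation operator
    return [x + gain * (obs + rng.gauss(0.0, obs_err_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
background = [0.18, 0.22, 0.20, 0.25, 0.15]  # volumetric soil moisture members
analysis = enkf_update(background, 0.30, 0.04 ** 2, rng)
print(sum(analysis) / len(analysis))  # mean of the updated ensemble
```

The gain shrinks toward zero when the observation error dominates the ensemble spread, so an untrustworthy retrieval barely moves the model state.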

A wealth of genomic information is available in public and private databases. However, this information is underutilized for uncovering population-specific and functionally relevant markers underlying complex human traits. Given the huge amount of SNP data available from the annotation of human genetic variation, data mining is a faster and more cost-effective approach for investigating which SNPs are informative for ancestry. In this study, we present AncestrySNPminer, the first web-based bioinformatics tool specifically designed to retrieve Ancestry Informative Markers (AIMs) from genomic data sets and link these informative markers to genes and ontological annotation classes. The tool includes an automated and simple “scripting at the click of a button” functionality that enables researchers to perform various population genomics statistical analysis methods, with user-friendly querying and filtering of data sets across various populations through a single web interface. AncestrySNPminer can be freely accessed at https://research.cchmc.org/mershalab/AncestrySNPminer/login.php. PMID:22584067
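
A common, simple statistic for ranking AIMs is the absolute allele frequency difference (delta) between two populations; whether AncestrySNPminer uses exactly this measure is not stated above, and the SNP IDs and frequencies here are hypothetical:

```python
def delta(freq_pop1, freq_pop2):
    """Absolute allele frequency difference for one SNP between two populations."""
    return abs(freq_pop1 - freq_pop2)

# Hypothetical SNPs with allele frequencies in two populations.
snps = {"rs0001": (0.90, 0.15), "rs0002": (0.50, 0.45), "rs0003": (0.70, 0.10)}

# Rank SNPs from most to least ancestry-informative.
ranked = sorted(snps, key=lambda s: delta(*snps[s]), reverse=True)
print(ranked)
```

A SNP with nearly identical frequencies in both populations (like rs0002) carries almost no ancestry signal and sorts to the bottom.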

MRI neuroimaging provides a rich source of image content, including structural (MRI, diffusion DTI), functional (fMRI, perfusion ASL), and metabolic (MRS) information. Today, MRI capabilities allow these imaging techniques to be acquired in one session in most cases. In order to be of diagnostic value, the immense and diverse data need to be (i) automatically post-processed to extract the relevant information, e.g. 3D brain maps from 4D fMRI, and (ii) fused and visualized to correlate the voxel-based findings. The purpose of this study is to demonstrate the feasibility of automatic retrieval of relevant information and fusion of MRI, fMRI, DTI, ASL, and MRS data of a pediatric population into a single semantic data representation. By using advanced imaging, we may be able to detect a larger spectrum of abnormalities in the neonatal brain. Each imaging application provides unique information about the physiology (fMRI, ASL), the anatomy (DTI), and the biochemistry (MRS) of the newborn brain in relation to normal development and brain injury. By being able to integrate this technology, we will be able to combine biochemical, physiologic, and anatomic information, which can provide unique insight into not only the normal development of the brain but also injury of the neonatal brain.

Modern Web-based Information Systems (WIS) are becoming increasingly necessary to support users who are in different places with different types of information, by facilitating their access to the information, decision making, workgroups, and so forth. The design of these systems requires standardized methods and techniques that enable a common vocabulary to be defined to represent the underlying knowledge. Thus, mediation elements such as traders enrich the interoperability of web components in open distributed systems. These traders must operate with other third-party traders and/or agents in the system, which must also use a common vocabulary for communication between them. This paper presents the OntoTrader architecture, an Ontological Web Trading agent based on the OMG ODP trading standard. It also presents the ontology needed by some system agents to communicate with the trading agent, as well as the behavioral framework for the OntoTrader agent in SOLERES, an Environmental Management Information System (EMIS). This framework implements a “Query-Searching/Recovering-Response” information retrieval model using a trading service, SPARQL notation, and the JADE platform. The paper also presents reflection, delegation, and federation mediation models and describes the formalization, an experimental testing environment with three scenarios, and a tool that allows our proposal to be evaluated and validated. PMID:24977211


Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, e.g. patients with diabetes, need access to various online resources (in foreign and/or native languages) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications, and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals, or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or in document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, which aims to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and is divided into three interrelated parts: (i) comparison of professional and popular terminology use; (ii) evaluation of automatic statistically based terminology extraction on English and Croatian texts; and (iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods in the English text. Evaluation of the automatic and semi-automatic terminology extraction methods is performed by recall…
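
As a rough sketch of statistically based terminology extraction (the study's actual algorithms are not specified in this excerpt), frequent word n-grams that neither start nor end with a stopword can serve as terminology candidates:

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "of", "and", "in", "a", "to", "is", "on"})

def candidate_terms(text, max_len=3, min_freq=2):
    """Collect word n-grams (up to max_len words) that occur at least
    min_freq times and do not start or end with a stopword."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            gram = tuple(words[i:i + n])
            if gram[0] not in STOPWORDS and gram[-1] not in STOPWORDS:
                counts[gram] += 1
    return [" ".join(g) for g, c in counts.items() if c >= min_freq]

terms = candidate_terms("The insulin pump delivers insulin. Check the insulin pump daily.")
print(terms)
```

Real extractors weight candidates by termhood measures (e.g. C-value) rather than raw frequency, but the filtering idea is the same.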

A project undertaken to provide the Bonneville Power Administration (BPA) with information needed to conduct environmental assessments and meet requirements of the National Environmental Policy Act (NEPA) and the Pacific Northwest Electric Power Planning and Conservation Act (Regional Act) is described. Access to information on environmental effects would help BPA fulfill its responsibilities to coordinate power generation on the Columbia River system, protect uses of the river system (e.g., irrigation, recreation, navigation), and enhance fish and wildlife production. Staff members at BPA identified the need to compile and index information resources that would help answer environmental impact questions. A computer retrieval system that would provide ready access to the information was envisioned. This project was supported by BPA to provide an initial step toward a compilation of environmental impact information. Scientists at Pacific Northwest Laboratory (PNL) identified, gathered, and evaluated information related to environmental effects of water level on uses of five study reservoirs and developed and implemented an environmental data retrieval system, which provides for automated storage and retrieval of annotated citations to published and unpublished information. The data retrieval system is operating on BPA's computer facility and includes the reservoir water-level environmental data. This project was divided into several tasks, some of which were conducted simultaneously to meet project deadlines. The tasks were to identify uses of the five study reservoirs, compile and evaluate reservoir information, develop a data entry and retrieval system, identify and analyze research needs, and document the data retrieval system and train users. Additional details of the project are described in several appendixes.

Techniques for building a world-wide information infrastructure by reverse engineering existing databases to link them in a hierarchical system of subject clusters to create an integrated database are explored. The controlled vocabulary of the Library of Congress Subject Headings is used to ensure consistency and group similar items. Each database…


Information and communication technology (ICT) tools are known to facilitate communication, the processing of information, and the sharing of knowledge by electronic means. In Nigeria, the lack of adequate capacity in the use of ICT among health sector policymakers constitutes a major impediment to the uptake of research evidence into the policymaking process. The objective of this study was to improve the knowledge and capacity of policymakers to access and utilize policy-relevant evidence. A modified “before and after” intervention study design was used in which outcomes were measured on the target participants both before and after the intervention was implemented. A 4-point Likert scale graded by degree of adequacy (1 = grossly inadequate, 4 = very adequate) was employed. The study was conducted in Ebonyi State, south-eastern Nigeria, and the participants were career health policymakers. A two-day intensive ICT training workshop was organized for policymakers, with 52 participants in attendance. Topics covered included: (i) intersectoral partnership/collaboration; (ii) engaging ICT in evidence-informed policymaking; (iii) use of ICT for evidence synthesis; and (iv) capacity development in the use of computers, the internet, and other ICT. The pre-workshop mean of knowledge and capacity for use of ICT ranged from 2.19 to 3.05, while the post-workshop mean ranged from 2.67 to 3.67 on the 4-point scale. The percentage increase in mean knowledge and capacity at the end of the workshop ranged from 8.3% to 39.1%. The findings of this study suggest that policymakers’ ICT competence relevant to evidence-informed policymaking can be enhanced through a training workshop. PMID:26448807

The third of four volumes in a series describing the basic documentation practices involved in the initial setting up and subsequent operation of an information-library organization to provide defense-aerospace scientific and technical information services, this manual consists of three sections. "Information Retrieval," by Tom Norton,…

Compared effect of workshop presenter personal self-disclosures with hypothetical examples used to convey the same didactic material. Sixty-nine students watched a videotaped workshop presentation in self-disclosure or hypothetical example condition. Found that perceptions of presenters who used self-disclosures did not differ from those who used…

The 3-week workshop concentrated on techniques and procedures for research proposals and projects in vocational education, with special emphasis on research at the local level. Participants were involved both in the formal workshop sessions and in the design of actual research proposals. A summary of two panel discussions and manuscripts for the…

Event-related potentials (ERPs) were acquired during two experiments in order to determine boundary conditions for when recollection of colour information can be controlled strategically. In initial encoding phases, participants saw an equal number of words presented in red or green. In subsequent retrieval phases, all words were shown in white. Participants were asked to endorse old words that had been shown at encoding in one colour (targets), and to reject new test words as well as old words shown in the alternate colour (non-targets). Study and test lists were longer in Experiment 1, and as a result, the accuracy of memory judgments was superior in Experiment 2. The left-parietal ERP old/new effect--the electrophysiological signature of recollection--was reliable for targets in both experiments, and reliable for non-targets in Experiment 1 only. These findings are consistent with the view that participants were able to restrict recollection to targets in Experiment 2, while recollecting information about targets as well as non-targets in Experiment 1. The fact that this selective strategy was implemented in Experiment 2 despite the close correspondence between the kinds of information associated with targets and non-targets indicates that participants were able to exert considerable control over the conditions under which recollection of task-relevant information occurred.

We present a fresh and broad yet simple approach to information retrieval in general, and diagnostics in particular, by applying the theory of complex networks to multidimensional, dynamic images. We demonstrate a successful use of our method with the time series generated from high-content thermal imaging videos of patients suffering from aqueous deficient dry eye (ADDE) disease. Remarkably, network analyses of thermal imaging time series of contact lens users and of patients who have undergone Laser-Assisted in situ Keratomileusis (Lasik) surgery exhibit pronounced similarity with results obtained from ADDE patients. We also propose a general framework for the transformation of multidimensional images to networks for futuristic biometry. Our approach is general and scalable to other fluctuation-based devices, where network parameters derived from fluctuations act as effective discriminators and diagnostic markers. PMID:26626047
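
The abstract does not say which time-series-to-network mapping is used; a standard construction of this kind is the natural visibility graph, sketched here, where node degrees then become fluctuation-derived network parameters:

```python
def visibility_graph(series):
    """Natural visibility graph: nodes are time points a, b; they are
    linked when every intermediate sample lies strictly below the
    straight line joining (a, series[a]) and (b, series[b])."""
    n = len(series)
    edges = set()
    for a in range(n):
        for b in range(a + 1, n):
            if all(series[c] < series[a] + (series[b] - series[a]) * (c - a) / (b - a)
                   for c in range(a + 1, b)):
                edges.add((a, b))
    return edges

# A short made-up fluctuation record stands in for a thermal-imaging time series.
edges = visibility_graph([2.0, 1.0, 3.0, 0.5, 2.5])
print(sorted(edges))
```

Peaks in the series (here index 2) "see" many other points and become hubs, which is why the resulting degree distribution discriminates between fluctuation regimes.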

Motor representations are reported to be implicitly evoked when one observes manipulatable objects (action potentiation). The relationship was examined between action potentiation and pantomime deficit in apraxia. Participants responded to line drawings of manipulatable objects with either the left or right hand, according to the color of the stimulus. In normal participants (N= 10, four women, six men, M age = 28.5 yr., SD = 5.6), responses were faster when the orientation of the stimulus was compatible with the response-hand grasp. However, the apraxic patient did not exhibit this compatibility effect. On a control task in which a nonobject (circle) was presented, all participants exhibited the compatibility effect. These results indicated that the apraxic patient was impaired in evoking motor representation associated with objects. Thus, in some cases, apraxic disorders may be attributable to a deficit in retrieving object-specific information for manipulation.


Using semantic relations between terms alongside their syntactic similarities in a search engine would result in systems with better overall precision. One major problem in achieving such systems is finding an appropriate way to calculate semantic similarity scores and combine them with those of classic methods. In this paper, we propose a hybrid approach to information retrieval in the medical field using the MeSH ontology. Our approach involves proposing a new semantic similarity measure and eliminating from the syntactic results any records whose semantic score falls below a specific threshold. The proposed approach outperforms VSM, graph comparison, neural network, Bayesian network, and latent semantic indexing based approaches in terms of precision vs. recall.
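
The filtering step can be sketched as follows: rank documents by a syntactic score, then discard any whose semantic score falls below the threshold. The cosine weighting and the per-document semantic scores here are illustrative stand-ins for the paper's actual measures:

```python
def cosine(q, d):
    """Syntactic similarity between term-frequency dictionaries."""
    num = sum(q[t] * d.get(t, 0) for t in q)
    den = (sum(v * v for v in q.values()) ** 0.5) * (sum(v * v for v in d.values()) ** 0.5)
    return num / den if den else 0.0

def hybrid_filter(query, docs, semantic_score, threshold):
    """Rank by cosine, then drop documents below the semantic threshold."""
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return [d for d in ranked if semantic_score(d) >= threshold]

query = {"heart": 1, "attack": 1}
docs = {"d1": {"heart": 2, "attack": 1}, "d2": {"heart": 1, "failure": 1}, "d3": {"attack": 3}}
semantic = {"d1": 0.9, "d2": 0.8, "d3": 0.2}  # hypothetical MeSH-derived scores

result = hybrid_filter(query, docs, lambda d: semantic[d], threshold=0.5)
print(result)
```

Note that d3 ranks above d2 syntactically yet is removed by the semantic cutoff, which is precisely how the hybrid improves precision over a purely term-based ranking.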

Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches, and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images, to support the implementation of effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies, and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction, and processing of complex natural language queries. PMID:27538578

The technique used for NASA user evaluation is described. It consists of sending out an evaluation form with each literature search. Results derived from a compilation of user responses are presented. In an eleven-month period in which evaluation forms went out with 3,001 searches, 33.6% of the forms were completed and returned. The returns showed that 88.5% of the respondents found the searches suitable to their needs, 81% learned of valuable new references from the searches, and 93.5% received the searches in time to meet their needs. The significance of the relevance, or precision, ratio in relation to user satisfaction is discussed, and an extrapolation from user responses resulted in a relevance ratio of 49.3%. Some of the general comments found in the responses are analyzed as indicators of what users expected from the information retrieval service.

Results of a number of experiments to illuminate the relative effectiveness and costs of computerized information retrieval in the interactive mode are reported. It was found that, for equal time spent preparing the search strategy, the batch and interactive modes gave approximately equal recall and relevance. The interactive mode, however, encourages the searcher to devote more time to the task and therefore usually yields improved output. Engineering costs are consequently higher in this mode. Estimates of associated hardware costs also indicate that operation in this mode is more expensive. Skilled RECON users like the rapid feedback and additional features offered by this mode when they are not constrained by considerations of cost.

Potential policies and strategies for building the information society (IS) in countries that are candidates for admission to the European Union were explored at a workshop attended by 39 experts from the European Commission (EC), the EC's Institute for Prospective and Technological Studies, and outside the EC. The workshop focused on the specific…

…using mind maps can successfully retrieve information in the short term, and does not put them at a disadvantage compared to SNT students. Future studies should explore longitudinal effects of mind-map proficiency training on both short- and long-term information retrieval and critical thinking. PMID:20846442

There is considerable interest in retrieving profiles of aerosol effective radius, total number concentration, and complex refractive index from lidar measurements of extinction and backscatter at several wavelengths. The combination of three backscatter channels plus two extinction channels (3β + 2α) is particularly important, since it is believed to be the minimum configuration necessary for the retrieval of aerosol microphysical properties and because the technological readiness of lidar systems permits this configuration on both an airborne and a future spaceborne instrument. The second-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β + 2α measurements since 2012. The planned NASA Aerosol/Clouds/Ecosystems (ACE) satellite mission also recommends the 3β + 2α combination. Here we develop a deeper understanding of the information content and sensitivities of the 3β + 2α system in terms of aerosol microphysical parameters of interest. We use a retrieval-free methodology to determine the basic sensitivities of the measurements independent of retrieval assumptions and constraints. We calculate information content and uncertainty metrics using tools borrowed from the optimal estimation methodology based on Bayes' theorem, using a simplified forward-model look-up table with no explicit inversion. The forward model is simplified to represent spherical particles, monomodal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model, with no retrieval, the given simplified aerosol scenario is applicable as a best case for all existing retrievals in the absence of additional constraints. Retrieval-dependent errors due to mismatch between retrieval assumptions and true atmospheric aerosols are not included in this sensitivity study, and neither are retrieval errors that may be introduced in the inversion process. The choice of a simplified model adds clarity to the…
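
As a scalar caricature of these optimal-estimation metrics (the study works with a multiparameter look-up table; the prior, Jacobians, and noise variances below are invented), the posterior variance and Shannon information content for one parameter observed through five channels are:

```python
import math

def posterior_variance(prior_var, jacobians, noise_vars):
    """Scalar optimal-estimation update for independent channels y_i = k_i * x + noise."""
    inv = 1.0 / prior_var + sum(k * k / s for k, s in zip(jacobians, noise_vars))
    return 1.0 / inv

def shannon_information(prior_var, post_var):
    """Bits of information the measurements add about the parameter."""
    return 0.5 * math.log2(prior_var / post_var)

prior_var = 1.0                         # hypothetical prior variance
jacobians = [1.0, 1.0, 1.0, 0.5, 0.5]   # invented sensitivities for 3 beta + 2 alpha channels
noise_vars = [0.25] * 5                 # invented measurement noise variances

post_var = posterior_variance(prior_var, jacobians, noise_vars)
bits = shannon_information(prior_var, post_var)
print(post_var, bits)
```

Because the calculation uses only the forward sensitivities and error budgets, it yields a retrieval-free upper bound of the same flavor as the study's metrics: no inversion is ever performed.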

Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provision of such information to drivers in advance, has recently been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we first propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving-average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach outperforms the previously developed methods. PMID:28134859
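
The two-stage idea, an instance-based estimate from past radar signatures followed by a calibration over recent estimates, can be sketched as follows. The feature vectors, road states, and the majority-vote smoothing used here are illustrative assumptions, not the paper's exact method:

```python
def nearest_state(signal, history):
    """Return the road state of the most similar past observation
    (similarity = smallest squared distance between feature vectors)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(history, key=lambda item: dist(signal, item[0]))[1]

def calibrate(estimates, window=3):
    """Smooth jitter with a majority vote over the last `window` estimates."""
    recent = estimates[-window:]
    return max(set(recent), key=recent.count)

# Hypothetical labelled radar signatures observed previously.
history = [([0.9, 0.1], "dry"), ([0.2, 0.8], "icy"), ([0.5, 0.5], "wet")]

estimates = []
for signal in [[0.85, 0.2], [0.3, 0.7], [0.88, 0.15]]:
    estimates.append(nearest_state(signal, history))
state = calibrate(estimates)
print(estimates, state)
```

The calibration step trades a little responsiveness for robustness: a single anomalous radar return cannot flip the reported road state on its own.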