In this paper we discuss natural language watermarking, which uses the structure of the sentence constituents in natural language text in order to insert a watermark. This approach is different from techniques, collectively referred to as "text watermarking," which embed information by modifying the appearance of text elements, such as lines, words, or characters. We provide a survey of the current state of the art in natural language watermarking and introduce terminology, techniques, and tools for text processing. We also examine the parallels and differences between the two watermarking domains and outline how techniques from the image watermarking domain may be applicable to the natural language watermarking domain.

The goal of natural language generation is to replicate human writers or speakers: to generate fluent, grammatical, and coherent text or speech. The language produced must clearly and effectively express some intended message, using both explicit and implicit means. This demands the use of a lexicon and a grammar together with mechanisms which exploit semantic, discourse, and pragmatic knowledge to constrain production. Furthermore, special processors may be required to guide focus, extract presuppositions, and maintain coherency. As with interpretation, generation may require knowledge of the world, including information about the discourse participants as well as knowledge of the specific domain of discourse. All of these processes and knowledge sources must cooperate to produce well-written, unambiguous language. Natural language generation has received less attention than language interpretation due to the nature of language: it is important to interpret all the ways of expressing a message, but we need to generate only one. Furthermore, the generative task can often be accomplished with canned text (e.g., error messages or user instructions). The advent of more sophisticated computer systems, however, has intensified the need to express multisentential English.

This seminar describes a process and methodology that uses structured natural language to enable the construction of precise information requirements directly from users, experts, and managers. The main focus of this natural language approach is to create the precise information requirements and to do it in such a way that the business and technical experts are fully accountable for the results. These requirements can then be implemented using appropriate tools and technology. This requirement set is also a universal learning tool because it has all of the knowledge that is needed to understand a particular process (e.g., expense vouchers, project management, budget reviews, tax, laws, machine function).

GOAL is a test-engineer-oriented language designed to standardize procedure terminology and to serve as the test programming language for ground checkout operations in a space vehicle launch environment. The material presented concerning GOAL includes: (1) a historical review, (2) development objectives and requirements, (3) language scope and format, and (4) language capabilities.

Analogies are drawn between the social aspects of programming and similar aspects of mathematics and natural languages. By analogy with the history of auxiliary languages, it is suggested that Fortran and Cobol will remain dominant. (Available from the Association for Computing Machinery, 1133 Avenue of the Americas, New York, NY 10036.) (Author/TL)

Four behavioral scientists in a colloquium at the University of Wisconsin discussed various aspects of language learning. Concerned primarily with pre-high-school pupils and addressing their remarks to language teachers, the scientists offered these proposals: (1) language teaching is more effective if taught in a natural setting, (2)…

The book presents papers on natural language processing, focusing on the central issues of representation, reasoning, and recognition. The introduction discusses theoretical issues, historical developments, and current problems and approaches. The book presents work in syntactic models (parsing and grammars), semantic interpretation, discourse interpretation, language action and intentions, language generation, and systems.

A variety of types of evidence are examined to help determine the true nature of "deep structure" and what, if any, implications this has for linguistic theory as well as culture theory generally. The evidence accumulated over the past century on the nature of phonetic and phonemic systems is briefly discussed, and the following areas of analysis…

We assign to each positive variety $\mathcal{V}$ and each natural number $k$ the class of all (positive) Boolean combinations of the restricted polynomials, i.e. the languages of the form $L_0 a_1 L_1 a_2 \cdots a_\ell L_\ell$, where $\ell \le k$, $a_1, \dots, a_\ell$ are letters and $L_0, \dots, L_\ell$ are languages from the variety $\mathcal{V}$. For this polynomial operator we give a certain algebraic counterpart which works with identities satisfied by syntactic (ordered) monoids of languages considered. We also characterize the property that a variety of languages is generated by a finite number of languages. We apply our constructions to particular examples of varieties of languages which are crucial for a certain famous open problem concerning concatenation hierarchies.

This report describes an experimental system for drawing simple pictures on a computer graphics terminal using natural language input. The system is capable of drawing lines, points, and circles on command from the user, as well as answering questions about system capabilities and objects on the screen. Erasures are permitted and language input…

Natural language processing employs computational techniques for the purpose of learning, understanding, and producing human language content. Early computational approaches to language research focused on automating the analysis of the linguistic structure of language and developing basic technologies such as machine translation, speech recognition, and speech synthesis. Today's researchers refine and make use of such tools in real-world applications, creating spoken dialogue systems and speech-to-speech translation engines, mining social media for information about health or finance, and identifying sentiment and emotion toward products and services. We describe successes and challenges in this rapidly advancing area. PMID:26185244

Discusses ways that language misrepresents nature, pointing out that frequently used metaphors and problematic language usage provide limited conceptual and emotional understanding of the natural world and contribute to a degraded view of nature. Discusses strategies for changing language as the first step in changing attitudes toward nature. (LP)

To identify the neural components that make a brain ready for language, it is important to have well defined linguistic phenotypes, to know precisely what language is. There are two central features to language: the capacity to form signs (words), and the capacity to combine them into complex structures. We must determine how the human brain enables these capacities. A sign is a link between a perceptual form and a conceptual meaning. Acoustic elements and content elements are already brain-internal in non-human animals, but as categorical systems linked with brain-external elements. Being indexically tied to objects of the world, they cannot freely link to form signs. A crucial property of a language-ready brain is the capacity to process perceptual forms and contents offline, detached from any brain-external phenomena, so their “representations” may be linked into signs. These brain systems appear to have pleiotropic effects on a variety of phenotypic traits and not to be specifically designed for language. Syntax combines signs, so the combination of two signs operates simultaneously on their meaning and form. The operation combining the meanings long antedates its function in language: the primitive mode of predication operative in representing some information about an object. The combination of the forms is enabled by the capacity of the brain to segment vocal and visual information into discrete elements. Discrete temporal units have order and juxtaposition, and vocal units have intonation, length, and stress. These are primitive combinatorial processes. So the prior properties of the physical and conceptual elements of the sign introduce combinatoriality into the linguistic system, and from these primitive combinatorial systems derive concatenation in phonology and combination in morphosyntax. Given the nature of language, a key feature to our understanding of the language-ready brain is to be found in the mechanisms in human brains that enable the

The main concern in this work is the illustration of models for natural language processing, and the discussion of their role in the development of computational studies of language. Topics covered include the following: competence and performance in the design of natural language systems; planning and understanding speech acts by interpersonal games; a framework for integrating syntax and semantics; knowledge representation and natural language: extending the expressive power of proposition nodes; viewing parsing as word sense discrimination: a connectionist approach; a propositional language for text representation; from topic and focus of a sentence to linking in a text; language generation by computer; understanding the Chinese language; semantic primitives or meaning postulates: mental models of propositional representations; narrative complexity based on summarization algorithms; using focus to constrain language generation; and towards an integral model of language competence.

The integration of speech recognition with natural language understanding raises issues of how to adapt natural language processing to the characteristics of spoken language; how to cope with errorful recognition output, including the use of natural language information to reduce recognition errors; and how to use information from the speech signal, beyond just the sequence of words, as an aid to understanding. This paper reviews current research addressing these questions in the Spoken Language Program sponsored by the Advanced Research Projects Agency (ARPA). I begin by reviewing some of the ways that spontaneous spoken language differs from standard written language and discuss methods of coping with the difficulties of spontaneous speech. I then look at how systems cope with errors in speech recognition and at attempts to use natural language information to reduce recognition errors. Finally, I discuss how prosodic information in the speech signal might be used to improve understanding. PMID:7479813

Objectives: To provide an overview and tutorial of natural language processing (NLP) and modern NLP-system design. Target audience: This tutorial targets the medical informatics generalist who has limited acquaintance with the principles behind NLP and/or limited knowledge of the current state of the art. Scope: We describe the historical evolution of NLP, and summarize common NLP sub-problems in this extensive field. We then provide a synopsis of selected highlights of medical NLP efforts. After providing a brief description of common machine-learning approaches that are being used for diverse NLP sub-problems, we discuss how modern NLP architectures are designed, with a summary of the Apache Foundation's Unstructured Information Management Architecture. We finally consider possible future directions for NLP, and reflect on the possible impact of IBM Watson on the medical field. PMID:21846786

Language and humans are not outside of nature. Not only babies but even adults can acquire a new language naturally if they have a natural multilingual environment around them. The reason this is possible may be that every human has an ability to grasp language as a whole, and at the same time, language has an order which is the easiest for humans to acquire. The process of this natural acquisition and a result of investigating the order of Japanese vowels are introduced.

The study of large-scale characteristics of graphs that arise in natural language processing is an essential step in finding structural regularities. Structure discovery processes have to be designed with an awareness of these properties. Examining and contrasting the effects of processes that generate graph structures similar to those observed in language data sheds light on the structure of language and its evolution.

Described is a milieu teaching approach to language development of young handicapped children. The method involves prompting and contingent delivery of reinforcers during normal language interactions in such classroom settings as free play, lunch, or instructional periods. (MC)

A working prototype of a flexible 'natural language' interface for command and control situations is presented. This prototype is analyzed from two standpoints. First is the role of natural language for command and control, its realistic requirements, and how well the role can be filled with current practical technology. Second, technical concepts for implementation are discussed and illustrated by their application in the prototype system. It is also shown how adaptive or 'learning' features can greatly ease the task of encoding language knowledge in the language processor.

Discussion of research into information and text retrieval problems highlights the work with automatic natural language processing (NLP) that is reported in this issue. Topics discussed include the occurrences of nominal compounds; anaphoric references; discontinuous language constructs; automatic back-of-the-book indexing; and full-text analysis.…

This article explores how language and the multisemiotic nature of mathematics can present potential challenges for English language learners (ELLs). Based on two qualitative studies of the discourse of mathematics, we discuss some of the linguistic challenges of mathematics for ELLs in order to highlight the potential difficulties they may have…

The development of a Natural Language Interface (NLI) is presented which is semantic-based and uses Conceptual Dependency representation. The system was developed using Lisp and currently runs on a Symbolics Lisp machine.

This dissertation studies how people describe emotions with language and how computers can simulate this descriptive behavior. Although many non-human animals can express their current emotions as social signals, only humans can communicate about emotions symbolically. This symbolic communication of emotion allows us to talk about emotions that we…

The natural phonology theory, related to European structuralism, makes two fundamental assumptions: (1) phonemes are mental images of the sounds of language, and (2) phonological processes represent subconscious mental substitutions of one sound or class of sounds for another that are the natural response to the relative difficulties of sound…

The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of entropy of next-word probability distributions as well as surprisal. A computational model determined entropy and surprisal for each word in 3 literary stories. Twenty-four healthy participants listened to the same 3 stories while their brain activation was measured using fMRI. Reversed speech fragments were presented as a control condition. Brain areas sensitive to entropy were left ventral premotor cortex, left middle frontal gyrus, right inferior frontal gyrus, left inferior parietal lobule, and left supplementary motor area. Areas sensitive to surprisal were left inferior temporal sulcus ("visual word form area"), bilateral superior temporal gyrus, right amygdala, bilateral anterior temporal poles, and right inferior frontal sulcus. We conclude that prediction during language comprehension can occur at several levels of processing, including at the level of word form. Our study exemplifies the power of combining computational linguistics with cognitive neuroscience, and additionally underlines the feasibility of studying continuous spoken language materials with fMRI. PMID:25903464
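For readers unfamiliar with the two information-theoretic measures named above, a minimal sketch of how entropy and surprisal are computed from a next-word probability distribution (the probabilities are invented placeholders, not the model used in the study):

```python
import math

def entropy(next_word_probs):
    """Shannon entropy (bits) of a next-word probability distribution."""
    return -sum(p * math.log2(p) for p in next_word_probs.values() if p > 0)

def surprisal(next_word_probs, observed_word):
    """Surprisal (bits) of the word that actually occurred."""
    return -math.log2(next_word_probs[observed_word])

# Hypothetical distribution over the next word after "the cat sat on the ..."
probs = {"mat": 0.6, "floor": 0.3, "roof": 0.1}
print(entropy(probs))            # uncertainty before the word is heard
print(surprisal(probs, "roof"))  # how unexpected the observed word was
```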

This paper describes research on the development of a methodology for representing the information in texts and of procedures for relating the linguistic structure of a request to the corresponding representations. The work is being done in the context of a prototype system that will allow physicians and other health professionals to access information in a computerized textbook of hepatitis through natural language dialogues. The interpretation of natural language queries is derived from DIAMOND/DIAGRAM, a linguistically motivated, domain-independent natural language interface developed at SRI. A text access component is being developed that uses representations of the propositional content of text passages and of the hierarchical structure of the text as a whole to retrieve relevant information.

The aim of the present work is to investigate the relative contribution of ordered and stochastic components in natural written texts and examine the influence of text category and language on these. To this end, a binary representation of written texts and the generated symbolic sequences are examined by the standard block entropy analysis and the Shannon and Kolmogorov entropies are obtained. It is found that both entropies are sensitive to both language and text category with the text category sensitivity to follow almost the same trends in both languages (English and Greek) considered. The values of these entropies are compared with those of stochastically generated symbolic sequences and the nature of correlations present in this representation of real written texts is identified.
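As a minimal illustration of the block-entropy analysis described above, the following sketch maps text to a binary sequence and computes entropies of blocks of increasing length (the binarisation rule and the sample text are invented for the example; the paper's actual mapping may differ):

```python
import math
from collections import Counter

def block_entropy(symbols, n):
    """Shannon entropy (bits) of overlapping blocks of length n."""
    blocks = [tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy binary representation of a text: 1 for vowels, 0 for everything else.
text = "the quick brown fox jumps over the lazy dog"
binary = [1 if ch in "aeiou" else 0 for ch in text]
for n in (1, 2, 3):
    print(n, block_entropy(binary, n))
```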

It is proposed that humans have available to them two systems for interpreting natural language. One system is familiar from formal semantics. It is a type-based system that pairs a syntactic form with its interpretation using grammatical rules of composition. This system delivers both plausible and implausible meanings. The other proposed system…

The development of a Natural Language Interface which is semantic-based and uses Conceptual Dependency representation is presented. The system was developed using Lisp and currently runs on a Symbolics Lisp machine. A key point is that the parser handles morphological analysis, which expands its capability to understand more words.

Reports on the design and implementation of PBS (Parsing, Boolean Recognition, Stemming), a software module used in conjunction with an intermediary program to interpret natural language queries used for online database searching. Results of a test of the initial version, which is designed for use with bibliographic files, are reported. (13…

The Policy-Based Management Natural Language Parser (PBEM) is a rules-based approach to enterprise management that can be used to automate certain management tasks. This parser simplifies the management of a given endeavor by establishing policies to deal with situations that are likely to occur. Policies are operating rules that can be referred to as a means of maintaining order, security, consistency, or other ways of successfully furthering a goal or mission. PBEM provides a way of managing configuration of network elements, applications, and processes via a set of high-level rules or business policies rather than managing individual elements, thus switching the control to a higher level. This software allows unique management rules (or commands) to be specified and applied to a cross-section of the Global Information Grid (GIG). This software embodies a parser that is capable of recognizing and understanding conversational English. Because all possible dialect variants cannot be anticipated, a unique capability was developed that parses based on conversational intent rather than the exact way the words are used. This software can increase productivity by enabling a user to converse with the system in conversational English to define network policies. PBEM can be used in both manned and unmanned science-gathering programs. Because policy statements can be domain-independent, this software can be applied equally to a wide variety of applications.

Natural Language Processing (NLP) is that part of Artificial Intelligence (AI) concerned with endowing computers with verbal and listener repertoires, so that people can interact with them more easily. Most attention has been given to accurately parsing and generating syntactic structures, although NLP researchers are finding ways of handling the semantic content of language as well. It is increasingly apparent that understanding the pragmatic (contextual and consequential) dimension of natural language is critical for producing effective NLP systems. While there are some techniques for applying pragmatics in computer systems, they are piecemeal, crude, and lack an integrated theoretical foundation. Unfortunately, there is little awareness that Skinner's (1957) Verbal Behavior provides an extensive, principled pragmatic analysis of language. The implications of Skinner's functional analysis for NLP and for verbal aspects of epistemology lead to a proposal for a “user expert”—a computer system whose area of expertise is the long-term computer user. The evolutionary nature of behavior suggests an AI technology known as genetic algorithms/programming for implementing such a system. PMID:22477052

Natural Language Processing in the medical domain is becoming more and more powerful, efficient, and ready to be used in daily practice. The needs for such tools are enormous in the medical field, due to the vast amount of written text in medical records. In the authors' view, the Electronic Patient Record (EPR) is achieved neither with Information Systems of all kinds nor with commercially available word processing systems. Natural Language Processing (NLP) is one dimension of the EPR, as are Image Processing and Decision Support Systems. Analysis of medical texts to facilitate indexing and retrieval is well known. A generation tool is needed to produce progress notes from menu-driven systems. The computer systems of tomorrow cannot miss any single dimension. Since 1988, we have been developing an NLP system; it is supported by the European program AIM (Advanced Informatics in Medicine) within the GALEN and HELIOS consortia and by the CERS (Commission d'Encouragement à la Recherche Scientifique) in Switzerland. The main directions of development are: a medical language analyzer, a language generator, a query processor, and dictionary building tools to support the Medical Linguistic Knowledge Base (MLKB). The knowledge representation schema is essentially based on Sowa's conceptual graphs, and the MLKB is multilingual from its design phase; it currently incorporates the English and the French languages, and German will follow. The goal of this demonstration is to provide evidence of what exists today, what will soon be available, and what is planned for the long term. Complete sentences will be processed in real time, and the browsing capabilities of the MLKB will be exercised. In particular, the following features will be presented: analysis of complete sentences with verbs and relatives, as extracted from clinical narratives, with special attention to the method of "proximity processing" as developed in our group and the rule based

Despite its ubiquity in human learning, very little work has been done in artificial intelligence on agents that learn from interactive natural language instructions. In this paper, the problem of learning procedures from interactive, situated instruction is examined in which the student is attempting to perform tasks within the instructional domain, and asks for instruction when it is needed. Presented is Instructo-Soar, a system that behaves and learns in response to interactive natural language instructions. Instructo-Soar learns completely new procedures from sequences of instruction, and also learns how to extend its knowledge of previously known procedures to new situations. These learning tasks require both inductive and analytic learning. Instructo-Soar exhibits a multiple execution learning process in which initial learning has a rote, episodic flavor, and later executions allow the initially learned knowledge to be generalized properly.

Users and programmers of small systems typically do not have the skills needed to design a database schema from an English description of a problem. This paper describes a system that automatically designs databases for such small applications from English descriptions provided by end-users. Although the system has been motivated by the space applications at Kennedy Space Center, and portions of it have been designed with that idea in mind, it can be applied to different situations. The system consists of two major components: a natural language understander and a problem-solver. The paper describes briefly the knowledge representation structures constructed by the natural language understander, and then explains the problem-solver in detail.

We propose a theory for modeling the semantic and pragmatic properties of natural language expressions used to refer. The sorts of expressions to be discussed include proper names, definite noun phrases and personal pronouns. We will focus in this paper on such expressions in the singular, having discussed elsewhere procedures for extending the present sort of analysis to various plural uses of these expressions. Propositions involving referential expressions are formally redefined in a second order predicate calculus, in which various semantic and pragmatic factors involved in establishing and interpreting references are modeled as rules of inference. Uses of referential utterances are differentiated according to the means used for individuating the object referred to. Analyses are provided for anaphoric, contextual, demonstrative, introductory and citational individuative devices. We analyze sentences like 'The man [or John] is wise' as conditionals of the form 'Whatever is uniquely a man [or named "John"] relevant to the present discourse is wise'. So modeled, the presupposition of existence (which historically has concerned much logical analysis of such sentences) is represented as a conversational implicature of the sort which obtains from any proposition of the form '(P -> Q)' to the corresponding 'P'. This formalization is intended to serve as part of an empirical theory of natural language phenomena. Being an empirical theory, ours will strive to model the greatest possible diversity of phenomena using a minimum of formal apparatus. Such a theory may provide a foundation for automatic systems to predict and replicate natural language phenomena for purposes of text understanding and synthesis.
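A rough rendering of the analysis of 'The man is wise' in the conditional style sketched above; the predicate names are illustrative stand-ins, not the authors' exact second-order formalization:

```latex
% 'The man is wise' analysed as a conditional over whatever uniquely
% satisfies the descriptive content relative to the current discourse:
\forall x\, \bigl[ \mathrm{Man}(x) \wedge \mathrm{UniqueInDiscourse}(x) \rightarrow \mathrm{Wise}(x) \bigr]
% The presupposition of existence is then the implicature obtained by
% passing from (P \rightarrow Q) to P, i.e. that the antecedent is satisfied:
\exists x\, \bigl[ \mathrm{Man}(x) \wedge \mathrm{UniqueInDiscourse}(x) \bigr]
```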

The article proposes the construction of a "clever" thesaurus by algebraic means, based on a formalization of the concepts of the language image and the figurative meaning of natural-language constructs. A formal theory based on a binary operator of directional associative relation is constructed, and the notion of an associative normal form of image constructions is introduced. A model of a commutative semigroup, which provides a presentation of a sentence as three components of an interrogative language image construction, is considered.

Natural language processing (NLP) is concerned with getting computers to do useful things with natural language. Major applications include machine translation, text generation, information retrieval, and natural language interfaces. Reviews important developments since 1987 that have led to advances in NLP; current NLP applications; and problems…

LCS (Language Comprehension System) is a software package designed to improve man-machine communication with computer programs. Different simple structures and functions are available to build man-machine interfaces in natural language. A user may write a sentence in good English or in telegraphic style. The system uses pattern matching techniques to detect misspelled words (or badly typed words) and to correct them. Several methods of analysis are available at any level (lexical, syntactic, semantic...). A special knowledge acquisition system is used to introduce new words by giving a description in natural language. A semantic network is extended to a representation close to a connectionist graph, for a better understanding of polysemic words and ambiguities. An application is currently used for a man-machine interface of an expert system in computer-aided education, for a better dialogue with the user during the explanation-of-reasoning phase. The object of this paper is to present the LCS system, especially at the lexical level, the knowledge representation and acquisition level, and the semantic level (for pronoun references and ambiguity).

The proposition that natural language concepts are represented as fuzzy sets of meaning components, a generalization of the traditional theory of sets, and that language operators--adverbs, negative markers, and adjectives--can be considered as operators on fuzzy sets was assessed empirically. (Editor/RK)

Text data forms the largest bulk of digital data that people encounter and exchange daily. For this reason the potential usage of text data as a covert channel for secret communication is an imminent concern. Even though information hiding in natural language text has started to attract great interest, there has been no study on attacks against these applications. In this paper we examine the robustness of lexical steganography systems. We used a universal steganalysis method based on language models and support vector machines to differentiate sentences modified by a lexical steganography algorithm from unmodified sentences. The experimental accuracy of our method on classification of steganographically modified sentences was 84.9%. On classification of isolated sentences we obtained a high recall rate whereas the precision was low.
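A minimal sketch of the kind of sentence classifier described above, with simple word-frequency features and a support vector machine standing in for the paper's language-model features; the sentences and labels are invented placeholders:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder training data: 1 = steganographically modified, 0 = unmodified.
sentences = [
    "the committee reached a unanimous verdict",   # unmodified
    "the committee attained a unanimous verdict",  # synonym-substituted
    "she closed the door quietly",                 # unmodified
    "she shut the door soundlessly",               # synonym-substituted
]
labels = [0, 1, 0, 1]

# Fit a word/bigram SVM and classify an unseen sentence.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(sentences, labels)
print(clf.predict(["the board attained a unanimous verdict"]))
```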

The textbook provides a semantic explanation accompanying a complete set of GOAL syntax diagrams, system concepts, language component interaction, and general language concepts necessary for efficient language implementation/execution.

This report addresses the problems of using natural language (English) as the communication language for advanced computer-based instructional systems. The instructional environment places requirements on a natural language understanding system that exceed the capabilities of all existing systems, including: (1) efficiency, (2) habitability, (3)…

Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines using natural languages (English, Japanese, German, etc.) rather than formal computer languages. NLP is a major research area in the fields of artificial intelligence and computational linguistics. Commercial…

As machines grow in scale and complexity, techniques to make the most effective use of network, memory, and processor resources will also become increasingly important. Programming models that rely on one-sided communication or global address space support have demonstrated advantages for productivity and performance, but they are most effective when used with proper OS support. We propose to develop OS and runtime support for programming models like UPC, GA, Charm++, and HPCS languages, which rely on one-sided communication. Rather than a full OS model, we envision applications bundled with only the necessary OS functions linked into the application in user space -- relying on the hypervisor for protection, resource sharing, and management of Quality of Service guarantees. Our services will include support for remote reads and writes to memory, along with remote active message handlers, which are essential for support of fast noncontiguous memory operations, atomic operations, and event-driven applications.

Integrating diverse information sources and application software in a principled and general manner will require a very capable advanced information management (AIM) system. In particular, such a system will need a comprehensive addressing scheme to locate the material in its docuverse. It will also need a natural language processing (NLP) system of great sophistication. It seems that the NLP system must serve three functions. First, it provides a natural language interface (NLI) for the users. Second, it serves as the core component that understands and makes use of the real-world interpretations (RWIs) contained in the docuverse. Third, it enables the reasoning specialists (RSs) to arrive at conclusions that can be transformed into procedures that will satisfy the users' requests. The best candidate for an intelligent agent that can satisfactorily make use of RSs and transform documents (TDs) appears to be an object oriented data base (OODB). OODBs have, apparently, an inherent capacity to use the large numbers of RSs and TDs that will be required by an AIM system and an inherent capacity to use them in an effective way.

A commonly held belief is that language is an aspect of the biological system since the capacity to acquire language is innate and evolved along Darwinian lines. Written language, on the other hand, is thought to be an artifact and a surrogate of speech; it is, therefore, neither natural nor biological. This disparaging view of written language,…

Notes similarities between certain aspects of the development of the natural language English and the artificial language FORTRAN. Discusses evolutionary history, grammar, style, syntax, varieties, and attempts at standardization. Emphasizes modifications which natural and artificial languages have undergone. Suggests that some modifications were…

Interaction with computers in natural language requires a language that is flexible and suited to the task. This study of natural dialogue was undertaken to reveal those characteristics which can make computer English more natural. Experiments were made in three modes of communication: face-to-face, terminal-to-terminal, and human-to-computer,…

During this contract period the authors have: (a) continued investigation of events and actions by means of representation schemes called 'event shape diagrams'; (b) written a parsing program which selects appropriate word and sentence meanings by a parallel process known as activation and inhibition; (c) begun investigation of the point of a story or event by modeling the motivations and emotional behaviors of story characters; (d) started work on combining and translating two machine-readable dictionaries into a lexicon and knowledge base which will form an integral part of our natural language understanding programs; (e) made substantial progress toward a general model for the representation of cognitive relations by comparing English scene and event descriptions with similar descriptions in other languages; (f) constructed a general model for the representation of tense and aspect of verbs; (g) made progress toward the design of an integrated robotics system which accepts English requests, and uses visual and tactile inputs in making decisions and learning new tasks.

The paper describes a natural language understanding system, START, that translates English text into a knowledge base. The understanding and the generating modules of START share a Grammar which is built upon reversible transformations. Users can retrieve information by querying the knowledge base in English; the system then produces an English response. START can be easily adapted to many different domains. One such domain is spacecraft sequencing. A high-level overview of sequencing as it is practiced at JPL is presented in the paper, and three areas within this activity are identified for potential application of the START system. Examples are given of an actual dialog with START based on simulated data for the Mars Observer mission.

Discusses five recent books about language that address issues that arise in classrooms with an increasing number of diverse dialects and varied home languages. Discusses the complexities of language, misunderstandings in the Ebonics controversy, socioeducational issues, and classroom ideas for teachers. Describes two web sites. (SR)

A methodology is explained for inferring hierarchies representing heuristic knowledge about the checkout, control, and monitoring subsystem (CCMS) of the space shuttle launch processing system from natural language input. Our method identifies failures explicitly and implicitly described in natural language by domain experts and uses those descriptions to recommend classifications for inclusion in the experts' heuristic hierarchies.

Natural language processing (NLP) is a field of computer science and linguistics devoted to creating computer systems that use human (natural) language as input and/or output. The authors propose that NLP can also be used for game studies research. In this article, the authors provide an overview of NLP and describe some research possibilities…

Natural language communication with computers has long been a major goal of Artificial Intelligence, both for what it can tell us about intelligence in general and for its practical utility - data bases, software packages, and AI-based expert systems all require flexible interfaces to a growing community of users who are not able or do not wish to communicate with computers in formal, artificial command languages. Whereas many of the fundamental problems of general natural language processing (NLP) by machine remain to be solved, the area has matured in recent years to the point where practical natural language interfaces to software systems can be constructed in many restricted, but nevertheless useful, circumstances. This tutorial is intended to survey the current state of applied natural language processing by presenting computationally effective NLP techniques, by discussing the range of capabilities these techniques provide for NLP systems, and by discussing their current limitations. Following the introduction, this document is divided into two major sections: the first on language recognition strategies at the single sentence level, and the second on language processing issues that arise during interactive dialogues. In both cases, we concentrate on those aspects of the problem appropriate for interactive natural language interfaces, but relate the techniques and systems discussed to more general work on natural language, independent of application domain.

The PRC Adaptive Knowledge-based Text Understanding System (PAKTUS) is an environment for developing natural language understanding (NLU) systems. It uses a knowledge-based approach in an integrated hybrid architecture based on a factoring of the NLU problem into its lexical, syntactic, conceptual, domain-specific, and pragmatic components. The goal is a robust system that benefits from the strengths of several NLU methodologies, each applied where most appropriate. PAKTUS employs a frame-based knowledge representation and associative networks throughout. The lexical component uses morphological knowledge and word experts. Syntactic knowledge is represented in an Augmented Transition Network (ATN) grammar that incorporates rule-based programming. Case grammar is used for canonical conceptual representation with constraints. Domain-specific templates represent knowledge about specific applications as patterns of the form used in logic programming. Pragmatic knowledge may augment any of the other types and is added wherever needed for a particular domain. The system has been constructed in an interactive graphic programming environment. It has been used successfully to build a prototype front end for an expert system. This integration of existing technologies makes limited but practical NLU feasible now for narrow, well-defined domains.

The relation between a real-world category (sex) and a linguistic category (gender) is examined. The gender system of Indo-European languages is discussed, and the way gender works in Greek, one of the older Indo-European languages, is examined at some length. The conclusion is that, but for the existence of separate gender-sensitive adjectival…

The Systems Test and Operation Language (STOL) provides the means for user communication with payloads, applications programs, and other ground system elements. It is a systems operation language that enables an operator or user to communicate a command to a computer system. The system interprets each high-level language directive from the user and performs the indicated action, such as executing a program, printing out a snapshot, or sending a payload command. This document presents the following: (1) required language features and implementation considerations; (2) basic capabilities; (3) telemetry, command, and input/output directives; (4) procedure definition and control; (5) listing, extension, and STOL nucleus capabilities.

The user language interfaces currently developed for information systems are generally intended for serious users. These interfaces commonly ignore what is potentially the largest user group, i.e., casual users. This project discusses the concepts and implementation of a natural query language system which satisfies the nature and information needs of casual users by allowing them to communicate with the system in their native (natural) language. In addition, a framework for the development of such an interface is also introduced for the MADAM (Multics Approach to Data Access and Management) system at the University of Southwestern Louisiana.

Describes a study that investigated the use of natural language questions on Web search engines. Highlights include query languages; differences in search engine syntax; and results of logistic regression and analysis of variance that showed aspects of questions that predicted significantly different performances, including the number of words,…

The objective of this paper is to combine the viewpoint of model-theoretic semantics and generative grammar, to define semantics for context-free languages, and to apply the results to some fragments of natural language. Following the introduction in the first section, Section 2 describes a simple artificial example to illustrate how a semantic…

Focuses on natural language processing (NLP) in information retrieval. Defines the seven levels at which people extract meaning from text/spoken language. Discusses the stages of information processing; how an information retrieval system works; advantages to adding full NLP to information retrieval systems; and common problems with information…

Informatics methods, such as text mining and natural language processing, are always involved in bioinformatics research. In this study, we discuss text mining and natural language processing methods in bioinformatics from two perspectives. First, we aim to search for knowledge on biology, retrieve references using text mining methods, and reconstruct databases. For example, protein-protein interactions and gene-disease relationships can be mined from PubMed. Then, we analyze the applications of text mining and natural language processing techniques in bioinformatics, including predicting protein structure and function and detecting noncoding RNA. Finally, numerous methods and applications, as well as their contributions to bioinformatics, are discussed for future use by text mining and natural language processing researchers. PMID:26525745

This paper gives an overview of the research and implementation challenges we encountered in building an end-to-end natural language processing based watermarking system. By natural language watermarking, we mean embedding the watermark into a text document, using the natural language components as the carrier, in such a way that the modifications are imperceptible to the readers and the embedded information is robust against possible attacks. Of particular interest is using the structure of the sentences in natural language text in order to insert the watermark. We evaluated the quality of the watermarked text using an objective evaluation metric, the BLEU score, which is commonly used in the statistical machine translation community. Our current system prototype achieves a 0.45 BLEU score on a scale of [0, 1].
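As an illustration of the evaluation metric mentioned above, a minimal BLEU computation using NLTK; the sentence pair is an invented placeholder and does not reproduce the paper's own evaluation setup:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Original sentence vs. its watermarked paraphrase (placeholder example).
reference = [["the", "shipment", "arrived", "two", "days", "late"]]
candidate = ["the", "delivery", "arrived", "two", "days", "late"]

# Smoothing avoids zero scores for short sentences with missing n-grams.
score = sentence_bleu(reference, candidate,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 2))  # closer to 1.0 means the watermarked text stays closer to the original
```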

In this paper, we propose a novel neural network considering deep cases. It can learn knowledge from natural language documents and can perform recall and inference. Various techniques for natural language processing using neural networks have been proposed. However, the natural language sentences used in these techniques consist of only a few words, and they cannot handle complicated sentences. In order to solve these problems, the proposed network divides natural language sentences into a sentence layer, a knowledge layer, ten kinds of deep case layers, and a dictionary layer. It can learn the relations among sentences and among words by dividing sentences. The advantages of the method are as follows: (1) ability to handle complicated sentences; (2) ability to restructure sentences; (3) usage of the conceptual dictionary, Goi-Taikei, as the long term memory in a brain. Two kinds of experiments were carried out using the goo dictionary and Wikipedia as knowledge sources. Superior performance of the proposed neural network has been confirmed.

This survey of attitude theory and research published between 1996 and 1999 covers the conceptualization of attitude, attitude formation and activation, attitude structure and function, and the attitude-behavior relation. Research regarding the expectancy-value model of attitude is considered, as are the roles of accessible beliefs and affective versus cognitive processes in the formation of attitudes. The survey reviews research on attitude strength and its antecedents and consequences, and covers progress made on the assessment of attitudinal ambivalence and its effects. Also considered is research on automatic attitude activation, attitude functions, and the relation of attitudes to broader values. A large number of studies dealt with the relation between attitudes and behavior. Research revealing additional moderators of this relation is reviewed, as are theory and research on the link between intentions and actions. Most work in this context was devoted to issues raised by the theories of reasoned action and planned behavior. The present review highlights the nature of perceived behavioral control, the relative importance of attitudes and subjective norms, the utility of adding more predictors, and the roles of prior behavior and habit. PMID:11148298

The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time, and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high, the proposed method rejects the question and does not answer it. Multipredicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively. PMID:26904105
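A minimal sketch of the matching-and-rejection idea described above, with a toy expression-to-query table and a string-similarity threshold standing in for the paper's actual expression collection and confidence model (all expressions, queries, and the threshold are invented):

```python
from difflib import SequenceMatcher

# Hypothetical table: each natural language expression maps to one formal (graph) query.
EXPRESSION_TO_QUERY = {
    "who directed {film}": "SELECT ?d WHERE { {film} :directedBy ?d }",
    "when was {film} released": "SELECT ?y WHERE { {film} :releaseYear ?y }",
}
CONFIDENCE_THRESHOLD = 0.75

def translate(question):
    """Return the formal query for the best-matching expression, or None (reject)."""
    best_expr, best_score = None, 0.0
    for expr in EXPRESSION_TO_QUERY:
        score = SequenceMatcher(None, question.lower(), expr).ratio()
        if score > best_score:
            best_expr, best_score = expr, score
    if best_score < CONFIDENCE_THRESHOLD:
        return None  # unanswerable: reject rather than guess
    return EXPRESSION_TO_QUERY[best_expr]

print(translate("who directed {film}"))
print(translate("what is the capital of france"))  # rejected: no sufficiently close expression
```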

Describes an activity which provides a starting point for exploring geometric concepts through language. The activity incorporates three general ideas: (1) relating to personal experience; (2) integrating with other disciplines; and (3) requiring students to express their ideas. A sample activity would include writing a story about what life would…

This thesis is concerned with the description and analysis of two semantically different types of definite articles in German. While the existence of distinct article paradigms in various Germanic dialects and other languages has been acknowledged in the descriptive literature for quite some time, the theoretical implications of their existence…

Describes the implementation of FromTo-CLIR, a Web-based natural-language interface for cross-language information retrieval that was tested with Korean and Japanese. Proposes a method that uses a semantic category tree and collocation to resolve the ambiguity of query translation. (Author/LRW)

Three parents of children with autism were taught to implement the Natural Language Paradigm (NLP). Data were collected on parent implementation, multiple measures of child language, and play. The parents were able to learn to implement the NLP procedures quickly and accurately with beneficial results for their children. Increases in the overall…

Describes the role of natural language processing (NLP) techniques, such as parsing and semantic analysis, within current language tutoring systems. Examines trends, design issues and tradeoffs, and potential contributions of NLP techniques with respect to instructional theory and educational practice. Addresses limitations and problems in using…

Computer-based Natural Language Processing (NLP) is the key to enabling humans and their computer-based creations to interact with machines in natural language (like English, Japanese, German, etc., in contrast to formal computer languages). The doors that such an achievement can open have made this a major research area in Artificial Intelligence and Computational Linguistics. Commercial natural language interfaces to computers have recently entered the market, and the future looks bright for other applications as well. This report reviews the basic approaches to such systems, the techniques utilized, applications, the state of the art of the technology, issues and research requirements, the major participants, and finally, future trends and expectations. It is anticipated that this report will prove useful to engineering and research managers, potential users, and others who will be affected by this field as it unfolds.

An influential line of thought claims that natural language and arithmetic processing require recursion, a putative hallmark of human cognitive processing (Chomsky in Evolution of human language: biolinguistic perspectives. Cambridge University Press, Cambridge, pp 45-61, 2010; Fitch et al. in Cognition 97(2):179-210, 2005; Hauser et al. in Science 298(5598):1569-1579, 2002). First, we question the need for recursion in human cognitive processing by arguing that a generally simpler and less resource demanding process--iteration--is sufficient to account for human natural language and arithmetic performance. We argue that the only motivation for recursion, the infinity in natural language and arithmetic competence, is equally approachable by iteration and recursion. Second, we submit that the infinity in natural language and arithmetic competence reduces to imagining infinite embedding or concatenation, which is completely independent from the ability to implement infinite processing, and thus, independent from both recursion and iteration. Furthermore, we claim that a property of natural language is physically uncountable finity and not discrete infinity. PMID:20652723
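To illustrate the claim that iteration can do the work usually attributed to recursion, a toy sketch that builds the same center-embedded word sequence both recursively and iteratively; the sentences are invented and the example only demonstrates the equivalence for finite depth, not the authors' full argument:

```python
def embed_recursive(nouns, verbs):
    """Recursive construction of a center-embedded sequence: N1 N2 ... V... V2 V1."""
    if not nouns:
        return []
    inner = embed_recursive(nouns[1:], verbs[1:])
    return [nouns[0]] + inner + [verbs[0]]

def embed_iterative(nouns, verbs):
    """Same structure built by iteration: list the nouns, then the verbs in reverse."""
    return list(nouns) + list(reversed(verbs[:len(nouns)]))

nouns = ["the cat", "the dog", "the boy"]
verbs = ["ran", "chased", "saw"]
assert embed_recursive(nouns, verbs) == embed_iterative(nouns, verbs)
print(" ".join(embed_iterative(nouns, verbs)))
# "the cat the dog the boy saw chased ran", i.e. the pattern of
# "the cat [that the dog [that the boy saw] chased] ran"
```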

This book presents a tractable computational model that can cope with complex morphological operations, especially in Semitic languages, and less complex morphological systems present in Western languages. It outlines a new generalized regular rewrite rule system that uses multiple finite-state automata to cater to root-and-pattern morphology,…

How human language arose is a mystery in the evolution of Homo sapiens. Miyagawa et al. (2013) put forward a proposal, which we will call the Integration Hypothesis of human language evolution, that holds that human language is composed of two components, E for expressive, and L for lexical. Each component has an antecedent in nature: E as found, for example, in birdsong, and L in, for example, the alarm calls of monkeys. E and L integrated uniquely in humans to give rise to language. A challenge to the Integration Hypothesis is that while these non-human systems are finite-state in nature, human language is known to require characterization by a non-finite state grammar. Our claim is that E and L, taken separately, are in fact finite-state; when a grammatical process crosses the boundary between E and L, it gives rise to the non-finite state character of human language. We provide empirical evidence for the Integration Hypothesis by showing that certain processes found in contemporary languages that have been characterized as non-finite state in nature can in fact be shown to be finite-state. We also speculate on how human language actually arose in evolution through the lens of the Integration Hypothesis. PMID:24936195

The basic requirements for a standard test and checkout language applicable to all phases of the space shuttle test and ground operations are determined. The general characteristics outlined here represent the integration of selected ideas and concepts from operational elements within Kennedy Space Center (KSC) that represent diverse disciplines associated with space vehicle testing and launching operations. Special reference is made to two studies conducted in this area for KSC as authorized by the Advanced Development Element of the Office of Manned Space Flight (MSF). Information contained in reports from these studies has contributed significantly to the final selection of language features depicted in this technical report.

The GOAL (Ground Operations Aerospace Language) test programming language was developed for use in ground checkout operations in a space vehicle launch environment. To ensure compatibility with a maximum number of applications, a systematic and error-free method of referencing command/response (analog and digital) hardware measurements is a principal feature of the language. Central to the concept of requiring the test language to be independent of launch complex equipment and terminology is that of addressing measurements via symbolic names that have meaning directly in the hardware units being tested. To form the link from the test program through test system interfaces to the units being tested, the concept of a data bank has been introduced. The data bank is actually a large cross-reference table that provides pertinent hardware data such as interface unit addresses, data bus routings, or any other system values required to locate and access measurements.
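
The data-bank concept lends itself to a simple illustration. The following sketch (all names, addresses, and fields are hypothetical, not taken from GOAL documentation) shows a cross-reference table that resolves a symbolic measurement name to the hardware information needed to access it:

```python
# A minimal sketch of the data-bank idea: a cross-reference table that
# lets a test program refer to measurements by symbolic name while the
# table supplies the hardware details needed to locate and access them.

from dataclasses import dataclass

@dataclass
class MeasurementEntry:
    interface_unit_address: int   # address of the interface unit
    data_bus_route: str           # routing identifier for the data bus
    signal_type: str              # "analog" or "digital"

# The data bank itself: symbolic name -> hardware access information.
DATA_BANK = {
    "LOX_TANK_PRESSURE": MeasurementEntry(0x2A, "BUS-1", "analog"),
    "ENGINE_CUTOFF_CMD": MeasurementEntry(0x3F, "BUS-2", "digital"),
}

def resolve(symbolic_name: str) -> MeasurementEntry:
    """Translate a test-program symbolic name into hardware access data."""
    return DATA_BANK[symbolic_name]

print(resolve("LOX_TANK_PRESSURE"))
```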

A series of NASA and Contractor studies sponsored by NASA/KSC resulted in a specification for the Ground Operations Aerospace Language (GOAL). The Cape Kennedy Facility of the IBM Corporation was given the responsibility, under existing contracts, to perform an analysis of the Language Specification, to design and develop a GOAL Compiler, to provide a specification for a data bank, to design and develop an interpretive code translator, and to perform associated application studies.

Researchers, motivated by the need to improve the efficiency of naturallanguage processing tools to handle web-scale data, have recently arrived at models that remarkably match the expected features of human language processing under the Now-or-Never bottleneck framework. This provides additional support for said framework and highlights the research potential in the interaction between applied computational linguistics and cognitive science. PMID:27561430

An overview of artificial intelligence (AI), its core ingredients, and its applications is presented. The knowledge representation, logic, problem solving approaches, languages, and computers pertaining to AI are examined, and the state of the art in AI is reviewed. The use of AI in expert systems, computer vision, naturallanguage processing, speech recognition and understanding, speech synthesis, problem solving, and planning is examined. Basic AI topics, including automation, search-oriented problem solving, knowledge representation, and computational logic, are discussed.

This paper presents a pilot study on the process of manually annotating naturallanguage EHR questions with a formal meaning representation. This formal representation could then be used as a structured query as part of a naturallanguage interface for electronic health records. This study analyzes the challenges of representing EHR questions as structured queries as well as the feasibility of creating a sufficiently large corpus of manually annotated structured queries for EHR questions. A set of 100 EHR questions, sampled from actual questions asked by ICU physicians [1], is used to perform the analysis. The ultimate goal of this research is to enable automatic methods for understanding EHR questions for use in a naturallanguage EHR interface. PMID:26306260

The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for professional-related activities. The driving function of this research is allowing a non-UAV pilot, an operator, to define and manage a mission. This paper describes the preliminary usability measures of an interface that allows an operator to define the mission using speech to make inputs. An experiment was conducted to begin to enumerate the efficacy and user acceptance of using voice commands to define a multi-UAV mission and to provide high-level vehicle control commands such as "takeoff." The primary independent variable was input type - voice or mouse. The primary dependent variables consisted of the correctness of the mission parameter inputs and the time needed to make all inputs. Other dependent variables included NASA-TLX workload ratings and subjective ratings on a final questionnaire. The experiment required each subject to fill in an online form that contained comparable required information that would be needed for a package dispatcher to deliver packages. For each run, subjects typed in a simple numeric code for the package code. They then defined the initial starting position, the delivery location, and the return location using either pull-down menus or voice input. Voice input was accomplished using CMU Sphinx4-5prealpha for speech recognition. They then inputted the length of the package. These were the option fields. The subject had the system "Calculate Trajectory" and then "Takeoff" once the trajectory was calculated. Later, the subject used "Land" to finish the run. After the voice and mouse input blocked runs, subjects completed a NASA-TLX. At the conclusion of all runs, subjects completed a questionnaire asking them about their experience in inputting the mission parameters, and starting and stopping the mission using mouse and voice input. In general, the usability of voice commands is acceptable

An approach to naturallanguage meaning-based parsing in which the unit of linguistic knowledge is the word rather than the rewrite rule is described. In the word expert parser, knowledge about language is distributed across a population of procedural experts, each representing a word of the language, and each an expert at diagnosing that word's intended usage in context. The parser is structured around a coroutine control environment in which the generator-like word experts ask questions and exchange information in coming to collective agreement on sentence meaning. The word expert theory is advanced as a better cognitive model of human language expertise than the traditional rule-based approach. The technical discussion is organized around examples taken from the prototype LISP system which implements parts of the theory.
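
A minimal, hypothetical sketch of the word-expert idea follows. It is greatly simplified: each word's knowledge lives in its own small procedure that inspects neighbouring words, while the coroutine control structure of the actual parser is omitted.

```python
# Each word is handled by its own small "expert" that diagnoses the
# word's intended usage from its context.  A real word-expert parser
# coordinates experts as coroutines; this only shows the idea of
# distributing linguistic knowledge across per-word procedures.

def bank_expert(context):
    """Decide the sense of 'bank' from the surrounding words."""
    if {"river", "shore", "water"} & set(context):
        return "bank/ground-beside-water"
    if {"money", "deposit", "loan"} & set(context):
        return "bank/financial-institution"
    return "bank/unknown"

def default_expert(word):
    """Fallback expert for words with no dedicated procedure."""
    return lambda context: word

EXPERTS = {"bank": bank_expert}

def parse(sentence):
    words = sentence.lower().split()
    senses = []
    for i, w in enumerate(words):
        expert = EXPERTS.get(w, default_expert(w))
        context = words[:i] + words[i + 1:]
        senses.append(expert(context))
    return senses

print(parse("she kept her money in the bank"))
```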

PC software is described which provides flexible naturallanguage process control capability with an IBM PC or compatible machine. Hardware requirements include the PC, and suitable hardware interfaces to all controlled devices. Software required includes the Microsoft Disk Operating System (MS-DOS) operating system, a PC-based FORTRAN-77 compiler, and user-written device drivers. Instructions for use of the software are given as well as a description of an application of the system.

Intended for speech therapists, teachers of the mentally retarded, and others in special education, the collection contains reports by various authors on speech and language modification attempts that have utilized operant conditioning procedures, as well as several papers on background topics. Background papers on teaching treat environmental…

Describes a naturallanguage searching strategy for retrieving current material which has bearing on George Orwell's "1984," and identifies four main themes (technology, authoritarianism, press and psychological/linguistic implications of surveillance, political oppression) which have emerged from cross-database searches of the "Big Brother"…

When preschool children think of objects as organized into collections (e.g., forest, army) they solve certain problems better than when they think of the same objects as organized into classes (e.g., trees, soldiers). Present studies indicate preschool children occasionally distort naturallanguage inclusion hierarchies (e.g., oak, tree) into the…

This report summarizes the capabilities of five computer programs at Yale that do automatic naturallanguage processing as of the end of 1976. For each program an introduction to its overall intent is given, followed by the input/output, a short discussion of the research underlying the program, and a prognosis for future development. The programs…

This article examines the concept of simplification in second language (SL) learning, reviewing research on the simplified input that both naturalistic and classroom SL learners receive. Research indicates that simplified input, particularly if derived from naturally occurring interactions, does aid comprehension but has not been shown to…

Quantifier scope disambiguation (QSD) is one of the most challenging problems in deep naturallanguage understanding (NLU) systems. The most popular approach for dealing with QSD is to simply leave the semantic representation (scope-) underspecified and to incrementally add constraints to filter out unwanted readings. Scope underspecification has…
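
The underspecification-plus-constraints strategy can be illustrated with a small sketch (the example sentence and constraint are hypothetical): readings are kept as a set of quantifier orderings and filtered as constraints arrive.

```python
# Scope underspecification in miniature: instead of committing to one
# reading, keep the set of possible quantifier orderings and filter it
# incrementally as constraints are added.

from itertools import permutations

quantifiers = ["every student", "a book"]   # scope-taking elements

# Start fully underspecified: all orderings are possible readings.
readings = set(permutations(quantifiers))

def add_constraint(readings, outscopes):
    """Keep only readings where outscopes[0] takes scope over outscopes[1]."""
    hi, lo = outscopes
    return {r for r in readings if r.index(hi) < r.index(lo)}

print(readings)  # both 'every > a' and 'a > every' readings
readings = add_constraint(readings, ("every student", "a book"))
print(readings)  # only the 'every > a' reading survives
```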

In this dissertation, I examine the nature of object marking in American Sign Language (ASL). I investigate object marking by means of directionality (the movement of the verb towards a certain location in signing space) and by means of handling classifiers (certain handshapes accompanying the verb). I propose that object marking in ASL is…

Learning is facilitated by conversational interactions both with human tutors and with computer agents that simulate human tutoring and ideal pedagogical strategies. In this article, we describe some intelligent tutoring systems (e.g., AutoTutor) in which agents interact with students in naturallanguage while being sensitive to their cognitive…

Discusses an investigation of certain problems concerning the structural design of lexicons used in computational approaches to naturallanguage understanding. Emphasizes three aspects of design: retrieval of relevant portions of lexical items, storage requirements, and representation of meaning in the lexicon. (Available from ALLC, Dr. Rex Last,…

We propose a Proof-Theoretic Semantics (PTS) for a (positive) fragment E+0 of NaturalLanguage (NL) (English in this case). The semantics is intended [7] to be incorporated into actual grammars, within the framework of Type-Logical Grammar (TLG) [12]. Thereby, this semantics constitutes an alternative to the traditional model-theoretic semantics (MTS), originating in Montague's seminal work [11], used in TLG.

This review focuses on three main topics related to the nature of poststroke language recovery and reorganization. The first topic pertains to the nature of anatomical and physiological substrates in the infarcted hemisphere in poststroke aphasia, including the nature of the hemodynamic response in patients with poststroke aphasia, the nature of the peri-infarct tissue, and the neuronal plasticity potential in the infarcted hemisphere. The second section of the paper reviews the current neuroimaging evidence for language recovery in the acute, subacute, and chronic stages of recovery. The third and final section examines changes in connectivity as a function of recovery in poststroke aphasia, specifically in terms of changes in white matter connectivity, changes in functional effective connectivity, and changes in resting state connectivity after stroke. While much progress has been made in our understanding of language recovery, more work needs to be done. Future studies will need to examine whether reorganization of language in poststroke aphasia corresponds to a tighter, more coherent, and efficient network of residual and new regions in the brain. Answering these questions will go a long way towards being able to predict which patients are likely to recover and may benefit from future rehabilitation. PMID:23320190

It is argued that pessimistic assessments of the adequacy of artificial neural networks (ANNs) for naturallanguage processing (NLP) on the grounds that they have a finite state architecture are unjustified, and that their adequacy in this regard is an empirical issue. First, arguments that counter standard objections to finite state NLP on the…

Naturallanguage processing (NLP) provides a powerful approach for discourse processing researchers. However, there remains a notable degree of hesitation by some researchers to consider using NLP, at least on their own. The purpose of this article is to introduce and make available a "simple" NLP (SiNLP) tool. The overarching goal of…

CIRCSIM-Tutor is a computer tutor designed to carry out a naturallanguage dialogue with a medical student. Its domain is the baroreceptor reflex, the part of the cardiovascular system that is responsible for maintaining a constant blood pressure. CIRCSIM-Tutor's interaction with students is modeled after the tutoring behavior of two experienced…

This viewgraph presentation reviews the rationale of the program to transform naturallanguage specifications into formal notation, specifically, to automate generation of Linear Temporal Logic (LTL) correctness properties from naturallanguage temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in naturallanguage, which results in a high learning curve for specification languages and associated tools, while increased schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulation of correctness properties for system models can be a difficult problem. This has relevance to NASA in that it would simplify development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technological transfer potential, and the next steps.
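
A toy sketch of the translation idea (the phrase patterns and propositions below are illustrative only, not the project's actual rules) maps a few stylized naturallanguage temporal requirements onto LTL formulas:

```python
# Map a few stylized temporal-requirement phrasings onto LTL formulas.
# Real systems need far richer linguistic analysis; this only shows the
# pattern-to-formula idea.

import re

PATTERNS = [
    # "it is always the case that P holds"        ->  G(P)
    (r"it is always the case that (\w+) holds", r"G(\1)"),
    # "P eventually holds"                        ->  F(P)
    (r"(\w+) eventually holds", r"F(\1)"),
    # "whenever P holds, Q eventually holds"      ->  G(P -> F(Q))
    (r"whenever (\w+) holds, (\w+) eventually holds", r"G(\1 -> F(\2))"),
]

def nl_to_ltl(sentence: str) -> str:
    s = sentence.strip().lower().rstrip(".")
    for pattern, template in PATTERNS:
        m = re.fullmatch(pattern, s)
        if m:
            return m.expand(template)
    raise ValueError(f"no pattern matches: {sentence!r}")

print(nl_to_ltl("Whenever request holds, grant eventually holds."))
# -> G(request -> F(grant))
```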

To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, naturallanguage approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
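
A minimal sketch of the tolerance mechanism (the field format and the uniform sampling choice are assumptions, not details from the report) recognizes a "value +/- tolerance" expression inside an input field and draws Monte Carlo realizations from it:

```python
# Recognize an engineering-drawing style tolerance such as "5.25 +/- 0.01"
# inside an existing input-file field and sample it for a Monte Carlo
# sensitivity study.

import random
import re

TOLERANCE = re.compile(r"([-+]?\d*\.?\d+)\s*\+/-\s*(\d*\.?\d+)")

def sample_field(field: str) -> float:
    """Return one Monte Carlo realization of a 'value +/- tolerance' field."""
    m = TOLERANCE.search(field)
    if m is None:
        return float(field)            # no tolerance given: use the value as-is
    value, tol = float(m.group(1)), float(m.group(2))
    return random.uniform(value - tol, value + tol)

# Example: draw ten realizations of a nominal 5.25 +/- 0.01 parameter.
samples = [sample_field("5.25 +/- 0.01") for _ in range(10)]
print(samples)
```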

The Ground Operations Aerospace Language (GOAL) was designed to be used by test oriented personnel to write procedures which would be executed in a test environment. A series of discussions between NASA LV-CAP personnel and IBM resulted in some peripheral tasks which would aid in evaluating the applicability of the language in this environment, and provide enhancement for future applications. The results of these tasks are contained within this volume. The GOAL vocabulary provides a high degree of readability and retainability. To achieve these benefits, however, the procedure writer utilizes words and phrases of considerable length. A brief-form study was undertaken to determine a means of relieving this burden. The study resulted in a version of GOAL which enables the writer to develop a dialect suitable to his needs while still satisfying the syntax equations. The output of the compiler would continue to provide readability by printing out the standard GOAL language. This task is described.

RCA's Advanced Technology Laboratories (ATL) has implemented an integrated system which permits control of high level tasks in a robotics environment through voice input in the form of naturallanguage syntax. The paper to be presented will outline the architecture used to integrate voice recognition and synthesis hardware and naturallanguage and intelligent reasoning software with a supervisory processor that controls robotic and vision operations in the robotic testbed. The application is intended to give the human operator of a Puma 782 industrial robot the ability to combine joystick teleoperation with voice input in order to provide a flexible man-machine interface in a hands-busy environment. The system is designed to give the operator a speech interface which is unobtrusive and undemanding in terms of predetermined syntax requirements. The voice recognizer accepts continuous speech and the naturallanguage processor accepts full and partial sentence fragments and can perform a fair amount of disambiguation and context analysis. Output to the operator comes via the parallel channel of speech synthesis so that the operator does not have to consult the computer's CRT for messages. The messages are generated from the software and offer warnings about unacceptable situations, confirmations of actions completed, and feedback of system data.

This dissertation focuses on developing and evaluating hybrid approaches for analyzing free-form text in the medical domain. This research draws on naturallanguage processing (NLP) techniques that are used to parse and extract concepts based on a controlled vocabulary. Once important concepts are extracted, additional machine learning algorithms,…

The Spacecraft Control and Operations System 2 (SCOSII) is the new generation of Mission Control Systems (MCS) to be used at ESOC. The system is generic because it offers a collection of standard functions configured through a database upon which a dedicated MCS is established for a given mission. An integral component of SCOSII is the support of a dedicated OperationsLanguage (OL). The spacecraft operation engineers edit, test, validate, and install OL scripts as part of the configuration of the system with, e.g., expressions for computing derived parameters and procedures for performing flight operations, all without involvement of software support engineers. A layered approach has been adopted for the implementation, centered on the explicit representation of a data model. The data model is object-oriented, defining the structure of the objects in terms of attributes (data) and services (functions) which can be accessed by the OL. SCOSII supports the creation of a mission model. System elements, such as a gyro, are explicit, as are the attributes which describe them and the services they provide. The data model driven approach makes it possible to take immediate advantage of this higher level of abstraction, without requiring expansion of the language. This article describes the background and context leading to the OL, its concepts, language facilities, implementation, and status, and the conclusions found so far.

The principal elements and functions of the Ground Operations Aerospace Language (GOAL) compiler are presented. The technique used to transcribe the syntax diagrams into machine processable format for use by the parsing routines is described. An explanation of the parsing technique used to process GOAL source statements is included. The compiler diagnostics and the output reports generated during a GOAL compilation are explained. A description of the GOAL program package is provided.

If accurate clinical information were available electronically, automated applications could be developed to use this information to improve patient care and lower costs. However, to be fully retrievable, clinical information must be structured or coded. Many online patient reports are not coded, but are recorded in natural-language text that cannot be reliably accessed. Naturallanguage processing (NLP) can solve this problem by extracting and structuring text-based clinical information, making clinical data available for use. NLP systems are quite difficult to develop, as they require substantial amounts of knowledge, but progress has definitely been made. Some NLP systems have been developed and tested and have demonstrated promising performance in practical clinical applications; some of these systems have already been deployed. The authors provide background information about NLP, briefly describe some of the systems that have been recently developed, and discuss the future of NLP in medicine. PMID:10495728

Literature in the field of nanotechnology is exponentially increasing with more and more engineered nanomaterials being created, characterized, and tested for performance and safety. With the deluge of published data, there is a need for naturallanguage processing approaches to semi-automate the cataloguing of engineered nanomaterials and their associated physico-chemical properties, performance, exposure scenarios, and biological effects. In this paper, we review the different informatics methods that have been applied to patent mining, nanomaterial/device characterization, nanomedicine, and environmental risk assessment. Nine naturallanguage processing (NLP)-based tools were identified: NanoPort, NanoMapper, TechPerceptor, a Text Mining Framework, a Nanodevice Analyzer, a Clinical Trial Document Classifier, Nanotoxicity Searcher, NanoSifter, and NEIMiner. We conclude with recommendations for sharing NLP-related tools through online repositories to broaden participation in nanoinformatics. PMID:26199848

We survey a set of recent advances in naturallanguage processing applied to biomedical applications, which were presented in Geneva, Switzerland, in 2004 at an international workshop. While text mining applied to molecular biology and biomedical literature can report several interesting achievements, we observe that studies applied to clinical contents are still rare. In general, we argue that clinical corpora, including electronic patient records, must be made available to fill the gap between bioinformatics and medical informatics. PMID:16139564

Statistical naturallanguage processors have been the focus of much research during the past decade. The main advantage of such an approach over grammatical rule-based approaches is its scalability to new domains. We present a statistical NLP for the domain of radiology and report on methods of knowledge acquisition, parsing, semantic interpretation, and evaluation. Preliminary performance data are given. A discussion of the perceived benefit, limitations and future work is presented. PMID:10566505

An attempt is made to describe second language behavior and language transfer in cybernetic terms. This should make it possible to translate language into machine language and to clarify psycholinguistic explanations of second language performance. (PMJ)

This paper describes a CODASYL (network) database schema for information derived from narrative clinical reports. The goal of this work is to create an automated process that accepts naturallanguage documents as input and maps this information into a database of a type managed by existing database management systems. The schema described here represents the medical events and facts identified through the naturallanguage processing. This processing decomposes each narrative into a set of elementary assertions, represented as MEDFACT records in the database. Each assertion in turn consists of a subject and a predicate classed according to a limited number of medical event types, e.g., signs/symptoms, laboratory tests, etc. The subject and predicate are represented by EVENT records which are owned by the MEDFACT record associated with the assertion. The CODASYL-type network structure was found to be suitable for expressing most of the relations needed to represent the naturallanguage information. However, special mechanisms were developed for storing the time relations between EVENT records and for recording connections (such as causality) between certain MEDFACT records. This schema has been implemented using the UNIVAC DMS-1100 DBMS.
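
The record structure can be sketched compactly (field names and the example assertion below are hypothetical): each elementary assertion becomes a MEDFACT record owning a subject and a predicate EVENT record, classed by a small set of medical event types.

```python
# Sketch of the MEDFACT/EVENT decomposition: one elementary assertion
# from a narrative report, represented as a fact record that owns its
# subject and predicate event records.

from dataclasses import dataclass

@dataclass
class Event:
    event_type: str     # e.g. "laboratory test", "sign/symptom"
    text: str           # the phrase extracted from the narrative

@dataclass
class MedFact:
    subject: Event      # owned EVENT record
    predicate: Event    # owned EVENT record

# "Chest x-ray shows infiltrate" decomposed into one elementary assertion.
fact = MedFact(
    subject=Event("laboratory test", "chest x-ray"),
    predicate=Event("sign/symptom", "infiltrate"),
)
print(fact)
```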

Centuries of biological knowledge are contained in the massive body of scientific literature, written for human-readability but too big for any one person to consume. Large-scale mining of information from the literature is necessary if biology is to transform into a data-driven science. A computer can handle the volume but cannot make sense of the language. This paper reviews and discusses the use of naturallanguage processing (NLP) and machine-learning algorithms to extract information from systematic literature. NLP algorithms have been used for decades, but require special development for application in the biological realm due to the special nature of the language. Many tools exist for biological information extraction (cellular processes, taxonomic names, and morphological characters), but none have been applied life wide and most still require testing and development. Progress has been made in developing algorithms for automated annotation of taxonomic text, identification of taxonomic names in text, and extraction of morphological character information from taxonomic descriptions. This manuscript will briefly discuss the key steps in applying information extraction tools to enhance biodiversity science. PMID:22685456

The migration of imaging reports to electronic medical record systems holds great potential in terms of advancing radiology research and practice by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the heterogeneity of how these data are formatted. Indeed, although there is movement toward structured reporting in radiology (ie, hierarchically itemized reporting with use of standardized terminology), the majority of radiology reports remain unstructured and use free-form language. To effectively "mine" these large datasets for hypothesis testing, a robust strategy for extracting the necessary information is needed. Manual extraction of information is a time-consuming and often unmanageable task. "Intelligent" search engines that instead rely on naturallanguage processing (NLP), a computer-based approach to analyzing free-form text or speech, can be used to automate this data mining task. The overall goal of NLP is to translate natural human language into a structured format (ie, a fixed collection of elements), each with a standardized set of choices for its value, that is easily manipulated by computer programs to (among other things) order into subcategories or query for the presence or absence of a finding. The authors review the fundamentals of NLP and describe various techniques that constitute NLP in radiology, along with some key applications. PMID:26761536

Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a naturallanguage description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.
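
Fitts' Law, one of the performance models mentioned, gives movement time as MT = a + b log2(2D/W). The following sketch (coefficient values are placeholders, not values from the cited system) times a simulated reach:

```python
# Time a simulated reach with Fitts' Law: MT = a + b * log2(2D / W),
# where D is the distance to the target and W is the target width.

import math

def fitts_movement_time(distance: float, target_width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Movement time in seconds; a and b are illustrative coefficients."""
    return a + b * math.log2(2.0 * distance / target_width)

# Example: a 0.6 m reach to a 0.05 m wide switch.
print(f"{fitts_movement_time(0.6, 0.05):.2f} s")
```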

The requirements are identified for a very high order naturallanguage to be used by crew members on board the Space Station. The hardware facilities, databases, realtime processes, and software support are discussed. The operations and capabilities that will be required in both normal (routine) and abnormal (nonroutine) situations are evaluated. A structure and syntax for an interface (front-end) language to satisfy the above requirements are recommended.

For Interactive Patient II, a multimedia case simulation designed to improve history-taking skills, we created a new naturallanguage interface called GRASP (General Recognition and Analysis of Sentences and Phrases) that allows students to interact with the program at a higher level of realism. Requirements included the ability to handle ambiguous word senses and to match user questions/queries to unique Canonical Phrases, which are used to identify case findings in our knowledge database. In a simulation of fifty user queries, some of which contained ambiguous words, this tool was 96% accurate in identifying concepts. PMID:10566424

The field of NaturalLanguage Processing (NLP) is described as it applies to the needs of LLNL in handling free-text. The state of the practice is outlined with the emphasis placed on two specific aspects of NLP: Information Extraction and Discourse Integration. A brief description is included of the NLP applications currently being used at LLNL. A gap analysis provides a look at where the technology needs work in order to meet the needs of LLNL. Finally, recommendations are made to meet these needs.

This paper is devoted to verifying the empirical Zipf and Heaps laws in naturallanguages using Google Books Ngram corpus data. The connection between Zipf's law and Heaps' law, which predicts the power dependence of the vocabulary size on the text size, is discussed. In fact, the Heaps exponent in this dependence varies as the text corpus grows. To explain this, the obtained results are compared with a probability model of text generation. Quasi-periodic variations with characteristic time periods of 60-100 years were also found.
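
For reference, Zipf's law relates word frequency to rank, f(r) ~ r^(-alpha), and Heaps' law relates vocabulary size to text size, V(n) ~ K * n^beta. The following sketch (toy text only, not the Ngram corpus) estimates the Heaps exponent by a simple log-log fit:

```python
# Estimate the Heaps exponent beta in V(n) = K * n**beta by tracking
# vocabulary growth and fitting a line in log-log space.

import math

def heaps_exponent(words):
    """Least-squares slope of log V(n) against log n."""
    seen, points = set(), []
    for n, w in enumerate(words, start=1):
        seen.add(w)
        points.append((math.log(n), math.log(len(seen))))
    xs, ys = zip(*points)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

toy_text = ("the cat sat on the mat the dog sat on the log "
            "a cat and a dog met on a mat").split()
print(f"estimated Heaps exponent: {heaps_exponent(toy_text):.2f}")
```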

A large proportion of the medical record currently available in computerized medical information systems is in the form of free text reports. While the accessibility of this source of data is improved through inclusion in the computerized record, it remains unavailable for automated decision support, medical research, and management of medical delivery systems. Naturallanguage understanding systems (NLUS) designed to encode free text reports represent one approach to making this information available for these uses. Below we describe an experimental NLUS designed to parse the reports of chest radiographs and store the clinical data extracted in a medical data base. PMID:7949928

This paper proposes a very specifically constrained virtual machine design for goal-directed naturallanguage generation based on a refinement of the technique of data-directed control that the author has termed description-directed control. Important psycholinguistic properties of generation follow inescapably from the use of this control technique, including: efficient runtimes, bounded lookahead, indelible decisions, incremental production of the text, and inescapable adherence to grammaticality. The technique also provides a possible explanation for some well-known universal constraints, though this cannot be confirmed without further empirical investigation. 29 references.

Discusses the nature of programing languages, considering the features of BASIC, LOGO, PASCAL, COBOL, FORTH, APL, and LISP. Also discusses machine/assembly codes, the operation of a compiler, and trends in the evolution of programing languages (including interest in notational systems called object-oriented languages). (JN)

Construal level theory proposes that events that are temporally proximate are represented more concretely than events that are temporally distant. We tested this prediction using two large naturallanguage text corpora. In study 1 we examined posts on Twitter that referenced the future, and found that tweets mentioning temporally proximate dates used more concrete words than those mentioning distant dates. In study 2 we obtained all New York Times articles that referenced U.S. presidential elections between 1987 and 2007. We found that the concreteness of the words in these articles increased with the temporal proximity to their corresponding election. Additionally the reduction in concreteness after the election was much greater than the increase in concreteness leading up to the election, though both changes in concreteness were well described by an exponential function. We replicated this finding with New York Times articles referencing US public holidays. Overall, our results provide strong support for the predictions of construal level theory, and additionally illustrate how large naturallanguage datasets can be used to inform psychological theory. PMID:27015347

Suicide is the second leading cause of death among 25–34 year olds and the third leading cause of death among 15–25 year olds in the United States. In the Emergency Department, where suicidal patients often present, estimating the risk of repeated attempts is generally left to clinical judgment. This paper presents our second attempt to determine the role of computational algorithms in understanding a suicidal patient’s thoughts, as represented by suicide notes. We focus on developing methods of naturallanguage processing that distinguish between genuine and elicited suicide notes. We hypothesize that machine learning algorithms can categorize suicide notes as well as mental health professionals and psychiatric physician trainees do. The data comprise suicide notes from 33 suicide completers, matched to 33 elicited notes from healthy control group members. Eleven mental health professionals and 31 psychiatric trainees were asked to decide if a note was genuine or elicited. Their decisions were compared to nine different machine-learning algorithms. The results indicate that trainees accurately classified notes 49% of the time, mental health professionals accurately classified notes 63% of the time, and the best machine learning algorithm accurately classified the notes 78% of the time. This is an important step in developing an evidence-based predictor of repeated suicide attempts because it shows that naturallanguage processing can aid in distinguishing between classes of suicidal notes. PMID:21643548

The aim of the study was to determine the influence of textual feedback on the content and outcome of spoken interaction with a naturallanguage dialogue system. More specifically, the assumption that textual feedback could disrupt spoken interaction was tested in a human-computer dialogue situation. In total, 48 adult participants, familiar with the system, had to find restaurants based on simple or difficult scenarios using a real naturallanguage service system in a speech-only (phone), speech plus textual dialogue history (multimodal) or text-only (web) modality. The linguistic contents of the dialogues differed as a function of modality, but were similar whether the textual feedback was included in the spoken condition or not. These results add to burgeoning research efforts on multimodal feedback, in suggesting that textual feedback may have little or no detrimental effect on information searching with a real system. STATEMENT OF RELEVANCE: The results suggest that adding textual feedback to interfaces for human-computer dialogue could enhance spoken interaction rather than create interference. The literature currently suggests that adding textual feedback to tasks that depend on the visual sense benefits human-computer interaction. The addition of textual output when the spoken modality is heavily taxed by the task was investigated. PMID:20069480

Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using naturallanguage (written English). Q2Q takes advantage of domain knowledge and uses naturallanguage generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains. PMID:26357239

Research into naturallanguage understanding systems for computers has concentrated on implementing particular grammars and grammatical models of the language concerned. This paper presents a rationale for research into naturallanguage understanding systems based on neurological and psychological principles. Important features of the approach are that it seeks to place the onus of learning the language on the computer, and that it seeks to make use of the vast wealth of relevant psycholinguistic and neurolinguistic theory. 22 references.

Naturallanguage processing (NLP) is a subfield of artificial intelligence and computational linguistics. It studies the problems of automated generation and understanding of natural human languages. This paper outlines a framework to use computer and naturallanguage techniques for various levels of learners to learn foreign languages in Computer-based Learning environment. We propose some ideas for using the computer as a practical tool for learning foreign language where the most of courseware is generated automatically. We then describe how to build Computer Based Learning tools, discuss its effectiveness, and conclude with some possibilities using on-line resources.

Grammars of signed languages tend to be based on grammars established for written languages, particularly the written language in use in the surrounding hearing community of a sign language. Such grammars presuppose categories of discrete elements which are combined into various sorts of structures. Recent analyses of signed languages go beyond…

This article addresses the academic difficulties of children with language disorders (including dyslexia) and suggests that their persistent academic vulnerability results from the lifelong need to acquire language, to learn with language, and to apply language knowledge for academic learning and social development. The need for continuing…

A major obstacle to the effective educational use of computers is the lack of a natural means of communication between the student and the computer. This report describes a technique for generating such naturallanguage front-ends for advanced instructional systems. It discusses: (1) the essential properties of a naturallanguage front-end, (2)…

Rey et al. (2012) present data from a study with baboons that they interpret in support of the idea that center-embedded structures in human language have their origin in low level memory mechanisms and associative learning. Critically, the authors claim that the baboons showed a behavioral preference for center-embedded sequences over other types of sequences. We argue that the baboons' response patterns suggest that two mechanisms are involved: first, they can be trained to associate a particular response with a particular stimulus, and, second, when faced with two conditioned stimuli in a row, they respond to the most recent one first, copying behavior they had been rewarded for during training. Although Rey et al.'s (2012) experiment shows that the baboons' behavior is driven by low level mechanisms, it is not clear how the reported animal behavior bears on the phenomenon of center-embedded structures in human syntax. Hence, (1) naturallanguage syntax may indeed have been shaped by low level mechanisms, and (2) the baboons' behavior is driven by low level stimulus response learning, as Rey et al. propose. But is the second evidence for the first? We will discuss in what ways this study can and cannot give evidential value for explaining the origin of center-embedded recursion in human grammar. More generally, their study provokes an interesting reflection on the use of animal studies in order to understand features of the human linguistic system. PMID:26026382

We studied the cognitive abilities of a 13-year-old deaf child, deprived of most linguistic input from late infancy, in a battery of tests designed to reveal the nature of numerical and geometrical abilities in the absence of a full linguistic system. Tests revealed widespread proficiency in basic symbolic and non-symbolic numerical computations involving the use of both exact and approximate numbers. Tests of spatial and geometrical abilities revealed an interesting patchwork of age-typical strengths and localized deficits. In particular, the child performed extremely well on navigation tasks involving geometrical or landmark information presented in isolation, but very poorly on otherwise similar tasks that required the combination of the two types of spatial information. Tests of number- and space-specific language revealed proficiency in the use of number words and deficits in the use of spatial terms. This case suggests that a full linguistic system is not necessary to reap the benefits of linguistic vocabulary on basic numerical tasks. Furthermore, it suggests that language plays an important role in the combination of mental representations of space. PMID:21168425

A close examination of pure neural parsers shows that they either could not guarantee the correctness of their derivations or had to hard-code seriality into the structure of the net. The authors therefore decided to use a hybrid architecture, consisting of a serial parsing algorithm and a trainable net. The system fulfills the following design goals: (1) parsing of sentences without length restriction, (2) soundness and completeness for any context-free language, and (3) learning the applicability of parsing rules with a neural network to increase the efficiency of the whole system. BrainC (backtracking and backpropagation in C) combines the well-known shift-reduce parsing technique, extended with backtracking, with a backpropagation network to learn and represent typical structures of the trained naturallanguage grammars. The system has been implemented as a subsystem of the Rochester Connectionist Simulator (RCS) on SUN workstations and was tested with several grammars for English and German. The design of the system is described first, and then the results are discussed.

We review recent progress in understanding the meaning of mutual information in naturallanguage. Let us define words in a text as strings that occur sufficiently often. In a few previous papers, we have shown that a power-law distribution for so defined words (a.k.a. Herdan's law) is obeyed if there is a similar power-law growth of (algorithmic) mutual information between adjacent portions of texts of increasing length. Moreover, the power-law growth of information holds if texts describe a complicated infinite (algorithmically) random object in a highly repetitive way, according to an analogous power-law distribution. The described object may be immutable (like a mathematical or physical constant) or may evolve slowly in time (like cultural heritage). Here, we reflect on the respective mathematical results in a less technical way. We also discuss feasibility of deciding to what extent these results apply to the actual human communication.

A semantic lexicon which associates words and phrases in text to concepts is critical for extracting and encoding clinical information in free text and therefore achieving semantic interoperability between structured and unstructured data in Electronic Health Records (EHRs). Directly using existing standard terminologies may have limited coverage with respect to concepts and their corresponding mentions in text. In this paper, we analyze how tokens and phrases in a large corpus distribute and how well the UMLS captures the semantics. A corpus-driven semantic lexicon, MedLex, has been constructed where the semantics is based on the UMLS assisted with variants mined and usage information gathered from clinical text. The detailed corpus analysis of tokens, chunks, and concept mentions shows the UMLS is an invaluable source for naturallanguage processing. Increasing the semantic coverage of tokens provides a good foundation in capturing clinical information comprehensively. The study also yields some insights in developing practical NLP systems. PMID:23304329

Computerized Clinical Decision Support (CDS) aims to aid decision making of health care providers and the public by providing easily accessible health-related information at the point and time it is needed. NaturalLanguage Processing (NLP) is instrumental in using free-text information to drive CDS, representing clinical knowledge and CDS interventions in standardized formats, and leveraging clinical narrative. The early innovative NLP research of clinical narrative was followed by a period of stable research conducted at the major clinical centers and a shift of mainstream interest to biomedical NLP. This review primarily focuses on the recently renewed interest in development of fundamental NLP methods and advances in the NLP systems for CDS. The current solutions to challenges posed by distinct sublanguages, intended user groups, and support goals are discussed. PMID:19683066

Literature-based discovery (LBD) is an emerging methodology for uncovering nonovert relationships in the online research literature. Making such relationships explicit supports hypothesis generation and discovery. Currently LBD systems depend exclusively on co-occurrence of words or concepts in target documents, regardless of whether relations actually exist between the words or concepts. We describe a method to enhance LBD through capture of semantic relations from the literature via use of naturallanguage processing (NLP). This paper reports on an application of LBD that combines two NLP systems: BioMedLEE and SemRep, which are coupled with an LBD system called BITOLA. The two NLP systems complement each other to increase the types of information utilized by BITOLA. We also discuss issues associated with combining heterogeneous systems. Initial experiments suggest this approach can uncover new associations that were not possible using previous methods.

While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of naturallanguage processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054

Hierarchical classifications are used pervasively by humans as a means to organize their data and knowledge about the world. One of their main advantages is that naturallanguage labels, used to describe their contents, are easily understood by human users. However, at the same time, this is also one of their main disadvantages, as these same labels are ambiguous and very hard for software agents to reason about. This fact creates an insuperable hindrance to embedding classifications in the Semantic Web infrastructure. This paper presents an approach to converting classifications into lightweight ontologies, and it makes the following contributions: (i) it identifies the main NLP problems related to the conversion process and shows how they are different from the classical problems of NLP; (ii) it proposes heuristic solutions to these problems, which are especially effective in this domain; and (iii) it evaluates the proposed solutions by testing them on DMoz data.

In this paper, we present an efficient algorithm for parsing naturallanguage using unification grammars. The algorithm is an extension of left-corner parsing, a bottom-up algorithm which utilizes top-down expectations. The extension exploits unification grammar's uniform representation of syntactic, semantic, and domain knowledge, by incorporating all types of grammatical knowledge into parser expectations. In particular, we extend the notion of the reachability table, which provides information as to whether or not a top-down expectation can be realized by a potential subconstituent, by including all types of grammatical information in table entries, rather than just phrase structure information. While our algorithm's worst-case computational complexity is no better than that of many other algorithms, we present empirical testing in which average-case linear time performance is achieved. Our testing indicates this to be much improved average-case performance over previous left-corner techniques.
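
The reachability table can be illustrated for plain phrase-structure information (the toy grammar below is hypothetical; the cited work extends the entries to semantic and domain knowledge):

```python
# Build a left-corner reachability table: which categories can appear
# as the left corner of which other categories, so the parser can reject
# subconstituents that could never realize a top-down expectation.

TOY_GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"], ["Pronoun"]],
    "VP": [["V", "NP"], ["V"]],
}

def reachability(grammar):
    """Reflexive-transitive closure of the immediate left-corner relation."""
    reach = {cat: {cat} for cat in grammar}
    changed = True
    while changed:
        changed = False
        for parent, rules in grammar.items():
            for rule in rules:
                left_corner = rule[0]
                new = reach.get(left_corner, {left_corner}) - reach[parent]
                if new:
                    reach[parent] |= new
                    changed = True
    return reach

TABLE = reachability(TOY_GRAMMAR)
print(TABLE["S"])                 # categories that can begin an S
print("Det" in TABLE["S"])        # True: a determiner can begin an S
print("V" in TABLE["NP"])         # False: a verb cannot begin an NP
```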

We introduce an approach to representing intelligence, surveillance, and reconnaissance (ISR) tasks at a relatively high level in controlled naturallanguage. We demonstrate that this facilitates both human interpretation and machine processing of tasks. More specifically, it allows the automatic assignment of sensing assets to tasks, and the informed sharing of tasks between collaborating users in a coalition environment. To enable automatic matching of sensor types to tasks, we created a machine-processable knowledge representation based on the Military Missions and Means Framework (MMF), and implemented a semantic reasoner to match task types to sensor types. We combined this mechanism with a sensor-task assignment procedure based on a well-known distributed protocol for resource allocation. In this paper, we re-formulate the MMF ontology in Controlled English (CE), a type of controlled naturallanguage designed to be readable by a native English speaker whilst representing information in a structured, unambiguous form to facilitate machine processing. We show how CE can be used to describe both ISR tasks (for example, detection, localization, or identification of particular kinds of object) and sensing assets (for example, acoustic, visual, or seismic sensors, mounted on motes or unmanned vehicles). We show how these representations enable an automatic sensor-task assignment process. Where a group of users are cooperating in a coalition, we show how CE task summaries give users in the field a high-level picture of ISR coverage of an area of interest. This allows them to make efficient use of sensing resources by sharing tasks.
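
The sensor-task matching step can be sketched with a small capability table standing in for the MMF-based ontology and the semantic reasoner (all names below are illustrative, not taken from the cited systems):

```python
# Match task types to sensor types through a capability table, then
# assign each task the set of admissible sensor types.

CAPABILITIES = {
    "acoustic": {"detection", "localization"},
    "visual":   {"detection", "identification"},
    "seismic":  {"detection"},
}

def sensors_for(task_type, capabilities=CAPABILITIES):
    """Return the sensor types whose capabilities cover the task type."""
    return sorted(s for s, caps in capabilities.items() if task_type in caps)

def assign(tasks):
    """Naive assignment: every task gets the list of admissible sensor types."""
    return {task: sensors_for(task_type) for task, task_type in tasks.items()}

tasks = {"task-17": "identification", "task-18": "localization"}
print(assign(tasks))
# {'task-17': ['visual'], 'task-18': ['acoustic']}
```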

The rapidly emerging field of time domain astronomy is one of the most exciting and vibrant new research frontiers, ranging in scientific scope from studies of the Solar System to extreme relativistic astrophysics and cosmology. It is being enabled by a new generation of large synoptic digital sky surveys - LSST, PanStarrs, CRTS - that cover large areas of sky repeatedly, looking for transient objects and phenomena. One of the biggest challenges facing these is the automated classification of transient events, a process that needs machine-processible astronomical knowledge. Semantic technologies enable the formal representation of concepts and relations within a particular domain. ATELs (http://www.astronomerstelegram.org) are a commonly-used means for reporting and commenting upon new astronomical observations of transient sources (supernovae, stellar outbursts, blazar flares, etc). However, they are loose and unstructured and employ scientific naturallanguage for description: this makes automated processing of them - a necessity within the next decade with petascale data rates - a challenge. Nevertheless they represent a potentially rich corpus of information that could lead to new and valuable insights into transient phenomena. This project lies in the cutting-edge field of astrosemantics, a branch of astroinformatics, which applies semantic technologies to astronomy. The ATELs have been used to develop an appropriate concept scheme - a representation of the information they contain - for transient astronomy using hierarchical clustering of processed naturallanguage. This allows us to automatically organize ATELs based on the vocabulary used. We conclude that we can use simple algorithms to process and extract meaning from astronomical textual data.
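
A generic sketch of the clustering approach (toy report snippets and standard libraries; not the project's actual pipeline) vectorizes short report texts and groups them hierarchically by vocabulary:

```python
# Vectorize short astronomical-report texts with TF-IDF and group them
# by hierarchical clustering of cosine distances between their vocabularies.

from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

reports = [
    "possible supernova detected in nearby galaxy",
    "new supernova candidate confirmed spectroscopically",
    "blazar shows strong optical flare",
    "gamma-ray flare from known blazar",
]

vectors = TfidfVectorizer().fit_transform(reports).toarray()
tree = linkage(vectors, method="average", metric="cosine")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # e.g. the two supernova reports fall in one cluster
```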

This paper argues that, because the documents of the semantic web are created by human beings, they are actually much more like naturallanguage documents than theory would have us believe. We present evidence that naturallanguage words are used extensively and in complex ways in current ontologies. This leads to a number of dangers for the semantic web, but also opens up interesting new challenges for naturallanguage processing. This is illustrated by our own work using naturallanguage generation to present parts of ontologies.

The field of NaturalLanguage Processing (NLP) focuses on the study of how utterances composed of human-level languages can be understood and generated. Typically, there are considered to be three intertwined levels of structure that interact to create meaning in language: syntax, semantics, and pragmatics. Not only is a large amount of…

Soviet operational art today provides a framework for, studying, understanding, preparing for, and conducting war. Together with strategy and tactics, it makes the study of war an academic discipline requiring intense research and scholarship on the part of those who write about and who would have to conduct war. As such, operational art performs distinct tasks associated with the conduct of war.

The classical paradigm of the neural brain as the seat of human natural intelligence is too restrictive. This paper defends the idea that the neural ectoderm is the actual brain, based on the development of the human embryo. Indeed, the neural ectoderm includes the neural crest, given by pigment cells in the skin and ganglia of the autonomic nervous system, and the neural tube, given by the brain, the spinal cord, and motor neurons. So the brain is completely integrated in the ectoderm, and cannot work alone. The paper presents fundamental properties of the brain as follows. Firstly, Paul D. MacLean proposed the triune human brain, which consists of three brains in one, following the species' evolution, given by the reptilian complex, the limbic system, and the neo-cortex. Secondly, consciousness and conscious awareness are analysed. Thirdly, the anticipatory unconscious free will and conscious free veto are described in agreement with the experiments of Benjamin Libet. Fourthly, the main section explains the development of the human embryo and shows that the neural ectoderm is the whole neural brain. Fifthly, a conjecture is proposed that the neural brain is completely programmed with scripts written in biological low-level and high-level languages, in a manner similar to cells programmed by the genetic code. Finally, it is concluded that the proposition of the neural ectoderm as the whole neural brain is a breakthrough in the understanding of natural intelligence, and also in the future design of robots with artificial intelligence.

Home language experiences are important for children's development of language and literacy. However, the home language context is complex, especially for Spanish-speaking children in the United States. A child's use of Spanish or English likely ranges along a continuum, influenced by preferences of particular people involved, such as parents,…

INTELLECT(TM) for Rdb/VMS is a natural language system for database access which runs under the VAX/VMS operating system. It allows English language query, report formatting, and data updates to Rdb/VMS relational databases. INTELLECT translates English requests into database commands, and allows non-technical users to perform a variety of retrieval and processing tasks for decision support or data maintenance using conversational English. The heart of an INTELLECT application is the lexicon, a dictionary of common English words. Initially, the INTELLECT lexicon has about 400 basic words. Customization of this lexicon, including the definition of new vocabulary, is the basic step in the development of an INTELLECT application. An application of INTELLECT for an Rdb/VMS waste management database was developed. The database, which consists of waste stream characterization and waste management practice information for solid low-level radioactive wastes generated at the three Department of Energy plants in Oak Ridge, is used in disposal resource development and alternatives evaluation. An account of our experience using INTELLECT with the low-level waste management database is given, including the process of lexicon building. The usefulness of the natural language interface in this context is discussed. 13 refs.

The Great English Vowel Shift of the 16th-19th centuries and the current Northern Cities Vowel Shift are two examples of collective language processes characterized by regular phonetic changes, that is, gradual changes in vowel pronunciation over time. Here we develop a structured population approach to modeling such regular changes in the vowel systems of natural languages, taking into account learning patterns and effects such as social trends. We treat vowel pronunciation as a continuous variable in vowel space and allow for a continuous dependence of vowel pronunciation on time and on the age of the speaker. The theory of mixtures with continuous diversity provides a framework for the model, which extends the McKendrick-von Foerster equation to populations with age and phonetic structures. We develop the general balance equations for such populations and propose explicit expressions for the factors that impact the evolution of the vowel pronunciation distribution. For illustration, we present two examples of numerical simulations. In the first one we study a stationary solution corresponding to a state of phonetic equilibrium, in which speakers of all ages share a similar phonetic profile. We characterize the variance of the phonetic distribution in terms of a parameter measuring the ratio of phonetic attraction to dispersion. In the second example we show how a vowel shift occurs upon starting with an initial condition consisting of a majority pronunciation that is affected by an immigrant minority with a different vowel pronunciation distribution. The approach developed here for vowel systems may also be applied to other learning situations and other time-dependent processes of cognition in self-interacting populations, such as opinions or perceptions. PMID:23624180
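
As a worked illustration of the modeling framework named above, a plausible (assumed, not quoted from the paper) form of the extension is the classical McKendrick-von Foerster age-structured equation augmented with a phonetic coordinate: here u(t,a,x) is the density of speakers of age a whose pronunciation of a given vowel sits at position x in vowel space, V[u] is a phonetic-attraction drift encoding learning and social trends, D is a dispersion coefficient, and mu is a removal rate.

```latex
% Classical McKendrick-von Foerster age-structured balance equation:
\[
  \partial_t n(t,a) + \partial_a n(t,a) = -\mu(t,a)\, n(t,a).
\]
% A plausible extension with phonetic structure (notation assumed):
\[
  \partial_t u + \partial_a u + \partial_x\!\bigl(V[u]\, u\bigr)
    = D\,\partial_x^2 u \;-\; \mu(t,a)\, u .
\]
```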

... 25 Indians 1 2013-04-01 2013-04-01 false May schools operate a language development program... Formula Language Development Programs § 39.137 May schools operate a language development program without a specific appropriation from Congress? Yes, a school may operate a language development...

The complex environment of the typical research laboratory requires flexible process control. This program provides natural language process control from an IBM PC or compatible machine. Process control schedules often require frequent changes, sometimes several times per day. These changes may include adding, deleting, and rearranging steps in a process. This program sets up a process control system that can either run without an operator or be run by workers with limited programming skills. The software system includes three programs. Two of the programs, written in FORTRAN77, record data and control research processes. The third program, written in Pascal, generates the FORTRAN subroutines used by the other two programs to associate the user commands with the user-written device drivers. The software system also includes an input data set which allows the user to define the user commands that are to be executed by the computer. To set the system up, the operator writes device-driver routines for all of the controlled devices. Once set up, the system requires only an input file containing natural language command lines which tell the system what to do and when to do it. Custom commands for operating external research equipment and taking data from it can be created for use at any time of the day or night, without an operator in attendance. This process control system requires a personal computer operating under MS-DOS with suitable hardware interfaces to all controlled devices. The program requires a FORTRAN77 compiler and user-written device drivers. This program was developed in 1989 and has a memory requirement of about 62 Kbytes.
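
The core mechanism described, natural-language command lines dispatched to user-written device drivers, can be sketched in a few lines. The example below is illustrative only (in Python rather than the original FORTRAN77/Pascal system), and all command names, drivers, and timings are hypothetical.

```python
# Illustrative sketch of the general idea: user-defined command lines are
# mapped onto user-written device-driver routines and executed in order.
import time

def set_heater(temperature):          # stand-in for a user-written driver
    print(f"heater set to {temperature} C")

def read_thermocouple(channel):       # stand-in for a user-written driver
    print(f"reading thermocouple channel {channel}")

# "Input data set" associating user command phrases with driver routines.
COMMANDS = {
    "SET HEATER": (set_heater, float),
    "READ TEMPERATURE": (read_thermocouple, int),
}

def run_line(line):
    """Execute one natural-language command line, e.g. 'SET HEATER 150'."""
    for phrase, (driver, cast) in COMMANDS.items():
        if line.upper().startswith(phrase):
            argument = line[len(phrase):].strip()
            driver(cast(argument))
            return
    raise ValueError(f"unrecognized command: {line}")

schedule = ["SET HEATER 150", "READ TEMPERATURE 3"]
for command in schedule:
    run_line(command)
    time.sleep(0.1)   # placeholder for waiting until the next scheduled step
```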

To create an information system, we employ NIAM (Natural Language Information Analysis Methodology). NIAM supports the goal of having both the customer and the analyst completely understand the information. We use the customer's own unique vocabulary, collect real examples, and validate the information in natural language sentences. Examples are discussed from a successfully implemented information system.

The Hepatitis Knowledge Base (text of prototype information system) was used for modifying and testing "A Navigator of Natural Language Organized (Textual) Data" (ANNOD), a retrieval system which combines probabilistic, linguistic, and empirical means to rank individual paragraphs of full text for similarity to natural language queries proposed by…

Linguistic research has identified abstract properties that seem to be shared by all languages—such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at sub-lexical and phrasal level, and recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility for simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language—in ways that are applicable for spoken languages as well. Still, the overall conclusion is that one grammar applies for human language, no matter the modality of expression. PMID:25013534

Natural language processing (NLP) techniques to extract data from unstructured text into formal computer representations are valuable for creating robust, scalable methods to mine data in medical documents and radiology reports. As voice recognition (VR) becomes more prevalent in radiology practice, there is opportunity for implementing NLP in real time for decision-support applications such as context-aware information retrieval. For example, as the radiologist dictates a report, an NLP algorithm can extract concepts from the text and retrieve relevant classification or diagnosis criteria or calculate disease probability. NLP can work in parallel with VR to potentially facilitate evidence-based reporting (for example, automatically retrieving the Bosniak classification when the radiologist describes a kidney cyst). For these reasons, we developed and validated an NLP system which extracts fracture and anatomy concepts from unstructured text and retrieves relevant bone fracture knowledge. We implement our NLP in an HTML5 web application to demonstrate a proof-of-concept feedback NLP system which retrieves bone fracture knowledge in real time. PMID:23053906

Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI [.63-.76]), respectively. We adapt a Java viewer known as PGviewer for the simultaneous visualization of images with NLP extracted information. PMID:17084109

Most connectionist parsers either cannot guarantee the correctness of their derivations or have to simulate a serial flow of control. In the first case, users have to restrict the parser's tasks (e.g., to less complex or shorter sentences) or they need to trust the soundness of the result. In the second case, the resulting network has lost most of its attractiveness because seriality needs to be hard-coded into the structure of the net. We here present a hybrid symbolic-connectionist parser, which was designed to fulfill the following goals: (1) parsing of sentences without length restriction, (2) soundness and completeness for any context-free grammar, and (3) learning the applicability of parsing rules with a neural network. Our hybrid architecture consists of a serial parsing algorithm and a trainable net. BrainC (Backtracking and Backpropagation in C) combines the well-known shift-reduce parsing technique (with backtracking) with a backpropagation network to learn and represent the typical properties of the trained natural language grammars. The system has been implemented as a subsystem of the Rochester Connectionist Simulator (RCS) on Sun workstations and was tested with several grammars for English and German. We discuss how BrainC reached its design goals and what results we observed.
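
The symbolic half of this hybrid is ordinary shift-reduce parsing. The sketch below is a minimal recognizer for a toy context-free grammar, illustrating the shift and reduce actions that a network such as BrainC's learns to select; the neural component and backtracking are not reproduced here, and the grammar and tag sequence are invented.

```python
# Minimal greedy shift-reduce recognizer for a toy context-free grammar.
# Toy grammar: S -> NP VP ; NP -> Det N ; VP -> V NP
RULES = [
    ("S",  ("NP", "VP")),
    ("NP", ("Det", "N")),
    ("VP", ("V", "NP")),
]

def shift_reduce(tags):
    """Return True if the tag sequence reduces to the start symbol S."""
    stack, buffer = [], list(tags)
    while buffer or len(stack) > 1:
        # Reduce: does the top of the stack match some rule's right-hand side?
        for lhs, rhs in RULES:
            n = len(rhs)
            if tuple(stack[-n:]) == rhs:
                stack[-n:] = [lhs]
                break
        else:
            if not buffer:              # nothing to reduce and nothing to shift
                return False
            stack.append(buffer.pop(0)) # shift the next input symbol
    return stack == ["S"]

print(shift_reduce(["Det", "N", "V", "Det", "N"]))   # True, e.g. "the dog saw a cat"
```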

Crowdsourcing is increasingly utilized for performing tasks in both natural language processing and biocuration. Although there have been many applications of crowdsourcing in these fields, there have been fewer high-level discussions of the methodology and its applicability to biocuration. This paper explores crowdsourcing for biocuration through several case studies that highlight different ways of leveraging ‘the crowd’; these raise issues about the kind(s) of expertise needed, the motivations of participants, and questions related to feasibility, cost and quality. The paper is an outgrowth of a panel session held at BioCreative V (Seville, September 9–11, 2015). The session consisted of four short talks, followed by a discussion. In their talks, the panelists explored the role of expertise and the potential to improve crowd performance by training; the challenge of decomposing tasks to make them amenable to crowdsourcing; and the capture of biological data and metadata through community editing. Database URL: http://www.mitre.org/publications/technical-papers/crowdsourcing-and-curation-perspectives PMID:27504010

Variability in achievement across learners is a hallmark of second language (L2) learning, especially in academic-based learning. The Twins Early Development Study (TEDS), based on a large, population-representative sample in the United Kingdom, provides the first opportunity to examine individual differences in second language achievement in a…

An adaptation of incidental teaching procedures was used to teach individually defined language responses to four language-delayed children (4-8 years old). Data showed that frequency of each targeted response increased only during incidental teaching, with most targeted responses produced spontaneously; and the teachers correctly implemented the…

Johanne Paradis' Keynote Article can be read as a concise critical review of the research that focuses on the sometimes strained relationship between bilingualism and specific language impairment (SLI). In my comments I will add some thoughts based on our own research on the learning of Dutch as a second language (L2) by children with SLI.

In several countries natural sign languages were considered inadequate for education. Instead, new sign-supported systems were created, based on the belief that spoken/written language is grammatically superior. One such system, called SJM (system językowo-migowy), preserves the grammatical and lexical structure of spoken Polish and since the 1960s has been extensively employed in schools and on TV. Nevertheless, the Deaf community avoids using SJM for everyday communication, its preferred language being PJM (polski język migowy), a natural sign language, structurally and grammatically independent of spoken Polish and featuring classifier constructions (CCs). Here, for the first time, we compare, using fMRI, the neural bases of natural vs. devised communication systems. Deaf signers were presented with three types of signed sentences (SJM and PJM with/without CCs). Consistent with previous findings, PJM with CCs, compared to either SJM or PJM without CCs, recruited the parietal lobes. The reverse comparison revealed activation in the anterior temporal lobes, suggesting increased semantic combinatory processes in lexical sign comprehension. Finally, PJM compared with SJM engaged the left posterior superior temporal gyrus and anterior temporal lobe, areas crucial for sentence-level speech comprehension. We suggest that activity in these two areas reflects greater processing efficiency for naturally evolved sign language. PMID:25858311

Background: The medical problem list is an important part of the electronic medical record in development in our institution. To serve the functions it is designed for, the problem list has to be as accurate and timely as possible. However, the current problem list is usually incomplete and inaccurate, and is often totally unused. To alleviate this issue, we are building an environment where the problem list can be easily and effectively maintained. Methods: For this project, 80 medical problems were selected for their frequency of use in our future clinical field of evaluation (cardiovascular). We have developed an Automated Problem List system composed of two main components: a background and a foreground application. The background application uses Natural Language Processing (NLP) to harvest potential problem list entries from the list of 80 targeted problems detected in the multiple free-text electronic documents available in our electronic medical record. These proposed medical problems drive the foreground application designed for management of the problem list. Within this application, the extracted problems are proposed to the physicians for addition to the official problem list. Results: The set of 80 targeted medical problems selected for this project covered about 5% of all possible diagnoses coded in ICD-9-CM in our study population (cardiovascular adult inpatients), but about 64% of all instances of these coded diagnoses. The system contains algorithms to detect first document sections, then sentences within these sections, and finally potential problems within the sentences. The initial evaluation of the section and sentence detection algorithms demonstrated a sensitivity and positive predictive value of 100% when detecting sections, and a sensitivity of 89% and a positive predictive value of 94% when detecting sentences. Conclusion: The global aim of our project is to automate the process of creating and maintaining a problem list for hospitalized

The Writing Pal is an intelligent tutoring system that provides writing strategy training. A large part of its artificial intelligence resides in the natural language processing algorithms to assess essay quality and guide feedback to students. Because writing is often highly nuanced and subjective, the development of these algorithms must consider a broad array of linguistic, rhetorical, and contextual features. This study assesses the potential for computational indices to predict human ratings of essay quality. Past studies have demonstrated that linguistic indices related to lexical diversity, word frequency, and syntactic complexity are significant predictors of human judgments of essay quality but that indices of cohesion are not. The present study extends prior work by including a larger data sample and an expanded set of indices to assess new lexical, syntactic, cohesion, rhetorical, and reading ease indices. Three models were assessed. The model reported by McNamara, Crossley, and McCarthy (Written Communication 27:57-86, 2010) including three indices of lexical diversity, word frequency, and syntactic complexity accounted for only 6% of the variance in the larger data set. A regression model including the full set of indices examined in prior studies of writing predicted 38% of the variance in human scores of essay quality with 91% adjacent accuracy (i.e., within 1 point). A regression model that also included new indices related to rhetoric and cohesion predicted 44% of the variance with 94% adjacent accuracy. The new indices increased accuracy but, more importantly, afford the means to provide more meaningful feedback in the context of a writing tutoring system. PMID:23055164

Shown an entity (e.g., a plastic whisk) labeled by a novel noun in neutral syntax, speakers of Japanese, a classifier language, are more likely to assume the noun refers to the substance (plastic) than are speakers of English, a count/mass language, who are instead more likely to assume it refers to the object kind [whisk; Imai, M., & Gentner, D. (1997). A cross-linguistic study of early word meaning: Universal ontology and linguistic influence. Cognition, 62, 169-200]. Five experiments replicated this language type effect on entity construal, extended it to quite different stimuli from those studied before, and extended it to a comparison between Mandarin speakers and English speakers. A sixth experiment, which did not involve interpreting the meaning of a noun or a pronoun that stands for a noun, failed to find any effect of language type on entity construal. Thus, the overall pattern of findings supports a non-Whorfian, language on language account, according to which sensitivity to lexical statistics in a count/mass language leads adults to assign a novel noun in neutral syntax the status of a count noun, influencing construal of ambiguous entities. The experiments also document and explore cross-linguistically universal factors that influence entity construal, and favor Prasada's [Prasada, S. (1999). Names for things and stuff: An Aristotelian perspective. In R. Jackendoff, P. Bloom, & K. Wynn (Eds.), Language, logic, and concepts (pp. 119-146). Cambridge, MA: MIT Press] hypothesis that features indicating non-accidentalness of an entity's form lead participants to a construal of object kind rather than substance kind. Finally, the experiments document the age at which the language type effect emerges in lexical projection. The details of the developmental pattern are consistent with the lexical statistics hypothesis, along with a universal increase in sensitivity to material kind. PMID:19230873

Spontaneous language of 18 patients suffering from Huntington's disease and 15 dysarthric controls suffering from Friedreich's ataxia were investigated. In addition, language functions in various modalities were assessed with the Aachen Aphasia Test (AAT). The Huntington patients exhibited deficits in the syntactical complexity of spontaneous speech and in the Token Test, confrontation naming, and language comprehension subtests of the AAT, which are interpreted as resulting from their dementia. Errors affecting word access mechanisms and production of syntactical structures as such were not encountered. PMID:2452241

Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real speech. To what extent can these conclusions be scaled up to natural language learning? In the current experiments, English-learning 8-month-old infants’ ability to track transitional probabilities in fluent infant-directed Italian speech was tested (N = 72). The results suggest that infants are sensitive to transitional probability cues in unfamiliar natural language stimuli, and support the claim that statistical learning is sufficiently robust to support aspects of real-world language acquisition. PMID:19489896
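
The statistic at issue is the transitional probability between adjacent syllables, TP(x → y) = freq(xy) / freq(x). A toy computation over an invented syllable stream (far simpler than the Italian stimuli used in the study) illustrates why within-word transitions score higher than transitions across word boundaries:

```python
# Simplified illustration of transitional probabilities over a syllable stream.
from collections import Counter

# "Words" pa-do-ti and go-la-bu concatenated into a continuous stream.
stream = "pa do ti go la bu pa do ti pa do ti go la bu go la bu".split()

unigrams = Counter(stream[:-1])              # counts of the first syllable of each pair
bigrams = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs

def transitional_probability(x, y):
    return bigrams[(x, y)] / unigrams[x]

print(transitional_probability("pa", "do"))  # 1.0  (within-word transition)
print(transitional_probability("ti", "go"))  # ~0.67 (word boundary, lower)
```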

The currently developed multi-level language interfaces of information systems are generally designed for experienced users. These interfaces commonly ignore the nature and needs of the largest user group, i.e., casual users. This research identifies the importance of natural language query system research within information storage and retrieval system development; addresses the topics of developing such a query system; and finally, proposes a framework for the development of natural language query systems in order to facilitate the communication between casual users and information storage and retrieval systems.

A theory of organization and control for a meaning-based language understanding system is mapped out. In this theory, words, rather than rules, are the units of knowledge, and assume the form of procedural entities which execute as generator-like coroutines. Parsing a sentence in context demands a control environment in which experts can ask questions of each other, forward hints and suggestions to each other, and suspend. The theory is a cognitive theory of both language representation and parser control.

This paper presents a grammar- and semantic-corpus-based similarity algorithm for natural language sentences. Natural language, in opposition to “artificial language” such as computer programming languages, is the language used by the general public for daily communication. Traditional information retrieval approaches, such as vector models, LSA, HAL, or even the ontology-based approaches that extend to include concept similarity comparison instead of co-occurring terms/words, may not always determine the best match when there is no obvious relation or concept overlap between two natural language sentences. This paper proposes a sentence similarity algorithm that takes advantage of a corpus-based ontology and grammatical rules to overcome these problems. Experiments on two famous benchmarks demonstrate that the proposed algorithm yields a significant performance improvement on sentences/short texts with arbitrary syntax and structure. PMID:24982952
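
For contrast with the approach above, the sketch below implements the kind of surface-level baseline such algorithms aim to improve on: a plain bag-of-words cosine similarity, which scores zero whenever two sentences share meaning but no words. This is a hedged illustration, not the paper's grammar- and ontology-based algorithm.

```python
# Bag-of-words cosine similarity baseline; fails when sentences share meaning
# but have little or no word overlap.
import math
from collections import Counter

def cosine_similarity(s1, s2):
    a, b = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("A dog is chasing the cat", "A dog is chasing the cat"))      # 1.0
print(cosine_similarity("Physicians treat patients", "The doctor cares for the sick"))  # 0.0 despite related meaning
```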

Those who are initially exposed to an unfamiliar language have difficulty separating running speech into individual words, but over time will recognize both words and the grammatical structure of the language. Behavioral studies have used artificial languages to demonstrate that humans are sensitive to distributional information in language input, and can use this information to discover the structure of that language. This is done without direct instruction and learning occurs over the course of minutes rather than days or months. Moreover, learners may attend to different aspects of the language input as their own learning progresses. Here, we examine processing associated with the early stages of exposure to a natural language, using fMRI. Listeners were exposed to an unfamiliar language (Icelandic) while undergoing four consecutive fMRI scans. The Icelandic stimuli were constrained in ways known to produce rapid learning of aspects of language structure. After approximately 4 min of exposure to the Icelandic stimuli, participants began to differentiate between correct and incorrect sentences at above chance levels, with significant improvement between the first and last scan. An independent component analysis of the imaging data revealed four task-related components, two of which were associated with behavioral performance early in the experiment, and two with performance later in the experiment. This outcome suggests dynamic changes occur in the recruitment of neural resources even within the initial period of exposure to an unfamiliar natural language. PMID:25058056

This article reports findings from a classroom environment study which was designed to investigate the nature of Chinese Language classroom environments in Singapore secondary schools. We used a perceptual instrument, the Chinese Language Classroom Environment Inventory, to investigate teachers' and students' perceptions towards their Chinese…

Describes the Thai Learning System, which is designed to help learners acquire the Thai word order system. The system facilitates the lessons on the Web using HyperText Markup Language and Perl programming, which interfaces with natural language processing by means of Prolog. (Author/VWL)

The limited language capability of CAI systems has made it difficult to personalize problem-solving instruction. The intelligent tutoring system, ALBERT, is a problem-solving monitor and coach that has been used with high school and college level physics students for several years; it uses a natural language system to understand kinematics…

The Natural Language Paradigm (NLP) has proven effective in increasing spontaneous verbalizations for children with autism. This study investigated the use of NLP with older adults with cognitive impairments served at a leisure-based adult day program for seniors. Three individuals with limited spontaneous use of functional language participated…

Although software testing has been well-studied in computer science, it has received little attention in natural language processing. Nonetheless, a fully developed methodology for glass box evaluation and testing of language processing applications already exists in the field methods of descriptive linguistics. This work lays out a number of…

The creative oral language elicited from 45 preoperational and 40 concrete operational first grade students was analyzed to study the relationship between cognitive development and the types of case relationships produced. Each child's language was analyzed for eight noun/verb relationships, including state, process, action, experience, location,…

The present work outlines the general concept as to how natural environment guidelines will be developed for Space Shuttle activities. The following six categories that might need natural environment support are singled out: development tests; preliminary operations and prelaunch; launch to orbit; orbital mission and operations; deorbit, entry, and landing; and ferry flights. An example of detailed event requirements for decisions to launch is given. Some artist's conceptions of proposed launch complexes at Kennedy Space Center and Vandenberg AFB are shown.

In an attempt to overcome the lack of natural means of communication between student and computer, this thesis addresses the problem of developing a system which can understand natural language within an educational problem-solving environment. The nature of the environment imposes efficiency, habitability, self-teachability, and awareness of…

This is the second volume in a series that records the official Symposium Proceedings of the Jean Piaget Society and examines the theoretical, empirical, and applied aspects of Jean Piaget's seminal epistemology. The 12 papers are divided into four areas: language development, formal reasoning, social cognition, and applied research. The topics of…

A teleoperated robot was used to assemble the Experimental Assembly of Structures in Extra-vehicular activity (EASE) space structure under neutral buoyancy conditions, simulating a telerobot performing structural assembly in the zero gravity of space. This previous work used a manually controlled teleoperator as a test bed for system performance evaluations. From these results several Artificial Intelligence options were proposed. One of these was further developed into a real time assembly planner. The interface for this system is effective in assembling EASE structures using windowed graphics and a set of networked menus. As the problem space becomes more complex and hence the set of control options increases, a natural language interface may prove to be beneficial to supplement the menu based control strategy. This strategy can be beneficial in situations such as: describing the local environment, maintaining a data base of task event histories, modifying a plan or a heuristic dynamically, summarizing a task in English, or operating in a novel situation.

In this paper, I investigate the problem of finding the most similar music tracks using techniques popular in Natural Language Processing, such as TF-IDF and LDA. I define a document to be a music track. Each music track is transformed into a spectrogram; thanks to that, I can use well-known techniques to get words from images. I used the SURF operator to detect characteristic points and a novel approach for their description. Standard k-means was used for clustering. Clustering here amounts to building a dictionary, so afterwards I can transform spectrograms into text documents and perform TF-IDF and LDA. Finally, I can make a query in the obtained vector space. The research was done on 16 music tracks for training and 336 for testing, split into four categories: Hip-hop, Jazz, Metal, and Pop. Although the technique used is completely unsupervised, the results are satisfactory and encourage further research.
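
The pipeline described, local descriptors clustered into a dictionary of "visual words", tracks re-expressed as word histograms, and TF-IDF used for querying, can be sketched schematically as follows. Random arrays stand in for SURF descriptors, and the dictionary size and track counts are invented, so this is an outline of the idea rather than a reproduction of the experiment.

```python
# Schematic "visual words" pipeline with placeholder descriptors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
n_tracks, descriptors_per_track, dim, vocab_size = 8, 50, 64, 16

# 1) Placeholder local descriptors for each track's spectrogram.
descriptors = [rng.normal(size=(descriptors_per_track, dim)) for _ in range(n_tracks)]

# 2) Build the "dictionary": cluster all descriptors with k-means.
kmeans = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
kmeans.fit(np.vstack(descriptors))

# 3) Turn each track into a histogram of visual-word counts, then TF-IDF.
counts = np.array([np.bincount(kmeans.predict(d), minlength=vocab_size) for d in descriptors])
tfidf = TfidfTransformer().fit_transform(counts)

# 4) Query: rank tracks by cosine similarity to track 0.
ranking = cosine_similarity(tfidf[0], tfidf).ravel().argsort()[::-1]
print("tracks most similar to track 0:", ranking[:3])
```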

There is a considerable interest at Educational Testing Service (ETS) to include performance-based, natural language constructed-response items on standardized tests. Such items can be developed, but the projected time and costs required to have these items scored by human graders would be prohibitive. In order for ETS to include these types of…

It is noted that language teaching in literature departments in many colleges and universities does not confer the same benefits or offer the same rewards as teaching literature. The problems this imbalance creates in academic rigor and continuity, and the questions addressing why this imbalance exists, are discussed. (GLR)

Since language is a biological trait, it is necessary to investigate its evolution, development, and functions, along with the mechanisms that have been set aside, and are now recruited, for its acquisition and use. It is argued here that progress toward each of these goals can be facilitated by new programs of research, carried out within a new…

The field of spoken language processing is concerned with creating computer programs that can understand human speech and produce human-like speech. Regarding the problem of understanding human speech, there is currently growing interest in moving beyond speech recognition (the task of transcribing the words in an audio stream) and towards…

This paper compares different kinds of language modeling methods that can be applied to the linguistic decoding part of a speech recognition system with a very large vocabulary. These models are studied experimentally on a pseudophonetic input arising from French stenotypy. The authors propose a model which combines the advantages of statistical modeling with information-theoretic tools and those of a grammatical approach.
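
As a hedged sketch of the statistical side of such linguistic decoding (not the model proposed by the authors), the snippet below builds an interpolated bigram language model with add-one smoothing over a toy corpus; the interpolation weight and corpus are placeholders.

```python
# Interpolated bigram language model with add-one smoothing (toy example).
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)       # vocabulary size
N = len(corpus)         # corpus size in tokens
LAMBDA = 0.7            # weight on the bigram estimate

def probability(prev, word):
    p_bigram = (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)   # add-one smoothed
    p_unigram = (unigrams[word] + 1) / (N + V)
    return LAMBDA * p_bigram + (1 - LAMBDA) * p_unigram

print(probability("the", "cat"))   # relatively likely continuation
print(probability("the", "sat"))   # less likely continuation
```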

Minnesota Univ., Minneapolis. Center for Curriculum Development in English.

This 10th-grade unit in Minnesota's "language-centered" curriculum introduces the complexity of linguistic meaning by demonstrating the relationships among linguistic symbols, their referents, their interpreters, and the social milieu. The unit begins with a discussion of Ray Bradbury's "The Kilimanjaro Machine," which illustrates how an otherwise…

Autism spectrum disorders (ASD) are pervasive neurodevelopmental disorders involving a number of deficits to linguistic cognition. The gap between genetics and the pathophysiology of ASD remains open, in particular regarding its distinctive linguistic profile. The goal of this article is to attempt to bridge this gap, focusing on how the autistic brain processes language, particularly through the perspective of brain rhythms. Due to the phenomenon of pleiotropy, which may take some decades to overcome, we believe that studies of brain rhythms, which are not faced with problems of this scale, may constitute a more tractable route to interpreting language deficits in ASD and eventually other neurocognitive disorders. Building on recent attempts to link neural oscillations to certain computational primitives of language, we show that interpreting language deficits in ASD as oscillopathic traits is a potentially fruitful way to construct successful endophenotypes of this condition. Additionally, we will show that candidate genes for ASD are overrepresented among the genes that played a role in the evolution of language. These genes include (and are related to) genes involved in brain rhythmicity. We hope that the type of steps taken here will additionally lead to a better understanding of the comorbidity, heterogeneity, and variability of ASD, and may help achieve a better treatment of the affected populations. PMID:27047363

In the generative tradition, the language faculty has been shrinking—perhaps to include only the mechanism of recursion. This paper argues that even this view of the language faculty is too expansive. We first argue that a language faculty is difficult to reconcile with evolutionary considerations. We then focus on recursion as a detailed case study, arguing that our ability to process recursive structure does not rely on recursion as a property of the grammar, but instead emerges gradually by piggybacking on domain-general sequence learning abilities. Evidence from genetics, comparative work on non-human primates, and cognitive neuroscience suggests that humans have evolved complex sequence learning skills, which were subsequently pressed into service to accommodate language. Constraints on sequence learning therefore have played an important role in shaping the cultural evolution of linguistic structure, including our limited abilities for processing recursive structure. Finally, we re-evaluate some of the key considerations that have often been taken to require the postulation of a language faculty. PMID:26379567

Existing evidence shows that more abstract mental representations are formed, and more abstract language is used, to characterize phenomena which are more distant from self. Yet the precise form of the functional relationship between distance and linguistic abstractness has been unknown. In four studies, we test whether more abstract language is used in textual references to more geographically distant cities (Study 1), times further into the past or future (Study 2), references to more socially distant people (Study 3), and references to a specific topic (Study 4). Using millions of linguistic productions from thousands of social media users, we determine that linguistic concreteness is a curvilinear function of the logarithm of distance and discuss psychological underpinnings of the mathematical properties of the relationship. We also demonstrate that gradient curvilinear effects of geographic and temporal distance on concreteness are near-identical, suggesting uniformity in representation of abstractness along multiple dimensions. PMID:26239108

The Fast Flux Test Facility (FFTF) has been designed for passive, back-up, safety grade decay heat removal utilizing natural circulation of the sodium coolant. This paper discusses the process by which operator preparation for this emergency operating mode has been assured, in parallel with the design verification during the FFTF startup and acceptance testing program. Over the course of the test program, additional insights were gained through on-going plant analyses and through general safety evaluations performed throughout the nuclear industry. These insights led to the development of improved operator training material for control of decay heat removal during both forced and natural circulation, as well as improvements in the related plant operating procedures.

The natural-language tutorial dialogue system that the authors are developing will allow them to focus on the nature of interactivity during tutoring as a malleable factor. Specifically, it will serve as a research platform for studies that manipulate the frequency and types of verbal alignment processes that take place during tutoring, such as…

A variety of techniques for collecting and analyzing information about the natural use of natural languages is surveyed, emphasizing the importance of recognizing the properties of a research task that make a given technique more or less suitable to it rather than comparing techniques globally and ranking them absolutely. An initial goal is to…

We have assembled a large corpus of student submissions to an automatic grading system, where the subject matter involves the translation of natural language sentences into propositional logic. Of the 2.3 million translation instances in the corpus, 286,000 (approximately 12%) are categorized as being in error. We want to understand the nature of…

A textbook for English-as-a-Second-Language (ESL) students presents lessons on U.S. citizenship education and naturalization information. The nine lessons cover the following topics: the U.S. system of government; the Bill of Rights; responsibilities and rights of citizens; voting; requirements for naturalization; the application process; the…

This paper explores the question of interpreting complex texts, particularly scripts and multimedia presentations. The paper first reviews the literature on natural discourse, noting that although interpreting spoken natural language seems rather straightforward, many scholars have discussed what is required to make sense of discourse. The paper…

Objective: The opportunity to integrate clinical decision support systems into clinical practice is limited due to the lack of structured, machine-readable data in the current format of the electronic health record. Natural language processing has been designed to convert free text into machine-readable data. The aim of the current study was to ascertain the feasibility of using natural language processing to extract clinical information from >76,000 breast pathology reports. Approach and Procedure: Breast pathology reports from three institutions were analyzed using natural language processing software (Clearforest, Waltham, MA) to extract information on a variety of pathologic diagnoses of interest. Data tables were created from the extracted information according to date of surgery, side of surgery, and medical record number. The variety of ways in which each diagnosis could be represented was recorded, as a means of demonstrating the complexity of machine interpretation of free text. Results: There was widespread variation in how pathologists reported common pathologic diagnoses. We report, for example, 124 ways of saying invasive ductal carcinoma and 95 ways of saying invasive lobular carcinoma. There were >4000 ways of saying invasive ductal carcinoma was not present. Natural language processor sensitivity and specificity were 99.1% and 96.5% when compared to expert human coders. Conclusion: We have demonstrated how a large body of free-text medical information, such as that seen in breast pathology reports, can be converted to a machine-readable format using natural language processing, and described the inherent complexities of the task. PMID:22934236

The essay argues that Francis Bacon's considerations of parables and cryptography reflect larger interpretative concerns of his natural philosophic project. Bacon describes nature as having a language distinct from those of God and man, and, in so doing, establishes a central problem of his natural philosophy—namely, how can the language of nature be accessed through scientific representation? Ultimately, Bacon's solution relies on a theory of differential and duplicitous signs that conceal within them the hidden voice of nature, which is best recognized in the natural forms of efficient causality. The "alphabet of nature"—those tables of natural occurrences—consequently plays a central role in his program, as it renders nature's language susceptible to a process of decryption that mirrors the model of the bilateral cipher. It is argued that while the writing of Bacon's natural philosophy strives for literality, its investigative process preserves a space for alterity within scientific representation, which is made accessible to those with the interpretative key. PMID:22371983

This study investigates the hypothesis that there is a natural number bias that influences how students understand the effects of arithmetical operations involving both Arabic numerals and numbers that are represented by symbols for missing numbers. It also investigates whether this bias correlates with other aspects of students' understanding of…

Configuring a set of devices for pre- and post-track activities in NASA's Deep Space Network (DSN) involves hundreds of keyboard entries, manual operations, and parameter extractions and confirmations, making it tedious and error prone. This article presents a language called Automation Language for Managing Operations (ALMO), which automates operations of communications links in the DSN. ALMO was developed in response to a number of deficiencies that were identified with the previous languages and techniques used to manage DSN link operations. These included a need to (1) provide visibility to the information that resides in the different link devices in order to recognize an anomaly and alert the operator when it occurs, (2) provide an intuitive and simple language capable of representing the full spectrum of operations procedures, (3) mitigate the variations in operating procedures experienced between different tracking complexes and supports, and (4) automate overall operation, reducing cost by minimizing work hours required to configure devices and perform activities. With ALMO, for the first time in DSN operations, operators are able to capture sequences of activities into simple instructions that can be easily interpreted by both human and machine. Additionally, the device information, which used to be viewable only via screen displays, is now accessible for operator use in automating their tasks, thus reducing the time it takes to perform such tasks while minimizing the chance of error. ALMO currently is being used operationally at the Deep Space Communications Complex in Canberra, Australia. Link operators at the Madrid, Spain, and Goldstone, California, communications complexes also have received training in the use of ALMO.

Pathology computer systems are making increasing use of natural language diagnoses. The Johns Hopkins Medical Institutions integrated pathology reporting system, a commercial product with extensive, locally added enhancements, covers all information management functions within autopsy and surgical pathology divisions and has on-line linkages to clinical laboratory reports and the medical library's Mini-MEDLINE system. All diagnoses are written in natural language, using a word processor and spelling checker. A security system with personal passwords and different levels of access for different staff members allows reports to be signed out with an electronic signature. The system produces financial reports, overdue case reports, and Boolean searches of the database. Our experience with 128,790 consecutively entered pathology reports suggests that the greater precision of natural language diagnoses makes them the most suitable vehicle for follow-up, retrieval, and systems development functions in pathology. PMID:3070549

Mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. That is why mobile robotics problems are complex, with many unanswered questions. To reach a high degree of autonomous operation, a new level of learning is required. On the one hand, promising learning theories such as the adaptive critic and creative control have been proposed; on the other hand, the human brain's processing ability has amazed and inspired researchers in the area of Unmanned Ground Vehicles but has been difficult to emulate in practice. A new direction in fuzzy theory tries to develop a theory to deal with the perceptions conveyed by natural language. This paper tries to combine these two fields and presents a framework for autonomous robot navigation. The proposed creative controller, like the adaptive critic controller, has information stored in a dynamic database (DB), plus a dynamic task control center (TCC) that functions as a command center to decompose tasks into sub-tasks with different dynamic models and multi-criteria functions. The TCC module utilizes the computational theory of perceptions to deal with the high levels of task planning. The authors are currently implementing the model on a real mobile robot, and the preliminary results are described in this paper.

QATT, a natural language interface developed for the Qualitative Process Engine (QPE) system, is presented. The major goal was to evaluate the use of a preexisting natural language understanding system designed to be tailored for query processing in multiple domains of application. The other goal of QATT is to provide a comfortable environment in which to query envisionments in order to gain insight into the qualitative behavior of physical systems. It is shown that the use of the preexisting system made possible the development of a reasonably useful interface in a few months.

SWAN is an expert system and natural language interface for assessing the war-fighting capability of Air Force units in Europe. The expert system is an object-oriented, knowledge-based simulation with an alternate-worlds facility for performing what-if excursions. Responses from the system take the form of generated text, tables, or graphs. The natural language interface is an expert system in its own right, with a knowledge base and rules which understand how to access external databases, models, or expert systems. The distinguishing feature of the Air Force expert system is its use of meta-knowledge to generate explanations in the frame- and procedure-based environment.

We studied the cognitive abilities of a 13-year-old deaf child, deprived of most linguistic input from late infancy, in a battery of tests designed to reveal the nature of numerical and geometrical abilities in the absence of a full linguistic system. Tests revealed widespread proficiency in basic symbolic and non-symbolic numerical computations…

We propose a stochastic model for the number of different words in a given database which incorporates the dependence on the database size and historical changes. The main feature of our model is the existence of two different classes of words: (i) a finite number of core words, which have higher frequency and do not affect the probability of a new word to be used, and (ii) the remaining virtually infinite number of noncore words, which have lower frequency and, once used, reduce the probability of a new word to be used in the future. Our model relies on a careful analysis of the Google Ngram database of books published in the last centuries, and its main consequence is the generalization of Zipf's and Heaps' laws to two scaling regimes. We confirm that these generalizations yield the best simple description of the data among generic descriptive models and that the two free parameters depend only on the language but not on the database. From the point of view of our model, the main change on historical time scales is the composition of the specific words included in the finite list of core words, which we observe to decay exponentially in time with a rate of approximately 30 words per year for English.
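
As a toy illustration of the two-class mechanism summarized above (not the authors' fitted model), the following sketch simulates vocabulary growth with a finite core lexicon and noncore words whose accumulation suppresses further new words; the parameter values are arbitrary, not estimates from the Google Ngram data.

```python
# Minimal simulation of a two-class vocabulary model: core words never add new
# types beyond a fixed list, while each new noncore type lowers the chance of
# introducing the next one. Parameters are illustrative only.
import random

def simulate(n_tokens=100_000, n_core=1_000, p_core=0.5, alpha=0.7, seed=0):
    """Return the number of distinct word types seen after each token."""
    rng = random.Random(seed)
    core_seen = set()
    noncore_types = 0
    growth = []
    for _ in range(n_tokens):
        if rng.random() < p_core:
            core_seen.add(rng.randrange(n_core))      # one of the core words
        else:
            # chance of a brand-new noncore word decays as noncore types accumulate
            if rng.random() < (noncore_types + 1) ** (alpha - 1):
                noncore_types += 1
        growth.append(len(core_seen) + noncore_types)
    return growth

if __name__ == "__main__":
    g = simulate()
    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} tokens -> {g[n - 1]:>6} types")
```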

Natural gas is rapidly becoming the optimal choice for fueling new generating units in the electric power system, driven by abundant natural gas supplies and environmental regulations that are expected to cause coal-fired generation retirements. The growing reliance on natural gas as a dominant fuel for electricity generation throughout North America has brought the interaction between the natural gas and power grids into sharp focus. The primary concern and motivation of this research is to address the emerging interdependency issues faced by the electric power and natural gas industry. This thesis provides a comprehensive analysis of the interactions between the two systems regarding the short-term operation and long-term infrastructure planning. Natural gas and renewable energy appear complementary in many respects regarding fuel price and availability, environmental impact, resource distribution and dispatchability. In addition, demand response has held the promise of making a significant contribution to enhance system operations by providing incentives to customers for a flatter load profile. We investigated the coordination between natural gas-fired generation and prevailing nontraditional resources, including renewable energy and demand response, so as to provide economical options for optimizing short-term scheduling under tight natural gas delivery constraints. As the amount and dispatch of gas-fired generation increases, the long-term interdependency issue is whether there is adequate pipeline capacity to provide sufficient gas to natural gas-fired generation during the entire planning horizon while it is widely used outside the power sector. This thesis developed a co-optimization planning model by incorporating the natural gas transportation system into the multi-year resource and transmission system planning problem. This consideration would provide a more comprehensive decision for the investment and an accurate assessment of system adequacy and

A combination of traffic demand growth, Next Generation Air Transportation System (NextGen) technologies and operational concepts, and increased utilization of regional airports is expected to increase the occurrence and severity of coupling between operations at proximate airports. These metroplex phenomena constrain the efficiency and/or capacity of airport operations and, in NextGen, have the potential to reduce safety and prevent environmental benefits. Without understanding the nature of metroplexes and developing solutions that provide efficient coordination of operations between closely-spaced airports, the use of NextGen technologies and distribution of demand to regional airports may provide little increase in the overall metroplex capacity. However, the characteristics and control of metroplex operations have not received significant study. This project advanced the state of knowledge about metroplexes by completing three objectives: 1. developed a foundational understanding of the nature of metroplexes; 2. provided a framework for discussing metroplexes; 3. suggested and studied an approach for optimally managing metroplexes that is consistent with other NextGen concepts.

The P300 speller is an example of a brain-computer interface that can restore functionality to victims of neuromuscular disorders. Although the most common application of this system has been communicating language, the properties and constraints of the linguistic domain have not to date been exploited when decoding brain signals that pertain to language. We hypothesized that combining the standard stepwise linear discriminant analysis with a Naive Bayes classifier and a trigram language model would increase the speed and accuracy of typing with the P300 speller. With integration of natural language processing, we observed significant improvements in accuracy and 40-60% increases in bit rate for all six subjects in a pilot study. This study suggests that integrating information about the linguistic domain can significantly improve signal classification.
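
The fusion the study describes can be pictured as adding a language-model log-prior to the classifier's log-likelihood before choosing the next character. The sketch below shows only that combination step with made-up numbers; it is not the study's actual classifier or trigram model.

```python
# Combine (hypothetical) P300 classifier evidence with a (hypothetical)
# language-model prior via Bayes' rule in log space, then pick the best letter.
def fuse(p300_loglik, lm_logprob):
    """Return the character with the highest combined log score."""
    posterior = {c: p300_loglik[c] + lm_logprob[c] for c in p300_loglik}
    return max(posterior, key=posterior.get)

# log-likelihoods from invented EEG evidence for the next character
p300 = {"A": -1.2, "B": -1.1, "C": -2.5}
# log-probabilities from an invented trigram model given the typed context
lm = {"A": -0.4, "B": -2.0, "C": -1.0}

print(fuse(p300, lm))  # the language-model prior tips the decision toward "A"
```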

This book is designed primarily to help users find meaningful words for natural language, or free-text, computer searching of bibliographic and textual databases in the social and behavioral sciences. Additionally, it covers many socially relevant and technical topics not covered by the usual literary thesaurus; therefore, it may also be useful for…

Describes a system that was developed in Germany for natural language processing (NLP) to improve free text analysis for information retrieval. Techniques from empirical linguistics are discussed, system architecture is explained, and rules for dealing with conjunctions in dependency analysis for free text processing are proposed. (13 references)…

The current study investigates the degree to which the lexical properties of students' essays can inform stealth assessments of their vocabulary knowledge. In particular, we used indices calculated with the natural language processing tool, TAALES, to predict students' performance on a measure of vocabulary knowledge. To this end, two corpora were…

The introduction to this special issue on nature-nurture interactions notes that the following articles represent five biologically oriented research approaches which each provide a tutorial on the investigator's major research tool, a summary of current research understandings regarding language and learning differences, and a discussion of…

The existence of verification processes in recognition memory was confirmed in the context of Adams' (Adams & Bray, 1970) closed-loop theory. Subjects' recognition was tested following a learning session. The expectation was that data would reveal consistent internal relationships supporting the position that natural language mediation plays an…

The current study was designed to investigate the timing and nature of interaction between the two languages of bilinguals. For this purpose, we compared discrimination of Canadian French and Canadian English coronal stops by simultaneous bilingual, monolingual and advanced early L2 learners of French and English. French /d/ is phonetically…

BIT BY BIT is an encryption game that is designed to improve students' understanding of natural language processing in computers. Participants encode clear words into binary code using an encryption key and exchange them in the game. BIT BY BIT enables participants who do not understand the concept of binary numbers to perform the process of…

This booklet is intended to serve as an introduction to art experiences that relate to studies in social science, natural science, and language arts. It is designed to develop a better understanding of the dynamics of interaction of the abiotic, biotic, and cultural factors of the total environment as manifest in art forms. Each section, presented…

I review 2 seminal research reports published in this journal during its second decade more than a century ago. Given psychology's subdisciplines, they would not normally be reviewed together because one involves reading and the other speech perception. The small amount of interaction between these domains might have limited research and theoretical progress. In fact, the 2 early research reports revealed common processes involved in these 2 forms of language processing. Their illustration of the role of Wundt's apperceptive process in reading and speech perception anticipated descriptions of contemporary theories of pattern recognition, such as the fuzzy logical model of perception. Based on the commonalities between reading and listening, one can question why they have been viewed so differently. It is commonly believed that learning to read requires formal instruction and schooling, whereas spoken language is acquired from birth onward through natural interactions with people who talk. Most researchers and educators believe that spoken language is acquired naturally from birth onward and even prenatally. Learning to read, on the other hand, is not possible until the child has acquired spoken language, reaches school age, and receives formal instruction. If an appropriate form of written text is made available early in a child's life, however, the current hypothesis is that reading will also be learned inductively and emerge naturally, with no significant negative consequences. If this proposal is true, it should soon be possible to create an interactive system, Technology Assisted Reading Acquisition, to allow children to acquire literacy naturally. PMID:22953690

Few researchers would doubt that ultimate attainment in second language grammar is negatively correlated with age of acquisition, but considerable controversy remains about the nature of this relationship: the exact shape of the age-attainment function and its interpretation. This article presents two parallel studies with native speakers of…

In this paper, we present a study of a large corpus of student logic exercises in which we explore the relationship between two distinct measures of difficulty: the proportion of students whose initial attempt at a given natural language to first-order logic translation is incorrect, and the average number of attempts that are required in order to…

Discussion of information retrieval and relevance focuses on mutual information, a measure which represents the relation between two words. A model of a natural-language information-retrieval system that is based on a two-level document-ranking method using mutual information is presented, and a Korean encyclopedia test collection is explained.…
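
The word-word mutual information measure referred to above can be estimated from simple co-occurrence counts; the sketch below shows the pointwise form with invented counts, not figures from the Korean test collection.

```python
# Pointwise mutual information between two words from document co-occurrence
# counts; the counts below are made up for illustration.
import math

def mutual_information(n_xy, n_x, n_y, n_docs):
    """log2 of P(x, y) / (P(x) * P(y)), estimated from document counts."""
    p_xy = n_xy / n_docs
    p_x, p_y = n_x / n_docs, n_y / n_docs
    return math.log2(p_xy / (p_x * p_y))

# hypothetical: "retrieval" and "relevance" co-occur in 120 of 10,000 documents
print(round(mutual_information(n_xy=120, n_x=400, n_y=600, n_docs=10_000), 3))
```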

This dissertation presents a pragmatic interpreter/translator called Real English to serve as a natural language man-machine communication interface in a multi-mode on-line information retrieval system. This multi-mode feature affords the user a library-like searching tool by giving him access to a dictionary, lexicon, thesaurus, synonym table,…

This article discusses the occurrence and measurement of self-regulated learning (SRL) both in human tutoring and in computer tutors with agents that hold conversations with students in natural language and help them learn at deeper levels. One challenge in building these computer tutors is to accommodate, encourage, and scaffold SRL because these…

AutoTutor is a natural language tutoring system that has produced learning gains across multiple domains (e.g., computer literacy, physics, critical thinking). In this paper, we review the development, key research findings, and systems that have evolved from AutoTutor. First, the rationale for developing AutoTutor is outlined and the advantages…

Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…

Significant progress has been made in the application of natural language processing (NLP) to augmentative and alternative communication (AAC), particularly in the areas of interface design and word prediction. This article will survey the current state-of-the-science of NLP in AAC and discuss its future applications for the development of next…

This paper presents a tool for drawing dynamic geometric figures by understanding the texts of geometry problems. With the tool, teachers and students can construct dynamic geometric figures on a web page by inputting a geometry problem in natural language. First we need to build the knowledge base for understanding geometry problems. With the…

Argues that the structure of language reflects and reproduces the dominant model, and reinforces many of the dualistic assumptions which underlie the separation of male and female, nature and culture, mind from body, emotion from reason, and intuition from fact. (LZ)

The use of an authoring system is described that incorporates student interaction with the computer by natural language entry at the keyboard and the use of the microcomputer to direct a random-access laser video-disk player. (Author/MLW)

This paper introduces a method of extending natural language-based processing of qualitative data analysis with the use of a very quantitative tool--graph theory. It is not an attempt to convert qualitative research to a positivist approach with a mathematical black box, nor is it a "graphical solution". Rather, it is a method to help qualitative…
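
One way to picture such a graph-theoretic extension (offered here only as a generic sketch, not the paper's method): treat qualitative codes as nodes and co-assignment within a text segment as weighted edges, then read off standard graph measures. The codes and segments below are invented, and networkx is assumed to be available.

```python
# Build a weighted co-occurrence graph of qualitative codes and report simple
# structural summaries. Codes and segments are invented placeholders.
import itertools
import networkx as nx

segments = [
    {"trust", "workload"},
    {"trust", "autonomy"},
    {"workload", "autonomy", "stress"},
]

G = nx.Graph()
for codes in segments:
    for a, b in itertools.combinations(sorted(codes), 2):
        w = G.get_edge_data(a, b, {}).get("weight", 0)
        G.add_edge(a, b, weight=w + 1)   # edge weight = number of shared segments

print(nx.degree_centrality(G))
print(sorted(G.edges(data="weight")))
```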

The continued advancement of the programming language HAL to operational status is reported. It is demonstrated that the compiler itself can be written in HAL. A HAL-in-HAL experiment proves conclusively that HAL can be used successfully as a compiler implementation tool.

This study aims to explore the nature of definitions and classifications of Language Learning Strategies (LLSs) in the current studies of second/foreign language learning in order to show the current problems regarding such definitions and classifications. The present study shows that there is not a universal agreeable definition and…

Transformations associated with the increasing speed, scale, and complexity of mobilities, together with the information technology revolution, have changed the demography of most countries of the world and brought about accompanying social, cultural, and economic shifts (Heugh, 2013). This complex diversity has changed the very nature of…

In order to adequately understand the foundations of human social interaction, we need to provide an explanation of our specific mode of living based on linguistic activity and the cultural practices with which it is interwoven. To this end, we need to make explicit the constitutive conditions for the emergence of the phenomena which relate to language and joint activity starting from their operational-relational matrix. The approach presented here challenges the inadequacy of mentalist models to explain the relation between language and interaction. Recent empirical studies concerning joint attention and language acquisition have led scholars such as Tomasello et al. (2005) to postulate the existence of a universal human "sociocognitive infrastructure" that drives joint social activities and is biologically inherited. This infrastructure would include the skill of precocious intention-reading, and is meant to explain human linguistic development and cultural learning. However, the cognitivist and functionalist assumptions on which this model relies have resulted in controversial hypotheses (i.e., intention-reading as the ontogenetic precursor of language) which take a contentious conception of mind and language for granted. By challenging this model, I will show that we should instead turn ourselves towards a constitutive explanation of language within a "bio-logical" understanding of interactivity. This is possible only by abandoning the cognitivist conception of organism and traditional views of language. An epistemological shift must therefore be proposed, based on embodied, enactive and distributed approaches, and on Maturana's work in particular. The notions of languaging and observing that will be discussed in this article will allow for a bio-logically grounded, theoretically parsimonious alternative to mentalist and spectatorial approaches, and will guide us towards a wider understanding of our sociocultural mode of living. PMID:25177308

We provide dramatic evidence that 'Mellin space' is the natural home for correlation functions in CFTs with weakly coupled bulk duals. In Mellin space, CFT correlators have poles corresponding to an OPE decomposition into 'left' and 'right' sub-correlators, in direct analogy with the factorization channels of scattering amplitudes. In the regime where these correlators can be computed by tree level Witten diagrams in AdS, we derive an explicit formula for the residues of Mellin amplitudes at the corresponding factorization poles, and we use the conformal Casimir to show that these amplitudes obey algebraic finite difference equations. By analyzing the recursive structure of our factorization formula we obtain simple diagrammatic rules for the construction of Mellin amplitudes corresponding to tree-level Witten diagrams in any bulk scalar theory. We prove the diagrammatic rules using our finite difference equations. Finally, we show that our factorization formula and our diagrammatic rules morph into the flat space S-Matrix of the bulk theory, reproducing the usual Feynman rules, when we take the flat space limit of AdS/CFT. Throughout we emphasize a deep analogy with the properties of flat space scattering amplitudes in momentum space, which suggests that the Mellin amplitude may provide a holographic definition of the flat space S-Matrix.

Lightning, the energetic and broadband electrical discharge produced by thunderstorms, provides a natural remote sensing signal for the study of severe storms and related phenomena on global, regional and local scales. Using this strong signal, one of nature's own probes of severe weather, lightning measurements prove to be straightforward and take advantage of a variety of measurement techniques that have advanced considerably in recent years. We briefly review some of the leading lightning detection systems, including satellite-based optical detectors such as the Lightning Imaging Sensor, and ground-based radio frequency systems such as Vaisala's National Lightning Detection Network (NLDN), long range lightning detection systems, and the Lightning Mapping Array (LMA) networks. In addition, we examine some of the exciting new research results and operational capabilities (e.g., shortened tornado warning lead times) derived from these observations. Finally we look forward to the next measurement advance - lightning observations from geostationary orbit.

This manuscript highlights tangible benefits deriving from the dynamic simulation and control of operational transients of natural gas processing plants. Relevant improvements in safety, controllability, operability, and flexibility are obtained not only within the traditional applications, i.e. plant start-up and shutdown, but also in certain fields apparently time-independent, such as feasibility studies of gas processing plant layout and process design. Specifically, this paper examines the myopic steady-state approach and its main shortcomings with respect to more detailed studies that take non-steady-state behavior into consideration. A portion of a gas processing facility is considered as a case study. Process transients and design and control solutions apparently more appealing from a steady-state approach are compared to the corresponding dynamic simulation solutions. PMID:22056010

In a continuation of the conversation with Fitch, Chomsky, and Hauser on the evolution of language, we examine their defense of the claim that the uniquely human, language-specific part of the language faculty (the "narrow language faculty") consists only of recursion, and that this part cannot be considered an adaptation to communication. We…

Parents who enroll their children to be educated through a threatened minority language frequently do not speak that language themselves and classes in the language are sometimes offered to parents in the expectation that this will help them to support their children's education and to use the minority language in the home. Providing…

Argues that reasoning is not governed by mental logic or models. Proposes new operational semantic theory, in which reasoning is based on children's operational understanding of key terms in a given problem. Reports results of a study of class inclusion in which dramatic differences in performance were found as the result of linguistic context.…

Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together to a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, which is a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, meaning of words, or role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on literature on early language assessment, at the level of about a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities. PMID:26560154

Major sources as defined in Title V of the Clean Air Act Amendments of 1990 that are required to submit an operating permit application will need to: Evaluate their compliance status; Determine a strategic method of presenting the general and specific conditions of their Model Operating Permit (MOP); Maintain compliance with air quality regulations. A MOP is prepared to assist permitting agencies and affected facilities in the development of operating permits for a specific source category. This paper includes a brief discussion of example permit conditions that may be applicable to various types of Title V sources. A MOP for a generic natural gas processing plant is provided as an example. The MOP should include a general description of the production process and identify emission sources. The two primary elements that comprise a MOP are: Provisions of all existing state and/or local air permits; Identification of general and specific conditions for the Title V permit. The general provisions will include overall compliance with all Clean Air Act Titles. The specific provisions include monitoring, record keeping, and reporting. Although Title V MOPs are prepared on a case-by-case basis, this paper will provide a general guideline of the requirements for preparation of a MOP. Regulatory agencies have indicated that a MOP included in the Title V application will assist in preparation of the final permit provisions, minimize delays in securing a permit, and provide support during the public notification process.

This presentation concentrates on knowledge acquisition and its application to the development of an expert module and a user interface for an Intelligent Tutoring System (ITS). The Systems Test and Operations Language (STOL) ITS is being developed to assist NASA control center personnel in learning a command and control language as it is used in mission operations rooms. The objective of the tutor is to impart knowledge and skills that will permit the trainee to solve command and control problems in the same way that the STOL expert solves those problems. The STOL ITS will achieve this objective by representing the solution space in such a way that the trainee can visualize the intermediate steps, and by having the expert module production rules parallel the STOL expert's knowledge structures.

A natural language challenge devised by Informatics for Integrating Biology and the Bedside (i2b2) was to analyze free-text health data to construct a multi-class, multi-label classification system focused on obesity and its co-morbidities. This report presents a case study in which a natural language processing (NLP) toolkit, called NLTK, was used in the challenge. This report provides a brief review of NLP in the context of EHR applications, briefly surveys and contrasts some existing NLP toolkits, and reports on our experiences with the i2b2 case study. Our efforts uncovered issues including the lack of human annotated physician notes for use as NLP training data, differences between conventional free-text and medical notes, and potential hardware and software limitations affecting future projects. PMID:19380974
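
As a minimal sketch of an NLTK-style pipeline of the kind the report discusses (not the authors' actual system), the snippet below builds bag-of-words features from note text and trains NLTK's Naive Bayes classifier on two invented examples.

```python
# Bag-of-words features feeding NLTK's Naive Bayes classifier; the toy "notes"
# and labels are invented, not i2b2 data.
import nltk

def features(text):
    return {word.lower(): True for word in text.split()}

train = [
    (features("patient with obesity and type 2 diabetes"), "obesity"),
    (features("no evidence of obesity , BMI within normal limits"), "not_obese"),
]

classifier = nltk.NaiveBayesClassifier.train(train)
print(classifier.classify(features("patient with obesity and hypertension")))
```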

Providing descriptions of isolated sensors and sensor networks in natural language, understandable by the general public, is useful to help users find relevant sensors and analyze sensor data. In this paper, we discuss the feasibility of using geographic knowledge from public databases available on the Web (such as OpenStreetMap, Geonames, or DBpedia) to automatically construct such descriptions. We present a general method that uses such information to generate sensor descriptions in natural language. The results of the evaluation of our method in a hydrologic national sensor network showed that this approach is feasible and capable of generating adequate sensor descriptions with a lower development effort compared to other approaches. In the paper we also analyze certain problems that we found in public databases (e.g., heterogeneity, non-standard use of labels, or rigid search methods) and their impact on the generation of sensor descriptions. PMID:26151211
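
The generation step can be as simple as filling a sentence template from a sensor's metadata and nearby geographic features of the sort one might fetch from OpenStreetMap or Geonames. The record and template below are invented placeholders, not the paper's method or data.

```python
# Template-based generation of a short sensor description from (hypothetical)
# sensor metadata enriched with nearby geographic features.
sensor = {
    "id": "gauge-042",
    "kind": "water level gauge",
    "river": "Ebro",
    "nearest_place": "Zaragoza",
    "distance_km": 3.2,
}

def describe(s: dict) -> str:
    return (
        f"Sensor {s['id']} is a {s['kind']} on the {s['river']} river, "
        f"about {s['distance_km']:.1f} km from {s['nearest_place']}."
    )

print(describe(sensor))
```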

The 2014 i2b2 natural language processing shared task focused on identifying cardiovascular risk factors such as high blood pressure, high cholesterol levels, obesity and smoking status among other factors found in health records of diabetic patients. In addition, the task involved detecting medications and time information associated with the extracted data. This paper presents the development and evaluation of a natural language processing (NLP) application conceived for this i2b2 shared task. For increased efficiency, the application's main components were adapted from two existing NLP tools implemented in the Apache UIMA framework: Textractor (for dictionary-based lookup) and cTAKES (for preprocessing and smoking status detection). The application achieved a final (micro-averaged) F1-measure of 87.5% on the final evaluation test set. Our attempt was mostly based on existing tools adapted with minimal changes and allowed for satisfying performance with limited development efforts. PMID:26318122
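
A dictionary-based lookup of the kind performed by an adapted Textractor-style component can be sketched as phrase matching against a small lexicon; the lexicon and note below are invented, not i2b2 data or the actual tool.

```python
# Match lexicon phrases in a clinical note and return (start, end, label) spans.
import re

LEXICON = {
    "hypertension": "HYPERTENSION",
    "high blood pressure": "HYPERTENSION",
    "hyperlipidemia": "HYPERLIPIDEMIA",
    "metformin": "MEDICATION",
}

def annotate(text: str):
    found = []
    lowered = text.lower()           # same length as text, so offsets line up
    for phrase, label in LEXICON.items():
        for m in re.finditer(r"\b" + re.escape(phrase) + r"\b", lowered):
            found.append((m.start(), m.end(), label))
    return sorted(found)

note = "History of high blood pressure and hyperlipidemia; continues metformin."
print(annotate(note))
```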

This paper deals with the challenge of creating an Artificial Intelligence System with an Artificial Consciousness. For that, an introduction to computing anticipatory systems is presented, with the definitions of strong and weak anticipation. The quasi-anticipatory systems of Robert Rosen are linked to open-loop controllers. Then, some properties of the natural brain are presented in relation to the triune brain theory of Paul D. MacLean, and the mind time of Benjamin Libet, with his veto of the free will. The theory of hyperincursive discrete anticipatory systems is recalled in order to introduce the concept of hyperincursive free will, which gives a similar veto mechanism: free will as unpredictable hyperincursive anticipation. The concepts of endo-anticipation and exo-anticipation are then defined. Finally, some ideas about artificial conscious intelligence with natural language are presented, in relation to the Turing Machine, Formal Language, Intelligent Agents and Multi-Agent Systems.

Natural language processing-based knowledge management software, traditionally developed for security organizations, is now becoming commercially available. An informal survey was conducted to discover and examine current NLP and related technologies and potential applications for information retrieval, information extraction, summarization, categorization, terminology management, link analysis, and visualization for possible implementation at Sandia National Laboratories. This report documents our current understanding of the technologies, lists software vendors and their products, and identifies potential applications of these technologies.

Background Metachromatic leukodystrophy (MLD) is a rare, genetic neurodegenerative disease. It leads to progressive demyelination resulting in regression of development and early death. With regard to experimental therapies, knowledge of the natural course of the disease is highly important. We aimed to analyse onset and character of first symptoms in MLD and to provide detailed natural course data concerning language and cognition. Methods Patients with MLD were recruited nationwide within the scope of the German research network LEUKONET. 59 patients’ questionnaires (23 late-infantile, 36 juvenile) were analysed. Results Time from first symptoms (at a median age of 1.5 years in late-infantile and 6 years in juvenile MLD) to diagnosis took one year in late-infantile and two years in juvenile patients on average. Gait disturbances and abnormal movement patterns were first signs in all patients with late-infantile and in most with juvenile MLD. Onset in the latter was additionally characterized by problems in concentration, behaviour and fine motor function (p = 0.0011, p language acquisition. They showed a rapid language decline with first language difficulties at a median age of 2.5 years and complete loss of expressive language within several months (median age 32, range 22–47 months). This was followed by total loss of communication at a median age of around four years. In juvenile patients, language decline was more protracted, and problems in concentration and behaviour were followed by decline in skills for reading, writing and calculating around four years after disease onset. Conclusions Our data reflect the natural course of decline in language and cognition in late-infantile and juvenile MLD in a large cohort over a long observation period. This is especially relevant to juvenile patients where the disease

This paper explores a variety of methods for applying the Latent Dirichlet Allocation (LDA) automated topic modeling algorithm to the modeling of the structure and behavior of virtual organizations found within modern social media and social networking environments. As the field of Big Data reveals, an increase in the scale of social data available presents new challenges which are not tackled by merely scaling up hardware and software. Rather, they necessitate new methods and, indeed, new areas of expertise. Natural language processing provides one such method. This paper applies LDA to the study of scientific virtual organizations whose members employ social technologies. Because of the vast data footprint in these virtual platforms, we found that natural language processing was needed to 'unlock' and render visible latent, previously unseen conversational connections across large textual corpora (spanning profiles, discussion threads, forums, and other social media incarnations). We introduce variants of LDA and ultimately make the argument that natural language processing is a critical interdisciplinary methodology to make better sense of social 'Big Data', and we were able to successfully model nested discussion topics from forums and blog posts using LDA. Importantly, we found that LDA can move us beyond the state-of-the-art in conventional Social Network Analysis techniques. PMID:24930023
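
For readers unfamiliar with LDA, the following is a minimal scikit-learn example of fitting a two-topic model over a handful of invented forum-style posts; it illustrates the algorithm, not the paper's corpus or settings.

```python
# Fit a small LDA topic model and print the top terms of each topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

posts = [
    "telescope archive data release pipeline",
    "pipeline bug in the archive ingest code",
    "workshop agenda and travel logistics",
    "travel funding for the workshop participants",
]

vectorizer = CountVectorizer().fit(posts)
X = vectorizer.transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```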

All space shuttle events from launch through solid rocket booster recovery and orbiter landing are considered in terms of constraints placed on those operations by the natural environment. Thunderstorm activity is discussed as an example of a possible hazard. The activities most likely to require advanced detection and monitoring techniques are identified as those from deorbit decision to Orbiter landing. The inflexible flight plan will require the transmission of real time wind profile information below 24 km and warnings of thunderstorms or turbulence in the Orbiter flight path. Extensive aerial reconnaissance and communication facilities and procedures to permit immediate transmission of aircraft reports to the mission control authority and to the Orbiter will also be required.

The Menelas project aimed to produce a normalized conceptual representation from natural language patient discharge summaries. Because of the complex and detailed nature of conceptual representations, evaluating the quality of output of such a system is difficult. We present the method designed to measure the quality of Menelas output, and its application to the state of the French Menelas prototype as of the end of the project. We examine this method in the framework recently proposed by Friedman and Hripcsak. We also propose two conditions which make it possible to reduce the evaluation preparation workload. PMID:9357694

An article and a bibliography constitute this issue of the "Illinois English Bulletin." In "Keep the Natives from Getting Restless," Barry Gadlin examines native language learning by children from infancy through high school and discusses the theories of several authors concerning the teaching of the native language. The "Bibliography of…

One dimension of early Canadian education is the attempt of the government to use the education system as an assimilative tool to integrate the First Nations and Métis people into Euro-Canadian society. Despite these attempts, many First Nations and Métis people retained their culture and their indigenous language. Few science educators have examined First Nations and Western scientific worldviews and the impact they may have on science learning. This study explored the views some First Nations (Cree) and Euro-Canadian Grade-7-level students in Manitoba had about the nature of science. Both qualitative (open-ended questions and interviews) and quantitative (a Likert-scale questionnaire) instruments were used to explore student views. A central hypothesis to this research programme is the possibility that the different world-views of two student populations, Cree and Euro-Canadian, are likely to influence their perceptions of science. This preliminary study explored a range of methodologies to probe the perceptions of the nature of science in these two student populations. It was found that the two cultural groups differed significantly between some of the tenets in a Nature of Scientific Knowledge Scale (NSKS). Cree students significantly differed from Euro-Canadian students on the developmental, testable and unified tenets of the nature of scientific knowledge scale. No significant differences were found in NSKS scores between language groups (Cree students who speak English in the home and those who speak English and Cree or Cree only). The differences found between language groups were primarily in the open-ended questions where preformulated responses were absent. Interviews about critical incidents provided more detailed accounts of the Cree students' perception of the nature of science. The implications of the findings of this study are discussed in relation to the challenges related to research methodology, further areas for investigation, science

Embracing the dynamic nature of English language can help students learn more about all forms of English. To fully engage students, teachers should not adhere to an anachronistic and static view of English. Instead, they must acknowledge, accept, and even use different language forms within the classroom to make that classroom dynamic, inclusive,…

Famed for his collection of drawings of naturalia and his thoughts on the relationship between painting and natural knowledge, it now appears that the Bolognese naturalist Ulisse Aldrovandi (1522-1605) also pondered specifically color and pigments, compiling not only lists and diagrams of color terms but also a full-length unpublished manuscript entitled De coloribus or Trattato dei colori. Introducing these writings for the first time, this article portrays a scholar not so much interested in the materiality of pigment production, as in the cultural history of hues. It argues that these writings constituted an effort to build a language of color, in the sense both of a standard nomenclature of hues and of a lexicon, a dictionary of their denotations and connotations as documented in the literature of ancients and moderns. This language would serve the naturalist in his artistic patronage and his natural historical studies, where color was considered one of the most reliable signs for the correct identification of specimens, and a guarantee of accuracy in their illustration. Far from being an exception, Aldrovandi's 'color sensibility' spoke of that of his university-educated nature-loving peers. PMID:26856048

Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing techniques have recently been utilized for coding MI sessions using machine learning techniques, rather than human coders, and preliminary results have suggested these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either (1) simple discrete sentence features (DSF model) or (2) more complex recursive neural networks (RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance compared to the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa > 0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human-derived behavioral codes and could offer substantial improvements to the efficiency and scale in which MI mechanisms of change research and fidelity monitoring are conducted. PMID:26944234
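
In the spirit of the discrete-sentence-feature idea, and only as an illustration of the mechanics, the sketch below trains a bag-of-words classifier on a few invented utterances and scores its predictions against "human" codes with Cohen's kappa; the tiny data set is not meant to yield meaningful agreement, and none of it reflects the study's actual features or corpus.

```python
# Bag-of-words utterance coding scored with Cohen's kappa (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score

utterances = [
    "what brings you in today",
    "tell me more about that",
    "you have already cut back quite a bit",
    "it sounds like you are ready to make a change",
    "how often do you drink in a week",
    "you handled that situation well",
]
codes = ["question", "question", "affirm", "affirm", "question", "affirm"]

X = CountVectorizer().fit_transform(utterances)
clf = LogisticRegression(max_iter=1000).fit(X[:4], codes[:4])   # train on first 4
pred = clf.predict(X[4:])                                        # code the rest
print(pred, cohen_kappa_score(codes[4:], pred))
```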

A flight expert system (FLES) is being developed to assist pilots in monitoring, diagnosing and recovering from in-flight faults. To provide a communications interface between the flight crew and FLES, a natural language interface (NALI) has been implemented. Input to NALI is processed by three processors: (1) the semantic parser, (2) the knowledge retriever, and (3) the response generator. The architecture of NALI has been designed to process both temporal and nontemporal queries. Provisions have also been made to reduce the number of system modifications required for adapting NALI to other domains. This paper describes the architecture and implementation of NALI.

It is shown how certain kinds of domain independent expert systems based on classification problem-solving methods can be constructed directly from natural language descriptions by a human expert. The expert knowledge is not translated into production rules. Rather, it is mapped into conceptual structures which are integrated into long-term memory (LTM). The resulting system is one in which problem-solving, retrieval and memory organization are integrated processes. In other words, the same algorithm and knowledge representation structures are shared by these processes. As a result of this, the system can answer questions, solve problems or reorganize LTM.

MEDSYNDIKATE is a natural language processor, which automatically acquires medical information from findings reports. In the course of text analysis, their contents are transferred to conceptual representation structures, which constitute a corresponding text knowledge base. MEDSYNDIKATE is particularly adapted to deal properly with text structures, such as various forms of anaphoric reference relations spanning several sentences. The strong demands MEDSYNDIKATE poses on the availability of expressive knowledge sources are accounted for by two alternative approaches to acquiring medical domain knowledge (semi)automatically. We also present data for the information extraction performance of MEDSYNDIKATE in terms of the semantic interpretation of three major syntactic patterns in medical documents. PMID:12460632

A novel parallel model of natural language (NL) understanding is presented which can realize high levels of semantic abstraction, and is designed for implementation on synchronous SIMD architectures and optical processors. Theory is expressed in terms of the Image Algebra (IA), a rigorous, concise, inherently parallel notation which unifies the design, analysis, and implementation of image processing algorithms. The IA has been implemented on numerous parallel architectures, and IA preprocessors and interpreters are available for the FORTRAN and Ada languages. In a previous study, we demonstrated the utility of IA for mapping MEA-conformable (Multiple Execution Array) algorithms to optical architectures. In this study, we extend our previous theory to map serial parsing algorithms to the synchronous SIMD paradigm. We initially derive a two-dimensional image that is based upon the adjacency matrix of a semantic graph. Via IA template mappings, the operations of bottom-up parsing, semantic disambiguation, and referential resolution are implemented as image-processing operations upon the adjacency matrix. Pixel-level operations are constrained to Hadamard addition and multiplication, thresholding, and row/column summation, which are available in magnitude-only optics. Assuming high parallelism in the parse rule base, the parsing of n input symbols with a grammar consisting of M rules of arity H, on an N-processor architecture, could exhibit time complexity of T(n)
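
The pixel-level operations listed above (Hadamard products, thresholding, row/column sums over an adjacency matrix) can be pictured with ordinary array arithmetic; the small semantic graph and rule mask below are invented and stand in only for the general flavor of the approach, not the paper's actual templates.

```python
# Elementwise operations over an adjacency matrix of a (made-up) semantic graph.
import numpy as np

# adjacency of a 4-node semantic graph (1 = candidate attachment)
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

# mask of attachments licensed by a (hypothetical) parse rule
mask = np.array([[0, 1, 0, 0],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1],
                 [0, 0, 0, 0]])

licensed = A * mask                   # Hadamard product keeps allowed edges
active = (licensed.sum(axis=1) > 0)   # row sums flag nodes with pending work
print(licensed)
print(active.astype(int))
```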

ENSCO, Inc. is developing an innovative atmospheric observing system known as Global Environmental Micro Sensors (GEMS). The GEMS concept features an integrated system of miniaturized in situ, airborne probes measuring temperature, relative humidity, pressure, and vector wind velocity. In order for the probes to remain airborne for long periods of time, their design is based on a helium-filled super-pressure balloon. The GEMS probes are neutrally buoyant and carried passively by the wind at predetermined levels. Each probe contains onboard satellite communication, power generation, processing, and geolocation capabilities. ENSCO has partnered with the National Aeronautics and Space Administration's Kennedy Space Center (KSC) for a project called GEMS Test Operations in the Natural Environment (GEMSTONE) that will culminate with limited prototype flights of the system in spring 2007. By leveraging current advances in micro and nanotechnology, the probe mass, size, cost, and complexity can be reduced substantially so that large numbers of probes could be deployed routinely to support ground, launch, and landing operations at KSC and other locations. A full-scale system will improve the data density for the local initialization of high-resolution numerical weather prediction systems by at least an order of magnitude and provide a significantly expanded in situ data base to evaluate launch commit criteria and flight rules. When applied to launch or landing sites, this capability will reduce both weather hazards and weather-related scrubs, thus enhancing both safety and cost-avoidance for vehicles processed by the Shuttle, Launch Services Program, and Constellation Directorates. The GEMSTONE project will conclude with a field experiment in which 10 to 15 probes are released over KSC in east central Florida. The probes will be neutrally buoyant at different altitudes from 500 to 3000 meters and will report their position, speed, heading, temperature, humidity, and

The field of artificial intelligence strives to produce computer programs that exhibit intelligent behavior. One of the areas of interest is the processing of natural language. This report discusses the role of the computer language PROLOG in Natural Language Processing (NLP), both from theoretic and pragmatic viewpoints. The reasons for using PROLOG for NLP are numerous. First, linguists can write natural-language grammars almost directly as PROLOG programs; this allows fast prototyping of NLP systems and facilitates analysis of NLP theories. Second, semantic representations of natural-language texts that use logic formalisms are readily produced in PROLOG because of PROLOG's logical foundations. Third, PROLOG's built-in inferencing mechanisms are often sufficient for inferences on the logical forms produced by NLPs. Fourth, the logical, declarative nature of PROLOG may make it the language of choice for parallel computing systems. Finally, the fact that PROLOG has a de facto standard (Edinburgh) makes the porting of code from one computer system to another virtually trouble free. Perhaps the strongest tie one could make between NLP and PROLOG was stated by John Stuart Mill in his Inaugural Address at St. Andrews: The structure of every sentence is a lesson in logic.

The Uintah Basin, an oil and natural gas producing region in northeastern Utah, experienced several days of high ozone levels in early 2011 during cold temperature inversions. To study the chemical and meteorological processes leading to these wintertime ozone pollution events, the State of Utah, EPA Region 8, and oil and gas operators pulled together a multi-agency research team, including NOAA ESRL/CIRES scientists. The data gathering took place between January 15 and February 29, 2012. To document the chemical signature of various sources in the Basin, we outfitted a passenger van with in-situ analyzers (Picarro: CH4, CO2, CO, H2O, 13CH4; NOxCaRD: NO, NOx, 2B & NOxCaRD: O3), meteorological sensors, GPS units, discrete flask sampling apparatus, as well as a data logging and "real-time" in-situ data visualization system. The instrumented van, called the Mobile Lab, also hosted a KIT Proton Transfer Reaction Mass Spectrometer (a suite of in situ VOC measurements) for part of the campaign. For close to a month, the Mobile Lab traveled the roads of the oil and gas field, documenting ambient levels of several tracers. Close to 180 valid air samples were collected in February by the Mobile Lab for future analysis in the NOAA and CU/INSTAAR labs in Boulder. At the same time as the surface effort was going on, an instrumented light aircraft conducted transects over the Basin, collecting air samples mostly in the boundary layer and measuring in situ the following species: CH4, CO2, NO2, O3. We will present some of the data collected by the Mobile Lab and the aircraft and discuss analysis results.

ARIS is an artificial intelligence system which uses the English language to learn, understand, and communicate. The system attempts to simulate the psychoneurological processes which enable man to communicate verbally. It uses a modified stratificational grammar model and is being programmed in PL/1 (a programming language) for an IBM 360/67…

Demonstrates how innate representational capabilities for serial and temporal structure of language could arise from a common neural architecture, distinct from that required for the representation of abstract structure, and provides a predictive testable model of the initial computational state of the language learner. (Author/VWL)

The Chinese Language Classroom Environment Inventory (CLCEI) is a bilingual instrument developed for use in measuring students' and teachers' perceptions toward their Chinese Language classroom learning environments in Singapore secondary schools. The English version of the CLCEI was customised from the English version of the "What is happening in…

Most existing natural language interfaces to databases (NLIDBs) were designed to be used with "snapshot" database systems, which provide very limited facilities for manipulating time-dependent data. Consequently, most NLIDBs also provide very limited support for the notion of time. The database community is becoming increasingly interested in temporal database systems. These are intended to store and manipulate in a principled manner information not only about the present, but also about the past and future. This thesis develops a principled framework for constructing English NLIDBs for temporal databases (NLITDBs), drawing on research in tense and aspect theories, temporal logics, and temporal databases. I first explore temporal linguistic phenomena that are likely to appear in English questions to NLITDBs. Drawing on existing linguistic theories of time, I formulate an account for a large number of these phenomena that is simple enough to be embodied in practical NLITDBs. Exploiting ideas from temporal logics, I then define a temporal meaning representation language, TOP, and I show how the HPSG grammar theory can be modified to incorporate the tense and aspect account of this thesis, and to map a wide range of English questions involving time to appropriate TOP expressions. Finally, I present and prove the correctness of a method to translate from TOP to TSQL2, TSQL2 being a temporal extension of the SQL-92 database language. This way, I establish a sound route from English questions involving time to a general-purpose temporal database language, which can act as a principled framework for building NLITDBs. To demonstrate that this framework is workable, I employ it to develop a prototype NLITDB, implemented using ALE and Prolog.
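
The pipeline shape described above (English question to intermediate meaning representation to database query) can be caricatured as follows; the regular expression, the mini representation, and the SQL are invented stand-ins and do not reproduce TOP, TSQL2, or the thesis's grammar.

```python
# Toy English-question -> meaning-representation -> SQL pipeline (illustrative).
import re

def parse(question: str):
    m = re.match(r"which flights departed between (\d{4}) and (\d{4})\?",
                 question.lower())
    if not m:
        raise ValueError("unsupported question")
    # a tiny "meaning representation": predicate plus a time-interval constraint
    return {"predicate": "departure", "interval": (int(m.group(1)), int(m.group(2)))}

def to_sql(mr: dict) -> str:
    lo, hi = mr["interval"]
    return (f"SELECT flight_id FROM departures "
            f"WHERE valid_year BETWEEN {lo} AND {hi};")

print(to_sql(parse("Which flights departed between 1992 and 1994?")))
```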

This paper explores natural hazards teaching and communications through the use of a literary anthology of writings about the earth aimed at non-experts. Teaching natural hazards in high-school and university introductory Earth Science and Geography courses revolves mostly around lectures, examinations, and laboratory demonstrations/activities. Often the result of such a course is that students 'memorize' the answers and are penalized when they miss a given fact [e.g., "You lost one point because you were off by 50 km/hr on the wind speed of an F5 tornado."] Although facts and general methodologies are certainly important when teaching natural hazards, supplementing them with writings about the Earth strongly motivates a student's assimilation of, and enthusiasm for, this knowledge. In this paper, we discuss a literary anthology which we developed [Language of the Earth, Rhodes, Stone, Malamud, Wiley-Blackwell, 2008] and which includes many descriptions of natural hazards. Using first- and second-hand accounts of landslides, earthquakes, tsunamis, floods and volcanic eruptions, through the writings of McPhee, Gaskill, Voltaire, Austin, Cloos, and many others, hazards become 'alive', and more than 'just' a compilation of facts and processes. Using short excerpts such as these, or other similar anthologies of remarkably written accounts and discussions of natural hazards, turns 'dry' facts into more than just facts. These often highly personal viewpoints of our catastrophic world provide a useful supplement to a student's understanding of the turbulent world in which we live.

The central hypothesis underlying this communication is that the methodology and conceptual rigor of a philosophically inspired formal ontology can bring significant benefits in the development and maintenance of application ontologies [A. Flett, M. Dos Santos, W. Ceusters, Some Ontology Engineering Procedures and their Supporting Technologies, EKAW2002, 2003]. This hypothesis has been tested in the collaboration between Language and Computing (L&C), a company specializing in software for supporting natural language processing, especially in the medical field, and the Institute for Formal Ontology and Medical Information Science (IFOMIS), an academic research institution concerned with the theoretical foundations of ontology. In the course of this collaboration L&C's ontology, LinKBase, which is designed to integrate and support reasoning across a plurality of external databases, has been subjected to a thorough auditing on the basis of the principles underlying IFOMIS's Basic Formal Ontology (BFO) [B. Smith, Basic Formal Ontology, 2002. http://ontology.buffalo.edu/bfo]. The goal is to transform a large terminology-based ontology into one with the ability to support reasoning applications. Our general procedure has been the implementation of a meta-ontological definition space in which the definitions of all the concepts and relations in LinKBase are standardized in the framework of first-order logic. In this paper we describe how this principles-based standardization has led to a greater degree of internal coherence of the LinKBase structure, and how it has facilitated the construction of mappings between external databases using LinKBase as a translation hub. We argue that the collaboration here described represents a new phase in the quest to solve the so-called "Tower of Babel" problem of ontology integration [F. Montayne, J. Flanagan, Formal Ontology: The Foundation for Natural Language Processing, 2003. http://www.landcglobal.com/]. PMID:16153885

In the last decades the popularity of natural language interfaces to databases (NLIDBs) has increased, because in many cases information obtained from them is used for making important business decisions. Unfortunately, the complexity of their customization by database administrators makes them difficult to use. In order for an NLIDB to obtain a high percentage of correctly translated queries, it must be correctly customized for the database to be queried. In most cases the performance reported in the NLIDB literature is the highest possible, i.e., the performance obtained when the interfaces were customized by the implementers. For end users, however, what matters more is the performance the interface can yield when the NLIDB is customized by someone other than the implementers. Unfortunately, very few articles report NLIDB performance when the NLIDBs are not customized by the implementers. This article presents a semantically-enriched data dictionary (which permits solving many of the problems that occur when translating from natural language to SQL) and an experiment in which two groups of undergraduate students customized our NLIDB and English Language Frontend (ELF), considered one of the best available commercial NLIDBs. The experimental results show that, when customized by the first group, our NLIDB correctly answered 44.69% of queries and ELF 11.83% for the ATIS database, and when customized by the second group, our NLIDB attained 77.05% and ELF 13.48%. The performance attained by our NLIDB, when customized by ourselves, was 90%. PMID:27190752

One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural-language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.

Background Interest is growing in the application of syntactic parsers to natural language processing problems in biology, but assessing their performance is difficult because differences in linguistic convention can falsely appear to be errors. We present a method for evaluating their accuracy using an intermediate representation based on dependency graphs, in which the semantic relationships important in most information extraction tasks are closer to the surface. We also demonstrate how this method can be easily tailored to various application-driven criteria. Results Using the GENIA corpus as a gold standard, we tested four open-source parsers which have been used in bioinformatics projects. We first present overall performance measures, and test the two leading tools, the Charniak-Lease and Bikel parsers, on subtasks tailored to reflect the requirements of a system for extracting gene expression relationships. These two tools clearly outperform the other parsers in the evaluation, and achieve accuracy levels comparable to or exceeding native dependency parsers on similar tasks in previous biological evaluations. Conclusion Evaluating using dependency graphs allows parsers to be tested easily on criteria chosen according to the semantics of particular biological applications, drawing attention to important mistakes and soaking up many insignificant differences that would otherwise be reported as errors. Generating high-accuracy dependency graphs from the output of phrase-structure parsers also provides access to the more detailed syntax trees that are used in several natural-language processing techniques. PMID:17254351
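
The evaluation idea itself is simple once parser output is reduced to dependency triples; the minimal sketch below (toy sentence and triples, not the paper's GENIA-based pipeline) shows the precision/recall computation over edge sets that such an evaluation rests on.

```python
# A minimal sketch (not the paper's actual evaluation code) of scoring a parser
# against a gold standard when both outputs are reduced to dependency graphs.
# Each graph is a set of (head, dependent, relation) triples.
def dependency_prf(gold: set, predicted: set):
    """Precision, recall and F1 over dependency triples."""
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example sentence: "IL-2 activates T cells"
gold = {("activates", "IL-2", "subj"), ("activates", "cells", "obj"),
        ("cells", "T", "mod")}
pred = {("activates", "IL-2", "subj"), ("activates", "cells", "obj"),
        ("activates", "T", "mod")}
print(dependency_prf(gold, pred))  # -> (0.666..., 0.666..., 0.666...)
```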

In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing to narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem as the interval query problem, for which optimal query/update time is in general logarithmic. We next perform a tight time complexity analysis of the basic interval tree query algorithm and show its nonoptimality when applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms and propose query reformulations and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic-time stabbing-max queries and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretical lower bound. Update time is kept logarithmic and the space requirement linear. We also discuss interval management in external memory models and higher dimensions. PMID:27478379
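
As a concrete picture of the problem being formalized, the sketch below (a naive linear scan with invented annotations, not the authors' tree-based structures) shows stand-off annotations as intervals over character offsets and a stabbing query that returns every annotation covering a position; an interval tree would answer the same query in O(log n + k) time.

```python
# A minimal sketch of stand-off annotations cast as interval queries:
# annotations are (start, end, label) intervals over character offsets, and a
# "stabbing" query returns every annotation covering a given position.
from typing import List, Tuple

Annotation = Tuple[int, int, str]  # (start, end, label), end exclusive

def stab(annotations: List[Annotation], pos: int) -> List[Annotation]:
    """Return all annotations whose interval contains character offset pos."""
    return [a for a in annotations if a[0] <= pos < a[1]]

# Hypothetical annotations over a clinical note.
notes = [(0, 12, "Medication"), (4, 12, "Drug"), (20, 35, "Problem")]
print(stab(notes, 6))  # -> [(0, 12, 'Medication'), (4, 12, 'Drug')]
```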

Motivation: While text mining technologies for biomedical research have gained popularity as a way to take advantage of the explosive growth of information in text form in biomedical papers, selecting appropriate natural language processing (NLP) tools is still difficult for researchers who are not familiar with recent advances in NLP. This article provides a comparative evaluation of several state-of-the-art natural language parsers, focusing on the task of extracting protein–protein interaction (PPI) from biomedical papers. We measure how each parser, and its output representation, contributes to accuracy improvement when the parser is used as a component in a PPI system. Results: All the parsers attained improvements in accuracy of PPI extraction. The levels of accuracy obtained with these different parsers vary slightly, while differences in parsing speed are larger. The best accuracy in this work was obtained when we combined Miyao and Tsujii's Enju parser and Charniak and Johnson's reranking parser, and the accuracy is better than the state-of-the-art results on the same data. Availability: The PPI extraction system used in this work (AkanePPI) is available online at http://www-tsujii.is.s.u-tokyo.ac.jp/-100downloads/downloads.cgi. The evaluated parsers are also available online from each developer's site. Contact: yusuke@is.s.u-tokyo.ac.jp PMID:19073593

The processes of infant word segmentation and infant word learning have largely been studied separately. However, the ease with which potential word forms are segmented from fluent speech seems likely to influence subsequent mappings between words and their referents. To explore this process, we tested the link between the statistical coherence of sequences presented in fluent speech and infants’ subsequent use of those sequences as labels for novel objects. Notably, the materials were drawn from a natural language unfamiliar to the infants (Italian). The results of three experiments suggest that there is a close relationship between the statistics of the speech stream and subsequent mapping of labels to referents. Mapping was facilitated when the labels contained high transitional probabilities in the forward and/or backward direction (Experiment 1). When no transitional probability information was available (Experiment 2), or when the internal transitional probabilities of the labels were low in both directions (Experiment 3), infants failed to link the labels to their referents. Word learning appears to be strongly influenced by infants’ prior experience with the distribution of sounds that make up words in natural languages. PMID:21762650
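
For readers unfamiliar with the statistic, the toy example below (syllable stream and counts invented for illustration) shows how forward and backward transitional probabilities are computed from a speech-like sequence.

```python
# A small worked example (not from the paper) of forward and backward
# transitional probabilities, the statistic the experiments manipulate.
# Forward TP(x -> y) = P(xy) / P(x); backward TP(x <- y) = P(xy) / P(y).
from collections import Counter

syllables = "me lo me lo bi du me lo".split()
unigrams = Counter(syllables)
bigrams = Counter(zip(syllables, syllables[1:]))

def forward_tp(x, y):
    return bigrams[(x, y)] / unigrams[x]

def backward_tp(x, y):
    return bigrams[(x, y)] / unigrams[y]

print(forward_tp("me", "lo"))   # 3/3 = 1.0  -> "melo" coheres as a word
print(forward_tp("lo", "bi"))   # 1/3 ≈ 0.33 -> a likely word boundary
```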

This report examines how well the current national natural gas pipeline network has been able to handle today's market demand for natural gas. In addition, it identifies those areas of the country where pipeline utilization is continuing to grow rapidly and where new pipeline capacity is needed or is planned over the next several years.

The predicate calculus currently used in mathematical logic, computer science, philosophy, and linguistics has been found too restrictive and inadequate for describing the grammar of natural and artificial languages. Many higher-order logics have therefore been developed to overcome the limitations of the predicate calculus. In this paper a new logical representation for natural language, called the Hermeneutic Operative Calculus, is developed from mathematical principles. The Hermeneutic Operative Calculus is a language-interpretive calculus designed to account for the syntactic, semantic, and pragmatic features of natural language, removing the restrictions of any particular natural language from the semantic field it maps out. Its logic can represent the syntax and semantics of factual information precisely in any natural language. The calculus has two different forms of operations, object operations and meta-operations. Object operations list the various objects, picture the various propositions, and so forth; meta-operations specify what cannot be specified by object operations, such as the semantic stance of a proposition. The basic operative processes of linguistics and cognitive logic are mathematically conceptualized and elaborated in this paper.

The natural environment has a great influence on the ability of spacecraft to perform according to mission design specifications. Compatibility with the natural environment is a primary factor in determining the functional lifetime of the spacecraft. The spacecraft being designed and developed today are growing in complexity. In many instances, the increased complexity also increases sensitivity to environmental effects. Sensitivities to the natural environment can be tempered through appropriate design measures, mitigation strategies, and/or the acceptance of known risk. The design engineer must understand the effects of the natural environment on the spacecraft and its components while having in-depth knowledge of mitigation strategies. Too much protection incurs unnecessary expense, and oftentimes excessive mass, while too little protection can easily lead to premature mission loss. This presentation will provide a brief overview of both the natural environment and its effects and provide some insight into mitigation strategies.

The thesis describes a logical formalization of natural-language database interfacing. We assume the existence of a ``natural language engine'' capable of mediating between surface linguistic strings and their representations as ``literal'' logical forms: the focus of interest is the question of relating ``literal'' logical forms to representations in terms of primitives meaningful to the underlying database engine. We begin by describing the nature of the problem, and show how a variety of interface functionalities can be considered as instances of a type of formal inference task which we call ``Abductive Equivalential Translation'' (AET); functionalities which can be reduced to this form include answering questions, responding to commands, reasoning about the completeness of answers, answering meta-questions of type ``Do you know...'', and generating assertions and questions. In each case, a ``linguistic domain theory'' (LDT) Γ and an input formula F are given, and the goal is to construct a formula with certain properties which is equivalent to F, given Γ and a set of permitted assumptions. If the LDT is of a certain specified type, whose formulas are either conditional equivalences or Horn clauses, we show that the AET problem can be reduced to a goal-directed inference method. We present an abstract description of this method and sketch its realization in Prolog. The relationship between AET and several problems previously treated in the literature is then discussed. In particular, we show how AET can provide a simple and elegant solution to the so-called ``Doctor on Board'' problem, and in effect allows a ``relativization'' of the Closed World Assumption. The ideas in the thesis have all been implemented concretely within the SRI CLARE project, using a real projects and payments database. The LDT for the example database is described in detail, and examples of the types of functionality that can be achieved within the example domain are presented.
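
The following toy sketch, which assumes a drastically simplified representation with no unification or variable substitution, is meant only to convey the shape of the AET step: rewrite a literal predicate into a database predicate via a conditional equivalence, recording any condition that had to be assumed rather than proven. The rules, predicates, and database names are all hypothetical.

```python
# A minimal sketch (toy representation, not the thesis's CLARE/Prolog
# machinery) of Abductive Equivalential Translation: an equivalence is
# (condition, lhs, rhs), read as "if condition holds, lhs <-> rhs".
EQUIVALENCES = [
    # hypothetical linguistic-domain-theory rules
    ("is_project(X)", "work_on(Person, X)", "db_assignment(Person, X)"),
    ("is_person(X)",  "on_board(X)",        "db_crew_member(X)"),
]

def translate(literal_form, known_facts, permitted_assumptions):
    """Return (translated_form, assumptions_made), or None if no rule applies.

    Matching is by predicate name only; a real system would unify arguments
    and substitute bindings into the right-hand side.
    """
    predicate = literal_form.split("(")[0]
    for condition, lhs, rhs in EQUIVALENCES:
        if lhs.split("(")[0] == predicate:
            if condition in known_facts:
                return rhs, []
            if condition in permitted_assumptions:
                return rhs, [condition]   # abductive step: assume the condition
    return None

print(translate("work_on(mary, clare)",
                known_facts=set(),
                permitted_assumptions={"is_project(X)"}))
# -> ('db_assignment(Person, X)', ['is_project(X)'])
```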

In this study, we comparatively examined the linguistic properties of narrative clinician notes created through voice dictation versus those directly entered by clinicians via a computer keyboard. Intuitively, the nature of voice-dictated notes would resemble that of natural language, while typed-in notes may demonstrate distinctive language features for reasons such as intensive usage of acronyms. The study analyses were based on an empirical dataset retrieved from our institutional electronic health records system. The dataset contains 30,000 voice-dictated notes and 30,000 notes that were entered manually; both were encounter notes generated in ambulatory care settings. The results suggest that between the narrative clinician notes created via these two different methods, there exist considerable lexical and distributional differences. Such differences could have a significant impact on the performance of natural language processing tools, necessitating that these two types of documents be treated differently. PMID:22195229

The "new paradigm" unifying deductive and inductive reasoning in a Bayesian framework (Oaksford & Chater, 2007; Over, 2009) has been claimed to be falsified by results which show sharp differences between reasoning about necessity vs. plausibility (Heit & Rotello, 2010; Rips, 2001; Rotello & Heit, 2009). We provide a probabilistic model of reasoning with modal expressions such as "necessary" and "plausible" informed by recent work in formal semantics of naturallanguage, and show that it predicts the possibility of non-linear response patterns which have been claimed to be problematic. Our model also makes a strong monotonicity prediction, while two-dimensional theories predict the possibility of reversals in argument strength depending on the modal word chosen. Predictions were tested using a novel experimental paradigm that replicates the previously-reported response patterns with a minimal manipulation, changing only one word of the stimulus between conditions. We found a spectrum of reasoning "modes" corresponding to different modal words, and strong support for our model's monotonicity prediction. This indicates that probabilistic approaches to reasoning can account in a clear and parsimonious way for data previously argued to falsify them, as well as new, more fine-grained, data. It also illustrates the importance of careful attention to the semantics of language employed in reasoning experiments. PMID:25497521

Motivation: With the increasing volume of scientific papers and heterogeneous nomenclature in the biomedical literature, it is apparent that an improvement over standard pattern matching available in existing search engines is required. Cognition Search Information Retrieval (CSIR) is a natural language processing (NLP) technology that possesses a large dictionary (lexicon) and large semantic databases, such that search can be based on meaning. Encoded synonymy, ontological relationships, phrases, and seeds for word sense disambiguation offer significant improvement over pattern matching. Thus, the CSIR has the right architecture to form the basis for a scientific search engine. Result: Here we have augmented CSIR to improve access to the MEDLINE database of scientific abstracts. New biochemical, molecular biological and medical language and acronyms were introduced from curated web-based sources. The resulting system was used to interpret MEDLINE abstracts. Meaning-based search of MEDLINE abstracts yields high precision (estimated at >90%), and high recall (estimated at >90%), where synonym, ontology, phrases and sense seeds have been encoded. The present implementation can be found at http://MEDLINE.cognition.com. Contact: Elizabeth.goldsmith@UTsouthwestern.edu; Kathleen.dahlgren@cognition.com PMID:21347167

Many figurative expressions are fully conventionalized in everyday speech. Regarding the neural basis of figurative language processing, research has predominantly focused on metaphoric expressions in minimal semantic context. It remains unclear to what extent metaphoric expressions encountered during continuous text comprehension activate neural networks similar to those engaged by isolated metaphors. We therefore investigated the processing of similes (figurative language, e.g., "He smokes like a chimney!") occurring in a short story. Sixteen healthy, male, native German speakers listened to similes that came about naturally in a short story, while blood-oxygenation-level-dependent (BOLD) responses were measured with functional magnetic resonance imaging (fMRI). For the event-related analysis, similes were contrasted with non-figurative control sentences (CS). The stimuli differed with respect to figurativeness, while they were matched for frequency of words, number of syllables, plausibility, and comprehensibility. Similes contrasted with CS resulted in enhanced BOLD responses in the left inferior (IFG) and adjacent middle frontal gyrus. Concrete CS as compared to similes activated the bilateral middle temporal gyri as well as the right precuneus and the left middle frontal gyrus (LMFG). Activation of the left IFG for similes in a short story is consistent with results on single sentence metaphor processing. The findings strengthen the importance of the left inferior frontal region in the processing of abstract figurative speech during continuous, ecologically-valid speech comprehension; the processing of concrete semantic contents goes along with a down-regulation of bilateral temporal regions. PMID:24065897

Background Examining the natural language college students use to describe various levels of intoxication can provide important insight into subjective perceptions of college alcohol use. Previous research (Levitt et al., 2009) has shown that intoxication terms reflect moderate and heavy levels of intoxication, and that self-use of these terms differs by gender among college students. However, it is still unknown whether these terms similarly apply to other individuals and, if so, whether similar gender differences exist. Method To address these issues, the current study examined the application of intoxication terms to characters in experimentally manipulated vignettes of naturalistic drinking situations within a sample of university undergraduates (N = 145). Results Findings supported and extended previous research by showing that other-directed applications of intoxication terms are similar to self-directed applications, and depend on the gender of both the target and the user. Specifically, moderate intoxication terms were applied to and from women more than men, even when the character was heavily intoxicated, whereas heavy intoxication terms were applied to and from men more than women. Conclusions The findings suggest that gender differences in the application of intoxication terms are other-directed as well as self-directed, and that intoxication language can inform gender-specific prevention and intervention efforts targeting problematic alcohol use among college students. PMID:23841828

Natural language processing (NLP) is a technology that uses computer-based linguistics and artificial intelligence to identify and extract information from free-text data sources such as progress notes, procedure and pathology reports, and laboratory and radiologic test results. With the creation of large databases and the trajectory of health care reform, NLP holds the promise of enhancing the availability, quality, and utility of clinical information with the goal of improving documentation, quality, and efficiency of health care in the United States. To date, NLP has shown promise in automatically determining appropriate colonoscopy intervals and identifying cases of inflammatory bowel disease from electronic health records. The objectives of this review are to provide background on NLP and its associated terminology, to describe how NLP has been used thus far in the field of digestive diseases, and to identify its potential future uses. PMID:24858706

The identification of relevant predicates between co-occurring concepts in scientific literature databases like MEDLINE is crucial for using these sources for knowledge extraction, in order to obtain meaningful biomedical predications as subject-predicate-object triples. We consider the manually assigned MeSH indexing terms (main headings and subheadings) in MEDLINE records as a rich resource for extracting a broad range of domain knowledge. In this paper, we explore the combination of a clustering method for co-occurring concepts based on their related MeSH subheadings in MEDLINE with the use of SemRep, a natural language processing engine, which extracts predications from free text documents. As a result, we generated sets of clusters of co-occurring concepts and identified the most significant predicates for each cluster. The association of such predicates with the co-occurrences of the resulting clusters produces the list of predications, which were checked for relevance. PMID:26958228

Disclosure control of natural language information (DCNL), which we are trying to realize, is described. DCNL will be used for securing human communications over the internet, such as through blogs and social network services. Before sentences in the communications are disclosed, they are checked by DCNL, and any phrases that could reveal sensitive information are transformed or omitted so that they are no longer revealing. DCNL checks not only phrases that directly represent sensitive information but also those that indirectly suggest it. Combinations of phrases are also checked. DCNL automatically learns the knowledge of sensitive phrases and the suggestive relations between phrases by using co-occurrence analysis and Web retrieval. The users' burden is therefore minimized, i.e., they do not need to define many disclosure control rules. DCNL complements traditional access control in fields where reliability needs to be balanced with enjoyment and where object classes for access control cannot be predefined.
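
One plausible reading of the co-occurrence learning step, sketched below with invented counts and an assumed threshold, is to score how strongly a candidate phrase is associated with a known sensitive phrase (here via pointwise mutual information over retrieval counts) and to treat high-scoring phrases as suggestive; this is not DCNL's actual scoring function.

```python
# A minimal sketch (hypothetical counts and threshold, not DCNL's learning
# procedure) of flagging phrases that indirectly suggest sensitive information:
# if a phrase co-occurs with a sensitive phrase far more often than chance,
# treat it as suggestive and transform or omit it before disclosure.
import math

def pmi(joint_count, count_a, count_b, total_docs):
    """Pointwise mutual information between two phrases from document counts."""
    p_ab = joint_count / total_docs
    p_a, p_b = count_a / total_docs, count_b / total_docs
    return math.log2(p_ab / (p_a * p_b)) if p_ab > 0 else float("-inf")

SUGGESTIVE_THRESHOLD = 3.0  # assumed cutoff

# Hypothetical retrieval counts for ("nearest subway station", "home address").
score = pmi(joint_count=800, count_a=1_000, count_b=5_000, total_docs=1_000_000)
if score > SUGGESTIVE_THRESHOLD:
    print("'nearest subway station' suggests 'home address'; transform or omit it")
```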

In April 2012, the National Institutes of Health organized a two-day workshop entitled ‘Natural Language Processing: State of the Art, Future Directions and Applications for Enhancing Clinical Decision-Making’ (NLP-CDS). This report is a summary of the discussions during the second day of the workshop. Collectively, the workshop presenters and participants emphasized the need for unstructured clinical notes to be included in the decision making workflow and the need for individualized longitudinal data tracking. The workshop also discussed the need to: (1) combine evidence-based literature and patient records with machine-learning and prediction models; (2) provide trusted and reproducible clinical advice; (3) prioritize evidence and test results; and (4) engage healthcare professionals, caregivers, and patients. The overall consensus of the NLP-CDS workshop was that there are promising opportunities for NLP and CDS to deliver cognitive support for healthcare professionals, caregivers, and patients. PMID:23921193

This work integrates three related AI search techniques - constraint satisfaction, branch-and-bound and solution synthesis - and applies the result to semantic processing in natural language (NL). We summarize the approach as "Hunter-Gatherer": (1) branch-and-bound and constraint satisfaction allow us to "hunt down" non-optimal and impossible solutions and prune them from the search space; (2) solution synthesis methods then "gather" all optimal solutions while avoiding exponential complexity. Each of the three techniques is briefly described, as well as their extensions and combinations used in our system. We focus on the combination of solution synthesis and branch-and-bound methods, which has enabled near-linear-time processing in our applications. Finally, we illustrate how the use of our technique in a large-scale MT project allowed a drastic reduction in search space.
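
The "hunting" half of the approach can be pictured with a toy branch-and-bound over word-sense choices, as in the sketch below (invented scores, not the Hunter-Gatherer implementation): a partial assignment is pruned as soon as an optimistic bound shows it cannot beat the best complete solution found so far.

```python
# A minimal sketch of branch-and-bound pruning over word-sense assignments.
senses = {                      # hypothetical per-word sense scores
    "bank": {"river": 0.3, "finance": 0.9},
    "note": {"music": 0.4, "money": 0.8},
}
words = list(senses)
best = {"score": float("-inf"), "assignment": None}

def upper_bound(i, score):
    """Best score still reachable from word position i (optimistic bound)."""
    return score + sum(max(senses[w].values()) for w in words[i:])

def search(i=0, score=0.0, chosen=()):
    if upper_bound(i, score) <= best["score"]:
        return                                    # prune: cannot improve
    if i == len(words):
        best.update(score=score, assignment=chosen)
        return
    word = words[i]
    for sense, s in sorted(senses[word].items(), key=lambda kv: -kv[1]):
        search(i + 1, score + s, chosen + ((word, sense),))

search()
print(best)  # -> finance/money assignment with score 1.7
```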

Operation Waste Watch is a series of seven sequential learning units which addresses the subject of litter control and solid waste management. Each unit may be used in a variety of ways, depending on the needs and schedules of individual schools, and may be incorporated into various social studies, science, language arts, health, mathematics, and…

Background Wikipedia is a collaboratively edited encyclopedia. One of the most popular websites on the Internet, it is known to be a frequently used source of health care information by both professionals and the lay public. Objective This paper quantifies the production and consumption of Wikipedia’s medical content along 4 dimensions. First, we measured the amount of medical content in both articles and bytes and, second, the citations that supported that content. Third, we analyzed the medical readership against that of other health care websites, compared readership across Wikipedia’s natural language editions, and examined its relationship with disease prevalence. Fourth, we surveyed the quantity/characteristics of Wikipedia’s medical contributors, including year-over-year participation trends and editor demographics. Methods Using a well-defined categorization infrastructure, we identified medically pertinent English-language Wikipedia articles and links to their foreign language equivalents. With these, Wikipedia can be queried to produce metadata and full texts for entire article histories. Wikipedia also makes available hourly reports that aggregate reader traffic at per-article granularity. An online survey was used to determine the background of contributors. Standard mining and visualization techniques (eg, aggregation queries, cumulative distribution functions, and/or correlation metrics) were applied to each of these datasets. Analysis focused on year-end 2013, but historical data permitted some longitudinal analysis. Results Wikipedia’s medical content (at the end of 2013) was made up of more than 155,000 articles and 1 billion bytes of text across more than 255 languages. This content was supported by more than 950,000 references. Content was viewed more than 4.88 billion times in 2013. This makes it one of, if not the, most viewed medical resources globally. The core editor community numbered less than 300 and declined over the past 5 years. The members of this

Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web

Objective To extract drug indications from structured drug labels and represent the information using codes from standard medical terminologies. Materials and methods We used MetaMap and other publicly available resources to extract information from the indications section of drug labels. Drugs and indications were encoded by RxNorm and UMLS identifiers respectively. A sample was manually reviewed. We also compared the results with two independent information sources: National Drug File-Reference Terminology and the Semantic Medline project. Results A total of 6797 drug labels were processed, resulting in 19 473 unique drug–indication pairs. Manual review of 298 most frequently prescribed drugs by seven physicians showed a recall of 0.95 and precision of 0.77. Inter-rater agreement (Fleiss κ) was 0.713. The precision of the subset of results corroborated by Semantic Medline extractions increased to 0.93. Discussion Correlation of a patient's medical problems and drugs in an electronic health record has been used to improve data quality and reduce medication errors. Authoritative drug indication information is available from drug labels, but not in a format readily usable by computer applications. Our study shows that it is feasible to use publicly available natural language processing resources to extract drug indications from drug labels. The same method can be applied to other sections of the drug label, for example, adverse effects and contraindications. Conclusions It is feasible to use publicly available natural language processing tools to extract indication information from freely available drug labels. Named entity recognition sources (eg, MetaMap) provide reasonable recall. Combination with other data sources provides higher precision. PMID:23475786

Background The Institute of Medicine has identified patient safety as a key goal for health care in the United States. Detecting vaccine adverse events is an important public health activity that contributes to patient safety. Reports about adverse events following immunization (AEFI) from surveillance systems contain free-text components that can be analyzed using natural language processing. To extract Unified Medical Language System (UMLS) concepts from free text and classify AEFI reports based on concepts they contain, we first needed to clean the text by expanding abbreviations and shortcuts and correcting spelling errors. Our objective in this paper was to create a UMLS-based spelling error correction tool as a first step in the natural language processing (NLP) pipeline for AEFI reports. Methods We developed spell checking algorithms using open source tools. We used de-identified AEFI surveillance reports to create free-text data sets for analysis. After expansion of abbreviated clinical terms and shortcuts, we performed spelling correction in four steps: (1) error detection, (2) word list generation, (3) word list disambiguation and (4) error correction. We then measured the performance of the resulting spell checker by comparing it to manual correction. Results We used 12,056 words to train the spell checker and tested its performance on 8,131 words. During testing, sensitivity, specificity, and positive predictive value (PPV) for the spell checker were 74% (95% CI: 74–75), 100% (95% CI: 100–100), and 47% (95% CI: 46%–48%), respectively. Conclusion We created a prototype spell checker that can be used to process AEFI reports. We used the UMLS Specialist Lexicon as the primary source of dictionary terms and the WordNet lexicon as a secondary source. We used the UMLS as a domain-specific source of dictionary terms to compare potentially misspelled words in the corpus. The prototype sensitivity was comparable to currently available tools, but the
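
The four-step flow can be illustrated with a generic edit-distance spell checker, as in the sketch below; the toy lexicon and difflib-based candidate ranking are stand-ins for the authors' UMLS Specialist Lexicon and WordNet resources, not their actual tool.

```python
# A minimal sketch of the four-step correction flow: detect errors against a
# dictionary, generate candidate words, disambiguate, and correct.
import difflib

DICTIONARY = {"fever", "rash", "injection", "swelling", "site"}  # toy lexicon

def correct_token(token: str) -> str:
    if token in DICTIONARY:                      # step 1: error detection
        return token
    candidates = difflib.get_close_matches(      # step 2: word list generation
        token, DICTIONARY, n=3, cutoff=0.6)
    if not candidates:
        return token                             # leave unknown words alone
    # steps 3-4: disambiguate (here: highest string similarity) and correct
    return candidates[0]

report = "feverr and sweling at injction site"
print(" ".join(correct_token(t) for t in report.split()))
# -> "fever and swelling at injection site"
```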

In this study we show how complex creative relations can arise from fairly frequent semantic relations observed in everyday language. By doing this, we reflect on some key cognitive aspects of linguistic and general creativity. In our experimentation, we automated the process of solving a battery of Remote Associates Test tasks. By applying…

BioC is a new format and associated code libraries for sharing text and annotations. We have implemented BioC natural language preprocessing pipelines in two popular programming languages: C++ and Java. The current implementations interface with the well-known MedPost and Stanford natural language processing tool sets. The pipeline functionality includes sentence segmentation, tokenization, part-of-speech tagging, lemmatization and sentence parsing. These pipelines can be easily integrated along with other BioC programs into any BioC compliant text mining systems. As an application, we converted the NCBI disease corpus to BioC format, and the pipelines have successfully run on this corpus to demonstrate their functionality. Code and data can be downloaded from http://bioc.sourceforge.net. Database URL: http://bioc.sourceforge.net. PMID:24935050
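
The sketch below shows the same pipeline stages (sentence segmentation, tokenization, POS tagging, lemmatization) using NLTK as a stand-in for the MedPost and Stanford tool sets the BioC implementations wrap; it ignores BioC's classes and offset bookkeeping and only illustrates the shape of the processing.

```python
# A minimal pipeline sketch with NLTK, not the actual BioC C++/Java libraries.
# Requires the punkt, averaged_perceptron_tagger and wordnet NLTK data packages.
import nltk
from nltk.stem import WordNetLemmatizer

def annotate(text: str):
    lemmatizer = WordNetLemmatizer()
    annotations = []
    for sentence in nltk.sent_tokenize(text):        # sentence segmentation
        tokens = nltk.word_tokenize(sentence)        # tokenization
        for token, tag in nltk.pos_tag(tokens):      # POS tagging
            annotations.append({
                "text": token,
                "pos": tag,
                "lemma": lemmatizer.lemmatize(token.lower()),  # lemmatization
            })
    return annotations

print(annotate("BRCA1 mutations were detected. Tumors regressed."))
```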

Numerous studies over the past decade support the claim that infants are equipped with powerful statistical language learning mechanisms. The primary evidence for statistical language learning in word segmentation comes from studies using artificial languages, continuous streams of synthesized syllables that are highly simplified relative to real…

Discussion of Internet use for information searches on health-related topics focuses on a study that examined complexity and variability of natural language in using search terms that express the concept of electronic health (e-health). Highlights include precision of retrieved information; shift in terminology; and queries using the Pub Med…

This investigation matches the emerging techniques in computerized natural language processing against emerging needs for such techniques in the information field to evaluate and extend such techniques for future applications and to establish a basis and direction for further research toward these goals. An overview describes developments in the…

There is the question of whether learning differs when students speak versus type their responses when interacting with intelligent tutoring systems with natural language dialogues. Theoretical bases exist for three contrasting hypotheses. The "speech facilitation" hypothesis predicts that spoken input will "increase" learning, whereas the "text…

Summary Objectives We present a review of recent advances in clinical Natural Language Processing (NLP), with a focus on semantic analysis and key subtasks that support such analysis. Methods We conducted a literature review of clinical NLP research from 2008 to 2014, emphasizing recent publications (2012-2014), based on PubMed and ACL proceedings as well as relevant referenced publications from the included papers. Results Significant articles published within this time-span were included and are discussed from the perspective of semantic analysis. Three key clinical NLP subtasks that enable such analysis were identified: 1) developing more efficient methods for corpus creation (annotation and de-identification), 2) generating building blocks for extracting meaning (morphological, syntactic, and semantic subtasks), and 3) leveraging NLP for clinical utility (NLP applications and infrastructure for clinical use cases). Finally, we provide a reflection upon most recent developments and potential areas of future NLP development and applications. Conclusions There has been an increase of advances within key NLP subtasks that support semantic analysis. Performance of NLP semantic analysis is, in many cases, close to that of agreement between humans. The creation and release of corpora annotated with complex semantic information models has greatly supported the development of new tools and approaches. Research on non-English languages is continuously growing. NLP methods have sometimes been successfully employed in real-world clinical tasks. However, there is still a gap between the development of advanced resources and their utilization in clinical settings. A plethora of new clinical use cases are emerging due to established health care initiatives and additional patient-generated sources through the extensive use of social media and other devices. PMID:26293867

This curriculum guide contains materials necessary to implement the vocational English as a Second Language (ESL) program for Farm Equipment Operators developed by the Ceres Unified School District. The first two sections provide the plans and accompanying forms for needs assessment and recruitment/intake/enrollment. In the third section a…

The evolution of the faculty of language largely remains an enigma. In this essay, we ask why. Language's evolutionary analysis is complicated because it has no equivalent in any nonhuman species. There is also no consensus regarding the essential nature of the language "phenotype." According to the "Strong Minimalist Thesis," the key distinguishing feature of language (and what evolutionary theory must explain) is hierarchical syntactic structure. The faculty of language is likely to have emerged quite recently in evolutionary terms, some 70,000-100,000 years ago, and does not seem to have undergone modification since then, though individual languages do of course change over time, operating within this basic framework. The recent emergence of language and its stability are both consistent with the Strong Minimalist Thesis, which has at its core a single repeatable operation that takes exactly two syntactic elements a and b and assembles them to form the set {a, b}. PMID:25157536

Unconventional gas drilling (UGD) has enabled extraordinarily rapid growth in the extraction of natural gas. Despite frequently expressed public concern, human health studies have not kept pace. We investigated the association of proximity to UGD in the Marcellus Shale formation and perinatal outcomes in a retrospective cohort study of 15,451 live births in Southwest Pennsylvania from 2007–2010. Mothers were categorized into exposure quartiles based on inverse distance weighted (IDW) well count; least exposed mothers (first quartile) had an IDW well count less than 0.87 wells per mile, while the most exposed (fourth quartile) had 6.00 wells or greater per mile. Multivariate linear (birth weight) or logistical (small for gestational age (SGA) and prematurity) regression analyses, accounting for differences in maternal and child risk factors, were performed. There was no significant association of proximity and density of UGD with prematurity. Comparison of the most to least exposed, however, revealed lower birth weight (3323 ± 558 vs 3344 ± 544 g) and a higher incidence of SGA (6.5 vs 4.8%, respectively; odds ratio: 1.34; 95% confidence interval: 1.10–1.63). While the clinical significance of the differences in birth weight among the exposure groups is unclear, the present findings further emphasize the need for larger studies, in regio-specific fashion, with more precise characterization of exposure over an extended period of time to evaluate the potential public health significance of UGD. PMID:26039051

This application concerns systems and methods for compressing natural gas with an internal combustion engine. In a representative embodiment, a system for compressing a gas comprises a reciprocating internal combustion engine including at least one piston-cylinder assembly comprising a piston configured to travel in a cylinder and to compress gas in the cylinder in multiple compression stages. The system can further comprise a first pressure tank in fluid communication with the piston-cylinder assembly to receive compressed gas from the piston-cylinder assembly until the first pressure tank reaches a predetermined pressure, and a second pressure tank in fluid communication with the piston-cylinder assembly and the first pressure tank. The second pressure tank can be configured to receive compressed gas from the piston-cylinder assembly until the second pressure tank reaches a predetermined pressure. When the first and second pressure tanks have reached the predetermined pressures, the first pressure tank can be configured to supply gas to the piston-cylinder assembly, and the piston can be configured to compress the gas supplied by the first pressure tank such that the compressed gas flows into the second pressure tank.

Languages function as independent and distinct conventional systems, and so each language uses different words to label the same objects. This study investigated whether 2-year-old children recognize that speakers of their native language and speakers of a foreign language do not share the same knowledge. Two groups of children unfamiliar with Mandarin were tested: monolingual English-learning children (n=24) and bilingual children learning English and another language (n=24). An English speaker taught children the novel label fep. On English mutual exclusivity trials, the speaker asked for the referent of a novel label (wug) in the presence of the fep and a novel object. Both monolingual and bilingual children disambiguated the reference of the novel word using a mutual exclusivity strategy, choosing the novel object rather than the fep. On similar trials with a Mandarin speaker, children were asked to find the referent of a novel Mandarin label kuò. Monolinguals again chose the novel object rather than the object with the English label fep, even though the Mandarin speaker had no access to conventional English words. Bilinguals did not respond systematically to the Mandarin speaker, suggesting that they had enhanced understanding of the Mandarin speaker's ignorance of English words. The results indicate that monolingual children initially expect words to be conventionally shared across all speakers, native and foreign. Early bilingual experience facilitates children's discovery of the nature of foreign language words. PMID:24268905

The Iowa Flood Information System (IFIS) is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to flood inundation maps, real-time flood conditions, flood forecasts, flood-related data, information and interactive visualizations for communities in Iowa. The IFIS is designed for use by the general public, often people with no domain knowledge and limited general science background. To improve effective communication with such an audience, we have introduced a voice-enabled knowledge engine on flood related issues in IFIS. Instead of navigating within many features and interfaces of the information system and web-based sources, the system provides dynamic computations based on a collection of built-in data, analysis, and methods. The IFIS Knowledge Engine connects to real-time stream gauges, in-house data sources, analysis and visualization tools to answer natural language questions. Our goal is to systematize data and modeling results on flood related issues in Iowa and to provide an interface for definitive answers to factual queries. The goal of the knowledge engine is to make all flood related knowledge in Iowa easily accessible to everyone and to support voice-enabled natural language input. We aim to integrate and curate all flood related data, implement analytical and visualization tools, and make it possible to compute answers from questions. The IFIS explicitly implements analytical methods and models, as algorithms, and curates all flood related data and resources so that all these resources are computable. The IFIS Knowledge Engine computes the answer by deriving it from its computational knowledge base. The knowledge engine processes the statement, accesses the data warehouse, runs complex database queries on the server side, and returns outputs in various formats. This presentation provides an overview of the IFIS Knowledge Engine, its unique information interface and functionality as an educational tool, and discusses the future plans

Computational natural language understanding and generation have been a goal of artificial intelligence since McCarthy, Minsky, Rochester and Shannon first proposed to spend the summer of 1956 studying this and related problems. Although statistical approaches dominate current natural language applications, two current research trends bring…

.7%), Skin (11.3%), and Genitourinary (10.5%). Additional subcategories of queries about specific cancer types varied, depending on user input. Queries that were not specific to a cancer type were also tracked and categorized. Conclusions Natural-language searching affords users the opportunity to fully express their information needs and can aid users naïve to the content and vocabulary. The specific queries analyzed for this study reflect news and research studies reported during the study dates and would surely change with different study dates. Analyzing queries from search engines represents one way of knowing what kinds of content to provide to users of a given Web site. Users ask questions using whole sentences and keywords, often misspelling words. Providing the option for natural-language searching does not obviate the need for good information architecture, usability engineering, and user testing in order to optimize user experience. PMID:14713659

Objective Natural language processing (NLP) tasks are commonly decomposed into subtasks, chained together to form processing pipelines. The residual error produced in these subtasks propagates, adversely affecting the end objectives. Limited availability of annotated clinical data remains a barrier to reaching state-of-the-art operating characteristics using statistically based NLP tools in the clinical domain. Here we explore the unique linguistic constructions of clinical texts and demonstrate the loss in operating characteristics when out-of-the-box part-of-speech (POS) tagging tools are applied to the clinical domain. We test a domain adaptation approach integrating a novel lexical-generation probability rule used in a transformation-based learner to boost POS performance on clinical narratives. Methods Two target corpora from independent healthcare institutions were constructed from high frequency clinical narratives. Four leading POS taggers with their out-of-the-box models trained from general English and biomedical abstracts were evaluated against these clinical corpora. A high performing domain adaptation method, Easy Adapt, was compared to our newly proposed method ClinAdapt. Results The evaluated POS taggers drop in accuracy by 8.5–15% when tested on clinical narratives. The highest performing tagger reports an accuracy of 88.6%. Domain adaptation with Easy Adapt reports accuracies of 88.3–91.0% on clinical texts. ClinAdapt reports 93.2–93.9%. Conclusions ClinAdapt successfully boosts POS tagging performance through domain adaptation requiring a modest amount of annotated clinical data. Improving the performance of critical NLP subtasks is expected to reduce pipeline error propagation leading to better overall results on complex processing tasks. PMID:23486109
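
For context, the Easy Adapt baseline mentioned above rests on a simple feature-augmentation trick, sketched below with hypothetical features; this is not ClinAdapt, the authors' transformation-based method.

```python
# A minimal sketch of the Easy Adapt ("frustratingly easy") feature
# augmentation: each feature vector is copied into a shared block plus a
# domain-specific block, so one model can learn both general-English and
# clinical-specific tagging behaviour. Feature names here are invented.
def easy_adapt(features: dict, domain: str, domains=("general", "clinical")):
    """Duplicate features into a 'shared' namespace and per-domain namespaces."""
    augmented = {f"shared::{k}": v for k, v in features.items()}
    for d in domains:
        for k, v in features.items():
            augmented[f"{d}::{k}"] = v if d == domain else 0.0
    return augmented

print(easy_adapt({"word=pt": 1.0, "prev_tag=NN": 1.0}, domain="clinical"))
```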

Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework to train statistical models without using expensive fully annotated data. In particular, the input of our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models, conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework shows superior performance to two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15% being achieved in F-measure. PMID:25152899

Explosion of disaster health information results in information overload among response professionals. The objective of this project was to determine the feasibility of applying semantic naturallanguage processing (NLP) technology to addressing this overload. The project characterizes concepts and relationships commonly used in disaster health-related documents on influenza pandemics, as the basis for adapting an existing semantic summarizer to the domain. Methods include human review and semantic NLP analysis of a set of relevant documents. This is followed by a pilot-test in which two information specialists use the adapted application for a realistic information seeking task. According to the results, the ontology of influenza epidemics management can be described via a manageable number of semantic relationships that involve concepts from a limited number of semantic types. Test users demonstrate several ways to engage with the application to obtain useful information. This suggests that existing semantic NLP algorithms can be adapted to support information summarization and visualization in influenza epidemics and other disaster health areas. However, additional research is needed in the areas of terminology development (as many relevant relationships and terms are not part of existing standardized vocabularies), NLP, and user interface design. PMID:24311971

A review of published work in clinical naturallanguage processing (NLP) may suggest that the negation detection task has been "solved." This work proposes that an optimizable solution does not equal a generalizable solution. We introduce a new machine learning-based Polarity Module for detecting negation in clinical text, and extensively compare its performance across domains. Using four manually annotated corpora of clinical text, we show that negation detection performance suffers when there is no in-domain development (for manual methods) or training data (for machine learning-based methods). Various factors (e.g., annotation guidelines, named entity characteristics, the amount of data, and lexical and syntactic context) play a role in making generalizability difficult, but none completely explains the phenomenon. Furthermore, generalizability remains challenging because it is unclear whether to use a single source for accurate data, combine all sources into a single model, or apply domain adaptation methods. The most reliable means to improve negation detection is to manually annotate in-domain training data (or, perhaps, manually modify rules); this is a strategy for optimizing performance, rather than generalizing it. These results suggest a direction for future work in domain-adaptive and task-adaptive methods for clinical NLP. PMID:25393544
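To make the negation detection task concrete, the following minimal sketch implements a NegEx-style rule: a concept is marked negated if a negation cue appears within a small token window before it. The cue list, window size, and example sentences are illustrative only and are far simpler than the Polarity Module described above.

import re

NEGATION_CUES = {"no", "not", "denies", "without", "absent"}

def is_negated(text, concept, window=5):
    """Return True if a negation cue occurs within `window` tokens before `concept`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    target = concept.lower().split()
    for i in range(len(tokens) - len(target) + 1):
        if tokens[i:i + len(target)] == target:
            if NEGATION_CUES & set(tokens[max(0, i - window):i]):
                return True
    return False

print(is_negated("Patient denies fever and chills.", "fever"))       # True
print(is_negated("Patient reports fever for three days.", "fever"))  # False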

There is an extensive body of work on Intelligent Tutoring Systems: computer environments for education, teaching and training that adapt to the needs of the individual learner. Work on personalisation and adaptivity has included research into allowing the student user to enhance the system's adaptivity by improving the accuracy of the underlying learner model. Open Learner Modelling, where the system's model of the user's knowledge is revealed to the user, has been proposed to support student reflection on their learning. Increased accuracy of the learner model can be obtained by the student and system jointly negotiating the learner model. We present the initial investigations into a system to allow people to negotiate the model of their understanding of a topic in naturallanguage. This paper discusses the development and capabilities of both conversational agents (or chatbots) and Intelligent Tutoring Systems, in particular Open Learner Modelling. We describe a Wizard-of-Oz experiment to investigate the feasibility of using a chatbot to support negotiation, and conclude that a fusion of the two fields can lead to developing negotiation techniques for chatbots and the enhancement of the Open Learner Model. This technology, if successful, could have widespread application in schools, universities and other training scenarios.

One of the most significant recent advances in health information systems has been the shift from paper to electronic documents. While research on automatic text and image processing has taken separate paths, there is a growing need for joint efforts, particularly for electronic health records and biomedical literature databases. This work aims at comparing text-based versus image-based access to multimodal medical documents using state-of-the-art methods of processing text and image components. A collection of 180 medical documents containing an image accompanied by a short text describing it was divided into training and test sets. Content-based image analysis and naturallanguage processing techniques are applied individually and combined for multimodal document analysis. The evaluation consists of an indexing task and a retrieval task based on the “gold standard” codes manually assigned to corpus documents. The performance of text-based and image-based access, as well as combined document features, is compared. Image analysis proves more adequate for both the indexing and retrieval of the images. In the indexing task, multimodal analysis outperforms both independent image and text analysis. This experiment shows that text describing images can be usefully analyzed in the framework of a hybrid text/image retrieval system. PMID:19633735

Knowledge of medication indications is significant for automatic applications aimed at improving patient safety, such as computerized physician order entry and clinical decision support systems. The Electronic Health Record (EHR) contains pertinent information related to patient safety such as information related to appropriate prescribing. However, the reasons for medication prescriptions are usually not explicitly documented in the patient record. This paper describes a method that determines the reasons for medication uses based on information occurring in outpatient notes. The method utilizes drug-indication knowledge that we acquired, and naturallanguage processing. Evaluation showed the method obtained a sensitivity of 62.8%, specificity of 93.9%, precision of 90% and F-measure of 73.9%. This pilot study demonstrated that linking external drug indication knowledge to the EHR for determining the reasons for medication use was promising, but also revealed some challenges. Future work will focus on increasing the accuracy and coverage of the indication knowledge and evaluating its performance using a much larger set of drugs frequently used in the outpatient population. PMID:22195134

Objective: We describe the development and evaluation of a system that uses machine learning and naturallanguage processing techniques to identify potential candidates for surgical intervention for drug-resistant pediatric epilepsy. The data are comprised of free-text clinical notes extracted from the electronic health record (EHR). Both known clinical outcomes from the EHR and manual chart annotations provide gold standards for the patient’s status. The following hypotheses are then tested: 1) machine learning methods can identify epilepsy surgery candidates as well as physicians do and 2) machine learning methods can identify candidates earlier than physicians do. These hypotheses are tested by systematically evaluating the effects of the data source, amount of training data, class balance, classification algorithm, and feature set on classifier performance. The results support both hypotheses, with F-measures ranging from 0.71 to 0.82. The feature set, classification algorithm, amount of training data, class balance, and gold standard all significantly affected classification performance. It was further observed that classification performance was better than the highest agreement between two annotators, even at one year before documented surgery referral. The results demonstrate that such machine learning methods can contribute to predicting pediatric epilepsy surgery candidates and reducing lag time to surgery referral. PMID:27257386
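A minimal sketch of the general setup described above, under the assumption of a bag-of-words representation and a linear classifier: free-text notes are vectorized and fed to a model that predicts candidacy. The toy notes, labels, and query are invented; the study's actual feature sets, algorithms, and gold standards are richer.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example notes; 1 = potential surgery candidate, 0 = not a candidate.
notes = [
    "intractable seizures despite two antiepileptic drugs, MRI shows focal lesion",
    "seizure free on levetiracetam, normal imaging",
    "daily seizures refractory to medication, presurgical evaluation discussed",
    "well controlled epilepsy, no medication changes",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(notes, labels)
print(clf.predict(["refractory seizures, considering presurgical workup"]))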

The dissemination and evaluation of evidence-based behavioral treatments for substance abuse problems rely on the evaluation of counselor interventions. In Motivational Interviewing (MI), a treatment that directs the therapist to utilize a particular linguistic style, proficiency is assessed via behavioral coding, a time-consuming, nontechnological approach. Naturallanguage processing techniques have the potential to scale up the evaluation of behavioral treatments such as MI. We present a novel computational approach to assessing components of MI, focusing on one specific counselor behavior, reflections, which are believed to be a critical MI ingredient. Using 57 sessions from 3 MI clinical trials, we automatically detected counselor reflections in a maximum entropy Markov modeling framework using the raw linguistic data derived from session transcripts. We achieved 93% recall, 90% specificity, and 73% precision. Results provide insight into the linguistic information used by coders to make ratings and demonstrate the feasibility of new computational approaches to scaling up the evaluation of behavioral treatments. (PsycINFO Database Record) PMID:26784286

With an increase in the prevalence of patients having multiple medical conditions, along with the increasing number of medical information sources, an intelligent approach is required to integrate the answers to physicians' patient-related questions into clinical practice in the shortest, most specific way possible. Cochrane Scientific Reviews are currently considered to be the “gold standard” for evidence-based medicine (EBM), because of their well-defined systematic approach to assessing the available medical information. In order to develop semantic approaches for enabling the reuse of these Reviews, a system for producing executable knowledge was designed using a naturallanguage processing (NLP) system we developed (BioMedLEE), and semantic processing techniques. Though BioMedLEE was not designed for or trained over the Cochrane Reviews, this study shows that disease, therapy and drug concepts can be extracted and correlated with an overall recall of 80.3%, coding precision of 94.1%, and concept-concept relationship precision of 87.3%. PMID:17238302

The successful adoption by clinicians of evidence-based clinical practice guidelines (CPGs) contained in clinical information systems requires efficient translation of free-text guidelines into computable formats. Naturallanguage processing (NLP) has the potential to improve the efficiency of such translation. However, it is laborious to develop NLP to structure free-text CPGs using existing formal knowledge representations (KR). In response to this challenge, this vision paper discusses the value and feasibility of supporting symbiosis in text-based knowledge acquisition (KA) and KR. We compare two ontologies: (1) an ontology manually created by domain experts for CPG eligibility criteria and (2) an upper-level ontology derived from a semantic pattern-based approach for automatic KA from CPG eligibility criteria text. Then we discuss the strengths and limitations of interweaving KA and NLP for KR purposes and important considerations for achieving the symbiosis of KR and NLP for structuring CPGs to achieve evidence-based clinical practice. PMID:24943582

The order of ‘noun and adposition’ is an important parameter of word ordering rules in the world’s languages. Seven other parameters, ‘adverb and verb’ among them, depend strongly on ‘noun and adposition’. Japanese, as well as Korean, Tamil, and several other languages, seems to have a stable structure of word ordering rules, while Thai and other languages, whose word ordering rules are the opposite of Japanese, are also stable in structure. It therefore seems that each language in the world fluctuates between these two structures, like the Ising model on a finite lattice.

Increasing temperatures are projected to have a positive effect on the length of Alaska's tourism season, but the natural attractions that tourism relies on, such as glaciers, wildlife, fish, or other natural resources, may change. In order to continue to derive benefits from these resources, nature-based tour operators may have to adapt to these changes, and communication is an essential, but poorly understood, component of the climate change adaptation process. The goal of this study was to determine how to provide useful climate change information to nature-based tour operators by answering the following questions: 1. What environmental changes do nature-based tour operators perceive? 2. How are nature-based tour operators responding to climate and environmental change? 3. What climate change information do nature-based tour operators need? To answer these questions, twenty-four nature-based tour operators representing 20 different small and medium sized businesses in Juneau, Alaska were interviewed. The results show that many of Juneau's nature-based tour operators are observing, responding to, and in some cases, actively planning for further changes in the environment. The types of responses tended to vary depending on the participants' certainty in climate change and the perceived risks to their organization. Using these two factors, this study proposes a framework to classify climate change responses for the purpose of generating meaningful information and communication processes that promote adaptation and build adaptive capacity. During the course of the study, several other valuable lessons were learned about communicating about adaptation. The results of this study demonstrate that science communication research has an important place in the practice of promoting and fostering climate change adaptation. While the focus of this study was tour operators, the lessons learned may be valuable to other organizations striving to engage unique groups in climate

In the field of human cognition, language plays a special role that is connected directly to thinking and mental development (e.g., Vygotsky, "1938"). Thanks to "verbal thought", language allows humans to go beyond the limits of immediately perceived information, to form concepts and solve complex problems (Luria, "1975"). So, it appears language…

Although it is often claimed that verbal abilities are relatively well maintained across the adult lifespan, certain aspects of language production have been found to exhibit cross-sectional differences and longitudinal declines. In the current project age-related differences in controlled and naturalistic elicited language production tasks were…

Computerized indexing and retrieval of medical records is increasingly important; but the use of naturallanguage versus coded languages (SNOP, SNOMED) for this purpose remains controversial. In an effort to develop search strategies for naturallanguage text, the authors examined the anatomic diagnosis reports by computer for 7000 consecutive autopsy subjects spanning a 13-year period at The Johns Hopkins Hospital. There were 923,657 words, 11,642 of them distinct. The authors observed an average of 1052 keystrokes, 28 lines, and 131 words per autopsy report, with an average 4.6 words per line and 7.0 letters per word. The entire text file represented 921 hours of secretarial effort. Words ranged in frequency from 33,959 occurrences of "and" to one occurrence for each of 3398 different words. Searches for rare diseases with unique names or for representative examples of common diseases were most readily performed with the use of computer-printed key word in context (KWIC) books. For uncommon diseases designated by commonly used terms (such as "cystic fibrosis"), needs were best served by a computerized search for logical combinations of key words. In an unbalanced word distribution, each conjunction (logical and) search should be performed in ascending order of word frequency; but each alternation (logical inclusive or) search should be performed in descending order of word frequency. Naturallanguage text searches will assume a larger role in medical records analysis as the labor-intensive procedure of translation into a coded language becomes more costly, compared with the computer-intensive procedure of text searching. PMID:6546837
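The ordering heuristic stated above, processing a conjunction (logical AND) search in ascending order of word frequency, can be sketched with a small inverted index: starting from the rarest word shrinks the candidate set as quickly as possible. The tiny "reports" below are invented stand-ins for autopsy diagnosis text; the descending-order rule for alternation (OR) searches follows the same idea in reverse.

from collections import defaultdict

reports = {
    1: "cystic fibrosis of the pancreas with acute bronchitis",
    2: "chronic passive congestion of the liver and lungs",
    3: "fibrosis of the lung with chronic inflammation",
}

# Build an inverted index: word -> set of report ids containing it.
index = defaultdict(set)
for doc_id, text in reports.items():
    for word in text.split():
        index[word].add(doc_id)

def and_search(*words):
    # Ascending document frequency: intersect starting from the rarest word.
    ordered = sorted(words, key=lambda w: len(index[w]))
    result = set(index[ordered[0]])
    for w in ordered[1:]:
        result &= index[w]
    return result

print(and_search("chronic", "fibrosis"))  # {3}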

Temporal information is crucial in electronic medical records and biomedical information systems. Processing temporal information in medical narrative data is a very challenging area. It lies at the intersection of temporal representation and reasoning (TRR) in artificial intelligence and medical naturallanguage processing (MLP). Some fundamental concepts and important issues in relation to TRR have previously been discussed, mainly in the context of processing structured data in biomedical informatics; however, it is important that these concepts be re-examined in the context of processing narrative data using MLP. Theoretical and methodological TRR studies in biomedical informatics can be classified into three main categories: category 1 applies theories and models from temporal reasoning in AI; category 2 defines frameworks that meet needs from clinical applications; category 3 resolves issues such as temporal granularity and uncertainty. Currently, most MLP systems are not designed with a formal representation of time, and their ability to reason about temporal relations among medical events is limited. Previous work in processing time with clinical narrative data includes processing time in clinical reports, modeling textual temporal expressions in clinical databases, processing time in clinical guidelines, and building time standards for data exchange and integration. In addition to common problems in MLP, there are challenges specific to TRR in medical text, which occur at each level of linguistic structure and analysis. Despite advances in temporal reasoning in biomedical informatics, processing time in medical text deserves more attention. Besides the need for more research in temporal granularity, fuzzy time, temporal contradiction, intermittent events and uncertainty, broad areas for future research include enhancing functions of current MLP systems on processing temporal information, incorporating medical knowledge into temporal reasoning systems

Large volumes of data are continuously generated from clinical notes and diagnostic studies catalogued in electronic health records (EHRs). Echocardiography is one of the most commonly ordered diagnostic tests in cardiology. This study sought to explore the feasibility and reliability of using naturallanguage processing (NLP) for large-scale and targeted extraction of multiple data elements from echocardiography reports. An NLP tool, EchoInfer, was developed to automatically extract data pertaining to cardiovascular structure and function from heterogeneously formatted echocardiographic data sources. EchoInfer was applied to echocardiography reports (2004 to 2013) available from 3 different on-going clinical research projects. EchoInfer analyzed 15,116 echocardiography reports from 1684 patients, and extracted 59 quantitative and 21 qualitative data elements per report. EchoInfer achieved a precision of 94.06%, a recall of 92.21%, and an F1-score of 93.12% across all 80 data elements in 50 reports. Physician review of 400 reports demonstrated that EchoInfer achieved a recall of 92-99.9% and a precision of >97% in four data elements, including three quantitative and one qualitative data element. Failure of EchoInfer to correctly identify or reject reported parameters was primarily related to non-standardized reporting of echocardiography data. EchoInfer provides a powerful and reliable NLP-based approach for the large-scale, targeted extraction of information from heterogeneous data sources. The use of EchoInfer may have implications for the clinical management and research analysis of patients undergoing echocardiographic evaluation. PMID:27124000
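As a simplified illustration of rule-based extraction from echocardiography text (not EchoInfer itself), the sketch below pulls a few data elements from a toy report with regular expressions. The report text, element names, and patterns are invented for illustration.

import re

report = """
Left ventricular ejection fraction is estimated at 55-60%.
Mild mitral regurgitation. Aortic root 3.4 cm.
"""

patterns = {
    "lvef_percent": r"ejection fraction[^0-9]*(\d+(?:-\d+)?)\s*%",
    "aortic_root_cm": r"aortic root\s+(\d+(?:\.\d+)?)\s*cm",
    "mitral_regurgitation": r"(trace|mild|moderate|severe) mitral regurgitation",
}

extracted = {}
for name, pattern in patterns.items():
    match = re.search(pattern, report, re.IGNORECASE)
    extracted[name] = match.group(1) if match else None

print(extracted)
# {'lvef_percent': '55-60', 'aortic_root_cm': '3.4', 'mitral_regurgitation': 'Mild'}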

Critical values in anatomic pathology are rare occurrences and difficult to define with precision. Nevertheless, accrediting institutions require effective and timely communication of all critical values generated by clinical and anatomic laboratories. Provisional gating criteria for potentially critical anatomic diagnoses have been proposed, with some success in their implementation reported in the literature. Ensuring effective communication is challenging, however, making the case for programmatic implementation of a turnkey-style integrated information technology solution. To address this need, we developed a generically deployable laboratory information system-based tool, using a tiered naturallanguage processing predicate calculus inference engine to identify qualifying cases that meet criteria for critical diagnoses but lack an indication in the electronic medical record for an appropriate clinical discussion with the ordering physician of record. Using this tool, we identified an initial cohort of 13,790 cases over a 49-month period, which were further explored by reviewing the available electronic medical record for each patient. Of these cases, 35 (0.3%) were judged to require intervention in the form of direct communication between the attending pathologist and the clinical physician of record. In 8 of the 35 cases, this intervention resulted in the conveyance of new information to the requesting physician and/or a change in the patient's clinical plan. The very low percentage of such cases (0.058%) illustrates their rarity in daily practice, making it unlikely that manual identification/notification approaches alone can reliably manage them. The automated turnkey system was useful in avoiding missed handoffs of significant, clinically actionable diagnoses. PMID:22343338
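A minimal sketch of the gating idea described above, not the production inference engine: flag a case when its diagnosis text matches a critical pattern and the record contains no documented clinician communication. The patterns, case records, and field names are invented for illustration.

import re

# Hypothetical gating criteria for potentially critical anatomic diagnoses.
CRITICAL_PATTERNS = [r"unexpected malignan", r"crescentic glomerulonephritis"]

def needs_intervention(case):
    diagnosis = case["diagnosis"].lower()
    critical = any(re.search(p, diagnosis) for p in CRITICAL_PATTERNS)
    communicated = case.get("communication_note", False)
    return critical and not communicated

cases = [
    {"id": "S-101", "diagnosis": "Unexpected malignancy in hernia sac", "communication_note": False},
    {"id": "S-102", "diagnosis": "Benign intradermal nevus", "communication_note": False},
]
print([c["id"] for c in cases if needs_intervention(c)])  # ['S-101']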

Information acquisition, the gathering and interpretation of sensory information, is a basic function of mobile organisms. We describe a new method for measuring this ability in humans, using free-recall responses to sensory stimuli which are scored objectively using a "wisdom of crowds" approach. As an example, we demonstrate this metric using perception of video stimuli. Immediately after viewing a 30 s video clip, subjects responded to a prompt to give a short description of the clip in naturallanguage. These responses were scored automatically by comparison to a dataset of responses to the same clip by normally-sighted viewers (the crowd). In this case, the normative dataset consisted of responses to 200 clips by 60 subjects who were stratified by age (range 22 to 85 y) and viewed the clips in the lab, for 2,400 responses, and by 99 crowdsourced participants (age range 20 to 66 y) who viewed clips in their Web browser, for 4,000 responses. We compared different algorithms for computing these similarities and found that a simple count of the words in common had the best performance. It correctly matched 75% of the lab-sourced and 95% of crowdsourced responses to their corresponding clips. We validated the measure by showing that when the amount of information in the clip was degraded using defocus lenses, the shared word score decreased across the five predetermined visual-acuity levels, demonstrating a dose-response effect (N = 15). This approach, of scoring open-ended immediate free recall of the stimulus, is applicable not only to video, but also to other situations where a measure of the information that is successfully acquired is desirable. Information acquired will be affected by stimulus quality, sensory ability, and cognitive processes, so our metric can be used to assess each of these components when the others are controlled. PMID:24695546
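The shared-word scoring idea can be sketched in a few lines: a new free-recall response is matched to the clip whose crowd responses share the most words with it. The clips and responses below are invented stand-ins for the normative dataset, and real scoring would normalize case, stop words, and response length.

def shared_words(a, b):
    return len(set(a.lower().split()) & set(b.lower().split()))

# Hypothetical crowd responses keyed by clip identifier.
crowd = {
    "clip_01": ["a dog chases a ball in the park", "dog running after a ball"],
    "clip_02": ["two people cooking in a kitchen", "a man and woman cook dinner"],
}

def best_clip(response):
    scores = {clip: max(shared_words(response, r) for r in responses)
              for clip, responses in crowd.items()}
    return max(scores, key=scores.get), scores

print(best_clip("a dog runs after a ball"))  # ('clip_01', {...})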

The potential of Model-Based Systems Engineering (MBSE) using the Architecture Analysis and Design Language (AADL) applied to space systems will be described. AADL modeling is applicable to real-time embedded systems, the types of systems NASA builds. A case study with the Juno mission to Jupiter showcases how this work would enable future missions to benefit from using these models throughout their life cycle, from design to flight operations.

This report documents work performed in Phase I of the project entitled: ''Technologies to Enhance Operation of the Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report describes a number of potential enhancements to the existing natural gas compression infrastructure that have been identified and qualitatively demonstrated in tests on three different integral engine/compressors in natural gas transmission service.

This report documents work performed in Phase I of the project entitled: ''Technologies to Enhance Operation of the Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report describes a number of potential enhancements to the existing natural gas compression infrastructure that have been identified and tested on four different integral engine/compressors in natural gas transmission service.

The 100-kW photovoltaic power system at Natural Bridges National Monument in southwestern Utah has been in operation since May 1980. A comparison of system simulation with actual operation has been performed, good agreement has been found, and results are presented. In addition, conservation measures and their benefits are described. Operating experience with the system is presented, including measured component performance of the arrays, batteries, inverters, and system overhead loads.

This report documents work performed in the fourth quarter of the project entitled: ''Technologies to Enhance Operation of the Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report describes the following work: second field test; test data analysis for the first field test; operational optimization plans.

In view of inconsistent findings regarding bilingual advantages in executive functions (EF), we reviewed the literature to determine whether bilinguals' different language usage causes measurable changes in the shifting aspects of EF. By drawing on the theoretical framework of the adaptive control hypothesis, which postulates a critical link between bilinguals' varying demands on language control and adaptive cognitive control (Green and Abutalebi, 2013), we examined three factors that characterize bilinguals' language-switching experience: (a) the interactional context of conversational exchanges, (b) frequency of language switching, and (c) typology of code-switching. We also examined whether methodological variations in previous task-switching studies modulate task-specific demands on control processing and lead to inconsistencies in the literature. Our review demonstrates that not only methodological rigor but also a more finely grained, theory-based approach will be required to understand the cognitive consequences of bilinguals' varied linguistic practices in shifting EF. PMID:27199800

"Interest" is a widely used term not only in language education but also in our everyday life. However, very little attempt has been made to investigate the nature of "interest" in language teaching and learning. This paper, using a definition of interest proposed in the field of educational psychology, reports on the findings of a study conducted…

A formalized model for constructing the syntactic structure of sentences in a naturallanguage is presented. Based on this model, an algorithm using neural networks was developed, trained on data from the Russian National Corpus and a set of parameters extracted from those data. The resulting accuracy is reported, along with the accuracy that could theoretically be achieved with these parameters.

We built a naturallanguage processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…

Two methods of analysis, logical and psychological (or, loosely, aesthetic and functional), are used to investigate the many kinds of languages man uses to communicate, the ways in which these languages operate, and the reasons for communication failures. Based on a discussion of the nature of symbols, since most languages of communication draw…

The structure, complexity, and peculiarities of the English language are examined in this book, which begins with a discussion of the nature of language. Chapters are devoted to (1) naming--"Language as Answer to a Need"; (2) grammar--"Language as Economy"; (3) words--"Language as the Finding of Minds"; (4) etymology--"Language to Stretch Brains…

The image generator function and author language software support for the CHARGE (Color Halftone Area Graphics Environment) Interactive Graphics System are described. Designed initially for use in computer-assisted instruction (CAI) systems, the CHARGE Interactive Graphics System can provide graphic displays for various applications including…

Written nearly 40 years ago, Peter Medway's "Finding a Language" continues to be an arresting read, which offers a powerful vision of what might be possible in education. In this brief introduction, I set the work in context, referring to ideas that Pete engaged with and recalling a little of the times.

It is justified to assume that part of our genetic endowment contributes to our language skills, yet it is impossible to tell at this moment exactly how genes affect the language faculty. We complement experimental biological studies by an in silico approach in that we simulate the evolution of neuronal networks under selection for language-related skills. At the heart of this project is the Evolutionary Neurogenetic Algorithm (ENGA) that is deliberately biomimetic. The design of the system was inspired by important biological phenomena such as brain ontogenesis, neuron morphologies, and indirect genetic encoding. Neuronal networks were selected and were allowed to reproduce as a function of their performance in the given task. The selected neuronal networks in all scenarios were able to solve the communication problem they had to face. The most striking feature of the model is that it works with highly indirect genetic encoding--just as brains do.
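In the spirit of the selection-and-reproduction loop described above, the deliberately tiny sketch below evolves a bit-string "genome" toward a target signalling pattern: the best performers are selected and reproduce with mutation. It is a generic genetic algorithm for illustration only, far simpler than ENGA's indirect encoding and developmental model; the target pattern and parameters are invented.

import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # illustrative stand-in for a communication task

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # select on task performance
    population = [mutate(random.choice(parents))  # reproduce with mutation
                  for _ in range(20)]

print(fitness(max(population, key=fitness)), "of", len(TARGET))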

Direct utilization of hydrocarbons in low temperature solid oxide fuel cells is of growing interest in the landscape of alternative energy technologies. Here, we report on performance of self-supported micro-solid oxide fuel cells (μSOFCs) with ruthenium (Ru) nano-porous thin film anodes operating in natural gas and methane. The μSOFCs consist of 8 mol% yttria-stabilized zirconia thin film electrolytes, porous platinum cathodes and porous Ru anodes, and were tested with dry natural gas and methane as fuels and air as the oxidant. At 500 °C, comparable power densities of 410 mW cm-2 and 440 mW cm-2 were obtained with dry natural gas and methane, respectively. In weakly humidified natural gas, open circuit voltage of 0.95 V at 530 °C with peak power density of 800 mW cm-2 was realized. The μSOFC was continuously operated at constant voltage of 0.7 V with methane, where quasi-periodic oscillatory behavior of the performance was observed. Through post-operation XPS studies it was found that the oxidation state of Ru anode surfaces significantly differs depending on the fuel used, oxidation being enhanced with methane or natural gas. The nature of the oscillation is discussed based on the transition in surface oxygen coverage states and electro-catalytic activity of Ru anodes.

The effects of contriving motivating operations (MOs) and script fading on the acquisition of the mand "Where's [object]?" were evaluated in 2 boys with language delays. During each session, trials were alternated in which high-preference items were present (abolishing operation [AO] trials) or missing (establishing operation [EO] trials) from…

Recent research has produced evidence to suggest a strong reciprocal link between school context-specific language constructions that reflect a school's vision and schoolwide pedagogy, and the way that meaning making occurs, and a school's culture is characterized. This research was conducted within three diverse settings: one school in…

This paper discusses in some detail the procedural areas of reconstruction and automatic processing used by the Classroom Interaction Project of the University of Missouri's Center for Research in Social Behavior in the analysis of classroom language. First discussed is the process of reconstruction, here defined as the "process of adding to…

Purpose: This study examined the ability of children with language impairment (LI) to dissemble (hide) emotional reactions when socially appropriate to do so. Method: Twenty-two children with LI and their typically developing peers (7;1-10;11 [years;months]) participated in two tasks. First, participants were presented with hypothetical scenarios…

Background The prevalence and value of patient-generated health text are increasing, but processing such text remains problematic. Although existing biomedical naturallanguage processing (NLP) tools are appealing, most were developed to process clinician- or researcher-generated text, such as clinical notes or journal articles. In addition to being constructed for different types of text, other challenges of using existing NLP include constantly changing technologies, source vocabularies, and characteristics of text. These continuously evolving challenges warrant the need for applying low-cost systematic assessment. However, the primarily accepted evaluation method in NLP, manual annotation, requires tremendous effort and time. Objective The primary objective of this study is to explore an alternative approach—using low-cost, automated methods to detect failures (eg, incorrect boundaries, missed terms, mismapped concepts) when processing patient-generated text with existing biomedical NLP tools. We first characterize common failures that NLP tools can make in processing online community text. We then demonstrate the feasibility of our automated approach in detecting these common failures using one of the most popular biomedical NLP tools, MetaMap. Methods Using 9657 posts from an online cancer community, we explored our automated failure detection approach in two steps: (1) to characterize the failure types, we first manually reviewed MetaMap’s commonly occurring failures, grouped the inaccurate mappings into failure types, and then identified causes of the failures through iterative rounds of manual review using open coding, and (2) to automatically detect these failure types, we then explored combinations of existing NLP techniques and dictionary-based matching for each failure cause. Finally, we manually evaluated the automatically detected failures. Results From our manual review, we characterized three types of failure: (1) boundary failures, (2) missed

This study discusses the analysis of various modeling approaches and maintenance techniques applicable to the Liquefied Natural Gas (LNG) carrier operations in the maritime environment. Various novel modeling techniques are discussed; including genetic algorithms, fuzzy logic and evidential reasoning. We also identify the usefulness of these algorithms in the LNG carrier industry in the areas of risk assessment and maintenance modeling.

A study examined the nature and operation of the institutions in which exemplary vocational education programs exist. Three research questions guided the study: Are there common elements that characterize institutions as exemplary? How is the presence of these common elements reflected in educational levels and types of institutions? and What…

... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission Energy Efficiency of the Natural Gas Infrastructure and Operations.... Eastern Time, in the Commission Meeting Room on the second floor of the offices of the Federal...

The management of natural hazards occurring over a territory entails two main phases: a preoperational (or pre-event) phase, whose objective is to relocate resources closer to sites characterized by the highest hazard, and an operational (during-the-event) phase, whose objective is to manage in real time the available resources by allocating them to sites where their intervention is needed. Obviously, the two phases are closely related, and demand a unified and integrated treatment. This work presents a unifying framework that integrates various decisional problems arising in the management of different kinds of natural hazards. The proposed approach, which is based on a mathematical programming formulation, can support decision makers in the optimal resource allocation before (preoperational phase) and during (operational phase) an emergency due to natural hazard events. Different alternatives for modeling the resources and the territory are proposed and discussed according to their appropriateness in the preoperational and operational phases. The proposed approach can be applied to the management of any natural hazard and, from an integration perspective, may be particularly useful for risk management in civil protection operations. An application related to the management of wildfire hazard is presented. PMID:19000073
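The pre-positioning idea can be sketched as a small linear program, assuming a simple hazard-weighted objective: place a limited number of resource units at candidate sites so that higher-hazard sites receive more coverage. The hazard scores, capacities, and objective below are invented and much simpler than the paper's formulation.

from scipy.optimize import linprog

hazard = [0.7, 0.2, 0.5]     # illustrative relative hazard score of each site
site_capacity = [4, 4, 4]    # maximum units each site can host
total_units = 6              # units available for pre-positioning

# maximize sum(hazard[i] * x[i])  ->  minimize -sum(hazard[i] * x[i])
c = [-h for h in hazard]
A_eq = [[1, 1, 1]]
b_eq = [total_units]
bounds = [(0, cap) for cap in site_capacity]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)  # e.g. [4., 0., 2.]: most units at the highest-hazard site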

A high surface area electrode is functionalized with cobalt-based oxygen evolving catalysts (Co-OEC = electrodeposited from pH 7 phosphate, Pi, pH 8.5 methylphosphonate, MePi, and pH 9.2 borate electrolyte, Bi). Co-OEC prepared from MePi and operated in Pi and Bi achieves a current density of 100 mA cm(-2) for water oxidation at 442 and 363 mV overpotential, respectively. The catalyst retains activity in near-neutral pH buffered electrolyte in natural waters such as those from the Charles River (Cambridge, MA) and seawater (Woods Hole, MA). The efficacy and ease of operation of anodes functionalized with Co-OEC at appreciable current density together with its ability to operate in near neutral pH buffered natural water sources bodes well for the translation of this catalyst to a viable renewable energy storage technology.

The proposal selection process for the Hubble Space Telescope is assisted by a robust and easy-to-use query program (TACOS). The system parses an English subset language sentence regardless of the order of the keyword phrases, allowing the user greater flexibility than a standard command query language. Capabilities for macro and procedure definition are also integrated. The system was designed for flexibility in both use and maintenance. In addition, TACOS can be applied to any knowledge domain that can be expressed in terms of a single reaction. The system was implemented mostly in Common LISP. The TACOS design is described in detail, with particular attention given to the implementation methods of sentence processing.
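Order-insensitive keyword-phrase parsing of the kind described above can be sketched by scanning a query for each recognized phrase wherever it occurs. The phrase list, slot names, and queries below are invented for illustration (TACOS itself was written in Common LISP and is certainly more elaborate).

import re

# Hypothetical keyword phrases mapped to query slots.
KEYWORD_PHRASES = {
    "exposure time": "EXPOSURE_TIME",
    "target name": "TARGET_NAME",
    "instrument": "INSTRUMENT",
}

def parse_query(query):
    found = {}
    for phrase, slot in KEYWORD_PHRASES.items():
        m = re.search(phrase + r"\s+(?:is|of|=)?\s*(\w+)", query, re.IGNORECASE)
        if m:
            found[slot] = m.group(1)
    return found

print(parse_query("show proposals where instrument is WFPC and exposure time is 300"))
print(parse_query("exposure time 300 for instrument WFPC"))  # order does not matter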

FMS/3 is a system for producing hard copy documentation at high speed from free format text and command input. The system was originally written in assembler language for a 12K IBM 360 model 20 using a high speed 1403 printer with the UCS-TN chain option (upper and lower case). Input was from an IBM 2560 Multi-function Card Machine. The model 20…

The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf’s law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf’s law and are then used to evaluate many of the theoretical explanations of Zipf’s law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf’s law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data. PMID:24664880
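The classic check behind this discussion is easy to sketch: rank words by frequency and compare the observed frequency of the r-th ranked word with f(1)/r, the simplest form of Zipf's law. The short placeholder text below is only for illustration; a large corpus is needed for the law to emerge clearly.

from collections import Counter
import re

# Substitute any plain-text corpus here; this short passage is a placeholder.
text = ("the cat sat on the mat and the dog sat on the rug "
        "while the cat watched the dog and the dog watched the cat")

counts = Counter(re.findall(r"[a-z']+", text.lower()))
ranked = counts.most_common()
f1 = ranked[0][1]

for rank, (word, freq) in enumerate(ranked, start=1):
    print(f"{rank:>2}  {word:<8} observed={freq:<3} zipf_prediction={f1 / rank:.1f}")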

A plain language communication device is presented. It offers a means of automatic paging of events reported electrically by any number of monitoring stations by providing comprehensive and clear information on events to persons directly concerned with operation procedures. It consists of a sound-tape cassette with the spoken message to be put through. Duration and content of the message are freely selectable. The spoken text can be selected via a logic allocation system, called off and then be paged over any distance at any number of outstations (e.g., loudspeaker or telephone). In case of power breakdowns the device is capable of bridging critical situations by spoken information using a buffer storage battery. Test results in underground operations are successful.

The nature of programming languages is discussed, focusing on machine/assembly language and high-level languages. The latter includes systems (such as "Basic") in which an entire set of low-level instructions (in assembly/machine language) are combined. Also discusses the nature of other languages such as "Lisp" and list-processing languages. (JN)

Background Accurate information is needed to direct healthcare systems’ efforts to control methicillin-resistant Staphylococcus aureus (MRSA). Assembling complete and correct microbiology data is vital to understanding and addressing the multiple drug-resistant organisms in our hospitals. Methods Herein, we describe a system that securely gathers microbiology data from the Department of Veterans Affairs (VA) network of databases. Using naturallanguage processing methods, we applied an information extraction process to extract organisms and susceptibilities from the free-text data. We then validated the extraction against independently derived electronic data and expert annotation. Results We estimate that the collected microbiology data are 98.5% complete and that methicillin-resistant Staphylococcus aureus was extracted accurately 99.7% of the time. Conclusions Applying naturallanguage processing methods to microbiology records appears to be a promising way to extract accurate and useful nosocomial pathogen surveillance data. Both scientific inquiry and the data’s reliability will be dependent on the surveillance system’s capability to compare from multiple sources and circumvent systematic error. The dataset constructed and methods used for this investigation could contribute to a comprehensive infectious disease surveillance system or other pressing needs. PMID:22533507
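As a highly simplified illustration of this kind of extraction (not the VA pipeline), the sketch below pulls an organism name and susceptibility results out of one free-text microbiology line and derives an MRSA flag. The report text, patterns, and flag logic are invented for illustration.

import re

report = ("Culture grew Staphylococcus aureus. "
          "Oxacillin: Resistant. Vancomycin: Susceptible.")

organism = re.search(r"grew\s+([A-Z][a-z]+\s+[a-z]+)", report)
susceptibilities = dict(
    re.findall(r"(\w+):\s*(Resistant|Susceptible|Intermediate)", report))

is_mrsa = (organism is not None
           and organism.group(1) == "Staphylococcus aureus"
           and susceptibilities.get("Oxacillin") == "Resistant")

print(organism.group(1), susceptibilities, "MRSA flag:", is_mrsa)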

BBN's DARPA project in Knowledge Representation for NaturalLanguage Communication and Planning Assistance has two primary objectives: 1) To perform research on aspects of the interaction between users who are making complex decisions and systems that are assisting them with their task. In particular, this research is focused on communication and the reasoning required for its underlying tasks of discourse processing, planning, plan recognition, and communication repair. 2) Based on those research objectives, to build tools for communication, plan recognition, and planning assistance, and for the representation of knowledge and reasoning that underlies all of these processes. This final report summarizes BBN's research activities performed under this contract in the areas of knowledge representation and speech and naturallanguage. In particular, the report discusses the work in the areas of knowledge representation, planning, and discourse modeling. We describe a parallel truth maintenance system. We provide an extension to the sentential theory of propositional attitudes by adding a sentential semantics. The report also contains a description of our research in discourse modeling in the areas of planning and plan recognition.

Multi-purpose reservoirs typically provide benefits of water supply, hydroelectric power, and flood mitigation. Hydroelectric power generation generally does not consume water; however, the temporal distribution of downstream flows is strongly changed by hydro-peaking effects. Together with offstream diversion of water supplies for municipal, industrial, and agricultural requirements, the natural streamflow characteristics of magnitude, duration, frequency, timing, and rate of change are significantly altered by multi-purpose reservoir operation. The natural flow regime has long been recognized as a master factor for ecosystem health and biodiversity. Restoration of the flow regime altered by multi-purpose reservoir operation is the main objective of this study. This study presents an optimization framework that modifies reservoir operation to seek a balance between human and environmental needs. The methodology is applied to the Feitsui Reservoir, located in northern Taiwan, whose main purpose is to provide a stable water supply, with the auxiliary purposes of electricity generation and flood-peak attenuation. Reservoir releases are governed by two decision variables, i.e., the duration of water releases each day and the percentage of the daily required releases made within that duration. The current releasing policy of the Feitsui Reservoir releases water for water-supply and hydropower purposes from 08:00 to 16:00 each day, with no environmental flow releases. Although greater power generation is obtained when 100% of releases are distributed within the 8-hour period, severe temporal alteration of streamflow is observed downstream of the reservoir. Modifying reservoir operation by relaxing these two variables and reserving a certain ratio of streamflow as environmental flow can maintain downstream natural variability. The optimal releasing policy is searched by a multi-criterion decision-making technique that considers reservoir performance in terms of shortage ratio
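The two decision variables named above can be made concrete with a short sketch: given a daily release requirement, a release duration, and the share released within that window, compute an hourly release profile and compare its peak-to-baseflow ratio under the current and a relaxed policy. The numbers and the comparison are illustrative, not the study's optimization.

def hourly_releases(daily_volume, duration_h, share_in_window, start_hour=8):
    """Hourly release profile from the two decision variables (illustrative units)."""
    in_window = daily_volume * share_in_window / duration_h
    outside = daily_volume * (1 - share_in_window) / (24 - duration_h)
    return [in_window if start_hour <= h < start_hour + duration_h else outside
            for h in range(24)]

# Current policy: 100% of releases within an 8-hour window (08:00-16:00).
current = hourly_releases(daily_volume=240.0, duration_h=8, share_in_window=1.0)
# A relaxed policy: 60% in a 12-hour window, the remainder spread as baseflow.
relaxed = hourly_releases(daily_volume=240.0, duration_h=12, share_in_window=0.6)

print(max(current) / (min(current) + 1e-9))  # very large ratio: strong hydro-peaking
print(max(relaxed) / min(relaxed))           # much flatter downstream flow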

The Smart Grid has come to describe a next-generation electrical power system that is typified by the increased use of communications and information technology in the generation, delivery, and consumption of electrical energy. Much of the present Smart Grid analysis focuses on utility and consumer interaction, i.e., smart appliances, home automation systems, rate structures, consumer demand response, etc. An identified need is to assess the upstream and midstream operations of natural gas as a result of the Smart Grid. The nature of the Smart Grid, including demand response and the role of information, may require changes in upstream and midstream natural gas operations to ensure availability and efficiency. Utility reliance on natural gas will continue and likely increase, given the backup requirements for intermittent renewable energy sources. Efficient generation and delivery of electricity on the Smart Grid could affect how natural gas is utilized. Things that we already know about the Smart Grid are: (1) the role of information and data integrity is increasingly important; (2) the Smart Grid includes a fully distributed system with two-way communication; (3) the Smart Grid, a complex network, may change the way energy is supplied, stored, and demanded; (4) the Smart Grid has evolved through consumer-driven decisions; (5) the Smart Grid and the US critical infrastructure will include many intermittent renewables.

Open source projects do have requirements; they are, however, mostly informal text descriptions found in requests, forums, and other correspondence. Understanding such requirements provides insight into the nature of open source projects. Unfortunately, manual analysis of natural language requirements is time-consuming, and for large projects,…

The traditional theory equating the brain bases of language with Broca's and Wernicke's neocortical areas is wrong. Neural circuits linking activity in anatomically segregated populations of neurons in subcortical structures and the neocortex throughout the human brain regulate complex behaviors such as walking, talking, and comprehending the meaning of sentences. When we hear or read a word, neural structures involved in the perception or real-world associations of the word are activated as well as posterior cortical regions adjacent to Wernicke's area. Many areas of the neocortex and subcortical structures support the cortical-striatal-cortical circuits that confer complex syntactic ability, speech production, and a large vocabulary. However, many of these structures also form part of the neural circuits regulating other aspects of behavior. For example, the basal ganglia, which regulate motor control, are also crucial elements in the circuits that confer human linguistic ability and abstract reasoning. The cerebellum, traditionally associated with motor control, is active in motor learning. The basal ganglia are also key elements in reward-based learning. Data from studies of Broca's aphasia, Parkinson's disease, hypoxia, focal brain damage, and a genetically transmitted brain anomaly (the putative "language gene," family KE), and from comparative studies of the brains and behavior of other species, demonstrate that the basal ganglia sequence the discrete elements that constitute a complete motor act, syntactic process, or thought process. Imaging studies of intact human subjects and electrophysiologic and tracer studies of the brains and behavior of other species confirm these findings. As Dobzhansky put it, "Nothing in biology makes sense except in the light of evolution" (cited in Mayr, 1982). That applies with as much force to the human brain and the neural bases of language as it does to the human foot or jaw. The converse follows: the mark of evolution on

This quarterly report documents work performed under Tasks 10 through 14 of the project entitled: Technologies to Enhance Operation of the Existing Natural Gas Compression Infrastructure. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report documents the second series of tests performed on a GMW10 engine/compressor after modifications to add high-pressure fuel injection and a turbocharger. It also presents baseline testing for air balance investigations and initial simulation modeling of the air manifold for a Cooper GMVH6.

This quarterly report documents work performed in Phase I of the project entitled: ''Technologies to Enhance Operation of the Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report documents the second series of tests performed on a turbocharged HBA-6T engine/compressor. It also presents baseline testing for air balance investigations and initial simulation modeling of the air manifold for a Cooper GMVH6.

Culture without nature is empty; nature without culture is deaf. Intercultural dialogue in higher education around the globe is needed to improve the theory, policy and practice of science and science education. The culture, cosmology and philosophy of "global" science as practiced today in all societies around the world are seemingly anchored in…

The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for commercial activities. This research is developing methods beyond classical control-stick pilot inputs, to allow operators to manage complex missions without in-depth vehicle expertise. These missions may entail several heterogeneous UAVs flying coordinated patterns or flying multiple trajectories deconflicted in time or space to predefined locations. This paper describes the functionality and preliminary usability measures of an interface that allows an operator to define a mission using speech inputs. With a defined and simple vocabulary, operators can input the vast majority of mission parameters using simple, intuitive voice commands. Although the operator interface is simple, it is based upon autonomous algorithms that allow the mission to proceed with minimal input from the operator. This paper also describes these underlying algorithms that allow an operator to manage several UAVs.

The language arts are constructed like a doughnut or a bagel, so that at their center where there might be something, there is instead a hole--emptiness. The dominant approach to understanding the nature of language--generative grammar--does not suggest a center for the language arts. An alternative approach to language and mind is "cognitive…

There is a growing consensus that natural language plays a significant role in our cognitive lives. However, this role of language is not adequately characterised. In this paper, I investigate the relationship between natural language and thinking and argue that thinking operates largely according to associationistic rules. Furthermore, I show that language is neither restricted to interfacing between a 'Language of Thought' and the conscious level, nor is it constitutively involved in thinking. Unlike available alternatives, the suggested view predicts and accommodates a large battery of empirical evidence. Furthermore, it avoids problems that associationistic views traditionally faced, e.g. problems of propositional thinking and compositionality of thought. PMID:25976728

This paper considers the problem of defining natural star-products on symplectic manifolds admissible for quantization of classical Hamiltonian systems. First, a construction of a star-product on the cotangent bundle to a Euclidean configuration space is given using a sequence of pairwise commuting vector fields. The connection with a covariant representation of such a star-product is also presented. Then, an extension of the construction to symplectic manifolds over flat and non-flat pseudo-Riemannian configuration spaces is discussed. Finally, a coordinate-free construction of the related quantum mechanical operators on the Hilbert space over the respective configuration space is presented. -- Highlights: •Invariant representations of natural star-products on symplectic manifolds are considered. •Star-products induced by flat and non-flat connections are investigated. •Operator representations in Hilbert space of the considered star-algebras are constructed.
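
For orientation, the abstract gives no formulas; the best-known example of a star-product on a flat phase space (which constructions of the kind described above generalize) is the Moyal-Weyl product, which in LaTeX notation reads

    (f \star g)(q,p) \;=\; \sum_{k=0}^{\infty} \frac{1}{k!} \left(\frac{i\hbar}{2}\right)^{k}
        f(q,p)\, \bigl( \overleftarrow{\partial}_{q} \overrightarrow{\partial}_{p}
        - \overleftarrow{\partial}_{p} \overrightarrow{\partial}_{q} \bigr)^{k}\, g(q,p),

which reduces to the pointwise product at order \hbar^{0} and reproduces the Poisson bracket \{f,g\} at first order in \hbar.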

This article looks at the contribution of insights from theoretical linguistics to an understanding of language acquisition and the nature of language in terms of their potential benefit to language education. We examine the ideas of innateness and universal language faculty, as well as multilingualism and the language-society relationship. Modern…

The Natural Excitation Technique (NExT) is used to extract modal parameters (natural frequencies, modal damping, and mode shapes) from an operating Horizontal Axis Wind Turbine (HAWT). NExT uses the measured response of the turbine excited by the assumed broad-band, random wind input, even though the excitation cannot be directly measured. The damping measured using NExT generally increased as the wind speed increased. Such information can be used to aid in the verification and upgrade of codes that predict the structural response of operating HAWTs and to aid our understanding of the dynamics of wind turbines. The Northern Power Systems 100-kW machine is addressed. Strain data are available from this machine while operating at 72 rpm in 10, 15, 20, 25, and 30 mph winds. The operational modal frequencies and mode shapes were measured for this machine. Reconstructions of the auto- and cross-spectra are used to verify the validity of the extracted parameters. The modal damping for two modes is presented for this range of wind speeds.
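
The essence of NExT is that, under broadband random excitation, the cross-correlation between two response channels behaves like a free-decay response of the structure, so standard time-domain identification applies. The Python sketch below illustrates this on synthetic single-mode data; it is a minimal illustration, not the processing actually applied to the turbine strain records.

    import numpy as np
    from scipy.signal import hilbert

    fs, N = 100.0, 60000                  # sample rate (Hz), number of samples
    t = np.arange(N) / fs
    fn, zeta = 1.5, 0.02                  # "true" modal frequency (Hz) and damping ratio
    wn = 2 * np.pi * fn
    wd = wn * np.sqrt(1 - zeta ** 2)

    # Impulse response of one lightly damped mode; both channels see the same
    # unmeasured broadband (wind-like) input, with different mode-shape scaling.
    h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd
    x = np.random.default_rng(0).standard_normal(N)
    y_ref = np.convolve(x, h)[:N]
    y_rov = 0.7 * np.convolve(x, h)[:N]

    # Cross-correlation at positive lags (via FFT); by the NExT result it decays
    # like a free vibration of the mode, although the input was never measured.
    nfft = 2 * N
    R = np.fft.irfft(np.fft.rfft(y_rov, nfft) * np.conj(np.fft.rfft(y_ref, nfft)))[:N] / N
    R = R[: int(20 * fs / fn)]            # keep roughly 20 cycles of "free decay"

    # Frequency from the spectral peak of the correlation function.
    freqs = np.fft.rfftfreq(len(R), 1 / fs)
    f_est = freqs[np.argmax(np.abs(np.fft.rfft(R)))]

    # Damping ratio from the slope of the log envelope of the decay.
    env = np.abs(hilbert(R))
    i0, i1 = int(0.5 * fs / f_est), int(10 * fs / f_est)
    slope = np.polyfit(t[:len(R)][i0:i1], np.log(env[i0:i1]), 1)[0]
    print(f"estimated f = {f_est:.2f} Hz, zeta = {-slope / (2 * np.pi * f_est):.3f}")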

The purpose of this work was to evaluate the physical and chemical properties of emission products from a six-cylinder sedan car under a variety of operating conditions, before and after it was converted to compressed natural gas (CNG) fuel. The measurements focused on the emission levels and characteristics of ultrafine particles, together with emissions of gaseous pollutants; results for a range of operating conditions before and up to 3 months after the vehicle was converted are presented and discussed in the paper. The investigations showed that converting a petrol-operated vehicle to CNG has the potential to reduce some of the emissions, and thus risks, while it does not appear to have an impact on others. In particular, there was no statistically significant change in the emission of particles for the vehicle operating on petrol, before the conversion, compared to the emissions for the vehicle operating on CNG, after the conversion. There was a significant lowering of emissions of total polycyclic aromatic hydrocarbons and formaldehyde when the vehicle was operated on CNG, and a reduction in global warming potential was also observed when the vehicle was run on CNG; the latter gain, however, appears only at high vehicle speeds/loads and would thus have to be considered in view of traffic and transport models for the region (in these models vehicle speed is an important parameter). PMID:15081726

This paper will discuss a two-and-a-half-year project undertaken to develop an English-language interface for the geographical information system GRASS. The work was carried out for NASA by a small business, Netrologic, based in San Diego, California, under Phase 1 and 2 Small Business Innovative Research contracts. We consider here the potential value of this system, whose current functionality addresses numerical, categorical, and boolean raster layers and includes the display of point sets defined by constraints on one or more layers, answers to yes/no and numerical questions, and the creation of statistical reports. It also handles complex queries and lexical ambiguities, and allows temporarily switching to UNIX or GRASS.

Writing in science can be used to address some of the issues relevant to contemporary scientific literacy, such as the nature of science, which describes the scientific enterprise for science education. This has implications for the kinds of writing tasks students should attempt in the classroom, and for how students should understand the rationale and claims of these tasks. While scientific writing may train the mind to think scientifically in a disciplined and structured way thus encouraging students to gain access to the public domain of scientific knowledge, the counter-argument is that students need to be able to express their thoughts freely in their own language. Writing activities must aim to promote philosophical and epistemological views of science that accurately portray contemporary science. This mixed-methods case study explored language-enriched environments, in this case, secondary science classrooms with a focus on teacher-developed activities, involving diversified writing styles, that were directly linked to the science curriculum. The research foci included: teachers' implementation of these activities in their classrooms; how the activities reflected the teachers' nature of science views; common attributes between students' views of science and how they represented science in their writings; and if, and how the activities influenced students' nature of science views. Teachers' and students' views of writing and the nature of science are illustrated through pre-and post-questionnaire responses; interviews; student work; and classroom observations. Results indicated that diversified writing activities have the potential to accurately portray science to students, personalize learning in science, improve students' overall attitude towards science, and enhance scientific literacy through learning science, learning about science, and doing science. Further research is necessary to develop an understanding of whether the choice of genre has an

Until now, there have been no natural language interfaces (NLIs) available for querying a database of pulsars (rotating neutron stars emitting radiation at regular intervals). Currently, pulsar records are retrieved through an HTML form accessible via the Australia Telescope National Facility (ATNF) website, where one needs to be familiar with the pulsar attributes used by the interface (e.g. BLC). Using an NLI removes the need to learn form-specific formalism and allows execution of more powerful queries than those supported by the HTML form. Furthermore, for database access that requires comparison of attributes across all the pulsar records (e.g. what is the fastest pulsar?), using an NLI to retrieve answers to such complex questions is much more efficient and less error-prone. This poster presents the first NLI ever created for the ATNF pulsar database (ATNF-Query), built to facilitate database access using complex queries. ATNF-Query is built using a machine learning approach that induces a semantic parser from a question corpus; the application is intended to provide pulsar researchers and laymen with an intelligent language-understanding database system for friendly information access.

In a combinatorial communication system, some signals consist of the combinations of other signals. Such systems are more efficient than equivalent, non-combinatorial systems, yet despite this they are rare in nature. Why? Previous explanations have focused on the adaptive limits of combinatorial communication, or on its purported cognitive difficulties, but neither of these explains the full distribution of combinatorial communication in the natural world. Here, we present a nonlinear dynamical model of the emergence of combinatorial communication that, unlike previous models, considers how initially non-communicative behaviour evolves to take on a communicative function. We derive three basic principles about the emergence of combinatorial communication. We hence show that the interdependence of signals and responses places significant constraints on the historical pathways by which combinatorial signals might emerge, to the extent that anything other than the most simple form of combinatorial communication is extremely unlikely. We also argue that these constraints can be bypassed if individuals have the socio-cognitive capacity to engage in ostensive communication. Humans, but probably no other species, have this ability. This may explain why language, which is massively combinatorial, is such an extreme exception to nature's general trend for non-combinatorial communication. PMID:24047871

Patent textual descriptions provide a wealth of information that can be used to understand the underlying design approaches that result in the generation of novel and innovative technology. This article will discuss a new approach for estimating the Degree of Ideality and Level of Invention metrics from the theory of inventive problem solving (TRIZ) using patent textual information. Patent text includes information that can be used to model both the functions performed by a design and the associated costs and problems that affect a design’s value. The motivation of this research is to use patent data and calculated TRIZ metrics to help designers understand which combinations of system components and functions result in creative and innovative design solutions. This article will discuss in detail methods to estimate these TRIZ metrics using natural language processing and machine learning with neural networks.
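
TRIZ defines the Degree of Ideality, roughly, as the ratio of a system's useful functions to the sum of its costs and harms. The Python sketch below is a deliberately crude keyword-counting illustration of estimating that ratio from patent text; the term lists are hypothetical, and the article itself relies on natural language processing and trained neural networks rather than fixed keyword lists.

    import re

    # Illustrative (hypothetical) term lists for functions vs. costs/harms.
    FUNCTION_TERMS = {"provide", "enable", "increase", "improve", "transmit", "measure"}
    COST_HARM_TERMS = {"cost", "expensive", "failure", "wear", "noise", "complexity", "loss"}

    def degree_of_ideality(patent_text):
        """Toy estimate of TRIZ ideality: useful functions / (costs + harms)."""
        words = re.findall(r"[a-z]+", patent_text.lower())
        benefits = sum(any(w.startswith(t) for t in FUNCTION_TERMS) for w in words)
        harms = sum(any(w.startswith(t) for t in COST_HARM_TERMS) for w in words)
        return benefits / max(harms, 1)

    print(degree_of_ideality(
        "The device provides improved torque transfer and enables measurement of wear, "
        "reducing failure at moderate cost."))   # -> about 1.3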

Emergency department (ED) visits due to allergic reactions are common. Allergy information is often recorded in free-text provider notes; however, this domain has not yet been widely studied by the natural language processing (NLP) community. We developed an allergy module built on the MTERMS NLP system to identify and encode food, drug, and environmental allergies and allergic reactions. The module included updates to our lexicon using standard terminologies, and novel disambiguation algorithms. We developed an annotation schema and annotated 400 ED notes that served as a gold standard for comparison to MTERMS output. MTERMS achieved an F-measure of 87.6% for the detection of allergen names and no known allergies, 90% for identifying true reactions in each allergy statement where true allergens were also identified, and 69% for linking reactions to their allergen. These preliminary results demonstrate the feasibility of using NLP to extract and encode allergy information from clinical notes. PMID:25954363

The research is concerned with investigating children's understanding of physical, chemical, and biological changes while using an approach developed by the project Energy and Change. This project aimed to provide novel ways of teaching about the nature and direction of changes, in particular introducing ideas related to the Second Law of…

A substantial amount of research regarding L2 learners' beliefs has been conducted in recent years. However, not enough attention has been paid to investigating the nature of learners' beliefs; hence, our understanding of the construct is contradictory, in the sense that early research studies report stability in beliefs, while more recent studies…

A compiler for recognizing statements of a FORTRAN program which are suited for fast execution on a parallel or pipeline machine such as Illiac-4, Star or ASC is described. The technique employs interval analysis to provide flow information to the vector/parallel recognizer. Where profitable the compiler changes scalar variables to subscripted variables. The output of the compiler is an extension to FORTRAN which shows parallel and vector operations explicitly.

Research on the dimensions of personality represented in the English language has repeatedly led to the identification of five factors (Norman, 1963). An alternative classification of personality traits, based on analyses of standardized questionnaires, is provided by the NEO (Neuroticism, Extraversion, Openness) model (Costa & McCrae, 1980b). In this study we examined the correspondence between these two systems in order to evaluate their comprehensiveness as models of personality. A sample of 498 men and women, participants in a longitudinal study of aging, completed an instrument containing 80 adjective pairs, which included 40 pairs proposed by Goldberg to measure the five dimensions. Neuroticism and extraversion factors from these items showed substantial correlations with corresponding NEO Inventory scales; however, analyses that included psychometric measures of intelligence suggested that the fifth factor in the Norman structure should be reconceptualized as openness to experience. Convergent correlations above .50 with spouse ratings on the NEO Inventory that were made three years earlier confirmed these relations across time, instrument, and source of data. We discuss the relations among culture, conscientiousness, openness, and intelligence, and we conclude that mental ability is a separate factor, though related to openness to experience. PMID:4045699

Understanding written or spoken language presumably involves spreading neural activation in the brain. This process may be approximated by spreading activation in semantic networks, providing enhanced representations that involve concepts not found directly in the text. The approximation of this process is of great practical and theoretical interest. Although activations of the neural circuits involved in the representation of words change rapidly in time, snapshots of these activations spreading through associative networks may be captured in a vector model. Concepts of similar type activate larger clusters of neurons, priming areas in the left and right hemispheres. Analysis of recent brain imaging experiments shows the importance of right-hemisphere non-verbal clusterization. Medical ontologies enable the development of a large-scale practical algorithm to re-create pathways of spreading neural activations. First, concepts of a specific semantic type are identified in the text, and then all related concepts of the same type are added to the text, providing expanded representations. To avoid rapid growth of the extended feature space after each step, only the most useful features, those that increase document clusterization, are retained. Short hospital discharge summaries are used to illustrate how this process works on real, very noisy data. Expanded texts show significantly improved clustering and may be classified with much higher accuracy. Although better approximations to the spreading of neural activations may be devised, the practical approach presented in this paper helps to discover pathways used by the brain to process specific concepts, and may be used in large-scale applications. PMID:18614334
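
A minimal sketch of the expansion step described above follows; the miniature "ontology" and the discharge-summary sentence are invented for illustration, whereas the paper relies on large medical ontologies and retains only the expansions that improve clustering.

    # Concepts found in a text are augmented with related concepts of the same
    # semantic type: one step of spreading activation over an ontology graph.
    ONTOLOGY = {
        "pneumonia": {"lung infection", "respiratory disease"},
        "amoxicillin": {"antibiotic", "beta-lactam"},
        "fever": {"symptom", "elevated temperature"},
    }

    def expand(text):
        found = {c for c in ONTOLOGY if c in text.lower()}
        expanded = set(found)
        for concept in found:
            expanded |= ONTOLOGY[concept]
        return expanded

    summary = "Patient admitted with fever and pneumonia, treated with amoxicillin."
    print(expand(summary))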

Presentations from a colloquium on applications of research on natural languages to computer science address the following topics: (1) analysis of complex adverbs; (2) parser use in computerized text analysis; (3) French language utilities; (4) lexicographic mapping of official language notices; (5) phonographic codification of Spanish; (6)…

The conceptual tools used in the communication/language objectives-based system (C/LOBS), which supports the front-end analysis efforts of the Defense Language Institute Foreign Language Center, are examined. The C/LOBS project, which is described in 13 volumes and an executive summary, functions as a subsystem of the instructional systems…

A discussion on the nature of language argues the following: (1) the concept of a closed and finite rule system is inadequate for the description of natural languages; (2) as a consequence, the writing of variable rules to modify such rule systems so as to accommodate the properties of natural language is inappropriate; (3) the concept of such…

This quarterly report documents work performed under Tasks 15, 16, and 18 through 23 of the project entitled: ''Technologies to Enhance the Operation of the Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report first documents a survey test performed on an HBA-6 engine/compressor installed at Duke Energy's Bedford Compressor Station. This is one of several tests planned, which will emphasize identification and reduction of compressor losses. Additionally, this report presents a methodology for distinguishing losses in the compressor attributable to valves, irreversibility in the compression process, and the attached piping (installation losses); it illustrates the methodology with data from the survey test. The report further presents the validation of the simulation model for the Air Balance tasks and an outline of conceptual manifold designs.

This quarterly report documents work performed under Tasks 15, 16, and 18 through 23 of the project entitled: ''Technologies to Enhance the Operation of Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report presents results of design analysis performed on the TCVC10 engine/compressor installed at Dominion's Groveport Compressor Station to develop options and guide decisions for reducing pulsations and enhancing compressor system efficiency and capacity. The report further presents progress on modifying and testing the laboratory GMVH6 at SwRI for correcting air imbalance.

This quarterly report documents work performed under Tasks 15, 16, and 18 through 23 of the project entitled: ''Technologies to Enhance the Operation of Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report first summarizes key results from survey site tests performed on an HBA-6 installed at Duke Energy's Bedford compressor station, and on a TCVC10 engine/compressor installed at Dominion's Groveport Compressor Station. The report then presents results of design analysis performed on the Bedford HBA-6 to develop options and guide decisions for reducing pulsations and enhancing compressor system efficiency and capacity. The report further presents progress on modifying and testing the laboratory GMVH6 at SwRI for correcting air imbalance.

This quarterly report documents work performed under Tasks 15, 16, and 18 through 23 of the project entitled: ''Technologies to Enhance the Operation of Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report first documents a survey site test performed on a TCVC10 engine/compressor installed at Dominion's Groveport Compressor Station. This test completes planned screening efforts designed to guide selection of one or more units for design analysis and testing with emphasis on identification and reduction of compressor losses. The report further presents the validation of the simulation model for the Air Balance tasks and outline of conceptual manifold designs.

This report is a summary of the operations and testing of internal combustion engine vehicles that were fueled with 100% hydrogen and various blends of hydrogen and compressed natural gas (HCNG). It summarizes the operations of the Arizona Public Service Alternative Fuel Pilot Plant, which produces, compresses, and dispenses hydrogen fuel. Other testing activities, such as the destructive testing of a CNG storage cylinder that was used for HCNG storage, are also discussed. This report highlights some of the latest technology developments in the use of 100% hydrogen fuels in internal combustion engine vehicles. Reports are referenced and WWW locations noted as a guide for the reader that desires more detailed information. These activities are conducted by Arizona Public Service, Electric Transportation Applications, the Idaho National Laboratory, and the U.S. Department of Energy’s Advanced Vehicle Testing Activity.

Cognitive science has a rich history of interest in the ways that languages represent abstract and concrete concepts (e.g., idea vs. dog). Until recently, this focus has centered largely on aspects of word meaning and semantic representation. However, recent corpora analyses have demonstrated that abstract and concrete words are also marked by phonological, orthographic, and morphological differences. These regularities in sound-meaning correspondence potentially allow listeners to infer certain aspects of semantics directly from word form. We investigated this relationship between form and meaning in a series of four experiments. In Experiments 1–2 we examined the role of metalinguistic knowledge in semantic decision by asking participants to make semantic judgments for aurally presented nonwords selectively varied by specific acoustic and phonetic parameters. Participants consistently associated increased word length and diminished wordlikeness with abstract concepts. In Experiment 3, participants completed a semantic decision task (i.e., abstract or concrete) for real words varied by length and concreteness. Participants were more likely to misclassify longer, inflected words (e.g., “apartment”) as abstract and shorter uninflected abstract words (e.g., “fate”) as concrete. In Experiment 4, we used a multiple regression to predict trial level naming data from a large corpus of nouns which revealed significant interaction effects between concreteness and word form. Together these results provide converging evidence for the hypothesis that listeners map sound to meaning through a non-arbitrary process using prior knowledge about statistical regularities in the surface forms of words. PMID:22879931

Examines role and contributions of natural-language processing in information retrieval and artificial intelligence research in context of large operational information retrieval systems and services. State-of-the-art information retrieval systems combining the functional capabilities of conventional inverted file term adjacency approach with…

The rehabilitation study described here sets out to test the premise of Abutalebi and Green's neurocognitive model--specifically, that language selection and control are components of overall cognitive control. We follow a trilingual woman (first language, L1: Amharic; second language, L2: English; third language, L3: French) with damage to the left frontal lobe and left basal ganglia who presented with cognitive control and naming deficits, through two periods of semantic treatment (French, followed by English) to alleviate naming deficits. The results showed that while the participant improved on trained items, she did not show within- or cross-language generalization. In addition, error patterns revealed a substantial increase of interference of the currently trained language into the nontrained language during each of the two treatment phases. These results are consistent with Abutalebi and Green's neurocognitive model and support the claim that language selection and control are components of overall cognitive control. PMID:26377506

Recent research by NASA indicates that extensive natural laminar flow (NLF) is attainable on modern high performance airplanes currently under development. Modern airframe construction methods and materials, such as milled aluminum skins, bonded aluminum skins, and composite materials, offer the potential for production of aerodynamic surfaces having waviness and roughness below the values which are critical for boundary layer transition. Areas of concern with the certification aspects of Natural Laminar Flow (NLF) are identified to stimulate thought and discussion of the possible problems. During its development, consideration has been given to the recent research information available on several small business and experimental airplanes and the certification and operating rules for general aviation airplanes. The certification considerations discussed are generally applicable to both large and small airplanes. However, from the information available at this time, researchers expect more extensive NLF on small airplanes because of their lower operating Reynolds numbers and cleaner leading edges (due to lack of leading-edge high lift devices). Further, the use of composite materials for aerodynamic surfaces, which will permit incorporation of NLF technology, is currently beginning to appear in small airplanes.

This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled Natural Language Query System Design for Interactive Information Storage and Retrieval Systems, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-17.

Background The identification of patients who pose an epidemic hazard when they are admitted to a health facility plays a role in preventing the risk of hospital-acquired infection. An automated clinical decision support system to detect suspected cases, based on the principle of syndromic surveillance, is being developed at the University of Lyon's Hôpital de la Croix-Rousse. This tool will analyse structured data and narrative reports from computerized emergency department (ED) medical records. The first step consists of developing an application (UrgIndex) that automatically extracts and encodes information found in narrative reports. The purpose of the present article is to describe and evaluate this natural language processing system. Methods Narrative reports have to be pre-processed before utilizing the French-language medical multi-terminology indexer (ECMT) for standardized encoding. UrgIndex identifies and excludes syntagmas containing a negation and replaces non-standard terms (abbreviations, acronyms, spelling errors...). The phrases are then sent to the ECMT through an Internet connection. The indexer's reply, based on Extensible Markup Language, returns codes and literals corresponding to the concepts found in the phrases. UrgIndex filters the codes corresponding to suspected infections. Recall is defined as the number of relevant processed medical concepts divided by the number of concepts evaluated (coded manually by the medical epidemiologist). Precision is defined as the number of relevant processed concepts divided by the number of concepts proposed by UrgIndex. Recall and precision were assessed for respiratory and cutaneous syndromes. Results Evaluation of 1,674 processed medical concepts contained in 100 ED medical records (50 for respiratory syndromes and 50 for cutaneous syndromes) showed an overall recall of 85.8% (95% CI: 84.1-87.3). Recall varied from 84.5% for respiratory syndromes to 87.0% for cutaneous syndromes. The most frequent cause of
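
The recall and precision definitions given above can be made concrete with a few lines of Python; the gold-standard and system concept sets below are invented solely to show the computation.

    # gold: concepts coded manually by the epidemiologist; system: concepts proposed by the NLP tool.
    gold = {"fever", "cough", "dyspnea", "rash"}
    system = {"fever", "cough", "rash", "pruritus"}

    relevant = gold & system
    recall = len(relevant) / len(gold)        # relevant processed concepts / concepts evaluated
    precision = len(relevant) / len(system)   # relevant processed concepts / concepts proposed
    print(f"recall = {recall:.2f}, precision = {precision:.2f}")   # 0.75 and 0.75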

Holism in interwar Germany provides an excellent example for social and political influences on scientific developments. Deeply impressed by the ubiquitous invocation of a cultural crisis, biologists, physicians, and psychologists presented holistic accounts as an alternative to the "mechanistic worldview" of the nineteenth century. Although the ideological background of these accounts is often blatantly obvious, many holistic scientists did not content themselves with a general opposition to a mechanistic worldview but aimed at a rational foundation of their holistic projects. This article will discuss the work of Kurt Goldstein, who is known for both his groundbreaking contributions to neuropsychology and his holistic philosophy of human nature. By focusing on Goldstein's neurolinguistic research, I want to reconstruct the empirical foundations of his holistic program without ignoring its cultural background. In this sense, Goldstein's work provides a case study for the formation of a scientific theory through the complex interplay between specific empirical evidences and the general cultural developments of the Weimar Republic. PMID:25363384

Purpose Prospective surveillance of invasive mold diseases (IMDs) in haematology patients should be standard of care but is hampered by the absence of a reliable laboratory prompt and the difficulty of manual surveillance. We used a high-throughput technology, natural language processing (NLP), to develop a classifier based on machine learning techniques to screen computed tomography (CT) reports supportive of IMDs. Patients and Methods We conducted a retrospective case-control study of CT reports from the clinical encounter and up to 12 weeks after, from a random subset of 79 of 270 case patients with 33 probable/proven IMDs by international definitions, and 68 of 257 uninfected control patients identified from 3 tertiary haematology centres. The classifier was trained and tested on a reference standard of 449 physician-annotated reports, including a development subset (n = 366), from a total of 1880 reports, using 10-fold cross validation, comparing binary and probabilistic predictions to the reference standard to generate sensitivity, specificity, and area under the receiver operating characteristic curve (ROC). Results For the development subset, sensitivity/specificity was 91% (95% CI 86% to 94%)/79% (95% CI 71% to 84%) and the ROC area was 0.92 (95% CI 89% to 94%). Of 25 (5.6%) missed notifications, only 4 (0.9%) reports were regarded as clinically significant. Conclusion CT reports are a readily available and timely resource that may be exploited by NLP to facilitate continuous prospective IMD surveillance, with translational benefits beyond surveillance alone. PMID:25250675
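
As a schematic of how such a report classifier can be trained and evaluated with 10-fold cross validation, the scikit-learn sketch below uses a synthetic placeholder corpus and a generic TF-IDF plus logistic-regression pipeline; it is not the classifier or feature set actually developed in the study.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict
    from sklearn.pipeline import make_pipeline

    # Placeholder CT report snippets; label 1 = supportive of invasive mold disease.
    reports = ["nodule with halo sign in right upper lobe",
               "clear lungs, no focal lesion",
               "cavitating lesion with surrounding ground-glass change",
               "stable post-surgical appearance"] * 25
    labels = [1, 0, 1, 0] * 25

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    probs = cross_val_predict(clf, reports, labels, cv=10, method="predict_proba")[:, 1]
    print("10-fold ROC area:", round(roc_auc_score(labels, probs), 3))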

Concepts and methods of complex networks have been applied to probe the properties of a myriad of real systems [1]. The finding that written texts modeled as graphs share several properties with other, completely different real systems has inspired the study of language as a complex system [2]. Indeed, language can be represented as a complex network at its several levels of complexity. As a consequence, morphological, syntactical, and semantic properties have been employed in the construction of linguistic networks [3]. Even the character level has been useful to unfold particular patterns [4,5]. In their review, Cong and Liu [6] emphasize the need to use the topological information of complex networks modeling the various spheres of language to better understand its origins, evolution, and organization. In addition, the authors cite the use of networks in applications aiming at holistic typology and stylistic variation. In this context, I will discuss some possible directions that could be followed in future research directed towards the understanding of language via topological characterization of complex linguistic networks. In addition, I will comment on the use of network models for language-processing applications. Additional prospects for future practical research lines will also be discussed in this comment.
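
One of the simplest constructions referred to above is a word-adjacency network, in which nodes are word types and edges link words that occur next to each other. The short sketch below (using the networkx library; the sentence is arbitrary) illustrates it.

    import networkx as nx

    text = ("language can be represented as a complex network and language "
            "networks share properties of many other real systems")
    words = text.split()

    G = nx.Graph()
    G.add_edges_from(zip(words, words[1:]))   # adjacency edges between consecutive words

    print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
    print("highest-degree words:", sorted(G.degree, key=lambda d: d[1], reverse=True)[:3])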

Two key challenges facing natural gas engines used for cogeneration purposes are spark plug life and high NOx emissions. Using Hydrogen Assisted Lean Operation (HALO), these two key issues are addressed simultaneously. HALO operation, as demonstrated in this project, allows stable engine operation to be achieved at ultra-lean conditions (relative air/fuel ratios of 2), which virtually eliminates NOx production. NOx values of 10 ppm (0.07 g/bhp-hr NO) for 8% (LHV H2/LHV CH4) supplementation at an exhaust O2 level of 10% were demonstrated, which is a 98% NOx emissions reduction compared to the leanest unsupplemented operating condition. Spark ignition energy reduction (which will increase ignition system life) was carried out at an oxygen level of 9%, leading to a NOx emission level of 28 ppm (0.13 g/bhp-hr NO). The spark ignition energy reduction testing found that spark energy could be reduced by 22% (from 151 mJ supplied to the coil) with 13% (LHV H2/LHV CH4) hydrogen supplementation, and further reduced by 27% with 17% hydrogen supplementation, with no reportable effect on NOx emissions for these conditions and with stable engine torque output. Another important result is that the combustion duration was shown to be a function of hydrogen supplementation only, not of ignition energy (until the ignitability limit was reached). The next logical step from these promising results is to determine how much the spark energy reduction translates into increased spark plug life, which may be established by durability testing.

The unavoidability of language makes it critical that language policies appeal to some notion of language neutrality as part of their rationale, in order to assuage concerns that the policies might otherwise be unduly discriminatory. However, the idea of language neutrality is deeply ideological in nature, since it is not only an attempt to treat…

The evolution of the faculty of language largely remains an enigma. In this essay, we ask why. Language's evolutionary analysis is complicated because it has no equivalent in any nonhuman species. There is also no consensus regarding the essential nature of the language “phenotype.” According to the “Strong Minimalist Thesis,” the key distinguishing feature of language (and what evolutionary theory must explain) is hierarchical syntactic structure. The faculty of language is likely to have emerged quite recently in evolutionary terms, some 70,000–100,000 years ago, and does not seem to have undergone modification since then, though individual languages do of course change over time, operating within this basic framework. The recent emergence of language and its stability are both consistent with the Strong Minimalist Thesis, which has at its core a single repeatable operation that takes exactly two syntactic elements a and b and assembles them to form the set {a, b}. PMID:25157536
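
The single operation described above (usually called Merge) is simple enough to render directly; the following few lines of Python are a toy formalization of set formation over exactly two syntactic objects, not a claim about how the faculty of language is implemented.

    def merge(a, b):
        """Take exactly two syntactic objects and form the set {a, b}."""
        return frozenset({a, b})

    # Hierarchy arises from repeated application: {read, {the, book}}
    np_ = merge("the", "book")
    vp = merge("read", np_)
    print(vp)   # frozenset({'read', frozenset({'the', 'book'})})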

The Natural Excitation Technique (NExT) is a method of modal testing that allows structures to be tested in their ambient environments. This report is a compilation of developments and results since 1990, and contains a new theoretical derivation of NExT, as well as a verification using analytically generated data. In addition, we compare results from NExT with conventional modal testing for a parked vertical-axis wind turbine; for a rotating turbine, NExT is used to calculate the modal parameters as functions of the rotation speed, since substantial damping is derived from the aeroelastic interactions during operation. Finally, we compare experimental results calculated using NExT with analytical predictions of damping using aeroelastic theory.

Introduction As many as 3% of computed tomography (CT) scans detect pancreatic cysts. Because pancreatic cysts are incidental, ubiquitous and poorly understood, follow-up is often not performed. Pancreatic cysts may have a significant malignant potential and their identification represents a ‘window of opportunity’ for the early detection of pancreatic cancer. The purpose of this study was to implement an automated Natural Language Processing (NLP)-based pancreatic cyst identification system. Method A multidisciplinary team was assembled. NLP-based identification algorithms were developed based on key words commonly used by physicians to describe pancreatic cysts and programmed for automated search of electronic medical records. A pilot study was conducted prospectively in a single institution. Results From March to September 2013, 566 233 reports belonging to 50 669 patients were analysed. The mean number of patients reported with a pancreatic cyst was 88/month (range 78–98). The mean sensitivity and specificity were 99.9% and 98.8%, respectively. Conclusion NLP is an effective tool to automatically identify patients with pancreatic cysts based on electronic medical records (EMR). This highly accurate system can help capture patients ‘at-risk’ of pancreatic cancer in a registry. PMID:25537257

In this thesis, we present two approaches to a rigorous mathematical and algorithmic foundation of quantitative and statistical inference in constraint-based natural language processing. The first approach, called quantitative constraint logic programming, is conceptualized in a clear logical framework, and presents a sound and complete system of quantitative inference for definite clauses annotated with subjective weights. This approach combines a rigorous formal semantics for quantitative inference based on subjective weights with efficient weight-based pruning for constraint-based systems. The second approach, called probabilistic constraint logic programming, introduces a log-linear probability distribution on the proof trees of a constraint logic program and an algorithm for statistical inference of the parameters and properties of such probability models from incomplete, i.e., unparsed data. The possibility of defining arbitrary properties of proof trees as properties of the log-linear probability model and efficiently estimating appropriate parameter values for them permits the probabilistic modeling of arbitrary context-dependencies in constraint logic programs. The usefulness of these ideas is evaluated empirically in a small-scale experiment on finding the correct parses of a constraint-based grammar. In addition, we address the problem of computational intractability of the calculation of expectations in the inference task and present various techniques to approximately solve this task. Moreover, we present an approximate heuristic technique for searching for the most probable analysis in probabilistic constraint logic programs.
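
Although the abstract does not reproduce the formula, a log-linear distribution over proof trees of the kind described is standardly written (notation mine) as

    p_{\lambda}(x) \;=\; \frac{1}{Z_{\lambda}} \exp\Bigl( \sum_{i=1}^{n} \lambda_{i} f_{i}(x) \Bigr),
    \qquad
    Z_{\lambda} \;=\; \sum_{x'} \exp\Bigl( \sum_{i=1}^{n} \lambda_{i} f_{i}(x') \Bigr),

where x ranges over the proof trees of the constraint logic program, the f_i are arbitrary property (feature) functions defined on those trees, and the weights \lambda_i are the parameters to be estimated from the incomplete (unparsed) data.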

Electronically stored clinical documents may contain both structured data and unstructured data. The use of structured clinical data varies by facility, but clinicians are familiar with coded data such as International Classification of Diseases, Ninth Revision codes and Systematized Nomenclature of Medicine-Clinical Terms codes, as well as other data including patient chief complaints or laboratory results. Most electronic health records have much more clinical information stored as unstructured data; for example, clinical narrative such as history of present illness, procedure notes, and clinical decision making is stored as unstructured data. Despite the importance of this information, electronic capture or retrieval of unstructured clinical data has been challenging. The field of natural language processing (NLP) is undergoing rapid development, and existing tools can be successfully used for quality improvement, research, healthcare coding, and even billing compliance. In this brief review, we provide examples of successful uses of NLP using emergency medicine physician visit notes for various projects and the challenges of retrieving specific data and finally present practical methods that can run on a standard personal computer as well as high-end state-of-the-art funded processes run by leading NLP informatics researchers. PMID:26148107

The increasing availability of electronic health records (EHRs) creates opportunities for automated extraction of information from clinical text. We hypothesized that natural language processing (NLP) could substantially reduce the burden of manual abstraction in studies examining outcomes, like cancer recurrence, that are documented in unstructured clinical text, such as progress notes, radiology reports, and pathology reports. We developed an NLP-based system using open-source software to process electronic clinical notes from 1995 to 2012 for women with early-stage incident breast cancers to identify whether and when recurrences were diagnosed. We developed and evaluated the system using clinical notes from 1,472 patients receiving EHR-documented care in an integrated health care system in the Pacific Northwest. A separate study provided the patient-level reference standard for recurrence status and date. The NLP-based system correctly identified 92% of recurrences and estimated diagnosis dates within 30 days for 88% of these. Specificity was 96%. The NLP-based system overlooked 5 of 65 recurrences, 4 because electronic documents were unavailable. The NLP-based system identified 5 other recurrences incorrectly classified as nonrecurrent in the reference standard. If used in similar cohorts, NLP could reduce by 90% the number of EHR charts abstracted to identify confirmed breast cancer recurrence cases at a rate comparable to traditional abstraction. PMID:24488511

Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex®-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material. PMID:20801868

This article will describe research done at the National Institute of Multimedia in Education, Japan and the University of North Texas on the creation of a distributed Internet-based spoken language learning system that would provide more interactive and motivating learning than current multimedia and audiotape-based systems. The project combined…

Natural gas worth tens of billions of dollars is flared annually, which leads to resource waste and environmental issues. This work introduces and analyzes a novel concept for flared gas utilization, wherein the gas that would have been flared is instead used to condense atmospheric moisture. Natural gas, which is currently being flared, can alternatively power refrigeration systems to generate the cooling capacity for large-scale atmospheric water harvesting (AWH). This approach addresses two pressing issues faced by the oil-gas industry, namely gas flaring and sourcing water for oilfield operations like hydraulic fracturing, drilling, and water flooding. Multiple technical pathways to harvest atmospheric moisture using the energy of natural gas are analyzed. A modeling framework is developed to quantify the dependence of water harvest rates on flared gas volumes and ambient weather. Flaring patterns in the Eagle Ford Shale in Texas and the Bakken Shale in North Dakota are analyzed to quantify the benefits of AWH. Overall, the gas currently flared annually in Texas and North Dakota could harvest enough water to meet 11% and 65% of the water consumption in the Eagle Ford and the Bakken, respectively. Daily harvests of up to 30 000 and 18 000 gallons of water can be achieved using the gas currently flared per well in Texas and North Dakota, respectively. In fifty Bakken sites, the water required for fracturing or drilling a new well can be met via onsite flared-gas-based AWH in only 3 weeks and 3 days, respectively. The benefits of this concept are quantified for the Eagle Ford and Bakken Shales. Assessments of the global potential of this concept are presented using data from countries with high flaring activity. This waste-to-value conversion concept is seen to offer significant economic benefits while addressing critical environmental issues pertaining to oil-gas production.
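
The modeling framework is not detailed in the abstract, but its core energy balance can be sketched in a few lines: gas energy drives a refrigeration cycle, and the resulting cooling capacity condenses moisture against the latent heat of condensation plus a sensible-cooling allowance. The Python below is a back-of-envelope version with parameter values that are illustrative assumptions, not the paper's numbers.

    # All parameter values are rough, assumed figures for illustration only.
    GAS_LHV = 37.0e6      # J per m^3 of natural gas (lower heating value)
    ENGINE_EFF = 0.30     # shaft efficiency of a gas engine driving the chiller
    COP = 3.0             # refrigeration coefficient of performance
    LATENT = 2.45e6       # J/kg, latent heat of condensation of water
    SENSIBLE = 0.4e6      # J/kg, allowance for cooling the processed air

    def water_per_day(gas_m3_per_day):
        """Condensed water (kg/day) obtainable from a given flared-gas volume."""
        cooling_joules = gas_m3_per_day * GAS_LHV * ENGINE_EFF * COP
        return cooling_joules / (LATENT + SENSIBLE)

    kg = water_per_day(5000)   # a well flaring ~5,000 m^3/day (assumed figure)
    print(f"{kg:,.0f} kg/day (~{kg / 3.785:,.0f} US gallons/day)")

With these assumed figures the estimate lands in the tens of thousands of gallons per day per well, the same order of magnitude as the harvests reported in the abstract.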

This quarterly report documents work performed under Tasks 10 through 14 of the project entitled: ''Technologies to Enhance Operation of the Existing Natural Gas Compression Infrastructure''. The project objective is to develop and substantiate methods for operating integral engine/compressors in gas pipeline service, which reduce fuel consumption, increase capacity, and enhance mechanical integrity. The report first documents tests performed on a KVG103 engine/compressor installed at Duke's Thomaston Compressor Station. This is the first series of tests performed on a four-stroke engine under this program. Additionally, this report presents results, which complete a comparison of performance before and after modification to install High Pressure Fuel Injection and a Turbocharger on a GMW10 at Williams Station 60. Quarterly Reports 7 and 8 already presented detailed data from tests before and after this modification, but the final quantitative comparison required some further analysis, which is presented in Section 5 of this report. The report further presents results of detailed geometrical measurements and flow bench testing performed on the cylinders and manifolds of the Laboratory Cooper GMVH6 engine being employed for two-stroke engine air balance investigations. These measurements are required to enhance the detailed accuracy in modeling the dynamic interaction of air manifold, exhaust manifold, and in-cylinder fuel-air balance.

Government agencies, landowners, lenders, and investors, to name a few interested parties, increasingly scrutinize companies drilling for and producing oil and natural gas to evaluate their potential impact on the environment and the resulting liabilities. Concerns range from how a planned drilling operation will affect wetlands and aquatic ecosystems to the potential economic effect on an investment in a producing field of governmental or private lawsuits. In this paper, I will discuss the potential environmental obstacles and liabilities that may be presented to oil and gas companies now and in the future as their activities continue to present environmental risks. I will discuss four general categories: (1) environmental permits, licenses, or other governmental authorizations necessary to begin or continue operations; (2) civil and criminal sanctions incurred for failure to comply with environmental statutes and regulations; (3) remedial obligations imposed by environmental laws; and (4) lawsuits by landowners and others claiming property damage or personal injury due to alleged environmental contamination. Many oil and gas companies are not only assessing the effect of environmental legal liabilities on their business but also developing some form of environmental management system to address these risks.

American Electric Power's more than 30 years of experience in operating natural draft cooling towers during freezing winter weather conditions is discussed in the paper. Design features incorporated into the specifications for major rebuild/repack projects for crossflow and counterflow towers to facilitate cold weather operation are also reviewed.

Objectives To test the feasibility of using text mining to depict meaningfully the experience of pain in patients with metastatic prostate cancer, to identify novel pain phenotypes, and to propose methods for longitudinal visualization of pain status. Materials and methods Text from 4409 clinical encounters for 33 men enrolled in a 15-year longitudinal clinical/molecular autopsy study of metastatic prostate cancer (Project to ELIminate lethal CANcer) was subjected to natural language processing (NLP) using Unified Medical Language System-based terms. A four-tiered pain scale was developed, and logistic regression analysis identified factors that correlated with the experience of severe pain during each month. Results NLP identified 6387 pain and 13 827 drug mentions in the text. Graphical displays revealed the pain ‘landscape’ described in the textual records and confirmed dramatically increasing levels of pain in the last years of life in all but two patients, all of whom died from metastatic cancer. Severe pain was associated with receipt of opioids (OR=6.6, p<0.0001) and palliative radiation (OR=3.4, p=0.0002). Surprisingly, no severe or controlled pain was detected in two of 33 subjects’ clinical records. Additionally, the NLP algorithm proved generalizable in an evaluation using a separate data source (889 Informatics for Integrating Biology and the Bedside (i2b2) discharge summaries). Discussion Patterns in the pain experience, undetectable without the use of NLP to mine the longitudinal clinical record, were consistent with clinical expectations, suggesting that meaningful NLP-based pain status monitoring is feasible. Findings in this initial cohort suggest that ‘outlier’ pain phenotypes useful for probing the molecular basis of cancer pain may exist. Limitations The results are limited by a small cohort size and the use of proprietary NLP software. Conclusions We have established the feasibility of tracking longitudinal patterns of pain by text mining.
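A minimal sketch of the downstream statistics only, assuming a hypothetical keyword-based tiering of already-extracted pain mentions and a toy patient-month table; the study's actual UMLS-based extraction and four-tier definitions are not reproduced here.

```python
# Hedged sketch: map already-extracted pain mentions onto a hypothetical four-tier
# scale, then test the association between severe pain and opioid receipt per
# patient-month. Cue phrases and data are illustrative, not the study's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

PAIN_TIERS = {0: "no pain", 1: "controlled pain", 2: "moderate pain", 3: "severe pain"}
SEVERE_TERMS = {"excruciating", "severe pain", "10/10"}        # assumed cue phrases
CONTROLLED_TERMS = {"pain well controlled", "pain managed"}    # assumed cue phrases

def tier_for_mention(text: str) -> int:
    """Assign a crude tier to one pain mention (illustrative rules only)."""
    t = text.lower()
    if any(term in t for term in SEVERE_TERMS):
        return 3
    if any(term in t for term in CONTROLLED_TERMS):
        return 1
    return 2 if "pain" in t else 0

print(tier_for_mention("patient reports excruciating pain overnight"))  # -> 3

# toy patient-month table: [on_opioids, palliative_radiation] -> severe pain (0/1)
X = np.array([[1, 1], [1, 0], [0, 0], [0, 0], [1, 1], [0, 1], [1, 0], [0, 0]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)
print("odds ratios:", np.exp(model.coef_))
```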

Native language acquisition is a natural and non-natural stage-by-stage process. The natural first stage is development of speech and listening skills. In this stage, competency is gained in the home environment. The next, non-natural stage is development of literacy, a cultural skill taught in school. Since oral-aural native language development…

Some reports suggest that there is an increase in the number of children identified as having developmental language impairment (Bercow, 2008), yet resource issues have meant that many speech and language therapy services have compromised provision in some way. Thus, efficient ways of identifying need and prioritizing intervention are required.…

Viewing writing as both a form of language learning and an intellectual skill, this book presents essays on how writers acquire trusted inner voices and the roles schools and teachers can play in helping student writers in the learning process. The essays in the book focus on one of three topics: the language of instruction and how response and…

Learner corpora, electronic collections of spoken or written data from foreign language learners, offer unparalleled access to many hitherto uncovered aspects of learner language, particularly in their error-tagged format. This article aims to demonstrate the role that the learner corpus can play in CALL, particularly when used in conjunction with…

The research scientist or engineer wishing to perform large-scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods, and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very high level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.
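Since the abstract does not show Tensoral syntax, the following NumPy sketch only illustrates the underlying idea, a whole-field tensor operation written once as a high-level statement instead of nested loops over the database; the strain-rate example and grid size are arbitrary choices.

```python
# Hedged illustration (not Tensoral syntax, which the abstract does not show):
# a whole-field tensor operation expressed once over an entire database of
# velocity data, the way a very-high-level tensor language would, rather than
# as the nested loops a conventional host language would require.
import numpy as np

# toy "database": velocity field u[i, x, y, z] with 3 components on a 32^3 grid
u = np.random.rand(3, 32, 32, 32)

def strain_rate(u: np.ndarray, dx: float = 1.0) -> np.ndarray:
    """S_ij = 0.5 * (du_i/dx_j + du_j/dx_i), computed for the whole field at once."""
    grad = np.stack([np.stack(np.gradient(u[i], dx), axis=0) for i in range(3)])
    return 0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))

S = strain_rate(u)          # one high-level statement per tensor operation
print(S.shape)              # (3, 3, 32, 32, 32)
```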

On September 11, 2001, the terrorist attacks on the World Trade Center (WTC) caused an enormous loss of life and property. Systems in place to manage disaster response were strained to the limit because key first responders were among the casualties when the twin towers collapsed. In addition, the evolution of events required immediate response in a rapidly changing and extremely hazardous situation. Rescue, recovery, and cleanup became an overpowering and sustained effort that would utilize the resources of federal, state, and local governments and agencies. One issue during the response to the WTC disaster site that did not receive much attention was that of the limited- and non-English-speaking worker. The Operating Engineers National HAZMAT Program (OENHP), with its history of a Hispanic Outreach Program, was acutely aware of this issue with the Hispanic worker. The Hispanic population comprises approximately 27% of the population of New York City (1). The extremely unfortunate and tragic events of that day provided an opportunity not only to provide assistance to the Hispanic workers, but also to apply lessons learned and conduct studies on worker training with language barriers in a real-life environment. However, due to the circumstances surrounding this tragedy, the study of these issues was conducted primarily by observation. Through partnerships with other organizations such as the Occupational Safety and Health Administration (OSHA), the New York Health Department, the New York Department of Design and Construction (DDC), the New York Committee for Occupational Safety and Health (NYCOSH), and private companies such as 3M and MSA, OENHP was able to provide translated information on hazards, protective measures, fit testing of respirators, and site-specific safety and health training. The OENHP translated materials on hazards and how to protect workers into Spanish to assist in getting the information to the limited- and non-English-speaking workers.

An increasing need for collaboration and resource sharing in the Natural Language Processing (NLP) research and development community motivates efforts to create and share a common data model and a common terminology for all information annotated and extracted from clinical text. We have combined two existing standards, the HL7 Clinical Document Architecture (CDA) and the ISO Graph Annotation Format (GrAF; in development), to develop such a data model, entitled "CDA+GrAF". We experimented with several methods to combine these existing standards, and eventually selected a method wrapping separate CDA and GrAF parts in a common standoff annotation (i.e., separate from the annotated text) XML document. Two use cases, clinical document sections and the 2010 i2b2/VA NLP Challenge (i.e., problems, tests, and treatments, with their assertions and relations), were used to create examples of such standoff annotation documents, and these were successfully validated with the XML schemata provided with both standards. We developed a tool to automatically translate annotation documents from the 2010 i2b2/VA NLP Challenge format to GrAF, and automatically generated 50 annotation documents using this tool, all successfully validated. Finally, we adapted the XSL stylesheet provided with HL7 CDA to allow viewing annotation XML documents in a web browser, and plan to adapt existing tools for translating annotation documents between CDA+GrAF and the UIMA and GATE frameworks. This common data model may ease directly comparing NLP tools and applications, combining their output, transforming and "translating" annotations between different NLP applications, and eventually the "plug-and-play" of different modules in NLP applications. PMID:22197801
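A minimal sketch of the wrapping idea, assuming hypothetical element names; the real HL7 CDA and ISO GrAF schemata are far richer and are not reproduced here.

```python
# Hedged sketch of the standoff wrapping idea: a single XML document holding a
# CDA part and a GrAF part side by side, separate from the annotated text.
# Element and attribute names here are illustrative placeholders, not the
# actual HL7 CDA or ISO GrAF schemata.
import xml.etree.ElementTree as ET

root = ET.Element("annotationDocument")            # hypothetical wrapper element
cda_part = ET.SubElement(root, "cdaPart")
ET.SubElement(cda_part, "section", {"code": "10164-2", "title": "History of Present Illness"})

graf_part = ET.SubElement(root, "grafPart")
node = ET.SubElement(graf_part, "node", {"id": "n1"})
ET.SubElement(node, "link", {"targets": "char 120 135"})          # standoff offsets into the text
ET.SubElement(graf_part, "a", {"ref": "n1", "label": "problem"})  # e.g. an i2b2-style "problem"

print(ET.tostring(root, encoding="unicode"))
```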

Oil and natural gas extraction has dramatically increased in the last decade in the United States due to the increased use of unconventional drilling techniques, which include horizontal drilling and hydraulic fracturing. The impact of these drilling activities on local and regional air quality in oil and gas basins across the country is still relatively unknown, especially in recently developed basins such as the Bakken shale formation. This study is the first to conduct a comprehensive characterization of the regional air quality in the Bakken region. The Bakken shale formation, part of the Williston basin, is located in North Dakota and Montana in the United States and Saskatchewan and Manitoba in Canada. Oil and gas drilling operations can impact air quality in a variety of ways, including the generation of atmospheric particulate matter (PM), hazardous air pollutants, ozone, and greenhouse gas emissions. During the winter especially, PM formation can be enhanced and meteorological conditions can favor increased concentrations of PM and other pollutants. In this study, ground-based measurements throughout the Bakken region in North Dakota and Montana were collected over two consecutive winters to obtain regional trends of air quality impacts from the oil and gas drilling activities. Additionally, one field site had a comprehensive suite of instrumentation operating at high time resolution to obtain a detailed characterization of the atmospheric composition. Measurements included organic carbon and black carbon concentrations in PM, the characterization of inorganic PM, inorganic gases, volatile organic compounds (VOCs), precipitation, and meteorology. Episodes of elevated PM concentrations were further investigated using the local meteorological conditions and regional transport patterns. Episodes of elevated concentrations of nitrogen oxides and sulfur dioxide were also detected. The VOC concentrations were analyzed, and specific VOCs that are known oil and gas tracers were used

"First language acquisition" commonly means the acquisition of a single language in childhood, regardless of the number of languages in a child's natural environment. Language acquisition is variously viewed as predetermined, wondrous, a source of concern, and as developing through formal processes. "First language teaching" concerns schooling in…

Background We introduce the linguistic annotation of a corpus of 97 full-text biomedical publications, known as the Colorado Richly Annotated Full Text (CRAFT) corpus. We further assess the performance of existing tools for performing sentence splitting, tokenization, syntactic parsing, and named entity recognition on this corpus. Results Many biomedical natural language processing systems demonstrated large differences between their previously published results and their performance on the CRAFT corpus when tested with the publicly available models or rule sets. Trainable systems differed widely with respect to their ability to build high-performing models based on this data. Conclusions The finding that some systems were able to train high-performing models based on this corpus is additional evidence, beyond high inter-annotator agreement, that the quality of the CRAFT corpus is high. The overall poor performance of various systems indicates that considerable work needs to be done to enable natural language processing systems to work well when the input is full-text journal articles. The CRAFT corpus provides a valuable resource to the biomedical natural language processing community for evaluation and training of new models for biomedical full-text publications. PMID:22901054
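For readers unfamiliar with how such tool comparisons are scored, the following sketch shows one common convention, exact-match precision, recall, and F1 over annotated spans; the matching criteria actually used in the CRAFT evaluation may differ, and the spans below are toy values.

```python
# Hedged sketch of span-level scoring of a system's sentence-splitting or
# named-entity output against gold annotations such as CRAFT; exact-match
# criteria and toy spans only, not the paper's evaluation protocol.
def prf(gold: set, pred: set) -> tuple:
    """Exact-match precision, recall, and F1 over (start, end) character spans."""
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold_spans = {(0, 12), (14, 29), (31, 40)}      # toy gold entity spans
pred_spans = {(0, 12), (14, 28), (31, 40)}      # toy system output
print(prf(gold_spans, pred_spans))               # ≈ (0.67, 0.67, 0.67)
```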

Coalbed natural gas (CBNG) development in western U.S. states has resulted in an increase in an essential energy resource, but has also resulted in environmental impacts and additional regulatory needs. A concern associated with CBNG development relates to the production of the copious quantities of potentially saline-sodic groundwater required to recover the natural gas, hereafter referred to as CBNG water. Management of CBNG water is a major environmental challenge because of its quantity and quality. In this study, a locally available Na-rich natural zeolite (clinoptilolite) from Wyoming (WY) was examined for its potential to treat CBNG water to remove Na+ and lower the sodium adsorption ratio (SAR, mmol^1/2 L^-1/2). The zeolite material was Ca-modified before being used in column experiments. Column breakthrough studies indicated that a metric tonne (1000 kg) of Ca-WY-zeolite could be used to treat 60,000 L of CBNG water in order to lower the SAR of the CBNG water from 30 to an acceptable level of 10 mmol^1/2 L^-1/2. An integrated treatment process using Na-WY-zeolite for alternately treating hard water and CBNG water was also examined for its potential to treat problematic waters in the region. Based on the results of this study, use of WY-zeolite appears to be a cost-effective water treatment technology for maximizing the beneficial use of poor-quality CBNG water. Ongoing studies are evaluating water treatment techniques involving infiltration ponds lined with zeolite. © 2008 Elsevier B.V. All rights reserved.
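A small worked example of the sodium adsorption ratio arithmetic behind the treatment target, using the standard definition SAR = Na / sqrt((Ca + Mg)/2) with concentrations in meq/L; the water composition and the amount of exchange below are illustrative assumptions, not measured CBNG values.

```python
# Hedged sketch of the SAR arithmetic behind the treatment target; the water
# chemistry values are illustrative, not measured CBNG compositions.
from math import sqrt

def sar(na_meq_l: float, ca_meq_l: float, mg_meq_l: float) -> float:
    """SAR = Na / sqrt((Ca + Mg) / 2), all concentrations in meq/L."""
    return na_meq_l / sqrt((ca_meq_l + mg_meq_l) / 2.0)

# illustrative CBNG-like water: Na-rich, low Ca and Mg
na, ca, mg = 30.0, 1.0, 1.0
print(f"untreated SAR ≈ {sar(na, ca, mg):.1f}")          # ≈ 30

# Ca-form zeolite exchanges Na+ out of solution and releases Ca2+ (equal meq):
na_removed = 8.0                                         # meq/L taken up (assumed)
print(f"treated SAR ≈ {sar(na - na_removed, ca + na_removed, mg):.1f}")   # ≈ 10
```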

The purpose of this guide is to stimulate the development of nature centers. The guide offers possible solutions for common problems which many schools face when considering an on-campus nature center, for example, lack of readily available open space, minimum knowledge of how to develop and maintain an on-campus nature center, and lack of…

The hemicellulose xylan constitutes a major portion of plant biomass, a renewable feedstock available for conversion to biofuels and other bioproducts. β-xylosidase operates in the deconstruction of the polysaccharide to fermentable sugars. Glycoside hydrolase family 43 is recognized as a source of highly active β-xylosidases, some of which could have practical applications. The biochemical details of four GH43 β-xylosidases (those from Alkaliphilus metalliredigens QYMF, Bacillus pumilus, Bacillus subtilis subsp. subtilis str. 168, and Lactobacillus brevis ATCC 367) are examined here. Sedimentation equilibrium experiments indicate that the quaternary states of three of the enzymes are mixtures of monomers and homodimers (B. pumilus) or mixtures of homodimers and homotetramers (B. subtilis and L. brevis). k_cat and k_cat/K_m values of the four enzymes are higher for xylobiose than for xylotriose, suggesting that the enzyme active sites comprise two subsites, as has been demonstrated by the X-ray structures of other GH43 β-xylosidases. The K_i values for D-glucose (83.3-357 mM) and D-xylose (15.6-70.0 mM) of the four enzymes are moderately high. The four enzymes display good temperature (K_t0.5 ∼ 45 °C) and pH stabilities (>4.6 to <10.3). At pH 6.0 and 25 °C, the enzyme from L. brevis ATCC 367 displays the highest reported k_cat and k_cat/K_m on natural substrates xylobiose (407 s^-1, 138 s^-1 mM^-1), xylotriose (235 s^-1, 80.8 s^-1 mM^-1), and xylotetraose (146 s^-1, 32.6 s^-1 mM^-1). PMID:23053115
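As a hedged illustration of how the reported kinetic constants translate into rates, the sketch below back-calculates K_m as k_cat / (k_cat/K_m) for the L. brevis enzyme and evaluates the standard Michaelis-Menten equation; the enzyme and substrate concentrations chosen are arbitrary.

```python
# Hedged sketch: Michaelis-Menten rates reconstructed from the reported k_cat and
# k_cat/K_m values for the L. brevis enzyme (K_m back-calculated as k_cat / (k_cat/K_m));
# enzyme and substrate concentrations are arbitrary illustrative choices.
def mm_rate(kcat: float, kcat_over_km: float, e_mM: float, s_mM: float) -> float:
    """Initial rate v (mM/s) from kcat (1/s), kcat/Km (1/(s*mM)), [E] and [S] (mM)."""
    km = kcat / kcat_over_km
    return kcat * e_mM * s_mM / (km + s_mM)

substrates = {           # (kcat in 1/s, kcat/Km in 1/(s*mM)), as reported for L. brevis
    "xylobiose":    (407.0, 138.0),
    "xylotriose":   (235.0, 80.8),
    "xylotetraose": (146.0, 32.6),
}
for name, (kcat, eff) in substrates.items():
    print(f"{name:13s} Km ≈ {kcat/eff:.1f} mM, v ≈ {mm_rate(kcat, eff, 1e-4, 2.0):.3f} mM/s")
```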

This project has documented and demonstrated the feasibility of technologies and operational choices for companies who operate the large installed fleet of integral engine compressors in pipeline service. Continued operation of this fleet is required to meet the projected growth of the U.S. gas market. Applying project results will meet the goals of the DOE-NETL Natural Gas Infrastructure program to enhance integrity, extend life, improve efficiency, and increase capacity, while managing NOx emissions. These benefits will translate into lower cost, more reliable gas transmission, and options for increasing deliverability from the existing infrastructure on high demand days. The power cylinders on large-bore, slow-speed integral engine/compressors do not, in general, combust equally. Variations in cylinder pressure between power cylinders occur cycle-to-cycle. These variations affect both individual cylinder performance and unit average performance. The magnitude of the variations in power cylinder combustion is dependent on a variety of parameters, including air/fuel ratio. Large variations in cylinder performance and peak firing pressure can lead to detonation and misfires, both of which can be damaging to the unit. Reducing the variation in combustion pressure and moving the high- and low-performing cylinders closer to the mean is the goal of engine balancing. The benefit of improving the state of the engine 'balance' is a small reduction in heat rate and a significant reduction in both crankshaft strain and emissions. A new method invented during the course of this project is combustion pressure ratio (CPR) balancing. This method is more effective than current methods because it naturally accounts for differences in compression pressure, which result from cylinder-to-cylinder differences in the amount of air flowing through the inlet ports and trapped at port closure. It also helps avoid compensation for low compression pressure by the addition of excess fuel.
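A minimal sketch of the CPR balancing idea described above, taking CPR as peak firing pressure divided by compression pressure for each cylinder and trimming fuel toward the unit-mean CPR; the proportional adjustment rule and the cylinder pressures are illustrative assumptions, not the project's published procedure.

```python
# Hedged sketch of combustion pressure ratio (CPR) balancing: CPR is taken here
# as peak firing pressure over compression pressure per power cylinder, and fuel
# is nudged to pull each cylinder's CPR toward the unit mean. The proportional
# rule and the pressures below are illustrative assumptions only.
peak_psi        = [980.0, 1040.0, 905.0, 1010.0, 960.0, 1025.0]   # toy cylinder data
compression_psi = [410.0,  400.0, 395.0,  405.0, 398.0,  402.0]

cpr = [p / c for p, c in zip(peak_psi, compression_psi)]
target = sum(cpr) / len(cpr)

GAIN = 0.5   # assumed proportional gain on the fuel trim
fuel_trim = [GAIN * (target - r) / target for r in cpr]   # positive = add fuel

for i, (r, trim) in enumerate(zip(cpr, fuel_trim), start=1):
    print(f"cyl {i}: CPR {r:.2f}  ->  fuel trim {trim:+.1%}")
```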

This paper studies phonological processes and constraints on early phonological and lexical development, as well as the strategies employed by a young Spanish-, Portuguese-, and Hebrew-speaking child, Nurit (the author's niece), in the construction of her early lexicon. Nurit's linguistic development is compared to that of another Spanish-, Portuguese-, and Hebrew-speaking child, Noam (the author's son). Noam's and Nurit's linguistic development is contrasted with that of Berman's (1977) English- and Hebrew-speaking daughter (Shelli). The simultaneous acquisition of similar (closely related) languages such as Spanish and Portuguese versus that of nonrelated languages such as English and Hebrew yields different results: children acquiring similar languages seem to prefer maintenance as a strategy for the construction of their early lexicon, while children exposed to nonrelated languages appear to prefer reduction to a large extent (Faingold, 1990). The Spanish- and Portuguese-speaking children's high accuracy stems from a wider choice of target words, where the diachronic development of two closely related languages provides a simplified model lexicon to the child. PMID:8865623

An important source of human exposure to radiation is the natural world, including cosmic rays, cosmogenic radionuclides, natural terrestrial radionuclides, and radon isotopes and their decay products. Considerable effort is being expended on a worldwide basis to characterize exposure to the natural radiation environment and to determine the important pathways by which that exposure results in the dose to tissue that leads to injury and disease. The problem of background exposure to naturally occurring radioactivity has been the subject of research since the initial discovery of the radioactivity of uranium and thorium. However, with the advent of artificial sources of radiation, with both benefits and harms, the nature and magnitude of the natural radiation environment and its effects on various populations are important in the development of overall public health strategies as ALARA principles are applied.

A Piagetian, language-experience approach to the early teaching of reading to normal young and older developmentally delayed preoperational children was developed. The approach was discussed in a specially-developed manual which was field tested by 42 teachers among 390 children in 16 school systems in several states and in Canada. Subjects were…

The implementation of gender-fair language is often associated with negative reactions and hostile attacks on people who propose a change. This was also the case in Sweden in 2012, when a third, gender-neutral pronoun hen was proposed as an addition to the already existing Swedish pronouns for she (hon) and he (han). The pronoun hen can be used both generically, when gender is unknown or irrelevant, and as a transgender pronoun for people who categorize themselves outside the gender dichotomy. In this article we review the process from 2012 to 2015. No other language has so far added a third gender-neutral pronoun, existing in parallel with two gendered pronouns, that has actually reached the broader population of language users. This makes the situation in Sweden unique. We present data on attitudes toward hen during the past 4 years and analyze how time is associated with attitudes in the process of introducing hen to the Swedish language. In 2012 the majority of the Swedish population was negative toward the word, but already in 2014 there was a significant shift toward more positive attitudes. Time was one of the strongest predictors of attitudes, even when other relevant factors were controlled for. The actual use of the word also increased, although to a lesser extent than the attitudes shifted. We conclude that new words challenging the binary gender system evoke hostile and negative reactions, but also that attitudes can normalize rather quickly. We view this finding as very positive and hope it can motivate language amendments and initiatives for gender-fair language, even though the first responses may be negative. PMID:26191016