Sample records for advanced computer applications

The Internal Fluid Mechanics Division of the NASA Lewis Research Center is combining the key elements of computational fluid dynamics, aerothermodynamic experiments, and advanced computational technology to bring internal computational fluid mechanics (ICFM) to a state of practical application for aerospace propulsion system design. This paper presents an overview of efforts underway at NASA Lewis to advance and apply computational technology to ICFM. These efforts include the use of modern software engineering principles for code development, the development of an AI-based user interface for large codes, the establishment of a high-performance data communications network to link ICFM researchers and facilities, and the application of parallel processing to speed up computationally intensive and/or time-critical ICFM problems. A multistage compressor flow physics program is cited as an example of efforts to use advanced computational technology to enhance a current NASA Lewis ICFM research program.

Rapidly progressing computer technology, ever-increasing expectations of patients, and a confusing medicolegal environment require a clarification of the role of computer imaging applications. Advances in computer technology and its applications are reviewed. A brief historical discussion is included for perspective. Improvements in both hardware and software with the advent of digital imaging have allowed great increases in speed and accuracy in patient imaging. This facilitates doctor-patient communication and possibly more realistic patient expectations. Patients seeking cosmetic surgery now often expect preoperative imaging. Although society in general has become more litigious, a literature search up to 1998 reveals no lawsuits directly involving computer imaging. It appears that conservative utilization of computer imaging by the facial plastic surgeon may actually reduce liability and promote communication. Recent advances have significantly enhanced the value of computer imaging in the practice of facial plastic surgery, and these technological advances appear to make computer imaging a useful technique for that practice. Inclusion of computer imaging should be given serious consideration as an adjunct to clinical practice.

Cardiothoracic diseases result in substantial morbidity and mortality. Chest computed tomography (CT) has been an imaging modality of choice for assessing a host of chest diseases, and technologic advances have enabled the emergence of coronary CT angiography as a robust noninvasive test for cardiac imaging. Technologic developments in CT have also enabled the application of dual-energy CT scanning for assessing pulmonary vascular and neoplastic processes. Concerns over increasing radiation dose from CT scanning are being addressed with introduction of more dose-efficient wide-area detector arrays and iterative reconstruction techniques. This review article discusses the technologic innovations in CT and their effect on cardiothoracic applications.

Advancements in hardware and software technology are summarized with specific emphasis on spacecraft computer capabilities. Available state of the art technology is reviewed and candidate architectures are defined.

The power grid is becoming far more complex as a result of the grid evolution meeting an information revolution. Due to the penetration of smart grid technologies, the grid is evolving at an unprecedented speed, and the information infrastructure is fundamentally improved by a large number of smart meters and sensors that produce amounts of data several orders of magnitude larger. How to pull data in, perform analysis, and put information out in a real-time manner is a fundamental challenge in smart grid operation and planning. The future power grid requires high performance computing to be one of the foundational technologies for developing the algorithms and tools needed to handle the significantly increased complexity. New techniques and computational capabilities are required to meet the demands for higher reliability and better asset utilization, including advanced algorithms and computing hardware for large-scale modeling, simulation, and analysis. This chapter summarizes the computational challenges in the smart grid and the need for high performance computing, and presents examples of how high performance computing might be used for future smart grid operation and planning.
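
As a minimal illustration of the kind of real-time streaming analysis described above, the sketch below flags smart-meter readings that deviate sharply from a rolling baseline. The class name, window size, and threshold are illustrative choices, not part of any system described in the abstract.

```python
from collections import deque

class RollingAnomalyDetector:
    """Flag a reading when it deviates from the rolling mean by more than
    `threshold` rolling standard deviations (a toy streaming analytic)."""

    def __init__(self, window=10, threshold=3.0):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, x):
        """Return True if reading x looks anomalous, then add it to the window."""
        if len(self.buf) >= 3:  # need a few samples before judging
            mean = sum(self.buf) / len(self.buf)
            var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
            std = var ** 0.5
            anomalous = std > 0 and abs(x - mean) > self.threshold * std
        else:
            anomalous = False
        self.buf.append(x)
        return anomalous

# Steady readings pass; a sudden spike is flagged in real time.
det = RollingAnomalyDetector()
for reading in [10.0, 10.1, 9.9, 10.05]:
    det.observe(reading)
print(det.observe(50.0))  # the spike is flagged
```

A production pipeline would run this kind of test per meter over millions of streams, which is where the high performance computing requirement comes from.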

MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous, event- and data-driven environment. A large-grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic location of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test, and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
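
The redundant-execution-with-voting idea can be sketched in a few lines. This is a generic majority vote, not the actual MAX protocol (which the abstract does not specify); the function name and fault model are illustrative.

```python
from collections import Counter

def vote(results):
    """Majority-vote over results from redundant executions of a task.

    Returns the value produced by a strict majority of replicas, or None
    if no majority exists (i.e., the fault cannot be masked).
    """
    if not results:
        return None
    value, count = Counter(results).most_common(1)[0]
    return value if count > len(results) // 2 else None

# Three redundant executions of a critical task; one replica is faulty.
replicas = [42, 42, 41]
print(vote(replicas))  # prints 42: the majority value masks the single fault
```

With triple redundancy, any single faulty replica is outvoted; with no majority the voter reports failure instead of guessing.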

Today’s genomic experiments have to process the so-called “biological big data” that is now reaching the size of terabytes and petabytes. To process this huge amount of data, scientists may require weeks or months if they use their own workstations. Parallelism techniques and high-performance computing (HPC) environments can be applied to reduce the total processing time and to ease the management, treatment, and analysis of this data. However, running bioinformatics experiments in HPC environments such as clouds, grids, clusters, and graphics processing units requires expertise from scientists to integrate computational, biological, and mathematical techniques and technologies. Several solutions have already been proposed to allow scientists to process their genomic experiments using HPC capabilities and parallelism techniques. This article presents a systematic literature review surveying the most recently published research involving genomics and parallel computing. Our objective is to gather the main characteristics, benefits, and challenges that can be considered by scientists when running their genomic experiments to benefit from parallelism techniques and HPC capabilities. PMID:26604801
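
The simplest parallelism pattern the article alludes to is embarrassingly parallel map over independent sequence chunks. The sketch below, with hypothetical function names and toy data, fans a per-read analysis out over worker processes; real pipelines apply the same pattern to terabyte-scale FASTQ files.

```python
from multiprocessing import Pool

def gc_content(seq):
    """Fraction of G/C bases in one sequence chunk."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def parallel_gc(sequences, workers=4):
    """Map a per-chunk analysis over many sequences in parallel.

    Each sequence is processed independently, so the work splits
    cleanly across processes (or cluster nodes, in a real HPC setup).
    """
    with Pool(workers) as pool:
        return pool.map(gc_content, sequences)

if __name__ == "__main__":
    # Toy stand-ins for sequencing reads.
    reads = ["GATTACA", "GGCC", "ATAT"]
    print(parallel_gc(reads))
```

The same map shape ports to clusters (e.g., via a job scheduler or Spark) because no chunk depends on another.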

The "Computers-for-edu" case is designed to provide students with hands-on exposure to creating Advanced Business Application Programming (ABAP) reports and dialogue programs, as well as navigating various mySAP Enterprise Resource Planning (ERP) transactions needed by ABAP developers. The case requires students to apply a wide variety…

Improvements to the numerical linear algebra routines for use in new Navier-Stokes codes, specifically Tim Barth's unstructured grid code, with spin-offs to TRANAIR, are reported. A fast distance-calculation routine for Navier-Stokes codes using the new one-equation turbulence models was written. The primary focus of this work was improving matrix-iterative methods. New algorithms have been developed which activate the full potential of classical Cray-class computers as well as distributed-memory parallel computers.
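
For readers unfamiliar with matrix-iterative methods, the sketch below shows the simplest member of the family, Jacobi iteration, on a small system. This is purely illustrative: the solvers developed in the work above are far more sophisticated (preconditioned Krylov methods on parallel machines).

```python
import numpy as np

def jacobi(A, b, tol=1e-10, max_iter=500):
    """Jacobi iteration for Ax = b on a diagonally dominant matrix.

    Each sweep updates every unknown from the previous iterate only,
    which is what makes this family of methods easy to parallelize.
    """
    x = np.zeros_like(b, dtype=float)
    D = np.diag(A)                 # diagonal entries
    R = A - np.diagflat(D)         # off-diagonal remainder
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Diagonally dominant test system, so Jacobi converges.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([9.0, 13.0])
print(jacobi(A, b))  # close to the exact solution of Ax = b
```

Because every component of `x_new` depends only on the old iterate, the sweep maps naturally onto both vector machines and distributed-memory computers.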

Requirements for modern and future control systems for large projects such as the International Linear Collider demand high availability for control system components. The telecom industry recently produced an open hardware specification, the Advanced Telecom Computing Architecture (ATCA), aimed at better reliability, availability, and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, has proved to be stable, and is well represented by a number of vendors; ATCA is an industry standard for highly available systems. In parallel, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications, such as the Hardware Platform Interface and the Application Interface Specification, which provide an extensive description of highly available systems, services, and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption for accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.

Biotechnologists have stayed at the forefront for practical applications for computing. As hardware and software for computing have evolved, the latest advances have found eager users in the area of bioprocessing. Accomplishments and their significance can be appreciated by tracing the history and the interplay between the computing tools and the problems that have been solved in bioprocessing.

The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to the application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence in the modeling, analysis, sensitivity studies, optimization, design, and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as a pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help identify future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state of the art in applications of advanced computational technology to the analysis, design, prototyping, and operation of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; and writing state-of-the-art monographs and NASA special publications on timely topics.

Advanced laptop and small personal computer technology is presented in the form of viewgraphs. The following areas of hand-carried computer and mobile workstation technology are covered: background, applications, high-end products, technology trends, requirements for the Control Center application, and recommendations for the future.

The Advanced Computed Tomography Inspection System (ACTIS) was developed by NASA Marshall to support solid propulsion test programs. ACTIS represents a significant advance in state-of-the-art inspection systems. Its flexibility and superior technical performance have made ACTIS very popular, both within and outside the aerospace community. Through technology utilization efforts, ACTIS has been applied to inspection problems in the commercial aerospace, lumber, automotive, and nuclear waste disposal industries. ACTIS has even been used to inspect items of historical interest. ACTIS has consistently produced valuable results, providing information that was unattainable through conventional inspection methods. Although many successes have already been demonstrated, the full potential of ACTIS has not yet been realized. It is currently being applied in the commercial aerospace industry by Boeing. Smaller systems, based on ACTIS technology, are becoming increasingly available. This technology has much to offer small businesses and industry, especially in identifying design and process problems early in the product development cycle to prevent defects. Several options are available to businesses interested in this technology.

Detonation spraying is a well-known technology applied for the deposition of diverse powders, in particular cermets, to form various protective coatings. Current progress is related to the recently developed technique of computer-controlled detonation spraying and its application in non-traditional domains such as the development of composite and graded coatings or the metallization of plastics. The gas detonation parameters are analyzed to estimate the efficiency of different fuels in varying particle-in-flight velocity and temperature over a broad range, thus providing conditions to spray diverse powders. A particle of a given nature and fixed size can be sprayed in the solid state or strongly overheated above the melting point by varying the quantity of the explosive gas mixture, which is computer-controlled. Particle-in-flight velocities and temperatures are calculated and compared with jet monitoring by a CCD-camera-based diagnostic tool and with experimental data on splat formation.

In the computation of flowfields about complex configurations, it is very difficult to construct a boundary-fitted coordinate system. An alternative approach is to use several grids at once, each of which is generated independently. This procedure is called the multiple-grids or zonal-grids approach; its applications are investigated here. The method is conservative, providing conservation of fluxes at grid interfaces. The Euler equations are solved numerically on such grids for various configurations. The numerical scheme used is the finite-volume technique with a three-stage Runge-Kutta time integration. The code is vectorized and programmed to run on the CDC VPS-32 computer. Steady-state solutions of the Euler equations are presented and discussed. The solutions include: low-speed flow over a sphere, high-speed flow over a slender body, supersonic flow through a duct, and supersonic internal/external flow interaction for an aircraft configuration at various angles of attack. The results demonstrate that the multiple-grids approach, along with the conservative interfacing, is capable of computing the flows about complex configurations where the use of a single grid system is not possible.
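
The conservative-interfacing idea can be illustrated in one dimension: if the two grids share a single numerical flux at their common face, whatever leaves one grid enters the other exactly. The sketch below uses a toy upwind scheme for linear advection (not the Euler equations or the Runge-Kutta scheme of the paper); all names and values are illustrative.

```python
import numpy as np

def advect_step(u, a, dx, dt, inflow=0.0):
    """One conservative upwind finite-volume step for u_t + a*u_x = 0 (a > 0).

    f[i] is the numerical flux through the left face of cell i; the cell
    update is the difference of its two face fluxes, so cell totals only
    change by what crosses faces.
    """
    f = np.empty(len(u) + 1)
    f[0] = a * inflow      # flux entering through the grid's left boundary
    f[1:] = a * u          # upwind: each face carries the state to its left
    return u - dt / dx * (f[1:] - f[:-1])

def step_two_grids(u1, u2, a, dx, dt):
    """Advance two abutting 1D grids with a shared interface flux.

    The single flux a*u1[-1] at the common face is computed once and used
    by both grids, so mass leaving grid 1 enters grid 2 exactly.
    """
    interface_state = u1[-1]
    new1 = advect_step(u1, a, dx, dt, inflow=0.0)
    new2 = advect_step(u2, a, dx, dt, inflow=interface_state)
    return new1, new2

u1 = np.array([1.0, 2.0])
u2 = np.array([3.0, 0.0])
n1, n2 = step_two_grids(u1, u2, a=1.0, dx=1.0, dt=0.5)
print(n1.sum() + n2.sum())  # equals u1.sum() + u2.sum(): nothing lost at the seam
```

Mismatched or doubly-computed interface fluxes are exactly what a non-conservative zonal coupling gets wrong; sharing one flux per face is the fix.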

Modern graphics processing units (GPUs) provide architectures and new programming models that make it possible to harness their large processing power and to design computational fluid dynamics (CFD) simulations with both high performance and low cost. Possibilities for the use of GPUs for the simulation of internal fluid flows are discussed. The finite volume method is applied to solve the three-dimensional (3D) unsteady compressible Euler and Navier-Stokes equations on unstructured meshes. Compute Unified Device Architecture (CUDA) technology is used for the programming implementation of parallel computational algorithms. Solutions of some fluid dynamics problems on GPUs are presented, and approaches to optimization of the CFD code related to the use of different types of memory are discussed. Speedup of the solution on GPUs with respect to the solution on a central processing unit (CPU) is compared for different meshes and different methods of distributing input data into blocks. Performance measurements show that the numerical schemes developed achieve speedups of 20 to 50 on GPU hardware compared to the CPU reference implementation. The results obtained provide a promising basis for designing a GPU-based software framework for applications in CFD.
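
What makes a finite-volume update suitable for CUDA is that every cell can be computed independently from the old state. Since GPU hardware cannot be assumed here, the sketch below contrasts a scalar loop (the CPU baseline) with the same toy upwind update written in data-parallel form; the vectorized version is the shape that maps one-cell-per-thread onto a CUDA kernel. All names and values are illustrative.

```python
import numpy as np

def update_loop(u, dt, dx):
    """Scalar per-cell loop: stands in for the CPU reference code."""
    out = u.copy()
    for i in range(1, len(u)):
        out[i] = u[i] - dt / dx * (u[i] - u[i - 1])
    return out

def update_vectorized(u, dt, dx):
    """Same update, data-parallel form: every cell is computed from the
    *old* state only, with no dependence between cells, so a GPU can
    assign one thread per cell."""
    out = u.copy()
    out[1:] = u[1:] - dt / dx * (u[1:] - u[:-1])
    return out

u = np.linspace(0.0, 1.0, 50)
print(np.allclose(update_loop(u, 0.4, 1.0), update_vectorized(u, 0.4, 1.0)))
```

In an actual CUDA port, the remaining work (the focus of the abstract's optimization discussion) is placing `u` in the right memory spaces, e.g. coalesced global loads and shared-memory staging of neighbor values.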

Tibial tuberosity advancement (TTA) is a promising method for the treatment of cruciate ligament rupture in dogs that usually involves the implantation of a titanium cage as a bone implant. This cage is non-biodegradable and fails to provide adequate implant-bone tissue integration. The objective of this work is to propose a new process chain for designing and manufacturing an alternative biodegradable cage that can fulfill specific patient requirements. A three-dimensional finite element model (3D FEM) of the TTA system was first created to evaluate the mechanical environment in the cage domain during different stages of the dog's walk. The cage microstructure was then optimized using a topology optimization tool, which addresses the assessed local mechanical requirements and at the same time ensures maximum permeability to allow nutrient and oxygen supply to the implant core. The designed cage was then biofabricated by 3D powder printing of tricalcium phosphate cement. This work demonstrates that the combination of a 3D FEM with a topology optimization approach enabled the design of a novel cage for the TTA application, with tailored permeability and mechanical properties, that can be successfully 3D printed in a biodegradable bioceramic material. These results support the potential of the design optimization strategy and fabrication method for the development of customized and bioresorbable implants for bone repair.

I was invited, along with a colleague from Stony Brook, to serve as guest editor for a special issue of Computing in Science and Engineering. This is the guest editors' introduction to that special issue. Alan and I wrote the introduction and edited the four papers published in the issue.

On behalf of the High Performance Computing Modernization Program (HPCMP) and the NASA Advanced Supercomputing (NAS) Division, a study was conducted to assess the role of supercomputers in the computational aeroelasticity of aerospace vehicles. The study is mostly based on the responses to a web-based questionnaire designed to capture the nuances of high performance computational aeroelasticity, particularly on parallel computers. A procedure is presented to assign a fidelity-complexity index to each application. Case studies based on major applications using HPCMP resources are presented.

Computational vehicle models for the analysis of lightweight material performance in automobiles have been developed through collaboration between Oak Ridge National Laboratory, the National Highway Transportation Safety Administration, and George Washington University. The vehicle models have been verified against experimental data obtained from vehicle collisions. The crashed vehicles were analyzed, and the main impact energy dissipation mechanisms were identified and characterized. Important structural parts were extracted and digitized and directly compared with simulation results. High-performance computing played a key role in the model development because it allowed for rapid computational simulations and model modifications. The deformation of the computational model shows a very good agreement with the experiments. This report documents the modifications made to the computational model and relates them to the observations and findings on the test vehicle. Procedural guidelines are also provided that the authors believe need to be followed to create realistic models of passenger vehicles that could be used to evaluate the performance of lightweight materials in automotive structural components.

Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis on Venus. The theoretical method is based on a single-fluid, steady, dissipationless, magnetohydrodynamic continuum model and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special-purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc., of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.

The development status and range of application of techniques in computational structural mechanics (CSM) are evaluated with a view to advances in computational models for material behavior, discrete-element technology, quality assessment, the control of numerical simulations of structural response, hybrid analysis techniques, techniques for large-scale optimization, and the impact of new computing systems on CSM. Primary pacers of CSM development encompass the prediction and analysis of novel materials for structural components, computational strategies for large-scale structural calculations, and the assessment of response prediction reliability together with its adaptive improvement.

During the last 30 years, research into the pathogenesis and progression of cardiovascular disease has had to employ a multidisciplinary approach involving a wide range of subject areas, from molecular and cell biology to computational mechanics and experimental solid and fluid mechanics. In general, research was driven by the need to provide answers to questions of critical importance for disease management. Ongoing improvements in the spatial resolution of medical imaging equipment coupled to an exponential growth in the capacity, flexibility and speed of computational techniques have provided a valuable opportunity for numerical simulations and complex experimental techniques to make a contribution to improving the diagnosis and clinical management of many forms of cardiovascular disease. This paper contains a review of recent progress in the numerical simulation of cardiovascular mechanics, focusing on three particular areas: patient-specific modeling and the optimization of surgery in pediatric cardiology, evaluating the risk of rupture in aortic aneurysms, and noninvasive characterization of intraventricular flow in the management of heart failure.

Traditional dynamic security assessment is limited by several factors and thus falls short in providing real-time information that is predictive for power system operation. These factors include the steady-state assumption of current operating points, static transfer limits, and low computational speed. This paper addresses these factors and frames predictive dynamic security assessment. The primary objective of predictive dynamic security assessment is to enhance the functionality and computational process of dynamic security assessment through the use of high-speed phasor measurements and the application of advanced computing technologies for faster-than-real-time simulation. This paper presents algorithms, computing platforms, and simulation frameworks that constitute the predictive dynamic security assessment capability. Examples of phasor application and fast computation for dynamic security assessment are included to demonstrate the feasibility and speed enhancement for real-time applications.
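
The core computation behind faster-than-real-time dynamic simulation is time-stepping machine dynamics such as the classical swing equation. The sketch below integrates a single-machine swing equation with forward Euler; it is a toy stand-in (production DSA tools use detailed multi-machine models and implicit integrators), and all parameter values and names are illustrative.

```python
import numpy as np

def swing_trajectory(delta0, omega0, Pm, Pmax, H=5.0, D=0.1, f0=60.0,
                     dt=0.01, steps=500):
    """Forward-Euler integration of the classical single-machine swing equation:

        M * domega/dt = Pm - Pmax*sin(delta) - D*omega,   M = 2H / (2*pi*f0)
        ddelta/dt     = omega

    Returns the rotor-angle trajectory delta(t). Simulating this faster
    than wall-clock time is what lets DSA predict instability before it
    happens.
    """
    M = 2.0 * H / (2.0 * np.pi * f0)
    delta, omega = delta0, omega0
    deltas = []
    for _ in range(steps):
        domega = (Pm - Pmax * np.sin(delta) - D * omega) / M
        delta, omega = delta + dt * omega, omega + dt * domega
        deltas.append(delta)
    return np.array(deltas)

# Start at the stable equilibrium angle arcsin(Pm/Pmax): the rotor stays put.
delta_eq = float(np.arcsin(0.8 / 1.0))
traj = swing_trajectory(delta_eq, 0.0, Pm=0.8, Pmax=1.0)
print(traj[-1])
```

In a predictive DSA pipeline, the initial state (`delta0`, `omega0`) would come from high-speed phasor measurements rather than an assumed operating point.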

Companies within the aerospace industry have internal materials models, often proprietary, based on phenomenological, statistical, and neural network … distributions); it also applies to some mechanical properties (e.g., measurement of elevated-temperature dwell fatigue under certain environmental conditions) … many, if not all, near-term integrated ICME applications can be integrated the old-fashioned way, by piping information between software programs.

This is the final report for DOE award DE-SC0001132, Advanced Artificial Science: the development of an artificial science and engineering research infrastructure to facilitate innovative computational modeling, analysis, and application in interdisciplinary areas of scientific investigation. This document describes the achievement of the award's goals and the research it made possible.

Scientific simulation, in tandem with theory and experiment, is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas, with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales, together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present-generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to stimulate improved cross-cutting collaborations with other fields.

The paper reviews the growth and advances in computational capabilities for hypersonic applications over the period from the mid-1980s to the present day. The current status of code development issues such as surface and field grid generation, algorithms, physical and chemical modeling, and validation is provided. A brief description of some of the major codes being used at NASA Langley Research Center for hypersonic continuum and rarefied flows is provided, along with their capabilities and deficiencies. A number of application examples are presented, and future areas of research to enhance the accuracy, reliability, efficiency, and robustness of computational codes are discussed.

This report documents a special study to define a 32-bit, radiation-hardened, SEU-tolerant flight computer architecture and to investigate current or near-term technologies and development efforts that contribute to the Advanced Flight Computer (AFC) design and development. An AFC processing node architecture is defined. Each node may consist of a multi-chip processor as needed. The modular, building-block approach uses VLSI technology and packaging methods that demonstrate a feasible AFC module in 1998 that meets the AFC goals. The defined architecture and approach demonstrate a clear low-risk, low-cost path to the 1998 production goal, with intermediate prototypes in 1996.

Some of the applications of advanced welding techniques are shown in this poster presentation. Included are brief explanations of the use on the Ares I and Ares V launch vehicle and on the Space Shuttle Launch vehicle. Also included are microstructural views from four advanced welding techniques: Variable Polarity Plasma Arc (VPPA) weld (fusion), self-reacting friction stir welding (SR-FSW), conventional FSW, and Tube Socket Weld (TSW) on aluminum.

The Advanced Telecommunications Computing Architecture (ATCA) is a new industry open standard for electronics instrument modules and shelves being evaluated for the International Linear Collider (ILC). It is the first industrial standard designed for High Availability (HA). ILC availability simulations have shown clearly that the capabilities of ATCA are needed in order to achieve acceptable integrated luminosity. The ATCA architecture looks attractive for beam instruments and detector applications as well. This paper provides an overview of ongoing R&D including application of HA principles to power electronics systems.

Presents some novel applications of the computer to physics research. They include (1) a computer program for calculating Compton scattering, (2) speech simulation, (3) data analysis in spectrometry, and (4) measurement of complex alpha-particle spectrum. Bibliography. (LC)
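
A minimal sketch of the physics behind a Compton-scattering calculation such as that in item (1): the wavelength shift depends only on the scattering angle and the electron's Compton wavelength. The function name and structure are illustrative, not taken from the original program.

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
C = 2.99792458e8        # speed of light, m/s

def compton_shift(theta_rad: float) -> float:
    """Wavelength shift (in meters) of a photon scattered through angle theta:
    delta_lambda = (h / m_e c) * (1 - cos theta)."""
    return (H / (M_E * C)) * (1.0 - math.cos(theta_rad))

# At 90 degrees the shift equals the electron's Compton wavelength, ~2.43 pm.
shift_90 = compton_shift(math.pi / 2)
```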

The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

Increased emphasis on sustained supersonic or hypersonic cruise has revived interest in the supersonic throughflow fan as a possible component in advanced propulsion systems. Use of a fan that can operate with a supersonic inlet axial Mach number is attractive from the standpoint of reducing the inlet losses incurred in diffusing the flow from a supersonic flight Mach number to a subsonic one at the fan face. The design of the experiment using advanced computational codes to calculate the components required is described. The rotor was designed using existing turbomachinery design and analysis codes modified to handle fully supersonic axial flow through the rotor. A two-dimensional axisymmetric throughflow design code plus a blade element code were used to generate fan rotor velocity diagrams and blade shapes. A quasi-three-dimensional, thin shear layer Navier-Stokes code was used to assess the performance of the fan rotor blade shapes. The final design was stacked and checked for three-dimensional effects using a three-dimensional Euler code interactively coupled with a two-dimensional boundary layer code. The nozzle design in the expansion region was analyzed with a three-dimensional parabolized viscous code which corroborated the results from the Euler code. A translating supersonic diffuser was designed using these same codes.

Nucleic acid-based logic devices were first introduced in 1994. Since then, science has seen the emergence of new logic systems for mimicking mathematical functions, diagnosing disease and even imitating biological systems. The unique features of nucleic acids, such as facile and high-throughput synthesis, Watson-Crick complementary base pairing, and predictable structures, together with the aid of programming design, have led to the widespread applications of nucleic acids (NA) for logic gate and computing in biotechnology and biomedicine. In this feature article, the development of in vitro NA logic systems will be discussed, as well as the expansion of such systems using various input molecules for potential cellular, or even in vivo, applications.
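
As a hedged, purely logical illustration of the nucleic-acid AND-gate idea (not a chemical simulation, and the domain sequences below are hypothetical): the output is "released" only when every toehold domain of the gate finds a Watson-Crick complementary input strand.

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(seq: str) -> str:
    """Base-wise Watson-Crick complement (strand orientation ignored here)."""
    return "".join(COMPLEMENT[b] for b in seq)

def and_gate(gate_domains, inputs) -> bool:
    """Output strand released iff every gate toehold domain is matched by a
    complementary input strand -- a Boolean abstraction of strand
    displacement, not a kinetic model."""
    return all(any(complement(d) == s for s in inputs) for d in gate_domains)

# Hypothetical two-input AND gate with toehold domains "ACGT" and "TTGC":
gate = ("ACGT", "TTGC")
```

With both complementary inputs ("TGCA", "AACG") present the gate fires; with only one of them it stays off.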

With consumer concerns over food quality and safety increasing, the food industry has paid ever more attention in recent years to the development of rapid and reliable food-evaluation systems. As a result, there is a great need for manufacturers and retailers to operate effective real-time assessments for food quality and safety during food production and processing. Computer vision, a nondestructive assessment approach, has the capability to estimate the characteristics of food products with its advantages of fast speed, ease of use, and minimal sample preparation. Specifically, computer vision systems are feasible for classifying food products into specific grades, detecting defects, and estimating properties such as color, shape, size, surface defects, and contamination. Therefore, in order to track the latest research developments of this technology in the agri-food industry, this review aims to present the fundamentals and instrumentation of computer vision systems with details of applications in quality assessment of agri-food products from 2007 to 2013 and also discuss its future trends in combination with spectroscopy.
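
A toy sketch of the defect-detection idea described above (the threshold and grading rule are illustrative inventions, not values from the review): count pixels darker than a defect threshold in a grayscale image and reject the sample if their fraction is too high.

```python
def grade_sample(image, defect_threshold=50, max_defect_fraction=0.02):
    """Grade a grayscale image (rows of 0-255 intensities): dark pixels are
    treated as surface defects; too many of them rejects the product."""
    pixels = [p for row in image for p in row]
    defects = sum(1 for p in pixels if p < defect_threshold)
    fraction = defects / len(pixels)
    return ("reject" if fraction > max_defect_fraction else "accept", fraction)

# Synthetic 4x4 image of a mostly bright sample with one dark defect pixel:
img = [[200, 210, 205, 198],
       [202,  30, 207, 201],
       [199, 204, 206, 200],
       [203, 205, 202, 208]]
status, frac = grade_sample(img)
```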

Papers given at the conference present the results of theoretical research on aerodynamic flow problems requiring the use of advanced computers. Topics discussed include two-dimensional configurations, three-dimensional configurations, transonic aircraft, and the space shuttle.

Please find attached our final technical report for the BACTER Institute award. BACTER was created as a graduate and postdoctoral training program for the advancement of computational biology applied to questions of relevance to bioenergy research.

The convergence of computer systems and communication technologies is moving toward switched, high-performance modular system architectures based on high-speed switched interconnections. Multi-core processors have become a more promising route to high-performance systems, and traditional parallel-bus system architectures (VME/VXI, cPCI/PXI) are giving way to new, higher-speed serial switched interconnections. The fundamentals of system architecture development are a compact modular component strategy, low-power processors, new serial high-speed interface chips on the board, and high-speed switched fabrics for SAN architectures. An overview of advanced modular concepts and new international standards for developing high-performance embedded and compact modular systems for real-time applications is given.

We survey results in lattice quantum chromodynamics from groups in the USQCD Collaboration. The main focus is on physics, but many aspects of the discussion are aimed at an audience of computational physicists.

The use of commercial computer technology in specific aerospace mission applications can reduce the cost and project cycle time required for the development of special-purpose computer systems. Additionally, the pace of technological innovation in the commercial market has made new computer capabilities available for demonstrations and flight tests. Three areas of research and development being explored by the Portable Computer Technology Project at NASA Ames Research Center are the application of commercial client/server network computing solutions to crew support and payload operations, the analysis of requirements for portable computing devices, and testing of wireless data communication links as extensions to the wired network. This paper will present computer architectural solutions to portable workstation design including the use of standard interfaces, advanced flat-panel displays and network configurations incorporating both wired and wireless transmission media. It will describe the design tradeoffs used in selecting high-performance processors and memories, interfaces for communication and peripheral control, and high resolution displays. The packaging issues for safe and reliable operation aboard spacecraft and aircraft are presented. The current status of wireless data links for portable computers is discussed from a system design perspective. An end-to-end data flow model for payload science operations from the experiment flight rack to the principal investigator is analyzed using capabilities provided by the new generation of computer products. A future flight experiment on-board the Russian MIR space station will be described in detail including system configuration and function, the characteristics of the spacecraft operating environment, the flight qualification measures needed for safety review, and the specifications of the computing devices to be used in the experiment. The software architecture chosen shall be presented. An analysis of the

The Advanced Biomedical Computing Center (ABCC), located in Frederick, Maryland, provides high-performance computing (HPC) resources for both NIH/NCI intramural scientists and the extramural biomedical research community. Its mission is to provide HPC support, to engage in collaborative research, and to conduct in-house research in various areas of computational biology and biomedical research.

Good morning. Welcome to SciDAC 2006 and Denver. I share greetings from the new Undersecretary for Energy, Ray Orbach. Five years ago SciDAC was launched as an experiment in computational science. The goal was to form partnerships among science applications, computer scientists, and applied mathematicians to take advantage of the potential of emerging terascale computers. This experiment has been a resounding success. SciDAC has emerged as a powerful concept for addressing some of the biggest challenges facing our world. As significant as these successes were, I believe there is also significance in the teams that achieved them. In addition to their scientific aims these teams have advanced the overall field of computational science and set the stage for even larger accomplishments as we look ahead to SciDAC-2. I am sure that many of you are expecting to hear about the results of our current solicitation for SciDAC-2. I’m afraid we are not quite ready to make that announcement. Decisions are still being made and we will announce the results later this summer. Nearly 250 unique proposals were received and evaluated, involving literally thousands of researchers, postdocs, and students. These collectively requested more than five times our expected budget. This response is a testament to the success of SciDAC in the community. In SciDAC-2 our budget has been increased to about 70 million for FY 2007 and our partnerships have expanded to include the Environment and National Security missions of the Department. The National Science Foundation has also joined as a partner. These new partnerships are expected to expand the application space of SciDAC, and broaden the impact and visibility of the program. We have, with our recent solicitation, expanded to turbulence, computational biology, and groundwater reactive modeling and simulation. We are currently talking with the Department’s applied energy programs about risk assessment, optimization of complex systems - such

In this paper, the authors will discuss recent advances in computing power and the prospects for using these new capabilities for studying plasticity and failure. They will first review the new capabilities made available with parallel computing. They will discuss how these machines perform and how well their architecture might work on materials issues. Finally, they will give some estimates on the size of problems possible using these computers.

Reliability analysis of fault tolerant computer systems for critical applications is complicated by several factors. These modeling difficulties are discussed and dynamic fault tree modeling techniques for handling them are described and demonstrated. Several advanced fault tolerant computer systems are described, and fault tree models for their analysis are presented. HARP (Hybrid Automated Reliability Predictor) is a software package developed at Duke University and NASA Langley Research Center that is capable of solving the fault tree models presented.
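
For static portions of such models, the top-event probability follows directly from the gate structure when basic events are independent. A minimal evaluator, for illustration only (HARP itself also handles dynamic gates and Markov solution methods, which this sketch does not):

```python
def failure_prob(node):
    """Evaluate a static fault tree of independent basic events.
    Nodes: ("basic", p), ("and", [children]), ("or", [children])."""
    kind = node[0]
    if kind == "basic":
        return node[1]
    probs = [failure_prob(child) for child in node[1]]
    if kind == "and":          # gate fails only if all children fail
        out = 1.0
        for p in probs:
            out *= p
        return out
    if kind == "or":           # gate fails if any child fails
        ok = 1.0
        for p in probs:
            ok *= 1.0 - p
        return 1.0 - ok
    raise ValueError(f"unknown gate type: {kind}")

# Illustrative top event: (A AND B) OR C, with made-up probabilities.
tree = ("or", [("and", [("basic", 0.01), ("basic", 0.02)]), ("basic", 0.001)])
top = failure_prob(tree)
```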

Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

The atomic pair distribution function (PDF) and extended x-ray absorption fine structure (EXAFS) techniques fill a hole in conventional crystallographic analysis, which resolves the average long-range structure of a material but inadequately determines deviations from the average. These techniques provide structural information on the sub-nanometer scale and are helping characterize modern materials. Despite their successes, PDF and EXAFS often fall short of adequately describing complex nanostructured materials. Parallel PDF and EXAFS refinement, or corefinement, is one attempt at extending the applicability of these techniques. Corefinement combines the best parts of PDF and EXAFS, the chemical-specific and short-range detail of EXAFS and the short and intermediate-range information from the PDF. New ab initio methods are also being employed to find structures from the PDF. These techniques use the bond length information encoded in the PDF to assemble structures without a model. On another front, new software has been developed to introduce the PDF method to a larger community. Broad awareness of the PDF technique will help drive its future development.
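
For reference, the reduced pair distribution function at the center of these techniques is conventionally defined as follows (a standard formulation from the PDF literature, not quoted from this abstract):

```latex
G(r) = 4\pi r\,\bigl[\rho(r) - \rho_0\bigr]
     = \frac{2}{\pi}\int_0^{\infty} Q\,\bigl[S(Q) - 1\bigr]\,\sin(Qr)\,\mathrm{d}Q ,
```

where \rho(r) is the atomic pair density, \rho_0 the average number density, and S(Q) the total-scattering structure function measured in the diffraction experiment.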

Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behavior. Recent advances in simulations of magnetically-confined plasmas are reviewed in this paper with illustrative examples chosen from associated research areas such as microturbulence, magnetohydrodynamics, and other topics. Progress has been stimulated in particular by the exponential growth of computer speed along with significant improvements in computer technology.

Nanomaterials research is one of the fastest growing contemporary research areas. The unprecedented properties of these materials have meant that they are being incorporated into products very quickly. Regulatory agencies are concerned that they cannot assess the potential hazards of these materials adequately, as data on the biological properties of nanomaterials are still relatively limited and expensive to acquire. Computational modelling methods have much to offer in helping understand the mechanisms by which toxicity may occur, and in predicting the likelihood of adverse biological impacts of materials not yet tested experimentally. This paper reviews the progress these methods, particularly those that are QSAR-based, have made in understanding and predicting potentially adverse biological effects of nanomaterials, as well as the limitations and pitfalls of these methods.

The ACP Branchbus, a high speed differential bus for data movement in multiprocessing and data acquisition environments, is described. This bus was designed as the central bus in the ACP multiprocessing system. In its full implementation with 16 branches and a bus switch, it will handle data rates of 160 MByte/sec and allow reliable data transmission over inter-rack distances. We also summarize applications of the ACP system in experimental data acquisition, triggering and monitoring, with special attention paid to FASTBUS environments.

The rapid advancement of computational capability, including speed and memory size, has prompted the wide use of computational fluid dynamics (CFD) codes to simulate complex flow systems. CFD simulations are used to study the operating problems encountered in a system, to evaluate the impacts of operation/design parameters on the performance of a system, and to investigate novel design concepts. CFD codes are generally developed based on the conservation laws of mass, momentum, and energy that govern the characteristics of a flow. The governing equations are simplified and discretized for a selected computational grid system, and numerical methods are selected to calculate approximate flow properties. For turbulent, reacting, and multiphase flow systems, the complex processes relating to these aspects of the flow, i.e., turbulent diffusion, combustion kinetics, interfacial drag and heat and mass transfer, etc., are described in mathematical models, based on a combination of fundamental physics and empirical data, that are incorporated into the code. CFD simulation has been applied to a large variety of practical and industrial-scale flow systems.
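
The discretization step described above can be illustrated on the simplest conservation equation. A minimal sketch (explicit 1-D diffusion; the grid and coefficients are invented for illustration, not from any particular CFD code):

```python
def diffuse_1d(u, alpha, dx, dt, steps):
    """March the 1-D diffusion equation du/dt = alpha * d2u/dx2 forward with
    an explicit finite-difference scheme on a uniform grid; the boundary
    values are held fixed (Dirichlet). Stable only if alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this time step"
    u = list(u)
    for _ in range(steps):
        new = u[:]                          # boundaries stay fixed
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
        u = new
    return u

# A hot spike in the middle of a cold rod spreads out, conserving total heat:
profile = diffuse_1d([0, 0, 1, 0, 0], alpha=1.0, dx=1.0, dt=0.25, steps=1)
```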

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project was to create effective new algorithms for solving N-body problems by computer simulation. The authors concentrated on developing advanced classical and quantum Monte Carlo techniques. For simulations of phase transitions in classical systems, they produced a framework generalizing the famous Swendsen-Wang cluster algorithms for Ising and Potts models. For spin-glass-like problems, they demonstrated the effectiveness of an extension of the multicanonical method for the two-dimensional, random bond Ising model. For quantum mechanical systems, they generated a new method to compute the ground-state energy of systems of interacting electrons. They also improved methods to compute excited states when the diffusion quantum Monte Carlo method is used and to compute longer time dynamics when the stationary phase quantum Monte Carlo method is used.
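
For context, the cluster algorithms developed in the project improve on the single-spin-flip Metropolis baseline sketched below (a textbook method, not the project's code; the lattice size and temperature are arbitrary):

```python
import math
import random

def metropolis_ising(L=8, beta=0.6, sweeps=200, seed=1):
    """Single-spin-flip Metropolis sampling of the 2-D Ising model on an
    L x L periodic lattice (J = 1, zero field); returns the run-averaged
    absolute magnetization per spin. Cluster methods such as Swendsen-Wang
    decorrelate far faster near the critical point than this baseline."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            d_e = 2 * spins[i][j] * nn  # energy change of flipping spin (i, j)
            if d_e <= 0 or rng.random() < math.exp(-beta * d_e):
                spins[i][j] = -spins[i][j]
        mags.append(abs(sum(sum(row) for row in spins)) / (L * L))
    return sum(mags) / len(mags)
```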

Quantum light-matter interfaces are at the heart of photonic quantum technologies. Quantum memories for photons, where non-classical states of photons are mapped onto stationary matter states and preserved for subsequent retrieval, are technical realizations enabled by exquisite control over interactions between light and matter. The ability of quantum memories to synchronize probabilistic events makes them a key component in quantum repeaters and quantum computation based on linear optics. This critical feature has motivated many groups to dedicate theoretical and experimental research to develop quantum memory devices. In recent years, exciting new applications, and more advanced developments of quantum memories, have proliferated. In this review, we outline some of the emerging applications of quantum memories in optical signal processing, quantum computation and non-linear optics. We review recent experimental and theoretical developments, and their impacts on more advanced photonic quantum technologies based on quantum memories.

fixturing applications, in addition to the existing computer-aided engineering capabilities. Helix TWT Manufacturing has implemented a tooling and fixturing... illustrates the major features of this computer network. The backbone of our system is a Sytek Broadband Network (LAN) which interconnects terminals and... automatic network analyzer (FANA) which electrically characterizes the slow-wave helices of traveling-wave tubes (TWTs), both for engineering design

This paper examines the applications most commonly run on the supercomputers at the Numerical Aerospace Simulation (NAS) facility. It analyzes the extent to which such applications are fundamentally oriented to vector computers, and whether or not they can be efficiently implemented on hierarchical memory machines, such as systems with cache memories and highly parallel, distributed memory systems.

A digital computer subsystem fault-tolerant concept was defined, and the potential benefits and costs of such a subsystem were assessed when used as the central element of a new transport's flight control system. The derived advanced reconfigurable computer system (ARCS) is a triple-redundant computer subsystem that automatically reconfigures, under multiple fault conditions, from triplex to duplex to simplex operation, with redundancy recovery if the fault condition is transient. The study included criteria development covering factors at the aircraft's operation level that would influence the design of a fault-tolerant system for commercial airline use. A new reliability analysis tool was developed for evaluating redundant, fault-tolerant system availability and survivability; and a stringent digital system software design methodology was used to achieve design/implementation visibility.
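
The benefit of the triplex stage can be quantified with the standard triple-modular-redundancy formula (the static case only; it ignores the triplex-to-duplex-to-simplex reconfiguration and transient recovery that ARCS adds):

```python
def r_tmr(r: float) -> float:
    """Reliability of a majority-voted triplex built from three independent
    channels, each of reliability r: the system works iff at least two of
    the three channels work, so R = 3r^2 - 2r^3 (perfect voter assumed)."""
    return 3 * r ** 2 - 2 * r ** 3

# Voting helps only when each channel is itself reliable (r > 0.5):
# r = 0.99 gives R ~ 0.9997, but r = 0.4 gives R = 0.352, worse than one channel.
```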

This book chapter, to be published in Advances in Computers, Volume 78, in 2010, describes applications of data-intensive computing (DIC). This is an invited chapter resulting from a previous publication on DIC. This work summarizes efforts coming out of PNNL's Data Intensive Computing Initiative. Advances in technology have empowered individuals with the ability to generate digital content with mouse clicks and voice commands. Digital pictures, emails, text messages, home videos, audio, and webpages are common examples of digital content that are generated on a regular basis. Data-intensive computing facilitates human understanding of complex problems. Data-intensive applications provide timely and meaningful analytical results in response to exponentially growing data complexity and associated analysis requirements through the development of new classes of software, algorithms, and hardware.

The focus of this project was threefold: to survey the technology of reconfigurable computing and determine its level of maturity and suitability for NASA applications; to better understand and assess the effectiveness of the reconfigurable design paradigm utilized within the HAL-15 reconfigurable computer system, which was made available to NASA MSFC for this purpose by Star Bridge Systems, Inc.; and to implement at least one application that would benefit from the performance levels possible with reconfigurable hardware. It was originally proposed that experiments in fault tolerance and dynamic reconfigurability would be performed, but time constraints mandated that these be pursued as future research.

Specialized computer architectures for advanced robotics applications at ORNL/CESAR are based on the hypercube ensemble concept. The current status of algorithm development is summarized and results for robot dynamics and navigation problems are presented. 13 refs., 1 tab.

The Internet and growth of computer networks have eliminated geographic barriers, creating an environment where education can be brought to a student no matter where that student may be. The success of distance learning programs and the availability of many Web-supported applications and multimedia resources have increased the effectiveness of…

... Advanced Scientific Computing Advisory Committee AGENCY: Department of Energy, Office of Science. ACTION... hereby given that the Advanced Scientific Computing Advisory Committee will be renewed for a two-year... (DOE), on the Advanced Scientific Computing Research Program managed by the Office of...

The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

We review advanced accelerators for medical applications with respect to the following key technologies: (i) higher-RF electron linear accelerators (hereafter “linacs”); (ii) optimization of alignment for proton linacs, cyclotrons and synchrotrons; (iii) superconducting magnets; (iv) laser technology. Advanced accelerators for medical applications are categorized into two groups. The first group consists of compact medical linacs with high RF, cyclotrons and synchrotrons downsized by optimization of alignment and superconducting magnets. The second group comprises laser-based acceleration systems aimed at medical applications in the future. Laser-plasma electron/ion accelerating systems for cancer therapy and laser dielectric accelerating systems for radiation biology are mentioned. Since the second group has important potential for a compact system, the current status of the established energy and intensity, and of the required stability, is given.

The objective of the project was to develop a method for theoretical understanding of nuclear fuel materials whose physical and thermophysical properties can be predicted from first principles using a novel dynamical mean field method for electronic structure calculations. We concentrated our study on uranium, plutonium, their oxides, nitrides, carbides, as well as some rare earth materials whose 4f electrons provide a simplified framework for understanding complex behavior of the f electrons. We addressed the issues connected to the electronic structure, lattice instabilities, phonon and magnon dynamics as well as thermal conductivity. This allowed us to evaluate characteristics of advanced nuclear fuel systems using computer based simulations and avoid costly experiments.

This paper describes a working flight computer multichip module (MCM) developed jointly by JPL and TRW under their respective research programs in a collaborative fashion. The MCM is fabricated by nCHIP and is packaged within a 2 by 4 inch Al package from Coors. This flight computer module is one of three modules under development by NASA's Advanced Flight Computer (AFC) program. Further development of the Mass Memory and the programmable I/O MCM modules will follow. The three building block modules will then be stacked into a 3D MCM configuration. The mass and volume of the flight computer MCM, 89 grams and 1.5 cubic inches respectively, represent a major enabling technology for future deep space as well as commercial remote sensing applications.

An ASDA model developed to evaluate the heat and mass transfer characteristics of advanced pressurized suit design concepts for low pressure or vacuum planetary applications is presented. The model is based on a generalized 3-layer suit that uses the Systems Integrated Numerical Differencing Analyzer '85 in conjunction with a 41-node FORTRAN routine. The latter simulates the transient heat transfer and respiratory processes of a human body in a suited environment. The user options for the suit encompass a liquid cooled garment, a removable jacket, a CO2/H2O permeable layer, and a phase change layer.
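The transient nodal analysis such routines perform can be sketched in miniature. The following fragment is a generic lumped-parameter thermal network update; the node count, capacitances, conductances and time step are invented for illustration and are not taken from the ASDA model or the SINDA '85 code:

```python
# Minimal lumped-parameter transient thermal network, illustrating the
# kind of nodal analysis performed by SINDA-style routines.
# All values (capacitances, conductances, time step) are illustrative.

def step(temps, capacitances, conductances, dt):
    """Advance node temperatures one explicit-Euler time step.

    `conductances` maps a node-index pair (i, j) to the thermal
    conductance G between those nodes; heat flow is G * (Tj - Ti).
    """
    new = list(temps)
    for (i, j), g in conductances.items():
        q = g * (temps[j] - temps[i])      # heat flow from node j into node i
        new[i] += q * dt / capacitances[i]
        new[j] -= q * dt / capacitances[j]
    return new

# Two equal nodes starting at different temperatures equilibrate
# toward the mean, as energy conservation requires.
temps = [310.0, 290.0]          # K
caps = [100.0, 100.0]           # J/K
cond = {(0, 1): 5.0}            # W/K
for _ in range(10000):
    temps = step(temps, caps, cond, dt=0.1)
```

A real suit model adds many more nodes, temperature-dependent conductances, and source terms for metabolic and respiratory heat, but the per-step update has this same shape.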

Feature extraction and evaluation are two procedures common to the development of any pattern recognition application. These features are the primary pieces of information which are used to train the pattern recognition tool, whether that tool is a neural network, a fuzzy logic rulebase, or a genetic algorithm. Careful selection of the features to be used by the pattern recognition tool can significantly streamline the overall development and training of the solution for the pattern recognition application. This report summarizes the development of an integrated, computer-based software package called the Feature Extraction Toolbox (FET), which can be used for the development and deployment of solutions to generic pattern recognition problems. This toolbox integrates a number of software techniques for signal processing, feature extraction and evaluation, and pattern recognition, all under a single, user-friendly development environment. The toolbox has been developed to run on a laptop computer, so that it may be taken to a site and used to develop pattern recognition applications in the field. A prototype version of this toolbox has been completed and is currently being used for applications development on several projects in support of the Department of Energy.
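Feature evaluation of the kind described above can be illustrated with a standard class-separability criterion. The Fisher score used below is a common textbook choice, not necessarily the method implemented in the FET toolbox, and the feature values are invented:

```python
# Ranking candidate features by class separability, a common step in
# feature evaluation for pattern recognition.  The Fisher-score criterion
# here is a standard choice, shown for illustration only.
from statistics import mean, pvariance

def fisher_score(class_a, class_b):
    """Between-class separation divided by within-class spread."""
    ma, mb = mean(class_a), mean(class_b)
    va, vb = pvariance(class_a), pvariance(class_b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

# Two hypothetical features measured on samples from two classes:
# feature f1 separates the classes well, feature f2 barely at all.
f1 = {"a": [0.1, 0.2, 0.15], "b": [0.9, 1.0, 0.95]}
f2 = {"a": [0.4, 0.6, 0.5],  "b": [0.45, 0.55, 0.5]}

scores = {name: fisher_score(f["a"], f["b"])
          for name, f in [("f1", f1), ("f2", f2)]}
best = max(scores, key=scores.get)   # feature to keep for training
```

Discarding low-scoring features before training is what streamlines the development of the downstream neural network, fuzzy rulebase, or genetic algorithm.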

Oshkosh Corporation (OSK) is taking an aggressive approach to implementing advanced technologies, including hybrid electric vehicle (HEV) technology, throughout their commercial and military product lines. These technologies have important implications for OSK's commercial and military customers, including fleet fuel efficiency, quiet operational modes, additional on-board electric capabilities, and lower thermal signature operation. However, technical challenges exist with selecting the optimal HEV components and design to work within the performance and packaging constraints of specific vehicle applications. OSK desires to use unique expertise developed at the Department of Energy's (DOE) National Renewable Energy Laboratory (NREL), including HEV modeling and simulation. These tools will be used to overcome technical hurdles to implementing advanced heavy vehicle technologies that meet performance requirements while improving fuel efficiency.

complex computational issues are pursued, and that several vendors remain at the leading edge of supercomputing capability in the U.S. In ... pursuing the ASC program to help assure that HPC advances are available to the broad national security community. As in the past, many ... apply HPC to technical problems related to weapons physics, but that are entirely unclassified. Examples include explosive astrophysical

The Low Cost Microsensors (LCMS) Program recently demonstrated state-of-the-art imagery in a long-range infrared (IR) sensor built upon an uncooled vanadium oxide (VOx) 640 x 480 format focal plane array (FPA) engine. The 640 x 480 sensor is applicable to long-range surveillance and targeting missions. The intent of this DUS&T effort was to further reduce the cost, weight, and power of uncooled IR sensors, and to increase the capability of these sensors, thereby expanding their applicability to military and commercial markets never before addressed by thermal imaging. In addition, the Advanced Uncooled Thermal Imaging Sensors (AUTIS) Program extended this development to light-weight, compact unmanned aerial vehicle (UAV) applications.

During the past decades there has been a continuous growth in the number of physical and societal problems that have been successfully studied and solved by means of computational modeling and simulation. Distinctively, a number of these are important scientific problems ranging in scale from the atomic to the cosmic. For example, ionization is a phenomenon as ubiquitous in modern society as the glow of fluorescent lights and the etching on silicon computer chips; but it was not until 1999 that researchers finally achieved a complete numerical solution to the simplest example of ionization, the collision of a hydrogen atom with an electron. On the opposite scale, cosmologists have long wondered whether the expansion of the Universe, which began with the Big Bang, would ever reverse itself, ending the Universe in a Big Crunch. In 2000, analysis of new measurements of the cosmic microwave background radiation showed that the geometry of the Universe is flat, and thus the Universe will continue expanding forever. Both of these discoveries depended on high performance computer simulations that utilized computational tools included in the Advanced Computational Testing and Simulation (ACTS) Toolkit. The ACTS Toolkit is an umbrella project that brought together a number of general purpose computational tool development projects funded and supported by the U.S. Department of Energy (DOE). These tools, which have been developed independently, mainly at DOE laboratories, make it easier for scientific code developers to write high performance applications for parallel computers. They tackle a number of computational issues that are common to a large number of scientific applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. The ACTS Toolkit Project enables the use of these tools by a much wider community of computational scientists, and promotes code portability, reusability, and reduction of duplicated effort.

Purpose: Teaching advanced technical concepts in a computer science program to students of different technical backgrounds presents many challenges. The purpose of this paper is to present a detailed experimental pedagogy in teaching advanced computer science topics, such as computer networking, telecommunications and data structures using…

X-ray computed tomography (CT) has recently been experiencing remarkable growth as a result of technological advances and new clinical applications. This paper reviews the essential physics of X-ray CT and its major components. Also reviewed are recent promising applications of CT, ie, CT-guided procedures, CT-based thermometry, photon-counting technology, hybrid PET-CT, use of ultrafast-high pitch scanners, and potential use of dual-energy CT for material differentiations. These promising solutions and a better knowledge of their potentialities should allow CT to be used in a safe and effective manner in several clinical applications. PMID:26089707

The computer programs pertaining to planetary quarantine activities within the Project Engineering Division, both at the Air Force Eastern Test Range and on site at the Jet Propulsion Laboratory, are identified. A brief description of each program and its inputs is given, and typical program outputs are shown.

This paper describes an undergraduate-level course designed to teach the applications of computers that are most relevant in the social sciences, especially psychology. After an introduction to the basic concepts and terminology of computing, separate units were devoted to word processing, data analysis, data acquisition, artificial intelligence,…

The nanobiocatalyst (NBC) is an emerging innovation that synergistically integrates advanced nanotechnology with biotechnology and promises exciting advantages for improving enzyme activity, stability, capability and engineering performances in bioprocessing applications. NBCs are fabricated by immobilizing enzymes with functional nanomaterials as enzyme carriers or containers. In this paper, we review the recent developments of novel nanocarriers/nanocontainers with advanced hierarchical porous structures for retaining enzymes, such as nanofibres (NFs), mesoporous nanocarriers and nanocages. Strategies for immobilizing enzymes onto nanocarriers made from polymers, silicas, carbons and metals by physical adsorption, covalent binding, cross-linking or specific ligand spacers are discussed. The resulting NBCs are critically evaluated in terms of their bioprocessing performances. Excellent performances are demonstrated through enhanced NBC catalytic activity and stability due to conformational changes upon immobilization and localized nanoenvironments, and NBC reutilization by assembling magnetic nanoparticles into NBCs to defray the high operational costs associated with enzyme production and nanocarrier synthesis. We also highlight several challenges associated with the NBC-driven bioprocess applications, including the maturation of large-scale nanocarrier synthesis, design and development of bioreactors to accommodate NBCs, and long-term operations of NBCs. We suggest these challenges are to be addressed through joint collaboration of chemists, engineers and material scientists. Finally, we have demonstrated the great potential of NBCs in manufacturing bioprocesses in the near future through successful laboratory trials of NBCs in carbohydrate hydrolysis, biofuel production and biotransformation.

Under the DOE SciDAC project on Accelerator Science and Technology, a suite of electromagnetic codes has been under development at SLAC that are based on unstructured grids for higher accuracy, and use parallel processing to enable large-scale simulation. The new modeling capability is supported by SciDAC collaborations on meshing, solvers, refinement, optimization and visualization. These advances in computational science are described and the application of the parallel eigensolver Omega3P to the cavity design for the International Linear Collider is discussed.
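Discretizing a cavity on a grid reduces the mode computation to a generalized eigenvalue problem K x = λ M x, whose lowest eigenvalues give the resonant frequencies. The sketch below applies plain inverse power iteration to a tiny dense standard eigenproblem (M = I) purely to illustrate the idea; it is not Omega3P's algorithm, which targets very large sparse parallel systems:

```python
# Finding the smallest eigenvalue of a stiffness-like matrix by inverse
# power iteration -- a toy stand-in for the large sparse eigensolves
# that parallel codes such as Omega3P perform in cavity design.

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def smallest_eigenvalue(A, iters=50):
    x = [1.0] * len(A)
    for _ in range(iters):
        x = solve(A, x)                       # one inverse-power step
        norm = max(abs(v) for v in x)
        x = [v / norm for v in x]
    Ax = mat_vec(A, x)                        # Rayleigh quotient
    return sum(a * b for a, b in zip(Ax, x)) / sum(v * v for v in x)

# 1-D Laplacian stiffness matrix tridiag(-1, 2, -1) of size n has
# smallest eigenvalue 2*(1 - cos(pi/(n+1))).
K = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
lam = smallest_eigenvalue(K)
```

Each inverse-power step amplifies the eigenvector of smallest magnitude eigenvalue; the production solvers replace the dense solve with preconditioned sparse linear algebra distributed over many processors.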

Computational fluid dynamics has developed to the stage where it is an indispensable part of aerospace research and design. In view of advances made in aerospace applications, the computational approach can be used for biofluid mechanics research. Several flow simulation methods developed for aerospace problems are briefly discussed for potential application to biofluids, especially to blood flow analysis.

Next-generation sequencing technologies have revolutionized sequence-based research with the advantages of high throughput, high sensitivity, and high speed. RNA-seq is now widely used to uncover multiple facets of the transcriptome and facilitate biological applications. However, the large-scale data analyses associated with RNA-seq harbor challenges. In this study, we present a detailed overview of the applications of this technology and the challenges that need to be addressed, including data preprocessing, differential gene expression analysis, alternative splicing analysis, variant detection and allele-specific expression, pathway analysis, co-expression network analysis, and applications combining various experimental procedures beyond the achievements that have been made. Specifically, we discuss essential principles of computational methods that are required to meet the key challenges of RNA-seq data analyses, development of various bioinformatics tools, challenges associated with RNA-seq applications, and examples that represent the advances made so far in the characterization of the transcriptome. PMID:26609224

In this work we report on the development of the Signature Molecular Descriptor (or Signature) for use in the solution of inverse design problems as well as in high-throughput screening applications. The ultimate goal of using Signature is to identify novel and non-intuitive chemical structures with optimal predicted properties for a given application. We demonstrate this in three studies: green solvent design, glucocorticoid receptor ligand design and the design of inhibitors for Factor XIa. In many areas of engineering, compounds are designed and/or modified in incremental ways which rely upon heuristics or institutional knowledge. Often multiple experiments are performed and the optimal compound is identified in this brute-force fashion. Perhaps a traditional chemical scaffold is identified and movement of a substituent group around a ring constitutes the whole of the design process. Also notably, a chemical being evaluated in one area might demonstrate properties very attractive in another area, with serendipity as the mechanism of discovery. In contrast to such approaches, computer-aided molecular design (CAMD) looks to encompass both experimental and heuristic-based knowledge into a strategy that will design a molecule on a computer to meet a given target. Depending on the algorithm employed, the molecule which is designed might be quite novel (i.e., with no CAS registration number) and/or non-intuitive relative to what is known about the problem at hand. While CAMD is a fairly recent strategy (dating to the early 1980s), it contains a variety of bottlenecks and limitations which have prevented the technique from garnering more attention in academic, governmental and industrial institutions. A main reason for this is how the molecules are described in the computer. This step can control how models are developed for the properties of interest on a given problem as well as how to go from an output of the algorithm to an actual chemical structure. This report

2010 UNAVCO Science Workshop; Boulder, Colorado, 8-11 March 2010; Geodesy's reach has expanded rapidly in recent years as EarthScope and international data sets have grown and new disciplinary applications have emerged. To explore advances in geodesy and its applications in geoscience research and education, approximately 170 scientists (representing 11 countries: Colombia, Denmark, Ecuador, France, Japan, Lebanon, Mexico, New Zealand, Russia, Spain, and the United States), including 15 students, gathered at the 2010 UNAVCO Science Workshop in Colorado. UNAVCO is a nonprofit membership-governed consortium that facilitates geoscience research and education using geodesy. Plenary sessions integrated discovery with broad impact and viewed geodesy through three lenses: (1) pixel-by-pixel geodetic imaging where various remote sensing methodologies are revealing fine-scale changes in the near-surface environment and the geologic processes responsible for them; (2) epoch-by-epoch deformation time series measured in seconds to millennia, which are uncovering ephemeral processes associated with the earthquake cycle and glacial and groundwater flow; and (3) emerging observational powers from advancing geodetic technologies. A fourth plenary session dealt with geodesy and water, a new strategic focus on the hydrosphere, cryosphere, and changing climate. Keynotes included a historical perspective by Bernard Minster (Scripps Institution of Oceanography) on space geodesy and its applications to geophysics, and a summary talk by Susan Eriksson (UNAVCO) on the successes of Research Experience in Solid Earth Science for Students (RESESS) and its 5-year follow-on with opportunities to mentor the next generation of geoscientists through cultivation of diversity.

Knowledge management in general tries to organize important know-how and make it available whenever and wherever it is needed. Today, organizations rely on decision-makers to produce "mission critical" decisions that are based on inputs from multiple domains. The ideal decision-maker has a profound understanding of the specific domains that influence the decision-making process, coupled with the experience that allows them to act quickly and decisively on the information. In addition, learning companies benefit by not repeating costly mistakes and by reducing time-to-market in Research & Development projects. Group decision-making tools can help companies make better decisions by capturing the knowledge from groups of experts. Furthermore, companies that capture their customers' preferences can improve their customer service, which translates to larger profits. Collaborative computing therefore provides a common communication space, improves sharing of knowledge, provides a mechanism for real-time feedback on the tasks being performed, helps to optimize processes, and results in a centralized knowledge warehouse. This paper presents the research directions of a project which seeks to augment an advanced collaborative web-based environment called Postdoc with workflow capabilities. Postdoc is a "government-off-the-shelf" document management software package developed at NASA Ames Research Center (ARC).

This dissertation merges the optical design and computational aspects of imaging systems to create novel devices that solve engineering problems in optical science, and attempts to expand the solution space available to the optical designer. This dissertation is divided into two parts: the first discusses a new active illumination depth sensing modality, while the second part discusses a passive illumination system called plenoptic, or lightfield, imaging. The new depth sensing modality introduced in part one is called depth through controlled aberration. This technique illuminates a target with a known, aberrated projected pattern and takes an image using a traditional, unmodified imaging system. Knowing how the added aberration in the projected pattern changes as a function of depth, we are able to quantitatively determine the depth of a series of points from the camera. A major advantage this method permits is the ability for the illumination and imaging axes to be coincident. Plenoptic cameras capture both spatial and angular data simultaneously. This dissertation presents a new set of parameters that permit the design and comparison of plenoptic devices outside the traditionally published plenoptic 1.0 and plenoptic 2.0 configurations. Additionally, a series of engineering advancements are presented, including full system raytraces of raw plenoptic images, Zernike compression techniques of raw image files, and non-uniform lenslet arrays to compensate for plenoptic system aberrations. Finally, a new snapshot imaging spectrometer is proposed based on the plenoptic configuration.

Advanced composite and ceramic materials are being developed for use in many new defense and commercial applications. In order to achieve the desired mechanical properties of these materials, the structural elements must be carefully analyzed and engineered. A study was conducted to evaluate the use of high resolution computed tomography (CT) as a macrostructural analysis tool for advanced composite and ceramic materials. Several samples were scanned using a laboratory high resolution CT scanner. Samples were also destructively analyzed at the locations of the scans and the nondestructive and destructive results were compared. The study provides useful information outlining the strengths and limitations of this technique and the prospects for further research in this area.

Despite their more stringent plasma heating and confinement requirements, advanced fuel (AF) fusion cycles potentially offer improved environmental compatibility and lower costs. This comes about through the elimination of tritium breeding requirements and a reduction in neutron flux (hence, activation and radiation damage). A larger energy fraction carried by charged particles also makes direct energy conversion more suitable. As a first application, a symbiotic system of semi-catalyzed-deuterium-fueled hybrid fuel factories, supplying both fissile fuel to light water reactors and ³He to D-³He satellite fusion reactors, is proposed. Subsequently, an evolution into a system of synfuel factories with satellite D-³He reactors is envisioned.

Research concerning flight computers for use on interplanetary probes is reviewed. The history of these computers from the Viking mission to the present is outlined. The differences between ground commercial computers and computers for planetary exploration are listed. The development of a computer for the Mariner Mark II comet rendezvous asteroid flyby mission is described. Various aspects of recently developed computer systems are examined, including the Max real-time embedded computer, a hypercube distributed supercomputer, a SAR data processor, a processor for the High Resolution IR Imaging Spectrometer, and a robotic vision multiresolution pyramid machine for processing images obtained by a Mars Rover.

An interactive computer application reporting patient pulmonary function data has been developed by Washington, D.C. VA Medical Center staff. A permanent on-line database of patient demographics, lung capacity, flows, diffusion, arterial blood gases and physician interpretation is maintained by a minicomputer at the hospital. A user-oriented application program resulted from development in concert with the clinical users. Rapid program development resulted from employing a newly developed time-saving technique that has found wide application at other VA Medical Centers. Careful attention to user interaction has resulted in an application program requiring little training, which has been satisfactorily used by a number of clinicians.

This paper addresses how the image processing steps involved in computational imaging can be adapted to specific image-based recognition tasks, and how significant reductions in computational complexity can be achieved by leveraging the recognition algorithm's robustness to defocus, poor exposure, and the like. Unlike aesthetic applications of computational imaging, recognition systems need not produce the best possible image quality, but instead need only satisfy certain quality thresholds that allow for reliable recognition. The paper specifically addresses light field processing for barcode scanning, and presents three optimizations which bring light field processing within the complexity limits of low-powered embedded processors.

Discusses three trends in computer-oriented chemistry instruction: (1) availability of interfaces to integrate computers with experiments; (2) impact of the development of higher resolution graphics and greater memory capacity; and (3) role of videodisc technology on computer assisted instruction. Includes program listings for auto-titration and…

Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.
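In a hypercube interconnect like the one connecting the ACP crates, each node's label is a d-bit number and two nodes are linked exactly when their labels differ in one bit. The sketch below shows neighbor enumeration and a simple dimension-order route; it illustrates the topology in general, not the ACP's actual crossbar or CANOPY routing logic:

```python
# Hypercube topology basics: in a d-dimensional hypercube, flipping any
# single bit of a node label yields a directly connected neighbor, and
# the minimum path length between two nodes is their Hamming distance.
# Generic illustration only -- not the ACP's routing implementation.

def neighbors(node, dim):
    """All nodes one link away from `node` in a dim-dimensional hypercube."""
    return [node ^ (1 << bit) for bit in range(dim)]

def route(src, dst, dim):
    """Dimension-order path: correct differing bits from lowest to highest."""
    path, cur = [src], src
    for bit in range(dim):
        if (cur ^ dst) & (1 << bit):
            cur ^= 1 << bit        # traverse the link along this dimension
            path.append(cur)
    return path

hops = route(0b0000, 0b1011, dim=4)   # path length = Hamming distance
```

This logarithmic diameter (d links span 2^d nodes) is what makes the hypercube attractive for site-oriented workloads such as lattice gauge theory, where most communication is between nearby sites.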

Since NASA was created in 1958, over 6400 patents have been issued to the agency--nearly one in a thousand of all patents ever issued in the United States. A large number of these inventions have focused on new materials that have made space travel and exploration of the moon, Mars, and the outer planets possible. In the last few years, the materials developed by NASA Langley Research Center embody breakthroughs in performance and properties that will enable great achievements in space. The examples discussed below offer significant advantages for use in small satellites, i.e., those with payloads under a metric ton. These include patented products such as LaRC SI, LaRC RP 46, LaRC RP 50, PETI-5, TEEK, PETI-330, LaRC CP, TOR-LM and LaRC LCR (patent pending). These and other new advances in nanotechnology engineering, self-assembling nanostructures and multifunctional aerospace materials are presented and discussed below, and applications with significant technological and commercial advantages are proposed.

The capability to remotely, robotically perform space assembly, inspection, servicing, and science functions would rapidly expand our presence in space, and the cost efficiency of being there. There is considerable interest in developing 'telerobotic' technologies, which also have comparably important terrestrial applications to health care, underwater salvage, nuclear waste remediation and others. Such tasks, both space and terrestrial, require both a robot and operator interface that is highly flexible and adaptive, i.e., capable of efficiently working in changing and often casually structured environments. One systems approach to this requirement is to augment traditional teleoperation with computer assists -- advanced teleoperation. We have spent a number of years pursuing this approach, and highlight some key technology developments and their potential commercial impact. This paper is an illustrative summary rather than a self-contained presentation; for completeness, we include representative technical references to our work which will allow the reader to follow up items of particular interest.

Computers and network technology have become inexpensive and powerful tools that can be applied to a wide range of criminal activity. Computers have changed the world's view of evidence because computers are used more and more as tools in committing 'traditional crimes' such as embezzlement, theft, extortion and murder. This paper will focus on reviewing the current state of the art of the data recovery and evidence construction tools used in both the field and the laboratory for prosecution purposes.

Large aperture (20-inch diameter) sapphire optical windows have been identified as a key element of new and/or upgraded airborne electro-optical systems. These windows typically require a transmitted wave front error of much less than 0.1 waves rms @ 0.63 microns over 7 inch diameter sub-apertures. Large aperture (14-inch diameter by 4-inch thick) sapphire substrates have also been identified as a key optical element of the Laser Interferometer Gravitational Wave Observatory (LIGO). This project is under joint development by the California Institute of Technology (Caltech) and the Massachusetts Institute of Technology under cooperative agreement with the National Science Foundation (NSF). These substrates are required to have a transmitted wave front error of 20 nm (0.032 waves) rms @ 0.63 microns over 6-inch sub-apertures with a desired error of 10 nm (0.016 waves) rms. Owing to the spatial variations in the optical index of refraction potentially anticipated within 20-inch diameter sapphire, thin (0.25 - 0.5-inch) window substrates, as well as within the 14-inch diameter by 4-inch thick substrates for the LIGO application, our experience tells us that the required transmitted wave front errors cannot be achieved with standard optical finishing techniques, as they cannot readily compensate for errors introduced by inherent material characteristics. Computer controlled optical finishing has been identified as a key technology likely required to enable achievement of the required transmitted wave front errors. Goodrich has developed this technology and has previously applied it to finish high quality sapphire optical windows with a range of aperture sizes from 4-inch to 13-inch to achieve transmitted wavefront errors comparable to these new requirements. This paper addresses successful recent developments and accomplishments in the application of this optical finishing technology to sequentially larger aperture and thicker sapphire windows to achieve the

Several current challenges in computational fluid dynamics and aerothermodynamics for hypersonic vehicle applications are discussed. Example simulations are presented from code validation and code benchmarking efforts to illustrate capabilities and limitations. Opportunities to advance the state of the art in algorithms, grid generation and adaptation, and code validation are identified. Highlights of diverse efforts to address these challenges are then discussed. One such effort to re-engineer and synthesize the existing analysis capability in LAURA, VULCAN, and FUN3D will provide context for these discussions. The critical (and evolving) role of agile software engineering practice in the capability enhancement process is also noted.

Direct patient-computer interviews were among the earliest applications of computing in medicine. Yet patient interviewing and other clinical applications have lagged behind fiscal/administrative uses. Several reasons for delays in the development and implementation of clinical computing programs and their resolution are discussed. Patient interviewing, clinician consultation and other applications of clinical computing in mental health are reviewed.

Defines all recurrence relations used in the Nuffield course, to solve first- and second-order differential equations, and describes a typical algorithm for computer generation of solutions. (Author/GA)
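The Nuffield materials themselves are not reproduced in this record; as a generic illustration of the kind of recurrence-based algorithm described, the sketch below generates an approximate solution to a first-order differential equation with the Euler recurrence y_{n+1} = y_n + h*f(x_n, y_n). (A second-order equation would be rewritten as two coupled first-order recurrences; the example equation here is an invented stand-in, not one from the course.)

```python
# Illustrative Euler recurrence (an assumption for this sketch; the course's
# own algorithms are not reproduced in the record above).

def euler_first_order(f, x0, y0, h, steps):
    """Generate (x, y) pairs via the recurrence y_{n+1} = y_n + h * f(x_n, y_n)."""
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(steps):
        y = y + h * f(x, y)
        x = x + h
        points.append((x, y))
    return points

# Example: dy/dx = y with y(0) = 1, whose exact solution is e^x;
# the recurrence underestimates e at x = 1.
approx = euler_first_order(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(approx[-1][1])   # about 2.5937 (exact value e is about 2.7183)
```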

The approach to carrying out multi-discipline aerospace design studies in the future, especially in massively parallel computing environments, comprises choosing (1) suitable solvers to compute solutions to the equations characterizing a discipline and (2) efficient optimization methods. In addition, for aerodynamic optimization problems, (3) smart methodologies must be selected to modify the surface shape. In this research effort, a 'direct' optimization method is implemented on the Cray C-90 to improve aerodynamic design. It is coupled with an existing implicit Navier-Stokes solver, OVERFLOW, to compute flow solutions. The optimization method is chosen such that it can accommodate multi-discipline optimization in future computations. In this work, however, only single-discipline aerodynamic optimization is included.

The principal mission of the Institute for Scientific Computing Research is to foster interactions among LLNL researchers, universities, and industry on selected topics in scientific computing. In the area of computational physics, the Institute has developed a new algorithm, GaPH, to help scientists understand the chemistry of turbulent and driven plasmas or gases at far less cost than other methods. New low-frequency electromagnetic models better describe the plasma etching and deposition characteristics of a computer chip in the making. A new method for modeling realistic curved boundaries within an orthogonal mesh is resulting in a better understanding of the physics associated with such boundaries and much quicker solutions. All these capabilities are being developed for massively parallel implementation, which is an ongoing focus of Institute researchers. Other groups within the Institute are developing novel computational methods to address a range of other problems. Examples include feature detection and motion recognition by computer, improved monitoring of blood oxygen levels, and entirely new models of human joint mechanics and prosthetic devices.

An overview of Northrop programs in computational physics is presented. These programs depend on access to today's supercomputers, such as the Numerical Aerodynamical Simulator (NAS), and, for future growth, on the continuing evolution of computational engines. Descriptions here are concentrated on the following areas: computational fluid dynamics (CFD), computational electromagnetics (CEM), computer architectures, and expert systems. Current efforts and future directions in these areas are presented. The impact of advances in the CFD area is described, and parallels are drawn to analogous developments in CEM. The relationship between advances in these areas and the development of advanced (parallel) architectures and expert systems is also presented.

the new GERESS data. The dissertation work emphasized the development and use of advanced computational techniques for studying regional seismic...hand, the possibility of new data sources at regional distances permits using previously ignored signals. Unfortunately, these regional signals will...the Green's function G_nk(x,t; r,t) around this new reference point contains the propagation effects, and V is the source volume where f_jk

This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction, I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large-scale computing in accelerator physics.

The development of quantitative structure-activity relationships (QSAR) is essential for reducing the chemical hazards of new weapon systems. The current collaboration between HEST (toxicology research and testing), MLPJ (computational chemistry) and PRS (computational chemistry, new propellant synthesis) is focusing R&D efforts on basic research goals that will rapidly transition to useful products for propellant development. Computational methods are being investigated that will assist in forecasting cellular toxicological end-points. Models developed from these chemical structure-toxicity relationships are useful for the prediction of the toxicological endpoints of new related compounds. Research is focusing on the evaluation tools to be used for the discovery of such relationships and the development of models of the mechanisms of action. Combinations of computational chemistry techniques, in vitro toxicity methods, and statistical correlations will be employed to develop and explore potential predictive relationships; results for series of molecular systems that demonstrate the viability of this approach are reported. A number of hydrazine salts have been synthesized for evaluation. Computational chemistry methods are being used to elucidate the mechanism of action of these salts. Toxicity endpoints such as viability (LDH) and changes in enzyme activity (glutathione peroxidase and catalase) are being experimentally measured as indicators of cellular damage. Extrapolation from computational/in vitro studies to human toxicity is the ultimate goal. The product of this program will be a predictive tool to assist in the development of new, less toxic propellants.

In discussing the application of advanced materials to rotating machinery, the following topics are covered: the torque speed characteristics of ac and dc machines, motor and transformer losses, the factors affecting core loss in motors, advanced magnetic materials and conductors, and design tradeoffs for samarium cobalt motors.

The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonate vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
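As a hypothetical illustration of the system-reliability idea discussed above (the limit-state functions and input distributions below are invented for the example, not taken from the cited work), a crude Monte Carlo estimate of failure probability for a series system with two interacting failure modes might look like:

```python
# Hypothetical illustration: Monte Carlo estimation of system failure
# probability when two failure modes interact (e.g. excessive deflection
# OR stress exceeding capacity). A series system fails if ANY limit-state
# function g_i(X) falls below zero. All models/distributions are invented.
import random

random.seed(0)

def g_deflection(load, stiffness):
    # limit state: allowable deflection minus computed deflection (invented model)
    return 2.0 - load / stiffness

def g_stress(load, capacity):
    # limit state: capacity minus demand (invented model)
    return capacity - 1.5 * load

def failure_probability(n_samples=100_000):
    """Series system: failure occurs if ANY limit-state function is negative."""
    failures = 0
    for _ in range(n_samples):
        load = random.gauss(10.0, 2.0)       # assumed load distribution
        stiffness = random.gauss(8.0, 0.5)   # assumed stiffness distribution
        capacity = random.gauss(18.0, 1.5)   # assumed capacity distribution
        if g_deflection(load, stiffness) < 0 or g_stress(load, capacity) < 0:
            failures += 1
    return failures / n_samples

print(f"estimated P_f = {failure_probability():.4f}")
```

Unlike a safety-factor design, the sampled estimate quantifies the reliability level directly, which is the contrast the abstract draws with the deterministic approach.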

Advanced metallic materials within the Al-base family are being developed for applications on current and future aerospace vehicles. These advanced materials offer significant improvements in density, strength, stiffness, fracture resistance, and/or use temperature, which translate into improved vehicle performance. Aerospace applications of advanced metallic materials include space structures, fighters, military and commercial transport aircraft, and missiles. Structural design requirements, including not only static and durability/damage tolerance criteria but also environmental considerations, drive material selections. Often trade-offs must be made regarding strength, fracture resistance, cost, reliability, and maintainability in order to select the optimum material for a specific application. These trade studies not only include various metallic materials but also many times include advanced composite materials. Details of material comparisons, aerospace applications, and material trades will be presented.

The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the AdaBoost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an AdaBoost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension of this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real time. Test results are shown for a variety of environments.
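A minimal sketch of the boosted-classifier idea underlying the object classification system (illustrative only, not the UCSD implementation): weak threshold "stumps" are combined by a weighted vote, with each stump's vote weight derived from its weighted training error. The toy one-dimensional data below are invented for the demonstration.

```python
# Illustrative AdaBoost on 1-D features with threshold stumps (x > t -> +1).
# Not the UCSD cascade; the data and thresholds are invented for the demo.
import math

def train_adaboost(samples, labels, thresholds, rounds):
    """samples: 1-D feature values; labels: +1/-1; each stump predicts +1 when x > t."""
    n = len(samples)
    w = [1.0 / n] * n                       # sample weights, updated each round
    ensemble = []                           # learned (alpha, threshold) pairs
    for _ in range(rounds):
        def werr(t):                        # weighted error of stump with threshold t
            return sum(wi for wi, x, y in zip(w, samples, labels)
                       if (1 if x > t else -1) != y)
        best = min(thresholds, key=werr)    # pick the stump with minimum weighted error
        err = min(max(werr(best), 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)   # stump weight in the final vote
        ensemble.append((alpha, best))
        # re-weight samples: misclassified ones gain weight
        w = [wi * math.exp(-alpha * y * (1 if x > best else -1))
             for wi, x, y in zip(w, samples, labels)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (1 if x > t else -1) for a, t in ensemble)
    return 1 if score > 0 else -1

# Toy data: positive class clusters above 5.0
xs = [1.0, 2.0, 3.0, 6.0, 7.0, 8.0]
ys = [-1, -1, -1, 1, 1, 1]
model = train_adaboost(xs, ys, thresholds=[2.5, 4.5, 6.5], rounds=3)
print([predict(model, x) for x in [1.5, 7.5]])   # -> [-1, 1]
```

The cascade structure described in the paper chains many such boosted ensembles so that easy negatives are rejected early, which is what makes real-time detection feasible.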

In a nutshell, the existing Internet provides us content in the form of videos, emails and information served up in web pages. With Cloud Computing, the next generation of the Internet will allow us to "buy" IT services from a web portal, drastically expanding the types of merchandise available beyond those on e-commerce sites such as eBay and Taobao. We would be able to rent from a virtual storefront the basic necessities to build a virtual data center, such as CPU, memory, and storage, and add on top of that the necessary middleware: web application servers, databases, enterprise service bus, etc. as the platform(s) to support the applications we would like to either rent from an Independent Software Vendor (ISV) or develop ourselves. Together this is what we call "IT as a Service," or ITaaS, bundled for us end users as a virtual data center.

Scientific simulation, which provides a natural bridge between theory and experiment, is an essential tool for understanding complex plasma behaviour. Recent advances in simulations of magnetically confined plasmas are reviewed in this paper, with illustrative examples, chosen from associated research areas such as microturbulence, magnetohydrodynamics and other topics. Progress has been stimulated, in particular, by the exponential growth of computer speed along with significant improvements in computer technology. The advances in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics have produced increasingly good agreement between experimental observations and computational modelling. This was enabled by two key factors: (a) innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales and (b) access to powerful new computational resources. Excellent progress has been made in developing codes for which computer run-time and problem-size scale well with the number of processors on massively parallel processors (MPPs). Examples include the effective usage of the full power of multi-teraflop (multi-trillion floating point computations per second) MPPs to produce three-dimensional, general geometry, nonlinear particle simulations that have accelerated advances in understanding the nature of turbulence self-regulation by zonal flows. These calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present generation MPP computers and the associated diagnostic and visualization capabilities. In looking towards the future, the current results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. This

NASA's Constellation Program is developing a lunar surface outpost in which reconfigurable computing will play a significant role. Reconfigurable systems provide a number of benefits over conventional software-based implementations, including performance and power efficiency, while the use of standardized reconfigurable hardware provides opportunities to reduce logistical overhead. The current vision for the lunar surface architecture includes habitation, mobility, and communications systems, each of which greatly benefits from reconfigurable hardware in applications including video processing, natural feature recognition, data formatting, IP offload processing, and embedded control systems. In deploying reprogrammable hardware, considerations similar to those of software systems must be managed. There needs to be a mechanism for discovery enabling applications to locate and utilize the available resources. Also, application interfaces are needed to provide for both configuring the resources and transferring data between the application and the reconfigurable hardware. Each of these topics is explored in the context of deploying reconfigurable resources as an integral aspect of the lunar exploration architecture.

ABCI (Azimuthal Beam Cavity Interaction) is a computer program which solves the Maxwell equations directly in the time domain when a Gaussian beam goes through an axi-symmetrical structure on or off axis. Many new features have been implemented in the new version of ABCI (presently version 6.6), including the 'moving mesh' and Napoly's method of calculation of wake potentials. The mesh is now generated only for the part of the structure inside a window and moves together with the window frame. This moving mesh option reduces the number of mesh points considerably, and very fine meshes can be used. Napoly's integration method makes it possible to compute wake potentials in a structure such as a collimator, where parts of the cavity material are at smaller radii than that of the beam pipes, in such a way that the contribution from the beam pipes vanishes. For the monopole wake potential, ABCI can be applied even to structures with unequal beam pipe radii. Furthermore, the radial mesh size can be varied over the structure, permitting use of a fine mesh only where it is actually needed. With these improvements, the program allows computation of wake fields for structures far too complicated for older codes. Plots of a cavity shape and wake potentials can be obtained in the form of a Top Drawer file. The program can also calculate and plot the impedance of a structure and/or the distribution of the deposited energy as a function of the frequency from Fourier transforms of wake potentials. Its usefulness is illustrated by showing some numerical examples.

A review is presented of some recent work in the field of inorganic scintillator research for medical imaging applications, in particular scintillation detectors for Single-Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET).

This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
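A schematic, scalar-state version of the ensemble Kalman filter update mentioned above (an illustration under simplifying assumptions such as an identity observation operator, not the authors' implementation):

```python
# Schematic ensemble Kalman filter analysis step. Scalar state, scalar
# observation, identity observation operator -- simplifying assumptions for
# this sketch only. Each member is nudged toward the observation using the
# ensemble-estimated Kalman gain (perturbed-observation form).
import random

random.seed(1)

def enkf_update(ensemble, obs, obs_err_var):
    """One EnKF analysis step for a scalar state."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)   # ensemble variance P
    gain = var / (var + obs_err_var)        # Kalman gain K = P (P + R)^-1 for H = 1
    # perturbed-observation form: each member assimilates a noisy copy of the obs
    return [x + gain * (obs + random.gauss(0.0, obs_err_var ** 0.5) - x)
            for x in ensemble]

prior = [random.gauss(0.0, 2.0) for _ in range(200)]   # forecast ensemble, N(0, 2^2)
posterior = enkf_update(prior, obs=3.0, obs_err_var=0.5)
print(sum(posterior) / len(posterior))   # posterior mean is pulled toward the observation
```

The posterior ensemble both shifts toward the observation and shrinks in spread, which is the sense in which the update "probabilistically constrains" the uncertain state.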

In this perspective, we present an overview of nanoscience applications in catalysis, energy conversion, and energy conservation technologies. We discuss how novel physical and chemical properties of nanomaterials can be applied and engineered to meet the advanced material requirements in the new generation of chemical and energy conversion devices. We highlight some of the latest advances in these nanotechnologies and provide an outlook at the major challenges for further developments.

In this paper we report on our recent achievements in the application of conventional and cross-polarization OCT (CP OCT) modalities for in vivo clinical diagnostics in different medical areas including gynecology, dermatology, and stomatology. In gynecology, CP OCT was employed for diagnosing fallopian tubes and cervix; in dermatology, OCT for monitoring the treatment of psoriasis, scleroderma and atopic dermatitis; and in stomatology, for diagnosis of oral diseases. For all considered applications, we propose and develop different image processing methods which enhance the diagnostic value of the technique. In particular, we use histogram analysis, Fourier analysis and neural networks, thus calculating different tissue characteristics as revealed by OCT's polarization evolution. These approaches enable improved OCT image quantification and increase its resultant diagnostic accuracy.

Aeroassisted planetary entry uses atmospheric drag to decelerate spacecraft from super-orbital to orbital or suborbital velocities. Numerical simulation of flow fields surrounding these spacecraft during hypersonic atmospheric entry is required to define aerothermal loads. The severe compression in the shock layer in front of the vehicle and subsequent, rapid expansion into the wake are characterized by high temperature, thermo-chemical nonequilibrium processes. Implicit algorithms required for efficient, stable computation of the governing equations involving disparate time scales of convection, diffusion, chemical reactions, and thermal relaxation are discussed. Robust point-implicit strategies are utilized in the initialization phase; less robust but more efficient line-implicit strategies are applied in the endgame. Applications to ballutes (balloon-like decelerators) in the atmospheres of Venus, Mars, Titan, Saturn, and Neptune and a Mars Sample Return Orbiter (MSRO) are featured. Examples are discussed where time-accurate simulation is required to achieve a steady-state solution.

assistance. Special recognition goes to the members of the International Logistics Section. Third, to the members of the Supply community, for their... Recognition of Small Computers... Major Command Applications... Base Level Applications... Problem Statement... computer use in the Air Force? Air Force Recognition of Small Computers: The Air Force has recently realized the potential offered by these small

The development of aerospace vehicles over the years was an evolutionary process in which engineering progress in the aerospace community was based, generally, on prior experience and databases obtained through wind tunnel and flight testing. Advances in the fundamental understanding of flow physics, wind tunnel and flight test capability, and mathematical insights into the governing flow equations were translated into improved air vehicle design. The modern day field of Computational Fluid Dynamics (CFD) is a continuation of the growth in analytical capability and the digital mathematics needed to solve the more rigorous form of the flow equations. Some of the technical and managerial challenges that result from rapidly developing CFD capabilities, some of the steps being taken by the Fort Worth Division of General Dynamics to meet these challenges, and some of the specific areas of application for high performance air vehicles are presented.

ABSTRACT Project Title: Recovery Act: Advanced Direct Methanol Fuel Cell for Mobile Computing PROJECT OBJECTIVE The objective of the project was to advance portable fuel cell system technology towards the commercial targets of power density, energy density and lifetime. These targets were laid out in the DOE’s R&D roadmap to develop an advanced direct methanol fuel cell power supply that meets commercial entry requirements. Such a power supply will enable mobile computers to operate non-stop, unplugged from the wall power outlet, by using the high energy density of methanol fuel contained in a replaceable fuel cartridge. Specifically this project focused on balance-of-plant component integration and miniaturization, as well as extensive component, subassembly and integrated system durability and validation testing. This design has resulted in a pre-production power supply design and a prototype that meet the rigorous demands of consumer electronic applications. PROJECT TASKS The proposed work plan was designed to meet the project objectives, which corresponded directly with the objectives outlined in the Funding Opportunity Announcement: To engineer the fuel cell balance-of-plant and packaging to meet the needs of consumer electronic systems, specifically at power levels required for mobile computing. UNF used existing balance-of-plant component technologies developed under its current US Army CERDEC project, as well as a previous DOE project completed by PolyFuel, to further refine them to both miniaturize and integrate their functionality to increase the system power density and energy density. Benefits of UNF’s novel passive water recycling MEA (membrane electrode assembly) and the simplified system architecture it enabled formed the foundation of the design approach. The package design was hardened to address orientation independence, shock, vibration, and environmental requirements. Fuel cartridge and fuel subsystems were improved to ensure effective fuel

Summary: Advances in e-Infrastructure promise to revolutionize sensing systems, the way in which data are collected and assimilated, and the way complex water systems are simulated and visualized. According to the EU Infrastructure 2010 work-programme, data and compute infrastructures and their underlying technologies, whether oriented to tackling scientific challenges or complex problem solving in engineering, are expected to converge into so-called knowledge infrastructures, leading to more effective research, education and innovation in the next decade and beyond. Grid technology is recognized as a fundamental component of e-Infrastructures. Nevertheless, this emerging paradigm highlights several topics, including data management, algorithm optimization, security, performance (speed, throughput, bandwidth, etc.), and scientific cooperation and collaboration issues, that require further examination to fully exploit it and to better inform future research policies. The paper illustrates the results of six different surface and subsurface hydrology applications that have been deployed on the Grid. All the applications aim to respond to strong requirements from civil society at large, relative to natural and anthropogenic risks. Grid technology has been successfully tested to improve flood prediction, groundwater resources management and Black Sea hydrological survey by providing large computing resources. It is also shown that Grid technology facilitates e-cooperation among partners by means of services for authentication and authorization, seamless access to distributed data sources, data protection and access rights, and standardization.

Computer Assisted Design (CAD) and Computer Assisted Manufacturing (CAM) are emerging technologies now being used in home economics and interior design applications. A microcomputer in a computer network system is capable of executing computer graphic functions such as three-dimensional modeling, as well as utilizing office automation packages to…

Describes possible applications of new technologies to special education. Discusses results of a study designed to explore the use of robotics, artificial intelligence, and computer simulations to aid people with handicapping conditions. Presents several scenarios in which specific technological advances may contribute to special education…

The introduction of multidetector row computed tomography (CT) into clinical practice has revolutionized many aspects of the clinical work-up. Lung cancer imaging has benefited from various breakthroughs in computing technology, with advances in the field of lung cancer detection, tissue characterization, lung cancer staging and response to therapy. Our paper discusses the problems of radiation, image visualization and CT examination comparison. It also reviews the most significant advances in lung cancer imaging and highlights the emerging clinical applications that use state of the art CT technology in the field of lung cancer diagnosis and follow-up.

Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

As part of the Federal Data Processing Reorganization Study submitted by the Science and Technology Team, the Federal Government's role in advancing and diffusing computer technology is discussed. Findings and conclusions assess the state-of-the-art in government and in industry, and five recommendations provide directions for government policy…

The Computational Science Applications in Manufacturing (CSAM) workshop is a program designed to expose and train high school students in the techniques used in computational science as they pertain to manufacturing. This effort was sponsored by the AlliedSignal Inc., Kansas City Division (KCD) in cooperation with the Department of Energy (DOE) and their initiative to support education with respect to the advances in technology.

The computational requirements for design and manufacture of automotive components have increased dramatically in the effort to produce automobiles with three times the mileage. Automotive component design systems are becoming increasingly reliant on structural analysis, requiring larger and more complex analyses: more three-dimensional analyses, larger model sizes, and routine consideration of transient and non-linear effects. Such analyses must be performed rapidly to minimize delays in the design and development process, which drives the need for parallel computing. This paper briefly describes advanced computational research in superplastic forming and automotive crashworthiness.

Supercomputer processing levels will be necessary to implement real-time functions in radar and communication systems. Design and fabrication of such systems require innovative approaches to the problem rather than the normal methods of solution. The mathematical solution of multiple simultaneous equations is the optimum solution to many signal processing problems in radar and communications. Historically this optimum solution was not directly attainable, so approximations were derived through various signal processing techniques. The advent of high-speed digital processing devices in concurrent processing architectures has raised the possibility of achieving the optimum solution. This mathematical solution is applicable to many adaptive signal processing problems including spatial, spectral, temporal, and moving target indication filtering. Hazeltine, under contract with RADC on the Systolic Array Processor Brassboard program, has built a processor solving this class of problems in a highly concurrent systolic processing architecture. The application-specific processor built on this program performs over 1.25 billion floating point operations per second (BFLOPS), solving equations with up to twelve complex variables.
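For illustration only (this is not Hazeltine's processor code): the "optimum solution" referred to above amounts to directly solving a set of simultaneous linear equations, for example the normal equations R w = p for adaptive filter weights. A minimal software solver stands in here for the systolic-array hardware; the 2x2 system is invented for the demo.

```python
# Minimal Gaussian-elimination solver for A x = b, standing in for the
# systolic-array hardware described above (illustrative sketch only).

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        # partial pivoting: bring the largest remaining entry to the diagonal
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [a - factor * c for a, c in zip(M[r], M[col])]
    # back substitution
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy 2x2 "normal equations" R w = p (values invented for the example)
R = [[2.0, 1.0], [1.0, 3.0]]
p = [3.0, 5.0]
print(solve(R, p))   # solution w is approximately [0.8, 1.4]
```

The systolic architecture achieves the same result by streaming the elimination steps through a grid of processing elements, which is what makes the direct solve feasible at real-time rates.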

The global objective of this effort is to develop advanced computational tools for 3-D seismic analysis and test the products using a model dataset developed under the joint aegis of the United States' Society of Exploration Geophysicists (SEG) and the European Association of Exploration Geophysicists (EAEG). The goal is to enhance the value to the oil industry of the SEG/EAEG modeling project, carried out with US Department of Energy (DOE) funding in FY 93-95. The primary objective of the ORNL Center for Engineering Systems Advanced Research (CESAR) is to spearhead the innovative computational techniques that would enable a revolutionary advance in 3-D seismic analysis. The CESAR effort is carried out in collaboration with world-class domain experts from leading universities, and in close coordination with other national laboratories and oil industry partners.

Non-Dominated Sorting Genetic Algorithm II (NSGA-II). This tool was demonstrated through an application for the Upper Neuse Watershed in North Carolina, USA. The objective functions used for the calibration were Nash-Sutcliffe (E) and Percent Bias (PB), and the objective sites were the Flat, Little, and Eno watershed outlets. The results show that the use of multi-objective calibration algorithms for SWAT calibration improved model performance, especially in terms of minimizing PB, compared to the single-objective model calibration. The third study builds upon the first two studies by leveraging the new calibration methods and tools to study future climate impacts on the Upper Neuse watershed. Statistically downscaled outputs from eight Global Circulation Models (GCMs) were used for both low and high emission scenarios to drive a well calibrated SWAT model of the Upper Neuse watershed. The objective of the study was to understand the potential hydrologic response of the watershed, which serves as a public water supply for the growing Research Triangle Park region of North Carolina, under projected climate change scenarios. The future climate change scenarios, in general, indicate an increase in precipitation and temperature for the watershed in coming decades. The SWAT simulations using the future climate scenarios, in general, suggest an increase in soil water and water yield, and a decrease in evapotranspiration within the Upper Neuse watershed. In summary, this dissertation advances the field of watershed-scale hydrologic modeling by (i) providing some of the first work to apply cloud computing for the computationally-demanding task of model calibration; (ii) providing a new, open source library that can be used by SWAT modelers to perform multi-objective calibration of their models; and (iii) advancing understanding of climate change impacts on water resources for an important watershed in the Research Triangle Park region of North Carolina.

The Advanced Communications Technology Satellite (ACTS) system provided a national testbed that enabled advanced applications to be tested and demonstrated over a live satellite link. Of the applications that used ACTS, some offered unique advantages over current methods, while others simply could not be accommodated by conventional systems. The initial technical and experimental results of the program were reported at the 1995 ACTS Results Conference in Cleveland, Ohio. Since then, the Experiments Program has involved 45 new experiments comprising 30 application experiments and 15 technology-related experiments that took advantage of the advanced technologies and unique capabilities offered by ACTS. The experiments are categorized and quantified to show the organizational mix of the experiments program and relative usage of the satellite. Since paper length guidelines preclude each experiment from being individually reported, the application experiments and significant demonstrations are surveyed to show the breadth of the activities that have been supported. Experiments in a similar application category, such as telemedicine or networking and protocol evaluation, are discussed collectively. Where available, experiment conclusions and impact are presented, and references to results and experiment information are provided. The quantity and diversity of the experiments program demonstrated a variety of service areas for the next generation of commercially available, advanced satellite communications.

Advances in Computational Aeroacoustics (CAA) depend critically on the availability of accurate, nondispersive, minimally dissipative computational algorithms as well as high-quality numerical boundary treatments. This paper focuses on recent developments in numerical boundary conditions. In a typical CAA problem, one often encounters two types of boundaries. Because a finite computation domain is used, there are external boundaries. On the external boundaries, boundary conditions simulating the solution outside the computation domain are to be imposed. Inside the computation domain, there may be internal boundaries. On these internal boundaries, boundary conditions simulating the presence of an object or surface with specific acoustic characteristics are to be applied. Numerical boundary conditions, both external and internal, developed for simple model problems are reviewed and examined. Numerical boundary conditions for real aeroacoustic problems are also discussed through specific examples. The paper concludes with a description of some much-needed research in numerical boundary conditions for CAA.
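As a deliberately simple illustration of the external-boundary idea (not one of the optimized CAA treatments the paper reviews), consider 1-D advection of a pulse out of a finite domain. With a first-order upwind scheme the outflow edge needs no extra condition, because the stencil only looks upstream, so the wave exits without reflecting back into the computation domain:

```python
# Toy external-boundary demonstration: 1-D advection u_t + c*u_x = 0
# (c > 0) on a finite grid with first-order upwind differencing. The
# right edge is the outflow boundary; the left edge is an inflow boundary
# where the exterior solution (here, zero) is imposed.

def advect(u, c, dx, dt, steps):
    u = list(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u)):
            new[i] = u[i] - c * dt / dx * (u[i] - u[i - 1])
        new[0] = 0.0  # external inflow boundary: nothing enters the domain
        u = new
    return u
```

Run at CFL = c*dt/dx = 1, the scheme shifts the pulse exactly one cell per step; after enough steps the domain returns to zero with no spurious reflection from the outflow boundary.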

With the growing trend toward higher temperature capability, lighter weight, and multifunctionality, significant advances in ceramic matrix composites (CMCs) will be required for future aerospace applications. The presentation will provide an overview of material requirements for future aerospace missions and the role of ceramics and CMCs in meeting those requirements. Aerospace applications include gas turbine engines, aircraft structures, hypersonic and access-to-space vehicles, space power and propulsion, and space communication.

Computational Electromagnetic Software is used by NASA to analyze the compatibility of systems too large or too complex for testing. Recent advances in software packages and computer capabilities have made it possible to determine the effects of a transmitter inside a launch vehicle fairing, better analyze the environment threats, and perform on-orbit replacements with assured electromagnetic compatibility.

A description is given of an inexpensive nuclear magnetic resonance (NMR) spectrometer suitable for use in advanced laboratory courses. Applications to the nondestructive analysis of the oil content in corn seeds and in monitoring the crystallization of polymers are presented. (SK)

The paper presents the project team's advanced sensor-computer sphere technology for real-time and continuous monitoring of wastewater runoff at sewer discharge outfalls along the receiving water. This research significantly enhances and extends the previously proposed novel sensor-computer technology. The advanced technology offers new computation models for an innovative use of the sensor-computer sphere, comprising an accelerometer, a programmable in-situ computer, solar power, and wireless communication, for real-time and online monitoring of runoff quantity. This innovation can enable more effective planning and decision-making in civil infrastructure, natural environment protection, and water pollution related emergencies. The paper presents the following: (i) the sensor-computer sphere technology; (ii) a significant enhancement to the previously proposed discrete runoff quantity model of this technology; and (iii) a new continuous runoff quantity model. A comparative study of the two distinct models is presented. Based on this study, the paper further investigates: (1) energy-, memory-, and communication-efficient use of the technology for runoff monitoring; and (2) possible sensor extensions for runoff quality monitoring.

Computer applications to instruction in any field may be divided into two broad generic classes: computer-managed instruction and computer-assisted instruction. The division is based on how frequently the computer affects the instructional process and how active a role the computer takes in actually providing instruction. There are no inherent characteristics of remote sensing education that preclude the use of one or both of these techniques, depending on the computer facilities available to the instructor. The characteristics of the two classes are summarized, potential applications to remote sensing education are discussed, and the advantages and disadvantages of computer applications to the instructional process are considered.

The Stockpile Stewardship Program (SSP) is an integrated technical program for maintaining the safety, surety, and reliability of the U.S. nuclear stockpile. The SSP uses nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program requires the continued use of experimental facilities and programs, and the computational capabilities to support these programs. The Advanced Simulation and Computing Program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources that support annual stockpile assessment and certification, studies of advanced nuclear weapons design and manufacturing processes, analyses of accident scenarios and weapons aging, and the tools to enable stockpile Life Extension Programs (LEPs) and the resolution of Significant Finding Investigations (SFIs). This requires a balance of resources, including technical staff, hardware, simulation software, and computer science solutions. ASC is now focused on increasing predictive capabilities in a three-dimensional (3D) simulation environment while maintaining support to the SSP. The program continues to improve its unique tools for solving progressively more difficult stockpile problems (with sufficient resolution, dimensionality, and scientific detail) and for quantifying critical margins and uncertainties. Resolving each issue requires increasingly difficult analyses because the aging process has progressively moved the stockpile further away from the original test base. Where possible, the program also enables the use of high performance computing (HPC) and simulation tools to address broader national security needs, such as foreign nuclear weapon assessments and counter nuclear terrorism.

Having delivered over 30,000 uncooled microbolometer based thermal imaging engines, BAE Systems is the world's leading producer. Advancements in technology include the demonstration of broadband microbolometers on a 46 μm pixel pitch which have excellent sensitivity in the MWIR (NETD ~180 mK, 3-5 μm) and LWIR (NETD ~15 mK, 8-12 μm) wavebands. Application advancements include the development of a family of thermal weapons sights for the military which will replace current cooled systems with lighter, lower power systems and the introduction of a new generation of handheld and pole mounted thermal imagers for commercial markets.

The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

The different uses to which two adult educators put their personal computers illustrate some of the many personal and professional applications of microcomputers and point to some implications for home computers in the field of adult education. Relying primarily upon prepared software packages, the first educator uses his computer as a…

Intended as a reference for researchers, teachers, and administrators, this book chronicles research, programs, and uses of computers in reading. Chapter 1 provides a broad view of computer applications in education, while Chapter 2 provides annotated references for computer based reading and language arts programs for children and adults in…

The potential of fuzzy sets and neural networks, often referred to as soft computing, for aiding in all aspects of manufacturing of advanced materials like ceramics is addressed. In design and manufacturing of advanced materials, it is desirable to find which of the many processing variables contribute most to the desired properties of the material. There is also interest in real time quality control of parameters that govern material properties during processing stages. The concepts of fuzzy sets and neural networks are briefly introduced and it is shown how they can be used in the design and manufacturing processes. These two computational methods are alternatives to other methods such as the Taguchi method. The two methods are demonstrated by using data collected at NASA Lewis Research Center. Future research directions are also discussed.

Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

Contends that courses designed to help students learn about the application of computers to historical study can be prepared and taught in a number of ways. Describes a college-level course that combines an introduction to computer applications with a student research project. (CFR)

APL (A Programming Language), a computer language used thus far largely for mathematical and scientific applications, can be used to tabulate a survey. Since this computer application can be appreciated by social scientists as well as mathematicians, it serves as an invaluable pedagogical tool for presenting APL to nonscientific users. An…

This report presents results from the DOE-sponsored workshop titled ``Advancing X-Cutting Ideas for Computational Climate Science Workshop,'' known as AXICCS, held on September 12--13, 2016 in Rockville, MD. The workshop brought together experts in climate science, computational climate science, computer science, and mathematics to discuss interesting but unsolved science questions regarding climate modeling and simulation, promoted collaboration among the diverse scientists in attendance, and brainstormed possible tools and capabilities that could be developed to help address those questions. Several research opportunities emerged from discussions at the workshop that the group felt could advance climate science significantly. These include (1) process-resolving models to provide insight into important processes and features of interest and to inform the development of advanced physical parameterizations, (2) a community effort to develop and provide integrated model credibility, (3) including, organizing, and managing increasingly connected model components that increase model fidelity yet also complexity, and (4) treating Earth system models as one interconnected organism without numerical or data-based boundaries that limit interactions. The group also identified several cross-cutting advances in mathematics, computer science, and computational science that would be needed to enable one or more of these big ideas. It is critical to address the need for organized, verified, and optimized software, which enables the models to grow and continue to provide solutions in which the community can have confidence. Effectively utilizing the newest computer hardware enables simulation efficiency and the ability to handle output from increasingly complex and detailed models. This will be accomplished through hierarchical multiscale algorithms in tandem with new strategies for data handling, analysis, and storage. These big ideas and cross-cutting technologies for enabling

One of the most important concerns in parallel computing is the proper distribution of workload across processors. For most scientific applications on massively parallel machines, the best approach to this distribution is to employ data parallelism; that is, to break the data structures supporting a computation into pieces and then to assign those pieces to different processors. Collectively, these partitioning and assignment tasks comprise the domain mapping problem.
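The partitioning half of the domain mapping problem can be sketched for the simplest case, a 1-D block decomposition. This is illustrative only: real domain mappers must also balance the work per piece and minimize communication between pieces.

```python
# Minimal sketch of data-parallel domain mapping: split a 1-D index
# space 0..n-1 into p contiguous blocks, one per processor, with block
# sizes differing by at most one (the classic block decomposition).

def block_partition(n, p):
    """Return p contiguous index ranges whose sizes differ by at most one."""
    base, extra = divmod(n, p)
    blocks, start = [], 0
    for rank in range(p):
        size = base + (1 if rank < extra else 0)  # first `extra` ranks get one more
        blocks.append(range(start, start + size))
        start += size
    return blocks
```

Each processor `rank` then owns `blocks[rank]` of the underlying array, and neighboring ranks exchange only the boundary elements of their pieces.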

Recent advances in vacuum sciences and applications are reviewed. Novel optical interferometer cavity devices enable pressure measurements with ppm accuracy. The innovative dynamic vacuum standard allows for pressure measurements with temporal resolution of 2 ms. Vacuum issues in the construction of huge ultra-high vacuum devices worldwide are reviewed. Recent advances in surface science and thin films include new phenomena observed in electron transport near solid surfaces as well as novel results on the properties of carbon nanomaterials. Precise techniques for surface and thin-film characterization have been applied in the conservation technology of cultural heritage objects and recent advances in the characterization of biointerfaces are presented. The combination of various vacuum and atmospheric-pressure techniques enables an insight into the complex phenomena of protein and other biomolecule conformations on solid surfaces. Studying these phenomena at solid-liquid interfaces is regarded as the main issue in the development of alternative techniques for drug delivery, tissue engineering and thus the development of innovative techniques for curing cancer and cardiovascular diseases. A review on recent advances in plasma medicine is presented as well as novel hypotheses on cell apoptosis upon treatment with gaseous plasma. Finally, recent advances in plasma nanoscience are illustrated with several examples and a roadmap for future activities is presented.

We present our recent results on the development and experimental testing of advanced dielectric materials that are capable of supporting the high RF electric fields generated by electron beams or pulsed high power microwaves. These materials have been optimized or specially designed for accelerator applications. The materials discussed here include low loss microwave ceramics, quartz, Chemical Vapor Deposition diamonds and nonlinear Barium Strontium Titanate based ferroelectrics.

Increasing public awareness of environmental pollution influences the search and development of technologies that help in clean up of organic and inorganic contaminants such as hydrocarbons and metals. An alternative and eco-friendly method of remediation technology of environments contaminated with these pollutants is the use of biosurfactants and biosurfactant-producing microorganisms. The diversity of biosurfactants makes them an attractive group of compounds for potential use in a wide variety of industrial and biotechnological applications. The purpose of this review is to provide a comprehensive overview of advances in the applications of biosurfactants and biosurfactant-producing microorganisms in hydrocarbon and metal remediation technologies. PMID:21340005

Work to develop and demonstrate the technology of structural ceramics for automotive engines and similar applications is described. Long-range technology is being sought to produce gas turbine engines for automobiles with reduced fuel consumption and reduced environmental impact. The Advanced Turbine Technology Application Project (ATTAP) test bed engine is designed such that, when installed in a 3,000 pound inertia weight automobile, it will provide low emissions, 42 miles per gallon fuel economy on diesel fuel, multifuel capability, costs competitive with current spark ignition engines, and noise and safety characteristics that meet Federal standards.

Advanced network applications such as remote instrument control, collaborative environments, and remote I/O are distinguished from traditional applications such as videoconferencing by their need to create multiple, heterogeneous flows with different characteristics. For example, a single application may require remote I/O for raw datasets, shared controls for a collaborative analysis system, streaming video for image rendering data, and audio for collaboration. Furthermore, each flow can have different requirements in terms of reliability, network quality of service, security, etc. The authors argue that new approaches to communication services, protocols, and network architecture are required both to provide high-level abstractions for common flow types and to support user-level management of flow creation and quality. They describe experiences with the development of such applications and communication services.

Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

The tip vortex flowfield plays a significant role in the performance of advanced aircraft propellers. The flowfield in the tip region is complex, three-dimensional and viscous with large secondary velocities. An analysis is presented using an approximate set of equations which contains the physics required by the tip vortex flowfield, but which does not require the resources of the full Navier-Stokes equations. A computer code was developed to predict the tip vortex flowfield of advanced aircraft propellers. A grid generation package was developed to allow specification of a variety of advanced aircraft propeller shapes. Calculations of the tip vortex generation on an SR3 type blade at high Reynolds numbers were made using this code and a parametric study was performed to show the effect of tip thickness on tip vortex intensity. In addition, calculations of the tip vortex generation on a NACA 0012 type blade were made, including the flowfield downstream of the blade trailing edge. Comparison of flowfield calculations with experimental data from an F4 blade was made. A user's manual was also prepared for the computer code (NASA CR-182178).

Reports technical effort by AlliedSignal Engines in sixth year of DOE/NASA funded project. Topics include: gas turbine engine design modifications of production APU to incorporate ceramic components; fabrication and processing of silicon nitride blades and nozzles; component and engine testing; and refinement and development of critical ceramics technologies, including: hot corrosion testing and environmental life predictive model; advanced NDE methods for internal flaws in ceramic components; and improved carbon pulverization modeling during impact. ATTAP project is oriented toward developing high-risk technology of ceramic structural component design and fabrication to carry forward to commercial production by 'bridging the gap' between structural ceramics in the laboratory and near-term commercial heat engine application. Current ATTAP project goal is to support accelerated commercialization of advanced, high-temperature engines for hybrid vehicles and other applications. Project objectives are to provide essential and substantial early field experience demonstrating ceramic component reliability and durability in modified, available, gas turbine engine applications; and to scale-up and improve manufacturing processes of ceramic turbine engine components and demonstrate application of these processes in the production environment.

Nuclear imaging techniques remain today's most reliable modality for the assessment and quantification of myocardial perfusion. In recent years, the field has experienced tremendous progress both in terms of dedicated cameras for cardiac applications and software techniques for image reconstruction. The most recent advances in single-photon emission computed tomography hardware and software are reviewed, focusing on how these improvements have resulted in an even more powerful diagnostic tool with reduced injected radiation dose and acquisition time.

Slides are reproduced that describe the importance of having high performance number crunching and graphics capability. They also indicate the types of research and development underway at Ames Research Center to ensure that, in the near term, Ames is a smart buyer and user, and in the long-term that Ames knows the best possible solutions for number crunching and graphics needs. The drivers for this research are real computational physics applications of interest to Ames and NASA. They are concerned with how to map the applications, and how to maximize the physics learned from the results of the calculations. The computer graphics activities are aimed at getting maximum information from the three-dimensional calculations by using the real time manipulation of three-dimensional data on the Silicon Graphics workstation. Work is underway on new algorithms that will permit the display of experimental results that are sparse and random, the same way that the dense and regular computed results are displayed.

November 2008 will be a few months after the official start of the LHC, when the highest quantum energy ever produced by mankind will be observed by the most complex piece of scientific equipment ever built. The LHC will open a new era in physics research and push the frontier of knowledge further. This achievement has been made possible by new technological developments in many fields, but computing is certainly the technology that has made the whole enterprise possible. Accelerator and detector design, construction management, data acquisition, detector monitoring, data analysis, event simulation, and theoretical interpretation are all computing-based HEP activities, and they also occur in many other research fields. Computing is everywhere and forms the common link between all the scientists and engineers involved. The ACAT workshop series, created back in 1990 as AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), has been covering the tremendous evolution of computing in its most advanced topics, trying to set up bridges between computer science and experimental and theoretical physics. Conference web-site: http://acat2008.cern.ch/ Programme and presentations: http://indico.cern.ch/conferenceDisplay.py?confId=34666

Translating advances in neuroscience into benefits for patients with mental illness presents enormous challenges because it involves both the most complex organ, the brain, and its interaction with a similarly complex environment. Dealing with such complexities demands powerful techniques. Computational psychiatry combines multiple levels and types of computation with multiple types of data in an effort to improve understanding, prediction and treatment of mental illness. Computational psychiatry, broadly defined, encompasses two complementary approaches: data driven and theory driven. Data-driven approaches apply machine-learning methods to high-dimensional data to improve classification of disease, predict treatment outcomes or improve treatment selection. These approaches are generally agnostic as to the underlying mechanisms. Theory-driven approaches, in contrast, use models that instantiate prior knowledge of, or explicit hypotheses about, such mechanisms, possibly at multiple levels of analysis and abstraction. We review recent advances in both approaches, with an emphasis on clinical applications, and highlight the utility of combining them.

Risk assessment is the process of quantifying the probability of a harmful effect to individuals or populations from human activities. Mechanistic approaches to risk assessment have been generally referred to as systems toxicology. Systems toxicology makes use of advanced analytical and computational tools to integrate classical toxicology and quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization. Three presentations, including two case studies involving both in vitro and in vivo approaches, described the current state of systems toxicology and the potential for its future application in chemical risk assessment. PMID:26977253

The European aircraft industry demands reduced development and operating costs. Structural weight reduction through the exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents several advances in the computational stability analysis of composite aerospace structures that contribute to this field. For stringer-stiffened panels, the main results of the completed EU project COCOMAT are given. It investigated the exploitation of reserves in primary fibre composite fuselage structures through accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

This paper describes the development of a numerical simulation system, which we call “Advanced Computer Science on SRM Internal Ballistics (ACSSIB)”, for the purpose of improving the performance and reliability of solid rocket motors (SRM). The ACSSIB system consists of a casting simulation code for solid propellant slurry, a correlation database of the local burning rate of cured propellant in terms of local slurry flow characteristics, and a numerical code for the internal ballistics of SRM, as well as relevant hardware. This paper describes mainly the objectives, the contents of this R&D, and the output of fiscal year 2008.

To more accurately treat severe accidents in fast reactors, a program has been set up to investigate new computational models and approaches. The product of this effort is a computer code, the Advanced Fluid Dynamics Model (AFDM). This paper describes some of the basic features of the numerical algorithm used in AFDM. Aspects receiving particular emphasis are the fractional-step method of time integration, the semi-implicit pressure iteration, the virtual mass inertial terms, the use of three velocity fields, higher order differencing, convection of interfacial area with source and sink terms, multicomponent diffusion processes in heat and mass transfer, the SESAME equation of state, and vectorized programming. A calculated comparison with an isothermal tetralin/ammonia experiment is performed. We conclude that significant improvements are possible in reliably calculating the progression of severe accidents with further development.
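The fractional-step idea, treating each physical process in its own sub-step, can be illustrated on a toy 1-D advection-diffusion equation. This is a sketch of the general technique, not AFDM's actual algorithm:

```python
# Illustrative fractional-step update: one time step of
# u_t + c*u_x = nu*u_xx on a periodic grid, split into an advection
# sub-step followed by a diffusion sub-step, so each process can use the
# discretization best suited to it.

def step(u, c, nu, dx, dt):
    n = len(u)
    # Sub-step 1: advection by first-order upwind differences (c > 0)
    ua = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(n)]
    # Sub-step 2: diffusion by explicit central differences
    return [ua[i] + nu * dt / dx ** 2 * (ua[(i + 1) % n] - 2 * ua[i] + ua[i - 1])
            for i in range(n)]
```

On the periodic grid both sub-steps conserve the discrete integral of u, which gives a quick sanity check for any splitting scheme of this kind.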

This viewgraph presentation gives an overview of a debugger for computational grid applications. Details are given on NAS parallel tools groups (including parallelization support tools, evaluation of various parallelization strategies, and distributed and aggregated computing), debugger dependencies, scalability, initial implementation, the process grid, and information on Globus.

Progress achieved during this report period is presented on the following topics: Development and application of gyrokinetic particle codes to tokamak transport, development of techniques to take advantage of parallel computers; model dynamo and bootstrap current drive; and in general maintain our broad-based program in basic plasma physics and computer modeling.

Focusing on how computers can and should be used in intercollegiate forensics, this journal issue offers the perspectives of a number of forensics instructors. The lead article, "Applications of Computer Technology in Intercollegiate Debate" by Theodore F. Sheckels, Jr., discusses five areas in which forensics educators might use computer…

With growing computer power, novel diagnostic and therapeutic medical technologies, coupled with an increasing knowledge of pathophysiology from gene to organ systems, it is increasingly feasible to apply multi-scale patient-specific modeling based on proven disease mechanisms to guide and predict the response to therapy in many aspects of medicine. This is an exciting and relatively new approach, for which efficient methods and computational tools are of the utmost importance. Already, investigators have designed patient-specific models in almost all areas of human physiology. Not only will these models be useful on a large scale in the clinic to predict and optimize the outcome from surgery and non-interventional therapy, but they will also provide pathophysiologic insights from cell to tissue to organ system, and therefore help to understand why specific interventions succeed or fail. PMID:18598988

Among the different extraction techniques used at analytical and preparative scale, supercritical fluid extraction (SFE) is one of the most widely used. This review covers the most recent developments of SFE in different fields, such as food science, natural products, by-product recovery, and the pharmaceutical and environmental sciences, during the period 2007-2009. The review focuses on the most recent advances and applications in these areas; notable among them are the strong impact of SFE in extracting high-value compounds from food and natural products, as well as its increasing importance in areas such as heavy-metal recovery, enantiomeric resolution, and drug delivery systems.

The numerical models of ocean acoustic propagation developed in the 1980s are still in widespread use today, and the field of computational ocean acoustics is often considered a mature field. However, the explosive increase in computational power available to the community has created opportunities for modeling phenomena that were earlier beyond reach. Most notably, three-dimensional propagation and scattering problems have been computationally prohibitive, but are now addressed routinely using brute-force numerical approaches such as the Finite Element Method, in particular for target scattering problems, where they are being combined with the traditional wave theory propagation models in hybrid modeling frameworks. Also, recent years have seen the development of hybrid approaches coupling oceanographic circulation models with acoustic propagation models, enabling the forecasting of sonar performance uncertainty in dynamic ocean environments. These and other advances made over the last couple of decades support the notion that the field of computational ocean acoustics is far from mature. [Work supported by the Office of Naval Research, Code 321OA]

The field of nanomagnetism has recently attracted tremendous attention as it can potentially deliver low-power, high-speed and dense non-volatile memories. It is now possible to engineer the size, shape, spacing, orientation and composition of sub-100 nm magnetic structures. This has spurred the exploration of nanomagnets for unconventional computing paradigms. Here, we harness the energy-minimization nature of nanomagnetic systems to solve the quadratic optimization problems that arise in computer vision applications, which are computationally expensive. By exploiting the magnetization states of nanomagnetic disks as state representations of a vortex and single domain, we develop a magnetic Hamiltonian and implement it in a magnetic system that can identify the salient features of a given image with more than 85% true positive rate. These results show the potential of this alternative computing method to develop a magnetic coprocessor that might solve complex problems in fewer clock cycles than traditional processors.
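
The energy-minimization behavior described above is, at its core, quadratic (Ising/QUBO-style) optimization. As a hedged sketch only, the cost matrix, solver, and three-variable problem below are invented toy stand-ins, not the paper's magnetic Hamiltonian; a greedy bit-flip relaxation over a small quadratic energy can look like this:

```python
import random

def qubo_energy(state, Q):
    # Quadratic energy E(s) = sum_ij Q[i][j] * s_i * s_j for binary s_i.
    n = len(state)
    return sum(Q[i][j] * state[i] * state[j] for i in range(n) for j in range(n))

def minimize(Q, n, sweeps=200, seed=0):
    # Greedy single-bit flips: accept any flip that lowers the energy,
    # loosely mimicking how a nanomagnet array relaxes to a low-energy state.
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(sweeps):
        i = rng.randrange(n)
        trial = state[:]
        trial[i] ^= 1
        if qubo_energy(trial, Q) < qubo_energy(state, Q):
            state = trial
    return state, qubo_energy(state, Q)

# Toy problem: diagonal terms reward turning a bit on; the positive
# off-diagonal terms penalize selecting bits 0 and 1 together.
Q = [[-2, 3, 0],
     [3, -2, 0],
     [0, 0, -1]]
best, energy = minimize(Q, 3)
```

In an actual nanomagnetic coprocessor the relaxation would be performed by the physics itself rather than by an explicit software loop.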

Representative examples are presented of applications and development of advanced Computational Fluid Dynamics (CFD) codes for aerodynamic design at the McDonnell Douglas Corporation (MDC). Transonic potential and Euler codes, interactively coupled with boundary layer computation, and solutions of slender-layer Navier-Stokes approximation are applied to aircraft wing/body calculations. An optimization procedure using evolution theory is described in the context of transonic wing design. Euler methods are presented for analysis of hypersonic configurations, and helicopter rotors in hover and forward flight. Several of these projects were accepted for access to the Numerical Aerodynamic Simulation (NAS) facility at the NASA-Ames Research Center.

A general method for parallel and vector numerical solutions of stochastic dynamic programming problems is described for optimal control of general nonlinear, continuous time, multibody dynamical systems, perturbed by Poisson as well as Gaussian random white noise. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random atmospheric fluctuations. The numerical formulation is highly suitable for a vector multiprocessor or vectorizing supercomputer, and results exhibit high processor efficiency and numerical stability. Advanced computing techniques, data structures, and hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations.
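
The Bellman backward recursion at the heart of stochastic dynamic programming can be shown on a deliberately tiny, hypothetical control problem (this is not the flight-dynamics model of the abstract; the states, costs, and noise distribution are invented for illustration):

```python
def solve_dp(n_states=11, horizon=5, controls=(-1, 0, 1),
             noise=((-1, 0.25), (0, 0.5), (1, 0.25))):
    # States are integers -5..5; terminal cost is zero.
    states = list(range(-(n_states // 2), n_states // 2 + 1))
    clamp = lambda x: max(states[0], min(states[-1], x))
    V = {x: 0.0 for x in states}
    for _ in range(horizon):                      # backward induction
        V_new = {}
        for x in states:                          # independent per state
            best = float("inf")
            for u in controls:
                # Stage cost (quadratic) + expected cost-to-go over the noise.
                expected = x * x + 0.5 * u * u + sum(
                    p * V[clamp(x + u + w)] for w, p in noise)
                best = min(best, expected)
            V_new[x] = best
        V = V_new
    return V

V = solve_dp()
```

The inner loop over states is independent per state, which is exactly the structure that vectorizes well; the curse of dimensionality appears as this state grid grows with problem dimension.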

This article reviews the latest developments in computational cardiology. It focuses on the contribution of cardiac modelling to the development of new therapies as well as the advancement of existing ones for cardiac arrhythmias and pump dysfunction. Reviewed are cardiac modelling efforts aimed at advancing and optimizing existing therapies for cardiac disease (defibrillation, ablation of ventricular tachycardia, and cardiac resynchronization therapy) and at suggesting novel treatments, including novel molecular targets, as well as efforts to use cardiac models in stratification of patients likely to benefit from a given therapy, and the use of models in diagnostic procedures. PMID:23104919

NASA has identified water vapor emission into the upper atmosphere from commercial transport aircraft, particularly as it relates to the formation of persistent contrails, as a potential environmental problem. Since 1999, MSE has been working with NASA-LaRC to investigate the concept of a transport-size emissionless aircraft fueled with liquid hydrogen combined with other possible breakthrough technologies. The goal of the project is to significantly advance air transportation in the next decade and beyond. The power and propulsion (P/P) system currently being studied would be based on hydrogen fuel cells (HFCs) powering electric motors, which drive fans for propulsion. The liquid water reaction product is retained onboard the aircraft until a flight mission is completed. As of now, NASA-LaRC and MSE have identified P/P system components that, according to the high-level analysis conducted to date, are light enough to make the emissionless aircraft concept feasible. Calculated maximum aircraft ranges (within a maximum weight constraint) and other performance predictions are included in this report. This report also includes current information on advanced energy-related technologies, which are still being researched, as well as breakthrough physics concepts that may be applicable for advanced energetics and aerospace propulsion in the future.

Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.
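
The steps above (identify a subset, select a leader, retrieve, broadcast) can be sketched in plain Python; the class and function names here are illustrative stand-ins, not the actual parallel-computer control-system API:

```python
class ComputeNode:
    def __init__(self, node_id):
        self.node_id = node_id
        self.application = None

    def broadcast(self, application, peers):
        # The leader distributes the already-retrieved image to every peer,
        # avoiding N simultaneous reads of shared storage.
        self.application = application
        for peer in peers:
            peer.application = application

def collective_load(nodes, job_node_count, fetch):
    subset = nodes[:job_node_count]      # identify compute nodes for the job
    leader = subset[0]                   # select the job leader
    image = fetch()                      # leader retrieves the application
    leader.broadcast(image, subset[1:])  # leader broadcasts to the subset
    return subset

nodes = [ComputeNode(i) for i in range(8)]
subset = collective_load(nodes, 4, fetch=lambda: b"app-binary")
```

The point of the pattern is that only the leader touches storage; the remaining nodes of the subset receive the image over the interconnect.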

This report summarizes the work entitled 'Advances in Hypersonic Vehicle Synthesis with Application to Studies of Advanced Thermal Protection Systems.' The effort was in two areas: (1) development of advanced methods of trajectory and propulsion system optimization; and (2) development of advanced methods of structural weight estimation. The majority of the effort was spent in the trajectory area.

A description is presented of nonaerospace applications for advanced composite materials with special emphasis on the automotive applications. The automotive industry has to satisfy exacting requirements to reduce the average fuel consumption of cars. A feasible approach to accomplish this involves the development of composite cars with a total weight of 2400 pounds and a fuel consumption of 33 miles per gallon. In connection with this possibility, the automotive companies have started to look seriously at composite materials. The aerospace industry has over the past decade accumulated a considerable data base on composite materials and this is being made available to the nonaerospace sector. However, the automotive companies will place prime emphasis on low cost resins which lend themselves to rapid fabrication techniques.

This report presents four geotechnical engineering programs for use on personal computing systems. An Apple II-Plus running the Applesoft language under DOS 3.3 was used. The programs include the solution of the signpost problem, the cantilevered sheet pile problem, the slope stability problem, and the flexible pavement design program. Each chapter is independent and does not rely upon theories or data presented in other chapters. A chapter outlines the theory used and also presents a users guide, a program list, and verification of the program by hand calculation. This report represents the product a practicing engineer would expect to receive when procuring software services.

The advent of the technology of Dense Wavelength Division Multiplexing (DWDM) in Optical Fiber Networks (OFNs) has resulted in the necessity of developing advanced Optical Add/Drop Multiplexers (OADMs) on the basis of submicron Bragg gratings. The OADMs for dense multichannel OFNs with bit rates 10 - 40 Gbits/s per channel and channel spacing 200, 100 and 50 GHz must possess rectangular-shaped reflection/transmission spectra and linear phase characteristic within the stop/passband. These features cannot be achieved with uniform periodic Bragg gratings and therefore nonuniform gratings with space-modulated coupling coefficient should be used. We present the recent advances in the design and fabrication of narrowband wavelength-selective optical filters for DWDM applications on the basis of single-mode fibers with side-polishing and periodic relief Bragg gratings with apodized coupling coefficient. The peculiarities of propagation, interaction and diffraction of electromagnetic waves in nonuniform Bragg grating structures are considered. Narrowband reflection filters based on side-polished fibers and submicron relief gratings on SiO2 and SiO materials are designed and fabricated. The filters have stopband width 0.4 - 0.8 nm and peak reflectivity R > 98% in the 1.55 μm wavelength communication region. Narrowband flat-top reflection filters for DWDM applications based on side-polished fibers and periodic relief Bragg gratings are designed. The schemes for multichannel integration of Bragg grating filters into OFNs are presented.
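
For context on the reflectivity figures quoted above, standard coupled-mode theory gives the peak reflectivity of a uniform Bragg grating in closed form, R = tanh^2(kappa * L); the coupling coefficient and length below are illustrative, not parameters of the fabricated filters:

```python
import math

def peak_reflectivity(kappa_per_m, length_m):
    # Peak reflectivity of a uniform grating from coupled-mode theory.
    return math.tanh(kappa_per_m * length_m) ** 2

# A coupling coefficient of ~300 1/m over 10 mm already exceeds
# the R > 98% figure quoted for the 1.55 um filters.
R = peak_reflectivity(300.0, 0.010)
```

A uniform grating, however, also produces strong sidelobes and a nonlinear in-band phase, which is why the apodized, space-modulated coupling coefficients discussed in the text are needed for flat-top DWDM filters.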

Proteomics plays an increasingly important role in our quest to understand cardiovascular biology. Fueled by analytical and computational advances in the past decade, proteomics applications can now go beyond merely inventorying protein species, and address sophisticated questions on cardiac physiology. The advent of massive mass spectrometry datasets has in turn led to increasing intersection between proteomics and big data science. Here we review new frontiers in technological developments and their applications to cardiovascular medicine. The impact of big data science on cardiovascular proteomics investigations and translation to medicine is highlighted.

On November 19, 2014, the Advanced Scientific Computing Advisory Committee (ASCAC) was charged with reviewing the Department of Energy’s conceptual design for the Exascale Computing Initiative (ECI). In particular, this included assessing whether there are significant gaps in the ECI plan or areas that need to be given priority or extra management attention. Given the breadth and depth of previous reviews of the technical challenges inherent in exascale system design and deployment, the subcommittee focused its assessment on organizational and management issues, considering technical issues only as they informed organizational or management priorities and structures. This report presents the observations and recommendations of the subcommittee.

Computational mechanics has been advanced in every area of orthopedic biomechanics. The objective of this paper is to provide a general review of the computational models used in the analysis of the mechanical function of the knee joint in different loading and pathological conditions. Major review articles published in related areas are summarized first. The constitutive models for soft tissues of the knee are briefly discussed to facilitate understanding the joint modeling. A detailed review of the tibiofemoral joint models is presented thereafter. The geometry reconstruction procedures as well as some critical issues in finite element modeling are also discussed. Computational modeling can be a reliable and effective method for the study of mechanical behavior of the knee joint, if the model is constructed correctly. Single-phase material models have been used to predict the instantaneous load response for the healthy knees and repaired joints, such as total and partial meniscectomies, ACL and PCL reconstructions, and joint replacements. Recently, poromechanical models accounting for fluid pressurization in soft tissues have been proposed to study the viscoelastic response of the healthy and impaired knee joints. While the constitutive modeling has been considerably advanced at the tissue level, many challenges still exist in applying a good material model to three-dimensional joint simulations. A complete model validation at the joint level seems impossible presently, because only simple data can be obtained experimentally. Therefore, model validation may be concentrated on the constitutive laws using multiple mechanical tests of the tissues. Extensive model verifications at the joint level are still crucial for the accuracy of the modeling. PMID:23509602

Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
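
The core of the first application is a small set of textbook perfect-gas relations. A minimal sketch (the standard formulas for gamma = 1.4, not code from the actual tool) is:

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def isentropic_ratios(M):
    # Total-to-static temperature and pressure ratios at Mach M.
    Tt_T = 1.0 + 0.5 * (GAMMA - 1.0) * M * M
    pt_p = Tt_T ** (GAMMA / (GAMMA - 1.0))
    return Tt_T, pt_p

def normal_shock_mach(M1):
    # Downstream Mach number behind a normal shock (valid for M1 > 1).
    num = 1.0 + 0.5 * (GAMMA - 1.0) * M1 * M1
    den = GAMMA * M1 * M1 - 0.5 * (GAMMA - 1.0)
    return math.sqrt(num / den)

Tt_T, pt_p = isentropic_ratios(2.0)  # Mach 2: Tt/T = 1.8, pt/p ~ 7.82
M2 = normal_shock_mach(2.0)          # ~ 0.577
```

Oblique shocks reduce to the same normal-shock relations applied to the velocity component normal to the shock, which is how a calculator of this kind typically handles the ramp cases.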

Although computational modeling is common in many areas of science and engineering, only recently have advances in experimental techniques and medical imaging allowed this tool to be applied in cardiac surgery. Despite its infancy in cardiac surgery, computational modeling has been useful in calculating the effects of clinical devices and surgical procedures. In this review, we present several examples that demonstrate the capabilities of computational cardiac modeling in cardiac surgery. Specifically, we demonstrate its ability to simulate surgery, predict myofiber stress and pump function, and quantify changes to regional myocardial material properties. In addition, issues that would need to be resolved in order for computational modeling to play a greater role in cardiac surgery are discussed. PMID:24708036

The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. Our authors present Synergia's design principles and its performance on HPC platforms.

A primary requirement for the successful deployment of advanced manufacturing applications is the need for a complete and accessible definition of the product. This product definition must not only provide an unambiguous description of a product's nominal shape but must also contain complete tolerance specification and general property attributes. Likewise, the product definition's geometry, topology, tolerance data, and modeler manipulative routines must be fully accessible through a robust application programmer interface. This paper describes a tolerancing capability using features that complements a geometric solid model with a representation of conventional and geometric tolerances and non-shape property attributes. This capability guarantees a complete and unambiguous definition of tolerances for manufacturing applications. An object-oriented analysis and design of the feature-based tolerance domain was performed. The design represents and relates tolerance features, tolerances, and datum reference frames. The design also incorporates operations that verify correctness and check for the completeness of the overall tolerance definition. The checking algorithm is based upon the notion of satisfying all of a feature's toleranceable aspects. Benefits from the feature-based tolerance modeler include: advancing complete product definition initiatives, incorporating tolerances in product data exchange, and supplying computer-integrated manufacturing applications with tolerance information.
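
The completeness check described above, in which every toleranceable aspect of a feature must be satisfied, can be sketched as follows; the class, aspect names, and values are invented for illustration, not the modeler's actual API:

```python
class ToleranceFeature:
    def __init__(self, name, aspects):
        self.name = name
        self.aspects = set(aspects)   # e.g. {"size", "location", "form"}
        self.tolerances = []          # (aspect, value) pairs

    def add_tolerance(self, aspect, value):
        self.tolerances.append((aspect, value))

    def is_complete(self):
        # Complete when every toleranceable aspect is covered by
        # at least one tolerance.
        covered = {aspect for aspect, _ in self.tolerances}
        return self.aspects <= covered

hole = ToleranceFeature("hole-1", ["size", "location"])
hole.add_tolerance("size", 0.05)
incomplete = hole.is_complete()       # location not yet toleranced
hole.add_tolerance("location", 0.10)
complete = hole.is_complete()
```

A real modeler would additionally verify correctness (valid datum reference frames, non-conflicting tolerances), not just coverage.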

Flow about the NASA Trapezoidal Wing is computed with several turbulence models by using grids from the first High Lift Prediction Workshop in an effort to advance understanding of computational fluid dynamics modeling for this type of flowfield. Transition is accounted for in many of the computations. In particular, a recently-developed 4-equation transition model is utilized and works well overall. Accounting for transition tends to increase lift and decrease moment, which improves the agreement with experiment. Upper surface flap separation is reduced, and agreement with experimental surface pressures and velocity profiles is improved. The predicted shape of wakes from upstream elements is strongly influenced by grid resolution in regions above the main and flap elements. Turbulence model enhancements to account for rotation and curvature have the general effect of increasing lift and improving the resolution of the wing tip vortex as it convects downstream. However, none of the models improve the prediction of surface pressures near the wing tip, where more grid resolution is needed.

Computed tomography (CT) has evolved into a powerful diagnostic tool and it is impossible to imagine current clinical practice without CT imaging. Due to its widespread availability, ease of clinical application, superb sensitivity for detection of coronary artery disease (CAD), and non-invasive nature, CT has become a valuable tool within the armamentarium of the cardiologist. In the last few years, numerous technological advances in CT have occurred—including dual energy CT (DECT), spectral CT and CT-based molecular imaging. By harnessing the advances in technology, cardiac CT has advanced beyond the mere evaluation of coronary stenosis to an imaging modality that permits accurate plaque characterization, assessment of myocardial perfusion and even probing of molecular processes that are involved in coronary atherosclerosis. Novel innovations in CT contrast agents and pre-clinical spectral CT devices have paved the way for CT-based molecular imaging. PMID:26068288

Methods and tools for design and modelling of tokamak operation scenarios are discussed with particular application to ITER advanced scenarios. Simulations of hybrid and steady-state scenarios performed with the integrated tokamak modelling suite of codes CRONOS are presented. The advantages of a possible steady-state scenario based on cyclic operations, alternating phases of positive and negative loop voltage, with no magnetic flux consumption on average, are discussed. For regimes in which current alignment is an issue, a general method for scenario design is presented, based on the characteristics of the poloidal current density profile.

The Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology has made significant advancements over the last decade with flight test demonstrations and ground development campaigns. The first generation (Gen-1) design and materials were flight tested with the successful third Inflatable Reentry Vehicle Experiment flight test of a 3-m HIAD (IRVE-3). Ground development efforts incorporated materials with higher thermal capabilities for the inflatable structure (IS) and flexible thermal protection system (F-TPS) as a second generation (Gen-2) system. Current efforts and plans are focused on extending capabilities to improve overall system performance and reduce areal weight, as well as expand mission applicability. F-TPS materials that offer greater thermal resistance, and ability to be packed to greater density, for a given thickness are being tested to demonstrate thermal performance benefits and manufacturability at flight-relevant scale. IS materials and construction methods are being investigated to reduce mass, increase load capacities, and improve durability for packing. Previous HIAD systems focused on symmetric geometries using stacked torus construction. Flight simulations and trajectory analysis show that symmetrical HIADs may provide L/D up to 0.25 via movable center of gravity (CG) offsets. HIAD capabilities can be greatly expanded to suit a broader range of mission applications with asymmetric shapes and/or modulating L/D. Various HIAD concepts are being developed to provide greater control to improve landing accuracy and reduce dependency upon propulsion systems during descent and landing. Concepts being studied include a canted stack torus design, control surfaces, and morphing configurations that allow the shape to be actively manipulated for flight control. This paper provides a summary of recent HIAD development activities, and plans for future HIAD developments including advanced materials, improved construction techniques, and alternate

This original research dissertation is composed of a new numerical technique based on Chebyshev polynomials that is applied on scattering problems, a phenomenological kinetics study for CO oxidation on RuO2 surface, and an experimental study on methanol coupling with doped metal oxide catalysts. Minimum Error Method (MEM), a least-squares minimization method, provides an efficient and accurate alternative to solve systems of ordinary differential equations. Existing methods usually utilize matrix methods which are computationally costly. MEM, which is based on the Chebyshev polynomials as a basis set, uses the recursion relationships and fast Chebyshev transforms which scale as O(N). For large basis set calculations this provides an enormous computational efficiency in the calculations. Chebyshev polynomials are also able to represent non-periodic problems very accurately. We applied MEM on elastic and inelastic scattering problems: it is more efficient and accurate than the traditionally used Kohn variational principle, and it also provides the wave function in the interaction region. Phenomenological kinetics (PK) is widely used in industry to predict the optimum conditions for a chemical reaction. PK neglects the fluctuations, assumes no lateral interactions, and considers an ideal mix of reactants. The rate equations are tested by fitting the rate constants to the results of the experiments. Unfortunately, there are numerous examples where a fitted mechanism was later shown to be erroneous. We have undertaken a thorough comparison between the phenomenological equations and the results of kinetic Monte Carlo (KMC) simulations performed on the same system. The PK equations are qualitatively consistent with the KMC results but are quantitatively erroneous as a result of interplays between the adsorption and desorption events. The experimental study on methanol coupling with doped metal oxide catalysts demonstrates the doped metal oxides as a new class of catalysts
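
The O(N) scaling claimed for MEM rests on the Chebyshev three-term recursion T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x), which evaluates the whole basis without any matrix operations. A minimal illustration (not the MEM code itself):

```python
import math

def chebyshev_values(x, n_max):
    # Return [T_0(x), ..., T_{n_max}(x)] using only the recursion
    # T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x), with T_0 = 1 and T_1 = x.
    T = [1.0, x]
    for n in range(1, n_max):
        T.append(2.0 * x * T[n] - T[n - 1])
    return T[: n_max + 1]

# Sanity check against the closed form T_n(cos t) = cos(n t).
t = 0.3
vals = chebyshev_values(math.cos(t), 5)   # vals[5] should equal cos(1.5)
```

Each additional basis function costs one multiply-add, which is why large Chebyshev basis sets remain cheap compared with dense matrix approaches.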

One remaining barrier to the clinical acceptance of electronic imaging and information systems is the difficulty in providing intuitive access to the information needed for a specific clinical task (such as reaching a diagnosis or tracking clinical progress). The purpose of this research was to create a development environment that enables the design and implementation of advanced digital imaging workstations. We used formal data and process modeling to identify the diagnostic and quantitative data that radiologists use and the tasks that they typically perform to make clinical decisions. We studied a diverse range of radiology applications, including diagnostic neuroradiology in an academic medical center, pediatric radiology in a children's hospital, screening mammography in a breast cancer center, and thoracic radiology consultation for an oncology clinic. We used object-oriented analysis to develop software toolkits that enable a programmer to rapidly implement applications that closely match clinical tasks. The toolkits support browsing patient information, integrating patient images and reports, manipulating images, and making quantitative measurements on images. Collectively, we refer to these toolkits as the UCLA Digital ViewBox toolkit (ViewBox/Tk). We used the ViewBox/Tk to rapidly prototype and develop a number of diverse medical imaging applications. Our task-based toolkit approach enabled rapid and iterative prototyping of workstations that matched clinical tasks. The toolkit functionality and performance provided a 'hands-on' feeling for manipulating images, and for accessing textual information and reports. The toolkits directly support a new concept for protocol-based reading of diagnostic studies. The design supports the implementation of network-based application services (e.g., prefetching, workflow management, and post-processing) that will facilitate the development of future clinical applications.

Under the Department of Energy's (DOE) Solar Thermal Technology Program, Sandia National Laboratories (SNLA) is developing heat engines for terrestrial Solar Distributed Heat Receivers. SNLA has identified the Stirling to be one of the most promising candidates for the terrestrial applications. The free-piston Stirling engine (FPSE) has the potential to meet the DOE goals for both performance and cost. The National Aeronautics and Space Administration (NASA) Lewis Research Center (LeRC) is conducting free-piston Stirling activities which are directed toward a dynamic power source for space applications. Space power system requirements include high efficiency, very long life, high reliability and low vibration. The FPSE has the potential for future high power space conversion systems, either solar or nuclear. Generic free-piston technology is currently being developed by LeRC for DOE/ORNL for use with a residential heat pump under an Interagency Agreement. Since 1983, the SP-100 Program (DOD/NASA/DOE) has been developing dynamic power sources for space. Although both applications (heat pump and space power) appear to be quite different, their requirements complement each other. A cooperative Interagency Agreement (IAA) was signed in 1985 with NASA Lewis to provide technical management for an Advanced Stirling Conversion System (ASCS) for SNLA. Conceptual design(s) using a free-piston Stirling (FPSE), and a heat pipe will be discussed. The ASCS will be designed using technology which can reasonably be expected to be available in the 1980s.

The purpose is to document the functional requirements and detailed specifications for the Inter-Computer Communications Services (ICCS) of the Advanced Information Processing System (AIPS). An introductory section is provided to outline the overall architecture and functional requirements of the AIPS and to present an overview of the ICCS. An overview of the AIPS architecture as well as a brief description of the AIPS software is given. The guarantees of the ICCS are provided, and the ICCS is described as a seven-layered International Standards Organization (ISO) Model. The ICCS functional requirements, functional design, and detailed specifications as well as each layer of the ICCS are also described. A summary of results and suggestions for future work are presented.

Good morning. Welcome to SciDAC 2005 and San Francisco. SciDAC is all about computational science and scientific discovery. In a large sense, computational science characterizes SciDAC and its intent is change. It transforms both our approach and our understanding of science. It opens new doors and crosses traditional boundaries while seeking discovery. In terms of twentieth century methodologies, computational science may be said to be transformational. There are a number of examples to this point. First are the sciences that encompass climate modeling. The application of computational science has in essence created the field of climate modeling. This community is now international in scope and has provided precision results that are challenging our understanding of our environment. A second example is that of lattice quantum chromodynamics. Lattice QCD, while adding precision and insight to our fundamental understanding of strong interaction dynamics, has transformed our approach to particle and nuclear science. The individual investigator approach has evolved to teams of scientists from different disciplines working side-by-side towards a common goal. SciDAC is also undergoing a transformation. This meeting is a prime example. Last year it was a small programmatic meeting tracking progress in SciDAC. This year, we have a major computational science meeting with a variety of disciplines and enabling technologies represented. SciDAC 2005 should position itself as a new cornerstone for Computational Science and its impact on science. As we look to the immediate future, FY2006 will bring a new cycle to SciDAC. Most of the program elements of SciDAC will be re-competed in FY2006. The re-competition will involve new instruments for computational science, new approaches for collaboration, as well as new disciplines. There will be new opportunities for virtual experiments in carbon sequestration, fusion, and nuclear power and nuclear waste, as well as collaborations

Research was performed in the area of computational modeling and application of hypersonic, high-enthalpy, thermo-chemical nonequilibrium flow (aerothermodynamics) problems. A number of computational fluid dynamic (CFD) codes were developed and applied to simulate high-altitude rocket plumes, the Aeroassist Flight Experiment (AFE), hypersonic base flow for planetary probes, the single expansion ramp nozzle (SERN) model connected with the National Aerospace Plane, hypersonic drag devices, hypersonic ramp flows, ballistic range models, shock tunnel facility nozzles, transient and steady flows in the shock tunnel facility, arc-jet flows, thermochemical nonequilibrium flows around simple and complex bodies, axisymmetric ionized flows of interest to re-entry, unsteady shock-induced combustion phenomena, high-enthalpy pulsed facility simulations, and unsteady shock boundary layer interactions in shock tunnels. Computational modeling involved developing appropriate numerical schemes for the flows of interest and developing, applying, and validating appropriate thermochemical processes. As part of improving the accuracy of the numerical predictions, adaptive grid algorithms were explored, and a user-friendly, self-adaptive code (SAGE) was developed. Aerothermodynamic flows of interest included energy transfer due to strong radiation, and a significant level of effort was spent in developing computational codes for calculating radiation and radiation modeling. In addition, computational tools were developed and applied to predict the radiative heat flux and spectra that reach the model surface.

An original method, together with measurement devices and a software tool, is proposed for the examination of magneto-mechanical phenomena in a wide range of SMART applications. In many high-end market constructions it is necessary to examine mechanical and magnetic properties simultaneously. Technological processes for the fabrication of modern materials (for example cutting, premagnetisation and prestress) and the advanced concept of using SMART structures call for the design of a next-generation system for optimizing electric and magnetic field distribution. An original fast scanner with multisensor probes and a static resolution exceeding one million points has been constructed to measure all components of the magnetic field intensity vector H and to visualize them in a form acceptable to end users. The scanner can also acquire electric potentials on a surface, allowing it to work with magneto-piezo devices. Advanced electronic subsystems process the results in the Magscaner Vison System, and the corresponding software, Maglab, has also been evaluated. The Dipole Contour Method (DCM) is provided for modeling the coupled magnetic and electric states of materials and for visually interpreting the experimental data. Dedicated software interoperates with industrial parametric CAD systems. The measurement technique consists of acquiring a cloud of points, similar to tomography, followed by 3D visualisation. The ongoing verification of the 3D digitizer's capabilities will enable inspection of cylindrical SMART actuators and miniature pellets designed for oscillation dampers in various constructions, for example in the vehicle industry.

The stated goal of High Performance Fortran (HPF) was to 'address the problems of writing data parallel programs where the distribution of data affects performance'. After examining the current version of the language we are led to the conclusion that HPF has not fully achieved this goal. While the basic distribution functions offered by the language - regular block, cyclic, and block-cyclic distributions - can support regular numerical algorithms, advanced applications such as particle-in-cell codes or unstructured mesh solvers cannot be expressed adequately. We believe that this is a major weakness of HPF, significantly reducing its chances of becoming accepted in the numeric community. The paper discusses the data distribution and alignment issues in detail, points out some flaws in the basic language, and outlines possible future paths of development. Furthermore, we briefly deal with the issue of task parallelism and its integration with the data parallel paradigm of HPF.
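
The three regular mappings named in the abstract can be made concrete with a small owner-computes sketch. HPF itself expresses these declaratively through the DISTRIBUTE directive; the functions below are only an illustration of which processor owns which global index, not part of the language.

```python
# Sketch of HPF-style regular distributions: which of p processors owns
# global index i of an n-element array. Illustrative only.

def owner_block(i, n, p):
    # BLOCK: contiguous chunks of ceil(n/p) elements per processor
    b = -(-n // p)  # ceiling division
    return i // b

def owner_cyclic(i, n, p):
    # CYCLIC: elements dealt round-robin, one at a time
    return i % p

def owner_block_cyclic(i, n, p, k):
    # CYCLIC(k): blocks of k elements dealt round-robin
    return (i // k) % p

print([owner_block(i, 8, 4) for i in range(8)])           # [0, 0, 1, 1, 2, 2, 3, 3]
print([owner_cyclic(i, 8, 4) for i in range(8)])          # [0, 1, 2, 3, 0, 1, 2, 3]
print([owner_block_cyclic(i, 8, 2, 2) for i in range(8)]) # [0, 0, 1, 1, 0, 0, 1, 1]
```

The paper's complaint is visible even at this scale: an irregular mapping (e.g. particles scattered over an unstructured mesh) cannot be written as any closed-form function of i like those above.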

Surface plasmon resonance (SPR) biosensors represent the most advanced label-free optical affinity biosensor technology. In the last decade numerous SPR sensor platforms have been developed and applied in the life sciences and bioanalytics. This contribution reviews the state of the art in the development of SPR (bio)sensor technology and presents selected results of research into SPR biosensors at the Institute of Photonics and Electronics, Prague. The developments discussed in detail include a miniature fiber optic SPR sensor for localized measurements, a compact SPR sensor for field use and a multichannel SPR sensor for high-throughput screening. Examples of applications for the detection of analytes related to medical diagnostics (biomarkers, hormones, antibodies), environmental monitoring (endocrine disrupting compounds), and food safety (pathogens and toxins) are given.

Among the multiple branches of nanotechnology applications in the area of medicine and biology, nanoparticle technology is the fastest growing and shows significant future promise. Nanoscale structures, similar in size to many biological molecules, show physical and chemical properties different from those of either small molecules or bulk materials, and find many applications in the fields of biomedical imaging and therapy. Gold nanoparticles (AuNPs) are relatively inert in the biological environment and have a number of physical properties that are suitable for several biomedical applications. For example, AuNPs have been successfully employed in inducing localized hyperthermia for the destruction of tumors, radiotherapy for cancer, photodynamic therapy, computed tomography imaging, as drug carriers to tumors, bio-labeling through single-particle detection by electron microscopy, and in photothermal microscopy. Recent advances in synthetic chemistry make it possible to produce gold nanoparticles with precise control over the physicochemical and optical properties that are desired for specific clinical or biological applications. Because several methods are available for easy modification of the surface of gold nanoparticles to attach a ligand, drug, or other targeting molecule, AuNPs are useful in a wide variety of applications. Even though gold is biologically inert and thus shows much less toxicity, the relatively low rate of clearance from circulation and tissues can lead to health problems; therefore, specific targeting of diseased cells and tissues must be achieved before AuNPs find routine human use.

This report is the fifth in a series of Annual Technical Summary Reports for the Advanced Turbine Technology Applications Project (ATTAP), sponsored by the U.S. Department of Energy (DOE). The report was prepared by Garrett Auxiliary Power Division (GAPD), a unit of Allied-Signal Aerospace Company, a unit of Allied Signal, Inc. The report includes information provided by Garrett Ceramic Components and the Norton Advanced Ceramics Company (formerly Norton/TRW Ceramics), subcontractors to GAPD on the ATTAP. This report covers plans and progress on ceramics development for commercial automotive applications over the period 1 Jan. through 31 Dec. 1992. Project effort conducted under this contract is part of the DOE Gas Turbine Highway Vehicle System program. This program is directed to provide the U.S. automotive industry the high-risk, long-range technology necessary to produce gas turbine engines for automobiles with reduced fuel consumption, reduced environmental impact, and a decreased reliance on scarce materials and resources. The program is oriented toward developing the high-risk technology of ceramic structural component design and fabrication, such that industry can carry this technology forward to production in the 1990s. The ATTAP test bed engine, carried over from the previous AGT101 project, is being used for verification testing of the durability of next-generation ceramic components, and their suitability for service at Reference Powertrain Design conditions. This document reports the technical effort conducted by GAPD and the ATTAP subcontractors during the fifth year of the project. Topics covered include ceramic processing definition and refinement, design improvements to the ATTAP test bed engine and test rigs, and the methodology development of ceramic impact and fracture mechanisms. Appendices include reports by ATTAP subcontractors in the development of silicon nitride materials and processes.

Despite widespread efforts to promote advance directives (ADs), completion rates remain low. Making Your Wishes Known: Planning Your Medical Future (MYWK) is an interactive computer program that guides individuals through the process of advance care planning, explains health conditions and interventions that commonly involve life-or-death decisions, helps them articulate their values/goals, and translates users' preferences into a detailed AD document. The purpose of this study was to demonstrate that (in the absence of major life changes) the AD generated by MYWK reliably reflects an individual's values/preferences. English speakers ≥30 years old completed MYWK twice, 4 to 6 weeks apart. Reliability indices were assessed for three AD components: General Wishes; Specific Wishes for treatment; and Quality-of-Life values (QoL). Twenty-four participants completed the study. Both the Specific Wishes and QoL scales had high internal consistency in both time periods (Kuder-Richardson formula 20 [KR-20]=0.83-0.95, and 0.86-0.89). Test-retest reliability was perfect for General Wishes (κ=1), high for QoL (Pearson's correlation coefficient=0.83), but lower for Specific Wishes (Pearson's correlation coefficient=0.57). MYWK generates an AD in which General Wishes and QoL (but not Specific Wishes) statements remain consistent over time.
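
The internal-consistency statistic reported above, KR-20, is straightforward to compute from dichotomous (0/1) item responses. A minimal sketch follows; the response matrix is invented for illustration and is not the study's data.

```python
# Kuder-Richardson formula 20 for dichotomous items:
# KR-20 = (k/(k-1)) * (1 - sum(p_j * q_j) / var(total scores))
# where p_j is the proportion answering item j "1" and q_j = 1 - p_j.

def kr20(responses):
    """responses: list of respondents' 0/1 answer vectors (rows = respondents)."""
    n = len(responses)          # respondents
    k = len(responses[0])       # items
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for j in range(k):
        p = sum(row[j] for row in responses) / n
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_t)

# hypothetical data: 4 respondents, 3 yes/no items
data = [[1, 1, 1],
        [1, 1, 0],
        [0, 0, 0],
        [1, 0, 0]]
print(kr20(data))  # 0.75
```

Values near 1 (such as the 0.83-0.95 range reported in the abstract) indicate that the items within a scale move together across respondents.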

The innovations in this updated series of compilations dealing with electronic technology represent a carefully selected collection of digital circuits which have direct application in computer oriented systems. In general, the circuits have been selected as representative items of each section and have been included on their merits of having universal applications in digital computers and digital data processing systems. As such, they should have wide appeal to the professional engineer and scientist who encounter the fundamentals of digital techniques in their daily activities. The circuits are grouped as digital logic circuits, analog to digital converters, and counters and shift registers.

Emerging commercial applications of electron-beam advanced oxidation technology offer a significant advancement in the treatment of waste streams. Both electron-beam and X-ray (Bremsstrahlung) advanced oxidation processes have been shown to be effective in the destruction of volatile and semivolatile organic compounds. Emerging commercial applications, however, far exceed in scope current applications of oxidation technologies for the destruction of simple semivolatile and volatile organic compounds in water. Emerging applications include direct treatment of contaminated soil, removal of metal ions from water, and sterilization of water, sludges, and food. Applications of electron-beam advanced oxidation technologies are reviewed, along with electron-beam-generated X-ray (Bremsstrahlung) advanced oxidation processes. Advantages of each technology are discussed along with advanced accelerator technologies which are applicable for commercial processing of waste streams. An overview of the U.S. companies and laboratories participating in this research area is included in this discussion.

This paper provides a tutorial introduction to computational studies of how children learn their native languages. Its aim is to make recent advances accessible to the broader research community, and to place them in the context of current theoretical issues. The first section locates computational studies and behavioral studies within a common theoretical framework. The next two sections review two papers that appear in this volume: one on learning the meanings of words and one on learning the sounds of words. The following section highlights an idea which emerges independently in these two papers and which I have dubbed autonomous bootstrapping. Classical bootstrapping hypotheses propose that children begin to get a toehold in a particular linguistic domain, such as syntax, by exploiting information from another domain, such as semantics. Autonomous bootstrapping complements the cross-domain acquisition strategies of classical bootstrapping with strategies that apply within a single domain. Autonomous bootstrapping strategies work by representing partial and/or uncertain linguistic knowledge and using it to analyze the input. The next two sections review two more contributions to this special issue: one on learning word meanings via selectional preferences and one on algorithms for setting grammatical parameters. The final section suggests directions for future research.

The mitral valve (MV) apparatus consists of the two asymmetric leaflets, the saddle-shaped annulus, the chordae tendineae, and the papillary muscles. MV function over the cardiac cycle involves complex interaction between the MV apparatus components for efficient blood circulation. Common diseases of the MV include valvular stenosis, regurgitation, and prolapse. MV repair is the most popular and most reliable surgical treatment for early MV pathology. One of the unsolved problems in MV repair is to predict the optimal repair strategy for each patient. Although experimental studies have provided valuable information to improve repair techniques, computational simulations are increasingly playing an important role in understanding the complex MV dynamics, particularly with the availability of patient-specific real-time imaging modalities. This work presents a review of computational simulation studies of MV function employing finite element (FE) structural analysis and fluid-structure interaction (FSI) approach reported in the literature to date. More recent studies towards potential applications of computational simulation approaches in the assessment of valvular repair techniques and potential pre-surgical planning of repair strategies are also discussed. It is anticipated that further advancements in computational techniques combined with the next generations of clinical imaging modalities will enable physiologically more realistic simulations. Such advancement in imaging and computation will allow for patient-specific, disease-specific, and case-specific MV evaluation and virtual prediction of MV repair. PMID:25134487

Participating media radiation (PMR) calculations in weapon safety analyses for abnormal thermal environments are too costly to perform routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
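
The abstract does not specify which ROM technique the authors applied. As a generic illustration of the reduced-order-modeling idea, the sketch below builds a proper orthogonal decomposition (POD) basis from synthetic snapshot data via the SVD, projects onto a few modes, and measures the reconstruction error; the data and the choice of POD are assumptions, not the report's method.

```python
import numpy as np

# Generic POD-style reduced-order model sketch: compress snapshot data
# (full state vs. time) to r modes, then reconstruct from the reduced
# coordinates. Synthetic low-rank data stands in for PMR field snapshots.

rng = np.random.default_rng(0)
spatial = rng.normal(size=(200, 3))          # 3 underlying "physical" modes
snapshots = spatial @ rng.normal(size=(3, 50))   # 200 points x 50 time samples
snapshots += 1e-6 * rng.normal(size=snapshots.shape)  # small noise

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]                 # reduced basis (dominant POD modes)
coeffs = basis.T @ snapshots     # project full state to r coordinates
recon = basis @ coeffs           # lift reduced state back to full space

rel_err = np.linalg.norm(recon - snapshots) / np.linalg.norm(snapshots)
print(rel_err)  # tiny: 3 modes capture the low-rank dynamics
```

The cost saving in a real ROM comes from evolving only the r reduced coordinates in time instead of the full 200-dimensional state.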

A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source image is bad, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibition, the recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging.

Control system designs for nuclear power plants are becoming more advanced through the use of digital technology and automation. This evolution is taking place because of: (1) the limitations of analog-based control systems in performance, maintenance, and availability; and (2) the promise of significant improvement in plant operation and availability due to advances in digital and other control technologies. Digital retrofits of control systems in US nuclear plants are occurring now. Designs of control and protection systems for advanced LWRs are based on digital technology. The use of small, inexpensive, fast, large-capacity computers in these designs is the first step of an evolutionary process described in this paper. Under the sponsorship of the US Department of Energy (DOE), Oak Ridge National Laboratory, Argonne National Laboratory, GE Nuclear Energy and several universities are performing research and development in the application of advances in control theory, software engineering, advanced computer architectures, artificial intelligence, and man-machine interface analysis to control system design. The target plant concept for the work described in this paper is the Power Reactor Inherently Safe Module reactor (PRISM), an advanced modular liquid metal reactor concept. This and other reactor designs which provide strong passive responses to operational upsets or accidents afford good opportunities to apply these advances in control technology. 18 refs., 5 figs.

Sandia National Laboratories (SNLA) is developing heat engines for terrestrial Solar Distributed Heat Receivers. SNLA has identified the Stirling to be one of the most promising candidates for the terrestrial applications. The free-piston Stirling engine (FPSE) has the potential to meet the DOE goals for both performance and cost. Free-piston Stirling activities which are directed toward a dynamic power source for space applications are being conducted. Space power system requirements include high efficiency, very long life, high reliability and low vibration. The FPSE has the potential for future high power space conversion systems, either solar or nuclear powered. Generic free-piston technology is currently being developed for use with a residential heat pump under an Interagency Agreement. Also, an overview is presented of proposed conceptual designs for the Advanced Stirling Conversion System (ASCS) using a free-piston Stirling engine and a liquid metal heat pipe receiver. Power extraction includes both a linear alternator and hydraulic output capable of delivering approximately 25 kW of electrical power to the electric utility grid. Target cost of the engine/alternator is 300 dollars per kilowatt at a manufacturing rate of 10,000 units per year. The design life of the ASCS is 60,000 h (30 y) with an engine overhaul at 40,000 h (20 y). Also discussed are the key features and characteristics of the ASCS conceptual designs.

This report is the fourth in a series of Annual Technical Summary Reports for the Advanced Turbine Technology Applications Project (ATTAP). This report covers plans and progress on ceramics development for commercial automotive applications over the period 1 Jan. - 31 Dec. 1991. Project effort conducted under this contract is part of the DOE Gas Turbine Highway Vehicle System program. This program is directed to provide the U.S. automotive industry the high-risk, long-range technology necessary to produce gas turbine engines for automobiles with reduced fuel consumption, reduced environmental impact, and a decreased reliance on scarce materials and resources. The program is oriented toward developing the high-risk technology of ceramic structural component design and fabrication, such that industry can carry this technology forward to production in the 1990s. The ATTAP test bed engine, carried over from the previous AGT101 project, is being used for verification testing of the durability of next-generation ceramic components, and their suitability for service at Reference Powertrain Design conditions. This document reports the technical effort conducted by GAPD and the ATTAP subcontractors during the fourth year of the project. Topics covered include ceramic processing definition and refinement, design improvements to the ATTAP test bed engine and test rigs and the methodology development of ceramic impact and fracture mechanisms. Appendices include reports by ATTAP subcontractors in the development of silicon nitride and silicon carbide families of materials and processes.

Small organizations such as local governments would clearly benefit from running simulations prior to making policy decisions. While many of the modeling applications that run such simulations are becoming available in the public domain, the computing resources and expertise needed to run them effectively impose financial constraints too great for local governments. In this paper, we articulate those requirements and hypothesize that it may be possible to build inexpensive distributed computing environments that would use the "null cycles", i.e., currently idle time, on an organization's local area network of personal computers. We describe how to configure one such environment with public domain software (PVM) on machines running Windows 2000, and what in general needs to be done to retrofit an existing modeling application to run in that environment. Finally, we present findings from a project to demonstrate the feasibility of using a distributed network of Windows 2000 machines for a typical environmental simulation used by Washington State's King County Department of Natural Resources.
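
The PVM environment the paper configures is platform-specific, but the underlying master/worker task-farming pattern it supports can be sketched in a few lines. The sketch below uses Python's multiprocessing as a stand-in for the network of idle PCs, with a hypothetical run_scenario function in place of a real environmental model; none of this reflects the paper's actual code.

```python
from multiprocessing import Pool

def run_scenario(params):
    # Placeholder for one independent simulation run (hypothetical model:
    # runoff = rainfall * runoff coefficient). A real model would be the
    # retrofitted application the paper describes.
    rain, runoff_coeff = params
    return rain * runoff_coeff

if __name__ == "__main__":
    # The master farms independent scenarios out to workers, which stand in
    # for the "null cycles" of networked PCs, then collects the results.
    scenarios = [(10.0, 0.25), (20.0, 0.5), (16.0, 0.25)]
    with Pool(processes=3) as pool:
        results = pool.map(run_scenario, scenarios)
    print(results)  # [2.5, 10.0, 4.0]
```

The pattern works precisely because each scenario is independent: no worker needs to communicate with another, so idle desktop machines can each take a task and return a result.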

The aim of this paper was to present an overview of the most important recent advances in medical imaging and their potential clinical and anatomical applications. Dramatic changes have been particularly observed in the field of computed tomography (CT) and magnetic resonance imaging (MRI). Computed tomography (CT) has been completely overturned by the successive development of helical acquisition, multidetector and large area-detector acquisition. Visualising brain function has become a new challenge for MRI, which is called functional MRI, currently based principally on blood oxygenation level-dependent sequences, which could be completed or replaced by other techniques such as diffusion MRI (DWI). Based on molecular diffusion due to the thermal energy of free water, DWI offers a spectrum of anatomical and clinical applications, ranging from brain ischemia to visualisation of large fibrous structures of the human body such as the anatomical bundles of white matter with diffusion tensor imaging and tractography. In the field of X-ray projection imaging, a new low-dose device called EOS has been developed through new highly sensitive detectors of X-rays, allowing for acquiring frontal and lateral images simultaneously. Other improvements have been briefly mentioned. Technical principles have been considered in order to understand what is most useful in clinical practice as well as in the field of anatomical applications. Nuclear medicine has not been included.

The addition of a motor in 1920 resulted in the first electrical calculators. Charles Babbage is considered the "father" of the computer. In 1883, he...management applications. Watt (1795), Owen (early 1800s), and Babbage (1832) made the first realistic management applications in the field of production...network scheduling by the contractor "if" it is felt that the government can benefit by its use. 2 Captain Charles D. Sprick, Resident Air Force

Artificial olfaction, based on electronic systems (electronic noses), includes three basic functions that operate on an odorant: a sample handler, an array of gas sensors, and a signal-processing method. The response of these artificial systems can be the identity of the odorant, an estimated concentration of the odorant, or characteristic properties of the odour as might be perceived by a human. These electronic noses are bio-inspired instruments that mimic the sense of smell. The complexity of most odorants makes characterisation difficult with conventional analysis techniques, such as gas chromatography. Sensory analysis by a panel of experts is a costly process since it requires trained people who can work for only relatively short periods of time. Electronic noses are easy to build, provide short analysis times, in real time and on-line, and show high sensitivity and selectivity to the tested odorants. These systems are non-destructive techniques used to characterise odorants in diverse applications linked with the quality of life such as: control of foods, environmental quality, citizen security or clinical diagnostics. However, there is much research still to be done, especially with regard to new materials and sensor technology, data processing, interpretation and validation of results. This work examines the main features of modern electronic noses and their most important applications in the environmental and security fields. The above-mentioned main components of an electronic nose (sample handling system, more advanced materials and methods for sensing, and data processing system) are described. Finally, some interesting remarks concerning the strengths and weaknesses of electronic noses in the different applications are also mentioned.
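
The signal-processing stage mentioned above typically reduces to pattern classification of the sensor-array response. As a minimal sketch of that idea, the nearest-centroid classifier below identifies an odorant from a three-sensor response; the sensor readings and odour classes are invented for illustration.

```python
# Nearest-centroid classification of gas-sensor-array responses:
# average the training responses per odour class, then assign a new
# sample to the class with the closest centroid (squared Euclidean).

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest_class(sample, centroids):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist2(sample, centroids[c]))

# hypothetical training responses from a 3-sensor array
train = {
    "ethanol": [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2]],
    "ammonia": [[0.1, 0.8, 0.7], [0.2, 0.9, 0.6]],
}
cents = {name: centroid(rows) for name, rows in train.items()}

print(nearest_class([0.85, 0.25, 0.15], cents))  # ethanol
```

Real electronic noses use richer methods (PCA, neural networks) on drift-corrected sensor features, but the pipeline shape, i.e. array response in, odorant identity out, is the same.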

Our accomplishments over the last three years of the DOE project Application-Specific Performance Technology for Productive Parallel Computing (DOE Agreement: DE-FG02-05ER25680) are described below. The project will have met all of its objectives by the time of its completion at the end of September, 2008. Two extensive yearly progress reports were produced in March 2006 and 2007 and were previously submitted to the DOE Office of Advanced Scientific Computing Research (OASCR). Following an overview of the objectives of the project, we summarize for each of the project areas the achievements in the first two years, and then describe in some more detail the project accomplishments this past year. At the end, we discuss the relationship of the proposed renewal application to the work done on the current project.

The purpose of this discussion is to acquaint pupil personnel workers with some of the applications of computer based information processing systems for pupil services and also to consider some of the legal and ethical concerns relative to data processing in counseling and guidance. Some of the uses discussed are: (1) scheduling; (2) student…

Presents two real world applications that use derivatives and are related to computing the distance required to stop an airplane. Examines the curve-fitting techniques used to develop an equation for braking force and develops equations for the deceleration and speed. (DDR)
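
The calculus in the article can be illustrated with the constant-deceleration special case: integrating v dv/dx = -a gives a stopping distance d = v0**2 / (2a). The touchdown speed and deceleration used below are assumed values for illustration, not figures from the article.

```python
# Stopping distance under constant braking deceleration a (m/s^2)
# from touchdown speed v0 (m/s).

def stopping_distance(v0, a):
    # closed form from integrating v dv/dx = -a
    return v0 ** 2 / (2 * a)

def stopping_distance_numeric(v0, a, dt=1e-4):
    # sanity check: step speed forward in time and accumulate distance
    v, d = v0, 0.0
    while v > 0:
        d += v * dt
        v -= a * dt
    return d

v0, a = 70.0, 3.5  # assumed: 70 m/s touchdown, 3.5 m/s^2 braking deceleration
print(stopping_distance(v0, a))                     # 700.0 metres
print(round(stopping_distance_numeric(v0, a), 1))   # 700.0 (numeric agreement)
```

This is exactly the kind of result the curve-fitting in the article refines: once braking force (hence deceleration) is fitted from data rather than assumed constant, the same integration yields a more realistic distance.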

ATTAP activities during the past year were highlighted by an extensive materials assessment, execution of a reference powertrain design, test-bed engine design and development, ceramic component design, materials and component characterization, ceramic component process development and fabrication, component rig design and fabrication, test-bed engine fabrication, and hot gasifier rig and engine testing. Materials assessment activities entailed engine environment evaluation of domestically supplied radial gasifier turbine rotors that were available at the conclusion of the Advanced Gas Turbine (AGT) Technology Development Project as well as an extensive survey of both domestic and foreign ceramic suppliers and Government laboratories performing ceramic materials research applicable to advanced heat engines. A reference powertrain design was executed to reflect the selection of the AGT-5 as the ceramic component test-bed engine for the ATTAP. Test-bed engine development activity focused on upgrading the AGT-5 from a 1038 C (1900 F) metal engine to a durable 1371 C (2500 F) structural ceramic component test-bed engine. Ceramic component design activities included the combustor, gasifier turbine static structure, and gasifier turbine rotor. The materials and component characterization efforts have included the testing and evaluation of several candidate ceramic materials and components being developed for use in the ATTAP. Ceramic component process development and fabrication activities were initiated for the gasifier turbine rotor, gasifier turbine vanes, gasifier turbine scroll, extruded regenerator disks, and thermal insulation. Component rig development activities included combustor, hot gasifier, and regenerator rigs. Test-bed engine fabrication activities consisted of the fabrication of an all-new AGT-5 durability test-bed engine and support of all engine test activities through instrumentation/build/repair. Hot gasifier rig and test-bed engine testing

Systems identification methods have recently been applied to rotorcraft to estimate stability derivatives from transient flight control response data. While these applications assumed a linear constant-coefficient representation of the rotorcraft, the computer experiments described in this paper used transient responses in flap-bending and torsion of a rotor blade at high advance ratio, which is a rapidly time-varying periodic system.
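
The constant-coefficient assumption mentioned above can be illustrated with a least-squares sketch: simulate a first-order system x_dot = a*x + b*u, then recover a and b from sampled transient-response data. This is a generic illustration of the identification idea, not the paper's rotor-blade formulation (whose coefficients are periodic in time).

```python
import numpy as np

# Estimate constant coefficients a, b of x_dot = a*x + b*u by least squares
# from a simulated transient response to a step input.

a_true, b_true = -2.0, 1.5
dt, n = 0.01, 500
x = np.zeros(n)
u = np.ones(n)                        # step input
for k in range(n - 1):                # forward-Euler "flight data"
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

xdot = np.diff(x) / dt                # finite-difference derivative
A = np.column_stack([x[:-1], u[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, xdot, rcond=None)
print(a_hat, b_hat)                   # close to -2.0 and 1.5
```

For the rotor problem at high advance ratio, a and b become periodic functions of time, which is precisely why the constant-coefficient fit above breaks down and motivated the paper's experiments.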

The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple-year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed to enable spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning, and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to

Computed tomography involves the use of x-rays to produce cross-sectional images of body regions. It provides non-overlapping, two-dimensional images of all desired planes as well as three-dimensional reconstruction of regions of interest. There are few reports on the clinical use of computed tomography in farm animals. Its use in cattle is limited by high cost, the application of off-label drugs and the need for general anaesthesia. In cattle computed tomography is indicated primarily for diseases of the head, e.g. dental diseases and otitis media, and neurological disorders. Less often it is used for diseases of the vertebrae and limbs. In valuable cattle, the results of computed tomography can be an important part of preoperative planning or be used to avoid unnecessary surgery when the prognosis is poor.

Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.

This paper summarizes three recent applications of computer vision techniques in dentistry developed at the Czech Technical University. The first one uses a special optical instrument to capture the image of the tooth arc directly in the patient's mouth. The captured images are used for visualization of teeth position changes during treatment. The second application allows the use of images for checking teeth occlusal contacts and their abrasion. The third application uses photometric measurements to study the resistance of the dental material against microbial growth.

Computed tomography (CT) is an X-ray technique that provides quantitative 3D density information of materials and components and can accurately detail spatial distributions of cracks, voids, and density variations. CT scans of ceramic materials, composites, and engine components were taken, and the resulting images are discussed. Scans were taken with two CT systems with different spatial-resolution capabilities. The scans showed internal damage, density variations, and the geometrical arrangement of various features in the materials and components. It was concluded that CT can play an important role in the characterization of advanced turbine engine materials and components. Future applications of this technology are outlined.

Various papers on battery applications and advances are presented. The general topics considered include: power systems in biomedical applications, batteries in electronic and computer applications, batteries in transportation and energy systems, space power systems, aircraft power systems, applications in defense systems, battery safety issues, and quality assurance and manufacturing.

advanced on multiple fronts, including peak flops, node memory capacity, interconnect latency, interconnect bandwidth, and memory bandwidth. (2) Effective parallel programming interfaces must be developed to exploit the power of emerging hardware. (3) Science application teams must now begin to adapt and reformulate application codes to the new hardware and software, typified by hierarchical and disparate layers of compute, memory and concurrency. (4) Algorithm research must be realigned to exploit this hierarchy. (5) When possible, mathematical libraries must be used to encapsulate the required operations in an efficient and useful way. (6) Software tools must be developed to make the new hardware more usable. (7) Science application software must be improved to cope with the increasing complexity of computing systems. (8) Data management efforts must be readied for the larger quantities of data generated by larger, more accurate science models. Requirements elicitation, analysis, validation, and management comprise a difficult and inexact process, particularly in periods of technological change. Nonetheless, the OLCF requirements modeling process is becoming increasingly quantitative and actionable, as the process becomes more developed and mature, and the process this year has identified clear and concrete steps to be taken. This report discloses (1) the fundamental science case driving the need for the next generation of computer hardware, (2) application usage trends that illustrate the science need, (3) application performance characteristics that drive the need for increased hardware capabilities, (4) resource and process requirements that make the development and deployment of science applications on next-generation hardware successful, and (5) summary recommendations for the required next steps within the computer and computational science communities.

INL has agreed to provide participants in the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program with access to its high performance computing (HPC) resources under sponsorship of the Enabling Computational Technologies (ECT) program element. This report documents the process used to select applications and the software stack in place at INL.

The Advanced Turbine Technologies Application Project (ATTAP) is in the fifth year of a multiyear development program to bring the automotive gas turbine engine to a state at which industry can make commercialization decisions. Activities during the past year included reference powertrain design updates, test-bed engine design and development, ceramic component design, materials and component characterization, ceramic component process development and fabrication, ceramic component rig testing, and test-bed engine fabrication and testing. Engine design and development included mechanical design, combustion system development, alternate aerodynamic flow testing, and controls development. Design activities included development of the ceramic gasifier turbine static structure, the ceramic gasifier rotor, and the ceramic power turbine rotor. Material characterization efforts included the testing and evaluation of five candidate high temperature ceramic materials. Ceramic component process development and fabrication, with the objective of approaching automotive volumes and costs, continued for the gasifier turbine rotor, gasifier turbine scroll, extruded regenerator disks, and thermal insulation. Engine and rig fabrication, testing, and development supported improvements in ceramic component technology. Total test time in 1992 amounted to 599 hours, of which 147 hours were engine testing and 452 were hot rig testing.

Over 50 scientists from DOE-DP, DOE-ER, the national laboratories, academia, and industry attended a workshop held on November 5-7, 1997 at Lawrence Livermore National Laboratory. Workshop participants were charged to address two questions: Is there a need for a national center for materials analysis using positron techniques, and can the capabilities at Lawrence Livermore National Laboratory serve this need? To demonstrate the need for a national center, the workshop participants discussed the technical advantages enabled by high positron currents and advanced measurement techniques, the role that these techniques would play in materials analysis, and the demand for the data. Livermore now leads the world in materials analysis capabilities by positrons due to developments in response to the demands of stockpile stewardship. The Livermore facilities now include the world's highest-current beam of keV positrons, a scanning pulsed positron microprobe under development capable of three-dimensional maps of defect size and concentration, an MeV positron beam for defect analysis of large samples, and electron momentum spectroscopy by positrons. It was concluded that the positron microprobe under development at LLNL, together with the other new instruments that would be relocated to LLNL at the high-current keV source, represents an exciting step forward for the positron technique. These new data will impact a wide variety of applications.

Combinatorial chemistry has generated chemical libraries and databases with a huge number of chemical compounds, which include prospective drugs. Chemical structures of compounds can be represented as molecular graphs, to which a variety of graph-based techniques in computer science, specifically graph mining, can be applied. The most basic way of analyzing molecular graphs is through structural fragments, so-called subgraphs in graph theory. The mainstream technique in graph mining is frequent subgraph mining, by which we can retrieve essential subgraphs in given molecular graphs. In this article we explain the idea and procedure of mining frequent subgraphs from given molecular graphs, citing some real applications, and we describe recent advances in graph mining.
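The support-counting idea at the heart of frequent subgraph mining can be sketched with the simplest possible fragments. This toy example counts single-edge fragments (atom-bond-atom triples) across a few hypothetical molecular graphs and keeps those meeting a support threshold; real miners such as gSpan extend the same support-counting principle to arbitrarily large subgraphs. The molecules and threshold here are invented for illustration.

```python
from collections import Counter

# Toy frequent-fragment mining: a "subgraph" here is a single labeled
# edge (atom, atom, bond order). Support = number of molecules containing
# the fragment at least once.
molecules = [
    [("C", "C", 1), ("C", "O", 1)],   # hypothetical molecule 1
    [("C", "C", 2), ("C", "O", 1)],   # hypothetical molecule 2
    [("C", "O", 1), ("C", "N", 1)],   # hypothetical molecule 3
]

def canonical(edge):
    # Order the two atom labels so (C, O) and (O, C) count as one fragment.
    a, b, order = edge
    return (min(a, b), max(a, b), order)

min_support = 2  # fragment must occur in at least 2 molecules
support = Counter()
for mol in molecules:
    for frag in {canonical(e) for e in mol}:   # count once per molecule
        support[frag] += 1

frequent = {f: c for f, c in support.items() if c >= min_support}
print(frequent)  # {('C', 'O', 1): 3}
```

Scaling this from single edges to general connected subgraphs is what makes the problem hard: candidate generation must avoid enumerating isomorphic duplicates, which is the core contribution of algorithms like gSpan.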

The introduction of computed tomography (CT) scanning in the 1970s revolutionized the way clinicians could diagnose and treat stroke. Subsequent advances in CT technology significantly reduced radiation dose, reduced metallic artifact, and achieved speeds that enable dynamic functional studies. The recent addition of whole-brain volumetric CT perfusion technology has given clinicians a powerful tool to assess parenchymal perfusion parameters as well as visualize dynamic changes in blood vessel flow throughout the brain during a single cardiac cycle. This article reviews clinical applications of volumetric multimodal CT that helped to guide and manage care.

Medical imaging is of particular interest in the field of translational myology, as the extant literature describes the utilization of a wide variety of techniques to non-invasively recapitulate and quantify various internal and external tissue morphologies. In the clinical context, medical imaging remains a vital tool for diagnostics and investigative assessment. This review outlines the results from several investigations on the use of computed tomography (CT) and image analysis techniques to assess muscle conditions and degenerative processes due to aging or pathological conditions. Herein, we detail the acquisition of spiral CT images and the use of advanced image analysis tools to characterize muscles in 2D and 3D. Results from these studies capture changes in tissue composition within muscles, as visualized by the association of tissue types with specified Hounsfield Unit (HU) values for fat, loose connective tissue or atrophic muscle, and normal muscle, including fascia and tendon. We show how results from these analyses can be presented both as average HU values and as compositions with respect to total muscle volumes, demonstrating the reliability of these tools to monitor, assess, and characterize muscle degeneration. PMID:27478562
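The HU-based composition analysis described above can be sketched as follows. The HU ranges and the toy 3×3 "slice" are illustrative assumptions, not the calibrated thresholds or data from the cited studies.

```python
import numpy as np

# Hedged sketch of HU-based tissue composition: classify voxels by
# assumed HU ranges and report percent composition and mean HU per class.
hu = np.array([[-120, -60, 10],
               [  45,  60, -90],
               [   5,  50,  48]])            # toy CT slice (HU values)

ranges = {                                    # illustrative HU windows
    "fat": (-200, -10),
    "loose_connective_or_atrophic": (-9, 29),
    "normal_muscle": (30, 150),
}

total = hu.size
composition = {}
for tissue, (lo, hi) in ranges.items():
    mask = (hu >= lo) & (hu <= hi)
    composition[tissue] = {
        "percent": 100.0 * mask.sum() / total,
        "mean_hu": float(hu[mask].mean()) if mask.any() else None,
    }
print(composition)
```

Reporting both the percent composition and the mean HU per class mirrors the two ways the studies above summarize degeneration: shifts in tissue fractions and shifts in average density.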

The X-Ray Computed Microtomography workstation at beamline X27A at the NSLS has been utilized by scientists from a broad range of disciplines from industrial materials processing to environmental science. The most recent applications are presented here as well as a description of the facility that has evolved to accommodate a wide variety of materials and sample sizes. One of the most exciting new developments reported here resulted from a pursuit of faster reconstruction techniques. A Fast Filtered Back Transform (FFBT) reconstruction program has been developed and implemented, that is based on a refinement of the gridding algorithm first developed for use with radio astronomical data. This program has reduced the reconstruction time to 8.5 sec for a 929 x 929 pixel{sup 2} slice on an R10,000 CPU, more than 8x reduction compared with the Filtered Back-Projection method.

The conventional Gibbs-Duhem integration method is very convenient for the prediction of phase equilibria of both pure components and mixtures. However, it turns out to be inefficient. The method requires a number of lengthy simulations to predict the state conditions at which phase coexistence occurs, and this number is not known at the outset of the numerical integration process. Furthermore, the molecular configurations generated during the simulations are merely used to predict the coexistence condition and not the liquid- and vapor-phase densities and mole fractions at coexistence. In this publication, an advanced Gibbs-Duhem integration method is presented that overcomes the above-mentioned disadvantages and inefficiency. The advanced method is a combination of Gibbs-Duhem integration and multiple-histogram reweighting. Application of multiple-histogram reweighting enables the substitution of the unknown number of simulations by a fixed and predetermined number. The advanced method has a retroactive nature: a current simulation improves the predictions of previously computed coexistence points as well. The advanced Gibbs-Duhem integration method has been applied to the prediction of vapor-liquid equilibria of a number of binary mixtures. The method turned out to be very convenient, much faster than the conventional method, and provided smooth simulation results. As the employed force fields perfectly predict pure-component vapor-liquid equilibria, the binary simulations were very well suited for testing the performance of different sets of combining rules. Employing Lorentz-Hudson-McCoubrey combining rules for interactions between unlike molecules, as opposed to Lorentz-Berthelot combining rules for all interactions, considerably improved the agreement between experimental and simulated data.
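The underlying idea of Gibbs-Duhem integration, tracing a coexistence curve by integrating a Clapeyron-type differential equation from one known coexistence point, can be sketched with a toy example. Here the simulation-derived slopes of the real method are replaced by the Clausius-Clapeyron limit with an assumed constant enthalpy of vaporization (roughly water's), so this is a caricature of the numerical structure, not of the molecular simulations.

```python
import math

# Toy "Gibbs-Duhem" trace: integrate the Clausius-Clapeyron ODE
#   d(ln p)/dT = dH / (R * T**2)
# from a known coexistence point, by forward Euler. In the real method
# the right-hand-side slope comes from molecular simulation, not a
# constant assumed dH.
R = 8.314                    # J/(mol K)
dH = 40.7e3                  # J/mol, roughly water's enthalpy of vaporization
T0, p0 = 373.15, 101325.0    # known coexistence point (water at 1 atm)

def trace_coexistence(T_end, steps=10000):
    T, lnp = T0, math.log(p0)
    dT = (T_end - T0) / steps
    for _ in range(steps):   # Euler predictor along the coexistence curve
        lnp += dH / (R * T * T) * dT
        T += dT
    return math.exp(lnp)

p_350 = trace_coexistence(350.0)
exact = p0 * math.exp(-dH / R * (1 / 350.0 - 1 / T0))
print(p_350, exact)  # the Euler trace closely tracks the closed-form solution
```

The inefficiency criticized above is visible even in the toy: each evaluation of the slope in the real method is a full molecular simulation, and the number of steps needed to reach a target state is not known in advance.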

Despite evidence that physical activity reduces the risk of falls and other causes of disability and death, the majority of older adults do not engage in physical activity on a regular basis. Mobile technology applications have emerged as potential resources for promoting physical activity behavior. This article describes features of a new application, Ready~Steady, highlighting approaches used in its design and development, and implications for clinical practice. Iterative processes enabled the design, development, implementation, and evaluation of the application consistent with the wellness motivation theory, as well as established user-specific strategies and theoretical design principles. Implications in terms of potential benefits and constraints are discussed. Integrating technology that promotes health and wellness in the form of mobile computer applications is a promising adjunct to nursing practice. PMID:23463915

In recent years, computer application development has experienced exponential growth, not only in the number of publications but also in the scope of contexts that have benefited from its use. In health science training, and medicine specifically, the gradual incorporation of technological developments has transformed the teaching and learning process, resulting in true "educational technology". The goal of this paper is to review the main features involved in these applications and highlight the main lines of research for the future. The recently published peer-reviewed literature indicates the following features shared by the key technological developments in the field of health science education: first, development of simulation and visualization systems for a more complete and realistic representation of learning material than the traditional paper format; second, portability and versatility of the applications, adapted for an increasing number of devices and operating systems; third, an increasing focus on open-source applications such as Massive Open Online Courses (MOOCs).

Dynamic Adaptive Neural Network Arrays (DANNAs) are neuromorphic computing systems developed as a hardware-based approach to the implementation of neural networks. They feature highly adaptive and programmable structural elements, which model artificial neural networks with spiking behavior. We design them to solve problems using evolutionary optimization. In this paper, we highlight the current hardware and software implementations of DANNA, including their features, functionalities, and performance. We then describe the development of an Application Development Platform (ADP) to support efficient application implementation and testing of DANNA-based solutions. We conclude with future directions.
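The spiking behavior that DANNA elements model can be illustrated with a minimal leaky integrate-and-fire neuron. The threshold and leak parameters below are illustrative assumptions, not values from the DANNA hardware specification.

```python
# Minimal leaky integrate-and-fire sketch of spiking behavior
# (illustrative parameters, not DANNA's).
def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires for a stream of
    input charges."""
    potential, spikes = 0.0, []
    for t, charge in enumerate(inputs):
        potential = potential * leak + charge  # leak, then integrate
        if potential >= threshold:             # fire and reset
            spikes.append(t)
            potential = 0.0
    return spikes

print(lif_spikes([0.5, 0.5, 0.1, 0.9, 0.2, 0.95]))  # [3, 5]
```

In an array of such elements, the programmable structure determines which neurons receive which spike streams; evolutionary optimization, as used for DANNA, searches over those structures and parameters rather than training weights by gradient descent.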

The development of the technology of ballistics as applied to gun-launched Army weapon systems is the main objective of research at the U.S. Army Ballistic Research Laboratory (BRL). The primary research programs at the BRL consist of three major ballistic disciplines: exterior, interior, and terminal. The work done at the BRL in these areas was traditionally highly dependent on experimental testing. Considerable emphasis was placed on the development of computational modeling to augment experimental testing in the development cycle; however, the impact of computational modeling to date has been modest. With the supercomputer computational resources recently installed at the BRL, a new emphasis on the application of computational modeling to ballistics technology is taking place. The major application areas currently receiving considerable attention at the BRL are outlined, along with the modeling approaches involved. Some indication is given of the degree of success achieved and of the areas of greatest need.

The SANC system is used for systematic calculations of various processes within the Standard Model in the one-loop approximation. QED, electroweak, and QCD corrections are computed for a number of processes of interest for modern and future high-energy experiments. Several applications for the LHC physics program are presented. Development of the system and the general problems and perspectives for future improvement of the theoretical precision are discussed.

Advanced Turbine Technology Application Project (ATTAP) activities during the past year were highlighted by test-bed engine design and development activities; ceramic component design; materials and component characterization; ceramic component process development and fabrication; component rig testing; and test-bed engine fabrication and testing. Although substantial technical challenges remain, all areas exhibited progress. Test-bed engine design and development activity included engine mechanical design, power turbine flow-path design and mechanical layout, and engine system integration aimed at upgrading the AGT-5 from a 1038 C metal engine to a durable 1371 C structural ceramic component test-bed engine. ATTAP-defined ceramic and associated ceramic/metal component design activities include: the ceramic combustor body, the ceramic gasifier turbine static structure, the ceramic gasifier turbine rotor, the ceramic/metal power turbine static structure, and the ceramic power turbine rotors. The materials and component characterization efforts included the testing and evaluation of several candidate ceramic materials and components being developed for use in the ATTAP. Ceramic component process development and fabrication activities are being conducted for the gasifier turbine rotor, gasifier turbine vanes, gasifier turbine scroll, extruded regenerator disks, and thermal insulation. Component rig testing activities include the development of the necessary test procedures and the conduct of rig testing of the ceramic components and assemblies. Four hundred hours of hot gasifier rig test time were accumulated with turbine inlet temperatures exceeding 1204 C at 100 percent design gasifier speed. A total of 348.6 test hours were achieved on a single ceramic rotor without failure, and a second ceramic rotor was retired in engine-ready condition at 364.9 test hours. Test-bed engine fabrication, testing, and development supported improvements in ceramic component technology.

High power LEDs were introduced in automotive headlights in 2006-2007, for example as full-LED headlights in the Audi R8 or the low beam in the Lexus. Since then, LED headlighting has become established in the premium and volume automotive segments and is beginning to enable new compact form factors such as the distributed low beam and new functions such as the adaptive driving beam. New generations of highly versatile high power LEDs are emerging to meet these application needs. In this paper, we will detail ongoing advances in LED technology that enable revolutionary styling, performance, and adaptive control in automotive headlights. As the standards which govern the necessary lumens on the road are well established, increasing luminance enables not only more design freedom but also headlight cost reduction, with space and weight savings through more compact optics. Adaptive headlighting is based on LED pixelation and requires high-contrast, high-luminance, smaller LEDs with high packing density for pixelated Matrix Lighting sources. Matrix applications require an extremely tight tolerance not only on the X, Y placement accuracy, but also on the Z height of the LEDs, given the precision optics used to image the LEDs onto the road. A new generation of chip scale packaged (CSP) LEDs based on Wafer Level Packaging (WLP) has been developed to meet these needs, offering a footprint less than 20% larger than the LED emitter surface. These miniature LEDs are surface-mount devices compatible with automated tools for direct attach to the L2 board (without the need for an interposer or L1 substrate), meeting the high position accuracy as well as the optical and thermal performance requirements. To illustrate the versatility of the CSP LEDs, we will show the results of, firstly, a reflector-based distributed low beam using multiple individual cavities, each with only 20 mm height, and secondly, 3x4 to 3x28 Matrix arrays for the adaptive full beam. Also a few key trends in rear lighting and impact on LED light

Cloud computing is a novel computing paradigm that offers highly scalable and highly available distributed computing services. The objectives of this research are to: 1. analyze and understand cloud computing and its potential for GIS; 2. assess the feasibility of migrating truly spatial GIS algorithms to distributed computing infrastructures; 3. explore a solution for hosting and serving large volumes of raster GIS data efficiently. These objectives thus form the basis for three professional articles. The first article is entitled "Cloud Computing and Its Applications in GIS". This paper introduces the concept, structure, and features of cloud computing. Features of cloud computing such as scalability, parallelization, and high availability make it a very capable computing paradigm. Unlike High Performance Computing (HPC), cloud computing uses inexpensive commodity computers. The uniform administration systems in cloud computing make it easier to use than grid computing. Potential advantages of cloud-based GIS systems, such as a lower barrier to entry, are consequently presented. Three cloud-based GIS system architectures are proposed: public cloud-based GIS systems, private cloud-based GIS systems, and hybrid cloud-based GIS systems. Public cloud-based GIS systems provide the lowest entry barriers for users among these three architectures, but their advantages are offset by data security and privacy related issues. Private cloud-based GIS systems provide the best data protection, though they have the highest entry barriers. Hybrid cloud-based GIS systems provide a compromise between these extremes. The second article is entitled "A cloud computing algorithm for the calculation of Euclidian distance for raster GIS". Euclidean distance is a truly spatial GIS algorithm. Classical algorithms such as the pushbroom and growth-ring techniques require computational propagation through the entire raster image, which makes them incompatible with the distributed nature
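The property that makes Euclidean distance attractive for distributed computation can be seen in a brute-force sketch: each output cell depends only on the list of source cells, so cells can be computed independently and in parallel, unlike the propagation-based pushbroom and growth-ring algorithms. The grid shape and source locations below are invented for illustration, and this is not the algorithm from the cited article.

```python
import numpy as np

# Brute-force Euclidean distance raster: for each cell, the distance to
# the nearest source cell. Every cell is independent of its neighbors,
# which is what makes this form embarrassingly parallel.
def euclidean_distance_raster(sources, shape):
    rows, cols = np.indices(shape)
    cells = np.stack([rows.ravel(), cols.ravel()], axis=1)  # (N, 2)
    src = np.asarray(sources, dtype=float)                  # (S, 2)
    # pairwise distances from every cell to every source, min over sources
    d = np.linalg.norm(cells[:, None, :] - src[None, :, :], axis=2)
    return d.min(axis=1).reshape(shape)

dist = euclidean_distance_raster([(0, 0), (2, 3)], (3, 4))
print(np.round(dist, 2))
```

In a cloud setting, the raster could be tiled and each tile's distances computed on a separate node against the (small, broadcast) source list, with no inter-tile propagation required.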

The capability to establish adaptive relationships with the environment is an essential characteristic of living cells. Bacterial computing and bacterial intelligence are two general traits manifested in adaptive behaviors that respond to surrounding environmental conditions. These two traits have generated a variety of theoretical and applied approaches. Since the different systems of bacterial signaling and the different modes of genetic change are now better known and more carefully explored, the whole range of adaptive possibilities of bacteria may be studied from new angles. For instance, there appear to be instances of molecular "learning" along the mechanisms of evolution. More concretely, looking specifically at the time dimension, the bacterial mechanisms of learning and evolution appear as two different and related mechanisms for adaptation to the environment: in somatic time the former and in evolutionary time the latter. The present chapter reviews the possible application of both kinds of mechanisms to prokaryotic molecular computing schemes as well as to the solution of real-world problems. PMID:24723912

The purpose of this paper is to examine the use of these advanced models, methods and computing environments for nuclear applications to determine if the industry can expect to derive the same benefit as other industries, such as the automotive and the aerospace industries. As an example, the authors will examine the use of modern computational fluid dynamics (CFD) capability for subchannel analysis, which is an important part of the analysis technology used by utilities to ensure safe and economical design and operation of reactors. In the current deregulated environment, it is possible that by use of these enhanced techniques, the thermal and electrical output of current reactors may be increased without any increase in cost and at no compromise in safety.

Generating and returning imagery from great distances has been generally associated with national security activities, with emphasis on reliability of system operation. (While the introduction of such capabilities was usually characterized by high levels of innovation, the evolution of such systems has followed the classical track of proliferation of "standardized items" expressing ever more incremental technological advances.) Recent focusing of interest on the use of remote imaging systems for commercial and scientific purposes can be expected to induce comparatively rapid advances along the axes of efficiency and technological sophistication, respectively. This paper reviews the most basic reasons for expecting the next decade of advances to dwarf the impressive accomplishments of the past ten years. The impact of these advances clearly will be felt in all major areas of large-scale human endeavor: commercial, military, and scientific.

TECHNICAL REPORT 80-02, Quarterly Technical Report: The Design and Transfer of Advanced Command and Control (C2) Computer-Based Systems (ARPA). The tasks, objectives, and purposes of the overall project are connected with the design, development, demonstration, and transfer of advanced command and control (C2) computer-based systems; this report covers work in the computer-based design and transfer areas only.

The focus of the Research Institute for AdvancedComputer Science (RIACS) is to explore matches between advancedcomputing architectures and the processes of scientific research. An architecture evaluation of the MIT static dataflow machine, specification of a graphical language for expressing distributed computations, and specification of an expert system for aiding in grid generation for two-dimensional flow problems was initiated. Research projects for 1984 and 1985 are summarized.

The need for high fidelity electronic structure calculations has catalyzed an explosion in the development of new techniques. Improvements in DFT functionals, many body perturbation theory and dynamical mean field theory are starting to make significant headway towards reaching the accuracy required for a true predictive capability. One technique that is undergoing a resurgence is diffusion Monte Carlo (DMC). The early calculations with this method were of unquestionable accuracy (providing a valuable reference for DFT functionals) but were largely limited to model systems because of their high computational cost. Algorithmic advances and improvements in computer power have reached the point where this is no longer an insurmountable obstacle. In this talk I will present a broad study of DMC applied to condensed matter (arXiv:1310.1047). We have shown excellent agreement for the bulk modulus and lattice constant of solids exhibiting several different types of binding, including ionic, covalent and van der Waals. We will discuss both the opportunities for application of this method as well as opportunities for further theoretical improvements. Sandia National Laboratories is a multiprogram laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under Contract No. DE-AC04-94AL85000.

to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spouted diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids lagrangian trajectories, and many other of spouted bed related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

The synthesis of ceramics and ceramic coatings through the sol-gel process has extensive application with the United States Navy and a broad range of potential commercial applications as well. This paper surveys seven specific applications for which the Navy is investigating these advanced materials. For each area, the synthetic process is described and the characteristics of the materials are discussed.

We are developing an extensible software framework, in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python based software framework, that is the framework elements of the middleware that allows developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech), and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.

The Generalized Advanced Propeller Analysis System (GAPAS) computer code is described. GAPAS was developed to analyze advanced technology multi-bladed propellers which operate on aircraft with speeds up to Mach 0.8 and altitudes up to 40,000 feet. GAPAS includes technology for analyzing aerodynamic, structural, and acoustic performance of propellers. The computer code was developed for the CDC 7600 computer and is currently available for industrial use on the NASA Langley computer. A description of all the analytical models incorporated in GAPAS is included. Sample calculations are also described as well as users requirements for modifying the analysis system. Computer system core requirements and running times are also discussed.

Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advancedcomputational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.

Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the AdvancedComputing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

In optical systems just like any other space borne system, thermal control plays an important role. In fact, most advanced designs are plagued with volume constraints that further complicate the thermal control challenges for even the most experienced systems engineers. Peregrine will present advances in satellite thermal control based upon passive heat transfer technologies to dissipate large thermal loads. This will address the use of 700 W/m K and higher conducting products that are five times better than aluminum on a specific basis providing enabling thermal control while maintaining structural support.

The User-Interface Technology Branch at NCCOSC RDT&E Division has been conducting a series of studies to address the suitability of commercial off-the-shelf (COTS) graphic user-interface (GUI) methods for efficiency and performance in critical naval combat systems. This paper presents an advanced selection algorithm and method developed to increase user performance when making selections on tactical displays. The method has also been applied with considerable success to a variety of cursor and pointing tasks. Typical GUI's allow user selection by: (1) moving a cursor with a pointing device such as a mouse, trackball, joystick, touchscreen; and (2) placing the cursor on the object. Examples of GUI objects are the buttons, icons, folders, scroll bars, etc. used in many personal computer and workstation applications. This paper presents an improved method of selection and the theoretical basis for the significant performance gains achieved with various input devices tested. The method is applicable to all GUI styles and display sizes, and is particularly useful for selections on small screens such as notebook computers. Considering the amount of work-hours spent pointing and clicking across all styles of available graphic user-interfaces, the cost/benefit in applying this method to graphic user-interfaces is substantial, with the potential for increasing productivity across thousands of users and applications.

The growing application of computational aerodynamics to nonlinear rotorcraft problems is outlined, with particular emphasis on the development of new methods based on the Euler and thin-layer Navier-Stokes equations. Rotor airfoil characteristics can now be calculated accurately over a wide range of transonic flow conditions. However, unsteady 3-D viscous codes remain in the research stage, and a numerical simulation of the complete flow field about a helicopter in forward flight is not now feasible. Nevertheless, impressive progress is being made in preparation for future supercomputers that will enable meaningful calculations to be made for arbitrary rotorcraft configurations.

Studies indicate that the use of technologies as teaching aids and tools for self-study is influenced by students' attitudes toward computers and their applications. The purpose of this study is to determine whether taking a Computer Literacy and Applications (CLA) course has an impact on students' attitudes toward computerapplications, across…

This paper provides a discussion of the subject and an approach to establish a reliability and qualification methodology to facilitate the utilization of state-of-the-art advanced microelectronic devices and structures in high reliability applications.

The AdvancedComputational Sensors Team at the Johns Hopkins University Applied Physics Laboratory and the Johns Hopkins University Department of Electrical and Computer Engineering has been developing advanced readout integrated circuit (ROIC) technology for more than 10 years with a particular focus on the key challenges of dynamic range, sampling rate, system interface and bandwidth, and detector materials or band dependencies. Because the pixel array offers parallel sampling by default, the team successfully demonstrated that adding smarts in the pixel and the chip can increase performance significantly. Each pixel becomes a smart sensor and can operate independently in collecting, processing, and sharing data. In addition, building on the digital circuit revolution, the effective well size can be increased by orders of magnitude within the same pixel pitch over analog designs. This research has yielded an innovative class of a system-on-chip concept: the Flexible Readout and Integration Sensor (FRIS) architecture. All key parameters are programmable and/or can be adjusted dynamically, and this architecture can potentially be sensor and application agnostic. This paper reports on the testing and evaluation of one prototype that can support either detector polarity and includes sample results with visible, short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging.

This guide, which is designed for use with student and teacher guides to a 10-unit secondary-level course in natural resources, contains a series of student supplements and advanced assignment and job sheets that provide students with additional opportunities to explore the following areas of natural resources and conservation education: outdoor…

earth quakes, volcanoes , and tsunamis. REPLACEMENT OF CELL -PHONE BACK-HAUL TIMING Though recent advances in miniature atomic clocks have...contact with other soldiers, ships, tanks, and bases. The super - ruggedized construction of each DeSoLoS can withstand the catastrophic events of war

The Agency for Healthcare Research and Quality and its predecessor organizations—collectively referred to here as AHRQ—have a productive history of funding research and development in the field of medical informatics, with grant investments since 1968 totaling $107 million. Many computerized interventions that are commonplace today, such as drug interaction alerts, had their genesis in early AHRQ initiatives. This review provides a historical perspective on AHRQ investment in medical informatics research. It shows that grants provided by AHRQ resulted in achievements that include advancing automation in the clinical laboratory and radiology, assisting in technology development (computer languages, software, and hardware), evaluating the effectiveness of computer-based medical information systems, facilitating the evolution of computer-aided decision making, promoting computer-initiated quality assurance programs, backing the formation and application of comprehensive data banks, enhancing the management of specific conditions such as HIV infection, and supporting health data coding and standards initiatives. Other federal agencies and private organizations have also supported research in medical informatics, some earlier and to a greater degree than AHRQ. The results and relative roles of these related efforts are beyond the scope of this review. PMID:11861630

The New Millennium Program (NMP) Integrated Product Development Team (IPDT) for Microelectronics Systems was planning to validate a newly developed 3D Flight Computer system on its first deep-space flight, DS1, launched in October 1998. This computer, developed in the 1995-97 time frame, contains many new computer technologies previously never used in deep-space systems. They include: advanced 3D packaging architecture for future low-mass and low-volume avionics systems; high-density 3D packaged chip-stacks for both volatile and non-volatile mass memory: 400 Mbytes of local DRAM memory, and 128 Mbytes of Flash memory; high-bandwidth Peripheral Component Interface (Per) local-bus with a bridge to VME; high-bandwidth (20 Mbps) fiber-optic serial bus; and other attributes, such as standard support for Design for Testability (DFT). Even though this computer system did not complete on time for delivery to the DS1 project, it was an important development along a technology roadmap towards highly integrated and highly miniaturized avionics systems for deep-space applications. This continued technology development is now being performed by NASA's Deep Space System Development Program (also known as X2000) and within JPL's Center for Integrated Space Microsystems (CISM).

The invention of thermography, in the 1950s, posed a formidable problem to the research community: What is the relationship between disease and heat radiation captured with Infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship. This effort was aided by advances in processing techniques, improved sensitivity and spatial resolution of thermal sensors. However, despite this progress fundamental issues with this imaging modality still remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the change in heat radiation as well as the change in radiation pattern, which indicate disease, is minute. On a technical level, this poses high requirements on image capturing and processing. On a more abstract level, these problems lead to inter-observer variability and on an even more abstract level they lead to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that the diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and this will pave the way for integrated health care systems which maximize the quality of patient care.

A major thrust of advanced material development is in the area of self-assembled ultra-fine particulate based composites (micro-composites). The application of biologically derived, self-assembled microstructures to form advanced composite materials is discussed. Hollow 0.5 micron diameter cylindrical shaped microcylinders self-assemble from diacetylenic lipids. These microstructures have a multiplicity of potential applications in the material sciences. Exploratory development is proceeding in application areas such as controlled release for drug delivery, wound repair, and biofouling as well as composites for electronic and magnetic applications, and high power microwave cathodes.

A trend toward the abstraction of computing platforms that has been developing in the broader IT arena over the last few years is just beginning to make inroads into the library technology scene. Cloud computing offers for libraries many interesting possibilities that may help reduce technology costs and increase capacity, reliability, and…

This mini review addresses recent developments in computational enzyme design. Successful protocols as well as known issues and limitations are discussed from an energetic perspective. It will be argued that improved results can be obtained by including a dynamic treatment in the design protocol. Finally, a molecular dynamics-based approach for evaluating and refining computational designs is presented. PMID:24688650

Curdlan is a water-insoluble β-(1,3)-glucan produced by Agrobacterium species under nitrogen-limited condition. Its heat-induced gelling properties render curdlan to be very useful in the food industry initially. Recent advances in the understanding of the role curdlan plays in both innate and adaptive immunity lead to its growing applications in biomedicine. Our review focuses on the recent advances on curdlan biosynthesis and the improvements of curdlan fermentation production both from our laboratory and many others as well as the latest advances on the new applications of curdlan and its derivatives particularly in their immunological functions in biomedicine.

Conventional microscopy has been a revolutionary tool for biomedical applications since its invention several centuries ago. Ability to non-destructively observe very fine details of biological objects in real time enabled to answer many important questions about their structures and functions. Unfortunately, most of these advance microscopes are complex, bulky, expensive, and/or hard to operate, so they could not reach beyond the walls of well-equipped laboratories. Recent improvements in optoelectronic components and computational methods allow creating imaging systems that better fulfill the specific needs of clinics or research related biomedical applications. In this respect, lensfree computational microscopy aims to replace bulky and expensive optical components with compact and cost-effective alternatives through the use of computation, which can be particularly useful for lab-on-a-chip platforms as well as imaging applications in low-resource settings. Several high-throughput on-chip platforms are built with this approach for applications including, but not limited to, cytometry, micro-array imaging, rare cell analysis, telemedicine, and water quality screening. The lack of optical complexity in these lensfree on-chip imaging platforms is compensated by using computational techniques. These computational methods are utilized for various purposes in coherent, incoherent and fluorescent on-chip imaging platforms e.g. improving the spatial resolution, to undo the light diffraction without using lenses, localization of objects in a large volume and retrieval of the phase or the color/spectral content of the objects. For instance, pixel super resolution approaches based on source shifting are used in lensfree imaging platforms to prevent under sampling, Bayer pattern, and aliasing artifacts. Another method, iterative phase retrieval, is utilized to compensate the lack of lenses by undoing the diffraction and removing the twin image noise of in-line holograms

The design and application of advanced composites is discussed with emphasis on aerospace, aircraft, automotive, marine, and industrial applications. Failure modes in advanced composites are also discussed.

Mechanochemical synthesis emerged as the most advantageous, environmentally sound alternative to traditional routes for nanomaterials preparation with outstanding properties for advancedapplications. Featuring simplicity, high reproducibility, mild/short reaction conditions and often solvent-free condition (dry milling), mechanochemistry can offer remarkable possibilities in the development of advanced catalytically active materials. The proposed contribution has been aimed to provide a brief account of remarkable recent findings and advances in the mechanochemical synthesis of solid phase advanced catalysts as opposed to conventional systems. The role of mechanical energy in the synthesis of solid catalysts and their application is critically discussed as well as the influence of the synthesis procedure on the physicochemical properties and the efficiency of synthesized catalysts is studied. The main purpose of this feature article is to highlight the possibilities of mechanochemical protocols in (nano)materials engineering for catalytic applications.

The design and performance evaluation of an entry guidance algorithm for future space transportation vehicles is presented. The algorithm performs two functions: on-board trajectory planning and trajectory tracking. The planned longitudinal path is followed by tracking drag acceleration, as is done by the Space Shuttle entry guidance. Unlike the Shuttle entry guidance, lateral path curvature is also planned and followed. A new trajectory planning function for the guidance algorithm is developed that is suitable for suborbital entry and that significantly enhances the overall performance of the algorithm for both orbital and suborbital entry. In comparison with the previous trajectory planner, the new planner produces trajectories that are easier to track, especially near the upper and lower drag boundaries and for suborbital entry. The new planner accomplishes this by matching the vehicle's initial flight path angle and bank angle, and by enforcing the full three-degree-of-freedom equations of motion with control derivative limits. Insights gained from trajectory optimization results contribute to the design of the new planner, giving it near-optimal downrange and crossrange capabilities. Planned trajectories and guidance simulation results are presented that demonstrate the improved performance. Based on the new planner, a method is developed for approximating the landing footprint for entry vehicles in near real-time, as would be needed for an on-board flight management system. The boundary of the footprint is constructed from the endpoints of extreme downrange and crossrange trajectories generated by the new trajectory planner. The footprint algorithm inherently possesses many of the qualities of the new planner, including quick execution, the ability to accurately approximate the vehicle's glide capabilities, and applicability to a wide range of entry conditions. Footprints can be generated for orbital and suborbital entry conditions using a pre

Advanced multi-scale modeling and simulation has the potential to dramatically reduce development time, resulting in considerable cost savings. The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and universities that is developing, demonstrating, and deploying a suite of multi-scale modeling and simulation tools. One significant computational tool is FOQUS, a Framework for Optimization and Quantification of Uncertainty and Sensitivity, which enables basic data submodels, including thermodynamics and kinetics, to be used within detailed process models to rapidly synthesize and optimize a process and determine the level of uncertainty associated with the resulting process. The overall approach of CCSI is described with a more detailed discussion of FOQUS and its application to carbon capture systems.

In order to clarify mechanical phenomena in civil engineering, it is necessary to improve computational theory and technique in consideration of the particularity of objects to be analyzed and to update computational mechanics focusing on practical use. In addition to the analysis of infrastructure, for damage prediction of natural disasters such as earthquake, tsunami and flood, since it is essential to reflect broad ranges in space and time inherent to fields of civil engineering as well as material properties, it is important to newly develop computational method in view of the particularity of fields of civil engineering. In this context, research trend of methods of computational mechanics which is noteworthy for resolving the complex mechanics problems in civil engineering is reviewed in this paper.

The design of thermal processes in the food industry has undergone great developments in the last two decades due to the availability of cheap computer power alongside advanced modelling techniques such as computational fluid dynamics (CFD). CFD uses numerical algorithms to solve the non-linear partial differential equations of fluid mechanics and heat transfer so that the complex mechanisms that govern many food-processing systems can be resolved. In thermal processing applications, CFD can be used to build three-dimensional models that are both spatially and temporally representative of a physical system to produce solutions with high levels of physical realism without the heavy costs associated with experimental analyses. Therefore, CFD is playing an ever growing role in the development of optimization of conventional as well as the development of new thermal processes in the food industry. This paper discusses the fundamental aspects involved in developing CFD solutions and forms a state-of-the-art review on various CFD applications in conventional as well as novel thermal processes. The challenges facing CFD modellers of thermal processes are also discussed. From this review it is evident that present-day CFD software, with its rich tapestries of mathematical physics, numerical methods and visualization techniques, is currently recognized as a formidable and pervasive technology which can permit comprehensive analyses of thermal processing.

The development of commercial and military aircraft depends, to a large extent, on engine manufacturers being able to achieve significant increases in propulsion capability through improved component aerodynamics, materials, and structures. The recent history of propulsion has been marked by efforts to develop computational techniques that can speed up the propulsion design process and produce superior designs. The availability of powerful supercomputers, such as the NASA Numerical Aerodynamic Simulator, and the potential for even higher performance offered by parallel computer architectures, have opened the door to the use of multi-dimensional simulations to study complex physical phenomena in propulsion systems that have previously defied analysis or experimental observation. An overview of several NASA Lewis research efforts is provided that are contributing toward the long-range goal of a numerical test-cell for the integrated, multidisciplinary design, analysis, and optimization of propulsion systems. Specific examples in Internal Computational Fluid Mechanics, Computational Structural Mechanics, Computational Materials Science, and High Performance Computing are cited and described in terms of current capabilities, technical challenges, and future research directions.

BAE Systems continues to advance the technology and performance of microbolometer-based thermal imaging modules and systems. 640x480 digital uncooled infrared focal plane arrays are in full production, illustrated by recent production line test data for two thousand focal plane arrays. This paper presents a snapshot of microbolometer technology at BAE Systems and an overview of two of the most important thermal imaging sensor programs currently in production: a family of thermal weapons sights for the United States Army and a thermal imager for the remote weapons station on the Stryker vehicle.

A Hybrid Optoelectronic Neural Object Recognition System (HONORS), is disclosed, comprising two major building blocks: (1) an advanced grayscale optical correlator (OC) and (2) a massively parallel three-dimensional neural-processor. The optical correlator, with its inherent advantages in parallel processing and shift invariance, is used for target of interest (TOI) detection and segmentation. The three-dimensional neural-processor, with its robust neural learning capability, is used for target classification and identification. The hybrid optoelectronic neural object recognition system, with its powerful combination of optical processing and neural networks, enables real-time, large frame, automatic target recognition (ATR).

Confocal and Two-Photon Microscopy Foundations, Applications, and Advances Edited by Alberto Diaspro Confocal and two-photon fluorescence microscopy has provided researchers with unique possibilities of three-dimensional imaging of biological cells and tissues and of other structures such as semiconductor integrated circuits. Confocal and Two-Photon Microscopy: Foundations, Applications, and Advances provides clear, comprehensive coverage of basic foundations, modern applications, and groundbreaking new research developments made in this important area of microscopy. Opening with a foreword by G. J. Brakenhoff, this reference gathers the work of an international group of renowned experts in chapters that are logically divided into balanced sections covering theory, techniques, applications, and advances, featuring: In-depth discussion of applications for biology, medicine, physics, engineering, and chemistry, including industrial applications Guidance on new and emerging imaging technology, developmental trends, and fluorescent molecules Uniform organization and review-style presentation of chapters, with an introduction, historical overview, methodology, practical tips, applications, future directions, chapter summary, and bibliographical references Companion FTP site with full-color photographs The significant experience of pioneers, leaders, and emerging scientists in the field of confocal and two-photon excitation microscopy Confocal and Two-Photon Microscopy: Foundations, Applications, and Advances is invaluable to researchers in the biological sciences, tissue and cellular engineering, biophysics, bioengineering, physics of matter, and medicine, who use these techniques or are involved in developing new commercial instruments.

Carbon nanostructures—including graphene, fullerenes, etc.—have found applications in a number of areas, synergistically with a number of other materials. These multifunctional carbon nanostructures have recently attracted tremendous interest for energy storage applications due to their large aspect ratios, specific surface areas, and electrical conductivity. This succinct review reports on recent advances in energy storage applications involving these multifunctional carbon nanostructures. The advanced design and testing of multifunctional carbon nanostructures for energy storage applications—specifically, electrochemical capacitors, lithium ion batteries, and fuel cells—are emphasized with comprehensive examples. PMID:28347034

The ideal cycle, its application to a practical machine, and the specific advantages of the Stirling engine, namely high efficiency, low emissions, multi-fuel capability, and low noise, are discussed. Certain portions of the Stirling engine must operate continuously at high temperature. Ceramics offer the potential of cost reduction and efficiency improvement for advanced engine applications. Potential applications for ceramics in Stirling engines, and some of the special problems pertinent to using ceramics in the Stirling engine, are described. The research and technology program in ceramics that is planned to support the development of advanced Stirling engines is outlined.

Virtual Reality (VR) can provide cost-effective methods to design and evaluate components and systems for maintenance and refurbishment operations. The Marshall Space Flight Center (MSFC) in Huntsville, Alabama began to utilize VR for design analysis of the X-34 experimental reusable space vehicle. Analysts at MSFC's Computer Applications and Virtual Environments (CAVE) facility used Head Mounted Displays (HMDs) (pictured), spatial trackers, and gesture inputs as a means to animate or inhabit a properly sized virtual human model. These models were used in a VR scenario as a way to determine functionality of space and maintenance requirements for the virtual X-34. The primary functions of the virtual X-34 mockup were to support operations development and design analysis for engine removal, the engine compartment, and the aft fuselage. This capability provided general visualization support to engineers and designers at MSFC and to the System Design Freeze Review at Orbital Sciences Corporation (OSC). The X-34 program was cancelled in 2001.

Sandia National Laboratories (SNL) is a multi-program national laboratory in the business of national security, whose primary mission is nuclear weapons (NW). It is a prime contractor to the USDOE, operating under the NNSA, and is one of the three NW national laboratories. It has a long history of involvement in the area of geomechanics, starting with some of the earliest weapons tests in Nevada. Projects at Sandia that place geomechanics support in general, and computational geomechanics support in particular, at the forefront range from civilian programs to defense programs. SNL has had significant involvement and participation in the Waste Isolation Pilot Plant (low-level defense nuclear waste), the Yucca Mountain Project (formerly proposed for commercial spent fuel and high-level nuclear waste), and the Strategic Petroleum Reserve (the nation's emergency petroleum store). In addition, numerous industrial partners seek out our computational geomechanics expertise, and there are efforts in compressed air and natural gas storage, as well as in CO2 sequestration. Likewise, there have been collaborative past efforts in the areas of compactable reservoir response, the response of salt structures associated with reservoirs, and basin modeling for the oil and gas industry. There are also efforts on the defense front, ranging from assessment of infrastructure vulnerability to defeat of hardened targets, which require an understanding and application of computational geomechanics. Several examples from these areas are described and discussed to give the audience a flavor of the type of work currently being performed at Sandia in the general area of geomechanics.

Members of the Nondestructive Evaluation (NDE) Section at the Lawrence Livermore National Laboratory (LLNL) have implemented the advanced three-dimensional imaging technique of x- and γ-ray computed tomography (CAT or CT) for industrial and scientific nondestructive evaluation. This technique provides internal and external views of materials, components, and assemblies nonintrusively. Our research and development includes building CT scanners as well as data preprocessing, image reconstruction, display, and analysis algorithms. These capabilities have been applied to a variety of industrial and scientific NDE applications where objects range in size from 1 mm³ to 1 m³. Here we discuss the usefulness of CT to evaluate ballistic target materials, high-explosive shaped charges, missile nosetips, and reactor-fuel tubes.

Initial steps in the application of a low-order panel method computational fluid dynamic (CFD) code to the calculation of aircraft dynamic stability and control (S&C) derivatives are documented. Several capabilities, unique to CFD but not unique to this particular demonstration, are identified and demonstrated in this paper. These unique capabilities complement conventional S&C techniques and include the ability to: 1) perform maneuvers without the flow-kinematic restrictions and support interference commonly associated with experimental S&C facilities, 2) easily simulate advanced S&C testing techniques, 3) compute exact S&C derivatives with uncertainty propagation bounds, and 4) alter the flow physics associated with a particular testing technique from those observed in a wind or water tunnel test in order to isolate effects. Also presented are discussions of some computational issues associated with the simulation of S&C tests and selected results from numerous surface grid resolution studies performed during the course of the study.
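Computing an S&C derivative from a flow solver ultimately reduces to differentiating force and moment coefficients with respect to a motion parameter. The sketch below is illustrative only: it uses a hypothetical linear pitching-moment model in place of a CFD solve, and all coefficient values are invented.

```python
import numpy as np

def central_difference(f, x, h=1e-4):
    """Estimate df/dx with a second-order central difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Hypothetical pitching-moment model standing in for a CFD force solve:
# Cm(alpha) = Cm0 + Cm_alpha * alpha  (alpha in radians)
Cm0, Cm_alpha_true = 0.02, -0.85

def cm(alpha):
    return Cm0 + Cm_alpha_true * alpha

# Static stability derivative Cm_alpha evaluated at alpha = 2 degrees
alpha0 = np.radians(2.0)
cm_alpha = central_difference(cm, alpha0)
print(round(cm_alpha, 4))  # ≈ -0.85
```

In a real workflow the function `cm` would be replaced by a full flow solution at each perturbed angle of attack, which is why derivative extraction from CFD is costly.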

Current patient monitoring procedures in hospital intensive care units (ICUs) generate vast quantities of medical data, much of which is considered extraneous and is not evaluated. Although sophisticated monitors to analyze individual types of patient data are routinely used in the hospital setting, this equipment lacks high-order signal analysis tools for detecting long-term trends and correlations between different signals within a patient data set. Without the ability to continuously analyze disjoint sets of patient data, it is difficult to detect slow-forming complications. As a result, the early onset of conditions such as pneumonia or sepsis may not be apparent until the advanced stages. We report here on the development of a distributed software architecture test bed and software medical models to analyze both asynchronous and continuous patient data in real time. Hardware and software have been developed to support a multi-node distributed computer cluster capable of amassing data from multiple patient monitors and projecting near- and long-term outcomes based upon the application of physiologic models to the incoming patient data stream. One computer acts as a central coordinating node; additional computers accommodate processing needs. A simple, non-clinical model for sepsis detection was implemented on the system for demonstration purposes. This work shows exceptional promise as a highly effective means to rapidly predict and thereby mitigate the effect of nosocomial infections.
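The report's demonstration sepsis model is not specified here. As a stand-in, the following sketch shows the kind of long-term trend detection a processing node could run on a monitored vital sign; the signal, window length, and alarm threshold are all invented for illustration and have no clinical meaning.

```python
import numpy as np

def rolling_slope(signal, window):
    """Least-squares slope over a sliding window: a crude long-term trend detector."""
    t = np.arange(window)
    slopes = []
    for i in range(len(signal) - window + 1):
        seg = signal[i:i + window]
        slopes.append(np.polyfit(t, seg, 1)[0])  # fitted slope per window
    return np.array(slopes)

# Toy heart-rate stream: stable baseline, then a slow upward drift
rng = np.random.default_rng(0)
stable = 70 + rng.normal(0, 0.5, 60)
drift = 70 + 0.3 * np.arange(60) + rng.normal(0, 0.5, 60)
hr = np.concatenate([stable, drift])

slopes = rolling_slope(hr, window=30)
alarm = slopes > 0.2   # flag a sustained upward trend (threshold is illustrative)
print(alarm[:10].any(), alarm[-10:].all())  # quiet early, alarming late
```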

Every field is searching for its better mousetrap, and the field of dosimetry is no different. Until recently, a dosimetrist would have been hard-pressed to identify an affordable yet reliably accurate dosimeter for mixed neutron and gamma fields. A new technology has reared its head and is vying for position in the dosimetry community. This relatively young technology builds upon the foundation of the bubble chamber, conceptualized by Glaser in 1952 (Glaser 1952). Although the attitudes surrounding this technology, as with any new development, are somewhat mixed, with the proper combination of tweaking and innovative thought, applications of this technology hold great promise for the future of neutron dosimetry. The Dosimetry Applications Research (DOSAR) facility of Oak Ridge National Laboratory (ORNL) is looking into some innovative applications of this technology. We are investigating options for overcoming its limiting features in hopes of achieving an unprecedented level of proficiency in neutron detection. Among these are the development and testing of a Combination Area Neutron Spectrometer (CANS); assessing the plausibility of extremity applications; the assembly of an alternative reader for research; investigation of temperature-related effects and how to correct them; and considerations on the coming of age of neutron dosimetry via real-time detection of bubble formation in Bubble Technology Industries Inc. (BTI) detectors. In the space allowed, we will attempt to answer the questions: (1) What areas hold the greatest promise for application of this emerging technology? (2) What obstacles must be overcome before full-blown application becomes a reality? and (3) What might the future hold? 11 refs., 6 figs., 3 tabs.

In observational seismology, wavefront tracking techniques are becoming increasingly popular as a means of predicting two-point traveltimes and their associated paths. Possible applications include reflection migration, earthquake relocation, and seismic tomography at a wide variety of scales. Compared with traditional ray-based techniques such as shooting and bending, wavefront tracking has the advantages of locating traveltimes between the source and every point in the medium; in many cases, improved efficiency and robustness; and greater potential for tracking multiple arrivals. In this presentation, two wavefront tracking techniques will be considered: the so-called Fast Marching Method (FMM), and a wavefront construction (WFC) scheme. Over the last several years, FMM has become a mature technique in seismology, with a number of improvements to the underlying theory and the release of software tools that allow it to be used in a variety of applications. At its core, FMM is a grid-based solver that implicitly tracks a propagating wavefront by seeking finite difference solutions to the eikonal equation along an evolving narrow band. Recent developments include the use of source grid refinement to improve accuracy, the introduction of a multi-stage scheme to allow reflections and refractions to be tracked in layered media, and extension to spherical coordinates. Implementation of these ideas has led to a number of different applications, including teleseismic tomography, wide-angle reflection and refraction tomography, earthquake relocation, and ambient noise imaging using surface waves. The WFC scheme represents the wavefront surface as a set of points in 6-D phase space; these points are advanced in time using local initial value ray tracing in order to form a sequence of wavefront surfaces that fill the model volume. Surface refinement and simplification techniques inspired by recent developments in computer graphics are used to maintain a fixed density of nodes.
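As a concrete illustration of the grid-based eikonal solve at the core of FMM, here is a minimal first-order 2-D implementation (isotropic medium, uniform Cartesian grid). It is a simplification of production traveltime codes, which add source grid refinement, multi-stage reflection tracking, and spherical coordinates as described above.

```python
import heapq
import numpy as np

def fast_marching(speed, src, h=1.0):
    """First-order Fast Marching solution of |grad T| = 1/speed on a 2D grid."""
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    accepted = np.zeros((ny, nx), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]                  # narrow band, ordered by traveltime
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if accepted[i, j]:
            continue
        accepted[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not accepted[ni, nj]:
                # upwind neighbor values in x and y
                tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                         T[ni, nj + 1] if nj < nx - 1 else np.inf)
                ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                         T[ni + 1, nj] if ni < ny - 1 else np.inf)
                f = h / speed[ni, nj]
                a, b = sorted((tx, ty))
                if b - a >= f:           # one-sided update
                    t_new = a + f
                else:                    # two-sided quadratic update
                    t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2))
                if t_new < T[ni, nj]:
                    T[ni, nj] = t_new
                    heapq.heappush(heap, (t_new, (ni, nj)))
    return T

# Uniform unit speed: traveltimes approximate Euclidean distance from the source
T = fast_marching(np.ones((21, 21)), src=(10, 10))
print(round(T[10, 15], 3))  # along a grid axis the solution is exact: 5.0
```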

A number of the technologies previously developed for the thermal control of spacecraft have found their way into commercial application. Specialized coatings and heat pipes are but two examples. The thermal control of current and future spacecraft is becoming increasingly more demanding, and a variety of new technologies are being developed to meet these needs. Closed two-phase loops are perceived to be the answer to many of the new requirements. All of these technologies are discussed, and their spacecraft and current terrestrial applications are summarized.

In today's highly mobile, networked, and interconnected internet world, the flow and volume of information is overwhelming and continuously increasing. Therefore, it is believed that the next frontier in technological evolution and development will rely on our ability to develop intelligent systems that can help us process, analyze, and make sense of information autonomously, just as a well-trained and educated human expert would. In computational intelligence, neuromorphic computing promises to allow for the development of computing systems able to imitate natural neurobiological processes and form the foundation for intelligent system architectures.

Three spacecraft configurations were designed for operation as a high-powered synchronous communications satellite. Each spacecraft includes a 1 kW TWT and a 2 kW klystron power amplifier feeding an antenna with multiple shaped beams. One of the spacecraft is designed to be boosted by a Thor-Delta launch vehicle and raised to synchronous orbit with electric propulsion. The other two are inserted into an elliptical transfer orbit with an Atlas Centaur and injected into final orbit with an apogee kick motor. Advanced technologies employed in the several configurations include tubes with multiple-stage collectors radiating directly to space, multiple-contoured beam antennas, high-voltage rollout solar cell arrays with integral power conditioning, electric propulsion for orbit raising and on-station attitude control and station-keeping, and liquid metal slip rings.

This report summarizes work performed in support of the development and demonstration of a structural ceramic technology for automotive gas turbine engines. The AGT101 regenerated gas turbine engine developed under the previous DOE/NASA Advanced Gas Turbine (AGT) program is being utilized for verification testing of the durability of next-generation ceramic components and their suitability for service at reference powertrain design conditions. Topics covered in this report include ceramic processing definition and refinement, design improvements to the test bed engine and test rigs, and design methodologies related to ceramic impact and fracture mechanisms. Appendices include reports by ATTAP subcontractors addressing the development of silicon nitride and silicon carbide families of materials and processes.

An advanced satellite payload is proposed for single hop linking of mobile terminals of all classes as well as Very Small Aperture Terminal's (VSAT's). It relies on an intensive use of communications on-board processing and beam hopping for efficient link design to maximize capacity and a large satellite antenna aperture and high satellite transmitter power to minimize the cost of the ground terminals. Intersatellite links are used to improve the link quality and for high capacity relay. Power budgets are presented for links between the satellite and mobile, VSAT, and hub terminals. Defeating the effects of shadowing and fading requires the use of differentially coherent demodulation, concatenated forward error correction coding, and interleaving, all on a single link basis.
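The power-budget bookkeeping described above can be sketched in a few lines of dB arithmetic. The EIRP, G/T, and frequency below are illustrative assumptions, not values from the paper.

```python
import math

def free_space_path_loss_db(freq_hz, dist_m):
    """Free-space path loss: 20*log10(4*pi*d/lambda)."""
    lam = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * dist_m / lam)

# Illustrative GEO mobile downlink (all values are assumptions)
eirp_dbw = 55.0        # satellite EIRP, dBW
gt_dbk = -12.0         # mobile terminal G/T, dB/K
freq = 1.5e9           # L-band carrier, Hz
dist = 35786e3         # GEO altitude used as slant range, m
k_dbw = -228.6         # Boltzmann's constant, dBW/(K*Hz)

fspl = free_space_path_loss_db(freq, dist)
cn0 = eirp_dbw + gt_dbk - fspl - k_dbw   # carrier-to-noise density, dB-Hz
print(round(fspl, 1), round(cn0, 1))     # ≈ 187.0 dB and ≈ 84.6 dB-Hz
```

A real budget for the shadowed mobile link would then subtract fading margins and add the coding gain from the concatenated FEC and interleaving mentioned above.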

Background information concerning the use of laser diodes in pyrotechnic applications is provided in viewgraph form. The following topics are discussed: damage limits, temperature stability, fiber coupling issues, and small (100 micron) and large (400 micron) fiber results. The discussions concerning fiber results concentrate on the areas of package geometry and electro-optical properties.

Current computational developments at the Jet Propulsion Laboratory (JPL) are motivated by the NASA/JPL goal of reducing payload in future space missions while increasing mission capability through miniaturization of active and passive sensors, analytical instruments and communication systems.

Nanoinformatics has recently emerged to address the need for computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility for developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also encompass the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended “nanotype” to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, similarly to what happened with the Human Genome and other –omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases or developing new approaches for scientific ontologies, among many others. PMID:22942787

This ASC Co-design Strategy lays out the full continuum and components of the co-design process, based on what we have experienced thus far and what we wish to do more in the future to meet the program’s mission of providing high performance computing (HPC) and simulation capabilities for NNSA to carry out its stockpile stewardship responsibility.

The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

This study suggests and identifies computer image generation (CIG) algorithms for visual simulation that improve the training effectiveness of CIG simulators and identifies areas of basic research in visual perception that are significant for improving CIG technology. The first phase of the project entailed observing three existing CIG simulators.…

Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One is to use unstructured grids; the other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate so as not to contaminate the very small amplitude acoustic disturbances. In CFD, low-order schemes are invariably used in conjunction with unstructured grids. However, low-order schemes are known to be numerically dispersive and dissipative, and dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high-order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated, and stability can be improved by adopting an upwinding strategy.
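The wave-number-space construction can be illustrated with a hedged 1-D sketch (not the unstructured-grid scheme of the report): for a symmetric 7-point first-derivative stencil, choose the weights by least-squares fitting the effective wavenumber to the exact one over a band, and compare against the standard 6th-order Taylor weights. The optimization band is an assumption for illustration.

```python
import numpy as np

# Symmetric 7-point stencil for d/dx with antisymmetric weights a1, a2, a3.
# Its effective wavenumber is kbar*h = 2*(a1*sin(kh) + a2*sin(2kh) + a3*sin(3kh)).
# The DRP idea: pick the a_j so kbar*h tracks kh over a band of wavenumbers,
# rather than maximizing formal Taylor-series order.
kh = np.linspace(0.01, 1.8, 400)                 # optimization band (assumed)
A = 2 * np.column_stack([np.sin(kh), np.sin(2 * kh), np.sin(3 * kh)])
a_drp = np.linalg.lstsq(A, kh, rcond=None)[0]    # wavenumber-space LS fit

a_std = np.array([3 / 4, -3 / 20, 1 / 60])       # standard 6th-order weights

def band_error(a):
    """Mean squared mismatch between effective and exact wavenumber."""
    kbar = 2 * (a[0] * np.sin(kh) + a[1] * np.sin(2 * kh) + a[2] * np.sin(3 * kh))
    return np.mean((kbar - kh) ** 2)

# The optimized stencil resolves short waves better over the band
print(band_error(a_drp) < band_error(a_std))  # True
```

The trade-off is the one the abstract alludes to: the fitted stencil gives up some formal order of accuracy at long wavelengths in exchange for much better dispersion behavior near the grid-resolution limit.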

This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 3-hour introduction to computers. The purpose is to develop the following competencies: (1) orientation to data processing; (2) use of data entry devices; (3) use of computer menus; and (4) entry of data with accuracy and…

The goal of this laboratory/university collaboration of coupled computational and experimental studies is the improvement of predictive methods for supercritical-pressure reactors. The general objective is to develop the supporting knowledge of advanced computational techniques needed for technology development of the concepts and their safety systems.

In the teaching of computer networks, the main problem that arises is the high price and limited number of network devices the students can work with in the laboratories. Nowadays, with virtualization, we can overcome this limitation. In this paper, we present a methodology that allows students to learn advanced computer network concepts through…

QCLs are becoming the most important sources of laser radiation in the midwave infrared (MWIR) and longwave infrared (LWIR) regions because of their size, weight, power, and reliability advantages over other laser sources in the same spectral regions. The availability of multiwatt room-temperature (RT) operation QCLs from 3.5 μm to >16 μm with wall plug efficiency of 10% or higher is hastening the replacement of traditional sources such as OPOs and OPSELs in many applications. QCLs can replace CO2 lasers in many low power applications. Of the two leading groups driving improvements in QCL performance, Pranalytica is the commercial organization that has been supplying the highest-performance QCLs to various customers for over four years. Using a new QCL design concept, non-resonant extraction [1], we have achieved CW/RT power of >4.7 W and WPE of >17% in the 4.4 μm - 5.0 μm region. In the LWIR region, we have recently demonstrated QCLs with CW/RT power exceeding 1 W with WPE of nearly 10% in the 7.0 μm - 10.0 μm region. In general, high power CW/RT operation requires the use of TECs to maintain QCLs at appropriate operating temperatures. However, TECs consume additional electrical power, which is not desirable for handheld, battery-operated applications, where system power conversion efficiency is more important than QCL chip-level power conversion efficiency alone. In high duty cycle pulsed (quasi-CW) mode, the QCLs can be operated without TECs and have produced nearly the same average power as that available in CW mode with TECs. Multiwatt average powers are obtained even at ambient temperatures above 70°C, with true electrical-to-optical power conversion efficiency above 10%. Because of the availability of QCLs with multiwatt power outputs and a wavelength range covering a spectral region from ~3.5 μm to >16 μm, QCLs have found instantaneous acceptance for insertion into a multitude of defense and homeland security applications, including laser sources for infrared

Research conducted at the Institute for Computer Applications in Science and Engineering in applied mathematics, numerical analysis, and computer science during the period April 1, 1983 through September 30, 1983 is summarized.

Considerable research is currently going on into the application of distributed computing systems. They appear particularly suitable for the...structured distributed computing system might be adapted to function in this environment. Included in this consideration are the feasibility of

Although the development of computer-aided design (CAD) and computer-aided manufacture (CAM) technology and the benefits of increased productivity became obvious in the automobile and aerospace industries in the 1970s, investigations of this technology's application in the field of dentistry did not begin until the 1980s. Only now are we beginning to see the fruits of this work with the commercial availability of some systems; the potential for this technology seems boundless. This article reviews the recent literature with emphasis on the period from June 1992 to May 1993. This review should familiarize the reader with some of the latest developments in this technology, including a brief description of some systems currently available and the clinical and economical rationale for their acceptance into the dental mainstream. This article concentrates on a particular system, the Cerec (Siemens/Pelton and Crane, Charlotte, NC) system, for three reasons: first, this system has been available since 1985 and, as a result, has a track record of almost 7 years of data. Most of the data have just recently been released and consequently, much of this year's literature on CAD-CAM is monopolized by studies using this system. Second, this system was developed as a mobile, affordable, direct chairside CAD-CAM restorative method. As such, it is of special interest to the dentist who will offer this new technology directly to the patient, providing a one-visit restoration. Third, the author is currently engaged in research using this particular system and has a working knowledge of this system's capabilities.

In this paper, we present new findings on the construction and applications of artificial neural networks that use a biologically inspired spiking neuron model. The model is a point neuron, with the interaction between neurons described by postsynaptic potentials. Synaptic plasticity is achieved using a temporal correlation learning rule, specified as a function of the time difference between the firings of pre- and post-synaptic neurons. Using this rule, we show how certain associations between neurons in a network of spiking neurons can be implemented. As an example, we analyze the dynamic properties of networks of laterally connected spiking neurons and show their capability to self-organize into topological maps in response to external stimulation. In another application, we explore the capability of networks of spiking neurons to solve graph algorithms by using temporal coding of distances in a given spatial configuration. The paper underlines the importance of the temporal dimension in artificial neural network information processing.
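A minimal sketch of the two ingredients named above: a point neuron driven by decaying postsynaptic potentials, plus a temporal-correlation (STDP-style) weight update. All time constants and amplitudes here are invented for illustration and are not the paper's parameters.

```python
import numpy as np

def lif_with_psp(spike_times_pre, w, t_end=100.0, dt=0.1,
                 tau_m=10.0, tau_s=5.0, v_th=1.0):
    """Point neuron driven by exponentially decaying postsynaptic potentials."""
    n = int(t_end / dt)
    v, s = 0.0, 0.0
    post_spikes = []
    pre = set(np.round(np.array(spike_times_pre) / dt).astype(int))
    for i in range(n):
        if i in pre:
            s += w                   # synaptic input kick
        s *= np.exp(-dt / tau_s)     # PSP decay
        v += dt * (-v / tau_m + s)   # leaky membrane integration
        if v >= v_th:
            post_spikes.append(i * dt)
            v = 0.0                  # reset after firing
    return post_spikes

def stdp(dt_spike, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Temporal-correlation rule: potentiate when pre precedes post."""
    if dt_spike > 0:
        return a_plus * np.exp(-dt_spike / tau)
    return -a_minus * np.exp(dt_spike / tau)

# A burst of presynaptic spikes (ms) drives the neuron over threshold...
post = lif_with_psp([10, 12, 14, 16, 18], w=0.5)
# ...and the causal pre-before-post ordering potentiates the synapse.
dw = stdp(post[0] - 10.0) if post else 0.0
print(len(post) > 0, dw > 0)
```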

Specular gloss is an important measurand used in quality control of manufacturing processes for highly reflective parts. In this work we present an in-process quality control system to evaluate the gloss of free-form surfaces, to be used in an automated polishing process. Due to the geometry of our test objects, the presented sensor is mounted on a robot arm and, therefore, needs to be robust against sensor misalignment. This robustness is achieved using a 2D CCD-camera as detector, which allows us to properly handle sensor orientation deviations of up to 10. The required dynamic range of the sensor is obtained through the acquisition of high dynamic range images. We present first results of a sensor prototype and show its applicability to the target application.

Contactless measurement of temperature has gained enormous significance in many application fields, ranging from climate protection over quality control to object recognition in public places or on military objects. Measurement of linear or spatial temperature distributions is often necessary. For this purpose, thermographic cameras or motor-driven temperature scanners are mostly used today. Both are relatively expensive, and the motor-driven devices are additionally limited in scanning rate. An economical alternative is temperature scanners based on micro mirrors. The micro mirror, mounted in a simple optical setup, reflects the radiation emitted by the observed heat source onto an adapted detector. A line scan of the target object is obtained by periodic deflection of the micro scanner; a planar temperature distribution is achieved by perpendicularly moving the target object or the scanner device. Using Planck's radiation law, the temperature of the object is calculated. The device can be adapted to different temperature ranges and resolutions by using different detectors, cooled or uncooled, and parameterized scanner settings. With the basic configuration, 40 spatially distributed measuring points can be determined at temperatures in the range from 350°C to 1000°C. The achieved miniaturization of such scanners permits their employment in complex plants with high building density or in direct proximity to the measuring point. The price advantage enables many applications, especially new applications in the low-price market segment. This paper presents the principle, setup, and application of a temperature measurement system based on micro scanners working in the near infrared range. Packaging issues and measurement results are discussed as well.
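The temperature calculation via Planck's radiation law can be sketched as follows. The wavelength band and the assumption of unit emissivity are illustrative choices, not parameters from the paper.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wl, T):
    """Blackbody spectral radiance, W / (m^2 * sr * m), emissivity 1 assumed."""
    return (2 * H * C**2 / wl**5) / (math.exp(H * C / (wl * KB * T)) - 1)

def brightness_temperature(wl, L):
    """Invert Planck's law for T at a single wavelength."""
    return (H * C / (wl * KB)) / math.log(1 + 2 * H * C**2 / (wl**5 * L))

wl = 1.6e-6                      # near-infrared band, 1.6 um (assumed)
T_true = 900.0 + 273.15          # a 900 deg C target, in kelvin
L = planck_radiance(wl, T_true)
T_rec = brightness_temperature(wl, L)
print(round(T_rec - 273.15, 2))  # recovers 900.0 (deg C)
```

In practice the detector integrates radiance over a finite band and the surface emissivity is below 1, so real devices calibrate against reference sources rather than inverting the law exactly as shown.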

This paper presents the NASA electronic parts and packaging program for space applications. The topics include: 1) Forecasts; 2) Technology Challenges; 3) Research Directions; 4) Research Directions for Chip on Board (COB); 5) Research Directions for HDPs: Multichip Modules (MCMs); 6) Research Directions for Microelectromechanical systems (MEMS); 7) Research Directions for Photonics; and 8) Research Directions for Materials. This paper is presented in viewgraph form.

Summary: The past decade has seen an explosion in the development and application of models aimed at estimating species occurrence and occupancy dynamics while accounting for possible non-detection or species misidentification. We discuss some recent occupancy estimation methods and the biological systems that motivated their development. Collectively, these models offer tremendous flexibility, but simultaneously place added demands on the investigator. Unlike many mark–recapture scenarios, investigators utilizing occupancy models have the ability, and responsibility, to define their sample units (i.e. sites), replicate sampling occasions, time period over which species occurrence is assumed to be static and even the criteria that constitute ‘detection’ of a target species. Subsequent biological inference and interpretation of model parameters depend on these definitions and the ability to meet model assumptions. We demonstrate the relevance of these definitions by highlighting applications from a single biological system (an amphibian–pathogen system) and discuss situations where the use of occupancy models has been criticized. Finally, we use these applications to suggest future research and model development.
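The non-detection problem at the heart of these models can be made concrete with a minimal single-season occupancy likelihood (MacKenzie-style), fit here by brute-force grid search on simulated data. The simulation settings and grid are illustrative assumptions, not values from the review.

```python
import numpy as np

def occupancy_loglik(psi, p, det_counts, k):
    """Log-likelihood of a single-season occupancy model.
    psi = P(site occupied), p = P(detection | occupied) per survey;
    det_counts[d] = number of sites with exactly d detections in k surveys."""
    ll = 0.0
    for d, n in enumerate(det_counts):
        if n == 0:
            continue
        if d > 0:    # detected at least once: site is surely occupied
            ll += n * (np.log(psi) + d * np.log(p) + (k - d) * np.log(1 - p))
        else:        # never detected: occupied-but-missed, or truly empty
            ll += n * np.log(psi * (1 - p) ** k + (1 - psi))
    return ll

# Simulate 200 sites, 4 surveys, true psi = 0.6, true p = 0.4
rng = np.random.default_rng(1)
occupied = rng.random(200) < 0.6
d_site = np.where(occupied, rng.binomial(4, 0.4, 200), 0)
det_counts = np.bincount(d_site, minlength=5)

# Brute-force maximum likelihood over a parameter grid
grid = np.linspace(0.01, 0.99, 99)
psi_hat, p_hat = max(((a, b) for a in grid for b in grid),
                     key=lambda ab: occupancy_loglik(ab[0], ab[1], det_counts, 4))
print(round(psi_hat, 2), round(p_hat, 2))  # close to the true (0.6, 0.4)
```

The zero-detection term is what distinguishes occupancy models from naive presence/absence summaries: sites with no detections are an explicit mixture of "occupied but missed" and "truly unoccupied."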

Giant magnetoresistance (GMR) sensors are considered one of the first real applications of nanotechnology. They consist of nm-thick layered structures in which ferromagnetic metals are sandwiched between nonmagnetic metals. Such multilayered films produce a large change in resistance (typically 10 to 20%) when subjected to a magnetic field, compared with a maximum change of a few percent for other types of magnetic sensors. This technology has been used intensively in read heads for hard disk drives and now increasingly finds other applications owing to its high sensitivity and signal-to-noise ratio. Additionally, these sensors are compatible with miniaturization and thus offer high spatial resolution combined with a frequency range up to the 100 MHz regime and simple electronic conditioning. In this review, we first discuss the basics of the underlying magnetoresistance effects in layered structures and then present three prominent examples of future applications: in the field of current sensing, the new GMR sensors offer high bandwidth and good accuracy in a space-saving open-loop measurement configuration; in rotating systems they can be used for multiturn angle measurements; and in biotechnology the detection of magnetic particles enables the quantitative measurement of biomolecule concentrations.
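The magnetoresistance ratios quoted above can be illustrated with the textbook two-current (Mott) picture of spin-dependent conduction, in which spin-up and spin-down electrons conduct through parallel channels. The channel resistances below are illustrative values, not figures from the review.

```python
def two_current_gmr(r_majority, r_minority):
    """
    Two-current (Mott) model of a GMR multilayer. In parallel magnetic
    alignment the low-resistance spin channel shorts the stack; in
    antiparallel alignment both channels see a mixed resistance.
    Returns the GMR ratio (R_AP - R_P) / R_P in percent.
    """
    r_p = 2.0 * r_majority * r_minority / (r_majority + r_minority)  # parallel
    r_ap = (r_majority + r_minority) / 2.0                           # antiparallel
    return 100.0 * (r_ap - r_p) / r_p

# Illustrative 1:3 spin asymmetry gives a ~33% resistance change,
# in the 10-20%+ regime the review describes
print(two_current_gmr(1.0, 3.0))
```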

The regulatory application of leak before break (LBB) for operating and advanced reactors in the U.S. is described. The U.S. Nuclear Regulatory Commission (NRC) has approved the application of LBB for six piping systems in operating reactors: reactor coolant system primary loop piping, pressurizer surge, safety injection accumulator, residual heat removal, safety injection, and reactor coolant loop bypass. The LBB concept has also been applied in the design of advanced light water reactors. LBB applications, and regulatory considerations, for pressurized water reactors and advanced light water reactors are summarized in this paper. Technology development for LBB performed by the NRC and the International Piping Integrity Research Group is also briefly summarized.

Multi-site damage (MSD) in the form of cracking at rivet holes in lap splice joints has been identified as a serious threat to the integrity of commercial aircraft nearing their design life targets. Consequently, assuring the safety of aircraft that have accumulated large numbers of flights, flight hours, and years in service requires inspection procedures that are based on the possibility that MSD may be present. For inspections of aircraft components to be properly focused on the defect sizes that are critical for structural integrity, fracture analyses are needed. The current methods are essentially those of linear elastic fracture mechanics (LEFM), which are strictly valid only for cracks that extend in a quasi-static manner under small-scale crack tip plasticity conditions. While LEFM is very likely to be appropriate for subcritical crack growth, quantifying the conditions for fracture instability and subsequent propagation may require advanced fracture mechanics techniques. The specific focus of this paper was to identify the conditions in which inelastic-dynamic effects occur in (1) the linking up of local damage in a lap splice joint to form a major crack, and (2) large-scale fuselage failure by a rapidly occurring fluid-structure interaction process.
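To sketch how LEFM quantities feed an MSD link-up check, the following Python uses the standard Irwin plastic-zone estimate and a touching-plastic-zones (Swift-style) link-up criterion. The formulas are textbook LEFM, but the choice of criterion and all numbers are assumptions for illustration; crack-interaction factors and finite-width corrections, which a real analysis needs, are neglected.

```python
import math

def stress_intensity(sigma, half_crack_len):
    """Mode-I stress intensity K = sigma * sqrt(pi * a) for a through
    crack of half-length a in an infinite sheet (Pa * sqrt(m))."""
    return sigma * math.sqrt(math.pi * half_crack_len)

def plastic_zone(k, sigma_yield):
    """Irwin plane-stress plastic zone size ahead of the crack tip (m)."""
    return (1.0 / (2.0 * math.pi)) * (k / sigma_yield) ** 2

def msd_linkup(sigma, a1, a2, ligament, sigma_yield):
    """
    Link-up is predicted when the two crack-tip plastic zones of
    adjacent collinear MSD cracks span the remaining ligament.
    """
    rp1 = plastic_zone(stress_intensity(sigma, a1), sigma_yield)
    rp2 = plastic_zone(stress_intensity(sigma, a2), sigma_yield)
    return rp1 + rp2 >= ligament

# Illustrative numbers: 100 MPa hoop stress, two 5 mm cracks,
# 350 MPa yield; a 0.3 mm ligament links up, a 2 mm ligament does not
print(msd_linkup(100e6, 0.005, 0.005, 0.0003, 350e6))  # True
print(msd_linkup(100e6, 0.005, 0.005, 0.002, 350e6))   # False
```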

The goal of this effort was demonstration of the concepts for an advanced helium magnetometer which meets the demands of future NASA earth orbiting, interplanetary, solar, and interstellar missions. The technical effort focused on optical pumping of helium with tunable solid state lasers. We were able to demonstrate the concept of a laser pumped helium magnetometer with improved accuracy, low power, and sensitivity of the order of 1 pT. A number of technical approaches were investigated for building a solid state laser tunable to the helium absorption line at 1083 nm. The laser selected was an Nd-doped LNA crystal pumped by a diode laser. Two laboratory versions of the lanthanum neodymium hexa-aluminate (LNA) laser were fabricated and used to conduct optical pumping experiments in helium and demonstrate laser pumped magnetometer concepts for both the low field vector mode and the scalar mode of operation. A digital resonance spectrometer was designed and built in order to evaluate the helium resonance signals and observe scalar magnetometer operation. The results indicate that the laser pumped sensor in the VHM mode is 45 times more sensitive than a lamp pumped sensor for identical system noise levels. A study was made of typical laser pumped resonance signals in the conventional magnetic resonance mode. The laser pumped sensor was operated as a scalar magnetometer, and it is concluded that magnetometers with 1 pT sensitivity can be achieved with the use of laser pumping and stable laser pump sources.

Lawrence Berkeley National Laboratory (Berkeley Lab) proposes to create a National Facility for Advanced Computational Science (NFACS) and to establish a new partnership between the American computer industry and a national consortium of laboratories, universities, and computing facilities. NFACS will provide leadership-class scientific computing capability to scientists and engineers nationwide, independent of their institutional affiliation or source of funding. This partnership will bring into existence a new class of computational capability in the United States that is optimal for science and will create a sustainable path towards petaflops performance.

Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working... use in real time. Test results are shown for a variety of environments. KEYWORDS: robotics, computer vision, car/license plate detection, SIFT... When detecting the make and model of automobiles, SIFT can be used to achieve very high detection rates at the expense of a hefty performance cost when...

Cogeneration computer simulation models were assessed in order to recommend the most desirable models or their components for use by the Southern California Edison Company (SCE) in evaluating potential cogeneration projects. Existing cogeneration modeling capabilities are described, preferred models are identified, and an approach to the development of a code that will best satisfy SCE requirements is recommended. Five models (CELCAP, COGEN 2, CPA, DEUS, and OASIS) are recommended for further consideration.

Financial Transmission Rights (FTRs) are financial insurance tools that help power market participants reduce price risks associated with transmission congestion. FTRs are issued by solving a constrained optimization problem whose objective is to maximize FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal, or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, an innovative mathematical reformulation of the FTR problem is first presented which dramatically improves the computational efficiency of the optimization problem. A novel non-linear dynamic system (NDS) approach is then proposed to solve the reformulated problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers such as CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
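To make the FTR clearing objective concrete, here is a deliberately tiny sketch: one transmission constraint and two bids. With a single constraint (and positive PTDFs) the welfare-maximizing LP reduces to a fractional knapsack, solvable greedily by bid price per unit of induced flow. The real problem described above has coupled constraints across categories and needs a full LP or NDS solver; all names and numbers below are illustrative.

```python
def ftr_auction_single_line(bids, line_limit):
    """
    Toy FTR clearing with a single flow constraint:
      maximize  sum(price_i * award_i)
      s.t.      sum(ptdf_i * award_i) <= line_limit,  0 <= award_i <= mw_i
    Assumes all PTDFs are positive, so the LP is a fractional knapsack.
    bids: list of (name, requested_mw, bid_price, ptdf)
    """
    awards = {}
    remaining = line_limit
    # Award in order of decreasing value per MW of flow consumed
    for name, mw, price, ptdf in sorted(bids, key=lambda b: b[2] / b[3],
                                        reverse=True):
        take = min(mw, remaining / ptdf)
        awards[name] = take
        remaining -= take * ptdf
    return awards

bids = [("A", 100.0, 2.0, 0.5),   # 4.0 $/MW-flow value density
        ("B", 100.0, 1.0, 0.5)]   # 2.0 $/MW-flow value density
print(ftr_auction_single_line(bids, 75.0))  # A fully awarded, B curtailed to 50 MW
```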

Molecular dynamics simulations have evolved into a mature technique that can be used effectively to understand macromolecular structure-to-function relationships. Present simulation times are close to biologically relevant ones. The information gathered about the dynamic properties of macromolecules is rich enough to shift the usual paradigm of structural bioinformatics from studying single structures to analyzing conformational ensembles. Here, we describe the foundations of molecular dynamics and the improvements made toward obtaining such ensembles. Specific applications of the technique to three main issues (allosteric regulation, docking, and structure refinement) are discussed.

This Vision 20/20 paper considers what computational advances are likely to be implemented in clinical radiation oncology in the coming years and how the adoption of these changes might alter the practice of radiotherapy. Four main areas of likely advancement are explored: cloud computing, aggregate data analyses, parallel computation, and automation. As these developments promise both new opportunities and new risks to clinicians and patients alike, the potential benefits are weighed against the hazards associated with each advance, with special considerations regarding patient safety under new computational platforms and methodologies. While the concerns of patient safety are legitimate, the authors contend that progress toward next-generation clinical informatics systems will bring about extremely valuable developments in quality improvement initiatives, clinical efficiency, outcomes analyses, data sharing, and adaptive radiotherapy.

Research progress in the following areas is reviewed: (1) a new version of the computer program LCF (logic for computable functions), including a facility to search for proofs automatically; (2) the description of the language PASCAL in terms of both LCF and first-order logic; (3) discussion of LISP semantics in LCF and an attempt to prove the correctness of the London compilers in a formal way; (4) design of both special-purpose and domain-independent proof procedures, with program correctness specifically in mind; (5) design of languages for describing such proof procedures; and (6) the embedding of these ideas in the first-order checker.

Cleveland Tool and Machine (CTM) of Cleveland, Ohio in conjunction with Harrington Product Development Center (HPDC) of Cincinnati, Ohio have developed an advanced, dimensionally accurate, temperature-stable, energy-efficient and cost-effective material and process to manufacture patterns for the investment casting industry. In the proposed technology, FOPAT (aFOam PATtern material) has been developed which is especially compatible with the investment casting process and offers the following advantages: increased dimensional accuracy; increased temperature stability; lower cost per pattern; less energy consumption per pattern; decreased cost of pattern making equipment; decreased tooling cost; increased casting yield. The present method for investment casting is "the lost wax" process, which is exactly that, the use of wax as a pattern material, which is then melted out or "lost" from the ceramic shell. The molten metal is then poured into the ceramic shell to produce a metal casting. This process goes back thousands of years and while there have been improvements in the wax and processing technology, the material is basically the same, wax. The proposed technology is based upon an established industrial process of "Reaction Injection Molding" (RIM) where two components react when mixed and then "molded" to form a part. The proposed technology has been modified and improved with the needs of investment casting in mind. A proprietary mix of components has been formulated which react and expand to form a foam-like product. The result is an investment casting pattern with smooth surface finish and excellent dimensional predictability along with the other key benefits listed above.

The aerodynamic design of advanced fuel and oxidizer pump drive turbine systems being developed for application in the main propulsion system of the National Launch System are discussed. The detail design process is presented along with the final baseline fuel and oxidizer turbine configurations. Computed airfoil surface static pressure distributions and flow characteristics are shown. Both turbine configurations employ unconventional high turning blading (approximately 160 deg) and are expected to provide significant cost and performance benefits in comparison with traditional configurations.

The results of a survey on advanced secondary battery systems for space applications are presented. Fifty-five battery experts from government, industry and universities participated in the survey by providing their opinions on the use of several battery types for six space missions, and their predictions of likely technological advances that would impact the development of these batteries. The results of the survey predict that only four battery types are likely to exceed a specific energy of 150 Wh/kg and meet the safety and reliability requirements for space applications within the next 15 years.

Atom interferometer (AI) based sensors exhibit precision and accuracy unattainable with classical sensors, thanks to the inherent stability of atomic properties. Dual atomic sensors operating in a differential mode further extend AI applicability in the presence of environmental disturbances. Extraction of the phase difference between dual AIs, however, typically introduces uncertainty and systematic errors in excess of those warranted by each AI's intrinsic noise characteristics, especially in practical applications and real-time measurements. In this presentation, we report our efforts in developing practical schemes for reducing noise and enhancing sensitivity in differential AI measurement implementations. We describe an active phase extraction method that eliminates the noise overhead and demonstrates a performance boost of a gravity gradiometer by a factor of 3. We also describe a new long-baseline approach for differential AI measurements in a laser-ranging-assisted AI configuration. The approach uses well-developed AIs for local measurements but leverages the mature schemes of space laser interferometry developed for LISA and GRACE. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.

Protein microarrays represent a powerful technology with the potential to serve as tools for the detection of a broad range of analytes in numerous applications such as diagnostics, drug development, food safety, and environmental monitoring. Key features of analytical protein microarrays include high throughput and relatively low costs due to minimal reagent consumption, multiplexing, fast kinetics and hence measurements, and the possibility of functional integration. So far, mainly fundamental studies in molecular and cell biology have been conducted using protein microarrays, while the potential for clinical, notably point-of-care, applications is not yet fully utilized. The question arises what features have to be implemented and what improvements have to be made in order to fully exploit the technology. In the past we have identified various obstacles that have to be overcome in order to promote protein microarray technology in the diagnostic field. Issues that need significant improvement to make the technology more attractive for the diagnostic market include low sensitivity and poor reproducibility, inadequate analysis time, lack of high-quality antibodies and validated reagents, lack of automation and portable instruments, and the cost of instruments necessary for chip production and read-out. The scope of the paper at hand is to review approaches to solving these problems.


In the application area of aerospace tribology, researchers and developers must guarantee the highest degree of reliability for materials, components, and systems. Even a small tribological failure can lead to catastrophic results. The absence of the required knowledge of tribology, as Professor H.P. Jost has said, can act as a severe brake on aerospace vehicle systems - and indeed has already done so. Materials and coatings must be able to withstand the aerospace environments that they encounter, such as vacuum, terrestrial, ascent, and descent environments; be resistant to the degrading effects of air, water vapor, sand, foreign substances, and radiation during lengthy service; be able to withstand the loads, stresses, and temperatures encountered from acceleration and vibration during operation; and be able to support reliable tribological operation in harsh environments throughout the mission of the vehicle. This presentation is divided into two sections: surface properties and technology practice related to aerospace tribology. The first section is concerned with the fundamental properties of the surfaces of solid-film lubricants and related materials and coatings, including carbon nanotubes. The second is devoted to applications. Case studies are used to review some aspects of real problems related to aerospace systems to help engineers and scientists understand the tribological issues and failures. The nature of each problem is analyzed, and the tribological properties are examined. All the fundamental studies and case studies were conducted at the NASA Glenn Research Center.

While applications such as drilling μ-vias and laser direct imaging have been well established in the electronics industry, the mobile device industry's push for miniaturization is generating new demands for packaging technologies that allow for further reduction in feature size while reducing manufacturing cost. CO lasers have recently become available and their shorter wavelength allows for a smaller focus and drilling hole diameters down to 25μm whilst keeping the cost similar to CO2 lasers. Similarly, nanosecond UV lasers have gained significantly in power, become more reliable and lower in cost. On a separate front, the cost of ownership reduction for Excimer lasers has made this class of lasers attractive for structuring redistribution layers of IC substrates with feature sizes down to 2μm. Improvements in reliability and lower up-front cost for picosecond lasers is enabling applications that previously were only cost effective with mechanical means or long-pulsed lasers. We can now span the gamut from 100μm to 2μm for via drilling and can cost effectively structure redistribution layers with lasers instead of UV lamps or singulate packages with picosecond lasers.

Replacing the widespread use of petroleum-derived non-biodegradable materials with green and sustainable materials is a pressing challenge that is gaining increasing attention from the scientific community. One such system is the cellulose nanocrystal (CNC), derived from acid hydrolysis of cellulosic materials such as plants, tunicates, and agricultural biomass. The utilization of colloidal CNCs can aid in the reduction of carbon dioxide, which is responsible for global warming and climate change. CNCs are excellent candidates for the design and development of functional nanomaterials in many applications due to several attractive features, such as high surface area, hydroxyl groups for functionalization, colloidal stability, low toxicity, chirality, and mechanical strength. Several large-scale manufacturing facilities have been commissioned to produce CNCs at up to 1000 kg/day, which has generated increasing interest in both academic and industrial laboratories. In this feature article, we describe the recent development of functionalized cellulose nanocrystals for several important applications in our laboratory and others. We highlight some challenges and offer perspectives on the potential of these sustainable nanomaterials.

This bibliography contains annotations of 95 items of educational and business software with applications in seven marketing and business functions. The annotations, which appear in alphabetical order by title, provide this information: category (related application), title, date, source and price, equipment, supplementary materials, description…

Babson College (a school of business and management in Wellesley, Massachusetts) attempted to make a group of first-year students computer literate through "clustering." The same group of students were enrolled in two courses: a special section of "Composition" which stressed word processing as a composition aid and a regular…

A Jet Propulsion Laboratory (JPL) survey of 23 electrochemical systems for space applications, in which experts from universities, industry, and government participated, is discussed. The experts recommended achievable specific energies for these systems and forecast the likelihood of their development by the years 1995, 2000, and 2005. The highest ranked systems for operation in planetary inner-orbit spacecraft included Na/beta-double prime-alumina/Z, where Z = S, FeCl2 or NiCl2; the upper-plateau Li(Al)/FeS2 system; and the H2/O2 alkaline regenerative fuel cell. The achievable specific energy for these as operational batteries was estimated to be 130, 180, and 100 Wh/kg, respectively. For planetary outer-orbit and small geosynchronous (GEO) spacecraft, Li/TiS2 (estimated 90 Wh/kg) was the choice.

Heat pipes offer the potential of vibrationless cooling of optical surfaces while maintaining a high degree of temperature uniformity on the cooled surface. The objective of the present program is to develop and demonstrate prototype heat pipes for this application. The material of construction is silicon; the power density range is 5 to 50 W per square centimeter, with a nominal objective of 30 W/cm2. This paper describes the first eighteen months of work, during which the contract goals were met. The program was carried out by Thermacore on Contract F33615-82-C-5127 for the Department of the Air Force, Aeronautical Systems Division, Wright-Patterson Air Force Base, Ohio. Dr. Alan K. Hopkins of the Materials Laboratory supplied technical supervision of the program for the Air Force.

An autonomous mobile robotic capability is critical to developing remote work applications for hazardous environments. A few potential applications include humanitarian demining and ordnance neutralization, extraterrestrial science exploration, and hazardous waste cleanup. The ability of the remote platform to sense and maneuver within its environment is a basic technology requirement which is currently lacking. This enabling technology will open the door for force multiplication and cost effective solutions to remote operations. The ultimate goal of this work is to develop a mobile robotic platform that can identify and avoid local obstacles as it traverses from its current location to a specified destination. This goal-directed autonomous navigation scheme uses the Global Positioning System (GPS) to identify the robot's current coordinates in space and neural network processing of LADAR range images for local obstacle detection and avoidance. The initial year funding provided by this LDRD project has developed a small exterior mobile robotic development platform and a fieldable version of Sandia's Scannerless Range Imager (SRI) system. The robotic testbed platform is based on the Surveillance And Reconnaissance ground Equipment (SARGE) robotic vehicle design recently developed for the US DoD. Contingent upon follow-on funding, future enhancements will develop neural network processing of the range map data to traverse unstructured exterior terrain while avoiding obstacles. The SRI will provide real-time range images to a neural network for autonomous guidance. Neural network processing of the range map data will allow real-time operation on a Pentium based embedded processor board.
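The goal-directed navigation scheme described above (a GPS goal plus local obstacle avoidance) can be caricatured with a classical potential-field steering rule. This is not Sandia's neural-network pipeline, just a hedged stand-in: the gains and influence radius are hypothetical, and a real LADAR pipeline would feed many range returns per scan rather than a short obstacle list.

```python
import math

def steer(robot_xy, goal_xy, obstacles, repulse_gain=1.0, influence=2.0):
    """
    Potential-field steering: attract toward a GPS goal, repel from
    range-sensed obstacles within an influence radius.
    Returns a heading angle in radians.
    """
    gx = goal_xy[0] - robot_xy[0]
    gy = goal_xy[1] - robot_xy[1]
    norm = math.hypot(gx, gy)
    fx, fy = gx / norm, gy / norm            # unit attraction toward the goal
    for ox, oy in obstacles:
        dx, dy = robot_xy[0] - ox, robot_xy[1] - oy
        d = math.hypot(dx, dy)
        if d < influence:                    # push away, stronger when closer
            w = repulse_gain * (1.0 / d - 1.0 / influence) / d**2
            fx += w * dx
            fy += w * dy
    return math.atan2(fy, fx)

# Open field: the heading points straight at the goal (0 rad)
print(steer((0.0, 0.0), (10.0, 0.0), []))
# An obstacle just right of the path deflects the heading left (positive angle)
print(steer((0.0, 0.0), (10.0, 0.0), [(1.0, -0.5)]))
```

Potential fields are prone to local minima, which is one reason learned (e.g. neural-network) policies over range images are attractive for unstructured terrain.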

With the increasing use of nanoparticles in food processing, filtration/purification, and consumer products, as well as the huge potential of their use in nanomedicine, a quantitative understanding of the effects of nanoparticle uptake and transport is needed. We provide examples of novel methods for modeling complex bio-nano interactions which are based on stochastic process algebras. Since model construction presumes sufficient availability of experimental data, recent developments in "nanoinformatics", an emerging discipline analogous to bioinformatics, in building an accessible information infrastructure are subsequently discussed. Both computational areas offer opportunities for Filipinos to engage in collaborative, cutting-edge research in this impactful field.

Typically, environmental management problems require analysis of large and complex data sets originating from concurrent data streams with different data collection frequencies and pedigree. These big data sets require on-the-fly integration into a series of models of differing complexity for various types of model analyses, in which the data are applied as soft and hard model constraints. This is needed to provide fast iterative model analyses based on the latest available data to guide decision-making. Furthermore, the data and models are associated with uncertainties. The uncertainties are probabilistic (e.g. measurement errors) and non-probabilistic (unknowns, e.g. alternative conceptual models characterizing site conditions). To address all of these issues, we have developed ZEM, an integrated framework for real-time data and model analyses for environmental decision-making. The framework allows for seamless, on-the-fly integration of data and modeling results for robust and scientifically defensible decision-making, applying advanced decision analysis tools such as Bayesian Information-Gap Decision Theory (BIG-DT). The framework also includes advanced optimization methods capable of dealing with a large number of unknown model parameters, and surrogate (reduced-order) modeling capabilities based on support vector regression techniques. The framework is coded in Julia, a state-of-the-art high-performance programming language (http://julialang.org). ZEM is open-source, released under the GPL V3 license, and can be applied to any environmental management site.

The history of the development of an aircraft configuration synthesis program using interactive computer graphics was described. A system based on time-sharing was compared to two different concepts based on distributed computing.

General adoption of sustainable energy technologies depends on the discovery and development of new high-performance materials. For instance, waste heat recovery and electricity generation via the solar thermal route require bulk thermoelectrics with a high figure of merit (ZT) and thermal stability at high-temperatures. Energy recovery applications (e.g., regenerative braking) call for the development of rapidly chargeable systems for electrical energy storage, such as electrochemical supercapacitors. Similarly, use of hydrogen as vehicular fuel depends on the ability to store hydrogen at high volumetric and gravimetric densities, as well as on the ability to extract it at ambient temperatures at sufficiently rapid rates. We will discuss how first-principles computational methods based on quantum mechanics and statistical physics can drive the understanding, improvement and prediction of new energy materials. We will cover prediction and experimental verification of new earth-abundant thermoelectrics, transition metal oxides for electrochemical supercapacitors, and kinetics of mass transport in complex metal hydrides. Research has been supported by the US Department of Energy under grant Nos. DE-SC0001342, DE-SC0001054, DE-FG02-07ER46433, and DE-FC36-08GO18136.

The focus of this project was to employ first-principles computational methods to study the underlying molecular elementary processes that govern hydrogen diffusion through Pd membranes, as well as the elementary processes that govern the CO- and S-poisoning of these membranes. Our computational methodology integrated a multiscale hierarchical modeling approach, wherein a molecular understanding of the interactions between various species is gained from ab initio quantum chemical Density Functional Theory (DFT) calculations, while a mesoscopic statistical mechanical model such as kinetic Monte Carlo is employed to predict key macroscopic membrane properties such as permeability. The key developments are: (1) We have systematically coupled the ab initio calculations with kinetic Monte Carlo (KMC) simulations to model hydrogen diffusion through Pd-based membranes. The tracer diffusivity of hydrogen atoms through the bulk of the Pd lattice predicted by the KMC simulations is in excellent agreement with experiments. (2) The KMC simulations of dissociative adsorption of H2 over the Pd(111) surface indicate that for thin membranes (less than 10 μm thick), the diffusion of hydrogen from the surface to the first subsurface layer is rate limiting. (3) Sulfur poisons the Pd surface by altering the electronic structure of the Pd atoms in the vicinity of the S atom. The KMC simulations indicate that increasing sulfur coverage drastically reduces the hydrogen coverage on the Pd surface and hence the driving force for diffusion through the membrane.
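The route from hop rates to a tracer diffusivity via KMC can be sketched in one dimension. The actual work used DFT-derived rates on a three-dimensional Pd lattice; the unit hop rate and lattice constant below are placeholders, and the toy model's analytic answer (D = rate * a^2 in 1-D) is only there to check the estimator.

```python
import random

def kmc_tracer_diffusivity(hop_rate, lattice_const, t_max,
                           n_walkers=2000, seed=1):
    """
    Minimal kinetic Monte Carlo estimate of the 1-D tracer diffusivity
    of a dilute interstitial (e.g. H in a Pd lattice). Each walker hops
    left or right with rate `hop_rate`; residence times are exponential
    with the total escape rate, as in standard rejection-free KMC.
    Analytic value for this toy model: D = hop_rate * lattice_const**2.
    """
    rng = random.Random(seed)
    total_rate = 2.0 * hop_rate               # two possible hops per site
    msd = 0.0
    for _ in range(n_walkers):
        x, t = 0, 0.0
        while True:
            t += rng.expovariate(total_rate)  # KMC residence time
            if t > t_max:
                break
            x += 1 if rng.random() < 0.5 else -1
        msd += (x * lattice_const) ** 2
    msd /= n_walkers
    return msd / (2.0 * t_max)                # Einstein relation in 1-D

D = kmc_tracer_diffusivity(hop_rate=1.0, lattice_const=1.0, t_max=50.0)
print(D)   # statistically close to the analytic value of 1.0
```

In the multiscale workflow the abstract describes, the single `hop_rate` would be replaced by DFT-computed, environment-dependent barriers, which is what lets surface poisoning (e.g. by S) feed through to the predicted permeability.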

Since smart phones were developed, significant advances in mobile phone functionality have been achieved through developments in software, hardware, and accessories. Smart phones are now engaged in daily life with increasing impact. As a new medical model, mobile phone medicine is emerging and has found widespread applications in medicine, especially in diagnosing, monitoring, and screening various diseases. In addition, mobile phone medical applications show great potential to improve healthcare in resource-limited regions due to their advantageous features of portability and information communication capability. Nowadays, the scientific and technological issues related to mobile phone medicine have attracted worldwide attention. In this review, we summarize state-of-the-art advances in mobile phone medicine with a focus on its diagnostic applications, in order to expand the fields of their application and promote healthcare informatization.

Research activities and operations of the Advanced Computing Research Facility (ACRF) at Argonne National Laboratory are discussed for the period from July 1986 through October 1986. The facility is currently supported by the Department of Energy and is operated by the Mathematics and Computer Science Division at Argonne. Over the past four-month period, a new commercial multiprocessor, the Intel iPSC-VX/d4 hypercube, was installed. In addition, four other commercial multiprocessors continue to be available for research - an Encore Multimax, a Sequent Balance 21000, an Alliant FX/8, and an Intel iPSC/d5 - as well as a locally designed multiprocessor, the Lemur. These machines are being actively used by scientists at Argonne and throughout the nation in a wide variety of projects concerning computer systems with parallel and vector architectures. A variety of classes, workshops, and seminars have been sponsored to train researchers in computing techniques for the advanced computer systems at the Advanced Computing Research Facility. For example, courses were offered on writing programs for parallel computer systems, and the facility hosted the first annual Alliant users group meeting. A Sequent users group meeting and a two-day workshop on performance evaluation of parallel computers and programs are being organized.

The ACTS Collection brings together a number of general-purpose computational tools that were developed by independent research projects mostly funded and supported by the U.S. Department of Energy. These tools tackle a number of common computational issues found in many applications, mainly implementation of numerical algorithms, and support for code development, execution and optimization. In this article, we introduce the numerical tools in the collection and their functionalities, present a model for developing more complex computational applications on top of ACTS tools, and summarize applications that use these tools. Lastly, we present a vision of the ACTS project for deployment of the ACTS Collection by the computational sciences community.

The development of an advanced photovoltaic power system that would have application for a manned lunar base is currently planned under the Surface Power element of Pathfinder. Significant mass savings over state-of-the-art photovoltaic/battery systems are possible with the use of advanced lightweight solar arrays coupled with regenerative fuel cell storage. The solar blanket, using either ultrathin GaAs or amorphous silicon solar cells, would be integrated with a reduced-g structure. Regenerative fuel cells with high-pressure gas storage in filament-wound tanks are planned for energy storage. An advanced PV/RFC power system is a leading candidate for a manned lunar base as it offers a tremendous weight advantage over state-of-the-art photovoltaic/battery systems and is comparable in mass to other advanced power generation technologies.

To support industry efforts to develop clean and efficient internal combustion engines for passenger and commercial applications, this program focuses on turbocharger improvement for medium- and light-duty diesel applications, from a complete-system optimization perspective, to enable commercialization of advanced diesel combustion technologies such as HCCI/LTC. The goals are to improve combined turbocharger efficiency by up to 10% and fuel economy by 3% on the FTP cycle at the Tier II Bin 5 emission level.

The purpose of this study was to evaluate potential uses of the Global Positioning System (GPS) in spacecraft applications in the following areas: attitude control and tracking; structural control; traffic control; and time base definition (synchronization). Each of these functions is addressed. Also addressed are the hardware-related issues concerning the application of GPS technology, and comparisons are provided with alternative instrumentation methods for specific functions required for an advanced low Earth orbit spacecraft.

This was a special conference. It was small enough (60+ delegates) but covered a wide range of topics under a broad end-use focussed heading. Most conferences today either have hundreds or thousands of delegates or are small and very focussed. The topics ranged over composite materials, the testing of durability aspects of materials, and an eclectic set of papers on radar screening using weak ionized plasmas, composites for microvascular applications, composites in space rockets, materials for spallation neutron sources, etc. There were several papers on new characterisation techniques and, very importantly, several papers that started with the end-user requirements and led back into materials selection. In my own area, there were three talks about the technology for the ultra-precise positioning of individual atoms, donors, and complete monolayers to take modern electronics and optoelectronics ideas closer to the market place. The President of the Institute opened with an experience-based talk on translating innovative technology into business. Everyone gave a generous introduction to bring all-comers up to speed with the burning contemporary issues. Indeed, I wish that a larger cohort of first-year engineering PhD students had been present to see the full gamut of what takes a physics idea to a success in the market place. I would urge groups to learn from Prof Alison McMillan (a Vice President of the Institute of Physics) and Steven Schofield, and to set up conferences of similar scale and breadth. I took in more than I do from mega-meetings, and in greater depth. Professor Michael Kelly, Department of Engineering, University of Cambridge

Over the decade since atomic force microscopy (AFM) was invented, development of new microscopes has been closely intertwined with application of AFM to problems of interest in physics, chemistry, biology, and engineering. New techniques such as tapping mode AFM move quickly in our lab from the designer's bench to the user's table, since this is often the same piece of furniture. In return, designers get ample feedback as to what problems are limiting current instruments, and thus need most urgent attention. Tip sharpness and characterization are such a problem. Chapter 1 describes an AFM designed to operate in a scanning electron microscope, whose electron beam is used to deposit sharp carbonaceous tips. These tips can be tested and used in situ. Another limitation is addressed in Chapter 2: the difficulty of extracting more than just topographic information from a sample. A combined AFM/confocal optical microscope was built to provide simultaneous, independent images of the topography and fluorescence of a sample. In combination with staining or antibody labelling, this could provide submicron information about the composition of a sample. Chapters 3 and 4 discuss two generations of small cantilevers developed for lower-noise, higher-speed AFM of biological samples. In Chapter 4, a 26 μm cantilever is used to image the process of calcite growth from solution at a rate of 1.6 s/frame. Finally, Chapter 5 explores in detail a biophysics problem that motivates us to develop fast, quiet, and gentle microscopes; namely, the control of crystal growth in seashells by the action of soluble proteins on a growing calcite surface.

The Advanced On-board Processor (AOP) uses large scale integration throughout and is the most advanced space-qualified computer of its class in existence today. It was designed to satisfy most spacecraft requirements anticipated over the next several years. The AOP design utilizes custom metallized multigate arrays (CMMA) which have been designed specifically for this computer. This approach provides the most efficient use of circuits; reduces volume, weight, and assembly costs; and provides a significant increase in reliability through the significant reduction in conventional circuit interconnections. The required 69 CMMA packages are assembled on a single multilayer printed circuit board which, together with associated connectors, constitutes the complete AOP. This approach also reduces conventional interconnections, further reducing weight, volume and assembly costs.

The mechanical characterization of materials relies increasingly on sophisticated experimental methods that permit a large amount of data to be acquired while, at the same time, reducing the invasiveness of the tests. This evolution accompanies the growing demand for non-destructive diagnostic tools that assess the safety level of components in use in structures and infrastructures, for instance in the strategic energy sector. Advanced material systems and properties that are not amenable to traditional techniques, for instance thin layered structures and their adhesion to the relevant substrates, can also be characterized by means of combined experimental-numerical tools elaborating data acquired by full-field measurement techniques. In this context, parameter identification procedures involve the repeated simulation of the laboratory or in situ tests by sophisticated and usually expensive non-linear analyses while, in some situations, reliable and accurate results are required in real time. The effectiveness and the filtering capabilities of reduced models based on decomposition and interpolation techniques can be profitably used to meet these conflicting requirements. This communication summarizes some results recently achieved in this field by the author and her co-workers. The aim is to foster further interaction between the engineering and mathematical communities.
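
One common family of reduced models based on decomposition is proper orthogonal decomposition (POD), where full-field data are projected onto a few dominant modes extracted by SVD. The sketch below is illustrative only (toy data, not the author's actual procedure):

```python
import numpy as np

def pod_basis(snapshots, rank):
    """Return the leading `rank` POD modes of a snapshot matrix
    whose columns are full-field measurements."""
    u, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :rank]

def project(field, basis):
    """Reduced coordinates of a full field in the POD basis."""
    return basis.T @ field

def reconstruct(coords, basis):
    """Approximate the full field from its reduced coordinates."""
    return basis @ coords

# Toy data: 200-point fields spanned exactly by two smooth modes.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack(
    [np.sin(np.pi * x) * a + np.cos(np.pi * x) * b
     for a, b in rng.normal(size=(30, 2))]
)
basis = pod_basis(snapshots, rank=2)
field = snapshots[:, 0]
approx = reconstruct(project(field, basis), basis)
print(np.allclose(field, approx, atol=1e-8))  # prints True
```

In a real identification loop, the expensive non-linear simulation is replaced by interpolation of the reduced coordinates over the parameter space.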

Today, meteorological models can predict storm events and climate patterns, but there are few models that connect atmospheric models to the hydraulics of rivers. Such a model is necessary to advance the prediction of events such as floods or droughts. Furthermore, the increasingly available Geographic Information System-based hydrographic datasets offer ways to use actual mapped rivers for routing. Such GIS-based datasets include NHDPlus at the continental scale [USEPA and USGS, 2007] and HydroSHEDS at the global scale [Lehner, et al., 2006]. The objective of ongoing work is to develop RAPID (Routing Application for Parallel computatIon of Discharge), a large-scale river routing model that: - has a physical representation of river flow - allows for coupling with both land surface models and groundwater models, in particular enabling bi-directional exchanges between rivers and aquifers through a computation of water volume and flow on a reach-to-reach basis - has specific treatment of man-made infrastructure and anthropogenic actions (dams, pumping) - benefits from the latest scientific computing developments (supercomputing facilities and high-performance parallel computing libraries) - will benefit from the increasingly available Geographic Information System hydrographic databases. RAPID has already been tested over France through a ten-year coupling with SIM-France [Habets, et al., 2008], using a river network made of 24,264 river reaches. RAPID has been adapted to run on the Lonestar supercomputer (http://www.tacc.utexas.edu/resources/hpcsystems/) and to allow the use of the NHDPlus dataset. RAPID is now being tested with the 74,615 river reaches of the entire Texas Gulf. This type of innovative model has strong implications for the future of hydrology. Such a tool can help improve the understanding of the effect of climate change on water resources, as well as provide information on how many gages are needed and where they are needed most. More broadly
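
Reach-to-reach routing of this kind is commonly built on the Muskingum storage scheme, in which each reach's outflow is a weighted combination of current inflow, previous inflow, and previous outflow. The single-reach sketch below uses hypothetical parameters and an invented hydrograph; RAPID itself solves the coupled network form in parallel:

```python
def muskingum_step(inflow_prev, inflow_now, outflow_prev, k, x, dt):
    """One Muskingum routing step for a single river reach.

    k  : storage time constant of the reach (s)
    x  : inflow/outflow weighting factor (0..0.5)
    dt : time step (s)
    """
    denom = k * (1.0 - x) + dt / 2.0
    c1 = (dt / 2.0 - k * x) / denom
    c2 = (dt / 2.0 + k * x) / denom
    c3 = (k * (1.0 - x) - dt / 2.0) / denom
    # c1 + c2 + c3 == 1, so volume is conserved over the step
    return c1 * inflow_now + c2 * inflow_prev + c3 * outflow_prev

# Route a triangular inflow hydrograph through one reach (values invented).
inflow = [10, 20, 50, 80, 60, 40, 25, 15, 10, 10]  # m^3/s, hourly
outflow = [inflow[0]]
for i in range(1, len(inflow)):
    outflow.append(muskingum_step(inflow[i - 1], inflow[i], outflow[-1],
                                  k=7200.0, x=0.2, dt=3600.0))
print(max(outflow) < max(inflow))  # peak is attenuated: prints True
```

In a network, each reach's routed outflow becomes the lateral inflow of the next downstream reach, which is what makes the reach-to-reach formulation amenable to parallel solution.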

Advancements in man-machine interfaces and control technologies used in space telerobotics and teleoperators have potential application wherever human operators need to manipulate multi-dimensional spatial relationships. Bilateral six degree-of-freedom position and force cues exchanged between the user and a complex system can broaden and improve the effectiveness of several diverse man-machine interfaces.

Long range military transport technologies are addressed, with emphasis on defining the potential benefits of the hybrid laminar flow control (HLFC) concept currently being flight tested. Results of a 1990s global range transport study are presented, showing the expected payoff from application of advanced technologies. A technology forecast for military transports is also presented.

Describes Tele-Immersion, an Advanced Applications initiative of the Internet2 to develop group collaboration and interactivity beyond the current practices of the Internet. Discusses research areas that relate to this realm of virtual reality, including depth perception and rendering, which maps digital representations to a human compatible…

National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

In part 1 of this two-paper series, a brief summary of the basic concepts and theories used in developing the Generalized Stream Tube model for Alluvial River Simulation (GSTARS) computer models was presented. Part 2 provides examples that illustrate some of the capabilities of the GSTARS models and how they can be applied to solve a wide range of river and reservoir sedimentation problems. Laboratory and field case studies are used and the examples show representative applications of the earlier and of the more recent versions of GSTARS. Some of the more recent capabilities implemented in GSTARS3, one of the latest versions of the series, are also discussed here in more detail. © 2008 International Research and Training Centre on Erosion and Sedimentation and the World Association for Sedimentation and Erosion Research.

1. Purpose of the study: Based on scenarios of great earthquakes along the Nankai trough, this study aims to estimate the run-up and high-accuracy inundation process of tsunami in coastal areas, including rivers. Using a practical analytical tsunami model, and taking into account detailed topography, land use, and climate change in realistic present and expected future environments, we examined the run-up and tsunami inundation process. Using these results, we estimated the damage due to tsunami and obtained information for the mitigation of human casualties. Considering the time series from the occurrence of the earthquake and the risk of tsunami damage, we provide disaster risk information displayed in a tsunami hazard and risk map in order to mitigate casualties. 2. Creating a tsunami hazard and risk map: From the analytical and practical tsunami model (a long-wave approximation) and high-resolution (5 m) topography including detailed data on shorelines, rivers, buildings and houses, we present an advanced analysis of tsunami inundation that considers land use. Based on the results of the tsunami inundation analysis, it is possible to draw a tsunami hazard and risk map with information on human casualties, building damage estimates, drifting vehicles, etc. 3. Contents of disaster prevention information: To improve the distribution of hazard, risk and evacuation information, three steps are necessary. (1) Provide basic information such as tsunami arrival information, areas and routes for evacuation, and the location of tsunami evacuation facilities. (2) Provide, as additional information, the time when inundation starts, past inundation records, the location of facilities with hazardous materials, and the presence or absence of public facilities and underground areas that require evacuation. (3) Provide information to support disaster response, such as infrastructure and traffic network damage prediction

A computer program which can be used to compute water-surface profiles for gradually varied, subcritical flow is described. Profiles are computed by the standard step-backwater method. Water-surface profiles may be obtained both for existing stream conditions and for stream conditions as modified by encroachment. The user may specify encroachment patterns in order to establish floodway limits for land-use management alternatives, or for flood-insurance studies. Applicable theories, computer solution methods, data requirements, and input data preparation are discussed in detail. Examples are presented to illustrate the computer output and potential program applications. (Woodard-USGS)
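
The standard step method marches upstream from a known downstream control, solving at each station for the depth that balances specific energy against friction loss. A minimal sketch for a rectangular channel follows, with hypothetical geometry, roughness, and discharge (the program described above handles general cross sections and encroachments):

```python
G = 9.81  # gravitational acceleration, m/s^2

def friction_slope(q, y, b, n):
    """Manning friction slope for a rectangular section of width b."""
    area = b * y
    radius = area / (b + 2.0 * y)
    return (n * q / (area * radius ** (2.0 / 3.0))) ** 2

def specific_energy(q, y, b):
    """Depth plus velocity head."""
    return y + (q / (b * y)) ** 2 / (2.0 * G)

def step_upstream(q, y_down, b, n, s0, dx):
    """Bisect for the upstream depth satisfying the energy balance
    E_up = E_down + (Sf_avg - s0) * dx on the subcritical branch."""
    e_down = specific_energy(q, y_down, b)
    sf_down = friction_slope(q, y_down, b, n)
    yc = ((q / b) ** 2 / G) ** (1.0 / 3.0)   # critical depth
    lo, hi = 1.001 * yc, 10.0 * y_down
    for _ in range(60):
        y = 0.5 * (lo + hi)
        sf_avg = 0.5 * (friction_slope(q, y, b, n) + sf_down)
        if specific_energy(q, y, b) > e_down + (sf_avg - s0) * dx:
            hi = y
        else:
            lo = y
    return 0.5 * (lo + hi)

# March an M1 backwater profile upstream from a 3 m downstream control.
q, b, n, s0, dx = 20.0, 10.0, 0.03, 0.001, 200.0
depths = [3.0]
for _ in range(10):
    depths.append(step_upstream(q, depths[-1], b, n, s0, dx))
print(round(depths[-1], 2))
```

For these parameters the depth relaxes monotonically upstream toward the normal depth, as expected for an M1 profile.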

The application of computational aeroacoustics (CAA) to real problems is discussed in relation to the analysis performed with the aim of assessing the application of the various techniques. It is considered that the applications are limited by the inability of the computational resources to resolve the large range of scales involved in high Reynolds number flows. Possible simplifications are discussed. It is considered that problems remain to be solved in relation to the efficient use of the power of parallel computers and in the development of turbulent modeling schemes. The goal of CAA is stated as being the implementation of acoustic design studies on a computer terminal with reasonable run times.

In recent years we have witnessed an increasing development of imaging techniques applied in cardiology. Among them, cardiac computed tomography is an emerging and evolving technique. With the current possibility of very low radiation studies, its applications have expanded and now go beyond coronary angiography. In the present article we review the technical developments of cardiac computed tomography and its new applications.

In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises a semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, and evaluation via user-defined feature extractors, as well as methods to assess the object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements over the recent year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods explore the correlations between image processing and camouflage assessment. A novel algorithm is presented, based on template matching, to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on a camouflage effect in different environments. As the results show, the presented methods contribute a significant benefit in the field of camouflage assessment.
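
A generic building block of such template matching is zero-mean normalized cross-correlation scanned over the image; the sketch below is only this building block on toy data, not CART's actual inconspicuity metric:

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match(image, template):
    """Exhaustive template match; returns (best score, (row, col))."""
    th, tw = template.shape
    best = (-2.0, (0, 0))
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best[0]:
                best = (score, (r, c))
    return best

rng = np.random.default_rng(1)
scene = rng.normal(size=(40, 40))
template = rng.normal(size=(8, 8))
scene[12:20, 25:33] = template          # embed the object in the scene
score, loc = match(scene, template)
print(score > 0.99, loc == (12, 25))    # prints: True True
```

A well-camouflaged object would yield many high-scoring locations besides the true one, which is one way a correlation map can be turned into a quantitative conspicuity measure.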

As with many other linear algebra algorithms, devising a portable implementation of sparse Cholesky factorization that performs well on the broad range of computer architectures currently available is a formidable challenge. Even after limiting our attention to machines with only one processor, as we have done in this report, there are still several interesting issues to consider. For dense matrices, it is well known that block factorization algorithms are the best means of achieving this goal. We take this approach for sparse factorization as well. This paper has two primary goals. First, we examine two sparse Cholesky factorization algorithms, the multifrontal method and a blocked left-looking sparse Cholesky method, in a systematic and consistent fashion, both to illustrate the strengths of the blocking techniques in general and to obtain a fair evaluation of the two approaches. Second, we assess the impact of various implementation techniques on time and storage efficiency, paying particularly close attention to the work-storage requirement of the two methods and their variants.
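
The left-looking organization discussed in the report can be sketched for the dense case: each column of the factor is computed only after pulling in the contributions of all columns to its left. The sparse variants add elimination-tree and supernode machinery not shown in this illustrative sketch:

```python
import numpy as np

def left_looking_cholesky(a):
    """Dense left-looking Cholesky factorization A = L L^T.

    Column j is first updated by the outer-product contributions of
    columns 0..j-1 (the "left-looking" step), then scaled."""
    n = a.shape[0]
    L = np.zeros_like(a, dtype=float)
    for j in range(n):
        # update: subtract contributions of all columns to the left
        col = a[j:, j] - L[j:, :j] @ L[j, :j]
        # factor: scale the updated column
        L[j, j] = np.sqrt(col[0])
        L[j + 1:, j] = col[1:] / L[j, j]
    return L

rng = np.random.default_rng(2)
m = rng.normal(size=(6, 6))
spd = m @ m.T + 6.0 * np.eye(6)   # symmetric positive definite test matrix
L = left_looking_cholesky(spd)
print(np.allclose(L @ L.T, spd))  # prints True
```

Blocking replaces the per-column updates with dense matrix-matrix operations on groups of columns, which is the source of the performance gains the report evaluates.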

Radiological imaging is progressing towards an all-digital future, across the spectrum of medical imaging techniques. Computed radiography (CR) has provided a ready pathway from screen film to digital radiography and a convenient entry point to PACS. This review briefly revisits the principles of modern CR systems and their physical imaging characteristics. Wide dynamic range and digital image enhancement are well-established benefits of CR, which lend themselves to improved image presentation and reduced rates of repeat exposures. However, in its original form CR offered limited scope for reducing the radiation dose per radiographic exposure, compared with screen film. Recent innovations in CR, including the use of dual-sided image readout and channelled storage phosphor have eased these concerns. For example, introduction of these technologies has improved detective quantum efficiency (DQE) by approximately 50 and 100%, respectively, compared with standard CR. As a result CR currently affords greater scope for reducing patient dose, and provides a more substantive challenge to the new solid-state, flat-panel, digital radiography detectors.

This dissertation reports the development and application of a new methodology for the measurement and evaluation of interactive computing, applied to either the users of an interactive computing system or to the system itself, including the service computer and any communications network through which the service is delivered. The focus is on the…

This paper describes new and recent advances in the development of a hybrid transfinite element computational methodology for applicability to conduction/convection/radiation heat transfer problems. The transfinite element methodology, while retaining the modeling versatility of contemporary finite element formulations, is based on application of transform techniques in conjunction with classical Galerkin schemes and is a hybrid approach. The purpose of this paper is to provide a viable hybrid computational methodology for applicability to general transient thermal analysis. Highlights and features of the methodology are described and developed via generalized formulations and applications to several test problems. The proposed transfinite element methodology successfully provides a viable computational approach and numerical test problems validate the proposed developments for conduction/convection/radiation thermal analysis.

A historical weight estimating technique for advanced transportation systems is presented. The classical approach to weight estimation is discussed and sufficient data is presented to estimate weights for a large spectrum of flight vehicles including horizontal and vertical takeoff aircraft, boosters and reentry vehicles. A computer program, WAATS (Weights Analysis for Advanced Transportation Systems) embracing the techniques discussed has been written and user instructions are presented. The program was developed for use in the ODIN (Optimal Design Integration System) system.
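
Classical weight estimation of this kind typically fits power-law weight-estimating relationships of the form W = a * x**b to historical vehicle data and extrapolates them to new designs. The sketch below uses invented data points and does not reproduce WAATS's actual equations or coefficients:

```python
import math

def fit_power_law(xs, ws):
    """Least-squares fit of log(w) = log(a) + b*log(x)."""
    lx = [math.log(x) for x in xs]
    lw = [math.log(w) for w in ws]
    n = len(xs)
    mx, mw = sum(lx) / n, sum(lw) / n
    b = (sum((u - mx) * (v - mw) for u, v in zip(lx, lw))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(mw - b * mx)
    return a, b

# Hypothetical historical points: (vehicle gross weight, wing weight), kg.
gross = [5000.0, 12000.0, 40000.0, 90000.0]
wing = [600.0, 1500.0, 5200.0, 12500.0]
a, b = fit_power_law(gross, wing)
estimate = a * 60000.0 ** b   # predicted wing weight for a new 60 t design
print(round(b, 2), round(estimate))
```

In a program like WAATS, many such relationships, one per subsystem, are summed and iterated with the sizing loop of the design integration system.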

Since 1980, under the grant DEFG02-96ER40949, the Department of Energy has supported the educational and research work of the University of Maryland Dynamical Systems and Accelerator Theory (DSAT) Group. The primary focus of this educational/research group has been on the computation and analysis of charged-particle beam transport using Lie algebraic methods, and on advanced methods for the computation of electromagnetic fields and multiparticle phenomena. This Final Report summarizes the accomplishments of the DSAT Group from its inception in 1980 through its end in 2011.

Within the Ambient Assisted Living (AAL) community, Brain-Computer Interfaces (BCIs) have raised great hopes as they provide alternative communication means for persons with disabilities, bypassing the need for speech and other motor activities. Although significant advancements have been realized in the last decade, applications of language models (e.g., word prediction, completion) have only recently started to appear in BCI systems. The main goal of this article is to review the language model applications that supplement non-invasive BCI-based communication systems, discussing their potential and limitations, and to discern future trends. First, a brief overview of the most prominent BCI spelling systems is given, followed by an in-depth discussion of the language models applied to them. These language models are classified according to their functionality in the context of BCI-based spelling: the static/dynamic nature of the user interface, the use of error correction and predictive spelling, and the potential to improve their classification performance by using language models. To conclude, the review offers an overview of the advantages and challenges of implementing language models in BCI-based communication systems in conjunction with other AAL technologies. PMID:24675760
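
As a minimal illustration of the predictive-spelling idea, the sketch below ranks dictionary completions of a typed prefix by frequency. The vocabulary and counts are invented; real BCI spellers use far larger corpora and n-gram or neural language models:

```python
from collections import Counter

# Toy corpus standing in for a real language-model training set.
corpus = ("the quick brown fox the lazy dog the quick fox "
          "they then there the").split()
freq = Counter(corpus)

def complete(prefix, k=3):
    """Top-k completions of `prefix`, most frequent first,
    ties broken alphabetically."""
    hits = [w for w in freq if w.startswith(prefix)]
    return sorted(hits, key=lambda w: (-freq[w], w))[:k]

print(complete("th"))  # → ['the', 'then', 'there']
```

In a BCI speller, presenting such ranked completions reduces the number of slow, error-prone letter selections the user must make, which is the main benefit the reviewed systems exploit.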

With the recent advances in computing, the opportunities to translate computational models to more integrated roles in patient treatment are expanding at an exciting rate. One area of considerable development has been directed towards correcting soft tissue deformation within image guided neurosurgery applications. This review captures the efforts that have been undertaken towards enhancing neuronavigation by the integration of soft tissue biomechanical models, imaging and sensing technologies, and algorithmic developments. In addition, the review speaks to the evolving role of modeling frameworks within surgery and concludes with some future directions beyond neurosurgical applications. PMID:26354118

The results of a survey on advanced secondary battery systems for space applications are presented. The objectives were: to identify advanced battery systems capable of meeting the requirements of various types of space missions, with significant advantages over currently available batteries; to obtain an accurate estimate of the anticipated improvements of these advanced systems; and to obtain a consensus for the selection of systems most likely to yield the desired improvements. Few advanced systems are likely to exceed a specific energy of 150 Wh/kg and meet the additional requirements of safety and reliability within the next 15 years. The few that have this potential are: (1) regenerative fuel cells, both alkaline and solid polymer electrolyte (SPE) types, for large power systems; (2) lithium-intercalatable cathodes, particularly the metal oxide intercalatable cathodes (MnO2 or CoO2), with applications limited to small spacecraft requiring limited cycle life and low power levels; (3) lithium molten salt systems (e.g., LiAl-FeS2); and (4) Na/beta Alumina/Sulfur or metal chloride cells. Likely technological advances that would enhance the performance of all the above systems are also identified, in particular: improved bifunctional oxygen electrodes; improved manufacturing technology for thin film lithium electrodes in combination with polymeric electrolytes; improved seals for the lithium molten salt cells; and improved ceramics for sodium/solid electrolyte cells.

In this project, an Eulerian-Lagrangian formulation for analyzing three-phase slurry flows in a bubble column was developed. The approach used an Eulerian analysis of liquid flows in the bubble column, and made use of Lagrangian trajectory analysis for the bubble and particle motions. Bubble-bubble and particle-particle collisions are included in the model. The model predictions were compared with the experimental data and good agreement was found. An experimental setup for studying two-dimensional bubble columns was developed. The multiphase flow conditions in the bubble column were measured using optical image processing and Particle Image Velocimetry (PIV) techniques. A simple shear flow device for bubble motion in a constant shear flow field was also developed. The flow conditions in the simple shear flow device were studied using the PIV method. The concentration and velocity of particles of different sizes near a wall in a duct flow were also measured, using the technique of phase-Doppler anemometry. An Eulerian volume of fluid (VOF) computational model for the flow conditions in the two-dimensional bubble column was also developed. The liquid and bubble motions were analyzed and the results were compared with observed flow patterns in the experimental setup. Solid-fluid mixture flows in ducts and passages at different angles of orientation were also analyzed. The model predictions were compared with the experimental data and good agreement was found. Gravity chute flows of solid-liquid mixtures were also studied; the simulation results were compared with the experimental data and discussed. A thermodynamically consistent model for multiphase slurry flows, with and without chemical reaction, in a state of turbulent motion was developed. The balance laws were obtained and the constitutive laws established.

With the development of satellite and remote sensing techniques, more and more image data from airborne/satellite sensors have become available. Multi-sensor image fusion seeks to combine information from different images to obtain more inferences than can be derived from a single sensor. In image-based application fields, image fusion has emerged as a promising research area since the end of the last century. The paper presents an overview of recent advances in multi-sensor satellite image fusion. Firstly, the most popular existing fusion algorithms are introduced, with emphasis on their recent improvements. Advances in main applications fields in remote sensing, including object identification, classification, change detection and maneuvering targets tracking, are described. Both advantages and limitations of those applications are then discussed. Recommendations are addressed, including: (1) Improvements of fusion algorithms; (2) Development of “algorithm fusion” methods; (3) Establishment of an automatic quality assessment scheme. PMID:22408479
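
A minimal pixel-level example of the fusion idea is local-variance weighting, in which the sensor showing more local detail dominates at each pixel. The algorithms surveyed in the paper are considerably more sophisticated (e.g. multiresolution transforms); this sketch on random data only illustrates the principle:

```python
import numpy as np

def local_variance(img, half=1):
    """Per-pixel variance over a (2*half+1)^2 neighborhood (edge-padded)."""
    padded = np.pad(img, half, mode="edge")
    windows = np.stack([
        padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
        for dr in range(2 * half + 1) for dc in range(2 * half + 1)
    ])
    return windows.var(axis=0)

def fuse(img_a, img_b, eps=1e-9):
    """Fuse two co-registered images, weighting each pixel by the
    local variance (detail content) of its source image."""
    wa = local_variance(img_a) + eps
    wb = local_variance(img_b) + eps
    return (wa * img_a + wb * img_b) / (wa + wb)

rng = np.random.default_rng(3)
a = rng.random((16, 16))   # stand-in for sensor A image
b = rng.random((16, 16))   # stand-in for sensor B image
fused = fuse(a, b)
print(fused.shape)
```

Because the weights are positive and normalized, each fused pixel is a convex combination of the two source pixels, a property that simplifies the quality-assessment step the paper recommends automating.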

Computed microtomography is the extension to micron spatial resolution of the CAT scanning technique developed for medical imaging. Synchrotron sources are ideal for the method, since they provide a monochromatic, parallel beam with high intensity. High energy storage rings such as the Advanced Photon Source at Argonne National Laboratory produce x-rays with high energy, high brilliance, and high coherence. All of these factors combine to produce an extremely powerful imaging tool for earth science research. Techniques that have been developed include: - Absorption and phase contrast computed tomography with spatial resolution below one micron. - Differential contrast computed tomography, imaging above and below the absorption edge of a particular element. - High-pressure tomography, imaging inside a pressure cell at pressures above 10 GPa. - High speed radiography and tomography, with 100 microsecond temporal resolution. - Fluorescence tomography, imaging the 3-D distribution of elements present at ppm concentrations. - Radiographic strain measurements during deformation at high confining pressure, combined with precise x-ray diffraction measurements to determine stress. These techniques have been applied to important problems in earth and environmental sciences, including: - The 3-D distribution of aqueous and organic liquids in porous media, with applications in contaminated groundwater and petroleum recovery. - The kinetics of bubble formation in magma chambers, which control explosive volcanism. - Studies of the evolution of the early solar system from 3-D textures in meteorites. - Accurate crystal size distributions in volcanic systems, important for understanding the evolution of magma chambers. - The equation-of-state of amorphous materials at high pressure, using both direct measurements of volume as a function of pressure and measurements of the change in x-ray absorption coefficient as a function of pressure. - The location and chemical speciation of toxic

Computations are presented for one-dimensional, strong shock waves that are typical of those that form in front of a reentering spacecraft. The fluid mechanics and thermochemistry are modeled using two different approaches. The first employs traditional continuum techniques in solving the Navier-Stokes equations. The second approach employs a particle simulation technique (the direct simulation Monte Carlo method, DSMC). The thermochemical models employed in these two techniques are quite different. The present investigation presents an evaluation of thermochemical models for nitrogen under hypersonic flow conditions. Four separate cases are considered. The cases are governed, respectively, by the following: vibrational relaxation; weak dissociation; strong dissociation; and weak ionization. In near-continuum, hypersonic flow, the nonequilibrium thermochemical models employed in continuum and particle simulations produce nearly identical solutions. Further, the two approaches are evaluated successfully against available experimental data for weakly and strongly dissociating flows.
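In the vibrational-relaxation regime, continuum solvers commonly use a Landau-Teller model, in which the vibrational energy relaxes toward its equilibrium value at a rate set by a relaxation time. A minimal sketch under the simplifying assumption of a constant relaxation time (in reality tau depends strongly on temperature and pressure), with illustrative numbers:

```python
import math

tau = 1.0e-6    # s, assumed constant vibrational relaxation time
e_eq = 1.0      # equilibrium vibrational energy (arbitrary units)
e_v = 0.0       # initial vibrational energy behind the shock

# Landau-Teller: dE_v/dt = (E_eq - E_v) / tau, integrated by explicit Euler
dt = 1.0e-8
for _ in range(1000):          # integrate to t = 10 * tau
    e_v += (e_eq - e_v) / tau * dt

# Analytic solution for constant tau: E_v(t) = E_eq * (1 - exp(-t / tau))
e_exact = e_eq * (1.0 - math.exp(-1000 * dt / tau))
```

DSMC obtains the same macroscopic relaxation statistically, from collision-by-collision energy exchange between particles, which is why the two approaches can be compared as in the abstract.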

Public school counselors and psychologists can expect valuable assistance from computer-based assessment and counseling techniques within a few years, as programs now under development become generally available for the typical computers now used by schools for grade-reporting and class-scheduling. Although routine information-giving and gathering…

A severely physically disabled (quadriplegic) third grade student with high average intellectual abilities was fitted with a computer system adapted for maximum student independence. A scanner, the face of which is an integrated circuit board, was constructed to allow accessibility to the computer by a single switch operated by the student's…

naturally occurring E. coli chorismate mutase (EcCM) enzyme through computational design. Although the stated milestone of creating a novel... chorismate mutase (CM) was not achieved, the enhancement of the underlying computational model through the development of the two-body PB method will facilitate the future design of novel protein catalysts.

In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely: (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

Advanced MR imaging techniques, such as spectroscopy, perfusion, diffusion, and functional imaging, have improved the diagnosis of brain tumors in children and also play an important role in defining surgical as well as therapeutic responses in these patients. In addition to the anatomic or structural information gained with conventional MR imaging sequences, advanced MR imaging techniques also provide physiologic information about tumor morphology, metabolism, and hemodynamics. This article reviews the physiology, techniques, and clinical applications of diffusion-weighted and diffusion tensor imaging, MR spectroscopy, perfusion MR imaging, susceptibility-weighted imaging, and functional MR imaging in the setting of neuro-oncology.

Thermoreversible gelcasting (TRG) is an advantageous technique for rapidly producing bulk, net-shape ceramics and laminates. In this method, ceramic powder is suspended in warm acrylate triblock copolymer/alcohol solutions that reversibly gel upon cooling by the formation of endblock aggregates, to produce slurries which are cast into molds. Gel properties can be tailored by controlling the endblock and midblock lengths of the copolymer network-former and selecting an appropriate alcohol solvent. This research focuses on expanding and improving TRG techniques, with specific attention to advanced energy applications including the solid oxide fuel cell (SOFC). Rapid drying of filled gels can lead to warping and cracking caused by high differential capillary stresses. A new drying technique using concentrated, alcohol-based solutions as liquid desiccants (LDs) to greatly reduce warping is introduced. The optimal LD is a poly(tert-butyl acrylate)/isopropyl alcohol solution with 5 mol% tert-butyl acrylate units. Alcohol emissions during drying are completely eliminated by combining initial drying in an LD with final-stage drying in a vacuum oven having an in-line solvent trap. Porous ceramics are important structures for many applications, including SOFCs. Pore network geometries are tailored by the addition of fugitive fillers to TRG slurries. Uniform spherical, bimodal spherical, and uniform fibrous fillers are used. Three-dimensional pore structures are visualized by X-ray computed tomography, allowing for direct measurements of physical parameters such as concentration and morphology as well as transport properties such as tortuosity. Tortuosity values as low as 1.52 are achieved when 60 vol% of solids are uniform spherical filler. Functionally graded laminates with layers ranging from 10 μm to >1 mm thick are produced with a new technique that combines TRG with tape casting. Gels used for bulk casting are not suitable for use with tape casting, and appropriate base

This book is one of a series in Course II of the Relevant Educational Applications of Computer Technology (REACT) Project. It is designed to point out to teachers two of the major applications of computers in the social sciences: simulation and data analysis. The first section contains a variety of simulation units organized under the following…

The nuclear industry is at the eye of a 'perfect storm', with fuel oil and natural gas prices near record highs, worldwide energy demands increasing at an alarming rate, and increased concerns about greenhouse gas (GHG) emissions that have caused many to look negatively at long-term use of fossil fuels. This convergence of factors has led to a growing interest in revitalization of the nuclear power industry within the United States and across the globe. Many are surprised to learn that nuclear power provides approximately 20% of the electrical power in the US and approximately 16% of the world-wide electric power. With the above factors in mind, more than 130 new reactor projects are being considered worldwide, with approximately 25 new permit applications in the US. Materials have long played a very important role in the nuclear industry, with applications throughout the entire fuel cycle, from fuel fabrication to waste stabilization. As the international community begins to look at advanced reactor systems and fuel cycles that minimize waste and increase proliferation resistance, materials will play an even larger role. Many of the advanced reactor concepts being evaluated operate at high temperature, requiring the use of durable, heat-resistant materials. Advanced metallic and ceramic fuels are being investigated for a variety of Generation IV reactor concepts. These include the traditional TRISO-coated particles, advanced alloy fuels for 'deep-burn' applications, as well as advanced inert-matrix fuels. In order to minimize wastes and legacy materials, a number of fuel reprocessing operations are being investigated. Advanced materials continue to provide a vital contribution in 'closing the fuel cycle' by stabilization of associated low-level and high-level wastes in highly durable cements, ceramics, and glasses. Beyond this fission energy application, fusion energy will demand advanced materials capable of withstanding the extreme environments of high

Experience with advanced computational methods being used at the Lockheed-Georgia Company to aid in the evaluation and design of new and modified aircraft indicates that large and specialized computers will be needed to make advanced three-dimensional viscous aerodynamic computations practical. The Numerical Aerodynamic Simulation Facility should be used to provide a tool for designing better aerospace vehicles while at the same time reducing development costs by performing computations using Navier-Stokes equations solution algorithms and permitting less sophisticated but nevertheless complex calculations to be made efficiently. Configuration definition procedures and data output formats can probably best be defined in cooperation with industry; therefore, the computer should handle many remote terminals efficiently. The capability of transferring data to and from other computers needs to be provided. Because of the significant amount of input and output associated with 3-D viscous flow calculations and because of the exceedingly fast computation speed envisioned for the computer, special attention should be paid to providing rapid, diversified, and efficient input and output.

A method of controlling systematic error in transmutation computations is described for a class of problems in which exactly one parent nucleus and one residual nucleus are considered in each nuclear transformation channel. A discrete-logical algorithm is stated for reducing the system matrix of the differential equations to block-triangular form. A computing procedure is developed that determines a strict estimate of the computing error for each value of the computation results, for the above-named class of transmutation problems with some additional restrictions on the complexity of the nuclear transformation scheme. The computer code implementing this procedure, TRANS_MU, has a number of advantages over analogous approaches. Besides the quantitative control of systematic and computing errors, an important feature of TRANS_MU is the calculation of the contribution of each considered reaction to transmutant accumulation and gas production. The application of the TRANS_MU computer code is illustrated using copper alloys as an example, both in planning irradiation experiments on fusion reactor material specimens in fission reactors and in processing the experimental results.
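The problem class described (one parent and one residual nucleus per channel, triangular system matrix) reduces to Bateman-type decay chains. A minimal two-member chain A -> B -> stable, cross-checked against a crude numerical integration in the spirit of the paper's error control; the decay constants and initial inventory are illustrative, not data from the paper:

```python
import math

lam_a, lam_b = 0.05, 0.01     # illustrative decay constants (1/s)
n_a0 = 1.0e6                  # initial parent atoms

def bateman(t):
    """Exact Bateman solution for the chain A -> B -> stable."""
    na = n_a0 * math.exp(-lam_a * t)
    nb = n_a0 * lam_a / (lam_b - lam_a) * (
        math.exp(-lam_a * t) - math.exp(-lam_b * t))
    return na, nb

def euler(t, steps=20000):
    """Crude explicit-Euler integration of the same chain."""
    dt = t / steps
    na, nb = n_a0, 0.0
    for _ in range(steps):
        # simultaneous update: both right-hand sides use the old values
        na, nb = na - lam_a * na * dt, nb + (lam_a * na - lam_b * nb) * dt
    return na, nb

na_exact, nb_exact = bateman(10.0)
na_num, nb_num = euler(10.0)
# The exact/numerical difference bounds the integration error.
```

With a triangular system matrix, longer chains are solved the same way, channel by channel, each residual nucleus depending only on nuclides already computed.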

Since 1972, the Department of Veterans Affairs has had mental health computer applications for clinicians, managers, and researchers, operating on mainframe and minicomputers. The advent of personal computers has provided the opportunity to further enhance mental health automation. With Congressional support, VA's Mental Health and Behavioral Sciences Service placed microcomputers in 168 VA Medical Centers and developed additional mental health applications. Using wide area networking procedures, a National Mental Health Database System (NMHDS) was established. In addition, a Computer-assisted Assessment, Psychotherapy, Education, and Research system (CAPER), a Treatment Planner, a Suicide and Assaultive Behavior Monitoring system, and a national registry of VA mental health treatment resources were developed. Each of these computer applications is demonstrated and discussed.

The rapid development of computer technology has promoted the adoption of the cloud computing platform, which in essence substitutes and exchanges resource service models so as to meet users' needs for different resources. Cloud computing offers advantages in many respects: it reduces the difficulty of operating the system and makes it easy for users to search for, acquire, and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile Internet cloud computing platform in operation. The popularization of computer technology has driven the creation of digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing, in turn, distributes computations across a large number of distributed computers, implementing a connected service over many machines. Digital libraries, as a typical application of cloud computing, can therefore serve as a vehicle for analyzing its key technologies.

Recent studies indicate that significant increases in system performance (increased efficiency and reduced system mass) are possible for high-power space-based systems by incorporating technological developments with photovoltaic power systems. The Advanced Photovoltaic Concentrator Program is an effort to take advantage of recent advancements in refractive optical elements. By using a domed Fresnel lens concentrator and a prismatic cell cover to eliminate metallization losses, dramatic reductions in the required area and mass relative to current space photovoltaic systems are possible. The advanced concentrator concept also has significant advantages when compared to solar dynamic Organic Rankine Cycle power systems in Low Earth Orbit applications where energy storage is required. The program is currently involved in the selection of a material for the optical element that will survive the space environment and a demonstration of the system performance of the panel design.

Next generation magnetic microwave devices will be planar, smaller, weigh less, and perform well beyond the present state-of-the-art. For this to become a reality advances in ferrite materials must first be realized. These advances include self-bias magnetization, tunability of the magnetic anisotropy, low microwave loss, and volumetric and weight reduction. To achieve these goals one must turn to novel materials processing methods. Here, we review recent advances in the processing of microwave ferrites. Attention is paid to the processing of ferrite films by pulsed laser deposition, liquid phase epitaxy, spin spray ferrite plating, screen printing, and compaction of quasi-single crystals. Conventional and novel applications of ferrite materials, including microwave non-reciprocal passive devices, microwave signal processing, negative index metamaterial-based electronics, and electromagnetic interference suppression are discussed.

This report summarizes work performed by Garrett Auxiliary Power Division (GAPD), a unit of Allied-Signal Aerospace Company, during calendar year 1992, toward development and demonstration of structural ceramic technology for automotive gas turbine engines. This work was performed for the US Department of Energy (DOE) under National Aeronautics and Space Administration (NASA) Contract DEN3-335, Advanced Turbine Technology Applications Project (ATTAP). GAPD utilized the AGT101 regenerated gas turbine engine developed under the previous DOE/NASA Advanced Gas Turbine (AGT) program as the ATTAP test bed for ceramic engine technology demonstration. ATTAP focused on improving AGT101 test bed reliability, development of ceramic design methodologies, and improvement of fabrication and materials processing technology by domestic US ceramics fabricators. A series of durability tests was conducted to verify technology advancements. This is the fifth in a series of technical summary reports published annually over the course of the five-year contract.

Details classroom use of SuperCalc, a software accounting package developed originally for small businesses, as a computerized gradebook. A procedure which uses data from the computer gradebook to produce weekly printed reports for parents is also described. (MBR)

Analysts and organizations have a tendency to lock themselves into specific codes, with the obvious consequences of not addressing the real problem and thus reaching the wrong conclusion. This paper discusses the role of the analyst in selecting computer codes. The participation and support of a computation division in modifying the source program, configuration management, and pre- and post-processing of codes are among the subjects discussed. Specific examples illustrating the computer code selection process are described in the following problem areas: soil-structure interaction, structural analysis of nuclear reactors, analysis of waste tanks where fluid-structure interaction is important, analysis of equipment, structure-structure interaction, analysis of the operation of the Superconducting Super Collider, which includes friction and transient temperature, and 3D analysis of the 10-meter telescope being built in Hawaii. Validation and verification of computer codes and their impact on the selection process are also discussed.

This chapter provides an overview of computational models that describe various aspects of the source-to-health effect continuum. Fate and transport models describe the release, transportation, and transformation of chemicals from sources of emission throughout the general envir...

This paper presents the application of a recently developed predictive control algorithm, infinite model predictive control (IMPC), to other advanced control schemes. The IMPC strategy was derived for systems with different degrees of nonlinearity in the process gain and time constant. It was also shown that the IMPC structure uses nonlinear open-loop modeling, conducted while closed-loop control is executed at every sampling instant. The main objective of this work is to demonstrate that the methodology of IMPC can be applied to other advanced control strategies, making the methodology generic. The IMPC strategy was implemented on several advanced controllers, including a PI controller using a Smith predictor, the Dahlin controller, simplified predictive control (SPC), dynamic matrix control (DMC), and shifted dynamic matrix control (m-DMC). Experimental work using these approaches combined with IMPC was conducted on both single-input-single-output (SISO) and multi-input-multi-output (MIMO) systems and compared with the original forms of these advanced controllers. Computer simulations were performed on nonlinear plants, demonstrating that the IMPC strategy can be readily implemented on other advanced control schemes, providing improved control performance. Practical work included real-time control applications on a DC motor, a plastic injection molding machine, and a MIMO three-zone thermal system.
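Of the controllers named, dynamic matrix control (DMC) is the easiest to sketch: it builds a "dynamic matrix" from the plant's step response and solves a least-squares problem for the control moves. A minimal unconstrained SISO version, using the textbook formulation with an invented step response and tuning (not parameters from the paper):

```python
import numpy as np

# Illustrative step-response samples of a stable SISO plant
step = np.array([0.2, 0.5, 0.8, 0.95, 1.0, 1.0])
P, M = 4, 2                    # prediction and control horizons

# Dynamic matrix: column j holds the step response delayed by j samples
G = np.zeros((P, M))
for j in range(M):
    G[j:, j] = step[:P - j]

setpoint = 1.0
y_free = np.zeros(P)           # free response (plant assumed at rest here)
e = setpoint - y_free          # predicted error over the horizon

lam = 0.1                      # move-suppression weight
# Unconstrained DMC: du = (G'G + lam*I)^-1 G' e
du = np.linalg.solve(G.T @ G + lam * np.eye(M), G.T @ e)
u_next = du[0]                 # receding horizon: apply only the first move
```

At the next sample the free response is updated with measurements and the calculation repeats, which is the closed-loop/open-loop interleaving the abstract refers to.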

Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

The importance of the computer in areas of patient care is widely recognized today. This review outlines the information processing tasks which involve the interaction between the patient and the provider of health care services. These areas include the clinical laboratory, automated multiphasic health testing, medical records, patient monitoring, diagnostic support systems, and medical imaging. Health care professionals, including clinical engineers, must recognize the potential, understand basic principles, and utilize computers effectively during the next decade's rapid advances.

Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

Internet2, a consortium effort of over 100 universities, is investing in upgrading campus and national computer network platforms for such applications as digital libraries, collaboration environments, tele-medicine, and distance-independent instruction. The project is described, issues the project intends to address are detailed, and ways in…

In modern laminar flow flight research, it is important to understand the specific cause(s) of laminar to turbulent boundary-layer transition. Such information is crucial to the exploration of the limits of practical application of laminar flow for drag reduction on aircraft. The transition modes of interest in current flight investigations include the viscous Tollmien-Schlichting instability, the inflectional instability at laminar separation, and the crossflow inflectional instability, as well as others. This paper presents the results to date of research on advanced devices and methods used for the study of laminar boundary-layer transition phenomena in the flight environment. Recent advancements in the development of arrayed hot-film devices and of a new flow visualization method are discussed. Arrayed hot-film devices have been designed to detect the presence of laminar separation, and of crossflow vorticity. The advanced flow visualization method utilizes color changes in liquid-crystal coatings to detect boundary-layer transition at high altitude flight conditions. Flight and wind tunnel data are presented to illustrate the design and operation of these advanced methods. These new research tools provide information on disturbance growth and transition mode which is essential to furthering our understanding of practical design limits for applications of laminar flow technology.

Semi-automated methods for calculating tumor volumes from computed tomography images are a new tool for advancing the development of cancer therapeutics. Volumetric measurements, relying on already widely available standard clinical imaging techniques, could shorten the observation intervals needed to identify cohorts of patients sensitive or resistant to treatment.
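Once a semi-automated segmentation has produced a binary tumor mask on the CT volume, the volumetric measurement itself is simple: count the voxels and multiply by the voxel size. A minimal sketch with invented voxel spacing (typical CT values are sub-millimeter in-plane with thicker slices):

```python
import numpy as np

# Toy 4x4x4 CT volume with a 2x2x2 "tumor" mask of 8 voxels, standing in
# for the output of a semi-automated segmentation step.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True

voxel_spacing_mm = (0.7, 0.7, 2.5)   # in-plane spacing and slice thickness
voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))

# Volume = number of segmented voxels * volume of one voxel
tumor_volume_mm3 = int(mask.sum()) * voxel_volume_mm3
```

Tracking this number across scans is what allows shorter observation intervals than one-dimensional diameter criteria: a volume change is detectable before the corresponding diameter change is.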

EPA announced the release of the final report, Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology. This report describes new approaches that are faster, less resource intensive, and more robust, and that can help ...

This response by the Virginia Department of Education to House Joint Resolution No. 118 of the General Assembly of Virginia, which requested the Department of Education to study initiatives to advance computer-assisted instruction, is based on input from state and national task forces and on a 1986 survey of 80 Virginia school divisions. The…

On February 18, 2004, the U.S. Environmental Protection Agency and Department of Energy signed a Memorandum of Understanding to expand the research collaboration of both agencies to advance biological, environmental, and computational sciences for protecting human health and the ...

The New Millennium Program (NMP) consists of a series of Deep-Space and Earth Orbiting missions that are technology-driven, in contrast to the more traditional science-driven space exploration missions of the past. These flights are designed to validate technologies that will enable a new era of low-cost, highly miniaturized, and highly capable spaceborne applications in the new millennium. In addition to the series of flight projects managed by separate flight teams, the NMP technology initiatives are managed by the following six focused technology programs: Microelectronics Systems, Autonomy, Telecommunications, Instrument Technologies and Architectures, In-Situ Instruments and Micro-electromechanical Systems, and Modular and Multifunctional Systems. Each technology program is managed as an Integrated Product Development Team (IPDT) of government, academic, and industry partners. In this paper, we will describe elements of the technology roadmap proposed by the NMP Microelectronics IPDT. Moreover, we will relate the proposed technology roadmap to existing NASA technology development programs, such as the Advanced Flight Computing (AFC) program and the Remote Exploration and Experimentation (REE) program, which constitute part of the ongoing NASA technology development pipeline. We will also describe the Microelectronics Systems technologies that have been accepted as part of the first New Millennium Deep-Space One spacecraft, which is an asteroid fly-by mission scheduled for launch in July 1998.

The Idaho National Laboratory (INL) is in the process of modernizing the various reactor physics modeling and simulation tools used to support operation and safety assurance of the Advanced Test Reactor (ATR). Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009 was successfully completed during 2011. This demonstration supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR fuel cycle management process beginning in 2012. On the experimental side of the project, new hardware was fabricated, measurement protocols were finalized, and the first four of six planned physics code validation experiments based on neutron activation spectrometry were conducted at the ATRC facility. Data analysis for the first three experiments, focused on characterization of the neutron spectrum in one of the ATR flux traps, has been completed. The six experiments will ultimately form the basis for a flexible, easily-repeatable ATR physics code validation protocol that is consistent with applicable ASTM standards.

The Internet has provided criminals, terrorists, spies, and other threats to national security a means of communication. At the same time it also provides for the possibility of detecting and tracking their deceptive communication. Recent advances in natural language processing, machine learning and deception research have created an environment where automated and semi-automated deception detection of text-based computer-mediated communication (CMC, e.g. email, chat, instant messaging) is a reachable goal. This paper reviews two methods for discriminating between deceptive and non-deceptive messages in CMC. First, Document Feature Mining uses document features or cues in CMC messages combined with machine learning techniques to classify messages according to their deceptive potential. The method, which is most useful in asynchronous applications, also allows for the visualization of potential deception cues in CMC messages. Second, Speech Act Profiling, a method for quantifying and visualizing synchronous CMC, has shown promise in aiding deception detection. The methods may be combined and are intended to be a part of a suite of tools for automating deception detection.
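Document feature mining starts by extracting linguistic cues from each message before any classifier is applied. A toy illustration: the cues below (word count and first-person pronoun rate, the latter a commonly discussed deception cue) are real kinds of features, but the scoring rule and its weight are invented for the example, not a trained model:

```python
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def cues(message):
    """Extract two simple linguistic cue features from a message."""
    words = message.lower().split()
    n = len(words)
    fp = sum(w.strip(".,!?") in FIRST_PERSON for w in words)
    return {"word_count": n, "first_person_rate": fp / max(n, 1)}

def deception_score(message):
    # Illustrative rule only: treat reduced self-reference as a weak cue.
    # A real system would feed many cues into a trained classifier.
    return 1.0 - cues(message)["first_person_rate"]

honest = "I went to my office and I finished my report."
evasive = "The office was visited and the report was finished."
```

In the method described above, dozens of such cue features per message are combined by machine learning to classify deceptive potential; the cue extraction step is what makes the result visualizable.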

Results of a study of the development of flutter modules applicable to automated structural design of advanced aircraft configurations, such as a supersonic transport, are presented. Automated structural design is restricted to automated sizing of the elements of a given structural model. It includes a flutter optimization procedure; i.e., a procedure for arriving at a structure with minimum mass for satisfying flutter constraints. Methods of solving the flutter equation and computing the generalized aerodynamic force coefficients in the repetitive analysis environment of a flutter optimization procedure are studied, and recommended approaches are presented. Five approaches to flutter optimization are explained in detail and compared. An approach to flutter optimization incorporating some of the methods discussed is presented. Problems related to flutter optimization in a realistic design environment are discussed and an integrated approach to the entire flutter task is presented. Recommendations for further investigations are made. Results of numerical evaluations, applying the five methods of flutter optimization to the same design task, are presented.

The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) Project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.
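Tracer advection is the transport of a chemical species' mixing ratio by the model winds. A minimal 1-D first-order upwind step illustrates the family of algorithms (production schemes are higher order, flux-limited, and vectorized over many species at once); the grid and pulse here are invented:

```python
import numpy as np

def upwind_step(q, u, dx, dt):
    """One first-order upwind advection step for constant speed u > 0,
    on a periodic 1-D domain."""
    c = u * dt / dx                      # Courant number
    assert 0 < c <= 1, "CFL condition violated"
    return q - c * (q - np.roll(q, 1))

nx = 50
q = np.zeros(nx)
q[10:15] = 1.0                           # a tracer "pulse"
mass0 = q.sum()

for _ in range(100):
    q = upwind_step(q, u=1.0, dx=1.0, dt=0.5)

mass = q.sum()   # the scheme conserves total tracer mass on a periodic grid
```

Two properties that matter for biogeochemistry are visible even here: mass conservation (the sums match) and monotonicity (no spurious negative concentrations), both of which efficient multi-tracer schemes must preserve.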

Research performed over the past 10 years in engineering data base management and parallel computing is discussed, and certain opportunities for research toward the next generation of structural analysis capability are proposed. Particular attention is given to data base management associated with the IPAD project and parallel processing associated with the Finite Element Machine project, both sponsored by NASA, and a near term strategy for a distributed structural analysis capability based on relational data base management software and parallel computers for a future structural analysis system.

Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.
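A classroom-scale example of the kind of computational exercise such a unit might use (the specific exercises in the published curriculum may differ): simulating genetic drift of one allele under the Wright-Fisher model, where each gene copy in the next generation is drawn from the current allele frequency:

```python
import random

def wright_fisher(pop_size, p0, generations, seed=0):
    """Simulate allele-frequency drift in a fixed-size population.

    Each of the pop_size gene copies in the next generation is drawn
    independently with probability equal to the current frequency.
    """
    rng = random.Random(seed)     # seeded for reproducible classroom runs
    p = p0
    freqs = [p0]
    for _ in range(generations):
        count = sum(rng.random() < p for _ in range(pop_size))
        p = count / pop_size
        freqs.append(p)
    return freqs

traj = wright_fisher(pop_size=100, p0=0.5, generations=50)
```

Running this for several seeds and population sizes lets students see drift, fixation, and loss emerge from pure chance, which is exactly the kind of insight that is hard to convey without computation.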

The issue of global warming and related climatic changes from increasing concentrations of greenhouse gases in the atmosphere has received prominent attention during the past few years. The Computer Hardware, Advanced Mathematics, and Model Physics (CHAMMP) Climate Modeling Program is designed to contribute directly to the rapid improvement of climate models. The goal of the CHAMMP Climate Modeling Program is to develop, verify, and apply a new generation of climate models within a coordinated framework that incorporates the best available scientific and numerical approaches to represent physical, biogeochemical, and ecological processes, that fully utilizes the hardware and software capabilities of new computer architectures, that probes the limits of climate predictability, and finally that can be used to address the challenging problem of understanding the greenhouse climate issue through the ability of the models to simulate time-dependent climatic changes over extended times and with regional resolution.

Advanced stress analysis methods applicable to turbine engine structures are investigated. The construction of special elements containing traction-free circular boundaries is investigated. New versions of the mixed variational principle and of hybrid stress elements are formulated. A method is established for the suppression of kinematic deformation modes. SemiLoof plate and shell elements are constructed by the assumed-stress hybrid method. An elastic-plastic analysis is conducted by viscoplasticity theory using the mechanical subelement model.

A method for fabrication of novel thin-film continuous dynode electron multipliers is described. The feasibility of crucial manufacturing steps, including anisotropic dry etching of substrates into photolithographically defined arrays of high-aspect-ratio channels, and the formation of thin-film continuous dynodes by chemical vapor deposition (CVD), is demonstrated. Potential performance and design advantages of this advanced-technology microchannel plate over the conventional reduced lead silicate glass microchannel plate, and implications for new applications, are discussed.

A brief historical account of the evolution of continuously variable transmissions (CVT) for automotive use is given. The CVT concepts which are potentially suitable for application with electric and hybrid vehicles are discussed. The arrangement and function of several CVT concepts are cited along with their current developmental status. The results of preliminary design studies conducted on four CVT concepts for use in advanced electric vehicles are discussed.

The Stockpile Stewardship Program (SSP) is a single, highly integrated technical program for maintaining the safety and reliability of the U.S. nuclear stockpile. The SSP uses past nuclear test data along with future non-nuclear test data, computational modeling and simulation, and experimental facilities to advance understanding of nuclear weapons. It includes stockpile surveillance, experimental research, development and engineering programs, and an appropriately scaled production capability to support stockpile requirements. This integrated national program will require the continued use of current facilities and programs along with new experimental facilities and computational enhancements to support these programs. The Advanced Simulation and Computing program (ASC) is a cornerstone of the SSP, providing simulation capabilities and computational resources to support the annual stockpile assessment and certification, to study advanced nuclear weapon design and manufacturing processes, to analyze accident scenarios and weapons aging, and to provide the tools to enable stockpile life extension programs and the resolution of significant finding investigations (SFIs). This requires a balanced system of technical staff, hardware, simulation software, and computer science solutions.

Ultrasonics has been used in many industrial applications for both sensing at low power and processing at higher power. Generally, the high-power applications fall within the categories of liquid stream degassing, impurity separation, and sonochemical enhancement of chemical processes. Examples of such industrial applications include metal production, food processing, chemical production, and pharmaceutical production. There are many nuclear process streams that have similar physical and chemical processes to those applications listed above. These nuclear processes could potentially benefit from the use of high-power ultrasonics. There are also potential benefits to applying these techniques in advanced nuclear fuel cycle processes, and these benefits have not been fully investigated. Currently the dominant use of ultrasonic technology in the nuclear industry has been low-power ultrasonics for non-destructive testing/evaluation (NDT/NDE), where it is primarily used for inspections and for characterizing material degradation. Because there has been very little consideration given to how sonoprocessing can potentially improve efficiency and add value to important process streams throughout the nuclear fuel cycle, there are numerous opportunities for improvement in current and future nuclear technologies. In this paper, the relevant fundamental theory underlying sonoprocessing is highlighted, and some potential applications to advanced nuclear technologies throughout the nuclear fuel cycle are discussed.

Recent decades have witnessed many breakthroughs in research on carbon nanotubes (CNTs), particularly regarding controllable synthesis, production scale-up, and application advances for this material. This sp2-bonded nanocarbon uniquely combines extreme mechanical strength and exceptionally high electrical conductivity, as well as many other superior properties, making it highly attractive for fundamental research and industrial applications. Synthesis and mass production form the solid basis for high-volume applications of CNTs. During recent decades, CNT production capacity has reached thousands of tons per year, greatly decreasing the price of CNTs. Although the unique physicochemical properties of an individual CNT are cited repeatedly, the manifestation of such properties in a macroscopic material, e.g., the realization of high-strength CNT fibers, remains a great challenge. If such challenges are solved, many critical applications will be enabled. Herein we review the critical progress in the development of synthesis and scaled-up production methods for CNTs, and discuss advances in their applications. Scientific problems and technological challenges are discussed together.

In recent years, there have been significant technological advancements in the manufacturing, types, and applications of biosensors. Applications include clinical and non-clinical diagnostics for home, bio-defense, bio-remediation, environment, agriculture, and the food industry. Biosensors have progressed beyond the detection of biological threats such as anthrax and are finding use in a number of non-biological applications. Emerging biosensor technologies such as lab-on-a-chip have revolutionized integration approaches, yielding a very flexible, innovative, and user-friendly platform. An overview of the fundamentals, types, applications, and manufacturers, as well as the market trends of biosensors, is presented here. Two case studies are discussed: one focused on characterization techniques (patch clamping and dielectric spectroscopy as biological sensors), and the other on lithium phthalocyanine, a material that is being developed for in-vivo oximetry.

NASA Langley Research Center has continued to develop its long standing computational tools to address new challenges in aircraft and launch vehicle design. This paper discusses the application and development of those computational aeroelastic tools. Four topic areas will be discussed: 1) Modeling structural and flow field nonlinearities; 2) Integrated and modular approaches to nonlinear multidisciplinary analysis; 3) Simulating flight dynamics of flexible vehicles; and 4) Applications that support both aeronautics and space exploration.

the Dengue Decision Support System that has been developed at Colorado State University. Further, to accommodate the dynamic nature of pervasive... "Expressiveness of Events using Parameter Contexts", Proceedings of the 12th East European Conference on Advances in Databases and Information Systems... Anura Jayasumana and Indrajit Ray, "Key Pre-distribution Based Secure Backbone Design for Wireless Sensor Networks", Proceedings of the 3rd IEEE

The report describes three advanced technologies--robotics, artificial intelligence, and computer simulation--and identifies the ways in which they might contribute to special education. A hybrid methodology was employed to identify existing technology and forecast future needs. Following this framework, each of the technologies is defined,…

Describes a computer simulation for teaching diplomacy that has been used at the University of New Orleans (Louisiana). The program enables students to examine the interaction of war with diplomacy by addressing the subject both historically and socio-psychologically. Discusses the results and makes recommendations for modifications. (KO)

Purpose: The purpose of this study was to determine if older adults are capable and willing to interact with a computerized exercise promotion interface and to determine to what extent they accept computer-generated exercise recommendations. Design and Methods: Time and requests for assistance were recorded while 34 college-educated volunteers,…

Several interrelated problems in the area of neural network computations are described. First, an interpolation problem is considered; then a control problem is reduced to a problem of interpolation by a neural network via a Lyapunov function approach; and finally a new learning method, faster than gradient descent, is introduced.
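As context for the comparison above, here is a minimal sketch (entirely illustrative, not the paper's method) of the gradient-descent baseline: fitting a one-hidden-layer tanh network to a few interpolation points in pure Python. The data points, network width, learning rate, and iteration count are all arbitrary choices.

```python
import math
import random

random.seed(0)

# Interpolation data: three (x, y) points the network should pass near.
points = [(-1.0, 0.2), (0.0, 1.0), (1.0, 0.2)]

H = 8  # number of hidden units (arbitrary)
w = [random.uniform(-1, 1) for _ in range(H)]  # input weights
b = [random.uniform(-1, 1) for _ in range(H)]  # hidden biases
v = [random.uniform(-1, 1) for _ in range(H)]  # output weights

def predict(x):
    """One-hidden-layer network: y = sum_j v_j * tanh(w_j * x + b_j)."""
    return sum(v[j] * math.tanh(w[j] * x + b[j]) for j in range(H))

def loss():
    """Mean squared interpolation error over the data points."""
    return sum((predict(x) - y) ** 2 for x, y in points) / len(points)

initial = loss()
lr = 0.05  # learning rate
for _ in range(2000):  # plain stochastic gradient descent
    for x, y in points:
        e = predict(x) - y  # residual at this point
        for j in range(H):
            h = math.tanh(w[j] * x + b[j])
            g = 2.0 * e * v[j] * (1.0 - h * h)  # backprop through tanh
            v[j] -= lr * 2.0 * e * h
            w[j] -= lr * g * x
            b[j] -= lr * g

print(f"loss before: {initial:.4f}, after: {loss():.4f}")
```

Plain gradient descent needs many passes over the data to drive the residuals down; faster learning schemes such as the one the abstract alludes to aim to reduce exactly this iteration count.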

Ways in which computer technology can be used to support the development of a library network are proposed by the Computer Application Task Force, following the recommendations of the New Jersey Statewide Planning Group and its several task forces. A summary of three primary recommendations is followed by a general discussion of opportunities…

There are many advantages to the use of computers and control in the food industry. Software in the food industry takes two forms: general-purpose commercial computer software and software for specialized applications, such as drying and thermal processing of foods. Many applied simulation models for d...
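As an illustration of the kind of specialized computation such thermal-processing software performs (a textbook example, not drawn from this record), the sterilization lethality F0 of a process is computed by numerically integrating the product temperature history; the reference temperature 121.1 °C and z-value 10 °C below are the conventional values for low-acid canned foods.

```python
def f0_lethality(temps_c, dt_min, t_ref=121.1, z=10.0):
    """Accumulated lethality F0 (equivalent minutes at t_ref) for a
    temperature history sampled every dt_min minutes:
    F0 = sum 10^((T - t_ref)/z) * dt."""
    return sum(10.0 ** ((t - t_ref) / z) * dt_min for t in temps_c)

# One minute held exactly at the reference temperature contributes
# exactly one equivalent minute of lethality.
print(f0_lethality([121.1], 1.0))  # → 1.0
```

Process-control software evaluates this sum in real time from retort temperature readings and ends the cook once the target F0 is reached.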

This paper describes a security guard device that is currently being developed by Computer Sciences Corporation (CSC). The methods used to provide assurance that the system meets its security requirements include the system architecture, a system security evaluation, and the application of formal and informal verification techniques. The combination of state-of-the-art technology and the incorporation of new verification procedures results in a demonstration of the feasibility of computer security technology for operational applications.

Electronic-nose devices have received considerable attention in the field of sensor technology during the past twenty years, largely due to the discovery of numerous applications derived from research in diverse fields of applied sciences. Recent applications of electronic nose technologies have come through advances in sensor design, material improvements, software innovations and progress in microcircuitry design and systems integration. The invention of many new e-nose sensor types and arrays, based on different detection principles and mechanisms, is closely correlated with the expansion of new applications. Electronic noses have provided a plethora of benefits to a variety of commercial industries, including the agricultural, biomedical, cosmetics, environmental, food, manufacturing, military, pharmaceutical, regulatory, and various scientific research fields. Advances have improved product attributes, uniformity, and consistency as a result of increases in quality control capabilities afforded by electronic-nose monitoring of all phases of industrial manufacturing processes. This paper is a review of the major electronic-nose technologies, developed since this specialized field was born and became prominent in the mid 1980s, and a summarization of some of the more important and useful applications that have been of greatest benefit to man. PMID:22346690

The most commonly used solder for electrical interconnections in electronic packages is the near-eutectic 60Sn-40Pb alloy. This alloy has a number of processing advantages (a suitable melting point of 183 °C and good wetting behavior). However, under conditions of cyclic strain and temperature (thermomechanical fatigue), the microstructure of this alloy undergoes a heterogeneous coarsening and failure process that makes the prediction of solder joint lifetime complex. A viscoplastic, microstructure-dependent constitutive model for solder, which is currently under development, was implemented into a finite element code. With this computational capability, the thermomechanical response of solder interconnects, including microstructural evolution, can be predicted. This capability was applied to predict the thermomechanical response of a mini ball grid array solder interconnect. In this paper, the constitutive model will first be briefly discussed. The results of computational studies to determine the thermomechanical response of a mini ball grid array solder interconnect will then be presented.
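For scale, a common first-order alternative to full viscoplastic finite-element simulation (illustrative only, not the model discussed above) is the Coffin-Manson low-cycle fatigue relation, which maps a cyclic plastic strain range directly to cycles to failure. The ductility coefficient and exponent below are rough generic placeholders for near-eutectic SnPb, not fitted constants.

```python
def coffin_manson_cycles(strain_range, eps_f=0.325, c=-0.5):
    """Cycles to failure N_f from the Coffin-Manson relation
    (strain_range / 2) = eps_f * (2 * N_f) ** c, solved for N_f.
    eps_f and c are illustrative placeholder constants."""
    return 0.5 * (strain_range / (2.0 * eps_f)) ** (1.0 / c)

# With c = -0.5, halving the plastic strain range raises predicted
# life by roughly a factor of four.
print(coffin_manson_cycles(0.02), coffin_manson_cycles(0.01))
```

Such closed-form estimates ignore the heterogeneous coarsening noted above, which is precisely why microstructure-dependent constitutive models are pursued for accurate lifetime prediction.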

avenue to enhance patient education and comprehension. The purpose of this study was to establish the effectiveness of computer-assisted instruction... significant pre- and post-test differences were found for six of the 10 items and for the total exam, suggesting the use of CAI as a valuable patient... education tool for dysplasia and colposcopy. The unanimous recommendation by the participants for this type of program for future use suggests user friendliness and high satisfaction with this modality.
