We present results with large-scale neuroscience-inspired models for feature detection using multi-spectral visible/
infrared satellite imagery. We describe a model that uses an artificial neural network architecture and learning rules to build
sparse scene representations over an adaptive dictionary, fusing spectral and spatial textural characteristics of the objects
of interest. Our results with fast codes implemented on clusters of graphical processor units (GPUs) suggest that visual
cortex models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets
using spectral bands not found in natural visual systems.
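The abstract does not give the model's learning rules, but the core operation of building a sparse representation over a dictionary can be illustrated with a minimal matching-pursuit sketch; the dictionary, signal, and atom count below are hypothetical toy values, not the authors' implementation:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms=3):
    """Greedy sparse coding of signal x over a dictionary D whose
    columns are unit-norm atoms; returns a coefficient vector with
    at most n_atoms nonzero entries."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        correlations = D.T @ residual             # match every atom to the residual
        k = int(np.argmax(np.abs(correlations)))  # pick the best-matching atom
        coeffs[k] += correlations[k]
        residual = residual - correlations[k] * D[:, k]
    return coeffs

# Toy orthonormal dictionary in R^2; the "signal" lies on the first atom
D = np.eye(2)
code = matching_pursuit(np.array([3.0, 0.0]), D, n_atoms=1)
```

With an orthonormal dictionary a single pass recovers the exact coefficient; real adaptive dictionaries are overcomplete and learned from data.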

A new formalism has been developed that produces detection algorithms for model-based problems in which one or
more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any
composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min
conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio test is
intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and
represents a good approximation to the GLR test.
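For readers unfamiliar with the GLR test being approximated, a textbook case in which the maximization over the unknown parameter is solvable in closed form is the detection of a known signal with unknown amplitude in white Gaussian noise (this is an illustration of the GLRT in general, not the sensor fusion model of the abstract):

```python
import numpy as np

def glrt_unknown_amplitude(x, s):
    """GLR statistic for H1: x = a*s + n versus H0: x = n, with unknown
    amplitude a and unit-variance white Gaussian noise n. Maximizing the
    likelihood over a gives a_hat = (s.x)/(s.s), and the statistic
    reduces to T = (s.x)^2 / (s.s)."""
    return float((s @ x) ** 2 / (s @ s))

s = np.array([1.0, 1.0])                       # known signal shape
T = glrt_unknown_amplitude(np.array([2.0, 2.0]), s)
```

The statistic is compared against a threshold chosen for a desired false alarm rate.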

The problem of ballistic missile tracking in the presence of clutter is investigated. The probabilistic data association
filter (PDAF) is utilized as the basic filtering algorithm. We propose to use sequential Monte Carlo methods,
i.e., particle filters, aided with amplitude information (AI) in order to improve the tracking performance of a
single target in clutter when severe nonlinearities exist in the system. We call this approach "Monte Carlo
probabilistic data association filter with amplitude information (MCPDAF-AI)." Furthermore, we formulate a
realistic problem in the sense that we use simulated radar cross section (RCS) data for a missile warhead and a
cylindrical chaff decoy, generated with Lucernhammer, a state-of-the-art electromagnetic signature prediction code, to model
target and clutter amplitude returns as additional amplitude features which help to improve data association and
tracking performance. A performance comparison is carried out between the extended Kalman filter (EKF) and
the particle filter under various scenarios using single and multiple sensors. The results show that, when only
one sensor is used, the MCPDAF performs significantly better than the EKF in terms of tracking accuracy under
severe nonlinear conditions for ballistic missile tracking applications. However, when the number of sensors is
increased, even under severe nonlinear conditions, the EKF performs as well as the MCPDAF.
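The MCPDAF-AI implementation itself is not given in the abstract; the underlying sequential Monte Carlo machinery is the bootstrap (SIR) particle filter, one predict/update cycle of which can be sketched as follows, with toy stand-ins for the ballistic dynamics and radar measurement model:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, z, q=0.1, r=0.5):
    """One predict/update/resample cycle of a bootstrap particle filter.
    Toy nonlinear dynamics x' = x + 0.1*sin(x) + noise and nonlinear
    measurement z = x^2 + noise stand in for the ballistic and radar
    models of the abstract."""
    n = len(particles)
    # Predict: propagate each particle through the dynamics
    particles = particles + 0.1 * np.sin(particles) + q * rng.standard_normal(n)
    # Update: reweight by the Gaussian measurement likelihood
    weights = weights * np.exp(-0.5 * ((z - particles**2) / r) ** 2)
    weights /= weights.sum()
    # Resample to avoid weight degeneracy
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

particles = rng.normal(2.0, 0.5, 1000)      # prior cloud around x = 2
weights = np.full(1000, 1.0 / 1000)
particles, weights = particle_filter_step(particles, weights, z=4.0)
est = particles.mean()
```

Amplitude information enters a PDAF by multiplying each association hypothesis' weight by an amplitude likelihood ratio; that step is omitted here.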

A Bayesian network is a tree structure where each branch represents a classification candidate. The leaves of the tree
represent observable target features such as frequency or length. An optimized tree groups similar features together, e.g.,
frequency and pulse width, while collecting dissimilar or disparate information, e.g., spectral and kinematic data, all within
the same unifying structure. A vehicular track then is a subset of the a priori candidate library and contains only feasible
branches. The algorithm for updating the confidence of each feasible candidate according to Bayes' rule is embedded in
each track, as is the ability of a track to learn, apply a priori probability distributions, switch modes, switch among
kinematics models, apply tracking history to classification and apply classification history to tracking, and support multisensor
correlation and sensor fusion.
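The per-track Bayes' rule update of candidate confidences can be sketched as follows; the candidate library, priors, and feature likelihoods are hypothetical toy values:

```python
def bayes_update(priors, likelihoods):
    """Update candidate-class confidences with Bayes' rule.
    priors: {candidate: P(candidate)}; likelihoods: {candidate:
    P(observed feature | candidate)}. Returns normalized posteriors."""
    unnorm = {c: priors[c] * likelihoods[c] for c in priors}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# Toy library: two feasible candidates, one pulse-width observation
priors = {"fighter": 0.5, "airliner": 0.5}
likelihoods = {"fighter": 0.9, "airliner": 0.1}  # P(short pulse | class)
post = bayes_update(priors, likelihoods)
```

Repeating the update with each new observation, using the previous posterior as the next prior, is what lets classification history accumulate in the track.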

Robust fusion of data from disparate sensor modalities can provide improved target detection performance over that
attainable with the individual sensors. In particular, detection of low-radiance manmade objects or objects under shadow
obscuration in hyperspectral imagery (HSI) with acceptable false alarm rates has proven especially challenging. We have
developed a fusion algorithm for the enhanced detection of difficult targets when the HSI data is simultaneously
collected with LADAR data. Initial detections are obtained by applying a sub-space RX (SSRX) algorithm to the HSI
data. In parallel, a LADAR-derived digital elevation map (DEM) is segmented, and the coordinates of objects within a specific
elevation range and size are returned to the HSI processor for spectral signature extraction. Each extracted signature
that has not already been detected by SSRX is used in a secondary HSI detection step employing the adaptive cosine estimator
(ACE) algorithm. We show that the spatial distribution of ACE scores allows for confident discrimination between
background elevations and manmade objects. Key to cross-characterization of the data is the accurate co-alignment of
the image data. We have also developed an algorithm for automatic co-registration of LADAR and HSI imagery, based on
the maximization of mutual information, which can provide accurate, sub-pixel registration even in cases where the
imaging geometries for the two sensors differ. Details of both algorithms will be presented and results from application
to field data will be discussed.
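The ACE statistic used in the secondary detection step has a standard closed form: the squared cosine of the angle between a pixel and the target signature in background-whitened space. A minimal sketch, with a toy covariance and a hypothetical extracted signature:

```python
import numpy as np

def ace_score(x, s, cov):
    """Adaptive cosine estimator: squared cosine of the angle between
    pixel x and signature s after whitening by the background
    covariance; equals 1.0 for a perfect spectral match."""
    ci = np.linalg.inv(cov)
    return float((s @ ci @ x) ** 2 / ((s @ ci @ s) * (x @ ci @ x)))

cov = np.eye(3)                    # toy background covariance
sig = np.array([1.0, 0.0, 0.0])    # hypothetical extracted signature
score = ace_score(np.array([2.0, 0.0, 0.0]), sig, cov)  # scaled copy of sig
```

Because ACE is invariant to pixel scaling, it responds to spectral shape rather than brightness, which is what makes it useful for shadowed targets.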

An important component of cognitive robotics is the ability to mentally simulate physical processes and to
compare the expected results with the information reported by a robot's sensors. In previous work, we have proposed an
approach that integrates a 3D game-engine simulation into the robot control architecture. A key part of that architecture
is the Match-Mediated Difference (MMD) operation, an approach to fusing sensory data and synthetic predictions at the
image level. The MMD operation insists that the real and predicted scenes are similar in terms of the appearance of
the objects in the scene. This is an overly restrictive constraint on the simulation, since parts of the predicted scene may
not have been previously viewed by the robot.
In this paper we propose an extended MMD operation that relaxes the constraint and allows the real and
synthetic scenes to differ in some features but not in other, selected features. We define image difference operations that allow a
real image to be compared with a synthetic image generated from an arbitrarily colored graphical model of a scene. Scenes
with the same content show a zero difference. Scenes with varying foreground objects can be controlled to compare the
color, size and shape of the foreground.
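One simple way to realize a difference operation restricted to selected features is to mask out the regions or channels that should be ignored; this is a minimal illustration of the idea, not the authors' actual MMD operation:

```python
import numpy as np

def masked_difference(real, synthetic, compare_mask):
    """Difference real and synthetic images only where compare_mask is
    True, so that selected features (e.g. the arbitrary colors of the
    graphical model) are ignored while differences in the compared
    regions still register."""
    diff = np.abs(real.astype(float) - synthetic.astype(float))
    diff[~compare_mask] = 0.0
    return diff

real = np.array([[10, 10], [10, 50]])
synth = np.array([[10, 10], [10, 10]])
mask = np.array([[True, True], [True, False]])  # ignore bottom-right pixel
d = masked_difference(real, synth, mask)
```

Here the two images differ only in the masked-out pixel, so the resulting difference is zero, mimicking a "same content" verdict.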

This paper presents a biomimetic approach involving cognitive process modeling, for use in intelligent robot decision-making.
The principle of inner rehearsal, a process believed to occur in human and animal cognition, involves internal
rehearsing of actions prior to deciding on and executing an overt action, such as a motor action. The inner-rehearsal
algorithmic approach we developed is posed and investigated in the context of a relatively complex cognitive task, an
under-rubble search and rescue. The paper presents the approach developed, a synthetic environment that was also
developed to enable these studies, and the results to date. The work reported here is part of a Cognitive Robotics effort in
which we are currently engaged, focused on exploring techniques inspired by cognitive science and neuroscience
insights, towards artificial cognition for robotics and autonomous systems.

Why is there a perception problem in robotics? Given the increases in the speed of computer hardware and technology
that have followed Moore's Law, why haven't there been commensurate advances in computer perception technology,
which would enable a robot to respond appropriately to its environment? Perhaps the algorithms used for perception are
not appropriate for the problem? The computer vision problem was assumed to be easy, and the supposedly more
difficult challenges of problem solving and decision making were tackled first. As it turned out, problem solving and
decision making were handled relatively easily by symbolic representations and predicate logic, however, the perception
of the real world turned out to be much more difficult. What are the algorithms that have been used for perception in
robotics and why do they sometimes fail at reproducing human-like behavior? How can we learn from biological
systems which, through evolution, have made great advances in solving the difficult problems of perception and
classification?

There is strong evidence that multimodal biometric score fusion can significantly improve human identification
performance. Score-level fusion usually involves score normalization, score fusion, and a fusion decision. There are
several types of score fusion methods: direct combination of scores, classifier-based fusion, and density-based
fusion. Real applications require greater reliability in determining or verifying a person's identity. The goal
of this research is to improve the accuracy and robustness of human identification by using multimodal biometric score
fusion. Accuracy here means a high verification rate when tested on a closed dataset, or a high genuine accept rate under a low
false accept rate when tested on an open dataset, while robustness means that the fusion performance remains stable across varying
biometric scores. We propose a hidden Markov model (HMM) for multiple score fusion, where the biometric scores
include multimodal scores and multi-matcher scores. The state probability density functions in the HMM are
estimated by Gaussian mixture models. The proposed HMM model for multiple score fusion is accurate for identification and
flexible and reliable across biometrics. The proposed HMM method is tested on three NIST-BSSR1 multimodal
databases and on three face-score databases. The results show that the HMM method is an excellent and reliable score fusion
method.
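The full GMM-HMM fusion is beyond a short sketch, but the density-based flavor of score fusion can be illustrated with single-Gaussian class densities per matcher, a one-component stand-in for the GMM state densities of the abstract; all score statistics below are hypothetical:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def density_fusion_llr(scores, genuine_params, impostor_params):
    """Sum the per-matcher log-likelihood ratios under single-Gaussian
    genuine/impostor densities; a positive total favors 'genuine'."""
    llr = 0.0
    for s, (mg, sg), (mi, si) in zip(scores, genuine_params, impostor_params):
        llr += math.log(gaussian_pdf(s, mg, sg) / gaussian_pdf(s, mi, si))
    return llr

# Hypothetical score statistics for two matchers (e.g., face and fingerprint)
genuine_params = [(0.8, 0.1), (0.7, 0.1)]
impostor_params = [(0.2, 0.1), (0.3, 0.1)]
llr = density_fusion_llr([0.75, 0.72], genuine_params, impostor_params)
accept = llr > 0.0
```

Replacing the Gaussians with mixture densities and chaining the per-matcher decisions through hidden states yields the HMM formulation.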

DRDC Valcartier and MDA have created an advanced simulation testbed for the purpose of evaluating the effectiveness
of Network Enabled Operations in a Coastal Wide Area Surveillance situation, with algorithms provided by several
universities. This INFORM Lab testbed allows experimenting with high-level distributed information fusion, dynamic
resource management and configuration management, given multiple constraints on the resources and their
communications networks. This paper describes the architecture of INFORM Lab, the essential concepts of goals and
situation evidence, a selected set of algorithms for distributed information fusion and dynamic resource management, as
well as auto-configurable information fusion architectures. The testbed provides general services which include a multilayer
plug-and-play architecture, and a general multi-agent framework based on John Boyd's OODA loop. The testbed's
performance is demonstrated on two types of scenarios/vignettes: 1) cooperative search-and-rescue efforts, and 2) a noncooperative
smuggling scenario involving many target ships and various methods of deceit. For each mission, an
appropriate subset of Canadian airborne and naval platforms is dispatched to collect situation evidence, which is fused,
and then used to modify the platform trajectories for the most efficient collection of further situation evidence. These
platforms are fusion nodes which obey a Command and Control node hierarchy.

Information extraction from multi-sensor remote sensing imagery is an important and challenging task for many
applications such as urban area mapping and change detection. A special acquisition (orthogonal) geometry is of great
importance for optical and radar data fusion. This acquisition geometry makes it possible to minimize displacement effects due to
inaccuracy of the Digital Elevation Model (DEM) used for data ortho-rectification and to the existence of unknown 3D structures
in a scene. Final spatial alignment of the data is performed by a recently proposed co-registration method based on a Mutual
Information measure. For a combination of features originating from different sources, which are quite often noncommensurable,
we propose an information fusion framework called INFOFUSE consisting of three main processing
steps: feature fission (feature extraction aiming at complete description of a scene), unsupervised clustering (complexity
reduction and feature representation in a common dictionary) and supervised classification realized by Bayesian or
Neural networks. An example of urban area classification is presented for an orthogonal acquisition of spaceborne very
high resolution WorldView-2 and TerraSAR-X Spotlight imagery over Munich, southern Germany. Experimental
results confirm our approach and show great potential for other applications as well, such as change detection.
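The mutual information measure that drives this kind of co-registration can be computed from a joint histogram of the two images; a minimal sketch (the bin count and test images are arbitrary):

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information between two equally sized
    images; it peaks when the images are best aligned, which is what
    an MI-based registration search maximizes."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
img = rng.random((32, 32))
mi_self = mutual_information(img, img)                   # perfectly aligned copy
mi_rand = mutual_information(img, rng.random((32, 32)))  # unrelated image
```

MI is attractive for optical/radar pairs precisely because it rewards statistical dependence rather than identical intensities.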

Chemical and biological (CB) agent detection and effective use of these observations in hazard assessment models are
key elements of our nation's CB defense program that seeks to ensure that Department of Defense (DoD) operations are
minimally affected by a CB attack. Accurate hazard assessments rely heavily on the source term parameters necessary
to characterize the release in the transport and dispersion (T&D) simulation. Unfortunately, these source parameters are
often unknown and are based on rudimentary assumptions. In this presentation we describe an algorithm that utilizes
variational data assimilation techniques to fuse CB and meteorological observations to characterize agent release source
parameters and provide a refined hazard assessment. The underlying algorithm consists of a combination of modeling
systems, including the Second order Closure Integrated PUFF model (SCIPUFF), its corresponding Source Term
Estimation (STE) model, a hybrid Lagrangian-Eulerian Plume Model (LEPM), its formal adjoint, and the software
infrastructure necessary to link them. SCIPUFF and its STE model are used to calculate a "first guess" source estimate.
The LEPM and corresponding adjoint are then used to iteratively refine this release source estimate using variational
data assimilation techniques. This algorithm has undergone preliminary testing using virtual "single realization" plume
release data sets from the Virtual THreat Response Emulation and Analysis Testbed (VTHREAT) and data from the
FUSION Field Trials 2007 (FFT07). The end-to-end prototype of this system that has been developed to illustrate its
use within the United States (US) Joint Effects Model (JEM) will be demonstrated.

Large networks of disparate chemical/biological (C/B) sensors, MET sensors, and intelligence, surveillance, and
reconnaissance (ISR) sensors reporting to various command/display locations can lead to conflicting threat information,
questions of alarm confidence, and a confused situational awareness. Sensor netting algorithms (SNA) are being
developed to resolve these conflicts and to report high confidence consensus threat map data products on a common
operating picture (COP) display. A data fusion algorithm design was completed in a Phase I SBIR effort and
development continues in the Phase II SBIR effort. The initial implementation and testing of the algorithm has produced
some performance results. The algorithm accepts point and/or standoff sensor data, and event detection data (e.g., the
location of an explosion) from various ISR sensors (e.g., acoustic, infrared cameras, etc.). These input data are
preprocessed to assign estimated uncertainty to each incoming piece of data. The data are then sent to a weighted
tomography process to obtain a consensus threat map, including estimated threat concentration level uncertainty. The
threat map is then tested for consistency and the overall confidence for the map result is estimated. The map and
confidence results are displayed on a COP. The benefits of a modular implementation of the algorithm and comparisons
of fused and unfused data results will be presented. The metrics for judging sensor-netting algorithm performance are
warning time, threat map accuracy (as compared to ground truth), false alarm rate, and false alarm rate vs. reported threat
confidence level.

Currently there is no systematic framework for characterizing fused, multisensory systems, and therefore the comparison
of multiple independent systems is difficult without extensive field-testing. Development of a framework would allow
for theoretical comparisons and enable more rapid prototyping of fused sensor systems, guidance for design from
existing sensor components, and more effective engineering of new sensors optimized for use in fused sensor systems.
Recent research at NRL has focused on characterizing Fourier transform infrared spectroscopy (FTIR) and mass
spectrometry data for fused, multisensor applications to enhance chemical detection and discrimination in the presence of
complex interfering backgrounds. An information theoretic approach has been used to elucidate the information content
available from spectral data, quantify the ability of these sensing techniques to distinguish chemicals, and determine their
susceptibility to noise and resolution limitations. The approach has also been applied to feature extraction and data fusion
techniques on these data. Results characterizing the effectiveness of a fused multisensor system combining FTIR and
mass spectrometry are presented.

The efficient and timely management of imagery captured in the battlefield requires methods capable of searching
the voluminous databases and extracting highly symbolic concepts. When processing images, a semantic and
definition gap exists between machine representations and the user's language. Based on matrix completion
techniques, we present a fusion operator that fuses imagery and expert knowledge provided by user inputs during
post analysis. Specifically, an information matrix is formed from imagery and a class map as labeled by an expert.
From this matrix an image operator is derived for the extraction/prediction of information from future imagery. We
will present results using this technique on single-mode data.

Image fusion is a process that combines regions of images from different sources into a single fused image based on a
salience selection rule for each region. In this paper, we propose an algorithmic approach that uses a mask pyramid to
better localize the selection process. A mask pyramid operates at different scales of the image to improve the fused
image quality beyond a global selection rule. The proposed approach offers a generic methodology for applications in
image enhancement, high dynamic range compression, depth of field extension, and image blending. The mask pyramid
can also be encoded for intelligent analysis of source imagery. Several examples of this mask pyramid method are
provided to demonstrate its performance in a variety of applications. A new embedded system architecture that builds
upon the Acadia® II Vision Processor is proposed.

Multiple source band image fusion can sometimes be a multi-step process that consists of several intermediate
image processing steps. Typically, each of these steps is required to be in a particular arrangement in order to
produce a unique output image. GStreamer is an open source, cross platform multimedia framework, and using
this framework, engineers at NVESD have produced a software package that allows for real time manipulation
of processing steps for rapid prototyping in image fusion.

In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a
powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and
classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn,
enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of
ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed
at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles.
Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical
and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the
angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design
flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw; thus, the aspect angle of the observation is
limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw
observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly
improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach
is demonstrated through experimental studies.

Image fusion is used to combine multiple images of the same scene into a comprehensive representation. However, no single
image fusion method suits all fusion requirements in practice. In this paper, we introduce an image fusion
framework based on the wavelet transform, designed to satisfy as wide a range of fusion applications as possible. The
input images are first decomposed into the wavelet domain. The fusion rules for the different frequency coefficients are
discussed and adaptively assigned according to the properties of the subimages. Finally, the fused
image is reconstructed via the inverse wavelet transform. The experimental results show that our framework can
preserve most features of the original images, and the algorithm has some resistance to noise.
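As an illustration of the general scheme (not the authors' specific adaptive rules), a one-level Haar decomposition with averaged approximation coefficients and a choose-max rule on the detail coefficients can be sketched as:

```python
import numpy as np

def haar_decompose(img):
    """One-level 2D Haar transform: approximation plus 3 detail subimages."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def haar_reconstruct(a, h, v, d):
    """Exact inverse of haar_decompose."""
    img = np.zeros((2 * a.shape[0], 2 * a.shape[1]))
    img[0::2, 0::2] = a + h + v + d
    img[0::2, 1::2] = a - h + v - d
    img[1::2, 0::2] = a + h - v - d
    img[1::2, 1::2] = a - h - v + d
    return img

def fuse(img1, img2):
    """Average the approximations; keep the larger-magnitude detail
    coefficient (a common salience rule) from either input."""
    c1, c2 = haar_decompose(img1), haar_decompose(img2)
    a = (c1[0] + c2[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(c1[1:], c2[1:])]
    return haar_reconstruct(a, *details)

rng = np.random.default_rng(0)
img = rng.random((4, 4))
fused = fuse(img, img)   # fusing identical inputs reproduces the input
```

The framework of the abstract generalizes this by choosing the per-subband rules adaptively rather than fixing average/choose-max.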

Building on our previous work, we extend sonification techniques to common network security data. In this current
work, we examine packet flow and the creation of socket connections between a requestor's IP address and port number
with the server's IP address and port number. Our goals for the aural rendering are twofold: to make certain conditions
immediately apparent to untrained listeners, and to create a sound model capable of enough nuance that there is the
possibility of unexpected patterns becoming apparent to a seasoned listener. This system could be used to potentially
provide better cognitive refinement capabilities for data fusion systems, especially when multiple sources of data at
various levels of refinement are presented to the human analyst.

Current Army logistical systems and databases contain massive amounts of data that need an effective method to extract
actionable information. The databases do not contain root cause and case-based analysis needed to diagnose or predict
breakdowns. A system is needed to find data from as many sources as possible, process it in an integrated fashion, and
disseminate information products on the readiness of the fleet vehicles. 21st Century Systems, Inc. introduces the Agent-
Enabled Logistics Enterprise Intelligence System (AELEIS) tool, designed to assist logistics analysts with assessing the
availability and prognostics of assets in the logistics pipeline. AELEIS extracts data from multiple, heterogeneous data
sets. This data is then aggregated and mined for data trends. Finally, data reasoning tools and prognostics tools evaluate
the data for relevance and potential issues. Multiple types of data mining tools may be employed to extract the data, and
an information reasoning capability determines which tools to apply to extract information. This can be
visualized as a push-pull system where data trends fire a reasoning engine to search for corroborating evidence and then
integrate the data into actionable information. The architecture decides on what reasoning engine to use (i.e., it may start
with a rule-based method, but, if needed, go to condition based reasoning, and even a model-based reasoning engine for
certain types of equipment). Initial results show that AELEIS is able to alert the user to potential fault conditions
and root-cause information mined from a database.
