Imagery from unmanned aerial systems (UAS) needs compression prior to transmission to a receiver for further processing. Once received, automated image exploitation algorithms, such as frame-to-frame registration, target tracking, and target identification, are performed to extract actionable information from the data. Unfortunately, in a compress-then-analyze system, exploitation algorithms must contend with artifacts introduced by lossy compression and transmission. Identifying metrics that enable compression engines to predict exploitation degradation could allow encoders to tailor compression for specific exploitation algorithms. This study investigates the impact of H.264 and JPEG2000 compression on target tracking through the use of a multi-hypothesis blob tracker. The quality metrics used include PSNR, VIF, and IW-SSIM.

Unmanned aerial systems (UAS) equipped with electro-optic (EO) full motion video (FMV) sensors often need to transmit image sequences over a limited communications channel, requiring either intense compression, reduced frame rate, or reduced resolution to reach the receiver. In an attempt to improve the rate-distortion performance of common video compression algorithms, such as H.264/AVC, several groups are developing compression methods to improve video quality at low bitrates. Concepts of these next generation methods, including H.265/HEVC, Google's VP9, and Xiph.org's Daala, are examined in contrast to H.264/AVC, BBC's Dirac, and Motion-JPEG2000 within the context of aerial surveillance. We present a compression performance analysis of these algorithms according to PSNR.

Automated pattern recognition has been around for several decades, and in general it has been applied successfully to a large variety of technical problems. This paper presents the challenges inherent in wide area motion imagery, which are problematic for the normal pattern recognition process and associated pattern recognition systems. The paper describes persistent wide area motion imagery and its role as a manifold for overlaying episodic sensors of various modalities to present a better view of activity to an analyst. An underlying framework, SPADE, is introduced, and a layered sensing viewer, Pursuer, is also presented to demonstrate the utility of creating a unified view of the sensing world for an analyst.

This paper considers a time domain ultrasonic tomographic imaging method in a multi-static configuration using the propagation and backpropagation (PBP) method. Under this imaging configuration, ultrasonic excitation signals from the sources probe the object embedded in the surrounding medium, and the scattered signals are recorded by the receivers. Starting from the nonlinear ultrasonic wave propagation equation and using the recorded time domain signals from all the receiver sensors, the object is reconstructed. The conventional PBP method is a modified version of the Kaczmarz method that iteratively updates the estimate of the object's acoustical potential distribution within the image area; the sources take turns exciting the acoustical field until all of them have been used. The proposed multi-static image reconstruction method instead utilizes a significantly reduced number of sources that are excited simultaneously. We consider two imaging scenarios with regard to source positions: in the first, sources are uniformly positioned on the perimeter of the imaging area; in the second, sources are randomly positioned. Numerical experiments demonstrate that the proposed multi-static tomographic imaging method using the multiple source excitation schemes results in fast reconstruction and achieves high resolution imaging quality.
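Since the conventional PBP method is described as a modified Kaczmarz iteration, the underlying row-projection update can be illustrated on a generic linear system. The following is a minimal sketch of the classic Kaczmarz method for a consistent system Ax = b, not the PBP algorithm itself; the function name and sweep count are chosen for illustration.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=100):
    """Classic Kaczmarz iteration: cycle through the rows of A,
    projecting the current estimate onto the hyperplane defined
    by each row equation a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # orthogonal projection onto the i-th row's hyperplane
            x += (b[i] - a @ x) / (a @ a) * a
    return x
```

For a consistent system the iterates converge to a solution; PBP-style methods replace the linear rows with backpropagated wave-equation residuals but keep this sweep-and-update structure.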

We present an architecture for layered sensing constructed on open source and government off-the-shelf software. This architecture shows how leveraging existing open-source software allows practical graphical user interfaces, along with the underlying database and messaging architecture, to be rapidly assimilated and utilized in real-world applications. As an example, we present a system composed of a database and a graphical user interface that can display wide area motion imagery, ground-based sensor data, and overlays from narrow field of view sensors in one composite image, with the sensor data and other metadata rendered as separate layers on the display. We further show how development time is greatly reduced by utilizing open-source software and integrating it into the final system design. The paper describes the architecture and the pros and cons of the open-source approach, with results for a layered sensing application using data from multiple disparate sensors.

Traditional detection system performance metrics, such as probability of detection and probability of false alarm, depend only on how the system responds to individual target-sized regions-of-interest (ROIs). The composition of the larger scene does not affect those metrics. There are circumstances, however, where a user of a detection system wants to know, "For a given cue, what is the probability that the cue is correct?" or perhaps the detector is being used to determine a property of the overall scene. As an example of the latter case, suppose the detection system is looking for diseased cells in a tissue sample. Even if only one diseased cell exists, the whole "scene" represents a diseased individual. In both cases, the user perspective and the scene-based perspective, the natural performance metrics depend on the scene content, especially the numbers of target and confuser ROIs. This paper defines scene-content dependent (SCD) performance metrics for detection systems, develops a theory for computing them, and illustrates properties of the metrics with examples. The SCD performance theory enabled determination of the example metrics in about two hours of computation, whereas Monte Carlo methods would have taken almost a year and direct testing would have been almost impossible.
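As a back-of-envelope illustration of why such metrics must depend on scene content, the expected fraction of correct cues can be computed from the per-ROI detection and false-alarm probabilities together with the numbers of target and confuser ROIs. This is a hedged sketch of the basic expected-value argument only, not the paper's SCD theory; the function name is hypothetical.

```python
def cue_correct_prob(pd, pfa, n_targets, n_confusers):
    """Expected fraction of cues that are true detections,
    given per-ROI probabilities and the scene composition."""
    true_cues = pd * n_targets        # expected detections of real targets
    false_cues = pfa * n_confusers    # expected false alarms on confusers
    return true_cues / (true_cues + false_cues)
```

With Pd = 0.9 and Pfa = 0.01, a scene with 10 targets and 1000 confusers yields 9 expected true cues against 10 expected false alarms, so fewer than half the cues are correct even though the per-ROI metrics look excellent.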

Recently there has been a renewed interest in the notion of deploying large numbers of networked sensors for applications ranging from environmental monitoring to surveillance. In a typical scenario a number of sensors are distributed in a region of interest. Each sensor is equipped with sensing, processing, and communication capabilities. The information gathered from the sensors can be used to detect, track, and classify objects of interest. In many applications the sensor locations are crucial for interpreting the data collected from those sensors. Scalability requirements dictate sensor nodes that are inexpensive devices without dedicated localization hardware such as GPS. Therefore the network has to rely on information collected within the network to self-localize. In the literature a number of algorithms have been proposed for network localization that use measurements informative of range, angle, or proximity between nodes. Recent work by Patwari and Hero relies on sensor data without explicit range estimates; the assumption is that the correlation structure in the data is a monotone function of the intersensor distances. In this paper we propose a new method based on unsupervised learning techniques to extract location information from the sensor data itself. We consider a grid of virtual nodes and fit the grid to the actual sensor network data using the method of self-organizing maps. Known sensor network geometry can then be used to rotate and scale the grid to a global coordinate system. Finally, we illustrate how the virtual nodes' location information can be used to track a target.
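The grid-fitting step can be sketched with a minimal self-organizing map: a lattice of virtual nodes whose weights are pulled toward observed data points, with a neighborhood function that preserves the grid topology. This is a generic SOM illustration under assumed names and schedules, not the paper's specific algorithm.

```python
import numpy as np

def som_fit(data, grid_shape=(5, 5), n_iters=2000, seed=0):
    """Fit a 2-D grid of virtual nodes to observations with a
    self-organizing map; node weights drift toward the data while
    neighboring grid nodes stay close in data space."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # lattice coordinates, used to measure neighborhood distance on the map
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    weights = rng.uniform(data.min(0), data.max(0), (rows * cols, data.shape[1]))
    for t in range(n_iters):
        lr = 0.5 * (1 - t / n_iters)               # decaying learning rate
        sigma = max(0.5, 2.0 * (1 - t / n_iters))  # shrinking neighborhood
        x = data[rng.integers(len(data))]          # random training sample
        bmu = np.argmin(((weights - x) ** 2).sum(1))  # best-matching unit
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        weights += lr * h[:, None] * (x - weights)
    return weights
```

After convergence the fitted weights play the role of virtual node positions, which a subsequent rigid rotation and scaling maps into global coordinates.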

An analysis of training techniques for a machine classifier is presented using three methods of training the weights of the classifier. The decision regions for a four class problem are presented to illustrate the differences made by each of the training methods.

In this paper we discuss the design of sequential detection networks for nonparametric sequential analysis. We present a general probabilistic model for sequential detection problems where the sample size as well as the statistics of the sample can be varied. A general sequential detection network handles three decisions. First, the network decides whether to continue sampling or stop and make a final decision. Second, in the case of continued sampling the network chooses the source for the next sample. Third, once the sampling is concluded the network makes the final classification decision. We present a Q-learning method to train sequential detection networks through reinforcement learning and cross-entropy minimization on labeled data. As a special case we obtain networks that approximate the optimal parametric sequential probability ratio test. The performance of the proposed detection networks is compared to optimal tests using simulations.
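As a reference point, Wald's sequential probability ratio test (the parametric optimum that the trained networks approximate) can be sketched in a few lines. This is an illustrative sketch with assumed names; unnormalized likelihoods suffice because only the ratio enters the statistic.

```python
import numpy as np

def sprt(samples, f0, f1, alpha=0.05, beta=0.05):
    """Wald's SPRT: accumulate the log-likelihood ratio over samples
    and stop as soon as it crosses a decision threshold."""
    a = np.log(beta / (1 - alpha))       # accept-H0 threshold
    b = np.log((1 - beta) / alpha)       # accept-H1 threshold
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += np.log(f1(x) / f0(x))
        if llr <= a:
            return "H0", n
        if llr >= b:
            return "H1", n
    return "continue", len(samples)
```

A sequential detection network generalizes this by also choosing which source to sample next, rather than drawing from a single fixed stream.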

With the recent release of the movie AI, there is interest in artificial intelligence and in just how far we can take computational intelligence. This paper discusses the advances made in the computational intelligence arena and brings perspective to what may be possible in the future.

In this paper we consider the design of intelligent control policies for water distribution systems. The controller presented in this paper is based upon a hybrid system that utilizes dynamic programming and rules as design constraints to minimize average costs over a long time horizon under constraints on operation parameters. The method is very general and is reported here as a controller for a water distribution system. In the example presented we obtain a 12.5 percent reduction in energy usage over the optimal level-based control design. We present the guiding principles used in the design and the results for a simulated system that is representative of a typical water pumping station. The design is fully adaptable to changing operating conditions and has applicability to a wide range of scheduling problems.

A series of challenges to making computational intelligence viable in the real world is presented. These challenges include the applicability of artificial neural networks, fuzzy logic and evolutionary computation to limited data set problems. Various design and use perspectives will be presented to explain the challenges. A special panel of experts will address the challenges.

Neural networks are well known for their ability to perform pattern recognition tasks. This paper discusses the use of parallel neural network hardware for performing pattern recognition tasks. We address the need for neural network hardware and how it can dramatically improve system performance both in training and in actual applications. The use of specialized parallel processing hardware is discussed as well as alternative hardware and software approaches. Finally we give some comparisons between multi-processor computer architecture, Pentium class microcomputers and custom hardware.

Helicopters are highly non-linear systems whose dynamics change significantly with respect to environmental conditions. The system parameters also vary heavily with respect to velocity. These nonlinearities limit the use of traditional fixed controllers, since they can make the aircraft unstable. The purpose of this paper is to contribute to the development of an 'intelligent' control system that can be applied to complex problems such as this in real-time. Using a slowly changing model and a simplified nonlinear model as examples, a neural network based controller is shown to have the ability to learn from these example plants and to generalize this knowledge to previously unseen plants. The adaptability comes from a neural network that adjusts coefficients of the controller in real-time while running on the Accurate Automation neural network processor.

The Accurate Automation Corporation (AAC) neural network processor (NNP) module is a fully programmable multiple instruction multiple data (MIMD) parallel processor optimized for the implementation of neural networks. The AAC NNP design fully exploits the intrinsic sparseness of neural network topologies. Moreover, by using a MIMD parallel processing architecture one can update multiple neurons in parallel with efficiency approaching 100 percent as the size of the network increases. Each AAC NNP module has 8 K neurons and 32 K interconnections and is capable of 140,000,000 connections per second with an eight processor array capable of over one billion connections per second.

The main thrust of this paper is to encourage the use of neural networks to process raw data for subsequent classification. This article addresses neural network techniques for processing raw pixel information. For this paper, the definition of neural networks includes conventional artificial neural networks, such as multilayer perceptrons, as well as biologically inspired processing techniques. Previously, we have successfully used the biologically inspired Gabor transform to process raw pixel information and segment images. In this paper we extend those ideas to both segment and track objects in multiframe sequences. It is also desirable for the neural network processing the data to learn features for subsequent recognition. A common first step for processing raw data is to transform the data and use the transform coefficients as features for recognition. For example, handwritten English characters become linearly separable in the feature space of the low frequency Fourier coefficients. Much of human visual perception can be modelled by assuming low frequency Fourier coefficients as the feature space used by the human visual system. The optimum linear transform, with respect to reconstruction, is the Karhunen-Loeve transform (KLT). It has been shown that some neural network architectures can compute approximations to the KLT. The KLT coefficients can be used for recognition as well as for compression. We tested the use of the KLT on the problem of interfacing a nonverbal patient to a computer. The KLT uses an optimal basis set for object reconstruction; for object recognition, however, the KLT may not be optimal.
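The KLT mentioned above can be sketched directly: project mean-centered data onto the leading eigenvectors of the sample covariance, which is the linear basis minimizing mean-square reconstruction error. This is a generic textbook sketch (function names are illustrative), not the paper's neural network approximation.

```python
import numpy as np

def klt(data, k):
    """Karhunen-Loeve transform: keep the top-k eigenvectors of the
    sample covariance and return the coefficients plus the
    reconstruction built from only those k coefficients."""
    mean = data.mean(axis=0)
    centered = data - mean
    cov = centered.T @ centered / len(data)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    basis = vecs[:, ::-1][:, :k]          # top-k principal directions
    coeffs = centered @ basis             # transform coefficients
    recon = coeffs @ basis.T + mean       # reconstruction from k coefficients
    return coeffs, recon
```

The reconstruction optimality is exactly the property noted in the abstract; nothing in the construction optimizes class separability, which is why the KLT may be suboptimal for recognition.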

This paper describes the NeuralGraphics software environment used to run interactive neural network training experiments. The NeuralGraphics environment is a collection of software tools, graphical displays, and demonstrations that allow users to easily adapt many of the current neural network paradigms to their particular classification problem. The paper discusses the paradigms implemented in the NeuralGraphics environment as well as the data files required to train and test the learning capability of selected neural networks.

A neural-based optical image segmentation scheme for locating potential targets in cluttered FLIR images is presented. The advantage of such a scheme is speed, i.e., the speed of light. Such a design is critical to achieve real-time segmentation and classification for machine vision applications. The segmentation scheme used was based on texture discrimination and employed biologically based orientation specific filters (wavelet filters) as its main component. These filters are the well-understood impulse response functions of mammalian vision systems from input to striate cortex. By using the proper choice of aperture pair separation, dilation, and orientation, targets in FLIR imagery were optically segmented. Wavelet filtering is illustrated for glass template slides, as well as segmentation for static and real-time FLIR imagery displayed on a liquid crystal television.


The course starts with the history of research into biological neural networks and its bearing on unsolved problems in information processing. The technology of physiologically motivated information processing to solve engineering problems has advanced: neural networks for finding patterns in data have progressed out of the laboratory and into products. This course provides the background needed to understand and apply this technology for recognizing patterns in data.

Neural networks have been around for over forty years. This course presents many examples of artificial neural networks and provides the attendee with a thorough understanding of the most popular neural networks such as back propagation trained feed-forward neural networks, self-organizing feature maps, adaptive resonance theory (ART), generalized linear and hybrid neural networks. The attendee is given the theoretical background needed to understand why one network or combination of networks works on a given problem but may not be a good choice for others. The instructor introduces the latest algorithms and their applications to many engineering problems.

This course provides attendees with a basic working knowledge of artificial neural network design. The course concentrates on various types of neural networks and where they can be used to solve common classification problems. Many practical and useful examples are included throughout the course.
