The detection of objects or persons is a common task in the fields of environmental surveillance, object observation, and threat defense. There are several approaches to automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character, and hence the data distortion, of most LiDAR systems.
The paper presents a solution for real-time data acquisition with a flash LiDAR sensor, with synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view, and steering of the sensor's pan-tilt head. As a result, attention is always focused on the object, independent of the object's behavior. Even for highly volatile and rapid changes in the direction of motion, the object is kept in the field of view.
The experimental setup used in this paper is realized with an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. The detection part can easily be replaced by any other object detection algorithm, making it possible to track nearly any object, for example a car, a boat, or a UAV at various distances.

Transient light imaging is an emerging technology and an interesting sensing approach for fundamental multidisciplinary research ranging from computer science to remote sensing. Recent developments in sensor technologies and computational imaging have made this emerging sensing approach a candidate for next-generation sensor systems with rapidly increasing maturity, although it still relies on laboratory technology demonstrations. At ISL, transient light sensing is investigated by time-correlated single photon counting (TCSPC). An eye-safe shortwave infrared (SWIR) TCSPC setup, consisting of an avalanche photodiode array and a pulsed fiber laser source, is used to investigate light sparsely scattered while propagating through air. Fundamental investigations of light in flight are carried out with the aim of reconstructing arbitrary light propagation paths. Light pulses are observed in flight at various propagation angles and distances. As demonstrated, arbitrary light paths can be distinguished due to a relativistic effect leading to a distortion of temporal signatures. A novel method analyzing the time difference of arrival (TDOA) is applied to determine the propagation angle and distance with respect to this relativistic effect. Based on our results, the performance of future laser warning receivers can be improved by the use of single photon counting imaging devices. They can detect laser light even when the laser does not directly hit the sensor or passes at a certain distance.
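The temporal distortion behind the TDOA analysis can be illustrated with a toy light-in-flight geometry (the function and geometry below are illustrative, not the paper's actual method): light scattered from a point s metres along the pulse path reaches a sensor at the origin after the emission delay s/c plus the transit time from the scattering point, so the observed temporal signature depends on the propagation angle.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def observed_arrival_times(p0, u, s):
    """Arrival time at a sensor placed at the origin of light scattered
    from points p0 + s*u along a pulse path (the pulse leaves p0 at t = 0):
    emission delay s/c plus transit time |p0 + s*u|/c."""
    p0 = np.asarray(p0, dtype=float)
    u = np.asarray(u, dtype=float)
    s = np.asarray(s, dtype=float)
    pts = p0[None, :] + s[:, None] * u[None, :]
    return (s + np.linalg.norm(pts, axis=1)) / C
```

For a pulse flying straight toward the sensor, all scattered light arrives nearly simultaneously (signature compression); for a pulse flying away, the signature is stretched to two metres of light travel per metre of path. Distinguishing these distortions is what allows the propagation angle to be recovered.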

In this paper, the potential of short-wavelength infrared laser gated viewing for penetrating pyrotechnic effects (smoke and light/heat) has been investigated by evaluating data from field trials. The potential of thermal infrared cameras for this purpose has also been considered, and the results have been compared to conventional visible cameras as a benchmark. The application area is soccer stadiums, where pyrotechnics are illegally burned in dense crowds, obstructing the view of stadium safety staff and police forces into the involved section of the stadium. Quantitative analyses have been carried out to identify sensor performance. Further, qualitative image comparisons are presented to give an impression of image quality during the disruptive effects of burning pyrotechnics.

On-going research to improve hyperspectral target detection generally focuses on statistical detector performance, reduction of background or environmental contributions to at-sensor radiance, dimension reduction and many other mathematical or physical techniques. These efforts are all aimed at improving target identification in a single scene or data cube. This focus on single scene performance is driven directly by the airborne collection concept of operations (CONOPS) of a single pass per target location. Today's pushbroom and whiskbroom sensors easily achieve single passes and single collects over a target location. If multiple passes are flown for multiple collects on the same location, the time scale for revisit is several minutes.

Emerging gimbaled hyperspectral imagers have the capability to collect multiple scans over the same target location in a time scale of seconds. The ability to scan the same location from slightly different collection geometries below the time scale of significant solar and atmospheric change forces us to reexamine the methods for target detection via the fundamental radiance equation. By expanding the radiance equation in the spatial and temporal dimensions, data from multiple hyperspectral images is used simultaneously for determining at-sensor radiance and surface leaving radiance with the ultimate goal of improving target detection.

This research reexamines the fundamental radiance equation for multiple scan collection geometries expanding both the spatial and temporal domains. In addition, our assumptions for determining at-sensor radiance are revisited in light of the increased dimensionality. The expanded radiance equation is then applied to data collected by a gimbaled long wave infrared hyperspectral imager. Initial results and future work are discussed.

Vehicle-mounted change detection systems improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. Analyzing and comparing color images depicting the same scene at different points in time requires compensating for color and lightness inconsistencies caused by the differing illumination conditions. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. Combining the two methods with a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different points in time, and a comparison with previous Retinex-based approaches has been carried out.
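The two building blocks can be sketched in a few lines of NumPy/SciPy. The combination below is a hypothetical stand-in: the paper's actual color processing function and its efficient SII-based implementation differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=5.0):
    """Center/surround Retinex: log ratio of each pixel to its Gaussian-
    blurred surround, computed per channel."""
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    surround = gaussian_filter(img, sigma=(sigma, sigma, 0))
    return np.log(img) - np.log(surround)

def gray_world(img):
    """Gray World hypothesis: scale each channel so all channel means match."""
    img = img.astype(np.float64)
    means = img.mean(axis=(0, 1))
    return img * (means.mean() / means)

def retinex_gray_world(img, sigma=5.0):
    """Illustrative combination: Retinex for local contrast, then
    Gray World gains as the color processing step."""
    r = single_scale_retinex(img, sigma)
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)  # normalize to [0, 1]
    return gray_world(r * 255.0)
```

After the Gray World step, the three channel means coincide, which is the color-constancy property the hypothesis enforces.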

Remote detection of vibrational features of an object is important for many short-range civil applications, but it is also of interest for long-range applications in the defense and security area. The well-established laser Doppler vibrometry technique is widely used as a high-sensitivity, non-contact method. The development of camera technology in recent years has made image-based methods reliable passive alternatives for vibration and dynamic measurements. Very sensitive applications have been demonstrated using high-speed cameras in the visual spectral range. However, for long-range applications, where turbulence becomes a limiting factor, image acquisition in the short- to mid-wave IR region would be desirable, as atmospheric effects weaken at longer wavelengths.

In this paper, we experimentally investigate vibration detection from short- and mid-wave IR image sequences using a high-speed imaging technique. Experiments on the extraction of vibration signatures under strong local turbulence conditions are presented.

Remote sensing features are varied and complicated; no dictionary offers comprehensive coverage for reconstruction, so reconstruction precision is not guaranteed. To address these problems, a novel reconstruction method using multiple compressed sensing measurements, based on energy compensation, is proposed in this paper. The multiple measured data and the corresponding coding matrices compose the reconstruction equation, which is solved locally through the Orthogonal Matching Pursuit (OMP) algorithm to obtain an initial reconstructed image. Further, assuming that local image patches share the same compensation gray value, a mathematical model of the compensation value is constructed by minimizing the error between the multiple estimated measurements and the actual measurements. After solving the minimization, the compensation values are added to the initial reconstructed image to obtain the final energy-compensated image. The experiments show that the energy compensation method is superior to reconstruction without compensation, and that our method is well suited to remote sensing features.
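The OMP building block can be sketched as follows. This is a generic textbook implementation, not the paper's multi-measurement formulation or its energy compensation model: greedily select the dictionary column most correlated with the residual, then re-fit all selected columns by least squares.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: pick the column of A most correlated
    with the residual, re-fit the selected columns by least squares,
    and repeat k times."""
    residual = y.astype(np.float64).copy()
    support, coef = [], np.array([])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

For a well-conditioned dictionary and a signal that is truly k-sparse, the k re-fitting steps recover the sparse coefficient vector exactly.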

Recent world events have highlighted that the proliferation of UAVs brings with it a new and rapidly increasing threat for national defense and security agencies. While many of the reported UAV incidents seem to indicate that there was no terrorist intent behind them, it is not unreasonable to assume that it may not be long before UAV platforms are regularly employed by terrorists or other criminal organizations. The flight characteristics of many of these mini- and micro-platforms present challenges for current systems, which have been optimized over time to defend against traditional air-breathing airborne platforms. Many programs to identify cost-effective measures for detection, classification, tracking, and neutralization have begun in the recent past. In this paper, ISL shows how the performance of a UAV detection and tracking concept based on acousto-optical technology can be substantially increased through active imaging.

This study investigated the use of an eye-safe lidar (1550 nm), with performance equivalent to that of an ordinary military laser range finder, for cloud monitoring. The aim was to combine lidar data with camera images in the visible, short-wave infrared (SWIR), and infrared (IR) to better estimate cloud density and cloud coverage.

The measurements concentrated on low clouds, mostly of the cumulus type. We found that these clouds, between 0 and 2 km, often showed a layered structure and a limited optical density, probably allowing for observation through the cloud. This information is hard to obtain from a passive EO sensor alone. This was supported both by simulating the lidar response from thin clouds and by inverting the measured lidar waveform.

The comparison between the camera image intensities and the integrated, range-corrected lidar signals showed both negative and positive correlations. The highest positive correlation was obtained by comparing the lidar signal with the cloud temperature as derived from the FLIR camera. However, there were many cases in which one or two of the camera intensities correlated negatively with the lidar signal. We could, for example, observe that under certain conditions a cloud that was dark in the SWIR appeared white in the visible camera, and vice versa. Examples of lidar and image data are presented and analyzed.

The rapid 2-axis scanning lidar prototype was developed to demonstrate high-precision single-pixel linear-mode lidar performance. The lidar system is a combined integration of components from various commercial products, allowing for future customization and performance enhancements. The intent of the prototype scanner is to demonstrate current state-of-the-art high-speed linear scanning technologies.

The system consists of two pieces: the sensor head and the control unit. The sensor head can be installed up to 4 m from the control box and houses the lidar scanning components and a small RGB camera. The control unit houses the power supplies and ranging electronics necessary to operate the components inside the sensor head.

This paper will discuss the benefits of a 2-axis scanning linear-mode lidar system, such as range performance and a user-selectable FOV. Other features include real-time processing of 3D image frames consisting of up to 200,000 points per frame.

The effect of optical turbulence along a downward slant path on the probability of exceeding the maximum permissible exposure (MPE) level of a laser is discussed.

Optical turbulence is generated by fluctuations (variations) in the refractive index of the atmosphere, which are caused in turn by changes in atmospheric temperature and humidity. The refractive index structure parameter, Cn2, is the single most important parameter in the description of turbulence effects on the propagation of electromagnetic radiation. In the boundary layer, the lowest part of the atmosphere, where the ground directly influences the atmosphere, Cn2 in Sweden varies between about 10^-17 and 10^-12 m^-2/3, see Bergström et al. [5]. Along a horizontal path, Cn2 is often assumed to be constant. The variation of Cn2 along a slant path is described by the Tatarski model as a function of height to the power of -4/3 or -2/3, depending on day or night conditions.
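The Tatarski height scaling can be written as a one-line profile; the function name and reference height below are illustrative choices, not the paper's notation.

```python
def cn2_tatarski(cn2_ref, h, h_ref=1.0, daytime=True):
    """Tatarski height scaling of the refractive index structure parameter:
    Cn2(h) = Cn2(h_ref) * (h / h_ref)**p, with p = -4/3 by day
    and p = -2/3 by night."""
    p = -4.0 / 3.0 if daytime else -2.0 / 3.0
    return cn2_ref * (h / h_ref) ** p
```

For example, with Cn2 = 10^-14 m^-2/3 at 1 m, the daytime profile drops by a factor of 16 at 8 m height, while the nighttime profile drops only by a factor of 4.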

The hazard of laser-induced eye damage is calculated for a long downward slant path. The probability of exceeding the maximum permissible exposure (MPE) level is given as a function of distance, in comparison with the nominal ocular hazard distance (NOHD), for adopted levels of turbulence. Furthermore, calculations are carried out for a laser pointer or a designator laser aimed from high altitude and long distance down to a ground target. The example used shows that there is a 10% risk of exceeding the MPE at a distance 2 km beyond the NOHD (48 km in this example), due to a turbulence level of 5·10^-15 m^-2/3 at ground height. The influence of turbulence along a horizontal path on the NOHD has been shown before by Zilberman et al. [4].

There is a strong desire to reduce the size and weight of single- and multiband IR imaging systems in Intelligence, Surveillance and Reconnaissance (ISR) operations on hand-held, helmet-mounted or airborne platforms. NRL is developing new IR glasses that expand the glass map and provide compact solutions for multispectral imaging systems. These glasses were specifically designed to have comparable glass molding temperatures and thermal properties to enable lamination and co-molding of the optics, which reduces the number of air-glass interfaces (lower Fresnel reflection losses). Our multispectral optics designs using these new materials demonstrate reduced size and complexity and improved performance. This presentation will cover the new optical materials and multispectral designs, as well as the fabrication and characterization of the new optics.
Additionally, graded index (GRIN) optics offer further potential for both weight savings and increased performance but have so far been limited to visible and NIR bands (wavelengths shorter than about 0.9 µm). NRL is developing a capability to extend GRIN optics to longer wavelengths in the infrared by exploiting diffused IR transmitting chalcogenide glasses. These IR-GRIN lenses are compatible with all IR wavebands (SWIR, MWIR and LWIR) and can be used alongside conventional materials. The IR-GRIN lens technology, design space and anti-reflection considerations will be presented in this talk.

A microwave-induced thermoacoustic detection system for embedded targets in lossy media is presented. The system achieves reliable detection of 5 cm × 5 cm × 2 cm targets embedded in a large Agarose sample at a 20 cm acoustic standoff. Repeated measurements across different target and sample configurations confirm the system’s ability to distinguish between a target signal and a baseline control signal generated by the package without embedded targets. Post-processing techniques including filtering and baseline signal characterization further improve detection performance.

The report presents the results of experimental research on an angle measurement system intended for measuring angles between the normals of mirrors that define directions in space. The dynamic mode of system operation is defined by continuous rotation of a platform carrying the autocollimating null-indicator. The angle measurements are provided by a holographic optical encoder.

For high-spatial-resolution optical remote sensing imaging systems, the performance of the sampling imaging system is traditionally designed and evaluated according to the system SNR and the system MTF at the Nyquist frequency. On the basis of information theory, this paper proposes an optimization design and evaluation criterion based on the full remote sensing imaging chain: information density. It combines various imaging quality parameters, such as MTF, SNR, and sideband aliasing, and includes the influence on imaging quality of the scene, atmosphere, remote sensor, and satellite platform in the in-orbit imaging chain. System designs and experiments at different resolutions were also conducted. The experimental results showed that information density can be used to evaluate the performance of a sampling imaging system and to guide the optimization design of high-spatial-resolution optical remote sensing systems.

Sensitive detection of mid-infrared light (2 to 5 μm wavelengths) is crucial to a wide range of applications. Many of these applications require high-sensitivity photodiodes, or even avalanche photodiodes (APDs), with the latter generally accepted as more desirable for providing higher sensitivity when the optical signal is very weak. Using the semiconductor InAs, whose bandgap is 0.35 eV at room temperature (corresponding to a cut-off wavelength of 3.5 μm), Sheffield has developed high-sensitivity APDs for mid-infrared detection for one such application: satellite-based greenhouse gas monitoring at a 2.0 μm wavelength. With a responsivity of 1.36 A/W at unity gain at 2.0 μm (84% quantum efficiency), increasing to 13.6 A/W (avalanche gain of 10) at -10 V, our InAs APDs meet most of the key requirements of the greenhouse gas monitoring application when cooled to 180 K. In the past few years, efforts were also made to develop planar InAs APDs, which are expected to offer greater robustness and manufacturability than the mesa APDs employed previously. Planar InAs photodiodes are reported with reasonable responsivity (0.45 A/W at a 1550 nm wavelength), and planar InAs APDs exhibited avalanche gain as high as 330 at 200 K. These developments indicate that InAs photodiodes and APDs are maturing, gradually realising the potential indicated by early demonstrations first reported nearly a decade ago.

We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real time, within video sequences acquired by a steady infrared camera. The algorithm is suitable for different situations, since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects down to single-pixel size, either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach makes the background estimation robust to changes in scene illumination and to noise, and prevents it from being biased by the transit of moving objects. Care was taken to avoid computationally costly procedures, in order to ensure real-time performance even on low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independent of background and object characteristics. In addition, the detection map was produced frame by frame in real time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, such as search and rescue, defence and disaster monitoring.
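A minimal sketch of pixel-level background estimation with transit rejection, assuming a simple exponential running mean and a fixed noise level; the estimator described in the abstract is more elaborate, so treat the parameters and update rule below as illustrative.

```python
import numpy as np

def update_background(bg, frame, alpha=0.02, k=3.0, noise_sigma=2.0):
    """Exponential running-mean background estimate per pixel. Pixels far
    from the current background (likely transiting objects) are excluded
    from the update so they do not bias the estimate."""
    f = frame.astype(np.float64)
    stable = np.abs(f - bg) < k * noise_sigma
    out = bg.copy()
    out[stable] += alpha * (f[stable] - out[stable])
    return out

def detect(bg, frame, k=3.0, noise_sigma=2.0):
    """Detection map: pixels deviating from the background beyond k sigma,
    whether hotter or colder than the background."""
    return np.abs(frame.astype(np.float64) - bg) > k * noise_sigma
```

Because detected pixels are excluded from the update, a hot single-pixel target passing through the scene leaves the background estimate untouched.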

An infrared image contains spatial and radiative information about objects in a scene. Two challenges are to classify pixels in a cluttered environment and to detect partly obscured or buried objects such as mines and IEDs. Infrared image sequences provide additional temporal information, which can be utilized for more robust object detection and improved classification of object pixels. A manual evaluation of multi-dimensional data is generally time-consuming and inefficient, and therefore various algorithms are used. With a principal component analysis (PCA), most of the information is retained in a new, reduced system with fewer dimensions. The principal component coefficients (loadings) are used here both for classifying detected object pixels and for reducing the number of images in the analysis by computing score vectors. For the datasets studied, the number of required images can be reduced significantly without loss of detection and classification ability. This allows for sparser sampling and the scanning of larger areas when using a UAV, for example.
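Loadings and score images can be obtained from an image sequence by treating each frame as one variable and each pixel as one observation. This is a generic PCA-via-SVD sketch, not the authors' exact pipeline:

```python
import numpy as np

def pca_scores(frames, n_components):
    """PCA on an image sequence: each frame is one variable, each pixel one
    observation. Returns score images (the reduced sequence) and the
    per-frame loadings."""
    X = np.stack([f.ravel() for f in frames], axis=1).astype(np.float64)
    X = X - X.mean(axis=0)                      # center each frame
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = Vt[:n_components].T              # (frames, components)
    scores = X @ loadings                       # (pixels, components)
    shape = frames[0].shape
    return [col.reshape(shape) for col in scores.T], loadings
```

When the sequence is effectively low-rank, a handful of score images reproduces the centered sequence, which is what permits the reduction in the number of analyzed images.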

Screening of aerial images covering large areas is important for many applications, such as surveillance, tracing, or rescue tasks. To reduce the workload of image analysts, automatic detection of candidate objects is required. In general, object detection is performed by applying classifiers or a cascade of classifiers within a sliding-window algorithm. However, the huge number of windows to classify, especially in the case of multiple object scales, makes these approaches computationally expensive. To overcome this challenge, we reduce the number of candidate windows by generating so-called object proposals: a set of candidate regions in an image that are likely to contain an object. We apply the Selective Search approach, which has been broadly used as a proposal method for detectors like R-CNN or Fast R-CNN. Here, a set of small regions is generated by initial segmentation, followed by hierarchical grouping of the initial regions to generate proposals at different scales. To reduce the computational costs of the original approach, which consists of 80 combinations of segmentation settings and grouping strategies, we only apply the most appropriate combination. To this end, we analyze the impact of varying segmentation settings, different merging strategies, and various colour spaces by calculating the recall with regard to the number of object proposals and the intersection over union between generated proposals and ground truth annotations. As aerial images differ considerably from the datasets typically used for exploring object proposal methods, in particular in object size and the image fraction occupied by an object, we further adapt the Selective Search algorithm to aerial images by replacing the random order of generated proposals with a weighted order based on proposal size and by integrating a termination criterion for the merging strategies.
Finally, the adapted approach is compared to the original Selective Search algorithm and to baseline approaches such as sliding window on the publicly available DLR 3K Munich Vehicle Aerial Image Dataset to show how clearly the number of candidate windows to classify can be reduced.
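The two evaluation measures used to compare proposal settings can be computed as follows; the box format (x1, y1, x2, y2) and the 0.5 threshold are assumptions of this sketch.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def recall_at(proposals, ground_truth, thresh=0.5):
    """Fraction of ground-truth boxes covered by at least one proposal
    with IoU above the threshold."""
    hits = sum(any(iou(g, p) >= thresh for p in proposals)
               for g in ground_truth)
    return hits / len(ground_truth)
```

Plotting `recall_at` against the number of proposals, for each combination of segmentation settings and merging strategies, is the comparison described above.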

In the past decades, laser-aided electro-optical sensing has reached high maturity, and several commercial systems are available on the market for various but specific applications. These systems can be used for detection, i.e. imaging, as well as ranging. They cover laser scanning devices like LiDAR and staring full-frame imaging systems like laser gated viewing or LADAR. The sensing capabilities of these systems are limited by physical parameters (such as FPA array size, temporal bandwidth, scanning rate, and sampling rate) and are adapted to specific applications. Changing a system parameter, such as increasing the spatial resolution, implies building a new sensing device at high development cost, or purchasing and installing a completely new sensor unit. Computational imaging approaches can help to build sensor devices with flexible or adaptable sensing capabilities. In particular, compressed sensing is an emerging computational method and a promising candidate for realizing super-resolution sensing, with the possibility of adapting its performance to various sensing tasks. With compressed sensing it is possible to increase sensing capabilities, gaining higher spatial and/or temporal resolution. The sensing capabilities then no longer depend only on the physical performance of the device but also on the computational effort, and can be adapted to the application. In this paper, we demonstrate and discuss laser-aided imaging using CS for super-resolution tempo-spatial imaging and ranging.

The use of Improvised Explosive Devices (IEDs) has increased significantly and is a globally widespread phenomenon. Although measures can be taken to anticipate and prevent an opponent's ability to deploy IEDs, detection of IEDs will always be a central activity. There is a wide range of useful sensors, but simple means, such as a pair of binoculars, can also be crucial to detect IEDs in time.

Disturbed earth (disturbed soil), such as freshly dug areas, dumps of clay on top of smooth sand, or depressions in the ground, can be an indication of a buried IED. This paper briefly describes how a field trial was set up to provide a realistic data set on a road section containing areas with disturbed soil due to buried IEDs. The road section was imaged using a forward-looking, land-based sensor platform consisting of visual imaging sensors together with long-, mid-, and shortwave infrared imaging sensors.

The paper investigates the presence of discriminatory information in surface texture by comparing areas with disturbed and undisturbed soil. The investigation is conducted for the different wavelength bands available. To extract features that describe texture, image processing tools such as 'Histogram of Oriented Gradients', 'Local Binary Patterns', 'Lacunarity', 'Gabor Filtering' and 'Co-Occurrence' are used. It is found that texture as characterized here may provide discriminatory information for detecting disturbed soil, but the signatures we found are weak and cannot be used alone in, e.g., a detector system.

Accurate geo-registration of acquired imagery is an important task when using unmanned aerial vehicles (UAVs) for video reconnaissance and surveillance. As an example, change detection needs accurately geo-registered images for selecting and comparing co-located images taken at different points in time. One challenge of small UAVs lies in their unstable flight behavior and the use of low-weight cameras. Thus, there is a need to stabilize and register the UAV imagery by image processing methods, since direct approaches based only on positional information from a GPS and on attitude and acceleration measured by an inertial measurement unit (IMU) are not accurate enough. In order to improve this direct geo-registration (or "pre-registration"), image matching techniques are applied to align the UAV imagery to geo-registered reference images. The main challenge consists in matching images taken from different sensors at different times of day and seasons. In this paper, we present evaluation methods for measuring the performance of image registration algorithms with respect to multi-temporal input data. They are based on augmenting a set of aligned image pairs with synthetic pre-registrations to form an evaluation data set including truth transformations. The evaluation characteristics are based on quantiles of transformation residuals at certain control points. For a test site, video frames of a UAV mission and several ortho images from a period of 12 years were collected, and synthetic pre-registrations corresponding to real flight parameters and registration errors were computed. Two algorithms, A1 and A2, based on extracting key-points with a floating-point descriptor (A1) and a binary descriptor (A2), were applied to the evaluation data set. As a result, algorithm A1 turned out to perform better than A2. Using affine or Helmert transformation types, both algorithms perform better than in the projective case. Furthermore, the evaluation classifies the ortho images with respect to their degree of difficulty, and even for the most unfavorable ortho image, the evaluation characteristics yield better results than those attached to the default pre-registration. Finally, the proposed evaluation methods have proven to deliver valuable results even for input data with a high degree of difficulty.
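The quantile-of-residuals characteristic can be sketched for homography-type transformations (3x3 matrices); the control points and quantile levels below are placeholders, not the paper's settings.

```python
import numpy as np

def residual_quantiles(H_est, H_true, control_points, qs=(0.5, 0.9)):
    """Quantiles of registration residuals at control points: map the
    points with the estimated and the true 3x3 transformations (in
    homogeneous coordinates) and take quantiles of the displacement norms."""
    pts = np.hstack([control_points, np.ones((len(control_points), 1))])

    def apply(H):
        w = pts @ H.T
        return w[:, :2] / w[:, 2:3]   # de-homogenize

    residuals = np.linalg.norm(apply(H_est) - apply(H_true), axis=1)
    return np.quantile(residuals, qs)
```

Affine and Helmert transformations are special cases of the same 3x3 form (bottom row (0, 0, 1)), so the same characteristic covers all three transformation types compared above.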

In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region-of-interest (ROI) selection, contour segmentation, object matching, and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle, the system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
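Once the relative position and velocity of the detected object are known, a simple constant-velocity closest-approach estimate gives the distance and time to collision. This is an illustrative sketch of that final step, not the paper's full system of equations:

```python
import numpy as np

def time_to_collision(rel_pos, rel_vel):
    """Closest-approach time and distance under a constant-velocity model:
    p(t) = rel_pos + t * rel_vel, with |p(t)| minimized over t >= 0."""
    p = np.asarray(rel_pos, dtype=float)
    v = np.asarray(rel_vel, dtype=float)
    v2 = v @ v
    if v2 == 0.0:                       # no relative motion
        return float('inf'), float(np.linalg.norm(p))
    t = max(-(p @ v) / v2, 0.0)         # past closest approach -> diverging
    return t, float(np.linalg.norm(p + t * v))
```

An object 1 km ahead closing at 100 m/s yields a time to collision of 10 s; a 100 m lateral offset at the same closing speed gives the same closest-approach time but a 100 m miss distance.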

Three-dimensional super-resolution range-gated imaging (3D SRGI) is a new technique for high-resolution 3D sensing. Up to now, 3D SRGI has been developed with two range-intensity correlation algorithms: the trapezoidal algorithm and the triangular algorithm. To obtain a high depth-to-resolution ratio in 3D imaging, a coding method based on the trapezoidal algorithm was developed for 3D SRGI in 2011. In this paper, we propose range-intensity coding based on the triangular algorithm and hybrid range-intensity coding based on the triangular and trapezoidal algorithms together. Theoretical models predicting the maximum coding bin number are developed for the different coding methods. In these models, the maximum coding bin number is 7 for three coding gate images under the triangular algorithm, and the maximum is extended to 16 under the hybrid algorithm. Coding examples with 7 bins and 16 bins are also given in this paper. The three coding methods are compared by their depth-to-resolution ratio, defined as the ratio between the 3D imaging depth and the product of the range resolution and the number of raw gate images; the hybrid coding method has the highest depth-to-resolution ratio. A higher depth-to-resolution ratio means a better 3D imaging capability of 3D SRGI.
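The idea behind range-intensity correlation can be illustrated with an idealized two-gate example: when two gates have overlapping triangular range-intensity profiles, the intensity ratio rises linearly with range within the shared depth. This sketch is illustrative only; the exact profiles in practice depend on the laser pulse and gate shapes, and the paper's coding schemes build on many such gates.

```python
import numpy as np

def triangular_range(I1, I2, r_start, depth):
    """Idealized two-gate triangular decoding: over the shared depth,
    gate 1 intensity falls and gate 2 intensity rises linearly with
    range, so range follows from the intensity ratio."""
    ratio = I2 / np.maximum(I1 + I2, 1e-12)   # guard against empty pixels
    return r_start + depth * ratio
```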