Multimedia shooting training systems are increasingly being used in the training of security staff and uniformed services. An advanced training system, SPARTAN, for the simulation of small-arms shooting has been designed and manufactured by Autocomp Management Ltd. and the Military Institute of Armament Technology for the Polish Ministry of National Defence.

SPARTAN is a stationary device designed to teach, monitor and evaluate the targeting of small arms and to prepare soldiers for:
• firing live ammunition at open ranges at combat targets and silhouettes
• detection, classification and engagement of real targets in different terrains, weather conditions and times of day
• teamwork as a squad during missions using different types of arms
• appropriate reactions in atypical scenarios.
The training set, which can be installed in any room, consists of:
• a projection system that generates realistic, high-resolution 3D imagery of the battlefield (such as a combat shooting range)
• a system that tracks weapon aiming points
• a sound system that delivers realistic mapping of the acoustic surroundings
• an operator station from which the training is conducted and controlled
• a central processing unit based on PC computers, equipped with specialist software realizing the individual system functions
• units of smart weapons equipped with radio communication modules, injection laser diodes and a pneumatic reloading system.

The system makes it possible to train by firing in dynamic scenarios, using combat weapons and live ammunition against visible targets moving on a screen. An infrared camera is used to detect the point of impact of each projectile.

In military applications, laminates reinforced with aramid, carbon, and glass fibers are used for the construction of products for protection against light ballistic threats. The material layers can differ greatly in their physical properties, so such materials represent a difficult inspection task for many traditional techniques of non-destructive testing (NDT). Defects that can appear in this type of multi-layered composite material are usually flaws in the bonding between composite layers, and stratifications or delaminations caused by fragment and bullet impacts. IR thermographic NDT is considered a candidate technique for detecting such defects. One of the active IR thermography methods used in non-destructive testing is vibrothermography. The term vibrothermography was coined in the 1990s to denote thermal test procedures designed to assess hidden heterogeneities of structural materials from surface temperature fields under cyclic mechanical loads. A similar procedure can be performed with sonic and ultrasonic stimulation of the material, because the temperature rise is caused by internal friction between the defect walls under the stimulating mechanical waves. If the cyclic loading does not exceed the elastic limit of the material and the rate of change is not large, the heat loss due to thermal conductivity is small and the test object returns to its original shape and temperature. The most commonly used stimulation is ultrasonic, and the corresponding testing technique is ultrasonic infrared thermography. Ultrasonic IR thermography is based on two basic phenomena. First, the elastic properties of defects differ from their surroundings, and acoustic damping and heating are always larger in damaged regions than in undamaged or homogeneous areas. Second, the heat transfer in the sample depends on its thermal properties.
In this paper, both modelling and experimental results illustrating the advantages and limitations of ultrasonic IR thermography in inspecting multi-layered aramid composite materials are presented.

Today’s infrared imaging guided missiles face many challenges. With the development of target stealth, new-style IR countermeasures and penetration technologies, as well as the growing complexity of operational environments, infrared imaging guided missiles must meet higher requirements for efficient target detection, resistance to interference and jamming, and operational adaptability in complex, dynamic operating environments. Missile-borne infrared imaging detection systems are constrained by practical considerations such as cost, size, weight and power (SWaP), and lifecycle requirements. Future-generation infrared imaging guided missiles need to be resilient to changing operating environments and capable of doing more with fewer resources. Advanced IR imaging detection and information exploitation technologies are the key technologies that will shape the future direction of IR imaging guided missiles, and research on them will support the development of more robust and efficient missile-borne infrared imaging detection systems. Novel IR imaging technologies, such as infrared adaptive spectral imaging, are the key to effectively detecting, recognizing and tracking targets in complicated operating and countermeasure environments. Innovative techniques for exploiting the target, background and countermeasure information provided by the detection system are the basis for the missile to recognize targets and counter interference, jamming and countermeasures. Modular hardware and software development is the enabler for implementing multi-purpose, multi-function solutions. Uncooled IRFPA detectors and high-operating-temperature IRFPA detectors, as well as commercial-off-the-shelf (COTS) technology, will support the implementation of low-cost infrared imaging guided missiles. In this paper, the current status and features of missile-borne IR imaging detection technologies are summarized, and the key technologies and their development trends are analyzed.

During the design of a system employing thermal cameras, one always faces the problem of choosing the camera type best suited for the task. In many cases such a choice is far from optimal, and there are several reasons for that. System designers often favor tried and tested solutions they are used to; they do not follow the latest developments in the field of infrared technology, and sometimes their choices are based on prejudice rather than facts. The paper presents the results of measurements of the basic parameters of MWIR and LWIR thermal cameras, carried out in a specialized testing laboratory. The measured parameters are decisive for the image quality generated by thermal cameras. All measurements were conducted according to current procedures and standards. However, the camera settings were not optimized for specific test conditions or parameter measurements; instead, the real settings used in normal camera operation were applied, in order to obtain realistic camera performance figures. For example, there were significant differences between measured values of noise parameters and the catalogue data provided by manufacturers, due to the application of edge detection filters used to increase detection and recognition ranges. The purpose of this paper is to help in choosing the optimal thermal camera for a particular application, answering the question whether to opt for a cheaper microbolometer device or a slightly better (in terms of specifications) yet more expensive cooled unit. Measurements and analysis were performed by qualified personnel with several dozen years of experience in both designing and testing thermal camera systems with both cooled and uncooled focal plane arrays. Cameras of similar array sizes and optics were compared, and for each tested group the best performing devices were selected.

An optical-electronic system for long-range observation has been developed and tested. The system can be used for detecting and tracking objects at night at long distances. It can also work in conditions of limited atmospheric transparency (in the presence of atmospheric phenomena such as rain, snow, drizzle and fog). With the help of the system it is possible to distinguish details of objects 0.5 m in size at distances of up to 7 km.

A method for distance determination with range-gated viewing systems, suitable for an arbitrary shape of the illumination pulse, is proposed. The method is based on finding the delay time at which the maximum of the return pulse energy occurs. The position of the maximum depends on the pulse and gate durations and, generally speaking, on the pulse shape. If the pulse length is less than or equal to the gate duration, the delay time corresponding to the maximum does not depend on the pulse shape. At equal pulse and gate durations there is a strict local maximum, which turns into a plateau when the pulse is shorter than the gate. The delay time corresponding to the strict local maximum, or to the far boundary of the plateau (where the non-strict maximum lies), is directly related to the distance to the object. These findings are confirmed by analytical relationships for trapezoidal pulses and by numerical results for the real pulse shape. To verify the proposed method, we used a vertical wall located at distances from 15 to 120 m as the observed object. The delay time was changed discretely in increments of 5 ns. The maximum of the signal was determined by visual observation of the object on the monitor screen. The distance determined by the proposed method coincided with direct measurement to within 1-2 m, which is comparable with the delay time step multiplied by half the speed of light. The results can be useful in the development of 3-D vision systems.
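The range relation underlying the method, distance equal to the peak-energy gate delay times half the speed of light, can be sketched as follows (a minimal illustration; the function name and units are our own, not the authors'):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_delay(delay_ns: float) -> float:
    # The gate delay at which the return-pulse energy peaks corresponds
    # to the round-trip time of flight, hence the factor of 1/2.
    return C * delay_ns * 1e-9 / 2.0

# A 5 ns delay step maps to a range step of about 0.75 m, consistent
# with the reported 1-2 m agreement with direct measurement.
step_m = distance_from_delay(5.0)
```

This also shows why the 5 ns discretization bounds the achievable accuracy: each delay step corresponds to roughly 0.75 m of range.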

Information management is an inseparable part of the command process. As a result, the person making decisions at the command post interacts with data-providing devices in various ways. The process of tool virtualization can introduce a number of significant modifications into the design of solutions for management and command. The general idea involves replacing the user interface of physical devices with a digital representation (so-called virtual instruments). A more advanced level of system “digitalization” is the use of mixed reality environments.

In solutions using augmented reality (AR), a customized HMI is displayed to the operator as he approaches each device. Device identification is done by image recognition of photo codes. Visualization is achieved by an (optical) see-through head-mounted display (HMD). Control can be performed, for example, by means of a handheld touch panel.

Using an immersive virtual environment, the command center can be digitally reconstructed. A workstation then requires only a VR system (HMD) and access to the information network. The operator can interact with devices in the same way as in the real world (for example, with virtual hands).

Thanks to procedures such as central-vision analysis and eye tracking, MR systems offer another useful feature: reduced requirements for system data throughput, since at any given moment the operator focuses on a single device.

Experiments carried out using the Moverio BT-200 and SteamVR systems, together with the results of experimental application testing, clearly indicate the ability to create a fully functional information system with the use of mixed reality technology.

Optical encryption with spatially incoherent illumination does not suffer from speckle noise and does not require a holographic registration setup as coherent techniques do. However, since only the light intensity distribution is considered, the mean value of the image to be encrypted is always above zero, which leads to an intense zero-spatial-frequency peak in the image spectrum. Consequently, in the case of spatially incoherent illumination, neither the image spectrum nor the encryption key spectrum can be white. This can be used to crack the encryption system. If the encryption key is very sparse, the encrypted image might contain parts of, or even the whole, unhidden original image. In the case of denser keys, the original image boundaries might still be partially visible. This does not provide the correct decryption key, but it allows the search for one to be narrowed significantly. Therefore, in this paper a new attack method on schemes of optical encryption with spatially incoherent illumination is presented. The method is based on detection of the original image boundaries in the encrypted image. Because encryption is accomplished via optical convolution of the original image with the encryption key, the encryption key can be found if the original image is known. In the proposed method, in place of the original image (which is unknown to the attacker), we use random matrices. The keys reconstructed in this way are extremely noisy even for the simplest keys, but after binarization they indicate the areas where encryption key points may be located. For the simplest keys the proposed method allows the correct key to be acquired; for complex keys it allows the search for one to be narrowed. Results of numerical experiments on breaking the system of optical encryption with spatially incoherent illumination are presented.
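The leakage that the attack exploits is easy to see in miniature: since encryption is a convolution of the image with the key, a sparse key merely superimposes a few shifted copies of the original, leaving it essentially unhidden. A minimal 1-D sketch (toy data and function names of our own, not the authors' implementation):

```python
def circular_convolve(image, key):
    # Discrete circular convolution: incoherent optical encryption
    # modelled as intensity convolution of the image with the key.
    n = len(image)
    return [sum(image[(i - j) % n] * key[j] for j in range(n))
            for i in range(n)]

image = [0, 0, 3, 1, 4, 1, 5, 0, 0, 0]
key = [1 if j in (0, 4) else 0 for j in range(10)]  # very sparse key

encrypted = circular_convolve(image, key)
# With only two key points, the "encrypted" signal is just the original
# plus a copy of itself shifted by 4 samples: boundaries stay visible.
```

With denser keys the copies overlap and blur, which is why the attack then only narrows the search rather than recovering the key outright.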

An intelligent underground fiber-optic perimeter security system is presented. Its structure, operation, software, and hardware with neural network elements are described. The system allows not only establishing the fact of a perimeter violation but also locating it. This is achieved through the use of WDM technology for spectral division of the information channels. A quasi-distributed optoelectronic recirculation system is used as a discrete sensor. The principle of operation is based on registering the change of the recirculation period in a closed optoelectronic circuit at different wavelengths when the optical fiber is exposed to microstrain. Microstrain of the fiber introduces additional power loss for the optical pulse propagating in the fiber, which causes a time delay resulting from the shift of the switching moments of the threshold device. To separate the signals generated by an intruder from noise and interference, a signal analyzer based on the principle of a neural network is used. The system detects a walking, running or crawling intruder, and also registers attempts to dig under the perimeter line. These alarm systems can be used to protect the perimeters of facilities such as airports, nuclear reactors, power plants, warehouses, and other extended territories.

Modern electro-optical surveillance and reconnaissance systems require a tracking capability to obtain exact images of a target, or to accurately direct the line of sight to a target that is moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function, to minimize the overshoot in the tracking motion and avoid missing the target. The scheme is to limit the acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques in a system model of a DIRCM, simulate the operational environment, and validate the performance on the actual equipment.

In this paper, the authors try to determine a procedure for selecting one or another type of sensor as a function of the object under observation, the background and the environmental conditions. In surveillance activities related to different missions and scenarios occurring in daytime and/or nighttime, the proper choice and use of video surveillance sensors is of great importance. Starting from specific surveillance scenarios, such as surveillance of the sky to detect drones, or surveillance of a ground area to detect man-made objects or intruders, this paper approaches the problem of image appearance in the VIS, SWIR and LWIR spectral ranges, using different passive surveillance technologies. Relevant images are comparatively presented in relation to theoretical quantifications made through mathematical models or software simulations.

Starting from a few targets and backgrounds with known spectral reflectivity or emissivity, the contrast was used to show its influence on the signal strength reaching the surface of the video detector (imager) under similar environmental conditions. Finally, the authors identify certain characteristics of the electro-optical system itself that most strongly influence the strength and quality of the optical signal with respect to the observation distance of the target. The possibility of using an active technology instead of a passive one, by introducing a pulsed laser illuminator, is also analyzed. The use of polarizing filters is also considered, at this stage only in laboratory conditions, in order to improve the observability of an object in some special environmental circumstances.

Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo and video cameras is the noise of the photosensor pixels. A camera’s photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For the measurement of camera noise characteristics, the most widely used methods are standard ones (for example, EMVA Standard 1288). These allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for the measurement of the temporal noise of photo and video cameras based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time, using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). The experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
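The reason two frames suffice can be sketched as follows: subtracting two frames of the same scene cancels the pattern (spatial) component, and the variance of the difference is twice the temporal noise variance, which for shot noise grows linearly with the signal per the Poisson model. The synthetic illustration below (our own names and numbers, with Gaussian-approximated shot noise, not the authors' ASNT implementation) shows the two-frame estimator recovering the injected noise level:

```python
import random

def temporal_noise_std(frame1, frame2):
    # Pattern noise is identical in both frames and cancels in the
    # difference; Var(f1 - f2) = 2 * Var(temporal noise).
    diff = [a - b for a, b in zip(frame1, frame2)]
    m = sum(diff) / len(diff)
    var = sum((d - m) ** 2 for d in diff) / (len(diff) - 1)
    return (var / 2.0) ** 0.5

random.seed(1)
signal = 400.0  # mean signal level, e.g. in electrons
# Shot noise: Poisson, approximated as Gaussian with sigma = sqrt(signal)
f1 = [random.gauss(signal, signal ** 0.5) for _ in range(20000)]
f2 = [random.gauss(signal, signal ** 0.5) for _ in range(20000)]

estimate = temporal_noise_std(f1, f2)  # should be close to sqrt(400) = 20
```

In the real method the segmentation over a nonuniform target supplies many such signal levels at once, yielding the noise-versus-signal curve from a single frame pair.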

This paper presents a modified two-point calibration algorithm for infrared focal plane arrays (IRFPAs), together with a useful method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction (NUC) consists in using the pixel response change, determined at the actual operating conditions relative to the reference ones by means of a shutter, to compensate pixel temporal drift. This approach permits the pixel offsets to be estimated efficiently and also removes any optics shading effect from the corrected output image. Moreover, the proposed NUC algorithm is easy to implement in hardware. To show the efficiency of the modified two-point calibration algorithm, test results for a microbolometer IRFPA are presented.
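The generic two-point scheme with shutter-based offset refresh can be sketched as follows: per-pixel gains come from two uniform reference frames, and the shutter, presenting a uniform scene, supplies the current per-pixel offset drift. This is a toy 1-D illustration with hypothetical names, not the paper's hardware implementation:

```python
def compute_gain(cold, hot):
    # Per-pixel gain from two uniform reference frames (two-point step).
    mc, mh = sum(cold) / len(cold), sum(hot) / len(hot)
    return [(mh - mc) / (h - c) for c, h in zip(cold, hot)]

def offsets_from_shutter(shutter, gain):
    # The shutter presents a uniform scene: after gain correction, each
    # pixel's deviation from the frame mean is its current offset drift.
    corrected = [g * s for g, s in zip(gain, shutter)]
    m = sum(corrected) / len(corrected)
    return [c - m for c in corrected]

def nuc_correct(raw, gain, offset):
    return [g * r - o for r, g, o in zip(raw, gain, offset)]

# Synthetic pixels with nonuniform response: raw_i = a_i * scene + b_i
a = [0.9, 1.0, 1.1, 1.05]
b = [5.0, -3.0, 2.0, 0.0]
frame = lambda scene: [ai * scene + bi for ai, bi in zip(a, b)]

gain = compute_gain(frame(10.0), frame(50.0))
offset = offsets_from_shutter(frame(20.0), gain)
out = nuc_correct(frame(35.0), gain, offset)
# After correction the output is uniform across pixels.
```

Refreshing only the offsets from the shutter is what lets the correction track temporal drift without re-measuring the two reference temperatures.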

The electronic structure of the functional region of an interband cascade infrared photodetector, designed to operate with a cut-off wavelength of ~10.7 μm, is calculated using a second-nearest-neighbor sp3s* tight-binding model with spin-orbit interactions. The effective bandgaps and the alignment of the band edges are presented. The lattice mismatch of each region to the GaSb substrate is determined. The influence of InAs incorporation into the InSb interfacial layer is investigated. It is shown that up to 5% InAs addition to the InSb interface in the InAs/GaSb superlattice absorber is allowed if efficient carrier transport is to be maintained. Furthermore, an interface of up to x = 2% InAsxSb1-x can be used in the proposed InAs/AlSb superlattice intraband relaxation region while keeping its proper operation.

Fundamental and technological issues associated with the development and exploitation of the most advanced infrared technologies are discussed. Both photon and thermal classes of detectors are considered. Special attention is directed to HgCdTe ternary alloys, type II superlattices (T2SLs), barrier detectors, quantum wells, extrinsic detectors, and uncooled thermal bolometers.

The sophisticated physics associated with antimonide-based bandgap engineering will give new impetus to, and interest in, the development of infrared detector structures. An important advantage of T2SLs is the high quality, high uniformity and stable nature of the material. In general, III-V semiconductors are more robust than their II-VI counterparts due to stronger, less ionic chemical bonding. As a result, III-V-based FPAs excel in operability, spatial uniformity, temporal stability, scalability, producibility, and affordability – the so-called “ibility” advantages.

In well-established uncooled imaging, microbolometer arrays are clearly the most widely used technology. Microbolometer detectors are now produced in larger volumes than all other IR array technologies combined. Present state-of-the-art microbolometers are based on polycrystalline or amorphous materials, typically vanadium oxide (VOx) or amorphous silicon (a-Si), with only modest temperature sensitivity and noise properties. Efforts today are mainly focused on pixel reduction and performance enhancement.

Imaging through scattering media is a highly sought capability for military, industrial, and medical applications. Unfortunately, nearly all recent progress has been achieved in microscopic light propagation and/or light propagation through thin or weak scatterers, which is mostly pertinent to the medical research field. Sensing at long ranges through extended scattering media, for example turbid water or dense fog, still represents a significant challenge, and the best results have been demonstrated using conventional approaches of time- or range-gating. The imaging range of such systems is constrained by their ability to distinguish the few ballistic photons that reach the detector from the background, scattered, and ambient photons, as well as from detector noise. Holography can potentially enhance time-gating by taking advantage of extra signal filtering based on the coherence properties of the ballistic photons, as well as by employing coherent addition of multiple frames. In a holographic imaging scheme, ballistic photons of the imaging pulse are reflected from a target and interfere with the reference pulse at the detector, creating a hologram. Related approaches were demonstrated previously in one-way imaging through thin biological samples and other microscopic-scale scatterers. In this work, we investigate the performance of holographic imaging systems under conditions of extreme scattering (less than one signal photon per pixel), demonstrate the advantages of coherent addition of images recovered from holograms, and discuss the dependence of image quality on the ratio of the signal and reference beam powers.

Laser range-gated viewing experiments in the eye-safe spectral region are conducted, in which a semiconductor-based laser illuminator is paired with a corresponding image detector able to operate in a high-frequency, high-sensitivity shutter mode. After experimental validation of the camera developed for accumulation operation, a high-power semiconductor-based illuminator was designed and realized. This technology can be used to develop efficient, compact, high-average-power SWIR illuminators. In a first step, images of different scenes were recorded in a test tunnel, and the results are compared to those recorded simultaneously with a solid-state laser illuminator working in flash mode. Both results are similar in terms of image intensity, whereas the semiconductor-based recordings exhibit lower speckle noise and better homogeneity. In a second step, outdoor experiments were conducted under daylight conditions such as full sunshine and cloudy weather, and also at night. No significant image degradation is measured, even with longer integration times. These results underline the potential of the SWIR accumulation mode for outdoor and long-range active imaging applications.

The objective is to identify the chemical composition of (isotropic and homogeneous) thin liquid and gel films on various surfaces by their infrared reflectance spectra. A bistatic optical sensing concept is proposed here in which a multi-wavelength laser source and a detector are physically displaced from each other. With the aid of the concept apparatus proposed, key optical variables can be measured in real time. The variables in question (substance thickness, refractive index, etc.) are those whose un-observability causes many types of monostatic sensor (in use today) to give ambiguous identifications. Knowledge of the aforementioned key optical variables would allow an adaptive signal-processing algorithm to make unambiguous identifications of the unknown chemicals by their infrared spectra, despite their variable presentations. The proposed bistatic sensor system consists of an optical transmitter and an optical receiver. The whole system can be mounted on a stable platform. Both the optical transmitter subsystem and the optical receiver subsystem contain auxiliary sensors to determine their relative spatial positions and orientations. For each subsystem, these auxiliary sensors include an orientation sensor, and rotational sensors for absolute angular position. A profilometer-and-machine-vision subsystem is also included. An important aspect of determining the necessary optical variables is an aperture that limits the interrogatory beams to a coherent pair, rejecting those resulting from successive multiple reflections. A set of equations is developed to characterize the propagation of a coherent pair of frequency-modulated thin beams through the system. It is also shown that frequency modulation can produce easily measurable beat frequencies for determination of sample thicknesses on the order of microns to millimeters. 
Also shown is how the apparatus’s polarization features allow it to measure the refractive index of any isotropic, homogeneous dielectric surface on which the unknown substance can sit. Concave, convex and flat supporting surfaces and menisci are discussed.
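The beat-frequency thickness determination can be illustrated with the standard frequency-modulation relation: the front- and back-surface reflections differ in optical path by 2nd, and a linear frequency sweep of rate S turns that delay into a beat frequency f_b = S·2nd/c. The numerical sketch below uses assumed values; the symbols and the sweep rate are illustrative, not taken from the paper:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def beat_frequency(thickness_m, n, sweep_rate_hz_per_s):
    # Round-trip optical delay through the film is 2*n*d/c; a linear
    # frequency sweep converts that delay into a beat frequency.
    tau = 2.0 * n * thickness_m / C
    return sweep_rate_hz_per_s * tau

def thickness_from_beat(f_beat, n, sweep_rate_hz_per_s):
    # Inverting the relation recovers the film thickness.
    return f_beat * C / (2.0 * n * sweep_rate_hz_per_s)

# A 100 um film with n = 1.5 under an assumed 1e15 Hz/s sweep gives a
# beat of roughly 1 kHz -- easily measurable, as the abstract states.
fb = beat_frequency(100e-6, 1.5, 1e15)
```

The same inversion applies across the micron-to-millimeter range mentioned in the text, with the beat frequency scaling linearly in thickness.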

Electro-optical sensors, like unprotected human eyes, are extremely sensitive to laser radiation and can be permanently damaged by direct or reflected beams. The laser hazard to a detector or eye depends on the interaction between the laser beam and the medium through which it traverses. The environmental conditions, including terrain features, atmospheric particulate and water content, and turbulence, may alter the laser’s effect on the detector or eye. It is possible to estimate the performance of an electro-optical system as long as the atmospheric propagation of the laser beam can be adequately modeled.

More recent experiments and modeling of atmospheric optics phenomena, such as the inner scale effect, aperture averaging, atmospheric attenuation in the NIR-SWIR, and Cn2 modeling, justify an update of previous eye/detector safety modeling. In the present work, the influence of the atmospheric channel on laser safety for personnel and instrumentation is shown on the basis of theoretical and experimental data on laser irradiance statistics for different atmospheric conditions. A method for evaluating the probability of damage and the hazard distances associated with the use of laser systems in a turbulent atmosphere, operating in the visible and NIR-SWIR portions of the electromagnetic spectrum, is presented. It can be used as a performance prediction model for directed-energy engagement of ground-based or air-based systems.

The development of a multisensor optronic device requires rangefinders that are optimized for size, weight and power (SWaP), cost-effective and modular, while keeping good range performance. We report on a fully fibered monostatic laser rangefinder based on a single-lens collimator used as the aperture of both the emission and reception channels. This has been made possible by the use of a diplexer.

This design makes the system more compact and achieves a system weight of 200 g. In addition to its low volume, the fully fibered architecture allows the design of a building-block rangefinder, with the collimator subsystem on one side and the laser and electronics card module on the other, linked by only an optical fiber. This kit format enables the rangefinder to better fit into any available space in higher-level systems such as gimbals and multi-function imagers. Besides, no alignment is needed and no parallax error is possible: the alignment between channels is guaranteed by design over the whole range.

The emission/reception channel of the first prototype has a 28 mm diameter, 80 mm focal length lens, and a 1.55 μm, 100 μJ pulsed laser firing in burst mode. The rangefinder is set in a class 1 configuration and measures at 1 Hz. The achieved extinction ratio (ER) is 30 dB, which is equivalent to a range of 7 km on NATO targets. In a class 1M configuration at 5 Hz, the achieved ER is even 32 dB, equivalent to a range of 8.5 km on NATO targets.

More configurations are reported in this article with their associated performance.

An airborne photoelectric reconnaissance system with its boresight directed down to the ground is an important battlefield situational awareness system, which can be used for reconnaissance and surveillance of complex ground scenes. Airborne 3D imaging lidar is recognized as one of the most promising candidates for target detection against complex backgrounds, and is progressing toward higher resolution, longer detection range, higher sensitivity, lower power consumption, higher reliability, eye safety and multi-functionality. However, traditional 3D laser imaging systems suffer from low imaging resolution, because of the small size of existing detectors, and from large volume. This paper proposes a high-resolution laser 3D imaging technology based on a tunable optical fiber array link. The echo signal is modulated by the tunable optical fiber array link and then transmitted to the focal plane detector. The detector converts the optical signal into electrical signals, which are passed to a computer. The computer then performs the signal calculation and image restoration based on the modulation information, and reconstructs the target image. This paper establishes a mathematical model of the tunable optical fiber array receiving link, and presents simulation and analysis of the factors affecting high-density multidimensional point cloud reconstruction.

Long range imaging with visible or infrared observation systems is typically hampered by atmospheric turbulence. The fluctuations in the refractive index of the air produce random shifts and blurs in the recorded imagery that vary across the field of view and over time. This severely complicates their utility for visual detection, recognition and identification at large distances. Software-based turbulence mitigation methods aim to restore such recorded image sequences based on the image data only, and thereby enable visual identification at larger distances. Although successful restoration has been achieved on static scenes in the past, a significant challenge remains in accounting for moving objects such that they remain visible as moving objects in the output. Under moderate turbulence conditions, the turbulence-induced shifts may be several pixels in magnitude and occur on the same length scale as moving objects, which severely complicates the segmentation between these objects and the background. Here we investigate how turbulence mitigation may be accomplished on the background as well as on large moving objects, for both land- and sea-based imaging under moderate turbulence conditions. We apply optical flow estimation methods to determine both the turbulence-induced shifts in image sequences and the motion of large moving objects. These motion estimates are used with our TNO turbulence mitigation software to reduce the effects of turbulence and to stabilize the output to a dynamic reference. We apply this approach to both land and sea scenarios. We investigate how different regularization methods for the optical flow affect the accuracy of the segmentation between moving object motion and the background motion. Moreover, we qualitatively assess the quality improvement of the resulting imagery in sequences of output images, and show a substantial gain in their apparent sharpness and stability for both the background and moving objects.
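The core of such motion estimation, finding the displacement that best aligns a frame with a reference, can be sketched in one dimension. This is a crude integer-shift stand-in for the dense optical-flow estimators discussed above; the names and data are our own:

```python
def best_shift(ref, frame, max_shift=5):
    # Integer shift minimizing the sum of squared differences between
    # the reference and the shifted frame (edge samples excluded).
    def sse(s):
        return sum((ref[i] - frame[i - s]) ** 2
                   for i in range(max_shift, len(ref) - max_shift))
    return min(range(-max_shift, max_shift + 1), key=sse)

ref = [0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0, 2.0, 7.0, 2.0,
       0.0, 0.0, 3.0, 8.0, 3.0, 0.0, 1.0, 5.0, 1.0, 0.0]
frame = ref[2:] + ref[:2]  # the same scene, circularly shifted by 2

shift = best_shift(ref, frame)  # recovers the 2-sample displacement
```

In the actual methods this is done densely, per pixel and subpixel, and the resulting flow field is what separates turbulence-induced jitter from genuine object motion.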

High resolution imagery is of crucial importance for the performance on visual recognition tasks. Super-resolution (SR) reconstruction algorithms aim to enhance the image resolution beyond the capability of the image sensor being used. Traditional SR algorithms approach this inverse problem using physical models for the image formation combined with a regularization function to prevent instabilities in the solution. Recently deep neural networks have been put forward as an alternative approach to the SR reconstruction problem. They learn a mapping from low resolution images to their high resolution counterparts from pairs of training images, which allows them to capture more specific information about the space of possible solutions than traditional regularization functions. These networks have achieved state-of-the-art performance on single image SR for sets of generic test images. Here we investigate whether the same performance can be realized when these neural networks for single image SR are applied specifically in the maritime domain. In particular we investigate their ability to reconstruct undersampled images of ships at sea, and demonstrate that the performance is similar to what is achieved on generic test images. In addition we quantify the gain in performance that is achieved when the networks are trained specifically on images of ships, which allows the networks to capture more prior knowledge about the space of possible solutions. Finally we show that the performance deteriorates when the resolution of test images is limited by image blur, for example due to diffraction, rather than undersampling. This highlights the importance of using representative training data that account for the part of the image formation process that limits the resolution in the sensor data.
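Reconstruction quality in such SR studies is commonly scored with PSNR against the high-resolution reference. A minimal sketch of that evaluation loop, with undersampling simulated by decimation and a naive reconstruction standing in for a learned network:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB, a standard SR quality metric."""
    mse = np.mean((np.asarray(ref) - np.asarray(est)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
hr = rng.random((32, 32))                            # high-resolution reference
lr = hr[::2, ::2]                                    # 2x undersampled observation
sr = np.repeat(np.repeat(lr, 2, axis=0), 2, axis=1)  # nearest-neighbour upsampling
```

A trained SR network would replace the nearest-neighbour step and be scored against the same reference.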

InAs/GaSb T2SL photodetectors offer performance similar to HgCdTe at an equivalent cutoff wavelength, but with a sizeable penalty in operating temperature, due to the inherent difference in Shockley-Read lifetimes. It is predicted that, since future IR systems will be based on room-temperature operation of depletion-current-limited arrays with pixel densities fully consistent with background- and diffraction-limited performance of the system optics, a material system with long Shockley-Read lifetime will be required. Since T2SLs have so far resisted attempts to improve their SR lifetime, currently the only material that meets this requirement is HgCdTe.

Due to less ionic chemical bonding, III-V semiconductors are more robust than their II-VI counterparts. As a result, III-V-based FPAs excel in operability, spatial uniformity, temporal stability, scalability, producibility, and affordability – the so-called “ibility” advantages.

This article reports the parameters and characteristics of recently introduced mid-infrared (3-12 µm) detection modules for gas sensing applications. In the mid-infrared range one can detect almost every simple or complex compound existing on Earth. The driving factors for the development of gas sensors are currently related to air and water quality, explosive material detection and medical applications, especially breath analyzers. A gas sensor requires a source (thermal, diode or laser), a sampling compartment and a detection module. At VIGO System we concentrate on designing and manufacturing high-operating-temperature detectors that are fast, sensitive, affordable and reliable, as required for the development of such platforms. We use active absorber elements based on complex HgCdTe or InAsSb heterostructures monolithically integrated with an optical immersion lens. Additional collection optics, signal amplification, temperature control and heat dissipation are also discussed in this article, as these functions are critical for the ultimate performance of gas sensors.

There has been significant progress in equipment for testing electro-optical surveillance systems over the last decade. Modern test systems are increasingly computerized, employ advanced image processing and offer software support for the measurement process. However, one great challenge, in the form of relatively low accuracy, remains unsolved. It is quite common for different test stations, when testing the same device, to produce different results. It can even happen that two testing teams, working on the same test station with the same tested device, produce different results. Rapid growth of electro-optical technology, poor standardization, limited metrology infrastructure, the subjective nature of some measurements, fundamental limitations from the laws of physics, tendering rules and advances in artificial intelligence are the major factors responsible for this situation. Regardless, the next decade should bring significant improvements, since better measurement accuracy is needed to sustain the fast growth of electro-optical surveillance technology.

Type-II InAs/GaSb interband superlattice cascade infrared detectors (IB CIDs) have proved to be promising candidates for short-response-time devices operating at room temperature and above. The spectral responsivity of mid-wave infrared (MWIR) InAs/GaSb T2SL-based IB CIDs has been observed even up to 380 K.

The short time constant (τs) is directly related to the unique carrier transport properties of the IB CID structures; a τs of ~4 ns was observed at 380 K. Moreover, the thermal generation-recombination rates of IB CIDs are reduced by orders of magnitude in comparison with corresponding intersubband quantum cascade infrared detectors (IC QCIDs), giving flexibility in higher-operating-temperature (HOT) applications. The most important feature is that the multiple-stage architecture is useful for improving the sensitivity of HOT detectors, where the quantum efficiency is limited by a short diffusion length. When the absorption depth for IR radiation is longer than the diffusion length, only a limited portion of the photogenerated carriers contributes to the quantum efficiency. This can be circumvented by fabricating multi-stage devices in which each identical stage consists of active, relaxation and barrier layers. An InAs/GaSb T2SL IB CID detector operating at 380 K exhibits a Johnson-noise-limited detectivity at the level of ~10⁸ Jones without an immersion lens.

In this paper the current status of novel HOT InAs/GaSb T2SL IB CIDs is presented. The detector's performance is analyzed versus bias voltage and operating temperature, and future trends in the development of quantum cascade detectors are shown. The paper focuses on the development of HOT IR detectors and on potential material approaches, namely InAs/GaSb T2SL IB CIDs that eliminate the cooling requirements of IR photodetectors operating in the MWIR range. The predicted near-future impact of this technology on infrared detector development is also discussed.

In this work we investigate the high-operating-temperature performance of InAsSb/AlSb heterostructure detectors with cut-off wavelengths near 5 μm at 230 K. The devices have been fabricated with different types of absorbing layer: nominally undoped, n-type doped and p-type doped. The results show that the device performance strongly depends on absorber layer doping. Generally, a p-type absorber provides higher current responsivity than an n-type absorber, but at the same time also a higher dark current. The device with a nominally undoped absorbing layer shows moderate values of both current responsivity and dark current. The resulting detectivities D* of non-immersed devices vary from 2×10⁹ to 7×10⁹ cm·Hz¹/²/W at 230 K, which is easily achievable with a two-stage thermoelectric cooler.

The paper presents the performance of interband cascade type-II InAs/GaSb superlattice infrared photodetectors. Such photodetectors are made up of multiple stages connected in series by interband tunneling heterostructures. Each stage can be divided into three regions: an absorber region, a relaxation region and an interband tunneling region. The cascade configuration allows fast-response detectors to be achieved. Making the assumption of bulk-like absorbers, we show how the standard semiconductor transport and recombination equations can be extended to the case of multiple-stage devices. We report on the dependence of the Johnson-noise-limited detectivity on the absorber thickness for different numbers of stages. This allows optimization of the detector architecture, necessary to achieve a high value of detectivity. For this purpose, we compare the collection efficiency in single- and multiple-stage devices. The collection efficiency rapidly increases with the number of stages in a multiple-absorber detector, especially when the absorber material's diffusion length is less than the absorption depth. We show that the optimal value of the detectivity for different numbers of stages does not change significantly. The potential benefits of the cascade architecture are shown to be greater in the long-term detection regime.
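The collection-efficiency argument above can be illustrated with a toy model (our own simplification, not the authors' extended transport equations): split the absorber into N stages, apply Beer-Lambert absorption per stage, and assume carriers are collected only if generated within a diffusion length of the stage boundary.

```python
import numpy as np

def qe_cascade(n_stages, total_thickness_um, alpha_per_um, l_diff_um):
    """Toy collection efficiency for an n-stage cascade absorber:
    Beer-Lambert absorption per stage, carriers collected only within
    a diffusion length of the stage boundary (illustrative only)."""
    d = total_thickness_um / n_stages            # per-stage absorber thickness
    collected_fraction = min(1.0, l_diff_um / d)
    qe = 0.0
    for i in range(n_stages):
        z0 = i * d
        absorbed = np.exp(-alpha_per_um * z0) - np.exp(-alpha_per_um * (z0 + d))
        qe += absorbed * collected_fraction
    return qe

# Diffusion length (0.5 um) much shorter than absorption depth (1/alpha = 1 um):
single = qe_cascade(1, 5.0, 1.0, 0.5)
multi = qe_cascade(5, 5.0, 1.0, 0.5)
```

Even this crude model reproduces the qualitative claim: when the diffusion length is shorter than the absorption depth, the multi-stage device collects a much larger fraction of the photogenerated carriers.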

Today's technology allows the creation of highly effective Intruder Detection Systems (IDS) that are able to detect the presence of an intruder within a defined area. In such systems the best performance can be achieved by combining different detection techniques in one system. One group of devices that can be applied in an IDS are those based on Fiber Optic Sensors (FOS). FOS benefit from the numerous advantages of optical fibers, such as small size, light weight and high sensitivity. In this work we present a novel Microstructured Optical Fiber (MOF) characterized by increased strain sensitivity, dedicated to distributed acoustic sensing for intelligent intruder detection systems. By designing the MOF with large air holes in close proximity to the fiber core, we increased the sensitivity of the effective refractive index to longitudinal strain. The presented fiber can be easily integrated into a floor system in order to detect any movement in the monitored area. We believe that sensors based on the presented MOF, owing to its numerous advantages, can find application in intelligent IDS.

Monitoring the geometry of a moving element is a crucial task, for example in robotics. Robots equipped with a fiber bend sensor integrated in their arms can be a promising solution for medicine and physiotherapy, and also for applications in computer games. We report an all-fiber intensity bend sensor based on a microstructured multicore optical fiber. It allows measurement of both the bending radius and the bending orientation. The reported solution has a special air-hole structure which makes the sensor sensitive to bending only. Our solution is an intensity-based sensor, which measures the power transmitted along the fiber as influenced by the bend. The sensor is based on a multicore fiber with a special air-hole structure that allows detection of the bending orientation over a range of 360°. Each core in the multicore fiber is sensitive to bending in a specific direction. The principle behind the sensor's operation is to differentiate the confinement loss of the fundamental mode propagating in each core. From the received power differences one can determine not only the bend direction but also its amplitude. The multicore fiber is designed to work with the most common light sources operating at 1.55 μm, thus ensuring high stability of operation. The sensitivity of the proposed solution is 29.4 dB/cm and the accuracy of the bend direction at the fiber end point is up to 5 degrees for a 15 cm fiber length. Such sensitivity allows end-point detection with millimeter precision.
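Given per-core losses that vary with the angle between each core and the bend plane, the bend orientation can be recovered as a weighted circular mean. The sketch below assumes a cosine loss model and a hypothetical four-core layout; the real fiber's loss dependence is more complex.

```python
import numpy as np

def bend_direction(core_losses, core_angles):
    """Estimate bend orientation (rad) from per-core confinement losses,
    assuming loss varies as 1 + cos(core_angle - bend_angle) (toy model)."""
    w = np.asarray(core_losses, dtype=float)
    a = np.asarray(core_angles, dtype=float)
    return np.arctan2(np.sum(w * np.sin(a)), np.sum(w * np.cos(a))) % (2 * np.pi)

core_angles = np.deg2rad([0.0, 90.0, 180.0, 270.0])  # hypothetical 4-core layout
phi_true = np.deg2rad(40.0)                          # bend plane orientation
core_losses = 1.0 + np.cos(core_angles - phi_true)   # synthetic loss pattern
phi_est = bend_direction(core_losses, core_angles)
```

For a symmetric core layout the constant loss offset cancels in the circular mean, so only the direction-dependent part of the loss determines the estimate.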

ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden.

ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE.

In this paper, the central idea of ECOMOS is presented. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given, together with a short discussion of validation tests and an outlook on the future potential of simulation for sensor assessment.

Range performance modeling of optronic imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the direction of the CP, i.e. of the single minority-value pixel among the majority value of a 2×2 pixel block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, which takes the role of Ground Truth (GT). After spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the CP transformation of the degraded image, in terms of localized PCR in the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform that make it possible to reconstruct an image of the perceived CPs. The criterion is then compared with the standard Johnson criterion for the case of linear blur and noise degradation. The evaluation of an imaging system integrating an image display and visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with real-signature test targets, and conventional methods for the more linear part (displaying).
The application to color imaging is proposed, with a discussion about the choice of the working color space depending on the type of image enhancement processing used.
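The 2×2 minority-pixel idea behind the CP transformation can be sketched as follows. This is a sliding-window reading of the description above; the paper's exact block partitioning and direction encoding may differ.

```python
import numpy as np

def corner_points(img):
    """Scan 2x2 blocks of a binary image; where a block contains exactly
    one minority-value pixel, record that pixel's image coordinates."""
    cps = []
    h, w = img.shape
    for y in range(h - 1):
        for x in range(w - 1):
            block = img[y:y + 2, x:x + 2]
            s = int(block.sum())
            if s in (1, 3):                 # exactly one minority pixel
                minority = 1 if s == 1 else 0
                dy, dx = map(int, np.argwhere(block == minority)[0])
                cps.append((y + dy, x + dx))
    return cps

img = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])
```

A single isolated pixel is seen as a corner point by each of the four 2×2 blocks that cover it, each time in a different direction within the block.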

An ongoing challenge for many military imaging systems is the detection and classification of weak target signatures in a cluttered environment. In such cases, the use of image contrast and relative target motion alone does not always provide a sufficient level of target discrimination to give operational confidence and it is therefore necessary to consider the use of other discriminatory scene information. Polarisation is one such source of information and this paper reports on an extensive series of polarimetric trials undertaken across the visible, NIR, SWIR, MWIR and LWIR spectral bands. Using this data, the benefits and limitations of polarisation discrimination are reviewed in the context of practical military scenarios. It is shown that polarisation signatures vary with viewing geometry and atmospheric conditions. This would lead to an unpredictable performance level if the sensor discrimination was based solely on polarisation. However, by carefully combining polarisation with other scene information, useful operational benefits can be obtained and this is illustrated through a consideration of different data fusion approaches.

The common aperture design facilitates superior DRI performance in the EO and SWIR bands compared to conventionally configured payloads. Special spectral calibration and color correction extend the effective range of color imaging. An advanced CMOS FPA and the low F-number of the optics facilitate low-light performance. The SWIR band provides further atmospheric penetration, as well as a see-spot capability at especially long ranges thanks to asynchronous pulse detection. The MWIR band has good sharpness over the entire field of view and (with a full-HD FPA) delivers an amount of detail far exceeding that of VGA-equipped FLIRs.

Accurate knowledge of IR detector specifications is of increasing importance whatever the application. Among these specifications is the relative spectral response. Spectral response measurement of CMOS focal plane arrays is now possible either with a grating-based monochromator or through an FTIR spectrometer, the latter solution easily providing a 1 cm⁻¹ spectral resolution whatever the wavelength. With this method, the spectrum is calculated as the Fourier transform of the detector signal. A Fast Fourier Transform (FFT) algorithm is then applied, which requires a sampling frequency. Sampling points are selected at most at every zero crossing of the interferogram of an internal He-Ne laser. Consequently, the analysis of signals with wavenumbers higher than that of the He-Ne laser, i.e. in the visible, is theoretically impossible. Our paper recalls the principle of high-resolution spectral response measurement through FTIR and presents a method to overcome the sampling limitation, thus extending measurements into the visible for CMOS detectors. It also explains the drawbacks of this method: the existence of a blind range and the limitations toward the UV range.
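The sampling limit follows directly from the Nyquist criterion: with one sample per zero crossing of the He-Ne fringe, i.e. every half wavelength of optical path difference, the maximum resolvable wavenumber equals that of the He-Ne laser itself.

```python
LAMBDA_HENE_NM = 632.8                     # He-Ne reference laser wavelength

# One sample per zero crossing = one sample per half wavelength of OPD
dx_cm = (LAMBDA_HENE_NM / 2) * 1e-7        # sampling interval in cm
nyquist_wavenumber = 1.0 / (2.0 * dx_cm)   # maximum resolvable wavenumber, cm^-1
hene_wavenumber = 1.0 / (LAMBDA_HENE_NM * 1e-7)
```

Both quantities come out near 15 803 cm⁻¹, which is why visible signals above the He-Ne wavenumber are aliased without the extension the paper proposes.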

Recently, a hyperspectral imaging system (HIS) with a Fourier Transform InfraRed (FTIR) spectrometer has been widely used due to its strengths in detecting gaseous fumes. Even though numerous algorithms for detecting gaseous fumes have already been studied, it is still difficult to detect target gases properly because of atmospheric interference substances and unclear characteristics of low concentration gases. In this paper, we propose detection algorithms for classifying hazardous gases using a deep neural network (DNN) and a convolutional neural network (CNN). In both the DNN and CNN, spectral signal preprocessing, e.g., offset, noise, and baseline removal, are carried out. In the DNN algorithm, the preprocessed spectral signals are used as feature maps of the DNN with five layers, and it is trained by a stochastic gradient descent (SGD) algorithm (50 batch size) and dropout regularization (0.7 ratio). In the CNN algorithm, preprocessed spectral signals are trained with 1 × 3 convolution layers and 1 × 2 max-pooling layers. As a result, the proposed algorithms improve the classification accuracy rate by 1.5% over the existing support vector machine (SVM) algorithm for detecting and classifying hazardous gases.
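The 1 × 3 convolution and 1 × 2 max-pooling layers mentioned above operate on the preprocessed 1-D spectrum. A numpy sketch of one such layer pair follows; the kernel weights and the toy spectrum are made up, and a real network would learn the weights and stack several such pairs before a classifier.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1x3 'valid' convolution over a 1-D spectrum (no padding)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def maxpool1d(x, size=2):
    """1x2 max pooling with stride equal to the pool size."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

spectrum = np.linspace(0.0, 1.0, 101)   # toy preprocessed FTIR spectrum
kernel = np.array([1.0, 0.0, -1.0])     # hypothetical learned filter
feat = maxpool1d(conv1d_valid(spectrum, kernel))
```

On a 101-point spectrum the valid convolution yields 99 values and the pooling halves that to 49, which is the kind of shape bookkeeping needed when sizing the final dense layers.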

InAs/InAs1-xSbx type-II strained-layer superlattices (SLS) are structures with potential infrared detection applications, owing to their tunable bandgap and suppressed Auger recombination. A series of medium-wavelength infrared (MWIR) InAs/InAs0.815Sb0.185 SLS structures, grown as undoped absorption epilayers on GaAs, were fabricated using molecular beam epitaxy in order to study the dependence of the ground state transitions on temperature and superlattice period thickness. Photoluminescence peaks at 4 K were obtained with the use of a helium-cooled micro-PL system and an InSb detector, and temperature-dependent absorption spectra were measured in the range 77 K - 300 K on a Fourier Transform Infrared (FTIR) spectrometer, equipped with a 1370 K blackbody source and a DTGS detector. An nBn device sample with the absorber structure identical to one of the undoped samples was also grown and processed with the goal of measuring temperature-dependent spectral response. A model for superlattice band alignment was also devised, incorporating the Bir-Pikus transformation results for uniaxial and biaxial strain, and the Einstein oscillator model for bandgap temperature dependence. Absorption coefficients of several thousand cm⁻¹ throughout the entire MWIR range are found for all samples, and the temperature dependence of the bandgaps is extracted and compared to the model. This and the photoluminescence data also demonstrate bandgap shifts consistent with the different superlattice periods of the three samples.
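A single Bose-Einstein (Einstein) oscillator gives the bandgap temperature dependence in closed form; a sketch in the O'Donnell-Chen parametrization with purely illustrative MWIR-like parameter values, not values fitted to the samples:

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def eg_einstein(T, eg0, S, hw):
    """Bandgap vs temperature for a single Bose-Einstein oscillator,
    O'Donnell-Chen form: Eg(T) = Eg(0) - S*hw*(coth(hw / (2*k*T)) - 1)."""
    T = np.asarray(T, dtype=float)
    x = hw / (2.0 * K_B * T)
    return eg0 - S * hw * (1.0 / np.tanh(x) - 1.0)

# Illustrative parameters: Eg(0) = 0.30 eV, coupling S = 2.0, phonon 15 meV
T = np.linspace(4.0, 300.0, 50)
eg = eg_einstein(T, eg0=0.30, S=2.0, hw=0.015)
```

The model flattens toward Eg(0) at low temperature and approaches a linear decrease at high temperature, the behaviour against which the measured bandgap shifts would be compared.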

The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
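The logarithmic response that makes single-shot HDR acquisition possible compresses many decades of scene luminance into the sensor's code range. A toy model of that response followed by a trivial global tone mapping; the luminance range and mapping are assumptions for illustration, not the sensor's actual characteristics.

```python
import numpy as np

def log_response(lum, lum_min=1e-2, lum_max=1e6):
    """Logarithmic sensor response: maps scene luminance spanning
    8 decades to a normalized [0, 1] code value (toy model)."""
    lum = np.clip(lum, lum_min, lum_max)
    span = np.log10(lum_max) - np.log10(lum_min)
    return (np.log10(lum) - np.log10(lum_min)) / span

def tonemap_to_8bit(code):
    """Trivial global tone mapping of normalized HDR codes to an LDR display."""
    return np.round(255 * code).astype(np.uint8)

scene = np.array([1e-2, 1.0, 1e2, 1e6])   # luminances spanning 8 decades
ldr = tonemap_to_8bit(log_response(scene))
```

Real tone-mapping operators adapt locally to image content; the point here is only that equal luminance ratios map to equal code steps under a logarithmic response.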

Heat transfer is involved in many phenomena such as friction, tensile stress, shear stress and material rupture. Among the challenges encountered during the characterization of such thermal patterns is the need for both high spatial and high temporal resolution. Infrared imaging provides information about surface temperature that can be attributed to the stress response of the material and the breaking of chemical bonds. In order to illustrate this concept, tensile and shear tests were carried out on steel, aluminum and carbon fiber composite materials and monitored using high-speed (Telops FAST M2K) and high-definition (Telops HD-IR) infrared imaging. Results from split-Hopkinson experiments carried out on a polymer material at high strain rate are also presented. The results illustrate how high-speed and high-definition infrared imaging in the midwave infrared (MWIR, 3 - 5 μm) spectral range can provide detailed information about the thermal properties of materials undergoing mechanical testing.

Passive ranging is the process of estimating the distance between an observer (own-ship) and one or more objects (targets) using passive sensors and angle measurements only, without electromagnetic or acoustic emissions. It is the baseline technique for completing the three-dimensional tracking capability of IRST systems, which are able to automatically search for, detect and track targets, generally with higher angular resolution than radars, in a completely silent mode. As is well known from the literature, range is univocally linked to angle-only data when specific relative dynamics occur. In other cases, when such a univocal relation does not hold, range estimation is still considered an open research topic. In this paper we select a set of informative cases, derived from our experience in analyzing data from real sorties, and compare four popular algorithms on the basis of a set of new metrics that, in our opinion, capture the system performance in terms of usability and reliability. The performance of ranging algorithms is usually evaluated by means of distance-based metrics (such as RMSE), which focus on the accuracy of the estimation. Usability and reliability are taken into account here by introducing what we call the Average Range Declaration Length (ARDL) and the Truth-Representative Score (TRS).
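While ARDL and TRS are the paper's own metrics and the compared algorithms are not reproduced here, the geometry underlying angle-only ranging reduces, in the simplest case of two observations of a stationary target, to triangulation:

```python
import math

def range_from_bearings(p1, b1, p2, b2):
    """Range from observer position p1 to a stationary target, given two
    observer positions and the bearings (radians, from the +x axis)
    measured at each. Solves p1 + r1*u1 = p2 + r2*u2 for r1 by Cramer's rule."""
    u1 = (math.cos(b1), math.sin(b1))
    u2 = (math.cos(b2), math.sin(b2))
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    det = u1[0] * (-u2[1]) + u2[0] * u1[1]
    return (dx * (-u2[1]) + u2[0] * dy) / det

# Hypothetical geometry: target at (10, 5), observer moves from (0,0) to (2,0)
r1 = range_from_bearings((0.0, 0.0), math.atan2(5, 10), (2.0, 0.0), math.atan2(5, 8))
```

The determinant vanishes when the two bearings are parallel, which is exactly the case where range is not observable from angle data alone and relative dynamics must supply the missing information.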

Infrared (IR) optical systems are at the core of many military, civilian and manufacturing applications and perform mission critical functions. To reliably fulfill the demanding requirements imposed on today’s high performance IR optics, highly accurate, reproducible and fast lens testing is of crucial importance. Testing the optical performance within different temperature ranges becomes key in many military applications.

Due to highly complex IR applications in the aerospace, military and automotive industries, MTF measurement under realistic environmental conditions becomes more and more relevant. A Modulation Transfer Function (MTF) test bench with an integrated thermal chamber allows several sample sizes to be measured over a temperature range from -40 °C to +120 °C. To reach reliable measurement results under these difficult conditions, a specially developed temperature-stable design including an insulating vacuum is used.

The main function of this instrument is the measurement of the MTF both on- and off-axis at up to +/-70° field angle, as well as measurement of effective focal length, flange focal length and distortion.

The vertical configuration of the system guarantees a small overall footprint. By integrating a high-resolution IR camera with focal plane array (FPA) in the detection unit, time consuming measurement procedures such as scanning slit with liquid nitrogen cooled detectors can be avoided.

The specified absolute accuracy of +/- 3% MTF is validated using internationally traceable reference optics. Together with a complete and intuitive software solution, this makes the instrument a turn-key device for today's state-of-the-art optical testing.
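An FPA-based bench of this kind derives the MTF from the sampled line spread function rather than from a scanned slit. A minimal sketch of that computation with a hypothetical Gaussian LSF:

```python
import numpy as np

def mtf_from_lsf(lsf, dx=1.0):
    """MTF as the normalized modulus of the Fourier transform of the
    sampled line spread function (dx = sample spacing)."""
    otf = np.fft.rfft(lsf)
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, np.abs(otf) / np.abs(otf[0])

# Hypothetical Gaussian LSF (sigma = 3 pixels)
x = np.arange(-32.0, 32.0)
lsf = np.exp(-0.5 * (x / 3.0) ** 2)
freqs, mtf = mtf_from_lsf(lsf)
```

Normalizing by the zero-frequency value makes MTF(0) = 1 by construction; the measured curve then falls off toward the optic's cutoff frequency.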
