The Federal Aviation Administration (FAA) is currently evaluating a solar-blind ultraviolet (UV) technology, called FogEye, being developed by Norris Electro Optical Systems. The technology enables transmission and reception of low-level UV signals that are free of any natural background noise, and it offers favorable atmospheric transmission characteristics. The FAA evaluation has concluded thus far that the technology has considerable merit, and that applications such as preventing runway incursions and use as an Integrity Monitor during low-visibility landings should be operationally assessed.

A 94 GHz imaging radar, with its associated gimbals for stabilisation and scanning, has been developed as an airborne test-bed to evaluate radar-aided navigation and guidance algorithms. Preliminary results from helicopter-based flight tests show sufficient contrast between selected features (including runways) and their surroundings for both computer-based and human-pilot guidance. Feature extraction and matching algorithms have shown this system to be more accurate than that achieved by GPS.

The utility of Near-Infrared (NIR) sensors for Enhanced Vision System (EVS) applications has been identified and well documented. In particular, such sensors are well suited to detecting runway approach lighting, and often outperform the pilot's vision for this task.
We present the results of field tests of very low-cost NIR sensors, based on sensitive visible-light cameras, for this application; the cost/benefit tradeoffs of these sensors are so favorable that they may well form the core of a basic EVS system, or an effective enhancement to EVS systems based on other primary vision sensors.
Useful processing techniques for imagery from these sensors, in the presence of cooperative sensors, or as a standalone system, are also presented.

Helicopters often strike thin obstacles such as power lines. To prevent such collisions, we are developing an obstacle detection and warning system for helicopters. An infrared (IR) camera, a color camera, and a Millimeter Wave (MMW) radar are employed as its sensor components. This paper describes the performance of the system. A 94 GHz FMCW radar has been developed for the system, and a Vivaldi antenna has been fabricated for the radar. The range accuracy of the radar was tested by measurement, and the radiation pattern of the Vivaldi antenna was measured in an anechoic chamber. The ability of the IR camera to detect obstacles was evaluated in flight measurements. IR images collected during these measurements were used to analyze the effect of background brightness and to develop new rendering techniques that enhance obstacles. The results show that the range accuracy of the FMCW radar is within 5%. The Vivaldi antenna has good characteristics, but its transition circuit deforms the total antenna pattern. The IR camera is shown to greatly increase the probability of detecting obstacles, even in poor visibility. A normal-distribution model of the IR image intensities proves sufficient to analyze the images and derive obstacle information, and IR images rendered with a pseudo-color method are effective for enhancing obstacles.
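
The reported 5% figure can be checked against the basic FMCW ranging relation, which converts the measured beat frequency into range. A minimal sketch, assuming an ideal sawtooth sweep (the parameter values are illustrative, not those of the radar above):

```python
# Minimal sketch of FMCW range estimation under an ideal sawtooth sweep.
# Parameter values are illustrative, not those of the radar in the paper.
C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, sweep_bw_hz, sweep_time_s):
    """Beat frequency to range: R = c * T * f_b / (2 * B)."""
    return C * sweep_time_s * beat_freq_hz / (2.0 * sweep_bw_hz)

# Example: a 500 MHz sweep over 1 ms maps a 100 kHz beat tone to 30 m.
r = fmcw_range(100e3, 500e6, 1e-3)
rel_err = abs(r - 30.0) / 30.0  # compare against a surveyed truth range
print(f"range = {r:.1f} m, relative error = {rel_err:.1%}")
```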

Millimetric radar imaging systems have been used to improve situational awareness for flight crews in low-visibility approaches. The image from the sensor is projected on a Head-Up Display (HUD) and, for aircraft without Cat-III auto-land facilities, can provide sufficient cues to continue a manual approach past the normal decision height. However, these images may be cluttered, features are often difficult to detect, and there is no direct indication of system integrity. Guidance cues can be displayed in the HUD by tracking runway features in the radar image, and sensor fusion methods that detect variation in the size and shape of the runway can provide integrity monitoring, giving timely warning of system malfunction.
In order to develop real-time tracking algorithms, it is necessary to generate synthetic radar images that exhibit the properties of actual millimetric radar sensors. This paper outlines the model of a radar sensor used to generate real-time radar images incorporating appropriate attenuation and clutter properties. These images are derived from standard 3D visual databases and have been integrated in a flight simulator using a commercial image generation system. The radar model incorporates the effects of the material properties of objects, the sensor range and grazing angles, and atmospheric attenuation. Examples of the radar images are presented in the paper, together with a summary of the real-time performance of the radar model in simulating millimeter wave radar images on a proprietary workstation.
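
The paper's calibrated sensor model is not reproduced here, but the ingredients it names (material reflectivity, grazing angle, range, atmospheric attenuation) combine roughly as in the following toy per-pixel return model; the constant-gamma clutter term and the attenuation figure are illustrative assumptions:

```python
# Toy per-pixel radar return model: intensity from material reflectivity,
# grazing angle, range, and two-way atmospheric attenuation. Values are
# unnormalized and illustrative, not the paper's calibrated sensor model.
import numpy as np

ALPHA_DB_PER_KM = 0.4  # assumed one-way clear-air attenuation near 94 GHz

def pixel_return(reflectivity, grazing_rad, range_m, alpha_db_km=ALPHA_DB_PER_KM):
    sigma0 = reflectivity * np.sin(grazing_rad)                    # constant-gamma clutter model
    atten = 10.0 ** (-2.0 * alpha_db_km * (range_m / 1e3) / 10.0)  # two-way path loss
    return sigma0 * atten / range_m ** 4                           # radar-equation range dependence

# Grass vs. runway asphalt at a 3 deg grazing angle, 2 km range: the
# reflectivity gap is what produces the runway/surround contrast.
print(pixel_return(0.10, np.radians(3.0), 2000.0))
print(pixel_return(0.01, np.radians(3.0), 2000.0))
```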

An effective enhanced vision system must operate over a broad spectral range in order to offer a pilot an optimized scene that includes runway background as well as airport lighting and aircraft operations. The large dynamic range of intensities of these images is best handled with separate imaging sensors. The EVS 2000 is a patented dual-band Infrared Enhanced Vision System (EVS) utilizing image fusion concepts to provide a single image from uncooled infrared imagers in both the LWIR and SWIR. The system is designed to provide commercial and corporate airline pilots with improved situational awareness at night and in degraded weather conditions. A prototype of this system was recently fabricated and flown on the Boeing Advanced Technology Demonstrator 737-900 aircraft. This paper will discuss the current EVS 2000 concept, show results taken from the Boeing Advanced Technology Demonstrator program, and discuss future plans for EVS systems.
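
The abstract does not state the EVS 2000 fusion algorithm; as a generic illustration of pixel-level dual-band fusion, one common scheme normalizes each registered band and keeps the locally stronger response, so hot SWIR sources (lights) and LWIR thermal background both survive. The frames and scheme below are placeholders, not the patented method:

```python
# Generic pixel-level dual-band fusion illustration (not the EVS 2000's
# actual algorithm): normalize each registered band, keep the stronger
# per-pixel response so both bands contribute to the single output image.
import numpy as np

def normalize(img):
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-12)

def fuse_max(lwir, swir):
    return np.maximum(normalize(lwir), normalize(swir))

lwir = np.random.rand(480, 640)  # stand-ins for registered sensor frames
swir = np.random.rand(480, 640)
fused = fuse_max(lwir, swir)
```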

Image fusion is the generally preferred method to combine two or more images for visual display on a single screen. We demonstrate that perceptual image separation may be preferable over perceptual image fusion for the combined display of enhanced and synthetic imagery. In this context, image separation refers to the simultaneous presentation of images on different depth planes of a single display. Image separation allows the user to recognize the source of the information that is displayed. This can be important because synthetic images are more liable to flaws. We have examined methods to optimize perceptual image separation. A true depth difference between enhanced and synthetic imagery works quite well. A standard stereoscopic display based on convergence is less suitable since the two images tend to interfere: the image behind is masked (occluded) by the image in front, which results in poor viewing comfort. This effect places 3D systems based on 3D glasses, as well as most autostereoscopic displays, at a serious disadvantage. A 3D display based on additive or subtractive transparency is acceptable: both the perceptual separation and the viewing comfort are good, but the color of objects depends on the color in the other depth layer(s). A combined additive and subtractive transparent display eliminates this disadvantage and is most suitable for the combined display of enhanced and synthetic imagery. We suggest that the development of such a display system is of greater practical value than increasing the number of depth planes in autostereoscopic displays.
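
The compositing arithmetic behind the two transparency modes is simple to state; here is a sketch for two image layers with values in [0, 1], illustrative of the arithmetic only, not of the optics of the actual display hardware:

```python
# Two transparency modes for stacked display layers (values in [0, 1]).
import numpy as np

def additive(front, back):
    return np.clip(front + back, 0.0, 1.0)  # emissive layers: light sums

def subtractive(front, back):
    return front * back                      # filter layers: transmittances multiply

front = np.random.rand(4, 4, 3)  # e.g., enhanced (sensor) imagery layer
back = np.random.rand(4, 4, 3)   # e.g., synthetic imagery layer
print(additive(front, back).shape, subtractive(front, back).shape)
```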

Synthetic Vision Systems (SVS) provide pilots with displays of stored geo-spatial data representing terrain, obstacles, and cultural features. As comprehensive validation is impractical, these databases typically have no quantifiable level of integrity. Further, updates to the databases may not be provided as changes occur. These issues limit the certification level and constrain the operational context of SVS for civil aviation. Previous work demonstrated the feasibility of using a real-time monitor to bound the integrity of Digital Elevation Models (DEMs) by using radar altimeter measurements during flight. This paper describes an extension of this concept to include X-band Weather Radar (WxR) measurements. This enables the monitor to detect additional classes of DEM errors and to reduce the exposure time associated with integrity threats. Feature extraction techniques are used along with a statistical assessment of similarity measures between the sensed and stored features that are detected. Recent flight-testing in the area around Juneau Airport (JNU) in Alaska has resulted in a comprehensive set of sensor data that is being used to assess the feasibility of the proposed monitor technology. Initial results of this assessment are presented.
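
As a hedged sketch of the radar-altimeter half of such a monitor (the test statistic, window length, noise levels, and threshold below are assumptions for illustration, not the paper's design): terrain elevation sensed under the aircraft is differenced against the stored DEM, and an alarm is raised when the disparity statistic exceeds its threshold.

```python
# Minimal DEM integrity check along a flight path: compare terrain
# elevation sensed by the radar altimeter (aircraft altitude minus height
# above ground) against the stored DEM; flag when the RMS disparity over
# the window exceeds a threshold. Numbers are illustrative assumptions.
import numpy as np

def sensed_terrain(alt_m, radalt_m):
    return alt_m - radalt_m                  # elevation of ground under aircraft

def integrity_alarm(sensed_m, dem_m, threshold_m=50.0):
    disparity = sensed_m - dem_m
    stat = np.sqrt(np.mean(disparity ** 2))  # RMS disparity over the window
    return stat > threshold_m, stat

alt = np.full(100, 3000.0)                               # level flight at 3000 m
radalt = 3000.0 - (800.0 + 5.0 * np.random.randn(100))   # noisy height above terrain
dem = np.full(100, 800.0)                                # stored DEM elevation
print(integrity_alarm(sensed_terrain(alt, radalt), dem))
```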

One of the key problems in developing Enhanced and Synthetic Vision Systems is evaluating their effectiveness in enhancing human visual performance. A validated simulation of human vision would provide a means of avoiding costly and time-consuming testing of human observers. We describe an image-based simulation of human visual search, detection, and identification, and efforts to further validate and refine this simulation. One of the advantages of an image-based simulation is that it can predict performance for exactly the same visual stimuli seen by human operators. This makes it possible to assess aspects of the imagery, such as particular types and amounts of background clutter and sensor distortions, that are not usually considered in non-image-based models. We present two validation studies: one showing that the simulation accurately predicts human color discrimination, and a second showing that it produces probabilities of detection (Pd's) that closely match Blackwell-type human threshold data.
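
Blackwell-type threshold data are typically summarized by a psychometric function mapping target contrast to probability of detection; a minimal sketch of that mapping follows. The Weibull form and its threshold and slope parameters are illustrative placeholders, not the simulation's internals:

```python
# Psychometric function sketch: target contrast -> probability of
# detection (Pd). Threshold and slope are illustrative placeholders.
import math

def prob_detection(contrast, threshold=0.01, slope=2.0):
    """Weibull psychometric function; Pd rises from 0 toward 1."""
    return 1.0 - math.exp(-((contrast / threshold) ** slope))

for c in (0.005, 0.01, 0.02, 0.04):
    print(f"contrast {c:.3f}: Pd = {prob_detection(c):.2f}")
```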

In anticipation of its ultimate role in transport, business, and rotary-wing aircraft, we clarify the role of Enhanced Vision Systems (EVS): how the output data will be utilized, appropriate architecture for total avionics integration, pilot and control interfaces, and operational utilization. Ground-map (database) correlation is critical, and we suggest that "synthetic vision" is simply a subset of the monitor/guidance interface issue. The core of integrated EVS is its sensor processor. In order to approximate optimal, Bayesian multi-sensor fusion and ground correlation functionality in real time, we are developing a neural net approach utilizing human visual pathway and self-organizing, associative-engine processing. In addition to EVS/SVS imagery, outputs will include sensor-based navigation and attitude signals as well as hazard detection. A system architecture is described, encompassing an all-weather sensor suite; advanced processing technology; inertial, GPS, and other avionics inputs; and pilot and machine interfaces. Issues of total-system accuracy and integrity are addressed, as well as flight operational aspects relating to both civil certification and military applications in IMC.
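
For Gaussian sensor errors, the Bayesian optimum that such a neural approach approximates reduces, in the simplest static case, to inverse-variance weighting of independent estimates; a minimal sketch (the example numbers are hypothetical):

```python
# Inverse-variance fusion of independent Gaussian sensor estimates: the
# static special case of optimal Bayesian multi-sensor fusion.
def fuse(estimates_and_vars):
    """estimates_and_vars: iterable of (estimate, variance) pairs."""
    w_sum = sum(1.0 / v for _, v in estimates_and_vars)
    x = sum(e / v for e, v in estimates_and_vars) / w_sum
    return x, 1.0 / w_sum  # fused estimate and its variance

# Hypothetical example: runway centerline offset (m) from two sensors.
print(fuse([(2.1, 0.5 ** 2), (1.7, 1.5 ** 2)]))
```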

This paper describes a new approach to situation awareness that combines video sensor technology and synthetic vision technology in a unique fashion to create a hybrid vision system. Our implementation of the technology, called "SmartCam3D" (SCS3D) has been flight tested by both NASA and the Department of Defense with excellent results. This paper details its development and flight test results.
Windshields and windows add considerable weight and risk to vehicle design, and because of this, many future vehicles will employ a windowless cockpit design. This windowless cockpit design philosophy prompted us to look at what would be required to develop a system that provides crewmembers and operations personnel an appropriate level of situation awareness. The system created to date provides a real-time 3D perspective display that can be used in all weather and visibility conditions. While the advantages of a synthetic-vision-only system are considerable, its major disadvantage is that it displays a synthetic scene created using “static” data acquired by an aircraft or satellite at some point in the past. The SCS3D system we are presenting in this paper is a hybrid synthetic vision system that fuses live video stream information with a computer-generated synthetic scene. This hybrid system can display a dynamic, real-time scene of a region of interest, enriched by information from a synthetic environment system (see Figure 1).
The SCS3D system has been flight tested on several X-38 flight tests performed over the last several years and on an Army Unmanned Aerial Vehicle (UAV) ground control station earlier this year. Additional testing using an assortment of UAV ground control stations and UAV simulators from the Army and Air Force will be conducted later this year.
We are also identifying other NASA programs that would benefit from the use of this technology.

Norris Electro Optical Systems (NEOS) has developed a sensor that detects the local presence of aircraft on an airport surface. It operates in the ultraviolet (UV) region, where no natural background noise is present, thereby enabling reliable, hands-off operation across environmental extremes from high noon to low-visibility conditions. These characteristics have been validated by the Federal Aviation Administration (FAA). NEOS is applying these capabilities to enable a low-cost, autonomous, electro-optically based runway incursion prevention system that conforms to the National Transportation Safety Board's (NTSB) recommendation for a direct warning to flight crews of the potential for a runway incursion.

Tower controllers are responsible for maintaining separation between aircraft and expediting the flow of traffic in the air. On the airport surface, they are also responsible for maintaining safe separation between aircraft, ground equipment, and personnel. They do this by sequencing departing and arriving aircraft, and controlling the location and movement of aircraft, vehicles, equipment, and personnel on the airport surface. The local controller and ground controller are responsible for determining aircraft location and intent, and for ensuring that aircraft, vehicles, and other surface objects maintain a safe separation distance. During nighttime or poor-visibility conditions, controllers' situation awareness is significantly degraded, resulting in lower safety margins and increased errors. Safety and throughput can be increased by using an Enhanced Vision System, based upon state-of-the-art infrared sensor technology, to restore critical visual cues. We discuss the results of an analysis of tower controller critical visual tasks and information requirements. The analysis identified representative classes of ground obstacles/targets (e.g., aircraft, vehicles, wildlife); sample airport layouts and tower-to-runway distances; and obstacle subtended visual angles. We performed NVTherm modeling of candidate sensors and field data collections. This resulted in the identification of design factors for an airport surface surveillance Enhanced Vision System.
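
The subtended-visual-angle computation underlying such a requirements analysis is straightforward; a quick sketch (the target size and tower-to-runway distances are illustrative, not values from the study):

```python
# Angular size of a ground target as seen from the tower.
import math

def subtended_angle_mrad(target_size_m, distance_m):
    """Full subtended angle, in milliradians."""
    return 2.0 * math.atan(target_size_m / (2.0 * distance_m)) * 1e3

# A 3 m-high service vehicle at representative tower-to-runway distances:
for d in (500.0, 1500.0, 3000.0):
    print(f"{d:.0f} m: {subtended_angle_mrad(3.0, d):.2f} mrad")
```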

Several emerging technologies were recently demonstrated in a Boeing 737-900 as part of Boeing's Technology Demonstrator program. Among these technologies were two enhanced vision systems and a synthetic vision system, including synthetic displays to support surface operations. This project gained operational experience with enhanced and synthetic vision systems operating in a context that included Required Navigation Performance (RNP) terminal area operations, Global Navigation Satellite System (GNSS) approach and landing, and Integrated Approach Navigation (IAN). The technologies were demonstrated to a broad mix of constituents involved in research, regulation, and acquisition in the transport category environment. This paper describes the systems demonstrated, the context in which they were used, and perceived benefits of integrating them in an operational environment. Lessons learned in the implementation of these technologies throughout the program are described and subjective data from participants are summarized.

In commercial aviation, over 30 percent of all fatal accidents worldwide are categorized as Controlled Flight Into Terrain (CFIT) accidents where a fully functioning airplane is inadvertently flown into the ground, water, or an obstacle.
An experiment was conducted at NASA Langley Research Center investigating the presentation of a synthetic terrain database scene to the pilot on a Primary Flight Display (PFD). The major hypothesis for the experiment was that a synthetic vision system (SVS) would improve the pilot's ability to detect and avoid a potential CFIT compared to conventional flight instrumentation.
All display conditions, including the baseline, contained a Terrain Awareness and Warning System (TAWS) and a Vertical Situation Display (VSD) enhanced Navigation Display (ND). Sixteen pilots each flew 22 approach/departure maneuvers in Instrument Meteorological Conditions (IMC) to the terrain-challenged Eagle County Regional Airport (EGE) in Colorado. For the final run, the flight guidance cues were altered such that the departure path went into the terrain. All pilots with an SVS-enhanced PFD (12 of 16 pilots) noticed and avoided the potential CFIT situation. All of the pilots who flew the anomaly with the baseline display configuration (which included a TAWS- and VSD-enhanced ND) had a CFIT event.

EVS (Enhanced Vision System) and SVS (Synthetic Vision System) are known as effective tools for improving pilots' situation awareness. ENRI has developed an integrated EVS/SVS experimental system to study the potential of both EVS and SVS in Japan.
This paper presents the results of ground and flight experiments with the experimental system. The system produces three-dimensional (3D) artificial images, synthesized from GPS position data, attitude data obtained from a gyro sensor, and a digital map database supplied by GSI (the Geographical Survey Institute) in Japan. The produced image is compared with actual motion pictures of the scenery, either through a HUD (Head-Up Display) or on a computer screen. The image is drawn as grid lines so that the 3D image and the real picture can be recognized simultaneously. The picture is obtained from two sensors: a visible-light color sensor and an infrared sensor. The two kinds of picture are recorded on separate video recorders. The image recording subsystems are installed on ENRI's experimental aircraft together with additional sensors for position and attitude data; a GPS receiver and a gyro unit were chosen as these additional sensors.
Two methods are examined in the simulation of the fusion system. In one, the 3D image is overlapped with the time-matched picture acquired from the video recorders and displayed on a computer screen. In the other, the observer watches the image through the HUD, where the image and the picture are overlapped. This paper also discusses the differences between the two methods for fusion systems and presents the results.
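
The overlay geometry described above (projecting a map database point into the forward view using GPS position and gyro attitude) can be sketched with a flat-earth local frame and a pinhole camera; the rotation convention, focal length, and example point are illustrative assumptions, not ENRI's implementation:

```python
# Project a map-database point into the forward camera/HUD frame from
# GPS position and gyro attitude. Flat-earth local frame; x forward,
# y right, z up; pinhole model. All parameter values are illustrative.
import numpy as np

def attitude_matrix(roll, pitch, yaw):
    """World-to-body rotation from gyro attitude (radians), yaw-pitch-roll."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cw, sw = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[cw, sw, 0], [-sw, cw, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def to_screen(map_point, aircraft_pos, roll, pitch, yaw, f_px=1200.0, cx=640.0, cy=360.0):
    p = attitude_matrix(roll, pitch, yaw) @ (map_point - aircraft_pos)
    if p[0] <= 0:
        return None  # point is behind the aircraft
    return cx + f_px * p[1] / p[0], cy - f_px * p[2] / p[0]

# A runway threshold 2 km ahead and 50 m below, on a level approach:
print(to_screen(np.array([2000.0, 0.0, -50.0]), np.zeros(3), 0.0, 0.0, 0.0))
```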

We present a method to give (fused) multiband night-time imagery a natural day-time color appearance. For input, the method requires a false color RGB image that is produced by mapping three individual bands (or the first three principal components) of a multiband nightvision system to the respective channels of an RGB image. The false color RGB nightvision image is transformed into a perceptually decorrelated color space. In this color space the first order statistics of a natural color image (target scene) are transferred to the multiband nightvision image (source scene). To obtain a natural color representation of the multiband night-time imagery, the compositions of the source and target scenes should resemble each other to some degree. The inverse transformation to RGB space yields a nightvision image with a day-time color appearance. The luminance contrast of the resulting color image can be enhanced by replacing its luminance component by a grayscale fused representation of the three input bands.
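
The first-order statistics transfer at the core of the method is compact enough to sketch. One simplification: the paper applies the transfer in a perceptually decorrelated color space reached via a color-space transform, whereas the sketch below treats the channels as already decorrelated and omits that transform:

```python
# Per-channel first-order statistics transfer: shift each source channel
# to the target's mean and standard deviation. The RGB <-> decorrelated
# color space conversion used in the paper is omitted for brevity.
import numpy as np

def transfer_statistics(source, target):
    """out = (src - mu_src) * (sigma_tgt / sigma_src) + mu_tgt, per channel."""
    out = np.empty_like(source, dtype=np.float64)
    for c in range(source.shape[2]):
        s, t = source[..., c], target[..., c]
        out[..., c] = (s - s.mean()) * (t.std() / (s.std() + 1e-12)) + t.mean()
    return out

false_color = np.random.rand(240, 320, 3)  # mapped multiband nightvision image
daytime_ref = np.random.rand(240, 320, 3)  # natural-color target scene
natural = np.clip(transfer_statistics(false_color, daytime_ref), 0.0, 1.0)
```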
