NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low-visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility. A major thrust of the SVS project involves the development and demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance for transport aircraft. The SVS concept being developed at NASA encompasses the integration of tactical and strategic Synthetic Vision Display Concepts (SVDC) with Runway Incursion Prevention System (RIPS) alerting and display concepts, real-time terrain Database Integrity Monitoring Equipment (DIME), and Enhanced Vision Systems (EVS) and/or improved weather radar for real-time object detection and database integrity monitoring.
A flight test evaluation was jointly conducted in July and August 2004 by NASA Langley Research Center and an industry partner team under NASA's Aviation Safety and Security Program, Synthetic Vision Systems project. A Gulfstream G-V aircraft was flown over a 3-week period in the Reno/Tahoe International Airport (NV) local area and an additional 3-week period in the Wallops Flight Facility (VA) local area to evaluate integrated Synthetic Vision System concepts. The enabling technologies (RIPS, EVS, and DIME) were integrated into the larger SVS concept design. This paper presents the experimental methods and high-level results of this flight test.

Continuously growing worldwide air traffic poses an immense challenge to civil aviation. While the number of flight operations is increasing, the overall number of accidents must be reduced. To reach this ambitious goal, both the avionics and the human-machine interface (HMI) of the cockpit have to be improved in the onboard domain.
An international consortium led by Thales Avionics, one of Europe's leading avionics manufacturers, is therefore developing, as part of the EC project ISAWARE II of the 5th Framework Programme, integrated surveillance systems and cockpit displays that intuitively provide pilots with optimal situational awareness during all flight phases.
The project already uses parts of the interactive Cockpit Display System (CDS) developed for the Airbus A380 as a basis. The Primary Flight Display (PFD) and Navigation Display (ND), the two central cockpit displays, are additionally equipped with a so-called "Synthetic Vision System" (SVS), a database-driven representation of terrain and airport features that resembles the real outside world.

Limited visibility and reduced situational awareness have been cited as predominant causal factors in both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low-visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility. A major thrust of the SVS project involves the development and demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security Program, Synthetic Vision Systems - Commercial and Business project. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept that included advanced synthetic vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between the EVS and SVS display concepts.

Training is required to correctly interpret NVG imagery. Training night operations with simulated intensified imagery has great potential: compared to direct viewing with the naked eye, intensified imagery is relatively easy to simulate, and the cost of real NVG training is high (logistics, risk, civilian sleep deprivation, pollution). On the surface, NVG imagery appears to have a structure similar to daylight imagery; in actuality, its characteristics differ significantly, and as a result NVG imagery frequently induces visual illusions. To achieve realistic training, simulated NVG imagery should at least reproduce the essential visual limitations of real NVG imagery caused by reduced resolution, reduced contrast, a limited field of view, the absence of color, and the system's sensitivity to near-infrared radiation. It is particularly important that simulated NVG imagery represents essential NVG visual characteristics, such as the high reflectance of chlorophyll and halos. Current real-time simulation software falls short for training purposes because of an incorrect representation of shadow effects. We argue that the development of shading and shadowing merits priority in order to close the gap between real and simulated NVG flight conditions. Visual conspicuity can be deployed as an efficient metric to measure the 'perceptual distance' between the real and the simulated NVG image.

A Runway Incursion Prevention System (RIPS) integrated with a Synthetic Vision System (SVS) concept was tested at the Reno/Tahoe International Airport (RNO) and Wallops Flight Facility (WAL) in the summer of 2004. RIPS provides enhanced surface situational awareness and alerts of runway conflicts in order to prevent runway incidents while also improving operational capability. A series of test runs was conducted using a Gulfstream V (G-V) aircraft as the test platform, with a NASA test aircraft and a NASA test van acting as incurring traffic. The purpose of the study, from the RIPS perspective, was to evaluate the RIPS airborne incursion detection algorithms and the associated alerting and airport surface display concepts, focusing on crossing-runway incursion scenarios. This paper gives an overview of the RIPS, the WAL flight test activities, and the WAL test results.
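The abstract does not specify the incursion detection logic itself. As a rough illustration only, the hedged sketch below (all names, thresholds, and the constant-speed assumption are ours, not NASA's RIPS algorithm) alerts when ownship and crossing traffic are predicted to occupy the runway intersection within overlapping time windows.

```python
# Hedged illustration of crossing-runway conflict detection; this is NOT the
# RIPS algorithm, just a minimal time-window overlap check under assumed
# constant speeds and an assumed intersection occupancy zone.

def occupancy_window(dist_m, speed_mps, zone_m=60.0):
    """Earliest/latest time (s) a vehicle occupies the intersection zone."""
    if speed_mps <= 0.0:
        return None  # stationary vehicle never reaches the intersection
    t_center = dist_m / speed_mps
    half_width = (zone_m / 2.0) / speed_mps
    return (t_center - half_width, t_center + half_width)

def incursion_alert(own_dist, own_speed, tfc_dist, tfc_speed, buffer_s=5.0):
    own = occupancy_window(own_dist, own_speed)
    tfc = occupancy_window(tfc_dist, tfc_speed)
    if own is None or tfc is None:
        return False
    # Alert when the two occupancy windows, padded by a safety buffer, overlap.
    return own[0] - buffer_s < tfc[1] and tfc[0] - buffer_s < own[1]

# Ownship on its takeoff roll; traffic crossing 300 m from the intersection.
print(incursion_alert(own_dist=900, own_speed=70, tfc_dist=300, tfc_speed=25))
```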

We describe a multisensor (or multimodal) flight simulator (FS), which is currently capable of generating forward-looking infrared (FLIR) imagery and is designed in such a way that modules can easily be added to produce other types of imagery, such as millimeter-wave radar (MMWR). Such sensors are the basis for the enhanced vision systems (EVS) that are currently considered for installation aboard commercial and military aircraft to enhance the safety of operation in poor-visibility or even zero-visibility weather. The main source of information for our simulator is an airport database, which is, in part, intended for driving synthetic vision systems (SVS). We describe the architecture of the simulator and of its FLIR module. Preliminary simulation examples are also shown.

This paper describes flight trials performed in Centennial, CO, with a Piper Cheyenne from Marinvent. Six pilots flew the Cheyenne on twelve enroute segments between Denver Centennial and Colorado Springs. Two different settings (paper chart, enroute moving map) were evaluated in randomized order. The goal of the flight trial was to compare the objective performance of pilots between the settings. As dependent variables, positional accuracy and situational awareness probe (SAP) statistics were measured; analysis was conducted using an ANOVA. In parallel, all pilots answered subjective Cooper-Harper, NASA TLX, situation awareness rating technique (SART), Display Readability Rating, and debriefing questionnaires.
The tested enroute moving map application has Jeppesen-chart-compliant symbology for high-enroute and low-enroute operations. It has a briefing mode where all the information found on today's enroute paper chart, together with a loaded flight plan, is displayed in a north-up orientation. The execution mode displays the loaded flight plan routing together with only the pertinent flight-route-relevant information, in either a track-up or north-up orientation. Depiction of an own-ship symbol is possible in both modes. All text and symbols are deconflicted, and additional information can be obtained by clicking on symbols. Terrain and obstacle data can be displayed for enhanced situation awareness.
The results show that pilots flying the 2D enroute moving map display perform no worse than pilots using conventional systems: flight technical error and workload are equivalent or lower, and situational awareness is higher than with conventional paper charts.

While vast numbers of image enhancing algorithms have been developed, the majority have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to develop a visual performance-based assessment methodology and apply it to assess three Retinex algorithms: the two algorithms described by Funt, Ciurea, and McCann as McCann99 Retinex and Frankle-McCann Retinex, and the multiscale Retinex with color restoration (MSRCR). This paper discusses the methodology developed to acquire objective human visual performance data as a means of evaluating image enhancement algorithms. The basic approach is to determine whether standard objective performance metrics, such as response time and error rate, improve when viewing the enhanced images versus the baseline, non-enhanced images. Four observers completed a visual search task using a spatial forced-choice paradigm: they searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Future directions and the viability of this technique are also discussed.
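A minimal sketch of this kind of performance-based comparison (the data layout and names are assumptions, not the authors' analysis code): per-trial response times and correctness for baseline versus enhanced images, summarized with a paired test on response time.

```python
import numpy as np
from scipy import stats

# Hedged sketch: compare one observer's trials on baseline vs. enhanced images.
# Arrays are per-trial and matched by image; this layout is assumed.
def compare_conditions(rt_base, rt_enh, correct_base, correct_enh):
    t_stat, p_val = stats.ttest_rel(rt_base, rt_enh)  # paired test on RT
    return {
        "mean_rt_base_s": float(np.mean(rt_base)),
        "mean_rt_enh_s": float(np.mean(rt_enh)),
        "rt_paired_p": float(p_val),
        "pct_correct_base": 100.0 * float(np.mean(correct_base)),
        "pct_correct_enh": 100.0 * float(np.mean(correct_enh)),
    }
```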

Head-up displays (HUD) and helmet- (or head-) mounted displays (HMD) aim at reducing the pilot's visual scanning cost in support of concurrent monitoring of both instrument information (near domain) and the outside environment (far domain). An HMD used in combination with a head tracker enables assessment of the pilot's head direction in real time, allowing symbology to remain spatially linked to elements of the outside environment. This paper examines the potential added benefits for flight path tracking to be expected from displaying a virtual 3D perspective pathway plus predictor information on an HMD. Results of a high-fidelity flight-simulation experiment are reported that involved a series of curved approaches supported by such a pathway HMD. The study used a monocular retinal-scanning HMD and involved 18 pilots. Dependent human performance data were derived from flight path tracking measures, subjective measures of mental workload and situation awareness, and pilot reactions to an unexpected rare event in the outside scene (an intruding aircraft on the active runway for the intended landing). Comparison with a standard head-down ILS baseline condition revealed a mix of performance costs and benefits, which is consistent with most of the human factors literature on the general use of HUDs and of HUDs combined with pathway guidance: the pathway HMD promoted substantially better flight path tracking but also caused a delayed response to the unexpected event. This effect points to a known disadvantage of HUDs referred to as 'attention capture', which may be exaggerated by the additional use of pathway guidance symbology.

Flying in poor-visibility conditions, such as rain, snow, fog, or haze, is inherently dangerous. However, these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real time for the pilot to use while flying. For image enhancement, we are using the LaRC-patented Retinex algorithm, since it performs exceptionally well at improving the low-contrast range imagery typically seen during poor-visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor, real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion, and we discuss our current real-time Retinex implementations on DSPs.
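For reference, the core of a single-scale Retinex is compact: the log of each pixel relative to its Gaussian-blurred surround. The sketch below is the generic textbook formulation, assuming nothing about the LaRC-patented variant or its DSP implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=80.0, eps=1.0):
    """Generic single-scale Retinex: log(center) - log(Gaussian surround).

    image: 2-D array of non-negative intensities; eps avoids log(0).
    """
    surround = gaussian_filter(image, sigma)
    return np.log(image + eps) - np.log(surround + eps)

def multi_scale_retinex(image, sigmas=(15.0, 80.0, 250.0)):
    # Equal-weight average over small, medium, and large surrounds.
    return np.mean([single_scale_retinex(image, s) for s in sigmas], axis=0)
```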

The Global Positioning System (GPS) consists of a constellation of Earth-orbiting satellites that transmit continuous electromagnetic signals to users on or near the Earth's surface. At any moment, at least four GPS satellites, and sometimes nine or more, are visible from any point. The electromagnetic signal transmitted from the satellites is reflected, at least to some degree, from virtually every place on the Earth. When this signal is received by a specially constructed receiver, its characteristics can be used to determine information about the reflecting surface. One piece of information collected is the time delay of the reflected signal relative to the direct signal. This time delay can be used to determine the altitude (or height) above the local terrain when the terrain in the reflection area is level. However, given the potential of simultaneously using multiple reflections, it should be possible to also determine the elevation above terrain whose reflecting area is not level. An effort is currently underway to develop the technology to characterize the reflected signal received by the GPS Surface Reflection Experiment (GSRE) instrument. Recent aircraft sorties have been flown to collect data that can be used to refine the technology. This paper provides an update on the status of the instrument development to enable determination of terrain proximity using the GPS reflected signal. Results found in the data collected to date are also discussed.
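For the level-terrain case, the delay-to-height relation follows from standard specular-reflection geometry (this is the textbook GNSS-reflectometry relation, not necessarily the GSRE processing chain):

```latex
% For a level reflecting surface, a satellite at elevation angle e, and a
% receiver at height h above the terrain, the reflected path exceeds the
% direct path by
\Delta\rho = c\,\Delta t = 2h\sin e
% so the measured direct-to-reflected delay \Delta t gives the height
h = \frac{c\,\Delta t}{2\sin e}
```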

Synthetic imagery used for training and evaluating visual search and detection tasks should result in the same observer performance as obtained in the field. The generation of synthetic imagery generally involves a range of computational approximations and simplifications of the physical processes involved in image formation, in order to meet the update rates of real-time systems or simply to achieve reasonable computation times. These approximations reduce the fidelity of the resulting imagery, which in turn affects observer performance. We have recently introduced visual conspicuity as an efficient task-related measure that can be deployed to calibrate synthetic imagery for use in human visual search and detection tasks. Target conspicuity determines mean visual search time: targets in synthetic imagery with the same visual conspicuity as their real-world counterparts will give rise to observer performance in simulated search and detection tasks that is similar to performance in equivalent real-world scenarios. In the present study we compare the conspicuity and detection ranges of real and simulated targets with different degrees of shading. When ambient occlusion is taken into account, and when the contrast ratios in a scene are calibrated, the detection ranges and conspicuity values of simulated targets are equivalent to those of their real-world counterparts, for different degrees of shading. When no shading or incorrect shading is applied in the simulation, this is not the case, and the resulting imagery cannot be deployed for training visual search and detection tasks.

The theory of opponent-sensor image fusion is based on neural circuit models of adaptive contrast enhancement and opponent-color interaction, as developed and previously presented by Waxman, Fay et al. This approach can directly fuse two, three, four, or five imaging sensors, e.g., VNIR, SWIR, MWIR, and LWIR for fused night vision. The opponent-sensor images also provide input to a point-and-click fast-learning approach for target fingerprinting (pattern learning and salient feature discovery) and subsequent target search. We have recently developed a real-time implementation of multi-sensor image fusion and target learning and search on a single-board attached processor for a laptop computer. In this paper we review our approach to image fusion and target learning, and demonstrate fusion and target detection using an array of VNIR, SWIR, and LWIR imagers. We also show results from night data collections in the field. This opens the way to digital fused night vision goggles, weapon sights, and turrets that fuse multiple sensors and learn to find targets designated by the operator.
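As a rough illustration of the opponent idea (a simplified steady-state center-surround ratio, not the authors' full shunting neural circuit model), two registered bands can be combined so that one excites the center while the other inhibits the surround:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hedged sketch: steady-state shunting-style opponent interaction between two
# registered sensor bands (e.g., LWIR center vs. SWIR surround). The bounded
# ratio form keeps the output in a fixed range, as a shunting network does.
def opponent_band(center_band, surround_band, sigma=5.0, decay=0.1):
    surround = gaussian_filter(surround_band, sigma)
    return (center_band - surround) / (decay + center_band + surround)

# Opposite-sign pairs can then drive different display color channels, e.g.:
# red = opponent_band(lwir, swir), green = opponent_band(swir, lwir).
```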

During approach and landing the pilot performs the high-workload task of switching attention between instrument information and the outside scene. Superimposing both visual domains in head-up displays (HUD) or head-mounted displays (HMD) reduces the visual scanning load of this task. These displays are collimated at optical infinity and therefore spare the pilot's eyes from continually re-accommodating between the two visual domains. Besides these performance benefits, visual clutter and attention fixation, i.e., inattentiveness to outside-scene events while attending to HUD symbology, are found to be performance cost factors. Conformal symbology and flight-phase-adapted de-cluttering have been found to be promising approaches to overcome these problems.
In pursuit of these two approaches, the current paper describes the design of a new pathway display on a monocular head-mounted retinal-scanning display and its implementation in DLR's generic cockpit simulator. The pathway can be regarded as a means of linking an instrument symbology (the tunnel) with a virtual element of the outside scene (the intended flight path). Scene-linked symbology appears to be part of the outside world, e.g., an instrument reading such as airspeed, heading, or altitude that changes its display location conformally with the gate element of the tunnel symbology moving towards the pilot. An example of flight-phase-adaptive de-cluttering is to successively reduce or remove symbology when the conformal outside element becomes visible (e.g., the runway). In addition, the display includes a conformal presentation of the terrain: a checkerboard pattern representing the terrain is dynamically generated from worldwide-available SRTM-3 data.

This paper describes flight trials of Honeywell's Advanced 3D Primary Flight Display System. The system employs a large-format flat-panel avionics display presently used in Honeywell PRIMUS EPIC flight-deck products and is coupled to an onboard EGPWS. The head-down primary flight display consists of dynamic primary flight attitude information, flight-path and approach symbology similar to that of Honeywell's HUD2020 head-up display, and a synthetic 3D perspective-view terrain environment generated from Honeywell's EGPWS terrain data. Numerous flights were conducted on board a Honeywell Citation V aircraft, and a significant amount of pilot feedback was collected, a portion of which is summarized in this paper. The system development aims at leveraging several well-established avionics components (HUD, EGPWS, large-format displays) to produce an integrated system that significantly reduces pilot workload, increases overall situation awareness, and is more beneficial to flight operations than is achievable with separate systems.

In an effort to increase situational awareness, the aviation industry is investigating technologies that allow pilots to visualize what is outside the aircraft during periods of low visibility. One of these technologies, referred to as Synthetic Vision Systems (SVS), provides the pilot with real-time computer-generated images of obstacles, terrain features, runways, and other aircraft regardless of weather conditions. To help ensure the integrity of such systems, methods of verifying the accuracy of synthetically derived display elements using onboard remote sensing technologies are under investigation. One such method is based on a shadow detection and extraction (SHADE) algorithm that transforms computer-generated digital elevation data into a reference domain that enables direct comparison with radar measurements. This paper describes machine vision techniques for making this comparison and discusses preliminary results from application to actual flight data.
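The DEM-to-radar-domain transform is not detailed in the abstract; the sketch below shows only the generic first step such a comparison could build on (predicting radar shadow from elevation data under parallel-ray geometry), and is an assumption rather than the SHADE algorithm itself.

```python
import numpy as np

def shadow_mask_1d(elev_m, spacing_m, depression_deg):
    """Predict shadowed cells along one range line of a DEM profile.

    elev_m: terrain heights ordered by increasing ground range from the
    sensor; parallel-ray geometry at the given depression angle is assumed.
    """
    drop = spacing_m * np.tan(np.radians(depression_deg))
    shadowed = np.zeros(len(elev_m), dtype=bool)
    ray = elev_m[0]  # height of the grazing ray past the nearest terrain
    for i in range(1, len(elev_m)):
        ray -= drop  # the ray descends moving away from the sensor
        if elev_m[i] < ray:
            shadowed[i] = True  # terrain sits below the ray: in shadow
        else:
            ray = elev_m[i]  # terrain pokes above the ray: new shadow caster
    return shadowed
```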

This paper describes flight trials performed in Centennial, CO, using a Piper Cheyenne owned and operated by Marinvent. The goal of the flight trial was to evaluate the objective performance of pilots using conventional paper charts or a 3D SVS display. Six pilots flew thirty-six approaches to the Colorado Springs airport to accomplish this goal. Three different settings (paper chart, electronic navigation chart, 3D SVS display) were evaluated in a fully randomized order. As dependent variables, positional accuracy and situational awareness probe (SAP) statistics were measured, and analysis was conducted using an ANOVA. In parallel, all pilots answered subjective Cooper-Harper, NASA TLX, situation awareness rating technique (SART), Display Readability Rating, Display Flyability Rating, and debriefing questionnaires. This paper describes the comparison between the conventional paper chart and the 3D SVS display. The 3D SVS primary flight display provides a depiction of primary flight data as well as a 3D depiction of airports, terrain, and obstacles. In addition, a 3D dynamic channel visualizing the selected approach procedure can be displayed.
The results show that pilots flying the 3D SVS display perform no worse than pilots with the conventional paper chart: flight technical error and workload are lower, and situational awareness is equivalent to that with conventional paper charts.

The Air Force Research Laboratory's Human Effectiveness Directorate (AFRL/HE) supports research addressing human factors associated with Unmanned Aerial Vehicle (UAV) operator control stations. Recent research, in collaboration with Rapid Imaging Software, Inc., has focused on determining the value of combining synthetic vision data with live camera video presented on a UAV control station display. Information is constructed from databases (e.g., terrain, cultural features, pre-mission plan), as well as from numerous information updates via networked communication with other sources (e.g., weather, intel). This information is overlaid conformally, in real time, onto the dynamic camera video image presented to operators. Synthetic vision overlay technology is expected to improve operator situation awareness by highlighting key spatial information elements of interest directly on the video image, such as threat locations, expected locations of targets, landmarks, and emergency airfields. It may also help maintain an operator's situation awareness during periods of video datalink degradation or dropout and when operating in conditions of poor visibility. Additionally, this technology may serve as an intuitive means of distributed communication between geographically separated users. This paper discusses the tailoring of synthetic overlay technology for several UAV applications. Pertinent human factors issues are detailed, as are the usability, simulation, and flight test evaluations required to determine how best to combine synthetic visual data with live camera video presented on a ground control station display and to validate that a synthetic vision system is beneficial for UAV applications.
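At the core of any such conformal overlay is projecting a geo-located point into the camera image using the aircraft's pose. The hedged sketch below is a generic pinhole-camera formulation (frame conventions and names are assumptions, not Rapid Imaging's implementation).

```python
import numpy as np

def project_to_pixel(p_world, cam_pos, R_world_to_cam, f_px, cx, cy):
    """Project a world point into the camera image for overlay drawing.

    p_world, cam_pos: 3-vectors in a shared local frame (e.g., NED).
    R_world_to_cam: 3x3 rotation into the camera frame (x right, y down,
    z forward), built from aircraft attitude plus gimbal angles.
    f_px: focal length in pixels; (cx, cy): principal point.
    """
    p_cam = R_world_to_cam @ (np.asarray(p_world) - np.asarray(cam_pos))
    if p_cam[2] <= 0.0:
        return None  # point is behind the camera; draw no symbol
    u = cx + f_px * p_cam[0] / p_cam[2]
    v = cy + f_px * p_cam[1] / p_cam[2]
    return u, v  # pixel location at which to render the overlay symbol
```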

Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When using SVS for aircraft precision approach guidance systems accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway in the development of such a terrain referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high accuracy/resolution terrain database, this terrain referenced navigation system can provide navigation and guidance information to the pilot on a SVS or conventional instruments.
The terrain-referenced navigation system under development at AEC operates on principles similar to other terrain navigation systems: a ground-sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; these data are then matched with an onboard terrain database to find the most likely position solution, which is used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high-resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner capable of providing tens of thousands of independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator, the high-resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on 1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size, and 2) the availability of high accuracy/resolution databases. This paper presents results from flight tests in which the terrain-referenced navigator is used to provide guidance cues for a precision approach.
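A minimal sketch of the matching step this describes (a brute-force offset search; the names, cost function, and grid parameters are assumptions, not AEC's navigator):

```python
import numpy as np

def match_position(hits_ne, hits_elev, dem_lookup,
                   search_radius_m=30.0, step_m=1.0):
    """Find the horizontal offset that best aligns laser hits with the DEM.

    hits_ne: (N, 2) north/east ground-hit coordinates from the INS solution.
    hits_elev: (N,) measured terrain elevations at those points.
    dem_lookup(n, e): vectorized database elevation query.
    The search radius would be bounded by the WAAS position uncertainty.
    """
    offsets = np.arange(-search_radius_m, search_radius_m + step_m, step_m)
    best = (np.inf, 0.0, 0.0)
    for dn in offsets:
        for de in offsets:
            pred = dem_lookup(hits_ne[:, 0] + dn, hits_ne[:, 1] + de)
            resid = hits_elev - pred
            cost = np.mean((resid - resid.mean()) ** 2)  # bias-free fit error
            if cost < best[0]:
                best = (cost, dn, de)
    return best  # (cost, north_offset_m, east_offset_m) to correct the INS
```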

Results are presented from formal flight and simulation experiments testing a new primary flight display (PFD) / refined multifunction display (MFD) system, with a computer-generated dynamic pathway, as a viable means for a pilot to accurately and efficiently control and navigate an aircraft. For flight control, the PFD uses a computer-generated highway-in-the-sky (HITS) pathway and a synthetic vision terrain image of the view outside the aircraft, with an overlay of all the essential flight technical data. For navigation, the MFD provides a moving map with a dynamic pathway to aid the pilot. The total PFD/MFD system provides a predictive method for flying an aircraft, as opposed to the reactive method associated with conventional needle-and-dial instruments. Fifteen low-to-average-experience subject pilots were selected to compare the PFD instrumentation system with a conventional instrumentation system, using a non-precision global positioning system (GPS) area navigation (RNAV) approach to runway 20 at Wakefield Municipal Airport, VA (AKQ). The hypothesis was that the intuitive nature of the PFD instrumentation system would provide greater situational awareness, improved accuracy, and lower pilot workload during flight in instrument meteorological conditions (IMC) compared with conventional round-dial instrumentation.

Synthetic Vision Systems (SVS) displays provide pilots with a continuous view of terrain combined with integrated guidance symbology in an effort to increase situation awareness (SA) and decrease workload during operations in Instrument Meteorological Conditions (IMC). It is hypothesized that SVS displays can replicate the safety and operational flexibility of flight in Visual Meteorological Conditions (VMC), regardless of actual out-the-window (OTW) visibility or time of day. Throughout the course of recent SVS research, significant progress has been made toward evolving SVS displays and demonstrating their ability to increase SA compared to conventional avionics in a variety of conditions. While a substantial amount of data has been accumulated demonstrating the capabilities of SVS displays, the degree to which SVS can replicate the safety and operational flexibility of VMC flight in all visibility conditions is unknown. Previous piloted simulations and flight tests have shown that better SA and path precision are achievable with SVS displays without an increase in workload; however, none of the previous SVS research attempted to fully capture the significance of SVS displays in terms of their contribution to safety or operational benefits. In order to more fully quantify the relationship of flight operations in IMC with SVS displays to conventional operations conducted in VMC, a fundamental comparison to current-day general aviation (GA) flight instruments was warranted. Such a comparison could begin to establish the extent to which SVS display concepts are capable of maintaining an "equivalent level of safety" with the round dials they could one day replace, for both current and future operations. This comparison was the focus of the SVS-ES experiment conducted under the Aviation Safety and Security Program's (AvSSP) GA element of the SVS Project at NASA Langley Research Center in Hampton, Virginia. A combination of subjective and objective data measures was used in this preliminary research to quantify the relationship between selected components of safety associated with flying an approach. Four information display methods, ranging from a "round dials" baseline through a fully integrated SVS package that includes terrain, pathway-based guidance, and a strategic navigation display, were investigated in this high-fidelity simulation experiment. In addition, a broad spectrum of pilots, representative of the GA population, was employed for testing in an attempt to enable greater application of the results and to determine whether "equivalent levels of safety" are achievable through the incorporation of SVS technology regardless of a pilot's flight experience.
