AS THIS BOOK GOES TO PRESS, the National Aeronautics and Space Administration (NASA) has passed beyond the half century mark, its longevity a tribute to how essential successive Presidential administrations—and the American people whom they serve—have come to regard its scientific and technological expertise. In that half century, flight has advanced from supersonic to orbital velocities, the jetliner has become the dominant means of intercontinental mobility, astronauts have landed on the Moon, and robotic spacecraft developed by the Agency have explored the remote corners of the solar system and even passed into interstellar space. Born of a crisis—the chaotic aftermath of the Soviet Union’s space triumph with Sputnik—NASA rose magnificently to the challenge of the emergent space age. Within a decade of NASA’s establishment, teams of astronauts would be planning for the first lunar landings, accomplished with Neil Armstrong’s “one small step” on July 20, 1969. Few events have been so emotionally charged, and none so publicly visible or fraught with import, as his cautious descent from the spindly little Lunar Module Eagle to leave his historic boot-print upon the dusty plain of Tranquillity Base. In the wake of Apollo, NASA embarked on a series of space initiatives that, if they might have lacked the emotional and attention-getting impact of Apollo, were nevertheless remarkable for their accomplishment and daring. The Space Shuttle, the International Space Station, the Hubble Space Telescope, and various planetary probes, landers, rovers, and flybys speak to the creativity of the Agency, the excellence of its technical personnel, and its dedication to space science and exploration. But there is another aspect to NASA, one that is too often hidden in an age when the Agency is popularly known as America’s space agency and when its most visible employees are the astronauts who courageously


rocket into space, continuing humanity’s quest into the unknown. That hidden aspect is aeronautics: lift-borne flight within the atmosphere, as distinct from the ballistic flight of astronautics, out into space. It is the first “A” in the Agency’s name, and the oldest-rooted of the Agency’s technical competencies, dating to the formation, in 1915, of NASA’s lineal predecessor, the National Advisory Committee for Aeronautics (NACA). It was the NACA that largely restored America’s aeronautical primacy in the interwar years after 1918, deriving the airfoil profiles and configuration concepts that defined successive generations of ever-more-capable aircraft as America progressed from the subsonic piston era into the transonic and supersonic jet age. NASA, succeeding the NACA after the shock of Sputnik, took American aeronautics across the hypersonic frontier and onward into the era of composite structures, electronic flight controls, and energy-efficient flight. As with the first in this series, this second volume traces contributions by NASA and the post–Second World War NACA to aeronautics. The surveys, cases, and biographical examinations presented in this work offer just a sampling of the rich legacy of aeronautics research produced by the NACA and NASA. These include:

• Atmospheric turbulence, wind shear, and gust research, subjects of crucial importance to air safety across the spectrum of flight, from the operations of light general-aviation aircraft through large commercial and supersonic vehicles.
• Research to understand and mitigate the danger of lightning strikes upon aerospace vehicles and facilities.
• The quest to make safer and more productive skyways via advances in technology, cross-disciplinary integration of developments, design innovation, and creation of new operational architectures to enhance air transportation.
• Contributions to the melding of human and machine, via the emergent science of human factors, to increase the safety, utility, efficiency, and comfort of flight.
• The refinement of free-flight model testing for aerodynamic research, the anticipation of aircraft behavior, and design validation and verification, complementing traditional wind tunnel and full-scale aircraft testing.
• The evolution of the wind tunnel and the expansion of its capabilities, from the era of the slide rule and subsonic flight to hypersonic excursions into the transatmosphere in the computer and computational fluid dynamics era.
• The advent of composite structures, which, when coupled with computerized flight control systems, gave aircraft designers a previously unknown freedom, enabling them to design aerospace vehicles with optimized aerodynamic and structural behavior.
• Contributions to improving the safety and efficiency of general-aviation aircraft via better understanding of their unique requirements and operational circumstances, and the application of new analytical and technological approaches.
• Undertaking comprehensive flight research on sustained supersonic cruise aircraft—with particular attention to their aerodynamic characteristics, airframe heating, use of integrated flying and propulsion controls, and evaluation of operational challenges such as inlet “unstart” and aircrew workload—and blending them into the predominant national subsonic and transonic air traffic network.
• Development and demonstration of Synthetic Vision Systems, enabling increased airport utilization, more efficient flight deck performance, and safer air and ground aircraft operations.
• Confronting the persistent challenge of atmospheric icing and its impact on aircraft operations and safety.
• Analyzing the performance of aircraft at high angles of attack and conducting often high-risk flight-testing to study their behavior characteristics and assess the value of developments in aircraft design and flight control technologies to reduce their tendency to depart from controlled flight.
• Undertaking pathbreaking flight research on VTOL and V/STOL aircraft systems to advance their ability to enter the mainstream of aeronautical development.
• Conducting a cooperative international flight-test program to mutually benefit understanding of the potential, behavior, and performance of large supersonic cruise aircraft.


As this sampling—far from a complete range—of NASA work in aeronautics indicates, the Agency and its aeronautics staff spread across the Nation maintain a lively interest in the future of flight, befitting NASA’s reputation, earned in the years since 1958, as a national repository of aerospace excellence, and its legacy of accomplishment in the 43-year history of the National Advisory Committee for Aeronautics, from 1915 to 1958. As America enters the second decade of the second century of winged flight, it is again fitting that this work, like the volume that precedes it, be dedicated, with affection and respect, to the men and women of NASA, and the NACA from whence it sprang.

Dr. Richard P. Hallion
August 25, 2010



NASA 515, Langley Research Center’s Boeing 737 testbed, is about to enter a microburst wind shear. The image is actual test footage, reflecting the murk and menace of wind shear. NASA.


CASE 1

Eluding Aeolus: Turbulence, Gusts, and Wind Shear

Kristen Starr

Since the earliest days of American aeronautical research, NASA has studied the atmosphere and its influence upon flight. Turbulence, gusts, and wind shears have posed serious dangers to air travelers, forcing imaginative research and creative solutions. The work of NASA’s researchers to understand atmospheric behavior and NASA’s derivation of advanced detection and sensor systems that can be installed in aircraft have materially advanced the safety and utility of air transport.

BEFORE WORLD WAR II, the National Advisory Committee for Aeronautics (NACA), founded in 1915, performed most of America’s institutionalized and systematic aviation research. The NACA’s mission was “to supervise and direct the scientific study of the problems of flight with a view to their practical solution.” Among the most serious problems it studied was that of atmospheric turbulence, a field related to the Agency’s great interest in fluid mechanics and aerodynamics in general. From the 1930s to the present, the NACA and its successor—the National Aeronautics and Space Administration (NASA), formed in 1958—concentrated rigorously on the problems of turbulence, gusts, and wind shear. Midcentury programs focused primarily on gust load and boundary-layer turbulence research. By the 1980s and 1990s, NASA’s atmospheric turbulence and wind shear programs had reached a level of sophistication that allowed them to make significant contributions to flight performance and aircraft reliability. The aviation industry integrated this NASA technology into planes bought by airlines and the United States military. This research has resulted in an aviation transportation system vastly safer than that envisioned by the pioneers of the early air age.

An Unsettled Sky

When laypeople think of the words “turbulence” and “aviation” together, they probably envision the “bumpy air” to which passengers are often subjected on long-duration flights.



But the term “turbulence” has a particular technical meaning. Turbulence describes the motion of a fluid (for our purposes, air) that is characterized by chaotic, seemingly random property changes. Turbulence encompasses fluctuations in diffusion, convection, pressure, and velocity. When an aircraft travels through air that experiences these changes, its passengers feel the turbulence buffeting the aircraft. Engineers and scientists characterize the degree of turbulence with the Reynolds number, a scaling parameter identified in the 1880s by Osborne Reynolds at the University of Manchester. Lower numbers denote laminar (smooth) flows, intermediate values indicate transitional flows, and higher numbers are characteristic of turbulent flow.1

Turbulent airflow causes drag on all objects that move through the air, including cars, golf balls, and planes. A boundary layer is “the thin reaction zone between an airplane [or missile] and its external environment.” The boundary layer is separated from the contour of a plane’s airfoil, or wing section, by only a few thousandths of an inch. Air particles change from a smooth laminar flow near the leading edge to a turbulent flow toward the airfoil’s rear.2 Turbulent flow increases friction on an aircraft’s skin, and therefore surface heating, while slowing the aircraft because of the drag it produces.

Most atmospheric circulation on Earth causes some kind of turbulence. One of the more common forms of atmospheric turbulence experienced by aircraft passengers is clear air turbulence (CAT), which is caused by the mixing of warm and cold air in the atmosphere by wind, often via the process of wind shear. Wind shear is a difference in wind speed and direction over a relatively short distance in Earth’s atmosphere. One engineer describes it as “any situation where wind velocity varies sharply from point to point.”3 Wind shears can have both horizontal and vertical components. Horizontal wind shear is usually encountered near coastlines and along fronts, while vertical wind shear appears closer to Earth’s surface and sometimes at higher levels in the atmosphere, near frontal zones and upper-level air jets.
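The Reynolds scaling described above is easy to make concrete. The following is a minimal sketch, not drawn from this study; the flight condition is an assumed round-number example, and in practice the laminar-to-turbulent transition depends strongly on geometry and surface condition.

```python
# Reynolds number: Re = (density * speed * characteristic length) / dynamic viscosity.
RHO_SEA_LEVEL = 1.225   # air density, kg/m^3 (standard sea level)
MU_AIR = 1.81e-5        # dynamic viscosity of air, Pa*s (about 15 deg C)

def reynolds(speed_ms: float, chord_m: float) -> float:
    """Reynolds number based on wing chord, at assumed sea-level conditions."""
    return RHO_SEA_LEVEL * speed_ms * chord_m / MU_AIR

# Assumed example: a light airplane at 70 m/s with a 2 m wing chord.
re = reynolds(70.0, 2.0)
print(f"Re ~ {re:.2e}")  # roughly 9.5e6: far into the turbulent regime for a wing
```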

Large-scale weather events, such as weather fronts, often cause wind shear. Weather fronts are boundaries between two masses of air that have different properties, such as density, temperature, or moisture. These fronts cause most significant weather changes. Substantial wind shear is observed when the temperature difference across the front is 9 degrees Fahrenheit (°F) or more and the front is moving at 30 knots or faster. Frontal shear is seen both vertically and horizontally and can occur at any altitude between the surface and the tropopause, the upper boundary of the troposphere, the lowest portion of Earth’s atmosphere and the layer containing 75 percent of the atmosphere’s mass. Those who study the effects of weather on aviation are concerned more with vertical wind shear above warm fronts than behind cold fronts because of the longer duration of warm fronts.4 Wind shear is a microscale meteorological phenomenon: it usually develops over a distance of less than 1 kilometer, even though it can emerge in the presence of large weather patterns (such as cold fronts and squall lines). Wind shear affects the movement of sound waves through the atmosphere by bending the wave front, causing sounds to be heard where they normally would not be. A much more violent variety of wind shear can appear near and within downbursts and microbursts, which may be caused by thunderstorms or weather fronts, particularly when such phenomena occur near mountains. Vertical shear can form on the lee side of mountains when winds blow over them. If the wind flow is strong enough, turbulent eddies known as “rotors” may form. Such rotors pose dangers to both ascending and descending aircraft.5 The microburst phenomenon, discovered and identified in the late 1970s by T. Theodore Fujita of the University of Chicago, involves highly localized, short-lived vertical downdrafts of dense cool air that impact the ground and radiate outward toward all points of the compass at high speed, like a water stream from a kitchen faucet impacting a basin.6
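Because wind shear is, by definition, a wind difference over a short distance, vertical shear is often quoted in knots per 1,000 feet of altitude. A minimal sketch follows; the values are assumed for illustration, not taken from the text.

```python
def vertical_shear_kt_per_1000ft(wind_low_kt: float, wind_high_kt: float,
                                 height_diff_ft: float) -> float:
    """Vertical wind speed shear, expressed in knots per 1,000 feet of altitude."""
    return (wind_high_kt - wind_low_kt) / (height_diff_ft / 1000.0)

# Assumed example: 10 kt at 1,000 ft and 40 kt at 3,000 ft above a warm front.
print(vertical_shear_kt_per_1000ft(10.0, 40.0, 2000.0))  # 15.0 kt per 1,000 ft
```

This treats only the speed component along one direction; directional shear requires comparing full wind vectors, as sketched later for the LLWAS.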

Speed and directional wind shear result at the leading edge of this outflow’s three-dimensional boundary. The strength of the vertical wind shear is directly proportional to the strength of the outflow boundary. Typically, microbursts are smaller than 3 miles across and last fewer than 15 minutes, with rapidly fluctuating wind velocity.7 Wind shear is also observed near radiation inversions (also called nocturnal inversions), which form during rapid cooling of Earth’s surface at night. Such inversions do not usually extend above the lower few hundred feet of the atmosphere. Favorable conditions for this type of inversion include long nights, clear skies, dry air, little or no wind, and cold or snow-covered surfaces. The wind difference between the inversion layer and the air above it can be as much as 90 degrees in direction and 40 knots in speed; these differences can develop overnight or into the following morning and tend to be strongest toward sunrise.8 The troposphere is the lowest layer of the atmosphere and the one in which weather changes occur. Within it, intense vertical wind shear can slow or prevent tropical cyclone development, but it can also coax thunderstorms into longer life cycles, worsening severe weather.9 Wind shear particularly endangers aircraft during takeoff and landing, when aircraft are at low speed and low altitude and particularly susceptible to loss of control. Microburst wind shear typically occurs during thunderstorms but occasionally arises in the absence of rain near the ground.

7. For microbursts and NASA research on them, see the recommended readings at the end of this paper by Roland L. Bowles, Kelvin K. Droegemeier, Fred H. Proctor, Paul A. Robinson, Russell Targ, and Dan D. Vicroy.
8. NASA has undertaken extensive research on wind shear, as evidenced by numerous reports listed in the recommended readings section following this study. For an introduction to the subject, see NASA Langley Research Center, “Windshear,” http://oea.larc.nasa.gov/PAIS/Windshear.html, accessed July 30, 2009; Integrated Publishing, “Meteorology: Low-Level Wind Shear,” http://www.tpub.com/weather3/6-15.htm, accessed July 25, 2009; Amos A. Spady, Jr., Roland L. Bowles, and Herbert Schlickenmaier, eds., Airborne Wind Shear Detection and Warning Systems, Second Combined Manufacturers and Technological Conference, two parts, NASA CP-10050 (1990); U.S. National Academy of Sciences, Committee on Low-Altitude Wind Shear and Its Hazard to Aviation, Low Altitude Wind Shear and Its Hazard to Aviation (Washington, DC: National Academy Press, 1983); and Dan D. Vicroy, “Influence of Wind Shear on the Aerodynamic Characteristics of Airplanes,” NASA TP-2827 (1988).
9. Department of Atmospheric Sciences, University of Illinois-Champaign, “Jet Stream,” http://ww2010.atmos.uiuc.edu/%28Gh%29/guides/mtr/cyc/upa/jet.rxml, accessed July 25, 2009. Lightning aspects of the thunderstorm risk are addressed in an essay by Barrett Tillman and John Tillman in this volume.



There are both “wet” and “dry” microbursts. Before the development of forward-looking detection and evasion strategies, microburst wind shear was a major cause of aircraft accidents, claiming 26 aircraft and 626 lives, with over 200 injured, between 1964 and 1985.10 Another macro-level weather event associated with wind shear is the upper-level jetstream, which contains vertical and horizontal wind shear at its edges. Jetstreams are fast-flowing, narrow air currents found at certain areas of the tropopause. The tropopause is the transition between the troposphere (the region of the atmosphere where most weather changes occur and temperature decreases with height) and the stratosphere (the region where temperature increases with height).11 A combination of atmospheric heating (by solar radiation or internal planetary heat) and the planet’s rotation on its axis causes jetstreams to form. The strongest jetstreams on Earth are the polar jets (23,000–39,000 feet above sea level) and the higher and somewhat weaker subtropical jets (33,000–52,000 feet). Both the northern and southern hemispheres have a polar jet and a subtropical jet. Wind shear in the upper-level jetstream causes clear air turbulence, which is usually strongest on the cold-air side of the jet, next to the jet’s axis.12 Although most aircraft passengers experience clear air turbulence as a minor annoyance, this kind of turbulence can be quite hazardous to aircraft when it becomes severe. It has caused fatalities, as in the case of United Airlines Flight 826.13 Flight 826 took off from Narita International Airport in Japan for Honolulu, HI, on December 28, 1997.

At 31,000 feet, 2 hours into the flight, the crew of the plane, a Boeing 747, received warning of severe clear air turbulence in the area. A few minutes later, the plane abruptly dropped 100 feet, injuring many passengers and forcing an emergency return to Tokyo, where one passenger subsequently died of her injuries.14 A low-level jetstream is yet another phenomenon causing wind shear. This kind of jetstream usually forms at night, directly above Earth’s surface, ahead of a cold front. Low-level vertical wind shear develops in the lower part of the low-level jet. This kind of wind shear is also known as nonconvective wind shear, because it is not caused by thunderstorms. The term “jetstream” is often used without further modification to describe Earth’s Northern Hemisphere polar jet. This is the jet most important for meteorology and aviation, because it covers much of North America, Europe, and Asia, particularly in winter. The Southern Hemisphere polar jet, on the other hand, circles Antarctica year-round.15 Commercial use of the Northern Hemisphere polar jet began November 18, 1952, when a Boeing 377 Stratocruiser of Pan American Airways first flew from Tokyo to Honolulu at an altitude of 25,000 feet, cutting the trip time by over one-third, from 18 to 11.5 hours.16 Riding the jetstream saves fuel by shortening flight duration: an airplane cruising at high altitude passes through less-dense air, and a strong tailwind raises its speed over the ground. Over North America, the time needed to fly east across the continent can be decreased by about 30 minutes if an airplane can fly with the jetstream, and increased by more than 30 minutes if it must fly against the jetstream.17

14. Aviation Safety Network, “ASN Aircraft accident Boeing 747 Tokyo,” http://aviation-safety.net/database/record.php?id=19971228-0, accessed July 4, 2009.
15. U.S. Department of Energy, “Ask a Scientist,” http://www.newton.dep.anl.gov/aas.htm, accessed Aug. 20, 2009.
16. M.D. Klaas, “Stratocruiser: Part Three,” Air Classics (June 2000), at http://findarticles.com/p/articles/mi_qa3901/is_200006/ai_n8911736/pg_2/, accessed July 8, 2009.
17. Ned Rozell, Alaska Science Forum, “Amazing flying machines allow time travel,” http://www.gi.alaska.edu/ScienceForum/ASF17/1727.html, accessed July 8, 2009.
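The jetstream arithmetic described above is simple ground-speed bookkeeping. The sketch below is a back-of-the-envelope illustration; the distance, airspeed, and wind component are assumed round numbers, not figures from the text.

```python
DIST_NM = 2000.0   # assumed transcontinental distance, nautical miles
TAS_KT = 480.0     # assumed true airspeed, knots
JET_KT = 100.0     # assumed average jetstream wind component, knots

def hours(distance_nm: float, groundspeed_kt: float) -> float:
    """Flight time in hours for a given ground speed."""
    return distance_nm / groundspeed_kt

still_air = hours(DIST_NM, TAS_KT)            # ~4.17 h
with_jet = hours(DIST_NM, TAS_KT + JET_KT)    # ~3.45 h (about 43 min saved)
against_jet = hours(DIST_NM, TAS_KT - JET_KT) # ~5.26 h (about 66 min lost)
print(f"still: {still_air:.2f} h, tailwind: {with_jet:.2f} h, "
      f"headwind: {against_jet:.2f} h")
```

Note the asymmetry: a headwind costs more time than an equal tailwind saves, since the airplane spends longer exposed to it.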




Otto Lilienthal, the greatest of pre-Wright flight researchers, in flight. National Air and Space Museum.

Strong gusts of wind are another natural phenomenon affecting aviation. The National Weather Service reports gusts when the top wind speed reaches 16 knots and the variation between peaks and lulls reaches 9 knots.18 A gust load is the wind load on a surface caused by gusts. The more physically fragile a surface, the more danger a gust load will pose. As well, gusts can have an upsetting effect upon an aircraft’s flightpath and attitude.

Initial NACA–NASA Research

Sudden gusts and their effects upon aircraft have posed a danger to the aviator since the dawn of flight. Otto Lilienthal, the inventor of the hang glider and arguably the most significant aeronautical researcher before the Wright brothers, sustained fatal injuries in an 1896 accident, when a gust lifted his glider skyward, died away, and left him hanging in a stalled flight condition. He plunged to Earth, dying the next day, his last words reputedly being “Opfer müssen gebracht werden”—or “Sacrifices must be made.”19 NASA’s interest in gust and turbulence research can be traced to the earliest days of its predecessor, the NACA.

18. U.S. Weather Service, “Wind Gust,” http://www.weather.gov/forecasts/wfo/definitions/defineWindGust.html, accessed Aug. 1, 2009.
19. Richard P. Hallion, Taking Flight: Inventing the Aerial Age from Antiquity Through the First World War (New York: Oxford University Press, 2003), p. 161.




Indeed, the first NACA technical report, issued in 1917, examined the behavior of aircraft in gusts.20 Over the first decades of flight, the NACA expanded its interest in gust research, looking at the problems of both aircraft and lighter-than-air airships. The latter had profound problems with atmospheric turbulence and instability: the airship Shenandoah was torn apart over Ohio by violent storm winds; the Akron plunged into the Atlantic, possibly from what would now be considered a microburst; and the Macon was doomed when clear air turbulence ripped off a vertical fin and opened its gas cells to the atmosphere. Dozens of airmen lost their lives in these disasters.21 During the early part of the interwar years, much research on turbulence and wind behavior was undertaken in Germany, in conjunction with the development of soaring and of the long-distance and long-endurance sailplane. Conceived as a means of preserving German aeronautical skills and interest in the wake of the Treaty of Versailles, soaring evolved as both a means of flight and a means to study atmospheric behavior. No airman was closer to the weather, or more dependent upon an understanding of its intricacies, than the pilot of a sailplane, borne aloft only by thermals and the lift of its broad wings. German soaring was always closely tied to the nation’s excellent technical institutes and the prestigious aerodynamics research of Ludwig Prandtl and the Prandtl school at Göttingen. Prandtl himself studied thermals, publishing a research paper on vertical air currents in 1921, in the earliest years of soaring development.22 One of the key figures in German sailplane development was Dr. Walter Georgii, a wartime meteorologist who headed the postwar German Research Establishment for Soaring Flight (Deutsche Forschungsanstalt für Segelflug [DFS]).

20. J.C. Hunsaker and Edwin Bidwell Wilson, “Report on Behavior of Aeroplanes in Gusts,” NACA TR-1 (1917); see also Edwin Bidwell Wilson, “Theory of an Airplane Encountering Gusts,” pts. II and III, NACA TR-21 and TR-27 (1918). 21. For an example of NACA research, see C.P. Burgess, “Forces on Airships in Gusts,” NACA TR-204 (1925). These—and other—airship disasters are detailed in Douglas A. Robinson, Giants in the Sky: A History of the Rigid Airship (Seattle: University of Washington Press, 1973). 22. Ludwig Prandtl, “Some Remarks Concerning Soaring Flight,” NACA Technical Memorandum No. 47 (Oct. 1921), a translation of a German study; Howard Siepen, “On the Wings of the Wind,” The National Geographic Magazine, vol. 55, no. 6 (June 1929), p. 755. For an example of later research, see Max Kramer, “Increase in the Maximum Lift of an Airplane Wing due to a Sudden Increase in its Effective Angle of Attack Resulting from a Gust,” NACA TM-678 (1932), a translation of a German study.



Speaking before Britain’s Royal Aeronautical Society, he proclaimed, “Just as the master of a great liner must serve an apprenticeship in sail craft to learn the secret of sea and wind, so should the air transport pilot practice soaring flights to gain wider knowledge of air currents, to avoid their dangers and adapt them to his service.”23 His DFS championed weather research, and out of German soaring came such concepts as thermal flying and wave flying. Soaring pilot Max Kegel discovered firsthand the power of storm-generated wind currents in 1926, when they caused his sailplane to rise like “a piece of paper that was being sucked up a chimney,” carrying him almost 35 miles before he could land safely.24 Used discerningly, thermals transformed motorless flight from gliding to soaring. Pioneers such as Gunter Grönhoff, Wolf Hirth, and Robert Kronfeld set notable records using combinations of ridge lift and thermals. On July 30, 1929, the courageous Grönhoff deliberately flew a sailplane with a barograph into a storm to measure its turbulence; this flight anticipated the much more extensive storm research that has continued in various nations.25 The NACA first began to look at thunderstorms in the 1930s. During that decade, the Agency’s flagship laboratory—the Langley Memorial Aeronautical Laboratory in Hampton, VA—performed a series of tests to determine the nature and magnitude of the gust loadings that occur in storm systems. The results of these tests, which engineers performed in Langley’s signature wind tunnels, helped to improve both civilian and military aircraft.26 But wind tunnels had various limitations, leading to the use of specially instrumented research airplanes to employ the sky itself as a laboratory and acquire information unobtainable by traditional tunnel research. This approach, most notably associated with the post–World War II X-series of research airplanes, led in time to such future NASA research aircraft as the Boeing 737 “flying laboratory” used to study wind shear.


23. Walter Georgii, “Ten Years’ Gliding and Soaring in Germany,” Journal of the Royal Aeronautical Society, vol. 34, no. 237 (Sept. 1930), p. 746. 24. Siepen, “On the Wings of the Wind,” p. 771. 25. Ibid., pp. 735–741; see also B.S. Shenstone and S. Scott Hall’s “Glider Development in Germany: A Technical Survey of Progress in Design in Germany Since 1922,” NACA TM No. 780 (Nov. 1935), pp. 6–8. 26. See also James R. Hansen, Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917–1958, NASA SP-4305 (Washington, DC: GPO, 1987), p. 181; and Hansen, The Bird is on the Wing: Aerodynamics and the Progress of the American Airplane (College Station, TX: Texas A&M University Press, 2003), p. 73.




Over subsequent decades, the NACA’s successor, NASA, would perform much work to help planes withstand turbulence, wind shear, and gust loadings. From the 1930s to the 1950s, one of the NACA’s major areas of research was the nature of the boundary layer and the transition from laminar to turbulent flow around an aircraft. But Langley also examined turbulence more broadly, to include gust research and the influence of meteorological turbulence upon an aircraft in flight. During the previous decade, experimenters had collected measurements of pressure distribution in wind tunnels and in flight, but not until the early 1930s did the NACA begin a systematic program to generate data that could be applied by industry to aircraft design, forming a committee to oversee loads research. Eventually, in the late 1930s, Langley created a separate structures research division with a structures research laboratory. By this time, individuals such as Philip Donely, Walter Walker, and Richard V. Rhode had already undertaken wide-ranging and influential research on flight loads that transformed understanding of the forces acting on aircraft in flight. Rhode, of Langley, won the Wright Brothers Medal in 1935 for his research on gust loads. He pioneered detailed assessments of the maneuvering loads encountered by an airplane in flight. As noted by aerospace historian James Hansen, his concept of the “sharp edge gust” revised previous thinking about gust behavior and the dangers it posed, and it became “the backbone for all gust research.”27 NACA gust loads research influenced the development of both military and civilian aircraft, as did its research on aerodynamically induced flight-surface flutter, a problem of particular concern as aircraft design transformed from the era of the biplane to that of the monoplane. The NACA also investigated the loads and stresses experienced by combat aircraft when undertaking abrupt rolling and pullout maneuvers, such as routinely occurred in aerial dogfighting and in dive-bombing.28
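The flavor of the sharp-edge gust idealization can be conveyed quantitatively. In the classic sharp-edge approximation, the incremental load factor is Δn = ρUVa / 2(W/S), where U is gust velocity, V flight speed, a the wing’s lift-curve slope, and W/S the wing loading. The sketch below is a minimal illustration with assumed values; it omits the gust-alleviation refinements used in actual design practice.

```python
def gust_load_increment(rho: float, gust_ms: float, v_ms: float,
                        lift_slope_per_rad: float, wing_loading_n_m2: float) -> float:
    """Sharp-edge gust approximation: delta-n = rho * U * V * a / (2 * W/S)."""
    return rho * gust_ms * v_ms * lift_slope_per_rad / (2.0 * wing_loading_n_m2)

# Assumed, illustrative values: sea-level air, a 30 ft/s (9.1 m/s) vertical gust,
# a 1930s transport at 60 m/s, lift-curve slope ~5 per radian, W/S ~1,200 N/m^2.
dn = gust_load_increment(1.225, 9.1, 60.0, 5.0, 1200.0)
print(f"load-factor increment ~ {dn:.2f} g")  # ~1.4 g on top of 1 g level flight
```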

A dive bomber encountered particularly punishing aerodynamic and structural loads as the pilot executed a pullout, abruptly recovering the airplane from a dive and sending it swooping back into the sky. Researchers developed charts showing the relationships between dive angle, speed, and the angle required for recovery. In 1935, the Navy used these charts to establish design requirements for its dive bombers. The loads program gave the American aeronautics community a much better understanding of load distributions between the wing, fuselage, and tail surfaces of aircraft, including high-performance aircraft, and showed how different extreme maneuvers “loaded” these individual surfaces. In his 1939 Wilbur Wright lecture, George W. Lewis, the NACA’s legendary Director of Aeronautical Research, enumerated three major questions he believed researchers needed to address:

• What is the nature or structure of atmospheric gusts?
• How do airplanes react to gusts of known structure?
• What is the relation of gusts to weather conditions?29

Answering these questions, posed at the close of the biplane era, would consume researchers for much of the next six decades, well into the era of jet airliners and supersonic flight. The advent of the internally braced monoplane accelerated interest in gust research. The long, increasingly thin, and otherwise unsupported cantilever wing was susceptible to load-induced failure if not well designed. Thus, the stresses caused by wind gusts became an essential factor in aircraft design, particularly for civilian aircraft. Building on this concern, in 1943, Philip Donely and a group of NACA researchers began design of a gust tunnel at Langley to examine aircraft loads produced by atmospheric turbulence and other unpredictable flow phenomena and to develop devices that would alleviate gusts. The tunnel opened in August 1945. It utilized a jet of air for gust simulation, a catapult for launching scaled models into steady flight, curtains for catching the model after its flight through the gust, and instruments for recording the model’s responses. For several years, the gust tunnel was useful, “often [revealing] values that were not found by the best known methods of calculation . . . in one instance, for example, the gust tunnel tests showed that it would be safe to design the airplane for load increments 17 to 22 percent less than the previously accepted values.”30

29. George W. Gray, Frontiers of Flight: The Story of NACA Research (New York: Alfred A. Knopf, 1948), p. 173.

As well, gust researchers took to the air. Civilian aircraft—such as the Aeronca C-2 light general-aviation airplane, the Martin M-130 flying boat, and the Douglas DC-2 airliner—and military aircraft, such as the Boeing XB-15 experimental bomber, were outfitted with special loads recorders (so-called “v-g recorders,” developed by the NACA), and extensive records were made of the weather-induced loads they experienced over various domestic and international air routes.31 This work was refined in the postwar era, when new generations of long-range aircraft entered air transport service and were also instrumented to record the loads they experienced during routine airline operation.32

30. Ibid., p. 174; Hansen, Engineer in Charge, p. 468. NACA researchers created the gust tunnel to provide information to verify basic concepts and theories. It ultimately became obsolete because of its low Reynolds and Mach number capabilities. After being used as a low-velocity instrument laboratory and noise research facility, the gust tunnel was dismantled in 1965. 31. Philip Donely, “Effective Gust Structure at Low Altitudes as Determined from the Reactions of an Airplane,” NACA TR-692 (1940); Walter G. Walker, “Summary of V-G Records Taken on Transport Airplanes from 1932 to 1942,” NACA WRL-453 (1942); Donely, “Frequency of Occurrence of Atmospheric Gusts and of Related Loads on Airplane Structures,” NACA WRL-121 (1944); Walker, “An Analysis of the Airspeeds and Normal Accelerations of Martin M-130 Airplanes in Commercial Transport Operation,” NACA TN-1693 (1948); and Walker, “An Analysis of the Airspeed and Normal Accelerations of Douglas DC-2 Airplanes in Commercial Transport Operations,” NACA TN-1754 (1948).



Gust load effects likewise constituted a major aspect of early transonic and supersonic aircraft testing, for the high loads involved in transiting from subsonic to supersonic speeds already posed a serious challenge to aircraft designers. Any additional loading, whether from a wind gust or shear, or from the blast of a weapon (such as the overpressure blast wave of an atomic weapon), could easily prove fatal to an already highly loaded aircraft.33 The advent of the long-range jet bomber and transport—a configuration typically having a long and relatively thin swept wing and large, thin vertical and horizontal tail surfaces—added further complications to gust research, particularly because the penalty for an abrupt gust loading could be a fatal structural failure. Indeed, on one occasion, while flying through gusty air at low altitude, a Boeing B-52 lost much of its vertical fin, though fortunately its crew was able to recover and land the large bomber.34 The emergence of long-endurance, high-altitude reconnaissance aircraft such as the Lockheed U-2 and Martin RB-57D in the 1950s, and of the long-range ballistic missile, further stimulated research on high-altitude gusts and turbulence. Though seemingly unconnected, both the high-altitude jet airplane and the rocket-boosted ballistic missile required understanding of the nature of upper-atmosphere turbulence and gusts. Both transited the upper atmospheric region: the airplane cruising in the high stratosphere for hours, and the ballistic missile or space launch vehicle passing through it within seconds on its way into space.

Accordingly, from early 1956 through December 1959, the NACA, in cooperation with the Air Weather Service of the U.S. Air Force, installed gust load recorders on Lockheed U-2 strategic reconnaissance aircraft operating from various domestic and overseas locations, acquiring turbulence data from 20,000 to 75,000 feet over much of the Northern Hemisphere. Researchers concluded that the turbulence problem would not be as severe as previous estimates and high-altitude balloon studies had indicated.35 High-altitude loitering aircraft such as the U-2 and RB-57 were followed by high-altitude, high-Mach supersonic cruise aircraft in the early to mid-1960s, typified by Lockheed’s YF-12A Blackbird and North American’s XB-70A Valkyrie, both used by NASA as Mach 3+ Supersonic Transport (SST) surrogates and supersonic cruise research testbeds. Test crews found their encounters with high-altitude gusts at supersonic speeds more objectionable than their exposure to low-altitude gusts at subsonic speeds, even though the g-loading accelerations caused by gusts were less than those experienced on conventional jet airliners.36 At the other extreme of aircraft performance, in 1961, the Federal Aviation Agency (FAA) requested NASA assistance to document the gust and maneuver loads and performance of general-aviation aircraft. Until the program was terminated in 1982, over 35,000 flight-hours of data were assembled from 95 airplanes, representing every category of general-aviation airplane, from single-engine personal craft to twin-engine business airplanes, and including such specialized types as crop-dusters and aerobatic aircraft.37

35. Thomas L. Coleman and Emilie C. Coe, “Airplane Measurements of Atmospheric Turbulence for Altitudes Between 20,000 and 55,000 Feet Over the Western part of the United States,” NACA RM-L57G02 (1957); and Thomas L. Coleman and Roy Steiner, “Atmospheric Turbulence Measurements Obtained from Airplane Operations at Altitudes Between 20,000 and 75,000 Feet for Several Areas in the Northern Hemisphere,” NASA TN-D-548 (1960).
36. Eldon E. Kordes and Betty J. Love, “Preliminary Evaluation of XB-70 Airplane Encounters with High-Altitude Turbulence,” NASA TN-D-4209 (1967); L.J. Ehernberger and Betty J. Love, “High Altitude Gust Acceleration Environment as Experienced by a Supersonic Airplane,” NASA TN-D-7868 (1975). NASA’s supersonic cruise flight test research is the subject of an accompanying essay in this volume by William Flanagan, a former Air Force Blackbird navigator.
37. Joseph W. Jewel, Jr., “Tabulations of Recorded Gust and Maneuver Accelerations and Derived Gust Velocities for Airplanes in the NASA VGH General Aviation Program,” NASA TM-84660 (1983).



Along with studies of the upper atmosphere by direct measurement came studies on how to improve turbulence detection and avoidance, and how to measure and simulate the fury of turbulent storms. In 1946–1947, the U.S. Weather Bureau sponsored a study of turbulence as part of a thunderstorm study project. Out of this effort, in 1948, researchers from the NACA and elsewhere concluded that ground radar, if properly used, could detect storms, enabling aircraft to avoid them. Weather radar became a common feature of airliners, their once-metal nose caps replaced by distinctive black radomes.38 By the late 1970s, most wind shear research was being done by specialists in atmospheric science, geophysical scientists, and those in the emerging field of mesometeorology—the study of small atmospheric phenomena, such as thunderstorms and tornadoes, and of the detailed structure of larger weather events.39 Although turbulent flow in the boundary layer is important to study in the laboratory, the violent phenomenon of microburst wind shear cannot be sufficiently understood without direct contact, investigation, and experimentation.40 Microburst loadings constitute a threat to aircraft, particularly during approach and landing. No one knows how many aircraft accidents have been caused by wind shear, though the number is certainly considerable. The NACA had done thunderstorm research during World War II, but its instrumentation was not nearly sophisticated enough to detect microburst (or thunderstorm downdraft) wind shear. When NASA joined with the FAA in 1986 to systematically fight wind shear, it had only a small pool of existing wind shear research data from which to draw.41

A revealing view taken down the throat of a wingtip vortex, formed by a low-flying cropduster. NASA.

Wind Shear Emerges as an Urgent Aviation Safety Issue

In 1972, the FAA instituted a small wind shear research program, with emphasis upon developing sensors that could plot wind speed and direction from ground level up to 2,000 feet above ground level (AGL). Even so, the agency’s major focus was on wake vortex impingement. The powerful vortexes streaming behind newer-generation wide-body aircraft could—and sometimes did—flip smaller, lighter aircraft out of control. Serious enough at high altitude, these inadvertent excursions could be disastrous close to the ground, as during landing and takeoff, where a pilot had little room to recover. By 1975, the FAA had developed an experimental Wake Vortex Advisory System, which it installed later that year at Chicago’s busy O’Hare International Airport. NASA undertook a detailed examination of wake vortex studies, both in tunnel tests and with a variety of aircraft, including the Boeing 727 and 747, the Lockheed L-1011, and smaller aircraft, such as the Gates Learjet, helicopters, and general-aviation aircraft.



But it was wind shear, not wake vortex impingement, that grew into a major civil aviation concern, and the onset came with stunning and deadly swiftness.42 Three accidents from 1973 to 1975 highlighted the extreme danger it posed. On the afternoon of December 17, 1973, while making a landing approach in rain and fog, an Iberia Airlines McDonnell-Douglas DC-10 wide-body abruptly sank below the glideslope just seconds before touchdown, impacting amid the approach lights of Runway 33L at Boston’s Logan Airport. No one died, but the crash seriously injured 16 of the 151 passengers and crew. The subsequent National Transportation Safety Board (NTSB) report determined “that the captain did not recognize, and may have been unable to recognize, an increased rate of descent” triggered “by an encounter with a low-altitude wind shear at a critical point in the landing approach.”43 Then, on June 24, 1975, Eastern Air Lines Flight 66, a Boeing 727, crashed on approach to John F. Kennedy International Airport’s Runway 22L. This time, 113 of the 124 passengers and crew perished. All afternoon, flights had encountered and reported wind shear conditions, and at least one pilot had recommended closing the runway. Another Eastern captain, flying a Lockheed L-1011 TriStar, prudently abandoned his approach and landed instead at Newark. Shortly after the L-1011 diverted, the EAL Boeing 727 impacted almost a half mile short of the runway threshold, again amid the approach lights, breaking apart and bursting into flames. Again, wind shear was to blame, but the NTSB also faulted Kennedy’s air traffic controllers for not diverting the 727 to another runway after the EAL TriStar’s earlier aborted approach.44 Just weeks later, on August 7, Continental Flight 426, another Boeing 727, crashed during a stormy takeoff from Denver’s Stapleton International Airport.

Just as the airliner began its climb after lifting off the runway, the crew encountered a wind shear so severe that they could not maintain level flight despite application of full power and maintenance of a flight attitude that ensured the wings were producing maximum lift.45 The plane pancaked in a level attitude onto flat, open ground, sustaining serious damage. No lives were lost, though 15 of the 134 passengers and crew were injured. Less than a year later, on June 23, 1976, Allegheny Airlines Flight 121, a Douglas DC-9 twin-engine medium-range jetliner, crashed during an attempted go-around at Philadelphia International Airport. The pilot, confronting “severe horizontal and vertical wind shears near the ground,” abandoned his landing approach to Runway 27R. As controllers in the airport tower watched, the straining DC-9 descended in a nose-high attitude, pancaking onto a taxiway and sliding to a stop. The fact that it hit nose-high, wings level, and on flat terrain undoubtedly saved lives. Even so, 86 of the plane’s 106 passengers and crew were seriously injured, including the entire crew.46 In these cases, wind shear brought about by thunderstorm downdrafts (microbursts), rather than the milder wind shear produced by gust fronts, caused the accidents. This led to a major reinterpretation of the wind shear–causing phenomena that most endangered low-flying planes. Before these accidents, meteorologists had believed that gust fronts, the leading edges of large domes of rain-cooled air, provided the most dangerous sources of wind shear. Now, using data gathered from the planes that had crashed and from weather radar, scientists, engineers, and designers came to realize that the small, focused, jet-like downdraft columns characteristic of microbursts produced the most threatening kind of wind shear.47

Microburst wind shear poses an insidious danger for an aircraft. An aircraft on landing approach will typically encounter the horizontal outflow of a microburst first as a headwind, which increases its lift and airspeed, tempting the pilot to reduce power. But then the airplane encounters the descending vertical column as an abrupt downdraft, and its speed and altitude both fall. As it continues onward, it exits the central downflow and meets the horizontal outflow again, now as a tailwind. At this point, the airplane is already descending at low speed. The tailwind seals its fate, robbing it of still more airspeed and, hence, lift. The airplane then stalls (that is, loses all lift) and plunges to Earth. As NASA testing would reveal, professional pilots generally need between 10 and 40 seconds of warning to avoid the dangers of wind shear.48

48. NASA Langley Research Center, “Windshear,” http://oea.larc.nasa.gov/PAIS/Windshear.html, accessed July 30, 2009.
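Such an encounter can be reduced to a single hazard number, an idea NASA later formalized in the “F-factor” hazard index associated with Roland Bowles’s airborne wind shear work, discussed later in this case study. The sketch below is an illustrative rendering only, not the certified algorithm; the sign conventions, the sample numbers, and the roughly 0.1 hazard threshold are stated here as assumptions rather than figures taken from this text.

```python
G = 9.81  # gravitational acceleration, m/s^2

def f_factor(dwx_dt_ms2: float, w_ms: float, airspeed_ms: float) -> float:
    """Wind shear hazard index: horizontal shear term plus downdraft term.

    dwx_dt_ms2: rate of change of the along-track wind (tailwind-positive), m/s^2
    w_ms:       vertical wind (updraft-positive; downdrafts are negative), m/s
    airspeed_ms: true airspeed, m/s
    Positive values are performance-decreasing; sustained values near 0.1
    are commonly described as hazardous (threshold assumed here).
    """
    return dwx_dt_ms2 / G - w_ms / airspeed_ms

# Illustrative microburst transit: a 40-knot (20.6 m/s) headwind-to-tailwind
# swing over 15 seconds, inside a 3 m/s downdraft, at 140 knots (72 m/s).
f = f_factor(20.6 / 15.0, -3.0, 72.0)
print(f"F ~ {f:.2f}")  # ~0.18: well beyond the assumed 0.1 hazard level
```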



Goaded by these accidents and by NTSB recommendations that the FAA improve its weather advisory and runway selection procedures, “step up research on methods of detecting the [wind shear] phenomenon,” and develop an aircrew wind shear training process, the FAA mandated installation at U.S. airports of a new Low-Level Windshear Alert System (LLWAS), which employed acoustic Doppler radar technically similar to the FAA’s Wake Vortex Advisory System installed at O’Hare.49 The LLWAS incorporated a variety of equipment that measured wind velocity (wind speed and direction). This equipment included a master station, which had a main computer and a system console to monitor LLWAS performance, and a transceiver, which transmitted signals to the system’s remote stations. The master station had several visual computer displays and auditory alarms for aircraft controllers. The remote stations had wind sensors made of sonic anemometers mounted on metal pipes. Each remote station was enclosed in a steel box with a radio transceiver, power supplies, and battery backup. Every airport outfitted with this system used multiple anemometer stations to effectively map the nature of wind events in and around the airport’s runways.50

49. Preston, Troubled Passage, p. 197.
50. Ibid., pp. 197–198; Cox, “Multi-Dimensional Nature,” pp. 141–142. Anemometers are tools that originated in the late Middle Ages and measure wind speed. The first anemometer, a deflection anemometer, was developed by Leonardo da Vinci. Several new varieties, including cup, pressure, and sonic anemometers, have emerged in the intervening centuries.
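The alerting logic of such a network can be illustrated with a toy comparison of each remote station’s wind vector against a centerfield reference. The 15-knot vector-difference threshold and the centerfield-comparison scheme below are assumptions drawn from common descriptions of early LLWAS installations, not details given in this text.

```python
import math

ALERT_KT = 15.0  # assumed vector-difference alert threshold, knots

def wind_vector(speed_kt: float, direction_deg: float) -> tuple:
    """Convert speed and meteorological 'from' direction to a 2-D wind vector."""
    rad = math.radians(direction_deg)
    return (-speed_kt * math.sin(rad), -speed_kt * math.cos(rad))

def shear_alert(centerfield: tuple, remote: tuple) -> bool:
    """Flag a remote station whose wind differs sharply from the centerfield wind."""
    (cx, cy), (rx, ry) = wind_vector(*centerfield), wind_vector(*remote)
    return math.hypot(rx - cx, ry - cy) >= ALERT_KT

# Centerfield: 10 kt from 180 deg. A remote station in a microburst outflow:
# 25 kt from 360 deg. The ~35 kt vector difference triggers an alert.
print(shear_alert((10.0, 180.0), (25.0, 360.0)))  # True
```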

At the end of March 1981, over 70 representatives from NASA, the FAA, the military, the airline community, the aerospace industry, and academia met at the University of Tennessee Space Institute in Tullahoma to explore weather-related aviation issues. Out of that meeting came a list of recommendations for further joint research, many of which directly addressed the wind shear issue and the need for better detection and warning systems. As the report summarized:

1. There is a critical need to increase the data base for wind and temperature aloft forecasts both from a more frequent updating of the data as well as improved accuracy in the data, and thus, also in the forecasts which are used in flight planning. This will entail the development of rational definitions of short term variations in intensity and scale length (of turbulence) which will result in more accurate forecasts which should also meet the need to improve numerical forecast modeling requirements relative to winds and temperatures aloft.
2. The development of an on-board system to detect wind induced turbulence should be beneficial to meeting the requirement for an investigation of the subjective evaluation of turbulence “feel” as a function of motion drive algorithms.
3. More frequent reporting of wind shift in the terminal area is needed along with greater accuracy in forecasting.
4. There is a need to investigate the effects of unequal wind components acting across the span of an airfoil.
5. The FAA Simulator Certification Division should monitor the work to be done in conjunction with the JAWS project relative to the effects of wind shear on aircraft performance.
6. Robert Steinberg’s ASDAR effort should be utilized as soon as possible; in fact, it should be encouraged or demanded as an operational system beneficial for flight planning, specifically where winds are involved.
7. There is an urgent need to review the way pilots are trained to handle wind shear. The present method, as indicated in the current advisory circular, of immediately pulling to stick shaker on encountering wind shear could be a dangerous procedure. It is suggested the circular be changed to recommend the procedure to hold at whatever airspeed the aircraft is at when the pilot realizes he is encountering a wind shear and apply maximum power, and that he not pull to stick shaker except to flare when encountering ground effect to minimize impact or to land successfully or to effect a go-around.
8. Need to develop a clear non-technical presentation of wind shear which will help to provide improved training for pilots relative to wind shear phenomena. Such training is of particular importance to pilots of high performance, corporate, and commercially used aircraft.
9. Need to develop an ICAO type standard terminology for describing the effects of windshear on flight performance.
10. The ATC system should be enhanced to provide operational assistance to pilots regarding hazardous weather areas and, in view of the envisioned controller workloads generated, perfecting automated transmissions containing this type of information to the cockpit as rapidly and as economically as practicable.
11. In order to improve the detection in real time of hazardous weather, it is recommended that FAA, NOAA, NWS, and DOD jointly address the problem of fragmental meteorological collection, processing, and dissemination pursuant to developing a system dedicated to making effective use of perishable weather information. Coupled with this would be the need to conduct a cost-benefit study relative to the benefits that could be realized through the use of such items as a common winds and temperature aloft reporting by use of automated sensors on aircraft.
12. Develop a capability for very accurate four to six minute forecasts of wind changes which would require terminal reconfigurations or changing runways.
13. Due to the inadequate detection of clear air turbulence, an investigation is needed to determine what has happened to the promising detection systems that have been reported and recommended in previous workshops.
14. Improve the detection and warning of windshear by developing on-board sensors as well as continuing the development of emerging technology for ground-based sensors.
15. Need to collect true three and four dimensional wind shear data for use in flight simulation programs.
16. Recommend that any systems, whether airborne or ground based, that can provide advance or immediate alert to pilots and controllers should be pursued.
17. Need to continue the development of Doppler radar technology to detect the wind shear hazard, and that this be continued at an accelerated pace.
18. Need for airplane manufacturers to take into consideration the effect of phenomena such as microbursts which produce strong periodic longitudinal wind perturbations at the aircraft phugoid frequency.
19. Consideration should be given, by manufacturers, to gust alleviation devices on new aircraft to provide a softer ride through turbulence.
20. Need to develop systems to automatically detect hazardous weather phenomena through signature recognition algorithms and automatically data linking alert messages to pilots and air traffic controllers.51

Given the subsequent history of NASA’s research on the wind shear problem (and others), many of these recommendations presciently forecast the direction of Agency and industry research and development efforts. Unfortunately, those efforts did not come in time to prevent yet another series of microburst-related accidents, a series of catastrophes that effectively elevated microburst wind shear research to the status of a national air safety emergency. By the early 1980s, 58 U.S. airports had installed LLWAS. Although LLWAS constituted a great improvement over verbal observations and warnings by pilots communicated to air traffic controllers, LLWAS sensing technology was not mature or sophisticated enough to remedy the wind shear threat. Early LLWAS sensors were installed without full knowledge of microburst characteristics. They were usually installed in too few numbers, placed too close to the airport (instead of farther out on the approach and departure paths of the runways), and, worst, were optimized to detect gust fronts (the traditional pre-Fujita way of regarding wind shear), not the columnar downdrafts and horizontal outflows characteristic of the most dangerous shear flows. Thus, wind shear could still strike, and viciously so. On July 9, 1982, Clipper 759, a Pan American World Airways Boeing 727, took off from the New Orleans airport amid showers and “gusty, variable, and swirling” winds.52 Almost immediately, it began to descend, having attained an altitude of no more than 150 feet. It hit trees, continued onward for almost another half mile, and then crashed into residential housing, exploding in flames. All 146 passengers and crew died, as did 8 people on the ground; 11 houses were destroyed or “substantially” damaged, and another 16 people on the ground were injured. The NTSB concluded that the probable cause of the accident was “the airplane’s encounter during the liftoff and initial climb phase of flight with a microburst-induced wind shear which imposed a downdraft and a decreasing headwind, the effects of which the pilot would have had difficulty recognizing and reacting to in time for the airplane’s descent to be arrested before its impact with trees.” Significantly, it also noted, “Contributing to the accident was the limited capability of current ground based low level wind shear detection technology [the LLWAS] to provide definitive guidance for controllers and pilots for use in avoiding low level wind shear encounters.”53

52. National Transportation Safety Board, “Aircraft Accident Report: Pan American World Airways, Clipper 759, N4737, Boeing 727-235, New Orleans International Airport, Kenner, Louisiana, July 9, 1982,” Report NTSB-AAR-83-02 (Mar. 21, 1983).



This tragic accident impelled Congress to direct the FAA to join with the National Academy of Sciences (NAS) to “study the state of knowledge, alternative approaches and the consequences of wind shear alert and severe weather condition standards relating to take off and landing clearances for commercial and general aviation aircraft.”54 As the FAA responded to these misfortunes and accelerated its research on wind shear, NASA researchers accelerated their own wind shear research. In the late 1970s, NASA Ames Research Center had contracted with Bolt, Beranek, and Newman, Inc., of Cambridge, MA, to perform studies of “the effects of wind-shears on the approach performance of a STOL aircraft . . . using the optimal-control model of the human operator.” In layman’s terms, this meant that the company used existing data to mathematically simulate the combined pilot-aircraft reaction to various wind shear situations and to deduce and explain how the pilot should manipulate the aircraft for maximum safety in such situations. Although useful, these studies did not eliminate the wind shear problem.55 Throughout the 1980s, NASA research into thunderstorm phenomena involving wind shear continued. Double-vortex thunderstorms and their potential effects on aviation were of particular interest. Double-vortex storms involve a pair of vortexes, present in the storm’s dynamic updraft, that rotate in opposite directions. This pair forms when the cylindrical thermal updraft of a thunderstorm penetrates the upper-level air and there is a large amount of vertical wind shear between the lower- and upper-level air layers. Researchers produced a numerical tornado prediction scheme based on the movement of the double-vortex thunderstorm. A component of this scheme was the Energy-Shear Index (ESI), which researchers calculated from radiosonde measurements. The index integrated parameters representative of thermal instability and the blocking effect.

NASA 809, a Martin B-57B flown by Dryden research crews in 1982 for gust and microburst research. NASA.

environments appropriate for the development of double-vortex thunderstorms and tornadoes, which would help pilots and flight controllers determine safe flying conditions.56

56. J.R. Connell, et al., "Numeric and Fluid Dynamic Representation of Tornadic Double Vortex Thunderstorms," NASA CR-171023 (1980).

In 1982, in partnership with the National Center for Atmospheric Research (NCAR), the University of Chicago, the National Oceanic and Atmospheric Administration (NOAA), the National Science Foundation (NSF), and the FAA, NASA vigorously supported the Joint Airport Weather Studies (JAWS) effort. NASA research pilots and flight research engineers from the Ames-Dryden Flight Research Facility (now the NASA Dryden Flight Research Center) participated in the JAWS program from mid-May through mid-August 1982, using a specially instrumented Martin B-57B jet bomber. NASA researchers selected the B-57B for its strength, flying it on low-level wind shear research flights around the Sierra Nevada near Edwards Air Force Base (AFB), CA, around the Rockies near Denver, CO, around Marshall Space Flight Center, AL, and near Oklahoma City, OK. Raw data were digitally collected on microbursts, gust fronts, mesocyclones,


tornadoes, funnel clouds, and hail storms; converted into engineering format at the Langley Research Center; and then analyzed at Marshall Space Flight Center and the University of Tennessee Space Institute at Tullahoma. Researchers found that some microbursts recorded during the JAWS program created wind shear too extreme for landing or departing airliners to survive had they encountered it at an altitude of less than 500 feet.57 In the most severe case recorded, the B-57B experienced an abrupt 30-knot speed increase within less than 500 feet of distance traveled and then a gradual decrease of 50 knots over 3.2 miles, clear evidence of encountering the headwind outflow of a microburst and then the tailwind outflow as the plane transited the microburst.58

At the same time, the Center for Turbulence Research (CTR), run jointly by NASA and Stanford University, pioneered the use of an early parallel computer, the Illiac IV, to perform large turbulence simulations, something previously unachievable. CTR performed the first of these simulations and made the data available to researchers around the globe. Scientists and engineers tested theories, evaluated modeling ideas, and, in some cases, calibrated measuring instruments on the basis of these data. A 5-minute motion picture of simulated turbulent flow provided an attention-catching visual for the scientific community.59

In 1984, NASA and FAA representatives met at Langley Research Center to review the status of wind shear research and progress toward developing sensor systems and preventing disastrous accidents. Out of this, researcher Roland L. Bowles conceptualized a joint NASA–FAA

program to develop an airborne detector system, perhaps one that would be forward-looking and thus able to furnish real-time warning to an airline crew of wind shear hazards in its path. Unfortunately, before this program could yield beneficial results, yet another wind shear accident followed the dismal succession of its predecessors: the crash of Delta Flight 191 at Dallas-Fort Worth International Airport (DFW) on August 2, 1985.60

60. Chambers, Concept to Reality, p. 188.

Delta Flight 191 was a Lockheed L-1011 TriStar wide-body jumbo jet. As it descended toward Runway 17L amid a violent, turbulence-producing thunderstorm, a storm cell produced a microburst directly in the airliner's path. The L-1011 entered the fury of the outflow when only 800 feet above ground and at a low speed and energy state. As the L-1011 transitioned through the microburst, a lift-enhancing headwind of 26 knots abruptly dropped to zero and, as the plane sank in the downdraft column, then became a 46-knot tailwind, robbing it of lift. At low altitude, the pilots had insufficient room for recovery, and so, just 38 seconds after beginning its approach, Delta Flight 191 plunged to Earth, a mile short of the runway threshold. It broke up in a fiery heap of wreckage, slewing across a highway and crashing into some water tanks before coming to rest, burning furiously. The accident claimed the lives of 136 passengers and crewmembers and the driver of a passing automobile. Just 24 passengers and 3 of its crew survived; only 2 were without injury.61 Among the victims were several senior staff members from IBM, including computer pioneer Don Estridge, father of the IBM PC. Once again, the NTSB blamed an "encounter at low altitude with a microburst-induced, severe wind shear" from a rapidly developing thunderstorm on the final approach course. But the accident illustrated as well the immature capabilities of the LLWAS at that time; only after Flight 191 had crashed did the DFW LLWAS detect the fatal microburst.62

61. National Transportation Safety Board, "Aircraft Accident Report: Delta Air Lines, Inc., Lockheed L-1011-385-1, N726DA, Dallas/Fort Worth International Airport, Texas, August 2, 1985," Report NTSB-AAR-86-05 (Aug. 15, 1986). See also James Ott, "Inquiry Focuses on Wind Shear As Cause of Delta L-1011 Crash," Aviation Week & Space Technology (Aug. 12, 1985), pp. 16–19; F. Caracena, R. Ortiz, and J. Augustine, "The Crash of Delta Flight 191 at Dallas-Fort Worth International Airport on 2 August 1985: Multiscale Analysis of Weather Conditions," National Oceanic and Atmospheric Administration Report TR ERL 430-ESG-2 (1987); T. Theodore Fujita, "DFW Microburst on August 2, 1985," Satellite and Mesometeorology Research Project Research Paper 217, Dept. of Geophysical Sciences, University of Chicago, NTIS Report PB-86-131638 (1986).

62. Chambers, Concept to Reality, p. 188.


The Dallas accident resulted in widespread shock because of its large number of fatalities. It particularly affected airline crews, as American Airlines Capt. Wallace M. Gillman recalled vividly at a NASA-sponsored 1990 meeting of international experts in wind shear:


About one week after Delta 191's accident in Dallas, I was taxiing out to take off on Runway 17R at DFW Airport. Everybody was very conscious of wind shear after that accident. I remember there were some storms coming in from the northwest and we were watching it as we were in a line of airplanes waiting to take off. We looked at the wind socks. We were listening to the tower reports from the LLWAS system, the winds at various portions around the airport. I was number 2 for takeoff and I said to my co-pilot, "I'm not going to go on this runway." But just at that time, the number 1 crew in line, Pan Am, said, "I'm not going to go." Then the whole line said, "We're not going to go," and the tower taxied us all down the runway, took us about 15 minutes, down to the other end. By that time the storm had kind of passed by and we all launched to the north.63

63. Wallace M. Gillman, "Industry Terms of Reference," in Spady, et al., eds., Airborne Wind Shear Detection and Warning Systems, pt. 1, p. 16.

Taming Microburst: NASA's Wind Shear Research Effort Takes Wing
The Dallas crash profoundly accelerated NASA and FAA wind shear research efforts. Two weeks after the accident, responding to calls from concerned constituents, Representative George Brown of California requested a NASA presentation on wind shear and subsequently made a fact-finding visit to the Langley Research Center. Dr. Jeremiah F. Creedon, head of the Langley Flight Systems Directorate, briefed the Congressman on the wind shear problem and potential technologies that might alleviate it. Creedon informed Brown that Langley researchers were running a series of modest microburst and wind shear modeling projects, and that an FAA manager, George "Cliff" Hay, and NASA Langley research engineer Roland L. Bowles had a plan underway for a comprehensive airborne wind shear detection research program. During the briefing, Brown asked how much money it would take; Creedon estimated several million dollars. Brown remarked that the amount was "nothing"; Creedon


replied tellingly, "It's a lot of money if you don't have it." As the Brown party left the briefing, one of his aides confided to a Langley manager, "NASA [has] just gotten itself a wind shear program." The combination of media attention, public concern, and congressional interest triggered the development of "a substantial, coordinated interagency research effort to address the wind shear problem."64

64. Lane E. Wallace, Airborne Trailblazer: Two Decades with NASA Langley's 737 Flying Laboratory, NASA SP-4216 (Washington, DC: GPO, 1994), p. 41.

On July 24, 1986, NASA and the FAA established the National Integrated Windshear Plan, an umbrella project overseeing several initiatives at different agencies.65 The joint effort responded both to congressional directives and to National Transportation Safety Board recommendations issued after documentation of the numerous recent wind shear accidents. NASA Langley Research Center's Roland L. Bowles subsequently oversaw a rigorous plan of wind shear research called the Airborne Wind Shear Detection and Avoidance Program (AWDAP), which included the development of onboard sensors and pilot training. Building upon earlier supercomputer modeling studies by Michael L. Kaplan, Fred H. Proctor, and others, NASA researchers developed the Terminal Area Simulation System (TASS), which took into consideration a variety of storm parameters and characteristics, enabling numerical simulation of microburst formation. Out of this came data that the FAA was able to use to build standards for the certification of airborne wind shear sensors. As well, the FAA created a flight


safety program that supported NASA development of wind shear detection technologies.66

65. NASA Langley Research Center, "NASA Facts On-line: Making the Skies Safe from Windshear," http://oea.larc.nasa.gov/PAIS/Windshear.html, accessed July 15, 2009. For subsequent research, see for example Roland L. Bowles, "Windshear Detection and Avoidance: Airborne Systems Survey," Proceedings of the 29th IEEE Conference on Decision and Control, Honolulu, HI (New York: IEEE Publications, 1990); E.M. Bracalente, C.L. Britt, and W.R. Jones, "Airborne Doppler Radar Detection of Low Altitude Windshear," AIAA Paper 88-4657 (1988); Dan D. Vicroy, "Investigation of the Influence of Wind Shear on the Aerodynamic Characteristics of Aircraft Using a Vortex-Lattice Method," NASA LRC, NTRS Report 88N17619 (1988); Vicroy, "Influence of Wind Shear on the Aerodynamic Characteristics of Airplanes," NASA TP-2827 (1988); "Wind Shear Study: Low-Altitude Wind Shear," Aviation Week & Space Technology (Mar. 28, 1983); Terry Zweifel, "Optimal Guidance during a Windshear Encounter," Scientific Honeywell (Jan. 1989); Zweifel, "Temperature Lapse Rate as an Adjunct to Windshear Detection," paper presented at the Airborne Wind Shear Detection and Warning Systems Third Combined Manufacturer's and Technologist's Conference, Hampton, VA, Oct. 16–18, 1990; Zweifel, "The Effect of Windshear During Takeoff Roll on Aircraft Stopping Distance," NTRS Report 91N11699 (1990); Zweifel, "Flight Experience with Windshear Detection," NTRS Report 91N11684 (1990).

At NASA Langley, the comprehensive wind shear studies started with laboratory analysis and continued into simulation and flight evaluation. Some of the sensor systems that Langley tested worked better in rain, while others performed more successfully in dry conditions.67 Most were tested using Langley's modified Boeing 737 systems testbed.68 This research airplane not only studied microbursts and wind shear under the Airborne Windshear Research Program, but also tested developmental electronic and computerized cockpit displays ("glass cockpits" and Synthetic Vision Systems), developmental microwave landing systems, and Global Positioning System (GPS) navigation.69

NASA's Airborne Windshear Research Program did not completely resolve the problem of wind shear, but "its investigation of microburst detection systems helped lead to the development of onboard monitoring systems that offered airliners another way to avoid potentially lethal situations."70 The program achieved much and gave confidence to those pursuing practical applications. It had three major goals. The first was to characterize the wind shear threat in a form that indicated the hazard level confronting an aircraft. The second was to develop airborne remote-sensor technology to provide accurate, forward-looking wind shear detection. The third was to design flight management systems and concepts to transfer this information to pilots in such a way that they could effectively respond to a wind shear threat. The program had to pursue these goals under tight time constraints.71

Time was of the essence, partly because the public had demanded a solution to the scourge of microburst wind shear and because a proposed FAA regulation stipulated that any "forward-looking" (predictive) wind shear detection technology produced by NASA be swiftly transferred to the airlines. An airborne technology giving pilots advance warning of wind shear would allow them the time to increase engine power, "clean up"

the aircraft aerodynamically, increase penetration speed, and level the airplane before entering a microburst, so that the pilot would have more energy, altitude, and speed to work with or to maneuver around the microburst completely. But many doubted that a system incorporating all of these concepts could be perfected. The technologies offering the most potential were microwave Doppler radar, Doppler Light Detecting and Ranging (LIDAR, a laser-based system), and passive infrared radiometry systems. All of these forward-looking technologies were challenging, however, and developing and exploiting them took a minimum of several years.

At Langley, versions of the different detection systems were "flown" as simulations against computer models that re-created past wind shear accidents. But computer simulations could only go so far; the new sensors had to be tested in actual wind shear conditions. Accordingly, the FAA and NASA expanded their 1986 memorandum of understanding in May 1990 to support flight research evaluating the efficacy of advanced wind shear detection systems integrating airborne and ground-based wind shear measurement methodologies. Researchers swiftly discovered that pilots needed as much as 20 seconds of advance warning if they were to avert or survive an encounter with microburst wind shear.72

72. P. Douglas Arbuckle, Michael S. Lewis, and David A. Hinton, "Airborne Systems Technology Application to the Windshear Threat," Paper 96-5.7.1, 20th Congress of the International Council of the Aeronautical Sciences, Sorrento, Italy, 1996; see also Wallace, Airborne Trailblazer, ch. 5.

Key to developing a practical warning system was deriving a suitable means of assessing the level of threat that pilots would face, because this would influence the necessary course of action to avoid potential disaster. Fortunately, NASA Project Manager Roland Bowles devised a hazard index called the "F-Factor." The F-Factor, as ultimately refined by Bowles and his colleagues Michael Lewis and David Hinton, indicated how much specific excess thrust an airplane would require to fly through wind shear without losing altitude or airspeed.73

73. Fred H. Proctor, David A. Hinton, and Roland L. Bowles, "A Windshear Hazard Index," NASA LRC NTRS Report 200.001.16199 (2000). Specific excess thrust is thrust minus the drag of the airplane, divided by the airplane's weight. It determines the climb gradient (altitude gain vs. horizontal distance), expressed as γ = (T - D) / W, where γ is the climb gradient, T is thrust, D is drag, and W is weight. See Roger D. Schaufele, The Elements of Aircraft Preliminary Design (Santa Ana: Aries Publications, 2000), p. 18, and Arbuckle, Lewis, and Hinton, "Airborne Systems Technology Application," p. 2.

For instance, a typical twin-engine jet transport plane might have engines capable


of producing 0.17 excess thrust on the F-Factor scale. If a microburst wind shear registered higher than 0.17, the airplane would not be able to fly through it without losing airspeed or altitude. The F-Factor provided a way for information from any kind of sensor to reach the pilot in an easily recognizable form. The technology also had to locate the position and track the movement of dangerous air masses and provide information on the wind shear's proximity and volume.74

Doppler-based wind shear sensors could measure only the first term in the F-Factor equation (the rate of change of horizontal wind). This limitation could result in underestimation of the hazard. Fortunately, there were several ways to estimate changes in vertical wind from radial wind measurements, using computerized equations and algorithms. Although error ranges in a device's measurement of the F-Factor could not be eliminated, they were taken into account when producing the airborne system.75

The Bowles team's derivation and refinement of the F-Factor constituted a major element of NASA's wind shear research; to some, it was "the key contribution of NASA in the taming of the wind-shear threat." The FAA recognized its significance by incorporating the F-Factor in its regulations, directing that at F-Factors of 0.13 or greater, wind shear warnings must be issued.76
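The index rewards a concrete illustration. The following sketch, in Python, is a minimal reconstruction assuming the commonly published form of the index, F = (dWx/dt)/g - Wh/V, in which Wx is the horizontal wind along the flightpath (tailwind positive), Wh is the vertical wind (positive upward), V is true airspeed, and g is the acceleration of gravity; the function and scenario values are illustrative, not NASA's flight code.

    G = 9.81  # acceleration of gravity, m/s^2

    def f_factor(dwx_dt, w_h, airspeed):
        """Wind shear hazard index (dimensionless).

        dwx_dt:   rate of change of the along-path wind, tailwind
                  positive (m/s per second)
        w_h:      vertical wind, positive upward (m/s)
        airspeed: true airspeed (m/s)

        Positive F is performance-decreasing shear: the aircraft needs
        that much specific excess thrust merely to hold speed and altitude.
        """
        return dwx_dt / G - w_h / airspeed

    # A transport with 0.17 of specific excess thrust available, on
    # approach at 75 m/s (about 145 knots), meets a microburst whose
    # headwind decays at 3 m/s per second while a 6 m/s downdraft acts:
    f = f_factor(dwx_dt=3.0, w_h=-6.0, airspeed=75.0)
    print(round(f, 2))   # 0.39, far beyond the 0.17 available
    print(f >= 0.13)     # True: above the FAA alert threshold

Both terms matter: in this example the downdraft alone contributes 0.08, which is why a sensor blind to vertical wind understates the hazard.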

In 1988, NASA and researchers from Clemson University worked on new ways to eliminate clutter (data not related to wind shear) from information received via Doppler and other kinds of radar used on an airborne platform. Such methods, including antenna steering and adaptive filtering, were somewhat different from those used to eliminate clutter from information received on a ground-based platform. This was because the airborne environment had unique problems, such as large clutter-to-signal ratios, ever-changing range requirements, and lack of repeatability.77

The accidents of the 1970s and 1980s stimulated research on a variety of wind shear predictive technologies and methodologies. Langley's success in pursuing both enabled the FAA to decree in 1988 that all commercial airline carriers were required to install wind shear detection devices by the end of 1993. Most airlines decided to go with reactive systems, which detect the presence of wind shear once the plane has already flown into it. For American, Northwest, and Continental—three airlines already testing predictive systems capable of detecting wind shear before an aircraft flew into it—the FAA extended its deadline to 1995, to permit refinement and certification of these more demanding and potentially more valuable sensors.78

From 1990 onward, NASA wind shear researchers were particularly energetic, publishing and presenting widely and distributing technical papers throughout the aerospace community. Working with the FAA, they organized and sponsored well-attended wind shear conferences that drew together other researchers, aviation administrators, and—very importantly—airline pilots and air traffic controllers. Finally, cognizant of the pressing need to transfer the science and technology of wind shear research out of the laboratory and onto the flight line, NASA and the FAA invited potential manufacturers to work with the agencies in pursuing wind shear detector development.79

The invitations were welcomed by industry. Three important avionics manufacturers—Allied Signal, Westinghouse, and Rockwell Collins—sent engineering teams to Langley. These teams followed NASA's wind shear effort closely, using the Agency's wind shear simulations to enhance the capabilities of their various systems. In 1990, Lockheed introduced its Coherent LIDAR Airborne Shear Sensor (CLASS), developed under contract to NASA Langley. CLASS was a predictive system allowing pilots to avoid the hazards of low-altitude wind shear under all weather conditions. CLASS would detect a thunderstorm downburst early in its development

and emphasize avoidance rather than recovery. After consultation with airline and military pilots, Lockheed engineers decided that the system should have a 2- to 4-kilometer range and should provide a warning time of 20 to 40 seconds. A secondary purpose of the system would be to provide predictive warnings of clear air turbulence. In conjunction with NASA, Lockheed conducted a 1-year flight evaluation program on Langley's 737 during the following year to measure line-of-sight wind velocities from many wind fields, evaluating this against data obtained via air- and ground-based radars and accelerometer-based systems and thus acquiring a comparative database.80

Also in 1990, using technologies developed by NASA, Turbulence Prediction Systems of Boulder, CO, successfully tested its Advance Warning Airborne System (AWAS) on a modified Cessna Citation, a small twin-jet research aircraft operated by the University of North Dakota. Technicians loaded AWAS into the luggage compartment in front of the pilot. Pilots intentionally flew the plane into numerous wind shear events over the course of 66 flights, including several wet microbursts in Orlando, FL, and a few dry microbursts in Denver. On the Cessna, AWAS measured the thermal characteristics of microbursts to predict their presence during takeoff and landing. In 1991, AWAS units were flown aboard three American Airlines MD-80s and three Northwest Airlines DC-9s to study and improve the system's nuisance-alert response. Technicians also installed a Honeywell Windshear Computer in the planes, which Honeywell had developed in light of NASA research. The computer processed the data gathered by AWAS via the aircraft's external measuring instruments. AWAS also flew aboard the NASA Boeing 737 during summer 1991. Unfortunately, results from these research flights were not conclusive, in part because NASA conducted research flights outside AWAS's normal operating envelope, and, in an attempt to compensate for differences in airspeed, NASA personnel sometimes overrode automatic features. These complications did not stop the development of more sophisticated versions of the system and its ultimate FAA certification.81

After analyzing data from the Dallas and Denver accidents, Honeywell researchers had concluded that temperature lapse rate, the drop in temperature with increasing altitude, could indicate wind shear caused by both wet and dry microbursts. Lapse rate could not, of course, communicate whether air acceleration was horizontal or vertical. Nonetheless, the lapse rate could be used to make reactive systems more "intelligent," "hence providing added assurance that a dangerous shear has occurred." Because convective activity was often associated with turbulence, lapse rate measurements could also be useful in warning of impending "rough air." Out of this work evolved the first-generation Honeywell Windshear Detection and Guidance System, which gained wide acceptance.82
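A short worked example suggests why lapse rate is a useful cue. The sketch below is illustrative only: the comparison against the dry-adiabatic rate of roughly 9.8 °C per kilometer is a textbook rule of thumb, and the function is an assumption for demonstration, not Honeywell's algorithm.

    DRY_ADIABATIC_C_PER_KM = 9.8  # temperature drop of dry air rising 1 km

    def lapse_rate(temp_low_c, temp_high_c, alt_low_m, alt_high_m):
        """Environmental lapse rate, in degrees C lost per km of altitude."""
        return (temp_low_c - temp_high_c) / ((alt_high_m - alt_low_m) / 1000.0)

    # Hypothetical readings on a hot, dry afternoon: 35 C at 300 m,
    # 5 C at 3,300 m above the field.
    rate = lapse_rate(35.0, 5.0, 300.0, 3300.0)
    print(round(rate, 1))                   # 10.0 C per km
    print(rate >= DRY_ADIABATIC_C_PER_KM)   # True: convectively unstable

A lapse rate at or beyond the dry-adiabatic value marks the kind of convectively unstable air in which evaporating rain can drive the dense, cool downdraft of a microburst.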

Supporting its own research activities and the larger goal of air safety awareness, NASA developed a thorough wind shear training and familiarization program for pilots and other interested parties. Flightcrews "flew" hundreds of simulated wind shears. Crews and test personnel flew rehearsal flights for 2 weeks in the Langley and Wallops areas before deploying to Orlando or Colorado for actual in-flight microburst encounters in 1991 and 1992. The NASA Langley team tested three airborne systems to predict wind shear. In the creation of these systems, it was often assisted by technology application experts from the Research Triangle Institute of Research Triangle Park, NC.83

The first system tested was a Langley-sponsored Doppler microwave radar, whose development was overseen by Langley's Emedio "Brac" Bracalente and the Langley Airborne Radar Development Group. It sent a microwave radar signal ahead of the plane to detect raindrops and other moisture in the air. The returning signal provided information on the motion of raindrops and moisture particles, which the system translated into wind speed. Microwave radar worked best in damp or wet conditions, not in dry conditions. Rockwell International's Collins Air Transport Division in Cedar Rapids, IA, made the radar transmitter, extrapolated from the standard Collins 708 weather radar. NASA's Langley Research Center in Hampton, VA, developed the receiver/detector subsystem and the signal-processing algorithms and hardware for the wind shear application. So enthusiastic and confident were the members of the Doppler microwave test team that they designed their own flight suit patch, styling themselves the "Burst Busters," with an international slash-and-circle "stop" sign overlaying a schematic of a microburst.84

The second system was a Doppler LIDAR. Unlike radio-wave-transmitting radar, LIDAR used a laser, reflecting energy from aerosol particles rather than from water droplets. This system had fewer problems with ground clutter (interference) than Doppler radar did, but it did not work as well as the microwave system did in heavy rain. The system was made by the Lockheed Corporation's Missiles and Space Company in Sunnyvale, CA; United Technologies Optical Systems, Inc., in West Palm Beach, FL; and Lassen Research of Chico, CA.85

Researchers noted that an "inherent limitation" of the radar and LIDAR systems was their inability to measure any velocities running perpendicular to the system's line of sight. A microburst's presence could be detected by measuring changes in the horizontal velocity profile, but the inability to measure a perpendicular downdraft could result in an underestimation of the magnitude of the hazard, including its spatial size.86

The third plane-based system used an infrared detector to find temperature changes in the airspace in front of the plane. It monitored carbon dioxide's thermal signatures to find cool columns of air, which often indicate microbursts. The system was less expensive and less complex than the others but also less precise, because it could not directly measure wind speed.87
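That line-of-sight limitation is easy to quantify. In the sketch below, a forward-looking Doppler sensor sees only the projection of the true wind vector onto its beam; the geometry is standard, but the angle and wind values are illustrative assumptions.

    import math

    def measured_radial_wind(u_horizontal, w_vertical, elevation_deg):
        """Wind component a Doppler sensor actually sees along its beam (m/s).

        u_horizontal:  along-track horizontal wind (m/s)
        w_vertical:    vertical wind, positive upward (m/s)
        elevation_deg: beam elevation above the horizon
        """
        theta = math.radians(elevation_deg)
        return u_horizontal * math.cos(theta) + w_vertical * math.sin(theta)

    # A 10 m/s downdraft embedded in a 15 m/s outflow, viewed by a
    # sensor tilted just 3 degrees above the horizon:
    seen = measured_radial_wind(15.0, -10.0, 3.0)
    print(round(seen, 1))   # 14.5 m/s: the downdraft is nearly invisible

Almost all of the vertical motion, the very term the F-Factor needs, vanishes from the measurement, which is why the airborne systems had to estimate vertical wind indirectly from the horizontal profile.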

A June 1991 radar plot of a wind shear at Orlando, showing the classic radial outflow. This one is approximately 5 miles in diameter. NASA.


In 1990–1992, Langley's wind shear research team accumulated and evaluated data from 130 sensor-evaluation research flights made using the Center's 737 testbed.88 Flight-test crews flew research missions in the Langley local area, Philadelphia, Orlando, and Denver. Risk mitigation was an important program requirement. Thus, wind shear investigation flights were flown at higher speeds than airliners typically flew, so that the 737 crew would have a better opportunity to evade any hazard it encountered. As well, preflight ground rules stipulated that no penetrations be made into conditions with an F-Factor greater than 0.15.

Of all the systems tested, the airborne radar functioned best. Data were accumulated during 156 weather runs, 109 of them in the turbulence-prone Orlando area. The 737 made 15 penetrations of microbursts at altitudes ranging from 800 to 1,100 feet. During the tests, the team evaluated the radar at various tilt angles to assess any impact from ground clutter (a common problem for airborne radar) upon the fidelity of the airborne system. Aircraft entry speed into the microburst threat region had little effect on clutter suppression. Altogether, the airborne Doppler radar tests collected data from approximately 30 microbursts, as well as 20 gust fronts, with every microburst detected by the airborne radar. F-Factors measured with the airborne radar showed "excellent agreement" with the F-Factors measured by Terminal Doppler Weather Radar (TDWR), and comparison of airborne and TDWR data likewise indicated "comparable results."89 As Joseph Chambers noted subsequently, "The results of the test program demonstrated that Doppler radar systems offered the greatest promise for early introduction to airline service. The Langley forward-looking Doppler radar detected wind shear consistently and at longer ranges than other systems, and it was able to provide 20 to 40 seconds warning of upcoming microburst."90 The Burst Busters clearly had succeeded. Afterward, forward-looking Doppler radar was adopted by most airlines.

NASA Langley’s wind shear team at Orlando in the cockpit of NASA 515. Left to right: Program Manager Roland Bowles, research pilot Lee Person, Deputy Program Manager Michael Lewis, research engineer David Hinton, and research engineer Emedio Bracalente. Note Bracalente’s “Burst Buster” shoulder patch. NASA.

Assessing NASA's Wind Shear Research Effort
NASA's wind shear research effort involved complex, cooperative relationships among the FAA, industry manufacturers, and several NASA Langley directorates, with significant political oversight, scrutiny, and public interest. It faced many significant technical challenges, not the least of which were potentially dangerous flight tests and evaluations.91 Yet, during a 7-year effort, NASA, along with industry technicians and researchers, rose to the challenge. Like many classic NACA research projects, it was tightly focused and mission-oriented, taking "a proven,

significant threat to aviation and air transportation and [developing] new technology that could defeat it."92 It drew on technical capabilities and expertise from across the Agency—in meteorology, flight systems, aeronautics, engineering, and electronics—and from researchers in industry, academia, and agencies such as the National Center for Atmospheric Research. This collaborative effort spawned several important breakthroughs and discoveries, particularly the derivation of the F-Factor and the invention of Langley's forward-looking Doppler microwave radar wind shear detector. As a result of this Government-industry-academic partnership, the risk of microburst wind shear could at last be mitigated.93

In 1992, the NASA–FAA Airborne Windshear Research Program was nominated for the Robert J. Collier Trophy, aviation's most prestigious honor. Industry evaluations described the project as "the perfect role for NASA in support of national needs" and "NASA at its best." Langley's Jeremiah Creedon said, "we might get that good again, but we can't get any better."94 In any other year, the program might easily have won, but it was the NASA–FAA team's ill luck to be competing that year with the revolutionary Global Positioning System, which had proven its value in spectacular fashion during the Gulf War of 1991. Not surprisingly, then, it was GPS, not the wind shear program, that was awarded the Collier Trophy. But if the wind shear team members lost their shot at this prestigious award, they could nevertheless take satisfaction in knowing that together, their agencies had developed and demonstrated a "technology base" enabling the manufacture of many subsequent wind shear detection and prediction systems, to the safety and undoubted benefit of the traveling public and airmen everywhere.95

NASA engineers had coordinated their research with commercial manufacturers from the start of wind shear research and detector development, so its subsequent transfer to the private sector occurred quickly and effectively. Annual conferences hosted jointly by NASA Langley and the FAA during the project's evolution provided a ready forum for manufacturers to review new technology and for NASA researchers to obtain a better understanding of the issues that manufacturers were

encountering as they developed airborne equipment to meet FAA certification requirements. The fifth and final combined manufacturers' and technologists' airborne wind shear conference was held at NASA Langley on September 28–30, 1993, marking an end to what NASA and the FAA jointly recognized as "the highly successful wind shear experiments conducted by government, academic institutions, and industry." From this point onward, emphasis would shift to certification, regulation, and implementation as the technology transitioned into commercial service.96

There were some minor issues among NASA, the airlines, and airplane manufacturers about how to calibrate, and where to place, the various components of the system for maximum effectiveness. Sometimes the airlines would begin testing installed systems before NASA finished its testing. Airline representatives said that they were pleased with the system, but they noted that their pilots were highly trained professionals who, historically, had often avoided wind shear on their own. Pilots, who of course had direct control over airplane performance, wished to have detailed information about the system's technical components. Airline representatives debated the necessity of considering the performance specifications of particular aircraft when installing the airborne system but ultimately went with a single Doppler radar system that could work with all passenger airliners.97

Through all this, Langley researchers worked with the FAA and industry to develop certification standards for the wind shear sensors. These standards involved the wind shear hazard, the cockpit interface, alerts given to flightcrews, and sensor performance levels. NASA research, as it had in other aspects of aeronautics over the history of American civil aviation, formed the basis for these specifications.98

96. V.E. Delnore, ed., Airborne Windshear Detection and Warning Systems: Fifth and Final Combined Manufacturers' and Technologists' Conference, NASA CP-10139, pts. 1–2 (1994).

97. Vicroy, "Vertical Wind Estimation from Horizontal Wind Measurements: Results of American In-service Evaluations," NASA–FAA Wind Shear Review Meeting, Sept. 28, 1993.

98. G.F. Switzer, J.V. Aanstoos, F.H. Proctor, and D.A. Hinton, "Windshear Database for Forward-Looking Systems Certification," NASA TM-109012 (1993); and Charles L. Britt, George F. Switzer, and Emedio M. Bracalente, "Certification Methodology Applied to the NASA Experimental Radar System," paper presented at the Airborne Windshear Detection and Warning Systems' 5th and Final Combined Manufacturers' and Technologists' Conference, pt. 2, pp. 463–488, NTIS Report 95N13205 (1994).

Although its airborne sensor development effort garnered the greatest attention during the 1980s and 1990s, NASA Langley also developed several ground-based wind shear detection systems. One was the


low-level wind shear alert system installed at over 100 United States airports. By 1994, ground-based radar systems (Terminal Doppler Weather Radar) that could predict when such shears would come were in place at hundreds of airports, but plane-based systems continued to be necessary because not all of the thousands of airports around the world had such systems. Of the plane-based systems, NASA's forward-looking predictive radar worked best.99

The end of the tyranny of the microburst did not come without one last serious accident that had its own consequences for wind shear alleviation. On July 2, 1994, US Air Flight 1016, a twin-engine Douglas DC-9, crashed and burned after flying through a microburst during a missed approach at Charlotte-Douglas International Airport. The crew had realized too late that conditions were not favorable for landing on Runway 18R, had tried to go around, and had been caught by a violent microburst that sent the airplane into trees and a home. Of the 57 passengers and crew, 37 perished, and the rest were injured, 16 seriously. The NTSB faulted the crew for continuing its approach "into severe convective activity that was conducive to a microburst," for "failure to recognize a windshear situation in a timely manner," and for "failure to establish and maintain the proper airplane attitude and thrust setting necessary to escape the windshear." As well, it blamed a "lack of real-time adverse weather and windshear hazard information dissemination from air traffic control."100

Several factors came together to make the accident more tragic. In 1991, US Air had installed a Honeywell wind shear

detector in the plane that could furnish the crew with both a visual warning light and an audible "wind shear, wind shear, wind shear" warning once an airplane entered a wind shear. But it failed to function during this encounter. Its operating algorithms were designed to minimize "nuisance alerts," such as routine changes in aircraft motions induced by flap movement. When Flight 1016 encountered its fatal shear, the plane's landing flaps were in transition as the crew executed its missed approach, and this likely played a role in the system's failure to function. As well, Charlotte had been scheduled to be the fifth airport to receive Terminal Doppler Weather Radar, a highly sensitive and precise wind shear detection system. But a land dispute involving the cost of property that the airport was trying to purchase for the radar site bumped it from 5th to 38th on the list to get the new TDWR. Thus, when the accident occurred, Charlotte had only the far less capable LLWAS in service.101

101. Ibid., pp. 15 and 85. As the NTSB report makes clear, cockpit transcripts and background signals confirmed the failure of the Honeywell system to alert the crew.
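The role of the flap transition can be visualized with a schematic fragment of alert logic. The sketch below is purely illustrative, a guess at the general shape of a reactive detector's nuisance-alert suppression rather than Honeywell's proprietary algorithm, but it shows how an inhibit meant to mask flap-induced accelerations can also mask a genuine shear met during reconfiguration.

    def reactive_alert(f_measured, flaps_in_transition, threshold=0.13):
        """Schematic reactive wind shear alert with a nuisance inhibit.

        A reactive system infers shear from accelerations the aircraft is
        already experiencing; flap travel also changes those accelerations,
        so alerts are suppressed while the flaps move.
        """
        if flaps_in_transition:
            return False  # disturbance presumed to be configuration change
        return f_measured >= threshold

    # A severe shear encountered exactly while flaps retract during a
    # missed approach never trips the alert:
    print(reactive_alert(0.25, flaps_in_transition=True))    # False
    print(reactive_alert(0.25, flaps_in_transition=False))   # True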


Clearly, to survive the dangers of wind shear, airline crews needed aircraft equipped with forward-looking predictive wind shear warning systems, airports equipped with up-to-date, precise wind shear Doppler radar detection systems, and air traffic controllers cognizant of the problem and willing to unhesitatingly shift flights away from potential wind shear threats. Finally, pilots needed to exercise extreme prudence when operating in conditions conducive to wind shear formation.

Not quite 5 months later, on November 30, 1994, Continental Airlines Flight 1637, a Boeing 737 jetliner, lifted off from Washington-Reagan Airport, Washington, DC, bound for Cleveland. It is doubtful whether any passengers realized that they were helping usher in a new chapter in the history of aviation safety. This flight marked the introduction of a commercial airliner equipped with a forward-looking sensor for detecting and predicting wind shear. The sensor was a Bendix RDR-4B developed by Allied Signal Commercial Avionic Systems of Fort Lauderdale, FL. The RDR-4B was the first of the predictive Doppler microwave radar wind shear detection systems based upon NASA Langley's research to gain FAA certification, achieving this milestone on September 1, 1994. It consisted of an antenna, a receiver-transmitter, and a Planned Position Indicator (PPI), which displayed the direction and distance of a wind shear microburst as well as the regular weather display. Since then, the number of wind shear accidents has dropped precipitously, reflecting the proliferation and synergistic benefits accruing from both air- and land-based advanced wind shear sensors.102

102. "Technology for Safer Skies"; "Making the Skies Safe from Windshear."

In the mid-1990s, as part of NASA's Terminal Area Productivity Program, Langley researchers used numerical modeling to predict weather in the area of airport terminals. Their large-eddy simulation (LES) model had a meteorological framework that allowed the prediction and depiction of the interaction of an airplane's wake vortexes (the rotating turbulence that streams from an aircraft's wingtips as it passes through the air) with environments containing crosswind shear, stratification, atmospheric turbulence, and humidity. Meteorological effects can, to a large degree, determine the behavior of wake vortexes. Turbulence can gradually decay the rotation of a vortex, robbing it of strength, and other dynamic instabilities can cause a vortex to collapse. Results from the numerical simulations helped engineers to develop useful algorithms to determine the way aircraft should be spaced when aloft in the narrow approach corridors surrounding the airport terminal, in the presence of wake turbulence. The models utilized both two and three dimensions to obtain the broadest possible picture of the interacting phenomena and provided a solid basis for the development of the Aircraft Vortex Spacing System (AVOSS), which safely increased airport capacity.103
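The logic connecting ambient turbulence to aircraft spacing can be caricatured in a few lines. The toy model below assumes a simple exponential decay of vortex circulation with a time constant set by the turbulence level; it is a sketch of the kind of weather-dependent reasoning AVOSS embodied, not the AVOSS algorithm itself, and all values are invented for illustration.

    import math

    def follower_delay(gamma0, gamma_safe, tau):
        """Seconds until a wake vortex decays to a safe circulation.

        gamma0:     initial circulation behind the leader (m^2/s)
        gamma_safe: circulation a following aircraft can tolerate (m^2/s)
        tau:        decay time constant; stronger ambient turbulence
                    means a smaller tau and faster decay (s)
        """
        return tau * math.log(gamma0 / gamma_safe)

    # Calm air versus turbulent air, same aircraft pair:
    for tau in (120.0, 60.0):
        t = follower_delay(gamma0=400.0, gamma_safe=150.0, tau=tau)
        print(f"tau = {tau:.0f} s -> delay about {t:.0f} s")
    # tau = 120 s -> delay about 118 s; tau = 60 s -> about 59 s.

In this caricature, doubling the turbulent decay rate halves the required spacing, the sort of result that let AVOSS safely compress arrival streams when the weather permitted.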

In 1999, researchers at NASA's Goddard Space Flight Center in Greenbelt, MD, concluded a 20-year experiment on wind-stress simulations and equatorial dynamics. The use of existing datasets and the creation of models that paired atmosphere and ocean forecasts of changes in sea surface temperatures helped the researchers obtain predictions of climatic conditions over large areas of Earth, even months and years in advance. Researchers found that these conditions affect the speed and timing of the transition from laminar to turbulent airflow in a plane's boundary layer, and their work contributed to a more sophisticated understanding of aerodynamics.104

In 2008, researchers at NASA Goddard compared various NASA satellite datasets and global analyses from the National Centers for Environmental Prediction to characterize properties of the Saharan Air Layer (SAL), a layer of dry, dusty, warm air that moves westward off the Sahara Desert of Africa and over the tropical Atlantic. The researchers also examined the effects of the SAL on hurricane development. Although the SAL causes a degree of low-level vertical wind shear of which pilots must be cognizant, the researchers concluded that the SAL's effects on hurricane and microburst formation were negligible.105

Advanced research into turbulence will be a vital part of the aerospace sciences as long as vehicles move through the atmosphere. Since 1997, Stanford has been one of five universities sponsored by the U.S. Department of Energy as a national Advanced Simulation and Computing Center. Today, researchers at Stanford's Center for Turbulence Research use computer clusters many times more powerful than the pioneering Illiac IV. For large-scale turbulence research projects, they also have access to cutting-edge computational facilities at the National Laboratories, including the Columbia computer at NASA Ames Research Center, which has 10,000 processors. Such advanced research into turbulent flow continues to help steer aerodynamics developments as the aerospace community confronts the challenges of the 21st century.106

In 2003, President George W. Bush signed the Vision 100–Century of Aviation Reauthorization Act.107 This initiative established within the FAA a joint planning and development office to oversee and manage the Next Generation Air Transportation System (NextGen). NextGen incorporated seven goals:

1. Improve the level of safety, security, efficiency, quality, and affordability of the National Airspace System and aviation services.
2. Take advantage of data from emerging ground-based and space-based communications, navigation, and surveillance technologies.
3. Integrate data streams from multiple agencies and sources to enable situational awareness and seamless global operations for all appropriate users of the system, including users responsible for civil aviation, homeland security, and national security.
4. Leverage investments in civil aviation, homeland security, and national security and build upon current air traffic management and infrastructure initiatives to meet system performance requirements for all system uses.

5. Be scalable to accommodate and encourage substantial growth in domestic and international transportation and anticipate and accommodate continuing technology upgrades and advances.
6. Accommodate a range of aircraft operations, including airlines, air taxis, helicopters, general aviation, and unmanned aerial vehicles.
7. Take into consideration, to the greatest extent practicable, design of airport approach and departure flight paths to reduce exposure of noise and emissions pollution on affected residents.108


NASA is now working with the FAA, industry, the academic community, the Departments of Commerce, Defense, Homeland Security, and Transportation, and the Office of Science and Technology Policy to turn the ambitious goals of NextGen into air transport reality. Continual improvement of Terminal Doppler Weather Radar and the Low-Level Windshear Alert System is an essential element of the reduced-weather-impact goals within the NextGen initiatives. Service life extension programs are underway to maintain and improve airport TDWR and the older LLWAS capabilities.109 There are LLWAS installations at 116 airports worldwide, and an improvement plan for the program, completed in 2008, consists of updating system algorithms and creating new information and alert displays to increase wind shear detection capabilities, reduce the number of false alarms, and lower maintenance costs.110

FAA and NASA researchers and engineers have not been content to rest on their accomplishments and have continued to perfect the wind shear prediction systems they pioneered in the 1980s and 1990s. Building upon this fruitful NASA–FAA turbulence and wind shear partnership, the FAA has developed Graphical Turbulence Guidance (GTG), which provides clear air turbulence forecasts out to 12 hours in advance for planes flying at altitudes of 20,000 feet and higher. An improved system, GTG-2, will enable forecasts out to 12 hours for planes flying at lower altitudes, down to 10,000 feet.111

108. Ibid.

109. Section 3, DOT 163.

110. Section 3, DOT 171.

111. Section 3, DOT 171.

As of 2010, forward-looking


predictive Doppler microwave radar systems of the type pioneered by Langley are installed on most passenger aircraft.

This introduction to NASA research on the hazards of turbulence, gusts, and wind shear offers but a glimpse of the detailed work undertaken by Agency staff. However brief, it furnishes yet another example of how NASA, and the NACA before it, has contributed to aviation safety. This is due, in no small measure, to the unique qualities of its professional staff. The enthusiasm and dedication of those who worked NASA's wind shear research programs, and the gust and turbulence studies of the NACA earlier, have been evident throughout the history of both agencies. Their work has helped the air traveler evade the hazards of wild winds, turbulence, and storm, to the benefit of all who journey through the world's skies.

A lightning strike reveals the breadth, power, and majesty of this still mysterious electromagnetic phenomenon. NOAA.


CASE 2
Coping With Lightning: A Lethal Threat to Flight

Barrett Tillman and John L. Tillman

The beautiful spectacle and terrible power of lightning have always inspired fear and wonder. In flight, it has posed a significant challenge. While the number of airships, aircraft, and occupants lost to lightning has been small, those losses offer sobering evidence that lightning is a hazard warranting intensive study and preventative measures. This is an area of NASA research that crosses between the classic fields of aeronautics and astronautics, and that has profound implications for both.

“

I LEARNED MORE ABOUT LIGHTNING from flying at night over Bosnia while wearing night vision goggles than I ever learned from a meteorologist. You’d occasionally see a green flash as a bolt discharged to the ground, but that was nothing compared to what was happening inside the clouds themselves. Even a moderate-sized cloud looked like a bubbling witches’ cauldron, with almost constant green discharges left and right, up and down. You’d think, “Bloody hell! I wouldn’t want to fly through that!” But of course you do, all the time. You just don’t notice if you don’t have the goggles.”1 So stated one veteran airman of his impressions with lightning. Lightning is an electrical discharge in the atmosphere usually generated by thunderstorms but also by dust storms and volcanic eruptions. Because only about a fourth of discharges reach the ground, lightning represents a disproportionate hazard to aviation and rocketry. In any case, lightning is essentially an immense spark that can be many miles long.2

Lightning generates radio waves. Scientists at the National Aeronautics and Space Administration (NASA) discovered that very low frequency (VLF) waves cause a gap between the inner and outer Van Allen radiation belts surrounding Earth. The gap offers satellites a potential safe zone from solar outburst particle streams. But, as will be noted, protection of spacecraft from lightning and electromagnetic pulses (EMPs) remains a lasting concern.

There are numerous types of lightning. By far the most common is the streak variety, which actually is the return stroke in open air. Most lightning occurs inside clouds and is seldom witnessed inside thunderstorms. Other types include: ball (spherical, semipersistent), bead (cloud to ground), cloud-to-cloud (also known as sheet or fork lightning), dry (witnessed in the absence of moisture), ground-to-cloud, heat (too distant for thunder to be heard), positive (also known as high-voltage lightning), ribbon (in high crosswinds), rocket (horizontal lightning at cloud base), sprites (above thunderstorms, including blue jets), staccato (short cloud to ground), and triggered (caused by aircraft, volcanoes, or lasers).

Every year, some 16 million thunderstorms form in the atmosphere; thus, in any particular hour, Earth experiences over 1,800. Estimates of the average global lightning flash frequency vary from 30 to 100 per second. Satellite observations produce lower figures than did prior scientific studies yet still record more than 3 million flashes worldwide each day.3

3. Data from weather archive at http://www.newton.dep.anl.gov/askasci/wea00/wea00239.htm, accessed Nov. 30, 2009.

Between 1959 and 1994, lightning strikes in the United States killed 3,239 people and injured a further 9,818, a measure of the lethality of this common phenomenon.4

4. Joseph R. Chambers, Concept to Reality: Contributions of the NASA Langley Research Center to U.S. Civil Aircraft of the 1990s, NASA SP-2003 (Washington, DC: GPO, 2003), p. 173.

Two American regions are notably prone to ground strikes: Florida and the High Plains, including the foothills of the Rocky Mountains. Globally, lightning is most common in the tropics, and Florida accordingly records the most summer lightning strikes per day in the U.S. Heat differentials between land and water on the three sides of peninsular Florida, over its lakes and swamps, and along its panhandle coast drive air circulations that spin off thunderstorms year-round, although most intensely in summer.
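These frequency figures are simple to sanity-check. The following lines are illustrative arithmetic only, reproducing the hourly and daily totals from the stated rates:

    storms_per_year = 16_000_000
    print(storms_per_year / (365 * 24))   # about 1,826 storms in any given hour

    seconds_per_day = 24 * 60 * 60
    for flashes_per_second in (30, 100):
        print(flashes_per_second * seconds_per_day)
    # 2,592,000 to 8,640,000 flashes per day, bracketing the satellites'
    # count of more than 3 million.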


Lightning: What It Is, What It Does
Despite recent increases in understanding, scientists are still somewhat mystified by lightning. Modern researchers might concur with Stone Age shamans and Bronze Age priests that it partakes of the celestial. Lightning is a form of plasma, the fourth state of matter, after solids, liquids, and gases. Plasma is an ionized gas in which negatively charged electrons have been stripped by high energy from atoms and molecules, creating a cloud of electrons, neutrons, and positively charged ions. As star stuff, plasma is by far the most common state of matter in the universe. Interstellar plasmas, such as solar wind particles, occur at low density. Plasmas found on Earth include flames, the polar auroras, and lightning. Lightning, in effect, brings the conditions of outer space fleetingly to Earth. The leader of a bolt might travel at 134,000 miles per hour (mph). The energy released instantaneously heats the air around the discharge to between 36,000 and 54,000 degrees Fahrenheit (°F), roughly three to five times the Sun's surface temperature. The sudden, astronomical increase in local pressure and temperature causes the atmosphere within and around a lightning bolt to expand rapidly, compressing the surrounding clear air into a supersonic shock wave, which decays into the acoustic wave perceived as thunder. Ranging from a sharp, loud crack to a long, low rumble, the sound of a thunderclap is determined by the hearer's distance from the flash and by the type of lightning.

Lightning originates most often in cumulonimbus thunderclouds. The bases of such large, anvil-shaped masses may stretch for miles. Their tops can bump up against, spread out along, and sometimes blast through the tropopause: the boundary between the troposphere (the lower portion of the atmosphere, in which most weather occurs) and the higher stratosphere. The altitude of the lower stratosphere varies with season and latitude, from about 5 miles above sea level at the poles in winter to 10 miles near the equator. The tropopause is not a "hard" ceiling. Energetic thunderstorms, particularly in the tropics, may punch into the lower stratosphere and oscillate up and down for hours in a multicycle pattern.


A Lightning Primer
The conditions, if not the mechanics, that generate lightning are now well known. In essence, this atmospheric fire is started by rubbing particles together. But there is still no agreement on which processes

NASA’s Contributions to Aeronautics


ignite lightning. Current hypotheses focus on the separation of electric charge and the generation of an electric field within a thunderstorm. Recent studies further suggest that lightning initiation requires ice, hail, and semifrozen water droplets, called "graupel." Storms that do not produce large quantities of ice usually do not develop lightning.5

5. NOAA Online School for Weather, "How Lightning is Created," at http://www.srh.noaa.gov/jetstream/lightning/lightning.htm, accessed Nov. 30, 2009.

Graupel forms when super-cooled water droplets condense around a snowflake nucleus into a sphere of rime, from 2 to 5 millimeters across. Scientific debate continues as experts grapple with the mysteries of graupel, but the stages of lightning creation in thunderstorms are clear, as outlined by the National Weather Service of the National Oceanic and Atmospheric Administration (NOAA).

First comes charge separation. Thunderstorms are turbulent, with strong updrafts and downdrafts regularly occurring close to one another. The updrafts lift water droplets from warmer lower layers to heights between 35,000 and 70,000 feet, miles above the freezing level. Simultaneously, downdrafts drag hail and ice from colder upper layers. When the opposing air currents meet, water droplets freeze, releasing heat, which keeps hail and ice surfaces slightly warmer than their surrounding environment, so that graupel, a "soft hail," forms. Electrons carry a negative charge. As newly formed graupel collides with more water droplets and ice particles, electrons are sheared off the ascending particles, charging them positively. The stripped electrons collect on the descending bits, charging them negatively. The process results in a storm cloud with a negatively charged base and a positively charged top.

Once that charge separation has been established, the second step is the generation of an electrical field within the cloud and, somewhat like a mirror image, an electrical field below the storm cloud. Electrical opposites attract, and insulators inhibit current flow. The separation of positive and negative charges within a thundercloud generates an electric field between its top and base. This field strengthens with further separation of these charges into positive and negative pools. But the atmosphere acts as an insulator, inhibiting electric flow, so an enormous charge must build up before lightning can occur. When that high charge threshold is finally crossed, the strength of the electric field overpowers the atmosphere's insulation, unleashing lightning. Another electrical field develops along Earth's surface below the negatively charged storm base,


where positively charged particles begin to pool on land or sea. Wherever the storm goes, the positively charged field—responsible for cloud-to-ground lightning—will follow it. Because the electric field within the storm is much stronger than the shadowing positive charge pool, most lightning (about 75 to 80 percent) remains within the clouds and is thus not attracted groundward.

The third phase is the building of the initial stroke that shoots between the cloud and the ground. As a thunderstorm moves, the pool of positively charged particles traveling with it along the ground gathers strength. The difference in charge between the base of the clouds and the ground grows, leading positively charged particles to climb up taller objects such as houses, trees, and telephone poles. Eventually a "stepped leader," a channel of negative charge, descends from the bottom of the storm toward the ground. Invisible to humans, it shoots to the ground in a series of rapid steps, each happening more quickly than the blink of an eye. While this negative leader works its way toward Earth, a positive charge collects in the ground and in objects resting upon it. This accumulation of positive charge "reaches out" to the approaching negative charge with its own channel, called a "streamer." When these channels connect, the resulting electrical transfer appears to the observer as lightning. Finally, a return stroke of lightning flows along a charge channel about 0.39 inches wide between the ground and the cloud. After the initial lightning stroke, if enough charge is left over, additional strokes will flow along the same channel, giving the bolt its flickering appearance.

Land struck by a bolt may reach more than 3,300 °F, hot enough to melt almost instantly the silica in conductive soil or sand, fusing the grains together. Within about a second, the fused grains cool into fulgurites: normally hollow glass tubes that can extend some distance into the ground, showing the path of the lightning and its dispersion over the surface.

The tops of trees, skyscrapers, and mountains lie closer to the base of storm clouds than does low-lying ground, so such objects are commonly struck by lightning. The less atmospheric insulation that lightning must burn through, the more easily it strikes. The tallest object beneath a storm will not necessarily suffer a hit, however, because the opposite charges may not accumulate around the highest local point or in the clouds above it. Lightning can strike an open field rather than a nearby line of trees.

Lightning leader development depends not only upon the electrical breakdown of air, which requires about 3 million volts per meter, but

2

71

NASA’s Contributions to Aeronautics

2

on prior channel carving. Ambient electric fields required for lightning leader propagation can be one or two orders of magnitude less than the electrical breakdown strength. The potential gradient inside a developed return stroke channel is on the order of hundreds of volts per meter because of intense channel ionization, resulting in a power output on the order of a megawatt per meter for a vigorous return stroke current of 100,000 amperes (100 kiloamperes, kA). Negative, Positive, Helpful, and Harmful Most lightning forms in the negatively charged region under the base of a thunderstorm, whence negative charge is transferred from the cloud to the ground. This so-called “negative lightning” accounts for over 95 percent of strikes. An average bolt of negative lightning carries an electric current of 30 kA, transferring a charge of 5 coulombs, with energy of 500 megajoules (MJ). Large lightning bolts can carry up to 120 kA and 350 coulombs. The voltage is proportional to the length of the bolt.6 Some lightning originates near the top of the thunderstorm in its cirrus anvil, a region of high positive charge. Lightning formed in the upper area behaves similarly to discharges in the negatively charged storm base, except that the descending stepped leader carries a positive charge, while its subsequent ground streamers are negative. Bolts thus created are called “positive lightning,” because they deliver a net positive charge from the cloud to the ground. Positive lightning usually consists of a single stroke, while negative lightning typically comprises two or more strokes. Though less than 5 percent of all strikes consist of positive lightning, it is particularly dangerous. Because it originates in the upper levels of a storm, the amount of air it must burn through to reach the ground is usually much greater. Therefore, its electric field typically is much stronger than a negative strike would be and generates enormous amounts of extremely low frequency (ELF) and VLF waves. Its flash duration is longer, and its peak charge and potential are 6 to 10 times greater than a negative strike, as much as 300 kA and 1 billion volts! Some positive lightning happens within the parent thunderstorm and hits the ground beneath the cloud. However, many positive strikes occur near the edge of the cloud or may even land more than 10 miles away, where perhaps no one would recognize risk or hear thunder. 6. Richard Hasbrouck, “Mitigating Lightning Hazards,” Science & Technology Review (May 1996), p. 7.
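The figures quoted above lend themselves to a quick arithmetic cross-check. The short Python sketch below uses only the averages cited in the text; the 1.5-kilometer cloud-base gap is an assumed illustrative value, and the whole computation is a back-of-the-envelope estimate rather than a model of any real stroke.

    # Arithmetic cross-check of the lightning figures quoted in the text.
    # The 1.5 km cloud-base gap is an assumed value for illustration.

    BREAKDOWN_FIELD = 3e6   # V/m, approximate dielectric breakdown of air
    LEADER_FIELD = 3e4      # V/m, one to two orders of magnitude less

    peak_current = 30e3     # A, average negative stroke
    charge = 5.0            # C, charge transferred by an average stroke
    energy = 500e6          # J, energy dissipated by an average stroke

    # Implied average potential across the channel: E = Q * V -> V = E / Q
    potential = energy / charge          # 1.0e8 V, about 100 million volts

    # Rough stroke duration if the charge flowed at the peak current
    duration = charge / peak_current     # ~1.7e-4 s, a few hundred microseconds

    gap = 1.5e3                          # m, assumed cloud-base height
    naive = BREAKDOWN_FIELD * gap        # ~4.5e9 V to break down the whole gap
    leader = LEADER_FIELD * gap          # ~4.5e7 V along a pre-ionized channel

    print(f"implied channel potential: {potential:.1e} V")
    print(f"equivalent stroke duration: {duration:.1e} s")
    print(f"whole-gap breakdown: {naive:.1e} V vs. leader path: {leader:.1e} V")

The whole-gap figure dwarfs the roughly 100 million volts implied by the average energy and charge, which is why stepwise leader propagation along a pre-ionized channel, requiring a far weaker ambient field, is essential to the process described above.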

72

Case 2 | Coping With Lightning: A Lethal Threat to Flight

Such positive lightning strikes are called "bolts from the blue." Positive lightning may be the main type of cloud-to-ground lightning during winter months or may develop in the late stages of a thunderstorm. It is believed to be responsible for a large percentage of forest fires and power-line damage, and it poses a threat to high-flying aircraft. Scientists believe that recently discovered high-altitude discharges called "sprites" and "elves" result from positive lightning. These phenomena occur well above parent thunderstorms, at heights from 18 to 60 miles, in some cases reaching heights traversed only by transatmospheric systems such as the Space Shuttle.

Lightning is by no means a uniformly damaging force. Fires started by lightning, for example, are necessary in the life cycles of some plants, including economically valuable tree species. Thanks to the evolution and spread of land plants, atmospheric oxygen probably reached the 13-percent concentration required for wildfires more than 420 million years ago, in the Paleozoic Era; fossil charcoal from that period is evidence of lightning-caused range fires. In 2003, NASA-funded scientists learned that lightning produces ozone, a molecule composed of three oxygen atoms. High in the stratosphere (which begins about 6 miles above sea level at midlatitudes), ozone shields the surface of Earth from harmful ultraviolet radiation and makes the land hospitable to life, but low in the troposphere, where most weather occurs, it is an unwelcome byproduct of manmade pollutants. NASA's researchers were surprised to find that more low-altitude ozone develops naturally over the tropical Atlantic because of lightning than from the burning of fossil fuels or of vegetation cleared for agriculture.

Outdoors, humans can be injured or killed by lightning directly or indirectly. No place outside is truly safe, although some locations are more exposed and dangerous than others; lightning has harmed victims in improvised shelters and sheds. An enclosure of conductive material does, however, offer refuge: an automobile is an example of such an elementary Faraday cage. Property damage is more common than injury or death. Around a third of all electric power-line failures and many wildfires result from lightning. Electrical and electronic devices, such as telephones, computers, and modems, also may be damaged by lightning when overcurrent surges reach them through plug-in outlets, phone jacks, or Ethernet cables.


The Lightning Hazard in Aeronautics and Astronautics: A Brief Synopsis

Since only about one-fourth of discharges reach Earth's surface, lightning presents a disproportionate hazard to aviation and rocketry. Commercial aircraft are frequently struck by lightning, but airliners are built to reduce the hazard, thanks in large part to decades of NASA research. Nevertheless, almost every type of aircraft, from gliders to jet airliners, has been destroyed or severely damaged by lightning. The following is a partial listing of aircraft losses related to lightning:

• August 1940: a Pennsylvania Central Airlines Douglas DC-3A dove into the ground near Lovettsville, VA, killing all 25 aboard (including Senator Ernest Lundeen of Minnesota), after "disabling of the pilots by a severe lightning discharge in the immediate neighborhood of the airplane, with resulting loss of control."7
• June 1959: a Trans World Airlines (TWA) four-engine Lockheed Starliner with 68 passengers and crew was destroyed near Milan, Italy.
• August 1963: a turboprop Air Inter Vickers Viscount crashed on approach to Lyon, France, killing all 20 on board plus 1 person on the ground.
• December 1963: a Pan American Airlines Boeing 707 crashed at night when struck by lightning over Maryland. All 81 aboard perished.
• April 1966: Abdul Salam Arif, President of Iraq, died in a helicopter accident, reportedly in a thunderstorm that could have involved lightning.
• April 1967: an Iranian Air Force C-130B was destroyed by lightning near Mamuniyeh. The 23 passengers and crew all died.
• Christmas Eve 1971: a Lockheed Electra of Líneas Aéreas Nacionales Sociedad Anónima (LANSA) was destroyed over Peru, with 1 survivor among the 92 souls on board.
• May 1976: an Iranian Air Force Boeing 747 was hit during descent to Madrid, Spain, killing all 17 aboard.
• November 1978: a U.S. Air Force (USAF) C-130E was struck by lightning near Charleston, SC, and crashed fatally, with six aboard.
• September 1980: a Kuwaiti C-130 crashed after a lightning strike near Montelimar, France. The eight-man crew was killed.
• February 1988: a Swearingen Metro operated by Nürnberger Flugdienst was hit near Mulheim, Germany, with all 21 aboard killed.
• January 1995: a Super Puma helicopter en route to a North Sea oil platform was struck in the tail rotor, but the pilot autorotated to a water landing. All 16 people aboard were safely recovered.
• April 1999: a British glider was struck, forcing both pilots to bail out; they landed safely.


Additionally, lightning posed a persistent threat to rocket-launch operations, forcing extensive use of protective systems such as lightning rods and "tripwire" devices: small rockets trailing conductive wires that can trigger premature cloud-to-ground strokes, reducing the risk of more powerful lightning strokes.

The classic example was the launch of Apollo 12, on November 14, 1969. "The flight of Apollo 12," NASA historian Roger E. Bilstein has written, "was electrifying, to say the least."8 During its ascent, the vehicle built up a massive static charge that abruptly discharged, causing a brief loss of power. It had been an exceptionally close call. Earlier, the launch had been delayed while technicians dealt with a liquid hydrogen leak; had a discharge struck the fuel-air mix of the leak, the conflagration would have been disastrous. Three decades earlier, a form of lightning (a brush discharge, commonly called "St. Elmo's fire") that ignited a hydrogen gas-air mix had been blamed by investigators for the loss of the German airship Hindenburg at Lakehurst, NJ, in 1937.9

8. Roger E. Bilstein, Stages to Saturn: A Technological History of the Apollo/Saturn Launch Vehicles, NASA SP-4206 (Washington, DC: NASA, 1980), p. 374.
9. U.S. Department of Commerce, Bureau of Air Commerce, Robert W. Knight, The Hindenburg Accident: A Comparative Digest of the Investigations and Findings, with the American and Translated German Reports Included, Report No. 11 (Washington, DC: GPO, 1938).


Flight Research on Lightning

Benjamin Franklin's famous kite experiments in the 1750s constituted the first investigation of lightning's effect upon "air vehicles." Though it is uncertain that Franklin personally conducted such tests, they certainly were done by others whom he influenced. But nearly 200 years passed before empirical data were assembled for airplanes.10

Probably the first systematic study of lightning effects on aircraft was conducted in Germany in 1933 and was immediately translated by NASA's predecessor, the National Advisory Committee for Aeronautics (NACA). German researcher Heinrich Koppe noted diverse opinions on the subject. He cited the belief that any aircraft struck by lightning "would be immediately destroyed or at least set on fire," and, contrarily, that because there was no direct connection between the aircraft and the ground, "there could be no force of attraction and, consequently, no danger."11

Koppe began his survey detailing three incidents in which "the consequences for the airplanes were happily trivial." However, he expanded the database to 32 occasions in 6 European nations over 8 years. (He searched for reports from America but found none at the time.) By discounting incidents of St. Elmo's fire and a glider episode, Koppe had 29 lightning strikes to evaluate. All but 3 of the aircraft struck had extended trailing antennas at the moment of impact. His conclusion was that wood and fabric aircraft were more susceptible to damage than were metal airframes, "though all-metal types are not immune." Propellers frequently attracted lightning, with metal-tipped wooden blades being more susceptible than all-metal props. While no fatalities occurred in the cases Koppe studied, he did note disturbing effects upon aircrew, including temporary blindness, short-term stunning, and brief paralysis; in each case, fortunately, no lingering effects occurred.12

Koppe called for measures to mitigate the effects of lightning strikes, including housing electrical wires in metal tubes in wood airframes and fitting "lightning protection plates" on external surfaces. He said radio masts and the radio sets themselves should be protected. One occasionally overlooked effect was "electrostriction," which Koppe defined as a "very heavy air pressure effect" involving the mutual attraction of parallel current paths toward the area of the discharge's main path. Koppe also suggested a shield on the bottom of the aircraft to attract ionized air. He concluded: "airplanes are not 'hit' by lightning, neither do they 'accidentally' get into the path of a stroke. The hits to airplanes are rather the result of a release of more or less heavy electrostatic discharges whereby the airplane itself forms a part of the current path."13

American studies during World War II expanded upon prewar examinations in the United States and elsewhere. A 1943 National Bureau of Standards (NBS, now the National Institute of Standards and Technology, NIST) analysis concluded that the power of a lightning bolt was so enormous—from 100 million to 1 billion volts—that there was "no possibility of interposing any insulating barrier that can effectively resist it." Therefore, aircraft designers needed to provide alternate paths for the discharge via "lightning conductors."14

Postwar evaluation reinforced Koppe's 1933 observations, especially regarding lightning effects upon airmen: temporary blindness (from seconds to 10 minutes), momentary loss of hearing, observation of electrical effects ranging from sparks to "a blinding blue flash," and psychological effects. The latter were often caused more by the violent sensations attending entry into a turbulent storm front than by the lightning itself.15

Drawing upon British data, the NACA's 1946 study further detailed atmospheric discharges by altitude bands from roughly 6,500 to 20,500 feet, with the maximum horizontal gradient at around 8,500 feet. Size and configuration of aircraft became recognized factors in lightning exposure: a larger aircraft presents more surface area to the atmosphere, and the moisture and dust particles clinging to its airframe have greater potential for drawing a lightning bolt than those on a smaller aircraft. Aircraft speed also was considered, because the ram-air effect naturally forced particles closer together.16

A Weather Bureau survey of more than 150 strikes from 1935 to 1944 defined a clear "danger zone": aircraft flying at or near freezing temperatures and roughly at 1,000 to 2,000 feet above ground level (AGL). The most common factors were temperatures of 28–34 °F and altitudes between 5,000 and 8,000 feet AGL. Only 15 percent of strikes occurred above 10,000 feet.17

On February 19, 1971, a Beechcraft B90 King Air twin-turboprop business aircraft owned by Marathon Oil was struck by a bolt of lightning while descending through 9,000 feet preparatory to landing at Jackson, MI. The strike caused "widespread, rather severe, and unusual" damage. The airplane suffered "the usual melted metal and cracked nonmetallic materials at the attachment points" but in addition sustained a local structural implosion on the inboard portions of the lower right wing between the fuselage and right engine nacelle, damage to both flaps, impact-and-crush-type damage to one wingtip at an attachment point, electrical arc pitting of flap support and control rod bearings, a hole burned in a ventral fin, missing rivets, and a brief loss of power. "Metal skins were distorted," NASA inspectors noted, "due to the 'magnetic pinch effect' as the lightning current flowed through them." Pilots J.R. Day and J.W. Maxie recovered and landed the aircraft safely. Marathon received a NASA commendation for taking numerous photographs of record and contacting NASA so that a much more detailed examination could be performed.18

The jet age brought greater exposure to lightning, prompting further investigation by NOAA (created in 1970 to succeed the Environmental Science Services Administration, which had replaced the Weather Bureau in 1965). The National Severe Storms Laboratory conducted Project Rough Rider, measuring the physical characteristics and effects of thunderstorms, including lightning. The project employed two-seat F-100F and T-33A jets to record the intensity of lightning strikes over Florida and Oklahoma in the mid-1960s and later. The results of the research flights were studied and disseminated to airlines, providing safety guidelines for flight in the vicinity of thunderstorms.19
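Restated as a rule of thumb, the survey's "most common factors" amount to a simple conditional test. The sketch below is illustrative only; the thresholds are the survey's historical statistics, not operational guidance, and the function name is ours.

    # Rule-of-thumb restatement of the Weather Bureau survey findings
    # quoted above (28-34 deg F and 5,000-8,000 feet AGL as the most
    # common strike conditions). Illustrative only.

    def in_danger_zone(oat_f: float, altitude_agl_ft: float) -> bool:
        """True if conditions match the survey's most common strike regime."""
        return 28.0 <= oat_f <= 34.0 and 5000.0 <= altitude_agl_ft <= 8000.0

    print(in_danger_zone(31.0, 6500.0))   # True: near freezing, mid altitude
    print(in_danger_zone(55.0, 2000.0))   # False: warm and low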

In December 1978, two Convair F-106A Delta Dart interceptors were struck within a few minutes of each other near Castle Air Force Base (AFB), CA. Both had lightning protection kits, which the Air Force had installed beginning in early 1976. One Dart was struck twice, and both jets sustained "severe" damage to the Pitot booms and the area around the radomes. The protection kits prevented damage to the electrical systems, though subsequent tests determined that the lightning currents well exceeded norms, in the area of 225 kA. One pilot reported that the strike involved a large flash and that the impact felt "like someone hit the side of the aircraft with a sledgehammer." The second strike a few minutes later exceeded the first. The report concluded that, absent the protection kits, damage to electrical and avionic systems might have been extensive.20 Though rare, other examples of dual aircraft strikes have been recorded.

In January 1982, a Grumman F-14A Tomcat flown by CDR Lonny K. McClung was en route from Naval Air Station (NAS) Miramar, CA, to the Grumman factory at Calverton, NY, when it was struck by lightning. The incident offered a dramatic example of how a modern, highly sophisticated aircraft could be damaged, and its safety compromised, by a lightning strike. As CDR McClung graphically recalled:


We were holding over Calverton at 18,000 [feet] waiting for a rainstorm to pass. A lightning bolt went down about half a mile in front of us. An arm reached out and zapped the Pitot probe on the nose. I saw the lightning bolt go down and almost as if a time warp, freeze frame, an arm of that lightning came horizontal to the nose of our plane. It shocked me, but not badly, though it fried every computer in the airplane—Grumman had to replace everything. Calverton did not open in time for us to recover immediately so we had to go to McGuire AFB (112 miles southwest) and back on the "peanut gyro" since all our displays were fried. With the computers zapped, we had a bit of an adventure getting the plane going again so we could go to Grumman and get it fixed. When we got back to Calverton, one of the linemen told us that the same lightning strike hit a news helo below us. Based on the time, we were convinced it was the same strike that got us. An eerie feeling.21

20. J. Anderson Plumer, "Investigation of Severe Lightning Strike Incidents to Two USAF F-106A Aircraft," NASA CR-165794 (1981).

The 1978 Castle AFB F-106 strikes stimulated further research on the potential danger of lightning strikes to military aircraft, particularly as the Castle incidents involved currents beyond the strength usually encountered. Coincidentally, the previous year, the National Transportation Safety Board had urged cooperative studies among academics, the aviation community, and Government researchers to address the dangers posed to aircraft operations by thunderstorms. Joseph Stickle and Norman Crabill of the NASA Langley Research Center, strongly supported by Allen Tobiason and John Enders at NASA Headquarters, structured a comprehensive program in thunderstorm research that the Center could pursue. The next year, Langley researchers evaluated a lightning location detector installed on an Agency light research aircraft, a de Havilland of Canada DHC-6 Twin Otter. But the most extensive and prolonged study NASA undertook involved, coincidentally, the very sort of aircraft that had figured so prominently in the Castle AFB strikes: a two-seat NF-106B Delta Dart, lent by the Air Force to NASA for research purposes.22

The NASA Langley NF-106B lightning research program began in 1980 and continued into 1986. Extensive aerial investigations were undertaken after ground testing, modeling, and simulation.23 Employing the NF-106B, Langley researchers studied two subjects in particular: the mechanisms influencing lightning-strike attachments on aircraft and the electrical and physical effects of those strikes. Accordingly, the Dart was fitted with sensors in 14 locations: 9 in the fuselage, plus 3 in the wings and 2 in the vertical stabilizer. In all, the NF-106B sustained 714 strikes during 1,496 storm penetrations at altitudes from 5,000 to 50,000 feet, typically flying within a 150-mile radius of its operating base at Langley.24 One NASA pilot, Bruce Fisher, experienced 216 lightning strikes in the two-seat Dart. Many test missions involved multiple strikes; during one 1984 research flight through a thunderstorm at an altitude of 38,000 feet, the NF-106B was struck 72 times within 45 minutes, and the peak recorded on that particular test mission was an astounding 9 strikes per minute.25

NASA's NF-106B lightning research program constituted the single most influential flight research investigation of atmospheric electromagnetic phenomena undertaken by any nation. The aircraft, now preserved in an aviation museum, proved one of the longest-lived and most productive of all NASA research airplanes, retiring in 1991. As a team of researchers from the Langley Research Center, Old Dominion University, and Electromagnetic Applications, Inc., reported in 1987:

This research effort has resulted in the first statistical quantification of the electromagnetic threat to aircraft based on in situ measurements. Previous estimates of the in-flight lightning hazard to aircraft were inferred from ground-based measurements. The electromagnetic measurements made on the F-106 aircraft during these strikes have established a statistical basis for determination of quantiles and "worst-case" amplitudes of electromagnetic parameters of rate of change of current and the rate of change of electric flux density. The 99.3 percentile of the peak rate of change of current on the F-106 aircraft struck by lightning is about two and a half times that of previously accepted airworthiness criteria. The findings are at present being included in new criteria concerning protection of aircraft electrical and electronic systems against lightning. Since there are at present no criteria on the rate of change of electric flux density, the new data can be used as the basis for new criteria on the electric characteristics of lightning-aircraft electrodynamics. In addition to there being no criteria on the rate of change of electric flux density, there are also no criteria on the temporal durations of this rate of change or rate of change of electric current exceeding a prescribed value. Results on pulse characteristics presented herein can provide the basis for this development. The newly proposed lightning criteria and standards are the first which reflect actual aircraft responses to lightning measured at flight altitudes.26

24. Rosemarie L. McDowell, "Users Manual for the Federal Aviation Administration Research and Development Electromagnetic Database (FRED) for Windows: Version 2.0," Department of Transportation, Federal Aviation Administration, Report DOT/FAA/AR-95/18 (1998), p. 41; and R.L. McDowell, D.J. Grush, D.M. Cook, and M.S. Glynn, "Implementation of the FAA Research and Development Electromagnetic Database," in NASA KSC, The 1991 International Aerospace and Ground Conference on Lightning and Static Electricity, vol. 2 (1991). Fittingly, the NASA Langley NF-106B is now a permanent exhibit at the Virginia Air and Space Museum, Hampton.
25. Chambers, Concept to Reality, p. 181; NASA News Release, "NASA Lightning Research on ABC 20/20," Dec. 11, 2007, at http://www.nasa.gov/topics/aeronautics/features/fisher-2020.html, accessed Nov. 30, 2009.
26. Felix L. Pitts, Larry D. Lee, Rodney A. Perala, and Terence H. Rudolph, "New Methods and Results for Quantification of Lightning-Aircraft Electrodynamics," NASA TP-2737 (1987), p. 18.
27. Chambers, Concept to Reality, p. 182. This NF-106B, NASA 816, is exhibited in the Virginia Air and Space Center, Hampton, VA.

The data helped shape international certification and design standards governing how aircraft should be shielded or hardened to minimize damage from lightning. Recognizing its contributions to the understanding of lightning phenomena, its influence upon design standards, and its ability to focus the attention of lightning researchers across the Federal Government, the Flight Safety Foundation accorded the NF-106B program recognition as an Outstanding Contribution to Flight Safety for 1989. This did not mark the end of the NF-106B's electromagnetic research, however, for it was extensively tested at the Air Force Weapons Laboratory at Kirtland AFB, NM, in a cooperative Air Force–NASA study comparing lightning effects with the electromagnetic pulses produced by nuclear explosions.27

The workhorse General Dynamics NF-106B Delta Dart used by NASA for a range of electromagnetic studies and research. NASA.

As well, the information developed in F-106B flights led to the extension of "triggered" (aircraft-induced) lightning models to other aircraft. Based on scaling laws for triggering field levels of differing airframe sizes and configurations, data were compiled for types as diverse as Lockheed C-130 airlifters and light business aircraft, such as the Gates (now Bombardier) Learjet. The Air Force operated a Lockheed WC-130 during 1981, collecting data to characterize airborne lightning. Operating in Florida, the Hercules flew at altitudes between 1,500 and 18,000 feet, using 11 sensors to monitor nearby thunderstorms. The flights were especially helpful in gathering data on intercloud and cloud-to-ground strokes. More than 1,000 flashes were recorded in analog form and 500 digitally.28

High-altitude research flights were conducted in 1982 with instrumented Lockheed U-2s, carrying the research of the NF-106B and the WC-130 from lower altitudes well into the stratosphere. After a smaller 1979 project, the Thunderstorm Overflight Program was cooperatively sponsored by NASA, NOAA, and various universities to develop criteria for a lightning mapping satellite system and to study the physics of lightning. Sensors included a wide-angle optical pulse detector, an electric field change meter, an optical array sensor, broadband and high-resolution Ebert spectrometers, cameras, and tape recorders. Flights recorded data from Topeka, KS, in May and from Moffett Field, CA, in August. The project collected some 6,400 data samples of visible pulses, which were analyzed by NASA and university researchers.29 NASA expanded the studies to include flights by an Agency Lockheed ER-2, an Earth-resources research aircraft derived from the TR-2, itself a scaled-up outgrowth of the original U-2.30

Complementing NASA's lightning research program was a cooperative series of continuing studies at lower altitudes undertaken by a joint American-French study team. The American team consisted of technical experts and aircrew from NASA, the Federal Aviation Administration (FAA), the USAF, the United States Navy (USN), and NOAA, using a specially instrumented American Convair CV-580 twin-engine medium transport. The French team was overseen by the Office National d'Études et de Recherches Aérospatiales (National Office for Aerospace Studies and Research, ONERA) and consisted of experts and aircrew from the Centre d'Essais Aéronautique de Toulouse (Toulouse Aeronautical Test Center, CEAT) and the Armée de l'Air (French Air Force), flying a twin-engine medium airlifter, the C-160 Transall.

The Convair was fitted with a variety of external sensors and flown into thunderstorms over Florida in 1984–1985 and 1987, receiving approximately 60 strikes while flying between 2,000 and 18,000 feet. The hits were categorized as lightning, lightning attachment, direct strike, triggered strike, intercepted strike, and electromagnetic pulse. Flight tests revealed a high proportion of strikes initiated by the aircraft itself: thirty-five of thirty-nine hits on the CV-580 were determined to be aircraft-induced. Further data were obtained by the C-160, whose high-speed video recordings of channel formation reinforced the opinion that aircraft initiate the lightning. The Transall operated over southern France (mainly near the Pyrenees Mountains) in 1986–1988, and CEAT furnished reports from its strike data to the FAA, and thence to other agencies and industry.31
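The thirty-five-of-thirty-nine tally invites a quick statistical gloss. The sketch below applies a simple binomial treatment to the CV-580 data; it is our illustration of how strongly the sample favors aircraft-initiated strikes, not the study's own analysis.

    # Simple binomial look at the CV-580 tally quoted above: 35 of 39
    # categorized hits were judged to be aircraft-induced. Illustrative
    # only; this is not the original study's statistical method.
    from math import sqrt

    hits, triggered = 39, 35
    p = triggered / hits                   # observed fraction, about 0.90
    se = sqrt(p * (1 - p) / hits)          # normal-approximation standard error

    lo, hi = p - 1.96 * se, p + 1.96 * se  # rough 95 percent interval
    print(f"aircraft-induced fraction: {p:.2f} "
          f"(rough 95% interval {lo:.2f}-{hi:.2f})")

Even with so small a sample, the interval sits far above one-half, consistent with the conclusion that the aircraft itself usually initiates the discharge.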

Electrodynamic Research Using UAVs

Reflecting their growing acceptance for a variety of military missions, unmanned ("uninhabited") aerial vehicles (UAVs) are being increasingly used for atmospheric research. In 1997, a Goddard Space Flight Center space sciences team consisting of Richard Goldberg, Michael Desch, and William Farrell proposed using UAVs for electrodynamic studies. Much research in electrodynamics centered upon the direct-current (DC) Global Electric Circuit (GEC) concept, but Goldberg and his colleagues wished to study the potential upward electrodynamic flow from thunderstorms. "We were convinced there was an upward flow," he recalled over a decade later, "and [that] it was AC."32

32. Notes of telephone conversation, Richard P. Hallion with Richard A. Goldberg, NASA Goddard Space Flight Center, Sept. 10, 2009, in author's possession. Goldberg had begun his scientific career studying crystallography but found space science (particularly using sounding rockets) much more exciting. His perception of the upward flow of electrodynamic energy was, as he recalled, "in the pre-sprite days. Sprites are largely insignificant anyway because their duration is so short."

To study upward flows, Goldberg and his colleagues decided that a slow-flying, high-altitude UAV had advantages of proximity and duration that an orbiting spacecraft did not. They contacted Richard Blakeslee at Marshall Space Flight Center, who had a great interest in Earth sciences research. The Goddard-Marshall partnership quickly secured Agency support for an electrodynamic UAV research program to be undertaken by the National Space Science and Technology Center (NSSTC) at Huntsville, AL. The outcome was Altus, a derivative of the basic General Atomics Predator UAV, leased from the manufacturer and modified to carry a NASA electrodynamic research package. Altus could fly as slowly as 70 knots and as high as 55,000 feet, cruising around and above (but never into) Florida's formidable and highly energetic thunderstorms. First flown on these missions in 2002, Altus marked the first application of UAV technology to the study of electrodynamic phenomena.33

33. This was not, however, the first time drones had been used for measurements in hazardous environments. Earlier, in the heyday of open-atmospheric tests of nuclear weapons, drone aircraft such as Lockheed QF-80 Shooting Stars were routinely used to "sniff" radioactive clouds formed after a nuclear blast and to map their dispersion in the upper atmosphere. Like the electromagnetic research over a quarter century later, these trials complemented sorties by conventional aircraft such as the U-2, another atomic monitor.

NASA Altus 2 electrodynamic research aircraft, a derivative of the General Atomics Predator UAV, in flight on July 12, 2002. NASA.

Initially, NASA wished to operate the UAV from Patrick AFB near Cape Canaveral, but concerns about the potential dangers of flying a UAV over a heavily populated area resulted in switching its operational location to the more remote Key West Naval Air Station. Altus flights confirmed the suppositions of Goldberg and his colleagues, and the program complemented other research methodologies that took electric, magnetic, and optical measurements of thunderstorms, gauging lightning activity and associated electrical phenomena, including the use of ground-based radars to furnish broader coverage for comparative purposes.34 While not exposing humans to thunderstorms, the Altus Cumulus Electrification Study (ACES) used UAVs to collect data on cloud properties throughout a 3- or 4-hour thunderstorm cycle—not always possible with piloted aircraft. ACES further gathered material for three-dimensional storm models to develop more accurate weather predictions.

The launch of Apollo 12 from the John F. Kennedy Space Center in 1969. NASA.

Lightning bolt photographed at the John F. Kennedy Space Center immediately after the launch of Apollo 12 in November 1969. NASA.

Spacecraft and Electrodynamic Effects

With the advent of piloted orbital flight, NASA anticipated the potential effects of lightning upon launch vehicles in the Mercury, Gemini, and Apollo programs. Sitting atop immense boosters, the spacecraft were especially vulnerable on their launch pads and in the liftoff phase. One NASA lecturer warned his audience in 1965 that explosive squibs, detonators, vapors, and dust were particularly vulnerable to static electrical detonation; the amount of energy required to initiate detonation was "very small," and, as a consequence, such triggering was "considerably more frequent than is generally recognized."35

35. G.J. Bryan, "Static Electricity and Lightning Hazards, Part II," NASA Explosive Safety Executive Lecture Series, June 1965, NTRS N67-15981, pp. 6-10, 6-11.

As mentioned briefly above, on November 14, 1969, at 11:22 a.m. EST, Apollo 12, crewed by astronauts Charles "Pete" Conrad, Richard F. Gordon, and Alan L. Bean, thundered aloft from Launch Complex 39A at the Kennedy Space Center. Launched amid a torrential downpour, it disappeared from sight almost immediately, swallowed up amid dark, foreboding clouds that cloaked even its immense flaring exhaust. The rain clouds produced an electrical field, prompting a dual trigger response initiated by the craft. As historian Roger Bilstein subsequently wrote:

Within seconds, spectators on the ground were startled to see parallel streaks of lightning flash out of the cloud back to the launch pad. Inside the spacecraft, Conrad exclaimed, "I don't know what happened here. We had everything in the world drop out." Astronauts Pete Conrad, Richard Gordon, and Alan Bean had seen a brilliant flash of light inside the spacecraft, and instantaneously, red and yellow warning lights all over the command module panels lit up like an electronic Christmas tree. Fuel cells stopped working, circuits went dead, and the electrically operated gyroscopic platform went tumbling out of control. The spacecraft and rocket had experienced a massive power failure. Fortunately, the emergency lasted only seconds, as backup power systems took over and the instrument unit of the Saturn V launch vehicle kept the rocket operating.36

36. Bilstein, Stages to Saturn, pp. 374–375.

The electrical disturbance triggered the loss of nine solid-state instrumentation sensors, none of which, fortunately, was essential to the safety or completion of the flight. It also resulted in the temporary loss of communications, varying between 30 seconds and 3 minutes depending upon the particular system. Rapid engagement of backup systems permitted the mission to continue, though three fuel cells were automatically (and, as subsequently proved, unnecessarily) shut down. Afterward, NASA incident investigators concluded that though lightning could be triggered by the long combined length of the Saturn V rocket and its associated exhaust plume, "The possibility that the Apollo vehicle might trigger lightning had not been considered previously."37

Apollo 12 constituted a dramatic wake-up call on the hazards of mixing large rockets and lightning. Afterward, the Agency devoted extensive efforts to assessing the nature of the lightning risk and seeking ways to mitigate it. The first fruit of this detailed study effort was the issuance, in August 1970, of revised electrodynamic design criteria for spacecraft, which stipulated various means of spacecraft and launch facility protection, including

1. Ensuring that all metallic sections are connected electrically (bonded) so that the current flow from a lightning stroke is conducted over the skin without any gaps where sparking would occur or current would be carried inside.
2. Protecting objects on the ground, such as buildings, by a system of lightning rods and wires over the outside to carry the lightning stroke to the ground.
3. Providing a cone of protection, as in the lightning protection plan for Saturn Launch Complex 39 (see the sketch following this list).
4. Providing protection devices in critical circuits.
5. Using systems that have no single failure mode; i.e., the Saturn V launch vehicle uses triple-redundant circuitry on the auto-abort system, which requires two out of three of the signals to be correct before abort is initiated.
6. Appropriate shielding of units sensitive to electromagnetic radiation.38
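Item 3's "cone of protection" refers to the classical rule of thumb in which a grounded mast of height h shields a roughly conical volume beneath it, with a ground radius commonly taken as one to two mast heights. The sketch below simply encodes that rule; it is an illustration of the concept (modern practice favors rolling-sphere methods), not NASA's actual Complex 39 analysis.

    # Classical "cone of protection" rule of thumb for a grounded mast.
    # Illustrative only; real lightning-protection design uses more
    # rigorous methods (e.g., rolling-sphere analysis).

    def protected_radius(mast_height_ft: float, ratio: float = 1.0) -> float:
        """Ground radius (ft) of the protective cone around a mast.

        ratio=1.0 is the conservative 1:1 cone; 2.0 reflects the more
        permissive 2:1 rule sometimes quoted.
        """
        return ratio * mast_height_ft

    # Example: the 80-foot mast later fitted to the Shuttle launch complex
    for ratio in (1.0, 2.0):
        print(f"{ratio:.0f}:1 cone -> protected ground radius "
              f"{protected_radius(80.0, ratio):.0f} ft")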

The stakes involved in lightning protection increased greatly with the advent of the Space Shuttle program. Officially named the Space Transportation System (STS), NASA's Space Shuttle was envisioned as a routine space logistical support vehicle and was touted by some as a "space age DC-3," a reference to the legendary Douglas airliner that had galvanized air transport on a global scale. Large, complex, and expensive, it required careful planning to avoid lightning damage, particularly surface burnthroughs that could constitute a flight hazard (as, alas, the loss of Columbia would tragically demonstrate three decades subsequently).

NASA predicated its studies of Shuttle lightning vulnerabilities on two major strokes: one having a peak current of 200 kA with a current rate of change of 100 kA per microsecond, and a second of 100 kA with a rate of change of 50 kA per microsecond. Agency researchers also modeled various intermediate currents of lower energies. Analysis indicated that the Shuttle and its launch stack (consisting of the orbiter, mounted on a liquid fuel tank flanked by two solid-fuel boosters) would most likely have lightning entry points at the tip of its tankage and boosters, the leading edges of its wings at mid-span and at the wingtip, on its upper nose surface, and (least likely) above the cockpit. Likely exit points were the nozzles of the two solid-fuel boosters, the trailing-edge tip of the vertical fin, the trailing edge of the body flap, the trailing edges of the wingtip, and (least likely) the nozzles of its three liquid-fuel Space Shuttle main engines (SSMEs).39 Because the Shuttle orbiter was, effectively, a large delta aircraft, data and criteria assembled previously for conventional aircraft, including studies dating to the early 1940s, furnished a good reference base for Shuttle lightning prediction. As well, Agency researchers undertook extensive tests to guard against inadvertent triggering of the Shuttle's solid rocket boosters (SRBs), because their premature ignition would be catastrophic.40

Prudently, NASA ensured that the servicing structure on the Shuttle launch complex received an 80-foot lightning mast plus safety wires to guide strikes to the ground rather than through the launch vehicle. Dramatic proof of the system's effectiveness came in August 1983, when lightning struck the launch pad of the Shuttle Challenger before the launch of mission STS-8, commanded by Richard H. Truly. It was the first Shuttle night launch, and it subsequently proceeded as planned.

What lightning could do to a flight control system (FCS) was dramatically illustrated on March 26, 1987, when a bolt led to the loss of AC-67, an Atlas-Centaur mission carrying FLTSATCOM 6, a TRW, Inc., communications satellite developed for the Navy's Fleet Satellite Communications system. Approximately 48 seconds after launch, a cloud-to-ground lightning strike generated a spurious signal in the Centaur launch vehicle's digital flight control computer, which then sent a hard-over engine command. The resultant abrupt yaw overstressed the vehicle, causing its virtually immediate breakup. Coming after the weather-related loss of the Space Shuttle Challenger the previous year, the loss of AC-67 was particularly disturbing. In both cases, accident investigators found that the two Kennedy teams had not taken adequate account of meteorological conditions at the time of launch.41

The accident led NASA to establish a Lightning Advisory Panel to provide parameters for determining whether a launch should proceed in the presence of electrical activity. It also, understandably, stimulated continuing research on the electrodynamic environment at the Kennedy Space Center and on the vulnerabilities of launch vehicles and facilities at the launch site. Vulnerability surveys extended to in-flight hardware, launch and ground support equipment, and ultimately almost any facility in areas of thunderstorm activity. The items identified as most vulnerable to lightning strikes were electronic systems, wiring and cables, and critical structures. The engineering challenge was to design methods of protecting those areas and systems without adversely affecting structural integrity or equipment performance.

To improve the fidelity of existing launch models and develop a better understanding of electrodynamic conditions around the Kennedy Center, NASA flew a modified single-seat, single-engine Schweizer powered sailplane, the Special Purpose Test Vehicle (SPTVAR), on 20 missions over the spaceport and its reservation between September 14 and November 4, 1988, measuring electrical fields. These trials took place in consultation with the Air Force (Detachment 11 of its 4th Weather Wing had responsibility for Cape lightning forecasting) and the New Mexico Institute of Mining and Technology, which selected candidate cloud forms for study and then monitored the real-time acquisition of field data. Flights ranged from 5,000 to 17,000 feet, averaged over an hour in duration, and took off from late morning to as late as 8 p.m. The SPTVAR aircraft dodged around electrified clouds as high as 35,000 feet while taking measurements of electrical fields, the net airplane charge, atmospheric liquid water content, ice particle concentrations, sky brightness, accelerations, air temperature and pressure, and basic aircraft parameters, such as heading, roll and pitch angles, and spatial position.42

After the Challenger and AC-67 launch accidents, the ongoing Shuttle program remained a particular subject of Agency concern, especially regarding the danger of lightning currents striking the Shuttle during rollout, on the pad, or upon liftoff. As verified by the SPTVAR survey, large currents (greater than 100 kA) were extremely rare in the operating area. Researchers concluded that worst-case figures for an on-pad strike ran from 0.0026 to 0.11953 percent. Trends evident in the data showed that specific operating procedures could further reduce the likelihood of a lightning strike. For instance, a study of all lightning probabilities at Kennedy Space Center observed, "If the Shuttle rollout did not occur during the evening hours, but during the peak July afternoon hours, the resultant nominal probabilities for a >220 kA and >50 kA lightning strike are 0.04% and 0.21%, respectively. Thus, it does matter 'when' the Shuttle is rolled out."43 Although estimates for a triggered strike of a Shuttle in ascent were not precisely determined, researchers concluded that a triggered strike (one caused by the moving vehicle itself) of any magnitude on an ascending launch vehicle is 140,000 times likelier than a direct hit on the pad. Because Cape Canaveral constitutes America's premier space launch center, continued interest in lightning at the Cape and its potential impact upon launch vehicles and facilities will remain a major NASA concern.

NASA and Electromagnetic Pulse Research

The phrase "electromagnetic pulse" usually raises visions of a nuclear detonation, because that is the most frequent context in which it is used. While EMP effects upon aircraft certainly would feature in a thermonuclear event, the phenomenon is commonly experienced in and around lightning storms. Lightning can cause a variety of EMP radiations, including radio-frequency pulses. An EMP "fries" electrical circuits by passing a magnetic field across the equipment in one direction, then reversing it within an extremely short period, typically a few nanoseconds. The magnetic field thus rises and collapses within that ephemeral time, creating a focused EMP that can destroy or render useless any electrical circuit within several feet of impact.

Any survey of lightning-related EMPs brings attention to the phenomenon of "elves," an acronym for Emissions of Light and Very low frequency perturbations due to Electromagnetic pulse Sources. Elves are caused by lightning-generated EMPs and usually occur above thunderstorms, in the ionosphere, some 300,000 feet above Earth. First recorded on Space Shuttle Mission STS-41 in 1990, elves mostly appear as reddish, expanding flashes that can reach 250 miles in diameter, lasting about 1 millisecond.

EMP research is multifaceted, conducted in laboratories, on airborne aircraft and rockets, and ultimately outside Earth's atmosphere. Research into transient electric fields and high-altitude lightning above thunderstorms has been conducted with sounding rockets launched by Cornell University. In 2000, a Black Brant sounding rocket launched from White Sands over a storm attained a height of nearly 980,000 feet; its onboard equipment, including electric and magnetic instruments, provided the first direct observation of the parallel electric field within 62 miles horizontally of the lightning.44

44. D.E. Rowland, et al., "Propagation of the Lightning Electromagnetic Pulse Through the E- and F-region Ionosphere and the Generation of Parallel Electric Fields," American Geophysical Union (May 2004).

By definition, NASA's NF-106B flights in the 1980s involved EMP research. Among the overlapping goals of the project was quantification of lightning's electromagnetic effects, and Langley's Felix L. Pitts led the program intended to provide airborne data on lightning-strike characteristics. Bruce Fisher and two other NASA pilots (plus four Air Force pilots) conducted the flights, and Fisher analyzed the information he collected in addition to the backseat researchers' data. Those flying as flight-test engineers in the two-seat jet included Harold K. Carney, Jr., NASA's lead technician for EMP measurements. NASA Langley engineers built ultra-wide-bandwidth digital transient recorders, carried in a sealed enclosure in the Dart's missile bay. To acquire the fast lightning transients, they adapted or devised electromagnetic sensors based on those used for the measurement of nuclear pulse radiation. To aid understanding of the lightning transients recorded on the jet, a team from Electromagnetic Applications, Inc., provided mathematical modeling of the lightning strikes to the aircraft. Owing to the extra hazard of lightning strikes, the F-106 was fueled with JP-5, which is less volatile than the then-standard JP-4. Data compiled from dedicated EMP flights permitted statistical parameters to be established for lightning encounters. The F-106's onboard sensors showed that lightning strikes to aircraft include bursts of pulses that are shorter in duration, but more frequent, than previously thought. Additionally, such bursts are more numerous than in the better-known strikes involving cloud-to-Earth flashes.45

Rocket-borne sensors provided the first ionospheric observations of lightning-induced electromagnetic waves from ELF through the medium frequency (MF) bands. The payload consisted of a NASA double-probe electric field sensor borne into the upper atmosphere by a Black Brant sounding rocket that NASA launched over "an extremely active thunderstorm cell." This mission, named Thunderstorm III, measured lightning EMPs up to 2 megahertz (MHz). Below 738,000 feet, a rising whistler wave was found, with a nose-whistler shape and a propagating frequency near 80 kHz. The results confirmed speculation that the leading intense edge of the lightning EMP was borne on 50–125-kHz waves.46

Electromagnetic compatibility is essential to spacecraft performance. The requirement has long been recognized, ever since the insulating surfaces on early geosynchronous satellites were charged by geomagnetic substorms to the point where discharges occurred. The EMPs from such discharges coupled into electronic systems, potentially disrupting satellites. Laboratory tests on insulator charging indicated that discharges could be initiated at insulator edges, where voltage gradients could exist.47

Apart from observation and study, detecting electromagnetic pulses is a step toward avoidance. Most lightning detection systems include an antenna that senses atmospheric discharges and a processor to determine whether the signals are lightning or static discharges, based upon their electromagnetic characteristics. Generally, ground-based weather surveillance is more accurate than an airborne system, owing to the greater number of sensors. For instance, ground-based systems employ numerous antennas hundreds of miles apart to detect a lightning stroke's radio frequency (RF) pulses. When an RF flash occurs, electromagnetic pulses speed outward from the bolt at the speed of light. Because the antennas cover a large area of Earth's surface, they are able to triangulate the bolt's site of origin from the pulses' arrival times. Based upon known values, the RF data can also indicate with considerable accuracy the strength or severity of a lightning bolt.

Space-based lightning detection systems require satellites that, while more expensive than ground-based systems, provide instantaneous visual monitoring. Onboard cameras and sensors not only spot lightning bolts but also record them for analysis. NASA launched its first lightning-detection satellite in 1995, and the Lightning Imaging Sensor, which analyzes lightning through rainfall, was launched 2 years later.

From approximately 1993, low-Earth orbit (LEO) space vehicles carried increasingly sophisticated equipment requiring increased power levels. Previously, satellites had used 28-volt DC power systems, a legacy of the commercial and military aircraft industry. At those voltage levels, plasma interactions in LEO were seldom a concern, but the use of high-voltage solar arrays increased concerns with electromagnetic compatibility and the potential effects of EMPs. Consequently, spacecraft design, testing, and performance assumed greater importance. As noted above, discharges from substorm charging of insulating surfaces can couple into satellite electronic systems, with potentially disruptive results. Reducing power loss received a high priority, and the benefits of laboratory tests on insulator charging, coupled with greater empirical knowledge, afforded greater operating efficiency, partly because of greater EMP protection.48
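The triangulation described above is, in essence, a time-difference-of-arrival (TDOA) problem. The sketch below works a minimal two-dimensional example with hypothetical station coordinates and a synthetic stroke; real networks (and NASA's three-dimensional LDAR, discussed below) solve the same geometry with more stations and an added altitude coordinate.

    # Minimal 2-D time-difference-of-arrival (TDOA) location sketch.
    # Station coordinates and the stroke position are made up for
    # illustration; this is not any operational network's algorithm.
    import numpy as np

    C = 299_792_458.0                    # m/s, RF pulses travel at light speed

    stations = np.array([[0.0, 0.0],     # hypothetical antenna sites (m)
                         [100e3, 0.0],
                         [0.0, 120e3],
                         [90e3, 110e3]])
    true_src = np.array([40e3, 55e3])    # stroke location to recover

    # Synthesized arrival times (a common emission time cancels out)
    t = np.linalg.norm(stations - true_src, axis=1) / C

    def residuals(xy):
        """Measured minus predicted arrival-time differences vs. station 0."""
        d = np.linalg.norm(stations - xy, axis=1)
        return ((t - t[0]) - (d - d[0]) / C)[1:]

    # Gauss-Newton iteration from a crude initial guess
    xy = np.array([50e3, 50e3])
    for _ in range(10):
        r = residuals(xy)
        J = np.empty((len(r), 2))        # numerical Jacobian, 1 m steps
        for j in range(2):
            step = np.zeros(2)
            step[j] = 1.0
            J[:, j] = residuals(xy + step) - r
        xy = xy + np.linalg.lstsq(J, -r, rcond=None)[0]

    print(f"recovered stroke position: {xy/1e3} km (true: {true_src/1e3} km)")

With four stations, three independent time differences overdetermine the two unknown coordinates, which is why such networks can also flag inconsistent (non-lightning) signals.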

Research into lightning EMPs remains a major focus. In 2008, Stanford's Dr. Robert A. Marshall and his colleagues reported on time-domain modeling techniques to study lightning-induced effects upon VLF transmitter signals, called "early VLF events." Marshall explained:

This mechanism involves electron density changes due to electromagnetic pulses from successive in-cloud lightning discharges associated with cloud-to-ground discharges (CGs), which are likely the source of continuing current and much of the charge moment change in CGs. Through time-domain modeling of the EMP we show that a sequence of pulses can produce appreciable density changes in the lower ionosphere, and that these changes are primarily electron losses through dissociative attachment to molecular oxygen. Modeling of the propagating VLF transmitter signal through the disturbed region shows that perturbed regions created by successive horizontal EMPs create measurable amplitude changes.49

gation, but some EMP simulators had the potential to expose operators and the public to electromagnetic fields of varying intensities, including naturally generated lightning bolts. In 1988, the NASA Astrophysics Data System released a study of bioelectromagnetic effects upon humans. The study stated, “Evidence from the available database does not establish that EMPs represent either an occupational or a public health hazard.” Both laboratory research and years of observations on staffs of EMP manufacturing and simulation facilities indicated “no acute or short-term health effects.” The study further noted that the occupational exposure guideline for EMPs is 100 kilovolts per meter, “which is far in excess of usual exposures with EMP simulators.”51 NASA’s studies of EMP effects benefited nonaerospace communities. The Lightning Detection and Ranging (LDAR) system that enhanced a safe work environment at Kennedy Space Center was extended to private industry. Cooperation with private enterprises enhances commercial applications not only in aviation but in corporate research, construction, and the electric utility industry. For example, while two-dimensional commercial systems are limited to cloud-to-ground lightning, NASA’s three-dimensional LDAR provides precise location and elevation of incloud and cloud-to-cloud pulses by measuring arrival times of EMPs. Nuclear- and lightning-caused EMPs share common traits. Nuclear EMPs involve three components, including the “E2” segment, which is similar to lightning. Nuclear EMPs are faster than conventional circuit breakers can handle. Most are intended to stop millisecond spikes caused by lightning flashes rather than microsecond spikes from a highaltitude nuclear explosion. The connection between ionizing radiation and lightning was readily demonstrated during the “Mike” nuclear test at Eniwetok Atoll in November 1952. The yield was 10.4 million tons, with gamma rays causing at least five lightning flashes in the ionized air around the fireball. The bolts descended almost vertically from the cloud above the fireball to the water. The observation demonstrated that, by causing atmospheric ionization, nuclear radiation can trigger a shorting of the natural vertical electric gradient, resulting in a lightning bolt.52

Thus, research overlap between thermonuclear and lightninggenerated EMPs is unavoidable. NASA’s workhorse F-106B, apart from NASA’s broader charter to conduct lightning-strike research, was employed in a joint NASA–USAF program to compare the electromagnetic effects of lightning and nuclear detonations. In 1984, Felix L. Pitts of NASA Langley proposed a cooperative venture, leading to the Air Force lending Langley an advanced, 10-channel recorder for measuring electromagnetic pulses. Langley used the recorder on F-106 test flights, vastly expanding its capability to measure magnetic and electrical change rates, as well as currents and voltages on wires inside the Dart. In July 1993, an Air Force researcher flew in the rear seat to operate the advanced equipment, when 72 lightning strikes were obtained. In EMP tests at Kirtland Air Force Base, the F-106 was exposed to a nuclear electromagnetic pulse simulator while mounted on a special test stand and during flybys. NASA’s Norman Crabill and Lightning Technologies’ J.A. Plumer participated in the Air Force Weapons Laboratory review of the acquired data.53 With helicopters becoming ever-more complex and with increasing dependence upon electronics, it was natural for researchers to extend the Agency’s interest in lightning to rotary wing craft. Drawing upon the Agency’s growing confidence in numerical computational analysis, Langley produced a numerical modeling technique to investigate the response of helicopters to both lightning and nuclear EMPs. Using a UH-60A Black Hawk as the focus, the study derived three-dimensional time domain finite-difference solutions to Maxwell’s equations, computing external currents, internal fields, and cable responses. Analysis indicated that the short-circuit current on internal cables was generally greater for lightning, while the open-circuit voltages were slightly higher for nuclear-generated EMPs. As anticipated, the lightning response was found to be highly dependent upon the rise time of the injected current. Data showed that coupling levels to cables in a helicopter are 20 to 30 decibels (dB) greater than in a fixed wing aircraft.54

Lightning and the Composite, Electronic Airplane

FAA Federal Air Regulation (FAR) 23.867 governs protection of aircraft against lightning and static electricity, reflecting the influence of decades of NASA lightning research, particularly the NF-106B program. FAR 23.867 directs that an airplane “must be protected against catastrophic effects from lightning,” by bonding metal components to the airframe or, in the case of both metal and nonmetal components, designing them so that if they are struck, the effects on the aircraft will not be catastrophic. Additionally, for nonmetallic components, FAR 23.867 directs that aircraft must have “acceptable means of diverting the resulting electrical current so as not to endanger the airplane.”55 Among the more effective means of limiting lightning damage to aircraft is using a material that resists or minimizes the powerful pulse of an electromagnetic strike. Late in the 20th century, the aerospace industry realized the excellent potential of composite materials for that purpose. Aside from older bonded-wood-and-resin aircraft of the interwar era, the modern all-composite aircraft may be said to date from the 1960s, with the private-venture Windecker Eagle, anticipating later aircraft as diverse as the Cirrus SR-20 lightplane, the Glasair III LP (the first composite homebuilt aircraft to meet the requirements of FAR 23), and the Boeing 787. The 787 is composed of 50-percent carbon laminate, including the fuselage and wings; a carbon sandwich material in the engine nacelles, control surfaces, and wingtips; and other composites in the wings and vertical fin. Much smaller portions are made of aluminum and titanium. In contrast, indicative of the rising prevalence of composites, the earlier 777 involved just 12-percent composites. An even newer composite testbed design is the Advanced Composite Cargo Aircraft (ACCA), a modified twin-engine Dornier 328Jet whose rear fuselage and vertical stabilizer are composed of advanced composite materials produced by out-of-autoclave curing. First flown in June 2009, the ACCA is the product of a 10-year project by the Air Force Research Laboratory.56 NASA research on lightning protection for conventional aircraft structures translated into use for composite airframes as well. Because experience proved that lightning could strike almost any spot on an airplane’s surface—not merely (as previously believed) extremities such as wings and propeller tips—researchers found a lesson for designers using new materials. They concluded, “That finding is of great importance to designers employing composite materials, which are less conductive, hence more vulnerable to lightning damage than the aluminum alloys they replace.”57 The advantages of fiberglass and other composites have been readily recognized: besides resistance to lightning strikes, composites offer exceptional strength for light weight and are resistant to corrosion. Therefore, it was inevitable that aircraft designers would increasingly rely upon the new materials.58 But the composite revolution was not just the province of established manufacturers. As composites grew in popularity, they increasingly were employed by manufacturers of kit planes. The homebuilt aircraft market, a feature of American aeronautics since the time of the Wrights, expanded greatly over the 1980s and afterward. NASA’s heavy investment in lightning research carried over to the kit-plane market, and Langley released a Small Business Innovation Research (SBIR) contract to Stoddard-Hamilton Aircraft, Inc., and Lightning Technologies, Inc., for development of a low-cost lightning protection system for kit-built composite aircraft. As a result, Stoddard-Hamilton’s composite-structure Glasair III LP became the first homebuilt aircraft to meet the standards of FAR 23.59 One of the benefits of composite/fiberglass airframe materials is inherent resistance to structural damage. Typically, composites are produced by laying spaced bands of high-strength fibers in an angular pattern of perhaps 45 degrees from one another. Selectively winding the material in alternating directions produces a “basket weave” effect that enhances strength. The fibers often are set in a thermoplastic resin four or more layers thick, which, when cured, produces extremely high strength and low weight. Furthermore, the weave pattern affords excellent resistance to peeling and delamination, even when struck by lightning. Among the earliest aviation uses of composites were engine cowlings, but eventually, structural components and then entire composite airframes were envisioned. Composites can provide additional electromagnetic resistance by winding conductive filaments in a spiral pattern over the structure before curing the resin.
57. D.C. Ferguson and G.B. Hillard, “Low Earth Orbit Spacecraft Charging Design Guidelines,” NASA TP-2003-212287 (2003). 58. The development of the composite aircraft is the subject of a companion essay in this volume. 59. Chambers, Concept to Reality, p. 184.


The filaments help dissipate high-voltage energy across a large area and rapidly divert the impulses before they can inflict significant harm.60 It is helpful to compare the effects of lightning on aluminum aircraft to better understand the advantage of fiberglass structures. Aluminum readily conducts electromagnetic energy through the airframe, requiring designers to channel the energy away from vulnerable areas, especially fuel systems and avionics. The aircraft’s outer skin usually offers the path of least resistance, so the energy can be “vented” overboard. Fiberglass is a proven insulator against electromagnetic charges. Though composites conduct electricity, they do so less readily than do aluminum and other metals. Consequently, though it may seem counterintuitive, composites’ resistance to EMP strokes can be enhanced by adding small metallic mesh to the external surfaces, channeling unwanted currents away from the interior. The most common mesh materials are aluminum and copper impressed into the carbon fiber. Repairs of lightning-damaged composites must take into account the mesh in the affected area as well as the basic material and attendant structure. Composites mitigate the effect of a lightning strike not only by resisting damage at the immediate area of impact, but also by spreading the effects over a wider area. Thus, by reducing the energy for a given surface area (expressed in amps per square inch), a potentially damaging strike can be rendered harmless; the brief numerical sketch following the list below illustrates the point. Because technology is still emerging for detection and diagnosis of lightning damage, NASA is exploring methods of in-flight and postflight analysis. Obviously, the most critical is in-flight, with aircraft sensors measuring the intensity and location of a lightning strike’s current, employing laboratory simulations to establish baseline data for a specific material. Thus, the voltage/current test measurements can be compared with statistical data to estimate the extent of damage likely upon the composite. Aircrews thereby can evaluate the safety-of-flight risks after a specific strike and determine whether to continue or to land. NASA’s research interests in addressing composite aircraft are threefold:

•	Obtaining conductive paint or other coatings to facilitate current flow, mitigating airframe structural damage, and eliminating requirements for additional internal shielding of electronics and avionics.
•	Compiling physics-based models of complex composites that can be adapted to simulate lightning strikes to quantify electrical, mechanical, and thermal parameters to provide real-time damage information.
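The current-spreading argument above reduces to simple arithmetic. In the sketch below, the 200-kiloampere peak is a commonly cited severe-strike test level, but the footprint areas and damage threshold are invented round numbers, not certified design values.

PEAK_CURRENT_A = 200_000.0             # severe-strike peak current, amps

def current_density(area_sq_in):
    """Average current density over the conducting footprint, A/in^2."""
    return PEAK_CURRENT_A / area_sq_in

DAMAGE_THRESHOLD = 25_000.0            # hypothetical damage level, A/in^2

for label, area in [("bare attachment point", 1.0),
                    ("with conductive mesh", 50.0)]:
    j = current_density(area)
    verdict = "exceeds threshold" if j > DAMAGE_THRESHOLD else "below threshold"
    print(f"{label:22s}: {j:10,.0f} A/in^2  ({verdict})")

The same peak current, spread over 50 times the area by a protective mesh, falls comfortably below the assumed damage level.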

As testing continues, NASA will provide modeling data to manufacturers of composite aircraft as a design tool. Similar benefits can accrue to developers of wind turbines, which increasingly are likely to use composite blades. Other nonaerospace applications can include the electric power industry, which experiences high-voltage situations.61

Avionics

Lightning effects on avionics can be disastrous, as illustrated by the account of the loss of AC-67. Composite aircraft with internal radio antennas require fiberglass composite “windows” in the lightning-strike mesh near the antenna. (Fiberglass composites are employed because of their transparency to radio frequencies, unlike carbon fiber.) Lightning protection and avoidance are important for planning and conducting flight tests. Consequently, NASA’s development of lightning warning and detection systems has been a priority in furthering fly-by-wire (FBW) systems. Early digital computers in flight control systems encountered conditions in which their processors could be adversely affected by lightning-generated electrical pulses. Subsequently, design processes were developed to protect electronic equipment from lightning strikes. As a study by the North Atlantic Treaty Organization (NATO) noted, such protection is “particularly important on aircraft with composite structures. Although equipment bench tests can be used to demonstrate equipment resistance to lightning strikes and EMP, it is now often considered necessary to perform whole aircraft lightning-strike tests to validate the design and clearance process.”62 Celeste M. Belcastro of Langley contrasted laboratory, ground-based, and in-flight testing of electromagnetic environmental effects, noting:
61. “Lightning Strike Protection for Composite Aircraft,” NASA Tech Briefs (June 1, 2009). 62. F. Webster and T.D. Smith, “Flying Qualities Flight Testing of Digital Flight Control Systems,” in NATO, AGARDograph, No. 300, vol. 21, in the AGARD Flight Test Techniques Series (2001), p. 3.


Laboratory tests are primarily open-loop and static at a few operating points over the performance envelope of the equipment and do not consider system level effects. Full-aircraft tests are also static with the aircraft situated on the ground and equipment powered on during exposure to electromagnetic energy. These tests do not provide a means of validating system performance over the operating envelope or under various flight conditions. . . . The assessment process is a combination of analysis, simulation, and tests and is currently under development for demonstration at the NASA Langley Research Center. The assessment process is comprehensive in that it addresses (i) closed-loop operation of the controller under test, (ii) real-time dynamic detection of controller malfunctions that occur due to the effects of electromagnetic disturbances caused by lightning, HIRF, and electromagnetic interference and incompatibilities, and (iii) the resulting effects on the aircraft relative to the stage of flight, flight conditions, and required operational performance.63


A prime example of full-system assessment is the F-16 Fighting Falcon, nicknamed “the electric jet” because of its fly-by-wire flight control system. Like any operational aircraft, F-16s have received lightning strikes, the effects of which demonstrate FCS durability. Anecdotal evidence within the F-16 community contains references to multiple lightning strikes on multiple aircraft—as many as four at a time in close formation. In another instance, the leader of a two-plane section was struck, and the bolt leapt from his wing to the wingman’s canopy. Aircraft are inherently sensor and weapons platforms, and so the lightning threat to external ordnance is serious and requires examination. In 1977, the Air Force conducted tests on the susceptibility of AIM-9 missiles to lightning strikes. The main concern was whether the Sidewinders, mounted on wingtip rails, could attract strokes that could enter the airframe via the missiles. The evaluators concluded that the optical dome of the missile was vulnerable to simulated lightning strikes even at moderate currents.
63. C.M. Belcastro, “Assessing Electromagnetic Environment Effects on Flight Critical Aircraft Control Computers,” NASA Langley Research Center Technical Seminar Paper (Nov. 17, 1997), at http://www.ece.odu.edu/~gray/research/abstracts.html#Assessing, accessed Nov. 30, 2009.


The AIM-9’s dome was shattered, and burn marks were left on the zinc-coated fiberglass housing. However, there was no evidence of internal arcing, and the test concluded that “it is unlikely that lightning will directly enter the F-16 via AIM-9 missiles.”64 Quite clearly, lightning had the potential of damaging the sensitive optics and sensors of missiles, thus rendering an aircraft impotent. With the increasing digitization and integration of electronic engine controls, in addition to airframes and avionics, engine management systems are now a significant area for lightning-resistance research.

Transfer of NASA Research into Design Practices

Much of NASA’s aerospace research overlaps various fields. For example, improving EMP tolerance of space-based systems involves studying plasma interactions in a high-voltage system operated in the ionosphere. But a related subject is establishing design practices that may have previously increased adverse plasma interactions and recommending means of eliminating or mitigating such reactions in future platforms. Standards for lightning protection tests were developed in the 1950s, under FAA and Department of Defense (DOD) auspices. Those studies mainly addressed electrical bonding of aircraft components and protection of fuel systems. However, in the next decade, dramatic events such as the in-flight destruction of a Boeing 707 and the triggered lightning strikes experienced by Apollo 12 clearly demonstrated the need for greater research. With the advent of the Space Shuttle, NASA required further means of lightning protection, a process that began in the 1970s and continued well beyond the Shuttle’s inaugural flight, in 1981. Greater interagency cooperation led to new research programs in the 1980s involving NASA, the Air Force, the FAA, and the government of France. The goal was to develop a lightning-protection design philosophy, which in turn required standards and guidelines for various aerospace vehicles. NASA’s approach to lightning research has emphasized detection and avoidance, predicated on minimizing the risk of strikes, but then, if strikes occur nevertheless, ameliorating their damaging effects. Because early detection enhances avoidance, the two approaches work hand in glove. Translating those related philosophies into research and thence to design practices contains obvious benefits.
64. Air Force Flight Dynamics Laboratory, Electromagnetic Hazards Group, “Lightning Strike Susceptibility Tests on the AIM-9 Missile,” AFFDL-TR-78-95 (Aug. 1978), p. 23.


The relationship between lightning research and protective design was noted by researchers for Lightning Technologies, Inc., in evaluating lightning protection for digital engine control systems. They emphasized, “The coordination between the airframe manufacturer and system suppliers in this process is fundamental to adequate protection.”65 Because it is usually impractical to expose fully configured aircraft to full-threat natural lightning, verifying protection depends upon accurate simulation using complete aircraft with full systems aboard. NASA, together with other Federal agencies and military services, has undertaken such studies, dating to its work on the F-8 DFBW testbed of the early 1970s, as discussed subsequently. In the Storm Hazards Research Program (SHRP) of 1980 to 1986, Langley researchers found that multiple lightning strikes inject random electric currents into an airframe, causing rapidly changing magnetic fields that can lead to erroneous responses, faulty commands, or other “upsets” in electronic systems. In 1987, the FAA (and other nations’ aviation authorities) required that aircraft electronic systems performing flight-critical functions be protected from multiple-burst lightning. At least from the 1970s, NASA recognized that vacuum tube electronics were inherently more resistant to lightning-induced voltage surges than were solid-state avionics. (The same was true for EMP effects. When researchers in the late 1970s were able to examine the avionics of the Soviet MiG-25 Foxbat, after the defection of a Foxbat pilot to Japan, they were surprised to discover that much of its avionics were tube-based, clearly with EMP considerations in mind.) While new microcircuitry obviously was more vulnerable to upset or damage, many new-generation aircraft would have critical electronic systems such as fly-by-wire control systems. Therefore, lightning represented a serious potential hazard to safety of flight for aircraft employing first-generation electronic flight control architectures and systems. A partial solution was redundancy of flight controls and other airborne systems, but in 1978, there were few if any standards addressing indirect effects of lightning. That time, however, was one of intensive interest in electronic flight controls. New fly-by-wire aircraft such as the F-16 were on the verge of entering squadron service. Even more radical designs—notably highly unstable early stealth aircraft such as the Lockheed XST Have Blue testbed, the Northrop Tacit Blue, the Lockheed F-117, and the NASA–Rockwell Space Shuttle orbiter—were either already flying or well underway down the development path.

NASA’s digital fly-by-wire (DFBW) F-8C Crusader afforded a ready means of evaluating lightning-induced voltages, via ground simulation and evaluation of electrodynamic effects upon its flight control computer. Dryden’s subsequent research represented the first experimental investigation of lightning-induced effects on any FBW system, digital or analog. A summary concluded:

Results are significant, both for this particular aircraft and for future generations of aircraft and other aerospace vehicles such as the Space Shuttle, which will employ digital FBW FCSs. Particular conclusions are:
•	Equipment bays in a typical metallic airframe are poorly shielded and permit substantial voltages to be induced in unshielded electrical cabling.
•	Lightning-induced voltages in a typical a/c cabling system pose a serious hazard to modern electronics, and positive steps must be taken to minimize the impact of these voltages on system operation.
•	Induced voltages of similar magnitudes will appear simultaneously in all channels of a redundant system.
•	A single-point ground does not eliminate lightning-induced voltages. It reduces the amount of diffusion-flux induced and structural IR voltage but permits significant aperture-flux induced voltages.
•	Cable shielding, surge suppression, grounding and interface modifications offer means of protection, but successful design will require a coordinated sharing of responsibility among those who design the interconnecting cabling and those who design the electronics.
•	A set of transient control levels for system cabling and transient design levels for electronics, separated by a margin of safety, should be established as design criteria.66
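The last conclusion, separate transient levels for cabling and for electronics with a margin of safety between them, lends itself to a simple numeric check. The sketch below is purely illustrative: the voltage levels and the 6 dB margin are hypothetical examples, not values from the F-8 program or any standard.

import math

# Hypothetical check of the transient-margin criterion: induced transients
# on cabling are held below a "transient control level," and electronics
# are qualified to withstand a higher "transient design level."
def margin_db(design_level_v, control_level_v):
    """Safety margin between the two levels, expressed in decibels."""
    return 20.0 * math.log10(design_level_v / control_level_v)

control_level = 50.0     # volts: max induced transient allowed on cabling
design_level = 150.0     # volts: transient the avionics must withstand

m = margin_db(design_level, control_level)
print(f"margin = {m:.1f} dB ->", "acceptable" if m >= 6.0 else "insufficient")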

The F-8 DFBW program is the subject of a companion study on electronic flight controls and so is not treated in greater detail here. In brief, a Navy Ling-Temco-Vought F-8 Crusader jet fighter was modified with a digital electronic flight control system and test-flown at the NASA Flight Research Center (later the NASA Dryden Flight Research Center). When the F-8 DFBW program ended in 1985, it had made 210 flights, with direct benefits to aircraft as varied as the F-16, the F/A-18, the Boeing 777, and the Space Shuttle. It constituted an excellent example of how NASA research can prove and refine design concepts, which are then translated into design practice.67 The versatile F-106B program also yielded useful information on protection of digital computers and other airborne systems that translated into later design concepts. As NASA engineer-historian Joseph Chambers subsequently wrote: “These findings are now reflected in lightning environment and test standards used to verify adequacy of protection for electrical and avionics systems against lightning hazards. They are also used to demonstrate compliance with regulations issued by airworthiness certifying authorities worldwide that require lightning strikes not adversely affect the aircraft systems performing critical and essential functions.”68 Similarly, NASA experience at lightning-prone Florida launch sites provided an obvious basis for identifying and implementing design practices for future use. A 1999 lessons-learned study identified design considerations for lightning-strike survivability. Seeking to avoid natural or triggered lightning in future launches, NASA sought improvements in electromagnetic compatibility (EMC) for launch sites used by the Shuttle and other launch systems. They included proper grounding of vehicle and ground-support equipment, bonding requirements, and circuit protection. Those aims were achieved mainly via wire shielding and transient limiters. In conclusion, it is difficult to improve upon D.L. Johnson and W.W. Vaughn’s blunt assessment that “Lightning protection assessment and design consideration are critical functions in the design and development of an aerospace vehicle. The project’s engineer responsible for

lightning must be involved in preliminary design and remain an integral member of the design and development team throughout vehicle construction and verification tests.”69 This lesson is applicable to many aerospace technical disciplines and reflects the decades of experience embedded within NASA and its predecessor, the NACA, involving high-technology (and often high-risk) research, testing, and evaluation. Lightning will continue to draw the interest of the Agency’s researchers, for there is still much that remains to be learned about this beautiful and inherently dangerous electrodynamic phenomenon and its interactions with those who fly.

More than 87,000 flights take place each day over the United States. The work of NASA and others has helped develop ways to ensure safety in these crowded skies. Richard P. Hallion.


Case 3 | The Quest for Safety Amid Crowded Skies

James Banke

Since 1926 and the passage of the Air Commerce Act, the Federal Government has had a vital commitment to aviation safety. Even before this, however, the NACA championed regulation of aeronautics, the establishment of licensing procedures for pilots and aircraft, and the definition of technical criteria to enhance the safety of air operations. NASA has worked closely with the FAA and other aviation organizations to ensure the safety of America’s air transport network.

WHEN THE FIRST AIRPLANE LIFTED OFF from the sands of Kitty Hawk during 1903, there was no concern of a midair collision with another airplane. The Wright brothers had the North Carolina skies all to themselves. But as more and more aircraft found their way off the ground and then began to share the increasing number of new airfields, the need to coordinate movements among pilots quickly grew. As flight technology matured to allow cross-country trips, methods to improve safe navigation between airports evolved as well. Initially, bonfires lit the airways. Then came light towers, two-way radio, omnidirectional beacons, radar, and—ultimately—Global Positioning System (GPS) navigation signals from space.1 Today, the skies are crowded, and the potential for catastrophic loss of life is ever present, as more than 87,000 flights take place each day over the United States. Despite repeated reports of computer crashes or bad weather slowing an overburdened national airspace system, air-related fatalities remain historically low, thanks in large part to the technical advances developed by the National Aeronautics and Space Administration (NASA), but especially to the daily efforts of some 15,000 air traffic controllers keeping a close eye on all of those airplanes.2
1. Edmund Preston, FAA Historical Chronology, Civil Aviation and the Federal Government 1926–1996 (Washington, DC: Federal Aviation Administration). 2. NATCA: A History of Air Traffic Control (Washington, DC: National Air Traffic Controllers Association, 2009), p. 16.


From an Australian government slide show in 1956, the basic concepts of an emerging air traffic control system are explained to the public. Airways Museum & Civil Aviation Historical Society, Melbourne, Australia (www.airwaysmuseum.com).

All of those controllers work for, or are under contract to, the Federal Aviation Administration (FAA), which is the Federal agency responsible for keeping U.S. skyways safe by setting and enforcing regulations. Before the FAA (formed in 1958), it was the Civil Aeronautics Administration (formed in 1941), and even earlier than that, it was the Department of Commerce’s Aeronautics Bureau (formed in 1926). That this administrative job is not part of NASA’s duties today is the result of decisions made by the White House, Congress, and NASA’s predecessor organization, the National Advisory Committee for Aeronautics (NACA), during 1920.3 At the time (specifically 1919), the International Commission for Air Navigation had been created to develop the world’s first set of rules for governing air traffic. But the United States did not sign on to the convention. Instead, U.S. officials turned to the NACA and other organizations to determine how best to organize the Government for handling all aspects of this new transportation system.
3. Alex Roland, Model Research: The National Advisory Committee for Aeronautics 1915–1958, NASA SP-4103 (Washington, DC: NASA, 1985).


The NACA in 1920 already was the focal point of aviation research in the Nation, and many thought it only natural, and best, that the Committee be the Government’s all-inclusive home for aviation matters. A similar organizational model existed in Europe but didn’t appear to some within the NACA to be an ideal solution. This sentiment was most clearly expressed by John F. Hayford, a charter member of the NACA and a Northwestern University engineer, who said during a meeting, “The NACA is adapted to function well as an advisory committee but not to function satisfactorily as an administrative body.”4 So, in a way, NASA’s earliest contribution to making safer skyways was to shed itself of the responsibility for overseeing improvements to and regulating the operation of the national airspace. With the FAA secure in that management role, NASA has been free to continue to play to its strengths as a research organization. It has provided technical innovation to enhance safety in the cockpits; increase efficiencies along the air routes; introduce reliable automation, navigation, and communication systems for the many air traffic control (ATC) facilities that dot the Nation; and manage complex safety reporting systems that have required creation of new data-crunching capabilities. This case study will present a survey in a more-or-less chronological order of NASA’s efforts to assist the FAA in making safer skyways. An overview of key NASA programs, as seen through the eyes of the FAA until 1996, will be presented first. NASA’s contributions to air traffic safety after the 1997 establishment of national goals for reducing fatal air accidents will be highlighted next. The case study will continue with a survey of NASA’s current programs and facilities related to airspace safety and conclude with an introduction of the NextGen Air Transportation System, which is to be in place by 2025.


NASA, as Seen by the FAA

Nearly every NASA program related to aviation safety has required the involvement of the FAA. Anything new from NASA that affects, for example, the design of an airliner, the layout of a cockpit panel,5 or the introduction of a modified traffic control procedure that relies on new technology6 must eventually be certified for use by the FAA, either directly or indirectly.
4. Roland, Model Research, p. 57. 5. Part 21 Aircraft Certification Procedures for Products and Parts, Federal Aviation Regulations (Washington, DC: FAA, 2009).


This process continues today, extending the legacy of dozens of programs that came before—not all of which can be detailed here. But in terms of a historical overview through the eyes of the FAA, a handful of key collaborations with NASA were considered important enough by the FAA to mention in its official chronology, and they are summarized in this section.

Partners in the Sky: 1965

The partnership between NASA and the FAA that facilitates that exchange of ideas and technology was forged soon after both agencies were formally created in 1958. With the growing acceptance of commercial jet airliners and the ever-increasing number of passengers who wanted to get to their destinations as quickly as possible, the United States began exploring the possibility of fielding a Supersonic Transport (SST). By 1964, it was suggested that duplication of effort was underway by researchers at the FAA and NASA, especially in upgrading existing jet powerplants required to propel the speedy airliner. The resulting series of meetings during the next year led to the creation in May 1965 of the NASA–FAA Coordinating Board, which was designed to “strengthen the coordination, planning, and exchange of information between the two agencies.”7

Project Taper: 1965

During that same month, the findings were released of what the FAA’s official historical record details as its first joint research project with NASA.8 A year earlier, during May and June 1964, two series of flight tests were conducted using FAA aircraft with NASA pilots to study the hazards of light to moderate air turbulence to jet aircraft from several perspectives. The effort was called Project Taper, short for Turbulent Air Pilot Environment Research.9 In conjunction with ground-based wind tunnel runs and early use of simulator programs, FAA Convair 880 and Boeing 720 airliners were flown to define the handling qualities of aircraft as they encountered turbulence and determine the best methods for the pilot to recover from the upset. Another part of the study was to determine how turbulence upset the pilots themselves and if any changes to cockpit displays or controls would be helpful. Results of the project presented at a 1965 NASA Conference on Aircraft Operating Problems indicated that in terms of aircraft control, retrimming the stabilizer and deploying the spoilers were “valuable tools,” but if those devices were to be safely used, an accurate g-meter should be added to the cockpit to assist the pilot in applying the correct amount of control force. The pilots also observed that initially encountering turbulence often created such a jolt that it disrupted their ability to scan the instrument dials (which remained reliable despite the added vibrations) and recommended improvements in their seat cushions and restraint system.10 But the true value of Project Taper to making safer skyways may have been the realization that although aircraft and pilots under controlled conditions and specialized training could safely penetrate areas of turbulence—even if severe—the better course of action was to find ways to avoid the threat altogether. This required further research and improvements in turbulence detection and forecasting, along with the ability to integrate that data in a timely manner into the ATC system and cockpit instrumentation.11


Avoiding Bird Hazards: 1966

After millions of years in which birds had the sky to themselves, only 9 years passed between the Wright brothers’ first flight in 1903 and the first human fatality caused by a bird striking an aircraft, which brought the plane down in 1912. Fast-forward to 1960, when an Eastern Air Lines plane went down near Boston, killing 62 people as a result of a bird strike—the largest loss of life from a single bird incident.12 With the growing number of commercial jet airplanes, faster aircraft increased the potential damage a small bird could inflict, and larger airplanes put more humans at risk during a single flight. The need to address methods for dealing with birds around airports and in the skies also rose in priority.
10. Ibid. 11. Philip Donely, “Safe Flight in Rough Air,” NASA TM-X-51662 (Hampton, VA: NASA, 1964). 12. Micheline Maynard, “Bird Hazard is Persistent for Planes,” New York Times (Jan. 19, 2009).


A DeTect, Inc., MERLIN bird strike avoidance radar is seen here in use in South Africa. NASA uses the same system at Kennedy Space Center for Space Shuttle missions, and the FAA is considering its use at airports around the Nation. NASA.

So, on September 9, 1966, the Interagency Bird Hazard Committee was formed to gather data, share information, and develop methods for mitigating the risk of collisions between birds and airplanes. With the FAA taking the lead, the Committee included representatives from NASA; the Civil Aeronautics Board; the Department of Interior; the Department of Health, Education, and Welfare; and the U.S. Air Force, Navy, and Army.13 Through the years since the Committee was formed, the aviation community has approached the bird strike hazard primarily on three fronts: (1) removing or relocating the birds, (2) designing aircraft components to be less susceptible to damage from bird strikes, and (3) increasing the understanding of bird habitats and migratory patterns so as to alter air traffic routes and minimize the potential for bird strikes. Despite these efforts, the problem persists today, as evidenced by the January 2009 incident involving a US Airways jet that was forced to ditch in the Hudson River. Both of its jet engines failed because of bird strikes shortly after takeoff. Fortunately, all souls on board survived the water landing thanks to the training and skills of the entire flight crew.14
13. John L. Seubert, “Activities of the FAA Inter-Agency Bird Hazard Committee” (Washington, DC: FAA, 1968).


NASA’s contributions in this area include research to characterize the extent of damage that birds might inflict on jet engines and other aircraft components in a bid to make those parts more robust or forgiving of a strike,15 and the development of techniques to identify potentially harmful flocks of birds16 and their local and seasonal flight patterns using radar so that local air traffic routes can be altered.17 Radar is in use to warn pilots and air traffic controllers of bird hazards at the Seattle-Tacoma International Airport. As of this writing, the FAA plans to deploy test systems at Chicago, Dallas, and New York airports, as the technology still needs to be perfected before its deployment across the country, according to an FAA spokeswoman quoted in a Wall Street Journal story published January 26, 2009.18 Meanwhile, a bird-detecting radar system first developed for the Air Force by DeTect, Inc., of Panama City, FL, has been in use since 2006 at NASA’s Kennedy Space Center to check for potential bird strike hazards before every Space Shuttle launch. Two customized marine radars scan the sky: one oriented in the vertical, the other in the horizontal. Together with specialized software, the MERLIN system can detect flocks of birds up to 12 miles from the launch pad or runway, according to a company fact sheet. In the meantime, airports with bird problems will continue to rely on broadcasting sudden loud noises, shooting off fireworks, flashing strobe lights, releasing predator animals where the birds are nesting, or, in the worst case, simply eliminating the birds.

Applications Technology Satellite 1 (ATS 1): 1966–1967

Aviation’s use of actual space-based technology was first demonstrated by the FAA using NASA’s Applications Technology Satellite 1 (ATS 1) to relay voice communications between the ground and an airborne FAA aircraft using very high frequency (VHF) radio during 1966 and 1967, with the aim of enabling safer air traffic control over the oceans.19 Launched from Cape Canaveral atop an Atlas Agena D rocket on December 7, 1966, the spin-stabilized ATS 1 was injected into geosynchronous orbit to take up a perch 22,300 miles high, directly over Ecuador. During this early period in space history, the ATS 1 spacecraft was packed with experiments to demonstrate how satellites could be used to provide the communication, navigation, and weather monitoring that we now take for granted. In fact, the ATS 1’s black and white television camera captured the first full-Earth image of the planet’s cloud-covered surface.20 Eight flight tests were conducted using NASA’s ATS 1 to relay voice signals between the ground and an FAA aircraft using VHF band radio, with the intent of allowing air traffic controllers to speak with pilots flying over an ocean. Measurements were recorded of signal level, signal-plus-noise-to-noise ratio, multipath propagation, voice intelligibility, and adjacent channel interference. In a 1970 FAA report, the author concluded that the “overall communications reliability using the ATS 1 link was considered marginal.”21 All together, the ATS project attempted six satellite launches between 1966 and 1974, with ATS 2 and ATS 4 unable to achieve a useful orbit. ATS 1 and ATS 3 continued the FAA radio relay testing, this time including a specially equipped Pan American Airways 747 as it flew a commercial flight over the ocean. Results were better than when the ATS 1 was tested alone, with a NASA summary of the experiments concluding that

The experiments have shown that geostationary satellites can provide high quality, reliable, un-delayed communications between distant points on the earth and that they can also be used for surveillance. A combination of un-delayed communications and independent surveillance from shore provides the elements necessary for the implementation of effective traffic control for ships and aircraft over oceanic regions. Eventually the same techniques may be applied to continental air traffic control.22


Aviation Safety Reporting System: 1975

On December 1, 1974, a Trans World Airlines (TWA) Boeing 727, on final approach to Dulles airport in gusty winds and snow, crashed into a Virginia mountain, killing all aboard. Confusion about the approach to the airport, the navigation charts the pilots were using, and the instructions from air traffic controllers all contributed to the accident. Six weeks earlier, a United Airlines flight nearly succumbed to the same fate. Officials concluded, among other things, that a safety awareness program might have enabled the TWA flight to benefit from the United flight’s experience. In May 1975, the FAA announced the start of an Aviation Safety Reporting Program to facilitate that kind of communication. Almost immediately, it was realized the program would fail because of fear the FAA would retaliate against someone calling into question its rules or personnel. A neutral third party was needed, so the FAA turned to NASA for the job. In August 1975, the agreement was signed, and NASA officially began operating a new Aviation Safety Reporting System (ASRS).23 NASA’s job with the ASRS was more than just emptying a “big suggestion box” from time to time. The memorandum of agreement between the FAA and NASA proposed that the updated ASRS would have four functions:
1. Take receipt of the voluntary input, remove all evidence of identification from the input, and begin initial processing of the data.
2. Perform analysis and interpretation of the data to identify any trends or immediate problems requiring action.
3. Prepare and disseminate appropriate reports and other data.
4. Continually evaluate the ASRS, review its performance, and make improvements as necessary.

Two other significant aspects of the ASRS included a provision that no disciplinary action would be taken against someone making a safety report and that NASA would form a committee to advise on the ASRS. The committee would be made up of key aviation organizations, including the Aircraft Owners and Pilots Association, the Air Line Pilots Association, the Aviation Consumer Action Project, the National Business Aircraft Association, the Professional Air Traffic Controllers Organization, the Air Transport Association, the Allied Pilots Association, the American Association of Airport Executives, the Aerospace Industries Association, the General Aviation Manufacturers’ Association, the Department of Defense, and the FAA.24 Now in existence for more than 30 years, the ASRS has racked up an impressive success record of influencing safety that has touched every aspect of flight operations, from the largest airliners to the smallest general-aviation aircraft. According to numbers provided by NASA’s Ames Research Center at Moffett Field, CA, between 1976 and 2006, the ASRS received more than 723,400 incident reports, resulting in 4,171 safety alerts being issued and the instigation of 60 major research studies. Typical of the sort of input NASA receives is a report from a Mooney 20 pilot who was taking a young aviation enthusiast on a sightseeing flight and explaining to the passenger during his landing approach what he was doing and what the instruments were telling him. This distracted his piloting just enough to complicate his approach and cause the plane to flare over the runway. He heard his stall alarm sound, then silence, then another alarm with the same tone. Suddenly, his aircraft hit the runway, and he skidded to a stop just off the pavement. It turned out that the stall warning alarm and landing gear alarm sounded alike. His suggestion was to remind the general-aviation community there were verbal alarms available to remind pilots to check their gear before landing.25 24. C.E. Billings, “Aviation Safety Reporting System,” p. 6. 25. “Horns and Hollers,” CALLBACK From NASA’s Aviation Safety Reporting System, No. 359 (Nov. 2009), p. 2.


Although the ASRS continues today, one negative about the program is that it is passive and only works if information is voluntarily offered. But from April 2001 through December 2004, NASA fielded the National Aviation Operations Monitoring Service (NAOMS) and conducted almost 30,000 interviews to solicit specific safety-related data from pilots, air traffic controllers, mechanics, and other operational personnel. The aim was to identify systemwide trends and establish performance measures, with an emphasis on tracking the effects of new safety-related procedures, technologies, and training. NAOMS was part of NASA’s Aviation Safety Program, detailed later in this case study.26 With all these data in hand, more coming in every day, and none of them in a standard, computer-friendly format, NASA researchers were prompted to develop search algorithms that recognized relevant text. The first such suite of software used to support ASRS was called QUORUM, which at its core was a computer program capable of analyzing, modeling, and ranking text-based reports. NASA programmers then enhanced QUORUM to provide:
•	Keyword searches, which retrieve from the ASRS database narratives that contain one or more user-specified keywords in typical or selected contexts and rank the narratives on their relevance to the keywords in context.
•	Phrase searches, which retrieve narratives that contain user-specified phrases, exactly or approximately, and rank the narratives on their relevance to the phrases.
•	Phrase generation, which produces a list of phrases from the database that contain a user-specified word or phrase.
•	Phrase discovery, which finds phrases from the database that are related to topics of interest.27
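A toy version of the keyword-in-context idea conveys the flavor of such searches. The scoring scheme and sample narratives below are invented; NASA’s actual QUORUM models term context and relevance far more rigorously (see McGreevy, NASA TM-2001-210913).

import re

def score(narrative, keywords, window=5):
    """Rank a narrative by keyword count plus a proximity bonus."""
    words = re.findall(r"[a-z']+", narrative.lower())
    hits = [i for i, w in enumerate(words) if w in keywords]
    # bonus for pairs of keyword occurrences within `window` words of each other
    bonus = sum(1 for a in hits for b in hits if a < b <= a + window)
    return len(hits) + bonus

reports = [
    "Stall warning horn sounded on short final; gear horn is identical.",
    "Encountered moderate turbulence at FL350; no injuries reported.",
    "Gear-up landing narrowly avoided after stall horn confusion.",
]
keywords = {"stall", "horn", "gear"}
for text in sorted(reports, key=lambda t: -score(t, keywords)):
    print(score(text, keywords), "|", text)

Narratives mentioning the query terms close together rank first, which is the behavior a safety analyst hunting for, say, confusable cockpit alarms would want.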

QUORUM’s usefulness in accessing the ASRS database would evolve as computers became faster and more powerful, paving the way for a new suite of software to perform what is now called “data mining.” This in turn would enable continual improvement in aviation safety and find applications in everything from real-time monitoring of aircraft systems28 to Earth sciences.29
26. “NAOMS Reference Report: Concepts, Methods, and Development Roadmap,” Battelle Memorial Institute (2007). 27. Michael W. McGreevy, “Searching the ASRS Database Using QUORUM Keyword Search, Phrase Search, Phrase Generation, and Phrase Discovery,” NASA TM-2001-210913 (2001), p. 4.


Microwave Landing System hardware at NASA’s Wallops Flight Research Facility in Virginia as a NASA 737 prepares to take off to test the high-tech navigation and landing aid. NASA.

Microwave Landing System: 1976

As soon as it was possible to join the new inventions of the airplane and the radio in a practical way, it was done. Pilots found themselves “flying the beam” to navigate from one city to another and lining up with the runway, even in poor visibility, using the Instrument Landing System (ILS). ILS could tell the pilots if they were left or right of the runway centerline and if they were higher or lower than the established glide slope during the final approach. ILS required straight-in approaches and separation between aircraft, which limited the number of landings allowed each hour at the busiest airports. To improve upon this, the FAA, NASA, and the Department of Defense (DOD) in 1971 began developing the Microwave Landing System (MLS), which promised, among other things, to increase the frequency of landings by allowing multiple approach paths to be used at the same time. Five years later, the FAA took delivery of a prototype system and had it installed at the FAA’s National Aviation Facilities Experimental Center in Atlantic City, NJ, and at NASA’s Wallops Flight Research Facility in Virginia.30 Between 1976 and 1994, NASA was actively involved in understanding how MLS could be integrated into the national airspace system. Configuration and operation of aircraft instrumentation,31 pilot procedures and workload,32 air traffic controller procedures,33 use of MLS with helicopters,34 effects of local terrain on the MLS signal,35 and the extent to which MLS could be used to automate air traffic control36 were among the topics NASA researchers tackled as the FAA made plans to employ MLS at airports around the Nation. But having proven with NASA’s Applications Technology Satellite program that space-based communication and navigation were more than feasible (though it had skipped endorsing the use of satellites in its 1982 National Airspace System Plan), the FAA dropped the MLS program in 1994 to pursue the use of GPS technology, which was just beginning to work itself into the public consciousness. GPS signals, when enhanced by a ground-based system known as the Wide Area Augmentation System (WAAS), would provide more accurate position information and do it in a more efficient and potentially less costly manner than by deploying MLS around the Nation.37 Although never widely deployed in the United States for civilian use, MLS remains a tool of the Air Force at its airbases. NASA has employed a version of the system, called the Microwave Scanning Beam Landing System, for use at its Space Shuttle landing sites in Florida and California. Moreover, Europe has embraced MLS in recent years, and an increasing number of airports there are being equipped with the system, with London’s Heathrow Airport among the first to roll it out.38

NUSAT: 1985

NUSAT, a tiny satellite designed by Weber State College in northern Utah, was deployed into Earth orbit from the cargo bay of the Space Shuttle Challenger on April 29, 1985. Its purpose was to serve as a radar target for the FAA. The satellite employed three L-band receivers, an ultra high frequency (UHF) command receiver, a VHF telemetry transmitter, associated antennas, a microprocessor, fixed solar arrays, and a power supply to acquire, store, and forward signal strength data from radar. All of that was packed inside a basketball-sized, 26-sided polyhedron that weighed about 115 pounds.39 NUSAT was used to optimize ground-based ATC radar systems for the United States and member nations of the International Civil Aviation Organization by measuring antenna patterns.40

National Plan for Civil Aviation Human Factors: 1995

In June 1995, the FAA announced its plans for a joint FAA–DOD–NASA initiative called the National Plan for Civil Aviation Human Factors. The plan detailed a national effort to reduce and eliminate human error as the cause of aviation accidents. The plan called for projects that would identify needs and problems related to human performance, guide research programs that addressed the human element, involve the Nation’s top scientists and aviation professionals, and report the results of these efforts to the aviation community.41 NASA’s extensive involvement in human factors issues is detailed in another case study of this volume.

Aviation Performance Measuring System: 1996

With the Aviation Safety Reporting System fully operational for two decades, NASA in 1996 once again found itself working with the FAA to gather raw data, process it, and make reports—all in the name of identifying potential problems and finding solutions. In this case, as part of a Flight Operations Quality Assurance program that the FAA was working on with industry, the agency partnered with NASA to test a new Aviation Performance Measuring System (APMS). The new system was designed to convert digital data taken from the flight data recorders of participating airlines into a format that could easily be analyzed.42 More specifically, the objectives of the NASA–FAA APMS research project were to establish an objective, scientifically and technically sound basis for performing flight data analysis; identify a flight data analysis system that featured an open and flexible architecture, so that it could easily be modified as necessary; and define and articulate guidelines that would be used in creating a standardized database structure that would form the basis for future flight data analysis programs. This standardized database structure would help ensure that no matter which data-crunching software an airline might choose, it would be compatible with the APMS dataset. Although APMS was not intended to be a nationwide flight data collection system, it was intended to make available the technical tools necessary to more easily enable a large-scale implementation of flight data analysis.43 At that time, commercially available software development was not far enough advanced to meet the needs of the APMS, which sought identification and analysis of trends and patterns in large-scale databases involving an entire airline. Software then was primarily written with the needs of flight crews in mind and was more capable of spotting single events than trends. For example, if a pilot threw a series of switches out of order, the onboard computer could sound an alarm. But that computer, or any other, would not know how frequently pilots made the same mistake on other flights.44
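The distinction between single-event alerts and fleet-wide trends is easy to make concrete. The sketch below aggregates hypothetical exceedance events across many flights, the kind of pattern no single onboard computer could see; the event names, flight records, and 20-percent threshold are all invented for illustration.

from collections import Counter

# Fabricated flight records: each carries the exceedance events its
# recorder flagged.  A real APMS-style analysis would ingest thousands
# of decoded flight-data-recorder files in a standardized format.
flights = [
    {"id": "F001", "events": ["unstable_approach"]},
    {"id": "F002", "events": []},
    {"id": "F003", "events": ["unstable_approach", "late_flap_selection"]},
    {"id": "F004", "events": ["unstable_approach"]},
    {"id": "F005", "events": ["hard_landing"]},
]

# Count each event type across the whole fleet, then express it as a rate.
counts = Counter(event for f in flights for event in f["events"])
for event, n in counts.most_common():
    rate = n / len(flights)
    flag = "TREND" if rate > 0.20 else "ok"
    print(f"{event:20s} {rate:5.1%}  {flag}")

Any one “unstable_approach” is a routine alert; its recurrence on 60 percent of flights is a trend worth an operator’s attention, which is exactly the view APMS set out to provide.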

The FAA’s air traffic control tower facility at the Dallas/Fort Worth International Airport is a popular site that the FAA uses for testing new ATC systems and procedures, including new Center TRACON Automation System tools. FAA.

A particularly interesting result of this work was featured in the 1998 edition of NASA’s annual Spinoff publication, which highlights successful NASA technology that has found a new home in the commercial sector:

A flight data visualization system called FlightViz™ has been created for NASA’s Aviation Performance Measuring System (APMS), resulting in a comprehensive flight visualization and analysis system. The visualization software is now capable of very high-fidelity reproduction of the complete dynamic flight environment, including airport/airspace, aircraft, and cockpit instrumentation. The APMS program calls for analytic methods, algorithms, statistical techniques, and software for extracting useful information from digitally-recorded flight data. APMS is oriented toward the evaluation of performance in aviation systems, particularly human performance. . . . In fulfilling certain goals of the APMS effort and related Space Act Agreements, SimAuthor delivered to United Airlines in 1997, a state-of-the-art, high-fidelity, reconfigurable flight data replay system. The software is specifically designed to improve airline safety as part of Flight Operations Quality Assurance (FOQA) initiatives underway at United Airlines. . . . Pilots, instructors, human factors researchers, incident investigators, maintenance personnel, flight operations quality assurance staff, and others can utilize the software product to replay flight data from a flight data recorder or other data sources, such as a training simulator. The software can be customized to precisely represent an aircraft of interest. Even weather, time of day and special effects can be simulated.45


While by no means a complete list of every project NASA and the FAA have collaborated on, the examples detailed so far represent the diverse range of research conducted by the agencies. Much of the same kind of work continued as improved technology, updated systems, and fresh approaches were applied to address a constantly evolving set of challenges.

Aviation Safety Program

After the in-flight explosion and crash of TWA 800 in July 1996, President Bill Clinton established a Commission on Aviation Safety and Security, chaired by Vice President Al Gore. The Commission’s emphasis was to find ways to reduce the number of fatal air-related accidents. Ultimately, the Commission challenged the aviation community to lower the fatal aircraft accident rate by 80 percent in 10 years and 90 percent in 25 years.

NASA’s response to this challenge was to create in 1997 the Aviation Safety Program (AvSP) and, as seen before, partner with the FAA and the DOD to conduct research on a number of fronts.46 NASA’s AvSP was set up with three primary objectives: (1) eliminate accidents during targeted phases of flight, (2) increase the chances that passengers would survive an accident, and (3) beef up the foundation upon which aviation safety technologies are based. From those objectives, NASA established six research areas, some having to do directly with making safer skyways and others pointed at increasing aircraft safety and reliability. All produced results, as noted in the referenced technical papers. Those research areas included accident mitigation,47 systemwide accident prevention,48 single aircraft accident prevention,49 weather accident prevention,50 synthetic vision,51 and aviation system modeling and monitoring.52 Of particular note is a trio of contributions that have lasting influence today. They include the introduction and incorporation of the glass cockpit into the pilot’s work environment and a pair of programs to gather key data that can be processed into useful, safety-enhancing information.

Glass Cockpit

As aircraft systems became more complex and the amount of navigation, weather, and air traffic information available to pilots grew in abundance, the nostalgic days of “stick and rudder” men (and women) gave way to “cockpit managers.”

A prototype “glass cockpit” that replaces analog dials and mechanical tapes with digitally driven flat panel displays is installed inside the cabin of NASA’s 737 airborne laboratory, which tested the new hardware and won support for the concept in the aviation community. NASA.

Mechanical, analog dials showing a single piece of information (e.g., airspeed or altitude) weren’t sufficient to give pilots the full status of their increasingly complicated aircraft flying in an increasingly crowded sky. The solution came from engineers at NASA’s Langley Research Center in Hampton, VA, who worked with key industry partners to come up with an electronic flight display—what is generally known now as the glass cockpit—that took advantage of powerful, small computers and liquid crystal display (LCD) flat panel technology. Early concepts of the glass cockpit were flight-proven using NASA’s Boeing 737 flying laboratory and eventually certified for use by the FAA.53 According to a NASA fact sheet:

The success of the NASA-led glass cockpit work is reflected in the total acceptance of electronic flight displays beginning with the introduction of the Boeing 767 in 1982. Airlines and their passengers, alike, have benefitted. Safety and efficiency of flight have been increased with improved pilot understanding of the airplane’s situation relative to its environment. The cost of air travel is less than it would be with the old technology and more flights arrive on time.54

53. Lane E. Wallace, “Airborne Trailblazer: Two Decades with NASA Langley’s 737 Flying Laboratory,” NASA SP-4216 (1994).


The cost of air travel is less than it would be with the old technology and more flights arrive on time.54

After developing the first glass cockpits capable of displaying basic flight information, NASA has continued working to make more information available to the pilots,55 while at the same time being conscious of information overload,56 the ability of the flight crew to operate the cockpit displays without distraction during critical phases of flight (takeoff and landing),57 and the effectiveness of training pilots to use the glass cockpit.58

Performance Data Analysis and Reporting System

In yet another example of NASA developing a database system with and for the FAA, the Performance Data Analysis and Reporting System (PDARS) began operation in 1999 with the goal of collecting, analyzing, and reporting performance-related data about the National Airspace System. The difference between PDARS and the Aviation Safety Reporting System is that input for the ASRS comes voluntarily from people who see something they feel is unsafe and report it, while input for PDARS comes automatically—in real time—from electronic sources such as ATC radar tracks and filed flight plans. PDARS was created as an element of NASA's Aviation Safety Monitoring and Modeling project.59 From these data, PDARS calculates a variety of performance measures related to air traffic patterns, including traffic counts, travel times between airports and other navigation points, distances flown, general traffic flow parameters, and the separation distance from trailing

aircraft. Nearly 1,000 reports to appropriate FAA facilities are automatically generated and distributed each morning, while the system also allows for sharing data and reports among facilities, as well as facilitating larger research projects. With the information provided by PDARS, FAA managers can quickly determine the health, quality, and safety of day-to-day ATC operations and make immediate corrections.60 The system also has provided input for several NASA and FAA studies, including measurement of the benefits of the Dallas/Fort Worth Metroplex airspace, an analysis of the Los Angeles Arrival Enhancement Procedure, an analysis of the Phoenix Dryheat departure procedure, measurement of navigation accuracy of aircraft using area navigation en route, a study on the detection and analysis of in-close approach changes, an evaluation of the benefits of domestic reduced vertical separation minimum implementation, and a baseline study for the airspace flow program. As of 2008, PDARS was in use at 20 Air Route Traffic Control Centers, 19 Terminal Radar Approach Control facilities, three FAA service area offices, the FAA’s Air Traffic Control System Command Center in Herndon, VA, and at FAA Headquarters in Washington, DC.61
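To suggest how measures of this kind can be derived from radar-track data, the following sketch computes two of them. The record layout, field names, and units here are hypothetical, invented purely for illustration; the actual PDARS data formats and algorithms are documented in the referenced reports, not in this text.

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    """One radar return for one flight (hypothetical layout)."""
    flight_id: str
    t: float  # seconds since midnight
    x: float  # nautical miles east of a reference point
    y: float  # nautical miles north of a reference point

def travel_time_minutes(track: list) -> float:
    """Elapsed time between a flight's first and last radar hits."""
    return (track[-1].t - track[0].t) / 60.0

def min_trailing_separation(lead: list, trail: list) -> float:
    """Smallest straight-line distance (nmi) between two flights at the
    radar sweeps they share; assumes at least one common timestamp."""
    lead_by_time = {p.t: p for p in lead}
    pairs = [(lead_by_time[p.t], p) for p in trail if p.t in lead_by_time]
    return min(((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5 for a, b in pairs)
```

Run across a day's worth of tracks at every facility, calculations of this flavor are what populate the automatic morning reports described above.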


National Aviation Operations Monitoring Service

A further contribution to the Aviation Safety Monitoring and Modeling project provided yet another method for gathering data and crunching numbers in the name of making the Nation's airspace safer amid increasingly crowded skies. Whereas the Aviation Safety Reporting System involved volunteered safety reports and the Performance Data Analysis and Reporting System took its input in real time from digital data sources, the National Aviation Operations Monitoring Service was a scientifically designed survey of the aviation community to generate statistically valid reports about the number and frequency of incidents that might compromise safety.62
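The statistical logic of such a survey can be illustrated with a toy calculation. The numbers and the simple Poisson-style interval below are invented for illustration only; they are not NAOMS methodology or results.

```python
import math

def event_rate_per_100k_hours(events, hours, z=1.96):
    """Extrapolate an incident rate from a random sample of pilot
    interviews, with a rough normal-approximation confidence bound.
    Toy inputs only; the real survey design was far more careful."""
    rate = events / hours
    se = math.sqrt(events) / hours  # Poisson-count standard error
    scale = 100_000
    return rate * scale, (rate - z * se) * scale, (rate + z * se) * scale

# 120 reported events across 800,000 sampled flight-hours works out to
# roughly 15 events per 100,000 hours, plus or minus about 2.7.
print(event_rate_per_100k_hours(120, 800_000))
```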

After a survey was developed that would gather credible data from anonymous volunteers, an initial field trial of the NAOMS was held in 2000, followed by the launch of the program in 2001. Initially, the surveyors only sought out air carrier pilots who were randomly chosen from the FAA Airman's Medical Database. Researchers characterized the response to the NAOMS survey as enthusiastic. Between April 2001 and December 2004, nearly 30,000 pilot interviews were completed, with a remarkable 83-percent return rate, before the project ran short of funds and had to stop. The level of response was enough to achieve statistical validity and prove that NAOMS could be used as a permanent tool for managers to assess the operational health of the ATC system and suggest changes before they were actually needed. Although NASA and the FAA wanted the project to continue, it was shut down on January 31, 2008.63 It's worth mentioning that the NAOMS briefly became the subject of public controversy in 2007, when NASA received a Freedom of Information Act request from a reporter for the data obtained in the NAOMS survey. NASA denied the request, using language that then NASA Administrator Mike Griffin said left an "unfortunate impression" that the Agency was not acting in the best interest of the public. NASA eventually released the data after ensuring the anonymity originally guaranteed to those who were surveyed. In a January 14, 2008, letter from Griffin to all NASA employees, the Administrator summed up the experience by writing: "As usual in such circumstances, there are lessons to be learned, remembered, and applied. The NAOMS case demonstrates again, if such demonstrations were needed, the importance of peer review, scientific integrity, admitting mistakes when they are made, correcting them as best we can, and keeping our word, despite the criticism that can ensue."64

An Updated Safety Program

In 2006, NASA's Aeronautics Research Mission Directorate (ARMD) was reorganized. As a result, the projects that fell under ARMD's Aviation Safety Program were restructured as well, with more of a focus on

63. Statler, "The Aviation System Monitoring and Modeling (ASMM) Project: A Documentation of its History and Accomplishments: 1999–2005," NASA TP-2007-214556 (2007).
64. Michael Griffin, "Letter from NASA Administrator Mike Griffin" (Washington, DC: NASA, 2008).


aircraft safety than on the skies they fly through. Air traffic improvements in the new plan now fall almost exclusively within the Airspace Systems Program. The Aviation Safety Program is now dedicated to developing the principles, guidelines, concepts, tools, methods, and technologies to address four project areas: the Integrated Vehicle Health Management Project,65 the Integrated Intelligent Flight Deck Technologies Project,66 the Integrated Resilient Aircraft Control Project,67 and the Aircraft Aging and Durability Project.68

Commercial Aviation Safety Team (CAST)

When NASA's Aviation Safety Program was begun in 1997, the Agency joined with a large group of aviation-related organizations from Government, industry, and academia in forming a Commercial Aviation Safety Team (CAST) to help reduce the U.S. commercial aviation fatal accident rate by 80 percent in 10 years. During those 10 years, the group analyzed data from some 500 accidents and thousands of safety incidents and helped develop 47 safety enhancements.69 In 2008, the group could boast that the rate had been reduced by 83 percent, and for that, CAST was awarded aviation's most prestigious honor, the Robert J. Collier Trophy.


NASA’s work with improving the National Airspace System has won the Agency two Collier Trophies: one in 2007 for its work with developing the new next-generation ADS-B instrumentation, and one in 2008 as part of the Commercial Aviation Safety Team, which helped improve air safety during the past decade. NASA.

Air Traffic Management Research

The work of NASA's Aeronautics Research Mission Directorate primarily takes place at NASA Field Centers in Virginia, Ohio, and California. It's at the Ames Research Center at Moffett Field, CA, that a large share of the work to make safer skyways has been managed. Many of the more effective programs to improve the safety and efficiency of the Nation's air traffic control system began at Ames and continue to be studied.70 Seven programs managed within the divisions of Ames's Air Traffic Management Research office, described in the next section, reveal how NASA research is making a difference in the skies every day.

Airspace Concept Evaluation System

The Airspace Concept Evaluation System (ACES) is a computer tool that allows researchers to try out novel Air Traffic Management (ATM) theories, weed out those that are not viable, and identify the most promising concepts. ACES looks at how a proposed air transportation concept can work within the National Airspace System (NAS), with the aim of reducing delays, increasing capacity, and handling projected growth in air traffic. ACES does this by simulating the major components of the NAS, modeling a flight from gate to gate, and taking into account in its models the individual behaviors of those that affect the NAS, from departure clearance to the traffic control tower, the weather office, navigation systems, pilot experience, type of aircraft, and other major components. ACES also is able to predict how one individual behavior can set up a ripple effect that touches, or has the potential to touch, the entire NAS. This modeling approach isolates the individual models so that they can continue to be enhanced, improved, and modified to represent new concepts without impacting development of the overall simulation system.71 Among the variables ACES has been tasked to run through its simulations are environmental impacts when a change is introduced,72 use

of various communication and navigation models,73 validation of certain concepts under different weather scenarios,74 adjustments to spacing and merging of traffic around dense airports,75 and reduction of air traffic controller workload by automating certain tasks.76
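The modular, gate-to-gate architecture described above can be suggested in miniature. Every class name and number in this sketch is hypothetical; it illustrates only the design idea that each NAS component is an independently swappable model feeding one overall simulation.

```python
class ComponentModel:
    """Base class for an independently replaceable NAS component model."""
    def delay_minutes(self, flight: dict) -> float:
        return 0.0

class WeatherModel(ComponentModel):
    def __init__(self, storm_airports):
        self.storm_airports = set(storm_airports)
    def delay_minutes(self, flight: dict) -> float:
        # A storm at either end of a flight ripples into a ground delay.
        ends = {flight["origin"], flight["destination"]}
        return 30.0 if ends & self.storm_airports else 0.0

class TowerModel(ComponentModel):
    def delay_minutes(self, flight: dict) -> float:
        return 2.0  # crude stand-in for runway congestion at the tower

def simulate_gate_to_gate(flights, models):
    """Total each component's contribution per flight. Swapping an entry
    in `models` changes the scenario without touching anything else."""
    return {f["id"]: sum(m.delay_minutes(f) for m in models) for f in flights}

flights = [{"id": "AAL1", "origin": "DFW", "destination": "ATL"},
           {"id": "UAL2", "origin": "ORD", "destination": "SFO"}]
print(simulate_gate_to_gate(flights, [WeatherModel({"ATL"}), TowerModel()]))
```

The isolation of each model behind a common interface is the property the text emphasizes: one component can be refined or replaced to represent a new concept without disturbing the rest of the simulation.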


Future ATM Concepts Evaluation Tool

Another NASA air traffic simulation tool, the Future ATM Concepts Evaluation Tool (FACET), was created to allow researchers to explore, develop, and evaluate advanced traffic control concepts. The system can operate in several modes: playback, simulation, live, or in a sort of hybrid mode that connects it with the FAA's Enhanced Traffic Management System (ETMS). ETMS is an operational FAA program that monitors and reacts to air traffic congestion, and it can also predict when and where congestion might happen. (The ETMS is responsible, for example, for keeping a plane grounded in Orlando because of traffic congestion in Atlanta.) Streaming the ETMS live data into a run of FACET makes the simulation of a new advanced traffic control concept more accurate. Moreover, FACET is able to model airspace operations on a national level, processing the movements of more than 5,000 aircraft on a single desktop computer, taking into account aircraft performance, weather, and other variables.77 Some of the advanced concepts tested in FACET include allowing aircraft to have greater freedom in maintaining separation on their own,78 integrating space launch vehicle and aircraft operations into the

73. Greg Kubat and Don Vandrei, "Airspace Concept Evaluation System, Concept Simulations using Communication, Navigation and Surveillance System Models," Proceedings of the Sixth Integrated Communications, Navigation and Surveillance Conference & Workshop, Baltimore, May 1–3, 2006.
74. Larry Meyn and Shannon Zelinski, "Validating the Airspace Concept Evaluation System for Different Weather Days," AIAA Modeling and Simulation Technologies Conference, Keystone, CO, Aug. 21–24, 2006.
75. Art Feinberg, Gary Lohr, Vikram Manikonda, and Michel Santos, "A Simulation Testbed for Airborne Merging and Spacing," AIAA Atmospheric Flight Mechanics Conference, Honolulu, Aug. 18–21, 2008.
76. Heinz Erzberger and Robert Windhorst, "Fast-time Simulation of an Automated Conflict Detection and Resolution Concept," 6th AIAA Aviation Technology, Integration and Operations Conference, Wichita, Sept. 25–27, 2006.
77. Banavar Sridhar, "Future Air Traffic Management Concepts Evaluation Tool," Ames Research Center Research and Technology 2000 (Moffett Field: NASA, 2000), p. 5.
78. Karl D. Bilimoria and Hilda Q. Lee, "Properties of Air Traffic Conflicts for Free and Structured Routing," AIAA GN&C Conference, Montreal, Aug. 2001.


airspace, and monitoring how efficiently aircraft comply with ATC instructions when their flights are rerouted.79 In fact, the last of these concepts was so successful that it was deployed into the FAA's operational ETMS. NASA reports that the success of FACET has led to its use as a simulation tool not only with the FAA, but also with several airlines, universities, and private companies. For example, Flight Dimensions International—the world's leading vendor of aircraft situational displays—recently integrated FACET with its already popular Flight Explorer product. FACET won NASA's 2006 Software of the Year Award.80

Surface Management System

Making the skyways safer for aircraft to fly by reducing delays and lowering the stress on the system begins and ends with the short journey on the ground between the active runway and the terminal gate. To better coordinate events between the air and ground sides, NASA developed, in cooperation with the FAA, a software tool called the Surface Management System (SMS), whose purpose is to manage the movements of aircraft on the surface of busy airports to improve capacity, efficiency, and flexibility.81 The SMS has three parts: a traffic management tool, a controller tool, and a National Airspace System information tool.82 The traffic management tool monitors aircraft positions in the sky and on the ground, along with the latest times when a departing airliner is about to be pushed back from its gate, to predict demand for taxiway and runway usage, with an aim toward understanding where backups might take place. Sharing this information among the traffic control tools and systems allows for more efficient planning. Similarly, the controller tool helps personnel in the ATC and ramp towers to better coordinate the movement of arriving and departing flights and to

79. Sarah Stock Patterson, "Dynamic Flow Management Problems in Air Transportation," NASA CR-97-206395 (1997).
80. "Comprehensive Software Eases Air Traffic Management," Spinoff 2007 (Washington, DC: NASA, 2007).
81. Dave Jara and Yoon C. Jung, "Development of the Surface Management System Integrated with CTAS Arrival Tools," AIAA 5th Aviation Technology, Integration and Operations Forum, Arlington, TX, Sept. 2005.
82. Katherine Lee, "CTAS and NASA Air Traffic Management Fact Sheets for En Route Descent Advisor and Surface Management System," NATCA Safety Conference, Fort Worth, Apr. 2004.


advise pilots on which taxiways to use as they navigate between the runway and the gate.83 Finally, the NAS information tool allows data from the SMS to be passed into the FAA’s national Enhanced Traffic Management System, which in turn allows traffic controllers to have a more accurate picture of the airspace.84
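The traffic management tool's core prediction, demand for the runway inferred from gate pushback times, can be caricatured in a few lines. The taxi time and bin size below are invented numbers, not SMS parameters, and the real system's predictive models are far richer.

```python
from collections import Counter

def predict_runway_demand(pushback_minutes, taxi_minutes=12, bin_minutes=15):
    """Bucket each departure's expected runway-use time into bins so a
    planner can spot where backups are likely to form."""
    demand = Counter()
    for t in pushback_minutes:
        runway_time = t + taxi_minutes  # when the flight reaches the runway
        demand[(runway_time // bin_minutes) * bin_minutes] += 1
    return dict(demand)

# Several pushbacks bunched near the top of the hour produce a spike in
# one quarter-hour bin, the kind of surface backup flagged in advance.
print(predict_runway_demand([0, 2, 5, 6, 8, 40]))
```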


Center TRACON Automation System

The computer-based tools used to improve the flow of traffic across the National Airspace System—such as SMS, FACET, and ACES already discussed—were built upon the historical foundation of another set of tools that are still in use today. Rolled out during the 1990s, the underlying concepts of these tools go back to 1968, when an Ames Research Center scientist, Heinz Erzberger, first explored the idea of introducing air traffic control concepts—such as 4-D trajectory synthesis—and then proposed what was, in fact, developed: the Center TRACON Automation System (CTAS), the Traffic Manager Adviser (TMA), the En Route Descent Adviser (EDA), and the Final Approach Spacing Tool (FAST). Each of the tools provides controllers with advice, information, and some amount of automation—but each tool does this for a different segment of the NAS.85 CTAS provides automation tools to help air traffic controllers plan for and manage aircraft arriving at a Terminal Radar Approach Control (TRACON), which is the area within about 40 miles of a major airport. It does this by generating air traffic advisories that are designed to increase fuel efficiency and reduce delays, as well as assist controllers in ensuring that there is an acceptable separation between aircraft and that planes are approaching a given airport in the correct order. CTAS's goals also include improving airport capacity without threatening safety or increasing the workload of controllers.86

Flight controllers test the Traffic Manager Adviser tool at the Denver TRACON. The tool helps manage the flow of air traffic in the area around an airport. National Air and Space Museum.

Traffic Manager Adviser

Airspace over the United States is divided into 22 areas. The skies within each of these areas are managed by an Air Route Traffic Control Center. At each center, there are controllers designated Traffic Management Coordinators (TMCs), who are responsible for producing a plan to deliver aircraft to a TRACON within the center at just the right time, with proper separation, and at a rate that does not exceed the capacity of the TRACON and destination airports.87 The NASA-developed Traffic Manager Adviser tool assists the TMCs in producing and updating that plan. The TMA does this by using graphical displays and alerts to increase the TMCs' situational awareness. The program also computes and provides statistics on the undelayed estimated time of arrival to various navigation milestones of an arriving aircraft and even gives the aircraft a runway assignment and scheduled time of arrival (which might later be changed by FAST). This information is constantly updated based on live radar updates and controller inputs and remains interconnected with other CTAS tools.88

87. Harry N. Swenson and Danny Vincent, "Design and Operational Evaluation of the Traffic Management Advisor at the Ft. Worth Air Route Traffic Control Center," United States/Europe Air Traffic Management Research and Development Seminar, Paris, June 16–19, 1997.


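The heart of the TMA idea, metering undelayed arrival estimates into a conflict-free schedule, can be sketched as follows. The 90-second spacing and flight names are hypothetical; the real scheduler also weighs runway assignments, aircraft types, weather, and controller inputs.

```python
def schedule_arrivals(etas, min_spacing_sec=90):
    """First-come-first-served metering: each aircraft's scheduled time
    of arrival (STA) is its undelayed ETA, pushed back as needed to stay
    at least `min_spacing_sec` behind the aircraft ahead."""
    stas, last = {}, None
    for flight, eta in sorted(etas.items(), key=lambda kv: kv[1]):
        sta = eta if last is None else max(eta, last + min_spacing_sec)
        stas[flight], last = sta, sta
    return stas

# Three arrivals estimated 30 s apart get stretched to 90-s spacing; the
# STA-minus-ETA difference is the delay to be absorbed upstream, which is
# what the En Route Descent Adviser turns into efficient maneuvers.
print(schedule_arrivals({"UAL10": 1000, "DAL22": 1030, "SWA33": 1060}))
```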


En Route Descent Adviser

The National Airspace System relies on a complex set of actions with thousands of variables. If one aircraft is so much as 5 minutes out of position as it approaches a major airport, the error could trigger a domino effect that results in traffic congestion in the air, too many airplanes on the ground needing to use the same taxiway at the same time, late arrivals to the gate, and missed connections. One specific tool created by NASA to avoid this is the En Route Descent Adviser. Using data from CTAS, TMA, and live radar updates, the EDA software generates specific traffic control instructions for each aircraft approaching a TRACON so that it crosses an exact navigation fix in the sky at the precise time set by the TMA tool. The EDA tool does this with all ATC constraints in mind and with maneuvers that are as fuel efficient as possible for the type of aircraft.89 Improving the efficient flow of air traffic through the TRACON to the airport by using EDA as early in the approach as practical makes it possible for the airport to receive traffic in a constant feed, avoiding the need for aircraft to waste time and fuel by circling in a parking orbit before taking their turn to approach the field. Another benefit: EDA allows controllers during certain high-workload periods to concentrate less on timing and more on dealing with variables such as changing weather and airspace conditions or handling special requests from pilots.90

Final Approach Spacing Tool

The last of the CTAS tools, which can work independently but is more efficient when integrated into the full CTAS suite, is the Final Approach Spacing Tool. It assists TRACON controllers in determining the most efficient sequence, schedule, and runway assignments for aircraft intending to land. FAST takes advantage of information provided by the TMA and EDA tools in making its assessments and displaying advisories to

the controller, who then directs the aircraft as usual by radio communication. FAST also makes its determinations by using live radar, weather and wind data, and a series of other static databases, such as aircraft performance models, each airline's preferred operational procedures, and standard air traffic rules.91 Early tests of a prototype FAST system during the mid-1990s at the Dallas/Fort Worth International Airport TRACON showed immediate benefits of the technology. Using FAST's runway assignment and sequence advisories during more than 25 peak traffic periods, controllers measured a 10- to 20-percent increase in airport capacity, depending on weather and airport conditions.92

Simulating Safer Skyways

From new navigation instruments to updated air traffic control procedures, none of the developments NASA produced to make safer skyways could be deployed into the real world until it had been thoroughly tested in simulated environments and certified as ready for use by the FAA. Among the many facilities and aircraft available to NASA to conduct such exercises, the Langley-based Boeing 737 and Ames-based complement of air traffic control simulators stand out as major contributors to the effort of improving the National Airspace System.

Langley's Airborne Trailblazer

The first Boeing 737 ever built was acquired by NASA in 1974 and modified to become the Agency's Boeing 737-100 Transport Systems Research Vehicle. During the next 20 years, it flew 702 missions to help NASA advance aeronautical technology in every discipline possible, first as a NASA tool for specific programs and then more generally as a national airborne research facility. Its contributions to the growth in capability and safety of the National Airspace System included the testing of hardware and procedures using new technology, most notably in the cockpit. Earning its title as an airborne trailblazer, it was the Langley 737 that tried out and won acceptance for new ideas such as the glass

NASA’s Airborne Trailblazer is seen cruising above the Langley Research Center in Virginia. The Boeing 737 served as a flying laboratory for NASA’s aeronautics research for two decades. NASA.

cockpit. Those flat panel displays enabled other capabilities tested by the 737, such as data links for air traffic control communications, the microwave landing system, and satellite-based navigation using the revolutionary Global Positioning System.93 With plans to retire the 737, NASA Langley in 1994 acquired a Boeing 757-200 to be the new flying laboratory, earning the designation Airborne Research Integrated Experiments System (ARIES). In 2006, NASA decided to retire the 757.94

Ames's SimLabs

NASA's Ames Research Center in California is home to some of the more sophisticated and powerful simulation laboratories, which Ames calls SimLabs. The simulators support a range of research, with an emphasis on aerospace vehicles, aerospace systems and operations, human factors, accident investigations, and studies aimed at improving aviation

93. Wallace, "Airborne Trailblazer," 1994.
94. Michael S. Wusk, "ARIES: NASA Langley's Airborne Research Facility," AIAA 2002-5822 (2002).


safety. They all have played a role in making new air traffic control concepts and associated technology work. The SimLabs include:

•	Future Flight Central, which is a national air traffic control and Air Traffic Management simulation facility dedicated to exploring solutions to the growing problem of traffic congestion and capacity, both in the air and on the ground. The simulator is a two-story facility with a 360-degree, full-scale, real-time simulation of an airport, in which new ideas and technology can be tested or personnel can be trained.95
•	Vertical Motion Simulator, which is a highly adaptable flight simulator that can be configured to represent any aerospace vehicle, whether real or imagined, and still provide a high-fidelity experience for the pilot. According to a facility fact sheet, existing vehicles that have been simulated include a blimp, helicopters, fighter jets, and the Space Shuttle orbiter. The simulator can be integrated with Future Flight Central or any of the air traffic control simulators to provide real-time interaction.96
•	Crew-Vehicle Systems Flight Facility,97 which itself has three major simulators, including a state-of-the-art Boeing 747 motion-based cockpit,98 an Advanced Concept Flight Simulator,99 and an Air Traffic Control Simulator consisting of 10 PC-based computer workstations that can be used in a variety of modes.100

A full-sized Air Traffic Control Simulator with a 360-degree panorama display, called Future Flight Central, is available to test new systems or train controllers in extremely realistic scenarios. NASA.

The Future of ATC

Fifty years of working to improve the Nation's airways and the equipment and procedures needed to manage the system have laid the foundation for NASA to help lead the most significant transformation of the National Airspace System in the history of flight. No corner of the air traffic control operation will be left untouched. From airport to airport, every phase of a typical flight will be addressed, and new technology and solutions will be sought to raise capacity in the system, lower operating costs, increase safety, and enhance the security of an air transportation system that is so vital to our economy. This program originated from the 2002 Commission on the Future of Aerospace in the United States, which recommended an overhaul of the air transportation system as a national priority—mostly from the concern that air traffic is predicted to double, at least, during the next 20 years. Congress followed up with some money, and President George W. Bush signed into law a plan to create a Next Generation Air Transportation System (NextGen). To manage the effort, a Joint Planning and Development Office (JPDO) was created, with NASA, the FAA, the DOD, and other key aviation organizations as members.101

101. Jeremy C. Smith and Kurt W. Neitzke, "Metrics for the NASA Airspace Systems Program," NASA SP-2009-6115 (2009).


NASA then organized itself to manage its NextGen efforts through the Airspace Systems Program. Within the program, NASA's efforts are further divided into projects that are in support of either NextGen Airspace or NextGen Airportal. The airspace project is responsible for dealing with air traffic control issues such as increasing capacity, determining how much more automation can be introduced, scheduling, spacing of aircraft, and rolling out a GPS-based navigation system that will change the way we perceive flying. Naturally, the airportal project is examining ways to improve terminal operations in and around the airplanes, including the possibility of building new airports.102 Already, several technologies are being deployed as part of NextGen. One is called the Wide Area Augmentation System (WAAS); another, the Automatic Dependent Surveillance-Broadcast (ADS-B). Both have to do with deploying a satellite-based GPS tracking system that would end reliance on radars as the primary means of tracking an aircraft's approach.103 WAAS is designed to enhance the GPS signal from Earth orbit and make it more accurate for use in civilian aviation by correcting for the errors that are introduced in the GPS signal by the planet's ionosphere.104 Meanwhile, ADS-B, which is deployed at several locations around the U.S., combines information with a GPS signal and drives a cockpit display that tells the pilots precisely where they are and where other aircraft are in their area, but only if those other aircraft are similarly equipped with the ADS-B hardware. By combining ADS-B, GPS, and WAAS signals, a pilot can navigate to an airport even in low visibility.105 NASA was a member of the Government and industry team led by the FAA that conducted an ADS-B field test several years ago with United Parcel Service at its hub in Louisville, KY. This work earned the team the 2007 Collier Trophy. In these various ways, NASA has worked to increase the safety of the air traveler and to enhance the efficiency of the global air transportation

102. Stephen T. Darr, Katherine A. Lemos, and Wendell R. Ricks, "A NextGen Aviation Safety Goal," 2008 Digital Avionics Systems Conference, St. Paul, MN, Oct. 26–30, 2008.
103. A. Buige, "FAA Global Positioning System Program," Global Positioning System for Gen. Aviation: Joint FAA–NASA Seminar, Washington, DC, 1978.
104. Muna Demitri, Ian Harris, Byron Iijima, Ulf Lindqwister, Anthony Manucci, Xiaoqing Pi, and Brian Wilson, "Ionosphere Delay Calibration and Calibration Errors for Satellite Navigation of Aircraft," Jet Propulsion Laboratory, Pasadena, CA, 2000.
105. T. Breen, R. Cassell, C. Evers, R. Hulstrom, and A. Smith, "System-Wide ADS-B Back Up and Validation," Sixth Integrated Communications, Navigation and Surveillance Conference, Baltimore, May 1–3, 2006.


network. As winged flight enters its second century, it is a safe bet that the Agency’s work in coming years will be as comprehensive and influential as it has been in the past, thanks to the competency, dedication, and creativity of NASA people.


A Langley Research Center human factors research engineer inspects the interior of a light business aircraft after a simulated crash to assess the loads experienced during accidents and develop means of improving survivability. NASA.


Case 4 | Human Factors Research: Meshing Pilots with Planes

Steven A. Ruffin

The invention of flight exposed human limitations. Altitude effects endangered early aviators. As the capabilities of aircraft grew, so did the challenges for aeromedical and human factors researchers. Open cockpits gave way to pressurized cabins. Wicker seats perched on the leading edge of frail wood-and-fabric wings were replaced by robust metal seats and eventually sophisticated rocket-boosted ejection seats. The casual cloth work clothes and hats of the earliest aviators gave way to increasingly complex protective suits.

AS MERCURY ASTRONAUT ALAN B. SHEPARD, JR., lay flat on his back, sealed in a metal capsule perched high atop a Redstone rocket on the morning of May 5, 1961, many thoughts probably crossed his mind: the pride he felt at becoming America's first man in space, or perhaps, the possibility that the powerful rocket beneath him would blow him sky high . . . in a bad way, or maybe even a greater fear that he would "screw the pooch" by doing something to embarrass himself—or far worse—jeopardize the U.S. space program. After lying there nearly 4 hours and suffering through several launch delays, however, Shepard was by his own admission not thinking about any of these things. Rather, he was consumed with an issue much more down to earth: his bladder was full, and he desperately needed to relieve himself. Because exiting the capsule was out of the question at this point, he literally had no place to go. The designers of his modified Goodrich U.S. Navy Mark IV pressure suit had provided for nearly every contingency imaginable, but not this; after all, the flight was only scheduled to last a few minutes. Finally, Shepard was forced to make his need known to the controllers below. As he candidly described later, "You heard me, I've got to pee. I've been in here forever."1 Despite the unequivocal reply of "No!" to

1. Alan Shepard and Deke Slayton, with Jay Barbree and Howard Benedict, Moon Shot: The Inside Story of America's Race to the Moon (Atlanta: Turner Publishers, Inc., 1994), p. 107.

his request, Shepard’s bladder gave him no alternative but to persist. Historic flight or not, he had to go—and now. When the powers below finally accepted that they had no choice, they gave the suffering astronaut a reluctant thumbs up: so, “pee,” he did . . . all over his sensor-laden body and inside his gleaming silver spacesuit. And then, while the world watched—unaware of this behindthe-scenes drama—Shepard rode his spaceship into history . . . drenched in his own urine. This inauspicious moment should have been something of an epiphany for the human factors scientists who worked for the newly formed 178


National Aeronautics and Space Administration (NASA). It graphically pointed out the obvious: human requirements—even the most basic ones—are not optional; they are real, and accommodations must always be made to meet them. But NASA's piloted space program had advanced so far technologically in such a short time that this was only one of many lessons that the Agency's planners had learned the hard way. There would be many more in the years to come. As described in the Tom Wolfe book and movie of the same name, The Right Stuff, the first astronauts were considered by many of their contemporary non-astronaut pilots—including the ace who first broke the sound barrier, U.S. Air Force test pilot Chuck Yeager—as little more than "spam in a can."2 In fact, Yeager's commander in charge of all the test pilots at Edwards Air Force Base had made it known that he didn't particularly want his top pilots volunteering for the astronaut program; he considered it a "waste of talent."3 After all, these new astronauts—more like lab animals than pilots—had little real function in the early flights, other than to survive, and sealed as they were in their tiny metal capsules with no realistic means of escape, the cynical "spam in a can" metaphor was not entirely inappropriate. But all pilots appreciated the dangers faced by this new breed of American hero: based on the space program's much-publicized recent history of one spectacular experimental launch failure after another, it seemed like a morbidly fair bet to most observers that the brave astronauts, sitting helplessly astride 30 tons of unstable and highly explosive rocket fuel, had a realistic chance of becoming something akin to America's most famous canned meat dish. It was indeed a dangerous job, even for the 7 overqualified test-pilots-turned-astronauts who had been so carefully chosen from more than 500 actively serving military test pilots.4 Clearly, piloted space flight had to become considerably more human-friendly if it were to become the way of the future. NASA had existed less than 3 years before Shepard's flight. On July 29, 1958, President Dwight D. Eisenhower signed into law the National Aeronautics and Space Act of 1958, and chief among the provisions was the establishment of NASA. Expanding on this act's stated purpose of

conducting research into the "problems of flight within and outside the earth's atmosphere" was an objective to develop vehicles capable of carrying—among other things—"living organisms" through space.5 Because this official directive clearly implied the intention of sending humans into space, NASA was from its inception charged with formulating a piloted space program. Consequently, within 3 years of its creation, the budding space agency successfully launched its first human: Alan Shepard completed NASA Mercury mission MR-3 to become America's first man in space. Encapsulated in his Freedom 7 spacecraft, he lifted off from Cape Canaveral, FL, and flew to an altitude of just over 116 miles before splashing down into the Atlantic Ocean 302 miles downrange.6 It was only a 15-minute suborbital flight and, as related above, not without problems, but it accomplished its objective: America officially had a piloted space program. This was no small accomplishment. Numerous major technological barriers had to be surmounted during this short time before even this most basic of piloted space flights was possible. Among these obstacles, none was more challenging than the problems associated with maintaining and supporting human life in the ultrahostile environment of space. Thus, from the beginning of the Nation's space program and continuing to the present, human factors research has been vital to NASA's comprehensive research program.

The Science of Human Factors

To be clear, however, NASA did not invent the science of human factors. Not only was the term in use long before NASA existed, but the concept it describes has existed since the beginning of mankind. Human factors research encompasses nearly all aspects of science and technology and therefore has been described with several different names. In simplest terms, human factors studies the interface between humans and the machines they operate. One of the pioneers of this science, Dr. Alphonse Chapanis, provided a more inclusive and descriptive definition:

“Human factors discovers and applies information about human behavior, abilities, limitations, and other characteristics to the design of tools, machines, systems, tasks, jobs, and environments for productive, safe, comfortable, and effective human use.”7 The goal of human factors research, therefore, is to reduce error, while increasing productivity, safety, and comfort in the interaction between humans and the tools with which they work.8 As already suggested, the study of human factors involves a myriad of disciplines. These include medicine, physiology, applied psychology, engineering, sociology, anthropology, biology, and education.9 These in turn interact with one another and with other technical and scientific fields, as they relate to behavior and usage of technology. Human factors issues are also described by many similar—though not necessarily synonymous—terms, such as human engineering, human factors engineering, human factors integration, human systems integration, ergonomics, usability, engineering psychology, applied experimental psychology, biomechanics, biotechnology, man-machine design (or integration), and human-centered design.10


The Changing Human Factors Dimension Over Time

The consideration of human factors in technology has existed since the first man shaped a wooden spear with a sharp rock to help him grasp it more firmly. It therefore stands to reason that the dimension of human factors has changed over time with advancing technology—a trend that has accelerated throughout the 20th century and into the current one.11 Man's earliest requirements for using his primitive tools and weapons gave way during the Industrial Revolution to more refined needs in operating more complicated tools and machines. During this period, the emergence of more complex machinery necessitated increased consideration of the needs of the humans who were to operate this machinery—even

7. Alphonse Chapanis, "Some reflections on progress," paper presented at the Proceedings of the Human Factors Society 29th Annual Meeting (Santa Monica, CA: Human Factors Society, 1985), pp. 1–8.
8. Christopher D. Wickens, Sallie E. Gordon, and Yili Liu, An Introduction to Human Factors Engineering (New York: Longman, 1998), p. 2.
9. Peggy Tillman and Barry Tillman, Human Factors Essentials: An Ergonomics Guide for Designers, Engineers, Scientists, and Managers (New York: McGraw-Hill, 1991), p. 4.
10. Ibid., p. 5.
11. Ibid., pp. 9–10.


if it was nothing more complicated than providing a place for the operator to sit, or a handle or step to help this person access instruments and controls. In the years after the Industrial Revolution, human factors concerns became increasingly important.12

The Altitude Problem

The interface between humans and technology was no less important for those early pioneers, who, for the first time in history, were starting to reach for the sky. Human factors research in aeronautics did not, however, begin with the Wright brothers' first powered flight in 1903; it began more than a century earlier. Much of this early work dealt with the effects of high altitude on humans. At greater heights above the Earth, barometric pressure decreases. This allows the air to expand and become thinner. The net effect is diminished breathable oxygen at higher altitudes. In humans operating high above sea level without supplemental oxygen, this translates to a medical condition known as hypoxia. The untoward effects on humans of hypoxia, or altitude sickness, had been known for centuries—long before man ever took to the skies. It was a well-known entity to ancient explorers traversing high mountains, thus the still commonly used term mountain sickness.13 The world's first aeronauts—the early balloonists—soon noticed this phenomenon when ascending to higher altitudes; eventually, some of the early flying scientists began to study it. As early as 1784, American physician John Jeffries ascended to more than 9,000 feet over London with French balloonist Jean Pierre Blanchard.14 During this flight, they recorded changes in temperature and barometric pressure and became perhaps the first to record an "aeromedical" problem, in the form of ear pain associated with altitude changes.15 Another early flying doctor, British physician John Shelton, also wrote of the detrimental effects of high-altitude flight on humans.16
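The mechanism can be made concrete with a standard textbook approximation that is not part of the original account: in an isothermal atmosphere, pressure falls off exponentially with altitude,

$$p(h) \approx p_0\, e^{-h/H}, \qquad H = \frac{RT}{Mg} \approx 8.4\ \mathrm{km}.$$

At the roughly 9,000 feet (about 2.7 km) that Jeffries reached, this gives $p \approx p_0 e^{-2.7/8.4} \approx 0.72\,p_0$, so each breath delivered barely three-quarters of the sea-level supply of oxygen; by 36,000 feet the fraction falls to roughly a quarter.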

During the 1870s—with mankind's first powered, winged human flight still decades in the future—French physiologist Paul Bert conducted important research on the manner in which high-altitude flight affects living organisms. Using the world's first pressure chamber, he studied the effects of varying barometric pressure and oxygen levels on dogs and later humans—himself included. He conducted 670 experiments at simulated altitudes of up to 36,000 feet. His findings clarified the effects of high-altitude conditions on humans and established the requirement for supplemental oxygen at higher altitudes.17 Later studies by other researchers followed, so that by the time piloted flight in powered aircraft became a reality at Kitty Hawk, NC, on December 17, 1903, the scientific community already had a substantial amount of knowledge concerning the physiology of high-altitude flight. Even so, there was much more to be learned, and additional research in this important area would continue in the decades to come.


Early Flight and the Emergence of Human Factors Research

During the early years of 20th century aviation, it became apparent that maintaining human life and function at high altitude was only one of many human factors challenges associated with powered flight. Aviation received its first big technological boost during the World War I years of 1914–1918.18 Accompanying this advancement was a new set of human-related problems associated with flight.19 As a result of the massive, nearly overnight wartime buildup, there were suddenly tens of thousands of newly trained pilots worldwide, flying on a daily basis in aircraft far more advanced than anyone had ever imagined possible. In the latter stages of the war, aeronautical know-how had become so sophisticated that aircraft capabilities had surpassed those of their human operators. These Great War pilots, flying open-cockpit aircraft capable of altitudes occasionally exceeding 20,000 feet, began to routinely

suffer from altitude sickness and frostbite.20 They were also experiencing pressure-induced ear, sinus, and dental pain, as well as motion sickness and vertigo.21 In addition, these early open-cockpit pilots endured the effects of ear-shattering noise, severe vibration, noxious engine fumes, extreme acceleration or gravitational g forces, and a constant hurricane-force wind blast to their faces.22 And as if these physical challenges were not bad enough, these early pilots also suffered devastating injuries from crashes in aircraft unequipped with practically any basic safety features.23 Less obvious, but still a very real human problem, these early high flyers were exhibiting an array of psychological problems, to which these stresses undoubtedly contributed.24 Indeed, though proof of the human limitations in flying during this period was hardly needed, the British found early in the war that only 2 percent of aviation fatalities came at the hands of the enemy, while 90 percent were attributed to pilot deficiencies; the remainder came from structural and engine failure, and a variety of lesser causes.25 By the end of World War I, it was painfully apparent to flight surgeons, psychologists, aircraft designers, and engineers that much additional work was needed to improve the human-machine interface associated with piloted flight. Because of the many flight-related medical problems observed in airmen during the Great War, much of the human factors research accomplished during the following two decades leading to the Second World War focused largely on the aeromedical aspects of flight. Flight surgeons, physiologists, engineers, and other professionals of this period devoted themselves to developing better life-support equipment and other protective gear to improve safety and efficiency during flight operations. Great emphasis was also placed on improving pilot selection.26

Of particular note during the interwar period of the 1920s and 1930s were several piloted high-altitude balloon flights conducted to further investigate conditions in the upper part of the Earth's atmosphere known as the stratosphere. Perhaps the most ambitious and fruitful of these was the 1935 joint U.S. Army Air Corps/National Geographic Society flight that lifted off from a South Dakota Black Hills natural geological depression known as the "Stratobowl." The two Air Corps officers, riding in a sealed metal gondola—much like a future space capsule—with a virtual laboratory full of scientific monitoring equipment, traveled to a record altitude of 72,395 feet.27 Little did they know it at the time, but the data they collected while aloft would be put to good use decades later by human factors scientists in the piloted space program. This included information about cosmic rays, the distribution of ozone in the upper atmosphere, and the spectra and brightness of sun and sky, as well as the chemical composition, electrical conductivity, and living spore content of the air at that altitude.28 Although the U.S. Army Air Corps and Navy conducted the bulk of the human factors research during this interwar period of the 1920s and 1930s, another important contributor was the National Advisory Committee for Aeronautics (NACA). Established in 1915, the NACA was actively engaged in a variety of aeronautical research for more than 40 years. Starting only with a minuscule $5,000 budget and an ambitious mission to "direct and conduct research and experimentation in aeronautics, with a view to their practical solution,"29 the NACA became one of this country's leading aeronautical research agencies and remained so up until its replacement in 1958 by the newly established space agency NASA. The work that the NACA accomplished during this era in design engineering and life-support systems, in cooperation with the U.S. military and other agencies and institutions, contributed greatly to information and technology that would become vital to the piloted space program, still decades—and another World War—in the future.30

World War II and the Birth of Human Factors Engineering

During World War II, human factors was pushed into even greater prominence as a science. During this wartime period of rapidly advancing military technology, greater demands were being placed on the users of this technology. Success or failure depended on such factors as the operators' attention span, hand-eye coordination, situational awareness, and decision-making skills. These demands made it increasingly challenging for operators of the latest military hardware—aircraft, tanks, ships, and other complex military machinery—to operate their equipment safely and efficiently.31 Thus, the need for greater consideration of human factors issues in technological design became more obvious than ever before; as a consequence, the discipline of human engineering emerged.32 This branch of human factors research is involved with finding ways of designing "machines, operations, and work environments so that they match human capacities and limitations." Or, to put it another way, it is the "engineering of machines for human use and the engineering of human tasks for operating machines."33 During World War II, no area of military technology had a more critical need for both human factors and human engineering considerations than did aviation.34 Many of the biomedical problems afflicting airmen in the First World War had by this time been addressed, but new challenges had appeared. Most noticeable were the increased physiological strains for air crewmen who were now flying faster, higher, for longer periods of time, and—because of wartime demands—more aggressively than ever before. High-performance World War II aircraft were capable of cruising several times faster than they were in the previous war and were routinely approaching the speed of sound in steep dives. Because of these higher speeds, they were also exerting more than enough gravitational g forces during turns and pullouts to render pilots almost instantly unconscious. In addition, some of these advanced

aircraft could climb high into the stratosphere to altitudes exceeding 40,000 feet and were capable of more hours of flight-time endurance than their human operators possessed. Because of this phenomenal increase in aircraft technology, human factors research focused heavily on addressing the problems of high-performance flight.35 The other aspect of the human factors challenge coming into play involved human engineering concerns. Aircraft of this era were exhibiting a rapidly escalating degree of complexity that made flying them—particularly under combat conditions—nearly overwhelming. Because of this combination of challenges to the mortals charged with operating these aircraft, human engineering became an increasingly vital aspect of aircraft design.36 During these wartime years, high-performance military aircraft were still crashing at an alarmingly high rate, in spite of rigorous pilot training programs and structurally well-designed aircraft. It was eventually accepted that not all of these accidents could be adequately explained by the standard default excuse of "pilot error." Instead, it became apparent that many of these crashes were more a result of "designer error" than operator error.37 Military aircraft designers had to do more to help the humans charged with operating these complex, high-performance aircraft. Thus, not only was there a need during these war years for greater human safety and life support in the increasingly hostile environment aloft, but the crews also needed better-designed cockpits to help them perform the complex tasks necessary to carry out their missions and safely return.38 In earlier aircraft of this era, design and placement of controls and gauges tended to be purely engineer-driven; that is, they were constructed to be as light as possible and located wherever designers could most conveniently place them, using the shortest connections and simplest attachments. Because the needs of the users were not always taken into account, cockpit designs tended not to be as user-friendly as they should have been. This also meant that there was no attempt to standardize

the cockpit layout between different types of aircraft. This contributed to longer and more difficult transitions to new aircraft with different instrument and control arrangements. This disregard for human needs in cockpit design resulted in decreased aircrew efficiency and performance, greater fatigue, and, ultimately, more mistakes.39 An example of this lack of human consideration in cockpit design was one that existed in an early model Boeing B-17 bomber. In this aircraft, the flap and landing gear handles were similar in appearance and proximity, and therefore easily confused. This unfortunate arrangement had already inducted several pilots into the dreaded "gear-up club," when, after landing, they inadvertently retracted the landing gear instead of the intended flaps. To address this problem, a young Air Corps physiologist and Yale psychology Ph.D. named Alphonse Chapanis proved that the incidence of such pilot errors could be greatly reduced by more logical control design and placement. His ingeniously simple solution of moving the controls apart from one another and attaching different shapes to the various handles allowed pilots to determine by touch alone which control to activate. This fix—though not exactly rocket science—was all that was needed to end a dangerous and costly problem.40 As a result of a host of human-operator problems, such as those described above, wartime aircraft design engineers began routinely working with industrial and engineering psychologists and flight surgeons to optimize human utilization of this technology. Thus was born in aviation the concept of human factors in engineering design, a discipline that would become increasingly crucial in the decades to come.41

The Jet Age: Man Reaches the Edge of Space

By the end of the Second World War, aviation was already well into the jet age, and man was flying yet higher and faster in his quest for space. During the years after the end of the war, human factors research continued to evolve in support of this movement. A multiplicity of human and animal studies were conducted during this period by military, civilian, and Government researchers to learn more about such problems as acceleration and deceleration, emergency egress from high-speed jet aircraft, explosive decompression, pressurization of suits and cockpits,

39. Wiener and Nagel, Human Factors in Aviation, pp. 7–9.
40. Chapanis, The Chapanis Chronicles, pp. 15–16.
41. Engle and Lott, Man in Flight, p. 79.


and the biological effects of various types of cosmic rays. In addition, a significant amount of work concentrated on instrument design and cockpit display.42 During the years leading up to America's space program, humans were already operating at the edge of space. This was made possible in large part by the cutting-edge performance of the NACA–NASA high-speed, high-altitude rocket "X-planes"—progressing from the Bell X-1, in which Chuck Yeager became the first person to officially break the sound barrier, on October 14, 1947, to the phenomenal hypersonic X-15 rocket plane, which introduced man to true space flight.43 These unique experimental rocket-propelled aircraft, developed and flown from 1946 through 1968, were instrumental in helping scientists understand how best to sustain human life during high-speed, high-altitude flight.44 One of the more important human factors developments employed in the first of this series, the Bell X-1 rocket plane, was the T-1 partial pressure suit designed by Dr. James Henry of the University of Southern California and produced by the David Clark Company.45 This suit proved its worth during an August 25, 1949, test flight, when X-1 pilot Maj. Frank K. "Pete" Everest lost cabin pressure at an altitude of more than 65,000 feet. His pressure suit automatically inflated, and though it constricted him almost to the point of incapacitation, it nevertheless kept him alive until he could descend. He thus became the first pilot saved by the emergency use of a pressure suit.46 During the 1950s and 1960s, the NACA and NASA tested several additional experimental rocket planes after the X-1 series; however, the most famous and accomplished of these by far was the North American X-15. During the 199 flights this phenomenal rocket plane made from 1959 to 1968, it carried its pilots to unprecedented hypersonic speeds of

nearly 7 times the speed of sound (4,520 mph) and as high as 67 miles above the Earth.47 The wealth of information these flights continued to produce, nearly right up until the first piloted Moon flight, enabled technology vital to the success of the NASA piloted space program. One of the X-15 program's more important challenges was how to keep its pilots alive and functioning in a craft traveling through space at hypersonic speeds. The solution was the development of a full-pressure suit capable of sustaining its occupant in the vacuum of space yet allowing him sufficient mobility to perform his duties. This innovation was an absolute must before human space flight could occur. The MC-2 full-pressure suit provided by the David Clark Co. met these requirements, and more.48 The suit in its later forms, the A/P-22S-2 and A/P-22S-6, not only provided life-sustaining atmospheric pressure, breathable oxygen, temperature control, and ventilation, but also a parachute harness, communications system, electrical leads for physiological monitoring, and an antifogging system for the visor. Even with all these features, the pilot still had enough mobility to function inside the aircraft. By combining the properties of this pressure suit with those of the X-15 ejection seat, the pilot at least had a chance for emergency escape from the aircraft. This suit was so successful that it was also adapted for use in high-altitude military aircraft, and it served as the template for the suit developed by B.F. Goodrich for the Mercury and Gemini piloted space programs.49

Through this information, researchers were able to better understand human adaptation to hypersonic, high-altitude flight.50 The many lessons learned from these high-performance rocket planes were invaluable in transforming space flight into reality. From a human factors standpoint, these flights provided the necessary testbed for ushering humans into the deadly environment of high-altitude, high-speed flight—and ultimately, into space.
Another hazardous type of human research activity that contributed to piloted space operations was the series of U.S. military piloted high-altitude balloon flights of the 1950s and 1960s. Most significant among these were the U.S. Navy Strato-Lab flights and the Air Force Manhigh and Excelsior programs.51

The information these flights provided paved the way for the design of space capsules and astronaut pressure suits, and they gathered important biomedical and astronomical data. The Excelsior program, in particular, studied the problem of emergency egress high in the stratosphere. During the flight of August 16, 1960, Air Force pilot Joseph Kittinger, Jr., ascended in Excelsior III to an altitude of 102,800 feet before parachuting to Earth. During this highest-ever jump, Kittinger went into freefall for a record 4 minutes 36 seconds and attained a record speed for a falling human body outside of an aircraft of 614 mph.52 Although, thankfully, no astronaut has had to repeat this performance, Kittinger showed how it could be done.
Yet another human research contribution from this period that proved to be of great value to the piloted space program was the series of impact deceleration tests conducted by U.S. Air Force physician Lt. Col. John P. Stapp. Strapped to a rocket-propelled research sled on a 3,500-foot track at Holloman Air Force Base (AFB), NM, Stapp made 29 sled rides between 1947 and 1954. During these, he attained speeds of up to 632 mph, making him—at least in the eyes of the press—the fastest man on Earth, and he withstood impact deceleration forces as high as 46 times the force of gravity. To say this work was hazardous would be an understatement. While conducting this research, Stapp suffered broken bones, concussions, bruises, retinal hemorrhages, and even temporary blindness. But the knowledge he gained about the effects of acceleration and deceleration forces was invaluable in delineating the human limitations astronauts would face while exiting and reentering the Earth's atmosphere.53
All of these flying and research endeavors involved great danger for the humans directly involved in them. Injuries and fatalities did occur, but such was the dedication of pioneers such as Stapp and the pilots of these trailblazing aircraft. The knowledge they gained by putting their lives on the line—knowledge that could have been acquired in no other way—would be essential to the establishment of the piloted space program, looming just over the horizon.

NASA Arrives: Taking Human Factors Research to the Next Level
It is therefore abundantly evident that when the NACA handed over the keys of its research facilities to NASA on October 1, 1958, the Nation's new space agency began operations with a large database of information relating to the human factors and human engineering aspects of piloted flight. But though this mass of accumulated knowledge and technology was of inestimable value, the prospect of taking man to the next level, into the great unknown of outer space, was a different proposition from any ever before tackled by aviation research.54 No one had yet comprehensively dealt with such human challenges as the effects of long-term weightlessness, exposure to ionizing radiation and extreme temperature changes, maintaining life in the vacuum of space, or withstanding the prolonged impact deceleration forces encountered by humans violently reentering the Earth's atmosphere.55
NASA began operations in 1958 with a final parting report from the NACA's Special Committee on Space Technology. This report recommended several technical areas in which NASA should proceed with its human factors research. These included acceleration, high-intensity radiation in space, cosmic radiation, ionization effects, human information processing and communication, displays, closed-cycle living, space capsules, and crew selection and training.56 The Committee's Working Group on Human Factors and Training further suggested that all experimentation consider crew selection, survival, safety, and efficiency.57 With that, America's new space agency had its marching orders. It proceeded to assemble "the largest group of technicians and greatest body of knowledge ever used to define man's performance on the ground and in space environments."58
Thus, from NASA's earliest days, it has pioneered the way in human-centered aerospace research and technology. And also from its beginning—and extending to the present—it has shared the benefits of this research with the rest of the world, including the same industry that contributed so much to NASA during its earliest days—aeronautics. This 50-year storehouse of knowledge produced by NASA human factors research has been shared with all areas of the aviation community—both the Department of Defense (DOD) and all realms of civil aviation, including the Federal Aviation Administration (FAA), the National Transportation Safety Board (NTSB), the airlines, general aviation, aircraft manufacturing companies, and producers of aviation-related hardware and software.

Bioastronautics, Bioengineering, and Some Hard-Learned Lessons
Over the past 50 years, NASA has indeed encountered many complex human factors issues. Each of these had to be resolved to make possible the space agency's many phenomenal accomplishments. Its initial goal of putting a man into space was quickly accomplished by 1961. But in the years to come, NASA progressed beyond that at warp speed—at least technologically speaking.59 By 1973, it had put men into orbit around the Earth; sent them outside the relative safety of their orbiting craft to "walk" in space, with only their pressurized suit to protect them; sent them around the far side of the Moon and back; placed them into an orbiting space station, where they would live, function, and perform complex scientific experiments in weightlessness for months at a time; and, certainly most significantly, accomplished mankind's greatest technological feat by landing humans on the surface of the Moon—not just once, but six times—and bringing them all safely back home to Mother Earth.60 NASA's magnificent accomplishments in its piloted space program during the 1960s and 1970s—nearly unfathomable only a few years before—thus occurred in large part as a result of years of dedicated human factors research.
In the early years of the piloted space program, researchers from the NASA Environmental Physiology Branch focused on the biodynamics—or more accurately, the bioastronautics—of man in space. This discipline, which studies the biological and medical effects of space flight on man, evaluated such problems as noise, vibration, acceleration and deceleration, weightlessness, radiation, and the physiology, behavioral aspects, and performance of astronauts operating under confined and often stressful conditions.61 These researchers thus focused on providing life support and ensuring the best possible medical selection and maintenance of the humans who were to fly into space.

Mercury astronauts experiencing weightlessness in a C-131 aircraft flying a “zero-g” trajectory. This was just one of many aspects of piloted space flight that had never before been addressed. NASA.

Also essential for this work to progress was the further development of the technology of biomedical telemetry. This involved monitoring and transmitting a multitude of vital signs from an astronaut in space on a real-time basis to medical personnel on the ground. The comprehensive data collected included such information as body temperature, heart rate and rhythm, blood and pulse pressure, blood oxygen content, respiratory and gastrointestinal functions, muscle size and activity, urinary functions, and varying types of central nervous system activity.62 Although much work had already been done in this field, particularly in the X-15 program, NASA further perfected it during the Mercury program, when the need to carefully monitor the physiological condition of astronauts in space became critical.63 62. Engle and Lott, Man in Flight, p. 180. 63. Stillwell, X-15 Research Results, p. 89; Project Mercury Summary, U.S. Manned Spacecraft Center, Houston, TX (Washington, DC: NASA, 1963), pp. 203–207.
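Downlinked vital signs of this sort lend themselves to simple automated range checks on the ground. The following is a purely illustrative sketch; the parameters and limits shown are invented for the example, not actual Mercury flight rules.

```python
from dataclasses import dataclass

# Illustrative only: a downlinked vital-signs frame and a ground-side
# range check. Parameter names and limits are invented, not Mercury values.
@dataclass
class TelemetryFrame:
    elapsed_s: float
    heart_rate_bpm: float
    body_temp_c: float
    suit_pressure_psi: float

LIMITS = {
    "heart_rate_bpm": (40.0, 160.0),
    "body_temp_c": (35.0, 39.0),
    "suit_pressure_psi": (3.5, 5.5),
}

def out_of_limits(frame: TelemetryFrame) -> list[str]:
    """Return the names of parameters outside their assumed bands."""
    return [name for name, (lo, hi) in LIMITS.items()
            if not lo <= getattr(frame, name) <= hi]

frame = TelemetryFrame(elapsed_s=305.0, heart_rate_bpm=172.0,
                       body_temp_c=37.1, suit_pressure_psi=5.0)
print(out_of_limits(frame))  # ['heart_rate_bpm']
```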


Finally, this early era of NASA human factors research included an emphasis on the bioengineering aspects of piloted space flight: the application of engineering principles to satisfy the physiological requirements of humans in space. This included the design and application of life-sustaining equipment to maintain atmospheric pressure, oxygen, and temperature; provide food and water; eliminate metabolic waste products; ensure proper restraint; and combat the many other stresses and hazards of space flight. This research also included finding the most expeditious way of arranging the multitude of dials, switches, knobs, and displays in the spacecraft so that the astronaut could efficiently monitor and operate them.64
In addition to the knowledge gained and applied while planning these early space flights was that gleaned from the flights themselves. The data gained and the lessons learned from each flight were essential to further success, and they were continually factored into future piloted space endeavors. Perhaps even more important, however, was the information gained from the failures of this period. They taught NASA researchers many painful but nonetheless important lessons about the cost of neglecting human factors considerations. Perhaps the most glaring example was the Apollo 1 fire of January 27, 1967, that killed NASA astronauts Virgil "Gus" Grissom, Roger Chaffee, and Edward White. While the men were sealed in their capsule conducting a launch pad test of the Apollo/Saturn space vehicle that was to be used for the first flight, a flash fire occurred. That such a fire could have happened in such a controlled environment was hard to explain, but the fact that no effective means had been provided for the astronauts' rescue or escape in such an emergency was inexplicable.65 This tragedy did, however, serve some purpose; it gave impetus to tangible safety and engineering improvements, including the creation of an escape hatch that astronauts could open quickly to egress during an emergency.66 Perhaps more importantly, this tragedy caused NASA to step back and reevaluate all of its safety and human engineering procedures.

A New Direction for NASA's Human Factors Research
By the end of the Apollo program, NASA, though still focused on the many initiatives of its space ventures, began to look in a new direction for its research activities. The impetus for this came from a 1968 Senate Committee on Aeronautical and Space Sciences report recommending that NASA and the recently created Department of Transportation jointly determine which areas of civil aviation might benefit from further research.67 A subsequent study prompted the President's Office of Science and Technology to direct NASA to begin similar research. The resulting Terminal Configured Vehicle program led to a new focus in NASA human factors research. This included the all-important interface not only between the pilot and airplane, but also between the pilot and the air traffic controller.68

. . . to provide improvements in the airborne systems (avionics and air vehicle) and operational flight procedures for reducing approach and landing accidents, reducing weather minima, increasing air traffic controller productivity and airport and airway capacity, saving fuel by more efficient terminal area operations, and reducing noise by operational procedures.69
With this directive, NASA's human factors scientists were now officially involved with far more than "just" a piloted space program; they would now have to extend their efforts into the expansive world of aviation. With these new aviation-oriented research responsibilities, NASA's human factors programs would continue to evolve and increase in complexity throughout the remaining decades of the 20th century and into the present one. This advancement was inevitable, given the growing technology, especially in the realm of computer science and complex computer-managed systems, as well as the changing space and aeronautical needs that arose throughout this period.
During NASA's first three decades, more and more of the increasingly complex aerospace operating systems it was developing for its space initiatives and the aviation industry were composed of multiple subsystems. For this reason, the need arose for a human systems integration (HSI) plan to help maximize their efficiency. HSI is a multidisciplinary approach that stresses human factors considerations, along with other such issues as health, safety, training, and manpower, in the early design of fully integrated systems.70
To better address the human factors research needs of the aviation community, NASA formed the Flight Management and Human Factors Division at Ames Research Center, Moffett Field, CA.71 Its name was later changed to the Human Factors Research & Technology Division; today, it is known as the Human Systems Integration Division (HSID).72

For the past three decades, this division and its precursors have sponsored and participated in most of NASA's human factors research affecting both aviation and space flight. HSID describes its goal as "safe, efficient, and cost-effective operations, maintenance, and training, both in space, in flight, and on the ground," in order to "advance human-centered design and operations of complex aerospace systems through analysis, experimentation and modeling of human performance and human-automation interaction to make dramatic improvements in safety, efficiency and mission success."73 To accomplish this goal, the division, in its own words,
• Studies how humans process information, make decisions, and collaborate with human and machine systems.
• Develops human-centered automation and interfaces, decision support tools, training, and team and organizational practices.
• Develops tools, technologies, and countermeasures for safe and effective space operations.74


More specifically, the Human Systems Integration Division focuses on the following three areas:

• Human performance: This research strives to better define how people react and adapt to various types of technology and the differing environments to which they are exposed. By analyzing such human reactions as visual, auditory, and tactile senses; eye movement; fatigue; attention; motor control; and such perceptual cognitive processes as memory, it is possible to better predict and ultimately improve human performance.

• Technology interface design: This directly affects human performance, so technology design that is patterned to efficient human use is of utmost importance. Given the complexity and magnitude of modern pilot/aircrew cockpit responsibilities—in commercial, private, and military aircraft, as well as space vehicles—it is essential to simplify and maximize the efficiency of these tasks. Only with cockpit instruments and controls that are easy to operate can human safety and efficiency be maximized. Interface design might include, for example, the development of cockpit instrumentation displays and their arrangement, using a graphical user interface.
• Human-computer interaction: This studies the "processes, dialogues, and actions" a person uses to interact with a computer in all types of environments. This interaction allows the user to communicate with the computer by inputting instructions and then receiving responses back from the computer via such mechanisms as conventional monitor displays or head-mounted displays that allow the user to interact with a virtual environment. This interface must be properly adapted to the individual user, task, and environment.75

Some of the more important research challenges HSID is addressing and will continue to address are proactive risk management, human performance in virtual environments, distributed air traffic management, computational models of human-automation interaction, cognitive models of complex performance, and human performance in complex operations.76
Over the years, NASA's human factors research has covered an almost unbelievably wide array of topics. This work has involved—and benefitted—nearly every aspect of the aviation world, including the FAA, DOD, the airline industry, general aviation, and a multitude of nonaviation areas. To get some idea of the scope of the research with which NASA has been involved, one need only search the NASA Technical Report Server using the term "human factors," which produces more than 3,600 records.77 75. NASA Human Systems Integration Division Web site. 76. "Human Systems Integration Division Overview," NASA Human Systems Integration Division Fact Sheet.


A full-scale aircraft drop test being conducted at the 240-foot-high NASA Langley Impact Dynamics Research Facility. The gantry previously served as the Lunar Landing Research Facility. NASA.

It follows that no single paper or document—and this case study is no exception—could ever comprehensively describe NASA's human factors research. It is possible, however, to get some idea of the impact that NASA human factors research has had on aviation safety and technology by reviewing some of the major programs that have driven the Agency's human factors research over the past decades.

NASA's Human Factors Initiatives: A Boon to Aviation Safety
No aspect of NASA's human factors research has been of greater importance than that dealing with improving the safety of the humans who occupy all types of aircraft—both as operators and as passengers. NASA human factors scientists have over the past several decades joined forces with the FAA, DOD, and nearly all members of the aviation industry to make flying safer for all parties. To understand the scope of the work that has helped accomplish this goal, one should review some of the major safety-oriented human factors programs in which NASA has participated.

A full-scale aircraft drop test being conducted at the Langley Impact Dynamics Research Facility. These NASA–FAA tests helped develop technology to improve crashworthiness and passenger survivability in general-aviation aircraft. NASA.

Landing Impact and Aircraft Crashworthiness/Survivability Research
Among NASA's earliest research conducted primarily in the interest of aviation safety was its Aircraft Crash Test program. Aircraft crash survivability has been a serious concern almost since the beginning of flight. On September 17, 1908, U.S. Army Lt. Thomas E. Selfridge became powered aviation's first fatality after the aircraft in which he was a passenger crashed at Fort Myer, VA. His pilot, Orville Wright, survived the crash.78 Since then, untold thousands of humans have perished in aviation accidents. To address this grim aspect of flight, NASA Langley Research Center began in the early 1970s to investigate ways to increase the human survivability of aircraft crashes. This important series of studies has been instrumental in the development of important safety improvements in commercial, general-aviation, and military aircraft, as well as NASA space vehicles.79

These unique experiments involved dropping various types and components of aircraft from a 240-foot-high gantry structure at NASA Langley. This towering structure had been built in the 1960s as the Lunar Landing Research Facility to provide a realistic setting for Apollo astronauts to train for lunar landings. At the end of the Apollo program in 1972, the gantry was converted for use as a full-scale crash test facility. The goal was to learn more about the effects of crash impact on aircraft structures and their occupants, and to evaluate seat and restraint systems. At this time, the gantry was renamed the Impact Dynamics Research Facility (IDRF).80 This aircraft test site was the only testing facility in the country capable of slinging a full-scale aircraft into the ground, similar to the way it would impact during a real crash. To add to the realism, many of the aircraft dropped during these tests carried instrumented anthropomorphic test dummies to simulate passengers and crew. The gantry was able to support aircraft weighing up to 30,000 pounds and drop them from as high as 200 feet above the ground. Each crash was recorded and evaluated using both external and internal cameras, as well as an array of onboard scientific instrumentation.81
Since 1974, NASA has conducted crash tests on a variety of aircraft, including high- and low-wing, single- and twin-engine general-aviation aircraft and fuselage sections, military rotorcraft, and a variety of other aviation and space components. During the 30-year period after the first full-scale crash test in February 1974, this system was employed to conduct 41 crash/impact tests on full-sized general-aviation aircraft and 11 full-scale rotorcraft tests. It also provided for 48 Wire Strike Protection System (WSPS) Army helicopter qualification tests, 3 Boeing 707 fuselage section vertical drop tests, and at least 60 drop tests of the F-111 crew escape module.82 The massive amount of data collected in these tests has been used to determine what types of crashes are survivable. More specifically, this information has been used to establish guidelines for aircraft seat design that are still used by the FAA as its standard for certification. It has also contributed to new technologies, such as energy-absorbing seats, and to improving the impact characteristics of new advanced composite materials, cabin floors, engine support fittings, and other aircraft components and equipment.83 Indeed, much of today's aircraft safety technology can trace its roots to NASA's pioneering landing impact research.
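The survivability criteria distilled from such tests rest on quantities like the peak deceleration sustained by the cabin floor or a dummy's pelvis. As a purely illustrative sketch (the sample rate, pulse shape, and 30 g figure are invented, not NASA test values), an accelerometer trace might be reduced to that number as follows:

```python
import numpy as np

# Illustrative only: reduce a crash-test accelerometer trace to its peak
# deceleration in g, the basic quantity behind survivability criteria.
G = 9.81                # m/s^2
SAMPLE_RATE = 10_000    # samples per second (assumed)

def peak_deceleration_g(accel_ms2: np.ndarray) -> float:
    """Return the largest acceleration magnitude, expressed in g."""
    return float(np.max(np.abs(accel_ms2)) / G)

# Hypothetical 50 ms impact pulse: a half-sine deceleration spike.
t = np.linspace(0.0, 0.05, int(0.05 * SAMPLE_RATE))
pulse = -30 * G * np.sin(np.pi * t / 0.05)   # ~30 g peak (invented)
print(f"peak deceleration: {peak_deceleration_g(pulse):.1f} g")
```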

Full-Scale Transport Controlled Impact Demonstration
This dramatic and elaborate crash test program of the early 1980s was one of the most ambitious and well-publicized experiments that NASA has conducted in its decades-long quest for increased aviation safety. In this 1980–1984 study, the NASA Dryden and Langley Research Centers joined with the FAA to quantitatively assess airline crashes. To do this, they set out to intentionally crash a remotely controlled Boeing 720 airliner into the ground. The objective was not simply to crash the airliner, but rather to achieve an "impact-survivable" crash, in which many passengers might be expected to survive.84 This type of crash would allow a more meaningful evaluation of both the existing and experimental cabin safety features being observed. Much of the information used to determine just what was "impact-survivable" came from Boeing 707 fuselage drop tests conducted previously at Langley's Impact Dynamics Research Facility and a similar but complete aircraft drop conducted by the FAA.85
The FAA's primary interest in the Controlled Impact Demonstration (CID, also sometimes jokingly referred to as "Crash in the Desert") was to test an anti-misting kerosene (AMK) fuel additive called FM-9. This high-molecular-weight polymer, when combined with Jet-A fuel, had shown promise during simulated impact tests in inhibiting the spontaneous combustion of fuel spilling from ruptured fuel tanks. The possible benefits of this test were highly significant: if the fireball that usually follows an aircraft crash could be eliminated or diminished, countless lives might be saved. The FAA was also interested, secondarily, in testing new safety-related design features. NASA's main interest in this study, on the other hand, was to measure airframe structural loads and collect crash dynamics data.86

A remotely controlled Boeing 720 airliner explodes in flame on December 1, 1984, during the Controlled Impact Demonstration. Although the test sank hopes for a new anti-misting kerosene fuel, other information from the test helped increase airline safety. NASA.

With these objectives in mind, researchers from the two agencies filled the seats of the "doomed" passenger jet with anthropomorphic dummies instrumented to measure the transmission of impact loads. They also fitted the airliner with additional crash-survivability testing equipment, such as burn-resistant windows, fireproof cabin materials, experimental seat designs, flight data recorders, and galley and stowage-bin attachments.87 The series of tests included 15 remote-controlled flights, the first 14 of which included safety pilots onboard. The final flight took place on the morning of December 1, 1984. It started at Edwards AFB, CA, and ended with the intentional crash of the four-engine jet airliner onto the bed of Rogers Dry Lake. The designated target was a set of eight steel posts, or cutters, cemented into the lakebed to ensure that the jet's fuel tanks ruptured. During this flight, NASA Dryden's Remotely Controlled Vehicle Facility research pilot, Fitzhugh Fulton, controlled the aircraft from the ground.88
The crash was accomplished more or less as planned. As expected, the fuel tanks, containing 76,000 pounds of the anti-misting kerosene jet fuel, were successfully ruptured; unfortunately, the unexpectedly spectacular fireball that ensued—and that took an hour to extinguish—was a major disappointment to the FAA. 87. Fasanella, et al., "Impact Data from a Transport Aircraft During a Controlled Impact Demonstration." 88. Ibid.

Because of the dramatic failure of the anti-misting fuel, the FAA was forced to curtail its plan to require the use of this additive in airliners.89 In most other ways, however, the CID was a success. Of utmost importance were the lessons learned about crash survivability. New safety initiatives had been tested under realistic conditions, and the effects of a catastrophic crash on simulated humans were filmed inside the aircraft by multiple cameras and later visualized at the crash site. Analysis of these data showed, among many other things, that in a burning airliner, seat cushions with fire-blocking layers were indeed superior to conventional cushions. This finding resulted in FAA-mandated flammability standards requiring these safer seat cushions.90 Another important safety finding that the crash-test data revealed was that the airliner's adhesive-fastened tritium aisle lights, which would be of utmost importance during postcrash emergency egress, became dislodged and nonfunctional during the crash. 89. Ibid. 90. "Full-Scale Transport Controlled Impact Demonstration Program: Final Summary Report," NASA TM-89642 (Sept. 1987), p. 33.


As a result, the FAA mandated that these lights be mechanically fastened, to maximize their time of usefulness after a crash.91 These and other lessons from this unique research project have made commercial travel safer.


Aviation Safety Reporting System
NASA initiated and implemented this important human-based safety program in 1976 at the request of the FAA. Its importance can best be judged by the fact that it is still in full operation—funded by the FAA and managed by NASA. The Aviation Safety Reporting System (ASRS) collects information voluntarily and confidentially submitted by pilots, controllers, and other aviation professionals. This information is used to identify deficiencies in the National Aviation System (NAS), including some attributable to the human participants themselves. The ASRS analyzes these data and refers them in the form of an "alerting message" to the appropriate agencies so that problems can be corrected. To date, nearly 5,000 alert messages have been issued.92 The ASRS also educates through its operational issues bulletins, its newsletter CALLBACK, and its journal ASRS Directline, as well as through the more than 60 research studies it has published.93 The massive database that the ASRS maintains benefits not only NASA and the FAA, but also other agencies worldwide involved in the study and promotion of flight safety. Perhaps most importantly, this system serves to foster further aviation human factors safety research designed to prevent aviation accidents.94
After more than 30 years in operation, the ASRS has been an unqualified success. During this period, pilots, air traffic controllers, and others have provided more than 800,000 reports.95 The many types of ASRS responses to the data it has collected have triggered a variety of safety-oriented actions, including modifications to the Federal Aviation Regulations.96 91. Ibid., p. 39. 92. "ASRS Program Briefing," via personal communication with Linda Connell, ASRS Program Director, Sept. 25, 2009. 93. Corrie, "The US Aviation Safety Reporting System," pp. 1–7; "ASRS Program Briefing," via personal communication with Connell. 94. Ibid. 95. Amy Pritchett, "Aviation Safety Program," Integrated Intelligent Flight Deck Technologies presentation dated June 17, 2008, http://www.jpdo.gov/library/20080618AllHands/04_20080618_Amy_Pritchett.pdf, accessed Oct. 7, 2009; "ASRS Program Briefing." 96. Wiener and Nagel, Human Factors in Aviation, pp. 268–269.


It is impossible to quantify the number of lives saved by this important long-running human-based program, but there is little dispute that its wide-ranging effect on the spectrum of flight safety has benefitted all areas of aviation.

Fatigue Countermeasures Program
NASA Ames Research Center began the Fatigue Countermeasures program in the 1980s in response to a congressional request to determine whether there existed a safety problem "due to transmeridian flying and a potential problem due to fatigue in association with various factors found in air transport operations."97 Originally termed the NASA Ames Fatigue/Jet Lag program, this ongoing program, jointly funded by the FAA, was created to study such issues as fatigue, sleep, flight operations performance, and the biological clock—otherwise known as circadian rhythms. This research was focused on (1) determining the level of fatigue, sleep loss, and circadian rhythm disruption that exists during flight operations, (2) finding out how these factors affect crew performance, and (3) developing ways to counteract these factors to improve crew alertness and proficiency. Many of the findings from this series of field studies, which included such fatigue countermeasures as regular flightcrew naps, breaks, and better scheduling practices, were subsequently adopted by the airlines and the military.98 This research also resulted in Federal Aviation Regulations, still in effect, that specify the amount of rest flightcrews must have during a 24-hour period.99
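Scheduling findings of this kind translate naturally into machine-checkable rest rules. The sketch below is a minimal illustration; the nine-hour figure is an assumed placeholder rather than the actual regulatory minimum, which varies with the operation and schedule.

```python
from datetime import datetime, timedelta

# Minimal sketch of a rest-rule check of the kind the regulations imply.
# The 9-hour minimum is an assumed placeholder, not the actual FAR figure.
MIN_REST = timedelta(hours=9)

def rest_rule_satisfied(duty_periods: list[tuple[datetime, datetime]]) -> bool:
    """True if every gap between consecutive duty periods meets MIN_REST."""
    periods = sorted(duty_periods)
    return all(nxt_start - prev_end >= MIN_REST
               for (_, prev_end), (nxt_start, _) in zip(periods, periods[1:]))

d = datetime(2009, 10, 7)  # arbitrary example date
schedule = [(d.replace(hour=6), d.replace(hour=14)),
            (d.replace(hour=21), d + timedelta(days=1, hours=5))]
print(rest_rule_satisfied(schedule))   # 7-hour gap -> False
```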

Crew Factors and Resource Management Program
After a series of airline accidents in the 1970s involving aircraft with no apparent mechanical problems, findings were presented at a 1979 NASA workshop indicating that most aviation accidents were indeed caused by human error, rather than by mechanical malfunctions or weather. Specifically, communication, leadership, and decision-making failures within the cockpit were causing accidents.100 The concept of Cockpit Resource Management (now often referred to as Crew Resource Management, or CRM) was thus introduced. It describes the process of helping aircrews reduce errors in the cockpit by improving crew coordination and better utilizing all available resources on the flight deck, including information, equipment, and people.101 Such training has been shown to improve the performance of aircrew members and thus increase efficiency and safety.102 It is considered so successful in reducing accidents caused by human error that the aviation industry has almost universally adopted CRM training. Such training is now considered mandatory not only by NASA, but also by the FAA, the airlines, the military, and even a variety of nonaviation fields, such as medicine and emergency services.103 Most recently, measures have been taken to further expand mandatory CRM training to all U.S. Federal Aviation Regulations Part 135 operators, including commuter aircraft. Also included is Single-Pilot Resource Management (SRM) training for on-demand pilots who fly without additional crewmembers.104

Presently, the NASA Ames Human Systems Integration Division's Flight Cognition Laboratory is involved with evaluating the thought processes that determine the behavior of aircrews, controllers, and others involved with flight operations. Among the areas under study are prospective memory, concurrent task management, stress, and visual search. As always, the Agency actively shares this information with other governmental and nongovernmental aviation organizations, with the goal of increasing flight safety.105

Workload, Strategic Behavior, and Decision-Making
It is well known that more than half of aircraft incidents and accidents have occurred because of human error. These errors resulted from such factors as flightcrew distractions, interruptions, lapses of attention, and work overload.106 For this reason, NASA researchers have long been interested in characterizing errors made by pilots and other crewmembers while performing the many concurrent flight deck tasks required during normal flight operations. Its Attention Management in the Cockpit program analyzes accident and incident reports, as well as questionnaires completed by experienced pilots, to set up appropriate laboratory experiments to examine the problem of concurrent task management and to develop methods and training programs to reduce errors. This research will help design simulated but realistic training scenarios, assist flightcrew members in understanding their susceptibility to errors caused by lapses in attention, and create ways to help them manage heavy workload demands. The intended result is increased flight safety.107
Likewise, safety in the air can be compromised by errors in judgment and decision making. To tackle this problem, NASA Ames Research Center joined with the University of Oregon to study how decisions are made and to develop techniques to decrease the likelihood of bad decision making.108

Similarly, mission success has been shown to depend on the degree of cooperation between crewmembers. NASA research specifically studied such factors as building trust, sharing information, and managing resources in stressful situations. The findings of this research will be used as the basis for training crews to manage interpersonal problems on long missions.109
It can therefore be seen that NASA has played a primary role in developing many of the human factors models in use relating to aircrew efficiency and mental well-being. These models and the training programs that incorporate them have helped both military and civilian flightcrew members improve their management of resources in the cockpit and make better individual and team decisions in the air. This knowledge has also helped more clearly define and minimize the negative effects of crew fatigue and excessive workload demands in the cockpit. Further, NASA has played a key role in assisting both the aviation industry and DOD in setting up many of the training programs that are utilizing this new technology to improve flight safety.

technology industries teamed up to develop and evaluate such a system, TCAS I, which later evolved into the current TCAS II. From 1988 to 1992, NASA Ames Research Center played a pivotal role in this major collaborative effort by evaluating the human performance factors that came into play with the use of TCAS. By employing ground-based simulators operated by actual airline flightcrews, NASA showed that this system was practicable, at least from a human factors standpoint.111 The crews were found to be able to use the system accurately. This research also led to improved displays and aircrew training procedures, as well as the validation of a set of pilot collision-evading performance parameters.112 One example of the new technologies developed for incorporation into the TCAS system is the Advanced Air Traffic Management Display. This innovative system provides pilots with a three-dimensional air traffic virtual-visualization display that increases their situational awareness while decreasing their workload.113 This visualization system has been incorporated into TCAS system displays and has become the industry standard for new designs.114 111. S.L. Chappell, C.E. Billings, B.C. Scott, R.J. Tuttell, M.C. Olsen, and T.E. Kozon, "Pilots' Use of a Traffic Alert and Collision-Avoidance System (TCAS II) in Simulated Air Carrier Operations," vol. 1: "Methodology, Summary and Conclusions," NASA TM-100094 (Moffett Field, CA: NASA Ames Research Center). 112. B. Grandchamp, W.D. Burnside, and R.G. Rojas, "A Study of the TCAS II Collision Avoidance System Mounted on a Boeing 737 Aircraft," NASA CR-182457 (1988); R.G. Rojas, P. Law, and W.D. Burnside, "Simulation of an Enhanced TCAS II System in Operation," NASA CR-181545 (1988); K.S. Sampath, R.G. Rojas, and W.D. Burnside, "Modeling and Performance Analysis of Four and Eight Element TCAS," NASA CR-187414 (1991). 113. Durand R. Begault and Marc T. Pittman, "3-D Audio Versus Head Down TCAS Displays," NASA CR-177636 (1994). 114. Durand R. Begault, "Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation," Human Factors, vol. 35, no. 4 (1993), pp. 707–717.
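Underlying the crew displays evaluated in this work is comparatively simple alerting arithmetic. A minimal sketch of the range-over-closure-rate "tau" test that underlies TCAS-style advisories follows; the 35-second threshold is an assumed placeholder, and the operational TCAS II logic is far more elaborate.

```python
# Illustrative sketch of the "tau" time-to-closest-approach test behind
# TCAS-style alerting. The threshold is an assumed placeholder value.
TAU_THRESHOLD_S = 35.0

def tau_seconds(range_nmi: float, closure_rate_kt: float) -> float:
    """Projected time to closest approach: range divided by closure rate."""
    if closure_rate_kt <= 0:            # diverging traffic never triggers
        return float("inf")
    return range_nmi / closure_rate_kt * 3600.0   # hours -> seconds

def advisory(range_nmi: float, closure_rate_kt: float) -> bool:
    """True when projected closure time falls below the alert threshold."""
    return tau_seconds(range_nmi, closure_rate_kt) < TAU_THRESHOLD_S

# Example: traffic 2 nmi away, closing at 400 kt -> tau = 18 s -> advisory.
print(advisory(2.0, 400.0))
```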

Automation Design
Automation technology is an important factor in helping aircrew members to perform more wide-ranging and complicated cockpit activities. NASA engineers and psychologists have long been actively engaged in developing automated cockpit displays and other technologies.115 115. Allen C. Cogley, "Automation of Closed Environments in Space for Human Comfort and Safety: Report for Academic Year 1989–1990," Kansas State University College of Engineering, NASA CR-186834 (1990); John P. Dwyer, "Crew Aiding and Automation: A System Concept for Terminal Area Operations and Guidelines for Automation Design," NASA CR-4631 (1995); Yvette J. Tenney, William H. Rogers, and Richard W. Pew, "Pilot Opinions on High Level Flight Deck Automation Issues: Toward the Development of a Design Philosophy," NASA CR-4669 (1995).

These will be essential if pilots are to operate safely and effectively within a new air traffic system being developed by NASA and others, called Free Flight. This system will use technically advanced aircraft computer systems to reduce the need for air traffic controllers and allow pilots to choose their path and speed, while the computers ensure proper aircraft separation. It is anticipated that Free Flight will, in the upcoming decades, become incorporated into the Next Generation Air Transportation System.116


NASA Aviation Safety & Security Program
As is apparent from the foregoing discussions, a recurring theme in NASA's human factors research has been its dedication to improving aviation safety. The Agency's many human factors research initiatives have contributed to such safety issues as crash survival, weather knowledge and information, improved cockpit systems and displays, security, management of air traffic, and aircraft control.117 Though NASA's involvement with aviation safety has been an important focus of its research activities since its earliest days, this involvement was formalized in 1997. In response to a report by the White House Commission on Aviation Safety and Security, NASA created its Aviation Safety Program (AvSP).118 As NASA's primary safety program, AvSP dedicated itself and $500 million to researching and developing technologies that would reduce the fatal aircraft accident rate 80 percent by 2007.119 In pursuit of this goal, NASA researchers at Langley, Ames, Dryden, and Glenn Research Centers teamed with the FAA, DOD, the aviation industry, and various aviation employee groups—including the Air Line Pilots Association (ALPA), Allied Pilots Association (APA), Air Transport Association (ATA), and National Air Traffic Controllers Association (NATCA)—to form the Commercial Aviation Safety Team (CAST) in 1998. 116. Robert Jacobsen, "NASA's Free Flight Air Traffic Management Research," NASA Free Flight/DAGATM Workshop, 2000, http://www.asc.nasa.gov/aatt/wspdfs/Jacobsen_Overview.pdf, accessed Oct. 7, 2009. 117. "NASA's Aviation Safety Accomplishments," NASA Fact Sheet; Chambers, Concept to Reality: Contributions of the NASA Langley Research Center to U.S. Civil Aircraft of the 1990s. 118. Al Gore, White House Commission on Aviation Safety and Security: Final Report to President Clinton (Washington, DC: Executive Office of the President, Feb. 12, 1997). 119. "NASA Aviation Safety Program," NASA Facts Online, FS-2000-02-47-LaRC, http://oea.larc.nasa.gov/PAIS/AvSP-factsheet.html, accessed Oct. 7, 2009; Chambers, Innovation in Flight: Research of the NASA Langley Research Center on Revolutionary Advanced Concepts for Aeronautics, NASA SP-2005-4539 (2005), p. 97.


The purpose of this all-inclusive consortium was to develop an integrated, data-driven strategy to make commercial aviation safer.120 As highlighted by the White House Commission report, statistics had shown that the overwhelming majority of aviation accidents and fatalities in previous years had been caused by human error—specifically, loss of control in flight and so-called controlled flight into terrain (CFIT).121 NASA—along with the FAA, DOD, the aviation industry, and human factors experts—had previously formed a National Aviation Human Factors Plan to develop strategies to decrease these human-caused mishaps.122 Consequently, NASA joined with the FAA and DOD to further develop a human performance research plan, based on the NASA–FAA publication Toward a Safer 21st Century—Aviation Safety Research Baseline and Future Challenges.123 The new AvSP thus incorporated many of the existing human factors initiatives, such as crew fatigue, resource management, and training. Human factors concerns were also emphasized by the program's focus on developing more sophisticated human-assisting aviation technology. To accomplish its goals, AvSP focused not only on preventing accidents, but also on minimizing injuries and loss of life when they did occur. The program also emphasized collection of data to find and address problems. The comprehensive nature of AvSP is beyond the scope of this case study, but some aspects of the program (which, in 2005, became the Aviation Safety & Security Program, or AvSSP) with the greatest human factors implications include accident mitigation, synthetic vision systems, system wide accident prevention, and aviation system monitoring and modeling.124

• Accident mitigation: The goal of this research is to find ways to make accidents more survivable to aircraft occupants. This includes a range of activities, some of which have been discussed, including impact tests, in-flight and postimpact fire prevention studies, improved restraint systems, and the creation of airframes better able to withstand crashes.
• Synthetic vision systems: Unrestricted vision is vital for a pilot's situational awareness and essential for controlling the aircraft safely. Limited visibility contributes to more fatal air accidents than any other single factor; since 1990, more than 1,750 deaths have been attributed to CFIT—crashing into the ground—not to mention numerous runway incursion accidents that have taken even more lives.125 The traditional approach to this problem has been the development of sensor-based enhanced vision systems to improve pilot awareness. In 2000, however, NASA Langley researchers initiated a different approach. They began developing cockpit displays, termed Synthetic Vision Systems, which incorporate such technologies as the Global Positioning System (GPS) and photo-realistic terrain databases to allow pilots to "see" a synthetically derived 3-D digital reproduction of what is outside the cockpit, regardless of the meteorological visibility. Even in zero visibility, these systems allow pilots to synthetically visualize runways and ground obstacles in their path. At the same time, this reduces their workload and decreases the disorientation they experience during low-visibility flying. Such systems would be useful in avoiding CFIT crashes, loss of aircraft control, and approach and landing errors that can occur amid low visibility.126 Such technology could also be of use in decreasing the risk of runway incursions. For example, the Taxiway Navigation and Situation Awareness System (T-NASA) was developed to help pilots taxiing in conditions of decreased visibility to "see" what is in front of them. This system allows them to visualize the runway by presenting a head-up display (HUD) of a computer-generated representation of the taxi route ahead.127 One of the most important synthetic vision systems initiatives arose from the Advanced General Aviation Transport Experiments (AGATE) program, which NASA formed in the mid-1990s to help revitalize the lagging general-aviation industry. NASA joined with the FAA and some 80 industry members, in part to develop an affordable Highway in the Sky (HITS) cockpit display that would enhance safety and pilot situational awareness. In 2000, such a system was installed and demonstrated in a small production aircraft.128 Today, nearly every aviation manufacturer has a Synthetic Vision System either in use or in the planning stages.129
• System wide accident prevention: This research, which focuses on the human causes of accidents, is involved with improving the training of aviation professionals and with developing models that would help predict human error before it occurs. Many of the programs addressing this issue were discussed earlier in greater detail.130
• Aviation system monitoring and modeling (ASMM) project: This program, which existed from 1999 to 2005, helped personnel in the aviation industry to preemptively identify aviation system risk. This included using data collection and improved monitoring of equipment to predict problems before they occur.131 One important element of the ASMM project is the Aviation Performance Measuring System (APMS).132 In 1995, NASA and the FAA coordinated with the airlines to develop this program, which utilizes large amounts of information taken from flight data recorders to improve flight safety. The techniques developed are designed to use the data collected to formulate a situational awareness feedback process that improves flight performance and safety (a schematic sketch of this kind of routine data scan follows this list).133 Yet another spinoff of ASMM is the National Aviation Operational Monitoring Service (NAOMS). This systemwide survey mechanism serves to quantitatively assess the safety of the National Airspace System and evaluate the effects of technologies and procedures introduced into the system. It uses input from pilots, controllers, mechanics, technicians, and flight attendants. NAOMS therefore serves to assess flight safety risks and the effectiveness of initiatives to decrease these risks.134 APMS impacts air carrier operations by making routine monitoring of flight data possible, which in turn allows evaluators to identify risks and develop changes that improve the quality and safety of air operations.135 A similar program originating from ASMM is the Performance Data Analysis and Reporting System (PDARS). This joint FAA–NASA initiative provides a way to monitor daily operations in the NAS and to evaluate the effectiveness of air traffic control (ATC) services. This innovative system, which provides daily analysis of huge volumes of real-time information, including radar flight tracks, has been instituted throughout the continental U.S.136
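At its core, APMS-style routine monitoring reduces to scanning recorded flight parameters for exceedances. The following minimal sketch illustrates the idea; the parameter names and limits are invented for the example, not actual APMS values.

```python
# Minimal sketch of exceedance scanning over flight-data-recorder samples.
# Parameter names and limit bands are invented for illustration.
LIMITS = {
    "vertical_accel_g": (0.5, 2.0),    # assumed acceptable band
    "airspeed_kt": (0.0, 350.0),
}

def find_exceedances(samples: list[dict]) -> list[tuple[int, str, float]]:
    """Return (sample_index, parameter, value) for every out-of-band value."""
    hits = []
    for i, sample in enumerate(samples):
        for name, (lo, hi) in LIMITS.items():
            value = sample.get(name)
            if value is not None and not lo <= value <= hi:
                hits.append((i, name, value))
    return hits

flight = [{"vertical_accel_g": 1.1, "airspeed_kt": 250.0},
          {"vertical_accel_g": 2.4, "airspeed_kt": 360.0}]
print(find_exceedances(flight))   # flags both parameters in sample 1
```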

The highly successful AvSP ended in 2005, when it became the Aviation Safety & Security Program. AvSSP exceeded its target goal of reducing aircraft fatalities 80 percent by 2007. In 2008, NASA shared with the other members of CAST the prestigious Robert J. Collier Trophy for its role in helping produce "the safest commercial aviation system in the world."137 AvSSP continues to move forward with its goal of identifying and developing by 2016 "tools, methods, and technologies for improving overall aircraft safety of new and legacy vehicles operating in the Next Generation Air Transportation System."138 NASA estimates that the combined efforts of the ongoing safety-oriented programs it has initiated or in which it has participated will decrease general-aviation fatalities by as much as another 90 percent from today's levels over the next 10–15 years.139

Taking Human Factors Technology into the 21st Century
From the foregoing, it is clear that NASA's human factors research has over the past decades specifically focused on aviation safety. This work, however, has also maintained an equally strong focus on improving the human-machine interface for aviation professionals, both in the air and on the ground. NASA has accomplished this through its many highly developed programs that have emphasized human-centered considerations in the design and engineering of increasingly complex flight systems. These human factors considerations in systems design and integration have directly translated to increased human performance and efficiency and, indirectly, to greater flight safety. The scope of these contributions is best illustrated by briefly discussing a representative sampling of NASA programs that have benefitted aviation in various ways, including the Man-Machine Integration Design and Analysis System (MIDAS), Controller-Pilot Data Link Communications (CPDLC), NASA's High-Speed Research (HSR) program, the Advanced Air Transportation Technologies (AATT) program, and the Agency's Vision Science and Technology effort.



Man-Machine Integration Design and Analysis System
NASA jointly initiated this research program in 1980 with the U.S. Army, San Jose State University, and Sterling Software/QSS/Perot Systems, Inc. This ongoing, workstation-based simulation system, which was designed to further develop human performance modeling, links a "virtual human" of a certain physical anthropometric description to a cognitive (visual, auditory, and memory) structure that is representative of human abilities and limitations. MIDAS then uses these human performance models to assess a system's procedures, displays, and controls. Using these models, procedural and equipment problems can be identified and human-system performance measures established before more expensive testing using human subjects.140 The aim of MIDAS is to "reduce design cycle time, support quantitative predictions of human-system effectiveness, and improve the design of crew stations and their associated operating procedures."141 These models thus demonstrate the behavior that might be expected of human operators working with a given automated system without the risk and cost of subjecting humans to those conditions. An important aspect of MIDAS is that it can be applied to any human-machine domain once adapted to the particular requirements of that system. It has in fact been employed in the development of such varied functions as establishing baseline performance measures for U.S. Army crews flying Longbow Apache helicopters with and without chemical warfare gear, evaluating crew performance/workload issues for steep noise abatement approaches into a vertiport, developing an advanced

Controller-Pilot Data Link Communications
Research for this program, conducted by NASA's Advanced Transport Operating System (ATOPS), was initiated in the early 1980s to improve the quality of communication between aircrew and air traffic control personnel.143 With increased aircraft congestion, radio frequency overload had become a potential safety issue. With so many pilots trying to communicate with ATC at the same time on the same radio frequency, the potential for miscommunication, errors, and even missed transmissions had become increasingly great. One solution to this problem was a two-way data link system. This allows communications between aircrew and controllers to be displayed on computer screens both in the cockpit and at the controller's station on the ground, where they can be read, verified, and stored for future reference. Additionally, flightcrew personnel flying in remote locations, well out of radio range, can communicate in real time with ground personnel via computers linked to a satellite network. The system also allows such enhanced capabilities as the transfer of weather data, charts, and other important information to aircraft flying at nearly any location in the world.144 Yet another aspect of this system allows computers in aircraft and on the ground to "talk" to one another directly. Controllers can thus arrange closer spacing and more direct routing for incoming and outgoing aircraft. This important feature has been calculated to save an estimated 3,000–6,000 pounds of fuel and up to 8 minutes of flight time on a typical transpacific flight.145 Digitized voice communications have even been added to decrease the amount of aircrew "head-down" time spent reading messages on the screen. This system has gained support from both pilots and the FAA, especially after NASA investigations showed that it decreased communication errors, aircrew workload, and the need to repeat ATC messages.146 142. Sandra G. Hart, Brian F. Gore, and Peter A. Jarvis, "The Man-Machine Integration Design & Analysis System (MIDAS): Recent Improvements," NASA Ames Research Center, http://humansystems.arc.nasa.gov/groups/midas/documents/MIDAS(HFS%2010-04).ppt, accessed Oct. 7, 2009; Kevin Corker and Christian Neukom, "Man-Machine Integrated Design and Analysis System (MIDAS): Functional Overview," Ames Research Center (Dec. 1998). 143. Marvin C. Waller and Gary W. Lohr, "A Piloted Simulation Study of Data Link ATC Message Exchange," NASA TP-2859 (1989); Charles E. Knox and Charles H. Scanlon, "Flight Tests with a Data Link Used for Air Traffic Control Information Exchange," NASA TP-3135 (1991). 144. Lane E. Wallace, Airborne Trailblazer, ch. 7-3, "Data Link," NASA SP-4216 (Washington, DC: 1994). 145. Ibid.
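The essence of such a data link is a small set of typed, logged, and explicitly acknowledged messages. The schematic sketch below illustrates the idea; the field names and message text are invented, not the actual CPDLC message set.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Schematic sketch of a data-link-style exchange: a typed uplink that the
# crew must explicitly acknowledge, with every message logged for later
# review. Field names and message content are invented for illustration.
@dataclass
class DataLinkMessage:
    sender: str
    recipient: str
    text: str
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    acknowledged: bool = False

log: list[DataLinkMessage] = []

def uplink(sender: str, recipient: str, text: str) -> DataLinkMessage:
    msg = DataLinkMessage(sender, recipient, text)
    log.append(msg)            # stored for future reference
    return msg

clearance = uplink("ATC", "NASA515", "CLIMB TO AND MAINTAIN FL350")
clearance.acknowledged = True  # crew acknowledges; both ends see the state
```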


NASA’s Future Flight Central, which opened at NASA Ames Research Center in 1999, was the first full-scale virtual control tower. Such synthetic vision systems can be used by both aircraft and controllers to visualize clearly what is taking place around them in any conditions. NASA.

High-Speed Research Program
NASA and a group of U.S. aerospace corporations began research for this ambitious program in 1990. Their goal was to develop a jet capable of transporting up to 300 passengers at more than twice the speed of sound. An important human factors–related spinoff of the so-called High-Speed Civil Transport (HSCT) was an External Visibility System, which replaced forward cockpit windows with displays of video imagery overlaid with computer-generated graphics. This system would have allowed better performance and safety than unaided human vision while eliminating the need for the "droop nose" that the supersonic Concorde required for low-speed operations.

Although this program was phased out in fiscal year (FY) 1999 for budgetary reasons, the successful vision technology it produced was handed over to the previously discussed Synthetic Vision Systems element of AvSP–AvSSP for further development.147

Advanced Air Transportation Technologies Program
NASA established this project in 1996 to increase the capability of the Nation's air transport activities. This program's specific goal was to develop a set of "decision support tools" that would help air traffic service providers, aircrew members, and airline operations centers streamline gate-to-gate operations throughout the NAS.148 Project personnel were tasked with researching and developing advanced concepts within the air traffic management system to the point where the FAA and the air transport industry could develop a preproduction prototype.

The program ended in 2004, but implementation of these tools into the NAS addressed such air traffic management challenges as complex airspace operations and assigning air and ground responsibilities for aircraft separation. Several of the technologies developed by this program received "Turning Goals into Reality" awards, and some of these—for example, the traffic management adviser and the collaborative arrival planner—are in use by ATC and the airlines.149

Vision Science and Technology
Scientists at NASA Ames Research Center have for many years been heavily involved with conducting research on visual technology for humans. The major areas explored include vision science, image compression, imaging and displays, and visual human factors. 149. Advanced Air Transportation Technologies (AATT) project, NASA Web site, http://www.nasa.gov/centers/ames/research/lifeonearth/lifeonearth-aatt.html, accessed Oct. 7, 2009; "Advanced Air Transportation Technologies Overview," http://www.asc.nasa.gov/aatt/overview.html, accessed Oct. 7, 2009.

compression, imaging and displays, and visual human factors. Specific projects have investigated such issues as eye-tracking accuracy, image enhancement, metrics for measuring image quality, and methods to measure and improve the visibility of in-flight and air traffic control monitor displays.150 The information gained from this and other NASA-conducted research has played an important role in the development of such innovative human-assisting technologies as virtual reality goggles, helmet-mounted displays, and so-called glass cockpits.151 The latter concept, which NASA pioneered in the 1970s, refers to the replacement of conventional analog cockpit dials and gauges with a system of cathode ray tube (CRT) or liquid crystal display (LCD) flat panels that present the same information in a more readable and usable form.152 Conventional instruments can be difficult to read and monitor accurately, and they are capable of providing only one level of information. Computerized "glass" instrumentation, on the other hand, can display both numerical and graphic color-coded readouts in 3-D format; furthermore, because each display can present several layers of information, fewer are needed. This provides the pilot with larger and more readable displays. This technology, which is now used in nearly all airliners and business jets and in an increasing number of general-aviation aircraft, has improved flight safety and aircrew efficiency by decreasing workload, fatigue, and instrument interpretation errors.153 A related vision technology that NASA researchers helped develop is the head-up display.154 This transparent display allows a pilot to view flight data while looking outside the aircraft. It is especially useful during approaches for landing, when the pilot's attention needs to be focused on events outside the cockpit. The concept was originally developed for the Space Shuttle and military aircraft but has since been

150. "NASA Vision Group," NASA Ames Research Center, http://vision.arc.nasa.gov/publications.php, accessed Oct. 7, 2009.
151. Andries van Dam, "Three Dimensional User Interfaces for Immersive Virtual Reality: Final Report," NASA CR-204997 (1997); Joseph W. Clark, "Integrated Helmet Mounted Display Concepts for Air Combat," NASA CR-198207 (1995); Earl L. Wiener, "Human Factors of Advanced Technology ('Glass Cockpit') Transport Aircraft," NASA CR-177528 (1989).
152. Ibid.
153. Wallace, Airborne Trailblazer.
154. Richard L. Newman, Head-up Displays: Designing the Way Ahead (Brookfield, VT: Ashgate, 1995).

adapted to commercial and civil aircraft, air traffic control towers, and even automobiles.155

Into the Future
The preceding discussion can serve only as a brief introduction to NASA's massive research contribution to aviation in the realm of human factors. It should, however, have made the following point clear: since its creation in 1958, NASA has been an equal and fully contributing partner with the aeronautical industry in sharing the new technology and information produced by their respective human factors research activities. Because aerospace is but an extension of aeronautics, it is difficult to envision how NASA could have put its first human into space without the knowledge and technology provided by the aeronautical human factors research and development of the decades leading up to the establishment of NASA and its piloted space program. In return, today's high-tech aviation industry is immeasurably more advanced than it would have been without the past half century of dedicated scientific human factors research conducted and shared by the various components of NASA. Without the thousands of NASA human factors–related research initiatives during this period, many—if not most—of the technologies that are a normal part of today's flight, air traffic control, and aircraft maintenance operations would not exist. The high cost, high risk, and lack of tangible cost-effectiveness of the research and development behind these advances made such work too expensive and speculative for commercial concerns forced to abide by "bottom-line" considerations. As a result of NASA research and the many safety programs and technological innovations it has sponsored for the benefit of all, countless lives and dollars have been saved through accidents prevented and efficiencies gained. It is clear that NASA will remain in the business of improving aviation safety and technology for the long haul. NASA's Aeronautics Research Mission Directorate (ARMD), one of the Agency's four major directorates, will continue improving the safety and efficiency of aviation
with its aviation safety, fundamental aeronautics, airspace systems, and aeronautics test programs. Needless to say, a major aspect of these programs will involve human factors research as it pertains to aeronautics.156 It is impossible to predict precisely in which direction NASA's human factors research will go in the decades to come; however, based on the Agency's remarkable 50-year history, it seems safe to assume it will continue to contribute to an ever-safer and more efficient world of aviation.

Hovering flight test of a free-flight model of the Hawker P.1127 V/STOL fighter underway in the return passage of the Full-Scale Tunnel. Flying-model demonstrations of the ease of transition to and from forward flight were key in obtaining the British government’s support. NASA.

CASE 5
Dynamically Scaled Free-Flight Models
Joseph R. Chambers

The earliest flying machines were small models and concept demonstrators, and they dramatically influenced the invention of flight. Since the invention of the airplane, free-flight atmospheric model testing—and tests of "flying" models in wind tunnel and ground research facilities—has been a means of undertaking flight research critical to ensuring that designs meet mission objectives. Much of this testing has helped identify problems and solutions while reducing risk.

ON A HOT, MUGGY DAY IN SUMMER 1959, Joe Walker, the crusty old head of the wind tunnel technicians at the legendary NASA Langley Full-Scale Tunnel, couldn't believe what he saw in the test section of his beloved wind tunnel. Less than two decades earlier, Walker had led his technician staff during wind tunnel test operations of some of the most famous U.S. aircraft of World War II in its gigantic 30- by 60-foot test section. With names like Buffalo, Airacobra, Warhawk, Lightning, Mustang, Wildcat, Hellcat, Avenger, Thunderbolt, Helldiver, and Corsair, the test subjects were big, powerful fighters that carried the day for the United States and its allies during the war. Early versions of these aircraft had been flown to Langley Field and installed in the tunnel for exhaustive studies of how to improve their aerodynamic performance, engine cooling, and stability and control characteristics. On this day, however, Walker was witnessing a type of test that would markedly change the research agenda at the Full-Scale Tunnel for many years to come. With the creation of the new National Aeronautics and Space Administration (NASA) in 1958 and its focus on human space flight, massive transfers of the old tunnel's National Advisory Committee for Aeronautics (NACA) personnel to new space flight priorities, such as Project Mercury, at other facilities had resulted in significant reductions in the tunnel's staff, test schedule, and workload. The situation had not, however, gone unnoticed by a group of brilliant engineers who had pioneered the use of remotely controlled free-flying model airplanes for
predictions of the flying behavior of full-scale aircraft using a unique testing technique that had been developed and applied in a much smaller tunnel known as the Langley 12-Foot Free-Flight Tunnel. The engineers' activities would benefit tremendously from the gigantic test section of the Full-Scale Tunnel, which offered far more flying space and allowed a significant increase in the size of models used in their experiments. In view of the operational changes occurring at the tunnel, they began a strong advocacy to move their free-flight studies to the larger facility. Langley's management decided to transfer the free-flight model testing to the Full-Scale Tunnel in 1959, and model flight-testing was soon underway. Joe Walker was observing a critical NASA free-flight model test that had been requested under joint sponsorship between NASA, industry, and the Department of Defense (DOD) to determine the flying characteristics of a 7-foot-long model of the North American X-15 research aircraft. As Walker watched the model maneuvering across the test section, he lamented the radical change of test subjects in the tunnel with several profanities and a proclamation that the testing had "gone from big-iron hardware to a bunch of damn butterflies."1 What Walker didn't appreciate was that the revolutionary efforts of the NACA and NASA to develop tools, facilities, and testing techniques based on the use of subscale flying models were rapidly maturing and being sought by military and civil aircraft designers—not only in the Full-Scale Tunnel, but in several other unique NASA testing facilities. For over 80 years, thousands of flight tests of "butterflies" in NACA and NASA wind tunnel facilities and outdoor test ranges have contributed valuable predictions, data, and risk reduction for the Nation's high-priority aircraft programs, space flight vehicles, and instrumented planetary probes. Free-flight models have been used in studies as far ranging as aerodynamic drag reduction, loads caused by atmospheric gusts and landing impacts, ditching, aeroelasticity and flutter, and dynamic stability and control. The models have been flown at conditions ranging from hovering flight to hypersonic speeds. Even a brief description of the wide variety of free-flight model applications is far beyond the intent of this essay; therefore, the following discussion is limited to activities in flight dynamics, which
includes dynamic stability and control, flight at high angles of attack, spin entry, and spinning.

Birthing the Testing Techniques
The development and use of free-flying model techniques within the NACA originated in the 1920s at the Langley Memorial Aeronautical Laboratory at Hampton, VA. The early efforts had been stimulated by concerns over a critical lack of understanding and design criteria for methods to improve aircraft spin behavior.2 Although early aviation pioneers had frequently used flying models to demonstrate concepts for flying machines, many of the applications had not adhered to the proper scaling procedures required for realistic simulation of full-scale aircraft motions. The NACA researchers were well aware that certain model features other than geometric shape required application of scaling factors to ensure that the flight motions of the model would replicate those of the aircraft during flight. In particular, the requirements to scale the mass and the distribution of mass within the model were very specific.3 The fundamental theories and derivation of scaling factors for free-flight models are based on the science known as dimensional analysis. Briefly, dynamic free-flight models are constructed so that the linear and angular motions and rates of the model can be readily scaled to full-scale values. For example, a dynamically scaled 1/9-scale model will have a wingspan 1/9 that of the airplane and a weight 1/729 that of the airplane. More important, the scaled model will exhibit angular velocities three times faster than those of the airplane, creating a potential challenge for a remotely located human pilot to control its rapid motions.
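The factors in the 1/9-scale example follow directly from Froude-number scaling, and a short sketch makes the relations explicit. The snippet below is illustrative only (it is not NACA or NASA code), and it assumes the model flies at the same air density as the full-scale airplane.

```python
# Minimal sketch of dynamic (Froude) scaling relations for a free-flight
# model; n is the model-to-airplane length ratio. Illustrative only.

def froude_scale_factors(n: float) -> dict:
    """Model-to-airplane ratios implied by dynamic scaling at equal air density."""
    return {
        "length": n,                # wingspan, fuselage length
        "weight": n ** 3,           # mass scales with volume
        "inertia": n ** 5,          # moment of inertia ~ mass * length^2
        "velocity": n ** 0.5,       # equal Froude number V^2 / (g * L)
        "time": n ** 0.5,           # events unfold faster on the model
        "angular_rate": n ** -0.5,  # model rotates faster by 1 / sqrt(n)
    }

f = froude_scale_factors(1.0 / 9.0)
print(f"weight ratio: 1/{round(1 / f['weight'])}")            # 1/729
print(f"angular rates: {f['angular_rate']:.1f}x full scale")  # 3.0x
```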

Initial NACA testing of dynamically scaled models consisted of spin tests of biplane models that were hand-launched by a researcher or catapulted from a platform about 100 feet above the ground in an airship hangar at Langley Field.4 As the unpowered model spun toward the ground, its path was tracked and followed by a pair of researchers holding a retrieval net similar to those used in fire rescues. To an observer, the testing technique contained all the elements of an old silent movie, including the dash for the falling object. The information provided by this free-spin test technique was valuable and provided confidence (or lack thereof) in the ability of the model to predict full-scale behavior, but the briefness of each test and the inevitable delays caused by damage to the model left much to be desired. The free-flight model testing at Langley was accompanied by other forms of analysis, including tests in a 5-foot vertical wind tunnel in which the aerodynamic characteristics of models could be measured during simulated spinning motions while attached to a motor-driven spinning apparatus. The aerodynamic data gathered in the Langley 5-Foot Vertical Tunnel were used for analyses of spin modes, the effects of various airplane components in spins, and the impact of configuration changes. The airstream in this tunnel was directed downward; therefore, free-spinning tests could not be conducted.5 Meanwhile, in England, the Royal Aircraft Establishment (RAE) was aware of the NACA's airship hangar free-spinning technique and had been inspired to explore the use of similar catapulted model spin tests in a large building. The RAE experience led to the same unsatisfactory conclusions and redirected its interest to experiments with a novel 2-foot-diameter vertical free-spinning tunnel. The positive results of tests of very small models (wingspans of a few inches) in the apparatus led the British to construct a 12-foot vertical spin tunnel that became operational in 1932.6 Tests in the facility were conducted with the model launched into a vertically rising airstream, the model's weight being supported by its aerodynamic drag. The model's vertical position in the test section could be reasonably maintained within the view of an observer by precise and rapid control of the tunnel speed, and the resulting test time could be much longer than that obtained with catapulted models. The advantages of this technique were very apparent to the international research community, and the features of the RAE tunnel have influenced the design of all other vertical spin tunnels to this day.

5. C. Wenzinger and T. Harris, “The Vertical Wind Tunnel of the National Advisory Committee for Aeronautics,” NACA TR-387 (1931). The tunnel’s vertical orientation was to minimize cyclical gravitational loads on the spinning model and apparatus as would have occurred in a horizontal tunnel. 6. H.E. Wimperis, “New Methods of Research in Aeronautics,” Journal of the Royal Aeronautical Society (Dec. 1932), p. 985.
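The vertical-tunnel technique rests on a simple equilibrium: the model floats where its weight equals the drag produced by the rising airstream. In standard aerodynamic notation (the symbols here are conventional ones, not drawn from the text), with air density $\rho$, reference area $S$, and spin drag coefficient $C_D$,

$$W = \tfrac{1}{2}\,\rho\,V^{2}\,S\,C_{D} \quad\Longrightarrow\quad V = \sqrt{\frac{2W}{\rho\,S\,C_{D}}},$$

so the operator keeps the model in view by continuously trimming the tunnel speed $V$ about this equilibrium value.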

This cross-sectional view of the Langley 20-Foot Vertical Spin Tunnel shows the closed-return tunnel configuration, the location of the drive fan at the top of the facility, and the locations of safety nets above and below the test section to restrain and retrieve models. NASA.

When the NACA learned of the new British tunnel, Charles H. Zimmerman of the Langley staff led the design of a similar tunnel known as the Langley 15-Foot Free-Spinning Wind Tunnel, which became operational in 1935.7 The use of clockwork delayed-action mechanisms to move the control surfaces of the model during the spin enabled the researchers

7. Zimmerman, "Preliminary Tests in the N.A.C.A. Free-Spinning Wind Tunnel." Zimmerman was a brilliant engineer with a notable career involving the design of dynamic wind tunnels, advanced aircraft configurations, and flying platforms, and he served NASA as a member of aerospace panels.

to evaluate the effectiveness of various combinations of spin recovery techniques. The tunnel was immediately used to accumulate design data for satisfactory spin characteristics, and its workload increased dramatically. Langley replaced its 15-Foot Free-Spinning Wind Tunnel in 1941 with a 20-foot spin tunnel that produced higher test speeds to support scaled models of the heavier aircraft emerging at the time. Control inputs for spin recovery were actuated at the command of a researcher rather than by the preset clockwork mechanisms of the previous tunnel. Copper coils placed around the periphery of the tunnel, when energized, set up a magnetic field that actuated a device in the model to operate its aerodynamic control surfaces.8 The Langley 20-Foot Vertical Spin Tunnel has since continued to serve the Nation as the most active facility for spinning experiments and other studies requiring a vertical airstream. Data acquisition is based on a model space positioning system that uses retro-reflective targets attached to the model to determine its position, with results including spin rate, model attitudes, and control positions.9 The Spin Tunnel has supported the development of nearly all U.S. military fighter and attack aircraft, trainers, and bombers during its 68-year history, with nearly 600 projects conducted for different aerospace configurations to date.

Wind Tunnel Free-Flight Techniques
Charles Zimmerman energetically continued his interest in free-flight models after the successful introduction of his 15-foot free-spinning tunnel. His next ambition was to provide a capability for investigating the dynamic stability and control of aircraft in conventional flight. His approach was to simulate the unpowered gliding flight of a model airplane in still air but to do so in a wind tunnel, with the model within view of the tunnel operators. Without power, the model would be in equilibrium in descending flight, so the tunnel airstream had to be inclined relative to the horizon. Zimmerman designed a 5-foot-diameter wind tunnel that was mounted in a yoke-like support structure such that the tunnel could be pivoted and its airstream could

The Langley 5-Foot Free-Flight Tunnel was mounted in a yoke assembly that permitted the test section to be tilted down for simulation of gliding flight. Its inventor, Charles Zimmerman, is on the left controlling the model, while the tunnel operator is behind the test section. NASA.

simulate various descent angles. Known as the Langley 5-Foot Free-Flight Tunnel, this exploratory apparatus was operated by two researchers—a tunnel operator, who controlled the airspeed and tilt angle of the tunnel, and a pilot, who controlled the model and assessed its behavior via a control box with a fine wire connection to the model's control actuators.10 Very positive results obtained in this proof-of-concept apparatus led to the design and construction of the larger 12-Foot Free-Flight Tunnel in 1939. Housed in a 60-foot-diameter sphere that permitted the tunnel to tilt upward and downward, the Langley 12-Foot Free-Flight Tunnel was designed for free-flight testing of powered as well as unpowered models. A three-person crew was used in the testing, including a tunnel airspeed controller, a tunnel tilt-angle operator, and an evaluation pilot. The tunnel operated as the premier NACA low-speed free-flight facility for over 20 years, supporting advances in fundamental dynamic

10. Joseph R. Chambers and Mark A. Chambers, Radical Wings and Wind Tunnels (Specialty Press, 2008). Zimmerman was a very proficient model pilot and flew most of the tests in the apparatus.

Test setup for free-flight studies at Langley. The pitch pilot is in a balcony at the side of the test section. The pilot who controls the rolling and yawing motions is at the rear of the tunnel. NASA.

stability and control theory as well as specific airplane development programs. After the 1959 decision to transfer the free-flight activities to the Full-Scale Tunnel, the tunnel pivot was fixed in a horizontal position, and the facility has continued to operate as a NASA low-cost laboratory-type tunnel for exploratory testing of advanced concepts. Relocation of the free-flight testing to the Full-Scale Tunnel made that tunnel the focal point of free-flight applications at Langley for the next 50 years.11 The move required updates to the test technique and the free-flight models. The test crew increased to four or more individuals responsible for piloting duties, thrust control, tunnel operations, and model retrieval, located at two sites within the wind tunnel building. One group of researchers was in a balcony at one side of the open-throat test section, while a pilot who controlled the rolling and yawing motions of the model was in an enclosure at the rear of the test section within the structure of the tunnel exit-flow collector. Models of jet aircraft were typically powered by compressed air, and the level of

11. John P. Campbell, Jr., was head of the organization at the time of the move. Campbell was one of the youngest research heads ever employed at Langley. In addition to being an expert in flight dynamics, he later became recognized for his expertise in V/STOL aircraft technology.

thrust was controlled by a thrust pilot in the balcony. Next to the thrust pilot was a pitch pilot, who controlled the longitudinal motions of the model and conducted assessments of dynamic longitudinal stability and control during flight tests. Other key members of the test crew in the balcony included the test conductor and the tunnel airspeed operator. A light, flexible cable attached to the model supplied it with compressed air and electric power for control actuators and carried the signals for the controls and sensors within the model. A portion of the cable was steel and passed through a pulley above the test section; it was used to retrieve the model when the test was terminated or when an uncontrollable motion occurred. The flight cable was kept slack during the flight tests by a safety-cable operator in the balcony, who accomplished the job with a high-speed winch.12 Free-flight models in the Full-Scale Tunnel typically had wingspans of about 6 feet and weighed about 100 pounds. Propulsion was provided by compressed air ejectors, miniature turbofans, and high thrust-to-weight propeller motors. The materials used to fabricate the models changed from the simple balsa construction used in the 12-Foot Free-Flight Tunnel to high-strength, lightweight composites. The control systems used by the free-flight models simulated the complex feedback and stabilization logic used in flight control systems for contemporary aircraft. The control signals from the pilot stations were transmitted to a digital computer in the balcony, and a special software program computed the control surface deflections required in response to pilot inputs, sensor feedbacks, and other control system inputs. Typical sensor packages included control-position indicators, linear accelerometers, and angular-rate gyros. Many models used nose-boom–mounted vanes for feedback of angle of attack and angle of sideslip, similar to systems used on full-scale aircraft. Data obtained from the flights included optical and digital recordings of model motions and pilot comments, as well as analyses of the model's response characteristics.
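The control-law computation just described can be suggested with a minimal sketch. The function below is hypothetical: the gains, limits, and signal names are invented for illustration and do not represent any actual Langley control law. It shows only the general pattern, in which the deflection sent to a control surface blends the pilot's command with stabilizing feedback from the rate gyros and flow-angle vanes.

```python
# Hypothetical sketch of a rate- and vane-feedback control law; gains and
# limits are illustrative, not values from any NASA free-flight model.

def elevator_deflection(stick_cmd: float, pitch_rate: float, alpha: float,
                        alpha_ref: float = 0.0, k_q: float = 0.35,
                        k_a: float = 0.10, travel_limit: float = 25.0) -> float:
    """Elevator deflection (deg) from pilot input plus stabilizing feedback."""
    delta = stick_cmd - k_q * pitch_rate - k_a * (alpha - alpha_ref)
    return max(-travel_limit, min(travel_limit, delta))  # actuator travel limit
```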

The NACA and NASA also developed wind tunnel free-flight testing techniques to determine the high-speed aerodynamic characteristics and dynamic stability of aircraft, Earth atmosphere entry configurations, planetary probes, and aerobraking concepts. The NASA Ames Research Center led the development of such facilities, starting in the 1940s with the Ames Supersonic Free-Flight Tunnel (SFFT).13 The SFFT, which was similar in many respects to ballistic range facilities used for testing munitions, was designed for aerodynamic and dynamic stability research at high supersonic Mach numbers (in excess of 10). In the SFFT, the model was fired at high speed upstream into a supersonic airstream (typically Mach 2.0). Windows for shadowgraph photography were located along the top and sides of the test section. Data obtained from motion time histories and measurements of the model's attitudes during the brief flights were used to obtain aerodynamic and dynamic stability characteristics. The small research models had to be extremely strong to withstand high accelerations during the launch (up to 100,000 g's), yet light enough to meet requirements for dynamic mass scaling (moments of inertia). Launching the models without angular disturbances or damage was challenging and required extensive development and experience. The SFFT was completed in late 1949 and became operational in the early 1950s. Ames later brought online its most advanced aeroballistic testing capability, the Ames Hypervelocity Free-Flight Aerodynamic Facility (HFFAF), in 1964. This facility was initially developed in support of the Apollo program and utilized both light-gas gun and shock tube technology to reproduce lunar-return and atmospheric-entry conditions. At one end of the test section, a family of light-gas guns was used to launch specimens into the test section, while at the opposite end, a large shock tube could simultaneously produce a counterflowing airstream (the result being Mach numbers of about 30). This counterflow mode of operation proved to be very challenging and was used for only a brief time, from 1968 to 1971. Throughout much of the 1970s and 1980s, this versatile facility was operated as a traditional aeroballistic range, using the guns to launch models into quiescent air (or some other test gas), or as a hypervelocity impact test facility. From 1989 through 1995, the facility was operated as a shock tube–driven wind tunnel for scramjet propulsion testing. In 1997, the HFFAF underwent a major refurbishment and was returned to an aeroballistic mode of operation. It continues to operate in this mode and is NASA's only remaining aeroballistic test facility.14

13. Alvin Seiff, Carlton S. James, Thomas N. Canning, and Alfred G. Boissevain, "The Ames Supersonic Free-Flight Wind Tunnel," NACA RM-A52A24 (1952).
14. Charles J. Cornelison, "Status Report for the Hypervelocity Free-Flight Aerodynamic Facility," 48th Aero Ballistic Range Association Meeting, Austin, TX, Nov. 1997.

Outdoor Free-Flight Facilities and Test Ranges
Wind tunnel free-flight testing facilities provide unique and very valuable information regarding the flying characteristics of advanced aerospace vehicles. However, they are inherently limited or unsuitable for certain types of investigations in flight dynamics. For example, vehicle motions involving large maneuvers at elevated g's, out-of-control conditions, and poststall gyrations result in significant changes in flight trajectory and altitude, which can only be studied in the expanded spaces provided by outdoor facilities. In addition, critical studies associated with high-speed flight could not be conducted in Langley's low-speed wind tunnels. Outdoor testing of dynamically scaled powered and unpowered free-flight models was therefore developed and applied in many research activities. Although outdoor test techniques are more expensive than wind tunnel free-flight tests, are subject to weather limitations, and have inherently slower turnaround times than tunnel tests, the results obtained are unique and especially valuable for certain types of flight dynamics studies. One of the most important outdoor free-flight test techniques developed by NASA is used in the study of aircraft spin entry motions, which includes investigations of spin resistance, poststall gyrations, and recovery controls. A significant void of information exists between the prestall and stall-departure results produced by the wind tunnel free-flight test technique in the Full-Scale Tunnel discussed earlier and the results of fully developed spin evaluations obtained in the Spin Tunnel. The lack of information in this area can be critically misleading for some aircraft designs. For example, some free-flight models exhibit severe instabilities in pitch, yaw, or roll at stall during wind tunnel free-flight tests, and they may also exhibit potentially dangerous spins from which recovery is impossible during spin tunnel tests. However, a combination of aerodynamic, control, and inertial properties can result in this same configuration exhibiting a high degree of resistance to entering the dangerous spin following a departure, even when the pilot attempts to force a spin entry. On the other hand, some configurations easily enter developed spins despite recovery controls applied by the pilot. To evaluate the resistance of aircraft to spins, in 1950 Langley revisited the catapult techniques of the 1930s and experimented with
an indoor catapult-launching technique.15 Once again, however, the catapult technique proved to be unsatisfactory, and other approaches to studying spin entry were pursued.16 Disappointed by the inherent limitations of the catapult-launched technique, the Langley researchers began to explore the feasibility of an outdoor drop-model technique in which unpowered models would be launched from a helicopter at higher altitudes, permitting more time to study the spin entry and the effects of recovery controls. The technique would use much larger models than those used in the Spin Tunnel, resulting in a desirable increase in the test Reynolds number. After encouraging feasibility experiments were conducted at Langley Air Force Base, a search was conducted to locate a test site for research operations. A suitable low-traffic airport was identified near West Point, VA, about 40 miles from Langley, and research operations began in 1958.17 As testing progressed at West Point, the technique evolved into an operation consisting of launching the unpowered model at an altitude of about 2,000 feet and evaluating its spin resistance with separately located, ground-based pilots who attempted to promote spins by various combinations of control inputs and maneuvers. At the end of a test, an onboard recovery parachute was deployed to recover the model and lower it to a ground landing. This approach proved to be the prototype of the extremely successful drop-model testing technique that was continually updated and applied by NASA for over 50 years. Initially, two separate tracking units consisting of modified power-driven antiaircraft gun trailer mounts were used by two pilots and two tracking operators to track and control the model. One pilot and tracker were to the side of the model's flight path, where they could control the longitudinal motions following launch, while the other pilot and tracker were about 1,000 feet away, behind the model, to control lateral-directional motions. However, as the technique was refined in later

15. Ralph W. Stone, Jr., William G. Garner, and Lawrence J. Gale, "Study of Motion of Model of Personal-Owner or Liaison Airplane Through the Stall and into the Incipient Spin by Means of a Free-Flight Testing Technique," NACA TN-2923 (1953).
16. NASA has, however, used catapulted models for spin entry studies on occasion. See James S. Bowman, Jr., "Spin-Entry Characteristics of a Delta-Wing Airplane as Determined by a Dynamic Model," NASA TN-D-2656 (1965).
17. Charles E. Libby and Sanger M. Burk, Jr., "A Technique Utilizing Free-Flying Radio-Controlled Models to Study the Incipient- and Developed-Spin Characteristics of Airplanes," NASA Memo 2-659L (1959).

F/A-18A drop model mounted on its launch rig on a NASA helicopter in preparation for spin entry investigations at the Langley Plum Tree test site. NASA.

years, both pilots used a single dual gun mount arrangement with a single tracker operator. Researchers continued their search for a test site nearer to Langley, and in 1959, Langley requested and was granted approval by the Air Force to conduct drop tests at the abandoned Plum Tree bombing range near Poquoson, VA, about 5 miles from Langley. The marshy area under consideration had been cleared by the Air Force of depleted bombs and munitions left from the First and Second World War eras. A temporary building and a concrete landing pad for the launch helicopter were added for operations at Plum Tree, and a surge of request jobs for U.S. high-performance military aircraft in the mid- to late 1960s (F-14, F-15, B-1, F/A-18, etc.) brought a flurry of test activities that continued until the early 1990s.18 During operations at Plum Tree, the sophistication of the drop-model technique dramatically increased.19 High-resolution video cameras were
used for tracking the model, and graphic displays were presented at a remote pilot control station, including images of the model in flight and the model's location within the range. A high-resolution video image of the model was centrally located in front of a pilot station within a building. In addition, digital displays of parameters such as angle of attack, angle of sideslip, altitude, yaw rate, and normal acceleration were in the pilot's view. The centerpiece of operational capability was a digital flight control computer programmed with variable research flight control laws, together with a flight operations computer with telemetry downlinks and uplinks within the temporary building. NASA operations at Plum Tree lasted about 30 years and included a broad scope of free-flight model investigations of military aircraft, general aviation aircraft, parawings, gliding parachutes, and reentry vehicles. In the early 1990s, however, several issues regarding environmental protection forced NASA to close its research activities at Plum Tree and remove all its facilities. After considerable searching and consideration of several candidate sites, the NASA Wallops Flight Facility was chosen for Langley's drop-model activities. The last NASA drop-model tests of a military fighter for poststall studies began in 1996 and ended in 2000.20 This project, which evaluated the spin resistance of a 22-percent-scale model of the U.S. Navy F/A-18E Super Hornet, was the final evolution of drop-model technology for Langley. Launched from a helicopter at an altitude of about 15,000 feet in the vicinity of Wallops, the Super Hornet model weighed about 1,000 pounds. Recovery of the model at the end of the flight test was again initiated with the deployment of onboard parachutes. The model used a flotation bag after water impact and was retrieved from the Atlantic Ocean by a recovery boat. Outdoor free-flight model testing also flourished at NASA Dryden Flight Research Center. Dryden's primary advocate and highly successful user of free-flight models for low-speed research on advanced aerospace vehicles was the late Robert Dale Reed. An avid model builder, pilot, and researcher, Reed was inspired by his perceived need for a subscale free-flight demonstrator of an emerging lifting body reentry configuration created by NASA Ames in 1962.21 After initial testing of gliders of the Ames M2-F1 lifting body concept, he progressed into

20. Mark A. Croom, Holly M. Kenney, and Daniel G. Murri, "Research on the F/A-18E/F Using a 22%-Dynamically-Scaled Drop Model," AIAA Paper 2000-3913 (2000).
21. R. Dale Reed, Wingless Flight: The Lifting Body Story, NASA SP-4220 (1997).

Dryden free-flight research models of reentry lifting bodies. Dale Reed, second from left, and his test team pose with the mother ship and models of the M2-F2 and the Hyper III configurations. NASA.

the technique of using radio-controlled model tow planes to tow and release M2-F1 models. In the late 1960s, the launching technique for the unpowered models evolved with a powered radio-controlled mother ship, and by 1968, Reed's mother ship had conducted over 120 launches. Dale Reed's innovative approach of using radio-controlled mother ships to launch drop models of radical configurations has endured to this day as the preferred method for small-scale free-flight activities at Dryden. In the early 1970s, Reed's work at Dryden expanded into a series of flight tests of powered and unpowered remotely piloted research vehicles (RPRVs). These activities, which included remote-control evaluations of subscale and full-scale test subjects, used a ground-based cockpit equipped with flight instruments and sensors typical of a
NASA’s Contributions to Aeronautics

5

full-scale airplane. These projects included the Hyper III lifting body and a three-eighths-scale, dynamically scaled model of the F-15. The technique used for the F-15 model consisted of air launches of the test article from a B-52 and control by a pilot in a ground cockpit outfitted with a sophisticated control system.22 The setup featured a digital uplink capability, a ground computer, a television monitor, and a telemetry system. Initially, the F-15 model was recovered on its parachute in flight by helicopter midair snatch, but in later flights, it was landed on skids by the evaluation pilot. NASA Ames also conducted and sponsored outdoor free-flight powered model testing in the 1970s as a result of interest in the oblique wing concept championed by Robert T. Jones. The progression of sophistication in these studies started with simple unpowered catapult-launched models at Ames, followed by cooperative powered model tests at Dryden in the 1970s and piloted flight tests of the AD-1 oblique wing demonstrator aircraft in the 1980s.23 In the 1990s, Ames and Stanford University collaborated on potential designs for oblique wing supersonic transports, which led to flight tests of two free-flight models by Stanford. Yet another historic high-speed outdoor free-flight facility was spun off from Langley's interests. In 1945, a proposal was made to develop a new NACA high-speed test range known as the Pilotless Aircraft Research Station, which would use rocket-boosted models to explore the transonic and supersonic flight regimes. The facility ultimately became known as the NACA Wallops Island Flight Test Range.24 From 1945 through 1959, Wallops served as a rocket-model "flying wind tunnel" for researchers in Langley's Pilotless Aircraft Research Division (PARD), which conducted vital investigations for the Nation's emerging supersonic aircraft, especially the Century series of advanced fighters in the 1950s. Rocket-boosted models flown at Wallops provided valuable information on aerodynamic drag, dynamic stability, and control effectiveness at transonic conditions.

Applications
Free-flight models are complementary to other tools used in aeronautical engineering. In the absence of adverse scale effects, the aerodynamic characteristics of the models have been found to agree very well with data obtained from other types of wind tunnel tests and theoretical analyses. By providing insight into the impact of aerodynamics on vehicle dynamics, the free-flight results help build the necessary understanding of critical aerodynamic parameters and of the modifications required to resolve problems. The ability to conduct free-flight tests and aerodynamic measurements with the same model is a powerful advantage of the testing technique. When coupled with more sophisticated static wind tunnel tests, computational fluid dynamics methods, and piloted simulator technology, these tests are extremely informative. Finally, the highly visual results of free-flight tests are themselves persuasive, whether they demonstrate to critics and naysayers that radical and unconventional designs can be flown or identify a critical flight problem and potential solutions for a new configuration. The most appropriate applications of free-flight models involve evaluations of unconventional designs for which no experience base exists and the analysis of aircraft behavior at flight conditions that are not easily studied with other methods because of complex aerodynamic phenomena that cannot be modeled at the present time.25 Examples include flight in which separated flows, nonlinear aerodynamic behavior, and large dynamic motions are typically encountered. The following discussion presents a brief overview of the historical applications and technological impacts of the use of free-flight models for studies of flight dynamics by the NACA and NASA in selected areas.

The most important applications have been in
• Dynamic stability and control.
• Flight at high angles of attack.26
• Spinning and spin recovery.
• Spin entry and poststall motions.

25. Campbell, “Free and Semi-Free Model Flight-Testing Techniques Used in Low-Speed Studies of Dynamic Stability and Control,” NATO Advisory Group for Aeronautical Research and Development AGARDograph 76 (1963). 26. This topic is discussed for military applications in another case study in this volume by the same author.

Dynamic Stability: Early Applications and a Lesson Learned
When Langley began operations of its 12-Foot Free-Flight Tunnel in 1939, it placed a high priority on establishing correlation with full-scale flight results. Immediately, requests came from the Army and Navy for correlation of model tests with flight results for the North American BT-9, Brewster XF2A-1, Vought-Sikorsky V-173, Naval Aircraft Factory SBN-1, and Vought-Sikorsky XF4U-1. Meanwhile, the NACA used a powered model of the Curtiss P-36 fighter for an in-house calibration of the free-flight process.27 The results of the P-36 study were, in general, in fair agreement with airplane flight results, but the dynamic longitudinal stability of the model was found to be greater (more damped) than that of the airplane, and the effectiveness of the model's ailerons was less than that of the airplane. Both discrepancies were attributed to aerodynamic deficiencies of the model caused by the low Reynolds number of the tunnel test and led to one of the first significant lessons learned with the free-flight technique. Using the wing airfoil shape (NACA 2210) of the full-scale P-36 for the model resulted in poor wing aerodynamic performance at the low Reynolds number of the model flight tests. The maximum lift of the model and the angle of attack for maximum lift were both decreased because of scale effects. As a result, the stall occurred at a slightly lower angle of attack for the model. After this experience, researchers conducted an exhaustive investigation of other airfoils that might have more satisfactory performance at low Reynolds numbers. In planning for subsequent tests, the researchers were trained to anticipate the potential existence of scale effects for certain airfoils, even at relatively low angles of attack. Consequently, the wing airfoils of free-flight tunnel models were sometimes modified to airfoil shapes that provided better results at low Reynolds number.28
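The premature stall traces directly to Reynolds number, which falls steeply with model scale. Under the Froude-scaling relations noted earlier, velocity scales as $n^{1/2}$ and chord as $n$ for a scale factor $n$, so (a standard result; the specific ratio below is derived arithmetic, not a figure from the text)

$$\frac{Re_\mathrm{model}}{Re_\mathrm{airplane}} = \frac{V_m\,c_m}{V_a\,c_a} = n^{1/2}\cdot n = n^{3/2},$$

which puts a 1/9-scale model at roughly 1/27 of the airplane's Reynolds number at equal kinematic viscosity, well down in the range where an airfoil's maximum lift can deteriorate.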

Progress and Design Data
In the 1920s and 1930s, researchers in several wind tunnel and full-scale aircraft flight groups at Langley conducted analytical and experimental investigations to develop design guidelines to ensure satisfactory stability and control behavior.29 Such studies sought to develop methods to reliably predict the inherent flight characteristics of aircraft as affected by design variables such as the wing dihedral angle, the sizes and locations of the vertical and horizontal tails, wing planform shape, engine power, mass distribution, and control surface geometry. The staff of the Free-Flight Tunnel joined in these efforts with several studies that correlated the qualitative behavior of free-flight models with analytical predictions of dynamic stability and control characteristics. Coupled with the results from other facilities and analytical groups, the free-flight results accelerated the maturation of design tools for future aircraft from a qualitative basis to a quantitative methodology, and many of the methods and design data derived from these studies became classic textbook material.30 By combining free-flight testing with theory, the researchers were able to quantify desirable design features, such as the amount of wing dihedral angle and the relative size of the vertical tail required for satisfactory behavior. With these data in hand, methods were also developed to theoretically solve the dynamic equations of motion of aircraft and determine dynamic stability characteristics such as the frequency of inherent oscillations and the damping of motions following inputs by pilots or turbulence. During the final days of model flight projects in the Free-Flight Tunnel in the mid-1950s, various Langley organizations teamed to quantify the effects of aerodynamic stability parameters on flying characteristics. These efforts included correlation of experimentally determined aerodynamic stability derivatives with theoretical predictions and comparisons of the results of qualitative free-flight tests with theoretical predictions of dynamic stability characteristics. In some cases, rate gyroscopes and servos were used to artificially vary the magnitudes of dynamic aerodynamic stability parameters, such as the yawing moment due to rolling.31 In these studies, the free-flight model results served as a critical test of the validity of the theory.
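Solving the linearized equations of motion for frequency and damping amounts to an eigenvalue problem, which a few lines of code can illustrate. The state matrix below is a made-up example with Dutch-roll-like dynamics (it is not data from any NACA model), but the frequency and damping extraction shown is the standard computation.

```python
# Illustrative sketch: oscillation frequency and damping from linearized
# equations of motion, x' = A x. The matrix entries are invented.
import numpy as np

A = np.array([
    [-0.10,  0.00, -1.00,  0.04],   # sideslip
    [-8.00, -1.20,  0.50,  0.00],   # roll rate
    [ 4.00, -0.05, -0.30,  0.00],   # yaw rate
    [ 0.00,  1.00,  0.00,  0.00],   # bank angle
])

for lam in np.linalg.eigvals(A):
    if lam.imag > 0:                # report each oscillatory mode once
        wn = abs(lam)               # undamped natural frequency, rad/s
        zeta = -lam.real / wn       # damping ratio
        print(f"oscillatory mode: wn = {wn:.2f} rad/s, zeta = {zeta:.2f}")
```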

29. M.O. McKinney, “Experimental Determination of the Effects of Dihedral, Vertical Tail Area, and Lift Coefficient on Lateral Stability and Control Characteristics,” NACA TN-1094 (1946). 30. Campbell and Seacord, “The Effect of Mass Distribution on the Lateral Stability and Control Characteristics of an Airplane as Determined by Tests of a Model in the Free-Flight Tunnel,” NACA TR-769 (1943). 31. Robert O. Schade and James L. Hassell, Jr., “The Effects on Dynamic Lateral Stability and Control of Large Artificial Variations in the Rotary Stability Derivatives,” NACA TN-2781 (1953).

High-Speed Investigations
High-speed studies of dynamic stability were very active at Wallops. The scope and contributions of the Wallops rocket-boosted model research programs for aircraft configurations, missiles, and airframe components covered an astounding number of technical areas, including aerodynamic performance, flutter, stability and control, heat transfer, automatic controls, boundary-layer control, inlet performance, ramjets, and the separation behavior of aircraft components and stores. As an example of test productivity, in just 3 years beginning in 1947, over 386 models were launched at Wallops to evaluate a single topic: roll control effectiveness at transonic conditions. These tests included generic configurations and models with wings representative of the historic Douglas D-558-2 Skyrocket, Douglas X-3 Stiletto, and Bell X-2 research aircraft.32 Fundamental studies of dynamic stability and control were also conducted with generic research models to study basic phenomena such as longitudinal trim changes, dynamic longitudinal stability, control-hinge moments, and aerodynamic damping in roll.33 Studies with models of the D-558-2 also detected unexpected coupling of longitudinal and lateral oscillations, a problem that would subsequently prove to be common for configurations with long fuselages and relatively small wings.34 Similar coupled motions caused great concern in the X-3 and F-100 aircraft development programs and spurred numerous studies of the phenomenon known as inertial coupling. More than 20 specific aircraft configurations were evaluated during the Wallops studies, including early models of such well-known aircraft as the Douglas F4D Skyray, the McDonnell F3H Demon, the Convair B-58 Hustler, the North American F-100 Super Sabre, the Chance Vought F8U Crusader, the Convair F-102 Delta Dagger, the Grumman F11F Tiger, and the McDonnell F-4 Phantom II.

32. Carl A. Sandahl, "Free-Flight Investigation at Transonic and Supersonic Speeds of a Wing-Aileron Configuration Simulating the D-558-2 Airplane," NACA RM-L8E28 (1948); and Sandahl, "Free-Flight Investigation at Transonic and Supersonic Speeds of the Rolling Effectiveness for a 42.7° Sweptback Wing Having Partial-Span Ailerons," NACA RM-L8E25 (1948).
33. Examples include James H. Parks and Jesse L. Mitchell, "Longitudinal Trim and Drag Characteristics of Rocket-Propelled Models Representing Two Airplane Configurations," NACA RM-L9L22 (1949); and James L. Edmondson and E. Claude Sanders, Jr., "A Free-Flight Technique for Measuring Damping in Roll by Use of Rocket-Powered Models and Some Initial Results for Rectangular Wings," NACA RM-L9101 (1949).
34. Parks, "Experimental Evidence of Sustained Coupled Longitudinal and Lateral Oscillations From a Rocket-Propelled Model of a 35° Swept-Wing Airplane Configuration," NACA RM-L54D15 (1954).

Shadowgraph of X-15 model in free flight during high-speed tests in the Ames SFFT facility. Shock wave patterns emanating from various airframe components are visible. NASA.

High-speed dynamic stability testing at the Ames SFFT included studies of the static and dynamic stability of blunt-nose reentry shapes, as well as analyses of boundary-layer separation.35 This work included studies of the supersonic dynamic stability characteristics of the Mercury capsule. Noting the nonlinear variations of pitching moment with angle of attack typically exhibited by blunt bodies, Ames researchers contributed a mathematical method for including such nonlinearities in theoretical analyses and predictions of capsule dynamic stability at supersonic speeds. During the X-15 program, Ames conducted free-flight testing in the SFFT to define stability, control, and flow-field characteristics of the configuration at high supersonic speeds.36

Out of the Box: V/STOL Configurations
International interest in Vertical Take-Off and Landing (VTOL) and Vertical/Short Take-Off and Landing (V/STOL) configurations escalated during the 1950s and persisted through the mid-1960s, with a huge number of radical propulsion/aircraft combinations proposed and evaluated

35. Maurice L. Rasmussen, "Determination of Nonlinear Pitching-Moment Characteristics of Axially Symmetric Models From Free-Flight Data," NASA TN-D-144 (1960).
36. Alfred G. Boissevain and Peter F. Intrieri, "Determination of Stability Derivatives from Ballistic Range Tests of Rolling Aircraft Models," NASA TM-X-399 (1961).

throughout industry, DOD, the NACA, and NASA. The configurations included an amazing variety of propulsion concepts to achieve hovering flight and the conversion to and from conventional forward flight. However, all of these aircraft concepts were plagued by common issues regarding stability, control, and handling qualities.37 The first nonhelicopter VTOL concept to capture the interest of the U.S. military was the vertical-attitude tail-sitter. In 1947, the Air Force and Navy initiated an activity known as Project Hummingbird, which requested design approaches for VTOL aircraft. At Langley, discussions with Navy managers led to exploratory NACA free-flight studies in 1949 of simplified tail-sitter models to evaluate stability and control during hovering flight. Conducted in a large open area within a building, the powered-model testing enabled researchers to explore the dynamic stability and control of such configurations.38 The test results provided valuable information on the relative severity of the unstable oscillations encountered during hovering flight. The instabilities in roll and pitch were caused by aerodynamic interactions of the propeller during forward or sideward translation, but the period of the growing oscillations was sufficiently long to permit relatively easy control. The model flight tests also provided guidance regarding the level of control power required for satisfactory maneuvering during hovering flight. Navy interest in the tail-sitter concept led to contracts for the development of the Consolidated-Vultee (later Convair) XFY-1 "Pogo" and the Lockheed XFV-1 "Salmon" tail-sitter aircraft in 1951. The Navy asked Langley to conduct dynamic stability and control investigations of both configurations using its free-flight model test techniques. In 1952, hovering flights of the Pogo were conducted within the huge return passage of the Langley Full-Scale Tunnel, followed by transition flights from hovering to forward flight in the tunnel test section during a brief break in the tunnel's busy test schedule.39 Observed by Convair
personnel (including the XFY-1 test pilot), the flight tests provided encouragement and confidence to the visitors and the Navy. Without doubt, the most successful NASA application of free-flight models for VTOL research was in support of the British P.1127 vectored-thrust fighter program. As the British Hawker Aircraft Company matured its design of the revolutionary P.1127 in the late 1950s, Langley's senior manager, John P. Stack, became a staunch supporter of the activity and directed that tests in the 16-Foot Transonic Tunnel and free-flight research activities in the Full-Scale Tunnel be used for cooperative development work.40 In response to the directive, a one-sixth-scale free-flight model was flown in the Full-Scale Tunnel to examine the hovering and transition behavior of the design. The free-flight tests, witnessed by Hawker staff members including the test pilot slated to conduct the first transition flights, were very impressive. The NASA researchers regarded the P.1127 model as the most docile V/STOL configuration ever flown in their extensive experience with free-flight VTOL designs. As was the case for many free-flight model projects, the motion-picture segments showing successful transitions from hovering to conventional flight in the Full-Scale Tunnel were a powerful influence in convincing critics that the concept was feasible. In this case, the model flight demonstrations helped sway a doubtful British government to fund the project. Refined versions of the P.1127 design were subsequently developed into today's British Harrier and Boeing AV-8 fighter/attack aircraft. The NACA and NASA also conducted pioneering free-flight model research on tilt wing aircraft for V/STOL missions. In the early 1950s, several generic free-flight propeller-powered models were flown to evaluate some of the stability and control issues that were anticipated to limit the feasibility of the concept.41 The fundamental principle used by the tilt wing concept to convert from hovering to forward flight involves reorienting the wing from a vertical position for takeoff to a conventional position for forward flight. However, this simple conversion of the wing angle relative to the fuselage brings major challenges. For example, the

40. Smith, “Flight Tests of a 1/6-Scale Model of the Hawker P.1127 Jet VTOL Airplane,” NASA TM-SX-531 (1961). 41. Lovell and Lysle P. Parlett, “Hovering-Flight Tests of a Model of a Transport Vertical Take-Off Airplane with Tilting Wing and Propellers,” NACA TN-3630 (1956); Lovell and Parlett, “Flight Tests of a Model of a High-Wing Transport Vertical-Take-Off Airplane With Tilting Wing and Propellers and With Jet Controls at the Rear of the Fuselage for Pitch and Yaw Control,” NACA TN-3912 (1957).

wing experiences large changes in its angle of attack relative to the flight path during the transition, and areas of wing stall may be encountered during the maneuver. The asymmetric loss of wing lift during stall can result in wing-dropping and wallowing motions and in uncommanded transient maneuvers. Therefore, the wing must be carefully designed to minimize or eliminate flow separation that would otherwise result in degraded or unsatisfactory stability and control characteristics. Extensive wind tunnel and flight research on many generic NACA and NASA models, as well as on the Hiller X-18, Vertol VZ-2, and Ling-Temco-Vought XC-142A tilt wing configurations at Langley, included a series of free-flight model tests in the Full-Scale Tunnel.42 Coordinated closely with full-scale flight tests, the model testing initially focused on providing early information on dynamic stability and the adequacy of control power in hovering and transition flight for the configurations. However, all projects quickly encountered the anticipated problem of wing stall, especially in reduced-power descending flight maneuvers. Tilt wing aircraft depend on the high-energy slipstream of large propellers to prevent local wing stall by reducing the effective angle of attack across the wingspan. For reduced-power conditions, which are required for the steep descents of short-field missions, the energy of the slipstream is severely reduced, and wing stall is experienced. Large uncontrolled dynamic motions may be exhibited by the configuration under such conditions, and the undesirable motions can limit the descent capability (or safety) of the airplane.
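The slipstream's protective effect can be stated compactly. With freestream speed $V$ at geometric angle of attack $\alpha$ and a propeller-induced axial velocity increment $\Delta v$ along the chord (standard notation, not drawn from the text), the local effective angle of attack inside the slipstream is

$$\alpha_\mathrm{eff} = \tan^{-1}\!\left(\frac{V\sin\alpha}{V\cos\alpha + \Delta v}\right) < \alpha,$$

so when power, and hence $\Delta v$, is reduced for a steep descent, $\alpha_\mathrm{eff}$ rises back toward $\alpha$ and portions of the wing stall.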

42. Louis P. Tosti, “Flight Investigation of Stability and Control Characteristics of a 1/8-Scale Model of a Tilt-Wing Vertical-Take-Off-And-Landing Airplane,” NASA TN-D-45 (1960); Tosti, “Longitudinal Stability and Control of a Tilt-Wing VTOL Aircraft Model with Rigid and Flapping Propeller Blades,” NASA TN-D-1365 (1962); William A. Newsom and Robert H. Kirby, “Flight Investigation of Stability and Control Characteristics of a 1/9-Scale Model of a Four-Propeller Tilt-Wing V/STOL Transport,” NASA TN-D-2443 (1964).


As the 1960s drew to a close, the worldwide engineering community began to appreciate that the weight and complexity required for VTOL missions carried significant penalties in aircraft design. It therefore turned its attention to the possibility of providing less demanding STOL capability with fewer penalties, particularly for large military transport aircraft. Langley researchers had begun to explore methods of using propeller or jet exhaust flows to induce additional lift on wing surfaces in the 1950s, and although the magnitude of the lift augmentation was relatively high, practical propulsion limitations stymied the application of most concepts. A particularly promising concept known as the externally blown flap (EBF) used the redirected exhausts of conventional pod-mounted jet engines to induce additional circulation lift at low speeds for takeoff and landing.43 However, the exhaust temperatures of the turbojets of the 1950s were much too high for structural integrity and feasible application. Nonetheless, Langley continued to explore and mature such ideas, known as powered-lift concepts. These research studies embodied conventional powered model tests in several wind tunnels, including free-flight investigations of the dynamic stability and control of multiengine EBF configurations in the Full-Scale Tunnel, with emphasis on providing satisfactory lateral control and lateral-directional trim after the failure of an engine.

Other powered-lift concepts were also explored, including the upper-surface-blowing (USB) configuration, in which the engine exhaust is directed over the upper surface of the wing to induce additional circulation and lift.44 Advantages of this approach included potential noise shielding and flow-turning efficiency.

While Langley continued its fundamental research on EBF and USB configurations, an enabling technology leap occurred in the early 1970s with the introduction of turbofan engines, which inherently produce relatively cool exhaust fan flows.45 The turbofan was the perfect match for these STOL concepts, and industry’s awareness of, and participation in, the basic NASA research program matured the state of the art of design data for powered-lift aircraft. The free-flight model results, coupled with NASA piloted simulator studies of full-scale aircraft STOL missions, helped provide the fundamental knowledge and data required to reduce risk in development programs.

John P. Campbell, Jr., left, inventor of the externally blown flap, and Gerald G. Kayten of NASA Headquarters pose with a free-flight model of an STOL configuration at the Full-Scale Tunnel. Slotted trailing-edge flaps were used to deflect the exhaust flows of turbofan engines. NASA.

Ultimately applied to the McDonnell-Douglas YC-15 and Boeing YC-14 prototype transports in the 1970s and to today’s Boeing C-17, the EBF and USB concepts were the result of over 30 years of NASA research and development, including many valuable studies of free-flight models in the Full-Scale Tunnel.46

Breakthrough: Variable Sweep

Spurred on by postwar interest in the variable-wing-sweep concept as a means to optimize mission performance at both low and high speeds, the NACA at Langley initiated a broad research program to identify the potential benefits and problems associated with the concept. The disappointing experience of the Bell X-5 research aircraft, which used a single wing pivot to achieve variable sweep in the early 1950s, had clearly identified the unacceptable weight penalties associated with translating the wing along the fuselage centerline to maintain satisfactory levels of longitudinal stability while the wing sweep angle was varied from forward to aft.

46. Campbell originally conceived the EBF concept and was awarded a patent for his invention.


After the X-5 experience, military interest in variable sweep quickly diminished, while aerodynamicists at Langley continued to explore alternate concepts that might permit variations in wing sweep without moving the wing pivot location and without serious degradation in longitudinal stability and control. After years of intense research and wind tunnel testing, Langley researchers conceived a promising concept known as the outboard pivot.47 The basic principle of the NASA solution was to pivot the movable wing panels at two outboard locations on a fixed inner wing and to share the lift between the fixed inner wing and the movable outer panels, thereby minimizing the longitudinal movement of the aerodynamic center of lift across the range of flight speeds. As the concept matured in configuration studies and supporting tests, refined designs were continually submitted to intense evaluations in tunnels across the speed range, from supersonic cruise conditions to subsonic takeoff and landing.48

The use of dynamically scaled free-flight models to evaluate the stability and control characteristics of variable-sweep configurations was an ideal application of the testing technique. Because variable-sweep designs are capable of an infinite number of wing sweep angles between the forward and aft positions, the number of conventional wind tunnel force tests required to completely document stability and control variations for every sweep angle could quickly become unacceptable. In contrast, a free-flight model with continually variable wing sweep could be used to quickly examine qualitative characteristics as its geometry changed, resulting in rapid identification of significant problems. Free-flight model investigations in the Full-Scale Tunnel of a configuration based on a proposed Navy combat air patrol (CAP) mission provided a convincing demonstration that the outboard pivot was ready for application.

The oblique wing concept (sometimes referred to as the “switchblade wing” or “skewed wing”) had originated in German design studies of the Blohm & Voss P.202 jet aircraft during World War II and was pursued at Langley by R.T. Jones. Oblique wing designs use a single-pivot, all-moving wing to achieve variable sweep in an asymmetrical fashion. The wing is positioned in the conventional unswept position for takeoff and landing, and it is rotated about its single pivot point for high-speed flight.

As part of a general research effort that included theoretical aerodynamic studies and conventional wind tunnel tests, a free-flight investigation of the dynamic stability and control of a simplified model was conducted in the Free-Flight Tunnel in 1946.49 This research on the asymmetric swept wing actually predated NACA wind tunnel research on symmetrical variable-sweep concepts with a research model of the Bell X-1.50 The test objectives were to determine whether such a radical configuration would exhibit satisfactory stability characteristics and remain controllable in the swept, asymmetric state at low-speed flight conditions. The results of the flight tests, the first U.S. flight studies of oblique wings ever conducted, showed that the wing could be swept as much as 40 degrees without significant degradation in behavior. However, when the sweep angle was increased to 60 degrees, an unacceptable longitudinal trim change was experienced, and a severe reduction in lateral control occurred at moderate and high angles of attack. Nonetheless, the results obtained with the simple free-flight model provided optimism that the unconventional oblique wing concept might be feasible from a stability and control perspective.

R.T. Jones transferred to the NACA Ames Aeronautical Laboratory in 1947 and continued his brilliant career there, including his continuing interest in the application of oblique wing technology. In the early 1970s, the scope of NASA studies of potential civil supersonic transport configurations included an effort by an Ames team headed by Jones that examined a possible oblique wing version of the supersonic transport. Although wind tunnel testing was conducted at Ames, the cancellation of the American SST program in the early 1970s terminated this activity. Wind tunnel and computational studies of oblique wing designs continued at Ames throughout the 1970s for subsonic, transonic, and supersonic flight applications.51 Jones stimulated and participated in flight tests of several oblique wing radio-controlled models, and a joint Ames-Dryden project was initiated to use a remotely piloted research aircraft known as the Oblique Wing Research Aircraft (OWRA) for studies of the aerodynamic characteristics and control requirements needed to achieve satisfactory handling qualities.

Growing interest in the oblique wing and the success of the OWRA remotely piloted vehicle project led to the design and low-speed flight demonstrations of a full-scale research aircraft known as the AD-1 in the late 1970s. Designed as a low-cost demonstrator, the radical AD-1 proved to be a showstopper at air shows and generated considerable public interest.52 The flight characteristics of the AD-1 were quite satisfactory for wing-sweep angles of less than about 45 degrees, but the handling qualities degraded at higher sweep angles, in agreement with the earlier Langley exploratory free-flight model study.

After his retirement, Jones continued his interest in supersonic oblique wing transport configurations. When the NASA High-Speed Research program to develop technologies for a viable supersonic transport began in the 1990s, several industry teams revisited the oblique wing for potential applications. Ames sponsored free-flight radio-controlled model studies of oblique wing configurations at Stanford University in the early 1990s. As a result of free-flight model contributions from Langley, Ames, Dryden, and academia, the major issues regarding potential dynamic stability and control problems for oblique wing configurations have been addressed for low-speed conditions. Unfortunately, funding for transonic and supersonic model flight studies has not been forthcoming, and high-speed studies have not yet been accomplished.


Safe Return: Space Capsules

The selection of blunt capsule designs for the Mercury, Gemini, and Apollo programs resulted in numerous investigations of the dynamic stability and recovery of such shapes. Nonlinear, unstable variations of aerodynamic forces and moments with angle of attack and sideslip were known to exist for these configurations, and extensive conventional force tests, dynamic free-flight model tests, and analytical studies were conducted to define the nature of potential problems that might be encountered during atmospheric reentry. At Ames, the supersonic and hypersonic free-flight aerodynamic facilities were used to observe dynamic stability characteristics, extract aerodynamic data from flight tests, evaluate stabilizing concepts, and develop mathematical models for flight simulation at hypersonic and supersonic speeds.

52. Weneth D. Painter, “AD-1 Oblique Wing Research Aircraft Pilot Evaluation Program,” AIAA Paper 1983-2509 (1983).


Meanwhile, at Langley, researchers in the Spin Tunnel were conducting dynamic stability investigations of the Mercury, Gemini, and Apollo capsules in vertically descending subsonic flight.53 Results of these studies dramatically illustrated potential dynamic stability issues during the spacecraft recovery procedure. For example, the Gemini capsule model was very unstable; it would at various times oscillate, tumble, or spin about a vertical axis with its symmetrical axis tilted as much as 90 degrees from the vertical. However, deployment of a drogue parachute during any spinning or tumbling motion quickly terminated these unstable motions at subsonic speeds. Extensive tests of various drogue-parachute configurations resulted in definitions of acceptable parachute bridle-line lengths and attachment points.

Spin Tunnel results for the Apollo command module configuration were even more dramatic. The Apollo capsule with its blunt end forward was dynamically unstable and displayed violent gyrations, including large oscillations, tumbling, and spinning motions. With the apex end forward, the capsule was dynamically stable and would trim at an angle of attack of about 40 degrees and glide in large circles. Once again, the use of a drogue parachute stabilized the capsule, and the researchers also found that retention of the launch escape system, with either a drogue parachute or canard surfaces attached to it, would prevent an unacceptable apex-forward trim condition during a launch abort.

Following the Apollo program, NASA conducted a considerable effort on unpiloted space probes and planetary exploration. In the Langley Spin Tunnel, several planetary-entry capsule configurations were tested to evaluate their dynamic stability during descent, with priority given to simulating descent in the Martian atmosphere.54 Studies also included assessments of the Pioneer Venus probe in the 1970s. These tests provided considerable design information on the dynamic stability of a variety of potential planetary exploration capsule shapes.

Photograph of a free-flight model of the Project Mercury capsule in vertical descent in the Spin Tunnel with drogue parachute deployed. Tests to improve the dynamic stability characteristics of capsules have continued to this day. NASA.

Additional studies of the stability characteristics of blunt, large-angle capsules were conducted in the Spin Tunnel in the late 1990s, and as the new millennium began, NASA’s interest in piloted and unpiloted planetary exploration prompted further dynamic stability studies there.


Currently, the Spin Tunnel and its dynamic model testing techniques are supporting NASA’s Constellation program for lunar exploration. Included in the dynamic stability testing are the Orion launch abort vehicle, the crew module, and alternate launch abort systems.55


A Larger Footprint: Reentry Vehicles and Lifting Bodies

The NACA and military visionaries initiated early efforts for the X-15 hypersonic research aircraft; in-house design studies for hypersonic vehicles were started at Langley and Ames; and the Air Force began its X-20 Dyna-Soar space plane program. The evolution of long, slender configurations and of others with highly swept lifting surfaces was yet another perturbation of new and unusual vehicles with unconventional aerodynamic, stability, and control characteristics requiring the use of free-flight models for assessments of flight dynamics. In addition to the high-speed studies of the X-15 in the Ames supersonic free-flight facility previously discussed, the X-15 program sponsored low-speed investigations of free-flight models at Langley in the Full-Scale Tunnel and the Spin Tunnel and with an outdoor helicopter drop model.56

The most significant contribution of the NASA free-flight tests of the X-15 was confirmation of the effectiveness of the differential tail for control. North American had followed pioneering research at Langley on the use of differentially deflected horizontal tails for roll control. It had used such a design in its YF-107A aircraft and opted to use the concept for the X-15 to avoid ailerons that would have complicated the wing design of the hypersonic aircraft. Nonetheless, skepticism existed over the effectiveness of the application until the free-flight tests at Langley provided a dramatic demonstration of its success.57

55. David E. Hahne and Charles M. Fremaux, “Low-Speed Dynamic Tests and Analysis of the Orion Crew Module Drogue Parachute System,” AIAA Paper 2008-09-05 (2008).

56. Peter C. Boisseau, “Investigation of the Low-Speed Stability and Control Characteristics of a 1/7-Scale Model of the North American X-15 Airplane,” NACA RM-L57D09 (1957); Donald E. Hewes and James L. Hassell, Jr., “Subsonic Flight Tests of a 1/7-Scale Radio-Controlled Model of the North American X-15 Airplane With Particular Reference to High Angle-of-Attack Conditions,” NASA TM-X-283 (1960).

57. Dennis R. Jenkins and Tony R. Landis, Hypersonic: The Story of the North American X-15 (Specialty Press, 2008).


In the late 1950s, scientists at NASA Ames conducted in-depth studies of the aerodynamic and aerothermal challenges of hypersonic reentry and concluded that blunted half-cone shapes could provide adequate thermal protection for vehicle structures while also producing a significant expansion in operational range and landing options. As interest in the concept intensified following a major conference in 1958, a series of half-cone free-flight models provided convincing proof that such vehicles exhibited satisfactory flight behavior.

The most famous free-flight model activity in support of lifting body development was stimulated by the advocacy and leadership of Dale Reed of the Dryden Flight Research Center. In 1962, Reed became fascinated with the lifting body concept and proposed that a piloted research vehicle be used to validate the potential of lifting bodies.58 He was particularly interested in the flight characteristics of a second-generation Ames lifting body design known as the M2-F1. After Reed’s convincing flights of radio-controlled models of the M2-F1, ranging from kite-like tows to launches from a larger radio-controlled mother ship, demonstrated its satisfactory flight characteristics, he obtained approval for the construction and flight-testing of his vision: a low-cost piloted unpowered glider. The impact on skeptics of motion-picture films of Reed’s free-flight model tests was overwhelming, and management’s support led to an entire decade of highly successful lifting body flight research at Dryden.

At Langley, support for the M2-F1 flight program included free-flight tow tests of a model in the Full-Scale Tunnel, and the emergence of Langley’s own lifting body design, known as the HL-10, resulted in wind tunnel tests in virtually every facility at Langley. Free-flight testing of a dynamic model of the HL-10 in the Full-Scale Tunnel demonstrated outstanding dynamic stability and control at angles of attack as high as 45 degrees, and the rolling oscillations that had been exhibited by earlier highly swept reentry bodies were completely damped for the HL-10 with three vertical fins.59

In the early 1970s, a new class of lifting body emerged, dubbed “racehorses” by Dale Reed.60 Characterized by high fineness ratios, long pointed noses, and flat bottoms, these configurations were much more efficient at hypersonic speeds than the earlier “flying bathtubs.” One Langley-developed configuration, known as the Hyper III, was evaluated at Dryden by Reed and his team using free-flight models and the mother ship test technique.

Although the Hyper III was efficient at high speeds, it exhibited a very low lift-to-drag ratio at low speeds, requiring some form of variable geometry such as a pivot wing, flexible wing, or gliding parachute. Reed successfully advocated for a low-cost, 32-foot-long helicopter-launched demonstration vehicle of the Hyper III with a pop-out wing, which made its first flight in 1969. Flown from a ground-based cockpit, the Hyper III was launched from a helicopter at an altitude of 10,000 feet. After being flown through research maneuvers by a research pilot using instruments, the vehicle was handed off to a safety pilot, who landed it safely. Unfortunately, funding for a low-cost piloted project similar to the earlier M2-F1 activity was not forthcoming for the Hyper III.

Avoiding Catastrophe: Vehicle/Store Separation

One of the more complex and challenging areas in aerospace technology is the prediction of the paths of aircraft components following release, whether external stores, canopies, crew modules, or vehicles dropped from mother ships. Aerodynamic interference between vehicles can cause major safety-of-flight issues, including catastrophic impact of the components with the airplane. Unexpected pressures and shock waves can dramatically change the expected trajectory of stores. Conventional wind tunnel tests used to obtain aerodynamic inputs for calculations of separation trajectories must cover a wide range of test parameters, and the requirement for dynamic aerodynamic information further complicates the task. Measurement of aerodynamic pressures, forces, and moments on vehicles in proximity to one another in wind tunnels is a highly challenging technical procedure. The use of dynamically scaled free-flight models can quickly provide a qualitative indication of separation dynamics, thereby providing guidance for wind tunnel test planning and early identification of potentially critical flight conditions.
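The kind of first-order estimate such free-flight tests validate can be sketched in a few lines. In the snippet below, every number is a hypothetical placeholder rather than a value from the Langley tests; it integrates the vertical motion of a store whose weight is opposed by an interference force that decays as the store clears the parent aircraft’s flow field:

```python
import math

# All values are illustrative assumptions, not data from the Langley tests.
g, rho, V = 9.81, 0.65, 180.0    # gravity (m/s^2), density at altitude (kg/m^3), carrier speed (m/s)
m, S = 900.0, 0.5                # store mass (kg) and reference area (m^2)
CN0, decay = -0.4, 1.5           # interference normal-force coefficient at release; decay length (m)

q = 0.5 * rho * V * V            # dynamic pressure, taken as constant during the brief drop
z = w = t = 0.0                  # distance fallen below the rack, sink rate, time
dt = 0.001
while z < 5.0:
    # Weight plus an upward "suck-back" interference force that fades with separation:
    Fz = m * g + q * S * CN0 * math.exp(-z / decay)
    w += (Fz / m) * dt           # integrate sink rate and displacement (explicit Euler)
    z += w * dt
    t += dt

print(f"~{t:.2f} s to fall 5 m, versus {math.sqrt(2 * 5.0 / g):.2f} s with no interference;")
print("a stronger flow field (larger |CN0|) could stall or even reverse the separation.")
```

The real difficulty, as the text notes, is that the interference terms vary strongly with position, Mach number, and attitude, which is precisely why dynamically scaled drop models were so valuable as a quick qualitative check.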


Separation testing for military aircraft components using dynamic models at Langley evolved into a specialty at the Langley 300-mph 7- by 10-Foot Tunnel, where subsonic separation studies included assessments of the trajectories taken by released cockpit capsules, stores, and canopies. In addition, bomb releases were simulated for several bomb-bay configurations, and the trajectories of model rockets fired from the wingtips of models were also evaluated. As requests for specific separation studies mounted, the staff rapidly accumulated unique expertise in testing techniques for separation clearance.61

One of the more important separation studies conducted in the Langley tunnel was an assessment of the launch dynamics of the X-15/B-52 combination. Prior to the X-15, launches of research aircraft from carrier aircraft had only been made from the fuselage centerline location of the mother ship. In view of the asymmetrical location of the X-15 under the right wing of the B-52, concern arose over the aerodynamic loads encountered during separation and the safety of the launching procedure. Separation studies were therefore conducted in the Langley 300-mph 7- by 10-Foot Tunnel and the Langley High-Speed 7- by 10-Foot Tunnel.62 Detailed measurements of the aerodynamic loads on the X-15 in proximity to the B-52 were made during conventional force tests in the high-speed tunnel, while the trajectory of a dynamically scaled X-15 model was observed during a separate investigation in the low-speed tunnel. The test setup for the low-speed drop tests placed the dynamically scaled X-15 model under the left wing of the B-52 model to accommodate viewing stations in the tunnel. Initial trim settings for the X-15 were determined to avoid contact with the B-52, and the drop tests showed that the resulting trajectories provided adequate clearance for all conditions investigated.

During successful subsonic separation events, a bomb or external store is released, and gravity typically pulls it away safely. At supersonic speeds, however, aerodynamic forces are appreciably higher relative to the store weight; shock waves may cause unexpected pressures that severely influence the store trajectory or bomb guidance system; and aerodynamic interference effects may cause catastrophic collisions after launch. Under some conditions, bombs released from within a fuselage bomb bay at supersonic speeds have encountered adverse flow fields to the extent that the bombs have reentered the bomb bay.

In the early 1950s, the NACA advisory committees strongly recommended that the Agency initiate focused efforts in store separation, especially for supersonic flight conditions. Researchers within Langley’s Pilotless Aircraft Research Division used their Preflight Jet facility at Wallops to conduct research on supersonic separation characteristics for several high-priority military programs.63

Langley researcher William J. Alford, Jr., observes a free-flight drop model of the X-15 research aircraft as it undergoes separation testing beneath a B-52 model in a Langley tunnel. NASA.

The Preflight Jet facility, designed to check out ramjet engines prior to rocket launches, consisted of a “blow-down” tunnel powered by compressed air exhausted through a supersonic nozzle. Test Mach number capability ranged from 1.4 to 2.25. With an open throat and no danger to a downstream facility drive system, the facility proved to be ideal for dynamic studies of bombs or stores following supersonic releases.

One of the more crucial tests conducted in the Wallops Preflight Jet facility supported the development of the Republic F-105 fighter-bomber, which was specifically designed with forcible ejection of bombs from within the bomb bay to avoid the issues associated with external releases at supersonic speeds. For the test program, a half-fuselage model (with bomb bay) was mounted to the top of the nozzle, and the ejection sequence included extension of folding fins on the store after release.

63. Shortal, A New Dimension.


A piston and rod assembly forcefully ejected the store from the open bomb bay, and high-speed photography documented the motion of the store and its trajectory. The F-105 program expanded to include numerous specific and generic bomb and store shapes, requiring almost 2 years of tests in the facility. Numerous generic and specific aircraft separation studies in the Preflight Jet facility from 1954 to 1959 included F-105 pilot escape, F-104 wing drop-tank separations, F-106 store releases from an internal bomb bay, and B-58 pod drops.


Glimpse of the Future: Advanced Civil Aircraft

Most of the free-flight model research conducted by NASA to evaluate dynamic stability and control within the flight envelope has focused on military configurations and a few radical civil aviation designs. This situation reflects years of advances in design methods for conventional subsonic configurations and extensive experience correlating model and airplane test results. Transport design teams have accumulated massive data and experience bases that serve as the corporate knowledge base for derivative aircraft. For example, companies now have considerable experience with the accuracy of their conventional static wind tunnel model tests for predicting full-scale aircraft characteristics, including the effects of Reynolds number. Consequently, testing techniques such as free-flight tests do not have high technical priority for such organizations.

The radical Blended Wing-Body (BWB) flying wing configuration has been a notable exception to this trend. Initiated with NASA sponsorship at McDonnell-Douglas (now Boeing) in 1993, the subsonic BWB concept carries passengers or payload within its wing structure to minimize drag and maximize aerodynamic efficiency.64 Over the past 16 years, wind tunnel research and computational studies of various BWB configurations have been conducted by NASA-Boeing teams to assess cruise conditions at high subsonic speeds, takeoff and landing characteristics, spinning and tumbling tendencies, emergency spin/tumble recovery parachute systems, and dynamic stability and control. By 2005, the BWB team had conducted static and dynamic force tests of models in the 12-Foot Low-Speed Tunnel and the 14- by 22-Foot Tunnel to define aerodynamic data used to develop control laws and control limits, as well as trade studies of the various control effectors available on the trailing edge of the wing.

64. Chambers, Radical Wings and Wind Tunnels; Chambers, Innovation in Flight: Research of the Langley Research Center on Revolutionary Advanced Concepts for Aeronautics, NASA SP-4539 (2005).


Free-flight testing then occurred in the Full-Scale Tunnel with a 12-foot-span model.65 Results of the flight tests indicated satisfactory flight behavior, including assessments of engine-out asymmetric thrust conditions.

In 2002, Boeing contracted with Cranfield Aerospace, Ltd., for the design and production of a pair of 21-foot-span remotely piloted models of the BWB vehicle known as the X-48B configuration. After conventional wind tunnel tests of the first X-48B vehicle in the Langley Full-Scale Tunnel in 2006, the second X-48B made its first flight in July 2007 at the Dryden Flight Research Center. The BWB flight-test team is a cooperative venture between NASA, Boeing Phantom Works, and the Air Force Research Laboratory. The first 11 flight tests of the 8.5-percent-scale vehicle in 2007 focused on low-speed dynamic stability and control with the wing leading-edge slats deployed. In a second series of flights, which began in April 2008, the slats were retracted, and higher-speed studies were conducted. Powered by three model aircraft turbojet engines, the 500-pound X-48B is expected to have a top speed of about 140 mph. A sequence of flight phases is scheduled for the X-48B, with the objectives of each phase directed at the technology issues facing the implementation of the innovative concept.

Final Maturity: Concept Demonstrators

The efforts of the NACA and NASA in developing and applying dynamically scaled free-flight model testing techniques have progressed through a truly impressive maturation process. Although the scaling relationships have remained constant since the inception of free-flight testing, the facilities and test attributes have become dramatically more sophisticated. The size and construction of models have changed from unpowered balsa models weighing a few ounces with wingspans of less than 2 feet to very large powered composite models weighing over 1,000 pounds. Control systems have changed from simple solenoid bang-bang controls operated by a pilot with visual cues provided by model motions to hydraulic systems with digital flight controls and full feedback from an array of sensors and adaptive control systems.

65. Dan D. Vicroy, “Blended-Wing-Body Low-Speed Flight Dynamics: Summary of Ground Tests and Sample Results,” AIAA invited paper presented at the 47th AIAA Aerospace Sciences Meeting and Exhibit, Jan. 2009.
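The unchanging scaling relationships referred to above are the classical Froude scaling laws for dynamically scaled models. As a rough sketch, using hypothetical airplane values rather than a specific NASA configuration, the model properties follow from the geometric scale factor N and the ratio of the test-medium density to the density at the simulated flight altitude:

```python
# Classical Froude scaling for a 1/N-scale dynamically scaled free-flight model.
# Airplane numbers below are hypothetical, not a specific NASA configuration.
N = 9.0                 # model is 1/9 scale
sigma = 1.0 / 0.533     # (density at the test condition) / (density at simulated altitude);
                        # here a sea-level tunnel simulating flight near 20,000 ft (assumed)

airplane = {"span_ft": 38.0, "weight_lb": 33000.0, "Iyy_slugft2": 170000.0, "speed_kt": 150.0}

model = {
    "span_ft":     airplane["span_ft"] / N,                 # lengths scale by 1/N
    "weight_lb":   airplane["weight_lb"] * sigma / N**3,    # mass scales by sigma/N^3
    "Iyy_slugft2": airplane["Iyy_slugft2"] * sigma / N**5,  # inertias scale by sigma/N^5
    "speed_kt":    airplane["speed_kt"] / N**0.5,           # velocities scale by 1/sqrt(N)
}

# Time scales by 1/sqrt(N), so angular rates are sqrt(N) times faster on the model:
# the quickened motions are one reason free-flight model pilots need fast reflexes
# or, in modern practice, artificial stabilization.
for key, value in model.items():
    print(f"{key:12s} {value:10.2f}")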


The Boeing X-48B Blended Wing-Body flying model in flight at NASA Dryden. The configuration has undergone almost 15 years of research, including free-flight testing at Langley and Dryden. NASA.

The level of sophistication integrated into the model testing techniques has now given rise to a new class of free-flight models that are considered integrated concept demonstrators rather than specific technology tools. Thus, the lines between free-flight models and more complex remotely piloted vehicles have become blurred, with a noticeable degree of refinement in the concept demonstrators.

Research activities at the NASA Dryden Flight Research Center vividly illustrate how far free-flight testing has come. Since the 1970s, Dryden has continually conducted a broad program of demonstrator applications, with emphasis on the integration of advanced technology. In 1997, another milestone in remotely piloted research vehicle technology was achieved at Dryden, when an X-36 vehicle demonstrated the feasibility of using advanced technologies to ensure satisfactory flying qualities for radical tailless fighter designs. The X-36 was designed as a joint effort between the NASA Ames Research Center and the Boeing Phantom Works (previously McDonnell-Douglas) as a 0.28-scale powered free-flight model of an advanced fighter without vertical or horizontal tails to enhance survivability.


Powered by an F112 turbofan engine and weighing about 1,200 pounds, the 18-foot-long configuration used a canard, split aileron surfaces, wing leading- and trailing-edge flaps, and a thrust-vectoring nozzle for control. A single-channel digital fly-by-wire system provided artificial stability for the configuration, which was inherently unstable about the pitch and yaw axes.66

Spinning

Qualitatively, recovery from the various spin modes depends on the type of spin exhibited, the mass distribution of the aircraft, and the sequence of controls applied. Recovering from a steep steady spin tends to be relatively easy because the nose-down orientation of the aircraft control surfaces to the free stream enables at least a portion of the control effectiveness to be retained. In contrast, during a flat spin, the fuselage may be almost horizontal, and the control surfaces are oriented so as to provide little recovery moment, especially a rudder on a conventional vertical tail. In addition to the ineffectiveness of controls for recovery from the flat spin, the rotation of the aircraft about a near-vertical axis near its center of gravity results in extremely high centrifugal forces at the cockpit for configurations with long fuselages. In many cases, the negative (“eyeballs out”) g-loads may be so high as to incapacitate the crewmembers and prevent them from escaping from the aircraft.

Establishing Credibility: The Early Days

Following the operational readiness of the Langley 15-Foot Free-Spinning Tunnel in 1935, initial testing centered on establishing correlation with full-scale flight-test results of spinning behavior for the XN2Y-1 and F4B-2 biplanes.67 Critical comparisons of earlier results obtained on small-scale models in the Langley 5-Foot Vertical Tunnel with full-scale flight tests had indicated considerable scale effects on aerodynamic characteristics; therefore, calibration tests in the new tunnel were deemed imperative. The results of the tests for the two biplane models were very encouraging in terms of the nature of recovery characteristics and served to inspire confidence in the testing technique and promote future tests.

During those prewar years, the NACA staff was afforded time to conduct fundamental research studies and to draw general conclusions for emerging monoplane designs.

66. Laurence A. Walker, “Flight Testing the X-36: The Test Pilot’s Perspective,” NASA CR-198058 (1997).

67. Zimmerman, “N.A.C.A. Free-Spinning Wind Tunnel,” NACA TR-557.


A systematic series of investigations was conducted in which, for example, models were tested for combinations of eight different wings and three different tails.68 Other investigations of tunnel-to-flight correlation followed, including a comparison of results for the BT-9 monoplane trainer.

As experience with spin tunnel testing increased, researchers began to observe more troublesome differences between results obtained in flight and in the tunnel. The effects of Reynolds number, model inaccuracies, control-surface rigging of full-scale aircraft, propeller slipstream effects not present during unpowered model tests, and other factors became appreciated, and a general philosophy began to emerge in which model tests were viewed as good predictors of full-scale characteristics, tempered by examples of poor correlation that demanded further correlation studies and a conservative interpretation of model results. Critics of small-scale model testing did not accept the growing philosophy that spin prediction was an “art” based on extensive testing to determine the relative sensitivity of results to configuration variables, model damage, and testing technique. Nonetheless, pressure mounted to arrive at design guidelines for satisfactory spin recovery characteristics.


Quest for Guidelines: Tail Damping Power Factor

An empirical criterion based on the projected side area and mass distribution of the airplane was derived in England, and the Langley staff proposed a design criterion in 1939 based solely on the geometry of the aircraft tail surfaces. Known as the tail-damping power factor (TDPF), it was touted as a rapid estimation method for determining whether a new design was likely to comply with the minimum requirements for safety in spinning.69

The beginning of World War II and the introduction of the new Langley 20-Foot Spin Tunnel in 1941 resulted in a tremendous demand for spinning tests of high-priority military aircraft. The workload of the staff increased dramatically, and a tremendous amount of data was gathered for a large number of different configurations. Military requests for spin tunnel tests filled all available tunnel test time, leaving no time for general research.

At the same time, configurations were being tested with radical differences in geometry and mass distribution. Tailless aircraft, with their masses distributed in a primarily spanwise direction, were introduced, along with twin-engine bombers and other unconventional designs with moderately swept wings and canards.

In the 1950s, advances in aircraft performance provided by the introduction of jet propulsion resulted in radical changes in aircraft configurations, creating new challenges for spin technology. Military fighters no longer resembled the aircraft of World War II, as swept wings and long, pointed fuselages became commonplace. Suddenly, certain factors, such as mass distribution, became even more important, and airflow around the unconventional, long fuselage shapes during spins dominated the spin behavior of some configurations. At the same time, fighter aircraft became larger and heavier, resulting in much higher masses relative to the atmospheric density, especially during flight at high altitudes.

Effect of Reynolds Number

In the mid-1950s, the NACA encountered an unexpected aerodynamic scale effect related to the long fuselage forebodies being introduced at the time. This experience led to one of the more important and lasting lessons learned in the use of free-spinning models for spin predictions, and one project stands out as the key experience. As part of the ongoing military requests for NACA support of new aircraft development programs, the Navy in 1955 requested that Langley conduct spin tunnel tests of a model of its new Chance Vought XF8U-1 Crusader fighter. The results of spin tunnel tests of a 1/25-scale model indicated that the airplane would exhibit two spin modes.70 The first mode would be a potentially dangerous fast, flat spin at an angle of attack of approximately 87 degrees, from which recoveries were unsatisfactory or unobtainable. The second spin was much steeper, with a lower rate of rotation, and recoveries would probably be satisfactory.

As the spin tunnel results were analyzed, Chance Vought engineers directed their focus to identifying the factors responsible for the flat spin exhibited by the model. The scope of activities stimulated by the XF8U-1 spin tunnel results included, in addition to extended spin tunnel tests,

70. Walter J. Klinar, Henry A. Lee, and L. Faye Wilkes, “Free-Spinning-Tunnel Investigation of a 1/25-Scale Model of the Chance Vought XF8U-1 Airplane,” NACA RM-SL56L31b (1956).


one-degree-of-freedom autorotation tests of a model of the XF8U-1 configuration in the Chance Vought Low Speed Tunnel and a NACA wind tunnel research program that measured the aerodynamic sensitivity of a wide range of two-dimensional, noncircular cylinders to Reynolds number.71 The wind tunnel tests were designed and conducted to include variations in Reynolds number from the low values associated with spin tunnel testing to much higher values more representative of flight.

With results from the static and autorotation wind tunnel studies in hand, researchers identified an adverse Reynolds number effect on the forward fuselage shape of the XF8U-1: at the relatively low Reynolds numbers of the spin tunnel tests (about 90,000 based on fuselage-forebody depth), the spin model exhibited a powerful pro-spin aerodynamic yawing moment dominated by forces produced on the forebody. The pro-spin moment caused an autorotative spinning tendency, resulting in the fast, flat spin observed in the spin tunnel tests. As the Reynolds number in the tunnel tests was increased to values approaching 300,000, however, the moments produced by the forward fuselage reversed direction and became antispin, remaining so at higher Reynolds numbers. Fundamentally, the researchers had clearly identified the importance of the cross-sectional shapes of modern aircraft, particularly those with long forebodies, on spin characteristics and the possibility of erroneous spin tunnel predictions because of the low test Reynolds number. When the full-scale spin tests were conducted, the XF8U-1 airplane exhibited only the steeper spin mode; the fast, flat spin predicted by the spin model, which had caused such concern, was never encountered.

During and after the XF8U-1 project, Langley’s spin tunnel personnel developed expertise in anticipating potential Reynolds number effects on the forebody and in the art of geometrically modifying models to minimize unrealistic spin predictions caused by the phenomenon.
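The scale gap itself is easy to quantify. The sketch below uses representative, assumed dimensions (not measured XF8U-1 values) to show how a 1/25-scale spin model lands near the troublesome forebody Reynolds number regime of roughly 90,000, while the airplane operates far above the roughly 300,000 reversal threshold:

```python
# Reynolds number based on fuselage-forebody depth; dimensions and speeds
# are representative assumptions, not measured XF8U-1 values.
NU = 1.46e-5                 # kinematic viscosity of air, m^2/s (sea level)
N = 25.0                     # 1/25-scale spin tunnel model

depth_full_m = 1.2           # assumed full-scale forebody depth
v_model = 27.0               # assumed spin tunnel vertical airspeed, m/s
v_full = v_model * N**0.5    # Froude scaling of the descent rate

re_model = (depth_full_m / N) * v_model / NU
re_full = depth_full_m * v_full / NU
print(f"model Re ~ {re_model:,.0f}    full-scale Re ~ {re_full:,.0f}")
# The model value sits near the low-Re regime where the forebody moments were
# pro-spin; the airplane flies far beyond the reversal threshold, so the model's
# flat spin was an artifact of scale, not a property of the airplane.
```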

In this approach, the cross-sectional shapes of an aircraft are examined before models are constructed, and if the forebody cross section is similar to those known to exhibit scale effects at low Reynolds number, static tests are conducted at other wind tunnels over a range of Reynolds numbers to determine whether artificial devices, such as nose-mounted strakes at specific locations, can be used to alter the flow separation on the nose at low Reynolds number and cause it to more accurately simulate full-scale conditions.72 In addition to the XF8U-1, it was necessary to apply scale-correction fuselage strakes to the spin tunnel models of the Northrop F-5A and F-5E fighters, the Northrop YF-17 lightweight fighter prototype, and the Fairchild A-10 attack aircraft to avoid erroneous predictions caused by fuselage forebody effects. In the case of the X-29, a detailed study of the effects of forebody devices for correcting low Reynolds number effects was conducted.73

Effect of External Stores

External stores have been found to have large effects on spin and recovery, especially for asymmetric loadings in which stores are located asymmetrically along the wing, resulting in a lateral displacement of the center of gravity of the configuration. For example, some aircraft may not spin in the direction of the “heavy” wing but will spin fast and flat into the “light” wing. In most cases, model tests in which the shapes of the external stores were replaced with equivalent-weight ballast indicated that the effects of asymmetric loadings were primarily mass effects, with little or no aerodynamic effect detected. However, very large stores such as fuel tanks were found, on occasion, to have unexpected effects because of the aerodynamic characteristics of the component. During the aircraft development phase, the spin characteristics of high-performance military aircraft must be assessed for all proposed loadings, both symmetric and asymmetric. Spin tunnel tests can therefore be extensive for some aircraft, especially those with variable-sweep wings.

72. D.N. Petroff, S.H. Scher, and L.E. Cohen, “Low Speed Aerodynamic Characteristics of an 0.075-Scale F-15 Airplane Model at High Angles of Attack and Sideslip,” NASA TM-X-62360 (1974); Petroff, Scher, and C.E. Sutton, “Low-Speed Aerodynamic Characteristics of a 0.08-Scale YF-17 Airplane Model at High Angles of Attack and Sideslip,” NASA TM-78438 (1978); Raymond D. Whipple and J.L. Ricket, “Low-Speed Aerodynamic Characteristics of a 1/8-Scale X-29A Airplane Model at High Angles of Attack and Sideslip,” NASA TM-87722 (1986).

73. Stanley H. Scher and William L. White, “Spin-Tunnel Investigation of the Northrop F-5E Airplane,” NASA TM-SX-3556 (1977); C. Michael Fremaux, “Wind-Tunnel Parametric Investigation of Forebody Devices for Correcting Low Reynolds Number Aerodynamic Characteristics at Spinning Attitudes,” NASA CR-198321 (1996).


Testing of the General Dynamics F-111, for example, required several months of test time to determine spin and recovery characteristics for all potential combinations of wing-sweep angle, center-of-gravity position, and symmetric and asymmetric store loadings.74


Parachute Technology

The use of tail-mounted parachutes for emergency spin recovery has been common practice from the earliest days of flight to the present. Properly designed and deployed parachutes have proven to be relatively reliable spin recovery devices, always providing an antispin moment regardless of the orientation of the aircraft or the disorientation or confusion of the pilot. Almost every military aircraft spin program conducted in the Spin Tunnel includes a parachute investigation.

Free-spinning model tests are used to determine the critical geometric variables for parachute systems. Paramount among these variables is the minimum size of parachute required for recovery from the most dangerous spin modes. As would be expected, the size of the parachute is constrained by system weight and by the opening shock loads transmitted to the rear of the aircraft. In addition to parachute size, the length of the parachute riser (attachment) lines and the attachment point location on the rear of the aircraft are critical design parameters. Riser line length can be especially critical to the inflation and effectiveness of the parachute for spin recovery. Results of free-spin tests of hundreds of models in the Spin Tunnel have shown that if the riser length is too short, the parachute will be immersed in the low-energy wake of the spinning airplane and will not inflate. On the other hand, if the riser length is too long, the parachute will inflate but will drift inward and align itself with the axis of rotation, thereby providing no antispin contribution.

74. A discussion of the powerful effects of asymmetric mass loadings for the F-15 fighter is presented in an accompanying case study in this volume by the same author.
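A crude order-of-magnitude estimate, with every value assumed rather than taken from the Langley tests, illustrates how a tail-mounted parachute generates its antispin yawing moment and why the riser geometry matters:

```python
import math

# A crude estimate; every value below is assumed, not taken from the Langley tests.
rho = 1.0          # air density, kg/m^3 (some altitude)
v_descent = 55.0   # flat-spin descent rate, m/s
omega = 2.0        # spin rate, rad/s (roughly one turn per 3 seconds)
r_tail = 6.0       # attachment-point radius from the spin axis, m
cd = 0.7           # drogue drag coefficient
diameter = 3.0     # inflated canopy diameter, m

area = math.pi * diameter**2 / 4.0
v_tan = omega * r_tail                    # tangential speed of the attachment point
v_total = math.hypot(v_descent, v_tan)    # resultant flow seen by the canopy
drag = 0.5 * rho * v_total**2 * cd * area
# Only the drag component opposing the tangential motion fights the rotation:
antispin_moment = drag * (v_tan / v_total) * r_tail
print(f"canopy drag ~ {drag / 1000:.1f} kN, antispin moment ~ {antispin_moment / 1000:.1f} kN*m")
# Too short a riser immerses the canopy in the low-energy wake (drag collapses);
# too long, and the canopy drifts toward the spin axis (v_tan and the moment arm
# shrink toward zero), which is why riser length is sized in the model tests.
```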


The design and operational implementation of an emergency spin recovery parachute is a stringent process that begins with spin tunnel tests and proceeds through the design and qualification of the parachute system, including the deployment and release mechanisms. By participating in each segment of this process, Langley researchers have amassed a tremendous amount of knowledge regarding parachute systems and are frequently called upon by the aviation community for consultation before parachute systems are designed and fabricated for spin tests of full-scale aircraft.75


General-Aviation Spin Technology

The dramatic changes in aircraft configurations after World War II required almost complete commitment of the Spin Tunnel to development programs for the military, resulting in stagnation of research for light personal-owner-type aircraft. In subsequent years, designers had to rely on the database and design guidelines developed from wartime experience. Unfortunately, stall/spin accidents in the general aviation community increased at an alarming rate in the early 1970s. Even more troublesome, on several occasions aircraft that had been designed according to the NACA tail-damping power factor criterion exhibited unsatisfactory recovery characteristics, and the introduction of features such as advanced general aviation airfoils raised concern over the technical adequacy of the database for general aviation configurations.

Finally, in the early 1970s, the pressure of new military aircraft development programs eased, permitting NASA to embark on new studies in spin technology for general aviation aircraft. A NASA General Aviation Spin Research program was initiated at Langley that focused on the use of radio-controlled and spin tunnel models to assess the impact of design features on spin and recovery characteristics and to develop testing techniques that could be used by industry. The program also included the acquisition of several full-scale aircraft that were modified for spin tests to produce data for correlation with model results.76

One of the key objectives of the program was to evaluate the impact of tail geometry on spin characteristics. The approach taken was to design alternate tail configurations so as to produce variability in the TDPF parameter by changing the vertical and horizontal locations of the horizontal tail.
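For readers unfamiliar with the parameter, one common statement of TDPF, drawn from the NACA tail-design requirements literature (treat the exact normalization here as an assumption), multiplies a tail-damping ratio by an unshielded-rudder volume coefficient:

```python
# One common statement of the TDPF criterion (per the NACA tail-design
# requirements literature); treat the exact normalization as an assumption.
# The dimensions below describe a hypothetical light airplane, not a real design.
def tdpf(s_damped, arm_l, s_rud1, arm1, s_rud2, arm2, s_wing, span):
    """Tail-damping power factor = tail-damping ratio x unshielded-rudder volume coefficient.

    s_damped: fuselage side area shielded/damped under the horizontal tail, ft^2
    arm_l:    distance from the c.g. to the centroid of that area, ft
    s_rud1/2: unshielded rudder areas lying outside the horizontal-tail wake, ft^2
    arm1/2:   their moment arms from the c.g., ft
    """
    half_span = span / 2.0
    tdr = s_damped * arm_l**2 / (s_wing * half_span**2)            # tail-damping ratio
    urvc = (s_rud1 * arm1 + s_rud2 * arm2) / (s_wing * half_span)  # rudder volume coeff.
    return tdr * urvc

print(f"TDPF = {tdpf(6.5, 14.0, 2.0, 15.0, 1.5, 13.5, 160.0, 35.0):.6f}")
# Moving the horizontal tail up or aft changes both the damped fuselage area and
# the unshielded rudder area, which is exactly what the interchangeable-tail
# model series described below was built to vary.
```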

Involved in a study of spinning characteristics of general-aviation configurations in the 1970s were Langley test pilot Jim Patton, center, and researchers Jim Bowman, left, and Todd Burk. NASA.

A spin tunnel model of a representative low wing configuration was constructed with four interchangeable tails, and results for the individual tail configurations were compared with predictions based on the tail design criteria. The range of tails tested included conventional cruciform configurations, low horizontal tail locations, and a T-tail.

As expected, results of the spin tunnel testing indicated that tail configuration had a large influence on spin and recovery characteristics, but many other geometric features also influenced the characteristics, including fuselage cross-sectional shape. In addition, seemingly small configuration features, such as wing fillets at the wing trailing-edge juncture with the fuselage, had large effects. Importantly, the existing TDPF criterion for light airplanes did not correctly predict the spin recovery characteristics of the models for some conditions, especially those in which ailerons were deflected.


NASA’s report to industry following the tests stressed that, based on these results, TDPF should not be used to predict spin recovery characteristics. The report did, however, provide a recommended “best practice” approach to the overall design of the tail of an airplane for spin behavior.77

As part of its General Aviation Spin Research program, NASA continued to provide information on the design of emergency spin recovery parachute systems.78 Parachute diameters and riser line lengths were sized based on free-spinning model results for high and low wing configurations and a variety of tail configurations. Additionally, guidelines were documented for the design and implementation of the mechanical systems required for parachute deployment (such as mechanical jaws and pyrotechnic deployment) and release.

NASA also encouraged industry to use its spin tunnel facility on a fee-paying basis, and several industry teams used the opportunity to conduct proprietary tests of configurations in the tunnel. For example, the Beech Aircraft Corporation sponsored the first fee-paid test in the Langley Spin Tunnel for free-spinning model tests of its Model 77 “Skipper” trainer.79 In such proprietary tests, industry provided models and personnel for joint participation in the testing.

Spin Entry

The helicopter drop-model technique has been used since the early 1950s to evaluate the spin entry behavior of relatively large unpowered models of military aircraft. The objective of these tests has been to evaluate the relative spin resistance of configurations following various combinations of control inputs and the effects of the timing of recovery control inputs following departures. A related technique used for spin entry evaluations of general aviation configurations employs remotely controlled powered models that take off from ground runways and fly to the test condition.

In the late 1950s, industry had become concerned over potential scale effects on long, pointed fuselage shapes as a result of the XF8U-1 experiences in the Spin Tunnel, as discussed earlier. Thus, interest was growing in the possible use of much larger models than those used in spin tunnel tests to eliminate or minimize undesirable scale effects. Finally, a major concern arose for some airplane designs over the launching technique used in the Spin Tunnel. Because the spin tunnel model was launched by hand in a very flat attitude with forced rotation, it would quickly seek the developed spin modes, a very valuable output, but the full-scale airplane might not easily enter the spin because of control limitations, poststall motions, or other factors.

One of the first configurations tested to establish the credibility of the drop-model program, in 1958, was a 6.3-foot-long, 90-pound model of the XF8U-1 configuration.80 With the previously conducted spin tunnel results in hand, the choice of this design permitted correlation with the earlier tunnel and aircraft flight-test results. As has been discussed, wind tunnel testing of the XF8U-1 fuselage forebody shape had indicated that pro-spin yawing moments would be produced by the fuselage for Reynolds numbers below about 400,000, based on the average depth of the fuselage forebody. The Reynolds number for the drop-model tests ranged from 420,000 to 505,000, at which the fuselage contribution became antispin, and the spin and recovery characteristics of the drop model were found to be very similar to the full-scale results. In particular, the drop model did not exhibit the flat-spin mode predicted by the smaller spin tunnel model, and the results were in agreement with the aircraft flight tests, demonstrating the value of larger models from a Reynolds number perspective.

Success in applications of the drop-model technique for studies of spin entry led to many military requests for evaluations of emerging fighter aircraft. In 1959, the Navy requested an evaluation of the McDonnell F4H-1 Phantom II airplane using the drop technique.81 Earlier spin tunnel tests of the configuration had indicated the possibility of two types of spins: one steep and oscillatory, from which recoveries were satisfactory, and the other fast and flat, from which recovery was difficult or impossible. As mentioned previously, the spin tunnel launching technique had led to questions regarding whether the airplane would exhibit a tendency toward the steeper spin or the more dangerous flat spin.

The objective of the drop tests was to determine whether it was likely, or even possible, for the F4H-1 to enter the flat spin. In the F4H-1 investigation, an additional launching technique was used in an attempt to obtain a developed spin more readily and possibly to obtain the flat spin to verify its existence. This technique consisted of prespinning the model on the helicopter launch rig before releasing it in a flat attitude with the helicopter hovering. To achieve even higher initial rotation rates than could be achieved on the launch rig, a detachable flat metal vane was attached to one wingtip of the model to propel it to even faster spin rates. When the model appeared to be rotating sufficiently fast after release, the vane was jettisoned by the ground-based pilot, who, at the same time, moved the ailerons against the direction of rotation to help promote the spin. The model was then allowed to spin for several turns, after which recovery controls were applied. In some respects, this approach replicated the spin tunnel launch technique at a larger scale.

Results of the drop-model investigation for the F4H-1 are especially notable because they established the value of the testing technique in predicting spin tendencies, as verified by subsequent full-scale results. A total of 35 flights were made, with the model launched 15 times in the prerotated condition and 20 times in forward flight. During these 35 flights, poststall gyrations were obtained on 21 occasions, steep spins on 10 flights, and flat spins on only 4. No recoveries were possible from the flat spins, but only one flat spin was obtained without prerotation. The conclusions of the tests stated that the aircraft was more susceptible to poststall gyrations than spins; that the steeper, more oscillatory spin would be more readily obtainable and recovery could be made by the NASA-recommended control technique; and that the likelihood of encountering a fast, flat spin was relatively remote. Ultimately, these general characteristics were replicated at full-scale test conditions during spin evaluations by the Navy and Air Force.

The Pace Quickens

Beginning in the early 1960s, a flurry of new military aircraft development programs resulted in an unprecedented workload for the drop-model personnel. Support was requested by the military services for the General Dynamics F-111, Grumman F-14, McDonnell-Douglas F-15, Rockwell B-1A, and McDonnell-Douglas F/A-18 development programs.


In addition, drop-model tests were conducted in support of the Grumman X-29 and X-31 research aircraft programs (the X-31 sponsored by the Defense Advanced Research Projects Agency, or DARPA), which were scheduled for high-angle-of-attack full-scale flight tests at the Dryden flight facility. The specific objectives and test programs conducted with the drop models differed considerably for each configuration. Overviews of the results of the military programs are given in this volume, in another case study by this author.


General-Aviation Configurations

As part of its General Aviation Spin Research program in the 1970s, Langley included the development of a testing technique using powered radio-controlled models to study spin resistance, spin entry, and spin recovery during the incipient phase of the spin.82 Equally important was a focus on developing a reliable, low-cost model testing technique that could be used by industry for spin predictions in the early stages of design. The dynamically scaled models, which were about 1/5 scale (wingspans of about 4–5 feet), were powered and flown with hobby equipment. Although the models resembled conventional radio-controlled models flown by hobbyists, the scaling process discussed earlier made them much heavier (about 15–20 pounds) than typical hobby models (about 6–8 pounds).

The radio-controlled model activities in the Langley program consisted of three distinct phases. Initially, model testing and analysis were directed at producing timely data for correlation with spin tunnel and full-scale flight results, to establish the accuracy of the model results in predicting spin and recovery characteristics, and to gain experience with the testing technique. The second phase of the radio-controlled model program involved assessments of the effectiveness of NASA-developed wing leading-edge modifications intended to enhance the spin resistance of several general-aviation configurations. The focus of this research was a concept consisting of a drooped leading edge on the outboard wing panel with a sharp discontinuity at the inboard edge of the droop. The third phase of radio-controlled model testing involved cooperative studies of specific general-aviation designs with industry; in this segment of the program, studies centered on industry's assessment of the radio-controlled model technique.

82. Bowman and Burk, "Stall/Spin Studies Relating to Light General-Aviation Aircraft," Society of Automotive Engineers Business Aircraft Meeting, Wichita, KS.
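For reference, the scaling process mentioned above can be summarized by the standard Froude-number relations for dynamically scaled free-flight models; this is a textbook restatement rather than a quotation from the Langley program documents. For a model built to 1/N scale (N = 5 for the models above), with subscripts m for the model and a for the airplane, and ρ the air density at each vehicle's operating altitude:

$$ l_m=\frac{l_a}{N}, \quad V_m=\frac{V_a}{\sqrt{N}}, \quad t_m=\frac{t_a}{\sqrt{N}}, \quad m_m=\frac{m_a}{N^3}\,\frac{\rho_m}{\rho_a}, \quad I_m=\frac{I_a}{N^5}\,\frac{\rho_m}{\rho_a}. $$

Because the model flies in the denser air near the ground while the airplane spins at altitude, the density ratio inflates the scaled mass, which is why a 1/5-scale dynamically scaled model comes out far heavier than a hobby model of the same size.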


Direct correlation between radio-controlled model tests and full-scale airplane results for a low-wing NASA research configuration was very good, especially with regard to the susceptibility of the design to enter a fast, flat spin with poor or no recovery.83 In addition, the effects of various control input strategies agreed very well. For example, with normal pro-spin controls and any use of ailerons, the radio-controlled model and the airplane were both reluctant to enter the flat spin mode that had been predicted by spin tunnel tests; they exhibited only steeper spins from which recovery could still be accomplished. Subsequently, the test pilot and flight-test engineers of the full-scale airplane developed a unique control scheme during spin tests that would aggravate the steeper spin and propel the airplane into a flat spin requiring the emergency parachute for recovery. When a similar control technique was used on the radio-controlled model, it too would enter the flat spin, likewise requiring its parachute for recovery.

Some of the more impressive results of the radio-controlled model program for the low-wing configuration related to the ability of the model to demonstrate the effects of the discontinuous leading-edge droop concept that had been developed by Langley for improved spin resistance.84 Several wing-leading-edge droop configurations had been derived in wind tunnel tests with the objective of delaying wing autorotation and spin entry to higher angles of attack. Tests with the radio-controlled model modified with a full-span droop indicated better stall characteristics than the basic configuration exhibited, but the resistance of the model to entering the unrecoverable flat spin was significantly degraded. The flat spin could be obtained on virtually every flight if pro-spin controls were maintained beyond about three turns after the stall. In contrast, when the discontinuous droop was applied to the outer wing, the model would enter a very steep spin from which recovery could be obtained by simply neutralizing the controls. When the discontinuity on the inboard edge of the droop was faired over, the model reverted to the same characteristics that it had displayed with the full-span droop and could easily be flown into the flat spin. Correlation between the radio-controlled model and aircraft results in this phase of the project was outstanding. The agreement was particularly noteworthy in view of the large differences between the model and full-scale flight Reynolds numbers. All of the important stall/spin characteristics displayed by the low-wing radio-controlled model with the full-span droop configuration and the outboard droop configuration (with and without the fairing on the discontinuous juncture) were nearly identical to those exhibited by the full-scale aircraft, including stall characteristics, spin modes, spin resistance, and recovery characteristics.85

While researchers were pursuing the technical objectives of the radio-controlled model program, an effort was directed at developing test techniques that might be used by industry for relatively low-cost testing. Innovative instrumentation techniques were developed that used relatively inexpensive hobby-type onboard sensors to measure control positions, angle of attack, airspeed, angular rates, and other variables. Data output from the sensors was transmitted to a low-cost ground-based data acquisition station by modifying a conventional seven-channel radio-control model transmitter. The ground station consisted of separate receivers for monitoring angle of attack, angle of sideslip, and control commands. The receivers operated servos that drove potentiometers, whose signals were recorded on an oscillograph recorder. Tracking equipment and cameras were also developed. Other facets of the test technique development included the design and operational deployment of emergency spin recovery parachutes for the models.

One particularly innovative testing technique demonstrated by NASA in the radio-controlled model flight programs was the use of miniature auxiliary rockets mounted on the wingtips of models to artificially promote flat spins. This approach was particularly useful in determining the potential existence of dangerous flat spins that were difficult to enter from conventional flight. In this application, the pilot remotely ignited one of the rockets during a spin entry, resulting in extremely high spin rates and a transition to very high angles of attack and flat-spin attitudes. After the "spin up" maneuver was complete, the rocket thrust subsided, and the model either remained in a stable flat spin or pitched down to a steeper spin mode. Beech Aircraft used this technique in its subsequent applications to radio-controlled models.

83. Bowman, Stough, Burk, and Patton, "Correlation of Model and Airplane Spin Characteristics for a Low-Wing General Aviation Research Airplane," AIAA Paper 78-1477 (1978).
84. Staff of the Langley Research Center, "Exploratory Study of the Effects of Wing-Leading-Edge Modifications," NASA TP-1589 (1979).


85. The impressive results of NASA’s full-scale and model flight-testing, together with evaluations of the droop concept by FAA pilots, led to the creation of a new spin certification category known as “spin resistant design.” See Chambers, Concept to Reality: Contributions of the Langley Research Center to U.S. Civil Aircraft of the 1990s, NASA SP-4529 (2003).


General-aviation manufacturers maintained a close liaison with Langley researchers during the NASA stall/spin program, absorbing data produced by the coordinated testing of models and full-scale aircraft. The radio-controlled testing technique was of great interest, and following frequent interactions with Langley's test team, industry conducted its own evaluations of radio-controlled models for spin testing. In the mid-1970s, Beech Aircraft conducted radio-controlled testing of its T-34 trainer aircraft, the Model 77 Skipper trainer, and the twin-engine Model 76 Duchess.86 Piper Aircraft also conducted radio-controlled model testing to explore the spin entry, developed spin, and recovery techniques of a light twin-engine configuration.87 Later, in the 1980s, a joint program was conducted with the DeVore Aviation Corporation to evaluate the spin resistance of a model of a high-wing trainer design that incorporated the NASA-developed leading-edge droop concept.88 As a result of these cooperative ventures, industry obtained valuable experience in model construction techniques, spin recovery parachute system technology, methods of measuring moments of inertia and scaling engine thrust, the cost and time required to conduct such programs, and correlation with full-scale flight-test results.

The Future of Dynamic Model Testing

Efforts by the NACA and NASA over the last 80 years in the application of free-flying dynamic model test techniques have resulted in significant contributions to the civil and military aerospace communities. The results of the investigations have documented the testing techniques and lessons learned, and they have been especially valuable in defining critical characteristics of radical new configurations. With the passing of each decade, the free-flight techniques have become more sophisticated, and the accumulation of correlation between model and full-scale results has rapidly increased. In view of this technical progress, it is appropriate to reflect on the state of the art in free-flight technology and the challenges and opportunities of the future.

Langley researchers Long Yip, left, and David Robelen with a radio-controlled model used in a program on spin resistance with the DeVore Aviation Corporation. The model was equipped with NASA-developed discontinuous outboard droops and was extremely spin resistant. NASA.

Forcing Factors

One of the more impressive advances in aerospace capability in the last few years has been the acceptance and accelerated development of remotely piloted unmanned aerial vehicles (UAVs) by the military. The progress in innovative hardware and software products supporting this focus has truly been impressive, and it warrants consideration of whether properly scaled free-flight models have reached the appropriate limits of development. In comparison to today's capabilities, the past equipment used by the NACA and NASA seems primitive. It is difficult to anticipate hardware breakthroughs in free-flight model technologies beyond those currently employed, but NASA's most valuable contributions have come from the applications of the models to specific aerospace issues—especially those that require years of difficult research and participation in model-to-flight correlation studies.

NASA’s Contributions to Aeronautics

5

Changes in the world situation are now having an impact on aeronautics, with a trickle-down effect on technical areas such as free-flight testing. The end of the Cold War and industrial mergers have resulted in a dramatic reduction in new aircraft designs, especially unconventional configurations that would benefit from free-flight testing. Reductions in research budgets for industry and NASA have further aggravated the situation. These factors have led to a slowdown in requirements for NASA's ongoing free-flight testing capabilities at a time when turnover in the NASA workforce is resulting in the retirement of specialists in this and other technologies without adequate transfer of knowledge and mentoring to new research staffs. In addition, planned closures of key NASA facilities will challenge new generations of researchers to reinvent the free-flight capabilities discussed herein. For example, the planned demolition of the Langley Full-Scale Tunnel in 2009 will terminate that historic 78-year-old facility's role in providing free-flight testing capability, and although exploratory free-flight tests have been conducted in the much smaller test section of the Langley 14- by 22-Foot Tunnel, it remains to be seen whether the technique will continue as a testing capability. Based on the foregoing observations, NASA will be challenged to sustain the facilities and expertise required to continue providing the Nation with contributions from free-flight models.

Remaining Technical Challenges

Without doubt, the most important technical issues in the application of dynamically scaled free-flight models are the effects of Reynolds number. Although a few research agencies have attempted to minimize these effects by the use of pressurized wind tunnels, a practical approach to free-flight testing without concern for Reynolds number effects has not been identified. In the author's opinion, the challenge of eliminating Reynolds number effects in spin studies is worthy of investigation. In particular, the research community should seriously examine the possibilities of combining recent advances in cryogenic wind tunnel technology, magnetic suspension systems, and other relevant fields in a feasibility study of free-spinning tests at full-scale values of Reynolds number. The obvious issues of cost, operational efficiency, and value added versus today's testing would be critical factors in the study, although one would hope that the operational experience gained in the U.S. and Europe with cryogenic tunnels in recent years might provide some optimism for success.
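The promise of the cryogenic approach can be stated compactly; the following is an illustrative estimate, not a design calculation. With Reynolds number Re = ρVl/μ, cooling the test gas at constant pressure raises its density (ρ ∝ 1/T), lowers its viscosity (μ ∝ T^0.9, approximately), and lowers the speed of sound, so a given Mach number is reached at lower velocity (V ∝ √T). Together,

$$ Re \propto \frac{\rho V}{\mu} \propto \frac{T^{-1}\,T^{1/2}}{T^{0.9}} = T^{-1.4}, $$

so cooling nitrogen from roughly room temperature (about 300 K) to about 110 K multiplies the Reynolds number by roughly a factor of 4, while the dynamic pressure q = ½ρV² ∝ (1/T)(T) remains essentially unchanged, sparing the model any increase in aerodynamic loads.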


Other approaches to analyzing and correcting for Reynolds number effects might involve the application of computational fluid dynamics (CFD) methods. Although applications of CFD methods to dynamic stability and control issues are in their infancy, one can envision their use in evaluating the impact of Reynolds number on critical phenomena such as the effect of fuselage cross-sectional shape on spin damping. In summary, the next major breakthroughs in dynamic free-flight model technology should come in the area of improving the prediction of Reynolds number effects. Making advances toward this goal, however, will require programmatic commitments similar to those made during the past 80 years for the continued support of model testing in the specialty areas discussed herein.

Joseph R. Chambers, Innovation in Flight: Research of the Langley Research Center on Revolutionary Advanced Concepts for Aeronautics, NASA SP-2005-4539 (Washington, DC: GPO, 2005).

Joseph R. Chambers, Partners in Freedom: Contributions of the Langley Research Center to U.S. Military Aircraft of the 1990s, NASA SP-2000-4519 (Washington, DC: GPO, 2000).

Since before the invention of the airplane, wind tunnels have been key to undertaking fundamental research in aerodynamics and evaluating design concepts and configurations. Wind tunnels are essential for aeronautical research, whether for subsonic, transonic, supersonic, or hypersonic flight. The swept wing, delta wing, blended wing body shapes, lifting bodies, hypersonic boost-gliders, and other flight concepts have been evaluated and refined in NACA and NASA tunnels.

IN NOVEMBER 2004, the small X-43A scramjet hypersonic research vehicle achieved Mach 9.8, roughly 6,600 mph, the fastest speed ever attained by an air-breathing engine. With the vehicle's 10-second engine burn over the Pacific Ocean, the National Aeronautics and Space Administration (NASA) offered the promise of a new revolution in aviation: high-speed global travel and cost-effective entry into space. Randy Voland, project engineer at Langley Research Center, exclaimed that the flight "looked really, really good" and that "in fact, it looked like one of our simulations."1 In the early 21st century, the public associated modern aeronautical research with advanced computer simulations and dramatic flight tests, such as the launching of the X-43A, mounted to the front of a Pegasus rocket booster, from NASA's venerable B-52 platform. A key element in the success of the X-43A was a technology as old as the airplane itself: the wind tunnel, a fundamental research tool that also has evolved over the past century of flight.

NASA and its predecessor, the National Advisory Committee for Aeronautics (NACA), have been at the forefront of aerospace research since the early 20th century and on into the 21st.

NASA made fundamental contributions to the development and refinement of aircraft and spacecraft—from commercial airliners to the Space Shuttle—for operation at various speeds. The core of this success has been NASA's innovation, development, and use of wind tunnels. At crucial moments in the history of the United States, the NACA and NASA introduced state-of-the-art testing technologies as the aerospace community needed them, placing the organization on the world stage.

The Anatomy of a Wind Tunnel

The design of an efficient aircraft or spacecraft involves the use of the wind tunnel. These tools simulate flight conditions, including Mach number and scale effects, in a controlled environment. Over the late 19th, 20th, and early 21st centuries, wind tunnels evolved greatly, but they all incorporate five basic features, often in radically different forms. The main components are a drive system, a controlled fluid flow, a test section, a model, and instrumentation. The drive system creates a fluid flow that replicates flight conditions in the test section. That flow can move at subsonic (up to Mach 1), transonic (Mach 0.75 to 1.25), supersonic (up to Mach 5), or hypersonic (above Mach 5) speeds. Placing a scale model of an aircraft or spacecraft in the test section on balances allows the physical forces acting upon that model to be measured with test instrumentation. The specific characteristics of each of these components vary from tunnel to tunnel and reflect the myriad needs for this testing technology and the times in which experimenters designed them.2

Wind tunnels allow researchers to focus on isolating and gathering data about particular design challenges rooted in the four main systems of aircraft: aerodynamics, control, structures, and propulsion. Wind tunnels measure primarily forces and moments, such as lift, drag, and pitching moment, but they also gauge air pressure, flow, density, and temperature. Engineers convert those measurements into aerodynamic data to evaluate performance and design and to verify performance predictions. The data represent design factors such as structural loading and strength, stability and control, the design of wings and other elements, and, most importantly, overall vehicle performance.3

2. Donald D. Baals and William R. Corliss, Wind Tunnels of NASA, NASA SP-440 (Washington, DC: GPO, 1981), p. 2.
3. NASA Ames Applied Aerodynamics Branch, "The Unitary Plan Wind Tunnels" (July 1994), pp. 10–11.
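The conversion from measured forces to usable aerodynamic data follows a standard form, reproduced here for reference (these are the conventional textbook definitions, not values from any particular NACA or NASA test):

$$ q = \tfrac{1}{2}\rho V^2, \qquad C_L = \frac{L}{qS}, \qquad C_D = \frac{D}{qS}, \qquad C_m = \frac{M}{qS\bar{c}}, $$

where L, D, and M are the measured lift, drag, and pitching moment, S is the model's reference wing area, and c̄ is its mean aerodynamic chord. Because the coefficients are dimensionless, they transfer from model to full-scale aircraft when the Mach and Reynolds numbers of the test match those of flight.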


Most NACA and NASA wind tunnels are identified by their location, the size of their test section, the speed of the fluid flow, and the main design characteristic. For example, the Langley 0.3-Meter Transonic Cryogenic Tunnel evaluates scale models in its 0.3-meter test section at speeds between Mach 0.2 and 1.25 in a fluid flow of nitrogen gas. A specific application (the 9- by 6-Foot Thermal Structures Tunnel) or the exact nature of the test medium (the 8-Foot Transonic Pressure Tunnel) can be other characterizing factors in the name of a wind tunnel.


The Prehistory of the Wind Tunnel to 1958

The growing interest in and institutionalization of aeronautics in the late 19th century led to the creation of the wind tunnel.4 English scientists and engineers formed the Royal Aeronautical Society in 1866. The group organized lectures, technical meetings, and public exhibitions, published the influential Annual Report of the Aeronautical Society, and funded research to spread the idea of powered flight. One of the more influential members was Francis Herbert Wenham, a professional engineer with a variety of interests, who found his experiments with a whirling arm to be unsatisfactory. Funded by a grant from the Royal Aeronautical Society, he created the world's first operating wind tunnel in 1870–1872. Wenham and his colleagues conducted rudimentary lift and drag studies and investigated wing designs with their new research tool.5

Wenham's wing models were not full-scale wings. In England, University of Manchester researcher Osborne Reynolds recognized in 1883 that the airflow pattern over a scale model would be the same as that over its full-scale version if a certain flow parameter were the same in both cases. This basic parameter, named the Reynolds number after its discoverer, is a measure of the relative effects of the inertia and viscosity of air flowing over an aircraft. The Reynolds number is used to describe all types of fluid flow, including the shape of the flow, heat transfer, and the onset of turbulence.6

4. For a detailed history of wind tunnel development before World War II, see J. Lawrence Lee, "Into the Wind: A History of the American Wind Tunnel, 1896–1941," dissertation, Auburn University, 2001.
5. Baals and Corliss, Wind Tunnels of NASA, p. 3.
6. Ibid.
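In modern notation, the parameter Reynolds identified is written

$$ Re = \frac{\rho V l}{\mu} = \frac{V l}{\nu}, $$

where ρ and μ are the density and viscosity of the air, V is the flow velocity, l is a characteristic length such as the wing chord, and ν = μ/ρ is the kinematic viscosity. High values indicate inertia-dominated flow; low values indicate viscosity-dominated flow, which is why a small, slow model and a large, fast airplane can exhibit quite different flow behavior unless the parameter is matched.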


While Wenham invented the wind tunnel and Reynolds created the basic parameter for understanding its application to full-scale aircraft, Wilbur and Orville Wright were the first to use a wind tunnel in the systematic way that later aeronautical engineers would use it. The brothers, unaware of Wenham's work, saw their "invention" of the wind tunnel become part of their revolutionary program to create a practical heavier-than-air flying machine from 1896 to 1903. Frustrated by the poor performance of their 1900 and 1901 gliders on the sandy dunes of the Outer Banks—they did not generate enough lift and were uncontrollable—the Wright brothers began to reevaluate their aerodynamic calculations. They discovered that Smeaton's coefficient, one of the early contributions to aeronautics, and Otto Lilienthal's groundbreaking airfoil data were wrong. They found the discrepancy through the use of their wind tunnel, a 6-foot-long box, built in their bicycle workshop, with a fan at one end that generated airflow over small metal airfoil models mounted on balances. The lift and drag data they compiled in their notebooks would be the key to the design of wings and propellers during the rest of their experimental program, which culminated in the first controlled, heavier-than-air flight on December 17, 1903.7

Over the early flight and World War I eras, aeronautical enthusiasts, universities, aircraft manufacturers, military services, and national governments in Europe and the United States built 20 wind tunnels. The United States built the most, at 9, with 4 appearing rapidly during American involvement in the Great War. Of the European countries, Great Britain built 4, but the tunnels in France (2) and Germany (3) proved to be the most innovative. Gustav Eiffel's 1912 tunnel at Auteuil, France, became a practical tool for the French aviation industry to develop high-performance aircraft for the Great War. At the University of Göttingen in Germany, aerodynamics pioneer Ludwig Prandtl designed what would become the model for all "modern" wind tunnels in 1916. The tunnel featured a closed circuit; a contraction cone, or nozzle, just before the test section that created uniform air velocity and reduced turbulence in the test section; and a chamber upstream of the test section that further stilled any remaining turbulent air.8

The NACA and the Wind Tunnel

For the United States, the Great War highlighted the need to achieve parity with Europe in aeronautical development. Part of that effort was the creation of a Government civilian research agency, the NACA, in March 1915. The committee established its first facility, Langley Memorial Aeronautical Laboratory—named in honor of aeronautical experimenter and Smithsonian Secretary Samuel P. Langley—2 years later near Hampton, VA, on the Chesapeake Bay.

7. Peter Jakab, Visions of a Flying Machine (Washington, DC: Smithsonian Institution Press, 1990), p. 155.
8. Baals and Corliss, Wind Tunnels of NASA, pp. 9–12.


NACA Wind Tunnel No. 1 with a model of a Curtiss JN-4D Trainer in the test section. NASA.

In June 1920, NACA Wind Tunnel No. 1 became operational. A close copy of a design built at the British National Physical Laboratory a decade earlier, the tunnel produced no data directly applicable to aircraft design.9

9. Ibid., pp. 13–15.


One of the major obstacles to the effective use of a wind tunnel was scale effects: the Reynolds number of the model did not match that of the full-scale airplane. Prandtl protégé Max Munk proposed the construction of a high-pressure tunnel to solve the problem. His Variable Density Tunnel (VDT) could test a 1/20th-scale model in an airflow pressurized to 20 atmospheres, which generated Reynolds numbers identical to those of full-scale aircraft. Built in the Newport News shipyards, the VDT was radical in design, with its boilerplate and rivets. More importantly, the data it produced made it a point of departure from previous tunnels.10
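Munk's logic follows directly from the definition of the Reynolds number. Because the viscosity of air is nearly independent of pressure while its density at a fixed temperature is proportional to pressure, a back-of-the-envelope restatement of his argument runs:

$$ \frac{Re_{model}}{Re_{full\ scale}} = \frac{\rho_m}{\rho_f}\cdot\frac{l_m}{l_f} = 20 \times \frac{1}{20} = 1, $$

assuming the same airspeed in the tunnel as in flight: pressurizing the airflow to 20 atmospheres exactly offsets the 1/20 model scale, so the model data apply at full-scale Reynolds number.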

The VDT became an indispensable tool for airfoil development that effectively reshaped the subsequent direction of American airfoil research and development after it became operational in 1923. Munk's successor in the VDT, Eastman Jacobs, and his colleagues pioneered airfoil design methods with the pivotal Technical Report 460, which influenced aircraft design for decades after its publication in 1933.11 Of the 101 distinct airfoil sections employed on modern Army, Navy, and commercial airplanes by 1937, 66 were NACA designs. Those aircraft included the venerable Douglas DC-3 airliner, considered by many to be the first truly "modern" airplane, and the highly successful Boeing B-17 Flying Fortress of World War II.12

The NACA also addressed the fundamental problem of incorporating a radial engine into aircraft design in the pioneering Propeller Research Tunnel (PRT). Lightweight, powerful, and considered a revolutionary aeronautical innovation, the radial engine featured a flat frontal configuration that created considerable drag. Engineer Fred E. Weick and his colleagues tested full-size aircraft structures in the tunnel's 20-foot opening. Their solution, called the NACA cowling, arrived at the right moment to increase the performance of new aircraft. Spectacular demonstrations—such as Frank Hawks flying the Texaco Lockheed Air Express, with a NACA cowling installed, from Los Angeles to New York nonstop in a record time of 18 hours 13 minutes in February 1929—led to the organization's first Collier Trophy, in 1929.

With the basic formula for the modern airplane in place, the aeronautical community began to push the limits of conventional aircraft design. The NACA built upon its success with the cowling research in the PRT and concentrated on the aerodynamic testing of full-scale aircraft in wind tunnels. The Full-Scale Tunnel (FST), featuring a 30- by 60-foot test section, opened at Langley in 1931. The building was a massive structure, 434 feet long, over 200 feet wide, and 9 stories high. The first aircraft tested in the FST was a Navy Vought O3U-1 Corsair observation airplane. Testing in the late 1930s focused on removing as much drag from an airplane in flight as possible. NACA engineers—through an extensive program involving the Navy's first monoplane fighter, the Brewster XF2A-1 Buffalo—showed that attention to details such as air intakes, exhaust pipes, and gun ports effectively reduced drag.

In the mid- to late 1920s, the first generation of university-trained American aeronautical engineers began to enter industry, Government, and academia. The philanthropic Daniel Guggenheim Fund for the Promotion of Aeronautics created aeronautical engineering schools, complete with wind tunnels, at the California Institute of Technology, Georgia Institute of Technology, Massachusetts Institute of Technology, University of Michigan, New York University, Stanford University, and University of Washington. The creation of these dedicated academic programs ensured that aeronautics would be an institutionalized profession. The university wind tunnels quickly made their mark. The prototype Douglas DC airliner, the DC-1, flew in July 1933. It was in every sense of the word a streamlined airplane, thanks to the extensive wind tunnel testing at the Guggenheim Aeronautical Laboratory at the California Institute of Technology that shaped its design.

By the mid-1930s, it was obvious that the sophisticated wind tunnel research program undertaken by the NACA had contributed to a new level of American aeronautical capability. Each of the major American manufacturers built wind tunnels or relied upon a growing number of university facilities to keep up with the rapid pace of innovation. Despite those additions, it was clear in the minds of the editors at the influential trade journal Aviation that the NACA led the field with the grace, style, and coordinated virtuosity of a symphonic orchestra.13

World War II stimulated the need for sophisticated aerodynamic testing, and new wind tunnels met the need. Langley's 20-Foot Vertical Spin Tunnel (VST) became operational in March 1941. The major difference between the VST and those that came before was its vertical closed throat and annular return. A variable-speed, three-blade, fixed-pitch fan provided vertical airflow at an approximate velocity of 85 feet per second at atmospheric conditions. Researchers launched dynamically scaled, free-flying aircraft models into the tunnel to evaluate their stability as they spun and tumbled out of control. The installation of remotely actuated control surfaces allowed the study of spin recovery characteristics. The NACA solution to spin problems for aircraft was to enlarge the vertical tail, raise the horizontal tail, and extend the length of the ventral fin.14

The NACA founded the Ames Aeronautical Laboratory on December 20, 1939, in anticipation of the need for expanded research and flight-test facilities for the West Coast aviation industry. The NACA leadership wanted to reach parity with European aeronautical research, based on the belief that the United States would be entering World War II. The cornerstone facility at Ames was the 40- by 80-Foot Tunnel, capable of generating a 265-mph airflow over even larger full-scale aircraft when it opened in 1944. Building upon the revolutionary drag reduction studies pioneered in the FST, Ames researchers continued to modify existing aircraft with fillets and developed dive recovery flaps to offset compressibility, a new problem encountered when aircraft entered high-speed dives.15

The NACA also desired a dedicated research facility that specialized in aircraft propulsion systems. Construction of the Aircraft Engine Research Laboratory (AERL) began at Cleveland, OH, in January 1941, with the facility becoming operational in May 1943.16

14. George W. Gray, Frontiers of Flight: The Story of NACA Research (New York: A.A. Knopf, 1948), p. 156; James R. Hansen, Engineer in Charge: A History of the Langley Aeronautical Laboratory, 1917–1958, NASA SP-4305 (Washington, DC: GPO, 1987), pp. 462–463; NASA, "Wind Tunnels at NASA Langley Research Center," FS-2001-04-64-LaRC, 2001, http://www.nasa.gov/centers/langley/news/factsheets/windtunnels.html, accessed May 28, 2009.
15. Glenn E. Bugos, Atmosphere of Freedom: Sixty Years at the NASA Ames Research Center, NASA SP-4314 (Washington, DC: GPO, 2000), pp. 6–13.
16. The NACA renamed the AERL the Propulsion Research Laboratory in 1947 and changed the name of the facility once again to the Lewis Flight Propulsion Laboratory a year later in honor of George W. Lewis, the committee's first Director of Aeronautical Research. Virginia P. Dawson, Engines and Innovation: Lewis Laboratory and American Propulsion Technology, NASA SP-4306 (Washington, DC: GPO, 1991), pp. 2–14, 36.


The cornerstone facility was the Altitude Wind Tunnel (AWT), which became operational in 1944. The AWT was the only wind tunnel in the world capable of evaluating full-scale aircraft engines in realistic flight conditions that simulated altitudes up to 50,000 feet and speeds up to 500 mph. AERL researchers began first with large radial engines and propellers and continued with the new jet technology on through the postwar decades.17

The AERL soon became the center of the NACA's work on alleviating aircraft icing. The Army Air Forces lost over 100 military transports, along with their crews and cargoes, over the "Hump," or the Himalayas, as they tried to supply China by air. The problem was the buildup of ice on wings and control surfaces, which degraded the aircraft's aerodynamic integrity and overloaded it. The challenge was developing de-icing systems that removed or prevented the ice buildup. The Icing Research Tunnel (IRT) was the largest of its kind when it opened in 1944. It featured a 6- by 9-foot test section, a 160-horsepower electric motor capable of generating a 300 mph airstream, and a 2,100-ton refrigeration system that cooled the airflow down to -40 degrees Fahrenheit (°F).18 The tunnel worked well during the war and the following two decades, before NASA closed it. However, a new generation of icing problems for jet, rotary wing, and Vertical/Short Take-Off and Landing (V/STOL) aircraft resulted in the reopening of the IRT in 1978.19

During World War II, airplanes ventured into a new aerodynamic regime, the so-called "transonic barrier." American propeller-driven aircraft suffered from aerodynamic problems caused by high-speed flight. Flight-testing of the P-38 Lightning revealed compressibility problems that resulted in the death of a test pilot in November 1941. As the Lightning dove from 30,000 feet, shock waves formed over the wings and hit the tail, causing violent vibration that sent the airplane plummeting into a vertical, and unrecoverable, dive.

At speeds approaching Mach 1, aircraft experienced sudden changes in stability and control, extreme buffeting, and, most importantly, a dramatic increase in drag, which created challenges for the aeronautical community involving propulsion, research facilities, and aerodynamics. Bridging the gap between subsonic and supersonic speeds was a major aerodynamic challenge.20

The transonic regime was unknown territory in the 1940s. Four approaches—putting full-size aircraft into terminal velocity dives, dropping models from aircraft, mounting miniature wings on flying aircraft, and launching models mounted on rockets—were used for transonic research in lieu of an available wind tunnel in the 1940s. Aeronautical engineers faced a daunting challenge rooted in developing tools and concepts, because no known wind tunnel was able to operate and generate data at transonic speeds.

NACA manager John Stack took the lead in American transonic development. As the central NACA researcher in the development of the first research airplane, the Bell X-1, he was well qualified for high-speed research. His part in the first supersonic flight resulted in a joint award of the 1947 Collier Trophy. He ordered the conversion of the 8- and 16-Foot High-Speed Tunnels to slotted throats in spring 1948 to enable research in the transonic regime. Slots in the tunnels' test sections, or throats, enabled smooth operation at high subsonic speeds and low supersonic speeds. The initial conversion was not satisfactory. Physicist Ray Wright and engineers Virgil S. Ritchie and Richard T. Whitcomb hand-shaped the slots based on their visualization of smooth transonic flow. Working directly with Langley woodworkers, they designed and fabricated a channel at the downstream end of the test section that reintroduced air that had traveled through the slots. Their painstaking work led to the inauguration of operations in the newly christened 8-Foot Transonic Tunnel (TT) 7 months later, on October 6, 1950.21

Rumors had been circulating throughout the aeronautical community about the NACA's new transonic tunnels: the 8-Foot TT and the 16-Foot TT. The NACA wanted knowledge of their existence to remain confidential among the military and industry.

Concerns over secrecy were deemed less important than acknowledgement of the development of the slotted-throat tunnel, for which John Stack and 19 of his colleagues received a Collier Trophy in 1951. The award specifically recognized the importance of a research tool, a first in the 40-year history of the award. When used with already available wind tunnel components and techniques (the tunnel balance, pressure orifices, tuft surveys, and schlieren photographs), slotted-throat tunnels resulted in a new theoretical understanding of transonic drag. The NACA claimed that its slotted-throat transonic tunnels gave the United States a 2-year lead in the design of supersonic military aircraft.22

John Stack's leadership shaped the NACA's development of state-of-the-art wind tunnel technology. The researchers inspired by or working under him developed a generation of wind tunnels that, according to Joseph R. Chambers, became "national treasures."23

22. Ibid., p. 91; Hansen, Engineer in Charge, pp. 329, 330–331.
23. Joseph R. Chambers, Innovation in Flight: Research of the NASA Langley Research Center on Revolutionary Advanced Concepts for Aeronautics, NASA SP-2005-4539 (Washington, DC: GPO, 2005), pp. 18–19.


The Transition to NASA

In the wake of the launch of Sputnik I in October 1957, the National Aeronautics and Space Act of 1958 combined the NACA's research facilities at Langley, Ames, Lewis, Wallops Island, and Edwards with the Army and Navy rocket programs and the California Institute of Technology's Jet Propulsion Laboratory to form NASA. Suddenly, the scope of American civilian research in aeronautics expanded to include the challenges of space flight, driven by the Cold War competition between the United States and the Soviet Union and the unprecedented growth of American commercial aviation on the world stage.

NASA inherited an impressive inventory of facilities from the NACA. The wind tunnels at Langley, Ames, and Lewis were the state of the art and reflected the rich four-decade legacy of the NACA and the ever-evolving need for specialized tunnels. Over the next five decades of NASA history, the work of the wind tunnels was reflected equally in the first "A" and the "S" of the Administration's acronym.

The Unitary Plan Tunnels


In the aftermath of World War II and the early days of the Cold War, the Air Force, Army, Navy, and the NACA evaluated what the aeronautical industry needed to continue leadership and innovation in aircraft and missile development. Specifically, the United States needed more transonic and supersonic tunnels. The joint evaluation resulted in a proposal called the Unitary Plan. President Harry S. Truman's Air Policy Commission urged the passage of the Unitary Plan in January 1948. The draft plan, distributed to the press at the White House, proposed the installation of the 16 wind tunnels "as quickly as possible," with the remainder to follow quickly.24 Congress passed the Unitary Wind Tunnel Plan Act, and President Truman signed it on October 27, 1949. The act authorized the construction of a group of wind tunnels at U.S. Air Force and NACA installations for the testing of supersonic aircraft and missiles and for the high-speed and high-altitude evaluation of engines. The wind tunnel system was to benefit industry, the military, and other Government agencies.25

The portion of the Unitary Plan assigned to the U.S. Air Force led to the creation of the Arnold Engineering Development Center (AEDC) at Tullahoma, TN. Dedicated in June 1951, the AEDC took advantage of abundant hydroelectric power provided by the nearby Tennessee Valley Authority. The Air Force erected facilities, such as the Propulsion Wind Tunnel and two individual 16-foot wind tunnels that covered the range of Mach 0.2 to Mach 4.75, for the evaluation of full-scale jet and rocket engines in simulated aircraft and missile applications. Starting with 2 wind tunnels and an engine test facility, the research equipment at the AEDC expanded to 58 aerodynamic and propulsion wind tunnels.26 The Aeropropulsion Systems Test Facility, operational in 1985, was the finishing touch that made the AEDC, in the words of one observer, "the world's most complete aerospace ground test complex."27

The AEDC's exclusive focus on military aeronautics led the NACA to concentrate on commercial aeronautics. The Unitary Plan provided two benefits for the NACA. First, it upgraded and repowered the NACA's existing wind tunnel facilities.

Second, and more importantly, the Unitary Plan provided for three new tunnels, one at each of the three NACA laboratories, at a cost of $75 million. Overall, those three tunnels represented, to one observer, "a landmark in wind tunnel design by any criterion—size, cost, performance, or complexity."28

The NACA provided a manual for users of the Unitary Plan Wind Tunnel system in 1956, after the facilities became operational. The document allowed aircraft manufacturers, the military, and other Government agencies to plan development testing. Two general classes of work could be conducted in the Unitary Plan wind tunnels: company or Government projects. Industrial clients were responsible for renting the facility, which amounted to between $25,000 and $35,000 per week (approximately $190,000 to $265,000 in modern currency), depending on the tunnel, the utility costs required to power the facility, and the labor, materials, and overhead related to the creation of the basic test report. The test report consisted of plotted curves, tabulated data, and a description of the methods and procedures that allowed the company to properly interpret the data. The NACA kept the original report in a secure file for 2 years to protect the interests of the company. There were no fees for work initiated by Government agencies.29

The Langley Unitary Plan Wind Tunnel began operations in 1955. NACA researcher Herbert Wilson led a design team that created a closed-circuit, continual flow, variable density supersonic tunnel with two test sections. The test sections, each measuring 4 by 4 feet and 7 feet long, covered the low Mach (1.5 to 2.9) and high Mach (2.3 to 4.6) ranges. Tests in the Langley Unitary Plan Tunnel included force and moment measurements, surface pressure measurements and distributions, visualization of on- and off-surface airflow patterns, and heat transfer. The tunnel operated at 150 °F, with the capability of generating 300–400 °F in short bursts for heat transfer studies. Built at an initial cost of $15.4 million, the Langley facility was the cheapest of the three NACA Unitary Plan wind tunnels.30

The original purpose of the Langley Unitary Plan Tunnel was missile development.

A model of the Apollo Launch Escape System in the Unitary Wind Tunnel at NASA Ames. NASA.

A long series of missile tests addressed high-speed performance, stability and control, maneuverability, jet-exhaust effects, and other factors. NACA researchers quickly placed models of the McDonnell-Douglas F-4 Phantom II in the tunnel in 1956, and soon after, various models of the North American X-15, the General Dynamics F-111 Aardvark, proposed supersonic transport configurations, and spacecraft appeared in the tunnel.31

The Ames Unitary Plan Wind Tunnel opened in 1956. It featured three test sections: an 11- by 11-foot transonic section (Mach 0.3 to 1.5) and two supersonic sections that measured 9 by 7 feet (Mach 1.5 to 2.6) and 8 by 7 feet (Mach 2.5 to 3.5). Tunnel personnel could adjust the airflow to simulate flying conditions at various altitudes in each section.32 The power and magnitude of the tunnel facility called for unprecedented design and construction. The 11-stage axial-flow compressor featured a 20-foot diameter and was capable of moving air at 3.2 million cubic feet per minute. The complete assembly, which included over 2,000 rotor and stator blades, weighed 445 tons. The flow diversion valve allowed the compressor to drive either the 9- by 7-foot or the 8- by 7-foot supersonic wind tunnel.

31. Baals and Corliss, Wind Tunnels of NASA, pp. 68–69.
32. NASA Ames Applied Aerodynamics Branch, "The Unitary Plan Wind Tunnels" (July 1994), p. 9; NASA Langley, "NASA's Wind Tunnels," IS-1992-05-002-LaRC, May 1992, http://oea.larc.nasa.gov/PAIS/WindTunnel.html, accessed May 26, 2009.


At 24 feet in diameter, the flow diversion valve was the largest of its kind in the world in 1956, and it took only 3.5 minutes to switch the flow between the two tunnels. Four main drive motors, weighing 150 tons each, powered the facility. They could generate 180,000 horsepower on a continual basis and 216,000 horsepower at 1-hour intervals. Crews used 10,000 cubic yards of concrete for the foundation and 7,500 tons of steel plate for the major structural components. Workers expended 100 tons of welding rods during construction. When the facility began operations in 1956, the project had cost the NACA $35 million.33

The personnel of the Ames Unitary Plan Wind Tunnel evaluated every major craft in the American aerospace industry from the late 1950s to the late 20th century. In aeronautics, models of nearly every commercial transport and military fighter underwent testing. For the space program, the Unitary Plan Wind Tunnel was crucial to the design of the landmark Mercury, Gemini, and Apollo spacecraft and the Space Shuttle. That record led NASA to assert that the facility was a "unique national asset of vital importance to the nation's defense and its competitive position in the world aerospace market." It also reflected the fact that the Unitary Plan facility was NASA's most heavily used wind tunnel, with over 1,000 test programs conducted during 60,000 hours of operation by 1994.34

The National Park Service designated the Ames Unitary Plan Wind Tunnel Facility a national historic landmark in 1985. The Unitary Plan Wind Tunnel represented "the logical crossover point from NACA to NASA" and "contributed equally to both the development of advanced American aircraft and manned spacecraft."35

The Unitary Plan facility at Lewis Research Center allowed the observation and development of full-scale jet and rocket engines in a 10- by 10-foot supersonic wind tunnel that cost $24.6 million. Designed by Abe Silverstein and Eugene Wasielewski, the test section featured a flexible wall made up of 10-foot-wide polished stainless steel plates, almost 1.5 inches thick and 76 feet long. Hydraulic jacks changed the shape of the plates to simulate nozzle shapes covering the range of Mach 2 to Mach 3.5. Silverstein and Wasielewski also incorporated both open and closed operation. For propulsion tests, air entered the tunnel and exited continually on the other side of the test section. In the aerodynamic mode, the same air circulated repeatedly to maintain a higher atmospheric pressure, desired temperature, or moisture content. The Lewis Unitary Plan Wind Tunnel contributed to the development of the General Electric F110 and Pratt & Whitney TF30 jet engines intended for the Grumman F-14 Tomcat and the liquid-fueled rocket engines destined for the Space Shuttle.36

Many NACA tunnels found long-term use with NASA. After modifications made in the 1950s, the 20-Foot VST allowed the study of spacecraft and recovery devices in vertical descent. In the early 21st century, researchers used the 20-Foot VST to test the free-fall and dynamic stability characteristics of spacecraft models. It remains one of only two operational spin tunnels in the world.37

Tunnel Visions: Dick Whitcomb's Creative Forays

The slotted-throat transonic tunnels pioneered by John Stack and his associates at Langley proved valuable, especially in the hands of one of the Center's more creative minds, Richard T. Whitcomb. In the 8-Foot TT, he investigated the transonic regime. Gaining a better understanding of aircraft speeds between Mach 0.75 and 1.25 was one of the major aerodynamic challenges of the 1950s and a matter of national security during the Cold War. The Air Force's Convair YF-102 Delta Dagger interceptor was unable to reach supersonic speeds during its first flights in 1953. Tests in the 8-Foot TT revealed that the increase in drag as an airplane approached supersonic speeds was not the result of shock waves forming at the nose but of those forming just behind the wings. Whitcomb created a rule of thumb that decreased transonic drag by narrowing, or pinching, the fuselage where it met the wings.38 The improved YF-102A, with its new "area rule" fuselage, achieved supersonic flight in December 1954. The area rule fuselage increased the YF-102A's top speed by 25 percent. Embraced by the aviation industry, Whitcomb's revolutionary idea enabled a generation of military aircraft to achieve supersonic speeds.39
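The quantitative basis for Whitcomb's rule of thumb comes from slender-body theory, given here in its standard textbook form rather than as it appeared in Whitcomb's reports. Near Mach 1, the wave drag of a configuration depends not on its detailed geometry but on the longitudinal distribution S(x) of its total cross-sectional area:

$$ D_{wave} = -\frac{\rho V^2}{4\pi}\int_0^L\!\!\int_0^L S''(x)\,S''(\xi)\,\ln\lvert x-\xi\rvert\,dx\,d\xi. $$

Abrupt changes in S(x), such as the area suddenly added where the wing joins the fuselage, are heavily penalized by the second derivatives in the integrand; pinching the fuselage at the wing keeps the total area distribution smooth and close to the low-drag ideal.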

38. Richard T. Whitcomb, "A Study of the Zero-Lift Drag-Rise Characteristics of Wing-Body Combinations Near the Speed of Sound," NACA RM-L52H08 (Sept. 3, 1952).
39. Richard T. Whitcomb and Thomas L. Fischetti, "Development of a Supersonic Area Rule and an Application to the Design of a Wing-Body Combination Having High Lift-to-Drag Ratios," NACA RM-L53H31a (Aug. 18, 1953); Richard T. Whitcomb, "Some Considerations Regarding the Application of the Supersonic Area Rule to the Design of Airplane Fuselages," NACA RM-L56E23a (July 3, 1956).


As he worked to validate the area rule concept, Whitcomb moved next door to the 8-Foot Transonic Pressure Tunnel (TPT) after it opened in 1953. His colleagues John Stack, Eugene C. Draley, Ray H. Wright, and Axel T. Mattson designed the facility from the outset as a slotted-wall transonic tunnel with a maximum speed of Mach 1.2.40 In what quickly became known as "Dick Whitcomb's tunnel," he developed and validated two additional aerodynamic contributions in the decades that followed: the supercritical wing and winglets.

Beginning in 1964, Whitcomb wanted to develop an airfoil for commercial aircraft that delayed the onset of high transonic drag near Mach 1 by reducing air friction and turbulence across an aircraft's major aerodynamic surface, the wing. Whitcomb intuitively went against conventional airfoil design, envisioning a smoother flow of air over what was, in effect, a conventional airfoil turned upside down. Whitcomb's airfoil was flat on top with a downward-curved rear section. The blunt leading edge facilitated better takeoff, landing, and maneuvering performance, as the airfoil slowed the airflow, which lessened drag and buffeting and improved stability. Spending days at a time in the 8-Foot TPT, he validated his concept with a model he made with his own hands. He called his innovation a "supercritical wing," combining "super" (meaning "beyond") with the "critical" Mach number, the speed at which supersonic flow first reveals itself above the wing.41 After a successful flight program was conducted at NASA Dryden from 1971 to 1973, the aviation industry incorporated the supercritical wing into a new generation of aircraft, including subsonic transports, business jets, Short Take-Off and Landing (STOL) aircraft, and unmanned aerial vehicles (UAVs).42
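The terminology has a precise meaning worth recording (a standard definition, summarized here for context). The critical Mach number is the freestream Mach number at which the accelerated flow over the wing first reaches the local speed of sound; it occurs when the most negative pressure coefficient on the surface falls to

$$ C_{p,cr} = \frac{2}{\gamma M_{cr}^2}\left[\left(\frac{2+(\gamma-1)M_{cr}^2}{\gamma+1}\right)^{\gamma/(\gamma-1)} - 1\right], $$

with γ ≈ 1.4 for air. A supercritical airfoil cruises beyond this Mach number (hence the name) but is shaped so that the pocket of supersonic flow on the upper surface ends in only a weak shock, postponing the drag rise.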

Whitcomb's continual quest to improve subsonic aircraft led him to investigate the wingtip vortex, the turbulent air found at the end of an airplane wing that creates induced drag, as part of the Aircraft Energy Efficiency (ACEE) program. His solution was the winglet, a vertical winglike surface that extended above, and sometimes below, the tip of each wing. Whitcomb and his research team in the 8-Foot TPT investigated the drag-reducing properties of winglets for a first-generation, narrow-body subsonic jet transport from 1974 to 1976.43 Whitcomb found that winglets reduced drag by approximately 20 percent and doubled the improvement in the lift-to-drag (L/D) ratio, to 9 percent, which boosted performance by enabling higher cruise speeds. The first jet-powered airplane to enter production with winglets was the Learjet Model 28, in 1977. The first large U.S. commercial transport to incorporate winglets, the Boeing 747-400, followed in 1985.44
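Why a vertical surface at the tip reduces drag can be seen from the classical induced-drag relation, offered here as textbook context rather than as Whitcomb's own analysis:

$$ C_{D,i} = \frac{C_L^2}{\pi e\,AR}, \qquad AR = \frac{b^2}{S}, $$

where AR is the wing aspect ratio and e is the span-efficiency factor. By diffusing the tip vortex, a well-designed winglet raises the effective span efficiency, acting much like a span extension without the root bending-moment penalty of a physically longer wing, and thereby cuts the induced-drag term that dominates at cruise lift coefficients.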


Unlocking the Mysteries of Flutter: Langley's Transonic Dynamics Tunnel

The example of the Langley Transonic Dynamics Tunnel (TDT) illustrates how the NACA and NASA took an unsatisfactory tunnel and converted it into one capable of contributing to the solution of longstanding aerospace research problems. The Transonic Dynamics Tunnel began operations as the 19-Foot Pressure Tunnel in June 1939. The NACA design team, which included Smith J. DeFrance and John F. Parsons, wanted to address continued problems with scale effects. Their solution resulted in the first large-scale high-pressure tunnel. Primarily, the tunnel was to evaluate propellers and wings at high Reynolds numbers. Researchers were to use it to study the stability and control characteristics of aircraft models as well. Because the tunnel could generate a speed of only 330 mph in its closed-throat test section, the NACA shifted the high-speed propeller work to another new facility, the 500 mph 16-Foot High-Speed Tunnel. The slower 19-Foot Pressure Tunnel pressed on in the utilitarian work of testing models at high Reynolds numbers.45

generated natural air or a refrigerant (Freon-12 and later R-134a) test medium. The use of gas improved full-scale aircraft simulation.46 It produced higher Reynolds numbers, eased fabrication of scaled models, reduced tunnel power requirements, and, in the case of rotary wing models, reduced model power requirements.47 After 8 years of design, calibration, and conversion, the TDT became the world’s first aeroelastic testing tunnel, becoming operational in 1960. The tunnel was ready for its first challenge: the mysterious crashes of the first American turboprop airliner, the Lockheed L-188 Electra II. The Electra entered commercial service with American Airlines in December 1958. Powered by 4 Allison 501 turboprop engines, the $2.4-million Electra carried approximately 100 passengers while cruising at 400 mph. On September 29, 1959, Braniff Airways Flight 542 crashed near Buffalo, TX, with the loss of all 34 people aboard the new Electra airliner. A witness saw what appeared to be lightning followed by a ball of fire and a shrieking explosion. The 2.5- by 1-mile debris field included the left wing, which settled over a mile away from the main wreckage. The initial Civil Aeronautics Board crash investigation revealed that failure of the left wing about a foot from the fuselage in flight led to the destruction of the airplane.48 There was no indication of the exact cause of the wing failure. The prevailing theories were sabotage or pilot and crew error. The crash of a Northwest Orient Airlines Electra near Tell City, IN, on March 17, 1960, with a loss of 63 people provided an important clue. The right wing landed 2 miles from the crash site. Federal and Lockheed investigators believed that violent flutter ripped the wings off both Electras, but they did not know the specific cause.49

The future of the new American airliner fleet was at stake. While the tragic story of the Electra unfolded, the Langley Transonic Dynamics Tunnel became operational in early 1960. NASA quickly prepared a one-eighth-scale model of an Electra that featured rotating propellers, simulated fuel load changes, and different engine-mount structural configurations. Those features would be important to the wind tunnel tests because a Lockheed engineer believed that the Electra experienced propeller-whirl flutter, a phenomenon stimulated by engine gyroscopic torques, propeller forces and moments, and the aerodynamic loads acting on the wings. Basically, a design flaw (weakened engine mounts) allowed the engine nacelles and the wings to oscillate at the same frequency, which led to catastrophic failure. Reinforced engine mounts ensured that the Electra continued operations through the 1960s and 1970s.50

Flutter has been a consistent problem for aircraft since the 1960s, and the Transonic Dynamics Tunnel contributed to the refinement of many aircraft, including frontline military transports and fighters.

50. Ibid., p. 80.




The Lockheed C-141 Starlifter transport experienced tail flutter in its original configuration. The horizontal tail of the McDonnell-Douglas F-15 Eagle all-weather air superiority fighter-bomber fluttered.51 The inclusion of air-to-air and air-to-ground missiles, bombs, electronic countermeasures pods, and fuel tanks produced wing flutter on the General Dynamics F-16 Fighting Falcon lightweight fighter. NASA and General Dynamics undertook a combined computational, wind tunnel, and flight program from June 1975 to March 1977. The TDT tests sought to minimize expensive flight-testing. They verified analytical methods for predicting flutter and established practical operational procedures specifying which portions of the fuel tanks had to be emptied first to delay the onset of flutter.52 The TDT offered versatility beyond the investigation of flutter on fixed wing aircraft. Tunnel personnel also conducted performance, load, and stability tests of helicopter and tilt rotor configurations. Researchers in the space program used the tunnel to determine the effects of ground-wind loads on launch vehicles. Whether for a fixed wing airplane, a rotary wing aircraft, or a spacecraft, the TDT evaluated the effect of wind gusts on flying vehicles.53 The Cold War and the Space Age In 1958, the new NASA stood on a firm foundation for hypersonic and space research. Throughout the 1950s, NACA researchers had addressed the challenge of atmospheric reentry through their work on intercontinental ballistic missiles (ICBMs) for the military. The same fundamental design problems existed for ICBMs, spacecraft, interplanetary probes, and hypersonic aircraft. Each of the NASA Centers specialized in a specific aspect of hypersonic and hypervelocity research that resulted from its heritage as a NACA laboratory. Langley’s emphasis was on the creation of facilities applicable to hypersonic cruise aircraft and reentry vehicles—including winged reentry. Ames explored the extreme temperatures and the design shapes that could withstand them as vehicles

returned to Earth from space. Researchers at Lewis focused on propulsion systems for these new craft. With the impetus of the space race, each Center worked with a growing collection of hypersonic and hypervelocity wind tunnels that ranged from conventional aerodynamic facilities to radically different configurations such as shock tubes, arc-jets, and new tunnels designed for the evaluation of aerodynamic heating on spacecraft structures.54 The Advent of Hypersonic Tunnel and Aeroballistic Facilities John V. Becker at Langley led the way in the development of conventional hypersonic wind tunnels. He built America’s first hypersonic wind tunnel in 1947, with an 11-inch test section and the capability of Mach 6.9 flow. To T.A. Heppenheimer, it was “a major advance in hypersonics,” because Becker had built the discipline’s first research instrument.55 Becker and Eugene S. Love followed that success with their design of the 20-Inch Hypersonic Tunnel in 1958. Becker, Love, and their colleagues used the tunnel for the investigation of heat transfer, pressure, 54. Baals and Corliss, Wind Tunnels of NASA, pp. 86, 101; T.A. Heppenheimer, Facing the Heat Barrier: A History of Hypersonics, NASA SP-2007-4232 (Washington, DC: GPO, 2007), p. 42. 55. Ibid., pp. xi, 2.




and forces acting on inlets and complete models at Mach 6. The facility featured an induction drive system that ran for approximately 15 minutes in a nonreturn circuit operating at 220–550 psia (pounds-force per square inch absolute).56 The need for higher Mach numbers led to tunnels that did not rely upon the creation of a flow of air by fans. A counterflow tunnel featured a gun that fired a model into a continual onrushing stream of gas or air, which made it an effective tool for supersonic and hypersonic testing. An impulse wind tunnel created high temperature and pressure in a test gas through an explosive release of energy. That expanded gas burst through a nozzle at hypersonic speeds and over a model in the test section in milliseconds. The two types of impulse tunnels—hotshot and shock—introduced the test gas differently and were important steps in reaching ever-higher speeds, but NASA required even faster tunnels.57 The companion to a hotshot tunnel was an arc-jet facility, which was capable of evaluating spacecraft heat shield materials under the extreme heat of planetary reentry. An electric arc preheated the test gas in the stilling chamber upstream of the nozzle to temperatures of 10,000–20,000 °F. Injected under pressure into the nozzle, the heated gas created a flow that was sustainable for several minutes at low densities and supersonic Mach numbers. The electric arc required over 100,000 kilowatts of power. Unlike the hotshot, the arc-jet could operate continually.58 NASA combined these different types of nontraditional tunnels into the Ames Hypersonic Ballistic Range Complex in the 1960s.59 The Ames Vertical Gun Range (1964) simulated planetary impacts with various model-launching guns. Ames researchers used the Hypervelocity Free-Flight Aerodynamic Facility (1965) to examine the aerodynamic characteristics of atmospheric entry and hypervelocity vehicle configurations. The research programs investigated Earth atmosphere entry (Mercury, Gemini, Apollo,

and Shuttle), planetary entry (Viking, Pioneer-Venus, Galileo, and Mars Science Lab), supersonic and hypersonic flight (X-15), aerobraking configurations, and scramjet propulsion studies. The Electric Arc Shock Tube (1966) enabled the investigation of the effects of radiation and ionization that occurred during high-velocity atmospheric entries. The shock tube fired a gaseous bullet at a light-gas gun, which fired a small model into the onrushing gas.60 The NACA also investigated the use of test gases other than air. Designed by Antonio Ferri, Macon C. Ellis, and Clinton E. Brown, the Gas Dynamics Laboratory at Langley became operational in 1951. One facility was a high-pressure shock tube consisting of a constant-area tube 3.75 inches in diameter, a 20-inch test section, a 14-foot-long high-pressure chamber, and a 70-foot-long low-pressure section. The induction drive system consisted of a central 300-psi tank farm that provided heated fluid flow at a maximum speed of Mach 8 in a nonreturn circuit at a pressure of 20 atmospheres. Langley researchers investigated aerodynamic heating and fluid mechanical problems at speeds above the capability of conventional supersonic wind tunnels to simulate hypersonic and space-reentry conditions. For the space program, NASA used pure nitrogen and helium instead of heated air as the test medium to simulate reentry speeds.61 NASA built the similar Ames Thermal Protection Laboratory in the early 1960s to solve reentry materials problems for a new generation of craft, whether designed for Earth reentry or the penetration of the atmospheres of the outer planets. A central bank of 10 test cells provided the pressurized flow. Specifically, the Thermal Protection Laboratory found solutions for many vexing heat shield problems associated with the Space Shuttle, interplanetary probes, and intercontinental ballistic missiles. Called the “suicidal wind tunnel” by Donald D. Baals and William R. Corliss because it was self-destructive, the Ames Voitenko Compressor was the only method for replicating the extreme velocities required for the design of interplanetary space probes. It was based on the Voitenko

concept of 1965, which proposed that a high-velocity explosive (shaped) charge developed for military use could be used to accelerate shock waves. Voitenko’s compressor consisted of a shaped charge, a malleable steel plate, and the test gas. At detonation, the shaped charge exerted pressure on the steel plate to drive it and the test gas forward. Researchers at the Ames Laboratory adapted the Voitenko compressor concept to a self-destroying shock tube composed of a 66-pound shaped charge and a glass-walled tube 1.25 inches in diameter and 6.5 feet long. Observation of the tunnel in action revealed that the shock wave traveled well ahead of the rapidly disintegrating tube. The velocities generated, upward of 220,000 feet per second, could not be reached by any other method.62 Langley, building upon a rich history of research in high-speed flight, started work on two tunnels at the moment of transition from the NACA

62. Baals and Corliss, Wind Tunnels of NASA, p. 92.



to NASA. Eugene Love designed the Continuous Flow Hypersonic Tunnel for nonstop operation at Mach 10. A series of compressors pushed high-speed air through a 1.25-inch square nozzle into the 31-inch square test section. A 13,000-kilowatt electric resistance heater raised the air temperature to 1,450 °F in the settling chamber, while large water coolers and channels kept the tunnel walls cool. The tunnel became operational in 1962 and proved instrumental in the study of the aerodynamic performance and heat transfer of winged reentry vehicles such as the Space Shuttle.63 The 8-Foot High-Temperature Structures Tunnel, opened in 1967, permitted full-scale testing of hypersonic and spacecraft components. By burning methane in air at high pressure and expanding the combustion products through a hypersonic nozzle into the tunnel, Langley researchers could test structures at Mach 7 speeds and at temperatures of 3,000 °F. Too late for the 1960s space program, the tunnel was instrumental in the testing of the insulating tiles used on the Space Shuttle.64 NACA researchers Richard R. Heldenfels and E. Barton Geer developed the 9- by 6-Foot Thermal Structures Tunnel to test aircraft and missile structural components operating under the combined effects of aerodynamic heating and loading. The tunnel became operational in 1957 and featured a Mach 3 drive system consisting of 600-psia air stored in a tank farm filled by a high-capacity compressor. The spent air simply exhausted to the atmosphere. Modifications included additional air storage (1957), a high-speed digital data system (1959), a subsonic diffuser (1960), a topping compressor (1961), and a boost heater system that generated 2,000 °F of heat (1963). NASA closed the 9- by 6-Foot Thermal Structures Tunnel in September 1971 after metal fatigue in the air storage field led to an explosion that destroyed part of the facility and damaged nearby buildings.65 NASA’s wind tunnels contributed to the growing refinement of spacecraft technology. The multiple design changes made during the transition from the Mercury program to the Gemini program and the need for more information on the effects of angle of attack, heat transfer, and surface pressure resulted in a new wind tunnel and flight-test program. Wind tunnel tests of the Gemini spacecraft were conducted in the range

of Mach 3.51 to 16.8 in the Langley Unitary Plan Wind Tunnel and in tunnels at AEDC and Cornell University. The flight-test program gathered data from the first four launches and reentries of Gemini spacecraft.66 Correlation revealed that the two independent sets of data were in agreement.67


Applying Hypersonic Test Facilities to Hypersonic Vehicle Design One of NASA’s first flight research studies was the X-15 program (1959–1968). The program investigated flight at five or more times the speed of sound at altitudes reaching the fringes of space. Launched from the wing of NASA’s venerable Boeing B-52 mother ship, the North American X-15 was a true “aerospace” plane, with performance that went well beyond the capabilities of existing aircraft within and beyond the atmosphere. Long, black, rocket-powered, and distinctive with its cruciform tail, the X-15 became the highest-flying airplane in history. The X-15 flew as high as 67 miles (354,200 feet) above the Earth and as fast as Mach 6.7, or 4,534 mph. At those speeds and altitudes, the X-15 pilots, drawn from the ranks of leading military and civilian aviators, had to wear pressure suits, and many of them earned astronaut wings. North American used titanium as the primary structural material and covered it with a new high-temperature nickel alloy called Inconel-X. The X-15 relied upon conventional controls in the atmosphere but used reaction-control jets to maneuver in space. The 199 flights of the X-15 program generated important data on high-speed flight and provided valuable lessons for NASA’s space program. The air traveling over the X-15 at hypersonic speeds generated enough friction and heat that the outside surface of the airplane reached 1,200 °F. A dozen Langley and Ames wind tunnels contributed to the X-15 program. The initial aerodynamic data for the X-15 came from tests in the pioneering Mach 6.9 11-Inch Hypersonic Tunnel developed by John Becker at Langley in the late 1940s. Fifty percent of the work conducted in the tunnel was for the X-15 program, which focused on aerodynamic heating, stability and control, and load

Part of the Project Fire study included the simulation of reentry heating on high-temperature materials in the 9- by 6-Foot Thermal Structures Tunnel. NASA.

distribution studies. The stability and control investigations contributed to the research airplane’s distinctive cruciform tail. The 7- by 10-Foot High-Speed Wind Tunnel enabled the study of the X-15’s separation from the B-52 at subsonic speeds, a crucial phase in the test flight. At Ames, gun-launched models fired into the free-flight tunnels yielded shadowgraphs of the shock wave patterns between Mach 3.5 and 6, the performance regime for the X-15. The Unitary Plan Supersonic Tunnel generated data on aerodynamic forces and heat transfer. The Lewis Research Center facilities provided additional data from supersonic jet-plume and rocket-nozzle studies.68 There was a concern that wind tunnel tests would not provide correct data for the program. First, the cramped size of the tunnel test sections precluded more accurate full-scale testing. Second, none of NASA’s tunnels was capable of replicating the extreme heat generated by hypersonic flight, which was believed to be a major factor in flying at those speeds. The flights of the X-15 validated the wind tunnel

testing and revealed that the flight and tunnel values for lift, drag, and stability were in agreement at speeds up to Mach 10.69 The wind tunnels of NASA continued to reflect the Agency’s flexibility in the development of craft that operated in and out of the Earth’s atmosphere. Specific components evaluated in the 9- by 6-Foot Thermal Structures Tunnel included the X-15 vertical tail, the heat shields for the Centaur launch vehicle and Project Fire entry vehicle, and components of the Hawk, Falcon, SAM-D, and Minuteman missiles. Researchers also subjected humans, equipment, and structures such as the Mercury spacecraft to the 162-decibel, high-intensity noise at the tunnel exit. As part of Project Fire, in the early 1960s, personnel in the tunnel evaluated the effects of reentry heating on spacecraft materials.70 The Air Force’s failed X-20 Dyna-Soar project attempted to develop a winged spacecraft. The X-20 never flew, primarily because of bureaucratic entanglements. NACA researchers H. Julian Allen and Alfred J. Eggers, Jr., working on ballistic missiles, had found that a blunt shape made reentry possible.71 NASA developed a series of “lifting bodies”—capable of reentry and then controllable in the atmosphere—to test unconventional blunt configurations. The blunt nose and wing leading edges of the Space Shuttle orbiters, which launched into space and then glided to landings after reentry beginning with Columbia in April 1981, owed their success to the lifting body tests NASA flew in the 1960s and 1970s. The knowledge gained in those programs contributed directly to the Space Shuttle of the 1980s. Analyses of the Shuttle reflected the tradition dating back to the Wright brothers of correlating ground, or wind tunnel, data with flight data. Langley researchers conducted an extended aerodynamic and aerothermodynamic comparison of hypersonic flight- and ground-test results for the program. The research team asserted that the “survival of the vehicle is a tribute to the overall design philosophy, including ground test predictions, and to the designers of the Space Shuttle.”72

H. Julian Allen used the 8- by 7-foot test section of the NACA Ames Unitary Plan Wind Tunnel during the development of his blunt-body theory. NASA.

NASA’s latest hypersonic research program, called Hyper-X, investigated hypersonic flight with a new type of aircraft engine, the scramjet, or supersonic combustion ramjet, flown on the X-43A. The previous flights of the X-15, the lifting bodies, and the Space Shuttle had relied upon rocket power for hypersonic propulsion. A conventional air-breathing jet engine, which relies upon the mixture of air and atomized fuel for combustion, can propel aircraft only to speeds approaching Mach 4. A scramjet can operate well



past Mach 5 because the process of combustion takes place at supersonic speeds. Mounted on the front of a rocket booster and launched from a B-52 at 40,000 feet, the 12-foot-long, 2,700-pound X-43A first flew in March 2004. During the 11-second flight, the little engine reached Mach 6.8 and demonstrated the first successful operation of a scramjet. In November 2004, a second flight achieved Mach 9.8, the fastest speed ever attained by an air-breathing engine. Much like Frank Whittle and Hans von Ohain’s turbojets and the Wrights’ invention of the airplane, the X-43A offered the promise of a new revolution in aviation, that of high-speed global travel and a cheaper means to access space. The diminutive X-43A allowed for realistic testing at NASA Langley. First, it was full-scale for the specific scramjet tests. Second, it served as a scale model for the hypersonic engines intended for future aerospace craft. The majority of the testing for the Hyper-X program occurred in the Arc-Heated Scramjet Test Facility, which was the primary Mach 7 scramjet test facility. Introduced in the late 1970s, the Langley facility generated the appropriate flows at 3,500 °F. Additional transonic and supersonic tests of 30-inch X-43A models took place in the 16-Foot Transonic Tunnel and the Unitary Plan Wind Tunnel.73 Researchers in the Langley Aerothermodynamics Branch worked on a critical phase of the flight: the separation of the X-43A from the Pegasus booster. The complete Hyper-X Launch Vehicle stack, consisting of the scramjet and booster, climbed to 20,000 feet under the wing of NASA’s Boeing B-52 Stratofortress in captive-carry flight. Clean separation between the two in less than a second was essential to the success of the flight, and the X-43A’s asymmetrical shape made such a clean separation difficult. The Langley team required a better aerodynamic understanding of multiple configurations: the combined stack, the X-43A and the Pegasus in close proximity, and each vehicle in open, free flight. The Langley 20-Inch Mach 6 and 31-Inch Mach 10 blow-down tunnels were used for launch, postlaunch, and free-flyer hypersonic testing.74 Matching the Tunnel to the Supercomputer The use of sophisticated wind tunnels and their accompanying complex mathematical equations led observers early on to call aerodynamics the 73. Heppenheimer, Facing the Heat Barrier, pp. 208, 271, 273. 74. William C. Woods, Scott D. Holland, and Michael DiFulvio, “Hyper-X Stage Separation Wind-Tunnel Test Program,” Journal of Spacecraft and Rockets, vol. 38 (Nov.–Dec. 2001), p. 811.




A model of the X-43A and the Pegasus Launch Vehicle in the Langley 31-Inch Mach 10 Tunnel. NASA.

“science” of flight. There were three major methods of evaluating an aircraft or spacecraft: theoretical analysis, the wind tunnel, and full-flight testing. The order in which researchers used them varied. Ideally, researchers originated a theoretical goal and began their work in a wind tunnel, with the final confirmation of results occurring during full-flight testing. Researchers at Langley sometimes addressed a challenge first by studying it in flight, then moving to the wind tunnel for more extreme testing, such as dangerous and unpredictable high speeds, and then following up with the creation of a theoretical framework. The lack of knowledge of the effect of Reynolds number was at the root of the inability to trust wind tunnel data. Moreover, tunnel structures such as walls, struts, and supports affected the performance of a model in ways that were hard to quantify.75 From the early days of the NACA and other aeronautical research facilities, an essential component of the science was the “computer.” Human computers, primarily women, worked laboriously to finish the myriad calculations needed to interpret the data generated in wind

tunnel tests. Data acquisition became increasingly sophisticated as the NACA grew in the 1940s. The Langley Unitary Plan Wind Tunnel possessed the capability of remote and automatic collection of pressure, force, and temperature data from 85 locations at 64 measurements a second, which was undoubtedly faster than manual collection. Computers processed the data and delivered it via monitors or automated plotters to researchers during the course of the test. The near-instantaneous availability of test data was a leap from the manual (and visual) inspection of industrial scales during testing.76 Beginning in the 1970s, computers were capable of mathematically calculating the nature of fluid flows quickly and cheaply, which contributed to the idea of what Baals and Corliss called the “electronic wind tunnel.”77 No longer were computers only a tool to collect and interpret data faster. With the ability to perform billions of calculations in seconds to mathematically simulate conditions, the new supercomputers potentially could perform the job of the wind tunnel. The Royal Aeronautical Society published The Future of Flight in 1970, which included an article on computers in aerodynamic design by Bryan Thwaites, a professor of theoretical aerodynamics at the University of London. His essay would be a clarion call for the rise of computational fluid dynamics (CFD) in the late 20th century.78 Moreover, improvements in computers and algorithms drove down the operating time and cost of computational experiments. At the same time, the time and cost of operating wind tunnels increased dramatically by 1980. The fundamental limitations of wind tunnels centered on the age-old problems related to model size and Reynolds number, temperature, wall interference, model support (“sting”) interference, unrealistic aeroelastic model distortions under load, stream nonuniformity, and unrealistic turbulence levels. Problematic results from the use of test gases were a concern for the design of vehicles for flight in the atmospheres of other planets.79

76. Baals and Corliss, Wind Tunnels of NASA, p. 71. 77. Ibid., p. 136. 78. Reference in James R. Hansen, The Bird is on the Wing: Aerodynamics and the Progress of the American Airplane (College Station: Texas A & M University Press, 2004), p. 221. 79. Victor L. Peterson and William F. Ballhaus, Jr., “History of the Numerical Aerodynamic Simulation Program,” in Paul Kutler and Helen Yee, Supercomputing in Aerospace: Proceedings of a Symposium Held at the NASA Ames Research Center, Moffett Field, CA, Mar. 10–12, 1987, NASA CP-2454 (1987), pp. 1, 3.




The control panels of the Langley Unitary Wind Tunnel in 1956. NASA.

The work of researchers at NASA Ames bore out Thwaites’s assertions about the potential of CFD to benefit aeronautical research. Ames researcher Dean Chapman highlighted the new capabilities of supercomputers in the Dryden Lecture in Research for 1979, delivered at the American Institute of Aeronautics and Astronautics Aerospace Sciences Meeting in New Orleans, LA, in January of that year. To Chapman, innovations in computer speed and memory led to an “extraordinary cost reduction trend in computational aerodynamics,” while the cost of wind tunnel experiments had been “increasing with time.” He brought to the audience’s attention that a meager $1,000 and 30 minutes of computer time allowed the numerical simulation of flow over an airfoil. The same task in 1959 would have cost $10 million and taken 30 years to complete. Chapman made it clear that computers could cure the “many ills of wind-tunnel and turbomachinery experiments” while providing “important new technical capabilities for the aerospace industry.”80 80. Dean R. Chapman, “Computational Aerodynamics Development and Outlook,” Dryden Lecture in Research for 1979, American Institute of Aeronautics and Astronautics, Aerospace Sciences Meeting, New Orleans, LA, Jan. 15–17, 1979, AIAA-1979-129 (1979), p. 1; Baals and Corliss, Wind Tunnels of NASA, p. 137.




The crowning achievement of the Ames work was the establishment of the Numerical Aerodynamic Simulation (NAS) Facility, which began operations in 1987. The facility’s Cray-2 supercomputer was capable of a sustained 250 million computations a second, and 1.72 billion per second for short periods, with the possibility of expanding sustained capacity to 1 billion computations per second. That capability reduced the time and cost of developing aircraft designs and enabled engineers to experiment with new designs without resorting to the expense of building a model and testing it in a wind tunnel. Ames researcher Victor L. Peterson said the new facility, and those like it, would allow engineers “to explore more combinations of the design variables than would be practical in the wind tunnel.”81 The impetus for the NAS program arose from several factors. First, its creation recognized that computational aerodynamics offered new capabilities in aeronautical research and development. Primarily, that meant the use of computers as a complement to wind tunnel testing, which, because of the relative youth of the discipline, also placed heavy demands on those computer systems. The NAS Facility represented the committed role of the Federal Government in the development and use of large-scale scientific computing systems dating back to the use of the ENIAC for hydrogen bomb and ballistic missile calculations in the late 1940s.82 It was clear to NASA that supercomputers were part of the Agency’s future in the late 1980s. Futuristic projects that involved NASA supercomputers included the National Aero-Space Plane (NASP), which had an anticipated speed of Mach 25; new main engines and a crew escape system for the Space Shuttle; and refined rotors for helicopters. Most importantly from the perspective of supplanting the wind tunnel, a supercomputer generated data and converted them into pictures that captured flow phenomena that previously could not be simulated.83 In other words, the “mind’s eye” of the wind tunnel engineer could be captured on film. Nevertheless, computer simulations were not to replace the wind tunnel. At a meeting sponsored by the Advisory Group for Aerospace

Research & Development (AGARD) on the integration of computers and wind tunnel testing in September 1980, Joseph G. Marvin, the chief of the Experimental Fluid Dynamics Branch at Ames, asserted that CFD was an “attractive means of providing that necessary bridge between wind-tunnel simulation and flight.” Before that could happen, a careful and critical program of comparison with wind tunnel experiments had to take place. In other words, the wind tunnel was the tool to verify the accuracy of CFD.84 Dr. Seymour M. Bogdonoff of Princeton University commented in 1988 that “computers can’t do anything unless you know what data to put in them.” The aerospace community still had to discover and document the key phenomena to realize the “future of flight” in the hypersonic and interplanetary regimes. The next step was inputting the data into the supercomputers.85 Researchers Victor L. Peterson and William F. Ballhaus, Jr., who worked in the NAS Facility, recognized the “complementary nature of computation and wind tunnel testing,” where the “combined use” of each captured the “strengths of each tool.” Wind tunnels and computers brought different strengths to the research. The wind tunnel was best for providing detailed performance data once a final configuration was selected, especially for investigations involving complex aerodynamic phenomena. Computers helped researchers arrive at and analyze that final configuration through several steps. They allowed development of design concepts such as the forward-swept wing or jet flap for lift augmentation and offered a more efficient process of choosing the most promising designs to evaluate in the wind tunnel. Computers also made the instrumentation of test models easier and corrected wind tunnel data for scaling and interference errors.86


The Future of the Tunnel in the Era of CFD A longstanding flaw of wind tunnels was the aerodynamic interference caused by the “sting,” the support connecting the model to the test instrumentation. Researchers around the world experimented with magnetic suspension systems beginning in the late 1950s. Langley,

in collaboration with the AEDC, constructed the 13-Inch Magnetic Suspension and Balance System (MSBS). The transparent test section measured about 12.6 inches high and 10.7 inches wide. Five powerful electromagnets installed in the test section suspended the model and measured lift, drag, side force, and pitching and yawing moments. Control of the iron-cored model about these five axes removed the need for a model support. The lift force of the system enabled the suspension of a 6-pound iron-cored model. The rest of the tunnel was conventional: a continual-flow, closed-throat, open-circuit design capable of speeds up to Mach 0.5.87 When the 13-Inch MSBS became operational in 1965, NASA used the tunnel for wake studies and general research. Persistent problems with the system led to its closing in 1970. New technology and renewed interest revived the tunnel in 1979, and it ran until the early 1990s.88 NASA’s work on magnetic suspension and balance systems led in 1971 to a newfound interest in a wind tunnel capable of generating cryogenic test temperatures. Testing a model at below -150 °F theoretically permitted an increase in Reynolds number. There was a precedent for a cryogenic wind tunnel. In 1945, R. Smelt at the Royal Aircraft Establishment at Farnborough investigated the use of airflow at cryogenic temperatures in a wind tunnel. His work revealed that a cryogenic wind tunnel could be smaller and require less power than a similar ambient-temperature wind tunnel operated at the same pressure, Mach number, and Reynolds number.89 The state of the art in cooling techniques and structural materials required to build a cryogenic tunnel did not exist in the 1940s. American and European interest in the development of a transonic tunnel that generated high Reynolds numbers, combined with advances in cryogenics and structures in the 1960s, revived interest in Smelt’s findings. A team of Langley researchers led by Robert A. Kilgore initiated a study of the viability of a cryogenic wind tunnel. 87. R.P. Boyden, “A Review of Magnetic Suspension and Balance Systems,” AIAA Paper 88-2008 (May 1988); NASA Langley, “NASA’s Wind Tunnels,” IS-1992-05-002-LaRC, May 1992, http://oea.larc.nasa.gov/PAIS/WindTunnel.html, accessed May 26, 2009. 88. Marie H. Tuttle, Deborah L. Moore, and Robert A. Kilgore, “Magnetic Suspension and Balance Systems: A Comprehensive, Annotated Bibliography,” NASA TM-4318 (1991), p. iv; Langley Research Center, “Research and Test Facilities,” p. 9. 89. R. Smelt, “Power Economy in High Speed Wind Tunnels by Choice of Working Fluid and Temperature,” Report No. Aero. c081, Royal Aircraft Establishment, Farnborough, England, Aug. 1945.



The first experiment, with a low-speed tunnel during summer 1972, resulted in an extension of the program into the transonic regime. Kilgore and his team began design of the tunnel in December 1972, and the Langley Pilot Transonic Cryogenic Tunnel became operational in September 1973.90 The pilot tunnel was a continual-flow, fan-driven tunnel with a slotted octagonal test section, 0.3 meters (1 foot) across the flats, and was constructed almost entirely out of aluminum alloy. The normal test medium was gaseous nitrogen, but air could be used at ambient temperatures. The experimental tunnel provided true simulation of full-scale transonic Reynolds numbers (up to 100 × 10⁶ per foot) from Mach 0.1 to 0.9 and was a departure from conventional wind tunnel design. The key was decreasing the air temperature, which increased the density and decreased the viscosity, the factor in the denominator of the Reynolds number. The result was the simulation of full-scale flight conditions at transonic speeds with great accuracy.91 The work of Kilgore and his team generated fundamental conclusions about cryogenic tunnels. First, cooling with liquid nitrogen was practical at the power levels required for transonic testing, and the tunnel was simple to operate. Researchers could predict accurately the amount of time required to cool the tunnel, a basic operational parameter, and the amount of liquid nitrogen needed for testing. Through the use of a simple liquid nitrogen injection system, tunnel personnel could control and evenly distribute the temperature. Finally, the cryogenic tunnel was quieter than an identical tunnel operating at ambient temperature. The experiment was such a success and generated such promising results that NASA reclassified the temporary tunnel as a “permanent” facility and renamed it the 0.3-Meter Transonic Cryogenic Tunnel (TCT).92
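The mechanism can be made explicit with the standard definition of the Reynolds number; the notation below is conventional usage, not the source text’s:

\[
Re = \frac{\rho V L}{\mu}
\]

where \(\rho\) is the density of the test gas, \(V\) the flow velocity, \(L\) a characteristic length fixed by the model, and \(\mu\) the dynamic viscosity. At constant pressure, cooling a gas raises \(\rho\) roughly in proportion to \(1/T\) and lowers \(\mu\), so \(Re\) rises severalfold even though matching a given Mach number in the colder gas, with its lower speed of sound, actually requires a somewhat lower velocity \(V\).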

After 6 years of operation, NASA researchers shared their experiences at the First International Symposium on Cryogenic Wind Tunnels at the University of Southampton, England, in 1979. Their operation of the 0.3-Meter TCT demonstrated that there were no insurmountable problems associated with a variety of aerodynamic tests with gaseous nitrogen at transonic Mach numbers and high Reynolds numbers.


The team found that the injection of liquid nitrogen into the tunnel circuit to induce cryogenic cooling caused no problems with temperature distribution or dynamic response characteristics. Not everything, however, was known about cryogenic tunnels. There would be a significant learning process, which included the challenges of tunnel control, run logic, economics, instrumentation, and model technology.93 Developments in computer technology in the mid-1980s allowed continual improvement in transonic data collection in the 0.3-Meter TCT, which alleviated a long-term problem with all wind tunnels: the walls, floor, and ceiling imposed artificial constraints on flight simulation. The installation of computer-controlled adaptive, or “smart,” tunnel walls in March 1986 lessened airflow disturbances because the walls could expand and contract along their length, width, and height, allowing the addition or expulsion of air. The result was a more realistic simulation of an aircraft flying in the open atmosphere. The 0.3-Meter TCT’s computer system also automatically tailored Mach number, pressure, temperature, and angle of attack to a specific test program and monitored the drive, electrical, lubrication, hydraulic, cooling, and pneumatic systems for dangerous leaks and failures. The success of the 0.3-Meter TCT led to further investigation of smart walls at Langley and Lewis.94 NASA’s success with the 0.3-Meter Transonic Cryogenic Tunnel led to the creation of the National Transonic Facility (NTF) at Langley. Both NASA and the Air Force were considering the construction of a large transonic wind tunnel. NASA proposed a larger cryogenic tunnel, and the Air Force wanted a Ludwieg-tube tunnel. The Federal Government decided in 1974 to fund a facility to meet commercial, military, and scientific needs based on NASA’s pioneering operation of the cryogenic tunnel. Contractors built the tunnel on the site of the 4-Foot Supersonic Pressure Tunnel and incorporated the old tunnel’s drive motors, support buildings, and cooling towers.95

Becoming operational in 1983, the NTF was a high-pressure, cryogenic, closed-circuit wind tunnel with a Mach number range from 0.1 to 1.2 and a Reynolds number range of 4 × 10⁶ to 145 × 10⁶ per foot. It featured a 2.5-meter test section with 12 slots and 14 reentry flaps in the ceiling and floor. Langley personnel designed the drive system to include a fan with variable inlet guide vanes for precise Mach number control. Injected as super-cold liquid and evaporated into a gas, nitrogen was the primary test medium. Air was the test gas in the ambient-temperature mode, while a heat exchanger maintained the tunnel temperature. Thermal insulation of the tunnel’s pressure shell ensured minimal energy consumption. The NTF continues to be one of Langley’s more advanced facilities as researchers evaluate the stability and control, cruise performance, stall buffet onset, and aerodynamic configurations of model aircraft and airfoil sections.96 The movement toward the establishment of national aeronautical facilities led NASA to expand the operational flexibility of the highly successful subsonic 40- by 80-foot wind tunnel at Ames Research Center. A major renovation project added an 80- by 120-foot test section capable of testing a full-size Boeing 737 airliner, making the complex the world’s largest wind tunnel. A central drive system, featuring fans almost 4 stories tall and electric motors capable of generating 135,000 horsepower, created the airflow for both sections; movable vanes directed the air through either section. The 40- by 80-foot test section acted as a closed circuit up to 345 mph. The air driven through the 80- by 120-foot test section traveled up to 115 mph before exhausting into the atmosphere. Each section incorporated a range of model supports to facilitate a variety of experiments. The two sections became operational in 1987 (40- by 80-foot) and 1988 (80- by 120-foot). NASA christened the tunnel the National Full-Scale Aerodynamics Complex (NFAC) at Ames Research Center.97 96. Marie H. Tuttle, Robert A. Kilgore, and Deborah L. Moore, “Cryogenic Wind Tunnels: A Comprehensive, Annotated Bibliography,” NASA TM-4273 (1991), p. iv; NASA Langley, “NASA’s Wind Tunnels,” IS-1992-05-002-LaRC, May 1992, http://oea.larc.nasa.gov/PAIS/WindTunnel.html, accessed May 26, 2009; NASA, “Wind Tunnels at NASA Langley Research Center,” FS-2001-04-64-LaRC, 2001, http://www.nasa.gov/centers/langley/news/factsheets/windtunnels.html, accessed May 28, 2009. 97. H. Kipling Edenborough, “Research at NASA’s NFAC Wind Tunnels,” NASA TM-102827 (June 1990), pp. 1–6; NASA Langley, “NASA’s Wind Tunnels,” IS-1992-05-002-LaRC, May 1992, http://oea.larc.nasa.gov/PAIS/WindTunnel.html, accessed May 26, 2009.




A Pathfinder I advanced transport model being prepared for a test in the super-cold nitrogen and high-pressure environment of the National Transonic Facility (NTF) in 1986. NASA.

Bringing the Tunnel to Industry and Academia NASA has always justified its existence in part by making itself available for outside research. In an effort to advertise the services and capabilities of Langley’s wind tunnels, NASA published the technical memorandum “Characteristics of Major Active Wind Tunnels at the Langley Research Center,” by William T. Shaefer, Jr., in July 1965. Unlike the NACA, whose goal had been to assist industry with its pioneering wind tunnels at a time when there were few facilities to rely upon, NASA’s wind tunnels first and foremost met the needs of the Agency’s fundamental research and development. Secondary to that priority were projects that were important to other Government agencies. Two specific committees handled U.S. Army, Navy, and Air Force requests concerning aircraft, missile, and propulsion projects. Finally, the aerospace industry had access to NASA facilities, primarily the Unitary Plan Wind Tunnels, on a fee basis for the evaluation of proprietary designs. No NASA wind tunnel was to be used for testing that could be done at a commercial facility, and all projects had to be “clearly in the national interest.”98

98. Shaefer, “Characteristics of Major Active Wind Tunnels at the Langley Research Center,” p. 2.



NASA continued to “sell” its tunnels through the following decades. In 1992, the Agency confidently announced:


NASA’s wind tunnels are a national technological resource. They have provided vast knowledge that has contributed to the development and advancement of the nation’s aviation industry, space program, economy and the national security. Amid today’s increasingly fierce international, commercial and technological competition, NASA’s wind tunnels are crucial tools for helping the United States retain its global leadership in aviation and space flight.99 According to this rhetoric, NASA’s wind tunnels were central to the continued leadership of the United States in aerospace. As part of the selling of the tunnels, NASA initiated the Technology Opportunities Showcase (TOPS) in the early 1990s. The program distributed to the aerospace industry a catalog of available facilities similar to a real estate sampler. A prospective user could check a box marked “Please Send More Information” or “Would Like To Discuss Facility Usage” as part of the process. NASA wind tunnels were used on a space-available basis. If the research was of interest to NASA, there would be no facility charge, and the Agency would publish the results. If a manufacturing concern had a proprietary interest and the client did not want the test results to be public, then it had to bear all costs, primarily the cost of using the facility.100 The TOPS evolved into the NASA Aeronautics Test Program (ATP) in the early 21st century, expanding to include all four Research Centers: Langley, Ames, Glenn, and Dryden.101 The ATP offered Government, corporations, and institutions the opportunity to contract 14 facilities, which included a “nationwide team of highly trained and certified staff, whose backgrounds and education encompass every aspect of aerospace testing and engineering,” for a “wide range” of experimental test services that reflected “sixty years of unmatched aerospace test history.” The ATP

99. NASA Langley, “NASA’s Wind Tunnels,” IS-1992-05-002-LaRC, May 1992, http://oea.larc.nasa.gov/PAIS/WindTunnel.html, accessed May 26, 2009. 100. Langley Research Center, “Research and Test Facilities,” p. 12. 101. NASA changed the name of the Lewis Research Center to the John H. Glenn Research Center at Lewis Field in 1999 to recognize the achievements of the astronaut and Ohio Senator.

The Wind Tunnel’s Future Is the wind tunnel obsolete? In a word, no. But the value and merit of the tunnel in the early 21st century must be evaluated in the light of manifold other techniques that researchers can now employ. The range of these new techniques, particularly CFD, coupled with the seeming maturity of the airplane, has led some observers to conclude that there is little need for extensive investment in research, development, and infrastructure.103 That facile assumption has been carried over into the question of whether there is a continued need for wind tunnels. It brings into question the role of the wind tunnel in contemporary aerospace research and development. A 1988 New York Times article titled “In the Space Age, the Old Wind Tunnel Is Being Left Behind” proclaimed “aerospace engineers have hit 102. NASA, “NASA’s Aeronautics Test Program: The Right Facility at the Right Time,” B–1240 (Oct. 2006); NASA, “Aeronautics Test Program,” NF-2009-03-486-HQ (n.d. [2009]). 103. Hansen, The Bird is on the Wing, p. 212.




a dead end in conventional efforts to test designs for the next generation of spaceships, planetary probes and other futuristic flying machines.” The anticipated next generation of spacecraft, expected in the 21st century, would fly at speeds in the escape velocity range and maneuver in and out of planetary atmospheres rather than make the now-familiar single-direction, uncontrolled descents. At the core of the problem was getting realistic flight data from a “nineteenth century invention used by the Wright brothers,” the wind tunnel. William I. Scallion of NASA Langley asserted, “We’ve pushed beyond the capacity of most of our ground facilities.” NASA, the Air Force, and various national universities began work on methods to simulate the speeds, temperatures, stresses, forces, and vibrations challenging the success of these new craft. The proposed solutions were improved wind tunnels capable of higher speeds, the firing of small-scale models atop rockets into the atmosphere, and the dropping of small test vehicles from the Space Shuttle while in orbit.104 The need for new testing methods and facilities reflected the changing nature of aerospace craft missions and design. Several programs perceived to be pathways to the future in the 1980s exemplified the need for new testing facilities. Proponents of the X-30 aerospace plane believed it would be able to take off and fly directly into space by reaching Mach 25, or 17,000 mph, while being powered by air-breathing engines. In 1988, wind tunnels could only simulate speeds up to Mach 12.5. NASA intended the Aeromaneuvering Orbit Transfer Vehicle to be a low-cost “space tug” that could move payloads between high and low Earth orbits beginning in the late 1990s. The vehicle would slow itself in orbit by grazing the Earth’s outer atmosphere with an aerobrake, or lightweight shield, rather than relying upon heavy retrorockets, a technique that was impossible to replicate in a wind tunnel. NASA planned to launch small models from the Space Shuttle for evaluation. The final program concerned new interplanetary probes destined for Mars; Jupiter; and Saturn’s moon, Titan; whose atmospheres were much unlike Earth’s. Spacecraft would no longer just drop back into Earth’s or another planet’s atmosphere from space. For better economy, the craft required the kind of maneuverability and flexibility incorporated into the Space Shuttle.105 104. William J. Broad, “In the Space Age, the Old Wind Tunnel Is Being Left Behind,” New York Times, Jan. 5, 1988, p. C1. 105. Ibid., p. C4.



In 2003, NASA allocated funds for the demolition of unused facilities for the first time in the long history of the Agency. The process required that each of the Research Centers submit listings of target facilities.106 NASA’s Assistant Inspector General for Auditing conducted a survey of the utilization of NASA’s wind tunnels at three Centers in 2003 and reported the findings to the directors of Langley, Ames, and Lewis and to the Associate Administrator for Aerospace Technology. Private industry and the Department of Defense spent approximately 28,000 hours in NASA tunnels in 2002. The number dwindled to 10,000 hours in 2003, dipping to about 2,500 hours in 2008. NASA managers acknowledged there was a direct correlation between a higher user fee schedule introduced in 2002 and the decline in usage. The audit also included the first complete list of tunnel closures for the Agency. Of the 19 closed facilities, NASA classified 5 as having been “mothballed,” with the remaining 14 being “abandoned.”107 Budget pressures also forced NASA to close operating facilities. NASA’s operation of the NFAC proved short-lived; the Agency closed the facility in 2003. Recognizing the need for full-scale testing of rotorcraft and powered-lift V/STOL aircraft, the Air Force leased the facility in 2006 for use by the AEDC. The NFAC became operational again in 2008. Besides aircraft, the schedule at the NFAC accommodated nontraditional test subjects, including wind turbines, parachutes, and trucks.108 In 2005, NASA announced its plan to reduce its aeronautics budget by 20 percent over the following 5 years. The budget cuts included the closing of wind tunnels and other research facilities and the elimination of hundreds of jobs. NASA had spread thin what was left of the aeronautics budget (down $54 million to $852 million) over too many programs. NASA did receive a small increase in its overall budget to cover the costs of the new Moon-Mars initiative, which meant cuts in aviation-related research. In a hearing before the House Science Subcommittee

on Space and Aeronautics to discuss the budget cuts, aerospace industry experts and politicians commented on the future of fundamental aeronautics research in the United States. Dr. John M. Klineberg, a former NASA official and industry executive, asserted that the NASA aeronautics program was “on its way to becoming irrelevant to the future of aeronautics in this country and in the world.” Representative Dennis Kucinich, whose district included Cleveland, the home of NASA Glenn, warned that the United States was “going to take the ‘A’ out” of NASA and that the new Agency was “just going to be the National Space Administration.”109 Philip S. Antón, Director of the RAND Corporation’s Acquisition and Technology Policy Center, spoke before the Committee. RAND had concluded a 3-year investigation that revealed that only 2 of NASA’s 31 wind tunnels warranted closure.110 As to the lingering question of the supremacy of CFD, Antón asserted that NASA should pursue wind tunnel, CFD, and flight testing together to meet national testing needs. RAND recommended a veritable laundry list of suggested improvements that ranged from the practical—the establishment of a minimum set of facilities that could serve national needs and the financial support to keep them running—to the visionary—continued investment in CFD and focus on the challenge of hypersonic air-breathing research. RAND analysts had concluded in 2004 that NASA’s wind tunnel facilities continued to be important to American competitiveness in the military, commercial, and space sectors of the world aerospace industry while “management issues” were “creating real risks.” NASA needed a clear aeronautics test technology vision based on the idea of a national test facility plan that identified and maintained a minimum set of facilities. For RAND, the bottom line was the establishment of shared financial support that kept NASA’s underutilized but essential facilities from crumbling into ruin.111 Antón found the alternative—the use of foreign tunnels, a practice many of the leading

aerospace manufacturers embraced—problematic because of the myriad security, access, and availability challenges.112 Questions about NASA’s wind tunnel heritage and the Agency’s viability in the international aerospace community came to a head in 2009. Those questions centered on the planned demolition of the most famous, recognizable, and oldest operating research facility at Langley, the 30- by 60-Foot Tunnel, in 2009 or 2010. Better known by its NACA name, the Full-Scale Tunnel was, according to many, “old, inefficient and not designed for the computer age” in 2009.113 The Deputy of NASA’s Aeronautics Test Program, Tim Marshall, explained that the Agency decided “to focus its abilities on things that are strategically more important to the nation.” NASA’s focus was supersonic and hypersonic research that required smaller, faster tunnels for experiments on new technologies such as scramjets, not subsonic testing. The FST’s last operator, Old Dominion University, had an important mission: refining the aerodynamics of motor trucks at a time of high fuel prices. The university was told that economics, NASA’s strategic mission, and the desire of the Agency’s landlord, the U.S. Air Force, to regain the land, even if only for a parking lot in a flood zone, overrode its desire to continue using the FST for landlocked aerodynamic research.114 In conclusion, wind tunnels have been a central element in the success of NACA and NASA research throughout the century of flight. They are the physical representation of the rich and dynamic legacy of the organization. Their evolution, shaped by the innovative minds at Langley, Ames, and Glenn, paralleled the continual development of aircraft and spacecraft as national, economic, and technological missions shaped both. As newer, smaller, and cheaper digital technologies emerged in the late 20th century, wind tunnels and the testing methodologies pioneered in them still retained a place in the aerospace engineer’s toolbox, no matter how low-tech they appeared. What resulted was a richer fabric of opportunities and modes of research that continued to contribute to the future of flight.

The Micarta Controllable Pitch Propeller, pictured second from left, at the National Museum of the U.S. Air Force. Designed by McCook Field (now Wright-Patterson Air Force Base) engineers in 1922, this 9-foot propeller changed pitch in flight. U.S. Air Force.


Case 7 | Evolving the Modern Composite Airplane


Stephen Trimble

Structures and structural materials have undergone progressive refinement. Originally, aircraft were fabricated much like ships and complex wooden musical instruments: of wood, wire, and cloth. Then, metal gradually supplanted these materials. Now, high-strength composite materials have become the next generation, allowing for synthetic structures with even better structural properties for much less weight. NASA has assiduously pursued development of composite structures.


WHEN THE LOCKHEED MARTIN X-55 advanced composite cargo aircraft (ACCA) took flight early on the morning of June 2, 2009,1 it marked a watershed moment in a century-long quest to marry the high-strength yet lightweight properties of plastics with the structure required to support a heavily loaded flying vehicle. As the X-55, a greatly modified Dornier 328Jet, headed east from the runway at the U.S. Air Force’s Plant 42 outside Palmdale, CA, it gave the appearance of a conventional cargo aircraft. But the X-55’s fuselage structure aft of the cockpit represented perhaps the most promising breakthrough in four decades of composite technology development. The single barrel, measuring 55 feet long by 9 feet wide,2 revolutionizes expectations for structural performance at the same time that it proposes to dramatically reduce manufacturing costs. In the long history of applying composites to aircraft structures, the former seemed always to come at the expense of the latter, or vice versa. Yet the X-55 defies that experience when compared with both aluminum skins and traditional composites. In contrast to the aluminum-skinned 328Jet, Lockheed used fewer than 4,000 fasteners to assemble the aircraft with the single-piece fuselage barrel. 1. “Cargo X-Plane Shows Benefits of Advanced Composites,” Aviation Week & Space Technology, June 8, 2009, p. 18. 2. Stephen Trimble, “Skunk Works nears flight for new breed of all-composite aircraft,” Flight International, June 5, 2009.




The metal 328Jet requires nearly 30,000 fasteners for all the pieces to fit together.3 Unlike traditional composites, the X-55 did not require hours of baking in a complex and costly industrial oven called an autoclave. Neither was the X-55 skin fashioned from textile preforms with resins requiring a strictly controlled climate that can be manipulated only within a precise window of time. Instead, Lockheed relied on an advanced composite resin called MTM45-1, an “out-of-autoclave” material flexible enough to assemble on a production line yet strong enough to support the X-55’s normal aerodynamic loads and payload of three 463L-standard cargo pallets.4 Lockheed attributed the program’s success to the fruits of a 10-year program sponsored by the Air Force Research Laboratory called the composites affordability initiative.5 In truth, the X-55 bears the legacy of nearly a century’s effort to make plastic suitable, in terms of both performance and cost, for serving as a load-bearing structure for large military and commercial aircraft. It was an effort that began almost as soon as a method to mass-produce plastic became viable, within 4 years of the Wright brothers’ first flight in 1903. In aviation’s formative years, plastics spread from cockpit dials to propellers to the laminated wood that formed the fuselage structure for small aircraft. Several decades would pass, however, before the properties of all but the most advanced plastics could be considered for mainstream aerospace applications. The spike in fuel prices of the early 1970s accelerated the search for a basic construction material for aircraft more efficient than aluminum, and composites finally moved to the forefront. Just as the National Advisory Committee for Aeronautics (NACA) fueled the industry’s transition from spruce to metal in the early 1930s, the National Aeronautics and Space Administration (NASA) would pioneer the progression from all-metal airframes to all-composite materials over four decades. The first flight of the X-55 moved the progression of composite technology one step further. As a reward, the Air Force Research Laboratory announced 4 months later that it would continue to support the X-55

Where the X-55 technology goes from here can only be guessed.

Composites and the Airplane: Birth Through the 1930s

The history of composite development reveals at least as many false starts and technological blind alleys as genuine progress. Leo Baekeland, a Belgian-born American inventor, started a revolution in materials science in 1907. Forming a new polymer of phenol and formaldehyde, Baekeland had succeeded in inventing the first thermosetting plastic, called Bakelite. Although various types of plastic had been developed in previous decades, Bakelite was the first commercial success. Baekeland's true breakthrough was inventing a process that allowed the mass production of a thermosetting plastic to be done cheaply enough to serve the mechanical and fiscal needs of a huge cross section of products, from industrial equipment to consumer goods. It is no small irony that powered flight and thermosetting plastics were invented within a few years of each other.

William F. Durand, an early Chairman of the NACA, the forerunner of NASA, in 1918 summarized the key structural issue facing any aircraft designer. Delivering the sixth Wilbur Wright Memorial Lecture to the Royal Aeronautical Society, the former naval officer and mechanical engineer said, "Broadly speaking, the fundamental problem in all airplane construction is adequate strength or function on minimum weight."7 A second major structural concern, which NACA officials would soon come to fully appreciate, was the effect of corrosion on first wood, then metal, structures. Thermosetting plastics, one of two major forms of composite materials, presented a tantalizing solution to both problems. The challenge has been to develop composite matrices and production processes that can mass-produce materials strong enough to replace wood and metal, yet affordable enough to meet commercial interests. While Baekeland's grand innovation in 1907 immediately made strides in other sectors, aviation would be slow to realize the benefit of thermosetting plastics.

The substance was too brittle and too weak in tensile strength to be used immediately in contemporary aircraft structures. But Bakelite eventually found its place by 1912, when some aircraft manufacturers started using the substance as a less corrosive glue to bind the joints between wooden structures.8 The material shortages of World War I, however, would force the Government and its fledgling NACA organization to start considering alternative sources to wood for primary structures. In 1917, in fact, the NACA began what would become a decades-long effort to investigate and develop alternatives to wood, beginning with metal. As a very young bureaucracy with few resources for staffing or research, the NACA would not gain its own facilities to conduct research until the Langley laboratory in Virginia was opened in 1920. Instead, the NACA committee formed to investigate potential solutions to materials problems, such as a shortage of wood for war production of aircraft, recommended that the Army and the Bureau of Standards study commercially available aluminum alloys and steels for their suitability as wing spars.9 Even by this time, Bakelite could be found inside cockpits for instruments and other surfaces, but it was not yet considered as a primary or secondary load-bearing structure, even for the relatively lightweight aircraft of this age.

Perhaps the first evidence that Bakelite could serve as an instrumental component in aircraft came in 1924. With funding provided by the NACA, two early aircraft materials scientists—Frank W. Caldwell and N.S. Clay—ran tests on propellers made of Micarta material. The material was a generational improvement upon the phenolic resin introduced by Baekeland. Micarta is a laminated fabric—in this case cotton duck, or canvas—impregnated with the Bakelite resin.10 Caldwell was the Government's chief propeller engineer through 1928 and later served as chief engineer for Hamilton Standard. Caldwell is credited with the invention of variable-pitch propellers during the interwar period, which would eventually enable the Boeing Model 247 to achieve altitudes greater than 6,000 feet, thus clearing the Rocky Mountains and becoming a truly transcontinental aircraft. Micarta had already served

as a material for fixed-pitch blades in World War I engines, including the Liberty and the 300-horsepower Wright.11 Fixed-pitch blades were optimized neither for takeoff nor for cruise. Caldwell wanted to allow the pilot to change the pitch of the blade as the airplane climbed, allowing the pitch to remain efficient in all phases of flight. Using the same technique, the pilot could also reverse the pitch of the blade after landing. The propeller blades now functioned as a brake, allowing the aircraft to operate on shorter runways. Finding the right material to use for the blades was foremost among the challenges for Caldwell and Clay. It had to be strong enough to survive the greater aerodynamic forces as the blade changed its pitch. The extra strength had to be balanced against the weight of the material, and metal alloys had not yet advanced far enough in the early 1920s. However, Caldwell and Clay found that Micarta was suitable. In an NACA technical report, they concluded: "The reversible and adjustable propeller with micarta blades . . . is one of the most practical devices yet worked out for this purpose. It is quite strong in all details, weighs very little more than the fixed pitch propeller and operates so easily that the pitch may be adjusted with two fingers on the control lever when the engine is running." The authors had performed flight tests comparing the same aircraft and engine using both Micarta and wooden propeller blades. The former exceeded the top speed of the wooden propeller by 2 miles per hour (mph), while turning the engine at about 120 fewer revolutions per minute (rpm) and maintaining a similar rate of climb. The Micarta propeller was not only faster, it was also 7 percent more fuel efficient.12 The propeller work on Micarta showed that even if full-up plastics remained too weak for load-bearing applications, laminating wood with plastic glues provided a suitable alternative for that era's demands for structural strength in aircraft designs.

While American developers continued to make advances, critical research also was occurring overseas. By the late 1920s, Otto Kraemer—a research scientist at Deutsche Versuchsanstalt für Luftfahrt (DVL), Germany's equivalent of the NACA—had started combining phenolic resins with paper or cloth. When this fiber-reinforced resin failed to yield a material with a structural stiffness superior to wood, Kraemer in 1933 started to investigate

birch veneers instead as a filler. Thin sheets of birch veneer impregnated with the phenolic resin were laminated into a stack 1 centimeter thick. The material proved stronger than wood and offered the capability of being molded into complex shapes, finally making plastic a viable option for aircraft production.13 Kraemer also got the aviation industry's attention by testing the durability of fiber-reinforced plastic resins. He subjected 1-millimeter-thick sheets of the material to outdoor exposure for 15 months. His results showed that although the material frayed at the edges, its strength had eroded by only 14 percent. In comparison to other contemporary materials, these results were regarded as "practically no loss of strength."14 In the late 1930s, European designers also fabricated propellers using a wood veneer impregnated with a resin varnish.15

A critical date in aircraft structural history is March 31, 1931, the day a Fokker F-10A Trimotor crashed in Kansas, with Notre Dame football coach Knute Rockne among the eight passengers killed. Crash investigators determined that the glues joining the wing strut to the F-10A's fuselage had been seriously deteriorated by exposure to moisture. The cumulative weakening of the joint caused the wing to break off in flight. The crash triggered a surge of nationwide negative publicity about the weaknesses of wood materials used in aircraft structures. This caused the aviation industry and passengers to embrace the transition from wood to metal for airplane materials, even as progress in synthetic materials, especially involving wood impregnated with phenolic resins, had started to develop in earnest.16 In his landmark text on the aviation industry's transition from wood to metal, Eric Schatzberg sharply criticizes the NACA leadership's ambivalence toward nonmetal alternatives as shortsighted. For example, "In the case of the NACA, this neglect involved more than passive ignorance," Schatzberg argues, "but rather an active rejection of research on the new adhesives." However, with the military, airlines, and the traveling public all "voting with their feet," or, more precisely, their bank accounts, in favor of the metal option, it is not difficult to understand the NACA leadership's reluctance to invest scarce resources to develop

wood-based synthetic aircraft materials. The specimens developed during this period clearly lacked the popular support devoted to metal. Indeed, given the dominant role that metal structures were to play in aircraft and aerospace technology for most of the next 70 years, the priority placed on metal by the NACA's experts could be viewed as strategically prescient. That is not to say that synthetic materials, such as plastic resins, were ignored by the aerospace industry in the 1930s. The technology of phenol- and formaldehyde-based resins had already grown beyond functioning as an adhesive with superior properties for resisting corrosion. The next step was using these highly moisture-resistant mixtures to form plywood and other laminated wood parts.17 Ultimately, the same resins could be used as an impregnant that could be reinforced by wood,18 essentially a carbon-based material. These early researchers had discovered the building blocks for what would become the carbon-fiber-reinforced plastic material that dominates the composite structures market for aircraft.

Of course, there were also plenty of early applications, albeit with few commercial successes. A host of early attempts to bypass the era of metal aircraft, with its armies of riveters and concerns over corrosion and metal fatigue, would begin in the mid-1930s. Clarence Chamberlin, who missed his chance by a few weeks to beat Charles Lindbergh across the Atlantic in 1927, flew an all-composite airplane. Called the Airmobile, it was designed by Harry Atwood, once a pupil of the Wright brothers, who flew from Boston to Washington, DC, in 1911, landing on the White House lawn.19 Unfortunately, the full story of the Airmobile would expose Atwood as a charlatan and fraud. However, even if Atwood's dubious financing schemes ultimately hurt his reputation, his design for the Airmobile was legitimate; for its day, it was a major achievement. With a 22-foot wingspan and a 16-foot-long cabin, the Airmobile weighed only 800 pounds. Its low weight was achieved by constructing the wings, fuselage, tail surfaces, and ailerons with a new material called Duply, a thin veneer from a birch tree impregnated with a cellulose acetate.20

Writing a technical note for the NACA in 1937, G.M. Kline, working for the Bureau of Standards, described the Airmobile's construction: "The wings and fuselage were each molded in one piece of extremely thin films of wood and cellulose acetate."21 To raise money and attract public attention, however, Atwood oversold his ability to manufacture the aircraft cheaply and reliably. According to his far-fetched publicity claims, 10 workers starting at 8 a.m. could build a new Airmobile from a single, 6-inch-diameter birch tree and have the airplane flying by dinner. After a 12-minute first flight before 2,000 gawkers at the Nashua, NH, airport, Chamberlin complained that the aircraft was "nose heavy" but otherwise flew well. But any chance of pursuing full-scale manufacturing of the Airmobile would be short-lived. To develop the Airmobile, Atwood had accumulated more than 200 impatient creditors and a staggering debt greater than $100,000. The Airmobile's manufacturing process needed a long time to mature, and the Duply material was not nearly as easy to fabricate as advertised. The Airmobile idea was dropped as Atwood's converted furniture factory fell into insolvency.22

Also in the late 1930s, two early aviation legends—Eugene Vidal and Virginius Clark—pursued separate paths to manufacture an aircraft made of laminated wood. Despite the military's focus on developing and buying all-metal aircraft, Vidal secured a contract in 1938 to provide a wing assembly molded from a thermoplastic resin. Vidal also received a small contract to deliver a static test model for a basic trainer designated the BT-11. Schatzberg writes: "A significant innovation in the Vidal process was the molding of stiffeners and the skin in a single step." Clark, meanwhile, partnered with Fairchild and Haskelite to build the F-46, the first airliner type made of all-synthetic materials. Haskelite reported that only nine men built the first half-shell of the fuselage within 2 hours. The F-46 first flew in 1937 and generated a great amount of interest. However, the estimated costs to develop the molds necessary to build Clark's proposed production system (greater than $230,000) exceeded the amount private or military investors were willing to spend. Clark's Duramold technology was later acquired by Howard Hughes and put to use on the HK-1 flying boat (famously nicknamed—inaccurately—the "Spruce Goose").23

21. Kline, "Plastics as Structural Materials for Aircraft."
22. Howard Mansfield, Skylark: The Life, Lies and Inventions of Harry Atwood.
23. Schatzberg, Wings of Wood, Wings of Metal, pp. 182–191.


The February 16, 1939, issue of the U.K.-based Flight magazine offers a fascinating contemporary account of Clark's progress: "Recent reports from America paint in glowing terms a new process said to have been invented by Col Virginius Clark (of Clark Y wing section fame) by which aeroplane fuselages and wings can, it is claimed, be built of plastic materials in two hours by nine men. . . . There is little doubt that Col Clark and his associates of the Bakelite Corporation and the Haskelite Manufacturing Corporation have evolved a method of production which is rapid and cheap. Exactly how rapid and how cheap time will show. In the meantime, it is well to remember that we are not standing still in this country. Dr. Norman de Bruyne has been doing excellent work on plastics at Duxford, and the Airscrew Company of Weybridge is doing some very interesting and promising experimental and development work with reinforced wood."24


The NACA first moved to undertake research in plastics for aircraft in 1936, tasking Kline to conduct a review of the technical research already completed.25 Kline conducted a survey of "reinforced phenol-formaldehyde resin" as a structural material for aircraft. The survey was made with the "cooperation and financial support" of the NACA. Kline also summarized the industry's dilemma in an NACA technical note: "In the fabrication of aircraft today the labor costs are high relative to the costs of tools. If large sections could be molded in one piece, the labor costs would be reduced but the cost of the molds and presses would be very high. Such a change in type of construction would not be economically practicable except in the mass production of aircraft of a standard design. Langley suggests, therefore, that progress in the utilization of plastics in aircraft construction will be made by the gradual introduction of these materials into an otherwise orthodox structure, and that the early stages of this development will involve the molding of such small units as fins and rudders and the fabrication of the larger units from reinforced sheets and molded sections by conventional methods of jointing."26

24. "Towards an Ideal," Flight, Feb. 16, 1939.
25. Schatzberg, Wings of Wood, Wings of Metal, p. 181.




Kline essentially was predicting the focus of a massive NASA research program that would not get started for nearly four more decades. The subsequent effort was conducted along the lines that Kline prescribed and will be discussed later in this essay. Kline also seemed to understand how far ahead the age of composite structure would be for the aviation industry, especially as aircraft would quickly grow larger and more capable than he probably imagined. "It is very difficult to outline specific problems on this subject," Kline wrote, "because the exploration of the potential applications of reinforced plastics to aircraft construction is in its infancy, and is still uncharted."27

In 1939, an NACA technical report noted that synthetic materials had already started making an impact in aircraft construction of that era. The technology was still unsuited for supporting the weight of the aircraft in flight or on the ground, but the relative lightness and durability of synthetics made them popular for a range of accessories. Inside a wood or metal cockpit, a pilot scanned instruments with dials and casings made of synthetics and looked out a synthetic windshield. Synthetics also were employed for cabin soundproofing, light encasings, pulleys, and the streamlined housings around loop antennas. The 1939 NACA paper concludes: "It is realized, at present, that the use of synthetic resin materials in the aircraft industry have been limited to miscellaneous accessories. The future is promising, however, for with continued development, resin materials suitable for aircraft structures will be produced."28

The Second World War Impetus

One man's vision for the possibilities of new synthetic adhesives had a powerful impact on history. Before World War II, Geoffrey de Havilland had designed the record-breaking Comet racer and Albatross airliner, both

made of wood.29 Delivering a speech at the Royal Aeronautical Society in London in April 1935, however, de Havilland seemed to have already written off wooden construction. "Few will doubt, however," he said, "that metal or possibly synthetic material will eventually be used universally, because it is in this direction we must look for lighter construction."30 Yet 6 years later, de Havilland would introduce the immortal D.H. 98 Mosquito, a lightweight, speedy, multirole aircraft mass-produced for the Royal Air Force (RAF). De Havilland's decision to offer the RAF an essentially all-wooden aircraft might seem to be based more on logistical pragmatism than aerodynamic performance. After all, the British Empire's metal stocks were already committed to building the heavy Lancaster bombers and Spitfire fighters. Wooden materials were all that were left, not to mention the thousands of untapped and experienced woodworkers.31 But the Mosquito, designed as a lightweight bomber, became a success because it could outperform opposing fighters. Lacking guns for self-defense, the Merlin-powered Mosquito survived by outracing its all-metal opponents.32 Unlike metal airplanes, which obtain rigidity by using stringers to connect a series of bulkheads,33 the Mosquito employed a plywood fuselage that was built in two halves and glued together.34 De Havilland used a new resin called Aerolite as the glue, replacing the casein-type resins that had proved so susceptible to deterioration.35 The Mosquito's construction technique anticipated the simplicity and strength of one-piece fuselage structures, not seen again until the first flight of Lockheed's X-55 ACCA, nearly seven decades later.

For most of the 1940s, both the Government and industry focused on keeping up with wartime demand for vast fleets of all-metal aircraft. Howard Hughes pushed the boundaries of conventional flight at the

time with the first—and ultimately singular—flight of the Spruce Goose, which adopted a fuselage structure developed from the same Haskelite material pioneered by Clark in the late 1930s. Pioneering work on plastic structures continued, with researchers focusing on the basic foundations of the processes that would later gain wide application. For example, the NACA funded a study by the Laboratory for Insulation Research at the Massachusetts Institute of Technology (MIT) that would explore problems later solved by autoclaves. The goal of the MIT researchers was to address a difficulty in the curing process for thermoset plastics based on heating a wood-resin composite between hot plates. Because wood and resin were poor heat conductors, it would take several hours to raise the center of the material to the curing temperature. In the process, temperatures at the surface could rise above desired levels, potentially damaging the material even as it was being cured. The NACA-funded study looked for new ways to rapidly heat the material uniformly on the surface and at the center. The particular method involved inserting the material into a high-frequency electrical field, attempting to heat the material from the inside using the "dielectric loss of the material."36 This was an ambitious objective, anticipating and appropriating the same principles used in microwave ovens for building aircraft structures. Not surprisingly, the study's authors hoped to manage expectations. As they were not attempting to arrive at a final solution, the authors of the final report said their contribution was to "lay the groundwork for further development." Their final conclusion: "The problem of treating complicated shapes remains to be solved."37
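The physics the MIT team hoped to harness is the same dielectric heating later made familiar by microwave ovens. As a point of reference, and as a standard textbook result rather than a formula taken from the NACA-funded report itself, the volumetric heating rate of a lossy dielectric in an alternating field can be written as

$$
p = \omega \, \varepsilon_0 \, \varepsilon'' \, E_{\mathrm{rms}}^{2}
$$

where $\omega$ is the angular frequency of the applied field, $\varepsilon_0$ the permittivity of free space, $\varepsilon''$ the material's dielectric loss factor, and $E_{\mathrm{rms}}$ the field strength. Because the heat is generated throughout the volume rather than conducted in from hot plates, a high-frequency field could, in principle, bring the core of a thick wood-resin layup to curing temperature without overheating its surfaces.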

Meanwhile, a Douglas Aircraft engineer hired shortly before World War II began would soon have a profound impact on the plastic composite industry. Brandt Goldsworthy served as a plastics engineer at Douglas during the war, where he was among the first to combine fiberglass and phenolic resin to produce laminated tooling.38 The invention did not spark radical progress in the aviation industry, although the material was used for ammunition chutes that channeled machine gun cartridges from storage boxes into aircraft machine guns.39 More noteworthy, after leaving Douglas in 1945 to start his own company, Goldsworthy would pioneer the automation of the manufacturing process for composite materials. Goldsworthy's invention of the pultrusion process in the 1950s would make durable and high-strength composites affordable for a range of applications, from cars to aircraft parts to fishing rods.40

As plastic composites continued to mature, the U.S. Army Air Corps began an ambitious series of experiments in the early 1940s on a new composite material made from fiberglass-polyester blends. In the next two decades, the material would prove useful on aircraft as nose radomes and as both helicopter and propeller blades.41 The combination of fiberglass and polyester also proved tempting to the military as a potential new load-bearing structural material for aircraft. In 1943, researchers at Wright Field fabricated an aft fuselage for the Vultee BT-15 basic trainer using fiberglass and a polyester material called Plaskon, with balsa used as a sandwich core material.42 The Wright Field experiments also included the development of an outer wing panel made of cloth and cellulose acetate for a North American AT-6C.43 The BT-15 experiment proved unsuccessful, but the plastic wing of the AT-6C was more promising, showing only minor wing cracks after 245 flight hours.44

Composite structure remained mostly a novelty item in aerospace construction. Progress continued to be made with developing composites, but demand was driven mainly by unique performance requirements, such as for high-speed atmospheric flight or exo-atmospheric travel. A few exceptions emerged in the general-aviation market. The Federal Aviation Agency (FAA) certified the Taylorcraft Model 20 in 1955, which was based on a steel substructure but incorporated fiberglass for the skins and cowlings.46 Even more progress was made by Piper Aircraft, which launched the PA-29 "plastic plane" project a few years later.47 The PA-29 was essentially a commercial X-plane, experimenting with materials that could replace aluminum alloy for light aircraft.48 The PA-29's all-fiberglass structure demonstrated the potential strength properties of composite material. Piper's engineers reported that the wing survived to 200 percent of ultimate load in static tests; the fuselage cracked at 180 percent because of a weakened bolt hole near the cockpit.49 Piper concluded that it "is not only possible but also quite practical to build primary aircraft structures of fiberglass reinforced plastic."50

Commercial airliners built in the early 1950s relied almost exclusively upon aluminum and steel for structures. Boeing selected 2024 aluminum alloy for the fuselage skin and lower wing cover of the four-engine 707.51 It was not until Boeing started designing the 747 jumbo airliner in 1966 that it paid serious attention to composites. Composites were used on the 747's rudder and elevators. Fiberglass, however, was in even greater demand on the 747, used as the structure for variable-camber leading-edge flaps.52

In 1972, NASA started a program with Boeing to redesign the 737's aluminum spoilers with skins made of graphite-epoxy composite and an aluminum honeycomb core, while the rest of the spoiler structure—the hinges and spar—remained unchanged. Each of the four spoilers on the 737 measures roughly 24 inches wide by 52 inches long. The composite material comprised about 35 percent of the weight of the new structure of each spoiler, which weighed about 13 pounds, or 17 percent less than an all-metal structure.53 The composite spoilers initiated flight operations on 27 737s owned by the airlines Aloha, Lufthansa, New Zealand National, Piedmont, PSA, and VASP. Five years later, Boeing reported no problems with durability and projected a long service life for the components.54

46. Ibid., p. 20.
47. F.S. Snyder and R.E. Drake, "Experience with Reinforced Plastic Primary Aircraft Structures," presented at the Society of Automotive Engineers' Automotive Engineering Congress in Detroit, MI, Jan. 14–18, 1963, p. 1.
48. Ibid.
49. Ibid., p. 4.
50. Ibid., p. 4.
51. Swihart, "Commercial Jet Transportation Structures and Materials Evolution," p. 5-3.
52. Ibid., pp. 5–6.
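The spoiler weight figures quoted above are easier to interpret with the arithmetic written out. The following minimal sketch uses only the text's approximate values; the results are illustrative, not Boeing data:

```python
# Back-of-the-envelope check on the 737 composite spoiler figures.
composite_spoiler_lb = 13.0   # approximate weight of the redesigned spoiler
weight_savings = 0.17         # "17 percent less than an all-metal structure"
composite_fraction = 0.35     # composite share of the new spoiler's weight

# Implied weight of the all-metal baseline spoiler.
metal_spoiler_lb = composite_spoiler_lb / (1 - weight_savings)
# Composite material actually contained in each redesigned spoiler.
composite_content_lb = composite_fraction * composite_spoiler_lb

print(f"Implied all-metal spoiler weight: {metal_spoiler_lb:.1f} lb")   # ~15.7 lb
print(f"Composite material per spoiler: {composite_content_lb:.1f} lb")  # ~4.6 lb
```

In other words, the redesign removed roughly 2.7 pounds per spoiler while replacing only about a third of the structure's weight with composite material.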


The impact of the 1973 oil embargo finally forced airlines to start reexamining their fuel-burn rates. After annual fuel price increases of 5 percent before the embargo, the fuel bill for airlines jumped from about 10 cents to 28 cents per gallon almost overnight.55 Most immediately, airframers looked to the potential of the recently developed high-bypass turbofan engine, as typified by the General Electric TF39/CF6 engine family, to gain rapid improvements in fuel efficiency for airliners. But against the backdrop of the oil embargo, the potential of composites to drive another revolution in airframe efficiency could not be ignored. Graphite-epoxy composite weighed 25 percent less than comparable aluminum structure, potentially boosting fuel efficiency by 15 percent.56

The stage was set for launching the most significant change in aircraft structural technology since the rapid transition to aluminum in the early 1930s. However, it would be no easy transition. In the early 1970s, composite design for airframes was still in its infancy, despite its many advances in military service. Recalling this period, a Boeing executive would later remember the words of caution from one of his mentors in 1975: "One of Boeing's most senior employees said, when composites were first introduced in 1975, that he had lived through the transition from spruce and fabric to aluminum. It took three airplane generations before the younger designers were able to put aluminum to its best use, and he thought that we would have to be very clever to avoid that with composites."57 The anonymous commentary would prove eerily prescient. From 1975, Boeing would advance through two generations of aircraft—beginning with the 757/767 and progressing with the 777 and

Next Generation 737—before mastering the manufacturing and design requirements to mass-produce an all-composite fuselage barrel, one of the key design features of the 787, launched in 2003.

By the early 1970s, the transition to composites was a commercial imperative, but it took projects and studies launched by NASA and the military to start building momentum. Unlike the transition from spruce to metal structures four decades before, the industry's leading aircraft makers now postured conservatively. The maturing air travel industry presented manufacturers with a new set of regulatory and legal barriers to embracing innovative ideas. In this new era, passengers would not be the unwitting guinea pigs as engineers worked out the problems of a new construction material. Conservatism in design would especially apply to load-bearing primary structures. "Today's climate of government regulatory nervousness and aircraft/airline industry liability concerns demand that any new structural material system be equally reliable," Boeing executive G.L. Brower commented in 1978.58

The Path to the Modern Era

A strategy began forming in 1972 with the launch of the Air Force–NASA Long Range Planning Study for Composites (RECAST), which focused priorities for the research projects that would soon begin.59 That was the prelude to what NASA researcher Marvin Dow would later call the "golden age of composites research,"60 a period stretching from roughly 1975 until funding priorities shifted in 1986. As airlines looked to airframers for help, military aircraft were already making great strides with composite structure. The Grumman F-14 Tomcat, then the McDonnell-Douglas F-15 Eagle, incorporated boron-epoxy composites into the empennage skin, a primary structure.61 With the first flight of the McDonnell-Douglas AV-8B Harrier in 1978, composite usage had spread to the wing as well.

Air Force engineer Norris Krone prompted NASA to develop the X-29 to prove that high-strength composites were capable of supporting forward-swept wings. NASA.

In all, about one-fourth of the AV-8B's weight,62 including 75 percent of the weight of the wing alone,63 was made of composite material. Meanwhile, composite materials studies by Air Force engineer Norris Krone opened the door to experimenting with forward-swept wings. NASA responded to Krone's papers in 1976 by launching the X-29 technology demonstrator, which incorporated an all-composite wing.64 Composites also found a fertile atmosphere for innovation in the rotorcraft industry during this period. As NASA pushed the commercial aircraft industry forward in the use of composites, the U.S. Army spurred progress among its helicopter suppliers. In 1981, the Army selected Bell Helicopter Textron and Sikorsky to design all-composite airframes under the advanced composite airframe program (ACAP).65

Perhaps already eyeing the need for a new light airframe to replace the Bell OH-58 Kiowa scout helicopter, the Army tasked the contractors to design a new utility helicopter under 10,000 pounds that could fly for up to 2 hours 20 minutes.66 Bell first flew the D-292 in 1984, and Sikorsky flew the S-75 ACAP in 1985.67 Boeing complemented their efforts by designing the Model 360, an all-composite helicopter airframe with a gross weight of 30,500 pounds.68 Each of these projects provided the steppingstones needed for all three contractors to fulfill the design goals for both the now-canceled Sikorsky–Boeing RAH-66 Comanche and the Bell–Boeing V-22 Osprey tilt rotor. The latter also drove developments in automated fiber placement technology, relieving the need to lay up by hand about 50 percent of the airframe's weight.69

In the midst of this rapid progress, the makers of executive and "general" aircraft required neither the encouragement nor the financial assistance of the Government to move wholesale into composite airframe manufacturing. While Boeing dabbled with composite spoilers, ailerons, and wing covers on its new 767, William P. Lear, founder of LearAvia, was developing the Lear Fan 2100—a twin-engine, nine-seat aircraft powered by a pusher propeller, with a 3,650-pound airframe made almost entirely from a graphite-epoxy composite.70 About a decade later, Beechcraft unveiled the popular and stylish Starship 1, an 8- to 10-passenger twin turboprop weighing 7,644 pounds empty.71 Composite materials—mainly graphite-epoxy and NOMEX sandwich panels—accounted for 72 percent of the airframe's weight.72 Actual performance fell far short of the original expectations during this period. Dow's NASA colleagues in 1975 had outlined a strategy that should have led to full-scale tests of an all-composite fuselage and wing box for a civil airliner by the late 1980s. Although the dream was delayed by more than a decade, it is true that the state of knowledge and

66. Ibid.
67. Ibid., p. 68.
68. D.A. Reed and R. Gable, "Ground Shake Test of the Boeing Model 360 Helicopter Airframe," NASA CR-181766 (1989), p. 6.
69. Deo, Starnes, and Holzwarth, "Low-Cost Composite Materials and Structures for Aircraft Applications."
70. "Lightweight Composites Are Displacing Metals," Business Week, July 30, 1979, p. 36D.
71. E.H. Hooper, "Starship 1," presented at the AIAA Evolution of Aircraft/Aerospace Structures and Materials Symposium, Dayton, OH, Apr. 24–25, 1985, p. 6-1.
72. Ibid.


understanding of composite materials leaped dramatically during this period. The three major U.S. commercial airframers of the era—Boeing, Lockheed, and McDonnell-Douglas—each made contributions. However, the agenda was led by NASA's $435-million investment in the Aircraft Energy Efficiency (ACEE) program. ACEE's top goal, in terms of funding priority, was to develop an energy-efficient engine. The program also invested heavily in improving laminar flow control. But a major pillar of ACEE was to drive the civil industry to fundamentally change its approach to aircraft structures and shift from metal to the new breed of composites then emerging from laboratories. As of 1979, NASA had budgeted $75 million toward achieving that goal,73 with the manufacturers responsible for providing a 10-percent match.

ACEE proposed a gradual development strategy. The first step was to install a graphite-epoxy composite material called Narmco T300/5208,74 on lightly loaded secondary structures of existing commercial aircraft in operational service. For their parts, Boeing selected the 727 elevator, Lockheed chose the L-1011 inboard aileron, and Douglas opted to change the DC-10 upper aft rudder.75 From this starting point, NASA engaged the manufacturers to move on to medium-primary components, which became the 737 horizontal stabilizer, the L-1011 vertical fin, and the DC-10 vertical stabilizer.76 The weight savings for the medium-primary components were estimated to be 23 percent, 30 percent, and 22 percent, respectively.77 The leap from secondary to medium-primary components yielded some immediate lessons for what not to do in composite structural design. All three components failed before experiencing ultimate loads in initial ground tests.78 The problems showed how different composite material could be from the familiar characteristics of metal. Compared to aluminum, an equal amount of composite material can support a heavier load. But, as experience revealed, this was not true in every condition experienced by an aircraft in normal flight. Metals are known to

distribute stresses and loads to surrounding structures. In simple terms, they bend more than they break. Composite material does the opposite. It is brittle, stiff, and unyielding to the point of breaking. Boeing's horizontal stabilizer and Douglas's vertical stabilizer both failed before the predicted ultimate load for similar reasons. The brittle composite structure did not redistribute loads as expected. In the case of the 737 component, Boeing had intentionally removed one lug pin to simulate a fail-safe mode. The structure under the point of stress buckled rather than redistributing the load. Douglas had inadvertently drilled too large a hole for a fastener where the web cover for the rear spar met a cutout for an access hole.79 It was an error by Douglas's machinists, but one that would have been tolerable had the same structure been designed in metal. Lockheed faced a different kind of problem with the failure of the L-1011 vertical fin during similar ground tests. In this case, a secondary interlaminar stress developed after the fin's aerodynamic cover buckled at the attachment point with the front spar cap. NASA later noted: "Such secondary forces are routinely ignored in current metals design."80 The design for each of these components was later modified to overcome these unfamiliar weaknesses of composite materials.

In the late 1970s, all three manufacturers began working on the basic technology for the ultimate goal of the ACEE program: designing a full-scale, composite-only wing and fuselage. Control surfaces and empennage structures provided important steppingstones, but it was expected that expanding the use of composites to large sections of the fuselage and wing could improve efficiency by an order of magnitude.81 More specifically, Boeing's design studies estimated a weight savings of 25–30 percent if the 757 fuselage were converted to an all-composite design.82 Further, an all-composite wing designed with a metal-like allowable strain could reduce weight by as much as 40 percent for a large commercial aircraft, according to NASA's design analysis.83 Each manufacturer was assigned a different task, with all three collaborating on their results for maximum benefit.

Lockheed explored design techniques for a wet wing that could contain fuel and survive lightning strikes.84 Boeing worked on creating a system for defining degrees of damage tolerance for structures85 and designed wing panels strong enough to endure post-impact compression of 50,000 pounds per square inch (psi) at strains of 0.006.86 Meanwhile, Douglas concentrated on methods for designing multibolted joints.87
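Read at face value, Boeing's damage-tolerance target implies an effective laminate stiffness on the order of stress divided by strain. A minimal sketch of that first-order arithmetic, assuming linear-elastic behavior up to the allowable strain (an assumption of this illustration, not a statement from the program):

```python
# First-order reading of the post-impact compression target quoted above.
stress_psi = 50_000.0   # post-impact compression strength target (psi)
strain = 0.006          # allowable strain at that stress

# Effective modulus implied by sigma / epsilon, in millions of psi (Msi).
effective_modulus_msi = (stress_psi / strain) / 1e6
print(f"Implied effective modulus: {effective_modulus_msi:.1f} Msi")  # ~8.3 Msi
```

A figure in that range is broadly consistent with the quasi-isotropic graphite-epoxy laminates of the period, which is part of what made the 0.006 strain allowable such a demanding damage-tolerance goal.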

By 1984, NASA and Lockheed had launched the advanced composite center wing project, aimed at designing an all-composite center wing box for an "advanced" C-130 airlifter. This project, which included fabricating two 35-foot-long structures for static and durability tests, would seek to reduce the weight of the C-130's center wing box by 35 percent and reduce manufacturing costs by 10 percent compared with aluminum structure.88 Meanwhile, Boeing started work in 1984 to design, fabricate, and test full-scale fuselage panels.89

Within a 10-year period, the U.S. commercial aircraft industry had come very far. From the near exclusion of composite structure in the early 1970s, composites had entered the production flow as both secondary and medium-primary components by the mid-1980s. This record of achievement, however, was eclipsed by even greater progress in commercial aircraft technology in Europe, where the then-upstart Airbus consortium had pushed composites technology even further. While U.S. commercial programs continued to conduct demonstrations, the A300 and A310 production lines introduced an all-composite rudder in 1983 and a vertical tailfin in 1985. The latter vividly demonstrated the manufacturing efficiencies promised by composite designs. While a metal vertical tail contained more than 2,000 parts, Airbus designed a new structure with a carbon fiber epoxy-honeycomb core sandwich that required fewer than 100 parts, reducing both the weight of the structure and the cost of assembly.90 A few years later, Airbus unveiled the A320 narrow body with 28 percent of its structural weight filled by composite materials, including the entire tail structure, fuselage belly skins, trailing-edge flaps, spoilers, ailerons, and nacelles.91 It would be another decade before a U.S. manufacturer eclipsed Airbus's lead, with the introduction of the Boeing 777 in 1995. Consolidating experience gained as a major structural supplier for the Northrop B-2A bomber program, Boeing designed the 777 with an all-composite empennage, taking composites to roughly one-tenth of the airliner's structural weight.92 By this time, the percentage of composites in a commercial airliner's weight had become a measure of the manufacturer's progress in gaining a competitive edge over a rival, a trend that continues to this day with the emerging Airbus A350/Boeing 787 competition.

As European manufacturers assumed a technical lead over U.S. rivals in composite technology in the 1980s, the U.S. still retained a huge lead in military aircraft technology. With fewer operational concerns about damage tolerance, crash survivability, and manufacturing cost, military aircraft exploited the performance advantages of composite material, particularly its weight savings. The V-22 Osprey tilt rotor employed composites for 70 percent of its structural weight.93 Meanwhile, Northrop and Boeing used composites extensively on the B-2 stealth bomber, which is 37-percent composite material by weight.

Steady progress on the military side, however, was not enough to sustain momentum for NASA's commercial-oriented technology. The ACEE program folded after 1985, following several years of real progress but before it had achieved all of its goals. The full-scale wing and fuselage test program, which had received a $92-million, 6-year budget from NASA in fiscal year 1984,94 was deleted from the Agency's spending plans a year later.95 By 1985, funding available to carry out the goals of the ACEE program had been steadily eroding for several years. The Reagan Administration took office in 1981 with a distinctly different view on the responsibility of Government to support the validation of commercial technologies.96

In constant 1988 dollars, ACEE funding dropped from a peak of $300 million in 1980 to $80 million in 1988, with funding for validating high-strength composite materials in flight wiped out entirely.97 The shift in technology policy corresponded with priority disagreements between aeronautics and space supporters in industry, with the latter favoring boosting support for electronics over pure aeronautics research.98

In its 10-year run, the composite structural element of the ACEE program had overcome numerous technical issues. The most serious issue erupted in 1979 and caused NASA to briefly halt further studies until it could be fully analyzed. The story, always expressed in general terms, has become an urban myth for the aircraft composites community. Precise details of the incident appear lost to history, but the consequences of its impact were very real at the time. The legend goes that in the late 1970s, waste fibers from composite materials were dumped into an incinerator. Afterward, whether by cause or coincidence, a nearby electric substation shorted out.99 Carbon fibers set loose by the incinerator fire were blamed for the malfunction at the substation. The incident prompted widespread concerns among aviation engineers at a time when NASA was poised to spend hundreds of millions of dollars to transition composite materials from mainly space and military vehicles to large commercial transports. In 1979, NASA halted work on the ACEE program to analyze the risk that future crashes of increasingly composite-laden aircraft would spew blackout-causing fibers onto the Nation's electrical grid.100

Few seriously question the potential benefits that composite materials offer society. By the mid-1970s, it was clear that composites could dramatically raise the efficiency of aircraft. The cost of manufacturing the materials was higher, but the life-cycle cost of maintaining noncorroding composite structures offered a compelling offset. Concerns about the economic and health risks posed by such a dramatic transition to a different structural material have also been very real.

It was up to the aviation industry, with Government support, to answer these vital questions before composite technology could move further. With the ACEE program suspended to study concerns about the risks to electrical equipment, both NASA and the U.S. Air Force by 1978 had launched separate efforts to overcome these concerns. In a typical aircraft fire after a crash, the fuel-driven blaze can reach temperatures between 1,800 and 3,600 degrees Fahrenheit (ºF). At temperatures higher than 750 ºF, the matrix material in a composite structure will burn off, which creates two potential hazards. As the matrix polymer transforms into fumes, the underlying chemistry creates a toxic mixture called pyrolysis product, which can be harmful if inhaled. Second, after the matrix material burns away, the carbon fibers are released into the atmosphere.101 These liberated fibers, which as natural conductors have the power to short circuit a power line, could be dispersed over wide areas by wind. This led to concerns that the fibers could come into contact with local power cables or, even worse, exposed power substations, leading to widespread power blackouts as the fibers short circuited the electrical equipment.102 In the late 1970s, the U.S. Air Force started a program to study aircraft crashes that involved early-generation composite materials.

Another incident, in 1997, was typical of a different type of concern about the growing use of composite materials for aircraft structures. A U.S. Air Force F-117 flying a routine at the Baltimore airshow crashed when a wing strut failed. Emergency crews who rushed to the scene extinguished fires that destroyed and damaged several dwellings, blanketing the area with a "wax-like" substance that contained the carbon fibers embedded in the F-117's structure, fibers that could otherwise have been released into the atmosphere. Despite these precautions, the same firefighters and paramedics who rushed to the scene later reported becoming "ill from the fumes emitted by the fire. It was believed that some of these fumes resulted from the burning of the resin in the composite materials," according to a U.S. Navy technical paper published in 2003.103

Yet another issue sapped the public's confidence in composite materials for aircraft structures for several decades. As late as 2007, the risk presented by lightning striking a composite section of an aircraft fuselage was the subject of a primetime investigation by Dan Rather, who extensively quoted a retired Boeing Space Shuttle engineer. The question is repeatedly asked: If the aluminum structure of a previous generation of airliners created a natural Faraday cage, how would composite materials, with their weaker conductivity, respond when struck by lightning?

Technical hazards were not the only threat to the acceptance of composite materials. To be sure, proving that composite material would be safe to operate in commercial service constituted an important endorsement of the technology for subsequent application, as the ACEE projects showed. But the aerospace industry also faced the challenge of establishing a new industrial infrastructure from the ground up that would supply vast quantities of composite materials. NASA officials anticipated the magnitude of the infrastructure issue. The shift from wood to metal in the 1930s occurred in an era when airframers acted almost recklessly by today's standards. Making a similar transition in the regulatory and business climate of the late 1970s would be another challenge entirely. Perhaps with an eye on the rapid progress being made by European competitors in commercial aircraft, NASA addressed the issue head-on. In 1980, NASA Deputy Administrator Alan M. Lovelace urged industry to "anticipate this change," adding that he realized "this will take considerable capital, but I do worry that if this is not done then might we not, a decade from now, find ourselves in a position similar to that in which the automobile industry is at the present time?"104

Of course, demand drives supply, and the availability of the raw material for making composite aerospace parts grew dramatically throughout the 1980s. For example, 2 years before Lovelace issued his warning to industry, U.S. manufacturers were consuming 500,000 pounds of composites every 12 months, with the aerospace industry accounting for half of that amount.105 Meanwhile, a single supplier of graphite fiber, Union Carbide, had already announced plans to increase annual output to 800,000 pounds by the end of 1981.106 U.S. consumption would soon be driven by the automobile industry, which was also struggling

to keep up with the innovations of foreign competition, as much as by the aerospace industry throughout the 1980s.


Challenges and Opportunities

If composites were to receive wide application, the cost of the materials would have to decline dramatically from their mid-1980s levels. ACEE succeeded in making plastic composites commonplace not just in fairings and hatches for large airliners but also on control surfaces, such as the ailerons, flaps, and rudder. On these secondary structures, cash-strapped airlines achieved the weight savings that prompted the shift to composites in the first place. The program did not, however, result in the immediate transition to widespread production of plastic composites for primary structures. Until the industry could make that transition, it would be impossible to justify the investment required to create the infrastructure that Lovelace described to produce composites at rates equivalent to yearly aluminum output. To the contrary, tooling costs for composites remained high, as did the labor costs required to fabricate the composite parts.107

A major issue driving costs up under the ACEE program was the need to improve the damage tolerance of the composite parts, especially as the program transitioned from secondary components to heavily loaded primary structures. Composite plastics were still easy to damage and costly to replace. McDonnell-Douglas once calculated that the MD-11 trijet contained about 14,000 pounds of composite structure, which the company estimated saved airlines about $44,000 in yearly fuel costs per plane.108 But a single incident of "ramp rash" requiring the airline to replace one of the plastic components could wipe away the yearly return on investment provided by all 14,000 pounds of composite structure.109 The method that manufacturers devised in the early 1980s involved using toughened resins, but these required more intensive labor to fabricate, which aggravated the cost problem.110
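The economics of that "ramp rash" problem can be made concrete with a rough sketch. The weight and fuel-savings figures below are the text's estimates; the repair bill is a purely hypothetical placeholder chosen to show how a single incident could erase a year's return:

```python
# Order-of-magnitude reading of the MD-11 composite trade described above.
composite_weight_lb = 14_000        # composite structure on the MD-11 (est.)
annual_fuel_savings_usd = 44_000    # yearly fuel savings per aircraft (est.)

savings_per_lb_year = annual_fuel_savings_usd / composite_weight_lb
print(f"Fuel savings: ~${savings_per_lb_year:.2f} per pound per year")  # ~$3.14

# Hypothetical: one damaged panel whose replacement costs $44,000.
hypothetical_repair_usd = 44_000
years_to_recoup = hypothetical_repair_usd / annual_fuel_savings_usd
print(f"That single repair consumes ~{years_to_recoup:.0f} year of fuel savings")
```

At roughly $3 of fuel savings per pound per year, even modest repair bills on brittle, hard-to-fix components could swamp the efficiency benefit, which is exactly the dilemma the next paragraph describes NASA setting out to solve.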

From the early 1980s, NASA worked to solve this dilemma by investigating new manufacturing methods. One research program sponsored by the Agency considered whether textile-reinforced composites could be a cost-effective way to build damage-tolerant primary structures for aircraft.111 Composite laminates are not strong so much as they are stiff, particularly in the direction of the aligned fibers. Loads coming from different directions have a tendency to damage the structure unless it is properly reinforced, usually in the form of increased thickness or other supports. Another poor characteristic of laminated composites is how the material reacts to damage. Instead of buckling like aluminum, which helps absorb some of the energy caused by the impact, the stiff composite material tends to shatter. Some feared that such materials could prove too much for the cash-strapped airlines of the early 1990s to accept.

If laminated composites were the problem, some believed the solution was to continue investigating textile composites. That meant shifting to a new process in which carbon fibers could be stitched or woven into place, then infused with a plastic resin matrix. This method seemed to offer the opportunity to solve both the damage tolerance and the manufacturing problems simultaneously. Textile fibers could be woven in a manner that made the material strong against loads coming from several directions, not just one. Moreover, some envisioned the deployment of giant textile composite sewing machines to mass-produce the stronger material, dramatically lowering the cost of manufacture in a single stroke.

The reality, of course, would prove far more complex and challenging than the visionaries of textile composites had imagined. To be sure, the concept faced many skeptics within the conservative aerospace industry even as it gained force in the early 1990s. Indeed, there have been many false starts in the composite business. The Aerospace America journal in 1990 proposed that thermoplastics, a comparatively little-used form of composites, could soon eclipse thermoset composites to become the "material of the '90s." The article wisely contained a cautionary note from a wry Lockheed executive, who recalled a quote by a former boss in the structures business: "The first thing I hear about a new material is the best thing I ever hear about it. Then reality sinks in, and it's a matter of slow and steady improvements until you achieve the properties you want."112 The visionaries of textile composites in the late

1980s could not foresee it, but they would contend with more than the normal challenges of introducing any technology for widespread production. A series of industry forces was about to transform the competitive landscape of the aerospace industry over the next decade, with a wave of mergers wreaking particular havoc on NASA's best-laid plans.

It was in this environment that NASA began the plunge into developing ever-more-advanced forms of composites. The effort came in the immediate aftermath of the ACEE program's demise. In 1988, the Agency launched an ambitious effort called the Advanced Composites Technology (ACT) program. It was aimed at developing hardware for composite wing and fuselage structures. The goals were to reduce structural weight for large commercial aircraft by 30–50 percent and reduce acquisition costs by 20–25 percent.113 NASA awarded 15 contracts under the ACT banner a year later, signing up teams of large original equipment manufacturers, universities, and composite materials suppliers to work together to build an all-composite fuselage mated to an all-composite wing by the end of the century.114 During Phase A, from 1989 to 1991, the program focused on manufacturing technologies and structural concepts, with stitched textile preform and automated tow placement identified as the most promising new production methods.115 "At that point in time, textile reinforced composites moved from being a laboratory curiosity to large scale aircraft hardware development," a NASA researcher noted.116 Phase B, from 1992 to 1995, focused on testing subscale components.

Within the ACT banner, NASA sponsored projects of wide-ranging scope and significance. Sikorsky, for example, which was selected after 1991 to lead development and production of the RAH-66 Comanche, worked on a new process using flowable silicone powder to simplify the process of vacuum-bagging composites before they are heated in an autoclave.117 Meanwhile, McDonnell-Douglas Helicopter investigated 3-D

finite element models to discover how combined loads create stresses through the thickness of composite parts during the design process. The focus of ACT, however, was developing the technologies that would finally commercialize composites for heavily loaded structures. The three major commercial airliner firms that dominated activity under the ACEE remained active in the new program despite huge changes in the commercial landscape. Lockheed had already decided not to build any more commercial airliners after ceasing production of the L-1011 Tristar in 1984 but pursued ACT contracts to support a new strategy—also later dropped—to become a structures supplier for the commercial market.118 Lockheed's role involved evaluating textile composite preforms for a wide variety of applications on aircraft.

It was still 8 years before Boeing and McDonnell-Douglas agreed to their fateful merger in 1997, but ACT set each on a path for developing new composites that would converge around the same time as their corporate identities. NASA set Douglas engineers to work on producing an all-composite wing. Part of Boeing's role under ACT involved constructing several massive components, such as a composite fuselage barrel; a window belt, introducing the complexity of material cutouts; and a full wing box, providing a way to mate the Douglas wing and the Boeing fuselage.

As ambitious as this roughly 10-year plan was, it did not overpromise. NASA did not intend to validate the airworthiness of the technologies. That role would be assigned to industry, as a private investment. Rather, the ACT program sought merely to prove that such structures could be built and that the materials were sound in their manufactured configuration. Thus, pressure tests would be performed on the completed structures to verify the analytical predictions of engineers. Such aims presupposed some level of intense collaboration between the two future partners, Boeing and McDonnell-Douglas, but NASA may have been disappointed with the results before the merger of 1997. Although the former ACEE program had achieved a level of unique collaboration between the highly competitive commercial aircraft prime contractors, that spirit appeared to have eroded under the intense market pressures of the early 1990s airline industry. One unnamed industry source explained to an Aerospace Daily reporter in 1994: "Each company

One unnamed industry source explained to an Aerospace Daily reporter in 1994: "Each company wants to do its own work. McDonnell doesn't want to put its [composite] wing on a Boeing [composite] fuselage and Boeing doesn't trust its composite fuselage mated to a McDonnell composite wing."119 NASA, facing funding shortages after 1993, ultimately scaled back the goal of ACT to mating an all-composite wing made by either McDonnell-Douglas or Boeing to an "advanced aluminum" fuselage section.120 Boeing's work on completing an all-composite fuselage would continue, but it would transition to a private investment, leveraging the extensive experience provided by the NASA and military composite development programs. In 1995, McDonnell-Douglas was selected to enter Phase C of the ACT program with the goal of constructing the all-composite wing, but industry developments intervened. After McDonnell-Douglas was absorbed into Boeing's brand, speculation swirled about the fate of the former's active all-composite wing program. In 1997, McDonnell-Douglas had plans to eventually incorporate the new wing technology on the legacy MD-90 narrow body.121 (Boeing later renamed the MD-95 the 717, filling a gap created when the manufacturer skipped from the 707 to the 727 airliners, having internally designated the U.S. Air Force KC-135 refueler the 717.122) One postmerger speculative report suggested that Boeing might even consider adopting McDonnell-Douglas's all-composite wing for the Next Generation 737 or a future variant of the 757. Boeing, however, would eventually drop the all-composite wing concept, even closing 717 production in 2006. The ACT program produced an impressive legacy of innovation. Amid the drive under ACT to finally build full-scale hardware, NASA also pushed industry to depart radically from building composite structures through the laborious process of laying up laminates. That process not only drove up costs by requiring exorbitant touch labor; it also produced material that was easy to damage unless bulk—and weight—was added to the structure in the form of thicker laminates and extra stiffeners and doublers.

The ACT program formed three teams, each combining one major airframer with several firms from a growing and increasingly sophisticated network of composite materials suppliers to the aerospace industry. A Boeing/Hercules team focused on a promising new method called automated tow placement. McDonnell-Douglas was paired with Dow Chemical to develop a process that could stitch the fibers roughly into the shape of the finished parts, then introduce the resin matrix through the resin transfer molding (RTM) process.123 That process is known as "stitched/RTM."124 Lockheed, meanwhile, was teamed with BASF Structural Materials to work on textile preforms. NASA and the ACT contractors had turned to textiles full bore to both reduce manufacturing costs and enhance performance. Preimpregnating fibers aligned unidirectionally into layers of laminate laid up by hand and cured in an autoclave had been the predominant production method throughout the 1980s. However, layers arranged in this manner have a tendency to delaminate when damaged.125 The solution proposed under the ACT program was to develop a method to sew or weave the composites three-dimensionally, roughly into their final configuration, then infuse the "preform" mold with resin through resin transfer molding or vacuum-assisted resin transfer molding.126 It would require the invention of a giant sewing machine large and flexible enough to stitch a carbon fabric as large as an MD-90 wing. McDonnell-Douglas began the process with the goal of building a wing stub box test article measuring 8 feet by 12 feet. Pathe Technologies, Inc., built a single-needle sewing machine. Its sewing head was computer controlled and could move by a gantry-type mechanism in the x- and y-axes to sew materials up to 1 inch in thickness. The machine stitched prefabricated stringers and intercostal clips to the wing skins.127 The wing skins had been prestitched using a separate multineedle machine.128 Both belonged to a first generation of sewing machines that accomplished their purpose, which was to provide valuable data and experience. The single-needle head, however, would prove far too limited.

The Advanced Composite Cargo Aircraft is a modified Dornier 328Jet aircraft. The fuselage aft of the crew station and the vertical tail were removed and replaced with new structural designs made of advanced composite materials fabricated using out-of-autoclave curing. It was developed by the Air Force Research Laboratory and Lockheed Martin. Lockheed Martin.

It moved only 90 degrees in the vertical and horizontal planes, meaning it was limited to stitching only panels with a flat outer mold line. The machine also could not stitch materials deeply enough to meet the requirement for a full-scale wing.129 NASA and McDonnell-Douglas recognized that a high-speed multineedle machine, combined with an improved process for multiaxial warp knitting, would achieve affordable full-scale wing structures. This so-called advanced stitching machine would have to handle "cover panel preforms that were 3.0m wide by 15.2m long by 38.1mm thick at speeds up to 800 stitches per minute. The multiaxial warp knitting machine had to be capable of producing 2.5m wide carbon fabric with an areal weight of 1,425g/m²."130 Multiaxial warp knitting automates the process of producing multilayer broad goods. NASA and Boeing selected the resin film infusion (RFI) process to develop a wing cost-effectively. Boeing's advanced stitching machine remains in use today, quietly producing landing gear doors for the C-17 airlifter.
129. Ibid.
130. Chambers, Concept to Reality.


The thrust of innovation in composite manufacturing technology, however, has shifted to other places. Lockheed's ACCA program spotlighted the emergence of a third generation of out-of-autoclave materials. Small civil aircraft had been fashioned out of previous generations of this type of material, but those materials were not nearly strong enough to support the loads required by larger aircraft such as the 328Jet. In the future, manufacturers hope to build all-composite aircraft on a conventional production line, with localized ovens to cure specific parts. Parts or sections will no longer need to be diverted to cure for several hours inside an autoclave to obtain their strength properties. Lockheed's move with the X-55 ACCA jet represents a critical first attempt, but others are likely to follow soon. For its part, Boeing has revealed two major leaps in composite technology development on the military side: the 1990s-era Bird of Prey demonstrator, which included a single-piece composite structure, and the co-bonded, all-composite wing section for the X-45C demonstrator (now revived and expected to resume flight-testing as the Phantom Ray). The key properties of new out-of-autoclave materials are curing temperature and compression-after-impact strength, a statistic vital for determining crashworthiness. Third-generation resins now making an appearance in both Lockheed and Boeing demonstration programs represent major leaps in both categories. In terms of raw strength, Boeing states that third-generation materials can resist impact loads up to 25,000 pounds per square inch (psi), compared with 18,000 psi for the previous generation. That remains below the FAA standard for measuring crashworthiness of large commercial aircraft but may fit the standard for a new generation of military cargo aircraft that will eventually replace the C-130 and C-17 after 2020. In September 2009, the U.S. Air Force awarded Boeing a nearly $10-million contract to demonstrate such nonautoclave manufacturing technology.


Toward the Future
NASA remains active in the pursuit of new materials that will support fresh objectives for enabling a step change in efficiency for commercial aircraft of the next few decades. A key element of NASA's strategy is to promote the transition from conventional fuselage-and-wing designs for large commercial aircraft to flying wing designs, with the Boeing X-48 Blended Wing-Body subscale demonstrator as the model.

NASA’s Contributions to Aeronautics

7

NASA’s Langley Research Center started experimenting with this stitching machine in the early 1990s. The machine stitches carbon, Kevlar, and fiberglass composite preforms before they are infused with plastic epoxy through the resin transfer molding process. The machine was limited to stitching only small and nearly flat panels. NASA.

The concept assumes many changes in current approaches to flight controls, propulsion, and, indeed, expectations for the passenger experience. Among the many innovations to maximize efficiency, such flying wing airliners also must be supported by a radical new look at how composite materials are produced and incorporated in aircraft design. To support the structural technology for the BWB, Boeing faces the challenge of manufacturing an aircraft with a flat bottom, no constant section, and a diversity of shapes across the outer mold line.131 To meet these challenges, Boeing is returning to the stitching method, although with a different concept, called pultruded rod stitched efficient unitized structure (PRSEUS).
131. Graham Warwick, "Shaping the Future," Aviation Week & Space Technology, Feb. 2, 2009, p. 50.


Aviation Week & Space Technology described the idea: "This stitches the composite frames and stringers to the skin to produce a fail-safe structure. The frames and stringers provide continuous load paths and the nylon stitching stops cracks. The design allows the use of minimum-gauge post-buckled skins, and Boeing estimates a PRSEUS pressure vessel will be 28% lighter than a composite sandwich structure."132 Under a NASA contract, Boeing is building a 4-foot by 8-foot pressure box with multiple frames and a 30-foot-wide test article of the double-deck BWB airframe. The manufacturing process resembles past experience with the advanced stitching machine. Structure laid up from dry fabric is stitched before a machine pulls carbon fiber rods through pickets in the stringers. The process locks the structure and stringers into a preform without the need for a mold-line tool. The parts are cured in an oven, not an autoclave.133 The dream of designing a commercially viable, large transport aircraft made entirely out of plastic may finally soon be realized. The all-composite fuselage of the Boeing 787 and the proposed Airbus A350 are only the latest markers of progress toward this objective. But the next generation of both commercial and military transports will be the first to benefit from composite materials that may be produced and assembled nearly as efficiently as aluminum and steel.

NASA Beech King Air general aviation aircraft over the Dryden Flight Research Center. NASA.


NACA-NASA’s Contribution 8 to General Aviation

CASE

By Weneth D. Painter

General Aviation has always been an essential element of American aeronautics. The NACA and NASA have contributed greatly to its efficiency, safety, and reliability via research across many technical disciplines. The mutually beneficial bonds linking research in civil and military aeronautics have resulted in such developments as the supercritical wing, electronic flight controls, turbofan propulsion, composite structures, and advanced displays and instrumentation systems.


THOUGH COMMONLY ASSOCIATED IN THE PUBLIC MIND with small private aircraft seen buzzing around local airports and air parks, the term "General Aviation" (hereafter GA) is primarily a definition of aircraft utilization rather than a classification per se of aircraft physical characteristics or performance. GA encompasses flying machines ranging from light personal aircraft to Mach 0.9+ business jets, comprising those elements of U.S. civil aviation which are neither certified nor supplemental air carriers: kit planes and other home-built aircraft, personal pleasure aircraft, commuter airlines, corporate air transports, aircraft manufacturers, unscheduled air taxi operations, and fixed-base operators and operations. Overall, NACA-NASA research has profoundly influenced all of this, contributing notably to the safety and efficiency of GA worldwide. Since the creation of the NACA in 1915, and continuing after the establishment of NASA in 1958, Agency engineers have extensively investigated design concepts for GA, GA aircraft themselves, and the operating environment and related areas of inquiry affecting the GA community. In particular, they have made great contributions by documenting the results of various wind tunnel and flight tests of GA aircraft. These results have strengthened both industrial practice within the GA industry itself and the educational training of America's science, technology, engineering, and mathematics workforce, helping buttress and advance America's stature as an aerospace nation.


This study discusses the advancements in GA through a review of selected applications of flight disciplines and aerospace technology.


The Early Evolution of General Aviation
The National Advisory Committee for Aeronautics (NACA) was formed on March 3, 1915, to provide advice on, and carry out much of, the cutting-edge research in aeronautics in the United States. The organization was modeled on the British Advisory Committee for Aeronautics. President Woodrow Wilson created the advisory committee in an effort to organize American aeronautical research and raise it to the level of European aviation. Its charter and $5,000 initial appropriation (low even in 1915) were appended to a naval appropriations bill and passed with little fanfare. The committee's mission was "to supervise and direct the scientific study of the problems of flight, with a view to their practical solution," and to "direct and conduct research and experiment in aeronautics."1 Thus, from its outset, it was far more than simply a bureaucratic panel distanced from design shop, laboratory, and flight line. The NACA soon involved itself across the field of American aeronautics, advising the Government and industry on a wide range of issues, including establishing the national air mail service, along with its night mail operations, and brokering a solution—the cross-licensing of aeronautics patents—to the enervating Wright-Curtiss patent feud that had hampered American aviation development in the pre-World War I era and that continued to do so even as American forces were fighting overseas. The NACA proposed establishing a Bureau of Aeronautics in the Commerce Department, granting funds to the Weather Bureau to promote safety in aerial navigation, licensing pilots, inspecting aircraft, and expanding airmail. It also made recommendations in 1925 to President Calvin Coolidge's Morrow Board that led to passage of the Air Commerce Act of 1926, the first Federal legislation regulating civil aeronautics. It continued to provide policy recommendations on the Nation's aviation until its incorporation into the National Aeronautics and Space Administration (NASA) in 1958.2

The NACA started working in the field of GA almost as soon as it was established. Its first research airplane programs, undertaken primarily by F.H. Norton, involved studying the flight performance, stability and control, and handling qualities of the Curtiss JN-4H, America's iconic "Jenny" of the "Great War" period and the first great American GA airplane as well.3 The initial aerodynamic and performance studies of Dr. Max M. Munk, a towering figure in the history of fluid mechanics, profoundly influenced the Agency's subsequent approach to aerodynamic research. Munk, the architect of American aerodynamic research methodology, dramatically transformed the Agency's approach to airfoil design by introducing the methods of the "Prandtl school" at Göttingen and by designing and supervising the construction of a radical new form of wind tunnel, the variable-density tunnel, which put NACA aerodynamics research at the forefront of the world standard. His GA influence began with a detailed study of the airflow around and through a biplane wing cellule (the upper and lower wings, connected with struts and wires, considered as a single design element). He produced a report in which the variation of the section, chord, gap, stagger, and decalage (the angle of incidence of the respective chords of the upper and lower wings), and their influence upon the available wing cell space for engines, cockpits, passengers, and luggage, were investigated with a great number of calculated examples, all of the numerical results being given in tables. Munk's report was in some respects a prototypical example of subsequent NACA-NASA research reports that, over the years, would prove beneficial to the development of GA by investigating a number of areas of particular concern, such as aircraft aerodynamic design, flight safety, spin prevention and recovery, and handling qualities.4

Arguably, these reports, which conveyed Agency research results to a public audience, were the most influential product of NACA-NASA research. They influenced not only the practice of engineering within the various aircraft manufacturers but also provided the latest information incorporated in many aeronautical engineering textbooks used in engineering schools. Though light aircraft are often seen as a by-product of the air transport revolution, in fact, they led, not followed, the expansion of commercial aviation, particularly in the United States. The interwar years saw explosive growth in American aeronautics, particularly private flying and GA. It is fair to state that the roots of the American air transport revolution were nurtured by individual entrepreneurs manufacturing light aircraft and beginning air mail and air transport services, rather than (as in Europe) largely by "top-down" government direction. As early as 1923, American fixed-base operators "carried 80,888 passengers and 208,302 pounds of freight."5 In 1926, there were a total of 41 private airplanes registered with the Federal Government. Just three years later, there were 1,454. The Depression severely curtailed private ownership, but although the number of private airplanes plummeted to 241 in 1932, it rose steadily thereafter to 1,473 in 1938, with Wichita, KS, emerging as the Nation's center of GA production, a distinction it still holds.6 Two of the many notable NACA-NASA engineers who were influenced by their exposure to Max Munk and had a special interest in GA, and who in turn greatly influenced subsequent aircraft design, were Fred E. Weick and Robert T. Jones. Weick arrived at NACA Langley Field, VA, in the 1920s after first working for the U.S. Navy's Bureau of Aeronautics.7 Weick subsequently conceived the NACA cowling that became a feature of radial-piston-engine civil and military aircraft design. The cowling both improved the cooling of such engines and streamlined the engine installation, reducing drag and enabling aircraft to fly higher and faster.
5. Roger E. Bilstein, Flight Patterns: Trends of Aeronautical Development in the United States, 1918–1929 (Athens: The University of Georgia Press, 1983), p. 63.
6. Donald M. Pattillo, A History in the Making: 80 Turbulent Years in the American General Aviation Industry (New York: McGraw-Hill, 1998), pp. 5–44; and Tom D. Crouch, "General Aviation: The Search for a Market, 1910–1976," in Eugene M. Emme, Two Hundred Years of Flight in America: A Bicentennial Survey (San Diego: American Astronautical Society and Univelt, 1977), Table 2, p. 129. For Wichita, see Jay M. Price and the AIAA Wichita Section, Wichita's Legacy of Flight (Charleston, SC: Arcadia Publishing, 2003).
7. Fred E. Weick and James R. Hansen, From the Ground Up: The Autobiography of an Aeronautical Engineer (Washington, DC: Smithsonian Institution Press, 1988).


This Curtiss AT-5A validated Weick’s NACA Cowling. The cowling increased its speed by 19 miles per hour, equivalent to adding 83 horsepower. Afterwards it became a standard design feature on radial-engine airplanes worldwide. NASA.

In the late fall of 1934, Robert T. Jones, then 23 years old, started a temporary, 9-month job at Langley as a scientific aide. He would remain with the NACA and NASA for the next half century, becoming particularly known for having independently discovered the benefits of wing sweep for transonic and supersonic flight. Despite his youth, Jones already had greater mathematical ability than any of his coworkers, who soon sought his expertise for various theoretical analyses. Jones had previously been a designer for the Nicholas Beazley Company in Marshall, MO, until the Great Depression collapsed the company and forced him to seek other employment, which he found as a Capitol Hill elevator operator. That work left him time to hone his mathematical abilities and gained him the patronage of senior officials, who arranged for his employment by the NACA.8 Jones and Weick formed a fruitful collaboration, exemplified by a joint report they prepared on the status of NACA lateral control research. Two things were considered of primary importance in judging the effectiveness of different control devices: the calculated banking and yawing motion of a typical small airplane caused by control deflection, and the stick force required to produce this control deflection.

8. Jones’s seminal paper was his “Properties of Low-Aspect-Ratio Pointed Wings at Speeds Below and Above the Speed of Sound,” NACA TN-1032 (1946).


The report included a table in which a number of different lateral control devices were compared.9 Unlike Jones, Weick eventually left the NACA to continue his work in the GA field, producing a succession of designs emphasizing inherent stability and stall resistance. His research mirrored Federal interest in developing cheap, yet safe, GA aircraft, an effort that resulted in a well-publicized design competition by the Department of Commerce, won by the innovative Stearman-Hammond Model Y of 1936. Weick had designed a contender himself, the W-1, and though he did not win, his continued research soon led him to develop one of the most distinctive and iconic "safe" aircraft of all time, his twin-fin, single-engine Ercoupe. It is perhaps a telling comment that Jones, one of aeronautics' most profound scientists, himself maintained and flew an Ercoupe into the 1980s.10

The Weick W-1 was an early attempt to build a cheap yet safe General Aviation airplane. NASA.

The NACA-NASA contributions to GA have come from research, development, test, and evaluation within the classic disciplines of aerodynamics, structures, propulsion, and controls, but they have also involved functional areas such as aircraft handling qualities and aircrew performance, aviation safety, aviation meteorology, air traffic control, and education and training.
9. Fred E. Weick and Robert T. Jones, "Résumé and Analysis of N.A.C.A. Lateral Control Research," NACA TR-605 (1937).
10. Weick and Hansen, From the Ground Up, pp. 137–140. Jones kept his Ercoupe at Half Moon Bay Airport, CA; recollection of R.P. Hallion, who knew Jones.


Weick’s Ercoupe is one of the most distinctive and classic General Aviation aircraft of all time. RPH.

The following are selected examples of such work and of how it has influenced and been adapted, applied, and exploited by the GA community.

Airfoil Evolution and Its Application to General Aviation
In the early 1930s, largely thanks to the work of Munk, the NACA had risen to world prominence in airfoil design, such status evident when, in 1933, the Agency released a report cataloging its airfoil research and presenting a definitive guide to the performance and characteristics of a wide range of airfoil shapes and concepts. Prepared by Eastman N. Jacobs, Kenneth E. Ward, and Robert M. Pinkerton, this document, TR-460, became a standard industry reference both in America and abroad.11 The Agency, of course, continued its airfoil research in the 1930s, making notable advances in the development of high-speed airfoil sections and of low-drag and laminar sections as well. By 1945, as valuable as TR-460 had been, it was outdated.
11. Eastman N. Jacobs, Kenneth E. Ward, and Robert M. Pinkerton, "The Characteristics of 78 Related Airfoil Sections from Tests in the Variable-Density Wind Tunnel," NACA TR-460 (1933); see also Ira H. Abbott and Albert E. von Doenhoff, Theory of Wing Sections, Including a Summary of Airfoil Data (New York: McGraw-Hill, 1949), p. 112.


And so, one of the most useful of all NACA reports, and one that likewise became a standard reference for use by designers and other aeronautical engineers in airplane airfoil and wing design, was its effective replacement, prepared in 1945 by Ira H. Abbott, Albert E. von Doenhoff, and Louis S. Stivers, Jr. This study, TR-824, was likewise effectively a catalog of NACA airfoil research, its authors noting (with justifiable pride) that


Recent information on the aerodynamic characteristics of NACA airfoils is presented. The historical development of NACA airfoils is briefly reviewed. New data are presented that permit the rapid calculation of the approximate pressure distributions for the older NACA four-digit and five-digit airfoils by the same methods used for the NACA 6-series airfoils. The general methods used to derive the basic thickness forms for NACA 6- and 7-series airfoils, together with their corresponding pressure distributions, are presented. Detailed data necessary for the application of the airfoils to wing design are presented in supplementary figures placed at the end of the paper. This report includes an analysis of the lift, drag, pitching-moment, and critical-speed characteristics of the airfoils, together with a discussion of the effects of surface conditions. Available data on high-lift devices are presented. Problems associated with lateral-control devices, leading-edge air intakes, and interference are briefly discussed, together with aerodynamic problems of application.12

While much of this is best remembered because of its association with the advanced high-speed aircraft of the transonic and supersonic era, much was applicable as well to the new, more capable civil transport and GA designs produced after the war.
12. Ira H. Abbott, Albert E. von Doenhoff, and Louis S. Stivers, Jr., "Summary of Airfoil Data," NACA TR-824 (1945), p. 1.
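Part of what made these catalogs so durable is that the NACA expressed its airfoil families in simple algebraic form. As an illustration (a sketch using the standard published half-thickness polynomial, not any NACA program), the following Python fragment computes ordinates for a symmetric four-digit section of the kind tabulated in TR-460:

import math

def naca4_half_thickness(x, t):
    """Half-thickness of a NACA four-digit section at chordwise station x.

    x: position as a fraction of chord, 0.0 (leading edge) to 1.0 (trailing edge)
    t: maximum thickness as a fraction of chord (0.12 for a NACA 0012)
    """
    return 5.0 * t * (0.2969 * math.sqrt(x) - 0.1260 * x
                      - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)

# Tabulate a few upper- and lower-surface ordinates for the NACA 0012.
for x in (0.025, 0.1, 0.3, 0.5, 0.9):
    print(f"x/c = {x:5.3f}   y/c = +/-{naca4_half_thickness(x, 0.12):.4f}")

Cambered four- and five-digit sections add a separately specified mean line to this same thickness form, one reason the catalog format of TR-460 and TR-824 proved so convenient for designers.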


Two key contributions to the jet-age expansion of GA were the supercritical wing and the wingtip winglet, both conceived by Richard Travis Whitcomb, a legendary NACA-NASA Langley aerodynamicist who was, overall, the finest aeronautical scientist of the post-Second World War era. More comfortable working in the wind tunnel than sitting at a desk, Whitcomb first gained fame by experimentally investigating the zero-lift drag of wing-body combinations through the transonic flow regime, based on analyses by W.D. Hayes.13 His resulting "Area Rule" for transonic flow represented a significant contribution to the aerodynamics of high-speed aircraft, first manifested by its application to the so-called "Century series" of Air Force jet fighters.14 Whitcomb followed the area rule a decade later, in the 1960s, with the supercritical wing, which delayed the sharp drag rise associated with shock wave formation by having a flattened top with pronounced curvature toward its trailing edge. First tested on a modified T-2C jet trainer, and then on a modified transonic F-8 jet fighter, the supercritical wing proved in actual flight that Whitcomb's concept was sound. This distinctive profile would become a key design element for both jet transports and high-speed GA aircraft in the 1980s and 1990s, offering a beneficial combination of lower drag, better fuel economy, greater range, and higher cruise speed, exemplified by its application on GA aircraft such as the Cessna Citation X, the world's first business jet to routinely fly faster than Mach 0.90.15 The application of Whitcomb's supercritical wing to General Aviation began with the GA community itself, whose representatives approached Whitcomb after a Langley briefing, enthusiastically endorsing his concept. In response, Whitcomb launched a new Langley program, the Low- and Medium-Speed Airfoil Program, in 1972. This effort, blending 2-D computer analysis and tests in the Langley Low-Turbulence Pressure Tunnel, led to development of the GA(W)-1 airfoil.16

An early application of the GA(W)-1 came on the Advanced Technology Light Twin (ATLIT) research airplane, a Piper PA-34 Seneca twin-engine aircraft modified to employ a high-aspect-ratio wing with a GA(W)-1 airfoil with winglets. Testing on ATLIT proved the practical advantages of the design, as did subsequent follow-on ground tests of the ATLIT in the Langley 30- by 60-foot Full-Scale Tunnel.18 Subsequently, the NASA-sponsored General Aviation Airfoil Design and Analysis Center (GA/ADAC) at the Ohio State University, led by Dr. Gerald M. Gregorek, modified a single-engine Beech Sundowner light aircraft to undertake a further series of tests of a thinner variant, the GA(W)-2. GA/ADAC flight tests of the Sundowner from 1976 to 1977 confirmed that the Langley results were not merely fortuitous, paving the way for derivatives of the GA(W) family to be applied to a range of new aircraft designs, starting with the Beech Skipper, the Piper Tomahawk, and the Rutan VariEze.19 Following on the derivation of the GA(W) family, NASA Langley researchers, in concert with industry and academic partners, continued refinement of airfoil development, exploring natural laminar flow (NLF) airfoils, previously largely restricted to exotic, smoothly finished sailplanes but now possible thanks to the revolutionary development of smooth composite structures with easily manufactured complex shapes tailored to the specific aerodynamic needs of the aircraft under development.20 Langley researchers subsequently blended their own conceptual and tunnel research with a computational design code developed at the University of Stuttgart to generate a new natural laminar flow airfoil section, the NLF(1).21

Like the GA(W) before it, it served as the basis for various derivative sections. After flight testing on various testbeds, it transitioned into mainstream GA design beginning with a derivative of the Cessna Citation II in 1990. Thereafter, it has become a standard feature of many subsequent aircraft.22 The second Whitcomb-rooted development that offered great promise in the 1970s was the so-called winglet.23 The winglet promised to reduce energy consumption and drag dramatically by minimizing the wasteful tip losses caused by vortex flow off the wingtip of the aircraft.

Though reminiscent of tip plates, which had long been tried over the years without much success, the winglet was a more refined and better-thought-out concept, which could actually take advantage of the strong flow-field at the wingtip to generate a small forward lift component, much as a sail does. Primarily, however, it altered the span-wise distribution of circulation along the wing, reducing the magnitude and energy of the trailing tip vortex. First to use it was the Gates Learjet Model 28, aptly named the "Longhorn," which completed its first flight in August 1977. The Longhorn had 6 to 8 percent better range than previous Lears.24 The winglet was experimentally verified for large-aircraft application by being mounted on the wingtips of a first-generation jet transport, the Boeing KC-135 Stratotanker, progenitor of the civil 707 jetliner, and tested at Dryden from 1979 to 1980. The winglets, designed with a general-purpose airfoil that retained the same airfoil cross-section from root to tip, could be adjusted to seven different cant and incidence angles to enable a variety of research options and configurations. Tests revealed the winglets increased the KC-135's range by 6.5 percent—a measure of both aerodynamic and fuel efficiency—better than the 6 percent projected by Langley wind tunnel studies and consistent with results obtained with the Learjet Longhorn. With this experience in hand, the winglet was swiftly applied to GA aircraft and airliners, and today most airliners, and many GA aircraft, use them.25
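Such range figures translate readily into drag terms, because the Breguet range equation makes cruise range directly proportional to lift-to-drag ratio when speed, specific fuel consumption, and fuel fraction are held fixed. The short Python check below (an illustration under those simplifying assumptions, not part of the NASA analysis) shows what the measured 6.5 percent gain implies:

range_gain = 0.065                            # measured KC-135 range improvement
ld_gain = range_gain                          # range scales with L/D when speed,
                                              # sfc, and fuel fraction are fixed
drag_reduction = 1.0 - 1.0 / (1.0 + ld_gain)  # lift unchanged, so drag must fall
print(f"Implied total-drag reduction: {drag_reduction:.1%}")  # about 6.1 percent

In other words, the flight-test result corresponds to shaving roughly 6 percent from the airplane's total cruise drag, a remarkable return for so small a modification.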


24. See Neil A. Armstrong and Peter T. Reynolds, "The Learjet Longhorn Series: The First Jets with Winglets," in Society of Experimental Test Pilots, 1978 Report to the Aerospace Profession (Lancaster: SETP, 1978), pp. 57–66.
25. Richard T. Whitcomb, "A High Subsonic Speed Wind-Tunnel Investigation of Winglets on a Representative Second-Generation Jet Transport Wing," NASA TN D-8264 (1976); and NASA Dryden Flight Research Center, "Winglets," NASA Technology Facts, TF 2004-15 (2004), pp. 1–4. Another interesting project in this time period was the NASA AD-1 Oblique Wing, whose flight test was conducted at Dryden. The oblique wing concept originated with Ames's Robert T. Jones. The NASA Project Engineer was Weneth "Wen" Painter, and the Project Pilot was Tom McMurtry. The team successfully demonstrated that an aircraft wing could be pivoted obliquely from 0 to 60 degrees during flight. The aircraft was flown 79 times during the research program, which evaluated the basic pivot-wing concept and gathered information on handling qualities and aerodynamics at various speeds and degrees of pivot. A supersonic version would have been designed with a more complex control system, such as fly-by-wire. The AD-1 aircraft was flown by 19 pilots: 2 USAF pilots, 2 Navy pilots, and 15 NASA Dryden, Langley, and Ames research pilots. The final flights of the AD-1 occurred at the 1982 Experimental Aircraft Association's (EAA) annual exhibition at Oshkosh, WI, where it flew eight times to demonstrate its unique configuration, a swan song watched over by Jones and his old colleague Weick.


The Propulsion Perspective
Aerodynamics always constituted an important facet of NACA-NASA GA research, but no less significant is flight propulsion, for the aircraft engine is often termed the "heart" of an airplane. In the 1920s and 1930s, NACA research by Fred Weick, Eastman Jacobs, John Stack, and others had profoundly influenced the efficiency of the piston engine-propeller-cowling combination.26 Agency work in the early jet age had been no less influential upon improving the performance of turbojet, turboshaft, and turbofan engines, producing data judged "essential to industry designers."27 The rapid proliferation of turbofan-powered GA aircraft—over 2,100 of which were in service by 1978, with 250 more being added each year—stimulated even greater attention.28 NASA swiftly supported development of a specialized computer-based program for assessing engine performance and efficiency. In 1977, for example, Ames Research Center funded development of GASP, the General Aviation Synthesis Program, by the Aerophysics Research Corporation, to compute propulsion system performance for engine sizing and studies of overall aircraft performance. GASP consisted of an overall program routine, ENGSZ, to determine appropriate fanjet engine size, with specialized subroutines such as ENGDT and NACDG assessing engine data and nacelle drag. Additional subroutines treated performance for propeller powerplants, including PWEPLT for piston engines, TURBEG for turboprops, ENGDAT and PERFM for propeller characteristics and performance, GEARBX for gearbox cost and weight, and PNOYS for propeller and engine noise.29
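The subroutine names sketch a straightforwardly modular architecture: a sizing driver calling specialized analyses for engine data and installation drag. GASP itself was a FORTRAN program, and its actual methods are not reproduced here; the Python paraphrase below is an illustration only, with toy placeholder arithmetic standing in for the real engine and drag models:

def engdt(throttle):
    """Stand-in for ENGDT: uninstalled thrust from a toy engine data model."""
    return 20_000.0 * throttle                # lbf, placeholder value

def nacdg(nacelle_diameter_ft):
    """Stand-in for NACDG: nacelle drag from a toy drag model."""
    return 15.0 * nacelle_diameter_ft ** 2    # lbf, placeholder value

def engsz(thrust_required):
    """Stand-in for ENGSZ: scale a baseline engine to meet installed thrust."""
    installed = engdt(1.0) - nacdg(5.0)       # baseline installed thrust
    return thrust_required / installed        # scale factor on the baseline

print(f"Engine scale factor: {engsz(25_000.0):.2f}")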

Such study efforts reflected the increasing numbers of noisy turbine-powered aircraft operating into over 14,500 airports and airfields in the United States, most in suburban areas, as well as the growing cost of aviation fuel and the consequent quest for greater engine efficiency. NASA had long been interested in reducing jet engine noise, and the Agency's first efforts to find means of suppressing jet noise dated to the late NACA, in 1957. The needs of the space program had necessarily focused Lewis research primarily on space, but the Center returned vigorously to air-breathing propulsion at the conclusion of the Apollo program, spurred by the widespread introduction of turbofan engines for military and civil purposes and the onset of the first oil crisis in the wake of the 1973 Arab-Israeli War. Out of this came a variety of cooperative research efforts and programs, including the congressionally mandated ACEE program (Aircraft Energy Efficiency, launched in 1975), the NASA-industry QCSEE (Quiet Clean STOL Experimental Engine) study effort, and the QCGAT (Quiet Clean General Aviation Turbofan) program. All benefited future propulsion studies, the latter two particularly so.30 QCGAT, launched in 1975, involved awarding initial study contracts to Garrett AiResearch, General Electric, and Avco Lycoming to explore applying large turbofan technology to GA needs. Next, AiResearch and Avco were selected to build small turbofan demonstrator engines suitable for GA applications that could meet stringent noise, emissions, and fuel consumption standards using an existing gas-generator engine core. AiResearch and Avco took different approaches, the former with a high-thrust engine suitable for long-range, high-speed, and high-altitude GA aircraft (using as a baseline a stretched Lear 35), and the latter with a lower-thrust engine for a lower, slower, intermediate-range design (based upon a Cessna Citation I). Subsequent testing indicated that each company did an excellent job in meeting the QCGAT program goals, each having various strengths. The Avco engine was quieter, and both engines bettered the QCGAT emissions goals for carbon monoxide and unburned hydrocarbons. While the Avco engine was "right at the goal" for oxides of nitrogen emissions, the AiResearch engine was higher, though much better than the baseline TFE-731-2 turbofan used for comparative purposes.


30. These are treated in other case studies. For Lewis and NASA aero-propulsion work in this period, see Dawson, Engines and Innovation, pp. 203–205; and Jeffrey L. Ethell, Fuel Economy in Aviation, NASA SP-462 (NASA Scientific and Technical Information Branch, 1983), passim.


While the AiResearch engine met sea-level takeoff and design cruise thrust goals, the Avco engine missed both, though its measured numbers were nevertheless "quite respectable." Overall, NASA considered that the QCGAT program, executed on schedule and within budget, constituted "a very successful NASA joint effort with industry," concluding that it had "demonstrated that noise need not be a major constraint on the future growth of the GA turbofan fleet."31 Subsequently, NASA launched GATE (General Aviation Turbine Engines) to explore other opportunities for the application of small turbine technology to GA, awarding study contracts to AiResearch, Detroit Diesel Allison, Teledyne CAE, and Williams Research.32 GA propulsion study efforts gained renewed impetus through the Advanced General Aviation Transport Experiment (AGATE) program launched in 1994, which is discussed later in this study.

Understanding GA Aircraft Behavior and Handling Qualities
As noted earlier, NACA research on aircraft performance began at the Agency's onset. The steady progression of aircraft technology was matched by an equivalent progression in the understanding and comprehension of aircraft motions, beginning with extensive studies of the loads, stability, control, and handling qualities fighter biplanes encountered during steady and maneuvering flight.33 At the end of the interwar period, NACA Langley researchers undertook a major evaluation of the flying qualities of American GA aircraft, though the results of that investigation were not disseminated because of the outbreak of the Second World War and the need for the Agency to focus its attention on military, not civil, needs. Langley test pilots flew five representative aircraft, and the test results were, on the whole, generally satisfactory. Control effectiveness was generally good, and the aircraft demonstrated a desirable degree of longitudinal (pitch) inherent stability, though two of the designs had degraded longitudinal stability at low speeds.
31. Gilbert K. Sievers, "Summary of NASA QCGAT Program," in NASA Lewis RC, General Aviation Propulsion, NASA CP-2126, pp. 189–190; see also his "Overview of NASA QCGAT Program" in the same volume, pp. 2–4.
32. See William C. Strack, "New Opportunities for Future, Small, General-Aviation Turbine Engines (GATE)," in NASA Lewis RC, General Aviation Propulsion, NASA CP-2126, pp. 195–197.
33. For example, James H. Doolittle, "Accelerations in Flight," NACA TR-203 (1925); Richard V. Rhode, "The Pressure Distribution Over the Horizontal and Vertical Tail Surfaces of the F6C-4 Pursuit Airplane in Violent Maneuvers," NACA TR-307 (1929); and Richard V. Rhode, "The Pressure Distribution Over the Wings and Tail Surfaces of a PW-9 Pursuit Airplane in Flight," NACA TR-364 (1931).


Lateral (roll) stability was likewise satisfactory, but "wide variations" were found in directional stability, though rudder inputs on each were sufficient to trim the aircraft for straight flight. Stall warning (exemplified by progressively more violent airframe buffeting) was good, and each aircraft possessed adequate stall recovery behavior, though departures from controlled flight during stalls in turns proved more violent (the airplane rolling in the direction of the downward wing) than stalls made from wings-level flight. In all cases, aileron power was inadequate to maintain lateral control at the stall. Stall recovery was "easily made" in every case simply by pushing forward on the elevator. Overall, if some performance deficiencies existed—for example, the tendency to spiral instability or the lack of lateral control effectiveness at the stall—such limitations were small compared with the dramatic handling qualities deficiencies of many early aircraft just two decades previously, at the end of the First World War. This survey demonstrated that by 1940 America had mastered the design of the practical, useful GA airplane. Indeed, such aircraft, built by the thousands, would play a critical role in initiating many young Americans into wartime service as combat and combat support pilots.34


The Aeronca Super Chief shown here was evaluated at Langley as part of a prewar survey of General Aviation aircraft handling and flying qualities. NASA.

During the Second World War, the NACA generated a new series of so-called Wartime Reports, complementing its prewar series of Technical Reports (TR), Technical Memoranda (TM), and Technical Notes (TN).
34. Paul A. Hunter, "Flight Measurements of the Flying Qualities of Five Light Airplanes," NACA TN-1573 (1948), pp. 1–2, 8–9, 19–20.


These reports subsequently had great influence upon aircraft design and engineering practice, particularly after the war, when applied to high-performance GA aircraft. The NACA studied various ways to improve aircraft performance through drag reduction of single-engine military fighter-type aircraft and other designs, work resulting in improved handling qualities and increased airspeeds. The first Wartime Report was published in October 1940 by NACA engineers C.H. Dearborn and Abe Silverstein. It described the results of tests investigating methods for increasing the maximum speed of 11 single-engine military aircraft for the Army Air Corps. Their tests found inefficient design features on many of these airplanes, indicating the desirability of analyzing and combining all of the results into a single paper for distribution to designers. It highlighted one of the major problems afflicting aircraft design and performance analysis: understanding the interrelationship of design, performance, and handling qualities.35

The fifteen different types of aircraft evaluated as part of a landmark study on longitudinal stability represented various configurations and design layouts, both single and multiengine, and from light general aviation designs to experimental heavy bombers. From NACA TR-711 (1941).

35. C.H. Dearborn and Abe Silverstein, "Drag Analysis of Single-Engine Military Airplanes Tested in the NACA Full-Scale Wind Tunnel," NACA WR-489 (1940).


The NACA had long recognized "the need for quantitative design criterions for describing those qualities of an airplane that make up satisfactory controllability, stability, and handling characteristics," and the individual who, more than any other, spurred Agency development of them was Robert R. Gilruth, later a towering figure in the development of America's manned spaceflight program.36 Gilruth's work built upon earlier preliminary efforts by two fellow Langley researchers, Hartley A. Soulé (later chairman of the NACA Research Airplane Projects Panel that oversaw the postwar X-series transonic and supersonic research airplane programs) and chief Agency test pilot Melvin N. "Mel" Gough, though it went considerably beyond them.37 In 1941, Gilruth and M.D. White assessed the longitudinal stability characteristics of 15 different airplanes (including bombers, fighters, transports, trainers, and GA sport aircraft).38 Gilruth followed this with another study, in partnership with W.N. Turner, on the lateral control required for satisfactory flying qualities, again based on flight tests of numerous airplanes.39 Gilruth capped his research with a landmark report establishing the requirements for satisfactory handling qualities in airplanes, issued first as an Advance Confidential Report in April 1941, then as a Wartime Report, and, finally, in 1943, as one of the Agency's Technical Reports, TR-755. Based on "real-world" flight-test results, TR-755 defined what measured characteristics were significant in the definition of satisfactory flying qualities, what was reasonable to require from an airplane (and thus to establish as design requirements), and what influence various design features had upon the flying qualities of the aircraft once it entered flight testing.40 Together, this trio of studies profoundly influenced the field of flying qualities assessment. But what was equally needed was a means of establishing a standard measure for pilot assessment of aircraft handling qualities.

This proved surprisingly difficult to achieve and took a number of years of effort. Indeed, developing such measures took on such urgency and constituted so clear a requirement that it was one of the compelling reasons underlying the establishment of professional test pilot training schools, beginning with Britain's Empire Test Pilots' School, established in 1943.41 The measure was finally derived by two American test pilots, NASA's George Cooper and the Cornell Aeronautical Laboratory's Robert Harper, Jr., thereby establishing one of the essential tools of flight testing and flight research, the Cooper-Harper rating scale, issued in 1969 in a seminal report.42 This evaluation tool quickly replaced earlier scales and measures and won international acceptance, influencing the flight-test evaluation of virtually all flying craft, from light GA aircraft through hypersonic lifting reentry vehicles and rotorcraft.
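The scale's enduring utility lies in its structure: three sequential yes/no judgments route the evaluating pilot to a band of ratings, within which standardized descriptors fix the final score from 1 (excellent) to 10 (uncontrollable). The Python fragment below is a compact paraphrase of that decision logic; the band descriptions in the comments are paraphrased, not quoted from the 1969 report:

def cooper_harper_band(controllable, adequate_performance, satisfactory_as_is):
    """Return the band of ratings the pilot then refines by descriptor."""
    if not controllable:
        return range(10, 11)   # 10: control will be lost in the required task
    if not adequate_performance:
        return range(7, 10)    # 7-9: major deficiencies, improvement mandatory
    if not satisfactory_as_is:
        return range(4, 7)     # 4-6: deficiencies warrant improvement
    return range(1, 4)         # 1-3: satisfactory without improvement

print(list(cooper_harper_band(True, True, False)))   # -> [4, 5, 6]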

The combination of the work undertaken by Gilruth, Cooper, and their associates dramatically improved flight safety and flight efficiency and must therefore be considered one of the NACA-NASA's major contributions to aviation.43


The Cessna C-190 shown here was evaluated at Langley as part of an early postwar assessment of General Aviation aircraft performance. NASA.

Despite the demands of wartime research, the NACA and its research staff continued to maintain a keen interest in the GA field, particularly as expectations (subsequently frustrated by postwar economics) anticipated massive sales of GA aircraft as soon as the conflict ended. While this was true in 1946—when 35,000 were sold in a single year!—the postwar market swiftly contracted by half, and then fell again, to just 3,000 in 1952, a "boom-bust" cycle the field would, alas, all too frequently repeat over the next half-century.44 Despite this, hundreds of NACA general-aviation-focused reports, notes, and memoranda were produced, many reflecting flight tests of new and interesting GA designs, but some covering already-classic machines such as the Douglas DC-3, which underwent a flying qualities evaluation at Langley in 1950, both as an exercise to calculate its stability derivatives and as a means to update and refine the then-existing Air Force and Navy handling qualities specification guidebooks.

43. For background of this rating, see George Cooper, Robert Harper, and Roy Martin, “Pilot Rating Scales,” in Society of Experimental Test Pilots, 2004 Report to the Aerospace Profession (Lancaster, CA: Society of Experimental Test Pilots, 2004), pp. 319–337. 44. Crouch, “General Aviation: The Search for a Market,” p. 126.


Not surprisingly, the project pilot concluded, "the DC-3 is a very comfortable airplane to fly through all normal flight regimes, despite fairly high control forces about all three axes."45 On October 4, 1957, Sputnik rocketed into orbit, heralding the onset of the "Space Age" and the consequent transformation of the NACA into the National Aeronautics and Space Administration (NASA). But despite the new national focus on space, NASA maintained a broad program of aeronautical research—the lasting legacy of the NACA—even in the shadow of Apollo and the Kennedy-mandated drive to Tranquility Base.

The Beech Debonair, one of many General Aviation aircraft types evaluated at the NASA Flight Research Center (now the NASA Dryden Flight Research Center). NASA.

This included, in particular, the field of GA flying and handling qualities. The first such report written under NASA, in 1960, presented the status of spin research on recent airplane designs as interpreted at the NASA Langley Research Center—a traditional area of concern, particularly as the spin was a killer of low-flying-time pilots.46

Sporadically, NASA researchers flight-tested new GA designs to assess their handling qualities, performance, and flight safety, their flight-test reports frankly detailing both strengths and deficiencies. In December 1964, for example, NASA Flight Research Center test pilot William Dana (one of the Agency's X-15 pilots) evaluated a Beech Debonair, a conventional-tailed derivative of the V-tail Beech Bonanza. Dana found the sleek Debonair a satisfactory aircraft overall. It had excellent longitudinal, spiral, and speed stability, with good roll damping and "honest" stall behavior in "clean" (landing gear retracted) configuration. But he faulted it for a lack of rudder trim that hurt its climb performance, a lack of "much warning, either by stick or airframe buffet" of impending stalls, and poor gear-down stall performance manifested by an abrupt left wing drop that hindered recovery. Finally, the plane's tendency to promote pilot-induced oscillations (PIO) during its landing flare earned it a pilot-rating grade of "C" for landings.47 The growing recognition that GA technology had advanced far beyond the state existing at the time of the NACA's first qualitative examination of light aircraft handling qualities triggered one of the most significant of NASA's GA assessment programs. In 1966, at the height of the Apollo program, pilots and engineers at the Flight Research Center performed an evaluation of the handling qualities of seven GA aircraft, subsequently expanding this study to include the handling qualities of other light aircraft and advanced control systems and displays. The aircraft for the 1966 study were a mix of popular single- and twin-engine, high- and low-wing types. Project pilot was Fred W. Haise (subsequently an Apollo 13 astronaut); Marvin R. Barber, Charles K. Jones, and Thomas R. Sisk were project engineers.48 As a group, the seven aircraft all exhibited generally satisfactory stability and control characteristics. However, these characteristics had their limits, as researchers noted:

The qualitative portion of the program showed the handling qualities were generally satisfactory during visual and instrument flight in smooth air. However, atmospheric turbulence degraded these handling qualities, with the greatest degradation noted during instrument landing system approaches. Such factors as excessive control-system friction, low levels of static stability, high adverse yaw, poor Dutch roll characteristics, and control-surface float combined to make precise instrument tracking tasks, in the presence of turbulence, difficult even for experienced instrument pilots. The program revealed three characteristics of specific airplanes that were considered unacceptable if encountered by inexperienced or unsuspecting pilots: (1) a violent elevator force reversal at reduced load factors in the landing configuration, (2) power-on stall characteristics that culminate in rapid roll-offs and/or spins, and (3) neutral-to-unstable static longitudinal stability at aft center of gravity. A review indicated that existing criteria had not kept pace with aircraft development in areas of Dutch roll, adverse yaw, effective dihedral, and allowable trim changes with gear, flap, and power. This study indicated that criteria should be specified for control-system friction and control-surface float. This program suggested a method of quantitatively evaluating the handling qualities of aircraft by the use of a pilot-workload factor.49

As well, all of the aircraft tested had "undesirable and inconsistent placement of both primary flight instruments and navigational displays," increasing pilot workload, a matter of critical concern during precision instrument landing approaches.50
49. Barber et al., "An Evaluation of the Handling Qualities of Seven General-Aviation Aircraft," p. 1.
50. Barber et al., "An Evaluation of the Handling Qualities of Seven General-Aviation Aircraft," p. 16.


Further, they all lacked good stall warning (defined as progressively strong airframe buffet prior to stall onset). Two had "unacceptable" stall characteristics, one entering an "uncontrollable" left roll/yaw and altitude-consuming spin, and the other having "a rapid left rolloff in the power-on accelerated stall with landing flaps extended."51 The 1966 survey stimulated more frequent evaluations of GA designs by NASA research pilots and engineers, both out of curiosity and sometimes after accounts surfaced of marginal or questionable behavior. NASA test pilots and engineers found that while various GA designs had "generally satisfactory" handling qualities for flight in smooth air and under visual conditions, they had far different qualities in turbulent flight and with degraded visibility. Control system friction, longitudinal and spiral instability, adverse yaw, combined lateral-directional "Dutch roll" characteristics, and abrupt trim changes when deploying landing gear or flaps and when adding or subtracting power all inhibited effective precision instrument tracking. Thus, instrument landing approaches quickly taxed a pilot, markedly increasing pilot workload.


The workhorse Piper PA-30 on final approach for a lakebed landing at the Dryden Flight Research Center. NASA.

The FRC team explored applying advanced control systems and displays, modifying a light twin-engine Piper PA-30 Twin Comanche business aircraft as a GA testbed with a flight-director display and an attitude-command control system.

The result, demonstrated in 72 flight tests and over 120 hours of operation, was "a flying machine that borders on being perfect from a handling qualities standpoint during ILS approaches in turbulent air." The team presented their findings at a seminal NASA conference on aircraft safety and operating problems held at the Langley Research Center in May 1971.52 The little PA-30 proved a workhorse, employed for a variety of research studies, including exploring remotely piloted vehicle technology.53 From 1969 to 1972, NASA researchers Chester Wolowicz and Roxanah Yancey undertook wind tunnel and flight tests on it to investigate and assess its longitudinal and lateral static and dynamic stability characteristics.54 These tests documented representative state-of-the-art analytical procedures and design data for predicting the subsonic longitudinal static and dynamic stability and control characteristics of a light, propeller-driven airplane.55 But the tests also confirmed, as one survey undertaken by North Carolina State University researchers for NASA concluded, that much work remained to be done to define and properly quantify the desirable handling qualities of GA aircraft.56 Fortunately, a key tool was rapidly maturing that made such analysis far more attainable than it would have been just a few years previously: the computer.
52. Paul C. Loschke, Marvin R. Barber, Calvin R. Jarvis, and Einar K. Enevoldson, "Handling Qualities of Light Aircraft with Advanced Control Systems and Displays," in Philip Donely et al., NASA Aircraft Safety and Operating Problems, v. 1, NASA SP-270 (Washington, DC: NASA Scientific and Technical Information Office, 1971), p. 189. NASA has continued its research on applying sophisticated avionics to civil and military aircraft for flight safety purposes, as examined by Robert Rivers in a case on synthetic vision systems in this volume.
53. Discussed in a companion case study in this series by Peter Merlin.
54. Marvin P. Fink and Delma C. Freeman, Jr., "Full-Scale Wind-Tunnel Investigation of Static Longitudinal and Lateral Characteristics of a Light Twin-Engine Aircraft," NASA TN D-4983 (1969); Chester H. Wolowicz and Roxanah B. Yancey, "Longitudinal Aerodynamic Characteristics of Light Twin-Engine, Propeller-Driven Airplanes," NASA TN D-6800 (1972); and Chester H. Wolowicz and Roxanah B. Yancey, "Lateral-Directional Aerodynamic Characteristics of Light, Twin-Engine Propeller-Driven Airplanes," NASA TN D-6946 (1972).
55. Afterwards, Wolowicz and Yancey expanded their research to include experimental determination of airplane mass and inertial characteristics. See Chester H. Wolowicz and Roxanah B. Yancey, "Experimental Determination of Airplane Mass and Inertial Characteristics," NASA TR R-433 (1974).
56. Frederick O. Smetana, Delbert C. Summey, and W. Donald Johnson, "Riding and Handling Qualities of Light Aircraft—A Review and Analysis," NASA CR-1975 (1972).

434

Case 8 | NACA-NASA’s Contribution to General Aviation

flight-test data. Over several decades, estimating stability and control parameters from flight-test data had progressed through simple analog matching methodologies, time vector analysis, and regression analysis.57 A joint program between the NASA Langley Research Center and the Aeronautical Laboratory of Princeton University using a Ryan Navion demonstrated that an iterative “maximum-likelihood minimum variance” parameter estimation procedure could be used to extract key aerodynamic parameters based on flight test results, but also showed that caution was warranted. Unanticipated relations between the various parameters had made it difficult to sort out individual values and indicated that prior to such studies, researchers should have a reliable mathematical model of the aircraft.58 At the Flight Research Center, Richard E. Maine and Kenneth W. Iliff extended such work by applying IBM’s FORTRAN programming language to ease determination of aircraft stability and control derivatives from flight data. Their resulting program, a maximum likelihood estimation method supported by two associated programs for routine data handling, was validated by successful analysis of 1,500 maneuvers executed by 20 different aircraft and was made available for use by the aviation community via a NASA Technical Note issued in April 1975.59 Afterwards, NASA, the Beech Aircraft Corporation, and the Flight Research Laboratory at the University of Kansas collaborated on a joint flight test of a loaned Beech 99 twin-engine commuter aircraft, extracting longitudinal and lateral-directional stability derivatives during a variety of maneuvers at assorted angles of attack

and in clean and flaps-down condition. “In general,” researchers concluded, “derivative estimates from flight data for the Beech 99 airplane were quite consistent with the manufacturer’s predictions.”60 Another analytical tool was thus available for undertaking flying and handling qualities analysis.
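The output-error idea underlying such maximum-likelihood programs can be sketched compactly. The Python fragment below is a minimal illustration, not the Maine-Iliff FORTRAN code: the two-state short-period pitch model, the derivative names (Z_alpha, M_alpha, M_q, M_de), and every numerical value are assumptions invented for the example. A candidate set of stability and control derivatives is used to simulate the airplane’s response to a recorded control input, and the derivatives are adjusted iteratively until the measured response is most likely, which, for Gaussian measurement noise, amounts to minimizing the weighted output error.

import numpy as np
from scipy.optimize import minimize

def simulate(theta, u, dt):
    # Hypothetical two-state short-period model: x = [alpha, q]
    # (angle of attack and pitch rate), driven by elevator input u.
    Za, Ma, Mq, Mde = theta
    A = np.array([[Za, 1.0], [Ma, Mq]])
    B = np.array([0.0, Mde])
    x = np.zeros(2)
    ys = np.empty((len(u), 2))
    for k, uk in enumerate(u):
        x = x + dt * (A @ x + B * uk)  # forward-Euler time integration
        ys[k] = x
    return ys

def neg_log_likelihood(theta, u, y_meas, dt, sigma=0.01):
    # With Gaussian measurement noise of known sigma, maximizing the
    # likelihood is the same as minimizing the weighted squared output error.
    r = (y_meas - simulate(theta, u, dt)) / sigma
    return 0.5 * np.sum(r ** 2)

# Synthetic "flight test": an elevator doublet, assumed "true" derivatives,
# and a noisy measured response standing in for recorded flight data.
dt, n = 0.02, 500
u = np.zeros(n)
u[50:100], u[100:150] = 0.05, -0.05
theta_true = np.array([-1.2, -8.0, -2.5, -10.0])
rng = np.random.default_rng(0)
y_meas = simulate(theta_true, u, dt) + 0.01 * rng.standard_normal((n, 2))

# Iterative search for the maximum-likelihood derivative estimates.
res = minimize(neg_log_likelihood, x0=[-1.0, -5.0, -1.0, -5.0],
               args=(u, y_meas, dt), method="Nelder-Mead")
print("estimated derivatives:", res.x)

Production tools of the kind the text describes worked to this same outline, while adding the routine data handling and far more efficient minimization needed to process thousands of maneuvers from real flight records.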


Enhancing General Aviation Safety

Flying and handling qualities are, per se, an important aspect of operational safety, but many other issues affect safety as well. The GA airplane of the postwar era was very different from its prewar predecessor: gone were the fabric-covered wood or steel-tube structure, the small engine, and the two-bladed fixed-pitch propeller. Instead, many were sleek all-metal monoplanes with retractable landing gear, near-or-over-200-mph cruising speeds, and, as noted in the previous section, often challenging and demanding flying and handling qualities. In November 1971, NASA sponsored a meeting at the Langley Research Center to discuss technologies that might be applied to future civil aviation in the 1970s and beyond. Among the many papers presented was a survey of GA by Jack Fischel and Marvin Barber of the Flight Research Center.61 Barber and Fischel offered an incisive survey and synthesis of applicable technologies, including the then-new concept of the supercritical wing, which was of course applicable to propeller design as well. They addressed opportunities to employ new structural design concepts and materials advances (as were then beginning to be explored for military aircraft): boron and graphite composites, which could be laid up and injection molded, promised to reduce both weight and labor costs, offering higher strength-to-weight ratios than conventional aluminum and steel construction. They noted the potential of increasingly reliable and inexpensive gas turbine engines (and of the then-fashionable rotary combustion engine as well) and observed that improved avionics could provide greater utility and safety for pilots of lower flight experience. Barber and Fischel concluded that,

On the basis of current and projected near-future technology, it is believed that the main technology effort in the next decade will be devoted to improving the economy, performance, utility, and safety of General Aviation aircraft.62

61. Barber and Fischel, “General Aviation: The Seventies and Beyond,” pp. 317–332.

Of these, the greatest challenges involved safety. By the early 1970s, the fatality rate for GA was 10 times higher per passenger mile than that of automobiles.63 Many accidents were caused by pilots exceeding their flying abilities, leading one manufacturing executive to ruefully remark at a NASA conference, “If we don’t soon find ways to improve the safety of our airplanes, we are going to be putting placards on the airplanes which say ‘Flying airplanes may be hazardous to your health.’”64 Alarmed, NASA set an aviation safety goal of reducing fatality rates by 80 percent by the mid-1980s.65 While basic changes in pilot training and practices could accomplish a great deal of good, so, too, could better understanding of GA safety challenges, leading to aircraft that were easier to fly and more tolerant of pilot error, together with subsystems such as advanced avionics and flight controls that could further enhance flight safety. Underpinning all of this was a continuing need for the highest quality information and analysis that NASA research could furnish. The following examples offer an appreciation of some of the contributions NACA-NASA researchers made in confronting the major challenges to GA safety.


Spin Research

One of the areas of greatest interest has been spin behavior. When an airplane stalls, it may enter a spin, typically following a steeply descending flightpath accompanied by a rotational motion (sometimes with other rolling and pitching motions) that is highly disorienting to a pilot. Depending on the dynamics of the entry and the design of the aircraft, a spin may be easily recoverable, difficult to recover from, or irrecoverable. Spins were a killer in the early days of aviation, when their onset and recovery phenomena were imperfectly understood, and they have remained a dangerous problem since.66 Using specialized vertical spin tunnels, the NACA, and later NASA, undertook extensive research on aircraft spin performance, looking at the dynamics of spins, the inertial characteristics of aircraft, the influence of aircraft design (such as tail placement and volume), corrective control input, and the like.67

66. For a historical perspective on spins, drawn from pilot accounts, see Dunstan Hadley, One Second to Live: Pilots’ Tales of the Stall and Spin (Shrewsbury, U.K.: Airlife Publishing Ltd., 1997).
67. Anshal I. Neihouse, Walter J. Klinar, and Stanley H. Scher, “Status of Spin Research for Recent Airplane Designs,” NASA TR R-57 (1960).

As noted, spins have remained an area of concern as aviation has progressed because of the strong influence of aircraft configuration upon spin behavior. During the early jet age, for example, the coupled motion dynamics of high-performance low-aspect-ratio and high-fineness-ratio jet fighters triggered intense interest in their departure and spin characteristics, which differed significantly from those of earlier aircraft because their mass was now distributed primarily along the longitudinal, not the lateral, axis of the aircraft.68 Because spins were not a normal part of GA flying operations, GA pilots often lacked the skills to recognize and cope with spin onset, and GA aircraft themselves were often inadequately designed to deal with the out-of-balance or out-of-trim conditions that might force a spin entry. If encountered at low altitude, such as on approach to landing, the consequences could be disastrous. Indeed, landing accidents composed more than half of all GA accidents, and of these, as one NASA document noted, “the largest single factor in General Aviation fatal accidents is the stall/spin.”69

68. For example, see NACA High-Speed Flight Station, “Flight Experience with Two High-Speed Airplanes Having Violent Lateral-Longitudinal Coupling in Aileron Rolls,” NACA RM-H55A13 (1955).
69. NASA Scientific and Technical Information Program, “General Aviation Technology Program,” Release No. 76–51, NASA TM X-73051 (1976), p. 2; Barber and Fischel, “General Aviation: The Seventies and Beyond,” p. 323.

The Flight Research Center’s 1966 study of the comparative handling qualities and behavior of a range of GA aircraft had underscored the continuing need to study stall-spin behavior. Accordingly, in the 1970s, NASA devoted particular attention to studying GA spins (while continuing to study the spins of high-performance aircraft as well), marking “the most progressive era of NASA stall/spin research for general aviation configurations.”70 Langley researchers James S. Bowman, Jr.; James M. Patton, Jr.; and Sanger M. Burk oversaw a broad program of stall/spin research. They and other investigators evaluated tail location and its influence upon spin recovery behavior using both spin-tunnel models71 and free-flight tests of radio-controlled models and actual aircraft at the Wallops Flight Center, on the Virginia coast of the Delmarva Peninsula.72 Between 1977 and 1989, NASA instrumented and modified four aircraft of differing configuration for spin research: an experimental low-wing Piper design with a T-tail, a Grumman American AA-1 Yankee modified so that researchers could evaluate three different horizontal tail positions, a low-wing Beech Sundowner equipped with wingtip rockets to aid in stopping spin rotation, and a high-wing Cessna C-172. Overall, the tests revealed the critical importance of designers ensuring that the vertical fin and rudder of a new GA aircraft remain in active airflow during a spin, so as to ensure their effectiveness in spin recovery. To achieve that, the horizontal tail needed to be located on the aft fuselage or fin in such a position that it would not shield the vertical fin and rudder from active flow. The program was not without danger and incident. Mission planners prudently equipped the four aircraft with an emergency 10.5-foot-diameter spin-recovery parachute. Over the course of the program, the ’chute had to be deployed on 29 occasions when a test aircraft entered unrecoverable spins; each of the four aircraft deployed the ’chute at least twice, a measure of the risk inherent in stall-spin testing.73

NASA’s work in stall-spin research has continued, but at a lower level of effort than in the heyday of the late 1970s and 1980s, reflecting changes in the Agency’s research priorities but also the fact that NASA’s work had materially aided the understanding of spins and hence had influenced the data and experience base available to designers shaping the GA aircraft of the future. As well, the widespread advent of electronic flight controls and computer-aided flight has dramatically improved spin behavior. Newer designs exhibit a degree of flying ease and safety unknown to earlier generations of GA aircraft. This does not mean that the spin is a danger of the past—only that it is under control. In the present and future, as in the past, ensuring that GA aircraft have safe stall/spin behavior will continue to require high-order analysis, engineering, and test.

Aircraft entering wake vortex flow encountered a series of dangers, ranging from upset to structural failure, depending on their approach to the turbulent flow. From NASA SP-409 (1977).

upset, and urged by organizations such as the Flight Safety Foundation and the Aircraft Owners and Pilots Association, the Federal Aviation Administration (FAA) asked NASA and the U.S. Air Force to initiate a flight-test program to evaluate the effect of the wingtip vortex wake generated by large jet transport airplanes on a variety of smaller airplanes. The program began in December 1969 and, though initially ended in April 1970, was subsequently expanded and continued over the next decade. Operations were performed at Edwards Air Force Base, CA, under the supervision of the NASA Flight Research Center in cooperation with the Ames Research Center and the U.S. Air Force, using a range of research aircraft including 747, 727, and L-1011 airliners, and smaller test subjects such as the T-37 trainer and QF-86 drones, supported by extensive wind tunnel and water channel research.74
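The hazard such wakes pose to light aircraft can be appreciated from a back-of-envelope estimate of trailing-vortex strength. The Python sketch below uses the classical lifting-line relation for the circulation shed by a lifting wing; the weight, speed, and span are illustrative round numbers for a large jet transport on approach, not values taken from the NASA-Air Force tests.

import math

# Circulation of each trailing vortex from the lift it supports:
# Gamma = W / (rho * V * b'), with effective span b' = (pi/4) * b
# for an elliptically loaded wing (classical lifting-line result).
rho = 1.2     # kg/m^3: near-sea-level air density
W = 2.5e6     # N: landing weight of a large jet transport (illustrative)
V = 75.0      # m/s: approach speed (illustrative)
b = 60.0      # m: wingspan (illustrative)
gamma = W / (rho * V * (math.pi / 4.0) * b)

# Swirl velocity at radius r from the vortex center (ideal line vortex,
# valid outside the viscous core).
for r in (5.0, 15.0, 30.0):
    v_theta = gamma / (2.0 * math.pi * r)
    print(f"r = {r:4.1f} m: swirl velocity ~ {v_theta:4.1f} m/s")

Swirl velocities of this order, persisting for miles behind the generating aircraft, can approach or exceed the entire roll-control authority of a light airplane, which is why upset was a central concern of the flight tests.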

Subsequently, in 1972, NASA intensified its wake vortex research, seeking to reduce vortex formation through aerodynamic modification and the addition of wing-mounted devices. By the beginning of 1974, Alfred Gessow, the Chief of Fluid and Flight Dynamics at NASA Headquarters, announced that the Agency was optimistic that wake vortex could be eliminated “as a constraint to airport operations by new aerodynamic designs or by retrofit modifications to large transport aircraft.”75 Overall, the tests, and ones that followed, had clearly demonstrated the power of wake vortices to constrain the operations of GA aircraft; light jet trainers and business aircraft such as the Lear Jet were buffeted and rolled, and researchers found that the vortices maintained significant strength up to 10 miles behind a widebody. As a result of NASA’s studies, the FAA introduced a requirement for wake turbulence awareness training for all pilots, increased separation distances between aircraft, and mandated verbal warnings to pilots during the landing approach at control-towered airports when appropriate. NASA has continued its wake turbulence studies since that time, adding further to the understanding of this fascinating, if potentially dangerous, phenomenon.76

74. For example, M.R. Barber and Joseph J. Tymczyszyn, “Wake Vortex Attenuation Flight Tests: A Status Report,” in Joseph W. Stickle, ed., 1980 Aircraft Safety and Operating Problems, pt. 2 (Washington, DC: NASA Scientific and Technical Information Office, 1981), pp. 387–408.

Crash Impact Research

In support of the Apollo lunar landing program, engineers at the Langley Research Center had constructed a huge steel A-frame gantry structure, the Lunar Landing Research Facility (LLRF). Longer than a football field and nearly half as high as the Washington Monument, this facility proved less useful for its intended purposes than the free-flight jet-and-rocket-powered training vehicles tested and flown at Edwards and Houston. In serendipitous fashion, however, it proved of tremendous value for aviation safety after being resurrected as a crash-impact test facility, the Impact Dynamics Research Facility (IDRF), in 1974, coincident with the conclusion of the Apollo program.77

Over its first three decades, the IDRF was used to conduct 41 full-scale crash tests of GA aircraft and approximately 125 other impact tests of helicopters and aircraft components. The IDRF could pendulum-sling aircraft and components into the ground at precise impact angles and velocities, simulating the dynamic conditions of a full-scale accident or impact.78 In the first 10 years of its existence, the IDRF served as the focal point for a joint NASA-FAA-GA industry study to improve the crashworthiness of light aircraft. It was a case of making the best of a bad situation: a flood had rendered a sizeable portion of Piper’s single- and twin-engine GA production at its Lock Haven, PA, plant unfit for sale and service.79 Rather than simply scrap the aircraft, NASA and Piper worked together to turn them to the benefit of the GA industry and user communities. A variety of Piper Aztecs, Cherokees, and Navajos, and later some Cessna 172s, some adorned with colorful names like “Born to Lose,” were instrumented, suspended from cable harnesses, and then “crashed” at various impact angles, attitudes, velocities, and sink rates, and against hard and soft surfaces. To gain greater fidelity, some were accelerated during their drop by small solid-fuel rockets installed in their engine nacelles.80 Later tests, undertaken in 1995 as part of the Advanced General Aviation Transport Experiment (AGATE) study effort (discussed subsequently), examined Beech Starship, Cirrus SR-20, Lear Fan 2100, and Lancair aircraft.81

The rapid maturation of computerized analysis programs led to their swift adoption for crash impact research. In partnership with NASA, researchers at the Grumman Corporation Research Center developed DYCAST (DYnamic Crash Analysis of STructures) to analyze structural response during crashes. DYCAST, a finite element program, was qualified through extensive NASA testing of light aircraft components, including seat and fuselage section analyses, and then made available for broader aviation community use in 1987.82 The application of computational methodologies to crash impact research expanded so greatly that by the early 1990s, NASA, in partnership with the University of Virginia Center for Computational Structures Technology, held a seminal workshop on advances in the field.83 Out of all of this testing came better understanding of the dynamics of an accident and the behavior of aircraft at and after impact, quantitative data applicable to the design of new and more survivable aircraft structures, better seats and restraint systems, comparative data on the relative merits of conventional versus composite construction, and computational methodologies for ever-more-precise and informed analysis of crashworthiness.
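Although DYCAST was a full finite element code, the class of computation such programs perform, time-marching a structural model’s equations of motion through an impact, can be suggested with a toy lumped-parameter model. The Python sketch below is an illustration only, not DYCAST: the two masses (a fuselage section and an occupant), the crushable-structure and seat/restraint properties, and the impact velocity are all invented values, and gravity is neglected.

# Two-mass impact model: a fuselage section strikes the ground through a
# crushable structure (contact spring-damper), with an occupant coupled to
# it through a seat/restraint spring-damper. Explicit time integration;
# all parameter values are illustrative only.
m1, m2 = 800.0, 80.0               # kg: airframe section, occupant
k_crush, c_crush = 2.0e5, 4.0e3    # crushable structure stiffness, damping
k_seat, c_seat = 5.0e4, 1.0e3      # seat/restraint stiffness, damping
dt, t_end = 1.0e-5, 0.2            # s: time step, simulated duration
x1 = x2 = 0.0                      # m: positions at the moment of contact
v1 = v2 = -9.0                     # m/s: sink rate at impact
peak_g = 0.0

for _ in range(int(t_end / dt)):
    # Ground reacts only while the structure is being crushed (x1 < 0).
    f_ground = (-k_crush * x1 - c_crush * v1) if x1 < 0.0 else 0.0
    # Seat/restraint force on the occupant (equal and opposite on airframe).
    f_seat = -k_seat * (x2 - x1) - c_seat * (v2 - v1)
    a1 = (f_ground - f_seat) / m1
    a2 = f_seat / m2
    v1 += a1 * dt; x1 += v1 * dt   # semi-implicit Euler update
    v2 += a2 * dt; x2 += v2 * dt
    peak_g = max(peak_g, abs(a2) / 9.81)

print(f"peak occupant deceleration: {peak_g:.1f} g")

A crash code marches a far larger set of coupled equations of this same form, one per structural degree of freedom, which is why the rapid maturation of computers so directly benefited crash impact research.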


Avionics and Cockpit Research for Safer General Aviation Operations

Aircraft instrumentation has always been intrinsically related to flight safety. The challenge of blind and bad-weather flying in the 1920s led to the development of both radio navigation equipment and techniques and of specialized blind-flying instrumentation, typified by the gyro-stabilized artificial horizon, which, like radar later, was one of the few truly transforming instruments in the history of flight, for it made instrument-only (IFR) flight possible. Taken together with advances in the Federal airway system, the development of lightweight airborne radars, digital electronics, sophisticated communications, radar-based and later satellite navigation, and access to up-to-date weather information revolutionized civil and military air operations. Ironically, accident rates remained high, particularly among GA pilots flying single-pilot (SP) aircraft under IFR conditions. By the early 1980s, the National Transportation Safety Board was reporting that “SPIFR” accidents accounted for 79 percent of all IFR-related accidents, with half of these occurring during high-workload landing approaches, totaling more than 100 serious accidents attributable to pilot error per year.84 Analysis revealed five major problem areas: controller judgment and response, pilot judgment and response, Air Traffic Control (ATC) intrafacility and interfacility conflict, ATC-pilot communication, and IFR-VFR (instrument flight rules-visual flight rules) conflicts. Common to all of these were a mix of human error, communications deficiencies, conflicting or complex procedures and rules, and excessive workload. In particular, NASA researchers concluded that “methods, techniques, and systems for reducing work load are drastically needed.”85

In the mid-1970s, NASA aeronautics planners had identified “design[ing] avionic systems to more effectively integrate the light airplane with the air-space system” as a priority, with researchers at Ames Research Center evaluating the integration of avionic functions with the goal of producing a single system concept.86 In 1978, faced with the challenge of rising SPIFR accidents, NASA Langley Research Center launched a SPIFR program, holding a workshop at Langley in August 1983 to review and evaluate the progress to date on SPIFR studies and to disseminate it to an industry, academic, and governmental audience. The SPIFR program studied in depth the interface of the pilot and airplane, examining issues ranging from the tradeoffs between complex autopilots and their potential benefits to simulator utility. Overall, researchers found that “[b]ecause of the increase in air traffic and the more sophisticated and complex ground control systems handling this traffic, IFR flight has become extremely demanding, frequently taxing the pilot to his limits. It is rapidly becoming imperative that all the pilot’s sensory and manipulative skills be optimized in managing the aircraft systems”; hopefully, they reasoned, the rapid growth in computer capabilities could “enhance single-crewman effectiveness in future aircraft operations and automated ATC systems.”87 Encouragingly, in part because of NASA research, a remarkable 41-percent decrease in overall GA accidents occurred from the mid-1980s to the late 1990s.88

However, all was not well. Indeed, a key goad stimulating NASA’s pursuit of avionics technology to enhance flight safety (particularly weather safety) was the decline of American General Aviation. In the late 1970s, America’s GA aircraft industry reached the peak of its power: in 1978, manufacturers shipped 17,817 aircraft, and the next year, 1979, the top three manufacturers—Cessna, Beech, and Gates Learjet—had combined sales of over $1.8 billion. The industry seemed poised for even greater success over the next decade. In fact, such did not occur, thanks largely to rapidly rising insurance costs added to aircraft purchase prices, a by-product of a “rash of product liability lawsuits against manufacturers stemming from aircraft accidents,” some frivolously alleging inherent design flaws in aircraft that had flown safely for previous decades. Rising aircraft prices cooled any ardor for new aircraft purchases, particularly of single-engine light aircraft (business aircraft sales were affected as well, but more slowly). Other factors also contributed, including a global recession in the early 1980s, an increase in aircraft leasing and charter aircraft operations (lessening the need for personal ownership), and mergers within the aircraft industry that eliminated some production programs. The number of students taking flight instruction fell by over a third, from 150,000 in 1980 to 96,000 in 1994. That year, GA manufacturers produced just 928 aircraft, a production decline of almost 95 percent from the heady days of the late 1970s.89

The year 1994 witnessed both the near-extinction of American General Aviation and its fortuitous revival. At the nadir of its fortunes, relief was at hand, thanks to two initiatives launched by Congress and NASA. The first was the General Aviation Revitalization Act (GARA) of 1994, passed by Congress and signed into law in August of that year by President William Jefferson Clinton.90 GARA banned product liability claims against manufacturers more than 18 years after an aircraft or component first flew. By 1998, the 18-year provision could be applied to the large numbers of aircraft produced in the 1970s, bringing relief at last to manufacturers who had been so plagued by legal action that many had actually taken aircraft—including old classics such as the Cessna C-172—out of production.91 It is not too strong to state that GARA saved the American GA industry from utter extinction, for it brought much-needed stability and restored sanity to a litigation process that had gotten out of hand. It thus constitutes the most significant piece of American aviation legislation passed in the modern era.

89. Pattillo, A History in the Making, p. 127; see also John H. Winant, Keep Business Flying: A History of The National Business Aircraft Association, Inc., 1946–1986 (Washington: The National Business Aircraft Association, 1989), pp. 151–152, 157, and 186–187; and Metz, Partnership and the Revitalization of Aviation, p. 7.
90. The General Aviation Revitalization Act of 1994, Public Law No. 103–298, 103 Stat. 1552.
91. Pattillo, A History in the Making, Table 7-2, p. 129, and pp. 169–170.

But important as well was a second initiative: NASA’s establishment of the AGATE program, a joint NASA-industry-FAA partnership. AGATE existed thanks to the persistence of Bruce Holmes, the Agency’s Assistant Director of Aeronautics, who had vigorously championed it. Functionally organized within NASA’s Advanced Subsonic Technology Project Office, AGATE dovetailed nicely with GARA. It sought to revitalize GA by focusing on innovative cockpit technologies that could achieve goals of safety, affordability, and ease of use, chief of which was the “Highway in the Sky” (HITS) initiative, which aimed to replace the dial-and-gauge legacy instrument technology of the 1920s with advanced computer-based graphical presentations. It supported crashworthiness research as well, and it served as a single focal point to bring together NASA, industry, Government, and GA community representatives. AGATE ran from 1994 through 2001, and a key aspect of its success was that it operated under a NASA-unique process, the Joint Sponsored Research Agreement (JSRA), a management process that streamlined research and internal management while accelerating the transfer of technology development results into the private sector. AGATE suffered in its early years from “learning problems” with internal communication, with building trust and openness among industry partners more used to seeing themselves as competitors, and with managerial oversight of its activities. Some participants were disappointed that AGATE never achieved its most ambitious objective, a fully automated aircraft. Others were bothered by the uncertainty of steady Federal support, a characteristic of Federal management of research and development. But if not perfect—and no program ever is—AGATE proved vital to restoring GA, and as an end-of-project study concluded, inelegantly if bluntly, “[a]ccording to participants from all parts of the program, AGATE revitalized an industry that had gone into the toilet.”92

92. Metz, Partnership and the Revitalization of Aviation, p. 18.

The legacy of AGATE is evident in much of NASA’s subsequent avionics and cockpit presentation research, which, building upon earlier work, has involved improving a pilot’s situational awareness. Since weather-related accidents account for one-third of all aviation accidents and over one-quarter of all GA accidents, a particular concern is presenting timely and informative weather information, for example, graphics overlaid on navigational and geographical cockpit displays.93 Another area of acute interest is improving pilot controllability via advanced flight control technology, closing the gap between an automobile-like two-dimensional control task and the traditionally more complex three-dimensional aircraft control task, and generating a HITS-like synthetic vision capability to enhance flight safety. This, too, is a longstanding concern, related to the handling qualities and flight control capabilities of aircraft, so that the pilot can concentrate more on what is going on around the aircraft than on flying it.94


Towards Tomorrow: Transforming the General Aviation Aircraft

In the mid-1970s, coincident with the beginning of the fuel and litigation crises that would nearly destroy GA, production of homebuilt and kit-built aircraft greatly accelerated, reflecting the maturity of light aircraft design technology, the widespread availability of quality engineering and technical education, and the frustration of would-be aircraft owners with rising aircraft prices. Indeed, by the early 1990s, kit sales would outnumber sales of production GA aircraft by more than four to one.95 Today, in a far different post-GARA era, kit sales remain strong. As well, new manufacturers appeared, some wedded to particular ideas or concepts, but many also showing a broader (and thus generally more successful) approach to light aircraft design.

Exemplifying this resurgence of individual creativity and insight was Burt Rutan of Mojave, CA. An accomplished engineer and flight-tester, Rutan designed a small two-seat canard light aircraft, the VariEze, powered by a 100-hp Continental engine. Futuristic in look, the VariEze embodied very advanced thinking, including a GA(W)-1 wing section and Whitcomb winglets. The implications of applying the configuration to other civil and military aircraft of far greater performance were obvious, and NASA studied his work both in the tunnel and via flight tests of the VariEze itself.96 Rutan’s influence upon advanced general aviation aircraft thinking was immediate. Beech adopted a canard configuration for a proposed King Air replacement, the Starship, and Rutan built a subscale demonstrator of the aircraft.97 Rutan subsequently expanded his range of work, becoming a noted designer of remarkable flying machines capable of performance—such as flying nonstop around the world or rocketing into the upper atmosphere—that many would have held impossible to attain.

NASA followed Rutan’s work with interest, for the canard configuration had great applicability across the range of aircraft design, from light aircraft to supersonic military and civil designs. Langley tunnel tests in 1984 confirmed that, with a forward center of gravity location, the canard configuration was extremely stall-resistant. Conversely, at an aft center of gravity location, and with high power, the canard had reduced longitudinal stability and a tendency to enter a high-angle-of-attack, deep-stall trim condition.98 NASA researchers undertook a second series of tests, comparing the canard with other wing planforms, including closely coupled dual wings, swept-forward and swept-rearward wings, joined wings, and conventional wing-tail configurations, evaluating their application to a hypothetical 350-mph, 1,500-mile-range 6- or 12-passenger aircraft operating at 30,000 to 40,000 feet. In these tests, the dual-wing configuration prevailed, owing to greater structural weight efficiencies than other approaches.99

96. Burt Rutan, “Development of a Small High-Aspect-Ratio Canard Aircraft,” in Society of Experimental Test Pilots, 1976 Report to the Aerospace Profession (Lancaster, CA: SETP, 1976), pp. 93–101; Philip W. Brown and James M. Patton, Jr., “Pilots’ Flight Evaluation of VariEze N4EZ,” NASA TM-103457 (1978).
97. Subsequently, for reasons unrelated to the basic canard concept, the Starship did not prove a great success.
98. Joseph R. Chambers, Long P. Yip, and Thomas M. Moul, “Wind Tunnel Investigation of an Advanced General Aviation Canard Configuration,” NASA TM-85760 (1984).
99. B.P. Selberg and D.L. Cronin, “Aerodynamic Structural Study of Canard Wing, Dual Wing, and Conventional Wing Systems for General Aviation Applications,” NASA CR-172529 (1985).
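The center-of-gravity sensitivity that the Langley tests documented follows from the standard pitch-stiffness relation of flight mechanics. As an illustrative expression in conventional textbook notation (not drawn from the cited reports),

$$C_{m_\alpha} = -C_{L_\alpha}\,\frac{x_{np} - x_{cg}}{\bar{c}}$$

where $x_{np}$ is the airframe neutral point, $x_{cg}$ the center of gravity, and $\bar{c}$ the mean aerodynamic chord. Moving the center of gravity forward enlarges the stabilizing (negative) pitch stiffness, consistent with the stall resistance observed at forward loadings; moving it aft drives the pitch stiffness toward zero and beyond, consistent with the reduced longitudinal stability and deep-stall trim tendency observed at aft loadings.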

Seeking optimal structural efficiency has always been an important aspect of aircraft design, and the balance between configuration choice and structural design is a fine one. The advent of composite structures enabled a revolution in structural and aerodynamic design fully as significant as the earlier transformation of the airplane from wood to metal. Just as designers then had initially simply replaced wooden components with metal ones, so, too, in the earliest stage of the composite revolution, designers initially simply replaced metal components with composite ones. In many of their own GA proposals and studies, NASA researchers repeatedly stressed the importance of getting away from such a “metal replacement” approach and, instead, adopting composite structures for their own inherent merit.100

The blend of research strains coming from NASA’s diverse work in structures, propulsion, controls, and aerodynamics, joined to the creative impact of outside sources in industry and academia—not least of which were student study projects, many reflecting an insight and expertise belying the relative inexperience of their creators—informed NASA’s next steps beyond AGATE. Student design competitions offered a valuable means both of “growing” a knowledgeable future aerospace workforce and of seeking fresh approaches and insight. Beginning in 1994, NASA joined with the FAA and the Air Force Research Laboratory to sponsor a yearly National General Aviation Design Competition, establishing design baselines for single-pilot, 2- to 6-passenger vehicles, turbine- or piston-powered, capable of 150 to 400 knots airspeed, and with a range of 800 to 1,000 miles. The Virginia Space Grant Consortium at Old Dominion University Peninsula Center, near Langley Research Center, coordinated the competition. Competing teams had to address “design challenges” in such technical areas as integrated cockpit systems; propulsion, noise, and emissions; integrated design and manufacturing; aerodynamics; operating infrastructure; and unconventional designs (such as roadable aircraft).101 In cascading fashion, other opportunities existed for teams to take their designs to ever-more-advanced levels, even, ultimately, to building and test-flying them. Through these competitions, study teams explored integrating such diverse technical elements as advanced fiber-optic flight control systems, laminar flow design, swept-forward wings, HITS cockpit technology coupled with advanced head-up displays (HUD) and sidestick flight control, and advanced composite materials to achieve increased performance efficiencies and economic advantage over existing designs.102

A computer-aided-design model of a six-passenger single-pilot Advanced Personal Transport concept developed as a University of Kansas-NASA-Universities Space Research Association student research project in 1991. NASA.

Succeeding AGATE was SATS, the NASA Small Aircraft Transportation System Project. SATS (another Holmes initiative) sought to take the integrated products of this diverse research and form from it a distributed public airport network, with small aircraft flying on demand as users saw fit, thereby taking advantage of the ramp space capacity at over 5,000 public airports located around the country.103 SATS benefited as well from a Glenn Research Center initiative, the GAP (General Aviation Propulsion) program, which sought new propulsive efficiencies beyond those already obtained by previous NASA research.104 In 2005, SATS concluded with a 3-day “Transformation of Air Travel” demonstration held at Danville Airport, VA, showcasing new aviation technologies with six aircraft equipped with advanced cockpit displays enabling them to operate from airports lacking radar or air traffic control services. Complementing SATS and GAP was PAV, a Langley initiative for Personal Air Vehicles, a reincarnation of an old dream of flight dating to the small ultralight aircraft and airships found at the dawn of flight, such as Alberto Santos-Dumont’s little one-person dirigibles and his Demoiselle light aircraft. Like many such studies through the years, PAV studies in the 2002–2005 period generated many innovative and imaginative concepts, but the Agency did not support such studies afterwards, turning instead towards good stewardship and environmental responsibility: seeking to reduce emissions and noise and to improve economic efficiency by reducing airport delays and fuel consumption. These are not innocuous challenges: in 2005, airspace system capacity limitations generated fully $5.9 billion in economic impact through airline delays, and the next year, fuel consumption constituted a full 26 percent of airline operating costs.105

The history of NACA-NASA support of General Aviation is one of mutual endeavor and benefit. Examining that history reveals a surprising interdependency between the technologies of air transport, military, and general aviation. Developments such as the supercritical wing, electronic flight controls, turbofan propulsion, composite structures, synthetic vision systems, and head-up displays that were first exploited for one have migrated and diffused more broadly across the entire aeronautical field. Once again, the lesson is clear: the many streams of NASA research form a rich and broad confluence that nourishes and invigorates the entire American aeronautical enterprise, ever renewing our nature as an aerospace nation.

105. Jaiwon Shin, “NASA Aeronautics Research Then and Now,” a PowerPoint presentation at the 48th AIAA Aerospace Sciences Meeting, Orlando, FL, 4 January 2010, Slide 2; Chambers, Innovation in Flight, pp. 306–312.