30 October 2010

If you'd been following my Twitter stream earlier today (@SentinelChicken), I had been posting tidbits about the failure that was the Brewster F2A Buffalo, quite possibly one of the worst fighting aircraft we sent into action during the Second World War. The saga of the Brewster Buffalo, not to mention the tales of mismanagement at the Brewster Aeronautical Corporation from the 1930s right into World War II, would give me blog material for at least seven posts, but this evening I want to try and keep things strictly on the American combat history of the F2A Buffalo. In fact, there was only one major battle in which the United States committed the Buffalo, and that was in the hands of Marine aviators at the Battle of Midway in June 1942.

When the US Navy placed an order for 54 F2A-1 Buffalos on 11 June 1938, it was the Navy's first large-scale production contract for a monoplane fighter aircraft. The choice of Brewster was an unusual one given that the company had only established an aeronautical division in 1935, and the 1936 order of the prototype XF2A-1 was already controversial within the Navy. As insurance, the Navy also issued a contract to Grumman Aircraft for the prototype XF4F Wildcat- and a damn good thing they did, too, as history has shown. From the time of the order of the XF2A-1 to the first flight in June 1938, two years had elapsed, which by the state of aviation technology in the late 1930s was an extraordinarily long period of time. Technical issues beset the Buffalo flight test program and a series of design refinements were needed to get the F2A to an operational standard that satisfied the Navy. Weight was always an issue with the stubby aircraft- the first production version, the F2A-1, for example, took 30 minutes to reach 21,000 feet! By the time 108 F2A-3s were built, an additional 1,500 lbs had been added to the aircraft's weight and the Navy, finally dissatisfied with the evolution of the Buffalo, passed its F2A-3s to the US Marine Corps.

Following the attack on Pearl Harbor on 7 December 1941, the aircraft carrier USS Saratoga was tasked to deliver the fourteen F2As of Marine squadron VMF-221 to Wake Island. But as Wake had fallen to the Japanese on 23 December, VMF-221 was diverted on Christmas Day to Midway Island instead, becoming the first fighter defense for the island bastion. Other than the occasional combat air patrol to intercept Japanese flying boats, it wasn't until April that the fortification efforts at Midway took on importance, once the breaking of Japanese codes indicated that the Japanese planned to take Midway in June 1942. By the end of May, 21 Marine Corps F2As made up the majority of the island's land-based fighter defenses along with 7 F4F Wildcats. On 3 June 1942 the Japanese task force was spotted and the Marine fighters prepared for action. Seven F2As and five F4Fs were dispatched to attack an incoming formation of Japanese aircraft while twelve more F2As remained in orbit over Midway along with a single F4F to defend against a second attack that might come from another direction.

The first group intercepted a 108-aircraft strike force from the carrier Hiryu and was soon joined by the second group that had been orbiting over Midway. Of the twenty-five Marine Corps pilots that set out that day to defend Midway, fifteen were killed in action, including their commanding officer, Major Floyd Parks. Thirteen F2A Buffalos were shot down by the superior Mitsubishi A6M Zero fighters along with two F4F Wildcats, and of the remaining F2As, only two were in any condition to fly again. One surviving pilot commented "Any commander who orders pilots out for combat in an F2A should consider the pilot lost before leaving the ground." That single air battle destroyed VMF-221 as a fighting unit and its surviving personnel dispersed to help the other units defending Midway. In fact, that air battle on the morning of 4 June 1942 was the heaviest loss of Marine Corps pilots ever sustained during the Second World War- and most of those pilots lost their lives in the first few minutes of the fight.

After the Battle of Midway, the F2A Buffalos were immediately withdrawn from front-line service in the US Marine Corps. All were assigned to stateside training units and even then, the Buffalo was disliked by pilots. They lasted only a few months in the role and most were scrapped, with a handful going to aviation mechanic schools to be taken apart for instructional purposes. As a result, not a single American F2A Buffalo survived- none are in existence today other than examples that served with foreign nations overseas. It was the beginning of the end for Brewster as a defense contractor- in fact, just prior to the Battle of Midway, the US Navy seized control of Brewster's production plants and ousted its management. By 1946, Brewster was finally liquidated for good.

29 October 2010

In 1942 James McDonnell was summoned to Washington to meet with officials from the US Navy's Bureau of Aeronautics (BuAer). At the time McDonnell Aircraft only built parts for other aircraft manufacturers at its St. Louis facilities and had only one aircraft program underway, the XP-67 Moonbat fighter. To McDonnell's surprise, BuAer asked McDonnell and his small team to design a carrier-based jet fighter. Not having any prior experience worked in McDonnell's favor- the Navy felt that he was free of any bias or prejudices and was therefore most likely to come up with an innovative design. McDonnell's design would become the FD-1/FH-1 Phantom, the first purpose-built carrier-borne jet fighter. But BuAer's decision was not without controversy in the Navy, and to satisfy the critics it was agreed to evaluate an existing jet fighter that by that point was only a year from its first flight- the Lockheed P-80 Shooting Star, which first flew in June 1944. With a more powerful engine, the P-80 was faster than the FD-1 Phantom. In early 1945, the Navy purchased two P-80 Shooting Stars for evaluation, one of which would be suitably modified for evaluation as a carrier-borne fighter.

The first Navy P-80 was flown from Lockheed's California facility to the Navy's flight test center at NAS Patuxent River, Maryland, incidentally becoming the first transcontinental jet flight on 29 June 1945 (though it wasn't nonstop). The pilot was a young Navy test pilot, Lt. Najeeb Halaby, who would go on to become the FAA Administrator under JFK and later CEO of Pan Am. The plan was to conduct shore-based trials first before going through with shipboard trials. Through 1946 the P-80 was flown in mock combat against the Navy's main fighter of the day, the Grumman F8F Bearcat. Though not as maneuverable as the Bearcat, the P-80 had the luxury of speed to engage and disengage in combat at will, much to the Bearcat pilots' frustration. The second P-80 arrived at NAS Patuxent River in December 1945 after being modified by Lockheed with a tailhook, catapult hooks (for use with a catapult bridle) and a catapult shuttle holdback. Shore-based tests simulating carrier operations were used to determine the operating parameters for the P-80 on the carrier deck. Catapult shots were easily accomplished on land-based gear, but the P-80 proved exceptionally clean aerodynamically on approach and had to be "flown" onto the deck- yet the nose gear design was too weak for a firm carrier-style three-point landing. And if the P-80 landed too hard on its mains, it would rock forward and the hook would miss the arresting wires.

After more testing, the project pilot, the legendary Marine Corps ace Lt. Col. Marion Carl, and his LSO managed to determine the proper approach speed (just 5 mph above the stall speed of the P-80) and flare to minimize the forward rocking motion on landing. Though the margin was considered too close to the stall speed, Carl found that the P-80 gave good stall warning well in advance of the actual stall. On 1 November 1946 Carl took the P-80 to sea aboard the USS Franklin D. Roosevelt. Catapult launches and arrested landings were made safely, but it was found that the P-80 needed 900 feet of deck with 35 knots of wind over the deck to take off without the catapult, over twice the distance the McDonnell FD-1 required. To put that into perspective, the length of the FDR's flight deck was just shy of 961 feet! It was also found that the J33 engine of the P-80 took as long as 2 minutes to spool up to full power after starting, which would greatly lengthen the deck launch cycle. For a catapult takeoff, one circuit in the carrier landing pattern, and an arrested landing, the P-80 used 37 gallons of fuel; by comparison, the Vought F4U Corsair used only 6.

The next phase of carrier testing involved flying the P-80 at operational loads, which included use of the tip tanks that were a fixture on USAF P-80s during the Korean War. It was quickly found that in this more realistic configuration, the catapults of the day, even with a strong wind over the deck, were unable to launch the P-80. In addition, the wing structure where the tanks were attached was too weak- a stronger catapult would have simply launched the P-80 and left its tip tanks behind!

Before the carrier tests were performed, some in the Navy unhappy with BuAer's decision to go with McDonnell wanted to purchase the Lockheed P-80 Shooting Star instead. Lockheed even did some design work on what it called a P-80B for the Navy, which would have carried the Navy designation FO-1 (Lockheed's designator became V in the 1950s, so it would then have been known as the FV-1). The deck trials quickly ended this proposal and McDonnell would go on to develop a whole line of fighters for the Navy, from the FD-1 Phantom to the F2H Banshee, the F3H Demon, the superlative F-4 Phantom II and today's F/A-18 Hornet. The Navy did, however, buy 50 P-80s to be used as shore-based trainers to allow naval aviators to gain jet experience; VF-6A (later renumbered VF-52) at NAS North Island and Marine squadron VMF-311 at MCAS El Toro operated them. Lockheed also developed the carrier-capable T2V SeaStar from the Lockheed T-33 in the 1950s, though the T-33 required an extensive amount of modification to be suitable for carrier operation. In 1962 the T2V was redesignated the T-1 and was ultimately replaced by the North American T-2 Buckeye.

26 October 2010

US postwar air travel didn't grow enough to justify nonstop transcontinental services until the 1950s, even though aircraft theoretically capable of such flights were already in airline use at the end of the Second World War. During the course of the war, Douglas DC-4s and Lockheed Constellations seconded to the US Army Air Forces routinely made trans-Atlantic crossings safely. But the exigencies of war are far removed from market demand; in the second half of the 1940s the traffic simply wasn't there, and the passengers who did travel transcontinentally were few in number and used existing multistop services. In fact, many airlines believed that passengers preferred enroute stops to stretch their legs. More significantly, federal regulations prohibited flights of more than 8 hours without a backup crew aboard, which to the airlines meant increased labor costs on long-distance flights. It was more economical to change crews enroute. Of course, TWA and Pan American operated their trans-Atlantic services with backup crews, but that was a matter of obvious necessity.

As air travel numbers experienced major growth in the early 1950s, American Airlines was the first out of the gate with an announcement that it would begin nonstop transcontinental service between New York and Los Angeles once it had taken delivery of its Douglas DC-7 propliners. Eager to upstage American Airlines and its iconic head, C.R. Smith, Howard Hughes had TWA launch its own transcontinental nonstop service on the same route using its new Lockheed L-1049 Super Constellations. On 19 October 1953, Trans World Airlines' "Ambassador Service" Flight 2 departed Los Angeles for New York Idlewild and just barely got into New York under the federal eight-hour crew limit at 7 hours, 55 minutes. The westbound Ambassador Service flight had to contend with the prevailing winds at altitude and thus couldn't make the flight nonstop within 8 hours- a 15-minute technical stop was scheduled at Chicago Midway to allow for a crew change but no local passenger traffic.

The following month, American Airlines inaugurated its own nonstop "Mercury Service" DC-7 flights between Los Angeles and New York Idlewild on 29 November. As the DC-7 had a higher cruising speed than the Super Constellation, the eastbound LA-New York run was made easily in 7 hours, 15 minutes (a fact not lost upon American's marketing department, hence the name "Mercury Service"), but the westbound run from New York to LA couldn't be made within 8 hours. Despite over a dozen modifications made by American's engineers, including tweaks to the Wright R-3350 radials to squeeze out every bit of horsepower, the DC-7s still couldn't beat the prevailing winds. American's pilot union repeatedly pointed this out, but C.R. Smith's influence in Washington left the issue unaddressed by federal regulators. The following year, federal regulators adjusted the time limit to allow the flight to be made legally, and American's DC-7s blocked in at 8 hours, 15 minutes on a westbound nonstop.

Although TWA had reached an agreement with its Super Constellation pilots for overtime pay on the westbound nonstops, eliminating the technical stop for a crew change in Chicago, no such agreement existed at American, and following deadlocked negotiations, American's pilots struck on 31 July 1954. C.R. Smith was furious but had no choice but to accept an agreement for overtime pay on westbound Mercury nonstops that exceeded 8 hours. After a punishing 24-day strike, American's pilots returned to work under new rules for transcontinental flights in excess of 8 hours.

Arriving late to the transcontinental battle was United Air Lines. United took delivery of its DC-7s six months after American Airlines but didn't inaugurate nonstop service until 1 June 1954 with its "Red Carpet" service between San Francisco and New York Idlewild. United's nonstops were eastbound only and, rather unusually, westbound nonstops weren't added until 1955, nearly a year after the Red Carpet service started.

It should be noted, though, that although these services were operated by Lockheed Super Constellations and Douglas DC-7s, TWA had inadvertently operated the first eastbound nonstop on 3 February 1948, when TWA Flight 12, operated by a Lockheed L-049 Constellation, had its regular stops at Kansas City and Chicago Midway canceled due to winter weather and continued on to New York, covering 2,470 miles in 6 hours, 55 minutes.

25 October 2010

Before the Second World War, Grumman Aircraft had a partnership with New York-based Gillies Aviation to market and sell the company's line of amphibian aircraft. One of Gillies' salesmen, a pilot named Henry Schiebel, eventually came to work for Grumman during the war as a company test pilot. In the postwar period, as Grumman began to diversify its offerings beyond naval aircraft, Schiebel headed the company's transportation department, flying Grumman's executives on various business trips. At the time, Grumman's civilian offerings were limited to its amphibian line, from the Widgeon and Goose on the small end of the scale to the Albatross and Mallard on the top end. Because of Schiebel's sales background, he often found himself pitching one of the amphibians to a civilian customer as the need arose.

Though Schiebel wasn't an engineer, he began working with a small group of engineers who, since the mid-1950s, had been spending their free time after hours on a civilian feederliner that could also serve corporate and private use. Unlike Grumman's previous offerings, what the engineers had in mind was a land-based aircraft that would offer a level of comfort and performance justifying their forecast one-million-dollar price, given that most companies in those days could get a good used DC-3 for just over $200,000. Schiebel's job was to bring to the after-hours effort not just a pilot's perspective, but also the perspective of someone who was intimately aware of the needs of civilian owners and operators.

Their starting point was basing their aircraft's cabin cross-section on that of the DC-3. Grumman had several in its corporate flight department and everyone agreed that they were the most comfortable aircraft for business travel of any type then on the market. Their second consideration was the powerplant- again, all were in agreement that a turboprop would offer good operating economics as well as fuel efficiency that translated into good operating range. At the time, the Rolls-Royce Dart had the most flying time of any turboprop and was well-known in the commercial world, as it already powered several successful airliners like the Vickers Viscount. The team was well aware that their biggest competition would be pure jets- the prototype Lockheed Jetstar was already flying. This was long before turbofans entered the picture, and the choice of a turboprop, while slower than a pure jet, offered better range.

By the time the project had matured to the point of pitching it to Leroy Grumman, a minor bit of discord developed within the team over whether the new aircraft should have a low wing or a high wing. Aerodynamically there were no advantages or disadvantages to either layout. Schiebel, drawing upon his experience in aircraft sales, pushed for a low-wing layout. Rather amusingly, the debate was settled by Leroy Grumman himself when he was presented with the project by Schiebel and the ad hoc group of engineers. Two models of the design were made- one with a low-mounted wing and one with a high-mounted wing. Both were shown to Grumman and he immediately settled on the low-wing configuration. He then summoned the heads of finance and his lieutenants to view the engineers' presentation. Building the aircraft would require an initial investment of $23 million, and Grumman himself is reported to have said "I'd like to build this airplane before I retire." Some asked whether a market survey should be done before committing to the project, but Grumman felt comfortable with the fact that Henry Schiebel had been involved all along and trusted his read on the market for the aircraft. Interestingly, a competitor did do a market survey and predicted that Grumman would sell fewer than 20 of the aircraft.

It turned out that after the maiden flight of the Grumman Gulfstream I on 14 August 1958, the company would go on to build approximately 200 Gulfstream Is and launch an iconic line of aircraft whose name has become near-synonymous with business aviation. The Gulfstream I had two features that have been kept in every successive Gulfstream right up to the current Gulfstream 650 now in flight testing- the characteristic nose/cockpit shape and the large oval windows, the largest of any business aircraft.

The first Gulfstream Is sold for $845,000; the first one went to the Sinclair Refining Company, and the FAA, responsible for certifying the aircraft, even bought two for its own use. By 1963, Grumman was building two Gulfstream Is a month and began work on transitioning the product line to jet power with the Rolls-Royce Spey-powered Gulfstream II, which first flew in October 1966. The new Gulfstream jets were built at a new plant Grumman set up in Savannah, Georgia, and though Grumman is no longer involved with the production of Gulfstream jets, the Savannah plant is still where they are built to this day.

23 October 2010

The first variants of the Navy's premier carrier-borne fighter of the first half of the Second World War, the Grumman F4F Wildcat, were hampered by one weakness even before they took off from their aircraft carriers- lacking a folding wing, the early Wildcats took up quite a bit of space on the carrier and, by extension, that limited the number of Wildcats that could be carried aboard. Given the combat losses that would inevitably occur, having as many aircraft on the carrier as possible conferred tactical advantages in battle. The production version of the Wildcat at the time was the F4F-3.

The most obvious answer would be to hinge the wings' outer sections and fold them upward, as was done on most carrier aircraft (even to this day). However, to Leroy Grumman and his engineering team, that wasn't an optimal answer, as it increased the height of the stowed aircraft- if the wing could be folded in a way that didn't increase the aircraft's height, then extra aircraft could be winched up and stowed near the ceiling of the hangar deck. Grumman reasoned that the optimum geometry would be to somehow fold the wing back, not unlike the way a bird folds its wings along its body. Drawing on his engineering intuition, he reasoned that the wing had to rotate about a single pivot. He took a draftsman's eraser and two paperclips, bent one end of each clip, and began to stick them into the eraser- the eraser was the body of the Wildcat and the paperclips were the wings. By trying different angles, Grumman found the pivot angle at which a rotated clip folded back flat against the eraser. This is known in engineering parlance as a skewed axis.

In the Wildcat, this axis was a pivot set into the wing root that pointed outward and backward at an angle into the moving portion of the wing. As the wing rotated about this pivot, it folded back against the fuselage, making the F4F a compact package- so compact that six F4Fs with what Grumman called the "Sto-Wing" could fit into the deck or hangar space normally occupied by two F4Fs with nonfolding wings. At first it was planned to fold the wings hydraulically, but this added excessive weight, so the wings would have to be folded manually by the deck crew. A large master pin locked the wing in position, either extended or folded.
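Grumman's paperclip experiment has a tidy mathematical expression: rotating a vector about an arbitrary unit axis is Rodrigues' rotation formula. The little sketch below is just an illustration of the principle, not Grumman's actual Sto-Wing geometry- with a pivot axis angled 45 degrees outward and backward in the horizontal plane, a half-turn about that axis swings the wing from pointing outboard to trailing straight aft along the fuselage:

```python
import numpy as np

def rotate(v, k, theta):
    """Rodrigues' rotation of vector v about axis k by angle theta."""
    k = k / np.linalg.norm(k)
    return (v * np.cos(theta)
            + np.cross(k, v) * np.sin(theta)
            + k * np.dot(k, v) * (1 - np.cos(theta)))

# Coordinates: x = forward, y = out the right wing, z = up
span = np.array([0.0, 1.0, 0.0])       # extended wing points outboard
axis = np.array([-1.0, 1.0, 0.0])      # pivot skewed outward and backward (illustrative)

folded = rotate(span, axis, np.pi)     # fold: rotate 180 degrees about the pivot
print(np.allclose(folded, [-1.0, 0.0, 0.0]))  # True: wing now points straight aft
```

On the real aircraft the pivot was also tilted out of the horizontal plane, which is what left the folded wing resting with its leading edge pointing down against the fuselage side.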

The Sto-Wing was immediately placed into the production line- with the current Wildcat model being the F4F-3, those with the Sto-Wing became the F4F-4. From a tactical standpoint, an aircraft carrier could now carry twice as many F4Fs as before, with the wingspan of the Wildcat going from 39 feet to just over 14 feet folded. Deck handling also improved, as it was easier to move an F4F with its wings folded on a crowded flight deck or in a packed hangar deck. The Sto-Wing concept was applied as well to the TBF Avenger and the F4F Wildcat's successor, the F6F Hellcat. And it's seen today on carrier decks with the Northrop Grumman E-2 Hawkeye- since the E-2 has the dish-like radome above the fuselage, the wings can't be folded up and over as was done on other long-span carrier aircraft like the Lockheed S-3 Viking. So the E-2 Hawkeye's wings use the Grumman Sto-Wing principle to fold back towards the tail.

22 October 2010

On May 6, 1949, twenty-six sailors on shore leave from the naval base at San Diego paid a $15.60 one-way fare to fly to Oakland, California, on a small airline called PSA (Pacific Southwest Airlines), which had a single red and white DC-3. Over the course of the next 20 years, PSA would grow to dominate the second-busiest domestic air market in the world, that between Southern California and the Bay Area (the biggest being the Boston-New York-Washington corridor on the East Coast). After what can only be described as the Cinderella story of the airline industry, 1969 presented PSA with a new set of challenges that would ultimately shake the foundation of the airline to its deepest roots and set the stage for its eventual acquisition in 1987 by USAir.

The first thing to keep in mind when looking at the historical context of PSA in 1969 is that unlike most other airlines, PSA only flew to cities in California and, as an intrastate carrier, wasn't subject to the regulatory and market control oversight of the federal Civil Aeronautics Board (CAB)- the same CAB that was abolished with deregulation in the early 1980s. Instead, PSA had only to answer to the California Public Utilities Commission (PUC) for approval of fare increases and new services. And for the 20 years that PSA had been flying up to 1969, the PUC always gave PSA what it wanted, much to the chagrin of the established majors that competed with it. PSA was getting competition on two fronts- first from the established carriers, United and Western, which had traditionally been dominant on the West Coast. After 20 years of getting their lunch eaten by PSA, they were fighting back with increased frequencies on the California Corridor. The second front was a small intrastate start-up based in Orange County called Air California. Though smaller, it proved to be just as aggressive a competitor to PSA as the majors.

At the turn of the decade, two PUC actions signaled the end of PSA's favored status. The first came when PSA applied to open up service from Long Beach to San Francisco, Oakland, San Jose, Sacramento, and San Diego. This came about as a result of demand from consumers who were getting increasingly irritated by the airport congestion at LAX. The PUC approved temporary service between Long Beach and San Francisco, but dragged its feet on the other cities, and when it did finally approve, it was well after the Christmas holiday travel rush.

Not long after, PSA made a move to eliminate Air California from the market by buying the smaller airline outright. PSA stood poised to take a 79% ownership stake in Air California subject to PUC approval, but again, as before, foot-dragging by the PUC led to PSA cancelling the proposed buyout.

I mentioned the airport congestion issue at LAX above. Gate space was the biggest problem for PSA, and back then the majors weren't about to relinquish any gates to the very airline that was effectively driving them from the California market. Gate space became an issue at San Francisco as well. In June of 1969 PSA inaugurated "every hour on the hour" service between LAX and San Francisco and "every hour on the half-hour" service between San Diego and LAX. Outside of Eastern's shuttle operation on the East Coast and the Ponte Aérea in Brazil, this was the first time a scheduled airline offered service that didn't require a timetable. It was a hit with passengers and load factors soared on the flights out of LAX to San Francisco and San Diego. With certain times more popular than others, PSA wanted to put two Boeing 727-200s on those slots, but lacked the gate space at both LAX and San Francisco.

Enter the L-1011.

PSA president Andy Andrews met with Lockheed executives several times during 1969-1970 to discuss adding the new 300-seat Tristar widebody to the PSA fleet- both to solve the gate space problem at LAX and San Francisco and to place in service a much larger and more comfortable aircraft that would trump the Electras and DC-9s of Air California and the 737s and 727s of United and Western. In addition, the performance of the Tristar was such that the PSA board investigated service beyond California (Hawaii being one distinct possibility).

On August 31, 1970, Andy Andrews announced to the local press and to the PSA employees at the San Diego headquarters that PSA had signed orders for two Lockheed L-1011 Tristars with a letter of intent for three more delivery positions. The contract was worth $100 million with a delivery date of 1972. And impressively enough, the Tristar could operate from every airport in the PSA network, even Burbank! The venture, however, would be short-lived and is a subject for a future posting!

20 October 2010

With the rapid expansion of the Soviet submarine fleet in the 1950s, the US Navy embarked on a series of programs to improve its anti-submarine warfare capabilities. Some of these efforts paid off handsomely with the development of new detection equipment and the rise of the nuclear-powered hunter-killer submarine (SSN). At the time, though, what the Navy really wanted was a means to quickly detect and prosecute enemy submarines as far away as possible from the carrier task forces. This meant the use of aircraft to cover the distances involved and to search the large areas of open ocean. The greatest advances at the time were being made in sonar technology, and the Navy began work on the concept of using a dunking sonar on a seaplane. The seaplane could seed an area of interest with sonobuoys and then land on the water and use a dunking sonar to further track an enemy submarine. Dunking sonars had already come into use on helicopters, but these were strictly short-range options as helicopters didn't have the speed and range of a conventional aircraft.

In 1954 the US Navy's Bureau of Aeronautics (BuAer) met with Convair, which had long established its credentials in the seaplane field from before World War II onward with legendary aircraft like the PBY Catalina and an aggressive postwar program of hydrodynamics research for flying boats. Convair developed what was called the "Dunker," which was to be powered by two Wright R-3350 Duplex-Cyclone radial engines and capable of operating from rough ocean seas. Tested in model form, the Dunker's most distinctive feature was a very deeply sculpted hull designed to "cut" through the waves. The wing was mounted above the fuselage on a pylon like that of the PBY Catalina to provide the necessary clearance for the engines when operating in rough seas. By 1956 the Dunker's design had been refined further into a three-engined aircraft using Pratt & Whitney R-2800 radials, which allowed for a larger and more capable design.

As a result of Convair's ongoing work, BuAer issued a Request for Proposals (RfP) in May 1956 for an advanced ASW seaplane of flying boat configuration, capable of operating from rough ocean seas and employing dunking sonar. Martin submitted a four-engined design designated the P7M Submaster, based in large part on Martin's existing work on the jet-powered P6M Seamaster. Convair's submission, a further refinement of the Dunker design, was designated P6Y and was much more advanced than Martin's. The P6Y retained the three R-2800 radial piston engines, but in order to allow slow flight and softer landings in rough seas, an innovative full-span boundary layer control (BLC) system would be installed. Two General Electric J85 turbojets were installed side-by-side in the central engine nacelle behind the R-2800. The inboard flaps of the 127-foot wing used both suction above and just ahead of the flaps and blown jet exhaust from the J85s to dramatically increase the flaps' effectiveness. The wings outboard of the left and right engine nacelles used BLC to blow a sheet of high-velocity air across the top of the flaps, and wingtip-mounted swivelling jet nozzles augmented the control authority of the ailerons in the low-speed regime. Convair estimated that the BLC system would allow the P6Y to land as slowly as 40 kts, impressive for an aircraft with a crew of 10 and a gross weight of over 100,000 lbs. The design of the P6Y was led by German engineer Hans Amtmann, who during the Second World War had designed flying boats for Blohm und Voss. Reportedly Amtmann even spent time aboard US submarines to see firsthand the challenges of hunting submarines in the open ocean!

The dunking sonar and sonobuoys were housed in the center of the fuselage with a weapons bay aft of the sonar compartment that could carry depth charges, torpedoes or even nuclear weapons. A rotating bomb bay door was used to create a smooth and watertight seal on the bottom of the hull. The P6Y would have been able to operate in rough seas up to 12 feet with its deeply cut hull and BLC flap system.

By 1957, despite the advanced design work, the maritime patrol community in the US Navy was less than enthused about open-ocean rough-water operations. Experience in the Second World War had shown them to be highly uncomfortable and extremely rough on the structures and systems of even the most robust flying boats. With a general lack of support from the US Navy's operational patrol squadrons, the Convair P6Y was canceled in December 1957. From that point on, the role of maritime patrol and ASW would pass to landplanes. That same year the Navy issued an RfP for a successor to both the Lockheed P2V Neptune and the Martin P5M Marlin and specified a land-based aircraft to satisfy the demands of the patrol community. The winning aircraft, a derivative of the Lockheed Electra, first flew in 1958 as the YP3V. In production the P3V was named Orion and after 1962 it was redesignated P-3.

Though the cancellation of the Convair P6Y marked a turning point in naval aviation as landplanes took over roles traditionally assigned to seaplanes, the concept behind the P6Y didn't die there. In 1966 the Japanese Defense Agency issued a contract to flying boat builder Shin Meiwa for an ocean-going ASW flying boat which became the Shin Meiwa PS-1. The PS-1 was powered by four General Electric T64 turboprop engines, but like the P6Y, it had an extra engine in the form of a T58 turboshaft mounted in the dorsal center fuselage to drive a powerful BLC system that allowed the PS-1 to land as slowly as 50-55 kts. Robustly built with a deep hull form influenced by the P6Y, the PS-1 could operate in seas as rough as 14 feet. Like the P6Y, the PS-1 would alight on the ocean and use a dunking sonar to prosecute submarine contacts.

The last PS-1 was delivered to the Japanese Maritime Self-Defense Force in 1978. Procurement ended after only 21 airframes when the open-ocean ASW mission went to the Lockheed P-3C Orion, which cost much less to procure and operate than the PS-1. However, 12 examples of a search and rescue version of the PS-1 were built starting in 1975 and remain in service to this day as the US-1. In 2003 Shin Meiwa first flew an upgraded version of the US-1 flying boat designated the US-1 Kai ("Kai" meaning upgraded) that kept the deep hull form and BLC system but mated the design with advanced Rolls-Royce AE2100 turboprop engines and a Lycoming CTS800 turboshaft driving the BLC system.

19 October 2010

Following the withdrawal of the United Kingdom from the Airbus consortium and the A300 program, the 1970s were a time of significant uncertainty for the British airframe manufacturers. Some in the British aircraft industry advocated a return to the Airbus consortium, some pushed for an indigenous airframe, and yet others suggested joint ventures with other nations, whether in Europe or even with the Americans. The indigenous solution was the least likely of the day in light of the UK government's unwillingness to invest in the promising BAC Three-Eleven program. In 1974 the French laid down the conditions for the return of the UK to Airbus: not only would the UK have to make a substantial contribution to the development and production costs of the A300, but British airlines would also have to purchase the A300 jetliner. It was a low point in relations between the French and British, who had once ridden a wave of cooperation on the Concorde program. Accusations were even aired in the press of the day, with each nation belittling the other's aerospace industry.

Politics complicated the equation with the recession that followed the 1973 Yom Kippur War and the Arab oil embargo on the West. Financial austerity in West Germany pushed the Bonn government to seek to reduce its financial stake in Airbus to 25% of its level at the time. The suggested solution was to expand the Airbus consortium to include the UK and Italy to offset the decrease in West German investment. In the UK, the bankruptcy of Rolls-Royce and the difficulties of the Lockheed Tristar program made the preservation of British aviation jobs a top political priority. In 1977 further consolidation of the British aviation industry took place with the formation of British Aerospace from the merger of BAC and Hawker Siddeley. The new company's priority under the leadership of Lord Beswick was a new commercial airframe.

At the time the French were holding discussions with McDonnell Douglas on a successor to the Dassault Mercure, the CFM56-powered Mercure 200. However, the leadership of McDonnell Douglas wasn't convinced the Mercure 200 was the aircraft on which to launch a trans-Atlantic venture. Wanting to see what other European manufacturers had on the table, McDonnell Douglas also held discussions with British Aerospace. At the time there were several promising designs from Dassault, Aerospatiale, and BAC (which again was looking at a new version of the One-Eleven, provisionally designated the X-Eleven). Britain, for its part, was keen on the JET program (Joint European Transport).

In August 1976 a team of BAC/Hawker engineers (now in the employment of the new British Aerospace entity) visited Boeing in Seattle to review progress on what would become the Boeing 757. At the time work was taking place on what was provisionally designated the Trident Five, which by coincidence shared a lot of features with the Boeing 757. Like the early 757 design with its T-tail and 727 nose, the Trident Five combined a Trident nose and T-tail with a stretched fuselage and a moderately swept wing mounting two RB.211 engines in underwing nacelles. Boeing's president at the time, the colorful Tex Boullioun, offered BAe design work on the wing, engines, RB.211 nacelles, and landing gear of the 757. In fact, Boeing was even willing to have final assembly of the Boeing 757 in the UK, with flight testing taking place there. "I can't understand why anyone would want final assembly- it's 7% of the time spent on the aircraft with 97% of the problems," as Boullioun put it to BAe's head, Lord Beswick.

The British investment in the Boeing 757 would have been less than what the French wanted for re-entry into Airbus, and the potential value to British industry was well in excess of US$3 billion. Boeing went as far as to extend an invitation to British subcontractors to visit Seattle and discuss specific areas of involvement. British Airways was already discussing an order for the 757 and it wasn't a state secret that Lord Beswick disliked the French. But once again, politics sidelined the deal. Despite the new RB.211 engine variant for the 757 being key to Rolls-Royce's recovery, the political winds of the day called for cooperation with the Europeans, and with Hawker Siddeley designing an all-new supercritical wing for the upcoming A310, it was decided at the highest levels of government to continue with what Hawker Siddeley had started with Airbus. The UK had joined the European Economic Community in 1973, and a BAe deal with Boeing to design and build the 757 was unacceptable in light of the spirit of the EEC, which pushed pan-European business and industrial cooperation.

15 October 2010

With the Japanese abruptly surrendering following the atomic bombings of Hiroshima and Nagasaki, American aircraft manufacturers suddenly saw massive orders for the planned invasion of Japan canceled. With wholesale cancellations across the industry, many companies struggled through the second half of the 1940s to adjust to a peacetime economy. One of these companies was Grumman Aircraft on Long Island. During the Second World War, Grumman had built 98% of the torpedo bombers and 65% of the fighters the US Navy used operationally. Leroy Grumman sought alternative markets for the company's design and manufacturing expertise to reduce its reliance on military contracts. One such market that continues to this day was Grumman's successful line of aluminum truck and bus bodies. But a lesser-known venture changed an entire consumer market in a way Grumman could never have matched in aviation.

Towards the end of the war, one of Grumman's vice-presidents, William Hoffman, took a weekend fishing trip in upstate New York's Adirondack Mountains. As he was portaging his wood-and-canvas canoe between lakes, he wondered why no one had ever improved upon the design and construction of the canoe into something not just lighter, but sturdier and safer. With his own background in aluminum tooling, Hoffman began to ponder the idea of an aluminum canoe. On his return to Grumman's main facility at Bethpage on Long Island, he pitched the idea of applying the company's expertise with aluminum aircraft to producing such canoes. Leroy Grumman was impressed and thought it a perfect postwar project for a defense contractor that was having to shrink rapidly. Grumman assigned several engineers to Hoffman, who would head the canoe project. Special aluminum alloy was obtained from Alcoa, and quite by coincidence, Alcoa had an engineer, Russell Bontecu, who was an avid canoeist in his free time and had built several prototypes for Alcoa, though the company wasn't interested. Bontecu's designs, built in sections, weren't commercially viable, but his expertise was valuable and he was hired by Hoffman to help with the Grumman canoe project.

An existing canoe design common at the time formed the basis of the Grumman canoe. The bow and stern were increased in height to allow the canoe to handle rapids and rough waters and watertight compartments were added so the canoe would remain afloat when swamped but could also handle several people while awash as a safety measure. The hull lines were changed to include a flat bottom for better stability and this coincidentally also made it easier for tooling to press the hull forms from sheet aluminum. The structural members of the canoe were made of lightweight but strong aluminum extrusions based on the company's experience with aircraft structures. The prototype weighed in at 38 pounds compared to 64 pounds for a conventional wood and canvas design.

The first canoes were built in the employees' bowling alley at the Bethpage facility. The prototypes were then tested thoroughly on the Allagash River in Maine, where the canoe's sturdy design allowed it to navigate rapids and pass over boulders without taking any damage. Exhibited publicly in New York City, the canoes were an instant hit, and by 1946 over 10,000 were on order and production had to go to a three-shift basis to keep up with demand. Just outside the Grumman plants it wasn't unusual to see stacks and stacks of canoes ready for delivery; at one point, over 8,000 canoes were stockpiled right next to a hangar at Bethpage! With the outbreak of the Korean War in 1950, space was again needed for increased aircraft production and canoe production was moved to upstate New York at Marathon, just south of Syracuse. Demand for the canoes peaked in 1974 with 33,000 sold following the movie "Deliverance", which featured Grumman canoes.

In 1990 Grumman sold its boat division (which by this time had expanded into producing other types of vessels) to Outboard Marine Corporation (OMC). The last Grumman-branded canoe was produced in 1996, but only four months later a group of former Grumman and OMC employees secured financing to relaunch production of the canoes as the Marathon Boat Group, which still produces them to this day. A special agreement was established with Northrop Grumman to allow the recognized Grumman name and brand to be used on the canoes. While there are other aluminum canoe builders on the market today, none has had the reach and impact on the sport that Grumman has had, and today used Grumman canoes are still highly sought after by enthusiasts, sportsmen, and collectors.

13 October 2010

On 8 April 1940 the US Navy ordered two prototypes from Grumman for a multirole torpedo bomber to replace the antiquated Douglas TBD Devastator. With Grumman having well-established credentials in designing tough carrier-based fighters, what would become the TBF Avenger had a remarkably smooth development period thanks to the experience built up in designing and producing the same company's F4F Wildcat fighter. The Avenger was Grumman's biggest single-engine carrier-based aircraft to date- fully loaded, it weighed as much as two Wildcats. But one of the Avenger's most significant technical innovations also happens to be one of its least-known features.

At the time the Avenger was ordered by the US Navy, few aircraft in any nation's service worldwide used gun turrets; most defensive guns were trained manually on gimbaled mounts through open hatches and windows. The gun turrets then in existence used either mechanical or hydraulic systems to rotate the turret in azimuth and to raise or lower the gun in elevation. Mechanical systems used wheels and brakes operated by the gunner; hydraulic turrets added hydraulic power to the same arrangement to lessen the physical effort needed by the gunner to track and engage targets. Neither system satisfied Grumman's engineers, and combat use showed that both were either too slow or too heavy for a single-engined carrier-based torpedo bomber. The Navy asked Grumman for a more responsive system that was also lightweight.

Leroy Grumman turned the problem over to the first electrical engineer hired by the company, a young ex-General Electric engineer named Oscar Olsen, assigning him the challenge of developing an electrically-driven system for the Avenger's gun turret. The challenge was bigger than it seemed: depending on the flight attitude of the aircraft, different sides of the turret's circumference would present different loads to any drive system, resulting in differing speeds that would adversely affect target tracking. For instance, if the Avenger were in a dive and the turret were rotating to engage an enemy aircraft, the side of the turret rotating downward presents less of a load on the motor than the opposite side rotating upward against gravity.
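The asymmetry Olsen faced is easy to put in rough numbers: in a dive, gravity opposes rotation on the rising side of the turret and assists it on the descending side, so an unregulated motor slews at different speeds depending on direction. A toy model (all figures here are illustrative, not Grumman's actual values):

```python
import math

# Illustrative figures only- not taken from the TBF turret design.
PITCH_DEG = 30.0       # nose-down dive angle
GRAVITY_TORQUE = 20.0  # peak gravity torque on the turret drive, N*m
MOTOR_TORQUE = 50.0    # available drive torque, N*m
FRICTION = 10.0        # constant friction torque, N*m

def net_torque(slew_direction: int) -> float:
    """Net accelerating torque; +1 slews against gravity, -1 with it."""
    gravity = GRAVITY_TORQUE * math.sin(math.radians(PITCH_DEG))
    return MOTOR_TORQUE - FRICTION - slew_direction * gravity

uphill = net_torque(+1)    # heavy side rotating upward: less net torque
downhill = net_torque(-1)  # heavy side rotating downward: more net torque
print(f"uphill {uphill:.1f} N*m vs downhill {downhill:.1f} N*m")
```

With a fixed drive torque the turret accelerates much harder in one direction than the other, which is exactly the tracking problem a speed-regulated drive like Olsen's amplidyne system was meant to cancel out by varying torque to hold the commanded speed.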

Olsen fell back on his experience at GE and looked at what were called amplidyne motors- electrical motors in which both the speed and the torque can be controlled. Olsen knew that GE had made them for the construction industry, where two motors were used to lift a bridge span; being on opposite ends of the span, it was important to have absolute control over both the speed and torque of the motors winching it up. GE also made amplidyne motors for the steel industry to wind up the steel sheet used to make tin cans. As the flattened sheet comes out of the furnace, the tension has to be kept within a narrow range or the sheet sags, or worse, breaks. Amplidyne motors were used to spool up the steel sheet- as the roll got bigger and heavier, the torque could be increased while the speed was controlled to maintain the same tension on the sheet. Olsen figured that a set of amplidyne motors synchronized with an electric circuit would allow the Avenger's gunner to move the turret smoothly and quickly with just fingertip controls.
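The constant-tension winding problem Olsen drew on is simple to state: strip tension equals motor torque divided by roll radius, and strip speed equals angular speed times radius, so as the roll grows the controller must raise torque and lower angular speed in proportion. A minimal sketch of that relationship (the numbers are illustrative, not GE's actual figures):

```python
# Constant-tension winding: tension = torque / radius, strip speed = omega * radius.
# As the roll radius grows, torque must rise and angular speed must fall in step.
TARGET_TENSION_N = 500.0     # desired strip tension (illustrative)
STRIP_SPEED_MS = 2.0         # desired constant linear strip speed (illustrative)

def winder_setpoints(radius_m: float) -> tuple[float, float]:
    """Return (torque in N*m, angular speed in rad/s) for a given roll radius."""
    torque = TARGET_TENSION_N * radius_m     # torque scales up with radius
    omega = STRIP_SPEED_MS / radius_m        # angular speed scales down
    return torque, omega

for radius in (0.2, 0.4, 0.8):              # roll radius growing as strip winds on
    torque, omega = winder_setpoints(radius)
    tension = torque / radius                # recovered tension stays constant
    speed = omega * radius                   # recovered strip speed stays constant
    print(f"r={radius:.1f} m  torque={torque:6.1f} N*m  omega={omega:5.2f} rad/s  "
          f"tension={tension:.0f} N  speed={speed:.1f} m/s")
```

The same principle maps onto the turret: instead of holding strip tension constant against a growing roll, the drive holds commanded slew speed constant against a gravity load that varies with aircraft attitude.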

Olsen paid a visit to his old engineering colleagues at GE's research facility and explained what he needed and why. In a short period of time, GE produced by hand several prototypes of a small but powerful amplidyne motor that were installed and successfully tested on the TBF Avenger prototypes, proving the electrically-powered amplidyne-driven turret superior in all respects to both mechanical and hydraulically-driven gun turrets. The gun turrets of many subsequent combat aircraft were patterned on the amplidyne motor-driven system devised by Oscar Olsen and pioneered in operational use by the TBF Avenger.

11 October 2010

Throughout the 1950s De Havilland Canada saw its reputation for tough STOL aircraft grow with the DHC-2 Beaver and its larger successor, the DHC-3 Otter. From the research standpoint, the durability and load-carrying capacity of the Otter in particular led to one of the more unusual STOL research programs- one that, while little known to most, played a significant role in advancing De Havilland Canada's knowledge base in STOL aerodynamics. In 1956, two agencies of the Canadian government, the National Research Council and the Defence Research Board, agreed to a joint venture with the company to use a highly modified Otter for STOL research. Both the NRC and DRB had already been closely involved with De Havilland on Beaver and Otter development for several years. The government report on the proposed program stated "The purpose of the DRB/DHC program is to assess the aerodynamic performance, stability and control problems of STOL aircraft with the object of finding new refinements in the art."

The Otter STOL research program progressed in three phases. The first phase had an ex-RCAF Otter modified with a large "bat wing" flap on the inboard half of each wing, greatly increasing the wing chord and surface area. Both the modified Otter and a standard Otter were tested to evaluate the differences each successive modification brought to STOL flight. In addition, the horizontal tailplane was given a considerable amount of dihedral and moved up the fin to bring it out of the propwash of the main engine. Since most flights would take place in the slow-flight regime, the fin was significantly enlarged in height and chord to provide more directional authority at low speeds. Before the first flights took place, the modified Otter was fixed to a ground rig that was truck-towed up and down the main runway at De Havilland Canada's Downsview facility at up to 35 mph to evaluate wing behavior just before liftoff and after touchdown. Once the modifications were cleared, a series of short hops commenced that led to longer flights. This first series of flight tests ended in 1960.

For the second phase of testing, the Otter was again modified. The original landing gear was replaced with a very robust external four-point gear to absorb the impact of high sink-rate landings. The gear looked a lot like two small floats, in fact, but was designed to absorb as much as twice the landing energy of a standard Otter's gear. The second modification put a General Electric J85 turbojet (the same engine family used on the Northrop T-38 and F-5) in the rear fuselage, with its exhaust diverted out through vectorable nozzles on each side of the lower rear fuselage. A rotary control in the cockpit allowed the DHC test pilots to divert the J85's thrust anywhere from fully aft to fully forward to evaluate the effects of jet deflection combined with the main engine's slipstream over the bat-wing flap sections. In this configuration, the modified Otter could fly as slowly as 45 mph! This series of tests ran from 1961 to 1962.

The third and final phase of testing benefitted from De Havilland Canada's close association with Pratt & Whitney Canada in the development of the PT6 turboprop. As DHC had modified a Beech 18 for the powerplant company to flight test the prototype PT6 engines, DHC had an inside track on the PT6 and its development. The Otter's original radial engine was removed and replaced with a rounded nose fairing, and two PT6 turboprops rented from Pratt & Whitney Canada were installed on the wings just outboard of the bat-wing sections. Several hundred hours' flying time were logged studying the effects of the more powerful propwash of the twin PT6 engines combined with the J85 jet engine and its vectorable exhaust. When this phase of the Otter STOL program ended in 1965, the experience and data gathered would have a significant impact on the De Havilland DHC-6 Twin Otter program launched that same year.

09 October 2010

As far back as the late 1960s, while Airbus Industrie was still a start-up finalizing the design of the A300, there was a realization among the major airframe manufacturers of Europe that there would be a need for a single-aisle aircraft to replace the aging fleets of BAC One-Elevens, Tridents, and Caravelles. A multitude of designs emerged from the various companies, as there was a common desire for a "European" competitor to the highly successful Douglas DC-9 and Boeing 737 airliners. By the new decade, only France and the UK had the industrial capacity and expertise to put such an aircraft into service, but the UK was reeling not just from the withdrawal from the Airbus consortium, but from the cancellation of the BAC Three-Eleven project that had caused the UK to withdraw from Airbus in the first place. France was putting its weight behind the Dassault Mercure, but the devaluation of the US dollar, the increase in the price of oil following the 1973 Yom Kippur War and Arab oil embargo, and the short range of the Mercure (at full payload, barely 1,000 miles) meant that it wasn't the aircraft the airline industry wanted at the time. BAC found itself reliant on new versions of the BAC One-Eleven, the 475 and 500 variants, but even these designs were failing miserably against Boeing and McDonnell Douglas on the world market.

At the 1971 Paris Air Show BAC announced the QSTOL (Quiet Short Take Off/Landing) airliner, which looked like a scaled-up, wide-body version of the later BAe 146 airliner. But BAC lacked the financial capital to commit the QSTOL to production, and the UK government, smarting after the cancellation of the BAC Three-Eleven and the bankruptcy of Rolls-Royce, was hesitant to fund its launch; the project died quietly that year. In February of the following year BAC formed a consortium with MBB of West Germany and Saab-Scania of Sweden to develop the Europlane, with the realization that a joint venture was the only way to secure the financing to launch production of any new jetliner. CASA of Spain joined later that year, and at the 1972 Farnborough Air Show the Europlane was announced, only to be redesigned for the 1973 Paris Air Show as a CF6/RB.211-powered rear-engined twinjet with a T-tail that borrowed heavily from the BAC Three-Eleven design. But politics intervened- with MBB heavily committed to the A300 program, Aerospatiale saw the Europlane as distracting MBB from full commitment to Airbus Industrie. Eventually the Europlane, too, died quietly.

But while the Europlane venture was active, Hawker Siddeley formed a rival team called CAST (Civil Aircraft Study Team) with VFW-Fokker and Dornier to look into a family of aircraft to compete with Europlane. CAST got off the ground in 1972 and though its design studies never got as far as Europlane's did, one design did emerge: a single-aisle twin with underwing engines. At the 1974 Farnborough Air Show the Group of Six was announced- a new consortium to replace both Europlane and CAST, comprising BAC, Hawker Siddeley, Aerospatiale, Dornier, MBB, and VFW-Fokker. The Group of Six combined the work of Europlane and CAST into a new project that featured two designs: a 200-seater designated Type A and a 110+ seater designated Type B. The Type A design became the Airbus A310. With the entry of Dassault into the consortium it became the Group of Seven. The British offered new variants of both the Trident and the One-Eleven, and the French offered two designs, the Aerospatiale A200 and the similar-looking Dassault Mercure 200 with CFM56 engines.

This attempt at the Type B from the Group of Seven ended when the French insisted upon a lead role and Dassault broke away to pursue a joint venture with McDonnell Douglas based on the Mercure 200 design. That effort also failed over issues of design leadership between the two companies. With the nationalization of BAC in 1977 to form British Aerospace, the new BAe, needing a new cornerstone in the commercial market, abandoned further development of One-Eleven versions and joined forces with Airbus that year to develop an all-new 150-seat single-aisle aircraft under the program name of JET (Joint European Transport). BAe was offered a lead role in JET with final assembly in the UK provided the British returned to Airbus Industrie. JET was made up of British Aerospace (BAe), with Airbus being represented by MBB, VFW-Fokker, and Aerospatiale. Hawker Siddeley, now part of BAe, led the design effort for JET and created JET1, a 136-seater, and JET2, a 163-seater. Both designs were influenced by Aerospatiale's own earlier A200 design and were powered by the GE/SNECMA CFM56 engine, and early on a decision was made for a fuselage diameter larger than that of the Boeing 727/737 to allow a more comfortable six-abreast seating arrangement than that of the Boeing jets. After discussions with potential airline customers, JET1 was dropped and efforts were focused on JET2.

In 1978 JET2 was formalized, with BAe responsible for lead design and final assembly in the UK and Airbus Industrie responsible for coordinating BAe's European partners. The following year the UK returned to the Airbus consortium and, curiously, the JET2 team was relocated to Toulouse, France; in 1980 JET2 was redesignated under the SA (Single Aisle) designator, with SA1, SA2, and SA3 being various lengths of the JET2 design.

At the same time that JET2 was moved to Toulouse, Delta Air Lines issued its "Delta III" requirement for a 150-seater with 50% of the fuel burn of the Boeing 727 that formed the bulk of Delta's US domestic fleet. Keen to enlarge its market share in the United States, Airbus consulted with Delta on the Delta III specifications and focused much of the SA work around Delta's requirements. This aircraft was finalized in 1984 and launched by Airbus Industrie as the A320. The UK government and the British aircraft industry unions pushed hard to have A320 final assembly moved back to the UK, but the increased financial commitment Airbus required to establish A320 production in the UK met with disapproval in the British Parliament and A320 assembly remained where it is to this day, at Toulouse.

Ironically, Delta never ended up ordering the A320 despite its needs having significantly shaped the A320's design, much as American Airlines had shaped the A300 design nearly 15 years earlier. It was only years later, after the A320 program was well established, that Delta ended up operating the A320- not as a customer, but through its 2008 merger with Northwest Airlines, which had a substantial A320 fleet as well as the shorter A319.

08 October 2010

In the late 1960s, as the nascent Airbus Industrie was formalizing the design of the A300, the original intent of the consortium was to use a new 47,500 lb-thrust turbofan then in development by Rolls-Royce, the RB.207. Great Britain at the time was actively involved in the early days of Airbus as a full partner, on the heels of the Anglo-French Concorde program. The two main British companies involved with Airbus were Rolls-Royce, to provide the engines for the A300, and Hawker Siddeley, to provide the wing design. However, all was not well for the commercial airframe start-up: development costs in Toulouse were rising and, worse yet, RB.207 development was falling behind at Rolls-Royce. I had posted earlier on the A300 that the original design for the aircraft, the HBN100, was a much larger aircraft than what resulted, seating over 300 passengers compared to the 250 of the A300 as built. As a twinjet the size of the rival Lockheed L-1011 and McDonnell Douglas DC-10, the HBN100 would need more powerful engines than the Rolls-Royce RB.211 used on the Tristar or the General Electric CF6 used on the DC-10, and the responsibility for developing such an engine, the RB.207, was assigned to Rolls-Royce.

With the development of the RB.207 behind schedule, timing could not have been worse when in 1968 European financial houses made a run on the French gold reserves, depleting them by 30% in a short period of time and contributing not just to the collapse of the French franc, but to the fall of President Charles de Gaulle's government amidst political unrest in the country. As economic malaise spread through Europe, airlines began to have doubts about the viability of the A300 and its original 300+ seat capacity. Airbus partners Sud-Aviation and Hawker Siddeley even began to prepare fallback designs based around a smaller aircraft.

Matters worsened through the year for Airbus, as Rolls-Royce was developing the RB.211 engine for the Tristar as well as the RB.207. Given that three engines would go into each Tristar versus only two for the original A300 design, Rolls-Royce began to divert engineering resources from the RB.207 program to the RB.211, which was facing technical issues of its own as the world's first three-spool engine design. The RB.207 wasn't even Airbus' first choice for the A300. French engine manufacturer SNECMA had partnered with Pratt & Whitney to offer the JT9D engine, which was initially favored by Airbus, but Rolls-Royce exercised its considerable political clout and had the UK government push for a "European" engine in the form of the proposed RB.207. Rolls-Royce even went as far as to suggest that it was wrong for an American engine to be part of a European aircraft. Due to pressure from Rolls-Royce and the UK government, the RB.207 was selected for the original A300 design- yet within just a few years it was Rolls-Royce itself that was putting more emphasis on the RB.211 for the Tristar than on the RB.207 it had pushed so hard for!

The writing was on the wall when Roger Beteille, the head of Airbus, found out the selling price of the RB.211 engine: two RB.207 engines would cost more than three RB.211s, which put the A300 at a considerable price disadvantage to the Tristar. There was no way the A300 would succeed if Airbus stuck it out with Rolls-Royce and the RB.207. The only way out Airbus could find was to shrink the A300 from a 300+ seat aircraft to a 250-seat maximum. It was a master stroke that saved the program, as it made existing engines in the form of the RB.211 and the CF6 appropriate for the A300, and Pratt & Whitney/SNECMA could now offer the JT9D as well.

In December 1968, the A300 was formally launched as a 250-seater with a wider range of engine options that left out the RB.207. As a smaller aircraft, it drew more interest from the airlines of Europe, and ironically it was the head of Hawker Siddeley, Sir Harry Broadhurst, who announced that the RB.207 wasn't needed and that the JT9D was on the table as an option for the new A300 design. At this point things got very complicated for the UK government. Money had been provided for the development of the RB.207 engine for the original A300 design. Money had also been invested in the BAC Three-Eleven, which was now squarely in competition with the new, smaller A300 design. The Three-Eleven would have been powered by two RB.211 engines, the engines to which Rolls-Royce had given development priority over the RB.207. In the end, the need to address Great Britain's trade deficit meant that official support was given to the Three-Eleven program. This also increased the market for the RB.211 engine even as continual problems at Rolls-Royce eventually forced the company into receivership, with the British government stepping in to keep it afloat. By the summer of 1970 Great Britain had withdrawn from the Airbus consortium, as continued support for Rolls-Royce put it in conflict with the A300, but Sir Harry Broadhurst felt Hawker's investment was worth keeping and Hawker Siddeley remained with the Airbus consortium as a major subcontractor building the wings for the A300.


TAILS THROUGH TIME now has its own URL at www.tailsthroughtime.com! The previous URL of aviationtrivia.blogspot.com still works and will redirect you to the new URL. You may see those redirects within article links that connect to past articles.
