Global airline accident review of 2009

Airline safety in 2009, judged by the number of fatal accidents, was a little better than the average for the decade. Better still, the first 10 years of the 21st century, taken as a whole, have seen the lowest accident rates in aviation history by a considerable margin.

The bad news is that the constant improvement in safety seen in every decade since the Wright Brothers is now stagnating. Judging by fatal accident numbers, there was a step change in safety performance around the year 2000, but there has been virtually no improvement in the 10 years from 2000 to 2009.

In 2009 there were 28 fatal airline accidents and 749 fatalities across all sectors of the global airline industry, which compares respectively with 34 and 583 for the previous year. But since the beginning of the decade, and particularly since 2003, the number of annual fatal airline accidents has almost levelled out, and 2009 figures continue this trend.

In March this FedEx Boeing MD-11F flipped on to its back when the pilot lost control during a normal landing attempt

Statistical analysis, once the number of departures for the year is confirmed, will show accident rates to have improved by less than the event numbers alone imply, because traffic last year dropped significantly compared with 2008.
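The arithmetic behind that point is simple: a rate is accidents divided by departures, so when traffic falls too, the rate improves by less than the raw accident count does. A minimal sketch, using the 2008 and 2009 accident counts from above but purely hypothetical departure totals (the actual 2009 traffic figures were not yet confirmed):

```python
# Accident counts are from the article; the departure figures below are
# illustrative assumptions only, chosen to show the effect of falling traffic.
accidents_2008, accidents_2009 = 34, 28
departures_2008 = 35_000_000   # hypothetical
departures_2009 = 33_000_000   # hypothetical: traffic down roughly 6%

rate_2008 = accidents_2008 / departures_2008 * 1_000_000  # per million departures
rate_2009 = accidents_2009 / departures_2009 * 1_000_000

drop_in_count = (accidents_2008 - accidents_2009) / accidents_2008 * 100
drop_in_rate = (rate_2008 - rate_2009) / rate_2008 * 100

print(f"accident count fell {drop_in_count:.1f}%")  # 17.6%
print(f"accident rate fell {drop_in_rate:.1f}%")    # 12.7% - smaller, as departures also fell
```

With any departure figures that fall year on year, the rate improvement is smaller than the count improvement, which is the article's point.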

Meanwhile, a Flight Safety Foundation analysis of global accident rates for Western-built jet airliners over the past 20 years demonstrates that the average figure for the noughties decade shows a marked improvement compared with the average for the 1990s.

The FSF's rate, which includes serious (but not necessarily fatal) accidents involving Western-built jets, shows that the rate for the 1990s as a whole was 1.18 events per million departures, which compares with 0.57 for the noughties. But the varying trends within those 20 years, if they are broken up into five-year periods, are revealing:

1990-94: 1.32 serious accidents per million departures.

1995-99: 1.06.

2000-04: 0.58.

2005-09: 0.55.

Those figures show an accident rate reduction of 19.6% between the average for the first and second half of the 1990s. In the noughties, the difference between the first and second halves is a 5% rate reduction. But the truly dramatic difference appeared during the decade from 1995 to 2004. In the five years before 2000 the rate was 1.06, and in the following five years that figure nearly halved to 0.58.
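Those reductions can be checked against the FSF's published five-year rates. Because the published rates are rounded to two decimal places, the first result comes out at 19.7% rather than the quoted 19.6%, which was presumably derived from unrounded data:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percentage fall from one accident rate to the next."""
    return (before - after) / before * 100

# FSF serious-accident rates per million departures, by five-year period
print(f"1990-94 to 1995-99: {pct_reduction(1.32, 1.06):.1f}%")  # 19.7%
print(f"2000-04 to 2005-09: {pct_reduction(0.58, 0.55):.1f}%")  # 5.2%
print(f"1995-99 to 2000-04: {pct_reduction(1.06, 0.58):.1f}%")  # 45.3% - nearly halved
```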

So the relevant question is this: what happened in the last five years of the 20th century and the first five of this one that caused such a huge improvement in safety?

A look at the changes that took place in safety management thinking and action in the run-up to the year 2000 gives the answer.

During the late 1980s and early 1990s the rapidly advancing ability to gather accident data for computer analysis began to transform strategic safety management thinking. Rather than basing safety improvement strategies on reaction to individual accidents as they occurred, policy formulation was able, for the first time, to be driven by hard data gathered over extended periods of time. So precise areas of risk could be identified, quantified and prioritised for detailed treatment.

Accident causal factors and mitigation strategies could also be identified geographically: strategies could be derived and applied globally, regionally, nationally or at operator level. At the same time, the exchange and dissemination of data became easier.

The US Commercial Aviation Safety Team (CAST), which went into operation in 1998 led by the Federal Aviation Administration, was the first national organisation set up to identify safety priorities and create an action plan. Europe was hard on its heels with the European Strategic Safety Initiative (ESSI), and similar systems are now in place in regions all over the world. Although there were minor regional variations in the order of priorities for mitigating the identified safety risks, CAST and ESSI both identified controlled flight into terrain (CFIT) and approach and landing accidents as the first areas for treatment.

The FSF had already been working on plans for mitigating both of these accident categories for some time before 1998. Analysis of approach and landing accidents led to the FSF's ALAR (approach and landing accident reduction) strategy based on the specific risks identified, and the foundation had developed and distributed ALAR and CFIT toolkits to airlines in the early 1990s.

Then the FAA's mandating of terrain awareness and warning systems (TAWS) in 1997-98 following the American Airlines accident at Cali, Colombia, made a massive difference to safety: there have been no CFIT accidents since that time involving aircraft fitted with TAWS.

The benefits airlines gained from a better understanding of ALAR risks also began to kick in at about the same time, although that complex accident category is far from eliminated.

Simultaneously, another new technology was starting to demonstrate quantifiable benefits: the integrated digital "glass cockpit" - early versions of which were introduced in the 1980s - was gradually maturing, and crew situational awareness was improving with it.

DIGITAL SYSTEMS

But equally important is the fact that, by the mid-1990s, crews were becoming familiar with the strengths and weaknesses of the new digital systems in much the same way as, all over the world, people in offices were slowly learning to make good use of their personal computers.

In 1996 the FAA published the results of a landmark study of this subject, called The interfaces between flight crews and modern flight deck systems. This revealed that - especially with the early electronic flight instrument systems - greater automation sometimes increased pilot workload, and the greater system complexity combined with multi-mode capability could produce pilot confusion. While making flight and navigation more accurate, the digital flight management systems provided opportunities for new kinds of pilot error, which had become a factor in several serious accidents. The report clarified the issues that led to these, and cockpit design and procedures benefited.

Proposals resulting from the US National Transportation Safety Board's ongoing investigation into the 12 February 2009 Colgan Air Bombardier Q400 accident at Buffalo, New York (see accident listing) have been announced by FAA administrator Randy Babbitt. He wants to see a qualification programme for pilots flying Part 121 aircraft that checks not just for possession of a commercial pilot licence, but for training exposure to all the flying environments such a pilot might encounter, including high-altitude flight and multi-crew skills. Babbitt cautions against a legislative approach that would assume a large number of flying hours alone constitutes appropriate experience.

Pilots of the Spanair Boeing MD-82 that crashed fatally on take-off from Madrid Barajas airport on 20 August 2008 had missed two checklist opportunities to set and check the flaps/slats for take-off, the inquiry reports. At the "take-off imminent" checks the co-pilot called "11", meaning flaps checked at 11°, but the report says he cannot have carried out the required visual check of the flap/slat indicator, because the flaps were not set. The inquiry failed to establish for certain the connection between the crew's intentional isolation of power to the overheating ram air temperature probe and the failure of the take-off configuration warning system to operate, but it has required Boeing to ensure that such a failure cannot recur.

Swedish investigator SHK says that an Austria-based MAP Boeing MD-83, operating a charter flight for Atlasjet, took off from Åre Östersund airport on 9 September 2007 at a weight 3.2t above the maximum allowable for the conditions on the day. As a result, the aircraft lifted off only just before the runway end and damaged the approach lights, but it landed safely at its planned destination, Antalya in Turkey. The investigators concluded that commercial pressures led the crew to take shortcuts in their take-off performance calculations, so actual weather conditions and some load details were not taken into account.

The UK Air Accidents Investigation Branch's examination of a serious incident involving a Thomas Cook Airbus A330-200 at Montego Bay, Jamaica on 28 October 2008 has led it to recommend that regulators call for the development of a take-off performance monitoring system. The aircraft failed to lift off at rotation, and the commander ensured a safe take-off only by immediately selecting TOGA power. The AAIB says the crew appeared to have used take-off performance figures calculated at Thomas Cook's operations centre, and these seem to have assumed the aircraft was at a much lower weight. The agency says it also identified 26 recent accidents and incidents in which incorrect take-off figures had been derived, with serious consequences, and it calls for the development of a more reliable system for checking and monitoring take-off performance.

Meanwhile, in 1999 the International Civil Aviation Organisation began to implement its Universal Safety Oversight Audit Programme, making individual states fully and publicly accountable for the quality of safety oversight provided by their national aviation authority. This measure, reinforced by the awareness generated in the years of preparation for its adoption, started the process of bringing safety accountability to the world's economically underdeveloped countries.

Before this, in 1992, the FAA had set up its International Aviation Safety Assessment programme, under which the FAA had to approve the safety oversight standards of any country whose airlines wanted to operate to the USA. By 1998, the European Union was running a similar programme, the Safety Assessment of Foreign Aircraft. Pressure was mounting on rogue states that had not invested in a proper safety oversight system.

The bottom line is that the effects of all these measures - data-driven safety strategy, the adoption of TAWS and other improved airborne technology, and a determination at ICAO to see that aviation standards agreed by states at treaty level were actually applied locally - were all brought to bear in the last five years of the 1990s. The result was visible as soon as the new century dawned, and the improvement has proved robust through the decade to 2009.

But having made that one giant leap in safety performance, the system has now to seek new ideas for further advance, because the fruits of the seeds sown in the 1980s and 1990s have all been harvested. There might be a slight further reduction in accident rates as the proportion of fourth-generation aircraft (Airbus A320, Next Generation Boeing 737 series and all later types) increases still further in the world fleet, thus making TAWS almost universal.

Since 2000, serious accidents have frequently involved pilot failure to manage situations that they should really have been able to handle successfully. The year 2009 was no exception. Examples last year include: the Turkish Airlines Boeing 737-800 crash at Amsterdam; the Colgan Air Bombardier Q400 crash at Buffalo, New York; the FedEx Boeing MD-11F landing accident at Tokyo Narita; and the Yemenia A310 accident near Moroni, Comoros Islands (see accident listings).

TECHNICAL PROBLEMS

When more becomes known about the Air France Airbus A330 loss over the Atlantic it may turn out to have been in the same category, because the technical problems known to have afflicted the aircraft do not appear to have rendered it uncontrollable, yet the crew was unable to prevent it crashing into the sea.

The Turkish Airlines event was caused by a technical anomaly followed by crew failure to monitor the approach airspeed, which led to a stall. The Colgan case involved a stall on approach followed by a totally inappropriate crew response to it, and early information about the Yemenia crash suggests that aircraft also stalled during a night visual approach over the sea. In the case of the FedEx MD-11F, the crew catastrophically lost control during an apparently ordinary landing after what appeared to be a stable approach in fair conditions.

The mitigating event for airline crews in 2009 was the textbook management of total power loss in a US Airways A320 following a birdstrike over the Hudson river, and the aircraft's successful ditching with no casualties. Less spectacular, but equally cool-headed, was the response of the British Airways Boeing 747-400 crew when the leading-edge high lift devices retracted at rotation during take-off from Johannesburg, putting the aircraft on the edge of a stall (see accident listing).

Although pilot failure to deal successfully with a problem is not a new causal phenomenon, its significance in accident statistics is growing as purely technical causes for crashes become rarer. The Buffalo event has led the FAA to carry out a comprehensive review of training standards for new pilots, during both ab initio and recurrent training programmes.

At the Flight International Crew Management Conference in London in early December, delegates debated whether this deterioration in pilot performance is a symptom of the long-term effects on crews of operating highly automated aircraft. Loss of control (LOC), which has been proportionately increasing as a serious accident cause, is believed to be one of the symptoms of this phenomenon.

In the absence of any appropriate change in the statutory recurrent training requirements, there is no reason to believe this is going to change. A vital component of an airline pilot's recurrent training has gone missing with the advent of high levels of automation, and it has not yet been replaced.

The missing component is the on-the-job mental and physical interactivity with the aircraft and its navigation systems that pilots used to get in "round-dial" classic cockpits that lacked integrated navigation displays and highly capable digital flight management systems. All pilots still learn the basic "raw data" capability during their ab initio training, but if they go straight on to highly automated aircraft they may never use it again. That is not a problem until an electrical anomaly leaves them with nothing but standby instruments, or with a reduced panel, at night or in instrument meteorological conditions.

Training solutions to compensate for this loss of line-flying practice might include the introduction of compulsory upset recovery training, and/or mandatory simulator time spent flying manually on raw data only during twice-yearly recurrent training sessions. But there is no sign yet that any aviation authorities are preparing to address this issue. The nearest existing option for airlines is to win approval to operate an advanced training and qualification programme, which gives them some flexibility to tailor formerly rigid recurrent training regimes to their own fleet operational experience or individual pilot needs.

Major airlines worldwide have also begun to appreciate that flightcrew who meet legal pilot licensing minima with little or nothing in reserve cannot reliably cope with the high workloads generated by anything from system failures to approaches in marginal weather conditions. But the reduction in the supply of pilots from the military, combined with airlines' withdrawal from pilot training sponsorship, means that carriers are increasingly likely to have to recruit self-selected, self-funded pilots who can only afford to train to the legal minima.

PERFORMANCE-BASED TRAINING

Performance-based, rather than prescriptive, training and licensing for commercial pilots has long been advocated, but although ICAO approved the performance-based multi-crew pilot licence (MPL) standards some years ago, take-up of this option has so far been poor, and training for the less comprehensively defined CPL has continued. The reason is that national aviation authorities must work with flying training organisations and airlines to design approved MPL courses that ensure the required pilot performance standards are not only achieved, but that their achievement is measurable, as ICAO requires.

When the MPL has been widely implemented, and if it delivers what it was carefully designed to do, a single global pilot licensing standard should be easier to achieve and to police, but this remains a distant goal.

In our review of airline safety from 1990 to 1999 (Flight International, 25-31 January 2000), we quoted the then International Air Transport Association director general Pierre Jeanniot on his organisation's safety ambitions. He said IATA wanted to halve the 1995 hull loss accident rate in 10 years. It would do this, said Jeanniot, by setting up a safety data exchange system, and an airline operational safety audit system the passing of which would be a condition of membership. To its credit, the industry achieved all those safety goals by 2005. The question now is how to improve further.

IATA has not set itself a statistical goal this time, but is backing the industry-wide implementation of safety management systems, fatigue risk management, and flight data analysis as tools to produce further advance, meanwhile attacking clear targets - such as runway excursions - as areas in which improvement must be achievable.

It is difficult to see future quantifiable improvement taking place at the same rate as it did in the five years either side of 2000, but it remains clear that preventable accidents are still happening, so there is plenty of room for improvement.