Toyota Underestimated 'Deadly' Risks

SAN JOSE — A software expert whose testimony led to a verdict against Toyota Motor Corp. in one of a series of runaway acceleration cases said Tuesday that the best safeguard against similar "deadly" failures is stronger, smarter oversight by federal regulators.

Michael Barr, co-founder and CTO of the Barr Group, told an audience of embedded system engineers at the EE Live! conference here that as automobile manufacturers have pushed each other into a race to fit cars with complex electronic control systems, watchdogs at the National Highway Traffic Safety Administration (NHTSA) have failed to keep pace. Lacking a team of experienced experts to test and monitor today's flood of automotive software designs, NHTSA is failing in its mission to oversee "safety-critical systems."

Despite assurances by companies like Toyota that their software undergoes rigorous testing, said Barr, the rush to get cars on the road means that "You, the users, have been testing the software."

In some cases, like that of Jean Bookout, who was seriously injured when her 2005 Toyota Camry accelerated unintentionally, that sort of ad hoc consumer testing can end in catastrophe. A passenger in the Bookout car, Barbara Schwarz, was killed. After Barr testified at length for the plaintiffs -- in the only software-focused Toyota case tried to date -- an Oklahoma City jury awarded $3 million to Ms. Bookout and to Ms. Schwarz's family.

Commitment to a culture of safety

While insisting on tighter NHTSA regulation, Barr did not absolve carmakers, whose current passion has been described as turning every new car model into a giant, apps-loaded smartphone.

Barr said that Toyota -- and by implication other auto companies eager to load their products with electronic controls -- lacks a "mature design process, done right, documented, and peer reviewed."

He called for carmakers -- regardless of the government's role -- to adopt a "company culture and an engineering culture of wanting to know what can go wrong, and wanting to fix what can go wrong, from the outset," rather than after the fact, with apologies and million-dollar settlements.

Since the problem of "unintended acceleration" in Toyotas burst into headlines after a ghastly California crash that killed Mark Saylor, a 19-year California Highway Patrol veteran, and three family members, Toyota has recalled millions of cars and paid billions in penalties and settlements. Among these was a $1.2 billion criminal fine imposed last month by the Department of Justice -- for lying to government regulators.

Using an exhaustive 56-slide PowerPoint presentation and citing his 18 months examining Toyota's automotive software "source code," Barr convinced the Oklahoma jury that Toyota had deployed dangerously flawed software in its cars. Despite Barr's findings, Toyota continues to claim that all its unintended acceleration problems were mechanical, the result of misplaced floor mats and "sticky" gas pedals.

Neither NHTSA, with its absence of software expertise, nor the NASA Engineering and Safety Center -- to which NHTSA turned to study the Toyota problem -- was able to pinpoint a software cause for unintended acceleration. Nor could either rule out the possibility.

The NASA researchers, who were both on a deadline and not allowed to study Toyota's source code, simply ran out of time, noted Barr.

Under court order, a team from the Barr Group was allowed into a specially built "code room" provided by Toyota. They were able to pinpoint at least one anomaly that could have caused Toyota accelerators to build up speed while disabling the brake system. Barr also found numerous Toyota violations of software design standards. Toyota, in many instances, even broke its own rules for safe design and system redundancy.
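One of the redundancy rules in question is the "mirroring" of safety-critical variables, a defensive technique Barr testified Toyota applied inconsistently. Below is a minimal sketch in C of how such mirroring typically works; it is purely illustrative, and every name in it is hypothetical rather than drawn from Toyota's code:

```c
#include <stdint.h>

/* A critical value is stored twice, the second copy bit-inverted, so a
 * single corrupting write (bit flip, stack overflow, wild pointer) is
 * detectable on the next read instead of silently changing behavior. */
typedef struct {
    uint16_t value;   /* e.g., a throttle-angle target (hypothetical) */
    uint16_t mirror;  /* bitwise complement of value                  */
} critical_u16;

static void critical_write(critical_u16 *v, uint16_t new_value)
{
    v->value  = new_value;
    v->mirror = (uint16_t)~new_value;
}

/* Returns 0 and the value if the pair still agrees; nonzero means
 * corruption was detected and the caller should enter a safe state. */
static int critical_read(const critical_u16 *v, uint16_t *out)
{
    if ((uint16_t)(~v->mirror) != v->value)
        return -1;
    *out = v->value;
    return 0;
}
```

The point of the complemented copy is that random memory corruption is vanishingly unlikely to update both halves consistently, so the fault surfaces at the read site, where the software can fail safe.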

Patriot missiles, Therac-25, and others that failed

Many of these rules, and Toyota's subsequent actions, were either buried in corporate secrecy or covered over by corporate denial. "The answer is not to say it can't be the software, stick our heads in the sand," said Barr. If companies like Toyota examined themselves more rigorously, he added, and allowed "less code confidentiality," they wouldn't require as much regulatory scrutiny.

Barr cited past cases of "safety-critical systems" that failed but then were corrected when regulators stepped up their intensity and capabilities. After a series of radiation overexposures -- including two fatalities -- caused by a software glitch in a radiotherapy machine called the Therac-25, the Food and Drug Administration created an in-house team of software engineers to review every electronic medical device before its approval for use on patients.

In the case of the Therac-25, in the case of the Patriot missile battery whose software timing error let an Iraqi Scud kill 28 US troops during the Gulf War, and in Toyota's case, the companies responsible have invariably issued assurances about their exhaustive testing and cited "no other instances of similar damage."

Such assurances disregard the bugs that exist in every complicated system and the harm they can cause. "If you are overconfident of your software in a safety-critical system, that could be deadly," said Barr.
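The Patriot case mentioned above also comes with well-known arithmetic. The figures commonly cited from the GAO report on the Dhahran strike trace the failure to a 24-bit fixed-point representation of 1/10 that lost about 0.000000095 seconds per 0.1-second clock tick. A back-of-the-envelope check in C (the constants are the publicly reported approximations, not official specifications):

```c
#include <stdio.h>

int main(void)
{
    const double err_per_tick = 9.5e-8; /* seconds lost per 0.1 s tick */
    const double uptime_h     = 100.0;  /* battery uptime at Dhahran   */
    const double scud_speed   = 1676.0; /* approximate Scud speed, m/s */

    double ticks = uptime_h * 3600.0 * 10.0; /* ten ticks per second   */
    double drift = err_per_tick * ticks;     /* accumulated clock skew */

    printf("clock drift after %.0f hours: %.2f s\n", uptime_h, drift);
    printf("range-gate error: roughly %.0f m\n", drift * scud_speed);
    return 0;
}
```

A drift of about a third of a second put the tracking gate more than half a kilometer from the incoming Scud, and the system never engaged it.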

Yes, self-driving cars are still far from reality, and safety is a major challenge. But the companies definitely want to invest in this, to stay in the race and to capture the future business. The real challenge lies with governments: making sure these automobiles are genuinely safe and thoroughly tested.

As anyone who has done an MTBF calculation to MIL-HDBK-217 knows, you can make the numbers say what you want. Our current culture of "squeeze every last dollar out" of a business means that safety reviews can easily be skewed by a "severity" or "probability" rating in an FMECA. I have witnessed a similar outcome within my own company, when such a safety analysis (although nothing as severe as a car crash) was discussed and discounted as not enough to warrant a redesign. Better tools for verifying designs will have to be developed.
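To make that skew concrete, here is a toy FMECA risk-priority-number (RPN) calculation in C. The 1-to-10 rating scales and the redesign threshold of 100 are common conventions chosen for illustration, not taken from MIL-HDBK-217 or any particular standard; the point is that a one-point change in the subjective "occurrence" rating flips the decision:

```c
#include <stdio.h>

/* RPN = severity x occurrence x detection, each rated 1-10. */
static int rpn(int severity, int occurrence, int detection)
{
    return severity * occurrence * detection;
}

int main(void)
{
    const int threshold = 100; /* illustrative "redesign required" cutoff */

    /* Same hazard, two reviewers: only the occurrence rating differs. */
    int honest = rpn(9, 3, 5); /* 135 -> redesign */
    int skewed = rpn(9, 2, 5); /*  90 -> accept   */

    printf("honest: RPN=%d -> %s\n", honest,
           honest >= threshold ? "redesign" : "accept");
    printf("skewed: RPN=%d -> %s\n", skewed,
           skewed >= threshold ? "redesign" : "accept");
    return 0;
}
```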

NHTSA doesn't have the budget to hire a crack team of software experts, and Congress isn't about to give it more teeth to enforce standards. OTOH, the EU could take the lead and demand that all automotive software conform to ISO 26262 and that it be certified by an independent testing body such as Germany's TÜV SÜD. That would be a big step forward from having the public be the beta testers and the courts the enforcers.

While you may be able to skew the predictions and rationalize not doing a good job, you can't fool the field results. It's unfortunate that it took lawsuits to get to the bottom of this, but given the analysis by the plaintiffs' experts, it's abundantly clear that Toyota not only screwed up, but didn't have the first clue how to do real-time, let alone safety-critical, software.

Moreover, Toyota really has three HUGE problems to fix. In addition to the software problem, they lacked a proper design-validation program that should have alerted them to the flaw early, and they weren't tracking their field failures and getting to root cause within a day. Very uncharacteristic for a Japanese company, especially Toyota.

Finally, humans are notoriously bad at assessing risk. Even NASA, the world leader in safety-critical systems, blew it with Challenger and Columbia. At least they try (and got in trouble when they cut corners trying). With Toyota's ignorance of how to even DO real-time safety-critical software, is it any surprise that they totally whiffed the risk ANALYSIS?

To me, the takeaway as a consumer is not to avoid the benefits of new auto technology, but to make sure there is a hardwired OFF switch that physically interrupts the power (which Toyota also missed).

"There are two ways of constructing a software design.One way is to make it so simple that there are obviously no deficiencies.And the other way is to make it so complicated that there are no obvious deficiencies."-- C. A. R. Hoare, 1980 Turing award lecture

There are risks in self-driving cars, but as another post here says, humans are bad at assessing risk. They are also not that great at driving cars. A self-driving car fatality would make headlines, but the numerous daily fatalities from drivers on cell phones, falling asleep at the wheel, or otherwise just not paying attention rarely make it above the fold (an obscure reference for those who still remember newspapers). This case illustrates software lowering the net safety of driving, while there is real evidence that self-driving cars would raise it.

Agreed, Larry. The idea that a human in control behind the wheel is inherently "safer" than having an algorithm do the driving is really odd. I'm not saying that truly autonomous cars are realistic right this minute, but surely, in principle, such automation can beat human drivers on consistency, reliability, and consequent safety.

Perhaps the problem is that people think they are more in control of their own driving safety than they really are. Sure, I too think I'm the most expert driver on the road. My problem is all those other unpredictable half-wits around me.

Even in the simplest case, i.e., trains running on deterministic tracks, how many times have we seen recently in the news that the operator dozed off? This is safe?