Can An Engineer Prevent the Unknown?

As automakers roll out autonomous driving features at the Consumer Electronics Show and the Detroit Auto Show this week and next, there's an unspoken question nagging at the fringes of the technology: Will future engineers have to find ways to prove these self-driving features don't cause accidents?

The question is especially relevant now in the wake of the recently settled Toyota unintended acceleration case, in which a 76-year-old woman sped out of control in her 2005 Toyota Camry as she was exiting an Oklahoma highway. The crash injured the woman and killed her passenger. A jury found in favor of the driver, awarding $1.5 million to her and $1.5 million to the family of the passenger.

The disturbing issue at the heart of the case is that we still don't know what caused the accident. Toyota claimed pedal misapplication -- the driver stepped on the accelerator when she thought she was stepping on the brake. The plaintiff's lawyers targeted the electronic throttle, citing testimony from an expert who said that the car's software code was faulty. But with no smoking-gun evidence, Toyota was left with the unenviable task of proving that its questionable software didn't cause the accident.

If you think about it, that's a tough task. Virtually all software-based products have some issues. And powertrain controllers contain hundreds of thousands of lines of software code, all of which can interact with vehicle subsystems in billions of ways -- maybe trillions. To prove something didn’t happen, engineers essentially would have to say, "We know exactly how many possibilities exist. We've tried all 16 trillion of them, and we know it can't happen."
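To make that scale concrete, here is a back-of-the-envelope sketch in C. The number of binary conditions is purely hypothetical, and real interactions are far messier, but it shows how a modest set of independent on/off states multiplies into trillions of combinations -- roughly the order of magnitude cited above.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical count of independent on/off conditions in a controller.
     * Real interactions are far messier; this is order-of-magnitude only. */
    int conditions = 44;
    unsigned long long combos = 1ULL;

    for (int i = 0; i < conditions; i++)
        combos *= 2ULL;            /* each condition doubles the state space */

    printf("%d binary conditions -> %llu combinations\n", conditions, combos);
    return 0;
}
```

Forty-four such conditions alone yield about 17.6 trillion combinations -- and no test program can walk through them all.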

All this is relevant for today's automakers because almost every new car uses an electronic throttle, or throttle by wire, as it's known. It's a key enabler for adaptive cruise control, traction control, electronic stability control, torque blending, cam phasing, cylinder deactivation, and countless other features. "If the issue is throttle by wire, then it's not just a Toyota problem, and it's not just an autonomous vehicle problem," Gregory Shaver, associate professor of mechanical engineering at Purdue University, told us. "If we can't trust the software, then we have to step back and take a look at almost every vehicle we've made in the past 15-plus years."

The problem would be much simpler if we could point a finger at the causes of such accidents and then fix them. But we can't do that. We can only surmise and rely on likely scenarios, leaving Toyota to deal with the same unsettling problem that nearly crushed Audi in the 1980s.

The fact that the electronic throttle is essentially a time-tested technology only adds to the puzzle. "Think about how many years they've been on the road, how many vehicles are driving around with electronic throttles, and how many miles have been logged," Jeremy Worm, PE, director of the Mobile Lab at Michigan Tech University, told us. "Many of these vehicles have gone through entire life cycles. That should tell us that the electronic throttle is a robust technology."

There are some partial solutions. Brake-throttle override, in which brake actuation shuts down a wide-open throttle, should help. And automotive black boxes, which record the driver's actions during an accident, will provide an explanation that's superior to a courtroom debate.
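As a rough illustration of the brake-throttle override idea, the logic amounts to something like the sketch below. The signal names and thresholds are hypothetical, not any automaker's actual calibration; the design choice is simply to let a firm brake application win any argument with the throttle request.

```c
/* Minimal sketch of a brake-throttle override policy of the kind described
 * above. All signal names and thresholds are hypothetical. */
typedef struct {
    double throttle_pct;   /* driver accelerator request, 0-100 */
    double brake_pct;      /* brake pedal application, 0-100    */
    double speed_kph;      /* vehicle speed                     */
} PedalState;

/* Returns the throttle command actually passed on to the engine controller. */
double throttle_command(const PedalState *s)
{
    const double BRAKE_THRESHOLD    = 20.0;  /* firm brake application       */
    const double THROTTLE_THRESHOLD = 30.0;  /* substantial throttle request */
    const double MIN_SPEED_KPH      = 8.0;   /* ignore parking-lot maneuvers */

    /* If the driver is braking firmly while the throttle reads wide open,
     * trust the brake and close the throttle. */
    if (s->brake_pct > BRAKE_THRESHOLD &&
        s->throttle_pct > THROTTLE_THRESHOLD &&
        s->speed_kph > MIN_SPEED_KPH)
        return 0.0;

    return s->throttle_pct;
}
```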

In the end, though, engineers can't think of all the possibilities or test for them. They can conduct all manner of bench tests, failure mode analyses, and road tests for torque security, but they can't be expected to imagine trillions of scenarios. "As an engineer, you're never going to be 100% sure," Worm said. "You can get to a level of comfort after you've done your bench testing and validation and verification, but you can never have 100% confidence."

That's especially true for cases like Toyota's. You can't be expected to test for a failure if you don't know what tripped it. "All you can really do is manage the risks," Shaver said. "That's what engineers do."

Chuck, you bring up a valid and important topic here. There is no way to "prove" everything about a vehicle. Since there are so many vehicles and they are driven so much (in hours and miles), you are likely to run into any error that exists. So we cannot prevent problems. In safety-critical systems, one typically designs in multiple failsafes. This is a complex topic. There are also overrides and safe modes. This is a well-understood area and is applied in the aerospace industry. Even then, it is not perfect.

The flip side is that we have lived for about a century with automobiles. They cause more deaths than just about anything else. We accept that, even though many of the fatalities involve someone just getting from one place to another, often for trivial reasons. Go figure.

There is probably no real solution. The next step is to outline the liability rules and install those black boxes.

I agree with what you are saying naperlou. But in reality, we do NOT accept the risk of death in automobile failures. If we did, Toyota would not be forking over 3 million dollars to two families. Do not get me wrong, if the car is crap and is purposefully sold disregarding safety requirements, then they pay. But as pointed out in this article, throttle by wire is a proven and robust technology and Toyota still has to pay.

Self driving cars? I agree, rules of liability have to be established. But then you would put 99% OF ALL LAWYERS OUT OF A JOB!

You're right, naperlou. You can't prove everything. This is a really complex situation because, as you mention, Toyota cars have driven billions of miles with these electronic throttles. So either you believe that the one-in-a-million error occurred, or you believe that the driver stepped on the wrong pedal. Either way, there's no hard evidence. I just wonder now how the pending cases will be resolved.

TJ, from a purely engineering standpoint I completely agree with instrumenting vehicles with comprehensive data recorders. However, given the revelations of government snooping and the prospect of insurance companies wanting to monitor driving habits, no thanks!

TJ, the more I think about it, I think you are correct. I would certainly want to know if a pilot is flying incorrectly or, heaven forbid, incompetently. So why not the same for drivers and driverless cars? I think if 'accidents' were overwhelmingly shown to be driver error and people had to be held liable for their incompetent driving, then the cost of cars could go down. Automakers could focus on MPG instead of adding controls to correct bad drivers.

Then again, humans have a propensity for 'hiding' their faults and drivers will do the same to the blackbox recorder.

Cameras on board rocket boosters were not commonplace until after the Columbia accident. Now, most launchers have them. Aside from the fact that they provide way-cool images, they can be used forensically.

I haven't decided if it's a good idea or not. Today "free" people are under surveillance much more than anyone behind the Iron Curtain was in the bad old Cold War days.

TJ's comment about turning unknowns into knowns is the way to go. That's why they have black boxes on aircraft. If Toyota had a recording sensor on the accelerator and brake of their cars, they would have an answer to the "driver error" question. As commented earlier, automakers are extremely cost sensitive, so the occasional $3MM lawsuit may be an "acceptable risk" to the accountants vs. the sensor cost. Widespread use of driverless technology may shift that equation to the point where the automaker's liability is high enough to justify the added product cost. Another possibility is legislation which limits liability per case, as is the case currently for air travel (see the fine print on your airline ticket).
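A minimal sketch of the kind of pedal recorder being described might look like the following: a small ring buffer of accelerator and brake positions that freezes when a crash trigger fires, so "driver error or throttle fault" becomes a data question rather than a courtroom debate. The names, sample rate, and trigger are illustrative assumptions, not any production event data recorder.

```c
#define EDR_SAMPLES 500            /* e.g. 5 seconds of history at 100 Hz */

typedef struct {
    float accel_pct;               /* accelerator pedal position */
    float brake_pct;               /* brake pedal position       */
    float speed_kph;               /* vehicle speed              */
} EdrSample;

static EdrSample ring[EDR_SAMPLES];
static int head   = 0;
static int frozen = 0;

/* Called at a fixed rate from the vehicle network. */
void edr_record(EdrSample s)
{
    if (frozen)
        return;                    /* preserve the pre-crash history */
    ring[head] = s;
    head = (head + 1) % EDR_SAMPLES;
}

/* Called on a crash trigger, e.g. an airbag deployment signal. */
void edr_on_crash_trigger(void)
{
    frozen = 1;                    /* snapshot survives for investigators */
}
```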

I don't remember where I read this, but supposedly the reason that Ford got hit so hard in the Pinto lawsuits (would you be surprised if a high-speed rear-end collision caused a fire - it happens in the movies) was that the cost to repair the design flaw would be more expensive than potential lawsuits. Cost-benefit analysis is part of the design process. And actuaries (life insurance) are in the business of putting a value on human lives. I have seen a car commercial where an automatic braking system stops the car while the driver is not paying attention to driving. So there is the potential of a self-driving car to save lives.

Today, more than 30,000 lives a year are lost on our roads, GlennA. The belief is that some day, autonomous cars could bring that down to the hundreds. So, yes, I definitely agree with you that self-driving cars will one day save lives. The question is, will our legal system allow it?

The paperwork that surfaced during the Pinto fire lawsuits showed that Ford made a conscious decision to balance the cost of production against the cost of liabilities. Nothing specific to the Pinto model ever came to light, but documentation exposed a culture that balanced the monetary cost of liabilities against the cost of producing the vehicle.

I guess safety didn't sell back in the Seventies, but hey, now there's a mandate for every contingency.

The thing that irritates me about the "sudden acceleration" cases is that the drivers should have been able to control the cars even if the throttles were stuck wide open. Get on the brakes and STOP immediately (yes, the brakes are more forceful than the engine, but only count on one stop), turn off the ignition while you are stopping (yes, you can still drive without power steering, and you will still have power brakes unless you take your foot off the brakes, which you should not do, and no, the steering will not lock), and shift to neutral while you are stopping. Please practice this.

No matter how well a self-driving car is designed and manufactured, there will be failures and accidents. Having a black box and fail-safe systems will help, but will also add to the cost of the car. Manufacturer liability for accidents will also add to the cost to consumers.

I think I will drive the old-fashioned way, and avoid being surprised by a self-driving car failure. Yes, there will be failures.

Amen to that, Critic! Learn to safely shut down your vehicle if it goes out of control. When a car is speeding down the freeway because the throttle is stuck wide open, even if the car is at fault, it becomes a driver problem if, after several hundred feet, the driver cannot shut down the vehicle!

Another example is manual stick shift cars and trucks. My teenage son has a 1981 F150 with a manual transmission. I have shown him and trained him on how to respond if he pushes in the clutch and it does not disengage. Brake hard and throw the shifter into neutral ASAP! Then safely coast to a safe stop (or push the truck to a safe spot). He has even demonstrated this to me so I can be sure he is aware of what to do.

GTO, the understanding of how to shut down a runaway engine is indeed a potentially lifesaving thing. And having a runaway engine overspeed destruct upon shifting to neutral may be the lesser of the evils, but it is a very expensive one, since overspeed-induced failures are seldom minor.

I have had stuck throttles a few times and switching off the ignition has always been the first step to recovery. The HUGE problem, which I have pointed out before in other discussions, is the cars that no longer have a way to switch off the engine. Instead, they have a big button that sends a shut-off request to the controls computer, and does not include a way to force the shutdown.

There is no rational reason for allowing such cars on public roadways.

A simple on/off switch that would disable the ignition system entirely independent of the control computers would solve the problem. It would also be able to provide an additional child-proofing safety function, which is how it could be marketed.

For all of my career in designing industrial control systems, there has been a requirement for an "Emergency Stop" function that must be independent of all control software and logic. That requirement sits right next to the specification of the machine's functions, and for very good reasons. Just like any other computer-based system, if a failure has caused an unwanted type of operation, the failed control system cannot be expected to respond to any command as required. That is why the big red button provides the non-maskable hardware shutdown: because if part of a system has failed, other parts may also have failed.

Most modern fuel-injected engines have an overspeed shut-down that interrupts the spark, the fuel, or both to prevent an engine from overspeed destruction; it may sound scary, but it does work. Selecting neutral and standing on the brakes should always work. Like Asimov's three laws of robotics, a hierarchy of operation needs to be established that only allows safe operation, with different failures invoking different levels of over-ride, up to stopping immediately. There will never be a fail-safe autonomous vehicle until the definition of "fail-safe" has been established and agreed upon by all developers. Once those utterly defensible parameters have been incorporated, autonomy may be added. The rub is there will never, ever exist a definition of "fail-safe" that can't be successfully challenged by lawyers. The technology may be perfect, but the written word will always be open to interpretation. "Thou shalt not kill" seems to exemplify a simple, clear sentence, yet somehow we manage to re-interpret the meaning on occasion.
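A rough sketch of such a tiered overspeed response is shown below. The RPM limits and the three-level hierarchy are hypothetical, not any manufacturer's calibration; the point is only that more severe conditions invoke more severe over-rides.

```c
/* Sketch of a tiered overspeed response like the one described above.
 * Limits and tiers are hypothetical. */
typedef enum { RUN_NORMAL, CUT_FUEL, CUT_FUEL_AND_SPARK } EngineAction;

EngineAction overspeed_policy(int engine_rpm)
{
    const int SOFT_LIMIT_RPM = 6200;   /* start limiting here              */
    const int HARD_LIMIT_RPM = 6800;   /* protect against self-destruction */

    if (engine_rpm >= HARD_LIMIT_RPM)
        return CUT_FUEL_AND_SPARK;     /* most severe response: stop combustion      */
    if (engine_rpm >= SOFT_LIMIT_RPM)
        return CUT_FUEL;               /* milder response: let the engine coast down */
    return RUN_NORMAL;
}
```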

Bob, I wonder if some of these cars would allow shifting into neutral. If the shifting is controlled by the same computer that has failed and locked the throttle open, then possibly not. And I know that at least a few transmissions are entirely controlled by electronics, although I think that they may have a mechanical link for the "park" locking function. And using the brakes can get interesting when the engine won't slow down. Quite a few years ago I drove a lab car about 50 miles after the idle speed cam control system froze, and the "idle" would run about 78 mph. The day was bitter cold and it was before cell phones, so the choice was sit and freeze or drive and heat up the brakes. They were quite hot by the time I got back. And even with good power brakes, slowing a vehicle with the engine running hard is not easy.

You're right, critic, there will be failures. Watching my car struggle through the recent deep freeze, with mechanical parts locked up by sub-zero temperatures and snow, I wondered how good those autonomous vehicles will be when they face bad weather and aging parts. Will they know the headlights are blocked by ice and snow? Will the camera-based sensors be able to see under those conditions? And, if not, will they know they can't see? Vehicle intelligence will be built up by years of experience and, yes, failures.

Even as a competent design engineer, I firmly believe there is no better safeguard against accidents than an experienced, skilled operator. It's just sad that people (in general) expect everyone else to protect them, and take no personal responsibility in the fact that they perhaps don't belong behind the wheel of a car.

This mentality has forced all automakers to include countless so-called 'safety' features, in effort to appease the unqualified demands of the public.

If you think about it, if we lived in a world where this was not so important, there would exist a natural-selection process which would help keep roads safer, merely by thinning the herd.

Excellent post, Charles. One factor that contributes to the unknown is the condition of the car AFTER maintenance has been performed. I think we all have had problems resulting from maintenance that might have fixed one problem but created another. Then it becomes "he said--she said". Is the fault basic engineering, or issues introduced AFTER customary work performed during the life of the vehicle? I really don't know how engineers can prepare for outcomes such as this. I have been part of FMEA (failure mode and effects analysis) exercises, and sometimes the possible number of failure modes is truly astounding. Add to that customer interaction and maintenance, and you have to be a prophet to understand all of the possibilities.
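For readers who haven't sat through an FMEA, a tiny illustration of the bookkeeping: each failure mode gets 1-10 scores for severity, occurrence, and detection, and their product (the risk priority number) ranks what to address first. The failure modes and scores below are invented for illustration, including the maintenance-induced ones the comment alludes to.

```c
#include <stdio.h>

/* Illustrative FMEA scoring; the modes and numbers are made up. */
typedef struct {
    const char *mode;
    int severity, occurrence, detection;
} FailureMode;

int main(void)
{
    FailureMode fm[] = {
        { "Throttle sensor connector left loose after service", 8, 3, 6 },
        { "Wiring harness chafed by road debris",               7, 4, 5 },
        { "Wrong replacement part installed at repair shop",    9, 2, 7 },
    };

    for (int i = 0; i < 3; i++) {
        int rpn = fm[i].severity * fm[i].occurrence * fm[i].detection;
        printf("RPN %3d  %s\n", rpn, fm[i].mode);
    }
    return 0;
}
```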

Yes, an engineer would have to be a prophet to consider all possibilities, bobjengr, and therein lies the problem. A class action suit resulting from one of those unforeseen problems can practically crush a company. It almost did in Audi's case, and we still don't know for sure what the cause was there.

Building the safety case for a software-controlled weapon system, we are required to prove that the probability of a hazardous incident is less than one in a million. The only way to do this is by analysis, supplemented with tests. The system has to be partitioned and designed from the beginning to support the safety case. In the end it does not matter what fault, or rather what faults in combination, lead to a catastrophic event, so all possibilities must be accounted for over the life of the system.
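Purely illustrative arithmetic, with hypothetical failure rates: one common way such a safety case gets below the one-in-a-million target is to partition the system so a hazard requires two independent faults, whose probabilities multiply -- provided the independence claim itself can be defended by analysis.

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical per-demand failure probabilities for two independent,
     * partitioned channels. Demonstrating independence is the hard part
     * of the safety case. */
    double p_channel_a = 2.0e-4;
    double p_channel_b = 3.0e-4;
    double p_hazard    = p_channel_a * p_channel_b;   /* both must fail */

    printf("Combined hazard probability: %.1e (target < 1.0e-6)\n", p_hazard);
    return 0;
}
```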

What is alarming is how complex software is getting - and, I think, unnecessarily. It hasn't helped reliability. As an example, I rent dozens of vehicles a year, and in spite of the fact that none had more than a few thousand miles on them, I've had two rentals where the throttle and transmission simply stopped working when I was backing up - one in the desert, the other in the snow. Both times required shutting off the ignition to get them working again. In a combined 700,000 miles/65 years on my old personal vehicles, under worse environmental conditions than I've subjected any rental to, I've never once had an issue with the transmission (OK - except for leaking seals, and being frozen solid at -45F). I've also had two rentals suddenly go to full throttle for no reason (when cruise control was engaged) - fortunately both were on interstates with no traffic around me. Not hazardous, but definitely irritating.

And mind you, what will happen when these complex systems are subjected not just to unusual environments, including EMI, but to deliberate malicious attack - say, a bunch of teens who get their jollies out of watching drivers' reactions when they cause a vehicle to accelerate just before a red light?

Everything needs to be designed to be two-fault tolerant and self-diagnosing.

You don't make a single-channel throttle pedal, and you don't make a two-channel throttle pedal where the A channel and the B channel send the same voltage to show the same position, because a sneak circuit reduces you to one channel with no visible detection.

No, you design it so the A channel sends 0-6 volts and the B channel sends 13-18 volts. That way, if you get 14 volts on the A channel, you know there is a sneak circuit, and if you get 5 volts on the B channel, you also know. During startup, the system checks itself out by running on a single channel. Now it takes two failed channels to kill the throttle.

You put in things like a hard stop button that kills the driveline.

And as Charles points out, you have telemetry and record everything: wheel inputs, ...
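A minimal sketch of that plausibility check might look like the following. The voltage windows follow the comment above; the status names and the single-channel limp-home behavior are assumptions for illustration, not a production design.

```c
#include <stdbool.h>

typedef enum {
    THROTTLE_OK,                    /* both channels plausible                 */
    THROTTLE_LIMP_SINGLE_CHANNEL,   /* one failure: degrade and warn the driver */
    THROTTLE_SHUTDOWN               /* two failed channels kill the throttle   */
} ThrottleStatus;

/* The two sensors deliberately live in non-overlapping voltage ranges. */
static bool a_in_range(double v) { return v >= 0.0  && v <= 6.0;  }
static bool b_in_range(double v) { return v >= 13.0 && v <= 18.0; }

ThrottleStatus check_pedal(double v_a, double v_b)
{
    bool a_ok = a_in_range(v_a);    /* 14 V on channel A exposes a sneak circuit */
    bool b_ok = b_in_range(v_b);    /*  5 V on channel B exposes a sneak circuit */

    if (a_ok && b_ok)
        return THROTTLE_OK;
    if (a_ok || b_ok)
        return THROTTLE_LIMP_SINGLE_CHANNEL;
    return THROTTLE_SHUTDOWN;
}
```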
