
New submitter robertchin writes "Michael Barr recently testified in the Bookout v. Toyota Motor Corp lawsuit that the likely cause of unintended acceleration in the Toyota Camry was a stack overflow. Recursion overwrote critical data past the end of the stack and into the real-time operating system's memory area, leaving the throttle in an open state after the task that controlled it was terminated. How can users protect themselves from sometimes life-endangering software bugs?"

Technically knowledgeable people often give very poor names to their efforts.

I thought "Stack Overflow" was great branding for a website aimed at helping programmers solve technical problems. It's a distinctive, cheeky in-reference understood by its intended audience. (And honestly, it didn't hurt that most developers enjoy being made to feel clever about themselves.) That's what a brand is supposed to do [about.com], and it partially* explains their overwhelming success [alexa.com]. And hey, it's a much better branding choice than ExpertSexChange.com!

*Of course, branding is just one of many things they did right. They also filled a unique niche, understood their community (because it was started by programmers, for programmers), and made the site super-easy to use by (here comes the important part...) NOT crapping over the UI with a fake paywall that sought rent for years' worth of good-faith user contributions. However, they are sort of starting to be dicks about subjective questions (such as help with API choices, etc.). That may provide a niche for a new competitor to fill...

Division by zero results in positive or negative infinity, depending upon the sign of the numerator.

False. Division by zero is simply undefined. Sometimes you can determine the limit of some function as it approaches a discontinuity due to division by zero, but the limit will depend on the function (not just the numerator) and possibly on which side you approach the discontinuity from. For example, the function x/x has a discontinuity at x=0, but the limit isn't infinity, it's 1. The function 1/x approaches positive infinity as x goes to zero from the right, and negative infinity as x goes to zero from the left; since these are not equal, the limit does not exist at x=0. On the other hand, the limit of 1/(x*x) at x=0 is in fact positive infinity, because the function approaches positive infinity from both sides.

The problem is that infinity isn't a number you can calculate with; it's more of a trend, or a way to express the absence of a finite boundary. An expression is never "equal to" infinity, though it may approach infinity as some other condition changes.
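For what it's worth, the claim above is not entirely baseless in one narrow sense: IEEE 754 floating-point arithmetic (which C doubles typically follow) does define results for division by zero, even though real-number arithmetic does not. A small sketch illustrating both points (function names are mine, for illustration only):

```c
#include <math.h>

/* IEEE 754 gives defined answers for division by zero even though
 * real-number arithmetic does not: nonzero/0.0 yields a signed
 * infinity and 0.0/0.0 yields NaN. The volatile keeps the compiler
 * from folding the division away at compile time. */
static volatile double zero = 0.0;

double div_pos(void) { return  1.0 / zero; }  /* +infinity */
double div_neg(void) { return -1.0 / zero; }  /* -infinity */
double div_nan(void) { return  0.0 / zero; }  /* NaN */

/* Numerically probing the limits discussed above: x/x stays at 1,
 * while 1/x diverges with a sign that depends on the side you
 * approach from. */
double x_over_x(double x)   { return x / x; }
double one_over_x(double x) { return 1.0 / x; }
```

So the floating-point hardware picks conventional answers, but mathematically the parent is right: the limit depends on the function, not just the numerator.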

Amish buggies typically don't have software throttle failures. Runaway horses can be an issue... and actually having to share the road with dipshit drivers who don't realize how many slow-moving vehicles (not just buggies) are out in farm country is a real danger.

Seriously, software has bugs. Mechanical throttle linkages can stick, too. Life has risks.

Coming from the aerospace industry: you cannot have software that has bugs. And if there is the possibility of a software bug, you have to prove that you can mitigate its effect in hardware. So just saying "software has bugs... life has risks" isn't an acceptable answer (in my opinion). We have to remember this is not an apples-to-apples comparison. Just because traditional consumer software always has bugs in it (which are acceptable there) doesn't mean they are acceptable in other industries. Considering that the failure puts someone's life at risk, I would think it should be considered unacceptable in the automotive industry as well.

Coming from the aerospace industry, you cannot have software that has bugs. And if there was the possibility of a software bug, you have to prove that you can mitigate the effect in hardware.

If you want your cars to be as expensive as a 747, then you can attain that goal. I used to work in the automotive industry designing embedded software for engine management systems. At that time, no automotive company would pay more than $100 for the Engine Control Unit. Probably 60% of the code was written to manage failures (both software and hardware), and there were other electronic fail safe mechanisms. But you can't mitigate every possible failure event without introducing costs that would have made the unit orders of magnitude more expensive.

I'm not so sure cost is the biggest problem. Once upon a time ABS was an exotic tech used only on aircraft. I was impressed when they became a car option.

My biggest concern is whether a driverless car can be smart enough to reliably handle all the situations that arise. Probably someday, but I don't know when. Google's over-hyped toys can only handle clear weather, and even then they often have to kick out and go to manual control.

Once upon a time ABS was an exotic tech used only on aircraft. I was impressed when they became a car option.

In other words, ABS software (and hardware) was very expensive to develop, and only the development budgets of airliners were large enough to cover it. Once it existed, the companies that developed it realized that adapting it for automotive use would be within the budget of luxury car makers, and once it was working there, it became very cheap to adapt for standard cars, since the differences are minor. In effect, the luxury car makers fund continued improvements, and standard cars get the previous generation whose development has already been paid for.

People can't reliably handle all the driving situations that arise.
The actual target for driverless cars should be something more like handling situations at the 95th percentile of human drivers. By the time they are as good as the very best human drivers, that will be more than enough, and at that point it probably won't be much longer until they're at the 99.999th-percentile level.

Even in the aerospace industry, there are software bugs. Very few, yes, because a lot more time and money is spent to track them down. There are mechanical failures too, despite the best engineering efforts. But anything we build has the potential to be flawed somewhere in the process. That's why we still put highly trained pilots in the things, for when something goes wrong - and even then, human error causes unintended faults, sometimes catastrophically.

If a car cost as much as a jet, and drivers went through as much training as an airline pilot -- and had to have a co-driver at all times -- we'd be far safer on the roads. After all, the vast majority of car crashes are driver error. And failure modes at 30 mph on wheels are a lot more forgiving on the whole than at 30,000 feet. But if we built cars, and held drivers, to the standards of aerospace engineering, only the rich could afford to drive.

There's a trade-off between acceptable risk and cost. Even though the designs get safer every year, maybe we allow too much risk in drivers and their cars. But absolutely flaw-free cars? Impossible.

You're not supposed to, but you do routinely have software that has bugs even in aerospace, because there is no development process that can guarantee the prevention of 100% of defects, nor even guarantee that 100% of defects are detected and corrected.

Coming from a fault-tolerant computing background: your non-trivial software will have bugs. There is a reason N-version programming [wikipedia.org] exists (and it's not for fun).

Unless you have true N-version programming, with totally separate teams writing on different platforms in different languages, with different academic backgrounds, different cultural paradigms, etc., you don't have perfectly reliable software. (And even then your application may lend itself to certain semantic commonalities.)
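For the curious: the voting core of an N-version setup is trivial; the hard (and expensive) part is producing the N genuinely independent implementations. A hypothetical 2-out-of-3 voter, with a fail-safe fallback for total disagreement (names and the fallback value are invented for illustration):

```c
#include <stdint.h>

/* 2-out-of-3 voting over three independently developed
 * implementations of the same computation. If at least two versions
 * agree, their answer wins; on total disagreement, fall back to a
 * known-safe value (here: throttle closed). */
#define SAFE_FALLBACK 0

int32_t vote_2oo3(int32_t a, int32_t b, int32_t c)
{
    if (a == b || a == c) return a;   /* a agrees with someone */
    if (b == c)           return b;   /* a is the odd one out  */
    return SAFE_FALLBACK;             /* total disagreement    */
}
```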

Killing the ignition also means killing power steering and power braking. There is a quite widespread belief that it can also engage the steering wheel lock, but AFAIK no one has been able to name a car where that happens so far. The next challenge is that in many modern cars the ignition switch is just a button which is handled in... software. You could throw the key out of the window and wait for the anti-theft device to kill the fuel supply, but that does not seem like something that most people would try.

In most cars you can put the gear box in neutral. The car will likely have a rev limiter (possibly in software, but it might still work). Worst case the engine breaks, but in almost all cases that would not be fatal to the people in the car.

However, in almost all cars, when not going down a steep hill, the brakes are actually more powerful than the engine. Just do not let off the brakes once you get the car slowed down and think things are under control -- if you do, and then have to brake again at speed, the brakes overheat and you end up with a stuck accelerator combined with no brakes. That has killed at least one driver already.

How would mandatory publication of all code as open source [not suggesting a liberal license here] work out here? It might converge into a collaborative initiative, and the code would most likely be reviewed by all sorts of people.

One way would be to insist that automakers not nickel-and-dime the design of their vehicles. The critical components related to vehicle safety should be designed for safety first, cost second.

These vehicles go for over $20,000; I should at least have the option to pay an extra $1,000 to chuck the electronic crap.

The electronics are very deeply embedded. Not sure how you're gonna dump them when there's no physical cable connecting your throttle to your engine. Impossible in the case of EVs or hybrids.

Also although the article does a decent job of showing that a stack overflow is possible and might result in unexpected behavior, what's needed is a simulated failure scenario to see if that's what actually happens.

You'd have to restrict that to old motorcycles. My '98 has ABS and fuel injection, both of which use programmed electronics. Newer bikes include systems such as CAN bus, traction control, fly-by-wire throttles and more. Except for things like air bags, seat belts and bumpers, motorcycles use a lot of the technology found in automobiles.

Does it have an automatic transmission, and if not does it have clutch by wire?

Automatic transmissions are common (perhaps universal) on scooters and have been used on motorcycles in the past. The newest BMWs have the ability to shift without pulling the clutch lever or reducing the throttle. From http://www.motorcycledaily.com... [motorcycledaily.com]: "The BMW Gear Shift Assistant Pro, available as a factory option, represents a world first for production motorcycle manufacture. It enables upshifts and downshifts to be made without operation of the clutch or throttle valve in the proper load and rpm speed ranges while riding."

Idiot drivers hit the gas pedal instead of the brake, and instead of owning up to their incompetence as drivers, they blame the car. The Toyota sudden-acceleration problem disproportionately affected the elderly and inexperienced drivers. It was also a uniquely American problem, and it occurred during a deep recession when GM and Chrysler were going bankrupt and Americans needed some FUD against Toyota, because supporting American car companies was the jingoism of the day. The Toyota sudden acceleration is more a case study of an American moral panic and mass hysteria perpetrated by the media than it was an engineering problem.

I think it was actually two problems. There was the problem reported by Wozniak, where cruise control would start to accelerate, but tapping on the brakes would fix the problem. It was a bug, but it wasn't life-threatening at all.

Then there is the problem of the car going wildly out of control, unable to stop even when the brakes are applied. That one seems to be a case of foolish driver syndrome.

Actually, the best analysis of the situation I have seen came from someone who had investigated a similar problem reported some years back for cars with purely mechanical accelerator linkages. They had studied that situation and discovered that the overwhelming majority of those experiencing the problem were in a particular age range (I forget now if it was 55-65, or somewhat older). Further analysis revealed that most people go through a kinesthesia change during that period (kinesthesia is the awareness of where a body part is, based on the sensations you are receiving from it). While going through that change, people often believe that their feet are positioned a few inches from where they actually are. As a result, drivers in this age range are likely to be positive that their foot is on the brake when it is actually on the accelerator. The interesting thing is that once the person becomes used to the change, they are perfectly capable of being aware of where their foot is once more.

The person who did that original analysis analyzed the Toyota data and found that the majority of reported cases involved drivers in the same age range. Once he took out the data points that looked suspicious as to being part of this actual problem (drivers who looked to be cashing in on the publicity, either for money or to explain away their own bad behavior; accidents where no one in the vehicle survived but this was used to explain irrational behavior on the part of the driver; etc.), the overwhelming majority of cases were in this age range, and most of the rest were inexperienced drivers.

Driver error. When you're stopped you're supposed to keep the brake down (and the wheels straight). The usual reason is in case someone rear ends you. My car even helps out: if you press the brake all the way down it holds it on until you release it completely.

Sure, most people like to do the creeping forward thing. Most people are poor drivers.

Honestly, not much, except perhaps demand better software. Better processes, better languages. I'm just hypothesizing here, but it might not have happened if they had, e.g., followed better development standards like the MISRA C standard, or not used C at all and used Ada or something. Better QA processes might have caught it before it went into production, e.g. a dynamic stack profiling tool, input fuzzing, whatever. Fundamentally, a system like this should have an independent hardware watchdog timer to at least try to make it fail safe in the event of a CPU crash. Finally, any motor vehicle ought to have a manual cutoff switch wired into the fuel pump or ignition circuit so that when the CPU shits its bits you can still turn the damn thing off before you crash.
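A minimal software model of what such a watchdog does (all names and numbers here are invented for illustration; a real watchdog is a separate hardware peripheral with its own clock):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified model of an independent hardware watchdog. The control
 * loop must call wdg_kick() at least once every WDG_TIMEOUT_TICKS;
 * a hung or wedged CPU stops kicking, the counter runs out, and the
 * watchdog forces a reset into a fail-safe state. */
#define WDG_TIMEOUT_TICKS 10

static uint32_t wdg_counter = 0;
static bool     wdg_reset_fired = false;

void wdg_kick(void) { wdg_counter = 0; }

/* Driven by a timer independent of the main loop. */
void wdg_tick(void)
{
    if (++wdg_counter > WDG_TIMEOUT_TICKS) {
        wdg_reset_fired = true;   /* hardware would pull the reset line */
        wdg_counter = 0;
    }
}

bool wdg_has_fired(void) { return wdg_reset_fired; }
```

A healthy loop that kicks regularly never trips it; a task that dies (as alleged in the testimony) stops kicking and gets the system reset instead of a stuck throttle.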

"We've demonstrated how as little as *a single bit flip* can cause the driver to lose control of the engine speed in real cars due to software malfunction that is not reliably detected by any fail-safe," Michael Barr, CTO and co-founder of Barr Group, told us in an exclusive interview. Barr served as an expert witness in this case.

Emphasis mine.

Yes, a single bit flip can cause unpredictable behavior in any code. You could say that without any analysis. A single sign mistake can get you a goose egg on the algebra paper. So many people could have won the lottery if only one digit were different. These are well known. But can != did. Did that stack overflow actually happen? That is the question.

From an engineering standpoint, that's not really the question: if there is a design flaw such that the system can fail with non-negligible probability, it will eventually fail. Bits flip every day, everywhere, but there should be mitigation in place to take care of that (at least a watchdog).

A watchdog isn't the best solution to this problem. You don't really want your ECU to reboot randomly due to a fixable error. Just use ECC RAM and keep redundant copies of critical values in memory.
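A common shape for the "redundant copies" idea on parts without ECC is to pair every critical value with its bitwise complement and verify the pair on every read; a hypothetical sketch (names invented):

```c
#include <stdbool.h>
#include <stdint.h>

/* Keep each critical value together with its bitwise complement and
 * verify the pair on every read. A flipped bit in either copy makes
 * the pair inconsistent, so the corruption is detected before the
 * value is trusted. */
typedef struct {
    uint32_t value;
    uint32_t value_inv;   /* always ~value when intact */
} guarded_u32;

void guarded_write(guarded_u32 *g, uint32_t v)
{
    g->value = v;
    g->value_inv = ~v;
}

/* Returns true and stores the value in *out only if the pair is intact. */
bool guarded_read(const guarded_u32 *g, uint32_t *out)
{
    if (g->value != (uint32_t)~g->value_inv)
        return false;             /* corruption detected */
    *out = g->value;
    return true;
}
```

Detection alone isn't recovery, of course; the caller still has to decide whether to re-derive the value, fall back to a safe default, or reset.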

There is no way this issue could have caused the number of events people claim to have had. Such a bit flip has never been observed in real life (they used a debugger to simulate it), and the chances of that particular bit, out of millions, corrupting are extremely low. If bit flips were common enough to cause this, we would see the effects as other systems like the dashboard display and head unit crashed, not to mention other parts of the ECU.

Quite simply, absolute control should not be handed over to the computer. Doing something like pulling the handbrake should physically cut the throttle. Or stomping on the brakes should activate a simple solenoid that cuts the throttle. This mechanism should be 100% separate from the computer and override most computer outputs.

I see this as critical in a driverless car. There needs to be a way for people to pull the plug, and there needs to be a way for people to phone in an emergency. So if someone is lying in a pothole being run over by car after car, or the bridge is failing, there needs to be a way for 911 to say that a stretch of road is now cut off. The key is that this cannot be abusable by officials. I do not want my car grinding to a halt because the police are looking for some runaway or a bank was robbed.

I have actually had to do this before. Had a ~2002 A8L that would go to full throttle on its own, and yes, the brakes are more powerful than the engine. We spent months going around with Audi on the issue before, at some point, we took a regional manager on a ride and it did it to him. And no, it wasn't the fucking floor mat. They took it back without a word and gave us a newer model with 20k fewer miles.
The important part to note here is that stepping on the brakes will still stop the car.

This will happen, and it will make national/international news, and there will be a bunch of asswipes all going, "I told you so, these automated cars were going to be the death of us all." But this will be in the face of driverless cars killing so few people that it might be single digits nationwide.

Oh, and I forgot to mention all the "experts" they will get on the news who will try to turn driverless-car safety into an issue, when in fact any changes they propose will probably kill even more people.

What you're asking for is basically an emergency stop. The problem is that in some cases this can be dangerous as well. What if there's a truck 30 feet behind you, and you suddenly activate the emergency stop by accident (or through an inherent fault)? Safety is complex, and I'm not sure an emergency stop is a good idea here, as it introduces its own problems.

... you know... i worked for pi technology in milton, cambridge, in 1993. they're a good company. they write automotive control systems, engine management software, vehicle monitoring and diagnosis systems, and so on. one of the things i learned was that coding standards for mission-critical systems such as engine control have to be very, very specific and very, very strict. the core rules were simple:

1) absolutely no recursion. it could lead to stack overflows.
2) absolutely no local variables. it could lead to stack overflows.
3) absolutely no use of malloc or free. it could lead to stack overflows.
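rule 1 is also the easiest to live with: any bounded recursion can be rewritten as a loop whose stack use is fixed and known at compile time. a toy illustration (hypothetical, not from any real ECU):

```c
#include <stdint.h>

/* recursive version: stack use grows with n, which is exactly what
 * the rule above forbids */
uint32_t sum_to_recursive(uint32_t n)
{
    return (n == 0) ? 0 : n + sum_to_recursive(n - 1);
}

/* iterative version: one fixed-size stack frame no matter what n is */
uint32_t sum_to_iterative(uint32_t n)
{
    uint32_t acc = 0;
    for (uint32_t i = 1; i <= n; i++)
        acc += i;
    return acc;
}
```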

now you're telling me that there are actually car manufacturers that cannot be bothered to follow even the simplest and most obvious of safety rules for development of mission-critical software, on which peoples' lives depend? that's so basic that to not adhere to those blindingly-obvious rules sounds very much like criminal negligence.

In which case you shouldn't have function calls or interrupts either. I know recursion and dynamic allocation are to be avoided, but even in the highest rel standards I've never heard of not being able to use local variables. Careful stack use analysis and testing, sure.

Perhaps it should be rule number one, but actually it's Rule 16.2 of MISRA-C:2004 (Motor Industry Software Reliability Association, Guidelines for the use of the C language in critical systems):

Functions shall not call themselves, either directly or indirectly.

The rule actually appeared first in MISRA-C:1998. Each rule is accompanied by a detailed rationale that I will not reproduce verbatim here, as the standard is not open; one must pay for the privilege. The rationale for 16.2 is that recursion may cause stack overflows. I only cite the rule itself because it appears in public testimony and also on the (first) page linked by this story... which you obviously did not read.

Because MISRA also disallows constructs such as function call indirection, self modifying code, etc. a compiler is entirely capable of detecting recursion and reporting the violation as an error. MISRA compliant compilers do exactly that.
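For illustration, here is the kind of indirect recursion Rule 16.2 also forbids: neither function calls itself, yet together they form a cycle in the call graph, which is why a compliant checker analyzes the whole call graph rather than individual functions. (This toy pair is mine, not from the standard.)

```c
#include <stdbool.h>

bool is_odd(unsigned n);   /* forward declaration closes the cycle */

/* is_even calls is_odd, which calls is_even: a call-graph cycle
 * with no function directly calling itself. */
bool is_even(unsigned n)
{
    return (n == 0) ? true : is_odd(n - 1);
}

bool is_odd(unsigned n)
{
    return (n == 0) ? false : is_even(n - 1);
}
```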

Yes Virginia, the largest auto manufacturer on Earth ignores the very thing that was designed to prevent simple, common, easily predictable failures such as stack overflow despite the fact that the cost of compliance is much, much smaller than a rounding error for an outfit like Toyota.

Also, despite the fact that industry dutifully identified this specific problem in a published standard at least 16 years ago, compliance is apparently still not a requirement from government regulators. I suspect they're too busy investigating child-seat manufacturers or Tesla batteries or whatever other politically high-profile crisis that giant, engineer-free gaggle of NTSB lawyers fills its banker's hours with.

Sorry. I work in automotive embedded systems (although not personally on the safety-critical parts, we use the same parts and rules), and I can tell you rules like yours were behind this.

"We are following all these rules, so we are safe. We can save a penny per 1000 units sold by using a crappy MMU-less CPU."

First of all, following the stupid rules requires you to use baroque lint imitations that go off on every line of idiomatic C. You need a paper trail to justify every line of code. Seems about right -- people's lives are in danger, right?

Now consider that the controller system is hundreds of thousands of lines of code (for us it's more like millions). Most of that is crap boilerplate required by the standards. This means that if you follow the methodology strictly, you need hundreds of people going through mind-numbing lists of "You are not using this argument" and "This code assigning an argument to itself does nothing". Given that most software developers are inept and overworked, I can give you a certificate that there will be bugs.

It took me two weeks with the code to find a checksum function, used all over the place, that had been "fixed" to detect offset data after some earlier corruption bug went undetected.

For every 256 bytes "checksummed", a bit of the input would be left unaccounted for (and the function was actually used for data several times larger than that). I know for a fact that it had to go through at least three source and design reviews, and at least one more design review with some fat managers higher up.

Now tell me you feel safe.

Note to PHBs: Googling up a fucking working CRC and getting a CS PhD to write a formal proof that it works as intended would have cost far less.
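For reference, a textbook bitwise CRC-8 is only a few lines (the polynomial and parameters here are the common CRC-8 ones, shown purely for illustration). Unlike the broken checksum described above, a CRC detects every single-bit error in the message, with no per-256-byte blind spots:

```c
#include <stddef.h>
#include <stdint.h>

/* Textbook bitwise CRC-8: polynomial 0x07 (x^8 + x^2 + x + 1),
 * initial value 0x00, no bit reflection, no final XOR. Any CRC whose
 * generator polynomial has more than one term detects all
 * single-bit errors, regardless of message length. */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (uint8_t)((crc & 0x80) ? ((crc << 1) ^ 0x07)
                                         : (crc << 1));
    }
    return crc;
}
```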

Also, you see, the crappy CPU vendor's stack-measuring tools, which the rules say we must use to guarantee safety, don't account for function pointers (they do show scary icons for recursive functions). They say foo(384), bar(uhhm... maybe 0?). I know to look for that when I add calls through function pointers, but I guess most people don't.

Now you add another rule. LOZRA 4092: You can't use function pointers at all.

Make my life more miserable, give the remaining work I will be unable to do to Dave, the monstera plant, or someone with the same programming aptitude.

I will give the crappy CPU/compiler/RTOS vendors that should be sued some free advice:

0- Add an MPU

1- Add canaries to every function call with any local variables at all (here it's not hackers, it's programmers following LOZRA 396: cast the shit off everything so the compiler can't tell)

2- Add stack overflow canaries on every task switch. (add an MPU and align to page in the stack growing direction)

3- Add canaries to every memory pool allocation. (Add MPU dead pages -- you don't need RAM, just fucking address space, of which you are using like 2%)

4- If any of the above traps, jump to a customer-defined function (stored in ROM that can only be physically modified by outside hardware) that puts all vital hardware in a safe state, adds a record to the black box, and resets the whole thing from scratch.

5- Forget about tasks and threads and move on to processes running in separate address spaces. If information must flow from A to B, it had better go through accepted channels.
6- Did I tell you to add a fucking MPU!?
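Item 2 in miniature (layout and names invented for illustration): a sentinel word adjacent to each task's stack region, checked before the switched-in task's state is trusted. A real implementation would place the sentinel at the end the stack grows toward.

```c
#include <stdbool.h>
#include <stdint.h>

/* A known magic word sits next to the task's stack region. If the
 * stack ever grows (or anything writes) past its limit, the sentinel
 * gets clobbered, and the check at the next task switch catches it
 * before corrupted state is trusted. */
#define STACK_WORDS 64
#define CANARY      0xDEADC0DEu

typedef struct {
    uint32_t canary;              /* sentinel guarding the stack region */
    uint32_t stack[STACK_WORDS];
} task_stack;

void stack_init(task_stack *s)
{
    s->canary = CANARY;
}

/* Called by the scheduler on every task switch. */
bool stack_intact(const task_stack *s)
{
    return s->canary == CANARY;
}
```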

:) Which is why we had to spend the first 6 months after hiring any new grad retraining them in development techniques that actually work in our embedded, near-real-time, real-world situations. I still have no idea why colleges convince their graduates that they actually know anything. College is an opportunity to learn how to think and how to learn, not to learn what's needed to be an instant star. We all have to keep learning our whole lives to stay on top of the technology.

Most embedded code is terrible, and the programmers themselves are completely full of themselves.

Certainly most embedded code is terrible. I've worked on systems from the smallest to the largest (4K ROM + 128 bytes RAM, up to Google-scale) and it turns out that MOST code is terrible. There was one embedded project I worked on where most of the code was a horrible mess... and then there was this one little module that was beautiful. And not by application of any rules like "no function pointers"; no, it was

For anyone else who's curious: at first I thought it was doublespeak, so as not to sound as bad.

Stack overflow refers specifically to the case when the execution stack grows beyond the memory that is reserved for it. For example, if you call a function which recursively calls itself without termination, you will cause a stack overflow as each function call creates a new stack frame and the stack will eventually consume more memory than is reserved for it.

Buffer overflow refers to any case in which a program writes beyond the end of the memory allocated for any buffer (including on the heap, not just on the stack). For example, if you write past the end of an array allocated from the heap, you've caused a buffer overflow.

The testimony makes fascinating reading, and is based on analysis of the actual source code in clean-room conditions. If after reading it you still think this isn't a software problem, maybe you need to turn in your geek badge right now.

That article demonstrates just how clueless the guys doing the testing were. For example, they complain that there are "thousands of global variables", but that is actually the normal way to write safety-critical firmware, since local variables can cause stack overflows. They couldn't read any of the source code comments, either, which were in Japanese; they only had poor machine translations of them.

Most damning of all, though, is that the skid marks the article claims are evidence of the bug are easily explainable, and indeed Toyota did offer an explanation. If the mat/carpet causes the brake and accelerator pedals to become linked, pressing one will of course press the other as well. The driver could not explain why she didn't push the brake pedal hard enough to stop the car (even under maximum acceleration the brakes will always win over the engine), but Toyota could: the mat that was also pushing the accelerator was preventing her from fully engaging the brakes. The pedal could not be pushed down fully.

The fix was to change the firmware to stop accelerating when both the accelerator and brake are heavily engaged. So, in actual fact, the supposedly lethally flawed firmware is now saving people from their own stupidity.

Mission-critical systems usually have a set of voting computers. Electronics is cheap today and the technology is not new. Maybe inertia prevents them from building something better; it's very common in the automobile industry.

MISRA C, which is a mandatory coding standard across pretty much all the automotive industry, does outlaw recursion. So I find this speculation on the cause to be very unlikely, and is just lawyers bringing in "software experts" to speculate. If you want to speculate, there are many other potential software bug causes as well. Some of them would even pass MISRA C coding checks, and not be easily detectable by static analysis.

They didn't speculate. They analysed the source code, which they had clean-room access to, as well as the actual compiler and test harness used by Toyota. Toyota testified that stack utilisation was 41%, whereas the analysis showed that it was actually 97% *before* the recursion was taken into account. It looks pretty certain that a stack overflow could occur. Directly after the stack are key system structures used to control the scheduling of threads on the CPU, and damage to these structures could cause one or more threads to never get any scheduled time. One of the threaded tasks controls not only the throttle, but also the failsafes for certain scenarios, and also the code that writes fault codes to the battery-backed RAM. Basically, if that task dies, the throttle is left uncontrolled, the failsafes don't kick in, and no fault codes are written, so the problem can't even be revealed after the fact. It's a terrible design; a disaster waiting to happen.
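The failure mode described here can be modelled in a few lines (sizes and names invented; the real layout is detailed in the testimony): the task stack sits directly against the scheduler's data with no MPU in between, so one push too many silently rewrites OS state instead of faulting.

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of the layout in the testimony: task stack words are
 * immediately followed in RAM by an OS scheduling structure. One
 * slot of "RAM" past the stack stands in for that structure. */
#define STACK_WORDS  8
#define OS_FLAGS_IDX STACK_WORDS   /* OS data sits right past the stack */

typedef struct {
    uint32_t ram[STACK_WORDS + 1]; /* [0..7] task stack, [8] OS data */
} memory_map;

/* Unchecked push, as in firmware with no overflow detection: nothing
 * stops *sp from walking past STACK_WORDS into the OS slot. */
void push(memory_map *m, size_t *sp, uint32_t v)
{
    m->ram[(*sp)++] = v;           /* no bounds check against STACK_WORDS */
}
```

Fill the stack exactly and the OS data is untouched; one more push and the scheduler's state is silently corrupted, which is the "task never runs again" scenario from the analysis.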

Users can protect themselves against this sort of thing by not buying a Toyota until Toyota is compliant with the relevant standards. Only hurting them in the marketplace will get them to fix this problem.

The testimony from both expert witnesses, Barr and Koopman, is now a matter of public record and makes fascinating reading. It will be especially interesting to computer guys because it goes into a lot of detail about the code design, though frequently translated into layman's language for the benefit of the court. I reckon it's going to go down in history as a classic case of how not to design a safety-critical system.

I've known a lot of high-quality developers over my 15 years of professionally developing software. The reason I don't want an automated car is because of these people. People make mistakes, intentionally or otherwise. There are unforeseen circumstances that the software may not understand. A foreseen circumstance: I've yet to see a demo of an automated car navigating anything resembling an icy surface (safely or otherwise), let alone in stop & go traffic in a city such as Chicago, where such things are quite common.

Yeah, but we know today who pays: the insurance company of the at-fault driver (provided they have the legally required insurance; liability coverage is required in nearly all US states). Failing that, the at-fault driver. Failing that, the dead at-fault driver's estate.

The question at hand is, in the case of an automated car, who is at fault (when the automated car is deemed to have caused the accident)? The manufacturer, because, it must have been a design/implementation flaw? The owner? The driver (because owner/driver aren't necessarily the same)? It becomes more difficult when you divest yourself from current paradigms of car transport. Oh, I sent my 6 year old daughter to school as a passenger, and along the way it ran over someone. There was no driver (the daughter was a passenger), but the car still killed someone. Am I at fault because I ordered the car to make the trip? Is the manufacturer at fault because the car didn't detect and prevent the collision that killed the other party? Is my 6 year old daughter at fault, because she was the only human occupant? This is the question that is being posed.

Yes, people do make mistakes. Often while driving. The test shouldn't be whether automated cars make mistakes, but rather whether they do better than an average driver. Can they deal with icy roads as well as an average driver? That bar's pretty low, even here in Edmonton.

Once they've reached that average competence and start being deployed, they'll also improve rapidly over time; computers have the potential to be much safer drivers than humans. They'd know where other cars are and where they're going, they'd be able to apply brakes to wheels independently with lightning reactions, and would not be subject to health conditions, intoxication, aging, or inexperience.

I've known a lot of high-quality developers over my 15 years of professionally developing software. The reason I don't want an automated car is because of these people. People make mistakes, intentionally or otherwise.

When it comes to true high-rel software, like that written to DO-178B Level A (an avionics software standard used for things like fly-by-wire) it's almost never the software per se that's at fault. The stuff is amazingly good. It's also amazingly expensive to write and test. You might also find it frustrating because it brings new meaning to the idea of conservative design. For example, I don't think it allows recursion. I know it doesn't allow dynamic allocation.

There are unforeseen circumstances that the software may not understand.

That's more the point. When things like fly-by-wire systems have problems, it's almost never the software itself, but something that was unforeseen by the system designers.

A foreseen circumstance: I've yet to see a demo of an automated car navigating anything resembling an icy surface (safely or otherwise), let alone in stop & go traffic in a city such as Chicago, where such things are quite common.

Agreed. This technology is interesting, but it's way over-hyped. It's impressive to be able to drive on a nice clear day, but a far cry from the real world, even in Silicon Valley, let alone Chicago.

Ever hear of the flying cars we were told in the 1960s we'd all have soon? I haven't seen any lately. I suspect driverless cars robust enough not to require human intervention when the going gets tough may be developed on the same schedule.

Well, one of the interesting features, IIRC, was that no mapping was done by the AI, it drove strictly "by the seat of its pants", and had no trouble performing precisely controlled power-slides that would make most drivers shit their pants. If we're discussing the challenges of driving on ice I think that's very relevant. I don't remember if it had any specific moving-obstacle-avoidance code, but plenty of other projects are tackling that. Conceptually at least it shouldn't be difficult to combine one A

One problem caused by this fault is that if the throttle gets stuck in the open position (the exact amount is redacted from the public record, but it looks to be >30%), then the vacuum assist to the brakes is greatly reduced (after all, normally the throttle closes when you move your foot to the brake pedal, so you get full vacuum assist). The upshot is that the driver would need to apply far more pedal pressure than they're used to in order to get full braking - combined with the fact that the engine is pulling hard, it will feel like the brakes have simultaneously failed. Turning off the ignition might help with the acceleration, but not with replenishing the vacuum assistance.

I had my car suddenly accelerate on me once. I was driving along when suddenly the pedal felt really strange and the car started accelerating, even when I took my foot off the pedal. I turned off the car and pulled over. It turns out the rubber mat I'd put in to protect the inside of my car from wet/snow had somehow flopped on top of the pedal and pushed it down. When I heard about these Toyotas accelerating on their own, it was the first thing I thought of.

During the trial, embedded systems experts who reviewed Toyota's electronic throttle source code testified that they found Toyota's source code defective, and that it contains bugs -- including bugs that can cause unintended acceleration.

I'm currently weighing your hunch against the industry experts who reviewed the source code, identified specific flaws in detail in an 800+ page report, and testified to their findings, presumably under oath. I'm on the fence here, it's a tough call. I'll let you know what I decide.

Although I'll admit I get the feeling that there is some kind of conspiracy going on here, probably related to an Obama Muslim invasion. I mean, what are the odds that the "several hundred [people] contending that Toyota's vehicles inadvertently accelerated" all happened to be driving Toyotas with the same electronic throttle control system?

Did they also discover a flaw in the brakes such that they could not overcome the engine power? This was the point of the parent post, I think. Modern cars have sufficient braking force to stop the car even with the engine at full throttle. So if the driver is "stepping on the brake really hard," the car should stop in spite of a stuck throttle, unless a simultaneous brake-system failure can be demonstrated.

This same thing happened to Audi in the 1980s and as far as anyone can tell, objectively, it was at

In what car?! All modern mainstream vehicles still use a master cylinder in tandem with a booster and ABS. Even if you lose all engine power, you should still be able to apply the brakes. It will require more force on the pedal, but it should still be possible to bring the car to a complete, safe stop.

In what car?! All modern mainstream vehicles still use a master cylinder in tandem with a booster and ABS. Even if you lose all engine power, you should still be able to apply the brakes. It will require more force on the pedal, but it should still be possible to bring the car to a complete, safe stop.

If only the throttle was also required to have a physical linkage!

Unfortunately, once we abandoned the carburetor, there wasn't much left to link the throttle pedal to, and we're stuck using it pretty much like a mouse. A physical, pedal-operated fuel valve that cuts fuel availability back to a level sufficient only to idle the engine might return some measure of physical control. Currently most manufacturers have the computer cut the throttle when the brake is pressed, even if the gas is also pressed. (Toyota

It depends on whether the ABS and/or brake servo is affected by the bug in a way that makes it harder to apply the brakes.

Be aware that most brake servos use manifold vacuum to increase the braking force. If the engine is at full throttle, the vacuum is soon depleted and a much larger force on the pedal is needed, which will be experienced as failing brakes.

A 268-horsepower Toyota Camry can stop from 70 mph in 190 feet with the engine at full throttle - only 16 feet longer than with no throttle. At 100 mph the difference in stopping distance was 88 feet.

When they tested a full throttle stop from 120mph, the car slowed to 10mph before the brakes overheated.

Perhaps you should press harder on your brake pedal. You're American, though, so your Impala is probably an automatic. In first gear, with the torque converter multiplying the torque, it may be able to slowly move the car even with the brakes fully applied.

Weird. The stock brakes on both the '95 Caprice and '96 Impala SS sitting in my driveway can hold the car in place. That was true when the engine was stock, and is still true after adding a shift kit, PCM tune, cat-back, intake, and valve train upgrades. It's been true on both the factory tires and the substantially wider aftermarket tires. It might be time for you to replace your brake material; you're seemingly endangering the other cars on the road.

Well, if it was in article comments on the Internet, that's a whole new story...;)

No one sells a car in the US with exclusive brake-by-wire, because nearly every state mandates the existence of a second braking system independent of the primary braking system. That's often the thing people call the "emergency brake," as compared to the "service brake." For IL, look at Article III at http://www.ilga.gov/legislatio... [ilga.gov]. They must be separated such that a failure in any one part does not leave the vehicle w

Did they also discover a flaw in the brakes such that they could not overcome the engine power? This was the point of the parent post, I think. Modern cars have sufficient braking force to stop the car even with the engine at full throttle

Is this definitely always the case? Under all conditions? There's a huge difference, for example, between holding down the brake while stopped and gunning the engine, and slamming on the brake while already travelling 70 miles an hour with the engine similarly gunning. Static vs. dynamic friction, for one thing. Not to mention brake fade due to overheating: the pads and rotors heat up and the physical properties of both change, the rotors can warp, moisture can flash into steam and create a nearly fricti

A Prius does not. More than half the braking power is supplied by regenerative braking, and if the register for throttle position reads 100, regenerative braking never kicks in, leaving you with the relatively wimpy disc brakes. A Prius crash near me last year ended with the brake pads worn away and smoking when the police arrived.

Many of these uncontrolled acceleration cases involved hybrid Toyota vehicles. In addition to the electronic throttle, Toyota's Hybrid Synergy Drive uses brake by wire, so the computer can dynamically use any desired combination of regenerative and friction braking, based on the hybrid battery charge state and the severity of the driver's control input on the pedal. These cars also eschew mechanical control for the gear-shift and the push-button ignition switch, relying on interface through the ECU.

It thus seems entirely plausible that a stack overflow, race condition, or other crash/freeze could result in a wide-open throttle with no brakes and no gear-shift or ignition-off control. If this is the case, it represents an epic lack of fail-safe design. It certainly doesn't help prevent operator error when Toyota uses a non-PRNDL shift pattern on their hybrids, to say nothing of the lack of industry standardization of the behavior of push-button ignition.

Almost none of the safety systems deployed in Ontario have been designed or written by a PEO-licensed engineer. The PEO is the same organization that tried to get Microsoft to stop using the term "Microsoft Certified Systems Engineer (MCSE)" and largely lost. If you start analyzing real, deployed systems, you will be shocked at what you find.

Yes, there are a few very well designed machines out there that do hardware and software interlocks properly, and in an obviously safe fashion. These are the exception, and I am delighted to find the few exceptions that exist.

However, if you want excellent examples of obviously unsafe things, consider:

- The gas pumps at Shell, Esso, and Petro-Canada. How many brands have an Emergency Stop button? One?

- Toyota cars have a push-to-start button that is also a push-and-hold-to-stop button. So how do you stop the car quickly? Shouldn't a car with push-button start also have a push-button stop - a different button that works immediately? Why would Toyota follow the Microsoft convention of using a start button to stop, instead of following the very well-thought-out emergency-stop button standards?

- Hospitals have implemented a number of networked computer systems that make the job of nurses quicker, easier, and more productive. This has reduced nursing costs considerably, and fewer nurses now look after more patients. However, these systems are not reliable, and the official backup plan is that a nurse will step in and do the job manually if a system fails. Unfortunately, many of these core systems also run on Microsoft Windows (often Windows XP). One virus, or one bad update written by a non-engineer, could wipe out many core systems at once. A major hospital had its Internet-linked systems disrupted because too many people watched Olympic hockey over the critical internal network. Has any engineer approved any of this? Does any hospital have enough nurses to cover in the event of a computer failure?

- Most servo-motor drives are sold with a warning, buried somewhere in the documentation, that it is "not recommended that power be cut by an emergency stop/safety system." Suppose you ignore this, braking resistors are fitted, and power really is cut: most motors will follow an exponential stopping curve and appear to coast to a stop. A mechanical engineer doing a PSHSR (Pre-Start Health and Safety Review) will expect the machine to stop quickly, not coast. The cheapest way to achieve that is to brake dynamically into a braking resistor under full software and transistor control. The second cheapest is a parking brake, but those are not rated for safety, and only a fraction of the servo-motor market uses parking brakes anyway. How many PEO-licensed mechanical engineers doing PSHSR reviews have passed systems with incorrect software E-STOP and safety circuits, and failed E-STOP and safety circuits that cut power to the motors in hardware?

I know it's sometimes difficult to admit that someone you know well may have made such a stupid mistake, but the probability of your father-in-law being confused or mistaken is a lot higher than that of entire teams of engineers at the world's largest car manufacturer fucking up in such a dangerous manner.