@JCreasy I agree, proactive recalls are much better than after-the-fact ones. I heard from a mate that BMW had issues in the US that they didn't want to recall, and Mazda here did something similar. Buyers should vote with their feet, not necessarily for the car with the fewest problems but rather the best after-sales care.

As a lifelong gear head, I know that worn or broken parts can kill you, but I don't believe claims about unintended acceleration, given functional mechanical pedals, linkages, etc.

I developed a verification and testing process for a firm that developed embedded engine controllers and this all sounds familiar. I'd been dubious about the Toyota failures, but I didn't realize that this car was drive by wire. Buggy software as the root cause of the failure mode is therefore completely plausible, despite no finding of mechanical or electronic failures.

If Barr's report is accurate, the software design, programming, and testing were ignorant, sloppy, and inadequate. The real shame is that this is completely unnecessary – we've known how to achieve very high-reliability software systems for a long time without breaking the bank. Model-based testing is now a big part of that.

I'm not sure who's responsible for the hype and inflammatory language ("a single bit flip could...," task death, dead task, dead app), but I guess that's what you have to do to make software failures tangible to a jury. It is interesting that no smoking gun is reported (recorded input/state with incorrect output that directly caused the failure) – i.e., it is not correct to say that a single bit flip caused the failure. In a tort case, circumstantial evidence can be sufficient, so it seems that evidence of poor software development alone was enough to convince the jury that it probably caused the failure.
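To make the "single bit flip kills a task" claim concrete, here is a toy sketch. This is my own illustration, not Toyota's code or anything from the report: the flag names, the dispatcher, and the fault injection are all invented for the example. The point is that a single-event upset in an unprotected in-memory status word can silently stop a task from ever being scheduled again, with no error raised anywhere.

```python
# Toy illustration (hypothetical, not from the actual system): a single
# flipped bit in a task-control flag "kills" a task, because nothing
# cross-checks or mirrors the status word.

RUNNING = 0b0001  # assumed status flag: bit 0 set means "schedulable"

def dispatch(tasks):
    """Return the names of tasks whose status word still says RUNNING."""
    return [name for name, status in tasks.items() if status & RUNNING]

tasks = {"throttle_monitor": RUNNING, "logger": RUNNING}

# Simulate a single-event upset (e.g., radiation or electrical noise)
# flipping bit 0 of one task's status word:
tasks["throttle_monitor"] ^= 0b0001   # RUNNING -> not RUNNING, silently

print(dispatch(tasks))  # only 'logger' still runs; the monitor task is dead
```

The standard defenses (mirrored/complemented copies of critical variables, EDAC RAM, a watchdog that checks task liveness rather than just CPU liveness) exist precisely because a check like `status & RUNNING` cannot tell a deliberate suspension from a corrupted flag.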

This may be the first time that indicators of bad code (not actual results) were sufficient to get a judgement. If so, I hope this is a wake-up call for people who manage this kind of system development and its risks: software hygiene isn't a fool's errand.

Hi, Bert. I appreciate a level of skepticism...but let's not get too cynical before we know all the facts.

Actually, I find the fact that the experts' group was able to demonstrate at least one way for the software to cause unintended acceleration to be a "breakthrough," at a time when the Toyota case -- up until last week -- was viewed by many as an issue of floor mats, sticky pedals, or driver error.

But as Bert and I have pointed out, ultimately the designers of critical safety systems, both hardware & software designers, are dealing with probabilities, and their task is one of reducing the probability of a dangerous incident to some acceptably low level, which can never be exactly zero.

Those familiar with ISO 26262 know that it defines four automotive safety integrity levels (ASILs) and various metrics, including the probability of violation of a safety goal (PVSG). The highest ASIL level, ASIL D, requires a PVSG (1/hour) of less than 10^-8, which is an order of magnitude less than that required by IEC 61508.

An argument could be made that safety systems which meet such requirements reduce the failure rate of these systems to a level far less than the failure rate of human behavior & decision-making while driving. Again, I am speaking of automotive safety in general, without regard to the particulars of this case. We could imagine future standards requiring even lower error probabilities, but these will never be zero.

Consider that just in North America, vehicle travel amounts to about 3 trillion miles a year, amounting to billions of hours spent behind the wheel. That is a sufficiently large sample size that even with the extremely low failure probabilities that best engineering practices can achieve, failures will still be seen from time to time. But consider how that compares with the injury and fatality rates caused by human error.
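The scaling argument above is easy to check back-of-envelope. The numbers below are my own assumptions for illustration (an assumed ~30 mph fleet-average speed, and the ASIL D budget of less than 10^-8 violations per hour from the paragraph above), not figures from the case:

```python
# Back-of-envelope: even at the strictest ASIL D failure budget,
# fleet-wide exposure is large enough that some failures are expected.
miles_per_year = 3e12        # ~3 trillion vehicle-miles/year (North America)
avg_speed_mph = 30           # assumed fleet-average speed, for illustration
hours = miles_per_year / avg_speed_mph   # ~1e11 driving hours per year

pvsg = 1e-8                  # ASIL D: probability of violating a safety goal, per hour
expected_failures = hours * pvsg

print(f"{hours:.1e} driving hours/year -> ~{expected_failures:.0f} expected violations/year")
```

Roughly a thousand safety-goal violations per year across the whole fleet, even at the tightest integrity level – which is exactly the point: "acceptably low" never means zero, and the comparison that matters is against the human-error accident rate over the same exposure.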

Even as modern automotive safety systems make driving safer every year, and as we look toward a future in which we humans are merely passengers in our vehicles and safety incidents are rare, the burden of responsibility and the cost of failure borne by the providers of these systems is far greater than it ever was or is on the fallible humans whose errors cause so many injuries and fatalities every day on our roads.

At 60 mph (88 fps) you would maintain a minimum distance of 264 ft on a US freeway ...and you'd like to have 528 ft? Mmm, I'd like to see you do that at peak hour on one of Sydney's or Melbourne's freeways.
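For anyone checking the arithmetic, those figures are just the 3-second and 6-second following-distance rules evaluated at 60 mph:

```python
# Following-distance arithmetic behind the 264 ft / 528 ft figures.
mph = 60
fps = mph * 5280 / 3600   # convert mph to ft/s: 88.0 ft/s

gap_3s = fps * 3          # 3-second rule: 264.0 ft
gap_6s = fps * 6          # 6-second gap:  528.0 ft

print(fps, gap_3s, gap_6s)
```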

I've never been on a Melbourne freeway in peak hour where the speed gets much above 60 kph, let alone 60 mph. :-) (You knew that was coming, right? :-)

Platooning is an interesting one; it would avoid the cycling between 0 kph and 60 kph that occurs at various intervals on freeways. But, as you say, there's no margin for error, and who is responsible for a collision and possible death? Toyota paid dearly here, and while most drivers can't cope with their car in working order, they have little chance of coping with an X-failure in platooning – the car makers would likely be sued out of existence in that case.

People just like to blame someone, and usually the one with the deepest pockets rather than the one at fault.
