
Death Metal writes with an excerpt from the website of defense attorney Evan Levow: "After two years of attempting to get the computer based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc. By making itself a party to the litigation after the oral arguments in April, Draeger subjected itself to the Supreme Court's directive that Draeger ultimately provide the source code to the defendants' software analysis house, Base One. ... Draeger reviewed the code, as well, through its software house, SysTest Labs, which agreed with Base One, that the patchwork code that makes up the 7110 is not written well, nor is it written to any defined coding standard. SysTest said, 'The Alcotest NJ3.11 source code appears to have evolved over numerous transitions and versioning, which is responsible for cyclomatic complexity.'" Bruce Schneier comments on the same report and neatly summarizes the take-away lesson: "'You can't look at our code because we don't want you to' simply isn't good enough."

I read the report earlier, and there are some very valid issues with the source. The first is that it incorrectly averages the readings taken, assigning more weight to the first reading than to subsequent ones. It also has a buffer overflow issue, where an array is written past its end, and even if this results in an error, it goes unreported.

You would have to be a fricken moron not to have a problem with mis-averaging; however, in my experience with law people, they can be even worse than PHBs.

Also, it looks like their out-of-range error scheme was to clamp the value to the closest legal extreme and only report an error if the condition was recurring and continuous. Assume for a moment you took a test of 32 samples right after the last good reading: it would only report an error if all 32 samples failed. Otherwise, up to 31 of the 32 would be clamped to the legal extreme closest to that reading. Couple that with the fact that the averages were taken incorrectly, and this isn't just reasonable doubt; it's worse than using an RNG to decide whether someone is drunk.

I'm not generally someone that insists everything needs to be open source. However, in a situation like this, where this device makes the difference between a life changing conviction and exoneration, it's pretty obvious that people should have the right to examine it. The court was able to order it opened here, but it makes you wonder how many people have been screwed by this.

Sadly, in the majority of cases where evidence based on something like this (DNA, hair analysis, etc.) is shown to be unsound, nothing comes of it. I saw a blurb about a "forensic expert" who would give the prosecution any testimony they wanted. The state he was based in refused to reexamine the cases he was involved in, even after he was shown to be a liar.

It's depressing but it's one reason I steer clear of the law as much as I can. As much as we Americans like to think of our legal system as dispensing justice, the sad fact is that it frequently doesn't.

Since I'm not an American, I don't know how a drunk-driving stop works there, but here in Denmark you blow into a mobile device; if it shows you as drunk, you are taken to the hospital for a blood sample, and only that blood sample will be used against you.

This seems to make sense to me. The breathalyzer is supposed to measure the blood alcohol content, and this is done by measuring the alcohol content in air expelled by the *lungs* (with a knowledge of partial pressures).

But if you equally weight beginning readings with ending readings, then you can be skewed by the first reading, which comes from the air in the mouth instead of the lungs (giving low scores to people with more time since their last drink, and high scores to people with a recent last drink).

I would think that this method would give a more accurate reading by filtering out the readings from 'mouth air' and giving preference to 'lung air'.

But regardless, tests should have been done using both methods, comparing against blood tests to see which returns more consistently accurate results. I wonder if those tests need to be made public as well.

You can think that you're doing fine because you've gotten good at compensating. For instance, dancers and figure skaters can learn to compensate for inner-ear/balance issues from spinning at speeds and durations that would leave most people nauseous or throwing up, but the spinning doesn't affect their reflexes. However, your vestibular sense of balance doesn't have to feel impaired for intoxication to be affecting your ability to drive. It doesn't take much alcohol for your reflexes and cognitive response to be impaired enough to cause an accident, even if it's not obviously apparent. While there is some variation, the acceptable BAC levels were based on correlation with average results from testing for significant reflex and attention deficits.

You might be one of the outliers, but the odds are much better that you are one of the myriad of people who delude themselves into thinking they are outliers because their judgement is impaired. Unless you've actually personally undergone reflex/response testing by a third party in conjunction with BAC testing to judge your personal susceptibility to alcohol, your judgement on the subject after alcohol consumption is unreliable. However, your ability to compensate for impairment in normal driving conditions wouldn't save you from an accident in an unexpected situation the way unimpaired reflexes would.

The small restriction on the few outliers is not a high price to pay for the safety of innocents. Nobody says you can't drink or drive, just that you have to exercise some level of personal responsibility and not do both (or, for that matter, drive and consume any other drug that impairs your ability to drive safely).

Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings.

You are correct. In the biz, we refer to this as an exponentially weighted moving-average filter. Recent samples are weighted more heavily than older samples.

y(n) = alpha*x(n) + (1 - alpha)*y(n-1)

The alpha value controls how much of the current input makes it to the output and how much of the old output stays. i.e. with an alpha value of 0.5, half of the new value is added to half of the old value. With an alpha of 0.1, 10% of the new value gets added to 90% of the old value.

This filter is nice because it doesn't require you to remember all the values that you want to average together, but it's a horrible way to get over the inherent noisiness in sensors.

If you have a noisy sensor and are trying to keep a low-noise estimate of the input, while that input is changing, you do some sort of filtering on the data. The weighted rolling average described above is nice for a number of reasons, mainly it's simple to implement and simple to analyze. In some cases, other filters are better.

If you have a noisy sensor and want to measure a single unchanging input, you would want a different sort of filter. In this case, the simple arithmetic average works quite well.

As you correctly observe, the two filters are of similar complexity. Which one you use should depend on the sort of input you're trying to measure. In this case, they used the former type of filter on the latter type of data, which is a definite no-no. This will result in data that is far noisier than you would otherwise expect from the raw sensor noise and the number of samples taken. When that noise could be the difference between a DUI conviction and the cop telling you to drive home carefully, I'd say that's worth worrying about.

Assuming the microcontroller has a 10-bit A/D converter to get the reading, I'm pretty sure such a chip could add 32 numbers together. With the speed of 8-bit microcontrollers these days exceeding 1MHz even at ~$1 price points, emulating 16 bit numbers to get your sum is not a problem. Take a power-of-two number of readings and your average can be a simple bit shift. It will take more horsepower to convert to base-10 on the display than to take the average.

This is not a cheap child's toy or a toaster; it's a law-enforcement grade breathalyzer going for above $100, and there is no excuse for being so lazy. Code that runs on small systems should be *clean*, because bugs are harder to find without easy I/O, and its efficiency needs to be obvious. Also, code that can put someone in jail should not be spaghetti, regardless of the scale of the system running it.

Common wisdom holds that the end of a breath from the "bottom" of the lungs contains a higher percentage of alcohol than the main body of the breath; this is held to be why the officer will tend to tell you to push harder to get that last, higher sample into the device. If anything sets off the machine, it'll be that last bit with a more concentrated sample.

Whether that reflects the *actual* blood alcohol level in any well-defined and useful fashion needs to be demonstrated.

No, writing lousy code is not a prerequisite for being considered great and invaluable. As /.ers are so fond of saying, correlation is not causation.

It's just that if you are a lousy coder, you probably have time for proper amounts of sucking up, while if you're a great coder, you probably are too busy getting things to work properly to concern yourself with interpersonal relationships. And it's easy to see why the former get more promotions than the latter.

In Britain, the breathalysers decide whether or not you get taken to the police station.

When you arrive at the police station, they take a blood sample. They test half of it, and that decides whether or not you get convicted. You get the other half and can arrange for your own tests on it.

I'd be more interested in their test plan and test results than their source code if I were trying to convince a computer illiterate judge of something. Find a missing test case or an uncovered corner condition and you might have a decent case, code that doesn't pass static analysis and is ugly... well that pretty much defines 99% of the code out there.

We trust our lives and livelihoods to shitty code every day, and the plain fact of the matter is that shitty code usually works. As programmers we like to think of ourselves as artists, creating a masterpiece of perfectly engineered code. In reality, all projects face budget and time constraints, most projects have legacy code which is hard to maintain, and most teams have at least one guy who just doesn't get it.

If the code works, and you can show empirically that the code works, that is proven beyond a reasonable doubt in my opinion. Not beyond any doubt, but that isn't the standard our justice system is based upon.

The problem isn't that it can break; the problem is that it can return bad readings. For example, a dentist's X-ray machine isn't suddenly going to show cavities everywhere, because there is no code interpreting the X-ray. The worst thing a credit card machine does is stop working; most of the time it doesn't overcharge you, and if it does, a few phone calls will sort it out. Again, the worst thing that happens with fuel injectors is they break, your car doesn't run, and you pay a few hundred to get it fixed. The worst thing with stoplights is they break; there is always a human driver who can figure out that all the lights are on red or green and call the police to manage traffic.

Breathalyzers are basically black boxes; there is no human to really check them. With code more apt to return false readings than to simply break, it is dangerous code when those readings can be the difference between a crime and a non-crime.

Equipment can break down and programming errors do occur. Because of the safety issues involved, signals are equipped with a "conflict monitor." A conflict monitor is a simple device, completely independent of the controller, that watches the signal operate. It does this by monitoring a number of conditions, including the voltage to the individual bulbs in the heads.

If a condition occurs which is not normal (for example opposing greens) the conflict monitor detects the condition and shuts down the intersection. Normally, it places the signal on "flash mode." The main street is given a flashing yellow, to indicate that the situation is not normal and caution is needed. The secondary street is given a flashing red light that should be treated like a stop sign. For safety reasons, the signal will not normally reset itself. A technician must visit the intersection, determine the problem and reset the controller.

I disagree. Anything upon which guilt or innocence rests needs to be held to a higher standard.

For many other applications, especially non-government ones, if the code doesn't work well, then customers probably aren't going to buy it, and changes will be made. Take your example of fuel-injection code: if you don't do that correctly, you're going to have an engine that runs like crap and gets poor economy. Cars that run poorly generally don't sell well. They might sell some, but as we see with GM and Chrysler, you have to do better than that to avoid bankruptcy.

Saving your thesis paper? The code in TeX is probably some of the most bug-free code around. At least I hope you're using TeX and not something crappy like MS Word for a thesis. But even MS Word isn't that bad, since so many businesses rely on it and don't have problems with random data corruption to my knowledge.

Timing stoplights is a good counterpoint to your example. In my experience, stop lights have horrible timing most places I go. It's almost like they're intentionally designed to make you stop at every single light, unless you drive at > 80mph on surface streets. Why is such poor performance accepted from our traffic lights? Because they're run by the government, and we the people don't have a choice. That's exactly the same as this breathalyzer crap: if you're accused, you don't get a choice about which breathalyzer they use on you. It's decided by the government (probably with help from bribes), and that's what they use, whether it works well or not.

Yes, but to GP's point - if the code had been subjected to proper tests, then it wouldn't matter how hard it was to maintain. Either the maintainers overcame that difficulty and it passed the test, or they didn't and it failed.

Regardless of the state of the code, no breathalyzer truly "works". None of them can directly detect blood alcohol content. All they do is use a proxy to estimate it from the reaction products in your breath. These devices are wholly unscientific. There is no possible way they can derive a credible estimate with a precision of 0.001% or even 0.01%. There is no accounting for body size, type, or metabolic rate. Furthermore, these devices can be triggered by more than just ethanol. Chocolate is reported to cause false readings.

In all 50 states, refusal to take a breathalyzer at the police station will result in a 1 year (minimum) suspension of your driver's license.

In all 50 states, you can refuse to take a roadside breathalyzer, as they're inadmissible in a court of law. If you have had even 1 drink, always ask to go to the station for a real breathalyzer. PBTs, or portable breath testers, are wildly inaccurate and only give the police probable cause to arrest. It will not work in your favor to take one.

Remember when it used to be you couldn't drunk drive? Then it was you couldn't be behind the wheel while drunk? Then it became you couldn't even be in the driver's seat with the car off while drunk? Then it became you couldn't drive if you couldn't get out and walk in a straight line? Then it became reciting your alphabet backwards... Then suddenly, you couldn't have an arbitrary percentage of alcohol in your blood to do all those things. Then it became whatever the machine said your blood alcohol might be.

There are no laws against drunk driving anymore. There are laws about not being able to potentially operate a vehicle if a machine determines you have enough alcohol on your breath.

Actually, research I read when I got my DUI in 2007 seemed to indicate the release of alcohol vapors into the air by the lungs can vary widely between persons, by as much as 20%.

This has nothing to do with body size, type or metabolic rates that I'm aware of, but more research is obviously needed for the scientific community to reach a consensus. The sampling process is fundamentally flawed but the courts have routinely rejected any evidence to the contrary.

I don't know about 49 of the states, but in Washington state, if you want something more accurate than a breathalyzer, you have to demand the police take you to the hospital to have blood drawn at your own expense. They are required by law to comply, but 99% of DUI suspects know nothing of their rights.

If I had been pulled over again that same night, I would have driven home without a DUI, and even if they had managed to get me to perform the parking lot special olympics (also called the field sobriety test), I would have asked for a lawyer. Like most first time offenders, I took a plea deal to avoid significant jail time, paid the ridiculous fine, and took alcohol awareness classes. The whole thing was a farce, intended to make money.

I blew .086% and easily could have challenged the results in court, given the breathalyzer had a sticker on it saying it hadn't been calibrated in 2 years.

Like most first time offenders, I took a plea deal to avoid significant jail time and paid the ridiculous fine and took alcohol awareness classes. The whole thing was a farce, intended to make money.

When you use words like "farce" and "ridiculous," it makes it sound like you don't want to take responsibility for your own actions. I don't think DUI laws are "a farce, intended to make money." I think they're intended to protect people like me from getting killed by people like you.

Don't always assume the judge will, in fact, look at the evidence and arguments. In their eyes, it doesn't look good to overturn a DUI conviction. Period.

A buddy of mine left a night club and got pulled over for supposedly making an illegal left turn. He blew over the state's .07 and got arrested per the usual.

However, the judge didn't care that there was no reason to pull him over (even with photos of the left-turn sign), since the cop explicitly said it wasn't due to erratic driving, *only* the 'illegal' left turn. Examples must be made. DUI upheld.

Hell, even I got pulled over once for simply driving at 2am, but my breathalyzer revealed a stunning 0.00% BAC. After chatting with the cop for a bit, turns out they were just looking for easy DUI targets, and I happened to be driving on the same road as them.

Well, the problem with calculating the averages should honestly be enough to get this tossed. The defense can put up an exhibit with a set of numbers using the flawed methodology which shows a person to be over the limit, then call an expert witness with a math degree, or an accountant for that matter, and show that the average when calculated normally is below the legal limit. Even better is if you can show that the machine has calculated an average that falls below the legal limit but should have been above it.

Good question, but it needs to be reworded. Does it always work for all inputs?

Also important, if it's a poorly written mess, why is the company claiming that it works? I see no indication that they've done due diligence for a device used to convict people. Just because they've never observed it to fail, doesn't mean a thing.

Looks like the answer is no. It's a black box that doesn't report internal errors except when it can't ultimately decide on an answer.

The source code is useful only for showing the machines can be unreliable in certain circumstances, but unless he has substantiating evidence to show it gave an incorrect result he is unlikely to prevail.

Example: Guy blows .09 after drinking 2 beers. He might have a case that the machine was wrong. Example 2: Guy drinks 8 beers and blows .18. The machine might be wrong, but even if it was off by a bit due to rounding averages, he's still guilty as sin.

Sucks, but that's just the way the law looks at it.

Someone mentioned earlier that the weighting of samples under repeat tests gives weight to the first blow, which is a big red flag. The initial blow is probably the sample most likely to be contaminated by liquid from the mouth, which will skew the reading dramatically, leading to higher BACs than reality. If someone blew a .12 and then a .07 on the same machine, he could be found guilty, but it's possible the second sample is more accurate.

"Just because they've never observed it to fail, doesn't mean a thing."

Correct! This is a point that many people fail to understand. Testing can't prove that there aren't bugs; all it proves is that a bug did not occur during the tests. Failing a test proves that a bug exists, while passing all tests just proves that you failed to find one. Passing many tests can boost your confidence that there are no bugs. Verification can prove that your code is correct, but for most programs it is infeasible.

Read the article. The code in question, among other things, calculates an arithmetic mean of a sequence of values by successively averaging each value with the mean of all the previous ones, and reduces 12 bits of precision coming from the hardware sensor to 4 for some unspecified but undoubtedly stupid reason.

The code in question, among other things, calculates an arithmetic mean of a sequence of values by successively averaging each value with the mean of all the previous ones, and reduces 12 bits of precision coming from the hardware sensor to 4 for some unspecified but undoubtedly stupid reason.

Well, it's not hard to imagine why they throw away all those bits. Prospective LEO customer: "Wow, this thing never gives the same reading twice. How am I supposed to secure convictions with numbers this flaky?"

Well, if we assume the machine was sensitive up to the LD50 for ethanol of 0.5% BAC, then with only 4 bits of precision the uncertainty just from the rounding error is comparable to the difference between being over the limit and being completely sober. This was covered in the comments on Bruce Schneier's blog [schneier.com]. That one's probably wrecked a few people's lives too.

Very true. To some extent, it's reasonable to truncate a few bits of precision if the noise floor of the BAC sensor is substantially higher than the resolution of a 12-bit ADC. No reason to display a bunch of meaningless flickering digits extending far to the right of the decimal point.

But when you're displaying a decimal value, every place value with full 0-9 range takes about 3.3 bits of precision. If you're going to return numbers like "0.18" from a device with a range of 0.00 to 0.99, you need to carry at least 7 bits of real precision, not 4.

Of course, with poorly written code, it's hard to show whether or not the code ultimately works by examination of the code.

Then again, proving that the code works (which should be the standard when the code is analyzed in court) by code examination is very difficult even for well-written code.

Perhaps a better approach would be documented, repeatable testing of the device. When I challenge a radar gun, I get to ask about its calibration documents, but I don't think I get to debate the blueprints from which it was built.

My personal opinion - and before getting on an "innocent until proven guilty" kick bear in mind that I'm not a part of the court system in this case - is that the defense realizes that almost all software systems look awful and are trying to game their way out of a conviction they've probably earned.

That said, if for no other reason than to eliminate such gaming, there should be standards for testing and documenting the proper function of these devices. Any device that can't be calibrated and tested with sufficient certainty should be banned from use as evidence in court. If the device passes the test, then exactly how it does it shouldn't really matter.

Perhaps a better approach would be documented, repeatable testing of the device. When I challenge a radar gun, I get to ask about its calibration documents, but I don't think I get to debate the blueprints from which it was built.

Calibration and testing won't reveal all the edge cases that might cause errors. Consider a radar gun designed to take the average of five samples. You've got a car moving away from you at 70 MPH, and a duck flies into the beam for one sample, moving towards you at 5 MPH. This gives the following five samples:

70 70 70 -5 70

I can see a way that badly-written code would turn that into an average speed of 106 MPH (storing a signed char as an unsigned char, which would turn the -5 into a 251), and yet it would pass calibration and every test someone's likely to perform.

Just read Schneier's comments. He cites some of the more important things:

Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed... There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings.

That alone should be enough -- the readings are not averaged correctly. But it goes on:

The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.

So we know it's buggy and inaccurate, to a moronic degree. If that wasn't enough:

Catastrophic Error Detection Is Disabled: An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.

So, basically, it's designed to always return some value, even if it's wildly inaccurate, and even if the software is executing garbage at the time.

In other words: It appears to be a very low-level equivalent of Visual Basic's "on error resume next".

Whiskey. Tango. Foxtrot.

So to answer your question: No, it does not work. Even if it did somehow work, there's obviously an unacceptably poor level of quality control here.

The problem in a lot of states is that .01 can make a huge difference between a DUI, a DUI with a "high BAC kicker", a wet-reckless, or nothing at all. It has to be accurate to at least a few 9's, or those "on the bubble" cases carry a severe level of doubt. Driving with a .07 is not illegal (for the most part), but .08 is. The question in court is not "were you drinking tonight" but "how much did you drink", which is a very specific, very objective, very determinable piece of information.

As states lower their legal limits to the point where they intersect with non-impaired drinking drivers, especially with a .01 or more margin of error, you're going to get a lot of overzealous cops in cities with revenue shortfalls taking innocent people in for DUIs, and hopefully more and more of these "border cases" will bring these devices into question more than the over-the-top, blacking-out, pissing-his-pants multiple offender does in court.

In embedded systems programming, it is common practice to disable interrupts if they are not used. It is certainly possible that this app simply does not need to handle these interrupts, whether they are enabled or not.

It is also possible that the other flaws mentioned, which clearly reduce accuracy, do not do so sufficiently to change the outcome in a meaningful way.

The problem with drunk driving law is not primarily one of testing. It is that it presumes someone is incapable of driving with even trace amounts of alcohol, while treating other forms of more dangerous driving (such as driving while texting or on the phone) as OK or far, far less severe.

The way the laws themselves are written is a horrible miscarriage of justice. This is the result of the perverse and hypocritical views of MADD and its ilk, the bastard children of the prohibition movement.

Sure, one bit would be enough to make a pass/fail decision. But they throw away info BEFORE making that determination. You can make a determination and round it down to one bit, but you can't round down to one bit and then make an accurate determination, this is an analog sensing device we are talking about. Throw away everything but one bit, and you don't have a yes/no on the legal limit, you have 'above 2.5v, or below 2.5v.' What's the legal limit, translated into volts, hmmm?

Whether or not it "works" isn't quite enough in my opinion. It needs to be clearly written in such a way that the purpose and methods used in sampling input from hardware and the making of calculations are verifiably accurate and true in all cases. This is an instrument that measures whether or not someone is within a prescribed legal limit and needs to be as provably clear and accurate as possible. We are talking about taking away freedoms from people as a result of this test machine and there should be as little room for error as possible.

If I were to prescribe a system for analyzing breath for alcohol content, I would require that a single test unit be comprised of two machines from two different manufacturers and that any single sample be split equally between the two machines for measurement such that when both machines return results and are both in agreement within a prescribed "reasonable" difference from one another, then we might begin to say we have a reasonably accurate measure from which judgements can be reasonably made.

In the meantime, software architecture needs to be held to the same legal standards as ACTUAL architecture and engineering. I recall being involved in a cabling project where all terminations were reading perfectly, but when I inspected the raceways, the bend radius of the cabling was way too tight and much of the cable was tied to various pipes and conduits, not fixed to the hardware intended for handling the cable. The cabling was not installed according to the clear and complete specification, and I was furious at what I found. The first answer offered to me was "but it all works, right?"

If you took your car in for repair and they charged you the full price of the repair with parts, and you found that it was repaired with duct tape and baling wire, would you accept "but it works!" as a reasonable answer to your complaint? I think not!

Back to this situation: "Does it work?" The real answer? If you cannot read the code and make clear sense of it, you cannot prove that it works, only that it works under the practical conditions of testing. That is simply NOT good enough for any scientific measurement and especially not good enough for measurements that may be used to determine whether or not a person is sent to prison.

Ok, I'm not happy that some people almost certainly were measured inaccurately by these things. I'm not happy that this company was allowed to pull this kind of shit -- when you do government contracting, the government should own what you do.

However, I am very glad that the precedent has been set.

And I am especially glad that not only is there precedent, but there's a real live example of why we need this stuff to be open.

when you do government contracting, the government should own what you do

But they weren't doing government contracting. They produced a good that was purchased by the government. There's a very big difference.

The key here is not that the government, or anyone, should own what they produced -- it's that when what they produced is used to convict someone, that person has the right to examine the methods used.

It's not about openness, at all. It's about the right to a fair trial; openness is just a side effect.

The key here is not that the government, or anyone, should own what they produced -- it's that when what they produced is used to convict someone, that person has the right to examine the methods used.

I will call out the company for doing shoddy work. The question is whether the device was ever certified for the purpose, and if it was who did it and what was the process used. If you are going to use something to prosecute, then there needs to be evidence that the device was tested and certified using a publicly documented process. This is black box testing and if the government never did it, then why is it allowed in court?

80% of the code in business fits this description. With 20 year old legacy code written by 50 consultants, then upgraded in India, then ported from one platform to another to another, and a database engine switch or two. Code gets senile. What do they expect? Good thing we're all just commodities... human lego bricks easily replaced with cheaper plastic.

Just because code is not written to some official standard does not mean it is guaranteed to be buggy. Undisciplined coding is as bad as undisciplined specifications - results can indeed be ugly. It is preferable if the coders follow good practices, and there ideally would be a clear system for specifying program behaviour in testable ways. It is easier to produce good code with robust behaviour if good practices are followed from design through coding to testing and documentation, but it is not impossible to achieve good results in other ways also.
Did they find any coding bugs, or did they just criticize the approach to coding?

2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed.

There you go. It's also inaccurate:

The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256... Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection...

And, if there were a catastrophic bug, you wouldn't know it, you'd just keep getting readings:

An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.

If you got your hands on and analyzed the source code of most DVD players, TVs (Panasonic runs Linux!), and other complex devices, you would discover that, in order to ship earlier, the code is an utter mess.

Programmers are not joking when we complain about the "It compiles? Ship it!" statement.

the fault is the Executive staff that refuse to listen to their experts (programmers) and do what they recommend. Instead we get morons that know nothing about programming making unrealistic deadlines and forcing death-march coding marathons to give us the mess we have today.

The fault is the State using output of a device which is an undocumented, unverified black box in legal proceedings.

Yes, of course, most of code out there is a similar mess. But if it fails, the worst that can happen is that your desktop crashes, or your iPod hangs... which is bad, of course, but not as bad as getting a criminal conviction for drunk driving.

These things should be held to the same standards as code in military equipment or nuclear reactors - mistakes are inexcusable.

the fault is the Executive staff that refuse to listen to their experts (programmers) and do what they recommend. Instead we get morons that know nothing about programming making unrealistic deadlines and forcing death-march coding marathons to give us the mess we have today.

To some extent, you are correct. However, I also blame the developers. There are many "software engineers" and "computer scientists" I have worked with who didn't understand the basics of algorithms, design, testing, and other topics that are necessary to our field.

If I were the manufacturer, at this point I'd say: (1) lawyers are expensive; (2) competent programmers are expensive, but less expensive than lawyers; (3) our business is selling the breathalyzer, not the software, so we gain nothing by keeping the source secret; (4) this publicity is hurting us; (5) let's hire some more competent programmers to clean up the code, and then we can make it public; (6) profit!

This is different from the case of the voting machines. In the case of a voting machine, there are lots of people who might be motivated to hack it, lots of people have access to the machines, and it only takes one compromised machine to throw a close election. If you believe in security by obscurity, then there is at least some logical argument for keeping the voting machine code secret. In the case of the breathalyzer, there's not even that lame argument.

The good: This particular breathalyzer has been proven to be the unreliable POS that it apparently is. This unit, and others like it, will finally start being held to a stronger coding standard.

The bad: every sleazeball, ambulance chasing, "call lee free", douchebag of a lawyer will use this case to attack the credibility of any and all breathalyzers made in the past, present, or future, spreading enough FUD to juries everywhere that an unacceptable number of drunken idiots get the God-given right to keep their license until they finally end up killing someone.

As a person, I think groups like MADD spend most of their time trying to scare monger politicians into pushing us as close to prohibition as possible. I believe that alcohol can be used responsibly. But I also know that this case is going to result in DUI's getting overturned for people that damn sure don't deserve it. Borderline cases will get knocked down, cases will get thrown out, and the people that broke the law, that did something wrong, will walk out of a court room 'vindicated.' They didn't do anything wrong when they had six beers and drove home; it was that confounded *machine* that *said* they broke the law. The *machine* was busted, ergo they didn't break the law. In short, this case is going to make a lot of O.J. Simpsons. The jury said they didn't commit a crime, so they didn't. No harm no foul. Technicality? Bah! They're as innocent as the sweet baby Jesus.

I'd like to think things will wash out in the end. This case will probably end up making it harder to get off on this particular technicality in the long term. In the short term? Here come the appeals. Maybe the state is partially at fault for buying shoddy equipment. (Or maybe not. Did they do a code review? Do they have the resources to do one? Probably not. Did you do a code review of the 3com switch in your server room? Their selection criteria can certainly be questioned, but it probably doesn't change the fact that someone drank enough to blow a .22 then decided to drive home.)

But in the end, the drunks are still going to be drunks. And tomorrow some of them will probably get to file appeals, and some of the ones that shouldn't be on the road, or even in public, will get to slip out of this brand new loophole. I'm not sure that that deserves a cork-popping celebration.

(and yes: We all handle our booze differently. Arbitrary limits that determine "drunk" may or may not be the answer. Hardcore drunks will keep driving even after losing their license. DUI's are as much moneymakers for the States as speeding tickets. Yadda yadda yadda.)

OK, LOTS of strange posts from people who claim to have read the article but only see that it's bad code, not actually broken.

Read it again. It's broken from a legal liability and trustworthiness standpoint. It's broken from a precision standpoint. It's broken from an algorithm standpoint. It is not trustworthy, precise, accurate, or correct.

"It is clear that, as submitted, the Alcotest software would not pass development standards and testing for the U.S. Government or Military. It would fail software standards for the Federal Aviation Administration (FAA) and Federal Drug Administration (FDA), as well as commercial standards used in devices for public safety. This means the Alcotest would not be considered for military applications such as analyzing breath alcohol for fighter pilots. If the FAA imposed mandatory alcohol testing for all commercial pilots, the Alcotest would be rejected based upon the FAA safety and software standards."

Nobody in the government or military would be allowed to trust this, if it weren't already in use.

"Results Limited to Small, Discrete Values"

Sixteen values is all it displays! It throws away almost all of the precision of the 12-bit ADC, and reduces it to 4 bits! This is NOT precise enough!

"Catastrophic Error Detection Is Disabled"

"Diagnostics Adjust/Substitute Data Readings"

"Range Limits Are Substituted for Incorrect Average Measurements"

"The software design detects measurement errors, but ignores these errors unless they occur a consecutive total number of times."

It's not correct. It's not accurate. It's not good enough. The odds are VERY good that some people over the limit have gotten off lucky, and also that some people below the limit now have criminal records.

Testimony is subject to cross-examination (at least in the US). Opposing counsel has the opportunity to exploit weaknesses in the witness's testimony. Also, the witness is subject to prosecution for perjury for lying. What penalty does a faulty (if it be faulty) device face?

Er, why would it need or be expected to be? It's a commercial product. I don't think most bank websites are "coded" to any specific standard either.

From the article:

1. The Alcotest Software Would Not Pass U.S. Industry Standards for Software Development and Testing: The program presented shows ample evidence of incomplete design, incomplete verification of design, and incomplete "white box" and "black box" testing. Therefore the software has to be considered unreliable and untested, and in several cases it does not meet stated requirements. The planning and documentation of the design is haphazard. Sections of the original code and modified code show evidence of using an experimental approach to coding, or use what is best described as the "trial and error" method. Several sections are marked as "temporary, for now". Other sections were added to existing modules or inserted in a code stream, leading to a patchwork design and coding style.

Ok. Would you want to have something that can cause you to get convicted even though it wasn't documented or even fully tested? ("Oh, crap. That constant should have been 0.001, not 0.01. Oops. Blood alcohol level was 0.008, not 0.08. Sorry!")

Common sense (if it WERE common) should indicate that there should be full tests for a wide range of values performed with the written tests and expected values verified and available to prove that the device/software actually does detect the proper levels of alcohol.

2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on.... the comments say that the values should be averaged, and they are not.

It's been a while but didn't the teacher in 5th grade show you why that wouldn't work?

Or how about this:

The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible.

Who the hell didn't pay attention to A/D quantization error in controls class?

I don't want to fill my whole comment with copy and paste from TFA, but this is not only a coding-standard issue. It's just plain stupidity. Error checking and out-of-range checking sound like things a first-year programmer should have gotten right.

The thing is that probably 95% of the Lint reports could have been fixed by the code designers, just by making appropriate declarations or a bit of type casting. The fact that 60% of the source is reported by Lint, indicates that the designers never bothered to do any kind of static code checking or to clean up warnings, and that points to a lack of care during development and testing.

At a previous job we had to buy a third-party driver for an embedded PCMCIA controller. The software vendor delivered code that (the first time around) produced about 1200 lines of warnings when we compiled it. We queried them about it and they responded that "we don't compile with warning output enabled". Our reply to them was that our coding standard was that the compile would fail on warnings, and we wouldn't accept their code unless they fixed all the warnings... they cleaned up their act, and fixed a couple of previously unresolved problems in the process.

I work on embedded system stuff every day. At the end of the day, there are NO lint warnings in my code. First, I tend to avoid coding practices and designs that generate lint warnings. By and large, lint warns for a good reason most of the time. Second, in the limited number of situations where lint flags something incorrectly, there are methods for silencing the warnings via special comments. I'm currently working on a 50,000-line project, and there are about 70 places in the entire code base where we had to tell lint to ignore a warning. Each warning suppression is documented as to why lint is incorrect.

Lint isn't a perfect tool by any means but in my opinion, anyone developing C code without it is not acting in a professional manner.
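For anyone who hasn't used the "special comments" mentioned above, here is a sketch in PC-lint style: suppress exactly one message on one statement, and document why. The message number (534, "ignoring return value") and the scenario are illustrative:

```c
#include <string.h>

/* Zero a caller-supplied buffer.  memset returns its first
 * argument, which is not meaningful here, so the lint warning
 * about the discarded return value is deliberately suppressed
 * for this one statement only. */
void clear_buffer(char *buf, unsigned len) {
    /*lint -e{534} return value of memset is not useful here */
    memset(buf, 0, len);
}
```

The point is the discipline: every suppression is local, numbered, and carries a justification, instead of compiling with warnings globally silenced.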

The code is protected in the US by copyright. It is not protected anywhere else, especially in countries where it is cheap to reproduce the hardware. US Customs has proven over and over they will not block the import of infringing devices.

This means that once the software gets out - and it is - look for cheap copies that will put the original manufacturer out of business. Because law enforcement and just about everyone else in the market for such devices is going to jump on the price difference. Same fu