Posted
by
Soulskill
on Saturday October 31, 2009 @08:16AM
from the must-be-lit-majors dept.

antdude writes "This TechRadar article explains why computers suck at math, and how simple calculations can be a matter of life and death, like in the case of a Patriot defense system failing to take down a Scud missile attack: 'The calculation of where to look for confirmation of an incoming missile requires knowledge of the system time, which is stored as the number of 0.1-second ticks since the system was started up. Unfortunately, 0.1 seconds cannot be expressed accurately as a binary number, so when it's shoehorned into a 24-bit register — as used in the Patriot system — it's out by a tiny amount. But all these tiny amounts add up. At the time of the missile attack, the system had been running for about 100 hours, or 3,600,000 ticks to be more specific. Multiplying this count by the tiny error led to a total error of 0.3433 seconds, during which time the Scud missile would cover 687m. The radar looked in the wrong place to receive a confirmation and saw no target. Accordingly no missile was launched to intercept the incoming Scud — and 28 people paid with their lives.'"
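The quoted numbers can be reproduced directly. The following is only a sketch, not the Patriot's actual code; it assumes, per the usual accounts of the bug, that 0.1 was stored truncated to 23 fractional binary bits:

```python
from fractions import Fraction

FRAC_BITS = 23                        # fractional bits kept (an assumption,
                                      # following the usual accounts of the bug)
tick = Fraction(1, 10)                # the intended tick length: 0.1 s
stored = Fraction(int(tick * 2**FRAC_BITS), 2**FRAC_BITS)  # truncated value

error_per_tick = tick - stored        # ~9.54e-8 s lost on every tick
ticks = 100 * 60 * 60 * 10            # 100 hours of 0.1 s ticks
drift = float(error_per_tick * ticks)
print(round(drift, 4))                # → 0.3433, the figure in the article
```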

It's pretty pathetic and negligent that software that controls explosive missiles was not tested for over 100 hours of operation. That's a standard Quality Assurance procedure for even the simplest low-budget hardware...

It's also pretty pathetic that the system designers implemented a broken design and did not foresee this problem. High-resolution timekeeping has been accomplished pretty successfully already...

I wonder how much time and money was spent on research and development for this thing.
It doesn't seem like we're getting a quality product for the likely huge sum that was paid for it...

which is stored as the number of 0.1-second ticks since the system was started up. Unfortunately, 0.1 seconds cannot be expressed accurately as a binary number, so when it's shoehorned into a 24-bit register — as used in the Patriot system — it's out by a tiny amount. But all these tiny amounts add up. At the time of the missile attack, the system had been running for about 100 hours, or 3,600,000 ticks to be more specific. Multiplying this count by the tiny error led to a total error of 0.3433 seconds, during which time the Scud missile would cover 687m. The radar looked in the wrong place to receive a confirmation and saw no target. Accordingly no missile was launched to intercept the incoming Scud — and 28 people paid with their lives.'"

So in a system that should have clocks synchronized to less than a microsecond, nobody bothered to run "ntpdate" even once in a hundred hours? And surely the military has better clock synch than a stupid home PC? This is stupidity, also known as "human error", causing those deaths. It's a case of "the correct answer to the wrong question".

What is always brought up as a "computer problem" is the crash in Paris of a jet due to infighting between the human pilot and the autopilot. Of course, there the ultimate mistake was the pilot's: he had forgotten to turn off the autopilot to land. It was set for cruising altitude (3 km) while the pilot was trying to land. This resulted in ever more desperate attempts by the autopilot to make the plane gain height, which eventually resulted in a total loss of lift, which naturally resulted in the plane hitting the ground nose-down and a big fireball. The computer did exactly as instructed; it's just that the pilot's (unintentionally given) instructions were stupid, and it took the pilot over 3 minutes to realize just how stupid he had been.

The computer did exactly as instructed; it's just that the pilot's (unintentionally given) instructions were stupid, and it took the pilot over 3 minutes to realize just how stupid he had been.

Sounds like a user interface problem to me. Given the potential consequences of that particular user error, the fact that the autopilot was still engaged should have been made more obvious to the pilot. (e.g. when the plane computer sees that a struggle is going on between the autopilot and the manual controls, it should prompt a loud, un-maskable synthesized voice shouting "THE AUTOPILOT IS ENGAGED, YOU IDIOT!")

Sounds like a user interface problem to me. Given the potential consequences of that particular user error, the fact that the autopilot was still engaged should have been made more obvious to the pilot. (e.g. when the plane computer sees that a struggle is going on between the autopilot and the manual controls, it should prompt a loud, un-maskable synthesized voice shouting "THE AUTOPILOT IS ENGAGED, YOU IDIOT!")

Or if the pilot is pushing hard on the stick, the autopilot should disengage (with loud alarms). If I tap the brakes in my car, the cruise control disengages; it does not fight me. - Dan

So in a system that should have clocks synchronized to less than a microsecond, nobody bothered to run "ntpdate" even once in a hundred hours?

Do you want to be the one to explain to the generals why their stand-alone, truck-based mobile air protection system needs a hard-line network connection to work?

The real idiocy is here:

Unfortunately, 0.1 seconds cannot be expressed accurately as a binary number, so when it's shoehorned into a 24-bit register

Taken charitably, the article writer has oversimplified to the point of obscuring the point. It's perfectly possible to represent a 0.1-second tick in a 24-bit register. There's an overflow about once every 19 days. The problem is doing calculations *with* that number, and that takes knowing what the hell you're doing. Given the problem the system designers were trying to solve with Patriot, this should not have been a problem.
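A quick back-of-the-envelope check on that 19-day figure (an illustrative sketch; the constants are just the ones from the discussion):

```python
# A 24-bit counter wraps after 2^24 ticks; at one tick per 0.1 s, that is:
seconds_to_overflow = 2**24 * 0.1
days_to_overflow = seconds_to_overflow / 86400   # 86,400 seconds per day
print(round(days_to_overflow, 1))                # → 19.4 days
```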

And surely the military has better clock synch than a stupid home PC?

You'd be surprised how hard clock accuracy is to get right, *especially* under military conditions. A drift of 0.3433 seconds over 100 hours works out as an accuracy of 1 part in a million, give or take. Besides, the problem here wasn't clock drift, so it's irrelevant.

FTFA: "So computers might suck at maths, but there's always a solution available to circumvent their inherent weaknesses. And in that case, it's probably more accurate to say that computer programmers suck at maths - or at least some of them do."

Thank you, come again.

So in a system that should have clocks synchronized to less than a microsecond, nobody bothered to run "ntpdate" even once in a hundred hours?

Yes, obviously they just needed to ssh into their Patriot missile air defense system, edit a few lines in /etc/inet/ntp.conf and run svcadm restart ntp.

The obvious problem in the article, if you read it, is a computer's finite precision, and how it is dealt with. By 'computer', the author could have easily included the system libraries that are actually doing all the rounding and overflows instead of implementing arbitrary precision in software.

Everyone defending the way 'computers' is used in this article, and conflating it with 'processor', is a complete idiot.

The obvious problem in the article, if you read it, is a computer's finite precision, and how it is dealt with. By 'computer', the author could have easily included the system libraries that are actually doing all the rounding and overflows instead of implementing arbitrary precision in software.

Not at all, since correcting an inappropriate hardware design with software is like fixing an automobile that was designed with square wheels by manually sawing off the corners to make them octagonal instead. You could create a recursive software routine to continue sawing until the wheels were a good approximation of round, but that's an awful lot of sawing to fix something that should have been right in the first place.

The clock in modern systems is nothing but a hardware register that gets incremented periodically (as correctly described in the article). The ONLY rounding error introduced by software is in converting that number to decimal. But rounding had nothing to do with the problem described. The appropriate solution is a better hardware design, not attempting to patch or correct it in software.

The problem was error accumulated in the clock register itself due to the imprecision of the clock, and overflows due to the inappropriately small size of the register. Both are hardware issues and represent bad design decisions. The way to fix them is to design the hardware properly in the first place so that it is appropriate for the job at hand.

The problem described is not overflow; it is repeated rounding on the imprecise representation of 0.1. The systems failed after 3.6M ticks. In a 24-bit register, overflow is not a problem until at least 16.7M ticks. The bound is a minimum because this is not an integer register and the article does not describe the size of the exponent. If you check the figures in the article, when the system failed it was out by about three ticks in 3.6M. Overflow would cause the representation to suddenly shift to a completely wrong value.

I'm obviously not a hardware designer? That's funny. I am not the clueless one here. How about some simple math? Maybe you would learn something.

A 24-bit register, with clock ticks every 0.1 second, would overflow in less than 20 days. And if the clock ticks were faster, then it would overflow even sooner. No wonder they recommended rebooting the system every few days.

Of course I do not recommend an infinitely large register. Simply one that is large enough for the job at hand. This one obviously isn't. Further, a 0.1-second resolution clock is obviously not adequate to a job requiring this kind of precision.

If the hardware clock is off (not overflowed but INACCURATE, which was the real situation here), no amount of software tweaking will properly fix the problem. The article did not state but implied -- incorrectly -- that the clock register was accumulating rounding errors; that is not the case. Nobody makes system clocks that way, nor did they in the 90s or even the 80s. The system clock is nothing but a counter that is incremented every clock tick. The actual problem was that the clock ticks were not sufficiently precise, so over time the count was off. Math libraries and rounding errors played no part whatsoever in that error.

Finally, I would like to point out that today's standard PC-type system clocks are large enough that they won't overflow for 100 years or so; that is the obvious and proper solution to the overflow problem. The problem of clock ticks that are sufficiently precise for timing of missile navigation, as far as I know, has not been addressed on standard PCs, however, and they do not try to correct for that in software because the adequate precision in the clock simply does not exist. It would amount to tilting at windmills. Keeping a count in software of the number of times the register overflows is also NOT an appropriate solution for a system clock, nor is any software tweak, because software by definition is volatile while the hardware clock is not. In other words, nobody does it that way, dude, because it's just plain the wrong answer.

As for your final comment, most Unix programmers know what epoch time is, when it started (00:00:00 UTC on 1 January 1970), and when that date will roll over in the counter (approximately 68 YEARS later, so it isn't much of an issue). Nobody is arguing that we should make a missile system that needs to last, unmodified, for over 68 years. But proper hardware design in the first place, which was certainly possible at that time using ASICs if not straight-up custom chips, would have eliminated the problem.

Yes. The issue here sounds like they had a system clock counter that was an integer, counting the number of 0.1-second clock ticks. Then they wanted to convert this to a floating point number in a 24-bit IEEE-style format, so they simply multiplied 0.1 by the integer in the register. Of course, that still sounds like too large an error to have occurred from just that, but let's pretend it did.

There are several issues here. For missiles travelling at such speeds, using a system clock counter based on 0.1-second ticks sounds terribly coarse to me. Second, since 0.1 seconds is the baseline resolution of the system, the system should have been using floating point numbers where '1' corresponds to a decisecond rather than a second. Then the time counter would be exactly expressible in the floating point format.

Lastly, if the floating point format really needed to be in units of seconds, rather than deciseconds, the time counter should have been loaded in, having an exact representation, and then divided by 10, which also has an exact representation. This is all pretty basic to anybody who has even a limited understanding of floating point. If you understand the inherent precision of every operation even better than I do, even more improvements would be possible.

But to be honest, I'm not sure why floating point was used at all here. It sounds to me like fixed point may have worked just fine for most of these problems. (Of course, fixed point has its own set of rules ensuring maximal accuracy. )

Yup. This, combined with the parent's note about high-resolution timing, shouldn't have even gotten past the first programmer to write the code. The instant they wrote a line of code that depended on timekeeping that precise, there should have been a review of the time system; or rather, before that, it should have been thought of in the design phase. And as for floating-point errors, any programmer that isn't aware of those issues needs to be writing... fuck, I don't even know. Something that doesn't use floating point.

This particular story took place in 1991, and most of the code for Patriot was written in the 70s - needless to say, software QA was a little more lax back then. The fix for this problem was out a couple days after the incident.

Oh really? The problem with these systems is that they have never worked in anything other than rigged tests and are just silicon snake oil. I remember having this same discussion when there was a story here about some sort of Israeli space lasers that could apparently even shoot down artillery shells. Only a few months after that, a very large number of thirty-year-old rockets, dumped at a discount price by Iran for being obsolete, came flying over the border from Lebanon. Since then a lot of even slower rockets have come out of Gaza. The success rate of this amazing new space toy matches that of the Patriot - zero.

The Iron Dome [wikipedia.org] system works perfectly. It's just not capable of protecting any kind of large area. It can, however, make a military base invulnerable to rocket fire, and they're working on making the system mobile, to protect tanks. The only real problem left for doing this is the power requirements.

For ships, another such system exists, and it protected the ships perfectly well from those same rockets fired by Hizbullah. Its "protection range"? In the largest deployment, about 200 square meters.

There is also the problem that a downed missile presents. What is a "downed missile"? Well, it's a large collection of very-high-speed pieces of metal that have been heated up by a large explosion and are about to crash into the ground. So far so good.

So what is "the ground" in the case of a Hizbullah or Hamas missile launch? Well, it's the center of the city that's controlled by the terrorists. It's their human shields. Markets, schools, you name it. So a successful missile intercept is reported in the press as "Israel fires a rocket into a Palestinian kindergarten". That is, by the way, the literal truth, even if the rather important detail of a rocket's presence above said kindergarten is left out. In the deployed missile-intercept installations, "the ground" is chosen to be something else, like the ocean surface.

Missile intercept systems are no solution for terrorism. Most unfortunately, the only solution for those rocket attacks is preventing them from being fired in the first place. Which obviously requires that either the Palestinians police their own terrorists, or someone does it for them (that's called "occupation").

These systems work, they are deployed successfully in the field. They're no silver bullets, and any bullet that's fired, whether a missile or a missile-intercept-missile, will eventually hit the ground at rather high speeds. Which makes their use above urban environments result in civilian casualties.

The "kingdom of Egypt" (the state of the Pharaohs)? (exterminated to the last man by Muslims)
The Hittite Empire? (exterminated by the Greeks, Romans, Persians)
The kingdom of Israel?
The Assyrian Empire?

Which of these do we restore? (Note that the Palestinians, or to be more exact the Arabs, only come into play about 4,500 years after the Assyrian Empire.)

Which do we restore? And why do they have more rights than all the others who conquered that piece of land?

Note the obvious truth: the Jews controlled Israel about 4,300 years before the Arabs even left their tiny province...

What if some Greek starts firing rockets at the Arabs? Will you tell them to leave? Doesn't he have at least as much right to Israel as they do? What if the Jews start firing rockets into Jordan (territory that was part of the kingdom of Israel)?

And of course, you shouldn't count yourself out. You're an Indo-European living in America. It seems hypocritical in the extreme to tell others to leave conquered lands. Your province of origin is northwestern Iran; every other place on this earth where Indo-Europeans live (including Europe) is obviously conquered from someone else.

The problem with the Palestine / Israel issue is that Israel is not working towards any solution. What is Israel's long-term solution? Have sovereign absolute rule over a few million people in a prison that their citizens can, at will and with army backing, snatch up pieces of for settlement? Oh yeah, that is going to work out. Palestinians either need to be sovereign or citizens of Israel. Israel needs to pick one, because the keeping-a-ghetto-of-nationless-people method isn't working.

Uhh, Hezbollah was created to defend Lebanon in the '80s after Israel killed thousands of Lebanese and occupied a good chunk of it.

The last fight between them happened in 2006. Hezbollah kidnapped a few SOLDIERs to trade for PoWs (a common thing, since Israel has a shit ton of prisoners).

Israel responded by sending in an army many hundreds of times larger than Lebanon's. They bombed many buildings, including hospitals, schools, UN bunkers and apartment buildings. Hezbollah fired rockets back to show resistance.

In the end, Israel killed 1,200 civilians and 300 soldiers, and destroyed a significant percentage of the country's economy. Hezbollah killed 120 soldiers and 40 civilians. Notice the fucking difference in ratios. Oh, and the whole time Hezbollah conducted rescue missions, gave out food and helped transport people to safety. So fuck off.

Also: "Hezbollah is now also a major provider of social services, which operate schools, hospitals, and agricultural services for thousands of Lebanese Shiites, and plays a significant force in Lebanese politics.".

Also, Hezbollah states that they distinguish between Zionists and Jews. Their stated reason for firing rockets is continued resistance against Israeli attacks and to put an end to any colonial entity within Lebanon. NOT to kill Jews.

How the fuck parent got modded up is beyond me. Every single point is a verifiable falsehood.

What I find deplorable is how the US propagandizes about supporting democratic regimes, but when the Palestinians elected Hamas to power, we refused to have any dealings with them. Talk about fucking hypocritical (spoken as a lifelong US citizen).

1. The Patriot version used in the Gulf War (round 1) was not designed to be used against Tactical Ballistic Missiles (like SCUDs), but against opposition aircraft. A fighter isn't going to be flying as fast, and thus the error is going to be much smaller, which means the missile would probably still find the plane.

2. The Patriot has a quite good record against SCUDs (after the software upgrades). Much better than the Soviet SA-2s did against B-52 raids in Vietnam.

3. Systems don't always work right the first time, and if you do a full on test to start with, and something goes wrong, it's a lot harder to find where the error is than if you test one part at a time.

I know that I'm arguing with a trolling AC, but for the other readers of slashdot, you should know that the grandparent's post refers to the controversy regarding the analysis of the Patriot system during the first Gulf war. There was a huge propaganda machine behind the Patriot's "successes" which turned out to be very near zero indeed. This was covered in a series of hearings in the early 90's...

One of the other results [cdi.org] (the first one that comes up for me, actually) claims that in testimony presented to Congress, Postol's methodology was called out as flawed, based on the fact that three to eight Patriots were launched at every incoming missile, while his video analysis is done per interceptor fired, completely ignoring the massive odds against more than one interceptor making a hit. The Israelis' independent analysis puts the success rate at 50%.

The way the system makes sure it's tracking the target it was given is by predicting where the target should be seen next based on speed and direction, and then only looking for it in a window ("range gate") around that predicted position. The window is a point in space-time and therefore has time coordinates as well as space coordinates, and the problem was that the Patriot system apparently used absolute time since power-on to specify the time coordinate, hence the error accumulation. The problem could have been avoided simply by using a time coordinate relative to the last tracked position rather than an absolute one.

The GAO report also blames the 24 bit registers of the 1970's era hardware as limiting accuracy which is just garbage. A good excuse to a politician perhaps, but there was nothing stopping them from using a 64 bit, or whatever, math library if that would have helped.

Of course, the Patriot was being used outside of its original requirements spec when being used to target SCUDs, so it seems someone really screwed up in not reviewing the design beforehand and determining its limitations (and fixing them), rather than finding out after the fact when 28 people are dead as a result.

I actually read about this specific incident once; I seem to remember (though I'm honestly not sure) that the design flaw was known, and the user manual indicated that the computer needed to be reset every 36 hours. However, in wartime, under attack (there were frequent Scud intercepts), the crew controlling the missile battery opted against shutting it down even for a short time. Maybe even though the manual said it SHOULD be rebooted, it did not explain WHY or what the consequences would be.

So they designed a system that accumulated rounding errors over time, and their solution was to ask the system's users to reboot the system every so often? Somehow, that does not add to my sympathy for these programmers...

When you write programs which deal with time like this, you never use floating point math. If your required precision is 1/10 of a second, your units are in 1/10 of a second. You do not resort to floating point. I'd probably use 1/100 or go to 64 bit and use 1/10000 of a second. With a high level language, there are better ways to do it of course.
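A minimal sketch of what that looks like in practice; the names and the 1/100-second resolution here are just illustrative, not from any real fire-control system:

```python
# Timekeeping as an integer count of ticks; convert to seconds only at the
# edges, never inside accumulating arithmetic.
TICKS_PER_SEC = 100                       # one tick = 1/100 s

def to_ticks(hours=0, minutes=0, seconds=0):
    """Convert a duration to an exact integer tick count."""
    return ((hours * 60 + minutes) * 60 + seconds) * TICKS_PER_SEC

uptime = to_ticks(hours=100)              # 100 hours: 36,000,000 ticks, exact
window = to_ticks(seconds=3)              # a 3 s tracking window: 300 ticks

# Integer arithmetic on tick counts never drifts, however long the uptime.
assert uptime - window == 35_999_700
```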

The reboot hack is a reasonable workaround in the field, as long as the downtime is documented and understood by leadership.

a) If they knew enough about it to put "reboot every 36 hours" in the manual they knew enough to fix it.

b) According to the summary, 36 hours would still be a complete miss (a third of 687 meters is still 229 meters).

c) A fixed point integer (32 bits) can mark tenths of seconds with complete accuracy for over 13 years.

d) Leaving aside a, b and c, the story still doesn't make any sense. The system would start the calculation the moment it saw the missile, not 100 hours before it appeared on the radar.

Now... at the speed of a Scud missile (Mach 5, if Google serves me), it may be that an accuracy of 1/10th of a second isn't enough to compute the trajectory accurately enough to intercept it. At that speed you might need 1/10,000th-second resolution or whatever. *That* would be believable (but unlikely - the designers would have to be complete idiots).

The rest of the article? Yawn. It's the same old recycled story we've been seeing since the 1970s (those of us who are old enough).
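For what it's worth, point (c) above checks out with simple arithmetic:

```python
# Point (c): how long a 32-bit counter of tenth-second ticks lasts
# before wraparound.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60
years = (2**32 / 10) / SECONDS_PER_YEAR
print(round(years, 1))                # → 13.6 years
```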

I want to know who programmed a system that allowed floating point errors to accumulate over time in a critical calculation. I hope they did not receive a degree in computer science, or that if they did, it was not from my alma mater.

Seriously, what programmer has not heard of floating point errors? That has to be one of the most common phrases I have ever heard in relation to programming; even the EEs and MEs I have met are familiar with the concept.

I had a similar issue with some code of mine for physics analysis. While I had heard of floating point errors, they're a lot more subtle than they first appear, and I ended up falling victim to one. Fortunately I discovered it before it actually led to any serious problems; it just resulted in wasted time.

Not everyone with a need for programming has a CS background and enough experience to be aware of all the potential problems. You'd hope that someone working on a missile system would have though.

Everybody knows that they exist, fewer people know how to avoid them. Lots of early multimedia frameworks, for example, were written using floating point timestamps and developed this exact problem (add some fraction repeatedly for each audio and each video frame, and after an hour the two tracks are noticeably out of sync). Now, they use a numerator-and-denominator form which is simple to add without rounding errors and so you only get them when you convert to floating point for comparison.
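The numerator-and-denominator approach is easy to demonstrate; Python's fractions module is used here purely as an illustration of the idea, not as what any multimedia framework actually ships:

```python
from fractions import Fraction

frame = Fraction(1, 30)                    # exact 1/30 s per video frame
t_exact = sum(frame for _ in range(30 * 3600))  # one hour of frames
assert t_exact == 3600                     # lands on exactly one hour

# The floating-point version accumulates rounding error, frame by frame:
t_float = 0.0
for _ in range(30 * 3600):
    t_float += 1 / 30
print(t_float - 3600)                      # a tiny accumulated error
```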

Even fewer people realise how compiler and hardware dependent they can be. For example, if you do a sequence of floating point operations on x86 then the values will stay in 80-bit registers until they are stored out to a variable. If you compile the same code for a newer machine with SSE or for another architecture then you will get 32-bit operations on your 32-bit floats and so you'll have less precision. A lot of compilers will even generate different precision between debug and release builds.

Even fewer people realise how compiler and hardware dependent they can be. For example, if you do a sequence of floating point operations on x86 then the values will stay in 80-bit registers until they are stored out to a variable. If you compile the same code for a newer machine with SSE or for another architecture then you will get 32-bit operations on your 32-bit floats and so you'll have less precision. A lot of compilers will even generate different precision between debug and release builds.

I ran into this when someone was using my library with DirectX. I was initializing a filter kernel and using double-precision calculations, but apparently DirectX put the processor in single-precision mode, so all my double-precision calculations weren't done as such. Same compiled code, just a run-time difference. I took the opportunity to improve the algorithm to work even with single-precision floats, which was probably good to do anyway.

To be honest, from working in two specialist fields (HPC system-level programming and embedded applications, particularly sensor stuff), I've found that CompSci grads are more likely than CompEng or EE grads to make errors like this. A large part of it is simply that CompSci nowadays is too high-level and abstract; many of them don't know very much about how computers ACTUALLY work other than as a theoretical model.

A common remark is "Why should I need to know that? The compiler will take care of it better than I will anyway", completely forgetting that the compiler is only as smart as the programmer who coded it. So you can get what I ran into with an odd appliance based around the SH-4 processor that I was hired to fix some performance problems with. It ran fixed-point integer and decimal math, and was ported over from ARM. But it only reached about 25% of maximum theoretical performance, while the ARM reached around 80%. Turns out GCC was at fault, using a generic method that wasn't suitable for the Super-H architecture. And the CompSci grads had no clue about such things.

>>>It's also pretty pathetic that the system designers implemented a broken design and did not foresee this problem. High-resolution timekeeping has been accomplished pretty successfully already...

I sorry.

j/k.

We had a similar problem with an Aegis design, and it was a major headache for us Hardware Engineers to try to convince the Systems Engineers that counting in binary time was more logical than counting in 0.1-second increments. The SEs kept insisting that their computers at home accurately count in seconds, so we hardware engineers should be able to as well. The HE manager and the SE manager were butting heads for about a month over this issue, until finally an upper-level manager handed down a decision in favor of the HE manager and binary-based counting/requirements documentation.

I guess in the Patriot situation, the decision went in the opposite direction. Hence, errors were introduced.

We had a similar problem with an Aegis design, and it was a major headache for us Hardware Engineers to try to convince the Systems Engineers that counting in binary time was more logical than counting in 0.1-second increments. The SEs kept insisting that their computers at home accurately count in seconds, so we hardware engineers should be able to as well.

And the systems engineers would have been right. The error was not about counting in 0.1-second increments versus 1-second increments or whatever; it was in using floating point representation where fixed point (basically, a scaled integer) would have been more appropriate.

And come to think of it, that is more or less what most desktop and server OSes do: they count number of milli, micro, or nanoseconds, and store that as an integer.
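For example (Python shown just to illustrate the integer-count idea; the nanosecond clock is what modern OSes actually expose):

```python
import time

# Modern OSes hand back the clock as an integer count of nanoseconds, so
# interval arithmetic on it is plain, exact integer subtraction.
start = time.monotonic_ns()
time.sleep(0.01)
elapsed_ns = time.monotonic_ns() - start

assert isinstance(elapsed_ns, int)
assert elapsed_ns >= 9_000_000        # at least roughly the 10 ms we slept
```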

Hindsight is almost 20/20. Except that the original purpose of the Patriot was to shoot down much slower aircraft, flying parallel to the earth, not ballistic missiles. This new use for Patriot was essentially experimental and had been rushed to war - and in war you run into a lot of unexpected circumstances. For example, conventional doctrine in the 1980s required Patriots to move constantly on the battlefield to avoid air attack. The clock would then reset when repositioned. No one expected a Patriot in air-defense mode to stay stationary for 10 hours, let alone 100. But in a missile-defense role they did. There is a good GAO report on this.

Wow. People complain about the US government. Still look at the transparency. The GAO wrote a very readable report for the House Of Representatives and now we can all read it on the web. It's not unreasonable to think that the US's vast military superiority over everyone else on the planet is at least in part due to this sort of thing. I don't think any other government would do this - mistakes in the military would just get covered up as state secrets and anyone who tried to talk about them would get locked up or worse.

I think the guy has a point (although he's being a bit nationalistic about it): transparency is key in order to learn from mistakes. You can say many different things about the US of A, but the US of A is good at open hearings.

Even if a flawed design would have worked in the intended usage scenarios, as you speculate: given the option of writing a correct program or an incorrect program with no significant difference in effort, why would you ever consciously choose the broken solution from the start? This sounds more like plain and simple incompetence to me.

The problem is the programmers: they should simply have maintained a count of the ticks in an integer and then multiplied it by 0.1 when necessary. Even better, use a proper data type, not a suckish 24-bit float in a freaking weapon, unless you understand very well what you are doing.

Unfortunately, 0.1 seconds cannot be expressed accurately as a binary number, so when it's shoehorned into a 24-bit register -- as used in the Patriot system -- it's out by a tiny amount.

Sorry, 0.1 seconds can be represented EXACTLY in such a system. It doesn't even need floating-point. Here is how such a system could represent the durations of 0.1 seconds, 25.7 seconds, and 123.4 seconds: 1, 257, and 1234. So like you say, fixed-point works here. No need for anything beyond integers in this case.

Well, in this specific instance a decimal system would have been OK, but it isn't a general answer. The general answer is "make sure your increments are divisible into your number base"; if they had used 1/8ths or 1/16ths of a second, or even 3/32nds of a second, as their timer increment, then they would not have had this problem. There's no reason why 1/10th of a second has any magic properties.

In general terms, all number bases have other number bases with which they are incompatible. The inability of binary to represent 1/10 accurately is just the same as the inability of decimal to represent 1/3 accurately. It's only because we use decimal all the time that we overlook decimal's shortcomings (or instinctively compensate for or avoid them) and then blame computers for binary's incompatibility with decimal.
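For what it's worth, the article's 0.3433-second figure is easy to reproduce from this. The commonly cited account of the bug is that 1/10's infinite binary expansion was chopped after 23 fractional bits in the 24-bit register; assuming that account (this is an illustration, not the actual Patriot arithmetic), the numbers work out:

```python
import math

# 1/10 in binary is 0.000110011001100... repeating forever.
# Chop it after 23 fractional bits, per the usual account of the
# Patriot's 24-bit register, and see what error remains per tick.
stored = math.floor(0.1 * 2**23) / 2**23
per_tick_error = 0.1 - stored            # about 9.5e-8 seconds

# 100 hours of uptime at ten 0.1-second ticks per second:
ticks = 100 * 3600 * 10                  # 3,600,000 ticks
total_error = per_tick_error * ticks

print(round(total_error, 4))             # 0.3433 seconds
```

The per-tick error is tiny (under a tenth of a microsecond), but multiplied by 3.6 million ticks it matches the article's 0.3433-second drift.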

Well, in this specific instance a decimal system would have been OK, but it isn't a general answer. The general answer is "make sure your increments are divisible into your number base"...

Close. Very close. The general answer is: no matter what base you select for time, distance, or any other metric that might accumulate errors, be certain to (a) perform a careful error analysis, and (b) include some additional safeguard to control the error if there are potentially large downstream effects.

Just because these computers counted in, say, INT/10, and therefore could represent 0.1 seconds exactly, does not mean, for example, that the timebase used to drive that counting was accurate and stable. Erro

Or just keep track of things in increments that make sense in binary. 0.1 seconds was arbitrarily chosen as a nice number in decimal. They should have chosen an arbitrary time interval that is a nice interval in binary, the base they were actually using.

This article isn't about how computers suck at math, it's about how people suck at math.

Ok, now go and read the article. The Patriot bug was a problem with fixed point maths. The Ariane bug was integer overflow. The Intel FPU bug was caused by a production error with nothing to do with the arithmetic actually being performed.

Fixed point never rounds when operating in the range and precision for which it is designed. In this case they needed a precision of 0.1; using INT/10 would be 100% accurate and would never give them any rounding errors for this use case.

So, in other words: you are wrong, and should probably consider using fixed point more.

I believe that the problem was not that 0.1 s could not be represented. After all, the article states that there were 0.1 s ticks, and they likely counted ticks as integers. No problem there. However, I guess that 0.1 s was not an integer multiple of the system clock. If, for example, the tick should occur after 6,666,666.67 clock cycles, the system likely emitted a tick after 6,666,667 clock cycles. Such a system would accumulate 3.3 clock cycles of error each second.

With fixed point you can choose the base of the fractional part. A binary fixed point would not have helped them, but a decimal fixed point of 1/10 or 1/100 would. The algebra of fixed point is the same no matter what base you choose. This makes it the fastest way to get decimal-based fractions instead of binary fractions (decimal floating point is best with hardware support).

Use fixed-point numbers! You know, in financial apps you never store amounts as floating point; you use cents or 1/1000ths of a dollar instead.
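A toy sketch of the cents convention (illustrative only, not from any real financial code):

```python
# Illustration: represent money as integer cents, never as floats.

def add_prices_cents(prices_cents):
    """Sum prices held as integer cents: exact, no rounding."""
    return sum(prices_cents)

# Three items at $0.10 each:
total_cents = add_prices_cents([10, 10, 10])
print(total_cents / 100)        # 0.3 dollars, converted only at the edge

# The float version famously disagrees with itself:
print(0.1 + 0.1 + 0.1 == 0.3)   # False
```

All the arithmetic happens on integers; the conversion to dollars is cosmetic and happens once, on output.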

Computers don't suck at math; those programmers do. You can get arbitrary-precision mathematics on even 8-bit processors, and most of the time compilers will figure everything out for you just fine. If you really have to use 24-bit counters with 0.1 s precision, you *know* that your timer will wrap around every 466 hours, so just issue a warning to reboot every 10 days, or auto-reboot when it overflows.

Yeah, because "the missile countermeasures failed to fire because the system was doing its scheduled reboot" is so much better than "the missile countermeasures failed to fire because of timer precision."

The OP's suggestion for scheduled reboots could be solved by having redundant systems, no? System X comes up at 0 hour mark, System Y comes up at 233 hour mark. System X switches to System Y and reboots at 466 hour mark; System Y only has 233 hours uptime.

Each battery has overlapping coverage with its nearest neighbors. A proper deployment has overlapping fields of fire in both depth and breadth. Surface-to-air missile defense involves multiple layers of different systems, each specializing in a different range: short range, things like Stingers; medium range, things like HAWK; long range, things like Patriot. A proper tactical deployment never relies upon a single battery to provide the sole coverage. The problem here was primarily one of tactical deployme

How about this, then: have two systems, set up so that for some amount of time every 36 hours they process the tasks in parallel, so that the most recently rebooted one can take over control while the other reboots.

There is the concept of "fail safe": if the system is down (for reset, reloading, or whatever), then the folks up in C&C know that it's down and can do other things; but if it's "working perfectly" while in fact it is not, then the folks in C&C don't know this (and end up going boom).

Besides, a properly built EMBEDDED MILITARY GRADE system should not take more than a couple of minutes to "reboot", so you have a couple of F-16s in the air patrolling to watch for incoming "stuff".

It was written up as a tech failure (and not a people failure) because newsmen who call their sources stupid lose their sources. As others have pointed out, the answer to your question of why this is news is that the system failure resulted in death.

I had the same reaction: stupid news. An old technical problem with old solutions, but someone thought they had a "catchy headline". And conclusion. It allows for radicalizing, since it involves missiles, war, deaths, a lot of money, national pride, terrorism, religion, politics, corruption... sex, lies, and videotape. No big news; it's all the same: some engineers screw up, some people wage wars and build weapons, and some people die. It would be a different story if the same error were in a videogame, in an Intel CPU [wikipedia.org], or in an Excel spreadsheet calculation [microsoft.com]

The problem seems to be right out of the textbook for "Practical Analysis" (not sure if this is the correct translation of the German "Praktische Analysis"). This was a mandatory course for every computer science degree in my university days (20 years ago). I don't know if this is still the case. It was an eye-opener to see how correct formulas and a perfectly working computer could yield absurd results. Several times I was asked for help by people claiming their Excel was broken due to such mistakes.

Any first year compsci student should know that this happens, and should know to choose data types that can represent the data to the needed degree of accuracy.

A simple struct { int integral_part; int decimal_part; }; would do the job for this. Or, since you care about exactly 0.1-second increments, you could just use integral values in the first place. With 24 bits, you can cover 19 days before the counter overflows, and almost half a day on top of that to provide a buffer if bad guys show up right as the scheduled re

The author seems to imply that computers can't do simple base-10 math without errors. That's not true if you use fixed precision: you use an integer and shift it so there is no fractional portion; in this case you would make your base unit 1/10th of a second instead of 1 second. Addition, subtraction, and multiplication will be error-free. You'll still have a problem with division and other operations, but in this case that doesn't sound like their primary issue.
It wasn't the computer's fault that t

Say what? Citations, please. Methinks one of those 2.0 values isn't really 2.0. Hint: printing a value isn't a good way to get its actual value, because the printing function most likely rounds it to fewer digits than are actually stored.

It is absurd to blame the computer (or worse, all computers) for what is bad programming. Computers can store a 1/10 of a second perfectly accurately, as long as it is stored in a variable that counts tenths of seconds rather than seconds. It can easily be stored as an integer that way, avoiding any floating point rounding errors.

There certainly are cases of bad math in computers, particularly Intel computers. But this isn't such an example. This is just a lazy and stupid programmer who didn't understand what he was really doing who should take the blame for the failure that killed people, not the computer.

I remember this from a numerical methods class in the 1980s. To deal with situations like this, you can do one of three things:

a) Have a function that you sample as a function of t, so you don't get accumulated error.
b) Have enough bits so that the error won't be an issue. This is actually hard to do, because floating-point errors stack up pretty quickly if you are not careful.
c) Have an error term which you can use to make adjustments along the way to account for the lack of precision. Bresenham's line algorithm does more or less exactly this; that's why you had "stair stepping" as the algorithm corrected itself along the way.

If the OP was correct, then Patriot failed because it did none of them. My bet is that in reality they simply underestimated the actual error term but did everything else correctly. This could be because of discrepancies in flight-control instrumentation or some sensor, or they were simply trying to save money on bits and never really calculated how far off the missile could be after an error-term's worth of seconds of flight at a particular phase in its flight profile.

Bottom line is, the engineering discipline exists to solve this problem, and it is really no different from error handling in any guidance system. Putting a man on the moon, launching an ICBM at a target, shooting down a missile: all are essentially the same computer science problem from an error-management perspective. The PhDs already nailed this decades ago. There's no fundamental limitation to computing here, merely a failure or inability of the engineers on this project to apply the correct, known answer to the problem.
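The error-term idea in (c) is the same trick Bresenham's line algorithm uses. A minimal, first-octant sketch (illustrative only, not from any real guidance code):

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only line from (x0,y0) to (x1,y1), first octant only.

    The 'err' accumulator is the running error term: instead of
    tracking the exact real-valued line, we track how far we have
    drifted from it and correct with a unit step in y when needed.
    """
    assert x1 >= x0 and 0 <= (y1 - y0) <= (x1 - x0)  # first octant
    dx, dy = x1 - x0, y1 - y0
    err = 2 * dy - dx
    points, y = [], y0
    for x in range(x0, x1 + 1):
        points.append((x, y))
        if err > 0:
            y += 1              # the "stair step" correction
            err -= 2 * dx
        err += 2 * dy
    return points

pts = bresenham(0, 0, 6, 3)
print(pts)
```

No floats anywhere, yet the plotted line never strays more than half a pixel from the true line: the error is tracked explicitly and paid back before it can accumulate.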

I don't disagree with what you wrote. One thing, though: requirements are very fluid, and you have to ask whether the real problem is that "run 10 hours and then reboot" is a ridiculous requirement from the get-go. Soldiers aren't going to sit in the middle of a war zone and turn off the shields.

Arguably, when speccing out systems like this, the solution is probably not to build them, because they are really too complex to test for battlefield conditions. But that's crazy. So... what was the outcome? You put a sy

I'm not a serious developer and certainly not one that works on mission critical systems but I have a question:

Are there any symbolic math libraries that allow a program to compute and store its interim values symbolically until the final result is needed? (Like, as an AC mentioned earlier, Mathematica?) Of course there would be a memory overhead (but surely the entire Mathematica kernel wouldn't be needed), and performance might be much, MUCH slower than current "binary math" libraries, but surely in a d

The mpz module in the LGPL library GMP [gmplib.org] (not to be confused with a bitmap image editor) does arithmetic on large integers, and its mpq module represents rational numbers exactly as ratios of mpz integers. For example, 3.14 would become "157/50".
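For what it's worth, Python ships the same idea in its standard library. A small sketch using fractions.Fraction (illustrative; this is obviously the pure-Python analogue, not GMP itself):

```python
from fractions import Fraction

# Exact rational arithmetic: one tick is exactly 1/10 of a second.
tick = Fraction(1, 10)

# 100 hours of ticks, summed exactly: no drift at all.
total = tick * 3_600_000
print(total)                # 360000

# And 3.14 really is stored as a ratio, as with GMP's mpq:
print(Fraction(314, 100))   # 157/50
```

As long as every input is rational, every intermediate value stays exact; the cost is that numerators and denominators can grow without bound.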

This is an example of engineers and developers failing to draw up valid requirements, failing to develop to specification, and failing to test against real-world use cases.

Management undoubtedly shares an equal if not greater portion of the blame here. This is typical military-industrial complex, lowest-bidder contractor mentality at work, just another form of corporate welfare if the government doesn't turn around and punish shortfalls like this.

And this is why it is a good idea to take a numerical analysis course, or an assembly course that lets you play with floating-point arithmetic, as part of your CS electives. As much as I'd like to blame today's Java/.NET-oriented CS curricula (which seem to be fashionable now in many universities), universities have for quite a while barely paid any attention (if any) to the details of floating-point arithmetic.

It's the reporting that's garbage. It makes no sense at all. A system tracking missiles travelling at Mach 3 is keeping track of time to 0.1 sec accuracy?! Do you really believe that? Wanna buy a bridge?

0.1 sec at Mach 3 is 100m, so you'd never have a hope in hell of hitting a 3m-long target.

The problem isn't the people working for the defence company, who are hard-core PhDs with some very serious domain knowledge. The problem is people like yourself who are so math illiterate as not to be able to fact check a piece-of-shit story!

I could see designing the system to synchronize both launch times and observations with a timer tick (it wouldn't be surprising if the whole system was driven by the timer interrupt), and then you're not going to have an error due to the spacing between ticks.

I am a bit more dubious about the 24-bit thing, though. Was it fixed point or floating point?

I don't think it was a float. What would that be? A 16-bit mantissa, 1-bit sign, and 7-bit exponent would seem to be the likeliest bet for a 24-bit float. If so, then after about two hours, doing t += 0.1 would stop changing t, and the error would be much bigger.

So presumably it was fixed point. But if you're doing fixed point, instead of storing x, you store nx in an int, for some appropriate scaling factor n. And if you're going to do that, surely you'll choose n in a smart way, and in this case the obvious choice, as pointed out by many posters, is n=10. This is not only the obvious choice because it gets you more precision; it's the obvious choice because the easiest, most obvious, and most standard way of coding timers is to just increment a register with each tick. It would be silly, for instance, to let n=2^8 and then increment a register with round(0.1*2^8) = 0x1A on each tick. It would be a very unlikely assembly language programmer who would have put an add reg,1Ah opcode in interrupt handler code when inc reg would have worked.

Now maybe at some point the timer value would get converted to a float for computations. But that surely wouldn't be a 24-bit float.

So maybe the article has mangled things and it was not a 24-bit register but a 32-bit float, with a 23-bit stored mantissa (24 effective bits), 8-bit exponent, and 1-bit sign, and the "24" in the article came from the mantissa. That's a much more realistic choice. Still, the standard way to handle timers is to just increment a timer variable. So what I could see happening is this. There is a timer system variable t at full 0.1-second precision, incremented on interrupt. (That's how PCs used to work, and maybe still do, except the timer resolution was about 1/18 sec.) Then for their launch calculations, they do (float32)t / 10. And now they're going to get nasty roundoff errors as the mantissa fills up. At the 36-hour point, t is already over 20 bits long. So when you do a float divide by 10, you'll certainly have roundoff problems. But you're still not going to be more than one tick (0.1 sec) off, because each tick still adjusts the mantissa, while the article says they were 0.34 seconds off.

So I think something got mangled in the article. Or we had a really unlikely assembly language programmer who had floating-point code executed with every tick of a timer interrupt. But even if the interrupt is only at 10 Hz, that's just completely contrary to the instincts of an assembly language programmer. And this would have been done back in the heyday of assembly language programming, when one would try to optimize every clock cycle one could. (And, yes, I've worked with timer interrupt handlers, both on the Z80 and the 8086.)

You're just lying to yourself. The Patriots defense is awesome this year. I mean, was there really ANY point for the Titans offense to show up a couple of weeks back?

And the Scuds? C'mon, man. They let go their best man two seasons ago. The QB can't hit the broad side of a barn, and their entire wide-receiver corps has Jello hands anyway. The missile attack is a gadget play, pure and simple. Belichick sees right through that and you know it.

Haters need to stop all the hatin' and get on the Pats bus!!!!! GO PATRIOTS!

There's no way a real-time missile tracking system is going to be dealing with time at an accuracy of 0.1 sec.

A Patriot missile travels at about Mach 3 (~1000 m/sec), so a rounding error of 0.05 sec, even without any error accumulation, means you'd be off by 50m in position.

Who knows what the real story is versus the garbage that was reported, but even if there was a cumulative error, that's the fault of the programmer rather than a lack in a computer's ability to do math. You do your error analysis and use whatever accuracy is needed to keep the errors in a tolerable range.

The part about the system running for 100 hours was pure gibberish. Yes, we can all divide that by 0.1 sec, but what on earth does that have to do with a real-time tracking system tracking a target it acquired a few minutes ago?!

A better title for the story rather than "computers can't do math" would be "we can't do tech reporting".

A Patriot missile travels at about Mach 3 (~1000 m/sec), so a rounding error of 0.05 sec, even without any error accumulation, means you'd be off by 50m in position.

Perhaps the tracking radar has a 500m field of view at a range of X km (enough distance to launch a Patriot missile). It doesn't look at the target through a keyhole and just has to be in the general vicinity to detect/confirm the incoming Scud.

How about if you realized that there are two systems in this story?
1) Radar (0.1 s accuracy)
2) Patrio

Consider that the book was written in 1972. I was programming computers in 1972. I actually did a course in numerical analysis in 1972 and just re-read the first 10 pages or so. I happen to have read a master's thesis that came out of the Colorado School of Mines in which the author stated that Meadows' Runge-Kutta numerical integrations did not converge.

Yet that book is still often quoted. It's been flawed from the get-go. So consider something else: how fast were the machines that Meadows used? How big? What would be the MOST SOPHISTICATED model he could use at the time? How could _anyone_ take seriously predictions made by a primitive model run on such a machine?

Witness: the current discussion about global warming and climate change. The change in CO2 over the last 100 years is about 100 ppm, if you can believe the data. This is 100/1,000,000 = 0.0001. Now, the thing is this: a 32-bit float holds about 6.9 digits of precision; let's call it 7 digits. If one were to add a whole number of some kind to the fractional change of the CO2, as measured relative to the total gases in the atmosphere, then one has 7 - 4 = 3 digits or fewer to work with.

Of course one can use a double precision float. That isn't my point. One has to be an EXPERT in order to avoid huge problems with propagating rounding errors.

It's not just about pretending computers use base 10 when they don't; it's about knowing the actual properties of a number of type float and the consequences of using it.

In the case of that rocket, I suspect the rounding error could be solved by normalizing everything so the timeline is not in seconds but actually in clock ticks, as accurately as they can be determined, of course.

But in my career I have seen so few programmers who can do this that I've never even needed to look at a finger or a toe for something to count on. Nada - never met one.

I'll give another example. More than one project team that I worked with had no idea how floats even work! To sit there and use floats for their Accounts Payable and Accounts Receivable, and then say they can't understand why nothing will balance? Arrghh! IMHO it's downright incompetence. They needed to use packed decimal (COMP-3), which COBOL supports and which is base 10, or normalize all their money into pennies and handle the decimal point when the data was read in and printed.

LISP, Scheme, Haskell, Mathematica, Maple, and plenty of other languages support arbitrary-precision rational numbers as built-in types. This fixes all rounding errors involving rational numbers (including fractions). If irrational numbers like pi or e, or transcendental functions, are necessary, then there will always be inherent error in the representation, and the programmer has to know how to deal with that error and calculate the expected error of a sequence of operations. If you want to get fancy, you can use an algebraic language like Mathematica to symbolically solve your equations and maintain perfect accuracy with symbolic representations of irrational and transcendental numbers.

While I agree that the design decisions which lead to this were poorly made, this error was common knowledge.

The Patriot system _must_ be restarted every X days, exactly due to this bug. This is documented and everything.

While the initial error was with the people who created the Patriot system, the soldiers who were assigned to the system were the ones who let a documented bug with a known-good workaround become a loss of life.

Except that people tend to rely on computers, and take risks they would not have otherwise taken. I am not saying that the number of deaths resulting from computer errors is going to be higher than other deaths, but that it is not as simple as "every death caused by a computer error is a death that would have happened before computers." If you knew your enemy was launching missiles at you, and you had no missile defense, what would you do to protect yourself? What would you do if you did have missile def

Because military computers are 20 years out of date to start with. Heck, even the awesome modern Land Warrior hardware is 10 years out of tech date. They could probably shave 5 pounds off the hardware by using modern chips and displays.

Military spec is only good at rugged; at keeping up to date with the best, it is far behind.

Regardless, what isn't possible is to design a system that can accurately track and shoot down missiles in flight. As the Patriot defence system so patently demonstrated.

You're right. Just as the failure of Samuel Langley's aircraft demonstrated that man would never fly, the failure of an anti-aircraft missile to destroy only half of the ballistic missiles (targets moving at what, twice the speed of the targets it was designed to destroy?) demonstrates that ABMs will never work.

OK, we'll go with 0% success. My point is that the failure of any one implementation does not invalidate the concept. Edison tried hundreds of wrong ways to make a light bulb, none of which demonstrated that the light bulb was unworkable.

Oh, and the Scud hunting in Gulf One was largely an air exercise, as I recall, and of course they went after the launchers. It's always preferable to destroy the enemy on the ground (or in harbor, or asleep in barracks) than when they're incoming. The Japanese didn't bomb