oko1 has asked for the
wisdom of the Perl Monks concerning the following question:

Hi, all: I'm getting a disturbingly large discrepancy when running an identical script on different machines, and I'm hoping that someone can suggest a cause and perhaps a fix. I suspect a possible bug in my perl version.

Following on from ig's comment, a couple of alarm-bell-ringing items in relation to your code, data and results should be mentioned:

Sums and differences of floats can easily lead to large errors if not managed correctly, especially in the context of exponentiation.

Output values with more significant digits than seem reasonable given the input data are a cause for concern.
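The first of these items is easy to make concrete: adding a small value to a large one can discard it entirely, so a later subtraction returns zero instead of the small value. A minimal demonstration, with values chosen to sit just above the 2**53 limit of a 64-bit double:

```perl
# At 1e16 the spacing between adjacent doubles is 2, so adding 1 is a no-op
# and the subtraction that "should" recover it returns 0 instead.
my $big   = 1e16;
my $small = 1;
my $naive = ( $big + $small ) - $big;
printf "(%g + %g) - %g = %g\n", $big, $small, $big, $naive;    # 0, not 1
```

This is exactly the kind of cancellation that reordering a calculation chain can avoid.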

This sort of code can be very tricky to get right: careful attention needs to be paid to the expected range of values at each step, and consideration given to changing the calculation order to minimise the errors propagating through the calculation chain. Using a higher-precision numerical representation is a quick way around what can otherwise be a difficult problem.

An alternative to running with more significant digits is to rewrite your code to make use of Number::WithError. This module knows how to propagate errors through most common kinds of operations, and it works with BigFloat numbers as well.
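Number::WithError is a CPAN install; as a core-only sketch of the "more significant digits" route it can be combined with, Math::BigFloat (which ships with Perl) does the arithmetic in decimal at whatever accuracy you ask for:

```perl
use Math::BigFloat;

# Request 40 significant digits; 1/3 is then carried to that accuracy
# rather than the ~16 digits of a native double.
Math::BigFloat->accuracy(40);
my $third = Math::BigFloat->new(1) / 3;
print "$third\n";
```

The cost is speed, but for a one-off numerical script that is usually a cheap trade.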

FWIW: I get the same results on i686 Windows as you get on i686 Linux, which suggests to me that the difference is down to differences in the floating-point hardware rather than either the way Perl is built or the underlying CRT math functions.

I thought for a while that it might be down to the IEEE rounding mode in use, but I tried it with all 4 modes and, whilst the results do vary, the differences are far less than you are seeing:

Thanks, all, for the very useful responses (I'm responding to BrowserUk's reply specifically because it's so detailed and helpful in so many aspects) - this was very useful for both confirmation and more direction in deciding where to look for the error. I'm not all that familiar with the guts of Perl, but it's looking likely that there's a major compilation difference responsible here: either single-precision lookup tables, or perhaps a radically different math lib. Annoying that something like that could affect a Perl program... but on the other hand, good to learn that it can.

Again, thank you all very much.

--
"Language shapes the way we think, and determines what we can think about."
-- B. L. Whorf

either single-precision lookup tables, or perhaps a radically different math lib. Annoying that something like that could affect a Perl program...

I'm very unsure of my ground here as I'm not familiar with the other platforms, but it might not be software--Perl or the libs--but the floating-point hardware. If their FPUs are only single precision, that might account for the results.

That said, my best efforts to perform the calculation in single precision don't get close to the inaccuracies you're seeing:

It looks to me as if all of your @x, @y, $v's, $m's and $d are integers. $A, $B, and $C are all integers divided by the same number ($d). You should be able to rewrite your calculations so that you factor out the division by $d -- if I'm not mistaken, skipping the three divisions by $d and replacing the adjustment of $C with $C -= $tgt * $d; should do this.

This reduces the number of floating-point calculations, and hence the rounding errors. Of course, there may still be floating-point calculations if any of the intermediate integers becomes "too large". (Print out $A, $B, $C and $d to make sure.)
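A sketch of that refactoring, with made-up integer sums standing in for the OP's actual data (all the $sum_* names and values here are hypothetical):

```perl
# Hypothetical integer inputs; in the real script these would be the exact
# integer sums accumulated from @x and @y.
my ( $sum_a, $sum_b, $sum_c, $d, $tgt ) = ( 123456789, 98765432, 87654321, 7, 5 );

# Original shape: three divisions by $d, each introducing a rounding.
my $A_f = $sum_a / $d;
my $B_f = $sum_b / $d;
my $C_f = $sum_c / $d - $tgt;

# Refactored: stay in exact integer arithmetic and fold $tgt into $C instead.
my $A = $sum_a;
my $B = $sum_b;
my $C = $sum_c - $tgt * $d;

# Anything homogeneous in (A, B, C) -- e.g. the sign of the discriminant
# B*B - 4*A*C, which the common factor 1/$d merely scales by 1/$d**2 --
# can now be evaluated without any rounding at all.
print( ( $B * $B - 4 * $A * $C ) <=> 0, "\n" );
```

The divisions reappear only at the very end, if the final answer genuinely needs them, so each one rounds at most once.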

The adage that I heard was: “Floating point numbers are like piles of dirt on a beach. Every time you pick one up and move it around, you lose a little dirt and pick up a little sand.”

Every implementation ought to produce the same answer, within the useful number of significant digits, for most calculations. But the more calculations you do (and depending on exactly how you do them), the more the results will “drift” toward utter nonsense.

And I truly think that you should expect this from any binary floating-point implementation. There are two classic ways that applications (accounting applications in particular) counter this:

Binary-Coded Decimal (BCD): The calculations truly are performed in decimal, using 4 bits per decimal digit. (COBOL turned this into a science.)

Scaled Integers: The calculations are performed using integer arithmetic, and the result is understood to be “multiplied by (say...) 10,000,” giving you a fixed precision of (say...) 4 digits to the right of the decimal point. (Microsoft Access uses this strategy in its “Currency” data type.)
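The scaled-integer idea takes only a few lines of Perl. A minimal sketch using cents as the unit, so every addition is exact:

```perl
# Prices held as integer cents ($19.99, $4.99, $12.50) -- no float in sight,
# so the sum cannot pick up any rounding error.
my @prices_cents = ( 1999, 499, 1250 );
my $total_cents  = 0;
$total_cents += $_ for @prices_cents;

# Convert to dollars only at the edge, when formatting for display.
printf "Total: \$%d.%02d\n", int( $total_cents / 100 ), $total_cents % 100;
```

The one division happens at output time and never feeds back into the arithmetic.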

Even so, errors can accumulate. This can be further addressed by algorithms such as “banker’s rounding.” There is, of course, the (probably apocryphal) tale of an intrepid computer-programmer who found a way to scoop all of those minuscule rounding-errors into his own bank account...
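Banker's rounding (round half to even) is itself easy to sketch; this naive version is an illustration only, and assumes moderate magnitudes where floor-plus-one is still exact:

```perl
use POSIX qw(floor);

# Round half to even: 0.5 -> 0, 1.5 -> 2, 2.5 -> 2, so exact ties don't
# all drift upward the way they do under the schoolbook round-half-up.
sub round_half_even {
    my ($x) = @_;
    my $f    = floor($x);
    my $frac = $x - $f;
    return $f     if $frac < 0.5;
    return $f + 1 if $frac > 0.5;
    return ( $f % 2 == 0 ) ? $f : $f + 1;    # exact tie: go to the even neighbour
}

print join( " ", map { round_half_even( $_ + 0.5 ) } 0 .. 4 ), "\n";    # 0 2 2 4 4
```

Because ties round up and down equally often, the accumulated bias over many roundings averages out to roughly zero.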

Float-binary can never be a “pure” data representation. It is well-understood that the fraction 1/3 cannot be precisely expressed as a decimal number. Similar artifacts occur for other fractions in other bases, and, so they tell me, for base-2 floats, one of those unfortunate numbers is 1/10. (“So I have been told.” I don’t have enough geek-knowledge to actually know for sure...)
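That one can be verified in a one-liner: 1/10 is a repeating fraction in base 2, so every 0.1 literal is already slightly off, and the error surfaces as soon as you accumulate a few of them:

```perl
# Ten copies of the closest double to 0.1 do not sum back to exactly 1.
my $sum = 0;
$sum += 0.1 for 1 .. 10;
printf "%.17g\n", $sum;                        # a value just under 1
print $sum == 1 ? "equal\n" : "not equal\n";   # not equal
```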
