>> Chris F Clark <cfc@world.std.com> writes:
>>
>> However, you can have "too much" precision if you can't have it
>> universally. Take the following fragment of a basic derivative:
>>
>> 100 input a, b
>> 110 let x = a*a - b*b
>> 120 print x
>> 130 end

>> ... If your implementation keeps either a*a
>> or b*b in a register before performing the subtraction (and does not
>> store both values into memory), then it will get the wrong answer (it
>> will not get 0, which is the right answer according to both real
>> arithmetic and its computer approximation).

This is a good example.

In low precision, we have

l1 = a*a
l2 = b*b
x = l1 - l2

In mixed precision, we have

h1 = a*a // extended precision temp
l2 = b*b
x = h1 - l2

The difference between these results is h1 - l1, which is equal to the
roundoff that results from converting a*a from high to low precision.

As Chris points out, when we run the program with a = b, we get a
result of 0.0 from the low precision version, which looks very good.
But the floating point precision does not justify interpreting this as
an exact zero. All that we can say is that it is a value that is near
zero.

Now run the program with a = b + delta, where 0 < delta < sqrt(h1 - l1).
This time the mixed precision version gives a strictly more accurate
result than the low precision one. The low precision version
continues to give us a zero.

Run the program once more with a chosen such that a*a = +inf in low
precision, but does not overflow in extended precision. Again the
mixed precision version gives a strictly more accurate result than the
low precision one.

I would argue that if the programmer is surprised by any of these
outcomes, then the program has been written to assume strict equality
where it should assume only approximate equality. In other words, it
is attributing undue significance to the low-precision floating point
format.

My reading of the programming manual for the PowerPC is that the
internal registers that hold the intermediate result for a
multiply-add instruction have extended precision. If this is true,
and if it is true that occasional use of extended precision is
'wrong', then there is virtually no circumstance under which a
compiler can correctly generate the PowerPC fmadd and fmsub
instructions, for example. This strikes me as a very severe
interpretation of floating point semantics.