> My reading of the programming manual for the PowerPC is that the
> internal registers that hold the intermediate result for a
> multiply-add instruction have extended precision. If this is true,
> and if it is true that occasional use of extended precision is
> 'wrong', then there is virtually no circumstance under which a
> compiler can correctly generate the PowerPC fmadd and fmsub
> instructions, for example. This strikes me as a very severe
> interpretation of floating point semantics.

You can either see it as an "infinite precision intermediate result"
or, which I think is more useful, see the fmadd operation as another
"basic" operation, so that the "basic" operations are a+b, a-b, a*b,
a/b, and a*b+c, a*b-c. Each basic operation rounds exactly once, so
fmadd rounds the sum once instead of rounding the product first.

In the case of the PowerPC, the instruction is definitely useful. It
is faster than two separate instructions, and the result is guaranteed
to be as close to, or closer to, the mathematical result than two
separate instructions would give. Being faster is always good; the
different result doesn't matter in 90% of cases, is an advantage in
9%, and is a problem in the remaining 1% :-(

As an example where it is bad: normally, if a*b is mathematically
greater than or equal to c*d, then the computed a*b - c*d >= 0. This
need not hold when the subtraction is contracted into an fmadd,
because c*d is rounded to a double while a*b enters the fused
operation at full precision; if c*d happened to round upward past the
exact value of a*b, the result comes out negative.

In the forthcoming C9X standard for the C language, and in existing
compilers for the PowerPC, the programmer can tell the compiler
whether it is allowed to use these instructions or not. I think in C9X
the syntax is

#pragma fp_contract on/off

or something similar, which allows the compiler to use basic
operations other than +, -, *, /. I am not sure how far this is
allowed, that is, whether it applies only where the *+ pattern is
directly visible in an expression, like a*b + c, or also where it is
not directly visible, like