The moderator opined:
> [I believe that the problem is that different machines do FP arithmetic
> differently, e.g., x86 promotes everything to 80 bits, IBM 360/70/90
> does it in the precision of the operands. -John]

While different architectures may have a preferred mode of operating
on floating point numbers, that does not mean a programming language
shouldn't impose requirements on how the floating point arithmetic
behaves. For example, while the x86 has 80-bit double-extended
registers, it is possible, with some effort, for the x86 to implement a
"strict evaluation" policy where double combined with double yields
double and float combined with float yields float. On the x86, it is
easy to have both float and double operands promoted to double and
then written out as either float or double.

There is some need for a language to "do whatever runs fast on this
particular hardware." However, there is also a need to be able to
specify exactly how floating point values are combined. I think not
providing for this latter requirement is a design flaw.

-Joe Darcy
darcy@cs.berkeley.edu
[I think we're in agreement. On the one hand, the tighter the
floating point model is specified, the easier it is to write robust
and reliable floating point code. On the other hand, the tighter
the model is specified, the more likely that on any given platform,
some part of the model will mismatch what the hardware does, software
will have to do something weird to fix it up, programs will run
slowly, and programmers, who have a bad habit of preferring fast wrong
answers, won't use your well specified language. Look at the fights
about the Java floating point spec, for example. I don't know of any
way to reconcile the desire for a tight spec and a desire for code
that maps efficiently onto whatever hardware you happen to have. In
the long run, I hope that it means that people build FP hardware to
match reasonable specs, but history is not encouraging. -John]