On 11/20/2013 4:29 AM, Martin Ward wrote:
> Apart from these cases: do you still disagree with the assertion?
> Is there any (semantic) behaviour which should be left as
> "implementation dependent"?

1. The size of a pointer variable.
2. The size of an integer variable intended to be the same size as a
pointer.
3. The reinterpretation of a function pointer as a data pointer.
4. The accuracy of non-fundamental floating point computations (e.g.,
exp). I think it would be reasonable to constrain them, though.
5. The contents of a memory location in the presence of a data race.
6. The correctness of floating point computations on denormalized
numbers. (I know NVIDIA GPUs used to handle denormals incorrectly, and
I think some SSE implementations still don't).
7. The order, size, etc. of functions (this is observable if you can
take the address of a function and compare or subtract pointers).
8. The order of variables either in global memory, heap-allocated
memory, or stack-allocated memory.
9. If you allow type punning, semantics that would require a specific
endianness of types.
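Point 9 is easy to demonstrate. A minimal C sketch (the function name is
mine; union-based type punning is sanctioned in C, though this exact pun
is technically undefined in C++):

```c
#include <stdint.h>

/* Returns 1 on a little-endian host, 0 on a big-endian one, by punning
   the bytes of a 32-bit word through a union.  A language definition
   that allows this kind of punning cannot fully specify the result
   without also mandating a byte order. */
int host_is_little_endian(void) {
    union {
        uint32_t word;
        uint8_t bytes[4];
    } u = { .word = 0x01020304 };
    return u.bytes[0] == 0x04;  /* low byte stored first => little-endian */
}
```

Pinning the answer down in the language would penalize every CPU with
the other byte order, which is exactly the lobbying situation described
below.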

> The usual reason for "implementation dependent" behaviour is that code
> compiles to one behaviour more efficiently on one CPU, and to some
> other behaviour more efficiently on some other CPU. So, rather than
> "penalise" one CPU, the language designers give way to lobbying from
> the CPU manufacturers and allow either behaviour (or any behaviour).
> The result:

The worst impacts of undefined or implementation-defined behavior come
not from unreliable hardware, but from optimizers that gleefully trash
the intent of your code in an attempt to make it faster. Strict aliasing
in C is perhaps the worst offender in this regard.
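To illustrate (a sketch; the function names are mine):

```c
#include <string.h>

/* Under C's strict aliasing rule the compiler may assume an int* and a
   float* never point to the same storage.  If f actually aliases i,
   the behavior is undefined, and an optimizer is entitled to return 1
   without reloading *i after the store through f. */
int alias_trouble(int *i, float *f) {
    *i = 1;
    *f = 0.0f;
    return *i;  /* may be folded to "return 1" */
}

/* The sanctioned way to reinterpret bits: memcpy, which modern
   compilers recognize and optimize away. */
float pun_int_as_float(int i) {
    float f;
    memcpy(&f, &i, sizeof f);
    return f;
}
```

The programmer's intent in the first function is perfectly clear; the
optimizer is licensed to ignore it anyway.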

> On the other hand: suppose the language designers pick on a certain
> behaviour and define *that* as the exact semantics of the programming
> construct in question. The result:

Or, you can get extremely inefficient computation on all CPUs. Suppose
you had a language that required a trap on arithmetic overflow, and
required the trap to occur at a very specific place in the computation.
The compiler would then have to perform range analysis on all variables
to prove that they cannot overflow before it could do any code motion or
code elimination--effectively negating the most powerful optimizations a
compiler can do.

If you allow a little bit of undefined behavior--if you let compilers
trap at any point before the overflowed value would be observed--you
again allow code elimination and code motion without the range analysis,
while achieving very nearly the same result.
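To make the trade-off concrete, here is a sketch in C using GCC/Clang's
__builtin_mul_overflow (a toolchain assumption; the function names are
mine):

```c
#include <stdlib.h>

/* Precise-trap semantics: the check sits at the exact program point of
   the multiply, so it runs every iteration and the loop-invariant
   x * y cannot be hoisted without moving a potential trap. */
long sum_strict(long x, long y, long n) {
    long s = 0;
    for (long i = 0; i < n; i++) {
        long p;
        if (__builtin_mul_overflow(x, y, &p))
            abort();  /* must fire here, at iteration i */
        s += p;
    }
    return s;
}

/* Deferred-trap semantics: the trap may fire any time before the
   overflowed value is observed.  Now the multiply (and its check) is
   trivially hoisted out of the loop. */
long sum_relaxed(long x, long y, long n) {
    long p;
    int ovf = __builtin_mul_overflow(x, y, &p);  /* checked once */
    long s = 0;
    for (long i = 0; i < n; i++)
        s += p;
    if (ovf && n > 0)
        abort();  /* trap only if the value was actually used */
    return s;
}
```

Both functions compute the same sums and trap on the same inputs; only
the second gives the optimizer room to move code.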

--
Beware of bugs in the above code; I have only proved it correct, not
tried it. -- Donald E. Knuth