Robert A Duff <bobduff@world.std.com> wrote:
>The size in bits of the result
>should be determined by the size of the operands, and not their "type".

jfc@mit.edu (John Carr) writes:
>Unless you move away from a static typed language, the C model has a
>great advantage. If the result of an operation is defined to have
>infinite precision a long expression gains bits for every operator and
>the code to implement it is big and slow.

>To get the exact result for

> (x + y) * (a + b) / d

>where all variables are 32 bits requires 66 bit arithmetic. Addition
>adds 1 bit and multiplication doubles the number of bits. The quotient
>has the same number of bits as the dividend.
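The quoted bit counts are easy to check in Python, whose integers are
unbounded; the names below are just illustrative:

```python
n = 32
max32 = (1 << n) - 1        # largest 32-bit unsigned value

s = max32 + max32           # x + y at its largest
assert s.bit_length() == 33 # addition adds one bit

p = s * s                   # (x + y) * (a + b) at its largest
print(p.bit_length())       # -> 66: multiplication doubles the bits
```

The quotient p // d can be no wider than p itself, so the whole
expression indeed tops out at 66 bits.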

Robert Duff's proposal should be refined: the size in bits of the
result should be determined by the size of the operands and by the
type (size) of the variable that receives the result. Multiplying two
n-bit numbers and storing the result in an n-bit variable then doesn't
need a 2n-bit intermediate result. To achieve this, the compiler
should propagate size information both forwards (upwards in the
expression tree) and backwards (downwards in the expression tree): the
forwards information describes the maximum number of bits required to
hold the full precision, the backwards information the number of bits
required by the destination. The compiler can choose any number of
bits that is at least the _minimum_ of these two numbers, so you don't
get the explosion in the size of intermediate results that John Carr
fears.
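The two passes can be sketched as follows; this is a toy model with
made-up names, covering only + and * (division would need more care,
since the quotient's low bits depend on the full dividend):

```python
def add_bits(a, b):
    # adding an a-bit and a b-bit value needs at most one extra bit
    return max(a, b) + 1

def mul_bits(a, b):
    # multiplying an a-bit and a b-bit value needs a + b bits
    return a + b

class Node:
    def __init__(self, op=None, left=None, right=None, width=None):
        self.op, self.left, self.right = op, left, right
        self.width = width   # leaf nodes: declared operand width
        self.full = None     # forwards info: bits for full precision
        self.chosen = None   # width actually used for the intermediate

def forward(n):
    """Upwards pass: bits needed to hold the full-precision result."""
    if n.op is None:
        n.full = n.width
    else:
        forward(n.left)
        forward(n.right)
        op = {'+': add_bits, '*': mul_bits}[n.op]
        n.full = op(n.left.full, n.right.full)
    return n.full

def backward(n, dest_bits):
    """Downwards pass: cap each node at what the destination needs."""
    # Any width at least min(full precision, destination) is safe.
    n.chosen = min(n.full, dest_bits)
    if n.op is not None:
        # For + and *, the low dest_bits of the result depend only on
        # the low dest_bits of the operands (modular arithmetic), so
        # the cap propagates down the tree.
        backward(n.left, n.chosen)
        backward(n.right, n.chosen)
```

For (x + y) * (a + b) with 32-bit leaves, the forwards pass reports 66
bits of full precision at the root, but with a 32-bit destination the
backwards pass caps every intermediate at 32 bits.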

I much favour the Pascal idea of specifying exact number ranges, e.g.

var x : -13342..275249;

The compiler can choose any word size that contains this range for
implementing x. This will typically be the register size if this is
large enough, as this would give the best speed. For arrays, the
'packed' keyword tells the compiler that it should try to optimize the
array for size rather than speed, but it gives no guarantee that the
absolute minimum space is used.
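A compiler's choice can be modelled like this (a sketch with
hypothetical helper names): find the minimum two's-complement width
that holds the declared range, then round up to a machine word size,
or to the smallest fitting size when the data is packed:

```python
def min_bits(lo, hi):
    """Smallest two's-complement width that holds lo..hi."""
    n = 1
    while not (-(1 << (n - 1)) <= lo and hi <= (1 << (n - 1)) - 1):
        n += 1
    return n

def storage_bits(lo, hi, packed=False, word_sizes=(8, 16, 32, 64)):
    need = min_bits(lo, hi)
    if packed:
        # 'packed': optimize for size, smallest word size that fits
        return next(w for w in word_sizes if w >= need)
    # default: optimize for speed, use the (assumed 32-bit) register
    # size whenever the range fits in it
    return next(w for w in word_sizes if w >= max(need, 32))
```

For -13342..275249 the true need is 20 bits, rounded up to a 32-bit
register; a packed -32768..32767 field gets exactly 16 bits.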

Pascal does have an 'integer' type that is machine specific. I would
actually prefer omitting this altogether. If you think it is tedious
to write -32768..32767 every time you declare a variable, you are free
to declare

type integer = -32768..32767;

which gives the compiler useful information when moving to a different
platform. Note that nothing prevents the compiler from using 32-bit
words to implement this type. If you want a type that is exactly 16
bits (for bitwise operations etc.), one could add an 'exact' keyword
specifying that the compiler must use the least number of bits that
can hold the range.