This is only my initial reaction, but this strikes me as a really bad idea,
because it violates the principle that correct programs should be easy to
write and the burden of micro-optimization should be on programmers who need
to do micro-optimization, not on all programmers. It seems to me that you're
making the language significantly more complicated and hard to understand,
just to make compiler optimization a little easier. The specific aspect I
object to is having two non-id?, non-operationally-equivalent integers with
the same mathematical value.
Can't we find a less semantically violent way to enable the desired compiler
optimization?
This integer business isn't going to come out clean on all dimensions
however we slice it. The sad truth is that existing (or
foreseeable-future) hardware provides efficient arithmetic for a certain
class of integers, but this class isn't closed under the major arithmetic
operations. So either you take an exception when one of these operations
overflows or you quietly roll over into an infinite-precision format that
is *much* less efficient.
So how should Dylan deal with this? Let's try to look at all the
possibilities and see which best meets the goals of Dylan.
1. The current Dylan approach (as I understand the book) would quietly roll
over to an infinite precision format whenever an overflow occurs. This is
indeed the easiest system in which to write correct code. If you don't
care at all about efficiency, you never have to worry about overflows or
the size of your integers.
But the price for this bit of added safety is very high. First, you have
to keep the bignum code around in the runtime system at all times, and that
code is large and complex. Second, no arithmetic operations on integers
can be open-coded, since you have to allow for bignum operands and
overflows everywhere. This is not just micro-optimization -- it can slow
these arithmetic operations down by a factor of 4 or more, whether or not
your program ever actually sees a bignum. You're paying a lot for a
complex facility that most programs do not use.
I think that this option is unacceptable if Dylan aspires to be a language
for writing real code that is competitive with C in efficiency.
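For concreteness, here is a sketch of option 1's behavior using Python, whose built-in integers happen to implement exactly this policy (machine-word arithmetic that quietly promotes to arbitrary precision):

```python
# Python's built-in int mirrors option 1: arithmetic that would
# overflow a machine word quietly rolls over into an
# arbitrary-precision (bignum) representation.
a = 2**62          # fits comfortably in a 64-bit machine word
b = a + a          # exceeds 63 signed bits; promoted silently
print(b)           # 9223372036854775808 -- no error, no wraparound
print(b == 2**63)  # True: the result is exact
```

The programmer never sees the transition, which is exactly why the runtime must keep the bignum code around and why the compiler cannot open-code the common case.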
2. We could go with the system described in 1 above, but add type
declarations a la Common Lisp. The problem is that it is not sufficient
just to declare your variables to be fixnum (or small-integer). You also
have to declare the output of each arithmetic operator to be fixnum, since
two fixnums might produce a bignum. This is true no matter how smart your
compiler is. Alternatively, you could declare the variables to have much
smaller ranges, such as (integer -1000 1000), and a sufficiently smart
compiler could then prove that certain arithmetic operations would not
overflow. But this puts a big load on both the programmer and the
compiler.  Note, for example, that the fact that a loop counter starts at
1000 and counts down to 0 does not allow the compiler to assume fixnum
arithmetic unless it can also prove that no code within the body of the
loop alters the counter variable.
Again, I wouldn't call this micro-tuning.  You have to put these declarations
in everywhere unless you're willing to take a BIG hit in performance, and
you have to get them right. Once you start putting in declarations to get
reasonably efficient code, the risk of getting something wrong is greater
than the risk of getting hit unexpectedly by an overflow, so writing
correct code becomes harder, not easier.
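The core problem with option 2 can be shown in a few lines. This Python sketch (the `FIXNUM_MAX` bound and the `declared_fixnum` helper are invented for illustration) demonstrates that declaring both operands to be fixnums tells the compiler nothing about the result:

```python
# Assumed fixnum range for a 64-bit machine with tag bits;
# the exact bound is an illustration, not a Dylan constant.
FIXNUM_MAX = 2**61 - 1
FIXNUM_MIN = -2**61

def declared_fixnum(x):
    """Analog of a fixnum type declaration: check once, then trust."""
    assert FIXNUM_MIN <= x <= FIXNUM_MAX
    return x

# Both operands satisfy the fixnum declaration...
x = declared_fixnum(FIXNUM_MAX)
y = declared_fixnum(1)

# ...but their sum does not, so the result needs its own declaration
# before the compiler may use raw machine arithmetic.
print(FIXNUM_MIN <= x + y <= FIXNUM_MAX)  # False
```

This is why, in Common Lisp practice, every intermediate result has to be wrapped in its own fixnum declaration, no matter how clever the compiler is.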
3. The opposite extreme is to throw bignums out altogether. If any
operation overflows, a condition is signalled. This does make it a bit
harder for the programmer to write correct code, since there is a new kind
of error that the programmer must worry about. However, such errors are
rare (especially if we mandate a minimum small-integer size of 28 bits or
so), and usually easy to rule out. In exchange for this new risk, we get
some major benefits: the runtime system no longer needs the bignum code,
and the programmer gets maximally efficient integer code without any
hassles over type declarations.
Most programmers (now including me) feel that this trade-off is a good one.
The risk of an occasional overflow is a small price to pay for code that is
maximally efficient without any great effort on the programmer's part.
Most conventional languages adopt this option, and even novice programmers
seem to thrive in this regime. Having a good condition-handling system
makes overflows that much less fearsome when they do occur.
However, there are occasional situations in which bignums and ratios of
bignums are useful after all, so I don't think that 3 is an optimal choice
for Dylan.
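Option 3's signalling arithmetic is easy to sketch. Here is a Python version (names invented for the sketch) using the 28-bit minimum small-integer size suggested above; a real implementation would of course use the full machine word minus tag bits, and would open-code the in-range case:

```python
# Option 3 in miniature: small-integer arithmetic that signals a
# condition on overflow instead of promoting to a bignum.
SMALL_MAX = 2**27 - 1    # 28-bit signed small integers, as suggested
SMALL_MIN = -2**27

def small_add(a, b):
    r = a + b
    if not (SMALL_MIN <= r <= SMALL_MAX):
        # Stand-in for signalling an overflow condition.
        raise OverflowError(f"small-integer overflow: {a} + {b}")
    return r

print(small_add(100, 200))        # 300
try:
    small_add(SMALL_MAX, 1)
except OverflowError as e:
    print("signalled:", e)
```

With a good condition system, a handler can catch the signal and recover; without one, the program stops with a comprehensible error rather than silently wrapping around.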
4. Rob's proposal basically makes 3 the default, but allows you to pull in
bignums if you want to use them. This seems like a nice, intuitive model
to me: if you don't traffic in bignums, they aren't going to appear
unbidden. In that case, you don't have to keep the bignum code around (or
load it in the first place), and you don't have to write a lot of arcane
declarations to get good performance. But if you want to play with
bignums, there they are.  The downside, as you point out, is that overflow
policy is now bundled into the data type, so there are two different
flavors of the integer 5.  This can be confusing, though less so if we make
these two types look slightly different, as I suggested.
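To make the "two flavors of 5" concrete, here is a Python sketch of option 4. The `Fixnum` class and its range are invented for illustration: plain `int` plays the maybe-bignum role, while `Fixnum` carries the trapping policy in its type:

```python
# Option 4 sketch: overflow policy lives in the data type.
class Fixnum(int):
    MAX = 2**61 - 1   # assumed tagged-word range, for illustration
    MIN = -2**61

    def __add__(self, other):
        r = int(self) + int(other)
        if not (Fixnum.MIN <= r <= Fixnum.MAX):
            raise OverflowError("fixnum overflow")
        return Fixnum(r)

print(Fixnum(2) + Fixnum(3))   # 5 -- but a *trapping* 5
print(2**62 + 2**62)           # bignum flavor: silently exact
```

The two results above are both mathematically ordinary integers, but only one of them will signal if pushed past the fixnum range; that is precisely the non-operational-equivalence being objected to at the top of this message.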
5. We could go with a dual overflow policy, as in 4, but hide the overflow
signal-or-bignum selection somewhere other than in the types of the
operands. For example, we could have two separate sets of arithmetic
operators, one for fixnums only (signalling an error on overflow) and one
that would allow, and sometimes produce, bignums. In fact, many Common
Lisp programmers end up doing this with macros: they define a +fix macro
that calls + with the inputs and output declared to be fixnums, and so on.
I think that this is somewhat uglier than 4, but I could live with it.  It
certainly doesn't look very object-oriented to have separate function names
for the fixnum-only and maybe-bignum versions of plus.
If we go this way, I would argue that the fixnum-only versions should get
the good names (+, -, etc.), since they are what most programmers will use
most of the time.
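A sketch of option 5, again in Python with invented names: one integer representation, two operator families, in the spirit of the Common Lisp `+fix` macro idiom mentioned above:

```python
# Option 5 sketch: the policy lives in the operator, not the type.
FIX_MAX = 2**61 - 1   # assumed fixnum range, for illustration
FIX_MIN = -2**61

def plus_fix(a, b):
    """Fixnum-only plus: signals on overflow, could be open-coded."""
    r = a + b
    if not (FIX_MIN <= r <= FIX_MAX):
        raise OverflowError("fixnum-only + overflowed")
    return r

def plus_big(a, b):
    """Maybe-bignum plus; Python ints are already arbitrary-precision."""
    return a + b

print(plus_fix(1, 2))          # 3
print(plus_big(FIX_MAX, 1))    # exact, one past the fixnum range
```

Under the naming argument above, `plus_fix` would get the good name `+` and the maybe-bignum version would get the longer one.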
6. One other option is to declare, for whole chunks of code, that
arithmetic is to be fixnum-only or maybe-bignum. This must be done at
compile-time if you want the fixnum-only code to be efficiently open-coded.
The problem here is that this is a rather blunt axe. You might want to do
some maybe-bignum addition within a loop whose counter is incremented and
tested using fixnum arithmetic. A big-switch approach doesn't give you
fine enough control to do this.
Those are all the options I can think of. For my own use, I would rate
these options in descending order as follows: {4 5 6 3 2 1}. Probably most
of the C++ programmers we want to appeal to would come up with a similar
rating (or maybe would put 3 at the top if they've never learned to like
bignums). Common Lisp old-timers might rank 2 first, unless, like me,
they've learned to hate these declarations after years of putting them in
for efficiency.
-- Scott
===========================================================================
Scott E. Fahlman
School of Computer Science
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213
Internet: sef+@cs.cmu.edu