I'd like a higher max size limit for static arrays:
uint[10_000_000] arr;
For the LDC compiler this is a completely arbitrary limit; it can support higher values.
I'd like ldc to be free to use a higher limit.
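For reference, here is a sketch of the workaround that is needed today (purely
illustrative: a GC-allocated dynamic array is not subject to the fixed-size
limit, but of course it is no longer a static array):

import std.stdio;

void main()
{
    // uint[10_000_000] arr;            // ~40 MB static array: may exceed the compiler's limit
    uint[] arr = new uint[10_000_000];  // heap allocation instead, checked only at run time
    arr[9_999_999] = 42;
    writefln("last element: %d", arr[$ - 1]);
}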
Most/all PC CPUs & operating systems are probably going to become 64 bit, but
in D int values are 32 bit, so some years from now, when everything is 64 bit D
programs will probably keep containing:
int i;
Unless D programmers will train themselves to nearly never use int and usually
use:
long i;
that will be about as equally fast, but less probable to cause integral
overflow.
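A tiny example of the overflow risk I mean (just a sketch; the int result
wraps around because of 32-bit two's-complement arithmetic):

import std.stdio;

void main()
{
    int  a = 2_000_000_000;
    int  bad  = a + a;            // wraps: the true result 4_000_000_000 doesn't fit in 32 bits
    long good = cast(long)a + a;  // promoted to 64 bits, no overflow
    writefln("int:  %d", bad);
    writefln("long: %d", good);
}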
The choice of 32 bits as the default integer size in D may look bad a few
years from now.
I don't see a good solution. To me the best solution seems to be to use "long"
everywhere in future programs meant for a 64-bit world (cent/ucent will be
present; LLVM already supports them, so it's easy to add them to LDC).
Bye,
bearophile

It's not really *easy* to add them to LDC, since they're not
implemented in the frontend at all.

There are C programs that use largish static 2D arrays that can't be converted
to D1 for LDC as is, because of an arbitrarily imposed limit. I'd like the D
specs to state that such a limit is implementation-specific and may differ
between compilers (though if you want, you can add a common minimum limit to
the specs).
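As a sketch of the kind of conversion I mean (illustrative only, the sizes are
made up): a C-style fixed-size 2D array like

double[3000][2000] table;   // ~48 MB; may be rejected because of the static array size limit

currently has to be rewritten as a heap-allocated jagged array:

double[][] table = new double[][2000];
for (size_t i = 0; i < table.length; i++)
    table[i] = new double[3000];

which changes the memory layout (rows are no longer contiguous) and touches
every piece of code that indexes it.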
Bye,
bearophile