Could we please have at least a warning if code isn't compatible with
64-bit?
It's really annoying to test out some code and then have to fix a bunch of
stupid uint->size_t bugs just because the author is still on a 32-bit
machine.
Is that feasible?

In general, no. What you're asking is for the compiler to compile your
code twice, once for 32-bit and once for 64-bit.
Remember that size_t is defined in druntime, not the language, so the
compiler doesn't know what size_t is ahead of time:

version(D_LP64)
{
    alias ulong size_t;
    alias long  ptrdiff_t;
    alias long  sizediff_t;
}
else
{
    alias uint size_t;
    alias int  ptrdiff_t;
    alias int  sizediff_t;
}

No it isn't. It's basically asking to make size_t/ptrdiff_t not implicitly
convertible to uint/int, or at least issue a warning if you implicitly
convert between them.
At least some Microsoft C++ compiler versions have this warning.
Stewart.

I generally like the idea of making size_t strongly typed, but
that would necessitate X!size_t becoming a distinct instantiation from
X!uint or X!ulong. Furthermore, it would break all existing D programs
that are deliberately not 64-bit aware =).

You mean compilers that target a 32-bit platform and rely on the runtime
environment to
ensure CTFE is consistent with runtime evaluation of the same function?
Stewart.

Couldn't it be handled by a special switch on 64 bit compilers, and
disabled normally? That way, the default behavior doesn't change, but
all one would have to do is say "
-Matt Soucy

...I totally meant "-alertx64compatability", but a "-MattSoucy" compiler
flag would be amusing as well. And scary that I would warrant a compiler
flag.
Either way, a flag like that would still require making size_t strongly
typed...
-Matt Soucy

Theoretically yes, but it would defeat the original intention.
Ensuring 64-bit compatibility is as easy as compiling with -m64 from time
to time, but some people can't be bothered.
They won't use a new switch either.

Or they're on Windows.

-m64 -o- should work on Windows regardless.

I got an ICE last time I tried that with DMD (haven't tried the latest
version though).
I've just been making some changes to Juno to fix 64-bit issues found by
GDC, and that will do 64-bit builds even on 32-bit hosts, so it's pretty
straightforward to give it a try.

Then you've got the added fun of whether it builds on Linux or any other Posix
system _anyway_. To really know whether something is going to work on a system
other than the one you're developing on, you need to build it and run it on
other systems (or build it _for_ other systems and then run it there in the
case of cross-compiling).
It would be nice if size_t were handled better, but a flag for 64-bit would
only solve _one_ of the problems related to writing code on one system and
trying to run it on another, and that's assuming it actually solved the
problem for 64-bit, which it wouldn't, since you could still have version
differences beyond size_t. It would just help with the very common (and
understandably annoying) issue of using size_t correctly on a 32-bit box such
that it would work on a 64-bit box.
So, it may very well be worth having something in the compiler to flag obvious
misuse of size_t, but it doesn't really solve the problem, just mitigates it.
- Jonathan M Davis

I like the idea of making size_t strongly typed, too. Memory sizes and
collection lengths are numbers of machine word size. This is a logically
distinct type. I want to support my claim with this article:
http://en.wikipedia.org/wiki/Integer_(computer_science)#Words
Although past systems had 24-bit architectures, in practice today a
machine word maps to either uint or ulong. So what I have in mind is a
machine word "typedef": it is logically different from both uint and
ulong, but template instances using it are mapped to either uint or ulong
(the semantically equivalent one). As a new keyword, it would also look fine
in syntax-highlighting editors and would replace size_t, which looks so-so:

void* allocate(word size);

size_t is defined in druntime as an alias to uint/ulong. The compiler is
unaware of any special status that it may have.

The whole point of what I'm saying is that it doesn't need to be.
writefln is a library function. But DMD recognises it specially, so that it
can give "perhaps you need to import std.stdio;" if you try using it.
In the same way, it could recognise size_t/ptrdiff_t specially, by treating
them internally as strong types even if they aren't - so that if you try to
use one as a uint/int, it will give a warning. Just like the M$ C++ compiler
does.
OK, so it's simpler if size_t and ptrdiff_t are changed to built-in types,
but my point is that it's not strictly necessary.
From what I gather, some C++ compilers do more than this: they have a
built-in understanding of the STL types, which they can use to optimise
operations on them better than can be done in the code implementations of
them.
Stewart.

IMHO the ideal solution would be:
- treat size_t as a magical type (not a simple alias).
- allow size_t -> uint if you are in a machine-specific version
statement that implies 32 bits (eg, version(D_InlineAsm_X86),
version(Win32), version(X86)).
- allow size_t -> ulong if you are in a version statement that implies
64 bits.
- Otherwise, disallow implicit casts.
Incidentally, this was a motivation for the 'one-definition rule' I
proposed for version statements: it means the compiler can easily
identify which versions imply a machine-specific target.

And have what rules for implicit conversions _to_ size_t?
Stewart.

I think the ODR for version is right on the money. FWIW I also think the
strategy you sketch would work (it's similar to gcc's), but I'd say -
let's not implement this. It's a "nice to have" thing but doesn't add
much power, and doesn't remove a large annoyance.
Andrei