This is just an FYI, but on my computer, this code:
import std.stdio;

int main(char[][] args)
{
    writefln(real.sizeof * 8);
    return 0;
}
outputs the size of real as 96 bits, not 80.
I have an Intel Pentium 4 (Prescott) CPU.

> This is just an FYI, but on my computer, this code:
> import std.stdio;
>
> int main(char[][] args)
> {
>     writefln(real.sizeof * 8);
>     return 0;
> }

Side note:
Who said the size of a "real" is 80 bits? The size varies.
It's just defined as: "largest hardware implemented FP size".
I get 64 here on PowerPC :-) On a SPARC, you could get 128.

> outputs the size of real as 96 bits, not 80.
> I have an Intel Pentium 4 (Prescott) CPU.

The difference is due to the alignment of the long double type.
On x86 Linux it is 96 bits; on x86-64 Linux it is 128 bits.
Both still use only 80 of those bits -- the rest is just padding.
--anders
PS.
http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/i386-and-x86_002d64-Options.html
"The i386 application binary interface specifies the size to be 96 bits,
so -m96bit-long-double is the default in 32 bit mode." [...] "In the
x86-64 compiler, -m128bit-long-double is the default choice as its ABI
specifies that long double is to be aligned on 16 byte boundary."
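
One way to see that the extra bits are only padding is to compare the
padded storage size with the properties of the underlying format. A
minimal sketch in D (the 80-bit arithmetic in the comments assumes an
x86 target, where real is the x87 extended format):

import std.stdio;

void main()
{
    // Storage size as allocated, including any ABI padding:
    writefln("real.sizeof:   %s bits", real.sizeof * 8);
    // Width of the actual significand: 64 for the x87 format.
    // Together with the 15-bit exponent and the sign bit, that
    // accounts for exactly 80 bits; anything beyond is padding.
    writefln("real.mant_dig: %s bits", real.mant_dig);
}

On x86 Linux this should print 96 and 64; on Windows, 80 and 64.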

> Side note:
> Who said the size of a "real" is 80 bits? The size varies.
> It's just defined as: "largest hardware implemented FP size".

Which should be 80 on x86 processors!

> The difference is due to the alignment of the long double type.
> On x86 Linux it is 96 bits; on x86-64 Linux it is 128 bits.
> Both still use only 80 of those bits -- the rest is just padding.
> --anders
> PS.
> http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gcc/i386-and-x86_002d64-Options.html
> "The i386 application binary interface specifies the size to be 96 bits,
> so -m96bit-long-double is the default in 32 bit mode." [...] "In the
> x86-64 compiler, -m128bit-long-double is the default choice as its ABI
> specifies that long double is to be aligned on 16 byte boundary."

Well, if the only difference is in the alignment, why isn't just the
real.alignof field affected? An x86-32 real is 80 bits, period. Or does it
have to do with, say, C function name mangling? So a C function that takes
one real in Windows would be _Name 80, but in Linux it'd be _Name 96?
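
The two quantities can in fact be inspected separately in D; a minimal
sketch (the figures in the comments are assumptions based on the gcc
x86 Linux ABI quoted above):

import std.stdio;

void main()
{
    // On x86 Linux these are expected to differ: storage is padded
    // out to 12 bytes (96 bits), but alignment stays at 4 bytes.
    writefln("real.sizeof:  %s bytes", real.sizeof);
    writefln("real.alignof: %s bytes", real.alignof);
}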

> Well, if the only difference is in the alignment, why isn't just the
> real.alignof field affected? An x86-32 real is 80 bits, period. Or does it
> have to do with, say, C function name mangling? So a C function that takes
> one real in Windows would be _Name 80, but in Linux it'd be _Name 96?

It's 96 bits on Linux because gcc on Linux pretends that 80-bit reals are
really 96 bits long. What the alignment is, is a separate matter again.
Name mangling does not drive this; the "Windows" calling convention will
produce different names, as you point out, but that doesn't matter here.
The 96-bit convention permeates Linux, and since D must be C ABI compatible
with the host system's default C compiler, 96 bits it is on Linux.
If you're looking for the number of significant mantissa bits and the like,
use the various .properties of the floating point types.
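
For example, a minimal sketch of those properties (all are standard D
floating point type properties; the x87 value noted in the comments is
an assumption about the target):

import std.stdio;

void main()
{
    // These describe the actual hardware format, independent of
    // whatever padding the ABI adds to .sizeof:
    writefln("mant_dig: %s", real.mant_dig); // significand bits, 64 on x87
    writefln("max_exp:  %s", real.max_exp);  // largest binary exponent
    writefln("min_exp:  %s", real.min_exp);  // smallest binary exponent
    writefln("dig:      %s", real.dig);      // decimal digits of precision
    writefln("epsilon:  %s", real.epsilon);  // smallest increment above 1
}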

>> Well, if the only difference is in the alignment, why isn't just the
>> real.alignof field affected? An x86-32 real is 80 bits, period. Or does it
>> have to do with, say, C function name mangling? So a C function that takes
>> one real in Windows would be _Name 80, but in Linux it'd be _Name 96?

> It's 96 bits on Linux because gcc on Linux pretends that 80-bit reals are
> really 96 bits long. What the alignment is, is a separate matter again.
> Name mangling does not drive this; the "Windows" calling convention will
> produce different names, as you point out, but that doesn't matter here.
> The 96-bit convention permeates Linux, and since D must be C ABI compatible
> with the host system's default C compiler, 96 bits it is on Linux.
> If you're looking for the number of significant mantissa bits and the like,
> use the various .properties of the floating point types.

The 128-bit convention makes some kind of sense -- it means an 80-bit
real is binary compatible with the proposed IEEE quad type (it just sets
the last few mantissa bits to zero).
But the 96-bit case makes no sense to me at all.
pragma's DDL lets you (to some extent) mix Linux and Windows .objs.
Eventually, we may need some way to deal with the different padding.
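
One way such handling might look: on little-endian x86, the 80-bit value
always occupies the first 10 bytes of a real, with any ABI padding trailing
after it, so a tool could strip the padding when moving values between
object formats. A hypothetical sketch (packReal/unpackReal are made-up
names, and the layout assumed is little-endian x86):

import std.stdio;

// Copy only the 10 bytes holding the actual x87 value; any trailing
// padding (2 bytes on x86 Linux, 6 on x86-64, none on Windows) is dropped.
void packReal(real r, ubyte[] buf)
{
    buf[0 .. 10] = (cast(ubyte*) &r)[0 .. 10];
}

// Rebuild a real from the 10 significant bytes, leaving padding zeroed.
real unpackReal(ubyte[] buf)
{
    real r = 0;
    (cast(ubyte*) &r)[0 .. 10] = buf[0 .. 10];
    return r;
}

void main()
{
    ubyte[10] buf;
    packReal(3.14159265358979324L, buf[]);
    writefln("%s", unpackReal(buf[]));
}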