Why use type declarations like these (below) instead of human-readable
language that declares what each type really is?
The use of "double", "short", and "long" is CUMBERSOME!
Especially when switching between processor architectures.
This is trying to be a new language... make it NEW!
Or at least make these keywords SYNONYMOUS!
D        Meaning                  New Key Word
byte     signed 8 bits            int8
short    signed 16 bits           int16
int      signed 32 bits           int32
long     signed 64 bits           int64
cent     signed 128 bits          int128
float    32-bit floating point    float32
double   64-bit floating point    float64
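
For what it's worth, synonyms like these would not need new keywords: D's
alias declarations can introduce them in an ordinary module. A minimal
sketch using the names proposed above ('cent' is a reserved word that most
D compilers do not actually implement, so it is left out):

// Fixed-width spellings defined as plain aliases of the built-in types.
alias int8    = byte;
alias int16   = short;
alias int32   = int;
alias int64   = long;
alias float32 = float;
alias float64 = double;

// The synonyms behave exactly like the originals.
static assert(int32.sizeof == 4 && float64.sizeof == 8);

A module defining these could then be imported wherever the fixed-width
spellings are preferred.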

> Why use type declarations like these (below) instead of human-readable
> language that declares what each type really is?

I believe there have been some studies indicating that words containing a
mixture of alphabetic and numeric characters are harder to read. It appears
that there is a type of 'context switch' silently going on when people see
digits in their text.

> The use of "double", "short", and "long" is CUMBERSOME!
> Especially when switching between processor architectures.

You do realize that D defines these data types with fixed sizes: an 'int' is
always going to be 32 bits, regardless of the CPU architecture.
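
This can be checked at compile time on any target; a small sketch (.sizeof
reports the size in bytes):

static assert(byte.sizeof  == 1);  // always 8 bits
static assert(short.sizeof == 2);  // always 16 bits
static assert(int.sizeof   == 4);  // always 32 bits
static assert(long.sizeof  == 8);  // always 64 bits

These asserts should pass with any conforming D compiler, whether the
target is 32-bit or 64-bit.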

> This is trying to be a new language... make it NEW!
> Or at least make these keywords SYNONYMOUS!
> D        Meaning                  New Key Word
> byte     signed 8 bits            int8
> short    signed 16 bits           int16
> int      signed 32 bits           int32
> long     signed 64 bits           int64
> cent     signed 128 bits          int128
> float    32-bit floating point    float32
> double   64-bit floating point    float64