Alright - octals are pretty useless (or aren't they?), but I was always
missing binary literals from C++. Would it break anything if we had binary
literals in D? Sometimes it saves a lot of time!

That exact syntax has been supported by Digital Mars C and C++ for 20 years
now. It's even supported as a printf format. To my knowledge, nobody has
ever used it. The truth is, it's easier to deal with bit patterns in hex.
Quick, what is the value of 0b110010000001000? If you're like me, you stick
a penpoint on the screen and count the 0's. So much easier to deal with
0x6408. The only time I found binary notation useful was when I was creating
data for a bitmapped cursor <g>.
I don't see octal used so much anymore, but a lot of legacy code used it
because it mapped well onto the PDP-11 instruction set. Lots of people got
used to using octal, and it carried forward into hex computers.
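For reference, the two spellings really do denote the same value; a quick
sanity check in D (a minimal sketch, assuming a compiler that accepts 0b
literals):

    unittest { assert(0b110010000001000 == 0x6408); }  // both equal 25608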

> That exact syntax has been supported by Digital Mars C and C++ for 20 years
> now. It's even supported as a printf format. To my knowledge, nobody has
> ever used it.

Because it's not *standard*, man! It's being avoided for portability!
Why do you think people use C++? Because it's *standard*.
Whatever makes it into D will become standard within D, and thus gradually
accepted.

> The truth is, it's easier to deal with bit patterns in hex.

Not always.

> Quick, what is the value of 0b110010000001000? If you're like me, you stick
> a penpoint on the screen and count the 0's. So much easier to deal with
> 0x6408. The only time I found binary notation useful was when I was creating
> data for a bitmapped cursor <g>.

Argh. You must allow for an underscore in numbers. It would also do a lot of
good for the readability of integers and floating-point numbers:
0b_11111000_00011111
BTW, does this bitmask ring a bell? Things like that are not at all rare, and
I don't want to need a bin-hex table at hand all the time!
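Here is a sketch of how the underscore grouping reads in practice, in D
(assuming the lexer simply ignores underscores in literals - Walter confirms
below that this already works):

    unittest
    {
        int mask  = 0b_11111000_00011111;   // same bits as 0xF81F
        long n    = 123_456_789;            // underscores group decimal digits
        double pi = 3.141_592_653;          // and floating-point digits too
        assert(mask == 0xF81F);
    }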

> I don't see octal used so much anymore, but a lot of legacy code used it
> because it mapped well onto the PDP-11 instruction set. Lots of people got
> used to using octal, and it carried forward into hex computers.

The current syntax for octals is horrible anyway. Why don't you make it
into 0o123?
-i.

>> That exact syntax has been supported by Digital Mars C and C++ for 20 years
>> now. It's even supported as a printf format. To my knowledge, nobody has
>> ever used it.

> Because it's not *standard*, man! It's being avoided for portability!
> Why do you think people use C++? Because it's *standard*.
> Whatever makes it into D will become standard within D, and thus gradually
> accepted.

That is true!

>> The truth is, it's easier to deal with bit patterns in hex.

> Not always.

>> Quick, what is the value of 0b110010000001000? If you're like me, you stick
>> a penpoint on the screen and count the 0's. So much easier to deal with
>> 0x6408. The only time I found binary notation useful was when I was creating
>> data for a bitmapped cursor <g>.

> Argh. You must allow for an underscore in numbers. It would also do a lot
> of good for the readability of integers and floating-point numbers:
> 0b_11111000_00011111

Kind of silly looking; maybe allow whitespace instead, something like string
literals:
0b 11111000 00011111
I can't think of anything off the top of my head that would break with this.

> BTW, does this bitmask ring a bell? Things like that are not at all rare,
> and I don't want to need a bin-hex table at hand all the time!

>> I don't see octal used so much anymore, but a lot of legacy code used it
>> because it mapped well onto the PDP-11 instruction set. Lots of people got
>> used to using octal, and it carried forward into hex computers.

> The current syntax for octals is horrible anyway. Why don't you make it
> into 0o123?
> -i.

For my opinion on binary literals: I've learned to live with using hex, so it
won't really bother me either way, but it does seem like a good idea to add.

> That exact syntax has been supported by Digital Mars C and C++ for 20 years
> now. It's even supported as a printf format. To my knowledge, nobody has
> ever used it. The truth is, it's easier to deal with bit patterns in hex.
> Quick, what is the value of 0b110010000001000? If you're like me, you stick
> a penpoint on the screen and count the 0's. So much easier to deal with
> 0x6408. The only time I found binary notation useful was when I was creating
> data for a bitmapped cursor <g>.
> I don't see octal used so much anymore, but a lot of legacy code used it
> because it mapped well onto the PDP-11 instruction set. Lots of people got
> used to using octal, and it carried forward into hex computers.

I can't count the number of times I've written 0x0..0xF out as 0b0000..0b1111
on a piece of paper next to me so I could write bit mask or bit field values
out in hex.
I think most C/Java programmers would understand what 0b0010 was, and as
pointed out it's not standard, so people would tend not to use it. I've worked
on a C project where // comments were banned, even though we used gcc and
MSVC++, which both support // in C.
Octal: before you scrap it, think of Unix programmers. I believe that file
permissions on Unix are octal, so some people may still want it.
What about 0o777 for octal?
Numbers would be 0<letter>(letter|digits) for anything not base 10, with x or
h for hex, b for binary, and o for octal.
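On the permissions point, a small sketch in D of why octal sticks around there
(using the C-style octal literals D currently has; each octal digit maps onto
one rwx triple):

    unittest
    {
        int perms = 0644;                  // rw- r-- r--  (owner, group, other)
        assert(perms == 0b_110_100_100);   // one octal digit per 3-bit field
    }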

> I can't count the number of times I've written 0x0..0xF out as
> 0b0000..0b1111 on a piece of paper next to me so I could write bit mask or
> bit field values out in hex.
I think if you try it in D right now, it already works <g>.

> Octal: before you scrap it, think of Unix programmers. I believe that file
> permissions on Unix are octal, so some people may still want it.
> What about 0o777 for octal?
> Numbers would be 0<letter>(letter|digits) for anything not base 10, with x
> or h for hex, b for binary, and o for octal.

Octal will stay in, and I'll leave it in the C syntax. For those that still
want octal, I think they'll want the C syntax they're used to for it.

I would use it.
Consider converting an ARGB1555 pixel to ARGB4444 format (yes, we actually
have to do this):
return ((pixel & 0b_1_00000_00000_00000) >> 3) * 0b1111
     | ((pixel & 0b_0_11110_00000_00000) >> 3)
     | ((pixel & 0b_0_00000_11110_00000) >> 2)
     | ((pixel & 0b_0_00000_00000_11110) >> 1);
It gets worse. There are all kinds of pixel formats. Change those numbers
above to hex and you really lose the sense of what's going on. We have to
resort to writing the values twice, once as hex and once in a comment as
binary "documentation".
Don't get me wrong: I know hex. I use hex. I love hex. Hex isn't always
the right tool, though. The brain takes time and effort to convert hex to
binary. If you just write in binary, you see the patterns immediately.
Which makes it more obvious what's going on: 0xAA or 0b10101010? When what you
are manipulating *is* bits or bit patterns, hex adds a layer of obfuscation.
It's not so bad when the fields are aligned well, but when they're not, it
can be confusing; deceiving even.
Actually these days, this kind of code is usually found only in tools, not
in the main game so much anymore. The main game has support hardware to do
most of the pixel pushing, and integer SIMD to do what the GPU hardware
can't do, fast. The main CPU is not where you want to be doing this kind of
stuff.
Sean
"Walter" <walter digitalmars.com> wrote in message
news:bgrfj5$18er$1 digitaldaemon.com...

> That exact syntax has been supported by Digital Mars C and C++ for 20 years
> now. It's even supported as a printf format. To my knowledge, nobody has
> ever used it. The truth is, it's easier to deal with bit patterns in hex.
> Quick, what is the value of 0b110010000001000? If you're like me, you stick
> a penpoint on the screen and count the 0's. So much easier to deal with
> 0x6408. The only time I found binary notation useful was when I was creating
> data for a bitmapped cursor <g>.
> I don't see octal used so much anymore, but a lot of legacy code used it
> because it mapped well onto the PDP-11 instruction set. Lots of people got
> used to using octal, and it carried forward into hex computers.