I understand that in many cases it's important to define how large your numeric variable is (and I always consider unsigned vs. signed), but do compilers ever optimize or make tailored size decisions when you don't specify char, short, long, or long long (i.e., does int != long long int by default)? This would probably be hard to implement in a compiler, but is it? I think some kind of meta-programming mechanism that provides virtual integer sizes (not templates) is essential. I'm also annoyed by the very existence of char. Please, call it octet or something, because we might handle textual strings with 2-byte characters (or use 1-byte numeric values).

I think C++11 has some feature like this (virtual integer size), right? Is it "auto"?

But it doesn't specify anything about size inference. It would be useful if the compiler could look at how you use a virtual integer throughout the final program and then deduce the optimal size, but I doubt any conceivable modification of C++ could be powerful enough for that kind of general-purpose utility.

Edit: A lot of people seem to undervalue the importance of size efficiency. And it isn't just about efficiency: there are many cases where I can write a set of functionality that is great in terms of its individual design but depends heavily on certain tuning.

I have yet to come across a compiler that looks at variable usage to determine the optimal storage size.

However, more often than not, you'll be looking at a speed/size trade-off. For example, on a 32-bit processor, 32-bit reads/writes are the fastest memory operations, even faster if the addresses are DWORD-aligned. Sure, you can still use 16-bit or even 8-bit variables, but accessing them can take longer, not to mention that accessing a lot of them might pollute the CPU's caches. With modern PCs, size doesn't matter much; an average PC has more than 2 GB of RAM. On embedded systems, smartphones, etc., however, RAM is still in short supply, and you'll have to see if you can shave off a couple of megs if you're cutting it close.

Personally, I'm all for speed, but clever optimizations in my code, coupled with a good optimizing compiler, may yield space savings at the same time. You won't know until you try.

Thank you. Actually, a while ago I speculated that 32-bit integers are faster on processors with a 32-bit word architecture (faster than 8- or 16-bit integers, like the widely known benefit of 64-bit integers on 64-bit architectures), but I foolishly dismissed the idea. Your information is very helpful. Maybe one of the many specializations graphics cards apply is for handling 8-bit color components... maybe?

... and that demonstrates my lack of experience with GPU-related programming -- beyond APIs such as DirectX and OpenGL, I haven't even written a shader. I finally have more than an Intel graphics chipset; much more, in fact (a Radeon HD 6950). Some day I should try writing a shader.