In computing, a binary prefix is a specifier or mnemonic that is prepended to the units of digital information, the bit and the byte, to indicate multiplication by a power of 2. In practice the powers used are mostly multiples of 10, so the prefixes denote powers of 1024 = 2^10.

The computer industry uses terms such as "kilobyte," "megabyte," and "gigabyte," and corresponding abbreviations "KB", "MB", and "GB", in two different ways. For example, in citations of main memory or RAM capacity, "gigabyte" customarily means 1,073,741,824 bytes. This is a power of 2, specifically 2^30, so this usage is referred to as a "binary unit" or "binary prefix." However, in other contexts, the industry uses "kilo", "mega", "giga", etc., in a manner consistent with their meaning in the International System of Units (SI): as powers of 1000. For example, a "500 gigabyte" hard drive is 500,000,000,000 bytes, and a "100 megabit" Ethernet connection is running at 100,000,000 bits per second.
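The gap between the two conventions is easy to quantify. A minimal Python sketch (purely illustrative) comparing the two meanings of "gigabyte":

```python
# The two competing definitions of a "gigabyte".
decimal_gb = 10**9   # SI usage: 1 GB = 1,000,000,000 bytes
binary_gb = 2**30    # binary usage: 1 "GB" = 1,073,741,824 bytes

print(decimal_gb)               # 1000000000
print(binary_gb)                # 1073741824
print(binary_gb / decimal_gb)   # ~1.074: about a 7.4% difference
```

The roughly 7.4% discrepancy at the giga scale is what fuels the whole argument below; at kilo it is only 2.4% (1024 vs 1000), and it grows with each step up.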

...and we all know that the meanings of words absolutely cannot change, nor is there any flexibility in their meanings.

There was never any problem. It was decided that 1 kilobyte was 1024 bytes, which in binary it is. Everyone understood it and there were no issues with it until hard drive makers realized they could change it for marketing purposes and milk a little more size out of their drives. Then consumers started screaming "where did all my space go" because the OS still reported it in the correct manner while the size on the box said something different. Thank hard drive manufacturers for this whole mess.
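The "where did all my space go" effect described above is straightforward arithmetic: a drive sold as 500 GB (SI, powers of 1000) looks smaller when an OS reports capacity in binary units (powers of 1024). A quick sketch, assuming the advertised figure is exact:

```python
advertised_bytes = 500 * 10**9       # "500 GB" on the box (SI, powers of 1000)
shown_by_os = advertised_bytes / 2**30  # an OS reporting in binary "GB" (GiB)

print(round(shown_by_os, 1))  # 465.7
```

No bytes are missing; the same byte count is simply divided by 2^30 instead of 10^9.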

I'm fully aware of what base 10 and base 2 are, thank you very much. I still don't see why 1000 units of something should be 1024 in binary. There is no reason, there never has been a good reason, and it only created confusion. It's a historical mistake, and it's about time we started correcting it.

Yeah, it's true, but for the average consumer kilo is 1000, and HD makers exploiting that just made things worse. So I agree that some changes must be made if you want to reach the consumer; there's no other way around it.

Yes, having conflicting numbers is bad, and since we'd been using 1024 from the very beginning it should never have been changed. It was the hard drive makers that changed it, not because it was correct but because it made their products sound bigger. Once again, computers are binary devices. Yes, kilo means 1000... in base 10. We're not dealing with that here, though.

Kilo means 10^3 everywhere, always and forever. It's an international standard clearly defined to be for powers of 10. There are alternatives for base-2 systems that are also clearly defined - they are not the same as the base-10 SI prefixes. The term was used incorrectly, and there is now a push to be more accurate. What benefit is there to continuing to be incorrect in one branch of computing?
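The clearly defined base-2 alternatives referred to here are the IEC binary prefixes (Ki, Mi, Gi, ..., each a power of 1024). A minimal formatter sketch in Python; the function name `fmt_iec` is just an illustration, not a standard API:

```python
def fmt_iec(n):
    """Format a byte count using IEC binary prefixes (powers of 1024)."""
    for prefix in ("", "Ki", "Mi", "Gi", "Ti"):
        if n < 1024:
            return f"{n:.1f} {prefix}B" if prefix else f"{n} B"
        n /= 1024
    return f"{n:.1f} PiB"

print(fmt_iec(500))          # 500 B
print(fmt_iec(1024))         # 1.0 KiB
print(fmt_iec(2**30))        # 1.0 GiB
```

With these prefixes there is no ambiguity: 1 KiB is exactly 1024 bytes, and "kB" can be left to mean exactly 1000.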

You might have had an argument if the prefix were universal within computer technology circles, but it isn't (see network transmission rates: 1 megabit per second is not 1024^2 bits per second). It all hinges on the idea that computers will continue to operate in binary forever (quantum computers are likely to use base 3 or 5).

Because it works better for digital calculations. Ask yourself why 1 byte has 8 bits: for the same reason. With 9 bits you wouldn't be able to divide evenly by 2, and 8 is a better choice than 10 because it's 2^3, while 10 is not a power of 2. Likewise, 1024 is 2^10, while 1000 is not a power of 2, which complicates processors' ability to calculate with the data.

Everything is in its right place; there is no use putting the decimal system into computers, because the hardware itself will never use it.

It also brings it in line with HD makers (who have always done it properly)

Uhhh, no. The standard since the beginning of the personal computer revolution in the 1970s (and probably before that) has been to refer to a kilobyte as KB (note the uppercase K) and to define it as 1024 bytes. By extension a megabyte is 1024 KB, and so on. The push towards using SI or IEC terminology only really got going a little over a decade ago, when the IEC approved its proposal for the new prefixes. http://en.wikipedia....nt_use_of_units

Your statement is ambiguous and confusing. Just for those who might be confused, it should be said that the Base-10 number 1000 is written in Base-2 (binary) as 1111101000. Now, a kilobyte is defined as a quantity of bytes amounting to 1000, with 1000 understood as a Base-10 number, so there are 1000 (Base-10) bytes in a kilobyte.
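The conversion mentioned above can be checked directly; Python's built-in `format()` and `int()` handle base 2 (an illustrative check, nothing more):

```python
# Decimal 1000 written in binary, and the round trip back to decimal.
print(format(1000, "b"))      # 1111101000
print(int("1111101000", 2))   # 1000
```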

Dividing by a power of 2 is a bitshift operation while dividing by a power of 10 is not (yes, I've programmed both RISC and CISC processors), so you are right about speed. But do you really think that expressing file size according to the standards will have an impact on your computer's performance?
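The bitshift equivalence both sides are invoking is easy to demonstrate. A sketch in Python (the value of `n` is arbitrary):

```python
n = 123_456_789

# Integer division by 1024 (2**10) is exactly a right shift by 10 bits...
assert n >> 10 == n // 1024

# ...while division by 1000 has no shift equivalent and needs a
# general integer division.
print(n >> 10, n // 1000)
```

Compilers emit the shift for power-of-2 divisors automatically, which is why this micro-optimization argument says nothing about how a file manager should *label* sizes for humans.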