Why Computers Use Binary

Binary numbers – seen as strings of 0's and 1's – are often associated with computers. But why is this? Why can't computers just use base 10 instead of converting to and from binary? Isn't it more efficient to use a higher base, since binary (base 2) representation uses up more "spaces"? I was recently asked this question by someone who knows a good deal about computers. But this question is also often asked by people who aren't so tech-savvy. Either way, the answer is quite simple.

WHAT IS "DIGITAL"?
A modern-day "digital" computer, as opposed to an older "analog" computer, operates on the principle of two possible states of something – "on" and "off". These correspond directly to an electrical current being either present or absent. The "on" state is assigned the value "1", while the "off" state is assigned the value "0". The term "binary" implies "two", so the binary number system is a system of numbers based on two possible digits – 0 and 1. This is where the strings of binary digits come in. Each binary digit, or "bit", is a single 0 or 1, which corresponds directly to a single "switch" in a circuit. Add enough of these "switches" together and you can represent larger numbers: group 8 of them, for example, and you have a byte. (A byte, the basic unit of storage, is simply defined as 8 bits; the well-known kilobyte, megabyte, and gigabyte are derived from the byte, and each is 1,024 times as large as the previous. The factor is 1,024 rather than 1,000 because 1,024 is a power of 2 and 1,000 is not.)
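As a quick sketch in plain Python (the variable names are mine, not from the text), here is how a row of eight on/off switches encodes a number:

```python
# Each "switch" is either on (1) or off (0).
switches = [1, 0, 0, 1, 0, 1, 1, 0]  # eight switches = one byte

# Reading left to right: double the running value, then add the next bit.
value = 0
for bit in switches:
    value = value * 2 + bit

print(value)   # 150
print(2 ** 10) # 1024 -- why a "kilobyte" is 1,024 bytes rather than 1,000
```

Each extra switch doubles the number of values the row can represent, which is why powers of 2 like 1,024 keep appearing.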

DOES BINARY USE MORE STORAGE THAN DECIMAL?
At first glance, it seems like the binary representation of a number, 10010110, uses up more space than its decimal (base 10) representation, 150. After all, the first is 8 digits long and the second is 3 digits long. However, this is an invalid argument in the context of displaying numbers on screen, since they're all stored in binary regardless! The only reason that 150 looks "smaller" than 10010110 is the way we write it on the screen (or on paper). Increasing the base will decrease the number of digits required to represent any given number, but as the previous section showed, it is impossible to build an ordinary digital circuit that operates in any base other than 2, since there is no state between "on" and "off" (unless you get into quantum computers... more on this later).
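This point can be illustrated with a short Python sketch (names are mine): the decimal string "150" looks shorter, but the characters it is made of are themselves stored as binary bytes.

```python
n = 150

print(bin(n))  # 0b10010110 -- the value itself fits in 8 bits

# The "shorter" decimal form is three characters, and each character
# is stored as a binary byte of its own (shown here as 8-bit codes):
codes = [format(ord(ch), "08b") for ch in str(n)]
print(codes)   # ['00110001', '00110101', '00110000'] -- 24 bits of text
```

So displayed as text, the decimal form actually occupies more bits than the number itself does in memory.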

WHAT ABOUT OCTAL AND HEX?
Octal (base 8) and hexadecimal (base 16) are simply a "shortcut" for representing binary numbers, as both of these bases are powers of 2. Each octal digit stands for 3 bits and each hexadecimal digit stands for 4 bits, so 2 hex digits = 8 binary digits = 1 byte. It's easier for the human programmer to represent a 32-bit integer, often used for 32-bit color values, as FF00EE99 instead of 11111111000000001110111010011001. Read the Bitwise Operators article for a more in-depth discussion of this.
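The shortcut is easy to check in Python (a small sketch, using the color value from the text):

```python
color = 0xFF00EE99  # a 32-bit color value written in hex

bits = format(color, "032b")
print(bits)  # 11111111000000001110111010011001

# Each hex digit expands to exactly 4 bits:
nibbles = [bits[i:i + 4] for i in range(0, 32, 4)]
print(nibbles[:2])  # ['1111', '1111'] -- the two leading F's
```

Because 16 is a power of 2, the grouping lines up exactly; that is what makes hex a lossless shorthand for binary, while base 10 is not.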

NON-BINARY COMPUTERS
Imagine a computer based on base-10 numbers. Each "switch" would then have 10 possible states, represented by the digits 0 through 9 (known as "bans" or "dits", short for "decimal digits"). In this system, numbers would be represented in base 10. This is not possible with the regular electronic components of today, but it is theoretically possible on a quantum level.
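As a sketch of the idea (the helper name is hypothetical), the same number needs fewer ten-state switches than two-state switches:

```python
def to_digits(n, base):
    """Decompose n into base-`base` digit values -- the states of each switch."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return list(reversed(digits)) or [0]

print(to_digits(150, 10))  # [1, 5, 0] -- three ten-state switches
print(to_digits(150, 2))   # [1, 0, 0, 1, 0, 1, 1, 0] -- eight two-state switches
```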

Is this system more efficient? Assuming the "switches" of a standard binary computer take up the same amount of physical space (nanometers) as these base-10 switches, the base-10 computer would be able to fit considerably more processing power into the same physical space. So although the question of binary being "inefficient" does have some validity in theory, it has none in practical use today. (Quantum computers aren't exactly on sale at the moment.)

WHY DO ALL MODERN-DAY COMPUTERS USE BINARY THEN?
Simple answer: Computers weren't initially designed to use binary; rather, binary was determined to be the most practical system to use with the computers we did design. Full answer: We only use binary because we currently do not have the technology to create "switches" that can reliably hold more than two possible states. The binary system was chosen only because it is quite easy to distinguish the presence of an electric current from the absence of an electric current, especially when working with trillions of such connections. Using any other number base in this system would be ridiculous, because the system would need to constantly convert between that base and binary. That's all there is to it.
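The density claim can be made concrete with a small sketch (the function name is hypothetical): count how many switches each kind of computer needs to cover the range of a 32-bit integer.

```python
def switches_needed(max_value, base):
    """Base-`base` digits (switches) needed to count from 0 up to max_value."""
    count = 1
    while max_value >= base:
        max_value //= base
        count += 1
    return count

# Range of a 32-bit integer (0 through 2**32 - 1):
print(switches_needed(2**32 - 1, 2))   # 32 two-state switches
print(switches_needed(2**32 - 1, 10))  # 10 ten-state switches
```

If a ten-state switch really took no more room than a two-state one, the base-10 machine would cover the same range with roughly a third as many switches, which is the theoretical efficiency the text describes.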