How Does My Computer Work?

My original intention was to figure out what quantum computing is, as there is quite a buzz surrounding it lately, with major interest being expressed (with words and dollars) by some of the world’s biggest companies, all vying to conquer the elusive technology and harness its yet-unknown possibilities. Indeed, the first one to market with a viable quantum computer could disrupt every technology based on cryptography, including the lionized Bitcoin and Ethereum, and could melt away the protection offered by the encryption that is standard in various information transfer protocols. Essentially, it is theorized (though as of now uncertain) that quantum computers will have major impacts, sooner or later, in just about every industry. So naturally, I wanted to learn how it worked. But as I dug into it, not only was I confronted with the naturally perplexing nature of subatomic particles and their behavior (think Schrödinger’s cat), but also with the creeping realization that I didn’t even know how traditional computers work! Given that practical quantum computers are, as of now, still largely a dream, I figured it might be more appropriate to figure out just how a standard computer processes information.

Well then, how does a computer think, anyway? On some basic level, I think most people have an idea:

Computers think in binary! Like I said, I think we all knew that to some degree. But when you look into how it really works, it gets interesting really quickly.

Consider the following:

Could you represent an image of yourself with only ones and zeroes?

Could you answer the question: “What is two plus two?” with only a series of true or false responses?

Could you tell me what color the leaves on a tree are with only the words “yes” and “no”?

Sounds ridiculous. And indeed, it is not natural for the human mind to think this way. But it is easy to design a simple machine, a switch or a series of switches, that can hold this type of information, which those in the programming world know (and love) as a boolean. Have you ever thought about where the word “boolean” comes from? Neither had I. It turns out there was a guy named George Boole, an English mathematician born in 1815. His groundbreaking work laid a foundation for the information age, most specifically Boolean logic, in which variables always represent a “true” value of 1 or a “false” value of 0, and an array of operations is performed on these values in order to arrive at a conclusion. Boole is known to have said the following (taken from Wikipedia):

No general method for the solution of questions in the theory of probabilities can be established which does not explicitly recognise, not only the special numerical bases of the science, but also those universal laws of thought which are the basis of all reasoning, and which, whatever they may be as to their essence, are at least mathematical as to their form.

In other words, he claimed that there is no expressible thought, or solution to any problem of any type, that cannot be expressed with simple numbers. What a thought! Because it is very easy to design a machine, or a machine unit, that can represent an individual boolean value or operator, it might then be useful to reduce any operation, thought, or exercise into this “boolean logic” form, so that a machine or device could represent it.
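Boole’s system is easy to play with, because every modern programming language has it built in. Here is a quick sketch in Python of the basic operations on those 1s and 0s (treating 1 as true and 0 as false, just as Boole did):

```python
# Boolean logic: every variable is either 1 (true) or 0 (false),
# and a small set of operations combines them into conclusions.
a, b = 1, 0

print(a and b)  # AND: 1 only if both inputs are 1
print(a or b)   # OR: 1 if either input is 1
print(1 - a)    # NOT: one simple way to flip 1 to 0 (and 0 to 1)
```

Those three operators alone are enough to build up remarkably complicated logic, which is exactly why they map so well onto simple switches.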

When it comes to a computer, there are a number of physical embodiments of these boolean variables and operators. Specifically, I’m talking about memory cells (there are many other working parts of computer logic which are beyond my understanding at this time). A memory cell is the absolute smallest unit of memory that exists in a computer: it represents exactly one “bit” of data. You could consider it to be a single neuron in the brain of a computer. The incarnation of a memory cell has changed over time, ranging from vacuum tubes, to magnetized rings, to its modern form of an electronic circuit set on a semiconductor. A tiny capacitor within this circuit (with a transistor acting as its switch) will either be charged (the “1” state) or uncharged (the “0” state) at the time that it is read. With some help from the CPU, input registered in this binary format can be “translated” into a different binary sequence, representing a basic transformation of data inside a basic electronic device.

[image: a single memory cell]

In current-day computers, these data transformations occur in the computer’s memory, a large set of integrated circuits: electronic components set on a semiconductor material. There are different types of modern memory, most notably SRAM (static random access memory) and DRAM (dynamic random access memory). In practical terms, they differ in how they hold onto information. SRAM holds its bits for as long as power is supplied, without needing to be refreshed, which makes it fast but expensive; it is used for small, speed-critical memory such as the caches inside the CPU. DRAM is cheaper and denser, but its cells leak charge and must be refreshed constantly, or the values stored in them are erased. Both are volatile, meaning they lose everything when the power goes off. This is why your documents, user settings, pictures, and everything else live on a hard drive or SSD and not in your RAM, and it also explains why RAM can be a bottleneck to computer performance: more DRAM (referred to as RAM in everyday speech) means that the computer has more fast memory to play with when thinking.

With a general idea of how memory works, let’s scope out a general overview of the input-calculation-output process:

The user enters some kind of input. This is most often done with the keyboard and mouse.

The computer converts the input into binary, as instructed by a combination of the hardware and firmware provided by the device’s manufacturer and the operating system which is installed on the device.

The binary input undergoes some kind of algorithmic digestion, as instructed by a set of guiding instructions.

The output is used in whatever fashion it needs to be, according to the context in which the operation is performed.
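The steps above can be sketched in miniature. The function names here are invented for illustration; real systems do this across layers of hardware and software, but the shape of the pipeline is the same:

```python
def to_binary(char):
    # Step 2: convert the input into binary (here, its 8-bit ASCII pattern)
    return format(ord(char), "08b")

def transform(bits):
    # Step 3: some algorithmic digestion. Here, flip one bit, which in
    # ASCII happens to turn a lowercase letter into its uppercase twin.
    return format(int(bits, 2) ^ 0b00100000, "08b")

def to_output(bits):
    # Step 4: use the result in context -- here, display a character
    return chr(int(bits, 2))

key_press = "s"  # Step 1: the user enters some input
print(to_output(transform(to_binary(key_press))))
```

Every layer of a real computer, from the keyboard controller up to the word processor, is doing some elaborated version of this binary-in, binary-out dance.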

If, for example, you were to make a computer yourself, you could instruct it to register a keyboard press on the letter “s” as the binary code 1110011. Then, you could write a word processing program that understands this bit value and knows to immediately change the color displayed by the pixels at a certain location to display something that looks like the letter s. At this point, why not just write the letter s by hand?
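That mapping of “s” to 1110011 isn’t arbitrary, by the way; it is the letter’s standard ASCII code, and you can check it for yourself with Python’s built-in `ord`, `bin`, and `chr` functions:

```python
# The letter "s" and the binary code 1110011 are two views of one value.
print(ord("s"))        # the letter's numeric code
print(bin(ord("s")))   # that same code written in binary
print(chr(0b1110011))  # and back again, from binary to the letter
```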

Luckily, someone already programmed this logic for you (Bill Gates) and wrapped it up in a nice little package, so you will never handle computer language on this level unless you are a hobbyist.

By the way, the same binary code which we used to represent the letter s could represent data used in a wide range of contexts. It could also represent the number 115, a particular color, a volume of sound, or whatever else. So how does the computer know what to do? Taken from cs.uri.edu:

We have seen that the byte: 01000011 can represent the integer 67, the character ‘C’, a pixel with darkness level 67, a sample of a sound with decibel level 67, or an instruction. There are other types of data that a byte can represent too. If that same byte can be all of those different types of data, how does the computer know what type it is? The answer is the context in which the computer uses the byte. If it sends the byte to a speaker, the 67th level of sound is produced. If it sends the byte to a monitor or printer, a pixel with the 67th level of darkness is produced, etc. More accurately, if the byte were coded with a standard coding technique, like ASCII for characters, GIF for pictures, and WAV for sounds, then when the computer sends the byte to a device, the data corresponding to that coding is produced by the device.
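That byte from the quote really does wear all of those hats. Here are two of its interpretations you can verify directly; the others (pixel, sound sample) depend on a device, but the principle is identical:

```python
byte = 0b01000011  # the same eight bits from the quote above

print(byte)        # interpreted as an integer
print(chr(byte))   # interpreted as an ASCII character
# Sent to a grayscale display, it would be a pixel of darkness level 67;
# sent to a speaker, a sound sample at level 67. The bits never change --
# only the context in which the computer uses them.
```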

So, there is a lot of context surrounding all steps of the input-transformation-output process, and certainly there is a lot to be understood about each step. As someone without any computer science background, I found this to be quite a daunting set of information, and if I didn’t stop myself from writing, this article could be a million words long! So if you read this far, hopefully you found it interesting as well.

If you’re interested in learning more, check out these links and see where you wanna go from there!