We posed this question to maths teacher Jeffrey Zilahee from mathgurus.info...Jeffrey - We all know that calculators are these fast little machines that can do calculations at incredible speed and have helped make humanity a more computationally exact species, but exactly how do they work? Well, whether you're talking about a scientific, financial, or graphing calculator, or even the calculator on your phone, they all work in a similar fashion. In a nutshell, calculators, just like their big brother the computer, work by understanding everything in terms of two states. We call this binary, and specifically those two states are given as either a zero or a one. So, when we press buttons on a calculator, those buttons are connected to sensors that send electrical currents to the calculator's integrated circuitry. This circuitry contains transistors that build up a logical framework for solving any given calculation, and the more transistors present, the more advanced the functionality of the calculator is likely to be. Transistors use electricity to sit in an on state, indicated by a one, or an off state, indicated by a zero. So when a calculator wants to add two numbers, it first converts those numbers into binary. For example, a four would be represented as 1-0-0 and a two would be represented as 1-0. From there, the process of addition is dictated by each column summing to 0, 1, or two 1s, in which case a one carries into the next column, since a single binary digit cannot hold a 2. Once the calculator has the answer, which is still in binary, it turns on a series of lines and/or pixels to create the visual match of the number we understand, which is decimal or, as mathematicians call it, base 10. Part of the reason calculators are so quick is that at their core they rely on electrical impulses, which travel at a large fraction of the speed of light.

Diana - So, calculators, much like computers, translate everything into binary, or base 2, because it allows numbers to be translated into electrical signals that are either on ('1') or off ('0'). To display an answer, the calculator then sends this information to its LCD screen and, as those of you with an LCD TV, monitor or clock may know, these displays work by placing a voltage across a layer of molecules sandwiched between filters; changing the voltage makes these liquid crystals appear opaque or transparent.

Oh...Do I have to remember the difference between Ands, Nands, Ors, Nors, & XORS?

A calculator is like a tiny computer, but at least a classic calculator has relatively few pre-programmed tasks. Newer programmable calculators are much more complex.

You probably have heard that computers represent numbers in binary. Essentially 0's and 1's.

For example, you can represent the numbers from 0 to 7 in binary as follows, incrementing from the rightmost digit:

0 → 000
1 → 001
2 → 010
3 → 011
4 → 100
5 → 101
6 → 110
7 → 111
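The same table drops out of a couple of lines of Python, which is a handy sanity check (the 3-digit width is just the illustrative choice made above):

```python
# Binary representations of 0..7, padded to three digits as in the table above.
table = {n: format(n, "03b") for n in range(8)}
for n, bits in table.items():
    print(n, bits)
```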

The basic function for addition is the "Exclusive Or + Carry".

The way the exclusive or (XOR) works:

0 + 0 --> 0
0 + 1 --> 1
1 + 0 --> 1
1 + 1 --> 0 (the exclusive part: you get a zero if both are 1)

So, using the numbers above...1 + 2 --> 001 + 010

XOR on the rightmost bits: 1 & 0 --> 1
XOR on the middle bits: 0 & 1 --> 1
XOR on the leftmost bits: 0 & 0 --> 0

And, one gets 011 or 3.

Carrying a bit would just propagate it in much the way you did it in elementary math.
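The XOR-plus-carry scheme can be sketched in a few lines of Python; `add_binary` is just an illustrative name, and of course a real calculator does this in hardware rather than software:

```python
def add_binary(a: str, b: str) -> str:
    """Ripple-carry addition of two binary strings using XOR plus carry."""
    width = max(len(a), len(b)) + 1          # leave room for a final carry
    a, b = a.zfill(width), b.zfill(width)
    carry = 0
    out = []
    for x, y in zip(reversed(a), reversed(b)):
        x, y = int(x), int(y)
        out.append(str(x ^ y ^ carry))        # sum bit: XOR of inputs and carry-in
        carry = (x & y) | (carry & (x ^ y))   # carry out of this column
    return "".join(reversed(out)).lstrip("0") or "0"

print(add_binary("001", "010"))  # 1 + 2 --> "11", i.e. 3
```

The carry expression is exactly the elementary-school rule: a column carries when both inputs are 1, or when one input and the incoming carry are both 1.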

The simple logic functions can be built from basic diodes and transistors, which have been highly miniaturized over time.

Another basic component is called the "flip flop". It is a simple form of memory: it can store and reproduce a single bit essentially indefinitely, until it is reset or the power is turned off. Chains of flip-flops are used to build counters and registers.

It is relatively easy to create a flip-flop utilizing the basic logic functions above.
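As a rough illustration (not any particular chip's circuit), here is a Python simulation of an SR latch built from two cross-coupled NOR gates, the classic way to get a one-bit memory out of the basic logic functions:

```python
def nor(a: int, b: int) -> int:
    """Basic NOR gate: 1 only when both inputs are 0."""
    return 0 if (a or b) else 1

def sr_latch(s: int, r: int, q: int) -> int:
    """Apply set/reset inputs to a cross-coupled NOR latch; returns the new Q."""
    for _ in range(4):            # iterate until the feedback loop settles
        q_bar = nor(s, q)
        q = nor(r, q_bar)
    return q

q = 0
q = sr_latch(1, 0, q)   # pulse "set": Q goes to 1
q = sr_latch(0, 0, q)   # inputs released: Q holds its value
print(q)                # 1
q = sr_latch(0, 1, q)   # pulse "reset": Q goes back to 0
print(q)                # 0
```

The interesting line is the middle case: with both inputs at 0, the output feeds back on itself and the latch simply remembers whatever it held last.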

Anyway, subtraction is the opposite of addition (in practice it is usually done by adding the two's complement of the number). Multiplication and division are basically done the way you learned them in 3rd grade, only in base 2.

Old calculators represented numbers with seven-segment LED (Light Emitting Diode) displays, where the digits were multiplexed to light up the bars that form each numeral. Newer calculators usually use LCDs, which are more efficient but may be constructed in a similar fashion.
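For illustration, here is one common mapping of digits onto the seven segments (labelled a to g in the conventional layout: a on top, b-c down the right, d on the bottom, e-f up the left, g in the middle); exact shapes vary between displays:

```python
# Which segments light up to draw each digit on a seven-segment display.
SEGMENTS = {
    "0": "abcdef", "1": "bc",     "2": "abged",   "3": "abgcd", "4": "fgbc",
    "5": "afgcd",  "6": "afgedc", "7": "abc",     "8": "abcdefg", "9": "abfgcd",
}
print(SEGMENTS["4"])   # fgbc: the four strokes of a "4"
```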

One concept is built onto another, and with a little hand waving, one gets a basic working calculator.

If I remember correctly, calculators commonly represent numbers in some form of BCD (binary coded decimal) rather than the pure binary representation used in most computers. Presumably this avoids converting inputs and outputs between decimal and binary, even though it takes more bits to represent a number.

I was actually only counting from 0 to 7 (less than one decimal digit's range), so there would be no significant difference between what I wrote and BCD.

To go from 0 to 9, one would need 4 bits... which would be enough to count from 0 to 15, so BCD wastes about half a bit per digit.

But you're right: since your classic calculator has very little memory (just enough to hold a few parentheses etc., and give the user a couple of memory locations), BCD might be simpler, with relatively little memory wasted. It would also make the display interface easier to write with native BCD.
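A minimal sketch of the BCD idea in Python, with each decimal digit packed into its own 4-bit group (the function names are just for illustration):

```python
def to_bcd(n: int) -> str:
    """Encode each decimal digit of n as its own 4-bit group."""
    return " ".join(format(int(d), "04b") for d in str(n))

def from_bcd(bits: str) -> int:
    """Decode space-separated 4-bit groups back to a decimal integer."""
    return int("".join(str(int(group, 2)) for group in bits.split()))

print(to_bcd(42))             # 0100 0010
print(from_bcd("0100 0010"))  # 42
```

Note the waste mentioned above: each 4-bit group could count to 15 but only ever holds 0 to 9. The payoff is that each group maps straight onto one displayed digit.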

I would also like to note that calculators generally use series approximations to work out complicated functions, significant figure by significant figure. So for example, when you type in sin(2.345), the calculator evaluates the Taylor series for sine, which is

sin(2.345) = 2.345 - (2.345)^3/3! - (2.345)^5/5! - (2.345)^7/7! - ...

As you can see, you could continue this with odd integers; the longer the series, the more accurate the answer becomes, to more significant figures.

Jolly good point Mr P! I knew they were algorithmic, but I had no idea about the algorithms used. Can you clue us in on some of the other series they might use?

Sure! To be clear, a series itself is not an algorithm, but it can be part of an algorithm. Some series converge to a number, while others diverge. Another clear example is

e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ...

In fact, if you want a deeper understanding, you can see how the approximation begins with a large error but closes in as the series expands. For example, if you type e^3 into your calculator, you'll get something like 20.085. Using the series, you can watch the partial sums approach the correct value, significant figure by significant figure...

1 + 3 = 4

1 + 3 + 3^2/2! = 8.5

1 + 3 + 3^2/2! + 3^3/3! = 13

1 + 3 + 3^2/2! + 3^3/3! + 3^4/4! = 16.375

1 + 3 + 3^2/2! + 3^3/3! + 3^4/4! + 3^5/5! = 18.4

1 + 3 + 3^2/2! + 3^3/3! + 3^4/4! + 3^5/5! + 3^6/6! = 19.413

1 + 3 + 3^2/2! + 3^3/3! + 3^4/4! + 3^5/5! + 3^6/6! + 3^7/7! = 19.85

Continuing on, the series converges to the 20.085 that your calculator outputs. When e^x is entered, the calculator is programmed to continue the series until a certain number of significant figures stops changing, which signals convergence. For inputs where the series fails to converge, it can output an error or an 'unknown' response.
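That stopping rule can be sketched in Python: keep adding terms of the e^x series until successive partial sums agree to the desired precision (the 5-figure cutoff here is just an illustrative choice, not what any particular calculator uses):

```python
import math

def exp_series(x: float, sig_figs: int = 5) -> float:
    """Sum 1 + x + x^2/2! + ... until successive partial sums agree
    to roughly the requested number of significant figures."""
    total, term, n = 0.0, 1.0, 0
    while True:
        prev = total
        total += term
        n += 1
        term *= x / n           # next term: x^n / n! from the previous one
        if total != 0 and abs(total - prev) < abs(total) * 10 ** (-sig_figs):
            return total

print(exp_series(3.0))   # close to math.exp(3.0) = 20.0855...
```

Building each term from the previous one (multiply by x/n) avoids recomputing powers and factorials from scratch, which is also how you would do it by hand.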

Also, from my last post, the sine series should be sin(2.345) = 2.345 - (2.345)^3/3! + (2.345)^5/5! - (2.345)^7/7! + ... (alternating addition and subtraction). If you work through this one like my example, you'll see that it oscillates around the convergence value, alternating between being greater than and less than the actual value.

Actually a lot of calculators are computers, they were one of the very early users of microprocessors. The very simplest ones used just hard-wired electronics and not computers, but were often incapable of doing more complicated operations like sin and cos. Once you need to do calculations like that, then using a microprocessor to do it in software is much more desirable.

True - in fact, the electronic calculator was one of the major drivers for the initial development of the microprocessor, but as soon as microprocessors appeared on the scene it didn't take long before they became far more significant than the electronic calculator!

I would say that an important part of what allows calculators to do what they do is that they are digital. "Digital" and "binary" are not identical, but they go hand in hand: a digital signal takes one of a small set of discrete states (in the simplest case, on or off), and binary (base 2) is the lowest useful numerical representation, whose two values map directly onto those two states. Otherwise, a calculator would need to quantize a value (say, a base-10 digit) as a 4 or 5 or 6 and so on every time it wanted to use it. A binary value is quantized as only one of two states. In electronics, it is virtually error-free to distinguish between a circuit presenting no voltage and a circuit presenting maximum/saturated voltage; there is virtually no mistaking the two, as the circuit behaves as differently as possible (instead of with ten different voltage levels, as it would for a base-10 digit).

Then, with the miracle of modern microchips, enough circuitry is available in a reasonable space to perform basic and advanced calculations on two numbers. This makes calculators what they are -- hand-held devices.

Another miracle involving microchips is their very low power consumption. This is almost a necessity for a hand-held device; otherwise it could be putting out the heat of a 100-watt bulb, and you couldn't hold it in your hand for very long. Very low power consumption also gives calculators another quality they need to do what they do: not being plugged into the wall. (Am I the only one here who remembers Olivetti mechanical calculators and Friden electronic calculators, both of which were plugged into the wall?)

(Am I the only one here who remembers Olivetti mechanical calculators and Friden electronic calculators, both of which were plugged into the wall?)

In my graduate student days in the late 60s, the "computer" was housed in a large room, air-conditioned to help dissipate the many kilowatts of heat it produced, and a kilometre away from the department. We would take our decks of punched cards by bicycle, across to the computer reception room in the morning, and pick up the decks with our output in the afternoon (unless we were doing really heavy calculations, in which case those two delivery steps were reversed).

In the department we had an Olivetti programmable electronic calculator with a cash register strip type of printed output -- about 120 steps of program were possible if I remember aright. I had a competition going with two distinguished professors about which of us could get the fastest production of all (and only) prime numbers up to 2000 out of the Olivetti.

Both the Olivetti and the Friden were desktop calculators and plugged into the wall for power.

The Olivettis we used also had a paper-strip output, as well as plenty of gears and levers. They did not have the big handle used to power the calculation that some older Olivetti models had, but used motors driven by mains power. When one performed a calculation, the levers would pop up and down varying distances, ratcheting the geared wheels around and around, sometimes for more than a minute. If it halted and an error light came on, we had to open the case and fiddle with the mechanisms to dislodge the jam and set it right again.

The Friden "electronic" calculator displayed through a cathode ray tube (CRT), which displayed a stack of four or five numbers, if I remember correctly. It contained about 15 printed circuit boards loaded with discrete components (resistors, capacitors, transistors, etc). Back then, all transistors were those large metal-can types, and none of the more modern, much smaller, plastic-bodied types. The Friden was about the size of a desktop computer and put out a fair amount of heat.

My father had worked in the factory of a world-famous manufacturer, and I remember as a kid, listening to him marvel about a tabletop calculator they had there, and all it did was take the square root of a number. To listen to him, you'd think it could decipher the human genome.

I also have an Intel 4004 chipset from an old calculator. Originally calculators cost thousands of dollars and were basic four-function machines. Now you can buy a cheap one for under $5 that outperforms that one and runs from a single AA cell for years.

An algorithm, by definition, always halts after a finite number of steps. Taylor series and Newton's method rarely reach the exact "correct" answer and then stop (except in special cases such as e^0 = 1, or if you happen to guess the exact answer to start Newton's method).

With series which converge quickly, you can stop when each successive term adds a number smaller than the smallest digit displayed on the calculator.

More troublesome are those series which converge very slowly - the series which generate pi are notorious for this; even after a lot of calculations the answer is still very inaccurate, and with each calculation the rounding errors grow.

For series like this, it is common to rearrange the equation so it converges with very few calculations, but perhaps over a reduced range of values. Suitable mathematical relations for the sine function (angles in radians):

sin(x) = sin(x + 2*pi)
cos(x) = sin(x + pi/2)

Reducing the value of x closer to zero with methods like these means that you need fewer calculations to obtain the same accuracy.
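Putting the two ideas together, here is a rough Python sketch: reduce the angle into [-pi, pi] using sin(x) = sin(x + 2*pi), then sum the alternating Taylor series until the terms become negligible:

```python
import math

def sine(x: float) -> float:
    """Taylor series for sin(x), after reducing x into [-pi, pi]
    via sin(x) = sin(x + 2*pi)."""
    x = math.remainder(x, 2 * math.pi)   # IEEE remainder lands in [-pi, pi]
    total, term, n = 0.0, x, 1
    while abs(term) > 1e-12:
        total += term
        # next odd-power term, with alternating sign:
        # x^(n+2)/(n+2)! = (x^n/n!) * (-x^2 / ((n+1)(n+2)))
        term *= -x * x / ((n + 1) * (n + 2))
        n += 2
    return total

print(sine(100.0))   # agrees closely with math.sin(100.0)
```

Without the reduction step, feeding x = 100 straight into the series would need terms up to around x^100 before it settled down, with terrible intermediate rounding; after reduction, a dozen or so terms suffice.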

The relations that are really useful for trig functions are the tangent half-angle formulae:

sin(x) = 2 tan(x/2) / (1 + tan^2(x/2))

cos(x) = (1 - tan^2(x/2)) / (1 + tan^2(x/2))

tan(x) = 2 tan(x/2) / (1 - tan^2(x/2))

What you can then do is get a 'one size fits all' trig formulation. You fit tan(y) over the range 0 --> π/4 (or a smaller range) with its rapidly converging Taylor series or a polynomial fit. That then gives you rapid access to all six of the basic trig functions for the angle 2y. (This was 1960s state of the art, when computer memory and processing power were still at a premium.)
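As a sketch of the idea, here is how sin(x) and cos(x) both fall out of the single value t = tan(x/2), using the library tangent as a stand-in for the small-range series fit described above:

```python
import math

def trig_from_half_tan(x: float):
    """Recover sin(x) and cos(x) from t = tan(x/2) via the half-angle identities."""
    t = math.tan(x / 2)             # in a real calculator: a small-range series fit
    s = 2 * t / (1 + t * t)         # sin(x) = 2 tan(x/2) / (1 + tan^2(x/2))
    c = (1 - t * t) / (1 + t * t)   # cos(x) = (1 - tan^2(x/2)) / (1 + tan^2(x/2))
    return s, c

s, c = trig_from_half_tan(0.7)
print(s, c)   # match math.sin(0.7) and math.cos(0.7)
```

One caveat: near x = π the value tan(x/2) blows up, which is another reason the range reduction discussed earlier matters in practice.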

Converting a number between binary and decimal representations takes on the order of a hundred machine operations, and introduces "rounding" errors for many values, since most decimal fractions have no exact binary representation.

One difference between a calculator and computer:

A calculator will typically do a single operation (add, subtract, multiply, divide, or even square root), and then display the answer to a human. To minimise processing and rounding errors, they do the calculation in BCD, which requires no decimal conversion.

A computer might do dozens (or even billions) of calculations before formatting the results for human consumption. It saves effort to do the calculations in binary, and then just convert the answer to decimal if and when it needs to be delivered for human viewing.

Programmable calculators fall somewhere in the middle, and could go either way.

Some early "Commercial" computers used BCD, while "Scientific" computers used binary floating point.

The Naked Scientists® and Naked Science® are registered trademarks.
Information presented on this website is the opinion of the individual contributors
and does not reflect the general views of the administrators, editors, moderators,
sponsors, Cambridge University or the public at large.