Doing Math in FPGAs, Part 2 (BCD)

Binary coded decimal is not used as much as it once was, but we can still find it hither and yon -- mostly in applications where the data will be directly driving a display.

In my previous blog, I rambled on about some simple ways to multiply and divide by 10 in any general-purpose device, not just in an FPGA. In the comments, I was asked why in the heck I would even consider doing such a thing (as opposed to sticking with binary representations). Well, the answer is along the lines of "We are base-10 creatures living in a base-10 world." Much of what we do is related to the decimal system, so we often find the need to multiply and divide by 10. Let's figure out some ways to do it quickly and efficiently. Sometimes this can lead to things that work better in the base-2 world of numerical machines.

I think I mentioned this last time, but my quick investigation into such frivolity didn't work out for me. I didn't find an easier way to do the math. It was time for me to get back in the box and conform to binary (drat).

What to do? There are plenty of ways to represent real numbers in a binary system. I would guess the three most common are probably binary coded decimal (BCD), floating point, and fixed point. Each has its strengths and weaknesses, depending on your end game. As such, I guess this issue will feature a brief discussion of these three representations and perhaps will touch on their strengths, weaknesses, and implementations.

Let's start with BCD. It's not used as much as it once was, but we can still find it hither and yon -- mostly in applications where the data will be directly driving a display of some sort (e.g., a seven-segment display). And this, of course, is the beauty of BCD. The individual digits not only represent the decimal system with which we are naturally comfortable, but they can also be treated as separate circuits for mathematical operations. There's another positive ramification to this representation: We can represent any number exactly if we are willing to allocate the bits to do so. For instance, 0.2 in BCD is 0.2, but in binary, it might be 00110011... (in, say, a fractional-only fixed-point representation).
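You can see that non-terminating pattern for yourself with a minimal Python sketch (the function name is my own invention). It extracts the fractional binary digits of 0.2 by repeated doubling; using an exact Fraction keeps floating-point round-off from muddying the picture:

```python
from fractions import Fraction

def frac_bits(value, n_bits):
    """Return the first n_bits fractional binary digits of value (0 <= value < 1)."""
    bits = []
    for _ in range(n_bits):
        value *= 2
        bit = int(value)   # the integer part of the doubled value is the next bit
        bits.append(str(bit))
        value -= bit       # keep only the fractional part for the next round
    return "".join(bits)

print(frac_bits(Fraction(1, 5), 16))  # -> 0011001100110011
```

However many bits we allow, the 0011 pattern just keeps repeating, so a binary fixed-point representation of 0.2 is always truncated.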

Let's take a quick look at just what BCD is. There are, in fact, multiple formats for this representation, but (ignoring packed versus unpacked variants) the most common usage simply encodes each decimal digit in its own four-bit nibble: 0 = 0000, 1 = 0001, 2 = 0010, and so on up to 9 = 1001. The six remaining four-bit codes (1010 through 1111) are unused.

Now that we know what it is, how do we use it? In an FPGA, I would imagine that the most likely use might be in output and input encoding to some external (human) user. With that thought in mind, let's talk about how we might go about interfacing to the outside world.

Let's consider the case where we want to do all our manipulations in binary. In this case, we will want to convert our BCD encoded data into binary. This can be achieved as illustrated below.

We start off with a binary value of zero, and then we add each BCD digit multiplied by its decimal power. Of course, those multipliers might be pretty big, but if we get clever, maybe we can use the x10 schemes we discussed in my previous blog to our advantage and replace these complex multiplications with a pipelined configuration of shifts and adds. I'll leave it to you to ponder that for a while and figure out how to do it for yourself.
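As a sketch of the idea (in Python rather than HDL, with function names of my own invention), here is the digit-by-digit conversion, with each multiply-by-10 decomposed into the shift-and-add form 10x = 8x + 2x:

```python
def times_ten(x):
    # 10*x = 8*x + 2*x, i.e. two shifts and one add -- no multiplier needed
    return (x << 3) + (x << 1)

def bcd_to_binary(digits):
    """Convert a list of BCD digits, most significant first, to a binary integer."""
    value = 0
    for digit in digits:
        # shift the running total up one decimal place, then add the new digit
        value = times_ten(value) + digit
    return value

print(bcd_to_binary([3, 0, 2]))  # -> 302
```

In hardware, each loop iteration would map naturally onto one stage of a pipeline: a pair of hard-wired shifts feeding an adder.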

Once we're done with our binary machinations/manipulations, we might want to display the result to a seven-segment display. Unfortunately, binary doesn't map directly to BCD, does it?

To translate from binary to BCD, we can employ the shift-and-add-3 algorithm:

1. Left-shift the (n-bit) binary number one bit.

2. If n shifts have taken place, the number has been fully converted, so exit the algorithm.

3. If the binary value of any of the BCD columns is greater than or equal to 5, add 3 to that column.

4. Return to step (1).

As an example, let's use the nine-bit number 100101110 (302 decimal) and run it through our algorithm.
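The steps above can be modeled in Python like so (a software model, not synthesizable HDL; the register layout and names are my own). The BCD digits and the binary value share one wide shift register, just as they would in hardware:

```python
def binary_to_bcd(value, n_bits, n_digits):
    """Shift-and-add-3 ('double dabble'): convert an n_bits binary value to BCD digits."""
    # One wide register: BCD columns on the left, the binary value on the right.
    reg = value
    for shift in range(n_bits):
        reg <<= 1                                    # step 1: shift left one bit
        if shift == n_bits - 1:                      # step 2: after n shifts, we're done
            break
        for d in range(n_digits):                    # step 3: check each BCD column
            nibble = (reg >> (n_bits + 4 * d)) & 0xF
            if nibble >= 5:
                # add 3 so the next shift carries correctly into the next column
                reg += 3 << (n_bits + 4 * d)
    bcd = reg >> n_bits                              # the binary bits have all shifted out
    return [(bcd >> (4 * d)) & 0xF for d in range(n_digits - 1, -1, -1)]

print(binary_to_bcd(0b100101110, 9, 3))  # -> [3, 0, 2]
```

Running our nine-bit 100101110 through it does indeed pop out the BCD digits 3, 0, 2.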

In some cases, maybe we shouldn't even bother converting back and forth between binary and BCD. Doing math with BCD is fairly straightforward. As an added benefit, we can treat each digit as a (mostly) separate circuit. Let's talk about addition, which is performed as follows.

1. Align the decimal points (if necessary).

2. Starting from the right (the one's digit), add the two digits.

3. If the result is greater than 9 OR the result generates a carry, add 6.

4. Propagate the carry to the next digit.

5. If all digits have been added, we're done.

6. Go to step (2).

As an example, let's use our algorithm to add 93 and 79. (It does not matter which order you perform the adds, so long as you check between each for the out-of-bounds condition.)
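Here's a quick Python sketch of those addition steps (digit lists and names are my own convention). Note that in Python the s > 9 test alone covers both conditions from step (3), because the digit sums never wrap the way a 4-bit adder's output would:

```python
def bcd_add(a_digits, b_digits):
    """Add two BCD numbers given as digit lists, most significant digit first."""
    width = max(len(a_digits), len(b_digits)) + 1   # leave room for a final carry
    a = ([0] * width + a_digits)[-width:]           # left-pad with zeros
    b = ([0] * width + b_digits)[-width:]
    result = [0] * width
    carry = 0
    for i in range(width - 1, -1, -1):              # start from the one's digit
        s = a[i] + b[i] + carry
        if s > 9:                                   # out of BCD range: add 6 to correct
            s += 6
        carry = s >> 4                              # carry out of the 4-bit column
        result[i] = s & 0xF                         # keep the low nibble as the digit
    return result

print(bcd_add([9, 3], [7, 9]))  # -> [1, 7, 2]
```

Adding 93 and 79 this way gives the digits 1, 7, 2 -- i.e., 172, just as pencil and paper would.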

@Tomii: One interesting point I forgot to mention is that you can use BCD representations to represent anything you want -- so you could use BCD to represent integer, fixed-point, or floating-point values -- now my head hurts :-)

No, I actually meant the wealth of numbers that are on the real axis of the complex plane (ow, that makes my head hurt). A decimal is a fraction (maybe), if you can just figure out what it is. Now, I'm not trying to say any of those ways you can represent the real numbers is correct, because we're almost always going to lose something somewhere.

Like I keep saying, evaluate your alternatives, and choose the method(s) that best match your needs/requirements. My way most assuredly is not the best way, and your mileage (like my own) may vary.

Still on nine's and ten's complements -- as you noted, the way we perform a subtraction in binary is based on the fact that performing the operation (a - b) is the same as performing the operation (a + (-b)). In binary, the -b is the two's complement of +b, and we can easily generate this by inverting all of the bits in b (to generate the one's complement) and then adding 1. It's also easy to do this in hardware, because we already have an adder, so we just invert all of the bits in b and then we set the carry-in to the adder to 1 (which equates to the "add 1").

The cool thing is that we can do much the same thing with nine's and ten's complements.
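For instance, here's a minimal Python sketch of ten's-complement subtraction (function names are my own; for clarity the digits are handled as plain decimal values rather than as 4-bit adder outputs):

```python
def tens_complement(digits):
    """Ten's complement: nine's complement of each digit, plus one."""
    nines = [9 - d for d in digits]   # nine's complement needs no borrows at all
    carry = 1                         # the "plus one", propagated from the one's digit
    out = []
    for d in reversed(nines):
        d += carry
        carry = d // 10
        out.append(d % 10)
    return list(reversed(out))

def bcd_subtract(a, b):
    """Compute a - b (for a >= b) by adding the ten's complement of b."""
    comp = tens_complement(b)
    width = len(a)
    carry = 0
    result = [0] * width
    for i in range(width - 1, -1, -1):
        s = a[i] + comp[i] + carry
        carry = s // 10
        result[i] = s % 10
    return result   # the final carry out is simply discarded, just as in binary

print(bcd_subtract([9, 3], [7, 9]))  # -> [1, 4]
```

So 93 - 79 becomes 93 + 21 = (1)14; drop the carry and we're left with 14. In real hardware, of course, we'd reuse the BCD adder (with its add-6 correction) rather than decimal arithmetic.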

For those who want to learn more, you can check out my book How Computers Do Math. Also, you might find the following two columns to be of interest:

@Tom: You said, "I would guess that the three most common are probably Binary Coded Decimal (BCD), Floating Point, and Fixed Point."

I guess I would have said: Integer (unsigned and signed), Fixed-point, Floating-point, and BCD. Also, integers may be considered to be a special case of fixed-point (i.e., fixed-point without the fractional part).

Traditionally, FPGA designers worked predominantly with integer or fixed-point (with some BCD for hardware acceleration targeted at financial applications / high-performance computing). More recently, they are being used to implement floating-point calculations for things like beam-forming and radar applications -- Altera has some really cool stuff in this regard that helps you map a floating-point pipeline onto their DSP slices and programmable fabric.

@Tom: You said, "There are plenty of ways to represent 'real' numbers in a binary system."

The "real" numbers confused me for a moment because, in mathematics, a real number is a value that represents a quantity along a continuous line -- including integers, fractions, and irrational numbers such as the square root of two. But I think that when you said "real" numbers you meant "numbers in the real world" ... is that right?

Hi Tom -- you've touched on a topic that's close to my heart. As you mention, there are some numbers that are easy to represent in decimal (like 0.1) that cannot be exactly represented in binary (the equivalent to 0.1 is an ever-recurring pattern of 1s and 0s -- since we don't have infinitely large fields in which to store these numbers, we have to truncate, which leads to errors).

Of course, there are some numbers that are difficult to specify in decimal (base-10) -- take 1/3, for example, which equates to 0.333333r (recurring). This would be easy to specify in certain other bases. Similarly, there are some values that are easy to represent in base-2 that cannot easily be represented in base-10.

You mention one reason for using BCD is when you are accepting input from decimal-based devices or providing output to decimal-based devices. There is another big reason -- in many countries it is mandated by law that any financial calculations performed on a computer must return EXACTLY the same results as if they had been performed with pencil and paper. This is so that everyone knows exactly what will happen with regard to rounding values and so forth. The only way to achieve this is for the calculations to be performed using some sort of base-10 representation, like BCD. Check out some blogs I wrote on this topic a few years back: Binary Coded Decimal (BCD) 101 and BCD coming to an FPGA near you.