Floating Point Numbers & Currency Rounding Errors

A reminder to those who learned this lesson long ago, and a warning to those who have not yet learned it for themselves: Don’t use floating point numbers for money.

When doing any kind of calculation with currency, accuracy is extremely important. And floating point numbers (floats and doubles) don’t have an accurate enough representation to prevent rounding errors from accumulating when doing arithmetic with monetary values.

Representing money as a double or float will probably look good at first as the software rounds off the tiny errors, but as you perform more additions, subtractions, multiplications and divisions on inexact numbers, you’ll lose more and more precision as the errors add up. This makes floats and doubles inadequate for dealing with money, where perfect accuracy for multiples of base 10 powers is required.
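To see the problem concretely, here is a minimal Ruby sketch of the kind of error involved:

```ruby
# 0.1 and 0.2 have no exact binary representation, so even this
# single addition is already slightly off.
puts 0.1 + 0.2        # 0.30000000000000004
puts 0.1 + 0.2 == 0.3 # false
```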

Ways it Can Go Wrong

When it comes to dealing with money in a computer program, a developer needs to stay ever-vigilant. Just because you, or a previous developer, made the right call by using a DECIMAL column when creating the database schema, doesn’t mean that accuracy can’t be lost before the result of some calculation is displayed to an end user. It’s all too easy to slip up and lose the necessary accuracy without even realizing it.

1. During Calculations

In order to maintain the required accuracy for financial calculations, the best option (in my experience) is to use a built-in decimal type if your language provides one (e.g. Java and Ruby both have a BigDecimal class, C# has a decimal type).

In Ruby:

amount = BigDecimal("5.99")

2. During Storage

To ensure an accurate representation of the currency amount in the database, use something like MySQL’s DECIMAL column. One StackOverflow response indicates that DECIMAL(15,2) is the way to go. PostgreSQL even has a dedicated money type that can be used for a column.

If you are creating your schema using a Rails migration, you can specify a decimal column like this:
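The original snippet isn’t shown here; a typical version, assuming a products table with a price column (the table and column names are illustrative), would look something like this:

```ruby
class AddPriceToProducts < ActiveRecord::Migration[7.0]
  def change
    # precision is the total number of significant digits;
    # scale is the number of digits after the decimal point
    add_column :products, :price, :decimal, precision: 15, scale: 2
  end
end
```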

As this Decimal numbers in Rails and MySQL post points out, if you forget to specify the precision and scale, Rails will default (for a MySQL database) to a precision of 10 and a scale of 0, which means 15.37 will be stored in the database as 15!

3. During Retrieval

Now that the database is storing the monetary values accurately, you don’t want to do anything to lose that accuracy when retrieving values from it. In a Rails app you don’t need to worry about it, because ActiveRecord automatically uses BigDecimal for DECIMAL type columns. But if you are using an alternative ORM, like DataMapper, you need to make sure you are using Decimal for your properties, and not Float.
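For example, a DataMapper model would declare the property like this (a sketch; the model and property names are illustrative):

```ruby
class LineItem
  include DataMapper::Resource

  property :id,    Serial
  # Decimal (not Float) maps the column to a Ruby BigDecimal
  property :price, Decimal, precision: 15, scale: 2
end
```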

The same opportunity for error arises in ORMs in most languages. For example, when using Java’s Hibernate ORM, you need to declare the mapped field as a BigDecimal, not a Float or Double.

The same vigilance is needed when not using an ORM at all. I recently discovered a problem in some Java reporting code that was using straight JDBC calls, and extracting a price value from the database with rset.getDouble("price") instead of rset.getBigDecimal("price").

4. Before Display

Just because a set of monetary values has been saved in the database accurately, and then retrieved from the database accurately, doesn’t mean you get to relax. Any time you are doing a calculation with a set of monetary values, you need to be careful to not accidentally lose the accuracy you’ve worked so hard to maintain by introducing a floating point error just before displaying a value to the end user.

Here is a seemingly correct totaling up of BigDecimal values from a Ruby program:

subtotals.inject(0.0) do |total, val|
  total += val
end

But by providing that initial value of 0.0, all of the BigDecimals are converted to floats, and errors can be introduced. Instead it should look like:

subtotals.inject(BigDecimal(0)) do |total, val|
  total += val
end

Any language that will silently convert from a decimal type to a float/double is susceptible to this easy-to-make mistake.
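A quick way to see the difference is to run the same accumulation both ways (a small sketch; the subtotal values are illustrative):

```ruby
require "bigdecimal"

# Accumulating ten 0.1s as Floats shows the drift
float_total = Array.new(10) { 0.1 }.inject(0.0) { |total, val| total + val }

# Keeping everything in BigDecimal stays exact
decimal_total = Array.new(10) { BigDecimal("0.1") }
                     .inject(BigDecimal(0)) { |total, val| total + val }

puts float_total             # 0.9999999999999999
puts decimal_total.to_s("F") # 1.0
```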

Resources

For some good in-depth explanation/analysis of why floating point numbers are not acceptable for representing money, give these a read:

Adding a Float and a BigDecimal will return a BigDecimal. And I believe the float 0.0 specifically is guaranteed not to have rounding errors (since it can be represented exactly in binary), so that first addition won’t introduce any error.

It’s still true that you need to be alert so you stay in the world of BigDecimals, but this example is actually not dangerous (except as a bad habit).

Note that BigDecimal and C#’s decimal are different beasts. BigDecimal is arbitrary-precision, while C#’s decimal is still a floating-point number: a high-precision decimal floating-point number, rather than a medium-precision binary floating-point number like double.

Floats and doubles cannot accurately represent the base 10 multiples we use for money: it is impossible to represent 0.1 (or any other negative power of ten) exactly in binary. This issue is not specific to Java; it affects any programming language that relies on native binary floating-point types, as it stems from how computers handle floating-point numbers by default.
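You can see the stored approximation by printing more digits than the default display shows; in Ruby:

```ruby
# The double nearest to 0.1 is actually slightly larger than 0.1
puts format("%.20f", 0.1) # 0.10000000000000000555
```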

Example

Suppose you have $1.03 and you spend 42 cents. How much money do you have remaining?

System.out.println(1.03 - .42);

It prints 0.6100000000000001.

The solution to this problem is to use BigDecimal for monetary calculations, or to represent amounts as whole cents using an int or long.
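In Ruby, the same calculation stays exact with either approach (a small sketch):

```ruby
require "bigdecimal"

# Exact decimal arithmetic with BigDecimal
change = BigDecimal("1.03") - BigDecimal("0.42")
puts change.to_s("F") # 0.61

# Or keep everything in integer cents
cents_remaining = 103 - 42
puts cents_remaining # 61
```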