Double or nothing

29 February 2016

You are working on a new feature where you need to use decimal numbers, and you know from the requirements that calculations must be precise. If you are new to Java, or if your knowledge cache needs a refresh, you search for and read a tutorial about floating point numbers. You know what kind of values you need to handle in your program, you check their types, formats and values, and then suddenly you wonder: why don't your numbers add up? You read the tutorial again and you finally understand that:

float: The float data type is a single-precision 32-bit IEEE 754 floating point. As with the recommendations for byte and short, use a float (instead of double) if you need to save memory in large arrays of floating point numbers. This data type should never be used for precise values, such as currency. For that, you will need to use the java.math.BigDecimal class instead. Numbers and Strings covers BigDecimal and other useful classes provided by the Java platform.

The double data type is a double-precision 64-bit IEEE 754 floating point. For decimal values, this data type is generally the default choice. As mentioned above, this data type should never be used for precise values, such as currency.
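To see the kind of surprise the tutorial warns about, here is a small illustrative snippet (my own example, not from the tutorial): summing 0.1 ten times with a double does not give exactly 1.0, while BigDecimal built from the String "0.1" represents the value exactly.

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // Summing 0.1 ten times with double accumulates binary rounding error,
        // because 0.1 has no exact binary representation.
        double d = 0.0;
        for (int i = 0; i < 10; i++) {
            d += 0.1;
        }
        System.out.println(d);        // prints 0.9999999999999999
        System.out.println(d == 1.0); // false

        // BigDecimal constructed from the String "0.1" is exact.
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            b = b.add(new BigDecimal("0.1"));
        }
        System.out.println(b);                                // prints 1.0
        System.out.println(b.compareTo(BigDecimal.ONE) == 0); // true
    }
}
```

Note that `new BigDecimal(0.1)` (the double constructor) would inherit the binary rounding error; the String constructor is what makes the value exact.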

You now have the knowledge about these primitive types and can decide when and how to use them, or you can even go to extremes, decide that the Java float and double primitives are evil, and stop using them altogether. There are so many resources on this subject that it is really hard to come up with something new. The only thing I want to check now is how fast or slow simple calculations with these types are compared with the more precise BigDecimal. To do this I prepared a JMH benchmark:
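The benchmark source itself is not reproduced here; the following is a rough reconstruction of the operations it measures, with names guessed from the result table below. The JMH annotations are left as comments so the snippet compiles without the jmh-core dependency; in the real benchmark each measure* method would carry them.

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Sketch of the measured operations. In an actual JMH benchmark each
// measure* method would be annotated with @Benchmark (plus
// @BenchmarkMode(Mode.AverageTime) and @OutputTimeUnit(TimeUnit.NANOSECONDS)),
// and the operands would live in a @State(Scope.Thread) class so JMH can
// feed them in without constant folding.
public class FloatingPointOperations {

    // Operand values are made up for illustration.
    float f1 = 1.1f, f2 = 2.2f;
    double d1 = 1.1, d2 = 2.2;
    BigDecimal b1 = new BigDecimal("1.1"), b2 = new BigDecimal("2.2");

    // @Benchmark
    public double measureAddDouble()         { return d1 + d2; }
    public float  measureAddFloat()          { return f1 + f2; }
    public BigDecimal measureAddBigDecimal() { return b1.add(b2); }

    // @Benchmark -- subtraction and multiplication follow the same pattern.
    public double measureDivDouble() { return d1 / d2; }
    // BigDecimal division needs an explicit precision, otherwise a
    // non-terminating decimal expansion throws ArithmeticException.
    public BigDecimal measureDivBigDecimal() {
        return b1.divide(b2, MathContext.DECIMAL64);
    }

    // @Benchmark -- the "All" variants chain the four operations in one call.
    public double measureAllDouble() { return ((d1 + d2) - d1) * d2 / d1; }
    public float  measureAllFloat()  { return ((f1 + f2) - f1) * f2 / f1; }
    public BigDecimal measureAllBigDecimal() {
        return b1.add(b2).subtract(b1).multiply(b2).divide(b1, MathContext.DECIMAL64);
    }

    public static void main(String[] args) {
        FloatingPointOperations ops = new FloatingPointOperations();
        System.out.println("add double:     " + ops.measureAddDouble());
        System.out.println("add BigDecimal: " + ops.measureAddBigDecimal());
        System.out.println("all BigDecimal: " + ops.measureAllBigDecimal());
    }
}
```

The @State class matters in the real thing: if the operands were compile-time constants, the JIT could fold an expression like `d1 + d2` away entirely and the benchmark would measure nothing.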

The results of this test were not a big surprise, with BigDecimal calculations being the slowest:


Benchmark                                      Mode  Cnt   Score    Error  Units
FloatingPointOperations.baseline               avgt   10   0.331 ±  0.012  ns/op
FloatingPointOperations.measureAddBigDecimal   avgt   10  11.618 ±  1.754  ns/op
FloatingPointOperations.measureAddDouble       avgt   10   2.871 ±  0.098  ns/op
FloatingPointOperations.measureAddFloat        avgt   10   2.934 ±  0.131  ns/op
FloatingPointOperations.measureAllBigDecimal   avgt   10  95.048 ± 13.347  ns/op
FloatingPointOperations.measureAllDouble       avgt   10   9.038 ±  0.155  ns/op
FloatingPointOperations.measureAllFloat        avgt   10   4.584 ±  0.039  ns/op
FloatingPointOperations.measureDivBigDecimal   avgt   10  27.309 ±  2.724  ns/op
FloatingPointOperations.measureDivDouble       avgt   10   4.641 ±  0.210  ns/op
FloatingPointOperations.measureDivFloat        avgt   10   3.182 ±  0.039  ns/op
FloatingPointOperations.measureMultBigDecimal  avgt   10  11.956 ±  2.377  ns/op
FloatingPointOperations.measureMultDouble      avgt   10   2.902 ±  0.049  ns/op
FloatingPointOperations.measureMultFloat       avgt   10   2.950 ±  0.157  ns/op
FloatingPointOperations.measureSubBigDecimal   avgt   10  11.120 ±  1.160  ns/op
FloatingPointOperations.measureSubDouble       avgt   10   2.945 ±  0.096  ns/op
FloatingPointOperations.measureSubFloat        avgt   10   2.893 ±  0.084  ns/op

What is a surprise to me is why combined calculations with double (see measureAllDouble) take twice as long as the ones with float (see measureAllFloat), when single calculations take almost the same time. Do you have an idea?