Recommending comparison with a tolerance is inappropriate advice because it decreases false reports of inequality at the expense of increasing false reports of equality, and you cannot know whether that trade-off is acceptable to an application you know nothing about. The application might be “more interested” in detecting inequality than detecting equality, or might have other specifications it needs to meet.
–
Eric Postpischil Jul 30 '13 at 0:36

2

@Eric - When working with floating point numbers there is no notion of identity or inequality; there is only a notion of distance. If in the formula I gave in the answer you replace < with >, you get a criterion for comparing floating point numbers for inequality in terms of distance. Bitwise identity of floating point numbers' representation in computer memory is of no interest for most practical applications.
–
bobah Jul 30 '13 at 7:38
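A minimal sketch of the distance criterion described in this comment; the tolerance value is an illustrative assumption (and, per the comment, swapping < for > turns the equality test into an inequality test):

```java
// Distance-based comparison as described in the comment above.
// The tolerance is an illustrative assumption; choose it per application.
public class DistanceCompare {
    // "Equal" in the distance sense: a and b are closer than tol.
    static boolean near(double a, double b, double tol) {
        return Math.abs(a - b) < tol;
    }

    // Replacing < with > gives the corresponding inequality criterion.
    static boolean far(double a, double b, double tol) {
        return Math.abs(a - b) > tol;
    }

    public static void main(String[] args) {
        double x = 0.1 + 0.2;                    // 0.30000000000000004
        System.out.println(x == 0.3);            // false: bitwise test fails
        System.out.println(near(x, 0.3, 1e-9));  // true: within distance
    }
}
```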

1

You are examining a damped oscillator and want to distinguish underdamping, overdamping, and critical damping. This requires a strict test, with no tolerance; allowing a tolerance would lead to taking the square root of a negative number. However, in spite of this example, your response is a straw man: advising against comparing with a tolerance does not imply comparing for exact equality, because there are other options. For example, one possibility is to avoid using a comparison at all; just report the best result available without attempting to force it to a quantized result.
–
Eric Postpischil Jul 30 '13 at 17:05
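The damped-oscillator case might be sketched as follows, for m·x'' + c·x' + k·x = 0 (the equation and method names are illustrative assumptions, not from the comment):

```java
// Classifying a damped oscillator m*x'' + c*x' + k*x = 0 by the sign of
// the discriminant c^2 - 4*m*k. The sign test must be strict: a tolerance
// that treats a slightly negative discriminant as non-negative would later
// feed a negative number to Math.sqrt when computing the real roots,
// yielding NaN.
public class Damping {
    static String classify(double m, double c, double k) {
        double disc = c * c - 4.0 * m * k;  // discriminant of m*r^2 + c*r + k
        if (disc > 0.0) return "overdamped";    // two distinct real roots
        if (disc < 0.0) return "underdamped";   // complex roots: oscillation
        return "critically damped";             // discriminant exactly zero
    }

    public static void main(String[] args) {
        System.out.println(classify(1.0, 3.0, 1.0)); // overdamped (9 - 4 > 0)
        System.out.println(classify(1.0, 1.0, 1.0)); // underdamped (1 - 4 < 0)
        System.out.println(classify(1.0, 2.0, 1.0)); // critically damped (4 - 4 = 0)
    }
}
```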

2

Regardless of any examples, there is a fundamental problem in advising people to compare using a tolerance. It increases false reports of equality, and, because you do not know the application, you cannot know whether this is acceptable or is a problem.
–
Eric Postpischil Jul 30 '13 at 17:06

If you are interested in fixed precision numbers, you should be using a fixed precision type like BigDecimal, not an inherently approximate (though high precision) type like float. There are numerous similar questions on Stack Overflow that go into this in more detail, across many languages.
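A short sketch of that approach; note that the BigDecimal values must be built from Strings here, since new BigDecimal(1.2) would capture the binary rounding error this answer warns about:

```java
import java.math.BigDecimal;

// Exact decimal arithmetic with BigDecimal: 1.2 * 3 really is 3.6.
public class ExactDecimal {
    public static void main(String[] args) {
        BigDecimal product = new BigDecimal("1.2").multiply(new BigDecimal("3"));
        System.out.println(product);                                   // 3.6
        System.out.println(product.compareTo(new BigDecimal("3.6")));  // 0 (equal)

        System.out.println(1.2 * 3 == 3.6);  // false with double arithmetic
    }
}
```

compareTo is used rather than equals because BigDecimal.equals also compares scale (3.60 is not equals to 3.6).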

I think this has nothing to do with Java; it happens with any IEEE 754 floating point number, because of the nature of floating point representation. Any language that uses the IEEE 754 format will encounter the same problem.

As suggested by David above, you should use the abs method of the java.lang.Math class to get the absolute value (i.e., drop the positive/negative sign).
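For instance (a minimal sketch; the tolerance 1e-9 is an illustrative choice, and as the comments above point out, an acceptable tolerance depends on the application):

```java
// Tolerance comparison via Math.abs; EPS is an illustrative assumption.
public class AbsCompare {
    static final double EPS = 1e-9;

    public static void main(String[] args) {
        double a = 1.2 * 3;  // 3.5999999999999996 in binary64
        double b = 3.6;
        System.out.println(a == b);                 // false: exact comparison
        System.out.println(Math.abs(a - b) < EPS);  // true: within tolerance
    }
}
```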

This is a weakness of all floating point representations. It happens because some numbers that appear to have a finite number of digits in the decimal system actually have an infinite number of digits in the binary system. So what you think is 1.2 is actually something like 1.1999999999999999, because when representing it in binary the digits have to be chopped off after a certain point, and you lose some precision. Multiplying it by 3 then actually gives 3.5999999...
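This can be made visible from Java itself: the BigDecimal(double) constructor preserves the exact binary value of a double, so printing it shows what the literal 1.2 actually stores (a small demonstration under that assumption):

```java
import java.math.BigDecimal;

// Print the exact binary64 value behind the literal 1.2, and the
// rounded product 1.2 * 3.
public class ExactValue {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(1.2));
        // 1.1999999999999999555910790149937383830547332763671875
        System.out.println(1.2 * 3);
        // 3.5999999999999996
    }
}
```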

I’m using this bit of code in unit tests to compare whether the outcomes of two different calculations are the same, barring floating point math errors.

It works by looking at the binary representation of the floating point number. Most of the complication is due to the fact that floating point numbers use a sign-magnitude representation rather than two’s complement for negative values. After compensating for that, it basically comes down to a simple subtraction that gives the difference in ULPs (explained in the comment below).
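The technique described above might be sketched like this (the names are hypothetical; the original code is not shown here). The sign-magnitude bit pattern is remapped so that integer order matches numeric order, after which the ULP distance is a plain subtraction:

```java
// ULP-distance comparison: map each double's bit pattern onto a
// lexicographically ordered integer line, then subtract. The result is
// the number of representable doubles between a and b.
public class UlpCompare {
    static long ordered(double d) {
        long bits = Double.doubleToLongBits(d);
        // Doubles use sign-magnitude, not two's complement: remap negative
        // patterns so that integer order matches numeric order
        // (this also maps -0.0 and +0.0 to the same integer, 0).
        return bits >= 0 ? bits : Long.MIN_VALUE - bits;
    }

    static long ulpDistance(double a, double b) {
        return Math.abs(ordered(a) - ordered(b));
    }

    public static void main(String[] args) {
        double x = 0.1 + 0.2;
        System.out.println(ulpDistance(x, 0.3)); // 1: adjacent doubles
    }
}
```

A unit test would then assert something like ulpDistance(expected, actual) <= maxUlps for a small maxUlps.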