> would it be more efficient to add ... in the beginning of the function? <invoke Knuth>Premature optimization is the root of all evil.</invoke Knuth> Just go with abs(a-b) < EPS as noted above, it's clear and easy to understand.
– Andrew Coleson Aug 20 '08 at 5:55

The only thing suboptimal about the original poster's implementation is that it contains an extra branch at &&. OJ's answer is optimal. fabs is an intrinsic that is a single instruction on x87, and I suppose on almost anything else too. Accept OJ's answer already!
– 3yE Mar 27 '10 at 17:04

If you can, drop the floating point and use fixed point. For example, use {fixed point} millimeters instead of {floating point} meters.
– Thomas Matthews Jun 11 '12 at 2:11
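A minimal sketch of that suggestion (the alias and helper below are illustrative, not from the comment): store lengths as integer micrometers, and == becomes an exact integer comparison.

#include <cstdint>

// Hypothetical fixed-point length type: integer micrometers.
using Micrometers = std::int64_t;

// Round a floating-point length in meters to the nearest micrometer.
Micrometers FromMeters(double m)
{
    return static_cast<Micrometers>(m * 1e6 + (m >= 0 ? 0.5 : -0.5));
}

// FromMeters(0.001) == FromMeters(0.0010000001) -- an exact integer compare.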

"Simply doing this is not correct" - This is mere rubbish, of course using == can be perfectly correct, but this entirely depends on the context not given in the question. Until that context is known, == still stays the "most efficient way".
– Christian Rau May 13 '13 at 7:39

27 Answers

Be extremely careful using any of the other suggestions. It all depends on context.

I have spent a long time tracing bugs in a system that presumed a==b if |a-b|<epsilon. The underlying problems were:

The implicit presumption in an algorithm that if a==b and b==c then a==c.

Using the same epsilon for lines measured in inches and lines measured in mils (.001 inch). That is a==b but 1000a!=1000b. (This is why AlmostEqual2sComplement asks for the epsilon or max ULPS).

The use of the same epsilon for both the cosine of angles and the length of lines!

Using such a compare function to sort items in a collection. (In this case using the builtin C++ operator == for doubles produced correct results.)

Like I said: it all depends on context and the expected size of a and b.

BTW, std::numeric_limits<double>::epsilon() is the "machine epsilon". It is the difference between 1.0 and the next value representable by a double. I guess that it could be used in the compare function but only if the expected values are less than 1. (This is in response to @cdv's answer...)
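A small program illustrating that definition (the printed values are what IEEE-754 doubles produce):

#include <cstdio>
#include <limits>

int main()
{
    const double eps = std::numeric_limits<double>::epsilon();
    std::printf("%.17g\n", eps);               // 2.2204460492503131e-16
    std::printf("%d\n", 1.0 + eps == 1.0);     // 0: adding eps does change 1.0
    std::printf("%d\n", 1.0 + eps / 4 == 1.0); // 1: too small to change 1.0
}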

Also, if you basically have int arithmetic in doubles (here we use doubles to hold int values in certain cases) your arithmetic will be correct. For example 4.0/2.0 will be the same as 1.0+1.0. This holds as long as you do not do things that result in fractions (4.0/3.0) and do not go outside the range of an int.
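A short demonstration of that point; the first two assertions hold because integer-valued doubles are exact (up to 2^53), while the last shows where fractions break ==:

#include <cassert>

int main()
{
    assert(4.0 / 2.0 == 1.0 + 1.0); // both exactly 2.0
    assert(10.0 * 10.0 == 100.0);   // integer-valued, exact
    assert(0.1 + 0.2 != 0.3);       // fractional values: == breaks down
}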

+1 for pointing out the obvious (that often gets ignored). For a generic method, you can make the epsilon relative to fabs(a)+fabs(b), but compensating for NaN, zero sums, and overflow makes this quite complex.
– peterchen Aug 7 '10 at 6:41

4

There must be something I don't understand. The typical float/double is MANTISSA x 2^EXP. epsilon will be dependent on the exponent. For example, if the mantissa is 24 bits and the exponent is a signed 8-bit value, then 1/(2^24)*2^127, or ~2^103, is an epsilon for some values; or is this referring to a minimum epsilon?
– artless noise Mar 17 '13 at 18:54

This code is only correct when either a or b is 1.0.
– Don Reba Jul 29 '11 at 14:02

@DonReba: Only if EPSILON is defined as DBL_EPSILON. Normally it will be a specific value chosen depending on the required accuracy of the comparison.
– Nemo157 Dec 20 '11 at 20:38

EPSILON comparison does not work when the floats are large, as the difference between consecutive floats also becomes large. See this article.
– kevintodisco Oct 23 '13 at 4:04

No wonder there is Z-fighting in some games when textures/objects far away flicker, like in Battlefield 4. Comparing the difference with EPSILON is pretty much useless. You need to compare with a threshold that makes sense for the units at hand. Also, use std::abs since it is overloaded for different floating point types.
– Maxim Egorushkin Feb 19 '14 at 16:40

I downvoted since the example code shows the typical bug which is repeated by the majority of programmers. Floating point is always about relative errors, since it is floating point (not fixed point). So it will never work correctly with a fixed error (epsilon).
– user2261015 Apr 13 '17 at 7:58

// Copyright 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee)
//
// The Google C++ Testing Framework (Google Test)
// This template class serves as a compile-time function from size to
// type. It maps a size in bytes to a primitive type with that
// size. e.g.
//
// TypeWithSize<4>::UInt
//
// is typedef-ed to be unsigned int (unsigned integer made up of 4
// bytes).
//
// Such functionality should belong to STL, but I cannot find it
// there.
//
// Google Test uses this class in the implementation of floating-point
// comparison.
//
// For now it only handles UInt (unsigned int) as that's all Google Test
// needs. Other types can be easily added in the future if need
// arises.
template <size_t size>
class TypeWithSize {
 public:
  // This prevents the user from using TypeWithSize<N> with incorrect
  // values of N.
  typedef void UInt;
};

// The specialization for size 4.
template <>
class TypeWithSize<4> {
 public:
  // unsigned int has size 4 in both gcc and MSVC.
  //
  // As base/basictypes.h doesn't compile on Windows, we cannot use
  // uint32, uint64, and etc here.
  typedef int Int;
  typedef unsigned int UInt;
};

// The specialization for size 8.
template <>
class TypeWithSize<8> {
 public:
#if GTEST_OS_WINDOWS
  typedef __int64 Int;
  typedef unsigned __int64 UInt;
#else
  typedef long long Int;            // NOLINT
  typedef unsigned long long UInt;  // NOLINT
#endif  // GTEST_OS_WINDOWS
};
// This template class represents an IEEE floating-point number
// (either single-precision or double-precision, depending on the
// template parameters).
//
// The purpose of this class is to do more sophisticated number
// comparison. (Due to round-off error, etc, it's very unlikely that
// two floating-points will be equal exactly. Hence a naive
// comparison by the == operation often doesn't work.)
//
// Format of IEEE floating-point:
//
// The most-significant bit being the leftmost, an IEEE
// floating-point looks like
//
// sign_bit exponent_bits fraction_bits
//
// Here, sign_bit is a single bit that designates the sign of the
// number.
//
// For float, there are 8 exponent bits and 23 fraction bits.
//
// For double, there are 11 exponent bits and 52 fraction bits.
//
// More details can be found at
// http://en.wikipedia.org/wiki/IEEE_floating-point_standard.
//
// Template parameter:
//
// RawType: the raw floating-point type (either float or double)
template <typename RawType>
class FloatingPoint {
 public:
  // Defines the unsigned integer type that has the same size as the
  // floating point number.
  typedef typename TypeWithSize<sizeof(RawType)>::UInt Bits;

  // Constants.

  // # of bits in a number.
  static const size_t kBitCount = 8*sizeof(RawType);

  // # of fraction bits in a number.
  static const size_t kFractionBitCount =
      std::numeric_limits<RawType>::digits - 1;

  // # of exponent bits in a number.
  static const size_t kExponentBitCount = kBitCount - 1 - kFractionBitCount;

  // The mask for the sign bit.
  static const Bits kSignBitMask = static_cast<Bits>(1) << (kBitCount - 1);

  // The mask for the fraction bits.
  static const Bits kFractionBitMask =
      ~static_cast<Bits>(0) >> (kExponentBitCount + 1);

  // The mask for the exponent bits.
  static const Bits kExponentBitMask = ~(kSignBitMask | kFractionBitMask);

  // How many ULP's (Units in the Last Place) we want to tolerate when
  // comparing two numbers. The larger the value, the more error we
  // allow. A 0 value means that two numbers must be exactly the same
  // to be considered equal.
  //
  // The maximum error of a single floating-point operation is 0.5
  // units in the last place. On Intel CPU's, all floating-point
  // calculations are done with 80-bit precision, while double has 64
  // bits. Therefore, 4 should be enough for ordinary use.
  //
  // See the following article for more details on ULP:
  // http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm.
  static const size_t kMaxUlps = 4;

  // Constructs a FloatingPoint from a raw floating-point number.
  //
  // On an Intel CPU, passing a non-normalized NAN (Not a Number)
  // around may change its bits, although the new value is guaranteed
  // to be also a NAN. Therefore, don't expect this constructor to
  // preserve the bits in x when x is a NAN.
  explicit FloatingPoint(const RawType& x) { u_.value_ = x; }

  // Static methods

  // Reinterprets a bit pattern as a floating-point number.
  //
  // This function is needed to test the AlmostEquals() method.
  static RawType ReinterpretBits(const Bits bits) {
    FloatingPoint fp(0);
    fp.u_.bits_ = bits;
    return fp.u_.value_;
  }

  // Returns the floating-point number that represents positive infinity.
  static RawType Infinity() {
    return ReinterpretBits(kExponentBitMask);
  }

  // Non-static methods

  // Returns the bits that represent this number.
  const Bits &bits() const { return u_.bits_; }

  // Returns the exponent bits of this number.
  Bits exponent_bits() const { return kExponentBitMask & u_.bits_; }

  // Returns the fraction bits of this number.
  Bits fraction_bits() const { return kFractionBitMask & u_.bits_; }

  // Returns the sign bit of this number.
  Bits sign_bit() const { return kSignBitMask & u_.bits_; }

  // Returns true iff this is NAN (not a number).
  bool is_nan() const {
    // It's a NAN if the exponent bits are all ones and the fraction
    // bits are not entirely zeros.
    return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0);
  }

  // Returns true iff this number is at most kMaxUlps ULP's away from
  // rhs. In particular, this function:
  //
  //   - returns false if either number is (or both are) NAN.
  //   - treats really large numbers as almost equal to infinity.
  //   - thinks +0.0 and -0.0 are 0 ULP's apart.
  bool AlmostEquals(const FloatingPoint& rhs) const {
    // The IEEE standard says that any comparison operation involving
    // a NAN must return false.
    if (is_nan() || rhs.is_nan()) return false;

    return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_)
        <= kMaxUlps;
  }

 private:
  // The data type used to store the actual floating-point number.
  union FloatingPointUnion {
    RawType value_;  // The raw floating-point number.
    Bits bits_;      // The bits that represent the number.
  };

  // Converts an integer from the sign-and-magnitude representation to
  // the biased representation. More precisely, let N be 2 to the
  // power of (kBitCount - 1), an integer x is represented by the
  // unsigned number x + N.
  //
  // For instance,
  //
  //   -N + 1 (the most negative number representable using
  //          sign-and-magnitude) is represented by 1;
  //   0      is represented by N; and
  //   N - 1  (the biggest number representable using
  //          sign-and-magnitude) is represented by 2N - 1.
  //
  // Read http://en.wikipedia.org/wiki/Signed_number_representations
  // for more details on signed number representations.
  static Bits SignAndMagnitudeToBiased(const Bits &sam) {
    if (kSignBitMask & sam) {
      // sam represents a negative number.
      return ~sam + 1;
    } else {
      // sam represents a positive number.
      return kSignBitMask | sam;
    }
  }

  // Given two numbers in the sign-and-magnitude representation,
  // returns the distance between them as an unsigned number.
  static Bits DistanceBetweenSignAndMagnitudeNumbers(const Bits &sam1,
                                                     const Bits &sam2) {
    const Bits biased1 = SignAndMagnitudeToBiased(sam1);
    const Bits biased2 = SignAndMagnitudeToBiased(sam2);
    return (biased1 >= biased2) ? (biased1 - biased2) : (biased2 - biased1);
  }

  FloatingPointUnion u_;
};
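A minimal usage sketch for the class above (the wrapper function name and example values are mine, not Google Test's):

// Compare two doubles to within kMaxUlps (4) ULPs using the class above.
bool AlmostEqualUlps(double a, double b) {
  const FloatingPoint<double> lhs(a), rhs(b);
  return lhs.AlmostEquals(rhs);
}

// AlmostEqualUlps(1.0, 1.0 + 2e-16)  -> true  (1 ULP apart)
// AlmostEqualUlps(1.0, 1.000001)     -> false (billions of ULPs apart)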

EDIT: This post is 4 years old. It's probably still valid, and the code is nice, but some people found improvements. Best go get the latest version of AlmostEquals right from the Google Test source code, and not the one I pasted up here.

+1: I agree this one is correct. However, it doesn't explain why. See here: cygnus-software.com/papers/comparingfloats/comparingfloats.htm. I read this blog post after I wrote my comment on the top-scoring answer here; I believe it says the same thing and provides the rationale/solution that is implemented above. Because there is so much code, people will miss the answer.
– artless noise Mar 17 '13 at 19:07

Good find! I guess it would be best to contribute them to Google Test, though, where this code was stolen from. I'll update the post to reflect that probably there's a newer version. If the Google guys act itchy, could you put it in e.g. a GitHub gist? I'll link to that as well, then.
– skrebbel May 1 '14 at 12:48

For the newest code snippet, see here and here.
– Jaege Nov 25 '16 at 3:11

I have extracted the necessary lines to a gist file. Anyone can reach from here.
– Yusuf Tarık Günaydın Jul 25 '17 at 18:40

Of course, choosing epsilon depends on the context, and determines how equal you want the numbers to be.

Another method of comparing floating point numbers is to look at the ULP (units in last place) of the numbers. While not dealing specifically with comparisons, the paper What Every Computer Scientist Should Know About Floating-Point Arithmetic is a good resource for understanding how floating point works and what the pitfalls are, including what ULP is.
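For a hands-on feel for ULPs, std::nextafter from <cmath> steps to the adjacent representable value; a small sketch:

#include <cmath>
#include <cstdio>

int main()
{
    // The representable doubles adjacent to 1.0:
    std::printf("%.17g\n", std::nextafter(1.0, 2.0)); // 1.0000000000000002
    std::printf("%.17g\n", std::nextafter(1.0, 0.0)); // 0.99999999999999989
    // The ULP above 1.0 is twice the ULP below it: the spacing changes at
    // every power of two, which is why one fixed epsilon cannot fit all values.
}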

thanks for posting how to determine which number is smaller/bigger!
– Tomato Dec 12 '13 at 5:26

fabs(a - b) <= ( (fabs(a) < fabs(b) ? fabs(b) : fabs(a)) * epsilon); saved my life. LOL Note that this version (I haven't checked if it applies to the others too) also considers the change that might occur in the integral part of the floating point number (example: 2147352577.9999997616 == 2147352576.0000000000, where you can clearly see that there is almost a difference of 2 between the two numbers), which is quite nice! This happens when the accumulated rounding error overflows the decimal part of the number.
– rbaleksandar Oct 12 '16 at 12:24

Given that this question is tagged C++, your checks would be easier to read written as std::max(std::abs(a), std::abs(b)) (or with std::min()); std::abs in C++ is overloaded for float and double types, so it works just fine (you can always keep fabs for readability though).
– Razakhel Oct 1 '18 at 17:09
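Putting the earlier comment's one-liner into the std::max/std::abs form suggested here (a sketch combining the two comments, not code from either):

#include <algorithm>
#include <cmath>

// Relative comparison: the tolerance scales with the larger operand.
bool RelativelyEqual(double a, double b, double epsilon)
{
    return std::abs(a - b) <= std::max(std::abs(a), std::abs(b)) * epsilon;
}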

definitelyGreaterThan is reporting true for something that should definitely be equal, i.e. not greater than.
– mwpowellhtx Jan 19 at 21:05

You'll want a multiple of that epsilon most likely.
– user7116 Aug 8 '10 at 1:41

Can't you just use std::abs? AFAIK, std::abs is overloaded for doubles as well. Please warn me if I'm wrong.
– kolistivra Sep 25 '10 at 8:41

@kolistivra, you are wrong. The 'f' in 'fabs' does not mean the type float. You're probably thinking of the C functions fabsf() and fabsl().
– jcoffland Jan 24 '12 at 9:21

Actually, for reasons outlined in Bruce's article, epsilon changes as the floating point value gets bigger. See the part where he says "For numbers larger than 2.0 the gap between floats grows larger and if you compare floats using FLT_EPSILON then you are just doing a more-expensive and less-obvious equality check."
– bobobobo Jan 19 '13 at 1:07

I know this is old, but std::abs is overloaded for floating point types in cmath.
– mholzmann Jun 30 '13 at 18:45

Realizing this is an old thread, but this article is one of the most straightforward ones I have found on comparing floating-point numbers. If you want to explore more, it has more detailed references as well, and the main site covers a complete range of issues dealing with floating-point numbers: The Floating-Point Guide: Comparison.

The article notes that the absolute test fails when x and y are large, and the relative test fails when they are small. Assuming the absolute and relative tolerance are the same, a combined test would look like the sketch below:
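The snippet did not survive in this copy of the answer; a reconstruction in the spirit of the article (the function name is illustrative; the 1.0 floor makes the test absolute near zero and relative for large magnitudes):

#include <algorithm>
#include <cmath>

// Combined absolute/relative test; `tol` plays both roles.
bool CombinedToleranceCompare(double x, double y, double tol)
{
    return std::fabs(x - y) <= tol * std::max({1.0, std::fabs(x), std::fabs(y)});
}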

I wonder if fabs wouldn't make you lose lazy evaluation in some cases. I would say it depends on the compiler. You might want to try both. If they are equivalent on average, take the implementation with fabs.

If you have some info on which of the two floats is more likely to be bigger than the other, you can play on the order of the comparison to take better advantage of the lazy evaluation.

Finally you might get better result by inlining this function. Not likely to improve much though...

But otherwise it'll lead you into trouble. Double precision numbers have a resolution of about 16 decimal places. If the two numbers you are comparing are larger in magnitude than EPSILON*1.0E16, then you might as well be saying:

return a==b;

I'll examine a different approach that assumes you need to worry about the first issue and that the second is fine for your application. A solution would be something like:
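The original answer's snippet is missing from this copy. A plausible reconstruction (the constants are illustrative, and the missing comma in fmax() that a comment below points out is already fixed here):

#include <cmath>

#define VERYSMALL (1.0E-8) // illustrative: "both operands essentially zero"
#define EPSILON   (1.0E-5) // illustrative relative tolerance

bool AreSame(double a, double b)
{
    double absDiff = fabs(a - b);
    if (absDiff < VERYSMALL)
    {
        return true;
    }

    double maxAbs = fmax(fabs(a), fabs(b));
    return (absDiff / maxAbs) < EPSILON;
}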

This is expensive computationally, but it is sometimes what is called for. This is what we have to do at my company because we deal with an engineering library and inputs can vary by a few dozen orders of magnitude.

Anyway, the point is this (and applies to practically every programming problem): Evaluate what your needs are, then come up with a solution to address your needs -- don't assume the easy answer will address your needs. If after your evaluation you find that fabs(a-b) < EPSILON will suffice, perfect -- use it! But be aware of its shortcomings and other possible solutions too.

Aside from the typos (s/-/,/ missing comma in fmax()), this implementation has a bug for numbers near zero that are within EPSILON, but not quite VERYSMALL yet. E.g., AreSame(1.0E-10, 1.0E-9) reports false because the relative error is huge. You get to be the hero at your company.
– brlcad Oct 21 '10 at 0:00

@brlcad You didn't get the point of floating point. 1.0E-10 and 1.0E-9 differ by a factor of 10. So it is true that they are not the same. Floating point is always about relative errors. If you have a system which considers 1.0E-10 and 1.0E-9 as almost equal, since both are "pretty close to zero" (which sounds reasonable to humans but means nothing mathematically), then that EPSILON needs to be adjusted as appropriate for such a system.
– user2261015 Nov 12 '15 at 11:10

I ended up spending quite some time going through the material in this great thread. I doubt everyone wants to spend so much time, so I will highlight a summary of what I learned and the solution I implemented.

Quick Summary

There are two problems with float comparisons: you have limited precision, and the meaning of "approximately zero" depends on context (see the next point).

Is 1E-8 approximately the same as 1E-16? If you are looking at noisy sensor data, then probably yes, but if you are doing molecular simulation, then maybe not! Bottom line: you always need to think of the tolerance value in the context of the specific function call, not just make it a generic app-wide hard-coded constant.

For general library functions, it's still nice to have a parameter with a default tolerance. A typical choice is numeric_limits<float>::epsilon(), which is the same as FLT_EPSILON in float.h. This is problematic, however, because the epsilon appropriate for comparing values like 1.0 is not the same as the epsilon for values like 1E9. FLT_EPSILON is defined for 1.0.

The obvious implementation to check whether a number is within tolerance is fabs(a-b) <= epsilon; however, this doesn't work because the default epsilon is defined for 1.0. We need to scale epsilon up or down in terms of a and b.

There are two solutions to this problem: either you make epsilon proportional to max(a,b), or you get the next representable numbers around a and then see whether b falls into that range. The former is called the "relative" method and the latter the ULP method.

Both methods fail anyway when comparing with 0. In this case, the application must supply the correct tolerance.
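A sketch of the "relative" method described in this summary (my implementation, not the original poster's; the defaulted tolerance follows the library-function advice above, and per the last point it must not be used for comparisons against 0):

#include <algorithm>
#include <cmath>
#include <limits>

bool NearlyEqual(float a, float b,
                 float tolerance = std::numeric_limits<float>::epsilon())
{
    if (a == b) return true; // handles exact matches and both-zero

    const float diff = std::fabs(a - b);
    const float norm = std::max(std::fabs(a), std::fabs(b));
    return diff < norm * tolerance; // relative method; degenerates near zero
}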

As others have pointed out, using a fixed-exponent epsilon (such as 0.0000001) will be useless for values away from the epsilon value. For example, if your two values are 10000.000977 and 10000, then there are NO 32-bit floating-point values between these two numbers -- 10000 and 10000.000977 are as close as you can possibly get without being bit-for-bit identical. Here, an epsilon of less than 0.0009 is meaningless; you might as well use the straight equality operator.

Likewise, as the two values approach epsilon in size, the relative error grows to 100%.

Thus, trying to mix a fixed point number such as 0.00001 with floating-point values (where the exponent is arbitrary) is a pointless exercise. This will only ever work if you can be assured that the operand values lie within a narrow domain (that is, close to some specific exponent), and if you properly select an epsilon value for that specific test. If you pull a number out of the air ("Hey! 0.00001 is small, so that must be good!"), you're doomed to numerical errors. I've spent plenty of time debugging bad numerical code where some poor schmuck tosses in random epsilon values to make yet another test case work.

If you do numerical programming of any kind and believe you need to reach for fixed-point epsilons, READ BRUCE'S ARTICLE ON COMPARING FLOATING-POINT NUMBERS.

Note that comparing values where either p1 or p2 is 0.0 will not work,
nor does comparing values where one of the values is NaN or infinity.
If one of the values is always 0.0, use qFuzzyIsNull instead. If one
of the values is likely to be 0.0, one solution is to add 1.0 to both
values.
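This passage describes Qt's qFuzzyCompare. An illustrative use of the add-1.0 workaround it suggests (assuming Qt's <QtGlobal> header):

#include <QtGlobal> // qFuzzyCompare, qFuzzyIsNull

// When one operand may legitimately be 0.0, shift both away from zero first.
bool FuzzyEqualAllowingZero(double a, double b)
{
    return qFuzzyCompare(1.0 + a, 1.0 + b);
}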

General-purpose comparison of floating-point numbers is generally meaningless. How to compare really depends on the problem at hand. In many problems, numbers are sufficiently discretized to allow comparing them within a given tolerance. Unfortunately, there are just as many problems where such a trick doesn't really work. For one example, consider working with a Heaviside (step) function of a number in question (digital stock options come to mind) when your observations are very close to the barrier. Performing tolerance-based comparison wouldn't do much good, as it would effectively shift the issue from the original barrier to two new ones. Again, there is no general-purpose solution for such problems, and the particular solution might require going as far as changing the numerical method in order to achieve stability.

Unfortunately, even your "wasteful" code is incorrect. EPSILON is the smallest value that could be added to 1.0 and change its value. The value 1.0 is very important — larger numbers do not change when added to EPSILON. Now, you can scale this value to the numbers you are comparing to tell whether they are different or not. The correct expression for comparing two doubles is:

if (fabs(a - b) <= DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
    // ...
}

This is at a minimum. In general, though, you would want to account for noise in your calculations and ignore a few of the least significant bits, so a more realistic comparison would look like:
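The snippet is missing from this copy of the answer; a plausible reconstruction scales the machine epsilon by a small factor to ignore the lowest few mantissa bits (the factor 16, i.e. 4 bits, is an illustrative choice, not from the original):

if (fabs(a - b) <= 16 * DBL_EPSILON * fmax(fabs(a), fabs(b)))
{
    // ...
}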

Why do you think that EPSILON in the question is DBL_EPSILON or FLT_EPSILON? The problem is in your own imagination, where you substituted DBL_EPSILON (which indeed would be the wrong choice) into code that did not use it.
– Ben Voigt Apr 21 '15 at 3:52

@BenVoigt, you're right, it was something on my mind at the time, and I interpreted the question in that light.
– Don Reba Apr 21 '15 at 8:56

My class is based on previously posted answers. It is very similar to Google's code, but I use a bias which pushes all NaN values above 0xFF000000. That allows a faster check for NaN.

This code is meant to demonstrate the concept, not be a general solution. Google's code already shows how to compute all the platform specific values and I didn't want to duplicate all that. I've done limited testing on this code.

0.5000000000000000 and 0.5000000000000001 are close.
1.0000000000000000 and 1.0000000000000002 are not close.

Note that in the second case (one and just larger than one), the two input values are as close as they can possibly be, and still compare as not close. Thus, for values greater than 1.0, you might as well just use an equality test. Fixed epsilons will not save you when comparing floating-point values.
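The class that printed those lines is not shown above, but the same behavior can be reproduced with a plain fixed-epsilon test (a sketch; DBL_EPSILON stands in for whatever fixed tolerance was used):

#include <cfloat>
#include <cmath>
#include <cstdio>

int main()
{
    // 1: "close" -- the difference is below DBL_EPSILON at this magnitude.
    std::printf("%d\n", std::fabs(0.5000000000000000 - 0.5000000000000001) < DBL_EPSILON);
    // 0: "not close" -- adjacent doubles at 1.0 differ by exactly DBL_EPSILON.
    std::printf("%d\n", std::fabs(1.0000000000000000 - 1.0000000000000002) < DBL_EPSILON);
}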

It depends on how precise you want the comparison to be. If you want to compare for exactly the same number, then just go with ==. (You almost never want to do this unless you actually want exactly the same number.) On any decent platform you can also do the following:

double diff = a - b;
return fabs(diff) < EPSILON;

as fabs tends to be pretty fast. By pretty fast I mean it is basically a bitwise AND, so it better be fast.

And integer tricks for comparing doubles and floats are nice but tend to make it more difficult for the various CPU pipelines to handle effectively. And it's definitely not faster on certain in-order architectures these days due to using the stack as a temporary storage area for values that are being used frequently. (Load-hit-store for those who care.)

I'd be very wary of any of these answers that involves floating point subtraction (e.g., fabs(a-b) < epsilon). First, the floating point numbers become more sparse at greater magnitudes and at high enough magnitudes where the spacing is greater than epsilon, you might as well just be doing a == b. Second, subtracting two very close floating point numbers (as these will tend to be, given that you're looking for near equality) is exactly how you get catastrophic cancellation.

While not portable, I think grom's answer does the best job of avoiding these issues.

+1 for good information. However, I fail to see how you could mess up the equality comparison by increasing the relative error; IMHO the error becomes significant only in the result of the subtraction, but its order of magnitude relative to that of the two operands being subtracted should still be reliable enough to judge equality. Unless the resolution needs to be higher overall, but in that case the only solution is to move to a floating point representation with more significant bits in the mantissa.
– sehe May 17 '11 at 9:10

Subtracting two nearly-equal numbers does NOT lead to catastrophic cancellation -- in fact, it does not introduce any error at all (q.v. Sterbenz's Theorem). Catastrophic cancellation occurs earlier, during the calculation of a and b themselves. There is absolutely no problem with using floating point subtraction as part of a fuzzy comparison (though as others have said, an absolute epsilon value may or may not be appropriate for a given use case.)
– Sneftel May 26 '18 at 12:54

Actually a very solid reference to a good answer, though it is very specialized, so anyone without a scientific computing or numerical analysis background (i.e., LAPACK, BLAS) may not appreciate its completeness. In other words, it assumes you've read something like the Numerical Recipes introduction or Numerical Analysis by Burden & Faires.
– mctylr Aug 9 '13 at 19:47

If epsilon is a small fraction of the magnitude of the quantity (i.e. a relative value) in some certain physical sense, and the types of A and B are comparable in the same sense, then I think the following is quite correct:

/// Testing whether two doubles are almost equal. We consider two doubles
/// equal if the difference is within the range [0, epsilon).
///
/// epsilon: a positive number (supposed to be small)
///
/// If either x or y is 0, then we are comparing the absolute difference to
/// epsilon.
/// If both x and y are non-zero, then we are comparing the relative difference
/// to epsilon.
bool almost_equal(double x, double y, double epsilon)
{
    double diff = x - y;
    if (x != 0 && y != 0) {
        diff = diff / y;
    }
    return fabs(diff) < epsilon; // same as (diff < epsilon && -diff < epsilon)
}

I used this function for my small project and it works, but note the following:

Double precision error can create a surprise for you. Let's say epsilon = 1.0e-6; then 1.0 and 1.000001 should NOT be considered equal according to the above code, but on my machine the function considers them to be equal. This is because 1.000001 cannot be represented precisely in binary format; it is probably 1.0000009xxx. I tested it with 1.0 and 1.0000011, and this time I got the expected result.

+1: hey, I'm not going to do game programming with this, but the idea of round-tripping floats came up several times in Bruce Dawson's blog (treatise? :D) on the issue, and if you're trapped in a room and someone puts a gun to your head and says "hey you have to compare two floats to within X significant figures, you have 5 minutes, GO!" this is probably one to consider. ;)
– shelleybutterfly May 25 '14 at 2:04

@shelleybutterfly Then again the question was for the most efficient way of comparing two floating point numbers.
– Tommy Andersen Aug 13 '14 at 11:11

@TommyA lol perhaps, but I bet round-tripping was downvoted for reasons not related to efficiency. Though my intuition is that it would be rather inefficient compared to HW fp math, it also says that the algorithms in software fp are unlikely to have a big-O difference, at least. I eagerly await the analysis you did showing the efficiency concerns in that case are significant. Besides, sometimes less-than-optimal can still be a valuable answer, and as it was downvoted despite being a valid technique that was even mentioned by Dawson's blog on the subject, I thought it deserved an upvote.
– shelleybutterfly Aug 24 '14 at 8:04

This method has many weaknesses. For example, if the numbers a and b are already smaller than epsilon(), their difference may still be significant. Conversely, if the numbers are very large, then even a couple of bits of error will make the comparison fail, even if you did want the numbers to be considered equal. This answer is exactly the type of "generic" comparison algorithm that you want to avoid.
– SirGuy Sep 22 '16 at 2:05

Why not perform bitwise XOR? Two floating point numbers are equal if their corresponding bits are equal. I think the decision to place the exponent bits before the mantissa was made to speed up comparison of two floats.
I think many answers here are missing the point of epsilon comparison. The epsilon value only depends on the precision to which the floating point numbers are compared. For example, after doing some arithmetic with floats you get two numbers: 2.5642943554342 and 2.5642943554345. They are not equal, but for the solution only 3 decimal digits matter, so then they are equal: 2.564 and 2.564. In this case you choose an epsilon equal to 0.001. Epsilon comparison is also possible with bitwise XOR. Correct me if I am wrong.
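For reference, this is what bit-exact comparison looks like in portable C++ (a sketch; std::memcmp avoids type-punning issues). The comments note how it differs from == for zeros and NaNs:

#include <cstring>

// True only for identical bit patterns. Unlike ==, this says
// +0.0 != -0.0, and it says two NaNs with matching patterns are equal.
bool BitwiseEqual(double a, double b)
{
    return std::memcmp(&a, &b, sizeof(double)) == 0;
}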
