Binary floating-point math is complex and subtle. I’ve collected here a few of my favorite oddball facts about IEEE floating-point math, based on the articles so far in my floating-point series. The focus in this list is on float but the same concepts all apply to double.

These oddities don’t make floating-point math bad, and in many cases these oddities can be ignored. But when you try to simulate the infinite expanse of the real-number line with 32-bit or 64-bit numbers, there will inevitably be places where the abstraction breaks down, and it’s good to know about them.

Some of these facts are useful, and some of them are surprising. You get to decide which is which.


About brucedawson

I'm a programmer, working for Google, focusing on optimization and reliability. Nothing's more fun than making code run 10x faster. Unless it's eliminating large numbers of bugs.
I also unicycle. And play (ice) hockey. And sled hockey. And juggle. And worry about whether this blog should have been called randomutf-8.

The short of it is that the mantissa always has to have a leading one; otherwise, when you assign one float to another, the assignment will “correct” the float for you. I guess it was our own silliness for assigning the return of an endianize swap back to a float instead of a uint, but this behavior did result in some funny-looking shaders and geometry (and who knows what else – that’s the scary part!) for a couple of weeks until we tag-team tracked it down.

Something seems very fishy here. The claim that a float must have a one in the most significant bit of the mantissa is incorrect, for your purposes. While it is true that all normalized floats have a leading one, that one is *implied*. Storing a bit that is always one would be unconscionably inefficient.

All 32-bit float values are valid. The only ones that could possibly be *corrected* by loading/storing are NaNs — numbers with an 0xFF exponent and a non-zero mantissa.

Depending on the FPU settings, denormalized numbers (zero exponent and non-zero mantissa) might get zeroed, but you shouldn’t generally be using these anyway.

I recommend digging deeper. I don’t think you’ve found the problem. You should seriously consider an exhaustive search. In a very reasonable time (as little as fifteen minutes) you can scan all positive floats doing quite expensive tests, and that should help isolate the real problem.

One thing to be aware of is that the type punning/aliasing that you are doing is illegal/undefined. Consider using a union instead — although I doubt the type punning would cause a problem with the VC++ compiler on Xbox 360.

You say “We are assigning the byte-swapped data, which may or may not have a 1 in that position.” and if what you are doing is to take a float, byte swap it, and then treat it as a float, then yeah, that’s bad. Probably you are ending up with some NaN values, and they will not necessarily be preserved.

Don’t pretend that a byte-swapped float is a float. If avoiding that is the solution you reached then you’re okay. But you should say what your solution was.

We alluded to our solution above, but basically it was to realize that when you are endian swapping something you are no longer dealing with a type but a block of memory, and so you should treat it as a block of memory: we cast the memory at the address to a uint32, swapped it around, and then wrote the bit pattern directly out instead of reassigning it to a float like we were lazily doing before.

Looking over some of your other articles on floating point, I can indeed see points where that leading 1 isn’t present in other floats (like the representation of 1), so the explanation that we have is faulty. We were only seeing this behavior on a select few float values, so it could be that we were dealing with NaNs that we hadn’t recognized as such.

On any floating point machine (or double-precision machine), one will observe that a peculiar form of intermittent chaotic behavior is observed in the corresponding numerical orbits whenever alpha doesn’t belong to the set {0,1} U [3/2, 3]. Moreover, this peculiar chaotic behavior is completely unexpected (i.e., not predicted by the dynamics of the underlying recurrence equation); it arises only when these sequences are computed numerically. More technically, one might say that it occurs due to the non-trivial interaction between the dynamics of the underlying recurrence equation and that of the floating point environment in which its orbits are embedded.

From what I can tell, the round-off errors in these sequences are accumulating and propagating in a bizarre way: there seems to be a very regular pattern where the values in 2 out of every 3 terms exhibit errors – which are growing exponentially in magnitude – while every 3rd term has no error at all. In other words, whenever the previous two terms contain numerical errors, the corresponding computation of the next term somehow cancels these errors out, resulting in an error-free, exact value.

A simpler way of saying the above might be this: there’s a rather interesting combination of error amplification and attenuation at play here.

One particularly interesting problem is trying to explain why no such chaos is observed whenever 3/2 <= alpha <= 3 (the cases when alpha = 0 or 1 are trivial) … these sequences actually converge to the 3-cycle {-1, (1 – alpha), -1} like they are supposed to.

Here’s one more … what is the probability that two randomly chosen floating-point numbers will yield a product whose significand will not have to be normalized? Stated another way, given two random floats, what’s the probability that the product of their respective significands will be less than 2? The answer is 2ln(2) – 1 = 0.38629…

Moreover, this probability has connections to a wide variety of other problems … see my blogpost for more details:

For these two random floats, are you assuming that they are in 1.0 to 1.99999 range? I can’t figure out how else “not have to be normalized” and “less than 2” can be equivalent. If that is the case then clarify on your blog post? Looks interesting anyway.

Yes, I’m assuming both significands are in the interval [1,2) … thus, their product will end up somewhere in the interval [1,4) prior to normalization. If normalization is required, this implies the significand of the product ended up in the subinterval [2,4); otherwise, it ended up in the subinterval [1,2) and so the normalization step wouldn’t be needed.

Just for the record, as far as floating-point multiplication is concerned, the significand of each operand is always assumed to be in the interval [1,2) unless the number is denormalized, in which case the significand would be in the interval [0,1).

Moreover, the nature of floating-point multiplication allows one to treat each part of the operation independently (prior to normalization): (i) the sign of the product depends only on whether the sign bits of the operands are similar or not; (ii) the significand of the product depends only on the product of the significands of the operands, and (iii) the exponent of the product depends only on the sum of the exponents of the operands.

If the significand of the product has to be normalized, this will increment the exponent of the product by 1; otherwise, the resulting exponent (i.e., the sum of the exponents of the operands) will remain unchanged.

One of my favorite weirdnesses about -0: sqrt(-0) = -0, according to IEEE-754. And IIRC, the only way to _reveal_ the sign of 0 (without examining the bitwise representation directly) is one of the following: Divide by it and look for -Inf vs. +Inf, or use the CopySign primitive to copy the sign onto another number.

There are other places where -0 behaves differently. For example, negating a zero is not the same as subtracting a zero from zero: negation flips the sign bit, giving -0, while 0 - 0 yields +0 under the default rounding rule for subtracting equal numbers. But that doesn’t _reveal_ the sign of 0.