What this ends up doing, for me, is adding .1 as expected, but after a while it ends up adding .099999 instead, or something along those lines. You can see the last digit "not keeping up" with the whole thing, so I get 2.899999, then 4.09998 later, then 5.299997, etc.

Yet, if I start at 10.00 it soon goes to 12.200001, later 21.900045, etc.

To understand why that's happening you have to understand the concept of "floating point". The idea is that of the 4 or 8 bytes storing the number, some bits are used to describe where the decimal point sits, and the rest are used to describe the value. That gives you a fixed number of significant digits between the first and last digit, and once a calculation needs more digits than that, your accuracy decreases because the value portion can no longer hold them.

I believe standard floats use the format IEEE, which means one byte (8-bits) describes the decimal point, the other three (24-bits) describe the actual value.

A 24-Bit number has a range of 16,777,216, so you can have at most about 6 or 7 significant digits in your float with any accuracy.

To see this in action, try storing the number 12345.12345 into a standard float. You're going to notice that it gets truncated.

However, you can still compare the number you just stored against 12345.12345 using == and get TRUE, because the constant 12345.12345 has to be converted to a floating point number in exactly the same way before the comparison is done, so it turns out to be the same value.

That's probably a lot more info than you needed, but the point is, floating point isn't 100% accurate, so fractional calculations will almost always be off by just a tiny amount.

More generally speaking, you can't convert fractional numbers from one base to another without losing precision. In a decimal system the value 1/10 is exactly 0.1, but 1/9 is 0.111111111... and no matter how many ones you add, you never get it exactly right. Yet 1/9 could be written as 0.1 in a base 9 system, and that would be the precise value.

Since the computer calculates with a base 2 system (binary numbers), the decimal value 0.1 ends up as a binary fraction with an infinitely repeating pattern of digits, which any finite float has to cut off somewhere.


As background, I currently have a gravity of 1 (integer) and it's too fast so I was going to convert the velocity of my character and the gravity to float, which should be fine, now that I know what's going on.


I believe standard floats use the format IEEE, which means one byte (8-bits) describes the decimal point, the other three (24-bits) describe the actual value.

A 24-Bit number has a range of 16,777,216, so you can have at most about 6 or 7 significant digits in your float with any accuracy.

To see this in action, try storing the number 12345.12345 into a standard float. You're going to notice that it gets truncated.

That's not completely accurate. It has a 1-bit sign, an 8-bit exponent, and a 23-bit significand. However, after normalization a leading 1 is assumed, so you effectively get 24 bits in the significand. This just means that the number 0 has to be specially defined, since it can't be written with an implied leading 1.

For instance, the binary number:

100001.1001 (33.5625) would end up being stored as 0 10000100 00001100100000000000000. Note that the leading 1 gets dropped in the significand.

You are correct in that you get about 6 accurate digits for the general case, but technically if you are requesting a fraction that is easily represented by sums of 1/2^n, then the number of digits is much higher.


That's not completely accurate. It has a 1-bit sign, an 8-bit exponent, and a 23-bit significand. However, after normalization a leading 1 is assumed, so you effectively get 24 bits in the significand. This just means that the number 0 has to be specially defined, since it can't be written with an implied leading 1.

For instance, the binary number:

100001.1001 (33.5625) would end up being stored as 0 10000100 00001100100000000000000. Note that the leading 1 gets dropped in the significand.

That's interesting. I didn't know about the dropping of the leading 1. It's a clever way to squeeze out an extra bit of precision - obviously the first significant bit has to be a 1, because the only other choice is 0! It's interesting that that trick is only possible in base 2.

The exponent is biased by 2^(8−1) − 1 = 127 in this case (exponents in the range −126 to +127 are representable). An exponent of −127 would be biased to the value 0, but this is reserved to encode that the value is a denormalized number or zero. An exponent of 128 would be biased to the value 255, but this is reserved to encode an infinity or not-a-number (NaN).

If no values were reserved, an 8-bit biased exponent would give you 0 => −127 and 255 => +128. However, the extrema are reserved to define 0 and infinity. (Zero cannot otherwise be represented, since the leading 1 is assumed.) So the effective range is −126 to +127. Losing two exponent values is a very small price to pay for the extra significand bit gained from the implied leading 1. With a 7-bit exponent, you'd only get −63 to +64.