I can understand that decimal values are represented in memory only approximately and that this can cause errors when dealing with very large numbers, but... it fails even with a simple subtraction like:
2.02 - 0.01
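
For example, a quick test like this (the exact digits in the output may vary a little depending on how you print it, but the point is that it isn't a clean 2.01):

public class SubtractDemo {
    public static void main(String[] args) {
        // 2.02 and 0.01 have no exact binary representation as doubles,
        // so the difference only comes out close to 2.01.
        System.out.println(2.02 - 0.01); // prints something like 2.0100000000000002
    }
}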

It feels like a very tacky implementation to me... if you used a pair of integers (int base, int power) to represent a floating-point number (value = base * 10^power), it would not be so prone to errors...
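
Something along these lines, just as a rough hypothetical sketch of the idea (no overflow handling or normalization, and SimpleDecimal is a made-up name; storing an unscaled integer plus a decimal scale is essentially what java.math.BigDecimal already does):

// value = base * 10^power, e.g. 2.02 is (202, -2)
class SimpleDecimal {
    final long base;
    final int power;

    SimpleDecimal(long base, int power) {
        this.base = base;
        this.power = power;
    }

    SimpleDecimal subtract(SimpleDecimal other) {
        // align both operands to the smaller power, then subtract the integer parts
        int p = Math.min(this.power, other.power);
        long a = this.base * pow10(this.power - p);
        long b = other.base * pow10(other.power - p);
        return new SimpleDecimal(a - b, p);
    }

    static long pow10(int n) {
        long r = 1;
        while (n-- > 0) r *= 10;
        return r;
    }

    @Override
    public String toString() {
        // BigDecimal's scale is the negated power: value = unscaled * 10^(-scale)
        return java.math.BigDecimal.valueOf(base, -power).toPlainString();
    }
}

// new SimpleDecimal(202, -2).subtract(new SimpleDecimal(1, -2)) prints 2.01 exactly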

I can imagine it is done to save memory, but it still feels cheesy to have to resort to these tricks just to avoid precision errors.

Hope I don't disturb anyone with my opinion, because it's just that.
Regards!

Mathematicians imagine (but only imagine!) quantities with infinite precision. Computers deal with finite time and finite resources: there isn't enough paper in all the world to write down the real numbers between 0 and 1 using a finite alphabet. So computers cheat and use well-established rules for dealing with quantities using only finite precision. Java follows these rules faithfully and in a cross-platform way (as detailed in the link).

It's worse because the mathematician imagines proper decimal subtraction until the day he imagines no more. Finiteness wins.

-----

The results you are getting are probably quite good enough: just ugly. (BigDecimal offers arbitrary, though still finite, precision if you must.) You deal with the ugliness using some form of formatting, like DecimalFormat or one of the printf() tribe.
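
For instance, assuming two decimal places is what you want to show (and BigDecimal built from Strings if you want the subtraction itself to be exact):

import java.math.BigDecimal;
import java.text.DecimalFormat;

public class FormatDemo {
    public static void main(String[] args) {
        double d = 2.02 - 0.01; // the "ugly" double result

        // round only for display, keep the double for arithmetic
        System.out.println(new DecimalFormat("0.00").format(d)); // 2.01 (decimal separator depends on locale)
        System.out.printf("%.2f%n", d);                          // 2.01

        // or do exact decimal arithmetic in the first place
        System.out.println(new BigDecimal("2.02").subtract(new BigDecimal("0.01"))); // 2.01
    }
}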

Analog computers represent numbers in R as a voltage level, not just as a few steenkin' bits. I wonder what has become of those hybrid machines (analog computers coupled to digital computers). I never see them anymore ...

kind regards,

Jos

I have the stamina of a seal; I lie on the beach instead of running on it.

Please keep in mind that asking a computer to subtract 0.01 from 2.02 is just as "simple" as asking you to subtract a third from 1, but I'd like to see you try that in base 10 :P.

Computers work in base 2 (binary) while most humans are used to working in base 10 (decimal), and not all decimal numbers can be represented exactly in binary. Similarly, in base 3 subtracting a third from 1 is easy: it's just 1 - 0.1 = 0.2. But it's impossible to represent a third exactly in base 10, which is why I have to keep calling it a "third" rather than writing it out numerically, since in decimal it's something like 0.33333333333333333333333333333333333333333333333 etc.
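
You can actually see this from Java itself: the BigDecimal(double) constructor exposes the exact value a double really stores (run it to see the full digits, the point is just that it isn't exactly 0.01):

import java.math.BigDecimal;

public class ExactDoubleValue {
    public static void main(String[] args) {
        // The double literal 0.01 gets rounded to the nearest binary fraction,
        // and this constructor shows the exact value that was actually stored:
        System.out.println(new BigDecimal(0.01));   // a long string of digits, not exactly 0.01

        // The String constructor keeps the decimal value 0.01 exactly:
        System.out.println(new BigDecimal("0.01")); // 0.01
    }
}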

Currently developing Cave Dwellers, a Dwarf Fortress/Minecraft style game for Android.

The base 2 business is often mentioned in this context (and it does play a role in the conventions adopted in the floating-point arithmetic standards), but the problem is deeper: you can't uniquely name the real numbers in a continuous range, however small, with finite strings drawn from a finite alphabet, whatever convention you use for the numerals.

(I deliberately put it that way to avoid the point Jos mentioned: you could just represent a quantity analogically with a voltage or whatever. I was actually thinking of those characters in Gulliver's Travels who avoided the ambiguities of language by using real objects to communicate.)