
no, there is absolutely nothing strange about it. I don't mean to be rude but you clearly do not understand how computers deal with numbers. YOU think in decimal. THEY think in binary.

YOU think that when you say 1.34 that the computer sees that as the same thing you have written down, but in fact it does no such thing. What it does is get the closest it can to your decimal fraction using binary fractions, and close is NOT exact. In this case it could, for example, get 1.339999999999 as the decimal equivalent of the closest binary fraction.
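You can see this directly. A minimal Python sketch (Python here just for illustration, since the behavior is identical in VB's doubles): `Decimal(float)` converts the stored binary value exactly, so it reveals the true number behind the literal 1.34.

```python
from decimal import Decimal

# Decimal(float) expands the stored binary value exactly,
# revealing the true number behind the literal 1.34.
stored = Decimal(1.34)
print(stored)  # a long decimal near 1.34, but not 1.34

# The stored binary value is not the decimal number 1.34.
print(stored == Decimal("1.34"))  # False
```

1.34 is 67/50 in lowest terms; since 50 is not a power of two, no binary fraction can hit it exactly.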

To compare floating point numbers, NEVER compare for an exact match, always do this:

If Abs(var1 - var2) < 0.00000001 Then
    ' close enough for government work
Else
    ' they're substantially different
End If
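The same tolerance test in Python, as a sketch (the 1e-8 tolerance is an arbitrary choice, not a universal constant):

```python
def close_enough(a, b, tol=1e-8):
    """Compare floats by tolerance instead of exact equality."""
    return abs(a - b) < tol

x = 0.1 + 0.2
print(x == 0.3)              # False: exact equality fails
print(close_enough(x, 0.3))  # True: tolerance comparison succeeds
```

One caveat: an absolute tolerance like this misbehaves for very large or very tiny values; a relative tolerance (e.g. Python's math.isclose) is often a better fit.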

bump, in case you missed my entry. It has nothing to do with the comparison function as blinky hypothesized, it's an artifact of the difference between decimal fractions and binary fractions (see my previous entry in the thread)

and I take it that you find that strange in some way, yes? It really appears that you guys just don't get it. doubles and singles won't act the same way because they have different numbers of significant digits ("significant bits", actually) so they will have differing rounding errors, so to expect them to behave identically is just silly. It all has to do with the difference between decimal fractions and binary fractions and the fact that computers NEVER have an unlimited number of significant digits.

If computers worked in decimal, they would make the same kind of mistakes, just with different numbers, again because of rounding errors.

Take the simplest imaginable decimal number, 1.0 --- Now you would think, since you think in decimal, that this would have an exact representation in the computer. Well, you would be wrong. The binary fraction that the computer generates has a decimal equivalent of .99999999... or thereabouts, depending on whether you use a single or a double. REMEMBER, I'm talking about floating point numbers, not integers. The integer 1 is perfectly representable in binary, but the floating point number 1.0 is not.

Expressed another way, decimal and binary have different sets of terminating fractions. That is, numbers that can be represented by finite fractions in one radix cannot necessarily be expressed as finite fractions in the other radix, and when you add to that the fact that there are a limited number of significant digits (or bits), then you get the results that you are seeing, that you seem to find so puzzling.

I apologize for my snippy comment. Your goal of shedding further light on a murky situation is admirable. I went off on a rant just because I once again saw the equality comparison and my point (as you clearly understood) was that it is a bad idea to use equality comparisons.

plenderj, I'll respond to your question next.

Accepted.

I hope qixinzhi can do something with it.

With the code I provided you can compare the values to the number of decimal places you care about in the floating point.
Just multiply by 1 followed by zeros, the number of zeros being the decimal scale.

The floating point number has two parts: one containing the sign of the number and a fraction (sometimes called a mantissa), and the other, called the exponent, designating the position of the radix point in the number.
For example the decimal number +6132.789 is represented in floating-point notation as :

Code:

Fraction Exponent
+0.6132789 +04

Only the fraction and the exponent are physically represented in computer registers; radix 10 and the decimal point of the fraction are assumed and are not shown explicitly. A floating-point binary number is represented in a similar manner, except that it uses radix 2 for the exponent.
For example, the binary number + 1001.11 is represented with an 8-bit fraction and 6-bit exponent as :

Code:

Fraction Exponent
01001110 000100
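The split above can be checked with Python's math.frexp, which breaks a float into a fraction in [0.5, 1) and a power-of-two exponent (Python here only for illustration). The binary number +1001.11 is 9.75 in decimal:

```python
import math

# 1001.11 (binary) == 9.75 (decimal)
fraction, exponent = math.frexp(9.75)
print(fraction, exponent)  # 0.609375 4

# 0.609375 is .100111 in binary, and .100111 * 2^4 == 1001.11
print(fraction * 2**exponent)  # 9.75
```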

So that which you refer to as a fraction is actually just the number itself, without the decimal point.
There is no rounding.

I believe this issue with VB not comparing correctly is a different problem completely.

plenderj, I don't know of any web site, but a little simple thought will allow you to follow what I'm explaining, so here goes.

FIRST, my example of 1.0 was incorrect, so if that explains the problem to you, read no more. I knew the concept to be correct, so I was hasty in my example. I should have used 1/10th, which IS a correct example for what I'm talking about.

A human, looking at the fraction 1/10th would be reasonably inclined to believe that it would be hard to mess up something so simple, but as you will see from my example, the real world is counter-intuitive in this regard because of the conversion from decimal to binary and the limited number of significant digits (bits).

Forgetting totally about binary for a moment, just think about the fact that floating point numbers are stored in a format that dictates that the precision part is always stored as a decimal fraction with the first significant digit immediately to the right of the decimal point. Thus 1/10th is stored as .1 x 10^0. In decimal, that's not a problem because the fraction 1/10th terminates in decimal.

Now lets move that over to binary. Once again, we have the situation that the precision part is stored as a binary floating point number with the precision stored with the first significant digit immediately to the right of the decimal point ("binary point", actually, in this case). So, if we wanted to store the number that we think of in decimal as .5, which we would store in decimal as .5 x 10^0, we store it in binary as .1 * 2^0.

This works just fine in binary, because the number 1/2 is a terminating fraction in binary. That is, it can be expressed exactly with a finite number of digits in that radix system.

NOW comes the part that gets confusing, and that's to do the decimal number .1 (one tenth) as a binary floating point number, which we CANNOT do precisely because 1/10th does not terminate in binary. If you do the longhand division in binary, you'll see that when you divide 1 by 1010, you get a non-terminating, infinitely repeating binary fraction that looks like:

.00011001100110011001100 forever (... 1100 ...)
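That long division can be mechanized. A short Python sketch that generates the binary digits of 1/10 by repeated doubling (the standard fraction-to-binary conversion):

```python
def binary_digits(numerator, denominator, count):
    """First `count` binary digits of numerator/denominator (value < 1)."""
    digits = []
    for _ in range(count):
        numerator *= 2                      # shift one binary place
        digits.append(numerator // denominator)  # next digit: 0 or 1
        numerator %= denominator            # keep the remainder
    return digits

# 1/10 in binary: 0.000110011001100... repeating
print("".join(str(d) for d in binary_digits(1, 10, 20)))  # 00011001100110011001
```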

this gets stored as a binary floating point number as

.110011001100 ... * 2^-3

and if you carry that out to 24 significant bits and then turn it back into decimal, you get something like .0999999999, which is obviously close to .1, but not quite. As an infinite series, it converges on .1, so if computers had infinite significant bits, then there would be no problem although they'd need infinite speed to go along with it, else things would slow down a bit :-)
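You can force a value through 24-bit single precision in Python with the struct module. (Whether the nearest 24-bit fraction lands just above or just below .1 depends on which way the last bit rounds; for 0.1 it happens to land slightly above, but either way it is not exact.)

```python
import struct

# Round-trip 0.1 through IEEE single precision (24 significant bits).
single = struct.unpack('f', struct.pack('f', 0.1))[0]
print(single)         # very close to 0.1, but not 0.1
print(single == 0.1)  # False
```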

The point is that computers don't store floating point numbers the way many users think they do and that leads to the kind of confusion that started this thread.

our paths crossed, so I'll just add this. The "rounding" you refer to would, as you say, not take place if we were dealing only with numbers that are terminating fractions in radix 2, but as I pointed out, this is not the case, so there IS rounding because of the limited number of significant digits (bits).

Ah, I see an even better way of saying what I'm trying to say. WHOLE numbers don't have the problem because they never require rounding. It's only fractions that cause the problem, as explained in my example, so your statement is correct, but ONLY for whole numbers.

But the fact is that that which is referred to as the "fraction" is just the number stripped of the decimal point.
Rounding would only occur towards the end of the number to make it fit into the required "fraction" section.

Rounding would only occur towards the end of the number to make it fit into the required "fraction" section

you are quite correct. Is it then your conclusion that since the rounding occurs way out in the less significant digits (or bits) that the equality should work? I think you are missing my point still. When you say

that which is referred to as the "fraction" is just the number stripped of the decimal point

you seem to be implying that this is true for fractions, but as I have demonstrated, it is not. You are taking a statement which is true for whole numbers and applying it to fractions, where it is NOT true, and yet you still seem to believe it is true.

The point is that decimal fractions will NOT always store precisely in binary. How about decimal fractions in a decimal computer? Well, suppose we HAD a decimal computer and wanted to add 1/9 plus 8/9 and compare the result to 1.0 --- WE WOULD GET INEQUALITY. Think it through.
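Python's decimal module can simulate such a decimal computer. With 10 significant digits and truncation (the kind of machine described here), 1/9 + 8/9 indeed fails to equal 1:

```python
from decimal import Decimal, getcontext, ROUND_DOWN

# Simulate a decimal machine: 10 significant digits, truncating.
getcontext().prec = 10
getcontext().rounding = ROUND_DOWN

one_ninth = Decimal(1) / 9     # 0.1111111111
eight_ninths = Decimal(8) / 9  # 0.8888888888
total = one_ninth + eight_ninths
print(total)       # 0.9999999999
print(total == 1)  # False
```

Note that with round-to-nearest instead of truncation, 8/9 rounds up to 0.8888888889 and the sum comes out exactly 1, so the outcome depends on the rounding mode.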

But the fact remains we're not adding 1/9 and 8/9.
We're specifying the exact number that we want to use.

For a number like 1.4, there is no rounding to be done.
1.4 is just 1.4

The fraction part, i.e. 14, fits easily for a fraction.
The exponent would then be 1 or something.

For any floating point value n, the fraction part is simply removing the decimal place from that value.
The IDE will automatically round the fraction part of the decimal value for you, so the CPU would only end up being given the exact same value you're looking at on the screen.

plenderj, if your point were correct, and it is not, then this thread would never have started in the first place. It started because, as I continue to repeat, decimal fractions do not always store exactly in binary. If they did, and your point were correct, then the problem that started this thread would never have occurred. I have explained why and rather than look at my explanation in the detail it apparently requires, you just keep saying that it isn't true. Math is math, and if you do the math, you'll see that I have given a correct explanation.

The Single and Double data types are very precise—that is, they allow you to specify extremely small or large numbers. However, these data types are not very accurate because they use floating-point mathematics. Floating-point mathematics has an inherent limitation in that it uses binary digits to represent decimals. Not all the numbers within the range available to the Single or Double data type can be represented exactly in binary form, so they are rounded. Also, some numbers can't be represented exactly with any finite number of digits—pi, for example, or the decimal resulting from 1/3.

Because of these limitations to floating-point mathematics, you may encounter rounding errors when you perform operations on floating-point numbers. Compared to the size of the value you're working with, the rounding error will be very small. If you don't require absolute accuracy and can afford relatively small rounding errors, the floating-point data types are ideal for representing very small or very large values. On the other hand, if your values must be accurate—for example, if you're working with money values—you should consider one of the scaled integer data types.
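The scaled-integer advice in that quote can be sketched quickly in Python, keeping a money amount as integer cents (the cents representation is an illustrative choice, not the only scaled-integer scheme):

```python
# Floating point drifts when summing 0.1 ten times ...
total_float = sum([0.1] * 10)
print(total_float == 1.0)  # False

# ... but integer cents stay exact.
total_cents = sum([10] * 10)   # ten payments of 10 cents
print(total_cents == 100)      # True
```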

OK guys, I'll give it one more try. I think we're at the point now where egos have gotten in the way of objectivity so if everyone could just take a deep breath and look at the math for a minute, you'll see the point.

But first:

That's not proof of his statements.

you are quite correct. The fact that Microsoft and I agree on something does not make either one of us right, and God knows I don't like being in agreement with Microsoft on anything. Still, in this particular case, what makes BOTH of us right is the underlying math. You can expostulate all you want, but at the end of the day math is math and there's nothing you can do to change the fundamental correctness of my explanation.

Well I mean, for nearly whole numbers, it's going to be nearly perfectly accurate

again, you are correct, but as leather pointed out, my entire point was that close is not exact. If errors of the type being discussed in this thread were very large, then computers would be worthless for numeric computations, but as you have correctly stated and as Microsoft has also pointed out, the errors are quite small. Again, I'm talking about exactitude and you keep sliding off into correct statements about the smallness of the error and making incorrect statements about the fundamental issue concerning exactitude, which you clearly refuse to understand.

Yeah but in this case we're not even rounding.
It's just 1.4

There is no rounding to be done.

brings us back to the point that I have been, apparently unsuccessfully, trying to make. You are absolutely incorrect in this statement. The decimal fraction 1.4, when turned into binary, is a non-terminating fraction. The computer representation is 2^1 * .1011001100110011 ... (0011 forever) and when you turn that back into decimal, you do NOT get 1.4, although you get something very close, call it 1.399999999. Again, I would agree with you that this is quite a small error, but that's not the point of this thread. The point is that 1.4 is NOT the same as 1.39999999, and that is the fundamental reason for this thread having been started in the first place. You postulate that there is some other reason. I assure you that you will search in vain for any other reason. I have explained the reason.
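The stored value for 1.4 can be displayed exactly in Python, which confirms that the nearest double sits just below 1.4:

```python
from decimal import Decimal

# The exact value of the double nearest to 1.4:
print(Decimal(1.4))  # 1.3999999999999999... (just under 1.4)
print(Decimal(1.4) == Decimal("1.4"))  # False
```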

For further help in understanding this, I point back to my earlier statement about a decimal computer getting the wrong answer if you were to add 1/9 and 8/9. The point there is that 1/9 does not terminate in decimal, so it will have rounding errors. It is .1111111111 ... forever, and at some point you have to truncate it, and that makes it incorrect.

kayjay, the answer to your question is very simple in concept and not a lot of use in practice. The answer is this: all numbers that are terminating fractions IN BINARY avoid rounding errors. All numbers that do not terminate IN BINARY will always have rounding errors. The big problem we face is that it is very tedious to determine whether or not a number terminates in binary. Plenderj, for example, automatically assumed that since 1.4 terminates in decimal it also terminates in binary, a "fact" which I have gone to the trouble to show is not the case. I HATE doing long division and doing it in binary is a REAL pain, but anyone who cares to do the math can see conclusively that 1.4 does NOT get represented in the computer as 1.4 but as 1.399999..., which is why this whole discussion started in the first place.
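There is actually a quick mechanical test that avoids the long division: a fraction terminates in binary exactly when its reduced denominator is a power of two. A Python sketch using the fractions module:

```python
from fractions import Fraction

def terminates_in_binary(fraction):
    """True if the reduced denominator is a power of two."""
    d = fraction.denominator
    return d & (d - 1) == 0  # power-of-two bit trick

print(terminates_in_binary(Fraction(1, 2)))   # True:  0.5 is exact
print(terminates_in_binary(Fraction(3, 8)))   # True:  0.375 is exact
print(terminates_in_binary(Fraction(7, 5)))   # False: 1.4 is not
print(terminates_in_binary(Fraction(1, 10)))  # False: 0.1 is not
```

As a cross-check, Fraction(0.375) == Fraction(3, 8) holds exactly, while Fraction(1.4) != Fraction(7, 5), because the stored double is not exactly 7/5.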

If anyone has any further questions on this subject ... fugeddaboudit !!! I'm sick of the whole thing. I first encountered this problem in about 1963 and I've explained the whole thing so many times in my career that once more made little difference, but enough is enough.