In languages such as C or C++, when you write something like while (1.0 + eps > 1.0), the expression may be evaluated not in 64-bit double precision but in the processor's extended precision (80 bits or more, depending on the processor and compile options). The program below performs the calculation in exactly 32 bits (float) and 64 bits (double).

Non-integer data types are inherently prone to be *slightly* off due to round-off errors in arithmetic, especially when dealing with a function like sqrt(), whose irrational results can only be represented to finite precision.

@ats15 If your pow(x,2) is less precise or less efficient than x*x, you should file a bug with the library vendor.

@jeremi02:

1) there's the function cbrt for cube roots (it's technically only part of standard C++ since C++11, but on most systems it has been available for 15+ years in math.h as part of the C library)

2) The value 0.2 is not exactly representable as a double. It lies between two doubles:
0.1999999999999999833466546306226518936455249786376953125, aka 7205759403792793 * 2^-55,
and
0.200000000000000011102230246251565404236316680908203125, aka 7205759403792794 * 2^-55.

it's a little closer to the second one, so when you write "0.2" in code, you're actually writing that second value, and that's what you're adding on each iteration of the loop.

pow is very difficult to implement correctly --- that is, so that things like pow(x*x, 0.5) return x for those x where that's the mathematically correct answer. Very few implementations (CRlibm being a notable exception) implement pow correctly.

In other cases, like 0.1 + 0.3, the result isn't really 0.4, but it's close enough that 0.4 is the shortest decimal that parses back to exactly the computed floating-point value. Many languages then display that shortest decimal instead of converting the actual stored result back to its exact decimal expansion.

The underlined sentence is a little vague; can you explain it to me?

So I should always test whether a specific number can be represented as a fraction whose denominator is a power of 2? If it can, the number will be represented precisely in my variable; otherwise it won't?

You just take that into account when choosing your algorithms, exactly as you would take into account that 1/3 or 3/7 is never precisely represented as a decimal fraction. Would you program a calculator to add "0.3333" on every step and expect to reach exactly zero after starting at -5? That "x += 0.2" looked just as crazy to me.

The underlined sentence is a little vague; can you explain it to me?

They are probably referring to how Python does output: instead of simply converting binary to decimal and rounding to the output precision (as C and C++ output facilities do), it computes the decimal fraction with the fewest digits that, when parsed back into a floating-point variable, gives the binary value you have.