The TL;DR is that you can't accurately represent even all of the rationals with a finite number of bits as floating point numbers. This includes some rationals that have a terminating representation in decimal, e.g. 0.1, because the computer represents them in binary and their binary representation does not terminate (in this case it's 0.00011001100110011...).
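You can see this directly (a quick sketch in Python, though any language using IEEE 754 doubles shows the same thing):

    # 0.1 cannot be stored exactly in binary floating point; asking for
    # more digits than the default repr shows reveals the rounding error.
    print(f"{0.1:.20f}")        # 0.10000000000000000555

    # The error shows up under arithmetic, too:
    print(0.1 + 0.2 == 0.3)     # False
    print(f"{0.1 + 0.2:.20f}")  # 0.30000000000000004441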

Also, Zarutian is correct in saying that floating point numbers trade accuracy for speed. The rationals can be represented perfectly (up to memory limitations) but math with these representations is significantly slower. In some cases, however, accuracy is actually impossible. I will leave my original comment to maintain the integrity of the thread.
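For example, an arbitrary-precision rational type like Python's fractions.Fraction (GMP-backed libraries work the same way) keeps rationals exact where floats drift:

    from fractions import Fraction

    # Exact rational arithmetic: no rounding, ever.
    tenth = Fraction(1, 10)
    print(tenth + tenth + tenth == Fraction(3, 10))  # True

    # The float version of the same sum fails:
    print(0.1 + 0.1 + 0.1 == 0.3)                    # False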

If you're going to say something like that, at least include enough elucidation so someone like me doesn't come along with a half-assed explanation of an unintuitive declaration like that. Anyway...

"Accuracy is impossible" is a true statement owing to irrational numbers. i.e. You can't accurately describe e or π in floating point. And there's an infinite number of irrational numbers we don't have labels for, too.

As for rational numbers, accuracy is possible, so long as the numerator and denominator fit within certain representational size constraints.

(And there's something about size constraints and precision that should be said, but the complexity of fully describing it is beyond my available time, perhaps my available knowledge.)

It's not a tradeoff

Yes, it is. For any meaningful consideration of accuracy in computation, floating point is a tradeoff between accuracy and speed. If you have a float, you know how much memory any given value will take to approximate; any value you enter will be represented using the same amount of memory. If you have a fractional numeric type from an arbitrary-precision library, you don't necessarily know how much memory a value will take to represent until you've entered or calculated it; two different values may require different amounts of memory to represent.
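The fixed-versus-variable footprint is easy to see (a Python sketch; the exact byte counts are CPython implementation details):

    import sys

    # Every float takes the same space, whatever the value...
    print(sys.getsizeof(0.1))      # 24
    print(sys.getsizeof(1e300))    # 24

    # ...while arbitrary-precision integers (the numerator and
    # denominator of an exact rational) grow with the value.
    print(sys.getsizeof(1))        # 28
    print(sys.getsizeof(10**100))  # 72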

What's this have to do with speed? Well, you can do floating-point in hardware fairly simply, since you can load the full representation into a register. You can't so simply do that with arbitrary-precision, since you can't really know how big your values are going to be. Since floating-point carries with it domain constraints, the hardware can be tuned to perform best within that domain.
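The speed gap itself is easy to measure (a rough sketch with Python's timeit; the exact ratio varies by machine):

    from timeit import timeit
    from fractions import Fraction

    # Summing hardware floats vs. exact rationals.
    floats = [0.1] * 1000
    rationals = [Fraction(1, 10)] * 1000

    print(timeit(lambda: sum(floats), number=1000))     # fast
    print(timeit(lambda: sum(rationals), number=1000))  # far slower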

"Accuracy is impossible" is a true statement owing to irrational numbers. i.e. You can't accurately describe e or π in floating point. And there's an infinite number of irrational numbers we don't have labels for, too.

Neither irrationals nor rationals can be represented as floating points without error. It's not a question of rationality/computability/describability at all. It's a question of limitations imposed by representing the numbers in floating point format with a finite number of bits. You are correct that there are accurate representations for the rationals up to memory limits, e.g., a {p, q} tuple, but floating point numbers are not one of them.
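You can even ask for the exact rational a float actually stores, which makes the gap concrete (Python again):

    from fractions import Fraction

    # The exact value held by the double nearest to 0.1:
    print(Fraction(0.1))    # 3602879701896397/36028797018963968

    # An exact {p, q} representation, by contrast:
    print(Fraction(1, 10))  # 1/10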

Yes, it is. For any meaningful consideration of accuracy in computation, floating point is a tradeoff between accuracy and speed.

This is a more accurate way to put it. You do trade off accuracy against speed and memory (it is possible to represent the rationals accurately -- but not as floating points, which are the fast option).

Much better, but there's still one more caveat. Unless my understanding of floating point numbers is more lacking than I already think it is, there is a fixed set of rational values representable by IEEE 754 floating point numbers. Integer values from 0 to 2^24, for example. (For 32-bit floats. And I'm not sure about their negation. I know I've had occasion to depend on 64-bit floats to hold ~50-bit integers when 64-bit integers weren't available.) Also, numbers representable solely by the mantissa.
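The 64-bit analogue is easy to check: a double has a 53-bit significand, so every integer up to 2^53 is exact and adjacent integers start colliding just past it (a sketch in Python, whose float is a 64-bit double):

    # Every integer up to 2**53 is represented exactly...
    print(float(2**53) == 2**53)             # True
    print(float(2**53 - 1) == 2**53 - 1)     # True

    # ...but just past it, adjacent integers collide.
    print(float(2**53 + 1) == float(2**53))  # True: 2**53 + 1 rounds down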

If I understand numeric domains properly, any value a floating point variable actually holds is, by definition, a value that floating point can represent exactly. So while that doesn't mean any arbitrary value can be accurately represented in floating point, it's also not the case that no rational value can be accurately represented in floating point.
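Concretely, any rational whose denominator is a power of two (and whose numerator fits in the significand) is held exactly:

    from fractions import Fraction

    # Dyadic rationals are exact in binary floating point:
    print(0.5 + 0.25 == 0.75)                # True, no rounding anywhere
    print(Fraction(0.75) == Fraction(3, 4))  # True: stored value is exactly 3/4

    # 1/10 is not dyadic, so it is only approximated:
    print(Fraction(0.1) == Fraction(1, 10))  # False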

We're splitting hairs, sure, but when talking about floats, the details can be important.

Now you are speaking hogwash. There is no computer that can possibly give you the actual value the symbol "1" represents; it can only return that symbol, which represents the value. In the same sense, a system that returns pi or sqrt(2) does in fact work with exact values, though it represents them using other symbols than just sequences of integers. It will not be able to return a rational value that coincides with the irrational value, but that's because no such rational value exists. It's like saying that no computer can possibly return the number 1.1 because it cannot be represented in integers, and you only consider integers to be values. As long as your system can handle any given real number, then it's by definition 100% accurate. It will return the exact value, and that value will not be in the form of a rational number simply because the value isn't a rational number.

I'll try one more time, but I suspect it's futile. A non-computable number has an infinite, patternless sequence of digits following the decimal point. There is literally no possible way a finite computer could represent one. That's what 'non-computable' means.

So please explain to me this magical encoding that violates the laws of both mathematics and computer science.

I think you might not have the correct definition of computable number. A real number is computable if there is a program which, given input n, produces the nth digit of that number. The shifting nth root algorithm and the BBP formula prove that √2 and π, respectively, are computable.
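In that spirit, here is a small digit-producing program (a sketch using Python's integer square root rather than the digit-by-digit shifting algorithm): given n, it emits the first n+1 decimal digits of √2, which is exactly what computability demands.

    from math import isqrt

    def sqrt2_digits(n: int) -> str:
        # isqrt(2 * 10**(2*n)) is floor(sqrt(2) * 10**n), whose decimal
        # digits are the first n+1 digits of sqrt(2).
        return str(isqrt(2 * 10 ** (2 * n)))

    print(sqrt2_digits(20))  # 141421356237309504880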

A non-computable number is something like the number whose binary representation is 0.b_1 b_2 b_3 ..., where b_n is 1 if T_n halts and 0 otherwise, and (T_n) is an enumeration of the Turing machines.

My description was necessary but not sufficient, in that all non-computable numbers are non-terminating and patternless, but not all non-terminating, patternless numbers are non-computable. I thought it served my point well enough, so I didn't give a formal definition.

Based on the downvotes, it's pretty clear that this conversation has already gone off the rails so I'm not going to belabor the point.

No laws are violated. If you have sqrt(2) then you simply work with sqrt(2); no need to convert it to an approximate value in some particular encoding scheme. Similar to how, in math, you handle sqrt(-1) simply by handling it. To ease the mind of people such as you, we hide it behind the symbol i, but that's no different from just accepting that we can do math with it even if it does not coincide with any rational or even real number.

To put it differently, if you believe that you know of a numerical value that cannot be handled by Mathematica, write it down; however you wrote it down is likely how it will be encoded. From that encoding you'll be able to use whatever mathematical operations you want.
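That is essentially what computer algebra systems do (a sketch using SymPy as a stand-in for Mathematica): sqrt(2) stays symbolic, arithmetic on it is exact, and a decimal approximation is produced only on request.

    import sympy

    # sqrt(2) is kept as a symbol; no approximation is taken.
    x = sympy.sqrt(2)
    print(x * x)                     # 2, exactly
    print(sympy.simplify(x**2 - 2))  # 0

    # A decimal expansion appears only when you ask, to any precision:
    print(x.evalf(50))               # 1.4142135623730950488...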