The beauty of math; the humor of stupidity.

You can’t even describe most numbers!

(This is a revised version of an old post; you can see the original here.)

Please read the addendum at the bottom – I got carried away with computability, and ended up screwing up quite badly.

In the comments on my last post, a bit of discussion came up about some of the strange properties of real numbers, and that made me want to go back and get this old post about numbers that can’t even be described, and update it.

In those comments, we were talking about whether or not π contains all sorts of interesting subsequences. The fact is, when it comes to π, we just don’t know.

But… We do know that there are many numbers that do. There’s a property called normality. Informally, a normal number is one in which, in the limit, every sequence of digits of a given length occurs with the same frequency. In an infinitely long normal number, that means that any finite subsequence will occur, given enough time. My name, your name, the complete video of the original version of Star Wars, a photograph of the dinner I just ate and never took a photo of – it’s all there, in any normal number.

If you think about that, at first, it might seem profound: there are numbers that contain absolutely everything that ever did exist, and that ever could exist. That seems like it should be amazing. If the numbers were at all special, it might be. But they aren’t. Normality isn’t a rare property that’s only possessed by a special meaningful number like π. It’s a property that’s possessed by almost every number. If you could create a randomly selected list of a billion real numbers (you can’t in reality, but you can mathematically, using a finite version of the axiom of choice), odds are, all of them would be normal – all of them would have this property.

But here’s where it gets really strange. Those numbers, those normal numbers that contain everything? Most of them can’t even be described.

It gets stranger than that. We know that we can’t really write down an irrational number. We can write down successively more and more precise approximations, but any finite representation won’t work. So we can’t actually write down the overwhelming majority of real numbers. But in fact, not only can’t we write them down, we can’t describe the vast majority of numbers.

Numbers are something that we deal with every day, and we think we understand them. Over the ages, people have invented a variety of notations that allow us to write those numbers down: the familiar arabic notation, roman numerals, fractions, decimals, continued fractions, algebraic series, etc. I could easily spend months on this blog just writing about different notations that we use to write numbers, and the benefits and weaknesses of each notation.

But the fact is, when it comes to the vast, overwhelming majority of numbers, all of those notations are utterly useless! That statement seems bizarre at best. As strange as it seems, though, it’s true. In order to understand it, we have to start at the very beginning: What does it mean for a number to be describable?

The basics are really easy to explain: A describable number is a number for which there is some finite representation. An indescribable number is a number for which there is no finite notation.

Integers are all describable.

Finite length decimals are all describable.

Fractions are all describable.

Infinite repeating decimals are describable. First, they’re all fractions,
so they can be written as fractions. But even without using fractions, you can
use a notation to express the repetition.
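One way to see that concretely is to turn a repeating decimal back into an exact fraction. Here’s a sketch (the function and its name are my illustration, not notation from the post):

```python
from fractions import Fraction

def repeating_to_fraction(prefix: str, repeat: str) -> Fraction:
    """Exact value of 0.<prefix><repeat><repeat>... as a fraction.

    e.g. 0.1666... is repeating_to_fraction("1", "6") == 1/6.
    """
    # Standard trick: multiplying by 10**len(repeat) and subtracting cancels
    # the repeating tail, leaving an integer over a denominator of the
    # form 99...900...0.
    numerator = int(prefix + repeat) - (int(prefix) if prefix else 0)
    denominator = (10 ** len(repeat) - 1) * 10 ** len(prefix)
    return Fraction(numerator, denominator)

print(repeating_to_fraction("", "142857"))  # 1/7
```

`Fraction` reduces automatically, which is why 142857/999999 comes out as 1/7.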

Many irrational numbers – like π and e – are describable: we can’t write down the sequence of digits, because they go on forever without repeating; we can’t write them as a fraction; but we can write an algorithm that will generate the digits.
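For instance, here’s a sketch of Gibbons’ unbounded spigot algorithm, a finite program that streams the decimal digits of π forever (my illustration; the post doesn’t commit to any particular algorithm):

```python
def pi_digits():
    """Gibbons' unbounded spigot: yield the decimal digits of pi, one at a time."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is settled: emit it and shift the state.
            yield n
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            # Not enough precision yet: consume another term of the series.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
```

A few lines of code, yet the output never ends:

```python
from itertools import islice
print(list(islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```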

All of those are describable. What isn’t? I can’t describe them to you – because, by definition, that’s impossible. They’re numbers whose digits aren’t predictable – not by any possible definition of predictable: there’s no algorithm, no process, no pattern – there is simply no way, in a finite amount of space, to describe how to write them down.

To show that most numbers are indescribable, I come at it from a computational perspective. When you say that a number is describable if there’s a finite process for describing how to write it down, what you’re really saying is that there’s a program that will generate it: if you can write a finite program that will generate a representation of the number, it’s describable. It doesn’t matter whether that program ever finishes or not – so if it takes an infinite amount of time to compute the number, that’s fine – as long as the program itself is finite. According to this definition, π is describable: its notation in decimal form is infinite, but the program to generate that representation is finite.

An indescribable number is, therefore, a number for which there is no possible program which will ever generate the number in any form.

How can we show that the vast majority of numbers are not describable? It’s easy. In a branch of theoretical computer science called recursive function theory, we talk about program numberings. Basically, for any computing device, we can assign numbers to its programs. Every program is just a number.

This isn’t just an abstraction – it’s real. Take a program – any program, written in any language. It’s a series of characters that can be written into a file. Each character is represented by a group of 8 bits. Take that set of numbers, and line their bits up, one after another. The result is a huge positive binary number. Taken this way, every sequence of characters is represented by exactly one unique natural number – and every number represents exactly one sequence of characters. Thus, every program that can be represented in a file – which is every possible computer program, valid or invalid – is represented by exactly one natural number.
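Here’s a minimal sketch of that numbering (my own toy illustration, not a formal Gödel numbering; it assumes 8-bit text with no leading NUL byte, so the round trip is exact):

```python
def program_to_number(source: str) -> int:
    """Pack a program's bytes, 8 bits at a time, into one natural number."""
    return int.from_bytes(source.encode("utf-8"), "big")

def number_to_program(n: int) -> str:
    """Invert the packing, recovering the original text."""
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

print(program_to_number("print(42)"))  # one natural number per program
```

Every source file maps to a distinct natural number, and decoding that number gives the file back – which is all the countability argument needs.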

Using this system, the set of all possible computer programs is a subset of the set of natural numbers. That means that the set of all possible computer programs is, at most, countably infinite.

The set of all real numbers is not countably infinite. It’s much, much larger than that. That means that most real numbers cannot be described by any computable algorithm. They’re indescribable. There’s no process you can use to uniquely specify them: if there was, there’d be a computer program describing that process – but there can’t be any program for most numbers.

If we could write infinitely long computer programs, then the undescribable numbers would be describable – but their descriptions would, obviously, be infinitely long. Most numbers cannot be described in a finite amount of space. We can’t compute with them, we can’t describe them, we can’t identify them. We know that they’re there; we can prove that they’re there. All sorts of things that we count on as properties of real numbers wouldn’t work if the indescribable numbers weren’t there. But they’re totally inaccessible.

ADDENDUM: I screwed up.

I’m a computer science guy, and I love the theory of computability. I let that take over, without thinking enough about it, and managed to make a huge mistake here!

What I described above as describable numbers isn’t right. It’s the set of computable numbers. The set of computable numbers is a strict subset of the set of describable numbers. As several commenters pointed out, there are many numbers that are describable but not computable.

The describable numbers are all numbers for which there is any possible finite description that uniquely identifies the number. The countability argument still works: you can still enumerate all possible finite-length strings that could be descriptions, and define a one-to-one correspondence between strings that could be descriptions and natural numbers. The set of describable numbers is, thus, still countable, and the set of undescribables is not, which implies that the set of undescribables is far, far larger than the describables.

But my original explanation was stupidly wrong. I apologize for the error. I’m not editing the text of the original post, except to put a pointer to this addendum: in a case like this, I think editing it would be dishonest, a way of covering up my serious error.

Is there a distinction between “describability” and “computability” that you’re skating over here? Some non-computable numbers (like Chaitin’s constant) can be described/defined. Presumably the set of numbers that can be finitely defined by any formal axiom system is still a countable set, because their definitions are still just representable as strings of characters.

It’s not clear if you mean to use “describable” to mean “definable”. It seems you do not; that instead you intend “describable” to mean “computable”. Okay.

But it’s important to note that there is a distinction. In classical mathematics, some numbers, like Chaitin’s constant, are definable (there is a formula which uniquely describes it), but not computable.

It is in fact possible that all real numbers are definable, even though it *appears* that the definable numbers are only countable. (You are comparing apples to oranges when you compare the cardinality of definable numbers to the cardinality of real numbers. See Skolem’s paradox)

And amusingly enough, it is also possible that all real numbers are computable, if we simply work in constructive mathematics. Yes, this is true despite the fact that there are countably many Turing machines, yet still the real numbers are not countable in constructive mathematics. The trick is that in constructive mathematics, a subset of a countable set is not necessarily countable. A lot of unsettling facts go away when we move to constructive mathematics (they are replaced by facts which violate our old intuitions).

> The set of describable numbers is, thus, still countable, and the set of undescribables is not, which implies that the set of undescribables is far, far larger than the describables.

This is still wrong 🙂
See the mathoverflow response I linked above, it is very well written. You are comparing cardinalities in a way that doesn’t make sense. There is no “set” of definable numbers within set theory (since the concept of definability within set theory cannot itself be described in set theory). It’s an external concept that only shows up when you consider a model of set theory. Hence it does not even make sense to compare the cardinality of definable numbers with the cardinality of the reals.

See Skolem’s paradox on wikipedia. Skolem’s paradox shows why the kind of reasoning you are doing doesn’t make sense.

You can’t use set theory to create a true set theoretic definition of the set of all descriptions of numbers – because we don’t have a formal way of specifying just what a “describable” number is.

But we can enumerate the set of possible descriptions. We can’t know which of them is actually a valid description – so all we can define is possible descriptions. But we know that the descriptions are contained within the set of all possible descriptions – and since the set of all possible descriptions is enumerable and countable, the set of actual descriptions can’t be larger than that.

So far so good, but the fact is that there are models of set theory in which all real numbers (in fact all sets!) are definable. There are even *countable* models of set theory (this is the downward Löwenheim-Skolem theorem. It’s the crux of Skolem’s paradox), despite the fact that you can prove that some sets are uncountable within set theory.

So for all you know, *every* real number *is* definable. Certainly you cannot prove within set theory that there exists an undefinable real number (not least because you can’t even write down the definition of “definable” in set theory).

The point is that the notion of cardinality is *not* “absolute”. It’s about the existence and non-existence of certain special kinds of maps. Cantor’s diagonalization result says that there is no surjective map from N to R. When you consider the model theory, this means that there cannot be an object in your model which represents such a surjective map. But your model can nonetheless be countable!

Ok, if I understand correctly, you want to say something like define D, the set of ‘possible descriptions’ as the set of (strings corresponding to) first-order formulas in the language of set theory. This is countable, fine. But now you want to argue that the “valid descriptions” are contained in the “possible descriptions”. But in order to do this you need to say what is a valid description! You want to say something like psi \in D is a valid description of a real number r if “psi(r) holds”, and r is the unique real number for which psi holds.

The problem is that you cannot define “psi(r) holds” within set theory (I guess this is literally Tarski’s undefinability theorem?). Intuitively I understand what you are trying to do, but formally it doesn’t make any sense.

So you can’t prove “there exists an undefinable number” because you can’t even define “undefinable”.

The right way to make sense of these concepts is to look at the model theory. And there it’s plain-as-day that there exist models where absolutely everything is definable.

What is this “finite version of the axiom of choice” you claim to need to build a finite list of random real numbers? The finite axiom of choice is trivially true (any finite product of nonempty sets is nonempty). On the other hand, you need countable choice to build even one “random” element of [0,1], since you may use the surjective map {0,1}^ℕ → [0,1]. Going from one to any finite count does not change this.

Just as a barometer reading from one of your very interested, but not mathematically trained, readers: The original post made good sense to me, Chaitin’s constant gets into rarefied territory (but I think I’ll get it after a couple more readings of its Wiki page), but Skolem’s paradox… is going to take a lot more reading!

More on topic, I still have difficulty making the leap from an infinite, normal string of digits to the idea that any finite sequence must exist in that string. How is that demonstrated? I know pi has been explored to a vast number of digits. Have any finite strings containing structure been found? A million zeros, for example, or a PNG or anything along those lines?

In a normal number, every finite sequence of digits has by definition the same probability of occurring (as other sequences of that length). Now take a specific finite sequence and assume it doesn’t occur in the number; but then it has probability 0, which is a contradiction.
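As a rough sanity check of that frequency claim, here’s a simulation sketch, using a uniformly random digit stream as a stand-in for a normal number (the seed and stream length are arbitrary choices of mine):

```python
import random

random.seed(2024)  # fixed seed so the run is reproducible
stream = "".join(random.choice("0123456789") for _ in range(1_000_000))

# Count overlapping occurrences of the two-digit block "77".  A normal
# number has every k-digit block at limiting frequency 10**-k, so "77"
# should show up at a rate of about 0.01.
hits = sum(1 for i in range(len(stream) - 1) if stream[i:i + 2] == "77")
freq = hits / len(stream)
print(f"observed frequency of '77': {freq:.4f}")
```

A million digits is nowhere near “the limit,” but the observed rate already hovers close to the predicted 1%.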

Recognizing that Wikipedia might not be the most authoritative source, it talks about a disjunctive sequence as being defined as including all finite sequences and that normal numbers are all disjunctive sequences (but not all disjunctive sequences are normal).

I take it that the probability of highly structured sequences (such as a PNG or the works of Shakespeare) would be extremely low!

Yes, every finite sequence occurs in a normal number, since it has probability (or better, frequency) greater than 0; that’s the same as saying every normal sequence is disjunctive.

The probability (or frequency) of “highly structured” sequences (whatever that is) in a normal number is the same as that of any other sequence of the same length *by definition*. That’s the point of a normal number. The frequency only depends on the length of the sequence.

I don’t think there is even a good definition of “highly structured”. How do you measure structure? Are some PNGs more structured than others? Will the works of Shakespeare have more or less structure if they are compressed beforehand? Talking about structure is a red herring here.

I guess what I mean by “highly structured” is a sequence in which there is not an equal distribution of digits on most scales. For many various length sub-strings, the distribution would always be close to even.

I think it’s what information science calls entropy? A sequence with very low entropy is characterized by how any small change is very distinguishable, whereas different sequences with high entropy aren’t notably distinguishable.

One normal sequence looks much like any other, but a sequence with very low entropy (say a million zeros) is immediately distinguishable from any other sequence.

I recognize that any specific sequence is just as likely as any other, but my sticking point seems to be that there may be lots of (say) PNG images, but there seems a lot more sequences that aren’t.

Is there any sense of Cantoresque different infinities here, or is there really a 1:1 mapping between sequences with low entropy (normal sequences) and those with high entropy (and very non-normal distributions)?

Skolem’s paradox is a bit beyond me, even with a BS in mathematics and some reading in set theory. I’ve got to find a good cheap book on the subject.

If you have an infinite normal string of digits, then each digit is effectively random. There will be an infinite number of 7s, and after each of those 7s there is a one in ten chance that there will be another 7 (so an infinite number of 77s) and after each of those one out of ten will be followed by a 7, and so and so forth, and the same for any string of digits you care to use.

By definition, any structure seen in a normal sequence is imposed by us. “LIZAMINNELI”, in a random string of letters, appears roughly once every 26^11 (about 3.7 million billion) characters, so if we look at pi in base 26, we probably wouldn’t have found it. If you’re looking for a million zeros in the decimal version of pi, you’re probably looking at around 10^(one million) digits of pi; a million of any digit improves your odds to 1 in 10^(million – 1) digits of pi.
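The back-of-the-envelope arithmetic can be sketched directly (the function name is mine; this just computes expected gaps under the uniform-random model):

```python
def expected_gap(alphabet_size: int, pattern_length: int) -> int:
    """Average spacing between occurrences of a fixed pattern in a uniformly
    random stream (ignoring small self-overlap corrections)."""
    return alphabet_size ** pattern_length

# An 11-letter name in a random base-26 stream:
print(expected_gap(26, 11))  # 3670344486987776, about 3.7 * 10**15

# A run of a million identical decimal digits would have an expected gap of
# 10**1_000_000 positions -- a number that itself has a million and one digits.
```

Which is why no amount of practical digit-hunting in π will turn up anything like a million zeros.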

Unless you’re using Bible code-like techniques, which are good at producing messages out of noise, any apparent structure of any significant size is going to be very rare.

This is wonderfully mind-blowing! I can see why Leopold Kronecker said, “Die ganzen Zahlen hat der liebe Gott gemacht, alles andere ist Menschenwerk” – “God made the integers; all else is the work of man.” The more I learn about real numbers, the more they seem irrational. 🙂