Given infinite computing power, surely there cannot be any problem or puzzle that is incapable of solution? The famous, or infamous, incompleteness theorem of Kurt Gödel says different.

We make the assumption that computers, logic and scientific thought can go anywhere we care to take them - there are no limits to what we can work out.

So, if I were to tell you that I had a mathematical theorem that so far didn't have a proof, you'd just assume that the mathematicians hadn't really been working hard enough on the problem.

For example, Fermat's last theorem went unproven for centuries and then suddenly it was solved by some new maths, and some older maths cleverly applied.

Fermat's last theorem is a negative sort of theorem. It states that there are no positive integers a, b and c that satisfy:

a^n + b^n = c^n

for n > 2 (there are lots of examples of a, b and c if n = 2 or 1). It almost cries out for you to prove it wrong by finding a counterexample, i.e. an a, b and c that do satisfy the relationship for some n > 2. But the proof is much harder than you could ever possibly imagine.
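The counterexample hunt the theorem invites is easy to sketch. Here is a minimal brute-force search (the bounds and function name are choices made for this sketch, not anything canonical); Wiles's proof guarantees it comes back empty-handed:

```python
# Brute-force hunt for a counterexample to Fermat's last theorem:
# positive integers a, b, c and n > 2 with a**n + b**n == c**n.
# By Wiles's proof, no such counterexample exists.

def fermat_counterexample(limit):
    """Search all a, b up to `limit` and n from 3 to 9."""
    for n in range(3, 10):
        for a in range(1, limit + 1):
            for b in range(a, limit + 1):      # b >= a avoids duplicate pairs
                s = a**n + b**n
                c = round(s ** (1 / n))        # candidate integer n-th root
                for cand in (c - 1, c, c + 1): # guard against float rounding
                    if cand > 0 and cand**n == s:
                        return (a, b, cand, n)
    return None

print(fermat_counterexample(50))  # -> None, as the theorem predicts
```

Running the same search with n fixed at 2 would turn up Pythagorean triples like (3, 4, 5) almost immediately, which is what makes the n > 2 case so tantalizing.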

The four-color problem similarly dragged on for years, but it was eventually solved in a completely new way, with a computer generating a large part of the proof. The four-color theorem states that you need at most four colors to color a map so that no two adjacent regions have the same color (where adjacent means sharing a boundary, not just a corner). Again the theorem sets your mind working: can you find a map for which four colors aren't enough?
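What the theorem asserts can be stated in miniature by treating a map as an adjacency graph, where an edge joins two regions that share a boundary. The wheel-shaped map below is a made-up example chosen because it genuinely needs all four colors (the hub touches an odd ring of outer regions):

```python
# Check that a coloring of a map (given as an adjacency graph) uses at
# most four colors and never gives two neighboring regions the same one.

def is_proper_coloring(edges, coloring, max_colors=4):
    if len(set(coloring.values())) > max_colors:
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)

# Hub region 0 touches five outer regions arranged in a ring.
edges = [(0, r) for r in range(1, 6)] + [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
coloring = {0: "red", 1: "green", 2: "blue", 3: "green", 4: "blue", 5: "yellow"}

print(is_proper_coloring(edges, coloring))  # -> True
```

Note the asymmetry the theorem trades on: verifying a given coloring is trivial, as above, but proving that *every* planar map admits one was the century-long struggle.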

In the case of the four-color theorem, the computer's accurate logic was used to work through a long list of cases that would have been impossible for a human to complete in a reasonable time.

Humans plus computers seem to be capable of working things out in a way that is precise and efficient – surely there cannot be anything that is beyond reach?

The fact is it can be proved that there are truths that are indeed beyond the reach of logic.

This is essentially what Kurt Gödel proved in 1931. The idea isn’t a difficult one but it is often used to justify some very strange conclusions most of which aren’t justified at all!

It is important and fun to find out a little more about what it really means.

The mechanical maths machine

First we need to look a little more closely at how we really expect the world to work.

Back at the turn of the 20th century mathematics had moved on from being a “creative” subject to trying to be precise and logical.

The idea was that you stated your axioms – the things you regarded as true – and then used rules of inference to derive new truths. This was the big project – the axiomatization of the whole of mathematics. If it could be achieved then we would be in a position to say what was true with certainty – as long as the initial axioms were OK, that is.

Notice that we are talking about some very fundamental things that you might think could be taken for granted.

For example, Alfred North Whitehead and Bertrand Russell produced a huge book (Principia Mathematica, published in three volumes in 1910, 1912 and 1913) that took several hundred pages to get to a proof that 1+1=2!

The point is that this was very detailed work building from very simple initial axioms which were so basic that no one could dispute them. The fact that 1+1 was 2 wasn't just an observation of physics but a logical necessity given the initial axioms.

The axiomatization of mathematics also potentially reduced maths to a mechanistic shuffling of symbols. You didn’t even need to give the theorems and proofs any interpretation in the real world. You could regard it as a game with symbols that you moved around according to the rules. The new arrangements are theorems and the way that you got the new arrangements from the existing ones are the proofs.
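This "game with symbols" view can be made concrete with a toy formal system. The sketch below uses Hofstadter's well-known MIU system: the single axiom is the string "MI", and four rewrite rules generate new theorems from old. Enumerating everything derivable in a few steps is pure mechanical shuffling, with no interpretation needed:

```python
# Theorems are strings; proofs are sequences of rule applications.
# Hofstadter's MIU system: axiom "MI" plus four rewrite rules.

def successors(s):
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add("M" + s[1:] * 2)              # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (nothing)
    return out

def theorems(axiom="MI", steps=4):
    """Every string derivable from the axiom in at most `steps` rule uses."""
    seen = {axiom}
    frontier = {axiom}
    for _ in range(steps):
        frontier = {t for s in frontier for t in successors(s)} - seen
        seen |= frontier
    return seen

print(sorted(theorems(steps=2)))
```

Famously, "MU" is never derivable in this system (the count of I's mod 3 is an invariant no rule can break) – a tiny foretaste of the idea that a symbol-shuffling system can have truths about itself that it cannot reach by its own rules.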

Many mathematicians rejected this soulless brand of mathematics but they still adopted the basic way of working and regarded a proof as a sequence of legal steps that takes us from a known truth to a new truth. The freedom and creativity was what came before the proof. You spend time thinking and experimenting to find a way to put together a logical argument from axioms to a new theorem.

The failure

What could possibly be wrong with this idea?

It seems so reasonable that what we know should be extended in this way by nothing but the force of pure logic. There really isn't any alternative to the idea of logical proof that sane people find acceptable.

It also seems entirely reasonable to suppose that applying this method will eventually get us to any statement that is true. What is more, you might as well give a computer the task of shuffling the symbols and seeing what is true.

Yes, math is soulless – just a matter of mechanical computation.

Consider now the problems discussed at the start – Fermat's last theorem and the four-color theorem. On this view their solutions are just a matter of computing time.

Now we come to the downfall of the method.

You would assume that, given a properly formed arrangement of symbols, the computer, or the human, could eventually work out either a proof of the theorem or an anti-proof, i.e. a proof that the negation of the theorem is true and the theorem is false. It might take a long time, but in principle you could work out if the statement was true or false. This is the assumption of completeness – that every properly formed statement can be settled one way or the other – which we suppose will hold for any reasonable or useful system of reasoning.
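For a weak enough system this hoped-for procedure really does exist. Propositional logic is decidable by sheer brute force: evaluate the formula under every truth assignment. The sketch below encodes formulas as Python functions of booleans (a hypothetical encoding chosen for this sketch); Gödel's theorem says no analogous procedure can settle every statement once the system is rich enough to express arithmetic:

```python
from itertools import product

# Decide a propositional formula by checking all truth assignments.

def classify(formula, variables):
    """Return 'theorem', 'anti-theorem' or 'contingent' for a formula
    given as a boolean-valued function of its variables."""
    results = {formula(*vals)
               for vals in product([False, True], repeat=len(variables))}
    if results == {True}:
        return "theorem"        # true under every assignment
    if results == {False}:
        return "anti-theorem"   # its negation is a theorem
    return "contingent"         # true under some assignments, false under others

print(classify(lambda p: p or not p, ["p"]))            # -> theorem
print(classify(lambda p, q: (not p) or q, ["p", "q"]))  # -> contingent
```

Every formula lands in one of the three bins after finitely many checks – exactly the tidy situation the axiomatizers hoped would hold everywhere.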

A theorem has to be either true or false – there is no middle ground – and it isn't reasonable to have a proof that it is both.

This sounds so obvious that there seems no need to think about it too deeply: surely any statement, let alone a theorem with a proof, simply has to be true or false.

Now consider:

This sentence is false

Well you can begin to see that things are not quite so simple.

If the statement is true it is false and if it is false it has to be true.

This is a version of the liar paradox, often associated with the Cretan philosopher Epimenides of Knossos (who was alive around 600 BC), and a formalized relative of it is the key to Kurt Gödel's proof that axiomatic systems rich enough to express arithmetic aren't complete: there are statements that cannot be proved true or false within the system.

There may be a larger system, i.e. one with more axioms, in which these statements are provable, but that's almost cheating – and in any case Gödel's argument applies to the larger system too, which has unprovable statements of its own.