Sunday, July 25, 2010

Computational Irreducibility Begins to Bite

One way to think about prime numbers is as follows:

Take a one-dimensional binary pattern with all the bits set to zero. Now take a number a and set the bits at positions 2a, 3a, 4a, 5a, etc., thus effectively describing a periodic pattern of set bits with an interval of a. This simple selection scheme for setting bits can be represented by the elementary mathematical function y = an + a, where n = 1, 2, 3, 4, … and where the values of y give us the locations of the bits that must be set.

Now, let us carry out this process of setting bits for all values of a = 2, 3, 4, 5, 6, 7, … until limited by the length of the binary pattern. When this procedure has been completed, every bit (from position 2 upwards) that has not been set sits at a location whose position value is a prime number. Basically, a prime number is a number that doesn’t get selected by any of the periodic selection schemes defined by equations of the form y = an + a.
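The bit-setting procedure just described is essentially the classic sieve of Eratosthenes. A minimal Python sketch (the function and variable names are my own):

```python
def primes_up_to(limit):
    """Mark every position y = a*n + a (n = 1, 2, 3, ...) for each a,
    then read off the positions that were never marked."""
    selected = [False] * (limit + 1)  # the 1-D binary pattern, all zeros
    for a in range(2, limit + 1):
        for y in range(2 * a, limit + 1, a):  # positions 2a, 3a, 4a, ...
            selected[y] = True
    # Positions from 2 upwards whose bit was never set are the primes.
    return [p for p in range(2, limit + 1) if not selected[p]]

print(primes_up_to(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Every composite number is a multiple 2a, 3a, … of some smaller a, so it gets marked; the unmarked survivors are exactly the primes.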

Now consider the mathematical object called “The Ulam Spiral of primes”, a concept that I got from this very interesting blog post by Mark Chu-Carroll. In the diagram below the natural numbers are arranged in a spiral pattern and the primes are plotted in red.

If this process is continued and viewed from a distance this is what you get:

As MarkCC puts it:

Look at how many diagonal line segments you get! And look how many diagonal line segments occur along the same lines! Why do the prime numbers tend to occur in clusters along the diagonals of this spiral? I don't have a clue. Nor, to my knowledge, does anyone else!

Intuitions about it are almost certainly wrong. For example, when I first thought about it, I tried to find a numerical pattern around the diagonals. There are lots of patterns. For example, one of the simplest ones is that an awful lot of primes occur along the set of lines f(n) = 4n² + bn + c, for a variety of values of b and c. But what does that tell you? Alas, not much. Why do so many primes occur along those families of lines?

Interpreting this spiral diagram in terms of the 1-D binary pattern model, these 2-D spirals are another way of generating a “bit set” selection scheme, except that these schemes are not periodic: the spacing between selections increases in proportion to n, which leads to the n² term in MarkCC’s expression. What MarkCC is telling us is that some of these selection schemes pick up primes with a high frequency. He then asks the pertinent question:

Why do so many primes occur along those families of lines?
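That some quadratic families really are unusually rich in primes is easy to check empirically. The sketch below is my own illustration, not MarkCC’s: it counts how often f(n) = 4n² + bn + c is prime for a few choices of b and c. The pair b = 2, c = 41 corresponds to Euler’s famous prime-generating polynomial n² + n + 41 evaluated at even arguments, while b = 2, c = 4 yields only even values and so no primes at all:

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def prime_fraction(b, c, count=100):
    """Fraction of n in [0, count) for which 4n^2 + bn + c is prime."""
    hits = sum(is_prime(4 * n * n + b * n + c) for n in range(count))
    return hits / count

for b, c in [(2, 41), (6, 43), (2, 4)]:
    print(f"b={b}, c={c}: {prime_fraction(b, c):.2f}")
```

For n below 20 the first two families hit a prime every single time, while the last produces none: exactly the kind of prime-rich and prime-poor lines visible in the spiral.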

When I heard that question the first useful thought I had was to think of Stephen Wolfram’s work on computation and in particular of his concept of computational irreducibility. Taking our cue from Wolfram the answer may be that there is no answer to this question of “why?” other than to carry out the computation itself. Let me explain:

The time required to arrive at certain computed patterns cannot be shortened beyond a minimum computation length. Wolfram brings this concept into sharp focus: he tells us that this irreducibility means what it says. Once we have the minimum-length computation in our hands there is, by definition, no other computation available to demonstrate the result any quicker.

If computational irreducibility applies to the case in point then it would mean that the only way to show up those diagonals with a high incidence of primes is to do the long calculation of generating the spiral and the primes. When we ask the question “why” in this context we are, I suggest, looking for a succinct analytical answer to the question in the form of some relatively simple algorithm demonstrating the pattern in a relatively short execution time; in other words an analytical short cut. But according to Wolfram there may be no simpler way of demonstrating the pattern than to do it the long way. Or rather, the long way is in fact the short way because there is no shorter way to do it!

Of course, we don’t know for a fact that this particular problem is computationally/analytically irreducible. But even if it is reducible we may still find that the solution to the meta-problem of determining whether or not a task is reducible is a computation that itself is plagued by the limits of computational irreducibility.

I believe the generality of Wolfram’s views on this subject is valid. I also believe that it is possible to get an intuitive feel for why they are valid. As I have already said, when human beings ask “why?” they are asking for an answer in the form of some “short” algorithm that succeeds in arriving at an answer in a “short” execution time. We are human, after all, and we have limited space and limited time. Unsurprisingly, then, we want answers to be reached using as little space and time as possible.

But if we are going to limit space and time, the computations that are open to algorithmic demonstration are going to be accordingly limited. The reason for this is fairly obvious: as soon as we make stipulations limiting computational resources, those stipulations automatically exclude the overwhelming number of computations that utilize either large algorithms or large execution times (or both). The number of short algorithms executing in a short time is a limited resource, and their allocation to problem solutions runs out long, long before they have covered the whole domain of problems. Thus the absence of a simple analytical solution to the question of the spiral and primes may be evidence of the demand pressure on the relatively short supply of succinctly performing algorithms: the demand for fast, succinct algorithms outstrips their supply, and thus most problems will have irreducibly long-winded solutions.
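The “short supply” of succinct algorithms can be made precise with the standard counting (pigeonhole) argument from algorithmic information theory; this framing is my own gloss, but the arithmetic behind it is elementary:

```python
# There are exactly 2**k binary programs of length k bits, so the total
# number of programs shorter than n bits is 2**n - 1:
def programs_shorter_than(n):
    return sum(2 ** k for k in range(n))  # = 2**n - 1

# But there are 2**n binary strings of length n. The short programs
# therefore cannot cover them all: most n-bit patterns have no
# description much shorter than the pattern itself.
n = 20
print(programs_shorter_than(n), "programs vs", 2 ** n, "patterns")
```

The same shortage applies to fast programs: there are simply too few short, quick algorithms to supply a shortcut for every question we might care to ask.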

THE IDEAS-EXPERIENCE CONTENTION

“Ideas versus experience!” is a slogan expressing the uneasy relation between what we think the world to be and what our actual experience suggests it is. Experience makes or breaks ideas. But the reverse is also true: well-established ideas can influence the recognition, acceptance and even the perception of experience. In short, there is a two-way dialogue between ideas and experience. Sometimes that dialogue can turn into an argument, even a row.

The success of science is based on a formalisation of this potentially contentious relationship, as it seeks to support or refute theoretical notions by systematically comparing them with experience. In the far less formalised contexts of daily living we are probably using a similar heuristic when we display a tendency to drop ideas that lose the confrontation with experience, and retain those that win. Winning ideas, like the gladiators of the Roman arenas, live to fight another day. This Darwinian slant on the struggle for ideological survival is itself an ideology that must, for consistency’s sake, submit itself to the very process it purports to describe. For example, it should be able to give an account of the resilience of traditional theories in the face of contrary indicators. And in the extreme, an explanation also needs to be given of how conspiracy theorists continue to hold on to a ramifying structure of untested elaborations.

Human theoretical visions are often grand and sweeping, confidently affirming states of affairs that are far beyond immediate sensation. The vast unseen domains covered by our best theories contrast with the limitations on human perceptual resources, resources that only allow a very sparse experimental sampling of our most ambitious ideas. Even a professional scientist only ever tests a small portion of any theoretical structure. Moreover, our theoretical concepts are ambitious enough to cover unreachable areas like the centres of distant stars, sub-atomic dimensions, events long past into history, and very complex inscrutable objects like the human mind and its societies. Thus, science is metaphysical in as much as it has to admit that it covers vast domains inaccessible to testing in practice if not in principle. For the intelligent layman even the keyhole view of the experimental scientist may not be available. So whence comes the authority to affirm ideas that are often so grandiose in their sweep that they claim to impinge upon the very meaning of life, the universe and everything?

The fact is that for most of us the really big ideas diffuse through to us from our societal context, and at most these big ideas are only illuminated here and there, at a very few places, with actual direct experience of the phenomena they conceptualise. These big ideas float around in the ether-like media of society in the form of books, lectures, programmes, the Internet and so on: objects that the postmodernist philosophers call "texts".

For someone such as myself the battle to make sense of life was joined as soon as I was aware of mystery. But looking back it is clear to me that I haven't done an honest scientific experiment in the whole of my life. For me the effort to bring sense and integration to the complexity of life boils down to grappling with the many texts of society; these texts carry both big theoretical ideas and the raw data from which ideas crystallize. These texts are compared with other texts in the search for coherence and consistency. These texts take the place and role of experiments, as one tests theories using texts. These texts are, in effect, the experience of life. My own modest attempt at grandiose theorising has already been presented to the world (see this blog, "Physics and the Wild Web"), and my excuse is that I have simply inherited the theoretical hubris of the human race.

The postmodern philosopher Jacques Derrida said, "There is nothing outside the text". I think he was at least right about the primary role texts play in the life of those like myself whose instinct has been to grapple relentlessly with the riddles of life, and to seek out meaning and go where no man has gone before. But one could equally claim, "There is nothing outside the experience". For one might read the "text" of the clouds in the sky as a predictor of impending weather, just as one reads socially generated symbolic configurations as the predictor of certain kinds of thought or potential experience. For the text is just as much an experience as more primeval phenomena like thunder and lightning. Texts are part of our cosmic experience and they imperceptibly grade into the more conventional notion of experience. Where the crossover point lies is difficult to say.

Postmodernism may have got it right about the primary role of the text, but it has made one very serious mistake. Where I would radically differ from the postmodern view is that I believe the mass of social texts, if we allow them, naturally converge upon grand consistent narratives, and those narratives are here to stay. And that is because these texts, I submit, are part of a world with an underlying rational integratedness, and that integratedness is being revealed to us bit by bit. For revelation it is: revelation is what we cannot discover unless God chooses that we discover it. My perceptions and reason, limited though they may be, were not self-made; they had to be discovered as gifts. These gifts are given to all of us; one can but trust and use them. Reason is a grace of revelation. For this reason the hubris of theory-making is justified.