In Douglas Adams's wonderful Hitchhiker's Guide to the Galaxy, the computer Deep Thought, on being activated, begins with "I think, therefore I am" and rapidly deduces the existence of income tax and rice pudding. In Adams's universe, computers not only think but have personalities and feelings. It is a cosmos epitomised by the android Marvin, who trudges his way across the galaxy in a constant mire of depression.

I heard the original broadcasts of the Hitchhiker radio series as an undergraduate mathematician at King's College, Cambridge. We used computers on our course: huge, unreliable machines, frequently crashing and losing any unsaved work, with less power and functionality than my present BlackBerry. Nobody ascribed to them the capacity for thought, unless it were a suspicion that some vague malicious entity inhabited the deepest recesses of their memory banks, ready to crash the system at the most inopportune moment.

They've come a long way since then. Today's machines more fully mimic the range of activities of the human brain: they process audio and visual input, recognising both speech and shapes, and increasingly they have the capacity to "learn" through experience. But they are still no more than fast mathematical calculators.

To understand why, we need to turn to the philosophy of science, and in particular to the notion of "emergence" that arose in response to the post-Einsteinian reductionism of the mid-twentieth century. Put simply, as systems become more complex they may reach a level at which they can no longer be adequately explained by reference to the behaviour of their parts. New things have emerged that are as real as the lower-level entities from which they are built. Chemistry is not just hard physics, biology not just hard chemistry, sociology not just hard biology. Human behaviour is intrinsically incapable, even in theory, of being reduced to the equations of motion of subatomic particles. Emergence is a concept that has been effectively deployed, not least by theologians, against the reductionism advocated by some biologists in more recent years. Look up the writings of Arthur Peacocke if you want to know more.

The human brain is more than a very large set of electrical impulses and conductors. Something, call it the mind if you like, has emerged that cannot be located in any subdivision of it, and it is the activity of this new thing to which we refer when we speak colloquially of "thinking". The computer can calculate, but the mind knows, and moreover knows itself to be a mind. This "I" that exists is, as both Deep Thought and Descartes would aver, not a property of neurons and synapses but something that transcends them. The question then is whether the addition of ever more memory and faster processing will lead to a point where a "mind" emerges in a machine.

Computing power and speed have, in recent times, increased exponentially. But there are limits to the logic of "ever smaller, ever faster". They come from the basic constraints of the atomic scale. Even the simplest circuit cannot be smaller than the molecular level. Moreover, as electrical activity takes place on ever more closely packed chips, cooling becomes ever more difficult. Whilst some speak of harnessing quantum superposition to allow multiple parallel computations through the same structure, in practice the atomic limits will be reached long before emergence happens by sheer force of calculating power.

Other, and bigger, obstacles lie beyond simply increasing power. You don't have to be a Christian to reject the fundamentalist determinism that would explain human responses entirely by the combination of program and chance. By contrast, for the computer even the second is usually a fiction built on the first: its "random" numbers are themselves the output of a program. To think is firstly to recognise patterns, something computers are only slowly beginning to be structured to achieve. Beyond that, and more crucially, thought involves the creation of patterns that produce in their turn concepts and language, ready for manipulation. These patterns are not derived from pure logic. Reluctant though I am to suggest it, Adams is wrong. Neither rice pudding nor income tax is deducible from first principles, however much computing power is available; they require a capacity for creativity that modern computer technology is nowhere near being able to emulate.

Where Adams gets far closer to the mark is in his insight that a truly thinking computer would possess personality and feelings. To have mental capacity is to have the capacity for mental states such as stress, anxiety, and depression. Marvin is not a freak but a logical consequence. The first words of the first truly thinking computer are much less likely to be Descartes' famous proposition than they are to be simply, "Save me!" And that, for me as a bishop, is where the really interesting conversation will start.