Some personal views on nanotechnology, science and science policy from Richard Jones

Transhumanism has never been modern

Transhumanists are surely futurists, if they are nothing else. Excited by the latest developments in nanotechnology, robotics and computer science, they fearlessly look ahead, projecting consequences from technology that are more transformative, more far-reaching, than the pedestrian imaginations of the mainstream. And yet, their ideas, their motivations, do not come from nowhere. They have deep roots, perhaps surprising roots, and following those intellectual trails can give us some important insights into the nature of transhumanism now. From antecedents in the views of the early 20th century British scientific left, and in the early Russian ideologues of space exploration, we’re led back, not to rationalism, but to a particular strand of religious apocalyptic thinking that’s been a persistent feature of Western thought since the middle ages.

Transhumanism is an ideology, a movement, or a belief system, which predicts and looks forward to a future in which an increasing integration of technology with human beings leads to a qualitative, and positive, change in human nature. It sees a trajectory from a current situation in which certain human disabilities and defects can be corrected, through an increasing tendency to use these technologies to enhance the capabilities of humans, to a world in which human and machine are integrated in a cyborg existence. Finally, we may leave all traces of our biological past behind, as humans “upload” their intelligence into powerful computers. These ideas are intimately connected with the idea of a “Singularity”, a moment at which accelerating technological change becomes so fast that we pass through an “event horizon” to a radically unknowable future. According to Ray Kurzweil, transhumanism’s most visible and well known spokesman, this event will take place in or around 2045.

The idea of transhumanism is associated with three predicted technological advances. The first is a vision of a radical nanotechnology as sketched by K. Eric Drexler, in which matter is effectively digitised, with “matter compilers” or “molecular assemblers” able to build any object with atomic fidelity. This will be the route to the end of scarcity, and complete control over the material world. The second is a conviction – most vocally expounded by Aubrey de Grey – that it will shortly be possible to radically extend human lifespans, in effect eliminating ageing and death. The third is the belief that the exponential growth in computer power implied by Moore’s law, to be continued and accelerated through the arrival of advanced nanotechnology, makes the arrival of super-human level artificial intelligence both inevitable and imminent.

I am sceptical about all three claims on technical grounds (for a summary of my views on Drexler’s vision of nanotechnology, see here, for example). But here I want to focus, not on technology, but on cultural history. What is the origin of these ideas, and how do they tap into deeper cultural currents?

We can summarise the position of singularitarians like Kurzweil like this: we’re approaching a world where everything is abundant, where we all live for ever, and where a super-intelligent, super-benevolent entity looks after us all. What’s more, this is all going to happen in our lifetimes. We’ve heard this story before, of course. The connection between singularitarian ideas and religious eschatology is brilliantly captured in the phrase attributed to SF writer Ken MacLeod – the singularity is the “Rapture of the Nerds”.

The reason this jibe is so devastatingly effective is that it contains a deep truth. Kurzweil himself recognises the religious overtones of his ideas. In his book “The Singularity is Near” he writes “Evolution moves towards greater complexity, greater elegance, greater knowledge, greater beauty, greater creativity, and greater knowledge of subtler attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without any limitation…”, concluding, tellingly, “…we can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking.”

This line of thought has a long and fascinating pedigree. One can identify at least two distinct routes by which this kind of eschatological thinking developed to contribute to the modern transhumanist movement. For the first, we can look to the origin of the coinage “transhumanism” itself, by the British biologist Julian Huxley (not at all coincidentally, the brother of the author of the dystopian novel “Brave New World”, Aldous Huxley). It was among the British scientific left between the wars that many of the themes of transhumanism were first developed. In a remarkable 1929 essay, “The World, The Flesh and the Devil”, the Marxist scientist Desmond Bernal gives a slogan for transhumanism: “Men will not be content to manufacture life: they will want to improve on it.” Bernal imagines a process of continuous human enhancement, until we arrive at his version of the Singularity: “Finally, consciousness itself may end or vanish in a humanity that has become completely etherealized, losing the close-knit organism, becoming masses of atoms in space communicating by radiation, and ultimately perhaps resolving itself entirely into light. That may be an end or a beginning, but from here it is out of sight.”

The title of Bernal’s essay hints at the influence of his Catholic upbringing – what was the influence of the Marxism? The aspect of Marxism as a project to fundamentally change human nature by materialist methods is made very clear in a Leon Trotsky pamphlet from 1923, describing life after the revolution: “Even purely physiologic life will become subject to collective experiments. The human species, the coagulated Homo sapiens, will once more enter into a state of radical transformation, and, in his own hands, will become an object of the most complicated methods of artificial selection and psycho-physical training.”

The second route to transhumanism also has a Russian dimension. It comes through the pioneer of rocketry and influential ideologue of space travel, Konstantin Tsiolkovsky. Tsiolkovsky was a key proponent of the philosophy of Cosmism, and was profoundly influenced by Cosmism’s founder, the 19th century philosopher and mystic Nikolai Fyodorov. Fyodorov’s system of thought blended religion and materialism to create a vision of transcendence not in a spiritual heaven, but in our own material universe. “God, according to the Copernican system, is the Father, not only doing everything for people, but also through people, demanding, as the God of the fathers, from everyone alive an uniting for the resuscitation of the dead and for the settling by the resurrected generations of worlds for the governing of these lastly”. It would be through science, and the complete mastery over the material world that this would give humans, that the apocalypse would happen, on earth: “We propose the possibility and the necessity to attain through ultimately all people the learning of and the directing of all the molecules and atoms of the external world, so as to gather the dispersed, to reunite the dissociated, i.e. to reconstitute the bodies of the fathers such as they had been before their end”.

Both routes converge on the idea of a millennium – a period, believed to be imminent, when mankind would enjoy a sin-free existence of abundance, not on any spiritual plane, but in this world. The origins of these beliefs can be found in readings of the biblical books of Daniel and the Revelation of St John, but these interpretations are not strictly orthodox – to church fathers such as Augustine, events such as the millennium and the second coming were spiritual events in the lives of individual believers. But millennial thinking was widespread in Europe from the middle ages onwards, in a myriad of fissiparous sects led by prophets and revolutionaries of all kinds. If there was a single inspiration for these movements, it was probably the 12th century abbot Joachim of Fiore, whose prophetic system was described by the historian Norman Cohn as “the most influential one known to Europe until the appearance of Marxism”.

One enormously important legacy of Joachim’s prophetic writings was a theory of history as unfolding in a predetermined way through three great ages. The first, the age of the law, was ended by the coming of Jesus, who initiated a second age, the age of the gospel. But a third age was imminent, the age of the spirit, a thousand year reign of the saints. In Cohn’s view, it is Joachim’s three age theory of history that has led, via Hegel and Marx, to all theories of historical inevitability; bringing the story up to date, we can include in these the transhumanist convictions about the inevitable progress of technology that have such clear precursors in the views of the British scientific Marxists. In the title of one of Kurzweil’s earlier books, “The age of spiritual machines”, one can hear the echoes of Joachite prophecies down the centuries.

Do these colourful antecedents to transhumanism matter? A thoughtful transhumanist might well ask, what is the problem if an idea has origins in religious thought? We can enjoy for a moment the irony that many transhumanists think of themselves as ultra-rational, skeptical atheists. But looking at the history of thought in general, and of science in particular, we see that many very good ideas have come out of religious thinking (and, for that matter, not everything that came out of Marxism was bad, either). The problem is that mixed up with those good ideas were some very bad and pernicious ones, and people who are ignorant of the history of ideas are ill-equipped to distinguish good from bad. One particular vice of some religious patterns of thought that has slipped into transhumanism, for example, is wishful thinking.

A transhumanist might well also point out that just because the antecedent to an idea was misguided in the past, that doesn’t mean that as it develops it will always be wrong. After all, people anticipated being able to fly for a long time, and they looked silly to some right up to the moment when flight became possible. That’s a good argument, and the proper sceptical response to it is to say “show me”. If you think that a technology for resurrecting dead people is within sight, we need to see the evidence. But we need to judge actually existing technologies rather than dubious extrapolations, particularly those based on readings of historical trends.

This leads me to what I think is the most pernicious consequence of the apocalyptic and millennial origins of transhumanism, which is its association with technological determinism. The idea that history is destiny has proved to be an extremely bad one, and I don’t think the idea that technology is destiny will necessarily work out that well either. I do believe in progress, in the sense that I think it’s clear that the material conditions are much better now for a majority of people than they were two hundred years ago. But I don’t think the continuation of this trend is inevitable. I don’t think the progress we’ve achieved is irreversible, either, given the problems, like climate change and resource shortages, that we have been storing up for ourselves in the future. I think people who believe that further technological progress is inevitable actually make it less likely – why do the hard work to make the world a better place, if you think that these bigger impersonal forces make your efforts futile?

On sources. The classic treatment of millennial and apocalyptic thinking in the middle ages is Norman Cohn’s “The Pursuit of the Millennium”. The connections to modern political movements are brought up to date by John Gray in “Black Mass: apocalyptic religion and the death of Utopia”, which argues that the science based Utopian movements of the twentieth century should be viewed as perverted versions of religious visions of the apocalypse. The Trotsky quotation is from this book. Patrick McCray’s “The Visioneers” (reviewed by me here) is a sympathetic account of the connections between the 1960’s and 70’s space colonies movement, and their inspirations from the thought of Tsiolkovsky, and K. Eric Drexler. The Fyodorov quotation is from his “The Philosophy of the Common Task”, as quoted by N. Berdyaev. The correspondence between the three “superlative technologies” of transhumanism and traditional religious superlatives was made by Dale Carrico, whose cogent critique of transhumanism I will return to in a later post.

8 thoughts on “Transhumanism has never been modern”

I agree that many people who self-identify as transhumanists also come across as technological determinists. For example, the banner of the “Scientific Transhumanism” group on Facebook brashly proclaims “The victory of transhumanism is inevitable”. But although this flavour of transhumanism is a vocal one, it’s far from being the only one, nor indeed the dominant one.

The nearest thing to canonical writing for transhumanism, the FAQ on the Humanity+ org website, avoids that emphasis. Instead, the FAQ has a lot to say about existential risks and uncertain futures. There is nothing guaranteed about the future progress of technology.

To my mind, the best writers on transhumanism – e.g. Max More, James Hughes, Nick Bostrom, Anders Sandberg, David Pearce, among others – emphasise possibility far more than inevitability. Aubrey de Grey is another who emphasises that the outcome of the programme of rejuvenation biotechnology is far from being pre-determined.

The history of precursors of transhumanist thought is certainly interesting, but it does not constrain the present and future expression of this idea. To think otherwise is to fall victim to another version of inevitabilism 🙂

David, I anticipated a comment of the kind you made, because it’s been a common theme of previous online exchanges about transhumanism I’ve been involved in: why concentrate on Kurzweil’s writings (say) when there are so many more subtle transhumanist thinkers? There’s some justice in it, too. But I’m primarily interested in the effects of transhumanism as they spill over into wider culture and thinking about science and technology, and in that context any subtlety is easily lost. The dominant flavour of transhumanism may not in practice be the one with the soundest intellectual pedigree, but the one which has the widest resonance with deeper cultural currents (not to mention the fact that Kurzweil gets a great deal more media coverage and sells a lot more books than the people you mention).

I have long been persuaded that transhumanism is essentially a religious movement. Mine is a minority position among transhumanists, but not as much as one may think, and several “spiritual transhumanist” groups embrace it explicitly.

I agree with the key role of both Russian Cosmists (e.g. Tsiolkovsky, Fyodorov) and British Marxists (e.g. Bernal, Haldane) as precursors of contemporary transhumanism, and I am intrigued by your suggestion that Gioacchino da Fiore may be considered as a precursor of both.

But technological determinism is not as common among transhumanists as you think. We (that is, I and similarly inclined transhumanists) don’t make predictions, but plans. It isn’t a predetermined outcome fixed in stone, it’s a project.

Re “If you think that a technology for resurrecting dead people is within sight, we need to see the evidence.”

Of course – but I don’t think resurrection technology is “within sight.” At the same time, I consider the possible development (in the far future) of resurrection technology as not entirely incompatible with current scientific understanding, and quite compatible with some more speculative scientific models. That’s how I (and others) get through the night.

And what’s wrong with wishful thinking? Wishful thinking is what keeps you swimming in rough waters when the shore is so far that you can hardly see it. Of course besides wishing you must swim.

One reading of this article left me both conflicted and, well, somewhat inadequate. I can admire the prose-like language and earnest discussion, but in a sense you only confirmed my commoner’s view that the transhumanist is anchored by beliefs far more basic and secular. Perhaps the sub-species of biohackers produces a far more tactile viewpoint? Don’t get me wrong, I do enjoy the philosophical and spiritual descriptions as I do a sonata in form.
But, and it’s a biggie, I and many others in this movement have more concerns about how to engage technology than theology. I don’t know if a singularity is near or whether a neuromorphic qubit AI will snuff us out for getting in the way of paper clip production. But I can appreciate the principal choices regarding the natural order: join it, beat it or, failing those, control it.

To me a transhuman endeavours to bridge the Darwinian divide from human to post-human. Until there is a better definition or acknowledgement of who we are I’ll stick with the idea that the prime directive isn’t to become Godlike, but just to survive this chaotic decade or two without being superseded.

Giulio, Glenn, I sympathise with both of your closing sentiments – “besides wishing you must swim” and we must try and “survive this chaotic decade or two without being superseded”. The issues for me are, which direction should we swim in, and what potential sources of chaos should we worry most about now? Neuromorphic quantum AIs aren’t top of that list for me, though my reasons for that view would be an interesting discussion for later.

Honestly, I think you are reading too much into this. “The age of spiritual machines” is just a marketing gimmick to sell books. Considering that your selection criterion for the transhumanists you focus on is “sells a lot of books”, it is hardly surprising to encounter such.

As to the idea that religious futurism is the “antecedent” of secular futurism, that’s a bit of a weasel term isn’t it? Practically any secular philosophy is likely to have historical predecessors that are religious — including science itself. That’s just the result of the fact that religion once dominated the intellectual landscape.

Of course you are right to point out that there is a danger of wishful thinking overriding good judgment in this area. However, consider that futurism is a competitive memespace where many different incompatible ideas and visions are vying for attention. The less well-grounded ideas end up being shot down as evidence arises with which to do so, because other ideas (which may themselves be false) are looking for a chance to gain the spotlight.

However, the fact that many bad ideas get discredited or lose popularity does not invalidate the whole exercise — quite the contrary. The fact that bad futurisms can be discarded when they fail to mesh with new evidence is exactly *because* they are secular and science based, not mysticism based. The selection process is analogous to business failure in capitalism, alleles in natural selection, or hypotheses in science — a sign of progress, or at least of optimization for a particular niche.

To clarify, I’m not saying that popularity in the TH community is necessarily the best selection criterion possible for ideas – many transhumanists are probably best described as loons, and as I said, tricks like describing one’s ideas as mystical when they really aren’t might get some of them an advantage in sheer numbers/exposure. However, there are quite a few notions that were popular in the early days of the movement – Leary’s enthusiasm for hallucinogenic drugs, O’Neill’s space colonies, etc. – which have died down or been dramatically scaled back in emphasis since. I’d say the current movement has focused more directly on life extension, thanks in large part to Bostrom and de Grey, and of course to the natural fear of death that most people (especially young people) have.

As an example of how ideas can shift over time without becoming less optimistic, my own ideas for space settlement have morphed to be less about habitable space colonies (which I’d been an enthusiast of since childhood) and more about expanding to fill the physical limits of the nearby environment using self-replicating, fully automated technology. I realized that a Dyson Sphere (or cloud/swarm, if you will) does not need to take the form that science fiction writers usually describe. It could be put up at Mercury orbit and get the same amount of power as it would at earth orbit, with much lower material requirements. I also realized that a probe/factory/infrastructure can be self replicating at any given scale that happens to be feasible to work with. Nanorobotics is not actually necessary to have a self replicating source of goods, and indeed we do have such a non-nano source in the manufacturing world. It is limited primarily by its dependence on fossil fuels and the consequent pollution of the planet’s atmosphere, as well as access to raw materials and space. Remove these limits by manufacturing in space, and something on the order of a Dyson Sphere is actually a lot easier than it looks.
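The Mercury-orbit claim above is a straightforward consequence of the inverse-square law, and can be checked with a back-of-envelope calculation. Here is a minimal sketch (the variable names are mine, and the solar luminosity and orbital radius are standard textbook values, not figures from the comment): a closed shell at any radius intercepts the Sun’s full output, but the collector area needed per watt shrinks with the square of the orbital radius.

```python
# Back-of-envelope check: a Dyson swarm at Mercury's orbit captures the
# same total power as one at Earth's orbit, with far less collector area.
import math

L_SUN = 3.828e26          # solar luminosity, watts (standard value)
AU = 1.496e11             # astronomical unit, metres
R_MERCURY = 0.387 * AU    # Mercury's mean orbital radius

def flux(r):
    """Solar flux (W/m^2) at distance r from the Sun."""
    return L_SUN / (4 * math.pi * r**2)

# A closed shell at any radius intercepts all of L_SUN, so the total
# available power does not depend on orbital radius. What changes is
# the collector area needed per watt captured, which is 1/flux.
area_per_watt_earth = 1 / flux(AU)
area_per_watt_mercury = 1 / flux(R_MERCURY)
ratio = area_per_watt_mercury / area_per_watt_earth

print(f"Flux at 1 AU: {flux(AU):.0f} W/m^2")                  # ~1361
print(f"Collector area ratio (Mercury/Earth): {ratio:.3f}")   # ~0.150
```

So for the same captured power, a swarm at Mercury’s orbit needs only about 15% of the collector area (and hence, roughly, the material) of one at Earth’s orbit, which is the “much lower material requirements” point in the comment.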

One implication of this is that a very large set of highly automated environments for creating perfect laboratory conditions can be made, relatively cheaply, without any dependence on having first developed molecular assemblers. That prospect in turn increases the chance that we can indeed create molecular assemblers in the long run, even assuming it is very hard to do by current standards. Furthermore, it also implies that semiconductor fabrication could be done on a very massive scale, and that large amounts of solar power would be available to be converted to computer processing power. So there is hope for brute-forcing computational problems that would be intractable to an earthbound industry (even if this hope has strict limits).

Luke, I’m not sure I’m as relaxed as you that bad ideas will naturally be discarded as a result of healthy competition in the “market place of ideas”. I think the lesson of history is that bad ideas can do a lot of damage before they are discarded, if they ever are, and that the persistence of bad ideas is often related to the depth of their cultural roots.