Category Archives: Transhumanism

I have previously replied to the overpopulation objection to radical life extension, the most common objection to those of us who want to defeat death. While my defense of indefinite lifespans centers primarily on moral concerns, the computer scientist Alexandre Maurer has recently offered powerful mathematical reasons to doubt the whole premise of the overpopulation objection.

His main conclusion is that fertility rates, not longevity, are the true culprit in population increases. A spectacular extension of life will have a negligible effect on population growth compared with a slightly higher fertility rate. To explain, he offers a simple example.

Assume an initial population of 1,000 people. The fertility rate is 2, the life expectancy is 80, and women give birth at 20. Now, let us consider two variations:

– Case A: death disappears entirely (life expectancy becomes unlimited), while the fertility rate stays at 2.
– Case B: life expectancy stays at 80, but the fertility rate rises slightly above 2.

Which of these two cases will lead to the greater population increase? A quick calculation gives the following results:

– After 500 years, the population will be 26,000 in case A, and at least 780,000 in case B: 30 times more than in case A.
– After 1,000 years, the population will be 51,000 in case A, and at least 206,000,000 in case B: more than 4,000 times case A! The gap will be enormous.

The point is that the disappearance of death “only causes a linear population increase; while a fertility rate slightly greater than 2 causes an exponential population increase.” And this means that early death is an inefficient means of population control compared to lower birth rates.
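The linear-versus-exponential contrast can be checked with a toy cohort model. The sketch below is my own illustration, not Maurer's exact setup; in particular, Case B's fertility rate of 2.2 is an assumed stand-in for "slightly greater than 2." Case A, though, does reproduce the 26,000 figure quoted above.

```python
# Toy cohort model (illustrative assumptions, not Maurer's exact model):
# generations are 20 years apart; each woman (half of a cohort)
# has `fertility` children at age 20.

def population_after(years, fertility, lifespan_cohorts):
    """Population after `years`, starting from one cohort of 1000.

    lifespan_cohorts: how many 20-year periods each cohort survives
    (float('inf') models immortality).
    """
    births = [1000.0]                    # size of each 20-year birth cohort
    for _ in range(years // 20):
        births.append(births[-1] * fertility / 2)
    if lifespan_cohorts == float('inf'):
        return sum(births)               # nobody dies: all cohorts accumulate
    return sum(births[-int(lifespan_cohorts):])  # only recent cohorts survive

# Case A: death abolished, fertility stays at replacement (2).
# Births hold constant at 1000 per generation, so growth is linear:
# 26 cohorts of 1000 after 500 years = 26,000.
case_a = population_after(500, 2, float('inf'))

# Case B: normal 80-year lives (4 cohorts alive at once), fertility 2.2.
# Births multiply by 2.2/2 = 1.1 per generation, so growth is exponential.
case_b = population_after(500, 2.2, 4)
```

The same model gives 51,000 for Case A after 1,000 years, matching the figure above. And however small the excess over a fertility rate of 2, Case B's per-generation growth factor exceeds 1, so it eventually dwarfs any linear scenario.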

Another consideration is that:

There is an inverse correlation between fertility and longevity: population increases the most in the countries with the shortest life expectancy. The common cause is poverty: when infant mortality is high, there is an incentive to have many children to ensure that some of them eventually survive. In addition, when there is no retirement system, the only “retirement insurance” consists in having many children. Further, to this double incentive to have children, must be added the lack of access to contraception, and a lack of information about it.

The implication of all this is that “people concerned about overpopulation should focus on reducing inequalities and improving the standard of living of the poorest countries.”

In fact, in rich countries underpopulation is more of a problem than overpopulation, and rich countries would benefit enormously from increased healthy lifespans. Moreover, since rich countries will probably be the first to benefit from life-sustaining technologies, it “is very unlikely that increasing life expectancy will result in an overpopulation crisis; especially since such an increase will first happen in rich countries, where the fertility rate is low.”

Moreover, better material security generally leads people to have fewer children. Remember too that “even if we lived 1000 years, a fertility rate slightly lower than 2 (e.g., 1.9) is sufficient in the long-term to result in a decreasing population.”
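The quoted claim can also be checked with a small simulation (the parameters are my own illustrations, not Maurer's): at a fertility rate of 1.9, births shrink by a factor of 1.9/2 = 0.95 per 20-year generation, so even if everyone lived 1,000 years the population would eventually decline by about 5% per generation.

```python
# Sketch: population over time with 1000-year lives (50 generations of
# 20 years each) and a fertility rate of 1.9. Illustrative parameters.

def population_series(periods, fertility, lifespan_periods):
    births = [1000.0]          # one founding cohort of 1000
    series = []
    for _ in range(periods):
        births.append(births[-1] * fertility / 2)       # x0.95 per generation
        series.append(sum(births[-lifespan_periods:]))  # cohorts still alive
    return series

pops = population_series(200, 1.9, 50)
# The population rises while the long-lived founding cohorts accumulate,
# peaks once they begin dying off (about 1,000 years in), and then falls
# roughly 5% per generation thereafter.
```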

So in addition to all the moral arguments I have made in a previous post, I add Maurer’s insight: fertility rates are much more significant in population increase than death rates.

Rubin finds nearly everything about the futurism of Kurzweil and Moravec problematic. It involves metaphysical speculation about evolution, complexity, and the universe; technical speculation about what may be possible; and philosophical speculation about the nature of consciousness, personal identity, and the mind-body problem. Yet Rubin avoids attacking the futurists, whom he calls “extinctionists,” on the issue of what is possible, focusing instead on their claim that a future robotic-type state is necessary or desirable.

Rubin finds the argument that our extinction is an evolutionary necessity thin. Why should we expedite our own extinction? Why not destroy the machines instead? And the argument for the desirability of this vision raises another question: what is so desirable about a post-human life? The answer, for Kurzweil, Moravec, and the transhumanists, is the power over human limitations that would ensue. The rationale that underlies this desire is the belief that we are but an evolutionary accident to be improved upon, transformed, and remade.

But this leads to another question: will we preserve ourselves after uploading into our technology? Rubin objects that there is a disjunction between us and the robots we want to become. Robots will bear little resemblance to us, especially after we have shed the bodies so crucial to our identities, making the preservation of a self all the more tenuous. Given this discontinuity, how can we know that we would want to be in this new world, or whether it would be better, any more than one of our primate ancestors could have imagined what a good human life would be like? Those primates would be as uncomfortable in our world as we might be in the post-human world. We really have no reason to think we can understand what a post-human life would be like, but it is not out of the question that the situation will be nightmarish.

Yet Rubin acknowledges that technology will evolve, driven by military, medical, commercial, and intellectual incentives; hence it is unrealistic to try to halt technological development. The key to stopping, or at least slowing, the trend is to educate individuals about the unique characteristics of being human, which surpass machine life in so many ways. Love, courage, charity, and a host of other human virtues may themselves be inseparable from our finitude. Evolution may hasten our extinction, but even if it does, there is no need to pursue the process, because there is no reason to think the post-human world will be better than our present one. If we pursue such Promethean visions, we may end up worse off than before.


(This article was reprinted in the online magazine of the Institute for Ethics & Emerging Technologies, February 12, 2016.)

If death is inevitable, then all we can do is die and hope for the best. But perhaps we don’t have to die. Many respectable scientists now believe that humans can overcome death and achieve immortality through the use of future technologies. But how will we do this?

The first way we might achieve physical immortality is by conquering our biological limitations—we age, become diseased, and suffer trauma. Aging research, while woefully underfunded, has yielded positive results. Average life expectancies have tripled since ancient times, increased by more than fifty percent in the industrial world in the last hundred years, and most scientists think we will continue to extend our lifespans. We know that we can further increase lifespans by restricting calories, and we increasingly understand the role that telomeres play in the aging process. We also know that certain jellyfish and bacteria are essentially immortal, and the bristlecone pine may be as well. There is no thermodynamic necessity for senescence—aging is presumed to be a byproduct of evolution—although why mortality should be selected for remains a mystery. There are reputable scientists—most notably the Cambridge researcher Aubrey de Grey—who believe that, with sufficient investment, we could conquer aging altogether in the next few decades.

If we do unlock the secrets of aging, we will simultaneously defeat many other diseases as well, since so many of them are symptoms of aging; indeed, many researchers now consider aging itself a disease. There are a number of strategies that could render disease mostly inconsequential. Nanotechnology may give us nanobot cell-repair machines and robotic blood cells; biotechnology may supply replacement tissues and organs; genetics may offer genetic medicine; and full-fledged genetic engineering could make us impervious to disease.

Trauma is a more intransigent problem from the biological perspective, although it too could be defeated through some combination of cloning, regenerative medicine, and genetic engineering. We can even imagine that your body could be recreated from a bit of your DNA, with other technologies fast-forwarding your regenerated body to the age of your traumatic death, at which point a backup file with all your experiences and memories would be implanted in your brain. Even the dead may be resuscitated if they have undergone the process of cryonics—preservation at very low temperatures in a glass-like state. Ideally, the clinically dead would be brought back to life when future technology is sufficiently advanced. This may now be science fiction, but if nanotechnology fulfills its promise there is a reasonably good chance that cryonics will succeed.

In addition to biological strategies for eliminating death, there are a number of technological scenarios for immortality which utilize advanced brain scanning techniques, artificial intelligence, and robotics. The most prominent scenarios have been advanced by the renowned futurist Ray Kurzweil and the roboticist Hans Moravec. Both have argued that the exponential growth of computing power in combination with advances in other technologies will make it possible to upload the contents of one’s consciousness into a virtual reality. This could be accomplished by cybernetics, whereby hardware would be gradually installed in the brain until the entire brain was running on that hardware, or via scanning the brain and simulating or transferring its contents to a computer with sufficient artificial intelligence. Either way we would no longer be living in a physical world.

In fact, we may already be living in a computer simulation. The Oxford philosopher and futurist Nick Bostrom has argued that advanced civilizations may have created computer simulations containing individuals with artificial intelligence and, if they have, we might unknowingly be in such a simulation. Bostrom concludes that at least one of the following must be the case: civilizations never attain the technology to run such simulations; they attain the technology but decide not to use it; or we almost certainly live in a simulation.

If one doesn’t like the idea of being immortal in a virtual reality—or the idea that one may already be in one now—one could upload one’s brain to a genetically engineered body for the feel of flesh, or to a robotic body for the feel of silicon or whatever materials compose it. MIT’s Rodney Brooks envisions the merger of human flesh and machines, whereby humans slowly incorporate technology into their bodies, thus becoming more machine-like and indestructible. So a cyborg future may await us.

The rationale underlying most of these speculative scenarios has to do with adopting an evolutionary perspective. Once one embraces that perspective, it is not difficult to imagine that our descendants will resemble us about as much as we do the amino acids from which we sprang. Our knowledge is growing exponentially and, given eons of time for future innovation, it is easy to envisage that humans will defeat death and evolve in unimaginable ways. For the skeptics, remember that our evolution is no longer driven by the painstakingly slow process of Darwinian evolution—where bodies exchange information through genes—but by cultural evolution—where brains exchange information through memes. The most prominent feature of cultural evolution is the exponentially increasing pace of technological evolution—an evolution that may soon culminate in a technological singularity.

The technological singularity, an idea popularized by the mathematician and author Vernor Vinge, refers to the hypothetical future emergence of greater-than-human intelligence. Since the capabilities of such intelligences are difficult for our minds to comprehend, the singularity is seen as an event horizon beyond which the future becomes nearly impossible to understand or predict. Nevertheless, we may surmise that this intelligence explosion will lead to increasingly powerful minds for which the problem of death will be solvable. Science may well vanquish death—quite possibly in the lifetime of some of my readers.

But why conquer death? Why is death bad? It is bad because it ends something which at its best is beautiful; bad because it puts an end to all our projects; bad because all the knowledge and wisdom of a person is lost at death; bad because of the harm it does to the living; bad because it causes people to be unconcerned about the future beyond their short lifespan; bad because it renders fully meaningful lives impossible; and bad because we know that if we had the choice, and if our lives were going well, we would choose to live on. That death is generally bad—especially for the physically, morally, and intellectually vigorous—is nearly self-evident.

Yes, there are indeed fates worse than death, and in some circumstances death may be welcomed. Nevertheless, for most of us most of the time, death is one of the worst fates that can befall us. That is why we think that suicide and murder and starvation are tragic. That is why we cry at the funerals of those we love.

Our lives are not our own if they can be taken from us without our consent. We are not truly free unless death is optional.

All of this got me to thinking about the many online comments about my paper and the many emails I exchanged with other academics who responded to it. In the paper I concluded, roughly, that while cosmic evolution leaves me awestruck, we have good reasons to doubt that a more meaningful reality is unfolding. And this implies sobriety and skepticism regarding the claim that cosmic evolution provides meaning for our lives.

Generally my peers thought I had been too cautious in linking cosmic evolution and the meaning of life, as did this prominent European philosopher:

I agree that the best rational strategy is to oscillate between hope within a cosmic vision, tempered with skepticism. However, to maximize well-being, I’d rather argue that most people should believe in something like a grand cosmic vision (e.g., à la Teilhard de Chardin), and leave the critical skepticism to the more learned, curious, and academic scholars. I don’t think it does any good to people to preach the heat death of the universe.

The best email I received was from an English psychologist who said, “I might go so far as to say it was almost a religious experience reading your essay.” When I asked him to further explain, he replied,

The things I liked the most about your “cosmic vision” were that it removed both God and Man from centre stage while still providing the genuine possibility of personal meaning, and that it is genuinely cosmic in scope, looking forwards more than backwards. I found it to be more optimistic than skeptical. It allowed the possibility that progress is a real thing through biological evolution (and whatever comes next). I thought the ending was more about sobriety than skepticism.

All our small attempts to make our world and ourselves better might amount to naught and people are free to think that. But they might well amount to something more. We can never know for sure but ‘meaning’ or progress seems to provide a heuristic by which to steer our own baby steps on the long path into the far, far future. It doesn’t bother me that I won’t be there. But it does inspire me to think that it is important to that future that enough of us are striving towards it. It’s a bit like Olaf Stapledon’s Star Maker without recourse to a Star Maker.

Then, after seeing Rifkin’s video, I asked myself again: Can I find meaning as a part of cosmic evolution? Is there something about being a part of this larger thing that gives my life meaning? Can I take comfort knowing that the future might be better than the past?

There is a lot to say about all this but let me begin here. While the story of cosmic evolution reveals the emergence of consciousness, beauty, and meaning, as well as the possibility of their exponential increase, it doesn’t imply that a more meaningful reality will necessarily unfold or that a state of perfect meaning will inevitably ensue. For example, we don’t know if our science and technology will bring about a utopia, a dystopia, or hasten our destruction. We don’t know what the future holds. This is reason enough to be skeptical about cosmic evolution providing a meaning to life.

Still we can hope that our lives are significant, that our descendants will live more meaningful lives than we do, that our science and technology will save us, and that life will culminate in, or at least approach, complete meaning. These hopes help us to brave the struggle of life, keeping alive the possibility that we will create a better and more meaningful reality.

Transhumanism and the Meaning of Life

The possibility of infinitely long, good, and meaningful lives brings the purpose of our lives into focus. The purpose of life is to diminish and, if possible, abolish all constraints on our being—intellectual, psychological, physical, and moral—and remake the external world in ways conducive to the emergence of meaning. This implies embracing our role as protagonists of the cosmic evolutionary epic, working to increase the quantity and quality of knowledge, love, joy, pleasure, beauty, goodness and meaning in the world, while diminishing their opposites. This is the purpose of our lives.

In a concrete way this implies being better thinkers, friends, lovers, artists, and parents. It means caring for the planet that sustains us and acting in ways that promote the flourishing of all being. Naturally there are disagreements about what this entails and how we move from theory to practice, but the way forward should become increasingly clear as we achieve higher states of being and consciousness, and as we become more intellectually and morally virtuous.

Nonetheless, knowing the purpose of our lives does not ensure that they are fully meaningful, for we may collectively fail in our mission to give life more meaning; we may not achieve our purpose. And if we don’t fulfill our purpose, then life wasn’t fully meaningful. Thus the tentative answer to our question—is life ultimately meaningful—is that we know how life could be ultimately meaningful, but we do not know if it is or will be ultimately meaningful. Life can be judged fully meaningful from an eternal perspective only if we fulfill our purpose by making it better and more meaningful.

Meaning then, like the consciousness and freedom from which it derives, is an emergent property of cosmic evolution—and we find our purpose by playing our small part in aiding its emergence. If we are successful, our efforts will culminate in the overcoming of human limitations, and our (post-human) descendants will live fully meaningful lives. If we do achieve our purpose in the far distant future, if a fully meaningful reality comes to fruition, and if somehow we are a part of that meaningful reality, then we could say that our life and all life was, and is, deeply meaningful. In the interim we can find inspiration in the hope that we can succeed.


(This article was reprinted in Ethics & Emerging Technologies, July 26, 2014. It was also reprinted in Humanity+ Magazine, August 11, 2014.)

A friend emailed me to say that he believed that transhumanists should strive to be free, if free will doesn’t currently exist, or strive to be freer, if humans currently possess some small modicum of free will. He also suggested that becoming transhuman would expedite either process. In short he was claiming that transhumanists should desire more freedom.

I’ll begin with a disclaimer. I have not done much with the free will problem beyond teaching the issue in introductory philosophy courses over the years. I have also penned two brief summaries of the issue, “The Case Against Free Will,” which summarizes the modern scientific objections to the existence of free will, and “Freedom and Determinism,” which summarizes some positions and counter-positions on the topic. But that is all, so my knowledge of the issue is rudimentary. I will note that, by a wide margin, most contemporary philosophers are compatibilists; they believe that free will and determinism are compatible. Here are the stats: compatibilism 59.1%; libertarianism 13.7%; no free will 12.2%; other 14.9%.

I am sympathetic with my friend’s thinking that transhumanists should want free will. Transhumanism is about overcoming all human limitations, including psychological ones, and I think psychological determinism is an obvious limitation. We are limited if we don’t have free will. (Yes, all these terms need to be carefully defined.) That makes sense to me, at least at first glance. If I can’t freely choose to desire psychological health or inner peace, or if I can’t desire to be transhuman, or explore new ideas or new types of consciousness, then I am limited. And transhumanists don’t believe in limitations.

If the majority of philosophers are correct that we now possess a bit of free will because we have highly complex brains—something that rocks, trees, and worms don’t have—then why can’t more and better consciousness and intelligence make us more free? Perhaps consciousness and freedom are emergent properties of evolution. And if free will could emerge through natural selection, then why can’t we design ourselves, robots, or superintelligences to be more free?

I think the problem comes in explaining how to do this. Designing yourself or a robot to be free seems counter-intuitive. Maybe you have to increase the intelligence of a system, and freedom will naturally emerge. But it is hard to see how implanting, say, a moral chip in your brain would make you more free. Still, as we become transhuman, freedom and consciousness will hopefully increase.

Perhaps there is even a connection between intelligence and freedom. Maybe more intelligence makes you freer because you have more choices—you know more and can do more. For example, if I am ultimately omniscient I can think anything, or if I’m omnipotent I can do anything. So as we evolve progressively toward transhuman and post-human states, our ability to make choices unconstrained by genes and environment will naturally increase. Why wouldn’t it, if we could bypass genes or choose environments? And yes I think to do all this would be a good thing. (An aside. We also aren’t truly free if we have to die, so defeating death would go a long way to making us freer.)

All of this raises questions that E. O. Wilson raised almost 40 years ago in the final chapter of On Human Nature. Where do we want to go as a species? What goals are desirable? As I’ve stated multiple times in this blog, we should move toward a reality with more knowledge, freedom, beauty, truth, goodness, and meaning; and away from a reality with more of their opposites. We should overcome all pain, suffering and death and create a heaven on earth. We have a long way to go, but that is the only worthwhile goal for beings worthy of existence.