January 18, 2009

I'm sure my readers are familiar with Fermi's paradox. Some of you may even feel it's debated to death lately, but in this great memorial year (Darwin's 200th birthday, among other things) we'll be hearing a lot more about the origins of life and the trajectory of evolution. Fermi's paradox connects with all that, and I'll get to the connection in Part 2.

Here's a quick refresher. Enrico Fermi observed that there seems to be a contradiction between the fact that we have not encountered alien civilisations and facts about the scale of the universe (and, indeed, our own galaxy). The vastness of space, the enormous number of stars and planets, and the age of the stars all add up to a presumption that there should be plenty of life Out There, some of it much older than life on Earth. If there are intelligent beings in space with a head start of millions of years over us, why don't they have technological civilisations far more advanced than our own? But if they do, why have we never encountered such things as alien space craft, probes, or radio signals?

Colonising the galaxy

Consider that the diameter of the galaxy is about 100,000 light years. Imagine for the sake of argument that there's a technological civilisation somewhere near the galactic centre. Then imagine that it has the capacity to send out space ships or self-replicating probes or similar devices at even 1 per cent of the speed of light. It could get a ship or a probe out to the galactic rim in something like five million years.
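For anyone who wants to check the arithmetic, here is a trivial sketch in Python, using the figures assumed above (a centre-to-rim distance of roughly 50,000 light years and a probe speed of 1 per cent of the speed of light):

```python
# Back-of-envelope travel time for a probe crossing from the galactic
# centre to the rim. Both figures are the rough assumptions from the
# text, not precise astronomical values.
galactic_radius_ly = 50_000   # centre to rim, in light years
probe_speed_c = 0.01          # probe speed as a fraction of c

# Light travels one light year per year, so time = distance / speed.
travel_time_years = galactic_radius_ly / probe_speed_c
print(f"{travel_time_years:,.0f} years")  # 5,000,000 years
```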

If the alien civilisation sends out a few ships every thousand years, they will soon mount up in numbers. Over a few million years, it could send out many thousands of ships. If the colonies founded by those ships themselves got in on the act and sent out ships of their own, and the colonies they founded sent out ships, we get ourselves an exponential increase.
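As a toy illustration of that exponential increase – the branching factor and generation time below are invented for the example, not claims about real probe capabilities:

```python
# Toy model of exponential colonisation: suppose each frontier colony
# founds two new colonies per 1,000-year "generation" (an illustrative
# rate chosen only to show how quickly the numbers mount up).
def colonies_after(generations: int, offspring_per_colony: int = 2) -> int:
    total = 1      # the home civilisation
    frontier = 1   # colonies founded in the latest generation
    for _ in range(generations):
        new = frontier * offspring_per_colony
        total += new
        frontier = new
    return total

# After 20 generations (20,000 years at the assumed rate), the count
# is already over two million.
print(colonies_after(20))
```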

It looks as if a sufficiently advanced and determined civilisation could colonise the galaxy, to a greater or lesser level of density, in "only" a few million years (a tiny amount of time in geological or astrophysical terms). Perhaps not all advanced technological civilisations have that ambition, but it would only take one that has the ambition plus a few million years' start on us, and the galaxy should be widely colonised by now – at least to some density level that we’d notice. Where are the space craft, the probes, the signals, maybe even the astrophysical engineering projects?

There seems to be good evidence that the galaxy doesn't contain even one civilisation that is old enough, advanced enough, and determined enough. So, why?

You might think that if the evolution of technological civilisations were a common event in the universe, there'd be at least one civilisation like this somewhere in the galaxy, with its billions of stars. Even if it started out on the distant rim, far away from us on the other side, that's just going to make it take a few million more years to reach us. So allow ten million years of head start – that's still nothing in the kind of timeframe we're talking about. If technological civilisations are commonplace, there should be some that are those millions of years ahead of us (and some will come along behind us, trailing by a few million years).

So, where are they?

Might it be that creating space craft that can travel reliably at even 1 per cent of the speed of light is harder than we assume? Or maybe advanced technological civilisations tend to destroy themselves? Or do they tend to stop expanding their populations, as human beings are doing? We're really guessing.

The most pessimistic solution is that they tend to destroy themselves. From the point of view of our own species, that solution would suggest that our self-destruction lies ahead. If we discover life elsewhere, then, it's bad news: the more common life is, the more common technological civilisations should be, and hence the more likely it is that the reason we don't see them is that they destroy themselves. QED.

But I don't think that's the best way to look at it. There are other possibilities. Perhaps technological civilisations tend to reach a technological singularity point, at which stage they are transformed so comprehensively and deeply that we wouldn't even recognise them. They might miniaturise themselves in some way that makes expansion into space pointless, or they might switch over to some kind of substrate that we would never recognise as a form of life (partly, no doubt, for their own convenience, but perhaps partly to avoid interfering with vulnerable civilisations at our level).

Another possibility – one that might bother my transhumanist friends almost as much as the self-destruction account – is that the rate of advance of technology does not accelerate to a singularity. That is, the mathematical relationship between time and technological capacity may not be an exponential function. Perhaps it will turn out that we are now somewhere on the relatively steep part of a sigmoid curve. In that case, perhaps advanced technological civilisations never attain the level of technological capacity that enables them to go out and colonise galaxies. Maybe there are hard limits to what is possible, or perhaps there are universal limits to desire. If this is the correct picture, transhumanists should be disappointed – what lies ahead for the human species may not be anywhere near as radical as they hope.
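To see why this possibility is hard to rule out from where we sit, compare a logistic (sigmoid) curve with a pure exponential: well below the sigmoid's midpoint the two are nearly indistinguishable, and they only diverge once the ceiling starts to bite. A small sketch, with purely illustrative parameters:

```python
import math

# A logistic (sigmoid) curve looks exponential early on, then flattens
# towards a ceiling. All parameters here are purely illustrative.
def logistic(t: float, ceiling: float = 1.0, rate: float = 1.0,
             midpoint: float = 0.0) -> float:
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def exponential(t: float, rate: float = 1.0) -> float:
    return math.exp(rate * t)

# Far below the midpoint the two curves almost coincide; past it,
# the exponential keeps climbing while the logistic levels off.
for t in (-6, -3, 0, 3, 6):
    print(t, round(logistic(t), 4), round(exponential(t), 4))
```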

The sigmoid curve interpretation has a kind of intuitive rightness about it (which doesn't mean it's correct). First, when science fiction writers describe the future they tend to imagine reaching some higher technological level and things then going on without huge change for millions of years. But of course the content of science fiction might just be evidence of limits to our current imaginative capacities.

We might also be impressed by the now-embarrassing question, "Dude, where's my jet car?" It sometimes seems that, even as the power of computer hardware continues to follow Moore's Law, progress in what we can actually do with it is slowing down. "Where's my robot maid?" If so, human technological potential may be limited, and we need to imagine the future of the world with bounded horizons. Not that that need lead to crippling pessimism – it would not demonstrate our inability to produce great advances in, say, health and life span. What is and is not possible may be different from what we intuit in advance.

I think, though, that there's another way to look at this. I'll be back in a few hours to go deeper into the Drake equation.
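In the meantime, for reference, the standard form of the Drake equation estimates the number N of detectable civilisations in the galaxy as N = R* · fp · ne · fl · fi · fc · L. Here is a minimal sketch; every parameter value below is an illustrative guess, since several of the factors are almost entirely unknown:

```python
# The Drake equation: N = R* · fp · ne · fl · fi · fc · L.
# Every value below is an illustrative guess -- the whole point of the
# equation is that several factors are deeply uncertain.
def drake(
    star_formation_rate=1.5,   # R*: new stars formed per year
    f_planets=0.9,             # fp: fraction of stars with planets
    n_habitable=1.0,           # ne: habitable planets per such star
    f_life=0.1,                # fl: fraction on which life arises
    f_intelligent=0.01,        # fi: fraction developing intelligence
    f_communicating=0.1,       # fc: fraction emitting detectable signals
    lifetime_years=10_000,     # L:  years a civilisation keeps signalling
):
    return (star_formation_rate * f_planets * n_habitable
            * f_life * f_intelligent * f_communicating * lifetime_years)

print(round(drake(), 2))  # about 1.35 with these guesses
```

Plug in more pessimistic guesses for fl or L and N drops below one – which is one way of dissolving the paradox.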

I would say there's just too much 'slack' for the sigmoid hypothesis to be true, too many massive gains in capability that could be realized just by applying existing or clearly feasible technologies.

Existing human genotypes and education produce the occasional Einstein. Existing technology (psychometric testing, IVF using donor sperm and eggs, PGD, DNA sequencing, schooling) could be used to create a world with billions of people of similar potential.

Rich societies could shift to wind, fission, solar, battery, and fuel cell power. It would be costly, but affordable (easily affordable for a population with enhanced human capital created using existing technology).

Plateau (as opposed to extinction or collapse) won't happen if we don't want it to.

Not me, I hope, because elsewhere I've written a lot of material that criticises exactly the fallacy you mention. Indeed, if I get time I plan to blog on that very issue again this week. We'll see.

I'm not sure what I may have said that gave that impression – if you did mean me – so I'm not sure how to correct it. But I think the post still makes sense if you remove any references to what "transhumanists" might want, hope for, be dismayed by, etc., and if you just take the (set of) theories relating to a singularity as a set of theories that happens to be available.

I think that transhumanists who are not singularitarians would still be dismayed at the idea that technological progress looks anything like a sigmoid curve. At least, a helluva lot of them would be.

George

I like reading these kinds of articles. Very interesting! I like deep thinking. In these years, I have realized that nothing is impossible; it depends on how you think. There should be a proof for every saying somewhere deep in the universe. All our knowledge shows only the major common consensus. That is very reasonable, as we have common senses and we are living on the same planet. But the major factor is that we are the same being. It is a pity that we seldom believe in other beings. We have simply ignored and denied the non-anthropocentric civilizations.

I am so naturalistic. I guess all the anthropocentric civilizations occurred naturally, even the inherent extraordinary abilities of human beings. What do you think?

Hi, Russell. This statement: "Another possibility – one that might bother my transhumanist friends almost as much as the self-destruction account – is that the rate of advance of technology does not accelerate to a singularity."

It reads as if H+ individuals need the concept of a Kurzweilian Singularity (KS) in order for their philosophy to be realized. I am beginning to think that the recursive nature of S-curves is what will lead us in the direction of the understanding we will need to move to the next, albeit stronger and faster, level of tool use. I'm actually starting to think of a KS as a Reductionist heaven, trickle-down, old-school, 2D, and quaint in its philosophy... And I can consider myself FOR using our technology to improve ourselves. Our problem is not one of whether we can do it; it is one of getting people to "think for themselves and question authority". But I'm nothing but a techno-hippie, so who would ever want to hear my long-haired rantings anyhow? :-)

I recently came across your blog and have been reading along. I thought I would leave my first comment. I don't know what to say except that I have enjoyed reading. Nice blog. I will keep visiting this blog very often.
