News of the death of Neil Armstrong on 25 August 2012 brought back so many vivid memories of the Moon landing – as I’m sure it did to hundreds of millions of people around the world. I remember being woken by my father to watch the grainy television images of Armstrong landing Eagle on the lunar surface, and his subsequent historic step. Although I was young, I knew even then that I was watching something momentous. What I didn’t appreciate until later was the incredible coolness under pressure that Armstrong displayed in landing Eagle successfully; what I didn’t appreciate until much, much later was Armstrong’s integrity in refusing to cash in on his status as the world’s most famous person. (It’s said that Armstrong was aloof, reclusive, and unwilling to engage with the public. That’s not the case. My daughter’s primary school, for instance, has a plaque signed by Armstrong. He just decided to use his time in the way that he thought best, that’s all.) Neil Armstrong was a true hero.

Neil Armstrong (Credit: NASA)

A total of 12 astronauts, including Armstrong, have set foot on the Moon. Following Armstrong’s death, only 8 of those 12 survive. The youngest of the survivors is Charles Duke, and he is 76. Without wishing to be ghoulish, it won’t be many more years before there is no living human being who has set foot on the Moon. The sobering fact is that, since Eugene Cernan shook the Moondust from his boots in December 1972, no human has returned to our nearest neighbour. It is hard to imagine anyone returning to the Moon anytime soon.

Does that matter?

I believe it does. To strive, to seek, to find… I believe those are important qualities for humankind to display. Science can certainly be done by unmanned probes and robots (as we are seeing right now with the wonderful Mars Curiosity rover), but if mankind itself chooses to stay at home on Earth and leave the striving to probes then I believe we are heading for trouble. The rich, developed countries possess an economic system that appears to be little more than a glorified Ponzi scheme; we seem hell-bent on burning up the planet; we aren’t developing new sources of energy at the rate our ever-increasing population requires. Earth is our spaceship and we are without a lifeboat – and, unfortunately, it seems our species doesn’t want to build a lifeboat.

In Where Is Everybody? I argued that resolutions of the Fermi paradox that depend on “sociological” explanations – such as supposing that all extraterrestrial civilisations perish in some global catastrophe (nuclear warfare, biological warfare, civilisation-induced climate change … take your pick) – are unconvincing. That’s still the case. Nevertheless, maybe it’s because I’m in a gloomy mood following news of Armstrong’s death, but I do increasingly wonder whether one of those global catastrophes will end humanity’s chance of signalling its presence to the rest of the Universe.

The first modern SETI experiment – Frank Drake’s observations at Green Bank in 1960 – focused on two specific stars: Epsilon Eridani and Tau Ceti. Five decades later, astronomers have targeted another star for SETI observations: Gliese 581. This red dwarf star, which is about 20 light years distant, is not an unreasonable target: it has planets, two of which may be super-Earths on the edge of the star’s habitable zone. What makes this particular SETI study interesting, however, is that it uses a technique that hasn’t been tried before in a SETI context: very long baseline interferometry (VLBI).

Hayden Rampadarath and three colleagues from Curtin University in Australia used the Australian Long Baseline Array (ALBA) in their work. This is a collection of three radio telescopes, separated by several hundred kilometres, which (once the signals from the different telescopes are combined) has an angular resolution similar to that of the Hubble Space Telescope. The team used ALBA to observe Gliese 581 for a total of eight hours in June 2007, tuning in to frequencies near the waterhole. They discovered 222 candidate SETI signals, all of which were quickly excluded as probably originating from communications with geostationary satellites.
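As a rough sanity check on that resolution claim, the diffraction limit of an interferometer is roughly θ ≈ λ/B, where B is the baseline. The ~300 km baseline below is my assumption for illustration (the paper gives the actual configuration):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def angular_resolution_arcsec(freq_hz: float, baseline_m: float) -> float:
    """Diffraction-limited resolution, theta ~ lambda / baseline, in arcseconds."""
    wavelength_m = C / freq_hz
    return math.degrees(wavelength_m / baseline_m) * 3600.0

# A waterhole-band observation (~1.4 GHz) over an assumed 300 km baseline
print(angular_resolution_arcsec(1.4e9, 300e3))  # ≈ 0.15 arcsec
```

That is indeed comparable to Hubble’s roughly 0.1-arcsecond optical resolution, which is why the comparison in the text is a fair one.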

The interest in this paper is not that the study rules out Gliese 581 as a candidate for hosting extraterrestrial intelligence: it doesn’t. (While ALBA would pick up a transmission beamed directly at us from an Arecibo-type instrument, it obviously wouldn’t pick it up if the transmitter was pointing somewhere else or happened to be idle on the days when the team looked. And ALBA wouldn’t pick up ‘leakage’ radio transmissions of the type that we typically broadcast.) No, the interest in this paper is that it demonstrates how SETI scientists can use VLBI as part of a targeted search strategy. What’s really exciting is that soon we’ll have interferometers with much greater sensitivity than ALBA. The forthcoming Square Kilometre Array, in addition to being a revolutionary tool for astronomy, has the potential to enhance SETI enormously: imagine this great instrument listening to planets identified by Kepler…

Imagine that you’re a member of an advanced extraterrestrial civilisation and you want to broadcast your presence to the rest of the galaxy. Perhaps you want to transmit a detailed philosophical treatise to other intelligent life forms or maybe you just want to shout out “Hello Universe”. How would you go about doing it?

The honest answer is that we just don’t know. Our mechanisms for thinking have been influenced by ages of Earthbound biological and cultural evolution. Intelligent extraterrestrials (if such beings exist) will have a quite different evolutionary heritage and, presumably, they’ll think in quite different ways. It’s presumptuous to suppose we know how extraterrestrials would approach the problem of interstellar communication.

Nevertheless, we can make some attempt at answering the question. For example, humans and extraterrestrials (if they exist) must have some things in common: we live in the same universe and presumably are subject to the same laws of physics. Those factors in turn would surely influence any attempts at interstellar communication. For instance, we can plausibly argue that an extraterrestrial civilisation would employ radio waves to transmit its message (since radio waves are cheap to produce, travel at the fastest possible speed, and at certain frequencies they are less likely to be absorbed by interstellar material than many other electromagnetic wavelengths). In the vast spectrum of radio frequencies, the region between 1.42 GHz (the hydrogen, H, line) and 1.64 GHz (the hydroxyl, OH, line) looks like a particularly promising place at which to broadcast: the region is naturally quiet and the combination of H and OH makes H2O – and water, so far as we know, is necessary for life. (So if we assume that they know that we know that they know that water is important, then this waterhole region may be the place at which civilisations gather.) It would make sense to send a narrowband signal (since it’s easy to make a really bright narrowband signal; furthermore, nature tends to generate wideband emissions, so a narrowband signal stands out as being artificially generated). And so on and so on. Arguments such as these have motivated those interested in the search for extraterrestrial intelligence to employ radio telescopes in their quest.
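Those two waterhole frequencies correspond to the familiar 21 cm and 18 cm radio lines, as a quick λ = c/f check confirms:

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(freq_ghz: float) -> float:
    """Wavelength in centimetres for a frequency given in GHz."""
    return C / (freq_ghz * 1e9) * 100.0

print(wavelength_cm(1.42))  # ≈ 21.1 cm: the hydrogen (H) line
print(wavelength_cm(1.64))  # ≈ 18.3 cm: the hydroxyl (OH) line
```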

But let’s return briefly to our alien broadcaster. You’ve decided to broadcast a radio signal (for the reasons given above), but how to maximise your chance of some other civilisation receiving the signal? Well, the best way of doing that would be to broadcast isotropically, in all directions, continuously. Fine, but that’s going to be very expensive. To cut costs you might choose to transmit isotropically but only in bursts (perhaps one second in 100,000). Or you might transmit a highly directional beam towards one star before moving it on to another star, and not return to any particular star for quite some time.

Suppose an alien civilisation followed the logic outlined above. What would we see? Well, if a radio receiver happened to detect the signal then it would undoubtedly capture our attention. But the signal would soon be gone – either because the transmitter was between bursts or because the beam had moved on to some other star. We wouldn’t know for sure whether we’d heard from another civilisation.

Can you imagine how frustrating that would be?

Well, that’s the situation in which we find ourselves. On the night of 15 August 1977 the Ohio State University Radio Observatory – “Big Ear” – was pointing 20 degrees above the southern horizon. Just after 23:15 one of the telescope’s two feed horns began to register a signal. Over the next 30 seconds the signal reached a very strong peak and then, as the Earth rotated, the signal faded.

No one was present when the telescope registered the signal. The setup was such that a printer clattered out a line of characters, one every 12 seconds, with the characters reflecting intensity. It was only later that Jerry Ehman checked through the wads of computer printout. When he saw a pattern in the printout – 6, E, Q, U, J, 5 – he circled it and scribbled “Wow!” beside it. Ehman knew precisely what an interstellar radio transmission was supposed to look like – and it looked like this. It had the same signature as a celestial source passing through the telescope’s antenna beam, but the only natural radio sources in the beam were a thousand times fainter than the Wow. It had a narrow bandwidth. The frequency was close to the hydrogen line. Most intriguingly of all there were hints (if you really looked for them and then cast a favourable eye) that there was some sort of pattern involved with weaker signals. The Wow remains perhaps our best candidate for a signal from an extraterrestrial civilization. The trouble was, the Ohio State radio telescope looked for the signal again and in about a hundred days of additional observations it saw – nothing. Admittedly, that additional observing time added up to only four hours. The astronomers involved could justifiably have spent a longer time looking for the signal. (Longer observations would be particularly appropriate if extraterrestrial civilisations were employing a “lighthouse” approach to broadcasting: if we were in the line of fire then the rotating beam would periodically sweep across our view. Long listening times could pick up this periodicity.) On the other hand, the telescope was there to do astronomy not to search for extraterrestrial intelligence. The astronomers involved wrote up their observations and moved on to other things.
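The printout code itself is easy to reproduce: the digits 0–9 stand for intensities 0 to 9, and the letters A–Z continue the scale from 10 upwards, with each character covering one 12-second sampling interval. A minimal decoder for Ehman’s famous sequence:

```python
def decode_intensities(chars: str) -> list[int]:
    """Decode Big Ear printout characters ('0'-'9', then 'A'-'Z' for 10-35)
    into signal strengths, one per 12-second sampling interval."""
    return [int(c, 36) for c in chars]

print(decode_intensities("6EQUJ5"))  # [6, 14, 26, 30, 19, 5]
```

The ‘U’ decodes to 30 – the 30-sigma peak – and the rise-and-fall shape of the sequence is exactly what a fixed celestial source drifting through the antenna beam should produce.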

The printout showing Jerry Ehman's 'Wow' annotation next to a signal recorded by the Big Ear observatory. The characters relate to signal intensity, with the letter 'U' representing a 30-sigma peak. (Credit: Ohio State University Radio Observatory/NAAPO)

Enter urban planner and data analyst Robert H. Gray.

In his book The Elusive Wow: Searching for Extraterrestrial Intelligence, Bob Gray describes his remarkable attempt to track down the source of the Wow signal. The word ‘remarkable’ is appropriate here because Gray was an outsider (no astronomy PhD, no funding, no institution) who managed to get time on a Tasmanian radio telescope, a Harvard receiver, and even the Very Large Array in order to continue the search for the Wow signal. He also built his own automated microwave observatory to use in the search. (I’d like to know what his neighbours thought when they saw him rolling a 12-foot ex-military dish antenna through the narrow Chicago alleyways near his home.) Gray managed to do all this partly, I’m sure, through judicious application of a persuasive personality; but mainly because at all times he presented strong, well-reasoned, science-based arguments for searching for the Wow signal. His analysis of the data generated by his observing time on various telescopes was clearly of professional standard. So although Gray was an ‘amateur’ astronomer, he was an ‘amateur’ in the best, old-fashioned sense of the term.

The Elusive Wow is really two different books. The first part describes Gray’s personal relationship with the Wow signal. The story is told with clarity and humour, and along the way the reader learns a lot about how astronomy is actually done. The second part, which is much more traditional, presents an overview of the search for extraterrestrial intelligence – it gives the rationale behind, and the history of, SETI. For the newcomer to SETI, it might be advisable to read the second part first; a SETI veteran, I suspect, would skim the second part. There’s also an extensive bibliography, some useful links, and a photo gallery of some SETI luminaries.

Even if you are an expert in SETI folklore you will enjoy part one of the book. In his approach to solving the Wow mystery Bob Gray bears an uncanny resemblance to Detective Columbo – he’s always asking “one more thing”. The difference is that Gray’s quarry is not a criminal but nature herself. Where did this extraordinary tenacity come from? Well, Gray relates a story told by his graduate school statistics professor, a story that clearly influenced him. In the early days of gasoline engines a researcher noticed a measurement in some gasoline samples that implied an implausibly good batch. Rather than ignoring the outlier as a fluke, the researcher tracked the path along which the gasoline had been shipped. He wanted an explanation, and he found it: the material had been contaminated in transit. The contaminant eventually became a valuable fuel additive that made engines run better. The moral was clear: if you are willing to hunt down the cause of a mysterious measurement then you might catch something big. Nothing is more mysterious than a possible signal from extraterrestrial intelligence. Nothing could be bigger than confirmation of their existence.

Did Gray succeed in tracking down the Wow signal? Nope. Despite all the detective work, this is one case where the culprit got the better of Columbo.

The Wow signal might have come from a distant extraterrestrial source. But my bet? I think it was probably manmade interference of some sort, but that we’ll never know for sure.

This is a particularly interesting finding because GJ 667Cc orbits an M-class dwarf. Thing is, more than three-quarters of the stars in our neighbourhood are M-class stars (mainly dwarfs, though there are some red giants too). If rocky planets in the habitable zone are common around M-class dwarfs then there are lots of potentially habitable planets out there!

I doubt that M-class dwarfs are ideal locations for advanced life forms, however.

M-class dwarfs pump out much less energy than our Sun. Thus a planet in orbit around such a star must be close to the star if it is to have a surface temperature that’s similar to Earth’s. That in turn means that the planet is much more likely to be tidally locked. The problem is that tidal locking leads to extremes of climate: the star-facing side of a tidally locked planet would be in permanent light, the other side would be in never-ending night. And that in turn means that surface temperatures actually wouldn’t be like Earth’s. One side would be extremely hot, the other side frigid. Furthermore, the temperature on the frigid side would be so low that any atmospheric gases would be frozen out; the day side would be left dry. (If a large planet possessed a moon, however, then conditions on the moon might be more hospitable: a moon that was tidally locked to its planet would have a day-night cycle as it orbited the planet.)

Another problem with M-class dwarfs is that they can be quite variable. Starspots are common, and they reduce the star’s energy output by up to 40% for significant periods; flares are less common, but when they occur they can double the star’s brightness in a matter of minutes.

The discovery of GJ 667Cc suggests that the Galaxy might contain billions of rocky planets where liquid water can exist. But whether those planets can host life … well, that’s a different question. Soon the search for exoplanets will need to become a search for biosignatures.

Solution 16 in If the Universe is Teeming with Aliens… Where is Everybody? is entitled “They are signaling, but we do not know how to listen”. In that section I discuss a solution to the Fermi paradox that some scientists have occasionally proposed: extraterrestrial civilizations are out there, and sending signals, but we don’t hear them because we’re listening in the wrong way. Perhaps, the argument goes, we’re listening for electromagnetic signals when we should be listening for modulated neutrino beams, gravitational waves, or cosmic rays. In particular, the possibility of using neutrinos for interstellar communication was proposed by Mieczyslaw Subotowicz as long ago as 1979 (in the paper “Interstellar communication by neutrino beam”, which appeared in volume 6 of Acta Astronautica; see pages 213-220).

When I wrote Where is Everybody?, neutrino communication was not possible and it seemed to me unlikely that it would ever be possible in my lifetime. How quickly technology progresses! A recent paper by Daniel Stancil and colleagues (“Demonstration of communication using neutrinos”) reported on how a communications link was established between the NuMI beam line and the MINERvA detector (both at Fermilab). OK, so the distances involved here are not on an interstellar scale (in fact, the separation was only 1035 m); and the transmission figures aren’t stunning (they achieved a decoded data rate of 0.1 bits per second, with a bit error rate of 1%). But it’s a start! Ten years ago this couldn’t be done; in ten years’ time it will be routine.
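To get a feel for what 0.1 bits per second means in practice, here’s a back-of-the-envelope timing for a short ASCII message (the word below is just an illustrative payload, chosen for the occasion):

```python
def transmit_time_seconds(message: str, bits_per_second: float = 0.1) -> float:
    """Time to send an ASCII message at a given decoded data rate."""
    n_bits = len(message.encode("ascii")) * 8
    return n_bits / bits_per_second

print(transmit_time_seconds("neutrino"))  # 640 s — over ten minutes for one word
```

A single eight-letter word takes more than ten minutes; a philosophical treatise would take rather longer.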

The work by Stancil and his colleagues will eventually have applications, particularly in scenarios where communication using electromagnetic waves is difficult or impossible. (The 1035 m over which the neutrino beam allowed communication to take place included 240 m of solid earth: this was direct communication, with no need to bore a tunnel that could house an optical fiber carrying an electromagnetic signal. Or consider the case of communication with a submerged nuclear submarine: seawater is opaque to short-wavelength electromagnetic radiation, which is why submarines must come close to the surface and float a communications line, but neutrinos go straight through water – as they do through almost everything else.) So neutrinos may have a role in future communications technology. But will they play a role in communicating over interstellar distances?

I remain unconvinced that any extraterrestrial civilizations would choose to broadcast a signal using neutrinos. First, electromagnetic radiation is a faster signal carrier than a neutrino beam. (Despite the recent story about those OPERA neutrinos, we know that neutrinos don’t travel faster than light.) Second, it’s vastly easier to generate modulated electromagnetic radiation than it is to generate a modulated neutrino beam. Third, it’s vastly easier to detect modulated electromagnetic radiation than it is to detect a modulated neutrino beam. Fourth, because we share the same universe and are subject to the same laws of physics, we can be reasonably sure that any extraterrestrial civilization will know all of the above – they’ll know that we know that they know all this.

If there are any extraterrestrial civilizations out there trying to contact others in the universe, surely they’ll be using electromagnetic radiation. Won’t they?

The authors suggest the use of a two-tiered classification scheme in order to assess exoplanet habitability. The Earth Similarity Index (ESI), as its name implies, ranks planets based on their similarity to Earth in terms of mass, size, temperature and so on. The Planetary Habitability Index (PHI) ranks planets according to the presence of a stable substrate for life, available energy, appropriate chemistry, and the potential for the planet to hold a liquid solvent. The authors have formulated both indices in such a way that they can be updated as our knowledge advances; this is particularly important for the second tier of the classification scheme, the PHI, since that index requires more information than currently exists for any exoplanet.

The fun bit of the paper, though, is the appearance of a “top-10” list of objects as given by the ESI and the PHI.

The planet with the highest Earth Similarity Index is, of course, Earth. More interestingly, the object with the second-highest Earth Similarity Index is Gliese 581g (Earth has an ESI of 1; Gliese 581g has an ESI of 0.89).

The planet with the highest Planetary Habitability Index is, again no surprise, Earth (which has a PHI of 0.96). Titan, Mars and Europa occupy places 2-4 on the list. The exoplanet with the highest Planetary Habitability Index is, once again, Gliese 581g (with a PHI of 0.45).

So – is Gliese 581g the best place to be looking for alien life (perhaps, as has been suggested, by analysing reflected light from the planet in a search for biomarkers such as chlorophyll)? Maybe. But it’s worth pointing out that it’s not at all certain that Gliese 581g even exists! The Lick-Carnegie Exoplanet Survey ‘discovered’ this exoplanet in September 2010; but the planet did not show up in an analysis of data from the High Accuracy Radial Velocity Planet Searcher. As things stand today, the existence of the planet is unconfirmed.

Perhaps the most habitable exoplanet, as of today, will turn out to have been a mirage.

A team of astronomers have recently published results of observations made by Hubble of the exoplanet GJ 1214b. They didn’t discover the planet: that honour belongs to the MEarth project, which uses robotic telescopes to survey nearby M dwarf stars in search of new Earth-like exoplanets. The project spotted GJ 1214b back in 2009. What’s new is that Hubble was used to observe the planet during transit: because the star’s light is filtered through the planet’s atmosphere, astronomers can infer what gases might be present. The best fit to the Hubble data is that GJ 1214b has a dense atmosphere of water vapour.

Various observations allow astronomers to pin down the planet’s mass, size and orbital parameters. It turns out that GJ 1214b has a radius 2.7 times Earth’s and a mass 6.5 times Earth’s, which means its average density is about twice that of water. In comparison, Earth’s average density is 5.5 times that of water. In other words, GJ 1214b must hold proportionally much more water than Earth does. It’s a true waterworld.
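The density figure follows directly from the quoted mass and radius: mean density scales as M/R³, so working in units of water (and taking Earth as 5.5 times water, as above):

```python
EARTH_DENSITY_VS_WATER = 5.5  # Earth's mean density relative to water

def density_vs_water(mass_earths: float, radius_earths: float) -> float:
    """Mean density relative to water, for mass and radius in Earth units."""
    return EARTH_DENSITY_VS_WATER * mass_earths / radius_earths ** 3

print(density_vs_water(6.5, 2.7))  # ≈ 1.8 — about twice the density of water
```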

Water … so does that mean GJ 1214b could be home to life? Well, that’s unlikely, because the planet orbits just two million kilometres away from its red-dwarf star. Its temperature will be about 230°C. However, theories of planet formation suggest that GJ 1214b will have formed at a large distance from its star and then subsequently migrated to its current position. It therefore must have passed through the habitable zone.

There might not be life there now, but it’s possible – just possible – that life may have been there once.

A day after listening to Ray Kurzweil at Learning Without Frontiers 2012 I had the immense pleasure of listening to Martin Rees at the same conference. As Lord Puttnam said in his introductory remarks, Martin Rees is one of the most remarkable men, not just in the UK, but in the world. (You can hear some of what Lord Rees covered by checking out his TED talk.)

Martin Rees at the LWF conference, January 2010 Credit: LWF

The focus of Rees’ talk was on science teaching and science education (since the conference was about the future of learning), but he commented on Kurzweil’s talk of the previous day. Rees pointed out that one of the corollaries of an exponentially increasing level of technology is that individuals will soon have access to technologies that could destroy civilisation. If the world is a village, what happens if the village idiots get their hands on biological weapons that could wipe us all out? We might never get the chance to see whether Kurzweil’s Singularity will happen.

Again, I don’t see these Doomsday scenarios as a satisfying solution to the Fermi paradox. But it’s a depressing thought that such a scenario might prevent one particular technological species, namely us, from making our presence felt in the universe.

A couple of days ago I had the pleasure of listening to a talk by Ray Kurzweil at the Learning Without Frontiers 2012 conference. Kurzweil is a powerful, entertaining speaker. His talk ranged far beyond the narrow limits of his PowerPoint slides, and covered areas as diverse as his acquaintanceship with Noam Chomsky, his founding of the Singularity University, his numerous inventions and much else besides. But it was those PowerPoint slides that I found most interesting. Slide after slide showed evidence of the exponential increase in the power of information technology per unit currency. That increase has never paused over recent decades, and it shows no signs of abating any time soon. Moore’s Law in computing is just a special case of this exponential increase in the power of information technology. (This ‘Law’ is actually an observation first made in 1965 by Intel co-founder Gordon Moore. He noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965, and predicted that this trend would continue for years to come. Well, it was a pretty good prediction.)

Our lives will be transformed in the coming decades, in ways we can’t easily predict, because different areas of science have now become information technologies and have, therefore, hopped onto that exponentially accelerating escalator. Think of human genetics, for example. Next year the technology used to analyse the human genome will be twice as powerful as it is right now; a year later it will be four times as powerful; in three years’ time it will be eight times as powerful…
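That doubling sequence is just 2^n, but it’s worth seeing how quickly it runs away (a one-year doubling time is assumed here, purely for illustration):

```python
def capability(years: float, doubling_time_years: float = 1.0) -> float:
    """Relative capability after a given time, assuming a fixed doubling time."""
    return 2.0 ** (years / doubling_time_years)

for years in (1, 2, 3, 10):
    print(years, capability(years))  # 2, 4, 8 ... and 1024 after a decade
```

After a decade of annual doublings the technology is more than a thousand times as powerful; after two decades, more than a million times.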

A page from the book of the human genome. One day biotechnologists will be rewriting this book, with consequences that are hard to foresee. (Credit: Rob Elliott)

The human brain isn’t very good at really ‘getting’ exponential increase. We have a gut understanding of linear increase, but not exponential increase. There’s probably a good reason for that: our distant ancestors lived in a world where they had to predict the future on a linear basis. (“If me and that lion continue our paths then we’ll meet in 20 seconds – I’d better head that way instead.”) Sometimes, even trained scientists don’t ‘get’ that difference between linear and exponential increases. They understand it at an intellectual level, of course, but typically they will vastly underestimate where technology will be in the near future. That was one of the clear points Kurzweil made, and it’s hard to disagree. In a few years’ time, the computing power that resides in an object the size of an iPhone will reside in something the size of a blood cell. We don’t know precisely how that technological miniaturisation will take place, but we can be pretty sure that it will happen. And what will that mean for all of us? We can only guess.

This idea of ever-accelerating technological advancement led Kurzweil and others to introduce and popularise the concept of a technological Singularity: a point in the not too distant future when advances in computing occur so rapidly, and computation becomes so powerful, that unaugmented human brains will be unable to comprehend the nature of these technologically transcendent ‘beings’.

Perhaps such a Singularity will happen. Perhaps not. But suppose it does happen. I was chatting to a couple of people at the conference who argued that this was the explanation for the Fermi paradox: we don’t see alien beings because they’ve merged with their technology, hit the Singularity, and become transcendent beings. I covered this argument in Where is Everybody? and, personally, I don’t see how it addresses the paradox. The question “where is everybody” applies just as well to transcendent machine intelligence as it does to biological intelligence. There’s no sign of either.

Solution 42 in my book Where is Everybody? is entitled “The Moon is Unique”. What has the Moon got to do with the Fermi paradox? Well, it seems quite likely that our Moon has played an important role in the development of life on Earth (for example, it stabilises Earth’s axial tilt and thus prevents extreme climatic variations) and it’s not entirely implausible that it played a role in the creation of life in the first place. However, the Moon was created in a giant collision between Earth and a Mars-sized object. Had the parameters of that collision been slightly different, our Moon would not have formed with the size it has – and its effect on life would have been different. So, the argument goes, an Earth-Moon system such as our own might be rare – and so therefore might life.

I don’t believe that a scarcity of moons resolves the Fermi paradox – but for all sorts of reasons it would be good to understand more about moons in other planetary systems. And large moons – satellites such as Saturn’s Titan, for example – could themselves be hosts for life. The difficulty, of course, is in finding exomoons.

Kepler searches for exoplanets by looking for a periodic dimming caused by a planet transiting a star (Credit: NASA)

The Kepler mission, as we know, searches for exoplanets by looking for the periodic dip in a star’s brightness that occurs when a planet transits the star. The technique has resulted in the discovery of hundreds of planets. Well, suppose the planet has a moon that orbits more or less in the same plane in which the planet orbits its star: when planet and moon were side by side they would block more light than when one object was in front of the other. By looking in detail at the periodic variation in the brightness of the star it should be possible, in principle, to determine the moon’s mass and diameter (and hence its density).
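The photometric idea can be sketched with a toy transit-depth calculation. When planet and moon cross the star side by side, without their discs overlapping, their dips simply add; the radius ratios below are illustrative (roughly Sun/Jupiter/Earth), not measurements of any real system:

```python
def transit_depth(star_radius: float, *body_radii: float) -> float:
    """Fractional dimming when the bodies' discs all lie on the stellar disc
    without overlapping one another (the side-by-side case)."""
    return sum((r / star_radius) ** 2 for r in body_radii)

R_STAR, R_PLANET, R_MOON = 1.0, 0.1, 0.027  # illustrative radius ratios

print(transit_depth(R_STAR, R_PLANET))          # 0.010 — a 1% dip from the planet alone
print(transit_depth(R_STAR, R_PLANET, R_MOON))  # ≈ 0.0107 — slightly deeper with the moon
```

The moon’s extra dimming is tiny (here less than a tenth of the planet’s dip), which is why exomoon detection demands such exquisite photometric precision.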

In a recent paper, Kipping and his co-authors calculate that Kepler should in principle be able to discover exomoons with a mass as small as 0.1 Earth masses. Such an object would be much bigger than Ganymede or Titan. The discovery of such an exomoon would be important for science, but would not in itself shed much light on the question of extraterrestrial life (other than increasing, perhaps, the potential number of abodes for life). But if the hunt for exoplanets has taught us one thing, it’s that observations that once appeared technically impossible eventually become routine. Right now it might be impossible to search for an exomoon that’s similar to our own Moon. In a few years’ time it won’t be.