They also claim that CERN has failed to provide an environmental impact statement as required under the National Environmental Policy Act.

This case illustrates a disturbing new trend -- one that started with the development of the atomic bomb: we are increasingly coming into the possession of technologies that could cause the complete extinction of the human species.

Or, at the very least, technologies that we think might destroy us.

Memories of the Manhattan Project

We don't know for certain that the collider will create a black hole or cause some other unpredictable disaster.

But we suspect that it might. Thus, it has to be considered an existential risk.

But for a brief moment 63 years ago, some concerned observers held their breath and nervously watched as the bomb lit up the New Mexico sky.

And now we have a new contender for the perceived existential threat du jour.

Let science be our guide

Is the Hadron Collider an existential risk? Well, based on our current understanding of physics, we have to conclude that there is a non-zero probability that the collider will act in a way that could destroy the planet.

Just how non-zero is the next big question.

Three years ago, Max Tegmark and Nick Bostrom wrote a piece for Nature in which they took a stab at the question. They warned that humanity has been lulled into a false sense of security and that "the fact that the Earth has survived for so long does not necessarily mean that such disasters are unlikely, because observers are, by definition, in places that have avoided destruction."

So let's have some fun and smash those particles together at extreme velocities.

But I have to wonder: what if they had concluded there was a one-in-a-million chance? Is that sufficiently low? Remember, we're talking about the fate of all human life here.

What about one in a hundred thousand?

At what probability point do we call it all off?
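The arithmetic behind that question is ordinary expected value: multiply the probability of catastrophe by the number of lives at stake. A minimal sketch, using purely illustrative probabilities and a rough world-population figure that are my assumptions, not estimates from the post or from any physicist:

```python
# Illustrative expected-value arithmetic for an existential risk.
# The population figure and probabilities below are assumptions for
# illustration only.

WORLD_POPULATION = 6.7e9  # rough late-2000s figure (assumption)

def expected_lives_lost(p_catastrophe: float) -> float:
    """Expected deaths if a disaster that kills everyone occurs
    with probability p_catastrophe."""
    return p_catastrophe * WORLD_POPULATION

# One in a million vs. one in a hundred thousand:
for p in (1e-6, 1e-5):
    print(f"p = {p:.0e}: expected lives lost ~ {expected_lives_lost(p):,.0f}")
```

Even a one-in-a-million probability carries an expected cost of thousands of lives, which is why "sufficiently low" is so hard to pin down.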

How will we ever be able to agree? And would our conclusions cause us to become cripplingly risk averse?

I have no good answers to these questions; suffice it to say that we need to continually develop our scientific sensibilities so that we can engage in risk assessment with facts instead of conjecture and hysteria.

The new normal

Moving forward, we can expect to see the sorts of objections being made by Wagner and Sancho become more frequent. Today it's particle accelerator experiments. Tomorrow it will be molecular nanotechnology. The day after tomorrow it will be artificial intelligence.

And then there are all those things we haven't even thought of yet.

The trick for human civilization will be to figure out how to assess tangible threats, determine their level of risk, and devise actionable responses.

But it doesn't end there. Inevitably, we will develop technologies that have great humanitarian potential, but are like double-edged swords. Molecular nanotechnology certainly comes to mind.

Consequently, we also have to figure out how to manage our possession of an ever-increasing arsenal of doomsday weapons.

It will be a juggling act in which a single mistake could mean the end of all human life.

5 comments:

"And would our conclusions cause us to become cripplingly risk averse?"

This is the central point, the way I see it. What, really, are the costs of relinquishment?

When the technology was nuclear weapons, the cost of relinquishment may have been seen (at the time) as losing WWII.

Given all the decisions that individuals and groups have made over the course of human history to risk their lives fighting for better lives (or to refrain from such), one can see that there is some level of risk that at least some can accept. It just depends on what we stand to gain from risk or relinquishment.

And once you get into infinitesimal probabilities, the consequences become really the only thing people can meaningfully use to weigh their decisions (and even then, the relative probabilities of being affected by terrorism and climate change seem horribly lopsided, as you've posted before).

We don't risk losing WWII here. What we stand to gain by taking the risk is also less than clear (to me, anyway). And that's not even beginning to take into account consultation and reconciliation of the vastly differing values of every entity that's potentially affected by this decision.

When talking about things like quarks we are already WAY outside of normal experience. You can't verify any of the claims on your own, unless you become a particle physicist.

To even raise these kinds of interesting objections, you are essentially accepting current particle physics theory as close to reality, rather than convention.

So, Wagner and Sancho appear to take this science and the people who develop it quite seriously, and take for granted the theories are correct and about reality.

That is, up until this particular experiment. Everything before that, yes we believe the particle physicists. But when those same physicists tell us there is virtually no risk with the LHC, well they might be wrong! And not just a little wrong, but HORRIBLY wrong, to the point where our planet will be destroyed!

It just seems like they are drawing a completely arbitrary line with these kinds of objections.

If there was real danger, why would the scientists still want to turn it on? Or why would there at least not be a large contingent of them speaking out against it? I don't find it plausible that most particle physicists WANT to kill themselves and everyone else either.

This issue has much to do with politics and rather little to do with science or its consequences.

Mr. Wagner filed a similar suit in the late nineties for RHIC at Brookhaven, for the same ostensible reasons (potential creation of mini-black holes, magnetic monopoles, strangelets or other forms of exotic matter that could cause instability and/or seeding). The RHIC has been operating without such problems for ten years, and cosmic rays of much higher energy than that to be produced by the LHC have gone through earth ever since it coalesced out of the protoplanetary disc.

Physicists, like all scientists, cannot say that a doomsday scenario will NEVER happen. They can only say that the likelihood is infinitesimal. As existential risks go, I would put this one at the bottom of the list. The chances of earth getting sterilized from a nearby supernova radiation wave front are much, much higher -- and so are the chances of our running out of oil.

In terms of the bigger picture, nothing comes without a price -- not even antibiotics, which have been the major cause of extended human lifespan ("heroic" medicine notwithstanding). Ironically, this same life extension may make people more and more risk-averse, to the point of paralysis. And in the longer run, relinquishing such science may cost us the ability to leave earth, leaving us to drown in our own wastes.

Huh, I've never met another "Nato" until now. Also, my last name is of Welsh extraction. Odd coincidence.

Anyway, if we were generating energies much higher than those naturally created all the time, then there might be cause for worry, but that's not the case. I agree with Athena and ryan - in high-profile public physics experiments, it's not likely that the people who know what they're talking about will fail to notice things that create plausible existential threats. It's not like the unintended side-effect of some industrial herbicide where no one's paying attention to its impact on amphibian reproductive cycles - the wacky stuff in a particle physics experiment is the intended result and the cynosure of attention.

George Dvorsky

Canadian futurist, science writer, and ethicist, George Dvorsky has written and spoken extensively about the impacts of cutting-edge science and technology—particularly as they pertain to the improvement of human performance and experience. He is a contributing editor at io9, the Chairman of the Board at the Institute for Ethics and Emerging Technologies and is the program director for the Rights of Non-Human Persons program.