Elon Musk, the billionaire founder and CEO of the private spaceflight company SpaceX, wants to help establish a Mars colony of up to 80,000 people by ferrying explorers to the Red Planet for perhaps $500,000 a trip.

Relevant Sagan quote:

(...) we've put all our eggs in one basket. If we were on many worlds and were to mess up down here, there's a way for the human species to continue. I don't for a moment propose that the Earth is a disposable planet, and we have to put enormous efforts into making sure we don't mess up down here. But there is a chance.

This should also at least somewhat reduce the x-risk stemming from uFAI (a subset of uFAIs may not concern themselves with space travel), and may significantly reduce the x-risk posed by many other x-risk categories (bioengineered threats, catastrophic climate change, global nuclear catastrophe, grey goo scenarios, etc.).

Prima facie, it's unclear to me how supporting an endeavor such as Musk's stacks up against donating to the SI, measured by total x-risk reduction per donated currency unit.

This should also at least somewhat reduce the x-risk stemming from uFAI.

I doubt it. If travel to Mars becomes doable for Musk, it'll be trivial for uFAI, and uFAI wouldn't be stupid enough to let a nearby technological civilization threaten the efficient achievement of its goals, whatever they are.

Good point, and this appears to be a general issue with human space settlement.

Suppose that the technology to do it cost-effectively arrives before AI, e.g. cheap spaceflight arrives this century, and AI doesn't. Then even if an unfriendly AI shows up centuries later, it can catch up with and overwhelm the human settlements, no matter how far from Earth they've reached. (The AI can survive higher accelerations, reach higher speeds, reproduce more quickly before launching new settlement missions, etc.)

Worse, increasing the number of human settlements most likely increases the chance that someone, somewhere builds an unfriendly AI. So, somewhat surprisingly, space settlement could increase existential risk rather than reduce it.

I believe the reasoning is [more human settlements] --> [more total humans] --> [more humans to make an AI]. Whether or not settlements on e.g. Mars will actually lead to more total humans on the timescale we're talking about is up for debate.

Yes, there is this effect. I was making a general point about what happens if we get space settlements a long time before AI (or, to be precise, AGI). More lebensraum gives more people.

Also, the number of remote populations increases the chance of someone building an unfriendly AI. A single Earth-bound community has a better chance of coordinating research on AI safety (which includes slowing down AI deployment until they are confident about safety, and policing rogue efforts that just want to get on with building AIs). This seems much harder to achieve if there are lots of groups spread widely apart, with long travel times between them, and no interplanetary cops forcing everyone to behave. Or there may be fierce competition, with everyone wanting to get their own (presumed safe) AIs deployed before nasty ones show up from the other planets.

In many cases, spaceflight may not even be necessary: a uFAI that is still in stealth mode could simply transmit itself over.

However (correct me on the jargon), uFAI mindspace certainly contains uFAIs that - for whatever reason - will not concern themselves with anything other than Earth.

For example, a uFAI that uses geometric discounting as part of its utility function (cf. page 6 here) may realize the eventual danger (decades down the line) from a Mars colony, but may invest all its resources to transform Earth into the desired state anyway.

Since the set of uFAI concepts that will also threaten Mars is a strict subset (even if not a much smaller one) of the set of uFAI concepts with no constraint regarding Mars, its total probability can only be smaller, so I think the statement that the risk would be somewhat smaller is fair to make (claiming otherwise would be a conjunction fallacy).
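The subset point is just monotonicity of probability: for any two events, P(A and B) <= P(A). A toy numerical sketch, where all the probabilities are invented purely for illustration:

```python
# Toy model with invented numbers: partition uFAI scenarios into two
# disjoint classes by whether the uFAI eventually threatens Mars.
p_ignores_mars = 0.2    # assumed: uFAIs that, for whatever reason, never leave Earth
p_threatens_mars = 0.8  # assumed: uFAIs that eventually sterilize Mars too

p_earth_doomed = p_ignores_mars + p_threatens_mars  # every uFAI scenario dooms Earth
p_mars_doomed = p_threatens_mars                    # only a subset also dooms Mars

assert p_mars_doomed <= p_earth_doomed  # adding a conjunct can't raise probability
print(p_earth_doomed - p_mars_doomed)   # the (possibly small) reduction a colony buys
```

Whether that difference is 0.2 or 0.0002 is exactly the open empirical question; the sketch only shows the direction of the inequality.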

Preliminary note: While my assertion concerning uFAI x-risk reduction is certainly fair game for debate, it is ancillary to my main interest in this topic, which is overall x-risk reduction from all sources. That being said, I do think that the uFAI specific x-risk reduction is non-negligible, though I do agree it may well be minor.

Why should a uFAI which ignores space travel be any more likely than a uFAI which ignores people dressed in green?

Two broad categories of explaining such a difference:

The advent of uFAI may lead to a series of events (e.g. nuclear winter) that 1) preclude the uFAI from pursuing space travel for the time being, 2) lead to the mutual demise of both humankind and the uFAI, 3) lead to a situation in which the cost/benefit analysis on the uFAI's part does not come out in favor of wiping out a Mars colony, or 4) leave the Mars colony enough time to implement countermeasures of some sort, up to and including creating a friendly AI to protect them.

The utility function (which may well be a somewhat random one implemented by a researcher unwittingly creating the first AGI) could well yield strange results, especially if it is not change-invariant. For example, it may have an emphasis on building tools unable to achieve spaceflight (maybe the uFAI was originally supposed to only build as many cars as possible, favoring certain tools), be concerned only with the planet Earth ("Save the planet"-type AI), or - as mentioned - be incapable of pursuing long-term plans due to geometric discounting of future rewards, there always being something to optimize which only takes short-term planning (i.e. it is locked into a greedy pattern).
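To put toy numbers on the discounting scenario (the discount factor, horizon, and payoffs below are all invented for illustration, not taken from the cited paper): with a per-year geometric discount factor gamma, a harm T years out is weighted by gamma^T, so a steep enough discount makes even an enormous future threat worth less than a trivial immediate gain:

```python
# All numbers are invented for illustration only.
gamma = 0.75             # assumed per-year geometric discount factor
years_until_threat = 50  # assumed horizon before a Mars colony becomes dangerous
threat_cost = 1_000_000  # assumed undiscounted disutility of that future threat
immediate_gain = 10      # assumed utility of one more step of short-term optimizing

discounted_threat = threat_cost * gamma ** years_until_threat
print(discounted_threat)                   # ~0.57: a million-unit harm, discounted away
print(discounted_threat < immediate_gain)  # True: the greedy short-term plan wins
```

Of course a real uFAI's utility function is unknown; the point is only that geometric discounting can lock an optimizer into short-horizon behavior.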

All of which is of course speculative, but a uFAI taking to the stars has "more"* scenarios going against it than a uFAI ignoring people dressed in green. (* in terms of composite probability, the number of scenarios is countably infinite for both)

OK, makes sense. If we assume the AI to be perfectly rational, it would probably give exterminating humanity on Earth high priority, exactly because there is a chance of them building another AI.

However, to wipe out humanity from the Earth, the AI does not have to be very smart. One virus, well designed and well distributed, could do the job. An AI with some bugs could still be capable of pulling that off... and then fail to properly mount the space attack, or destroy itself through a wrong self-modification.

Which catastrophic risks does a Mars colony mitigate? Using a list from a recent post by Stuart Armstrong (table by Anders Sandberg)...

Earth impactors: yes

War: probably. It is unlikely that Mars colonies would be valuable or strategically important enough to extend a war to Mars, but it is possible that the same conditions that led to war on Earth could lead to local war on Mars, or that a war on Earth could be exploited by factions on Mars.

Famine: yes. Keeping a Mars colony fed might be a major challenge, especially at first, but it is independent of the same challenge on Earth. If famine on Earth is caused by a plant pathogen, it could spread to Mars, but there is the nice long quarantine.

Pandemics: probably. Until there is much more advanced propulsion technology that cuts trip time to days, the trip serves as a natural quarantine period. Also, really nasty features like the ability to persist in the environment, or replicate in non-human hosts, or spread via aerosols, don't present any additional threat on Mars.

Supernova, GRB: probably? Unlike impactors, a supernova or GRB would affect both Earth and Mars. However, if the major impact on Earth is deaths by radiation of exposed people and destruction of agriculture via destruction of the ozone layer, then Mars should be much more resilient, since settlements have to be more radiation-hardened anyway, and the agriculture would be under glass or underground.

Climate change: yes

Global computer failure: probably not? If Mars colony infrastructure is very robustly designed, it might survive without computers. I expect that it would not be possible to software-quarantine Mars.

Bioweapons: probably. For Mars to be included in a deliberate pandemic attack, you would need to get the agent into each separate Martian settlement, probably simultaneously. Unlike on Earth, separate Martian cities could probably enforce effective travel restrictions and quarantines.

Nano weapons: no. Unlike bio weapons, presumably all you would have to do is get some spores to somewhere on Mars.

The above is assuming that the Mars colonies are self-sufficient, otherwise a catastrophe on Earth is a catastrophe for Mars.

Existential risks are described as causing actual human extinction, or massive mortality and long-term curtailment of human progress (e.g. putting human population and society back to the Stone Age). Mars colonies mitigate the first, and could mitigate the second if Mars is developed to the point where it is wealthy and has an independent space program - to the point where Mars could offer meaningful aid to Earth.

If a Mars colony mitigates catastrophic risk (extinction risk?) from climate change,
then climate change is not an existential risk to human civilization on earth.

If humans can thrive on Mars, Earth based humanity will be able to cope with any climate change less drastic than transforming the climate of Earth to something as hostile as the current climate of Mars.

Supernova, GRB: probably? Unlike impactors, a supernova or GRB would affect both Earth and Mars. However, if the major impact on Earth is deaths by radiation of exposed people and destruction of agriculture via destruction of the ozone layer, then Mars should be much more resilient, since settlements have to be more radiation-hardened anyway, and the agriculture would be under glass or underground.

Is not a good addition. The Mars-hardened facilities will be hardened only for Mars conditions (unless it's extremely easy to harden against any level of radiation?) in order to cut colonization costs from 'mindbogglingly expensive, equivalent to decades of world GDP' to something more reasonable like 'a decade of world GDP'. So given a supernova, they will have to upgrade their facilities anyway, and they are worse positioned than anyone on Earth: no ozone layer, no atmosphere in general, a small resource and industrial base, etc. Any defense against a supernova on Mars could be done better on Earth.

Good point. Mars would only be better off if the colonies over-engineered their radiation protection. Otherwise anything that gets through Earth's natural protection would probably get through Martian settlements designed to give the same level of protection. It might be relatively cheap to over-engineer (e.g. digging in an extra meter), but it might not.

It might be relatively cheap to over-engineer (e.g. digging in an extra meter), but it might not.

FWIW, while researching my Moore's law essay, I found materials claiming that underground construction was more expensive but paid for itself via better heating/cooling. But that was for shallow cut-and-scrape constructions and I suspect 1 meter wouldn't take care of supernova radiation.

As far as I understand the issue, the danger is mainly from the temporary ozone layer depletion, with the resulting solar UV rays doing most of the damage, and not from any kind of direct supernova radiation. And UV is not hard to shield from.

If a Mars colony mitigates catastrophic risk (existential / extinction risk?) from climate change, then climate change is not an existential risk to human civilization on earth

This does not follow. One possible (although very unlikely) result of climate change is a much more severe situation, resulting in a Venus-like climate (although not with as high a temperature, and not as much nasty stuff in the atmosphere). If that happens, Mars will be much easier to survive on than Earth, since, given a lot of energy from nuclear power, extremely cold environments are much more hospitable than extremely hot ones. Current models make such a strong runaway result unlikely, but it is a possibility.

Is climate change seriously considered to be an existential risk? It seems that, to 1st order, climate change would just move population densities; to 2nd order, there might be net less or net more land and agricultural resources after the change; and at 2nd or 3rd order, the rate of hurricanes and other severe storms would change.

It doesn't seem to me that something which reduces human population from 6 billion to 2 billion should be considered an existential threat. A threat, yes, an expense we would prefer not to tolerate, perhaps. But a game ender? Not the way I play.

Hmm, the IPCC asserts this statement without providing any argument to support it.

Some quick thoughts: In the beginning, there were no oceans. The earth was molten and without form. Now, assume a Venusian runaway is a possibility for this planet's climate. Why has it not already occurred, much, much earlier in the planet's history?

The planet was very much hotter and more humid in the very distant past. The CO2 in the oceans and the methane in the permafrost was captured from the atmosphere. The O2 in the atmosphere is a biogenic waste product of photosynthesis.

I do think the oceans will boil eventually, not because of global warming, but because of solar warming, after the sun has depleted its hydrogen.

My understanding is that all the Carbon which is fixed and stored under the ground in petroleum, coal, natural gas, and other "fossil" fuels was in the air of the earth as CO2 before it was fixed by plants and buried. So it would seem that even with ALL the fossil fuel carbon in the atmosphere, the earth supports life. Considering the adaptability of human life, especially with modern technology, I would be surprised if it was concluded that humanity would be wiped out by this.

A Mars colony could be useful for testing the tools necessary to overcome a hostile climate, and it could make their development (possibly their mass production) a higher priority.

So in case the Earth climate starts to change very rapidly, we would have a choice to use already developed and tested equipment, built in existing factories, instead of trying to invent it amidst global chaos.

If we can build a self-sufficient small-scale economy which is independent of Earth's ecosystem services and industrial base - i.e. an independent Martian colony - most listed existential risks a Martian colony might mitigate cease to be existential. That is because the mechanism of these existential risks is either a reduction of the ecosystem services provided by Earth's biosphere triggering a breakdown of our interconnected world economy, with subsequent starvation of most people, or a breakdown of our interconnected world economy even without significantly reduced ecosystem services.

It also applies to: most pandemics (sub-100% lethality, or shelter available, or some region spared), most supernova scenarios (breakdown of agriculture due to ozone layer disruption, far enough away not to instantly fry the Earth), and some bio- and nanoweapons (sub-100% lethality, or shelter available, or some region spared).

So a Mars colony will be the sole survivor only in some highly specific and thus unlikely scenarios: a nano-outbreak which can break into an earthbound shelter but does not spread through space, a very intense GRB which hits Earth but not Mars (is this even possible?), an Earth impactor large enough to heat the atmosphere to several hundred °C, perhaps some weird physics disaster.

So what we should do to mitigate x-risks is build a self-sufficient small-scale economy which is independent of Earth's ecosystem services and industrial base, not ship it to Mars. Though I fear this is not possible at our current tech level.

I've thought the same thing. A big, deep, independent, hermetically sealed, geothermally powered complex under say Iceland or New Zealand gets you most of the x-risk mitigation that a martian colony does.

It is sufficient to have the tech necessary to be self-sufficient in a small (say 1000-person) group, independently of ecosystem services (i.e. food, water, maybe air - not an issue in most scenarios - organic raw materials, fossil fuels). This is the minimum requirement for a Martian colony or a deep shelter anyway, and it is much easier, especially if you can use outside air. Though it is still very hard - today we don't come close to being independent of ecosystem services even with a supply chain of 7 billion people. I doubt it is possible at all short of some sort of MNT.

As long as we are not independent of ecosystem services in a small group, a space, underground, or oceanic colony as protection from x-risks is a pipe dream, because any settlement not completely independent of the mother civilisation will die long before the mother civilisation breaks down. Especially if transport is as demanding as it is to Mars.

Not very. We know that no gravity is lethal in the long term for human beings (for a variety of reasons), but we really have no sense of where the cutoff is -- moon gravity is probably too little, but we've no clue if Martian gravity might be enough. Human biology is definitely tuned to Earthly norms for obvious reasons, but it might benefit from greater or smaller g-effects; there simply isn't enough research on the subject, as for obvious reasons it's inordinately difficult to do the relevant studies.

Huh, I wonder how technically feasible it would be to build a base that rotates fast enough that it feels like Earth gravity inside... It could be just one large circular maglev, which shouldn't require too much energy for suspension and maintaining a constant speed - though there's still friction, and the fact that it complicates connections to things that are probably better left static.
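For a rough feasibility check on the rotation idea (the radii below are arbitrary illustrative values, not a proposed design): the required spin follows from centripetal acceleration a = ω²r, so for Earth gravity ω = sqrt(g/r):

```python
import math

g = 9.81  # target centripetal acceleration in m/s^2 (Earth surface gravity)

for radius in (50, 250, 1000):  # ring radius in meters (arbitrary examples)
    omega = math.sqrt(g / radius)     # angular velocity in rad/s, from g = omega^2 * r
    rim_speed = omega * radius        # linear speed of the rotating ring, m/s
    rpm = 60 * omega / (2 * math.pi)  # rotations per minute
    print(f"r = {radius:4d} m: rim speed {rim_speed:5.1f} m/s, {rpm:.2f} rpm")
```

A 50 m ring needs roughly 4 rpm at a 22 m/s rim speed; rotation rates above a few rpm are often cited as uncomfortable (Coriolis effects), which pushes such designs toward large radii and high rim speeds - hence the friction worry above.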

Wouldn't Earth after catastrophic climate change or global nuclear war be still more livable than Mars? If we are OK with living in shielded colonies with artificial atmosphere and controlled climate, why not build them on Earth? We wouldn't suffer from low gravity here.

Fair enough -- I tend to figure at the point where we're talking about people knowingly taking actions that they are pretty sure will result in the extinction of local humanity, the additional motive to grab those guys over there within easy reach is not hard to tack on.

Well, it does look likely (not guaranteed - just 50% likely) that the primary target for the strike would be The Enemy (China-USA-Russia-EU-India-whoever). From what is publicly known, the prepared plans from the 20th century revolved around first-strike/revenge dynamics...

Risking extinction on the Earth could be done just to slightly improve your chances of not being enslaved in the aftermath, or at least of not letting The Enemy get away less destroyed than you. It means that you spend all that you can on your selected targets.

Africa (except South Africa, maybe) would be collateral damage; striking Mars would be expending a lot of resources on bystanders.

If Mars has some interplanetary weapons, it can 1) credibly claim neutrality (we don't even trade with any side - not that we could hide that...) and 2) try to destroy a strike from Earth mid-transit (Mars has months to prepare interceptors, and doing counter-interception maneuvers during interplanetary flight is very expensive).

Radiation levels immediately following a nuclear war might be much worse than Martian radiation levels. Moreover, even if Earth is more habitable after the war, if everyone on Earth is dead, this won't matter very much.

It is much easier for the nuclear war on Earth to accidentally kill essentially everyone even as Mars is left alone (simply because no attacks occur on Mars at all). But I agree that this isn't a very likely scenario.

Is there any catastrophic risk that a Mars colony mitigates that isn't also mitigated by a self-sufficient, self-powered (e.g. geothermal) deep underground colony with enforced long quarantine periods?

I can't take grey goo seriously as a threat. We deal with brown goo - that is, biological reproducers - constantly. And our biggest problem with fighting off brown goo isn't that we can't destroy it, but that it shares so much in common with us that we can't destroy it without destroying ourselves as well. We have to use highly specialized weaponry to fight brown goo.

Gray goo, on the other hand, is going to have sufficient differences from us that we can use less discriminating weaponry - EMPs, for example, which are largely harmless to us.

They are harmless to our biological body, but not to the modern technology we've come to all but depend on -- being able to stop nanobots from devouring the planet but needing to destroy the Internet and mains power grid in the process would still be bad news, though one from which humans might eventually recover.

The point was less "Rar EMPs" (I imagine you could construct nanites that would survive an EMP) and more "Rar broadly tailored weaponry." I can't imagine exactly what form that weaponry would take - maybe anti-nanite-nanites or even biological parasites which strip the nanites of usable materials. Heck, maybe even just emitting audio waves at the resonant frequency of the nanites. The point is that we could tailor and utilize weaponry on a broad scale which has a minimal impact on us.

Whereas we can't (yet) design viruses that wipe out only, say, Russians, to go back to the cold war - and even if we could there would be a short mutational gap to targeting everybody.

Cheap means of interplanetary transport will make interplanetary war more possible - e.g. cheap missiles. Also, large high-speed spacecraft are x-risks themselves if they are used as weapons, like artificial asteroids that ram a planet.
From a political point of view, the existence of a new center of independent power could also lead to interplanetary war. From history we know cases where colonies became independent, had wars with the metropole, and even gained world military domination.

We have no idea if a self-sustaining colony is at all feasible. If it cannot be self-sustaining, then it doesn't reduce existential risk at all, and (besides the opportunity costs mentioned by others) it distracts attention and political capital from keeping Earth habitable.

Never forget the opportunity costs of limited resources. To use a different example: saving cute puppies isn't something anyone is against, but when one has limited resources there is a question of whether giving money to animal shelters is the best use of them.

These are private resources, and a relatively low amount at that, so this is barely applicable here. It is obviously not the optimal use of the resources (because we probably can't even calculate their absolute optimal use), but if they are doing anything even slightly positive we should be for it, and most will argue that this is a highly valuable endeavour for humanity (even if we only count the useful tech that will be invented or improved during the course of the mission).

Huh? How is this relevant? The question isn't whether it is a better use of the resources than some possible options; the relevant question is whether it is the best use. No one disagrees that this is a good use compared to many options. But if this isn't the best use, then yes, I'm against it, in the sense that I'd prefer the resources to go elsewhere.

If you would rather have something happen than nothing* happen at all, then you are not against that something, BY MY DEFINITION. And if you are against everything that requires resources but is not the optimal use of those resources, then you must hate almost everything, including yourself and all of your decisions, and you should definitely be against wasting your time arguing instead of inventing or working or volunteering or whatever.
*nothing as in nothing that uses those resources, not 'nothing in the universe'

Sorry, I edited immediately and added a disclaimer on what I meant by that, but it seems that the final edit of my post didn't submit. In the disclaimer I explained that I meant nothing happening with the resources (which still wasn't a good explanation of what I meant), and, because of my bad wording, tried to add a different explanation: I pretty much mean 'If you'd rather have event over ~event, then...'

Sorry, where did hate come in?

If you are against everything, you must (as in, it is logical that you would) hate the situation. Anyway, the hate was not the point; the point was that no use of resources is optimal.

If you are against everything, you must (as in, it is logical that you would) hate the situation

This doesn't follow at all unless one is using a non-standard notion of hate. For example, given the choice I'd rather watch Casablanca than Field of Dreams, but they are both excellent movies. I wouldn't "hate" watching Field of Dreams.

the point was that no use of resources is optimal.

This makes me even more confused. How do I know that no use is optimal? Moreover, even if there isn't any single optimal use, aren't some uses still substantially better than others?

Theoretically there is an optimal use; practically, you can't calculate it, and nothing you do is optimal. Anyway, I retracted my previous two comments because this is kind of going in circles.

What's with the obsession with planets, anyways? Provided we have the technology to survive on another planet, it seems substantially cheaper to use that technology to survive in space, instead. Gravity wells are expensive.