Computing and mathematics legend Stephen Wolfram is worried about bigger problems than climate change or overpopulation. He just joined the Lifeboat Foundation, a think tank devoted to ways of protecting humanity from deadly nanoweapons and rogue artificial intelligences.

In case the Lifeboat Foundation can’t prevent an attack by self-replicating nanobots, they’ve also designed this handy space colony.
The Lifeboat Foundation

Mathematics and computing legend Stephen Wolfram (of Wolfram Alpha) is betting on the Singularity. This week, the Lifeboat Foundation, a think tank devoted to helping humanity survive existential risks as it “moves towards the Singularity,” announced that Wolfram was joining the organization’s advisory board. Wolfram will help the organization work toward solutions to future dangers, from evil robots to self-replicating miniature weapons.


The Singularity is a concept, popularized by Ray Kurzweil, that posits that human nature will be fundamentally transformed by technology sometime in the not-too-distant future. In books such as The Singularity Is Near and The Age of Spiritual Machines, Kurzweil argues that man and machine will ultimately become one. While many experts lampoon the idea of the Singularity ever happening, the concept has been massively influential in both science and popular culture.


Wolfram has his own ideas about humanity and the future of technology. In a recent lecture at the 92nd Street Y’s Singularity Summit, Wolfram argued that the universe is best understood as a computer program and that technological advances will fundamentally alter human nature in unimaginable ways.

Lifeboat’s dedication to way-out-there solutions for (arguably unlikely) existential risks has resulted in several unique projects. These range from conventional ideas that wouldn’t be out of place at government agencies to truly novel solutions for problems few have even imagined.

One program, the nanoshield, focuses on protecting humanity from self-replicating, miniaturized weapons. Futurists believe that military services worldwide may someday deploy “ecophages”—tiny weapons whose only goal is to replicate and attack enemy soldiers and their resources. Specialists at the Lifeboat Foundation are studying the potential raw materials that could be used to create ecophages; they are also developing contingency plans for everything from detecting ecophages to mounting defenses against nanoweapon attacks. Radiation and sonic defenses against nanoweapons are considered the best bets.

Other experts at Lifeboat are focusing on one of Hollywood’s favorite tropes—protecting humanity from asteroid impacts. The asteroidshield program is working on precautionary measures that could someday save the planet. The obvious solution—destroying the asteroid, à la Armageddon—doesn’t work in real life; blowing up an asteroid with missiles or other explosives would itself cause a catastrophic event, thanks to the resulting debris. Instead, scientists working with Lifeboat propose that extensive asteroid-detection measures be put in place and complemented with a post-detection program of altering asteroid orbits. As a test, Lifeboat urges that a space agency or military organization somewhere on the planet attempt to “significantly alter the orbit of an asteroid in a controlled manner” by 2015.


However, Lifeboat’s arguably most interesting project aims to protect us from a threat we don’t even understand yet. AIshield is an ongoing program designed to come up with solutions to defend humanity from hostile artificial intelligences. Lifeboat believes that “Artificial General Intelligences” with abilities far exceeding humanity’s will come into existence over the next few decades. Scientists at Lifeboat believe some of these entities are likely to be malevolent, to cause massive unintended negative consequences for the world, or to “go rogue.” The result of Lifeboat’s research is some of the most readable (and movie-ready) scientific literature ever published:


Artificial intelligence could be embedded in robots used as soldiers for ill. Such killerbots could be ordered to be utterly ruthless, and built to cooperate with such orders, thereby competing against those they are ordered to fight. Just as deadly weapons of mass destruction like nuclear bombs and biological weapons are threats to all humanity, robotic soldiers could end up destroying their creators as well. Solutions are hard to come by, and so far have not been found.

In the event that these threats materialize despite Lifeboat’s best efforts, there’s also a plan to create small space colonies (pictured above) called Ark I, which the organization describes as the ultimate gated communities. Instead of colonizing another planet, why not use the space above the Earth, allowing self-selecting groups of people to live together as the planet is slowly destroyed?

Other projects Lifeboat is involved in tackle more, well, everyday existential threats. One ongoing study researches ways to protect the internet from catastrophic attack, while another focuses on creating a science fiction–like global immune system against terrorism that would find and stop terrorists before they commit any violent acts. While the Lifeboat Foundation’s budget is considerably smaller than those of mega think tanks, it has a decent amount of funding: the Lifeboat Fund has already received more than $500,000. More importantly for the scientific community, Lifeboat is one of the few outlets where accomplished thinkers can focus on projects that will matter 200 years from now.