A global catastrophic risk is a hypothetical future event with the potential to seriously damage human well-being on a global scale.[2] Some events could destroy or cripple modern civilization. Other, even more severe, events could cause human extinction.[3] These are referred to as existential risks.

Researchers have difficulty studying human extinction directly, since humanity has never been destroyed before.[7] While this does not mean that it will not be destroyed in the future, it does make modelling existential risks difficult, due in part to survivorship bias.

The philosopher Nick Bostrom classifies risks according to their scope and intensity.[6] He considers risks that are at least "global" in scope and "endurable" in intensity to be global catastrophic risks. Those that are at least "trans-generational" (affecting all future generations) in scope and "terminal" in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on Earth, humanity could still potentially recover. An existential risk, on the other hand, is one that either destroys humanity entirely or prevents any chance of civilization recovering. Bostrom considers existential risks to be far more significant.[9]

Bostrom identifies four types of existential risk. "Bangs" are sudden catastrophes, which may be accidental or deliberate. He thinks the most likely sources of bangs are malicious use of nanotechnology, nuclear war, and the possibility that the universe is a simulation that will end. "Crunches" are scenarios in which humanity survives but civilization is irreversibly destroyed. The most likely causes of this, he believes, are exhaustion of natural resources, a stable global government that prevents technological progress, or dysgenic pressures that lower average intelligence. "Shrieks" are undesirable futures. For example, if a single mind enhanced its powers by merging with a computer, it could dominate human civilization, an outcome that could be very undesirable. Bostrom believes this is the most likely shriek scenario, followed by flawed superintelligence and a repressive totalitarian regime. "Whimpers" are the gradual decline of human civilization or of current values. He thinks the most likely cause would be evolution changing moral preferences, followed by extraterrestrial invasion.[3]

The following are examples of individuals and institutions that have made probability estimates for existential events. Some risks, such as that from asteroid impact, with a one-in-a-million chance of causing humanity's extinction in the next century,[11] have had their probabilities estimated with considerable precision (though some scholars claim the actual rate of large impacts could be much higher than originally calculated).[12] Similarly, the frequency of volcanic eruptions of sufficient magnitude to cause catastrophic climate change, like the Toba eruption, which may have nearly caused the extinction of the human race,[13] has been estimated at about 1 in every 50,000 years.[14] The relative danger posed by other threats is much more difficult to calculate. In 2008, a group of "experts on different global catastrophic risks" at the Global Catastrophic Risk Conference at the University of Oxford suggested a 19% chance of human extinction over the next century. However, the conference report cautions that the methods used to average responses to the informal survey are suspect, due to the treatment of non-responses. The probabilities estimated for various causes are summarized below.
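Converting an estimated event frequency, like the Toba figure above, into a probability over a given horizon is simple arithmetic. Below is a minimal Python sketch; it assumes events arrive independently at a constant average rate (a Poisson assumption that is itself a simplification), and the numbers are purely illustrative of the figures quoted above.

```python
import math

def probability_within(years: float, mean_interval: float) -> float:
    """Chance of at least one event within `years`, assuming events arrive
    independently at a constant average rate (a Poisson process)."""
    return 1.0 - math.exp(-years / mean_interval)

# A Toba-scale eruption, estimated at roughly one per 50,000 years:
print(probability_within(100, 50_000))  # ~0.002, i.e. about 0.2% per century
```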

There are significant methodological challenges in estimating these risks with precision. Most attention has been given to risks to human civilization over the next 100 years, but forecasting over this length of time is difficult. The types of threats posed by nature may prove relatively constant, though new risks could be discovered. Anthropogenic threats, however, are likely to change dramatically with the development of new technology; while volcanoes have been a threat throughout history, nuclear weapons have only been an issue since the 20th century. Historically, the ability of experts to predict the future over these timescales has proved very limited. Man-made threats such as nuclear war or nanotechnology are harder to predict than natural threats, due to the inherent methodological difficulties of the social sciences. In general, it is hard to estimate the magnitude of the risk from these or other dangers, especially as both international relations and technology can change rapidly.

Existential risks pose unique challenges to prediction, even more than other long-term events, because of observation selection effects. Unlike with most events, the absence of complete extinction events in the past is not evidence against their likelihood in the future, because every world that has experienced such an extinction event has no observers; so, regardless of their frequency, no civilization observes existential risks in its history.[7] These anthropic issues can be avoided by looking at evidence that does not have such selection effects, such as asteroid impact craters on the Moon, or by directly evaluating the likely impact of new technology.[8]
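The selection effect can be made concrete with a toy simulation. The following is a minimal Python sketch under assumed, purely illustrative numbers (a 1-in-1,000 annual extinction risk across 10,000 simulated worlds): every world that still has observers reports a history free of extinction events, whatever the true rate.

```python
import random

ANNUAL_RISK = 0.001    # assumed "true" extinction probability per year
YEARS = 1_000
WORLDS = 10_000

survivors = 0
for _ in range(WORLDS):
    # A world keeps its observers only if extinction fails in every year.
    if all(random.random() > ANNUAL_RISK for _ in range(YEARS)):
        survivors += 1

# Every surviving world's record contains zero extinction events, so its
# observers cannot estimate ANNUAL_RISK from their own history.
print(f"{survivors} of {WORLDS} worlds survive, all with extinction-free histories")
```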

In 1950, the Italian physicist Enrico Fermi wondered why humans had not yet encountered extraterrestrial civilizations. He asked, “Where is everybody?”[16] Given the age of the universe and its vast number of stars, extraterrestrial life should be common unless the Earth is highly atypical. So why was there no evidence of extraterrestrial civilizations? This question is known as the Fermi paradox.

One proposed explanation, though not widely accepted, for why humans have not yet encountered intelligent life from other planets (aside from the possibility that it does not exist) is existential catastrophe: other potentially intelligent civilizations may have been wiped out before humans could find them, or before they could find Earth.[7][17][18]

Some scholars have strongly favored reducing existential risk on the grounds that it greatly benefits future generations. Derek Parfit argues that extinction would be a great loss because our descendants could potentially survive for a billion years before the expansion of the Sun makes the Earth uninhabitable.[19][20] Bostrom argues that there is even greater potential in colonizing space. If future humans colonize space, they may be able to support a very large number of people on other planets, potentially lasting for trillions of years.[9] Therefore, reducing existential risk by even a small amount would have a very significant impact on the expected number of people that will exist in the future.
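The structure of this argument is simple expected-value arithmetic, sketched below in Python. Both figures are illustrative assumptions in the spirit of the argument, not sourced estimates.

```python
POTENTIAL_FUTURE_LIVES = 1e16   # assumed number of people who could yet exist
RISK_REDUCTION = 0.0001         # extinction probability shaved off (0.01 points)

# Expected lives gained = change in survival probability x lives at stake.
expected_lives_gained = RISK_REDUCTION * POTENTIAL_FUTURE_LIVES
print(f"{expected_lives_gained:.0e} expected lives")  # 1e+12
```

Even a tiny reduction in risk, multiplied by an astronomical number of potential lives, yields an enormous expected benefit; this is the leverage the argument relies on.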

Little has been written arguing against these positions, but some scholars would disagree. Exponential discounting might make these future benefits much less significant. Gaverick Matheny has argued that such discounting is inappropriate when assessing the value of existential risk reduction.[11]
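To see why discounting dominates this calculation, note that exponential discounting multiplies a benefit t years away by the factor (1 + r)^-t, which drives even astronomical far-future benefits toward zero. A minimal Python sketch, with an assumed 3% annual discount rate and the same illustrative 10^16 figure as above:

```python
def present_value(value: float, rate: float, years: float) -> float:
    """Exponentially discounted value of a benefit `years` in the future."""
    return value / (1 + rate) ** years

# At an assumed 3% annual rate, a benefit 1,000 years away shrinks by a
# factor of about 7e12, so 1e16 future lives weigh in at only ~1,500.
print(present_value(1e16, 0.03, 1000))
```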

Some economists have discussed the importance of global catastrophic risks, though not existential risks. Martin Weitzman argues that most of the expected economic damage from climate change may come from the small chance that warming greatly exceeds the mid-range expectations, resulting in catastrophic damage.[4] Richard Posner has argued that we are doing far too little, in general, about small, hard-to-estimate risks of large-scale catastrophes.[21]

Scope insensitivity influences how bad people consider the extinction of the human race to be. For example, when people are motivated to donate money to altruistic causes, the quantity they’re willing to give does not increase linearly with the magnitude of the issue: people are as concerned about 200,000 birds getting stuck in oil as they are about 2,000.[22] Similarly, people are often more concerned about threats to individuals than to larger groups.[23]

Some potential existential risks are consequences of man-made technologies.

In 2012, Cambridge University created the Cambridge Project for Existential Risk, which examines threats to humankind posed by developing technologies.[25] The stated aim is to establish within the university a multidisciplinary research centre, the Centre for the Study of Existential Risk, dedicated to the scientific study and mitigation of existential risks of this kind.[25]

The Cambridge Project states that the "greatest threats" to the human species are man-made: artificial intelligence, global warming, nuclear war and rogue biotechnology.[26]

It has been suggested that learning computers that rapidly become superintelligent may take unforeseen actions, or that robots would out-compete humanity (one technological singularity scenario).[27] Because of its exceptional scheduling and organizational capability, and the range of novel technologies it could develop, it is possible that the first superintelligence to emerge on Earth could rapidly become matchless and unrivaled: conceivably, it would be able to bring about almost any possible outcome and to foil virtually any attempt to prevent it from achieving its objectives.[28] It could eliminate, if it chose, any challenging rival intellects; alternatively, it might manipulate or persuade them to change their behavior towards its own interests, or it may merely obstruct their attempts at interference.[28] In his book Superintelligence: Paths, Dangers, Strategies, Bostrom defines this as the control problem.[29]

Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity"[30] and suggests that it may be somewhat, or possibly very, dangerous for humans.[31] These issues are discussed by a philosophy called Singularitarianism.

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could evolve to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[32] In 2009, experts attended a conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls.[30] Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionality and autonomy, and which pose some inherent concerns.[33][34][35]

Eliezer Yudkowsky believes that risks from artificial intelligence are harder to predict than any other known risks. He also argues that research into artificial intelligence is biased by anthropomorphism: since people base their judgments of artificial intelligence on their own experience, he claims that they underestimate the potential power of AI. He distinguishes between risks due to technical failure of AI, in which flawed algorithms prevent the AI from carrying out its intended goals, and philosophical failure, in which the AI is programmed to realize a flawed ideology.[36]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[37] There are also concerns about technology which might allow some armed robots to be controlled mainly by other robots.[38] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[39][40] One researcher states that autonomous robots might be more humane, as they could make decisions more effectively. However, other experts question this.[41]

On the other hand, a "friendly" AI could help reduce existential risk by developing technological solutions to threats.[36]

In PBS's Off Book, Gary Marcus asks "what happens if (AIs) decide we are not useful anymore?" Marcus argues that AI cannot, and should not, be banned, and that "the sensible thing to do" is to "start thinking now" about AI ethics.[42]

Many nanoscale technologies are in development or currently in use.[43] The only one that appears to pose a significant global catastrophic risk is molecular manufacturing, a technique that would make it possible to build complex structures at atomic precision.[44] Molecular manufacturing requires significant advances in nanotechnology, but once achieved it could produce highly advanced products at low cost and in large quantities in nanofactories weighing a kilogram or more.[43][44] When nanofactories gain the ability to produce other nanofactories, production may be limited only by relatively abundant factors such as input materials, energy and software.[43]
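The force of the self-replication point is exponential growth: if each nanofactory can copy itself in a fixed time, capacity doubles every cycle until an input runs out. A minimal Python sketch under assumed, purely illustrative parameters (one copy per factory per week, feedstock capping the total at one million units):

```python
# Illustrative assumptions: one self-copy per factory per week, and a
# feedstock supply that caps the population at one million factories.
FEEDSTOCK_CAP = 1_000_000
factories, weeks = 1, 0

while factories < FEEDSTOCK_CAP:
    factories *= 2   # each existing factory builds one copy of itself
    weeks += 1

print(weeks)  # 20: within 20 cycles, inputs (not machinery) become the limit
```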

Molecular manufacturing could be used to cheaply produce, among many other products, highly advanced, durable weapons.[43] Equipped with compact computers and motors, these could be increasingly autonomous and have a large range of capabilities.[43]

Phoenix and Treder classify catastrophic risks posed by nanotechnology into three categories: (1) from augmenting the development of other technologies, such as AI and biotechnology; (2) from enabling the mass-production of potentially dangerous products that cause risk dynamics (such as arms races), depending on how they are used; (3) from uncontrolled self-perpetuating processes with destructive effects. At the same time, nanotechnology may be used to alleviate several other global catastrophic risks.[43]

Several researchers state that the bulk of risk from nanotechnology comes from its potential to lead to war, arms races and destructive global government.[43][45][46] Several reasons have been suggested why the availability of nanotech weaponry may, with significant likelihood, lead to unstable arms races (compared to, e.g., nuclear arms races): (1) a large number of players may be tempted to enter the race, since the threshold for doing so is low;[43] (2) the ability to make weapons with molecular manufacturing will be cheap and easy to hide;[43] (3) therefore, lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes;[43][47] (4) molecular manufacturing may reduce dependency on international trade,[43] a potential peace-promoting factor;[48] (5) wars of aggression may pose a smaller economic threat to the aggressor, since manufacturing is cheap and humans may not be needed on the battlefield.[43]

Since self-regulation by all state and non-state actors seems hard to achieve,[49] measures to mitigate war-related risks have mainly been proposed in the area of international cooperation.[43][50] International infrastructure may be expanded, giving more sovereignty to the international level; this could help coordinate efforts for arms control.[51] International institutions dedicated specifically to nanotechnology (perhaps analogous to the International Atomic Energy Agency, IAEA) or to general arms control may also be designed.[50] Parties may also jointly pursue differential technological progress on defensive technologies, a policy that players should usually favour.[43] The Center for Responsible Nanotechnology also suggests some technical restrictions.[52] Improved transparency regarding technological capabilities may be another important facilitator of arms control.[53]

Grey goo is another catastrophic scenario, proposed by Eric Drexler in his 1986 book Engines of Creation[54] and a recurring theme in mainstream media and fiction.[55][56] The scenario involves tiny self-replicating robots that consume the entire biosphere, using it as a source of energy and building blocks. Nanotech experts, including Drexler, now discredit the scenario. According to Chris Phoenix, "So-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident".[57]

Biotechnology can pose a global catastrophic risk in the form of natural pathogens or novel, engineered ones. Such a catastrophe may be brought about by use in warfare, by terrorist attacks or by accident.[58] Terrorist applications of biotechnology have historically been infrequent.[58] To what extent this is due to a lack of capabilities or of motivation is not resolved.[58]

Exponential growth has been observed in the biotechnology sector and Noun and Chyba predict that this will lead to major increases in biotechnological capabilities in the coming decades.[58] They argue that risks from biological warfare and bioterrorism are distinct from nuclear and chemical threats because biological pathogens are easier to mass-produce and their production is hard to control (especially as the technological capabilities are becoming available even to individual users).[58]

Given current developments, more risk from novel, engineered pathogens is to be expected in the future.[58] It has been hypothesized that there is an upper bound on the virulence (deadliness) of naturally occurring pathogens,[59] but pathogens may be intentionally or unintentionally genetically modified to change their virulence and other characteristics.[58] For example, a group of Australian researchers unintentionally changed characteristics of the mousepox virus while trying to develop a virus to sterilize rodents;[58] the modified virus became highly lethal even in vaccinated and naturally resistant mice.[46][60] The technological means to genetically modify virus characteristics are likely to become more widely available in the future if not properly regulated.[58]

The scenarios that have been explored most frequently are nuclear warfare and doomsday devices. Although the probability of a nuclear war in any given year is slim, Professor Martin Hellman has described it as inevitable in the long run; unless the probability approaches zero, there will inevitably come a day when civilization's luck runs out.[61] During the Cuban Missile Crisis, President Kennedy estimated the odds of nuclear war at "somewhere between one out of three and even".[62] The United States and Russia have a combined arsenal of 15,315 nuclear weapons,[63] and there are an estimated 16,400 nuclear weapons in existence worldwide.[64]
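Hellman's inevitability point is a matter of compounding: any fixed nonzero annual probability accumulates toward certainty over enough years. A minimal Python sketch, using an assumed 1% annual probability chosen purely for illustration (it is not Hellman's figure), and treating years as independent:

```python
def cumulative_risk(annual_probability: float, years: int) -> float:
    """Chance of at least one war within `years`, assuming independent years."""
    return 1 - (1 - annual_probability) ** years

# With an assumed 1% chance per year (illustrative, not a sourced estimate):
print(cumulative_risk(0.01, 100))   # ~0.63 within a century
print(cumulative_risk(0.01, 500))   # ~0.99 within five centuries
```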

While popular perception sometimes takes nuclear war to be "the end of the world", experts assign a low probability to human extinction from nuclear war.[65][66] In 1982, Brian Martin estimated that a US–Soviet nuclear exchange might kill 400–450 million people directly, and perhaps several hundred million more through follow-on consequences.[65]

Nuclear war could nonetheless yield unprecedented human death tolls and habitat destruction. Detonating large numbers of nuclear weapons would have a long-term effect on the climate, causing cold weather and reduced sunlight[67] that could generate significant upheaval in advanced civilizations.[68]

Global warming refers to the warming caused by human technology since at least the 19th century. Global warming reflects abnormal variations in the expected climate within the Earth's atmosphere and subsequent effects on other parts of the Earth. Projections of future climate change suggest further global warming, sea level rise, and an increase in the frequency and severity of some extreme weather events and weather-related disasters. Effects of global warming include loss of biodiversity, stresses to existing food-producing systems, and increased spread of infectious diseases such as malaria.

It has been suggested that runaway global warming (runaway climate change) might cause Earth to become searingly hot, like Venus. In less extreme scenarios, it could cause the end of civilization as we know it.[69]

The 20th century saw a rapid increase in human population due to medical developments and a massive increase in agricultural productivity[75] brought about by the Green Revolution.[76] Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The Green Revolution helped food production keep pace with worldwide population growth, or arguably enabled that growth. The energy for the Green Revolution was provided by fossil fuels, in the form of fertilizers (natural gas), pesticides (oil) and hydrocarbon-fueled irrigation.[77] David Pimentel, professor of ecology and agriculture at Cornell University, and Mario Giampietro, senior researcher at the National Research Institute on Food and Nutrition (INRAN), in their study Food, Land, Population and the U.S. Economy, place the maximum U.S. population for a sustainable economy at 200 million. To achieve a sustainable economy and avert disaster, the study says, the United States must reduce its population by at least one-third, and world population will have to be reduced by two-thirds.[78]

The authors of this study believe that the agricultural crisis they describe will begin to take effect only after 2020, and will not become critical until 2050. Geologist Dale Allen Pfeiffer claims that the coming decades could see spiraling food prices without relief, and massive starvation on a global level such as has never been experienced before.[79][80]

Wheat is humanity's third most-produced cereal. Extant fungal infections such as Ug99[81] (a kind of stem rust) can cause 100% crop losses in most modern varieties. Little or no treatment is possible, and the infection spreads on the wind. Should the world's large grain-producing areas become infected, there would be a crisis in wheat availability, leading to price spikes and shortages in other food products.[82]

Nick Bostrom suggested that in the pursuit of knowledge, humanity might inadvertently create a device that could destroy Earth and the Solar System.[83] Investigations in nuclear and high-energy physics could create unusual conditions with catastrophic consequences. For example, scientists worried that the first nuclear test might ignite the atmosphere.[84][85] More recently, others worried that the RHIC[86] or the Large Hadron Collider might start a chain-reaction global disaster involving black holes, strangelets, or false vacuum states. These particular concerns have been refuted,[87][88][89][90] but the general concern remains.

The death toll from a pandemic equals the virulence (deadliness) of the pathogen or pathogens, multiplied by the number of people eventually infected. It has been hypothesized that there is an upper limit to the virulence of naturally evolved pathogens.[59] This is because a pathogen that quickly kills its hosts might not have enough time to spread to new ones, while one that kills its hosts more slowly or not at all will allow carriers more time to spread the infection, and thus will likely out-compete a more lethal species or strain.[92] This simple model predicts that if virulence and transmission are not linked in any way, pathogens will evolve towards low virulence and rapid transmission. However, this assumption is not always valid, and in more complex models, where the level of virulence and the rate of transmission are related, high levels of virulence can evolve.[93] The level of virulence that is possible is instead limited by the existence of complex populations of hosts with different susceptibilities to infection, or by some hosts being geographically isolated.[59] The size of the host population and competition between different strains of pathogens can also alter virulence.[94] Notably, a pathogen that infects humans only as a secondary host, and usually infects another species (a zoonosis), may face little constraint on its virulence in people, since infection there is an accidental event and its evolution is driven by events in another species.[95] There are numerous historical examples of pandemics[96] that have had a devastating effect on a large number of people, which makes the possibility of a global pandemic a realistic threat to human civilization.
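The trade-off logic in this paragraph can be captured in a toy fitness model. The following Python sketch proxies pathogen fitness by its basic reproduction number, using the common textbook form R0 = transmission / (recovery + virulence); all parameter values are assumed for illustration, not drawn from any empirical study.

```python
RECOVERY = 0.2   # assumed host recovery rate (illustrative)

def r0_unlinked(virulence: float, beta: float = 1.0) -> float:
    # Transmission independent of virulence: R0 only falls as virulence
    # rises, so selection pushes virulence toward zero.
    return beta / (RECOVERY + virulence)

def r0_linked(virulence: float) -> float:
    # Transmission grows (with diminishing returns) as virulence rises, so
    # an intermediate, possibly high, virulence maximizes R0.
    beta = 2.0 * virulence / (0.5 + virulence)
    return beta / (RECOVERY + virulence)

best = max((v / 100 for v in range(1, 500)), key=r0_linked)
print(best)  # ~0.32: a nonzero optimum once virulence buys transmission
```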

Climate change refers to Earth's natural variations in climate over time. The climate has changed slowly, as during ice ages and the warmer periods when palm trees grew in Antarctica. It has been hypothesized that there was also a period called "snowball Earth", when all the oceans were covered in a layer of ice. These global climatic changes occurred slowly, before the rise of human civilization about 10,000 years ago, near the end of the last major ice age, when the climate became more stable. However, abrupt climate change on the decade timescale has occurred regionally. Since civilization originated during a period of stable climate, a natural variation into a new climate regime (colder or hotter) could pose a threat to civilization.

Twelve ice ages are known to have occurred in the history of the Earth. Further ice ages are possible at intervals of 40,000–100,000 years. An ice age would have a serious impact on civilization, because vast areas of land (mainly in North America, Europe, and Asia) could become uninhabitable. It would still be possible to live in the tropical regions, but with possible loss of humidity and water. Currently, the world is in an interglacial period within a much older glacial event. The last glacial expansion ended about 10,000 years ago, and all civilizations evolved after this. Scientists do not predict that a natural ice age will occur anytime soon.

A geological event such as massive flood basalt volcanism or the eruption of a supervolcano[97] could lead to a so-called volcanic winter, similar to a nuclear winter. One such event, the Toba eruption,[98] occurred in Indonesia about 71,500 years ago. According to the Toba catastrophe theory,[99] the event may have reduced human populations to only a few tens of thousands of individuals. Yellowstone Caldera is another such supervolcano, having undergone 142 or more caldera-forming eruptions in the past 17 million years.[100] A massive volcanic eruption would inject extraordinary amounts of volcanic dust and of toxic and greenhouse gases into the atmosphere, with serious effects on global climate: towards extreme global cooling (volcanic winter in the short term, and ice age in the long term), or towards global warming, if greenhouse gases were to prevail.

When the supervolcano at Yellowstone last erupted, 640,000 years ago, the magma and ash ejected from the caldera covered most of the United States west of the Mississippi River and part of northeastern Mexico.[101] Another such eruption could threaten civilization.

Such an eruption could also release large amounts of gases that could alter the balance of the planet's carbon dioxide and cause a runaway greenhouse effect, or enough pyroclastic debris and other material might be thrown into the atmosphere to partially block out the sun and cause a volcanic winter, as happened in 1816 following the eruption of Mount Tambora, the so-called Year Without a Summer. Such an eruption might cause the immediate deaths of millions of people several hundred miles from the eruption, and perhaps billions of deaths worldwide,[102] due to failure of the monsoon, resulting in major crop failures and starvation on a massive scale.[102]

A much more speculative concept is the Verneshot: a hypothetical volcanic eruption caused by the buildup of gas deep underneath a craton. Such an event may be forceful enough to launch an extreme amount of material from the crust and mantle into a sub-orbital trajectory.

Another possibility is a megatsunami. A megatsunami could, for example, destroy the entire East Coast of the United States. The coastal areas of the entire world could also be flooded by the collapse of the West Antarctic Ice Sheet.[103] While none of these scenarios is likely to destroy humanity completely, they could regionally threaten civilization. There have been two recent high-fatality tsunamis, following the 2011 Tōhoku earthquake and the 2004 Indian Ocean earthquake, although neither was large enough to be considered a megatsunami. A megatsunami could also have astronomical origins, such as an asteroid impact in an ocean.

The magnetic poles of the Earth have shifted many times in geologic history. The duration of such a shift is still debated. Some theories hold that, during a shift, the magnetic field around the Earth would be weakened or nonexistent, threatening electrical civilization, or even numerous species, by allowing radiation from the sun, especially solar flares or cosmic radiation, to reach the surface. However, these theories have been somewhat discredited, as statistical analysis shows no evidence of a correlation between past reversals and past extinctions.[104][105]

REP. STEWART: ... are we technologically capable of launching something that could intercept [an asteroid]? ... DR. A'HEARN: No. If we had spacecraft plans on the books already, that would take a year ... I mean a typical small mission ... takes four years from approval to start to launch ...

Several asteroids have collided with Earth in recent geological history. The Chicxulub asteroid, for example, is theorized to have caused the extinction of the non-avian dinosaurs 66 million years ago, at the end of the Cretaceous. If such an object struck Earth, it could have a serious impact on civilization. It is even possible that humanity would be completely destroyed; for this to occur, the asteroid would need to be at least 1 km (0.62 mi) in diameter, and probably between 3 and 10 km (2–6 miles).[107] Asteroids with a 1 km diameter have impacted the Earth on average once every 500,000 years.[107] Larger asteroids are less common. Small near-Earth asteroids are regularly observed.

In 1.4 million years, the star Gliese 710 is expected to cause an increase in the number of meteoroids in the vicinity of Earth when it passes within 1.1 light-years of the Sun, perturbing the Oort cloud. Dynamic models by García-Sánchez predict a 5% increase in the rate of impacts.[108] Objects perturbed from the Oort cloud take millions of years to reach the inner Solar System.

A number of astronomical threats have been identified. A close encounter between the Solar System and a massive object such as a star, a large planet or a black hole could be catastrophic. In April 2008, it was announced that two simulations of long-term planetary movement, one at the Paris Observatory and the other at the University of California, Santa Cruz, indicate a 1% chance that Mercury's orbit could be made unstable by Jupiter's gravitational pull sometime during the lifespan of the Sun. Were this to happen, the simulations suggest that a collision with Earth could be one of four possible outcomes (the others being Mercury colliding with the Sun, colliding with Venus, or being ejected from the Solar System altogether). If Mercury were to collide with Earth, all life on Earth could be obliterated: an asteroid 15 km wide is believed to have caused the extinction of the non-avian dinosaurs, whereas Mercury is 5,000 km in diameter.[112]

A similar threat is a hypernova, produced when a hypergiant star explodes and then collapses, sending vast amounts of radiation sweeping across hundreds of light-years. Hypernovae have never been observed; however, a hypernova may have been the cause of the Ordovician–Silurian extinction events. The nearest hypergiant is Eta Carinae, approximately 8,000 light-years distant.[115] The hazards from various astrophysical radiation sources were reviewed in 2011.[116]

A solar superstorm, a drastic and unusual decrease or increase in the Sun's power output, could have severe consequences for life on Earth (see solar flare).

If our universe lies within a false vacuum, a bubble of lower-energy vacuum could come to exist by chance or otherwise in our universe, and catalyze the conversion of our universe to a lower energy state in a volume expanding at nearly the speed of light, destroying all that we know without forewarning.[118] Such an occurrence is called a vacuum metastability event.

The belief that the Mayan civilization's Long Count calendar ended abruptly on December 21, 2012 was a misconception due to the Mayan practice of using only five places in Long Count calendar inscriptions. On some monuments the Mayans calculated dates far into the past and future, but there is no end-of-the-world date. December 21, 2012 did mark the end of a piktun, a cycle of 13 b'ak'tuns of 144,000 days each. A piktun marks the end of a 1,872,000-day, or approximately 5,125-year, period and is a significant event in the Mayan calendar. However, there is no historical or scientific evidence that the Mayans believed it would be a doomsday; some believe it was simply the beginning of another piktun.[119]
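The period lengths here follow from direct arithmetic: thirteen b'ak'tuns of 144,000 days each give the 1,872,000-day cycle, roughly 5,125 solar years. A one-line check in Python:

```python
BAKTUN_DAYS = 144_000
cycle_days = 13 * BAKTUN_DAYS
print(cycle_days)             # 1,872,000 days
print(cycle_days / 365.2425)  # ~5,125 years
```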

The cataclysmic pole shift hypothesis was formulated in 1872. Revisited repeatedly in the second half of the 20th century, it proposes that the axis of the Earth could shift with respect to the crust extremely rapidly, causing massive earthquakes, tsunamis, and damaging local climate changes. The hypothesis is contradicted by the mainstream scientific interpretation of geological data, which indicates that true polar wander does occur, but only very slowly, over millions of years. This hypothesis is sometimes confused with the accepted theory of geomagnetic reversal, in which the magnetic poles reverse but which has no influence on the axial poles or the rotation of the solid Earth.

The Svalbard Global Seed Vault is buried 400 feet inside a mountain in the Arctic and holds over ten tons of seeds from all over the world; 100 million seeds from more than 100 countries were placed inside as a precaution to preserve the world’s crops. A prepared box of rice originating from 104 countries was the first to be deposited in the vault, where seeds are kept at −18 °C (0 °F). Thousands more plant species will be added as organizers attempt to get specimens of every agricultural plant in the world. Cary Fowler, executive director of the Global Crop Diversity Trust, said that by preserving as many varieties as possible, the options open to farmers, scientists and governments are maximized. “The opening of the seed vault marks a historic turning point in safeguarding the world’s crop diversity,” he said. Even if the permafrost starts to melt, the seeds will be safe inside the vault for up to 200 years. Some of the seeds will remain viable for a millennium or more, including barley, which can last 2,000 years, wheat (1,700 years), and sorghum (almost 20,000 years).[122]

^ Ventrudo, Brian (5 June 2009). "So Where Is ET, Anyway?". Universe Today. Retrieved 10 March 2014. "Some believe [the Fermi Paradox] means advanced extraterrestrial societies are rare or nonexistent. Others suggest they must destroy themselves before they move on to the stars."

^ Lamb, Gregory M. (17 February 2010). "New role for robot warriors; Drones are just part of a bid to automate combat. Can virtual ethics make machines decisionmakers?". Christian Science Monitor.

^"The Rise of Artificial Intelligence". PBSOff Book. 11 July 2013. Event occurs at 6:29-7:26. Retrieved 24 October 2013. ...what happens if (AIs) decide we are not useful anymore? I think we do need to think about how to build machines that are ethical. The smarter the machines gets, the more important that is... [T]here are so many advantages to AI in terms of human health, in terms of education and so forth that I would be reluctant to stop it. But even if I did think we should stop it, I don't think it's possible... if, let's say, the US Government forbade development in kind of the way they did development of new stem cell lines, that would just mean that the research would go offshore, it wouldn't mean it would stop. The more sensible thing to do is start thinking now about these questions... I don't think we can simply ban it.