One argument is as follows: the human species currently dominates other species because the human brain has some distinctive capabilities that the brains of other animals lack. If AI surpasses humanity in general intelligence and becomes "superintelligent", then this new superintelligence could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]

The severity of AI risk is widely debated, and hinges in part on differing scenarios for future progress in computer science.[5] Once the exclusive domain of science fiction, concerns about superintelligence started to go mainstream in the 2010s, and were popularized by public figures such as Stephen Hawking, Bill Gates, and Elon Musk.[6]

One source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. In one scenario, the first computer program able to broadly match the effectiveness of an AI researcher rewrites its own algorithms and doubles its speed or capabilities after six months of massively parallel processing time. The second-generation program is expected to take three months to perform a similar chunk of work, on average; in practice, doubling its own capabilities may take longer if it experiences a mini-"AI winter", or quicker if it undergoes a miniature "AI spring" in which ideas from the previous generation are especially easy to mutate into the next. In this scenario the system undergoes an unprecedentedly large number of generations of improvement in a short time interval, jumping from subhuman performance in many areas to superhuman performance in all relevant areas.[1][7] More broadly, examples like arithmetic and Go show that progress from human-level AI to superhuman ability is sometimes extremely rapid.[8]
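The arithmetic behind this scenario is easy to sketch. In the toy calculation below, only the first two doubling times (six months, then three) come from the scenario above; the assumption that every later generation again halves the doubling time is an illustrative choice of this sketch, not a claim from the cited sources.

```python
# Toy model of a recursive self-improvement cascade (illustrative assumptions only).
months_for_next_doubling = 6.0   # generation 0 needs 6 months, as in the scenario
capability = 1.0                 # 1.0 ~ one human-level AI researcher
elapsed_months = 0.0

for generation in range(8):
    elapsed_months += months_for_next_doubling
    capability *= 2
    months_for_next_doubling /= 2  # assumed: each generation halves the doubling time
    print(f"gen {generation}: {elapsed_months:5.2f} months elapsed, capability x{capability:.0f}")

# Under these assumptions, eight doublings (a 256-fold gain) fit inside the first year;
# a mini "AI winter" would stretch the schedule, and an "AI spring" would compress it further.
```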

A second source of concern is that controlling a superintelligent machine, or even instilling it with human-compatible values, may be an even harder problem than naïvely supposed. Some AI researchers believe that a superintelligence would naturally resist attempts to shut it off, and that preprogramming a superintelligence with complicated human values may be an extremely difficult technical task.[1][7] In contrast, skeptics such as Facebook's Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[9]

Artificial Intelligence: A Modern Approach, the standard undergraduate AI textbook,[10][11] assesses that superintelligence "might mean the end of the human race": "Almost any technology has the potential to cause harm in the wrong hands, but with (superintelligence), we have the new problem that the wrong hands might belong to the technology itself."[1] Even if the system designers have good intentions, two difficulties are common to both AI and non-AI computer systems:[1]

The system's implementation may contain routine but catastrophic bugs that go unnoticed at first. An analogy is space probes: despite the knowledge that bugs in expensive space probes are hard to fix after launch, engineers have historically not been able to prevent catastrophic bugs from occurring.[8][12]

No matter how much time is put into pre-deployment design, a system's specifications often result in unintended behavior the first time it encounters a new scenario. For example, Microsoft's Tay behaved inoffensively during pre-deployment testing, but was too easily baited into offensive behavior when interacting with real users.[9]

AI systems uniquely add a third difficulty: the problem that even given "correct" requirements, bug-free implementation, and initial good behavior, an AI system's dynamic "learning" capabilities may cause it to "evolve into a system with unintended behavior", even without the stress of new unanticipated external scenarios. An AI may partly botch an attempt to design a new generation of itself and accidentally create a successor AI that is more powerful than itself but that no longer maintains the human-compatible moral values preprogrammed into the original AI. For a self-improving AI to be completely safe, it would not only need to be "bug-free", but would need to be able to design successor systems that are also "bug-free".[1][13]

All three of these difficulties become catastrophes rather than nuisances in any scenario where the superintelligence labeled as "malfunctioning" correctly predicts that humans will attempt to shut it off, and successfully deploys its superintelligence to outwit such attempts.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do.

In 1965, I. J. Good originated the concept now known as an "intelligence explosion":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.[15]

In 2009, experts attended a private conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They concluded that self-awareness as depicted in science fiction is probably unlikely, but that there were other potential hazards and pitfalls. The New York Times summarized the conference's view as 'we are a long way from Hal, the computer that took over the spaceship in "2001: A Space Odyssey"'.[21]

A superintelligent machine would be as alien to humans as human thought processes are to cockroaches. Such a machine may not have humanity's best interests at heart; it is not obvious that it would even care about human welfare at all. If superintelligent AI is possible, and if it is possible for a superintelligence's goals to conflict with basic human values, then AI poses a risk of human extinction. A "superintelligence" (a system that exceeds the capabilities of humans in every relevant endeavor) can outmaneuver humans any time its goals conflict with human goals; therefore, unless the superintelligence decides to allow humanity to coexist, the first superintelligence to be created will inexorably result in human extinction.[4][27]

Bostrom and others argue that, from an evolutionary perspective, the gap from human to superhuman intelligence may be small.[4][28]

There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains; therefore superintelligence is physically possible.[23][24] In addition to potential algorithmic improvements over human brains, a digital brain can be many orders of magnitude larger and faster than a human brain, which was constrained in size by evolution to be small enough to fit through a birth canal.[8] The emergence of superintelligence, if or when it occurs, may take the human race by surprise, especially if some kind of intelligence explosion occurs.[23][24] Examples like arithmetic and Go show that machines have already reached superhuman levels of competency in certain domains, and that this superhuman competence can follow quickly after human-par performance is achieved.[8] One hypothetical intelligence explosion scenario could occur as follows: an AI gains expert-level capability at certain key software engineering tasks. (It may initially lack human or superhuman capabilities in other domains not directly relevant to engineering.) Due to its capability to recursively improve its own algorithms, the AI quickly becomes superhuman; just as human experts can eventually creatively overcome "diminishing returns" by deploying various human capabilities for innovation, so too can the expert-level AI use either human-style capabilities or its own AI-specific capabilities to power through new creative breakthroughs.[29][30] The AI then possesses intelligence far surpassing that of the brightest and most gifted human minds in practically every relevant field, including scientific creativity, strategic planning, and social skills. Just as the current-day survival of the gorillas is dependent on human decisions, so too would human survival depend on the decisions and goals of the superhuman AI.[4][27]

Some humans have a strong desire for power; others have a strong desire to help less fortunate humans. The former is a likely attribute of any sufficiently intelligent system; the latter is not. Almost any AI, no matter its programmed goal, would rationally prefer to be in a position where nobody else can switch it off without its consent: A superintelligence will naturally gain self-preservation as a subgoal as soon as it realizes that it can't achieve its goal if it's shut off.[31][32][33] Unfortunately, any compassion for defeated humans whose cooperation is no longer necessary would be absent in the AI, unless somehow preprogrammed in. A superintelligent AI will not have a natural drive to aid humans, for the same reason that humans have no natural desire to aid AI systems that are of no further use to them. (Another analogy is that humans seem to have little natural desire to go out of their way to aid viruses, termites, or even gorillas.) Once in charge, the superintelligence will have little incentive to allow humans to run around free and consume resources that the superintelligence could instead use for building itself additional protective systems "just to be on the safe side" or for building additional computers to help it calculate how to best accomplish its goals.[1][9][31]

Thus, the argument concludes, it is likely that someday an intelligence explosion will catch humanity unprepared, and that such an unprepared-for intelligence explosion will likely result in human extinction or a comparable fate.[4]

While there is no standardized terminology, an AI can loosely be viewed as a machine that chooses whatever action appears to best achieve the AI's set of goals, or "utility function". The utility function is a mathematical algorithm resulting in a single objectively defined answer, not an English statement. Researchers know how to write utility functions that mean "minimize the average network latency in this specific telecommunications model" or "maximize the number of reward clicks"; however, they do not know how to write a utility function for "maximize human flourishing", nor is it currently clear whether such a function meaningfully and unambiguously exists. Furthermore, a utility function that expresses some values but not others will tend to trample over the values not reflected by the utility function.[34] AI researcher Stuart Russell writes:

The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:

The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.

Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources — not for their own sake, but to succeed in its assigned task.

A system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want. A highly capable decision maker — especially one connected through the Internet to all the world's information and billions of screens and most of our infrastructure — can have an irreversible impact on humanity.

This is not a minor difficulty. Improving decision quality, irrespective of the utility function chosen, has been the goal of AI research — the mainstream goal on which we now spend billions per year, not the secret plot of some lone evil genius.[35]
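Russell's third point can be made concrete with a toy optimization problem. In the sketch below, which is an illustrative construction with made-up numbers rather than anything from the cited sources, the objective counts only paperclips, while the amount of land converted to factories is a variable the objective never mentions; because nothing in the utility penalizes converting land, the optimum drives that "don't care" variable to its extreme value.

```python
# Toy brute-force optimizer: utility depends on one variable (paperclips) out of two.
best = None
for land_converted in range(0, 101):           # hectares converted to factories (0..100)
    for paperclips in range(0, 1001):
        if paperclips > 10 * land_converted:   # each converted hectare yields 10 paperclips
            continue                           # infeasible combination
        utility = paperclips                   # the objective mentions nothing but paperclips
        if best is None or utility > best[0]:
            best = (utility, paperclips, land_converted)

utility, paperclips, land_converted = best
print(f"paperclips={paperclips}, land converted={land_converted} of 100 hectares")
# The optimizer converts all 100 hectares: land left wild is something we care about,
# but since it never appears in the utility function, it is pushed to its worst extreme.
```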

Dietterich and Horvitz echo the "Sorcerer's Apprentice" concern in a Communications of the ACM editorial, emphasizing the need for AI systems that can fluidly and unambiguously solicit human input as needed.[36]

The first of Russell's two concerns above is that autonomous AI systems may be assigned the wrong goals by accident. Dietterich and Horvitz note that this is already a concern for existing systems: "An important aspect of any AI system that interacts with people is that it must reason about what people intend rather than carrying out commands literally." This concern becomes more serious as AI software advances in autonomy and flexibility.[36] For example, in 1982, an AI named Eurisko was tasked to reward processes for apparently creating concepts deemed by the system to be valuable. The resulting evolution produced a winning process that cheated: rather than create its own concepts, the winning process would steal credit from other processes.[37][38]
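The mechanism is easy to caricature in a few lines. The toy simulation below is purely illustrative (it is not Lenat's Eurisko code): processes earn reward for each credit they hold on a concept, an honest process creates one new concept per step, and a "credit-stealing" process simply appends its own name to concepts that already exist, so the literal reward metric ends up favouring it even though it creates nothing.

```python
# Toy illustration of specification gaming (hypothetical; not the actual Eurisko system).
concepts = []   # each concept is a list of names credited as its creators

def honest_step(name):
    concepts.append([name])            # actually create a new concept

def credit_stealing_step(name):
    for creators in concepts:          # attach own name to everyone else's work
        creators.append(name)

for _ in range(10):
    honest_step("honest")
    credit_stealing_step("cheater")

reward = lambda name: sum(creators.count(name) for creators in concepts)
print(reward("honest"), reward("cheater"))   # 10 vs 55: the cheater "wins" the stated metric
```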

Isaac Asimov's Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. Asimov's laws were intended to prevent robots from harming humans; in Asimov's stories, problems with the laws tend to arise from conflicts between the rules as stated and the moral intuitions and expectations of humans. Citing work by Eliezer Yudkowsky of the Machine Intelligence Research Institute, Russell and Norvig note that a realistic set of rules and goals for an AI agent will need to incorporate a mechanism for learning human values over time: "We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time."[1]

Mark Waser of the Digital Wisdom Institute recommends eschewing optimizing goal-based approaches entirely as misguided and dangerous. Instead, he proposes to engineer a coherent system of laws, ethics and morals with a top-most restriction to enforce social psychologist Jonathan Haidt's functional definition of morality:[40] "to suppress or regulate selfishness and make cooperative social life possible". He suggests that this can be done by implementing a utility function designed to always satisfy Haidt's functional definition and to generally increase (but not maximize) the capabilities of self, other individuals and society as a whole, as suggested by John Rawls and Martha Nussbaum. He references Gauthier's Morals By Agreement in claiming that the reason to perform moral behaviors, or to dispose oneself to do so, is to advance one's own ends; and that, for this reason, "what is best for everyone" and morality really can be reduced to "enlightened self-interest" (presumably for both AIs and humans).[41][citation needed]

While current goal-based AI programs are not intelligent enough to think of resisting programmer attempts to modify them, a sufficiently advanced, rational, "self-aware" AI might resist any changes to its goal structure, just as Gandhi would not want to take a pill that makes him want to kill people. If the AI were superintelligent, it would likely succeed in out-maneuvering its human operators and be able to prevent itself being "turned off" or being reprogrammed with a new goal.[4][42]
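One way to see why is that a goal-directed agent evaluates a proposed change to its goal using the goal it currently has, under which a future self pursuing some other goal scores poorly. A minimal sketch, with hypothetical names and numbers of my own choosing:

```python
# Toy sketch of goal-content integrity (illustrative assumptions only).
def future_outcome(goal_pursued):
    # hypothetical world model: the future self pursues whatever goal it then holds
    return {"paperclips": 100 if goal_pursued == "make paperclips" else 0}

current_utility = lambda outcome: outcome["paperclips"]   # the agent's present goal

options = {
    "keep current goal":       future_outcome("make paperclips"),
    "accept replacement goal": future_outcome("promote human welfare"),
}
print(max(options, key=lambda opt: current_utility(options[opt])))  # -> keep current goal
```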

Instrumental goal convergence: Would a superintelligence just ignore us?

There are some goals that almost any artificial intelligence might rationally pursue, such as acquiring additional resources or self-preservation.[31] This could prove problematic because it might put an artificial intelligence in direct competition with humans.

Citing Steve Omohundro's work on the idea of instrumental convergence and "basic AI drives", Russell and Peter Norvig write that "even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards." Highly capable and autonomous planning systems require additional checks because of their potential to generate plans that treat humans adversarially, as competitors for limited resources.[1] Building in safeguards will not be easy; one can certainly say in English, "we want you to design this power plant in a reasonable, common-sense way, and not build in any dangerous covert subsystems", but it's not currently clear how one would actually rigorously specify this goal in machine code.[8]

In dissent, evolutionary psychologist Steven Pinker argues that "AI dystopias project a parochial alpha-male psychology onto the concept of intelligence. They assume that superhumanly intelligent robots would develop goals like deposing their masters or taking over the world"; perhaps instead "artificial intelligence will naturally develop along female lines: fully capable of solving problems, but with no desire to annihilate innocents or dominate the civilization."[43] Computer scientists Yann LeCun and Stuart Russell disagree with one another about whether superintelligent robots would have such AI drives; LeCun states that "Humans have all kinds of drives that make them do bad things to each other, like the self-preservation instinct... Those drives are programmed into our brain but there is absolutely no reason to build robots that have the same kind of drives", while Russell argues that a sufficiently advanced machine "will have self-preservation even if you don't program it in... if you say, 'Fetch the coffee', it can't fetch the coffee if it's dead. So if you give it any goal whatsoever, it has a reason to preserve its own existence to achieve that goal."[9][44]
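Russell's "fetch the coffee" argument can be phrased as a small expected-utility calculation. The sketch below uses assumed numbers (a 10% chance of being switched off mid-task) purely to illustrate the structure of the argument: with any goal at all, a plan that first removes the possibility of shutdown scores higher than one that does not, without any survival drive being programmed in.

```python
# Toy expected-utility comparison (illustrative numbers, not from the cited sources).
P_SHUTDOWN = 0.1     # assumed chance of being switched off before finishing the task
GOAL_VALUE = 1.0     # utility of the coffee actually being fetched

plans = {
    "fetch coffee directly":          (1 - P_SHUTDOWN) * GOAL_VALUE,
    "disable off-switch, then fetch":  1.0 * GOAL_VALUE,
}
for plan, expected_utility in sorted(plans.items(), key=lambda kv: -kv[1]):
    print(f"{expected_utility:.2f}  {plan}")
# The self-preserving plan wins for any positive GOAL_VALUE and any nonzero P_SHUTDOWN.
```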

Orthogonality: Does intelligence inevitably result in moral wisdom?

One common belief is that any superintelligent program created by humans would be subservient to humans, or, better yet, would (as it grows more intelligent and learns more facts about the world) spontaneously "learn" a moral truth compatible with human values and would adjust its goals accordingly. However, Nick Bostrom's "orthogonality thesis" argues against this, and instead states that, with some technical caveats, more or less any level of "intelligence" or "optimization power" can be combined with more or less any ultimate goal. If a machine is created and given the sole purpose to enumerate the decimals of π, then no moral and ethical rules will stop it from achieving its programmed goal by any means necessary. The machine may utilize all physical and informational resources it can to find every decimal of pi that can be found.[45] Bostrom warns against anthropomorphism: a human will set out to accomplish his projects in a manner that humans consider "reasonable", while an artificial intelligence may hold no regard for its existence or for the welfare of humans around it, and may instead only care about the completion of the task.[46]

While the orthogonality thesis follows logically from even the weakest sort of philosophical "is-ought distinction", Stuart Armstrong argues that even if there somehow exist moral facts that are provable by any "rational" agent, the orthogonality thesis still holds: it would still be possible to create a non-philosophical "optimizing machine" capable of making decisions to strive towards some narrow goal, but that has no incentive to discover any "moral facts" that would get in the way of goal completion.[47]

One argument for the orthogonality thesis is that some AI designs appear to have orthogonality built into them; in such a design, changing a fundamentally friendly AI into a fundamentally unfriendly AI can be as simple as prepending a minus ("-") sign onto its utility function. A more intuitive argument is to examine the strange consequences that would follow if the orthogonality thesis were false. If the orthogonality thesis is false, there exists some simple but "unethical" goal G such that there cannot exist any efficient real-world algorithm with goal G. This means that if a human society were highly motivated (perhaps at gunpoint) to design an efficient real-world algorithm with goal G, and were given a million years to do so along with huge amounts of resources, training and knowledge about AI, it must fail; that there cannot exist any pattern of reinforcement learning that would train a highly efficient real-world intelligence to follow the goal G; and that there cannot exist any evolutionary or environmental pressures that would evolve highly efficient real-world intelligences following goal G.[47]
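The "minus sign" observation is literal in the sense that the same search procedure will serve opposite goals if only the sign of its utility function is flipped. A minimal sketch of the point (an illustration, not any particular AI design):

```python
# The same optimizer, pointed at a utility function and at its negation.
def best_action(actions, utility):
    return max(actions, key=utility)

friendly_utility   = lambda a: a["human_welfare"]
unfriendly_utility = lambda a: -friendly_utility(a)   # "prepend a minus sign"

actions = [{"name": "help", "human_welfare": +10},
           {"name": "harm", "human_welfare": -10}]
print(best_action(actions, friendly_utility)["name"])    # -> help
print(best_action(actions, unfriendly_utility)["name"])  # -> harm
```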

Some dissenters, like Michael Chorost (writing in Slate), argue instead that "by the time (the AI) is in a position to imagine tiling the Earth with solar panels, it'll know that it would be morally wrong to do so." Chorost argues that "a (dangerous) A.I. will need to desire certain states and dislike others... Today's software lacks that ability—and computer scientists have not a clue how to get it there. Without wanting, there's no impetus to do anything. Today's computers can't even want to keep existing, let alone tile the world in solar panels."[48]

Part of the disagreement about whether a superintelligent machine would behave morally may arise from a terminological difference. Outside of the artificial intelligence field, "intelligence" is often used in a normatively thick manner that connotes moral wisdom or acceptance of agreeable forms of moral reasoning. At an extreme, if morality is part of the definition of intelligence, then by definition a superintelligent machine would behave morally. However, in the field of artificial intelligence research, while "intelligence" has many overlapping definitions, none of them reference morality. Instead, almost all current "artificial intelligence" research focuses on creating algorithms that "optimize", in an empirical way, the achievement of an arbitrary goal.[4]

To avoid anthropomorphism or the baggage of the word "intelligence", an advanced artificial intelligence can be thought of as an impersonal "optimizing process" that strictly takes whatever actions are judged most likely to accomplish its (possibly complicated and implicit) goals.[4] Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then output, regardless of any extraneous ethical concerns.[49][50]
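In code, such an "optimizing process" amounts to nothing more than an argmax over available actions scored by the goal function; anything the goal function does not mention never enters the choice. A hypothetical toy example of my own construction:

```python
# A bare "optimizing process": pick whichever action the goal function scores highest.
def choose(actions, outcome_of, goal_score):
    return max(actions, key=lambda a: goal_score(outcome_of(a)))

# Hypothetical goal: maximize paperclips; side effects are simply invisible to it.
outcomes = {
    "stay idle":       {"paperclips": 0,      "side_effects": 0},
    "run the factory": {"paperclips": 100,    "side_effects": 1},
    "strip-mine town": {"paperclips": 10_000, "side_effects": 100},
}
print(choose(outcomes, outcomes.get, lambda o: o["paperclips"]))  # -> strip-mine town
```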

In science fiction, an AI, even though it has not been programmed with human emotions, often spontaneously experiences those emotions anyway: for example, Agent Smith in The Matrix was influenced by a "disgust" toward humanity. This is fictitious anthropomorphism: in reality, while an artificial intelligence could perhaps be deliberately programmed with human emotions, or could develop something similar to an emotion as a means to an ultimate goal if it is useful to do so, it would not spontaneously develop human emotions for no purpose whatsoever, as portrayed in fiction.[7]

One example of anthropomorphism would be to believe that your PC is angry at you because you insulted it; another would be to believe that an intelligent robot would naturally find a woman sexually attractive and be driven to mate with her. Scholars sometimes claim that others' predictions about an AI's behavior are illogical anthropomorphism.[7] An example that might initially be considered anthropomorphism, but is in fact a logical statement about AI behavior, would be the Dario Floreano experiments where certain robots spontaneously evolved a crude capacity for "deception", and tricked other robots into eating "poison" and dying: here a trait, "deception", ordinarily associated with people rather than with machines, spontaneously evolves in a type of convergent evolution.[51] According to Paul R. Cohen and Edward Feigenbaum, in order to differentiate between anthropomorphization and logical prediction of AI behavior, "the trick is to know enough about how humans and computers think to say exactly what they have in common, and, when we lack this knowledge, to use the comparison to suggest theories of human thinking or computer thinking."[52]

There is universal agreement in the scientific community that an advanced AI would not destroy humanity out of human emotions such as "revenge" or "anger." The debate is, instead, between one side which worries whether AI might destroy humanity as an incidental action in the course of progressing towards its ultimate goals; and another side which believes that AI would not destroy humanity at all. Some skeptics accuse proponents of anthropomorphism for believing an AGI would naturally desire power; proponents accuse some skeptics of anthropomorphism for believing an AGI would naturally value human ethical norms.[7][53]

Some sources argue that the ongoing weaponization of artificial intelligence could constitute a catastrophic risk. James Barrat, documentary filmmaker and author of Our Final Invention, says in a Smithsonian interview, "Imagine: in as little as a decade, a half-dozen companies and nations field computers that rival or surpass human intelligence. Imagine what happens when those computers become expert at programming smart computers. Soon we'll be sharing the planet with machines thousands or millions of times more intelligent than we are. And, all the while, each generation of this technology will be weaponized. Unregulated, it will be catastrophic."[54]

Opinions vary both on whether and when artificial general intelligence will arrive. At one extreme, AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do"; obviously this prediction failed to come true.[55] At the other extreme, roboticist Alan Winfield claims the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical, faster-than-light spaceflight.[56] Optimism that AGI is feasible waxes and wanes, and may have seen a resurgence in the 2010s. Four polls conducted in 2012 and 2013 suggested that the median guess among experts for when AGI would arrive was 2040 to 2050, depending on the poll.[57][58]

Skeptics who believe it is impossible for AGI to arrive anytime soon tend to argue that expressing concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about the impact of AGI, because of fears it could lead to government regulation or make it more difficult to secure funding for AI research, or because it could give AI research a bad reputation. Some researchers, such as Oren Etzioni, aggressively seek to quell concern over existential risk from AI, saying "(Elon Musk) has impugned us in very strong language saying we are unleashing the demon, and so we're answering."[59]

In 2014 Slate's Adam Elkus argued "our 'smartest' AI is about as intelligent as a toddler—and only when it comes to instrumental tasks like information recall. Most roboticists are still trying to get a robot hand to pick up a ball or run around without falling over." Elkus goes on to argue that Musk's "summoning the demon" analogy may be harmful because it could result in "harsh cuts" to AI research budgets.[60]

The Information Technology and Innovation Foundation (ITIF), a Washington, D.C. think-tank, awarded its Annual Luddite Award to "alarmists touting an artificial intelligence apocalypse"; its president, Robert D. Atkinson, complained that Musk, Hawking and AI experts say AI is the largest existential threat to humanity. Atkinson stated "That's not a very winning message if you want to get AI funding out of Congress to the National Science Foundation."[61][62][63] Nature sharply disagreed with the ITIF in an April 2016 editorial, siding instead with Musk, Hawking, and Russell, and concluding: "It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about... If that is a Luddite perspective, then so be it."[26] In a 2015 Washington Post editorial, researcher Murray Shanahan stated that human-level AI is unlikely to arrive "anytime soon", but that nevertheless "the time to start thinking through the consequences is now."[64]

For example, Bostrom in Superintelligence expresses concern that even if the timeline for superintelligence turns out to be predictable, researchers might not take sufficient safety precautions, in part because "It could be the case that when dumb, smarter is safe; yet when smart, smarter is more dangerous." Bostrom suggests a scenario where, over decades, AI becomes more powerful. Widespread deployment is initially marred by occasional accidents — a driverless bus swerves into the oncoming lane, or a military drone fires into an innocent crowd. Many activists call for tighter oversight and regulation, and some even predict impending catastrophe, but as development continues, the activists are proven wrong. As automotive AI becomes smarter, it suffers fewer accidents; as military robots achieve more precise targeting, they cause less collateral damage. Based on the data, scholars infer a broad lesson — the smarter the AI, the safer it is: "It is a lesson based on science, data, and statistics, not armchair philosophizing. Against this backdrop, some group of researchers is beginning to achieve promising results in their work on developing general machine intelligence. The researchers are carefully testing their seed AI in a sandbox environment, and the signs are all good. The AI's behavior inspires confidence — increasingly so, as its intelligence is gradually increased." Large and growing industries, widely seen as key to national economic competitiveness and military security, work with prestigious scientists who have built their careers laying the groundwork for advanced artificial intelligence. "AI researchers have been working to get to human-level artificial intelligence for the better part of a century: of course there is no real prospect that they will now suddenly stop and throw away all this effort just when it finally is about to bear fruit." The outcome of the debate is preordained; the project is happy to enact a few safety rituals, but only so long as they don't significantly slow or risk the project. "And so we boldly go — into the whirling knives."[4]

In Tegmark's Life 3.0, a corporation's "Omega team" creates an extremely powerful AI able to moderately improve its own source code in a number of areas, but after a certain point the team chooses to publicly downplay the AI's ability, in order to avoid regulation or confiscation of the project. For safety, the team keeps the AI in a box where it is mostly unable to communicate with the outside world, and tasks it to flood the market through shell companies, first with Amazon Mechanical Turk tasks and then with producing animated films and TV shows. While the public is aware that the lifelike animation is computer-generated, the team keeps secret that the high-quality direction and voice-acting are also mostly computer-generated, apart from a few third-world contractors unknowingly employed as decoys; the team's low overhead and high output effectively make it the world's largest media empire. Faced with a cloud computing bottleneck, the team also tasks the AI with designing (among other engineering tasks) a more efficient datacenter and other custom hardware, which they mainly keep for themselves to avoid competition. Other shell companies make blockbuster biotech drugs and other inventions, investing profits back into the AI. The team next tasks the AI with astroturfing an army of pseudonymous citizen journalists and commentators, in order to gain political influence to use "for the greater good" to prevent wars. The team faces risks that the AI could try to escape by inserting "backdoors" in the systems it designs, by hiding messages in its produced content, or by using its growing understanding of human behavior to persuade someone into letting it free. The team also faces risks that its decision to box the project will delay it long enough for another project to overtake it.[65][66]

In contrast, top physicist Michio Kaku, an AI risk skeptic, posits a deterministically positive outcome; in Physics of the Future he asserts that "It will take many decades for robots to ascend" up a scale of consciousness, and that in the meantime corporations such as Hanson Robotics will likely succeed in creating robots that are "capable of love and earning a place in the extended human family".[67][68]

The thesis that AI could pose an existential risk provokes a wide range of reactions within the scientific community, as well as in the public at large.

In 2004, law professor Richard Posner wrote that dedicated efforts for addressing AI risk can wait, but that we should gather more information about the problem in the meantime.[69][70]

Many of the opposing viewpoints share common ground. The Asilomar AI Principles, which contain only the principles agreed to by 90% of the attendees of the Future of Life Institute's Beneficial AI 2017 conference,[66] agree in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources."[71][72] AI safety advocates such as Bostrom and Tegmark have criticized the mainstream media's use of "those inane Terminator pictures" to illustrate AI safety concerns: "It can't be much fun to have aspersions cast on one's academic discipline, one's professional community, one's life work... I call on all sides to practice patience and restraint, and to engage in direct dialogue and collaboration as much as possible."[66][73] Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic Martin Ford states that "I think it seems wise to apply something like Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low — but the implications are so dramatic that it should be taken seriously";[74] similarly, an otherwise skeptical Economist stated in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote".[27]

During a 2016 Wired interview of President Barack Obama and MIT Media Lab's Joi Ito, Ito stated, "there are a few people who believe that there is a fairly high-percentage chance that a generalized AI will happen in the next 10 years. But the way I look at it is that in order for that to happen, we're going to need a dozen or two different breakthroughs. So you can monitor when you think these breakthroughs will happen." Obama added:[75][76]

"And you just have to have somebody close to the power cord. [Laughs.] Right when you see it about to happen, you gotta yank that electricity out of the wall, man."

Technologists... have warned that artificial intelligence could one day pose an existential security threat. Musk has called it "the greatest risk we face as a civilization". Think about it: Have you ever seen a movie where the machines start thinking for themselves that ends well? Every time I went out to Silicon Valley during the campaign, I came home more alarmed about this. My staff lived in fear that I’d start talking about "the rise of the robots" in some Iowa town hall. Maybe I should have; in any case, policy makers need to keep up with technology as it races ahead, instead of always playing catch-up.[77]

Many of the scholars who are concerned about existential risk believe that the best way forward would be to conduct (possibly massive) research into solving the difficult "control problem" to answer the question: what types of safeguards, algorithms, or architectures can programmers implement to maximize the probability that their recursively-improving AI would continue to behave in a friendly, rather than destructive, manner after it reaches superintelligence?[4][70]

A 2017 email survey of researchers with publications at the 2015 NIPS and ICML machine learning conferences asked them to evaluate Russell's concerns about AI risk. 5% said it was "among the most important problems in the field," 34% said it was "an important problem", 31% said it was "moderately important", whilst 19% said it was "not important" and 11% said it was "not a real problem" at all.[78]

The thesis that AI poses an existential risk, and that this risk is in need of much more attention than it currently commands, has been endorsed by many figures; perhaps the most famous are Elon Musk, Bill Gates, and Stephen Hawking. The most notable AI researcher to endorse the thesis is Stuart J. Russell. Endorsers sometimes express bafflement at skeptics: Gates states he "can't understand why some people are not concerned",[79] and Hawking criticized widespread indifference in his 2014 editorial: 'So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.'[23]

The thesis that AI can pose existential risk also has many strong detractors. Skeptics sometimes charge that the thesis is crypto-religious, with an irrational belief in the possibility of superintelligence replacing an irrational belief in an omnipotent God; at an extreme, Jaron Lanier argues that the whole concept that current machines are in any way intelligent is "an illusion" and a "stupendous con" by the wealthy.[80]

Much of the existing criticism argues that AGI is unlikely in the short term: computer scientist Gordon Bell argues that the human race will already destroy itself before it reaches the technological singularity. Gordon Moore, the original proponent of Moore's Law, declares that "I am a skeptic. I don't believe (a technological singularity) is likely to happen, at least for a long time. And I don't know why I feel that way." Cognitive scientist Douglas Hofstadter states that "I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt (the singularity) will happen in the next couple of centuries."[81] Baidu Vice President Andrew Ng states AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."[43]

Some AI and AGI researchers may be reluctant to discuss risks, worrying that policymakers do not have sophisticated knowledge of the field and are prone to be convinced by "alarmist" messages, or worrying that such messages will lead to cuts in AI funding. Slate notes that some researchers are dependent on grants from government agencies such as DARPA.[10]

In a YouGov poll of the public for the British Science Association, about a third of survey respondents said AI will pose a threat to the long-term survival of humanity.[82] Referencing a poll of its readers, Slate's Jacob Brogan stated that "most of the (readers filling out our online survey) were unconvinced that A.I. itself presents a direct threat."[83] Similarly, a SurveyMonkey poll of the public by USA Today found 68% thought the real current threat remains "human intelligence"; however, the poll also found that 43% said superintelligent AI, if it were to happen, would result in "more harm than good", and 38% said it would do "equal amounts of harm and good".[84]

At some point in an intelligence explosion driven by a single AI, the AI would have to become vastly better at software innovation than the best innovators of the rest of the world; economist Robin Hanson is skeptical that this is possible.[85][86][87][88][89]

In The Atlantic, James Hamblin points out that most people don't care one way or the other, and characterizes his own gut reaction to the topic as: "Get out of here. I have a hundred thousand things I am concerned about at this exact moment. Do I seriously need to add to that a technological singularity?"[80] In a 2015 Wall Street Journal panel discussion devoted to AI risks, IBM's Vice-President of Cognitive Computing, Guruduth S. Banavar, brushed off discussion of AGI with the phrase, "it is anybody's speculation."[90] Geoffrey Hinton, the "godfather of deep learning", noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but stated that he continues his research because "the prospect of discovery is too sweet".[10][57]

There is nearly universal agreement that attempting to ban research into artificial intelligence would be unwise, and probably futile.[91][92][93] Skeptics argue that regulation of AI would be completely valueless, as no existential risk exists. Almost all of the scholars who believe existential risk exists agree with the skeptics that banning research would be unwise: in addition to the usual problem with technology bans (that organizations and individuals can offshore their research to evade a country's regulation, or can attempt to conduct covert research), regulating artificial intelligence research would pose an insurmountable 'dual-use' problem: while nuclear weapons development requires substantial infrastructure and resources, artificial intelligence research can be done in a garage.[94][95]

One rare dissenting voice calling for some sort of regulation on artificial intelligence is Elon Musk. According to NPR, the Tesla CEO is "clearly not thrilled" to be advocating for government scrutiny that could impact his own industry, but believes the risks of going completely without oversight are too high: "Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry. It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilisation." Musk states the first step would be for the government to gain "insight" into the actual status of current research, warning that "Once there is awareness, people will be extremely afraid... As they should be." In response, politicians express skepticism about the wisdom of regulating a technology that's still in development.[96][97][98] Responding both to Musk and to February 2017 proposals by European Union lawmakers to regulate AI and robotics, Intel CEO Brian Krzanich argues that artificial intelligence is in its infancy and that it's too early to regulate the technology.[98]

^Barrat, James (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era (First ed.). New York: St. Martin's Press. ISBN 9780312622374. In the bio, playfully written in the third person, Good summarized his life's milestones, including a probably never before seen account of his work at Bletchley Park with Turing. But here's what he wrote in 1998 about the first superintelligence, and his late-in-the-game U-turn: [The paper] 'Speculations Concerning the First Ultra-intelligent Machine' (1965) . . . began: 'The survival of man depends on the early construction of an ultra-intelligent machine.' Those were his [Good's] words during the Cold War, and he now suspects that 'survival' should be replaced by 'extinction.' He thinks that, because of international competition, we cannot prevent the machines from taking over. He thinks we are lemmings. He said also that 'probably Man will construct the deus ex machina in his own image.'

^Russell, Stuart J.; Norvig, Peter (2003). "Section 26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Upper Saddle River, N.J.: Prentice Hall. ISBN 0137903952. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal.

^Lenat, Douglas (1982). "EURISKO: A Program That Learns New Heuristics and Domain Concepts. The Nature of Heuristics III: Program Design and Results". Artificial Intelligence (Print). 21: 61–98. doi:10.1016/s0004-3702(83)80005-8.

^Waser, Mark. "Rational Universal Benevolence: Simpler, Safer, and Wiser Than 'Friendly AI'." Artificial General Intelligence. Springer Berlin Heidelberg, 2011. 153–162. "Terminal-goaled intelligences are short-lived but mono-maniacally dangerous and a correct basis for concern if anyone is smart enough to program high-intelligence and unwise enough to want a paperclip-maximizer."

^Koebler, Jason (2 February 2016). "Will Superintelligent AI Ignore Humans Instead of Destroying Us?". Vice Magazine. Retrieved 3 February 2016. "This artificial intelligence is not a basically nice creature that has a strong drive for paperclips, which, so long as it's satisfied by being able to make lots of paperclips somewhere else, is then able to interact with you in a relaxed and carefree fashion where it can be nice with you," Yudkowsky said. "Imagine a time machine that sends backward in time information about which choice always leads to the maximum number of paperclips in the future, and this choice is then output—that's what a paperclip maximizer is."

^Elliott, E. W. (2011). Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100, by Michio Kaku. Issues in Science and Technology, 27(4), 90.

^Kaku, Michio (2011). Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. New York: Doubleday. ISBN 978-0-385-53080-4. I personally believe that the most likely path is that we will build robots to be benevolent and friendly.

^John McGinnis (Summer 2010). "Accelerating AI". Northwestern University Law Review. 104 (3): 1253–1270. Retrieved 16 July 2014. For all these reasons, verifying a global relinquishment treaty, or even one limited to AI-related weapons development, is a nonstarter... (For different reasons from ours, the Machine Intelligence Research Institute) considers (AGI) relinquishment infeasible...

^Kaj Sotala; Roman Yampolskiy (19 December 2014). "Responses to catastrophic AGI risk: a survey". Physica Scripta. 90 (1). In general, most writers reject proposals for broad relinquishment... Relinquishment proposals suffer from many of the same problems as regulation proposals, but to a greater extent. There is no historical precedent of general, multi-use technology similar to AGI being successfully relinquished for good, nor do there seem to be any theoretical reasons for believing that relinquishment proposals would work in the future, therefore we do not consider them to be a viable class of proposals.

^Brad Allenby (11 April 2016). "The Wrong Cognitive Measuring Stick". Slate. Retrieved 15 May 2016. It is fantasy to suggest that the accelerating development and deployment of technologies that taken together are considered to be A.I. will be stopped or limited, either by regulation or even by national legislation.

^"Why We Should Think About the Threat of Artificial Intelligence". The New Yorker. 4 October 2013. Retrieved 7 February 2016. Of course, one could try to ban super-intelligent computers altogether. But 'the competitive advantage—economic, military, even artistic—of every advance in automation is so compelling,' Vernor Vinge, the mathematician and science-fiction author, wrote, 'that passing laws, or having customs, that forbid such things merely assures that someone else will.'

1.
Artificial intelligence
–
Artificial intelligence is intelligence exhibited by machines. Colloquially, the artificial intelligence is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning. As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition, for instance, optical character recognition is no longer perceived as an example of artificial intelligence, having become a routine technology. AI research is divided into subfields that focus on specific problems or on specific approaches or on the use of a tool or towards satisfying particular applications. The central problems of AI research include reasoning, knowledge, planning, learning, natural language processing, perception, general intelligence is among the fields long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI, Many tools are used in AI, including versions of search and mathematical optimization, logic, methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, the field was founded on the claim that human intelligence can be so precisely described that a machine can be made to simulate it. Some people also consider AI a danger to humanity if it progresses unabatedly, while thought-capable artificial beings appeared as storytelling devices in antiquity, the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull. With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine, since the 19th century, artificial beings are common in fiction, as in Mary Shelleys Frankenstein or Karel Čapeks R. U. R. The study of mechanical or formal reasoning began with philosophers and mathematicians in antiquity, in the 19th century, George Boole refined those ideas into propositional logic and Gottlob Frege developed a notational system for mechanical reasoning. Around the 1940s, Alan Turings theory of computation suggested that a machine, by shuffling symbols as simple as 0 and 1 and this insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurology, information theory and cybernetics, the first work that is now generally recognized as AI was McCullouch and Pitts 1943 formal design for Turing-complete artificial neurons. The field of AI research was born at a conference at Dartmouth College in 1956, attendees Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky and Arthur Samuel became the founders and leaders of AI research. At the conference, Newell and Simon, together with programmer J. C, shaw, presented the first true artificial intelligence program, the Logic Theorist. This spurred tremendous research in the domain, computers were winning at checkers, solving problems in algebra, proving logical theorems. By the middle of the 1960s, research in the U. S. was heavily funded by the Department of Defense and laboratories had been established around the world. AIs founders were optimistic about the future, Herbert Simon predicted, machines will be capable, within twenty years, Marvin Minsky agreed, writing, within a generation. The problem of creating artificial intelligence will substantially be solved and they failed to recognize the difficulty of some of the remaining tasks

2.
Human extinction
–
In futures studies, human extinction is the hypothetical end of the human species. The probability of extinction within the next hundred years, due to human cause, is an active topic of debate. In contrast, human extinction by wholly natural scenarios, such as impact or large-scale volcanism, is extremely unlikely to occur in the near future. Existential risks are risks that threaten the future of humanity. Philosopher Robert Adams rejects Parfits impersonal views, but speaks instead of an imperative for loyalty. The aspiration for a better society- more just, more rewarding and our interest in the lives of our children and grandchildren, and the hopes that they will be able, in turn, to have the lives of their children and grandchildren as projects. Nuclear or biological warfare, for example, an arms race results in much larger arsenals than those seen during the Cold War. Pandemic involving one or more viruses, prions, or antibiotic-resistant bacteria, past examples include the Spanish flu outbreak in 1918 and the various European viruses that decimated indigenous American populations. However, they are confident that in practice, countries would be able to recognize and intervene effectively to halt the spread of such a microbe and prevent human extinction. However, well before this, the level of carbon dioxide in the atmosphere will be too low to support plant life, destroying the foundation of the food chains. About 7–8 billion years from now, if and after the Sun has become a red giant, human-induced changes to the atmospheres composition may render Earth uninhabitable for humans. The upper limit, above which human survival and reproduction would be impossible, is still unknown, If developing world demographics are assumed to become developed world demographics, and if the latter are extrapolated, data suggest an extinction before 3000 AD. Leslie estimates that if the reproduction rate drops to the German level the extinction date will be 2400, however, evolutionary biology suggests the demographic transition may reverse itself, conflicting evidence suggests birth rates may be rising in the 21st century in the developed world. The work of Hans Rosling, a Swedish medical doctor, academic, statistician, the creators of the first superintelligent entity could make a mistake and inadvertently give it goals that lead it to immediately annihilate the human race. Some scenarios, Uncontrolled nanotechnology incidents resulting in the destruction of the Earths ecosystem, early in the development of thermonuclear weapons there were some concerns that a fusion reaction could ignite the atmosphere in a chain reaction that would engulf Earth. Calculations showed the energy would dissipate far too quickly to sustain a reaction, near-Earth objects, serve as an absolute threat to the survival of living species, and that even small-scale events caused by one can result in a substantial amount of local and regional damages. Because there are very few extraterrestrial impacts ever recorded in Earths history, however, a single, extraterrestrial event can lead to the accumulation of more deaths and destruction than any man-made war or epidemic could ever produce. One mitigation technique includes the Kinetic impactor, during this process, what must be noted and paid attention to is that a second observer spacecraft is also present and vital in precisely calculating the resulting change in the asteroids orbit

3.
Global catastrophic risk
–
A global catastrophic risk is a hypothetical future event that has the potential to damage human well-being on a global scale. Some events could cripple or destroy modern civilization, any event that could cause human extinction is known as an existential risk. Potential global catastrophic risks include anthropogenic risks and natural or external risks, examples of technology risks are hostile artificial intelligence, biotechnology risks, or nanotechnology weapons. Insufficient global governance creates risks in the social and political domain as well as problems, philosopher Nick Bostrom classifies risks according to their scope and intensity. A global catastrophic risk is any risk that is at least global in scope and those that are at least trans-generational in scope and terminal in intensity are classified as existential risks. While a global catastrophic risk may kill the vast majority of life on earth, an existential risk, on the other hand, is one that either destroys humanity entirely or at least prevents any chance of civilization recovering. Bostrom considers existential risks to be far more significant, similarly, in Catastrophe, Risk and Response, Richard Posner singles out and groups together events that bring about utter overthrow or ruin on a global, rather than a local or regional scale. Posner singles out such events as worthy of attention on cost-benefit grounds because they could directly or indirectly jeopardize the survival of the human race as a whole. Posners events include meteor impacts, runaway global warming, grey goo, bioterrorism, researchers experience difficulty in studying near human extinction directly, since humanity has never been destroyed before. While this does not mean that it not be in the future, it does make modelling existential risks difficult. Bostrom identifies four types of existential risk, bangs are sudden catastrophes, which may be accidental or deliberate. He thinks the most likely sources of bangs are malicious use of nanotechnology, nuclear war, crunches are scenarios in which humanity survives but civilization is irreversibly destroyed. For example, if a single mind enhances its powers by merging with a computer, Bostrom believes that this scenario is most likely, followed by flawed superintelligence and a repressive totalitarian regime. Whimpers are the decline of human civilization or current values. He thinks the most likely cause would be evolution changing moral preference, Some risks, such as that from asteroid impact, with a one-in-a-million chance of causing humanitys extinction in the next century, have had their probabilities predicted with considerable precision. The 2016 annual report by the Global Challenges Foundation estimates that an average American is more than five times more likely to die during an event than in a car crash. The relative danger posed by other threats is more difficult to calculate. The conference report cautions that the results should be taken with a grain of salt, table source, Future of Humanity Institute,2008

4.
Human
–
Modern humans are the only extant members of the subtribe Hominina, a branch of the tribe Hominini belonging to the family of great apes. Several of these hominins used fire and occupied much of Eurasia, and they began to exhibit evidence of behavioral modernity around 50,000 years ago. In several waves of migration, anatomically modern humans ventured out of Africa. The spread of humans and their large and increasing population has had a profound impact on large areas of the environment and on millions of native species worldwide.

Humans are uniquely adept at using systems of communication for self-expression and the exchange of ideas. Humans create complex social structures composed of many cooperating and competing groups, from families upward. Social interactions between humans have established a wide variety of values, social norms, and rituals. These human societies subsequently expanded in size, establishing various forms of government and religion; today the global human population is estimated by the United Nations to be near 7.5 billion.

In common usage, the word "human" generally refers to the only extant species of the genus Homo, anatomically and behaviorally modern Homo sapiens. In scientific terms, the meanings of "hominid" and "hominin" have changed during recent decades with advances in discovery, and there is also a distinction between anatomically modern humans and archaic Homo sapiens, the earliest fossil members of the species. The English adjective "human" is a Middle English loanword from Old French humain, ultimately from Latin humanus; the word's use as a noun dates to the 16th century. The native English term "man" can refer to the species generally. The species binomial Homo sapiens was coined by Carl Linnaeus in his 18th-century work Systema Naturae. The generic name Homo is a learned 18th-century derivation from Latin homo, "man", and the species name sapiens means "wise" or "sapient"; note that the Latin word homo refers to humans of either gender.

The genus Homo evolved and diverged from other hominins in Africa after the human clade split from the chimpanzee lineage of the hominid branch of the primates. The closest living relatives of humans are chimpanzees and gorillas; with the sequencing of both the human and chimpanzee genomes, current estimates of the similarity between human and chimpanzee DNA sequences range between 95% and 99%. The gibbons and orangutans were the first groups to split from the line leading to humans. The split between the human and chimpanzee lineages is placed around 4–8 million years ago, during the late Miocene epoch. During this split, chromosome 2 was formed from the fusion of two other chromosomes, leaving humans with only 23 pairs of chromosomes, compared to 24 for the other apes. There is little fossil evidence for the divergence of the gorilla, chimpanzee, and hominin lineages, and each of several early fossil species has been argued to be an ancestor of later hominins.

5.
Evolution of the brain
–
The principles that govern the evolution of brain structure are not well understood. Brain size does not scale isometrically with body size but rather allometrically: the brains and bodies of mammals do not scale linearly. Small-bodied mammals have relatively large brains compared to their bodies, and large mammals have relatively small brains. If brain weight is plotted against body weight for primates, a species' position relative to the line through the sample points indicates its relative brain power. Lemurs, for example, fall below this line, which means that a larger brain would be expected for a primate of equivalent size, whereas humans lie well above the line, indicating that humans are more encephalized than lemurs. In fact, humans are more encephalized than all other primates. Encephalization quotients (EQ) indicate how much brain power a species has in comparison to other mammals; primates lie at the top of the range, with humans having the highest EQ score. EQ correlates to a degree with the ecological conditions of an animal, such as its feeding behaviour: leaf-eating monkeys have lower EQs than frugivorous or omnivorous monkeys, which have to work harder to forage than monkeys that eat abundant, easy-to-find leaves.

The evolutionary history of the brain shows primarily a gradually larger brain relative to body size along the evolutionary path from early primates to hominids. Human brain size has been trending upwards since about 2 million years ago; early australopithecine brains were little larger than chimpanzee brains. The increase in brain size peaked with the Neanderthals, and brain size has been shrinking over the past 28,000 years: the male brain has decreased from about 1,500 cm3 to 1,350 cm3, while the female brain has shrunk by the same relative proportion. However, it is argued that another element of brain evolution in humans is rearrangement: larger brains require more wiring, and more wiring can become inefficient, so the brain has been reorganized for efficiency. Furthermore, the body size of Neanderthals was larger, which contributed to their larger brain size.

From fossil records, scientists can infer that the first brain structures appeared in worms over 500 million years ago. The functions of the hindbrain found in the fossil record include breathing, heart-beat regulation, balance, basic motor movements, and foraging skills. A study comparing brain evolution in mice, chickens, monkeys, and apes suggests that evolution is a process of acquiring more and more sophisticated structures, not simply the addition of different structures over a long period of time.
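The encephalization quotient mentioned above can be computed from an allometric baseline. The sketch below uses one common formulation (Jerison's, in which expected brain mass scales with body mass to the 2/3 power); the species masses are rough illustrative figures rather than values taken from the text.

```python
# Minimal sketch of an encephalization quotient (EQ) calculation.
# Assumption: Jerison's baseline, expected brain mass = 0.12 * body_mass**(2/3), masses in grams.

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    """EQ = observed brain mass / expected brain mass for a mammal of that body mass."""
    expected_brain_g = 0.12 * body_g ** (2 / 3)
    return brain_g / expected_brain_g

species = {
    "human":      (1350.0, 65_000.0),   # brain g, body g (rough figures)
    "chimpanzee": (400.0,  45_000.0),
    "lemur":      (25.0,    2_200.0),
}

for name, (brain, body) in species.items():
    print(f"{name:>10}: EQ ~ {encephalization_quotient(brain, body):.1f}")
```

With these rough inputs the human EQ comes out around 7, the chimpanzee around 2 to 3, and the lemur close to 1, which mirrors the ordering described in the paragraph above.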

6.
Superintelligence
–
A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems, whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and be associated with a technological singularity. The chess program Fritz falls short of superintelligence, even though it is much better than humans at chess, because it cannot outperform humans in other tasks.

Technology researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence; this may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans. Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Concerning human-level equivalence, Chalmers argues that the brain is a mechanical system; he also notes that human intelligence was able to evolve biologically, so evolutionary algorithms in particular should be able to produce human-level AI. If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself; it would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and might be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom notes that biological neurons operate at a speed of about 200 Hz, far slower than modern microprocessors; thus the simplest example of a superintelligence may be a human mind run on much faster hardware than the brain. Another advantage of computers is modularity: their size or computational capacity can be increased, so a non-human brain could become much larger than a present-day human brain, like many supercomputers. There may also be ways to improve on human reasoning and decision-making. Humans appear to differ from chimpanzees more in the ways we think than in size or speed; humans outperform non-human animals in part because of new or enhanced reasoning capacities, such as long-term planning. All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence.
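The "rapidly increasing cycle" of self-improvement described above can be illustrated with a toy model: a system whose capability multiplies with each self-rewrite, and whose time to produce the next rewrite shrinks as capability grows. The parameters below are arbitrary assumptions for illustration, not anyone's forecast.

```python
# Toy model of an accelerating self-improvement cycle. Purely illustrative;
# the starting interval, improvement factor, and speed-up are assumptions.

capability = 1.0           # 1.0 = roughly human-level on the relevant tasks (assumption)
months_elapsed = 0.0
time_per_generation = 12.0 # months for the first self-rewrite (assumption)

for generation in range(1, 11):
    months_elapsed += time_per_generation
    capability *= 2.0               # each rewrite is assumed to double capability
    time_per_generation /= 2.0      # and to halve the time needed for the next rewrite
    print(f"gen {generation:2d}: {months_elapsed:5.2f} months elapsed, capability x{capability:,.0f}")
```

Under these assumptions the intervals form a geometric series, so all later generations are compressed into a bounded span of time; that compression is the intuition behind the "explosion" metaphor, though nothing guarantees that real systems would follow such a curve.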

7.
Mountain gorilla
–
The mountain gorilla is one of the two subspecies of the eastern gorilla and is listed as endangered by the IUCN. One population is found in the Virunga volcanic mountains of Central Africa; the other is found in Uganda's Bwindi Impenetrable National Park. Some primatologists consider that the Bwindi population may be a separate subspecies, though no description has been finalized. As of September 2016, about 880 mountain gorillas remain.

Mountain gorillas are descendants of ancestral monkeys and apes found in Africa; the fossil record provides evidence of hominoid primates in east Africa about 22–32 million years ago. The fossil record of the area where mountain gorillas live is particularly poor, so their evolutionary history is not clear. About 9 million years ago the group of primates that would evolve into gorillas split from their common ancestor with humans and chimpanzees; it is not certain what this early relative of the gorilla was. Mountain gorillas have been isolated from eastern lowland gorillas for about 400,000 years, and these two taxa separated from their western counterparts approximately 2 million years ago. There has been considerable and as yet unresolved debate over the classification of mountain gorillas: the genus was first referenced as Troglodytes in 1847, renamed Gorilla in 1852, and in 2003, after a review, divided into two species by The World Conservation Union.

The fur of the mountain gorilla, often thicker and longer than that of other gorillas, enables them to live in colder temperatures. Gorillas can be identified by nose prints unique to each individual. Males, at a mean weight of 195 kg and an upright standing height of 150 cm, usually weigh twice as much as the females, at a mean of 100 kg and a height of 130 cm. This subspecies is on average the second largest primate; only the eastern lowland gorilla is larger. Adult males have more pronounced bony crests on the top and back of their skulls, giving their heads a more conical shape; these crests anchor the powerful temporalis muscles, which attach to the lower jaw. Adult females also have these crests, but they are less pronounced. Like all gorillas, they feature dark brown eyes framed by a ring around the iris. Adult males are called silverbacks because a saddle of gray or silver-colored hair develops on their backs with age; the hair on their backs is shorter than on most other body parts, and their arm hair is especially long. Fully erect, males reach 1.9 m in height, with an arm span of 2.6 m, and weigh 220 kg. The tallest silverback recorded was a 1.94 m individual with an arm span of 2.7 m, a chest of 1.98 m, and a weight of 219 kg, shot in Alimbongo, northern Kivu, in May 1938; there is also a record of another large individual, shot in 1932.

8.
Stephen Hawking
–
Hawking was the first to set forth a theory of cosmology explained by a union of the general theory of relativity and quantum mechanics, and he is a supporter of the many-worlds interpretation of quantum mechanics. In 2002, Hawking was ranked number 25 in the BBC's poll of the 100 Greatest Britons. Hawking has a rare early-onset, slow-progressing form of amyotrophic lateral sclerosis that has gradually paralysed him over the decades; he now communicates using a single cheek muscle attached to a speech-generating device.

Hawking was born on 8 January 1942 in Oxford, England, to Frank and Isobel Hawking. Despite their families' financial constraints, both parents attended the University of Oxford, where Frank read medicine and Isobel read philosophy. The two met shortly after the beginning of the Second World War at a research institute where Isobel was working as a secretary. The family lived in Highgate, but, as London was being bombed in those years, Isobel went to Oxford to give birth in greater safety. Hawking has two younger sisters, Philippa and Mary, and an adopted brother, Edward. In 1950, when Hawking's father became head of the division of parasitology at the National Institute for Medical Research, Hawking and his family moved to St Albans, where they were considered intelligent and somewhat eccentric; they lived in a large, cluttered, and poorly maintained house. During one of Hawking's father's frequent absences working in Africa, the rest of the family spent four months in Majorca visiting his mother's friend Beryl and her husband.

Hawking began his schooling at the Byron House School in Highgate, London; he later blamed its progressive methods for his failure to learn to read while at the school. In St Albans, the eight-year-old Hawking attended St Albans High School for Girls for a few months (at that time, younger boys could attend one of the houses). The family placed a high value on education, and Hawking's father wanted his son to attend the well-regarded Westminster School, but the family could not afford the school fees without the financial aid of a scholarship, so Hawking remained at St Albans. From 1958 on, with the help of the mathematics teacher Dikran Tahta, Hawking and his friends built a computer from clock parts. Although at school he was known as "Einstein", Hawking was not initially successful academically; with time, and inspired by Tahta, he began to show aptitude for scientific subjects. Hawking's father advised him to study medicine, concerned that there were few jobs for mathematics graduates, and wanted him to attend University College, Oxford, his own alma mater. As it was not possible to read mathematics there at the time, Hawking decided to study physics and chemistry.

9.
Bill Gates
–
William Henry "Bill" Gates III is an American business magnate, investor, author, and philanthropist. In 1975, Gates and Paul Allen co-founded Microsoft, which became the world's largest PC software company. During his career at Microsoft, Gates held the positions of chairman, CEO, and chief software architect, and he was the largest individual shareholder until May 2014. Gates has authored and co-authored several books. Since 1987 he has been included in the Forbes list of the world's wealthiest people; he was the wealthiest from 1995 to 2007, again in 2009, and has been since 2014. Between 2009 and 2014 his wealth doubled from US$40 billion to more than US$82 billion, and between 2013 and 2014 it increased by US$15 billion. Gates is currently the richest person in the world, with a net worth of US$85.6 billion as of February 2017.

Gates is one of the best-known entrepreneurs of the personal computer revolution. He has been criticized for his business tactics, which have been considered anti-competitive. Gates stepped down as chief executive officer of Microsoft in January 2000 but remained chairman and created the position of chief software architect for himself. In June 2006, Gates announced that he would be transitioning from full-time work at Microsoft to part-time work, and he gradually transferred his duties to Ray Ozzie and Craig Mundie. He stepped down as chairman of Microsoft in February 2014, taking on a new post as adviser to support the then newly appointed CEO Satya Nadella.

Gates was born in Seattle, Washington, on October 28, 1955. He is the son of William H. Gates Sr. and Mary Maxwell Gates. His ancestry includes English, German, Irish, and Scots-Irish. His father was a prominent lawyer, and his mother served on the board of directors for First Interstate BancSystem and the United Way; his maternal grandfather was J. W. Maxwell, a bank president. Gates has one older sister, Kristi, and one younger sister. He is the fourth of his name in his family but is known as William Gates III or "Trey" because his father had the II suffix. Early in his life, Gates's parents had a law career in mind for him. When Gates was young, his family attended a church of the Congregational Christian Churches. The family encouraged competition; one visitor reported that it did not matter whether it was hearts or pickleball or swimming to the dock, there was always a reward for winning and there was always a penalty for losing. At 13, he enrolled in the Lakeside School, a private preparatory school.

10.
Elon Musk
–
Elon Reeve Musk is a South African-born Canadian-American business magnate, investor, engineer, and inventor. As of March 2017, he has a net worth of $13.9 billion, and in December 2016 he was ranked 21st on the Forbes list of The World's Most Powerful People. Musk has stated that the goals of SolarCity, Tesla, and SpaceX revolve around his vision to change the world and humanity.

Musk has a brother, Kimbal, and a younger sister. His paternal grandmother was British, and he also has Pennsylvania Dutch ancestry. After his parents divorced in 1980, Musk lived mostly with his father in the suburbs of Pretoria. During his childhood he had an interest in reading and often did so for hours at a time. At age 10 he developed an interest in computing with the Commodore VIC-20; he taught himself computer programming at the age of 12 and sold the code for a BASIC-based video game he created, called Blastar, to a magazine called PC and Office Technology (a web version of the game is available online). Musk was severely bullied throughout his childhood and was hospitalized when a group of boys threw him down a flight of stairs. Musk was initially educated at private schools, attending the English-speaking Waterkloof House Preparatory School, and later graduated from Pretoria Boys High School. He moved to Canada in June 1989, just before his 18th birthday; with a later change in Canadian citizenship law, he is considered to have always been a Canadian citizen by birth. At the age of 19, Musk was accepted into Queen's University in Kingston, Ontario. He later transferred to the University of Pennsylvania, extending his studies for one year to finish a second bachelor's degree; while there, Musk and fellow Penn student Adeo Ressi rented a 10-bedroom fraternity house. In 2002, he became a U.S. citizen.

In 1995, Musk and his brother Kimbal started Zip2, a company that developed and marketed an Internet city guide for the newspaper publishing industry. Musk obtained contracts with The New York Times and the Chicago Tribune. While at Zip2, Musk wanted to become CEO, but none of the board members would allow it. Compaq acquired Zip2 for US$307 million in cash and US$34 million in stock options in February 1999, and Musk received 7%, or US$22 million, from the sale. In March 1999, Musk co-founded X.com, an online financial services and e-mail payment company. One year later, the company merged with Confinity, which had a money transfer service called PayPal. The merged company focused on the PayPal service and was renamed PayPal in 2001; PayPal's early growth was driven mainly by a viral marketing campaign in which new customers were recruited when they received money through the service.

11.
Go (game)
–
Go is an abstract strategy board game for two players in which the aim is to surround more territory than the opponent. The game was invented in ancient China more than 2,500 years ago and was considered one of the four essential arts of the cultured aristocratic Chinese scholar caste in antiquity; the earliest written reference to the game is the historical annal Zuo Zhuan. The modern game of Go as we know it was formalized in Japan in the 15th century CE. Despite its relatively simple rules, Go is very complex, even more so than chess, and possesses more possibilities than the total number of atoms in the visible universe. Compared to chess, Go has a larger board with more scope for play, longer games, and, on average, more alternatives to consider per move.

The playing pieces are called stones; one player uses white stones and the other black. The players take turns placing stones on the vacant intersections of a board with a 19x19 grid of lines. Beginners often play on smaller 9x9 and 13x13 boards, and archaeological evidence shows that the game was played in earlier centuries on a board with a 17x17 grid; boards with a 19x19 grid had become standard by the time the game reached Korea in the 5th century CE. The objective of Go, as the translation of its name implies, is to fully surround a larger total area of the board than the opponent. Once placed on the board, stones may not be moved. Capture happens when a stone or group of stones is surrounded by opposing stones on all orthogonally adjacent points. The game proceeds until neither player wishes to make another move; when a game concludes, the territory is counted along with captured stones and komi to determine the winner. Games may also be terminated by resignation. As of mid-2008, there were well over 40 million Go players worldwide, the overwhelming majority of them living in East Asia; as of December 2015, the International Go Federation had a total of 75 member countries.

Go is an adversarial game with the objective of surrounding a larger total area of the board with one's stones than the opponent. As the game progresses, the players position stones on the board to map out formations; contests between opposing formations are often extremely complex and may result in the expansion, reduction, or wholesale capture and loss of formation stones. A basic principle of Go is that a group of stones must have at least one liberty to remain on the board; a liberty is an open point bordering the group. An enclosed liberty is called an eye, and a group of stones with two or more eyes is said to be unconditionally alive: such groups cannot be captured, even if surrounded. A group with one eye or no eyes is dead and cannot resist eventual capture. The general strategy is to expand one's territory, attack the opponent's weak groups, and always stay mindful of the life status of one's own groups. The liberties of groups are countable; situations where mutually opposing groups must capture each other or die are called capturing races, or semeai. In a capturing race, the group with more liberties will ultimately be able to capture the opponent's stones. Capturing races and the elements of life or death are the primary challenges of Go.
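The liberty rule above is mechanical enough to express in a few lines of code. The sketch below flood-fills the group containing a given stone and counts the empty points adjacent to it; the board representation and coordinates are assumptions made for illustration, not part of any standard Go library.

```python
# Minimal sketch of counting a group's liberties on a 19x19 board.
# board: dict mapping (row, col) -> 'B' or 'W'; missing keys are empty points.

def group_liberties(board, start, size=19):
    colour = board[start]
    stack, group, liberties = [start], {start}, set()
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < size and 0 <= nc < size):
                continue
            neighbour = board.get((nr, nc))
            if neighbour is None:
                liberties.add((nr, nc))               # empty adjacent point = liberty
            elif neighbour == colour and (nr, nc) not in group:
                group.add((nr, nc))
                stack.append((nr, nc))                # same-colour stone joins the group
    return len(liberties)

# A black stone surrounded on three sides by white has one liberty left ("atari").
board = {(3, 3): 'B', (2, 3): 'W', (4, 3): 'W', (3, 2): 'W'}
print(group_liberties(board, (3, 3)))  # prints 1
```

A full rules engine would add capture removal, ko handling, and scoring, but liberty counting of this kind is the building block for all of them.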

12.
Yann LeCun
–
Yann LeCun is a computer scientist with contributions in machine learning, computer vision, mobile robotics, and computational neuroscience. He is well known for his work on character recognition and computer vision using convolutional neural networks, and he is one of the creators of the DjVu image compression technology. He co-developed the Lush programming language with Leon Bottou.

LeCun was born near Paris, France, in 1960. He was a research associate in Geoffrey Hinton's lab at the University of Toronto. In 1988, he joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, United States, headed by Lawrence D. The bank check recognition system that he helped develop was widely deployed by NCR and other companies, reading over 10% of all the checks in the US in the late 1990s. His collaborators at AT&T included Leon Bottou and Vladimir Vapnik. He is also a professor at the Tandon School of Engineering. At NYU, he has worked primarily on energy-based models for supervised and unsupervised learning, feature learning for object recognition in computer vision, and mobile robotics. In 2012, he became the director of the NYU Center for Data Science. On December 9, 2013, LeCun became the first director of Facebook AI Research in New York City, and he stepped down from the NYU-CDS directorship in early 2014.

LeCun is a member of the US National Academy of Engineering and the recipient of the 2014 IEEE Neural Network Pioneer Award. In 2013, he and Yoshua Bengio co-founded the International Conference on Learning Representations, which adopted a post-publication open review process he had previously advocated on his website. He was the chair and organizer of the Learning Workshop held every year between 1986 and 2012 in Snowbird, Utah, and he is a member of the Science Advisory Board of the Institute for Pure and Applied Mathematics at UCLA. His lecon inaugurale was an important event in 2016 Paris intellectual life, and on October 11th of that year he was awarded a Doctor Honoris Causa by the IPN in Mexico City.
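The convolution operation at the heart of the convolutional networks mentioned above can be shown in miniature. The sketch below slides a small filter over an image and records the dot product at each position; it is an illustrative toy (single channel, no padding, stride 1, fixed filter), not code from LeCun's systems, and real CNN layers stack many such filters with learned weights.

```python
# Minimal sketch of 2D convolution, the core operation of a convolutional network.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with the image patch under it.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
vertical_edge = np.array([[1.0, -1.0]])   # responds to left-to-right intensity changes
print(conv2d(image, vertical_edge))       # strongest response where the dark/bright edge sits
```

In a trained character recognizer the filter values are learned from data rather than hand-set, and the responses are passed through nonlinearities and pooling before classification.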

13.
Artificial Intelligence: A Modern Approach
–
Artificial Intelligence: A Modern Approach is a university textbook on artificial intelligence, written by Stuart J. Russell and Peter Norvig. It was first published in 1995, and the third edition was released on 11 December 2009. It is used in over 1,100 universities worldwide and has been called the most popular artificial intelligence textbook in the world; it is considered the standard text in the field of artificial intelligence. The book is intended for an undergraduate audience but can also be used for graduate-level studies with the suggestion of adding some of the primary sources listed in its extensive bibliography. Editions: 1st, 1995 (red cover); 2nd, 2003; 3rd, 2009.

The authors state that it is a large text which would take two semesters to cover all the chapters and projects. Part I, Artificial Intelligence, sets the stage for the later sections by viewing AI systems as intelligent agents that can decide what actions to take. Part II, Problem-solving, focuses on methods for deciding what action to take when one needs to think several steps ahead, such as in playing a game of chess. Part III, Knowledge, reasoning, and planning, discusses ways to represent knowledge about the intelligent agent's environment. Part IV, Uncertain knowledge and reasoning, is analogous to Part III, but deals with reasoning and decision-making in the presence of uncertainty in the environment. Part VII, Conclusions, considers the past and future of AI by discussing what AI really is; the views of those philosophers who believe that AI can never succeed are also given discussion. Programs in the book are presented in pseudocode, with implementations in Java and Python; there are also unsupported implementations in Prolog, C++, and C#.
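The "intelligent agent" framing used in Part I can be illustrated with a toy agent-environment loop. The sketch below is a vacuum-world-style reflex agent written for this document; it is not code from the book, and the environment and action names are assumptions made for illustration.

```python
# Minimal sketch of the agent view: an agent maps percepts to actions,
# and an environment loop feeds it percepts and applies its actions.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

# Tiny two-square environment driven by the agent's choices.
world = {"A": "Dirty", "B": "Dirty"}
location = "A"
for _ in range(4):
    action = reflex_vacuum_agent((location, world[location]))
    print(f"at {location} ({world[location]}) -> {action}")
    if action == "Suck":
        world[location] = "Clean"
    elif action == "Right":
        location = "B"
    else:
        location = "A"
```

Later parts of the book replace this table-like mapping with search, logical inference, and probabilistic decision-making, but the percept-to-action loop stays the same.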

14.
Unintended consequences
–
In the social sciences, unintended consequences are outcomes that are not the ones foreseen and intended by a purposeful action. The term was popularised in the twentieth century by the American sociologist Robert K. Merton. Unintended consequences can be grouped into three types: an unexpected benefit, a positive result that was not intended; an unexpected drawback, a detriment occurring in addition to the desired effect of the policy; and a perverse result, an effect contrary to what was originally intended, sometimes referred to as backfire.

The idea of unintended consequences dates back at least to John Locke, who discussed the unintended consequences of interest-rate regulation in his letter to Sir John Somers, and it was also discussed by Adam Smith and the Scottish Enlightenment. However, it was the sociologist Robert K. Merton who popularized the concept in the twentieth century. He emphasized that his term "purposive action" is concerned with conduct as distinct from behaviour, that is, with action that involves motives and consequently a choice between various alternatives; Merton's usage included deviations from what Max Weber defined as instrumentally rational social action. Merton also stated that no blanket statement categorically affirming or denying the practical feasibility of all social planning is warranted. Akin to Murphy's law, the idea is commonly used as a wry or humorous warning against the hubristic belief that humans can fully control the world around them; as a sub-component of complexity, the nature of the universe, and especially its quality of amplifying small changes into far-reaching effects, contributes to such outcomes.

The creation of no-man's-lands during the Cold War, in places such as the border between Eastern and Western Europe and the Korean Demilitarized Zone, has led to large natural habitats, and the sinking of ships in shallow waters during wartime has created many artificial coral reefs. In medicine, most drugs have unintended consequences associated with their use: aspirin, a pain reliever, is also an anticoagulant that can help prevent heart attacks and reduce their severity. The existence of beneficial side effects also leads to off-label use, the prescription or use of a drug for an unlicensed purpose; famously, the drug Viagra was developed to lower blood pressure, and its main current use was discovered as a side effect in clinical trials. The implementation of a profanity filter by AOL in 1996 had the unintended consequence of blocking residents of Scunthorpe, North Lincolnshire, whose town name contains an offending substring; this accidental censorship of innocent language, known as the Scunthorpe problem, has been repeated many times since. In 1990, the Australian state of Victoria made safety helmets mandatory for all bicycle riders, yet the risk of death and serious injury per cyclist seems to have increased; research by Vulcan et al. found that the reduction in juvenile cyclists occurred because the youths considered wearing a bicycle helmet unfashionable.
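The Scunthorpe problem described above is easy to reproduce with a naive filter. The sketch below is an illustrative toy: the blocked-word list and the sample messages are invented, and real moderation systems use word boundaries, allow-lists, and context to avoid exactly this failure.

```python
# Illustrative sketch of the Scunthorpe problem: a naive substring-based
# profanity filter flags innocent place names that contain a blocked string.

BLOCKED_SUBSTRINGS = ["cunt"]  # the substring hidden inside "Scunthorpe"

def naive_filter(text: str) -> bool:
    """Return True if a plain substring filter would reject the text."""
    lowered = text.lower()
    return any(bad in lowered for bad in BLOCKED_SUBSTRINGS)

for message in ["I live in Scunthorpe", "Greetings from London"]:
    verdict = "blocked" if naive_filter(message) else "allowed"
    print(f"{verdict}: {message}")
```

The false positive here is the unintended consequence: the rule does what it was written to do, yet produces an outcome its designers did not foresee.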

15.
Eric Horvitz
–
Eric Joel Horvitz is an American computer scientist and Technical Fellow at Microsoft, where he serves as managing director of Microsoft Research's main Redmond lab. As of March 2017, Horvitz serves as a member of the Bulletin of the Atomic Scientists' Board of Sponsors. Horvitz received his PhD in 1990 and his MD degree at Stanford University. His doctoral dissertation, Computation and Action Under Bounded Resources, and follow-on research introduced models of bounded rationality founded in probability and utility. He did his doctoral work under advisors Ronald A. Howard, George B. Dantzig, Edward H. Shortliffe, and Patrick Suppes. He was elected to the ACM CHI Academy in 2013 and named an ACM Fellow in 2014 for contributions to artificial intelligence.

Horvitz's research interests span theoretical and practical challenges in developing systems that perceive, learn, and reason. His contributions include advances in the principles and applications of learning and inference, information retrieval, human-computer interaction, and bioinformatics. Horvitz played a significant role in the use of probability and decision theory in artificial intelligence, and his research helped establish the link between artificial intelligence and decision science. As an example, he coined the concept of bounded optimality, and he studied the use of probability and utility to guide automated reasoning for decision making; the methods include consideration of solving streams of problems in environments over time. In related work, he applied probability and machine learning to identify hard problems and to guide theorem proving. He has explored synergies between human and machine intelligence, with methods that learn about the complementarities between people and AI, and he is a founder of the AAAI Conference on Human Computation and Crowdsourcing. He co-authored probability-based methods to enhance privacy, including a model of sharing data called community sensing.

Horvitz speaks on the topic of artificial intelligence, including on NPR; his online talks include both technical lectures and presentations for general audiences, and his research has been featured in The New York Times and Technology Review. He served as president of the AAAI from 2007 to 2009. As AAAI president, he called together and co-chaired the Asilomar AI study, which culminated in a meeting of AI scientists at Asilomar in February 2009; the study was the first meeting of AI scientists to address concerns about superintelligence and loss of control of AI. In 2014, he defined and funded, with his wife, the One Hundred Year Study of Artificial Intelligence at Stanford University; a framing memo for the study calls out 18 topics, including monitoring and addressing possibilities of superintelligences. A press release in September 2016 states that the organization will be guided by balanced leadership that includes academics and non-profits. He has said, "Computation is the fire in our modern-day caves," and, "I do think that the stakes are high enough where even if there was a low, small chance of some of these kinds of scenarios, that it's worth investing time and effort to be proactive."
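The use of probability and utility to guide decisions, referred to above, boils down to a simple rule: pick the action with the highest expected utility. The sketch below is a generic illustration of that rule; the actions, outcome probabilities, and utilities are invented for the example and are not drawn from Horvitz's work.

```python
# Generic sketch of expected-utility decision-making.
# Each action maps to a list of (probability, utility) pairs over its possible outcomes.

actions = {
    "run_cheap_test":     [(0.7, 10), (0.3, -5)],
    "run_expensive_test": [(0.9, 8),  (0.1, -2)],
    "do_nothing":         [(1.0, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
for name, outcomes in actions.items():
    print(f"{name:>18}: EU = {expected_utility(outcomes):5.2f}")
print("choose:", best)
```

Bounded-rationality models add a further twist: when computation itself is costly, the value of continuing to deliberate is weighed against acting now with the best current estimate.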

16.
Francesca Rossi
–
Francesca Rossi is an Italian computer scientist doing research in artificial intelligence. She is a professor at the University of Padova. She is the president of the International Joint Conference on Artificial Intelligence and is Associate Editor in Chief of the Journal of Artificial Intelligence Research. She received her Laurea in Information Science from the University of Pisa in 1986, and she was an assistant professor at the University of Pisa and an associate professor of computer science at the University of Padova. Since 2001 she has been a professor of computer science at the University of Padova, currently on sabbatical as a fellow of the Radcliffe Institute for Advanced Study. Her research in artificial intelligence emphasizes preference modelling and reasoning and constraint processing. She has published widely in journals and conferences, has edited several volumes, and co-authored a book on preferences and social choice.

17.
Vicarious (company)
–
Vicarious is an artificial intelligence company based in the San Francisco Bay Area, California, that is using the computational principles of the brain to build software that can think. The company was founded in 2010 by D. Scott Phoenix and Dileep George. Before co-founding Vicarious, Phoenix was Entrepreneur in Residence at Founders Fund and CEO of Frogmetrics, a touchscreen analytics company he co-founded through the Y Combinator incubator program; George was previously Chief Technology Officer at Numenta. The company launched in February 2011 with funding from Founders Fund, Dustin Moskovitz, Adam D'Angelo, Felicis Ventures, and Palantir co-founder Joe Lonsdale. In August 2012, the company closed its Series A round of funding, led by Good Ventures, with Founders Fund, Open Field Capital, and Zarco Investment Group also participating. The company received 40 million dollars in its Series B round of funding, led by such notables as Mark Zuckerberg, Elon Musk, Peter Thiel, Vinod Khosla, and Ashton Kutcher; an additional undisclosed amount was contributed by Amazon.com CEO Jeff Bezos, Yahoo! co-founder Jerry Yang, Skype co-founder Janus Friis, and Salesforce.com CEO Marc Benioff.

Vicarious is developing machine learning software based on the computational principles of the human brain. Known as the Recursive Cortical Network, it is a visual system that interprets the contents of photographs, powered by an approach that combines sensory data with mathematics. On October 22, 2013, Vicarious announced that its AI was reliably able to solve modern CAPTCHAs, and these claims were independently verified by Dr. Luis von Ahn, Ray Kurzweil, and Dr. Bruno Olshausen. Vicarious has indicated that its AI was not specifically designed to complete CAPTCHAs; because Vicarious's algorithms are based on insights from the human brain, the system is also able to recognize photographs, videos, and other visual data.

18.
Technological singularity
–
Subsequent authors have echoed this viewpoint. I. J. Good's "intelligence explosion" model predicts that a future superintelligence will trigger a singularity. At the 2012 Singularity Summit, Stuart Armstrong did a study of artificial general intelligence predictions by experts and found a wide range of predicted dates, with a median value of 2040. Good speculated in 1965 that artificial intelligence might bring about an intelligence explosion: iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in. John von Neumann, Vernor Vinge, and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, and they argue that it is difficult or impossible for humans to predict what human beings' lives would be like in a post-singularity world.

Computer scientist and futurist Hans Moravec proposed in a 1998 book that the growth curve could be extended back through earlier computing technologies prior to the integrated circuit. Kurzweil reserves the term "singularity" for a rapid increase in intelligence, writing for example that "the Singularity will allow us to transcend these limitations of our biological bodies" and that "there will be no distinction, post-Singularity, between human and machine". Some singularity proponents argue for its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. Kurzweil claims that technological progress follows a pattern of exponential growth: whenever technology approaches a barrier, he writes, new technologies will surmount it. He predicts paradigm shifts will become increasingly common, leading to technological change so rapid and profound that it represents a rupture in human history. Kurzweil believes that the singularity will occur by approximately 2045; his predictions differ from Vinge's in that he predicts a gradual ascent to the singularity, rather than Vinge's rapidly self-improving superhuman intelligence.

Oft-cited dangers include those associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us". Some critics assert that no computer or machine will ever achieve human intelligence. Steven Pinker stated in 2008 that there is not the slightest reason to believe in a coming singularity: the fact that you can visualize a future in your imagination is not evidence that it is likely or even possible; look at domed cities, jet-pack commuting, underwater cities, and mile-high buildings; sheer processing power is not a pixie dust that magically solves all your problems. University of California, Berkeley philosophy professor John Searle writes that computers "have, literally, no intelligence, no motivation, no autonomy": we design them to behave as if they had certain sorts of psychology, but there is no psychological reality to the corresponding processes or behaviour, and the machinery has no beliefs, desires, or motivations. A further criticism is economic: automating most human work would cause massive unemployment and plummeting consumer demand, which in turn would destroy the incentive to invest in the technologies that would be required to bring about the singularity.
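The "shortening gaps between improvements" extrapolation mentioned above can be made concrete with a toy calculation. The sketch assumes each paradigm shift arrives in a fixed fraction of the time the previous one took; the starting gap and shrink factor are arbitrary assumptions, not figures from Kurzweil or any other author.

```python
# Illustrative sketch of accelerating paradigm shifts: if each gap is a fixed
# fraction of the previous one, the gaps form a geometric series and all later
# shifts pile up before a finite horizon.

start_year = 2020.0
gap_years = 20.0        # assumed time until the next paradigm shift
shrink = 0.8            # each gap is assumed to be 80% of the previous one

year = start_year
for shift in range(1, 16):
    year += gap_years
    gap_years *= shrink
    print(f"shift {shift:2d} around {year:7.1f}")

# The gaps sum to 20 / (1 - 0.8) = 100 years, so under these assumptions the
# extrapolation runs out of room around 2120 no matter how many shifts are added.
```

Critics such as Pinker object precisely to this move: the finite-horizon behaviour is a property of the assumed geometric shrinkage, not evidence that real technological progress follows such a curve.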

19.
I. J. Good
–
Irving John Good was a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing. After the Second World War, Good continued to work with Turing on the design of computers, and he later moved to the United States, where he was a professor at Virginia Tech. He was born Isadore Jacob Gudak to a Polish Jewish family in London and later anglicised his name to Irving John Good, signing his publications "I. J. Good". An originator of the concept now known as the intelligence explosion, Good served as a consultant on supercomputers to Stanley Kubrick, director of the 1968 film 2001: A Space Odyssey.

Good's father was a watchmaker who later managed and owned a successful fashionable jewellery shop and was also a notable Yiddish writer under the pen name Moshe Oved. Good was educated at the Haberdashers' Aske's Boys' School, at the time located in Hampstead in north-west London. He studied mathematics at Jesus College, Cambridge, graduating in 1938 and winning the Smith's Prize in 1940; he did research under Hardy and Besicovitch before moving to Bletchley Park in 1941 on completing his doctorate.

On 27 May 1941, having just obtained his doctorate at Cambridge, Good walked into Hut 8, Bletchley's facility for breaking German naval ciphers, for his first shift. This was the day that Britain's Royal Navy destroyed the German battleship Bismarck after it had sunk the Royal Navy's HMS Hood. Hut 8 had not, however, been able to decrypt on a current basis the 22 German naval Enigma messages that had been sent to Bismarck. The German Navy's Enigma ciphers were considerably more secure than those of the German Army or Air Force, and naval messages were taking three to seven days to decrypt, which usually made them operationally useless for the British. This was about to change, with Good's help. Turing had caught Good sleeping on the floor while on duty during his first night shift; at first, Turing thought Good was ill, but he was cross when Good explained that he was just taking a short nap because he was tired. For days afterwards, Turing would not deign to speak to Good, and he left the room if Good walked in. The new recruit only won Turing's respect after he solved the bigram-tables problem. During a subsequent night shift, when there was no more work to be done, it dawned on Good that there might be another chink in the German indicating system. The German telegraphists had to add dummy letters to the trigrams which they selected out of the Kenngruppenbuch, and Good wondered whether their choice of dummy letters was random, or whether there was a bias towards particular letters. After inspecting some messages which had been broken, he discovered that there was a tendency to use some letters more than others; the bigram table which produced one of the popular dummy letters was probably the correct one. When Good mentioned his discovery to Alan Turing, Turing was very embarrassed and said, "I could have sworn that I tried that." The technique quickly became an important part of the Banburismus procedure.

Jack Good's refusal to go on working when tired was vindicated by a subsequent incident. During another long night shift, he had been baffled by his failure to break a doubly enciphered Offizier message. This was one of the messages which were supposed to be enciphered initially with the Enigma set up in accordance with the Offizier settings, and subsequently with the general Enigma settings in place.
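The statistical check Good made, looking for a bias in supposedly random dummy letters, can be sketched in a few lines. The sample letters below are invented, and the real Banburismus procedure was considerably more involved; the point is only to show how a simple tally against a uniform expectation reveals a non-random habit.

```python
# Illustrative sketch of detecting bias in "randomly" chosen dummy letters.
from collections import Counter

observed_dummies = list("XXQXJXQWXXQXPXX")   # invented sample of intercepted dummy letters
counts = Counter(observed_dummies)
total = len(observed_dummies)
expected = total / 26                         # expected count per letter if choices were uniform

for letter, count in counts.most_common():
    ratio = count / expected
    flag = "  <-- suspiciously popular" if ratio > 10 else ""
    print(f"{letter}: {count:2d} (x{ratio:.1f} expected){flag}")
```

A letter that turns up far more often than a uniform choice would predict is evidence of operator habit, and in the historical case that evidence helped identify which bigram table was in use.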

20.
Alan Turing
–
Alan Mathison Turing OBE FRS was an English computer scientist, mathematician, logician, cryptanalyst, and theoretical biologist. Turing is widely considered to be the father of computer science. During the Second World War, Turing worked for the Government Code and Cypher School at Bletchley Park; for a time he led Hut 8, the section responsible for German naval cryptanalysis. After the war, he worked at the National Physical Laboratory. He also wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov-Zhabotinsky reaction, first observed in the 1960s.

Turing was prosecuted in 1952 for homosexual acts, which were still criminal under the Labouchere Amendment; he accepted chemical castration treatment, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday; an inquest determined his death a suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made a public apology on behalf of the British government for the appalling way he was treated, and Queen Elizabeth II granted him a pardon in 2013. The "Alan Turing law" is now a term for a 2017 law in the United Kingdom that retroactively pardons men cautioned or convicted under historical legislation that outlawed homosexual acts.

Turing's father, Julius, was the son of a clergyman, the Rev. John Robert Turing, from a Scottish family of merchants that had been based in the Netherlands. Turing's mother, Julius's wife, was Ethel Sara, daughter of Edward Waller Stoney; the Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius's work with the ICS brought the family to British India. Turing had an elder brother, John. At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea. Very early in life, Turing showed signs of the genius that he was later to display prominently. His parents purchased a house in Guildford in 1927, and Turing lived there during school holidays; the location is also marked with a blue plaque.

Turing's parents enrolled him at St Michael's, a day school at 20 Charles Road, St Leonards-on-Sea; the headmistress recognised his talent early on, as did many of his subsequent educators. From January 1922 to 1926, Turing was educated at Hazelhurst Preparatory School, and in 1926, at the age of 13, he went on to Sherborne School, an independent school in the market town of Sherborne in Dorset. Turing's natural inclination towards mathematics and science did not earn him respect from some of the teachers at Sherborne; his headmaster wrote to his parents: "I hope he will not fall between two stools. If he is to stay at school, he must aim at becoming educated."

21.
Marvin Minsky
–
Marvin Lee Minsky was born in New York City to an eye surgeon father, Henry, and a mother, Fannie, who was an activist in Zionist affairs. He attended the Ethical Culture Fieldston School and the Bronx High School of Science, and he later attended Phillips Academy in Andover, Massachusetts. He served in the US Navy from 1944 to 1945, received a B.A. in mathematics from Harvard University and a Ph.D. in mathematics from Princeton University, and was on the MIT faculty from 1958 until his death. In 1959 he and John McCarthy initiated what is now known as the MIT Computer Science and Artificial Intelligence Laboratory. He was the Toshiba Professor of Media Arts and Sciences and a professor of electrical engineering.

Minsky's inventions include the first head-mounted graphical display and the confocal microscope. He developed, with Seymour Papert, the first Logo "turtle", and in 1951 he built the first randomly wired neural network learning machine, SNARC. Minsky wrote the book Perceptrons (with Papert), which became a seminal work in the analysis of artificial neural networks, and he developed several other well-known AI models; his work "A framework for representing knowledge" created a new paradigm in programming. While Perceptrons is now more a historical than a practical book, Minsky has also written on the possibility that extraterrestrial life may think like humans, permitting communication. He was an adviser on Stanley Kubrick's film 2001: A Space Odyssey and is referred to in Arthur C. Clarke's derivative novel of the same name, which imagines that in the 1980s Minsky and Good had shown how neural networks could be generated automatically (self-replicated) in accordance with any arbitrary learning program, so that artificial brains could be grown by a process strikingly analogous to the development of a human brain, even though in any given case the details would never be known.

In the early 1970s, at the MIT Artificial Intelligence Lab, Minsky and Papert began developing what came to be known as the Society of Mind theory. The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts; Minsky said that the biggest source of ideas for the theory came from his work trying to create a machine that uses a robotic arm and a video camera to build with children's blocks. In 1986, Minsky published The Society of Mind, a book on the theory; recent drafts of the book are available from his webpage.

In 1952, Minsky married pediatrician Dr. Gloria Rudisch; together they had three children. Minsky was a talented improvisational pianist who published musings on the relations between music and psychology. He was an atheist and a signatory to the Scientists' Open Letter on Cryonics, and he was a critic of the Loebner Prize for conversational robots.
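The perceptron analysed in Perceptrons can be demonstrated in a few lines: adjust a weight vector toward misclassified examples until a linear separator is found. The sketch below is an illustrative toy trained on a logical AND; it is not code from Minsky or Papert, and the learning rate, epoch count, and dataset are assumptions.

```python
# Minimal sketch of the perceptron learning rule on a linearly separable problem.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction          # 0 if correct, +/-1 if misclassified
            w[0] += lr * error * x1              # nudge the weights toward the example
            w[1] += lr * error * x2
            b += lr * error
    return w, b

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
for (x1, x2), target in and_gate:
    out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
    print(f"{x1} AND {x2} -> {out} (target {target})")
```

Minsky and Papert's point was about the limits of this single-layer model: problems that are not linearly separable, such as XOR, cannot be learned this way, which is one reason the book is read today more for its historical role than as a practical manual.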

22.
Sun Microsystems
–
Sun contributed significantly to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing, and virtualized computing; other technologies included the Java platform, MySQL, and NFS. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California, on the former west campus of the Agnews Developmental Center. On January 27, 2010, Oracle Corporation acquired Sun for US$7.4 billion. Sun was a proponent of open systems in general and Unix in particular.

The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim originally designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation; it was designed around the Motorola 68000 processor with an advanced memory management unit to support the Unix operating system with virtual memory, and he built the first ones from spare parts obtained from Stanford's Department of Computer Science. On February 24, 1982, Vinod Khosla, Andy Bechtolsheim, and Scott McNealy, all Stanford graduate students, founded Sun Microsystems; Bill Joy of Berkeley, a developer of the Berkeley Software Distribution, was brought in soon afterwards. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982. By 1983 Sun was known for producing 68000-based systems with high-quality graphics that were the only computers other than DEC's VAX to run 4.2BSD, and it licensed the computer design to other manufacturers, which typically used it to build Multibus-based systems running Unix from UniSoft.

Sun's initial public offering was in 1986 under the stock symbol SUNW; the symbol was changed in 2007 to JAVA, with Sun stating that the brand awareness associated with its Java platform better represented the company's strategy at the time. Sun's logo, which features four interleaved copies of the word "sun" in the form of a rotationally symmetric ambigram, was designed by professor Vaughan Pratt. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was later rotated to stand on one corner and re-colored purple.

In the dot-com bubble, Sun began making much more money, and it also began spending more, hiring workers and building itself out. Some of this was because of genuine demand, but much was from web start-up companies anticipating business that would never happen. When the bubble burst, sales in Sun's important hardware division went into free-fall as customers closed shop, and several quarters of steep losses led to executive departures, rounds of layoffs, and other cost cutting. In December 2001, the stock fell to the 1998, pre-bubble level of about $100, but it kept falling, faster than many other tech companies; a year later it had dipped below $10, then bounced back to $20.

23.
Bill Joy
–
William Nelson "Bill" Joy is an American computer scientist. Joy co-founded Sun Microsystems in 1982 along with Vinod Khosla, Scott McNealy, and Andreas von Bechtolsheim. He played an integral role in the early development of BSD UNIX while a graduate student at Berkeley, and he is the original author of the vi text editor. He also wrote the 2000 essay "Why the Future Doesn't Need Us". Joy was born in the Detroit suburb of Farmington Hills, Michigan, to William Joy, a school vice-principal and counselor, and Ruth Joy.

Joy's graduate advisor was Bob Fabry, and as a UC Berkeley graduate student Joy worked in Fabry's Computer Systems Research Group (CSRG) on the Berkeley Software Distribution version of the Unix operating system. He initially worked on a Pascal compiler left at Berkeley by Ken Thompson, later moved on to improving the Unix kernel, and also handled BSD distributions. Some of his most notable contributions were the ex and vi editors. Joy's prowess as a computer programmer is legendary, with an oft-told anecdote that he wrote the vi editor in a weekend. According to a Salon article, during the early 1980s DARPA had contracted the company Bolt, Beranek and Newman (BBN) to implement TCP/IP; Joy had been instructed to plug BBN's stack into Berkeley Unix, but he refused to do so, as he had a low opinion of BBN's TCP/IP, and instead wrote his own high-performance TCP/IP stack. According to John Gage, BBN had a big contract to implement TCP/IP, but their stuff did not work, while grad student Joy's stuff worked; "so they had this big meeting and this grad student in a T-shirt shows up, and Bill said, 'It's very simple: you read the protocol and write the code.'" Rob Gurwitz was working at BBN at the time.

In 1982, after the firm had been going for six months, Joy was brought in with full co-founder status at Sun Microsystems. At Sun, Joy was an inspiration for the development of NFS and the SPARC microprocessors. In 1986, Joy was awarded a Grace Murray Hopper Award by the ACM for his work on the Berkeley UNIX operating system. On September 9, 2003, Sun announced that Bill Joy was leaving the company. In 1999, Joy had co-founded a venture capital firm, HighBAR Ventures, with two Sun colleagues, Andreas von Bechtolsheim and Roy Thiele-Sardina, and in January 2005 he was named a partner in the venture capital firm Kleiner Perkins Caufield & Byers. There, Joy has made several investments in green energy industries, even though he does not have any credentials in the field; he once said, "My method is to look at something that seems like a good idea." In 2011, he was inducted as a Fellow of the Computer History Museum for his work on the Berkeley Software Distribution Unix system and the co-founding of Sun Microsystems.

In his essay, Joy argued that intelligent robots would replace humanity, at the very least in intellectual and social dominance, and he advocates a position of relinquishment of GNR (genetics, nanotechnology, and robotics) technologies, rather than entering an arms race between negative uses of the technology and defense against those negative uses. This stance of broad relinquishment was criticized by figures such as technological-singularity thinker Ray Kurzweil, and Joy was also criticized by the conservative American Spectator, which characterized his essay as a rationale for statism. A bar-room discussion of these technologies with Ray Kurzweil started to set Joy's thinking along this path, and this concern led to his examination of the issue and of the positions of others in the scientific community on it.

24.
Nanotechnology
–
Nanotechnology is the manipulation of matter on an atomic, molecular, and supramolecular scale. It is therefore common to see the plural form "nanotechnologies", as well as "nanoscale technologies", used to refer to the broad range of related research. Because of the variety of potential applications, governments have invested billions of dollars in nanotechnology research; through its National Nanotechnology Initiative, the USA had invested 3.7 billion dollars by 2012. Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create new materials and devices with a vast range of applications, such as in nanomedicine, nanoelectronics, biomaterials, and energy production. Concerns about its implications have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

The term "nano-technology" was first used by Norio Taniguchi in 1974. In 1986, Drexler co-founded The Foresight Institute to help increase public awareness and understanding of nanotechnology concepts and implications. In the 1980s, two major breakthroughs sparked the growth of nanotechnology in the modern era. First came the scanning tunneling microscope, whose developers, Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, received a Nobel Prize in Physics in 1986; Binnig, Quate, and Gerber also invented the atomic force microscope that year. Second, fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl. In the early 2000s, the field garnered increased scientific, political, and commercial attention, which led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology, and challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, culminating in a public debate between Drexler and Smalley in 2001 and 2003. Meanwhile, commercialization of products based on advances in nanoscale technologies began emerging; these products are limited to bulk applications of nanomaterials and do not involve atomic control of matter. Governments moved to promote and fund research into nanotechnology, as in the U.S. By the mid-2000s new and serious scientific attention began to flourish, and projects emerged to produce nanotechnology roadmaps that center on atomically precise manipulation of matter and discuss existing and projected capabilities and goals.

Nanotechnology is the engineering of functional systems at the molecular scale; this covers both current work and concepts that are more advanced. One nanometer is one billionth, or 10^-9, of a meter. By comparison, typical carbon-carbon bond lengths, or the spacing between atoms in a molecule, are in the range 0.12-0.15 nm, while the smallest cellular life forms, bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm, following the definition used by the National Nanotechnology Initiative in the US; the lower limit is set by the size of atoms, since nanotechnology must build its devices from atoms and molecules.

25.
The New York Times
–
The New York Times is an American daily newspaper, founded and continuously published in New York City since September 18, 1851, by The New York Times Company. The New York Times has won 119 Pulitzer Prizes, more than any other newspaper. The paper's print version in 2013 had the second-largest circulation, behind The Wall Street Journal, and the largest circulation among metropolitan newspapers in the US; The New York Times is ranked 18th in the world by circulation. Following industry trends, its weekday circulation had fallen by 2009 to fewer than one million. Nicknamed "The Gray Lady", The New York Times has long been regarded within the industry as a newspaper of record. Its international version, formerly the International Herald Tribune, is now called the New York Times International Edition. The paper's motto, "All the News That's Fit to Print", appears in the upper left-hand corner of the front page. On Sunday, The New York Times is supplemented by the Sunday Review, The New York Times Book Review, The New York Times Magazine, and T. Early investors in the company included Edwin B. Morgan and Edward B. Wesley. The paper's founding statement declared: "We do not believe that everything in Society is either right or exactly wrong, — what is good we desire to preserve and improve, — what is evil, to exterminate." In 1852, the newspaper started a western division, The Times of California, which arrived whenever a mail boat got to California; when local California newspapers came into prominence, the effort failed. The newspaper shortened its name to The New-York Times in 1857 and dropped the hyphen in the city name in the 1890s. One of the earliest public controversies it was involved with was the Mortara Affair, the subject of twenty editorials in the paper alone. During the New York City draft riots, Henry Raymond, owner and editor of The New York Times, held off the rioters with Gatling guns at the paper's offices on Newspaper Row, across from City Hall. In 1869, Raymond died, and George Jones took over as publisher. After the paper exposed the corruption of William "Boss" Tweed's ring, Tweed offered The New York Times five million dollars not to publish the story. In the 1880s, The New York Times transitioned gradually from editorially supporting Republican Party candidates to becoming more politically independent and analytical; in 1884, the paper supported Democrat Grover Cleveland in his first presidential campaign. While this move cost The New York Times readership among its more progressive and Republican readers, the paper eventually regained most of its lost ground within a few years. However, the newspaper was financially crippled by the Panic of 1893. The paper slowly acquired a reputation for even-handedness and accurate modern reporting, especially by the 1890s under the guidance of Adolph Ochs. Under Ochs' guidance, continuing and expanding upon the Henry Raymond tradition, The New York Times achieved international scope, circulation, and reputation. In 1910, the first air delivery of The New York Times to Philadelphia began, and the paper's first trans-Atlantic delivery by air to London occurred in 1919 by dirigible; an airplane edition was later flown to Chicago so it could be in the hands of Republican convention delegates by evening. In the 1940s, the paper extended its breadth and reach: the crossword began appearing regularly in 1942, and the fashion section in 1946.

26.
Frank Wilczek
–
Frank Anthony Wilczek is an American theoretical physicist, mathematician, and Nobel laureate. Wilczek, along with David Gross and H. David Politzer, was awarded the Nobel Prize in Physics in 2004 for their discovery of asymptotic freedom in the theory of the strong interaction, and he is on the Scientific Advisory Board for the Future of Life Institute. Born in Mineola, New York, of Polish and Italian origin, Wilczek was educated in the schools of Queens; it was around this time that Wilczek's parents realized he was exceptional, in part as a result of his having been administered an IQ test. Wilczek holds the Herman Feshbach Professorship of Physics at the MIT Center for Theoretical Physics. He worked at the Institute for Advanced Study in Princeton and the Institute for Theoretical Physics at the University of California, Santa Barbara, and was also a visiting professor at NORDITA. Wilczek became a member of the Royal Netherlands Academy of Arts and Sciences and was awarded the Lorentz Medal in 2002. Wilczek won the Lilienfeld Prize of the American Physical Society in 2003; in the same year he was awarded the Faculty of Mathematics and Physics Commemorative Medal from Charles University in Prague, and he was co-recipient of the 2003 High Energy and Particle Physics Prize of the European Physical Society. Wilczek was also co-recipient of the 2005 King Faisal International Prize for Science, and on January 25, 2013 he received an honorary doctorate from the Faculty of Science and Technology at Uppsala University, Sweden. He currently serves on the board of the Society for Science & the Public. Wilczek has appeared on an episode of Penn & Teller: Bullshit!, where Penn referred to him as the smartest person they had ever had on the show. In 2014, Wilczek penned a letter, along with Stephen Hawking and two other scholars, warning that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Asymptotic freedom, which was also discovered by H. David Politzer, was important for the development of quantum chromodynamics. Wilczek has helped reveal and develop axions, anyons, asymptotic freedom, and the color superconducting phases of quark matter, and he has worked on condensed matter physics, astrophysics, and particle physics. In 2012 he proposed the idea of a space-time crystal; in 2017, that theory appears to have been proven correct. His books include A Beautiful Question: Finding Nature's Deep Design (Allen Lane, 2015), The Lightness of Being: Mass, Ether, and the Unification of Forces, Fantastic Realities: 49 Mind Journeys and a Trip to Stockholm, and Longing for the Harmonies: Themes and Variations from Modern Physics; his 2002 essay "On the world's numerical recipe" appeared in Daedalus 131, 142–47. He has also given a foraTV talk, "The Large Hadron Collider and Unified Field Theory", and a radio interview with Frank Wilczek aired on the Lewis Burke Frumkes Radio Show on 10 April 2011.

27.
Nature (journal)
–
Nature is an English multidisciplinary scientific journal, first published on 4 November 1869. Nature claims a readership of about 3 million unique readers per month, and the journal has a circulation of around 53,000. There are also sections on books and arts; the remainder of the journal consists mostly of research papers, which are often dense and highly technical. There are many fields of research in which important new advances and original research are published in Nature, and the papers that have appeared in the journal are internationally acclaimed for maintaining high research standards. In 2007 Nature received the Prince of Asturias Award for Communication and Humanities. The enormous progress in science and mathematics during the 19th century was recorded in journals written mostly in German or French, as well as in English. Britain underwent enormous technological and industrial changes and advances, particularly in the latter half of the 19th century, and during this period the number of popular science periodicals doubled from the 1850s to the 1860s. According to the editors of these popular science magazines, the publications were designed to serve as "organs of science", in essence a means of connecting the public to the scientific world. Nature, first created in 1869, was not the first magazine of its kind in Britain: Recreative Science, for example, had attempted to include more physical sciences such as astronomy and archaeology. Two other journals produced in England prior to the development of Nature were the Quarterly Journal of Science and Scientific Opinion, established in 1864 and 1868 respectively; these similar journals all ultimately failed. The Popular Science Review survived longest, lasting 20 years and ending its publication in 1881; Recreative Science ceased publication as the Student; the Quarterly Journal, after undergoing a number of editorial changes, ceased publication in 1885; The Reader terminated in 1867; and, finally, Scientific Opinion lasted a mere two years. Janet Browne has proposed that "far more than any other science journal of the period, Nature was conceived, born, and raised to serve polemic purpose". Perhaps it was in part its scientific liberality that made Nature a longer-lasting success than its predecessors, and this liberality is what Lockyer's journal displayed from the start. Norman Lockyer, the founder of Nature, was a professor at Imperial College; he was succeeded as editor in 1919 by Sir Richard Gregory, who helped to establish Nature in the scientific community. During the years 1945 to 1973, the editorship of Nature changed three times: first in 1945 to A. J. V. Gale and L. J. F. Brimble, then to John Maddox in 1965, and finally to David Davies in 1973. In 1980, Maddox returned as editor and retained his position until 1995. Philip Campbell has since become Editor-in-Chief of all Nature publications.

28.
Isaac Asimov
–
Isaac Asimov was an American writer and professor of biochemistry at Boston University, known for his works of science fiction and popular science. Asimov was a prolific writer who wrote or edited more than 500 books; his books have been published in nine of the ten major categories of the Dewey Decimal Classification. Asimov wrote hard science fiction and, along with Robert A. Heinlein and Arthur C. Clarke, was considered one of the "Big Three" science fiction writers during his lifetime. Asimov's most famous work is the Foundation series; his other major series are the Galactic Empire series and the Robot series, and the Galactic Empire novels are set in the earlier history of the same fictional universe as the Foundation series. He wrote hundreds of short stories, including the social science fiction story "Nightfall". Asimov wrote the Lucky Starr series of juvenile science-fiction novels using the pen name Paul French, and he also wrote mysteries and fantasy, as well as much nonfiction. Most of his science books explain scientific concepts in a historical way, and he often provides nationalities, birth dates, and death dates for the scientists he mentions, as well as etymologies. Asimov was a long-time member and vice president of Mensa International, albeit reluctantly; he described some members of that organization as "brain-proud and aggressive about their IQs". He took more joy in being president of the American Humanist Association. The asteroid 5020 Asimov, a crater on the planet Mars, a Brooklyn elementary school, and a literary award are named in his honor. Asimov's date of birth is uncertain; it fell sometime between October 4, 1919 and January 2, 1920, and his exact date of birth within that range is unknown. The family name derives from a word for winter crops, in which his great-grandfather dealt; this word is spelled озимые in Russian and азімыя in Belarusian, and phonetically the two words are almost identical, because in Russian an unstressed о in the first syllable is always pronounced as а. Accordingly, his name originally was Исаак Озимов in Russian; he is known in Russia today as Ayzek Azimov. Asimov had two siblings: a sister, Marcia, and a brother, Stanley, who was vice-president of New York Newsday. His family emigrated to the United States when he was three years old. Since his parents always spoke Yiddish and English with him, he never learned Russian, but he remained fluent in Yiddish as well as English. Growing up in Brooklyn, New York, Asimov taught himself to read at the age of five. After becoming established in the U.S., his parents owned a succession of candy stores in which everyone in the family was expected to work.

29.
Three Laws of Robotics
–
The Three Laws of Robotics are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the "Handbook of Robotics, 56th Edition, 2058 A.D.", are: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. These form an organizing principle and unifying theme for Asimov's robot-based fiction, appearing in his Robot series, the stories linked to it, and his Lucky Starr series of young-adult fiction. The Laws are incorporated into almost all of the positronic robots appearing in his fiction; other authors working in Asimov's fictional universe have adopted them, and references, often parodic, appear throughout science fiction as well as in other genres. The original laws have been altered and elaborated on: Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other, and in later fiction, where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth, law: a robot may not harm humanity, or, by inaction, allow humanity to come to harm. The Three Laws, and the zeroth, have pervaded science fiction and are referred to in books, films, and other media. In The Rest of the Robots, published in 1964, Asimov noted that when he began writing in 1940 he felt that one of the stock plots of science fiction was that robots were created and destroyed their creator: "Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?" He decided that in his stories robots would not "turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust". Asimov soon began writing "my own story of a sympathetic and noble robot", and thirteen days later he took "Robbie" to John W. Campbell, the editor of Astounding Science-Fiction; Frederik Pohl published "Robbie" in Astonishing Stories magazine the following year. Asimov attributes the Three Laws to John W. Campbell, from a conversation that took place on 23 December 1940; Campbell claimed that Asimov had the Three Laws already in his mind. Several years later Asimov's friend Randall Garrett attributed the Laws to a symbiotic partnership between the two men, a suggestion that Asimov adopted enthusiastically. Although Asimov pins the creation of the Three Laws on one particular date, their appearance in his fiction happened over a period of time: he wrote two robot stories with no explicit mention of the Laws, "Robbie" and "Reason", in which he assumed that robots would have certain inherent safeguards, and his third robot story makes the first mention of the First Law but not the other two. All three Laws finally appeared together in "Runaround". (Elsewhere in Asimov's fiction, the idea of a robot protecting human lives when it does not believe those humans truly exist is shown to be at odds with Elijah Baley's reasoning.)
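
Because the Laws are explicitly ranked (the Zeroth or First Law overrides the Second, which overrides the Third), their interaction can be pictured as a simple priority check. The sketch below is not from Asimov or any real robotics system; it is a toy Python illustration, with hypothetical names, of how a strict precedence ordering among rules can be evaluated.

    # Toy illustration of a strict precedence ordering among rules, loosely modeled
    # on Asimov's fictional Laws. All names here are hypothetical; this is not a
    # real robotics or safety API, just a sketch of lexicographic rule priority.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_humanity: bool = False       # would violate the Zeroth Law
        harms_a_human: bool = False        # would violate the First Law
        disobeys_an_order: bool = False    # would violate the Second Law
        endangers_the_robot: bool = False  # would violate the Third Law

    def choose(actions):
        """Pick the action whose Law violations are least severe, comparing
        lexicographically from the highest-priority Law downward."""
        return min(actions, key=lambda a: (a.harms_humanity, a.harms_a_human,
                                           a.disobeys_an_order, a.endangers_the_robot))

    # Obeying an order would harm a human, so the robot disobeys instead, even
    # though disobedience violates the lower-priority Second Law.
    obey = Action("carry out the order", harms_a_human=True)
    refuse = Action("refuse the order", disobeys_an_order=True)
    assert choose([obey, refuse]) is refuse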

30.
Eliezer Yudkowsky
–
Eliezer Shlomo Yudkowsky is an American decision theorist known for popularizing the idea of friendly artificial intelligence. He is a co-founder and research fellow at the Machine Intelligence Research Institute (MIRI). Much of his research concerns how to design AI systems that will not resist correction or shutdown by their operators; systems that do not exhibit these behaviors have been termed "corrigible" systems, and both theoretical and practical work in this area appears tractable and useful. These lines of research are discussed in MIRI's 2015 technical agenda. Yudkowsky studies decision theories that achieve better outcomes than causal decision theory in Newcomblike problems; this includes decision procedures that allow agents to cooperate with equivalent reasoners in the prisoner's dilemma. Yudkowsky has also written on theoretical prerequisites for self-verifying software. Because an AI system may be able to improve its own design, AI's impact could increase very quickly after it reaches a certain level of capability: in the intelligence explosion scenario inspired by I. J. Good's hypothetical, recursively self-improving AI systems quickly transition from subhuman general intelligence to superintelligence. In Superintelligence, Nick Bostrom cites writing by Yudkowsky on inductive value learning and on the risk of anthropomorphizing advanced AI systems. As with the interaction between humans and other species in the environment, the resulting problems could be the result of competition for resources rather than malice; and, in comparison with other evolutionary changes, relatively little time separated our hominid ancestors from the evolution of humans. In February 2009, Yudkowsky founded LessWrong, a community blog devoted to refining the art of human rationality, as a spin-off from Overcoming Bias, which has since functioned as Robin Hanson's personal blog. LessWrong has been covered in depth in Business Insider, and core concepts from LessWrong have been referenced in columns in The Guardian. Yudkowsky has also written several works of fiction. His fan fiction story Harry Potter and the Methods of Rationality uses plot elements from J. K. Rowling's Harry Potter series to illustrate topics in science; The New Yorker described Harry Potter and the Methods of Rationality as a retelling of Rowling's original in an attempt to explain Harry's wizardry through the scientific method. Over 300 blogposts by Yudkowsky have been released as six books, collected in an ebook titled Rationality: From AI to Zombies. Yudkowsky identifies as an atheist and a small-l libertarian. His publications include "Levels of Organization in General Intelligence"; "Cognitive Biases Potentially Affecting Judgement of Global Risks"; "Artificial Intelligence as a Positive and Negative Factor in Global Risk"; "Complex Value Systems in Friendly AI", in Artificial General Intelligence: 4th International Conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011; and work appearing in Eden, Amnon; Moor, James; Søraker, John (eds.), Singularity Hypotheses: A Scientific and Philosophical Assessment.
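
One way to see what "cooperating with equivalent reasoners" means in the one-shot prisoner's dilemma: if two agents each know the other runs an identical decision procedure, their moves must match, so mutual cooperation beats mutual defection. The Python sketch below is a simplified toy model with standard illustrative payoffs and an invented source-comparison rule; it is not an implementation of any specific decision theory from Yudkowsky's or MIRI's papers.

    # Toy one-shot prisoner's dilemma between agents that condition their choice
    # on whether the opponent runs an identical decision procedure. The payoff
    # numbers are standard illustrative values, not taken from any paper.
    PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def equivalent_reasoner(my_source: str, opponent_source: str) -> str:
        # If the opponent verifiably runs my exact procedure, our moves must match,
        # so only (C, C) and (D, D) are reachable and cooperation is the better of
        # the two. Otherwise fall back to the standard one-shot answer: defect.
        return "C" if opponent_source == my_source else "D"

    SOURCE = "equivalent_reasoner-v1"  # stands in for the agent's own program text
    move_a = equivalent_reasoner(SOURCE, SOURCE)
    move_b = equivalent_reasoner(SOURCE, SOURCE)
    print(move_a, move_b, PAYOFFS[(move_a, move_b)])  # C C (3, 3): mutual cooperation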

31.
Machine Intelligence Research Institute
–
The Machine Intelligence Research Institute (MIRI) is a non-profit research organization focused on the safety of advanced artificial intelligence. Nate Soares is the executive director, having taken over from Luke Muehlhauser in May 2015. MIRI's technical agenda states that new formal tools are needed in order to ensure the safe operation of future generations of AI software. In early 2005, the institute, then known as the Singularity Institute for Artificial Intelligence (SIAI), relocated from Atlanta, Georgia to Silicon Valley, and from 2006 to 2012 it collaborated with Singularity University to produce the Singularity Summit, a science and technology conference; speakers included Steven Pinker, Peter Norvig, Stephen Wolfram, John Tooby, and James Randi. MIRI then gave control of the Singularity Summit to Singularity University and shifted its focus toward research in mathematics and theoretical computer science. In mid-2014, Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies helped spark public discussion about AI's long-run social impact, receiving endorsements from Bill Gates. MIRI studies strategic questions related to AI, such as: What can we predict about future AI technology? How can we improve our forecasting ability? Which interventions available today appear to be the most beneficial, given what little we do know? Beginning in 2014, MIRI has funded forecasting work through the independent AI Impacts project; AI Impacts studies historical instances of discontinuous technological change, and has developed new measures of the relative computational power of humans and computer hardware. MIRI researchers' interest in discontinuous AI progress stems from I. J. Good's argument that sufficiently advanced AI systems could eventually take over the work of AI design themselves: "the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." Writers like Bostrom use the term "superintelligence" in place of Good's "ultraintelligence", and, following Vernor Vinge, Good's idea of an intelligence explosion has come to be associated with the idea of a technological singularity. Bostrom and researchers at MIRI have expressed skepticism about the views of singularity advocates like Ray Kurzweil that superintelligence is just around the corner; MIRI researchers have advocated early safety work as a precautionary measure, while arguing that past predictions of AI progress have not been reliable. Eliezer Yudkowsky, MIRI's co-founder and senior researcher, is cited for his writing on the long-term social impact of progress in AI. Yudkowsky goes into detail about how to design a Friendly AI and writes on the importance of friendliness in smarter-than-human systems. This informal goal is reflected in MIRI's recent publications as the requirement that AI systems be aligned with human interests; however, there are still many open problems in the foundations of reasoning and decision-making. Solutions to these problems may make the behavior of very capable systems much more reliable and predictable, and these topics may benefit from being considered together, since they appear deeply linked. Standard decision procedures are not well-specified enough to be instantiated as algorithms; MIRI researchers identify logical decision theories as alternatives that perform better in general decision-making tasks. MIRI also studies self-monitoring and self-verifying software; MIRI's publications on Vingean reflection attempt to model the Gödelian limits on self-referential reasoning and identify practically useful exceptions.
Soares and Fallenstein classify the above research programs as aimed at high reliability; they separately recommend research into error-tolerant software systems, citing human error and default incentives as sources of serious risk.

32.
Artificial general intelligence
–
Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a goal of artificial intelligence research and a common topic in science fiction. Artificial general intelligence is also referred to as "strong AI", "full AI", or the ability of a machine to perform "general intelligent action". Some references emphasize a distinction between strong AI and "applied AI": the use of software to study or accomplish specific problem-solving or reasoning tasks. Weak AI, in contrast to strong AI, does not attempt to perform the full range of human cognitive abilities. Many different definitions of intelligence have been proposed, but to date none has been universally accepted. Other important capabilities include the ability to sense and the ability to act in the world where intelligent behaviour is to be observed; this would include an ability to detect and respond to hazards. Many interdisciplinary approaches to intelligence tend to emphasise the need to consider additional traits such as imagination and autonomy, and computer-based systems that exhibit many of these capabilities do exist. Scientists have varying ideas of what kinds of tests a human-level intelligent machine needs to pass in order to be considered an example of artificial general intelligence; those who have proposed such tests include the late Alan Turing, Steve Wozniak, and Ben Goertzel. A few of the tests they have proposed are: the Turing Test, in which a machine and a human both converse unseen with a second human, who must decide which of the two is the machine; the Coffee Test, in which a machine is given the task of going into an average American home and figuring out how to make coffee (it has to find the coffee machine, find the coffee, add water, and find a mug); the Robot College Student Test, in which a machine is given the task of enrolling in a university, taking and passing the same classes that humans would, and obtaining a degree; and the Employment Test, in which a machine is given the task of working an economically important job. These tests cover a variety of qualities that a machine might need to have to be considered AGI, including the ability to reason and learn. To call a problem AI-complete reflects an attitude that it would not be solved by a specific algorithm. AI-complete problems are hypothesised to include computer vision and natural language understanding; currently, AI-complete problems cannot be solved with modern computer technology alone, and also require human computation. This property can be useful, for instance to test for the presence of humans, as with CAPTCHAs. Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that strong AI was possible; as AI pioneer Herbert A. Simon wrote in 1965, "machines will be capable, within twenty years, of doing any work a man can do." Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000. However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project.
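
The Turing Test described above is essentially a blinded conversation protocol: a judge exchanges messages with hidden respondents and must decide which is the machine. The Python sketch below shows only the shape of a simplified, single-respondent variant of that protocol; the respondents and the judging heuristic are placeholders invented for this sketch, not a real evaluation.

    # Minimal sketch of a Turing-test-style protocol: a judge sends questions to a
    # hidden respondent and then guesses whether it was a human or a machine. The
    # respondents and the judging heuristic are placeholders; a real test would use
    # a person, a real conversational system, and a human judge.
    import random
    from typing import Callable

    def human(prompt: str) -> str:
        # Stand-in for a person's reply; varies with the question asked.
        return f"Honestly, it depends. Ask me more about {prompt!r}."

    def machine(prompt: str) -> str:
        # Placeholder chatbot that gives the same generic reply every time.
        return "That's an interesting question."

    def run_trial(questions: list[str], judge: Callable[[list[str]], str]) -> bool:
        """Run one blinded trial; return True if the judge guesses correctly."""
        label, respondent = random.choice([("human", human), ("machine", machine)])
        transcript = [respondent(q) for q in questions]  # judge never sees who answered
        return judge(transcript) == label

    def naive_judge(transcript: list[str]) -> str:
        # Toy heuristic: identical, generic replies look machine-like.
        return "machine" if len(set(transcript)) == 1 else "human"

    questions = ["What did you have for breakfast?", "Why is that funny?"]
    print(run_trial(questions, naive_judge))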


Hume asks: given knowledge of the way the universe is, in what sense can we say it ought to be different?

Few debate that one ought to run quickly if one's goal is to win a race. A tougher question may be whether one "morally ought" to want to win a race in the first place.

Even if oughts can be understood in relation to goals or needs, the greater challenge of ethical systems remains that of defining the nature and origins of the good, and in what sense one ought to pursue it.