Some references emphasize a distinction between strong AI and
"applied AI" (also called "narrow AI" or
"weak AI"): the use of software to study or accomplish specific
problem solving or reasoning tasks that do not encompass (or in some
cases are completely outside of) the full range of human cognitive
abilities.

Requirements

Many different definitions of intelligence have been proposed (such
as being able to pass the Turing test)
but there is to date no definition that satisfies everyone.
However, there is wide agreement among artificial
intelligence researchers that intelligence is required to do the
following: reason, use strategy, solve puzzles, and make judgments
under uncertainty; represent knowledge, including commonsense
knowledge; plan; learn; and communicate in natural language. Some
definitions also count traits such as consciousness, sentience, and
self-awareness.

It remains to be shown whether any of these traits are necessary for strong
AI—for example, it is not clear if consciousness is necessary for a
machine to reason as well as human beings can. It is also not clear
whether any of these traits are sufficient for intelligence: if a
machine is built with a device that simulates the neural correlates of
consciousness, would it automatically have the ability to
represent knowledge or use natural language? It is also possible
that some of these properties, such as sentience, naturally emerge
from a fully intelligent machine, or that it becomes natural to
ascribe these properties to machines once they begin to
act in a way that is clearly intelligent. For example, intelligent
action may be sufficient for sentience, rather than the other way
around.

Research approaches

History of mainstream AI research

Modern AI research began in the mid-1950s. The first generation of
AI researchers were convinced that strong AI was possible and that
it would exist in just a few decades. As AI pioneer Herbert Simon wrote in 1965: "machines will be
capable, within twenty years, of doing any work a man can do."
Their predictions inspired Stanley Kubrick and Arthur C. Clarke's
character HAL 9000, who embodied what AI researchers believed they
could create by the year 2001. Notably, AI pioneer Marvin Minsky was
a consultant on the project of making HAL 9000 as realistic as
possible according to the consensus predictions of the time; he said
on the subject in 1967, "Within a generation...the problem of
creating 'artificial intelligence' will substantially be solved."

However, in the early 1970s, it became obvious that researchers had
grossly underestimated the difficulty of the project. The agencies
that funded AI became skeptical of strong AI and put researchers
under increasing pressure to produce useful technology, or "applied
AI". As the eighties began, Japan's fifth generation computer project
revived interest in strong AI, setting out a ten year timeline that
included strong AI goals like "carry on a casual conversation". In
response to this and the success of expert systems, both industry and government
pumped money back into the field. However, the market for AI
spectacularly collapsed in the late 1980s and the goals of the fifth
generation computer project were never fulfilled. For the second
time in 20 years, AI researchers who had predicted the imminent
arrival of strong AI had been shown to be fundamentally mistaken
about what they could accomplish.

By the 1990s, AI researchers had gained a reputation for making
promises they could not keep. They became reluctant to make
predictions at all and avoided any mention of "human level"
artificial intelligence, for fear of being labeled "wild-eyed
dreamers." Confidence in the field arguably saw a resurgence with
events such as Deep Blue's 1997 chess victory over world champion
Garry Kasparov.

Mainstream AI research

For the most part, researchers today choose to focus on specific
sub-problems where they can produce verifiable results and
commercial applications, such as neural
nets, computer vision, or
data mining.

Most mainstream AI researchers hope that strong AI can be developed
by combining the programs that solve various subproblems using an
integrated agent architecture,
cognitive architecture or
subsumption architecture.
Hans Moravec wrote in 1988 "I am
confident that this bottom-up route to artificial intelligence will
one day meet the traditional top-down route more than half way,
ready to provide the real world competence and the commonsense knowledge that has
been so frustratingly elusive in reasoning programs. Fully
intelligent machines will result when the metaphorical golden spike is driven uniting the two
efforts."

Artificial general intelligence

Artificial general intelligence research aims to create AI that can
replicate human-level intelligence completely, often called an
artificial general intelligence (AGI) to distinguish it from less
ambitious AI projects. (The concept derives from the psychometric
notion of natural general intelligence, often denoted "g", though no
adherence to any particular theory of g is implied.) As yet,
researchers have devoted little
attention to AGI, with some claiming that intelligence is too
complex to be completely replicated in the near term. Some small
groups of computer scientists are doing AGI research, however.
Organizations pursuing AGI include Adaptive AI, the Artificial
General Intelligence Research Institute, the Singularity Institute
for Artificial Intelligence (with the open-source OpenCog project),
and TexAI. One recent addition is Numenta, a project based on the
theories of Jeff Hawkins, the creator of the PalmPilot. While
Numenta takes a computational approach to general intelligence,
Hawkins is also the founder of the Redwood Neuroscience Institute,
which explores conscious thought from a biological perspective. AND
Corporation has been active in this field since 1990 and has
developed machine intelligence processes based on phase coherence
principles, with strong similarities to digital holography and to
quantum mechanics with respect to collapse of the wave function.

Simulated human brain model

A simulated human brain model could be one of the quickest means of
achieving strong AI, as it does not require a complete understanding
of how intelligence works. In essence, a very powerful computer
would simulate a human brain, often in the form of a network of
neurons. For example, given a map of all (or most) of the neurons
in a functional human brain, and a good understanding of how a
single neuron works, it is theoretically possible for a computer
program to simulate the working brain over time. Given some method
of communication, this simulated brain might then be shown to be
fully intelligent. The exact form of the simulation varies: instead
of neurons, a simulation might use groups of neurons, or
alternatively, individual molecules might be simulated. It is also
unclear which portions of the human brain would need to be modeled:
humans can still function while missing portions of their brains,
and areas of the brain are associated with activities (such as
breathing) that might not be necessary to think.
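
To make this idea concrete, the sketch below simulates a tiny network
of leaky integrate-and-fire neurons, one of the simplest neuron
models used in such simulations. The network size, constants, and
random connectivity are illustrative assumptions, not biological
values; a real brain simulation would involve billions of far more
detailed units.

    # Minimal sketch: a tiny network of leaky integrate-and-fire neurons.
    # All sizes and constants are illustrative, not biologically calibrated.
    import numpy as np

    N = 100                    # number of neurons (a real brain has ~10^11)
    dt = 1e-3                  # simulation timestep: 1 ms
    tau = 20e-3                # membrane time constant in seconds
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # potentials (mV)

    rng = np.random.default_rng(0)
    weights = rng.normal(0.0, 0.5, (N, N))  # random synaptic weights (assumed)
    v = np.full(N, v_rest)                  # membrane potential of each neuron

    for step in range(1000):               # one second of model time
        spiked = v >= v_thresh              # which neurons fired this step
        v[spiked] = v_reset                 # reset the neurons that fired
        synaptic = weights @ spiked.astype(float)  # input from firing neighbours
        external = rng.normal(1.0, 0.5, N)  # stand-in for sensory input
        # Leaky integration: decay toward rest plus synaptic and external drive.
        v += (dt / tau) * (v_rest - v) + synaptic + external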

Speculation: human brains have developed to accommodate certain
necessities, such as breathing and interpreting sensory input from
a variety of sources. Without adequate simulations of these
necessities (such as, for example, input that simulates the
sensation of sufficient oxygen levels in the body), it is possible
that an artificial brain could have difficulty functioning. In
addition, human brains are reliant for stability on a number of
mediating factors, including stages of development and external
training. An artificial duplicate of the human brain, without input
of mediation, could conceivably suffer from a number of cognitive
and functional difficulties. In addition, the construction and
sustenance of an artificial brain raises moral questions, namely
regarding personhood, freedom, and death. Does a "brain in a box" constitute a person?
What rights would such an entity have, under law or otherwise? Once
activated, would human beings have the obligation to continue its
operation? Would the shutdown of an artificial brain constitute
death, sleep, unconsciousness, or some other state for
which no human description exists? After all, an artificial brain
is not subject to the post-mortem cellular decay (and associated
loss of function) that human brains are, so an artificial brain
could, in theory, resume functioning exactly as it was before it
was shut down.

This approach would require three things:

Hardware. An extremely powerful computer would be
required for such a model. Futurist Ray Kurzweil, in the book "The Singularity Is Near" (2005),
looks at various estimates for the hardware required to equal the
human brain and writes "These estimates all result in comparable
orders of magnitude (10^14 to 10^15 cps). Given the early stage of
human-brain reverse engineering, I will use a more conservative
figure of 10^16 cps for our subsequent discussions." 10^16 cps is
equivalent to 10 petaflops. Using Top500 projections, such levels of
computing power might be reached by the top-performing CPU-based
supercomputers somewhere between ~2015 (for 100 petaflops) and a
more conservative ~2025 (for 100,000 petaflops). However,
considering that GPU and stream processing power appears to double
every year, these estimates might be reached much sooner using GPGPU
processing, as high-end GPUs such as the AMD FireStream can already
process over one teraflop. The overhead introduced by modeling the
biological, chemical, and physical details of neural behaviour
(especially on a molecular scale) might, however, require a
simulator to have access to computational power much greater than
that of the brain itself, and current simulations and estimates do
not account for glial cells, which outnumber neurons roughly 10:1.
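
The arithmetic behind these figures is simple to check. The sketch
below redoes the estimate using the commonly cited rough synapse
count and an assumed average update rate; both numbers are
approximations for illustration, not measured constants.

    # Back-of-envelope check of the hardware estimates quoted above.
    synapses = 1e14        # ~100 trillion synapses (rough figure)
    rate_hz = 100          # assumed average updates per synapse per second

    cps = synapses * rate_hz               # "calculations per second"
    print(f"{cps:.0e} cps")                # 1e16 cps, Kurzweil's figure
    print(f"~{cps / 1e15:.0f} petaflops")  # 10 petaflops, as stated above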

Software. Software to simulate the function of a brain
would be required. This assumes, as is the consensus in
neuroscience, that the human mind is a product of the central
nervous system and is governed by currently known and understood
physical laws. Constructing the simulation
would require a great deal of knowledge about the physical and
functional operation of the human brain, and might require detailed
information about a particular human brain's structure. Information
would be required both of the function of different types of
neurons, and of how they are connected. Note that the particular
form of the software dictates the hardware necessary to run it. For
example, an extremely detailed simulation including molecules or
small groups of molecules would require enormously more processing
power than a simulation that models neurons using a simple
equation, and a more accurate model of a neuron would be expected
to be much more expensive computationally than a simple model. The
more neurons in the simulation, the more processing power it would
require.
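
The scaling described here can be illustrated with rough arithmetic;
the per-neuron operation counts below are invented for illustration,
chosen only to show how model detail multiplies the requirement.

    # Illustrative scaling: neuron-model detail multiplies compute cost.
    # Operation counts per neuron per timestep are assumptions.
    neurons = 1e11           # ~10^11 neurons in a human brain
    steps_per_second = 1e3   # 1 ms simulation timestep

    for model, ops_per_neuron in [("simple point neuron", 1e1),
                                  ("multi-compartment neuron", 1e4),
                                  ("molecular-scale model", 1e9)]:
        flops = neurons * ops_per_neuron * steps_per_second
        print(f"{model}: ~{flops:.0e} FLOPS for real time")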

Understanding. Finally, building such a model requires sufficient
understanding of the brain to model it mathematically. This
could be done either by understanding the central nervous system,
or by mapping and copying it. Neuroimaging technologies are improving
rapidly, and Kurzweil predicts that a map of sufficient quality
will become available on a similar timescale to the required
computing power. However, the simulation would also have to capture
the detailed cellular behaviour of neurons and glial cells, presently only understood in the
broadest of outlines.

Once such a model is built, it would be easy to alter and thus open
to trial-and-error experimentation. This would likely lead to large
advances in understanding, allowing the model's intelligence to be
improved or its motivations to be altered.

The Blue Brain project aims to use one of
the fastest supercomputer architectures in the world, IBM's Blue Gene platform, to
simulate a single neocortical column
consisting of approximately 60,000 neurons and 5 km of
interconnecting synapses. The eventual goal of the project is to
use supercomputers to replicate an entire brain, Henry Markram,
director of the Blue Brain project, announced at the TED conference
in 2009; he believes this could be achievable in as little as ten
years.

The brain gets its power from performing many parallel operations,
a standard computer from performing operations very quickly.
Supercomputers, however, also perform many operations in parallel.
Good examples of this are the Cray and NEC vector computers, which
operate as a single machine but perform many calculations at once,
and any form of cluster computing, where multiple single computers
operate as one. The human brain has roughly 100 billion neurons
operating simultaneously, connected by roughly 100 trillion
synapses. By comparison, a modern computer microprocessor uses only
about 1.7 billion transistors. Although estimates of the brain's
processing power put it at around 10^14 (100 trillion) neuron
updates per second, the first unoptimized simulations of a human
brain in real time are expected to require a computer capable of
10^18 FLOPS. A non-real-time simulation of a human-scale brain
model (10^11 neurons) was performed in 2005; it took 50 days on a
cluster of 27 processors to simulate 1 second of the model. By
comparison, a general-purpose CPU (circa 2006) operates at a few
GFLOPS (10^9 FLOPS), and each FLOP may require as many as 20,000
logic operations.
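
These figures can be cross-checked directly. The sketch below takes
the quoted 2005 run at face value and assumes linear scaling, which
is optimistic; the per-processor speed is an assumed value taken
from the "few GFLOPS" remark above.

    # Rough consistency check of the 2005 simulation figures quoted above.
    wall_seconds = 50 * 86_400    # 50 days of wall-clock time
    model_seconds = 1             # model time simulated

    slowdown = wall_seconds / model_seconds
    print(f"slowdown: ~{slowdown:.1e}x real time")   # ~4.3e6

    cluster_flops = 27 * 5e9      # 27 processors at ~5 GFLOPS each (assumed)
    required = cluster_flops * slowdown
    print(f"implied real-time need: ~{required:.0e} FLOPS")  # ~6e17, near 10^18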

However, a neuron is estimated to spike at most about 200 times per
second (giving an upper limit on the number of operations), and
signals between neurons are transmitted at a maximum speed of 150
meters per second. A modern 2 GHz processor operates at 2 billion
cycles per second, roughly 10,000,000 times faster than a human
neuron, and signals in electronic computers travel at roughly half
the speed of light, faster than signals in the brain by a factor of
about 1,000,000. The brain consumes about 20 W of power, whereas a
supercomputer may use as much as 1 MW, some 50,000 times more
(note: the Landauer limit is about 3.5×10^20 operations per second
per watt at room temperature).
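
The figure in that parenthesis can be recovered from Landauer's
bound on the energy needed to erase one bit, taking room temperature
as T = 300 K:

    E_{\min} = k_B T \ln 2
             \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \cdot (300\,\mathrm{K}) \cdot 0.693
             \approx 2.9 \times 10^{-21}\,\mathrm{J\ per\ bit}

    \frac{1\,\mathrm{W}}{2.9 \times 10^{-21}\,\mathrm{J/op}}
             \approx 3.5 \times 10^{20}\ \mathrm{op/s/W}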

Critics of this approach believe that it is possible to achieve AI
directly, without imitating nature, often using the analogy that
early attempts to construct flying machines modeled them after
birds, yet modern aircraft do not look like birds. The direct
approach is taken in "AI - What is this", where it is argued that,
given a formal definition of AI, an intelligent program could in
principle be found by enumerating all possible programs and testing
each of them to see whether it produces artificial intelligence.
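
The enumeration idea can be sketched in a few lines. The test
predicate satisfies_ai_definition below is hypothetical, standing in
for the formal definition that the approach presupposes; the sketch
also glosses over the time bounds needed to dodge the halting
problem.

    # Toy sketch of the "enumerate all programs" approach.
    from itertools import count, product

    ALPHABET = "abcdefghijklmnopqrstuvwxyz():= \n"  # toy program alphabet

    def all_programs():
        """Yield every string over the alphabet, shortest first."""
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                yield "".join(chars)

    def satisfies_ai_definition(source: str) -> bool:
        """Hypothetical: run `source` under a time bound and test it
        against a formal definition of intelligence (none exists yet)."""
        raise NotImplementedError

    def find_ai():
        for program in all_programs():
            if satisfies_ai_definition(program):
                return program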

Artificial consciousness research

Artificial consciousness research aims to create and study
artificially conscious systems. Igor
Aleksander argues that the principles for creating a conscious
machine already exist but that it would take forty years to train
such a machine to understand language.

Franklin’s Intelligent Distribution Agent

Stan Franklin (1995, 2003) defines an
autonomous agent as possessing
functional consciousness when it is
capable of several of the functions of consciousness as identified
by Bernard Baars’ Global Workspace Theory (GWT). His
brain child IDA (Intelligent Distribution Agent) is a software
implementation of GWT, which makes it functionally conscious by
definition. IDA’s task is to negotiate new assignments for sailors
in the US Navy after they end a tour of duty, by matching each
individual’s skills and preferences with the Navy’s needs. IDA
interacts with Navy databases and communicates with the sailors via
natural language email dialog while obeying a large set of Navy
policies. The IDA computational model was developed during
1996-2001 at Stan Franklin’s "Conscious" Software Research Group at
the University of
Memphis. It "consists of approximately a quarter-million lines
of Java code, and almost
completely consumes the resources of a 2001 high-end workstation."
It relies heavily on codelets, which are "special purpose,
relatively independent, mini-agent[s] typically implemented as a
small piece of code running as a separate thread." In IDA’s
top-down architecture, high-level cognitive functions are
explicitly modeled. While IDA is functionally
conscious by definition, Franklin does “not attribute phenomenal consciousness to [his]
own 'conscious' software agent, IDA, in spite of her many
human-like behaviours. This in spite of watching several US Navy
detailers repeatedly nodding their heads saying 'Yes, that’s how I
do it' while watching IDA’s internal and external actions as she
performs her task."
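
A minimal illustration of the codelet idea is given below. It is not
IDA's actual Java implementation; the names, the queue standing in
for the global workspace, and the two toy codelets are invented for
the sketch.

    # Sketch of codelets posting to a shared "global workspace" (GWT-style).
    import threading, queue

    workspace = queue.Queue()   # stand-in for the global workspace

    def codelet(name, perceive):
        """A small, independent mini-agent running in its own thread."""
        def run():
            workspace.put((name, perceive()))   # broadcast what it noticed
        threading.Thread(target=run, name=name).start()

    # Two toy codelets competing to post to the workspace.
    codelet("skill-matcher", lambda: "sailor prefers sea duty")
    codelet("policy-checker", lambda: "rotation policy satisfied")

    for _ in range(2):
        print(workspace.get())  # the "conscious broadcast" of each posting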

Haikonen’s cognitive architecture

Pentti Haikonen considers classical rule-based computing inadequate for
achieving AC: "the brain is definitely not a computer. Thinking is
not an execution of programmed strings of commands. The brain is
not a numerical calculator either. We do not think by numbers."
Rather than trying to achieve mind and consciousness by identifying and implementing
their underlying computational rules, Haikonen proposes "a special
cognitive architecture to
reproduce the processes of perception,
inner imagery, inner speech, pain,
pleasure, emotions
and the cognitive functions behind these.
This bottom-up architecture would produce higher-level functions by
the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when
implemented with sufficient complexity, this architecture will
develop consciousness, which he considers to be "a style and way of
operation, characterized by distributed signal representation,
perception process, cross-modality reporting and availability for
retrospection." Haikonen is not alone in this process view of
consciousness, or the view that AC will spontaneously emerge in
autonomous agents that have a
suitable neuro-inspired architecture of complexity; these are
shared by many, e.g. and . A low-complexity implementation of the
architecture proposed by was reportedly not capable of AC, but did
exhibit emotions as expected .
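
The flavour of such bottom-up processing, learning by correlation
rather than by executing programmed rules, can be illustrated with a
single Hebbian unit. This is a generic sketch of the idea, not
Haikonen's actual architecture; the learning rate, input pattern,
and normalisation are arbitrary choices.

    # Illustrative Hebbian neuron: weights strengthen with correlated
    # activity ("fire together, wire together"), with no stored program.
    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(0.0, 0.1, 4)   # small random initial weights
    eta = 0.1                     # learning rate (arbitrary)

    pattern = np.array([1.0, 0.0, 1.0, 0.0])     # percept to be associated
    for _ in range(50):
        x = pattern + rng.normal(0.0, 0.05, 4)   # noisy presentations
        y = w @ x                                # the unit's response
        w += eta * y * x                         # Hebbian weight update
        w /= np.linalg.norm(w)                   # normalise to stay bounded

    print(w.round(2))   # weights now align with the pattern (up to sign)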

Ben Goertzel's OpenCog

Ben Goertzel is pursuing an embodied
AGI through the open-source OpenCog project.
Current code includes embodied virtual pets capable of learning
simple English-language commands, as well as integration with
real-world robotics, being done at the robotics lab of Hugo de
Garis at Xiamen University.

John Searle distinguished between two hypotheses about artificial
intelligence:

An artificial intelligence system can think and have a mind.

An artificial intelligence system can (only) act like
it thinks and has a mind.

The first one is called "the strong AI hypothesis" and the
second is "the weak AI hypothesis" because the first one
makes the stronger statement: it assumes something special
has happened to the machine that goes beyond all its abilities that
we can test. Searle referred to the "strong AI hypothesis" as
"strong AI". This usage, which is fundamentally different than the
subject of this article, is common in academic AI research and
textbooks.

The term "strong AI" is now used to describe any artificial
intelligence system that acts like it has a mind, regardless of
whether a philosopher would be able to determine if it
actually has a mind or not. Dijkstra has been quoted as saying, "The
question of whether a computer can think is no more interesting
than the question of whether a submarine can swim."

As Russell and Norvig write: "Most AI researchers take the
weak AI hypothesis for granted, and don't care about the strong AI
hypothesis." AI researchers are instead interested in a related
statement (which some sources confusingly call "the strong AI
hypothesis"):

An artificial intelligence system can think (or act
like it thinks) as well as or better than people do.

This assertion, which hinges on the breadth and power of machine
intelligence, is the subject of this article.

This list of intelligent traits is based on the topics covered
by major AI textbooks.

Note that consciousness is difficult to define. A
popular definition, due to Thomas Nagel, is that it "feels like" something
to be conscious. If we are not conscious, then it doesn't feel like
anything. Nagel uses the example of a bat: we can sensibly ask
"what does it feel like to be a bat?" However, we are unlikely to
ask "what does it feel like to be a toaster?" Nagel concludes that
a bat appears to be conscious (i.e. has consciousness) but a
toaster does not.

The Lighthill report specifically criticized
AI's "grandiose objectives" and led to the dismantling of AI
research in England. In the U.S., DARPA became determined to fund
only "mission-oriented direct research, rather than basic
undirected research".

As AI founder John McCarthy wrote
in his Reply to Lighthill, "it would be a great relief
to the rest of the workers in AI if the inventors of new general
formalisms would express their hopes in a more guarded form than
has sometimes been the case."

"At its low point, some computer scientists and software
engineers avoided the term artificial intelligence for fear of
being viewed as wild-eyed dreamers."

As defined in a standard AI textbook: "The assertion that
machines could possibly act intelligently (or, perhaps better, act
as if they were intelligent) is called the 'weak AI' hypothesis by
philosophers, and the assertion that machines that do so are
actually thinking (as opposed to simulating thinking) is called the
'strong AI' hypothesis."