Imagine...

Imagine if computers could learn and think. If machines were truly 'intelligent.' If software were more flexible and
adaptable, working the way you want it to. If you could converse with your computer in plain English. If your business
could operate more efficiently and effectively, with lower cost and higher customer satisfaction, by using more intelligent
IT systems.

This optimistic vision is rapidly moving closer to reality. The foundational knowledge and technology to build computers
with human-level learning and thinking ability are now finally emerging. Recent advances in computer technology combined
with insights from fields as varied as psychology, philosophy, evolution, brain physiology, and information theory allow
us to finally solve the previously intractable problems of creating real AI. The long-promised power of truly intelligent
machines will soon be available to help us solve the many problems facing mankind.

We expect skepticism. Haven't we been promised real artificial intelligence for 30 years or more? Yet all we see around
us are 'stupid' computer programs that don't understand what we actually want to do, and respond with cryptic error messages
when things go wrong. What is more, they cannot adapt to changing circumstances or requirements, and they don't learn
from their mistakes.

A new approach to AI, called 'artificial general intelligence', or AGI, has emerged. It promises to finally overcome
the limitations of traditional AI, and usher in a new era of vastly superior computer systems and tools.

What exactly is AGI, and how does it differ from conventional AI?

Computer systems based on AGI technology ('AGIs') are specifically engineered to be able to learn. They can acquire
a wide range of knowledge and skills through learning, much as humans do. Unlike current computer systems, AGIs do not
need to be programmed to perform new tasks. Instead, they are simply instructed and taught by humans. Additionally,
these systems can learn by themselves, both implicitly 'on the job' and explicitly by reading and practicing. Furthermore,
just like humans, they resiliently adapt to changing circumstances.

This general ability to learn through natural interaction with the environment, as well as from teachers, allows them
to autonomously expand and adapt their abilities over time: they become ever more knowledgeable, smarter, and more useful.

In addition to their intrinsic learning ability, AGIs are also designed to function in a goal-directed manner. This
means that they automatically focus their attention on information and activities that are likely to help solve problems
they have been given. For example, an AGI trained and instructed to look for inconsistencies in arthritis medication
studies will spend its time perusing relevant articles, news, and background information, and request pertinent additional
information or clarification from other researchers. On the other hand, an AGI assigned to be a personal assistant will
seek out knowledge and skills necessary for that job, such as learning how to deal with various types of business associates,
schedules, priorities, and travel arrangements, as well as the personal preferences
of its boss.

AGIs learn both conceptually and contextually. Conceptual learning means that knowledge is assimilated in a suitably
generalized and abstract form: skills acquired for one task are available for similar, but non-identical, tasks, which
also makes the system far more useful and robust when coping with environmental changes. Context, on the other hand,
allows the system to utilize relevant background information to appropriately tailor its responses to each specific
situation. It can take into account such crucial factors as recent actions and events, current goals and priorities,
who it is communicating with, and anything else that affects its current actions.

Other central AGI features include an ability to anticipate events and outcomes, and an ability to introspect, that
is, to be aware of its own cognitive states (such as novelty, confusion, certainty, and its level of ability). These
design features, combined with the fact that AGIs directly perceive their environments via built-in senses, endow them
with human-like understanding of facts and situations.

In contrast, systems based on conventional AI technology provide little or no learning capability beyond their initial
one-time training phase (if any). Traditional computer programs are designed for specific applications, and are incapable
of being used for any other purpose. In fact, even within their given domain any new requirements or changes to their
operating environment require costly program changes.

To use a human analogy to highlight the difference, imagine an entirely unschooled person. If we wanted to put them
to work on an assembly line, we could instruct them with a very detailed script for a specific set of actions; in other
words, rote learning, with no real understanding (like programming an 'expert system'). Or, we could
take on the much more difficult task of teaching them to read and write, to think logically and to learn. This would
enable them to learn and re-learn any number of jobs in the factory and elsewhere; and to perform them much more intelligently
with understanding. This is the AGI approach. Furthermore, an educated person (or AGI) can also manage other entities
with low-level skills, or those that possess highly specialized knowledge, thereby greatly increasing their own productivity.

In summary, an AGI's ability to learn implies a number of advantages over conventional AI technology: It can be taught,
instead of having to be programmed; it learns from experience and can learn by itself; it can deal with ambiguity and
unknown situations, know when to ask for help, and recover from errors resiliently and autonomously.

Note that all these advantages are in addition to computer systems' natural strengths: large 'photographic' memories,
high speed, accuracy, upgradeability, seamless interfacing with other systems, etc. Another key feature of such
trainable/trained systems is that, unlike skilled humans, they can be duplicated, and can efficiently pool knowledge
and experience. These capabilities allow for rapid up-scaling of production. For example, various AGIs, after having
been trained in particular specialties, could pool their knowledge and then be duplicated hundreds of times, imbuing
each one of them with their combined knowledge. From there on, these AGIs can pursue coordinated yet individual paths,
while regularly updating each other.

Making it happen

Adaptive A.I. Inc. is a small but innovative company that was founded in 2001 with the express purpose of developing
and commercializing AGI technology. After an initial three-year research phase, the company is now engaged in an ambitious
multi-year development project to actually build a fully functioning AGI with human-level cognitive ability.

While the system's initial cognitive ability will roughly match that of a child, in many respects it will be much
more capable. As indicated above, it will have encyclopedic knowledge and the patience and self-discipline of a saint,
and it will enjoy the accuracy, memory, and speed of a computer.

Because an AGI is not a copy of a human mind but something completely new, one must expect different mannerisms, much
as with someone from a different culture and background, only more so. An AGI's natural environment is that of computer
data, software tools, network resources, and the Internet, and it will interact with people via voice and text.

Our company's approach calls for the extensive leveraging of existing technology: instead of re-inventing, we aim to
capitalize on existing hardware and software components, as well as published theoretical research. We believe that,
to a large extent, the 'pieces of the puzzle' for achieving AGI already exist. Our ingenuity is applied primarily to
finding, selecting, and intelligently integrating existing know-how, while inventing and developing the crucial missing
pieces.

Why hasn't this been done before?

Given its enormous commercial potential, one may wonder why AGI isn't a well-known, well-funded area of research and
development. This is an interesting question.

Several contributing factors seem to be accidents of history. Firstly, we now find ourselves in the depths of the 'AI
winter': a period of deep pessimism and lethargy towards AGI ambitions following the spectacular failure of early AI
promises. In backlash to the unfulfilled expectations of 30 and 40 years ago, artificial intelligence is still a
swearword to many.
Without delving into detailed analysis of these early failures, suffice it to say that hardware and software technologies
and cognitive theories had simply not advanced sufficiently to enable the creation of human-level artificial intelligence.

However, while limitations of early technology were a definite handicap, several other theoretical and practical limitations,
errors, and blind spots were and are even bigger impediments. These include the following:

Belief that human-level AI is impossible. At the most basic level, this is usually caused by remnants of an ancient
philosophical position called Dualism. This idea that there is an inherent dichotomy between mind and body leads many
researchers to reject even the theoretical possibility of AGI. Thus they don't even try to solve the problem.

Not in my lifetime. Of those who do not object in principle to the possibility of AGI, many do not believe that it
can happen in their lifetime, if ever. Some hold this position because they themselves tried and failed in their youth.
Others believe that AGI is not the best or fastest approach to achieving real AI, or are at a total loss as to how to
go about it. One popular idea is that we need to reverse-engineer the human brain one function at a time in order to
create intelligent machines.

There is no such thing as general intelligence. A large percentage of researchers reject the validity or importance
of general intelligence. For many, controversies in psychology (such as those stoked by The Bell Curve) make this an
unpopular, if not taboo, subject. Others, conditioned by decades of domain-specific work, simply do not see the benefits
of AGI or of having intelligent systems with general learning ability.

We should not try to create AGI. Some oppose AGI development on moral grounds, or because they fear it.

We don't know how to do it. Many potential AGI entrepreneurs and researchers simply don't enter our field, because
they lack crucial insights on how to achieve real artificial intelligence. There are many ways to be misdirected, and
academia, if anything, hinders in that regard. To name just one of the most common errors entrenched in conventional
AI thinking: the mistaken belief that intelligence is primarily about having knowledge. We see the ability to acquire
knowledge, i.e. to learn, as far more fundamental.

Poor AI theory. There are many theories of artificial intelligence. Most of them will not lead to practical systems
possessing general intelligence. Several theoretical errors and blind spots have already been mentioned. Here are a
few more common traps: the belief that AI can be solved through language alone, or conversely, that it requires full
embodiment (robotics); approaches that focus unduly on vision (or any other single aspect); overly abstract mathematical
or philosophical theories that lack real-world grounding (universal Turing Machines, quantum consciousness and qualia);
rigid rule-based designs; and statistical models that require near-infinite processing power.

Short-term academic and commercial pressure. Today, the bulk of AI research and development focuses on narrow, quite
domain-specific applications. From a competitive point of view, it doesn't really matter whether this results from a
theoretical rejection of general intelligence or simply from practical, short-term commercial or academic pressures;
it is a lot quicker and cheaper to solve specific problems one at a time than to develop general learning ability. Of
course, many are so focused on particular, narrow aspects of intelligence that they simply don't get around to looking
at the big picture; they leave it to others to make it happen. It is also important to note that there are often strong
financial and institutional pressures to pursue specialized AI.

Loss of project focus. The few projects that do pursue AGI based on relatively sound models run yet another risk:
they can easily lose focus. Sometimes commercial considerations hijack a project's direction, while other projects get
sidetracked by (relatively) irrelevant technical issues, such as trying to match an unrealistically high level of
performance, fixating on the biological feasibility of a design, or attempting to implement high-level functions
prematurely.

The knowledge and technology now exist to build systems with real intelligence. We can finally escape the AI winter.