The term "artificial intelligence" was coined in 1956 by Prof. John McCarthy at an interdisciplinary summer workshop at Dartmouth College in Hanover, New Hampshire, which came to be known in artificial intelligence lore as "the Dartmouth Conference."

The Dartmouth Conference was driven by the thought that if the right group of scientists got together and cooperated, computing machines could be developed that would simulate intelligence. However, the researchers could not agree on how to proceed and split into several groups, each pursuing a different direction.

Artificial intelligence branches off in multiple directions

Despite the clear criterion Turing had proposed in 1950 (the imitation game, now known as the Turing test), and despite attempts to coordinate and unify the research in order to achieve true artificial intelligence, researchers diverged in multiple directions.

The AI field grows rapidly through the 60s and 70s

The field established itself at several universities and grew rapidly over the next two decades. In 1957, Allen Newell and Herbert Simon created the General Problem Solver. In 1958, McCarthy invented the widely used LISP language, whose novel treatment of programs as data allowed computer programs to operate upon themselves. In 1966, Joseph Weizenbaum developed Eliza, the first chat program.
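Eliza worked by matching a user's sentence against a script of patterns and echoing fragments back as canned questions. A minimal sketch of that pattern-matching technique in Python (the patterns and responses here are invented for illustration, not Weizenbaum's original script):

```python
import re

# Each rule pairs a regular expression with a response template; {0} is
# filled with the text captured by the pattern. These rules are illustrative.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."


def respond(utterance: str) -> str:
    """Return the first matching canned response, echoing captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT


print(respond("I am feeling sad"))   # → How long have you been feeling sad?
print(respond("The weather is nice"))  # → Please go on.
```

The illusion of understanding comes entirely from reflecting the user's own words back; the program has no model of meaning at all.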

Expert systems are introduced

In 1970, the first expert system was created; it queried a "knowledge base" of facts and rules to deduce answers to questions. In 1972, the PROLOG language, which provided a standard way to encode logic in programs, was released.
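The core mechanism behind such systems can be sketched as forward chaining: starting from known facts, fire every if-then rule whose premises hold, and repeat until nothing new can be deduced. A minimal Python sketch, with invented facts and rules for illustration:

```python
# Known facts and if-then rules; each rule is (set of premises, conclusion).
# The knowledge base here is invented purely for illustration.
facts = {"has_fur", "gives_milk"}
rules = [
    ({"has_fur"}, "is_mammal"),
    ({"gives_milk"}, "is_mammal"),
    ({"is_mammal", "eats_meat"}, "is_carnivore"),
]


def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises all hold, until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived


print(forward_chain(facts, rules))
# "is_mammal" is deduced; "is_carnivore" is not, since "eats_meat" is unknown.
```

Real expert systems such as MYCIN layered explanations and confidence factors on top of this basic loop, but the deduce-until-fixed-point idea is the same.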

In the mid-70s, Barbara Grosz of SRI established discourse modeling, an approach to understanding ordinary human language. This work later developed into the notion of "centering," a novel approach to resolving the complex, interconnected references within a conversation.