At that time, mathematicians were trying to find which math problems can be solved by simple methods and which cannot. The first step was to define what they meant by a "simple method" for solving a problem. In other words, they needed a formal model of computation.

Several different computational models were devised by these early researchers. One model, the Turing machine, stores characters on an infinitely long tape, with one square at any given time being scanned by a read/write head. Another model, recursive functions, uses functions and function composition to operate on numbers. The lambda calculus uses a similar approach. Still others, including Markov algorithms and Post systems, use grammar-like rules to operate on strings. All of these formalisms were shown to be equivalent in computational power—that is, any computation that can be performed with one can be performed with any of the others. They are also equivalent in power to the familiar electronic computer, if one pretends that electronic computers have infinite memory. Indeed, it is widely believed that all "proper" formalizations of the concept of algorithm will be equivalent in power to Turing machines; this is known as the Church-Turing thesis. In general, questions of what can be computed by various machines are investigated in computability theory.
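The tape-and-head model is concrete enough to simulate directly. Below is a minimal sketch of a single-tape Turing machine in Python; the transition encoding and the example machine (a unary successor) are illustrative choices, not a standard presentation.

```python
# A minimal Turing machine simulator illustrating the tape/head model
# described above. The encoding is an illustrative sketch, not taken
# from any particular reference.

def run_turing_machine(transitions, tape, state="q0", accept="qa"):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left), +1 (right), or 0 (stay). The tape is modeled
    as a dict from position to symbol, so it is unbounded in both directions.
    """
    cells = dict(enumerate(tape))
    pos = 0
    while state != accept:
        symbol = cells.get(pos, "_")           # "_" is the blank symbol
        if (state, symbol) not in transitions:
            return None                         # halt with no rule: reject
        state, cells[pos], move = transitions[(state, symbol)]
        pos += move
    # Read the tape back in order, trimming blanks.
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: append a "1" to a unary string (compute n + 1 in unary).
succ = {
    ("q0", "1"): ("q0", "1", +1),   # scan right over the 1s
    ("q0", "_"): ("qa", "1", 0),    # write 1 on the first blank, accept
}
print(run_turing_machine(succ, "111"))  # -> 1111
```

Representing the tape as a dictionary keeps the sketch short while still giving the unbounded tape the formal model requires.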

The theory of computation studies these models of general computation, along with the limits of computing: Which problems are (provably) unsolvable by a computer? (See the halting problem and the Post correspondence problem.) Which problems are solvable by a computer, but require such an enormously long time to compute that the solution is impractical? (See Presburger arithmetic.) Can it be harder to solve a problem than to check a given solution? (See complexity classes P and NP.) In general, questions concerning the time or space requirements of given problems are investigated in complexity theory.
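The gap between solving a problem and checking a given solution can be made concrete with subset sum, a standard NP-complete problem. In this sketch (the problem choice and function names are illustrative), verifying a proposed certificate takes time polynomial in the input, while the obvious solver searches through exponentially many candidate subsets.

```python
# Check-vs-solve gap behind P vs NP, illustrated with subset sum:
# given numbers and a target, is there a subset summing to the target?
from itertools import combinations

def check(numbers, target, indices):
    """Verify a proposed certificate (a tuple of indices) in polynomial time."""
    return sum(numbers[i] for i in indices) == target

def solve(numbers, target):
    """Find a certificate by exhaustive search over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for indices in combinations(range(len(numbers)), r):
            if check(numbers, target, indices):
                return indices
    return None

print(solve([3, 9, 8, 4], 12))  # -> (0, 1), since 3 + 9 = 12
```

Whether every problem whose solutions are this easy to check is also easy to solve is exactly the open P versus NP question.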

In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, are used to specify string patterns in UNIX and in some programming languages, such as Perl. Finite automata, another formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars are used to specify programming language syntax. Non-deterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a defined subclass of the recursive functions.
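The equivalence of regular expressions and finite automata can be demonstrated directly: the sketch below describes one language (binary strings with an even number of 1s) both as a Python regular expression and as a hand-built deterministic finite automaton, and checks that the two agree. The state names and the dictionary encoding of the transition table are illustrative choices.

```python
# One regular language, two equivalent descriptions: a regular expression
# and a deterministic finite automaton (DFA), both accepting exactly the
# binary strings containing an even number of 1s.
import re

pattern = re.compile(r"^(0*10*1)*0*$")   # pairs up the 1s

# DFA transition table: (state, input symbol) -> next state.
dfa = {("even", "0"): "even", ("even", "1"): "odd",
       ("odd", "0"): "odd", ("odd", "1"): "even"}

def dfa_accepts(s, start="even", accepting=frozenset({"even"})):
    """Run the DFA over the string and accept iff it ends in an accepting state."""
    state = start
    for ch in s:
        state = dfa[(state, ch)]
    return state in accepting

# The two formalisms agree on every test string.
for s in ["", "0110", "101", "1", "11011"]:
    assert bool(pattern.match(s)) == dfa_accepts(s)
```

The DFA runs in a single left-to-right pass with constant memory, which is why finite automata suit hardware circuit design.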

Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; this leads to the Chomsky hierarchy of languages.
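A classic example separating two levels of the Chomsky hierarchy is the language { aⁿbⁿ : n ≥ 0 }, which is context-free but not regular: recognizing it requires unbounded memory, though a single counter (a trivial stack) suffices. The recognizer below is an illustrative sketch of how a pushdown automaton would handle it.

```python
# { a^n b^n } is context-free but not regular: no finite automaton can
# count the a's, but a pushdown automaton can, using its stack. Here the
# stack degenerates to a single counter.
def is_anbn(s):
    """Recognize a^n b^n with one counter, as a pushdown automaton would."""
    count = 0
    i = 0
    while i < len(s) and s[i] == "a":   # "push" for each leading 'a'
        count += 1
        i += 1
    while i < len(s) and s[i] == "b":   # "pop" for each following 'b'
        count -= 1
        i += 1
    return i == len(s) and count == 0   # input consumed, stack empty

assert is_anbn("aaabbb") and not is_anbn("aabbb") and not is_anbn("abab")
```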

The following table shows some of the classes of problems (or languages, or grammars) that are considered in computability theory (blue) and complexity theory (green). If class X is a strict subset of Y, then X is shown below Y, with a dark line connecting them. If X is a subset, but it is unknown whether they are equal sets, then the line is lighter and is dotted.

Garey, Michael R., and David S. Johnson: Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: W. H. Freeman & Co., 1979. The standard reference on NP-complete problems, an important category of problems whose solutions appear to require an impractically long time to compute.