A Brief History of Logical Reasoning

We now think of mathematical logic as a system based on axioms and definitions that grows via theorems and proofs.

Most philosophers start the history of Logic with Aristotle, but the computer age should perhaps recognize Ramon Llull in Spain as the man who started a new way of thinking about logic. Llull invented the volvelle, a paper computer that was incorporated in manuscripts so that readers could perform computations by simply turning circles and dials. Imagine a book that, in each chapter, contains a smartphone loaded with the apps that can solve the problems discussed in that chapter. His book on Logic was the "Ars Generalis Ultima" or "Ars Magna" (1305). Its declared intent was to provide logical arguments to convert Muslims to Christianity.

Three centuries later, a number of thinkers toyed with the power of a hypothetical universal language of logic that could represent and solve any problem in formal ways. They wrote books such as George Dalgarno's "Art of Signs" (1661) and John Wilkins's "An Essay towards a Real Character and a Philosophical Language" (1668), both in England. The most famous was by the German philosopher Gottfried Leibniz, who also invented calculus: "De Arte Combinatoria" (1666), which discussed an "algebra of thought" based on a formal calculus of reasoning and on a universal language.

Two centuries later, the British mathematician George Boole began the program that would fulfill Leibniz's dream: mathematical logic was born in 1847 with Boole's "The Mathematical Analysis of Logic". Boole's stated goal was to found a science of reasoning, and his strategy consisted in expressing logic in the language of symbolic algebra. Hence, Boole's approach became known as an algebra of logic. Boole invented binary logic, the logic of true and false propositions that can be combined to obtain other true and false propositions. Realizing that all calculations can be represented as binary operations, and that his calculus of symbols and operations seemed capable of solving all logical problems, he titled his 1854 book "The Laws of Thought". Propositional logic was improved by another British mathematician, Augustus de Morgan, who expanded Boole's system to relations in "On the Syllogism Number IV, and on the Logic of Relations" (1860).

The US philosopher Charles Peirce further expanded this notion in "Description of a Notation for the Logic of Relatives" (1870).

In 1874 the German mathematician Georg Cantor realized that most objects used by mathematicians are sets and thus founded set theory that works with operations such as union and intersection ("On a Property of the Collection of All Real Algebraic Numbers", 1874). For example, he reduced the mathematical function to a set consisting of ordered pairs.
Set theory is now the foundation for all branches of mathematics. Cantor was the first mathematician to study the infinite rigorously. His set theory dealt with infinite sets (or "transfinite" sets) and even showed that there can be many different kinds of infinity (another scientific finding that had a profound psychological impact on science fiction). Cantor realized that the set of natural numbers (1, 2, 3, 4, ...) is the smallest kind of infinite set, and that it has as many members as half of itself. That provided him with a definition of infinity: a set is infinite if it has as many members as a proper subset of itself. The set of irrational numbers (the numbers that are not ratios of integers, such as pi and the square root of 2), instead, is "more infinite" than the set of natural numbers (it has a higher "cardinality"). In general, the set of all subsets of any set is always bigger than the set itself.
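Cantor's claim about the set of all subsets can be checked directly for finite sets with a short sketch (the infinite case is Cantor's theorem, which no finite enumeration can reach):

```python
from itertools import chain, combinations

def power_set(s):
    """Return all subsets of s as a list of tuples."""
    items = list(s)
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

s = {1, 2, 3}
subsets = power_set(s)
# A 3-element set has 2**3 = 8 subsets, strictly more than 3 elements.
print(len(subsets))  # 8
```

For any finite set of n elements the power set has 2**n members; Cantor proved the analogous "strictly bigger" result for infinite sets as well.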

Gottlob Frege's "Begriffsschrift/ Concept-script" (1879), subtitled "a formal language for pure thought modeled on that of arithmetic", picked up Leibniz's ambition to design a calculus of reasoning and a universal language:
a scientific language to describe mathematical proofs, which, until then, had been mostly based on intuitive notions. He grounded mathematics in a new language of logic that went beyond propositional logic (the logic of "If all Greeks are humans and if Socrates is a Greek, then Socrates is a human"). Frege formalized functions (which he expressed as "predicates"), variables and quantifiers ("every", "some", etc). His predicate logic used variables (such as "x" and "y") to express properties (such as Greek(x), which is true when x = Socrates but false when x = Piero). Frege reduced meaning to truth: the meaning of a statement is whether the logical expression p(x, y, ...) is true or not. For example, capital(France, Paris) is true, which means that Paris is indeed the capital of France. Frege introduced universal and existential quantifiers to express whether a property applies to all members of a set or to at least one of them.
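Frege's quantified predicates can be illustrated in miniature. The sketch below (a toy finite model, with names borrowed from the text; Python's all() and any() stand in for the universal and existential quantifiers) evaluates two quantified formulas:

```python
# A toy universe of discourse with two predicates, stored as tables.
universe = ["Socrates", "Plato", "Piero"]
greek = {"Socrates": True, "Plato": True, "Piero": False}
human = {"Socrates": True, "Plato": True, "Piero": True}

# "for all x: Greek(x) implies Human(x)"  (implication: not A or B)
all_greeks_human = all((not greek[x]) or human[x] for x in universe)

# "there exists x: not Greek(x)"
some_not_greek = any(not greek[x] for x in universe)

print(all_greeks_human, some_not_greek)  # True True
```

Note that, exactly as in Frege's semantics, these truth values are relative to the chosen model: change the tables and the same formulas may come out false.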
The leadership in logic started shifting towards Germany.

For the record, Peirce's paper "On the Algebra of Logic" (1885) independently introduced the quantifiers (such as "some" and "all").

In 1900 the Italian mathematician Giuseppe Peano's formal language was adopted and extended by the British philosopher Bertrand Russell, but in 1901 the latter discovered what became known as Russell's Paradox (first published in 1903 in his "Principles of Mathematics"), popularly illustrated by the barber puzzle: does the barber who shaves all and only the men who do not shave themselves shave himself?
Russell showed Frege that there was a contradiction in his logical system (Russell originally communicated it in a brief letter).
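The barber version of the paradox can even be verified mechanically. The brute-force sketch below (a hypothetical three-person town, purely for illustration) searches every possible "shaves" relation and finds that none satisfies the barber's rule:

```python
from itertools import product

# The barber must shave exactly those who do not shave themselves.
people = ["barber", "alice", "bob"]
pairs = [(a, b) for a in people for b in people]

def consistent(shaves):
    """True if shaves(barber, x) iff not shaves(x, x), for every x."""
    return all(shaves[("barber", x)] == (not shaves[(x, x)])
               for x in people)

solutions = []
for bits in product([False, True], repeat=len(pairs)):
    shaves = dict(zip(pairs, bits))
    if consistent(shaves):
        solutions.append(shaves)

print(len(solutions))  # 0: no relation satisfies the rule
```

The contradiction is forced by the case x = barber, where the rule demands that the barber shave himself if and only if he does not.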
Russell divided logic into three parts: the calculus of propositions, the calculus of classes, and the calculus of relations.

Meanwhile, in 1900 a new German star, David Hilbert, gave a lecture titled "Mathematical Problems" at the International Congress of Mathematicians in Paris in which he emphasized the axiomatic method: solving a mathematical problem amounted to little more than rewriting it in a formal manner.
Until then the mathematical proof of a theorem had been thought of as a matter of intuition, almost a divine enlightenment. Hilbert transformed it into a completely different, and much more formal, concept: the axioms (the premises) are a finite sequence of symbols that can be turned into other finite sequences of symbols by applying the rules allowed by logic in a finite number of steps until a sequence of symbols is obtained that represents the proof of the theorem. Hilbert turned mathematical proofs into mechanical procedures, and procedures that, strictly speaking, did not even require mathematical ingenuity. There is no need to know what you are doing, as long as you apply the admissible rules and keep producing valid sequences of symbols. If at some point you obtain the proof, you succeeded… even if you have no idea of what the theorem was about and what the proof shows. There is no need for intuition and no need for understanding.
Three of the problems that he posed to the world's mathematicians defined the three main fields of mathematics of the 20th century: set theory, mathematical logic, and computability theory. And, thanks to Hilbert, Goettingen became the world's top university for mathematical logic.
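Hilbert's view of proof as blind symbol manipulation can be sketched in a few lines: formulas are plain strings, modus ponens is the only inference rule, and the "prover" derives everything derivable without any understanding of what the symbols mean (a minimal illustration, not Hilbert's actual calculus):

```python
# Axioms are just strings; "P->Q" means "from P one may infer Q".
axioms = {"P", "P->Q", "Q->R"}

def modus_ponens(theorems):
    """From X and X->Y, derive Y (pure string matching, no insight)."""
    new = set()
    for t in theorems:
        for s in theorems:
            if s.startswith(t + "->"):
                new.add(s[len(t) + 2:])
    return new

theorems = set(axioms)
while True:
    derived = modus_ponens(theorems) - theorems
    if not derived:
        break
    theorems |= derived

print(sorted(theorems))  # ['P', 'P->Q', 'Q', 'Q->R', 'R']
```

The machine "proves" Q and R without knowing what P, Q, R mean, which is precisely the point: only the admissible manipulations of symbol sequences matter.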

But Russell's paradox was a stumbling block to any further discussion on Logic.

In 1908 two ways of avoiding (if not solving) the paradox were proposed: Russell invented "type theory" ("Mathematical Logic as Based on the Theory of Types") and Ernst Zermelo, another Goettingen alumnus, axiomatized "set theory" ("Investigations in the Foundations of Set Theory").

In 1910 Russell coauthored "Principia Mathematica" with Alfred North Whitehead, the culmination of the program started by Frege and Peano.

In 1917 Hilbert delivered a lecture in Zurich titled "Axiomatic Thinking" in which he urged the application of the axiomatic method to all branches of science. In the winter of that year he taught a course in Goettingen, "Principles of Mathematics and Logic," that distilled his mature axiomatic method for mathematical logic. In particular, Hilbert worked out a subsystem of his functional calculus that he called "restricted functional calculus" and that today is called "first-order predicate calculus".

Determined to found mathematics on perfect logical foundations, in 1920 Hilbert proposed a research project that became known as "Hilbert's program".

First-order logic was important because it was a formal way to talk about the real world, which is made of objects that relate to other objects. A formula in first-order logic says something about some relationship among some objects.

So far first-order logic had been treated as a subset of logic, but in 1922 the Norwegian mathematician Thoralf Skolem rewrote set theory based on first-order logic ("Some Remarks on Axiomatized Set Theory"), in effect proposing that first-order logic was "all" of logic.

Feeling that he was closing in on the final theory of mathematics, in 1928 Hilbert published "Principles of Mathematical Logic", co-written with Wilhelm Ackermann. Hilbert posed the "Entscheidungsproblem/ Decision Problem" (1928) at yet another international conference: does an algorithm exist that can always prove whether a logical statement is true or false based on the axioms?

In 1907 the Dutch mathematician Bertus Brouwer had started his program of “intuitionism”, believing that any mathematical proof had to be constructive. In particular, he had the powerful intuition (sorry for the pun) that a proposition is not necessarily either true or false: it could also just be unprovable.

The Austrian mathematician Kurt Goedel graduated in 1930 with a dissertation that proved the completeness of first-order predicate calculus (roughly, that "anything true is provable"), but then his two incompleteness theorems ("On Formally Undecidable Propositions of Principia Mathematica and Related Systems", 1931) proved that the perfection demanded by Hilbert for axiomatic systems (that contain at least arithmetic) was impossible. Goedel showed that a formula exists that cannot be proven true or false (a formula that states its own unprovability); and that was enough to demolish the castles of logical systems built by Russell and Hilbert. Roughly, his completeness theorem proved that anything true is provable, but his incompleteness theorem proved that some statements cannot be proven true or false. No, the two theorems don't contradict each other. (John Von Neumann had made the same discovery, but never published it because Goedel beat him by a few weeks, and Von Neumann was instrumental in publicizing the importance of Goedel's theorems).
Goedel used the natural numbers to encode mathematical statements and proofs, including proofs about natural numbers themselves. A natural number can encode a statement about itself or even an encoded proof about itself. It was then easy for Goedel to construct self-referential formulas. And that’s where Hilbert’s program collapsed. But Goedel also left us the gift of self-referential formulas.
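Goedel's encoding trick can be sketched with a simplified prime-power scheme (not Goedel's exact numbering; the symbol table below is an arbitrary choice for illustration). Each symbol gets a code, and a formula becomes the product of prime powers p_i ** code(symbol_i), which is invertible by factoring:

```python
SYMBOLS = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6}

def primes(n):
    """First n primes, by trial division."""
    found, k = [], 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def goedel_number(formula):
    """Encode a formula as a single natural number."""
    g = 1
    for p, ch in zip(primes(len(formula)), formula):
        g *= p ** SYMBOLS[ch]
    return g

def decode(g):
    """Recover the formula by peeling off prime factors in order."""
    inv = {v: k for k, v in SYMBOLS.items()}
    out, p = [], 2
    while g > 1:
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        out.append(inv[e])
        p += 1
        while any(p % q == 0 for q in range(2, p)):  # next prime
            p += 1
    return "".join(out)

n = goedel_number("S0=S0")
print(n, decode(n))  # 808500 S0=S0
```

Because statements are now just numbers, a statement of arithmetic can talk about (the number of) another statement, or of itself, which is exactly the self-reference Goedel exploited.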
(Trivia: Goedel also discovered a solution to Einstein's equations of General Relativity which permits time travel, one of the most popular themes of science fiction).

In 1933 the Polish mathematician Alfred Tarski reached conclusions similar to Goedel's: he showed that a formal language cannot consistently define its own concept of "truth". Tarski published "The Concept of Truth in Formalized Languages", an influential theory of truth: a sentence such as "snow is white" is true if and only if snow is white. It sounds trivial, but it was the first formal definition of truth, and it separated language and metalanguage. For example, "I am lying" does not separate the language and the metalanguage and appears to be a contradiction... until you realize that there are two levels at which that sentence operates. Truth is relative to the model in which it is interpreted.

Hilbert's ideas were proliferating.

"On the Consistency of Arithmetic", published in July 1931 by the French mathematician Jacques Herbrand (who alas died at the age of 23 shortly after completing this paper) introduced the concept of a (general) recursive function made famous by Goedel's lecture "On Undecidable Propositions of Formal Mathematical Systems" of 1934. In 1932 Alonzo Church at Princeton created a method for defining functions called the Lambda-calculus ("A Set of Postulates for the Foundation of Logic", 1932), and in April 1935 (one year before Turing's analogous paper) delivered a lecture about "An Unsolvable Problem of Elementary Number Theory" in which he proved that the Entscheidungsproblem is undecidable, and also proved that his class of lambda-definable functions and Goedel's class of recursive functions are identical.
Alonzo Church's "foundation of logic" was the first logic programming language. After Stephen Kleene and Barkley Rosser proved that it led to a paradox, the logic was abandoned, but the lambda-calculus was rescued by Church's theory of types, a formulation of type theory that was simpler and more general than Russell's ("A Formulation of the Simple Theory of Types", 1940).
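The flavor of the lambda-calculus survives in Python's own lambda notation. A standard sketch is Church numerals (textbook definitions, not Church's original syntax): the number n is represented as the function that applies f to x exactly n times, and arithmetic becomes pure function composition:

```python
# Church numerals: n = lambda f: lambda x: f(f(...f(x)...))  (n times)
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to a Python int by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

Everything here is a function: there are no numbers, only repeated application, which is why Church could ground arithmetic (and, per his thesis, all computation) in the lambda-calculus alone.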

Finally, in 1936 the British mathematician Alan Turing, using an argument similar to Goedel's incompleteness theorem,
invented the Turing machine, an abstract device that carries out calculations by manipulating symbols on a tape ("On Computable Numbers, with an Application to the Entscheidungsproblem", 1936), and proved independently what Church had just proven. Turing also showed that the lambda-calculus and his own system of "machines" are equivalent.
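A Turing machine fits in a few lines of simulation: a transition table maps (state, symbol) to (symbol to write, head move, next state), and the machine runs until it halts. The tiny machine below is an invented example that flips every bit of its input:

```python
def run(tape, rules, state="start", halt="halt", blank="_"):
    """Simulate a one-tape Turing machine until it reaches the halt state."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    while state != halt:
        sym = cells.get(head, blank)
        write, move, state = rules[(state, sym)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A bit-flipping machine: scan right, flipping 0<->1, halt on blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", flip))  # 0100
```

Despite its simplicity, this state-plus-table scheme is, by the Church-Turing thesis, enough to express any computation at all.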

Bottom line: Herbrand-Goedel, Church and Turing had discovered the same thing. This came to be known as the Church-Turing thesis after Church's student Stephen Kleene published the book "Mathematical Logic" (1967): a function is lambda-computable if and only if it is Turing computable if and only if it is (general) recursive.

The time was fertile for thoughts of such kind: Rene Magritte's painting "The Treachery of Images" (1929) depicts a pipe with the caption "This is not a pipe", and James Joyce's novel "Finnegans Wake" (1939) is an infinite loop (it ends with the same words with which it begins). In Jorge Luis Borges's tale "The Circular Ruins" (1940) each person is a dream in another person's mind, and in Maurits Escher's drawing "Drawing Hands" (1948) two hands are drawing each other on the same piece of paper.

Miguel de Unamuno is a character in his own novel "Mist" (1914), Andre Gide's novel "The Counterfeiters" (1925) contains a novelist, Edouard, writing a novel titled "The Counterfeiters", and Aldous Huxley's novel "Point Counter Point" (1928) contains a novelist, Philip Quarles, who wants to write a still untitled novel that looks just like "Point Counter Point". Trivia: other authors who will appear as characters in their own novels include Gabriel Garcia Marquez in "One Hundred Years of Solitude" (1967) and Kurt Vonnegut in "Breakfast of Champions" (1973). And readers will appear in Julio Cortazar's "Continuity of the Parks" (1963), which features the reader as the victim of the murder described in the story; in John Barth's "Frame-tale" (1968), which invites the reader to cut the pages of the book in order to create a Moebius strip, an infinite loop of the sentence "Once upon a time there was a story that began"; and in Italo Calvino's novel "If on a Winter's Night a Traveler" (1979), whose protagonist is the reader reading "If on a Winter's Night a Traveler".

Bela Bartok employed palindromes in his "Music for Strings, Percussion and Celesta" (1936), especially in the third movement, and in his 5th String Quartet (1934), as did Alban Berg in his opera "Lulu" (1935), particularly in the "Ostinato". In language, a palindrome is a sentence whose reverse is also a sentence, such as "Was it a car or a cat I saw?" and "Never odd or even"; in music it is a phrase that is the same when played forward or backward. The protagonist of Buster Keaton's film "Sherlock Jr" (1924) is a projectionist who falls asleep and becomes a character in the movie that he is projecting. The protagonist of Orson Welles' film "Citizen Kane" (1941) walks down a hall whose walls are covered with mirrors, his image endlessly repeated, and the viewer realizes only later that the camera has been following one of the reflections, not the real man.

The "mise en abyme", i.e. an image that contains a smaller copy of itself, which contains a smaller copy of itself, and so on, became popular in advertising. For example, the magazine Cosmopolitan of April 1948 had a cover depicting a woman holding a copy of that magazine's very issue. Trivia: one of the most famous album covers in the history of rock music is the cover of Pink Floyd's "Ummagumma" (1969), designed by Storm Thorgerson, a cover containing itself containing itself etc.

The epic of 20th century logic was perhaps a reflection of a bigger shift in the way humans think about reality.

Hilbert died in 1943, the year when neural networks were born (Warren McCulloch's and Walter Pitts' binary neuron), unaware that two years earlier his fellow countryman Konrad Zuse had built the first Turing-complete machine, the Z3 programmable electromechanical computer.
Turing died in 1954, two years before the first conference on Artificial Intelligence.
Russell died in 1970, having witnessed an A.I. program, the Logic Theorist, prove 38 theorems of the "Principia Mathematica", but in his last years he was much more concerned with human stupidity than artificial intelligence (he wrote "Has Man a Future?" in 1961 despairing that the human race was about to annihilate itself with nuclear weapons).
Goedel spent the last years of his life working on "rotating universes" in which people can time-travel to the past and on an ontological proof of God's existence (possibly the only mathematical mistake he ever made in his life). He died in 1978, just one year before Douglas Hofstadter published the book "Goedel, Escher, Bach" in which he envisioned Goedel's mathematical recursion and self-reference as the clues not only to understanding intelligence but even consciousness itself. Goedel seemed to have remained largely indifferent to Artificial Intelligence even after the British philosopher John Lucas published "Minds, Machines and Goedel" (1959), a "proof" based on Goedel's own incompleteness theorem that A.I. is impossible (an argument retold by the British physicist Roger Penrose in his 1989 book "The Emperor's New Mind").