Mathematical Logic and Computability

First of all, a note on terminology: some of the names I'm going to use on this wiki probably have different names when written in books (Guillermo Templado). The purpose of this wiki is to study ordinary or natural logic using mathematical methods, that is, a "mathematical logic." This wiki has 3 chapters and a chapter 0:

This wiki is a summary of the book by J. Sancho San Román: Lógica matemática y computabilidad. In Chapter 3, we will discuss Turing machines, Church's thesis, and Kurt Gödel's incompleteness theorem (using lambda calculus), and of course, we'll prove them. It's important to note that we'll assume the axiom of choice, which is equivalent to Zorn's lemma (Chapter 0.5); nevertheless, there exist logics that do not assume this axiom of set theory. It is not strictly necessary, but knowledge from a first course in abstract algebra or group theory will be very useful.

\(\space \space\) All chapters are connected. This wiki will be continued on another page when it becomes necessary.

Preface

Mathematical logic is a fundamental instrument in the construction of computers and the formation of programming languages.
This wiki is going to be written thinking about people who want to know the precise concepts and fundamental rules of logic, or the reasons behind these rules. In particular, these concepts and rules will be of interest for all people working in logical programming, such as PROLOG, LISP, or expert systems of artificial intelligence.

To get to understand this wiki, you will need a lot of time: review the chapters over and over, memorize constantly, and use a lot of ordinary logic.

Introduction

Mathematical logic, or symbolic logic, is a science whose primary task is the study of ordinary or natural logic with mathematical methods.
Ordinary or natural logic is basically composed of an ordinary language and of sequences of sentences of that language called deductions. Language and deductions constitute the fundamental instrument of the human mind, with which it has built all the sciences known today.

The study of anything from real life is done by creating a model of that thing. For this, some elements of the thing are taken, and some properties are chosen. Each element is represented by an ideal entity that resembles it, and the model is defined by these ideal entities that are the elements of the model, and by the chosen properties.

Can ordinary logic be studied with mathematical methods? To answer this question, I will transcribe a few paragraphs of Łukasiewicz:

"Philosophical logic includes epistemological problems: What is the truth? Is there any criterion of truth? However, these issues do not belong to logic... If we remove from philosophical logic everything that belongs to epistemology, psychology and philosophy in general, what we have is the so-called formal logic, whose content is properly logical... There are no two logics, mathematical and philosophical, there is a single logic, founded by Aristotle, completed by the old school of the Stoics and continued, often with great subtlety, by medieval logicians, and this is the logic that develops mathematical logic."

Chapter 0: Preliminaries

There are 9 sections in this chapter, and some theorems and examples will be given without proof. This doesn't mean you shouldn't prove them: you should always prove everything, for a complete knowledge and understanding of this wiki. This chapter covers many "known" concepts from a first course in abstract algebra, together with the notions of cardinality and countable sets.

\(\ \ \)1. Sets

Any explanation of the meaning of the word "set" would use more complicated concepts than the one we are trying to define; however, we have a more or less clear notion, or sufficient coincidence, about the meaning that each person gives to this word in question. For us, a set is a well-defined collection of different objects (elements).

\(\ \ \)2. Propositional Symbols and Quantifiers

A "proposition" is an expression asserting that an event occurs. If this really happens, the proposition is said to be true; if it does not, it is said to be false. If whenever a proposition \(J\) is true, another \(K\) is also true, then it is said that \(J\) implies \(K.\) In this case, \(J\) is a sufficient condition for \(K\), and \(K\) is a necessary condition for \(J:\)

\[J \implies K. \quad (\text{logical implication})\]

When not only \(J\) implies \(K\), but also \(K\) implies \(J\), it is said that propositions \(J\) and \(K\) are logically equivalent. Then the event of \(J\) occurs if and only if the event of \(K\) occurs:

\[ J \iff K. \quad (\text{logical equivalence})\]

The propositions \(J = \) "Today is Tuesday" and \(K = \) "Tomorrow is Wednesday" are logically equivalent.

There exist more propositional symbols, which we'll see later. There are 2 symbols, \(\exists\) and \(\forall\), called quantifiers, that are used with the following meanings:

"\(\exists a \in A\, /\, a \text{ satisfies } P\)" means there exists an element \(a\) belonging to the set \(A\) such that \(a\) satisfies the property \(P\).

"\(\forall a \in A,\, a \text{ satisfies }P\)" means every (each) element \(a\) belonging to the set \(A\) satisfies the property \(P\).
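Over a finite set, these two quantifiers correspond exactly to Python's `any()` and `all()`. A small sketch (the set \(A\) and the property \(P\) below are illustrative choices, not from the text):

```python
# Quantifiers over a finite set, modeled with Python's any() and all().
A = {1, 2, 3, 4, 5}
P = lambda a: a % 2 == 0  # the property "a is even"

exists = any(P(a) for a in A)   # ∃ a ∈ A / P(a): some element is even
forall = all(P(a) for a in A)   # ∀ a ∈ A, P(a): every element is even

print(exists, forall)  # → True False
```

Here \(\exists\) holds (2 is even) while \(\forall\) fails (1 is odd).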

\(\ \ \)3. Subsets: Intersection and Union

Set \(A\) is a subset of set \(E\) if each element of \(A\) belongs to \(E\). It's written \(A \subseteq E\).

According to this, \(E \subseteq E\), but in this case the inclusion is not strict. When \(A \subseteq E\) but \(E\) contains an element \(a\) such that \(a \notin A\), i.e., \(A \neq E\), the inclusion is strict and \(A\) is a proper subset of \(E\). It's written \(A \subset E,\space A \neq E\).

A partition of a set \(E \neq \emptyset\) is a collection \(F\) of subsets of \(E\), none empty, such that each element of \(E\) belongs to one and only one member of \(F\).
Each element of \(F\) is called a class of the partition of \(E\); i.e., \(E \neq \emptyset\) is the disjoint union of non-empty subsets of \(E\), each of which is an element of \(F\).

A binary relation \(R\) in \(E\) is said to be a reflexive relation if \(aRa\) is satisfied for each element \(a \in E\).

It is said that \(R\) is a symmetric relation when \(aRb \implies bRa\).

It is said that \(R\) is a transitive relation when \(aRb \text{ and } bRc \implies aRc\).

It is said that \(R\) is an anti-symmetric relation when \(aRb \text{ and } bRa \implies a = b\).

A binary relation \(R\) in \(E \neq \emptyset\) is said to be an equivalence relation if it is reflexive, symmetric, and transitive. Given an equivalence relation \(R\) in \(E \neq \emptyset\) and an element \(a \in E\), \([a] = \{b \in E \text{ / } bRa\}\) is called the equivalence class of \(a.\)

Note that \(a \in [a]\). The set of equivalence classes is denoted by \(E/R\) and is called the quotient set, made up of the equivalence classes of \(R.\) If there is no doubt about which equivalence relation one is speaking of, the equivalence class \([a]\) of \(a\) is sometimes denoted simply by \(a\).

Every equivalence relation in a set \(E \neq \emptyset\) gives rise to a partition of the set \(E\), and conversely, every partition of a set \(E \neq \emptyset\) gives rise to an equivalence relation in the set \(E\).

A partition of a set \(E \neq \emptyset\) gives rise to an equivalence relation in \(E\), where each class of the partition is an equivalence class. Conversely, an equivalence relation in a set \(E \neq \emptyset\) gives rise to a partition of \(E\): each element \(a \in E\) satisfies \(a \in [a]\), and given two equivalence classes \([a], [b]\) in \(E\), there are only two possibilities: either \([a] = [b]\), or \([a] \cap [b] = \emptyset\).
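The correspondence between equivalence relations and partitions can be checked concretely. A sketch using congruence modulo 3 on a small set (the set \(E\) and the relation are our own illustrative choices):

```python
# Building the quotient set E/R for an equivalence relation:
# congruence modulo 3 on E = {0, ..., 8}.
E = set(range(9))

def related(a, b):
    return a % 3 == b % 3  # aRb: reflexive, symmetric, transitive

def equivalence_class(a):
    return frozenset(b for b in E if related(b, a))

quotient = {equivalence_class(a) for a in E}  # the quotient set E/R

# The classes form a partition: non-empty, pairwise disjoint, covering E.
assert all(cls for cls in quotient)
assert set().union(*quotient) == E
for c in quotient:
    for d in quotient:
        assert c == d or not (c & d)

print(len(quotient))  # → 3
```

The three classes are \(\{0,3,6\}\), \(\{1,4,7\}\), and \(\{2,5,8\}\), exactly the partition of \(E\) induced by \(R\).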

A binary relation \(R\) in \(E \neq \emptyset\) is said to be a partial order relation if it is reflexive, anti-symmetric, and transitive. In this case, \(E\) is said to be partially ordered by \(R\).

A subset \(C\) of a partially ordered set \(E\) is a chain if every two elements of \(C\) satisfy \(aRb\) or \(bRa\); that is, any two elements of \(C\) are comparable, and in this case \(C\) is totally ordered by \(R\).

An element \(a \in E\) (a partially ordered set) is said to be maximal if for each element \(x \in E\) it satisfies this: \(aRx \implies x = a.\)

The power set \(\mathbb{P}(E)\) of a set \(E \neq \emptyset\) is partially ordered by the binary relation \(\subseteq\), i.e. \(I = \{(A, B) \text { / } A \subseteq E, B \subseteq E \text{ and } A \subseteq B\}\) is a partial order relation in \(\mathbb{P}(E).\)

Zorn's Lemma

Let \( X\) be a partially ordered set with the property that every chain in \(X\) has an upper bound in \( X.\) Then \( X\) contains a maximal element.

See the axiom of choice. This lemma can't be proved from the other axioms of set theory; accepting it is a choice we make. You can try to prove the equivalence between the axiom of choice and Zorn's lemma, which requires some "advanced machinery" or, in other words, "deep logical thought."

A totally ordered set is said to be well-ordered if each and every non-empty subset has a smallest or least element.

The set \(\mathbb{N}\) of natural numbers is well-ordered with the binary relation of order \(\leq.\)

\(\ \ \)6. Applications, Functions, and Operations

For the definition of application (= function), you only need to review the corresponding wiki.

Given two sets \( E \) and \( B \), an \(n\)-ary function is a function from \(E^n\) to \(B\), i.e. a function \(f : E^n \to B\).

An internal \(n\)-ary operation in \(E\) is an \(n\)-ary function from \(E^n\) to \(E\), that is, a function \(f : E^n \to E\).

Sum and multiplication are internal binary operations in \(\mathbb{N}\). It's also said that \(\mathbb{N}\) is closed (or stable) under these internal binary operations. Subtraction is not an internal binary operation in \(\mathbb{N}\).
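Closure can be checked mechanically on a finite sample of \(\mathbb{N}\). A sketch (the sample range and helper names are our own):

```python
# Closure of ℕ under a binary operation, checked on a finite sample.
N_sample = range(10)

def in_N(x):
    """x belongs to ℕ: a non-negative integer."""
    return isinstance(x, int) and x >= 0

def closed_in_N(op):
    return all(in_N(op(a, b)) for a in N_sample for b in N_sample)

print(closed_in_N(lambda a, b: a + b))   # addition: closed → True
print(closed_in_N(lambda a, b: a * b))   # multiplication: closed → True
print(closed_in_N(lambda a, b: a - b))   # subtraction: 0 - 1 = -1 ∉ ℕ → False
```

A finite check like this cannot prove closure in general, but it does refute it: the single pair \((0, 1)\) already shows subtraction is not internal in \(\mathbb{N}\).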

The restriction of a function \(f\) is a new function obtained by choosing a smaller domain \(A\) for the original function \(f.\) The notation \(\displaystyle f_{| \small \text{A}}\) is also used.

An algebraic structure is a set \(E\) provided with some relations and (internal or external) operations.

\(\ \ \)7. Induction Principles in \(\mathbb{N}\)

See the beginnings of the following wiki pages: Induction and Strong Induction. This section is very important, so you should really read them.

\(\ \ \)8. Countable Sets

A set \(F\) is said to be finite if \(F = \emptyset\), or there exists a bijective application (function) from \(F\) to the set \(\{1, 2, \ldots , n\}\) for some \(n \in \mathbb{Z}^{+}\).

A set \(A\) is a countable set if \(A\) is finite, or there exists a bijective function from \(A\) to \(\mathbb{N}.\)
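For instance, \(\mathbb{Z}\) is countable even though \(\mathbb{N} \subset \mathbb{Z}\). A sketch of the standard interleaving bijection \(\mathbb{N} \to \mathbb{Z}\) (here \(\mathbb{N}\) is taken to start at 0; the function name is ours):

```python
# ℤ is countable: nat_to_int is a bijection from ℕ = {0, 1, 2, ...} onto ℤ,
# interleaving 0, 1, -1, 2, -2, 3, -3, ...
def nat_to_int(n):
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

print([nat_to_int(n) for n in range(7)])  # → [0, 1, -1, 2, -2, 3, -3]

# On the initial segment {0, ..., 100} it is injective and its image
# is exactly {-50, ..., 50}:
assert len({nat_to_int(n) for n in range(101)}) == 101
assert {nat_to_int(n) for n in range(101)} == set(range(-50, 51))
```

Odd inputs are sent to the positive integers and even inputs to \(0, -1, -2, \ldots\), so every integer is hit exactly once.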

\(\ \ \)9. Parentheses Matching

Two symbols frequently used in any written language are the left parenthesis (bracket) "(" and the right parenthesis (bracket) ")". In an expression that contains parentheses, they are paired, one left with one right. But there is more: each left parenthesis unambiguously determines its right partner, and vice versa.

Let \(p_1, p_2, \ldots, p_r\) be a sequence of \(r\) parentheses. A proper pairing (or parentheses matching) of this sequence is a bijective application \(f\) from the set of left parentheses to the set of right parentheses which satisfies the following:

If \(f(p_i) = p_j,\) then \(i < j\).

Given two distinct pairs \( \big\langle p_i , f(p_i) = p_j \big\rangle \) and \(\big\langle p_h, f(p_h) = p_k \big\rangle\), one of two things occurs: either one pair is inside the other \((i < h < k < j)\), or they are exterior to each other \((\)for example, \(i < j < h < k)\); i.e., in no case do they overlap.

If a sequence of \((2n + 2)\) parentheses admits a proper-pairing, it is unique.

Proof.- By induction on \(n.\)

If \(n = 0\), the theorem is obvious because there are only two parentheses.

Assume that the theorem is true for \(n \in \mathbb{N}\), and let us see that it is true for \(n + 1 \in \mathbb{N}\). Let \(p_1, \ldots, p_{2n + 4}\) be a sequence of parentheses that admits a proper pairing \(f\), and let \(p_i\) be the leftmost right parenthesis. Since there is a proper pairing (parentheses matching), \(i > 1\), and moreover \(p_i\) must be paired with \(p_{i - 1}\), or else there would be two overlapping pairs. Suppressing the parentheses \(p_{i - 1}\) and \(p_i\), there remains a sequence of \(2n + 2\) parentheses that admits a proper pairing, which by the induction hypothesis is unique. Therefore the proper pairing (parentheses matching) \(f\) of the \(2n + 4\) parentheses is also unique. \(_\square\)
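The unique proper pairing, when it exists, can be computed with a stack: each right parenthesis is paired with the most recent unmatched left one. A sketch (function name and output format are our own; positions are 1-based to match the \(p_i\) notation):

```python
# Computing the proper pairing of a parenthesis sequence with a stack.
# Returns the pairing as a dict {i: j} meaning f(p_i) = p_j,
# or None if no proper pairing exists.
def proper_pairing(seq):
    pairing, stack = {}, []
    for pos, p in enumerate(seq, start=1):
        if p == '(':
            stack.append(pos)
        elif p == ')':
            if not stack:
                return None          # a right parenthesis with no partner
            pairing[stack.pop()] = pos
        else:
            return None              # not a parenthesis sequence
    return pairing if not stack else None

print(proper_pairing('(()())'))  # → {2: 3, 4: 5, 1: 6}
print(proper_pairing(')('))      # → None
```

The stack discipline guarantees both conditions of the definition: \(i < j\) for every pair, and pairs are either nested or exterior, never overlapping, which is another way of seeing the uniqueness theorem.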

Chapter 1: Logic or Calculus of Propositional Logic

This chapter has 6 sections. We'll build a language and truth tables, using induction and Chapter 0. Efforts will be made to provide exercises and solutions for each section of this chapter once its theory is finished.

If my brother plays guitar, then Christian works a lot: \(A \implies B\).

My brother plays guitar if and only if Christian works a lot: \(A \iff B\).

\(\ \ \)1. Languages in the Propositional Calculus

A (formal) language is a set of symbols (alphabet) plus a set of finite sequences of these symbols, called words or phrases and also well-formed formulas. The definition and properties of these "words" are the object of the syntax (rules of writing words) of language.

Binary language

Alphabet: \(\{0, 1\}\)

Words: sequences of the type \(01101010111\), or sequences of these

The most usual language of propositional logic has an alphabet composed of:

Parentheses: \((\),\()\)

Connectives: \(\neg, \wedge, \vee, \Rightarrow, \iff\)

Assertion symbols: a subset of \(\{A, B, ..., A_1, A_2, ...\}\)

Definition 1.1.1.

A language expression (or simply expression) is a finite sequence of \(n\) alphabet symbols, with \(n \geq 1\). It's written \(s_1 \ldots s_n\) in a row.

To say that the most usual language of propositional logic is the previous one is not an absolute statement, only indicative. Moreover, with the notation of Łukasiewicz, the parentheses are not necessary.

In the set of language expressions we define 5 internal operations (1 unary and 4 binary) as follows, where \(\alpha\) and \(\beta\) denote expressions: \(E_{\neg}(\alpha) = \neg \alpha\), \(E_{\wedge}(\alpha, \beta) = (\alpha \wedge \beta)\), \(E_{\vee}(\alpha, \beta) = (\alpha \vee \beta)\), \(E_{\Rightarrow}(\alpha, \beta) = (\alpha \Rightarrow \beta)\), and \(E_{\iff}(\alpha, \beta) = (\alpha \iff \beta)\).

A set \(I\) of expressions is said to be inductive if:

A) each assertion symbol is in \(I;\)
B) if the expressions \(\alpha\) and \(\beta\) are in \(I\), then \(\neg \alpha, (\alpha \wedge \beta), (\alpha \vee \beta), (\alpha \Rightarrow \beta), (\alpha \iff \beta)\) are also in \(I\).

A well-formed formula (wff) is an expression that belongs to every inductive set; that is, an element of the intersection of all inductive sets.

Now, we call a well-formed formula each expression that is the last term of a construction succession.
Definitions 1.1.2 and 1.1.3 are equivalent; that is, they give rise to the same set \(W\) of well-formed formulas. This will be proved in Section 2 of Chapter 1.
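Definition 1.1.3 is directly checkable by machine: a sequence of expressions is a construction succession when every term is an assertion symbol or is built from earlier terms by one of the five connective operations. A sketch (the symbol set and the string encoding of expressions are our own choices):

```python
# Checking that a sequence of expressions is a construction succession
# (definition 1.1.3): each term is an assertion symbol, or is obtained
# from earlier terms by one of the five connective operations.
SYMBOLS = {'A', 'B', 'C'}
BINARY = ['∧', '∨', '⇒', '⟺']

def is_construction_succession(terms):
    for i, t in enumerate(terms):
        earlier = terms[:i]
        ok = (t in SYMBOLS
              or any(t == '¬' + a for a in earlier)
              or any(t == '(' + a + c + b + ')'
                     for c in BINARY for a in earlier for b in earlier))
        if not ok:
            return False
    return True

# ⟨A, B, (A∨B), ¬(A∨B)⟩ is a construction succession whose last term
# is the wff ¬(A∨B):
print(is_construction_succession(['A', 'B', '(A∨B)', '¬(A∨B)']))  # → True
# (A∨B) alone is not: it is neither a symbol nor built from earlier terms.
print(is_construction_succession(['(A∨B)']))                       # → False
```

This brute-force check is exponential in general, but it makes the definition concrete: a wff is exactly an expression for which such a succession exists.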

We have to prove that the set \(T\) of well-formed formulas that fulfill the thesis is inductive. To do this, we prove the following steps: each assertion symbol \(A\) belongs to \(T\), and \(T\) is closed under the operations \(E_{\neg}, E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff}\).

Step A) Each assertion symbol \(A\) lacks parentheses, so it belongs to \(T\).

Step \(\neg\)) If the wff \(\alpha\) fulfills the thesis, \(\neg \alpha\) also fulfills it, because the sequence of parentheses of \(\neg \alpha\) is the same as that of \(\alpha\).

Step \(\vee\)) If \(\alpha\) and \(\beta\) fulfill the thesis, then there is a proper pairing (parentheses matching) for \((\alpha \vee \beta)\): the one formed by the proper pairing of \(\alpha\), that of \(\beta\), and the pair of extreme parentheses of \((\alpha \vee \beta)\).

Step \(\wedge, \Rightarrow, \iff\)) Proved in the same way as the previous step. \(_\square\)

\(\boxed{1}\) .- Given a well-formed formula \(\psi =\neg \alpha\), the proper pairing of \(\psi\) is the same as that of \(\alpha\). And given a well-formed formula \(\psi = (\alpha \text{ & } \beta)\), where & denotes any binary connective, the proper pairing of \(\psi\) is composed of that of \(\alpha\), that of \(\beta\), and the pair of extreme parentheses of \(\psi\).

\(\boxed{2}\) .- Let \(\psi = \ldots p_i \ldots p_j \ldots\) be a wff (well-formed formula), and \(p_i, p_j\) a pair of parentheses of the proper pairing of \(\psi\). Then the part \(p_i \ldots p_j \) of \(\psi\) is a wff.

Proof.-

\(\boxed{1}\) was demonstrated in the preceding theorem.

\(\boxed{2}\) Exercise: by induction on wffs.

Solution.- Let \(I\) be the set of well-formed formulas that verify the property.

Due to definition 1.1.3, a well-formed formula (wff) is an assertion symbol or one of the following five types: \(\neg \alpha, (\alpha \wedge \beta), (\alpha \vee \beta), (\alpha \Rightarrow \beta), (\alpha \iff \beta)\), where \(\alpha\) and \(\beta\) are wffs.

We will call an initial part of a row \(s_1 \ldots s_n\) any row \(s_1 \ldots s_k\) with \( k < n\).

Lemma 1.1.7. A well-formed formula (wff) cannot be an initial part of another well-formed formula (wff).

Proof.- By induction, chapter 0.7.

Let \(L\) be the length of \(\psi\), where \(\psi\) is a well-formed formula (\(L\) = number of symbols in \(\psi\)). If \(L = 1\), then \(\psi = A\) is an assertion symbol, and \(\psi\) cannot be the initial part of another well-formed formula, because \(A \ldots\) isn't a wff.

Let's suppose that \(L > 1\) and that the statement holds for all well-formed formulas of length less than \(L\).

If \(\psi = \neg \alpha\) and \(\psi\) were the initial part of another well-formed formula \(\psi' = \neg \beta\), then \(\alpha\) would be an initial part of \(\beta\), contradicting the induction hypothesis, since the length of \(\alpha\) is less than \(L\).

\(\boxed{3}\) If \(\psi = \ldots (\ldots) \ldots\) is a wff and \((\ldots)\) is a wff, then these two parentheses are a pair in the proper pairing of \(\psi\).

Proof.-

\(\boxed{1}\) \(A \neq \neg \alpha\) and \(A \neq (\alpha \text{ & } \beta)\), obviously. Also \(\neg \alpha \neq (\gamma \text{ & } \delta)\), since \(\neg \neq (\). Nor can it be that \( (\alpha \vee \beta) = (\gamma \Rightarrow \delta)\): if \(\alpha \neq \gamma\), then by 1.1.7 \(\alpha\) would be an initial part of \(\gamma\) or vice versa (contradiction), and if \(\alpha = \gamma\), this would imply \(\vee = \Rightarrow\) (false).

\(\boxed{2}\) If \(\neg \alpha = \neg \beta\), it is clear that \( \alpha = \beta\). If \( (\alpha \vee \beta) = (\gamma \vee \delta)\), it can't be that \(\alpha \neq \gamma\) because of 1.1.7; therefore \(\alpha = \gamma\), and from there \(\beta = \delta\). The same happens with the other binary connectives.

\(\boxed{3}\) Let \(\psi = \ldots \text{ p } \ldots \text{ p' } \ldots\) and \(\alpha = \text{ p } \ldots \text{ p' }\), where \(\alpha\) and \(\psi\) are wffs. If \(\text{p''}\) is the right parenthesis corresponding to \(\text{p}\) in the proper pairing of \(\psi\), then \(\text{ p } \ldots \text{ p'' }\) is a wff \(\beta\) (Corollary 1.1.6, (2)). Then it has to be \(\text{ p' } = \text{ p'' }\), because otherwise \(\alpha\) would be an initial part of \(\beta\), or vice versa, against Lemma 1.1.7. \(_\square\)

Corollary 1.1.9.

The five constructive operations \(E_{\neg}, E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff}\), restricted to the set of well-formed formulas, fulfill:

\(\boxed{1}\) Their images (or ranges) are pairwise disjoint, and disjoint from the set of assertion symbols.

\(\boxed{2}\) They are one-to-one (injective functions, or injective operations).

Proof.- \(\boxed{1}\) and \(\boxed{2}\) are equivalent to \(\boxed{1}\) and \(\boxed{2}\) of the previous theorem, expressed in another way. \(\square\)

Definition 1.1.10.

A subformula of a well-formed formula \(s_1 \ldots s_n\) is a part \(s_h s_{h + 1} \ldots s_{h + r}\) that is also a well-formed formula.

Theorem 1.1.11.

Given a connective \(C\) of a well-formed formula \(\psi\), \(C\) appears in one and only one subformula of \(\psi\), of the form \(C \alpha\) or \((\alpha \space C \space \beta)\), according to whether \(C = \neg\) or \(C\) is binary.

Proof.- By definition (1.1.3), \(C\) appears in at least one subformula of the indicated form in \(\psi \). Let's prove that it appears in exactly one.

Case \(C = \neg\)) If there were two subformulas \(C \alpha\) and \(C \beta\), it would follow that \(\alpha\) is an initial part of \(\beta\), or vice versa, against Lemma 1.1.7.

Case \(C\) binary) Suppose there are two subformulas \((\alpha \space C \space \beta)\) and \((\gamma \space C \space \delta)\). Call \(p, p'\) the extreme parentheses of the first subformula and \(q, q'\) those of the second. By the repeatedly used Lemma 1.1.7, \(\beta = \delta\) and therefore \(p' = q'\). But then, by Theorem 1.1.8(3), it has to be \(p = q\). \(_\square\)

Definition 1.1.12.

Let \(C\) be a connective of a well-formed formula \(\psi\). If \(C = \neg\), the reach (or scope) of \(C\) in \(\psi\) is the wff \(\alpha\) of the subformula \(\neg \alpha\) of \(\psi\). If \(C\) is binary, the (anterior and posterior) reaches (or scopes) of \(C\) in \(\psi\) are the wffs \(\alpha\) and \(\beta\) of the subformula \((\alpha \space C \space \beta)\) of \(\psi\).

Parentheses form an essential part of our definition of wff; for example, \(\neg (A \vee B) \neq (\neg A \vee B)\). In the desire to simplify the language, one may ask whether the introduction of parentheses is superfluous. The answer is no, in most cases. Nonetheless, the writing of a well-formed formula \(\psi\) can be greatly simplified by establishing certain conventions, set out below:

\(\ \ \)2. Induction and Recursion

Let \(E\) be a set with some internal operations and \(B \subseteq E\). For convenience, we will assume that there are only two operations: a unary \(g: E \to E\) and a binary \(f : E \times E \to E\).

Definition 1.2.1.-
It's said that a subset \(S\) of \(E\) is inductive if it contains \(B\) and is closed under the operations \(g\) and \(f\). (See the example in Chapter 0.6.)

We will call a construction succession in \(E\) a finite sequence \(\langle \alpha_1, \ldots, \alpha_n \rangle\) of elements of \(E\) in which each term \(\alpha_i\) fulfills one of these three conditions: \(\alpha_i \in B\), or \(\alpha_i = g(\alpha_j)\) with \(j < i\), or \(\alpha_i = f(\alpha_j, \alpha_k)\) with \(j, k < i\).

\(C = C^{\small 0} = C_{\small 0} \) (definition 1.2.4) is said to be freely generated from \(B\) by the operations \(g\) and \(f\) if the restrictions of \(g\) and \(f\) to \(C\) satisfy:

\(\boxed{1}\) Their images (or ranges) are disjoint from each other and from the set \(B\).

\(\boxed{2}\) They are one-to-one (injective functions).

\(C\) is a \((1, 2)\)-free algebra generated by \(B\) (in algebraic language).

The set of well-formed formulas is freely generated by the assertion symbols by the connective operations.

Recursion theorem 1.2.7.

Let \(C \subseteq E\) be freely generated from \(B \subseteq E\) by the operations \(g\) and \(f\). Let \(R\) be a set of values, and let there be given functions \(v : B \to R\), \(G : R \to R\), \(F: R \times R \to R\). Then there exists one and only one application \(\overline{v}: C \to R\) such that \(\overline{v}(b) = v(b)\) for each \(b \in B\), \(\overline{v}(g(x)) = G(\overline{v}(x))\), and \(\overline{v}(f(x, y)) = F(\overline{v}(x), \overline{v}(y))\).

\(\boxed{\text{Existence}}\) \(D_1 = C_1 = B\), \(D_n = C_n - C_{n - 1}\) for \(n \ge 2\). \(D_n\) is the set of elements \(z \in C\) that are the last term of a construction succession of \(n\) terms but not of \(n - 1\) terms (\(n\) = minimum number of terms of a construction succession of \(z\)). Given \(z \in C\), there is only one \(n\) such that \(z \in D_n\); furthermore, \(z\) is in one and only one of the sets \(B\), \(\text{Im}(g)\), or \(\text{Im}(f)\) (since \(C\) is a \((1, 2)\)-free algebra, i.e. \(C\) is freely generated).

In the second case, \(z = f(x, y)\), with \((x, y) \in C^2\) univocally determined by \(z\), since \(f\) is an injective function. We define \(\overline{v}(z) = \overline{v}(f(x, y)) = F(\overline{v}(x), \overline{v}(y))\), where \(\overline{v}(x)\) and \(\overline{v}(y)\) are known by induction, since \(x \in D_i\) and \(y \in D_j\) with \(i < n \) and \(j < n\).

Let \(C\) be a free algebra generated from \(B\) with operations \(g\) and \(f\), and \(R\) a \((1,2)\)-algebra with operations \(G\) and \(F\). Given an application \(v: B \to R\), there exists one and only one application \(\overline{v}: C \to R\) which extends \(v\) and is a homomorphism of \((1,2)\)-algebras.

\(\ \ \)3. Truth Assessments

In ordinary language, semantics studies the meaning of words, their variations, and problems related to their meaning.

In this section, we will try to give a "value" to each well-formed formula (wff). The definition and study of these "assessments" (which may come from "interpretations" of language) constitutes what is called the semantics of propositional language.

Let's consider such a language, and a set \(\{T, F\}\) of 2 elements that we'll call truth values (\(T\) = true and \(F\) = false).

By the Recursion Theorem (1.2.7), since \(W\) is freely generated from \(S\) by the unary operation \(E_{\neg}\) and the four binary operations \(E_{\wedge}, E_{\vee}, E_{\Rightarrow}, E_{\iff}\) (Corollary 1.1.9), and the set of values \(\{T, F\}\) is equipped with the unary operation \(G_{\neg}\) and the four binary operations \(G_{\wedge}, G_{\vee}, G_{\Rightarrow}, G_{\iff}\), there exists one and only one application (function) \(\overline{v}: W \to \{T, F\}\), an extension of \(v\), satisfying \(\overline{v}(\neg \alpha) = G_{\neg}\big(\overline{v}(\alpha)\big)\) and \(\overline{v}\big((\alpha \space C \space \beta)\big) = G_{C}\big(\overline{v}(\alpha), \overline{v}(\beta)\big)\) for each binary connective \(C\).
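This unique extension \(\overline{v}\) can be sketched in Python, with wffs represented as nested tuples (this encoding, and the use of Python booleans for \(T, F\), are our own conveniences, not the book's):

```python
# The extension v̄ given by the Recursion Theorem, on wffs encoded as
# nested tuples, e.g. ('⇒', ('∨', 'A', 'B'), 'C') for (A∨B)⇒C.
# G collects the truth-value operations G_¬, G_∧, G_∨, G_⇒, G_⟺.
G = {
    '¬': lambda a: not a,
    '∧': lambda a, b: a and b,
    '∨': lambda a, b: a or b,
    '⇒': lambda a, b: (not a) or b,
    '⟺': lambda a, b: a == b,
}

def v_bar(wff, v):
    """v maps assertion symbols to True/False; v_bar extends it to all wffs."""
    if isinstance(wff, str):
        return v[wff]                          # v̄(A) = v(A) on symbols
    op, *args = wff
    return G[op](*(v_bar(a, v) for a in args)) # v̄ commutes with each operation

v = {'A': True, 'B': False, 'C': False}
print(v_bar(('⇒', ('∨', 'A', 'B'), 'C'), v))  # (A∨B)⇒C under v: → False
```

The recursion is well-defined precisely because \(W\) is freely generated: each non-atomic wff decomposes in exactly one way into a connective and its immediate subformulas.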

You find yourself on the island of knights and knaves, where every inhabitant is of one of two types: a knight, who always tells the truth, or a knave, who always lies.

You know that Artemis is a knight.

Which of the island's inhabitants can say that "If I am a knave, then Artemis is a knight"?

Note: You are not an inhabitant of the island.


Only knights
Only knaves
Everybody in the island
Nobody in the island
Not enough information

\(\underline{\text{Truth tables}}\)

Given a well-formed formula (wff) \(\alpha\), each truth assessment \(v\) assigns \(\alpha\) a value \(\overline{v}(\alpha)\).
It is clear that if the set \(S\) of assertion symbols is infinite, the list of all truth assessments would be infinite. But since the value \(\overline{v} (\alpha)\) depends only on the values \(v(A_i)\) of the assertion symbols \(A_i\) contained in \(\alpha\), it suffices to know these values for each \(v\), which reduces the list of assessments to the list of possible \(n\)-tuples \(\langle v(A_1), \ldots, v(A_n) \rangle\), whose total number is \(2^{n}\).
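Enumerating these \(2^n\) \(n\)-tuples is mechanical. A sketch with `itertools.product` (the symbols and the example wff \((A \vee B) \Rightarrow C\) are our own choices):

```python
# Enumerating all 2^n truth assessments of the assertion symbols in a wff.
from itertools import product

def implies(a, b):
    return (not a) or b

symbols = ['A', 'B', 'C']
rows = [(v, implies(v['A'] or v['B'], v['C']))
        for values in product([True, False], repeat=len(symbols))
        for v in [dict(zip(symbols, values))]]

print(len(rows))  # → 8, i.e. 2^3 truth assessments
```

Each `row` pairs one of the \(2^3 = 8\) assessments with the value of \((A \vee B) \Rightarrow C\) under it, which is exactly one line of the truth table.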

Due to the conventions for suppressing parentheses in a wff (at the end of Chapter 1, Section 1), let's consider \(\alpha = A \vee B \Rightarrow C\), i.e. \(\big((A \vee B) \Rightarrow C\big)\). Then we get:

\[\begin{array}{ccc|c|c} A & B & C & A \vee B & (A \vee B) \Rightarrow C \\ \hline T & T & T & T & T \\ T & T & F & T & F \\ T & F & T & T & T \\ T & F & F & T & F \\ F & T & T & T & T \\ F & T & F & T & F \\ F & F & T & F & T \\ F & F & F & F & T \end{array}\]

This is a truth table of (for) \(\alpha\). That is, with only 3 assertion symbols we can create \(2^3 = 8\) possible truth assessments.

Definition 1.3.3.

A truth assessment \(v\) is said to satisfy a wff \(\alpha\) (\(v\) satisfies \(\alpha\)) when \(\overline{v}(\alpha) = T\).

Let now \(\Sigma\) be a set of wffs and \(\alpha\) a wff.

Definition 1.3.4.

We'll say that \(\Sigma\) tautologically implies \(\alpha\) if every truth assessment that satisfies all the wffs of \(\Sigma\) also satisfies \(\alpha\).
It's written \(\Sigma \vDash \alpha\). Otherwise, \(\Sigma \nvDash \alpha\).

1) \(\Sigma = \emptyset\). In this case, \(\emptyset \vDash \alpha\) means \(\alpha\) is true for each truth assessment. In this case, \(\alpha\) is a tautological well-formed formula, or \(\alpha\) is a tautology. It's also written \(\vDash \alpha\).

2) There is no truth assessment that satisfies all the well-formed formulas of \(\Sigma\). In this case, \(\Sigma \vDash \psi\) for every wff \(\psi\) (trivially).

1)

\(\vDash \alpha \Rightarrow \alpha\).

\(\vDash \alpha \Rightarrow (\beta \Rightarrow \alpha)\).

2)
\(\{A, \neg A\} \vDash \psi, \forall \psi\).
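The tautologies in example 1) can be checked mechanically by running over all truth assessments. A brute-force sketch (encoding each wff as a Python function of its assertion symbols is our own convenience):

```python
# A brute-force tautology checker: α is a tautology iff every truth
# assessment satisfies it.
from itertools import product

def implies(a, b):
    return (not a) or b

def is_tautology(wff, n_symbols):
    return all(wff(*values)
               for values in product([True, False], repeat=n_symbols))

print(is_tautology(lambda a: implies(a, a), 1))                 # ⊨ α ⇒ α: True
print(is_tautology(lambda a, b: implies(a, implies(b, a)), 2))  # ⊨ α ⇒ (β ⇒ α): True
print(is_tautology(lambda a: a, 1))                             # A alone: False
```

The check is exponential in the number of assertion symbols, but for propositional logic it is a complete decision procedure: \(\vDash \alpha\) holds exactly when all \(2^n\) assessments return \(T\).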

Is this a tautology?

\( (A \vee B) \iff (\neg A \Rightarrow B) \)

Can you explain it better?
No, it isn't
Yes, it is
There is insufficient information

Is this a tautology?

\[\bigg((A \vee B) \vee C\bigg) \iff \bigg(A \vee (B \vee C)\bigg)\]

No, it isn't
Yes, it is
There is insufficient information
It's transitive law and it is always true


Definition 1.3.5.

Two well-formed formulas \(\alpha\), \(\beta\) are said to be tautologically equivalent when \(\alpha \vDash \beta\) and \(\beta \vDash \alpha\). In this case, it's written \(\alpha \iff \beta\).

Corollary 1.3.6.

\(\alpha \vDash \beta\) if and only if \(\vDash \alpha \Rightarrow \beta\). The well-formed formulas (wffs) \(\alpha\), \(\beta\) are tautologically equivalent if and only if \(\vDash \alpha \iff \beta\).

Two tautologically equivalent well-formed formulas express the same fact, since one is true if and only if the other is. In ordinary logic, they are usually said to be equivalent, or one is said to state the same as the other, although it would be better to say that they are semantically equivalent.

The set \(W\) of well-formed formulas (wffs) is generated from \(S\) (the set of assertion symbols) by the connective operations. Let's now consider the set \(W'\) of the wffs generated from \(S\) by two of the above operations, for example, \(E_{\neg}, E_{\Rightarrow}\). It is clear that \(W' \subseteq W\). It's said that the system \(\{\neg, \Rightarrow \}\) is a complete system of connectives if for each wff \(\alpha\) of \(W\) there exists a wff \(\beta\) of \(W'\) that is tautologically equivalent to \(\alpha\).
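For example, the completeness of \(\{\neg, \Rightarrow\}\) rests on identities such as \(A \vee B \iff \neg A \Rightarrow B\) and \(A \wedge B \iff \neg(A \Rightarrow \neg B)\), which can be verified over all four truth assessments. A sketch (the `implies` helper is our own encoding):

```python
# Verifying that ∨ and ∧ are expressible from {¬, ⇒}:
#   A ∨ B  is tautologically equivalent to  ¬A ⇒ B
#   A ∧ B  is tautologically equivalent to  ¬(A ⇒ ¬B)
from itertools import product

def implies(a, b):
    return (not a) or b

for a, b in product([True, False], repeat=2):
    assert (a or b) == implies(not a, b)          # A ∨ B  ⟺  ¬A ⇒ B
    assert (a and b) == (not implies(a, not b))   # A ∧ B  ⟺  ¬(A ⇒ ¬B)

print("equivalences hold for all 4 assessments")
```

Since every wff is built from \(\neg, \wedge, \vee, \Rightarrow, \iff\), rewriting with such identities (plus one for \(\iff\)) turns any wff into a tautologically equivalent wff of \(W'\).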

Exercise 1.3.2. \(\{\wedge, \Rightarrow \}\) is not a complete system of connectives. (Hint: there doesn't exist a wff with only these connectives that is tautologically equivalent to \(\neg (A \wedge B)\).)