In some deduction systems there is a rule* that given $\exists x (\phi(x))$, we can infer $\phi(y)$, where $y$ is a fresh variable (i.e., one we haven't yet mentioned in this context). Call this rule "EI."

(Edit: in the opening sentence I originally said "in natural deduction systems there is typically a rule that..." Andrej Bauer has kindly informed me that natural deduction systems typically do not have this rule. In this post I am, I have learned, using a somewhat unusual set of conventions regarding the treatment of free variables.)

Call this the "simple definition" of semantic entailment: $T \models U$ iff, for all $M,A$, if $M,A \models T$ then $M,A \models U$. We can't use the simple definition in a system with EI, because $A$ might not contain an appropriate value in a fresh variable we instantiate into.

For contrast, call this the "complicated definition" of semantic entailment: $T \models U$ iff, for all $M,A$, if $M,A \models T$ then $M,B \models U$, for some $B \supseteq A|_{\text{fv}(T)}$. That is, we can change the values of unused variables across semantic entailments. This definition is compatible with EI.
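To spell out why EI is sound for the complicated definition: suppose $M,A\models T$ and $M,A\models\exists x\,\phi(x)$, and EI introduces the fresh variable $y$. Pick a witness $d$ for the existential and set $B=A[y\mapsto d]$. Since $y$ is fresh, $y\notin\text{fv}(T)$, so $B\supseteq A|_{\text{fv}(T)}$, and

$$M,B\models\phi(y).$$

The complicated definition thus absorbs exactly the reassignment of the fresh variable that EI requires.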

My questions:

Does a typical Hilbert system (e.g., the one on Wikipedia) allow for anything like EI? Can we actually infer $\phi(y)$ (with $y$ fresh) from $\exists x (\phi(x))$? If not, how do we make up for the lack of this feature?

Can a typical Hilbert system be interpreted by the simple definition of semantic entailment?

The Henkin-style completeness proofs with which I am familiar (e.g., this one) make essential use of EI in the step of constructing a maximal, consistent superset with witnesses. If Hilbert systems don't have EI, how do we fulfill the function of this step? If Hilbert systems don't have EI, is it even possible to prove them complete using a Henkin-style proof, or do we need to use a completely different method?

I'm asking because I'm trying to write a completeness proof for a non-classical logic (a variant of LP), with a Hilbert-style deduction system.

What? How do you infer $\phi(y)$ from $\exists x . \phi(x)$? That makes no sense, as it would allow me to infer by generalization that $\forall y . \phi(y)$, so that any existential sentence would imply its universal variant. You probably mean something else.
– Andrej Bauer, Dec 18 '12 at 16:47

Not only is it nonstandard, but your opening sentence is highly misleading. You say "in natural-deduction style there is typically a rule that given $\exists x (\phi(x))$, we can infer $\phi(y)$, where $y$ is a fresh variable." Natural deduction is a specific thing. If you do not mean natural deduction when you say natural deduction, you should put a big warning sign in front of your question. I am downvoting the question until it is rephrased to be less misleading.
– Andrej Bauer, Dec 19 '12 at 1:30


Also, free variables are not "implicitly universally quantified" in usual treatments of logic. That is a myth propagated by non-logicians who have insufficient training in logic to tell the difference between the meaning of an open statement and the meaning of its universal closure. And I would be curious to see what your introduction rule for universal quantification looks like, given that you are allowed to deduce $\phi(y)$ from $\exists x . \phi(x)$. We are definitely not talking anything like "Hilbert-style" or "natural deduction style" here.
– Andrej Bauer, Dec 19 '12 at 1:33


A somewhat similar device is used in proof complexity. In Extended Frege systems, the extension rule allows one to introduce a formula of the form $p\leftrightarrow\phi$, provided $p$ does not occur in the previous part of the proof, in $\phi$, or in the conclusion of the proof. (This is propositional logic, so $p$ is a propositional variable, and the proof has no non-logical premises.) I guess one can think of it as taking the quantified propositional tautology $\exists p\,(p\leftrightarrow\phi)$, and applying an $\exists$-elimination rule of this sort. ...
– Emil Jeřábek, Dec 20 '12 at 12:08

3 Answers

First, the standard definition of semantic entailment is neither the “simple” one nor the “complicated” one, but the following: $T\models U$ iff for every $M$, if $M,A\models T$ for every $A$, then $M,A\models U$ for every $A$.

First-order Hilbert-style systems usually employ some form of a generalization rule: the simplest one is
$$\phi\vdash\forall x\,\phi,$$
other common variants include
$$\begin{align}
\psi\to\phi&\vdash\psi\to\forall x\,\phi,\\
\phi\to\psi&\vdash\exists x\,\phi\to\psi,
\end{align}$$
where $x$ must not occur free in $\psi$. (The choice of the rules depends on other axioms of the system, and of course on the logic, if you are dealing with non-classical systems.) Notice that these rules are not sound with respect to either your “simple” or “complicated” definition, but they are sound with respect to the semantics I gave above.
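For a concrete check of the difference, take a unary predicate $Q$. Under the semantics above,

$$Q(x)\models\forall x\,Q(x),$$

since if $M,A\models Q(x)$ for every assignment $A$, then $Q^M$ contains every element of the domain, hence $M,A\models\forall x\,Q(x)$ for every $A$. Under the "simple" definition this entailment fails (take $A(x)\in Q^M$ with $Q^M$ a proper subset of the domain), which is why the generalization rules above are unsound for it.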
(Note also that the system on the Wikipedia page, with no generalization rules, is quite unconventional.)

The way to simulate existential instantiation in Hilbert systems is by means of a “meta-rule”, much like you’d use the deduction theorem to simulate the implication introduction rule. The most common formulation is:

Lemma 1: If $T\vdash\phi(c)$, where $c$ is a constant not appearing in $T$ or $\phi$, then $T\vdash\forall x\,\phi(x)$.

A version with explicit existential quantifiers may look like this:

Lemma 1’: If $T\vdash\psi(c)\to\phi$, where $c$ is a constant not appearing in $T$, $\phi$, or $\psi$, then $T\vdash\exists x\,\psi(x)\to\phi$.

Both lemmas follow easily by replacing the constant everywhere with a fresh variable and applying an appropriate version of the generalization rule. To simulate the natural deduction existential elimination rule, you are in a situation where you have already derived (or assumed) $\exists x\,\psi(x)$. You add $\psi(c)$ as an extra assumption, where $c$ is a fresh constant, and derive the desired result $\phi$. By the deduction theorem (you have to make sure to satisfy its hypotheses, e.g. by not using generalization rules in the proof snippet, or by assuming $\psi(c)$ is a sentence), this implies the provability of $\psi(c)\to\phi$, and therefore of $\exists x\,\psi(x)\to\phi$ by Lemma 1’.
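Schematically, the whole simulation of $\exists$-elimination runs as follows (with $c$ a fresh constant):

$$\begin{align}
&T\vdash\exists x\,\psi(x) &&\text{(given)}\\
&T+\psi(c)\vdash\phi &&\text{(the subderivation)}\\
&T\vdash\psi(c)\to\phi &&\text{(deduction theorem)}\\
&T\vdash\exists x\,\psi(x)\to\phi &&\text{(Lemma 1’)}\\
&T\vdash\phi &&\text{(modus ponens with the first line)}
\end{align}$$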

In particular, the construction of a Henkin completion of a theory basically needs that if $T+\exists x\,\psi(x)$ is consistent, where $\psi(x)$ has no other free variable, then $T+\psi(c)$ is consistent, where $c$ is a fresh constant. This follows from Lemma 1’ and the deduction theorem in the way I indicated.
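In contrapositive form, the needed consistency transfer is: if $T+\psi(c)$ is inconsistent, then $T\vdash\neg\psi(c)$ by the deduction theorem (applicable since $\psi(c)$ is a sentence), hence

$$T\vdash\forall x\,\neg\psi(x)$$

by Lemma 1, which contradicts the consistency of $T+\exists x\,\psi(x)$.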

Emil, thanks for the thorough and helpful response! I need to play with your math more before I understand fully, but I wanted to thank you for writing. I will be getting back to you once I have finished playing.
– Nick Thomas, Dec 18 '12 at 23:30

Emil: OK, I understand. Thank you for your help! I am marking this as the answer. I would be curious to see fuller proofs of Lemmas 1 and 1'. I am also curious to know what restrictions need to be placed on the deduction theorem. Not using the generalization rule is sufficient for DT to hold; but can we give a condition that is necessary and sufficient? And what would be best of all, can you point me to a source which treats these questions? I've had difficulty finding sources which address these sorts of nitty-gritty details, especially for Hilbert systems. Thank you!
– Nick Thomas, Dec 19 '12 at 1:59


As for deduction theorem, it is typically stated for first-order Hilbert calculi in one of these two forms: (1) If $\psi$ is a sentence and $T+\psi\vdash\phi$, then $T\vdash\psi\to\phi$. (2) If $\phi$ is derivable from $T+\psi$ by a proof where generalization rules are only applied to formulas derived from $T$ alone, then $T\vdash\psi\to\phi$. I don’t think there is a good way to state a necessary and sufficient condition; $T\vdash\psi\to\phi$ iff there exists a proof of $\phi$ from $T+\psi$ of the form stated in (2) (because you can first derive $\psi\to\phi$ from $T$, and then apply ...
– Emil Jeřábek, Dec 19 '12 at 15:09

... modus ponens with $\psi$), but this is not particularly helpful, because the conclusion may hold even if a given proof of $\phi$ from $T+\psi$ does not obey this restriction. As for a source, I am kind of curious where you learn about Hilbert systems that it does not treat this stuff, introductory textbooks in logic usually do. But to give a specific reference, you can look in Shoenfield’s Mathematical logic. (His Hilbert system is a bit atypical in that it does not use $\to$ as the main connective, his basic logical operators are $\lor,\neg,\exists$, but the principle is the same.) ...
– Emil Jeřábek, Dec 19 '12 at 15:16

... His calculus is described in section 2.6. The deduction theorem (in form (1)) is stated on page 33, and on the same page a form of my Lemmas 1 and 1’ is given as the “theorem on constants”. Its application in a Henkin-style completeness proof can be seen in Lemma 3 on page 46.
– Emil Jeřábek, Dec 19 '12 at 15:21

For comparison, Enderton's textbook uses a Hilbert-style system. He derives EI in a form that is essentially what Emil Jeřábek calls Lemma 1', but as a metatheorem:

(EI) If $\Gamma, \phi(c) \vdash \psi$ where $c$ does not occur in $\Gamma$, $\phi(x)$, or $\psi$, then $$\Gamma, (\exists x)\phi(x) \vdash \psi,$$ and there is a deduction witnessing this fact that does not mention $c$.

Here we do not deduce $\phi(c)$ from $(\exists x) \phi(x)$; rather, we assume $\phi(c)$ as a temporary hypothesis, for an appropriate $c$, knowing that we can later weaken that hypothesis to $(\exists x)\phi(x)$. But $\phi(c)$ does not appear on the right side of the turnstile in the metatheorem: it is never a conclusion, only a hypothesis.

Also, Enderton does define $\vDash$ via your "simple definition": $\phi \vDash \psi$ means that for every structure $M$ and variable assignment $a$, if $M$ satisfies $\phi$ with $a$ then $M$ satisfies $\psi$ with $a$. In particular, he points out the example that $Q(x) \not\vDash (\forall z) Q(z)$, where $Q$ is a unary relation symbol, and in this sense free variables are indeed not "implicitly universally quantified" in his definition. He is still able to prove that $\Gamma \vdash \phi$ if and only if $\Gamma \vDash \phi$, with no restrictions on free variables, by being careful with the logical axioms he assumes in his Hilbert-style system. He does get universal generalization as a metatheorem: if $\Gamma \vdash \phi(x)$ and $\Gamma$ does not mention $x$ then $\Gamma \vdash (\forall x)\phi(x)$.

This is quite different from the definition of $\vDash$ mentioned by Emil Jeřábek, in which $Q(x) \vDash (\forall z) Q(z)$. Let's call that "implicitly universally quantified". I have found in several cases that authors who are concerned with universal algebra or equational theories seem to prefer the definition in which free variables are implicitly universally quantified, while those who are concerned with model theory may not even define satisfaction or logical implication for formulas with free variables (instead they define what it means for a tuple of elements to satisfy a formula in a given structure, which is slightly different). All the definitions agree if we only consider sentences, of course.

One advantage of the "simple definition" is that the Deduction Theorem simply states that $\Gamma, \phi \vDash \psi$ is equivalent to $\Gamma \vDash \phi \to \psi$, without any fuss with free variables. After trying vainly to use the "simple definition" with the wrong crowd, I found it was better to use $\vDash \phi \to \psi$ so that everybody agrees.
– François G. Dorais♦, Dec 21 '12 at 21:09

Regarding the point: "In some deduction systems there is a rule that given $\exists x (\phi(x))$, we can infer $\phi(y)$, where $y$ is a fresh variable (i.e., one we haven't yet mentioned in this context). Call this rule 'EI.'"

Maybe it is useful to remember that some mathematical logic textbooks (e.g. I. Copi, Symbolic Logic, 1954) state the EI rule in a wrong way. Quine (see Methods of Logic, revised ed. 1966; 1st ed. 1950) used some "unnatural" restrictions to avoid fallacies, but the relevant chapter is (for me) worth reading, because it shows good counterexamples to the "unrestricted" use of EI.

Regarding the definition of logical consequence (or logical entailment: $\vDash$), there are some subtleties to be taken into account.

You can define it using truth in a model (e.g. Shoenfield) or you can state it in terms of satisfaction (e.g. Enderton). The two definitions make no difference regarding validity and the completeness theorem, but they "interact" in different ways with the rules of the calculus.

If we want the deduction rules to faithfully track the relation of logical consequence, we must choose them consistently with the definition of $\vDash$.

We have that, in a structure $M$, $\phi(x)$ is true iff $\forall x\, \phi(x)$ is true, but there are structures where some assignment $a$ satisfies $\phi(x)$ but not $\forall x\, \phi(x)$.

In this case, if your calculus has a Gen rule and you state the $\vDash$-relation in terms of satisfaction, you will have $\phi(x) \vdash \forall x\, \phi(x)$ but $\phi(x) \nvDash \forall x\, \phi(x)$.

You also need a restriction on the Deduction Theorem, in order to avoid $\vdash \phi(x) \rightarrow \forall x\, \phi(x)$, because otherwise you would contradict the soundness of the calculus (since $\nvDash \phi(x) \rightarrow \forall x\, \phi(x)$).

Different strategies are possible: Enderton avoids the Gen rule and uses only one deduction rule (MP); the Deduction Theorem applies without restrictions, and he proves the Gen theorem (basically a derived rule): if $\Gamma \vdash \phi$ and $x$ does not occur free in any formula in $\Gamma$, then $\Gamma \vdash \forall x\, \phi$.

This rule cannot reproduce the fallacious $\phi(x) \rightarrow \forall x\, \phi(x)$ because, in order to apply the Gen theorem, you would have to put $\Gamma = \{ \phi(x) \}$, and then the restriction that $x$ not occur free in $\Gamma$ is not satisfied.