Demonstrating that the assumption $A=B$ leads to a true statement is a vacuous truth. In order to show that $A=B$, prove that the difference $\Delta = A-B$ is zero. The subtle change is that $\Delta$ is not assumed to be zero.

What are some other examples of subtle logical pitfalls that the amateur Mathematician should be aware of?

Here is a specific argument that shows how assuming $A=B$ leads to absurdity.

A falsity implies anything. Assuming that a false statement is true lets us conclude that two undefined objects $a$ and $b$ are equal, which is absurd. However, if we instead define the difference as $\Delta$ and prove that it is zero, then a true statement is forced.

7 Answers

I've encountered many students who mistakenly conclude that if an implication is true, then the converse must be true.

That is, mistakenly concluding that if $p \rightarrow q$, then $q \rightarrow p$.

The same error in reasoning occurs when there is a chain of one-directional implications, and one then assumes that this shows the equivalence of the original claim and the conclusion at the end of the chain.

A bit more subtly: I often encounter the erroneous conclusion that if $p\rightarrow q$, then $\lnot p\rightarrow \lnot q$.
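Both errors can be made concrete with a brute-force check over truth values. A minimal sketch (the helper `implies` and the exhaustive search are my own illustrative choices): it finds an assignment where $p \rightarrow q$ holds but both the converse $q \rightarrow p$ and the erroneous $\lnot p \rightarrow \lnot q$ fail.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# Search all four truth assignments for one where p -> q holds,
# but the converse q -> p and the inverse (not p) -> (not q) both fail.
counterexamples = [
    (p, q)
    for p, q in product([False, True], repeat=2)
    if implies(p, q) and not implies(q, p) and not implies(not p, not q)
]
print(counterexamples)  # [(False, True)]
```

A single counterexample assignment is enough to show that neither the converse nor the inverse follows from the implication.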

Also, when asked to prove a biconditional "if and only if" statement like $p \iff q$, some stop after proving only $p \rightarrow q$, thinking they are done.

And more subtly: when starting with an equation and then operating on each side of it (producing a new equation), students often assume that whatever holds of the end result also holds of the original. E.g., given something of the form $$\begin{eqnarray} y &=& f(x)\tag{1} \\ \text{So} \;\;y^2 &=& (f(x))^2\tag{2}\end{eqnarray}$$ they (mistakenly) conclude that solutions to $(2)$ are solutions to $(1)$. Or, e.g., given $$\begin{eqnarray}y^2 &=& x^2\tag{3} \\ \text{So} \;\;\sqrt{y^2} &=& \sqrt{x^2}\tag{4}\end{eqnarray}$$ they (mistakenly) simplify $\sqrt{y^2}$ to $y$ and $\sqrt{x^2}$ to $x$ (rather than $|y|$ and $|x|$), concluding that $y = x$ gives the only solutions to $(3)$.
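A small numeric sketch of the first pitfall (the particular equation $x = \sqrt{x+2}$ and the candidate range are my own illustrative choices, not from the answer above): squaring both sides introduces an extraneous solution.

```python
import math

# Original equation (1): x = sqrt(x + 2)
# Squared equation  (2): x**2 = x + 2
candidates = range(-10, 11)

solutions_original = [x for x in candidates
                      if x + 2 >= 0 and math.isclose(x, math.sqrt(x + 2))]
solutions_squared = [x for x in candidates if x**2 == x + 2]

print(solutions_original)  # [2]
print(solutions_squared)   # [-1, 2]  (x = -1 solves (2) but not (1))
```

Every solution of $(1)$ solves $(2)$, but not conversely: squaring is not reversible, so solutions of the squared equation must be checked against the original.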

Additionally, one possible pitfall is not correctly applying DeMorgan's laws:

As it relates to the distribution of negation over conjunction and the distribution of negation over disjunction: Mistakenly equating $\lnot (p \land q)$ with $ \lnot p \land \lnot q$ or $\lnot (p \lor q)$ with $\lnot p \lor \lnot q$.

As it relates to containment in the complement of a union of sets and containment in the complement of the intersection of sets: E.g. Making the mistake of equating $\neg(A \cup B)$ with $\neg A \cup \neg B$ (and similarly in the case of the complement of an intersection).

Also, the negation of a quantified proposition seems to be problematic for some: making the mistake of equating $\lnot \forall x, P(x)$ with $\forall x, \lnot P(x)$, and similarly, when negating an existentially quantified statement.
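Both the propositional and the quantified versions can be checked exhaustively; a minimal sketch (the variable names, the predicate `P`, and the finite domain are illustrative choices of mine):

```python
from itertools import product

assignments = list(product([False, True], repeat=2))

# De Morgan's laws: negation distributes over a connective by *swapping* it.
demorgan_and = all((not (p and q)) == ((not p) or (not q)) for p, q in assignments)
demorgan_or = all((not (p or q)) == ((not p) and (not q)) for p, q in assignments)

# The mistaken versions keep the connective unchanged; each fails somewhere.
mistake_and = all((not (p and q)) == ((not p) and (not q)) for p, q in assignments)
mistake_or = all((not (p or q)) == ((not p) or (not q)) for p, q in assignments)

print(demorgan_and, demorgan_or)  # True True
print(mistake_and, mistake_or)    # False False

# Quantifier analogue over a finite domain: "not forall" is "exists not",
# which is strictly weaker than "forall not".
domain = [1, 2, 3]
P = lambda x: x > 1
not_forall = not all(P(x) for x in domain)  # True: P(1) fails
forall_not = all(not P(x) for x in domain)  # False: P(2) holds
print(not_forall, forall_not)  # True False
```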

One subtle issue that many people fall prey to is confusing induction on the natural numbers with metainduction on the natural numbers. The difference is that the former is an argument by induction formalized inside a formal "object" theory, such as ZFC, while the latter is an inductive argument carried out in the metatheory that we use to study the object theory.

For example, consider the claim "for every $n \in \mathbb{N}$, the number $n!$ is defined". Here $n!$ is defined to be the product $\prod_{i=1}^n i$; the problem is to show that this product actually has a value for each $n$.

It is tempting to try to prove that by saying "Given $n$, we can write down $n! = n\cdot (n-1) \cdot \cdots \cdot 1$, and thus $n!$ must be defined". But that method does not actually work to prove the statement "for all $n \in \mathbb{N}$, $n!$ is defined" in a theory such as ZFC. There are two reasons:

The proof above, when written out explicitly without an ellipsis, gets longer and longer as $n$ gets bigger and bigger, because it takes more and more space to write out the numbers conveniently omitted by the ellipsis. So the argument above really provides a sequence of formal proofs, one for each $n$ we can write down, that $n!$ is defined.

If a statement is provable in ZFC, it is true in all models of ZFC, and there are models of ZFC in which there are nonstandard natural numbers. For a nonstandard $n$ in some nonstandard model, we cannot write down the "finite" product $n!$, because there are actually infinitely many numbers less than $n$ from the perspective of the model. (For such numbers, even though $n!$ is defined, it is not given by any term in the language of arithmetic.)

The way we actually prove "$n!$ is defined" in ZFC is by induction within ZFC, which is able to formalize the following argument. First, $0! = 1$ is defined. Now assume $k!$ is defined; then $(k+1)! = (k+1)\cdot k!$ is defined as well. Hence, by induction, $n!$ is defined for all $n$. This proof does not rely on our ability to "write" $n$, it simply uses the principle of induction and the fact, provable in ZFC based on the definition I gave, that $(k+1)! = (k+1)\cdot k!$.
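The inductive definition used in the ZFC proof has the same shape as a recursive definition in code. A purely illustrative sketch (and, of course, running a program is a metatheoretic check on particular inputs, not a formal proof of the universal statement):

```python
def factorial(n: int) -> int:
    # Base case: 0! = 1.
    if n == 0:
        return 1
    # Inductive step: (k+1)! = (k+1) * k!.
    return n * factorial(n - 1)

print(factorial(0))  # 1
print(factorial(5))  # 120
```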

Because most mathematics is carried out in natural language, where the distinction between object theory and metatheory is not clear, the distinction above can be easy to overlook. But when we start actually formalizing things in formal theories, the distinction becomes crucial.

There are many examples of formal theories $T$ and statements $\phi(n)$, taking one natural number as argument, such that for every $n$, $T$ proves $\phi(n)$, but at the same time $T$ does not prove $(\forall n)\phi(n)$. The way such examples are usually obtained is by showing "for each $n$, $T$ proves $\phi(n)$" by metainduction (induction in the metatheory), and then using some other technique to show "$T$ does not prove $(\forall n)\phi(n)$."

I think this is probably the subtlest point I have encountered so far in set theory.

Let's assume the universe of sets satisfies $\sf ZFC$.

Fact I: For every $T_0$, a finite subset of $\sf ZFC$, we can prove in $\sf ZFC$ that $T_0$ has a model.

Fact II: The compactness theorem is provable from $\sf ZFC$, so if a theory $T$ is such that every finite fragment has a model, then $T$ has a model.

Fact III: If $\sf ZFC$ is consistent, then $\sf ZFC$ cannot prove its own consistency. In particular, if we assume that the universe of sets satisfies $\sf ZFC$, then it is impossible to prove that there is a set which is a model of $\sf ZFC$.

At first sight the first two facts seem to contradict the third! But the truth is that the first fact is proved in the meta-theory, and we cannot carry out a compactness argument in the meta-theory and expect the result to hold inside the theory itself.

This pitfall is $\large\sf\text{confusing the meta-theory with the theory}$. It is very easy to fall for it when making your first steps in set theory and consistency results.

Part of the reason for this confusion, I am sorry to say, is that set theorists are often not very clear about the metatheory/object theory distinction in their writing. (Of course I have some bias about this from working with proof theory.) The confusion with point 1 could be avoided by writing "for every metafinite subset $S$ of the ZFC axioms, ZFC proves that $S$ has a model".
– Carl Mummert, May 23 '13 at 12:33

Actually, the construction of JDH indicated here shows that Fact I is enough to construct, within any given model of ZFC, a transitive set that externally is a model of ZFC!
– Zhen Lin, May 23 '13 at 12:48

@Zhen Lin: This is one of the most confusing theorems I've ever seen. I love it!
– Asaf Karagila, May 23 '13 at 12:49

@Asaf: that result is pretty, but it is a standard sort of overspill argument.
– Carl Mummert, May 24 '13 at 1:15

@Carl: And half the crazy non-AC results are merely standard use of permutations to carry partial structure, but not the AC-induced substructure. That doesn't make things less amazing or lovable! :-)
– Asaf Karagila, May 24 '13 at 1:18

Given a space with some structure or some relations, it is easy to assume statements that seem obviously false (or true) without careful thought. Specifically, we must think logically about each step we take.

I will provide 2 examples:

1) Suppose we have some set $S$, and consider the statement: every infinite subset of $S$ has $4$ elements.

This statement need not be wrong: for a set $S$ with no infinite subsets, the statement is vacuously true, since there is nothing for it to apply to. Any finite set will do.
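Python's `all` over an empty collection illustrates the same vacuous truth (the particular set `S` here is my own example):

```python
# "Every infinite subset of S has 4 elements" is vacuously true when S is
# finite: a finite set has no infinite subsets, so there is nothing to
# check. all() over an empty iterable returns True for exactly this reason.
S = {1, 2, 3}
infinite_subsets = []  # a finite set has no infinite subsets
claim = all(len(subset) == 4 for subset in infinite_subsets)
print(claim)  # True
```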

2) Consider a formula for computing residues: for a holomorphic function $f$ and some reasonably well-behaved function $g$, defined except perhaps at some singular points $z_0, z_1, \ldots$,

$\operatorname{Res}_{z_k}(f,g) = f(z_k)\operatorname{Res}_{z_k}(g)$

This is false in general; the subtle logic required is the same care we need when integrating with $\infty$ as a limit, or when exchanging operators such as $\sum$ and $\int$. For specific examples, however, this residue formula is true.

So can we use this formula to prove something desirable about our given integral or residues?

The subtle logic required for a full proof (or answer) is that it is not enough that the formula happens to hold in our specific case; we must show why it holds.

A bigger and more "obvious" mistake is to use this formula for arbitrary functions $f, g$: from subtle logic to ridiculous logic. It doesn't help that the formula can happen to hold for functions that are, say, both non-holomorphic.

For example, most people will recognize that, in the real numbers, you can't divide by $0$. However, if $x$ is a variable that could be zero, they don't notice the problem with dividing by $x$.

This isn't just a source of gimmicky "$1=2$"-type proofs: it causes problems in solving real problems, because people will divide by something that could be zero, and in the process lose their ability to find those solutions where that quantity is zero. One context where this is a serious problem is when using the method of Lagrange multipliers.
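A minimal sketch of how dividing by a possibly-zero variable loses solutions (the equation $x^2 = 3x$ and the candidate range are illustrative choices of mine):

```python
# Solve x**2 == 3*x over a small range of integers.
candidates = range(-10, 11)

all_solutions = [x for x in candidates if x * x == 3 * x]

# "Dividing both sides by x" turns the equation into x == 3, which
# silently assumes x != 0 and discards the solution x = 0.
divided_solutions = [x for x in candidates if x != 0 and x == 3]

print(all_solutions)      # [0, 3]
print(divided_solutions)  # [3]
```

The safe move is to split into cases ($x = 0$ and $x \neq 0$) or to factor, $x(x - 3) = 0$, rather than divide.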