This definition is where the 'if and only if' expression comes from, since

Q implies P

can be phrased as P if Q, and

P implies Q

can be phrased as P only if Q.

To see that this matches the previous definition, notice that P implies Q is False when P is True and Q is False, while Q implies P is False when Q is True and P is False. So the expression (P implies Q) and (Q implies P) can only be True when P and Q are both True or both False.
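This claim is small enough to check exhaustively. Here is a brute-force truth-table check (in Python, not part of the original text) that the conjunction of the two implications agrees with the biconditional in every case:

```python
# Truth-table check: (P implies Q) and (Q implies P) matches (P iff Q).
# Material implication: P implies Q is False only when P is True and Q is False.
def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        both_ways = implies(p, q) and implies(q, p)
        iff = (p == q)  # P iff Q holds exactly when P and Q have the same value
        assert both_ways == iff
```

Since the loop covers all four truth assignments, passing it is a complete proof of the equivalence by cases.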

By combining this definition with the other rules of inference we get the following:

If by making the assumption P one can derive Q, and if by making the assumption Q one can derive P, then deduce P iff Q.

In this case we already have P or Q implies Q or P as Prop. 1 on the previous page. By switching the letters Q with P in the proposition we get Q or P implies P or Q. So the proof is just a matter of putting these two previous results together.
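The two directions being combined here can also be verified mechanically. A small Python check (illustrative, not from the text) confirms both implications hold for every truth assignment:

```python
# Verify both halves of (P or Q) iff (Q or P) by truth table.
def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        assert implies(p or q, q or p)  # Prop. 1: P or Q implies Q or P
        assert implies(q or p, p or q)  # letters switched: Q or P implies P or Q
```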

At this point we've broken the proof into parts which can be tackled separately. In the first case we need to prove an implication, and it seems easiest to do this the usual way. In the second case we need to prove a disjunction, and there are two ways of doing this. It looks like the easier way is to assume not P and prove Q. Now that we have several ways of proving things, it may be that some ways are simpler than others, but without enough experience to get a feel for it you may have to use trial and error to find the simplest way. Keep in mind, though, that when you see a proof in a book or article, the author may have gone through quite a few failed attempts before finding a proof that worked; you just don't get to see the failures.

(To save space, we're constructing both halves of the proof at once; normally you'd do one at a time.)

In the first half we need to prove Q, and it appears that the best way to do this is using the method of contradiction. For the second half we already have enough results from previous pages to fill in the rest. As usual, we encourage you to try to fill in the rest yourself before looking at the final result. By different methods you may even find a simpler proof than the one given.

We've now covered nearly all of the commonly used rules of inference, so there is no shortage of statements that can be proved now. Some of the following are relatively trivial extensions of previous results, and some will require one or more subproofs, but it's up to you to figure out which is which.

In some cases a theorem may state that a group of several statements are equivalent to each other. For example, the statement of the theorem might be in the form:

Theorem: The following are equivalent:

1. Statement 1

2. Statement 2

3. Statement 3

4. Statement 4

This says Statement 1 iff Statement 2, Statement 1 iff Statement 3, ..., Statement 3 iff Statement 4, covering all 6 possible pairings. The usual way to prove this type of theorem is to prove implications in a cycle. In this case you would prove:

P1: Statement 1 implies Statement 2

P2: Statement 2 implies Statement 3

P3: Statement 3 implies Statement 4

P4: Statement 4 implies Statement 1

This is very efficient since by proving just four implications you've in effect proven all 12 possible implications between any two of the four statements. The reason this works is by repeated application of the transitivity of implication: if P implies Q and Q implies R, then P implies R.
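The efficiency claim can be checked by brute force. The following Python sketch (an illustration, not part of the original text) enumerates all truth assignments and confirms that whenever the four implications in the cycle hold, the four statements must share a single truth value, which is exactly what mutual equivalence means:

```python
# If S1 -> S2, S2 -> S3, S3 -> S4 and S4 -> S1 all hold,
# then the four statements are pairwise equivalent.
from itertools import product

def implies(p, q):
    return (not p) or q

for s1, s2, s3, s4 in product((True, False), repeat=4):
    cycle = (implies(s1, s2) and implies(s2, s3)
             and implies(s3, s4) and implies(s4, s1))
    if cycle:
        # all statements agree, so every one of the 6 pairings is an equivalence
        assert s1 == s2 == s3 == s4
```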

When two statements are logically equivalent they have the same truth value. So it seems reasonable to claim that if P is equivalent to Q and E(P) is some expression involving P, then E(P) is equivalent to E(Q), where E(Q) is obtained from E(P) by replacing the statement P by Q.

As an example of how this might be applied, we have from Prop. 15 above that P is equivalent to not not P. It would be nice to conclude then that E(P) is equivalent to E(not not P).

This is a valid conclusion, but the tools to justify it belong to the realm of mathematical logic and are outside the scope of this book. We can give proofs on a case by case basis though, and some examples serve to demonstrate how these proofs can be constructed. If the expression E(P) involves only implication and the logical constant False, then we can repeatedly apply propositions 13 and 14 above.
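For a concrete instance, take E(P) to be "P implies False" (an expression built only from implication and the constant False, i.e. "not P"). A small Python check (the choice of E is our illustration, not from the text) confirms that substituting not not P for P leaves the value of E unchanged:

```python
# Substitution of equivalents, checked for one expression E:
# E(P) = (P implies False), and P is equivalent to not not P (Prop. 15),
# so E(P) should be equivalent to E(not not P).
def implies(p, q):
    return (not p) or q

def E(p):
    return implies(p, False)  # built from implication and False only

for p in (True, False):
    assert p == (not (not p))      # Prop. 15: P iff not not P
    assert E(p) == E(not (not p))  # hence E(P) iff E(not not P)
```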