Proof that if A is a subset of P(A), P(A) is a subset of P(P(A))

Attached is a proof that if A is a subset of P(A), then P(A) is a subset of P(P(A)). I would like to verify that the proof is correct. Specifically, I would like to know for sure that moving the universal quantifier from within the parentheses to outside of them is OK. Sorry to bore you guys with all this simple stuff!

In classical logic every formula is equivalent to a formula in prenex normal form (i.e., a formula consisting of a string of quantifiers followed by a quantifier-free part called the matrix). The Wikipedia article on prenex normal form details the conversions for disjunction, which is the case here. You have (x)(x not in A or (z)(x not in z or x in A)). This is equivalent to (x)(x not in A or (z)(x not in z) or x in A), which in turn is equivalent to (x)(z)(x not in A or x in A or x not in z), as you stated. However, I fail to see how this represents "A is a subset of the power set of A." In fact, it is just not true that A is a subset of its power set. Take a simple counterexample. Let A = {1, 2}, and let 0 denote the empty set. Then the power set of A is {0, {1}, {2}, A}. Now it is true that A is an *element* of the power set of A, but for A to be a *subset* of it, every element of A would have to be an element of the power set. That would be like saying A = {1, 2, {1}}, or A = {1, 2, A}, or A = {1, 2, 0}. Therefore, it is not true. Both the antecedent and the consequent of that implication are false. Classically, though, the material conditional still holds, because a false antecedent makes it vacuously true.
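The element-versus-subset distinction in this counterexample can be checked mechanically. Here is a quick sketch (the helper names are my own), modeling sets as Python frozensets and building the power set of A = {1, 2}:

```python
from itertools import chain, combinations

def powerset(s):
    """Return the power set of a frozenset, as a frozenset of frozensets."""
    return frozenset(
        frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1)
        )
    )

A = frozenset({1, 2})
PA = powerset(A)  # {0, {1}, {2}, A}, writing 0 for the empty set

print(A in PA)                  # A is an *element* of P(A): True
print(all(x in PA for x in A))  # A is a *subset* of P(A): False (1 and 2 are not in P(A))
```

Of course, a set satisfying the premise A ⊆ P(A) would have to look quite different from {1, 2}, which is where the rest of this thread ends up.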

I have to question, though, whether your proof was meant to be logical or mathematical. You approached it by asserting some logical statements. The mathematician, however, would follow a typical methodology. To prove the implication you first assume the antecedent. The goal is then to derive the consequent. To derive the consequent you need to show that P(A) is a subset of its power set. To show something is a subset, you arbitrarily choose an element, call it x, from P(A) and then demonstrate that it is also in P(P(A)). Since the choice was arbitrary, it could have been any element of P(A). Therefore, we can universally generalize this result to all x (universal quantifier introduction). This establishes the proof (whether or not the assumed antecedent was actually used in it). But as I said, the terms of this implication are false. You will not be able to show for any x in P(A) that it is also in P(P(A)).
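Schematically, the methodology just described has this shape (my rendering, with the middle step left open, since this post argues it cannot be completed):

```latex
\begin{array}{ll}
1. & \text{Assume } A \subseteq \mathcal{P}(A). \\
2. & \text{Let } x \in \mathcal{P}(A) \text{ be arbitrary.} \\
3. & \vdots \quad (\text{derive } x \in \mathcal{P}(\mathcal{P}(A))) \\
4. & \text{Since } x \text{ was arbitrary: } \forall x\,\bigl(x \in \mathcal{P}(A) \rightarrow x \in \mathcal{P}(\mathcal{P}(A))\bigr),
     \text{ i.e., } \mathcal{P}(A) \subseteq \mathcal{P}(\mathcal{P}(A)).
\end{array}
```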


The premise was that A is a subset of P(A). Unless the premise itself contains a contradiction (which it doesn't), the premise can be assumed to be true. In fact, there are infinitely many sets A that are subsets of their own power sets (start with the empty set and keep taking its power set). Notice that we are not speaking about the universe of sets here.
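The first few stages of that chain are easy to spot-check by machine. A small sketch (helper names are mine), again modeling sets as frozensets, verifies A ⊆ P(A) at each stage starting from the empty set:

```python
from itertools import chain, combinations

def powerset(s):
    """Return the power set of a frozenset, as a frozenset of frozensets."""
    return frozenset(
        frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1)
        )
    )

A = frozenset()     # start with the empty set
for _ in range(3):  # check the first few stages of the chain
    PA = powerset(A)
    # every element of A is also an element of P(A), i.e., A is a subset of P(A)
    assert all(x in PA for x in A)
    A = PA
```

Each stage is a transitive set, which is exactly the property the thread settles on below.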

Originally Posted by bryangoodrich

[...] You will not be able to show for any x in p(A) that it is also in p(p(A)).

Actually, I do have a proof that demonstrates the conclusion given the premise (for one of the two cases). It's just that, halfway through working on the second case, I realized the proof could be given by revealing a simple tautology.

Man, I don't know what I was thinking; it was late last night. You're right. It is part of the definition of transitive sets that such sets are subsets of their power sets, and this defines a natural ordering on transitive sets. I still don't get the expressions in your paper, though. Over what are x and z ranging? Your quantification is ambiguous. I would assume either (1) that it is quantification over all sets, in which case you need to bound your quantifiers (e.g., (x)(x in A implies ...)), or (2) that you should make the domain explicit in your statement (e.g., (x in A)(...)). Without one of those, its proper interpretation is unclear.
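For what it's worth, spelled out in subset/power-set terms rather than prenex form, the inclusion conceded here takes only a few lines (my rendering):

```latex
\begin{array}{l}
\text{Assume } A \subseteq \mathcal{P}(A). \text{ Let } x \in \mathcal{P}(A) \text{ be arbitrary; then } x \subseteq A. \\
\text{Since } x \subseteq A \text{ and } A \subseteq \mathcal{P}(A), \text{ transitivity of } \subseteq \text{ gives } x \subseteq \mathcal{P}(A). \\
\text{Hence } x \in \mathcal{P}(\mathcal{P}(A)). \text{ As } x \text{ was arbitrary, } \mathcal{P}(A) \subseteq \mathcal{P}(\mathcal{P}(A)).
\end{array}
```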


Well, in defense of myself: while my proof may not be very standard, I would say it's just as valid (rigorous) and is another way of looking at the problem. The quantifiers introduce bound variables that can be used to describe the statements in (1) and (2) in precise logical terms. However, I have a tendency to overdo this and logically expand everything in a proof exercise, which often makes things harder than they should be.

Thanks for this. I really appreciate the breakdown and seeing the proof done in the way the author probably intended. One of my stumbling blocks thus far has been the tendency to break every high-level expression down into its logical form and go from there. I see that what you are doing here is moving back and forth between the higher-level expressions (such as subset and power set) and the lower-level logical operations. This approach certainly makes a lot more sense most of the time, and I'll have to start getting used to it.

I think if you were to do the proof in a first-order predicate logic, you should still proceed in the same fashion I did above. While you can certainly use transformation rules to rearrange quantifiers or move a formula into prenex normal form, you still have to instantiate the universal quantifiers. While my higher-level set-theoretic approach masks what is going on behind language like "let ... be arbitrarily chosen ...", the logical operations are still there. Research universal generalization and when it is appropriate. That is often the crux of such proofs, because what we're talking about is a property that has individual consequences (e.g., if x is in A then x is in B, whenever A is a subset of B). We wouldn't want to talk about this, nor could we even prove anything about it, without talking about individuals such as x. But the choice of x is arbitrary, so whatever constant it was, it could have been any of them. Thus, since the choice had no bearing on how x in A implies x in B, we can say it is true for any such choice of x in A. Then we can tack on the universal quantifier, and then we can claim subset containment, given how we define that property. In effect, I could take the proof outline above, and the proof that I did, and put them entirely into the terms of a first-order language of sets, or maybe a second-order predicate calculus.
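To make the "arbitrary x" step concrete on a finite example, here is a toy check (my own helper names) of both the premise and the conclusion for a small transitive set. Over a finite domain, the universal generalization collapses to an `all` over the elements:

```python
from itertools import chain, combinations

def powerset(s):
    """Return the power set of a frozenset, as a frozenset of frozensets."""
    return frozenset(
        frozenset(c) for c in chain.from_iterable(
            combinations(s, r) for r in range(len(s) + 1)
        )
    )

def is_subset(A, B):
    # "Choose x arbitrarily from A and show x is in B" -- on a finite
    # domain the universal quantifier is just `all` over A's elements.
    return all(x in B for x in A)

empty = frozenset()
A = frozenset({empty, frozenset({empty})})  # a transitive set: {0, {0}}

assert is_subset(A, powerset(A))                      # premise:    A ⊆ P(A)
assert is_subset(powerset(A), powerset(powerset(A)))  # conclusion: P(A) ⊆ P(P(A))
```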

You're correct. We usually don't belabor the details of logic, but that does not mean we shouldn't be aware of them, especially when we increase our formality. Ontolog was trying to go about the problem in terms of first-order expressions in the paper he attached. My point was merely that whether we go with looser mathematical expressions or strict logical ones, the fundamental ideas should be congruent, including important technicalities like universal generalization. For those who understand it, simply saying "let x be arbitrary" is sufficient. However, I know many math professors who will dock students for lacking rigor, too; I know some who will mark you down for using the wrong word for the universal quantifier ("any" versus "each" versus "every"). The point remains: as long as the ideas are accurate, the rest is just our communication of them.

Additionally, if you either have a proof in a formal language, or can fairly easily see that an argument could be formalized in some formal language, then you basically have a verification (to some degree, in the latter case) of the proof right there. The most precise way to verify an informal proof is to actually put it into a formal language, or at least to give an outline of how it could be formalized. If an argument merely seems convincing and we can't tell whether it can actually be formalized, then it might qualify as a sophism; after all, people convince themselves of wrong ideas all the time. Some of Euclid's "proofs" might qualify as examples here. Though formal proofs may get prolix, informal proofs can get so terse that they merely appear convincing, with no hint as to how they could be checked. Since Ontolog wanted *to verify* his proof, what BryanGoodrich did made a lot of sense.