Inference Rules:

Modus Ponens (mp): p → q and p imply q. (Example: If the day is Saturday, then we wash the car.
Today is Saturday. Therefore we wash the car. Common fallacy: affirming the consequent.
In this example: we washed the car, therefore today is Saturday. This is a
fallacy because we might wash the car on other days.)

Modus Tollens (mt): p → q and ¬q imply ¬p. (Example: If the day is Saturday, then we wash the car. We did not wash the car
today. Therefore the day is not Saturday. Common fallacy: denying the antecedent.
In this example: it is not Saturday, therefore we did not wash the car. This is a
fallacy because we might wash the car on other days.)
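Both rules, and both fallacies, can be checked mechanically by enumerating all truth assignments. A minimal sketch in Python (the helper names `implies` and `is_valid` are illustrative, not from any particular library):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

def is_valid(premises, conclusion):
    # An argument form is valid if the conclusion is true in every
    # truth assignment that makes all the premises true.
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False
    return True

# Modus ponens: p -> q, p, therefore q.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: p],
               lambda p, q: q))              # True (valid)

# Modus tollens: p -> q, not q, therefore not p.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: not q],
               lambda p, q: not p))          # True (valid)

# Affirming the consequent: p -> q, q, therefore p.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p))              # False (fallacy)

# Denying the antecedent: p -> q, not p, therefore not q.
print(is_valid([lambda p, q: implies(p, q), lambda p, q: not p],
               lambda p, q: not q))          # False (fallacy)
```

The counterexample for both fallacies is the assignment p false, q true: the car was washed (q) on a day that is not Saturday (¬p), yet the conditional p → q still holds.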

Valid Categorical Schemata

All the valid categorical schemata are listed here, combining the four moods
(A, E, I, O) with the four figures. All other forms are fallacies.

Figure I (AAA):
A: All M are P
A: All S are M
∴ A: All S are P

For Example:
All animals are mortal
All men are animals
All men are mortal

Figure II (EAE):
E: No P is M
A: All S is M
∴ E: No S is P

For Example:
No reptiles have fur
All snakes are reptiles
No snakes have fur

Figure III (IAI):
I: Some M are P
A: All M are S
∴ I: Some S are P

For Example:
Some mugs are beautiful
All mugs are useful things
Some useful things are beautiful

Figure IV (AEE):
A: All P are M
E: No M are S
∴ E: No S are P

For Example:
All horses have hooves
No humans have hooves
No humans are horses

Figure I (EAE):
E: No M are P
A: All S are M
∴ E: No S are P

Figure II (AEE):
A: All P is M
E: No S is M
∴ E: No S is P

Figure III (AII):
A: All M are P
I: Some M are S
∴ I: Some S are P

Figure IV (IAI):
I: Some P are M
A: All M are S
∴ I: Some S are P

Figure I (AII):
A: All M are P
I: Some S are M
∴ I: Some S are P

Figure II (EIO):
E: No P is M
I: Some S are M
∴ O: Some S are not P

Figure III (OAO):
O: Some M are not P
A: All M are S
∴ O: Some S are not P

Figure IV (EIO):
E: No P are M
I: Some M are S
∴ O: Some S are not P

Figure I (EIO):
E: No M are P
I: Some S are M
∴ O: Some S are not P

Figure II (AOO):
A: All P are M
O: Some S are not M
∴ O: Some S are not P

Figure III (EIO):
E: No M are P
I: Some M are S
∴ O: Some S are not P
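Any categorical schema can be tested by brute force: interpret S, M, and P as sets over a small universe and search for an assignment that makes both premises true and the conclusion false. Finding one refutes the form outright; finding none over a small universe is supporting evidence, not a proof, of validity. A sketch (all names are illustrative):

```python
from itertools import combinations

U = {0, 1, 2}                                 # a small universe of individuals

def subsets(u):
    s = list(u)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# The four proposition moods as set predicates.
def A(x, y): return x <= y                    # All x are y
def E(x, y): return not (x & y)               # No x is y
def I(x, y): return bool(x & y)               # Some x is y
def O(x, y): return bool(x - y)               # Some x is not y

def counterexample(prem1, prem2, concl):
    # Search for sets S, M, P making both premises true and the conclusion false.
    for S in subsets(U):
        for M in subsets(U):
            for P in subsets(U):
                if prem1(M, P, S) and prem2(M, P, S) and not concl(M, P, S):
                    return S, M, P
    return None

# Figure I, AAA: All M are P; All S are M; therefore All S are P.
print(counterexample(lambda M, P, S: A(M, P),
                     lambda M, P, S: A(S, M),
                     lambda M, P, S: A(S, P)))       # None: no refutation

# Invalid form: AAA in Figure II (All P are M; All S are M; All S are P).
print(counterexample(lambda M, P, S: A(P, M),
                     lambda M, P, S: A(S, M),
                     lambda M, P, S: A(S, P)))       # a refuting assignment
```

Note that this reading treats "All x are y" as vacuously true when x is empty, which matches the fifteen unconditional forms listed above; forms that depend on existential import would need an extra non-emptiness premise.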

Decision Strategies

Payoff Matrix:

You are free to choose between alternative actions A1 and A2. These may represent, for example, accepting or rejecting a job offer. The future state is unknown, but it is either S1 or S2; this may represent your liking the job or not liking it. The payoff (i.e., result, utility) of choosing alternative A1 if future state S1 occurs is shown in the corresponding cell (+100 in this case). The matrix can be of any size. You also have a preference among the possible outcomes and can rank them, for example: 1) the best, 2) good, 3) regretful, and 4) you really hate it. These rankings are indicated in parentheses in each cell.

        S1          S2
A1    +100 (1)   -250 (4)
A2    -200 (3)    +50 (2)

Decision Options:

Maximax gain (the most optimistic): Choose the alternative that allows the largest maximum possible gain. This is alternative A1 in this case, hoping for the payoff of +100.

Maximin gain: Choose the alternative that allows the largest minimum possible gain. Choose A2, because its minimum possible outcome (-200) is larger than A1's (-250).

Minimin loss:  Choose the alternative that allows the smallest minimum possible loss. Choose A2 because the loss of 200 is less than the loss of 250.

Minimax loss (the most pessimistic): Choose the alternative that allows the smallest maximum possible loss. Choose A2 because the worst that can happen is a loss of 200.

Laplace Utility Rule: Choose the alternative that has the maximum Laplace utility (the same as the next rule with all states assumed equally likely). Treating each state as equally likely, the Laplace utility of A1 is (100 - 250) / 2 = -75; for A2 it is (-200 + 50) / 2 = -75, so it is a tie.

Expected Utility Rule: Choose the alternative that has the maximum expected utility. Assume you believe S1 will occur with probability 60%. The expected utility of A1 is .6(100) + .4(-250) = -40; for A2 it is .6(-200) + .4(50) = -100, so choose A1.
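The rules above can all be computed directly from the payoff matrix. A minimal Python sketch using the numbers from the example (the minimin-loss rule is omitted because it depends on which payoffs one counts as losses):

```python
# Payoff matrix from the example: rows are alternatives, columns are states S1, S2.
payoff = {"A1": [100, -250], "A2": [-200, 50]}

# Maximax gain (optimistic): largest maximum payoff.
maximax = max(payoff, key=lambda a: max(payoff[a]))

# Maximin gain: largest minimum payoff.
maximin = max(payoff, key=lambda a: min(payoff[a]))

# Minimax loss (pessimistic): smallest maximum loss (loss = negated payoff).
minimax_loss = min(payoff, key=lambda a: max(-x for x in payoff[a]))

# Laplace utility: mean payoff, states treated as equally likely.
laplace = {a: sum(v) / len(v) for a, v in payoff.items()}

# Expected utility with P(S1) = 0.6, P(S2) = 0.4.
probs = [0.6, 0.4]
expected = {a: sum(p * x for p, x in zip(probs, v)) for a, v in payoff.items()}

print(maximax)        # A1 (hopes for +100)
print(maximin)        # A2 (-200 beats -250)
print(minimax_loss)   # A2 (worst loss 200 vs 250)
print(laplace)        # A1 and A2 tie at -75
print(expected)       # A1: -40, A2: -100, so choose A1
```

Because the rules disagree (A1 under maximax and expected utility, A2 under the pessimistic rules), the choice of rule encodes the decision maker's attitude toward risk.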

Causality

Mill's canons of induction

John Stuart Mill (May 20, 1806 – May 8, 1873), an English philosopher and
political economist, proposed these tests for causality:

a must cause Z, because:

whenever I see Z, I also find a (the method of agreement);

if I remove a, Z goes away (the method of difference);

Z is present whenever a is present and absent whenever a is absent (the joint method of agreement and difference);

if I change a, Z changes correspondingly (the method of concomitant variations);

if I remove the dominating effect of b on Z, the residual Z variations correlate with a (the method of residues).

In these notes the symbol: ∴ is used to mean "therefore".

Method of Agreement

If several different experiments yield the same result and these experiments
have only one factor (antecedent) in common, then that factor is the cause of
the observed result.

Symbolically:
abc → Z
cde → Z
cfg → Z
∴ c → Z

or:
abc → ZYX
cde → ZW
cfg → ZVUT
∴ c → Z
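The symbolic schema amounts to intersecting the antecedent sets of every experiment that produced Z. A minimal sketch (the antecedent labels match the schema above; the variable names are illustrative):

```python
from functools import reduce

# Antecedent sets of three experiments that all produced the effect Z.
experiments = [
    {"a", "b", "c"},    # abc -> Z
    {"c", "d", "e"},    # cde -> Z
    {"c", "f", "g"},    # cfg -> Z
]

# The method of agreement points to whatever antecedent(s)
# all Z-producing experiments share.
common = reduce(set.intersection, experiments)
print(common)   # {'c'}
```

If the intersection contains more than one antecedent, or if some shared factor went unrecorded, the method cannot single out the cause, which is exactly the weakness discussed below.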

The method of agreement is theoretically valid but pragmatically very weak,
for two reasons:

almost never can we be certain that the various experiments share only one
common factor. We can increase confidence in the technique by making the
experiments as different as possible (except of course for the common
antecedent), thereby minimizing the risk of an unidentified common variable; and

some effects can result from two independent causes, yet this method
assumes that only one cause is operative. If two or more independent causes
produce the same experimental result, the method of agreement will incorrectly
attribute the cause to any antecedent that coincidentally is present in all the
experiments. Sometimes the effect must be defined more specifically and
exclusively, so that different causes cannot produce the same effect.

Method of Difference

If a result is obtained when a certain factor is present but not when it is
absent, then that factor is causal.

Symbolically:
abc → Z
ab → ¬ Z
∴ c → Z

or:
abc → ZYXW
ab → YXW
∴ c → Z
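In set terms, the schema isolates the antecedent that is present only when the effect occurs, i.e. the set difference of the two experiments' antecedents. A minimal sketch (variable names are illustrative):

```python
# Two experiments made as similar as possible:
# the first produced Z, the second did not.
with_Z = {"a", "b", "c"}      # abc -> Z
without_Z = {"a", "b"}        # ab  -> not Z

# The method of difference points to the antecedent(s) present
# only when the effect occurred.
suspect = with_Z - without_Z
print(suspect)   # {'c'}
```

If the two experiments differ in more than the recorded antecedents, the difference set will miss the true cause, which is the pitfall discussed below.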

The method of difference is scientifically superior to the method of
agreement: it is much more feasible to make two experiments as similar as
possible (except for one variable) than to make them as different as possible
(except for one variable).

The method of difference has a crucial pitfall: no two experiments can ever
be identical in all respects except for the one under investigation. Thus one
risks attributing the effect to the wrong factor. Consequently, almost never is
the method of difference viable with only two experiments; instead one should do
many replicate measurements.

The method of difference is the basis of a powerful experimental technique:
the controlled experiment. In a controlled experiment, one repeats an experiment
many times, randomly including or excluding the possibly causal variable c .
Results are then separated into two groups (experiment and control, or
c-variable present and c-variable absent) and statistically compared. A
statistically significant difference between the two groups establishes that the
variable c does affect the results, unless:

the randomization was not truly random, permitting some other variable to
exert an influence; or