We find instances in which an increase in x (level of economic development) causes an increase in y (movement toward democracy) in some cases but does not have this effect in others, where y is produced by an entirely different set of causes.

2. Cause dependency on time

We find cases in which an increase in x (social democratic governance) is associated with an increase in y (social spending) at one point in time (the postwar period, for example) but not in another (the 1990s).

3. Same cause, different outcomes

We find instances in which an increase in x (social protest) causes an outcome y (government turnover) in some cases but an entirely different outcome (repression) in other cases.

4. Outcomes are the effects of various causes that depend on each other

We find instances in which an outcome y (successful wage coordination) depends on the value of many other variables - v (union density), w (social democratic governance), and x (social policy regime) - whose values are in turn jointly dependent on each other.

5. Circular causality

We find cases in which increases in x (support for democracy) increase y (the stability of democracy) and in which increases in y also tend to increase x.

Suppose that you make a simple statement like "ice is cold." If someone asks you "How do you know that ice is cold?", one appropriate answer is to cite some "way of knowing." Common "ways of knowing" include common sense, tradition, authority, religion, induction, and science.

Thus, when asked how you know that ice is cold, you could answer "Common sense - my experience with ice tells me that it is cold." The problem with this answer is that different people can have very different experiences. A native Antarctican, for example, might respond "You're wrong. Ice isn't cold. It's wet and solid." You could also answer "Tradition - it is a common premise of my culture," or "Authority," pointing to the definition of ice in a scientific dictionary or high-school textbook. You could say "Religion - this is a pillar of my religion!" You could also say "Induction - I made observations as to the relative warmth of different objects." Using science, you could test the hypothesis "all objects which are frozen are cold." While each way of knowing has its strengths and weaknesses, and all contribute to some forms of knowledge, only some of them are used in the scientific process.

C. Empiricism vs. Rationalism:

Although it is a false dichotomy in some respects - and an oversimplification in many respects - it is sometimes useful to think of empiricism and rationalism as opposing philosophical schools.

Empiricism: a philosophical doctrine holding that all real knowledge is based on experience. Although some form of empiricism can be traced back to Aristotle (at least), it is most closely associated with the English philosophers Locke and Bacon. In modern times, empiricism has become less dogmatic. Although the school now includes any philosophical system that is grounded in sensory experience, the closest descendants of Locke and Bacon are the logical positivists. The positivists claim descent from the empiricists; they argue that knowledge cannot be valid unless it is verified by experience (observation). The keystone of logical positivism - that nothing is true unless it has been verified empirically - has since been rejected by philosophers of science because, they argue, the principle of verification itself cannot be verified empirically. Take, for example, the positivist argument that the quantity and quality of knowledge can be increased by replication. To learn about gravity, one might drop an object from a tall building and observe its fall; confidence in one's understanding of gravity, expressed as a probability, grows in proportion to the number of times the experiment is replicated. Popper rejected this argument on two fundamental grounds. First, confidence in one's understanding of gravity, or any other phenomenon, cannot be expressed as a probability. Second, confidence in one's understanding grows only when an experiment fails - when the object falls upward; that is, we learn only from failures.

Rationalism: a method of inquiry that regards reason as the chief source and test of knowledge and, in contrast to empiricism, tends to discountenance sensory experience. It holds that, because reality itself has an inherently rational structure, there are truths - especially in logic and mathematics but also in ethics and metaphysics - that the intellect can grasp directly. In ethics, rationalism relies on a "natural light," and in theology it replaces supernatural revelation with reason.

The inspiration of rationalism has always been mathematics, and rationalists have stressed the superiority of the deductive over all other methods in point of certainty. According to the extreme rationalist doctrine, all the truths of physical science and even history could in principle be discovered by pure thinking and set forth as the consequences of self-evident premises. This view is opposed to the various systems which regard the mind as a tabula rasa (blank tablet) in which the outside world, as it were, imprints itself through the senses.

The opposition between rationalism and empiricism is, however, rarely so simple and direct, inasmuch as many thinkers have admitted both sensation and reflection. Locke, for example, is a rationalist in the weakest sense, holding that the materials of human knowledge (ideas) are supplied by sense experience or introspection, but that knowledge consists in seeing necessary connections between them, which is the function of reason.

Causality refers to the "way of knowing" that one thing causes another. Early philosophers concentrated on conceptual issues and questions; later philosophers concentrated on more concrete issues and questions. The change in emphasis from conceptual to concrete coincides with the rise of empiricism; Hume (1711-76) is probably the first philosopher to posit a wholly empirical definition of causality.
Of course, both the definition of "cause" and the "way of knowing" whether
X and Y are causally linked have changed significantly over time. Some
philosophers deny the existence of "cause," and some philosophers who accept its existence argue that it can never be known by empirical methods. Modern
scientists, on the other hand, define causality in limited contexts (e.g.,
in a controlled experiment).

Aristotle's Causality: Any discussion of causality
begins with Aristotle's Metaphysics. There Aristotle defined four
distinct types of cause: the material, formal, efficient, and final types.
To illustrate these definitions, think of a vase, made (originally) from
clay by a potter, as the "effect" of some "cause," Aristotle would say
that clay is the material cause of the vase. The vase's form (vs. some
other form that the clay might assume such as a bowl) is its formal cause.
The energy invested by the potter is its efficient cause. And finally,
the potter's intent is its final cause of the vase. Aristotle's final cause
involves a teleological explanation and virtually all modern scientists
reject teleology. Nevertheless, for Aristotle, all "effects" are purposeful;
every thing comes into existence for some purpose (telos). Modern
scientists may also find Aristotle's material and formal causes curious.
Can fuel "cause" a fire? Can a mold "cause" an ingot? On the other hand,
Aristotle's efficient cause is quite close to what physicists mean by the
phrase "X causes Y." Indeed, this causal type is ideally suited to modern science. An efficient cause ordinarily has an empirical correlate; for example, X is an event (usually a motion) producing another event, Y (usually
another motion). Lacking any similar empirical correlates, material, formal,
and (especially) final causes resist all attempts at empirical testing.

Galileo's Causality: Galileo was one of many Enlightenment
scientists who wrote explicitly about causality. Galileo viewed cause as
the set of necessary and sufficient conditions for an effect. If X and
Y are causes of Z, in other words, then Z will occur whenever both X and
Y occur; on the other hand, if only X or only Y occurs, then Z will not
occur. We can state this more succinctly as "If and only if both X and
Y occur, then Z occurs." There is one problem with Galileo's definition: the list of causes for any Z would have to include every factor that made even the slightest difference in Z. This list could be so long that it would be impossible to find something that was not a cause of Z. This makes it virtually impossible to test many causal hypotheses and, so, it makes Galileo's definition practically useless to scientists.
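Galileo's criterion can nevertheless be checked mechanically against a set of observations: a candidate set of conditions counts as the cause of Z only if Z occurs exactly when all the conditions occur. A minimal sketch, with invented observations (the function name and data are illustrative, not from the text):

```python
def is_galilean_cause(observations, conditions, outcome):
    """Galileo's criterion: the outcome occurs if and only if
    every listed condition occurs."""
    return all(
        obs[outcome] == all(obs[c] for c in conditions)
        for obs in observations
    )

# Invented observations: Z occurs exactly when X and Y both occur.
data = [
    {"X": True,  "Y": True,  "Z": True},
    {"X": True,  "Y": False, "Z": False},
    {"X": False, "Y": True,  "Z": False},
    {"X": False, "Y": False, "Z": False},
]

print(is_galilean_cause(data, ["X", "Y"], "Z"))  # True
print(is_galilean_cause(data, ["X"], "Z"))       # False
```

The check also illustrates the practical problem: to pass it, the list of conditions must include every factor that makes any difference in Z.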

Hume's Causality: David Hume's (1711-76) major
philosophical work, A Treatise of Human Nature, lays the foundation
for the modern view of causality. Hume rejected the existing rationalist
concept of cause, arguing that causality was not a real relationship between
two things but, rather, a perception. Accordingly, Hume's definition of causality emphasizes three elements that can be verified (albeit post facto) through observation. According to Hume, "X causes Y" if

(1) Precedence: X precedes Y in time.

(2) Contiguity: X and Y are contiguous in space and time.

(3) Constant Conjunction: X and Y always co-occur (or always fail to co-occur).

At first glance, Hume's definition seems foolproof. But consider the causal proposition that "day causes night." This proposition satisfies all three of Hume's criteria yet fails to satisfy our common expectation of causality. Day does not cause night, and this highlights a potential flaw in Hume's definition. Indeed, each of Hume's three criteria poses special problems for modern scientific method.
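The day/night counterexample can be made concrete by checking Hume's criteria on timestamped occurrences. A sketch with invented times (the function names and the 12-hour contiguity threshold are illustrative assumptions):

```python
# Times are invented hours; each day is paired with the night that
# follows it.  "Day causes night" passes all three checks,
# illustrating the flaw in the definition discussed above.

days = [0, 24, 48]      # hours at which day begins
nights = [12, 36, 60]   # hours at which night begins

def precedence(xs, ys):
    """Criterion 1: every X occurrence precedes its paired Y."""
    return all(x < y for x, y in zip(xs, ys))

def contiguity(xs, ys, max_gap=12):
    """Criterion 2 (time only): paired occurrences are close
    together (threshold is an arbitrary choice)."""
    return all(abs(y - x) <= max_gap for x, y in zip(xs, ys))

def constant_conjunction(xs, ys):
    """Criterion 3: X and Y always co-occur (equal numbers of
    occurrences, none unpaired)."""
    return len(xs) == len(ys)

print(precedence(days, nights),
      contiguity(days, nights),
      constant_conjunction(days, nights))  # True True True
```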

Mill's Causality: Unlike earlier philosophers who
concentrated on conceptual issues, John Stuart Mill concentrated on the
problems of operationalizing causality. Mill argued that causality could
not be demonstrated without experimentation. His four general methods for
establishing causation are (1) the method of concomitant variation ["Whatever
phenomenon varies in any manner, whenever another phenomenon varies in
some particular manner, is either a cause or an effect of that phenomenon,
or is connected with it through some fact of causation."]; (2) the method
of difference ["If an instance in which the phenomenon under investigation
occurs and an instance in which it does not occur, have every circumstance
in common save one, that one occurring in the former; the circumstances
in which alone the two instances differ, is the effect, or the cause, or
an indispensable part of the cause of the phenomena."]; (3) the method
of residues ["Subduct from any phenomena such part as is known by previous
inductions to be the effect of certain antecedents, and the residue of
the phenomena is the effect of the remaining antecedents."]; and (4) the
method of agreement ["If two or more instances of a phenomena under investigation
have only one circumstance in common, the circumstance in which alone all
the instances agree, is the cause (or effect) of the given phenomenon."].
All modern experimental designs are based on one or more of these methods.
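Two of Mill's methods lend themselves to a direct sketch. The method of difference compares two instances alike in every circumstance save one; the method of agreement looks for the single circumstance shared by all instances of the phenomenon. The functions and data below are invented illustrations, not Mill's own formalization:

```python
def method_of_difference(instance_with, instance_without):
    """Mill's method of difference: if two instances differ in the
    outcome and in exactly one circumstance, that circumstance is
    the cause (or an indispensable part of it)."""
    differing = [k for k in instance_with
                 if instance_with[k] != instance_without[k]]
    return differing[0] if len(differing) == 1 else None

def method_of_agreement(instances):
    """Mill's method of agreement: the circumstances common to all
    instances in which the phenomenon occurs."""
    return [k for k in instances[0]
            if all(inst.get(k) for inst in instances)]

# Invented instances: identical circumstances except for a spark.
fire = {"oxygen": True, "fuel": True, "spark": True}
no_fire = {"oxygen": True, "fuel": True, "spark": False}
print(method_of_difference(fire, no_fire))  # spark

# Invented strike instances agreeing only on low wages.
strikes = [
    {"low_wages": True, "large_plant": True},
    {"low_wages": True, "large_plant": False},
]
print(method_of_agreement(strikes))  # ['low_wages']
```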

Probabilistic Causality: One approach to the practical
problem posed by Hume's constant conjunction criterion is to make the criterion
probabilistic. If we let P(Y | X) denote the probability that Y will occur
given that X has occurred, then constant conjunction requires that

P(Y | X)=1 and P(Y | ~X)=0

where ~X indicates that X has not occurred. The problem, of course,
is that biological and social phenomena virtually never satisfy this criterion.
Probabilistic causalities address this problem by requiring only that the
occurrence of X make the occurrence of Y more probable. In the same notation,
if

P(Y | X) > P(Y | ~X)

then "X causes Y." While this makes the constant conjunction criterion
more practical, however, it raises other problems. To illustrate, suppose
that X has two effects, Y1 and Y2, and that Y1
precedes Y2. A widely used example is the atmospheric electrical event that causes lightning and thunder. Since we always see lightning (Y1) before we hear thunder (Y2), it appears that "lightning causes thunder." Indeed, Y1 and Y2 satisfy
the probabilistic criterion

P(Y2 | Y1) > P(Y2)

that we require of Y1=>Y2. But in fact, lightning does not cause thunder. The foremost proponent of probabilistic causality,
Patrick Suppes, solves this problem by requiring further that Y1
and Y2 have no common cause. As we discover at a later point,
research designs constitute a method for ruling out common causes.
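The lightning/thunder problem and Suppes' fix can be simulated. In the sketch below (all probabilities invented), a single atmospheric discharge deterministically produces both lightning and thunder; lightning raises the probability of thunder, yet conditioning on the common cause screens this off:

```python
import random

random.seed(0)

# Each trial records (discharge, lightning, thunder); the discharge
# is the common cause of the other two events.
trials = []
for _ in range(10000):
    discharge = random.random() < 0.3  # invented base rate
    trials.append((discharge, discharge, discharge))

def p(event, given=None):
    """Empirical (conditional) probability over the trials."""
    pool = [t for t in trials if given is None or given(t)]
    return sum(event(t) for t in pool) / len(pool)

# Probability-raising criterion: P(thunder | lightning) > P(thunder).
print(p(lambda t: t[2], lambda t: t[1]))  # 1.0
print(p(lambda t: t[2]))                  # roughly 0.3

# Suppes' fix: given the common cause, lightning no longer raises
# the probability of thunder.
print(p(lambda t: t[2], lambda t: t[0] and t[1]) ==
      p(lambda t: t[2], lambda t: t[0]))  # True
```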

Design as Operational Causality: The history of
causality can be broken into two eras. The first era begins with Aristotle
and ends with Hume. The second era begins with John Stuart Mill and continues
today. The difference between Hume and Mill may be unclear; after all,
both were orthodox empiricists. But while Hume and Mill had much in common,
Hume's causality was largely conceptual. Little attention was paid to the
practical problem of implementing the concepts. Mill, on the other hand,
described exactly how working scientists could implement (or operationalize)
his causality. The most influential modern philosophers have followed Mill's
example. Although the field of (experimental) design often deals with causality
only implicitly, we can think of design as operationalized causality.

Rubin Causality: Many proposed causalities work
well in one context (or appear to, at least) but not in another. To solve
this problem, some modern philosophers have tried to limit their causalities
to specific contexts, circumstances, or conditions. Accordingly, Rubin
causality (named for Donald B. Rubin) is defined in the limited context
of an experimental milieu. Under Rubin causality, any relationship
demonstrated in an experiment (where the units of analysis are randomly
assigned to experimental and control groups) is a valid causal relationship;
any relationship that cannot be demonstrated in an experiment is not causal.
To illustrate, suppose that we want to measure the effectiveness of a putative
anti-bacterial soap. We apply the soap to a single bacterium. If the bacterium
dies, the soap works. But even if the bacterium dies, we still have this problem:
sooner or later, all bacteria die; maybe this one died of natural
causes. We eliminate this (and every other alternative hypothesis) by showing
that a placebo treatment does not kill the bacterium. But since the bacterium
is already dead, how is this possible? The fundamental dilemma of
causality, according to Rubin, is that, if we use an experimental unit
(a bacterium, e.g.) to show that "X causes Y," we cannot use that
same unit to show that some "non-X does not cause Y." We solve this dilemma
by assuming that all units are more or less the same. This allows us to
treat one bacterium with the anti-bacterial soap and another with a placebo.
To make sure that the two bacteria are virtually indistinguishable, however,
we randomly assign the bacteria to the soap and placebo. Since random assignment is infeasible in some situations, Rubin causality holds that some variables (e.g., "race") cannot be causes.
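Rubin's solution can be sketched as a simulated randomized experiment: since no single unit can receive both the soap and the placebo, many (assumed exchangeable) units are randomly assigned to one or the other, and the difference in group means estimates the average causal effect. All quantities below are invented for illustration:

```python
import random

random.seed(42)

# 1000 invented bacteria, randomly split into treatment and control.
units = list(range(1000))
random.shuffle(units)
treatment, control = units[:500], units[500:]

def dies(treated):
    """Invented potential outcome: the soap kills with probability
    0.9, the placebo with probability 0.1."""
    return random.random() < (0.9 if treated else 0.1)

treated_rate = sum(dies(True) for _ in treatment) / len(treatment)
control_rate = sum(dies(False) for _ in control) / len(control)

# Difference in group means = estimated average causal effect.
print(round(treated_rate - control_rate, 2))  # roughly 0.8
```

Random assignment is what licenses the comparison: it makes the two groups virtually indistinguishable except for the treatment, ruling out alternative explanations such as death from natural causes.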

E. Necessity, Sufficiency and Causal Complexity

The main argument of this part of the unit is that researchers interested in diversity, especially as it is manifested in causal complexity, should avoid, as much as possible, making simplifying assumptions about the nature of causation. Specifically, they should avoid assuming that the individual causes they examine are either necessary or sufficient for the outcomes they study.

While the case study, almost by definition, offers little basis for causal generalization, it has the advantage of providing the investigator with intensive knowledge of a case and its history and thus a more in-depth view of causation. Case-study researchers are able to triangulate different kinds of evidence from a variety of different sources in their attempt to construct a full and compelling representation of causation. In short, case studies maximize validity in the investigation of causal processes.

While the case study is an excellent research strategy for studying how something comes about, it does not provide a good basis for assessing the generality or the nature of the causation the researcher identifies. For example, there is no way to tell whether the causes are either necessary or sufficient for the outcome in question. Comparative analysis, however, may be more useful here.

Yet social scientists have been slow to recognize that different analytic strategies are relevant to the assessment of different kinds of causes. At the most basic level, it is important to recognize that the study of necessity works backwards from instances of an outcome and searches for common antecedent conditions. The study of sufficiency, by contrast, works forward from instances of a causal condition to see if these instances agree in displaying the outcome.

If, indeed, the most common form of social causation involves causes that are neither necessary nor sufficient, then the cross-tabulation of a single cause with the outcome in question provides little useful information. Instead, as I show in the next section, the investigator should cross-tabulate the outcome against combinations of causes. This shift in analytical strategy is the first step on the road to the analysis of causal complexity, defined here as a situation where no single cause is either necessary or sufficient.

Types of Causes and Assumptions about Social Complexity

Necessary and Sufficient:
1. The greatest empirical scope (such causes apply to all relevant instances).
2. The greatest empirical power (because the cause by itself produces the outcome).
3. However, such causes are rare.
This cell embodies the assumption of social simplicity.
Example: technology ----> strike

Necessary but Not Sufficient:
1. Great empirical scope.
2. Lack of empirical power (because such causes work only in conjunction with other causes).
Example: economy*wage ----> strike

Sufficient but Not Necessary:
Causes here are powerful because they can act alone to bring about an outcome, but their empirical scope is limited because there are other causes that also produce the same outcomes.
Example: sourcingout + wage ----> strike

Neither Necessary nor Sufficient:
Causes are limited both in empirical scope and power, because they cannot produce the outcome on their own, nor are they always present as antecedent conditions. This is the most complex type of causation and the most prevalent.
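The two analytic strategies described above can be sketched directly: necessity works backwards from instances of the outcome, sufficiency forwards from instances of the cause, and combinations of causes can be cross-tabulated as a single condition. The case data below are invented for illustration:

```python
def necessary(cases, cause, outcome):
    """Backwards: every instance of the outcome displays the cause."""
    return all(c[cause] for c in cases if c[outcome])

def sufficient(cases, cause, outcome):
    """Forwards: every instance of the cause displays the outcome."""
    return all(c[outcome] for c in cases if c[cause])

# Invented strike data: a strike occurs only when a weak economy
# AND low wages are both present (a combination of causes).
cases = [
    {"economy": True,  "wage": True,  "strike": True},
    {"economy": True,  "wage": False, "strike": False},
    {"economy": False, "wage": True,  "strike": False},
    {"economy": False, "wage": False, "strike": False},
]

for cause in ("economy", "wage"):
    print(cause, necessary(cases, cause, "strike"),
          sufficient(cases, cause, "strike"))
# Each single cause is necessary but not sufficient.

# Cross-tabulating the combination as one condition:
combo_cases = [dict(c, combo=c["economy"] and c["wage"]) for c in cases]
print(necessary(combo_cases, "combo", "strike"),
      sufficient(combo_cases, "combo", "strike"))  # True True
```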