Risk

Since the 1970s, studies of risk have grown into a major
interdisciplinary field of research. Although relatively few
philosophers have focused their work on risk, there are important
connections between risk studies and several philosophical
subdisciplines. This entry summarizes the most well-developed of these
connections and introduces some of the major topics in the philosophy
of risk. It consists of six sections dealing with the definition of
risk and with treatments of risk related to epistemology, the
philosophy of science, the philosophy of technology, ethics, and the
philosophy of economics.

1. Defining Risk

In non-technical contexts, the word “risk” refers, often
rather vaguely, to situations in which it is possible but not certain
that some undesirable event will occur. In technical contexts, the
word has several more specialized uses and meanings. Five of these are
particularly important since they are widely used across
disciplines:

1. risk = an unwanted event which may or may not
occur.

An example of this usage is: “Lung cancer is one of the major
risks that affect smokers.”

2. risk = the cause of an unwanted event which may
or may not occur.

An example of this usage is: “Smoking is by far the most
important health risk in industrialized countries.” (The
unwanted event implicitly referred to here is a disease caused by
smoking.) Both (1) and (2) are qualitative senses of risk. The word
also has quantitative senses, of which the following is the oldest
one:

3. risk = the probability of an unwanted event
which may or may not occur.

This usage is exemplified by the following statement: “The risk
that a smoker's life is shortened by a smoking-related disease
is about 50%.”

4. risk = the statistical expectation value of an unwanted
event which may or may not occur.

The expectation value of a possible negative event is the product of
its probability and some measure of its severity. It is common to use
the number of killed persons as a measure of the severity of an
accident. With this measure of severity, the “risk” (in
sense 4) associated with a potential accident is equal to the
statistically expected number of deaths. Other measures of severity
give rise to other measures of risk.
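
As a minimal numerical sketch of this definition (both figures below
are invented for illustration, not taken from any risk analysis):

```python
# Risk in sense 4: probability times a measure of severity.
# Both numbers are invented assumptions, used only for illustration.
p_accident = 1e-4        # assumed probability that the accident occurs in a year
deaths_if_accident = 50  # assumed severity, measured in fatalities

risk = p_accident * deaths_if_accident
print(risk)  # 0.005 statistically expected deaths per year
```

With a different measure of severity (say, monetary damage), the same
multiplication yields a different measure of risk.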

Although expectation values have been calculated since the
17th century, the use of the term “risk” in
this sense is relatively new. It was introduced into risk analysis in
the influential Reactor Safety Study, WASH-1400 (Rasmussen et al.
1975). Today it is the standard technical meaning of the term
“risk” in many disciplines. It is regarded by some risk
analysts as the only correct usage of the term.

5. risk = the fact that a decision is made under conditions
of known probabilities (“decision under risk” as
opposed to “decision under uncertainty”)

In addition to these five common meanings of “risk”
there are several other more technical meanings, which are
well-established in specialized fields of inquiry. Some of the major
definitions of risk that are used in economic analysis will be
introduced below in section 6.

Although most of the above-mentioned meanings of “risk”
have been referred to by philosophers, a large part of the
philosophical literature on risk refers to risk in the more informal
sense that was mentioned at the beginning of this section, namely as a
state of affairs in which an undesirable event may or may not occur.

Terminological note: Some philosophers distinguish between
“subjective” and “objective”
probabilities. Others reserve the term “probability” for
the subjective notion. Here, the former terminology is used,
i.e. “probability” can refer either to subjective
probability or to objective chances.

2. Epistemology of Risk

When there is a risk, there must be something that is unknown or
has an unknown outcome. Therefore, knowledge about risk is knowledge
about lack of knowledge. This combination of knowledge and lack
thereof contributes to making issues of risk complicated from an
epistemological point of view.

In non-regimented usage, “risk” and
“uncertainty” differ along the subjective—objective
dimension. Whereas “uncertainty” seems to belong to the
subjective realm, “risk” has a strong objective component.
If a person does not know whether or not the grass snake is poisonous,
then she is in a state of uncertainty with respect to its ability to
poison her. However, since this species is not poisonous, there is no
risk of being poisoned by it. The relationship between the two concepts
“risk” and “uncertainty” seems to be in part
analogous to that between “truth” and
“belief”.

Regimented decision-theoretical usage differs from this. In decision
theory, a decision is said to be made “under risk” if the
relevant probabilities are available and “under
uncertainty” if they are unavailable or only partially
available. Partially determined probabilities are sometimes expressed
with probability intervals, e.g., “the probability of rain
tomorrow is between 0.1 and 0.4”. (The term “decision
under ignorance” is sometimes used for the case in which no
probabilistic information at all is available.)
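
The interval representation can be made concrete with a small sketch
(the loss figure is an invented assumption):

```python
# A partially determined probability given as an interval, using the
# rain example from the text. The loss figure is an invented assumption.
p_rain_low, p_rain_high = 0.1, 0.4
loss_if_rain = 200.0  # assumed loss if it rains and no precautions are taken

bounds = (p_rain_low * loss_if_rain, p_rain_high * loss_if_rain)
print(bounds)  # (20.0, 80.0): bounds on the expected loss, not a point estimate
```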

Although this distinction between risk and uncertainty is
decision-theoretically useful, from an epistemological point of view it
is in need of clarification. Only very rarely are probabilities known
with certainty. Strictly speaking, the only clear-cut cases of
“risk” (known probabilities) seem to be idealized textbook
cases that refer to devices such as dice or coins that are supposed to
be known with certainty to be fair. In real-life situations, even if we
act upon a determinate probability estimate, we are not fully certain
that this estimate is exactly correct, hence there is uncertainty. It
follows that almost all decisions are made “under
uncertainty”. If a decision problem is treated as a decision
“under risk”, this does not mean that the decision in
question is made under conditions of completely known probabilities.
Rather, it means that a choice has been made to simplify the
description of this decision problem by treating it as a case of known
probabilities. This is often a highly useful idealization in decision
theory. However, in practical applications it is important to
distinguish between those probabilities that can be treated as known
and those that are uncertain and therefore much more in need of
continuous updating. Typical examples of the former are the failure
frequencies of a technical component that are inferred from extensive
and well-documented experience of its use. The latter case is
exemplified by experts' estimates of the expected failure
frequencies of a new type of component.

A major problem in the epistemology of risk is how to deal with the
severe limitations that characterize our knowledge of the behaviour of
unique complex systems that are essential for estimates of risk, such
as the climate system, ecosystems, the world economy, etc. Each of
these systems contains so many components and potential interactions
that it is in practice unpredictable. However, in spite of this
fundamental uncertainty, meaningful statements about some aspects of
these systems can be made. The epistemological status of such
statements, and the nature of the uncertainty involved, are in need of
clarification.

In the risk sciences, it is common to distinguish between
“objective risk” and “subjective risk”. The
former concept is in principle fairly unproblematic since it refers to
a frequentist interpretation of probability. The latter concept is
more ambiguous. In the early psychometric literature on risk (from the
1970s), subjective risk was often conceived as a subjective estimate
of objective risk. In more recent literature, a more complex picture
has emerged. Subjective appraisals of (the severity of) risk depend to
a large extent on factors that are not covered in traditional measures
of objective risk (such as control and tampering with nature). If the
terms are taken in this sense, subjective risk is influenced by the
subjective estimate of objective risk, but cannot be identified with
it. In the psychological literature, subjective risk is often
conceived as the individual's overall assessment of the seriousness of
a danger or alleged danger. Such individual assessments are commonly
called “risk perception”, but strictly speaking the term
is misleading. This is not a matter of perception, but rather a matter
of attitudes and expectations. Subjective risk can be studied with
methods of attitude measurement and psychological scaling
(Sjöberg 2004).

3. Philosophy of Science

Issues of risk have given rise to heated debates on what levels of
scientific evidence are needed for policy decisions. The proof
standards of science are apt to cause difficulties whenever science is
applied to practical problems that require standards of proof or
evidence other than those of science.

Two major types of errors are possible in a decision whether or not to
accept a scientific hypothesis. The first of these consists in
concluding that there is a phenomenon or an effect when in fact there
is none. This is called an error of type I (false positive). The
second consists in missing an existing phenomenon or effect. This is
called an error of type II (false negative). In the internal dealings
of science, errors of type I are in general regarded as more
problematic than those of type II. The common scientific standards of
statistical significance substantially reduce the risk of type I
errors but do not protect against type II errors.
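
The asymmetry can be illustrated with a rough numerical sketch (a
one-sided z-test with an invented effect size, not an example from the
risk literature):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

alpha = 0.05    # conventional significance level: the accepted type I error rate
z_crit = 1.645  # one-sided critical value corresponding to alpha = 0.05

# Assume (invented) that the true effect is 2 standard errors. The type II
# error rate is the probability of missing it, which the significance
# level alone does nothing to control.
true_effect = 2.0
beta = phi(z_crit - true_effect)
print(round(beta, 3))  # 0.361: roughly a one-in-three chance of a false negative
```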

Many controversies on risk assessment concern the balance between
risks of type I and type II errors. Whereas science gives higher
priority to avoiding type I errors than to avoiding type II errors,
the balance can shift when errors have practical consequences. This
can be seen from a case in which it is uncertain whether there is a
serious defect in an airplane engine. A type II error, i.e., acting as
if there were no such defect when there is one, would in this case
be counted as more serious than a type I error, i.e., acting as if
there were such a defect when there is none. (The distinction between
type I and type II errors depends on the delimitation of the effect
under study. In discussions of risk, this delimitation is mostly
uncontroversial.)

In this particular case it is fairly uncontroversial that avoidance of
type II error should be given priority over avoidance of type I
error. In other words, it is better to delay the flight and then find
out that the engine was in good shape than to fly with an engine that
turns out to malfunction. However, in other cases the balance between
the two error types is more controversial. Controversies are common,
for instance, over what degree of evidence should be required for
actions against possible negative effects of chemical substances on
human health and the environment.

Figure 1. The use of scientific data for policy purposes.

Such controversies can be clarified with the help of a simple but
illustrative model of how scientific data influence both scientific
judgments and practical decisions. Scientific knowledge begins with
data that originate in experiments and other observations. (See Figure
1.) Through a process of critical assessment, these data give rise to
the scientific corpus (arrow 1). Roughly speaking, the corpus consists
of those statements that could, for the time being, legitimately be
made without reservation in a (sufficiently detailed) textbook. When
determining whether or not a scientific hypothesis should be accepted
for the time being as part of the corpus, the onus of proof falls on
its adherents. Similarly, those who claim the existence of an as yet
unproven phenomenon have the burden of proof. These proof standards
are essential for the integrity of science.

The most obvious way to use scientific information for policy-making
is to employ information from the corpus (arrow 2). For many purposes,
this is the only sensible thing to do. However, in risk management
decisions exclusive reliance on the corpus may have unwanted
consequences. Suppose that toxicological investigations are performed
on a substance that has not previously been studied from a
toxicological point of view. These investigations turn out to be
inconclusive. They give rise to science-based suspicions that the
substance is dangerous to human health, but they do not amount to full
scientific proof in the matter. Since the evidence is not sufficient
to warrant an addition to the scientific corpus, this information
cannot influence policies in the standard way (via arrows 1 and
2). There is a strict requirement to avoid type I errors in the
process represented by arrow 1, and this process filters out
information that might in this case have been practically relevant and
motivated certain protective measures.

In cases like this, a direct road from data to policies is often taken
(arrow 3). This means that a balance between type I and type II errors
is determined in the particular case, based on practical
considerations, rather than relying on the standard scientific
procedure with its strong emphasis on the avoidance of type I
errors.

It is essential to distinguish here between two kinds of
risk-related decision processes. One consists in determining which
statements about risks should be included in the scientific corpus. The
other consists in determining how risk-related information should
influence practical measures to protect health and the environment. It
would be a strange coincidence if the criteria of evidence in these two
types of decisions were always the same. Strong reasons can be given
for strict standards of proof in science, i.e. high entry requirements
for the corpus. At the same time, there can be valid policy reasons to
allow risk management decisions to be influenced by sound scientific
indications of danger that are not yet sufficiently well established to
qualify for inclusion into the scientific corpus.

In a framework of expected utilities, the balance between type I and
type II errors can be clarified with the aid of numerical estimates of
the seriousness of the different types of errors. Let \(P_I\) denote
the probability of a type I error and \(L_I\) the expected loss from
such an error (acting as if there were a danger when there is none).
Similarly, let \(P_{II}\) denote the probability of a type II error and
\(L_{II}\) the expected loss from that type of error (acting as if
there were no danger when there is indeed one). Then, in an expected
utility framework, one should act as if the danger exists if
\(P_{II} \times L_{II}\) is greater than \(P_I \times L_I\), but not if
\(P_I \times L_I\) is greater than \(P_{II} \times L_{II}\).
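
A minimal sketch of this decision rule (the function and all numbers
are invented for illustration):

```python
def act_as_if_danger_exists(p1, l1, p2, l2):
    """True if the expected loss of a type II error exceeds that of a type I error."""
    return p2 * l2 > p1 * l1

# Invented numbers: a 5% chance of needlessly grounding a plane (loss 1)
# versus a 1% chance of flying with a defective engine (loss 1000).
print(act_as_if_danger_exists(0.05, 1.0, 0.01, 1000.0))  # True: treat the danger as real
```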

4. Philosophy of Technology

Safety and the avoidance of risk are major concerns in practical
engineering. Safety engineering has also increasingly become the
subject of academic investigations. However, these discussions are
largely fragmented between different areas of technology. The same
basic ideas or “safety philosophies” are discussed under
different names for instance in chemical, nuclear, and electrical
engineering. Nevertheless, much of the basic thinking seems to be the
same in the different areas of safety engineering (Möller and
Hansson 2008).

Simple safety principles, often expressible as rules of thumb, have a
central role in safety engineering. Three of the most important of
these are inherent safety, safety factors, and multiple barriers.

Inherent safety, also called primary prevention, consists in the
elimination of a hazard. It is contrasted with secondary prevention
that consists in reducing the risk associated with a hazard. For a
simple example, consider a process in which inflammable materials are
used. Inherent safety would consist in replacing them by
non-inflammable materials. Secondary prevention would consist in
removing or isolating sources of ignition and/or installing
fire-extinguishing equipment. As this example shows, secondary
prevention usually involves added-on safety equipment. The major
reason to prefer inherent safety to secondary prevention is that as
long as the hazard still exists, it can be realized by some
unanticipated triggering event. Even with the best of control
measures, if inflammable materials are present, some unforeseen chain
of events can start a fire.

Safety factors are numerical factors employed in the design process
for our houses, tools, etc., in order to ensure that
our constructions are stronger than the bare minimum expected to be
required for their functions. Elaborate systems of safety factors
have been specified in norms and standards (Clausen et
al. 2006). A safety factor most commonly refers to the ratio
between a measure of the maximal load not leading to the specified
type of failure and a corresponding measure of the maximal expected
load. It is common to make bridges and other constructions strong
enough to withstand twice or three times the predicted maximal
load. This means that a safety factor of two or three is employed.
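
In code, the calculation is a simple ratio (both load figures are
invented):

```python
# A safety factor as the ratio between the maximal load not leading to
# failure and the maximal expected load. Figures are invented assumptions.
max_expected_load = 40.0  # tonnes: assumed maximal expected load on a bridge
failure_load = 120.0      # tonnes: assumed maximal load not leading to failure

safety_factor = failure_load / max_expected_load
print(safety_factor)  # 3.0: the construction bears three times the expected load
```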

Multiple safety barriers are arranged in chains. Ideally, each barrier
is independent of its predecessors so that if the first fails, then
the second is still intact, etc. For example, in an ancient fortress,
if the enemy managed to pass the first wall, then additional layers
would protect the defending forces. Some engineering safety barriers
follow the same principle of concentric physical barriers. Others are
arranged serially in a functional rather than a spatial sense. One of
the lessons that engineers have learned from the Titanic
disaster is that improved construction of early barriers is not of
much help if it leads to neglect of the later barriers (in that case
lifeboats). The major problem in the construction of safety barriers
is to make them as independent of each other as possible. If two or
more barriers are sensitive to the same type of impact, then one and
the same destructive force can eliminate all of them at once.
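
The importance of independence can be shown with a small probabilistic
sketch (all failure probabilities are invented):

```python
# Three independent barriers, each failing with probability 0.01,
# fail together only with probability 0.01 ** 3.
p_barrier_fails = 0.01
print(p_barrier_fails ** 3)  # 1e-06 under full independence

# An invented common cause that disables every barrier at once
# dominates the overall failure probability.
p_common_cause = 0.001
p_total = p_common_cause + (1 - p_common_cause) * p_barrier_fails ** 3
print(round(p_total, 6))  # 0.001001: the extra barriers now help very little
```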

Inherent safety, safety factors, and multiple barriers have an
important common feature: They all aim at protecting us not only
against risks that can be assigned meaningful probability estimates,
but also against dangers that cannot be probabilized, such as the
possibility that some unanticipated type of event gives rise to an
accident. It remains, however, for philosophers of technology to
investigate the principles underlying safety engineering in more
detail and to clarify how they relate to other principles of
engineering design (Doorn and Hansson 2011).

5. Ethics

Problems of risk have seldom been treated systematically in moral
philosophy. A possible defence of this limitation is that moral
philosophy can leave it to decision theory to analyse the complexities
that indeterminism and lack of knowledge give rise to in real life.
According to the conventional division of labour between the two
disciplines, moral philosophy provides assessments of human behaviour
in well-determined situations. Decision theory takes assessments of
these cases as given, adds the available probabilistic information,
and derives assessments for rational behaviour in an uncertain and
indeterministic world. On this view, no additional input of moral
values is needed to deal with indeterminism or lack of knowledge,
since decision theory operates exclusively with criteria of
rationality.

Examples are easily found that exhibit the problematic nature of this
division between the two disciplines. Compare the act of throwing down
a brick on a person from a high building to the act of throwing down a
brick from a high building without first making sure that there is
nobody beneath who can be hit by the brick. The moral difference
between these two acts is not obviously expressible in a probability
calculus. An ethical analysis of the difference will have to refer to
the moral aspects of risk-taking as compared to intentional ill-doing.
More generally speaking, a reasonably complete account of the ethics
of risk must distinguish between intentional and unintentional risk
exposure and between voluntary risk-taking, risks imposed on a person
who accepts them, and risks imposed on a person who does not accept
them. This cannot be done in a framework that treats risks as
probabilistic mixtures of outcomes (unless, of course, these outcomes
are so widely defined that they include all relevant moral
considerations, including the various mental states involved at
different points of times in the evaluated event).

Methods of moral analysis are needed that can guide decisions on
risk-takings and risk-impositions. A first step is to investigate how
standard moral theories can deal with problems of risk that are
presented in the same way as in decision theory, namely as the (moral)
evaluation of probabilistic mixtures of (deterministic) scenarios.

In utilitarian theory, there are two obvious, alternative
approaches to such problems. One is the actualist
solution. It consists in assigning to a (probabilistic) mixture of
potential outcomes a utility that is equal to the utility of the
outcome that actually materializes. To exemplify this approach,
consider a decision whether or not to reinforce a bridge before it is
used for a single, very heavy transport. There is a 50% risk
that the bridge will fall down if it is not reinforced. Suppose that a
decision is made not to reinforce the bridge and that everything goes
well; the bridge is not damaged. According to the actualist approach,
the decision was right. This is, of course, contrary to common moral
intuitions.

The other established utilitarian approach is the maximization of
expected utility. This means that the utility of a mixture of
potential outcomes is defined as the probability-weighted average of
the utilities of these outcomes.

The expected utility criterion has been criticized along several
lines. One criticism is that it disallows a common form of
cautiousness, namely disproportionate avoidance of large disasters.
For example, provided that human deaths are valued equally and
additively, this framework does not allow that one prefers a
probability of 1 in 1000 that one person will die to a probability of
1 in 100000 that fifty persons will die. The expected utility
framework can also be criticized for disallowing a common expression
of strivings for fairness, namely disproportionate avoidance of
high-probability risks for particular individuals. Hence, in the
choice between exposing one person to a probability of 0.9 to be
killed and exposing each of one hundred persons to a probability of
0.01 of being killed, it requires that the former alternative be
chosen. In summary, expected utility maximization prohibits what seem
to be morally reasonable standpoints on risk taking and risk
imposition.
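
The arithmetic behind these two examples is easily made explicit
(assuming, as in the text, that deaths are valued equally and
additively):

\[
\frac{1}{1000} \times 1 = 0.001 \;>\; \frac{1}{100\,000} \times 50 = 0.0005,
\]

so expected utility maximization forbids the cautious preference for
the first alternative, and

\[
0.9 \times 1 = 0.9 \;<\; 0.01 \times 100 = 1,
\]

so it requires that the risk be concentrated on the single person.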

However, it should be noted that the expected utility criterion does
not necessarily follow from utilitarianism. Utilitarianism in a wide
sense (Scanlon 1982) is compatible with other ways of evaluating
uncertain outcomes (most notably with actual consequence
utilitarianism, but in principle also with, for instance, a maximin
criterion). Therefore, criticism directed against expected utility
maximization does not necessarily show a defect in utilitarian
thinking.

The problem of dealing with risk in rights-based moral
theories was formulated by Robert Nozick: “Imposing how
slight a probability of a harm that violates someone's rights also
violates his rights?” (Nozick 1974, 7).

An extension of a rights-based moral theory to indeterministic cases
can be obtained by prescribing that if A has a right that
B does not bring about a certain outcome, then A
also has a right that B does not perform any action that (at
all) increases the probability of this outcome. Unfortunately, such a
strict extension of rights is untenable in social
practice. Presumably, A has the right not to be killed by
B, but it would not be reasonable to extend this right to all
actions by B that give rise to a very small increase in the
risk that A dies — such as driving a car in the town
where A lives. Such a strict interpretation would make human
society impossible.

Hence, a right not to be risk-exposed will have to be defeasible so
that it can be overridden when the increases in probability are
small. However, it remains to find a credible criterion for when it
should be overridden. As Nozick observed, a probability limit is not
credible in “a tradition which holds that stealing a penny or a
pin or anything from someone violates his rights. That tradition does
not select a threshold measure of harm as a lower limit, in
the case of harms certain to occur.” (Nozick 1974, 75)

The problem of dealing with risks in deontological theories
is similar to the corresponding problem in rights-based theories. The
duty not to harm other people can be extended to a duty not to perform
actions that increase their risk of being harmed. However, society as
we know it is not possible unless exceptions to this rule are accepted.
The determination of criteria for such exceptions is problematic in the
same way as for rights-based theories. All reasonable systems of moral
obligations will contain a fairly general prohibition against actions
that kill another person. Such a prohibition can (and should) be
extended to actions that involve a large risk that a person is killed.
However, it cannot be extended to all actions that lead to a small
increase in the risk that a person is killed, since in that case it
could not be allowed for instance to drive a car. A limit must be drawn
between reasonable and unreasonable impositions of risk. It seems as if
such a limit has to be based on concepts, such as probabilities, that
are not part of the internal resources of deontological theories.

Contract theories may appear somewhat more promising than
the theories discussed above. The criterion that they offer for the
deterministic case, namely consent among all those involved, can also
be applied to risky options. It could be claimed that risk impositions
are acceptable if and only if they are supported by a consensus. Such a
consensus, as conceived in contract theories, is either actual or
hypothetical.

Actual consensus is unrealistic in a complex society in which everyone
performs actions with marginal but additive effects on many people's
lives. According to the criterion of actual consensus, any local
citizen will have a veto against anyone else who wants to drive a car
in the town where she lives. In this way citizens can block each
other, creating a society of stalemates.

Hypothetical consensus has been developed as a criterion in contract
theory in order to deal with inter-individual problems. It does not
seem to be helpful in solving the problems of risk. For example, the
risks and uncertainties of real life are of a quite different nature
from the hypothetical uncertainty (or ignorance)
about one's own identity that is part of Rawls's initial
situation. The argumentation that Rawls used to obtain a solution for
these hypothetical uncertainties is based on features (such as the
impossibility of obtaining any form of probabilistic information) that do
not apply to real-world risks. This is not surprising since
uncertainties about social and natural risks are very different from
the hypothetical uncertainty in the original position about whom one
is representing. At least this variant of a hypothetical contract
situation does not have resources to deal with the moral appraisal of
risk.

In summary, the problem of appraising risks from a moral point of
view does not seem to have any satisfactory solution in established
moral theories. The following are three possible elements of a
solution:

1. It may be useful to shift the focus from risks, described
two-dimensionally in terms of probability and severity (or
one-dimensionally as the product of these) to actions of risk-taking
and risk-imposing. Such actions have many morally relevant properties
in addition to the two dimensions mentioned, such as who exposes whom
to a risk, issues of fairness and distribution, voluntariness,
etc.

2. Important moral intuitions are accounted for by assuming that each
person has a prima facie moral right not to be exposed to risk of
negative impact, such as damage to her health or her property, through
the actions of others. However, this is a prima facie right that has
to be overridden in quite a few cases, in order to make social life at
all possible. Therefore, the recognition of this right gives rise to
what can be called an exemption problem, namely the problem
of determining when it is rightfully overridden.

3. Part of the solution to the exemption problem may be obtained by
allowing for reciprocal exchanges of risks and benefits. Hence, if
A is allowed to drive a car, exposing B to certain
risks, then in exchange B is allowed to drive a car, exposing
A to the corresponding risks. In order to deal with the
complexities of modern society, this principle must also be applied to
exchanges of different types of risks and benefits. Exposure of a
person to a risk can then be regarded as acceptable if it is part of
an equitable social system of risk-taking that works to her
advantage. (Such a system can be required to contain mechanisms that
eliminate, or compensate for, social inequalities that are caused by
disease and disability.) (Hansson 2003)

6. Risk in Economic Analysis

Risks have a central role in economic activities. In capitalist market
economies, taking economic risks is an essential part of the role of
the entrepreneur. Decisions on investments and activities on financial
markets can only be understood against the background of the risks
involved. Therefore it is no surprise that modern economic theory,
with its emphasis on mathematical models of economic activities, has
developed several formal models of risk taking.

Portfolio analysis, which was developed in the 1950s by Harry
Markowitz, James Tobin and others, was an important step forward in
the economic analysis of risk. These authors employed a simple
statistical measure, namely the standard deviation (or alternatively
the variance, that is the square of the standard deviation) as a
measure of riskiness. Hence, in a comparison between two investment
alternatives, the one whose economic outcome is calculated to have the
largest standard deviation is regarded as the most risky one. In a
comparison between different such alternatives, each of them can be
characterized by two numbers, namely its expectation value and its
standard deviation or riskiness. Investors typically prefer
investments whose expectation value is as high, and whose riskiness is
as low, as possible. However, investors differ in the relative weights
that they assign to expectation value and to risk avoidance. Given these
decision weights, an individual's optimal portfolio can be
determined.
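
A minimal sketch of this mean–standard-deviation comparison (the
outcome lists are invented and assumed equally probable):

```python
import statistics

# Two hypothetical investments with three equally probable outcomes each.
outcomes_a = [90, 100, 110]  # small spread
outcomes_b = [50, 100, 150]  # same mean, large spread

for name, outcomes in [("A", outcomes_a), ("B", outcomes_b)]:
    print(name, statistics.mean(outcomes), round(statistics.pstdev(outcomes), 1))
# A 100 8.2
# B 100 40.8  -> B counts as the riskier alternative in this sense
```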

Since the late 1960s, alternative measures of risk have been
developed. Perhaps the most influential of these was provided by
Michael Rothschild and Joseph Stiglitz: If we move probability mass
from the centre to the tails of a probability distribution, while
keeping its mean unchanged, then we increase the risk associated with
the distribution. A measure based on this principle (mean preserving
spread) can be constructed that has more attractive mathematical
properties than those of the older standard deviation measure.
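
A small sketch of a mean preserving spread (equally probable, invented
outcomes):

```python
import statistics

# Probability mass is moved outward into the tails while the mean stays at 100.
original = [80, 100, 100, 120]
spread = [60, 100, 100, 140]

print(statistics.mean(original), statistics.mean(spread))  # 100 100
print(round(statistics.pstdev(original), 1), round(statistics.pstdev(spread), 1))
# 14.1 28.3: the second distribution is riskier although the mean is preserved
```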

Roughly speaking, a person is risk-averse if she prefers a certain
outcome to a risky outcome with the same expected utility. A person's
degree of risk aversion can be measured as her willingness to pay (or
to accept a lower expected utility) in order to avoid a risk. Provided
that an agent’s utility function \(u(x)\) is twice continuously
differentiable, her risk aversion at any point \(x\) can be measured as
\(-u''(x)/u'(x)\). Hence, a person with the utility function \(u_1\) is
more risk averse at a point \(x\) than one with utility function
\(u_2\) if and only if
\(-u_1''(x)/u_1'(x) > -u_2''(x)/u_2'(x)\). This
is the Arrow-Pratt measure of risk aversion. It has the advantage of
being invariant under transformations of the utility function that
preserve the preference relation that it represents (i.e., it is
invariant under multiplication of the utility by a positive constant
and addition of an arbitrary constant).
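
For a single stylized case (logarithmic utility, chosen only as an
example):

```python
# For u(x) = ln(x): u'(x) = 1/x and u''(x) = -1/x**2, so the
# Arrow-Pratt measure -u''(x)/u'(x) equals 1/x.
def arrow_pratt_log_utility(x):
    return (1.0 / x**2) / (1.0 / x)  # -u''(x)/u'(x) for u = ln

print(arrow_pratt_log_utility(10.0))   # 0.1
print(arrow_pratt_log_utility(100.0))  # 0.01: risk aversion declines with wealth
```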

In later years, economic analysis has been increasingly influenced by
studies in psychology and experimental economics. Such studies reveal
that actual agents often do not conform with theoretically derived
rationality criteria. One of the most popular descriptive theories
that tries to capture actual behaviour under risk is prospect theory,
which was developed by Daniel Kahneman and Amos Tversky around
1980. It distinguishes between two stages in a decision process. In
the first phase, the editing phase, gains and losses in the
different options are identified. They are defined relative to some
neutral reference point that is usually the current asset position. In
the second phase, the evaluation phase, the options are
evaluated in a way that resembles expected utility analysis, but both
utilities and probabilities are replaced by other, similar
measures. Utility is replaced by a measure that is asymmetrical
between gains and losses. Objective probabilities are transformed by
a function that gives more weight to probability differences close to
the ends of the probability scale than to those near its middle. Thus it
makes a greater difference to decrease the probability of a negative
outcome from 2 to 1 per cent than to decrease it from 51 to 50
percent.
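
The following sketch illustrates such a weighting function. The
functional form and the parameter value 0.61 are taken from Tversky
and Kahneman’s 1992 cumulative prospect theory, not from the text
above, and serve only as an illustration:

```python
def weight(p, gamma=0.61):
    """Inverse-S-shaped probability weighting (Tversky/Kahneman 1992 form)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# A 2% -> 1% change moves the decision weight far more than a 51% -> 50% change.
print(round(weight(0.02) - weight(0.01), 3))  # ~0.026
print(round(weight(0.51) - weight(0.50), 3))  # ~0.005
```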

Prospect theory can explain some of the ways in which actual behaviour
deviates from theoretical models of rational behaviour under
risk. Hence the overweighting of probability changes close to zero or
unity can be used to explain why people both buy insurance and buy
lottery tickets. However, prospect theory is not plausible as a
normative theory for rational behaviour under risk. Probably,
normative and descriptive theories of risk will have to go in
different directions.

