Possibility theory is an uncertainty theory devoted to the handling of incomplete information. As such, it complements probability theory. It differs from the latter by the use of a pair of dual set-functions (possibility and necessity measures) instead of only one. This feature makes it easier to capture partial ignorance. Moreover, it is not additive and makes sense on ordinal structures. The name Theory of Possibility was coined in (Zadeh 1978), inspired by (Gaines and Kohout 1975). In Zadeh's view, possibility distributions were meant to provide a graded semantics to natural language statements. However, possibility and necessity measures can also be the basis of a full-fledged representation of partial belief that parallels probability (Dubois and Prade 1988). Possibility theory can then be seen either as a coarse, non-numerical version of probability theory, or as a framework for reasoning with extreme probabilities, or yet as a simple approach to reasoning with imprecise probabilities (Dubois, Nguyen and Prade, 2000).

Basic Notions

A possibility distribution is a mapping \(\pi\) from a set of states of affairs S to a totally ordered scale such as the unit interval \([0, 1]\ .\) The function \(\pi\) represents the knowledge of an agent (about the actual state of affairs), distinguishing what is plausible from what is less plausible, what is the normal course of things from what is not, what is surprising from what is expected. It represents a flexible restriction on what the actual state of affairs is, with the following conventions:

\(\pi(s) = 0\) means that state s is rejected as impossible;

\(\pi(s) = 1\) means that state s is totally possible (= plausible).

If the state space is exhaustive, at least one of its elements should be the actual world, so that at least one state is totally possible (normalisation). Distinct values may simultaneously have a degree of possibility equal to 1.

Possibility theory is driven by the principle of minimal specificity. It states that any hypothesis not known to be impossible cannot be ruled out. A possibility distribution is said to be at least as specific as another one if and only if each state is at least as possible according to the latter as to the former (Yager 1983). Then, the most specific one is the most restrictive and informative.

In the possibilistic framework, extreme forms of partial knowledge can be captured, namely:

Complete knowledge: for some state \(s_0\ ,\) \(\pi(s_0) = 1 \) and \(\pi(s) = 0 \) for all other states s (only \( s_0 \) is possible)

Complete ignorance: \(\pi(s) = 1 \) for all states s (all states are totally possible)

Given a simple query of the form does an event A occur? (where A is a subset of states, or equivalently, does the actual state lie in A?), a response to the query can be obtained by computing degrees of possibility and necessity, respectively (if the possibility scale is \( [0, 1] \) ):
\[
\Pi(A) = \sup_{s\in A} \pi(s); \qquad N(A) = \inf_{s\notin A} (1 - \pi(s)).
\]
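On a finite state space these two measures can be computed directly; the following is a minimal sketch, using a hypothetical three-state distribution:

```python
# Minimal sketch of possibility and necessity measures on a finite
# state space; the three-state distribution below is hypothetical.

def possibility(pi, A):
    """Pi(A) = sup over s in A of pi(s)."""
    return max(pi[s] for s in A)

def necessity(pi, A):
    """N(A) = inf over s not in A of 1 - pi(s), i.e. 1 - Pi(complement of A)."""
    outside = [s for s in pi if s not in A]
    return min((1 - pi[s] for s in outside), default=1.0)

pi = {"s1": 1.0, "s2": 0.7, "s3": 0.2}   # normalised: max value is 1
A = {"s1", "s2"}
print(possibility(pi, A))   # 1.0
print(necessity(pi, A))     # 0.8
```

Note that the duality \(N(A) = 1 - \Pi(A^c)\) holds by construction: the certainty of A reflects the impossibility of its complement.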

Human knowledge is often expressed in a declarative way using statements to which some belief qualification is attached. Certainty-qualified pieces of uncertain information of the form \(A \) is certain to degree \(\alpha \) can then be modelled by the constraint \(N(A) \geq \alpha. \) The least specific possibility distribution reflecting this information assigns possibility 1 to states where \(A \) is true and \(1 - \alpha \) to states where \(A \) is false.
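This least specific distribution can be built directly; a sketch, with hypothetical states and set A:

```python
def least_specific(states, A, alpha):
    """Least specific pi satisfying N(A) >= alpha:
    pi(s) = 1 if s in A, and 1 - alpha otherwise."""
    return {s: 1.0 if s in A else 1.0 - alpha for s in states}

pi = least_specific({"s1", "s2", "s3"}, {"s1"}, 0.5)
print(pi["s1"])  # 1.0
print(pi["s2"])  # 0.5 (likewise for s3)
```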

Apart from \(\Pi \ ,\) which represents the idea of potential possibility, another measure of guaranteed possibility can be defined (Dubois, Hajek and Prade, 2000) \[\Delta(A) = \inf_{s\in A} \pi(s).\] It estimates to what extent all states in \(A \) are actually possible according to evidence.
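A sketch of \(\Delta\) on the same kind of finite distribution (hypothetical values):

```python
def guaranteed_possibility(pi, A):
    """Delta(A) = inf over s in A of pi(s): every state of A is
    possible at least to this degree."""
    return min(pi[s] for s in A)

pi = {"s1": 1.0, "s2": 0.7, "s3": 0.2}
print(guaranteed_possibility(pi, {"s1", "s2"}))  # 0.7
```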

Notions of conditioning and independence have been studied for possibility measures. Conditional possibility is defined similarly to probability theory, using a Bayesian-like equation of the form (Dubois and Prade, 1988):
\[
\Pi(B\cap A) = \Pi(B\mid A) \star \Pi(A)
\]
However, in the ordinal setting the operation \(\star\) cannot be a product and is changed into the minimum. In the numerical setting, there are several ways to define conditioning, not all of which have this form (Walley, 1996). There are several variants of possibilistic independence (De Cooman, 1997; Dubois et al., 1997; De Campos and Huete, 1999). Generally, independence in ordinal possibility theory is neither symmetric, nor insensitive to negation. For non-Boolean variables, independence between events is not equivalent to independence between variables. Joint possibility distributions on Cartesian products of domains can be represented by means of graphical structures similar to Bayesian networks for joint probabilities (see Borgelt et al., 2000; Benferhat, Dubois, Garcia and Prade, 2002). Such graphical structures can be taken advantage of for evidence propagation (Ben Amor et al., 2003) or learning (Borgelt and Kruse, 2003).
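In the min-based (ordinal) setting, a common convention takes the greatest solution of the equation above, which can be sketched as follows (states and values hypothetical):

```python
def possibility(pi, A):
    """Pi(A) = max over s in A of pi(s)."""
    return max((pi[s] for s in A), default=0.0)

def conditional_possibility(pi, B, A):
    """Min-based conditioning: the greatest solution of
    Pi(A & B) = min(Pi(B|A), Pi(A))."""
    pab = possibility(pi, A & B)
    return 1.0 if pab == possibility(pi, A) else pab

pi = {"s1": 1.0, "s2": 0.7, "s3": 0.2}
print(conditional_possibility(pi, {"s2"}, {"s2", "s3"}))  # 1.0
print(conditional_possibility(pi, {"s3"}, {"s2", "s3"}))  # 0.2
```

Conditioning on A thus renormalises by pushing the most plausible A-states to degree 1 while leaving the others unchanged.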

Historical Background

Zadeh was not the first scientist to speak about formalising notions of possibility. The modalities possible and necessary have been used in philosophy at least since the Middle Ages in Europe, based on Aristotle's works. More recently these concepts became the building blocks of modal logic, which emerged at the beginning of the 20th century. In this approach, possibility and necessity are all-or-nothing notions, handled at the syntactic level. Independently of Zadeh's view, the notion of possibility as opposed to probability was central in the works of the economist Shackle, and is present in those of two philosophers, Cohen and Lewis:

The English economist G. L. S. Shackle (1962) developed, from the 1940s to the 1970s, a full-fledged approach to uncertainty and decision based on degrees of potential surprise of events. They are degrees of impossibility, that is, degrees of necessity of the opposite events. The degree of surprise of an event made of a set of elementary hypotheses is the degree of surprise of its least surprising realisation. Potential surprise is understood as disbelief. The disbelief notion introduced later in (Spohn 1990) employs the same type of convention as potential surprise, using the set of ordinals as a disbelief scale. Shackle also introduced a notion of conditional possibility. A framework very similar to Shackle's was proposed by the philosopher L. J. Cohen, who considered the problem of legal reasoning (Cohen, 1977).

The philosopher David Lewis introduced a graded notion of possibility in the form of a weak order between possible worlds, which he calls comparative possibility (Lewis, 1973). He relates this concept of possibility to a notion of similarity between possible worlds and a most plausible one. Comparative possibility relations are instrumental in defining the truth conditions of counterfactual statements. Comparative possibility relations \( \geq_{\Pi} \) obey the key axiom: for all events A, B, C, \[ A \geq_\Pi B \rightarrow C \cup A \geq_{\Pi} C \cup B.\] This axiom was later independently proposed in (Dubois 1986) in an attempt to derive a possibilistic counterpart to the comparative probabilities of De Finetti and Savage.

Zadeh (1978) proposed an interpretation of membership functions of fuzzy sets as possibility distributions encoding flexible constraints induced by natural language statements. Zadeh articulated the relationship between possibility and probability, noticing that what is probable must preliminarily be possible. However, the view of possibility degrees developed in his paper refers to the idea of graded feasibility (degrees of ease, as in the example of how many eggs can Hans eat for his breakfast) rather than to the epistemic notion of plausibility laid bare by Shackle. Nevertheless, the key axiom of maxitivity for possibility measures is highlighted. Later on, Zadeh (1979) acknowledged the connection between possibility theory, belief functions and upper/lower probabilities, and proposed their extensions to fuzzy events and fuzzy information granules.

These two traditions gave rise to two branches of possibility theory: the qualitative one and the quantitative one (Dubois and Prade, 1998).

Qualitative possibility theory

Qualitative possibility relations can be partially specified by a set of constraints of the form \( A >_{\Pi} B \ ,\) where A and B are events that may take the form of logical formulae. If these constraints are consistent, the relation can be completed through the principle of minimal specificity. A plausibility ordering on S can be obtained by assigning to each state of affairs the highest possibility level in agreement with the constraints (see Benferhat et al. 1997, 1998). A general discussion of the relation between possibility relations and partial orders on states of affairs can be found in (Halpern, 1997).

Qualitative possibility relations can be represented by (and only by) possibility measures ranging over any totally ordered set (especially a finite one; Dubois 1986). This absolute representation on an ordinal scale is slightly more expressive than the purely relational one. When the finite set S is large and generated by a propositional language, qualitative possibility distributions can be efficiently encoded in possibilistic logic (Dubois, Lang, Prade, 1994). A possibilistic logic base K is a set of pairs \( (\phi,\alpha )\ ,\) where \( \phi\) is a Boolean expression and \( \alpha \) takes values on an ordinal scale. This pair encodes the constraint \( N(\phi ) \geq \alpha \ ,\) where \( N(\phi) \) is the degree of necessity of the set of models of \( \phi \ .\) Each prioritized formula \( (\phi,\alpha )\) expresses a necessity-qualified statement. It is interpreted as the least specific possibility distribution on interpretations where this statement holds. Thus a prioritized formula has a fuzzy set of models. The fuzzy intersection of the fuzzy sets of models of all prioritized formulas in K yields an associated plausibility ordering on S.
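The distribution associated with a base K can be sketched directly: each interpretation gets possibility 1 minus the weight of the strongest formula it violates. In the sketch below, formulas are encoded as Python predicates over interpretations, and the base itself is a hypothetical example:

```python
# Sketch: pi_K(w) = min over formulas (phi, alpha) violated by w
# of 1 - alpha, and 1 if w satisfies all of K.

def pi_from_base(K, worlds):
    dist = {}
    for w in worlds:
        violated = [1 - alpha for (phi, alpha) in K if not phi(w)]
        dist[w] = min(violated, default=1.0)
    return dist

# K encodes N(p) >= 0.75 and N(p -> q) >= 0.5; a world w is a pair (p, q)
K = [
    (lambda w: w[0], 0.75),               # p
    (lambda w: (not w[0]) or w[1], 0.5),  # p -> q
]
worlds = [(p, q) for p in (True, False) for q in (True, False)]
dist = pi_from_base(K, worlds)
print(dist[(True, True)])    # 1.0  (satisfies all of K)
print(dist[(True, False)])   # 0.5  (violates p -> q)
print(dist[(False, True)])   # 0.25 (violates p, the most certain formula)
```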

Syntactic deduction from a set of prioritized clauses is achieved by refutation using an extension of the standard resolution rule, whereby \( (\phi \vee \psi, \min(\alpha, \beta)) \) can be derived from \( (\phi \vee \xi, \alpha ) \) and \( (\psi \vee \neg \xi, \beta). \) This rule, which evaluates the validity of an inferred proposition by the validity of the weakest premiss, goes back to Theophrastus, a disciple of Aristotle. Possibilistic logic is an inconsistency-tolerant extension of propositional logic that provides a natural semantic setting for mechanizing non-monotonic reasoning (Benferhat, Dubois, Prade, 1998), with a computational complexity close to that of propositional logic.
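The weighted resolution rule can be sketched on clauses represented as sets of literals, each literal being a (name, polarity) pair; the clause contents here are hypothetical:

```python
def resolve(c1, c2, x):
    """From (phi | x, alpha) and (psi | ~x, beta) derive
    (phi | psi, min(alpha, beta)).  A clause is a pair
    (frozenset of (name, polarity) literals, weight)."""
    lits1, a = c1
    lits2, b = c2
    assert (x, True) in lits1 and (x, False) in lits2
    return (lits1 - {(x, True)}) | (lits2 - {(x, False)}), min(a, b)

c1 = (frozenset({("p", True), ("x", True)}), 0.8)   # (p | x, 0.8)
c2 = (frozenset({("q", True), ("x", False)}), 0.5)  # (q | ~x, 0.5)
clause, weight = resolve(c1, c2, "x")
print(sorted(clause))  # [('p', True), ('q', True)]
print(weight)          # 0.5 -- the weakest premiss bounds the conclusion
```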

The main idea behind qualitative possibility theory is that the state of the world is by default assumed to be as normal as possible, neglecting less normal states. In particular, the events accepted as true are those true in all the most plausible states, namely the ones with positive degrees of necessity. These assumptions lead us to interpret the plausible inference of a proposition B from another A, under knowledge \( \pi \ ,\) as follows: B should be true in all the most plausible states where A is true. It also means that the agent considers B as an accepted belief in the context A. This kind of inference is nonmonotonic in the sense that, in the presence of additional information C, B may fail to remain an accepted belief. This is similar to the fact that a conditional probability \( P(B\mid A\cap C) \) may be low even if \( P(B\mid A) \) is high. The properties of this consequence relation are now well understood (Benferhat, Dubois and Prade, 1997).
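This accepted-belief inference can be sketched directly: B follows from A exactly when B holds in every most plausible A-state, and shrinking the context can overturn the conclusion (the distribution below is hypothetical):

```python
def plausible_inference(pi, A, B):
    """A |~ B iff B holds in all most plausible states where A holds."""
    top = max(pi[s] for s in A)
    return all(s in B for s in A if pi[s] == top)

pi = {"s1": 1.0, "s2": 0.7, "s3": 0.2}
print(plausible_inference(pi, {"s1", "s2"}, {"s1"}))  # True
# Restricting the context to {s2, s3} shifts the most plausible state to s2:
print(plausible_inference(pi, {"s2", "s3"}, {"s1"}))  # False (nonmonotonicity)
```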

Decision-theoretic foundations of qualitative possibility were established by Dubois et al. (2001) in the act-based setting of Savage. Two qualitative decision rules under uncertainty can be justified: a pessimistic one and an optimistic one, respectively generalizing Wald's maximin and maximax criteria under ignorance, refined by accounting for a plausibility ordering on the state space.
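A common formulation of these two criteria is sketched below; the utility and possibility values are hypothetical and are assumed to live on the same scale:

```python
def pessimistic_utility(pi, u):
    """Generalized maximin: min over s of max(1 - pi(s), u(s)).
    Implausible states are discounted when assessing the worst case."""
    return min(max(1 - pi[s], u[s]) for s in pi)

def optimistic_utility(pi, u):
    """Generalized maximax: max over s of min(pi(s), u(s)).
    A good outcome counts only insofar as its state is plausible."""
    return max(min(pi[s], u[s]) for s in pi)

pi = {"s1": 1.0, "s2": 0.5}   # plausibility of each state
u  = {"s1": 0.9, "s2": 0.1}   # utility of the act's outcome in each state
print(pessimistic_utility(pi, u))  # 0.5
print(optimistic_utility(pi, u))   # 0.9
```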

Quantitative possibility theory

The phrase "quantitative possibility" refers to the case when possibility degrees range in the unit interval. In that case, a precise articulation between possibility and probability theories is useful to provide an interpretation to possibility and necessity degrees. Several such interpretations can be consistently devised (see (Dubois, 2006) for a detailed survey):

a degree of possibility can be viewed as an upper probability bound (Walley, 1996; De Cooman and Aeyels, 1999). In particular, probabilistic inequalities such as Chebyshev's can be interpreted as defining possibility distributions (Dubois et al., 2004).

a possibility measure is also a special case of a plausibility function in the theory of evidence (Shafer, 1987). It is then equivalent to a consonant random set.

a possibility distribution can be viewed as a likelihood function (Dubois, Moral and Prade, 1997).

Confidence or dispersion intervals are often extracted from statistical information and are assigned a confidence level such as 95 per cent. Varying this confidence level yields a family of nested intervals that can be represented as a possibility distribution (Dubois et al. 2004).

Following a very different approach, possibility theory can represent probability distributions with extreme, infinitesimal values (Spohn, 1990).

The theory of large deviations in probability theory also handles set-functions that look like possibility measures (Nguyen and Bouchon-Meunier, 2003).

There are finally close connections between possibility theory and idempotent analysis (Kolokoltsov and Maslov, 1997).
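As an illustration of the upper-probability reading in the first item above, Chebyshev's inequality can be turned into a possibility distribution: the nested intervals \([m - k\sigma, m + k\sigma]\) carry confidence at least \(1 - 1/k^2\), whatever the underlying probability distribution with mean m and standard deviation \(\sigma\). A sketch in the spirit of (Dubois et al., 2004):

```python
def chebyshev_possibility(x, m, sigma):
    """pi(x) = min(1, (sigma / |x - m|)^2), obtained by reading
    P(|X - m| >= k*sigma) <= 1/k^2 as a family of nested intervals
    valid for every distribution with mean m and std deviation sigma."""
    d = abs(x - m)
    return 1.0 if d <= sigma else (sigma / d) ** 2

print(chebyshev_possibility(0.0, 0.0, 1.0))  # 1.0
print(chebyshev_possibility(2.0, 0.0, 1.0))  # 0.25
```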

Applications

Possibility theory has not been the main framework for engineering applications of fuzzy sets in the past. However, two directions where it proved useful can be highlighted:

Interval analysis has been generalized in the setting of possibility theory. This is the calculus of fuzzy numbers; see (Dubois, Kerre, Mesiar and Prade, 2000) for a survey. It is then analogous to random variable calculations. Exploiting the potential of possibilistic representations for computing conservative bounds on probabilistic calculations is certainly a major challenge.
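At each possibility level \(\alpha\), a fuzzy number reduces to an ordinary interval (its \(\alpha\)-cut), and fuzzy-number addition reduces to interval addition cut by cut. A minimal sketch, with two hypothetical fuzzy numbers each represented by two cuts:

```python
def add_fuzzy(x, y):
    """Add two fuzzy intervals given as {alpha: (low, high)} dicts of
    alpha-cuts: the alpha-cut of the sum is the sum of the alpha-cuts."""
    return {a: (x[a][0] + y[a][0], x[a][1] + y[a][1]) for a in x}

x = {1.0: (2.0, 2.0), 0.5: (1.5, 2.5)}   # "about 2"
y = {1.0: (3.0, 3.0), 0.5: (2.5, 3.5)}   # "about 3"
print(add_fuzzy(x, y))  # {1.0: (5.0, 5.0), 0.5: (4.0, 6.0)}
```

The result, "about 5", is wider at low levels: imprecision accumulates, which is exactly what makes fuzzy arithmetic a conservative counterpart to adding random variables.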

Possibility theory also offers a framework for preference modeling in constraint-directed reasoning. Both prioritized and soft constraints can be captured by possibility distributions expressing degrees of feasibility rather than plausibility (Dubois, Fargier and Prade, 1996).

Lastly, possibility theory is also being studied from the point of view of its relevance in cognitive psychology. Experimental results (Raufaste et al., 2003) suggest that there are situations where people reason about uncertainty using the rules of possibility theory rather than those of probability theory.