Among the many sorts of problems encountered in decision theory, allocation problems occupy a central position. Such problems call for the assignment of a nonnegative real number to each member of a finite (more generally, countable) set of entities, in such a way that the values so assigned sum to some fixed positive real number s. Familiar cases include the problem of specifying a probability mass function on a countable set of possible states of the world (s = 1), and the distribution of a certain sum of money, or other resource, among various enterprises. In determining an s-allocation it is common to solicit the opinions of more than one individual, which leads immediately to the question of how to aggregate their typically differing allocations into a single “consensual” allocation. Guided by the traditions of social choice theory (in which the aggregation of preferential orderings, or of utilities, is at issue), decision theorists have taken an axiomatic approach to determining acceptable methods of allocation aggregation. In such approaches so-called “independence” conditions have been ubiquitous. Such conditions dictate that the consensual allocation assigned to each entity should depend only on the allocations assigned by individuals to that entity, taking no account of the allocations that they assign to any other entities. While there are reasons beyond mere simplicity for subjecting allocation aggregation to independence, this radically anti-holistic stricture has frequently proved to severely limit the set of acceptable aggregation methods. As we show in what follows, the limitations are particularly acute in the case of three or more entities which must be assigned nonnegative values summing to some fixed positive number s. For if the set V ⊆ [0, s] of values that may be assigned to these entities satisfies some simple closure conditions and (as is always the case in practice) V is finite, then independence allows only for dictatorial or imposed (i.e., constant) aggregation. This theorem builds on and extends a theorem of Bradley and Wagner (Episteme, 9, 91–99, 2012) and, when V = {0, 1}, yields as a corollary an impossibility theorem of Dietrich (Journal of Economic Theory, 126, 286–298, 2006) on judgment aggregation.
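
To make the independence condition concrete, here is a minimal formalization in our own notation (a sketch, not the paper's verbatim statement):

```latex
% Sketch (our notation): individuals 1,...,n each propose an allocation
% f_i over entities a_1,...,a_m, with values drawn from V and summing to s.
\[
  f_i : \{a_1,\dots,a_m\} \to V \subseteq [0,s],
  \qquad \sum_{k=1}^{m} f_i(a_k) = s .
\]
% Independence requires the consensus F to be computable entity-by-entity:
\[
  F(f_1,\dots,f_n)(a_k) \;=\; G_k\bigl(f_1(a_k),\dots,f_n(a_k)\bigr)
  \quad \text{for some functions } G_k .
\]
```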

Jonathan Weisberg claims that certain probability assessments constructed by Jeffrey conditioning resist subsequent revision by a certain type of after-the-fact defeater of the reasons supporting those assessments, and that such conditioning is thus “inherently anti-holistic.” His analysis founders, however, in applying Jeffrey conditioning to a partition for which an essential rigidity condition clearly fails. Applied to an appropriate partition, Jeffrey conditioning is amenable to revision by the sort of after-the-fact defeaters considered by Weisberg in precisely the way that he demands.
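
For reference, Jeffrey conditioning on a partition {E_i}, and the rigidity condition it presupposes, can be stated in the standard form (this is the textbook formulation, not a quotation from the paper):

```latex
% Jeffrey conditioning on a partition {E_i} with new weights q(E_i):
\[
  q(A) \;=\; \sum_i q(E_i)\, p(A \mid E_i).
\]
% The rigidity condition this presupposes, which fails for the
% partition to which Weisberg applies the rule:
\[
  q(A \mid E_i) \;=\; p(A \mid E_i) \quad \text{for all } i .
\]
```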

Evidentiary propositions E1 and E2, each p-positively relevant to some hypothesis H, are mutually corroborating if p(H|E1 ∩ E2) > p(H|Ei), i = 1, 2. Failures of such mutual corroboration are instances of what may be called the corroboration paradox. This paper assesses two rather different analyses of the corroboration paradox due, respectively, to John Pollock and Jonathan Cohen. Pollock invokes a particular embodiment of the principle of insufficient reason to argue that instances of the corroboration paradox are of negligible probability, and that it is therefore defeasibly reasonable to assume that items of evidence positively relevant to some hypothesis are mutually corroborating. Taking a different approach, Cohen seeks to identify supplementary conditions that are sufficient to ensure that such items of evidence will be mutually corroborating, and claims to have identified conditions which account for most cases of mutual corroboration. Combining a proposed common framework for the general study of paradoxes of positive relevance with a simulation experiment, we conclude that neither Pollock's nor Cohen's claims stand up to detailed scrutiny.
“I am quite prepared to be told… ‘oh, that is an extreme case: it could never really happen!’ Now I have observed that this answer is always given instantly, with perfect confidence, and without any examination of the proposed case. It must therefore rest on some general principle: the mental process being something like this—‘I have formed a theory. This case contradicts my theory. Therefore, this is an extreme case, and would never occur in practice.’” (Rev. Charles L. Dodgson)
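
To illustrate the paradox, the following sketch (with hypothetical numbers of our own devising, not drawn from the paper's simulation experiment) exhibits a joint distribution on which E1 and E2 are each positively relevant to H yet fail to corroborate one another:

```python
# Toy counterexample (hypothetical numbers, not from the paper):
# E1 and E2 are each positively relevant to H, yet p(H|E1 & E2) < p(H|Ei).
p_H = 0.5
cond = {  # (e1, e2) -> (probability given H, probability given not-H)
    (True, True):   (0.1, 0.2),
    (True, False):  (0.4, 0.1),
    (False, True):  (0.4, 0.1),
    (False, False): (0.1, 0.6),
}
joint = {}
for (e1, e2), (ph, pnh) in cond.items():
    joint[(True, e1, e2)] = p_H * ph
    joint[(False, e1, e2)] = (1 - p_H) * pnh

def prob(pred):
    return sum(p for atom, p in joint.items() if pred(*atom))

def cond_prob(pred, given):
    return prob(lambda *a: pred(*a) and given(*a)) / prob(given)

h = lambda h, e1, e2: h
print(prob(h))                                    # p(H)         = 0.5
print(cond_prob(h, lambda h, e1, e2: e1))         # p(H|E1)      = 0.625
print(cond_prob(h, lambda h, e1, e2: e2))         # p(H|E2)      = 0.625
print(cond_prob(h, lambda h, e1, e2: e1 and e2))  # p(H|E1 & E2) ~ 0.333
```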

It has often been recommended that the differing probability distributions of a group of experts should be reconciled in such a way as to preserve each instance of independence common to all of their distributions. When probability pooling is subject to a universal domain condition, along with state-wise aggregation, there are severe limitations on implementing this recommendation. In particular, when the individuals are epistemic peers whose probability assessments are to be accorded equal weight, universal preservation of independence is, with a few exceptions, impossible. Under more reasonable restrictions on pooling, however, there is a natural method of preserving the independence of any fixed finite family of countable partitions, and hence of any fixed finite family of discrete random variables.
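
A minimal numerical sketch (our own toy example, not one from the paper) shows why preservation of independence is so fragile under equal-weight state-wise averaging:

```python
# Two experts each make events A and B independent, yet the equal-weight
# linear pool of their distributions renders A and B dependent.
p1 = {"A": 0.2, "B": 0.2}   # expert 1: p1(A & B) = 0.04 (product form)
p2 = {"A": 0.8, "B": 0.8}   # expert 2: p2(A & B) = 0.64 (product form)

pool_A  = 0.5 * p1["A"] + 0.5 * p2["A"]                          # 0.5
pool_B  = 0.5 * p1["B"] + 0.5 * p2["B"]                          # 0.5
pool_AB = 0.5 * (p1["A"] * p1["B"]) + 0.5 * (p2["A"] * p2["B"])  # 0.34

print(pool_AB, pool_A * pool_B)  # 0.34 != 0.25: independence not preserved
```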

Suppose that several individuals who have separately assessed prior probability distributions over a set of possible states of the world wish to pool their individual distributions into a single group distribution, while taking into account jointly perceived new evidence. They have the option of first updating their individual priors and then pooling the resulting posteriors, or first pooling their priors and then updating the resulting group prior. If the pooling method that they employ is such that they arrive at the same final distribution in both cases, the method is said to be externally Bayesian, a property first studied by Madansky. We show that a pooling method for discrete distributions is externally Bayesian if and only if it commutes with Jeffrey conditioning, parameterized in terms of certain ratios of new to old odds, as in Wagner, rather than in terms of the posterior probabilities of members of the disjoint family of events on which such conditioning originates.
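
The flavor of this property can be seen in a standard example: the geometric (logarithmic) pooling operator, which is well known to be externally Bayesian. The check below is illustrative only and is not the paper's characterization:

```python
# Numerical check: the geometric pool T(p1,...,pn) ~ prod p_i^{w_i}
# (weights summing to 1) commutes with Bayesian updating on a likelihood.
import numpy as np

def normalize(v):
    return v / v.sum()

def geometric_pool(ps, w):
    return normalize(np.prod([p ** wi for p, wi in zip(ps, w)], axis=0))

def bayes_update(p, likelihood):
    return normalize(p * likelihood)

rng = np.random.default_rng(0)
p1, p2 = normalize(rng.random(5)), normalize(rng.random(5))
l = rng.random(5)            # likelihood of the new evidence in each state
w = [0.3, 0.7]               # expert weights, summing to 1

update_then_pool = geometric_pool([bayes_update(p1, l), bayes_update(p2, l)], w)
pool_then_update = bayes_update(geometric_pool([p1, p2], w), l)

print(np.allclose(update_then_pool, pool_then_update))  # True
```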

The incense used in some cults and oracles in antiquity seems to have possessed the power to induce visions and prophecies. A study of its components, from an ethnobotanical perspective, reveals their psychoactive power.

A decision problem in which the values of the decision variables must sum to a fixed positive real number s is called an "allocation problem," and the problem of aggregating the allocations of n experts the "allocation aggregation problem." Under two simple axiomatic restrictions on aggregation, the only acceptable allocation aggregation method is based on weighted arithmetic averaging (Lehrer and Wagner, Rational Consensus in Science and Society, 1981). In this note it is demonstrated that when the values assigned to the variables are restricted to a finite set (as is always the case in practice), the aforementioned axioms allow only dictatorial aggregation.
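
The weighted-averaging method referred to has the familiar form (a sketch in our own notation):

```latex
% Weighted arithmetic averaging of the experts' allocations f_1,...,f_n:
\[
  F(f_1,\dots,f_n)(a_k) \;=\; \sum_{j=1}^{n} w_j\, f_j(a_k),
  \qquad w_j \ge 0, \quad \sum_{j=1}^{n} w_j = 1 .
\]
% Dictatorial aggregation is the degenerate case in which w_j = 1
% for a single expert j.
```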

The right interpretation of subjective probability is implicit in the theories of upper and lower odds, and upper and lower previsions, developed, respectively, by Cedric Smith (1961) and Peter Walley (1991). On this interpretation you are free to assign contingent events the probability 1 (and thus to employ conditionalization as a method of probability revision) without becoming vulnerable to a weak Dutch book.

We establish a probabilized version of modus tollens, deriving from p(E|H) = a and p(Ē) = b the best possible bounds on p(H̄). In particular, we show that p(H̄) → 1 as a, b → 1, and also as a, b → 0.
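
Both limiting claims follow from elementary bounds derivable directly from the two hypotheses; the derivation below is ours, and the paper's exact statement of the best possible bounds may differ in form:

```latex
% From p(E|H) = a and p(\bar{E}) = b (our derivation, hedged):
\[
  1 - b \;=\; p(E) \;\ge\; p(E \cap H) \;=\; a\,p(H)
  \;\;\Longrightarrow\;\;
  p(\bar{H}) \;\ge\; \frac{a + b - 1}{a} \quad (a > 0),
\]
\[
  a\,p(H) \;=\; p(E \cap H) \;\ge\; p(H) - p(\bar{E})
  \;\;\Longrightarrow\;\;
  p(\bar{H}) \;\ge\; \frac{1 - a - b}{1 - a} \quad (a < 1).
\]
% The first bound tends to 1 as a, b -> 1; the second as a, b -> 0.
```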

A simple rule of probability revision ensures that the final result of a sequence of probability revisions is undisturbed by an alteration in the temporal order of the learning prompting those revisions. This Uniformity Rule dictates that identical learning be reflected in identical ratios of certain new-to-old odds, and is grounded in the old Bayesian idea that such ratios represent what is learned from new experience alone, with prior probabilities factored out. The main theorem of this paper includes as special cases (i) Field's theorem on commuting probability-kinematical revisions and (ii) the equivalence of two strategies for generalizing Jeffrey's solution to the old evidence problem to the case of uncertain old evidence and probabilistic new explanation.
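
The ratios in question are Bayes factors, whose standard definition is:

```latex
% For cells E_i, E_j of the partition on which revision from p to q occurs:
\[
  \beta(E_i : E_j) \;=\; \frac{q(E_i)/q(E_j)}{p(E_i)/p(E_j)} ,
\]
% the ratio of new to old odds. The Uniformity Rule requires identical
% learning to be reflected in identical values of these ratios.
```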

The so-called "non-commutativity" of probability kinematics has caused much unjustified concern. When identical learning is properly represented, namely, by identical Bayes factors rather than identical posterior probabilities, then sequential probability-kinematical revisions behave just as they should. Our analysis is based on a variant of Field's reformulation of probability kinematics, divested of its (inessential) physicalist gloss.
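
The point can be made computationally: revision by fixed Bayes factors amounts to multiplying the prior by a fixed positive factor function and renormalizing, and such operations commute. A minimal sketch (our own demo, with made-up numbers):

```python
# Jeffrey revision parameterized by Bayes factors: multiply each atom's
# prior probability by its cell's factor, then renormalize. Two such
# revisions, on different partitions, commute.
import numpy as np

def normalize(v):
    return v / v.sum()

def jeffrey_by_bayes_factors(p, cells, betas):
    """Revise p, giving Bayes factor betas[i] to partition cell cells[i]."""
    factor = np.empty_like(p)
    for cell, beta in zip(cells, betas):
        factor[cell] = beta
    return normalize(p * factor)

p = normalize(np.array([0.1, 0.2, 0.3, 0.25, 0.15]))
partition1 = [[0, 1], [2, 3, 4]]     # first learning experience
partition2 = [[0, 2, 4], [1, 3]]     # second, on a different partition
betas1, betas2 = [3.0, 1.0], [1.0, 5.0]

order_a = jeffrey_by_bayes_factors(
    jeffrey_by_bayes_factors(p, partition1, betas1), partition2, betas2)
order_b = jeffrey_by_bayes_factors(
    jeffrey_by_bayes_factors(p, partition2, betas2), partition1, betas1)

print(np.allclose(order_a, order_b))  # True: the revisions commute
```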

Garber (1983) and Jeffrey (1991, 1995) have both proposed solutions to the old evidence problem. Jeffrey's solution, based on a new probability revision method called reparation, has been generalized to the case of uncertain old evidence and probabilistic new explanation in Wagner 1997, 1999. The present paper reformulates some of the latter work, highlighting the central role of Bayes factors and their associated uniformity principle, and extending the analysis to the case in which an hypothesis bears on a countable family of evidentiary propositions. This extension shows that no Garber-type approach is capable of reproducing the results of generalized reparation.

Jeffrey has devised a probability revision method that increases the probability of hypothesis H when it is discovered that H implies previously known evidence E. A natural extension of Jeffrey's method likewise increases the probability of H when E has been established with sufficiently high probability and it is then discovered, quite apart from this, that H confers sufficiently higher probability on E than does its logical negation H̄.

Jeffrey conditionalization is generalized to the case in which new evidence bounds the possible revisions of a prior below by a Dempsterian lower probability. Classical probability kinematics arises within this generalization as the special case in which the evidentiary focal elements of the bounding lower probability are pairwise disjoint.

It is shown that the Fisher smoking problem and Newcomb's problem are decision-theoretically identical, each having at its core an identical case of Simpson's paradox for certain probabilities. From this perspective, incorrect solutions to these problems arise from treating them as cases of decision-making under risk, while adopting certain global empirical conditional probabilities as the relevant subjective probabilities. The most natural correct solutions employ the methodology of decision-making under uncertainty with lottery acts, with certain local empirical conditional probabilities adopted as the relevant subjective probabilities.
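
Simpson's paradox itself is easy to exhibit numerically. The following sketch uses hypothetical counts of our own (not data from the smoking or Newcomb problems):

```python
# Simpson's paradox in miniature (hypothetical counts, purely illustrative):
# an act looks better within every subpopulation yet worse in the aggregate.
strata = {
    # stratum: (successes_act, trials_act, successes_alt, trials_alt)
    "X": (8, 10, 70, 100),
    "Y": (20, 100, 1, 10),
}

totals = [0, 0, 0, 0]
for name, counts in strata.items():
    sa, na, sb, nb = counts
    print(f"stratum {name}: act {sa/na:.2f} vs alternative {sb/nb:.2f}")
    totals = [t + c for t, c in zip(totals, counts)]

sa, na, sb, nb = totals
print(f"aggregate:  act {sa/na:.2f} vs alternative {sb/nb:.2f}")
# Within each stratum the act does better (0.80 > 0.70 and 0.20 > 0.10),
# but in the aggregate it does worse (0.25 < 0.65).
```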