The article proceeds upon the assumption that the beliefs and degrees of belief of rational agents satisfy a number of constraints, including: consistency and deductive closure for belief sets, conformity to the axioms of probability for degrees of belief, and the Lockean Thesis concerning the relationship between belief and degree of belief. Assuming that the beliefs and degrees of belief of both individuals and collectives satisfy the preceding three constraints, I discuss what further constraints may be imposed on the aggregation of beliefs and degrees of belief. Some possibility and impossibility results are presented. The possibility results suggest that the three proposed rationality constraints are compatible with reasonable aggregation procedures for belief and degree of belief.
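The Lockean Thesis mentioned above can be given a minimal computational sketch: belief in a proposition just in case one's degree of belief meets a threshold. The threshold value 0.9 and the propositions below are illustrative assumptions, not taken from the article.

```python
# Minimal sketch of the Lockean Thesis: believe p iff credence(p) >= t.
# The threshold t = 0.9 is an illustrative choice.

def lockean_believes(credence: float, threshold: float = 0.9) -> bool:
    """Believe a proposition iff one's degree of belief meets the threshold."""
    return credence >= threshold

# Illustrative credences for three propositions.
credences = {"rain": 0.95, "snow": 0.40, "wind": 0.90}

# The induced belief set contains exactly the propositions above threshold.
belief_set = {p for p, c in credences.items() if lockean_believes(c)}
```

Note that consistency and deductive closure are further constraints on the resulting belief set; the threshold rule alone does not guarantee them.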

In attempting to form rational personal probabilities by direct inference, it is usually assumed that one should prefer frequency information concerning more specific reference classes. While the preceding assumption is intuitively plausible, little energy has been expended in explaining why it should be accepted. In the present article, I address this omission by showing that, among the principled policies that may be used in setting one’s personal probabilities, the policy of making direct inferences with a preference for frequency information for more specific reference classes yields personal probabilities whose accuracy is optimal, according to all proper scoring rules, in situations where all of the relevant frequency information is point-valued. Assuming that frequency information for narrower reference classes is preferred, when the relevant frequency statements are point-valued, a dilemma arises when choosing whether to make a direct inference based upon relatively precise-valued frequency information for a broad reference class, R, or upon relatively imprecise-valued frequency information for a more specific reference class, R*. I address such cases by showing that it is often possible to make a precise-valued frequency judgment regarding R* based on precise-valued frequency information for R, using standard principles of direct inference. Having made such a frequency judgment, the dilemma of choosing between R and R* is removed, and one may proceed by using the precise-valued frequency estimate for the more specific reference class as a premise for direct inference.
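The optimality claim can be illustrated with one proper scoring rule, the Brier score (this is an illustration of the idea, not the article's general proof): if the point-valued frequency for the narrow class R* is q, then the expected Brier score of credence p is q(1 − p)² + (1 − q)p², which is uniquely minimized at p = q. The frequency values below are illustrative.

```python
# Hedged illustration: expected Brier score of credence p when the true
# chance (the frequency in the narrow class R*) is q. This quadratic in p
# is uniquely minimized at p = q, so matching the narrower class's
# frequency is optimal under this proper scoring rule.

def expected_brier(p: float, q: float) -> float:
    """Expected Brier score of credence p when the true chance is q."""
    return q * (1 - p) ** 2 + (1 - q) * p ** 2

q_narrow = 0.8  # illustrative freq[A | R*]
q_broad = 0.5   # illustrative freq[A | R]

loss_narrow = expected_brier(q_narrow, q_narrow)  # credence from R*
loss_broad = expected_brier(q_broad, q_narrow)    # credence from R
```

Here `loss_narrow` (0.16) is strictly smaller than `loss_broad` (0.25), reflecting the stated preference for the more specific reference class when its frequency is point-valued.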

The article begins by describing two longstanding problems associated with direct inference. One problem concerns the role of uninformative frequency statements in inferring probabilities by direct inference. A second problem concerns the role of frequency statements with gerrymandered reference classes. I show that past approaches to the problem associated with uninformative frequency statements yield the wrong conclusions in some cases. I propose a modification of Kyburg’s approach to the problem that yields the right conclusions. Past theories of direct inference have postponed treatment of the problem associated with gerrymandered reference classes by appealing to an unexplicated notion of projectability. I address the lacuna in past theories by introducing criteria for being a relevant statistic. The prescription that only relevant statistics play a role in direct inference corresponds to the sort of projectability constraints envisioned by past theories.

The present article illustrates a conflict between the claim that rational belief sets are closed under deductive consequences, and a very inclusive claim about the factors that are sufficient to determine whether it is rational to believe respective propositions. Inasmuch as it is implausible to hold that the factors listed here are insufficient to determine whether it is rational to believe respective propositions, we have good reason to deny that rational belief sets are closed under deductive consequences.

In a recent article, Joel Pust argued that direct inferences based on reference properties of differing arity are incommensurable, and so direct inference cannot be used to resolve the Sleeping Beauty problem. After discussing the defects of Pust's argument, I offer reasons for thinking that direct inferences based on reference properties of differing arity are commensurable, and that we should prefer direct inferences based on logically stronger reference properties, regardless of arity.

In this article, I introduce the term “cognitivism” as a name for the thesis that degrees of belief are equivalent to full beliefs about truth-valued propositions. The thesis (of cognitivism) that degrees of belief are equivalent to full beliefs is equivocal, inasmuch as different sorts of equivalence may be postulated between degrees of belief and full beliefs. The simplest sort of equivalence (and the sort of equivalence that I discuss here) identifies having a given degree of belief with having a full belief with a specific content. This sort of view was proposed in [C. Howson and P. Urbach, Scientific reasoning: the Bayesian approach. Chicago: Open Court (1996)]. In addition to embracing a form of cognitivism about degrees of belief, Howson and Urbach argued for a brand of probabilism. I call a view, such as Howson and Urbach’s, which combines probabilism with cognitivism about degrees of belief “cognitivist probabilism”. In order to address some problems with Howson and Urbach’s view, I propose a view that incorporates several modifications of Howson and Urbach’s version of cognitivist probabilism. The view that I finally propose upholds cognitivism about degrees of belief, but deviates from the letter of probabilism, in allowing that a rational agent’s degrees of belief need not conform to the axioms of probability, in the case where the agent’s cognitive resources are limited.

Meta-induction, in its various forms, is an imitative prediction method, where the prediction methods and the predictions of other agents are imitated to the extent that those methods or agents have proven successful in the past. In past work, Schurz demonstrated the optimality of meta-induction as a method for predicting unknown events and quantities. However, much recent discussion, along with formal and empirical work, on the Wisdom of Crowds has extolled the virtue of diverse and independent judgment as essential to the maintenance of 'wise crowds'. This suggests that meta-inductive prediction methods could undermine the wisdom of the crowd inasmuch as these methods recommend that agents imitate the predictions of other agents. In this article, we evaluate meta-inductive methods with a focus on the impact on a group's performance that may result from including meta-inductivists among its members. In addition to considering cases of global accessibility (i.e., cases where the judgments of all members of the group are available to all of the group's members), we consider cases where agents only have access to the judgments of other agents within their own local neighborhoods.
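The core imitative mechanism can be sketched as follows. This is one simple success-weighted rule, not Schurz's full attractivity-weighted method; the success rates and predictions are illustrative.

```python
# A minimal sketch of success-based imitation: the meta-inductivist weights
# each forecaster's current prediction by that forecaster's past success
# rate. (Schurz's attractivity-weighted meta-induction is more refined.)

def meta_inductive_prediction(past_scores, current_predictions):
    """Weighted average of predictions, weighted by past success."""
    total = sum(past_scores)
    if total == 0:
        # No track record yet: fall back to the plain average.
        return sum(current_predictions) / len(current_predictions)
    return sum(s * p for s, p in zip(past_scores, current_predictions)) / total

scores = [0.9, 0.6, 0.0]  # illustrative past success rates of three agents
preds = [1.0, 0.0, 1.0]   # their current predictions for a binary event
forecast = meta_inductive_prediction(scores, preds)  # (0.9*1 + 0.6*0) / 1.5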

The applicability of Bayesian conditionalization in setting one’s posterior probability for a proposition, α, is limited to cases where the value of a corresponding prior probability, PPRI(α|∧E), is available, where ∧E represents one’s complete body of evidence. In order to extend probability updating to cases where the prior probabilities needed for Bayesian conditionalization are unavailable, I introduce an inference schema, defeasible conditionalization, which allows one to update one’s personal probability in a proposition by conditioning on a proposition that represents a (...) proper subset of one’s complete body of evidence. While defeasible conditionalization has wider applicability than standard Bayesian conditionalization (since it may be used when the value of a relevant prior probability, PPRI(α|∧E), is unavailable), there are circumstances under which some instances of defeasible conditionalization are unreasonable. To address this difficulty, I outline the conditions under which instances of defeasible conditionalization are defeated. To conclude the article, I suggest that the prescriptions of direct inference and statistical induction can be encoded within the proposed system of probability updating, by the selection of intuitively reasonable prior probabilities. (shrink)

It is well known that there are, at least, two sorts of cases where one should not prefer a direct inference based on a narrower reference class, in particular: cases where the narrower reference class is gerrymandered, and cases where one lacks an evidential basis for forming a precise-valued frequency judgment for the narrower reference class. I here propose (1) that the preceding exceptions exhaust the circumstances where one should not prefer direct inference based on a narrower reference class, and (...) (2) that minimal frequency information for a narrower (non-gerrymandered) reference class is sufficient to yield the defeat of a direct inference for a broader reference class. By the application of a method for inferring relatively informative expected frequencies, I argue that the latter claim does not result in an overly incredulous approach to direct inference. The method introduced here permits one to infer a relatively informative expected frequency for a reference class R', given frequency information for a superset of R' and/or frequency information for a sample drawn from R'. (shrink)

There are numerous formal systems that allow inference of new conditionals based on a conditional knowledge base. Many of these systems have been analysed theoretically and some have been tested against human reasoning in psychological studies, but experiments evaluating the performance of such systems are rare. In this article, we extend the experiments in [19] in order to evaluate the inferential properties of c-representations in comparison to the well-known Systems P and Z. Since it is known that System Z and (...) c-representations mainly differ in the sorts of inheritance inferences they allow, we discuss subclass inheritance and present experimental data for this type of inference in particular. (shrink)

An agent’s belief in a proposition, E0, is justified by an infinite regress of deferred justification just in case the belief that E0 is justified, and the justification for believing E0 proceeds from an infinite sequence of propositions, E0, E1, E2, etc., where, for all n ≥ 0, En+1 serves as the justification for En. In a number of recent articles, Atkinson and Peijnenburg claim to give examples where a belief is justified by an infinite regress of deferred justification. I (...) argue here that there is no reason to regard Atkinson and Peijnenburg’s examples as cases where a belief is so justified. My argument is supported by careful consideration of the grounds upon which relevant beliefs are held within Atkinson and Peijnenburg’s examples. (shrink)

Formal and empirical work on the Wisdom of Crowds has extolled the virtue of diverse and independent judgment as essential to the maintenance of ‘wise crowds’. In other words, com-munication and imitation among members of a group may have the negative effect of decreasing the aggregate wisdom of the group. In contrast, it is demonstrable that certain meta-inductive methods provide optimal means for predicting unknown events. Such meta-inductive methods are essentially imitative, where the predictions of other agents are imitated to (...) the extent that those agents have proven successful in the past. Despite the (self-serving) optimality of meta-inductive methods, their imitative nature may undermine the ‘wisdom of the crowd’, since these methods recommend that agents imitate the predictions of other agents. In this paper, I present a replication of selected results of Thorn and Schurz, illustrating the effect on a group’s performance that may result from having members of a group adopt meta-inductive methods. I then expand on the work of Thorn and Schurz by considering three simple measures by which meta-inductive prediction methods may improve their own performance, while simultaneously mitigating their negative impact on group performance. The effects of adopting these maneuvers are investigated using computer simulations. (shrink)

We describe a prediction method called "Attractivity Weighting" (AW). In the case of cue-based paired comparison tasks, AW's prediction is based on a weighted average of the cue values of the most successful cues. In many situations, AW's prediction is based on the cue value of the most successful cue, resulting in behavior similar to Take-the-Best (TTB). Unlike TTB, AW has a desirable characteristic called "access optimality": Its long-run success is guaranteed to be at least as great as the most (...) successful cue. While access optimality is a desirable characteristic, concerns may be raised about the short-term performance of AW. To evaluate such concerns, we here present a study of AW's short-term performance. The results suggest that there is little reason to worry about the short-run performance of AW. Our study also shows that, in random sequences of paired comparison tasks, the behavior of AW and TTB is nearly indiscernible. (shrink)

In previous work, we studied four well known systems of qualitative probabilistic inference, and presented data from computer simulations in an attempt to illustrate the performance of the systems. These simulations evaluated the four systems in terms of their tendency to license inference to accurate and informative conclusions, given incomplete information about a randomly selected probability distribution. In our earlier work, the procedure used in generating the unknown probability distribution (representing the true stochastic state of the world) tended to yield (...) probability distributions with moderately high entropy levels. In the present article, we present data charting the performance of the four systems when reasoning in environments of various entropy levels. The results illustrate variations in the performance of the respective reasoning systems that derive from the entropy of the environment, and allow for a more inclusive assessment of the reliability and robustness of the four systems. (shrink)

Systems of logico-probabilistic (LP) reasoning characterize inference from conditional assertions interpreted as expressing high conditional probabilities. In the present article, we investigate four prominent LP systems (namely, systems O, P, Z, and QC) by means of computer simulations. The results reported here extend our previous work in this area, and evaluate the four systems in terms of the expected utility of the dispositions to act that derive from the conclusions that the systems license. In addition to conforming to the dominant (...) paradigm for assessing the rationality of actions and decisions, our present evaluation complements our previous work, since our previous evaluation may have been too severe in its assessment of inferences to false and uninformative conclusions. In the end, our new results provide additional support for the conclusion that (of the four systems considered) inference by system Z offers the best balance of error avoidance and inferential power. Our new results also suggest that improved performance could be achieved by a modest strengthening of system Z. (shrink)

This essay is devoted to the study of useful ways of thinking about the nature of interpretation, with particular attention being given to the so called normative character of mental explanation. My aim of illuminating the nature of interpretation will be accomplished by examining several views, some of which are common to both Donald Davidson and Daniel Dennett, concerning its unique characteristics as a method of prediction and explanation. Moreover, some of the views held by Davidson and Dennett will be (...) adopted, elaborated, and defended. The conclusions of these philosophers do not, however, form an acceptable whole. Thus I will attempt to moderate some of their views. In particular, I will attempt to show up the deficits of Davidson's view of the mental by defending the possibility some sort of psycho-physical reduction. Despite such philosophical pretensions, major parts of this essay will be devoted to sketching the foundations of a method for the interpretation of intentional behaviour which I take to embody the key features of our ordinary practice of interpretation. In particular, I will attempt to sketch the bases for a method of interpretation which is sensitive to the methodological considerations associated with the seemingly unique normative character of mental explanation. To this end, I will also investigate the question of how certain formal measures of coherence can be made to yield models for understanding the actual and possible bases of interpretation. (shrink)