In this paper I adduce a new argument in support of the claim that IBE is an autonomous (indispensable) form of inference, based on a familiar yet surprisingly under-discussed problem for Hume’s theory of induction. I then use some insights thereby gleaned to argue for the (reductionist) claim that induction is really IBE, and draw some normative conclusions.

In the Tractatus Wittgenstein criticizes Frege and Russell's view that laws of inference (Schlussgesetze) "justify" logical inferences. What lies behind this criticism, I argue, is an attack on Frege and Russell's conceptions of logical entailment. In passing, I examine Russell's dispute with Bradley on the question whether all relations are "internal".

How do we go about weighing evidence, testing hypotheses, and making inferences? The model of "inference to the best explanation" (IBE) -- that we infer the hypothesis that would, if correct, provide the best explanation of the available evidence -- offers a compelling account of inferences both in science and in ordinary life. Widely cited by epistemologists and philosophers of science, IBE has nonetheless remained little more than a slogan. Now this influential work has been thoroughly revised and updated, and features a new introduction and two new chapters. Inference to the Best Explanation is an unrivaled exposition of a theory of particular interest in the fields both of epistemology and the philosophy of science.

I argue that the accounts of inference recently presented (in this journal) by Paul Boghossian, John Broome, and Crispin Wright are unsatisfactory. I proceed in two steps: First, in Sects. 1 and 2, I argue that we should not accept what Boghossian calls the “Taking Condition on inference” as a condition of adequacy for accounts of inference. I present a different condition of adequacy and argue that it is superior to the one offered by Boghossian. More precisely, I point out that there is an analog of Moore’s Paradox for inference; and I suggest that explaining this phenomenon is a condition of adequacy for accounts of inference. Boghossian’s Taking Condition derives its plausibility from the fact that it apparently explains the analog of Moore’s Paradox. Second, in Sect. 3, I show that neither Boghossian’s, nor Broome’s, nor Wright’s account of inference meets my condition of adequacy. I distinguish two kinds of mistake one is likely to make if one does not focus on my condition of adequacy; and I argue that all three—Boghossian, Broome, and Wright—make at least one of these mistakes.

What is the connection between justification and the kind of consequence relations that are studied by logic? In this essay, I shall try to provide an answer, by proposing a general conception of the kind of inference that counts as justified or rational.

The idea that knowledge can be extended by inference from what is known seems highly plausible. Yet, as shown by the familiar preface-paradox and lottery-type cases, the possibility of aggregating uncertainty casts doubt on its tenability. We show that these considerations go much further than previously recognized and significantly restrict the kinds of closure ordinary theories of knowledge can endorse. Meeting the challenge of uncertainty aggregation requires either the restriction of knowledge-extending inferences to single premises, or eliminating epistemic uncertainty in known premises. The first strategy, while effective, retains little of the original idea—conclusions even of modus ponens inferences from known premises are not always known. We then look at the second strategy, inspecting the most elaborate and promising attempt to secure the epistemic role of basic inferences, namely Timothy Williamson’s safety theory of knowledge. We argue that while it indeed has the merit of allowing basic inferences such as modus ponens to extend knowledge, Williamson’s theory faces formidable difficulties. These difficulties, moreover, arise from the very feature responsible for its virtue: the infallibilism of knowledge.
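
To make the aggregation worry concrete, here is a minimal numeric sketch (not drawn from the paper; the probabilities are invented) of how individually negligible epistemic risks add up across a multi-premise inference:

```python
# Illustrative sketch (not from the paper): how small epistemic risks in
# individually well-supported premises aggregate across a multi-premise inference.
# Probabilities are made up for illustration.

premise_prob = 0.99          # each premise is highly probable
for n in (1, 10, 50, 100):
    conjunction_prob = premise_prob ** n   # assuming independence, for simplicity
    print(f"{n:>3} premises: P(conjunction) = {conjunction_prob:.3f}")

# With 100 premises the conjunction is only ~0.37 probable, so a closure
# principle that lets us "know" a conclusion drawn from all the premises
# together looks untenable once epistemic risk aggregates.
```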

The thesis of underdetermination presents a major obstacle to the epistemological claims of scientific realism. That thesis is regularly assumed in the philosophy of science, but is puzzlingly at odds with the actual history of science, in which empirically adequate theories are thin on the ground. We propose to advance a case for scientific realism which concentrates on the process of scientific reasoning rather than its theoretical products. Developing an account of causal–explanatory inference will make it easier to resist the thesis of underdetermination. For, if we are not restricted to inference to the best explanation only at the level of major theories, we will be able to acknowledge that there is a structure in data sets which imposes serious constraints on possible theoretical alternatives. We describe how Differential Inference, a form of inference based on contrastive explanation, can be used in order to generate causal hypotheses. We then go on to consider how experimental manipulation of differences can be used to achieve Difference Closure, thereby confirming claims of causal efficacy and also eliminating possible confounds. The model of Differential Inference outlined here shows at least one way in which it is possible to ‘reason from the phenomena’.

Order of information plays a crucial role in the process of updating beliefs across time. In fact, the presence of order effects makes a classical or Bayesian approach to inference difficult. As a result, the existing models of inference, such as the belief-adjustment model, merely provide an ad hoc explanation for these effects. We postulate a quantum inference model for order effects based on the axiomatic principles of quantum probability theory. The quantum inference model explains order effects by transforming a state vector with different sequences of operators for different orderings of information. We demonstrate this process by fitting the quantum model to data collected in a medical diagnostic task and a jury decision-making task. To further test the quantum inference model, a new jury decision-making experiment is developed. Using the results of this experiment, we compare the quantum inference model with two versions of the belief-adjustment model, the adding model and the averaging model. We show that both the quantum model and the adding model provide good fits to the data. To distinguish the quantum model from the adding model, we develop a new experiment involving extreme evidence. The results from this new experiment suggest that the adding model faces limitations when accounting for tasks involving extreme evidence, whereas the quantum inference model does not. Ultimately, we argue that the quantum model provides a more coherent account for order effects that was not possible before.
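
A hedged sketch of the general mechanism the abstract describes (not the authors' fitted model): because projection operators need not commute, updating a belief-state vector on two pieces of evidence in different orders yields different probabilities, which is how quantum models accommodate order effects. The state and angles below are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative sketch of the quantum idea (not the authors' fitted model):
# evidence items are represented by projectors; because projectors need not
# commute, updating on A then B differs from updating on B then A.

psi = np.array([1.0, 0.0])                      # initial belief state (unit vector)

def projector(theta):
    """Projector onto the ray at angle theta in a 2-d real Hilbert space."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

P_A = projector(np.pi / 5)                      # "evidence A" subspace (angle is arbitrary)
P_B = projector(np.pi / 2.5)                    # "evidence B" subspace

def prob_after(state, *projs):
    """Probability of surviving successive projections (Lüders-style updating)."""
    for P in projs:
        state = P @ state
    return float(state @ state)

print("A then B:", prob_after(psi, P_A, P_B))   # order A, B
print("B then A:", prob_after(psi, P_B, P_A))   # order B, A -> a different number
```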

This article generalizes the explanationist account of inference to the best explanation (IBE). It draws a clear distinction between IBE and abduction and presents abduction as the first step of IBE. The second step amounts to the evaluation of explanatory power, which consists in the degree of explanatory virtues that a hypothesis exhibits. Moreover, even though coherence is the most often cited explanatory virtue, on pain of circularity, it should not be treated as one of the explanatory virtues. Rather, coherence should be equated with explanatory power and considered to be derivable from the other explanatory virtues: unification, explanatory depth and simplicity.

The underconsideration argument against inference to the best explanation and scientific realism holds that scientists are not warranted in inferring that the best theory is true, because scientists only ever conceive of a small handful of theories at one time, and as a result, they may not have considered a true theory. However, antirealists have not developed a detailed alternative account of why explanatory inference nevertheless appears so central to scientific practice. In this paper, I provide new defences against some recent objections to the underconsideration argument, while also developing an account of explanatory inference that both survives these criticisms and does not entail realism.

We must rethink our assessment of Hume’s theory of probabilistic inference. Hume scholars have traditionally dismissed his naturalistic explanation of how we make inferences under conditions of uncertainty; however, psychological experiments and computer models from cognitive science provide substantial support for Hume’s account. Hume’s theory of probabilistic inference is far from obsolete or outdated; on the contrary, it stands at the leading edge of our contemporary science of the mind.

Kirsten Besheer has recently considered Descartes’ doubting appropriately in the context of his physiological theories in the spirit of recent important re-appraisals of his natural philosophy. However, Besheer does not address the notorious indubitability and its source that Descartes claims to have discovered. David Cunning has remarked that Descartes’ insistence on the indubitability of his existence presents “an intractable problem of interpretation” in the light of passages that suggest his existence is “just as dubitable as anything else”. However, although the cogito argument is widely thought to be central to the force of Descartes’ indubitability, for his part, Cunning does not consider its relevance and force. Accordingly, this article is concerned with the cogito argument and the question central to Hintikka’s seminal contribution, described by Cottingham as “Perhaps the most debated question,” namely, whether or not the cogito can be construed as a logical inference. Clearly, an inferential account has the potential to explain the certainty of Descartes’ conclusion that he exists. Recently, Sarkar offers what he characterizes as “novel and fairly conclusive reasons why the cogito cannot be construed as an argument,” asserting “the discovery of the cogito can only be an intuition not a deduction.” Obviously, it would greatly support the opposing inferential construal if a remotely plausible logical argument could be proposed. Toward this end, I defend the virtues of my ‘Diagonal’ account of Descartes’ cogito. Above all, I show how my analysis meets the requirement that any satisfactory solution to the problem of the cogito would reconcile Descartes’ claim that the cogito is a certain inference with his claim that it is an intuitive kind of knowledge. Through a critical discussion of analyses such as that of Gallois, I show that it is possible to provide a textually faithful analysis that permits seeing the cogito as both inference and intuition because it may be seen as an exercise in the mathematical method of Analysis. Above all, as Feldman requires, I show that the Diagonal account is not only textually elegant, but permits crediting Descartes with a worthy insight, thereby resolving the tension between what Howell has termed the Humean and Cartesian problems, namely, the elusiveness and the certainty of the self.

Evolutionary psychology is a science in the making, working toward the goal of showing how psychological adaptation underlies much human behavior. The knee-jerk reaction that sociobiology is unscientific because it tells just-so stories has become a common charge against evolutionary psychology as well. My main positive thesis is that inference to the best explanation is a proper method for evolutionary analyses, and it supplies a new perspective on the issues raised in Schlinger's (1996) just-so story critique. My main negative thesis is that, like those of many nonevolutionist critics, Schlinger's objections arise from misunderstandings of the evolutionary approach. Evolutionary psychology has progressed beyond telling just-so stories. It has found a host of ingenious special techniques to test hypotheses about the adaptive significance and proximate mechanisms of behavior. Naturalistic data using the comparative method combined with controlled tests using statistical analyses of data provide good evidence for a variety of hypotheses about behavioral control mechanisms — whether in nonhumans or in humans. For instance, the work of Gangestad and Thornhill on evolved mate preferences and fluctuating asymmetry of body type (FA) is a model of success. As the quantity and quality of evidence increase, we are entitled to regard such evolutionary hypotheses not just as preferable but as true. Such studies combine to show that the best explanation of the psychic unity of humankind — common patterns across societies, history, and cultures exposed by evolutionists — is the gendered, adapted, evolved species-typical design of the mind.

This paper considers an application of work on probabilistic measures of coherence to inference to the best explanation (IBE). Rather than considering information reported from different sources, as is usually the case when discussing coherence measures, the approach adopted here is to use a coherence measure to rank competing explanations in terms of their coherence with a piece of evidence. By adopting such an approach IBE can be made more precise and so a major objection to this mode of reasoning can be addressed. Advantages of the coherence-based approach are pointed out by comparing it with several other ways to characterize ‘best explanation’ and showing that it takes into account their insights while overcoming some of their problems. The consequences of adopting this approach for IBE are discussed in the context of recent discussions about the relationship between IBE and Bayesianism.
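
As a rough illustration of the strategy (not the paper's own machinery), one can rank competing hypotheses by a probabilistic coherence measure with the evidence; the sketch below uses the Shogenji-style ratio C(h, e) = P(h and e) / (P(h) * P(e)), one common measure from the coherence literature, with invented probabilities.

```python
# Illustrative sketch, not the paper's own machinery: ranking hypotheses by a
# probabilistic coherence measure with a piece of evidence e. The Shogenji-style
# ratio below is one common measure; all numbers are invented for illustration.

def shogenji(p_h, p_e, p_h_and_e):
    return p_h_and_e / (p_h * p_e)

evidence_prob = 0.3
hypotheses = {
    # name: (prior P(h), joint P(h & e))
    "h1": (0.2, 0.12),
    "h2": (0.4, 0.15),
    "h3": (0.1, 0.02),
}

ranked = sorted(
    ((name, shogenji(p_h, evidence_prob, p_he)) for name, (p_h, p_he) in hypotheses.items()),
    key=lambda pair: pair[1], reverse=True,
)
for name, score in ranked:
    print(f"{name}: coherence with e = {score:.2f}")
# On these made-up numbers h1 coheres best with the evidence (score 2.0),
# so a coherence-based IBE would select h1 as the best explanation.
```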

In this paper the informativeness account of assertion (Pagin in Assertion. Oxford University Press, Oxford, 2011) is extended to account for inference. I characterize the conclusion of an inference as asserted conditionally on the assertion of the premises. This gives a notion of conditional assertion (distinct from the standard notion related to the affirmation of conditionals). Validity and logical validity of an inference are characterized in terms of the application of a method that preserves informativeness, and contrasted with consequence and logical consequence, which are defined in terms of truth preservation. The proposed account is compared with that of Prawitz (Logica yearbook 2008, pp. 175-192. College Publications, London, 2009).

Original and penetrating, this book investigates the notion of inference from signs, which played a central role in ancient philosophical and scientific method. It examines an important chapter in ancient epistemology: the debates about the nature of evidence and of the inferences based on it--or signs and sign-inferences as they were called in antiquity. As the first comprehensive treatment of this topic, it fills an important gap in the histories of science and philosophy.

Change, Choice and Inference develops logical theories that are necessary both for the understanding of adaptable human reasoning and for the design of intelligent systems. The book shows that reasoning processes - the drawing of inferences and changing one's beliefs - can be viewed as belonging to the realm of practical reason by embedding logical theories into the broader context of the theory of rational choice. The book unifies lively and significant strands of research in logic, philosophy, economics and artificial intelligence. It elaborates on the relevant theories and provides a mathematically precise foundation for the thesis that large parts of theoretical reason can be subsumed under practical reason.

This monograph provides a new account of justified inference as a cognitive process. In contrast to the prevailing tradition in epistemology, the focus is on low-level inferences, i.e., those inferences that we are usually not consciously aware of and that we share with the cat nearby, which infers that the bird she sees picking grains from the dirt is able to fly. Presumably, such inferences are not generated by explicit logical reasoning, but logical methods can be used to describe and analyze such inferences. Part 1 gives a purely system-theoretic explication of belief and inference. Part 2 adds a reliabilist theory of justification for inference, with a qualitative notion of reliability being employed. Part 3 recalls and extends various systems of deductive and nonmonotonic logic and thereby explains the semantics of absolute and high reliability. In Part 4 it is proven that qualitative neural networks are able to draw justified deductive and nonmonotonic inferences on the basis of distributed representations. This is derived from a soundness/completeness theorem with regard to cognitive semantics of nonmonotonic reasoning. The appendix extends the theory both logically and ontologically, and relates it to A. Goldman's reliability account of justified belief. This text will be of interest to epistemologists and logicians, to all computer scientists who work on nonmonotonic reasoning and neural networks, and to cognitive scientists.

We reflect on lessons that the lottery and preface paradoxes provide for the logic of uncertain inference. One of these lessons is the unreliability of the rule of conjunction of conclusions in such contexts, whether the inferences are probabilistic or qualitative; this leads us to an examination of consequence relations without that rule, the study of other rules that may nevertheless be satisfied in its absence, and a partial rehabilitation of conjunction as a ‘lossy’ rule. A second lesson is the possibility of rational inconsistent belief; this leads us to formulate criteria for deciding when an inconsistent set of beliefs may reasonably be retained.
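
A minimal worked example (numbers invented, not from the paper) of why conjoining conclusions is unreliable in lottery-style cases: each conclusion clears a high acceptance threshold, yet the conjunction of all of them does not.

```python
# Illustrative sketch of the lottery-style failure of conjoining conclusions:
# for each ticket i, "ticket i loses" clears a high acceptance threshold, yet
# the conjunction "every ticket loses" is certainly false. Numbers are invented.

tickets = 1000
threshold = 0.95

p_ticket_i_loses = 1 - 1 / tickets   # 0.999: each individual conclusion is acceptable
p_all_lose = 0.0                     # some ticket must win, so the conjunction fails

print(p_ticket_i_loses >= threshold)   # True for every single conclusion
print(p_all_lose >= threshold)         # False for the conjunction of conclusions
```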

What is an inference? Logicians and philosophers have proposed various conceptions of inference. I shall first highlight seven features that contribute to distinguish these conceptions. I shall then compare three conceptions to see which of them best explains the special force that compels us to accept the conclusion of an inference, if we accept its premises.

The classical theory of semantic information (ESI), as formulated by Bar-Hillel and Carnap in 1952, does not give a satisfactory account of the problem of what information, if any, analytically and/or logically true sentences have to offer. According to ESI, analytically true sentences lack informational content, and any two analytically equivalent sentences convey the same piece of information. This problem is connected with Cohen and Nagel's paradox of inference: Since the conclusion of a valid argument is contained in the premises, it fails to provide any novel information. Again, ESI does not give a satisfactory account of the paradox. In this paper I propose a solution based on the distinction between empirical information and analytic information. Declarative sentences are informative due to their meanings. I construe meanings as structured hyperintensions, modelled in Transparent Intensional Logic as so-called constructions. These are abstract, algorithmically structured procedures whose constituents are sub-procedures. My main thesis is that constructions are the vehicles of information. Hence, although analytically true sentences provide no empirical information about the state of the world, they convey analytic information, in the shape of constructions prescribing how to arrive at the truths in question. Moreover, even though analytically equivalent sentences have equal empirical content, their analytic content may be different. Finally, though the empirical content of the conclusion of a valid argument is contained in the premises, its analytic content may be different from the analytic content of the premises and thus convey a new piece of information.

This study aims to understand scientific inference for the evolutionary procedure of Continental Drift based on abductive inference, which is important for creative inference and scientific discovery during problem solving. We present the following two research problems: (1) we suggest a scientific inference procedure as well as various strategies and a criterion for choosing hypotheses over other competing or previous hypotheses; aspects of this procedure include puzzling observation, abduction, retroduction, updating, deduction, induction, and recycle; and (2) we analyze the “theory of continental drift” discovery, called the Earth science revolution, using our multistage inference procedure. Wegener’s Continental Drift hypothesis had an impact comparable to the revolution caused by Darwin’s theory of evolution in biology. Finally, the suggested inquiry inference model can provide us with a more consistent view of science and promote a deeper understanding of scientific concepts.

This article discusses how inference to the best explanation (IBE) can be justified as a practical meta-argument. It is, firstly, justified as a practical argument insofar as accepting the best explanation as true can be shown to further a specific aim. And because this aim is a discursive one which proponents can rationally pursue in—and relative to—a complex controversy, namely maximising the robustness of one’s position, IBE can be conceived, secondly, as a meta-argument. My analysis thus bears a certain analogy to Sellars’ well-known justification of inductive reasoning (Sellars, In: Essays in honour of Carl G. Hempel, 1969); it is based on recently developed theories of complex argumentation (Betz, In: Theorie dialektischer Strukturen, 2010a).

Inference versus consequence, an invited lecture at the LOGICA 1997 conference at Castle Liblice, was part of a series of articles for which I did research during a Stockholm sabbatical in the autumn of 1995. The article seems to have been fairly effective in getting its point across and addresses a topic highly germane to the Uppsala workshop. Owing to its appearance in the LOGICA Yearbook 1997, Filosofia Publishers, Prague, 1998, it has been rather inaccessible. Accordingly it is republished here with only bibliographical changes and an afterword.

Do accounts of scientific theory formation and revision have implications for theories of everyday cognition? We maintain that failing to distinguish between importantly different types of theories of scientific inference has led to fundamental misunderstandings of the relationship between science and everyday cognition. In this article, we focus on one influential manifestation of this phenomenon which is found in Fodor's well-known critique of theories of cognitive architecture. We argue that in developing his critique, Fodor confounds a variety of distinct claims about the holistic nature of scientific inference. Having done so, we outline more promising relations that hold between theories of scientific inference and ordinary cognition.

Our main aim in this paper is to contribute towards a better understanding of the epistemology of absence-based inferences. Many absence-based inferences are classified as fallacies. There are exceptions, however. We investigate what features make absence-based inferences epistemically good or reliable. In Section 2 we present Sanford Goldberg’s account of the reliability of absence-based inference, introducing the central notion of epistemic coverage. In Section 3 we approach the idea of epistemic coverage through a comparison of alethic and evidential principles. The Equivalence Schema, a well-known alethic principle, says that it is true that $p$ if and only if $p$. We take epistemic coverage to underwrite a suitably qualified evidential analogue of the Equivalence Schema: for a high proportion of values of $p$, subject $S$ has evidence that $p$ due to her reliance on source $S^{*}$ if and only if $p$. We show how this evidential version of the Equivalence Schema suffices for the reliability of certain absence-based inferences. Section 4 is dedicated to exploring consequences of the Evidential Equivalence Schema. The slogan ‘absence of evidence is evidence of absence’ has received a lot of bad press. More elaborately, what has received a lot of bad press is something like the following idea: absence of evidence sufficiently good to justify belief in $p$ is evidence sufficiently good to justify belief in $\sim p$. A striking consequence of the Evidential Equivalence Schema is that absence of evidence sufficiently good to justify belief in $p$ is evidence sufficiently good to justify belief in $\sim p$. We establish this claim in Section 4 and show how this supports the reliability of an additional type of absence-based inference. Section 4 immediately raises the following question: how can we make philosophically good sense of the idea that absence of evidence is evidence of absence? We address this question in Section 5. Section 6 contains some summary remarks.
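
A small Monte Carlo sketch, under invented coverage and base-rate figures, of the idea behind epistemic coverage: if a source would report a high proportion of the relevant truths, then its silence is good (though fallible) evidence of absence. This is an illustration only, not the paper's formal argument.

```python
import random

# Monte Carlo sketch (not from the paper) of epistemic coverage: if a source
# would report a high proportion of the relevant truths, the absence of a
# report supports believing not-p. Coverage and base-rate numbers are invented.

random.seed(0)
trials = 100_000
base_rate = 0.3      # how often the relevant event p actually occurs
coverage = 0.95      # probability the source reports p, given that p occurs
false_report = 0.01  # probability of a spurious report when p does not occur

no_report_and_not_p = 0
no_report = 0
for _ in range(trials):
    p = random.random() < base_rate
    reported = random.random() < (coverage if p else false_report)
    if not reported:
        no_report += 1
        if not p:
            no_report_and_not_p += 1

print("P(not-p | no report) is approximately", no_report_and_not_p / no_report)
# With high coverage this conditional probability is close to 1, so silence
# from the source supports not-p: absence of evidence as evidence of absence.
```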

Biological and Cultural Bases of Human Inference addresses the interface between social science and cognitive science. In this volume, Viale and colleagues explore which human social cognitive powers evolve naturally and which are influenced by culture. Updating the debate between innatism and culturalism regarding human cognitive abilities, this book represents a much-needed articulation of these diverse bases of cognition. Chapters throughout the book provide social science and philosophical reflections, in addition to the perspective of evolutionary theory and the central assumptions of cognitive science. The overall approach of the text is based on three complementary levels: adult performance, cognitive development, and cultural history and prehistory. Scholars from several disciplines contribute to this volume, including researchers in cognitive, developmental, social and evolutionary psychology, neuropsychology, cognitive anthropology, epistemology, and philosophy of mind. This contemporary, important collection appeals to researchers in the fields of cognitive, social, developmental, and evolutionary psychology and will prove valuable to researchers in the decision sciences.

Performance on the Wason selection task varies with content. This has been taken to demonstrate that there are different cognitive modules for dealing with different conceptual domains. This implication is only legitimate if our underlying cognitive architecture is formal. A non-formal system can explain content-sensitive inference without appeal to independent inferential modules.

The aim of this book is to present the fundamental theoretical results concerning inference rules in deductive formal systems. Primary attention is focused on: admissible or permissible inference rules; the derivability of the admissible inference rules; the structural completeness of logics; and the bases for admissible and valid inference rules. There is particular emphasis on propositional non-standard logics (primarily superintuitionistic and modal logics) but general logical consequence relations and classical first-order theories are also considered. The book is basically self-contained and special attention has been paid to presenting the material in a convenient manner for the reader. Proofs of results, many of which are not readily available elsewhere, are also included. The book is written at a level appropriate for first-year graduate students in mathematics or computer science. Although some knowledge of elementary logic and universal algebra is necessary, the first chapter includes all the results from universal algebra and logic that the reader needs. For graduate students in mathematics and computer science the book is an excellent textbook.

Much of the recent work on the epistemology of causation has centered on two assumptions, known as the Causal Markov Condition and the Causal Faithfulness Condition. Philosophical discussions of the latter condition have exhibited situations in which it is likely to fail. This paper studies the Causal Faithfulness Condition as a conjunction of weaker conditions. We show that some of the weaker conjuncts can be empirically tested, and hence do not have to be assumed a priori. Our results lead to two methodologically significant observations: (1) some common types of counterexamples to the Faithfulness condition constitute objections only to the empirically testable part of the condition; and (2) some common defenses of the Faithfulness condition do not provide justification or evidence for the testable parts of the condition. It is thus worthwhile to study the possibility of reliable causal inference under weaker Faithfulness conditions. As it turns out, the modification needed to make standard procedures work under a weaker version of the Faithfulness condition also has the practical effect of making them more robust when the standard Faithfulness condition actually holds. This, we argue, is related to the possibility of controlling error probabilities with finite sample size (“uniform consistency”) in causal inference.
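
To see the kind of failure at issue, here is an illustrative simulation (not the paper's own procedure) of a classic faithfulness violation: two causal paths from X to Y cancel, so X and Y look independent even though X causes Y. All coefficients are invented.

```python
import numpy as np

# Illustrative sketch (not the paper's own procedure): a standard way the Causal
# Faithfulness Condition fails. X causes Y directly and via Z, but the path
# coefficients are tuned so that the two paths cancel, leaving X and Y (almost)
# uncorrelated despite a genuine causal connection. All coefficients are invented.

rng = np.random.default_rng(0)
n = 200_000

x = rng.normal(size=n)
z = 1.0 * x + rng.normal(size=n)             # X -> Z
y = 0.5 * x - 0.5 * z + rng.normal(size=n)   # X -> Y and Z -> Y, tuned to cancel

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing out c."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

print("corr(X, Y)     =", round(np.corrcoef(x, y)[0, 1], 3))  # ~0.0: an "unfaithful" independence
print("corr(X, Y | Z) =", round(partial_corr(x, y, z), 3))    # clearly nonzero: dependence reappears
```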

In informal terms, abductive reasoning involves inferring the best or most plausible explanation from a given set of facts or data. It is a common occurrence in everyday life and crops up in such diverse places as medical diagnosis, scientific theory formation, accident investigation, language understanding, and jury deliberation. In recent years, it has become a popular and fruitful topic in artificial intelligence research. This volume breaks new ground in the scientific, philosophical, and technological study of abduction. It presents new ideas about inferential and information-processing foundations for knowledge and certainty. The authors argue that knowledge arises from experience by processes of abductive inference, in contrast to the view that it arises non-inferentially, or that deduction and inductive generalization are enough to account for knowledge. Much AI research is hypothetical, so the importance of this book is that it reports key discoveries about abduction that have been made as a result of designing, building, testing, and analyzing actual working knowledge-based systems for medical diagnosis and other abductive tasks. The book tells the story of six generations of increasingly sophisticated generic abduction machines, RED-1, RED-2, PEIRCE, MDX2, TIPS, QUAWDS, and the discovery of reasoning strategies that make it computationally feasible to form well-justified composite explanatory hypotheses, despite the threat of combinatorial explosion. The final chapter argues that perception is logically abductive and presents a layered-abduction computational model of perceptual information processing. This book will be of great interest to researchers in AI, cognitive science, and philosophy of science.

In the svārthānumāna chapter of his Pramāṇavārttika, the Buddhist philosopher Dharmakīrti presented a defense of his claim that legitimate inference must rest on a metaphysical basis if it is to be immune from the risks ordinarily involved in inducing general principles from a finite number of observations. Even if one repeatedly observes that x occurs with y and never observes y in the absence of x, there is no guarantee, on the basis of observation alone, that one will never observe y in the absence of x at some point in the future. To provide such a guarantee, claims Dharmakīrti, one must know that there is a causal connection between x and y such that there is no possibility of y occurring in the absence of x. In the course of defending this central claim, Dharmakīrti ponders how one can know that there is a causal relationship of the kind necessary to guarantee a proposition of the form “Every y occurs with an x.” He also dismisses an interpretation of his predecessor Dignāga whereby Dignāga would be claiming that non-observation of y in the absence of x is sufficient to warrant the claim that no y occurs without x. The present article consists of a translation of kārikās 11–38 of Pramānavārttikam, svārthānumānaparicchedaḥ along with Dharmakīrti’s own prose commentary. The translators have also provided an English commentary, which includes a detailed introduction to the central issues in the translated text and their history in the literature before Dharmakīrti.

We look at two fundamental logical processes, often intertwined in planning and problem solving: inference and update. Inference is an internal process with which we uncover what is implicit in the information we already have. Update, on the other hand, is produced by external communication, usually in the form of announcements and in general in the form of observations, giving us information that might not have been available (even implicitly) before. Both processes have received attention from the logic community, usually separately. In this work, we develop a logical language that allows us to describe them together. We present syntax, semantics and a complete axiom system; we discuss similarities and differences with other approaches and mention how the work can be extended.
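
A toy possible-worlds sketch, not the paper's formal system, of the contrast being drawn: update eliminates worlds when external information arrives, while inference only uncovers what already holds throughout the worlds that remain.

```python
# Toy possible-worlds sketch (not the paper's own system) contrasting the two
# processes: update eliminates worlds on receiving external information, while
# inference only uncovers what already holds throughout the remaining worlds.

# Worlds are dicts assigning truth values to atomic sentences.
worlds = [
    {"p": True,  "q": True},
    {"p": True,  "q": False},
    {"p": False, "q": True},
]

def infer(worlds, sentence):
    """Inference: check whether the sentence is implicit in the current
    information state, i.e. true in every world compatible with it."""
    return all(sentence(w) for w in worlds)

def update(worlds, observation):
    """Update: external information eliminates the worlds it rules out."""
    return [w for w in worlds if observation(w)]

print(infer(worlds, lambda w: w["q"]))            # False: q is not yet implied
worlds = update(worlds, lambda w: w["p"])         # announcement/observation that p
print(infer(worlds, lambda w: w["p"] or w["q"]))  # True: now implicit, uncovered by inference
```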

The main objective of this paper is to sketch a unifying conceptual and formal framework for inference that is able to explain various proof techniques without implicitly changing the underlying notion of inference rules. We base this framework upon the so-called two-dimensional, i.e., deduction to deduction, account of inference introduced by Tichý in his seminal work The Foundations of Frege’s Logic (1988). Consequently, it will be argued that sequent calculus provides a suitable basis for such a general concept of inference and therefore should not be seen just as a technical tool, but as a philosophically well-founded system that can rival natural deduction in terms of its “naturalness”.

Recent advances in the cognitive psychology of inference have been of great interest to philosophers of science. The present paper reviews one such area, namely studies based upon Wason's 4-card selection task. It is argued that interpretation of the results of the experiments is complex, because a variety of inference strategies may be used by subjects to select evidence needed to confirm or disconfirm a hypothesis. Empirical evidence suggests that which strategy is used depends in part on the semantic, syntactic, and pragmatic context of the inference problem at hand. Since the factors of importance are also present in real-world science, and similarly complicate its interpretation, the selection task, though it does not present a quick fix, represents a kind of microcosm of great utility for the understanding of science. Several studies which have examined selection strategies in more complex problem-solving environments are also reviewed, in an attempt to determine the limits of generalizability of the simpler selection tasks. Certain interpretational misuses of laboratory research are described, and a claim is made that the issue of whether or not scientists are rational should be approached by philosophers and psychologists with appropriate respect for the complexities of the issue.

This paper offers an interpretation of multiple-conclusion sequents as a kind of meta-inference rule: just as single-conclusion sequents represent inferences from sentences to sentences, so multiple-conclusion sequents represent a certain kind of inference from single-conclusion sequents to single-conclusion sequents. The semantics renders sound and complete the standard structural rules of reflexivity, monotonicity (or thinning), and transitivity (or cut). The paper is not the first one to attempt to account for multiple-conclusion sequents without invoking notions of truth or falsity—but unlike earlier such efforts, which have typically helped themselves to primitive notions of both acceptance and rejection, the present one makes do with the former alone. For technical reasons, the treatment is limited to sequents with non-empty succedents.

This paper develops an inference system for natural language within the ‘Natural Logic’ paradigm as advocated by van Benthem (1997), Sánchez (1991) and others. The system that we propose is based on the Lambek calculus and works directly on the Curry-Howard counterparts for syntactic representations of natural language, with no intermediate translation to logical formulae. The Lambek-based system we propose extends the system by Fyodorov et al. (2003), which is based on the Ajdukiewicz/Bar-Hillel (AB) calculus (Bar-Hillel, 1964). This enables the system to deal with new kinds of inferences, involving relative clauses, non-constituent coordination, and meaning postulates that involve complex expressions. Basing the system on the Lambek calculus leads to problems with non-normalized proof terms, which are treated by using normalization axioms.

Edgar Allan Poe’s standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling ‘prose-poem’ Eureka (1848), in particular, has sometimes been scrutinized for anticipations of later scientific developments. By contrast, the present paper argues that it should be understood as a contribution to the raging debates about scientific methodology at the time. This methodological interest, which is echoed in Poe’s ‘tales of ratiocination’, gives rise to a proposed new mode of—broadly abductive—inference, which Poe attributes to the hybrid figure of the ‘poet-mathematician’. Without creative imagination and intuition, Science would necessarily remain incomplete, even by its own standards. This concern with imaginative (abductive) inference ties in nicely with his coherentism, which grants pride of place to the twin virtues of Simplicity and Consistency, which must constrain imagination lest it degenerate into mere fancy.

This paper is an attempt to give a general explanation of pragmatic aspects of linguistic negation. After a brief survey of classical accounts of negation within pragmatic theories (such as speech act theory, argumentation theory and polyphonic theory), the main pragmatic uses of negation (illocutionary negation, external negation, lowering and majoring negation) are discussed within relevance theory. The question of the relevance of negative utterances is raised, and a general inferential schema (based on the so-called invited inference) is proposed and tested for the main uses of negation discussed in the paper.

Deep inference is a natural generalisation of the one-sided sequent calculus where rules are allowed to apply deeply inside formulas, much like rewrite rules in term rewriting. This freedom in applying inference rules makes it possible to express logical systems that are difficult or impossible to express in the cut-free sequent calculus, and it also allows for a more fine-grained analysis of derivations than the sequent calculus. However, the same freedom also makes it harder to carry out this analysis, in particular it is harder to design cut elimination procedures. In this paper we present a cut elimination procedure for a deep inference system for classical predicate logic. As a consequence we derive Herbrand's Theorem, which we express as a factorisation of derivations.

We introduce a graphical framework for Bayesian inference that is sufficiently general to accommodate not just the standard case but also recent proposals for a theory of quantum Bayesian inference wherein one considers density operators rather than probability distributions as representative of degrees of belief. The diagrammatic framework is stated in the graphical language of symmetric monoidal categories and of compact structures and Frobenius structures therein, in which Bayesian inversion boils down to transposition with respect to an appropriate compact structure. We characterize classical Bayesian inference in terms of a graphical property and demonstrate that our approach eliminates some purely conventional elements that appear in common representations thereof, such as whether degrees of belief are represented by probabilities or entropic quantities. We also introduce a quantum-like calculus wherein the Frobenius structure is noncommutative and show that it can accommodate Leifer's calculus of 'conditional density operators'. The notion of conditional independence is also generalized to our graphical setting and we make some preliminary connections to the theory of Bayesian networks. Finally, we demonstrate how to construct a graphical Bayesian calculus within any dagger compact category.
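
For orientation, the ordinary (non-categorical) operation that the graphical calculus generalizes can be sketched with matrices: given a prior over causes and a stochastic matrix of likelihoods, Bayesian inversion produces the conditionals in the opposite direction. The numbers are invented and the sketch does not reproduce the paper's monoidal-category formalism.

```python
import numpy as np

# Ordinary (non-categorical) sketch of Bayesian inversion, the operation the
# graphical calculus is said to generalize: given a prior over causes and a
# stochastic matrix of likelihoods P(effect | cause), build the joint and read
# off the "inverted" conditionals P(cause | effect). Numbers are invented.

prior = np.array([0.7, 0.3])                  # P(cause)
likelihood = np.array([[0.9, 0.1],            # rows: causes, cols: effects
                       [0.2, 0.8]])           # P(effect | cause)

joint = likelihood * prior[:, None]           # P(cause, effect)
evidence = joint.sum(axis=0)                  # P(effect)
posterior = (joint / evidence).T              # P(cause | effect), rows: effects

print("P(effect)         =", evidence)
print("P(cause | effect) =\n", posterior)
```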

A certain type of inference rules in modal logics, generalizing Gabbay's Irreflexivity rule, is introduced and some general completeness results about modal logics axiomatized with such rules are proved.

Much research on cognitive development focuses either on early-emerging domain-specific knowledge or domain-general learning mechanisms. However, little research examines how these sources of knowledge interact. Previous research suggests that young infants can make inferences from samples to populations (Xu & Garcia, 2008) and 11- to 12.5-month-old infants can integrate psychological and physical knowledge in probabilistic reasoning (Teglas, Girotto, Gonzalez, & Bonatti, 2007; Xu & Denison, 2009). Here, we ask whether infants can integrate a physical constraint of immobility into a statistical inference mechanism. Results from three experiments suggest that, first, infants were able to use domain-specific knowledge to override statistical information, reasoning that sometimes a physical constraint is more informative than probabilistic information. Second, we provide the first evidence that infants are capable of applying domain-specific knowledge in probabilistic reasoning by using a physical constraint to exclude one set of objects while computing probabilities over the remaining sets.
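
A simplified arithmetic sketch of the kind of computation at issue (it does not reproduce the experimental design; the counts are invented): the probability of an observed draw should be computed over the movable objects only, once the physically constrained objects are excluded.

```python
from math import comb

# Simplified arithmetic sketch (not the experimental design itself): if some
# objects in a box are physically constrained and cannot be sampled, the
# probability of an observed draw should be computed over the movable objects
# only. Counts are invented.

def p_all_red(red, other, draws):
    """Probability that a random sample of `draws` objects is all red."""
    total = red + other
    return comb(red, draws) / comb(total, draws)

# Population: 10 red and 40 yellow objects; suppose the 40 yellow ones are
# visibly stuck in place and cannot leave the box.
print("ignoring the constraint:  ", p_all_red(10, 40, 3))  # ~0.006: draw looks surprising
print("respecting the constraint:", p_all_red(10, 0, 3))   # 1.0: draw is unsurprising
```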

This article discusses the “Argument from Inference” raised against the view that definite descriptions are semantically referring expressions. According to this argument, the indicated view is inadequate since it evaluates some invalid inferences with definite descriptions as “valid” and vice versa. I argue that the Argument from Inference is basically wrong. Firstly, it is crucially based on the assumption that a proponent of the view that definite descriptions are referring expressions conceives them as directly referring terms, i.e., the terms which contribute their referents into the semantic content of the sentences in which they occur. However, the framework of direct reference is not essential to the idea that descriptions might have semantic referential interpretation. Secondly, the Argument from Inference, if correct, suffices to establish an overgeneralized conclusion that even paradigmatically referring terms, like proper names, cannot be semantically referential. This fact indicates that the argument is flawed. In the final part of this article, I briefly consider what the source of the problem with the Argument from Inference might be.

Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a single novel stimulus, and for stimuli that can be represented as points in a continuous metric psychological space. Here we recast Shepard's theory in a more general Bayesian framework and show how this naturally extends his approach to the more realistic situation of generalizing from multiple consequential stimuli with arbitrary representational structure. Our framework also subsumes a version of Tversky's set-theoretic model of similarity, which is conventionally thought of as the primary alternative to Shepard's continuous metric space model of similarity and generalization. This unification allows us not only to draw deep parallels between the set-theoretic and spatial approaches, but also to significantly advance the explanatory power of set-theoretic models. Key Words: additive clustering; Bayesian inference; categorization; concept learning; contrast model; features; generalization; psychological space; similarity.
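
A simplified sketch of the two ideas summarized above, with an invented hypothesis space: (1) Shepard's exponential generalization gradient, and (2) one standard way to implement the Bayesian extension to multiple examples, weighting hypotheses by a size principle so that tightly clustered examples sharpen generalization.

```python
import numpy as np

# Simplified sketch (not the paper's full model). (1) Shepard's law: the
# probability of generalizing decays exponentially with psychological distance.
# (2) A Bayesian extension: with several consequential examples, hypotheses
# (here: intervals containing the examples) are weighted by a size principle,
# P(examples | h) proportional to (1/size(h))^n. Numbers are invented.

def shepard(distance, scale=1.0):
    return np.exp(-distance / scale)

print("one example, gradient:", [round(shepard(d), 2) for d in (0.0, 0.5, 1.0, 2.0)])

examples = [2.0, 2.4, 2.1]                    # observed consequential stimuli
hypotheses = [(lo, lo + w) for lo in np.arange(0, 4, 0.5) for w in (0.5, 1, 2, 4)]

def posterior(hyps, xs):
    weights = []
    for lo, hi in hyps:
        if all(lo <= x <= hi for x in xs):
            weights.append((1.0 / (hi - lo)) ** len(xs))   # size principle
        else:
            weights.append(0.0)
    w = np.array(weights)
    return w / w.sum()

post = posterior(hypotheses, examples)
p_general = sum(p for (lo, hi), p in zip(hypotheses, post) if lo <= 3.0 <= hi)
print("P(generalize to 3.0 | three tightly clustered examples) =", round(p_general, 2))
```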

The three main approaches in statistical inference—classical statistics, Bayesian and likelihood—are in current use in phylogeny research. The three approaches are discussed and compared, with particular emphasis on theoretical properties illustrated by simple thought-experiments. The methods are problematic on axiomatic grounds (classical statistics), extra-mathematical grounds relating to the use of a prior (Bayesian inference) or practical grounds (likelihood). This essay aims to increase understanding of these limits among those with an interest in phylogeny.
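
A toy contrast of the three approaches on a simple binomial problem (not a phylogenetic analysis; the data and hypotheses are invented) may help fix the differences the abstract alludes to.

```python
from math import comb

# Toy contrast of the three approaches on a binomial problem (not a phylogenetic
# analysis): 7 "successes" in 10 trials, hypotheses theta = 0.5 vs theta = 0.7.
# All numbers are invented for illustration.

n, k = 10, 7
def binom_lik(theta):
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# Likelihood approach: compare hypotheses directly by their likelihood ratio.
print("likelihood ratio L(0.7)/L(0.5) =", round(binom_lik(0.7) / binom_lik(0.5), 2))

# Classical approach: p-value for H0: theta = 0.5 over "at least as extreme" outcomes.
p_value = sum(comb(n, j) * 0.5**n for j in range(k, n + 1))
print("one-sided p-value under H0     =", round(p_value, 3))

# Bayesian approach: posterior probabilities, which require a prior over hypotheses.
prior = {0.5: 0.5, 0.7: 0.5}
marginal = sum(prior[t] * binom_lik(t) for t in prior)
print("posterior P(theta=0.7 | data)  =", round(prior[0.7] * binom_lik(0.7) / marginal, 2))
```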

Randomized controlled clinical trials play an important role in the development of new medical therapies. There is, however, an ethical issue surrounding the use of randomized treatment allocation when the patient is suffering from a life-threatening condition and requires immediate treatment. Such patients can only benefit from the treatment they actually receive and not from the alternative therapy, even if it ultimately proves to be superior. We discuss a novel way to analyse data from such clinical trials based on the use of the recently developed theory of imprecise probabilities. This work draws an explicit distinction between the related but nevertheless distinct questions of inference and decision in clinical trials. The traditional question of scientific interest asks 'Which treatment offers the greater chance of success?' and is the primary reason for conducting the clinical trial. The question of decision concerns the welfare of the patients in the clinical trial, asking whether the accumulated evidence favours one treatment over the other to such an extent that the next patient should decline randomization and instead express a preference for one treatment. Consideration of the decision question within the framework of imprecise probabilities leads to a mathematical definition of equipoise and a method for governing the randomization protocol of a clinical trial. This paper describes in detail the protocol for the conduct of clinical trials based on this new method of analysis, which is illustrated in a retrospective analysis of data from a clinical trial comparing the anti-emetic drugs ondansetron and droperidol in the treatment of postoperative nausea and vomiting. The proposed methodology is compared quantitatively using computer simulation studies with conventional clinical trial designs and is shown to maintain high statistical power with reduced sample sizes, at the expense of a high type I error rate that we argue is irrelevant in some specific circumstances. Particular emphasis is placed on describing the type of medical conditions and treatment comparisons where the new methodology is expected to provide the greatest benefit.
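
A generic sketch of the imprecise-probability idea (it does not reproduce the paper's actual protocol or its definition of equipoise; priors and data are invented): each arm gets a set of Beta priors, yielding lower and upper posterior expectations for its success probability, and the relation between the two intervals can inform whether randomization remains reasonable for the next patient.

```python
# Generic imprecise-probability sketch (not the paper's actual protocol): each
# treatment arm gets a set of Beta priors (an "imprecise" prior), yielding lower
# and upper posterior expectations for the success probability. Overlapping
# intervals can be read as a rough stand-in for equipoise; clearly separated
# intervals as evidence favouring one arm. Prior strength and data are invented.

def posterior_interval(successes, failures, s=2.0):
    """Bounds on E[p | data] for priors Beta(s*t, s*(1-t)) with t ranging over
    (0, 1): a simple imprecise beta-binomial model with prior strength s."""
    n = successes + failures
    lower = successes / (n + s)          # limit as t -> 0
    upper = (successes + s) / (n + s)    # limit as t -> 1
    return lower, upper

arm_A = posterior_interval(successes=18, failures=2)
arm_B = posterior_interval(successes=9, failures=11)
print("arm A success probability in", [round(x, 2) for x in arm_A])
print("arm B success probability in", [round(x, 2) for x in arm_B])
# Here the intervals do not overlap (A dominates B), so on this toy reading the
# next patient might reasonably decline randomization and prefer treatment A.
```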

In science, it sometimes occurs that an event is directly observed, and on other occasions that it is not directly observed but one can make the unambiguous inference that it has occurred. Is there any difference concerning the analysis of data arising from these two situations? In this note we show that there is such a difference in one case arising frequently in genetics. The difference derives from the fact that the ability to make the unambiguous inference arises only from a restricted form of data.

In his 1992 Aquinas Lecture at Marquette University, Ernan McMullin discusses whether there is a pattern of inference that particularly characterizes the sciences of nature. He pursues this theme both on a historical and a systematic level. There is a continuity of concern across the ages that separate the Greek inquiry into nature from our own vastly more complex scientific enterprise. But there is also discontinuity, the abandonment of earlier ideals as unworkable. The natural sciences involve many types of inference; three of these interlock in a special way to produce “retroductive inference,” the kind of complex inference that supports causal theory.