In the late 19th century, great changes in the theories of light and electricity came into direct conflict with certitude, the view that scientific knowledge is infallible. What, then, is the epistemic status of scientific theory? To resolve this issue, Duhem and Poincaré proposed images of fallible knowledge: Instrumentalism and Conventionalism, respectively. Only in 1919–1922, after Einstein's relativity was published, did he offer arguments to support Fallibilism, the view that certainty cannot be achieved in science. Though Einstein did not consider Duhem's Instrumentalism, he argued against Poincaré's Conventionalism. Hitherto, Einstein's Fallibilism, first presented in a little-known essay of 1919, has been left in the dark; recently, Howard has obscured its meaning. The essay was never translated into English. In my paper I provide a translation and attempt to shed light on Einstein's view and its context; I also direct attention to Einstein's images of philosophical opportunism in scientific practice.

We argue that abduction does not work in isolation from other inference mechanisms, and we illustrate this through an inference scheme designed to evaluate multiple hypotheses. We use game theory to relate the abductive system to actions that produce new information. To evaluate the implications of this approach, we have implemented the procedures for calculating the impact of new information in a computer model. Experiments with this model display a number of features of collective belief-revision leading to consensus-formation, such as the influence of bias and prejudice. The scheme of inferential calculations invokes a Peircean concept of ‘belief’ as the propensity to choose a particular course of action.
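
For illustration only, here is a minimal DeGroot-style consensus simulation. It is not the authors' abductive system or their game-theoretic model; the agents, weights, and update rule are assumptions chosen to show one way repeated belief revision can produce consensus, and how a prejudiced agent can bias the outcome.

```python
# Illustrative sketch only (not the authors' model): a DeGroot-style
# belief-revision process. Each agent holds a degree of belief in a
# hypothesis H and repeatedly revises it toward a weighted average of all
# agents' beliefs. Agent 0 is "prejudiced": it mostly trusts itself.

def revise(beliefs, weights):
    """One round of revision: each agent adopts its weighted average."""
    return [sum(w * b for w, b in zip(row, beliefs)) for row in weights]

beliefs = [0.9, 0.2, 0.5]      # initial credences in H
weights = [
    [0.8, 0.1, 0.1],           # the prejudiced agent under-weights others
    [1/3, 1/3, 1/3],
    [1/3, 1/3, 1/3],
]

for _ in range(50):
    beliefs = revise(beliefs, weights)

print(beliefs)  # all ~0.69: consensus, pulled toward agent 0's prior of 0.9
```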

Science uses its firmest conclusions to arrive at new ones, which may well completely destroy those previously firmest conclusions. The perceptive may notice that when the previously firmest conclusions are demolished we may remain in the dark, with no conclusion worth replacing them with. But only when we replace a conclusion with a firmer one can we speak of a bootstrap operation rather than of a refutation. Often, to conclude, the ad hoc nature of a fact-like statement is rooted in the theoretical background against which it is couched; given a different theoretical background, it falls fully into place, as the expression goes. If an observation report is at once a corollary of our scientific theory, then it is unproblematic. If it conflicts with our scientific theory, either we reject the theory or we try to find an excuse for not rejecting it. When, however, a small theory which integrates well into our theoretical background is attacked by a well-corroborated fact-like theory and all its defences are refuted, a revolution may be under way. Such events may be rare, but they are the more interesting ones. At times we alter our whole theoretical outlook around a rather fact-like theory which then gets refuted. We then look silly from any viewpoint except that which takes the process to be a bootstrap operation!

The motivation behind the collection of papers presented in this THEORIA forum on abductive reasoning is my book Abductive Reasoning: Logical Investigations into the Processes of Discovery and Explanation. These contributions raise fundamental questions. One of them concerns the conjectural character of abduction. The choice of a logical framework for abduction is also discussed in detail, both in its inferential aspect and in its search strategies. Abduction is also analyzed as inference to the best explanation, as well as a process of epistemic change, both of which challenge the argument-like format of abduction. Finally, the psychological question of whether humans reason abductively according to the models proposed is also addressed. I offer a brief summary of my book and then comment on and respond to several challenges that were posed to my work by the contributors to this issue.

The purpose of this paper is to improve on the logical and measure-theoretic foundations for the notion of probability in the law of evidence, which were given in my contributions Åqvist [(1990) Logical analysis of epistemic modality: an explication of the Bolding–Ekelöf degrees of evidential strength. In: Klami HT (ed) Rätt och Sanning (Law and Truth. A symposium on legal proof-theory in Uppsala May 1989). Iustus Förlag, Uppsala, pp 43–54] and [(1992) Towards a logical theory of legal evidence: semantic analysis of the Bolding–Ekelöf degrees of evidential strength. In: Martino AA (ed) Expert systems in law. Elsevier Science Publishers BV, Amsterdam, North-Holland, pp 67–86]. The present approach agrees with the one adopted in those contributions in taking its main task to be that of providing a semantic analysis, or explication, of the so-called Bolding–Ekelöf degrees of evidential strength (“proof-strength”) as applied to the establishment of matters of fact in law-courts. However, it differs from the one advocated in our earlier work on the subject in explicitly appealing to what is known as “Pro-et-Contra Argumentation”, after the famous Norwegian philosopher Arne Naess. It tries to bring out the logical form of that interesting kind of reasoning, at least in the context of the law of evidence. The formal techniques used here will be seen to be largely inspired by the important work done by Patrick Suppes, notably Suppes [(1957) Introduction to logic. Van Nostrand, Princeton] and [(1972) Finite equal-interval measurement structures. Theoria 38:45–63].

This paper sets forth a familiar theme, that science essentially consists of two interdependent episodes, one imaginative, the other critical. Hypotheses and other imaginative conjectures are the initial stage of scientific inquiry because they provide the incentive to seek the truth and a clue as to where to find it. But scientific conjectures must be subject to critical examination and empirical testing. There is a dialogue between the two episodes; observations made to test a hypothesis are the inspiration for new conjectures. Inductive generalizations may also inspire hypotheses, but cannot validate them. A hypothesis is empirically tested by ascertaining whether or not predictions about the world of experience deduced from the hypothesis agree with what is actually observed. This has been appropriately considered the 'criterion of demarcation' that distinguishes science from other knowledge. But scientific hypotheses must satisfy other tests as well, e.g., whether they have explanatory value and further our understanding. I briefly explore such issues as verifiability and falsifiability, empirical content and truthfulness, contingency and certainty, fact and theory, error and fraud. Science, like any human activity, is subject to error and to the foibles and other failings of human beings. But severe attempts at empirical falsification and other trials yield knowledge that stands the test of time and provides a foothold for further knowledge. Moreover, scientists have developed social mechanisms, such as peer review and publication, to evaluate their work. Because the research of scientists depends on the validity of previous knowledge, it is of great consequence that they discern valid from invalid knowledge; thus scientists are inclined to transcend ideology, nationality, friendship, monetary interest, and other prejudices when the mettle of scientific knowledge is at stake. I use historical examples to illustrate some relevant aspects of scientific practice: its success (Mendel), misrepresentation (Darwin), ideological abuse (Lysenko), arrogant violation of the requirement of testing (Koch), theory replacement (Priestley and Lavoisier, Newton and Einstein), and the indispensability of context (Oswald Avery and Alfred Wegener).

The notion of a severe test has played an important methodological role in the history of science. But it has not until recently been analyzed in any detail. We develop a generally Bayesian analysis of the notion, compare it with Deborah Mayo’s error-statistical approach by way of sample diagnostic tests in the medical sciences, and consider various objections to both. At the core of our analysis is a distinction between evidence and confirmation or belief. These notions must be kept separate if mistakes are to be avoided; combined in the right way, they provide an adequate understanding of severity. Those who think that the weight of the evidence always enables you to choose between hypotheses “ignore one of the factors (the prior probability) altogether, and treat the other (the likelihood) as though it ... meant something other than it actually does. This is the same mistake as is made by someone who has scruples about measuring the arms of a balance (having only a tape measure at his disposal ...), but is willing to assert that the heavier load will always tilt the balance (thereby implicitly assuming, although without admitting it, that the arms are of equal length!)” (Bruno de Finetti, Theory of Probability).
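
De Finetti's balance analogy can be made concrete with a small calculation. The sketch below is purely illustrative; the hypotheses, likelihoods, and priors are assumed numbers, not taken from the paper. A 3:1 likelihood ratio favouring H1 does not settle which hypothesis is more probable once the priors (the lengths of the balance arms) differ.

```python
# Hypothetical numbers for de Finetti's point: likelihoods alone mislead
# when the priors are unequal.

p_e_given_h1 = 0.9                  # likelihood of the evidence under H1
p_e_given_h2 = 0.3                  # likelihood of the evidence under H2
prior_h1, prior_h2 = 0.05, 0.95    # H1 starts out far less plausible

joint_h1 = p_e_given_h1 * prior_h1
joint_h2 = p_e_given_h2 * prior_h2
norm = joint_h1 + joint_h2

print(joint_h1 / norm)  # ~0.14: despite the 3:1 likelihood ratio for H1,
print(joint_h2 / norm)  # ~0.86: H2 remains far more probable.
```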

The concepts of supportive evidence and of relevant evidence seem very closely related to each other. Supportive evidence is clearly always relevant as well. But must relevant evidence be defined as evidence which is either supportive or weakening? In an explicit or implicit manner, this is indeed the position of many philosophers. The paradox of ideal evidence, however, shows us that this is too restrictive. Besides increasing or decreasing the probability attached to some hypothesis, evidence can alter or interact with the background assumptions underlying the hypothesis. In most circumstances, the (post hoc) relevance of evidence can indeed be judged by its effect on the confidence one attaches to hypotheses. Occasionally, as in the circumstances described by Example I, and more generally called “the Paradox of Ideal Evidence”, the relevance of evidence to a hypothesis can only be understood by appeal to a broader sense of relevance.
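
A standard worked illustration of the paradox of ideal evidence (the paper's Example I is not reproduced here; the prior and the sample size below are illustrative assumptions): observing 5,000 heads in 10,000 tosses of a coin of unknown bias leaves the probability of heads on the next toss at 1/2, yet the evidence is plainly relevant, since it drastically sharpens the estimate of the bias.

```python
# Beta-Binomial updating: the posterior mean (the probability of heads on
# the next toss) stays at 0.5, while the spread of the estimated bias
# collapses, which is the sense in which the "ideal" evidence is relevant.

import math

def beta_mean(a, b):
    return a / (a + b)

def beta_sd(a, b):
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

a, b = 1, 1                            # flat prior over the coin's bias
print(beta_mean(a, b), beta_sd(a, b))  # 0.5, sd ~0.289

a, b = a + 5000, b + 5000              # update on 5000 heads and 5000 tails
print(beta_mean(a, b), beta_sd(a, b))  # 0.5, sd ~0.005
```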

Predictivism asserts that where evidence E confirms theory T, E provides stronger support for T when E is predicted on the basis of T and then confirmed than when E is known before T's construction and 'used', in some sense, in the construction of T. Among the most interesting attempts to argue that predictivism is a true thesis (under certain conditions) is that of Patrick Maher (1988, 1990, 1993). The purpose of this paper is to investigate the nature of predictivism using Maher's analysis as a starting point. I briefly summarize Maher's primary argument and expand upon it; I explore related issues pertaining to the causal structure of empirical domains and the logic of discovery.

Leśniewski’s systems deviate greatly from standard logic in some basic features. The deviant aspects are rather well known, and are often cited among the reasons why Leśniewski’s work enjoys little recognition. This paper is an attempt to explain why those aspects should be there at all. Leśniewski built his systems inspired by a dream close to Leibniz’s characteristica universalis: a perfect system of deductive theories encoding our knowledge of the world, based on a perfect language. My main claim is that Leśniewski built his characteristica universalis following the conditions of de Jong and Betti’s Classical Model of Science (2008) to an astounding degree. While showing this, I give an overview of the architecture of Leśniewski’s systems and of their fundamental characteristics. I suggest, among other things, that the aesthetic constraints Leśniewski put on axioms and primitive terms have epistemological relevance.

The book develops the necessary background in probability theory underlying diverse treatments of stochastic processes and their wide-ranging applications. With this goal in mind, the pace is lively, yet thorough. Basic notions of independence and conditional expectation are introduced relatively early on in the text, and conditional expectation is illustrated in detail in the context of martingales, the Markov property, and the strong Markov property. Weak convergence of probabilities on metric spaces and Brownian motion are two highlights. The historic role of size-biasing is emphasized in the context of large deviations and in the development of Tauberian theory. The authors assume a graduate level of maturity in mathematics, but otherwise the book will be suitable for students with varying levels of background in analysis and measure theory. In particular, theorems from analysis and measure theory used in the main text are provided in comprehensive appendices, along with their proofs, for ease of reference. Rabi Bhattacharya is Professor of Mathematics at the University of Arizona. Edward Waymire is Professor of Mathematics at Oregon State University. The authors have co-authored numerous books, including the graduate textbook Stochastic Processes with Applications.

Bishop and Trout here present a unique and provocative new approach to epistemology (the theory of human knowledge and reasoning). Their approach aims to liberate epistemology from the scholastic debates of standard analytic epistemology, and treat it as a branch of the philosophy of science. The approach is novel in its use of cost-benefit analysis to guide people facing real reasoning problems and in its framework for resolving normative disputes in psychology. Based on empirical data, Bishop and Trout show how people can improve their reasoning by relying on Statistical Prediction Rules (SPRs). They then develop and articulate the positive core of the book. Their view, Strategic Reliabilism, claims that epistemic excellence consists in the efficient allocation of cognitive resources to reliable reasoning strategies, applied to significant problems. The last third of the book develops the implications of this view for standard analytic epistemology; for resolving normative disputes in psychology; and for offering practical, concrete advice on how this theory can improve real people's reasoning. This is a truly distinctive and controversial work that spans many disciplines and will speak to an unusually diverse group, including people in epistemology, philosophy of science, decision theory, cognitive and clinical psychology, and ethics and public policy.

An inference to a new explanation may be both logically non-ampliative and epistemically ampliative. Included among the premises of the latter form is the explanandum: a unique premise which is capable of embodying what we do not know about the matter in question, as well as legitimate aspects of what we do know. This double status points to a resolution of the Meno paradox. Ampliative inference of this sort, it is argued, has much in common with Nickles' idea of discoverability and, together with the mapping and correction procedures (briefly summarized) required for such inference, may suggest a broadening of the concept of justification which would incorporate much of what has been defended in theories of discovery.

When an economy is at the upper part of the Laffer curve, a reduction in tax rates will, somewhat paradoxically, lead to a rise in the amount of money the taxpayer retains, both relatively and absolutely, but also to an increase in the government revenues collected. The former result is a welcome one from the libertarian perspective; not so the latter. Does this example exhibit a slight anomaly for the free enterprise philosophy, or does it furnish a true conundrum? The present paper argues that both are true.
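
A toy calculation makes the point concrete (the revenue function and the rates below are illustrative assumptions, not the paper's): if revenue is R(t) = t · B(t) for a tax base B(t) that shrinks as the rate t rises, then on the upper part of the curve a rate cut raises both the taxpayer's after-tax income and the government's collected revenue.

```python
# Hypothetical Laffer curve with a linearly shrinking tax base.

def base(t):
    """Assumed tax base at rate t (0 <= t <= 1)."""
    return 100 * (1 - t)

def revenue(t):
    return t * base(t)              # peaks at t = 0.5 for this base

def after_tax_income(t):
    return (1 - t) * base(t)

for t in (0.8, 0.6):                # cutting the rate from 80% to 60%
    print(t, round(revenue(t), 1), round(after_tax_income(t), 1))
# 0.8 -> revenue 16.0, retained  4.0
# 0.6 -> revenue 24.0, retained 16.0   (both rise after the cut)
```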

'IRS' is our term for the logical empiricist idea that the best way to understand the epistemic bearing of observational evidence on scientific theories is to model it in terms of Inferential Relations among Sentences representing the evidence and sentences representing the hypotheses the evidence is used to evaluate. Developing ideas from our earlier work, including 'Saving the Phenomena' (Philosophical Review 97, 1988, pp. 303–52), we argue that the bearing of observational evidence on theory depends upon causal connections and error characteristics of the processes by which data are produced and used to detect features of phenomena. Neither of these depends upon, or is greatly illuminated by a consideration of, formal relations among observational and theoretical sentences or propositions. By attending to causal structures and error characteristics, you too can evade the IRS. In doing so, you can gain insight into Hempel's raven paradox, theory loading, and other issues from the standard philosophical literature on confirmation theory.

Whether science can be regarded as value-neutral remains a contestable issue. Much of that debate is confused because it is not made clear exactly what the term 'science' is meant to include. Three conceptions can be delineated: the iconic, the indexical, and the interpretative. The iconic employs a wide usage of the term 'science' to include any process of inquiry. The indexical refers to the way the outcomes of inquiry can be made subject to testing and criticism. The interpretative conception, growing out of the iconic, emphasizes the methodology of science, marking it off from other forms of inquiry. These three conceptions of science—delineated in the writings of Charles Peirce—have haunted debates in the philosophy of science during the twentieth century. But whichever conception is adopted, none of these three can offer a satisfactory account of the way in which socio-ethical judgments come to be formed for their application in everyday life.

There are different kinds of uncertainty. I outline some of the various ways that uncertainty enters science, focusing on uncertainty in climate science and weather prediction. I then show how sophisticated modelling techniques help us cope with some of these sources of error, and how we maintain confidence in the face of error.

Popular interest in the progress of physical science has increased very rapidly in the last few years. Perhaps the spectacular ‘mysteries’ of wireless and the intriguing paradoxes of the theory of relativity are the chief causes. For every home now has its Magic Box—a piece of pure physics; there is not a familiar thing in it, not even that sine qua non of all things that ‘work’—a wheel; only mysterious parts called condensers, grid-leaks, inductances, and thermionic valves. And surely, when a Sunday newspaper produces a facsimile page of Einstein’s recent paper in German containing abstruse tensor equations, reverence for the mathematical physicist is nearing its zenith.

The paper assumes that, to be of practical interest, process must be understood as physical action that takes place in the world rather than as an idea in the mind. It argues that if an ontology of process is to accommodate actuality, it must be represented in terms of relative probabilities. Folk physics cannot accommodate this, and so the paper appeals to scientific culture, because it is an emergent knowledge of the world derived from action in it. Process is represented as a contradictory probability distribution that does not depend on a spatio-temporal frame. An actuality is a probability density that grounds the values of probabilities to constitute their distributions. Because probability is a conserved value, probability distributions are subject to the constraint of symmetry and must be zero-sum. An actuality is locked in by other actualities to become a zero-sum symmetry of probability values. It is shown that the locking-in of actualities constructs spatio-temporal locality, lends actualities specificity, and makes them a contradiction. Localization is the basis for understanding empirical observation. Because becoming depends on its construction of being, processes exist as trajectories. The historical trajectories of evolution and revolution, as well as the non-historical trajectory of strong emergence, are how processes are observed to exist.

Van Fraassen, like Popper before him, assumes that confirmation and disconfirmation relations are logical relations and thus hold only among abstract items. This raises a problem about how experience, for Popper, and observables, for van Fraassen, enter into epistemic evaluations. Each philosopher offers a drastic proposal: Popper holds that basic statements are accepted by convention; van Fraassen introduces his “pragmatic tautology.” Another alternative is to reject the claim that all evaluative relations are logical relations. Ayer proposed this option in responding to Popper, as did Sosa in a different context. I argue that this option should be pursued and propose a line of research that the option suggests. Keywords: Popper; van Fraassen; Representation; Confirmation; Logic; Cognitive abilities.

In recent years, pragmatism in general and John Dewey in particular have been of increasing interest to philosophers of science. Dewey's work provides an interesting alternative package of views to those which derive from the logical empiricists and their critics, on problems of both traditional and more recent vintage. Dewey's work ought to be of special interest to recent philosophers of science committed to the program of analyzing “science in practice.” The core of Dewey's philosophy of science is his theory of inquiry—what he called “logic.” There is a major lacuna in the literature on this point, however: no contemporary philosophers of science have engaged with Dewey's logical theory, and scholars of Dewey's logic have rarely made connections with philosophy of science. This paper aims to fill this gap, to correct some significant errors in the interpretation of key ideas in Dewey's logical theory, and to show how Dewey's logic provides resources for a philosophy of science.

In "Epistemic Permissiveness", Roger White presents several arguments against Extreme Permissivism, the view that there are possible cases where, given one's total evidence, it would be rational to either believe P, or to believe ~P. In this paper, we carefully reconstruct White's arguments and then argue that they do not succeed.

Styles of reasoning are important devices to understand scientific practice. As I use the concept, a style of reasoning is a pattern of inferential relations that are used to select, interpret, and support evidence for scientific results. In this paper, I defend the view that there is a plurality of styles of reasoning: different domains of science often invoke different styles. I argue that this plurality is an important source of disunity in scientific practice, and it provides additional arguments in support of the disunity claim. I also contrast Ian Hacking’s broad characterization of styles of reasoning with a narrow understanding that I favor. Drawing on examples from molecular biology, chemistry and mathematics, I argue that differences in style of reasoning lead to differences in the way the relevant results are obtained and interpreted. The result is a pluralist view about styles of reasoning that is sensitive to nuances of inferential relations in scientific activity.

The place of induction in the framing and testing of scientific hypotheses is investigated. The meaning of 'induction' is first equated with generalization on the basis of case examination. Two kinds of induction are then distinguished: the inference of generals from particulars (first-degree induction), and the generalization of generalizations (second-degree induction). Induction is claimed to play a role in the framing of modest empirical generalizations and in the extension of every sort of generalization, though not in the invention of high-level hypotheses containing theoretical predicates. It is maintained, on the other hand, that induction by enumeration is essential in the empirical test of the lowest-level consequences of scientific theories, since it occurs in the drawing of "conclusions" from the examination of empirical evidence. But it is also held that the empirical test is insufficient, and must be supplemented with theorification, or the expansion of isolated hypotheses into theories. Refutation is not viewed as a substitute for confirmation but as its complement, since the very notion of an unfavorable case is meaningful only in connection with the concept of a positive instance. Although the existence of an inductive method is disclaimed, it is maintained that the various patterns of plausible reasoning (inductive inference included) are worth investigating. It is concluded that scientific research follows neither the advice of inductivism nor the injunction of deductivism, but takes a middle course in which induction is instrumental both heuristically and methodologically, although the over-all pattern of research is hypothetico-deductive.

The Prosecutor's Fallacy is a well-known hazard in the assessment of probabilistic evidence that can lead to faulty inferences. It is perhaps best known via its role in the assessment of DNA match evidence in courts of law. A prosecutor, call him Burger, presents DNA evidence in court that links a defendant, Crumb, to a crime. The conditional probability of a DNA match given that Crumb is not guilty, or p(M | ~G), is very low: according to Burger, one chance in tens of millions. Burger goes on to argue that this very low probability entails another low probability. He asserts that it is very improbable that Crumb is not guilty given the match, and so p(~G | M) is also very low. As this latter probability is precisely what the jury is called upon to assess, Burger's assertion is likely to lead the jury into convicting Crumb.
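
Bayes' theorem makes the fallacy explicit: p(~G | M) = p(M | ~G) · p(~G) / p(M). The sketch below uses illustrative numbers (the prior probability of guilt is an assumption, not from the paper) to show that a tiny p(M | ~G) is compatible with a non-negligible p(~G | M).

```python
# Hypothetical numbers: the match probability for an innocent person is
# one in ten million, but a priori any of a million people might be guilty.

p_match_given_innocent = 1e-7
p_match_given_guilty = 1.0          # assume the guilty party always matches
prior_guilty = 1e-6                 # prior: one suspect in a million

p_match = (p_match_given_guilty * prior_guilty
           + p_match_given_innocent * (1 - prior_guilty))

p_innocent_given_match = p_match_given_innocent * (1 - prior_guilty) / p_match
print(p_innocent_given_match)       # ~0.09: roughly a 1-in-11 chance of
                                    # innocence, not "one in tens of millions"
```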

This article traces the intellectual history of scientific studies of intercessory prayer published in English between 1965 and the present by focusing on the conflict and discussion they prompted in the medical literature. I analyze these debates with attention to how researchers articulate the possibilities and limits medical science has for studying intercessory prayer over time. I delineate three groups of researchers and commentators: those who think intercessory prayer can and should be studied scientifically, those who are more skeptical and articulate the limits of science around this topic, and those who focus primarily on the pragmatic applications of this knowledge. I analyze these contests as examples of what Thomas Gieryn calls “epistemic authority” as medical researchers engage in what he describes as “boundary-work” or “the discursive attribution of selected qualities to scientists, scientific methods, and scientific claims for the purposes of drawing a rhetorical boundary between science and some less authoritative residual non-science” (Gieryn 1999, 4).

Arthur S. Eddington, FRS (1882–1944), was one of the most prominent British scientists of his time. He made major contributions to astrophysics and to the broader understanding of the revolutionary theories of relativity and quantum mechanics. He is famed for his astronomical observations of 1919, confirming Einstein’s prediction of the curving of the paths of starlight, and he was the first major interpreter of Einstein’s physics to the English-speaking world. His 1928 book, The Nature of the Physical World, here re-issued in a critical, annotated edition, was largely responsible for his fame as a public interpreter of science and has had a significant influence on both the public and the philosophical understanding of 20th-century physics. To a considerable degree, Eddington’s work has entered into our contemporary understanding of modern physics, and, in consequence, his most popular book repays critical attention. Born at Kendal near Lake Windermere in the northwest of England into a Quaker background, Eddington attended Owens College, Manchester, and afterward Trinity College, Cambridge, where he won high mathematical honors, including Senior Wrangler. He became Plumian Professor of Astronomy at Cambridge in 1913 and in 1914 Director of the Cambridge Observatory. Eddington was a conscientious objector during the First World War. By the end of his career, he was widely esteemed and had received honorary degrees from many universities. He was elected President of the Royal Astronomical Society (1921–1923), and was subsequently elected President of the Physical Society (1930–1932), the Mathematical Association (1932), and the International Astronomical Union (1938–1944). Eddington was knighted in 1930 and received the Order of Merit in 1938. During the 1930s, his popular and more philosophical books made him a well-known figure to the general public. Philosophers have found his writings of considerable interest, and have debated his themes for nearly a hundred years.