This paper explores Jeffreys' proposal for the measurement of the simplicity of scientific laws. The first part is a sketch of Jeffreys' development of a view on simplicity, followed by a discussion of what seem to be some rather crucial defects in the proposal as it stands. It will be suggested here that no plausible way of countering these defects seems available.

The desire to minimize the number of individual new entities postulated is often referred to as quantitative parsimony. Its influence on the default hypotheses formulated by scientists seems undeniable. I argue that there is a wide class of cases for which the preference for quantitatively parsimonious hypotheses is demonstrably rational. The justification, in a nutshell, is that such hypotheses have greater explanatory power than less parsimonious alternatives. My analysis is restricted to a class of cases I shall refer to as additive. Such cases involve the postulation of a collection of qualitatively identical individual objects which collectively explain some particular observed phenomenon. Especially clear examples of this sort occur in particle physics. 1 Introduction 2 Particle physics: a case study 3 Three kinds of simplicity 4 Explanatory power 5 Explanation and non-observation 6 Parsimony and scientific methodology 7 Conclusions.

The received view of an ad hoc hypothesis is that it accounts for only the observation(s) it was designed to account for, and so non-ad hocness is generally held to be necessary or important for an introduced hypothesis or modification to a theory. Attempts by Popper and several others to convincingly explicate this view, however, prove to be unsuccessful or of doubtful value, and familiar and firmer criteria for evaluating the hypotheses or modified theories so classified are characteristically available. These points are obscured largely because the received view fails to adequately separate psychology from methodology or to recognise ambiguities in the use of 'ad hoc'.

Bayesians often assume, suppose, or conjecture that for any reasonable explication of the notion of simplicity a prior can be designed that will enforce a preference for hypotheses simpler in just that sense. Further, it is often claimed that the Bayesian framework automatically implements Occam’s razor—that conditionalizing on data consistent with both a simple theory and a complex theory more or less inevitably favours the simpler theory. But it is shown here that there are simplicity-driven approaches to curve-fitting problems that cannot be captured within the orthodox Bayesian framework and that the automatic razor does not function for such problems.
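For readers unfamiliar with the "automatic razor", the standard illustration is that Bayesian marginal likelihoods tend to favour the simpler of two models that both fit the data, because the more complex model spreads its prior probability over many more possible curves. Below is a minimal sketch of that standard illustration for polynomial curve-fitting, assuming Gaussian noise and a zero-mean Gaussian prior on the coefficients; the data, variances, and polynomial degrees are hypothetical and are not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a straight line plus Gaussian noise.
x = np.linspace(-1.0, 1.0, 20)
sigma = 0.1
y = 0.5 * x + sigma * rng.standard_normal(x.size)

def log_evidence(degree, prior_var=1.0, noise_var=sigma**2):
    """Log marginal likelihood of a polynomial model of the given degree with a
    zero-mean Gaussian prior on its coefficients:
    y ~ N(0, noise_var * I + prior_var * Phi @ Phi.T)."""
    Phi = np.vander(x, degree + 1, increasing=True)            # design matrix
    cov = noise_var * np.eye(x.size) + prior_var * Phi @ Phi.T
    _, logdet = np.linalg.slogdet(cov)
    quad = y @ np.linalg.solve(cov, y)
    return -0.5 * (logdet + quad + x.size * np.log(2.0 * np.pi))

# Both models can fit the data, but the simpler one typically earns the higher
# evidence: the "automatic" preference the abstract describes.
for d in (1, 3):
    print(f"polynomial degree {d}: log evidence = {log_evidence(d):.2f}")
```

The paper's point, of course, is that there are simplicity-driven approaches to curve-fitting which this marginal-likelihood mechanism cannot reproduce.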

The theoretical virtue of parsimony values the minimizing of theoretical commitments, but theoretical commitments come in two kinds: ontological and ideological. While the ontological commitments of a theory are the entities it posits, a theory’s ideological commitments are the primitive concepts it employs. Here, I show how we can extend the distinction between quantitative and qualitative parsimony, commonly drawn regarding ontological commitments, to the domain of ideological commitments. I then argue that qualitative ideological parsimony is a theoretical virtue. My defense proceeds by demonstrating the merits of qualitative ideological parsimony and by showing how the qualitative conception of ideological parsimony undermines two notable arguments from ideological parsimony: David Lewis’ defense of modal realism and Ted Sider’s defense of mereological nihilism.

Many scholars claim that a parsimony principle has ontological implications. The most common such claim is that a parsimony principle entails that the “world” is simple. This ontological claim often appears to be coupled with the assumption that a parsimony principle would be corroborated if the “world” were simple. I clarify these claims, describe some minimal features of simplicity, and then show that both of these claims are either false or depend upon an implausible notion of simplicity. In their stead, I propose a minimal ontological claim: a parsimony principle entails a minimal realism about the existence of objects and laws, in order to allow that the descriptions of the relevant phenomena contain patterns.

In this article, we examine in detail the New Atheists' most serious argument for the conclusion that God does not exist, namely, Richard Dawkins's Ultimate 747 Gambit. Dawkins relies upon a strong explanatory principle involving simplicity. We systematically inspect the various kinds of simplicity that Dawkins may invoke. Finding his crucial premises false on any common conception of simplicity, we conclude that Dawkins has not given good reason to think God does not exist.

Theoretical simplicity is difficult to characterize, and evidently can depend upon a number of distinct factors. One such desirable characteristic is that the laws of a theory have relatively few "counterinstances" whose accommodation requires the invocation of a ceteris paribus condition and ancillary explanation. It is argued that, when one theory is reduced to another, such that the laws of the second govern the behavior of the parts of the entities in the domain of the first, there is a characteristic gain in simplicity of the sort mentioned: while I see no way of quantitatively measuring the "amount" of defeasibility of the laws of a theory, microreduction can be shown to decrease that "amount".

This paper discusses the role that appeals to theoretical simplicity (or parsimony) have played in the debate between nativists and empiricists in cognitive science. Both sides have been keen to make use of such appeals in defence of their respective positions about the structure and ontogeny of the human mind. Focusing on the standard simplicity argument employed by empiricist-minded philosophers and cognitive scientists—what I call “the argument for minimal innateness”—I identify various problems with such arguments, in particular the apparent arbitrariness of the relevant notions of simplicity at work. I then argue that simplicity ought not be seen as a theoretical desideratum in its own right, but rather as a stand-in for other desirable features of theories. In this deflationary vein, I argue that the best way of interpreting the argument for minimal innateness is to view it as an indirect appeal to various potential biological constraints on the amount of innate structure that can be wired into the human mind. I then consider how nativists may respond to this biologized version of the argument, and discuss the role that similar biological concerns have played in recent nativist theorizing in the Minimalist Programme in generative linguistics.

This paper discusses Kevin Kelly’s recent attempt to justify Ockham’s Razor in terms of truth-finding efficiency. It is argued that Kelly’s justification fails to warrant confidence in the empirical content of theories recommended by Ockham’s Razor. This is a significant problem if, as Kelly and many others believe, considerations of simplicity play a pervasive role in scientific reasoning, underlying even our best tested theories, for the proposal will fail to warrant the use of these theories in practical prediction.

" The Way of the Small teaches ways to embrace even life's more difficult passages such as aging, failure, illness, or the loss of a loved one, making even our pain a path to the sacred that helps us find meaning in life as it happens. * ...

Recent attempts to answer ontological questions through conceptual analysis have been controversial. Nonetheless, contemporary metaphysicians mostly agree that if the existence of certain things analytically follows from sentences we already accept, then there is no further ontological commitment involved in affirming the existence of those things. More generally, it is plausible that whenever a sentence analytically entails another, the conjunction of those sentences requires nothing more of the world for its truth than the former sentence alone. In his ‘Analyticity and Ontology’, Louis deRosset tries to produce counterexamples to these principles by means of linguistic stipulations. I aim to show where his arguments go wrong.

Parsimony is a virtue of empirical theories. Is it also a virtue of philosophical theories? I review four contemporary accounts of the virtue of parsimony in empirical theorizing, and consider how each might apply to two prominent appeals to parsimony in the philosophical literature, those made on behalf of physicalism and on behalf of nominalism. None of the accounts of the virtue of parsimony extends naturally to either of these philosophical cases. This suggests that in typical philosophical contexts, ontological simplicity has no evidential value.

We attempt to clarify the main conceptual issues in approaches to ‘objectification’ or ‘measurement’ in quantum mechanics which are based on superselection rules. Such approaches venture to derive the emergence of classical ‘reality’ relative to a class of observers; those believing that the classical world exists intrinsically and absolutely are advised against reading this paper. The prototype approach (K. Hepp, Helv. Phys. Acta 45 (1972), 237–248) where superselection sectors are assumed in the state space of the apparatus is shown to be untenable. Instead, one should couple system and apparatus to an environment, and postulate superselection rules for the latter. These are motivated by the locality of any observer or other (actual or virtual) monitoring system. In this way ‘environmental’ solutions to the measurement problem (H. D. Zeh, Found. Phys. 1 (1970), 69–76; W. H. Zurek, Phys. Rev. D 26 (1982), 1862–1880 and Progr. Theor. Phys. 89 (1993), 281–312) become consistent and acceptable, too. Points of contact with the modal interpretation are briefly discussed. We propose a minimal value attribution to observables in theories with superselection rules, in which only central observables have properties. In particular, the eigenvector-eigenvalue link is dropped. This is mainly motivated by Ockham's razor.

Incommensurability was Kuhn’s worst mistake. If it were to be found anywhere in science, it would be in physics. But revolutions in theoretical physics all embody theoretical unification. Far from obliterating the idea that there is a persisting theoretical idea in physics, revolutions do just the opposite: they all actually exemplify the persisting idea of underlying unity. Furthermore, persistent acceptance of unifying theories in physics when empirically more successful disunified rivals can always be concocted means that physics makes a persistent implicit assumption concerning unity. To put it in Kuhnian terms, underlying unity is a paradigm for paradigms. We need a conception of science which represents problematic assumptions concerning the physical comprehensibility and knowability of the universe in the form of a hierarchy, these assumptions becoming less and less substantial and more and more such that their truth is required for science, or the pursuit of knowledge, to be possible at all, as one goes up the hierarchy. This hierarchical conception of science has important Kuhnian features, but also differs dramatically from the view Kuhn expounds in his The Structure of Scientific Revolutions. In this paper, I compare and contrast these two views in a much more detailed way than has been done hitherto. I show how the hierarchical view can be construed to emerge from Kuhn’s view as it is modified to overcome objections. I argue that the hierarchical conception of science is to be preferred to Kuhn’s view.

Most scientists would hold that science has not established that the cosmos is physically comprehensible – i.e. such that there is some as-yet undiscovered true physical theory of everything that is unified. This is an empirically untestable, or metaphysical, thesis. It thus lies beyond the scope of science. Only when physics has formulated a testable unified theory of everything which has been amply corroborated empirically will science be in a position to declare that it has established that the cosmos is physically comprehensible. But this argument presupposes a widely accepted but untenable conception of science which I shall call standard empiricism. According to standard empiricism, in science theories are accepted solely on the basis of evidence. Choice of theory may be influenced for a time by considerations of simplicity, unity, or explanatory capacity, but not in such a way that the universe itself is permanently assumed to be simple, unified or physically comprehensible. In science, no thesis about the universe can be accepted permanently as a part of scientific knowledge independently of evidence. Granted this view, it is clear that science cannot have established that the universe is physically comprehensible. Standard empiricism is, however, as I have indicated, untenable. Any fundamental physical theory, in order to be accepted as a part of theoretical scientific knowledge, must satisfy two criteria. It must be (1) sufficiently empirically successful, and (2) sufficiently unified. Given any accepted theory of physics, endlessly many empirically more successful disunified rivals can always be concocted – disunified because they assert that different dynamical laws govern the diverse phenomena to which the theory applies. These disunified rivals are not considered for a moment in physics, despite their greater empirical success. This persistent rejection of empirically more successful but disunified rival theories means, I argue, that a big, highly problematic, implicit assumption is made by science about the cosmos, to the effect, at least, that the cosmos is such that all seriously disunified theories are false. Once this point is recognized, it becomes clear, I argue, that we need a new conception of science which makes explicit, and so criticizable and improvable, the big, influential, and problematic assumption that is at present implicit in physics in the persistent preference for unified theories. This conception of science, which I call aim-oriented empiricism, represents the assumption of physics in the form of a hierarchy of assumptions. As one goes up the hierarchy, the assumptions become less and less substantial, and more and more nearly such that their truth is required for science, or the pursuit of knowledge, to be possible at all. At each level, that assumption is accepted which (a) best accords with the next one up, and (b) has, associated with it, the most empirically progressive research programme in physics, or holds out the greatest hope of leading to such an empirically progressive research programme. In this way a framework of relatively insubstantial, unproblematic, fixed assumptions and associated methods is created, high up in the hierarchy, within which much more substantial and problematic assumptions and associated methods, low down in the hierarchy, can be changed, and indeed improved, as scientific knowledge improves.
One assumption in this hierarchy of assumptions, I argue, is that the cosmos is physically comprehensible – that is, such that some yet-to-be-discovered unified theory of everything is true. Hence the conclusion: improve our ideas about the nature of science and it becomes apparent that science has already established that the cosmos is physically comprehensible – in so far as science can ever establish anything theoretical.

For three decades I have expounded and defended aim-oriented empiricism, a view of science which, I claim, solves a number of problems in the philosophy of science and has important implications for science itself and, when generalized, for the whole of academic inquiry, and for our capacity to solve our current global problems. Despite these claims, the view has received scant attention from philosophers of science. Recently, David Miller has criticized the view. Miller’s criticisms are, however, not valid.

When scientists choose one theory over another, they reject out of hand all those that are not simple, unified or explanatory. Yet the orthodox view of science is that evidence alone should determine what can be accepted. Nicholas Maxwell thinks he has a way out of the dilemma.

In this paper, I motivate the view that quantitative parsimony is a theoretical virtue: that is, we should be concerned not only to minimize the number of kinds of entities postulated by our theories (i.e. maximize qualitative parsimony), but also to minimize the number of entities postulated which fall under those kinds. In order to motivate this view, I consider two cases from the history of science: the postulation of the neutrino and the proposal of Avogadro's hypothesis. I also consider two issues concerning how a principle of quantitative parsimony should be framed.

Given at the 2007 Formal Epistemology Workshop at Carnegie Mellon, June 2nd. Good compression must track the higher versus lower probability of inputs, and this is one way to approach how simplicity tracks truth.
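One way to make the compression–probability link concrete (a minimal illustrative sketch, not taken from the talk itself; the alphabet and distribution below are hypothetical) is Shannon's observation that an optimal code gives a symbol of probability p a codeword of roughly -log2(p) bits, so more probable inputs receive shorter codes:

```python
import math

# Hypothetical distribution over inputs: more probable inputs should get shorter codes.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}

# Shannon: the ideal code length for a symbol of probability p is -log2(p) bits.
ideal = {s: -math.log2(p) for s, p in probs.items()}

# Expected bits per symbol when code lengths track the true probabilities (the entropy)...
matched = sum(p * ideal[s] for s, p in probs.items())

# ...versus a code that ignores the probabilities and gives all four symbols 2 bits.
flat = sum(p * 2.0 for p in probs.values())

print(f"lengths track probabilities:  {matched:.3f} bits/symbol")  # 1.750
print(f"lengths ignore probabilities: {flat:.3f} bits/symbol")     # 2.000
```

The better compressor is the one whose implicit probability model matches the inputs, which is the sense in which good compression must track the probability of inputs.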

A platitude that took hold with Kuhn is that there can be several equally good ways of balancing theoretical virtues for theory choice. Okasha recently modelled theory choice using technical apparatus from the domain of social choice: famously, Arrow showed that no method of social choice can jointly satisfy four desiderata, and each of the desiderata in social choice has an analogue in theory choice. Okasha suggested that one can avoid the Arrow analogue for theory choice by employing a strategy used by Sen in social choice, namely, to enhance the information made available to the choice algorithms. I argue here that, despite Okasha’s claims to the contrary, the information-enhancing strategy is not compelling in the domain of theory choice.

Among theories which fit all of our data, we prefer the simpler over the more complex. Why? Surely not merely for practical convenience or aesthetic pleasure. But how could we be justified in this preference without knowing in advance that the world is more likely to be simple than complex? And isn’t this a rather extravagant a priori assumption to make? I want to suggest some steps we can take toward reducing this embarrassment, by showing that the assumption which supports favouring simplicity is far more modest than it first seems.

The idea that simplicity matters in science is as old as science itself, with the much cited example of Ockham's Razor, 'entia non sunt multiplicanda praeter necessitatem': entities are not to be multiplied beyond necessity. A problem with Ockham's razor is that nearly everybody seems to accept it, but few are able to define its exact meaning and to make it operational in a non-arbitrary way. Using a multidisciplinary perspective including philosophers, mathematicians, econometricians and economists, this monograph examines simplicity by asking six questions: What is meant by simplicity? How is simplicity measured? Is there an optimum trade-off between simplicity and goodness-of-fit? What is the relation between simplicity and empirical modelling? What is the relation between simplicity and prediction? What is the connection between simplicity and convenience? The book concludes with reflections on simplicity by Nobel Laureates in Economics.