In this sequence of philosophical essays about natural science, the author argues that fundamental explanatory laws, the deepest and most admired successes of modern physics, do not in fact describe regularities that exist in nature. Cartwright draws from many real-life examples to propound a novel distinction: that theoretical entities, and the complex and localized laws that describe them, can be interpreted realistically, but the simple unifying laws of basic theory cannot.

It is often supposed that the spectacular successes of our modern mathematical sciences support a lofty vision of a world completely ordered by one single elegant theory. In this book Nancy Cartwright argues to the contrary. When we draw our image of the world from the way modern science works - as empiricism teaches us we should - we end up with a world where some features are precisely ordered, others are given to rough regularity and still others behave in their own diverse ways. This patchwork makes sense when we realise that laws are very special productions of nature, requiring very special arrangements for their generation. Combining classic and newly written essays on physics and economics, The Dappled World carries important philosophical consequences and offers serious lessons for both the natural and the social sciences.

Ever since David Hume, empiricists have barred powers and capacities from nature. In this book Cartwright argues that capacities are essential in our scientific world, and, contrary to empiricist orthodoxy, that they can meet sufficiently strict demands for testability. Econometrics is one discipline where probabilities are used to measure causal capacities, and the technology of modern physics provides several examples of testing capacities (such as lasers). Cartwright concludes by applying the lessons of the book about capacities and probabilities to the explanation of the role of causality in quantum mechanics.

Hunting Causes and Using Them argues that causation is not one thing, as commonly assumed, but many. There is a huge variety of causal relations, each with different characterizing features, different methods for discovery and different uses to which it can be put. In this collection of new and previously published essays, Nancy Cartwright provides a critical survey of philosophical and economic literature on causality, with a special focus on the currently fashionable Bayes-nets and invariance methods - and it exposes a huge gap in that literature. Almost every account treats either exclusively how to hunt causes or how to use them. But where is the bridge between? It's no good knowing how to warrant a causal claim if we don't know what we can do with that claim once we have it. This book will interest philosophers, economists and social scientists.

This paper critically analyses the concept of evidence in evidence-based policy, arguing that there is a key problem: there is no existing practicable theory of evidence, one which is philosophically grounded and yet applicable for evidence-based policy. The paper critically considers both philosophical accounts of evidence and practical treatments of evidence in evidence-based policy. It argues that both fail in different ways to provide a theory of evidence that is adequate for evidence-based policy. The paper is a valuable contribution to the part of the research project that explores how evidence can and should be used to reduce contingency in science and in policy based on science.

The claims of randomized controlled trials to be the gold standard rest on the fact that the ideal RCT is a deductive method: if the assumptions of the test are met, a positive result implies the appropriate causal conclusion. This is a feature that RCTs share with a variety of other methods, which thus have equal claim to being a gold standard. This article describes some of these other deductive methods and also some useful non-deductive methods, including the hypothetico-deductive method. It argues that with all deductive methods, the benefit that the conclusions follow deductively in the ideal case comes with a great cost: narrowness of scope. This is an instance of the familiar trade-off between internal and external validity. RCTs have high internal validity but the formal methodology puts severe constraints on the assumptions a target population must meet to justify exporting a conclusion from the test population to the target. The article reviews one such set of assumptions to show the kind of knowledge required. The overall conclusion is that to draw causal inferences about a target population, which method is best depends case-by-case on what background knowledge we have or can come to obtain. There is no gold standard.

This paper argues that even when simple analogue models picture parallel worlds, they generally still serve as isolating tools. But there are serious obstacles that often stop them isolating in just the right way. These are obstacles that face any model that functions as a thought-experiment but they are especially pressing for economic models because of the paucity of economic principles. Because of the paucity of basic principles, economic models are rich in structural assumptions. Without these no interesting conclusions can be drawn. This, however, makes trouble when it comes to exporting conclusions from the model to the world. One uncontroversial constraint on induction from special cases is to beware of extending conclusions to situations that we know are different in relevant respects. In the case of economic models it is clear by inspection that the unrealistic structural assumptions of the model are intensely relevant to the conclusion. Any inductive leap to a real situation seems a bad bet.

In “The Toolbox of Science” (1995), written together with Towfic Shomar, we advocated a form of instrumentalism about scientific theories. We separately developed this view further in a number of subsequent works. Steven French, James Ladyman, Otavio Bueno and Newton Da Costa (FLBD) have since written at least eight papers and a book criticising our work. Here we defend ourselves. First we explain what we mean in denying that models derive from theory – and why their failure to do so should be lamented. Second we defend our use of the London model of superconductivity as an example. Third we point out both advantages and weaknesses of FLBD’s techniques in comparison to traditional Anglophone versions of the semantic conception. Fourth we show that FLBD’s version of the semantic conception has not been applied to our case study. We conclude by raising doubts about FLBD’s overall project.

We currently have on offer a variety of different theories of causation. Many are strikingly good, providing detailed and plausible treatments of exemplary cases; and all suffer from clear counterexamples. I argue that, contra Hume and Kant, this is because causation is not a single, monolithic concept. There are different kinds of causal relations embedded in different kinds of systems, readily described using thick causal concepts. Our causal theories pick out important and useful structures that fit some familiar cases—cases we discover and ones we devise to fit.

What kinds of evidence reliably support predictions of effectiveness for health and social care interventions? There is increasing reliance, not only for health care policy and practice but also for more general social and economic policy deliberation, on evidence that comes from studies whose basic logic is that of J. S. Mill's method of difference. These include randomized controlled trials, case–control studies, cohort studies, and some uses of causal Bayes nets and counterfactual-licensing models like ones commonly developed in econometrics. The topic of this paper is the 'external validity' of causal conclusions from these kinds of studies. We shall argue two claims. Claim, negative: external validity is the wrong idea; claim, positive: 'capacities' are almost always the right idea, if there is a right idea to be had. If we are right about these claims, it makes big problems for policy decisions. Many advice guides for grading policy predictions give top grades to a proposed policy if it has two good Mill's-method-of-difference studies that support it. But if capacities are to serve as the conduit for support from a method-of-difference study to an effectiveness prediction, much more evidence, and evidence different in kind, is required. We will illustrate the complexities involved with the case of multisystemic therapy, an internationally adopted intervention to try to diminish antisocial behaviour in young people.

There is a takeover movement fast gaining influence in development economics, a movement that demands that predictions about development outcomes be based on randomized controlled trials. The problem it takes up—of using evidence of efficacy from good studies to predict whether a policy will be effective if we implement it—is a general one, and affects us all. My discussion is the result of a long struggle to develop the right concepts to deal with the problem of warranting effectiveness predictions. Whether I have it right or not, these are questions of vast social importance that philosophers of science can, and should, help answer.

We call for a new philosophical conception of models in physics. Some standard conceptions take models to be useful approximations to theorems, which are the chief means to test theories. Hence the heuristics of model building is dictated by the requirements and practice of theory-testing. In this paper we argue that a theory-driven view of models cannot account for common procedures used by scientists to model phenomena. We illustrate this thesis with a case study: the construction of one of the first comprehensive models of superconductivity, by the London brothers in 1934. Instead of a theory-driven view of models, we suggest a phenomenologically driven one.

Randomized controlled trials (RCTs) are widely taken as the gold standard for establishing causal conclusions. Ideally conducted, they ensure that the treatment ‘causes’ the outcome—in the experiment. But where else? This is the venerable question of external validity. I point out that the question comes in two importantly different forms: Is the specific causal conclusion warranted by the experiment true in a target situation? What will be the result of implementing the treatment there? This paper explains how the probabilistic theory of causality implies that RCTs can establish causal conclusions and thereby provides an account of what exactly that causal conclusion is. Clarifying the exact form of the conclusion shows just what is necessary for it to hold in a new setting and also how much more is needed to see what the actual outcome would be there, were the treatment implemented.
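The probabilistic logic this abstract appeals to can be illustrated with a toy simulation (a sketch with entirely hypothetical numbers, not an example from the text): random assignment makes treatment statistically independent of background factors, so in the ideal case the difference in mean outcomes between arms recovers the treatment's average effect in the experimental population.

```python
import random

random.seed(0)

def simulate_ideal_rct(n=100000):
    """Toy illustration: randomization severs the treatment-confounder
    link, so the difference in mean outcomes estimates the average
    treatment effect. All numbers are hypothetical."""
    treated, control = [], []
    for _ in range(n):
        u = random.random()            # unobserved background factor
        t = random.random() < 0.5      # coin-flip assignment, independent of u
        # outcome: true effect of treatment is 2.0, plus background and noise
        y = 2.0 * t + 3.0 * u + random.gauss(0, 1)
        (treated if t else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

print(simulate_ideal_rct())  # close to the true effect, 2.0
```

Note that nothing in the simulation speaks to a different target population: the estimate concerns the experimental population only, which is exactly the gap between the two forms of the external-validity question the abstract distinguishes.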

In their rich and intricate paper ‘Independence, Invariance, and the Causal Markov Condition’, Daniel Hausman and James Woodward ([1999]) put forward two independent theses, which they label ‘level invariance’ and ‘manipulability’, and they claim that, given a specific set of assumptions, manipulability implies the causal Markov condition. These claims are interesting and important, and this paper is devoted to commenting on them. With respect to level invariance, I argue that Hausman and Woodward's discussion is confusing because, as I point out, they use different senses of ‘intervention’ and ‘invariance’ without saying so. I shall remark on these various uses and point out that the thesis is true in at least two versions. The second thesis, however, is not true. I argue that in their formulation, the manipulability thesis is patently false and that a modified version does not fare better. Furthermore, I think their proof that manipulability implies the causal Markov condition is not conclusive. In the deterministic case it is valid but vacuous, whereas it is invalid in the probabilistic case.
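The causal Markov condition at issue can be illustrated numerically (a hypothetical three-variable chain of my own construction, not an example from the paper): conditional on a variable's direct causes, it should be probabilistically independent of everything except its effects. For a chain X → Y → Z, conditioning on Y screens X off from Z.

```python
import random

random.seed(1)

def screening_off_demo(n=200000):
    """Toy check of the causal Markov condition for a chain X -> Y -> Z:
    conditioning on the direct cause Y screens X off from Z.
    Structure and probabilities are hypothetical illustrations."""
    counts = {}
    for _ in range(n):
        x = random.random() < 0.5
        y = random.random() < (0.9 if x else 0.2)   # Y depends on X
        z = random.random() < (0.8 if y else 0.1)   # Z depends only on Y
        counts[(x, y, z)] = counts.get((x, y, z), 0) + 1

    def p_z_given(x, y):
        num = counts.get((x, y, True), 0)
        return num / (num + counts.get((x, y, False), 0))

    # Within a fixed stratum of Y, P(Z | X, Y) should not depend on X.
    return abs(p_z_given(True, True) - p_z_given(False, True))

print(screening_off_demo())  # small, near zero
```

The sampled gap between P(Z | X=1, Y=1) and P(Z | X=0, Y=1) is close to zero, as the condition requires; the philosophical dispute in the paper concerns whether real causal systems must satisfy this condition, not whether it holds in models built to obey it.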

For evidence-based practice and policy, randomised controlled trials are the current gold standard. But exactly why? We know that RCTs do not, without a series of strong assumptions, warrant predictions about what happens in practice. But just what are these assumptions? I maintain that, from a philosophical stance, answers to both questions are obscured because we don't attend to what causal claims say. Causal claims entering evidence-based medicine at different points say different things and, I would suggest, failure to attend to these differences makes much current guidance about evidence for medical and social policy misleading.

Opponents of ceteris paribus laws are apt to complain that the laws are vague and untestable. Indeed, claims to this effect are made by Earman, Roberts and Smith in this volume. I argue that these kinds of claims rely on too narrow a view about what kinds of concepts we can and do regularly use in successful sciences and on too optimistic a view about the extent of application of even our most successful non-ceteris paribus laws. When it comes to testing, we test ceteris paribus laws in exactly the same way that we test laws without the ceteris paribus antecedent. But at least when the ceteris paribus antecedent is there we have an explicit acknowledgment of important procedures we must take in the design of the experiments — i.e., procedures to control for “all interferences”, even those we cannot identify under the concepts of any known theory.

An international team of four authors, led by the distinguished philosopher of science Nancy Cartwright and the leading scholar of the Vienna Circle Thomas E. Uebel, has produced this lucid and elegant study of a much-neglected figure. The book, which depicts Neurath's science in the political, economic and intellectual milieu in which it was practised, is divided into three sections: Neurath's biographical background and the socio-political context of his economic ideas; the development of his theory of science; and his legacy as illustrated by his contemporaneous involvement in academic and political debates. Coinciding with the renewal of interest in logical positivism, this is a timely publication which will redress a current imbalance in the history and philosophy of science, as well as making a major contribution to our understanding of the intellectual life of Austro-Germany in the inter-war years.

We aim here to outline a theory of evidence for use. More specifically, we lay foundations for a guide for the use of evidence in predicting policy effectiveness in situ, a more comprehensive guide than current standard offerings, such as the Maryland rules in criminology, the weight-of-evidence scheme of the International Agency for Research on Cancer (IARC), or the US ‘What Works Clearinghouse’. The guide itself is meant to be well-grounded but at the same time to give practicable advice, that is, advice that can be used by policy-makers not expert in the natural and social sciences, assuming they are well-intentioned and have a reasonable but limited amount of time and resources available for searching out evidence and deliberating.

How can philosophy of science be of more practical use? One thing we can do is provide practicable advice about how to determine when one empirical claim is relevant to the truth of another; i.e., about evidential relevance. This matters especially for evidence-based policy, where advice is thin—and misleading—about how to tell what counts as evidence for policy effectiveness. This paper argues that good efficacy results, which are all the rage now, are only a very small part of the story. To tell what facts are relevant for judging policy effectiveness, we need to construct causal scenarios about what will happen when the policy is implemented.

Mechanisms are now taken widely in philosophy of science to provide one of modern science’s basic explanatory devices. This has raised lively debate concerning the relationship between mechanisms, laws and explanation. This paper focuses on cases where a mechanism gives rise to a ceteris paribus law, addressing two inter-related questions: What kind of explanation is involved? And what is going on in the world when mechanism M affords behavior B described in a ceteris paribus law? We explore various answers offered by ‘new mechanists’ and others before setting out and explaining our own answers: mechanistic explanations are a species of old-fashioned covering-law explanation, and this often accounts in part for their explanatory power; and B is what it takes for some set of principles that govern the features of M’s parts in their arrangement in M all to be instanced together.

Probability is a guide to life partly because it is a guide to causality. Work over the last two decades using Bayes nets supposes that probability is a very sure guide to causality. I think not, and I shall argue that here. Almost all the objections I list are well-known. But I have come to see them in a different light by reflecting again on the original work in this area by Wolfgang Spohn and his recent defense of it in a paper titled “Bayesian Nets Are All There Is to Causality”.

This article agrees with Philip Kitcher that we should aim for a well-ordered science, one that answers the right questions in the right ways. Crucial to this is to address questions of use: Which scientific account is right for which system in which circumstances? This is a difficult question: evidence that may support a scientific claim in one context may not support it in another. Drawing on examples in physics and other sciences, this article argues that work on the warrant of theories in philosophy of science needs to change. Emphasis should move from the warrant of theories in the abstract to questions of evidence for use.

The volume brings together for the first time original essays by leading philosophers working on powers in relation to metaphysics, philosophy of natural and social science, philosophy of mind and action, epistemology, ethics and social and political philosophy. In each area, the concern is to show how a commitment to real causal powers affects discussion at the level in question. In metaphysics, for example, realism about powers is now recognized as providing an alternative to orthodox accounts of causation, modality, properties and laws. Dispositional realist philosophers of science, meanwhile, argue that a powers ontology allows for a proper account of the nature of scientific explanation. In the philosophy of mind there is the suggestion that agency is best understood in terms of the distinctive powers of human beings. Those who take virtue theoretic approaches in epistemology and ethics have long been interested in the powers that allow for knowledge and/or moral excellence. In social and political philosophy, finally, powers theorists are interested in the powers of sociological phenomena such as collectivities, institutions, roles and/or social relations, but also in the conditions of possibility for the cultivation of the powers of individuals. The book will be of interest to philosophers working in any of these areas, as well as to historians of philosophy, political theorists and critical realists.

Four distinguished authors have been brought together to produce this elegant study of a much-neglected figure. The book is divided into three sections: Neurath's biographical background and the economic and social context of his ideas; his theory of science; and the development of his role in debates on Marxist concepts of history and his own conception of science. Coinciding with the emergence of serious interest in logical positivism, this timely publication will redress a current imbalance in the history and philosophy of science.

Most of the regularities that get represented as ‘laws’ in our sciences arise from, and are to be found regularly associated with, the successful operation of a nomological machine. Reference to the nomological machine must be included in the cp-clause of a cp-law if the entire cp-claim is to be true. We agree, for example, that ‘ceteris paribus, aspirins cure headaches’, but insist that they can only do so when swallowed by someone with the right physiological makeup and a headache. Besides providing a necessary condition on the truth of the cp-law claim, recognising the nomological machine has great practical importance. Referring to the nomological machine makes explicit where the regularities are to be found, which is of central importance to the use of cp-laws for prediction and manipulation. Equally important, bringing the nomological machine to the fore brings into focus the make-up of the machine—its parts, their powers and their arrangements—and its context case-by-case.

We argue against the common view that it is impossible to give a causal account of the distant correlations that are revealed in EPR-type experiments. We take a realistic attitude about quantum mechanics which implies a willingness to modify our familiar concepts according to its teachings. We object to the argument that the violation of factorizability in EPR rules out causal accounts, since such an argument is at best based on the desire to retain a classical description of nature that consists of processes that are continuous in space and time. We also do not think special relativity prohibits the superluminal propagation of causes in EPR, for the phenomenon of quantum measurement may very well fall outside the domain of application of special relativity. It is possible to give causal accounts of EPR as long as we are willing to take quantum mechanics seriously, and we offer two such accounts.

Are the laws of nature consistent with contingency about what happens in the world? That depends on what the laws of nature actually are, but it also depends on what they are like. The latter is the concern of this chapter, which looks at three views that are widely endorsed: ‘Humean’ regularity accounts, laws as relations among universals, and disposition/powers accounts. Given an account of what laws are, what follows about how much contingency, and of what kinds, laws allow? In all three cases, the authors argue, the root idea of what laws are does not settle the issue of whether they allow contingency. Advocates of the different accounts may argue for one view or another on the issue, but this will be an add-on rather than a consequence of the basic view about what laws are.

Empirical adequacy matters directly - as it does for antirealists - if we aim to get all or most of the observable facts right, or indirectly - as it does for realists - as a symptom that the claims we make about the theoretical facts are right. But why should getting the facts - either theoretical or empirical - right be required of an acceptable theory? Here we endorse two other jobs that good theories are expected to do: helping us with a) understanding and b) managing the world. Both are of equal, often greater, importance than getting a swathe of facts right, and empirical adequacy fares badly in both. It is not needed for doing these jobs and in many cases it gets in the way of doing them efficiently.
