This is a much-needed new introduction to a field that has been transformed in recent years by exciting new subjects, ideas, and methods. It is designed for students in both philosophy and the social sciences. Topics include ontology, objectivity, method, measurement, and causal inference, as well as such issues as well-being and climate change.

Most of the regularities that get represented as ‘laws’ in our sciences arise from, and are to be found regularly associated with, the successful operation of a nomological machine. Reference to the nomological machine must be included in the cp-clause of a cp-law if the entire cp-claim is to be true. We agree, for example, that ‘ceteris paribus, aspirins cure headaches’, but insist that they can only do so when swallowed by someone with the right physiological makeup and a headache. Besides providing a necessary condition on the truth of the cp-law claim, recognising the nomological machine has great practical importance. Referring to the nomological machine makes explicit where the regularities are to be found, which is of central importance to the use of cp-laws for prediction and manipulation. Equally important, bringing the nomological machine to the fore brings into focus the make-up of the machine—its parts, their powers and their arrangements—and its context case-by-case.

Causation is in trouble, at least as it is pictured in current theories in philosophy and in economics as well, where causation is also once again in fashion. In both disciplines the accounts of causality on offer are modelled too closely either on one or another favoured method for hunting causes or on assumptions about the uses to which causal knowledge can be put, generally for predicting the results of our efforts to change the world. The first kind of account supplies no reason to think that causal knowledge, as it is pictured, is of any use; the second supplies no reason to think our best methods will be reliable for establishing causal knowledge. So, if these accounts are all there is to be had, how do we get from method to use? Of what use is knowledge of causal laws that we work so hard to obtain?

Randomized controlled trials (RCTs) are widely taken as the gold standard for establishing causal conclusions. Ideally conducted they ensure that the treatment ‘causes’ the outcome—in the experiment. But where else? This is the venerable question of external validity. I point out that the question comes in two importantly different forms: Is the specific causal conclusion warranted by the experiment true in a target situation? What will be the result of implementing the treatment there? This paper explains how the probabilistic theory of causality implies that RCTs can establish causal conclusions and thereby provides an account of what exactly that causal conclusion is. Clarifying the exact form of the conclusion shows just what is necessary for it to hold in a new setting and also how much more is needed to see what the actual outcome would be there were the treatment implemented.

What kinds of evidence reliably support predictions of effectiveness for health and social care interventions? There is increasing reliance, not only for health care policy and practice but also for more general social and economic policy deliberation, on evidence that comes from studies whose basic logic is that of JS Mill's method of difference. These include randomized controlled trials, case–control studies, cohort studies, and some uses of causal Bayes nets and counterfactual-licensing models like ones commonly developed in econometrics. The topic of this paper is the 'external validity' of causal conclusions from these kinds of studies. We shall argue two claims. Claim, negative: external validity is the wrong idea; claim, positive: 'capacities' are almost always the right idea, if there is a right idea to be had. If we are right about these claims, it makes big problems for policy decisions. Many advice guides for grading policy predictions give top grades to a proposed policy if it has two good Mill's-method-of-difference studies that support it. But if capacities are to serve as the conduit for support from a method-of-difference study to an effectiveness prediction, much more evidence, and evidence of a much different kind, is required. We will illustrate the complexities involved with the case of multisystemic therapy, an internationally adopted intervention to try to diminish antisocial behaviour in young people.

This paper critically analyzes Sherrilyn Roush's (Tracking truth: knowledge, evidence and science, 2005) definition of evidence and especially her powerful defence that in the ideal, a claim should be probable to be evidence for anything. We suggest that Roush treats not one sense of 'evidence' but three: relevance, leveraging and grounds for knowledge; and that different parts of her argument fare differently with respect to different senses. For relevance, we argue that probable evidence is sufficient but not necessary for Roush's own two criteria of evidence to be met. With respect to grounds for knowledge, we agree that high probability evidence is indeed ideal for the central reason Roush gives: When believing a hypothesis on the basis of e it is desirable that e be probable. But we maintain that her further argument that Bayesians need probable evidence to warrant the method they recommend for belief revision rests on a mistaken interpretation of Bayesian conditionalization. Moreover, we argue that attempts to reconcile Roush's arguments with Bayesianism fail. For leveraging, which we agree is a matter of great importance, the requirement that evidence be probable suffices for leveraging to the probability of the hypothesis if either one of Roush's two criteria for evidence is met. Insisting on both then seems excessive. To finish, we show how evidence, as Roush defines it, can fail to track the hypothesis. This can be remedied by adding a requirement that evidence be probable, suggesting another rationale for taking probable evidence as ideal—but only for a grounds-for-knowledge sense of evidence.

How can philosophy of science be of more practical use? One thing we can do is provide practicable advice about how to determine when one empirical claim is relevant to the truth of another; i.e., about evidential relevance. This matters especially for evidence-based policy, where advice is thin—and misleading—about how to tell what counts as evidence for policy effectiveness. This paper argues that good efficacy results (as in randomized controlled trials), which are all the rage now, are only a very small part of the story. To tell what facts are relevant for judging policy effectiveness, we need to construct causal scenarios about what will happen when the policy is implemented.

This paper argues that even when simple analogue models picture parallel worlds, they generally still serve as isolating tools. But there are serious obstacles that often stop them isolating in just the right way. These are obstacles that face any model that functions as a thought-experiment but they are especially pressing for economic models because of the paucity of economic principles. Because of the paucity of basic principles, economic models are rich in structural assumptions. Without these no interesting conclusions can be drawn. This, however, makes trouble when it comes to exporting conclusions from the model to the world. One uncontroversial constraint on induction from special cases is to beware of extending conclusions to situations that we know are different in relevant respects. In the case of economic models it is clear by inspection that the unrealistic structural assumptions of the model are intensely relevant to the conclusion. Any inductive leap to a real situation seems a bad bet.

Nancy Cartwright is one of the most distinguished and influential contemporary philosophers of science. Despite the profound impact of her work, there is neither a systematic exposition of Cartwright’s philosophy of science nor a collection of articles that contains in-depth discussions of the major themes of her philosophy. This book is devoted to a critical assessment of Cartwright’s philosophy of science and contains contributions from Cartwright's champions and critics. Broken into three parts, the book begins by addressing Cartwright's views on the practice of model building in science and the question of how models represent the world before moving on to a detailed discussion of methodologically and metaphysically challenging problems. Finally, the book addresses Cartwright's original attempts to clarify profound questions concerning the metaphysics of science. With contributions from leading scholars, such as Ronald N. Giere and Paul Teller, this unique volume will be extremely useful to philosophers of science the world over.

Hunting Causes and Using Them argues that causation is not one thing, as commonly assumed, but many. There is a huge variety of causal relations, each with different characterizing features, different methods for discovery and different uses to which it can be put. In this collection of new and previously published essays, Nancy Cartwright provides a critical survey of philosophical and economic literature on causality, with a special focus on the currently fashionable Bayes-nets and invariance methods – and it exposes a huge gap in that literature. Almost every account treats either exclusively how to hunt causes or how to use them. But where is the bridge between? It’s no good knowing how to warrant a causal claim if we don’t know what we can do with that claim once we have it. This book will interest philosophers, economists and social scientists.

In “The Toolbox of Science” (1995) together with Towfic Shomar we advocated a form of instrumentalism about scientific theories. We separately developed this view further in a number of subsequent works. Steven French, James Ladyman, Otavio Bueno and Newton Da Costa (FLBD) have since written at least eight papers and a book criticising our work. Here we defend ourselves. First we explain what we mean in denying that models derive from theory – and why their failure to do so should be lamented. Second we defend our use of the London model of superconductivity as an example. Third we point out both advantages and weaknesses of FLBD’s techniques in comparison to traditional Anglophone versions of the semantic conception. Fourth we show that FLBD’s version of the semantic conception has not been applied to our case study. We conclude by raising doubts about FLBD’s overall project.

Daniel Hausman and James Woodward claim to prove that the causal Markov condition, so important to Bayes-nets methods for causal inference, is the ‘flip side’ of an important metaphysical fact about causation—that causes can be used to manipulate their effects. This paper disagrees. First, the premise of their proof does not demand that causes can be used to manipulate their effects but rather that if a relation passes a certain specific kind of test, it is causal. Second, the proof is invalid. Third, the kind of testability they require can easily be had without the causal Markov condition. Sections: Introduction; Earlier views: manipulability v testability; Increasingly weaker theses; The proof is invalid; MOD* is implausible; Two alternative claims and their defects; A true claim and a valid argument; Indeterminism; Overall conclusion.