Through an examination of the actual research strategies and assumptions underlying the Human Genome Project (HGP), it is argued that the epistemic basis of the initial model organism programs is not best understood as reasoning via causal analog models (CAMs). In order to answer a series of questions about what is being modeled and what claims about the models are warranted, a descriptive epistemological method is employed that uses historical techniques to develop detailed accounts which, in turn, help to reveal forms of reasoning that are explicit, or more often implicit, in the practice of a particular field of scientific study. It is suggested that a more valid characterization of the reasoning structure at work here is a form of case-based reasoning. This conceptualization of the role of model organisms can guide our understanding and assessment of these research programs, their knowledge claims and progress, and their limitations, as well as how we educate the public about this type of biomedical research.

Scientific models represent aspects of the empirical world. I explore to what extent this representational relationship, given the specific properties of models, can be analysed in terms of propositions to which truth or falsity can be attributed. For example, models frequently entail false propositions despite the fact that they are intended to say something "truthful" about phenomena. I argue that the representational relationship is constituted by model users "agreeing" on the function of a model, on the fit with data and on the aspects of a phenomenon that are modelled. Model users weigh the propositions entailed by a model and from this decide which of these propositions are crucial to the acceptance and continued use of the model. Thus, models represent phenomena when certain propositions they entail are true, but this alone does not exhaust the representational relationship. Therefore, the constraints that produce the choice of the relevant propositions in a model must also be examined and their analysis contributes to understanding the relationship between models and phenomena.

I discuss here the definition of computer simulations, and more specifically the views of Humphreys, who considers that an object is simulated when a computer provides a solution to a computational model, which in turn represents the object of interest. I argue that Humphreys's concepts cannot fully and successfully analyse a case of contemporary simulation in physics, which is more complex than the examples considered so far in the philosophical literature. I therefore modify Humphreys's definition of simulation. I allow for several successive layers of computational models, and I discuss the relations that exist between these models, the computer, and the object under study. An aim of my proposal is to clarify the distinction between computational models and numerical methods, and to better understand the representational and the computational functions of models in simulations.

We propose that scientific representation is a special case of a more general notion of representation, and that the relatively well worked-out and plausible theories of the latter are directly applicable to the scientific special case. Construing scientific representation in this way makes the so-called “problem of scientific representation” look much less interesting than it has seemed to many, and suggests that some of the (hotly contested) debates in the literature are concerned with non-issues.

In this paper we present a new framework of idealization in biology. We characterize idealizations as a network of counterfactual and hypothetical conditionals that can exhibit different “degrees of contingency”. We use this idea to say that, in departing more or less from the actual world, idealizations can serve numerous epistemic, methodological or heuristic purposes within scientific research. We argue that, in part, this structure explains why idealizations, despite being deformations of reality, are so successful in scientific practice. For illustrative purposes, we provide an example from population genetics, the Wright-Fisher Model.
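The Wright-Fisher Model invoked above is itself a compact illustration of idealization at work: it deforms reality by assuming a fixed-size, randomly mating population with non-overlapping generations, so that each generation's allele count reduces to a binomial draw from the previous one. A minimal textbook simulation of the resulting genetic drift is sketched below; the function and parameter values are illustrative, not drawn from the paper.

```python
import random

def wright_fisher(p0, n, generations, seed=0):
    """Simulate allele-frequency drift under the idealized
    Wright-Fisher assumptions: constant diploid population size n
    (2n gene copies), random mating, non-overlapping generations.
    Returns the trajectory of the frequency of allele A."""
    rng = random.Random(seed)
    freqs = [p0]
    p = p0
    for _ in range(generations):
        # Each of the 2n gene copies in the next generation is an
        # independent draw from the current allele pool.
        copies = sum(1 for _ in range(2 * n) if rng.random() < p)
        p = copies / (2 * n)
        freqs.append(p)
    return freqs

trajectory = wright_fisher(p0=0.5, n=100, generations=50)
```

Despite its deliberate distortions, the model captures how smaller populations drift faster, which is the sort of epistemic payoff the framework above is meant to explain.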

Environmental advisory institutions around the world assume that ecological theory can directly inform decision-making in environmental policy and natural resource management. Accordingly, theoretical ecological models (TEMs) are supposed to serve as reliable guides for adjudicating between policy and management alternatives. Leading ecologists even promise that TEMs can “provide a strong guide for environmental management and resource conservation”. At the same time, criticisms of theory-based policy and management have persisted since the 1970s—after the overall failure of the International ..

Several prominent philosophers of science, most notably Ron Giere, propose that scientific theories are collections of models and that models represent the objects of scientific study. Some, including Giere, argue that models represent in the same way that pictures represent. Aestheticians have brought the picturing relation under intense scrutiny and presented important arguments against the tenability of particular accounts of picturing. Many of these arguments from aesthetics can be used against accounts of representation in philosophy of science. I rely on Dominic Lopes' recent summary of arguments against various views of picturing and reformulate some of them to fit the philosophy of science context. My specific targets here are Giere and Steven French. I go on to argue that assuming all scientific models and images represent in the same way is not the best guide to understanding scientific practice.

Kenneth Wilson won the Nobel Prize in Physics in 1982 for applying the renormalization group, which he learnt from quantum field theory (QFT), to problems in statistical physics—the induced magnetization of materials (ferromagnetism) and the evaporation and condensation of fluids (phase transitions). See Wilson (1983). The renormalization group got its name from its early applications in QFT. There, it appeared to be a rather ad hoc method of subtracting away unwanted infinities. The further allegation was that the procedure is so horrendously complicated that one cannot see the forest for the trees. The second allegation is justified in the applications that made it famous. But it is not true of the following example, which appears in Chowdhury and Stauffer (2000, 486-488).
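The Chowdhury and Stauffer example is not reproduced in the abstract. To illustrate just how uncomplicated a renormalization-group calculation can be, here is a standard textbook sketch (not the paper's example) of real-space decimation for the one-dimensional Ising chain: tracing out every second spin yields the exact recursion K' = (1/2) ln cosh(2K) for the dimensionless coupling.

```python
import math

def decimate(k):
    """One real-space RG step for the 1D Ising chain: summing out
    every second spin renormalizes the coupling K -> K'."""
    return 0.5 * math.log(math.cosh(2.0 * k))

# Iterating the map from any finite coupling drives K toward 0,
# reflecting the absence of a finite-temperature transition in 1D.
k = 1.0
for _ in range(20):
    k = decimate(k)
```

Each step halves the number of spins while preserving the partition function up to a spin-independent factor; the resulting flow of the coupling summarizes the physics with none of the "forest for the trees" complexity alleged above.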

Models occupy a central role in the scientific endeavour. Among the many purposes they serve, representation is of great importance. Many models are representations of something else; they stand for, depict, or imitate a selected part of the external world (often referred to as target system, parent system, original, or prototype). Well-known examples include the model of the solar system, the billiard ball model of a gas, the Bohr model of the atom, the Gaussian-chain model of a polymer, the MIT bag model of quark confinement, the Lorenz model of the atmosphere, the Lotka-Volterra model of the predator-prey interaction, or the hydraulic model of an economy, to mention just a few. All these models represent their target systems (or selected parts of them) in one way or another.

We present a linear formalism which makes explicit and precise the confirming effect of independent multiple observers and repeated trials on composite ratings, taking as parameters quantitative estimates of the subjective inputs discussed. Note that the subjective probability used here is employed to study the past, not to predict the future, and is rather limited to what has been called in artificial intelligence "certainty factors," which are arbitrary, or, better known, the arbitrary values ascribed to predicates in fuzzy "logic." This is clear because the scales themselves are arbitrary and of the radio-button type.

A second demonstration (more powerful because more subtle) of how a prevalent scope error can render a model invalid, and thus how difficult modeling really is. The prevalence indicates the difficulty, as the error is often built-in and very subtle and thus easily escapes notice.

A demonstration of two difficulties, both prevalent, in modeling. The first is scopal errors, which are often hard to detect because of their subtlety. The second is that two equations, though facially identical, are implicitly conjoined to /different/ inequalities, limiting the range of the variables or parameters in the equations, thereby changing the (here, ecological) interpretation of the equation, and thus its meaning, and therefore whether it is or is not an adequate model.

In a recent article in this Journal, Fumagalli argues that economists are provisionally justified in resisting prominent calls to integrate neural variables into economic models of choice. In other articles, various authors engage with Fumagalli’s argument and try to substantiate three often-made claims concerning neuroeconomic modelling. First, the benefits derivable from neurally informing some economic models of choice do not involve significant tractability costs. Second, neuroeconomic modelling is best understood within Marr’s three-level analysis framework for information-processing systems. And third, neural findings enable choice modellers to confirm the causal relevance of variables posited by competing economic models, identify causally relevant variables overlooked by existing models, and explain observed behavioural variability better than standard economic models. In this paper, I critically examine these three claims and respond to the related criticisms of Fumagalli’s argument. Moreover, I qualify and extend Fumagalli’s account of how trade-offs between distinct modelling desiderata hamper neuroeconomists’ attempts to improve economic models of choice. I then draw on influential neuroeconomic studies to argue that even the putatively best available neural findings fail to substantiate current calls for a neural enrichment of economic models.

Though it is held that some models in science have explanatory value, there is no conclusive agreement on what provides them with this value. One common view is that models have explanatory value vis-à-vis some target systems because they are developed using an abstraction process. Though I think this is correct, I believe it is not the whole picture. In this paper, I argue that, in addition to the well-known process of abstraction understood as an omission of features or information, there is also a family of abstraction processes that involve aggregation of features or information and that these processes play an important role in endowing the models they are used to build with explanatory value. After offering a taxonomy of abstraction processes involving aggregation, I show by considering in detail several models drawn from different sciences that the abstraction processes involving aggregation that are used to build these models are responsible for their having explanatory value.

The present paper focuses on a particular class of models intended to describe and explain the physical behaviour of systems that consist of a large number of interacting particles. Such many-body models are characterized by a specific Hamiltonian (energy operator) and are frequently employed in condensed matter physics in order to account for such phenomena as magnetism, superconductivity, and other phase transitions. Because of the dual role of many-body models as models of physical systems (with specific physical phenomena as their explananda) as well as mathematical structures, they form an important sub-class of scientific models, from which one can expect to draw general conclusions about the function and functioning of models in science, as well as to gain specific insight into the challenge of modelling complex systems of correlated particles in condensed matter physics. In particular, it is argued that many-body models contribute novel elements to the process of inquiry and open up new avenues of cross-model confirmation and model-based understanding. In contradistinction to phenomenological models, which have received comparatively more philosophical attention, many-body models typically gain their strength not from ‘empirical fit’ per se, but from their being the result of a constructive application of mature formalisms, which frees them from the grip of both ‘fundamental theory’ and an overly narrow conception of ‘empirical success’.

I argue for an intentional conception of representation in science that requires bringing scientific agents and their intentions into the picture. So the formula is: Agents (1) intend; (2) to use model, M; (3) to represent a part of the world, W; (4) for some purpose, P. This conception legitimates using similarity as the basic relationship between models and the world. Moreover, since just about anything can be used to represent anything else, there can be no unified ontology of models. This whole approach is further supported by a brief exposition of some recent work in cognitive, or usage-based, linguistics. Finally, with all the above as background, I criticize the recently much discussed idea that claims involving scientific models are really fictions.

Most recent philosophical thought about the scientific representation of the world has focused on dyadic relationships between language-like entities and the world, particularly the semantic relationships of reference and truth. Drawing inspiration from diverse sources, I argue that we should focus on the pragmatic activity of representing, so that the basic representational relationship has the form: Scientists use models to represent aspects of the world for specific purposes. Leaving aside the terms "law" and "theory," I distinguish principles, specific conditions, models, hypotheses, and generalizations. I argue that scientists use designated similarities between models and aspects of the world to form both hypotheses and generalizations.

A general account of modeling in physics is proposed. Modeling is shown to involve three components: denotation, demonstration, and interpretation. Elements of the physical world are denoted by elements of the model; the model possesses an internal dynamic that allows us to demonstrate theoretical conclusions; these in turn need to be interpreted if we are to make predictions. The DDI account can be readily extended in ways that correspond to different aspects of scientific practice.

Semantic dispositionalism is the theory that a speaker’s meaning something by a given linguistic symbol is determined by her dispositions to use the symbol in a certain way. According to an objection by Saul Kripke, further elaborated in Kusch (2005), semantic dispositionalism involves ceteris paribus-clauses and idealisations, such as unbounded memory, that deviate from standard scientific methodology. I argue that Kusch misrepresents both ceteris paribus-laws and idealisation, neither of which factually approximate the behaviour of agents or the course of events, but, rather, identify and isolate nature’s component parts and processes. An analysis of current results in cognitive psychology vindicates the idealisations involved and certain counterfactual assumptions in science generally. In particular, results suggest that there can be causal continuity between the dispositional structure of actual objects and that of highly idealised objects. I conclude by suggesting that we can assimilate ceteris paribus-laws with disposition ascriptions insofar as they involve identical idealising assumptions.

Arabidopsis is currently the most popular and well-researched model organism in plant biology. This paper documents this plant's rise to scientific fame by focusing on two interrelated aspects of Arabidopsis research. One is the extent to which the material features of the plant have constrained research directions and enabled scientific achievements. The other is the crucial role played by the international community of Arabidopsis researchers in making it possible to grow, distribute and use plant specimens that embody these material features. I argue that at least part of the explosive development of this research community is due to its successful standardisation and to the subsequent use of Arabidopsis specimens as material models of plants. I conclude that model organisms have a double identity as both samples of nature and artifacts representing nature. It is the resulting ambivalence in their representational value that makes them attractive research tools for biologists.

Modeling is an important scientific practice, yet it raises significant philosophical puzzles. Models are typically idealized, and they are often explored via imaginative engagement and at a certain “distance” from empirical reality. These features raise questions such as what models are and how they relate to the world. Recent years have seen a growing discussion of these issues, including a number of views that treat modeling in terms of indirect representation and analysis. Indirect views treat the model as a bona fide object, specified by the modeler and used to represent and reason about some portion of the concrete empirical world. On some indirect views, model systems are abstract entities, such as mathematical structures, while on other views they are concrete hypothetical things. Here I assess these views and offer a novel account of models. I argue that regarding models as abstracta results in some significant tensions with the practice of modeling, especially in areas where non-mathematical models are common. Furthermore, viewing models as concrete hypotheticals raises difficult questions about model-world relations. The view I argue for treats models as direct, albeit simplified, representations of targets in the world. I close by suggesting a treatment of model-world relations that draws on recent work by Stephen Yablo concerning the notion of partial truth.

Some philosophers of science – the present author included – appeal to fiction as an interpretation of the practice of modeling. This raises the specter of an incompatibility with realism, since fiction-making is essentially non-truth-regulated. I argue that the prima facie conflict can be resolved in two ways, each involving a distinct notion of fiction and a corresponding formulation of realism. The main goal of the paper is to describe these two packages. Toward the end I comment on how to choose among them.

Many biological investigations are organized around a small group of species, often referred to as ‘model organisms’, such as the fruit fly Drosophila melanogaster. The terms ‘model’ and ‘modelling’ also occur in biology in association with mathematical and mechanistic theorizing, as in the Lotka–Volterra model of predator-prey dynamics. What is the relation between theoretical models and model organisms? Are these models in the same sense? We offer an account on which the two practices are shown to have different epistemic characters. Theoretical modelling is grounded in explicit and known analogies between model and target. By contrast, inferences from model organisms are empirical extrapolations. Often such extrapolation is based on shared ancestry, sometimes in conjunction with other empirical information. One implication is that such inferences are unique to biology, whereas theoretical models are common across many disciplines. We close by discussing the diversity of uses to which model organisms are put, suggesting how these relate to our overall account. Contents: 1 Introduction; 2 Volterra and Theoretical Modelling; 3 Drosophila as a Model Organism; 4 Generalizing from Work on Model Organisms; 5 Phylogenetic Inference and Model Organisms; 6 Further Roles of Model Organisms (6.1 Preparative experimentation; 6.2 Model organisms as paradigms; 6.3 Model organisms as theoretical models; 6.4 Inspiration for engineers; 6.5 Anchoring a research community); 7 Conclusion.
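The Lotka–Volterra model named above is a convenient example of the "explicit and known analogies" that theoretical modelling trades in: nothing biological is sampled, only equations are analyzed. A generic forward-Euler discretization is sketched below; the parameter names and values are illustrative, not taken from the paper.

```python
def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt, steps):
    """Forward-Euler integration of the predator-prey equations
    dx/dt = alpha*x - beta*x*y (prey x),
    dy/dt = delta*x*y - gamma*y (predator y)."""
    x, y = x0, y0
    trajectory = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt
        dy = (delta * x * y - gamma * y) * dt
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory

# Illustrative parameters: the two populations oscillate rather
# than settling to a fixed point.
traj = lotka_volterra(10.0, 5.0, alpha=1.1, beta=0.4,
                      delta=0.1, gamma=0.4, dt=0.01, steps=1000)
```

The contrast with a model organism is stark: conclusions here follow from the stipulated structure of the equations, not from empirical extrapolation across shared ancestry.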

This article argues for an anti-deflationist view of scientific representation. Our discussion begins with an analysis of the recent Callender–Cohen deflationary view on scientific representation. We then argue that there are at least two radically different ways in which a thing can be represented: one is purely symbolic, and therefore conventional, and the other is epistemic. The failure to recognize that scientific models are epistemic vehicles rather than symbolic ones has led to the mistaken view that whatever distinguishes scientific models from other representational vehicles must merely be a matter of pragmatics. It is then argued that even though epistemic vehicles also contain conventional elements, they do their job of demonstration in spite of such elements.

The aim of this article is to discuss and develop the diagnosis of the Hardy-Weinberg law made by van Fraassen (1987, p. 110), according to which: 1) that law cannot be considered a law used as an axiom for classical population genetics as a whole, since it is an equilibrium-law that holds only under certain special conditions; 2) it just determines a subclass of models; 3) its generalization shades off into logical vacuity; and 4) more complex variants of the law can be deduced for more realistic assumptions. The discussion and development of this diagnosis will be carried out with the notions of another semantic conception of theories, related to that of van Fraassen, namely, the structuralist view of theories, and a rational reconstruction of classical population genetics made within the framework of such a metatheory, also presented in this paper.
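The equilibrium character of the Hardy-Weinberg law is easy to exhibit computationally. The sketch below is a standard textbook calculation (not drawn from the article) for a biallelic locus: under random mating and the law's other special conditions, genotypes reach the proportions p², 2pq, q² and the allele frequency is left unchanged from one generation to the next.

```python
def hardy_weinberg(p):
    """Expected genotype frequencies at Hardy-Weinberg equilibrium
    for a biallelic locus with allele frequencies p and q = 1 - p."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2.0 * p * q, "aa": q * q}

def allele_freq(genotypes):
    """Recover the frequency of allele A from genotype frequencies."""
    return genotypes["AA"] + 0.5 * genotypes["Aa"]

g = hardy_weinberg(0.3)
# Genotype frequencies sum to 1, and the recovered allele frequency
# equals the input: the equilibrium the law describes.
```

The calculation makes vivid why the law is an equilibrium-law rather than an axiom: it describes a fixed point that obtains only when selection, drift, mutation, and migration are all absent.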

Modeling involves the use of false idealizations, yet there is typically a belief or hope that modeling somehow manages to deliver true information about the world. The paper discusses one possible way of reconciling truth and falsehood in modeling. The key trick is to relocate truth claims by reinterpreting an apparently false idealizing assumption in order to make clear what possibly true assertion is intended when using it. These include interpretations in terms of negligibility, applicability, tractability, early-step, and more. Elaborations are suggested about their precise formulations, mutual relationships, and truth-aptness.

The behavior/structure methodological dichotomy as locus of scientific inquiry is closely related to the issue of modeling and theory change in scientific explanation. Given that the traditional tension between structure and behavior in scientific modeling is likely here to stay, considering the relevant precedents in the history of ideas could help us better understand this theoretical struggle. This better understanding might open up unforeseen possibilities and new instantiations, particularly in what concerns the proposed technological modification of the human condition. The sequential structure of this paper is twofold. The contributions of three philosophers better known in the humanities than in the study of science proper are laid out. The key theoretical notions interweaving the whole narrative are those of mechanization, constructability and simulation. They shall provide the conceptual bridge between these classical thinkers and the following section. Here, a panoramic view of three significant experimental approaches in contemporary scientific research is displayed, suggesting that their undisclosed ontological premises have deep roots in the Western tradition of the humanities. This ontological lock between core humanist ideals and recent research in biology and nanoscience is ultimately suggested as responsible for pervasively altering what is canonically understood as “human”.

Reference models of the earth’s interior play an important role in the acquisition of knowledge about the earth’s interior and the earth as a whole. Such models are used as a sort of standard reference against which data are compared. I argue that the use of reference models merits more attention than it has gotten so far in the literature on models, for it is an example of a method of doing science that has a long and significant history, and a study of reference models could increase our understanding of this methodology.

As scientists begin to study increasingly complex questions, many have turned to computer simulation to assist in their inquiry. This methodology has been challenged by both analytic modelers and experimentalists. A primary objection of analytic modelers is that simulations are simply too complicated to perform model verification. From the experimentalist perspective, the objection is that there is no means to demonstrate the reality of simulations. The aim of this paper is to consider objections from both of these perspectives, and to argue that a proper understanding and application of robustness analysis is able to resolve them.

The notion of activity is increasingly present in computer science. However, because it is used in many specific contexts, it has become vague. Here, the notion of activity is scrutinized in various contexts and, accordingly, put in perspective. It is discussed through four scientific disciplines: computer science, biology, economics, and epistemology. The definition of activity usually used in simulation is extended to new qualitative and quantitative definitions. In computer science, biology, and economics, the new definition of simulation activity is first applied critically. Then, activity is discussed generally. In epistemology, activity is discussed, in a prospective way, as a possible framework in models of human beliefs and knowledge.

Investigating random homicides involves constructing models of an odd sort. While the differences between these models and scientific models are radical, calling them models is justified both by functional and structural similarities. Serial homicide investigations illustrate the marked difference between theoretical models in science and the models applied in these criminal investigations. This is further illustrated by considering Glymourian bootstrapping in attempts to solve such homicides. The solutions that result differ radically from explanations in science that are confirmed or disconfirmed by occurrences. Unlike the scientist, the flatfoot gumshoe is also barefoot: he is bereft of a general, determinative theoretical frame. This result shows that criminal investigations do not apply science in the Galilean sense.

Philosophers of science have examined The Theory of Island Biogeography by Robert MacArthur and E. O. Wilson (1967) mainly due to its important contribution to modeling in ecology, but they have not examined it as a representative case of ecological explanation. In this paper, I scrutinize the type of explanation used in this paradigmatic work of ecology. I describe the philosophy of science of MacArthur and Wilson and show that it is mechanistic. Based on this account and in light of contributions to the mechanistic conception of explanation due to Craver (2007), and Bechtel and Richardson (1993), I argue that MacArthur and Wilson use a mechanistic approach to explain the species-area relationship. In light of this examination, I formulate a normative account of mechanistic explanation in ecology. Furthermore, I argue that it offers a basis for methodological unification of ecology and solves a dispute on the nature of ecology. Lastly, I show that proposals for a new paradigm of biogeography appear to maintain the norms of mechanistic explanation implicit in The Theory of Island Biogeography.

Progress in the last few decades in what is widely known as “Chaos Theory” has plainly advanced understanding in the several sciences it has been applied to. But the manner in which such progress has been achieved raises important questions about scientific method and, indeed, about the very objectives and character of science. In this presentation, I hope to engage my audience in a discussion of several of these important new topics.

How can a model that stops short of representing the whole truth about the causal production of a phenomenon help us to understand the phenomenon? I answer this question from the perspective of what I call the simple view of understanding, on which to understand a phenomenon is to grasp a correct explanation of the phenomenon. Idealizations, I have argued in previous work, flag factors that are causally relevant but explanatorily irrelevant to the phenomena to be explained. Though useful to the would-be understander, such flagging is only a first step. Are there any further and more advanced ways that idealized models aid understanding? Yes, I propose: the manipulation of idealized models can provide considerable insight into the reasons that some causal factors are difference-makers and others are not, which helps the understander to grasp the nature of explanatory connections and so to better grasp the explanation itself.

This paper discusses the idea that some of the causal factors that are responsible for the production of a natural phenomenon are explanatorily irrelevant and, thus, may be omitted or distorted. It argues against Craig Callender’s suggestion that the standard explanation of phase transitions in statistical mechanics may be considered a causal explanation, in Michael Strevens’ sense, as a distortion that can nevertheless successfully represent causal relations.

In this paper I challenge Paolo Palmieri’s reading of the Mach-Vailati debate on Archimedes’s proof of the law of the lever. I argue that the actual import of the debate concerns the possible epistemic (as opposed to merely pragmatic) role of mathematical arguments in empirical physics, and that, construed in this light, Vailati has the upper hand. This claim is defended by showing that Archimedes’s proof of the law of the lever is not a way of appealing to a non-empirical source of information, but a way of explicating the mathematical structure that can represent the empirical information at our disposal in the most general way.

Why do policies fail? How can we objectively choose the best policy from two (or more) competing alternatives? How can we create better policies? To answer these critical questions this book presents an innovative yet workable approach. Avoiding Policy Failure uses emerging metapolicy methodologies in case studies that compare successful policies with ones that have failed. Those studies investigate the systemic nature of each policy text to gain new insights into why policies fail. In addition to providing intriguing directions for research, this book also suggests a bold new standard for evaluating policies. While this method is broadly generalizable, specific examples are provided showing how to develop better Economic Policy, Military Policy, and Constitutional Organizations. This book shows scholars, researchers, and policy analysts how to develop more effective policies so that we may achieve our highest aspirations and avoid the horrendous failures of the past.