Authors: Sarah M. Roe; Bert Baumgaertner
Abstract: Mechanistic accounts of explanation have recently found popularity within philosophy of science. Here we introduce the idea of an extended mechanistic explanation, which makes explicit room for the role of the environment in explanation. After delineating Craver and Bechtel's (2007) account, we argue that this suggestion is not sufficiently robust when we take seriously the mechanistic environment and the modeling practices involved in studying contemporary complex biological systems. Our goal is to extend the already profitable mechanistic picture by pointing out the importance of the mechanistic environment. It is our belief that extended mechanistic explanations, or mechanisms that take into consideration the temporal sequencing of the interplay between the mechanism and the environment, allow for mechanistic explanations of a broader group of scientific phenomena.
PubDate: 2016-10-21
DOI: 10.1007/s10838-016-9356-6

Abstract: The aim of this paper is to provide a characterization of ability theories of practice and, in the process, to defend Pierre Bourdieu's ability theory against Stephen Turner's objections. In Part I, I outline ability theorists' conception of practices, together with their objections to claims about rule following and rule explanations. In Part II, I turn to the question of what ability theorists take to be the alternative to rule following and rule explanations. Ability theorists have offered, and been ascribed, somewhat different answers to this question, and their replies, or positive accounts, have been heavily criticized by Turner. Given this state of the debate, I focus on the positive account advanced by a single, and highly famous, ability theorist of practice, Pierre Bourdieu. Moreover, I show that, despite Turner's claims to the contrary, his arguments do not refute Bourdieu's positive account.
PubDate: 2016-09-15
DOI: 10.1007/s10838-016-9355-7

Authors: José Luis Luján; Oliver Todt; Juan Bautista Bengoetxea
Abstract: Mechanistic information is used in the field of risk assessment to clarify two controversial methodological issues: the selection of inference guides and the definition of standards of evidence. In this paper we analyze the concept of mechanistic information in risk assessment by drawing on previous philosophical analyses of mechanistic explanation. Our conclusion is that the conceptual analysis of mechanistic explanation facilitates a better characterization of the concept of mechanistic information. However, it also shows that the use of this kind of information in risk assessment is heavily influenced by pragmatic factors, which have not been sufficiently taken into account in philosophical analysis. Mechanistic models are like hypotheses that have to be validated empirically. Because of their dependence on standards of evidence, they are subject to the same pragmatic factors. Therefore, appealing to mechanistic information does not bring closure to the methodological controversies in risk assessment.
PubDate: 2016-09-01
DOI: 10.1007/s10838-015-9306-8

Authors: René van Woudenberg; Joelle Rothuizen-van der Steen
Abstract: It has recently been argued that the following Rule should be part of any characterization of science: claims concerning specific disputed facts should be endorsed only if they are sufficiently supported by the application of validated methods of research or discovery. It has furthermore been argued that acceptance of this Rule should lead one to reject religious belief. This paper argues, first, that the Rule, as stated, should not be accepted, as it suffers from a number of problems, and second, that even if the Rule were acceptable, it should not lead one to reject religious belief.
PubDate: 2016-09-01
DOI: 10.1007/s10838-015-9313-9

Authors: Michael Poznic
Abstract: The notion of scientific representation plays a central role in current debates on modeling in the sciences. One major epistemic virtue of successful models, perhaps the major one, is their capacity to adequately represent specific phenomena or target systems. According to similarity views of scientific representation, models should be similar to their corresponding targets in order to represent them. In this paper, Suárez's arguments against similarity views of representation are scrutinized. The upshot is that the intuition that scientific representation involves similarity is not refuted by the arguments. The arguments do not make the case for the strong claim that similarity between vehicles and targets is neither necessary nor sufficient for scientific representation. In particular, a similarity view can still uphold the following thesis: a vehicle is a scientific representation of a target only if the vehicle is similar to the target in relevant respects and to a specific degree.
PubDate: 2016-09-01
DOI: 10.1007/s10838-015-9307-7

Authors: Nicola Mößner
Abstract: Without doubt, there is a great diversity of scientific images, both with regard to their appearances and their functions. Diagrams, photographs, drawings, etc. serve as evidence in publications, as eye-catchers in presentations, and as surrogates for the research object in scientific reasoning. This fact has been highlighted by Stephen M. Downes, who takes this diversity as a reason to argue against a unifying representation-based account of how visualisations play their epistemic role in science. In the following paper, I will suggest an alternative explanation of the diversity of scientific images, one that refers to processes caused by the social setting of science. I will spell out what exactly is meant by this with the aid of Ludwik Fleck's theory of the social mechanisms of scientific communication.
PubDate: 2016-09-01
DOI: 10.1007/s10838-016-9327-y

Abstract: Recent philosophical analyses of the epistemic dimension of images in the sciences show a certain trend toward acknowledging potential roles of these images beyond their merely decorative or pedagogical functions. We argue, however, that this new debate has so far paid little attention to a special type of picture, which we call 'visual metaphor', and to its versatile heuristic potential in organizing data, supporting communication, and guiding research, modeling, and theory formation. Based on a case study of Conrad Hal Waddington's epigenetic landscape images in biology, we develop a descriptive framework applicable to the heuristic roles of various visual metaphors in the sciences.
PubDate: 2016-08-12
DOI: 10.1007/s10838-016-9353-9

Abstract: Where should computer simulations be located on the 'usual methodological map' (Galison 1996, 120), which distinguishes experiment from (mathematical) theory? Specifically, do simulations ultimately qualify as experiments or as thought experiments? Ever since Galison raised that question, a passionate debate has developed, pushing many issues to the forefront of discussions concerning the epistemology and methodology of computer simulation. This review article illuminates the positions in that debate, evaluates the discourse, and gives an outlook on questions that have not yet been addressed.
PubDate: 2016-08-10
DOI: 10.1007/s10838-016-9354-8

Abstract: In this paper, I offer a detailed reconstruction and a critical analysis of Abraham Maslow's neglected psychological reading of Thomas Kuhn's famous dichotomy between 'normal' and 'revolutionary' science. Maslow briefly expounded this reading four years after the first edition of Kuhn's The Structure of Scientific Revolutions, in his small book The Psychology of Science: A Reconnaissance (1966), and it relies heavily on his extensive earlier writing in motivational and personality psychology. Maslow's Kuhnian ideas, put forward as part of a larger program for the psychology of science already outlined in his 1954 magnum opus Motivation and Personality, are analyzed not only in the context of Kuhn's original 'psychologizing' attitude toward understanding the nature and development of science, but also in a broader historical, intellectual, and social context.
PubDate: 2016-07-22
DOI: 10.1007/s10838-016-9352-x

Abstract: We apply Benacerraf's distinction between mathematical ontology and mathematical practice (or the structures mathematicians use in practice) to examine contrasting interpretations of the infinitesimal mathematics of the seventeenth and eighteenth centuries in the work of Bos, Ferraro, Laugwitz, and others. We detect Weierstrass's ghost behind some of the received historiography on Euler's infinitesimal mathematics, as when Ferraro proposes to understand Euler in terms of a Weierstrassian notion of limit and Fraser declares classical analysis to be a "primary point of reference for understanding the eighteenth-century theories." Meanwhile, scholars like Bos and Laugwitz seek to explore Eulerian methodology, practice, and procedures in a way more faithful to Euler's own. Euler's use of infinite integers and the associated infinite products is analyzed in the context of his infinite product decomposition for the sine function. Euler's principle of cancellation is compared to the Leibnizian transcendental law of homogeneity. The Leibnizian law of continuity similarly finds echoes in Euler. We argue that Ferraro's assumption that Euler worked with a classical notion of quantity is symptomatic of a post-Weierstrassian placement of Euler in the Archimedean track for the development of analysis, as well as of a blurring of the distinction between the dual tracks noted by Bos. Interpreting Euler in an Archimedean conceptual framework obscures important aspects of Euler's work. Such a framework is profitably replaced by a syntactically more versatile modern infinitesimal framework that provides better proxies for his inferential moves.
PubDate: 2016-07-19
DOI: 10.1007/s10838-016-9334-z
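For reference, the infinite product decomposition for the sine function mentioned in this abstract is the classical Euler identity:

$$\sin x \;=\; x \prod_{n=1}^{\infty} \left( 1 - \frac{x^{2}}{n^{2}\pi^{2}} \right)$$

Euler arrived at it by treating $\sin x$ as an "infinite polynomial" with roots at $0, \pm\pi, \pm 2\pi, \ldots$, which is where the infinite integers and infinite products discussed in the abstract enter his derivation.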

Abstract: One of the many research projects of Jaakko Hintikka was entitled "Logical tools for human thinking and their history". This is in fact an apt summary of the lifetime work of this master logician, who developed several new methods and systems in mathematical and philosophical logic, among them distributive normal forms, model sets, possible-worlds semantics, epistemic logic, doxastic logic, inductive logic, semantic information, game-theoretical semantics, the interrogative approach to inquiry, and independence-friendly logic. He applied them to study problems in the philosophy of language, formal epistemology, and the philosophy of science. He combined systematic work with novel interpretations of important historical figures such as Aristotle, Leibniz, Kant, Peirce, and Wittgenstein. Hintikka was one of the most cited analytic philosophers, and he also influenced logic and philosophy as a successful teacher and as the long-time editor of the journal Synthese.
PubDate: 2016-07-05
DOI: 10.1007/s10838-016-9347-7

Abstract: The aim of this article is to propose a way to determine to what extent a hypothesis H is confirmed if it has successfully passed a classical significance test. Bayesians have already raised many serious objections against significance testing, but in doing so they have always had to rely on epistemic probabilities and a further Bayesian analysis, which are rejected by classical statisticians. I therefore suggest a purely frequentist evaluation procedure for significance tests that should also be acceptable to a classical statistician. This procedure likewise reveals some additional problems of significance tests. In some situations, such tests offer only weak incremental support for a hypothesis, although an absolute confirmation would be necessary, and they overestimate positive results for small effects, since the confirmation of H is often rather marginal in these cases. In specific cases, for example ESP hypotheses such as precognition, this phenomenon leads too easily to a significant confirmation and can be regarded as a form of the probabilistic falsification fallacy.
PubDate: 2016-06-22
DOI: 10.1007/s10838-016-9341-0

Abstract: Transcendental conceptions of subjectivity, beginning with Descartes and including Kant, Fichte, and Husserl as well as neo-transcendental accounts of the twentieth century, try to explicate a subject's subjectivity as a necessary condition for all theoretical and practical validity claims. According to this conception, only this subject-theoretical presupposition allows for an adequate foundation of the notions of authorship of action (autonomy) and self-determination. However, the conceptual self-explication of this position faces some inherent difficulties, as representatives of this school of thought have themselves repeatedly pointed out. Moreover, it seems as if the constitutional achievements of transcendental philosophy are increasingly being detached from philosophy: owing to the development of the modern sciences of man, they are step by step being conceived as objects of empirical research. This paper looks critically into this thesis of detachment.
PubDate: 2016-06-21
DOI: 10.1007/s10838-016-9331-2

Authors: K. Brad Wray
Abstract: Devitt (J Gen Philos Sci 42:285–293, 2011) has developed an interesting defense of realism against the threats posed by (1) the Pessimistic Induction and (2) the Argument from Unconceived Alternatives. Devitt argues that the best explanation for the success of our current theories, and for the fact that they are superior to the theories they replaced, is that they were developed and tested with the aid of better methods than those used to develop and test the many theories discarded earlier in the history of science. It is no surprise that theories developed earlier in the history of science needed to be replaced; our current theories are different, having been developed and tested with the aid of these more recently developed superior methods. I critically analyze Devitt's defense of realism and argue that recent developments in methodology cannot support the claims Devitt makes. I present an argument I call the "Argument from Unconceived Methods": given the history of science, it seems likely that scientists will continue to develop new methods in the future, and some of these methods will enable scientists to generate data that cannot be reconciled with currently accepted theories. Consequently, our current best theories are not immune from being replaced in the future by radically different theories.
PubDate: 2016-05-07
DOI: 10.1007/s10838-016-9338-8

Authors: Finnur Dellsén
Abstract: There are many putative counterexamples to the view that all scientific explanations are causal explanations. Using a new theory of what it is to be a causal explanation, Bradford Skow has recently argued that several of the putative counterexamples fail to be non-causal. This paper defends some of the counterexamples by showing how Skow's argument relies on an overly permissive theory of causal explanations.
PubDate: 2016-05-06
DOI: 10.1007/s10838-016-9333-0