In recent years, the philosophical focus of the modeling literature has shifted from descriptions of general properties of models to an interest in different model functions. It has been argued that the diversity of models and their correspondingly different epistemic goals are important for developing intelligible scientific theories. However, more knowledge is needed on how a combination of different epistemic means can generate and stabilize new entities in science. This paper will draw on Rheinberger’s practice-oriented account of knowledge production. The conceptual repertoire of Rheinberger’s historical epistemology offers important insights for an analysis of the modelling practice. I illustrate this with a case study on network modeling in systems biology where engineering approaches are applied to the study of biological systems. I shall argue that the use of multiple representational means is an essential part of the dynamic of knowledge generation. It is because of—rather than in spite of—the diversity of constraints of different models that the interlocking use of different epistemic means creates a potential for knowledge production.

Contemporary visual epistemic practices in the biological sciences raise new questions of how to transform aniconic data measurements into images, and how the process of an imaging technique may change the material it is ‘depicting’. This case-oriented study investigates microscopic imagery, which is used by systems and synthetic biologists alike. The core argument is developed around the analysis of two recent methods, developed between 2003 and 2006: localization microscopy and photo-induced cell death. Far from functioning merely as illustrations of work done by other means, images can be understood as tools for discovery in their own right and as objects of investigation. Both methods deploy different constellations of intended and unintended interactions between visual appearance and underlying biological materiality. To characterize these new ways of interaction, the article introduces the notions of ‘operational images’ and ‘operational agency’. Despite all their novelty, operational images are still subject to conventions of seeing and depicting: phenomena emerging with the new method of localization microscopy have to be designed according to image traditions of older, conventional fluorescence microscopy to function properly as devices for communication between physicists and biologists. The article emerged from a laboratory study based on interviews conducted with researchers from the Kirchhoff-Institute for Physics and the German Cancer Research Center at Bioquant, Heidelberg, in 2011.

The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. However, here agreement among philosophers of science has tended to end, as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling, given its focus on the relationship between a single model and its supposed target system, and its neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.

I distinguish between two senses in which feminists have argued that the knower is social: 1. situated or socially positioned and 2. interdependent. I argue that these two aspects of the knower work in cooperation with each other in a way that can produce willful hermeneutical ignorance, a type of epistemic injustice absent from Miranda Fricker's Epistemic Injustice. Analyzing the limitations of Fricker's analysis of the trial of Tom Robinson in Harper Lee's To Kill a Mockingbird with attention to the way in which situatedness and interdependence work in tandem, I develop an understanding of willful hermeneutical ignorance, which occurs when dominantly situated knowers refuse to acknowledge epistemic tools developed from the experienced world of those situated marginally. Such refusals allow dominantly situated knowers to misunderstand, misinterpret, and/or ignore whole parts of the world.

This article develops a social epistemological analysis of Web-based search engines, addressing the following questions. First, what epistemic functions do search engines perform? Second, what dimensions of assessment are appropriate for the epistemic evaluation of search engines? Third, how well do current search engines perform on these? The article explains why they fulfil the role of a surrogate expert, and proposes three ways of assessing their utility as an epistemic tool—timeliness, authority prioritisation, and objectivity. “Personalisation” is a current trend in Internet-delivered services, and consists in tailoring online content to the interests of the individual user. It is argued here that personalisation threatens the objectivity of search results. Objectivity is a public good; so there is a prima facie case for government regulation of search engines.

Simulation as an epistemic tool between theory and practice: A Comparison of the Relationship between Theory and Simulation in Science and in Folk Psychology. In this paper I explore the concept of simulation that is employed by proponents of the so-called simulation theory within the debate about the nature and scientific status of folk psychology. According to simulation theory, folk psychology is not a sort of theory that postulates theoretical entities (mental states and processes) and general laws, but a practice whereby we put ourselves into others’ shoes and simulate their situation from our own perspective. On the basis of this sort of simulation, we supposedly know how we would act or think or feel, and then expect the same of others. A closer look at the concept of simulation reveals some problems with this view, but also helps to clarify the insight motivating simulation theory. Specifically, I defend the thesis that the analogy to simulations in science shows us how theoretical elements in folk psychology can be complemented by (i.e. not replaced by) the central idea of simulation theory – namely that our own cognitive habits and dispositions provide us with a resource that is distinct from propositional knowledge in folk psychology. I also discuss the idea that our use of simulations during cognitive development enables us to imitate the people around us and thereby to become more similar to them, which in turn makes simulation an increasingly effective epistemic strategy. Insofar as theoretical elements – such as the distinctions, relations, and entities referred to in folk psychological discourse – play a role in imitative learning, they are causally embedded in our cognitive development, so we have good reason to regard them as being among the real causes of our behavior.

Epistemic expressivism is the application of a nexus of ideas, which is prominent in ethical theory (more specifically, metaethics), to parallel issues in epistemological theory (more specifically, metaepistemology). Here, in order to help those new to the debate come to grips with epistemic expressivism and recent discussions of it, I first briefly present this nexus of ideas as it occurs in ethical expressivism. Then, I explain why and how some philosophers have sought to extend it to a version of epistemic expressivism. Finally, I consider a number of objections and replies with the aim of giving the reader the tools needed to begin to evaluate the promise and prospects of epistemic expressivism.

In this brief paper, starting from recent works, we analyze from a conceptual point of view this basic question: can the nature of quantum entangled states be interpreted ontologically or epistemologically? According to some works, the degrees of freedom (and the tool of quantum partitions) of quantum systems permit us to establish a possible classification between factorizable and entangled states. We suggest that the "choice" of degrees of freedom (or quantum partitions), even if mathematically justified, introduces an epistemic element, not only into the systems but also into their classification. We maintain, instead, that there are not two classes of quantum states, entangled and factorizable, but only a single class of states: the entangled states. In fact, the factorizable states become entangled for a different choice of their degrees of freedom (i.e. they are entangled with respect to other observables). In the same way, there are no partitions of a quantum system which have an ontologically superior status with respect to any other. For all these reasons, both mathematical tools utilized (i.e. quantum partitions or degrees of freedom) are responsible for creating an improper classification of quantum systems. Finally, we argue that we cannot speak of a classification of quantum systems: all quantum states exhibit a unique objective nature; they are all entangled states.

In recent years, philosophical developments of the notion of distributed and/or scaffolded cognition have given rise to the “extended mind” thesis. Against the popular belief that the mind resides solely in the brain, advocates of the extended mind thesis defend the claim that a significant portion of human cognition literally extends beyond the brain into the body and a heterogeneous array of physical props, tools, and cultural techniques that are reliably present in the environment in which people grow, think, and act (Clark and Chalmers 1998; Clark 1997, 2003, 2008; Wilson 2004; Rowlands 1999, 2012; Menary 2007; Theiner 2011). However, as commentators who are friendly to the idea of distributed cognition have pointed out, the philosophical debate over extended cognition has predominantly focused on the impact of tools on our thinking while somewhat neglecting the distinctively social and cultural dimensions of cognitive scaffolding (Sterelny 2004, 2010; Caporael 1997a, 1997b; Smith and Semin 2004; Wilson 2005; Barnier et al. 2008; Sutton et al. 2010; Theiner, Allen, and Goldstone 2010).

To reorient the reigning paradigm, Hutchins (2010, 445) has recently proposed the “hypothesis of enculturated cognition” (HEnC) as an alternative to Clark’s (2003, 2008) largely individualistic vision of the extended mind. According to the HEnC, the “ecological assemblies of human cognition make pervasive use of cultural products” and are typically “assembled … in ongoing cultural practices” (ibid.). Cultural practices, for Hutchins, are essentially “the things people do in interaction with one another” (ibid., 440). My goal in this chapter is to follow up on Hutchins’s call to “spur the program forward” (ibid., 445), by generalizing Kirsh and Maglio’s (1994) distinction between pragmatic and epistemic actions from the level of individuals to the level of groups.
The concept of a collective epistemic action refers to the ways in which groups of people actively change the structure of their social organization, with the epistemic goal of reshaping and augmenting their cognitive performance as integrated collectivities. By placing a renewed emphasis on the interactions between people, rather than between people and their tools, I hope to reconnect the cognitive-scientifically-driven “extended mind” thesis with complementary areas of social-scientific research in which groups are analyzed as the seats of action and cognition in their own right. In particular, the literature to which I aim to build a bridge in this paper is, on the one hand, certain segments of social and organizational psychology (Larson and Christensen 1993; Hinsz et al. 1997; Mohammed and Dumville 2001), and, on the other hand, theories of collective and institutional action (Ostrom 1990; List and Pettit 2011).

We investigate a specific model of knowledge and beliefs and their dynamics. The model is inspired by public announcement logic and the approach to puzzles concerning knowledge using that logic. In the model, epistemic considerations are based on ontology. The main notion that constitutes a bridge between these two disciplines is the notion of epistemic capacities. Within the model we study scenarios in which agents can receive false announcements and can have incomplete or improper views about other agents' epistemic capacities. Moreover, we try to express the problem specification using tools from applied ontology: the RDF format for information and the Protégé editor.
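The public-announcement update that this model is inspired by can be sketched in a few lines: announcing a formula deletes the worlds where it fails and restricts the accessibility relations to the survivors. The sketch below illustrates this standard mechanism only; the worlds, agent, and proposition are invented placeholders, not the paper's own formalization.

```python
# Minimal sketch of the standard public-announcement update: announcing phi
# removes every world where phi fails and restricts the accessibility
# relations accordingly. All names here are illustrative placeholders.

def announce(worlds, relations, phi):
    """Return the model restricted to the worlds satisfying phi."""
    surviving = {w for w in worlds if phi(w)}
    restricted = {
        agent: {(u, v) for (u, v) in pairs if u in surviving and v in surviving}
        for agent, pairs in relations.items()
    }
    return surviving, restricted

def knows(agent, relations, w, phi):
    """Agent knows phi at w iff phi holds at every world the agent considers possible."""
    return all(phi(v) for (u, v) in relations[agent] if u == w)

worlds = {"w1", "w2", "w3"}
p = lambda w: w != "w3"                                 # p is true everywhere except w3
rel = {"a": {(u, v) for u in worlds for v in worlds}}   # a cannot distinguish any worlds

print(knows("a", rel, "w1", p))   # False: w3 is still considered possible
worlds2, rel2 = announce(worlds, rel, p)
print(knows("a", rel2, "w1", p))  # True: announcing p eliminated w3
```

A false announcement, of the kind the abstract studies, is one whose content fails at the actual world; the same world-elimination mechanics then misleads the agents.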

In this paper, the radical view that transparent equipment is the result of an ecological assembly between tool users and physical aspects of the world is critically assessed. According to this perspective, tool users are normally viewed as plastically organized hybrid agents. In this view, such agents are able to interact with tools (artefacts or technologies) in ways that are opportunistic and fully locked to the local task environment. This intimate and flexible interaction would provide grounds for the thesis that cognitive agents and tools constitute literal extended cognitive systems. By contrast, a revised understanding of tool-use transparency will be attempted. In this perspective, the interplay between on-line and off-line thinking is understood in terms of a socially reified cognitive delegation that subsumes the advantages normally associated with the so-called ‘open-ended ecological controllers.’ Thus, the notion of transparent technologies can be explored on the basis of a derived or mediated cognitive delegation. This view will be complemented by the notion of communities of practice (CoP). Special sorts of CoP will be proposed as suitable and flexible cognitive environments for the development of tool transparency.

Philosophers have offered a number of ways of describing hypothesis generation, but all aim at demonstrating that the activity of generating hypotheses is paradoxical, illusory or obscure, and thus not analysable. Those descriptions are often so far from the Peircean pragmatic prescription, and so abstract, as to be completely unknowable and obscure. The “computational turn” gives us a new way to understand creative processes in a strictly pragmatic sense. In fact, by exploiting artificial intelligence and cognitive science tools, computational philosophy allows us to test concepts and ideas previously conceived only in abstract terms. It is in the perspective of these actual computational models that I find the central role of abduction in the explanation of creative reasoning in science. Creativity and discovery are no longer seen as a mysterious, irrational process but, thanks to constructive accounts, as a complex relationship among different inferential steps that can be clearly analysed and identified. I maintain that the computational philosophy analysis of model-based and manipulative abduction and of external and epistemic mediators is important not only to delineate the actual practice of abduction, but also to further enhance the development of programs computationally adequate for rediscovering, or discovering for the first time, for example, scientific hypotheses or mathematical theorems.

Information exchange can be seen as a dynamic process of raising and resolving issues. The goal of this paper is to provide a logical framework to model and reason about this process. We develop an inquisitive dynamic epistemic logic (IDEL), which enriches the standard framework of dynamic epistemic logic (DEL), incorporating insights from recent work on inquisitive semantics. At a static level, IDEL allows us to model not only the information available to a set of agents, as in standard epistemic logic, but also the issues that the agents entertain. At a dynamic level, IDEL allows us to model not only the effects of communicative actions that provide new information, as in standard DEL, but also the effects of actions that raise new issues. Thus, IDEL provides the fundamental tools needed to analyze information exchange as a dynamic process of raising and resolving issues.

I examine how particular social arrangements and incentive structures encourage the honest reporting of experimental results and minimize fraudulent scientific work. In particular, I investigate how epistemic communities can achieve this goal by encouraging members to police the community. Using some basic tools from game theory, I explore a simple model in which scientists both conduct research and have the option of investigating the findings of their peers. I find that this system of peer policing can in many cases ensure high levels of honesty.
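As a rough illustration of the kind of incentive calculation such a model turns on (with toy numbers of my own, not the paper's), fraud stops paying once the expected cost of being caught by an inspecting peer outweighs its payoff advantage:

```python
# Toy inspection calculation (illustrative numbers, not the paper's model):
# a fraudulent report earns extra credit, but an inspecting peer catches it
# with probability q, triggering a fine.

def expected_fraud_payoff(fraud_gain, fine, q):
    """Expected payoff of fraudulent reporting under random peer inspection."""
    return fraud_gain - q * fine

honest_payoff = 1.0   # credit for honestly reported results
fraud_gain = 2.0      # credit a fabricated result would earn if unchallenged
fine = 10.0           # career cost of being caught
# Deterrence threshold here: q > (fraud_gain - honest_payoff) / fine = 0.1
for q in (0.05, 0.2):
    deterred = expected_fraud_payoff(fraud_gain, fine, q) < honest_payoff
    print(q, deterred)   # 0.05 False, then 0.2 True
```

The design choice worth noting is that policing need not be universal: a modest inspection probability suffices once the fine is large relative to the gain from fraud.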

What are the consequences of evolutionary theory for the epistemic standing of our beliefs? Evolutionary considerations can be used to either justify or debunk a variety of beliefs. This paper argues that evolutionary approaches to human cognition must at least allow for approximately reliable cognitive capacities. Approaches that portray human cognition as so deeply biased and deficient that no knowledge is possible are internally incoherent and self-defeating. As evolutionary theory offers the current best hope for a naturalistic epistemology, evolutionary approaches to epistemic justification seem to be committed to the view that our sensory systems and belief-formation processes are at least approximately accurate. However, for that reason they are vulnerable to the charge of circularity, and their success seems to be limited to commonsense beliefs. This paper offers an extension of evolutionary arguments by considering the use of external media in human cognitive processes: we suggest that the way humans supplement their evolved cognitive capacities with external tools may provide an effective way to increase the reliability of their beliefs and to counter evolved cognitive biases.

I argue that scientific explanation has a pragmatic dimension that is epistemically relevant. Philosophers with an objectivist approach to scientific explanation (e.g. Hempel, Trout) hold that the pragmatic aspects of explanation do not have any epistemic import. I argue against this view by focusing on the role of models in scientific explanation. Applying recent accounts of modelling (Cartwright, Morgan and Morrison) to a case-study of nineteenth-century physics, I analyse the pragmatic dimension of the process of model construction. I highlight the crucial roles that conceptual tools, skills, and commitments play in this dimension, and show how they contribute to the epistemic aim of science.

This article argues that understanding everyday practices in neurobiological labs requires us to take into account a variety of different action positions: self-conscious social actors, technical artifacts, conscious organisms, and organisms being merely alive. In order to understand the interactions among such diverse entities, highly differentiated conceptual tools are required. Drawing on the theory of the German philosopher and sociologist Helmuth Plessner, the paper analyzes experimenters as self-conscious social persons who recognize monkeys as conscious organisms. Integrating Plessner’s ideas into the stock of concepts used in science and technology studies provides richer descriptions of laboratory life. In particular, this theory allows an understanding of a crucial feature of neurobiological brain research: the construction of the brain as the epistemic object of brain research. As such, the brain must be isolated from the acting and interacting organism in a complicated process.

This paper addresses the problem posed by the current split between two opposed hypotheses in the growing literature on the fallacy of begging the question: the epistemic hypothesis, based on knowledge and belief, and the dialectical one, based on formal dialogue systems. In the first section, the nature of the split is explained, and it is shown how each hypothesis has developed. To get the beginning reader up to speed in the literature, a number of key problematic examples are analyzed, illustrating how both approaches can be applied. Useful tools are brought to bear on them, including the automated argument diagramming system Araucaria, and profiles of dialogue used to represent circular argumentation in a dialogue tableau format. These tools are used both to model circular reasoning and to provide the contextual evidence needed to properly determine whether the circular reasoning in a given case is better judged fallacious or not. A number of technical problems that have impeded the development of both hypotheses are studied. One central problem is the distinction between argument and explanation. It is concluded that the best way to move forward and solve these problems is to reformulate the two hypotheses in such a way that they might be able to co-exist. On this basis, a unified methodology is proposed that allows each hypothesis to move forward as a legitimate avenue for research using the same tools.

In this paper I study intentions with a we-content and their role in interpersonal coordination. I focus on the notion of epistemic support for such intentions. Using tools from epistemic game theory and epistemic logic, I cast doubt on whether such support guarantees the other agents' conditional mediation in the achievement of such intentions, something that appears important if intentions with a we-content are to count as genuine intentions. I then formulate a stronger version of epistemic support, one that does indeed ensure the required mediation, but I then argue that it rests on excessively strong informational conditions. In view of this I provide an alternative set of conditions that are jointly sufficient for coordination in games, and I argue that these conditions constitute a plausible alternative to the proposed notion of epistemic support.

This volume brings historians of science and social historians together to consider the role of "little tools"--such as tables, reports, questionnaires, dossiers, index cards--in establishing academic and bureaucratic claims to authority and objectivity. From at least the eighteenth century onward, our science and society have been planned, surveyed, examined, and judged according to particular techniques of collecting and storing knowledge. Recently, the seemingly self-evident nature of these mundane epistemic and administrative tools, as well as the prose in which they are cast, has demanded historical examination. The essays gathered here, arranged in chronological order by subject from the late seventeenth to the late twentieth century, involve close readings of primary texts and analyses of academic and bureaucratic practices as parts of material culture. The first few essays, on the early modern period, largely point to the existence of a "juridico-theological" framework for establishing authority. Later essays demonstrate the eclipse of the role of authority per se in the modern period and the emergence of the notion of "objectivity." Most of the essays here concern the German cultural space as among the best exemplars of the academic and bureaucratic practices described above. The introduction to the volume, however, is framed at a general level; the closing essays also extend the analyses beyond Germany to broader considerations on authority and objectivity in historical practice. The volume will interest scholars of European history and German studies as well as historians of science. Peter Becker is Professor of Central European History, European University Institute. William Clark is Lecturer in History and Philosophy of Science, Cambridge University.

Taking up Kimberlé Crenshaw's conclusion that black feminist theorists seem to continue to find themselves in many ways “speaking into the void” (Crenshaw 2011, 228), even as their works are widely celebrated, I examine intersectionality critiques as one site where power asymmetries and dominant imaginaries converge in the act of interpretation (or cooptation) of intersectionality. That is, despite its current “status,” intersectionality also faces epistemic intransigence in the ways in which it is read and applied. My aim is not to suggest that intersectionality cannot (or should not) be critiqued, nor do I maintain that celebratory applications/interpretations are immune from epistemic distortion when it comes to interpreting intersectionality. Rather, my goal is to demonstrate that critiques of intersectionality are one important site to examine hermeneutic marginalization and interpretive violence; the politics of citation; and the impact of dominant expectations or established social imaginaries on meaning-making. In so doing, I aim to consider more fully how entrenched ways of thinking are frequently relied upon to interpret and critique intersectionality, even as these are often the very frameworks that intersectionality theorists have identified as highly problematic tools of misrepresentation, erasure, and violation. This slippage away from intersectionality's outlooks, whether in critical or laudatory contexts, is a pivotal site of epistemic negotiation we must examine more closely.

Branching-time temporal logics have proved to be an extraordinarily successful tool in the formal specification and verification of distributed systems. Much of their success stems from the tractability of the model checking problem for the branching time logic CTL, which has made it possible to implement tools that allow designers to automatically verify that systems satisfy requirements expressed in CTL. Recently, CTL was generalised by Alur, Henzinger, and Kupferman in a logic known as "Alternating-time Temporal Logic" (ATL). The key insight in ATL is that the path quantifiers of CTL could be replaced by "cooperation modalities", of the form $\langle \langle \Gamma \rangle \rangle $ , where Γ is a set of agents. The intended interpretation of an ATL formula $\langle \langle \Gamma \rangle \rangle \varphi $ is that the agents Γ can cooperate to ensure that φ holds (equivalently, that Γ have a winning strategy for φ). In this paper, we extend ATL with knowledge modalities, of the kind made popular in the work of Fagin, Halpern, Moses, Vardi and colleagues. Combining these knowledge modalities with ATL, it becomes possible to express such properties as "group Γ can cooperate to bring about φ iff it is common knowledge in Γ that ψ". The resulting logic -- Alternating-time Temporal Epistemic Logic (ATEL) -- shares the tractability of model checking with its ATL parent, and is a succinct and expressive language for reasoning about game-like multiagent systems.
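The tractability the abstract appeals to comes from the standard bottom-up labelling algorithm for CTL, which computes the set of satisfying states one subformula at a time. Below is a minimal sketch for two existential modalities; the transition system and atomic labels are invented for illustration and are not taken from the paper.

```python
# Compact sketch of CTL model checking by state labelling: each clause
# computes the states satisfying a formula from the state sets already
# computed for its subformulas. Shown for EX and E[phi U psi].

def check_EX(states, trans, sat_phi):
    """States with at least one successor satisfying phi."""
    return {s for s in states if any(t in sat_phi for t in trans[s])}

def check_EU(states, trans, sat_phi, sat_psi):
    """Least fixed point: psi holds now, or phi holds here and E[phi U psi]
    holds at some successor."""
    result = set(sat_psi)
    changed = True
    while changed:
        changed = False
        for s in states:
            if s not in result and s in sat_phi and any(t in result for t in trans[s]):
                result.add(s)
                changed = True
    return result

# Illustrative four-state system: 0 -> 1 -> 2 -> 3, with a self-loop at 3.
states = {0, 1, 2, 3}
trans = {0: [1], 1: [2], 2: [3], 3: [3]}
sat_p = {0, 1, 2}   # the atom p holds at states 0, 1, 2
sat_q = {3}         # the atom q holds at state 3
print(sorted(check_EX(states, trans, sat_q)))         # [2, 3]
print(sorted(check_EU(states, trans, sat_p, sat_q)))  # [0, 1, 2, 3]
```

Each clause runs in time polynomial in the size of the model, which is what makes automatic verification of CTL requirements (and, per the abstract, of ATEL's extensions) practical.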

How should one attribute epistemic credit to an agent, and hence, knowledge, when cognitive processes include an extensive use of human or mechanical enhancers, informational tools, and devices which allow one to complement or modify one's own cognitive system? The concept of integration of a cognitive system has been used to address this question. For true belief to be creditable to a person's ability, it is claimed, the relevant informational processes must be or become part of the cognitive character of the agent, as a result of a process of enculturation. We argue that this view does not capture the role of sensitivity to epistemic norms in forming true beliefs. An analysis of epistemic actions, basic and extended, is proposed as offering an appropriate framework for crediting an agent with knowledge.

The papers in this special issue are based on presentations delivered at the conference Epistemic Aspects of Many-valued Logics, held at the Institute of Philosophy of the Academy of Sciences of the Czech Republic, in Prague, 2010. All papers consequently revolve around the application of non-classical logical tools—mathematical fuzzy logic and/or probability theory—to epistemological issues. Timothy Williamson employs a modal epistemic logic enriched with probabilities to generalize an argument against the KK-principle. He argues that we can know a proposition even if our evidential probability for that proposition is low. In fact he argues that the evidential probability of a known proposition can be (arbitrarily) close to 0. The argument is first presented with a basic idealized model, which is then extended to much more complicated and realistic models. This then raises a problem for decision theory, since you can know that p, while your evidence tells you (strongly) that not p.

Much of the plausibility of epistemic conservatism derives from its prospects of explaining our rationality in holding memory beliefs. In the first two parts of this paper, I argue for the inadequacy of the two standard approaches to the epistemology of memory beliefs, preservationism and evidentialism. In the third, I point out the advantages of the conservative approach and consider how well conservatism survives three of the strongest objections against it. Conservatism does survive, I claim, but only if qualified in certain ways. Appropriately qualified, conservatism is no longer the powerful anti-skeptical tool some have hoped for, but a doctrine closely connected with memory.

What is the nature of children's trust in testimony? Is it based primarily on evidential correlations between statements and facts, as stated by Hume, or does it derive from an interest in the trustworthiness of particular speakers? In this essay, we explore these questions in an effort to understand the developmental course and cognitive bases of children's extensive reliance on testimony. Recent work shows that, from an early age, children monitor the reliability of particular informants, differentiate between those who make true and false claims and keep that differential accuracy in mind when evaluating new information from these people. We argue that this selective trust is likely to involve the mentalistic appraisal of speakers rather than surface generalizations of their behavior. Finally, we review the significance of children's deference to adult authority on issues of naming and categorization. In addition to challenging a purely inductive account of trust, these and other findings reflect a potentially rich set of tools brought by children to the task of learning from people's testimony.

When reasoning about complex domains, where the available information is usually only partial, nonmonotonic reasoning can be an important tool. One of the formalisms introduced in this area is Reiter's Default Logic (1980). A characteristic of this formalism is that the applicability of default (inference) rules can only be verified in the future of the reasoning process. We describe an interpretation of default logic in temporal epistemic logic which makes this characteristic explicit. It is shown that this interpretation yields a semantics for default logic based on temporal epistemic models. A comparison between the various semantics for default logic will show the differences and similarities of these approaches and ours.
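The temporal flavour of default reasoning can be made concrete with a minimal Python sketch. All names here are hypothetical, the setting is propositional, and the defaults are normal ones written as (pre, just, concl) triples; the naive fixpoint checks each justification only against what is currently known, which simplifies Reiter's definition but illustrates the key point: whether a default was rightly applied can only be confirmed as the reasoning process unfolds.

```python
# Sketch of Reiter-style default rule application (normal defaults).
# A rule (pre, just, concl) may fire when `pre` is known and the
# negation of `just` has not (yet) been derived.

def extension(facts, defaults):
    """facts: set of literal strings; defaults: list of (pre, just, concl)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, just, concl in defaults:
            # justification stays consistent as long as its negation is unknown
            negated = just[1:] if just.startswith("~") else "~" + just
            if pre in known and negated not in known and concl not in known:
                known.add(concl)
                changed = True
    return known

# Classic example: birds normally fly.
flies = extension({"bird(tweety)"},
                  [("bird(tweety)", "flies(tweety)", "flies(tweety)")])
print("flies(tweety)" in flies)  # True

# With contrary information the same default is blocked.
blocked = extension({"bird(tweety)", "~flies(tweety)"},
                    [("bird(tweety)", "flies(tweety)", "flies(tweety)")])
print("flies(tweety)" in blocked)  # False
```

The nonmonotonicity is visible in the two calls: adding the fact ~flies(tweety) retracts a conclusion that was derivable before.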

The paper presents an epistemic logic with quantification over agents of knowledge and with a syntactical distinction between de re and de dicto occurrences of terms. Knowledge de dicto is characterized as ‘knowledge that’, and knowledge de re as ‘knowledge of’. Transition semantics turns out to be an adequate tool to account for the distinctions introduced.

Some philosophers have attempted to utilize the conceptual tools of ethics in order to understand epistemology. One instantiation of this understands justification in terms of having a certain kind of epistemic right, namely, a right to believe. In variations of this theme, some hold that justification involves having the authority to believe, or being entitled to believe. But by examining the putative analogies between different versions of rights and justification, I demonstrate that justification should not be understood as having a right to believe.

Pluralistic ignorance is a socio-psychological phenomenon that involves a systematic discrepancy between people’s private beliefs and public behavior in certain social contexts. Recently, pluralistic ignorance has gained increased attention in formal and social epistemology. But to get clear on what precisely a formal and social epistemological account of pluralistic ignorance should look like, we need answers to at least the following two questions: What exactly is the phenomenon of pluralistic ignorance? And can the phenomenon arise among perfectly rational agents? In this paper, we propose answers to both these questions. First, we characterize different versions of pluralistic ignorance and define the version that we claim most adequately captures the examples cited as paradigmatic cases of pluralistic ignorance in the literature. In doing so, we will stress certain key epistemic and social interactive aspects of the phenomenon. Second, given our characterization of pluralistic ignorance, we argue that the phenomenon can indeed arise in groups of perfectly rational agents. This, in turn, ensures that the tools of formal epistemology can be fully utilized to reason about pluralistic ignorance.

Evidentialism as its leading proponents describe it has two distinct senses, these being evidentialism as a conceptual analysis of epistemic justification, and as a prescriptive ethics of belief—an account of what one ‘ought to believe’ under different epistemic circumstances. These two senses of evidentialism are related in the work of leading evidentialist philosophers, but in ways that I think are deeply problematic. Although focusing on Richard Feldman’s ethics of belief, this chapter is critical of evidentialism in both senses. However, I share with authors like Feldman and Earl Conee the view that epistemology has important prescriptive functions, and that a sound, civic ethics of belief is of more than merely philosophic importance. One reason why an ethics of belief might be important to problems of practice is the need we have for tools to more effectively mediate the renewed round of ‘culture wars’ we are experiencing in Anglo-American cultures. I mean especially that grand cultural clash between science and religion, reason and faith, secularist atheism and religious fundamentalism, etc. Let us start with the genealogical question of why there is such a grand cultural debate in the first place, and why the debate, especially as played out in public and popular forums and even in the courtrooms, seems so volatile and so often to confusedly drag everything—beliefs, values, passions, etc., with it. These are questions that I think Sigmund Freud’s classic Civilization and its Discontents can help us understand. Freud was a major voice in criticism of the stern and often hypocritical Victorian morality, a voice pointing out the price of its sometimes high-handed, guilt-inducing curtailments of the satisfactions sought by the individual.
But for Freud, while there are real differences in the moral demands that different societies or traditions place upon people, there is something inevitable about the conflict itself, for “replacement of the power of the individual by the power of a community constitutes the decisive step of civilization...”

Among many reasons for which contemporary philosophers take coherentism in epistemology seriously, the most important is probably the perceived inadequacy of alternative accounts, most notably misgivings about foundationalism. But coherentism also receives straightforward support from cases in which beliefs are apparently justified by their coherence. From the perspective of those against coherentism, this means that an explanation is needed as to why in these cases coherence apparently justifies beliefs. Curiously, this task has not been carried out in a serious way in the anti-coherentist literature, although there is no scarcity of objections to coherentism. The traditional charge has been that justification by coherence is circular. More recently, the isolation problem allegedly reveals that coherentism justifies beliefs that should not be justified. Questions have also been raised with respect to the basing relation and feasibility. However, these objections do not explain why some beliefs appear to be justified by their coherence. This paper fills this gap in the anti-coherentist literature by offering a noncoherentist account of justification by coherence. The paper proceeds as follows. Section I delineates the framework of discussion and develops some conceptual tools needed in later analyses. Section II argues that there are genuine cases of an increase in existing empirical justification by coherence, but that it does not require coherence to generate additional justification—coherence serves as a channel of justification among beliefs, which is no more problematic than channeling of justification from basic to derived beliefs in foundationalism. Section III makes a stronger case for justification by coherence, where each of the coherent beliefs has no independent empirical justification; but Section IV argues that even in these cases coherence need not generate justification—coherence licenses the channeling of justification from outside sources.

In a recent paper on realism and pragmatism published in this journal, Osmo Kivinen and Tero Piiroinen have been pleading for more methodological work in the philosophy of the social sciences—refining the conceptual tools of social scientists—and fewer philosophically ontological theories. Following this de-ontologizing approach, we scrutinize the debates on social explanation and contribute to the development of a pragmatic social science methodology. Analyzing four classic debates concerning explanation in the social sciences, we propose to shift the debate away from (a) the ontologizing defenses of forms of social explanation, and (b) a winner-takes-all approach. Instead, we advocate (c) a pragmatic approach towards social explanation, elaborating a rigorous framework for explanatory pluralism detached from the debates on social ontology.

Scientific collaboration can only be understood in light of the epistemic and cognitive grounding of scientific disciplines. New scientific discoveries in astrophysics led to a major restructuring of the elite network of astrophysics. To study the interplay of the epistemic grounding and the social network structure of a discipline, a mixed-methods approach is necessary. It combines scientometrics, quantitative network analysis and visualization tools with a qualitative network analysis approach. The centre of the international collaboration network of astrophysics is demarcated by identifying the 225 most productive astrophysicists. For the years 1998–1999 and 2001–2006, four co-authorship networks are constructed, each covering a two-year period. A visualization of the longitudinal network data gives first hints about the structural development of the network. The network of 2005–2006 is analyzed in depth. Based on cohesion analysis tools for network analysis, two main cores and three smaller ones are identified. Scientists in each core, and additionally in structurally interesting positions, are identified, and 17 qualitative expert interviews are conducted with them. The visualization of the network of 2005–2006 is used in the interviews as a stimulus for the interviewees. An analysis of the three most often used keywords of the 225 astrophysicists is included and combined with the other data. The triangulation of these approaches shows that major epistemic changes in astrophysics, e.g. the discovery of the accelerating expansion of the universe, together with technical and organizational innovations, lead to a restructuring of the network of the discipline. The importance of a combination of qualitative and quantitative network analysis tools for understanding the interplay of cognitive and social structure in the sociology of science is substantiated.

The Committee on Common Problems of Genetics, Paleontology, and Systematics (United States National Research Council) marks part of a critical transition in American evolutionary studies. Launched in 1942 to facilitate cross-training between genetics and paleontology, the Committee was also designed to amplify paleontologist voices in modern studies of evolutionary processes. During coincidental absences of founders George Gaylord Simpson and Theodosius Dobzhansky, an opportunistic Ernst Mayr moved into the project's leadership. Mayr used the opportunity for programmatic reforms he had been pursuing elsewhere for more than a decade. These are evident in the Bulletins he distributed under Committee auspices. In his brief tenure as Committee leader, Mayr gained his first substantial foothold within the coalescing community infrastructure of evolutionary studies. Carrying this momentum forward led Mayr directly into the project to launch the journal Evolution. The sociology of interdisciplinary activity provides useful tools for understanding the Committee's value in the broad sweep of change in evolutionary studies during the synthesis period.

The aim of this paper is, on the one hand, to critically investigate Kuhn’s stance on the assessment of the pursuit worthiness of scientific theories, and, on the other hand, to show, in view of this critical analysis, the continuing relevance of some of Kuhn’s points on this issue. To this end we show that Kuhn presents certain tools, which may help scientists to overcome communication breakdowns when engaging in the process of rational deliberation regarding the question whether a theory is worthy of further pursuit. These tools are persuasion, translation and interpretation. However, we argue that the perspective of epistemic semantic monism present in Kuhn’s work obstructs the full applicability of these tools. We show that dropping this perspective makes the notions of persuasion and interpretation more fruitful, and moreover, allows for a pluralism of scientific theories and practices that complements the pluralism based on disagreement among scientists, emphasized by Kuhn.

This paper takes issue with Slavoj Zizek's constructed opposition between Spinoza and Hegel. Where Zizek views Hegel's non-dualistic relational epistemology as a substantial improvement over Spinoza's purported dogmatic account of a reality which is external to the perceiver, I argue that Hegel inherited such an epistemology from Spinoza. Ultimately, it is Spinoza who provides Hegel with the conceptual tools for knowledge of the "transphenomenal" within the context of human finitude.

Arabidopsis is currently the most popular and well-researched model organism in plant biology. This paper documents this plant's rise to scientific fame by focusing on two interrelated aspects of Arabidopsis research. One is the extent to which the material features of the plant have constrained research directions and enabled scientific achievements. The other is the crucial role played by the international community of Arabidopsis researchers in making it possible to grow, distribute and use plant specimens that embody these material features. I argue that at least part of the explosive development of this research community is due to its successful standardisation and to the subsequent use of Arabidopsis specimens as material models of plants. I conclude that model organisms have a double identity as both samples of nature and artifacts representing nature. It is the resulting ambivalence in their representational value that makes them attractive research tools for biologists.

Epistemic diversity is widely approved of by social epistemologists. This paper asks, more specifically, how much epistemic diversity, and what kinds of epistemic diversity, are normatively appropriate? Both laissez-faire and highly directive approaches to epistemic diversity are rejected in favor of the claim that diversity is a blunt epistemic tool. There are typically a number of different options for adequate diversification. The paper focuses on scientific domains, with particular attention to recent theories of smell.

This paper focuses on the question of how to resolve disagreement and uses the Lehrer-Wagner model as a formal tool for investigating consensual decision-making. The main result consists in a general definition of when agents treat each other as epistemic peers (Kelly 2005; Elga 2007), and a theorem vindicating the “equal weight view” to resolve disagreement among epistemic peers. We apply our findings to an analysis of the impact of social network structures on group deliberation processes, and we demonstrate their stability with the help of numerical simulations.
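For readers unfamiliar with the formal tool, the Lehrer-Wagner model iterates weighted averaging of opinions under a row-stochastic weight matrix. The Python sketch below is an illustrative reconstruction (all names hypothetical, not the authors' code); it shows how equal weights among peers yield the straight average as the consensus value, in the spirit of the "equal weight view".

```python
# Lehrer-Wagner iteration: each agent i assigns weight w[i][j] to agent j
# (rows sum to 1), and opinions are repeatedly averaged: p <- W p.
# Under mild conditions on W the opinions converge to a consensus.

def lehrer_wagner(weights, opinions, rounds=200):
    n = len(opinions)
    p = list(opinions)
    for _ in range(rounds):
        p = [sum(weights[i][j] * p[j] for j in range(n)) for i in range(n)]
    return p

# Three agents who treat each other as epistemic peers (equal weights):
W = [[1 / 3, 1 / 3, 1 / 3]] * 3
consensus = lehrer_wagner(W, [0.2, 0.5, 0.8])
# every agent ends up at (0.2 + 0.5 + 0.8) / 3 = 0.5, the straight average
```

With unequal but positive weights the process still converges, only to a weight-dependent compromise rather than the arithmetic mean.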

Epistemology and epistemic logic. At first sight, the modern agenda of epistemology has little to do with logic. Topics include different definitions of knowledge, its basic formal properties, debates between externalist and internalist positions, and above all: perennial encounters with sceptics lurking behind every street corner, especially in the US. The entry 'Epistemology' in the Routledge Encyclopedia of Philosophy (Klein 1993) and the anthology (Kim and Sosa 2000) give an up-to-date impression of the field. Now, epistemic logic started as a contribution to epistemology, or at least a tool in its modus operandi, with the seminal book Knowledge and Belief (Hintikka 1962, 2005). Formulas like Ki φ for "the agent i knows that φ" and Bi φ for "the agent i believes that φ" provided logical forms for stating and analyzing philosophical propositions and arguments. And more than that, their model-theoretic semantics in terms of ranges of alternatives provided an appealing extensional way of thinking about what agents know or believe in a given situation. In particular, on Hintikka's view, an agent knows those propositions which are true in all situations compatible with what she knows about the actual world; i.e., her current range of uncertainty.
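Hintikka's range-of-alternatives semantics can be illustrated with a toy Kripke-style model. The code and names below are an illustrative sketch, not from the text: an agent knows a proposition at a world iff it holds throughout the agent's range of uncertainty at that world.

```python
# Toy evaluation of K_i p over a two-world model: agent i knows p at
# world w iff p is true in every world i considers possible from w.

def knows(agent, prop, world, access, valuation):
    """access[agent][world]: set of worlds the agent considers possible;
    valuation[world]: set of atomic propositions true at that world."""
    return all(prop in valuation[v] for v in access[agent][world])

# Two worlds: it rains in w1 but not in w2. Agent a cannot tell the
# worlds apart; agent b can.
valuation = {"w1": {"rain"}, "w2": set()}
access = {"a": {"w1": {"w1", "w2"}}, "b": {"w1": {"w1"}}}

print(knows("a", "rain", "w1", access, valuation))  # False
print(knows("b", "rain", "w1", access, valuation))  # True
```

Agent a's uncertainty range includes a world without rain, so a fails to know it rains even though it actually does; shrinking the range is exactly what learning amounts to on this picture.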

In this paper, I take scientific models to be epistemic representations of their target systems. I define an epistemic representation to be a tool for gaining information about its target system and argue that a vehicle’s capacity to provide specific information about its target system—its informativeness—is an essential feature of this kind of representation. I draw an analogy to our ordinary notion of interpretation to show that a user’s aim of faithfully representing the target system is necessary for securing this feature.

Hodgkin and Huxley’s 1952 model of the action potential is an apparent dream case of covering-law explanation. The model appeals to general laws of physics and chemistry (specifically, Ohm’s law and the Nernst equation), and the laws, coupled with details about antecedent and background conditions, entail many of the significant properties of the action potential. However, Hodgkin and Huxley insist that their model falls short of an explanation. This historical fact suggests either that there is more to explaining the action potential than subsuming it under general laws or that Hodgkin and Huxley were wrong about the explanatory import of their model. In this paper, I defend Hodgkin and Huxley’s view that their model alone does not explain the action potential (contra Weber 2005). I argue further that neuroscientists lacked crucial explanatory details about the action potential until they could describe the molecular and ionic mechanisms by virtue of which their model holds (see Bogen 2005). Mathematical generalizations are important epistemic tools for assessing mechanistic explanations, but they are neither necessary nor sufficient for adequate explanations, even at the lowest levels of organization where biological phenomena are integrated with physics and chemistry.
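The Nernst equation mentioned above admits a simple worked example: E = (RT / zF) · ln([out]/[in]). The sketch below uses textbook mammalian potassium concentrations as an assumption for illustration, not Hodgkin and Huxley's squid-axon data.

```python
# Nernst equation: the equilibrium potential for a single ion species,
# one of the general laws the Hodgkin-Huxley model appeals to.

import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst(z, conc_out, conc_in, temp_kelvin=310.0):
    """Equilibrium potential in volts for an ion of valence z."""
    return (R * temp_kelvin / (z * F)) * math.log(conc_out / conc_in)

# Potassium at body temperature: 5 mM outside, 140 mM inside.
e_k_mv = nernst(z=1, conc_out=5.0, conc_in=140.0) * 1000.0
print(round(e_k_mv, 1))  # roughly -89.0 mV
```

The point of the example is how little the law alone delivers: it fixes an equilibrium value from concentrations and temperature, but says nothing about the channel mechanisms by virtue of which the membrane approaches it.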

Both the irreducible complexity of biological phenomena and the aim of a universalized biology (life-as-it-could-be) have led to a deep methodological shift in the study of life, represented by the appearance of ALife, with its claim that computational modelling is the main tool for studying the general principles of biological phenomenology. However, this methodological shift implies important questions concerning the aesthetic, engineering and especially the epistemological status of computational models in scientific research: halfway between the well established categories of theory and experiment. ALife models become powerful epistemic artefacts allowing the simulation of emergent phenomena, the interaction between different levels of organization and the integration of different causal factors in the very same manipulable object. The use of computational models in ALife can be classified in four main categories depending on their position between theoretical and empirical practices: generic, conceptual, functional and mechanistic. For each of these categories we analyse their epistemic value and select paradigmatic examples that illustrate how ALife models can be fruitfully inserted in the study of life.

This paper introduces DEMO, a Dynamic Epistemic Modelling tool. DEMO allows modelling epistemic updates, graphical display of update results, graphical display of action models, formula evaluation in epistemic models, translation of dynamic epistemic formulas to PDL formulas, and so on. The paper implements the reduction of dynamic epistemic logic [16, 2, 3, 1] to PDL given in [12]. The reduction of dynamic epistemic logic to automata PDL from [24] is also discussed and implemented. Epistemic models are minimized under bisimulation, and update action models are minimized under action emulation (the appropriate structural notion for having the same update effect, cf. [13]). The paper is an exemplar of tool building for epistemic update logic. It contains the full code of an implementation in Haskell [22], in ‘literate programming’ style [23], of DEMO.

A testable model of the origin of money is outlined. Based on the notion of epistemic structures, the account integrates the tool and drug views using a common underlying model, and addresses the two puzzles presented by Lea & Webley (L&W) – money's biological roots and the adaptive significance of our tendency to acquire money. (Published Online April 5 2006).