Computational modeling of the brain holds great promise as a bridge from brain to behavior. To fulfill this promise, however, it is not enough for models to be 'biologically plausible': models must be structurally accurate. Here, we analyze what this entails for so-called psychobiological models: models that address behavior as well as brain function in some detail. Structural accuracy may be supported by (1) a model's a priori plausibility, which comes from a reliance on evidence-based assumptions, (2) its fit to existing data, and (3) the derivation of new predictions. All three sources of support require modelers to be explicit about the ontology of the model, and require the existence of data constraining the modeling. For situations in which such data are only sparsely available, we suggest a new approach. If several models are constructed that together form a hierarchy of models, higher-level models can be constrained by lower-level models, and lower-level models can be constrained by behavioral features of the higher-level models. Modeling the same substrate at different levels of representation, as proposed here, thus has benefits that exceed the merits of each model in the hierarchy on its own.
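A minimal sketch of this mutual-constraint idea, assuming a toy two-level hierarchy in which a fine-grained random-walk model and a coarse Gaussian model describe the same process; every name and number below is an illustrative assumption, not a model from the paper.

```python
# Toy two-level hierarchy: a fine-grained mechanism model and a coarse
# behavioral model of the same process constrain each other.
import random
import statistics

def low_level_model(n_steps=100, n_runs=2000, p_up=0.55):
    """Fine-grained mechanism: one biased random walk per simulated trial."""
    return [sum(1 if random.random() < p_up else -1 for _ in range(n_steps))
            for _ in range(n_runs)]

samples = low_level_model()

# (1) Lower level constrains higher level: the coarse Gaussian model's
# parameters are fitted to the mechanism's simulated output.
mu, sigma = statistics.mean(samples), statistics.stdev(samples)

# (2) Higher level constrains lower level: a behavioral feature (here a
# hypothetical observed mean outcome) that the mechanism must reproduce.
behavioral_target = 10.0
standard_error = sigma / len(samples) ** 0.5
consistent = abs(mu - behavioral_target) < 3 * standard_error
print(f"coarse model: N({mu:.2f}, {sigma:.2f}); "
      f"mechanism consistent with behavioral target: {consistent}")
```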

In this paper I show that Proclus is an adherent of the Classical Model of Science as set out elsewhere in this issue (de Jong and Betti 2008), and that he adjusts certain conditions of the Model to his Neoplatonic epistemology and metaphysics. In order to show this, I develop a case study concerning philosophy of nature, which, despite its unstable subject matter, Proclus considers to be a science. To give this science a firm foundation, Proclus distills from Plato’s Timaeus the basic concepts Being and Becoming and a number of basic propositions, among them the quasi-definitions of the basic concepts. He subsequently explains the use of these quasi-definitions, which are actually epistemic guides, in such a way that he obtains a connection between a rational and an empirical approach to the natural world. A crucial task in establishing the connection is performed by the faculty of doxa and by geometrical conversion. The result is that Proclus secures a universal, necessary and known foundation for all of philosophy of nature.

One of the hardest questions to answer for a (Neo)platonist is to what extent and how the changing and unreliable world of sense perception can itself be an object of scientific knowledge. My dissertation is a study of the answer given to that question by the Neoplatonist Proclus (Athens, 411-485) in his Commentary on Plato’s Timaeus. I present a new explanation of Proclus’ concept of nature and show that philosophy of nature consists of several related subdisciplines matching the ontological stratification of nature. Moreover, I demonstrate that for Proclus philosophy of nature is a science, albeit a hypothetical one, which takes geometry as its methodological paradigm. I also offer an explanation of Proclus’ view of what is later called the mathematization of physics, i.e. the role of the substance of mathematics, as opposed to its method, in explaining the natural world. Finally, I discuss Proclus’ views of the discourse of philosophy of nature and its iconic character.

This volume collects Late Ancient, Byzantine and Medieval appropriations of Aristotle's Posterior Analytics, addressing the logic of inquiry, concept formation, the question whether metaphysics is a science, and the theory of demonstration.

Phillips & Silverstein's ambitious link between receptor abnormalities and the symptoms of schizophrenia involves a certain amount of fuzziness: No detailed mechanism is suggested through which the proposed abnormality would lead to psychological traits. We propose that detailed simulation of brain regions, using model neural networks, can aid in understanding the relation between biological abnormality and psychological dysfunction in schizophrenia.
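To make the proposal concrete, here is a hedged toy illustration of the kind of simulation in question: a small Hopfield-style network in which a single gain parameter loosely stands in for receptor efficacy, so that lowering it degrades pattern completion. The architecture, the gain mechanism and all numbers are illustrative assumptions, not Phillips & Silverstein's model or the commentators'.

```python
# Toy attractor network: lowering a "receptor gain" parameter degrades
# pattern completion (all modeling choices here are illustrative only).
import numpy as np

rng = np.random.default_rng(0)
N = 200
patterns = rng.choice([-1.0, 1.0], size=(5, N))
W = patterns.T @ patterns / N            # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def recall(cue, gain, steps=30, noise=0.5):
    # Gain scales the recurrent drive against a fixed noise background.
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(gain * (W @ s) + noise * rng.normal(size=N))
    return s

cue = patterns[0].copy()
cue[: N * 3 // 10] *= -1                 # corrupt 30% of the stored pattern
for gain in (1.0, 0.7, 0.4):             # lower gain ~ receptor hypofunction
    overlap = recall(cue, gain) @ patterns[0] / N
    print(f"gain={gain}: overlap with stored pattern = {overlap:.2f}")
```

With full gain the corrupted cue is cleaned up almost perfectly; as the gain drops, retrieval first weakens and then collapses, giving a crude but explicit path from a receptor-level parameter to a performance-level deficit.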

In this rich and impressive new book, Henry Somers-Hall gives a nuanced analysis of the philosophical relationship between G. W. F. Hegel and Gilles Deleuze. He convincingly shows that a serious study of Hegel provides an improved insight into Deleuze’s conception of pure difference as the transcendental condition of identity. Somers-Hall develops his argument in three steps. First, both Hegel and Deleuze formulate a critique of representation. Second, Hegel’s proposed alternative is as logically consistent as Deleuze’s. Third, Deleuze can account for evolution, whereas Hegel cannot.

Privacy is valued by many. But what it means to have privacy remains less than clear. In this paper, I argue that the notion of privacy should be understood in epistemic terms. What it means to have (some degree of) privacy is that other persons do not stand in significant epistemic relations to those truths one wishes to keep private.

Subject sensitive invariantism is the view that whether a subject knows depends on what is at stake for that subject: the truth-value of a knowledge-attribution is sensitive to the subject's practical interests. I argue that subject sensitive invariantism cannot accept a very plausible principle for memory to transmit knowledge. I argue, furthermore, that semantic contextualism and contrastivism can accept this plausible principle. I conclude that semantic contextualism and contrastivism are in a better dialectical position than subject sensitive invariantism is.

Amartya Sen argues that for the advancement of justice, identification of ‘perfect’ justice is neither necessary nor sufficient. He replaces ‘perfect’ justice with comparative justice. Comparative justice limits itself to comparing social states with respect to degrees of justice. Sen’s central thesis is that identifying ‘perfect’ justice and comparing imperfect social states are ‘analytically disjoined’. This essay refutes Sen’s thesis by demonstrating that to be able to make adequate comparisons we need to identify and integrate criteria of comparison. This is precisely the aim of a theory of justice (such as John Rawls’s theory): identifying, integrating and ordering relevant principles of justice. The same integrated criteria that determine ‘perfect’ justice are needed to adequately compare imperfect social states. Sen’s alternative approach, which is based on social choice theory, is incapable of avoiding contrary, indeterminate or incoherent directives where plural principles of justice conflict.

In 'Literature Suspends Death: Sacrifice and Storytelling in Kierkegaard, Kafka and Blanchot', Chris Danta takes Genesis 22 as the starting point for an investigation of the role of literary imagination. His aim is to read the Genesis story from a literary-theoretical perspective in order to show how it can 'illuminate the secular situation of the literary writer.' To do this, Danta stages a fruitful confrontation between Søren Kierkegaard as defender of religion and inwardness and Franz Kafka and Maurice Blanchot as defenders of literature. In this review, three important points in this confrontation are highlighted: (1) the problem of identification, (2) the moment of substitution, and (3) the spectrality of the writer.

In order to explain such puzzling cases as the Bank Case and the Airport Case, semantic contextualists defend two theses. First, that the truth-conditions of knowledge sentences fluctuate in accordance with features of the conversational context. Second, that this fluctuation can be explained by the fact that 'knows' is an indexical. In this paper, I challenge both theses. In particular, I argue (i) that it isn't obvious that 'knows' is an indexical at all, and (ii) that contrastivism can do the same work as contextualism is supposed to do, without being linguistically implausible.

In this introduction to the special issue of Social Epistemology on epistemological contrastivism, I make some remarks on the history of contrastivism, describe three main versions of contrastivism, and offer a guide through the papers that compose this issue.

The Danish word 'incognito' means to appear in disguise, or to act under an unfamiliar, assumed name (or title) in order to avoid identification. As a concept, incognito occurs in several of Kierkegaard’s works, but only becomes a subject of reflection in two: the Concluding Unscientific Postscript to Philosophical Fragments by Johannes Climacus and Practice in Christianity by Anti-Climacus. Both pseudonyms develop the concept from their own perspective and must be understood on their own terms. Johannes Climacus treats incognito as a category of existence, defining it as a comic contradiction that creates a disguise in order to hide and protect the inwardness of the existing individual. However, Anti-Climacus treats incognito as a category of communication. He defines it as “a sign of contradiction” that creates a disguise in order to activate and disclose the inwardness of a listener or reader.

I reply to Martijn Blaauw's recent article about subject sensitive invariantism, in which he argues that SSI, unlike its contextualist and contrastivist competitors, cannot give a proper account of memorial knowledge. I argue that these theories are on a par when it comes to such an account.

Many philosophers are building a solid case in favour of the knowledge account of assertion (KAA). According to KAA, if one asserts that P one represents oneself as knowing that P. KAA has recently received support from linguistic data about prompting challenges, parenthetical positioning and predictions. In this article, I add another argument to this rapidly growing list: an argument from what I will call ‘reinforcing parenthesis’.

In Simultaneity and Delay: A Dialectical Theory of Staggered Time, the Canadian philosopher Jay Lampert challenges theories that define time in terms of absolute simultaneity and continuous succession. To counter these theories he introduces an alternative: the dialectic of simultaneity and delay. According to Lampert, this dialectic constitutes a temporal succession that is no longer structured as a continuous line, but that is built out of staggered time-flows and delayed reactions. The bulk of the book consists of an attempt to give a conceptual order to the ‘unsystematic analyses of simultaneity and delay sprinkled through the history of philosophy’ (2). This conceptual analysis leads us through ancient (Plato and Plotinus), medieval (Origen) and late modern issues (Kant, Hegel and Lessing), as well as scientific discussions (Einstein, McTaggart), and culminates in the central chapter of the book, which attempts to show ‘how the problems of the great simultaneity philosophers - Husserl and Bergson - might be solved by the great delay philosophers - Derrida and Deleuze’ (147). In this review, I will focus on three points: (1) the problem of synchronization, (2) the problem of synthesis, and (3) the problem of localization.

In cases of imaginative contagion, imagining something has doxastic or doxastic-like consequences. In this reply to Tamar Szabó Gendler's article in this collection, I investigate what the philosophical consequences of these cases could be. I argue (i) that imaginative contagion has consequences for how we should understand the nature of imagination and (ii) that imaginative contagion has consequences for our understanding of what belief-forming mechanisms there are. Along the way, I make some remarks about what the consequences of the contagion cases are for the relation between knowledge and imagination.

John Turri has recently provided two problem cases for the knowledge account of assertion (KAA) to argue for the express knowledge account of assertion (EKAA). We defend KAA by explaining away the intuitions about the problem cases and by showing that our explanation is theoretically superior to EKAA.

This article discusses the possibility of a rationally justified choice between two options neither of which is better than the other while they are not equally good either (‘3NT’). Joseph Raz regards such options as incomparable and argues that reason cannot guide the choice between them. Ruth Chang, by contrast, tries to show that many cases of putative incomparability are instead cases of parity, a fourth value relation of comparability alongside the three standard value relations ‘better than’, ‘worse than’ and ‘equally good as’. It follows, she argues, that many choice situations in which rationally justified choice seems precluded are in fact situations within the reach of practical reason. This article has three aims: (1) it challenges Chang’s argument for the possibility of parity; (2) it demonstrates that, even if parity did exist, its problematic implications for practical reason would not differ from those of Raz’s incomparability; (3) it discusses the underlying cause of hard cases of comparison: the fact that none of the three standard value relations applies (‘3NT’). It will be shown that the problematic implications for the rational justification of the choice are due to 3NT itself, irrespective of whether 3NT is explained as incomparability or parity.

A central intuition many epistemologists seem to have is that knowledge is distinctively valuable. In his paper 'Radical Scepticism, Epistemic Luck and Epistemic Value', Duncan Pritchard rejects the virtue-theoretic explanation of this intuition. This explanation says that knowledge is distinctively valuable because it is a cognitive achievement. It is maintained, in the first place, that the arguments Pritchard musters against the thesis that knowledge is a cognitive achievement are unconvincing. It is argued, in the second place, that even if the arguments against the thesis that knowledge is a cognitive achievement were convincing, there is another explanation of the intuition that knowledge has final value available: the question-relative treatment of knowledge.

In almost all of his early works Gilles Deleuze is concerned with one and the same problem: the problem of genesis. In response to this problem, Deleuze argues for a system of heterogenesis. In this article, I argue that Deleuze’s system of heterogenesis operates on three levels: (1) the differential multiplicity of virtual Ideas; (2) the implied multiplicity of intensive dramas; (3) the extensive and qualitative diversity of actual concepts. As I hope to show, the relation between these three levels can be explained in terms of the logic of expression that Deleuze develops in 'Expressionism in Philosophy: Spinoza.' In this way, I hope to gain clarity without losing nuance. The rather technical and abstract language of expression will become more concrete and understandable when it is understood in terms of intensive dramas, virtual Ideas and actual concepts. At the same time, the three levels of expression make it possible to show how these notions are interlinked. Accordingly, this article is divided into four parts. In the first part, I will explain how Deleuze takes up Kant’s discovery of the principle of difference as a reaction against the model of representation. The second part focuses on the two multiplicities (virtual Ideas and intensive dramas) that produce diversity (actual concepts). In the third part, these two multiplicities are linked to each other with the help of the logic of expression that Deleuze derives from Spinoza. The fourth, concluding part will connect the logic of expression to the complex dynamic of difference and repetition.

Computer simulations can be useful tools to support philosophers in validating their theories, especially when these theories concern phenomena showing nontrivial dynamics. Such theories are usually informal, whilst for computer simulation a formally described model is needed. In this paper, a methodology is proposed to gradually formalise philosophical theories in terms of logically formalised dynamic properties. One outcome of this process is an executable logic-based temporal specification, which within a dedicated software environment can be used as a simulation model to perform simulations. This specification provides a logical formalisation at the lowest aggregation level of the basic mechanisms underlying a process. In addition, dynamic properties at a higher aggregation level, which may emerge from the mechanisms specified by the lower-level properties, can be specified. Software tools are available to support specification, and to automatically check such higher-level properties against the lower-level properties and against generated simulation traces. As an illustration, three case studies are discussed showing successful applications of the approach to formalise and analyse, among others, Clark’s theory on extended mind, Damasio’s theory on core consciousness, and Dennett’s perspective on intertemporal decision making and altruism.
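As a rough sketch of what such an executable temporal specification might look like, the following toy interpreter implements rules of the form "if a holds at time t, then b holds at time t + d", simulates a trace, and checks a higher-level property against that trace. The rule format, the state names and the property are illustrative assumptions, not the paper's actual specification language.

```python
# Minimal sketch of an executable temporal specification: "leads-to"
# rules generate a trace, and an emergent higher-level property is
# checked against the generated trace.
from dataclasses import dataclass

@dataclass
class LeadsTo:
    antecedent: str   # state property that triggers the rule
    consequent: str   # state property produced by the rule
    delay: int        # time steps between trigger and effect

rules = [LeadsTo("stimulus", "belief", 1), LeadsTo("belief", "action", 2)]

def simulate(initial, rules, steps):
    """Build a trace: at each step, fire every rule whose antecedent
    held the right number of steps earlier."""
    trace = [set(initial)]
    for t in range(1, steps + 1):
        state = {r.consequent for r in rules
                 if t - r.delay >= 0 and r.antecedent in trace[t - r.delay]}
        trace.append(state)
    return trace

def leads_within(trace, a, b, window):
    """Higher-level property: whenever a holds, b holds within
    `window` later steps."""
    return all(any(b in trace[u]
                   for u in range(t + 1, min(t + window + 1, len(trace))))
               for t, state in enumerate(trace) if a in state)

trace = simulate({"stimulus"}, rules, steps=5)
print(trace)                                         # low-level simulation
print(leads_within(trace, "stimulus", "action", 4))  # emergent property: True
```

The two aggregation levels are visible in miniature: the rules are the lowest-level mechanism specification, while `leads_within` states a property that is not written into any single rule but emerges from their interaction, and is verified against the generated trace.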

This volume brings together state-of-the-art research on the contrastive treatment of philosophical concepts and questions, including knowledge, belief, free will, moral luck, Bayesian confirmation theory, causation, and explanation.

One of the most interesting anti-skeptical theories that has been proposed in the recent literature is epistemological contrastivism. In this paper, I answer some important objections to contrastivism that have been put forward by Steven Luper. The upshot of this paper is that Luper’s objections fail to damage contrastivism.

Contextualism is a quite popular research program nowadays. In essence, the contextualist holds that the truth conditions of knowledge attributing and of knowledge denying sentences vary in accordance with the context in which the sentences are uttered. This theory is positively motivated by its (alleged) capability of best explaining certain intuitions we have about knowledge attributions and knowledge denials. In this paper, I will argue that this positive motivation isn't as compelling as the contextualists think it to be. This I will do by constructing a so-called ‘warranted assertability maneuver’ (or WAM) against contextualism, which shows that, with respect to knowledge attributing and denying sentences, the contextualist has confused a variance in warranted assertability conditions for a variance in truth conditions.

I reply to comments by Gerry Hough, Peter Baumann and Martijn Blaauw on my book Moral Skepticisms. The main issues concern whether modest justifiedness is epistemic and how it is related to extreme justifiedness; how contrastivists can handle crazy contrast classes, indeterminacy and common language; whether Pyrrhonian scepticism leads to paralysis in decision-making or satisfies our desires to evaluate beliefs as justified or not; and how contextualists can respond to my arguments against relevance of contrast classes.

The growing political, social and scientific attention that is being devoted to the moral aspects of teaching has implications for teacher education. This paper reports on a study of the actual moral education practices of 54 teacher educators within one institution. We encouraged these teacher educators to make their values explicit and to explain how they put them into practice. Nine teacher educators were studied in detail. These teacher educators were then prompted to reflect on their values by completing charts to analyse the moral aspects of their practices. In addition, one of their lessons was videotaped and discussed. An important conclusion of this study is that whilst the responsibility for preparing student teachers for moral education rests with individual teacher educators, this process is largely implicit and unplanned. This is due in part to the lack of a language for expressing the moral dimension in teaching. Both teacher educators and students emphasise the importance of the role that attitudes play in the expression of values by teacher educators.

A decade ago, Isham and Butterfield proposed a topos-theoretic approach to quantum mechanics, which meanwhile has been extended by Döring and Isham so as to provide a new mathematical foundation for all of physics. Last year, three of the present authors redeveloped and refined these ideas by combining the C*-algebraic approach to quantum theory with the so-called internal language of topos theory (Heunen et al. in arXiv:0709.4364). The goal of the present paper is to illustrate our abstract setup through the concrete example of the C*-algebra $M_n(\mathbb{C})$ of complex n×n matrices. This leads to an explicit expression for the pointfree quantum phase space $\Sigma_n$ and the associated logical structure and Gelfand transform of an n-level system. We also determine the pertinent non-probabilistic state-proposition pairing (or valuation) and give a very natural topos-theoretic reformulation of the Kochen–Specker Theorem. In our approach, the nondistributive lattice $\wp(M_n(\mathbb{C}))$ of projections in $M_n(\mathbb{C})$ (which forms the basis of the traditional quantum logic of Birkhoff and von Neumann) is replaced by a specific distributive lattice $\mathcal{O}(\Sigma_n)$ of functions from the poset $\mathcal{C}(M_n(\mathbb{C}))$ of all unital commutative C*-subalgebras C of $M_n(\mathbb{C})$ to $\wp(M_n(\mathbb{C}))$. The lattice $\mathcal{O}(\Sigma_n)$ is essentially the (pointfree) topology of the quantum phase space $\Sigma_n$, and as such defines a Heyting algebra. Each element of $\mathcal{O}(\Sigma_n)$ corresponds to a “Bohrified” proposition, in the sense that to each classical context $C\in\mathcal{C}(M_n(\mathbb{C}))$ it associates a yes-no question (i.e. an element of the Boolean lattice $\wp(C)$ of projections in C), rather than being a single projection as in standard quantum logic. Distributivity is recovered at the expense of the law of the excluded middle (Tertium Non Datur), whose demise is in our opinion to be welcomed, not just in intuitionistic logic in the spirit of Brouwer, but also in quantum logic in the spirit of von Neumann.
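For concreteness, here is a small worked example for the lowest nontrivial case n = 2, reconstructed from the abstract as I understand the construction; it is a hedged sketch, not a quotation from the paper.

```latex
% Worked example for n = 2 (a sketch reconstructed from the abstract).
% The unital commutative C*-subalgebras of M_2(C) are the trivial
% algebra and, for each rank-one projection p, the maximal algebra
% spanned by p and 1 - p:
\[
  \mathcal{C}(M_2(\mathbb{C})) \;=\;
  \{\mathbb{C}\cdot 1\} \,\cup\,
  \{\, C_p = \mathbb{C}p \oplus \mathbb{C}(1-p) \mid p \text{ a rank-one projection} \,\},
  \qquad \mathbb{C}\cdot 1 \subseteq C_p .
\]
% An element of the distributive lattice O(\Sigma_2) then assigns to
% each context a projection available in that context, compatibly with
% the inclusions:
\[
  S \in \mathcal{O}(\Sigma_2)
  \;\Longleftrightarrow\;
  S(C) \in \wp(C) \ \text{for all } C \in \mathcal{C}(M_2(\mathbb{C})),
  \quad S(C) \leq S(D) \ \text{whenever } C \subseteq D .
\]
```

On this picture, a "Bohrified" proposition about a two-level system is not a single projection but a coherent family of projections, one per classical context.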

This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly include a color attribute when the scene variation is high as compared with when this variation is low (even if this leads to overspecified descriptions). We argue that these findings are problematic for existing algorithms that aim to automatically generate psychologically realistic target descriptions, such as the Incremental Algorithm, as these algorithms make use of a fixed preference order per domain and do not take visual scene variation into account.
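To see why a fixed preference order cannot produce such redundant color use, here is a minimal sketch of the Incremental Algorithm (Dale & Reiter 1995); the toy domain and the preference order are illustrative assumptions.

```python
# Minimal sketch of the Incremental Algorithm: walk a fixed preference
# order, adding an attribute only if it rules out some distractor.
def incremental_algorithm(target, distractors, preference_order):
    description, remaining = {}, list(distractors)
    for attr in preference_order:
        ruled_out = [d for d in remaining if d[attr] != target[attr]]
        if ruled_out:                      # attr is discriminating: include it
            description[attr] = target[attr]
            remaining = [d for d in remaining if d not in ruled_out]
        if not remaining:                  # target uniquely identified
            break
    return description

domain = [
    {"type": "chair", "color": "red",  "size": "large"},
    {"type": "chair", "color": "blue", "size": "small"},
    {"type": "sofa",  "color": "red",  "size": "small"},
]
target, distractors = domain[0], domain[1:]
print(incremental_algorithm(target, distractors, ["type", "size", "color"]))
# -> {'type': 'chair', 'size': 'large'}: color is never added redundantly,
# no matter how much the scene varies, which is exactly the behavior the
# experiments call into question.
```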

Robert Oakes has argued that theism defeats the 'doctrine of public-world fallibilism'. That is, Oakes has argued that theism supports infallibilism about public-world beliefs such as 'There is an olive on the floor', or 'I have two hands'. Given the enormous discussion of radical scepticism in the recent epistemological literature, this argument is well worth investigating. In this short note, however, I argue that the argument Oakes presents is unconvincing. The truth of theism does not support public-world infallibilism.

Psycholinguistic studies often look at the production of referring expressions in interactive settings, but so far few referring expression generation algorithms have been developed that are sensitive to earlier references in an interaction. Rather, such algorithms tend to rely on domain-dependent preferences for both content selection and linguistic realization. We present three experiments showing that humans may opt for dispreferred attributes and dispreferred modifier orderings when these have been primed in a preceding interaction (without speakers being consciously aware of this). In addition, we show that speakers are more likely to produce overspecified references, including dispreferred attributes (although minimal descriptions with preferred attributes would suffice), when these have been similarly primed.
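One hedged way to make a preference-based generator sensitive to such priming is to promote recently used attributes in the preference order before running something like the Incremental Algorithm sketched above; the boost scheme below is purely illustrative, not an algorithm from the paper.

```python
# Illustrative priming adjustment for a preference-based generator:
# attributes used in the preceding interaction are promoted in the
# preference order (the boost value is an arbitrary assumption).
def primed_order(base_order, primed_attrs, boost=2.5):
    scores = {attr: rank for rank, attr in enumerate(base_order)}
    for attr in primed_attrs:
        scores[attr] -= boost            # promote recently used attributes
    return sorted(base_order, key=lambda attr: scores[attr])

print(primed_order(["type", "size", "color"], primed_attrs={"color"}))
# -> ['color', 'type', 'size']: the normally dispreferred attribute is
# now considered first, so it can surface in (possibly overspecified)
# descriptions, as in the priming experiments.
```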