This article explores a concept of artistic transgression I call aesthetic disobedience that runs parallel to the political concept of civil disobedience. Acts of civil disobedience break some law in order to publicly draw attention to and recommend the reform of a conflict between the commitments of a legal system and some shared commitments of a community. Likewise, acts of aesthetic disobedience break some entrenched artworld norm in order to publicly draw attention to and recommend the reform of a conflict between artworld commitments and some shared commitments of a community. Considering artistic transgressions under the concept of aesthetic disobedience highlights often-overlooked features of modern artworld practices. Most significantly, it draws attention to the deliberative participation of a wide variety of citizens of the artworld, including not just artists and performers but also members of audiences, in the transformation of the rules and boundaries of the artworld itself.

Religious traditions can be sources of values and attitudes supporting the liberal polity in ways that political theorizing and conceptions of public reason often fail to recognize. Moreover, religious traditions can give support through the ways reason is crucial to their self-understanding. One understanding of Judaism is examined as an example. Also, the particularism of traditions can encourage commitment to universally valid values and ideals. Reason’s role in Judaism and other religious traditions makes possible constructive interaction between those traditions and between religious and secular thought. Exclusion of religiously grounded considerations from the discourse and deliberations of liberal polities can be counterproductively illiberal.

This paper evaluates Jonathan Quong’s attempt to defend a version of political liberalism from the asymmetry objection. I object that Quong’s defence relies on a premise that has not been adequately supported and does not look as if it can be given adequate support.

In a recent paper, Jonathan Quong tries to offer further support for “the proposition that there are sometimes agent-relative prerogatives to harm nonliable persons.” In this brief paper, I will demonstrate that Quong’s argument implicitly relies on the premise that the violinist in Thomson’s famous example has a right not to be unplugged. Yet, first, Quong provides no argument in support of this premise; and second, the premise is clearly wrong. Moreover, throughout his paper Quong simply assumes, question-beggingly and without argument, that one cannot lose rights other than by one’s own responsible action. I conclude that Quong has failed to provide further support for his thesis.

Jonathan Lowe has argued that a particular variation on C.I. Lewis' notion of strict implication avoids the paradoxes of strict implication. We show that Lowe's notion of implication does not achieve this aim, and offer a general argument to demonstrate that no other variation on Lewis' notion of constantly strict implication describes the logical behaviour of natural-language conditionals in a satisfactory way.

Joanna Mary Firth and Jonathan Quong argue that both an instrumental account of liability to defensive harm, according to which an aggressor can only be liable to defensive harms that are necessary to avert the threat he poses, and a purely noninstrumental account which completely jettisons the necessity condition, lead to very counterintuitive implications. To remedy this situation, they offer a “pluralist” account and base it on a distinction between “agency rights” and a “humanitarian right.” I argue, first, that this distinction is spurious; second, that the conclusions they draw from this distinction do not cohere with its premises; third, that even if one granted the distinction, Firth’s and Quong’s implicit premise that you can forfeit your agency rights but not your “humanitarian right” is unwarranted; fourth, that their attempt to mitigate the counterintuitive implications of their own account in the Rape case relies on mistaken ad-hoc assumptions; fifth, that even if they were successful in somewhat mitigating said counterintuitive implications, they would still not be able to entirely avoid them; and sixth, that even in the unlikely case that none of these previous five critical points are correct, Firth and Quong still fail to establish that aggressors can be liable to unnecessary defensive harm since they fail to establish that unnecessary harm can ever be defensive in the first place.

This paper criticizes an influential argument from Thomas Nagel’s THE POSSIBILITY OF ALTRUISM, an argument that plays a foundational role in the philosophies of (at least) Philippa Foot, John McDowell and Jonathan Dancy. Nagel purports to prove that a person can be motivated to perform X by the belief that X is likely to bring about Y, without a causally active or biffy desire for Y. If Cullity and Gaut are to be believed (ETHICS AND PRACTICAL REASONING), this is widely regarded within the practical reasoning industry as an established fact. My thesis is a simple one. Nagel’s argument is an abject failure and the philosophies that are founded on it are built upon sand. There is a little bit of rather amateurish X-Phi at the end, but I don’t want readers to get too excited about this as it is essentially icing on the cake. This paper is not primarily an exercise in Experimental Philosophy but in Baby Logic, and its central thesis is a logical one, namely that Nagel (to put the point politely) fails to prove his thesis.

The recovery of ancient skepticism in the sixteenth century had broad consequences in various intellectual domains, including fictional discourse. In the following centuries several authors echoed skeptical philosophical discourse and made literary use of skepticism. Jonathan Swift (1667-1745) belongs among the modern writers who echoed and assimilated the skeptical tradition. Satires such as A Tale of a Tub (1704), The Battle of the Books (1704) and Gulliver's Travels (1726) are framed with marks of skepticism. Thus, my purpose is to present a study of the incorporation of skeptical aspects and arguments in Swift's fictional and satirical discourse. I will try to show that Swift assimilates and uses artifices of skeptical discourse and, therefore, given his relation to and knowledge of this tradition, has a place in the history of skeptical thinking.

Many epistemologists have been attracted to the view that knowledge-wh can be reduced to knowledge-that. An important challenge to this, presented by Jonathan Schaffer, is the problem of “convergent knowledge”: reductive accounts imply that any two knowledge-wh ascriptions with identical true answers to the questions embedded in their wh-clauses are materially equivalent, but according to Schaffer, there are counterexamples to this equivalence. Parallel to this, Schaffer has presented a very similar argument against binary accounts of knowledge, and thereby in favour of his alternative contrastive account, relying on similar examples of apparently inequivalent knowledge ascriptions, which binary accounts treat as equivalent. In this article, I develop a unified diagnosis and solution to these problems for the reductive and binary accounts, based on a general theory of knowledge ascriptions that embed presuppositional expressions. All of Schaffer's apparent counterexamples embed presuppositional expressions, and once the effect of these is taken into account, it becomes apparent that the counterexamples depend on an illicit equivocation of contexts. Since epistemologists often rely on knowledge ascriptions that embed presuppositional expressions, the general theory of them presented here will have ramifications beyond defusing Schaffer's argument.

I argue that John Dewey’s analysis of imagination enables an account of learning from imaginary cases consistent with Jonathan Dancy’s moral particularism. Moreover, this account provides a more robust account of learning from cases than Dancy’s own. Particularism is the position that there are no, or at most few, true moral principles, and that competent reasoning and judgment do not require them. On a particularist framework, one cannot infer from an imaginary case that because a feature has a particular moral importance there, it must have that import in an actual case. Instead, for Dancy, cases can yield “reminders,” and a person with a lot of experience (real or imagined) brings a “checklist” of features that can matter to a situation. Using the Nathan-David exchange from 2 Samuel and Martha Nussbaum’s “Steerforth’s Arm” from Love’s Knowledge, I show that this account does not explain all instances of learning from cases. Drawing on recent work on cases, I argue that cases can be educative by serving an exploratory function, probing what one takes to be known and provoking change in the background one uses in evaluating a situation. I then argue that Dewey’s work on imagination in his comments on sympathy and in A Common Faith and Art as Experience enables such a role for cases on a particularist framework. Mark Johnson’s recent work on metaphor further illuminates how Dewey’s account of art can be exploratory. I contend that this account affords an exploratory role for cases consistent with Dancy’s particularism.

Causal contextualism holds that sentences of the form ‘c causes e’ have context-sensitive truth-conditions. We consider four arguments invoked by Jonathan Schaffer in favor of this view. First, he argues that his brand of contextualism helps solve puzzles about transitivity. Second, he contends that how one describes the relata of the causal relation sometimes affects the truth of one’s claim. Third, Schaffer invokes the phenomenon of contrastive focus to conclude that causal statements implicitly designate salient alternatives to the cause and effect. Fourth, he claims that the appropriateness of a causal statement depends on what is contextually taken for granted or made salient. We show that causal invariantism can explain these linguistic data at least as well as contextualism. We then argue that pace Schaffer, some causal sentences are always correct and can never be plausibly denied, regardless of the context.

Every normative claim is faced with the questions of how and why we should accept it. Jonathan Weinberg et al. think there is no good answer to these questions and call this the "normativity problem." I argue that if we try to posit that there is a problem of normativity, then logically we will fall into circularity – even if these questions of normativity do not have any answer. But there is still a problem that emerges in conflict resolution between normative issues. To avoid circularity, I formulate this problem and call it the "revised problem of normativity." Next, I enumerate and assess probable responses to this revised edition of the problem. Finally, I defend "basic normative issue(s)" as the only valid solution to the revised problem as well as the possibility of a scientific surveyability of it (them).

Selim Berker argues that particularists do not have a coherent notion of reasons for action because they cannot show that contributory reasons always contribute to overall reason or moral judgments in accordance with their valence. I argue that Berker fails to demonstrate that particularists cannot show this to be the case. He also wrongly assumes that they need to know this to be the case to legitimately speak of reasons for action. Furthermore, Jonathan Dancy’s account of practical reasoning explains how particularists can legitimately speak of reasons for action while claiming that reasons sometimes make contributions contrary to their valence.

The 18th-century American philosopher Jonathan Edwards argues that nothing endures through time. I analyze his argument, paying particular attention to a central principle it relies on, namely that “nothing can exert itself, or operate, when and where it is not existing”. I also consider what is supposed to follow from the conclusion that nothing endures. Edwards is sometimes read as the first four-dimensionalist. I argue that this is wrong. Edwards does not conclude that things persist by having different temporal parts; he concludes that nothing persists.

A finished sketch for a light-and-shadow projection device by the Paduan mechanical artisan Johannes de Fontana (c.1395–1455), in his manuscript book of drawings now known as Liber Bellicorum Instrumentorum, depicts a machine for communicating ideas or information through spectacle. The manuscript is fairly well known, and this sketch is just one of many interesting images worthy of study in its 70 leaves. A couple dozen manuscripts of the mechanical arts from this period survive, the best-studied of which fall into the “Sienese school” and the “German school.” Fontana falls outside these, for he had far less influence than the Sienese. His work also is too early, it seems, to count in narratives directed toward the flowering of technological illustration in the sixteenth century. Of his images of subjects other than hydraulic and military machines only one deep study has been made, concerning two of the automata, although the present sketch has lately attracted a glance or two. Historians of technology pay scant attention to the first half of the fifteenth century, five decades that seem merely to repeat medieval knowledge and have the disadvantage to their prestige of falling “before Leonardo.” Whether one views Fontana as an engineer or as a science fiction illustrator, a great deal in the manuscript has not been given its due. The brief normative account in the literature so far on Fontana focuses on politics and warfare. My account in the case of his castellus image in this paper emphasizes issues of imagery, communication, subjectivity, moral feeling, spiritual life, and personhood. This account runs along two lines. For the first, I will suggest some untried ideas for approaching this image. In part this is in pursuit of what Jonathan Sawday calls the imaginative history of machines and mechanisms, though more largely it concerns contributing to a broad-range history of communication and persuasion. 
If we look at the image from our standpoint in a world accustomed to the reproduction of images, we readily see in it an early step toward our present control of the display and diffusion of images. Fontana’s castle of shadows (castellus umbrarum), based on a worldwide transfer of technical knowledge about imagery in antiquity (and even in pre-history), presents some of the continuing questions driving the reproduction of imagery and the dispersal of information. As a practical matter, a sense of proximity to Fontana and his time, as opposed to a sense of untranslatable distance, helps to broaden the historiography. My second line of thought is to oppose my account of Fontana’s castellus to an interpretation, and to the thinking behind it, that has started to appear on the borders of disciplinary history. This other interpretation reflects an increasingly influential approach to the history of technology and cultural theory that employs a growing and powerful line of philosophical thought. In 2003 Philippe Codognet, a philosopher of technology, published an essay in which he described Fontana’s castle of shadows as a specimen of the pre-history of virtual reality devices. His reference to the castle of shadows is a bit casual, perhaps accidental in feeling; but it has begun to stimulate interest in Fontana’s striking idea and has given it a bit of renown. Codognet’s view (along with his reproduction of the image) has been picked up by thinkers who are concerned with post-humanistic ideas derived from philosophical work in which the distinction between human persons and objects is deflated in such a way that both persons and objects are correctly characterized by attributes commonly divided into subjective and objective. What’s more, they are characterized by attributes that, under this view, are incorrectly distinguished from one another as the human, the organic, and the inorganic. 
The ontology supporting this approach denies the privileged epistemological relationship of humans to the world. This school of thought is object-oriented ontology, also known in a more radical form as speculative realism. Its potential influence on historiography is great, and part of it is and will be valuable. Its current actual influence is centered on medieval cultural studies and on the history of technology.

This study offers a comprehensive summary and critical discussion of Alice Crary’s Beyond Moral Judgment. While generally sympathetic to her goal of defending the sort of expansive vision of the moral previously championed by Cora Diamond and Iris Murdoch, concerns are raised regarding the potential for her account to provide a satisfactory treatment of both “wide” objectivity and moral disagreement. Drawing on the work of Jonathan Lear and Jonathan Dancy, I suggest possible routes by which her position could be expanded and possibly strengthened.

Steinberg has recently proposed an argument against Schaffer’s priority monism. The argument assumes the principle of Necessity of Monism, which states that if priority monism is true, then it is necessarily true. In this paper, I argue that Steinberg’s objection can be eluded by giving up Necessity of Monism for an alternative principle, which I call Essentiality of Fundamentality, and that such a principle is to be preferred to Necessity of Monism on other grounds as well.

In this paper, we shall describe and critically evaluate four contemporary theories which attempt to solve the problem of the infinite regress of reasons: BonJour's ‘impure’ coherentism, BonJour's foundationalism, Haack's ‘foundherentism’ and Dancy's pure coherentism. These theories are initially put forward as theories about the justification of our empirical beliefs; however, in fact they also attempt to provide a successful response to the question of their own ‘metajustification.’ Yet, it will be argued that 1) none of the examined theories is successful as a theory of justification of our empirical beliefs, and that 2) they also fall short of being adequate theories of metajustification. It will be further suggested that the failure of these views on justification is not coincidental, but is actually a consequence of deeper and tacitly held problematic epistemological assumptions (namely, the requirements of justificatory generality and epistemic priority), whose acceptance paves the way towards a generalized scepticism about empirical justification.

Moral particularism, on some interpretations, is committed to a shapeless thesis: the moral is shapeless with respect to the natural. (Call this version of moral particularism ‘shapeless moral particularism’). In more detail, the shapeless thesis is that the actions a moral concept or predicate can be correctly applied to have no natural commonality (or shape) amongst them. Jackson et al. (Ethical particularism and patterns, Oxford University Press, Oxford, 2000) argue, however, that the shapeless thesis violates the platitude ‘predication supervenes on nature’—predicates or concepts apply because of how things are—and therefore ought to be rejected. I defend shapeless moral particularism by arguing that Jackson et al.'s contention is less compelling than it first appears. My defense is limited in the sense that it does not prove shapeless moral particularism to be right and it leaves open the possibility that shapeless moral particularism might attract criticisms different from the ones advanced by Jackson et al. But at the very least, I hope to say enough to undermine Jackson et al.'s powerful attack against it. The plan of this paper is as follows. Section 1 glosses the view of moral particularism and why it is taken to be essentially committed to the shapeless thesis. Section 2 examines a Wittgensteinian argument for the shapeless thesis. I shall argue that the Canberrans’ counter-arguments against it on grounds of disjunctive commonality and conceptual competence do not succeed. Section 3 explicates the Canberrans’ predication supervenience argument against the shapeless thesis. Section 4 offers my criticisms of the Canberrans’ predication supervenience argument. In view of the above discussions, in Sect. 5, I conclude that there is no compelling argument (from the Canberrans) to believe that the shapeless thesis fails (as I have argued in Sect. 4). In fact, there is some good reason for us to believe it (as I have argued in Sect. 2). 
If so, I contend that moral particularism, when construed as essentially committed to the shapeless thesis, remains a live option.

This review confirms Herman’s work as a praiseworthy contribution to East-West and comparative philosophical literature. Due credit is given to Herman for providing English readers with access to Buber’s commentary on, and personal translation of, the Chuang-Tzu; Herman’s insight into the later influence of I and Thou on Buber’s understanding of Chuang-Tzu and Taoism is also appropriately commended. In the latter half of this review, constructive criticisms of Herman’s work are put forward, such as formatting inconsistencies, a tendency toward verbosity and jargon, and a neglect of seemingly important hermeneutical issues. Such issues, seemingly substantive but neglected by Herman, are the influence of Buber’s prior familiarity with Hasidic teachings on his encounter with Chuang-Tzu, as well as the prevalence of Hasidic and Taoist thought in Buber’s conception of good and evil.

This is a review of The Turing Guide (2017), written by Jack Copeland, Jonathan Bowen, Mark Sprevak, Robin Wilson, and others. The review includes a new sociological approach to the problem of computability in physics.

In this paper we present Jonathan Stoltz’s comparative study of the similarities between Timothy Williamson’s epistemology and the Indo-Tibetan epistemological traditions that developed between the eighth and thirteenth centuries. Foremost among these similarities is the view that knowledge is a kind of mental state, along with a qualified return to metaphysics, a position born of an attempt to move beyond Williamson’s classical and strongly Anglo-American analytical roots.

In “Against Arguments from Reference” (Mallon et al., 2009), Ron Mallon, Edouard Machery, Shaun Nichols, and Stephen Stich (hereafter, MMNS) argue that recent experiments concerning reference undermine various philosophical arguments that presuppose the correctness of the causal-historical theory of reference. We will argue three things in reply. First, the experiments in question—concerning Kripke’s Gödel/Schmidt example—don’t really speak to the dispute between descriptivism and the causal-historical theory; though the two theories are empirically testable, we need to look at quite different data than MMNS do to decide between them. Second, the Gödel/Schmidt example plays a different, and much smaller, role in Kripke’s argument for the causal-historical theory than MMNS assume. Finally, and relatedly, even if Kripke is wrong about the Gödel/Schmidt example—indeed, even if the causal-historical theory is not the correct theory of names for some human languages—that does not, contrary to MMNS’s claim, undermine uses of the causal-historical theory in philosophical research projects.

This paper attempts to develop an ethico-aesthetic framework for enriching one's life and ethical outlook. Drawing primarily from Nietzsche, Foucault, and Heidegger, an argument is made that Heidegger's understanding of this issue was mistaken. The ontological crisis of modernity is not the overt influence of mathematics as a worldview over poetics and more traditionally aesthetic approaches. It is the rampant mis- and over-application of abstraction within one's view of the world while denying the material realities of life as we live it. This runaway abstractive worldview leads to the misapplication of mathematics and other sciences which in turn facilitate the dehumanization of life and those within it. When we try to solve the real problems of our material human lives through overly abstractive means, then we arrive at inauthentic arguments that fuel popular disdain for philosophy as irrelevant and nothing more than the purview of the elite. The goal is a recalibration of the argument toward addressing the denial of materiality within Modernism.

Recent debate over the semantics and pragmatics of epistemic modals has focused on intuitions about cross-contextual truth-value assessments. In this paper, we advocate a different approach to evaluating theories of epistemic modals. Our strategy focuses on judgments of the incompatibility of two different epistemic possibility claims, or two different truth value assessments of a single epistemic possibility claim. We subject the predictions of existing theories to empirical scrutiny, and argue that existing contextualist and relativist theories are unable to account for the full pattern of observed judgments. As a way of illustrating the theoretical upshot of these results, we conclude by developing a novel theory of epistemic modals that is able to predict the results.

In “Neurosentimentalism and Moral Agency”, Philip Gerrans and Jeanette Kennett argue that prominent versions of metaethical sentimentalism and moral realism ignore the importance, for moral agency and moral judgment, of the capacity to experientially project oneself into the past and possible futures – to engage in ‘mental time travel’ (MTT). They contend that such views are committed to taking subjects with impaired capacities for MTT to be moral judgers, and thus confront a dilemma: either allow that these subjects are moral agents, or deny that moral agency is required for moral judgment. In reply, we argue for two main claims. First, it is implausible that moral agency is required for moral judgment, and Gerrans and Kennett give us no good reason for thinking it is. Second, at least some of the subjects in question seem able to make moral judgments, and Gerrans and Kennett give us no good reason to doubt that they can. We conclude that they have not shown a problem for any of the metaethical views in question.

The most pressing difficulty coherentism faces is, I believe, the problem of justified inconsistent beliefs. In a nutshell, there are cases in which our beliefs appear to be both fully rational and justified, and yet the contents of the beliefs are inconsistent, often knowingly so. This fact contradicts the seemingly obvious idea that a minimal requirement for coherence is logical consistency. Here, I present a solution to one version of this problem.

In the Treatise, Hume claims to identify many “fictions of the imagination” among both “vulgar” and philosophical beliefs. To name just a few, these include the fiction of one aggregate composed of many parts, the fiction of a material object’s identity through change, and the fiction of a human mind’s identity through change and interruption in its existence. Hume claims that these fictions and others like them are somehow defective: in his words, they are “improper,” “inexact,” or not “strict”. I will argue that this claim conflicts with Hume’s other commitments.

We attempt to improve the understanding of the notion of a gene being ‘for’ a phenotypic trait or traits. Considering the implicit functional ascription of one thing being ‘for’ another, we submit a more restrictive version of ‘gene for’ talk. Accordingly, genes are only to be thought of as being for phenotypic traits when good evidence is available that the presence or prevalence of the gene in a population is the result of natural selection on that particular trait, and that the association between that trait and the gene in question is demonstrably causal. It is therefore necessary to gather statistical, biochemical, historical, as well as ecological information before properly claiming that a gene is for a phenotypic trait. Instead of hampering practical use of ‘gene for’ talk, our approach aims at stimulating much needed research into the functional ecology and comparative evolutionary biology of gene action.

This paper challenges the notion that the only way to progress to a post-capitalist society is through the wholesale destruction of the capitalist economic system. Instead, I argue that Craft—an existential state and praxis informed by the creation and maintenance of objects of utility—is uniquely situated to effectively reclaim these systems due to its focus on materiality over abstraction and its unique position as a socially aware form of praxis. This argument focuses not on competition, but on hyper-abstraction as the key driver of capitalist exploitation and its most glaring ethical flaw. Karl Marx's work on commodity fetishism is key to understanding this misguided form of abstraction, which displaces commodities so far from their functional form that they feed into what Martin Heidegger termed gestell, or enframing. Postmodern attempts to destabilize capitalist influence in the fine arts, like the de-objectification of the 1960s described by Ursula Meyer, often fell victim to the same fetishistic mindset and simply increased the hold of capitalism within the arts. The enframing worldview that Heidegger warns us about is fed by hyper-abstraction, and while he directly offers up art as the remedy to this situation via poiēsis, key moments in his writings on the related notion of geschick support this new notion of Craft, rather than the fine arts, as a more capable system for the rehabilitation of modern society.

According to Conciliationism, rationality calls for a removal of dissenting opinions – in the end, the disagreement should lead to skepticism toward the disputed proposition for all the involved parties. However, psychological data regarding group inquiry indicates that groups with dissenting members are more successful in their inquiry with respect to the disputed propositions. So, according to the psychological data, rationality calls for preserving dissent – disagreement should be embraced as a great tool for getting at true beliefs. In this paper I analyze this apparent conflict.

The developing body of empirical work on the "Gettier effect" indicates that, in general, the presence of a Gettier-type structure in a case makes participants less likely to attribute knowledge in that case. But is that a sufficient reason to diverge from a JTB theory of knowledge? I argue that considerations of good model selection, and worries about noise and overfitting, should lead us to consider that a live, open question. The Gettier effect is perhaps so transient, and so sensitive to other, epistemologically-inappropriate factors, that it raises the question of whether it ought to be counted as something to include in our theories -- or as a piece of noise to be excluded from them.

Despite a recent explosion of interest in the ethics of armed conflict, the traditional just war criterion that war be waged by a “legitimate authority” has received less attention than other components of the theory. Moreover, of those theorists who have addressed the criterion, many are deeply skeptical about its moral significance. This article aims to add some clarity and precision to the authority criterion and to debates surrounding it, and to suggest that this skepticism may be too quick. First, it provides an analysis of the authority criterion, and argues there are (at least) two distinct moral claims associated with the criterion, requiring separate evaluation. Second, it outlines an increasingly influential “reductivist” approach to just war theory, and explains how it grounds powerful objections to the authority criterion. Third, and in response, it sketches the most promising strategies for providing a (qualified) defense of authority, and the further questions and complications they raise. Importantly, these strategies aim to rehabilitate the authority criterion from within a broadly reductivist view.

The collaboration of Language and Computing nv (L&C) and the Institute for Formal Ontology and Medical Information Science (IFOMIS) is guided by the hypothesis that quality constraints on ontologies for software application purposes closely parallel the constraints salient to the design of sound philosophical theories. The extent of this parallel has been poorly appreciated in the informatics community, and it turns out that importing the benefits of philosophical insight and methodology into application domains yields a variety of improvements. L&C's LinKBase® is one of the world's largest medical domain ontologies. Its current primary use pertains to natural language processing applications, but it also supports intelligent navigation through a range of structured medical and bioinformatics information resources, such as SNOMED-CT, Swiss-Prot, and the Gene Ontology (GO). In this report we discuss how and why philosophical methods improve both the internal coherence of LinKBase®, and its capacity to serve as a translation hub, improving the interoperability of the ontologies through which it navigates.

A mysterious remark to Friedrich Waismann on 30 December 1929 marks the only occasion where Wittgenstein refers to both Heidegger and Kierkegaard. Yet although this has generated much controversy, little attention has been paid to the charge of nonsense that Wittgenstein here appears to bring against Heidegger; thus, the supporting argument that may be latent has not been unearthed. Through analysis of this remark, Wittgenstein's arguments in the Tractatus and 'A Lecture on Ethics', and Heidegger's account of anxiety (Angst) in Being and Time, I argue that we can extract an argument against the central question of Heidegger's philosophy: the question of being. To understand this, I examine the Kierkegaardian ideas employed by Wittgenstein and Heidegger and attempt to show that this argument can be partly understood in Kierkegaardian terms. I further argue that examining what Heidegger means by 'being' (Sein) shows that Wittgenstein's argument does not meet its target.

The notion of activity is increasingly present in computer science. However, because this notion is used in disparate specific contexts, it has become vague. Here, the notion of activity is scrutinized in various contexts and, accordingly, put in perspective. It is discussed through four scientific disciplines: computer science, biology, economics, and epistemology. The definition of activity usually used in simulation is extended to new qualitative and quantitative definitions. Within computer science, biology, and economics, the new simulation-based definition of activity is first applied critically; activity is then discussed more generally. In epistemology, activity is discussed, in a prospective way, as a possible framework in models of human beliefs and knowledge.

Harold Hodes in [1] introduces an extension of first-order modal logic featuring a backtracking operator, and provides a possible worlds semantics, according to which the operator is a kind of device for ‘world travel’; he does not provide a proof theory. In this paper, I provide a natural deduction system for modal logic featuring this operator, and argue that the system can be motivated in terms of a reading of the backtracking operator whereby it serves to indicate modal scope. I prove soundness and completeness theorems with respect to Hodes’ semantics, as well as semantics with fewer restrictions on the accessibility relation.

This dissertation examines the conceptual and theoretical foundations of the most general and most widely used framework for understanding social evolution, W. D. Hamilton's theory of kin selection. While the core idea is intuitive enough (when organisms share genes, they sometimes have an evolutionary incentive to help one another), its apparent simplicity masks a host of conceptual subtleties, and the theory has proved a perennial source of controversy in evolutionary biology. To move towards a resolution of these controversies, we need a careful and rigorous analysis of the philosophical foundations of the theory. My aim in this work is to provide such an analysis. I begin with an examination of the concepts behavioural ecologists employ to describe and classify types of social behaviour. I stress the need to distinguish concepts that are often conflated: for example, we need to distinguish simple cooperation from collaboration in collective tasks, behaviours from strategies, and control from manipulation and coercion. I proceed from here to the formal representation of kin selection via George R. Price’s covariance selection mathematics. I address a number of interpretative issues the Price formalism raises, including the vexed question of whether kin selection theory is ‘formally equivalent’ to multi-level selection theory. In the second half of the dissertation, I assess the uses and limits of Hamilton’s rule for the evolution of social behaviour; I provide a precise statement of the conditions under which the rival neighbour-modulated fitness and inclusive fitness approaches in contemporary kin selection theory are equivalent (and describe cases in which they are not); and I criticize recent formal attempts to establish the controversial claim that kin selection leads to organisms behaving as if maximizing their inclusive fitness.

Developments in biological technology in the last few decades highlight the surprising and ever-expanding practical benefits of stem cells. With this progress, the possibility of combining human and nonhuman organisms is a reality, with ethical boundaries that are not readily obvious. These inter-species hybrids belong to a larger class of biological entities called "chimeras." As the concept of a human-nonhuman creature is conjured in our minds, either incredulous wonder or grotesque horror is likely to follow. This paper seeks to mitigate those worries and demotivate reasonable concerns raised against chimera research, all the while pressing current ethical positions toward their plausible conclusions.

In service of this overall aim, first, I intend to show that chimeras are far less foreign and fantastic in light of recent research in the lab; second, I intend to show that anti-realist (so-called "constructivist") commitments regarding species ontology render the species distinction (i.e., the divide between human and nonhuman) superfluous as a basis for ethical practice; and third, I discuss some prevailing dignity accounts regarding the practical ethics of the creation, research, and treatment of chimeras. Consequently, I intend to show that the adoption of this particular set of views (constructivist ontology, capacity-based ethics) in conjunction with recent research ought to justify a parallel with what we accord to human persons, and that trajectory allows for cases of moral permissibility.

Public reason as a political ideal aims to reconcile reasonable disagreement; however, is public reason itself the object of reasonable disagreement? Jonathan Quong, David Estlund, Andrew Lister, and some other philosophers maintain that public reason is beyond reasonable disagreement. I argue that this view is untenable. In addition, I consider briefly whether the two main versions of the public reason principle, namely the consensus version and the convergence version, need to satisfy their own requirements. My discussion has several important implications for the debate on public reason.