In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, and articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, the Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories with respect to the prerequisites that an ICT must satisfy in order to count as a moral agent accountable for its behavior. I argue that each of the various elements of the necessary conditions for moral agency presupposes consciousness, i.e., the capacity for inner subjective experience like that of pain or, as Nagel puts it, the possession of an internal something-it-is-like-to-be. I ultimately conclude that the issue of whether artificial moral agency is possible depends on the issue of whether it is possible for ICTs to be conscious.

One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. This special issue of Philosophy and Technology investigates whether and to what extent machines, of various designs and configurations, can or should be considered moral subjects, defined here as either a moral agent, a moral patient, or both. The articles that comprise the issue were competitively selected from papers initially prepared for and presented at a symposium on this subject matter convened during the AISB/IACAP 2012 World Congress, held at the University of Birmingham in Birmingham, UK.

Abstract This paper suggests that dissatisfaction with traditional teaching practices is fundamentally a moral complaint. Treating students as receptacles offends our sense of human dignity. We feel the need for students to be treated as moral agents. The paper explores the concept of moral agency by, first, looking at an episode of instruction from Plato's Meno, and then drawing from it three necessary elements of moral agency: choice, vision, and an end-in-view. Choice is necessary because, to be a moral agent, a person must have more than one course of action available, as well as both the authority and the competence to choose which course of action to follow. Vision is necessary because a moral agent is a person who sees the world in a certain way: moral agency is as much a matter of world-making as of choosing and behaving in the world. An end-in-view is necessary because it provides the engagement and social context that enable choice and vision to operate. The paper concludes that while teachers cannot endow students with moral agency, they can create an environment that encourages moral agency.

This paper examines ways in which current moral values are influenced by earlier patterns of thinking carried forward in root metaphors whose meanings were often framed by the analogues settled upon in the past by thinkers who were influenced by the silences and prejudices of their culture. It is argued that such tacitly inherited metaphors reproduce the myth of the individual as a moral agent and that this both is ecologically unsustainable and undermines other important ways of understanding the individual. In various ways the form of critical thinking to which it gives rise stands in the way of individuals exercising what Gregory Bateson refers to as their "ecological intelligence" in a manner that properly takes account of the interdependent nature of cultural and natural ecologies. It is argued that if in the West (at least) a shift could be made to relying on "ecology" as the dominant root metaphor, the whole system of Western moral values would undergo a radical change to one better suited to the current Western environmental situation.

As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from computational limits to the implementation of such theories. In this paper the ethical disputes are surveyed, the possibility of a "moral Turing Test" is considered, and the computational difficulties accompanying the different types of approach are assessed. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata.

Climate Change and the Moral Agent examines the moral foundations of climate change and makes a case for collective action on climate change by appealing to moralized collective self-interest, collective ability to aid, and an expanded understanding of collective responsibility for harm.

In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artifact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artifactual question to be a useful way into discussion but ultimately misleading. We set out a number of conceptual pre-conditions for being a moral agent and then outline how one should — and should not — go about attributing moral agency. In place of a litmus test for such agency — such as Allen et al.'s Moral Turing Test — we suggest some tools from the conceptual spaces theory and the unified conceptual space theory for mapping out the nature and extent of that agency.

Symposium contribution on Mark Schroeder's Slaves of the Passions. Argues that Schroeder's account of agent-neutral reasons cannot be made to work, that the limited scope of his distinctive proposal in the epistemology of reasons undermines its plausibility, and that Schroeder faces an uncomfortable tension between the initial motivation for his view and the details of the view he develops.

Much contemporary art seems morally out of control. Yet, philosophers seem to have trouble finding the right way to morally evaluate works of art. The debate between autonomists and moralists, I argue, has turned into a stalemate due to two mistaken assumptions. Against these assumptions, I argue that the moral nature of a work's contents does not transfer to the work and that, if we are to morally evaluate works, we should try to conceive of them as moral agents. Ethical autonomism holds that art's autonomy consists in its demand that art appreciators take up an artistic attitude. A work's agency, then, lies in how it merits its audience's attitudinal switch. Ethical autonomism allows for the moral assessment of art works without giving up their autonomy, by viewing artistic merit as a moral category and art-relevant moral evaluation as having the form of art criticism.

Abstract Thus far psychology has not been very successful in integrating the feelings and emotions into its account of the moral life. In part this may be because it has lacked a clear image of the person as a sentient being. Such an image is presented here, derived primarily from depth psychology and cognitive-developmentalism. The preconscious and unconscious "levels" of psychic activity postulated by the former can be interpreted as the continuation of preoperational and more archaic forms of thinking, below or behind conscious awareness. The person is more fragmented, more liable to inner conflict, than cognitive-developmental theory generally allows. This image can be used to shed light on several problem areas in moral psychology: the theory of social action, gender differences, the nature of moral knowledge, and behavioural consistency.

This paper aims to lay bare the underlying logical structure of early Buddhist moral thinking. It argues that moral vocabulary in the Pali Suttas varies depending on the kind of agent under discussion and that this variance reflects an understanding that the phenomenology of moral experience also differs on the same basis. An attempt is made to spell this out in terms of attachment. The overall picture of Buddhist ethics that emerges is that of an agent-based moral contextualism. This account does not imply that the prescription for moral conduct differs according to class of agent, but rather that the correct description of moral experience does. Further it implies that the descriptions of the moral experiences of different classes of agent differ phenomenologically, rather than in terms of overt behavioral characteristics. While most of the discussion is centered on the distinction between ordinary persons and disciples in higher training, the paper concludes with a brief exploration of the problematic moral experience of the arahat.

In this paper, I propose a deliberative model of the concept of the international community. The international community is a community of the world's people, peoples, and states insofar as they take themselves to be part of a potentially universal agency. I suggest that we distinguish the possibility that a more 'concrete' agent represents the international community from the practice that states, organizations, and individuals engage in of offering claims about the beliefs and attitudes of the international community in support of their own actions. I argue that while any agent can act out of an appreciation of the importance of the perspective of the international community, only an organization the primary purpose of which is to represent the international community can represent it. I close with some remarks on how the United States might act responsibly not on behalf of the international community but as a member of it.

Abstract The article addresses the question of the teacher's role. Should teachers perceive themselves as being role-models for their students? In this study reported responses from 65 prospective teachers in six colleges of education in Norway were analyzed. The respondents were randomly drawn from a sample of 286 college students who took part in a longitudinal study investigating the development of professional perspectives and behaviour in prospective teachers. The data discussed here were collected by the use of semi-structured interviews which lasted anywhere from 45 to 90 minutes. The average age of the respondents was 23.5 years. The results reveal great differences in the respondents' understanding of what it implies for teachers to be role-models students can look up to and identify with. Half of the respondents agreed with the idea that teachers should consciously act as role-models toward their students. The rest of the group was divided: some oppose the idea while some are uncertain.

The paper focuses on the conditions under which an agent can be justifiably held responsible or liable for the harmful consequences of his or her actions. Kant has famously argued that as long as the agent fulfills his or her moral duty, he or she cannot be blamed for any potential harm that might result from his or her action, no matter how foreseeable that harm may have been. I call this the Duty-Absolves-Thesis, or DA. I begin by stating the thesis in a more precise form and then go on to assess, one by one, several possible justifications for it: that (i) it wasn't the view Kant himself actually held or was committed to; (ii) there is nothing strange about the DA, either theoretically or intuitively; (iii) the DA is more plausible as an account of legal (either criminal or tort) liability; (iv) the DA becomes perfectly plausible when conceived as a thesis about what insulates the agent from either remedial moral responsibility or the demands of compensatory justice; (v) the rationale for the DA is to protect our moral assessment of agents and their actions from the threat of moral luck. I show, with the help of the infamous Inquiring Murderer example, that all these (and some other) justificatory attempts are unsuccessful. I conclude that besides being counter-intuitive, the DA-thesis also lacks firm theoretical grounding and should therefore be rejected as (part of) an account of outcome moral responsibility.

After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent's freedom. On the other hand, computer systems have intentionality, and because of this, they should not be dismissed from the realm of morality in the same way that natural objects are dismissed. Natural objects behave from necessity; computer systems and other artifacts behave from necessity after they are created and deployed, but, unlike natural objects, they are intentionally created and deployed. Failure to recognize the intentionality of computer systems and their connection to human intentionality and action hides the moral character of computer systems. Computer systems are components in human moral action. When humans act with artifacts, their actions are constituted by the intentionality and efficacy of the artifact which, in turn, has been constituted by the intentionality and efficacy of the artifact designer. All three components – artifact designer, artifact, and artifact user – are at work when there is an action and all three should be the focus of moral evaluation.

Some authors have argued that the human use of reproductive cloning and genetic engineering should be prohibited because these biotechnologies would undermine the autonomy of the resulting child. In this paper, two versions of this view are discussed. According to the first version, the autonomy of cloned and genetically engineered people would be undermined because knowledge of the method by which these people have been conceived would make them unable to assume full responsibility for their actions. According to the second version, these biotechnologies would undermine autonomy by violating these people's right to an open future. There is no evidence to show that people conceived through cloning and genetic engineering would inevitably or even in general be unable to assume responsibility for their actions; there is also no evidence for the claim that cloning and genetic engineering would inevitably or even in general rob the child of the possibility to choose from a sufficiently large array of life plans.

Among contemporary ethicists, Hume is perhaps best known for his views about morality's practical import and his spectator-centered account of moral evaluation. Yet according to the so-called "spectator complaint", these two aspects of Hume's moral theory cannot be reconciled with one another. I argue that the answer to the spectator complaint lies in Hume's account of "goodness" and "greatness of mind". Through a discussion of these two virtues, Hume makes clear the connection between his views about moral motivation and his understanding of moral evaluation by providing us with two portraits of the Humean moral agent.

In the 17th century, 'reflexivity' was coined as a new term for introspection and self-awareness. It thus was poised to serve the instrumental function of combating skepticism by asserting a knowing self. In this Cartesian paradigm, introspection ends in an entity of self-identity. An alternate interpretation recognized how an infinite regress of reflexivity would render 'the self' elusive, if not unknowable. Reflexivity in this latter mode was rediscovered by post-Kantian philosophers, most notably Hegel, who defined the self in its self-reflective encounter with an other, and whose full articulation would occur at the final culmination of Reason's evolution. In the rising tide of 19th-century individualism, Emerson and Kierkegaard formulated constructions both in debt to, and in opposition against, Hegelian metaphysics. For each, although employing distinct strategies of self-consciousness, 'the self' reached its apogee through divine encounter. Characterized by personal responsibility and individual choice, their philosophies would later be subsumed by secular existentialists committed to defining moral individualism and asserting the possibilities for human freedom and self-authentication.

In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when one can analyze or explain the robot's behavior only by ascribing to it some predisposition or 'intention' to do good or harm. And finally, robot moral agency requires the robot to behave in a way that shows an understanding of responsibility to some other moral agent. Robots meeting all of these criteria will have moral rights as well as responsibilities regardless of their status as persons.

Many claim that a plausible moral theory would have to include a principle of beneficence, a principle telling us to produce goods that are both welfarist and agent-neutral. But when we think carefully about the necessary connection between moral obligations and reasons for action, we see that agents have two reasons for action, and two moral obligations: they must not interfere with any agent's exercise of his rational capacities and they must do what they can to make sure that agents have rational capacities to exercise. According to this distinctively deontological view of morality, though we are obliged to produce goods, the goods in question are non-welfarist and agent-relative. The value of welfare thus turns out to be, at best, instrumental.

In modern technical societies computers interact with human beings in ways that can affect moral rights and obligations. This has given rise to the question whether computers can act as autonomous moral agents. The answer to this question depends on many explicit and implicit definitions that touch on different philosophical areas such as anthropology and metaphysics. The approach chosen in this paper centres on the concept of information. Information is a multi-faceted notion which is hard to define comprehensively. However, the frequently used definition of information as data endowed with meaning can promote our understanding. It is argued that information in this sense is a necessary condition of cognitivist ethics. This is the basis for analysing computers and information processors regarding their status as possible moral agents. Computers have several characteristics that are desirable for moral agents. However, computers in their current form are unable to capture the meaning of information and therefore fail to reflect morality in anything but a most basic sense of the term. This shortcoming is discussed using the example of the Moral Turing Test. The paper ends with a consideration of which conditions computers would have to fulfil in order to be able to use information in such a way as to render them capable of acting morally and reflecting ethically.

In this paper we discuss the hypothesis that, 'moral agency is distributed over both humans and technological artefacts', recently proposed by Peter-Paul Verbeek. We present some arguments for thinking that Verbeek is mistaken. We argue that artefacts such as bridges, word processors, or bombs can never be (part of) moral agents. After having discussed some possible responses, as well as a moderate view proposed by Illies and Meijers, we conclude that technological artefacts are neutral tools that are at most bearers of instrumental value.

The ethical authority carried in the conventions of fairness and human well-being has been widely adopted under the idea of “moral economy,” forming an eclectic and interdisciplinary debate. Significant, though external to this debate, is a corpus of medieval thought which exhibits a fundamental interest in legitimate market protocols, and the political rights and obligations of agents in relation to the common good of the community. This article asserts the imperative status of a customary basis for understanding not just the analytic version of moral economy but the legacy contained in what might be termed “the moral economy of Aquinas.”

The goal of this paper is to connect managerial behavior on the “agent-steward” scale to managerial moral development and motivation. I introduce agent- and steward-like behavior: the former is self-serving while the latter is others-serving. I suggest that managerial moral development and motivation may be two of the factors that predict the tendency of managers to behave in a self-serving way (like agents) or to serve the interests of the organization (like stewards). Managers at low levels of moral development are more likely to behave like agents, while managers at higher levels of moral development are more likely to behave like stewards. I also argue that managers at the highest level of moral development may serve the interests of people other than the firm’s owners and thereby transfer wealth from the firm’s owners to third parties. Moral motivation is likely to be a factor that moderates the proposed relationships. Finally, I develop propositions that address the role of material incentives in controlling behavior of managers at different levels of moral development.

Two groups of agents, G1 and G2, face a *moral conflict* if G1 has a moral obligation and G2 has a moral obligation, such that these obligations cannot both be fulfilled. We study moral conflicts using a multi-agent deontic logic devised to represent reasoning about sentences like "In the interest of group F of agents, group G of agents ought to see to it that phi". We provide a formal language and a consequentialist semantics. An illustration of our semantics with an analysis of the Prisoner's Dilemma follows. Next, necessary and sufficient conditions are given for (1) the possibility that a single group of agents faces a moral conflict, for (2) the possibility that two groups of agents face a moral conflict within a single moral code, and for (3) the possibility that two groups of agents face a moral conflict.

Theories of individual well‐being fall into three main categories: hedonism, the desire‐fulfilment theory, and the list theory (which maintains that there are some things that can benefit a person without increasing the person's pleasure or desire‐fulfilment). The paper briefly explains the answers that hedonism and the desire‐fulfilment theory give to the question of whether being virtuous constitutes a benefit to the agent. Most of the paper is about the list theory's answer.

My goal in this essay is to say something helpful about the philosophical foundations of deontic restraints, i.e., moral restraints on actions that are, roughly speaking, grounded in the wrongful character of the actions themselves and not merely in the disvalue of their results. An account of deontic restraints will be formulated and offered against the backdrop of three related, but broader, contrasts or puzzles within moral theory. The plausibility of this account of deontic restraints rests in part on how well this account resolves the puzzles or illuminates the contrasts which make up this theoretical backdrop.