As software developers design artificial agents, they often have to wrestle with complex issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship between deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.

In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they did not explore deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.

In this paper, we examine some ethical implications of a controversial court decision in the United States involving Verizon (an Internet Service Provider or ISP) and the Recording Industry Association of America (RIAA). In particular, we analyze the impacts this decision has for personal privacy and intellectual property. We begin with a brief description of the controversies and rulings in this case. This is followed by a look at some of the challenges that peer-to-peer (P2P) systems, used to share digital information, pose for our legal and moral systems. We then examine the concept of privacy to better understand how the privacy of Internet users participating in P2P file-sharing practices is threatened under certain interpretations of the Digital Millennium Copyright Act (DMCA) in the United States. In particular, we examine the implications of this act for a new form of “panoptic surveillance” that can be carried out by organizations such as the RIAA. We next consider the tension between privacy and property-right interests that emerges in the Verizon case, and we examine a model proposed by Jessica Litman for distributing information over the Internet in a way that respects both privacy and property rights. We conclude by arguing that in the Verizon case, we should presume in favor of privacy as the default position, and we defend the view that a presumption should be made in favor of sharing (rather than hoarding) digital information. We also conclude that in the Verizon case, a presumption in favor of property would have undesirable effects and would further legitimize the commodification of digital information – a recent trend that is reinforced by certain interpretations of the DMCA on the part of lawmakers and by aggressive tactics used by the RIAA.

This essay examines some ethical aspects of stalking incidents in cyberspace. Particular attention is focused on the Amy Boyer/Liam Youens case of cyberstalking, which has raised a number of controversial ethical questions. We limit our analysis to three issues involving this particular case. First, we suggest that the privacy of stalking victims is threatened because of the unrestricted access to on-line personal information, including on-line public records, currently available to stalkers. Second, we consider issues involving moral responsibility and legal liability for Internet service providers (ISPs) when stalking crimes occur in their “space” on the Internet. Finally, we examine issues of moral responsibility for ordinary Internet users to determine whether they are obligated to inform persons whom they discover to be the targets of cyberstalkers.

In this age of information technology, it is morally imperative that equal access to information via computer systems be afforded to people with disabilities. This paper addresses the problems that computer technology poses for students with disabilities and discusses what is needed to ensure equity of access, particularly in a university environment.