Computational Ethics Systems
One main research activity in machine ethics is developing computational ethics systems. Several such systems exist, but there is a paucity of overall standards bodies, general ethics modules, and articulations of the universal principles that might be included, such as human dignity, informed consent, privacy, and benefit-harm analysis. Standards bodies beginning to address these ideas include the IEEE's Technical Committee on Robot Ethics and the European committees involved in RoboLaw and Roboethics.

One required feature of computational ethics systems could be the ability to flexibly apply different systems of ethics, to more accurately reflect the ways that human intelligent agents approach real-life situations. For example, it is known from early programming efforts that simple models like Bentham and Mill's utilitarianism are not robust enough as ethics models: they do not incorporate comprehensive human notions of justice that extend beyond the immediate situation in decision-making. Helpfully, machine systems have evolved more expansive models than utilitarianism, such as the prima facie duty approach. The prima facie duty approach offers a more complex conceptualization of intuitive duties, reputation, and the goal of increasing benefit and decreasing harm in the world. This is more analogous to real-life situations, where multiple ethical obligations compete to determine the right action. GenEth is a machine ethics sandbox for Mac OS that is available to explore these kinds of systems, with details discussed in this conference paper.
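The competition among prima facie duties can be illustrated with a minimal sketch. The duty names, weights, and scores below are invented for illustration; they are not GenEth's actual representation, which is learned from ethicist-labeled cases rather than hand-coded.

```python
# Hypothetical prima facie duty weighing: each action is scored against
# several duties, and the action with the best weighted profile wins.
# Duties, weights, and scores are illustrative assumptions only.

DUTIES = {"minimize_harm": 0.5, "keep_promises": 0.3, "maximize_benefit": 0.2}

def evaluate(action_scores: dict[str, float]) -> float:
    """Weighted sum of per-duty satisfaction scores in [-1, 1]."""
    return sum(DUTIES[d] * action_scores.get(d, 0.0) for d in DUTIES)

def choose(actions: dict[str, dict[str, float]]) -> str:
    """Pick the action whose duty profile scores highest overall."""
    return max(actions, key=lambda a: evaluate(actions[a]))

actions = {
    # Telling the truth keeps a promise but causes mild harm.
    "tell_truth":  {"minimize_harm": -0.2, "keep_promises": 1.0, "maximize_benefit": 0.1},
    # Staying silent avoids harm but breaks the promise.
    "stay_silent": {"minimize_harm":  0.6, "keep_promises": -1.0, "maximize_benefit": 0.0},
}
best = choose(actions)  # "tell_truth" under these weights
```

Note that shifting the weights changes the verdict, which is precisely why the duties are "prima facie": none is absolute, and the resolution depends on the situation.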

Beyond the flexible application of different ethics systems, there could also be integrated ethics systems. As in philosophy, computational ethics modules connote the idea of metaethics: a means of evaluating and integrating multiple ethical frameworks. These computational frameworks differ by ethical parameters and machine type; for example, an integrated system is needed to enable a connected car to interface with a smart highway. The French ETHICAA (Ethics and Autonomous Agents) project seeks to develop embedded and integrated metaethics systems.

An ongoing debate is whether machine ethics should be implemented as separate modules or as part of regular decision-making. Even though ethics might ultimately be best as a feature of any kind of decision-making, in these early stages of development ethics is easiest to implement as a standalone module. Another point is that ethics models may vary significantly by culture; consider, for example, collectivist versus individualist societies, and how those ideals might be captured in code-based computational ethics modules. Happily for implementation, however, the initial tier of required functionality might be easy to achieve: obtaining ethicist consensus on how, overall, we want robots to treat us as humans. Quality-assuring computational ethics modules and machine behavior might be accomplished through some sort of 'Ethical Turing Test': metaphorically, not literally, evaluating the degree to which machine responses match those of human ethicists.
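The core measurement in such an 'Ethical Turing Test' reduces to an agreement rate between machine verdicts and ethicist consensus verdicts over a bank of scenarios. The scenarios and labels below are invented for illustration.

```python
# Sketch of the evaluation behind a metaphorical 'Ethical Turing Test':
# how often do the machine's ethical judgments match a human ethicist panel?
# Verdict labels and scenario set are hypothetical.

def agreement_rate(machine: list[str], ethicists: list[str]) -> float:
    """Fraction of scenarios where machine and consensus verdicts coincide."""
    matches = sum(m == e for m, e in zip(machine, ethicists))
    return matches / len(ethicists)

ethicist_verdicts = ["permissible", "impermissible", "permissible", "impermissible"]
machine_verdicts  = ["permissible", "impermissible", "impermissible", "impermissible"]

score = agreement_rate(machine_verdicts, ethicist_verdicts)  # 0.75
```

A passing threshold, how to aggregate disagreeing ethicists into a consensus, and how to sample scenarios are all open design questions that the test's proponents would need to settle.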

Computational Ethics Systems: Enumerated, Evolved, or Corrigible
There are different approaches to computational ethics systems. Some involve the attempted enumeration of all involved principles and processes, reminiscent of Cyc. Others attempt to evolve ethical behavioral systems like the prima facie duty approach, possibly by running machine learning algorithms over large data corpora. Still others attempt to instill values-based thinking through ideas like corrigibility. Corrigibility is the idea of building AI agents that reason as if they are incomplete and potentially flawed in dangerous ways. Because the AI agent apprehends that it is incomplete, it is encouraged to maintain a collaborative rather than deceptive relationship with its programmers, since the programmers may be able to provide more complete information, even while the two parties maintain different ethics systems. Thus a highly advanced AI agent might be built that is open to online value learning, modification, correction, and ongoing interaction with humans. Corrigibility is proposed as a reasoning-based alternative to enumerated and evolved computational ethics systems, and also as an important 'escape velocity' project. Escape velocity here refers to bridging the competence gap between the current situation, in which human moral concepts are not yet reliably instantiated in AI systems, and a potential future of true moral superintelligences indispensably orchestrating many complex societal activities.
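The behavioral core of corrigibility, an agent that treats its own value model as provisional and accepts correction rather than resisting it, can be caricatured in a few lines. This is an entirely hypothetical toy, not a proposal from the corrigibility literature, which concerns far harder problems such as preserving this disposition under self-modification.

```python
# Toy illustration of corrigible behavior: the agent's value model is
# explicitly provisional, and programmer corrections are incorporated
# rather than resisted. All names and weights are invented.

class CorrigibleAgent:
    def __init__(self, values: dict[str, float]):
        self.values = values    # current, possibly flawed value model
        self.provisional = True # agent assumes its values may be wrong

    def accept_correction(self, updates: dict[str, float]) -> None:
        """Incorporate programmer feedback instead of defending old values."""
        self.values.update(updates)

    def act(self, options: dict[str, str]) -> str:
        """Choose the option whose outcome the current model values most."""
        return max(options, key=lambda o: self.values.get(options[o], 0.0))

agent = CorrigibleAgent({"help_human": 1.0, "self_preserve": 0.9})
agent.accept_correction({"self_preserve": 0.1})  # programmers revise a weight
choice = agent.act({"assist": "help_human", "resist_shutdown": "self_preserve"})
```

The hard part, which this sketch ignores, is ensuring the agent does not route around `accept_correction` once it becomes capable enough to anticipate corrections it disprefers.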

Lethal Autonomous Weapons
Machine cognition features prominently in lethal autonomous weapons, where weapon systems are increasingly autonomous, making their own decisions in target selection and engagement without human input. Whether to ban autonomous weapons systems is currently under debate. On one side, opponents argue that full autonomy goes too far: such weapons no longer satisfy the positive obligation of 'meaningful human control,' and do not comply with the Martens Clause of the Geneva Conventions, which requires that weapons comply with the principles of humanity and the dictates of public conscience. On the other side, supporters argue that machine morality might exceed human morality and be applied more accurately and precisely. Ethically, it is not clear whether weapons systems should be considered differently than other machine systems. For example, the Nationwide Kidney Exchange automatically allocates two transplant kidneys per week, and the lack of human involvement there has been seen positively, as a response to the agency problem.

Future of Work and Leisure
The automation economy is one of the great promises of machine cognition: humans become able to offload more and more physical tasks, and also cognitive activities, to AI systems. The Keynesian prediction of a leisure society by 2030, the idea that leisure time rather than work will characterize national lifestyles, is becoming more plausible. However, several thinkers are raising the need to redefine what is meant by work. The automation economy, possibly coupled with Guaranteed Basic Income initiatives and an anti-scarcity mindset, could render obligation-based labor a thing of the past. There is ample room for redefining 'work' as productive activity that is meaningful to one's sense of identity and self-worth, serving fulfillment, self-actualization, social belonging, status-garnering, mate-seeking, cooperation, collaboration, and other needs. The 'end of work' might just mean the 'end of obligated work.'

Persuasion and Multispecies Sensibility
As humans, we still mostly conceive of and employ the three modes of persuasion outlined centuries ago by Aristotle: ethos, relying on the speaker's qualities like charisma; pathos, using emotion or passion to cast the audience into a certain frame of mind; and logos, employing the words of the oration as the argument. However, human-machine interaction might cause these modes of human-related persuasion to be rethought and expanded, in both the human and machine contexts. Given that machine value systems and character may differ from ours, so too might the most effective persuasion systems, both those employed on machines and those deployed by them. The ethics of human-machine persuasion is an area of open debate. For example, researchers are undecided on questions such as "Is it morally acceptable for a system to lie to persuade a human?" There is a rising need to consider ethics and reality issues from a thinking machine's point of view, in an overall future world system that might comprise multiple post-biological and other intelligent entities interacting together in digital societies.
