Issue: Lack of value-based ethical culture and practices for industry.

Issue: Lack of values-aware leadership.

Issue: Lack of empowerment to raise ethical concerns.

Issue: Organizations should examine their cultures to determine how to flexibly implement value-based design.

Issue: Lack of ownership or responsibility from the tech community.

Issue: Need to include stakeholders for adequate ethical perspective on A/IS.

Section 3 — Research Ethics for Development and Testing of A/IS Technologies

Issue: Institutional ethics committees are under-resourced to address the ethics of R&D in the A/IS fields.

Section 4 — Lack of Transparency

Issue: Poor documentation hinders ethical design.

Issue: Inconsistent or lacking oversight for algorithms.

Issue: Lack of an independent review organization.

Issue: Use of black-box components.

Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)

Section 1 — Technical

Issue: As A/IS become more capable, as measured by the ability to perform with greater autonomy across a wider variety of domains, unanticipated or unintended behavior becomes increasingly dangerous.

Issue: Designing for safety may be much more difficult later in the design lifecycle rather than earlier.

Section 2 — General Principles

Issue: Researchers and developers will confront a progressively more complex set of ethical and technical safety issues in the development and deployment of increasingly capable A/IS.

Issue: Future A/IS may have the capacity to impact the world on a scale not seen since the Industrial Revolution.

Personal Data and Individual Access Control

Section 1 — Digital Personas

Issue: Individuals do not understand that their digital personas and identity function differently than they do in real life. This is a concern when individuals cannot access their own personal data, and future iterations of their personas or identity are controlled not by them but by the creators of the A/IS they use.

Issue: How can an individual define and organize his/her personal data and identity in the algorithmic era?

Section 2 — Regional Jurisdiction

Issue: Country-wide, regional, or local legislation may contradict an individual’s values or access and control of their personal data.

Section 3 — Agency and Control

Issue: To understand the role of agency and control within A/IS, it is critical to have a definition and scope of personally identifiable information (PII).

Issue: What is the definition of control regarding personal data, and how can it be meaningfully expressed?

Section 4 — Transparency and Access

Issue: It is often difficult for users to determine what information a service provider or A/IS application collects about them at the time of such aggregation/collection (at the time of installation, during usage, even when not in use, after deletion). It is difficult for users to correct, amend, or manage this information.

Issue: How do we create privacy impact assessments related to A/IS?

Issue: How can AI interact with government authorities to facilitate law enforcement and intelligence collection while respecting rule of law and transparency for users?

Section 5 — Symmetry and Consent

Issue: Could a person have a personalized privacy AI or algorithmic agent or guardian?

Issue: Consent is vital to information exchange and innovation in the algorithmic age. How can we redefine consent regarding personal data so it respects individual autonomy and dignity?

Issue: Data that is shared easily or haphazardly via A/IS can be used to make inferences that an individual may not wish to share.

Issue: Many A/IS will collect data from individuals with whom they have no direct relationship, or with whom they are not directly interacting. How can meaningful consent be provided in these situations?

Issue: How do we make better user experiences and consent education available to consumers as standard means of expressing meaningful consent?

Issue: In most corporate settings, employees do not have clear consent on how their personal information (including health and other data) is used by employers. Given the power differential between employees and employers, this is an area in need of clear best practices.

Issue: People may be losing their ability to understand what kinds of processing are done by A/IS on their private data, and thus may be becoming unable to meaningfully consent to online terms. The elderly and mentally impaired adults are particularly vulnerable in terms of consent, with consequences for data privacy.

Issue 2: The addition of automated targeting and firing functions to an existing weapon system, or the integration of components with such functionality, or system upgrades that impact targeting and automated weapon release should be considered for review under Article 36 of Additional Protocol I of the Geneva Conventions.

Issue 3: Engineering work should conform to individual and professional organization codes of ethics and conduct. However, existing codes of ethics may fail to properly address ethical responsibility for autonomous systems, or clarify ethical obligations of engineers with respect to AWS. Professional organizations should undertake reviews and possible revisions or extensions of their codes of ethics with respect to AWS.

Issue 4: The development of AWS by states is likely to cause geopolitical instability and could lead to arms races.

Issue 5: The automated reactions of an AWS could result in the initiation or escalation of conflicts outside of decisions by political and military leadership. AWS that engage with other AWS could escalate a conflict rapidly, before humans are able to intervene.

Issue 6: There are multiple ways in which accountability for the actions of AWS can be compromised.

Issue 7: AWS offer the potential for severe human rights abuses. Exclusion of human oversight from the battlespace can too easily lead to inadvertent violation of human rights. AWS could be used for deliberate violations of human rights.

Issue 8: AWS could be used for covert, obfuscated, and non-attributable attacks.

Issue 9: The development of AWS will lead to a complex and troubling landscape of proliferation and abuse.

Issue 10: AWS could be deployed by domestic police forces and threaten lives and safety. AWS could also be deployed for private security. Such AWS may have very different design and safety requirements than military AWS.

Issue 11: An automated weapons system might not be predictable (depending upon its design and operational use). Learning systems compound the problem of predictability.

Economics and Humanitarian Issues

Section 1 — Economics

Issue: A/IS should contribute to achieving the UN Sustainable Development Goals.

Issue: It is unclear how developing nations can best implement A/IS via existing resources.

Issue: The complexities of employment are being neglected in discussions of A/IS.

Issue: International, national, and local governments are using A/IS. How can we ensure the A/IS that governments employ do not infringe on citizens’ rights?

Section 3 — Legal Accountability for Harm Caused by A/IS

Issue: How can A/IS be designed to guarantee legal accountability for harms caused by these systems?

Section 4 — Transparency, Accountability, and Verifiability in A/IS

Issue: How can we improve accountability and verifiability in autonomous and intelligent systems?

Affective Computing

Systems Across Cultures

Issue: Should affective systems interact using the norms appropriate for verbal and nonverbal communication consistent with the societal norms where they are located?

Issue: Long-term interaction with affective artifacts lacking cultural sensitivity could alter the way people interact in society.

Issue: When affective systems are introduced across cultures, they could negatively affect the cultural, social, and religious values of the community where they are deployed.

When Systems Become Intimate

Issue: Are moral and ethical boundaries crossed when the design of affective systems allows them to develop intimate relationships with their users?

Issue: Can and should a ban or strict regulations be placed on the development of sex robots for private use or in the sex industry?

System Manipulation/Nudging/Deception

Issue: Should affective systems be designed to nudge people for the user’s personal benefit and/or for the benefit of someone else?

Issue: Governmental entities often use nudging strategies, for example to promote the performance of charitable acts. But the practice of nudging for the benefit of society, including through the use of affective systems, raises a range of ethical concerns.

Issue: A nudging system that does not fully understand the context in which it is operating may lead to unintended consequences.

Issue: When, if ever, and under which circumstances is deception performed by affective systems acceptable?

Systems Supporting Human Potential (Flourishing)

Issue: Extensive use of artificial intelligence in society may make our organizations more brittle by reducing human autonomy within organizations, and by replacing creative, affective, empathetic components of management chains.

Issue: The increased access to personal information about other members of our society, facilitated by artificial intelligence, may alter the human affective experience fundamentally, potentially leading to a severe and possibly rapid loss in individual autonomy.

Mixed Reality in Information and Communications

Section 1 — Social Interactions

Issue: Within the realm of A/IS-enhanced mixed reality, how can we evolve, harness, and not eradicate the positive effects of serendipity?

Issue: What happens to cultural institutions in a mixed reality, AI-enabled world of illusion, where geography is largely eliminated, tribe-like entities and identities could spring up spontaneously, and the notion of identity morphs from physical certainty to virtuality?

Issue: With alternative realities at reach, we will have alternative ways of behaving individually and collectively, and perceiving ourselves and the world around us. These new orientations regarding reality could enhance an already observed tendency toward social reclusiveness that detaches many from our common reality. Could such a situation lead to an individual opting out of “societal engagements”?

Issue: The way we experience (and define) physical reality on a daily basis will soon change.

Issue: We may never have to say goodbye to those who have graduated to a newer dimension (i.e., death).

Issue: Mixed reality changes the way we interact with society and can also lead to complete disengagement.

Issue: A/IS, artificial consciousness, and augmented/mixed reality have the potential to create a parallel set of social norms.

Issue: An MR/A/IS environment could fail to take into account the neurodiversity of the population.

Section 2 — Mental Health

Issue: How can AI-enhanced mixed reality explore the connections between the physical and the psychological, the body and mind, for therapeutic and other purposes? What are the risks when an AI-based mixed-reality system presents stimuli that a user can interact with in an embodied, experiential activity? Can such MR experiences influence and/or control the senses or the mind in a fashion that is detrimental and enduring? What are the short- and long-term effects and implications of giving over one’s senses to software? Moreover, what are the implications for the ethical development and use of MR applications designed for mental health assessment and treatment, in view of the potential potency of this media format compared to traditional methodologies?

Issue: Mixed reality creates opportunities for generated experiences and high levels of user control that may lead certain individuals to choose virtual life over the physical world. What are the clinical implications?

Section 3 — Education and Training

Issue: How can we protect worker rights and mental well-being with the onset of automation-oriented, immersive systems?

Issue: AR/VR/MR in training/operations can be an effective learning tool, but will alter workplace relationships and the nature of work in general.

Issue: How can we keep the safety and development of children and minors in mind?

Issue: Mixed reality will usher in a new phase of specialized job automation.

Issue: A combination of mixed reality and A/IS will inevitably replace many current jobs. How will governments adapt policy, and how will society change both expectations and the nature of education and training?

Section 4 — The Arts

Issue: There is the possibility that commercial actors could create pervasive AR/VR environments that will be prioritized in users’ eyes/vision/experience.

Issue: There is the possibility that AR/VR realities could copy/emulate/hijack creative authorship and intellectual and creative property with regard to both human and/or AI-created works.

Section 5 — Privacy Access and Control

Issue: Data collection and control issues within mixed realities combined with A/IS present multiple ethical and legal challenges that ought to be addressed before these realities pervade society.

Issue: Like other emerging technologies, AR/VR will force society to rethink notions of privacy in public and may require new laws or regulations regarding data ownership in these environments.

Issue: Users of AI-informed mixed-reality systems need to understand the known effects and consequences of using those systems in order to trust them.

Well-being

Section 1 — An Introduction to Well-being Metrics

Issue: There is ample and robust science behind well-being metrics and their use by international and national institutions, yet many people in the A/IS field and corporate communities are unaware that well-being metrics exist, or which entities are using them.

Issue: Many people in the A/IS field and corporate communities are not aware of the value well-being metrics offer.

Issue: By leveraging existing work in computational sustainability or using existing indicators to model unintended consequences of specific systems or applications, well-being could be better understood and increased by the A/IS community and society at large.

Issue: Well-being indicators provide an opportunity for modeling scenarios and impacts that could improve the ability of A/IS to frame specific societal benefits for their use.

Section 3 — Adaptation of Well-being Metrics for A/IS

Issue: How can creators of A/IS incorporate measures of well-being into their systems?

Issue: A/IS technologies designed to replicate human tasks, behavior, or emotion have the potential to either increase or decrease well-being.

Issue: Human rights law is sometimes conflated with human well-being, leading to a concern that a focus on human well-being will lead to a situation that minimizes the protection of inalienable human rights, or lowers the standard of existing legal human rights guidelines for non-state actors.

Issue: A/IS represents opportunities for stewardship and restoration of natural systems and securing access to nature for humans, but could be used instead to distract attention and divert innovation until the planetary ecological condition is beyond repair.

Issue: The well-being impacts of A/IS applied to human genomes are not well understood. (“There is an urgent need to concurrently discuss how the convergence of A/IS and genomic data interpretation will challenge the purpose and content of relevant legislation that preserve well-being ...” S. 263)