8. Principle of user assistance

Publisher: Ministry of Internal Affairs and Communications (MIC), the Government of Japan

Developers should take into consideration that AI systems will support users and give them opportunities for choice in an appropriate manner.
[Comment]
In order to support users of AI systems, it is recommended that developers pay attention to the following:
● To make efforts to provide easy-to-use interfaces that supply, in a timely and appropriate manner, information that can support users’ decisions.
● To make efforts to provide functions that give users opportunities for choice in a timely and appropriate manner (e.g., default settings, easy-to-understand options, feedback, emergency warnings, error handling, etc.).
And
● To make efforts to take measures, such as universal design, to make AI systems easier to use for socially vulnerable people.
In addition, it is recommended that developers make efforts to provide users with appropriate information, considering the possibility that the outputs or programs of AI systems may change as a result of learning or other methods.

Related Principles

Published by: Google
Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:
● Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use.
● Nature and uniqueness: whether we are making available technology that is unique or more generally available.
● Scale: whether the use of this technology will have significant impact.
● Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions.

Trustworthy AI requires that algorithms be secure, reliable, and robust enough to deal with errors or inconsistencies during the design, development, execution, deployment, and use phases of the AI system, and to cope adequately with erroneous outcomes.
Reliability & Reproducibility. Trustworthiness requires that the accuracy of results can be confirmed and reproduced by independent evaluation. However, the complexity, non-determinism, and opacity of many AI systems, together with their sensitivity to training and model-building conditions, can make it difficult to reproduce results. There is currently an increased awareness within the AI research community that reproducibility is a critical requirement in the field. Reproducibility is essential to guarantee that results are consistent across different situations, computational frameworks, and input data. A lack of reproducibility can also lead to unintended discrimination in AI decisions.
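The reproducibility requirement above can be illustrated with a minimal sketch (plain Python; the toy "training" routine and its names are illustrative assumptions, not anything prescribed by the guidelines): a stochastic procedure cannot be confirmed by independent evaluation unless its sources of randomness are recorded and pinned.

```python
import random

def train_toy_model(seed=None):
    """Toy stand-in for a stochastic training run: the 'model' is just
    the sum of randomly initialised weights, so its value depends
    entirely on the random number generator's state."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1.0, 1.0) for _ in range(10)]
    return sum(weights)

# Without a fixed seed, two runs generally disagree: an independent
# evaluator cannot reproduce the result.
a, b = train_toy_model(), train_toy_model()

# With a fixed, recorded seed, the run can be reproduced exactly.
c, d = train_toy_model(seed=42), train_toy_model(seed=42)
assert c == d
```

Real systems have further sources of irreproducibility (hardware parallelism, library versions, data ordering) that pinning a seed alone does not address, which is part of why the text calls reproducibility difficult.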
Accuracy. Accuracy pertains to an AI’s confidence and ability to classify information into the correct categories, or its ability to make correct predictions, recommendations, or decisions based on data or models. An explicit and well-formed development and evaluation process can support, mitigate, and correct unintended risks.
Resilience to Attack. AI systems, like all software systems, can include vulnerabilities that allow them to be exploited by adversaries. Hacking is an important case of intentional harm, by which the system purposefully follows a different course of action than its original purpose. If an AI system is attacked, the data as well as the system’s behaviour can be changed, leading the system to make different decisions or causing it to shut down altogether. Systems and/or data can also become corrupted, by malicious intention or by exposure to unexpected situations. Poor governance, through which it becomes possible to intentionally or unintentionally tamper with the data, or to grant unauthorised entities access to the algorithms, can also result in discrimination, erroneous decisions, or even physical harm.
Fall-back plan. A secure AI has safeguards that enable a fall-back plan in case of problems with the AI system. In some cases this means that the AI system switches from a statistical to a rule-based procedure; in other cases it means that the system asks for a human operator before continuing the action.
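The fall-back architecture described above can be sketched as follows (all function names, thresholds, and decision labels are illustrative assumptions, not part of the requirement): the statistical path is used when the model is confident, a conservative rule-based procedure otherwise, and a human operator is asked when even the rules do not apply.

```python
def statistical_model(x):
    """Hypothetical model: returns (decision, confidence)."""
    return ("approve", 0.95) if x >= 0 else ("reject", 0.40)

def rule_based_procedure(x):
    """Conservative hand-written rules used as a safe fall-back."""
    if x < -100:
        return "reject"
    if x > 100:
        return "approve"
    return None  # rules do not cover this case

def decide(x, confidence_threshold=0.8):
    decision, confidence = statistical_model(x)
    if confidence >= confidence_threshold:
        return decision                      # normal statistical path
    fallback = rule_based_procedure(x)
    if fallback is not None:
        return fallback                      # rule-based fall-back
    return "escalate_to_human_operator"      # ask a human before acting
```

The design choice here is that each layer is strictly more conservative than the one above it, so a failure of the statistical component degrades service rather than safety.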

Published by: Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles

Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.
[Comment]
The AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices.
It is encouraged that developers refer to relevant international standards and pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:
● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.
● To make efforts to implement measures throughout the development stage of AI systems, to the extent possible in light of the characteristics of the technologies adopted, that contribute to intrinsic safety (reduction of essential risk factors, such as the kinetic energy of actuators) and functional safety (mitigation of risks through additional control devices, such as automatic braking) when AI systems work with actuators or other devices.
And
● To make efforts to explain to stakeholders such as users the designers’ intent of AI systems and the reasons for it, when developing AI systems to be used for judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments that prioritize the life, body, or property to be protected at the time of an accident involving a robot equipped with AI).

Published by: Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles

Developers should make efforts to fulfill their accountability to stakeholders, including AI systems’ users.
[Comment]
Developers are expected to fulfill their accountability for the AI systems they have developed in order to gain users’ trust in AI systems.
Specifically, it is encouraged that developers make efforts to provide users with information that can help them choose and utilize AI systems. In addition, in order to improve the acceptance of AI systems by society, including users, it is also encouraged that, taking into account R&D principles (1) to (8) set forth in the Guidelines, developers make efforts: (a) to provide users and others with information and explanations about the technical characteristics of the AI systems they have developed; and (b) to gain the active involvement of stakeholders (such as their feedback), for example by hearing various views through dialogues with diverse stakeholders.
Moreover, it is advisable that developers make efforts to share information and cooperate with providers and others who offer services based on the AI systems they have developed.

Users should make efforts to utilize AI systems or AI services in a proper scope and manner, under the proper assignment of roles between humans and AI systems, or among users.
[Main points to discuss]
A) Utilization in the proper scope and manner
On the basis of the information and explanations provided by developers, etc., and with consideration of social contexts and circumstances, users may be expected to use AI in the proper scope and manner. In addition, users may be expected to recognize the benefits and risks, understand proper uses, and acquire the necessary knowledge and skills before using AI, according to the characteristics, usage situations, etc. of the AI. Furthermore, users may be expected to check regularly whether they are using AI in an appropriate scope and manner.
B) Proper balance of benefits and risks of AI
AI service providers and business users may be expected to consider the proper balance between the benefits and risks of AI, including the active use of AI to improve productivity and work efficiency, after appropriately assessing the risks of AI.
C) Updates of AI software and inspections, repairs, etc. of AI
Through the process of utilization, users may be expected to make efforts to update AI software and perform inspections, repairs, etc. of AI in order to improve the function of AI and to mitigate risks.
D) Human Intervention
Regarding judgments made by AI, in cases where human intervention is necessary and possible (e.g., medical care using AI), humans may be expected to decide whether to use the judgments of AI, how to use them, etc. In those cases, what can be considered as criteria for the necessity of human intervention?
In the utilization of AI that operates through actuators, etc., in cases where it is planned to shift to human operation under certain conditions, what matters should be paid attention to?
[Points of view as criteria (example)]
• The nature of the rights and interests of indirect users and others, and their intents, affected by the judgments of AI.
• The degree of reliability of the judgment of AI (compared with the reliability of human judgment).
• The time allowable for human judgment.
• The abilities users are expected to possess.
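The example criteria above could, in principle, be combined into an explicit gate that decides whether an AI judgment must be deferred to a human. The sketch below is purely illustrative: every parameter name and threshold is an assumption introduced here, not something the document specifies.

```python
def human_intervention_required(ai_confidence, human_reliability,
                                stakes_high, seconds_available,
                                min_decision_seconds=5.0):
    """Return True if the AI's judgment should be deferred to a human.

    ai_confidence / human_reliability: reliability of the AI judgment
        compared with human judgment (second criterion above).
    stakes_high: whether important rights or interests of indirect
        users and others are affected (first criterion).
    seconds_available: time allowable for human judgment (third
        criterion); if there is no time, intervention is infeasible.
    """
    if seconds_available < min_decision_seconds:
        return False  # no time for human judgment (e.g. emergency braking)
    if stakes_high and human_reliability >= ai_confidence:
        return True   # high stakes, and the human is at least as reliable
    return False
```

Note that the fourth criterion (users’ abilities) would in practice modulate `human_reliability`; it is folded into that single number here for brevity.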
E) Role assignments among users
With consideration of the capabilities and knowledge of AI that each user is expected to have, and the ease of implementing the necessary measures, users may be expected to play the roles that seem appropriate and to bear the corresponding responsibility.
F) Cooperation among stakeholders
Users and data providers may be expected to cooperate with stakeholders and to work on preventive or remedial measures (including information sharing, stopping and restoring AI, elucidating causes, and preventing recurrence) in accordance with the nature, conditions, etc. of damage caused by accidents, security breaches, privacy infringements, etc. that may occur, or have occurred, through the use of AI.
What can reasonably be expected from a user’s point of view to ensure the effectiveness of the above?