6. Principle of privacy

Publisher: Ministry of Internal Affairs and Communications (MIC), the Government of Japan

Developers should take into consideration that AI systems do not infringe the privacy of users or third parties.
[Comment]
The privacy referred to in this principle includes spatial privacy (peace of personal life), information privacy (personal data), and secrecy of communications. Developers should consider international guidelines on privacy, such as the “OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,” as well as the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning and other methods:
● To make efforts to evaluate the risks of privacy infringement and conduct privacy impact assessment in advance.
● To make efforts to take the measures necessary to avoid infringement of privacy at the time of utilization, to the extent possible in light of the characteristics of the technologies adopted, throughout the process of development of the AI systems (“privacy by design”).

Related Principles

In an age of ubiquitous and massive collection of data through digital communication technologies, the right to protection of personal information and the right to respect for privacy are crucially challenged. Both physical AI robots, as part of the Internet of Things, and AI softbots that operate via the World Wide Web must comply with data protection regulations and must not collect and spread data, or be run on sets of data, for whose use and dissemination no informed consent has been given.
‘Autonomous’ systems must not interfere with the right to private life which comprises the right to be free from technologies that influence personal development and opinions, the right to establish and develop relationships with other human beings, and the right to be free from surveillance. Also in this regard, exact criteria should be defined and mechanisms established that ensure ethical development and ethically correct application of ‘autonomous’ systems.
In light of concerns with regard to the implications of ‘autonomous’ systems on private life and privacy, consideration may be given to the ongoing debate about the introduction of two new rights: the right to meaningful human contact and the right to not be profiled, measured, analysed, coached or nudged.

Privacy and data protection must be guaranteed at all stages of the life cycle of the AI system. This includes all data provided by the user, but also all information generated about the user over the course of his or her interactions with the AI system (e.g. outputs that the AI system generated for specific users, how users responded to particular recommendations, etc.). Digital records of human behaviour can reveal highly sensitive data, not only in terms of preferences, but also regarding sexual orientation, age, gender, and religious and political views. The person in control of such information could use this to his or her advantage. Organisations must be mindful of how data is used and might impact users, and ensure full compliance with the GDPR as well as other applicable regulations dealing with privacy and data protection.

Published by: Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles

Developers should pay attention to the verifiability of the inputs and outputs of AI systems and the explainability of their judgments.
[Comment]
The AI systems subject to this principle are those that might affect the life, body, freedom, privacy, or property of users or third parties.
It is desirable that developers pay attention to the verifiability of the inputs and outputs of AI systems, as well as the explainability of the judgments of AI systems, within a reasonable scope in light of the characteristics of the technologies adopted and their use, so as to obtain the understanding and trust of society, including users of AI systems.
[Note]
Note that this principle is not intended to ask developers to disclose algorithms, source codes, or learning data. In interpreting this principle, consideration to privacy and trade secrets is also required.

Published by: Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles

Developers should pay attention to the security of AI systems.
[Comment]
In addition to respecting international guidelines on security, such as the “OECD Guidelines for the Security of Information Systems and Networks,” it is encouraged that developers pay attention to the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning or other methods:
● To pay attention, as necessary, to the reliability (that is, whether the operations are performed as intended and not steered by unauthorized third parties) and robustness (that is, tolerance to physical attacks and accidents) of AI systems, in addition to: (a) confidentiality; (b) integrity; and (c) availability of information that are usually required for ensuring the information security of AI systems.
● To make efforts to conduct verification and validation in advance in order to assess and control the risks related to the security of AI systems.
● To make efforts to take measures to maintain the security to the extent possible in light of the characteristics of the technologies to be adopted throughout the process of the development of AI systems (“security by design”).

Users and data providers should take care that the utilization of AI systems or AI services does not infringe on the privacy of users or others.
[Main points to discuss]
A) Respect for the privacy of others
With consideration of social contexts and the reasonable expectations of people, users may be expected to respect the privacy of others in the utilization of AI.
In addition, users may be expected to consider, in advance, measures to be taken against privacy infringement caused by AI.
B) Respect for the privacy of others in the collection, analysis, provision, etc. of personal data
Users and data providers may be expected to respect the privacy of others in the collection, analysis, provision, etc. of personal data used for learning or other methods of AI.
C) Consideration for the privacy, etc. of the subjects of profiling that uses AI
In the case of profiling using AI in fields where the judgments of AI might have significant influence on individual rights and interests, such as personnel evaluation, recruitment, and financing, AI service providers and business users may be expected to pay due consideration to the privacy, etc. of the subjects of profiling.
D) Attention to the infringement of the privacy of users or others
Consumer users may be expected to take care not to carelessly give highly confidential information (including information on others as well as information on users themselves) to AI, for example by excessively empathizing with AI such as pet robots.
E) Prevention of personal data leakage
AI service providers, business users, and data providers may be expected to take appropriate measures so that personal data is not provided to third parties, through the judgments of AI, without the consent of the person concerned.