The development of artificial intelligence technology is reshaping society and bringing about new changes in social governance.

Recently, China’s National Governance Committee for the New Generation Artificial Intelligence issued the “Governance Principles for the New Generation Artificial Intelligence,” subtitled “Developing Responsible Artificial Intelligence.”

Some scholars are also concerned with how to establish a sound governance system to guide AI’s development.

Zhang Wen, director of the Institute of Fiscal and Financial Research at the Shandong Academy of Social Sciences, said that AI can provide strong technical support for advancing the modernization and intelligence of social governance. It can promote sound decision-making and improve the efficiency of services and the supervision system.

AI is both a tool for and an object of social governance, said Zhang Chenggang, executive director of the Institute for Social Governance and Development at Tsinghua University. The development and widespread application of AI technology will raise new challenges for social governance as well as legal and ethical questions.

AI embodies dual attributes, technological and social, and the tensions between them will become increasingly prominent, said Liu Gang, chief economist at the Chinese Institute of New Generation Artificial Intelligence Development Strategies.

In addition to employment, data privacy and ethical issues, algorithmic discrimination will also become a concern, Liu warned.

“The AI algorithm is a machine learning system based on existing data, but it may contain discrimination, since the existing data is the result of human behavior. As such, the bases for algorithmic decisions also contain bias,” Liu said, warning that a platform with a monopoly on data and information could hide information from consumers and customers and practice price discrimination against them. In this regard, researchers should conduct forward-looking studies.
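The mechanism Liu describes can be illustrated with a minimal sketch. The data below is entirely hypothetical: two groups of equally qualified applicants, where past human decision-makers hired one group far more often. A naive model fitted to that record simply reproduces the historical bias.

```python
# Minimal sketch with hypothetical data: an algorithm trained on biased
# historical records reproduces that bias.
# Each record is (group, qualified, hired) -- the outcome of past human decisions.
historical = (
    [("A", True, True)] * 90 + [("A", True, False)] * 10   # group A: 90% hired
    + [("B", True, True)] * 40 + [("B", True, False)] * 60  # group B: 40% hired
)

def hire_rate(records, group):
    """Fraction of applicants from `group` who were hired in the data."""
    sub = [r for r in records if r[0] == group]
    return sum(1 for r in sub if r[2]) / len(sub)

def naive_model(records, group):
    """Predict 'hire' when the group's historical hire rate exceeds 50%."""
    return hire_rate(records, group) > 0.5

# Both groups are equally qualified, yet the learned rule favors group A:
print(naive_model(historical, "A"))  # True
print(naive_model(historical, "B"))  # False
```

The bias here comes entirely from the training data, not from any malicious rule in the code, which is precisely why Liu argues the discrimination can be hard to detect after the fact.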

Zhang Chenggang said that the “Governance Principles” lacks a model for public participation. He claimed that it is necessary to raise people’s awareness of AI security and enhance their participation in AI risk management.

To establish a sound AI governance system, Zhang Wen said that a rational perception of AI is essential. Conformity with human values and ethics should be the premise for developing and applying AI. People should be proactive in designing the AI management system to prevent risks. It is also necessary to clarify the boundaries for the R&D and application of AI technology and to prevent the illegal use of AI.

Chen Fan, a professor from the Center for Philosophy of Science and Technology at Northeastern University, suggested carrying out technical evaluations of AI. While considering AI’s economic benefits, people should also consider its social and environmental consequences, Chen said. Such evaluations can be guided by moral and legal norms.

Although it is important to manage AI and prevent risks according to specific problems emerging in its development, we must avoid excessive intervention in order not to hinder the development of the AI technology industry, Liu said.

“You cannot give up eating for fear of choking. Norms are designed to advance AI in a better way, not to hinder technological progress,” Liu asserted.

Zhang Wen suggested promoting international cooperation in the field of AI. By strengthening exchanges and cooperation, countries can jointly build AI risk monitoring and management systems on a global scale, reduce potential risks, and ensure that AI is safe, reliable and controllable.

In-depth cooperation and coordination between different countries and regions also faces challenges, Chen said, adding that it is necessary to rethink the global governance of AI from the perspective of globalization.