Description: The creation, dissemination, and accumulation of information is one dimension of structural power. The vast majority of conflicts today are not fought by nation states and their armies; increasingly, they are fought not with conventional weapons but with words. A specific sort of weaponry—“fake news” and viral disinformation online—has been at the center of policy discussions, public debates, and academic analyses in recent years (Horowitz, 2019). The same digital platforms and technologies that enable connection and participation can be used to spread misinformation and fake news. In addition, what has been called the “emerging information arms race” (Posetti & Matthews, 2018, July 23) is plaguing mature and emerging democracies alike (Horowitz, 2019). A variety of approaches have been adopted in different regions and nations to fight misinformation and fake news: content intervention (fact-checking and filtering), technical intervention (dedicated anti-rumor platforms and algorithms), economic intervention (undermining advertising revenue), legal intervention (civil and criminal penalties), and others. Different stakeholders, including state actors, NGOs, platforms, and news media, are involved. However, it is important to ask:

How effective are those approaches? What are the shared policy principles, norms, and mechanisms across regions and nations?
What are the responsibilities of actors such as Internet platforms and government regulators?
What roles do technologies (e.g., algorithms and bots) and other factors play in the process?
What are the best practices in light of freedom of speech and the necessary neutrality and legal certainty for platforms?
How can we restore public trust in Internet platforms, news media, and politics? How can we hold these actors accountable for their interventions?

In this session, speakers from different regions and nations, including the UK, Finland, China, India, Africa, the Middle East, and Latin America, will discuss the above questions from diverse geographic and stakeholder perspectives.

Government Sector

Yang Xiaobo, Head of the International Department, China Cyberspace Security Association Secretariat, China

Expected Outcomes: 1) Facilitate debate and help shape the evolution of norms, principles, and best practices for refuting online disinformation and fake news, and for models of Internet governance; 2) Identify differing viewpoints on Internet governance approaches regarding AI to help create an environment in which all stakeholders are able to prosper and thrive; 3) Produce policy recommendations and a key-messages report for the IGF community; 4) Establish collaboration among speakers from different stakeholder sectors in countering fake news and disinformation and in related research.

Policy Question(s):

1. What are the reasons for the proliferation of misinformation and fake news in different countries and regions? Are the current challenges of misinformation and fake news, their manifestations and effects, including reactions to disinformation, similar across nations and regions?

2. Are the initiatives (policy, technical, capacity building, and others) taken so far by different stakeholders, especially intermediaries and governments, to curb the spread of misinformation and fake news globally, regionally, and within nations adequate?

3. Is it possible for government and private actors to moderate misinformation and fake news while ensuring users' freedom of expression and privacy? How can trust in, and accountability of, Internet platforms and government interventions be maintained or restored?

4. Are there best practices and approaches that may be adopted to counter misinformation spread through messaging and social media platforms, in light of freedom of speech and the necessary neutrality and legal certainty for those platforms?

5. What role can the multistakeholder process play, beyond Governments and intermediaries, and what role can technology (e.g., algorithms and bots) play in mitigating disinformation and fake news?

Relevance to Theme: The IGF community is considering the potential risks to the security and stability of the Internet, and how to achieve the safety and resilience of a healthy digital environment. The session will contribute to discussions of fake news, trust, accountability, and freedom of expression under the theme “security, safety, stability and resilience.” It will address these issues by examining the refutation of online disinformation and fake news from different stakeholders’ perspectives. Specifically, the workshop will discuss: 1) the responsibilities of Internet platforms and government regulators in fighting online fake news and misinformation; 2) the role of technology (such as AI and algorithms) and other actors in refuting fake news and misinformation; 3) how to hold Internet platforms and governments accountable; 4) how to restore public trust in Internet platforms, government, and the news media; and 5) how globally accepted standards and best practices can be developed. These topics make the panel directly relevant to the theme “security, safety, stability and resilience.”

Relevance to Internet Governance: The proposed session will discuss the timely issues of fake news and misinformation, information security and online safety, the responsibility and accountability of digital platforms, and the function of government regulation and trust in platforms and government within Internet governance. It will involve stakeholders from the private sector, civil society, and technical sectors in both developed regions (EU and US) and developing regions (China, India, the Middle East, Africa, Latin America) to share their professional knowledge, experiences, best practices, and policy frameworks for regulating misinformation and fake news. The proposed session will facilitate the global debate and help shape the evolution of norms, principles, and best practices for mitigating online disinformation and fake news, and for models of Internet governance.

Discussion Facilitation:

The onsite moderator will open the session by giving participants an overview of the policy questions to be discussed, the professional backgrounds of the speakers, and the format of interaction. In the second part, the session will move to discussion and debate. The moderator will invite each speaker to express their views on a set of questions and guide the debate among speakers and the audience to foreground their common ground and differences. The workshop organizers and moderators will discuss the questions with speakers in advance to ensure the quality and flow of the discussion and debate. The moderator will ensure that both onsite and online audience members are able to ask questions of the speakers immediately following each discussion, to encourage active participation. In the third part, the moderators will invite questions from the audience and online participants; this question time will last about 30 minutes to allow sufficient interaction among speakers, the audience, and online participants. Online participants will be given priority to speak, and their participation will be encouraged by the moderators. The onsite moderator will summarise the panel's findings, recommendations, and future actions.

Online Participation:

The online moderator will take part in the online training course for the Official Online Participation Platform provided by the IGF Secretariat's technical team, to ensure the online participation tool is used properly and smoothly during the proposed session.

The moderator will invite speakers to answer five policy questions prepared in advance. Each policy question will be discussed among speakers for 10 minutes, with 1 minute of immediate audience response.