Session
Classroom
Duration (minutes): 90
Format description: A 90-minute open forum in a "classroom" setting is ideal for starting a conversation on an already adopted and available instrument (the Council of Europe Guidelines on the Responsible Implementation of AI Systems in Journalism) because it allows for structured yet open discussion. This format encourages focused interaction, providing ample time to introduce the topic, outline key points, and engage the audience. The classroom setting fosters a collaborative atmosphere, while the raise-of-hands method ensures that all participants have the opportunity to contribute and voice their opinions. This approach helps clarify doubts, stimulates deeper engagement, and promotes diverse perspectives, enabling a more comprehensive understanding and progression of the conversation. Ninety minutes will be sufficient to analyse topics that require reflection from both a substantive and a technical point of view.
This session will address the challenges arising from the use of Artificial Intelligence tools to generate and spread disinformation, and the distinctive threats these pose to democratic dialogue. The quality of public debate is threatened at various levels, ranging from false content spreading at a scale unlikely to be tackled by human intervention alone, to the propagation of false information by individuals who believe it to be true and share it in good faith.
Starting with a presentation of the Council of Europe Guidance Note on countering the spread of online mis- and disinformation through fact-checking and platform design solutions in a human rights-compliant manner, participants will discuss practical measures policymakers and stakeholders can take, such as support for fact-checking, platform-design solutions and user empowerment. The session will also examine the role and responsibilities of digital platforms in both the dissemination of false AI-generated information and the promotion of quality journalism.
Bringing together media professionals, AI experts, policymakers, and other stakeholders, the conversation will highlight AI's dual role: a potential vehicle for producing and distributing disinformation when misused, but also a tool for enhancing fact-based information and enabling a safe, inclusive and favourable online environment for participation in public debate.
Panellists will share their experiences, challenges, and strategies for combating AI-driven disinformation, as well as the efforts put in place to maintain trust in news production. The discussion will also address the challenges arising from the growing use of generative Artificial Intelligence systems, including technologies such as deepfakes and ChatGPT, highlighting the need for regular updates and careful vigilance in understanding disinformation.
To foster an interactive and inclusive dialogue, audience members — both onsite and online — will be encouraged to ask direct questions and actively engage in the discussion. A shared real-time document will capture key insights, recommendations, and collaborative solutions, ensuring diverse perspectives are reflected. This document will serve as a valuable resource for future dialogues and policy considerations.
The session will conclude with a summary of key takeaways, reinforcing the importance of assessing AI's potential benefits and risks in countering the spread of online mis- and disinformation.
Council of Europe
Giulia Lucchese, Co-Secretary CDMSI, Freedom of Expression and CDMSI division, Council of Europe
Evangelia Vasalou, Project officer, Division for Cooperation on Freedom of Expression, Council of Europe
Sophie Lecheler, Professor of Political Communication, Department of Communication, University of Vienna
Aedin Conboy, Government Relations and Public Policy Manager, TikTok
Ronan Fahy, Assistant Professor at the Institute for Information Law (IViR), University of Amsterdam
David Caswell, Strategic product leader in the application of AI into journalism, USA
Chine Labbé, Editor-in-Chief and Vice President of Partnerships at NewsGuard
16.10
Targets: Disinformation undermines trust in the media and threatens the reliability of the information that feeds public debate. Combating disinformation and ensuring a healthy, plural and reliable digital environment is crucial for upholding democratic values. Addressing the risks posed by AI, and expanding the benefits its use can bring in enhancing transparency, accuracy, and accountability in the information landscape, directly serves the UN goal of ensuring public access to information and protecting fundamental freedoms.
Practical guidance and recommendations to policymakers and stakeholders (including governments, regulators, industry, journalists, civil society, researchers, and users) on countering the dissemination of online disinformation through fact-checking and platform-design solutions, in a human rights-compliant manner and with due regard to user empowerment, are an important precondition for addressing the negative impacts of online disinformation.