Session
Organizer 1: Civil Society, Asia-Pacific Group
Speaker 1: Charles Mok, Civil Society, Asia-Pacific Group
Speaker 2: Chilufya Theresa Mulenga, Technical Community, African Group
Speaker 3: Zunair Yasir, Private Sector, Asia-Pacific Group
Format
Roundtable
Duration (minutes): 60
Format description: A roundtable format enables diverse expert perspectives from cybersecurity, policy, AI ethics, and journalism, facilitating focused yet comprehensive dialogue. This format is optimal for balancing structured presentations with interactive Q&A sessions.
Policy Question(s)
A. What are the emerging trends in AI-generated misinformation, and how can we effectively detect and counteract them?
B. What policy and regulatory approaches can help mitigate the risks associated with deepfakes while balancing freedom of expression?
C. How can multi-stakeholder collaboration (including governments, tech platforms, and civil society) contribute to building digital resilience against AI-driven disinformation?
What will participants gain from attending this session? Participants will gain an understanding of current trends in AI-driven misinformation, effective strategies for its detection and mitigation, insights into policy and ethical considerations, and practical tools and resources to build digital resilience against misinformation.
SDGs
Description:
The rise of deepfakes and AI-generated misinformation is fundamentally reshaping the landscape of digital trust. From manipulated political speeches to synthetic media used in cyber fraud, these technologies are being exploited to spread disinformation at an unprecedented scale. As AI tools become more sophisticated and accessible, distinguishing real content from AI-generated fakes is becoming increasingly difficult, threatening public trust in digital communication, news media, and governance. This session will examine the risks, ethical dilemmas, and security challenges posed by AI-driven misinformation while exploring effective strategies to combat its spread.

To counteract these challenges, this discussion will bring together experts from cybersecurity, digital policy, AI ethics, and journalism to assess the current landscape of AI-generated misinformation and its global implications. The session will explore technological detection methods, policy frameworks, and digital literacy initiatives aimed at mitigating the risks associated with deepfakes. Additionally, this session will incorporate insights from the ISOC Online Safety Special Interest Group (SIG) to highlight best practices for ensuring online safety and resilience against AI-driven misinformation. By fostering collaboration between governments, technology platforms, and civil society, the session will outline actionable recommendations for strengthening digital trust while ensuring the responsible use of AI-driven media.
Expected Outcomes
Identification of best practices and policy recommendations to combat AI-generated misinformation.
Discussion on technological solutions for deepfake detection and verification mechanisms.
Strengthening digital literacy and public awareness strategies to enhance digital trust.
Collaborative strategies involving ISOC Online Safety SIG and other stakeholders to promote online safety in the face of AI-driven misinformation.
Hybrid Format: Interaction will be facilitated through structured Q&A segments, interactive polls, and real-time online feedback via virtual conferencing tools such as Zoom and Mentimeter. The online moderator will seamlessly integrate virtual audience participation, ensuring balanced representation of online and onsite attendees.