
IGF 2024 WS #31 Cybersecurity in AI: balancing innovation and risks

    Organizer 1: Igor Kumagin, Kaspersky
    Organizer 2: Yuliya Shlychkova, Kaspersky
    Organizer 3: Jochen Michels, Kaspersky
    Organizer 4: Fonarev Dmitry, Kaspersky

    Speaker 1: Sergio Mayo Macias, Technical Community, Western European and Others Group (WEOG)
    Speaker 2: Melodena Stephens, Technical Community, Asia-Pacific Group
    Speaker 3: A Wylde, Technical Community, Western European and Others Group (WEOG)
    Speaker 4: Yuliya Shlychkova, Vice President, Public Affairs, Kaspersky

    Moderator

    Gladys Yiadom, Private Sector, Intergovernmental Organization

    Online Moderator

    Jochen Michels, Private Sector, Western European and Others Group (WEOG)

    Rapporteur

    Fonarev Dmitry, Private Sector, Eastern European Group

    Format

    Theater
    Duration (minutes): 90
    Format description: The session will combine a panel discussion with a round table, lasting approximately 90 minutes. Great emphasis will be placed on discussion with participants, both onsite and online. In addition, short surveys will be included to further engage participants and obtain feedback on individual questions.

    Policy Question(s)

    A. What are the essential cybersecurity requirements that must be considered while developing and applying AI systems, and how can we ensure that AI is inherently secure by design?
    B. What are the roles and responsibilities of various stakeholders engaged in AI system development and use?
    C. How can we engage in a permanent dialogue and maintain an exchange on this issue?

    What will participants gain from attending this session? The goal of the discussion is to identify core principles of cybersecurity-by-design for the development of AI. These principles can serve as a basis for further technical governance models.

    In preparation for the workshop, Kaspersky has developed "Guidelines for Secure Development and Deployment of AI Systems". This paper has benefited from the contributions of all the speakers at the workshop and will be discussed during the session. The document is available here: https://kas.pr/1yt9

    Description:

    The technological landscape has recently witnessed the emergence of AI-enabled systems at an unprecedented scale. However, nascent technologies go hand-in-hand with new cybersecurity risks and attack vectors. The concept of security in the development of AI systems has been thrust to the forefront of various regulatory initiatives, such as the EU AI Act or the Singapore Model AI Governance Framework for Generative AI, to minimize the associated cyber-risks. Despite these regulatory strides, a gap between the general frameworks and their practical implementation at a more technical level remains.

    In the forthcoming multi-stakeholder discussion, we seek to explore which fundamental cybersecurity requirements should be considered in the implementation of AI systems, and how policymakers, industry, academia, and civil society can contribute to the development of new standards.

    Our initial thoughts are:
    (1) AI systems must undergo thorough security risk assessments. This involves evaluating the entire architecture of an AI system and its components to identify potential weaknesses and threats, ensuring that the system's design and implementation mitigate these risks.
    (2) Cybersecurity for AI systems should not be an afterthought but integrated from the initial design phase and maintained throughout the system's lifecycle (cyber-immunity).
    (3) Cybersecurity measures must address the AI system as a whole, reflecting a holistic approach that ensures all of its parts are secure and resilient to multiple types of cyberthreats.
    (4) Cybersecurity measures must be continuously reviewed and improved so that they keep pace with new technological advancements and emerging cyberthreats.
    (5) An institutional process for sharing information about AI incidents should be established to ensure the industry is informed about the latest attacks and prepared to mitigate them.
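
    To illustrate how points (1) and (4) above might translate into engineering practice, the Python sketch below shows a minimal, recurring risk-assessment checklist over the components of an AI system. It is purely illustrative: the component names, checks, and structure are assumptions made for this example, not part of any standard or of the guidelines discussed in this workshop.

    # Illustrative only: a minimal, recurring risk-assessment checklist for an AI system.
    # Component names and checks are hypothetical examples, not a prescribed standard.
    from dataclasses import dataclass, field

    @dataclass
    class Component:
        name: str                                   # e.g. "training data store"
        checks: dict = field(default_factory=dict)  # check description -> passed (True/False)

    def assess(components):
        """Return findings for every failed check across the whole system (point 1)."""
        findings = []
        for comp in components:
            for check, passed in comp.checks.items():
                if not passed:
                    findings.append(f"{comp.name}: '{check}' not satisfied")
        return findings

    if __name__ == "__main__":
        system = [
            Component("training data store", {"access control reviewed": True,
                                              "poisoning screening in place": False}),
            Component("inference API", {"input validation": True, "rate limiting": True}),
        ]
        # Re-run on a schedule so the assessment keeps pace with new threats (point 4).
        for finding in assess(system):
            print("FINDING:", finding)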

    Expected Outcomes

    Following the session, an impulse paper titled “Balancing innovation and risk: fundamental security requirements for AI systems” summarizing the results of the discussion will be published and made available to the IGF community. The paper can also be sent to other stakeholders to gather additional feedback.

    Hybrid Format: The moderators will actively involve the participants in the discussion through short online surveys (1-2 questions) at the beginning and end of the session, as well as after the initial statements. The survey tool can be used by participants both online and onsite via their smartphones. This will generate additional personal involvement and increase interest in the hybrid session.
    During the ‘Roundtable’ discussion, all attendees are encouraged to contribute their ideas actively. Both onsite and online participants will have the same opportunities to get involved.
    Planned structure of the workshop:
    • Introduction by the moderator
    • Survey with 2 questions
    • Brief impulse statements by all speakers
    • Survey with 2 questions
    • Moderated discussion with the attendees onsite and online – Roundtable
    • Survey with 2 questions
    • Wrap-up

    Key Takeaways

    Cybersecurity standards for AI-specific threats, which are being actively developed in various jurisdictions, mostly cover the development of AI foundation models or the overall management of risks associated with AI. This has created a gap in AI-specific protection for organizations implementing applied AI systems based on existing models.

    The guidelines for secure development and deployment of AI systems presented and discussed during the workshop will be instrumental for organizations relying on third-party AI components to build their own solutions. The document, developed by Kaspersky in conjunction with leading academic experts, is available here: https://kas.pr/1yt9.

    Call to Action

    Organizations should implement rigorous security practices when developing, deploying and operating AI systems to mitigate associated risks, follow leading regulatory frameworks and advanced guidance as industry benchmarks, and establish an internal culture of security and accountability.

    Governments and international organizations should promote a responsible approach to the development and use of AI systems, facilitate the exchange of best practices among different stakeholders, and work towards the harmonization and interoperability of security standards and their implementation in critical industries.

    Session Report

    The pace of AI development has increased significantly worldwide in recent years, and a growing number of organizations have implemented AI and the Internet of Things (IoT) in their infrastructure, or have plans to adopt these technologies in the short term. However, for all the positive impacts that the integration of AI brings, it is accompanied by significant risks and therefore requires robust security standards to mitigate them. The challenge of balancing innovation and risk in the design, deployment and operation of AI systems was discussed during an IGF workshop held on December 18, 2024.

    The session began with a debate on trust in AI. Allison Wylde, team member of the UN Internet Governance Forum Policy Network on AI, emphasized that trust in AI is subjective and mostly depends on cultural and individual factors. She stressed the importance of defining and better understanding this concept to ensure proper transparency and reliability, and advocated for more quantified and measurable indicators. Furthermore, Allison Wylde pointed out that it is imperative to adopt a zero-trust approach with respect to AI systems, highlighting the need for continuous verification of both models and their data before deployment.  

    Yuliya Shlychkova, Vice President for Public Affairs at Kaspersky, provided a brief overview of the current cyberthreat landscape in relation to AI, which, like any software, is not completely immune to attack. Notably, AI is increasingly being used by cybercriminals to automate their intrusions. In addition, AI systems can be exploited through data poisoning, prompt manipulation, or backdoors. She noted that cybersecurity in organizations is particularly important, as many employees unknowingly expose sensitive information when using AI models.        

    Sergio Mayo Macías, Innovation Programmes Manager at the Technological Institute of Aragon (ITA), Spain, reflected on the challenges of relying on datasets to train AI models. Vulnerabilities such as poor data quality or data bias, where stereotypes and incorrect societal assumptions about gender, ethnicity, geographic location, and other attributes infiltrate the algorithm’s dataset, lead AI systems to make unfair or discriminatory decisions and produce inaccurate outputs. Individuals designing and operating AI models therefore need to be aware of these biases and take steps to mitigate them in order to ensure fairness and reliability. Sergio Mayo also pointed out the need to create safe spaces to ensure data sovereignty and secure data sharing for AI training across different states and regions.
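
    As a simple illustration of the kind of dataset check described above, the short Python sketch below flags a heavily skewed category distribution before a dataset is used for training. The file name, column name, and threshold are assumptions chosen for this example, not values recommended by the speaker.

    # Illustrative only: flag a heavily skewed category distribution in a training
    # dataset before it is used. File name, column name, and the 10x threshold are
    # assumptions for this example, not recommended values.
    from collections import Counter
    import csv

    def imbalance_report(path, column, max_ratio=10.0):
        """Warn if one category outnumbers another by more than max_ratio."""
        with open(path, newline="", encoding="utf-8") as f:
            counts = Counter(row[column] for row in csv.DictReader(f))
        if not counts:
            return ["dataset is empty"]
        most, least = max(counts.values()), min(counts.values())
        if least == 0 or most / least > max_ratio:
            return [f"column '{column}' looks imbalanced: {dict(counts)}"]
        return []

    # Hypothetical usage:
    # print(imbalance_report("training_data.csv", "gender"))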

    Dr Melodena Stephens, Professor of Innovation & Technology Governance at the Mohammed Bin Rashid School of Government, UAE, underscored the differences between digital literacy and AI literacy, as the latter is much more complex and requires constant updating to keep up with rapid technological advancements. In this context, she endorsed comprehensive societal education on AI, including training for engineers, policymakers, and the general public. In addition, Dr Stephens questioned whether different cybersecurity policies can realistically be aligned in the short term, given geopolitical fragmentation and differing views on human rights and privacy, although such harmonization would be highly desirable and productive. Instead, she advocated better adaptation of regulations and standards, such as those developed by ISO or NIST, to make them more understandable and actionable for people and organizations at different levels of expertise.

    As a step towards the practical implementation of general regulatory frameworks, Yuliya Shlychkova presented the “Guidelines for Secure Development and Deployment of AI Systems” developed by Kaspersky and the workshop speakers. This document is particularly useful for companies that rely on third-party AI components to build their own solutions, and covers key aspects of developing, deploying and operating AI systems, including:

    1. Cybersecurity awareness and training
    2. Threat modelling and risk assessment
    3. Infrastructure security
    4. Supply chain and data security
    5. Testing and validation
    6. Vulnerability reporting
    7. Defense against ML-specific attacks
    8. Regular security updates and maintenance
    9. Compliance with international standards
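
    As one concrete illustration of item 4 (supply chain and data security), the Python sketch below refuses to deploy a third-party model artifact whose checksum does not match the value published by its supplier. The file name and expected hash are placeholders, and the guidelines themselves do not prescribe this particular mechanism.

    # Illustrative only: verify a third-party model artifact against a published
    # SHA-256 checksum before deployment. File name and expected hash are placeholders.
    import hashlib

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    EXPECTED_SHA256 = "<checksum published by the model supplier>"  # placeholder

    if sha256_of("third_party_model.bin") != EXPECTED_SHA256:
        raise RuntimeError("Model artifact does not match the published checksum; do not deploy.")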

    The active participation of the audience also contributed to the discussion. In particular, the debate highlighted the imperative need to follow the principles of security by design in the development of AI models, to address the cybersecurity of AI as an integral process, and to consider the human factor as crucial for the sustainability of systems. Participants also touched on ethical dilemmas in the use of AI, especially in developing countries, underlined the risks associated with application programming interface (API) vulnerabilities, and agreed on the importance of security audits, with a focus on assessing the integrity and fairness of AI models.

    At the end of the session, there was broad consensus on the need for transparency, education, and collaboration across regions to address the vital issues of AI security standards and interoperability, while recognizing the local cultural and economic context in which these systems will be deployed.