
IGF 2024 WS #31 Cybersecurity in AI: balancing innovation and risks

    Organizer 1: Igor Kumagin, Kaspersky
    Organizer 2: Yuliya Shlychkova, Kaspersky
    Organizer 3: Jochen Michels, Kaspersky
    Organizer 4: Dmitry Fonarev, Kaspersky

    Speaker 1: Sergio Mayo Macias, Technical Community, Western European and Others Group (WEOG)
    Speaker 2: Melodena Stephens, Technical Community, Asia-Pacific Group
    Speaker 3: A Wylde, Technical Community, Western European and Others Group (WEOG)
    Speaker 4: Yuliya Shlychkova, Vice President, Public Affairs, Kaspersky

    Moderator

    Gladys Yiadom, Private Sector, Intergovernmental Organization

    Online Moderator

    Jochen Michels, Private Sector, Western European and Others Group (WEOG)

    Rapporteur

    Dmitry Fonarev, Private Sector, Eastern European Group

    Format

    Theater
    Duration (minutes): 90
    Format description: The session will combine a panel discussion with a round table, lasting approximately 90 minutes. Great emphasis will be placed on discussion with participants, both onsite and online. In addition, short surveys will be included to further engage participants and obtain feedback on individual questions.

    Policy Question(s)

    A. What essential cybersecurity requirements must be considered when developing and applying AI systems, and how can we ensure that AI is inherently secure by design?
    B. What are the roles and responsibilities of various stakeholders engaged in AI system development and use?
    C. How can we engage in a permanent dialogue and maintain an exchange on this issue?

    What will participants gain from attending this session? The goal of the discussion is to identify core principles of cybersecurity-by-design for the development of AI. These principles can serve as a basis for further technical governance models.

    In preparation for the workshop, Kaspersky has developed "Guidelines for Secure Development and Deployment of AI Systems". This paper has benefited from the contributions of all the speakers at the workshop and will be discussed during the session. The document is available here: https://kas.pr/1yt9

    Description:

    The technological landscape has recently witnessed the emergence of AI-enabled systems at an unprecedented scale. However, nascent technologies go hand in hand with new cybersecurity risks and attack vectors. The concept of security in the development of AI systems has been thrust to the forefront of various regulatory initiatives, such as the EU AI Act and the Singapore Model AI Governance Framework for Generative AI, to minimize the associated cyber-risks. Despite these regulatory strides, a gap remains between such general frameworks and their practical implementation at a more technical level.

    In the forthcoming multi-stakeholder discussion, we seek to explore which fundamental cybersecurity requirements should be considered in the implementation of AI systems, and how policymakers, industry, academia, and civil society can contribute to the development of new standards.

    Our initial thoughts are:
    (1) AI systems must undergo thorough security risk assessments. This involves evaluating the entire architecture of an AI system and its components to identify potential weaknesses and threats, ensuring that the system's design and implementation mitigate these risks.
    (2) Cybersecurity for AI systems should not be an afterthought but should be integrated from the initial design phase and maintained throughout the system's lifecycle (cyber-immunity); a minimal illustrative sketch follows this list.
    (3) Cybersecurity measures must address the AI system as a whole, taking a holistic approach that ensures all of its parts are secure and resilient to multiple types of cyberthreats.
    (4) Cybersecurity measures must be continuously reviewed and improved so that they keep pace with new technological advancements and emerging cybersecurity threats.
    (5) An institutional process for sharing information about AI incidents should be established, so that industry is informed about the latest attacks and prepared to mitigate them.
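
    To make points (1) and (2) more tangible, below is a minimal sketch, written in Python purely for illustration, of two design-phase controls an organization building on third-party AI components might adopt: verifying the integrity of a model artifact before loading it, and bounding untrusted input before inference. The file name, digest, and length limit are hypothetical placeholders, not values taken from the guidelines.

        import hashlib
        from pathlib import Path

        # Hypothetical values for illustration only; a real deployment would use
        # the digest published out of band by the model provider.
        EXPECTED_SHA256 = "0" * 64
        MAX_PROMPT_CHARS = 4096

        def verify_model_artifact(path: Path, expected_sha256: str) -> None:
            """Refuse to load a third-party model file whose checksum does not match."""
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest != expected_sha256:
                raise RuntimeError(f"Integrity check failed for {path}: got {digest}")

        def sanitize_prompt(prompt: str) -> str:
            """Apply a minimal runtime control to untrusted input before inference."""
            if len(prompt) > MAX_PROMPT_CHARS:
                raise ValueError("Prompt exceeds the configured length limit")
            # Drop control characters that could corrupt logs or downstream parsers.
            return "".join(ch for ch in prompt if ch.isprintable() or ch.isspace())

        if __name__ == "__main__":
            verify_model_artifact(Path("model.bin"), EXPECTED_SHA256)  # hypothetical artifact
            print(sanitize_prompt("Summarize the incident report."))

    In a real system, such checks would be only one small part of the lifecycle-wide, continuously reviewed measures described in points (3) and (4).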

    Expected Outcomes

    Following the session, an impulse paper titled “Balancing innovation and risk: fundamental security requirements for AI systems”, summarizing the results of the discussion, will be published and made available to the IGF community. The paper can also be sent to other stakeholders to gather additional feedback.

    Hybrid Format: The moderators will actively involve participants in the discussion through short online surveys (one to two questions each) at the beginning and end of the session, as well as after the initial statements. The survey tool can be used by participants both online and onsite via their smartphones. This will foster personal involvement and increase interest in the hybrid session.
    During the roundtable discussion, onsite and online participants are equally encouraged to contribute their ideas actively; both will have the same opportunities to get involved.
    Planned structure of the workshop:
    • Introduction by the moderator
    • Survey with two questions
    • Brief impulse statements by all speakers
    • Survey with two questions
    • Moderated discussion with the attendees onsite and online – Roundtable
    • Survey with two questions
    • Wrap-up

    Key Takeaways

    Cybersecurity standards for AI-specific threats, which are being actively developed in various jurisdictions, mostly cover the development of AI foundation models or the overall management of risks associated with AI. This has created a gap in AI-specific protection for organizations implementing applied AI systems based on existing models.

    The guidelines for the secure development and deployment of AI systems presented and discussed during the workshop will be instrumental for organizations relying on third-party AI components to build their own solutions. The document, developed by Kaspersky in conjunction with leading academic experts, is available here: https://kas.pr/1yt9.

    Call to Action

    Organizations should implement rigorous security practices when developing, deploying and operating AI systems to mitigate associated risks, follow leading regulatory frameworks and advanced guidance as industry benchmarks, and establish an internal culture of security and accountability.

    Governments and international organizations should promote a responsible approach to the development and use of AI systems, facilitate the exchange of best practices among different stakeholders, and work towards the harmonization and interoperability of security standards and their implementation in critical industries.