IGF 2018 WS #257
AI policy: Looking beyond and between frameworks

    Organizer 1: Vidushi Marda, ARTICLE 19
    Organizer 2: Mallory Knodel, ARTICLE 19
    Organizer 3: Collin Kurre, ARTICLE 19

    Speaker 1: Corinne Cath, Technical Community, Western European and Others Group (WEOG)
    Speaker 2: Amber Sinha, Civil Society, Asia-Pacific Group
    Speaker 3: Malavika Jayaram, Civil Society, Asia-Pacific Group

    Moderator

    Vidushi Marda

    Online Moderator

    Mallory Knodel

    Rapporteur

    Collin Kurre

    Format

    Panel - 90 Min

    Interventions

    Each speaker mentioned below brings a distinct voice to this debate and is an expert on AI technologies. This group brings together a mix of training, geography, gender, and perspectives that makes for a promising and lively debate.
    Moderator: Ms. Vidushi Marda, ARTICLE 19, Civil Society, Female, India. Ms. Marda is the Digital Programme Officer leading ARTICLE 19’s work on AI.
    Speakers:
    1. Ms. Corinne Cath, Oxford Internet Institute, Academic, Female, Netherlands. Ms. Cath is an experienced expert in this area, involved in the Partnership on AI and the IEEE Global Initiative on Ethically Aligned Design, and has published on the topic. She will speak to the benefits of a human rights framework. (confirmed)
    2. Mr. Amber Sinha, The Centre for Internet and Society, Civil Society, Male, India. Mr. Sinha is a leading expert on emerging technologies in India and has worked on a variety of issues that are crucial to this debate, most recently around privacy in India. (confirmed)
    3. Ms. Malavika Jayaram, Digital Asia Hub, Civil Society, Female, Hong Kong. Ms. Jayaram has led multiple conversations around AI and ethics, security, privacy, innovation, healthcare, urban planning, automation, and the future of labour, most recently through a series of workshops titled “AI in Asia”. (confirmed)
    4. Ms. Brittany Smith, DeepMind, Industry, Female, London. Ms. Smith is at the forefront of DeepMind’s efforts towards responsible uses of AI and will speak to industry perspectives on the development and deployment of AI. (TBC)
    5. Ms. Marrousia Levesque, Government Affairs Canada, Government, Female, Canada. Ms. Levesque has worked at the intersection of AI and human rights, and has also been part of important conversations around implementation challenges. (TBC)

    Diversity

    This proposed panel reflects diversity in every aspect: from background and training to gender, stakeholder group, geographic location, and involvement in the space. Five of the six proposed panelists are women, and we have representation from at least five countries. We will work towards this balance not just on paper, but also in substance during the discussion.

    This session will begin by explaining the rationale underlying each school of thought and the interaction, overlap, and tension between schools, and by presenting a short overview of ARTICLE 19’s new policy paper on how human rights and fairness, accountability, and transparency frameworks interact with each other. This will serve as the background to the session.

    Next, each speaker will be asked to speak for five minutes about the interaction between the different models and to explain their thoughts on the most appropriate approach going forward.

    This will set the stage for a round of initial interaction with the audience, who, having understood the orientation of the panel, can make informed interjections or expose gaps in the foundational discussion of the session.

    This will pave the way for a second round of interventions from speakers, who in addition to addressing questions and comments from the audience, will discuss the challenges, opportunities, and objectives that they have carved out in their work so far.

    The remaining time will be devoted to interaction between the audience and the panel, with the objective of identifying commonalities, differences, and tensions between the three schools of thought, namely ethics, human rights, and fairness, accountability, transparency.

    Goals of discussion:
    1. What are the various points of divergence between the three frameworks being discussed?
    2. What are the ways in which these can inform one another?
    3. What challenges crop up within each of these frameworks?
    4. How do strategies of engagement align or differ between these?
    5. Are there real-world examples or case studies where these frameworks have been used simultaneously? Is there theoretical grounding for such approaches?
    6. What concrete steps can stakeholders take to build conversations between approaches?

    The opening round of comments from the moderator and speakers will map the landscape of the conversation. After a short round of interventions of 5 minutes each from panelists, the floor will be opened to online and in-person audience participation, to ask questions following the initial comments and to highlight gaps in the discussion that must be addressed.

    This will pave the way for a second round of panel interventions where panelists will address issues brought up by co-panelists and members of the audience, to create a lively conversation and engaging debate.

    This panel is intended to be a conversation that critically analyses different policy approaches to AI. By having two extended audience participation blocks, and multiple interventions by each panelist, we hope to facilitate a constructive conversation.

    This panel aims to look at how emerging conversations and implementation mechanisms around Artificial Intelligence (AI) can be harmonised to inform future policy approaches.

    AI systems find increasing application in critical sectors, from deciding the credit-worthiness of individuals to predictive policing. Practitioners, researchers, companies, and governments simultaneously grapple with the challenges that AI poses while trying to determine the frameworks and principles that must guide this technology. In the process, at least three distinct approaches to AI standards and principles have emerged. Some turn to existing human rights frameworks to guide the development of AI, others gravitate towards conversations around ethical principles, while a third school of thought pursues these as technical-ethical challenges and works towards determining what fair, accountable, and transparent AI systems look like. Each of these presents strategies of engagement and dialogue in unique ways. However, less attention is paid to how these conversations can constructively inform one another.

    This panel seeks to build on existing multi-stakeholder discussions on this issue, by taking the next logical step towards a conversation across schools of thought, i.e. by engaging governments, companies, academia, and civil society actors to talk about how 1) human rights frameworks, 2) ethical principles, and 3) technical determinations of fair, accountable, transparent AI can inform each other. What are the similarities? Where are the points of tension? Is there space for these ideals to exist together? What mechanisms are in place to do so?

    The discussion will take place against the background of an upcoming ARTICLE 19 report on Fairness, Accountability, Transparency in AI vis-a-vis human rights. This is particularly relevant and timely, as important multi-stakeholder initiatives around AI systems will begin producing substantive and strategic outputs in the next year. The multi-stakeholder discussion is beginning to gather momentum, and a constructive debate around guiding frameworks is crucial.

    Online Participation

    The session will facilitate online participation both on the panel and through audience interaction. Online attendees will have a separate queue and microphone, which will rotate equally with the mics in the room; the workshop moderator will have the online participation session open and will be in close communication with the workshop’s trained online moderator to make any necessary adaptations as they arise. Given our commitment to ensuring equal participation in all of our work, we take the responsibility and importance of remote participation very seriously, and the IGF will be no exception.