
IGF 2024 WS #78 Intelligent machines and society: An open-ended conversation

    Organizer 1: Sorina Teleanu, DiploFoundation
    Organizer 2: Jovan Kurbalija
    Organizer 3: Andrijana Gavrilovic

    Speaker 1: Sorina Teleanu, Civil Society, Eastern European Group
    Speaker 2: Jovan Kurbalija, Civil Society, Western European and Others Group (WEOG)
    Speaker 3: Yung-Hsuan Wu, Civil Society, Western European and Others Group (WEOG)

    Moderator

    Jovan Kurbalija, Civil Society, Western European and Others Group (WEOG)

    Online Moderator

    Andrijana Gavrilovic, Civil Society, Eastern European Group

    Rapporteur

    Andrijana Gavrilovic, Civil Society, Eastern European Group

    Format

    Roundtable
    Duration (minutes): 60
    Format description: Given that an open-ended conversation relies on maximising audience engagement, we proposed a session format that brings the audience, the speakers, and the moderators onto the same level and encourages anyone to speak up. A less rigidly structured room setup reduces barriers and formality among participants, and a more face-to-face environment is conducive to multi-directional participation because everyone can see one another. The medium length of the session allows just enough time for participants to warm up, without being so long that the conversation loses its focus.

    Policy Question(s)

    In developing impact assessments of AI models, what additional dimensions should we consider beyond the human rights and ethics concerns already prominent in the public consciousness? How else can we ensure fairness, not just in the sense of fair representation of people with various characteristics, but also in respecting human competencies vis-a-vis machine automation? How can we operationalise a combination of human and non-human creativity to feed into future innovations in arts, technologies, and society? And how do we steer such innovations towards achieving the sustainable development goals?

    What will participants gain from attending this session? By joining this session, participants will:
    Gain a basic understanding of the various types of machine learning algorithms embedded in the social, political, and economic life of all of us.
    Acquire thinking tools and frameworks to assess the societal impacts of these various ‘intelligent’ agents.
    Actively contribute to a common sense-making exercise about the role of AI models that is largely absent from policy consultation processes, and learn from peer experiences.
    Experiment with and review AI tools, both commercial and Diplo-built, under the speakers’ and moderators’ guidance.

    Description:

    Talks about AI have permeated the digital governance and policy space, from the principles and values that should steer AI development to the risks that are most urgent to mitigate. We talk a lot about challenges and opportunities and ways to ‘govern AI for humanity’; we tend to believe that new governance frameworks will be the solution we need to leverage AI for good, address the risks, and account for misuse and missed uses of the technology. But there are also broader, perhaps more philosophical questions about AI that we may want to spend a little more time on. For instance, how much time do we take to reflect on what it means to have intelligent machines functioning and working for or alongside us in our society? We’d like to invite you to an open-ended conversation filled with questions. Through collective sense-making, we wish to ground the talk about the risks and opportunities AI brings in human experiences. In this out-of-the-box workshop, we promise not solutions, but a set of critical questions that may help clarify what those solutions should look like. The following questions are a primer:
    Epistemological challenges in knowledge creation: Large Language Models (LLMs) as our new coworkers? Analysts? Assistants? What roles do we imagine LLMs playing vis-a-vis humans?
    Missing the forest for the trees: Are there other forms of intelligent machines/agents beyond the LLMs we tend to talk so much about? If so, how much are they reflected and considered in our AI policy and governance discussions?
    Assigning human attributes to AI: What do we talk about when we talk about AI ‘understanding’, ‘reasoning’, and so on?
    When words lose their meaning: Five years from now, will we all sound like ChatGPT? How will human-machine co-generated language evolve, relying less on context and more on tokens associated with probabilities?
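
    As a purely illustrative aid (not part of the session materials and not any specific model), the toy sketch below shows what “tokens associated with probabilities” means in practice: at each step a language model samples the next token from a probability distribution conditioned on what has been generated so far. The vocabulary and probabilities here are invented for illustration only.

```python
# Toy illustration of token-by-token generation from probabilities.
# The vocabulary and the distributions below are invented; a real LLM
# conditions on much longer contexts and far larger vocabularies.
import random

# Hypothetical next-token distributions keyed by the previous token.
NEXT_TOKEN_PROBS = {
    "<start>": {"AI": 0.5, "Humans": 0.5},
    "AI": {"governance": 0.4, "models": 0.35, "ethics": 0.25},
    "Humans": {"and": 0.6, "remain": 0.4},
    "governance": {"matters": 1.0},
    "models": {"learn": 1.0},
    "ethics": {"matters": 1.0},
    "and": {"machines": 1.0},
    "remain": {"imperfect": 1.0},
}

def generate(max_tokens: int = 4) -> str:
    """Build a short 'sentence' by repeatedly sampling the next token."""
    token = "<start>"
    output = []
    for _ in range(max_tokens):
        choices = NEXT_TOKEN_PROBS.get(token)
        if not choices:
            break
        tokens, probs = zip(*choices.items())
        token = random.choices(tokens, weights=probs, k=1)[0]
        output.append(token)
    return " ".join(output)

if __name__ == "__main__":
    print(generate())  # e.g. "AI models learn" or "Humans remain imperfect"
```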

    Expected Outcomes

    This open-ended conversation is an attempt to encourage honest and humane reactions to the gradual integration of non-human agents into society at large, and to uncover the underlying values that humans commonly wish to protect or reflect upon further. Such an open sense-making exercise will help us get a firmer grasp on the society to come and may point to pathways leading to better versions of that future society. Concretely, the unstructured yet thematised questions and responses from the audience will be transformed into a series of recommendations for further research into under-discussed areas of AI’s societal impacts, and into proposals for more open-ended conversations that continue to encourage critical reflection and a questioning spirit in an eventual AI society.

    Hybrid Format: First and foremost, we invite IGF participants to join us for a somewhat different type of session - an open-ended conversation dependent on audience inputs. All participants will have a chance to speak, onsite and online; we will have two speakers/moderators onsite to guide the discussions. There will be no panellists with lecture-like presentations. The session will rely on Diplo’s experienced speakers/moderators who will – throughout the entire session – pay equal attention to onsite and online participants, ensuring that interventions from both audiences are treated equally. Online participants will be constantly encouraged to contribute their views, both by voice and by text chat. An additional experienced online moderator will engage with participants in the chat and ensure that the discussions happening there are integrated into the overall session. Moreover, we will use live discussion tools (e.g., Slido, Mentimeter, and Pigeonhole) to facilitate real-time exchanges between onsite and online participants.

    Key Takeaways
    There is a need for philosophical and ethical discussion of AI that goes beyond surface-level treatments of bias and ethics
    Bottom-up AI development is technically feasible and ethically desirable to preserve human knowledge
    Legal and ethical responsibility for AI systems needs to be clearly defined
    Call to Action
    Proposal to create a ‘Sophie’s World for AI’ session at the next IGF in Norway to discuss AI from the perspective of various philosophical traditions
    Plan to continue the discussion on philosophical implications of AI among interested participants
    Session Report

    This discussion, featuring experts from DiploFoundation, delved into the profound philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers emphasised the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world.

    1. Introduction and Framing

    Jovan Kurbalija, Director of DiploFoundation, opened the discussion by stressing the importance of understanding basic AI concepts to engage in meaningful discussions beyond dominant narratives of bias and ethics. He argued for a more critical examination of AI’s impact on human knowledge and identity, proposing the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimisation.

    Sorina Teleanu, Director of Knowledge at DiploFoundation, presented a series of thought-provoking questions to frame the discussion. She questioned the anthropomorphisation of AI and the tendency to assign human attributes to machines. Teleanu raised concerns about how AI might affect human communication and creativity, encouraging consideration of other forms of intelligence beyond human-like AI.

    2. Philosophical Considerations of AI

    The discussion touched on various philosophical aspects of AI. Kurbalija introduced the concept of a “right to be humanly imperfect,” arguing for the preservation of human agency and imperfection in an AI-driven world. This idea resonated with other speakers, who expressed concern about the potential loss of human elements in pursuit of AI-driven efficiency.

    Teleanu expanded on her concerns regarding the anthropomorphisation of AI, highlighting the potential risks of attributing human characteristics to machines. She also raised important questions about the interplay between AI and neurotechnology, emphasising the lack of privacy policies for brain data processing.

    A thought-provoking perspective on the potential personhood of advanced AI was introduced: the suggestion that an Artificial General Intelligence (AGI) indistinguishable from humans in capability might deserve human rights challenged conventional notions of humanity and consciousness.

    3. AI Governance and Development

    The speakers agreed on the need to focus on immediate and practical impacts of AI rather than long-term hypotheticals. Kurbalija criticised ideological narratives that postpone addressing current issues in education, jobs, and daily life. He advocated for bottom-up AI development to preserve diverse knowledge sources and prevent the centralisation of knowledge by large tech companies.

    Kurbalija also stressed the importance of defining accountability in AI development and deployment, arguing that legal principles regarding AI responsibility are fundamentally simple and should be applied accordingly.

    Participants emphasised the importance of open-source models and data licensing for AI development. One participant proposed systems to validate and test AI outputs, similar to human education processes, to ensure reliability and prevent “hallucinations” in AI-generated content.
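
    The proposal above, to test AI outputs the way we examine students, can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (the exam data and the ask_model callable are placeholders, not a real API): a system is graded against questions with known reference answers, and answers that do not contain the reference are flagged as possible hallucinations.

```python
# Minimal sketch of "examining" an AI system against reference answers.
# The exam items and ask_model interface are hypothetical placeholders.
from typing import Callable, Dict, List

EXAM: List[Dict[str, str]] = [
    {"question": "In which year was the Universal Declaration of Human Rights adopted?",
     "reference": "1948"},
    {"question": "Which country hosts the next IGF, as mentioned in this session?",
     "reference": "Norway"},
]

def grade(ask_model: Callable[[str], str]) -> Dict[str, object]:
    """Grade a model's answers against references and flag mismatches."""
    flagged = []
    correct = 0
    for item in EXAM:
        answer = ask_model(item["question"])
        if item["reference"].lower() in answer.lower():
            correct += 1
        else:
            flagged.append({"question": item["question"], "answer": answer})
    return {"score": correct / len(EXAM), "possible_hallucinations": flagged}

if __name__ == "__main__":
    # Stub "model" that always answers "1948": scores 0.5, one answer flagged.
    report = grade(lambda question: "1948")
    print(report["score"], len(report["possible_hallucinations"]))
```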

    4. DiploFoundation’s Approach to AI Development

    Towards the end of the discussion, Kurbalija elaborated on DiploFoundation’s approach to AI development. He explained their focus on creating AI tools that preserve and enhance human knowledge, particularly in the field of diplomacy. These tools aim to assist diplomats and policymakers by providing quick access to relevant information and analysis, while maintaining human oversight and decision-making.

    5. Conclusion and Practical Demonstration

    The discussion concluded with a practical demonstration of AI tools developed by DiploFoundation. Kurbalija showcased how these tools can be used to analyse complex diplomatic texts and generate summaries, emphasising the potential of AI to augment human capabilities in specialised fields.

    The speakers emphasised the importance of continuing these philosophical discussions to examine what it means to be human in an AI era. Key unresolved issues included the effective implementation of AI ethics education, the long-term impacts of AI on human identity and interaction, and the ethical implications of AGI potentially becoming indistinguishable from humans.

    This thought-provoking discussion challenged common AI narratives and highlighted overlooked issues, encouraging a more critical and philosophical approach to understanding AI’s role in shaping the future of humanity. The session ended with an invitation for continued dialogue and exploration of these complex issues.

    2. Summary of Issues Discussed

    This discussion focused on philosophical and ethical questions surrounding artificial intelligence (AI) and its impact on humanity. The speakers, Jovan Kurbalija and Sorina Teleanu from DiploFoundation, emphasised the need to move beyond surface-level discussions of AI ethics and biases to address more fundamental questions about human identity and agency in an AI-driven world. They raised concerns about the centralisation of knowledge by large tech companies and advocated for bottom-up AI development to preserve diverse knowledge sources. The speakers questioned how AI might affect human communication and creativity, and whether humans should compete with machines for efficiency. They introduced the concept of a “right to be humanly imperfect” in contrast to AI’s pursuit of optimisation. The discussion touched on the anthropomorphisation of AI and the need to consider other forms of intelligence beyond human-like AI. Practical examples of AI tools for knowledge management and analysis were presented, demonstrating how AI can be used responsibly with proper attribution. Audience questions addressed topics such as AI ethics education, the potential personhood of advanced AI, and open-source approaches to AI development. The speakers concluded by proposing further philosophical discussions on AI’s impact across various cultural traditions, emphasising the importance of examining what it means to be human in an AI era.

    3. Key Takeaways

    There is a need for philosophical and ethical discussion of AI that goes beyond surface-level treatments of bias and ethics

    Current AI development and governance discussions often lack critical questioning of long-term implications

    Bottom-up AI development is technically feasible and ethically desirable to preserve human knowledge

    AI’s impact on human-to-human interaction and communication needs more consideration

    There are concerns about anthropomorphising AI and assigning human attributes to it inappropriately

    Legal and ethical responsibility for AI systems needs to be clearly defined