IGF 2024 - Day 2 - Workshop Room 4 - DC-CIV & DC-NN From Internet Openness to AI Openness

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Can everyone hear us?

>> VINT CERF: We can hear you online.

>> MODERATOR: Thank you. Olivier, we should start. For our remote panelists, I'm not sure if you're seeing us. We are starting in one minute.

>> VINT CERF: We can see you.

>> MODERATOR: Fantastic. We can see you also; now we have all of you on our screen. We have the agenda that Olivier so kindly shared with our remote participation team. We have the remote participants on-screen. We are only missing Olivier, who is arriving. Finally.

We are looking forward to your arrival, Olivier, so we can start.

You want to kick off the dance? So, do you want to start?

>> OLIVIER CREPIN-LEBLOND: Please, yes.

>> LUCA BELLI: Fantastic. Let me introduce the panelists and the team, and then I will give the floor to my friend and co-moderator Olivier Crepin-Leblond.

We will start with Renata Mielli, adviser at the Ministry of Science and Technology of Brazil and also the Chairwoman of CGI.br, the Brazilian Internet Steering Committee.

Then we will have Sandrine Elmi Hersi, from the French regulator's unit on Internet openness.

Then we will have Sandra Mahannan, if she is with us? Yes. She works for the UNICON group. And then we have Mr. Vint Cerf. He is Chief Internet Evangelist and Vice President at Google.

And then we will have Yik Chan Chin, sorry, pardon my Mandarin. And last but not least, we should have our friend Alejandro Pisanty somewhere -- sorry, there's also Wanda Munoz. Do we have Alejandro Pisanty online? I don't see him.

>> VINT CERF: He's online. I don't know whether his video has been enabled.

>> LUCA BELLI: He's online, but not yet visible; we hope he will be soon. And then we have Wanda Munoz, who is already online, of course.

And last but not least, Anita Gurumurthy of IT for Change.

Now, we had very similar proposals for this year's IGF sessions: to bring together individuals who have been working, in some cases for decades, on Core Internet Values, and for a decade on Internet openness and net neutrality, to discuss which lessons, if any, can be learned from Internet openness and Core Internet Values and transposed to the current discussions on AI openness.

We know very well that the Internet and AI are two different beasts, but there is a lot that overlaps. To do AI, you somehow need Internet connections, and a lot of things happen on the Internet; indeed, most applications nowadays rely on some sort of AI. There's a deep connection between the two, but they're not exactly the same thing.

Many of the Core Internet Values or Internet openness principles that we have been discussing for the past decade or so may apply, or not. Some things may be more intuitive, like transparency. Transparency of Internet traffic management, as we have discussed in net neutrality for a decade, is essential to understand to what extent the ISP is managing traffic: is this reasonable traffic management? Is it unduly blocking some specific traffic or not?

This is also essential to understand which kinds of decisions are taken by the AI systems we may rely upon for enormously important parts of our lives, from getting loans and credit in banks to being identified, and maybe arrested, by police as criminals through face recognition services.

So this kind of transparency, although different, applies whether in rule-based systems or in computationally intensive systems like LLMs, which rely on a lot of computational capacity and are more predictive and probabilistic than deterministic; in both cases we need to understand how they function.

From a rule of law and due process perspective, we cannot accept simply saying, "we don't know how it works." I understand that in many cases we don't know how it works, but this kind of transparency is needed for the counterparts and for regulation at large.

And then we debate interoperability, which is at the core of Internet functioning; but most AI systems are not interoperable, and the most advanced ones are developed by a very small number of large corporations, which may lead us to the kind of concentration that nondiscrimination and decentralization, at the core of the Internet, of net neutrality, and of Internet openness, aim at avoiding.

And to conclude, a very concrete example of how this concentration, in the absence of net neutrality, can even be put on steroids by AI. We have been debating zero rating and its compatibility or incompatibility with net neutrality for a decade, and we know that in most of the Global South people access the Internet through a family of apps, especially WhatsApp. The fact that WhatsApp now includes Meta AI at the very top of its home screen means that de facto most people in the Global South have WhatsApp as their primary Internet experience and will have Meta AI as their primary AI experience, period. That's the reality for most people who are poor and will only have that as an introduction to AI.

And those people also work, for free, to train that specific AI. So those are a lot of considerations we have to keep in mind to understand to what extent we can transpose Internet openness principles, to what extent we can learn lessons from regulation that already exists, to what extent we have failed over the past decades in terms of openness, and which kind of governance solutions we can put forward in order to shape the evolving AI governance and, hopefully, maybe, AI openness. At this point, this would be where my co-moderator would provide his introductory remarks, but I see him intensely speaking with our remote moderation team.

So, as the show must go on, I think we can start with the first speaker on our agenda, who, if I'm not mistaken, is -- sorry, Renata. Olivier has arrived again, so he is going to provide his introductory remarks and then we will pass the floor.

>> OLIVIER CREPIN-LEBLOND: Apologies for this.

Olivier Crepin-Leblond speaking. I've been trying to get our remote panelists camera access so they can be recognized by us, and so on. Sorry for running in and out. Thank you for the introduction; it's great to have a meeting of both organizations, both dynamic coalitions, together on a topic of such interest.

I'm going to say a few words about Core Internet Values. I'm not sure everyone in the room knows about those; I see new faces as well. And this dynamic coalition started quite a while ago.

It is based on the premise that the Internet works due to a certain number of fundamentals that allowed the Internet to thrive and become what it is today.

And those are quite basic, actually. They're all technical in nature. If I just look at them, the first one is that the Internet is a global resource. It's an open medium, open to all. It's interoperable, which means that every machine on the network is able to talk to other machines. I'm saying machines; when we started it was every computer, but now of course we're speaking about all sorts of devices.

It's decentralized.

There's no overall central control of the Internet, short of the single naming system, the DNS; apart from this, there are many organizations involved in its coordination. It's also end-to-end.

So it's not application-specific. You can put any type of application on it and make it work with something at the other end. The actual features reside in the end nodes and not with centralized control of the network.

That makes it user-centric. End users have the choice of what services they want to access and how they want to access them, and these days, of course, they're able to download onto their mobile devices any type of application that they want to run.

And they don't really think about the Internet running behind the scenes. And of course, the most important thing: it's robust and reliable. To think that there are so many people now on a network which started with only a few thousand people, then a few hundred thousand, then a million, then a few million, with some people back in the day thinking it was going to collapse. Well, it's still working, still doing very well, and still very reliable considering the number of people using it, the number of people trying to break it, and the amount of malware and everything else that is on it.

So it's pretty robust, pretty reliable.

A few years ago we also added another core value, that of safety. The right to be safe was one of the things that we felt was important to add as a core value. In other words, allowing for cybersecurity measures to make sure the network itself doesn't collapse; all sorts of ways, not to control as such, but to make sure that you are safe when you use the Internet and not completely overwhelmed by the amount of malware and everything that's out there.

That's something which I think we were quite successful in doing. All the antivirus software, all of the devices, all of the things that we now have on the net to make it work.

These are very open values, open for people to adopt, and of course we have seen erosion of these over the past years.

The openness of the network has been put to the test on many occasions. There have also certainly been some things affecting the Internet as far as network neutrality is concerned. But on the whole, it's still global, it's still interoperable, and it still has the basic values that we've just spoken about.

Whilst we're seeing an erosion, we're also seeing that it's quite well understood by players out there, at various different levels: the telecommunication companies, governments, the operators, the content providers and so on, that we have this equilibrium, if you want. I can't say a sweet spot, because it keeps on moving forward, but this equilibrium today. And we hope we will continue having this equilibrium tomorrow, one that keeps the Internet innovative while at the same time making it as safe and as stable as possible. Because with a network that's so important in everyone's lives, that's really something we need to make sure we have for the future.

The economic and societal implications of having a broken Internet are too big for us not to do this. Hopefully that's a message that's been well understood.

Now we have AI. AI has come up and seems to be an absolute revolution. We need to regulate, regulate, regulate; that's what some are saying. I think today's session is going to look at this: do we need to regulate, regulate, regulate, or can we learn some lessons from the Core Internet Values and how the Internet has thrived to become what it is today, and apply them to artificial intelligence? Well, let's find out.

>> LUCA BELLI: Thanks. Let's start with our first panelist, Renata Mielli. The floor is yours.

>> RENATA MIELLI: Thank you, Luca, Olivier. It is a pleasure to be here discussing this interesting approach to AI and the Internet.

I am very happy with this bridge that you bring to us to reflect on AI and the Internet and the core values that we have to have in mind when we talk about this new technology.

So thank you very much, my colleagues.

I am going to bring some historical perspective and try to make this bridge between the core values that we have in Brazil, from the Internet to AI.

Well, a long-term analysis of Internet development shows that the Internet benefited from a core set of values that drove its creation, such as openness, innovation, interoperability, and others. The Internet and its technologies fostered an interoperable environment guided by solid and universally accepted standards, shared best practices, collaboration, and the benefit of a network deployed worldwide.

Just as with other types of technology, the development of the Internet was also based on academic research, with researchers working to deploy the initial stage of the Internet.

It has been [?] related industries.

In the same way, it's safe to start from the assumption that AI greatly impacts society in several dimensions: economic, political, environmental, social, and many others.

The harder challenge we have is to drive this evolution in such a way that the positive impacts surpass the negative ones, with AI being used to empower people and society for a more inclusive and fair future for all. And the first step for that is to have a clear consensus on fundamental principles for AI development and governance.

The Brazilian Internet Steering Committee outlined a set of ten principles for the governance and use of the Internet in Brazil. Our so-called Internet Decalogue provides recommendations and is very much in line with the Core Internet Values, in a way that we believe can be leveraged to also meet expectations for the governance and development of AI systems.

Principles such as standardization and interoperability are important for opening development processes, allowing for exchange and joint collaboration among various global stakeholders and strengthening research and development in the field.

In the same sense, AI governance must be founded on human rights provisions, taking into account its multiple purposes and applications.

Principles such as innovation and democratic, collaborative governance can also be considered foundations for artificial intelligence, in order to encourage the production of new technologies and promote multistakeholder governance and transparency. The same goes for transparency, diversity, multilingualism, and inclusion, which can be interpreted in the context of AI systems development from the perspective that these technologies should be developed using technological, ethical, safe, and secure standards, curbing discriminatory purposes.

At the same time, the legal and regulatory environment holds particular relevance for the interpretation that the development and use of artificial intelligence systems should be based on ethical laws and regulations, in order to foster the collaborative nature and benefits of this technology while safeguarding basic rights.

It should also be noted that adopting a principles-based approach ends up generating more general guidelines which can lead to implementation difficulties.

However, this should be balanced with the need for each country to adapt AI governance to its local context, in addition to granting greater sovereignty over how this governance should take place in terms of politics, economics, and social development.

As a bottom line, we could think of AI governance as a priority for more intense South-to-South collaboration, fostering research and development as well as openness and responsible networks, with long-term cooperation agreements and technology transfer in order to [?] development of networks for the Global South.

It is important not to try to reinvent the wheel, but to draw upon good practices that already exist, such as those articulated across the IGF and its processes, and even more stable sets of work such as [?] principles and guidelines, which can feed into the evolution of the ecosystem to be even more inclusive and results-oriented.

Last but not least, existing coalitions, political groups, and others could be leveraged as platforms for collaboration to think about digital governance and cooperation as a whole, including in traditional multilateral spaces such as the G20. Brazil, for example, held the G20 presidency in 2024, and we will do the same with BRICS in 2025. We believe that in both cases there were, and will be, good opportunities for best practices in digital governance collaboration across different countries.

Thank you very much.

>> LUCA BELLI: Thank you very much, Renata. What a way to start; a lot of points made here. Let me also ask that we all try to stick to our five minutes, because otherwise we'll run over. We could speak for hours on these topics.

But next is Anita Gurumurthy of IT for Change. Abdul, is she ready? She can speak, yes? Go ahead, Anita, we can hear you.

>> ANITA GURUMURTHY: You can hear me, I hope. Yeah, all right.

So thank you very much. I just heard that from Renata, and I also note this wonderful point that Mr. Vint Cerf has made: the Internet is used to access AI applications, but operationally AI systems don't need the Internet to function.

I mean, I think we're making reference to the fact that in many ways algorithms predated the Internet, or the Internet-based revolution. However, the fact of the matter is that, just like the Internet's relationship between time and space, or space and time, we have a relationship between the Internet and content, particularly AI, which Mr. Cerf calls [?] AI.

Allow me to be critical of openness itself. Because I think when I open up my house, what I mean is everyone is welcome. But the ideas of the open Internet and open AI do not necessarily, you know, map onto this kind of sentiment. So the term "open Internet" is used very frequently, but it doesn't have a universally accepted definition.

And that is because, as all of us know, and none of us needs an introduction to the data paradigm here, data collection has become pivotal in the Internet paradigm, used either to target ads at large proportions or to build products.

And only a handful of players have the scale to meaningfully pull this off. So the result is a series of competing walled gardens, if you will, and they don't look like the idealized Internet we started with.

And it runs on a string of app stores, social networks, and feeds, and those networks have become far more powerful than the web, in large part by limiting what you can see and what you can distribute.

So the basic promise of the Internet revolution, the scale, the possibility, is, well, I would say it's not being realized at this juncture.

Alongside all of this, the possibilities of community and solidarity haven't died, thank God for that, because we have the open source communities and the open knowledge communities; and of course all of these remain open and vulnerable, unfortunately, to capitalist cannibalization and to state authoritarianism. That's a bit of mourning over the state of the Internet.

All of this points to an important thing: instead of an economic order that could have leveraged the global Internet for a global commons data paradigm, we now have centralized data creation by a handful of transnational platform companies; and we could actually have had, as was pointed out long ago, a different form of web creation.

Now I come to the openness in AI. And I'm cognizant of the five minutes I have.

I think it's worthwhile to look at the analysis of AI labeled "open." What is open AI? We're actually talking about a long gradient, right? You can talk about "open" as if, you know, it's one thing, but you could actually have something with very minimal transparency and reusability attributes.

So that could also be called open. And therefore, open is not necessarily open to scrutiny. And the critique that others like Whittaker mount against this paradigm is that we don't necessarily democratize or extend access when we talk about openness. And openness doesn't necessarily lower the costs of large AI systems at scale.

Openness doesn't necessarily contribute to scrutinizability. And when OpenAI published its model, they explicitly declined to release details about its architecture, including model size, hardware, training compute, dataset construction, and training methods.

So here we are. What we need to do is recast ideas of transparency, reusability, and extensibility, and the idea of access to training data, in their politicized form. If we don't do this, then we will be lost.

And my last submission here is to be able to politicize each of these notions and make them part of ex ante public participation. We need to turn to environmental law and look at the Aarhus Convention. We need a societal and collective rights approach to openness, whether it's the open Internet or open AI.

And collective rights are rights that do not preclude individual rights or liability for harms caused to individuals; I'm not precluding that. But we need to understand what will benefit society and what will harm society.

We're looking at a societal framework for rights which doesn't just always come back to "my product caused you harm," but really looks at the ethics and values of societies and the sovereignty of the people, you know, as a collective.

And here I think we should understand that there are three cornerstones of equality: the right to dignity and freedom from misrecognition; the right to meaningful participation in the AI paradigm, not just in a model; and the right to effective inclusion in the gains of AI innovation, which is for all countries and not just a couple.

Thank you.

>> LUCA BELLI: Thank you very much, Anita, for this reality check and for reminding us that behind the label of openness or "open," one has to look at the substance of things. The very good example of OpenAI, which does not disclose its architecture despite having "open" in its own name, allows us to think about the fact that if we want market players, very large ones including multibillion-dollar corporations, to stick to their promises, maybe some type of regulation is actually essential.

And here it is very fitting to pass the floor to Sandrine Elmi Hersi, because the French regulator has been very vocal in leading on Internet openness since 2015 and the Open Internet Regulation in Europe. So it's very good, based on the experience you have had over the past decade, to understand which kinds of mistakes might have been made, which kinds of limits may exist, and which lessons we can learn to better shape the openness of AI.

Please, Sandrine, the floor is yours.

>> SANDRINE ELMI HERSI: Thank you. First of all, let me thank the organizers of this session for this important conversation on how to incorporate Internet core values in the development of AI.

So I will focus this introduction on the impact of AI on the concept of Internet openness.

As we know, generative AI is an innovation with vast potential across many sectors and, more broadly, for the economy and society.

This technology also raises legal, social, and technical issues that are progressively being tackled. But we can see that policymakers at the European level have primarily focused their actions and initiatives on the risks of these systems in terms of security and data protection, as seen in the EU AI Act.

But the impact of these technologies on Internet openness, and the potential restrictions these applications could bring on users' capacity to access, share, and configure the content they access on the Internet, have only started to attract attention in the debate.

And now generative AI applications are becoming a new intermediary layer between users and Internet content, an increasingly unavoidable one.

For example, a study published by Gartner said that search engine traffic could decline by 25% due to the rise of AI chatbots. Beyond these conversational tools, generative AI systems are increasingly being adopted by traditional service providers, including through well-established platforms like social media and connected devices.

From this perspective, we can say that generative AI will soon become an integral part of most users' digital activities, potentially serving as a primary gateway for accessing content and information online.

So thanks to their user-friendly interfaces, generative AI tools open up new possibilities to a wider range of users: with generative AI, it has never been easier to create text, images, or code. However, we must consider the challenges and risks in terms of Internet openness, and user empowerment is paramount.

We have long emphasized the obligations of Internet service providers: the main purpose of the EU Open Internet Regulation is understood as a right for users in general to access and share the content of their choice on the Internet.

In 2018, we published a report highlighting that, complementary to operators, the influence of devices, operating systems, and other structuring platforms on Internet openness should be tackled with an appropriate regulatory response. The Digital Markets Act, adopted in 2022, has since introduced new tools for promoting nondiscrimination, transparency, and interoperability measures, addressing problems raised by gatekeepers.

But for us, it is time to assess the impact of generative AI on Internet openness and user empowerment. This is why we have started to work on the issue and have already sent first observations to the European Commission, drawing on our experience with net neutrality.

And we can already see effects of generative AI applications on how users access and share content online. Just as an example, the transition from search engines to response engines is not a neutral evolution. It transforms the user experience, as the interfaces of generative AI tools offer the user little control and agency over the content they access, providing a single response, often with a lack of transparency, no clear sources, and no way for users to adjust the settings.

We must also take into account the technical limitations of AI, including biases, lack of [?], and risk of [?], that are now becoming part of the digital landscape. Generative AI development brings fundamental changes to content creation, which could impact how information is shared and the diversity and richness of content available online. As generative AI tools become primary gateways for information access, AI providers could capture the essential part of the economic and symbolic value of content dissemination, which could threaten the capacity and willingness of traditional content providers, such as media or digital commons, to produce and share original content to the benefit of the economy and society.

While these developments are concerning, we are convinced, as are others around the table today, that we can create the conditions necessary to apply the principles of the open Internet to artificial intelligence, in terms of transparency. Next year we will publish a set of recommendations, and we are looking for partners to work on this. And to conclude, a final word to say that we believe we have the ability to shape the future of artificial intelligence governance in a way that secures its development as a common good.

This means the adoption of [?] in terms of openness, but also innovation, sustainability, and safety. So thank you again for the opportunity to be here, and I look forward to the discussion ahead.

>> MODERATOR: Thank you for sharing the perspective of a regulator. We now have a perspective from the business community, and that's Sandra Mahannan, data scientist at the UNICON group of companies. You should be able to unmute and take over.

Sandra. We cannot hear Sandra. Can we check why we cannot hear Sandra? Sandra, can you try to speak again?

>> VINT CERF: No, we can't. We're not hearing Sandra online either.

>> MODERATOR: Should we maybe -- Sandra, can you make a last attempt? Yes, can you try to speak? Yes, there's a problem. While we try to solve this, in the interest of time, let's move ahead to the next speaker, and then we will come back to Sandra later.

So Vint Cerf, please, the floor is yours.

>> VINT CERF: Well thank you very much for asking me to join you today. This is a very, very interesting topic. I will say that AI and the Internet are not the same thing, and I think that the standardization which has made the Internet so useful may not be applicable to artificial intelligence, at least not yet.

What we see are extremely complex large models that operate very differently from each other. And the real intellectual property is a combination of the training material and the architecture and weights of each of the models being generated.

And those are being treated largely as proprietary. So open access to an AI system is not the same as access to its insides, and its training material and its detailed weights and structure.

So we should be a little careful not to try to equate the things that make the Internet useful and try to force them on to artificial intelligence implementations.

I don't think we're at a place where standardization is our friend yet. The one place where standardization might help a lot for generative AI and agentic AI would be semantic ways of interacting between these models. Humans have enough trouble with speaking to each other in language which turns out to be ambiguous. I do worry about agents using language as a way of communicating and running into the same problem that humans have, which is ambiguity and confusion and possibly bad outcomes.

Generally speaking, if we're going to ask ourselves whether we should regulate AI in some way, I would suggest at least in the early days that we look at applications and the risks that they pose for the users.

And so the focus of attention should be on safety, and, for those who are providing AI applications, on showing that they have protected users from potential hazards.

I also feel strongly that there is a subtle risk in our use of generative AI. Those of you who know how these things work know that Large Language Models essentially compress large quantities of text into a complex statistical model. The consequence of that is that some details often get lost. And so we have a subtle risk in our use of Large Language Models where we may lose specific details, even though the generative output looks extremely convincing and persuasive because of the way it's produced.

So I wonder whether we will end up with sort of blurry details as a result of filtering our access to knowledge through these Large Language Models. And I would worry about that.

I guess the last thing I would say is that accountability is as important in the AI world as I think it is in the Internet world. We need to hold parties accountable for potentially hazardous behavior. The same is true for parties offering AI-based applications: they should be accountable for any risks that these systems pose.

My guess also is that we should introduce better provenance into the training material, so we know where the material came from and can assess its accuracy. I'll stop there, and thank you for the opportunity to intervene.

>> OLIVIER CREPIN-LEBLOND: Thank you very much. And since we're pressed for time, we'll go straight over to Yik Chan Chin while we work with Sandra to get her mic working. You have the floor.

>> YIK CHAN CHIN: Thank you for inviting me.

So, from the [?] perspective: as you may know, at the IGF we have a policy network called the Policy Network on Artificial Intelligence. We did a report on liability, sustainability, and other issues.

So first of all, I do agree with [?] in terms of the infrastructure of AI: it's actually quite different from the Internet. AI systems and users are supported and connected over the Internet, but they're quite different, because AI, including algorithms, data, and computing power, is not something that must be unified or standardized as a single category.

And according to our past experience, the interoperability of AI is an extremely complex issue. We have been working on this topic for the last two years, and it is a really extremely difficult issue.

So before going into detail about interoperability, I think there are two principles worth paying special attention to, because when I look at the questions, you talk about permissionless innovation versus a precautionary principle.

And I'm not sure this principle should be applied to AI regulation, because AI is extremely complex and there are some unique features of AI, for example, its complexity and the unpredictability of the way it behaves. All of these particular features can make AI harmful if there's a risk.

So whether we should allow this permissionless innovation approach, which was applied to the Internet, because we know the history of the Internet: its governance was self-governance. Whether we should allow this to be applied to AI systems, we should be cautious about that.

But on the other hand, we see there are some overlapping principles between AI regulation and Internet regulation. I think certain values should be applicable to both systems: for example, human-centeredness, inclusiveness, universality, transparency, safety, and neutrality. All of these apply to both systems.

So, in terms of interoperability, which is my area, I would like to say something particularly focused on it. First of all, what is interoperability? Interoperability is basically about the capacity of AI systems, in terms of machines, to talk to each other and communicate with each other smoothly.

This includes not only machines, but also regulatory policy, so that they can communicate and work together smoothly.

But this doesn't mean the regulation or the standards have to be harmonized, because we can have different mechanisms to accommodate interoperability, for example, compatibility mechanisms.

Therefore, first of all, I think interoperability is crucial for the openness of the Internet and of AI systems. But AI systems can be divergent as well as convergent, as I just explained, because the systems are quite different. It's not necessary for them to be unified.

So we first have to figure out which areas of AI systems, or even of AI regulation and governance, have to be interoperable, and which areas can be allowed to diverge out of respect for regional diversity.

From the analytical perspective, in our report we identified several areas which could be addressed at the global level and which may have an interoperable framework. One is risk categorization and evaluation: you have the EU approach, China has the Chinese approach, and the U.S. just released its own standardization framework and mechanisms.

So we need to have a kind of interoperable framework in terms of AI risk categorization and evaluation.

And the second is liability. There is a huge debate about the liability of AI systems: who should take responsibility, and what kind of responsibility, criminal or civil.

We haven't had a global framework, or even national frameworks, in terms of that, because these are still being debated at the EU level and at national levels. So liability for AI models is another area which could be addressed at the global level.

The other one, which I think Vint Cerf just mentioned, is training datasets. All of these issues can be addressed at the global level.

The other thing, and the last thing I want to mention, is how to balance regional diversity and harmonization needs. We need to respect regional diversity in AI governance, but at the same time establish compatibility mechanisms to reconcile divergences in regulations. There are different mechanisms we can use, but it's context-dependent and case by case. Is my time up?

So the last thing I want to say is about an area we have to improve: the regime capacity of international institutions and coordinated alliances. That's the concept of regime capacity of international institutions. We have a lot of international institutions, like the ITU and the UN, but how can we --

>> LUCA BELLI: Can I ask you to wrap up in one minute, so the others also have time to speak?

>> YIK CHAN CHIN: So the last thing: we need to have some kind of global institution which can coordinate different initiatives at the national, regional, and international levels in terms of the openness of AI and the openness of the Internet.

So the GDC has not provided a concrete solution in terms of how to strengthen the multi-stakeholder approach in AI governance. We have plenty to decide. I think we should address this in the WSIS+20 debate. I'll stop there. Thank you.

>> LUCA BELLI: Fantastic. Let's now see if Sandra can be audible. Can you try to speak so we can check? We are not hearing you. Can you try again?

>> SANDRA MAHANNAN: How about now?

>> LUCA BELLI: Yes. Keep the mic close to your mouth, please. Thank you very much.

>> SANDRA MAHANNAN: Thank you. Once again, thank you so much for the opportunity to be here. I'll try to speak very short.

>> LUCA BELLI: If you could keep the mic very close to your mouth, because literally we can hear very well when it's close to your mouth at not at all when there's five centimeters from your mouth. Thank you very much.

We cannot hear you, Sandra, I'm sorry.

Sandra, unfortunately, we still cannot hear you. I think it's a mic problem, so if you have another microphone where you are, I suggest you try to change it while we go to the next speaker, Alejandro Pisanty. Alejandro is a very old friend, not because he is old at all, but because we have known each other for many years.

So please, Alejandro, the floor is yours.

>> ALEJANDRO PISANTY: Thank you. Can you hear me well?

>> LUCA BELLI: Yes, very well.

>> ALEJANDRO PISANTY: Thank you. Thank you, Luca and Olivier, for the yeoman's work you did to put this together, and to the coalitions for this exchange of ideas. Also, friends I see on screen: I think I see Martin, and I think I see others.

So briefly, mostly responding as well as putting forward what I have prepared. The dynamic coalition was created to try to follow up on these core values.

If you take them away, you don't have the Internet anymore. If you take away openness, you have an intranet. If you take away interoperability, you have a single-vendor network, and so forth.

That's what we're now trying to extend, or let's say to challenge how much we can extend, to AI. We have to be very careful about what we call AI. In people's minds are the generative AI systems that start with text and give you more text in a conversational interface, or can give you images, video, and audio.

But artificial intelligence is a lot more things. It's molecular modeling. It's weather forecasting. It's every use of artificial intelligence for basically three purposes: finding patterns in otherwise apparently chaotic information, finding exceptions in information that appears to be completely patterned, and extrapolating from these.

And we know that extrapolating, as in algebra, from things that you have only calibrated for interpolation is always going to be risky. That's our basic explanation of, and concern about, hallucinations in LLM systems.

Second, one of the lessons we've learned over many years in this dynamic coalition is to separate the effects of the human and the technology; to separate the effects of human agency and human intention. Cybercrime wasn't invented by the Internet; it happens because people want to hurt you and take your money in certain ways, and they now use the Internet as they previously used fax, post, or tried to cheat you face-to-face.

The same goes for much other undesirable conduct. So we have to separate what people want to do from how technology amplifies it through anonymity, crossing borders, and so forth.

The same for AI. It's not AI that is misinformation. We have had misinformation, I think, since the Babylonians and probably even before we had written language. But now we have very subtle and easy-to-apply ways to spread misinformation on a large scale. We still have to look at the source and intention of the people who are creating and providing this misinformation.

The point is not to regulate the technology, but instead to regulate the behavior, or to help users avoid it altogether.

The third point here is not to try to regulate artificial intelligence in general, in total, but to be sure that, in trying to regulate what you don't like about LLMs spreading misinformation, you don't kill your country's ability to join the molecular modeling revolution for pharmaceuticals, for example.

That means leaving behind the concept of digital sovereignty and replacing it with digital agency. Luca and I were in a meeting two weeks ago where this concept was put forward, and it's a very powerful one. What I extract from it: instead of trying to be sovereign by closing borders and putting up tons of rules, basically copied from those who developed the technology based on their fears, try to be powerful, even if you have to sacrifice some sovereignty, in the sense that you have to collaborate with other countries, with academic institutions, and so forth. That, by the way, has always been the way of developing technology through dynamic research.

There's a recent French paper that came to my attention only yesterday which speaks about de-demonizing artificial intelligence: stopping demonizing it without becoming overconfident, but trying to regulate and to promote AI.

If your legislators are looking to regulate AI without putting a lot of money into research and development, and into, say, putting together a major computing facility for everybody to use to develop AI, as Denmark has done recently, or Italy, they are lying to you. They are cheating you, because they're actually closing the door to the effects of innovation and condemning you to only getting this technology from outside the country, in the end, in subtle and controlled ways.

How do we bring multi-stakeholder governance, which is another lesson from the dynamic coalitions, to artificial intelligence? We may have to find a way to scare the companies, with the fear of harder regulation to come, into coming together with other stakeholders, like academia and rights-holding organizations, as we did, for example, in the domain name market with ICANN. It's not necessarily doing an ICANN again, but it's about how you bring these very diverse stakeholders together, designed for the type of systems and risks that are present in reality, and not only the imaginary ones.

There's been some talk about open sourcing, which is very valuable. The risks have already been mentioned, but one risk that has not been mentioned, which we learned from the history of open source software, is dereliction: software that is just abandoned, systems that are not maintained anymore. These are very risky, because defects can creep in even as the software becomes part of the infrastructure of the Internet. We've already seen some major security events happen because of unmaintained open-source software at the core of different systems.

A challenge here will be to avoid the delusion of one-world government. We don't need the GDC. We don't need a UN artificial intelligence agency. We need to look more at a federated approach. And I think this will be more approachable, more attainable; there's a better path to it.

For example, as the UK has been doing: go by the verticals, by the specific types of regulation that already exist, and use all the tools you already have, like liability for commercial products, or liability for public officials who purchase things badly. It's as bad to purchase a system that makes biased or discriminatory assignments of funds in a social security system as it is to purchase cars that end up killing people in crashes because they don't have airbags. And that would be it.

Thank you.

>> LUCA BELLI: Thank you very much, Alejandro. I was going to remind you to wrap up, but you did it yourself. Fantastic. Let's see if Sandra has a new mic that works and give her presentation one last shot. Sandra, can you hear us?

>> SANDRA MAHANNAN: Yes, can you hear me now?

>> LUCA BELLI: We can hear you very well.

>> SANDRA MAHANNAN: Finally.

>> LUCA BELLI: Please go ahead.

>> SANDRA MAHANNAN: I'm so sorry about the mix-up with my mic and all. I'm going to try to keep this very short.

So I want to come in from the business angle, so to speak.

So I work with the UNICON group of companies; it's an AI and robotics company. I read one time that AI often reflects [?] its creators, right? And we all know that AI response quality is a very huge concern, because we have cultural biases, religious biases. Recently I was in a religious gathering where religious leaders were trying to discuss the adoption of AI and, you know, the concerning responses that AI gives, and all of that.

And we all know that AI responses are heavily dependent on the quality of the data fed into the model, right? And the acquisition of such data is usually not cheap; it's very expensive. We talk about computing power, we talk about acquisition of data: these are very expensive processes.

So my tip would be to regulate AI not so much from the user angle; the openness should come from the developer side, the development angle, where we talk about data quality, data privacy, security, data-sharing protocols, operating in the market as an entity, interoperability, and all of that.

Thank you.

>> LUCA BELLI: Thank you for being so fast. Now we move to our last speaker, last but not least, of course: Wanda Munoz, who is a member of the Women for Ethical AI platform and civil society. Over to you, Wanda.

>> WANDA MUNOZ: Thank you so much. Can you hear me?

>> LUCA BELLI: Yes.

>> WANDA MUNOZ: Thank you so much. I'm delighted to be here. Thanks to the organizer for having me and thanks, Alejandro, for recommending me to be here.

I would like to take a somewhat different perspective from what has been shared so far. What I'd like to put on the table today is my perspective as someone who comes from policymaking and from human rights implementation.

So my contributions come from this perspective, and I will also build on the results of a report from the Global Partnership on AI called [?] equality in AI, which I invite you all to review, and for which we counted on Anita; I want to thank her for her contribution.

First, I'd like to share that I think the core values of Internet Governance have been very useful for building a common understanding of the Internet we want, one that serves the majority.

But arriving at this discussion after these values had already been adopted and implemented for a while, I want to put on the table that maybe we could benefit from analysing these values from a gender and diversity perspective.

And I think there's already a wealth of research from feminist AI scholarship in this regard. For instance, just to mention an example, there are six core principles of feminist [?]; I don't know if you're familiar with them, but I invite you to look them up. They propose values such as rethinking binaries, raising [?], examining power, and experimenting through empowerment. These are quite different from the Internet core set of values today, but also complementary. What I like about them is that they question social constructs, power, and the distribution of resources. And these are applicable to the Internet and other fields, but they're left out of mainstream discussions.

That being said, I move to human rights. What I'd like to do first is give you a couple of ideas of why I feel human rights should be front and center in any discussion on AI governance, at least on the same standing as ethics principles and values.

And though maybe some of you see it differently, human rights are not just words. Human rights are actions, policies, budgets, indicators, and accountability mechanisms, as was already mentioned by Renata and Anita before.

In the context of artificial intelligence, human rights allow us to reframe the discussion on AI in different terms and to ask different questions. So let me give you three examples.

Instead of saying that we must mitigate the risks of AI, what we would say from a human rights perspective is that when AI harm occurs, it systematically results in violations of human rights that disproportionately affect women, racialized persons, Indigenous groups, and migrants, among others. I'm sure you know of more cases that have affected the right to employment, to social services, and many others, which you can find, for instance, in the OECD AI Incidents Monitor.

Another example: instead of saying that AI governance should balance risk and innovation, if we acknowledge that the gains generally benefit a few and the harm primarily falls on those already marginalized, we would talk about the need for AI regulation when violations of human rights result from the use of AI. And I want to tell you that in the research we carried out for GPAI, where we consulted more than 200 people from all walks of life and backgrounds on five continents, this was possibly the number one demand documented in the report. I also want to say I appreciate the earlier perspective on the need for international norms, specifically regarding liability and accountability.

And a third example is regarding nondiscrimination. I think generally speaking people understand nondiscrimination as saying, "I don't go out and shout slurs at people in the streets, right? So I don't discriminate." But from a human rights perspective, this is far from enough. What it means is that you must take positive actions to avoid and to redress the discrimination that already systematically exists in our organizations, in our data, and in our policies, and this is particularly the case on the Internet and in artificial intelligence.

So in a human rights framework, unless we take action, we are effectively perpetuating discrimination. Similarly, we could have a discussion about what general rules around safety mean, but unless we adopt specific actions to ensure safety in the context of the vulnerabilities of specific groups in each specific setting, we will keep excluding those already more marginalized.

And here, Alejandro, as often, I want to respectfully disagree with you when you talk about the need not to demonize technology. I hear this term often, and I don't think it's a helpful one; it is often directed at those of us who are pointing out the recent harms of AI.

I think we are doing this based on documented impacts and evidence, trying to raise the alarm to at least start bringing the discussion into the reality of what AI is causing.

I mean, in contrast with what we see most of the time, which is this AI hype. And of course I think that, for all the problems the UN has, we do need it for AI regulation if we want to have some equality in terms of negotiation. But that leads to other issues that I hope we can discuss another time.

Just to conclude: I think when we speak about AI governance, what is at stake has the potential to change the core of how our societies function. So I fully agree with Anita on the need for a societal and human rights approach. To me, this cannot happen without regulation.

So thank you.

>> LUCA BELLI: Fantastic. Thank you very much, Wanda, for bringing these really intense, thought-provoking points. I think it's a very good way to open our discussion with the floor now. We have a good 20 minutes to speak.

Also, let me share with the floor that one of the intentions when we started to design this session was to try to distill some core elements that we could put into a joint report for the next IGF. We know very well that in the six months before the next IGF there are very few things we could do in terms of outcome, but a short joint paper on what could be the elements of an open AI system, or something like that, is feasible.

So if you have any ideas, help guide us to identify what these core elements could be. Or if you have any other reflections on what has been said, we have 20 minutes to discuss them; feel free to be punchy, while of course remaining diplomatic and respectful. Just raise your hand and a mic will be -- [?]

>> OLIVIER CREPIN-LEBLOND: If I can add, there are sometimes panels at the IGF where everyone agrees with each other. I was really pleased to see there are different viewpoints here and some panelists not agreeing with each other. So that's really good.

By the way, if you all as panelists also have points to make about each other's interventions, please go ahead.

If you're online, you can put your hand up in the Zoom and we'll be seeing this. And if you're in the room, put your hand up and a mic will fly in your direction. Or maybe be brought over to your direction.

Does anyone wish to fire off?

>> LUCA BELLI: Who wants to start our collective exercise?

>> OLIVIER CREPIN-LEBLOND: It's a lot to digest.

>> LUCA BELLI: Yes, I see. Are you stretching or raising your hand? Okay, let's break the silence with Desiree.

>> DESIREE: Hi, Desiree here. Thank you all for your very [?] comments. I don't know the exact title, but we're focusing on the core principles of AI in the dynamic coalition working group on core principles of the Internet.

So really glad to see the differentiation, that AI is not the Internet, confirmed by some of our panelists. And we also heard that AI is building this intermediary layer, like a user interface, between these structures.

I think it's important to see AI as something being built on top of the existing infrastructure. And my concern is really that we will end up with an Internet that is even fuller of deepfakes and disinformation than at this current stage. In trying to have a sustainable Internet, we need to be really careful about the capacity we have in society for running the Internet and getting bits of information through the network; should the layers of the network, you know, be kept separate and protected?

And what I think I'm hearing, and I'd like to have confirmation, is that AI, being built on top, should really be regulated at the AI layer, and regulation should not go deep down into the Internet infrastructure as such.

But then there are arguments that some networking parts will be using AI as well. So how do we see this regulation playing out, and what is the core principle here? Is it still net neutrality, where the network stays "stupid" about the bits that go through it?

It just raised a lot of, you know, questions in my mind.

>> OLIVIER CREPIN-LEBLOND: Thank you, Desiree. We have Anita online who wants to react and then we've got Vint having put his hand up. Let's take Anita and Vint's reaction and then we go around the floor again.

>> ANITA GURUMURTHY: I must apologize: I wasn't responding to the point from the floor, because I wanted to come in earlier. So is it okay if I go now?

>> LUCA BELLI: Yes, go ahead and then we'll have Vint.

>> ANITA GURUMURTHY: It's a minor point that may be linked to what was just observed. When we talk about the Internet and the innumerable struggles in our regulatory landscape, and I recall my organization's good fight for net neutrality, the way we think about nondiscrimination in the network is very, very different, I think, when it comes to artificial intelligence.

I think AI is linked to the truth conditions of society, and you're really not necessarily prioritizing nondiscrimination; I think that's a somewhat technicalized representation of the data and AI debates. What we are actually doing is using discrimination and social cognition in a manner such that you can use data for social transformation. So there is a certain slippage there. In fact, in our joint work with Wanda, we actually said that we might sometimes have to do affirmative action through data.

So we really have to be cautious about conflating a nondiscrimination on the Internet with principles for responsible AI.

>> OLIVIER CREPIN-LEBLOND: Thank you.

Vint.

>> VINT CERF: I'm literally just thinking on the fly here about AI as another layer, as the interface into this vast information space we call the Internet. First of all, Alejandro's point that machine learning covers a great deal more than Large Language Models resonates with me. He mentioned weather prediction, for example; we recently discovered at Google that we can do a better job predicting weather using machine learning models than using equations.

But I think that we should be thoughtful about the role that machine learning and Large Language Models might play.

One possibility is that they filter information in a way that gives us less value. That would be terrible. But another alternative is that they help us ask better questions of the search engines than we can compose ourselves.

We have a little experience of this through what's called the knowledge graph, which helps expand queries into the index of the worldwide web and then pull data back.

Summarization could lose information; that's the potential hazard. But I think we should be careful not to discard the utility that these Large Language Models might have in improving our ability to ingest, analyse, and summarize information.

This is an enormous canvas which is mostly blank right now and we're going to be exploring for the next several decades.

>> OLIVIER CREPIN-LEBLOND: Renata.

>> RENATA MIELLI: Just a point. Of course, AI and the Internet are not the same thing; they are different. But in my point of view, the challenges we are facing in addressing the risks of AI and its impacts on society are pretty much the same, in the sense that we need more transparency, accountability, and diversity, and a more decentralized and democratic not only Internet, but AI.

And we need to focus also on how AI is impacting the Internet and how people interact with the Internet.

Now we are in a situation where, for example, when you do a search on Google, you don't have a lot of links to click on and interact with, content about something like how to cook an orange cake, for example.

Because the artificial intelligence brings the results and you don't need to click anymore. And a lot of the time the results are not accurate and carry biases, and this is impacting the Internet and how we experience it. So they're not the same thing, but one impacts the other, and we have to keep in mind that the core values we need in order to regulate AI at this actual moment, transparency, accountability, liability and so on, need to be taken into account, along with net neutrality and the other core values that we have for the Internet.

>> Yes, when it comes to Internet Governance, we have these core infrastructures, which we all recognise have to be a public good, even a global public good. That's a reason to continue regulating them as core infrastructures.

But AI systems, I think, are more at the application level. There's an issue, though, to which many people already attach special attention, which is cybersecurity. AI actually causes a lot of problems in terms of cybersecurity, because it makes the Internet more vulnerable, you know, to sets of attacks.

That's one area where they have mutual impacts, especially in terms of cybersecurity: in terms of how AI may help, and the dangers and harms to Internet stability. I think that's one area we have to focus on.

But there are other impacts, which require long-term observation of the impact of AI on the Internet and on the core infrastructure of the Internet. Yeah.

>> LUCA BELLI: Let's get to Sandra, and unless there's anyone else with an urgent question or comment, we can then wrap up. Sandra, please.

Can you speak again, because --

>> SANDRA MAHANNAN: Can you hear me?

>> LUCA BELLI: Yes, very well.

>> SANDRA MAHANNAN: I just wanted to react quickly to what the speaker two turns ago mentioned, a concern about erroneous responses, because somehow AI just summarizes the feedback from searches. I totally agree with her; this was one of the points I made earlier about these biases. Whether we like it or not, AI is here to stay.

And these biases are really concerning. They are concerning because, we would agree, there are really bigwigs in the business who get to, I don't want to say control the narrative, but for lack of a better way of expressing it, they do. And the small players, no matter how accurate they are, don't really get reach; access to them is really low, because the bigwigs have occupied the markets, which means that people automatically go there.

And then what happens when decentralization is not really happening? When it's not really decentralized, when the other players in the industry are not reaching people and people don't really have access to them. That's why it's really important, I think, that regulation should come down heavy on the side of the development, the developers, and the models themselves.

>> LUCA BELLI: As we will be kicked out of the room in two minutes, I think it's time to wrap up and to thank the participants for their very thought-provoking comments and presentations.

I think we have illustrated very well the complexity of this issue and also the interest in keeping up this very productive joint venture: to present at the next IGF the result of what could be a very brief report on the elements that can enable an open AI environment, as was also suggested in the chat during the session; our best effort to distill the knowledge shared in this hour and a half (broken audio).

>> OLIVIER CREPIN-LEBLOND: I think the mics are giving up on us.

>> LUCA BELLI: Yes.

>> OLIVIER CREPIN-LEBLOND: Try this one.

>> LUCA BELLI: It's a sign that we have to wrap up. So thank you very much, everyone, and we will do our best effort to put everything we learn today into this report. Thank you very much.

>> OLIVIER CREPIN-LEBLOND: I'll just add: if anyone is interested in joining the coalitions on network neutrality and Core Internet Values, come talk to us and we'll take your name and email address; we'd be happy to have you. Thanks again to all of our panelists; great job. Thank you so much.

>> ALEJANDRO PISANTY: Thank you again and congratulations for the session.

>> WANDA MUNOZ: Thank you.