The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: Thank you very much for being with us.
My name is Olga. I'm from Argentina, from the Internet Governance community, and I am the Dean of the National Defense University of Argentina. I've been invited to moderate this very important session. Thank you for inviting me as moderator; this is a great honour for me. This session is a workshop: Interoperability of Artificial Intelligence Governance: Scope and Mechanisms.
So let me give the scope of interoperability. It is the ability of different systems to communicate and work seamlessly together. This is the concept we all have about interoperability of different systems and machines.
But the definition of interoperability in the 2023 report of the IGF Policy Network on Artificial Intelligence is slightly different. I think it's broader, which is very interesting. The report's definition includes the ways through which different initiatives, including laws, regulations and policies, codes and standards that regulate and govern Artificial Intelligence across the world, could work together in legal, semantic and technical layers to become more effective and impactful.
This reminds me of the definition that was made about Internet Governance. It was a broader definition of Internet Governance, not only the technical identifiers and the technical coordination.
At the same time, the development and uptake of Artificial Intelligence systems are proliferating. We are using them all the time, at pace and across sectors. A concerted effort in governing Artificial Intelligence is to look at the opportunities while addressing the challenges and risks that result from new technologies.
We are all working on different regulations in different countries and regions as Artificial Intelligence becomes increasingly involved in our society. It is critical that global governance frameworks encourage interoperability to promote a safe and secure Artificial Intelligence ecosystem. So finally, interoperability becomes really imperative.
This is why we are here with a distinguished group of panelists that I will introduce now. We have Dr. Yik Chan Chin. She is a professor from Beijing, China, and co-founder of the IGF Policy Network on Artificial Intelligence (PNAI). Thank you for being here.
From remote we have Mr. Poncelet Ileleji, Jokkolabs Banjul, from The Gambia, Africa.
We have Mr. Sam Daws, senior advisor of the Oxford Martin Artificial Intelligence Governance Initiative.
And Dr. Xiao Zhang, director at the China Internet Network Information Center (CNNIC) and deputy director of the China IGF. And Mauricio Gibson is here, Head of International AI Policy at the Department for Science, Innovation and Technology in the United Kingdom.
And we have to deal with this noise and sound thing, but we will manage, don't worry. And we have a discussant who will give her input after the interventions of our panelists: Dr. Neha Mishra from the Geneva Graduate Institute in Switzerland.
We have three policy questions that will be answered by our distinguished panelists. And we have comments from Heramb, who is joining remotely. I will pose the first policy question to our distinguished panelists, which is about understanding the interoperability of AI governance.
So, for the panelists: what is your understanding of interoperability? What are the most important issues that need to be addressed at the global level? And what are the obstacles? I don't know who would like to start responding to this question; I won't put anyone on the spot. Welcome, the floor is yours.
>> DR. YIK CHAN CHIN: Thank you. So I will speak about interoperability of AI governance on behalf of the Policy Network. As Olga mentioned, we take a broad understanding of interoperability. We particularly look at the legal, semantic and technical layers of interoperability, because we identified those as the most important things in terms of interoperability.
So we also look at how laws, regulations and policies, codes and standards, of course from different parts of the world, can work together and address the problems at a global level, and make it more effective and impactful. So in terms of global issues, there are several things we recommend addressing in the short or medium term.
Most important is AI risk categorization and evaluation. I think most of the countries agree on that, but we have different approaches in terms of categorizing risk, and also different mechanisms. The second one we identify is liability, the liability of the AI system. And the third is the risk of AI training data. We know that AI systems depend on the data used to train them, so the risk of training data is an issue we think is important to address globally.
And the next one is technical standardization, the alignment of technical standards,
and then regulatory fragmentation and divergent requirements. So these are the global issues we recommend to focus on. So what are the major obstacles? The first major obstacle we identify is the geopolitical tensions between different powers in the world.
And the second one is the lack of trust among the different countries and regions. The third one is the unequal distribution of AI technology, and the differing maturity of policy making.
So we see that different AI technological powers have different dynamics and governance.
And the last one, about AI interoperability, is that we see regional and national interoperability policies, but they have different principles, values and objectives. I think that's all for me. Thank you.
>> MODERATOR: Thank you very much. That was very interesting, especially following the comments in the opening ceremony about the differences between the global south and the north. Sam, do you want to also tell us about your understanding of interoperability?
>> SAM DAWS: Thank you very much. It's a real pleasure and privilege to be here. I wanted to thank Yik Chan Chin and others for their report on interoperability. Building on her comments I would add two areas we need to additionally focus on. One, we need an interoperability approach to the sustainability of AI.
AI energy demands are set to grow with increasing multimodal inference, with the use of IoT data and with agentic AI. So we need better ways to measure, track and incentivize the energy and water use of data centers and chips, algorithmic efficiency and data sobriety.
Work is already underway on this in the ITU, ISO, IEC and IEEE SA. Those are a lot of acronyms, but the International Energy Agency also acts as a partner, and programmes are taking an approach of mapping the full life cycle of AI from mining through to end-of-life reuse.
We also need international scientific collaboration for AI's positive climate contributions, for example in new materials research, in solar PV and batteries, and in climate and weather modelling through digital twins.
And also, especially looking towards Belém in Brazil and COP30 next year, AI will really help deliver efficiency targets across all industries.
So what are the obstacles in this particular issue area? Well, building on Yik Chan's remarks, geopolitical factors are significant. We have seen U.S. export controls on high-end chips for China, and in return China has imposed restrictions in response.
Countries racing to acquire high-end chips distracts focus from interoperability on sustainable approaches.
The other obstacle is that tracking energy use by grids and companies can be commercially sensitive, so companies don't always volunteer this themselves. And while companies like Google and others have been doing a remarkable job achieving 100x efficiencies in data centers, chips and software design, the overall electricity use of AI continues to rise.
So we need a multistakeholder framework for industry transparency and accountability. Singapore is a great member state example of integrating sustainability into its AI Verify and Model AI Governance frameworks. And lastly I want to touch briefly on cultural interoperability, because it's not talked about enough in AI governance, but we really need cultural interoperability addressed at the global level.
For humanity to flourish it's important that diverse cultures are encoded into AI so we can better use it to improve lives. That includes insights from low-resource languages and also the wisdom of indigenous peoples who have a minimal digital footprint and are not captured by large models trained on the internet.
The trend towards sovereign approaches at a national and regional level, especially in data governance and LLM world views, is I think going to continue. That's not in itself an obstacle to interoperability. The obstacle would be if we had fragmentation into closed loops of culturally informed ideological Generative AI ecosystems, it's a bit of a mouthful, such as a socially conservative BRICS AI alliance, which President Putin announced this week, with an ecosystem around that alongside Western ones, separated by mistrust.
(Audio Difficulties)
>> MODERATOR: You talked about sustainability and how that affects our environment. Because we use technology, but technology has an impact on the environment. So that's very interesting. And also what you mentioned about society and languages.
So I commend you for that comment, and I commend you both for being so respectful of the time in responding to questions. Any comments from anyone? Yes, please, what is interoperability for you?
>> MAURICIO GIBSON: Can you hear me? I will build on those insights, but from a more practical governance perspective.
So, recognizing what people have said, I believe there are always going to be different governments and their interests, which at times will compete. But I think seeing how interoperability can happen is about looking at the broader areas where there is room for cooperation, recognizing and honing in on those particular areas, and also looking at how we can plug the gaps, continue to build on those areas where there are gaps, and work towards that further progression of coordination.
And I think a lot of that is through building those foundations and building blocks of the core principles we are starting to see across different governance work streams. That doesn't need to be harmonisation, but really building on that, because there will be regional and domestic work on that.
Echoing what everyone said here, I think we should also think more broadly about the technological advancement of AI. We are hearing a lot more about not just GenAI but agentic AI and the governance of this, not least because of the nature of who is responsible, building on the liability point there. Keeping up with these challenges in terms of governance is going to be a real battle.
And so from the U.K. perspective, the science behind the most advanced AI, which is progressing at an exponential rate, is a real focus, or has been a real focus, not least with the safety reporting that we have been producing, that our Secretariat has been producing, bringing a lot of scientific evidence on this.
And the state of the science is rapidly evolving, and we have been having to produce a lot of reporting on a regular basis, and even by that point, you know, is it going to be out of date? How can we keep up with that? Understanding the scientific basis is going to be vital to try to overcome this.
I think the other thing, building on what other people said, is capacity building. I think there are differing understandings in different environments, and the digital divide is so significant.
And given the advances in technology, we are supporting policy officials, civil servants, the public sector and everyone to support their AI talent uptake, and also other parts of the world, to help them understand how they can engage in the governance process at the international level and in their own domestic systems too.
And I think a third point supporting that is the sort of clarity and messaging that is needed for different communities across the world. So to support things like sustainability or the cross-cultural exchange of information, how do we land the key points needed to support that interoperability? One element of that is using the different forums. And what we are seeing, however, in the multilateral domain
is that there is still a lot of duplication and the messaging isn't clear. It's not very clear where people should prioritize particular engagement on governance in these different areas. Some people are seeing some things happening in the UN; some people want to see them in other areas. But one obstacle that needs to be overcome is the duplication of some activity, how we can try to manage that and see how things fit together. That's going to be a real challenge, I think, and something we need to work together on going forward.
>> MODERATOR: Thank you, Mauricio. You bring up an interesting point about work at the regional or global level, and that's challenging. And you brought up an interesting point about capacity building. As we know in cybersecurity, cyber defense and cybercrime, and now Artificial Intelligence, we are running short of trained people.
Yik Chan, do you want to add any comments?
>> DR. YIK CHAN CHIN: I want to add to the question my understanding of what interoperability means for the AI system. For me, I would use these words: it should be one ecosystem. I want to make a comparison to the internet.
You know, for the internet, in the past 50 years, as far as I can see, it has been one world, one internet. Why? The digital economy has flourished on it; there are so many applications. But we have something in common.
We divide Internet Governance into layers, at least the technical layer, the logical layer, TCP and IP. And we can connect them; we route by the same protocols. That means even though different countries have different regulations for content or something like that, on the technical layer we obey the same rules.
So that means we can work together in one ecosystem. As an internet user you can use any application. You send an email, you call over VoIP, or you search online or something. You move around seamlessly; you don't feel where it is. So I think for AI systems, at least we should find something through which we can work together in one ecosystem. That's my response to the first question.
And the second one is: what is the priority? What is the most important thing that we should do? I think, actually, because we have different cultures, different development stages, and the priority for each country is not the same for economic growth in different areas, our understanding of AI governance is quite different. And think of the billions of people around the world who have no access to the internet; AI means nothing to them.
We cannot leave them behind. So maybe the most important thing is to sit down with all the questions of AI and find priorities. We can narrow it down. What is the global issue? What is it, maybe, for developing countries and for Africa? We can go one by one. It's not just about AI risk or something like that; they have no AI, so how can we talk about AI risk?
So I think that issue is also very important. And the same question is: what is the obstacle? After we had the discussion, I think trust is the most important issue. AI is built on trust, and it's not limited to geopolitical reasons. We shouldn't have different ecosystems; all these ecosystems are built on trust. So how to build trust, I think, is something to discuss. Thank you.
>> MODERATOR: Thank you very much for that point about trust. As we know, Artificial Intelligence is based on a big amount of data, the capacity to process that data, and the algorithms that work on that information.
So trust, I would say, is a layer overall that gives us the confidence to use this tool. That's a very interesting point from you. Now I want to address a second question to our distinguished panelists, which is: how can different actors address interoperability? And how can we balance regional variations with global approaches? Who would like to start? I don't want to put someone on the spot.
>> DR. YIK CHAN CHIN: At 3:30 we will release this year's report, part of which is about interoperability and also liability, environmental issues and labour issues. So you are welcome to join us in the main hall.
From the PNAI perspective, how can we work together? I think multistakeholder working is very important, and that comes from our own experience, because I am leading the group. We got a lot of input from different sectors and from around the world. It's just surprising how much information and evidence you can collect through the multistakeholder model, because we have the different sectors: private, government and academia.
So that is really impressive for me personally and for the group as well. And secondly, I think a multidisciplinary approach is very important, because it's very complicated to understand the AI system, you know, and how to validate the tests and how to know the security and the safety issues.
So for us it's a multistakeholder and multidisciplinary team. And then regional alignment, that's a global issue, because we have to respect regional and national diversity while at the same time we try to rely on the --
(Audio Difficulties)
They have -- they do not recognize -- actually we need only one
(Audio Difficulties)
We have the UN at the global level, but it's not there to do everything; actually, it's more about coordination, you know. So we respect the regional diversity and the national diversity. First of all, I think we have to make sure the policy needs are met, just as Xiao mentioned. So we have the
(Audio Difficulties)
What happens at the next level comes up from the community and from the national level. And then we have regional initiatives. We have already seen so many regional initiatives, like in Latin America, the African Union and the EU. But what we need to decide in the end is how we coordinate the national and regional levels up to the global level.
So that's what we need to do in terms of interoperability, and there are ways we can do this. The first way: in our report we identified 16 effective mechanisms for global interoperability. For example, the UN as a multistakeholder platform for us to negotiate and communicate, so we can have policy dialogue.
The second way is international collaboration in terms of safety governance. A good example is the institutes set up by governments to test and verify AI safety. Many safety institutes have already been set up in Europe, in Japan and in the U.S., and even in China we have a regional one; we do not have a national one. But this is a good kind of collaboration, you know.
And the third way is technical self-regulation and cooperation. So these are among the 16 mechanisms that help with global interoperability.
And the second type of mechanism we can use is compatibility mechanisms. For example, we are talking about mutual recognition, a mutual recognition approach. We have divergence in terms of regulation, but we can look at mutual recognition. So this is one kind of mechanism.
Then we can rely on the international standard-setting organizations, and they collaborate, all of these bodies and the ITU, they collaborate together as well to align with each other.
Then we can talk about certification, security certificates, and also safety testing or alignment mandates. We can also have harmonisation of AI regulation, or even harmonisation of AI principles. And the last one I wanted to mention is very important in terms of national and regional policy making: when policy makers at the national, domestic and regional levels make policy, they should try to incorporate international standards in their policy making.
So when they are doing policy making, first of all, of course, we have to respect domestic public interest objectives. But at the same time, if they try to align with regional and international standards, this will reduce unnecessary barriers and costs in the end for interoperability. So trying to ensure alignment with global standards increases international regulatory cooperation and reduces unnecessary divergence. I think I will stop there.
>> MODERATOR: Thank you very much. And there is also the difference where mainly developing economies use technology developed by developed economies, and when they have to develop regulations they have to have that in mind. So thank you very much for your comments. Who would like to follow? Sam, please, go ahead.
>> SAM DAWS: Okay. Building on Yik Chan's comments, reflecting diverse cultural approaches is inevitable and can be very positive. But interoperability can be more difficult once nations have enshrined their approaches in law or in negotiated regional agreements. At that point, tools like international crosswalks, among others, remain valuable to determine docking points and clarify taxonomy and language differences. But in the future I think we can do better in two ways. The first is to start earlier, to start earlier in the process, at the time we draft our own national and regional approaches.
Those of you familiar with UN negotiations, negotiating UN resolutions, know that once a region has negotiated a common position, it's very hard to unpick it in the face of criticism or objections from other groups.
So sometimes just being aware of the key concerns of other groups can allow subtle changes in language or in framing, rather than on substance, as we elaborate our own positions, which then aids interoperability of approaches later. I think we can consciously use the four tracks coming out of the UN Global Digital Compact and the High-Level Advisory Body on AI, so we can use them for early exchange through dialogue, through scientific convening and through capacity building.
The other area I thought we could be creative with is using cross-regional forums, so forums that have at least one member state from more than one region, to reduce the siloing of AI approaches.
So let's use cross-regional political and cultural forums at both a member-state and a multistakeholder level. We have the IGF now. Other examples are the Organisation of Islamic Cooperation, the DCO that Saudi Arabia leads, the Digital Forum of Small States that Singapore leads, CESA in Central Asia, the cultural organizations, the CIS, the Organization of Turkic States, RDCJP, and Belgium; all of these have a contribution. And we must not forget the role of the International Science Council and national academies. I think those are vital, especially since two or three years ago the ISC embraced the social sciences as well as the natural sciences.
And I feel strongly that psychologists, economists and social anthropologists have been pouring out insights into how human behaviour can be an obstacle to policy and interoperability approaches, so we need them at the table.
And then lastly, the network of AI safety institutes can play a cross-regional interoperability role, but I would say only if it can broaden its membership and its agenda to widen its relevance to the global south.
>> MODERATOR: Thank you, Sam, and thank you for naming those examples of inter-regional spaces of debate, because I was going to ask you that, but you already mentioned them. I was thinking of something for Latin America.
>> SAM DAWS: I would say pay AloPay.
>> MODERATOR: Thank you for that. And who would like to follow?
>> DR XIAO ZHANG: I'm Xiao. I definitely think the governance should be multilaterally oriented, so I think I'm a little different from the previous two speakers.
But I find something in common. Both multistakeholder and multilateral engagement are very important, but AI governance is very different. I still want to make a comparison with Internet Governance. With the internet, if something happens, it normally does no harm.
But AI is totally different. From the beginning of AI, you know, it can bring risks. It could be comparable to the atomic bomb and we all could be at risk, because it could be used as a weapon in the military or something like this.
So it's totally different from the internet. It must be multilaterally oriented, because it depends on multilateral resources. It's not only a technical problem; it's something legal, and about the understanding of AI, what it is and the harm it could bring. So of course multistakeholder engagement is very important, very important, but multilateral engagement definitely should lead, because governments have the resources and the actions to take.
So I think both of these two sides are very important. And it's different from Internet Governance. Thank you.
>> MODERATOR: I think you bring a very interesting point. So when you say multilateral, you mean governments talking to governments? Is that the idea? Like the United Nations?
And the interaction you mentioned with the multistakeholder space, I think that would be the ideal way to work, because governments have a special role in taking care of economies, the security of the country, the laws and all that environment. So a very interesting point of view. Mauricio, do you want to add something?
>> MAURICIO GIBSON: Yeah, building on that, from the government perspective we can convene and arrange different stakeholders, using that interaction and engaging in these spaces to really understand the issues that are being raised by different stakeholders and help funnel that into action domestically and internationally.
That's a useful convening role we can provide, building again on the stakeholder point. And there is the resource question, the resources that governments have, and what I was saying before about capacity building and the particular role governments can play by using those resources.
So we can point to a U.K.-led AI for development programme, where we have invested almost 80 million in African government programmes and now increasingly in Asia, and a lot of that can go into skills and into compute. That's a clear example of where we can really leverage the resources that we have to support what is going on on the ground, with further action on upskilling and governance as a key component of that too.
Not least, I think, in many areas, and particularly on safety: it's an area where we are trying to use our resources and our experience in convening safety institutes and using the safety summits to really highlight to a wider global audience all the safety components and risks that have been mentioned by my colleagues here.
A second point is the better communication of the key tools that support things like interoperability in the private sector, and the practical examples you can give. We have the U.K.-funded AI Standards Hub, which is an international networking mechanism that can help socialize technical standards across the world and bring together industry and the multistakeholder community to really talk about these particular areas. And I think having those conversations can bring to light, for a wider audience, a lot of the areas that might otherwise come across as difficult to access in the standard-setting community.
We have the AI Management Essentials as well, which is a self-assessment tool so that, if you are a business, you can support assurance and trustworthiness and develop things in line with policy principles that might be of importance, like transparency, accountability, things like this. But then, thinking back to public sector adoption, how can we support and communicate to the public sector ourselves, enhancing the processes that enable them to build uptake of a lot of this too?
And with that, going a bit broader in terms of implementation: you know, we can talk about interoperability in terms of the important principles that we might share, but it's how you help implement that in practice, and I think there's a role for governments to support those mechanisms.
Working with regulators, ensuring that there's the necessary support, guidance and skill for those who are working domestically to look at the international activity and bring it to the domestic level, and translating the things that are happening at the international level, where we are working together, into domestic practice.
One particular example on the more advanced AI front is the work the G7 has been doing on the Hiroshima AI Process, looking at a code of conduct for advanced AI. The OECD is looking at that, monitoring it, keeping it going and doing regular assessments of what is going on to help with obligations. And then, I guess, also how do we strengthen the foundational principles, looking at what we were just talking about, reinforcing it, and it is important to bring to light where the overlaps are with other areas. So I will give a practical example of recent engagement: the OECD-African Union dialogue. The second one took place in Cairo a month ago.
This was a really positive space where there was workshopping on an African charter for trustworthy AI. What was looked at was a range of different governance mechanisms and tools, including the OECD tools on interoperability and the UNESCO recommendations, bringing all of these together and looking at how we can draw on them and on new work happening in the African environment. And we want to continue with that work and help support it.
So strengthening what is out there, bringing those things together, helping that communication and using the resources that we have to help support it is a really key thing. And finally, on the second part of your question about regional disparities and bringing them together in the global environment, how do we do that and get the balance right? The OECD-African Union dialogue is a combination of two regional activities; bringing them together is a really helpful example.
Another example is that this year we signed the Council of Europe AI convention. This was interesting because it brought together a global grouping. And even with that there were a few challenges in really getting agreement on some of the core principles and the real detail. But we got there in the end because we were able to keep the language broad and flexible enough to enable different global regulatory regimes to engage with it.
And I think that's the key thing: while it's a legally binding treaty, enabling space in the text is going to be key to getting that balance. And I think that's something we have to recognize as we move toward interoperability and toward regional discussions in this space.
>> MODERATOR: Thank you for these good examples of cooperation. And I love the standards hub; I like that concept very much. The internet is global and based on standards, as you were mentioning at the beginning.
So I think agreement on global standards is the key to everything. Thank you, Mauricio. So now I will share the third question. And thank you all for being respectful of the time; some of you have taken more and some less, so it's a good balance among all of you.
So the last question for you is about the role of the United Nations in global Artificial Intelligence governance. What role should the United Nations play in tackling the international governance part? Sam?
>> SAM DAWS: I just noticed that Mr. Poncelet came off the screen. It would be good to get an African comment, so please make sure he can get back in.
First and foremost, the UN can help build trust for interoperability. This is very much building on Xiao's point.
And trust is not a fixed constant. It's based on regular interaction, people to people, which is why the IGF is of such value. It's based on attitude; we need to approach this issue with empathy, approaching and getting to know the other with curiosity.
And trust is built on experience, so cooperation builds trust. We can begin with the global implementation of the two UN resolutions agreed by consensus this year: one on responsible AI proposed by the U.S. and co-sponsored by China, and the other proposed by China on AI capacity building, co-sponsored by the United States. Both of those are guided by universally agreed principles on ethical AI. So I think that is our foundation.
Then I would suggest we focus on AI capacity building in areas where cooperation has already shown it can be advanced despite geopolitical headwinds. These are areas like food security, biodiversity, climate change, health emergency prevention, macroeconomic stabilization, counterterrorism and crime, and data for the implementation of the UN Sustainable Development Goals.
>> SAM DAWS: The GDC and the High-Level Advisory Body on AI have given us a good road map. It's clear the role for the UN is not, at least not for now, to regulate AI. Nor is it to enforce compliance, though that may come over time.
But the UN Secretary-General can provide more leadership on the need for inclusive AI. The ITU and other agencies, DESA and UNDP, can bridge the divide through capacity building. The UN can be a source of scientific insights and expert data to guide decision making, and can convene policy dialogue and standard setting.
Lastly, and this is again trying to be a bit creative, I think the UN should look at the success of common security organizations in the peace and security domain and consider whether those organizations could also play a role.
Existing common security organizations have established trust in the peace and security and economic domains. They could collaborate also in areas where AI can support shared objectives and knowledge exchange.
So I've got a different set of acronyms here from the first regional ones: the OSCE, DESA, the African Union, the EU, and in Latin America the community and the forum. These are all examples of bodies that have shown the ability to build confidence through diplomatic engagement, which can be applicable here.
And finally, we've seen the emergence in the peace and security of AI space of very good initiatives by the Netherlands and South Korea, co-sponsored by Switzerland, Kenya and others, on responsible AI in the military domain. Those have been very good, and they have included China, which is really important.
We have also seen successful U.S.-China bilateral talks on not using AI in nuclear guidance systems. But I think we need a direction of travel back to the UN for AI peace and security, with the Security Council being more seized of AI and international security. It has already done some work, alongside the UN's work on nonproliferation and disarmament in Geneva and Vienna. Thank you.
>> MODERATOR: Thanks to you, Sam. Thank you for summarizing what has been happening in the UN and also your suggestion about the future.
That's very interesting. Who would like to follow the comments from Sam? Yes, please -- the floor is yours.
>> DR. YIK CHAN CHIN: Thank you, Sam; we know he's an expert on the UN, so thank you for your very insightful comments. From our perspective, as we mentioned before, international collaboration is key, and the UN resolutions are one example of how to do international collaboration. And second, it's very important -- actually the UN has a function
(Audio Difficulties)
To inform the objective --
(Audio Difficulties)
So for example the Artificial Intelligence resolutions generated in the General Assembly. So these are two functions. And the third one --
(Audio Difficulties)
Internet Governance, especially the kind of policy --
(Audio Difficulties)
To facilitate change. And to understand the policy and exclusion, the best
(Audio Difficulties)
It's very important why people come to IGF
(Audio Difficulties)
It's very important to understand each other, to build up personal connections. So I think it's the most important
(Audio Difficulties)
They should use the IGF as a global AI dialogue forum. But at the same time we need to also strengthen the IGF's capacity in terms of financial support, technical support and resources.
(Audio Difficulties)
I really want to give a personal example. So I'm part of
(Audio Difficulties)
I took over as a chair and then
(Audio Difficulties)
A multistakeholder, like other sectors like private
(Audio Difficulties)
Give them that policy takes a role.
(Audio Difficulties)
Join in the consultation and keep them involved during the process. Because I was invited from China --
(Audio Difficulties)
So the UN is doing -- in terms of how do they incorporate multistakeholder dynamics --
(Audio Difficulties)
This is a very positive focus. It's going to set a precedent for other agencies and processes. Thank you.
>> MODERATOR: Thank you very much, and apologies for the audio issues. Mauricio?
>> MAURICIO GIBSON: Thanks for the floor and thanks to the colleagues. There is a really interesting opportunity for the UN, and from our perspective there is a real opportunity here with the conclusion of the Global Digital Compact that has been mentioned.
There's an opportunity, I think, for us to really capture what is presented by the UN's convening power, with every country and a range of stakeholders coming together through environments like this to highlight the potential of cross-cultural exchange and sharing and to build that mutual understanding. And really highlighting and reinforcing the points you were making about building that understanding is fundamental to the value of the UN environment.
And I think the one thing to clarify is that, because there are so many different bodies and UN agencies, it's really important to reinforce coordination, the role for the UN being not to duplicate but to highlight where it adds different value depending on each agency and the activity that is going on.
(Audio Difficulties)
More widely on AI governance, we are seeing more interest across different agencies in playing a bigger role. But I think what we need is coordination, an understanding of what exactly is to be delivered on the ground, giving that practical benefit and moving beyond principles, interpretation and coordination to actually supporting nations on the ground and delivering actual benefits to the communities feeling the digital divide as well.
One of the ways of delivering on that, looking at what you mentioned, Sam, is the global dialogue on AI governance initiative, which has been proposed through the Global Digital Compact and is just about to launch into negotiations about its modalities.
I think it's important we really highlight the sharing of information in these forums and build information and understanding like this, highlighting the different initiatives and the point that these are the actions we are doing, better understanding that and bringing it together. And there is a role, you know, for the IGF to be considered. It's interesting you mentioned that; we need to consider these in the next stages of thinking about it.
And on top of that, it's important that we don't create too many new things. It's meant to sit in the margins of existing
(Audio Difficulties)
activities: the ITU and the Global Forum on the Ethics of AI as well. How can we work together on these? And then you also mentioned the scientific panel on AI. For the U.K. there's interest in this because we have produced safety reporting on advanced AI risks from an expert panel of leading scientists and researchers, and there's a role for this, and a role for the research that is out there, bringing it to a wider audience so we can support inclusivity once you have that understanding of the science.
But again, it's about ensuring these are clear and grounded, that the scope is clear and the mandates are clear, so we don't get into a situation where things are muddied. And that is also reflective of what a lot of people here are talking about with the process; there is a consideration of the role of AI in this process, but we need to make sure there's active coordination so it delivers for people through AI governance.
And just a final thing to underpin a lot of this, as mentioned before: there are differing approaches, and technological advancement is moving so quickly. It's vital that we stay alert to the need for agility in AI governance and flexible approaches that can adapt to different developments in the world. You know, I think at times --
(Audio Difficulties)
The system is not quick-moving, but maybe we need to recognize that we have to keep up with the advances in technology, and that's a fundamental thing as well.
>> MODERATOR: Thank you, Mauricio. I like the point about the United Nations being a hub for spreading information to the countries that are a part of it. Xiao, the floor is yours.
>> DR. XIAO ZHANG: I think
(Audio Difficulties)
Some successful examples like climate and security
(Audio Difficulties)
It's not a single thing. It should be all the digital conversations in the system. So I think it provides a fast pace for us to --
(Audio Difficulties)
They can come here. But I don't have the energy and resources to go anywhere. So I can come here once a year to IGF, for example.
(Audio Difficulties)
I agree that IGF could
(Audio Difficulties)
Thank you.
>> MODERATOR: Thank you very much for this sort of combination of the United Nations and the IGF. Interesting.
Poncelet, we have not forgotten you. I will give the floor to you now that the speakers in the room have answered the questions. Would you like to comment on the three questions we have been discussing: what interoperability is, what the role of the United Nations is, and how different actors can interact to work on these three very important issues? Welcome. The floor is yours.
>> PONCELET ILELEJI: Thank you very much, Olga, Yik Chan, Mauricio and Sam. I would like to say, first and foremost, that all my colleagues and speakers before me, speaking from an AI perspective, spoke about the three key pillars of what we are discussing: in terms of tools, in terms of interaction and interconnection, and in terms of competition and cooperation. Coming from a global south perspective, I would like to focus on the communication and cooperation part. I have to be a little bit biased here.
And I will say one thing that guides me in this: for us to remember that, at the end of the day, we had in September the Governing AI for Humanity report by the UN AI Advisory Body. One of its key recommendations was the setting up of an international scientific panel on AI, which should be multidisciplinary.
We also have issues that some have talked about, and one that was very key for me, related to the advisory report, deals with promoting research which will help achieve the SDGs. When we are looking at poverty, these are things for which AI can be used as an enabler.
We have to remember, at the end of the day, we want people to have inclusion. In the Policy Network on AI we try to look at things from their perspective. As much as possible we have various stakeholders, but we try to look at the constituencies we come from. And that's why aligning it with all the regional initiatives, what the African Union and the EU are doing, is very important.
But I think if AI can make a difference to us achieving the SDGs, we will be going a long way, building on trust and equity. Thank you.
>> MODERATOR: Thank you very much, Poncelet, especially about trust and the contributions of countries and international organizations.
I would like to give the floor now to Neha. She has been patiently listening to what our guests have been saying. Neha, what are your comments about the debate and exchange of ideas we have been having?
>> DR. NEHA MISHRA: Thank you very much, Olga. I join the others in congratulating the PNAI for the report, and I'm delighted to be part of this panel. The discussion has covered a great diversity of views.
I wanted to lead with ideas that I thought were common through the discussions. The first thing I found very interesting is the different dimensions of interoperability that the different speakers mentioned. In addition to the technical, legal and semantic interoperability which was discussed, cultural interoperability and sustainability issues were brought in.
I think it was quite interesting when some of the governmental perspectives were shared, particularly how to navigate the different interests of different governments to figure out an interoperability framework that might be feasible. And from a practical implementation perspective, questions might be relevant in terms of thinking about whether we need a more modular approach, whether it's something to be tested in specific sectors, how incremental it should be, and what the prospects of a multistakeholder approach are.
Because one thing I thought was also common through the discussion is that multistakeholderism and multilateralism need to align with each other, and there can be some points of tension that need to be resolved.
I also found it very interesting that a lot of the speakers, including Poncelet, brought this idea of the development divide, the AI divide, and there was a lot of very, very encouraging discussion on how to bridge the different gaps.
I think one perspective I would like to add is that, while it is great to think of capacity building initiatives and of more meaningful international collaboration, one should also be conscious about the limits of interoperability, in the sense that in certain scenarios developing countries and least developed countries may not be able to participate in many of the interoperability projects.
So to that extent, it is important to assess the areas in which we are looking for interoperability and how representative those discussions are. And while I fully encourage open dialogues and more sustained technical and capacity building initiatives, this is an incremental, slow process.
And developing countries should not lose their autonomy to decide how they want to develop their AI frameworks, given it can have very specific influences across different communities.
And that's why it was very important at the beginning to highlight the cultural aspect, or the human layer of interoperability, in the discussions.
I also found it very interesting that we discussed so many different tools, mechanisms, stakeholders and organizations, including at the global level at the UN, that can contribute to different aspects of interoperability. But at the same time I agree with, I think, Mauricio, that it's important to streamline these efforts and not duplicate them.
From a global south perspective the question is very practical: if there are multiple forums, that can create competition between the different forums. In that sense the UN still has an important role as the umbrella organization or framework organization where the high level values can be developed.
But at the same time, I think it's inevitable, and that's why it was very helpful that Sam mentioned so many different examples, both inter-regional and different kinds of trans-national policy networks, and I think Mauricio mentioned how the private sector could be involved.
Because between these high level principles and achieving them in practice there are many, many different stakeholders, including the private sector, academics, engineers and technical bodies, different cultural groups and different communities, and really bringing them together is not an easy task. So it was quite helpful to have that overarching perspective.
One last point I would like to mention, and this is a question we often think of, even from my disciplinary training as an international lawyer: how multilateralism is changing in the current world, including in the context of AI interoperability. Especially because the development of the technology is not necessarily always state driven but also driven by a variety of organizations and standards development bodies, I think finding better modalities of engagement between multistakeholder bodies, trans-national regulatory bodies and multilateral bodies is important.
And I don't think it's going to be a perfect process. I think it's about continuing efforts and figuring out which tension points and geopolitical conflicts are simply not resolvable and which can be resolved.
And it was great to see many examples discussed where, despite all the geopolitical and developmental differences, there are common points of consensus and coordination that one can see at the UN level or at other international and regional bodies.
I will end my comments here. Thank you very much.
>> MODERATOR: Thank you, Neha, for such a concise and complete summary of what has been discussed. And I like the concept you mentioned that this is a process.
I think the journey is the destination we are going through. We have been talking about Internet Governance for almost 20 years so far, so maybe we should think about how many years we are going to talk about Artificial Intelligence.
I will give the floor to the nice audience that has been patiently waiting for the opportunity to talk. Please grab a mic, introduce yourself and tell us your name and your organization.
>> QUESTION: Thank you. Can you hear me? Thank you very much for the very engaging and enlightening discussion. I'm the Chief Information Officer of the United Nations pension fund, and I'm involved in the IGF, where I play several roles. One of my roles is to lead the blockchain organization.
The comment and question I would like to pose to the speakers is that I think it's time to acknowledge that AI does not work in isolation; there is a convergence of AI with many other technologies. And I think there is an opportunity, for example, to see how the convergence between AI and blockchain can indeed address many of the issues presented by the interoperability needs.
There were many references to trust, and I think blockchain can indeed provide that common layer of trust by demonstrating that there is a trusted data source, because on blockchain we can store datasets that can be audited, verified and, in a manner, validated as an input to AI. And there is a synergy between the two technologies not only in one way but both ways: think of AI used to calculate and predict the volume of transactions; blockchains are usually considered to be slow and not performant, so AI can help with blockchain scalability issues. So that's the point I wanted to bring to your attention. Thank you.
>> MODERATOR: Thank you very much for your comments. Any other questions or comments? Do we have an online question?
>> REMOTE MODERATOR: Yes, we have a number of questions and comments from the online audience.
A bunch of you touched on how multistakeholder bodies and the UN are good at information sharing but are ultimately terribly slow. So what do you want to see in terms of improvements to address this kind of rapid evolution of technology and to keep up with that pace?
>> MAURICIO GIBSON: Yeah, it's an interesting question. The challenge is that UN reform is a long drawn-out process. I think we have to think about these new stages of AI governance as we move to the next chapter of implementing the Global Digital Compact and consider what core priorities we need to work on to achieve interoperability.
For example, with the scientific panel we need to learn lessons and draw on the experiences of previous scientific panels that have been developed. Some have taken a lot longer than others; some have had different parallel negotiating processes.
And I think there's value in connecting with more ad hoc multistakeholder engagement. For example, the U.K. runs the international safety reporting with a Secretariat outside the UN. If we can draw on the experiences of existing initiatives, that doesn't require a new UN body that takes a while to keep up with the technology. That would be a much better way to be nimble and agile in responding to advances in AI, and that can apply to other areas as well.
>> MODERATOR: Another question?
>> REMOTE MODERATOR: So another question we had from the online audience: how do we handle AI and data flows across countries, for example for countries like India who want to develop their own approaches? How do you balance that with the broader conversation around inclusion and having a unified approach?
>> MODERATOR: Who would like to take the question? Sam? Go ahead. The floor is yours.
>> SAM DAWS: So just on blockchain, absolutely. Indelible ledgers are, I think, a real tool for increasing accountability in the future, and blockchain is one such thing. And it's great to hear that someone involved in the UN pension fund is thinking about it for that purpose.
On the speed of technology, we have an exponential increase, and not just the UN but governments are finding it very hard to actually set policies in response to it. The UN is capable of very rapid response.
I worked in Kofi Annan's office in the early 2000s, and we operated a 24/7 response capability for conflict work. So if you look at the IAEA, the work of the World Food Programme in emergency situations, the WHO and Ebola outbreaks and so on, the UN can be remarkably quick to come up to speed, but member states must want that capability.
Member states have failed again and again to provide international organisations with capability in strategic forecasting and these sorts of areas. So I think there's that. And where the UN takes a longer time, that is often valuable for actually growing understanding of cultural issues over time.
So I think there is a role for the UN to be slow and steady and a role for the UN to be fast. I think it's only after we have a major AI accident, God forbid, that we are likely to see agreement that the UN can have the capacity for enforcement in the AI safety and security realm. In the meantime, the UN will rely on each nation state's intelligence, military, foreign affairs and other resources to be able to monitor threats and challenges in realtime.
And lastly, on data sovereignty, the idea has been floated of data embassies, where you can have the data of your own country stored somewhere else where they have renewable power to run the data center quite cheaply, but that data is inviolable in the same way that diplomats are inviolable. So you can have these little data embassies around the world. I think that's an interesting concept that could be developed further.
>> DR. YIK CHAN CHIN: Yeah, on blockchain, in China companies have used blockchain for security. On the UN's role, as Mauricio said, there are so many overlapping things, so they need to be streamlined, with a clear definition of each one's duties, to reduce the overlapping.
And I agree with Sam. Because I have personally worked in the process, I think the UN has a big capacity to reach out around the world and collect information from multistakeholders and from countries, you know. But negotiation between the states is so slow; in my personal experience, very slow. So how can we speed up the negotiation process? I think that's the key.
In terms of the sovereignty issue, AI sovereignty, my colleague published a paper on that. I think the thing is, as I said, we need to figure out what should be solved internationally, globally, and what should be left to the individual country, to its own jurisdiction. And this has to be discussed; this is a process, and we have to reach an agreement on that.
Just like the internet: we have a core infrastructure which is a global public good, but content moderation is left to the national jurisdictions. So we need to have common jurisdiction and at the same time national jurisdictions. Thank you.
>> DR XIAO ZHANG: I want to respond to the question about the United Nations. I think the UN is not perfect, it has a lot of limitations, but it's better than nothing. And I think there is always a balance between efficiency and fairness, always a balancing. So what we should do, maybe, is call on leadership in the AI era, because the leadership's awareness of what is happening is so important, and there should be willingness and engagement, or something like this.
And also I would suggest we find some priorities in the UN agenda, following the GDC, priorities we can focus on step by step. I think the UN can still play a very important role in the AI era.
And besides, I think engaging in the IGF and the multistakeholder approach -- I think we can strengthen this approach because we have the national and regional branches of the IGF, which is very, very important, and with that we can support the policy makers on AI. So that's my point, and I still think the UN and the IGF can play an important role in this era.
>> MODERATOR: Thank you very much. Are there any other questions from the audience? From online? Heramb? No. So I will give the floor for one last comment from each of the panelists -- oh, we have a question online. Can you read it?
>> REMOTE MODERATOR: Yes, so we are asked: in which specific forum do you see AI governance coalescing? Especially as we think about the duplication between all of these different forums and the potential geopolitical tensions or baggage which might come with certain forums?
>> MODERATOR: Would you like to take that? You're the expert, Sam.
>> SAM DAWS: I'm not the expert; the expertise is all here. I would say that the UN, being a treaty-based universal body, is the go-to for where we can implement AI governance
(Audio Difficulties)
And I think we are going to see wonderful synergies within and across those four tracks going forward. And I hope we can then bring all the valuable regional, minilateral and national approaches into that.
But the UN is only as strong as the willingness of its member states to cooperate. So the UN is great at the level of principles, but as I said, I don't think there's appetite among member states to get into regulation and enforcement. So we will need interoperability, which I think is one of the purposes of this panel, given that reality.
>> MODERATOR: Thank you. Thank you, Sam.
>> DR. YIK CHAN CHIN: I actually agree with Sam. The fundamental reason is that in the UN every country has one vote. No matter whether you are a small, medium or strong nation, each country has one vote, and this gives the UN its fundamental legitimacy.
You know, we have that effect. So we support the UN as the focal point for AI governance or dialogue. But enforcement, as Sam said, depends on whether the stakeholders each give power to the UN, at least in terms of the safety and security issues. Maybe we can give more power to the UN for enforcement. Thank you.
>> DR XIAO ZHANG: I actually agree with Yik.
>> MODERATOR: Thank you. Any comments from Neha or Poncelet?
>> PONCELET ILELEJI: I agree with my colleagues in terms of interoperability and the good practices work that was led by my colleague Xiao, who did a fantastic job on that; it covers a lot of ground.
And no matter what happens, within interoperability we have to look at the public interest, and inclusivity matters. That will be my closing remark. Thank you very much.
>> MODERATOR: Thank you very much. Neha any comments?
>> DR. NEHA MISHRA: One thing we haven't spoken about at all, but I wanted to add it to the mix: increasingly, digital agreements are looking at interoperability issues and trying to find synergies in terms of interoperability, and that is something I want to add because we haven't discussed it at all.
And I think there are prospects, especially at the regional level or between like-minded countries that sign these agreements. Thank you.
>> MODERATOR: Thank you, Neha; thank you, Poncelet. We have 4 minutes. Do we have more comments from the audience? Any other questions? From online? Okay. Last comments in the 3 or 4 minutes that we have?
I cannot read -- 5 minutes. Okay, I think we have had a very interesting session. Thank you all very much. Thank you Yik Chan, Xiao, Neha, Mauricio, Sam and Poncelet. Thank you for being a part of this important conversation, and thank you to the remote moderator --
>> I propose a group picture.
>> MODERATOR: Yes, that's very important. Now we take a pic.
>> Very quickly, I shared the link to the interoperability report that was mentioned by the speakers. We also have, for those in person, the PNAI special session, which starts in, I believe, about 20 minutes, so we are looking forward to seeing you there. And if you have more questions or you want to know more about the PNAI work, please feel free to join. Thank you.
>> MODERATOR: Thank you for that. I participated -- I was one of the leaders of the PNAI work on labour issues, so thank you for allowing me to do that as well. Now let's get a photo. Let's do the picture. Thank you all very much.