IGF 2024 - Day 2 - Plenary - AI Governance We Want - Call to Action: Liability, Interoperability, Sustainability & Labour (Policy Network on Artificial Intelligence)

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Good afternoon.  This is a very weird way of doing a main session.  Bear with us as we get accustomed to the lights and the huge room and everything else.  Welcome to the main session of the IGF Policy Network on AI.  We have about one hour and 15 minutes to go through the main outcomes of work that has been happening for about a year, and before I introduce our guests and go into the debates, I'm going to ask Amrita to come on stage and tell us about the work that has been going on this year behind the Policy Network on AI. 

>> AMRITA CHOUDHURY:  Good afternoon, everyone, and thank you for coming.  I agree with Sorina, the room is too large.  The audience is too far away, but thank you for coming to this Policy Network on AI's main session.

Just to give you some background, the Policy Network on AI originated from the 2022 IGF held in Addis Ababa, where the community thought that there should be a dedicated Policy Network which works on AI issues, especially those related to governance, with a focus on the Global South.

And so in the first year, that is last year, we produced a report, which you can go and see on the PNAI website, and this is a multistakeholder group which actually decides what is going to be discussed, how it is going to happen and how the report is formed.

We have a few of the community members also sitting here, and this year we had four subgroups: interoperability, sustainability, liability and labor-related issues.

Some of the community members have been very active; of course, all community members have worked in their capacities and they are all volunteers, but some names which I would like to mention are Caroline, Shamira, Asharaf, (Listing names).  They were great leaders of the various subgroups.  We also thank all of the members, volunteers, proofreaders, and our consultant, Mikey, who is working behind the scenes.  I think she is sitting there, and our MAG coordinator, who is also sitting there, for all of the hard work which has been put in.

If you want to see the report, it is there online, and if you are in the Zoom room, it will be put into the chat, and I think Sorina has something planned.  With that I will pass it to Sorina and our panelists.

>> SORINA TELEANU: Thank you so much, Amrita. I will get closer to you because that feels a bit odd.  We heard about the work of the Policy Network on AI and I’m sure you have heard lots of talks about AI these days.

We are also doing reporting, and you can probably guess that the main word over the past two days at the IGF has been, obviously, AI.  So it's being talked about quite a lot.  And it is in this context that we will be trying to unpack some of the discussions around artificial intelligence, more specifically around AI governance, with our esteemed guests, whom I'm going to introduce briefly.  But before that, let me tell you a few words about the report.  Amrita mentioned it is available online.  It is the result of a year-long process, so I invite you to take a look at the summary.

The report covers four main areas: one is liability as a policy lever in AI governance, the second is environmental sustainability within the GenerativeAI value chain, the third is interoperability, legal, technical and data-related, and the final area covered in the report is the implications of AI for labor.

So now I'm going to ask the obvious question.  Has anyone here even tried to open the report before joining this session?  Am I seeing a hand?  I'm seeing a few hands.  Excellent.  Thank you for doing that.  I also have some, I hope, good news for you here and also for our colleagues who have been working so long on this report.

Our attention spans, you know, are kind of limited these days, and reading a 100-something-page report may not be the first thing we want to do.  So I have a gift for you, and that's an AI assistant.

We talk about AI; let's also walk the talk a bit.  So what my colleagues at DiploFoundation have been doing is to build an AI assistant based solely on the report, so you can go online, interact with the AI assistant and ask questions about the report and its recommendations.
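[Editor's note: the assistant described above is, in general terms, a chatbot grounded in a single document.  Below is a minimal sketch of the retrieval-augmented generation pattern such a tool typically follows; the client library, model names and file name are illustrative assumptions, not a description of DiploFoundation's actual implementation.]

    # A minimal sketch of retrieval-augmented question answering over a single
    # report.  Assumptions: the report is available locally as plain text, an
    # OpenAI-compatible API key is configured, and the model and file names
    # below are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def chunk(text, size=1500, overlap=200):
        # Split the report into overlapping chunks so retrieved passages keep context.
        step = size - overlap
        return [text[i:i + size] for i in range(0, len(text), step)]

    def embed(texts):
        # Embed a list of texts into vectors for similarity search.
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return [d.embedding for d in resp.data]

    def cosine(a, b):
        # Cosine similarity between two vectors.
        dot = sum(x * y for x, y in zip(a, b))
        return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

    def answer(question, chunks, chunk_vecs, k=4):
        # Retrieve the k report chunks most similar to the question, then ask
        # the model to answer using only those excerpts.
        qv = embed([question])[0]
        ranked = sorted(zip(chunk_vecs, chunks), key=lambda p: cosine(p[0], qv), reverse=True)
        context = "\n---\n".join(c for _, c in ranked[:k])
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer only from the report excerpts provided. "
                            "If the answer is not in them, say so."},
                {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content

    report = open("pnai_report_2024.txt").read()  # hypothetical file name
    report_chunks = chunk(report)
    report_vecs = embed(report_chunks)
    print(answer("What does the report recommend on liability?", report_chunks, report_vecs))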

We are going to share the link and you can access it during the conversation as well.  I'm pretty sure colleagues who have been working on the report will be looking forward to hearing your feedback as well on what is written there.

Let me turn back to our guests and introduce them briefly.  The plan for the session is to hear a bit from them about the four main areas of the report, and also to hear how they see the recommendations of the report and where they think these recommendations could be going moving forward, so they actually have an impact in the real world.

And then we do hope to have a dialogue.  Although this room may not be inviting for the kind of dialogue we are hoping for, I will be looking at you, and I hope there will be a few raised hands in the room.

So let me do what I have been promising for quite a while.  In no particular order, we have Jimena Viveros, Managing Director and CEO of IQuilibriumAI, who was also involved in another excellent report that I encourage you to take a look at if you haven't yet.  We have online, Meena Lysko.  Thank you, Meena, for joining us, Founder and Director of Move Beyond Consulting and co-director of Maritime EmpowerHer. 

Also online, Anita Gurumurthy, Executive Director of IT For Change.  Thank you.  Coming back to the room, we have Yves Iradukunda, Permanent Secretary, Ministry of ICT and Innovation of Rwanda.  Thank you for joining.  Brando Benifei, Member of the European Parliament and co-rapporteur for probably the most famous piece of legislation on AI at the moment, the EU AI Act.  And Mutaz Ghuni, Assistant Deputy Minister for Digital Enablement, Ministry of Communications and Information Technology of Saudi Arabia.  Thank you for hosting us.  We also have an online moderator, Mohd Asyraf Zulkifley, who will be giving us feedback from the online room.

So in no particular order, I'm going to invite our guests to reflect on a section of the report, try to look also at the recommendations if possible, and tell us how they see the recommendations moving forward.  I will do a random pick.  Anita, would you like to start?

>> ANITA GURUMURTHY: Sure.  I can do that.  I want to commend the report and its four focus areas.  Those come with a lot of insights and also reflect the state of our analysis, especially on the crucial but often neglected areas of environment and labor.

Also, I think it takes up two very, very difficult areas.  One was the idea of liability and the other is the area of interoperability.  I will focus on these two because I would like to really zoom in on what I think we should be looking at in this domain.

What would be interesting and useful is for the report to enlarge its remit in terms of liability rules, which should apply to both producers and operators of systems, because a fairly invested level of care is needed in designing, testing and deploying these solutions.  And we need to understand that while producers control a product's safety features and look at how interfaces between the product and its operator can be improved, operators matter too.  Let's take the whole context of social welfare systems, or the Government deploying such systems.

So in that case, the operator of the system is also implicated in decision-making around the circumstances in which and where these systems are put to use.  These are real-world situations.  It's important that operators also become liable and bear some of the associated costs when risks become actual harms.

So that's one thing.  The second thing is a particular thought that I have around the training of the judiciary, the training needed for lawmakers, policymakers, et cetera.  The elephant in the room cannot be disregarded, and that is really the whole absence of a global space to make certain decisions.

We are particularly concerned not only about the opacity of algorithms, but about the opacity of cross-border value chains, which, in trade for instance, gets compounded because of trade secret protections.  So trade secret claims over AI can become an obfuscating mechanism and limit disclosure of information about such systems.

I want to draw your attention to a recent paper from CIGI in Canada about a landmark case involving Lyft and Uber, in which the Washington Supreme Court ruled that the reports in question, maintained as trade secrets by Lyft and Uber, qualify as public records and, in the public interest, have to be put out.  So we have to look at this very carefully.

The other thing I want to say is that the recommendations in the environment section could also look at very useful concepts coming from international environmental law, such as the Biodiversity Convention's principle of common but differentiated responsibilities, because the financing that is needed for AI infrastructures will require us to adopt a gradient approach.

Some countries are already powerful and some not.  So that's very important.  I would also like to focus a little bit, maybe one minute or a minute and a half, on the vital distinction between interoperability as a technical idea and interoperability as a legal idea.

I think if we look at interoperability, sometimes while calling for this important principle, it's like openness: we have to be careful about who we are making something open for, and whether there is a public interest underlying such openness.

So interoperability can often enable systematic exploitation of creators' labor.  Oftentimes, if we don't have guardrails, the largest firms tend to cannibalize innovation.  So I would like to conclude by saying that we should look at technical interoperability and policy sovereignty not as things that are polarized, but work towards a framework in which many countries can participate in global AI standards.

My last comment would be a fleeting remark about the wonderful chapter on labor, which could perhaps do with one addition about the idea of cross-border supply chains in which labor in the Global South is implicated.  While guaranteeing labor rights, we need to understand that working conditions in the Global South include subcontracting, and, therefore, transnational corporations must also be responsible in some way when they outsource AI labor chains to third parties or subcontractors.

So that we are actually looking at responsibility in the truest sense of the term.  I will stop here.  Thank you.

>> MODERATOR: We are already adding keywords to the ones we have in the four main sections of the report.  I have two on the list, transparency and responsibility.  I will be adding more, and at the end we will see what the keywords of the debate were.

So moving from the Global South to the Global North, I'm going to invite Brando Benifei to provide reflections, because they relate to what Anita has been talking about: interoperability and labor.

>> BRANDO BENIFEI: Thank you very much.  First of all, I'm very happy to be able to talk in this very important panel because clearly on AI we need to build global cooperation, global governance, and we need to examine together what are the challenges.

And in fact, the impact on labor and the opportunities of having interoperable technological development around AI are some of those challenges.

In fact, I think that the choice we have made, which was a debated choice, it was not obvious, to identify the use of artificial intelligence in the workplace as one of the sensitive use cases regulated in the AI Act, to try to build safety and safeguards for workers, for those that are impacted by AI in the place where they work, is one important direction.

And also, from a larger policy point of view, clearly the impact of AI on the labor market is already very significant.  So we need to build common strategies to manage the change in how the workforce will be composed.

In fact, I think we could compare AI with the impact it is already showing, considering that we are only two years into the GenerativeAI revolution, to some extent we can call it that.  It's only two years since it reached the general public.

And we will see what will happen in a short time after.  So the impact is already strong.  We need to consider the change that is happening, like when electricity was introduced.  Sometimes I hear it's like with the Internet.  No.  Because the Internet is not as pervasive as AI can be.

AI can change every workplace, every dynamic of labor.  It's like the invention of electricity.  It's like the use of steam in the development of pre-industrial automatic processes.

We can look at that with that eye, I would say.  And then that's why we need global governance.  We need rules because the impact on our societies is in fact even larger, not just on labor.

But obviously, I say that as one who negotiated a regulation that dealt with market rules: we need to build a set of policies, fiscal policies, budgetary policies, permanent lifelong learning policies, that are able to deal with these changes.  And I really believe that we need to build common standards, common definitions.  We are working on that in various international fora so that we can have more interoperability.

In fact, you know that the U.S. will be leading on pushing against those that limit interoperability.  One other legislative Act of the EU, the Digital Markets Act, is also targeted at increasing interoperability.

And we think that this is crucial if we want our different parts of the world to work together and to find solutions between our different businesses, so that our AIs can cooperate and work together, not sit in different silos, separated.  I don't think that would be good for our economies, or for global understanding.

We need AI to also be respectful of different traditions, different histories.  I say that because, given the dynamics of how the training of AI happens, we risk instead having a very limited cultural scope, and I say that from Europe.  So it could apply even more to other parts of the world, I would say.

So I think these are some of the challenges.  I strongly believe that we need to combine, and I conclude on this point, two different efforts.  On one hand, domestic policy, in the sense that we need to have our own rules on how we deal with AI entering into our societies, and there can be different models, there will be different models, but we can build some common understanding.

For example, and this applies also to the labor topic, we have built some common language, also looking at the work of the UN, on the issue of risk categorization, the idea of attaching different levels of risk to different ways of using AI, as a common way of looking at how we use AI.

And on the other hand, I think we also need to concentrate on where we need to work at a supranational level, because there are issues where we cannot find solutions without working across borders.

Let me mention one thing that is outside the two topics of labor and liability, but that I think is especially important to mention in conclusion: the issue of the security and military use of AI.  I think it's very important that we work on that, because all of the other actions will not be effective if we are not able to control AI used as a form of weapon, or as a form of security, in all its implications.  So these are some of my reflections on the topic.  Thank you very much.

>> MODERATOR: Thank you also for covering quite many topics.  The good news on your final point about the discussions on the security and military implications of AI is that there is a debate at the UN General Assembly on a potential Resolution for that.  So for anyone in the room who belongs to a Government, do encourage your Ministry of Foreign Affairs to be part of the discussion because it is important to have some sort of universal agreement at the UN level.

The interplay between global governance and rules and domestic policies I hope we can get back to later in the session, because that's an important point.  If we agree on something at an international level, what next, and how can we implement those policies locally, at the national and regional level as well?

And I also like the point about common standards and definitions.  It's not easy to agree on these things at a regional level, and at an international level it's a bit more complex as well, but it would help when we discuss the interoperability and liability issues and the things that have been raised so far.  Let me move on to Jimena Viveros, because she will also speak about liability.

>> JIMENA VIVEROS: Hello.  Thank you very much.  It's a pleasure to be here with all of the distinguished speakers and the audience.  First of all, I would like to highlight what Brando was saying before about peace and security, because I think that is key.  And as a Commissioner of the Global Commission on Responsible AI in the Military Domain, we like to expand this into the broader peace and security domains.  The implications of AI we obviously know in the civilian space, in all types of different forms, but in the peace and security domains they are not just limited to the military.

So we can see it in civilian actors that are state actors, such as law enforcement and border control, and we can also see it in non-state actors that are also civilian, which can range from terrorism and organized crime to mercenary groups and just rogue actors.

So it's very important to also look at it from all of these dimensions, because they do have a very destabilizing effect internationally, regionally, and at every level, because of the scalability, the easy access, the proliferation of it all.  So that's why accountability and liability are so important.

So the report is great, and it really tackles a lot of the good topics about liability; however, it only focuses on liability in terms of administrative, civil, and/or product liability.  It was a deliberate choice to exclude criminal responsibility, but I would go a little bit further and say that we need to look at state responsibility as well, for the production, the deployment, and basically the entire lifecycle of any of these AI systems.

I think it's fitting that this liability part is the first section of the report, because it's extremely important.  Why?  Because in the current landscape that we are living in, where international law is pretty much blatantly violated with complete impunity all of the time, talking about accountability seems like a fairytale, but it's really important to uphold the rule of law and to rebuild trust in the international system, which is at a critical moment right now, also for the protection of the human rights of all people, especially those in the Global South. 

I am Mexican, and countries in the Global South are disproportionately affected both by the digital divide and by the deployment of the technology, and the fact that we are basically consumers, not developers, also greatly influences how we are affected by it.

It also matters because there is a deterrent effect when we are talking about accountability, in the criminal domain especially, and this deterrent effect helps promote safe, ethical and responsible use, development and deployment of AI.

It also allows for remedies for harm.  These mechanisms are very important and should be included in every type of accountability framework, because we do have a lot of problems that stem from AI in terms of liability, or accountability; I prefer the term accountability because it's more encompassing.

So we have the atomization of responsibility: there are so many actors involved throughout the entire lifecycle of these technologies, enterprises, people, and also states as a whole.  That's why I include state responsibility.  I have identified three categories, the users, the creators, and the authorizers, but they are not mutually exclusive.  Each type of responsibility can and should be allocated on its own.  Obviously, what was mentioned, the opacity, the black box, also affects the proper allocation of responsibilities to each one of the actors.

And then there is the fragmentation of governance regimes.

What we are witnessing now is forum shopping: whichever jurisdiction is more amenable to your purposes, that's where you set up or that's where you operate, and so on.  That's why a global governance regime is extremely important, because these technologies are transboundary, as has been said.

So having a patchwork of initiatives is completely insufficient.  And also, the regimes that we have right now for reporting, monitoring and verifying, everything that could eventually lead to some type of accountability, are all based on voluntarism, and in my opinion that's insufficient.  It's ineffective.  At the OECD we have this incident monitoring framework, which is obviously based on self-reporting.

We have witnessed there that, given the lack of transparency and accuracy, these types of voluntarism-based systems are just not going to work, and that is absolutely unsustainable.  Also, the type of self-regulation that is being used, or rather self-imposed, by the industry sector is not going to work if we don't have actual enforcement mechanisms, and a centralized authority to enforce them, because if we go, again, state by state, it's really not going to be very efficient.

So I think we all have like the general notion of what accountability is and what it means and why it matters.  We just need to find solutions.  And the willingness to do so because everyone should be accountable throughout the entire lifecycle of AI, and I will leave it here, but I'm happy to expand on issues later.  Thank you.

>> MODERATOR: Thank you.  I think we are collecting suggestions for the Policy Network to continue working on these issues next year, and I'm taking notes of some of the areas that could be in focus.  You mentioned the impact of AI on peace and security more broadly, going beyond the military domain, the notion of state responsibility and liability, and then the fragmentation of AI governance. 

I'm going to put a question out there that I hope we can explore a little later with everyone in the room as well: the idea of a global governance regime.  Is it feasible?  How feasible is it actually, and what can be done concretely to get there?  We all know that the appetite for multilateralism these days is not as strong as we might want it to be, but maybe not all is lost.

All right.  Let me continue with our speakers.  I'm going to invite Yves Iradukunda to continue, please.

>> YVES IRADUKUNDA: Thank you, and good afternoon.  It's great to be part of this critical conversation.  Thanks to the IGF for inviting us, and particular thanks for the commendable work that the Policy Network on AI has done and for the report, which offers really good recommendations that, if implemented and if they guide our engagements going forward, should make a significant impact.  This conversation is very critical, and as I hear my fellow panelists share their reflections on the report, but also insights from their respective contexts and work, there is a challenge to think about: we cannot talk about responsible AI as if AI were isolated from everything else that we do in our lives.

When we think about AI as a technology, we also need to reflect on why AI to begin with, why technology to begin with, and what has been the impact of technology all along, before AI came in?

I think if we reflect on that, then AI is not a new concept from the perspective of its impact on our day-to-day lives.

I say this because technology has been able to help advance innovation, solve different challenges, and help tackle some of the issues that we have, but at the same time, technology has driven some of the inequity issues.  So I think as we reflect particularly on AI today, we also need to really acknowledge that if the foundational values of why we do technology are not revisited, it's not just about AI, it's about the values of our society altogether.

So since we are focusing on AI, allow me to reflect from the perspective of Rwanda.  We always ask ourselves: to what extent, to what end, what is the end goal?  And we focus primarily on the impact we want to have on our citizens.  Whether it's AI or any other emerging technology, we want to see it as a tool, a tool that we use to improve the lives of our citizens, whether in healthcare, education or agriculture, where we are really prioritizing our investments to leverage AI in addressing the gaps that we have.

So what we are seeing as an outcome is really leveraging technology investments, and also the successes of AI, to bridge the gaps we see in equity and inclusion, but most importantly to improve the lives of our citizens.  The themes for today's discussion, whether focusing on interoperability in governance, looking at environmental sustainability or the issues around accountability, or the impact AI is having on labor, all of these can be addressed if we, again, zero in on the impact we want to have on our society.

On unlocking AI's full potential, I would agree with what has been said before.  All of these values and ethical guidelines and principles have to really guide how AI is implemented.  There has to be consensus and dialogue on how we deploy the different solutions ethically, and I think, as was said earlier on, responsible approaches have to really understand the different players that are accessing the AI tools.

And the standards should follow those values and should protect against ill-intentioned uses of AI.

So when I look at the report and the different recommendations, I find confidence in this global community within the Policy Network.  But, again, for this session, I really want to call upon the leaders in the room, the technology specialists and the corporate companies that are deploying these tools, to really follow these recommendations, but most importantly to figure out what it is that we want to do for our society.

And when it comes to building capacity, I think it's something that we need to double down on.  Right now there is inequity in how different countries are adopting AI.  The talent is probably available in all countries, but in terms of access to the tools and in terms of awareness, there is still a big disparity.  So even as we speak, most people across the globe may have a limited understanding and appreciation of the impact AI is going to have on their lives.

So I think building capacity should start with awareness, and then the deployment of AI tools should really be focused on improving people's lives at all levels.  Whether it's the highest advancement in security, as was just said, or in medicine and other applications, we also need to think about how it affects farmers in their respective societies.

I think we should foster partnerships as we follow the implementation of this report.  Like I said, it's not just for Governments or corporates or international organisations alone.  I think we need to really bring partnerships forward to make sure that we bridge the divide and accelerate innovation across all levels.

And finally, I think we should really commit to the adoption of these policies within our respective jurisdictions.

I think the boundaries of the impact of AI are limitless.  If you look at the environmental impact of AI, it knows no boundaries.

So we need innovation around the manufacturing of equipment that is used for AI solutions, we need to look at energy solutions that are renewable, and applications of AI that really work against environmental approaches should be limited.

So to conclude: really commendable work on the report and the recommendations, and insights have been shared on the panel, but this is really a call on the leaders present here to put at the centre the impact on the people, on their citizens respectively, and to really think about how AI serves that purpose of improving their lives.

>> MODERATOR: Thank you so much for bringing the focus back to issues of inequality, access and capacity building, and how we bridge the divides we see growing instead of shrinking.  And I like your question: to what end, where are we going with this technological progress?  While listening to you I was reminded of two quotes we came across the other day.  I want to read them quickly, hoping we can reflect on them, again building on your point, to what end.  One came from the Secretary-General of the UN, who is actually convening this forum. 

It was very simple but very powerful: digital technology must serve humanity, not the other way around.  We might want to think about this a bit more as we develop and deploy AI technologies.  And the second one is a bit more elaborate but along the same lines: are we sure that the AI revolution will be progress, not just innovation, not just power, but progress for humankind?

I'm hoping we can have a bit more reflection on this here but also beyond in our broader debates on AI governance.  Going back to our speakers, I'm going to move online and invite Meena Lysko to provide her intervention.  Over to you.

>> MEENA LYSKO: Thank you very much.  Maybe I could start by first thanking the Internet Governance Forum's Policy Network on Artificial Intelligence for organising this important discussion and for inviting me to participate.  I appreciate the chapter on environmental sustainability and GenerativeAI.  I would like to first paint a vivid picture; this picture, as well as other scenarios, has, I firmly believe, been the premise for the Policy Network on AI.

So as it stands, the Global South is indiscriminately impacted by GenerativeAI and its associated technologies.  The Global North economies are strengthened largely by providing technologically advanced solutions which are taken up worldwide, and at the same time, the Global North has the resources and time to implement and enforce policies which will protect its local environments.

There, the entire day is not necessarily spent on hard and hazardous labor to get food into the mouths of the hungry.  This may not be the same in poorer and developing countries.

Just as with plastic pollution, we will see greater disparities in the impact of non-green industries on the environments of the most vulnerable.  To illustrate this view, I will use the example of GenerativeAI in transportation, and this is by no means picking on Elon Musk and Tesla.

The automotive industry is being transformed by the integration of electric vehicles, software defined vehicles, smart factories and GenerativeAI.  Identifying red flags related to environmental harm across the entire value chain of electric vehicles is crucial for sustainable development.

So permit me: a key red flag is biodiversity loss from mining raw materials.  GenerativeAI relies on large-scale data centers, GPUs and other computational hardware, as well as all of us with, for example, our smartphones, all of which require metals and minerals like lithium, cobalt, nickel, rare metals and copper.

Extracting these materials impacts local ecosystems, wildlife and the broader environment.  Let's look at this from the perspective of deforestation and habitat destruction.

Consider cobalt mined in the forests of the Global South, in the Democratic Republic of the Congo.  Cobalt is used to produce lithium-ion batteries.  The country has seen genocide and exploitative work practices, and the cutting down of millions of trees, in turn negatively impacting air quality around mines.

More so, cobalt is toxic.  The expanded mining operations result in people being forced from their homes and farmland.  According to a 2023 report, the forests, highlands and lake shores of the eastern DRC are guarded by armed militias that enslave hundreds of thousands of men, women and children.  The destruction of forests due to cobalt mining reduces the earth's natural carbon sinks, which are crucial for mitigating climate change.  Let's also be reminded of the negative impacts of copper and nickel mined in the Amazon Rainforest, and of the extraction in Indonesia as well as Chile. 

Besides biodiversity loss from mining raw materials, we can look at water pollution from processing and battery disposal; the carbon footprint of energy use, in terms of production and charging; waste generation at every stage, including battery disposal and component manufacturing; and the social and ethical issues like child labor in mining.

Addressing these red flags requires stricter regulation, sustainable sourcing, clean energy use and investments in circular economy practices.  We need to be extra mindful of the impact of batteries on the environment in the longer term.  We are presently having to manage the disposal of electronic waste, including plastic.  These permeate our vital land and waters, though still at a micro and nano level.  If we fast-forward a few decades from now, battery waste promises to be far more unmanageable, as we will then be looking at seepage of fluids into our ecosystems.

So the Policy Network on Artificial Intelligence policy brief report provides seven multistakeholder recommendations for policy action.  I would like to emphasize that in developing a comprehensive sustainability metric for GenerativeAI, that's recommendation one, the standardized metrics must have leeway to adapt, to take into consideration our rapidly evolving digital space.  Today we are having to look at the repercussions of elements such as cobalt, nickel and lithium.  We are having to consider greener technologies to meet the growing energy demand relating to GenerativeAI.

A decade or even a few years from now, our targets will likely be completely different.  Also, if I can add one more, I suggest that we have, in addition to the seven recommendations, an outlook on environmental impact beyond Earth, because we have moved beyond just the terrestrial.  We are mining outer space.

So the global space race for mining resources to quench our GenerativeAI thirst also needs consideration.  I'd like to pause there for now.  Thank you very much.

>> MODERATOR: Thank you also, Meena.  Thank you for making us think of issues that are right in front of us, which sometimes we tend not to see precisely because they are right in front of us, and for raising more awareness about the use and misuse of natural resources here on earth, but also in outer space.  That's not something we talk about so much in AI governance discussions, but it is a very important point, also because we don't necessarily have a global framework for the exploitation of space resources, and it would probably be better to start thinking about that sooner rather than later, because we do see a lot of competition for the use of resources for the development of AI and data technology.

So thank you so much for bringing that up also.  Moving down to Mutaz Ghuni for your reflections, please.

>> MUTAZ GHUNI: Thank you so much, Sorina, really happy to be here with you in this session at the IGF.

I think there is a lot of ambiguity and uncertainty, as you mentioned, surrounding AI.  This is not just the talk of the hour, and not just the talk of the hour in Saudi.  This is the talk of the minute and the second everywhere in the world.  And before I talk about the paper and the report, I want to take a step back and look at history.  History is not just the greatest teacher. 

It's also the greatest predictor of the future.  We as human beings, as a global society, as a United Nations, have been here before, four times.  We have been here before in the first industrial revolution, with the transition from agriculture to industrialization; then again with the introduction of electricity; again with the introduction of computers; and then in the fourth industrial revolution with the introduction of the Internet.  And finally now, we are on the cusp of the fifth industrial revolution, with the transition from the digital age to the intelligence age.

Each one of the previous four industrial revolutions had a profound impact on three specific aspects: on infrastructure, on society, mainly on labor, and on policy.  Let's take electricity as an example, as was just mentioned.  When electricity was introduced, we had to develop a lot of new infrastructure to deliver electricity to every home, to give everyone a chance to harness its power and use it in a safe and robust manner.

When we talk about electricity and its impact on society and jobs: the jobs market was never the same before and after electricity.  It changed forever.  And we adopted, adapted and prospered together with electricity.  We skilled, upskilled and reskilled economies and people to be able to leverage that technology for the greater good.

In terms of policy, we developed standards, we developed frameworks; we as an international community came together to build a robust and meaningful framework that we can all work on together for the greater good in the use of electricity.  AI is not going to be different.

If we look through the same three lenses from the AI perspective, let's take infrastructure as an example.  Today we are using about seven gigawatts of electrical power in data centers in the world.  This is projected to grow to 63 gigawatts by 2030, in just five years; we are expected to consume roughly ten times the electricity that we consume today for data centers.  This will have a profound impact on the environment.  But the good news, and this is a funny anecdote, is that 30% of the seven gigawatts that we use today is actually being used to predict the weather.

And it's using very old technologies, machine learning technologies, in order to predict the weather, just to predict seven days of weather.

Now, we can actually use GenerativeAI to predict not just seven days but twelve days, with much less power, and reutilize that excess power for new uses of AI.

Now, in terms of society, yes, AI is going to have a profound impact on jobs.  Jobs are, again, not going to be the same before and after we fully adopt AI, but we as a global society need to, again, skill, upskill and reskill our economies in order to adopt, adapt and prosper together with AI.

Finally, in terms of policy, which is the main topic of discussion in this session: like every technology, when it comes to policy for AI there are two aspects, a local aspect and a global aspect.

In terms of the local aspect, we can look at the collection, use, utilization of and access to data and AI technologies within specific geographies, according to local priorities and agendas.  On the global aspect, where we have already done amazing work with the establishment of the report, we actually need to work as global bodies with local Governments, with the private sector and the public sector, and the good news is everyone is willing to put their hands together and leverage whatever we have today for the good of humanity and to ease the adoption of AI.

And with that, I look forward to the rest of the discussion and the session.  Thank you so much.

>> MODERATOR: For the good of humanity is a very good way to end this section of the discussion.  I did promise we would have a dialogue, and we only have 19 more minutes in the session.  So I'm going to try to do that.  I will look at the room, and I will also count on my colleague online to tell us what's happening there.  Any hands?  Anyone who would like to -- do you have a mic there, or how does it work?  I'll come to you.  Please introduce yourself.

>> AUDIENCE: So thank you so much for giving me the opportunity to ask some questions.  First, I would like to congratulate you on your hard work to release this report.  I know it's very hard work to address such complicated issues, and I'm eager to read this report.  But back to the main theme, the AI Governance We Want, I want to ask a fundamental question about what the overarching goal of AI governance is.  Is it acceptable to use the title of the very first United Nations Resolution on AI adopted by the General Assembly in March, "Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development"?  If not, what is the articulation of the overarching goal of AI governance?  That's my first question.

My second question is that I believe governance is beyond regulation.  Governance deals with technical innovation: we do need technical innovation, but we also need governance to guide this innovation for the greater good of the people and the globe, the planet.

So if we use "for sustainable development" as the overarching goal of AI governance, how can we guide AI innovation in line with the Sustainable Development Goals, and even accelerate the implementation of the Sustainable Development Goals?

And the third question is back to regulation.  The common concern for AI applications is dis- and misinformation, but dis- and misinformation come from the malicious use of AI tools.  So take traffic safety as an example.  For traffic safety, we need safe cars.  We need safe roads.  But more importantly, we need people, the drivers, to abide by the rules.  So how can we have a comprehensive governance framework to regulate the behavior of AI users?  I will stop here.  Thank you so much.

>> MODERATOR: We will get a few more questions and then provide reflections.  Any more points from the room?  I don't see any hand, and we covered quite many topics, so I'm pretty sure you have at least a small reflection in mind thinking about all of the -- I'm seeing a hand there; could you please come?  There are only mics here, unfortunately.

Meanwhile, I do like your question: what do we want from AI governance, and what is the AI governance we want?  And while we are waiting, Jimena Viveros wants to provide some reflections.

>> JIMENA VIVEROS: Yes.  So obviously there have been around four important Resolutions this year regarding AI.  One was promoted by China, and there is the one you mentioned, which is fantastic.  So all of the Resolutions are steps forward, and they are also leading up to the global governance that we want and that we expect.

That's why we had the Summit of the Future this September, and we had the Global Digital Compact and the Pact for the Future, and all of the documents that were adopted therein are a monumental step, because we are now charting the path by which AI is going to be governed, by and for humanity.

That's actually the title of the report of the Secretary-General's High-level Advisory Body, Governing AI for Humanity.  And I like to say also for the benefit and protection of humanity.  As you mentioned, AI has enormous potential and can be harnessed for good, for all types of enhancement of the Sustainable Development Goals; however, as Amina Mohammed said this past March in Beirut, there can be no sustainable development without peace.

So going back to the point of peace and security and the importance of AI and its dual-use nature, what we want to create is global governance that encompasses both sides of this dual-use nature, repurposability, all of that.  And it's important to have it because, again, if not, we are just going to have fragmented approaches that are not interoperable, not coordinated, not cooperative.  So we all need to work towards this, and the only way we can do it is by the adoption of a binding treaty.

That's going to be hard, but we need to be ambitious in order to have this technology governed by us, so that we are not eventually surpassed by it.

>> AUDIENCE: I am with the EUY.  I would like to congratulate the Policy Network on AI for this very important report.  I would like to invite the panel to reflect on the interaction between the liability and interoperability aspects, specifically: if we have interoperating AI systems, how best to identify where liability would lie in case issues do arise?

Is there a role for contractual agreements in this and if so, how to deal with the imbalances in both informational and economic power that various actors within that network of interoperating players may have?  Thank you.

>> MODERATOR: Thank you as well.  Do we have more hands in the room?  Yes, we do.  Try the mic over there, if not, I will come your way.  Not so much.

Okay.  It's going to take me a while.  If anyone would like to provide any reflection while I do the walk, please go ahead.

>> AUDIENCE: Thank you.  Riyadh Najam, media and communications sector.  Now, in order for us to govern something, don't we have to define it first?

I mean, we all talk about artificial intelligence and what it is and what good or bad it can do to us, but until now, I cannot see a correct and definite definition for AI.  By this definition, do we mean the speed at which we can execute our computations, or is it the amount of data that we can access and manipulate at the same time?

Even without doing that, we all know that artificial intelligence was established a long time ago.  The only reason it is becoming relevant now is that we are able to access data in great amounts at the same time, and we have extensive and high-speed computation.

So we need to define it first before we try to govern it.  And maybe my other comment: for the past almost 20 years, we have not been able to govern the Internet itself at a global level.  All we get are sometimes guidelines, some initiatives and so on, and there has never been a treaty that can cover this.

Are we able to do that with artificial intelligence?  I leave that to the panel to answer.  Thank you.

>> MODERATOR: Thank you as well for taking many, many steps back and asking the question of what exactly we talk about when we talk about AI.  I am going to turn online and see if we have questions there from our online participants, not our online speakers.  If our online moderator can go ahead and unmute.

>> MOHD ASYRAF ZULKIFLEY: Thank you very much. Actually we have a problem with audio, so I'm not sure whether you can hear us well or not.

>> MODERATOR: Please go ahead, we can hear you well.

>> MOHD ASYRAF ZULKIFLEY: There are a few questions from the online audience.  The first one, I think, is a repetition of the last question, more or less the same: how do we address regulatory arbitrage between countries, especially between the Global North and the Global South, because the situations are very different?  And even for the Internet, for the past 20 years it has been hard for us to regulate it.

So how do we deal with this arbitrage issue?  And the second question: we have a problem with the wrong uses of AI, and the wrong usage of AI is worse in the case of military applications.  So how do we safeguard ourselves to ensure that AI is not being wrongly used for military purposes?

And then we have an interesting question from Omar.  The third question is: AI is posing a lot of harms, actually, in terms of online interactions.  There are several cases of, what do you call it, a lot of things being falsely generated by GenerativeAI.

So how do we protect ourselves, especially the young generations?  Any panelist can answer these questions, and then, how do we foster collaboration when we encounter these problems?  I think that's the focus for now, because we have seven minutes more.  Thank you very much.

>> MODERATOR: Thank you also, and to everybody else online and here.  We have seven minutes to answer quite a few questions, and I'm going to turn to you.  Please go ahead.

>> BRANDO BENIFEI: A lot of different things have been asked.  I will try to answer a few.  On the issue of the definition that was touched on: it was a big issue for us too.  In the end, I think it's very important that we concentrate as much as possible on defining the concrete applications of AI, so that we define the systems, we define what we want to regulate, for regulation's sake, because we are not talking about philosophy or other sciences that should analyze AI in its different aspects.

So we have been working on that as the EU, and there are important processes ongoing at the UN and at the OECD, and I think we need to stick to the minimum that we can, so that we can find more agreement; otherwise we will lose sight of it.

On the sustainable development goals issue that was mentioned, I think it's important to also mention the risk of excessive wealth concentration that will limit access to services, tied to the same issues we mentioned of permanent learning, et cetera.

So in fact there is also an issue of how we distribute the added value created by AI as it increases our productivity; it's something that, if we look at the policy side, cannot be avoided.  I think we need to bring that to the table too.

So it's fiscal policies, budgetary policies, a new welfare system, because in fact with the revolutions we talked about, the industrial revolution, electricity, the digital space, we have seen changes in how we organize our safety nets and our state support systems.

So we need to work on that also.  And finally, on liability, I want to say that it's very important that we work on finding more transparency.  This is what we have been working on with the AI Act, because if there is no downstream transparency between the various operators in the AI value chain, then the risk of asymmetry, I mean the transfer of responsibility down the stream, will be damaging to the weaker actors.

It will strengthen the incumbents, and we will not have a healthy market for AI.  So on liability and transparency, yes, we can have contractual agreements, as was mentioned, but only if we have strong safeguards to avoid the lack of information.  Otherwise we will just entrench market advantages and, I think, suppress innovation.

So we need to find a good way.  In Europe we are now working on new liability legislation, AI liability legislation that complements the AI Act, and we will surely be discussing this in the future in this kind of context too.  Thank you very much.

>> MODERATOR: Thank you, Brando, for highlighting whole-of-government and whole-of-society approaches to dealing with the challenges of AI.  Any more reflections from speakers?  We have three more minutes.

>> MEENA LYSKO: Am I audible?

>> MODERATOR: Meena, give us two minutes to wrap up here and we will get back to you.

>> MUTAZ GHUNI: There were a lot of questions regarding regulation, governance, the definition of AI.  I just want to take a step back and highlight an amazing approach that has been taken in the report in regard to focusing on the value chain of AI, because you cannot govern the whole of AI together.  You need to break it down into components and look at each component in isolation from the other components.  And I also want to mention that we still don't fully understand AI.

This is the first technology, well, maybe not the first, we still don't fully understand electricity, for example, but this is a technology that is giving us answers in a way that is not very transparent.  We don't know why the model gave us that answer.

So I think a change in how we regulate and how we govern such a technology is very much needed.  We cannot take only a reactive approach, especially when it comes to liability.

We also need to adopt a proactive approach in the appropriate components within the value chain.  So in the data layer, for example, the collection of data, we are going to take a reactive approach, but in access to AI, for example, maybe we need to consider a more proactive approach when it comes to governance and regulation.

And from that, I want to talk about interoperability, because one of the biggest questions that we get from investors when it comes to investment in Saudi, for example, is: if I'm compliant with the laws and regulations in country X, am I going to be compliant with the laws and regulations in your country?  Especially when it comes to the GDPR and the differences between the GDPR and KSA law.  So we need frameworks and interoperable law when it comes to data, because data currently is kind of clear, and we can move from data upstream into the value chain of AI from there.  Thank you so much.

>> MODERATOR: Thank you as well. Meena, back to you online.

>> MEENA LYSKO: Thank you very much.  Perhaps, just from my side, I'd like to emphasize that in order for us to have a future world and an equal future, sincere and responsible collaboration is crucial, and we need to prioritize sustainability, as put in the report, in the design, deployment and governance of GenerativeAI technologies.

And maybe a last point.  Without an environment, there is no point in collaborating to boost economies or develop societies.  We need to move off our path to global destruction.

>> MODERATOR: We have quite a few powerful messages out of the session.  I hope somebody will be taking good notes.  If not, we have AI-enabled reporting.

>> JIMENA VIVEROS: I wanted to say that I think that even though there is no one single definition of AI, we are getting there.  I mean, the technology has been here for over 70 years, so we have some understanding of it, and what we are trying to do now is to whiten the black box in terms of explainable AI and so on.  So with all of these things, we are trying to do forensics, for example, on the models to see how they came up with their outputs and so on.

This is an important thing that will help us make AI more accountable.  And the global governance framework, I think, should be overarching across all of the topics.  Obviously there are going to be a lot of subset regimes, but they should all depend on the umbrella governance.

And just to finish on liability, I think the one conclusion we can come to is that if you cannot fully control the effects of a technology, you should accept, by the mere fact that you are using it, that you will be responsible for whatever happens.

So I think that should be the general rule that we keep in mind for now, especially when it comes to the peace and security domain, or when there are human rights violations involved, so high-scale or high-risk frontier models and all of the other types of decision support systems and autonomous weapons systems.  Thank you.

>> MODERATOR: Thank you.  Yves, Anita, any final reflections from you before we wrap up?

>> YVES IRADUKUNDA: Just to again agree with the comment around liability, I think it goes back also to the emphasis that has been placed on awareness and capacity building, because some of the liability may come from the most vulnerable within our ecosystem.  That then means that we need to emphasize partnership, because if that sort of irresponsible use of some of these methods happens in any one jurisdiction, it does not leave the rest of the countries or organisations safe.

So I think, again, there should be an emphasis on building partnerships that reinforce collaboration to advance some of these values that have been discussed.

>> MODERATOR: Anita, if you are with us and would like to add something?  Okay.  Perhaps not.  We are out of time.  I'm not even going to try to summarize the many points that have been touched on today, but I'm sure there will be a very comprehensive report by the Policy Network facilitators, and there will also be one that is, as Diplo was saying, AI-enabled.  I do encourage everybody to take a look at the report; there is a chatbot that will allow you to interact with it directly.  Looking forward to seeing how the Policy Network will continue its work, building on some of the useful and thought-provoking reflections from today.  Many thanks to the speakers here and online, many thanks to those in the room for your contributions, and to the online participants also.  Enjoy the rest of the IGF, and let's see where we get with AI, humanity, governance, society and all of the implications around them.  Thank you so much.

(Applause).