The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> TIMEA SUTO: I think we're ready to get started. Welcome, everyone, to this session organized by the International Chamber of Commerce. In case you’re wondering, this is Workshop #98, Towards a Global, Risk‑Adaptive AI Governance Framework.
I am very glad that you decided to spend an hour and a half of your time with us this afternoon.
My name is Timea Suto. I'm the Global Digital Policy Lead at the International Chamber of Commerce, and I will be moderating this session today.
We have proposed this session for the IGF agenda not because there are not enough conversations on AI, because there clearly are quite a few, but because we wanted to find a way to take stock a little bit of the various initiatives that are out there on AI governance and governance frameworks. We want to try and see if we can find some commonalities, or perhaps some ideas, through which we can look at AI governance from a truly global perspective and push for a more interoperable outcome or some sort of common approach to how we look at artificial intelligence governance.
I’m not going to spend too much time introducing the landscape of AI, because we all have heard a lot about it, and I’m sure our speakers will talk a lot about it, as well. But I will take a moment to introduce the speakers who are going to be here with us today trying to uncover some of these questions. In the order in which they will be speaking on the panel, I have Ms. Lucia Russo, who is Artificial Intelligence Policy Analyst at the OECD; Mr. Thomas Schneider, who is Vice Chair of the Council of Europe’s Committee on AI; Ms. Sulafah Jabarty, CEO and Founder of Clear Vision, and Chair of ICC Saudi Arabia’s Digital Economy Committee. I also have Ms. Noura Alhakbani, who is Vice Dean at the College of Computer and Information Sciences at King Saud University; Ms. Paloma Villa Mateos, Head of Digital Public Policy at Telefonica, who is joining us online from Spain, thank you, Paloma, for being with us; and Ms. Melinda Claybaugh, who is Director of Privacy Policy at Meta.
So to start off the roundtable, I am just going to ask our panelists to share a little bit about their experience in fostering trusted, responsible, and inclusive AI, and to share a few of the good practices or projects they are working on that incorporate a risk‑based approach to AI governance frameworks.
Why have we chosen to ask our panelists about a risk‑based framework? It’s because many of the governance frameworks around the world say, yes, our governance framework is risk‑based, the approach to AI governance needs to be risk‑based. So there seems to be agreement on that, but there’s little agreement on what it actually means. That’s what we’re trying to figure out together in this session.
So to first look at this, I’m going to turn to Lucia, and I hope that you can share a little bit of information on how the OECD is looking at facilitating cross‑border collaboration on AI governance, and what some of the key challenges and opportunities are in operationalizing this risk‑based approach.
>> LUCIA RUSSO: First of all, let me thank you for organizing this very important session, and welcome all the other speakers and participants here.
So I will talk a bit about the way the OECD is promoting interoperability in international AI governance. And I will mention a few examples of how we are putting this risk‑based approach into practice.
So just to start off, the cornerstone of the work of the OECD is the OECD Recommendation on Artificial Intelligence that was adopted in 2019 and recently revised to take stock of some technological and policy developments, notably advanced AI systems.
And since then, our work has been really focusing on how to move from these high‑level principles into practice.
And when we talk about a risk‑based approach here, of course, we mean having a proportionate system of duties and obligations that is tailored to the level of risk that each and every AI system brings. And so already in 2022 the OECD developed its own AI classification framework, in the form of a scoring table that evaluates AI systems along five different dimensions: people and planet, economic context, data and input, AI model, and task and output.
And I don’t want to go too much into detail here, but basically under each of these dimensions there is an evaluation. For instance, under data and input there are considerations related to privacy or copyright; under task and output, the autonomy level of a system; and under economic context, the business function of the system, which in turn tells us about the impact that the system may have on its business environment.
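To make that structure a bit more concrete, here is a minimal sketch of how such a per-dimension evaluation could be represented in code. The dimension names follow what is described above; the fields, scoring scale, and example system are hypothetical illustrations, not the OECD's official tooling.

```python
# Illustrative sketch only: a per-dimension record loosely modelled on the
# OECD's classification dimensions mentioned in the session. The scoring
# scale and example system are invented for illustration.
from dataclasses import dataclass, field

DIMENSIONS = [
    "people_and_planet",
    "economic_context",
    "data_and_input",
    "ai_model",
    "task_and_output",
]

@dataclass
class DimensionAssessment:
    dimension: str
    considerations: dict[str, str]   # e.g. {"privacy": "...", "copyright": "..."}
    risk_score: int                  # hypothetical 1 (low) .. 5 (high) scale

    def __post_init__(self):
        assert self.dimension in DIMENSIONS, f"unknown dimension: {self.dimension}"

@dataclass
class SystemClassification:
    system_name: str
    assessments: list[DimensionAssessment] = field(default_factory=list)

    def overall_risk(self) -> int:
        """Proportionality in a risk-based approach: obligations scale with the
        highest-scoring dimension rather than an average."""
        return max(a.risk_score for a in self.assessments) if self.assessments else 0

# Usage sketch with a hypothetical CV-screening assistant:
clf = SystemClassification("cv-screening-assistant")
clf.assessments.append(DimensionAssessment(
    dimension="data_and_input",
    considerations={"privacy": "CVs contain personal data", "copyright": "low"},
    risk_score=4,
))
clf.assessments.append(DimensionAssessment(
    dimension="economic_context",
    considerations={"business_function": "hiring decisions affect access to employment"},
    risk_score=5,
))
print(clf.overall_risk())  # -> 5: stricter duties would apply under a risk-based regime
```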
And so this risk‑based approach is what we then see also in regulatory frameworks such as the EU AI Act, which takes this risk‑based approach and establishes stricter measures for systems that are deemed to pose the highest risk to safety and fundamental rights in the EU.
And we see this risk‑based approach also emerging in other frameworks. For instance, the G7 Hiroshima process that was launched under the Japanese Presidency in 2023 led to the adoption of a voluntary code of conduct for AI developers, which also calls on them to develop, implement, and disclose AI governance and risk management policies in line with a risk‑based approach.
And to build on this code of conduct, what we are currently working on at the OECD is supporting the G7, under the Italian Presidency, in the development of a monitoring and reporting framework for these commitments. That means moving from a code of conduct that can be, again, rather high level to what it means in practice for companies to adhere to and respect the commitments embedded in that code.
This obviously responds to the needs of transparency and accountability, but it is also, I think, a good example of how we go a level up from national borders to international co‑operation that really works across jurisdictions: the framework is developed by the G7, but adherence to the code of conduct is of course not limited to companies in G7 member countries.
And lastly, I would just perhaps mention another initiative that we have at the OECD, the AI Incident Monitor. Because again, when we talk about risks, what we need to take into account is also the evidence on which we build the frameworks. And the objective of this monitor is to understand where actual harms materialize, so that we have better informed decision‑making when it comes to establishing what the high‑risk categories are and how to regulate them.
And so this is an online tool already and is also a reporting framework that is harmonized across different countries.
I'll stop here, and I'm happy to engage in the conversation later.
>> TIMEA SUTO: Thank you so much, Lucia. Quite a lot going on at the OECD, but it’s not the only forum doing this work. You mentioned also how the OECD’s work inspires work on the EU AI Act, and how it inspires work at the G7. And I also want to ask Thomas how you have been bringing some of these risk‑based approaches into AI governance, from your previous role as Chair of the CAI and now as Vice Chair, both as you were negotiating the convention itself and now the risk‑based impact measurement mechanism.
>> THOMAS SCHNEIDER: Thank you very much.
And actually, yeah, it’s good that one of the sessions actually tries to concentrate on the risk‑based approach and what that actually means, because we talk a lot about legal texts, and we forget about the operationalisation of all of this.
So before going into how the Council of Europe’s work fits into all of this, let me again start with the analogy of engines, because there are many similarities. We have engines in machines that produce goods that are more or less big, more or less dangerous for people. We have engines in cars, in airplanes, in tanks, in many other vehicles. They may be the same engines or similar engines, and they all, of course, offer opportunities to produce something, but they also carry risks. But we do not have one regulation for the engine; we have thousands of legal norms, not for the engine itself, but for the vehicle, for the drivers, for the infrastructure, liability rules for parts of a car or parts of an airplane, for the airline, for the one selling the tickets, and so on. And we have thousands of technical norms, and we have social and cultural norms.
From culture to culture, there are different expectations of how to deal with risks. In some cultures, people expect the king or the president or the state to take care of their risks. In other cultures, there is more the expectation that people are capable of dealing with risks themselves. And you have everything in between.
And basically, the same logic applies to AI, as well. Because, again, the risks are very much context‑based in terms of where you apply a certain algorithm or set of algorithms. And normally, it’s not the algorithm itself. Algorithms are parts of machines, of tools that we buy, like we have an engine as part of a car or part of an airplane.
And I think one has to look at the legal texts and the convergence across all the legal texts. As you say, they talk about a risk‑based approach. They talk about impact. The Council of Europe Convention is built on a graduated and differentiated approach, which I think is slightly more exact, because it’s not just vertical risk, high or low, but also horizontal: in different areas the same thing may be treated differently, although it’s the same algorithm. Even within the health sector you may have differences, and so on. And the Convention of the Council of Europe is an open convention, open to all countries in the world. So it’s not an instrument for Europe. It just requires states to have mechanisms in place.
So it’s a very general requirement to have functioning mechanisms in place, and it says what they should be able to deliver, i.e. identify risks with regard to human rights, democracy and rule of law, and ensure that states have remedies in place in case risks actually become impacts, as well as a mitigation plan, and so on.
It doesn't go into further detail. This is where the second instrument comes in that the Council of Europe is currently working on, and this is done in co‑operation with the technical standards bodies, with the OECD, with UNESCO, with hundreds of experts from civil society, academia, and businesses.
Contrary to the Convention, it’s a non‑binding instrument, and it is non‑binding on several levels. It’s a methodology for human rights, democracy and rule of law risk and impact assessment.
Also, the level‑two document is a document of about 20 pages giving guidance on what you need, which is a context‑based initial risk analysis; stakeholder engagement, in order to see whether your initial risk analysis goes in the right direction or whether you’re missing something; then the actual risk analysis, which is a classical checklist exercise; then a mitigation plan, so if you realize that risks become reality, how are you going to react and how are you protecting people; and then, of course, some logic about iteration, how you do this with a technology that keeps evolving.
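As a rough illustration of that cycle, the sketch below walks through the stages just described: an initial context-based analysis, stakeholder engagement as a check on blind spots, a mitigation plan per identified risk, and repetition as the system or context changes. All function names, risk categories, and the loop structure are hypothetical, not the text of the Council of Europe methodology.

```python
# Minimal sketch of an iterative risk/impact assessment cycle, assuming
# invented risk categories and context flags purely for illustration.

def initial_analysis(context: dict) -> set[str]:
    """Step 1-2: flag candidate risk areas from the deployment context."""
    risks = set()
    if context.get("affects_individual_rights"):
        risks.add("human rights")
    if context.get("used_in_public_decision_making"):
        risks.update({"democracy", "rule of law"})
    return risks

def stakeholder_engagement(risks: set[str], stakeholder_input: set[str]) -> set[str]:
    """Step 3: add risks the initial analysis missed."""
    return risks | stakeholder_input

def mitigation_plan(risks: set[str]) -> dict[str, str]:
    """Step 4-5: checklist-style analysis producing a response for each risk."""
    return {risk: f"define remedy and escalation path for {risk}" for risk in risks}

def assessment_cycle(context: dict, stakeholder_input: set[str]) -> dict[str, str]:
    risks = initial_analysis(context)
    risks = stakeholder_engagement(risks, stakeholder_input)
    return mitigation_plan(risks)

# Iteration: each new model version or context change triggers another pass.
plan = assessment_cycle(
    context={"affects_individual_rights": True, "used_in_public_decision_making": False},
    stakeholder_input={"freedom of expression"},
)
print(plan)
```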
And it’s building on the work of the technical standard institutions that are also participating. It tries to make the link between the legal text, a legal norm and the technical norm, but also giving the flexibility to take into account social and cultural norms and expectations of how to deal with risk, which you may not be able to harmonize. You may be able to harmonize technical norms, but not social and cultural norms. And I think this is important.
Just one final thing, because we see how difficult this is. The EU gave a mandate to CEN‑CENELEC two years ago to develop technical norms to operationalize and implement the AI Act, and both sides are still struggling to understand each other and to see whether they are actually able to come up with something. It’s just one example, and I don’t blame them; it’s a really difficult issue. But it shows how important it is that there is co‑operation.
And the OECD is very helpful in bringing people together, the Council of Europe, as well, standardisation organizations and others. We need to build bridges between these technical bodies and the legal bodies and the cultural bodies in the end so that we understand how to make this work as a whole and not just on paper as a legal text or in a questionnaire for programmers. So this needs to fit together, and there’s a huge work ahead of us.
>> TIMEA SUTO: Thank you, Thomas. That was a great intro to the work of the Council of Europe on this.
And I want to keep focusing on this element of regional and cultural differences and approaches to context. As we move out from the OECD setting and the Council of Europe setting into the MENA region, I want to turn to Sulafah next and ask: what are your insights working in a technology company in this region, and perhaps even beyond Saudi Arabia, in the wider MENA context? What are some of the views you see on how AI technology works here, and which risk‑based approaches are on the table? And also, what are some of the elements that we can maybe elevate into a more global approach?
>> SULAFAH JABARTY: Okay. So I guess we all agree that AI has been reshaping the economy and society all over the world, and we are speaking about a globalised economy and a globalised area, AI, one of the most advanced technologies in the world. So the globalisation aspect here is much wider than in regular business and regular digital transformation.
And so speaking about what is unique or specific, if we zoom out of this globalised space, I think the uniqueness of the MENA region, led by countries like Saudi Arabia that are investing heavily in AI, is exactly that: heavy investment and leadership in digital transformation, supported by government and supported by the private sector.
As an example, the Alat company was launched under the PIF recently with capital of more than $1 billion. That is a dedicated company just for investing in AI, deep technologies, and manufacturing, and localising all of that out of here, making the best of international minds, international technologies, and the investment environment here.
Also, the investment in the sector, whether it’s financial investment or investing in minds, the regulations, the government mindset, has actually given us a result: we have reached number one this year in the United Nations indicator of digital government, where six years back we stood at number 52. And that just says how much investment is going on, and the speed. Speed cannot be based only on financial investment. It definitely requires collaboration between mindsets, government, private sector, and academia, all together, backed, of course, by a very strong economy.
A second unique aspect, in my opinion, which I guess everyone also agrees on, is such a young, tech‑savvy generation, which makes up the biggest part of our population. That also adds to the speed of embedding these technologies. I mean, a lot of technologies are embedded and live before we even know about them. I guess this is also part of why regulations are very important. When we speak about risk‑based regulations, the advantage is that they are, supposedly, flexible, and can meet the different levels of maturity of these applications and technologies. That’s why flexibility is very much needed in this kind of regulation.
Also needed is adaptability to the different and ongoing kinds of risks, and differentiation between kinds of applications, versus the kind of blanket regulations that are definitely not needed for these kinds of technologies.
So if we go back to the globalised framework, I guess we all know that the European Union this year has activated its landmark AI law, which is considered the leading global law; nothing this mature existed before, building on the EU AI Act proposal of 2021. Considering the kind of effort put into such a law, we speak today about localisation.
Basically, as we were just saying, in technology we never believe in starting from scratch. You capitalise on what’s there, open source and other technologies you can build on. It needs to be the same kind of mindset in terms of regulations.
So what we need to do in MENA is take those frameworks and then just fill the gap, taking into consideration the unique socio‑economic, cultural, and technological differences, which I don’t believe are going to be many in a domain like AI.
And then embedding them. I guess as we speak there’s a lot that has already been done in Saudi Arabia, and I speak about Saudi Arabia as leading in the region in this area. We have the SDAIA, which is the authority for data and AI. They have launched a couple of frameworks in different areas, and I believe we can definitely match and fill the gap between what’s been done internationally and locally to move this faster.
So summing that up, I guess what we all agree, in MENA and globally, is that this kind of risk‑based framework supposedly gives a much wider space for flexibility, adaptation, and inclusivity, supposedly, for everyone to make the best of what’s going on all around the world, and for us to be able to keep leading that on an ongoing basis for sustainable framework adjustments. Thank you.
>> TIMEA SUTO: Thank you very much, Sulafah. Lots to learn from. I’m always amazed every time you quote this number from 68 to number 1 in six years. I think this is an amazing feat, and I like how you put that into the context of what that requires. Of course, there's investment, collaboration with the various expert groups, but of course also the energy and the talent of young people.
Which brings me to Noura to ask you what role do you see from your perspective? I’m sorry I messed up your title before, but in your work at the University in the Information Technology Department, how do you see the role of universities in building this new generation of developers and tech workers?
>> NOURA ALHAKBANI: Hello. First of all, I’m just pleased to be among the distinguished speakers. To start with, I would like to add, as Ms. Sulafah mentioned, that Saudi Arabia and the MENA region are leading. According to Vision 2030, AI actually has a pivotal role at the core of Vision 2030, basically, because they want to diversify the economy, reduce dependency on oil, and establish the Kingdom as a global leader in technology and innovation.
SDAIA actually spearheads that effort and aims to develop robust AI and generative AI ecosystem.
As Ms. Sulafah mentioned, they published several frameworks. They published a framework in September 2023, and again they published an AI adoption framework in September 2024. They also published the artificial intelligence guidelines in January 2024.
So they are keeping updated with everything that’s coming in the technology and the legislation.
In the latest publication, the artificial intelligence guidelines, SDAIA ensured responsible use of AI, emphasizing data privacy and ethical standards, and tried to balance innovation with societal values, potential risks, and mitigation strategies. They talked explicitly about certification fraud as a risk, since, as you all know, gen AI can now produce human‑like content. You could write essays, even detailed research, undermining traditional educational and professional standards.
Therefore, SDAIA also stated mitigation measures for assessment, education, and training explicitly here in Saudi Arabia.
In terms of AI adoption in higher education institutes, the adoption and management of new technology in higher education institutes can actually be complex due to their diverse constituents, including faculty, students, and staff, each with different needs and priorities. But there is a paper that was published in September 2024, titled AI Governance in Higher Education: Case Studies of Guidance at Big Ten Universities, in the journal Future Internet. This study examined how prestigious universities in the United States are approaching the governance of artificial intelligence, particularly in response to the growing influence of generative AI in higher education.
They reviewed AI governance policies and strategies in 14 prestigious universities. What we can see from this study is that universities started investing generously in AI governance. For example, you could see Massachusetts Institute of Technology developed a comprehensive framework for ethical AI governance and has invested $1 billion in AI initiatives.
University of Utah launched a $100 million responsible AI initiative aimed at using AI to tackle societal issues while protecting civil rights.
And Tsinghua University established the Institute for AI International Governance and the Center for AI Governance focusing on AI ethics, policy development, and international co‑operation.
And the University of Oxford launched the Oxford Martin AI Governance Initiative to understand and mitigate AI risks through research and collaboration.
Also, University of Birmingham’s Center for Artificial Intelligence and Government.
Lastly, universities also recognize the importance of dialogue and take innovative steps to promote it. For instance, the University of Illinois Urbana‑Champaign harnessed the power of social media and created an online discussion space to discuss issues related to gen AI within the university community.
So these universities are not only investing financially but developing comprehensive programmes, research initiatives, and governance structures to address all these issues.
To go back to the MENA region, again I'll go back to Saudi Arabia. In Saudi Arabia, universities are focused on AI within, obviously, Vision 2030. In KSU, we have established the KSU Zakat Center and the KSU Zakat Office, both concerned with AI. The KSU Zakat Centre has dedicated its efforts, through its numerous partnerships, to localising knowledge and technology within the field of AI, while the CaTS Office is concerned with developing AI research and applied programmes that serve different academic and professional disciplines.
And again, there’s KAUST. Also, they established the Center of Excellence for Generative AI, which is dedicated to placing Saudi Arabia at the forefront of AI research in the region and globally.
>> TIMEA SUTO: Thank you very much, Noura. Quite a lot that universities are able to do, and I guess also when they’re able to do that, when they’re supported to do it.
So again, I think what you've said fits very nicely with what the panel has said earlier about how we make sure that expert communities, whether based in academic circles, private sector circles, government, or international organizations, manage to come together and build on each other’s knowledge to further this work, and that we need the expertise of all of them if we want to get the approach right.
So in that vein, I also want to turn to Paloma online and ask, where do you see the role of the private sector’s efforts in driving this responsible AI innovation by design?
And what is the role of the policies that are necessary around this to help make sure that the private sector can do this?
>> PALOMA VILLA MATEOS: Yeah, thank you. Thank you. Can you hear me well? Is it okay? Okay. Great.
Well, thank you.
Well, I do think that the magic word here is AI governance, and this applies to the private and the public sector. I do think that we need to be humble and have a substantial conversation between us, because otherwise we will not benefit from AI.
I think we have done a great job in the last decades in the different international organizations and also in the companies.
And the question for us is, in the end, how to ensure AI that is developed responsibly while fostering innovation.
I do think that AI governance, from the company perspective, lies in four interconnected pillars, which are really important. The first one is principles and guidelines, which mainly come from international organizations. Regulation is the second pillar. Then technical standards, and industry self‑regulation. Most of them have been already mentioned, but I think it is important to try to get this interconnected proposal, starting from some principles and moving to the more sophisticated development of AI. No? Regarding, for example, the principles and guidelines, I do believe that the OECD, the Council of Europe, UNESCO, the Hiroshima principles, the Executive Order, all of this going around the world, is directly connected to what companies are doing. I think the development of what we have been doing in the last two decades has been going in parallel, and this is very good news.
The principles are there when we talk about transparency, fairness, privacy, human rights, democracy, and the rule of law. At Telefonica, together with many other companies, Microsoft, Meta, we have been working with the Council of Europe and with the OECD on a daily basis. With UNESCO, we have signed on as well. These principles are there. And I do think, and this is my positive insight, that we are all on the same road.
The problem comes, as I think Thomas has said, when we move from the high‑level principles to the lower level. We don’t know how to apply all these principles.
Now, for example, at the OECD and many other organisations, we are developing in a more sophisticated way things related to AI, not only high risk. I mean, the high‑risk approach is everywhere; there is no discussion about that.
But we are now discussing more specific topics, for example, AI and intellectual property. This is, again, the problem of how we make possible this interoperability of regimes in Europe with other regions, where the legal history and tradition are completely different. How can we find this common interplay? No?
So the second pillar is regulation. And I think that here, in the case, for example, of Europe, where the AI Act is already in place and based on the principles we have already discussed, I do think that companies are doing a great job, for example, signing the EU AI Pact, which is really relevant for companies trying to voluntarily implement things before they enter into force.
And many companies are engaging in core commitments in their AI governance strategy, mapping their AI systems, and developing AI literacy inside and outside the companies. These three core commitments of companies are relevant for what we are talking about now. I mean, this collaboration between the institutions, the public sector, and the companies is extremely relevant.
The problem here in this second pillar, regulation, is how we will implement regulation. Again, this is the problem. Maybe the problem is not the regulation itself, but all the standardisation and what it implies for high‑risk systems. And sometimes there is a grey zone. Sometimes when we talk with companies and with institutions, the problem is that the discussion is not substantial, because we are trying to resolve very quickly the standardisation process, which is very difficult, and the technical details are really difficult. That is why I started by talking about being humble and having a substantive conversation between the public and private sector: sometimes we have a legal instrument from the 20th century, but the technology is from the 21st century. This is a challenge, a challenge for the institutions, but also for the companies, because we have to comply with this regulation when the legal framework is not fit for purpose.
For the third pillar, which is technical standards, I have to say that companies, Telefonica and many others, and I’m talking about Telefonica here, are involved in the standardisation process, participating in all the conversations, also with the AI Office, with the standardisation and the code of practice, but we also have international standards with ISO and NIS and so on.
In the end, what we have, we are seeing also in the ITU, is a complex scenario with many standardisation processes going around. So here we have a lot of work ahead.
But I have to say that this conversation is taking place also with the participation of companies.
And the fourth pillar has to do with self‑regulation. Here I have to say that companies, in the last decade, especially those using AI internally and offering data‑based services, have put in place an AI governance strategy with a very substantial model, scaling the process internally with responsibility within the companies, and also ways to identify risks internally that are really in line with what you have already said.
I think self‑regulation is relevant because the technology moves very fast. We saw that during the process of the AI Act: we started out talking about AI in general, and then generative AI came into focus in the middle of the process, because the technology is faster than the legal framework. So I do think that self‑regulation and responsible AI are critical here.
And I stop here because I think we can go in depth later.
>> TIMEA SUTO: Thank you. Thank you. So thank you very much for that, Paloma. And it’s quite a complex framework, as you said. I think one commonality out of all those four pillars is the collaboration between industry and regulators to make sure that we get the balance right, that we balance the innovation and the rapid development of technology with some of those commitments and goals that we want to address through risk management.
So I want to stay with some of this idea as I turn to Melinda. We’ve heard a lot about the safety risks of AI, and there have been a number of global summits already on this issue. So I’m just wondering if you might want to draw out a few lessons learned there, and see what we can do to get this balancing act right between innovation and investment on the one hand and risks on the other, but also what is it that the private sector is already doing to help that balance?
Over to you, Melinda.
>> MELINDA CLAYBAUGH: Great. Thank you so much.
Just a little bit of context to explain Meta’s, my company’s, context and how we’re coming at the AI conversation. So we have two main buckets of AI products. One is our generative AI products, which are in our apps; in Facebook, Instagram, and WhatsApp, you may have seen the Meta AI assistant. It’s basically a chatbot powered by a large language model that you can interact with and ask to do things and answer questions. We also have image generation tools, things like that, that help you create content online.
The other bucket of our AI products is a large language model called Llama that we have released several generations of. It’s an open source model, which means we make it freely available to anyone to download. So it’s essentially giving away, you know, many, many millions of dollars of investments to entrepreneurs and developers who want to build on it for their own applications.
I think that’s just important context to set for kind of how we come at the conversation as both a model provider and a gen AI system deployer.
So at the model ‑‑ let me start at the gen AI system level, so our Meta AI Assistant, we assess risk in the way we would assess privacy risks in general. So we built our AI risk management programme on top of our privacy risk management programme. So it’s to say that any time a new feature or product or assistant is developed or improved in a certain way, it goes through a risk assessment and review process, and mitigations are identified and applied, and there’s kind of a cycle of improvement in the same way as happens on the data privacy side.
With respect to our large language model, risks there are assessed and mitigated at different points in the development of the model. So at the data collection stage, the pre‑training stage, we’re actually going out of our way not to collect personal data, and then we’re identifying potential personal data and removing it, identifying data that may have copyright protections, going through all of those risks at the pre‑training stage before training the model.
Once it’s trained, we're implementing certain red teaming, other, you know, safety testing and risk assessment and mitigation processes to make sure that the model we’re releasing is safe.
And then we release it and developers can build on it.
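As a very rough sketch of that staged approach, the snippet below separates a pre-training data filter from a release gate fed by red-teaming results. The regex, harm categories, and threshold are placeholders for illustration only, not Meta's actual process or tooling.

```python
# Illustrative sketch only: filter obviously personal records before training,
# then gate the release on red-teaming results. All names and numbers are
# invented assumptions, not a real production pipeline.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def filter_pretraining_record(text: str):
    """Drop records that look like they contain personal data (here: an email)."""
    if EMAIL_RE.search(text):
        return None  # drop the record entirely
    return text

def release_gate(red_team_results: dict, threshold: float = 0.01) -> bool:
    """Release only if every tested harm category stays under a failure-rate threshold."""
    return all(rate <= threshold for rate in red_team_results.values())

corpus = ["public forum post about gardening", "contact me at jane.doe@example.com"]
cleaned = [r for r in (filter_pretraining_record(t) for t in corpus) if r is not None]

ready = release_gate({"prompt_injection": 0.004, "unsafe_content": 0.008})
print(len(cleaned), ready)  # -> 1 True: one record dropped, release gate passes
```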
I think, in addition to those and the product development process, we have also signed up to multiple international frameworks. Domestically, to start in the U.S., we were an early adopter of the White House commitments, which are high‑level commitments to the safe deployment of advanced AI.
Then we signed on to the Seoul Frontier AI Safety commitments.
So I think what we’re seeing is a really positive harmonization around safety frameworks for advanced or frontier AI. I think that will be furthered, in addition, by the development of the various AI safety institutes and how they are going to be working together to understand the science of risk identification, mitigation, evaluations, benchmarks, all of that.
So I think that those are really positive developments.
I think where some of the challenges arise is in the more bread‑and‑butter AI. So not the kind of frontier AI, you know, safety stuff we're talking about, but how is AI being applied in our everyday lives to maybe make decisions about us or offer us goods or services?
And I think that’s where some of the stickiness comes up in terms of reaching consensus about what are the risks that we are trying to identify?
What are the mitigations that should be applied?
Is there a global view on that or should it be kind of nationally determined? Because there’s going to be differences in how different societies view different risks.
So I think that’s a really interesting thing to keep in mind, the difference between kind of the very advanced AI safety concerns, and then kind of the day‑to‑day bread‑and‑butter AI concerns.
And just a few general thoughts on risk. I think it’s really important to focus on the marginal risk we're talking about, because I think we tend to come to this and think, "Oh, my god, AI is new and it’s different and it’s terrible." You know, in fact, we've been dealing with AI, classic AI, for a really long time. And I think what people get concerned about is this really advanced stuff that maybe we'll lose control of, you know, people worry about, or maybe it’s doing things we don’t understand and all of that.
So we have a whole legal ‑‑ we have many, many legal frameworks that already govern things like data privacy, that already govern things like kids’ safety online. So we have a lot of mature frameworks to draw from.
I think from a company’s perspective, what is going to be really important is how these things are rationalized. I think there’s a risk of imposing, through the lens of AI, a whole new framework and regime on top of all of the ones we already have. And then how do those relate to one another?
We're seeing this to some extent in Europe, in the AI and privacy conversation and how data can or cannot be used in AI.
And how does the legal regime on data privacy intersect with AI?
And that balance of innovation and privacy protection is really at a tension point, where we all recognize data is needed for AI advances, but of course, there’s limits around it.
I think the unique nature of large language models means that we may not be able to implement data subject rights or other things that arise in data privacy frameworks the way that we can in other types of data processing. So there’s a real‑life tension there that I think has to be grappled with.
Then another, just two other points I want to make real quick, is I think it’s really important to focus on the use cases. So for us, as a large language model provider, and particularly as an open source LLM provider, we release our model. We do all the mitigations that we can, we release it, and we have no idea how it’s used. Anyone can build on it for any purpose, and it’s up to them to put into place the mitigations that are necessary for their particular use cases.
So I think it’s important to ‑‑ I know the OECD is looking at the value chain and really breaking down what the roles and responsibilities of the various actors in the AI value chain are, and what is in their control to identify and mitigate. I think that’s a really important conversation, and again the use‑case conversation, and then particularly looking at what laws we already have in place. We already have laws about discrimination in employment in most places. We already have laws about discrimination in housing and services. So what is net new here that is not already covered? And can we cover those risks in existing frameworks, as opposed to new frameworks?
>> TIMEA SUTO: Thanks. Thank you, Melinda, for that. I forgot to turn on the microphone.
It’s been quite a rich first round around this table. We've heard a number of ideas from the speakers here on what it is that we're facing in terms of a risk‑based approach to AI, and what some of the elements are that we can build on.
So I want to move to our second round of questions. I have the same question for all of you. In addition to reacting to what you've heard from one another, please share a little bit on how you think forums like the one we are sitting in today, these global conversations at the IGF and other global fora, can help bring what you've mentioned in your interventions to fruition as an actual global approach to the governance of AI, one that, as most of you highlighted, allows the rapid growth of technology and innovation while making sure that some of the harms we fear are actually mitigated.
So I don’t want to summarize what you've all said because it’s going to take too much time, but I hope we can take this one question and do a round‑robin around the table and react to one another and bring out those elements that can actually help in global conversations.
So Lucia, you spoke first. I'll hand the microphone over to you.
>> LUCIA RUSSO: Thank you, Timea. It’s truly fascinating to hear from such a diverse group of speakers.
I think what resonates the most for me from what we heard is, on one hand, this need for multistakeholder conversation and collaboration, the need also to have a contextual and cultural approach to this type of regulation, and the need to think in practical terms about what it means to translate these principles into concrete requirements along the risk spectrum that we have advocated.
So what I want to get at is that we see some sort of regulatory fragmentation, and this is no news to anyone. We perhaps shouldn't seek full harmonization, because that’s maybe not achievable, and perhaps not even desirable, because as we have heard there are cultural considerations to be made. There are local values, technological developments, and even cultural and institutional histories.
So I think the way we are approaching this issue at the OECD is really to have these multistakeholder groups coming together and discussing. We have these expert groups; overall we have a network of 600 experts who work with us, and they are divided into expert groups that focus on specific topics. For instance, one of them is a group called Risks and Accountability, a name that speaks for itself. It really is taking the approach of looking at the different risk management frameworks that have emerged so far and trying to see where they share commonalities and where they differ. The idea is to develop responsible business conduct guidance for enterprises, which is not yet another framework they have to comply with, but more of a framework that would indicate to companies, especially those operating across borders, what complying with a given requirement means, for instance, in the EU, what it means in the US, or in another jurisdiction.
So the idea is to really put this interoperability in practice, meaning having a level of alignment or a level of understanding for operators of where these different requirements intersect.
So this is the project that we are currently carrying out, and we should have the due diligence guidance ready next year.
And perhaps the last point that I would like to add, and Melinda hinted at that, is that it’s a risk management framework that is not only looking at one specific actor in the chain, but at AI development and deployment across the whole value chain, because of course it’s not only one part of the chain that is responsible; there are upstream and downstream operators that also have due diligence requirements to abide by. So that goes down to data, to the very first investment, and to data labeling. It’s really a more holistic approach.
So yes I would say that the value of these conversations is really to bring together these perspectives and it’s the way to go. There is no other alternative.
>> TIMEA SUTO: Thank you, Lucia. Same question to you, Thomas. What is the role of the global community here?
>> THOMAS SCHNEIDER: Yes, thank you.
It’s actually interesting to see, and I think this is the value of a forum like this, to hear from each other where we are, to what extent we are on the same page or going in the same direction, to what extent processes are converging, legal processes, standardisation processes, and also to what extent they may not be converging or don’t have to converge.
And a fundamental question that hasn't been raised here is actually who defines what a "risk" is and who defines what a "high" or a "too high" risk is? And that largely diverges from country to country, and not just with AI.
Just to give you one example, in England in Liverpool you have the River Mersey and nobody would ever think of going in the river to swim. On the contrary, you have a metal fence that is from 1920 that tells you "Forbidden! Water! Danger! Beware!" You have a second fence one meter ahead of it from the 1930s that says, "Oh, danger! Water! Don’t go in! There may be ships!" And there’s even a third fence added in the '50s.
In Switzerland, in Basel for instance, you have a river with cargo ships, but thousands of people go swimming in the water. They go beneath bridges. They navigate between the cargo ships, because this is one of the greatest things to do in summer if you live in Basel and have no access to the sea. So if the government decided to forbid swimming in the river because there are cargo ships and it may be dangerous, the people would just say no. And the UK and Switzerland are not 5,000 or 10,000 or 20,000 kilometers apart. It’s just to say that in the airline business, where the risk exceeds their personal knowledge, people are okay with trusting experts and are willing to agree on internationally harmonized risk management, because they want to be sure that the airplane lands safely; they can’t fly it themselves.
But the closer it gets to your own capabilities, to your own life where you want to take the decision, the more you want to decide yourself, and that will also be the same with AI. For a heart surgery operation, you may be happy that it’s clear what the red lines are, what the doctor can do, what safety tests the tools need to pass. But when it’s about AI‑generated content and your freedom of expression, expressing your cultural or political views, you may not want some expert or the government to tell you what is right or wrong; you may want to decide it yourself.
So I think there will be harmonization, which is fine for people. People will want harmonization so that they don’t have to care; they can trust experts. But there will be areas where people want to be the master and use AI the way they want, and discuss with their neighbors what is right or wrong, and not with the government or people from far away.
So I think we will have to live with some kind of diversity in this field.
>> TIMEA SUTO: Thank you, Thomas.
Sulafah, how do you see this?
>> SULAFAH JABARTY: Well, capitalising on what was just said, I can see how we're all coming closer to the same area. I really liked what you said in terms of what we need to develop or not develop, because this area is actually re‑qualifying the whole drive; it becomes, okay, we need to regulate this sector, so let’s go and write regulations every day and question everything.
And as she said, this is a scary new thing.
And the idea is actually we really need to be very objective but also very connected to the technology itself and to the society itself.
So I think Paloma, if I’m saying the name right, said something about how the speed of technology sometimes exceeds the speed of regulations, and it’s not fair to ask businesses to slow down and just wait for regulations, which does happen sometimes.
On the other side, in the business world, a very small example from the cybersecurity area, which is a very, very highly regulated area and still part of this whole crowd, as they say: some of the applications we provide go to very highly regulated entities, and every now and then we need to adjust the applications we provide to the cybersecurity regulations, which are updated very frequently in our country.
So we ended up realizing that some entities just give us the regulations and the updates as they are and want us simply to adjust the application to them, without actually having an eye for the business itself or the business owners in the organization. We end up in a place where even the authorized users can’t get into the application. And then we have to bring some perspective into it; we actually bring our business culture, our business understanding, to them.
And this brings us back to why we need multistakeholder‑governed frameworks: because we need to bring society in, academics in, and technology people and businesspeople all together.
And I guess if I want to sum up, I think we need flexibility, coordination, and awareness. Awareness is a very important part, because to give people the right foundation and the right ground to be able to think with us along the same harmonized approach, we first need to enable them to know what they need to know.
And that also brings us back to being very clever about actually inviting the right entities and the right stakeholders to participate in this. Some people are very closed in the boxes of regulation, law, or academia, away from the other side, which is the business itself. No one should work on this in a closed box. They need to be very much attached to live, embedded data and informatics, and this is what it’s all about.
So I’m sure we all sometimes find people working on this who are very isolated from the core and the spirit of this technology, AI, which is based on very live data and information flows. So I think what we need, in the end, and what we all aim to reach, is a very robust, trusted, and adaptive framework that everyone can use all over the world.
>> TIMEA SUTO: Thank you very much, Sulafah.
Noura, how do you see this going for you?
>> NOURA ALHAKBANI: Actually, I see the global forum as a very good place to get everyone thinking together. I was noticing that now everyone is afraid of what AI will do and how AI will develop, and I can see why. When I started studying AI, it was just: I’m doing an AI algorithm or a machine learning algorithm in one specific area and it will, for example, find a tumor. Now it’s a different thing. It’s a generalized model. And what happens is that the creators of the AI really don’t know how the AI will respond, because they teach the AI through the learning model, and then the AI responds the way it responds.
So regulating it, I see it’s important to regulate it from the beginning, from entering the data, from the early steps. Because whenever the data is in, like after it is in, it is very difficult.
For example, with a cake: before mixing the ingredients, you can still take an ingredient out. But if I asked you to take out an ingredient, or whatever data, after baking the cake, it’s kind of impossible. And that’s what happens; once it’s in, whenever the risk comes, it will come anyway. So I do see why there is great concern.
And while I see there is great concern about how to regulate it, I also see that it’s coming, and it’s coming strongly, because it is very beneficial, and you can see the benefits of it day after day, in healthcare, in every aspect. You can see that it’s very beneficial.
Like last year, there was a surgery where a blind girl is now able to see because of an AI‑assisted operation. So there are huge benefits.
The fear, we can understand.
But the other thing is that the governance should be very specific for each sector. It should be very different. We can’t have just one framework that governs everything. Every sector is completely different and has its own characteristics, beyond the differences of society and region.
So I think we're on the right track. We're working, it’s a work in progress, and let’s hope for the best.
>> TIMEA SUTO: Thank you. Step by step, and no one‑size‑fits‑all, I think.
Paloma?
>> PALOMA VILLA MATEOS: Yeah, thank you.
So Thomas and also Sulafah have said something which is really relevant for me: the definition of "high risk." No? I mean, if we think of the European AI Act, in the end, what we have here is mostly a regulation of high‑risk applications. And here, we are developing this standardisation process.
And the problem is how to go from the theory to the real world. This is something more difficult than some policymakers thought it would be.
Last week, for example, we were in Brussels having some conversation with the AI office. So they have a mandate that in the next seven months they have to come up with this code of practice, and they have thousands of people participating in this code of practice.
At the same time, we have responded to a public consultation, again, on the definition of some of the high‑risk applications and so on.
So it’s more difficult than it seems. In the end, it is true that we as a company have to protect people’s rights, safety, and so on, but we also have to protect innovation in Europe, and our ability to compete in the global economy. So this balance is really difficult.
I do think that engaging with companies is really relevant, because a purely theoretical approach sometimes works against what we are trying to do. And in parallel, I have to say that companies are also learning how to work with responsible AI. At the GSMA, for example, you know the GSMA, we are now working on a Responsible AI Maturity Roadmap, trying to provide a framework for companies to work on an AI governance strategy so that, from beginning to end, we are able to provide an ethical AI system.
So this is going hand in hand, and I think it is important, as I said, to combine and to balance people’s rights and innovation. This will be even more relevant in the next year, when in Europe, for example, we will see the new code of practice, standardisation, and CEN‑CENELEC. So it’s critical now in Europe to get that balance right, because it could be a regulation that other parts of the world look to. So it is important that we do it right. Thank you.
>> TIMEA SUTO: Thank you, Paloma.
Melinda?
>> MELINDA CLAYBAUGH: I mostly echo what other people said.
But just on the point about the EU AI Act, I think that it’s an interesting reflection of how unsettled things are. With the code of practice in particular, there’s still live conversation and no consensus on what even is a prohibited practice, or what is a high‑risk practice.
So you would think the prohibited practices would be fairly understood, generally, but it’s not.
So I think just as we ‑‑ I guess my recommendation for kind of convenings and global convenings is to take some time to do it right. Because I think what’s happening is that the EU AI Act was finalized in a frenzy around gen AI development and advanced gen AI development, and now they’re kind of having to figure out, oh, actually, what is prohibited and high risk. Meanwhile, the clock is ticking on compliance for all the companies. So it’s really a difficult situation to be managing.
So I think building more consensus around some of the risks and some of the high risks and what’s inbounds and out of bounds, recognizing, of course, there will be cultural differences, but taking some time to set that step right rather than rushing ahead, as the technology is still advancing, as well.
>> TIMEA SUTO: Thank you so much, Melinda.
So a lot to take away from the panel. We've discussed the importance of a multistakeholder and cross‑cultural approach; the importance of bridging fragmentation in regulatory spaces and building towards common principles, but not a one‑size‑fits‑all approach; working together to define what high risk and low risk are, and the value of conversations, while acknowledging that it might not be the same across regions; making sure that we are looking at and connected to the technology when we're trying to pass regulations, and again the value of the multistakeholder approach here, so that we don’t pass regulations that actually restrict the benefits of the technology we're trying to regulate; going step by step and placing regulations at the right moment, not necessarily taking an approach that covers everything in one go; the role of standards, and balancing innovation and regulation through standards and industry initiatives; and then, of course, taking the time to do it right, allowing time to tell us where the risks actually are, and looking at that also from the user perspective, at the way the technology is being used in the field, as opposed to where we think risks might be coming from.
So a lot coming out from the panel.
We have maybe 20 minutes, a little less than that, to turn to the audience, both online and here in the room.
I understand Paloma will have to leave.
So if there’s anything last second that you want to share before you have to move to your next meeting, please go ahead. Otherwise, we thank you very much for being here.
If there are questions in the room for the rest of the speakers, or online, please let us know; we'll get you a microphone and then we'll try to get you an answer, as well.
You and then them.
>> AMAL AHMED: Thank you very much. My name is Amal Ahmed. I’m currently working at the DGA. I’m not asking a question; I just want to add an emphasis. First of all, welcome to Saudi Arabia. It’s an honor to have you all here. My experience is a total of three years; two I've spent in the private sector and one in the governmental sector at the DGA.
I want to say that it’s really exciting working here, and I've seen how the government sector is working very closely with the citizens to be human‑centric.
And I've realized a challenge that we are facing in enhancing the practices of creating new products, which is how to actually adhere to the best practices that are available while doing what humans really need.
Because the more we engage with the different stakeholders through the workshops, the more we realize that some of the practices we're following don’t fit very well at the product level when it comes to, let’s say, creating a certain feature.
Going through the right process is sometimes not the best option for that.
So this is one of the things that I've seen.
And it's kind of a balancing act between the frameworks and the reality itself.
>> JACQUES BEGLINGER: My name is Jacques Beglinger. I’m from Switzerland. I’m here with the EuroDIG, the European IGF, and with the Swiss IGF process, but also in the business ICC team.
My question follows on from what Thomas was saying about how different perceptions mean different aversion to, or embrace of, risk. Wouldn't that call for governments and for business to engage much more in education and in explaining as much as possible, so that users can make a free choice?
>> TIMEA SUTO: Thomas, the question was addressed to you, I think. But all of you around the table, if you’d like to elaborate a little bit on how we educate around AI.
>> THOMAS SCHNEIDER: Well, I do not necessarily think that it was addressed to me. But, of course, it is what I said before about people swimming in the river in Switzerland: they don't want the government to forbid swimming in the river. They want the government to make sure that the water quality is okay so there's no damage. They want the government to make sure that everyone properly learns how to swim at school, and that society also teaches foreigners and immigrants how to deal with water. And they also want the drivers of the cargo ships to know: okay, I go on the left and the people are on the right, so I will not kill them.
So education is key to freedom of choice, in the end, and to making people adaptive, able to assess the risks in a situation that may not have been foreseen. Because you may set up rules, but reality may not be foreseen by the rules. And then what do you do?
The more people, or the system, or the society are able to deal with risks, also in unforeseen moments, and we will probably have those with AI too, the easier it is, of course, for people to react.
>> TIMEA SUTO: Thank you, Thomas. Does anybody else want to react to what we've heard from the audience? If not, are there any other questions?
In the back there.
>> WOUTER COBUS: Hello? Yeah? Okay. Great.
Thank you. My name is Wouter Cobus. I'm a standardisation advisor with the Dutch Government.
I'm seeing a difference between the Internet, which we discuss at the IGF, and AI: the Internet is founded on standards, really based on standards, whereas in AI we are now trying to develop new standards. And I can imagine that difference also has implications for how we govern it.
So, what are your opinions about how this difference affects the governance model that we have to choose for AI compared to the Internet?
>> TIMEA SUTO: Some question there about the role of standards and whether standards need to come before development, or development needs to come before standards, if I understood the question correctly.
Any other questions that we could maybe walk through together? No?
It’s quite unfortunate that Paloma had to leave because she always has a lot to say on standards, but perhaps others? Melinda, do you want to take that up?
>> MELINDA CLAYBAUGH: Actually, I'm not that close to the standards development work. In the U.S., I can say that for the quote/unquote "standards," and I mean not the ISO things, NIST is the primary soft-standards body, and they have been focused primarily on risk management frameworks for gen AI. I think there's a place for that, because it is a standardisation of a process for how to assess and mitigate risks, which you want to make standard across anyone developing and deploying AI.
As for the technical standards, which I know are so important to the Internet, I actually don’t have a view on them. I defer to you, if you’re saying it’s more challenging in the AI space.
>> TIMEA SUTO: Thomas?
>> THOMAS SCHNEIDER: Maybe just a quick reaction. The question is, what do you mean by standards on the Internet?
I mean, of course, TCP/IP has been there for a few decades, but the IETF is continuing to develop norms and standards.
And also there, basically, it's probably not fundamentally different, because somebody proposes a standard, you test it, with running code and so on, and if nobody has a problem with the standard, then it may become the de facto standard; although you may have competing standards, or a variety of standards, as you had with television and earlier technologies. So you may have competing standards, and over time maybe one or two of them will succeed simply by being the most attractive, not necessarily the best, but the most attractive for businesses or whatever.
So I don’t see a fundamental difference.
But of course, there is a difference between a standard for an infrastructure, if you take the Internet as an infrastructure, and a standard for a service using that infrastructure. So, of course, standards are also case-specific, but I don't see a fundamental difference in logic, because there, too, you just try and see what happens, and then you standardise as you go, more or less.
>> TIMEA SUTO: Thank you, Thomas.
Yes, just one thing, if I can add from my role as the moderator. We also need to make sure that, as we develop standards, we are mindful of not fragmenting the space further; that standards serve the interoperability approach we want to take to regulation and to the actual use of the technology; and that standards do not end up creating pockets of technology, where this technology works on this standard and the other one works on that standard and the two don't talk to one another, because then we are actually fragmenting the opportunities that we can get out of the technology. That's just my two cents.
But we have a question there.
>> AUDIENCE: When we talk about standards, we also need to bear in mind that standards are not carved in stone. So for me, and also from my experience in business, it's okay to have standards, but they shouldn't be too rigid to start with, and there must be a serious review process, or at least the expectation that they will be reviewed once flaws are detected. In that sense, what has been done at the Council of Europe, which is principle-based, is fine. Whether the AI Act went a little bit too far in this respect, with not enough expectation that it will be revised fairly soon, remains to be seen; as we saw with the GDPR, which was not revised so quickly, there is something to learn from that. But I think it's really essential that there is a perspective, and a certain know-how on the subject, that there will be revision.
>> TIMEA SUTO: Thank you for that addition.
I think we seem to have exhausted the questions from the audience. I hope not the audience itself. (laughs)
We have, yes, about five minutes to end our session. So I just want to turn back to the panelists here on the podium and ask: what is your main takeaway from the session? If we still had the character limitations that we have on social platforms for expressing our opinions, what would be your one-sentence takeaway that we can put in the report about what we discussed today?
I’m going to skip the speaking order and I’m going to start with Sulafah and just go around the people here.
>> SULAFAH JABARTY: I think mostly, to make this sustainable, it's actually the harmonization of the global framework; we've heard bits and pieces of that from different backgrounds. And I guess we all agree that the more the process is flexible, inclusive and, as they say, connected to multiple stakeholders, listening to everyone and giving everyone the space to embed their input in the process, the faster, more convenient and more sustainable it becomes. Because, in the end, this is an ongoing process. So the more the flow is connected to multiple entities, the more sustainable and objective it is, if we may say, considering all of the aspects together.
>> MELINDA CLAYBAUGH: Yeah, I echo that, and I agree that it's about finding the balance between what we agree on and then allowing for variability. So, setting a floor, and then you can add to it as needed for the use case, for the country, for the context in which something is being deployed. Firming up the foundation, and then perhaps looking to sector-specific assessments beyond that, however that differentiation should be implemented.
>> TIMEA SUTO: I like that, the floor, and then allowing space to move up.
Lucia?
>> LUCIA RUSSO: Yeah, I think for me, as well, it is this notion of having an adaptive framework, not something set in stone that you can't review and can't reopen, especially in light of the speed of the technology and the length of the policymaking process; this notion of future-proofing legislation or regulation so that it is not set in stone, or so that you have processes to update your requirements.
And also, I think, really, the need for what we call a risk-based approach, tailored well to the use cases but to the sectors, as well.
And I think Melinda expressed it very well, this notion that we have advanced AI systems and then what we may call everyday AI; and Noura was also mentioning that transition from narrow AI to the large foundation models that can now do much more.
So I think that is at the core of what we call a risk-based approach: to tailor the requirements that are imposed to a really careful consideration of what the impact will be.
>> NOURA ALHAKBANI: Hello? Yes. I do agree with Lucia that it should be adaptive, especially since it's global. And, as Melinda also said, we should have a baseline and then allow for differences, and I think all of that could be done through dialogue, and again dialogue, and an iterative process of setting the standards. And it should happen regularly and continuously, because things change; our beliefs and our points of view change with the changing world. So I will actually emphasize whatever they said, and that's all.
>> TIMEA SUTO: Thank you.
Thomas?
>> THOMAS SCHNEIDER: Yes, thank you.
I also think, what a surprise, that "adaptive" is the keyword of this afternoon. And I think it is important that the framework is adaptive, but the goal should always be the same: to make sure that people are free, but use their freedom with responsibility; that there is protection for human rights, democracy and the rule of law; and also that there are clear rules for the industry, so that they know what they can do and what they cannot do, at least when a certain level of risk is reached. So the principles should be stable and reliable, but the way they are implemented, the way we make sure that people continue to be free, yet safe to the extent that they want to be safe, needs to be adaptive.
And I think also, my country is not a member of the EU, but we are grateful to the EU that they dared to do something from which we can all learn. And of course, our colleague from Telefonica is right, it's not easy; but not doing anything and just letting everything go may not be the right thing either. So we watch closely what the EU is doing, what difficulties the Member States have in implementing this at the local level, and so on. And, of course, yes, they are the frontrunner. They have some advantages, but they also pay a price. But as long as we stay engaged and can learn from each other, I think it's a mutual benefit.
In my small country, we will try to achieve the same goals with something different, something more agile, something smaller, because we also have to; we don't have the resources that the EU, as a big group of countries, has.
So as long as we can learn from each other, I think, yeah, we will go in the right direction, if we share the basic fundamental principles of freedom and respect and autonomy and human rights and solidarity and so on.
Thanks.
>> TIMEA SUTO: Thank you. So, we started from one word, or one hyphenated word: risk-based. And then we added quite a few to it, but I think Thomas is right that the word we seem to converge around in the end is "adaptability": an adaptive framework that moves with the times, that moves with the technology, that moves with the changes in our views and perspectives and with the way our culture develops together with the technology, while making sure that we keep our eyes on the prize, on the right goals that we set for ourselves at the beginning.
To all the words that we've said today, I will just add two more, which is thank you. Thank you to all of you who have come and shared your knowledge and expertise with us for the past hour and a half.
Thank you to all who came to listen and contribute to the conversation.
Thank you to those who joined us online. I know Paloma had to go, but the audience that is there still.
I hope this was as useful for you as it was edifying for me, and I hope to see you next year at the next IGF to see how we progress from "adaptive" to who knows what the next word will be.
Thank you, everyone.