The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> ANANDA GAUTAM: Welcome, everybody. We're just about to start our session on Ensuring Human Rights and Inclusion: An Algorithmic Strategy.
We have two of the speakers on-site and two of them online. I would like to welcome you all. We will begin with opening remarks and then move on to questions. So I'd like to request Monica to start with her opening remarks along with her introduction, and then we will go to the rounds of questions.
>> MONICA LOPEZ: Okay, yes. So can you hear me okay? Yes? All right.
Well, first of all, thank you to the forum organisers for continuing to put together this summit on really such critical issues related to digital governance. I’m really excited to be here, at least online.
And I also want to thank Paola Galvez for really bringing all of us from across the world together, whether virtually or in person.
So as a brief introduction, I’m Dr. Monica Lopez and I come from a technical background. So, I’m trained in the cognitive and brain sciences, and I've been in the intersecting fields of human intelligence, machine intelligence, human factors, and systems safety now for 20 years.
I’m an entrepreneur and the CEO and co-founder of Cognitive Insights for Artificial Intelligence, and I essentially work with product developers and organizational leadership at large to develop robust risk management frameworks from a human-centered perspective.
I’m also an AI expert on scaling responsible AI for the Global Partnership on AI.
So I certainly do recognise many, many individuals.
So as for my contribution, I really do hope to complement the group here. I’m coming from the private sector perspective.
Certainly, as we all know, in today’s rapidly evolving digital landscape, algorithms have essentially become the invisible architects, perhaps we can call them that, of our social, economic, and political experiences.
So what we have are very complex mathematical models designed to process information and make decisions, many times fully automated, that now essentially underpin every aspect of our lives, as we all well know at this point, from job recruitment to financial services, criminal justice, and social media interactions.
So this promise of technological neutrality essentially masks a reality, one where algorithmic systems are not objective but are instead reflections of the biases, historical inequities, and systemic prejudices across our societies, essentially embedded in their design and training data.
So, as we all know as well, the direct human rights implications of algorithmic bias are profound at this point and really far-reaching. These systems essentially perpetuate and amplify existing inequalities, creating digital mechanisms of exclusion that systematically disadvantage marginalised communities.
So just very quickly, before I get into why we need a human-centered perspective on this, some very clear examples that you may be familiar with already come from facial recognition technology, or FRT, which has demonstrated significantly higher error rates for women and people of color. We continue to see that problem.
AI-driven hiring algorithms have been shown to discriminate against candidates based on gender, on race, and other protected characteristics.
And AI-enabled criminal justice risk assessment tools, and we’ve certainly seen this, I’m based in the United States, have been shown to continue to perpetuate racial biases, leading to more severe sentencing recommendations for black defendants compared to white defendants with similar backgrounds.
So essentially, why do we have this?
And the root of these challenges really lies in the fundamental nature of algorithmic development. We know that machine learning models are trained on historical data that inherently reflect, as I mentioned earlier, these societal biases, power structures, and systemic inequalities.
I want you to take a moment right now to consider what a data point even means, how a single data point has limits. Those of you who work closely with data on a daily basis, and by that I mean whether you’re collecting it, cleaning it, analyzing it, or drawing conclusions from it, know that the basic methodology of data is such that it systematically leaves out all kinds of information.
And why?
Because data collection techniques have to be repeatable across vast scales, and they require standardised categories. And while repeatability and standardization make database methods powerful, we have to acknowledge that this power comes at a price: it limits the kind of information we can collect.
So when these models are then deployed without any sort of critical examination, they don’t just reproduce existing inequities; they actually normalise and scale them.
So here is where I would argue, and I know the rest of the panel will continue to discuss this, that a human-centered approach to algorithmic development offers, at this point, a critical pathway to addressing these systemic challenges.
And essentially what this means is that we need to reimagine technology as a tool for empowerment and well‑being instead of a tool for exclusion.
So in this regard, prioritising human rights, equity, and meaningful inclusion at every single step of technological design through implementation, and by that I mean across the entire AI lifecycle becomes essential.
So I work with a lot of clients. As I mentioned earlier, I am in the private sector. And there are key strategies right now, very clear ones, through which we know we can advance this human-centered approach.
I'll just briefly mention five of them real quick.
So first, we need comprehensive diversity across algorithmic development. I’m sure you’ve been hearing that a lot, but the problem is that the transformative change has not really begun. And we know that if we diversify teams, we do get more responsible development of algorithmic systems. We do get new perspectives at the table. So I would say that’s absolutely essential no matter what moving forward.
The second element is rigorous algorithmic auditing and transparency. Again, that is another element that we have seen. It is now, in fact, in part related to the European Union’s AI Act requirement. But we need to see this across the three perspectives of equality, equity, and justice.
And this is not just for big tech companies to be engaging in. This is truly for everyone.
We know that irrespective of emerging legal requirements, whether in jurisdictions where they exist or in those where there isn’t much work happening on the legal side, all organizations must implement mandatory algorithmic impact assessments to thoroughly examine potential discriminatory outcomes before deployment. Then, not just that, but continuously monitor those outcomes as more data are collected and models drift.
I have noticed that when companies do that, whether they’re small or medium‑sized or large, we do see better outcomes.
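A minimal sketch of what that continuous post-deployment monitoring might look like in practice, assuming a baseline sample of model scores saved at assessment time is compared against each new monitoring window; the data, function names, and thresholds are illustrative assumptions, not a prescribed method:

```python
# Hypothetical drift check: compare the score distribution recorded at audit time
# with the scores observed in the latest monitoring window (population stability index).
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Larger values indicate the current score distribution has drifted further from baseline."""
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    # Clip both samples into the baseline range so outliers land in the outer buckets.
    base_frac = np.histogram(np.clip(baseline, edges[0], edges[-1]), bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)[0] / len(current)
    base_frac = np.clip(base_frac, 1e-6, None)   # avoid log(0) for empty buckets
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline_scores = rng.beta(2.0, 5.0, size=5_000)   # scores sampled during the impact assessment
    current_scores = rng.beta(2.6, 4.2, size=5_000)    # scores from this monitoring window
    psi = population_stability_index(baseline_scores, current_scores)
    # Rule-of-thumb threshold only: a PSI above roughly 0.2 would trigger a re-audit.
    print(f"PSI = {psi:.3f}", "-> investigate / re-audit" if psi > 0.2 else "-> stable")
```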
A third element is the establishment of proactive bias mitigation techniques. Now, there are all sorts of technical strategies for that. Some of them are based essentially on what I was mentioning in regard to really thinking about what data means, so careful curation of the training data. We need to make sure it truly is representative and balanced across the data sets. It does matter and it does change outcomes.
Implementation of fairness constraints. Also the development of testing protocols that specifically examine the potential for discriminatory outcomes. We know that when you identify that beforehand and you actually look for it, you will see it, and you can actually mitigate it and improve on the issue.
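As a concrete illustration of the kind of testing protocol described here, the following is a minimal sketch that computes per-group selection and false-positive rates from a decision log and flags a disparity; the synthetic log, group labels, and the four-fifths threshold are illustrative assumptions, not a standard mandated by any framework:

```python
# Hypothetical pre-deployment audit: check model decisions for disparate outcomes across groups.
from collections import defaultdict

def group_outcome_rates(records, group_key="group"):
    """Return per-group selection rate and false-positive rate from a decision log."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "fp": 0, "negatives": 0})
    for r in records:
        g = stats[r[group_key]]
        g["n"] += 1
        g["selected"] += r["decision"]
        if r["label"] == 0:                      # ground-truth negatives
            g["negatives"] += 1
            g["fp"] += r["decision"]             # selected despite a negative label
    return {
        grp: {
            "selection_rate": s["selected"] / s["n"],
            "false_positive_rate": (s["fp"] / s["negatives"]) if s["negatives"] else None,
        }
        for grp, s in stats.items()
    }

if __name__ == "__main__":
    # Tiny synthetic decision log: decision = model output, label = ground truth.
    log = [
        {"group": "A", "decision": 1, "label": 1}, {"group": "A", "decision": 1, "label": 0},
        {"group": "A", "decision": 0, "label": 0}, {"group": "A", "decision": 1, "label": 1},
        {"group": "B", "decision": 0, "label": 1}, {"group": "B", "decision": 0, "label": 0},
        {"group": "B", "decision": 1, "label": 0}, {"group": "B", "decision": 0, "label": 1},
    ]
    rates = group_outcome_rates(log)
    print(rates)
    # One common audit heuristic: flag if the lowest selection rate falls below 80%
    # of the highest (the so-called four-fifths rule used in some hiring contexts).
    sel = [v["selection_rate"] for v in rates.values()]
    print("disparity flag:", min(sel) < 0.8 * max(sel))
```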
The fourth element is of course the classic need for legal and regulatory frameworks. So here I can’t stress enough at this point that governments and international bodies, we have to truly come up with comprehensive regulatory frameworks that treat algorithmic discrimination as a fundamental human rights issue. And from a business perspective, what this means is that there needs to be clear legal standards for algorithmic accountability. There also need to be very clear mechanisms for individuals to be able to challenge algorithmic decisions. There certainly are not enough. And even in some cases where we have the requirement for companies to actually put on their website their auditing results, that is still not enough.
And then of course, we need significant penalties when those systems violate these standards.
And then the last issue, which is the fifth, is that we need ongoing community engagement. I also cannot stress enough that inclusion does matter and it requires continuous dialogue with the communities most likely to be impacted by algorithmic systems. This is not an easy task. It’s a lot to ask for, but we know, and I’ve seen it with companies that actually make concerted efforts, that it works to create participatory design processes across the AI lifecycle. That essentially means you’re establishing relevant feedback and communication mechanisms as you create and design these systems, you pilot them, and you work with those individuals. And then you’re essentially empowering marginalised communities to actually want to actively provide their input, because it is of value.
So what I’m calling for here, essentially, to conclude, is a fundamental re-imagining of technological innovation. We know at this point that algorithms are not neutral tools, but very powerful social mechanisms that either perpetuate or challenge existing power structures. So if we change our methods now, every single one of us, in the design and deployment choices of today, then I think we will very profoundly shape the future of human rights in the digital age in a very positive way.
So I look forward to your questions, and I know we’re going to discuss this more in detail. So thank you. Thank you for listening.
>> ANANDA GAUTAM: Thank you, Monica, for all your thoughts.
And I think you have also covered the second part of the questions already. My apologies. I should have mentioned the time before.
So I'll go to Paola to give a short introduction.
For the first round, let’s wrap within five minutes, and then we’ll go for the second round of questions. For Monica, I think we’ll be going a bit short on the second round. We’ve covered most of the things already.
So Paola, over to you.
>> PAOLA GALVEZ: Thank you, Ananda.
Hello, everyone. Thank you so much for joining us for this very, very critical conversation.
I’d like to start by posing a question: What does it take to make society more inclusive?
You know, my interest in social impact began early, inspired in part by my grandfather, who was a judge in the Central Highlands of Peru and who often spoke about the societal disparities he witnessed. I went to law school believing it would really equip me with the tools to drive meaningful change in a country with high levels of inequality and social disparities, like the country where I’m from.
But my first year was not inspiring at all. I think my courses were disconnected from real-world problems.
But my perspective really changed in 2013 when I began an internship at Microsoft. I was watching a demonstration video of the Seeing AI prototype. It was 10 years ago, but it was this project that used artificial intelligence to help the visually impaired perceive their surroundings. That opened my eyes and really showed me the profound potential of this technology as a catalyst for social change.
So I said, as a lawyer, I can really help leverage this technology as a force for inclusion, and I can use public policy to help drive human-centric and evidence-based policy. And that’s when my commitment started to transform Peru into a more inclusive and digital society. And I think that’s the path that led me to what I’m doing now, and I hope it will help beyond.
So I worked in the private sector for a long time. I was in the position Dr. Monica Lopez was describing, seeing how the private sector works. Then I received a proposition from the government to work there, to help them with the National AI Strategy and the (?) strategy. And most of my friends told me, "You’re going to be so frustrated. The bureaucracy is going to kill you. Come on, you’re used to Microsoft, big tech." But I said, no, I can actually bring and shed light on disruptive ways to govern, so I decided to do it.
I’m a firm believer in participatory, bottom-up processes, so the first thing I did was form a multistakeholder committee to develop this policy.
We’re here at IGF, a global forum. We’re talking about AI and data at a global level. And I have seen firsthand a local experience bringing civil society, academia, private sector together to find solutions to challenges, and one of the most challenging things is AI policy.
And I do believe that protecting democracy, human rights, and the rule of law, and establishing clear guidelines on AI, is a shared responsibility that a government alone cannot carry, nor a private sector company, nor academia. It is an endeavor that must be undertaken with a multistakeholder approach.
But I do think that one stakeholder is crucial in this pursuit, and that is youth civil society. The youth must be included, and youth engagement is a critical area that we need to protect now. That’s what I believe and what I wanted to mention in these first remarks, because I do see generative AI producing fake and biased synthetic content, large language models reinforcing polarization, and poorly designed AI-powered applications that are not compatible with assistive technologies, leading to discrimination against youth with disabilities.
And I have the expert here. Yonah will mention more about that.
But apart from that, I sincerely believe that AI holds immense potential as a technology if we use it wisely. AI systems can break down language barriers. I mean, the IGF is as powerful as it is, and the Youth IGF and the youth sector of the Internet Society are powerful; we’re a community of more than 2,000 young people connected, and sometimes we use translation that is powered by AI. So that’s powerful; or, of course, making resources more accessible to diverse youth populations.
Sadly, AI has yet to live up to its potential. Dr. Monica Lopez mentioned most of its challenges, which I absolutely agree with. AI is reproducing society’s biases. It is deepening inequalities. I heard someone saying, "But that’s just the way the world is. The world is biased, Paola. What do you think? That’s what AI is going to do." And yes, that’s true up to a point, but it depends on us how we want to develop this technology. It depends on us what results this technology is going to provide as an output. Because data is the oxygen of AI, and transparency should be at its core. So it’s up to us to shape the future of AI now, to talk about data that should be more representative.
And the focus of the IGF on bringing youth to the discussion is something I really want to congratulate, because we have a big youth community at this IGF.
So I’m really looking forward to this discussion, and over to you, Ananda.
>> ANANDA GAUTAM: Thank you so much, Paola, for touching a bit on how powerful this can be; the work of your government in bringing together a multistakeholder committee was really commendable.
I’d like to go to Yonah Welker. I'll also give you five minutes to briefly introduce yourself and touch upon the base that Dr. Monica and Paola have set up. Over to you.
>> YONAH WELKER: Yes, thank you so much. It’s a pleasure to be back in Riyadh. Three years ago, I had the opportunity to curate the Global AI Summit on AI for the Good of Humanity, and we have continued this movement. I’m a visiting lecturer for the Massachusetts Institute of Technology, but I’m also an Ambassador of EU Projects to the MENA region. And my goal is to bring all of these voices and ideas into actual policies, let’s say the EU AI Act or the Code of Practice.
Today, I specifically would love to address how it may affect the most vulnerable groups, as Paola mentioned, individuals with disabilities. And that’s why I would love to quickly share my screen. Hopefully, you can see it.
So 28 countries signed the agreement about AI safety, including not only Western countries, but also countries of the Global South, Nigeria, Kenya, and countries of the Middle East, Saudi Arabia and the UAE.
And the big question is how these actual frameworks can address designated and vulnerable groups. For instance, currently there are one billion people, that’s 15% of the world, living with disabilities, according to the World Health Organization. And it’s important to understand that sometimes these disabilities are invisible. Let’s say neurodisabilities: at least one in six people live with one or more neurological conditions.
And it’s actually a very complex task to bring all of these things into the frameworks. That’s why, for the EU, we have a whole combination of laws and frameworks. We address classifications and taxonomies in the Accessibility Act and the standardization directive. We’re trying to address manipulation and addictive design at the level of the AI Act, the Digital Services Act, and GDPR. We’re trying to understand and identify higher risks for systems related to certain critical infrastructure, transparency risks, and prohibiting particular uses of affective computing. But still, it’s not enough, because we need to understand how many systems we actually have, how many cases we have.
For instance, for assistive technologies, we have over 120 technologies according to the recent OECD report, and I had the opportunity to contribute to this report.
We use AI to augment smart wheelchairs, walking sticks, geolocation and city tools. We use AI to support hearing impairment using computer vision to turn sign language into text. We support cognitive accessibility including ADHD, dyslexia, autism.
But we should also understand all the challenges which come with AI, including recognition errors for individuals with facial differences or asymmetry, or craniofacial syndromes, who are just not properly identified by facial recognition systems, as was mentioned by my colleague.
Or cue identification errors, where individuals can't understand an AI interface, they can't hear or see the signal; or when we deal with excluding patterns and errors, or exclusion by generative AI and language-based models.
Also, we have all the complexity driven by different machine learning techniques: supervised learning, which is connected to errors induced by humans; unsupervised learning, which brings in all the errors and social disparities from history; or reinforcement learning, which is limited by training environments, including in robotics and assistive technologies.
And finally, we should understand that AI is not limited to software. It’s also about hardware; it’s about the human centricity of physical devices. It’s about safety: motion and sensing component safety, power component and environmental safety, the production and training cycle.
So overall, working on disability-centric AI is not just about words. It’s an extremely complex process of building environments where we have a multi-modal and multi-sensory approach, where we deal with families, caregivers, patients, and different types of users, and try to understand and identify scenarios of misuse, actions and non-actions, so-called omissions, potential manipulation, or addictive design.
So that’s why the next level of AI safety institutes, offices, and oversight will include all these comprehensive parameters. We talk not only about a risk-based approach, but about understanding different scenarios: workplaces, education, law enforcement, immigration. We think about taxonomies, frameworks, and accident repositories, working with the UN, the World Health Organization, UNESCO, and the OECD.
And finally, we try to understand the intersectionality of disabilities, thinking about children and minors, women and girls, and all the complexity of history behind these systems and context.
Thank you.
>> ANANDA GAUTAM: Thank you, Yonah, for your wonderful insights on how AI could be used in assistive technologies, and on the challenges: even a very minor issue can matter, because we cannot accept even a minimal level of error in the use of AI in, for example, the healthcare system.
And we’ll come back to you with questions. So I'll ask Abeer to talk about herself, and I'll give you five minutes as well. Please introduce yourself and then give your opening remarks. Thank you.
>> ABEER ALSUMAIT: Thank you. Hello, everyone. It’s a privilege to be a part of this discussion and I would like also to thank Paola for initiating this and kick‑starting it. I’d like to thank the rest of the panel and the moderators, as well, and event organisers.
Just to introduce myself briefly, this is Abeer Alsumait. I’m a public policy expert with a little over a decade of experience in cybersecurity, ICT regulation, and data governance in the Saudi government.
I hold a master’s degree in public policy from Oxford University and a bachelor of science in computer and information sciences.
My interest lies in shaping inclusive and sustainable digital policies that drive innovation and advance the digital economy.
I would like to briefly start the conversation of this session by mentioning examples that show that, while algorithms and AI promise efficiency and innovation, they have the power to replicate and amplify crucial inequalities when not governed responsibly.
The first example I would like to mention is from France. In France, a welfare agency used an algorithm to detect fraud and errors in welfare payments. And this algorithm, while in theory a wonderful idea, in practice ended up impacting specific segments of the population and marginalised groups, specifically single parents and individuals with disabilities, far more than any others. It ended up tagging them as high risk more frequently than the rest of the beneficiaries of the system. So this impact was profound on those individuals. It led to more investigations, a lot more stress, and in some cases even suspension of benefits. So in October of this year, a coalition of human rights organizations launched legal action against the French government over this algorithm used by the welfare agency, arguing that the algorithm actually violates privacy laws and anti-discrimination regulations. So this case is a reminder of the risks that can be inherent in opaque and perhaps loosely governed AI tools.
Another example I would like to quickly highlight is in the healthcare sector, where a 2019 study from Pennsylvania University highlighted an AI-driven healthcare system that was used to allocate medical resources for a little over 200 million patients, and that system relied on historical healthcare expenditure as a proxy for healthcare needs. So this algorithm was not considering the systematic disparity in healthcare access and spending in society at that time, and it ended up resulting in black patients being up to 50 percent less likely than their white counterparts to be flagged as needing enhanced care. So even though this algorithm and this system were intended to streamline healthcare delivery, they ended up perpetuating inequality and deepening distrust in AI systems and in technology overall.
So these examples underscore one undeniable truth: that algorithms are not neutral. When built on biased data or flawed assumptions, they can amplify existing injustices and exacerbate exclusion, often impacting the most vulnerable populations.
These challenges and these issues generated actions from governments at an international level, one of which, as mentioned by Dr. Lopez, is the EU AI Act that entered into force this year. It classifies AI systems based on risk, designating areas such as welfare, employment, and healthcare as high risk, where very high standards of transparency, quality, and human intervention are required.
A lot of nations and governments followed suit, I believe. One example of that is here in my country, in Saudi Arabia. The Saudi Data and Artificial Intelligence Authority, established a few years ago, recently adopted AI Ethics Principles that emphasize transparency, fairness, and accountability.
Therefore, I believe governments play a very important role. While every actor and every player is really important in discussions and conversations, governments have critical roles in regulating and establishing responsibility and advancing the way forward for AI adoption in an equitable and fair way.
Thank you.
>> ANANDA GAUTAM: Thank you, Abeer.
So I'll come back to Dr. Monica. You have touched on how algorithmic biases arise and what the role of the private sector could be. I’d like to ask you: what measures could the private sector take to overcome those biases, along with the role of other stakeholders? If there are any best practices that could be shared, kindly share them.
And I'll ask you to wrap up very soon.
Thank you.
>> MONICA LOPEZ: Yes, absolutely. Thank you. Thank you for that question.
And I know I briefly mentioned some of them, but I'll now highlight some that, in fact, many of our fellow colleagues have already been mentioning.
So the first one, and one that is starting to happen, but not to the extent that I believe should happen more, is the whole question of diversity in teams. Again, we hear this a lot. We hear that we need to bring in different perspectives to the table, but at the end of the day, unfortunately, and I have seen even startups, so small and medium‑sized enterprises who make the argument, "We don’t have enough resources. We can’t." And they actually do. And sometimes it’s as simple as bringing the very customers, the very clients that they intend for their product or their service to be for, to the discussion. So I would say that that is one very key element, and we just need to make that a requirement at this point, and it needs to essentially be something that's a best practice, frankly, at this point.
And the other one is the bias audits. We are certainly seeing across legislation the requirement for it, so one now needs to comply by providing audits for these systems, particularly on the topic of bias, so as to ensure that they are non-discriminatory and non-biased. So that is a good thing.
However, what ends up being the problem is that we haven’t yet standardised the type of documentation, the type of metrics, and the benchmarks. So that is, right now, the conversation, not just in the private sector, but certainly also in academia.
I’m also in communication and work with individuals from IEEE and ISO, who set the industry standards, and this is a very big topic of debate right now: how do we standardise these audits? How do we make sure that we not only standardise them, but actually have the right committees in place, experts, who can then review this documentation?
So I would say that that in a way, while extremely important, sometimes does become a barrier of sorts, precisely because individuals, just organizations rather, companies don’t know exactly what needs to be put into these audits. So that’s the second element.
The third and final point here is the whole issue of transparency and explainability of these systems. We’ve heard many, many times about the black box nature of these systems, but to be quite honest, we know much more about these systems than that. Developers do know the data that is involved. We do make mathematical assumptions. So there’s a lot of information at the very beginning stage of data collection and system creation that we do have, and we’re not necessarily being very transparent about it in the first place.
So I would say that that in and of itself is extremely important, but it is also becoming a type of best practice, because if you can establish that from the beginning, it does have downstream effects across the entire AI life cycle, which then becomes extremely important when you start integrating a system. Let’s say you have a problem, you have a negative outcome where someone ends up being harmed; then you can essentially reverse engineer back, again, if you have that initial, very clear transparency put out in the beginning.
We are starting to see some good practices around that, particularly around model cards and nutrition-like labels, especially in healthcare. There are examples in healthcare; I do a lot of work with the healthcare industry. And so there’s a very big push right now to essentially standardise and normalise nutrition-like labels around AI model transparency, which I think should then be utilised across all systems, frankly, at this point, all contexts and domains.
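A minimal sketch of what such a nutrition-label-style disclosure could contain, loosely inspired by published model-card ideas; every field name and value here is an illustrative assumption, not a required schema from any standards body:

```python
# Hypothetical model-card structure, serialized to JSON so it can be published
# alongside a deployed model for downstream reviewers and affected users.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list[str]
    training_data_summary: str             # what was collected, by whom, over what period
    known_limitations: list[str]            # e.g. groups with higher error rates
    fairness_evaluation: dict[str, float]   # audit metrics, e.g. per-group rate gaps
    human_oversight: str                    # how and when a human can override the model
    contact: str
    last_audit_date: str = ""
    notes: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="triage-risk-score",
    version="0.3.1",
    intended_use="Prioritise follow-up calls for patients already enrolled in care management.",
    out_of_scope_uses=["Denying or terminating care", "Insurance pricing"],
    training_data_summary="De-identified claims from 2018-2022; spending used as a proxy for need.",
    known_limitations=["Proxy label understates need for under-served groups"],
    fairness_evaluation={"selection_rate_gap": 0.12, "false_positive_rate_gap": 0.05},
    human_oversight="A clinician reviews every flag before any change to a care plan.",
    contact="governance@example.org",
    last_audit_date="2024-11-01",
)

print(json.dumps(asdict(card), indent=2))  # publishable transparency artifact
```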
Thank you.
>> ANANDA GAUTAM: Thank you, Dr. Monica.
So I'll go to Paola. You have already worked on the AI readiness assessment for your country, and countries and regions are making declarations. How can these be transitioned into action? Based on your experience, can you share, please?
>> PAOLA GALVEZ: Sure, Ananda.
And what you said is key, right? How to pass from declarations to actions. We’ve seen so many commitments already. So great call. Thank you for the question.
I’d say, first of all, we need to start by adopting the international frameworks on AI. Countries that have not adopted them will be left out. That way, we ensure alignment with global standards and best practices, and that also helps local businesses join in and operate more easily across borders. This is first.
But second, when you start formulating national AI policies, governments need to develop a structured and meaningful public participation process. This means receiving comments from all stakeholders. But it’s not only that, because that happens a lot in my country, I can tell you. By law, they need to publish any regulation for 30 days of comment. Actually, it just happened: the second draft of the AI regulation was published.
But what we need for meaningful participation is the government saying how they took these comments into account. And if they are not considering them, why not?
I believe that the citizens and all the civil society organizations and private sector actors that commented need to know what happened after they commented on any bill.
Third, enhance transparency and accessibility. Any AI policy material must be readily accessible, complete, and accurate to the public.
Then, independent oversight, I think, is a must, Ananda: creating or designating an independent agency. Here, Abeer mentioned the Saudi Data and AI Authority. I think that is a very good example. Sometimes governments have a challenge with this because they say, "Oh, it’s a huge amount of effort, people, resources," right? But if having a new one is not possible, then let’s think: maybe the Data Protection Authority can take over AI capacities. Right?
Also, and I think this cannot be left behind, investment in AI policy, AI skills development. That’s a must. We can have the best AI law, but if we don’t help our people understand what is AI, how to read and know that the AI can hallucinate, we will be lost. So AI skills for the people is a must.
And just to finish, as I always say, with a gender lens. Because gender equity and diversity in AI are a must, and this is something that is not being looked at as it should be. You mentioned I conducted UNESCO's AI Readiness Assessment Methodology, and I’m proud to say that the UNESCO Recommendation on the Ethics of AI is the only document at the moment that has a chapter on gender. And it should be reviewed, because it’s very comprehensive and it has practical policies that should be taken into consideration and put into practice.
And of course, environmental sustainability in AI policy should be considered. It is often overlooked. What is the impact on the energy? Should we promote an energy‑efficient AI solution? Definitely. Minimizing carbon footprint? Of course.
And fostering sustainable practices. I will finish with this data point: when you send a question to a large language model, as we all know, ChatGPT, Claude, Gemini, et cetera, it’s the same consumption that an airplane has in a year flying from Tokyo to New York. So we should be thoughtful about what we are sending to AI, or maybe Google can do it for us, too. Thank you.
>> ANANDA GAUTAM: Thank you, Paola, for your strong thoughts.
I'll come back to Yonah. You have mentioned AI in assistive technologies. So now I'll come to how legal frameworks can complement assistive technologies while protecting the vulnerable populations that are using those technologies. We have briefly underlined that a minor issue might well be a major one in the case of assistive technologies.
Over to you, Yonah.
>> YONAH WELKER: Yes. So first of all, we have a few main elements of these frameworks. The first one is related to taxonomies and repositories of cases. And here, I would love to echo my colleagues, Dr. Monica and Paola: we actually need to involve all of the stakeholders. For instance, co-operating with the OECD, we involved over 80 organizations to understand the existing barriers of access to these technologies. It’s affordability; it’s accessibility; it’s energy consumption; it’s safety; it’s adoption techniques. That is the first thing.
The second thing is accuracy in regional solutions. So one of the lessons we learned working in both the EU and the MENA region is that we can’t localise OpenAI. We can’t localise Microsoft solutions. But we can build our own solutions, sometimes not large language models but small language models, not with 400 billion parameters, maybe with 5, 10, 15 billion parameters, but for more specific purposes or languages. For instance, when we did the research for the Hungarian language, we have 1,000 times fewer training sources for ChatGPT in comparison to English. We have a similar situation for many other non-English languages. It just doesn't work, not only from a regional perspective, but from a scientific research and development perspective.
Another thing is dedicated safety models. Sometimes we can’t fix all of the issues within the model, but we can build dedicated agents or additional solutions which track or improve our existing systems. For instance, currently for the Commission, I evaluate a few companies and technologies which address privacy concerns, compliance with the GDPR, data leakages and breaches, and also online harassment, hate speech, and other parameters. This is also complemented by safety environments and oversight. So it’s the job of the government to create so-called testbeds and regulatory sandboxes. These are a kind of specialised centre where startups can come to test their AI models, to make sure that, on the one hand, they are compliant and, also, that they actually build safe systems. This specifically relates to areas of so-called critical infrastructure. These are areas of health, education, smart cities. And for instance, Saudi Arabia is known for so-called cognitive cities. All these areas are a part of our work when we’re trying to build efficient, resilient, and sustainable solutions.
And finally, there is co-operation with intergovernmental organizations. So for instance, we work on a framework called Digital Solutions for Girls with Disabilities with UNICEF. We work with UNESCO on AI for Children. So we’re trying to reflect on more specific scenarios and adoption techniques related to specific ages, let’s say from 8 to 12 years old; or specific regions; or specific genders; including both the specifics of adoption and safety considerations, and even unique conditions or illnesses which are very specific to a particular region.
For instance, we have very different statistics related to diabetes, craniofacial syndromes, and different types of cognitive and sensory disabilities if we compare the MENA region and the EU. So it’s a very complex process.
As I've mentioned, our policies are now becoming overlapping. So even for privacy, for manipulation, for addictive design, we have an overlap not only in the AI Act, but also in other frameworks, the Digital Services Act, data regulation. So some essential pieces of our vision exist in different frameworks, which even governmental employees need to be aware of.
And the final thing is AI literacy adoption. So we’re working to improve the literacy of the governmental workers and governors who will employ these policies and bring them to life.
>> ANANDA GAUTAM: Thank you, Yonah, so much.
So I'll come back to Abeer. We have been talking about the complexity of making AI responsible, and making AI responsible demands ensuring accountability and transparency. While we are seeing many automated AI systems, who will be responsible if an automated car kills a man in the street? This has been a serious question, and there are other consequences.
So in this context, how can governments ensure responsible AI while ensuring accountability and transparency? Kindly go ahead. Thank you.
>> ABEER ALSUMAIT: Thank you.
So I think this question actually relates to what Dr. Lopez mentioned. The keywords here are transparency and explainability. Of course, for sure, regulations and laws establish responsibilities and make sure every actor involved in any event knows their role and knows when to be responsible. But also, the fact that they can explain and be transparent about how they work, how they operate, and how they might impact other individuals, specifically vulnerable populations, is really key.
And as Dr. Lopez mentioned, private sector knows more than maybe we understand, but we’re not very clear on how we want the transparency and explainability to work.
And maybe my thoughts on that are that governments should work hand in hand with the private sector, should push for standardisation to happen as soon as possible, and should be clear in establishing responsibility and clear about what it means to have transparency for AI and algorithms.
One extra thing that I think governments should also focus on is to establish a right, establish a way, for individuals to challenge such systems and the algorithms that impact their lives. So my idea is that there should be continuous evaluation and risk assessment of how a system is actually working in real life. In case any incident of bias or discrimination happens, there should be a clear way, a clear procedure, for individuals and for governments to start auditing and reviewing any system that is working and impacting the lives of individuals.
>> ANANDA GAUTAM: Thank you, Abeer. Maybe we’ll come back to you going to the Q&A session.
There is one contributor in our audience. I'll ask her to provide her contribution, and then Matilda will bring in what we have in the chat discussions and any questions online, and then we’ll go to the question and answer.
Over to you, please.
>> MEZNAH ALTURAIKI: Thank you so much. My name is Meznah Alturaiki, and I’m representing the Saudi Green Building Forum, which is a non‑governmental and non‑profit organization that supports and promotes green practices as well as decreasing carbon emissions and decreasing energy consumption.
Of course, it contributes to the digital transformation that the world is now witnessing. And for that, I would like to participate and offer a critical perspective, which is that while algorithms offer immense potential to enhance our daily lives, we still face fundamental challenges relating to biases and exclusion.
Now, many of these systems function as opaque, as Dr. Monica said, lacking transparency, which of course perpetuates social disparities and exacerbates discrimination against marginalised communities.
Now, in the absence of proper scrutiny and accountability, algorithms sometimes contribute to human rights violations instead of addressing them. So what should we do about that as civil society?
We need to take an action, and we need to call for a greater transparency and accountability to ensure algorithms are open to scrutiny and include clear mechanisms for identifying and addressing biases.
Of course, we need to integrate human rights into algorithm design, which means we need to focus on developing human‑centered algorithms that prioritise the needs for marginalised groups.
Of course, finally, we need to foster a multilateral collaboration to engage all stakeholders, as you all mentioned, to ensure algorithms are fair and inclusive, considering diverse cultural and social dimensions.
Now, we recommend the following. First, we need to launch a global algorithmic transparency initiative that establishes an international platform to set standards for evaluating the impact of algorithms on human rights and promoting transparency.
Second, design inclusion-oriented algorithms, which means developing algorithmic tools that prioritise accessibility, improve service delivery for people with disabilities, and ensure greater inclusivity.
And last but not least, implement training programs that build the capacity of developers and decision makers to understand the risks of algorithmic bias and address them effectively.
Thank you.
>> ANANDA GAUTAM: Thank you so much.
So do we have any questions on site? There are no online questions, I believe. While asking questions, please also mention whom you are asking so that it is easier to answer. Or if it is for everyone, let us know as well.
Please.
>> AARON PROMISE MBAH: Okay. Thank you. My name is Aaron Promise‑Mbah, and I worked on this with Paola and all of you here. So I’m very excited because of the insights we’ve been sharing.
So I have a question, and I would like Dr. Monica to help me address it. I understand where you talked about algorithms helping for marketing and some other business. Right? And then the kind of divide that comes with it, the risk that comes with it, that it can actually amplify the digital divide, especially with persons with disability. Right?
And then I've worked with some persons with disabilities using social media and all of that. And then there’s a particular case where I think Abeer also mentioned something about depression, suicide. Right? So now you have someone click on Spotify to listen to music. Maybe he’s feeling down. And then after that, you see Spotify recommending music, suicidal music. Right? That kind of music.
So how do we address this? Right?
And Paola also mentioned something about ‑‑ sorry. Let me get it. Standardisation. Right? Having a policy. And then countries are making declarations. Right? How to take action on this?
Then she talked about ownership. Right? Public participation.
Now, I'm from Nigeria. Right? When you are talking about a particular policy that a country has adopted, I wanted to point out that Nigeria has a lot of policies, even an AI policy. Right? We are always at the forefront of adopting; we look at other countries doing a lot of things, and then we start doing our own. And then we have a lot of these documents, but then there’s no implementation and enforcement. Right?
So now how do we ensure that it’s not just paperwork, right? So that we don’t just write all of these policies and that's it, but that they are actually being enforced and followed through into implementation?
So if you can share some of your insights about that. Thank you very much.
>> MONICA LOPEZ: Thank you for that question. It's very complex. I mean you really touched upon many, many aspects. But I think something that actually really stands out, and perhaps Paola I think had also mentioned this at one point, is that there really needs to be I think at this point what makes ‑‑ so let me backtrack a second.
So yes, everybody’s talking about regulation. Everybody’s talking about standards, normalization. Everybody’s talking about we need implementation. How do we do this enforcement?
But I think part of the problem lies in we simply do not have enough public awareness and understanding. Because I think if we actually did have more of that, there would be more of a demand.
And I see this in terms of, I mean, yes, we hear some even very tragic examples. So you did mention, you know, someone who has depression and may use Spotify and then get recommended different new types of music to apparently, quote/unquote, improve or fix, one has to be careful with the words one uses here, to deal with that situation. And we’ve seen even two recent suicides as a result of chatbot use, because of an anthropomorphization of these systems.
And I think it really goes back to this question of many times, many users, unfortunately, maybe most users, do not understand these systems fundamentally. That’s an education issue. That’s an education question. Because if you know and understand, then you can critically evaluate these systems. You can be more proactive because you know what’s wrong or you see the gap. You see what needs to be improved.
I didn't mention this, but I’m also in academia and I do teach in the School of Engineering at Johns Hopkins University in the Washington, D.C. or Maryland region in the United States. And I teach the courses on AI Ethics and Policy and Governance to computer scientists and engineers.
And I love when they come to the beginning of class with no awareness, and at the end, they are absolutely more engaged and they all say, "We want to go and be those engineers who can talk to policymakers."
So to me, that is very clear evidence. Whether they’re high schoolers, undergraduate students, professionals, working professionals who go back to school, graduate students, whatever it is, I see this change. And it’s changed because of the power of knowledge.
So my main call here, really, is that we need far more incentivization to create much more educated users, everyone, all ages. Then we’re going to see that demand placed on companies, because there’s going to be that demand: we want to ensure that our data is private. We want to ensure that we’re not being harmed. We want to ensure that we actually get benefits from these technologies.
I'll stop there. I think, yeah, others can add to it, I’m sure.
>> ANANDA GAUTAM: Hello. Thank you, Monica, for your wonderful response.
We have only five minutes left. So Matilda, is there any online discussion or question or any contribution? No?
If there is any question, please feel free. And contributions are also welcome. We have five minutes. Please keep the time in mind, both speakers and those asking questions.
>> AUDIENCE: Thank you. I'll be quick.
It’s been a great discussion.
We do get this; the point on education is very well made. And we’ve realised in our work, I work in New Delhi in India, that even with very specialised sectors of the population, like judges and lawyers, it takes a lot of conversation, a lot of detailing, for them to get to a point where, with something like bias, which judges work with daily, they start to understand what bias in an AI system might look like.
So my question, I guess what I’m trying to ask is, when something requires such specialised and detailed understanding, then clearly the problem isn't with people not being able to understand. Maybe it’s with the technology not being at a stage where it’s readily explainable, where it’s easily explainable for societal use.
So is there any merit to ‑‑ frequently we keep getting these discussions on maybe there’s a need to pause, especially with technologies like deep fakes, which everyone who does research in this area knows are primarily going to be used for harm, or not primarily, but massively going to be used for harmful ends. So is there any credence or is there any currency to pushing for a pause at certain levels? Or are we way past that point already, and we just have to mitigate now? That’s a small question. Sorry if it’s a little depressing. Yeah.
>> ANANDA GAUTAM: Thank you so much for the question.
If there are any questions, let’s take it.
And I'll give each speaker with one minute, and then wrap it up.
Any questions, contributions, from the floor? No. None from online.
So each speaker can have one minute and respond.
If not, the panelists can proceed. Yeah. Okay. Just a one-liner that you want to give for the wrap-up. Thank you.
You can start with Abeer, maybe.
>> ABEER ALSUMAIT: I think we’ve all just pondered about it. I don’t think there is a real answer. Are we beyond that point? I don’t think so. But should we pause? I also don’t think so, to be honest. I think we can put more effort into making technology more explainable and just bridging the gap little by little. And that’s, I think, what everyone, every actor and every player, should work towards. Those are my thoughts on that.
>> PAOLA GALVEZ: Totally agree. Absolutely, we cannot pause, because if some group decides to do it, then some others will continue. And it’s like just putting a blanket over your eyes. So we cannot do it.
But we can use what we have. And if our countries don’t have a data protection law or a national AI strategy, we need to push for it to happen. Because if a country does not have an idea of how it wants this technology to develop, what is the future for us as citizens?
So I just leave this question for us, and let’s reflect on how we can contribute to the future of AI.
>> ANANDA GAUTAM: Thank you, Paola.
Now, Monica and Yonah, please.
>> MONICA LOPEZ: Yeah, I would agree absolutely with both comments. We can’t pause. We can’t ban. That’s not going to work, absolutely. We’re moving far too fast anyway at this point.
But I would say that where there’s a will, there’s a way. So if we all come to the agreement and acknowledgement that we need, and I mean all of us, not just those of us right now here and our colleagues, but everyone, that we need to do this, then I think it’s possible, and we need to act.
>> ANANDA GAUTAM: Yonah, please.
>> YONAH WELKER: Yes, I’m always on the positive side, because finally we have all the stakeholders together and it includes also the European Commission.
I would love to quickly respond to Aaron's question about the keywords and suicide, because it’s actually about awareness. Yes, because if you know that recommendation engines use so-called stop words, if you know how the history of these engines works, you can easily fix it through regulatory sandboxes. Emerging companies and start-ups just come into the centres, and you can provide the oversight to fix these issues, the same as with bias. Then you know that bias is not an abstract category, but just a problem of under- or over-representation; it’s just bigger errors for smaller groups, purely data and mathematical things coming from society. You can clearly identify the issue. It can be a technical issue, it can be a social issue, and then you see it, and you can fix it.
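A minimal sketch of the stop-word safeguard described here for recommendation engines: candidate items are screened against a curated blocklist before being shown to a user flagged as potentially vulnerable; the blocklist, item metadata, and flag are hypothetical placeholders, not any platform's actual mechanism:

```python
# Hypothetical safety layer sitting between a recommender and the user interface.
SENSITIVE_TERMS = {"suicide", "self-harm", "self harm"}   # would be curated with clinicians

def filter_recommendations(candidates, user_flagged_vulnerable: bool):
    """Drop items whose titles or tags contain sensitive terms for flagged users."""
    if not user_flagged_vulnerable:
        return candidates
    safe = []
    for item in candidates:
        text = " ".join([item["title"].lower(), *[t.lower() for t in item["tags"]]])
        if any(term in text for term in SENSITIVE_TERMS):
            continue                      # could also swap in supportive-resource content here
        safe.append(item)
    return safe

if __name__ == "__main__":
    candidates = [
        {"title": "Late Night Acoustic", "tags": ["calm", "acoustic"]},
        {"title": "Songs About Suicide", "tags": ["dark"]},
        {"title": "Upbeat Morning Mix", "tags": ["pop"]},
    ]
    print([i["title"] for i in filter_recommendations(candidates, user_flagged_vulnerable=True)])
```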
And that’s why now we have these tools, testbeds, regulatory sandboxes, policy frameworks, and all the stakeholders working together to come up with real‑life terms, understanding, and finally we can fix it together.
Thank you.
>> ANANDA GAUTAM: Thank you, Yonah. Thank you, all of our panelists, and thank you, Paola, for organizing this.
To all of our on-site audience and audiences online, this is not the end of the conversation. We have just begun it. You can connect with our speakers on LinkedIn or wherever you are.
Thank you so much, everyone. Have a good rest of the day. Thank you all.
Can you just stay, our panelists? We can take a picture with you on the screen.
Thank you.