The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> LUCA BELLI: Let's see if the transcripts are functioning now. It looks like they are functioning, fantastic, all right. So good morning to everyone. This room has a very special configuration. Feel free to come closer, if you want, so we can have the conversation. We still have one panelist who is arriving; she is stuck in traffic, so we will start with a few minutes of delay. All right.
So this session will be dedicated to Building the AI Commons. My name is Luca Belli, Professor and Director at the FGV Law School, where I direct the Center for Technology and Society. I organise it together with friends: Anriette from APC, who is not here and is today substituted by Renata Avila; Aliya Bettancort; Syed Iftikhar; Bianca Kremer; and, online, Alek Tarkowski.
This is our annual pre-event of the Internet Governance Forum, to understand how the AI Commons can be built as an expression of the digital commons.
Today we meet to explore these issues. Aliya Bettancort will open the first session; then we will have an introductory keynote by Renata Avila, CEO of the Open Knowledge Foundation; and another segment with Syed Iftikhar and Dr. Bianca Kremer. As I mentioned, she is stuck in traffic but arriving; she is a board member of CGI and my colleague at CTS-FGV. We will then have a discussion with Alek Tarkowski about the future, and with Gurumurthy Kasinathan, director of IT for Change, from India.
Let's have an understanding of what we are speaking about. When we speak about AI Commons, we speak about considering the elements that constitute AI as common resources, be it data, models or computing. The provoking question we want to start this session with is: is it possible to build an AI Commons? We know from decades of commons scholarship, including from Nobel Prize winner Elinor Ostrom, that it is feasible, as long as there is governance in place.
So a common resource is only as manageable as the governance that allows us to manage it is effective, right. And here we can already see there is some tension. For instance, with generative AI, and AI training in general, a lot of environmental resources are under strain because of the overuse of water and electricity to build and maintain facilities. There is massive scraping of personal data, and of content created by creators, that is publicly available online; but publicly available does not mean it can be harvested and used for any purpose, to train any model for any reason, because that is in tension with the data protection frameworks we have, on one hand, and with the copyright and IP frameworks we have, on the other.
So we clearly see there is governance and regulation already in place. The way in which AI is being built creates tension with environmental regulation in terms of infrastructure, and with at least IP and data protection regulation in terms of data and models.
So the question is: which kind of governance can lead us to an AI Commons? Is it something possible to build, or an ideal we cannot achieve? I will immediately give the floor to Aliya Bettancort to introduce the next speaker.
>> ALIYA BETTANCORT: Thank you. When we think about what relates to commons around artificial intelligence from a governance point of view, obviously openness, inclusion, accountability and transparency are core elements of an eventual AI Commons. Because digital justice and digital transformation should go together, I think it is going to be very important to hear from Renata Avila, CEO of the Open Knowledge Foundation, on what we need from a governance point of view if we want digital justice built and supported around commons, around artificial intelligence.
Without taking more time, I would like to give her the floor to open her speech. So Renata, welcome. We are lucky to hear you.
>> LUCA BELLI: Renata is connected but cannot speak. Can we unmute her, please, and also give her privileges to share her slides, video and audio? Sharing is now turned on. So Renata, now you should see us and be able to speak and share your slides.
>> RENATA AVILA: Hi. I have been unmuted, but I still cannot turn on my video. I'm sure I can do it. I still need the video enabled, if you could; that would be super nice, thank you.
>> LUCA BELLI: Renata, can you hear us?
>> RENATA AVILA: Yeah, can you hear me? Hello. Now I can speak but I cannot activate video. I cannot activate my video.
>> LUCA BELLI: Can you try now to speak and share your slides?
>> RENATA AVILA: I am speaking. Can you hear me?
>> LUCA BELLI: So they can hear us but they are not able to share. So let's do something, not to lose any more time, because time is money: I propose we go on with the first in-person speaker, Dr. Syed Iftikhar. As I mentioned, the concept of the commons can be applied only if we have governance that supports the effective management of data, algorithms and compute, and governance is an essential pillar of any kind of AI management, whatever the configuration. Dr. Syed's organisation has developed some very interesting approaches to AI governance, specifically focusing on its Member States but also trying to scale them up. So please, Dr. Syed, if you could provide us insights on this work.
>> SYED IFTIKHAR: So thank you. First, as Luca explained, the Digital Cooperation Organization was established in November 2020 as a global, multilateral, intergovernmental organisation. Its vision and mission focus on achieving prosperity and enhancing the growth of a global, sustainable digital economy. There are 16 Member States with a population of 1,800 million and a combined economy of 350 million dollars. In this ecosystem we have more than 40 observers, and we have partners as well. AI technology is at the heart of the agenda.
This provides information and tools to promote the adoption of responsible AI across the countries of the world. Now I come to Luca's questions. Globally, there are two AI governance approaches: one is the principle-based approach and the other is the prescriptive approach. The principle-based approach is the one adopted by the U.S., Singapore and the OECD. Such approaches rely on transparency, global cooperation and accountability to boost ecosystems where AI and GenAI tools can be applied in a safe, fair manner.
The second school of thought on AI governance is the prescriptive approach, adopted by China. This approach is more focused on identifying risks and setting rules in advance to protect against harm.
The principle-based approach provides flexibility and adaptability, while the prescriptive framework offers more clarity and enforceability. However, the challenge is finding the right balance between these approaches, which is critical to govern AI effectively while promoting innovation and ensuring legal safeguards. So these are the two standard approaches. Now I will give you some highlights about our Member States and how this is being approached in other regions of the world.
For instance, Saudi Arabia's National Strategy for Data and AI aims to position the country as a leader in AI development, focusing on skills, development and regulation, while emphasizing AI partnerships as well. Saudi Arabia has introduced guidelines focused on the responsible development of AI and GenAI. (?) introduced a plan from 2023 to 2027, which aims to implement the country's policy from 2020; this approach focuses on economic and social aspects. Similarly, through its Ministry of Information Technology, the government of Pakistan is creating an AI policy with stakeholders, aiming at a knowledge-based economy and responsible development.
Greece is following the EU AI Act, which is more focused on a risk-based framework. Let me also give you an overview of Rwanda: it introduced an AI policy in 2023, also focused on responsible and ethical AI, and proposed to establish a responsible AI office within the government to promote ethical and responsible innovation in AI.
So this is from my side, Luca, thank you.
>> LUCA BELLI: Thank you very much. Yes. I'm just following up with Syed, then going back to Renata. These are very good points, because what you mentioned about Saudi Arabia investing resonates with something beyond an explicit regulatory stance.
Last year, in the IGF Coalition on AI governance, we released a report on sovereignty, transparency and accountability. Something we highlighted is the fact that investments are a type of regulation. I think that besides the principle-based approach and the risk-based approach, we can really identify an investment-based approach, which is putting money in — of course one has to have the financial resources, like Saudi Arabia — and shaping how AI will be produced and used by directing development through investments. An interesting point.
A quick follow-up. You mentioned you have observers, and CTS-FGV is one. We are participating in the shaping of this AI readiness toolkit, which is important. I would like you to share with participants some insight, because it is a very interesting tool you are using to support members in shaping data and AI governance.
>> SYED IFTIKHAR: Thank you, Luca. This is at the heart of the agenda. We are in the process of developing different initiatives focused on AI and, in particular, AI governance. One, as Luca mentioned, is the AI readiness toolkit. This is a collection of practice-based artifacts that help Member States, and countries beyond the Member States, assess their AI readiness and adopt AI. We are providing a playbook for AI governance.
From an AI governance perspective, it looks at how countries have built frameworks and gives guidance to the Member States on how they can contribute to global AI standards. This is not a theoretical toolkit: we are now in the phase of piloting it in three of our Member States, targeting Jordan, Kuwait and Navana(?). We have co-created this toolkit and are implementing it in our Member States to test it and pilot it, so that it will be holistic and can be implemented across the board, not only in the initial Member States.
We also recently launched a Global Center of Excellence on GenAI. This is to promote multilateral cooperation on GenAI and to assist Member States in capturing the benefits and opportunities of GenAI. Through these initiatives we are creating a forum so countries can develop ethical solutions. So that is another initiative.
The third initiative is a responsible AI tool. This is also focused on the AI governance perspective. We are supporting countries and Member States in embedding ethics and addressing human rights issues and risks when implementing AI. It is a very useful toolkit, thank you very much. These are a few of our initiatives.
>> LUCA BELLI: Excellent. We have Renata? Excellent.
>> RENATA AVILA: Yes, you hear me?
>> LUCA BELLI: Can we allow Renata to speak? She is muted. Can you speak again?
>> RENATA AVILA: Yes, can you hear me?
>> Luca cannot hear you.
>> RENATA AVILA: I don't know what is going on because people in the room ‑‑ maybe, let me see ‑‑ I don't know. Hello, hello, hello. Please tell me you can hear me? Hello, hello, hello. Nope.
So what shall we do? I don't want to delay and present only for people in the Zoom. I mean, you can skip me.
>> LUCA BELLI: Is it possible to have remote speakers speaking?
As the show must go on, let's anticipate our open floor session. Do we have participants here who might be interested in sharing their view about whether the AI Commons concept is something that may be achieved, and how, and to what extent ‑‑
>> RENATA AVILA: Hello ‑‑
>> LUCA BELLI: Testing, testing. We see the transcripts.
>> RENATA AVILA: Great.
(Echoing audio)
>> LUCA BELLI: We almost have a technical solution.
(Echoing audio)
>> LUCA BELLI: Can you try to speak?
>> RENATA AVILA: I don't know if I can speak. Can you hear me well?
>> LUCA BELLI: When I speak there is a huge echo.
>> RENATA AVILA: Same.
>> LUCA BELLI: Now seems to be okay. I don't hear echo anymore ‑‑
(Echoing audio)
>> RENATA AVILA: I don't understand what you said, should I start?
>> LUCA BELLI: Yes.
>> RENATA AVILA: Thank you, and apologies; it is very difficult. I believe this happens even at Internet-related events. What I want to talk about today is the possibilities we have if we frame AI as a commons, instead of a race to the best, biggest, fastest AI in existence. My proposition is about how a different digital architecture is possible, and how within this digital architecture we should frame the future of AI around cooperation instead of seeing it as competition between countries or between sectors. What we should do is imagine a digital future with a different set of values, a different set of principles, a different set of governance and a different allocation of resources, so as to achieve a better digital future and, at the end of the day, an improvement in people's lives.
So basically, what I haven't seen at scale — I have seen many efforts, but not at scale — when we talk about AI, is a future where we can be active participants, where we can actively shape the future of these technologies together.
Basically my call today at the IGF is: let's abandon the narrative of AI as a system impossible to govern and impossible to audit. Let's abandon that narrative; it is not helping us. It is important to keep up our work on accountability and auditability, but we need empowerment. For a moment, stop and imagine the AI future we could build together. The AI future we could build together will need absolutely different design principles.
We have to consider the positives. Among the positives we have today is the most educated generation ever. Never before has the world had so many well-educated people — people who are not only well-educated but interested in building a better world and, which is very, very important, concerned about the actual future of the world.
Never before were we connected at this scale. So it is very important to grab that possibility and remind ourselves that the most important element of any AI future is not data, not super-sophisticated knowledge or outcomes, but the people who can be part of this collective project.
So one of the most important things is to realise the potential of an AI Commons that is not only a governance system for data sets; it goes beyond just data sets. An AI Commons means universal principles to advance people's rights. Imagine that engineers, instead of having efficiency and increased profitability as their guiding principles, considered as the first step embedding in these AI systems universal principles to advance rights. That is the first principle when I think of framing the AI Commons differently.
The second is to always remember that we are building upon other people's knowledge, instead of extracting it. One of the things I always remember is how brutal the copyright enforcers were — from India to Latin America — towards people photocopying books, and how tolerant they are with a few companies extracting gazillions of data points to feed AI systems. An AI Commons always comes from the acknowledgment that it is building upon other people's knowledge — the knowledge not only of data points but of communities. It is knowledge living not only in the data sets but in the practices of the sectors it is going to affect.
The third is the participatory approach an ideal commons should have. It should be participatory and accessible at every stage of the process. Something Civil Society focuses on intensely is auditing algorithms, but the foundation is broader: we need to be in the room even when someone — especially for public systems — decides why an AI system is needed at all.
Sometimes, at that very initial stage, the decision would be to stop. The decision would be: actually, we could find better ways to do this without AI. That would help us narrow AI down to where it actually helps, instead of proliferating AI across the planet.
The participatory approach in the AI Commons — the community governing the AI Commons — would make sure the effects of the systems are watched. Through this feedback process, it would make sure to fix the system and improve it constantly, and to discover whether it is effective or has unintended consequences. Having the ability in the AI Commons structure to switch a system off through this participatory approach is going to be key.
Of course, inclusion is multi-dimensional: it takes feedback loops, tools and resources to make it a reality. The next principle is reproducibility. Imagine an AI Commons of language models in Africa making it very easy to do something as important as supporting primary education. We would want the ability to reproduce it and localize it in other communities — the Caribbean, for example — as part of the commons, creating enablers to transfer the technology to other countries: not only the data set — actually, the data set may be irrelevant — but the knowledge of how it was done.
And sustainability: not only for the planet but for the people and the communities doing the work. We have seen the extractive and unsustainable processes we have today. Imagine instead a future where everybody who contributes is appreciated and volunteers can excel.
And generativity: communities rooted locally but with exponential impact. As I said before, in an AI Commons, when we decide to use AI it has to be a careful decision because of the impact it has. It shouldn't be a trend; it shouldn't be the proliferation of AI as a magic buzzword to solve problems. Sometimes the technological solution is very simple and low-tech, and impact has to be one of the values that pushes the AI Commons to actually pursue the final goal of its systems.
Those are just initial thoughts on some guiding principles for an AI Commons that is not only about data. Some of the aspects addressed by the previous speaker are literacies: we need literacy. If we imagine an AI Commons, we need to fix first the big gaps in knowledge — gaps we face not only around AI but around democratic governance, participation, collaboration and so on.
Communities will have a key role to play in interconnecting the nodes of the AI Commons, as I imagine it, through standards and tools, so that the vast investments that are needed can be transferred, localized and reproduced, depending on the AI we are talking about.
If we narrow the AI Commons vision to address the most pressing problems of our time, there is already coordination in those sectors; AI is a tool that can enable those already-organised communities to govern research. Communities are already working on solving the problems. Basically, summarizing: my vision of an AI Commons goes beyond data. It includes skills; it includes communities — the communities already working on what the AI solutions try to address — and it focuses on sustainability. It cannot be done without democratic governance, without collaborative processes, and, of course, without clarity about resources and so on.
And that is my very, very short presentation. I'm conscious of time. I'm happy to get any feedback, reaction or questions.
>> LUCA BELLI: Thank you, Renata ‑‑
(Echoing audio)
>> LUCA BELLI: Amreen, can you unmute? (Echoing audio) (?) Amreen?
>> AMREEN TANEJA: Yes, can you hear me?
>> LUCA BELLI: Amreen can you unmute? Amreen,
>> AMREEN TANEJA: Yes, it seems I'm unmuted. Can you hear me? Can you hear me in the IGF room? It seems everyone on Zoom can hear.
>> LUCA BELLI: Amreen (?) (Echoing audio) Amreen.
>> AMREEN TANEJA: I hear a lot of echo here, but I hope I'm audible now. If it is possible, could you switch the video on as well? Thank you. Am I audible?
>> Can she go on video? Amreen.
>> AMREEN TANEJA: Yes, but not able to switch it on from my end.
(Echoing audio)
>> Open the camera? Amreen.
>> AMREEN TANEJA: Yes, but it says the host has stopped it.
(Echoing audio) (?)
(Off‑mic discussion)
(Echoing audio) (?) Amreen.
>> AMREEN TANEJA: Am I audible?
(Echoing audio) (?)
>> AMREEN TANEJA: Am I audible?
(Echoing audio) (?)
>> AMREEN TANEJA: Yes, I can hear you. There seems to be a lot of echo. I'm not sure if you can hear me, at this point.
(Echoing audio) (?)
‑‑ and educating community in AI, what is the rationale and more or less mathematics ‑‑
(Echoing Audio) (?)
>> AMREEN TANEJA: Are you able to hear me now? Am I audible?
(Echoing Audio)(?)
>> ALEK TARKOWSKI: Luca, I think you are mentioning me but, I'm sorry, the audio quality is really bad.
(Echoing Audio) (?)
>> ALEK TARKOWSKI: I'm sorry but in chat I'm assuming you want me to deliver remarks?
(Echoing Audio) (?)
>> There is a focus on personal data, but there is enormous value derived from personal data — not only monetization but social value. The final point: when communities are self-organizing, it is great to see them organise for themselves, but we must make sure these communities are not detrimental to others — not only in data but also in anti-trust terms or similar. So I just wanted to emphasize that the notion of self-organizing communities is a field that has explored the path between complete privatization and full public management of the commons: the self-organized governance of the commons.
For data, it is the same. We should also move from the expression of sharing data to sharing data access, which is a very different situation: you don't necessarily transfer the data. So the debate is not only about data flows. For instance, if you use federated learning, you don't move the data; you move the AI model to a particular repository, and the data never leaves the repository. We need to explore new ways, including technological ways, to manage access and shared access to data. So yes, expanding the notion of commons.
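The federated learning pattern described here — the model travels to the data, and the raw data never leaves its repository — can be sketched in a few lines. The following is a deliberately toy illustration (a one-parameter linear model and plain weight averaging), not the API of any particular federated learning framework:

```python
def local_update(w, local_data, lr=0.01, epochs=5):
    """Train on one site's private data; only the weight w leaves the site."""
    for _ in range(epochs):
        for x, y in local_data:
            w -= lr * (w * x - y) * x  # plain SGD on squared error
    return w

def federated_round(w_global, sites):
    """Ship the current model to every site, then average the returned weights."""
    return sum(local_update(w_global, data) for data in sites) / len(sites)

# Two repositories each hold points from the line y = 2x.
# The points are never pooled centrally -- only the weight w travels.
sites = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(200):
    w = federated_round(w, sites)
# w converges to the true slope 2.0 without any data transfer
```

Real systems add secure aggregation, weighting by each site's data volume, and privacy protections on the updates themselves, but the access-versus-transfer distinction the speaker makes is exactly this: parameters move, records do not.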
>> LUCA BELLI: Thank you for the remarks. We have three online speakers who are very patiently waiting for us. So we have, in order, Amreen, Alek and Gurumurthy. Can we have some online speakers speaking? Is it possible? Amreen?
>> AMREEN TANEJA: I will try this time.
>> LUCA BELLI: Speaking. Amreen, I can see you. Let's see if you can finally speak.
>> AMREEN TANEJA: All right. Yes, am I audible?
>> LUCA BELLI: We cannot hear you. No. All right — yes, that is a very good solution. So the only solution we apparently have is the transcript: you can deliver your speech, the transcribers will transcribe what you are saying, and I will read it for you. I will be your voice here. That is apparently the only thing we can do.
>> AMREEN TANEJA: I request that the room be muted so that I don't hear you at the same time as I'm speaking, if that is possible.
>> LUCA BELLI: Yes. So can we please mute the room so she doesn't hear us while speaking.
>> AMREEN TANEJA: Thank you so much. All right. Can I begin? I hear a lot of talk, background noise. I'm not sure. Okay. All right. So then I'll begin with my speech. Thank you so much for inviting me here today. I could not catch a lot of the conversation that was happening, but I'd like to give my two cents in regard to the AI Commons, and specifically in regard to digital public goods: AI systems can be a type of DPG, alongside the other open-source solutions the DPG standard recognises, such as open content collections and open software.
So in the context of AI DPGs, perhaps we can start with the link between democratization and the sustainable development goals. I'm sorry, I'm hearing a lot of background noise still. Okay. I can hear. If you can mute so I can speak? Will that be okay? All right, thank you so much. I can hear you, I'm sorry. So perhaps I will continue. Like I was saying, I think it is very critical to explore the vital link between AI democratization and the Sustainable Development Goals. Around 70% of the SDGs depend on ‑‑
(Multiple conversations)
>> AMREEN TANEJA: I'm sorry. It is very difficult to know exactly how to contribute right now with the disturbances in the back. Okay. I shall try my best.
AI is a crucial part of the digital landscape in terms of fulfilling this map to the SDGs. It essentially has the potential to drive significant progress if solutions are accessible, community-driven and inclusively deployed.
(Multiple conversations)
>> AMREEN TANEJA: This means adopting local and adaptable tools that genuinely benefit the community. To achieve this, the concept of digital public goods is very essential, because by supporting openly accessible technologies, DPGs allow AI to be used responsibly and effectively across diverse contexts. So I'll briefly explain how DPGs work for those relatively new to the concept.
(Multiple conversations)
>> AMREEN TANEJA: Essentially the definition comes from the United Nations — from the UN Secretary-General's high-level Roadmap for Digital Cooperation of 2019. Along with this, recent global policies as well — from the UN resolution on AI for sustainable development adopted in March, through the G20, to the UN Global Digital Compact — have underscored the value of digital public goods and open-source AI as critical to delivering AI's benefits at scale, essentially.
(Multiple conversations)
>> AMREEN TANEJA: For this reason, we are proud to be working to advance the digital public goods standard so that it will better address and recognise AI systems that embody the values of openness, transparency and accessibility.
Just recently we introduced fundamental updates to the standard to address the unique requirements and opportunities of AI. For those of you who are new to the DPG standard, I'd just like to briefly mention that it is a standard created out of the definition of DPGs, operationalized by the Digital Public Goods Alliance, and it is a set of nine indicators that establish the baseline criteria that must be met for a solution to be recognized as a digital public good.
So when it comes to AI, as AI continues to evolve ‑‑
(Multiple conversations)
>> AMREEN TANEJA: It is essential for transparency to address the AI components as well. Perhaps I can give a few examples of changes we will propose, if that will be all right? Okay, great. So one of the main changes we are going to be proposing in the DPG standard is around openness across AI components. We have expanded requirements here: solutions will now need to provide documentation for source code, for model parameters, and for open data. This means essentially anyone, anywhere can access, adapt and improve these systems. So, for example, if an AI model is used for crop prediction in sub-Saharan Africa, local actors should be able to adapt it to reflect the unique agricultural patterns and climate of that region without any issues surrounding (?)
(Multiple conversations)
>> AMREEN TANEJA: So these are some of those things. We have introduced a fundamental change to the DPG standard: it will be mandatory for AI solutions to have open training data to qualify as a digital public good. Essentially, in alignment with the standard's documentation requirements, we require detailed model cards and data sheets for AI systems. These essentially explain how AI models work, what the intended use cases are, and their limitations. A model card would contain use cases, intended use and limitations, and it would also capture the performance of the model, in addition to training costs.
As for data sheets, they essentially include suitable and unsuitable use cases, plus sufficient information to recreate the data set as well, right. Then I would also like to talk about the fact that we have a do-no-harm requirement as part of the standard. We focus only on the design and development of digital public goods, so in that context we are going to be assessing whether AI solutions are designed in a way that mitigates harm, right.
So you can understand there is a fine balance that we essentially maintain between what falls within the ambit of design and what would be counted as implementation.
Thirdly, I'd also like to speak about one of the indicators of the DPG standard: adherence to standards and best practices. Here we will be proposing two requirements. One is adherence to responsible AI best practices. The other is an AI risk assessment, which we have created and which essentially looks into aspects around bias, fairness, transparency and mitigation as well. These are examples of how we enhance accountability and transparency, by providing the information essential for auditing systems and subsequently reducing biases, so more people can study and assess the systems as well.
(Multiple conversations)
>> AMREEN TANEJA: These also tackle the issue of AI models being concentrated among creators in specific geographies. With the AI Commons and DPGs, stakeholders can collectively create and develop AI systems and bring new perspectives and values into the field. This will inevitably lead to a global AI polyculture, so to speak. That would be my two cents on the topic for now, thank you.
>> LUCA BELLI: Alek is gone. Who is there --
(Off‑mic conversation)
>> GURUMURTHY KASINATHAN: Hello, I think my video is on. Luca, can I speak? Can I take five minutes to share my thoughts?
>> LUCA BELLI: Yes, yes, five minutes.
>> GURUMURTHY KASINATHAN: Can I go ahead? Can video be enabled for me? Right now I'm unable to show my video; it is muted.
>> Put up the video.
>> GURUMURTHY KASINATHAN: Can I speak?
>> LUCA BELLI: Yes.
>> GURUMURTHY KASINATHAN: Are people able to hear me? Thank you. Thanks, everybody. There have been several glitches. I will take five minutes, because I think we are severely behind schedule. I will speak on: is it possible to build the AI Commons? I want to purposely not take a technology perspective and explore the contents of how we build them, but bring a political perspective to it, and start by asking a question. When we look at the way society and the economy are being built today, and when we see that AI is building on the current challenges and problems it has created for the world, then the question, I think, is: can the current paradigm of AI continue, when we know that this paradigm is absolutely the cause of a lot of economic challenges? We have enormous inequities in wealth distribution — the worst since we began documenting them — because of tech barons controlling the AI infrastructure across the world.
We see political disturbances where tech is usurping elections and democracies and creating huge challenges all over the world; we can only imagine where tech, as applied today, leads. Again, if we look at issues of jobs and unemployment, it is obvious that the current deployment of technology — the current deployment of AI — is not the ideal scenario. We certainly are suffering with tech as it stands.
Therefore the question is not whether it is possible to build the AI Commons; the question, to me, is whether it is possible for us not to build it. The answer is clearly that we need to build it, because we need to work on the challenges of the current models. What are the challenges we have today? We have the challenge that we inherently believe technology can only be developed by for-profit players. Even OpenAI, if we look at it, subsequently got taken over and is no longer open. The previous speaker, Amreen, spoke about public goods, and that is important if we want to build. Secondly, the current models in play --
(Multiple conversations)
>> GURUMURTHY KASINATHAN: -- are controlling the tech ecosystem. As I mentioned, only a couple of AI models dominate, and therefore we get the monopoly and monoculture she mentioned, with democracy in the hands of a few oligarchs.
But the most dangerous thing is not any of these. The real problem is appropriation: we are told that these AI programmes cannot be understood or explained. We cannot allow something to run society and run the economy when we don't understand what it does. We cannot simply trust people telling us it is doing the right thing; we know it is already full of biases, and it is ubiquitous.
Current AI models concentrate power — political power — promote inequalities, and push unemployment and social strife. Therefore the current models are certainly not the way forward. We have to understand that the most important part of digital technology is the part we usually forget, the part that makes tech companies so profitable: unlike other resources, the digital commons is self-replicating; the costs of copying are nil. Whether it is data or code, we know we can replicate it without cost. Today that property is used to build monopolies, but it is exactly what we need for the commons. If we want to create an AI Commons, these are the challenges: it should be bottom-up, local and people-centred. (Audio difficulties) Decisions about AI need to be made by bringing people into the discussion, and the discussions cannot be had only by the owners of the companies or by governments alone.
The political imperative needs to come first, and from this political understanding we build the technology. I want to give a small example of the work my organisation has been doing, actually deploying the AI Commons I have been talking about. We have used small language models in the form of open-source algorithms, running on servers inside schools. Inside the school, on a desktop, the AI model runs.
So the data, the code and the analysis remain in the school. That means they are controlled by the teachers. This model has been functioning for the last year in government schools in South India. I think this is an example of how we can develop and deploy the AI Commons.
Unlike ChatGPT, which takes the whole world by storm, this isn't going to be one thing everybody uses. It will be bottom-up: several contributions from different parts of the world will federate into a network, and that is how the AI Commons will work.
I just wanted to share these ideas; I think our work in public education using small language models and decentralized, local AI may be relevant to the AI Commons. Thank you, Luca, and everybody. I hope you were able to hear me. I will stop now. If there is a Q&A session, I will be happy to answer questions, thank you.
Thank you, Luca. Thank you, everybody in the room.