The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MODERATOR: Hello, everyone. Welcome. We are glad to see you here today.
We are going to ask you something though.
Can you step closer to the stage, or, well, sit closer to the stage? We want to show you some things.
We will probably have to use our computer screens to do that.
It will just be easier if we are a bit closer to each other.
Yeah. Don't be shy. We would like to ask you to speak. If you don't want to, we won't call on you, but we want you to be close enough to see where we are and what we will be discussing.
If you are in the back, please sit closer.
>> MARCUS: I have the pleasure to be here. Okay. Thank you for the technical assistance.
It is an excellent initiative for the platform and the foundation to use artificial intelligence to mine the rich archives of the IGF. Over the years since 2006, a tremendous wealth of experience has been archived on the IGF website, and the foundation and platform are trying to make this available to those who want to dig into it.
Sorina will demonstrate not on the big screen but do it on her own computer and show a bit what is possible. I apologize. I have to run out again. I am double booked in a workshop right now but will come back to you. Over to you, Sorina.
>> SORINA TELEANU: Thank you. (Audio fading in and out) So I don't have to use this; I will move around so you can see what I'm doing. A question to you: how many of you have actually seen our daily reports from the IGF?
One, two, three. Okay. Thank you. What we have been doing for many years at the foundation is reporting from the IGF. Here there are 10 sessions in parallel, which is a bit complicated to follow, and if you are in the organization you obviously can't be in two or three places at the same time, so our reporting is here to help.
We have reporting from every single session. (Audio fading in and out) Artificial intelligence, that is a key word at this IGF, and we will show how we use AI for this reporting. We will also talk more about AI in general, how we can use it a bit more extensively, and how much we can tap into the 20 years of accumulated IGF knowledge.
Andrijana, over to you to talk about how we do the reporting these days.
>> ANDRIJANA GAVRILOVIC: Thank you. So where do I even start? I used to be part of the human team that would sit in each session and take notes while speakers are speaking and write the report by hand.
These days we, first of all, record the session. We have our own video archives. We use AI to transcribe those sessions.
Then we have the complete transcript, which is perhaps more complete than the transcript that you can see on your screen while you are watching the IGF. Why?
The transcripts you see at the IGF are actually captions: there is a captioner sitting somewhere, following what we are saying and typing it by hand.
So, of course, because they are human, they can't really type each word that the speakers are saying, especially if someone is speaking fast.
Our transcript, on the other hand, can actually pick up every single word that the speakers are uttering, every single "oh" and "ah", so it is more complete in that sense. The transcripts take some time to generate, but not too long, maybe between 20 and 30 minutes. We have them pretty fast.
Then, because the transcripts are quite long, like 17 or 18 and sometimes even 30 pages, we also use AI to summarize them. Then we have a short session report. When I say short, I don't mean too short: perhaps 1,000 words, so you can read it in 10 minutes.
What is especially new now is that, with the help of AI, we are able to synthesize what the agreements and the disagreements between the speakers are. AI also helps us identify thought-provoking comments that some of the speakers made, and perhaps some discussions for the way forward. So it is a much more substantive analysis.
AI can work on multiple reports at the same time, which a human cannot. The human limit is, if I remember correctly, two reports; by the time you are writing your third report, you are absolutely losing coherence, and you are writing sentences that don't really make a lot of sense.
The editors ask you: what is this?
Sometimes the answer is: I don't actually remember, I don't actually know, or I think it was this. It used to be quite an intensive process. You would sit in a session for an hour and a half and then write the report for two hours; multiply that by three, and that is nine hours at least, every day.
At that point you are just ready to sleep. AI can generate 50 reports a day, which is quite nice, but one thing: AI is not completely self-reliant. We still have to help it read the data.
For instance, what we do is tell the AI: this speaker, whose name is such and such, started to talk here, and they ended their intervention here. So the AI actually has data on who said what. It is not just a general report where you have all of these general ideas and you don't know the particulars or who gave which example. It is very, very detailed. And, as I said, it is 50 sessions a day.
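The step just described, telling the AI where each speaker's intervention starts and ends so the report can attribute every point to a name, can be sketched roughly as follows. This is a minimal illustration, not Diplo's actual pipeline; the transcript text, marker format, and offsets are invented for the example.

```python
# Attribute passages of a raw transcript to speakers, given markers that say
# at which character offset each speaker's intervention begins.

def attribute_segments(transcript: str, markers: list[tuple[int, str]]) -> dict[str, str]:
    """Split a raw transcript into per-speaker text.

    markers: (character_offset, speaker_name) pairs telling the system
    where each intervention begins; each segment runs to the next marker.
    """
    markers = sorted(markers)
    segments: dict[str, str] = {}
    for i, (start, speaker) in enumerate(markers):
        end = markers[i + 1][0] if i + 1 < len(markers) else len(transcript)
        chunk = transcript[start:end].strip()
        # Concatenate if the same speaker intervenes more than once
        segments[speaker] = (segments.get(speaker, "") + " " + chunk).strip()
    return segments

raw = "Welcome to the session. AI can summarise transcripts. Yes, and it can attribute quotes."
who_said_what = attribute_segments(raw, [(0, "Moderator"), (24, "Ana"), (54, "Gustavo")])
```

With the attribution in hand, a summarizer can then be asked to keep "who said what" instead of producing an anonymous digest.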
It can follow things that the human brain cannot, because the brain gets too tired. So, without any more details -- I saw some nodding, I think you followed along -- maybe we can try to show one of the reports now. Do we think it will work? Not there? Okay.
So perhaps everyone can take their own device and go to our website. You see it in the upper left corner: Dig.Watch. It is dig.watch, and on the very homepage you should see a banner for IGF 2024 news. If you can't see it on the homepage, we are not doing a good job [Silence].
Maybe we could have done a better job at displaying our landing page, which is something that we need to discuss later.
It should be dig.watch/IGF2024. Try that one. Oh, okay. Okay. Oh, there we go. You can see the session at a glance. Where do I stand so you guys can see?
There is a short summary, and there are key points. Well, I have to stand in the middle. I don't know. Maybe you should look to the left and the right of me. Then you have a full session report which, as I said, is much fuller and more fleshed out.
Then you have the entire session transcript. I did forget to mention that.
So, if you want to see exactly, exactly what was said, if you want to quote someone, it's in the transcript itself. Then we have a comprehensive analysis by speaker. In this case we only had two speakers, so you can see what the main arguments were.
Those are the things in black and bold. You can see what the AI actually extracted and why it extracted it, with an explanation; you can see the major discussion points; and you can see what the speakers agreed on and whom they agreed with, which I mentioned before. Now we have a knowledge graph.
This is something that is super fun. I think I will give the floor to Jovan to explore knowledge graphs. That is his passion. Yeah.
>> JOVAN KURBALIJA: Thank you, Andrijana. Good morning, everybody. The knowledge graph on the right: a knowledge graph is a visual representation and one of the AI techniques that we use. In the discussion we can go through the techniques; we use large language models and knowledge graphs. Oh, my God. It is complicated, huh?
It is really an innovation and Sorina had to go behind the stage in order to run the screen. You know? Sort of innovate.
We have the knowledge graph, and the knowledge graph is a visual representation.
You have nodes: blue nodes are about people, and the other nodes are about the topics that were covered, main topics and subtopics.
Then you can see, at a glance, what was basically discussed and who connected to whom at the content level.
That is something which you cannot follow easily if you are at the session, where you basically listen and probably browse. At least I do it half of the time.
At the end of the session, you say: okay, here are the convergences and here are the differences; people agreed on these points, and these are the takeaways. You can also see how many nodes there are and search the nodes to see whether speakers agreed, disagreed, or just referred to each other. That way you get a completely new insight into the overall session and discussion.
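The structure just described, people and topic nodes connected by typed edges such as "agrees", "disagrees", or "refers to", can be sketched in a few lines. This is an illustrative toy, not Diplo's implementation; the names and relations below are invented examples, not real session data.

```python
# A tiny knowledge graph: nodes are people or topics, edges carry a relation
# label so you can ask "whom did Ana agree with?" or "who discussed topic X?".
from collections import defaultdict

class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}                 # name -> kind ("person" or "topic")
        self.edges = defaultdict(list)  # name -> [(relation, other_node)]

    def add_node(self, name, kind):
        self.nodes[name] = kind

    def relate(self, a, relation, b):
        # store the edge symmetrically so it can be traversed from either end
        self.edges[a].append((relation, b))
        self.edges[b].append((relation, a))

    def neighbours(self, name, relation=None):
        """Nodes linked to `name`, optionally filtered by relation type."""
        return [b for (r, b) in self.edges[name] if relation in (None, r)]

g = KnowledgeGraph()
g.add_node("Ana", "person")
g.add_node("Gustavo", "person")
g.add_node("AI governance", "topic")
g.relate("Ana", "discusses", "AI governance")
g.relate("Gustavo", "discusses", "AI governance")
g.relate("Ana", "agrees", "Gustavo")
```

Querying `g.neighbours("AI governance", "discusses")` is the graph-side version of "who connected to whom on the content level".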
What is also interesting, and Sorina can -- Sorina, sorry for keeping you behind the scenes and behind the screen -- there is also an in-depth analysis, which is very interesting, where our large language model goes through the whole transcript and sees what the agreement points are, in this case between (?) and Gustavo, and what they agreed upon. Obviously, they didn't negotiate; it is what is common in both of their statements.
You can also see contentious points where they disagreed, as well as partial agreements, takeaways, thought-provoking comments, and follow-up questions. There is always the discussion: what is the follow-up after our discussion? Our AI model goes and indicates: here is something you can discuss in the future, or here are the takeaways from the session.
It is basically a complete X-ray of the overall discussion, which we cannot, as Andrijana said, digest easily if we are in the session. That is what it is. Sorina, if you can go back: what is then even more interesting is that we analyze all sessions. And at the end of the day, we say: okay, what happened yesterday?
And this is for me particularly useful, because, like most of us at the IGF, I come with ambition and a list that I will follow this, this, and that, and probably by mid-morning I end up in the cafeteria socializing, which is an important part of the IGF, to reconnect with people.
This tool helps you say: okay, I know what has happened; I have a grasp of it. You have a feeling of, let's say, cognitive control of the discussion. It also helps with reporting for people back in the capital, when they ask: hey, what did you do for 5 days? How is it relevant for our country, our NGO, our organization?
Here, Sorina, if you can just open one day, Monday, or, at the top, the summary reports. What you have at the top is Monday or Sunday, whatever. Monday, yeah. We have reporting now from Tuesday as well. Okay. Or you just scroll -- yeah. Thank you. Monday, Monday or -- yeah.
No, no. Don't worry. Okay. That is fine. Yes, yes. Go back to that. Yes. Here are the daily summaries. You can also go back to the previous page. If you subscribe, you get a daily summary every morning with the main points of the day. If you scroll down -- Andrijana, this is very important -- it is a mix of artificial and human intelligence: our team provides inputs on what is happening during the day, and Andrijana, Sorina, and myself say these are the interesting points. We always highlight what is artificial and what is human intelligence, and you have a survey of the topics and nice photos.
If you scroll down -- this is what I'm particularly interested in -- you have thought-provoking ideas and the metaphors people used during the day, interesting metaphors about the future.
Sorina, if you can go to the main page for IGF reporting, with all the sessions for Sunday and Monday, and scroll down. Our colleague -- where is he? -- Stephan is making video summaries.
Look at this knowledge graph. How complex is it for one day? We have a knowledge graph for each session, which I showed, and it is now combined for the whole day. Therefore, you can suddenly see how a discussion in one panel or workshop is related to some other discussion.
This is a value-added element you cannot get even by participating carefully in the sessions. It is a bit busy, but you can see -- or, Sorina, you can click on all nodes. It is very strange, Sorina.
You can find your name and see how you are linked to a topic that was discussed and how you were linked to any other discussion during the previous day. At the end of the IGF, we will have a complete knowledge graph of everything that was discussed during the IGF. If you can scroll down, Sorina. Thanks.
Then you have an AI assistant: on top of the reporting, our team is developing an AI assistant. Therefore, you can ask it questions, Sorina. Say you heard at a coffee chat or a reception that there was an interesting idea somebody mentioned at some session, and you don't know what it was or what the main points were, on cybersecurity or Internet governance. Okay. Whatever.
The more specific, the better: Indonesia, Brazil, Switzerland, yesterday, AI -- and you get the points. What is important in all our activities is that you always have a source. We insist in all our activities on pointing to the source, for a few reasons.
First, it shows reliability: you know what the AI answer is built on. Second, we think that we should respect people's knowledge. If I said something in a session and somebody creates an AI answer from it, it's fair for me to be quoted. This is a big problem with the main AI systems: they are basically not quoting people; they are not quoting our knowledge. That is another element of the bottom-up AI which we are arguing for. Sorina, if you scroll down a bit. Thank you.
You have statistics: total reports, total words, the longest session, the session with the most speakers. Who was the fastest speaker of the day? So far Mathilda is leading and Betsy is second. If you are speaking today, try to speak fast; we will have an award at the end for the fastest speaker at the IGF. So far, that is it.
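The statistics just mentioned, total words per speaker and a "fastest speaker" ranking, are straightforward to compute from attributed transcripts. A toy version, with invented names and durations (not the real IGF data):

```python
# Per-speaker word totals and a words-per-minute ranking, fastest first.

def speaking_stats(interventions):
    """interventions: list of (speaker, text, minutes) tuples."""
    totals = {}
    for speaker, text, minutes in interventions:
        words, mins = totals.get(speaker, (0, 0.0))
        totals[speaker] = (words + len(text.split()), mins + minutes)
    # rank by words per minute, fastest first
    return sorted(
        ((s, w, w / m) for s, (w, m) in totals.items()),
        key=lambda row: row[2],
        reverse=True,
    )

stats = speaking_stats([
    ("Mathilda", "one two three four five six", 0.02),
    ("Betsy", "one two three four", 0.02),
])
```

Each row is (speaker, total words, words per minute); the first row is the current leader for the fastest-speaker award.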
Then you have the key prefixes and concepts used during the sessions and the main points discussed: digital, Internet, online, and cyber. At the end you can register to receive IGF daily updates. Now, why does it matter?
Every year we develop the reporting. We are now trying to set up a project of codifying historical knowledge: whatever has been discussed since the first IGF. Andrijana mentioned the theme of data cleaning. We try to use AI for it, but there are limitations. You still need humans to say, okay, this is the speaker; you need to do the coding of when people spoke; and there are some other details we can discuss.
It involves a lot of manual work. We are working on it and looking for partners and sponsors who can join. We hope to have the presentation of this, let's say, IGF knowledge ecology at the IGF in Norway, where we will basically present the historical knowledge of the IGF and how discussions have evolved, on cybersecurity for instance, from the first IGF to today.
What were the key words used, and how were issues and discussions shaped? For example, I have two projects I'm interested in. One is a collection of the ways people frame opportunities and risks in speeches delivered in opening sessions: "AI/digital offers opportunities and risks." I have a collection so far of 150 ways in which different speech writers frame it. People have to be innovative.
You can repeat again that AI offers opportunities but there are risks, which is true. But now you get interesting insights into the linguistics of the event and how people are framing and reframing issues. Now, why? This is part of a deeper and broader approach to AI.
We started in 1992. I wrote my master's thesis on AI for international law. At that time, it was rules-based AI, AI based on codifying international rules. Obviously, with neural networks things changed quite a bit; ChatGPT came, there was big hype and excitement, and in 2023 we were skeptical about the hype. We continued to develop bottom-up AI and institutional AI by annotating documents.
Maybe Sorina can show you that. It would also be useful to see how we annotate documents and how we basically develop knowledge. One of the key principles we use is that anyone's knowledge contribution should be recognized: whether you are a student or a researcher, whether it is from your paper.
AI must recognize people's contributions. Otherwise, it is not good, not sustainable, not fair. And it is possible to do. We also argue with big companies that it is not true that it is impossible.
The problems are obviously copyright and the other issues we are aware of, but it is possible to track back. For example, Sorina has a very interesting annotated document on the global compact. Sorina, maybe you can just highlight something, just to see how the annotation works and functions -- you don't have it over there? It is in our classroom. Okay.
Therefore, any document is annotated; you have layers of meaning. When an AI system comes, it can process the Global Digital Compact and, to be fair, you will get some good answers. But if you analyze it with layers, like scripts in the medieval era, if you have layers of meaning, you enrich the quality of the answers.
And our call is that any community, any organization, any governmental department should and can develop local AI. That is, I would say, critical for the future of AI. People should develop their own AI based on their knowledge and their experience. It is completely feasible with open-source AI platforms. It is financially affordable. It is ethically desirable.
Technically feasible, financially affordable, and ethically desirable. I will say this is a critical issue for the future, not only of AI and digital, but I would say for the future of our society, because knowledge is one of the elements that defines us as humans. It defines our human dignity, our history, our identity, and if that knowledge is taken from us, we as a society -- and I'm speaking about all societies worldwide -- may have a problem.
We are showing practically and functionally that it is possible to develop something like this. Sorina is literally behind the scenes. She does a lot of knowledge analysis and is writing an excellent book, which you can see this afternoon; we will present it.
Sorina, if you have it open, you may go to our classroom and see how the annotation works, just to give an idea of one of the techniques we use for adding this layer of meaning, which is crucial. What is happening now?
Basically, big companies have absorbed most of the knowledge data available online. Now the next level is how to enrich data. You can enrich data in two ways. One is to outsource to somebody, usually in developing countries, who basically labels the data.
But that is of limited use, because you can label data as "this is a cat, this is a dog, this is a bicycle", but you can't label data on what the Global Digital Compact is or what the AI Act is. For that you need expertise.
What we have been doing, and Sorina will show later in the classroom how it works, is annotating texts. Annotating texts means: I read a newspaper and highlight something, saying, Andrijana, this is interesting for the discussion on cybersecurity. She deals with cybersecurity; she sees that, reads it, and comments on the basic text, and gradually we develop layers of meaning.
When the system goes through this, it analyses it, so that when you ask what cybersecurity is and what was said about it at the IGF, you get more than the basic text. This is the problem otherwise: you always get an answer that sounds nice when you ask ChatGPT or Gemini or any other platform. The question is how useful the answer is: will it provide you with additional references or additional points, or will you just read it and say that it is sort of not bad? That is the next element. Here is where fine-tuning starts to give you answers which are connected.
We have many things now, but this is basically the idea: IGF reporting, and any other reporting we have done, from the UN Security Council, UNGA, and the United Nations Summit of the Future, where people told us about one use which we didn't know. You have all the transcripts of the speeches as presidents deliver them in the United Nations General Assembly, always first according to UNGA practice, and people suddenly asked the question: which are the countries that support climate action and inclusiveness? That was one example. The system goes through all of the transcripts.
I'm now inventing (?): you are a diplomat from country X sitting in the General Assembly, and you call a colleague from this or that country: would you go for coffee? Hey, your president mentioned this. Was it just the speech writer who put it in, or is it your policy? They say: no, it is our policy. Okay, can we have an initiative in the General Assembly around this? We had two examples of that; it makes it very practical, and you basically get new insights. Otherwise you sit in the meeting, and you can be concentrated and follow, but frankly speaking, after 10 minutes -- this is our attention span -- you basically start to browse restaurants and what you will buy, shopping. It is very human.
In this case, you suddenly get references to the points that you may consider. Or here at the IGF: you are interested in very specific issues, say AI and local communities, multilingualism, indigenous communities. That is it. You search and ask: who at the IGF is interested in this?
You suddenly find that in session 6 the transcript shows Abdula spoke about it. I see a future IGF where, in addition to these lovely person-to-person exchanges, you just click and say: hey, by the way, Abdula, you discussed indigenous communities and AI yesterday. Could we go for coffee? Maybe next year you will have a workshop on it, and you will suddenly find a community which you cannot detect easily. You know how it works at the IGF: we don't see each other for a year, and you want to see all your friends. In my case, former students.
You basically limit your, sort of, cognitive proximity and your ideas for new developments. That would be all; we have other features. Sorina will send you the link if you are interested in how the layering of meaning works. And that's -- I guess that's it. We may share a short video of how your team is cleaning data, which is not particularly exciting, but it is one of the jobs to do in order to have a good AI. Now I guess is a good time to open the floor for your questions, comments, and follow-ups. Please introduce yourself. Thank you.
>> ANDA: Hello. Can you hear me? Can you hear me now? Yes. I can hear myself. Very interesting. Thank you so much. I am Anda, with the Center for European Policy Analysis. With large language models, one of the big problems is the fact that they hallucinate, and the question is: how do you address this? Is it the temperature of the language model? Maybe a second question: I used to be with the EU delegation (?), and, regarding UN texts, how do you account for silences? Sometimes a diplomatic way of sending a message is to be silent about something. Of course, maybe through the annotation process you may highlight this, so that people outside of the diplomatic community can get insights into it as well. Thank you.
>> JOVAN KURBALIJA: Thank you, Anda. Your first name is Anda? Thank you. First, let's be clear: AI is a probabilistic model and will hallucinate somewhat, and we tell diplomats to be careful; you should have the final say. Hallucination is built into the model, that is the first point, and I guess shared knowledge in this audience. On temperature, for those who are new to it: the higher the temperature, the more room you leave the AI model to invent; if you lower the temperature, the problem is you lose a lot of quality in the answers, because the model gets very, very cautious.
Then you lose elements. Now, what we use are the RAG model and knowledge graphs, two techniques. The RAG model searches previous documents and annotated documents, collects the relevant paragraphs, and sends them to one of the large language models we use -- currently about 12 large language models; we experiment with internal ones and also use models from OpenAI, obviously, and Gemini, and Chinese models; in China you have a new large language model every day. AI is a commodity. It is not science fiction, but a commodity. In this context, RAG is very useful: it picks up paragraphs, makes references, and reduces hallucination substantively, because the answer is generated based on specific paragraphs and is no longer open-ended like (?). And you have knowledge graphs, which are logically structured, so there hallucination is almost impossible.
The idea is that through a few techniques -- standard temperature, which we are using less and less, RAG, knowledge graphs, and annotated documents -- we reduce hallucination to a minimal level. I would say I haven't noticed any; if you notice one, please let us know. We are adjusting constantly.
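The RAG idea described here, retrieve the relevant annotated paragraphs first, then let the language model answer only from those, with sources attached, can be sketched as follows. This is a naive illustration, not Diplo's system: the scoring is plain word overlap where a real system would use embeddings, and the corpus entries are invented.

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank paragraphs by
# overlap with the question, keep the top k, and build a prompt that forces
# the model to answer from (and cite) those passages only.

def retrieve(question: str, corpus: list[dict], k: int = 2) -> list[dict]:
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, passages: list[dict]) -> str:
    context = "\n".join(f'[{p["source"]}] {p["text"]}' for p in passages)
    return f"Answer using ONLY these passages, and cite them:\n{context}\n\nQ: {question}"

corpus = [
    {"source": "Session 12", "text": "cybersecurity norms were discussed at length"},
    {"source": "Session 3", "text": "multilingualism matters for local communities"},
]
question = "what was said about cybersecurity norms"
prompt = build_prompt(question, retrieve(question, corpus, k=1))
```

Because the model only sees the retrieved, source-labelled paragraphs, its answer is grounded in specific text rather than open-ended generation, which is exactly why RAG reduces hallucination.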
Hallucination is very, very minimal, which matters in diplomatic circles: you don't feel comfortable if AI hallucinates. Diplomats sometimes hallucinate too, but we complain if a machine does it more than we do. On silence: AI is always based on action. You publish a text, that is an input. You annotate a text, that is an input. You comment, you input. You ask a question, you input. Silence by itself is not built into the AI model, but we can fake it by making a presumption, which is a tricky presumption. This is a bit of a tricky element. Thank you for asking it.
The presumption is: if you don't complain, you are okay with it, which is tricky. I understand. With a silence procedure, it is easy; that is a fact. When you have a silence procedure, if we basically don't answer by, say, Monday at a given time, the text stands. Without a silence procedure, it is much trickier, I would say. We are now developing one new application that will be particularly useful for diplomats, dealing with square brackets. Dealing with square brackets is a nightmare.
I don't know if you negotiated the Pact for the Future; the GDC I don't know about. The Pact for the Future was basically unreadable. The idea is that you deal with square brackets in a more human way. You ask: what are the square brackets proposed by EU Member States, and how would the text look if we accepted all the square brackets by the EU Member States, or by the African Group, and so on. That is basically the idea of how we deal with that. With real diplomatic silence, the problem is still there. Henri, hello.
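The square-bracket helper described here can be sketched mechanically: mark each bracketed proposal with its proposer, then render the text as it would read if a given set of proposers' brackets were accepted. The `[proposed text]{PROPOSER}` notation below is an invented convention for illustration; real negotiated texts mark brackets in many different ways.

```python
# Resolve square-bracketed proposals in a negotiated draft: keep proposals
# from the accepted proposers, drop the rest, and tidy up the whitespace.
import re

BRACKET = re.compile(r"\[([^\]]*)\]\{([^}]*)\}")

def resolve(text: str, accept: set[str]) -> str:
    """Render the text as if brackets from `accept` were agreed."""
    def decide(m: re.Match) -> str:
        proposal, proposer = m.group(1), m.group(2)
        return proposal if proposer in accept else ""
    # collapse doubled spaces left behind by dropped proposals
    return re.sub(r"\s{2,}", " ", BRACKET.sub(decide, text)).strip()

draft = "We [strongly]{EU} [take note of]{G77} [welcome]{EU} the report."
print(resolve(draft, accept={"EU"}))   # -> We strongly welcome the report.
print(resolve(draft, accept={"G77"}))  # -> We take note of the report.
```

Comparing the renderings for different proposer sets is the "more human" view of the brackets: each delegation sees at a glance what the text becomes under each outcome.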
>> HENRI: Thank you very much.
>> JOVAN KURBALIJA: You are at the right place at the right time. I don't know if it is by coincidence or --[Laughing]. Please, thank you.
>> MIKE PERKINS: I'm a researcher working on AI tools, and this is a really great tool. Do you have this available for the research community, or is it a tool that you offer on a commercial basis? It is really, really fascinating for qualitative data analysis.
>> JOVAN KURBALIJA: Thank you. Data is critical. We have almost 25 years of annotated data in our system, half a million annotations. If a big company started today, we estimate they would need 3 to 5 years to reach that level of annotation, because annotation here is not cat-or-dog; annotation is, for example, the contextual meaning of a silence procedure.
We are a not-for-profit organization. There is now a push to commercialize this, and we will probably launch a start-up soon in Geneva for offering this to the UN, the EU, and Member States. That will be happening.
For researchers: you are a researcher, and we have an AI apprenticeship course where we teach people about AI by using AI. It is very popular and effective. You develop the system, you use the system, and you learn about neural networks, RAG, knowledge graphs, and the deeper layers of probability and statistics, but in a simpler way. Let us know if you want to join a future AI apprenticeship; that could be a way to use it for your needs.
>> CHRIS ORDU: Okay. Can you hear me? All right. Good morning, everyone. I'm Chris Ordu, with Website Technology in Nigeria. Thanks for the opportunity. You introduced your work and what you do, and I made sure I was present today to actually see you speak about it. My question is related to his; I might come back to the second one. I was going to ask whether your AI model is actually open source for the public. Then the second question: what were the major hurdles you faced when building your AI model?
>> JOVAN KURBALIJA: Everything we have is based on open source. Where we are basically discussing issues is when it comes to data, which is not open source by default; you have sensitivities here around data protection, privacy issues, and intellectual property rights.
It is not related to us sharing the model, which is, by the way, an open-source model. It is basically related to other issues, let's say policy and legal issues.
Everything we do is developed with open-source models: large language models, RAG, knowledge graphs, databases. Everything is based on open source. What is the major hurdle? The link between human and artificial intelligence. That is the basic issue.
We are now changing Diplo's internal organization so that by doing any work -- by teaching, by doing research, by annotating texts -- you are basically doing your job. You teach. You train. You prepare a project proposal. At the same time, you are building a model. That is the crucial issue.
As I'm saying all this: Diplo has a fantastic team. You know most of them; they are very engaging and creative people. But we are still trained the old way: I have a book, you don't have a book; I need a book to read in order to know.
We are not trained in what we call cognitive proximity. When you relate to people -- I relate to the question that Anda asked, and tomorrow, when I find something on silence in AI, I will say: Anda, based on your question, here is the answer. I will just annotate, let's say, an article and point to Anda. The system will tell her: Jovan is engaged in a discussion on silence in AI. Anda answers: yeah, this is not exactly what I need, and gives a practical example from today's session. What do you need? I'm interested in the diplomatic pattern of silence. And we develop language and knowledge between us. Imagine if we have 10 people doing that, or 20 people. Then you have an intellectual quality that is higher than big systems with thousands of people. We call it knowledge proximity, and we think it will be the future way of organizing organizations, governments, and companies. It may take 10 or 15 years.
That is basically the biggest hurdle. Diplo is a creative organization with fantastic individuals, and we still have a bit of a problem introducing that way of interaction, like this example between Anda and me. You can't hold much in your active memory -- System 1 from Daniel Kahneman's Thinking, Fast and Slow. You can keep about 10 pieces of information in mind at any moment. For example, you are listening to me, thinking about what the next session is, thinking how you can use this in Nigeria, thinking about what you will do this evening. I am not reading your mind, but this is how I think too. Therefore, our cognitive capacity when we read a text is limited, because of System 1, to about 10 pieces of information. That is a bit of a problem.
Therefore, if I read a text, would I know whether Anda is interested in silence and AI or not? This is where AI can help as well: when I open the text, it says, by the way, you discussed this with Anda; this is of interest to her; read it and see if something is interesting for Anda. That is the biggest challenge. We will be on time, and if there are no other questions -- we have -- well --
>> UGENIO GARCIA: Thank you. My name is Eugenio Garcia from Ministry of foreign affairs from Brazil. Congratulations on this amazing tool. I think this is a public service for the community. As a young diplomat, I used to sit at UN general assembly hall taking notes to summarize speeches by the end of the day and send to my capital and this is annoying thing to do and think that diplomatic reporting is changing. This is good news.
I would like to invite you to come to Brazil next year to share your knowledge and to talk to young Brazilian diplomats about how they can leverage AI tools and AI for diplomacy in their profession. We can of course discuss all of these topics as well. This is an open invitation to you and the Diplo Foundation. Thank you.
>> JOVAN KURBALIJA: Thank you for a great invitation and for bringing me back to Brazil; I was in Sao Paulo. You said it well: the best way to develop AI is to find what we basically don't like to do. We don't like taking notes and sending them to the capital; we like to meet colleagues in the UN cafeteria, not just for coffee but to discuss. And we are frustrated leaving the IGF not knowing what happened: you are on the plane thinking, okay, I had many good meetings with people, but what happened here? So it is based on practical needs. We have five minutes, and it seems our colleague is quite determined about timing.
Do we have one more question? We have two more quick comments and questions, and we have a colleague here.
>> Thank you very much. I'm (?) from Ghana, and I commend you on what you have been doing. I would like to quote something which you wrote: the reality is that the current bureaucratic machinery is largely outdated, as are our models of production and concepts of bureaucracy in the AI era. I think that is apt to what you said. My question is: whom are we leaving in charge of this whole AI revolution? It is creating this kind of gap, where people are just getting richer and richer, and it is creating this kind of divide too. How do we approach this whole thing?
>> JOVAN KURBALIJA: Quick answer. You should be in charge of your knowledge. Your ministry, your country, your community, your company should be in charge of your knowledge. That is the critical element. Yes, you will spend less time on bureaucratic things and will have more time for real things, but you should be in charge of your knowledge.
Never underestimate bureaucracy; it always finds a way to kick back. This will be a big challenge in what is coming. Bureaucracy is based on two elements: text and hierarchy. AI's handling of text will shake the core of bureaucracy, and therefore big changes are ahead of us. A comment here, and we will then be on time.
>> Okay, okay, I will be pretty quick. I'm a researcher and professor at a university in South Korea. First of all, big applause to you and your team. It is a great achievement, and it is basically very close to what I wanted to do with your website.
I have done it manually, personally. You have, in a way, obviously done what I wanted to do in the past. My quick question is this: it is a very strong convention in academia that we want to be transparent about what model we used and how we coded it to get the outcomes and results that we show in a paper, right? You present outcomes, and it is very nice, very intuitive, and phenomenal. But do you have a plan to be transparent about how you coded it, or how you set the parameters, to get the results you show?
>> JOVAN KURBALIJA: Yes, we are fully transparent, and we will add a link explaining what models we use, what RAG setup, and what neural networks. There are elements we can explain and some elements we can't explain, as you know. There is a research paper; we had a researcher who was part of our team for a year, analysing how the ethical and social evolution of an AI system is more important than all of the checkpoints and guidelines, which don't work.
He came and said: okay, how can ethical AI be developed in a different way, through an anthropological approach? I will share the links, and you can leave comments. That is critical to the AI story, and with that I will conclude: the AI story is so critical for the future of humanity.
It puts the question of knowledge, which defines who we are, our identity and dignity, at the forefront. Not data, which is merely input. We don't have knowledge anymore in our policy discussions. We had it, but it has somehow been cleaned out over the last 20 years. Therefore, this is critical.
AI should be transparent, explainable, based on open source, and built bottom-up with practical tools that can help people. Knowledge should stay with the people who generate it.
This is one of our principles. If you test the AI chatbot, we always point to the sources, including the IGF materials, from which we generated the answer. Those are a few principles.
Yes, we are trying to do it. I'm particularly pleased that Marcus shared comments with us today; he leads the coordination of the project from the policy side, and that is a signal. Marcus is the keeper of the memory of the IGF, and that is critical: find people who know the story in order to train the AI model. Marcus is one of them; thank you for joining us. We will be persona non grata if we don't stop now, huh?
>> ANDRIJANA GAVRILOVIC: Everyone, our booth is at workshop 9. If you have questions about what you heard, or what you would like to have heard but didn't, you will find me there. If I'm not there, I am probably getting something to eat and will be back in five minutes. Right at booth 9 is the Diplo booth. Thank you very much.