IGF 2024 - Day 2 - Workshop Room 7 - WS78 Intelligent machines and society - An open-ended conversation

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JOVAN KURBALIJA: Welcome. Because of the noise from people outside, we have to speak up. I can tell you, having attended the IGF for years, that the two biggest challenges are sometimes the food, and the second is the sound. It seems the sound works now. Yes. It's okay. Good.

Thank you for coming. I'm the head of the Geneva Internet Platform. My colleague, Sorina Teleanu, works on the intersection between human and artificial intelligence ‑‑ human-artificial and artificial-human.

One minute.  One minute and a half.  One minute is done.  Okay.  Okay.  Okay.

I have a question. Where are you coming from? Bangladesh. You in the Indian subcontinent invented the number zero and put us in trouble. Otherwise, we would still be using Roman numerals, and it was so easy; but you invented one and zero, the digital world, and all the trouble started, you know, including the sound in this room. I'm always teasing my colleagues and friends from India and Bangladesh, because the number zero was invented on the subcontinent; it gave us the algorithm, came first to Arab traders, and then to North Africa when (inaudible), who was very young ‑‑ it's better than me.

Those are the challenges. Oh, there was a proposal that I start singing, which is a terrible idea, I can tell you. Okay. Some people are giving up, huh?

You basically try to cover the area, in this case international law: if there is a breach of this, then that follows, and so on. Now, in the last five years neural networks took off, especially after ChatGPT raised the question. We have been focusing on AI, but in a slightly different way than other institutions. We have been developing AI, and in addition we have been trying to see what the philosophical and governance questions around AI are. The principle is that you don't need to be a programmer, or to have quite a few programmers, but you have to understand the basic concepts in order to discuss AI. Otherwise, the discussion ends up with the unfortunately dominant narratives that we have here at the IGF ‑‑ and not only at the IGF but at many other meetings: bias, ethics. We could generate the typical speech: a mix of bias, a mix of ethics. What we noticed is that this narration does not go into the critical issues.

What we basically did in this context was the following. I'm sorry, it should work. We were discussing: are we holding the globe together, or are we pitting machines against people? That was most of the discussion, especially when the hype came in 2023 with ChatGPT. Being involved in AI, we were really concerned about the direction of the discussion. Those of you who are in this field know that there was for a long time the long-termist movement of effective altruism, which basically sent this message ‑‑ no. Like this. Outside. Okay. So many details.

That was basically the long-termist movement, which was saying ‑‑ I'm simplifying ‑‑ don't discuss AI's current problems. Let's see what happens in 20 or 30 years, and let's prevent AI from destroying humanity. I'm caricaturing a bit, but this was the narrative. You can recall the letter signed by leading scientists to stop AI, and these things. One worrying signal I noticed was that it became a bit of an ideological narrative, and when there is ideology, there is something fishy. I was born in a socialist country, and I can detect an ideological narrative in no time. It is similar to the communist narrative, which says: don't ask any questions now, just follow the orders of the party, and in 30 or 40 years we will be in the ideal society. If you complain, you go to the gulag and you get in trouble. That was a bit the initial narrative, and we said, okay, it's a bit tricky; we have to have a more serious discussion about it.

Therefore, we started pushing for attention to the immediate risks of AI: in education, in jobs, in day-to-day life, in the use of AI for content moderation by platforms, in any walk of life. We were particularly concerned about one aspect, which you can consider a midterm aspect: the massive codification of human knowledge by a few platforms, mainly platforms in the United States and China. As we know, they are training the models ‑‑ Alibaba and others are coming ‑‑ and the idea that knowledge is centralized and captured was very problematic. We started developing bottom-up AI, trying to show that with relatively few resources you can develop your own AI and preserve your knowledge. We say that it is technically feasible, financially affordable, and ethically desirable, because the knowledge codified by AI defines our dignity. It's extremely important that we know what is happening with our knowledge, and with the knowledge of the generations before us.

This is, more or less, the context in which we have been doing that, but I'll ask Sorina now to build more on these discussions. You have to hold it here. No, here.

>> SORINA TELEANU: Let's see if I can do this right. Thank you for joining our session. What we wanted to do today is to have a bit of a dialogue around some of the more philosophical issues surrounding AI, because, at least as I see it, we talk a lot about the challenges and opportunities of AI and about the need to govern it ‑‑ how do we deal with the challenges, and how do we leverage the opportunities ‑‑ but what about the more human side of all of this? I do have a few questions that I'm hoping we can go through quickly, and I'll just ‑‑ yeah, no, I'll actually do it myself if we can just switch. I do like slides quite a lot, and we have quite a few nice illustrations that I couldn't miss the opportunity to share with you. So I'm going to do a bit of that. Bear with me.

Then I'm hoping to have more questions from you as well ‑‑ the kind of questions I feel we miss more often than not. So, starting with this: we talk a lot about large language models, right? Generative AI and ChatGPT and all of these things. But what are some of the challenges in knowledge creation? How do we look at large language models and at generative AI tools? Are they a tool? Are they a shortcut? Are they our assistant, a new co-worker? How do we relate to these tools? Are we even making conscious choices when we interact with them? Are we exercising our agency as humans, and to what extent?

Yeah, the question: what roles do we imagine large language models playing as we humans interact with them? Are we missing the forest for the trees? We talk a lot about generative AI, but are there other forms of intelligent machines ‑‑ agents, whatever you want to call them ‑‑ that we might need to focus on a bit more in our discussions of governance and policy, and what does it all mean? And if we do need to do that, how do we actually get there? Then, something that really, really bothers me is the way in which we assign human attributes to AI. We talk a lot about AI understanding and AI reasoning, using words that are much more adequate for human intelligence. But does AI actually reason? Does AI actually understand? When we use these words, do we actually understand what we mean by them? Yeah, there is a bit of hype around anthropomorphizing AI. I'm not sure how many of you might have attended the AI for Good global summit in Geneva. I see at least one person who was around in Geneva, but he is busy typing.

Thank you for that. Let's see. There was a lot of focus on humanoid robots. You would walk into the conference venue and see a human-like-looking robot here, another one there, and another one over there. What you didn't see was people actually questioning: okay, what does that robot actually mean? Does it understand? Does it reason? Does it think? Or is it just another way for us to hype technology in some way? We have a robot here as well.

Then a bit more on the interplay between AI and other technologies that tend to join in a bit of the hype.

Another example from exactly the same conference. Last year the focus was on neurotechnologies and how the interplay between AI and neurotechnology might change the way we relate to these technologies in the future. Many companies come to the summit and present neurotechnology devices, applications, and what not, and we had this curiosity: let's see what kind of privacy policies these companies have when it comes to their processing of brain data. If they offer neuro devices and process brain data, what do you think we found out? Any guess? One had a line in the privacy policy saying, well, we might be processing your brain data, but because you agreed to use the service, you also agreed to us processing the data. Beyond that, the privacy policies were mainly about cookies and how you interact with the website ‑‑ nothing about the technology itself. Then the question is: if you, as an international organization, invite these companies to speak about how amazing the technology can be and how it can help solve whatever problems, you should be a bit more careful about how they also deal with the human implications, human rights, and what not. I think sometimes we talk but don't walk the talk in the policy space, and I'm hoping we will see a bit more of that going forward.

When words lose their meaning: how many of you here use tools like ChatGPT? At least ‑‑ okay, I'm seeing a lot of hands. I'm a bit worried, to be honest. You're also teaching at the College of Europe, and you know how it is: when students have a writing assignment, they go to ChatGPT, put it there, and get their essay. We're also seeing this in the policy space quite a lot. There was an anecdote from the head of an international organization in Geneva. Would you like to talk about that?

>> JOVAN KURBALIJA: They went to a conference, and they were hearing the same speech in all the opening statements. We had one organization ‑‑ and it's an important organization ‑‑ that created its strategy, 120 pages, and they said: let us read it. Somebody didn't even dare to remove the ChatGPT references. We said, oh my God, what's going on? It was a funny anecdote: with eight or nine speeches, we analyzed them and found the patterns that are basically generated by AI. They don't even make the effort to go to Gemini or other platforms; everything was created with ChatGPT. Okay.

>> Speeches would be even worse.

>> JOVAN KURBALIJA: Just for colleagues online: there was a comment, a good comment, and we often discuss that. We always ask that AI should be perfect, but at the same time we are who we are, and speeches, as you know, even among humans, are not that exciting. So it was a good point. What worried us was this huge document, the 120-page document on AI strategy. We were thinking: many countries will read it as that organization's strategy for AI. First, can they read 120 pages? Second, is it really an expression of the policy interests of that organization? It's not. It's at a very common-sense level. That's it. Sorina.

>> SORINA TELEANU: Speaking of models, I think the first question is the one that best expresses what we have been discussing for quite a while, and what I don't see so much in the policy space. If we rely on AI to write our texts and our emails ‑‑ right now it's fairly easy to spot what is AI-generated text and what has at least some sort of human intervention, but if we end up relying so much on ChatGPT and similar tools, will we still sound like humans 5, 10, 15, 20 years from now? And what happens with this kind of self-perpetuation of AI, if AI comes up with new text based on the data available now, while five years from now much of the new data will itself be AI-generated? What does it all mean for broader issues of human knowledge, for us as humans, and for how we relate and communicate with each other at the end of the day?

This is one of my favorite books. I'm not sure if someone in the room has actually read it. I think we have this kind of obsession as humans to try to develop AI which is really like us. We want artificial general intelligence to be as good as humans at every single task because, I don't know, we want that to happen. But what about other forms of intelligence out there? Can we develop intelligent machines that act more like octopuses, which we have recently discovered are quite smart and intelligent? More like fungi? More like forests? What about other forms of intelligence around us that we might want to borrow a bit from as we develop whatever we mean by intelligent machines? We tend to be so focused on us humans ‑‑ we are at the center of the world, we are the best ‑‑ but maybe it's not exactly like that as we look into developing technology.

We're having more trouble with the technology, because why not? I'll end with a few more questions. This is probably the overarching one: what does it mean to still be human in an increasingly AI era? This is more ongoing ‑‑ the interplay I was mentioning earlier between neurotechnology, other modern technologies, and AI ‑‑ and we can cover that a bit later as well.

Another example of interplay. What we have been trying to advocate for in the policy space in Geneva is this right to be humanly imperfect. Jovan, would you like to reflect on this a bit?

>> JOVAN KURBALIJA: When I go to the Human Rights Council and the human rights community, they look at me as if I'm from another planet, but I have been arguing this for quite some time, three to five years. I even proposed a workshop on it at the IGF, but it was probably dismissed somewhere along the way.

I've been arguing for the idea that we have the right to be imperfect, but our civilization, centered on optimizing efficiency, is basically making it unthinkable that you have a right to be lazy, a right to make mistakes, a right to ask, a right to this and that. If you consider it carefully, humanity made its main breakthroughs when we were lazy. In ancient Greece, people had plenty of time to walk through the parks and think about philosophy. Or in Britain, where tennis and football were invented. Obviously, other people were working for the elite ‑‑ that's another story ‑‑ but the elite were lazy, and they were inventing things.

My argument is that we have to establish the right to be imperfect: the right to refuse, to remain natural, not to be hacked biologically; a right to disconnect; a right to be anonymous; and a right to be employed ahead of a machine. I'll make a bet: in five years' time I will probably already be retired, but some of you are younger, and we will have at least one workshop at the IGF asking: do we have a right to be imperfect? I can offer that as a bet. It's a serious question, going beyond, let's say, a catchy title. It goes into what Sorina said, the critical question: what does it mean to be human today?

What will our humanity be in relation to machines in the coming years?

>> SORINA TELEANU: It's not so much about robots coming and taking over and disrupting everything ‑‑ that used to be the focus for quite a while. It's about human-to-human interaction and how AI comes into play.

We'll end with a few more questions, and then we're hoping you will add more questions at the bottom of our slide. Trying to wrap up: what do we actually want from AI? There is a quote from a company ‑‑ maybe I shouldn't name the company ‑‑ developing artificial ‑‑

>> JOVAN KURBALIJA: It's a prominent company.

>> SORINA TELEANU: It's trying to develop artificial general intelligence ‑‑ again, the type of AI that would be at least as good as humans at doing everything and anything. The quote, which I'm going to paraphrase, goes a bit like this: our aim as a company is to develop artificial general intelligence, then to figure out how to make it safe, and then to figure out its benefits.

When I heard that statement, I was thinking: okay, but shouldn't it be the other way around? Like sorting out what the benefits of AGI are, seeing whether it is safe, and then developing it. Isn't there something wrong with that narrative? In our policy discussions at the IGF, we should be questioning these companies a bit more, rather than just going around saying, hey, AI is going to solve all of our problems and we're going to sleep happily every night because AI will do the work for us. Okay, but have we actually thought carefully about this? Again, it's not about robots hitting humanity, but about what it means to still be human.

>> JOVAN KURBALIJA:  When you said sleep, we are sleepwalking. 

>> SORINA TELEANU: We kind of are. We don't see these questions in the debates, and we're coming from Geneva, where at least every other day you have a discussion on AI governance. You can confirm ‑‑ thank you for nodding. How many of these questions do we see in those debates? Yeah. So I'm hoping to see more of them.

Again, how do we interact with AI? To what extent are we even aware of these interactions, and how many of them involve informed choices? What about human agency? As I was saying, AI is having an impact on how we interact and relate with each other as human beings. Is AI making choices for us? Should it be making choices with us?

Related to Jovan's point about the right to be imperfect and the focus on efficiency ‑‑ (silence) ‑‑ should we just do what we're good at and give machines the creative tasks? Finally, is there a line to be drawn? Can we still draw this line at some point, or is it too late? Can we be asking more questions?

Over to you, I would say, and I'm hoping you will have some reflections on some of these questions, and ideally more questions because I think questions are important.  We should be asking more. 

>> JOVAN KURBALIJA: I have the geeky approach; Sorina is not into it. For those who have read Sorina's book, I'll probably repeat myself. She wrote a book on the Global Digital Compact, which was adopted in September at the U.N. Sorina had been following the Compact for the previous 18 months, and her slides are shared by many governments. I said: let's use AI and convert your slides into the book. Sorina said, okay, maybe. The next day I see Sorina typing. I said, come on, let's use the slides and convert them into the book, and you have a book.

Sorina wrote the 200-page book herself in 47 days. Here is the book. I lost my battle over having AI help us write this book; it is a very solid analysis, written in 47 days. We have an internal battle ‑‑ a healthy debate ‑‑ with me being more optimistic and Sorina being more careful and pessimistic, but we do report from the IGF through our website with the use of AI. At the end of the IGF we are going to ask for a whole analysis of the IGF: how many realistic questions about AI were asked over the five days. Mind you, this is an important discussion, but we are very often not seeing the forest for the trees.

These are critical issues about the future of knowledge. Let's start with your questions or comments. Please introduce yourself.

>> (Inaudible) my question is: given that I am teaching AI ethics, and everybody is saying that we should follow the new AI ethics (inaudible).

>> JOVAN KURBALIJA: (Inaudible). I would focus on how it functions and on the implications for society. (Inaudible) an AI apprenticeship, where people interact and we tell them: this makes sense, this doesn't make sense; these are the biases that can be tolerated, and these are the biases that are illegal and harm people.

(Inaudible).

>> Misinformation and disinformation, bad information on social media, and so on. Even there, there are no ethics, but there is (inaudible). Now, in the coming days, we are facing AI ethics.

(Inaudible).

How do we manage this kind of thing?  Nobody maintains rules or ethics.  Everybody is saying that we should (inaudible).

>> JOVAN KURBALIJA: It's very simple. Diplo has AI systems, and I'm the director of Diplo. If you go to our website, you ask a question, and you feel insulted, I am responsible. That is very simple. (Inaudible).

You start something, and then damage is created. I'm sorry to say, but (inaudible). The legal principles are very simple. (Inaudible). Has anyone forced you to do it? No. Is it harming somebody? No. If it harms, am I responsible? Yes. You apply the law to it. I've been working on this for many years, and when it comes to this AI ‑‑ and I think machine models will take over ‑‑ it's rather simple. That is the codification of ethics.

Don't kill somebody. Don't insult somebody. Don't steal somebody's property. That's simple. We're trying to simplify the discussion, but not oversimplify it, because we've found that a lot of energy in this space is focused on issues which are nice to talk about, but which are sometimes not even good philosophy. If you discuss good philosophy, it's great, but sometimes it's basically repeating notions ‑‑ ethics, bias ‑‑ and that's a bit of my concern.

So: a practical apprenticeship and legal responsibility. Focus on the issues, and train people to understand them. That would be my advice.

Do we have a question on that? Let me just bring ‑‑ last year as well.

>> Now I'm going to jump straight into the deep end of the philosophical questions here. Assuming that AGI actually becomes possible ‑‑ which I'm not sure of ‑‑ and it's assisting people in every way. (Inaudible).

I argue the first questions are (inaudible).  Why should I care in the end?  We understand how we think.  I don't think we are (inaudible).

Perhaps it can come up.  It becomes acceptable to create and develop human asset.

(Inaudible audio).

>> JOVAN KURBALIJA: I will make a few points. There is obviously ‑‑ (inaudible) basically argued, in the "flying man" philosophical exercise, that a man suspended in the air, cut off from all bodily sense experience, would still have full consciousness of himself. For me it is still one of the most fascinating lines of thinking about the nature of thought, to the point that there are issues ‑‑ but the other issue with your question is: so what? It comes down to asking: so what, and why should we be worried?

(Silence).

>> JOVAN KURBALIJA: The one thing about AI is that it is no longer a precise machine, like everything else before it. It is like us: it hallucinates. That was interesting. Do you have a question?

>>  You select them and bring them to something because (inaudible).

I'm saying this because it can generate ‑‑ there are a lot of topics right now, but just on that answer: if it generates, and you don't know what it generates, you must have a system to validate what the AI, this engine, is doing. A human goes through a graduation process, and at the end of it you graduate; AI should also be testable in a similar way, to make sure that what it says is not hallucination. That's what I wanted to say about that.
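(A minimal sketch of that "graduation test" idea, in Python: flag answer sentences that no source passage supports. The word-overlap measure, the 0.5 threshold, and names such as flag_hallucinations are illustrative assumptions, not an actual validation standard or an existing tool.)

```python
# Illustrative grounding check: a generated answer "graduates" only if
# each of its sentences is supported by some source passage.

def is_grounded(sentence: str, sources: list[str], threshold: float = 0.5) -> bool:
    """A sentence counts as grounded if enough of its words appear in some source."""
    words = set(sentence.lower().split())
    if not words:
        return True
    return any(
        len(words & set(src.lower().split())) / len(words) >= threshold
        for src in sources
    )


def flag_hallucinations(answer_sentences: list[str], sources: list[str]) -> list[str]:
    """Return the answer sentences that no source passage supports."""
    return [s for s in answer_sentences if not is_grounded(s, sources)]


if __name__ == "__main__":
    sources = ["The session discussed the right to be imperfect and the right to disconnect."]
    answer = [
        "The session discussed the right to disconnect.",  # grounded
        "The panel announced a new binding AI treaty.",    # unsupported, gets flagged
    ]
    print(flag_hallucinations(answer, sources))
```

A real validator would use a stronger support test than word overlap, but the shape of the check ‑‑ every claim must trace back to a source ‑‑ is the point being made.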

How do you see this: the information AI ingests comes from so many sources. Is there a way you could conceive of an AI system with an open interface to the data that populates the process, in a common format, so that systems can integrate? Otherwise you have a single model controlled by some group of people. To integrate it, you're going to need community efforts, so that everybody could work towards an AI of value, which could then be integrated.

Today it's like a big vacuum cleaner sucking up all the data for different purposes ‑‑ maybe sometimes for conversion purposes more than for intelligence purposes. That is something we should talk about. The last comment I would make is: what about licensing the model, like open source? Who does the data belong to? The one who (inaudible), or the one who collects it?

Open source (inaudible) could have some different aspects. You could have a license that says: this is my data, it's in the public domain, do whatever you want with it. Others will keep ‑‑ (inaudible).

(Inaudible audio).

>> JOVAN KURBALIJA: Sorina, can you reflect a bit on the knowledge question? (Inaudible).

>> SORINA TELEANU:  I'm not in a position to answer.  (Inaudible).

>> JOVAN KURBALIJA: The answer to your question is a critical one. Yes, (inaudible) mapped by us, and here is a very concrete example. What we use is basically our own resource, from the language models to all the applications we use. We are now transcribing and reporting from the IGF; our session will be transcribed, and here is the key question. Let's say there was a session yesterday. You can click on any session, and you have the session at a glance. You can go from reports to the transcript. What is very critical: you also have the speakers, their knowledge, and their input into the AI model. You have the speech lengths. You have a knowledge graph for the session. You have in-depth analysis. Today we had many agreements ‑‑ not disagreements, but different views: differences, partial agreements, and other elements. You also have it for the whole day: here is a knowledge graph of what was discussed during the whole day, and it's very busy. You can find your discussion and how it relates to the other discussions.

The key question is: okay, we create an AI chatbot, and we can ask it a question. (Silence).

What should the human right to be imperfect be? Yes, five minutes. We basically process the whole set of transcripts, and we get the answer. Common knowledge, that's ‑‑ but then, what is the key? The AI identifies on what basis the answer was given. It makes reference to what you said.

If you spoke at a session and you said this, the answer should refer to you, not to some abstract chatbot. (Silence).

I'm sure there will be some answer. What did I just explain? We transcribe the public sessions, and the transcripts are analyzed by the AI system. Here we have an answer. The answer is based on sources: you can see exactly who said it, at what time, and what the pointers were. If the system generates any answer, that answer must be attributed to somebody or something, for the sake of precision and for the sake of fairness. This is the major problem, and your question pointed in this direction. Can we do it? Yes, it's doable.

OpenAI and others can do it. Why don't they do it? That is another question, but they cannot give us the explanation that it is technically impossible. It is technically possible.

That is basically what I would say is critical for the future of any serious discussion about AI. Here, let me give you the summaries. You can go to our website; everything is public. (Inaudible).

You have the answer. The details are based on the knowledge which was delivered yesterday and today in this space. Here you'll find the answer, and then here are the sources: the specific session the AI decided to rely on, (inaudible). The AI decided to choose these paragraphs, and that's critical.

Then, when you click, you go to the web page and can read the transcript from that session. It is technically possible, financially affordable, and ethically desirable. That is my answer, with a concrete example of what we are doing today in this building.
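(To make the attribution idea concrete, here is a minimal sketch, in Python, of answering a question over session transcripts while keeping the sources attached. The toy data, the keyword-overlap scoring, and names such as SessionSegment and answer_with_sources are illustrative assumptions; the actual system described would presumably use embeddings or a language model instead of this scorer.)

```python
# Illustrative source-attributed retrieval over session transcripts:
# every passage carries its session, speaker, and timestamp, so any
# answer built from it can be traced back to who said what, and when.

from dataclasses import dataclass


@dataclass
class SessionSegment:
    session: str    # which session the passage comes from
    speaker: str    # who said it
    timestamp: str  # when, so the claim can be checked against the transcript
    text: str       # the transcribed passage itself


# Toy corpus standing in for the transcribed public sessions.
CORPUS = [
    SessionSegment("WS78", "Jovan Kurbalija", "10:05",
                   "We have the right to be imperfect, to disconnect, to stay anonymous."),
    SessionSegment("WS78", "Sorina Teleanu", "10:12",
                   "What does it mean to still be human in an increasingly AI era?"),
]


def score(question: str, segment: SessionSegment) -> int:
    """Crude keyword overlap; a real system would use embeddings or an LLM."""
    return len(set(question.lower().split()) & set(segment.text.lower().split()))


def answer_with_sources(question: str, top_k: int = 2) -> list[SessionSegment]:
    """Rank passages for a question, keeping the attribution with each one."""
    ranked = sorted(CORPUS, key=lambda seg: score(question, seg), reverse=True)
    return ranked[:top_k]


if __name__ == "__main__":
    for seg in answer_with_sources("What should the right to be imperfect be?"):
        print(f"[{seg.session} | {seg.speaker} @ {seg.timestamp}] {seg.text}")
```

The design choice being argued for is simply that attribution metadata travels with every retrieved passage, so the chatbot's answer can cite the speaker and session rather than an anonymous aggregate.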

I see the signal that we have five minutes left. This session started with questions, and there are many questions, which Sorina listed, that we have to discuss. We are creating a small group of philosophers, and we are proposing one of the (inaudible) ‑‑

To have a session with all the (inaudible) ‑‑ and ask him how he would consider writing Sophie's World today. We plan to engage in discussion on Arab philosophy, Asian philosophy ‑‑ Confucian, Buddhist ‑‑ Christian philosophy, and African philosophy. Those are traditions that have to feed into this serious discussion.

Ethics is an important part, but so are knowledge, education, what it means to be human, and what it means to interact with each other. Is that going to be redefined, or will it remain the same? Those are the critical issues, in addition to ethics and other things.

Now, without risking becoming persona non grata with our generous hosts, we will leave. If you are interested ‑‑ I don't know if it's possible ‑‑ you can click on that. I would like to invite you to continue this discussion and see how far we can go in getting answers to your excellent questions. Thank you very much.