IGF 2024 - Day 3 - Workshop Room 8 - WS31 Cybersecurity in AI- balancing innovation and risks

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> GLADYS O. YIADOM:  ‑‑ Cybersecurity in AI: Balancing innovation and risks.

Despite various regulatory initiatives ‑‑

(Audio is cutting in and out)

>> JOCHEN MICHELS:  Gladys, there's a problem with your microphone.

>> GLADYS O. YIADOM:  We're here with our distinguished speakers to determine which requirements should be considered and how a multi‑stakeholder approach should be adopted to produce new standards for AI system.

Standards mostly cover AI models development or overall management and risk associated with AI.

This has created a gap for an organisation that implements applied AI system based on an existing model.

My first question will be to you, Allison, but, first, let me share some of your bio. Dr. Allison is ASIS Commission on Standards and Guideline tool and practice. Previously an international commissioner on standards, she (?) Physical and cybersecurity. Allison is also a senior lecturer, assistant professor at DCU London and intervenes at (?) University and more.

My question to you, Allison, is this one.

The use of AI has increased significantly world‑wide in recent years.

Case studies have revealed that more than 50% of infrastructure with companies have limited AI and IoT in their infrastructure with a 33% planning to adopt this interconnected technologies within two years.

Does this widespread acceptance of AI mean that the issue of trust is no longer a concern for users and organisations?

>> ALLISON WYLDE:  Thank you. That's a fascinating question. We're back to trust.

Thank you for inviting us back to IGF 2024. I'm delighted to be here. I think the question of trust really follows on from earlier talks. In the plenary, the other day, there was (?) Who was talking about trust, and he said we need to trust in AI products and also to have transparency and trust. I think this really resonates with your question. So we have the issue of people saying we want trust, but the question for us is, well, what do we mean? How do we define trust? Trust is subjective. So maybe I trust you. I think I probably do. I don't really know you too well, but I trust you. I'm a human. Our human behaviour is to naturally trust. Children trust their parents without thinking about it. I think that's one of the issues.

In business, people see a new technology, and they want to be with the top technology, with the new technology, and, of course, they want to use it really without thinking, and I think that's part of the issue.

Of course, there's lots more I can say about this. Stop me when you've heard enough, but I think if we look at how are we understanding trust, how are we defining trust, what is our conceptual framework for trust, what is your trust in your culture? Are you a high‑trusting nation or not, depending on where you are in the world? So we need to really look at this as a subjective issue and start with that.

I can come back again, but maybe a few more things. So I think because trust is subjective, we can't use statistics. We can't use regression. We can't go with central tendency. This is not something we can run a regression model and look at trust measures and look across the world. We can't do that because it's subjective. So we need to have something more sophisticated if we're going to really try to get the conception right and then, ideally, get toward some sort of measurement.

If prominent members are calling for trust, when what do they mean? And how are we going to have a conceptual framework for that? How are we going to implement it if we don't know what we're talking about?

>> GLADYS O. YIADOM:  Thank you very much, Allison, for that. As you highlighted, trust is a key point here.

I will hand it over to Yuliya. Yuliya Shlychkova, Kaspersky. She leads the company relation with government agencies, international organisation, and other stakeholders. She oversees Kaspersky participation in public consultation at regional and national level on key topics such as artificial intelligence, everything related to AI ethics, and also governance.

My question to you, Yuliya, is: If there are still concerns regarding the trustworthiness of AI, what are the reasons for this mistrust?

Can you give us a brief overview of the cyberthreat in relation to AI?

>> YULIYA SHLYCHKOVA:  Sure. Our experts do research on threats, and we actually see that AI is still software, and software is not 100% safe and protected. Therefore, there are already registered case of AI being used by cybercriminals and, also, AI has been attacked.

So that's why with people with understanding of the matters do have concerns, and this is also on a cybersecurity angle because AI also brings a lot of psychological social concerns.

But back to cybersecurity area.

So we actually see that more and more cybercriminals are trying to automate their routine tasks using AI. So there are a lot of talks on the dark web, them sharing, this and that.

Also, on the dark web, they are trying to sell hacked ChatGPT account.

Some include data poison and open‑source datasets used to train models. We saw backdoors and vulnerabilities there.

We also saw attacks in the wild on prompt ‑‑ when the attack is on the algorithm and how the AI model worked and trying to impact the output of the model.

And what's happening ‑‑ because so many organisations like to play with AI, and as mentioned, those people who answer in organisations using AI, they don't even know the scale of shadow AI used in the organisations because a lot of employees are using ChatGPT to do their regular work quickly.

So there's an absence of knowledge, like how many of these services are used. And what is happening is that in place of sharing confidential information, financial information with AI models and those models can be impacted, and this information can get into wrong hands.

So, like, just to summarise, we almost see in the wild attacks everyone component on AI development chain. Therefore, cybersecurity should be addressed. We need to talk about this and help not to stop AI usage but to do it safely and have basis for this trust for AI use in the organisation.

 

>> GLADYS O. YIADOM:  Thank you. Thank you, Yuliya, for this comment.

Mentioning the use of AI and the idea that we need to be careful, in terms of models, it leads me to the question that I will now address to Sergio.

But before my question, Sergio Mayo has more than 30 years of programme and information system management in various fields such as finance, telecommunication, health, and more. He cooperates with IGF as a member of the policy network of artificial intelligence, as a member of (?) Since 2023. He focuses on the social impact of AI and data technologies and digital (?). He currently (?) The European Innovation App.

Thank you for being with us today online. My question to you: Given that the Internet contains a wealth of information, sometimes contradictory or even fake, can one rely on the dataset utilised to train AI models?

>> SERGIO MAYO MACIAS:  Good morning. Good morning. Thank you.

Thank you, Gladys, and thank you to the organisation for inviting me to this workshop.

Actually, I think that tracking the data used to train AI models is part trusting that the technology and part trusting in the human creating that technology.

That's a question that I will not go deeper in this but going deeper regarding the data issues for trusting or not trusting in data use for training AI. There are an amount of problems that are really big. I will mention some of them.

First of all, and the most important one that comes to our mind is data bias. That's when the training data used to develop AI models is not representative of the real‑world scenario. And if the data is skewed, in terms of gender, ethnicity, location, or any other attribute, the AI model will inherit these bias. This results in discrimination and so on.

But, also, even with half the data quality issues, like poor‑quality data, incomplete data information, and it also concerns the reliability of AI models.

But, at the end of the day, even if we have a good dataset, we have a human using this data. And a human is creating the algorithm and a model. So going beyond the good or bad data that we used for training this model, we have to put the focus on the algorithmic furnace. And the algorithmic furnace is an issue that's statically pointed at the human using the data. So the human using the data must be aware of the quality of this data. Must avoid the data bias, the data privacy concerns, for instance, and so on, the data manipulation, the sufficient data representation.

But, at the end of the day, they're able to produce a fair (?) With this data.

I think this is a fair point for this question.

>> GLADYS O. YIADOM:  Thank you. Thank you, Sergio, for your comments.

So now I will turn it over to Melodena. But before, Melodena is a professor of technology at the Russian School of Government in Dubai. She has senior leadership and international experience and consults with organisations such as (?) Nation, Council of Europe, and the Dubai (?) Foundation.

We were previously addressing regulatory issues. My question to you: To maintain the balance between the progress and the security, it is assumed that the emergence of new technology should be accompanied by development of corresponding regulatory base.

Can we say that the current governance of AI are existing standards such as (?) Sufficient for the security of AI, or do we need specific regulations?

>> MELODENA STEPHENS:  If you look at how many policies are there for cybersecurity, I think there are more than 100 countries that have policies. While some of them are on security and they're looking at algorithmic security, we see recently, over the last two years, maybe focusing on criminal infrastructure, and there's two things driving it. One is we're moving away from individual security or corporate security or industry security to national security. So this becomes an interesting trend, right?

And I think the main thing we have is fragmentation. AI is global. If you just look at the supply chain of AI, it is impossible to nationalise it. So how can you maintain even national security or individual security or corporate security when AI is global?

So that's the first thing, fragmented regulations.

Anna Bradford (phonetic) has written an interesting book called "atlas AI."

If you look at U.S. and allies, I think we're talking about 27 countries, if you look at NATO alliance.

And then she looks at the EU, which she says is driven by human rights and law and democracy. Again, 27 countries, if you look at it.

And then she talks about state‑driven national strategies, and you're looking at countries like China. If I just take the BRI project, you're talking about approximately 140 countries.

So then you've got a good idea of how this fragmentation and how alliances will be created across the world.

So it's very geopolitical.

If I look at the strategies that are currently ‑‑ or the frameworks that you mentioned, the ISO and the (?) So there are a couple of challenges with it.

The scope and context is decided by the organisation itself. So it's not taking the wider perspective.

So we need the whole of society, whole of government, and whole of industry perspectives which are missing. Right?

And I think also the focus on risks is a challenge itself because when you come to a place like cybersecurity, you're looking at a public value domain space, and it's really about decisions on trade‑offs. Do I put national security ahead of individual privacy? That's a trade‑off.

Do I invest in today's technology knowing that a data centre costs billions. Right? And I know that it will create an environmental footprint and a sustainability issue later. That's a trade‑off.

Do I connect everything through the Internet of things, which is great? Does that mean I'm creating vulnerabilities because of this because no one company has the technology stack from the bottom to the end? So that's a trade‑off.

I do not think when we talk of risks we talk enough about trade‑offs, and that's one of my concerns.

>> GLADYS O. YIADOM:  Absolutely right, Melodena. And I think we'll also dive into it a little bit later in this session. I also invite, afterwards, a participate to share any question they would have.

So moving to that, this workshop is also the opportunity to display some of the guidelines that have been produced with Kaspersky's team and also with the speakers who are here among us.

I will kindly ask the team to share the slides.

The floor is yours, Yuliya.

>> YULIYA SHLYCHKOVA:  Okay. So as Melodena said, a lot of focus is on critical use of AI and on developers of large language models and national competitiveness in AI. We see there's this gap because adoption of AI is happening on the mass scale, and it's skyrocketing. These users, these organisations who are fine‑tuning existing models also need some sort of guidelines. Maybe not regulation and compliance but some guidance. Do these 10 things, and you will get at least 80% more secure.

And this is what we have put our thoughts into and produced these guidelines.

Just a little bit to illustrate the scale of adoption. That's more than a million models available in the public repository. Like developers at GitHub already say, the majority of them, they are using AI at some point.

So in a few years, I think there will be no one ‑‑ (chuckling) ‑‑ not using this. This is covered in my short intervention, but, again, we see almost every point in the AI supply chain can be vulnerable to attack.

In the public, we see more than 500 records of vulnerabilities in AI and they account.

So we asked, in our survey, professionals working at organisations, do they estimate the rise or decrease of incidents within their organisation. And the majority, more than 70%, reported they see a rise in incidents.

But the interesting thing, 46% out of these believe these attacks were with AI use in one way or another.

Also, the same professionals also reported that they believe they are not equipped enough to address these challenges. They have a lack of training, lack of qualified staff, insufficient IT team size. So these problems already are here. They already exist. When we add AI usage, especially the shadow usage, it's like with the system where every person has rights. It breaks under pressure.

That is why we believe some basic requirements of organisations adopting AI.

Our guidelines cover four main pillars, (?) Infrastructure and data protection requirements. How can resilience achieve through validation and testing and adherence to government compliance.

So talking about AI Security Foundation, we believe that, first of all, leadership of organisation has to know about what AI services are used and whether there's threats and how they're mitigated.

The team has to be trained. IT team has to be trained and also regular users who can use AI in their work also needs to have this awareness about risks and what to do and what not to do.

And these courses have to be regularly updated. There needs to be field and exercises. It should be continued exercise.

Also, the response of the organisation has to be proportional to the use.

So each organisation is advised to have threat model.

(Audio is cutting in and out)

>> YULIYA SHLYCHKOVA:  ‑‑ and what threats of AI can use and what threats of misuse of AI can be, and how those threats can be addressed.

So to have this is very recommended.

Talking about infrastructure security, a lot of organisations are relying on cloud‑based services, hence, the traditional approach to cybersecurity is irrelevant here.

Access to AI services has to be very ‑‑ it has to be logged. It has to be limited. Only (?) Needs to have the success. There has to be a two‑factor authentication. There has to be data models in one place waits in other places. So it's all mentioned in our guidelines, and I will provide you with a link.

The other just mentions highlights of this.

Then, talking about supply chain, in a lot of regions, some popular models are not available. That's why a lot of organisations turn to proxies. Some of them can be reliable and some not. That is why it's very important to check from which source information it comes and to have this order of supply chain.

Because of that's, a lot of organisations choose to have localised data models within their organisation. If you choose this approach, there's also importance to follow requirements such as login access and backing up your assets.

Then, if your use is very wide within the organisation, you need to be prepared against machine‑learning attacks. There's already best practices on how to do this. You see offensive words like distillation techniques and (?) For policy people like this, it's rocket science but IT people will know what this means, and we provide more details in our guidelines.

Then, also, Sergio mentioned if you're using a model from a third party, this model was trained on specific examples, specific datasets. So before releasing it to the public, you need to train this on the real‑life scenarios on your industry benchmarks and in real life.

So test and validation is really important, and you need to be ready to pick up to the previous version if testing goes wrong.

And, also, general cybersecurity requirements, please ensure to have regular security updates and monitor in public sources information about vulnerabilities, have internal audits regularly to test.

And, of course, vulnerability and bias report. You, as an organisation, need to have information available to the public so users and your clients using your AI services have an opportunity to contact you if they notice vulnerability or bias, and you have the opportunity to fine‑tune this.

And we also, as an organisation, advocate for programmes to include AI in your (?) Programmes to have more and more community ‑‑

(Audio is cutting in and out)

>> YULIYA SHLYCHKOVA:  Check, check. I'm speaking too long. (Laughter). So vulnerability report is important, and, of course, since regulatory space is very, very active, it's important to keep an eye and ensure that what you are using is adhering to the standards and regulation.

And I think the last slide is the most important. So the full text can be accessible upon this link. It's over 10 pages. We really did our best. A big thank you to Allison and Melodena for contributing.

The idea of these basic standards actually come from cybersecurity. A lot of nations like the UK, Germany, Ministries of Communication and Technologies are trying to raise awareness of these basic cybersecurity standards and publish this information on their website. We believe it would be a good idea if nations worldwide can also maybe take a look at what we have produced, develop, fine‑tune it, and to promote it on national and international levels so that as much usage of AI can happen in more secure way.

Thank you for the opportunity.

>> GLADYS O. YIADOM:  Thank you very much, Yuliya, for sharing the guidelines. Again, do not hesitate to reach out at the Kaspersky booth if you don't get the chance to download it here.

So now, moving to another set of questions.

Yuliya, you were mentioning somehow AI trainings and literacy.

My question will be to you, Melodena, now. In such cases, how best to address the issue of increased AI literacy among AI population.

>> MELODENA STEPHENS:  Right now, most of what passes for digital literacy is digital skills training, and I don't think it's the same thing. So we need to be very mindful of that. AI is a much more complicated topic. I think the challenge we're facing is we need societal education. We need education of industry. We need education of policymakers. I work with IEEE. Even engineers struggle when you look what the is being deployed and what implications it has.

If I look at NIST, there's 108 subcategories. If I look at ISO, for example, we're talking about 93 controls. And what people are doing is making 93 policies. I don't know about you, I don't know who reads 93 policies, but the problem is implementing it.

The way we're delivering knowledge, the current method is not work. The policies put over there, we don't know how to translate it. We don't know what it means for me. So we need to be able to translate this for different people based on their level of expertise.

And I will just give you one example.

I heard the word you mentioned, transparency. How can we get algorithmic transparency? If I look at what Google has just released in the last week, which is Willow, it does calculation in five minutes, which according to them, a super computer will take 10ceptillion years to use. It's impossible to do it at the speed technology is doing it.

If you're talking about 175 billion parameters we're talking about 10 million queries per day.

How many people do you have to employ to audit 10 million queries per day?

So what we're doing right now is taking a rough sample, and we're auditing it, and then we're reporting error rates, and we're only reporting sometimes one type, not false negatives or false positives. Both are important.

So there's a lot of things that are missing currently right now in the way we're evaluating AI.

I also want to highlight something like this. They talk about let's have a human in the loop. If anyone has read the foreign policy article on Project Lavender, which was a facial recognition drone technology. They did have humans in the loop to decide who or to target. The amount of time they spend, 20 second for review. I don't know about you, but my brain does not think in 20 seconds. We're not computers.

So the first thing is I'm not a machine. I'm a human being. My skills are different from a machine. We need to understand both of that.

And I think AI literacy is kind of understanding what a machine can do and what a machine cannot do.

I will take the last example, which is in 2021, Facebook had an outage. It was a control gateway ‑‑ border gateway protocol issue.

Now, what was interesting is they're very high‑tech. So their systems are all on facial recognition and authentication. So they should have been able to enter and fix the issue. Unfortunately, what happened is they got locked out of their own offices.

So you have backups, and we're depending on technology for backups, but, in the end, it's about human beings. You have to have a backup, which is a human being. My worry right now is the knowledge the human beings are having is becoming obsolete because we're not valuing it enough.

>> GLADYS O. YIADOM:  Thank you for that comment, Melodena.

My next question to you, Allison, how can a (?) Approach be used in the development of AI.

>> ALLISON WYLDE:  I'm sure you're familiar. As mentioned, we're humans. So we're preimposed to support without trust.

My Russian is really bad. (Speaking Russian). Trust but verify.

Zero trust, non‑presumptive trust.

So we have to verify validity. Whether it's a person or an application, we have to verify that before we can grant trust. So we have continuous monitoring.

So in a process like artificial intelligence where we're looking across a very complex dynamic ecosystem, we've got all of the moving parts all moving at the same moment, the humans taking decisions, the prompts going in, the black box doing its thing with the model. We're not sure where it's coming from.

The data that we're using to train the input. The outputs coming out.

So we're saying operate zero trust throughout this ecosystem to give us a chance to verify it before things come out the other side and before they're implemented. As colleagues have said, companies are just doing this without thinking, just like a new technology, just like driving a car before people had a driving license. Jump in the car and drive. You know, people don't know what they're doing. Same in industry at the moment. Industry is adopting this at pace and at scale without ‑‑ I think the word is guardrails and zero trust can be one of the guardrails.

I'm happy to come back in depth for questions.

We have everything happening at the same time at scale with no common frameworks from whether it's our friends in ISO or NIST or wherever in the corporate world, using technology, developing standards with no interoperability across those different domains, it's a very complicated, systems‑base ecosystem.

>> GLADYS O. YIADOM:  So basically what you're saying is how to use it responsibly.

So it will lead me to my next question, Sergio. Given your experience as a coordinator of regional European digitalisation ‑‑ can you tell us more about the blueprint of (?) Development of AI in Europe?

>> SERGIO MAYO MACIAS:  Thank you, Gladys.

Actually, the AI environment in Europe is known and has been labeled as an (?) For environment. This is because the AI Act and the Data Act, among others. They're the known frameworks.

This is only partly through.

The (?) Work has been going on for a long time. I always put the same example. We (?) Have a Boeing company in Europe, this U.S.‑based big company. We have Airbus, it's a consortium of really small companies.

So the way we're working in Europe is this way, the corporation, the consortium, and so on.

For instance, from 2009, there's a group on artificial intelligence established by the European Commission, and in 2019, they also provided the ethics and guidance for trustworthy AI. These guidelines emphasized a need for the (?) To be robust and ethical. They're producing year after year new drafts regarding this regulation, but we have the AI office supporting the development. This is only from the top, but we are also working from the bottom, from small companies. In 2024, the Commission launched an AI innovation package called the Gen AI initiative packet. This is an easy‑reading packet to develop trustworthy AI that complies with AI values. These provide security. For citizens to not be aware of the law and so on.

The Data Support Centre was launched for contributing to the data space. Data spaces are a safe space for data sovereignty, interoperable, and trustworthy data‑sharing environment. They are directly related to the AI deployment. They point to the core issue, declaration of trust. If you can create a place where data is reliable and secure, you've created trust. And you can go a step farther and use this data for training AI models.

Also, the network of European (?) And hubs, which I am coordinator of the one in the (?) Region in Spain. We are close to the citizen industry. We are producing guidelines, blueprints, and a lot of help for this key issue, to create security and trust by default and let people using AI not being aware of big documents or big frameworks or the AI Act or the Data Act.

>> GLADYS O. YIADOM:  Thank you, Sergio.

Coming back to what you said, Allison, is there a need to harmonise AI regulations from different jurisdictions? If so, is it possible to (?) Vulnerability?

>> ALLISON WYLDE:  Two parts. Is there a requirement and is there a need? Sorry.

>> GLADYS O. YIADOM:  Let me repeat the question. So is there a need to harmonise AI regulations from different jurisdictions? And, if so, is it possible to ensure search interoperability?

>> ALLISON WYLDE:  So I speak from a personal perspective here. Realistically, I don't know if harmonization is possible because we're looking across the world, across multistakeholder groups, private sectors, states, actors, individuals, and it's really difficult because there's different cultures in play. I think it's right that individuals should have their culture and their way of being. I think that's really hard.

I think cybersecurity and risk management standards, we do see some global take‑up of the big standards there. So maybe we can look to what's happened with those ISO 27,000 family or even the (?) And look at the standards and look at what's happened there with the future.

There's differences across the globe and private sector and different sectors.

This is my personal view of harmonisation, I don't know if it's possible. Is it desirable? In an ideal world, we would have interoperability across tools and standards and frameworks, across all of those different factors. That would be the ideal. Whether it's possible, I don't know, but I certainly think guidelines are a really helpful steppingstone forward. So if everyone has the same framework to work from and a common understanding, I think that's a really big step in trying to achieve a future where we all understand where we're going.

I hope that answers your question. Thank you.

>> GLADYS O. YIADOM:  Yes, absolutely, Allison. Thank you very much.

My next question will be to Yuliya and Sergio. You mentioned how important it is to address it from a cybersecurity perspective. Why is it crucial for AI systems? What would the state‑of‑the‑art security system look like?

>> YULIYA SHLYCHKOVA:  So with AI, it's a new thing. So every technology is developed, and then people have this afterthought, oh, I had to put more thought about security there. So with AI, we have this opportunity to think about security by design. The same as regulation. Regulation is always catching up. There's a chance to not be late, but that's why it's important to keep on par and think about the cybersecurity and not only about how to technological to protect this but also to spread awareness about issues so that regular users are not feeding AI with their personal data without necessity in place. Don't share confidential information and et cetera, et cetera.

>> GLADYS O. YIADOM:  Sergio?

>> SERGIO MAYO MACIAS:  Yes. I agree with you. For AI, we cannot push people to install the antivirus for AI. No. That's not realistic. We have need to (?) Cybersecurity by default. We cannot send the elephant in the room to the final users. We have to define safe spaces for using the AI systems, and we cannot expect final users to do it.

For instance, I was mentioning before the data versus virtue, that goal. To create this framework, a space with legal governance and also technical issues are developed and deployed by default, just to be used.

So we have the acts, the AI Act, the Data Act in the background, but we have to define these spaces for letting use AI without concerning any other issue.

>> GLADYS O. YIADOM:  Thank you, Sergio.

Perhaps turning to the audience to check if there are any questions.

We do have one question here.

Sir, I will kindly ask you to come to the middle to ask your question. Please share your name and organisation and who you address the question to.

>> My question is for Yuliya. As mentioned, there's a difference in the (?) Security and the AI security. For example, in convention security, if you send certain requests, you get the same responses. If AI, it's very different. So how do you see the security if every time the response generated is different? Even if you train your model, you cannot expect it will provide the same answer next time. So we are a security firm. We work heavily in the AI security right now. So we have faced these problems. Like, I mean the security options. Even if you try sometimes, the same errors and vulnerabilities arise again. You cannot handle it properly. Number one, how do you see that?

Second, as far as the programme you mentioned, companies are not taking it seriously if you report biasness as a vulnerability or as an issue. They're not accepting. Even if you see the bug, they have clear mention that they're not accepting bias, racial (?) I would love to see the response on that.

>> YULIYA SHLYCHKOVA:  I like your comments. So I think they are more like comments than questions. Thank you for sharing your experiences.

It took two years for big companies to start doing vulnerability reports. I think we just need to push for it and do this awareness. I'm sorry. We're human beings. It takes a while for us to accept the problem and start moving to the solution.

As for the issue with AI security being different, we also see this. We are using machine learning in our solutions. Again, you need to ensure that you have representative dataset to train your model. Then you're dealing with these false positives, false negatives, like trying to find the bar where the performance is acceptable, but, still, we have this human control on the top because 100% confidence is not there. That's why we have human experts who are analysing the output and can interfere.

So what we call it, it's multilayered protection. So we're trying to use different models. They check on each other. At the end of the pyramid, there's the human factor.

>> GLADYS O. YIADOM:  Thank you, Yuliya, for your response.

I will just take one online question, and then I will hand it over to you.

I believe, Jochen, we have one question online.

>> JOCHEN MICHELS:  Actually, there are three questions. First off, it was valued very much and also there was positive feedback to Allison's remarks with regard to trust versus verify and having the transparency aspect with regard to cybersecurity on artificial intelligence.

One question to Yuliya. Please excuse if I misspell your name.

Lufuno Tshikalange from (?) Consulting, he is interested in getting information about the role for open source in artificial intelligence and, in particular, he raised the question whether it is enhancing security or increasing vulnerabilities.

>> YULIYA SHLYCHKOVA:  It's a very good question. On one side, we advocate for open source. It's great that community has been built around AI models being shared, datasets being shared, because it's limited innovation if it's only (?) Model. Especially for regions like Africa and others, I think it gives opportunity to leverage innovation, like this openness and the availability of open‑source information.

On the other side, those who are deploying the models need to own security for the things they are using and to check and audit. Do not admit if someone developed this for you, that it's 100% ideal. This would be my answer.

Please, panellists?

>> GLADYS O. YIADOM:  Do we have any other comments from our panellists on this topic?

>> ALLISON WYLDE:  I think it's perfectly valid. If you're using AI for cybersecurity. That goes to the question here. I think it's good to have transparency, to know what you're using as a training data and yes, there's the innovation. I'm sure, in the future, there would be a way beyond this. So having a closed system that's off the cloud, that's proprietary, and is able to learn and has the security badge.

>> YULIYA SHLYCHKOVA: I want to have something in the middle. We, as a company, have transparency centers where in a secured environment, we are sharing the models we're using, our data processing prince principles. These can be shared.

Good point.

 

>> GLADYS O. YIADOM:  Before taking another question online, Jochen, we have a question here.

>> Thank you for the panel. It's very interesting.

I have a question maybe for Yuliya. When we speak about AI and security, okay, we have AI that can be used for enhancing security. We have the normal security issue about platform infrastructure data, data centre, blah, blah, blah. And then we have the data security. I mean, when we speak, if there is any other dimension that we miss, in addition to these ones ‑‑ I have the feeling it's small data security and (?) Security in the cloud. If there's anything related to, let's say, machine‑learning process or algorithmic process that we have to consider, according to your knowledge on this regard ‑‑ I'm not sure it's clear, but I have the feeling that we mix AI security with data security and infrastructure security. Is there any other direction?

>> YULIYA SHLYCHKOVA:  I believe that you are right. Model security is also should be considered in the holistic picture because this is a black box. We can be in the classic programme to be sure that the (?) Will perform as intended. Therefore, it's very important to test model, and we already saw adversarial attacks trying to impact the way a model functions. Maybe it's with the noise and invisible for (?) And let model misperform. So model securities also integration, the algorithm, definitely.

>> MELODENA STEPHENS:  I was just going to add, today, if you look at the traffic on the Internet, 70 to 80% is API calls, which means it's basically code talking to code, and each one of that is a vulnerability. So it's not just data and critical infrastructure. I think it is also because we're looking at telling uses which are made with different languages, and we're trying to map them together with interoperability, and it is not working.

So one update is happening. We're not updating in real time. I saw a piece of research that says it takes about 200 days on an average to find a security vulnerability. That's 200 days for a hacker to access your data. So just think of all of us. We're here at a conference. How many of you have ensured that your data on your device is updated? And that's the challenge, right? Yeah.

>> ALLISON WYLDE:  I will jump in really quickly. I think some developers are like chefs. They have their cuisine, and they use their process for the model. And your mother's process is probably different from mine. I think there's probably a lack of ‑‑ what's the world.

>> ‑‑ replicatability and passing the steps to the next person. Once the model start going, then we don't know what's happening, and there's no record.

Thanks.

>> GLADYS O. YIADOM:  Sergio, do you have any comments?

>> SERGIO MAYO MACIAS:  Yes, indeed. I'm happy to hear this question. I totally agree with Melodena and Allison's comments. Let's say that we have an ideal world with no data problems and we have fair data, secure data, so on and so on. And data is not a problem anymore. This is an ideal world. This is not possible at all, but think about that.

Afterwards, as you said, there is a programmer who has the black box. We have the algorithm. We have the human being there. You see fair data, good data, data with no problems, without bias and so on.

And what do we do with a black box? It is the thing that happened, if you remember, with COVID crisis, with the vaccine. There's data, the components, but, afterward, we have the people working with those components. Let's say the programmers here with the black box, do we trust them?

I already said at the end of the day, trust is not about data. Trust is about human beings. So we have gone beyond trusting data. We have to go beyond trusting the black box. We have to think if we are ready to trust in human beings developing the AI models.

>> GLADYS O. YIADOM:  Thank you, Sergio. Almost a fickle question, at the end of the day.

>> SERGIO MAYO MACIAS:  It is, indeed. Yes.

>> GLADYS O. YIADOM:  Thank you.

Jochen, do we have another question online, please?

>> JOCHEN MICHELS:  Yes, we have.

Some of them were properly answered by Sergio, for example.

One question is by Max (?). He would like to know what the relationship between regional legislation and limitations with regard to artificial intelligence and also on the level of different states and whether that is a hurdle to try to find harmonised roots and globalization in that regard.

That's a question to, perhaps, Melodena and Sergio, and there is one further question by Mohammed. That was also particularly answered by Sergio. It's about classification of AI technology. Sergio already referred to the EU AI Act and the risk‑based approach.

Perhaps Melodena can share examples from other regions, whether there is the same approach or other approaches regarding high‑risk AI and so on and so forth.

Thank you. Those are the questions from the online attendees.

>> GLADYS O. YIADOM:  Thank you, Jochen.

Perhaps, Melodena, first question.

>> MELODENA STEPHENS:  Okay. So the first question was on the AI regulations and ‑‑ okay, the EU is the only one I would look at currently that has harmonised across its 27 countries, but it's in implementation. Right? So it will take some time. Right now, what we don't have is time.

The rest of the world, what I'm seeing is a strong trend toward bilateral agreements. Part of it is on defence and part of it is on data sharing and part on knowledge and talent.

So we're seeing a slightly much more polarised world where it's focusing on ties.

This becomes very interesting. If you want to take a step further, is it about governance or tech firms? I think that's a far more interesting discussion for me. If I look at the 500 cables that are undersea that are transmitting about 99% of the data, most of that has private ownership.

If I see data centers, most of them, again, are private. So I think there's a whole other discussion which we are not taking into place in policy regulations, which is the role of private sector, many of which these companies are having revenues and market capitalisations much larger than countries. So you can see a power asymmetry coming there.

I know this is an interesting question, I'm going to move away from risk. There's been a lot of debate about whether we recollected look at it as AI technologies or AI for industry regulations. And this is a hard one because what we're seeing right now ‑‑ if I ask you a question: Is Tesla a car with software, or is it software disguised as a car? What do you think it is? Therefore, how should it be regulated?

And the very fact that we don't have an answer tells ‑‑

>> AUDIENCE MEMBER:  (Off microphone).

>> MELODENA STEPHENS:  Sorry. He says software.

>> Hello. My name is (?). I'm president of (?). Regarding your question, I think it's software developed by a person, developed by an engineer. So, therefore, the regulation, he has liability regarding the software that he developed. This is my answer. It's not about the car, is it a car. It's autonom. It works by itself. The reason he should have responsibility is because he developed the software.

>> MELODENA STEPHENS:  But I just want to add one point. You're right, but when it is registered, how is it registered? It will be registered as a car.

>> It will be registered as a car, but the responsibility is who is riding in the car.

>> MELODENA STEPHENS:  That's why there's challenges. Think about your health app, your watch. Is it a health app?

It's an interesting discussion. AI will move across industries, and we don't have oversight. The purpose for which it was developed, for one purpose, allows it to scale into a different industry for another purpose, and we don't have transparency on weights. Why were those developed? They were used for health, but now it's used in other things.

Thank you for the comment.

>> GLADYS O. YIADOM:  Do you have any comments regarding the first question that was asked, Sergio? Then I will hand it over to Allison.

>> SERGIO MAYO MACIAS:  It's just more or less repeating the same. Also, I agree with Melodena, that writing data and being able to establish contracts for ensuring trust is the key now.

Now with (?) In the European Union, we are trying to escape that problem for SMEs and for citizens and to establish this safe space with no need of contrast, with no need of agreements for sharing data, and, actually, I am aware that this model is being, let's say, also used in some countries in Latin America. They are consulting us on how they work, and they're trying to do more or less the same in South America for sharing data without the need of establishing this type of one contract or one agreement for each time that we share data.

>> GLADYS O. YIADOM:  Thank you, Sergio.

Allison?

>> ALLISON WYLDE:  At the University a couple of weeks ago, with students coming from from all sectors, critical infrastructure, nuclear, everything you can imagine, and everyone wants to use AI for cybersecurity because, of course, we're just human. But once we have the developer bearing liability, but once the model starts modeling, then it's gone from the developer. It's gone from their hands. It's not in their development anymore.

The Cognitive Security Institute, a really interesting discussion there. We're human. 80% of people we can train, but the other 20%, you know, it doesn't matter how smart they are or whether they're on the board, but these are the people that will always click on the link. We know that because that's human psychology. So do we implement security. Okay. We're just going to implement this security to stop that from happening. So let's secure the system and take out the 20%. Let's secure the system so that can't happen. And that's one of the trade‑offs that Melodena was speaking about earlier.

So maybe the company says, yeah, we'll have zero trust and best practice, but, in the end, let's put some baseline security in just to deal with the baseline security. Maybe that's how we deal with risk.

Innovation, it's an amazing space, and we can see this out there, but it's (?) Hopefully, we can reap ‑‑ get to the benefits.

>> GLADYS O. YIADOM:  Thank you, Allison. We'll take another question from the audience. There's one lady.

>> Thank you. Good morning. My name is crystal (?). I work for (?) Which is the developing agency of the African Union. We discussed this earlier. You said harmonisation happens ideally. Then my question goes ‑‑ so last July, the African Union developed a continental AI strategy. There is quite a lot that needs to be done on the continent. The countries have different levels of policies and regulation defined. So it varies a continental strategy that's been developed. It should be implemented nationally. Should we then not talk about harmonisation because we talk about a system that is global and is difficult to ‑‑ do you understand what I mean? That's one. What will be your recommendation about implementing the strategy that has been defined, going about it nationally and engaging with the countries for the development agency that we represent?

Thank you.

>> MELODENA STEPHENS:  I will start. I was pleased. 55 countries, I think we underestimate Africa as a continent. There is a chance now to be actually in the forefront. Now, a couple of things important to realise, between the U.S. private sector model, which is on capitalisation and the European model, there are two different things that Africa will have to decide. Are we in it just for the profits for the economy? Or is it also about lifestyle?

If you look at EU, I believe one of the discussions that was happening on Germany is why don't you list on the stock market? Why don't wow want to be a trillion‑dollar company?

One of the investors said, well, I'm happy with the money I've earned. I can take care of the families. Why do I need to grow?

That's one thing Africa would have to figure out. You have a lot of society values. Family is important. Society is important. What do you want to focus on.

The second thing that I think is important is just to understand what are the assets within Africa?

We know cobalt, DRC is a major provider. I think there's a winning situation for all 55 countries that's there. This is really important in the future because we see across the world a lot of countries have assets, but they are sold as commodity products, not value added.

Again, I like the EU model because you look at the trade in the EU model at 60 to 70%, which is huge. I think there's enough for everyone in Africa to benefit.

All of us with USBC, thank you, European Union, for that.

I think interoperability will be key on how you would want to make it work and everyone deciding who would be your key markets because who you would sell to would also decide whether you want to align your standards with them.

And I think that's things you would have to decide at the strategic level.

>> ALLISON WYLDE:  Thank you, Melodena. I think have to come back from an educational piece. An ideal world would be engaging the youth. There's youth ambassadors from different countries.

There's vitality in young people. I think they keep going younger and younger into schools. Doing an education piece that makes sense so your parents' business, what happens to your parents' business? The risks are involved so people can embrace the risk, and young people, in particularly, can get involved and embrace the things that will help families and businesses locally. So maybe from the education piece.

Maybe Yuliya has something to say on the education piece.

>> YULIYA SHLYCHKOVA:  I was just listening to you. (Laughter). Education is, indeed, important. I think education helps organisation. When people are connected with their minds, like, it automatically motivates more organisation. I believe education efforts should also be shared responsibilities from government, private sector, university, parents. It's also like a common goal. Is the private company ready to contribute?

>> GLADYS O. YIADOM:  Thank you, Yuliya, Allison, and Melodena, for your comments.

Jochen, perhaps do we have another question online?

>> JOCHEN MICHELS:  Currently, we do not have questions online, but there's a little bit discussion between the attendees but no direct question.

>> GLADYS O. YIADOM:  Thank you, Jochen.

We have a question in the audience.

Can I ask you, sir, to come by and ask your question.

Please share your name, organisation, and who you're addressing your question to, please.

>> Hi. I'm Otis. I'm from Uganda. We're based in (?).

Yuliya, what you mentioned around data poisoning and datasets. My question is around have you seen some of these instances where there's data poisoning in open‑source datasets and the tools that can be used in security audits of those datasets?

>> YULIYA SHLYCHKOVA:  We did see data poisoning, unfortunately. Because I'm not a technical expert, I think I would not be able to move further, but even at the (?) Phase, there were some back doors. So I'm ready to exchange business cards with you and connect you with our experts who can provide for information.

In terms of AI audit, this is a raising trend in Europe. There's companies providing audits in AI, adding audits in their portfolio. I was able to chat with some of them, and what they are saying is they develop a methodology. It's their pilots that go on and test this methodology. So I believe we'll see more and more of this.

>> GLADYS O. YIADOM:  Thank you, Yuliya.

We have another question from the audience.

>> Thank you very much. My name is Frances from the communication authority of Kenya, which is a regulator for the ICT sector. My question is about the ethical AI considerations. When you talk about the innovation in AI, you cannot miss to talk about the ethical issues, especially with the psychological effects with developing the models.

We've seen big tech companies using proxies to (?) The cheaper labour within developing countries.

What do you think are the considerations in terms of AI practices, to promote AI practices with respect to the ethical use of AI?

>> MELODENA STEPHENS:  So this is a tough one, right, because when I look at ethics, I think ethics are great. (Chuckling). The line between good and bad is a difficult one. On the one hand, I go, I want to increase the level of income, so I come and I choose cheap labour, but I'm also willing to close when I find another cheaper labour source. This is the challenge we have to face.

I want to introduce AI, but I don't have any implications on the consequences to environment, as an example. Water consumption, electricity, e‑waste recycling. E‑waste is far more toxic than carbon monoxide. There are many standards. I think UNESCO put up one. They agreed on certain standards. The problem is operationalising it. So there are guidelines. I think it's for us to figure out what does that mean for our country and our people, and I always like it to be people‑centric. So if I'm saying transparency, why do I want transparency for my people?

And it could be because I want a cultural ‑‑ I want it to be culturally sensitive. If I think, in my culture, a child is someone to the year of 16 or 18, not necessarily 12, then I want it to also be aligned for my culture. Family is important. Maybe in my culture, it's collective family. It's aunts, uncles, extended family. I think translation is the difficulty, which we don't have alignment worldwide. So we have all of these things. We don't know how to operationalise it, and we don't know how to go and implement it.

So, right now, at this point, because AI is being perceived at the "in" thing and because of national security issues, there's a huge investment in AI.

I wanted to mention this. The current tech debt is around 40 to 50%. That means if you put in 1 million into a project, you need to keep half a million for upgrading the system, retraining the system for cybersecurity. We are not considering that. That is leading to a lot of failure. Currently, the AI failure rates is around 50 to 80%. So I just want to share this dataset with you. 1.5 million apps on Google and Apple have not been updated for two years. 1.5 million apps. That's a data 1 reliability point. That's a cybersecurity issue.

In 2022, Apple removed something like half a million apps.

So we're seeing we're starting businesses using AI, and the first question is why. What is the benefit for the human being?

We have not considered we cannot sustain the business? So, yes, ethics ‑‑ IEEE has policy on this, but they're all guidelines. We're not able to implement it because there are cultural differences and interpretation. I'm happy to talk to you about it.

>> GLADYS O. YIADOM:  Thank you.

Perhaps Sergio, Yuliya, any comments?

Sergio, go first and then Allison.

>> SERGIO MAYO MACIAS:  I think ethics is a brave field. It's difficult to mandate ethics. Let's say, for instance, you're hiring people, you're a recruiter, you're using AI to help in recruitment. Is it fair, for instance, if you want a German‑native speaker to develop a system promoting what is received from Germany? Are you avoiding to use from other countries? Are you going to read everything on the CV for filtering before calling people to interview?

I was mentioning before the algorithmic furnace. This is something that we have to have in mind, of course, furnace, but furnace is different than ethics.

So we should think before developing another system, if we want to use it for personal use or for including other people.

>> GLADYS O. YIADOM:  Thank you, Sergio.

Allison, please.

>> ALLISON WYLDE:  Yes. This is outside my domain, but I think we discussed earlier that ethics is something that's a cultural norm. Ethics for you are slightly different than ethics for different people around the world. You've probably already done all of this and thought about this, but what about something from bottom‑up? What does ethics mean to you? Where does the norm come from? What are ethics?

These are schools getting involved in consultations and helping you develop ‑‑ I'm sure you've done all this. And then leveraging, as Melodena was saying earlier, your unique resources with those tech companies because the tech companies ‑‑ and we know who they are. Well, I don't see any of the exhibition stands, actually. But it's interesting because they have so much weight in the world. If you look at your assets and say, well, these are our unique assets and maybe leverage that in this really imbalanced world, with those tech companies. Maybe ‑‑ I don't know ‑‑ I hope that helps.

>> GLADYS O. YIADOM:  Thank you.

Yuliya, (?) AI principles?

>> YULIYA SHLYCHKOVA:  We believe ethics and transparency is important in addition to data regulation, self‑imposed standards also is vital in the whole ecosystem, and we, as a company, developed our own principles, ethical principles. We declare what to adhere to. I think this is good practice, and more and more companies are joining and different pledges showing their principles. This has already happened. This is good.

I also wanted to commend that we, internally, had this discussion about whether the usage of AI can influence the workforce because right now, in Kaspersky, we have 5,000 engineers. Our top top‑notch researchers, they're part of the community, 100, 300 in the world. It's a unique talent, but they all started being regular virus analysts, investigating simple viruses before they grew to that level.

We were thinking about introducing AI to do more simple tasks. We ended up with positive thinking because everybody was more AI being used to automate skills like the professional shift from doing things manually from maybe being more operator of AI models.

So skills will be a little bit different, but, still, the journey will be there, and humans will be required.

So at least internally, we hope that it will affect human employment, but we'll introduce more opportunities in the different job profiles.

>> GLADYS O. YIADOM:  Absolutely. I think this is one of the questions we hear in the international forum, is the future of work in the context of AI.

Thank you very much for sharing that, Yuliya.

We can take one or two other questions. Are there any questions from the onsite audience?

Jochen, do we have one or two last questions ‑‑ oh, see, we have one. Sorry.

>> Hello. Can you hear me?

>> GLADYS O. YIADOM:  Yes, we can.

>> My name is Paula. I'm from the African Union.

I think you showed some cyber incidents that happened based off AI. Do we have any case studies on cybersecurity incidents based off AI that have destabilized a nation? For instance, any sort of use of mass weapons to attack a particular nation?

(Captioner has no audio)

>> JOCHEN MICHELS:  We cannot hear. Sorry to interrupt. There's an issue for the microphone for online attendees. We cannot hear you anymore.

(Captioner has lost room audio)

>> JOCHEN MICHELS:  Gladys, can you hear me?

(Team is working on technical issues for audio online)

>> YULIYA SHLYCHKOVA:  ‑‑ very persuasive. So this is the majority of AI in routine cases to send more persuasive social messages. But we also started to see more advanced use used by advanced actors, but it can happen in a very persistent manner. For example, the recent collection of samples for all cybersecurity companies to refer to. We see for some time malicious actor was sending samples with specific logic so that all cybersecurity engines later train on these samples. We recognise or not recognise this thing. I'm trying to explain this in simple words, but definitely more advanced attackers trying to use these and to affect machine‑learning algorithms which are working in cybersecurity software so that later when they release their highly capable campaigns, the defence technologist would not see or would not act. Unfortunately, we'll see this more, but we're used to the race in cybersecurity. They're coming with new attacks, and we're coming with defence.

We're using very highly efficient AI which can detect anomalies. So we are good. We are on par. So there is hope.

>> GLADYS O. YIADOM:  Thank you very much, Yuliya. I think this leads us to the end of the session.

I would like to first thank our speakers for joining us today, the online moderators, and participants online and onsite.

We're available. Please do not hesitate to reach out to us. The guidelines will be available online. So please, also, do not hesitate the check them.

Thank you very much.