The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> HARISA SHAHID: We are going to start. I'm an information security officer from Pakistan. I'm joined by my co‑organizer, Muhammad Umair Ali.
Without further ado, I would like to introduce the esteemed panelists for the session today. We have Mr. Jacco-Pepijn Baljet, who has extensive experience addressing these issues. Next we have Hafiz Muhammad Farooq, who serves as a member of the Advisory Committee and other working groups. He has also served on the advisory board of several Fortune 500 companies. He's an esteemed cybersecurity professional joining us here on site.
Now without further delay, I would like to give the stage to Mr. Umair Ali.
>> MUHAMMAD UMAIR ALI: Thank you.
We have Daniel Lohrmann. He's an accomplished author and cybersecurity professional with over 20 years of work experience, from the National Security Agency to government, and he has worked with the Department of Homeland Security as well as other organisations. He's joining us today from the U.S.; I guess it is quite an early time there. Thank you, Mr. Daniel, for joining us.
Following that, we have Jenna Fung. She's with the NetMission Academy, leads youth engagement at the IGF, and is a member of the youth coalition. Welcome, Jenna.
Up next we have our final panelist, Mr. Gyan Prakash Tripathi. He has worked with several think tank organisations and represents the Civil Society group today. He's based in Vienna, Austria. Thank you for joining. Over to you.
>> GYAN PRAKASH TRIPATHI: Attacks on critical infrastructure can have serious consequences for public health, safety, and the national and global economy. Such infrastructure includes, but is not limited to, defence services and manufacturing, among others. Today, we aim to discuss navigating the security of such critical infrastructure in the rapidly developing age of AI through multistakeholder participation, international cooperation, and building resilience into the infrastructure of developing countries.
So, this brings me to my first question, which I would like to ask Mr. Farooq. What are the unique challenges faced by developing countries, particularly in the Middle East and South Asia, in securing critical infrastructure? And how can AI be used to address them?
>> HAFIZ MUHAMMAD FAROOQ: First of all, thank you.
It is a great question. In developing countries, especially the MENA region, we face challenges in the area of critical infrastructure. I would say there are three major areas where we see issues. Area number one, I would say, is legacy infrastructure.
In developing countries, companies don't have huge budgets to operate. They keep using outdated systems and technologies because of the lack of resources and budgets. Here comes the problem: the systems they keep using have lots of vulnerabilities, and attackers exploit those vulnerabilities to attack the infrastructure. That's one problem.
The second problem that I want to highlight is the lack of security expertise. You know, expertise in the critical infrastructure domain is a global problem. It is not only a problem for developing countries; it is a problem everywhere. But obviously, developing countries are also feeling the heat of the problem. You will find many security experts in the industry who know about the TCP protocol.
But when you talk about any ICS protocol, you will not find many experts with in-depth knowledge of the technology. I would say lack of expertise is one of the problems, and companies need to dedicate some budget to training resources and individuals to make sure the expertise is available in this area. The third and most important area which I want to highlight is digital transformation. Digital transformation itself is not an issue. I know all of you love digital transformation. I really appreciate that too.
The problem is that people spend money on digital transformation but don't give attention to spending money on securing the infrastructure. So, when you are deploying the infrastructure, make sure that you deploy cybersecurity controls on top of it. If you don't do that, the transformation will become a pain in the time to come. You need to keep this in mind.
Now, coming to the second part of the question: how can you use AI for this? Obviously, AI is a great technology. It can do many things to secure critical infrastructures, and two areas are key areas where AI can be very useful. One of them is threat detection and response. You can ingest all of your data from the critical infrastructure in real time into your algorithms. They can find anomalies in your daily operations and find out if there's a real-time security threat. Detection and response can be implemented by AI big time. There's no doubt about it.
Especially in a company like ours: we have a massive infrastructure and millions of assets scattered all across the world. We would need an army of analysts sitting there in real time doing analysis on the events, which is impossible. Here comes the role of AI. AI algorithms can tap in; they can jump in and make life easy for you. This is what my company is doing. We can't just employ hundreds of security analysts to do everything. We have to allow AI to help. I hope that answers your question. Thank you.
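The real-time anomaly detection described above can be sketched with an off-the-shelf unsupervised model. This is an editorial illustration, not material from the session; the telemetry features, values, and model choice are invented assumptions.

```python
# Illustrative sketch (not from the session): flag anomalies in
# infrastructure telemetry with an unsupervised model. The feature
# names and values below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated normal telemetry: [packets/sec, mean latency ms, failed logins/min]
normal = rng.normal(loc=[1000.0, 20.0, 1.0],
                    scale=[100.0, 5.0, 0.5],
                    size=(500, 3))

# A couple of suspicious events: traffic spikes plus bursts of failed logins
suspicious = np.array([[5000.0, 80.0, 30.0],
                       [4500.0, 75.0, 25.0]])

# Fit on normal operations only; predict() returns +1 (inlier) or -1 (anomaly)
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # extreme outliers are flagged as -1
```

A real deployment would feed live event streams and domain-specific signals into such a model rather than simulated data, but the shape of the approach, training on normal operations and flagging deviations, is the same.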
>> HARISA SHAHID: Perfect. Thank you for the great points made. One of the things you highlighted is the lack of expertise, and we know it is a major problem. It is the first thing that comes to mind when we talk about cybersecurity and AI.
For us to deploy these solutions, we must have the expertise to work in these areas. This brings me to my next question, which is for Jenna Fung. What are the most effective strategies for training and upskilling technical professionals in developing countries? As I've seen, you have been working with some Civil Society organisations and the community as well. How can they leverage AI for critical infrastructure security, and what are the limitations for adoption? The floor is all yours, Jenna.
>> HARISA SHAHID: Okay. They are working on it. It is done. Jenna, can you please try again?
>> JENNA FUNG: ‑‑
>> HARISA SHAHID: Okay. We're going to move on. Daniel, what are the primary risks in AI systems, and how can they be mitigated to protect critical infrastructure? Are you able to unmute yourself, Daniel?
>> HARISA SHAHID: Can you please unmute Mr. Dan and Jenna Fung?
>> DANIEL LOHRMANN: Hello. Can you hear me now? I cannot. The video is not started. I don't know if you can see me. I can certainly start talking, if you would like.
>> HARISA SHAHID: Yeah. Sure. No problem.
>> DANIEL LOHRMANN: I'm getting a message that the host has not enabled video. As soon as the video becomes live, I'm happy to be on video. I'm actually in Michigan in the USA.
The question is a really important one, and there are a lot of different challenges. Let me repeat it: what are the primary risks associated with AI systems, and how can they be mitigated? Is that correct?
>> HARISA SHAHID: Yeah.
>> DANIEL LOHRMANN: Great. First, AI is being used to attack us. Systems can be exploited through automated attacks such as malware distribution, and AI-driven attacks are spreading, broadening, and deepening the attacks against critical infrastructure worldwide. This is happening all over the United States right now, all over the world right now. Then there are risks to the AI systems themselves. I want to mention four or five different areas, and then we can dive into how to mitigate them: from data poisoning attacks to privacy attacks to adversarial attacks.
For example, poisoned data can cause a model to misclassify a threat, resulting in an outage, while a privacy attack or adversarial attack can manipulate the outputs. We need to make sure those are protected against. Another type of risk is model theft: an AI model can be stolen via exposed APIs, application programming interfaces. Attackers will duplicate and misuse these models, and stolen models can be weaponised or sold to competitors. Then there are supply chain attacks.
Third-party components used in AI systems might contain vulnerabilities; a compromised library in the AI application supply chain can serve as an entry point for attackers. The second part of the question I'll mention briefly: what are the things we can do to mitigate these? Mitigating cybersecurity risk in critical infrastructure requires us to have a robust data governance model, such as validating data sets and using differential privacy, which is a technique to prevent data privacy attacks. We need to make sure we're doing secure model development, including adversarial training and building resilience.
When we have attacks, systems will be able to sustain them and recover. Access control, encryption, and network segmentation can protect against unauthorised access and the spread of these attacks. Third-party risks can be reduced through stringent vetting and secure software practices, and continuous monitoring with AI-driven anomaly detection can ensure proactive threat management.
Last, I want to mention that incident response plans need to be updated. Have really great response plans and be ready for when attacks do happen. You can also collaborate on threat intelligence, which will strengthen people's defences. And, as was mentioned earlier, there needs to be more training and awareness. Those are my ‑‑ the video is working now. Good to see you. Those are some of my opening comments.
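Differential privacy, mentioned above as a defence against data privacy attacks, can be sketched with the classic Laplace mechanism. This is an editorial illustration, not material from the session; the query, readings, and epsilon value are invented assumptions.

```python
# Illustrative sketch (not from the session) of the Laplace mechanism,
# the basic building block of differential privacy: add noise scaled to
# a query's sensitivity so no single record can be inferred from the output.
import numpy as np

def dp_count(values, threshold, epsilon, rng):
    """Differentially private count of readings above a threshold."""
    true_count = sum(v > threshold for v in values)
    sensitivity = 1.0  # adding/removing one record changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

rng = np.random.default_rng(0)
readings = [12.1, 48.9, 51.3, 47.7, 90.2, 33.4]  # invented sensor loads (%)

# Lower epsilon = stronger privacy but more noise in the released statistic.
released = dp_count(readings, threshold=45.0, epsilon=1.0, rng=rng)
print(round(released, 2))  # the true count is 4; the released value is perturbed
```

The design choice is the privacy/accuracy trade-off: a smaller epsilon hides individual records better but makes the published statistic noisier.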
>> HARISA SHAHID: Thank you so much, Mr. Dan. Now we would like to move to Jenna. Jenna, I will repeat the question. What are the most effective strategies for training and upskilling professionals in developing countries to leverage AI? And what do you see as the limitations for adoption?
>> JENNA FUNG: Thank you so much. I hope I'm audible in the room. Awesome. I got a thumbs up from Dan as well, so I assume they can hear me clearly. Thanks for having me on this panel.
Given my background, I work mostly with young people in Asia-Pacific on capacity building, building knowledge that extends to cybersecurity and all of that. I can't speak to the technicalities, but I have opinions on what we can do better, especially given the title of this session.
In any sense, we're using these ever-evolving technologies, for critical infrastructure in this case. From my experience working with many young people in the Asia-Pacific over the past six or seven years, you can see that many times there's some knowledge gap there. I currently reside in North America as well.
So I think there are differences even when we are somewhat exposed to the same level of development. These things are really new, even for the people who have knowledge, like, for example, governments or companies using it for infrastructure, or things that people everywhere use every day. So there should be tailor-made strategies for how to do capacity building for the people who implement or execute these kinds of technologies in their work.
Especially, for example, government officials or civil servants who use it at work; I think they should be the first group of people who need resources. But there are also the people on the receiving end ‑‑ the people who have been impacted by the implementation. I think national strategies would be ideal and helpful, but many times there are financial constraints on resources. There are many other, even more critical, things that countries need to invest in, put effort into, or prioritise: geopolitical tension, reallocating resources, dealing with, for example, climate change, and all of that.
There are times when capacity building will be put behind those priorities. But I think there are also times when individuals in developing countries can leverage the power of the Internet to look for resources elsewhere. Even if not within your own country, perhaps you can look within your region to see if there are any NGOs or organisations providing this kind of educational opportunity for you to enrich your own knowledge. And perhaps many people are aware that a lot of big corporations also offer some sort of skill training, like microcredentials, and opportunities for you to learn about things as well.
I think that will be helpful for young people to develop that knowledge as well. I will stop here. Hopefully we can chat more and touch on other questions as the audience asks questions later on. Thanks.
>> HARISA SHAHID: Thank you so much, Jenna. The point is well made. One of the most important points I would highlight is that you mentioned we can look within our own region to educate people. Because we are specifically talking about developing countries, it is always difficult for them to invest more resources and to get resources from across borders, right? This leads to the next question, which I would like to put to Gyan. How can developing countries rely less on external providers for critical infrastructure security and maintain digital sovereignty?
>> GYAN PRAKASH TRIPATHI: Thanks. Great to see everyone on the panel and in the audience. The question of technology and governance architecture kept popping up in discussions of democracy. In our analysis of the global literature, we observed the emergence of corporate incentives shaping the strategies, practices, and designs involved in developing the socio-technical solutions that are at the heart of our information ecosystems. To address this, I suggest a three-pronged strategy that emphasises safeguards, accountability, and capacity-building measures.
The first is legal frameworks and strategic contracting. Governments must write clear obligations into their legislation, which must be human-rights-centric. They must enact legislation and regulations that mandate transparency, human rights, and data protection protocols for all technology suppliers, regardless of region. Contracts must have terms that demand technology transfer and long-term support arrangements. These provisions can include training for local engineers, commitments to open standards, and clear exit strategies that can prevent vendor lock-in.
The second, I would strongly suggest, is inclusive mechanisms, which can be achieved through multistakeholderism. There must be clear, direct, and independent oversight by bodies that include government representatives, CSOs, industry experts, and also human rights advocates. But I don't think I need to elaborate on this in this forum.
The approach is, of course, well documented. The third and most critical prong of the strategy is regional cooperation and capacity building as a long-term strategy. It is important to form blocs to pool the resources that are available. This can be done through collaboration with geographically proximate countries on legal and technical standards. Another way it can be achieved is by forming issue- or interest-based blocs, which can increase collective bargaining power. Each prong seeks to protect local interests and uphold human rights standards. By implementing this, developing countries can create a balanced, forward-looking legal and policy ecosystem that respects human rights, reinforces sovereignty, and fosters fair technology partnerships. Thank you. Back to you.
>> HARISA SHAHID: Thank you. You have worked in the government sector. What do you see as the key challenges in establishing international partnerships to share best practices and technologies for critical infrastructure security, particularly in the context of AI?
>> SPEAKER: I think many countries have different ideas of what counts as critical infrastructure. It is logical to include the energy grid or water supply, and your own cybersecurity operation is also part of your critical infrastructure. So I would say every country has to prioritise; we have to find the national priorities, and then have an exchange within the region, as was also mentioned, on the highest-priority issues.
One opportunity in international partnerships is exchanging best practices and exchanging ideas, and also negative experiences. It is very important to share negative experiences together, so that people can learn from each other. And there are a number of international mechanisms ready for that: both the Internet Governance Forum and the AI for Good summit in Geneva are forums open to stakeholders. There's also the Global Forum on Cyber Expertise, which shares knowledge and brings together stakeholders from private technology companies and governments, from the Global South and other countries.
Also, these days we hear a lot about the Digital Cooperation Organisation. It is an interesting organisation that brings stakeholders together. I don't know if they do a lot of work with AI yet; I think that would be a logical next step. Here at the IGF, there's talk about the Global Digital Compact and what has come out of it. There are a number of mechanisms on AI governance that have to be implemented now, and you'll see they are built to bring all of the stakeholders together.
That's the key message I want to give: it is important that any mechanism or international partnership actually brings Civil Society, academia, the private sector, the technical community, and governments together to really learn from each other and not only speak in their own bubble and silo. Critical infrastructure is different in every state, and it is important within your country to have mechanisms through which the different stakeholders can exchange knowledge and experiences.
>> HARISA SHAHID: Thank you so much for your input.
Now, this leads to the next question, which touches on what we have already mentioned: how can the private sector collaborate with governments and other stakeholders to make these solutions accessible ‑‑ okay. Over to you, Mr. Dan.
>> DANIEL LOHRMANN: Thank you for the question, and I appreciate the previous comments; they lead into this very well. I think this is a huge challenge, and I would echo some of those comments. The private sector can collaborate with governments and Civil Society. First of all, it just starts with a commitment that you want to do it. It is the same thing we say in the U.S.: when you've climbed the ladder, send it back down.
It is in everyone's interest, and in the global interest, to work together in partnerships all around the world, and companies recognise it is in the long-term best interest of everyone, the whole society, as well as their own companies, for where they want to go and how they want to work together and partner in the future. How can you do that? Public-private partnerships are a big one, along with NGOs, non-governmental organisations, partnering with those. From a practical perspective, you need to have tiered pricing models and offer subsidised pricing for AI-powered solutions to ensure affordability for low-income regions.
That's been done in other areas of society, and we should consider it for AI and technology. Then there's skill development across organisations, really making sure we have local training programmes that meet local needs. I'm sitting here in the United States; I don't understand the specific needs in developing countries in Africa and around the world.
Honestly, developing partnerships and transferring skills to local professionals will ensure long-term sustainability. It needs to be contextualised and localised. This also means infrastructure such as cloud storage and broadband access in developing regions, ensuring local needs are being met from a privacy perspective, and having proper funding mechanisms in place as well. I think that's a big challenge.
Leveraging international development funds is another: this is a UN panel, and we can look at ways to do grants and financing for AI-powered solutions. We have some questions, and we'll get to those in a few minutes.
Then there are local pilots or proofs of concept in the local context. I think those are really important. It is going to require multistakeholder coalitions: establishing coalitions with international organisations, whether that's the UN or the World Bank, and really working together with NGOs, as I mentioned, and advocacy groups, and then just making sure that we all speak the same language. I want to close on that.
Even some of the terms that we use in the U.S. are different from the terms that people use around the world. Part of that is language, you know, different views and spellings of words in English and that kind of thing. I'm horrible with foreign languages, by the way; I admit that up front.
But beyond terminology, as we think about AI, cities and governments around the United States used to support two languages; they now support 140 to 150 languages, and the same applications can be scaled to work in a wide variety of different communities. Montgomery County, in the Washington, D.C. part of the United States, is a great example with the Monty app. That's a great application that serves people from all over the world who live in the area, giving them access to 100 applications in their own language. I think AI can help us there. It can be part of the solution, making solutions that are available in English available in multiple languages around the world.
The ability to reuse what works is a big part of the solution: being able not to reinvent the wheel, if you will, but to partner and say a government in the United States or in Europe has a successful application; how can we apply that in developing countries?
(No audio)
>> Are you still speaking? We're not able to hear you.
>> HARISA SHAHID: My apologies, I had switched my channel. How can multiple stakeholders work together to develop global or regional frameworks? We see that some regions have frameworks for cybersecurity and things like that.
If I talk about some developing countries, like my country, Pakistan, we don't have a framework specifically for cybersecurity or information security. How can multiple stakeholders work towards developing a global or regional framework for the incorporation of AI in critical infrastructure security?
>> HAFIZ MUHAMMAD FAROOQ: Thank you for the question. I agree with what Dan said: there has to be a global standard for us to follow. The frameworks and the legislation we have seen recently are good for cybersecurity; we have seen many standards and many pieces of legislation coming out. I will give you a few examples. Take Singapore: in 2024 it released updated legislation covering critical infrastructures.
Similarly, Hong Kong for the first time passed a bill for the protection of critical infrastructures. Another example is the U.S., which has revamped its cybersecurity strategy to include the protection of critical infrastructure. This is how the cybersecurity industry is moving towards frameworks and standardisation.
Also, most of you might be aware of the European Union, which recently brought into force its directive and the AI Act. This is something very promising as well; things were positive in 2024. The missing part, I would say, is critical infrastructure legislation on AI: the legislation I'm talking about doesn't cover the use of AI in cybersecurity for infrastructure. That's the missing piece right now. How do we address that? As Dan said, it has to be global first; I don't think the regional approach alone is going to help us.
First of all, countries need to sit together and work on a global framework; then regional frameworks should follow it. I don't think a single country like Pakistan, or even Saudi Arabia, can handle this bigger spectrum alone. The ITU is a good forum; it can take the lead on the use of AI in cybersecurity for critical infrastructure. I think more research and development and more collaboration are required for the time being, to understand how GenAI and AI in general are going to be used for the protection of infrastructures.
There's still more work to do in the years to come. Before attackers start using AI at scale, we should start using AI as well to protect the infrastructure. I hope that answers your question.
>> HARISA SHAHID: Definitely. When we talk about AI, there are governance and other security issues as well; AI has its own concerns.
Moving on to Mr. Jacco: how can organisations effectively balance security needs with privacy and ethical considerations?
>> JACCO-PEPIJN BALJET: Thank you, Harisa, and thank you for mentioning the need for standards. It is a relevant question. I would say there's not a dichotomy between the two; you need both at the same time. Usually more privacy or more ethical consideration does not mean less security. Both go hand in hand, of course.
Here I would also say that the basic general principles will be the same whether you use AI or not. I think the only big difference is that AI enlarges many things; it makes many things much more impactful.
Both positively and negatively: you can better defend against cyberattacks with AI, but of course there are also privacy risks and the risks of false data. I think the best way to incorporate this is to do both. Continuing the thread here on international standards and cooperation: we should think about what we have in common universally. Universal rights and standards are already there at the UN level.
Basically, globally. Next to that, we have the local context: Pakistan is different from the Netherlands, and different again from Saudi Arabia. I think we need to take that into account too, and we've seen that in the Global Digital Compact: when we talked about ethical considerations, we had a push to include them in the Global Digital Compact. I think the best approach is to have a high-level general agreement on the principles at the UN; I agree with Hafiz on that.
At the more regional or local level, you can have more specific legislation for specific infrastructure. We agree on the general principles, such as the protection of privacy and the protection of security, and base national legislation on them. Next, the multistakeholder approach is important: you can do that locally and internationally. We have many standards organisations internationally; resources are always a challenge in the different organisations, but these are platforms where you can engage with the big technology companies. It is important at the technical level to keep the multistakeholder approach and principles open to everyone; inclusivity can be improved, starting with standards for incorporating AI in the cybersecurity field.
Thank you.
>> HARISA SHAHID: Thank you. With that, we have come to the end of the session. Concluding the points we have highlighted: skill-building is very important, and for skill-building, collaboration is exactly the main point every speaker has highlighted here. Collaboration between all of the stakeholders is crucial to enhance AI and cybersecurity and to create awareness about the use of AI. Right now AI is not being used much for the protection of infrastructure, and it is really important.
Thank you so much to all of our speakers for joining us today. Now we're moving to the Q & A session. If anyone in the audience has questions, please feel free to ask.
>> We have a question from the chat box, from the online audience.
>> HARISA SHAHID: Yeah. Sure. We have one question here.
>> AUDIENCE: Hello. I'm Fernando. I work at a network provider; I'm part of the technical sector. One thing that was presented as a problem was the lack of professionals in cybersecurity and AI. Another problem that I see is that, given the long and continued cybersecurity training required, most professionals eventually move elsewhere. Basically, my question is: how do we retain the talent to ensure they keep working here?
>> HARISA SHAHID: That's an important question. Would anybody from the panel like to ask ‑‑ I mean, answer?
>> HAFIZ MUHAMMAD FAROOQ: The world is going global, and it is difficult to retain talent. If you are sitting in Brazil and a company needs you in some other part of the world, they will recruit you. There are challenges there. So I think, instead of trying to retain the talent locally, we should look at the bigger problem: there's not enough documentation available, and there's not much material where you can train yourself to see how a system is going to operate.
Instead of localisation, we should concentrate on training, and the legacy vendors will have to redo their documentation, instead of us trying to localise the resource at a particular location.
Thank you.
>> HARISA SHAHID: Thank you. Does it answer your question? Perfect. Any more questions that we have?
>> AUDIENCE: I'm from the technical community and I have a question. We are talking about promoting the use of AI to protect our critical infrastructure. What do you think will be the scope? Thank you.
>> HARISA SHAHID: You mean, what infrastructure counts as critical infrastructure, right?
>> AUDIENCE: Yeah. My question for the panel is: can you name some of the infrastructure that would be in the scope of critical infrastructure that should be protected by AI? Thank you.
>> HARISA SHAHID: Thank you so much. Anybody from the panel would like to answer?
>> DANIEL LOHRMANN: I can certainly talk to that. In the U.S. we have 15, 16 sectors: everything from utilities, water, and power to finance, certainly banks, and all different levels of government. Then you can start talking about transportation, airlines and trains, all of the core critical infrastructure in society. You can search for critical infrastructure in the USA and North America; there's a defined list of what's covered and what's not covered as critical infrastructure.
>> HARISA SHAHID: Does this answer your question?
Actually, we are running out of time. If you have a question, please connect with our speakers afterwards. We have only one more question, from the online participants.
>> MUHAMMAD UMAIR ALI: Right. We have one question. Can you elaborate on the specific resilience strategies and capabilities organisations should develop to ensure rapid recovery from an AI‑driven attack?
>> DANIEL LOHRMANN: Absolutely. There are a number of things people can think about. Start with threat intelligence: invest in AI-powered threat intelligence to detect and predict emerging attack patterns. Basically, fight AI with AI, using things like SOAR, security orchestration, automation, and response. Cyberattacks are moving faster than ever, so you need to fight fire with fire, permanently. Have tabletop exercises and AI-augmented defence tools that allow you to respond very quickly. But first of all, you need to know what is happening; it starts with a resilience strategy.
Resilience is a popular word in the U.S. cybersecurity community right now; I think globally it is a hot word. You need to have a comprehensive incident response plan, whether that's for water, utilities, or banks. You need to be aware of an attack, be able to detect it, and have all parts of your organisation able to respond: not just from a technology perspective, but people, process, and technology. That means communication; it means working at all levels.
If your bank was hit, your utility was hit, or your water supply was hit, everyone needs to know, from the business side of things through your clients and customers: what are the steps that you are going to take? How are you going to respond quickly? Once you detect an attack, being able to respond and recover quickly, in a resilient way, is really, really key.
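The detect, respond, and recover sequence described above can be sketched as a minimal incident-response playbook runner. This is an editorial illustration, not material from the session; the asset name and response steps are invented assumptions.

```python
# Illustrative sketch (not from the session) of an incident-response
# playbook runner for the detect -> respond -> recover loop. The asset
# name and steps are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Playbook:
    asset: str
    steps: list = field(default_factory=list)
    completed: list = field(default_factory=list)

    def run_next(self):
        """Record the next response step, if any remain."""
        if not self.steps:
            return None
        step = self.steps.pop(0)
        self.completed.append(step)
        return step

water_utility = Playbook(
    asset="water-treatment-scada",
    steps=[
        "isolate affected network segment",   # technology
        "notify operations and legal teams",  # people
        "fail over to manual controls",       # process
        "restore from verified backups",      # recovery
    ],
)

# Drain the playbook in order, whether in a tabletop drill or a real response.
while (step := water_utility.run_next()) is not None:
    print(f"[{water_utility.asset}] {step}")
```

The point of pre-encoding steps like this is the one made above: people, process, and technology actions are agreed and ordered before the attack, so the organisation executes rather than improvises.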
>> MUHAMMAD UMAIR ALI: That brings us to the end. Any questions?
>> HARISA SHAHID: We should have a photograph. Can you all please turn on your camera?
>> MUHAMMAD UMAIR ALI: Should I stop sharing the screen? We still have part of it on the screen.
>> HARISA SHAHID: Yeah. Yeah. Perfect. Thank you so much, everyone. Connect with our speakers on LinkedIn; if they have more time, they can engage with you as well. Thank you for joining today. Good-bye. Have a nice day.
>> MUHAMMAD UMAIR ALI: Thank you so much. Have a good day. Bye.
>> DANIEL LOHRMANN: Thank you.
>> HARISA SHAHID: Thank you so much.