The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> Hello, everyone. We are about to start in five minutes. And technical team, can you help me make Annie, A-N-N-I-E, a co-host?
Hello, technical team. Can you hear me?
Can you also pin Anoosha and Yasmin?
>> I think we should start right now.
>> ANNIE: I got Anoosha's number. Maybe she can pick up.
>> Let's get started because we are running, you know, like
>> ANNIE: Okay.
>> ONSITE MODERATOR: Apologies, everyone. Due to an unknown issue, our (?) was delayed, but we are now back, so let's start our session right now.
And first of all, thank you for joining this session, AI in Warfare: Role of AI in Upholding the International Law, workshop number 184. The sensitive nature of AI in warfare demands a format that fosters open, frank discussion, which is why this is organized as a roundtable. But let's see how we can manage the time anyway.
I will not consume too much time on that. We now have our online moderator, Annie, and also one of the organizers, Abeer, with us today. So I will pass the floor to Annie for a quick introduction of the organizers.
We can't hear you, Annie. Is there any technical issue?
>> ANNIE: I was saying, it's an immense pleasure to have you all with us today. I am Annie (?); you can call me Annie. I am a senior-year law student, as well as a governance and policy analyst at (muffled audio).
I have with me the other organizers; we have (muffled audio), who are also organizers. I want to quickly thank all of them for their support and immense help in finding all the speakers and experts on this topic, and, like, I am very (audio difficulty) to have you on board for this topic. We are all aware that AI has reshaped how we live in this world, and warfare is no exception.
So, I would love it if we can quickly start the session because, apologies for that, we are already, you know, (muffled audio). So back to you, (muffled audio), for introducing (muffled audio).
>> ONSITE MODERATOR: Thank you, Annie. In this session we are going to have three speakers: Ms. Anoosha from civil society, Yasmin Afina from an intergovernmental organization, and also Ms. Jimena Sofia, who is joining us on site here. To open this session, I would like to ask the speakers about this hot topic, AI in warfare. So I will ask Yasmin: how do you see the future of AI and warfare? Yasmin, can you hear me?
>> YASMIN AFINA: Yeah, perfect. Hi, thank you, everyone. It's nice to meet you. My name is Yasmin Afina, from the United Nations Institute for Disarmament Research, or UNIDIR. Thank you so much to the organizers of this panel for inviting me today. And I'm so sorry for not joining you in Riyadh in person; due to personal circumstances, I could not travel in time for the workshop.
So, I know that you wanted me to speak a little bit about the future of AI in warfare. But if you would allow me, I might just share a few slides, if I may. Is that correct? Is it okay?
So, let me just yeah. I hope that you can see my screen. Perfect.
So, I know that you wanted me to speak about the role of AI in warfare and its role in upholding international law, specifically from a responsible AI perspective. But please allow me to twist the framing a little bit and instead look at international law as a key and essential aspect of AI in the military domain.
In the first half of 2024, UNIDIR took part in regional consultations with states and experts in the Asia-Pacific, the Middle East and Central Asia, Africa, Europe, and Latin America and the Caribbean, and based on those consultations we have identified and established a number of facets of responsible AI in the military domain, based on what states shared during these consultations.
And one of them relates to compliance with national and international law, as you can see in the top right of the diagram. In fact, the overwhelming majority of states across regions place compliance with international law as a central component of their governance approaches to AI in the military domain and wider security domains.
And there is a shared sentiment that international law is an important framework that must be upheld throughout the lifecycle of AI technologies meant for deployment and use in defense and security, and thus including in the context of warfare. So international law considerations must be taken into account from the earliest stages: design, development, testing and evaluation. That would require efforts to translate international law implications into technical requirements, in order to frame and shape the pre-deployment stages of these technologies in such a way that they will somewhat be compliant by design, and I will get back to that later in my concluding remarks.
In addition, international law, and in particular international humanitarian law and international human rights law, must inform or even shape and frame procurement processes, as many states are increasingly considering purchasing AI-enabled capabilities. So it is not just about states that are developing AI, but also those that are purchasing it.
From a policy standpoint, however, it is also important to note that while there is this overall shared sentiment that international law is important, it does not mean that states approach it in a uniform way, and there are nuances across regions in states' approaches to AI in the military domain and the applicability of international law.
For example, states in Latin America and the Caribbean generally dedicate more attention and effort to fostering compliance with and upholding international human rights law. This approach is somewhat reflective of the regional security landscape, where transnational efforts to combat organized crime prevail, and of international human rights law's applicability both in and outside of conflict. And of course, states in all regions acknowledge the importance of international human rights law, but international humanitarian law tends to overwhelmingly dominate the policies and discourse of states in other regions, although our findings were that the African region also dedicated more attention to international human rights law, particularly within the framework of the African Charter on Human and Peoples' Rights.
And there is more on these findings in the report that we launched back in September, which I would invite you to download and read from UNIDIR's website, by following the QR code on the slide or by looking up the Kaleidoscope AI report on UNIDIR.org.
Beyond international law as an important component of responsible AI in the military domain, I wanted to add another layer to our discussion: the role of the multistakeholder community. In the report that I previously mentioned, one of the other key areas of convergence that we identified is the importance of multistakeholder engagement. States in fact generally recognize the value of multistakeholder and cross-sectoral engagement to promote responsible AI in the military domain, but they generally disagree on how such engagement should be conducted. UNIDIR, in our capacity as an independent research institute within the UN ecosystem and with a mandate to inform member states, launched earlier this year, in March, a programme of work on the Roundtable for AI, Security and Ethics, or RAISE, in partnership with Microsoft. We have been working closely with representatives from industry, civil society and academia, and we basically asked them what main themes should be prioritized in AI governance in security and defense. I want to note that, as part of the RAISE programme of work, we will be holding the (inaudible) Global Conference on AI, Security and Ethics on the 27th and 28th of March in Geneva. It is open to all, and we will be issuing a call for abstracts for you to present your insights to the UN. So please do mark your calendars and let me know if you would like to be kept in the loop.
The group has identified six themes that must be prioritized for the governance of AI in security and defense, and across all six themes, international law came up as a recurrent pattern. For example, the second priority theme was trust building, and one of the key recommendations put forward is that, in order to enable this trust building, there is a need to clarify the interpretation of applicable laws. For this, the group recommended that states develop clear national positions on how to interpret and apply international law in the context of AI applications in the military domain, thus ultimately contributing to building trust between states.
Another example is the third priority theme, which is unpacking the human element in the development, testing, deployment and use of AI systems in the military domain. Clarifying how international law applies can then help clarify what level of human element is required at each stage of the lifecycle of AI technologies in the military domain, and on what basis in international law.
So, again, all of this can be found in the report that we published on UNIDIR's website, or you can access it by scanning the QR code on the slide.
Finally, to conclude, I wanted to circle back to something I mentioned earlier: how all of these initiatives can contribute to efforts towards compliance by design in the development, testing and evaluation of AI technologies in the military domain, while also acknowledging, addressing and mitigating some of the risks that these technologies can present with regards to international law.
Anecdotally, last month I submitted my manuscript specifically looking at how international humanitarian law considerations should frame the development, testing and evaluation of AI technologies for military targeting, so everything that goes on before deployment on the battlefield. The thesis was developed with the acceptance, and this is personal, not UNIDIR's position, that AI in the military domain is happening already, and without prejudice to possible instruments in the future that may prohibit and outlaw some applications. At this stage, it is important to dedicate efforts and research to ensuring that whatever technologies come out of the lab for warfare have been developed with compliance in mind instead of as an afterthought. Earlier I mentioned the need to translate legal requirements into technical requirements. One example that I looked at in my thesis is the use of proxy data for training and testing AI technologies, and I argued that while proxy data can to a certain extent be necessary, by virtue of the rule of precautions and due to the (?) nature of warfare, it cannot be separated from direct indicators and (?) as a natural part of the ecosystem of intelligence needed for military decision making.
So, all of this is to say that with the right efforts, dedicated resources and political will, compliance with international law should in principle be at the heart of the development of AI technologies for the military domain. This is not about coding international law into algorithms, but rather about identifying and prioritizing practical measures for the implementation of international law, and ensuring that the deployment and use of AI in warfare upholds international law from the outset and does not jeopardize it by leaving it as an afterthought. Because at some point you just lose the right to (?). And on that note, thank you very much. I look forward to our Q&A.
>> ONSITE MODERATOR: Thank you, Yasmin. I would now like to ask another speaker, who is joining us on site: how do you think about international law and the future of AI in warfare? Could you also please give insight on this from your experience as well.
>> SOFIA VIVEROS: Hello. Thank you. Can you hear me? Perfect.
First of all, thank you to the organizers for inviting me. I don't like to circumscribe this conversation to warfare, because these technologies are also being used to attack civilians during peacetime. So I would like to call it the peace and security spectrum of things, also because they are not only used by military actors, but also by civilian actors, both state and non-state. State actors that are civilian include law enforcement or border control.
Whereas for non-state actors, it depends on the context, as Yasmin very well pointed out. For example, in Latin America, where I'm from, organized crime is a big threat; in other regions it is terrorism; in other regions it is mercenaries. All of these actors are using the same technology, so it is important to acknowledge the different implications and the different treatment under international law of each one of them.
Because when we are talking about AI in the peace and security domains, we are talking about many different sets of rules, right. So we have, obviously, IHL, international humanitarian law. We have international human rights law, which applies in wartime and peacetime, and also to civilian actors, and it involves state responsibility, so it also comes down to public international law, which deals with this. But public international law also deals with jus ad bellum, that is, the use of force, the right to use force, or self-defense types of considerations that can stem from the use of these technologies.
We also have international criminal law, and we also have national and regional regulations and laws around different types of liability modes, compliance, procurement, and all the different mechanisms that apply to the entire lifecycle of these technologies.
So it is quite a broad spectrum to talk about the future of international law, because we are also seeing it in the present. It is not just a future situation, especially right now, when we are living in a world where international law is blatantly violated, with complete impunity, unfortunately.
So we are living in a voluntarist world where compliance seems to be optional, and that is not how it should be, because we are seeing very dire consequences for civilians in different contexts around the world.
So what we all need to do is advocate for, promote and foster a coherent global AI governance framework. And I say AI in general because, given its dual-use nature, we cannot really divide it into civilian and military, precisely because of the distinction I made at the beginning: the convergence of actors, the convergence of movements of use and types of use, et cetera.
We all really need to strive for this global AI governance framework to materialize, to be binding, and to have the correct mechanisms for implementation, because that will be crucial. And this obviously requires enforcement mechanisms, which, you know, is going to be even harder. But we need to be ambitious, because this is a very ambitious goal: to preserve international peace and security at this time.
So, as for what we have in the current governance landscape in these particular domains: we obviously have the GGE, the Group of Governmental Experts in Geneva, under the Convention on Certain Conventional Weapons, which I think is ironic because these are the least conventional weapons, autonomous weapons.
We also have REAIM and the Global Commission on Responsible AI in the Military Domain, where I am a commissioner. We also have RAISE, as Yasmin mentioned, and we are now seeing the development of a transfer from the GGE to the General Assembly, with resolutions coming out at the initiative of different states, for example the Netherlands, Korea and Austria, which are leading this conversation, amongst others, of course.
So these are very welcome steps that we are building on, but we still need to create a lot more awareness of the fact that these situations are going on right now; they are not future, eventual possibilities. And we also need to be very mindful, because there is a tendency to foreground the alleged pros of these technologies: okay, they will be more precise, they will be more accurate, there will be less loss of life. But that is exactly why this needs to be comprehensive, within the same global governance framework, because we all know the problems with AI itself, right: the bias, the brittleness, the hallucinations, the misalignment, et cetera.
So those two cannot be dissociated when we are looking at what the actual consequences and effects of the use of these technologies in this space will be.
And also, the differentiation between offense and defense capabilities is completely illusory, because the same technology is used interchangeably; any type of defense is an offense in itself. That is something we should be mindful of when we are having this conversation.
And I will leave it there for now. And, again, I am also looking forward to the Q&A. Thank you.
>> ONSITE MODERATOR: Thank you, Ms. Jimena Sofia. It is very insightful to understand the dynamics of this emerging technology and the challenges that we are facing.
So, I would also like to ask Ms. Anoosha, who is joining us online: from the civil society perspective, how do you see AI in warfare, and how can we facilitate collaboration and make sure of the (?) conversation among the community? So, Anoosha. Annie, are you going to share the screen, or is there a PowerPoint? No? Yeah, go ahead, please.
>> ANNIE: Ms. Anoosha, can you share your screen and put the slides up, if that's possible?
>> ANOOSHA SHAIGAN: Sure, sure.
>> ANNIE: Thank you, thank you.
>> ANOOSHA SHAIGAN: Did that work? Yes? Okay.
So, thank you, everyone, for organizing this, and thank you for having me. It is such a privilege to be here and talk about such an important issue.
I am quickly going to go over some of the points, and the slides are just bullet points of some of the issues that I would like to touch upon, so you can follow along.
So, thank you to the speakers for setting the stage for international collaboration. That is, of course, the first and foremost thing that we need to do.
But let's also look at some very specific issues. I am a technology lawyer by profession. I started my career in human rights and international human rights law, working on treaties, and I was part of the team responsible for bringing the first seven core human rights treaties to Pakistan. So we got the government to sign these treaties and then we started working on them. My association goes a long way back, to when the SDGs were still called the MDGs. So, you know, we have come a long way since then.
And I would like to touch upon some very specific legal issues. The aim is not to give you more anxiety about these issues, but to help you form an opinion, because as civil society experts, as lawyers, as development professionals, your opinion matters as well. This is a very new area, and as we go into the future, digital technologies become more and more decentralized, which means that governments have to rely on civil society, academia, the development sector, and the private sector, not just technology companies, to start forming these principles and guidelines moving forward.
So I am going to touch upon AI and international humanitarian law, some very specific issues, and then I will go into some of the ethical considerations as well.
When we talk about the key principles of international humanitarian law, they can be found in UN principles, the Geneva Conventions, and the ICRC's handbook; if you are a person of faith, they could be part of your religion as well. And they make common sense. There are the principles of distinction, proportionality and necessity, and we will talk about proportionality and necessity first, since we might be more familiar with those.
Specifically: are military responses towards a civilian population proportional, or are they excessive? You might be more familiar with this. Do you think autonomous AI systems or weapons, autonomous drones, or any kind of robots would be able to make these kinds of proportionate responses? That is something to consider. As we have seen in the past year, they have not been doing that.
Then there is the principle of necessity. It obviously asks whether a military response is necessary. When it comes to AI, there have been calls to simulate certain situations first, and then see and verify whether they warrant an actual military response.
So, these are some of the principles, you know, that you might be familiar with.
As far as distinction is concerned, there are laws available at the international level, perhaps not at the state or domestic level, under which states are supposed to distinguish between civilian and military targets, or civilian and military figures or entities. And that has somehow translated into applying to AI targeting as well. But do you think AI will be able to make that distinction? Let's hold that thought, and we will come back to it when we discuss ethical issues.
Liability is, of course, a very important issue that we have seen with autonomous weapons. If an AI shoots down somebody who was a civilian target, or perhaps it was a hospital or a school, who is going to bear responsibility for that? Will it be the person who was operating the AI? Will it be the AI itself? Will it be the commander of the person or the agent representing a certain team, or will it be the entire state? Command responsibility: there are rules around that, but they have to be applied in the context of AI. State responsibility, of course, holds that a state can be held responsible for the actions of its agents; I believe the principles are laid down under ARSIWA. Then there is developer liability: if somebody developed an AI system that did not work and it is now being reviewed, it will come back to whether they followed all the protocols, whether they followed government guidelines or international humanitarian guidelines, whether they tested these systems, whether they removed glitches, and whether they made sure all the laws were followed.
This will resonate with those more familiar with how legal compliance works in highly regulated industries, like nuclear power plants, or with those working in the energy sector or climate. Trainers can also be held responsible if the training was inadequate. So if you did not document things properly, or if you did not impart adequate training, your trainers could be held responsible as well.
If somebody did not know how to use an AI system, their trainers could be held responsible as well. And by responsible, accountable and liable, we also mean that this would include monetary compensation towards the victims and their families.
There have been calls for developing an international military AI tribunal. Coming from Pakistan, you know, we do not believe in military trials in principle, but this is, just to let you know, one proposed form of accountability. But do we need additional forums when we already have international courts and these other international tribunals? How would they impact states individually? Would states have to sign treaties? Would they have to incorporate them into their domestic laws? These are some of the considerations.
Then, of course, there is an area that I specialized in, so I really wanted to touch upon it as well. There are laws around the Outer Space Treaty, which mandate the peaceful use of space, sharing resources, and keeping things clean and debris-free.
But then there are, of course, issues with AI-guided satellites. If there are issues, who is going to resolve them? Is it going to be the International Space Station? Is it going to be the United Nations? Is it going to be the state being affected, or the state that actually launched that AI satellite? And how do their actions work?
So oversight might be a bit of a questionable issue here.
Then let's quickly touch upon ethical frameworks. Data bias and model drift are the main concerns with AI models. Data bias is, of course, when you train your AI with biased data. For example, if you train it with skewed data or discriminatory patterns, like "kill all the dark-looking people" or "kill all the brown people" or "kill anybody who doesn't look white or Caucasian", these kinds of stereotypes. If AI picks up on these elements, it can be very indiscriminate in the actions that these autonomous or AI-based weapons take, especially during military action.
So the datasets need to be checked for bias; they need to be audited. There are algorithmic checks as well, so you can fix those too. But constant and regular oversight is very necessary.
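To make the kind of algorithmic check mentioned above concrete, here is a minimal sketch of a dataset bias audit, assuming a tabular dataset with a sensitive group attribute and a binary model decision. Nothing here comes from the session itself; the data, field names and the 0.8-style threshold idea are purely hypothetical.

```python
# Illustrative only: a minimal dataset bias audit, assuming records with
# a sensitive attribute ("group") and a binary model decision ("flagged").
# All names and data below are hypothetical.
from collections import defaultdict

def positive_rates(records, group_key="group", label_key="flagged"):
    """Return the fraction of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(row[label_key])
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest per-group positive rate.
    Values well below 1.0 suggest one group is flagged far more often."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Hypothetical audit data: each record is one model decision.
data = [
    {"group": "A", "flagged": 1}, {"group": "A", "flagged": 0},
    {"group": "A", "flagged": 0}, {"group": "B", "flagged": 1},
    {"group": "B", "flagged": 1}, {"group": "B", "flagged": 1},
]
rates = positive_rates(data)
print(rates)                    # {'A': 0.33..., 'B': 1.0}
print(disparate_impact(rates))  # 0.33... -> flag for human review
```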
And then, of course, there is the issue of model drift. Model drift is when, you know, you overstuff and overfit your AI with so much data that it starts behaving unpredictably. So maybe, the idea goes, you treat AI like a child or a person: you keep feeding it information and training it, and one day it will start making better and wiser decisions.
Personally, I don't think that is quite accurate, because at the end of the day it is still a machine, still something technical or technological. If you look, for example, at the language some of this AI is coded in, it can come down to zeros and ones, which are very black and white, very exact and very specific, so overstuffing it with data can lead to unpredictable outcomes; it becomes so confusing that you don't know how it is going to act. And who takes responsibility? That is the issue we have been discussing. So audits and monitoring are important.
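On the monitoring point, one common technique, offered here only as an illustrative sketch and not something referenced in the session, is to track how far a model's live input distribution has shifted from its training distribution, for example with a population stability index. All the numbers below, including the 0.2 alert threshold, are hypothetical conventions.

```python
# Illustrative only: a minimal drift monitor comparing a model's live
# input distribution against its training distribution via PSI.
import math

def population_stability_index(expected, actual):
    """PSI over pre-binned proportions; higher means more drift."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Hypothetical per-bin proportions of one input feature.
training_bins = [0.25, 0.25, 0.25, 0.25]  # at training time
live_bins     = [0.10, 0.20, 0.30, 0.40]  # observed in deployment

psi = population_stability_index(training_bins, live_bins)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # common rule of thumb for "significant shift"
    print("Drift detected: route decisions to human review and retrain.")
```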
Then, of course, there is the issue of privacy, especially in conflict zones. Surveillance is an issue; surveillance of the civilian population is an issue. Facial recognition software: is that allowed during a conflict, especially when you are trying to target civilians or pinpoint somebody's exact identity? For example, we have seen in the (?) conflict that very specific people have also been targeted individually, like doctors and journalists. So privacy becomes very, very crucial in such scenarios.
So, of course, we need a solution for this, perhaps along the lines of the GDPR in Europe. But do we need another international regulation, or can we come up with a general framework or some specific standards that all countries that are part of the United Nations must follow, without having to sign additional treaties or pass additional laws within their countries? This is another issue.
Then, of course, autonomy versus human oversight is a concern as well. Human oversight is, of course, important when AI is being used in conflict zones.
I think one of the areas we could follow for developments around autonomous robots or technologies is, basically, the autonomous vehicle economy. When we look at some of the cases that have been going on around autonomous vehicles, those cases are going to help us determine some very minute details and very specific issues particular to autonomous vehicles and robots. So they could apply to drones as well, for example, or to other autonomous weapons that have been used in conflict zones.
So, again, human in the loop is another solution, where you always make sure there is human oversight present when an autonomous system is being used.
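In software terms, a human-in-the-loop gate can be as simple as this: the autonomous component may only propose an action, and nothing executes without an explicit human decision. This is a schematic sketch only; every name and value in it is hypothetical and not drawn from the session.

```python
# Illustrative only: an autonomous system proposes, a human disposes.
# All names and the confidence field are hypothetical.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    confidence: float  # the system's own score, never a substitute for review

def autonomous_proposal() -> ProposedAction:
    # Stand-in for whatever the automated pipeline recommends.
    return ProposedAction("reposition sensor to sector 4", confidence=0.91)

def human_approves(action: ProposedAction) -> bool:
    # The irreducible step: a person reviews and explicitly decides.
    answer = input(f"Approve '{action.description}' "
                   f"(model confidence {action.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

proposal = autonomous_proposal()
if human_approves(proposal):
    print("Executing under human authorization.")
else:
    print("Action rejected; nothing executes without approval.")
```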
And then, of course, you can create ethics committees as well, which will constantly monitor these developments.
Corporate accountability is, of course, important. Just as we require corporations to submit transparency reports on how they are doing on climate change, we could ask the same of military suppliers or military contractors from different states who are part of the corporate world: to submit these kinds of transparency reports.
And then, of course, if they lacked training, or if they did not follow certain laws or certain standards, they could be held liable for that as well.
There is, of course, a proposition to develop certain standards for military AI, which I think is still a very, very nascent area, still developing, so it will be interesting to follow these developments.
And at the end, I would like to touch upon generative AI as well. We have seen a lot of issues related to deepfakes and disinformation. With generative AI, again, we should be able to use detection and prevention tools to spot it, especially on social media, for example, and especially for people operating AI-based weapons or AI systems within conflict zones.
So developing these tools is very important to counter disinformation and deepfakes, because at the end of the day our ultimate goal is to save human lives. I think that is what we are all here for. We are not here to talk about how we are going to make profits or make money. The ultimate goal is to prevent civilian casualties and to maintain a dignified regard for human life.
With that, I would like to end the presentation. I look forward to your questions. Thank you.
>> ONSITE MODERATOR: Thank you, Ms. Anoosha. Your presentation was very informative, covering data bias and model drift, privacy, autonomy versus human oversight, corporate accountability, and the adequate use of AI. Very informative and insightful.
I note that we will have to end this session very soon, since we are running a bit late due to the technical error.
But before moving to the open floor, I note that Mr. Mohamed is here in the room with us, so I would also like to invite him to give any comments he may have.
>> MOHAMED: Thank you very much to the organizers and to the Kingdom for hosting this. Being from the ICRC, and acknowledging that we are running late, I will just focus on a few things and will not duplicate what has been said by other colleagues.
AI and autonomous weapons should comply with and respect international humanitarian law: proportionality, distinction and precaution. Can an AI-controlled autonomous weapon that has been tasked to execute an operation autonomously abort that operation if it sees a child, or a civilian, or a fighter who is no longer capable of participating in the conflict?
Because a soldier who was in a front-line operation, who was participating in the conflict, is protected under international humanitarian law once they are injured and no longer part of the conflict. So whether these autonomous systems will comply with those basic principles of international humanitarian law is a huge concern that we have.
And therefore, at the ICRC today, we have a specialist in Silicon Valley and a delegation in China that are discussing with the technology companies contributing to the development of these systems, having this kind of conversation. I absolutely agree with the notion of engaging with the tech companies and those who are developing these technologies from the design stage. That is quite key.
We are also calling for human oversight and control over any kind of weapon. A decision to kill, a life-and-death decision, should not be made by a tool or by an algorithm. It has to be a decision where at least the final engagement or the discharge of the munition is controlled by a human being. That is quite important.
We are also convening, as some of my colleagues have already mentioned, discussions and dialogue on how to incorporate and integrate international humanitarian law into the development of autonomous weapons and artificial intelligence-controlled warfare.
And regulations are needed. But in any case, international humanitarian law applies to any kind of warfare, whether it is carried out by a human or by an autonomous weapon. That is very clear.
Where we have to seek clarity is on who assumes responsibility. Is it the developer? Is it the commander? As these questions were asked, that has to be clarified. Therefore, we are convening discussions: at the end of 2021 and in 2022, there were two workshops in Geneva with experts discussing these kinds of issues, and the recommendations and reports are out there on our website.
Treaties that are binding and aligned with international humanitarian law are actually necessary and ethical. It was mentioned by one of the (?), I don't remember. Ethics, dignity and the preservation of human life are the ultimate goal, and that is what international humanitarian law is eventually about. Thank you very much.
>> ONSITE MODERATOR: Thank you, Mr. Mohamed. And before the next part, I would like to pass to our online moderator, Annie (?). Just a reminder: we only have nine minutes left.
>> ANNIE: That's okay. Right. Thank you so much, Mr. Mohamed, and I certainly don't want to forget Mr. Neemes' effort in bringing Mr. Mohamed Ali on board with us. Thank you so much. I love how this session is not only about identifying problems, but also about practical solutions and exploring exactly how AI can be used as a force for good.
I would quickly like to move our discussion to our last policy question, because you have effectively answered all the other policy questions in your presentations.
So, just quickly bringing the discussion to it: can explainable AI technologies be effectively applied to autonomous weapon systems to ensure human oversight and an understanding of how targeting decisions are made? I understand that compliance by design was discussed, and also biases in AI systems, by Ms. Anoosha and Ms. Yasmin. I would really appreciate it if any of the speakers present on site or online would like to take this question up.
>> YASMIN AFINA: Sure. I am happy to have a first stab at it, and colleagues, please feel free to complement, just from the top of my head. First of all, thank you for the very interesting question. I think it is very pertinent, and something that diplomats in Geneva are grappling with every day. Based on what I have seen and heard from working group stakeholders, ranging from state representatives to civil society and industry, explainable AI in general is an issue that the AI community is still trying to grapple with. But in the military domain there are quite a few implications, especially because in IHL you have the legal duty to investigate alleged violations of international law. When you look at machine learning-based systems with a black box, you basically know the input and you know the output, but you do not know how it went from the input to the output: for example, why the system specifically recommended targeting this particular person. Then, if this output led to a potential violation of international humanitarian law, there may be issues as to how you can effectively conduct an investigation when the system has a black box.
But at the same time, there are measures and growing research efforts trying to circumvent that problem. You also have to understand that in the military domain, when the commander authorizes the use of force, it is held, or is supposed to be held, to such a high standard that the commander will always maintain some level of responsibility over their decision, even when the commander decides to use and follow the recommendation of a weapon system or of an AI-based support system, and even when the commander does not know how the system produced its (?) output. And then there are also recommendations related to best practices with regards to documentation in the military domain; I think that is something that is increasingly being looked at, even when AI is not being used.
There is growing research in (?), but I do not think there is any silver bullet for this question. The research is ongoing, but it depends on how many resources are dedicated to it and how much political willingness there is to dedicate those resources.
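For readers unfamiliar with the black-box problem described above, post-hoc explanation techniques are one of the research directions alluded to here. Below is a minimal sketch of one standard technique, permutation feature importance in scikit-learn, which estimates how much each input feature drives an opaque model's outputs without opening the model itself. The data is synthetic and the feature names are hypothetical; none of this is tied to any system discussed in the session.

```python
# Illustrative only: post-hoc explanation of an opaque classifier via
# permutation importance (scikit-learn). Data and feature names are
# synthetic; this is one research direction, not a solved problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # three hypothetical features
y = (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)   # feature 0 dominates

model = RandomForestClassifier(random_state=0).fit(X, y)  # the "black box"

# Shuffle one feature at a time and measure how much accuracy drops:
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"],
                       result.importances_mean):
    print(f"{name}: mean accuracy drop {score:.3f}")
```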
>> ANNIE: Thank you, thank you so much, Ms. Yasmin. And I would love it if the onsite speakers could also add their insights on this very question.
>> MOHAMED: All I can add is that, for now, until the technology is advanced enough (and in our opinion, from the international humanitarian law perspective, we will never get there), human accountability is necessary. I think I lost... yeah, we should not leave the decision, you know, (audio difficulty) in the hands of technology. So I think there is a technical issue.
But all I am saying is: human control, oversight and accountability are ultimately necessary, even if the technology is very advanced. So that is our position at the moment. And there is an expert here.
>> SOFIA VIVEROS: I agree entirely. I believe that unless one can completely understand and control the entirety of the effects of a technology, one should not be using it, especially when human lives are at stake.
And I also do not like the term "LAWS", lethal autonomous weapons systems, because it is not only the lethal part: other types of physical harm, harm to integrity, or targeting or detention for other purposes are also quite harmful and should also be encompassed in the regulation of these technologies and their uses.
And I would like to end with a quote from Amina Mohammed, Deputy Secretary-General of the United Nations. She said this at the Arab Forum held in March in Beirut: "There can be no sustainable development without peace." That is something we should keep in mind, because without peace, we really have nothing. Thank you.
>> ANNIE: Amazing. Thank you so much to the onsite speakers as well as the online ones. I understand we still have Ms. Anoosha left to answer this question, but we quickly need to wrap up. I love how, in a very short time, we covered a lot of topics: accountability, global coordination and collaboration, and compliance; the compliance-by-design part has got to be my favorite.
So, guys, we have heard some great key takeaways from this session, and I really hope it further inspires more action and discussion, because we really need it at this time.
So, the report of this session will be shared right away. And I would quickly request all the speakers present on site to maybe come closer to the screen so we can have a group photo together. Feel free to come over, and we can pin Ms. Yasmin and Ms. Anoosha on the screen and have a quick group photo.
>> ONSITE MODERATOR: Please pin Abeer, who is also one of our organizing members.
Unfortunately, we have to end this session; the time is up. Thank you, everyone. I know that some of you have comments or questions; maybe you can approach the speakers on site later. Sorry about that.
>> ANNIE: Could you also pin Abeer Nisa? She is also one of the organizers.
>> ONSITE MODERATOR: Could you please pin Abeer on the screen. Thank you.