The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> MIA McALLISTER: Welcome to this panel, Ensuring the Online Coexistence of Human Rights & Child Safety. My name is Mia McAllister. I'm a Program Manager at the FBI.
Today’s session aims to provide meaningful insights into the complex interplay between technology, privacy rights, and efforts to protect children in the digital space. As digital technologies continue to evolve, they offer both opportunities and challenges. Today’s panel brings together experts from diverse fields to explore how we can foster an online environment that respects human rights while prioritizing child safety. I'll go ahead and introduce our panel today. Online, we have our moderator, Stewart Baker. I don’t think he’s joined just yet, but Stewart Baker is a Washington, D.C.‑based attorney specializing in homeland security, cybersecurity, and data protection. He has held notable government roles, including serving as the first Assistant Secretary for Policy at the Department of Homeland Security and as General Counsel of the National Security Agency. Stewart is also an author and host of the weekly Cyberlaw Podcast. His vast experience in cybersecurity law and policy adds depth to this discussion on human rights and child safety in the digital age.
Next, we have Dan Suter. He is a Principal Advisor to the Prime Minister and Cabinet Ministries in New Zealand on Lawful Access and Cross‑border Data Policy. With a background as a criminal defense lawyer and a prosecutor specializing in serious organized crime, Dan has also served in international roles, including as the UK Liaison Prosecutor to the United States. He is a contributor to the UNODC Practical Guide for Requesting Electronic Evidence Across Borders and brings significant expertise in sustainable capacity‑building and cybercrime policy development.
Next we have Mallory Knodel. She is the Executive Director of the Social Web Foundation. She is a technology and human rights expert specializing in internet governance and digital policy. Mallory is active in Internet and emerging technical standards at the IETF, IEEE, and the UN. Her background in computer science and civil society brings a unique perspective to the intersection of technology, policy, and human rights.
Next in the room we have Katie Noyes. She is the Section Chief for the FBI Science and Technology Branch's Next Generation Technology and Lawful Access Section. She serves as the organization’s lead on 5G, Internet governance, and technology standards development. Katie is a Senior Strategic Advisor for Technology Policy at the FBI with over 20 years of experience in the intelligence community, including service as an Army military intelligence officer and various roles with the Defense Intelligence Agency. Katie brings extensive expertise in security and policy development.
Lastly, but certainly not least, we have Dr. Gabriel Kaptchuk. Gabe is an Assistant Professor in the Computer Science Department at the University of Maryland, College Park. Gabe’s work focuses on cryptographic systems with an emphasis on making provably secure systems practical and accessible. His expertise spans academia, industry, and policy, including work with Intel Labs and the United States Senate. Gabe’s insights bridge the technical and policy realms, contributing to the development of secure online environments.
So, as you all can see, today we have a wide range of experts, and I’m really excited for today’s discussion. We’ll have about 60 ‑‑ oh, perfect, okay. We’ll have about 60 to 65 minutes of moderated discussion, and then that will leave room for questions from both the audience and online.
So, without further ado, and since Stewart is online now, I'll turn it over to you, Stewart.
>> STEWART BAKER: Okay, that’s great. Hopefully, I can turn on my camera as well. Yes, there we go. All right.
Thanks, Mia. That was terrific and a great way to get started.
I thought it would be useful to try to begin this by talking a little bit about where we are today, what’s happened over the last year or so, that would give us a feel for the environment in which this discussion is occurring. Particularly because there’s been a lot of movement in Western democracies on this question, I thought it would be useful to ask Dan to start by giving us a feel for what some of the British Commonwealth countries have been debating with respect to protection of children and the worries about undermining strong encryption.
Dan, do you want to kick us off?
>> DAN SUTER: Hey, thanks, Stewart. Thanks to everybody over in Riyadh. Great to be part of such an esteemed panel today, all the way from New Zealand in the small hours over here. It’s only about 1:00 in the morning.
So look, it’s really important at this point to highlight that I’m going to speak about Australia and the UK, but I’m not a representative from those jurisdictions. But you are right, Stewart, the legislation in both countries, it really has been a point of discussion on how it can be used.
But look, I really want to say there are practical implications on what can be achieved by regulation in this space, and a more meaningful strategy would be to consider how governments consistently engage with tech firms on the issue of child safety and lawful access. It’s really not enough simply to recognise the risk, as probably we have done as Five Countries. So I’m looking there at the UK, Australia, Canada, New Zealand, and the US. We need to really raise our ambition and develop a collective approach, engaging with each other and towards a safety by design ethos, including designed and lawful access that does not undercut cybersecurity or privacy.
And certainly, that’s exactly where those Five Countries are moving towards in relation to, and I may speak to this a bit later, and others on the panel, in relation to 2023 and 2024 Five Country ministerial communiques. But look, one of the primary duties of a government is to keep their citizens safe from serious harm, and here we’re talking obviously about child safety, as well. And carefully governed and exceptional lawful access should be a crucial part of those efforts to protect the public from harm. So when I speak about the legislation that follows, this primary duty is reflected there with very much an incremental approach, either through consultation or voluntariness.
So in relation to Australia, so here we’re going to get a little bit more technical, but Australia has their Telecommunications and Other Legislation Amendment (Assistance and Access) Act 2018, so shortened to TOLA. And that introduced a framework for Australian agencies to make both voluntary and mandatory requests for industry assistance to gain access to encrypted data. Part 15 is the really important aspect of TOLA, and to emphasize this, it establishes a graduated approach for agencies in Australia to receive assistance by establishing three main powers. So one is a technical assistance request, or TAR, where agencies can request this voluntary help from designated communications providers, so industry, where they are willing and able to give assistance.
Secondly, technical assistance notices, or TANs. Agencies can compel designated communications providers to give assistance where they have already the technical ability to do so.
And then there's a technical capability notice, or a TCN. Agencies can require providers to build limited capabilities to help law enforcement and intelligence authorities.
So these three powers, they can be used in secret with penalties for disclosing their existence. Therefore, customers of those platforms may not know if data has been requested or even accessed under TOLA. There is independent oversight that’s already there in Australia in relation to actions conducted by intelligence agencies, by an Inspector General of Intelligence and Security, and equivalent oversight for law enforcement agencies, as well. And the operation of the act is subject to ongoing review by Australia’s Parliamentary Joint Committee on Intelligence and Security, and that actually reports on how many orders have been issued.
I can tell you in 2019‑20, there were 11 TARs issued; 2021‑22, there were 30 technical assistance requests ordered.
So let’s move on to the UK, and the UK passed its Investigatory Powers Act in 2016, and that includes an obligation on communication service providers to remove encryption or electronic protections on communications for investigatory purposes after service of a technical capability notice. And to really emphasize again, this is an incremental approach. So there’s a consultation phase in relation to those communication service providers before a technical capability notice is issued. So again, there are really robust safeguards. There’s a double lock mechanism, for example, with independent oversight by the Investigatory Powers Commissioner.
And just very quickly in terms of my own jurisdiction as well, New Zealand, obviously most of the major communication service providers are based offshore. The main issue in relation to New Zealand, therefore, is extraterritoriality and enforcement. There are a couple of important provisions within our legislation, which would be the Telecommunications (Interception Capability and Security) Act 2013. Commonwealth jurisdictions, we often have really convoluted act names. But there is a duty for service providers to assist with, and we’re talking about encryption here, decrypting telecommunications when there’s an execution of an interception warrant. So that legislation that’s used, its shorter name is TICSA, its overriding objective is to ensure that communication service providers are ready to be able to intercept communications. And there is a provision for a minister to direct those communication service providers to be able to say, "Look, you need to be in a position to be intercept ready and accessible, and part of that duty will be in relation to decrypting communications." I’m not aware of that ever having been done, but the simple fact is that it’s really difficult to enforce for those over‑the‑top providers, such as Meta with WhatsApp and Facebook Messenger, for example, to be able to enforce any of those provisions through a ministerial direction.
But look, again, just to complete this phase is one aspect. And there have been those points of discussion with the use of these particular pieces of legislation and the provisions that they provide.
But the real emphasis should be here on what we can all agree and moving the debate on to ensure that we reach a position where we understand in terms of that safety‑by‑design ethos and progressing towards where we have commonalities in the debate.
So passing back to you, Stewart.
>> STEWART BAKER: Yeah, Dan, just one follow‑up question. When the Investigatory Powers Act and the Australian bill were moving through parliament, there was considerable anxiety expressed by industry and civil society that these acts enabled the government to insist that encryption on a particular service be redesigned so as to allow for lawful access, and in a way that might impact the security of communications. Do you think that’s a reasonable reading of those bills? And as far as we know, has there ever been an order or capability notice that required that kind of modification?
>> DAN SUTER: Look, of course, in terms of the debate, there’s always going to be focus on what the extremist point can be in relation to legislation. But I think it’s really important to, again, re‑emphasize that there is that incremental approach, working towards a point where, ultimately, it is for governments to determine the protection and the safety of their citizens. And built in within that legislation, of course, and when we have a debate about this we’re not going to focus on the intricacies, because we haven’t seen this work through in terms of how it practically applies, there are robust safeguards, safeguards that have been there and been well‑established for a long time, as well. We’re not talking about safeguards that are just being plucked out for the benefit of government. These are safeguards that have been there and that we know work: the double lock mechanism, the IGIS in terms of having oversight in relation to intelligence agencies. Any legislation has to ensure that there is that social license, that these safeguards are built in. But there also has to be that balance in relation to ensuring that, where the public do need to be protected, that power is available.
But I can tell you from a New Zealand perspective, in terms of the legislation that I've referred to, there has to be a balance with cybersecurity and also privacy in relation to preventing any collateral intrusion in relation to individuals. These have to be specific and targeted to ensure that there isn't that collateral intrusion.
And I think it’s really important that when we talk about this debate, we’re understanding that we are talking about the protection of citizens. We are talking about that being at a very last stage. But there has to be the power and the capability there, if needed, with those safeguards built in.
But back to you, Stewart.
>> STEWART BAKER: That’s great.
Mallory, do you see this the same way that the English‑speaking countries, other than the United States, have given themselves the authority to have a pretty dramatic impact on the encryption services that are provided, but have, for a variety of reasons, not gone to the full extent of their authority and have built in a number of protections for privacy and security?
>> MALLORY KNODEL: Right. So because we are in such a late stage of these debates and not a lot has changed on the regulatory side for a while, I'll have to say no. And I think that’s not a surprise. I think we've obviously had a similar debate now for a very long time. I do actually think a lot of other externalities have changed besides government positions on this.
And I'll only mention, because we’re, of course, really short on time by now, that what’s relevant to what Dan was just saying is that in Australia, because of TOLA, you now have one less privacy startup. So there’s an application folks were using called Session. Session is an end‑to‑end encrypted app. It’s interesting because it doesn't use phone numbers or usernames in a persistent way, so it provides a little bit more pseudonymity when using the application. So that’s what kind of differentiates Session from maybe other apps. They’ve announced very recently that they have to leave. They’re going to Switzerland because they have been visited by the authorities and they’re quite worried that they’re going to be asked to backdoor it or to provide user information to the police. And that is exactly what private sector companies have said about the UK Online Safety Act.
It’s unfortunate that Ofcom, the regulator, has been somewhat silent on how they would handle orders to backdoor, whether they would do it under a gag order, whether they would be transparent about that, but we have heard from Signal, at least, and certainly WhatsApp has not been shy about expressing Meta’s position on this, that they would leave the UK before backdooring the software, for sure.
And already, right, and this gets into more of the solution space, already there is data that can be obtained and that is provided. We know that based on leaks from a few years ago. There was, I don’t know, something like a slide deck that the law enforcement community was using to explain which of these encrypted services have which metadata and how you can get it. Those channels sort of already exist, right?
So once an application decides to completely leave a jurisdiction or to completely not comply with requests like a backdoor, then you also lose access to that metadata, as well. You also lose access to the helpful service data that you could have potentially used. So it’s not a great move for anyone, right, when this happens, but it will continue to happen because what is being provisioned in these laws amounts to mandated backdoors that change the software for everyone all over the world, not just for that jurisdiction. And it changes it in a persistent way so that that backdoor or that capability is always there, and it changes it for everyone who’s ever used the application, irrespective of whether or not they are suspected in a crime, and it’s just a much too overbroad piece of legislation.
And yeah, so that what you’re talking about, Dan, where we would rather complement regulation with the ability to work together and find solutions, you take that off the table when applications start leaving jurisdictions over your risky laws.
>> STEWART BAKER: One question, Mallory. Session left Australia as its corporate headquarters. Maybe they also plan never to sell in Australia. I’m not sure we know that. Yeah, that’s potentially the case.
But quite significantly, nobody else who provides end‑to‑end encryption has said, "We’re leaving." That suggests that maybe Session’s concern is over a capability notice that might have affected their efforts to make pseudonymous accounts.
>> MALLORY KNODEL: No, just to interrupt you, Stewart, because I know where you’re going with this question, it’s just because the law is about companies in Australia having to comply. So as Session leaves the jurisdiction, they are no longer swept up in this regulation. And also, I'll note that as far as I can tell, staff members have also had to relocate physically because they’re ‑‑
>> STEWART BAKER: Because obviously they’re subject to jurisdiction, yes, okay. So this is a question of the Australians having limited their authority to people who are located in their jurisdiction as opposed to people who sell services in their jurisdiction, because it wouldn’t be hard to extend jurisdiction to people who are offering those services.
>> MALLORY KNODEL: I think it’s hard. I think it’s definitely hard. I think that’s what the UK wound up doing eventually, but TOLA was some years ago.
And I wanted to also mention that I think it’s interesting we’re just basically talking about the Five Eyes countries, because there is an obvious concerted and coordinated effort to work on legislation as a bloc. So you had Australia sort of doing the impossible, getting any kind of backdoor law on the books first, taking that hit, but kind of with some measured approach so that it wasn't like every end‑to‑end encryption app on the planet. It’s just the ones within Australia’s jurisdiction.
Now you have the UK coming in some years later, managed to put a backdoor on the books, but it’s again, like, limited powers.
Anyway, so we have all these countries; Canada has also managed to do something, and it just follows from there, but this is certainly a coordinated effort. I think that Australia doing something more measured was a tactic to get something that people could live with. They probably would have rejected something a little stronger. So yeah.
>> STEWART BAKER: You’re absolutely right. It feels as though people, the attackers, are circling and taking occasional nips from their target without actually launching an attack.
Why don’t we move just to focus on what’s happening in Europe, as well, so we have a complete picture.
Katie, can you give us a sense of where the debate is in Brussels?
>> KATIE NOYES: Yeah. So first of all, let me just extend my gratitude. I wish you all were here in the room. We've got an awesome audience of folks here. You can’t see half of them, but you’re all missed here. We wish you were here.
But I think really, let me just hit a tick before I get there, if you’re okay with it, Stewart, which is the whole goal of bringing this to the Internet Governance Forum was because we’re Multistakeholders. We’re representative of that on this panel. And I’m really grateful for that.
I will now bring this home, which is that’s what’s going to solve this problem. Candidly, I don’t think it’s going to be government, certainly not alone. It’s not going to be the private sector and the companies alone. It’s not going to be just civil society. It’s also going to include people at their kitchen tables. I absolutely want to make sure we bring this home for the audience in the room, who are very interested in policy.
But I think we all want to know, what does this mean in tangible terms?
Going back to Brussels for a minute, and how this actually even affects the FBI is that these are global companies with global capabilities. We have global public safety challenges. There are global terrorism organizations. There are global fentanyl traffickers. There are global trafficking and child sexual abuse material networks that work across the globe. I think it’s key to highlight that first, that it’s not a European problem, it’s not an Asian problem, it’s not an African problem; it’s an all of us problem.
Why do I say that? Because we all are trying really hard to learn from each other. I think that idea of trying to harness best practices is key. On this, the European Commission actually just put out a report, so it’s very timely, in November. They had commissioned a high‑level group, and the group was specifically to look at, and I want to make sure I get the title right because it’s key here, it was Access to Data for Effective Law Enforcement. And if you get a chance to read the report, I highly recommend it, because I think what it goes to is some of the things we've been talking about.
I guess I will take a slightly different approach and say I think things are very different, and I think they’re very different around this conversation, because I was sitting in Berlin at the Internet Governance Forum, you know, I think a year before COVID. And the conversation was very different. It was, I’d say, very strict on the privacy side. There seemed to be, and please don’t take this as a pejorative comment, but there was a lot of trust in technology companies, and that they were solving civil society’s problems. And there was that sort of idea that public safety might come in and sort of mess that up or, you know, be a chilling effect. I have found the last two days I've been sitting in on multiple panels, it is a wildly different conversation. And the conversation is coming down to responsibility. What roles and responsibilities do each of us have?
And again, I want to go right into the face of this narrative that somehow safety, security, and privacy are diametrically opposed. I think it’s a false narrative. If you go back to the UN rights, we’re at a UN organization, there’s a right to safety, a right to security, a right to justice, a right to privacy. There is an expectation that this all coexists. Thus, the name of the panel.
So I think when you read it, you know, what they’re doing in the European Commission, it really does look to us. And it’s something we’re also trying to emulate with a partnership we newly have with UC Berkeley where we had a summit to kind of have some of the same conversations, the major themes around responsibility. So it talks to what are the expectations of government in this realm? Is there an idea around incentivization? So it’s putting a more active role and a more active responsibility on governments as well to meet industry, to meet civil society, to meet the needs. Because again, we do need to achieve that.
And then take it one step further. Again, it is not up to government, and we all understand that, to prescribe a technical solution. And that’s not what we’re trying to do. But we do recognize it probably does take some government incentivization, some communication of priorities and needs. And I think there’s a lot of space there to achieve that. And again, going back to that report, it actually details out some of these approaches and fresh off the presses from November.
>> STEWART BAKER: I understand all of this and there’s no doubt that the European Commission has proposed legislation that would clearly incentivize better access and more law enforcement insight into what’s going on on some of these services, but that proposal has really been stuck for a few years now due to strong opposition from some members of the European Union. Do you think that’s changing?
>> KATIE NOYES: Yeah, you know, I can’t speak to how the process works there or take any bets on that, Stu, but let me kind of get to some of what we’re hearing. And we heard it out of the G7, by the way. I don’t know if folks are aware, but the G7 Roma‑Lyon group actually commissioned a lawful access working group, and it ran last year, and they voted to renew it for this upcoming year, as well, with the Canadian Presidency. I think it’s key, and it’s key because I actually think the landscape has changed.
And I'll give you maybe two areas where I think it’s the combination of these two issues kind of intersecting. One is the threat landscape. We have solid data, and it’s solid data not coming from law enforcement this time. It’s coming from outside non‑government organizations. So many of you are familiar with the National Center for Missing and Exploited Children, my colleagues around here in the room. It’s a U.S.‑based non‑profit that really takes tips and leads from the electronic service providers. Last year saw the highest number of tips ever received from the electronic service providers, like Meta for Facebook and Instagram, if you’re wondering what an ESP is; it was 36 million reports. Then, and NCMEC is very public about this, they take those tips and leads and they provide them as tips and leads to law enforcement all over the globe. So we in the FBI, we get a number of those and we start an investigation, at least looking into assessing whether there’s an active threat or not. So the threat environment is booming. Why is it booming? Because the technology industry is booming.
You know, sitting around the table years ago as a teenager, I wasn't talking, we weren't talking about social media and gaming platforms where you are connecting to others. But that tech boom sort of comes with a little bit of a cost or a tax, which is the tech industry is moving at such a fast clip. And this is where I think some of the difference is. I think the Multistakeholder environment, particularly civil society, as I've heard here, but I also heard it even from a few company representatives, they’re taking a slight pause to say, okay. And this is a good one to talk about, and it goes to sort of what Mallory was getting to, as well, which is the focus. Like when something has been deployed, well, we know Meta, Apple, all these companies have gone back now and they’re instituting technical remedies or ability for reporting. So a user can report when something has been assessed as harmful to them or a potential criminal activity. They’ve all now gone back and created reporting mechanisms. That’s very new, and a lot of that was announced at the Senate Judiciary Committee hearing in January.
So I do think this landscape is changing where more questions are being asked by legislators. And again, I’m using a US example, because I apologize, I haven’t quite followed the process in Europe as closely; although, we’re seeing a lot more reporting and I think real push for some of these changes to bring industry and governments together to solve these challenges.
So again, just a quick summary, I think the threat environment has changed. We see digital evidence in almost every single one of our cases. If you had asked me that question even five, six years ago, I would have given you a very different figure.
And then we’re seeing just the ubiquitousness of tech deployments, and now we’re seeing the ubiquitousness of adding that layer of end‑to‑end encryption that can’t be pierced. And so I think we’re seeing it by default, by the way, so a user doesn't get to decide for themselves anymore. Now the company is deciding.
And again, last point and I'll turn it back over to you, I think that’s the key point here. Maybe what we’re seeing is maybe this issue is really finally going to a Multistakeholder conversation. I think with very prominent cases like sexual extortion, actually ending up with 17‑year‑olds and teenagers in the U.S. committing suicide, people want to have this conversation, because they’re seeing it in their neighborhoods and at their kitchen tables.
>> STEWART BAKER: Back to you, Mallory. Do you see this the same way that despite or maybe because of the fact that legislation hasn't really gone to the ultimate point of mandating lawful access, that there is a better opportunity for more voluntary cooperation?
>> MALLORY KNODEL: Yeah, so I guess from my perspective, again, we've been having the same public debate for a while. It’s been a couple of years now that I've been on a stage with NCMEC and FBI talking about the same thing. It was an IGF, but it was a USA IGF. The conversation here has been the same. The externalities have changed.
So around that same time, my then‑employer Center for Democracy and Technology put out a report suggesting that reporting and user agency features in end‑to‑end encrypted apps would be a good way forward. We also suggested metadata. And now, companies are doing that. So civil society suggested it. Companies do it. Companies also have now expanded very significantly trust and safety as a whole area of work that all of them are concerned about. Because as we know, this problem of child safety exists far beyond the boundaries of end‑to‑end encryption. It is all over social media, in the clear, and it’s still a problem. And so working to clean that up has been a huge effort.
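One concrete example of the kind of in-app reporting being described here is what is often called message franking. The sketch below is a rough, simplified illustration of the general idea, not Meta's or any other company's actual protocol; the function names, key handling, and flow are assumptions made for clarity.

```python
# Rough sketch of "message franking": the sender commits to the plaintext with
# a MAC whose key travels inside the encrypted payload, and the platform blindly
# countersigns the opaque commitment at delivery time. A recipient who reports a
# message can then prove what was sent, without the platform reading anything else.
import hmac, hashlib, os

def sender_prepare(plaintext: bytes) -> dict:
    franking_key = os.urandom(32)
    tag = hmac.new(franking_key, plaintext, hashlib.sha256).digest()
    # 'tag' is visible to the server; 'franking_key' and 'plaintext' travel
    # inside the end-to-end encrypted payload, visible only to the recipient.
    return {"tag": tag, "franking_key": franking_key, "plaintext": plaintext}

def platform_countersign(tag: bytes, server_key: bytes) -> bytes:
    # The platform binds the opaque tag to this delivery without seeing content.
    return hmac.new(server_key, tag, hashlib.sha256).digest()

def verify_report(report: dict, server_sig: bytes, server_key: bytes) -> bool:
    # On a user report, the platform checks both MACs and learns only the
    # reported message, not anything else the users ever exchanged.
    tag_ok = hmac.compare_digest(
        hmac.new(report["franking_key"], report["plaintext"], hashlib.sha256).digest(),
        report["tag"],
    )
    sig_ok = hmac.compare_digest(
        hmac.new(server_key, report["tag"], hashlib.sha256).digest(), server_sig
    )
    return tag_ok and sig_ok
```

The point of this kind of construction is that verifiable abuse reporting can sit on top of end‑to‑end encryption: the platform gains the ability to authenticate what a recipient chooses to report, and nothing more.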
And probably there’s a lot of explanations for why those numbers have been changing. We don’t know what those numbers mean. It doesn't necessarily mean that there’s more threat or risk. It may mean that there’s a lot more reporting and there’s a lot more awareness of it. And we don’t even know how much of that is new versus old content, et cetera.
So I think that, yeah, there’s a lot of really interesting solutions that are cropping up. I think the tragedy is that there’s a lot of us still stuck in this backdoor conversation that’s not really going anywhere and it hasn't for a long time. And it would be great to truly actually engage to the solutions. But I think that requires, which is what civil society and industry have done, a sort of acceptance of end‑to‑end encryption as a feature that users all over the world have required and wanted and requested and begged for, because they want that protection. We didn't see such a demand for end‑to‑end encryption until it was revealed that the Five Eyes countries were spying on everyone back in 2013. So there’s also that part of the story. But I think, again, if we can accept this as a sort of minimum requirement of secure communications for a lot of different reasons, right? Because encryption protects kids, because encryption protects businesses, et cetera, et cetera. Then we can really build some cool stuff on top of it and try to fix this issue.
So I’d love to see us get into that space.
And then I'll just add one more thing, which is we've also seen externally that backdoors don’t work too. So another thing that’s happened very recently is that some communications systems have built‑in lawful access backdoors, I’m talking mostly about the network layer, so this is where telecommunication services ostensibly have encryption, but it’s been, by law, backdoored. The law in the US is called CALEA. That was exploited just as folks like civil society and other security and safety professionals were saying it would be. And that was the sort of Salt Typhoon hack. So I wanted to bring that into the conversation because we've seen both major successes in figuring out how to do trust and safety, child safety work on top of end‑to‑end encryption. We've also seen major fails where we've had insecure communications and how that’s been negatively affecting businesses and the security of all people using those networks.
>> STEWART BAKER: Let me ask Katie if she wants to address that because I’m not sure everybody agrees that that’s what happened with the Salt Typhoon hacks.
>> KATIE NOYES: Yeah, we certainly don’t agree to that. We actually, you know, the media, quite frankly, got that one a little bit wrong.
Can you all hear me? Yes. Going through? Okay, great. I can’t hear it on my own.
But yeah, we've gone out publicly, by the way, to try to dispel this myth and correct the record. What we’re actually finding, because we are investigating, so Mallory, if you have direct access to the information, we certainly would like to talk to you more, but from the investigation what we’re actually learning, again, not to say that when we get through all of the investigations, because there are multiple here, we won’t find there was some vector or something, but I can tell you right now, the investigation has not yielded that; the CALEA lawful intercept capability does not appear to have been the target.
And actually, what we've seen in two specific targets, that the perpetrators of Salt Typhoon, the Chinese‑backed Salt Typhoon group, actually had access to the network well before they actually accessed the CALEA capability. So that tells us it wasn't the vector and it wasn't the main target.
We already do know, too ‑‑ and we put this out very publicly, so if anyone is interested, we do have published awareness on our website FBI.gov. You can find it. But we certainly do not want that to be used or leveraged in this debate when it is erroneous. Again, it does not mean that there shouldn't be strong security. It doesn't mean that there actually shouldn't even be encryption. We are very supportive of encryption technologies. We just want them to be managed in a way, much like in the telecommunications. And again, I’m with everyone who says there should be stronger security and stronger security even around CALEA. Absolutely, we join those calls. But certainly, we want to make sure the record reflects accuracy here that it does not appear to be the target or the vector. But we did see access, so that is an actual truism.
>> MALLORY KNODEL: Yeah. I think, yeah, target versus vector versus leveraged. The fact that widespread communications had this capability, I think, are maybe three different things, but still significant.
>> KATIE NOYES: But it also matters that, I think, the general population is probably, and I would say this as a citizen myself, way more interested in what else they had. Because to everyone’s argument, "Most people were law‑abiding citizens." You don’t want any of the security to change for that. Well, those law‑abiding citizens wouldn’t have been in the CALEA data anyway. This is for individuals where we have some sort of predication or authorized access. Again, I’m not arguing that it’s not a terrible security problem. Don’t misunderstand me. It’s a terrible security problem. And there should be enhanced security. And again, go back to encryption is one of those, but also multi‑factor authentication, strong passwords. I mean, all of that was a factor in a lot of what we’re seeing here. So I don’t think isolating this to this one issue makes very much sense.
>> MALLORY KNODEL: No. So I was just going to say that I think there might be a lot of elements to it, but we are talking about encryption right now. And so of course, we’re going to only talk about the things that are impacting on the debate around encryption. I think that’s totally fair game.
>> STEWART BAKER: So let me ask then about the encryption. Katie, one of the things that the FBI suggested people do if they’re concerned about the Salt Typhoon hacks, which are certainly a major security threat, was that they use strong encryption, and I assume end‑to‑end encryption. And a lot of people in civil society have said, "Well, there you go. Why, even the FBI thinks you ought to have strong encryption." And isn't there some inconsistency between wanting to have lawful access and wanting people to use strong encryption to protect against very real threats?
>> KATIE NOYES: So absolutely not. Again, we’re back to we think that we can achieve all of these things. Will there be trade‑offs to some degree? Certainly. Will there maybe be differences for the way we approach the entirety of a population of a user base against perhaps looking at, you know, a scaled solution only for individuals where we actually have court authorization and our authorities warrant some type of access to content data? We’re very open to the conversation.
But yes, please let me say for the record, the FBI supports encryption. This is the part of the debate that I think is also not new, and I’m very surprised that we continue to have to answer this question, but happy to do it again, is that we are very supportive of that, particularly from a cybersecurity perspective. And the FBI is a user of encryption.
But what we don’t do is willfully blind ourselves to all of the activities because there is a responsibility. Again, we are all responsible. And I think this is where the debate, I do feel, it’s changed. Again, I go back to, I understand, I feel like we’re here today to talk more of an action plan. At least that’s what I’m here to do. I think the FBI’s point of view in this debate today, I’m hoping we’re going to get to that conversation of something that could be achievable because, agreeing with the UN, we have got to achieve all four of those. I think the discussion now needs to stop being, "Should we?" And it now needs to be not that we accept we can’t and we just stop trying, but that we’re the best innovators in the world. We all represent countries and institutions that are the best innovators of the world. We didn't say, "Oh, cancer’s a hard problem. Let’s not try to solve it." We don’t do that. So let’s ‑‑
>> STEWART BAKER: No, that’s quite right, Katie.
Gabe has been quiet this whole time. He is going to have to bear the burden of representing the innovators of the world because he’s our technical expert cryptographer.
And there have been some interesting suggestions about how to square or at least accommodate both security and lawful access, including the idea of scanning for objectionable material on the phones of the sender before it gets sent so that none of the private communications are compromised unless there’s a very, very good reason to believe a particular communication has objectionable material in it.
Gabriel, if you could, talk a little bit about both that proposal which came up in the EU debate, and any other technical approaches that you think are promising to get us out of what’s a pretty old debate.
>> GABRIEL KAPTCHUK: Yeah, thanks.
It’s an interesting place to be in that some really core parts of the technical puzzle here have not meaningfully changed for a long time.
At the same time, we have cryptographic and computing capabilities that are a little bit different than they were before. And this allows for different types of processing to happen on endpoints.
So to pick up I think on something that Mallory was saying earlier, we've seen a lot of changes that are happening on what is available to users on their endpoints. So this is not shifting the "is there a backdoor? is there not a backdoor?" in the actual encryption layer. But rather saying, "Can we put some kind of processing on the client’s device that locally processes and gives them some more information?"
And so one thing that came up a couple of years ago was a proposal from Apple in which they were going to blur certain images, particularly for youth accounts. And then there was a kind of mechanism by which if the youth wanted to look at the thing, look at the actual image, it would notify an adult. And there were kind of two different things that were happening there, one of which kind of showcased the ability to do some kind of powerful stuff on the endpoint, and one of which showed the kind of brittleness of this type of approach. So on the one hand, we now have the ability to actually kind of process images on somebody’s phone and say, well, maybe we should blur this thing. Maybe this isn't the type of thing that we should just show to people no matter what. And I think there’s actually a fair amount of consensus that this is not a radical idea. Maybe if I blurred every image that I got, or the ones that were locally determined to be not something great, that would not be that problematic.
Where there was a lot of pushback from the community, it was the fact that there was then a kind of automatic trigger of some kind of information pushed to another device or pushed to a server or pushed to something like that. That is to say, kind of breaking out of this model of end‑to‑end encryption in order to alert somebody else of an event. And that’s actually where that proposal was found to be most objectionable. And so we have various different kinds of ways of thinking about this, right? If we’re able to process on the device itself and able to identify that this is content that we’re concerned about, we can give users a little bit more, maybe you could say more usable, ways to control the information that they see or report the information they see. And that’s something we really know how to do.
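To make the distinction concrete, here is a minimal, illustrative sketch of the purely on‑device flow described above: score locally, blur, and let the user decide, with nothing sent off the phone. The classifier and threshold are placeholders assumed for illustration, not Apple's or anyone else's actual implementation.

```python
# Illustrative sketch only: a hypothetical on-device filter that blurs images
# flagged by a local classifier and leaves the reveal decision with the user.
# Nothing in this flow leaves the device.
from PIL import Image, ImageFilter

def local_sensitivity_score(image: Image.Image) -> float:
    """Stand-in for an on-device classifier. A real deployment would run a
    local ML model here; this placeholder just returns a fixed score."""
    return 0.0

def present_incoming_image(path: str, threshold: float = 0.8) -> Image.Image:
    img = Image.open(path)
    if local_sensitivity_score(img) >= threshold:
        # Blur the preview and let the recipient choose whether to reveal it;
        # both the image and the decision stay on the endpoint.
        return img.filter(ImageFilter.GaussianBlur(radius=25))
    return img
```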
When it comes to kind of active scanning, that then kind of pushes information off the device itself. This is where things start to get a lot more complicated and a lot more controversial and a lot more difficult to do. In particular, you kind of brought up in the EU, we've kind of seen a push, a concerted push to move away from kind of an old paradigm, particularly around child abuse material to kind of flag the known instances of child abuse material. So this is an image which matches another image that NCMEC has. And therefore, we kind of with high confidence can say that this image is a problem image. And kind of with confidence ‑‑ I’m going to return to that in a moment, but with some degree of confidence that there’s a match there. And there’s been a push to shift away from that paradigm and towards detecting new images or new content or the solicitation of images or solicitation of content. And this is a much trickier problem.
As a technologist, I don’t know how to write down a program that can, on the client side, with 100% certainty, actually differentiate between this is a problem conversation, this is not a problem conversation. And when the ramifications of getting that wrong are that people’s information is going to get pushed to a server and it’s going to get kind of opened, that’s a really high risk environment to write that kind of program. That’s not a low‑risk kind of choice, and it’s not the kind of thing that you want to get wrong. And this is kind of where it’s important to start making technical differentiations between the types of access that are being requested. If it’s detecting new content, that’s really, really difficult. And I don’t think we have the technical capabilities to do it actually meaningfully.
>> STEWART BAKER: What about detecting old content that’s been tweaked in order to evade the algorithm?
>> GABRIEL KAPTCHUK: Right. So this is kind of this older paradigm, which is one that, again, there’s still more things to pull apart here, and it’s not just kind of one thing, right? So we have seen some work in doing what’s called perceptual hashing. This is where you take two images and you kind of run them both through an algorithmic function to determine whether or not they’re what’s called a semantic match, where this kind of semantics somehow capture what’s in the image, as opposed to the details of the image. And on the one hand, this seems like a promising way forward, right? Because this means that you could match two different images which have had minor edits made to them, but are still kind of fundamentally the same.
Unfortunately, the reality of it is that our modern perceptual hashing technologies do not live up to their task. In particular, in the aftermath of Apple’s announcement that they were going to be doing some amount of client‑side scanning, they also released NeuralHash, a particular hash function that was supposed to do this. And it took people, I don’t know, about a week and a half to reverse engineer it and start to find kind of ridiculous what are called collisions, or two images that match kind of according to the function, but are actually semantically wildly different from one another.
And this is because, you know, this is a really hard computer vision problem to determine whether or not two images are the same. And, you know, you can kind of think about this going kind of out of the context of child stuff, and thinking just back to kind of the way that the US thinks about pornography, right? "I can’t define pornography, but I know what it is when I see it." That kind of says that people are the ones who are able to determine whether or not content is a match. And probably, there’s edge cases where they won’t agree. To get a computer to do that when humans actually have a difficult time doing that, that’s a problem. That means that you’re going to inevitably build functions that are going to do scanning of some variety which are going to be overbroad. And they’re going to kind of have really obvious fail cases or really obvious ways to abuse them.
Something like I can kind of manufacture images that look like, according to this kind of hash function, that they are child abuse, and send them to somebody else when, in fact, they’re not child abuse. It’s just that I've kind of exploited kind of relatively easy ways of modifying images so that they kind of look, according to the algorithm, like the same, but not to our eyes.
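For readers who have not seen one, a perceptual hash can be sketched in a few lines. The difference hash below is a deliberately simple, illustrative example, not PhotoDNA or NeuralHash, but it shows the basic idea of matching images by approximate fingerprint, and why both false matches and targeted evasions of the kind just described are possible.

```python
# A minimal "difference hash" (dHash), one simple form of perceptual hashing.
# Production systems are far more elaborate, but the failure modes discussed
# above apply to this whole family of approximate-matching ideas.
from PIL import Image

def dhash(image: Image.Image, size: int = 8) -> int:
    """Downscale to (size+1) x size grayscale and compare adjacent pixels."""
    small = image.convert("L").resize((size + 1, size))
    pixels = list(small.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Two images "match" if their 64-bit hashes differ in only a few bits. Small
# crops or re-encodings usually stay under the threshold, but unrelated images
# can also land close together (a collision), and targeted edits can push a
# genuine match over the threshold.
def is_match(img_a: Image.Image, img_b: Image.Image, max_distance: int = 5) -> bool:
    return hamming_distance(dhash(img_a), dhash(img_b)) <= max_distance
```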
And so that’s kind of where we are today. There is kind of a push for scanning on endpoints. In my opinion, there are ways in which this could potentially empower users to have an easier time moderating the content that they see or making better decisions for themselves. At the point where that data then gets pushed off device, that starts to open up kind of a different type of rights impact assessment that needs to happen. And we have to have a different kind of confidence level in the technology than we have in it today.
>> STEWART BAKER: But let me ask you from a technical point of view. We've heard a lot of talk about how valuable it would be to have more conversations and to find common ground. But I wonder, with Signal having long offered end‑to‑end encryption by default, Apple and WhatsApp having done the same, and now Facebook adopting the technology for its other services, isn't this debate really over as a practical matter? The big companies that offer these services have all moved to default end‑to‑end encryption, and they’re showing no signs of saying, "Well, maybe we should look for common ground here." They’ve done it.
And maybe I’m misunderstanding the impact in the market, but what’s the incentive to look for some mechanism to satisfy child safety and law enforcement, given what has happened in the market?
>> GABRIEL KAPTCHUK: Yeah, I mean, I guess if the conversation’s over, we can all go home and go on with our day. I don’t think it’s quite that simple. I think what we’re seeing is the deployment of end‑to‑end encryption technologies across many, many communication platforms as being a very clear signal that this is what users want. Right? If nothing else, this is like, you know, we’re trying to fill a market need or a market want or something like that.
And importantly, I want to pick up on a thread that I think popped up a couple of times in what Mallory and Dan and Katie all said, this question about like by defaultness. And what is the value or the risks around "by defaultness"?
And at least, you know, from a technical perspective, I like to think that by defaultness is kind of the only reasonable way forward, because you want end‑to‑end encryption to protect the people who are not going out of their way to evade kind of surveillance of any kind. Those are the people you kind of want to protect. And if you don’t protect them, then actually your system is not getting you very much. Right? The ability to build encrypted communication platforms is something that we've seen criminals do for quite some time.
And obviously, there’s kind of a lot of conversation around the ways that international law enforcement have tried to kind of approach those systems and whatever. Putting those aside, we know that people are trying to evade surveillance. They’re going to build these systems. They’re going to use encryption. Right? You want encryption by default to make sure that it’s, you know, you, your spouse, your kids who are protected against somebody inside of a tech company stalking them.
And this isn't like a wild, crazy thing to do. We've seen this happen before where people kind of elevate the powers that they have within, or kind of take the powers that they have and abuse them within a tech company, or a company is breached maybe by a foreign government that wasn't supposed to have access to that system, whatever it is. Right?
So we really do want encryption by default in order to protect the people that you’re trying to protect. That’s kind of like an important part of the puzzle here.
You know, I think in terms of whether or not we’re done with this conversation just, you know, simply because it’s being deployed everywhere, I think that that’s kind of like giving up on trust and safety. That doesn't make any sense here, and I think that trust and safety is obviously going to be part of tech platforms’ responsibilities going forward.
The question is, what are the tools that they’re going to use, and what are the capabilities that they’re going to build into their systems to ensure that users have the ability to protect themselves?
Now, we get into some tricky waters in terms of exactly what the right thing to do there is. Obviously, I've kind of advocated to some extent through what I've been saying for this kind of user and ability to control the information that people are seeing and to report and stuff like that as being a powerful mechanism, as we've seen kind of deployed over the last couple of years.
I'll offer one more piece of this puzzle, and maybe this moves us towards a different part of the conversation. When I think of the big risks of trying to move forward in this conversation, I think one of the pieces that’s new is trying to understand: is there any way beyond kind of an all‑or‑nothing capability?
And this is something that I’m technically interested in, and I think is an important part of the conversation. In particular, lawful access or backdoors as a paradigm is fundamentally kind of an all‑or‑nothing, from a technical perspective, type of trade‑off. Either there is a key somewhere that lets everybody into the communications and there’s a bunch of protections. Maybe those are social protections about who gets access to that key. That’s one paradigm. Or, there is no key. That key does not exist. And therefore, it can never be materialized.
And I want to offer that this pushes us, from a regulatory perspective, towards kind of a worst‑case scenario. If we mandate there must be a backdoor, that means that this key exists now, and that key is very, very dangerous, a very high‑value target, and somebody is going to go after and get it. And whether or not the Salt Typhoon as a particular instance is evidence of something or another, it is evidence of a paradigm in which there is the willingness by international governments to put a lot of resources going after these capabilities. The minute there’s a key, that key is going to be a high‑value target.
And one thing that I think is interesting in this conversation is wondering if there is a way to create a key that only works for certain types of content. And that’s something in the cryptographic world that may or may not exist. And there’s kind of ongoing research.
But as a paradigm, I think it is a different part of the conversation which starts to shift us away from we have to accept that there’s never going to be any backdoor, or we have to accept that there is going to be a backdoor, and saying, "Well, what is this backdoor for?" If we want a backdoor, and we want to just talk about kids, can we talk about a specific, limited backdoor that doesn't then make everybody else vulnerable at the same time? Just because the mere existence of this key is kind of a vulnerability. This is a difficult paradigm to work with. It’s a hard design space. We don’t know much about it. But I think it is one potential way that we can start thinking about avoiding this worst‑case scenario of keys actually being created and software actually being made that’s really, really vulnerable.
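To see why the existing paradigm is all‑or‑nothing, it helps to look at the simplest possible escrow construction. The sketch below is an assumption‑laden illustration using the Python cryptography library's Fernet primitive, not any real system's design: every message key is wrapped to a single escrow key, so whoever holds that one key can open every message, and nothing in the construction scopes access to particular kinds of content.

```python
# Minimal sketch of the "all-or-nothing" escrow paradigm: one master key that
# "lets everybody in". Illustrative only; not a proposal or a real design.
from cryptography.fernet import Fernet

escrow_key = Fernet.generate_key()   # the single high-value secret
escrow = Fernet(escrow_key)

def send_message(plaintext: bytes, recipient: Fernet) -> dict:
    msg_key = Fernet.generate_key()  # fresh key for each message
    return {
        "ciphertext": Fernet(msg_key).encrypt(plaintext),
        "wrapped_for_recipient": recipient.encrypt(msg_key),
        "wrapped_for_escrow": escrow.encrypt(msg_key),  # the mandated copy
    }

def escrow_decrypt(envelope: dict) -> bytes:
    # Whoever obtains escrow_key can recover the message key for *every*
    # envelope ever produced, regardless of sender, recipient, or content.
    msg_key = escrow.decrypt(envelope["wrapped_for_escrow"])
    return Fernet(msg_key).decrypt(envelope["ciphertext"])
```

A key that "only works for certain types of content," as floated above, would have to replace that single escrow key with something structurally different, which is exactly the open research question being described.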
>> STEWART BAKER: Okay. That’s the first suggestion I've heard that there might be a way out of the all‑or‑nothing aspect of this debate.
But let me ask Katie and Mallory to weigh in on whether a content‑based lawful access mechanism is available. I suspect Katie is going to say yes, and it’s a warrant. So having previewed what I suspect Katie’s argument is, let me start with Mallory.
>> MALLORY KNODEL: Thanks. No, it’s okay. I'll be really quick.
I also wanted to connect what Gabriel was just describing to what Katie said earlier, because I think this idea that what I’m putting forward, where we sort of accept the constraints of end‑to‑end encryption, is sort of giving up, suggests that the goal is the backdoor, right? And I think that for technologists like Gabriel and myself and others, public interest technologists in civil society and in academia and industry, the problem space, the requirements, are that we need to keep people safe. That includes kids. We need to make sure our communications are secure, and that is a wider frame. It’s a sort of: you list the requirements, and then you build the thing that meets the requirements.
Maybe that’s a backdoor, but maybe it’s a whole lot of other things. So when we say, like, we are giving up on backdoors, I suspect it is true that that has been the goal all along.
It was the same with the UK Safety Tech Challenge a few years ago. They said it was about finding solutions to child safety. They created a brief for it that said it needs to be about scanning images in end‑to‑end encryption. Like it was a presupposed goal, and that really narrows the field in terms of what kinds of innovations you can get. So you got five different projects that all did the same thing to varying degrees of success, and the best one was not very good, because perceptual hashing is hard.
So I want to just say I think what Gabriel is describing is these are really interesting ideas. I have more of a technical background in Internet networking and encryption. I have less of a technical background in AI, but I've had to learn it in the context of this work, because it’s relevant to a paper I’m working on that’s coming out very soon, and because there’s a lot of imagination around what you can do with this data. I think some of it could be very interesting and fun. Like let’s think about how these secure online platforms are being used a lot more like social media platforms, et cetera, and that’s great. That’s what people want. That’s where they feel safe expressing themselves, increasingly, in a world that seems kind of scary, and that will still have some of these features. And can you do cool things with content that also allow users to protect themselves and allow the platforms to make sure the experience is enjoyable? That’s another incentive. Nobody wants to use a platform that has all kinds of unwanted or gross content on it. Then, yeah, we get into more of a solution space. So let’s continue this conversation and live in that sort of innovation space. I think that’s a good idea.
>> STEWART BAKER: Katie?
>> KATIE NOYES: I couldn't agree more. I think that’s what we've been trying to do: get to the table and discuss. I do like that this panel’s gone this way. I think we've all moved off of the absolutist points of view, which is, look, there is going to be compromise, and a lot of innovation needed, on how this all can actually be achieved and coexist. And for our part, we’re very willing to come to the table.
On this sort of "giving up" point, I'm keen on it, and I'll just give a case example, because we haven't talked through a case example, and I think it's worthy for these types of conversations. I'm going to talk about a quick success, but this is also what we're afraid of, right? So take sextortion. Many people are suffering this challenge, which is why I picked this case: it's universal, it's happening in many different countries, and by the way, the actual subjects were Nigerian. If you haven't followed it, we have a case out of our Detroit field office, a young gentleman named Jordan DeMay. And again, I'm going to kind of pierce through all these preventative measures, even though we haven't gone there. They're wonderful. And please let me give a plus one to all the companies that are doing this great work.
But here’s the challenge, right? The hacking thing, you’re right, if things are available and a criminal thinks they can benefit from it, they’re going to target it. So they targeted dormant accounts that were being sold on the dark web, dormant Instagram accounts. They hijacked one of them, just changed the pictures of it, used the same name, and enticed an individual who thought he was talking to a 16‑ or 17‑year‑old girl, created a relationship, and pretty soon an explicit image was shared, and that’s when the extortion starts. Our young 17‑year‑old Jordan paid the first ransom and couldn't pay the second.
And here's what we mean by "why content," because I know we on the panel understand this, but I'm not sure everyone has been following this, particularly at IGF, the way that we are. Here's why content matters. If the only information we had was metadata showing that Jordan's Instagram account was talking to this fake Instagram account, there's no real prosecution there. There's victimization, and we could see it, because, unfortunately, Jordan took his own life. And here's the interesting part. The mother has gone out very publicly, and I only use this because she's gone out publicly, and she has told law enforcement she never would have known why her son committed suicide if the FBI had not been able to gain access to the content, which showed the communications and showed this subject goading Jordan to take his own life. That added to the sentencing. It added to the prosecution. This is what is at stake.
And I really will push back, too. We talk about this very academically, and I do it too, so I'm castigating myself as well. I think people really do need to understand the design choices and the way they are affecting them, right? I think it's key.
I also resist this idea of a backdoor. I can’t stand the definition. I tried to look. What’s the universal definition of a backdoor? And if you go back and look at it, what it was at least five, six years ago was the FBI and law enforcement having direct backdoor access to communications. That is not ‑‑ I don’t want anyone to think that is what law enforcement is asking for. We’re asking for the technical assistance side.
The other thing I also resist a little bit is this idea that you're somehow in your own home, and it's a backdoor to your own home. But you're not in your own home. You're in a provider's home, and there are all kinds of backdoors. And yes, I'm wise to the fact that all of these so-called backdoors are not created equal, but there are a lot of access points. There are. And all of those access points could be vulnerable, and vulnerable for different reasons.
I see Gabe laughing because we had this conversation. Again, I'm not saying that all of these accesses are equal, but they are there, and they're there for a reason, and not a bad reason. Providers need to be able to patch any vulnerability they find, or one that we might find from seeing other victims, so that a vulnerability identified as a criminal's tactic, technique, or procedure can be shared and further victimization prevented.
So, sorry. Long‑winded, Jim. All over the place.
But my quick answer is, yes, that there are absolutely solutions. We are willing. We know there has to be an active negotiation. We know it’s not going to be absolute access.
By the way, there have been some really interesting discussions, and I'll just throw them out there, because we had great conversations at UC Berkeley, an academic institution with a lot of cryptographers. Would love to talk more to Gabriel, too. Thinking about things like homomorphic encryption, and some promise around, again, like you're saying, Mallory, identifying additional categorizations of the data and what that offers.
But also this idea that someone raised to us: how about a prospective data-in-motion solution, where you're not affecting all of the users, but perhaps you're affecting a specific subject's design architecture? I raise it because it's been raised publicly. It's in articles; in fact, Gabe, I think it was in your article, and I think you even said "abuse-proof lawful access." And we're talking about the way that a prospective solution, meaning oriented from today forward, would also offer additional oversight. And we agree to that as well.
So anyway, a resounding "yes" from us, Jim. We stand at the ready to start working on an action plan, to get together and start talking, taking our law enforcement operational needs and what we're seeing from our cases and bringing them into the conversation with folks like this.
And again, let me just go back to one quick hit for the multistakeholder approach. This is the best way we solve these problems.
Thanks.
>> STEWART BAKER: So, Gabriel, do you think there is such a thing as abuse-resistant encryption?
>> GABRIEL KAPTCHUK: You know, it's hard to say, when you've written a paper called Abuse-Resistant Law Enforcement Access Mechanisms, that you think they don't exist. It's difficult to put that back in the bag. The work that we did in that paper was to try to understand this design space more and to think about: if we are in a world where the folks who are using TOLA start issuing technical capability notices left and right, and suddenly there are keys everywhere, that is the worst-case scenario. How is it even possible to build a system that meets the technical requirements without being a total disaster? That's what we were trying to ask, and that's what we're calling "abuse-resistant." That's not a global notion of abuse resistance. We were actually very careful to say we need to talk about what it means to be abuse-resistant. We need definitions on the ground, something on paper, so that the cryptographic community can go back and actually answer a specific technical question instead of saying, "Aha, it's abuse-resistant," and that's it. We need something a little more formal to work with. And so the particular notion that we worked with in that paper was to ask: okay, you have warrants that are activating backdoors in some way. Is there some way that, if that key gets stolen, at least we would know? At least we would all be able to tell. We would be able to say something terrible has happened: a foreign government has taken this key and is rampantly using it to decrypt people's stuff. If we're in that world, can we at least detect it and say, "We need to rekey the system right now. Something very bad is happening"? These are notions of abuse resistance that I think haven't been part of the conversation, and we risk going towards really, really bad solutions if we don't explore this space.
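One ingredient that shows up in "detectability" designs of this general kind is an append-only, hash-chained log of every use of an access key, so that decryptions performed with a stolen key, which leave no log entry, become evident to auditors. The sketch below illustrates only that general idea, not the construction in the paper being discussed; the class, field, and function names are invented for illustration.

```python
# Toy append-only, hash-chained log of access-key uses.
# Illustrates the "abuse becomes detectable" idea in general terms only;
# it is not the construction from the paper discussed above.
import hashlib
import json
import time


def _digest(entry: dict) -> str:
    """Stable hash of one log entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()


class AccessLog:
    def __init__(self) -> None:
        self.entries: list[dict] = []
        self.head = "0" * 64  # sentinel hash for an empty chain

    def record(self, warrant_id: str, target: str) -> str:
        """Append one entry. Each entry commits to the previous head,
        so entries cannot be silently removed or reordered later."""
        entry = {
            "prev": self.head,
            "warrant_id": warrant_id,
            "target": target,
            "time": time.time(),
        }
        self.head = _digest(entry)
        self.entries.append(entry)
        return self.head

    def verify(self) -> bool:
        """An auditor recomputes the chain; any tampering breaks it."""
        prev = "0" * 64
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = _digest(entry)
        return prev == self.head


# A decryption observed in the wild with no matching log entry is the
# signal to investigate and rekey the system.
```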
>> STEWART BAKER: So let me push on a point that has always bothered me about the argument that these keys are going to be everywhere, they're going to be compromised, all of the communications are going to be exposed, and that that's a risk we can't take. It does seem to me that everyone who provides software on our phones or on our computers has the ability to gain access to that device and to compromise the security of my communications. I am trusting every single provider of every single app that is on my phone. Obviously, that's a worry, but we expect the manufacturer to undertake the security measures to prevent that from becoming the disaster we've been talking about here. Why doesn't that same approach work here: saying to the company that provides the communication service, you also have to have a mechanism for providing access, and we expect you to maintain it every bit as securely as you maintain your patch update system? Why is that not the beginning of an approach?
>> GABRIEL KAPTCHUK: Yeah. Let's talk about this. Let's start to split these into technical categories, because there are multiple things happening here. The first thing is whether or not I need to trust, I don't know, Duolingo; whether they might have the ability to access my unencrypted messages. When you say I need to trust every provider of every single app on my phone, it turns out Apple has done a really good job sandboxing these things, so that it's actually highly, highly non-trivial.
>> STEWART BAKER: Yes, they have made sure that we have to trust Apple, but nobody else.
>> GABRIEL KAPTCHUK: Great. Now, let's talk about Apple for a moment. Let's say there are these software update keys that are part of the ecosystem today, for exactly the reasons that you mentioned. I think one really important part of this puzzle is thinking about the hotness of these keys. This is maybe a little bit of a technical term, but: how much access does this key need? How live is it? For a software signing key, that thing isn't living on a computer that somebody has access to. It's living inside of a TPM, offline, sitting somewhere. And if you want to go sign an update, you literally have somebody get up and walk over and do the thing. That reduces the amount of exposure of that key. And you're not doing this every day. I don't know exactly how often I should be updating my phone, but we're not getting updates that frequently from Apple. So it's a very slow and methodical capability that's audited by a lot of people, and there are a lot of eyes on it.
This is a very different world when we talk about getting access to people's messages. You think that, if there is this key, it's only going to be asked for once in a while? No. It's going to be fielding thousands, tens of thousands, hundreds of thousands of requests from countries around the globe. And a lot of requests will come with very short turnaround times: "We need this content decrypted in the next five minutes because there's a kid somewhere and we need to find them." That is a request that we are going to see, because we already see it for unencrypted information. And moreover, we see that capability has been exploited in practice. Verizon handed over data to somebody who impersonated a member of law enforcement, because they said, "Hey, I need this data right now," and Verizon handed over the data first, with the due process to follow later. And that person was just using a compromised ("owned") account of some kind. This happened, I think, in 2023. So the hotness of these keys makes a tremendous amount of difference, because the number of times you have to access a key really shifts the dynamics around it. That's one piece of the conversation. There's more to unpack there, but I'll stop there for now.
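To make the "hotness" contrast concrete, here is a minimal sketch of the update-signing pattern being described, using the Python cryptography package. The payload and key handling are simplified placeholders, not Apple's or any vendor's actual process. The signing key is exercised rarely and can live offline in hardware; every device holds only the public key and verifies with no secret material at all.

```python
# Sketch of a "cold" update-signing key versus a "hot" decryption key.
# Requires the 'cryptography' package; payloads and key storage are
# simplified placeholders, not any vendor's actual process.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Done rarely, on an offline or hardware-protected machine --------
signing_key = Ed25519PrivateKey.generate()   # held in an HSM/TPM in practice
verify_key = signing_key.public_key()        # shipped inside every device

update_blob = b"firmware-image-bytes"        # placeholder payload
signature = signing_key.sign(update_blob)    # one slow, audited operation

# --- Done by every device, using no secret material at all -----------
try:
    verify_key.verify(signature, update_blob)
    print("update accepted")
except InvalidSignature:
    print("update rejected")

# A per-message decryption capability would instead need a private key
# that stays online and is exercised constantly -- the "hotness" being
# discussed above.
```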
>> STEWART BAKER: All right. So I want to make sure we have left enough time, and I'll ask Mia to keep me honest here. Should we be moving to questions from the audience? And if we should, Mia, I'll ask you to begin the process of assembling them.
>> MIA McALLISTER: Yes, we have about 15 minutes left in the session. So for questions, let’s move to the audience, and then we’ll pivot online. I know there are already some in the chat.
>> ANDREW CAMPLING: Is this working? Yes. Hi. Andrew Campling speaking. I'm a trustee of the Internet Watch Foundation. I should firstly say that there is not agreement in civil society on this issue. There are lots of different points of view; that's true of all of the different parts of the multistakeholder community. And there's a lot of frustration, certainly for some of us, that the weaponization of privacy is being used to override the rights of children and other vulnerable groups, completely forgetting that privacy is a qualified right. All of the human rights of children are being transgressed, up to and including their right to life, as we heard just now. So I think we just need a reality check on that.
And also, we shouldn't use encryption interchangeably with security. They're not the same; they're quite different. And when we start to encrypt indicators of compromise and other metadata, we weaken security and, therefore, we completely trash privacy anyway. It's generally a bad practice.
We haven't talked about the scale of the problem, so just to give some non-abstract sense of it: we're looking at about 150 million victims of child sexual violence per annum around the world, and we are seeing, at the moment, over 100 million CSAM images and videos being reported per annum. That's three every second. This is something which has been greatly magnified by the Internet.
This is a tech sector problem, not a societal problem; it's on us to fix it. And end-to-end encrypted messaging apps are widely used to find and share CSAM. There's a large body of research available to back that up, so we know that the messaging apps are a big part of the problem space here. We don't need to backdoor them. Client-side scanning would immediately stop the sharing of known CSAM; it has no impact on privacy if it's known CSAM images, and it certainly doesn't break encryption either.
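Mechanically, "matching known CSAM" means comparing a fingerprint of an outgoing image against a list of fingerprints of already-identified material before the message is encrypted. The sketch below, with invented names and a placeholder hash list, only shows where in the send path such a check sits; real deployments use perceptual rather than exact hashes and far more careful designs, and whether such a check belongs on the user's device is part of the broader disagreement in this session.

```python
# Toy illustration of "known-image" matching on the sending device,
# i.e. before encryption. Real systems use perceptual hashes with
# threshold matching; all names and the hash list are placeholders.
import hashlib

KNOWN_HASHES = {
    # In practice this list is distributed by a hash-sharing body,
    # not hard-coded; this entry is a made-up placeholder.
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_bytes: bytes) -> str:
    """Exact cryptographic hash -- a stand-in for a perceptual hash."""
    return hashlib.sha256(image_bytes).hexdigest()


def send_image(image_bytes: bytes, encrypt_and_send) -> bool:
    """Check the outgoing image against the known list, then hand it to
    the encrypted transport. Returns False if the send was blocked."""
    if fingerprint(image_bytes) in KNOWN_HASHES:
        return False  # blocked and/or reported, depending on policy
    encrypt_and_send(image_bytes)
    return True
```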
And also, simple things like age estimation or verification would at least keep adults off of child spaces and vice versa. So there’s some easy steps we could take here with known technology which would immediately affect this problem.
And then, let's not forget the sector is hugely hypocritical here. A lot of these problems apply in democracies; they don't apply in other types of states. As a trivial example, Apple Private Relay is not available in China because it's illegal in China. The sector cares a lot less about the negative impacts of some of these technologies in democracies, but concedes to the autocratic states, trading compliance for market access. So we've got a sector here which is very hypocritical.
And then finally, Vint Cerf in a session earlier this week said sometimes we do need to pierce the veil of anonymity for law enforcement. And I think that’s absolutely the right approach. Yeah, we can’t treat privacy as an absolute right when that’s wrong in law and has serious consequences.
So I’m not sure there’s a question there, but with the conversation so far, let’s talk about some of the victims. There are fixes here, and some groups are stopping us from making progress when progress could be made tomorrow if there was a willingness to do the easy things.
Thank you.
>> STEWART BAKER: All right. Well, that is sort of a question, in the sense of a long set of propositions followed by the words "Do you agree?" So let me ask Mallory whether she does agree.
There were a lot of ideas there: that anonymity needs to be limited, though I'm not sure that is raised by the encryption debate, because you can have encrypted communications that are fully attributable; that client-side scanning would be a straightforward approach to this; that age limits on access to communications services would be worth doing; and that we're a bit too high on our horse when we say encryption is about privacy, because it certainly also becomes a vector for transmission of malware that wrecks people's security. So it's a double-edged sword.
So Mallory, with those thoughts uppermost, what do you find in that that you can agree with?
>> MALLORY KNODEL: Well, that’s an interesting way of phrasing the question, Stewart. Thank you. It’ll challenge me. But first I wanted to just say I’m particularly frustrated by the fact that the EU Child Protection Regulation has been stalled for years because of the encryption mandate. If that were removed, that whole piece of legislation that has all kinds of aspects of child safety could have moved forward ages ago. The fact that this is the one thing that’s been holding it back, I think should infuriate everyone who cares about child safety. So again, maybe it’s not worth saying these folks have held this issue back, or these folks have held this issue back, because again, what we’re trying to do is come up with a list of requirements and constraints, and that’s going to differ per jurisdiction. That’s going to differ per culture, et cetera. We’re in different places in the world.
I think we can all agree that the promise of the interconnected Internet is that we all come with our own version of that and interconnect; that's the whole idea. One-size-fits-all platforms are not, I don't think, the way forward. I would certainly agree with that.
I think some of the things that have been said accommodate these kinds of other design ideas. But the issue is that backdoors, or whatever you're calling these measures, have been mandated for everyone at scale. So if we can start to chip away at that idea, then I think you get all kinds of different messaging apps that can thrive, with varying degrees of encryption and varying degrees of scanning.
But that you would mandate everyone to do it the same way, and for everyone, those are the problems. And if you look, for example, at the Internet Architecture Board's statement about the EU's mandated backdoors and chat control, they get to the heart of that. It's something Gabriel said before: encryption exists; people are going to use it. Even if you were able to sweep up, say, the largest providers, well, I think you'd really just get WhatsApp, wouldn't you? So you sweep up WhatsApp, and everyone else could kind of do what they want. Then you've just disenfranchised all the WhatsApp users. That would be a fundamental change to the software that everybody downloaded when they downloaded WhatsApp, and you might get migration to the other services that aren't swept up in that piece of legislation, that do provide stronger encryption and don't provide backdoor access, just to game this out.
So I am all for a very plural world in which we have lots and lots of different communications providers. What I don’t think is fair or what we actually want, right, is then requiring them all to work exactly in the same way and requiring them all to have struck the same balance when it comes to user privacy versus content moderation, because different users and different jurisdictions do want a different answer to that.
>> STEWART BAKER: Okay. Mia, do you have more questions, or do you want to go back to other panelists?
>> MIA McALLISTER: Yeah. Dan, I want to bring you in. Are there any questions? There's a lot in the chat, and we see we have someone online from Germany. Any questions you want to address from the chat, Dan?
>> DAN SUTER: Hey, look, Mia, I can see that there are a lot of comments there from Ben and from Andrew. We've obviously heard from Andrew, and equally in relation to Leah. The thing that's really coming over, and obviously we know this, is how difficult this space is. We often hear from industry, "We need to be regulated," but equally then we hear, "Well, then companies are going to leave and go offshore." There's so much that can be done, and we need to move to the place where we are actually doing it. And look, these are thorny issues, "wicked problems," as former Prime Minister Ardern in New Zealand used to say. And that requires people to come into the room and to discuss and understand our commonality, because often there are points here where we do have a common approach. I hear absolutely everything that Andrew just said in this question-and-answer session. And believe you me, as a former defence lawyer and as a prosecutor, I absolutely hear that we should be speaking more about the victim's voice. We really should be. It's so important.
And equally to say, look, from a New Zealand legislation point of view, we need the content. We have a high court ruling that says we cannot prosecute without the evidence of the content in relation to child sexual abuse matters. We have no choice here.
Do we change the legislation and say, well, actually, we can convict people on the basis of metadata? Is that really where we want to go to? I don’t think that’s the case.
And that's why I ask whether regulation needs to push into this space to ensure that we can make children safer online.
And do you know what? We are being pushed into a place of self-reporting, which equally isn't a good space to be in. I can't see my 15-year-old child, and we talked about the sextortion case, is he going to be self-reporting in relation to that case? Should we be pushing the responsibility onto my 15-year-old or other children? Again, I don't think we want to be in that space.
But I’m sure we would also hear that there is agreement in relation to that. So that’s where we need to absolutely come together. But who’s going to lead that? And that’s a big question that is left hanging here, because I can really see the positivity coming out in terms of this panel, but who is going to take the lead? Is it where most of the service providers are located? Is that what’s required? Is it required in terms of a multilateral institution? And we know how that can be particularly difficult. Having taken part in the UN Cybercrime Convention negotiations, it’s not easy, right, in terms of multilateral process, as well.
But we really do need somebody to come to the fore and say, right, we’re going to get the right people in the room. We need the technologists. We need the academics. We need civil society. We need the NGOs. We need governments. And we need to come together to do this, because we do have the victims. We do have people who are dying. We need to move this point on sooner rather than later for all the good reasons that we've all discussed today.
But passing back to you, Mia.
>> MIA McALLISTER: Thank you, Dan.
We have time for one more question in the room. I'll look to this side.
Oh, oh, thank you. Is your hand up? Okay. It looks like no more questions in the room.
Going online, any more questions online? You could just come off mute. All right. Not seeing any.
>> STEWART BAKER: Yes. Well, then we can give the audience back three minutes of their life. (laughs) They can go to break early.
I do think that our panel has done a great job of enlightening us about the nature of the considerations that are driving this debate and why it has been so prolonged and so difficult.
And so I hope the audience will join me in thanking Mallory, Dan, Katie, and Gabriel for their contributions. Thank you.