IGF 2024 - Day 2 - Workshop Room 5 - WS137 Combating Illegal Content With a Multistakeholder Approach

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> AUKE PALS: Good morning from Riyadh, Saudi Arabia, and also welcome to our online participants.

My name is Auke Pals, working for KPMG and moderating this session, Combating Illegal Content with a Multistakeholder Approach.

I'm not doing that alone. We do have great speakers here in the room today and online, and that's for the greater good, because this is quite a sensitive topic and a complex challenge in the Internet governance arena: we're going to discuss today who decides what's allowed online, how to prevent censorship, and how to ensure the openness and freedom of the Internet while regulating content online.

So first of all, I would like to welcome our speakers. In the room today we have Arda Gerkens from the Authority for Online Terrorist Content and Child Pornography Material. Welcome. Mozart Tenorio from the Brazilian telecom regulator. Tim Scott from Roblox. Deepali Tamhane from Meta. And online is Brian Cimbolic, Chief Legal and Policy Officer at Public Interest Registry, the dot org registry.

First of all, Arda, can I ask you to take the floor and give a short introduction on what the Authority for Online Terrorist Content and Child Pornography Material does?

>> ARDA GERKENS: Yeah, sorry about that name. I didn't think of it, but it's awful. Anyway, we're the newly established regulator, and we execute the Terrorist Content Online Regulation from the European Commission and the national regulation to combat child pornographic material. We do basically everything, from detection of this kind of material to sending out removal orders, to giving fines, to handling any appeal that a hosting party or platform might file. Basically, everything.

And because we're a regulator, I think we also look at the landscape as such because we can send out removal orders, as many as we want, but we actually don't want the material to be on there. So we seek co‑operation to find ways to diminish that kind of material online.

>> AUKE PALS: I think you do that by having good collaboration with platforms?

>> ARDA GERKENS: I hope so. (laughs.) No, indeed, we have a sector council. That means for the infrastructure parties in the Netherlands we have regular talks with them to see how we can co-operate, and also to make sure we do the right thing technically, and also to debate with them whether things are possible or not. I think we'll talk about this later: can we block at the DNS level, or do geo-blocking, and things like that. And I have regular talks with the platforms to discuss what they are doing, but I don't speak with the platforms in a council, in a group, because they are competitors. They will not speak freely when they are next to each other. So I talk to them one-on-one.

>> AUKE PALS: But now they're in the room together. Deepali, what are you doing regarding content online?

>> DEEPALI TAMHANE: I work on the Trust and Safety Team, and my job is primarily to help make sure our users feel safe online. Our approach to safety is a multi-pillar approach. Our team looks at three things. One is having the right policies in place to keep our users safe, including policies for tackling illegal content, all those categories of content. Two is features, which are really important for users to exercise choice and control on the platform, whether it's reporting or tools like Limit or Restrict to customize their experience on our platforms. And the last is that our team works a lot on partnerships. When I started out at Meta a decade ago, we had maybe one or two safety partners we worked with in India, for example. But today, globally, we have a network of over 500 safety experts that we work with, and we use this expertise to help inform a lot of the work that we do on the trust and safety side.

>> AUKE PALS: Thanks very much. Mozart, from the Brazilian point of view, how are you regulating content online?

>> MOZART TENORIO: Sure. First of all, thank you for the invitation. We are the telecom regulator, so we don't deal directly with content, but as there's no regulator for content in Brazil, we kind of do what the courts order us to do regarding this issue. So when we receive a court order, which could be from the Supreme Court or the Superior Electoral Court, or maybe a lower court sometimes, we tell the telecom operators to take down those websites. But we cannot do that at the DNS level, obviously, so they do it at the IP level on the networks. That's more or less how we are trying to deal with that. We also have NIC.br in Brazil, which is responsible for DNS and IP addresses, but it's not government- or state-owned or anything like that.

Also, I am on the council there as well, so we also comply with court orders if they go to NIC.br for the DNS level. But mostly, speaking from the regulator's perspective, we tell the operators to comply with the court order and take those websites off the grid.

>> AUKE PALS: Very interesting. I'm very interested to hear others' opinions in the room about that.

Let's move to Tim Scott (microphone going in and out)... How is Roblox involved with content online?

>> TIM SCOTT: I'll use this. Okay. In a very similar manner to Meta, we take a fairly multistakeholder approach to this. I think it goes to the heart of the point of this session. Safety, the safety of our consumers, our users on the platform, is right at the heart of what we do and always has been from our inception 20 years ago. But we approach this from the idea of partnerships: we provide the tools and features for people to use, and we implement the policies and adapt those policies as situations change and as we develop these dialogues with regulators, with governments, et cetera. Rather than come up with something top-down and draw a line under it, we are constantly in that dialogue and understanding where the risks are coming from, so we can continue to make it as safe as possible.

>> AUKE PALS: Thank you very much. Now let's move to Brian. Brian, welcome online. Brian, from the point of the dot org registry, how are you involved in this?

>> BRIAN CIMBOLIC: Yeah, hi, thank you very much. Thank you for having me, particularly remotely. I work for Public Interest Registry. We are the registry operator for the dot org top-level domain as well as a few other mission-driven top-level domains like dot charity and dot foundation.

Dealing with content at the DNS level is difficult and typically it's not the right place for it to be dealt with. We have a pretty robust anti‑abuse programme that focuses primarily on technical harms, things that within the ICANN world fall under the category of DNS abuse, like phishing, malware, botnets, things like that.

However, we recognise that sometimes the scale of harms, when you're dealing with online harms, can be so great that if other actors aren't stepping in, we are prepared to step in. So we have partnerships with, for example, the Internet Watch Foundation to deal with child sexual abuse materials online. We've been working with them for five years, so we have a process in place where we work downstream with registrars and registrants and hosting providers to try to get that content removed, so we don't have to suspend the domain name. The DNS is a really blunt tool to deal with website content. If there's a piece of child sexual abuse material, one image on a file-sharing site, at our level we can't remove that particular piece of content. The only thing we can do is suspend that entire domain name. While that might render that piece of content inaccessible via the domain name system, it also renders the potentially hundreds of thousands of other legitimate or benign pieces of content inaccessible from the DNS. So it's not impossible, but you have to be careful and very deliberate when dealing with online abuses via the DNS.

>> AUKE PALS: Thank you very much. So we've heard some techniques for regulating and removing online content. I would also encourage the room to participate in this discussion, so it's not only this panel talking. If you do have an intervention, stand up, move to one of the areas and we'll give you a mic. We encourage all the interaction as well. If there's no intervention from the audience right now, I would like to give the floor to our panel to give a little overview of regulations online, because there are quite a few regulations dealing with online content, like the DSA and the Terrorist Content Online Regulation. Who can tell me something about the regulation? Who can I give the floor?

>> ARDA GERKENS: I can tell you the Terrorist Content Online Regulation is basically the regulation giving every competent authority in Europe, and every country should have one, the power to send out a removal order to any platform or service that is aiming at customers in Europe. And that connection is established very easily: it could be any language from the European Union, or maybe if you pay for your services in euros, that's already a European connection. Once we give you the removal order, the material needs to be taken down within one hour. One hour takedown time, no questions asked, after which you can contest it. If you don't take it down within one hour, we can fine you for that. If you don't take it down at all, we can fine you even more.

If you are, for instance, based in the Netherlands, and we have some companies in the Netherlands which are really interesting, for instance Reddit and Snapchat, to name two, and Discord is based in the Netherlands and they have legal representatives there: if they have terrorist content more than twice in a year, then we will tell them we have a special interest, and we'll go into talks with them about how not to have this material on their platform. So that's the regulation. I think it's quite obvious that child sexual abuse material is illegal in itself. Basically, we shouldn't even have to debate that. It's just taken down. Other regulation coming up is of course the Digital Services Act. We also have E-Evidence coming up. We have the Video ‑‑ I don't know the abbreviation.

>> AUKE PALS: Are we talking about the DSA? Mozart, would you like to give a little introduction to the DSA?

>> MOZART TENORIO: Maybe it would be better for a European to talk about the DSA, but I can say it's very inspiring for Brazil. We have some pieces of legislation being discussed, but we still don't have any bill approved by both Houses. What is being discussed in Congress now is pretty much inspired by the European experience, the DSA, the DMA, how we can deal with that, but nothing has yet proved good enough for our Congressmen to approve in a final stage. So when we do have something, it's probably going to be inspired by the European legislation. But what I can say I see in those bills proposed in Brazil is that we are trying to aim at content that is undoubtedly harmful or illegal. We're not giving a lot of room for the regulator, whoever that may be, to exercise much judgement about that. So only what is really clearly illegal or harmful is going to be removed a priori, I would say.

>> TIM SCOTT: This is working now. In the UK, we have the Online Safety Act making its way through implementation at the moment. The Illegal Harms Codes have just been published. I think there's 2,500 pages of information and guidance, and that's not an exaggeration. So they're going through quite a forensic approach to this, which I sometimes liken to a DDoS attack in terms of trying to keep up with it. There are clear illegal harms, but then there's also content harmful to children, which is a far greyer area and a lot more open to interpretation in terms of how you react to it.

I guess the focus of the regulatory regime in the UK is on what the evidence base is, but also on what mitigating measures you're taking on your platforms to keep users safe from harm. And that does give, in contrast to what you're describing in Brazil, quite a lot of scope to work with the regulators and say, look, these are the risks we see on our platforms, these are the measures we're taking, and this is the dialogue we're entering into to demonstrate we're coming up to scratch on what you're expecting. I think it's been a really collaborative experience thus far, which hopefully should lead towards meeting a shared goal. Colleagues and I have just come from a meeting with the Digital Cooperation Organization. Again, to that point: what is it you want to achieve as a regulator or government? What do we want to achieve for our users, and what's the shared ground? How do we reach that? I think rather than top-down legislation, "Let's ban X" (I don't mean the company, although ‑‑ you know), let's have a dialogue about how we might achieve that shared aim.

>> AUKE PALS: Deepali?

>> DEEPALI TAMHANE: I thought I could give a little bit of an overview, top-down, of how companies like us deal with content, because we have to do it at a global level and make sure the Internet is not splintered in any way. A lot of our takedown policies actually predate a lot of this legislation, including for content that is illegal and for content that might be harmful but not illegal. Our child protection policies go beyond child sexual abuse, for example. We have an approach of removing this content, and we also have technology that proactively removes it, a lot of the time even before somebody reports it to us. What we have to deal with at a very local level is each country's legislative regime dealing with specific content. So we have geo-blocking policies, and those geo-blocking policies are designed essentially to deal with the laws of that particular country. So as a company, we're dealing with, you know, the laws of Brazil, for example, or the laws of India, relating to very specific content.

Now, in each country there is a slight, or sometimes broader, gap between our community standards and our geo-blocking policies. So I think the challenge really is that there's a category of content that we don't allow on our platform, that we don't think should be on our platform, plus the additional content that countries' regulations require us to act on, and we try to bridge the gap between those two by having dialogues with the regulator. It's not necessarily that we agree all the time, but I think these are really important discussions to have, because content that is illegal in one country will be free speech in another country. And we've dealt with a lot of those situations. The question for us is, what do we do with that piece of content?
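To illustrate the distinction being drawn here, the following is a minimal, hypothetical sketch in Python of a country-scoped visibility decision. The data model and names are illustrative assumptions, not Meta's actual system: content that breaches global community standards is removed everywhere, while content that only breaches one country's law is withheld for viewers in that country only.

    # Hypothetical sketch of a geo-blocking decision; not any platform's actual system.
    from dataclasses import dataclass, field

    @dataclass
    class ContentItem:
        content_id: str
        violates_community_standards: bool = False              # global policy violation
        restricted_countries: set = field(default_factory=set)  # countries with a legal removal basis

    def visibility(item: ContentItem, viewer_country: str) -> str:
        """Decide what a viewer in a given country should see (illustrative only)."""
        if item.violates_community_standards:
            return "removed_globally"        # taken down everywhere
        if viewer_country in item.restricted_countries:
            return "withheld_in_country"     # geo-blocked only where local law requires it
        return "visible"

    # Example: content lawful in most places but restricted by a court order in one country.
    post = ContentItem("post-123", restricted_countries={"BR"})
    print(visibility(post, "BR"))   # withheld_in_country
    print(visibility(post, "NL"))   # visible

The sketch only shows the decision point; the hard part the speakers describe is deciding what goes into each of those two categories and reconciling them across jurisdictions.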

>> AUKE PALS: Arda?

>> ARDA GERKENS: It's really interesting what you're saying, because you're basically saying "we do a lot", and still we find Facebook challenging removal orders for terrorist content. This one actually didn't come from the Netherlands but from another country: Facebook stated the content was glorifying terrorist actions, which is not against Facebook's rules, but it is illegal under European law. So I can still see the gap. I mean, I sometimes find these discussions difficult, because you turn what is, for me, quite clear content into borderline content, because it's different in different jurisdictions. And I do understand that sometimes that can be challenging. If you look, for instance, in Europe, whether it's white supremacist content or jihadist content, that's quite clear to us. If it's somebody from Catalonia speaking on something, then it might be viewed differently. But to then say "we do a lot": I think if the industry, and I'm not just looking at Meta or Roblox, would do more, we wouldn't have this conversation. We wouldn't need legislation. We should be able to say, "This is where we draw the line." For me it was quite surprising to find that Meta, even when there was a removal order sent by a competent authority, felt they didn't have to comply with it.

>> AUKE PALS: There's a gentleman in the room, as well. We'll go over with a microphone. Thank you.

>> ANDREW CAMPLING: Hi. Thank you.

My name is Andrew Campling. Amongst other things, I'm a Trustee for the Internet Watch Foundation, which is focused on the removal of CSAM from the Internet.

Firstly, thank you to PIR for partnering with us, and their entirely voluntary efforts, which are making a big impact.

To challenge that: technology choices are often made by the platform operators. In my view, that gives them plausible deniability for not acting, to the point about not complying with domestic legislation. For example, switching on end-to-end encryption, which hides the existence of CSAM being distributed actively on the platforms. I'll pick on Meta because I know the numbers, but it's equally true of other platforms. On Facebook, about 30 million or so instances of CSAM were being found a year, and then they switched on end-to-end encryption on Facebook Messenger, so of course you can't see the content. It's implausible that it's gone away, but it's a way of avoiding having to do the moderation.

So the terms of service that the platforms have are useful, but they're not actually enforced most of the time. For example, you don't have meaningful age verification. And, you know, surprise, children tell lies about how old they are. Again, I'll pick on Meta, apologies for doing this twice, but the reported accounts show billions of dollars of profit made from children who are too young to use the platform, and that's equally true of the other platforms that don't have appropriate age verification controls or even age estimation controls.

So to get to the point, regulation is absolutely necessary, but only if it's attached to significant consequences for the platforms, and preferably for their senior execs. The example I'll finish with: Telegram was an outlier in this space until their CEO was arrested. They immediately joined the IWF and are now actually searching for and removing CSAM from their channels. That shows that when the senior execs are in jeopardy, they will comply. Maybe that's where we need to start.

>> DEEPALI TAMHANE: In the transparency report that we had for this quarter, I think, off the top of my head, for CSAM particularly we would have removed more than 7 million pieces of content. For bullying and harassment, also 7-plus million pieces of content. For suicide and self-injury, more than 5 million pieces of content. And that's just this quarter. We often proactively remove this content even before somebody reports it to us. For CSAM, we probably report the highest volume of any company every year. I don't know what the number is this year.

But I don't think that that suggests that we don't do anything. I think that we do do a lot in terms of the content that we remove.

In terms of end-to-end encryption, I think the suggestion here is that end-to-end encryption means there's no safety. We actually work to ensure there are safety measures in place with end-to-end encryption. From the safety side of things, we designed a Prevent, Detect, Remove response for end-to-end encryption, which says let's think about preventing these interactions in the first place. That's one of the approaches. That's why we put really strict messaging restrictions on Messenger and Instagram, so a young person is not able to be messaged by an adult, or by a teen they are not connected to or don't follow.

Also, end‑to‑end encryption doesn't mean that your content can't be reported. It means it can be reported and we can take certain action.

Also, in terms of public surfaces, we can still use our technology across the public surfaces of end-to-end encrypted services, and we do that as well, and we report content that we find.

If the conversation is about whether end-to-end encryption is bad and whether we should not have it, I think there are a lot of people, and I don't know if there are any privacy advocates in the room, who would disagree that it's a bad thing.

>> AUKE PALS: I'm curious. Do we have them in the room? Do we have any in the room?

>> DEEPALI TAMHANE: But I think that's a different conversation, and I think that's a debate we've had in many countries, but I don't think any country has decided to have a regulation that bans end-to-end encryption, because creating a back door for one means creating a back door for everybody. I think there are a lot of considerations that we need to take into account, but that's a separate and important discussion.

To my point, just because a service is end‑to‑end encrypted doesn't mean that we can't have safety mitigations. We can't see the content, but we can do a lot.

>> AUKE PALS: Thank you very much. I also put a question on the screen.

So I would encourage everyone in the room to participate and log in on the website Menti.com, code 773‑6669, to answer the question, in what way can we shape collaboration between regulators and industry without regulators losing their independency?

We have some regulators in the room. While we collect responses, I'm also curious on the perspective of the regulator. Who can I give the floor?

>> MOZART TENORIO: I think I can talk a little bit as a regulator.

But I would also like to share the experience in Brazil. As I said, we don't have a regulator for the DNS or IPs, but we have a multi-sectoral, multistakeholder entity which deals with that, and the regulator is on the board of that entity. So we can interact with them and come to reasonable solutions together. In Brazil, as in the UK, the regulator has panels that include civil society, consumers, the private sector, other government branches, et cetera.

So they can constantly give feedback about what we're doing.

And finally, they can even challenge our decisions in court, if they think it's appropriate.

So that's a good kind of checks and balances to involve all society, all the sectors, and still keeping the regulator independent. That's something I would like to share.

>> ARDA GERKENS: I think I do find this very challenging because, indeed, as a regulator you should keep your eye on the ball. Right? So what we want to do in my organisation is make sure that, in the end, the Internet is cleansed of terrorist content and child sexual abuse material online.

But we also realize we cannot do this without collaboration with every party involved in this, because you will always have bad actors, and people who put this material out there on purpose. So it's a fight we need to fight together. But I do find it difficult. First of all, there's a lot of debate in my country about the possibility of geo-blocking or doing anything at the DNS level.

So you have to have a debate on that.

And the other thing is that, in the end, if I talk to the platforms, I mean, let's be honest, you're big companies. Right? You're in here for profit, of course, with a nice tool maybe, but you're in here to make profit. So it's also going to be challenging not to be too much led by the information given by the companies. But if you look at the debate on end-to-end encryption, if you're not very well informed by every stakeholder on the Internet, then you might propose things like breaking end-to-end encryption, which I think is really not good for the safety of children online either. So yeah, it's very hard to find a balance. But I do think it's very much needed, as a regulator, to do this closely together. And so, yeah, I would really be interested to see what the public has to say, what they see as a challenge for us as a regulator, but also maybe as chances for us.

>> AUKE PALS: Yes, indeed. We collected some responses. I'm quite curious about the response "clarity of responsibilities within the ecosystem". Who gave that answer in the room?

No one in the room? Maybe online? Any shy people?

>> SPEAKER: Thank you. We're working on a public private partnership on content moderation. This is what we really need. We need clarity of responsibilities because it's a complex ecosystem with lots of responsibilities. We need to talk about where responsibilities lie and how to actually check on these responsibilities, so they work out the way they should.

>> AUKE PALS: Anyone want to say further on that? Otherwise, I'll take another one.

>> AUDIENCE: Hello, everyone. (?) from the Brazilian association of Internet providers. Is this working?

>> AUKE PALS: It is working.

>> AUDIENCE: I want to reiterate and give clarity on the case of Brazil as well. We've been dealing with a lot of issues. The regulator sometimes asks us to do things that are outside of its control, and sometimes the ISPs have to do some DNS blocking or block services, like X, and sometimes they do not have the capacity to do that in the speedy manner the regulator would like. So we have this back and forth. It's always a challenge. And clarity of responsibilities, I think, would be the most important thing.

>> AUKE PALS: Thank you very much.

Go ahead, Deepali.

>> DEEPALI TAMHANE: I wanted to say I think it's really important for companies like us to have a continuing dialogue with regulators, especially as they're in the process of legislating or, more importantly, passing rules. For example, I know our teams have done a number of deep dives with the Ofcom teams. We've met them a lot and been consulted on a lot of rules they're passing. I think those dialogues are really important, because companies like us can talk about the work we're already doing and also understand the intention of those rules, and that way we can get to the substance of the matter.

So I think that sort of dialogue is really important, and I think that should definitely be encouraged.

>> AUKE PALS: Thank you.

>> MOZART TENORIO: Just to add a little bit to what Deepali just said: I agree with her. I think it's really important to the industry. But sometimes other sectors of society see that as some kind of loss or threat to the independence of the regulator.

But I agree with her.

But to add to what Ian said about X, and maybe bring Brian into this as well: the X issue in Brazil was a very interesting one, because the court order came and it was about taking the service down for a certain time, until they complied with the laws in Brazil. But the domain name was outside of Brazilian jurisdiction, outside of Brazilian sovereignty; it was not a dot BR domain. So we had to go through the ISPs' and telecom operators' networks to make it comply with this court order. So it's a little bit trickier. It's harder to do. We cannot do that within our sovereignty alone. So I believe we should be talking internationally about that.

And maybe establishing some agreements so that we at least share court orders and make them known to each other, things like that. I know Brian can talk about dot org. We can talk about dot BR, which is what we call a ccTLD, a country code top-level domain. But if it's another country code, we cannot do much in Brazil. So I believe we really have to discuss that internationally, like we're doing here now.

>> AUKE PALS: Yeah. Thank you. Brian, do you think that content moderation should be a joint effort?

>> BRIAN CIMBOLIC: Yeah, I think that I come at things from the voluntary practice model more than the regulatory model. Again, we're sort of different, especially from a Meta or a hosting platform. We don't have content. We're infrastructure. We point via the DNS. That doesn't mean there's not a role for us in this. The gentleman from the Internet Watch Foundation mentioned a sponsorship, or partnership really, and I just put in the chat the programme that PIR has put in place that any registry, be it a ccTLD like dot BR or a gTLD like dot org, can take advantage of. We're a not-for-profit; PIR is a charity, which makes us slightly different from most gTLD registry operators. So we're sponsoring this as part of our nonprofit mission, so that any registry operator can work with the IWF, the Internet Watch Foundation, to receive notifications when child sexual abuse material is found in their TLDs. They can also take advantage of programmes to block or prevent the registration of domain names that are known to be used for CSAM in other TLDs.

So I think there's a lot of room for sort of voluntary practices and collaboration between registries and regulators and organizations known as trusted notifiers. That's the term we use in the DNS space where you work with an expert organisation like an Internet Watch Foundation because we as registries don't have the expertise or the tools or even the content to go out and look for and try to find CSAM. In fact, it's illegal for us to do so. So it's pretty essential that we work with organisations like the Internet Watch Foundation or like other trusted notifiers, like the Red Cross for identifying fraudulent fund‑raising sites.

So, working across industry and NGOs, I think there's a lot of room for improvement across platforms, registries, and registrars, and that's something we're actively exploring.

>> AUKE PALS: And that's also about shared resources. I've seen the response from the audience on shared resources, materials and databases. Who gave that answer?

>> AUDIENCE: Basically, I was the one answering that question. I can imagine that technical resources you can share in a safe way are maybe one of the easiest things to set up while keeping the independence of your regulators. So that's why I gave that answer.

>> AUKE PALS: Yeah, no, no, that's clear. Arda?

>> ARDA GERKENS: I think that we have a problem here. I also see more remarks on sharing of information in public-private partnerships. I do know, for instance, that for the platforms it's difficult to accept hash databases coming from a governmental organisation, because that would basically, indirectly, mean that a government is telling them what to host and what not to host on their platforms. Right? On the other hand, I do think we need to solve this problem, because in the coming years we will build up a lot of information and databases on either child sexual abuse material or terrorist content. I would very much be interested in having the databases that you have, because I'm pretty sure you have some, and I think you would be very interested in our databases, too. So how can we solve this problem?

>> DEEPALI TAMHANE: I think that, you know, we are required as a U.S. company to report to NCMEC, which is where these considerations come from. And we've actually worked with NCMEC in the past to try to address issues, not this one specifically, so I'm just broaching that as an idea of something that maybe we can talk to NCMEC about. One of the things we have done: we report to them, and NCMEC works with law enforcement across the globe. We've worked with countries on a downloader tool that helps to download this content in a very safe way, because when you're sending these reports across, it's important that it's done in a secure and private way. We also work with the Tech Coalition, which is an alliance of tech companies, to go beyond just the content. The Tech Coalition runs a programme where participating companies can share signals about accounts and behaviour.

So this goes beyond content, because we know that predators don't stay on one platform. Right? I'm talking specifically about CSAM here. They move from platform to platform. So what Project Lantern does is enable us to share those signals, and participating companies can receive them and do investigations. In the pilot phase of this, we received a number of links from Mega which violated their child safety policies ‑‑ (someone speaking off microphone).

Exactly, but what I'm trying to say is there are ways available to address CSAM and go beyond CSAM and address signals and behaviours as well and be able to address this issue at a holistic level.

But in terms of specifically to your point, I don't know if you've had any discussions with NCMEC. But we're happy to ‑‑

>> ARDA GERKENS: I also think that NCMEC cannot solve the problem. This is what really hampers us: if we want to work together, you cannot just receive information from governmental organisations with the question whether you want to take that content down. I do acknowledge that Project Lantern is a perfect project that helps identify perpetrators sending this material on your platforms. It's very good that you talk amongst each other. I'm very happy that you're doing that. But still, this really hampers us in stopping information from spreading rapidly online, whilst we really want to stop that. Look at what we're doing on terrorist content, where we have the incident response protocol after the Christchurch shooting, where the content was disseminated online rapidly and in different kinds of formats. It would be so great if we could do that also for child sexual abuse material. If we look at the figures, and that's something you know very well, a lot of these images are duplicates and not unique. Some of them are going viral. So we would like to cooperate and see how we can stop it. As long as we cannot share this information among each other, it's not very helpful for the kids.

>> AUKE PALS: A remark from the room.

>> ANDREW CAMPLING: Andrew Campling again, with two brief comments. First, going back to end-to-end encryption: one easy solution that doesn't break encryption or breach privacy is that any of the platforms, if they chose to, could scan content that's uploaded, speaking specifically of image files, to see if it contains known CSAM before the messages are encrypted and sent. That doesn't break encryption or privacy, and it would immediately impact the scale of the problem. To quantify: we're looking at roughly three new victims of CSAM a second, over 300 million a year, which is a scary number.
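For readers, a minimal sketch of the kind of known-image check described above, run on an attachment before it is encrypted and sent. This is a simplified assumption, not any platform's actual pipeline: real deployments match perceptual hashes (such as PhotoDNA) against lists maintained by organisations like the IWF or NCMEC, whereas the plain SHA-256 below only matches byte-identical copies.

    # Hypothetical sketch: check an image against a set of known hashes before encryption.
    import hashlib

    # Placeholder; in practice this would be a curated hash list from a trusted organisation.
    KNOWN_BAD_HASHES: set[str] = set()

    def is_known_match(image_bytes: bytes) -> bool:
        digest = hashlib.sha256(image_bytes).hexdigest()
        return digest in KNOWN_BAD_HASHES

    def send_image(image_bytes: bytes, encrypt_and_send) -> bool:
        """Scan the attachment before encrypting; block matches, otherwise send."""
        if is_known_match(image_bytes):
            return False  # blocked; a real system would also file a report
        encrypt_and_send(image_bytes)
        return True

Because the check happens on the sender's device before encryption, the message contents remain end-to-end encrypted in transit, which is the trade-off the speaker is pointing to; whether such client-side checks are acceptable is exactly the privacy debate raised elsewhere in this session.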

Then, just a second very brief comment. It's worth bearing in mind that the tech industry as a whole is actively changing some of the underlying Internet standards, again arguably because of privacy, in ways that bypass many of the existing protections. So it will be quite hard in the future to block bad actors, for example these Kiwi Farms types of sites, or to know that you have effective parental controls, because some of the changes to the Internet standards will mean that parental controls don't work anymore.

So I think that's an area where maybe regulators need to challenge the behaviour of some of the tech companies and really question whether their motivation really helps, because it will break even the existing but weak enforcement mechanisms. Thank you.

>> AUKE PALS: Also, someone next to you. You did have an opinion. I saw that. No? No, please.

>> AUDIENCE: No.

>> AUKE PALS: Okay.

>> ROELOF MEIJER: Thanks. Roelof Meijer from SIDN. I am still chewing on the question a little bit. I would have understood "in what way can we shape collaboration in order for it to be successful", but on losing the independence: sometimes regulators claim they cannot collaborate because they would lose their independence, and they say, "You just do what we tell you to do," and the industry says, "Okay, we'll do exactly that, at the latest possible moment."

So I think, secondly, we need clarity on responsibilities, and I think it's very important also that we have clarity on the ultimate purpose of the collaboration.

I think it also sometimes goes wrong: the regulator wants you to do what they tell you, and the industry just wants to spend as little money and effort as possible, and to make as much profit as possible.

I think there's a very strong interdependency. Regulators can never be successful if they don't collaborate with the industry, because they will have laws, but they will not prevent what they forbid. And in the end, industry is very dependent on regulators, because they need a licence to operate. I think for quite some time they will think they don't need that, but in the end they will. So, yeah, that's my two cents on this.

>> AUKE PALS: Thank you very much, Roelof. In the room, as well? State your name and organization.

>> MAURICIO HERNANDEZ: Mauricio Hernandez from Mexico. I wanted to share with you ‑‑ sorry for my cough ‑‑ that part of our duties as industry, academia and regulators is to be aware of bad practices we may be engaging in. From industry, it's very common; you can try right now with one of our domain providers to buy a domain with the words "child sex", and they are available. There are no limits. Just pay. So I think this could be a good beginning, just an example, of what we need to do as industry to create balances or good practices. In some countries in my region, LATAM, like Brazil and Chile, they are now developing their privacy laws to create guidelines and good practices so as not to give these options to customers. This is one of the back doors that the lady mentioned. They are open worldwide, creating these areas of opportunity to upload illegal content. That's my comment.

>> AUKE PALS: Thank you. Brian, do you want to reflect on that?

>> BRIAN CIMBOLIC: Yeah, actually, I was about to raise my hand to do just that. That's one of the programmes that we have in place. It doesn't, for example, take a term like "child sexual" something and block that term. What it does is this: the IWF has identified domain names that have been registered in TLDs and dedicated to child sexual abuse material. Just for the sake of an example, let's say it's badCSAMdomain dot something. The registry operator suspends that one. Then badCSAMdomain dot another TLD pops up. That domain name has hopped. Once that domain has hopped twice, the IWF adds it to a list it maintains, and now any registry can receive that list from the IWF under this sponsorship with us, so that it can prevent registration of that label in its TLD. So it really helps to protect the TLD from being abused, and also helps to disrupt these commercial brands. Unfortunately, there's a sort of brand recognition with known peddlers of CSAM. So if badCSAMdomain dot org is suspended and it hops to badCSAMdomain dot example, then the consumers of that recognise that brand. So it doesn't solve CSAM online, but it helps introduce friction and makes it harder for the bad guys to continue their brand online.

But coming back to the earlier point: why don't you just block anything that says "CSAM" or whatever? We've actually explored this in other discussions with regulators, around opioids and narcotics online. An interesting thing we ran into is that they wanted to block known terms for opioids or narcotics, and we were interested in that. And then they provided us the list, and it included things like "lemonade". Anything with the word "lemonade" they wanted banned online, not recognising that there might be some really legitimate uses of the word lemonade.

So, to the core of the question, there has to be a good-faith feedback loop between industry and regulators. We want to give regulators the tools they need to produce good and educated regulation, and having that feedback mechanism is key, because you don't want to inadvertently block, for example, any registration with the word "lemonade" in it. There are lots of generic terms that are street names for drugs, and that's bad, but those street names also have legitimate uses online, and you don't want to inadvertently hamper speech online.
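As a hypothetical sketch of the two mechanisms Brian contrasts, the snippet below compares an exact-match check against a curated list of domain labels that have "hopped" across TLDs with a naive keyword ban that also catches legitimate names (the "lemonade" problem). The list contents and function names are illustrative assumptions, not PIR's or the IWF's actual interface.

    # Hypothetical registry-side checks at registration time.
    HOPPED_LABELS = {"bad-csam-domain"}   # curated exact labels from a trusted-notifier feed (placeholder)
    NAIVE_KEYWORDS = {"lemonade"}         # example of an over-broad keyword list

    def blocked_by_hop_list(label: str) -> bool:
        # Exact match against curated labels: high precision, little collateral damage.
        return label.lower() in HOPPED_LABELS

    def blocked_by_keyword(label: str) -> bool:
        # Substring matching: also catches perfectly legitimate names.
        return any(word in label.lower() for word in NAIVE_KEYWORDS)

    print(blocked_by_hop_list("bad-csam-domain"))    # True: known hopped label
    print(blocked_by_keyword("grandmas-lemonade"))   # True: false positive on a benign name

The contrast illustrates why the feedback loop matters: a curated, exact list from an expert organisation introduces friction for repeat offenders, while an unvetted keyword list blocks benign registrations along with the bad ones.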

>> MOZART TENORIO: I would just like to add two little pieces of information about those issues.

In Brazil, we do have a regulation to avoid this kind of name in domains, but it doesn't always work. It has its flaws. Recently, during a judgement in the Supreme Court in Brazil, the justice asked whether someone could go online and register the domain name deathtodemocracy.com.br. He shouldn't have been able to do that, but some guy saw it because it was on TV, and he did it; he registered that domain. And under our regulation, we could suspend his domain within a few hours, without a court order in this case. But not everything always works.

And just to add, on this independence of the regulator: one very recent piece of legislation proposed in Brazil talked about self-regulation of the industry. The industry could create an entity to regulate itself, and that entity would be overseen by the state regulator. So it's possibly a kind of midway measure to deal with the regulator's inability to predict everything, so the industry could be a little bit more comfortable.

>> AUKE PALS: Thank you very much. I've put a new question on the screen, so you can log on again at Menti.com and answer it. And while you do that, I will give the floor to Brian to answer this question.

>> BRIAN CIMBOLIC: Yeah. Thank you very much. So the question is what role technical and infrastructure actors can play in combating illegal material online. I've sort of already covered the drawbacks, the technical impediments to dealing with illegal material online through the DNS. Think of Craigslist in the U.S.; it's based on a dot org. In a lot of other countries, it's based on different country codes. But if somebody uploads CSAM to Craigslist, is the right solution for me as the registry to suspend craigslist.org, rendering all those millions of pieces of content inaccessible? I think we can all agree the answer is no. But that doesn't mean we do nothing. That's why it's incumbent on infrastructure actors, when the scale of harms is so great, to work to notify downstream.

I do want to draw a distinction. If you have a site dedicated to something like CSAM or a threat to human safety, which we have come across, to me there's no issue: the registry should step in right away and suspend that.

These principles are codified in a document that came out in 2019 called the Framework to Address Abuse.

Originally, 11 registries and registrars signed on to this document. Now it's more than 50 different registries and registrars. Basically, it has two principles. The first is that registries and registrars must step in when faced with DNS abuse, the technical abuses: phishing, malware, botnets, et cetera.

But then it accepts the premise that, generally speaking, content is best dealt with at the level at which it can be directly removed: the hosting provider, the platform, et cetera, because of these issues of collateral damage.

But in instances of CSAM, threats to human safety, human trafficking, and opioids and narcotics online, the registries and registrars should step in and do something to try to disrupt those sites, whether that's referring the matter to the registrant and demanding they remove the content, or threatening suspension of the domain name. In those instances the burden shifts, and I think it does become incumbent on the registry or registrar to step in and do something to disrupt that content. Again, that doesn't always mean suspending the domain name, because of the issues of collateral damage, but to try to do something.

>> AUKE PALS: Thank you. Any reflections?

>> ARDA GERKENS: I think it's very interesting. If you look at the Terrorist Content Online Regulation, we can only send out removal orders to the party that has published the information on behalf of another actor. So basically, 99% of the time that's platforms or social media, and 1% of the time it might be a hosting company for a website. Whereas under the child sexual abuse material legislation we can actually go up the line. The problem that we encounter in the Netherlands is that we have some rogue hosters who apparently host a lot of this material, and they basically always say, yeah, there's nothing we can do about it, because it's managed hosting, we don't know who our customer is, we cannot reach them, so there's no way we can act.

So this is something where hopefully the legislation, which is quite new, so I don't know how effective it will be, will let us somewhere down the line say, "Well, you know, whether you know your customer or not should not be our problem. That's your problem. You need to do something. If you cannot find your customer, I'm going to come to you, because you are the one giving the customer the possibility to host this kind of material."

But it is something we are still struggling with, and we are also seeking collaboration with the infrastructure parties to see in what way we can be effective without just taking down the whole of the Internet or, I don't know, 30 other websites, when you just want to take down that one image. For us, it's difficult. And I'm really seeking co-operation here.

>> AUKE PALS: So I'm also curious about the questions that have been coming from the audience. So I'm curious, who made the comment about the Clinton Administration approach? Is that online? No? No one? Or shy people?

Then let's move on.

Then I'm curious about what you've been writing now.

>> ABHILASH NARAYAN: If you must know, I was rudely texting somebody. I wasn't writing down anything on that.

(Laughter)

But, I'm Abhilash Narayan from the University of Exeter in the UK.

Much of what I wanted to say has been covered here, but I wanted to go back to an earlier point. I have a little to add on age assurance and on what regulators can do. We did a pan-European study a couple of years ago for the European Commission where we looked at the state of play for age assurance technologies in terms of how they help children. And we found that the reality on the ground was that there was hardly anything out there. One of the reasons for that was that industry really didn't know what standards to use, and regulators provided very little guidance; that came up quite a lot.

One of the recommendations we made, and this is going back two years, in the UK, was that regulators need to help industry by issuing guidance and standards so that they can comply with the laws. It's not enough for legislation to stipulate that this needs to be done; companies actually need a bit of help.

I'm not talking only about big companies like Meta, who have the resources, but also smaller companies who have obligations to comply not just with media services directives but also with rules on, say, online goods that are inappropriate for children.

Thank you.

>> ARDA GERKENS: That's really a very good point. There are several initiatives out there. You were talking about the Tech Coalition, which is a coalition of platforms to combat child sexual abuse online. There's also Tech Against Terrorism and GIFCT, which are both organisations that help platforms, or where platforms co-operate, to combat terrorist content. And certainly for the smaller companies this legislation is really hard; there is so much legislation coming at them. I really agree with you. What we do in our organisation is look at what other initiatives are out there and help give companies guidance on where they can find them. So much has already been done, but it needs to be implemented. A very good point.

>> AUKE PALS: Thank you. Also from the back of the room?

>> DAVID McAULEY: Can you hear me? I'm David McAuley. Like Brian, I work for an Internet registry, VeriSign. I want to answer your question, although I'm not on Menti right now. A couple of things I think technical companies can do is reach out to two target groups. One is government regulators: talk to them, try to explain what we do, how esoteric it is, all the implications of taking action and what that means; and vice versa, hear the same concerns from the government, on terrorism especially, things like that.

The other is to reach out to groups like this. I think participating in a session like this answers your first question: how can we share information and collaborate without regulators losing their independence? So I would encourage organisations that hold sessions like this to do more of them. I took part in one on DNS abuse about a month ago, and it was an eye-opening session where Internet registrars spoke about the implications of taking sweeping actions and the need to be circumspect about what orders might come. Very informative stuff. I want to thank you for the session, and that's my way of doing it, trying to answer two questions at once. Thanks.

>> AUKE PALS: Thank you very much.

Any other comments from the room?

If not, what does industry think of the remark that was just made?

>> TIM SCOTT: I haven't said anything for a minute. On the last comment, exactly: in the UK, I've worked in and around the games industry for about 20 years, in the UK government, then in the trade association, and now for Roblox.

Some of the initiatives we've been involved with have taken that multistakeholder approach: working with the NCA, the National Crime Agency in the UK, working with Ofcom, but also working with trade bodies and then with the individual companies in a forum to share best practice, to tackle problems, to look at whether there's a white-label solution, particularly in the area of CSAM or CSEA. And it's been incredibly effective.

The problem is maintaining that emphasis as people come and go in roles. You will have someone incredibly proactive within, say, a government organisation who is promoted or retires or goes somewhere else, and they kind of take that knowledge and that enthusiasm and willingness to collaborate with them out of the building. It's quite difficult for industry to establish those government relations in a really meaningful way.

So I echo the sentiment of the gentleman in the corner that this forum, the IGF, is absolutely a great way of doing that. We have people from around the world here talking about shared issues.

The other thing I would say is that they're truly shared issues. I think we've almost got the wrong people in the room if you want to tackle some of these problems, because we do care. Right? If our platform becomes synonymous with problems, people won't be on our platform. It won't be economically viable for us to run it. There are other people who are less concerned about these sorts of things, and getting at those people is the real challenge.

>> AUKE PALS: Arda?

>> ARDA GERKENS: I think you're selling yourself short here, I don't know if that's an English saying, but the fact that you're here and stepping up means that the others need to comply more too. If nobody in this room were complying, the job would be much harder. Also, for me as a regulator, being able to point out, if I ever have to go to a judge, that there are policies out there which are common within the industry, that really helps me to fight that battle.

So I'm really happy that we're here in this room together.

Also, I think the discussion on DNS abuse, to me, that's really a new tone that we're hearing. For many, many years we didn't want to interfere at the infrastructure level. We always said, no, no, that's content; we cannot touch content, and we can't mess with the infrastructure.

So I'm really happy we're having this discussion now and looking at the possibilities here, because we need to do more than just looking at platforms, too.

>> MOZART TENORIO: I would add to what Tim has just said: maybe part of the answer to your question is that we need more stability in public service staff, something like that.

>> TIM SCOTT: If not necessarily in the staff, then in the policy approach. I mean, when somebody leaves, they shouldn't take the proactive approach with them. That should be part of the role.

>> MOZART TENORIO: Part of the answer to that is in the first question, the independence. Then we don't change when governments change. If a regulator is truly independent, its staff will carry on even when the president or prime minister changes. So we can keep on doing our job in a consistent way over time. So the independence of the regulator is part of this answer, and it's really important for us.

You all mentioned that we do care. In France, the owner of Telegram was arrested. Is that a way to deal with non-cooperative parties?

>> DEEPALI TAMHANE: (Microphone going in and out)... working on the Safety Team and working with safety organisations, and there's a huge role they play in helping to combat a lot of harmful content online. Like Project Lantern, which is run by the Tech Coalition, an alliance of tech companies. But we also have equivalent things like StopNCII.org and Take It Down, which are run by the Revenge Porn Helpline and by NCMEC respectively, and which allow participating companies to receive hashes from them so we're able to remove that content if it is uploaded. Safety organisations that we work with also help us design certain interventions and communications that may be helpful for users as resources. So I think they're definitely a very important part of the ecosystem. And for the Telegram enforcement ‑‑ should I hand it to you?

>> MOZART TENORIO: Just to say something really quick, because that question also carries something else: co-operation clearly isn't even. That's why we need legislation. We need to set a bar for everybody that's the lowest acceptable level of co-operation. We have some companies that are very co-operative and others that are not. So that's one of the reasons we really need regulation.

>> TIM SCOTT: Just to finish on that point, the risk of doing that is that you lower the bar. If you codify it and make it the minimum standard, people will say, "That's all I need to do," put it in that box and move on. We don't get the innovation. I would not happily change places with you; although Brazil probably has nicer weather.

>> MOZART TENORIO: Exactly. That's why it's not a straight answer. It's a very tricky thing to do. So we have to do it the best way possible. You're right.

>> ARDA GERKENS: The interesting thing is that you said Telegram was not complying. They were already complying with the Terrorist Content Online Regulation. They were very compliant for our organisation and others in Europe too, but they really just don't do enough, because I can still see channels with names like terrorists, Christchurch shooting, or other names which clearly point to the type of material probably found in those chat rooms. Even now that he has been arrested, they may take a little step. I don't know whether, in the end, that will solve the problem.

Again, here I do think it would be nice if the industry itself could also put more pressure on Telegram to be compliant, because I sometimes feel that we have the good actors here at the table and the bad actors are not at the table. I don't agree with that split. I think you're one group, and you should also talk amongst each other, and if you have bad apples in the room you could help to get them out.

>> ROELOF MEIJER: Roelof Meijer, SIDN, again. SIDN runs the dot NL country code top-level domain. My answer is the bottom right one. I think technical and infrastructure actors can do a lot, but all those things cost money. We are a not-for-profit, so for us it's relatively easy, although the costs still have to be justifiable. But for a commercial company to spend a really significant amount of money on combating crime, online or offline by the way, it has to be part of the strategy, the culture, the company values. And I think that is very often the problem.

It is seen as a cost.

And as soon as you arrest the CEO, suddenly these are justifiable costs, because by spending them you will prevent a second arrest of the CEO.

So I'm making it a bit ridiculous, but I think that's very often the point: shareholders will complain, because as long as these costs are not enforced, spending them is a voluntary choice. So our biggest challenge is to make sure that, in the end, fighting abuse is part of the culture and the values of these very large companies, too.

>> ANDREW CAMPLING: I completely agree with all of that. I think the point of arresting the CEO or other senior execs is that there have to be meaningful consequences for noncompliance, and increasingly, for some of the big players, the levels of fines imposed are just not meaningful. Even a fine of $12 billion for Apple, a $3 trillion company, is not really material. Whereas arresting the CEO focuses minds and does drive action.

And then briefly, the other point: yes, legislative action to enforce compliance might, you know, lower the bar for some. But the regulation doesn't have to be static. There's no reason why you can't raise the bar every year, so that you keep pushing certain companies to do more and try harder. I'm sure regulators will be entirely up for doing that.

And having a general duty of care, for example, absolutely raises the bar, because you're then wide open to challenge if it can be shown that there are widespread abuses, irrespective of whatever measures you've taken.

>> AUKE PALS: Thank you.

>> TIM SCOTT: I mean, we can keep trading this back and forth all day, Andrew, because that creates barriers to entering the market, which then stifles innovation, which causes problems. This will go back and forth, but I take your point.

>> DEEPALI TAMHANE: The only caution I have when we talk about arresting CEOs is that in a lot of countries, a lot of political speech is also illegal. And I think we have to be cognizant of that. A lot of companies, including ours, face pressure from governments to remove content which we would not consider illegal but would consider fair political discourse.

>> AUKE PALS: Comments from the room?

>> AUDIENCE: Hello again. Not speaking personally or for my organisation, but trying to be a bit of a devil's advocate here. Going back to the point of clear responsibilities: aren't we putting an oversized responsibility on the platforms to solve an issue which, as a society, we haven't been able to solve in such a long time? Are we doing enough as a society to solve the problem at its roots, and not only at the point where the content spreads? Maybe we're putting too much on the platforms and not doing enough ourselves? Because the discourse has been us versus them, and perhaps there's not enough collaboration, not enough of a multistakeholder approach with everybody sitting at the table and trying to find the root causes.

>> AUKE PALS: Thank you. I did see in the room, as well, comments on public/private partnerships. Anybody have a comment about that?

>> ARDA GERKENS: I totally agree with you. I do believe that, for many of the problems we have, people are looking only at the end of the line. How can we look into encrypted environments to stop the spreading of child sexual abuse material? Well, maybe we should go back and address why this material is even out there. Because much of it doesn't even come from abuse; it's voluntarily shared images that were leaked, or AI‑generated material, or whatever, and still a form of abuse. So I think there should be much more attention in policy for the beginning of the chain, and we should see how we can solve this as a society. But I also think that what we really need is more co‑operation. I'm looking at the regulators, for instance. My organisation is part of the Global Online Safety Regulators Network, which now has eight regulators. It's very interesting to see that some of the platforms, not the ones at the table today, basically think that we don't talk to each other. So they tell one regulator one thing, and then we find out that they do something different elsewhere. So that network helps. But I think we need to do that more. And also, you're talking about the height of the fines. We can alter that, you know? We can make it 10, 20, 30 per cent, whatever.

These are the bigger companies you're talking about, big tech. But we see a lot of problems with the smaller tech, the smaller platforms we're not talking about here, like Gab or Team, you name them. These smaller platforms really don't care that much about regulation. For us, it's going to be difficult to collect those fines, even if we can impose them. We're going to need to co‑operate between countries to make sure that, if I cannot collect a fine, I can go to your country and you can help me collect it.

So the Internet is global. So the solution needs to be global. And so the co‑operation needs to be global to even be able to tackle the problems that we have.

>> AUKE PALS: Thank you. Meanwhile, while you prepare your answer, I've put a new question on the screen. Please, everyone, grab your phones, join Menti.com and use the code. Deepali?

>> DEEPALI TAMHANE: The point you made is really, really important, and when we make it, I don't think it's always taken on board. I think it's really important to have that conversation, and one example I will give is that, as a platform, issuing threats is something we remove as a violation of our community standards, and I think multiple platforms do that. But we still see that people think it's normal discourse to issue threats online. So what can we do as a society, rather than just as a platform, to address that? Is that the role of schools, educators and parents, as well as platforms? I think that's a conversation that's not always had.

The second is that we report millions of pieces of content to NCMEC, which go to law enforcement authorities. We also don't have visibility into what prosecutions take place based on the reports we're giving. I think that's really important too, because it's the last part of the chain: to understand whether the reports we're giving are useful, or whether we're just going in cycles, removing the content while, at the end of the day, no criminal action is taken against actual predators.

>> AUKE PALS: Thank you. Let's move on.

In the last 15 minutes, I want to answer the question, how do we prevent legislation that threatens the open and free Internet whilst addressing illegal and harmful content? I do see some people actively typing in the room as well. I'll just give someone the floor who I see is not actively typing. Sir, can I give you the floor?

>> AUDIENCE: That's my comment, the third one. I think we need to preserve privacy and so on, but at the same time we need clear procedures to be able to know who put harmful content up. So we need to be able to reach the channels or whatever. So I think this is needed.

>> AUKE PALS: Thank you. The gentleman next to you, did you also put something down? No? Okay.

Did you?

And any reflections on the comment just made? Or on the question?

>> MOZART TENORIO: Just to add, the latest piece of legislation in Brazil, on privacy and anonymity online, proposes that people can keep their real identity hidden from the public, but the platforms should know who that person is. If necessary, for a court order or something similar, the platform will know who is behind each profile on their platform.

So it kind of puts the responsibility on the digital platforms. I don't know if that's the best way to do it, but it's one way that's been proposed in Brazil, and we'll have to discuss it.
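
A minimal sketch of the idea just described, with entirely hypothetical names and no claim to reflect the actual Brazilian proposal or any platform's systems: the public only ever sees a pseudonymous handle, while the platform stores the verified identity and releases it solely in response to a court order.

```python
# Illustrative sketch only; all names are hypothetical. The public handle
# circulates freely, while the verified identity stays with the platform
# and is disclosed only when a court order is presented.
from dataclasses import dataclass


@dataclass
class Account:
    public_handle: str       # what other users see
    verified_identity: str   # known to the platform, never shown publicly


class IdentityEscrow:
    def __init__(self) -> None:
        self._accounts: dict[str, Account] = {}

    def register(self, public_handle: str, verified_identity: str) -> None:
        """Store the real identity alongside the pseudonymous handle."""
        self._accounts[public_handle] = Account(public_handle, verified_identity)

    def public_profile(self, public_handle: str) -> str:
        """Anyone may look up the handle; the identity stays hidden."""
        return self._accounts[public_handle].public_handle

    def disclose(self, public_handle: str, court_order: bool) -> str:
        """Release the verified identity only when a court order is presented."""
        if not court_order:
            raise PermissionError("verified identity requires a court order")
        return self._accounts[public_handle].verified_identity


# Usage: the handle is public; the identity is disclosed only on a court order.
escrow = IdentityEscrow()
escrow.register("@anon_profile", "Full legal name and verified document number")
print(escrow.public_profile("@anon_profile"))
print(escrow.disclose("@anon_profile", court_order=True))
```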

>> AUKE PALS: Thank you. We'll do someone else I saw actively typing. Roelof.

>> ROELOF MEIJER: Roelof Meijer again. I feel strongly about this. I have two answers there. One is: by realizing we share responsibility, and acting on it. I feel that's the summary of it all. Just speaking for my own industry, for far too long the domain industry could say, "No, no, we can't do anything, because we'll break the Internet, we'll get financial claims, it will be a slippery slope, because now it's illegal content, then it will be unwanted, then unpleasant, then political," and so on and so forth. There are all kinds of excuses to do nothing, I think out of fear or something. And slowly we're overcoming this position. What Arda is talking about is, I think, a first step, because it's still saying we don't touch content, but we can do something if the DNS is being abused. I think that's a somewhat artificial distinction, but anyway, the most important thing is that we feel we share a responsibility. And of course regulators also realize we are not the responsible party, but we do have a responsibility. And again, like in my previous reaction, that feeling of responsibility should stem from company values.

>> AUKE PALS: Thank you. Brian, I saw you actively nodding.

>> BRIAN CIMBOLIC: Yeah, I agree 100% with Roelof. I think there has to be the recognition that, while registries and registrars shouldn't be the first place to approach issues relating to website content, there is a role for us to play.

Particularly when you're talking not about file‑sharing sites, but about sites that are dedicated to a specific purpose, whether that's child sexual abuse material, stolen credit cards, you name it.

There are instances in which there are clear just patently illegal issues and that there are times where it's appropriate for registries and registrars to step in and do something. So I agree with Roelof entirely.

>> ARDA GERKENS: I think the way the industry took on DNS abuse is very interesting, because they established a framework which says: if it falls under these categories, that's something we should address at that moment. Of course, this is a very important question for us as we assess terrorist content online. We have the regulation, but how do you interpret that regulation? For us it's a day‑to‑day job to evaluate. But I think we have universal human rights. For me, if content hampers one of these universal human rights, then that's something that can be illegal and hurtful content. Anybody can say what they want; freedom of speech is okay as long as you're not interfering with these human rights. But once you do, I think that's something we should act upon.

That's something we tend to forget. We think we cannot interfere with anything online unless it's clearly illegal content, because otherwise it would hamper the open and free Internet and free speech. But we're now in an online world where especially women, though I think many of us, don't even dare to speak or be themselves anymore, because if they do, that might have serious consequences, not only online but also offline, as we've seen. For instance, women and LGBTQ+ people: we know women don't go into politics for that reason; rape threats, we all see them. That's hampering freedom of speech, and that should end. We should have an online world where we can all discuss whatever we want without those threats.

>> AUKE PALS: Thank you. I saw hands raised.

>> AUDIENCE: Just a small comment. I put it on the screen. It is to turn this question around. Oftentimes there's pushback against doing certain things because it would break the free and open Internet. I observe, though, that some companies, and I'm not sure if this is true of any in the room, do precisely that in order to get market access to certain countries. They absolutely change their products and their approach to comply with exactly the same rules they're complaining about, in order to get access to certain autocratic countries, but refuse to do that in democracies. It's almost as though democracies are being punished, while on the other hand they will happily comply with exactly the same things in autocratic places.

That's just to call out where there's hypocrisy like that. If you can do that in that country, do it here as well, please.

>> AUKE PALS: Thank you very much. Anyone who wants to comment on that, in the last five minutes of the session? I saw you. No? Okay. My sound is off.

Wrap up. Oh. Yeah.

>> AUDIENCE: Thank you. I'll be very quick. The question made me think and took me back to the 1990s, when we started talking about whether to regulate the Internet or not regulate it at all. There was a school of thought that argued the Internet should be left alone, without laws. But we've moved far from that, and in fact there are laws that address illegal and harmful content. The solution is not more legislation. Essentially, what we need is more effective enforcement of existing laws, through regulators and other mechanisms that may not have been used, rather than thinking about more legislation.

>> AUKE PALS: Thank you very much.

>> AUDIENCE: No, I don't want to delay things further, because, I'm sorry, it's a very important session and I couldn't be with you from the beginning. But because DNS abuse was mentioned: the next workshop, by the way, is in Room 2 at 3:15, about how DNS abuse is defined and experienced. If anyone wants to join us, we would be happy to have you.

>> AUKE PALS: Thank you very much. As we have reached the closing part of the session, I would like to give the floor to anyone who would like to react to or reflect on the session.

>> DEEPALI TAMHANE: It's important to continue to have these conversations, and we are glad to be invited. This is exactly the kind of information‑sharing, and sharing of our positions even where we disagree, that I think is really important. Thank you.

>> ARDA GERKENS: I just wanted to add to what you said. Indeed, we have had legislation for a very long time, but I think it's very hard for law enforcement to enforce it. And I do believe that we regulators can play a very good role, because for us it's also easy to talk to the companies, the platforms and the infrastructure companies, and we need to do so to get our regulation right.

>> MOZART TENORIO: As a final remark, I would like to say that I'm very pleased with how this session went, because we started out very technical, about DNS, ccTLDs and so on, and we ended with what I believe is the straightforward answer: this question is about democracy, democratic values, the democratic process, and protest if you think something has not been done accordingly. Society has to agree and participate in all of that. Thank God we're in democratic countries, free to talk like this and come to good and meaningful results. That's what I believe, that's what I think must happen in Brazil, and I'm very glad to have participated in this.

>> TIM SCOTT: To echo the sentiments on the panel, from a Private Sector perspective, talking to regulators and policymakers, and having them talk to us, means we can understand each other better. It doesn't have to be adversarial, and we don't have to have conflicting and competing aims; in my experience, across a career on both sides of the fence, that is rarely the case once we understand each other. Talking to each other is the way we can achieve that better level of regulation. This is to be continued.

>> BRIAN CIMBOLIC: Just thank you very much for accommodating me being remote, and I think there's room for responsible and thoughtful action at the infrastructure level, but again it has to be carefully tailored to avoid collateral damage.

>> AUKE PALS: Thank you very much. And thank you to the rapporteur Marjolein. Do we have a final conclusion?

>> RAPPORTEUR MARJOLEIN: There is a simple conclusion. We need dialogue and we need to share responsibility. And thank you for organizing this session. Thank you all very much.

>> AUKE PALS: With this, I would also like to thank all the panellists, the audience for your active participation, the online participants, and the remote moderator Dorijn. We would like to continue this discussion, and what we're hoping for is to continue it locally and also at the next IGF in Norway.

So if we do get our session submitted and accepted, hopefully we will see you there. Thank you very much.