IGF 2024 - Day 1 - Workshop Room 5 - WS41 Big Techs and Journalism: Disputes and Regulatory Models

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> BIA BARBOSA: Hi, everyone.

Do you hear us? Online? Yes?

>> NIKHIL PAHWA: Hi.

I can hear you clearly.

Loud and clear.

>> BIA BARBOSA: I don't hear you.

>> JULIANA HARSIANTI: Hello.

We can hear.

>> NIKHIL PAHWA: Hi.

I can hear you loud and clear.

Can you hear us offline?

>> IVA NENADIC: Same for me.

>> BIA BARBOSA: Oh.

Now it's good.

I'm going to listen to myself all the time but I think it's working.

Can any one of you online say something so we can check.

>> NIKHIL PAHWA: Hi.

Just testing.

One, two, three, four, this is Nikhil.

Can you hear us? Perfect.

>> IVA NENADIC: Yes, hello, everyone.

Can you hear me? Okay. Perfect.

>> JULIANA HARSIANTI: Hello.

It's Juliana.

Can you hear me? Okay. Thank you.

Perfect.

>> BIA BARBOSA: Good afternoon everyone who is here in Saudi Arabia.

My name is Bia Barbosa. I am a journalist and member of the Brazilian Internet Steering Committee.

I'm actually going to moderate this workshop.

Thank you everyone for coming.

I'm here in the place of Rafael Evangelista, who was supposed to be here but had problems getting into the country because of a visa issue.

Thank you for everybody being here and thank you for the people here in the room, as well.

So welcome to the Big Techs and Journalism: Disputes and Regulatory Models workshop.

The idea today is to have an open debate on what the alternatives are to promote journalism sustainability in the digital era, and what we can learn from regulatory endeavors on the remuneration of journalism by digital platforms across different countries.

In a brief introduction, I would like to share with you that the demand for fair remuneration from digital platforms in favor of journalists or news companies is not new. It's a tension that has deepened with the prevalence of large information platforms, the rise of communication mediated by social media, and the exponential growth of digital platforms that transformed the digital advertising ecosystem. Their business models, based on data collection and analysis for targeted advertising, have profoundly impacted contemporary journalism, and the systematic shift of revenue from journalism to digital platforms has reshaped the landscape of media consumption, production, and distribution.

This transformation has not only altered the circulation of journalistic content, but also exacerbated power imbalances, potentially widening the gap between those with access to quality, reliable, and diverse information and those without.

This is particularly evident in crises such as those surrounding public health and political electoral communications.

At the core of the concerns lies the question of how journalism is compensated by digital platforms, igniting a wave of regulatory proposals across many nations and mobilizing multiple stakeholders.

Australia, notably, passed pioneering legislation addressing this issue.

In Canada, the approval of the Online News Act prompted Meta to remove news from their platforms.

A decree has been issued in Indonesia, while South Africa is currently conducting an inquiry on digital platforms markets.

In Brazil, where I come from, since 2021 two proposals have been at the forefront of the debate, the determination in law of the obligation of digital platforms to negotiate with journalism companies and the approval of a public sector fund financed by digital platforms.

Although these proposals do not necessarily contradict each other, the idea of a fund is defended by many actors as an alternative to the direct bargaining model and not as its complement.

At the international level, regulatory initiatives have been the subject of years of negotiations involving not only the executive and legislative branches but also the judiciary.

In addition to the state actors, a myriad of other actors are taking part in the debates, such as digital platforms, media companies, researchers, journalists, and Civil Society Organizations, and international bodies.

Last year, the Content and Cultural Goods Chamber of the Brazilian Internet Steering Committee published a study entitled Remuneration of Journalism by Digital Platforms in which we have mapped out five controversies on the subject.

The first one is who should benefit? In other words, what should be the scope of any legislation regarding remuneration of journalism by platforms? The trend in legislative proposals has been to create minimal criteria for designating potential beneficiaries, such as number of employees or media turnover; however, these criteria have been criticized because they potentially exclude individuals or small businesses.

For some, journalists themselves should be paid directly, and for others this is not feasible.

The second controversy is who should pay? The journalism remuneration proposals we've mapped use different terminology to define the actors responsible for remuneration.

"Digital platforms" in Australia.

"Online content sharing services providers" in the European Union.

"Platforms and digital news intermediary companies" like in Canada.

In Brazil, the bill on platforms regulation uses the terminology of "social media providers, search engines, and instant message services."

A third issue, pay for what? The understanding of what journalistic content is has changed greatly. For example, a report published by the Organisation for Economic Co-operation and Development in 2021 defines "news" as information or commentary on contemporary issues, explicitly excluding entertainment news.

However, this is a narrow view, and it can be read into some of the regulatory initiatives analysed in our report.

In addition, an important part of content made available by media and which generates high levels of engagement on social media platforms refers to sports and entertainment.

This controversy is also related to the content of voluntary agreements between platforms and journalism companies, negotiated without the intermediation of a public authority.

The guarantee of confidentiality of these commercial agreements prevents the evaluation of the criteria used to remunerate journalism and its impact.

Therefore, there's concern that the use of criteria such as the number of publications will serve as an incentive to reduce the quality of the content produced.

The fourth controversy highlighted is related to the demand for more transparency in the work of the platforms. Whether in relation to digital advertising revenue or the algorithms used in the content recommendation systems for users. So remuneration based on what data?

And finally, what should the role of the State be? To what extent should the State interfere in relations between journalistic content producers and digital platforms? The Australian code left a wide margin for these actors to negotiate on their own. However, there's no consensus on whether this is the best model, even considering some specifics in countries like Brazil, where free negotiation between the parties can result in an even greater concentration of resources and power in a small number of players.

The idea of a public sector fund financed by digital platforms and managed in a participatory way is based on a more proactive and broader vision of the whole of the State.

In this case, decisions about the beneficiaries of the initiative would be part of the construction of public policies to support journalists.

So, there is much to discuss in our workshop session.

We'll be dividing it into three parts. The first will consist of the speakers presenting their views and policy experiences.

The second is to have a short debate on different perspectives raised by you, the speakers.

And the third part will be devoted to Q & A.

We would very much like to talk to our colleagues here in the room and on the online room at Zoom, as well.

So I think we could ‑‑ I'm not going to introduce all of you right now, but one at a time as each of you speaks.

I think that we could start with Iva Nenadic.

Iva studies media pluralism in the context of the content curation, ranking, and moderation policies of online platforms, the democratic implications such policies may have, and related regulatory interventions, at the European University Institute Centre for Media Pluralism and Media Freedom.

She has been involved in the design and implementation of the Media Pluralism Monitor.

Iva, thank you very much for being with us. It would be great if you could present your thoughts on information pluralism online.

Thank you very much.

You have eight minutes.

>> IVA NENADIC: Thank you very much for having me and I will try to stick to eight minutes and also maybe be briefer so we have time for exchange.

I apologise in advance because my view may be a bit more Eurocentric, because this is the main view we focus on, being a research centre on media pluralism at the European University Institute, and running the Media Pluralism Monitor in all EU Member States and in Candidate Countries, so candidates for EU membership.

But of course we do regularly exchange with our colleagues and partners in South America, Australia, the U.S., and all over the globe. Basically, our work is focused on the health of the information system.

The way we understand media pluralism as a concept is a little bit different than perhaps how this concept is understood in the U.S. or Australia or elsewhere in the world.

Because for us media pluralism is not just about the market, but about the enabling environment for journalism and for media, which are enablers of freedom of expression somehow. So we're looking at fundamental rights protections such as access to information and freedom of expression, both in the regulatory framework and in practice, the role of relevant authorities, and the status and safety of journalists, including digital safety, which is an important aspect of media pluralism.

As well as social inclusiveness, or the representativeness of different social groups, not only in media content but also in management structures, and not only of media companies but increasingly also of tech companies or big tech, whichever terminology we want to use.

Then there's this element of political independence or political power. Our work very much revolves around the concept of power.

The way we approach and understand media pluralism, and the way we regulate in Europe to protect media pluralism, is to somehow curb or limit the concentration of opinion‑forming power.

This is how we've been doing this for the media world that we had in the past.

Of course we are still not there when it comes to platforms, but I think it's quite obvious, and probably not just from this conversation, that opinion‑forming power has increasingly shifted from the media, if it's even still with the media, to online platforms or digital platforms or digital intermediaries.

So we live in an information environment in which digital platforms, so big technology companies, largely excluded from liability and accountability, actually do have power over shaping our information systems and over the distribution of media and journalistic content. The media, unlike digital platforms, do have liability for the content they produce and publish. So we're seeing a power paradigm shift, where these technology companies have become, in many instances, especially for certain demographics, the key infrastructures where people engage with news and information that shape their political opinions. So they have tremendous power but very little responsibility with respect to that.

Because the focus of today's conversation is on the economic side somehow, or the economic implications, I will focus a bit more on that, or this relationship between big tech and journalism in economic terms.

But I think it's really important to emphasize that, even in economic terms, the rise of the centrality of platforms has led to the disintegration of news production, which is very costly, especially if you think of analytical and investigative journalism and quality reporting in general, from distribution, which is kind of cheap and easy these days; you distribute the content and then benefit from or monetize it.

It's also been disintegrated from advertising, because the platforms have positioned themselves as intermediaries between the media, or journalism, and their audiences, and also between the media and advertisers. We know that traditionally the business model of media was developed as a two‑sided market: providing news to audiences, or even charging them through forms of subscription or payment for newspapers and similar, and then selling the attention of audiences to advertisers.

And now both sides of this market have been disrupted or controlled somehow by the digital platforms or big tech.

And in the multisided market of big tech companies, the media are just one component of this value chain. So I think this is also something important to keep in mind.

And I think you opened with a relatively strong focus on online platforms, digital platforms, but what's also important to introduce into this conversation is the increasingly relevant role of generative AI companies, which are extensively using media content to train their models and to generate outputs, very often separating the content from the source, diluting the visibility of media brands, which again has implications for the economic sustainability of the media. And in that environment as well we have negotiations, or at least attempts at negotiations, at establishing a level playing field, which is very difficult to establish, right? Because of the tremendous imbalances in power between the tech side and the media side. But I think this dimension is also very relevant and important to look at.

Two last points I'd like to make. One is about thinking about the power of big tech in relation to media. They decide whether they want to carry media content or journalism at all. We've seen this especially with attempts at legislation: for example, you mentioned the news bargaining code in Australia, and the initiatives in Brazil, India, South Africa, Canada, and the U.S., especially in California, which is a very interesting case of trying to establish frameworks for negotiating fair remuneration that should go from big tech to the media.

This is not easy because, again, there's tremendous imbalance of power.

The Australian example, the most advanced one, we have seen is now backsliding somehow. Recently Australia published a review of the effectiveness of this framework for negotiations, which suggests that it's not strong enough to ensure the sustainability of this approach, because we've seen the major platforms withdrawing. They don't want to renegotiate new deals. They don't want to expand these deals to include more local media, for example. So again, it suggests that the power is still with the platforms. The power is still with big tech.

So very often, as a response to regulatory intervention they either threaten or just ban news.

What we've been seeing from them through the years is that they're segregating the news into specific tabs, for example, into specific areas of the services they provide, so that eventually they can just switch it off.

So the kind of conversation we have in Europe, and the one maybe important point to make, is that unlike Australia that went with the competition law, in Europe we focused a little bit more on copyright as a basis, as a ground, for negotiations between the platforms and the media for fair remuneration.

I think this is also interesting around the wider conversation of generative AI and how to play with this problem or how to deal with this problem in that area.

And we've seen a lot of issues with this.

Right?

These negotiations, as you already emphasized, are somehow opaque, so we don't really know what has been negotiated or who negotiates. In some countries, we have individual big publishers negotiating first or separately, which has implications for media pluralism, because what the big ones negotiate somehow sets the benchmark for the others.

Then if the big ones start negotiating and excluding the smaller ones, this can really have tremendous consequences on media pluralism or information pluralism more broadly.

The big markets, of course, are much better positioned. Big languages are much better positioned to negotiate with big tech than the smaller ones.

And the same applies for this tension between the publisher, so the media companies, and journalists. From many examples, we've seen they're not always aligned or on the same side.

Who should benefit, indeed, is a big question.

The way we understand media and journalism, we define it in a very broad sense, trying to take into account that there is a plurality of relevant voices, voices of reference, in the contemporary information sphere that should be considered somehow at an equal level with journalists. But of course, this complicates the situation even further.

I don't know if I have any minutes left or should I ‑‑ yeah? I have.

Okay. Good.

So basically, the main point I was trying to make is about what we are seeing, what we've learned somehow, from these initiatives, mostly focusing on Australia and the copyright directive in Europe, because these two I think have the longest experience. Even though they've only been around for a couple of years, we can reflect a little bit and look at the effectiveness of these initiatives.

I think there are a lot of shortcomings surfacing now that show that we do not have sufficient instruments to deal with this enormous and still growing power of big tech; that the negotiating power is still on the side of the platforms, so we haven't really managed to put the media at the same level to be able to negotiate equally.

The problems are also on the media side.

As I said, this fragmentation between the media companies, between media and journalists, and between the big and small ones, between bigger markets, smaller markets, big languages, small languages.

In Denmark, for example, they decided to form a coalition and negotiate collectively with big tech and they're really persistent with this and they're clear on setting their benchmarks high.

I think another problem we should consider in this conversation is the lack of clear methodology of what's the value, like, what is the value and who should be calculating the value? What is fair remuneration in this context? We have several examples of several cases where this value is calculated in different ways. So it's not clear.

And of course, it's not clear from these deals because these deals, as we said already, are not transparent.

So what we're seeing increasingly in the policy framework is a shift from these bargaining or negotiation frameworks to something that is a more direct regulatory or policy intervention in this area, speaking increasingly about the need, for example, to tackle the fact that platforms have the power to decide whether they even want to carry media content or not.

In Europe, for example, we have the European Media Freedom Act, which introduces a precedent somehow by putting forward the principle that media content is not just any other content to be found on online platforms. Platforms have to pay due regard to this content, and if they want to moderate it they need to follow a special procedure.

So I think this speaks a lot to the direction of policy conversations suggesting that these platforms have indeed become key infrastructure for our relationship with news, media, and informative content more broadly. Then maybe we should consider them public utilities, and maybe there should be some must‑carry rules in order to make sure that media and journalism content remains there, so that they just don't have the power to remove it, or we should think of complete alternatives to break down these dependencies.

In terms of bargaining or negotiating frameworks for fair remuneration, there's been a shift, or an intention to shift, this conversation, looking at the failure of these negotiation frameworks, or at least their shortcomings, towards something more like direct intervention in the form of a digital tax or digital levy.

But then, this opens a new area of questions about how do you allocate and distribute this money? Especially taking into account that not all the states have all the necessary checks and balances to make sure that these kinds of processes are not abused.

I think I said a lot.

I will just stop here and look forward to the exchange.

>> BIA BARBOSA: Thank you so much.

We'll for sure have time for this exchange.

You mentioned the impact on small journalistic initiatives, and I think that is a good way to segue to Juliana Harsianti.

I don't know if I pronounced your surname correctly.

I would very much like to ask you to present your views on the effect of digital platforms on community development and the importance of journalism for these communities. To introduce you: Juliana is a journalist and researcher from Indonesia. She has worked mostly on the influence of digital technology in developing countries, contributing, for example, to Global Voices and international online media.

And I'm sure that her perspective has much in common with our perspective in Brazil, as well. So I give the floor to Juliana.

Thank you very much for being with us.

I don't know what time it's there in Indonesia but thank you for being with us.

>> JULIANA HARSIANTI: Yeah.

Thank you.

Can you hear me? I'm sorry.

I cannot put on the video because this is better for the sound connection.

Thank you for inviting me as the speaker to this important issue. Good afternoon.

Good afternoon for everyone who is attending in Riyadh.

It's almost 9.00 p.m. here in Jakarta but this is okay to have some discussion with colleagues about the impact of big tech in journalism.

As mentioned in the opening remarks, this year Indonesia published a presidential decree regulating big tech and digital platforms to share revenue with publishers, because the government thinks that the presence of digital platforms in Indonesia has been a disruption to the business model of the media industry in Indonesia.

This is still under discussion between the tech companies, the journalism associations, and the government in Indonesia: whether this decree can be implemented shortly, or whether there will be some modification or adjustment in the future.

But this evening I will talk about how small media, mostly on digital platforms and social media, promote freedom of the press and spread information more freely in Indonesia.

I will give two examples from Indonesia: Magdalene and Project Multatuli, both online media platforms based in Indonesia. Magdalene is more focused on gender issues. Meanwhile, Project Multatuli is more focused on in‑depth journalism, highlighting issues that have been avoided by the mainstream media in Indonesia.

They chose digital platforms and social media because they think that, even with a smaller audience, they could get more engagement from readers, not for business reasons, because they try to avoid having Google ads on their platforms.

They avoid that ‑‑ they try organically to establish their websites in the Google search engine, to keep their sites ranking at the top.

But small media companies or community media tend not to pursue revenue as big as the big media companies', so they can act more freely to promote freedom of expression and multilingual websites.

And they can discuss more freely the issues that the mainstream media has been avoiding.

And how do they get money? How does the business run?

Yes, they have some business model to be running.

Most of them get money from the general public and from subscriptions, though not news subscriptions: mostly donations from those who have been supporting the platform.

And they keep the readers who want quality journalism and alternative media in Indonesia.

I think this is enough on my side and then back to you.

>> BIA BARBOSA: Thank you very much, Juliana.

For sure, there are other challenges that we will be able to exchange on regarding the sustainability of small media initiatives. I think that from the Global South perspective we still have some challenges beyond those the Global North has in this regard.

Because at least from the South America and the Latin America perspective, we face the problem of the concentration of media.

In very few countries do we have public media that can more or less guarantee some plurality in the media landscape in general. So I think there are other challenges besides those the developed countries already face regarding the sustainability of journalism; we are still facing those different challenges, and then also the new challenges that the content brings to us.

So thank you very much for sharing your experience in Indonesia, and I think that we can move forward with Nikhil Pahwa.

Nikhil, I would love to hear what you have to say about revenue and big tech companies, and how you link that to the legal cases against AI firms. I think that's a good connection to what Iva brought to us at the beginning, relating how AI systems, and specifically generative AI systems, are using journalists' content to train models.

But not only.

So thank you very much for being here with us.

I'm going to introduce you and please feel free to complete any information.

Nikhil is an Indian journalist, a digital rights activist, and the founder of MediaNama, a mobile and digital news portal.

He has been a key commentator around stories and debates around Indian digital media companies, censorship, and Internet and mobile regulation in India.

And of course, he has been studying these demands on big tech companies regarding journalism revenue.

Thank you very much for being with us.

You have ten minutes.

>> NIKHIL PAHWA: Thank you for inviting me for this very important discussion.

I'm a journalist and I have been a media entrepreneur from India for about 16 years now, and I've been a journalist for 18. I have also been blogging for about 21 years.

I'm part of a few key media‑related committees in India that look at the impact of regulations on media, including the Media Regulation Committee of the Editors Guild of India.

I come at this from an Internet perspective, having built my entire career on an online platform.

We have a small media company, with about 15 people working at our media organisation.

But I also still do believe that journalism is not the exclusive privilege of traditional media, of formal journalists. Even today, news breaks on social media.

And frankly, I see journalism as an act. And therefore, people who publish verified content even on social media are also doing journalism.

So we can't really look at things purely from a mainstream media lens.

And you know, even today there are online news channels and online podcasts that run as online media businesses.

And they're just an alternative to traditional media.

The primary challenge that media companies and especially traditional media companies face is the shift of advertising revenue from traditional media organisations which had restricted distribution, to digital platforms where now they face infinite competition because everyone can report.

Everyone can create content.

And, you know, big tech companies like Google and Facebook have built business models that rely heavily on data collection and targeted advertising, which has meant that they are competing as aggregators with the media companies on their platforms. But let's not forget media companies also compete with all users on the same platforms.

So the real challenge for media is of discovery.

And what we also have to realize is that for media businesses, and I run one, the benefit that these platforms create is the traffic.

For most media publications, the majority of their traffic comes from search and social media.

That is the primary traffic for many news companies today, including us. What's also happening, just to cover the complete situation, is that we are facing a new threat with AI summaries.

What Google does on its search is especially concerning because, unlike traditional search, which used to direct traffic to us, AI summaries potentially cannibalize traffic.

They don't send us traffic anymore.

Google isn't just an aggregator of links anymore; it's also turning into an answers engine.

That is also a term used by Perplexity, which performs the same function.

Perplexity and similar RAG (retrieval-augmented generation) AI models basically take facts from news companies and compile them into fresh articles that serve a user's need.

A future threat for us, one that we will see play out in the next two or three years, is that apps like Perplexity, using our content, will start cannibalizing our traffic.

And all media analyse the traffic they have and review it.

But it's important to remember if we don't get traffic we won't be sustainable.

So while most of this conversation has been focused on getting paid for linking out, I think that's a battle that should not be fought because we actually benefit from search engines and from social media platforms linking out to us.

If it becomes ‑‑ if we start forcing them to pay, and they choose not to link out, which is what Facebook did in Canada, it will actually cost news companies significant revenue because audiences will not discover them.

Australia's news bargaining code, as well, I feel has set the wrong precedent, because we benefit from traffic on social media.

Payment for linking out should not be mandated.

It breaks a fundamental basic principle of the Internet where the Internet is an interconnection of links. People go from link to link to link and discover new content, new innovation, new things to read.

And so, I think we should be very careful about forcing platforms to link out, because that is a mutually beneficial relationship.

The advertising issue is frankly a function of the media not building a direct relationship with its audience.

Like we have built direct relationships with our audience.

And they are therefore losing out on monetization to big tech platforms. Let's not forget publishers like The Guardian chose to sign up with Facebook for its articles.

They thought they were benefiting from Facebook, but they were also giving up audience to Facebook. So I think we need to be careful and build our own direct relationships.

But I want to talk a lot more about AI because I think that's where it becomes problematic.

The tricky thing with AI is that facts are not under copyright.

And media and news reporters like us basically report facts; there's copyright in how we write things, but not in what we write about, because facts can't belong exclusively to one news company. The public good is effectively in the distribution and easy availability of facts.

So platforms like Perplexity actually take facts from us and from multiple news organisations, piece them together into a news article, and rewrite our content in a manner which, to be honest, can be much easier to read.

Users can also query the same news article on sites like Perplexity, which means a user gets all of their answers ‑‑ based on our reporting ‑‑ on other platforms.

Now this is not copyright violation but it is plagiarism and unfortunately plagiarism is not illegal.

Only the copyright violation is.

Most of the cases being run ‑‑ some in the U.S., some in India; in the U.S. brought by The New York Times, in India by a news agency called ANI ‑‑ focus on the fact that our content is being taken by AI companies and ingested to train their models.

Therefore the likelihood of them replicating our work is very high.

And that they've taken this content without a license.

I think this is an important one because there isn't licensing or compensation for using our work to train them.

I'm aware that many news organisations around the world have actually signed up with AI companies for revenue‑sharing arrangements. This is a very short‑term perspective, and usually AI companies will do exactly what Google, for example, has done with its Google News Initiative and its News Showcase: tie up with big media companies, which ends up ensuring that smaller companies don't get any money.

In case of AI, that's also what's going to happen.

I'll give you a small example.

When we moved our website to a new server, our website crashed because of the number of AI bots hitting our servers and taking our content; because it had moved to a new server, they thought it was a new website. So this stealing of our work is something I think we need legislation and courts to address. There needs to be copyright law for AI, and the outcomes in the U.S. with The New York Times and in India with ANI will address this.

There's a geopolitical battle going on right now about who comes on top in the AI race.

And AI companies realize that large language models need more and more written content and written facts, and a large repository of that lies with news organisations.

So today we are trying to fight battles related to linking out ‑‑ which I think is a battle that shouldn't be fought, because linking out, like I said, is a foundational principle of the Internet.

The battle we need to fight, and fight early, is the battle to ensure that we get compensated for content being used by AI companies; otherwise they need to essentially remove our content from their databases. That's a battle I see being fought only in courts, not in legislatures.

And these legal frameworks are going to be very, very important to develop because we need to create incentives for reporters to report, for news organisations to publish.

Let's face the facts. The content that AI companies generate is based on our work. If we don't do original work or get incentivized to create original work, and media companies start dying, effectively they will have nothing to build on top of. So I think this is the relationship in terms of revenue relationships that regulation needs to address.

Like I said multiple times, I strongly feel that the idea of paying for links is flawed. And what's happened in Canada and what's happened in Australia is the wrong approach.

Media companies are companies, as well. They need to figure out mechanisms for monetization. They have moved from an environment of limited competition in traditional media to infinite competition in this new media, and they need to adapt to that change ‑‑ not try and, you know, get a pittance from big tech firms. They should be competing with big tech firms.

>> BIA BARBOSA: Thank you very much.

I think you brought us a very challenging perspective regarding because we didn't manage so far to address the challenge related to journalists' content use by platforms, by the news aggregators, and now we're facing already the AI training systems using our journalistic content.

I'd like to take a minute to ask you: here in Brazil there's a bill, a regulation, that has just passed the Senate.

It still has to move through the Chamber of Deputies before the bill is approved.

But it provides for copyright payment for journalistic content used in training and in the responses of AI systems, as well.

Do you think this ‑‑ even considering it's a copyright approach ‑‑ could be interesting for solving at least the kind of problem you mentioned? I would like to hear a little bit from you.

Since we are checking all the perspectives that are on the table in different parts of the world to tackle this issue.

>> NIKHIL PAHWA: I think that if it's legislated that there needs to be compensation for copyright, for use of copyrighted content, that is a correct approach.

It's just that once you agree there should be compensation, the question becomes: who gets compensated, and how much? What is the frequency at which they get compensated? Do you get paid for an entire data dump being given? Or do you get compensated on the basis of how it is used? In which case, how do you validate that your content is actually being used by AI? Because even Europe is struggling with algorithmic accountability.

And by the way, on the linking out part, I don't think ‑‑ while I said there shouldn't be a revenue mechanism there, I do believe we need algorithmic accountability for both social media as well as search to ensure that there is no discrimination happening in terms of surfacing our content.

And as a small media owner, I don't want someone else ‑‑ big media, or traditional media ‑‑ to benefit at my expense.

So the fairness principles also need to be taken into consideration.

In the same way that fairness needs to be taken into consideration in case of the law in Brazil.

But the question you have to ask is: who is media today? How do you identify that by supporting this organisation you're actually supporting journalism? Because like I said at the beginning, journalism is not the exclusive privilege of just journalists, right? I'm a blogger who started a media company. So I understand that bloggers also make money from advertising, and to that extent they don't get compensated.

Why should I be as a blogger different from a media company?

I'm also running my own venture.

Right?

So we're seeing an infinite capacity for reporting today, because anyone can report.

And in that scenario, who gets compensated? Who doesn't? It becomes even trickier.

The question is: if a media publication gets compensated when it's scraped for AI, shouldn't a blogger also get compensated when their blog is scraped?

Why or why not? Right?

So these are not easy answers.

I don't even know if there are answers to some of these questions.

But when you're looking at defining laws you have to create that differentiation. You have to break it up into who benefits and who doesn't benefit from that regulation.

If you look at most podcasters, they're doing open‑ended journalism, in a sense.

They're conducting interviews.

Would you treat them as journalists under this law, as well? If their transcripts are being aggregated by AI, should they be compensated for that, too? Where do you draw the line? That's the problem with laws: you don't know.

It's very tricky to draw the lines in these cases.

>> BIA BARBOSA: And besides the law, in countries where you don't have a democratic regulator to analyse how these kinds of laws are being implemented, we face even more challenges.

I don't know if Iva or Juliana wants to comment on that, or on another aspect. Iva, besides anything else you would like to bring us, I would like to ask you to tell us about the coalition you mentioned in Denmark, where the media negotiated with the digital platforms. One of the issues we had in Brazil as well, in the platform regulation bill that is now in the Chamber, was compensation based not on copyright but on the use of journalists' content.

How to negotiate? How would it be possible for small initiatives to do that? There's already a digital journalism association in Brazil that tries to represent most of the small initiatives, but they don't manage to represent all of them.

How this coalition you mentioned is working in Denmark ‑‑ I felt it would be interesting to dig a little bit deeper into that.

But if you want to dive into this AI topic, as well, feel free.

>> IVA NENADIC: Thank you.

Yeah, I'll start with the last point.

I think Nikhil said many super interesting and relevant things.

I want to stay for a second with this last point of the complexity we have to define media and journalism today.

And this is indeed one of the key obstacles of all the ‑‑ not only regulatory attempts but also, you know, soft policy measures that we want to implement in this area, because this is ‑‑ I mean it's the first step.

It's the foundation.

Who do we consider a journalist? Who should benefit from these frameworks and who shouldn't? How far can we stretch this? We've done a lot of work within the EU, but also in the Council of Europe, which covers many more countries in Europe. The Council of Europe has put forward some recommendations on how to define media and journalism in this new information sphere we live in, and it takes a very broad approach. Right? Because it's freedom of expression that is at stake.

It's one of the key principles that we nurture in Europe: that the profession should be open and inclusive. So if this is the principle, how do we solve this practical obstacle? We do see a lot of paradoxes in the information systems nowadays.

Right? The more open the debate somehow is, the more demagoguery, the more misinformation we have. So we have, in a way, we have plurality of voices in the news and information ecosystem but not all of these voices are actually benefiting our democratic needs. Right? Many of these voices are actually misleading or extremely biased or not professional and not respecting ethical and professional principles. So it's creating a lot of disorder in the information system that confuses people, distorts trust, and has a lot of negative implications for our democratic systems.

I can give one example that may not be a good solution but something to look at to solve this problem.

It's something that's been heavily discussed within the negotiations around the European Media Freedom Act, which provides this special treatment to media service providers, including journalists, in content moderation by major, very large online platforms ‑‑ defined as those having more than 10% of the EU population as regular users, so around 45 million people using them on a monthly basis.

First of all, the law provides a definition of media service providers, which is very broad; but in listing the criteria on which media can or should benefit from this special treatment, for the first time in EU law we have a mention of self‑regulation.

And we have an explicit reference to the respect of professional standards. I don't recall the exact text, but the law says that media who comply with national laws and regulations, but also with widely recognized self‑regulatory frameworks, can benefit from this.

And of course this can be misused.

Actors could set up bogus standards and claim that they have a certain number of media within their umbrella. But I think there is something in there.

I think we need to find a way to revive self‑regulation ‑‑ respect for certain standards and ethical principles ‑‑ among the different voices in the sphere.

We can start from journalistic principles but of course they can also evolve for the new needs.

Another thing from that example that I think is useful for this kind of conversation is the transparency of the media who benefit from this.

We battled heavily to have this clause explicitly mentioned in the legal text.

It's the requirement that the media who benefit ‑‑ who self‑declare as media ‑‑ are transparent, and that the list is easily accessible for everyone to read, including Civil Society and academia, to make sure that bad actors are not misusing or abusing this legal provision.

So I think there is something to look into there.

On generative AI, I think this is a very relevant conversation.

And again I would agree with Nikhil that this is a new battlefield somehow.

We haven't resolved the old risks ‑‑ the media problems, the political influences, and even safety issues for journalists ‑‑ and we've moved to the area of digital platforms. These two battles were fought in parallel, and now we also have generative AI, which is profoundly disrupting the information sphere.

I think the biggest change happening with generative AI is that we're moving from fragmentation of the public sphere to what we call the audience of one.

This is extreme personalization of the interaction between an individual and the content that individual is exposed to, generated by these statistical models and systems. We don't really know how they operate, because there's a lack of transparency and accountability, and there are a lot of issues with the data they've been trained on ‑‑ biases, repetitiveness, and so on.

We're seeing cases such as Iceland, where the state strategically decided that it's important for them ‑‑ for the AI future we're entering ‑‑ that their language and culture be represented. So they willingly gave everything they have in the digital data world to OpenAI for free, just to be represented in those models. Because they saw this as a priority. Right?

Then on the other hand, unlike The New York Times case, where The New York Times is suing OpenAI for breach of copyright because OpenAI used their content without a license or agreement, what we're seeing in Europe is that publishers, especially the major ones such as Le Monde, Axel Springer, El País in Spain, and similar, are making deals with these companies.

The deals are opaque, so we don't know what they contain, but, for example, the CEO of Le Monde said it's a game‑changing deal for them ‑‑ for that one company.

But this is probably not the best way forward, because it's fragmenting and weakening the position of publishers ‑‑ and even more so of smaller companies.

So I think the Danish case is interesting.

I think it's a trade union ‑‑ whether a professional association or a trade union, it was an existing organisation of journalists in the country who decided that the best approach was to go for collective negotiations with big tech, because this would make them stronger. They also decided to use all the legal instruments and regulatory frameworks in place in Europe to strengthen their position ‑‑ to ally somehow with the political power in the country to back them in this fight against the big tech giants.

Of course, this battle is ongoing.

There is back and forth. Sometimes they manage to progress, and then there's a backlash from big tech.

This is a very early stage, very fresh.

I think it's a very interesting and relevant case to observe to see how things can or should be done.

Because I do believe one of the lessons learned from the existing negotiating frameworks is that they don't really serve journalists and media, so a collective approach is probably a better one, and we're seeing much more happening on that front: news media organisations coming together and finally starting to understand that they are stronger if they do this together.

Yeah.

>> BIA BARBOSA: Thank you, Iva.

Just for the record, I would mention that we at the Brazilian Internet Steering Committee tried to invite Google representatives to this conversation, but we didn't manage to convince them to come, as has happened on other occasions.

I see Nikhil and Juliana have raised their hands.

I would just like to check whether anyone online has any questions. So, Nikhil, do you mind if I give the floor to Juliana first?

>> NIKHIL PAHWA: Of course.

Please go ahead.

I've said quite a bit.

>> JULIANA HARSIANTI: Okay. I think our discussion has moved from digital platforms to AI, which has become the major topic.

In Indonesia, generative AI, especially large language models, not only threatens copyright, as Nikhil and Iva mentioned, but also threatens the work of journalism itself.

Journalists are starting to generate news using ChatGPT, for example, or other large language models, and then they just do some editing and publish it on their news sites. This is the problem.

Or not exactly a problem ‑‑ this is still under debate among media companies and journalism associations: whether it is good or ethical to publish generated news, or whether they should use large language models and generative AI only to find sources for the news, then write it themselves and publish it as their own news article in the media.

On regulation: yes, I think we need regulation by the state or the government, but the problem is that it takes time to discuss and produce regulation, while the technology is moving fast.

By the time the government has published a regulation on generative AI, maybe we will already have a ChatGPT for the news area with abilities beyond the ChatGPT we know at the moment.

What we think should be done is for associations ‑‑ not only journalism associations, but also creative associations ‑‑ to discuss and create their own rules on what should and should not be done with generative AI in their work.

I think this is more an ethical approach than regulation, and for the moment they think this is enough, but I think we need stronger regulation, with law enforcement, to overcome the impact of generative AI on journalism and creative work.

So back to you.

>> BIA BARBOSA: Thank you very much, Juliana.

Nikhil, please.

>> NIKHIL PAHWA: Thanks.

I'll just respond to one thing Juliana said.

While we want strong regulation of AI, I think it's going to be very difficult to get, because geopolitically the EU is being looked at as too strong a regulatory player, and countries are afraid they will lose out on innovation in the regulatory battle.

In India, what I see is a lot of pressure to not regulate AI.

>> BIA BARBOSA: This is the position in Brazilian parliament, as well.

>> NIKHIL PAHWA: Absolutely.

The other thing to look at, responding to Iva: I think one way of ensuring that media owners get enough compensation is to not seek compensation only for media owners.

If anybody's copyrighted content, whether it's musicians or authors, or media owners like us, if our work has been used for training models, we should get compensated.

I had a conversation with a lawyer a few months ago who said that AI ingesting our content is like any person reading it, because when it gives an output it's not the exact same thing.

It's an understanding of our content.

I would actually say that the power law applies over here.

The ability of AI to ingest vast amounts of our content from across the globe is far greater, and so there needs to be protection for creators ‑‑ and that creator could be of any kind: media, movies, books, anything.

I would also say that there are other mechanisms where AI does need to be regulated.

Like there has to be regulation for data protection.

Iva mentioned bias and I think bias is the trickiest one to regulate because it's about how one sees the world.

And perhaps there is plurality of AI systems that needs to come in in order to ensure that representation is of different kinds, just like biases in society.

On The New York Times case, I'll be surprised if there's a verdict. We shouldn't forget that The New York Times filed the case against OpenAI after negotiations for compensation failed. I would be surprised if OpenAI doesn't find a way of compensating The New York Times and settling out of court, because they would not want a verdict, given that the Times' content has been ingested by OpenAI.

There is one additional challenge: AI companies are trying to position ingesting our content as a mechanism for research, and in some countries there can be exceptions to copyright for research purposes.

This is another challenge that I think they're faced with.

But a fourth thing that's emerging over time ‑‑ and I've talked to a lot of AI founders ‑‑ is the use of synthetic data, data generated by AI itself. It's coming into the mix, and in the future our content may not be needed for large language models, because they're already trained on existing content.

In that case, any compensation we are paid for future use may no longer exist.

Because let's face it: these are language models. They're not necessarily fact models. Anyone who relies on AI for facts is probably going to get something or other wrong, and it's going to become problematic.

So I still feel that media has an opportunity in its factual accuracy going into the future, where AI will always fail because its outputs are probabilistic in nature.

I know I'm not answering many things because this is still uncharted territory and evolving as we speak, but we need to take all these factors into account.

Thank you.

>> BIA BARBOSA: Thank you.

I think another topic we didn't mention here today is that, for the journalistic community, it's interesting to have journalistic content somehow training these AI systems; otherwise, the results the AI systems bring us will be information we cannot trust in the end. So it's important to have journalists' content used by AI systems ‑‑ but used in a fair way, a compensated way, dealing with copyright issues. For those of us who support the integrity of information online, it's important to have at least some journalistic content considered in the training of these systems.

I see that Iva has raised her hand.

We are just approaching the end of our session.

So I'm going to give the floor once to each of you and ask you to bring your final comments on this topic.

Thank you again for being with us.

Start with Iva.

>> IVA NENADIC: Thank you very much.

Yeah.

I think it's probably just the beginning of the conversation, but it's excellent to have it at such a global scale; I think exchanges like this are crucial to move us forward.

I won't conclude anything because it's very difficult to give final remarks on any of this because it's all open questions somehow.

But I would like to put one more consideration forward.

What we've seen from a lot of surveys is that trust in journalism is declining. The latest Reuters Institute Digital News Report suggests that people see journalists as drivers of polarization.

Why this is the case has not been reflected on enough within the profession itself. Of course, there are multiple reasons for it. There are smear campaigns by politicians, who of course want to undermine the credibility of the profession because that works better for them. Right? But I think what we're not seeing sufficiently is this sort of self‑reflection.

Where have we failed as a profession? Especially in this aspect of reconnecting with youth and young audiences, because clearly there's a gap there.

Young people are departing from media in the traditional sense and from journalism in the traditional sense, and journalists are somehow ignoring this fact. We don't see enough self‑reflection on that side.

Then there's also the question of creating value for audiences. I don't think that media and journalism in the traditional sense are investing enough in this.

There's this obsession or demand that journalism and media should be treated as a public good, and I strongly support the idea that media and journalism, when professional and ethical, are definitely a public good and should be supported by public subsidies in a way that is transparent and fair and supports media pluralism. But at the same time, there has to be a bit more self‑reflection, and more incentives or initiatives coming from within the profession.

And at the moment what we're seeing is a lot of complaints: we are captured by platforms, we are being destroyed by platforms, we need help.

But what the value is that journalism actually has to offer people has been pushed aside or forgotten a little bit. So I think the best case for journalism would be to revive or remind us all what this value actually is, and how journalism can create value with these new tools and technologies that are at everyone's disposal, including media and journalism.

I think that would make a stronger case for why people should go back to journalism and media and support them more.

>> BIA BARBOSA: Thank you very much, Iva.

Juliana, please?

>> JULIANA HARSIANTI: Yeah.

Thank you.

I think I agree with what Iva said.

We cannot make a conclusion for our discussion, because this kind of discussion will continue in the future.

And it needs to be a regular conversation ‑‑ whether in developed or developing countries, Global South or Global North ‑‑ because this is important for journalists creating news on digital platforms.

How to deal with big tech? How to deal with generative AI? And how to keep ethics within journalism amid the influence of digital platforms and generative AI, which have been challenging journalists' work and the business model of media companies?

The conversation will have an impact on policy ‑‑ whether nation states' policies, journalism associations, or media companies, at regional or national level ‑‑ so we can make a better environment for journalists to keep creating and keep surviving in this digital era.

Thank you.

>> BIA BARBOSA: Thank you very much, Juliana.

Nikhil, please, your final remarks.

>> NIKHIL PAHWA: Thank you for having me here.

It's a great conversation.

I'm a journalist and an entrepreneur, and a capitalist in how I work, but I do it ethically.

I do feel that as media we have to find our own business models rather than relying on subsidies, government support, or anything from the government, to be honest. Because any time ‑‑ and I feel this strongly ‑‑ the government comes into a tripartite relationship between government, media, and big tech, two things happen.

Governments use the funds, and it may be different in Europe, but in the Global South governments use funds as a mechanism for influencing the media.

And secondly, if the media pushes for them to regulate big tech, then government creates regulations over big tech and uses that as a mechanism to regulate free speech.

So to be honest, I don't want the government in this relationship, because it has an impact on democracy and on media freedom, whether directly or indirectly, whenever the government is involved.

I'd rather figure out our own business models, and if something has to be regulated, it has to be regulated across society, not specifically for the media. I don't feel we need special treatment, and I don't feel we should have special treatment.

We have to adapt when times change. We have to adapt when we move from traditional business models to online business models, from online to AI.

But at the same time, if someone is stealing our content, we need to go to court to protect our rights, in a sense.

I strongly believe we don't want government in the picture and we don't need protection. We need to fight our own battles, and we need to innovate on our own.

For far too long, we've allowed all the innovation to centre on big tech, when we've had the same opportunity to build audience relationships.

And I don't think expecting regulation and laws and policies to support us is going to solve the problem for us.

I know this is antithetical to what this conversation is about, but that's the way I run my media business.

Thank you.

>> BIA BARBOSA: And of course, one thing is government and another thing is the role of the state, which we brought up at the beginning of our conversation; that's one of the controversies in the report we published at the Brazilian Internet Steering Committee.

I totally agree about the risk of some governments regulating freedom of expression issues, or regulating technology in ways that affect freedom of expression.

But I also agree that we have to search for some kind of balance between big companies ‑‑ in countries like Brazil, between the big national media companies and the global big techs ‑‑ because the public, the citizens, get caught in the middle. And the state has a role to play, as well, to bring at least more balance to the conversation.

But of course, it's not only governments that can bring this balance. We have the judiciary. We have independent regulatory bodies. So there are other alternatives that I think we have to put on the table to try to find some solutions that represent our specificities in each of the countries that we are discussing this kind of problem.

But also from a global perspective, because we're dealing with global companies, and achievements in some countries may help us deal with this in other realities. From the Global South perspective, I think we can learn a lot from other countries that are tackling this problem.

Once again, thank you so much for your time, your insightful thoughts, and for spending some time with us here at the IGF.

As was mentioned, we started this conversation, and it was only the beginning.

From the Brazilian Internet Steering Committee's perspective, I would like to thank you very much and to make ourselves available for any further exchange we might have. And to everyone listening online and to those here with us: have a good evening.

Thank you very much.

Bye.